DOE Office of Scientific and Technical Information (OSTI.GOV)
Penfold, S; Miller, A
2015-06-15
Purpose: Stoichiometric calibration of Hounsfield Units (HUs) for conversion to proton relative stopping powers (RStPs) is vital for accurate dose calculation in proton therapy. However, proton dose distributions depend not only on RStP but also on the relative scattering power (RScP) of patient tissues. RScP is approximated from material density, but a stoichiometric calibration of HU-density tables is commonly neglected. The purpose of this work was to quantify the difference in calculated dose of a commercial TPS when using HU-density tables based on tissue substitute materials and on stoichiometrically calibrated ICRU tissues. Methods: Two HU-density calibration tables were generated based on scans of the CIRS electron density phantom. The first table was based directly on measured HU and manufacturer-quoted density of tissue substitute materials. The second was based on the same CT scan of the CIRS phantom followed by a stoichiometric calibration of ICRU44 tissue materials. The research version of Pinnacle³ proton therapy was used to compute dose in a patient CT data set utilizing both HU-density tables. Results: The two HU-density tables showed significant differences for bone tissues, the difference increasing with increasing HU. Differences in the density calibration tables translated to a difference in calculated RScP of −2.5% for ICRU skeletal muscle and 9.2% for ICRU femur. Dose-volume histogram analysis of a parallel-opposed proton therapy prostate plan showed that the difference in calculated dose was negligible when using the two different HU-density calibration tables. Conclusion: The impact of HU-density calibration technique on proton therapy dose calculation was assessed. While differences were found in the calculated RScP of bony tissues, the difference in dose distribution for realistic treatment scenarios was found to be insignificant.
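As a minimal numerical sketch of the HU-density lookup that such calibration tables feed into, the snippet below interpolates mass density from CT number. The table values are hypothetical placeholders, not the CIRS- or ICRU-derived values of the study.

```python
import numpy as np

# Hypothetical HU-density calibration table (HU, g/cm^3); a real table would
# come from a stoichiometric fit to scans of an electron density phantom.
hu_table = np.array([-1000.0, -100.0, 0.0, 300.0, 1200.0])
density_table = np.array([0.001, 0.93, 1.00, 1.16, 1.85])

def hu_to_density(hu):
    """Piecewise-linear lookup from CT number to mass density."""
    return np.interp(hu, hu_table, density_table)

print(hu_to_density(0.0))    # water voxel
print(hu_to_density(750.0))  # bone-like voxel, interpolated between knots
```

A TPS applies such a lookup voxel by voxel before deriving scattering power from density, which is why differences in the bone region of the table propagate into RScP.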
Möhler, Christian; Wohlfahrt, Patrick; Richter, Christian; Greilich, Steffen
2017-06-01
Electron density is the most important tissue property influencing photon and ion dose distributions in radiotherapy patients. Dual-energy computed tomography (DECT) enables the determination of electron density by combining the information on photon attenuation obtained at two different effective x-ray energy spectra. Most algorithms suggested so far use the CT numbers provided after image reconstruction as input parameters, i.e., are image-based. To explore the accuracy that can be achieved with these approaches, we quantify the intrinsic methodological and calibration uncertainty of the seemingly simplest approach. In the studied approach, electron density is calculated with a one-parametric linear superposition ('alpha blending') of the two DECT images, which is shown to be equivalent to an affine relation between the photon attenuation cross sections of the two x-ray energy spectra. We propose to use the latter relation for empirical calibration of the spectrum-dependent blending parameter. For a conclusive assessment of the electron density uncertainty, we chose to isolate the purely methodological uncertainty component from CT-related effects such as noise and beam hardening. Analyzing calculated spectrally weighted attenuation coefficients, we find universal applicability of the investigated approach to arbitrary mixtures of human tissue with an upper limit of the methodological uncertainty component of 0.2%, excluding high-Z elements such as iodine. The proposed calibration procedure is bias-free and straightforward to perform using standard equipment. Testing the calibration on five published data sets, we obtain very small differences in the calibration result in spite of different experimental setups and CT protocols used. Employing a general calibration per scanner type and voltage combination is thus conceivable.
Given the high suitability for clinical application of the alpha-blending approach in combination with a very small methodological uncertainty, we conclude that further refinement of image-based DECT algorithms for electron density assessment is not advisable. © 2017 American Association of Physicists in Medicine.
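The one-parameter blending and its empirical calibration can be sketched numerically. All numbers below are synthetic (constructed to satisfy the blending relation exactly with a weight of 1.6); they are not the paper's phantom data, and the convention of weighting the high-energy image is an assumption of this sketch.

```python
import numpy as np

# Reduced CT numbers x = 1 + HU/1000 of four hypothetical phantom inserts at
# the low- and high-energy spectra, plus their known relative electron densities.
x_low  = np.array([0.94, 1.00, 1.07, 1.55])
x_high = np.array([0.95, 1.00, 1.06, 1.47])
rho_e  = np.array([0.956, 1.000, 1.054, 1.422])

# One-parameter alpha blending: rho_e ≈ alpha*x_high + (1 - alpha)*x_low.
# Empirical least-squares calibration of the spectrum-dependent weight alpha:
d = x_high - x_low
alpha = np.dot(d, rho_e - x_low) / np.dot(d, d)
rho_pred = alpha * x_high + (1 - alpha) * x_low
print(round(alpha, 3), float(np.max(np.abs(rho_pred - rho_e))))
```

Once alpha is calibrated for a scanner type and voltage pair, the same blend applies voxel-wise to whole image volumes.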
Standardizing CT lung density measure across scanner manufacturers.
Chen-Mayer, Huaiyu Heather; Fuld, Matthew K; Hoppel, Bernice; Judy, Philip F; Sieren, Jered P; Guo, Junfeng; Lynch, David A; Possolo, Antonio; Fain, Sean B
2017-03-01
Computed Tomography (CT) imaging of the lung, reported in Hounsfield Units (HU), can be parameterized as a quantitative image biomarker for the diagnosis and monitoring of lung density changes due to emphysema, a type of chronic obstructive pulmonary disease (COPD). CT lung density metrics are global measurements based on lung CT number histograms, and are typically a quantity specifying either the percentage of voxels with CT numbers below a threshold, or a single CT number below which a fixed relative lung volume, the nth percentile, falls. To reduce variability in the density metrics specified by CT attenuation, the Quantitative Imaging Biomarkers Alliance (QIBA) Lung Density Committee has organized efforts to conduct phantom studies on a variety of scanner models to establish a baseline for assessing the variations in patient studies that can be attributed to scanner calibration and measurement uncertainty. Data were obtained from a phantom study on CT scanners from four manufacturers with several protocols at various tube potentials (kVp) and exposure settings. Free from biological variation, these phantom studies provide an assessment of the accuracy and precision of the density metrics across platforms solely due to machine calibration and uncertainty of the reference materials. The phantom used in this study has three foam density references in the lung density region, which, after calibration against a suite of Standard Reference Material (SRM) foams with certified physical density, establish an HU-electron density relationship for each machine-protocol combination. We devised a 5-step calibration procedure combined with a simplified physical model that enabled the standardization of the CT numbers reported across a total of 22 scanner-protocol settings to a single energy (chosen at 80 keV).
A standard deviation was calculated for the overall CT numbers for each density, as well as by scanner and other variables, as a measure of the variability before and after the standardization. In addition, a linear mixed-effects model was used to assess the heterogeneity across scanners, and the 95% confidence interval of the mean CT number was evaluated before and after the standardization. We show that after applying the standardization procedures to the phantom data, the instrumental reproducibility of the CT density measurement of the reference foams improved by more than 65%, as measured by the standard deviation of the overall mean CT number. Using the lung foam that did not participate in the calibration as a test case, a mixed-effects model analysis shows that the 95% confidence intervals are [-862.0 HU, -851.3 HU] before standardization, and [-859.0 HU, -853.7 HU] after standardization to 80 keV. This is in general agreement with the expected CT number value at 80 keV of -855.9 HU, with 95% CI of [-857.4 HU, -854.5 HU], based on the calibration and the uncertainty in the SRM certified density. This study provides a quantitative assessment of the variations expected in CT lung density measures attributed to non-biological sources such as scanner calibration and scanner x-ray spectrum and filtration. By removing scanner-protocol dependence from the measured CT numbers, higher accuracy and reproducibility of quantitative CT measures were attainable. The standardization procedures developed in this study may be explored for possible application to CT lung density clinical data. © 2017 American Association of Physicists in Medicine.
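A much-simplified version of the per-protocol standardization idea (not the committee's actual 5-step procedure) can be sketched as follows: fit a per-scanner-protocol line through reference foams of certified electron density, invert it for a measured foam, and re-express the result on one common reference line. All foam data are hypothetical, and the water-scaling target line is an idealization.

```python
import numpy as np

# Certified relative electron densities of three hypothetical reference foams
rho_e = np.array([0.05, 0.10, 0.20])

# Measured mean HU of those foams on two scanner-protocols (hypothetical)
hu_a = np.array([-952.0, -901.0, -798.0])
hu_b = np.array([-948.0, -896.0, -791.0])

def standardize(hu_measured, hu_cal, rho_cal):
    """Per-protocol linear calibration HU = m*rho_e + c, inverted to rho_e,
    then re-expressed on a common idealized line HU = 1000*(rho_e - 1)."""
    m, c = np.polyfit(rho_cal, hu_cal, 1)
    rho = (hu_measured - c) / m
    return 1000.0 * (rho - 1.0)

# A test foam measured on both protocols maps to nearby standardized values
print(standardize(-850.0, hu_a, rho_e), standardize(-850.0, hu_b, rho_e))
```

The residual spread between protocols after standardization is what the mixed-effects analysis in the study quantifies.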
Jennelle, C.S.; Runge, M.C.; MacKenzie, D.I.
2002-01-01
The search for easy-to-use indices that substitute for direct estimation of animal density is a common theme in wildlife and conservation science, but one fraught with well-known perils (Nichols & Conroy, 1996; Yoccoz, Nichols & Boulinier, 2001; Pollock et al., 2002). To establish the utility of an index as a substitute for an estimate of density, one must: (1) demonstrate a functional relationship between the index and density that is invariant over the desired scope of inference; (2) calibrate the functional relationship by obtaining independent measures of the index and the animal density; (3) evaluate the precision of the calibration (Diefenbach et al., 1994). Carbone et al. (2001) argue that the number of camera-days per photograph is a useful index of density for large, cryptic, forest-dwelling animals, and proceed to calibrate this index for tigers (Panthera tigris). We agree that a properly calibrated index may be useful for rapid assessments in conservation planning. However, Carbone et al. (2001), who desire to use their index as a substitute for density, do not adequately address the three elements noted above. Thus, we are concerned that others may view their methods as justification for not attempting directly to estimate animal densities, without due regard for the shortcomings of their approach.
NASA Astrophysics Data System (ADS)
Lorefice, Salvatore; Malengo, Andrea
2006-10-01
After a brief description of the different methods employed in periodic calibration of hydrometers used in most cases to measure the density of liquids in the range between 500 kg m⁻³ and 2000 kg m⁻³, particular emphasis is given to the multipoint procedure based on hydrostatic weighing, also known as Cuckow's method. The features of the calibration apparatus and the procedure used at the INRiM (formerly IMGC-CNR) density laboratory have been considered to assess all relevant contributions involved in the calibration of different kinds of hydrometers. The uncertainty is strongly dependent on the kind of hydrometer; in particular, the results highlight the importance of the density of the reference buoyant liquid, the temperature of calibration and the skill of the operator in the reading of the scale in the overall assessment of the uncertainty. It is also interesting to realize that for high-resolution hydrometers (division of 0.1 kg m⁻³), the uncertainty contribution of the density of the reference liquid is the main source of the total uncertainty, but its importance falls to about 50% for hydrometers with a division of 0.5 kg m⁻³ and becomes somewhat negligible for hydrometers with a division of 1 kg m⁻³, for which the reading uncertainty is the predominant part of the total uncertainty. At present the best INRiM result is obtained with commercially available hydrometers having a scale division of 0.1 kg m⁻³, for which the relative uncertainty is about 12 × 10⁻⁶.
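The ranking of uncertainty contributions described above comes from an uncertainty budget. A minimal sketch of combining uncorrelated standard-uncertainty components in quadrature, per the GUM law of propagation, is shown below; the component values are purely illustrative, not INRiM's.

```python
import math

# Hypothetical uncertainty budget for one hydrometer calibration point (k=1),
# expressed in scale-division units; values are illustrative only.
components = {
    "reference liquid density": 0.030,
    "temperature of calibration": 0.010,
    "scale reading by operator": 0.020,
    "hydrostatic weighing": 0.008,
}

# Uncorrelated contributions combine in quadrature
u_combined = math.sqrt(sum(u ** 2 for u in components.values()))
print(round(u_combined, 4))
```

Comparing each component's squared share against u_combined² is how one concludes, e.g., that the reference-liquid density dominates for high-resolution hydrometers.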
MacFarlane, Michael; Wong, Daniel; Hoover, Douglas A; Wong, Eugene; Johnson, Carol; Battista, Jerry J; Chen, Jeff Z
2018-03-01
In this work, we propose a new method of calibrating cone beam computed tomography (CBCT) data sets for radiotherapy dose calculation and plan assessment. The motivation for this patient-specific calibration (PSC) method is to develop an efficient, robust, and accurate CBCT calibration process that is less susceptible to deformable image registration (DIR) errors. Instead of mapping the CT numbers voxel-by-voxel as traditional DIR calibration methods do, the PSC method generates correlation plots between deformably registered planning CT and CBCT voxel values for each image slice. A linear calibration curve specific to each slice is then obtained by least-squares fitting and applied to the CBCT slice's voxel values. This allows each CBCT slice to be corrected using DIR information without altering the patient geometry through regional DIR errors. A retrospective study was performed on 15 head-and-neck cancer patients, each having routine CBCTs and a middle-of-treatment re-planning CT (reCT). The original treatment plan was re-calculated on the patient's reCT image set (serving as the gold standard) as well as on the image sets produced by the voxel-to-voxel DIR, density-override, and new PSC calibration methods. Dose accuracy of each calibration method was compared to the reference reCT data set using common dose-volume metrics and 3D gamma analysis. A phantom study was also performed to assess the accuracy of the DIR and PSC CBCT calibration methods compared with planning CT. Compared with the gold standard reCT, the average dose metric differences were ≤ 1.1% for all three methods (PSC: -0.3%; DIR: -0.7%; density-override: -1.1%). The average gamma pass rates with 3%/3 mm criteria were also similar among the three techniques (PSC: 95.0%; DIR: 96.1%; density-override: 94.4%). An automated patient-specific calibration method was developed which yielded strong dosimetric agreement with the results obtained using a re-planning CT for head-and-neck patients.
© 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
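The slice-wise fit-and-remap idea behind PSC can be sketched with synthetic volumes. The arrays below are random stand-ins for registered planning CT and raw CBCT data, with an invented per-volume scale and offset error; this is a sketch of the principle, not the published implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical volumes, shape (slices, rows, cols): a deformably registered
# planning CT and a CBCT whose numbers carry a scale error, offset, and noise.
ct = rng.normal(0.0, 300.0, size=(4, 32, 32))
cbct = 0.9 * ct + 40.0 + rng.normal(0.0, 20.0, size=ct.shape)

def psc_calibrate(cbct_vol, ct_vol):
    """Slice-wise linear calibration of CBCT against the registered CT:
    fit HU_ct ≈ a*HU_cbct + b per slice, then remap that CBCT slice."""
    out = np.empty_like(cbct_vol)
    for k in range(cbct_vol.shape[0]):
        a, b = np.polyfit(cbct_vol[k].ravel(), ct_vol[k].ravel(), 1)
        out[k] = a * cbct_vol[k] + b
    return out

corrected = psc_calibrate(cbct, ct)
print(np.mean(np.abs(corrected - ct)) < np.mean(np.abs(cbct - ct)))
```

Because only a two-parameter line is fitted per slice, a locally bad registration perturbs the fit far less than it would perturb a voxel-by-voxel CT-number mapping.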
Calibrated birth-death phylogenetic time-tree priors for bayesian inference.
Heled, Joseph; Drummond, Alexei J
2015-05-01
Here we introduce a general class of multiple-calibration birth-death tree priors for use in Bayesian phylogenetic inference. All tree priors in this class separate ancestral node heights into a set of "calibrated nodes" and "uncalibrated nodes" such that the marginal distribution of the calibrated nodes is user-specified, whereas the density ratio of the birth-death prior is retained for trees with equal values for the calibrated nodes. We describe two formulations: one in which the calibration information informs the prior on ranked tree topologies through the (conditional) prior, and another which factorizes the prior on divergence times and ranked topologies, thus allowing uniform, or any arbitrary, prior distribution on ranked topologies. Although the first of these formulations has some attractive properties, the algorithm we present for computing its prior density is computationally intensive. The second formulation, however, is faster and remains computationally efficient for up to six calibrations. We demonstrate the utility of the new class of multiple-calibration tree priors using both small simulations and a real-world analysis, and compare the results to existing schemes. The two new calibrated tree priors described in this article offer greater flexibility and control of prior specification in calibrated time-tree inference and divergence time dating, and will remove the need for indirect approaches to the assessment of the combined effect of calibration densities and tree priors in Bayesian phylogenetic inference. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
Nuopponen, Mari H; Birch, Gillian M; Sykes, Rob J; Lee, Steve J; Stewart, Derek
2006-01-11
Sitka spruce (Picea sitchensis) samples (491) from 50 different clones, as well as 24 different tropical hardwoods and 20 Scots pine (Pinus sylvestris) samples, were used to construct diffuse reflectance mid-infrared Fourier transform (DRIFT-MIR) based partial least squares (PLS) calibrations for lignin, cellulose, and wood resin contents and densities. Calibrations for density, lignin, and cellulose were established for all wood species combined into one data set as well as for the separate Sitka spruce data set. Relationships between wood resin and MIR data were constructed for the Sitka spruce data set as well as the combined Scots pine and Sitka spruce data sets. Calibrations containing only five wavenumbers instead of the spectral ranges 4000-2800 and 1800-700 cm(-1) were also established. In addition, chemical factors contributing to wood density were studied. Chemical composition and density assessed from DRIFT-MIR calibrations had R2 and Q2 values in the ranges of 0.6-0.9 and 0.6-0.8, respectively. The PLS models gave root mean square error of prediction (RMSEP) values of 1.6-1.9, 2.8-3.7, and 0.4 for lignin, cellulose, and wood resin contents, respectively. Density test sets had RMSEP values ranging from 50 to 56. A reduced number of wavenumbers can thus be used to predict the chemical composition and density of a wood, which should allow measurement of these properties with a hand-held device. MIR spectral data indicated that low-density samples had somewhat higher lignin contents than high-density samples. Correspondingly, high-density samples contained slightly more polysaccharides than low-density samples. This observation was consistent with the wet chemical data.
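A rough stand-in for the five-wavenumber calibration idea is sketched below, with ordinary least squares substituted for PLS (a simplification: PLS would be preferred when bands are collinear) and fully synthetic spectra and lignin values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic DRIFT-MIR intensities at five selected wavenumbers for 30 samples,
# and "reference" lignin contents generated from an invented linear model.
X = rng.uniform(0.1, 1.0, size=(30, 5))
true_coef = np.array([12.0, -5.0, 8.0, 0.0, 3.0])
y = 25.0 + X @ true_coef + rng.normal(0.0, 0.3, size=30)

# Least-squares calibration on the five band intensities (OLS stand-in for PLS)
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
rmsep = np.sqrt(np.mean((pred - y) ** 2))
print(round(rmsep, 2))
```

The appeal of a handful of wavenumbers, as the abstract notes, is that a filter-based hand-held instrument could measure just those bands instead of a full spectrum.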
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen-Mayer, H; Judy, P; Fain, S
Purpose: To standardize the calibration procedures of CT lung density measurements using low-density reference foams in a phantom, and to demonstrate a reproducibility of less than 1 HU for lung-equivalent foam densities measured across CT vendor platforms and protocols. Methods: A phantom study was conducted on CT scanner models from 4 vendors at 100, 120, and 135/140 kVp and 1.5, 3, and 6 mGy dose settings, using a lung density phantom containing air, water, and 3 reference foams (indirectly calibrated) with discrete densities simulating a 5-cm slice of the human chest. Customized segmentation software was used to analyze the images and generate a mean HU and variance for each of the densities for the 22 vendor/protocol combinations. A 3-step calibration process was devised to remove a scanner-dependent parameter using linear regression of the HU value vs the relative electron density. The results were mapped to a single energy (80 keV) for final comparison. Results: The heterogeneity across vendor platforms for each density, assessed by a random effects model, was reduced by 50% after re-calibration, while the standard deviation of the mean HU values also improved by about the same amount. The 95% CI of the final HU value was within ±1 HU for all 3 reference foam densities. For the backing lung foam in the phantom (serving as an "unknown"), this CI is ±1.6 HU. The kVp and dose settings did not appear to have significant contributions to the variability. Conclusion: With the proposed calibration procedures, an inter-scanner reproducibility of better than 1 HU is demonstrated in the current phantom study for the reference foam densities, but not yet achieved for a test density. The sources of error are being investigated in the next round of scanning with a certified Standard Reference Material for direct calibration. Fain: research funding from GE Healthcare to develop pulmonary MRI techniques. Hoppel: employee of Toshiba Medical Research Institute USA/financial interest with GE Healthcare. M. Fuld: employee of Siemens Healthcare for medical device equipment and software. This project is supported partially by RSNA QIBA Concept Award (Fain), NIH/NIBIB, HHSN268201300071C (Y).
NASA Astrophysics Data System (ADS)
Toledo, J.; Ruiz-Díez, V.; Pfusterschmied, G.; Schmid, U.; Sánchez-Rojas, J. L.
2017-06-01
Real-time monitoring of the physical properties of liquids, such as lubricants, is a very important issue for the automotive industry. For example, contamination of lubricating oil by diesel soot has a significant impact on engine wear. Resonant microstructures are regarded as a precise and compact solution for tracking the viscosity and density of lubricant oils. In this work, we report a piezoelectric resonator, designed to resonate in the 4th-order out-of-plane modal vibration (15-mode), and the interface circuit and calibration process for the monitoring of oil dilution with diesel fuel. In order to determine the resonance parameters of interest, i.e. resonant frequency and quality factor, an interface circuit was implemented and included within a closed-loop scheme. Two types of oscillator circuits were tested, a Phase-Locked Loop based on instrumentation and a more compact version based on discrete electronics, showing similar resolution. Another objective of this work is the assessment of a calibration method for piezoelectric MEMS resonators in simultaneous density and viscosity sensing. An advanced calibration model, based on a Taylor series of the hydrodynamic function, was established as a suitable method for determining the density and viscosity with the lowest calibration error. Our results demonstrate the performance of the resonator in different oil samples with viscosities up to 90 mPa·s. At the highest value, the quality factor measured at 25°C was around 22. The best resolution obtained was 2.4 × 10⁻⁶ g/ml for the density and 2.7 × 10⁻³ mPa·s for the viscosity, in pure lubricant oil SAE 0W30 at 90°C. Furthermore, the estimated density and viscosity values with the MEMS resonator were compared to those obtained with a commercial density-viscosity meter, reaching a mean calibration error in the best scenario of around 0.08% for the density and 3.8% for the viscosity.
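The simultaneous density-viscosity estimation can be illustrated with a toy linearized sensor model: around a working point, frequency shift and inverse quality factor each respond to both fluid properties, and a calibrated 2x2 sensitivity matrix is inverted to recover them. The coefficients below are invented and are not the paper's Taylor-series hydrodynamic function.

```python
import numpy as np

# Toy linearized sensor model (illustrative coefficients only):
# rows map (density in g/ml, viscosity in mPa*s) to
# (resonance frequency shift in Hz, inverse quality factor).
S = np.array([[-120.0,  -4.0],     # d(freq shift)/d(rho), d(freq shift)/d(eta)
              [ 0.0008,  0.0030]])  # d(1/Q)/d(rho),        d(1/Q)/d(eta)

def estimate(freq_shift, inv_q):
    # Invert the calibrated sensitivity matrix to recover (rho, eta)
    return np.linalg.solve(S, np.array([freq_shift, inv_q]))

# Forward-simulate a measurement for a known fluid, then invert it
rho_true, eta_true = 0.85, 60.0
measured = S @ np.array([rho_true, eta_true])
rho_est, eta_est = estimate(*measured)
print(round(rho_est, 3), round(eta_est, 1))
```

In practice the forward model is nonlinear in both properties, which is why the paper fits a Taylor expansion of the hydrodynamic function rather than a fixed linear map.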
Application of Polychromatic µCT for Mineral Density Determination
Zou, W.; Hunter, N.; Swain, M.V.
2011-01-01
Accurate assessment of mineral density (MD) provides information critical to the understanding of mineralization processes of calcified tissues, including bones and teeth. High-resolution three-dimensional assessment of the MD of teeth has been demonstrated by relatively inaccessible synchrotron radiation microcomputed tomography (SRµCT). While conventional desktop µCT (CµCT) technology is widely available, its polychromatic source and cone-shaped beam geometry confound MD assessment. Recently, considerable attention has been given to optimizing quantitative data from CµCT systems with polychromatic x-ray sources. In this review, we focus on the approaches that minimize inaccuracies arising from beam hardening, in particular, beam filtration during the scan, beam-hardening correction during reconstruction, and mineral density calibration. Filtration along with the lowest possible source voltage results in a narrow, near-single-peak spectrum, favoring high contrast and minimal beam-hardening artifacts. More effective beam monochromatization approaches are described. We also examine the significance of beam-hardening correction in determining the accuracy of mineral density estimation. In addition, standards for the calibration of reconstructed grey-scale attenuation values against MD, including the K2HPO4 liquid phantom and polymer-hydroxyapatite and solid hydroxyapatite (HA) phantoms, are discussed.
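After beam-hardening correction, the phantom-based calibration step reduces to mapping reconstructed grey values to mineral density through a fitted line. The sketch below uses hypothetical HA phantom values, not data from any cited study.

```python
import numpy as np

# Hypothetical hydroxyapatite (HA) calibration phantoms: known mineral
# densities (mg HA/cm^3) and mean reconstructed grey values measured
# after beam-hardening correction.
ha_density = np.array([0.0, 250.0, 750.0, 1200.0])
grey = np.array([60.0, 112.0, 215.0, 309.0])

# Linear calibration: grey value -> mineral density
m, c = np.polyfit(grey, ha_density, 1)

def grey_to_md(g):
    return m * g + c

print(round(grey_to_md(160.0), 1))  # mineral density of a mid-range voxel
```

Residual nonlinearity in the fitted phantom points is one way to spot an inadequate beam-hardening correction before trusting the calibration.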
Ellison, Aaron M.; Jackson, Scott
2015-01-01
Herpetologists and conservation biologists frequently use convenient and cost-effective, but less accurate, abundance indices (e.g., number of individuals collected under artificial cover boards or during natural object surveys) in lieu of more accurate, but costly and destructive, population size estimators to detect and monitor the size, state, and trends of amphibian populations. Although there are advantages and disadvantages to each approach, reliable use of abundance indices requires that they be calibrated with accurate population estimators. Such calibrations, however, are rare. The red-backed salamander, Plethodon cinereus, is an ecologically useful indicator species of forest dynamics, and accurate calibration of indices of salamander abundance could increase the reliability of abundance indices used in monitoring programs. We calibrated abundance indices derived from surveys of P. cinereus under artificial cover boards or natural objects with a more accurate estimator of their population size in a New England forest. Average densities/m² and capture probabilities of P. cinereus under natural objects or cover boards in independent, replicate sites at the Harvard Forest (Petersham, Massachusetts, USA) were similar in stands dominated by Tsuga canadensis (eastern hemlock) and deciduous hardwood species (predominantly Quercus rubra [red oak] and Acer rubrum [red maple]). The abundance index based on salamanders surveyed under natural objects was significantly associated with density estimates of P. cinereus derived from depletion (removal) surveys, but underestimated true density by 50%. In contrast, the abundance index based on cover-board surveys overestimated true density by a factor of 8, and the association between the cover-board index and the density estimates was not statistically significant. We conclude that when calibrated and used appropriately, some abundance indices may provide cost-effective and reliable measures of P. cinereus abundance that could be used in conservation assessments and long-term monitoring at Harvard Forest and other northeastern USA forests.
Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki
2016-01-01
Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were -32.336 and -33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range.
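The density-absorbed dose calibration described above amounts to fitting and inverting a line. The film readings below are hypothetical, with the slope constructed to land near the study's reported gradient of about -32 (in its own units).

```python
import numpy as np

# Hypothetical EBT3 film readings (arbitrary density units) behind a
# step-shaped Al filter; each step delivers a known absorbed dose.
dose = np.array([2.0, 4.0, 6.0, 8.0, 10.0])      # absorbed dose per step
reading = np.array([4200.0, 4135.0, 4071.0, 4005.0, 3940.0])

# Fit the calibration line and invert it to read off dose for a new film
slope, intercept = np.polyfit(dose, reading, 1)   # slope ≈ -32.5 here

def dose_from_reading(r):
    return (r - intercept) / slope

print(round(slope, 2), round(dose_from_reading(4100.0), 2))
```

The time saving in the simplified method comes from exposing all calibration steps in a single irradiation through the filter rather than irradiating separate film pieces at each dose.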
A Hierarchical Bayesian Model for Calibrating Estimates of Species Divergence Times
Heath, Tracy A.
2012-01-01
In Bayesian divergence time estimation methods, incorporating calibrating information from the fossil record is commonly done by assigning prior densities to ancestral nodes in the tree. Calibration prior densities are typically parametric distributions offset by minimum age estimates provided by the fossil record. Specification of the parameters of calibration densities requires the user to quantify his or her prior knowledge of the age of the ancestral node relative to the age of its calibrating fossil. The values of these parameters can, potentially, result in biased estimates of node ages if they lead to overly informative prior distributions. Accordingly, determining parameter values that lead to adequate prior densities is not straightforward. In this study, I present a hierarchical Bayesian model for calibrating divergence time analyses with multiple fossil age constraints. This approach applies a Dirichlet process prior as a hyperprior on the parameters of calibration prior densities. Specifically, this model assumes that the rate parameters of exponential prior distributions on calibrated nodes are distributed according to a Dirichlet process, whereby the rate parameters are clustered into distinct parameter categories. Both simulated and biological data are analyzed to evaluate the performance of the Dirichlet process hyperprior. Compared with fixed exponential prior densities, the hierarchical Bayesian approach results in more accurate and precise estimates of internal node ages. When this hyperprior is applied using Markov chain Monte Carlo methods, the ages of calibrated nodes are sampled from mixtures of exponential distributions and uncertainty in the values of calibration density parameters is taken into account.
Baena-Díez, José Miguel; Subirana, Isaac; Ramos, Rafael; Gómez de la Cámara, Agustín; Elosua, Roberto; Vila, Joan; Marín-Ibáñez, Alejandro; Guembe, María Jesús; Rigo, Fernando; Tormo-Díaz, María José; Moreno-Iribas, Conchi; Cabré, Joan Josep; Segura, Antonio; Lapetra, José; Quesada, Miquel; Medrano, María José; González-Diego, Paulino; Frontera, Guillem; Gavrila, Diana; Ardanaz, Eva; Basora, Josep; García, José María; García-Lareo, Manel; Gutiérrez-Fuentes, José Antonio; Mayoral, Eduardo; Sala, Joan; Dégano, Irene R; Francès, Albert; Castell, Conxa; Grau, María; Marrugat, Jaume
2018-04-01
To assess the validity of the original low-risk SCORE function without and with high-density lipoprotein cholesterol and of SCORE calibrated to the Spanish population. Pooled analysis with individual data from 12 Spanish population-based cohort studies. We included 30 919 individuals aged 40 to 64 years with no history of cardiovascular disease at baseline, who were followed up for 10 years for the causes of death included in the SCORE project. The validity of the risk functions was analyzed with the area under the ROC curve (discrimination) and the Hosmer-Lemeshow test (calibration), respectively. Follow-up comprised 286 105 person-years. Ten-year cardiovascular mortality was 0.6%. The ratios of estimated to observed cases were 9.1, 6.5, and 9.1 in men and 3.3, 1.3, and 1.9 in women for the original low-risk SCORE function without and with high-density lipoprotein cholesterol and the calibrated SCORE, respectively; differences between predicted and observed mortality were statistically significant with the Hosmer-Lemeshow test (P < .001 in both sexes and with all functions). The area under the ROC curve with the original SCORE was 0.68 in men and 0.69 in women. All versions of the SCORE function available in Spain significantly overestimate the cardiovascular mortality observed in the Spanish population. Despite the acceptable discrimination capacity, prediction of the number of fatal cardiovascular events (calibration) was significantly inaccurate. Copyright © 2017 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
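The two validity measures used above, calibration (ratio of expected to observed events) and discrimination (area under the ROC curve), can be sketched on synthetic data mimicking an overestimating risk function; the cohort below is simulated, not the Spanish pooled data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic cohort, illustrative only: predicted 10-year risks from a model
# that overestimates, with events occurring at roughly one fifth of the
# predicted rate (loosely mimicking the miscalibration reported above).
n = 20000
risk = rng.beta(1.2, 40.0, size=n)       # predicted event probabilities
events = rng.random(n) < risk / 5.0      # simulated observed outcomes

# Calibration: ratio of expected to observed event counts (~5 by construction)
ratio = risk.sum() / events.sum()

# Discrimination: AUC via the Mann-Whitney U statistic
pos, neg = risk[events], risk[~events]
auc = (pos[:, None] > neg[None, :]).mean()
print(round(ratio, 1), round(auc, 2))
```

The sketch makes the abstract's point concrete: a model can rank patients acceptably (AUC well above 0.5) while predicting several times too many deaths.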
Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S
2015-01-16
Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, are compared for two contrasting statistical calibration techniques: a traditional method and a Bayesian mixed-model approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed-model approach assimilates information from replicate QMM assays, improving reliability and inter-assay homogeneity and providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed-model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.
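The "traditional" calibration that the Bayesian approach is compared against is essentially a fitted and inverted standard curve. A sketch with hypothetical QT-NASBA standards follows; the time-to-positivity readout and all numbers are invented for illustration.

```python
import numpy as np

# Hypothetical standard curve: time-to-positivity (TTP, minutes) of QT-NASBA
# standards at known gametocyte densities (per uL), roughly linear in log10.
density = np.array([10.0, 100.0, 1000.0, 10000.0])
ttp = np.array([58.0, 49.5, 41.0, 32.5])

# Traditional calibration: fit TTP = a + b*log10(density), then invert it
b, a = np.polyfit(np.log10(density), ttp, 1)

def estimate_density(ttp_obs):
    return 10.0 ** ((ttp_obs - a) / b)

print(round(estimate_density(45.0)))  # sample that turned positive at 45 min
```

Because each assay run yields its own curve, single-curve inversion like this ignores inter-assay variability, which is exactly the deficiency the mixed-model approach addresses.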
Calibrations of the LHD Thomson scattering system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamada, I., E-mail: yamadai@nifs.ac.jp; Funaba, H.; Yasuhara, R.
2016-11-15
The Thomson scattering diagnostic systems are widely used for the measurements of absolute local electron temperatures and densities of fusion plasmas. In order to obtain accurate and reliable temperature and density data, careful calibrations of the system are required. We have tried several calibration methods since the second LHD experiment campaign in 1998. We summarize the current status of the calibration methods for the electron temperature and density measurements by the LHD Thomson scattering diagnostic system. Future plans are briefly discussed.
2004-01-01
The International Society for Clinical Densitometry (ISCD) held a Position Development Conference in July 2003, at which time positions developed and researched by the organization's Scientific Advisory Committee were presented to a panel of international experts in the field of bone density testing. This panel reached agreement on a series of positions that were subsequently approved by the Board of Directors of the ISCD and are now official policy of the ISCD. These positions, which are outlined in this article and discussed in greater detail in subsequent articles in this journal, include (1) affirmation of the use of the World Health Organization classification for the diagnosis of osteoporosis in postmenopausal women; (2) the diagnosis of osteoporosis in men; (3) the diagnosis of osteoporosis in premenopausal women; (4) the diagnosis of osteoporosis in children; (5) technical standards for skeletal regions of interest by dual-energy X-ray absorptiometry (DXA); (6) the use of new technologies, such as vertebral fracture assessment; (7) technical standards for quality assurance, including phantom scanning and calibration; (8) technical standards for the performance of precision assessment at bone density testing centers, and for cross-calibration of DXA devices; (9) indications for bone density testing; (10) appropriate information for a bone density report; and (11) nomenclature and decimal places for bone density reporting.
Calibrating ion density profile measurements in ion thruster beam plasma
NASA Astrophysics Data System (ADS)
Zhang, Zun; Tang, Haibin; Ren, Junxue; Zhang, Zhe; Wang, Joseph
2016-11-01
The ion thruster beam plasma is characterized by high directed ion velocity (10⁴ m/s) and low plasma density (10¹⁵ m⁻³). Interpretation of measurements of such a plasma based on classical Langmuir probe theory can yield a large experimental error. This paper presents an indirect method to calibrate ion density determination in an ion thruster beam plasma using a Faraday probe, a retarding potential analyzer, and a Langmuir probe. This new method is applied to determine the plasma emitted from a 20-cm-diameter Kaufman ion thruster. The results show that the ion density calibrated by the new method can be as much as 40% less than that without any ion current density and ion velocity calibration.
Dos Reis, Mario
2016-07-19
Constructing a multi-dimensional prior on the times of divergence (the node ages) of species in a phylogeny is not a trivial task, in particular, if the prior density is the result of combining different sources of information such as a speciation process with fossil calibration densities. Yang & Rannala (2006 Mol. Biol. Evol 23, 212-226. (doi:10.1093/molbev/msj024)) laid out the general approach to combine the birth-death process with arbitrary fossil-based densities to construct a prior on divergence times. They achieved this by calculating the density of node ages without calibrations conditioned on the ages of the calibrated nodes. Here, I show that the conditional density obtained by Yang & Rannala is misspecified. The misspecified density can sometimes be quite strange-looking and can lead to unintentionally informative priors on node ages without fossil calibrations. I derive the correct density and provide a few illustrative examples. Calculation of the density involves a sum over a large set of labelled histories, and so obtaining the density in a computer program seems hard at the moment. A general algorithm that may provide a way forward is given.This article is part of the themed issue 'Dating species divergences using rocks and clocks'. © 2016 The Author(s).
Estimation of option-implied risk-neutral into real-world density by using calibration function
NASA Astrophysics Data System (ADS)
Bahaludin, Hafizah; Abdullah, Mimi Hafizah
2017-04-01
Option prices contain crucial information that can be used as a reflection of the future development of an underlying asset's price. The main objective of this study is to extract the risk-neutral density (RND) and the real-world density (RWD) from option prices. A volatility function technique with a fourth-order polynomial interpolation is applied to obtain the RNDs. A calibration function is then used to convert the RNDs into RWDs; two types of calibration function, parametric and non-parametric, are considered. The densities are extracted from Dow Jones Industrial Average (DJIA) index options with a one-month constant maturity from January 2009 until December 2015. The performance of the extracted RNDs and RWDs is evaluated using a density forecasting test. This study found that the RWDs provide more accurate information about the future price of the underlying asset than the RNDs. In addition, empirical evidence suggests that RWDs from the non-parametric calibration are more accurate than the other densities.
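The RND-extraction step rests on the Breeden-Litzenberger relation q(K) = e^{rT} ∂²C/∂K². A minimal sketch on synthetic call prices (a stand-in for the DJIA data; the study itself interpolates implied volatilities with a fourth-order polynomial before differentiating):

```python
import numpy as np
from math import erf, exp, pi, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_pdf(x):
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def risk_neutral_density(strikes, call_prices, r=0.0, T=1.0):
    """Breeden-Litzenberger: q(K) = exp(rT) * d2C/dK2, via finite differences."""
    d2c = np.gradient(np.gradient(call_prices, strikes), strikes)
    return np.exp(r * T) * d2c

# Synthetic undiscounted call curve for a terminal price S ~ Normal(100, 10)
mu, sd = 100.0, 10.0
strikes = np.linspace(60.0, 140.0, 161)
calls = np.array([(mu - k) * norm_cdf((mu - k) / sd) + sd * norm_pdf((mu - k) / sd)
                  for k in strikes])
q = risk_neutral_density(strikes, calls)  # recovers the Normal(100, 10) density
```

The calibration-function step that maps this RND to an RWD is a separate transformation and is not sketched here.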
Note: Calibration of EBT3 radiochromic film for measuring solar ultraviolet radiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chun, S. L.; Yu, P. K. N., E-mail: peter.yu@cityu.edu.hk; State Key Laboratory in Marine Pollution, City University of Hong Kong, Kowloon Tong
Solar (UVA + UVB) exposure was assessed using the Gafchromic EBT3 film. The coloration change was represented by the net reflective optical density (Net ROD). Through calibrations against a UV-tube lamp, operational relationships were obtained between Net ROD and the (UVA + UVB) exposures (in J cm⁻² or J m⁻²). The useful range was from ~0.2 to ~30 J cm⁻². The uniformity of UV irradiation was crucial for an accurate calibration. For solar exposures ranging from 2 to 11 J cm⁻², the predicted Net ROD agreed with the recorded values within 9%, while the predicted exposures agreed with the recorded values within 15%.
Study of glass hydrometer calibration by hydrostatic weighting
NASA Astrophysics Data System (ADS)
Chen, Chaoyun; Wang, Jintao; Li, Zhihao; Zhang, Peiman
2016-01-01
Glass hydrometers are simple but effective instruments for measuring the density of liquids. A glass hydrometer calibration system based on hydrostatic weighing was designed according to Archimedes' law, using a silicon ring as the reference solid density standard and n-tridecane, chosen for its density stability and low surface tension, as the standard working liquid. The system uses a CCD image measurement system to align the hydrometer scale with the liquid surface, with a positioning accuracy of 0.01 mm. The surface tension of the working liquid is measured with a Wilhelmy plate. Weighing the hydrometer twice, in air and in the liquid, yields the correction value for the current scale mark. To verify the validity of the hydrostatic weighing principle, a hydrometer covering the density range (770-790) kg/m³ with a resolution of 0.2 kg/m³ was calibrated. The results of the measurements were compared with those of the Physikalisch-Technische Bundesanstalt (PTB), verifying the validity of the calibration system.
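The density determination itself is just Archimedes' principle: the sinker's apparent mass loss between air and liquid, divided by its volume, gives the liquid density. A minimal sketch with hypothetical weighings:

```python
def liquid_density(m_air_kg, m_liquid_kg, sinker_volume_m3):
    """Archimedes: the buoyant mass loss of the submerged sinker equals
    rho_liquid * V, so rho_liquid = (m_air - m_liquid) / V."""
    return (m_air_kg - m_liquid_kg) / sinker_volume_m3

# Hypothetical weighings of a 100 cm^3 silicon sinker in n-tridecane (~756 kg/m^3)
rho = liquid_density(0.2329, 0.1573, 1.0e-4)
```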
Whelan, Jessica; Craven, Stephen; Glennon, Brian
2012-01-01
In this study, the application of Raman spectroscopy to the simultaneous quantitative determination of glucose, glutamine, lactate, ammonia, glutamate, total cell density (TCD), and viable cell density (VCD) in a CHO fed-batch process was demonstrated in situ in 3 L and 15 L bioreactors. Spectral preprocessing and partial least squares (PLS) regression were used to correlate spectral data with off-line reference data. Separate PLS calibration models were developed for each analyte at the 3 L laboratory bioreactor scale before assessing its transferability to the same bioprocess conducted at the 15 L pilot scale. PLS calibration models were successfully developed for all analytes bar VCD and transferred to the 15 L scale. Copyright © 2012 American Institute of Chemical Engineers (AIChE).
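The PLS calibration step can be sketched with a minimal one-block NIPALS PLS1 on synthetic "spectra" (a single Gaussian band whose height tracks a hypothetical glucose concentration; this is not the study's Raman data, and the study used preprocessing and model transfer steps omitted here):

```python
import numpy as np

def pls1_fit(X, y, n_components=2):
    """Minimal NIPALS PLS1: extract score vectors t = X w, deflate, then map
    loadings back to regression coefficients B in the original X space."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w = w / np.linalg.norm(w)
        t = Xc @ w
        tt = float(t @ t)
        p = Xc.T @ t / tt
        coef = float(yc @ t) / tt
        Xc = Xc - np.outer(t, p)   # deflate X
        yc = yc - coef * t         # deflate y
        W.append(w); P.append(p); q.append(coef)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)
    return B, x_mean, y_mean

def pls1_predict(X, B, x_mean, y_mean):
    return (X - x_mean) @ B + y_mean

# Synthetic calibration set: one spectral band scaling with concentration
rng = np.random.default_rng(0)
band = np.exp(-((np.linspace(0.0, 1.0, 50) - 0.4) / 0.05) ** 2)
conc = rng.uniform(1.0, 10.0, 30)  # hypothetical glucose levels, g/L
X = np.outer(conc, band) + 0.01 * rng.standard_normal((30, 50))
B, xm, ym = pls1_fit(X, conc)
pred = pls1_predict(X, B, xm, ym)
```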
A universal airborne LiDAR approach for tropical forest carbon mapping.
Asner, Gregory P; Mascaro, Joseph; Muller-Landau, Helene C; Vieilledent, Ghislain; Vaudry, Romuald; Rasamoelina, Maminiaina; Hall, Jefferson S; van Breugel, Michiel
2012-04-01
Airborne light detection and ranging (LiDAR) is fast turning the corner from demonstration technology to a key tool for assessing carbon stocks in tropical forests. With its ability to penetrate tropical forest canopies and detect three-dimensional forest structure, LiDAR may prove to be a major component of international strategies to measure and account for carbon emissions from and uptake by tropical forests. To date, however, basic ecological information such as height-diameter allometry and stand-level wood density have not been mechanistically incorporated into methods for mapping forest carbon at regional and global scales. A better incorporation of these structural patterns in forests may reduce the considerable time needed to calibrate airborne data with ground-based forest inventory plots, which presently necessitate exhaustive measurements of tree diameters and heights, as well as tree identifications for wood density estimation. Here, we develop a new approach that can facilitate rapid LiDAR calibration with minimal field data. Throughout four tropical regions (Panama, Peru, Madagascar, and Hawaii), we were able to predict aboveground carbon density estimated in field inventory plots using a single universal LiDAR model (r² = 0.80, RMSE = 27.6 Mg C ha⁻¹). This model is comparable in predictive power to locally calibrated models, but relies on limited inputs of basal area and wood density information for a given region, rather than on traditional plot inventories. With this approach, we propose to radically decrease the time required to calibrate airborne LiDAR data and thus increase the output of high-resolution carbon maps, supporting tropical forest conservation and climate mitigation policy.
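Models of this family take a power-law form, ACD = a · TCH^b1 · BA^b2 · WD^b3, relating LiDAR top-of-canopy height (TCH) to aboveground carbon density via regional basal area (BA) and wood density (WD), and can be fitted by least squares in log space. A sketch on synthetic plot data with made-up coefficients (not the published fit):

```python
import numpy as np

def fit_universal_model(tch, ba, wd, acd):
    """Fit ACD = a * TCH^b1 * BA^b2 * WD^b3 by least squares in log space."""
    X = np.column_stack([np.ones_like(tch), np.log(tch), np.log(ba), np.log(wd)])
    coef, *_ = np.linalg.lstsq(X, np.log(acd), rcond=None)
    return coef  # [ln(a), b1, b2, b3]

def predict_acd(coef, tch, ba, wd):
    return np.exp(coef[0]) * tch ** coef[1] * ba ** coef[2] * wd ** coef[3]

# Synthetic inventory plots generated from made-up coefficients
rng = np.random.default_rng(1)
tch = rng.uniform(10.0, 40.0, 200)  # top-of-canopy height, m
ba = rng.uniform(10.0, 50.0, 200)   # basal area, m^2/ha
wd = rng.uniform(0.4, 0.8, 200)     # wood density, g/cm^3
acd = 0.3 * tch ** 1.0 * ba ** 0.8 * wd ** 1.0
coef = fit_universal_model(tch, ba, wd, acd)
```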
NASA Astrophysics Data System (ADS)
Chen, Biao; Ruth, Chris; Jing, Zhenxue; Ren, Baorui; Smith, Andrew; Kshirsagar, Ashwini
2014-03-01
Breast density has been identified to be a risk factor of developing breast cancer and an indicator of lesion diagnostic obstruction due to masking effect. Volumetric density measurement evaluates fibro-glandular volume, breast volume, and breast volume density measures that have potential advantages over area density measurement in risk assessment. One class of volume density computing methods is based on the finding of the relative fibro-glandular tissue attenuation with regards to the reference fat tissue, and the estimation of the effective x-ray tissue attenuation differences between the fibro-glandular and fat tissue is key to volumetric breast density computing. We have modeled the effective attenuation difference as a function of actual x-ray skin entrance spectrum, breast thickness, fibro-glandular tissue thickness distribution, and detector efficiency. Compared to other approaches, our method has threefold advantages: (1) avoids the system calibration-based creation of effective attenuation differences which may introduce tedious calibrations for each imaging system and may not reflect the spectrum change and scatter induced overestimation or underestimation of breast density; (2) obtains the system specific separate and differential attenuation values of fibroglandular and fat for each mammographic image; and (3) further reduces the impact of breast thickness accuracy to volumetric breast density. A quantitative breast volume phantom with a set of equivalent fibro-glandular thicknesses has been used to evaluate the volume breast density measurement with the proposed method. The experimental results have shown that the method has significantly improved the accuracy of estimating breast density.
Water Use Patterns of Four Tropical Bamboo Species Assessed with Sap Flux Measurements.
Mei, Tingting; Fang, Dongming; Röll, Alexander; Niu, Furong; Hendrayanto; Hölscher, Dirk
2015-01-01
Bamboos are grasses (Poaceae) that are widespread in tropical and subtropical regions. We aimed at exploring water use patterns of four tropical bamboo species (Bambusa vulgaris, Dendrocalamus asper, Gigantochloa atroviolacea, and G. apus) with sap flux measurement techniques. Our approach included three experimental steps: (1) a pot experiment with a comparison of thermal dissipation probes (TDPs), the stem heat balance (SHB) method and gravimetric readings using potted B. vulgaris culms, (2) an in situ calibration of TDPs with the SHB method for the four bamboo species, and (3) field monitoring of sap flux of the four bamboo species along with three tropical tree species (Gmelina arborea, Shorea leprosula, and Hevea brasiliensis) during a dry and a wet period. In the pot experiment, it was confirmed that the SHB method is well suited for bamboos but that TDPs need to be calibrated. In situ, species-specific parameters for such calibration formulas were derived. During field monitoring we found that some bamboo species reached high maximum sap flux densities. Across bamboo species, maximal sap flux density increased with decreasing culm diameter. In the diurnal course, sap flux densities in bamboos peaked much earlier than radiation and vapor pressure deficit (VPD), and also much earlier than sap flux densities in trees. There was a pronounced hysteresis between sap flux density and VPD in bamboos, which was less pronounced in trees. Three of the four bamboo species showed reduced sap flux densities at high VPD values during the dry period, which was associated with a decrease in soil moisture content. Possible roles of internal water storage, root pressure and stomatal sensitivity are discussed.
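The TDP calibration mentioned above replaces the coefficients of Granier's original formula, u = a·K^b with K = (ΔT_max − ΔT)/ΔT, by species-specific values fitted against the SHB method. A sketch with Granier's original coefficients as defaults (the study's bamboo-specific values are not given in the abstract):

```python
def granier_sap_flux_density(dT, dT_max, a=119e-6, b=1.231):
    """Granier's TDP calibration u = a * K^b (in m^3 m^-2 s^-1), with
    K = (dT_max - dT) / dT. Species-specific (a, b) from an in-situ
    calibration against the stem heat balance method replace the defaults."""
    K = (dT_max - dT) / dT
    return a * K ** b

u = granier_sap_flux_density(8.0, 10.0)  # K = 0.25
```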
ERIC Educational Resources Information Center
Peterson, Karen I.
2008-01-01
The experiment developed in this article addresses the concept of equipment calibration for reducing systematic error. It also suggests simple student-prepared sucrose solutions for which accurate densities are known, but not readily available to students. Densities are measured with simple glassware that has been calibrated using the density of…
Tornero-López, Ana M; Guirado, Damián; Perez-Calatayud, Jose; Ruiz-Arrebola, Samuel; Simancas, Fernando; Gazdic-Santic, Maja; Lallena, Antonio M
2013-12-01
Air-communicating well ionization chambers are commonly used to assess air kerma strength of sources used in brachytherapy. The signal produced is supposed to be proportional to the air density within the chamber and, therefore, a density-independent air kerma strength is obtained when the measurement is corrected to standard atmospheric conditions using the usual temperature and pressure correction factor. Nevertheless, when assessing low energy sources, the ionization chambers may not fulfill that condition and a residual density dependence still remains after correction. In this work, the authors examined the behavior of the PTW 34051 SourceCheck ionization chamber when measuring the air kerma strength of ¹²⁵I seeds. Four different SourceCheck chambers were analyzed. With each one of them, two series of measurements of the air kerma strength for ¹²⁵I selectSeed™ brachytherapy sources were performed inside a pressure chamber and varying the pressure in a range from 747 to 1040 hPa (560 to 780 mm Hg). The temperature and relative humidity were kept basically constant. An analogous experiment was performed by taking measurements at different altitudes above sea level. Contrary to other well-known ionization chambers, like the HDR1000 PLUS, in which the temperature-pressure correction factor overcorrects the measurements, in the SourceCheck ionization chamber they are undercorrected. At a typical atmospheric situation of 933 hPa (700 mm Hg) and 20 °C, this undercorrection turns out to be 1.5%. Corrected measurements show a residual linear dependence on the density and, as a consequence, an additional density dependent correction must be applied. The slope of this residual linear density dependence is different for each SourceCheck chamber investigated. The results obtained by taking measurements at different altitudes are compatible with those obtained with the pressure chamber.
Variations of the altitude and changes in the weather conditions may produce significant density corrections, and that effect should be taken into account. This effect is chamber-dependent, indicating that a specific calibration is necessary for each particular chamber. To our knowledge, this correction has not been considered so far for SourceCheck ionization chambers, but its magnitude cannot be neglected in clinical practice. The atmospheric pressure and temperature at which the chamber was calibrated need to be taken into account, and they should be reported in the calibration certificate. In addition, each institution should analyze the particular response of its SourceCheck ionization chamber and compute the adequate correction factors. In the absence of a suitable pressure chamber, a possibility for this assessment is to take measurements at different altitudes, spanning a wide enough air density range.
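The two-stage correction described above, the standard temperature-pressure factor plus a chamber-specific residual density term, can be sketched as follows. The residual slope is a per-chamber measured quantity, not a published constant; the value used below is purely illustrative:

```python
def k_tp(pressure_hpa, temp_c, p_ref=1013.25, t_ref=20.0):
    """Standard temperature-pressure correction factor for vented ion chambers."""
    return (p_ref / pressure_hpa) * ((273.15 + temp_c) / (273.15 + t_ref))

def air_density(pressure_hpa, temp_c):
    """Dry-air density from the ideal gas law, kg/m^3."""
    return (pressure_hpa * 100.0) / (287.05 * (273.15 + temp_c))

def residual_correction(pressure_hpa, temp_c, slope, rho_ref):
    """Chamber-specific linear density correction left over after k_tp.
    `slope` must be determined per chamber, e.g. in a pressure chamber."""
    return 1.0 / (1.0 + slope * (air_density(pressure_hpa, temp_c) - rho_ref))

rho0 = air_density(1013.25, 20.0)  # reference air density, ~1.20 kg/m^3
```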
NASA Technical Reports Server (NTRS)
Leger, Lubert J.; Koontz, Steven L.; Visentine, James T.; Hunton, Donald
1993-01-01
An overview of EOIM-III, designed to produce benchmark atomic oxygen reactivity data, is presented. Ambient density measurements are conducted using a quadrupole mass spectrometer calibrated for atomic oxygen measurements in a unique ground-based test facility. The combination of these data with the predictions of ambient density models permits an assessment of the accuracy of measured reaction rates on a variety of materials, many of which have never been tested in LEO previously.
Viscosity and density of methanol/water mixtures at low temperatures
NASA Technical Reports Server (NTRS)
Austin, J. G.; Kurata, F.; Swift, G. W.
1968-01-01
Viscosity and density are measured at low temperatures for three methanol/water mixtures. Viscosity is determined by a modified falling cylinder method or a calibrated viscometer. Density is determined by the volume of each mixture contained in a calibrated glass cell placed in a constant-temperature bath.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiong, Wei; Balkovic, Juraj; van der Velde, M.
Crop models are increasingly used to assess impacts of climate change/variability and management practices on the productivity and environmental performance of alternative cropping systems. Calibration is an important procedure for improving the reliability of model simulations, especially for large-area applications. However, global-scale crop model calibration has rarely been exercised due to limited data availability and high computing cost. Here we present a simple approach to calibrate the Environmental Policy Integrated Climate (EPIC) model for a global implementation of rice. We identify four parameters (potential heat unit – PHU, planting density – PD, harvest index – HI, and biomass energy ratio – BER) and calibrate them regionally to capture the spatial pattern of reported rice yield in 2000. Model performance is assessed by comparing simulated outputs with independent FAO national data. The comparison demonstrates that the global calibration scheme performs satisfactorily in reproducing the spatial pattern of rice yield, particularly in the main rice production areas. Spatial agreement increases substantially when more parameters are selected and calibrated, but with varying efficiencies. Among the parameters, PHU and HI exhibit the highest efficiencies in increasing the spatial agreement. Simulations with different calibration strategies generate a pronounced discrepancy of 5-35% in mean yields across latitude bands, and a small to moderate difference in estimated yield variability and yield trend for the period 1981-2000. The present calibration has little effect on simulated yield variability and trends at both regional and global levels, suggesting further work is needed to reproduce the temporal variability of reported yields. This study highlights the importance of crop model calibration, and presents the possibility of a transparent and consistent upscaling approach for global crop simulations given the current availability of global databases of weather, soil, crop calendar, fertilizer and irrigation management information, and reported yield.
The role of adequate reference materials in density measurements in hemodialysis
NASA Astrophysics Data System (ADS)
Furtado, A.; Moutinho, J.; Moura, S.; Oliveira, F.; Filipe, E.
2015-02-01
In hemodialysis, oscillation-type density meters are used to measure the density of the acid component of the dialysate solutions used in the treatment of kidney patients. An incorrect density determination of this solution can cause serious adverse events in patients. Therefore, despite Fresenius Medical Care's (FME) tight control of density meter calibration results, this study shows the benefits of mimicking the routinely measured matrix when producing suitable reference materials for density meter calibrations.
A fast, calibrated model for pyroclastic density currents kinematics and hazard
NASA Astrophysics Data System (ADS)
Esposti Ongaro, Tomaso; Orsucci, Simone; Cornolti, Fulvio
2016-11-01
Multiphase flow models represent valuable tools for the study of the complex, non-equilibrium dynamics of pyroclastic density currents. Particle sedimentation, flow stratification and rheological changes, depending on the flow regime, interaction with topographic obstacles, turbulent air entrainment, buoyancy reversal, and other complex features of pyroclastic currents can be simulated in two and three dimensions, by exploiting efficient numerical solvers and the improved computational capability of modern supercomputers. However, numerical simulations of polydisperse gas-particle mixtures are quite computationally expensive, so that their use in hazard assessment studies (where there is the need of evaluating the probability of hazardous actions over hundreds of possible scenarios) is still challenging. To this end, a simplified integral (box) model can be used, under the appropriate hypotheses, to describe the kinematics of pyroclastic density currents over a flat topography, their scaling properties and their depositional features. In this work, multiphase flow simulations are used to evaluate integral model approximations, to calibrate its free parameters and to assess the influence of the input data on the results. Two-dimensional numerical simulations describe the generation and decoupling of a dense, basal layer (formed by progressive particle sedimentation) from the dilute transport system. In the Boussinesq regime (i.e., for solid mass fractions below about 0.1), the current Froude number (i.e., the ratio between the current inertia and buoyancy) does not strongly depend on initial conditions and is consistent with that measured in laboratory experiments (i.e., between 1.05 and 1.2).
For higher density ratios (solid mass fraction in the range 0.1-0.9) but still in a relatively dilute regime (particle volume fraction lower than 0.01), numerical simulations demonstrate that the box model is still applicable, but the Froude number depends on the reduced gravity. When the box model is opportunely calibrated with the numerical simulation results, the prediction of the flow runout is fairly accurate and the model predicts a rapid, non-linear decay of the flow kinetic energy (or dynamic pressure) with the distance from the source. The capability of PDC to overcome topographic obstacles can thus be analysed in the framework of the energy-conoid approach, in which the predicted kinetic energy of the flow front is compared with the potential energy jump associated with the elevated topography to derive a condition for blocking. Model results show that, although preferable to the energy-cone, the energy-conoid approach still has some serious limitations, mostly associated with the behaviour of the flow head. Implications of these outcomes are discussed in the context of probabilistic hazard assessment studies, in which a calibrated box model can be used as a fast pyroclastic density current emulator for Monte Carlo simulations.
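The energy-line blocking test described above reduces to comparing the flow front's kinetic head with the topographic jump it must climb, with the front speed given by the calibrated Froude scaling. A minimal sketch under these assumptions (the depth, reduced gravity, and obstacle heights below are illustrative, not from the study):

```python
import math

def froude_speed(g_reduced, current_depth_m, froude=1.1):
    """Front speed of an inertial gravity current, u = Fr * sqrt(g' h);
    Fr ~ 1.05-1.2 from the calibrated simulations and lab experiments."""
    return froude * math.sqrt(g_reduced * current_depth_m)

def blocked_by_obstacle(front_speed_m_s, obstacle_height_m, g=9.81):
    """Energy-line blocking condition: the current is stopped when its
    kinetic head v^2 / (2 g) is smaller than the topographic jump."""
    kinetic_head = front_speed_m_s ** 2 / (2.0 * g)
    return kinetic_head < obstacle_height_m

u = froude_speed(1.5, 100.0)       # ~13.5 m/s for g' = 1.5 m/s^2, h = 100 m
stopped = blocked_by_obstacle(u, 20.0)  # a 20 m ridge blocks this front
```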
40 CFR 1065.690 - Buoyancy correction for PM sample media.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Buoyancy correction for PM sample media. (a) General. Correct PM sample media for their buoyancy in air if you weigh them on a balance. The buoyancy correction depends on the sample media density, the density of air, and the density of the calibration weight used to calibrate the balance. The buoyancy...
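The correction scales the balance reading by the ratio of the buoyancy effects on the calibration weight and on the sample media. A sketch with illustrative default densities (air ~1.2 kg/m³, a steel calibration weight at 8000 kg/m³, PTFE media at 920 kg/m³); consult §1065.690 itself for the regulatory defaults and the exact equation:

```python
def pm_buoyancy_correction(m_uncorrected, rho_air=1.2, rho_weight=8000.0,
                           rho_media=920.0):
    """Buoyancy correction of a PM filter weighing: the balance was calibrated
    with a dense metal weight, so the lighter media displace relatively more
    air and the reading must be scaled up. Densities in kg/m^3."""
    return m_uncorrected * (1.0 - rho_air / rho_weight) / (1.0 - rho_air / rho_media)

f = pm_buoyancy_correction(1.0)  # correction factor, ~0.1% for these densities
```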
Postmortem validation of breast density using dual-energy mammography
Molloi, Sabee; Ducote, Justin L.; Ding, Huanjun; Feig, Stephen A.
2014-01-01
Purpose: Mammographic density has been shown to be an indicator of breast cancer risk and also reduces the sensitivity of screening mammography. Currently, there is no accepted standard for measuring breast density. Dual energy mammography has been proposed as a technique for accurate measurement of breast density. The purpose of this study is to validate its accuracy in postmortem breasts and compare it with other existing techniques. Methods: Forty postmortem breasts were imaged using a dual energy mammography system. Glandular and adipose equivalent phantoms of uniform thickness were used to calibrate a dual energy basis decomposition algorithm. Dual energy decomposition was applied after scatter correction to calculate breast density. Breast density was also estimated using radiologist reader assessment, standard histogram thresholding and a fuzzy C-mean algorithm. Chemical analysis was used as the reference standard to assess the accuracy of different techniques to measure breast composition. Results: Breast density measurements using radiologist reader assessment, standard histogram thresholding, fuzzy C-mean algorithm, and dual energy were in good agreement with the measured fibroglandular volume fraction using chemical analysis. The standard error estimates using radiologist reader assessment, standard histogram thresholding, fuzzy C-mean, and dual energy were 9.9%, 8.6%, 7.2%, and 4.7%, respectively. Conclusions: The results indicate that dual energy mammography can be used to accurately measure breast density. The variability in breast density estimation using dual energy mammography was lower than reader assessment rankings, standard histogram thresholding, and fuzzy C-mean algorithm. Improved quantification of breast density is expected to further enhance its utility as a risk factor for breast cancer. PMID:25086548
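For a two-material model, the basis-decomposition step reduces to solving a 2x2 linear system per pixel for the glandular and adipose thicknesses, from which the glandular fraction follows. The attenuation coefficients below are hypothetical stand-ins for the phantom-calibrated values:

```python
import numpy as np

def decompose(signal_low, signal_high, mu):
    """Solve the 2x2 dual-energy system for glandular/adipose thickness (cm).
    mu holds effective attenuation coefficients from the phantom calibration."""
    return np.linalg.solve(mu, np.array([signal_low, signal_high]))

def breast_density(t_glandular, t_adipose):
    """Volumetric glandular fraction of the decomposed thicknesses."""
    return t_glandular / (t_glandular + t_adipose)

# Hypothetical effective attenuation coefficients (1/cm):
# rows = [low-energy, high-energy] spectrum, cols = [glandular, adipose]
mu = np.array([[0.80, 0.45],
               [0.50, 0.30]])
t_true = np.array([2.0, 3.0])  # thicknesses used to synthesize the signals
signals = mu @ t_true          # noiseless log-attenuation signals
t = decompose(signals[0], signals[1], mu)
```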
Wang, Jeff; Kato, Fumi; Yamashita, Hiroko; Baba, Motoi; Cui, Yi; Li, Ruijiang; Oyama-Manabe, Noriko; Shirato, Hiroki
2017-04-01
Breast cancer is the most common invasive cancer among women and its incidence is increasing. Risk assessment is valuable and recent methods are incorporating novel biomarkers such as mammographic density. Artificial neural networks (ANN) are adaptive algorithms capable of performing pattern-to-pattern learning and are well suited for medical applications. They are potentially useful for calibrating full-field digital mammography (FFDM) for quantitative analysis. This study uses ANN modeling to estimate volumetric breast density (VBD) from FFDM in Japanese women with and without breast cancer. ANN calibration of VBD was performed using phantom data for one FFDM system. Mammograms of 46 Japanese women diagnosed with invasive carcinoma and 53 with negative findings were analyzed using the learned ANN models. ANN-estimated VBD was validated against phantom data, intra-patient against qualitative composition scoring and MRI VBD, and inter-patient against classical risk factors of breast cancer as well as cancer status. Phantom validations reached an R² of 0.993. Intra-patient validations ranged from an R² of 0.789 with VBD to 0.908 with breast volume. ANN VBD agreed well with BI-RADS scoring and MRI VBD, with R² ranging from 0.665 with VBD to 0.852 with breast volume. VBD was significantly higher in women with cancer. Previously reported associations with age, BMI, menopause, and cancer status were also confirmed. ANN modeling appears to produce reasonable measures of mammographic density, validated against phantoms, existing measures of breast density, and classical biomarkers of breast cancer. FFDM VBD is significantly higher in Japanese women with cancer.
WFIRST: Predicting the number density of Hα-emitting galaxies
NASA Astrophysics Data System (ADS)
Benson, Andrew; Merson, Alex; Wang, Yun; Faisst, Andreas; Masters, Daniel; Kiessling, Alina; Rhodes, Jason
2018-01-01
The WFIRST mission will measure the clustering of Hα-emitting galaxies to help probe the nature of dark energy. Knowledge of the number density of such galaxies is therefore vital for forecasting the precision of these measurements and assessing the scientific impact of the WFIRST mission. In this poster we present predictions from a galaxy formation model, Galacticus, for the cumulative number counts of Hα-emitting galaxies. We couple Galacticus to three different dust attenuation methods and examine the counts using each method. A χ² minimization approach is used to compare the model counts to observed galaxy counts and calibrate the dust parameters. With these calibrated dust methods, we find that the Hα luminosity function from Galacticus is broadly consistent with observed estimates. Finally, we present forecasts for the redshift distributions and number counts for a WFIRST-like survey. Over a redshift range of 1 ≤ z ≤ 2 and with a blended flux limit of 1×10⁻¹⁶ erg s⁻¹ cm⁻², Galacticus predicts that WFIRST would observe between 10,400 and 15,200 Hα-emitting galaxies per square degree.
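The χ² calibration of a dust parameter against observed counts can be sketched with a simple grid search. The toy luminosity function, count values, and the single parameter `tau` below are assumptions for illustration, not Galacticus outputs:

```python
import numpy as np

# Hypothetical observed cumulative counts N(>flux) per square degree at three
# flux limits (erg/s/cm^2), with Poisson errors.
flux_limits = np.array([1e-16, 2e-16, 4e-16])
n_obs = np.array([14000.0, 9000.0, 5000.0])
sigma = np.sqrt(n_obs)

def model_counts(tau):
    """Toy power-law counts: dust optical depth tau attenuates the flux."""
    return 14000.0 * (np.exp(-tau) * 1e-16 / flux_limits) ** 0.8

def chi2(tau):
    """Chi-squared misfit between model and observed counts."""
    return np.sum(((n_obs - model_counts(tau)) / sigma) ** 2)

# Grid search over the dust parameter; a real calibration would minimize
# over several dust-model parameters simultaneously.
taus = np.linspace(0.0, 2.0, 201)
tau_best = taus[np.argmin([chi2(t) for t in taus])]
```

A grid search is shown for transparency; any standard minimizer would do the same job once the misfit function is defined.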
Thielens, Arno; Agneessens, Sam; Van Torre, Patrick; Van den Bossche, Matthias; Eeftens, Marloes; Huss, Anke; Vermeulen, Roel; de Seze, René; Mazet, Paul; Cardis, Elisabeth; Röösli, Martin; Martens, Luc; Joseph, Wout
2018-01-01
A multi-band Body-Worn Distributed exposure Meter (BWDM), calibrated for simultaneous measurement of the incident power density in 11 telecommunication frequency bands, is proposed. The BWDM consists of 22 textile antennas integrated in a garment and is calibrated on six human subjects in an anechoic chamber to assess its measurement uncertainty in terms of the 68% confidence interval of the on-body antenna aperture. It is shown that by using multiple antennas in each frequency band, the uncertainty of the BWDM is improved by up to 22 dB with respect to single nodes on the front and back of the torso, and variations are reduced to a maximum of 8.8 dB. Moreover, deploying single antennas across different body morphologies results in a variation of up to 9.3 dB, which is reduced to 3.6 dB using multiple antennas for six subjects with various body mass index values. The designed BWDM has an uncertainty improved by up to 9.6 dB in comparison to commercially available personal exposure meters calibrated on the body. As an application, an average incident power density in the range of 26.7–90.8 μW·m⁻² was measured in Ghent, Belgium. The measurements show that commercial personal exposure meters underestimate the actual exposure by a factor of up to 20.6. PMID:29346280
NASA Astrophysics Data System (ADS)
Vielberg, Kristin; Forootan, Ehsan; Lück, Christina; Löcher, Anno; Kusche, Jürgen; Börger, Klaus
2018-05-01
Ultra-sensitive space-borne accelerometers on board low Earth orbit (LEO) satellites are used to measure the non-gravitational forces acting on the surfaces of these satellites. These forces consist of Earth radiation pressure, solar radiation pressure and atmospheric drag, where the first two are caused by radiation emitted from the Earth and the Sun, respectively, and the latter is related to the thermospheric density. On-board accelerometer measurements contain systematic errors, which need to be mitigated by calibration before their use in gravity recovery or thermospheric neutral density estimation. Therefore, we improve, apply and compare three calibration procedures: (1) a multi-step numerical estimation approach based on numerical differentiation of the kinematic orbits of LEO satellites; (2) calibration of accelerometer observations within the dynamic precise orbit determination procedure; and (3) comparison of observed to modeled forces acting on the surface of LEO satellites. Here, accelerometer measurements obtained by the Gravity Recovery And Climate Experiment (GRACE) are used. Time series of bias and scale factor derived from the three calibration procedures are found to differ on timescales of a few days to months. Results are more similar (statistically significant) on longer timescales, where the results of approaches (1) and (2) agree better with those of approach (3) during medium and high solar activity. Calibrated accelerometer observations are then applied to estimate thermospheric neutral densities. Differences between accelerometer-based density estimates and those from empirical neutral density models, e.g., NRLMSISE-00, are found to be significant during quiet periods, on average 22% of the simulated densities (during low solar activity), and up to 28% during high solar activity.
Therefore, daily corrections are estimated for neutral densities derived from NRLMSISE-00. Our results indicate that these corrections improve model-based density simulations in order to provide density estimates at locations outside the vicinity of the GRACE satellites, in particular during the period of high solar/magnetic activity, e.g., during the St. Patrick's Day storm on 17 March 2015.
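Approach (3) above, comparing observed to modeled forces, amounts to estimating a bias and scale factor by least squares. A minimal one-axis sketch with synthetic data (the scale, bias, and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "modeled" non-gravitational acceleration along one axis (m/s^2)
# and a raw accelerometer signal generated with an assumed true scale/bias.
a_model = 1e-7 * np.sin(np.linspace(0.0, 6.0, 200))
true_scale, true_bias = 0.98, 2.5e-8
a_raw = (a_model - true_bias) / true_scale + 1e-10 * rng.standard_normal(200)

# Least-squares estimate of [scale, bias] in a_model ≈ scale*a_raw + bias.
A = np.column_stack([a_raw, np.ones_like(a_raw)])
scale, bias = np.linalg.lstsq(A, a_model, rcond=None)[0]
```

In practice the fit is done per axis and per calibration interval (e.g., daily), which is what produces the bias and scale-factor time series compared in the paper.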
NASA Astrophysics Data System (ADS)
Jumadi, Nur Anida; Beng, Gan Kok; Ali, Mohd Alauddin Mohd; Zahedi, Edmond; Morsin, Marlia
2017-09-01
The implementation of a surface-based Monte Carlo simulation technique for oxygen saturation (SaO2) calibration curve estimation is demonstrated in this paper. Generally, the calibration curve is estimated either empirically, using animals as experimental subjects, or derived from mathematical equations. However, determining the calibration curve using animals is time consuming and requires expertise to conduct the experiment. Alternatively, optical simulation techniques have been used widely in the biomedical optics field due to their capability to reproduce real tissue behavior. The mathematical relationship between optical density (OD) and optical density ratio (ODR) associated with SaO2 during systole and diastole is used as the basis for obtaining the theoretical calibration curve. Optical properties corresponding to systolic and diastolic behavior were applied to the tissue model to mimic the optical properties of the tissues. Based on the ray flux absorbed at the detectors, the OD and ODR were successfully calculated. The simulated optical density ratios at every 20% interval of SaO2 are presented, with a maximum error of 2.17% when compared with a previous numerical simulation technique (MC model). The findings reveal the potential of the proposed method for extended calibration curve studies using other wavelength pairs.
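The OD/ODR relationship used to build such a calibration curve can be sketched as follows. The linear coefficients `a` and `b` are textbook-style placeholders, not the curve fitted in this study:

```python
import math

def optical_density(i_diastole, i_systole):
    """OD of the pulsatile component from transmitted intensities during
    diastole and systole at one wavelength."""
    return math.log10(i_diastole / i_systole)

def odr(od_red, od_infrared):
    """Optical density ratio between the red and infrared channels."""
    return od_red / od_infrared

def sao2_from_odr(r, a=110.0, b=25.0):
    """Illustrative linear calibration SaO2 = a - b*R. The coefficients are
    placeholder values; a real curve is fitted to simulated or empirical
    data for a specific wavelength pair."""
    return a - b * r
```

The simulation's role is to supply the (ODR, SaO2) pairs from which `a` and `b` (or a higher-order curve) are fitted, replacing the animal experiments mentioned above.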
LIF Density Measurement Calibration Using a Reference Cell
NASA Technical Reports Server (NTRS)
Domonkos, Matthew T.; Williams, George J., Jr.; Lyons, Valerie J. (Technical Monitor)
2002-01-01
Flight qualification of ion thrusters typically requires testing on the order of 10,000 hours. Extensive knowledge of wear mechanisms and rates is necessary to establish design confidence prior to long duration tests. Consequently, real-time erosion rate measurements offer the potential both to reduce development costs and to enhance knowledge of the dependency of component wear on operating conditions. Several previous studies have used laser induced fluorescence (LIF) to measure real-time, in situ erosion rates of ion thruster accelerator grids. Those studies provided only relative measurements of the erosion rate. In the present investigation, a molybdenum tube was resistively heated such that the evaporation rate yielded densities within the tube on the order of those expected from accelerator grid erosion. A pulsed UV laser was used to pump ground-state molybdenum at 345.64 nm, and the non-resonant fluorescence at 550 nm was collected using a bandpass filter and a photomultiplier tube or intensified CCD array. The sensitivity of the fluorescence was evaluated to determine the limitations of the calibration technique. The suitability of the diagnostic calibration technique was assessed for application to ion engine erosion rate measurements.
MPN estimation of qPCR target sequence recoveries from whole cell calibrator samples
DNA extracts from enumerated target organism cells (calibrator samples) have been used for estimating Enterococcus cell equivalent densities in surface waters by a comparative cycle threshold (Ct) qPCR analysis method. To compare surface water Enterococcus density estimates from ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rong Yi, E-mail: rong@humonc.wisc.ed; Smilowitz, Jennifer; Tewatia, Dinesh
2010-10-01
Precise calibration of Hounsfield units (HU) to electron density (HU-density) is essential to dose calculation. On-board kV cone beam computed tomography (CBCT) imaging is used predominantly for patient positioning, but will potentially be used for dose calculation. The impacts of varying three imaging parameters (mAs, source-imager distance [SID], and cone angle) and phantom size on HU number accuracy and HU-density calibrations for CBCT imaging were studied. We propose a site-specific calibration method to achieve higher accuracy in CBCT image-based dose calculation. Three configurations of the Computerized Imaging Reference Systems (CIRS) water-equivalent electron density phantom were used to simulate sites including head, lungs, and lower body (abdomen/pelvis). The planning computed tomography (CT) scan was used as the baseline for comparisons. CBCT scans of these phantom configurations were performed using a Varian Trilogy™ system in a precalibrated mode with fixed tube voltage (125 kVp) but varied mAs, SID, and cone angle. An HU-density curve was generated and evaluated for each set of scan parameters. Three HU-density tables generated using different phantom configurations with the same imaging parameter settings were selected for dose calculation on CBCT images for an accuracy comparison. Changing mAs or SID had a small impact on HU numbers. For adipose tissue, the HU discrepancy from the baseline was 20 HU in a small phantom, but 5 times larger in a large phantom. However, reducing the cone angle significantly decreases the HU discrepancy. The HU-density table was affected accordingly. Dose comparisons between CT and CBCT image-based plans showed that using site-specific HU-density tables to calibrate CBCT images of different sites improves the dose accuracy to ~2%. Our phantom study showed that CBCT imaging can be a feasible option for dose computation in an adaptive radiotherapy approach if site-specific calibration is applied.
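A site-specific HU-density table amounts to a piecewise-linear lookup keyed by anatomical site. The table values and site names below are hypothetical, not the CIRS-derived calibrations from the study:

```python
import numpy as np

# Hypothetical HU-to-mass-density calibration points for two sites; a
# clinical table would be measured with a density phantom in the matched
# scan geometry (cone angle, SID, phantom size).
TABLES = {
    "head":   {"hu": [-1000, 0, 1000, 2000], "rho": [0.001, 1.00, 1.52, 2.05]},
    "pelvis": {"hu": [-1000, 0, 1000, 2000], "rho": [0.001, 1.00, 1.46, 1.95]},
}

def hu_to_density(hu, site):
    """Piecewise-linear lookup of density (g/cm^3) from the site's table."""
    t = TABLES[site]
    return np.interp(hu, t["hu"], t["rho"])
```

Keeping one table per site captures the phantom-size dependence described above: the same HU value maps to a different density depending on the scatter conditions of the site.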
Hydrometer calibration by hydrostatic weighing with automated liquid surface positioning
NASA Astrophysics Data System (ADS)
Aguilera, Jesus; Wright, John D.; Bean, Vern E.
2008-01-01
We describe an automated apparatus for calibrating hydrometers by hydrostatic weighing (Cuckow's method) in tridecane, a liquid of known, stable density with a relatively low surface tension and contact angle against glass. The apparatus uses a laser light sheet and a laser power meter to position the tridecane surface at the hydrometer scale mark to be calibrated, with an uncertainty of 0.08 mm. The calibration results have an expanded uncertainty (with a coverage factor of 2) of 100 parts in 10⁶ or less of the liquid density. We validated the apparatus by comparisons using water, toluene, tridecane and trichloroethylene, and found agreement within 40 parts in 10⁶ or less. The new calibration method is consistent with earlier, manual calibrations performed by NIST. When customers use calibrated hydrometers, they may encounter uncertainties of 370 parts in 10⁶ or larger due to surface tension, contact angle and temperature effects.
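The physics behind a hydrometer reading is the buoyancy balance m·g + γ·L = ρ·V·g for a hydrometer floating with its stem pierced by the surface. A minimal sketch of that balance, with assumed surface-tension and geometry values (not the paper's apparatus parameters):

```python
def hydrometer_density_reading(mass_g, submerged_volume_cm3,
                               surface_tension_n_per_m=0.0254,
                               stem_circumference_cm=1.0,
                               g=980.665):
    """Density (g/cm^3) a floating hydrometer indicates at a given mark,
    from the simplified equilibrium m*g + gamma*L = rho*V*g, where the
    surface-tension term pulls down on the stem of circumference L."""
    gamma_dyn_per_cm = surface_tension_n_per_m * 1000.0  # N/m -> dyn/cm
    st_force_dyn = gamma_dyn_per_cm * stem_circumference_cm
    return (mass_g + st_force_dyn / g) / submerged_volume_cm3
```

This makes the abstract's closing point concrete: the surface-tension term shifts the indicated density, so a customer using a different liquid (different γ and contact angle) sees an error the calibration must account for.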
Calibrated tree priors for relaxed phylogenetics and divergence time estimation.
Heled, Joseph; Drummond, Alexei J
2012-01-01
The use of fossil evidence to calibrate divergence time estimation has a long history. More recently, Bayesian Markov chain Monte Carlo has become the dominant method of divergence time estimation, and fossil evidence has been reinterpreted as the specification of prior distributions on the divergence times of calibration nodes. These so-called "soft calibrations" have become widely used but the statistical properties of calibrated tree priors in a Bayesian setting have not been carefully investigated. Here, we clarify that calibration densities, such as those defined in BEAST 1.5, do not represent the marginal prior distribution of the calibration node. We illustrate this with a number of analytical results on small trees. We also describe an alternative construction for a calibrated Yule prior on trees that allows direct specification of the marginal prior distribution of the calibrated divergence time, with or without the restriction of monophyly. This method requires the computation of the Yule prior conditional on the height of the divergence being calibrated. Unfortunately, a practical solution for multiple calibrations remains elusive. Our results suggest that direct estimation of the prior induced by specifying multiple calibration densities should be a prerequisite of any divergence time dating analysis.
Computerized tomography calibrator
NASA Technical Reports Server (NTRS)
Engel, Herbert P. (Inventor)
1991-01-01
A set of interchangeable pieces comprising a computerized tomography calibrator, and a method of use thereof, permits focusing of a computerized tomographic (CT) system. The interchangeable pieces include a plurality of nestable, generally planar mother rings, adapted for the receipt of planar inserts of predetermined sizes and material densities. The inserts further define openings therein for receipt of plural sub-inserts. All pieces are of known sizes and densities, permitting the assembly of different configurations of materials of known sizes and combinations of densities, for calibration (i.e., focusing) of a computerized tomographic system through variation of its operating variables. Rather than serving as a phantom, which is intended to be representative of a particular workpiece to be tested, the set of interchangeable pieces permits simple and easy standardized calibration of a CT system. The calibrator and its related method of use further include the use of air or of particular fluids for filling various openings, as part of a selected configuration of the set of pieces.
NASA Technical Reports Server (NTRS)
Craft, D. William
1992-01-01
A facility for the precise calibration of mass fuel flowmeters and turbine flowmeters, located at AMETEK Aerospace Products Inc., Wilmington, Massachusetts, is described. This facility is referred to as the Test and Calibration System (TACS). It is believed to be the most accurate test facility available for the calibration of jet engine fuel density measurement. The product of the volumetric flow rate measurement and the density measurement results in a true mass flow rate determination. A dual-turbine flowmeter was designed during this program and calibrated on the TACS to show the characteristics of this type of flowmeter. An angular momentum flowmeter was also calibrated on the TACS to demonstrate the accuracy of a true mass flowmeter having a 'state-of-the-art' design accuracy.
The use of megavoltage CT (MVCT) images for dose recomputations
NASA Astrophysics Data System (ADS)
Langen, K. M.; Meeks, S. L.; Poole, D. O.; Wagner, T. H.; Willoughby, T. R.; Kupelian, P. A.; Ruchala, K. J.; Haimerl, J.; Olivera, G. H.
2005-09-01
Megavoltage CT (MVCT) images of patients are acquired daily on a helical tomotherapy unit (TomoTherapy, Inc., Madison, WI). While these images are used primarily for patient alignment, they can also be used to recalculate the treatment plan for the patient anatomy of the day. The use of MVCT images for dose computations requires a reliable CT number to electron density calibration curve. In this work, we tested the stability of the MVCT numbers by determining the variation of this calibration with spatial arrangement of the phantom, time and MVCT acquisition parameters. The two calibration curves that represent the largest variations were applied to six clinical MVCT images for recalculations to test for dosimetric uncertainties. Among the six cases tested, the largest difference in any of the dosimetric endpoints was 3.1% but more typically the dosimetric endpoints varied by less than 2%. Using an average CT to electron density calibration and a thorax phantom, a series of end-to-end tests were run. Using a rigid phantom, recalculated dose volume histograms (DVHs) were compared with plan DVHs. Using a deformed phantom, recalculated point dose variations were compared with measurements. The MVCT field of view is limited and the image space outside this field of view can be filled in with information from the planning kVCT. This merging technique was tested for a rigid phantom. Finally, the influence of the MVCT slice thickness on the dose recalculation was investigated. The dosimetric differences observed in all phantom tests were within the range of dosimetric uncertainties observed due to variations in the calibration curve. The use of MVCT images allows the assessment of daily dose distributions with an accuracy that is similar to that of the initial kVCT dose calculation.
Assessing cetacean surveys throughout the Mediterranean Sea: a gap analysis in environmental space.
Mannocci, Laura; Roberts, Jason J; Halpin, Patrick N; Authier, Matthieu; Boisseau, Oliver; Bradai, Mohamed Nejmeddine; Cañadas, Ana; Chicote, Carla; David, Léa; Di-Méglio, Nathalie; Fortuna, Caterina M; Frantzis, Alexandros; Gazo, Manel; Genov, Tilen; Hammond, Philip S; Holcer, Draško; Kaschner, Kristin; Kerem, Dani; Lauriano, Giancarlo; Lewis, Tim; Notarbartolo di Sciara, Giuseppe; Panigada, Simone; Raga, Juan Antonio; Scheinin, Aviad; Ridoux, Vincent; Vella, Adriana; Vella, Joseph
2018-02-15
Heterogeneous data collection in the marine environment has led to large gaps in our knowledge of marine species distributions. To fill these gaps, models calibrated on existing data may be used to predict species distributions in unsampled areas, given that available data are sufficiently representative. Our objective was to evaluate the feasibility of mapping cetacean densities across the entire Mediterranean Sea using models calibrated on available survey data and various environmental covariates. We aggregated 302,481 km of line transect survey effort conducted in the Mediterranean Sea within the past 20 years by many organisations. Survey coverage was highly heterogeneous geographically and seasonally: large data gaps were present in the eastern and southern Mediterranean and in non-summer months. We mapped the extent of interpolation versus extrapolation and the proportion of data nearby in environmental space when models calibrated on existing survey data were used for prediction across the entire Mediterranean Sea. Using model predictions to map cetacean densities in the eastern and southern Mediterranean, characterised by warmer, less productive waters, and more intense eddy activity, would lead to potentially unreliable extrapolations. We stress the need for systematic surveys of cetaceans in these environmentally unique Mediterranean waters, particularly in non-summer months.
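Mapping interpolation versus extrapolation in environmental space can be reduced, in its simplest form, to a range check on each covariate. The covariate values below are invented for illustration; the paper's analysis uses more refined environmental-distance diagnostics:

```python
import numpy as np

# Hypothetical environmental covariates (e.g., SST in deg C, productivity)
# for surveyed cells (rows) and for prediction cells across the basin.
train = np.array([[18.0, 0.8],
                  [20.0, 0.6],
                  [22.0, 0.5],
                  [19.0, 0.7]])
predict = np.array([[21.0, 0.6],   # inside the sampled ranges
                    [26.0, 0.2]])  # warmer, less productive than any survey

# Univariate range check: a prediction cell is an extrapolation if any
# covariate falls outside the range covered by the survey data.
lo, hi = train.min(axis=0), train.max(axis=0)
is_extrapolation = np.any((predict < lo) | (predict > hi), axis=1)
```

Cells flagged this way correspond to the paper's warning: predictions for the warmer, less productive eastern and southern Mediterranean fall outside the environmental envelope of the existing surveys.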
Di Stefano, Danilo Alessio; Arosio, Paolo
2016-01-01
Bone density at implant placement sites is one of the key factors affecting implant primary stability, which is a determinant for implant osseointegration and rehabilitation success. Site-specific bone density assessment is, therefore, of paramount importance. Recently, an implant micromotor endowed with an instantaneous torque-measuring system has been introduced. The aim of this study was to assess the reliability of this system. Five blocks with different densities (0.16, 0.26, 0.33, 0.49, and 0.65 g/cm³) were used. A single trained operator measured the density of one of them (0.33 g/cm³) by means of five different devices (20 measurements/device). The five resulting datasets were analyzed through the analysis of variance (ANOVA) model to investigate interdevice variability. As differences were not significant (P = .41), the five devices were each assigned to a different operator, who collected 20 density measurements for each block, both under irrigation (I) and without irrigation (NI). Measurements were pooled and averaged for each block, and their correlation with the actual block-density values was investigated using linear regression analysis. The possible effect of irrigation on density measurement was additionally assessed. Different devices provided reproducible, homogeneous results. No significant interoperator variability was observed. Within the physiologic range of densities (> 0.30 g/cm³), the linear regression analysis showed a significant linear correlation between the mean torque measurements and the actual bone densities under both drilling conditions (r = 0.990 [I], r = 0.999 [NI]). Calibration lines were drawn under both conditions. Values collected under irrigation were lower than those collected without irrigation at all densities. The NI/I mean torque ratio was shown to decrease linearly with density (r = 0.998). The mean error introduced by the device-operator system was less than 10% in the range of normal jawbone density.
Measurements performed with the device were linearly correlated with the blocks' bone densities. The results validate the device as an objective intraoperative tool for bone-density assessment that may contribute to proper jawbone-density evaluation and implant-insertion planning.
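A torque-density calibration line like the one reported reduces to an ordinary least-squares fit that is then inverted at measurement time. The torque readings below are invented for illustration, not the study's data:

```python
import numpy as np

# Block densities (g/cm^3) in the physiologic range and hypothetical mean
# insertion torques (N*cm) measured without irrigation.
density = np.array([0.33, 0.49, 0.65])
torque = np.array([4.1, 7.8, 11.6])

# Least-squares calibration line: torque = slope*density + intercept.
slope, intercept = np.polyfit(density, torque, 1)

def density_from_torque(t):
    """Invert the calibration line to estimate bone density from a measured
    mean insertion torque."""
    return (t - intercept) / slope

r = np.corrcoef(density, torque)[0, 1]  # linearity check, as in the study
```

Separate lines would be fitted for the irrigated and non-irrigated conditions, since the abstract reports systematically lower torques under irrigation.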
NASA Technical Reports Server (NTRS)
Klenzing, J.; Rowland, D.
2012-01-01
A fixed-bias spherical Langmuir probe is included as part of the Vector Electric Field Instrument (VEFI) suite on the Communication Navigation Outage Forecast System (CNOFS) satellite. CNOFS gathers data in the equatorial ionosphere between 400 and 860 km, where the primary constituent ions are H+ and O+. The ion current collected by the probe surface per unit plasma density is found to be a strong function of ion composition. The calibration of the collected current to an absolute density is discussed, and the performance of the spherical probe is compared to other in situ instruments on board the CNOFS satellite. The application of the calibration is discussed with respect to future fixed-bias probes; in particular, it is demonstrated that some density fluctuations will be suppressed in the collected current if the plasma composition changes rapidly along with density. This is illustrated in the observation of plasma density enhancements on CNOFS.
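The composition dependence of current per unit density can be illustrated with a crude flux model. The probe area, ion temperature, and the quadrature combination of ram and thermal speeds below are all assumptions for illustration, not the flight calibration:

```python
import math

K_B = 1.380649e-23       # Boltzmann constant, J/K
Q_E = 1.602176634e-19    # elementary charge, C
AMU = 1.66053906660e-27  # atomic mass unit, kg

def probe_current(n_m3, frac_h, t_k=1000.0, area_m2=1e-4, v_sc=7500.0):
    """Collected ion current (A) from the crude flux model
    I = q*A*sum_i n_i*sqrt(v_sc^2 + v_th,i^2). Light H+ ions add a large
    thermal speed on top of the ram speed, so current per unit density
    rises with H+ fraction; heavy O+ is collected mostly by ram motion."""
    current = 0.0
    for frac, mass in ((frac_h, 1.0 * AMU), (1.0 - frac_h, 16.0 * AMU)):
        v_th = math.sqrt(8.0 * K_B * t_k / (math.pi * mass))  # mean speed
        current += Q_E * frac * n_m3 * area_m2 * math.sqrt(v_sc**2 + v_th**2)
    return current
```

Because the current is linear in density but the per-density factor depends on the H+/O+ mix, a density change accompanied by a compensating composition change can leave the current nearly flat, which is the fluctuation-suppression effect the abstract describes.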
A Two-length Scale Turbulence Model for Single-phase Multi-fluid Mixing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwarzkopf, J. D.; Livescu, D.; Baltzer, J. R.
2015-09-08
A two-length scale, second moment turbulence model (Reynolds-averaged Navier-Stokes, RANS) is proposed to capture a wide variety of single-phase flows, spanning from incompressible flows with single fluids and mixtures of different-density fluids (variable density flows) to flows over shock waves. The two-length scale model was developed to address an inconsistency present in single-length scale models, e.g., the inability to match both variable density homogeneous Rayleigh-Taylor turbulence and Rayleigh-Taylor induced turbulence, as well as the inability to match both homogeneous shear and free shear flows. The two-length scale model focuses on separating the decay and transport length scales, as the two physical processes are generally different in inhomogeneous turbulence. This allows reasonable comparisons with statistics and spreading rates over such a wide range of turbulent flows using a common set of model coefficients. The specific canonical flows considered for calibrating the model include homogeneous shear, single-phase incompressible shear-driven turbulence, variable density homogeneous Rayleigh-Taylor turbulence, Rayleigh-Taylor induced turbulence, and shocked isotropic turbulence. The second moment model compares reasonably well with direct numerical simulations (DNS), experiments, and theory in most cases. The model was then applied to variable density shear layer and shock tube data and is in reasonable agreement with DNS and experiments. Additionally, the importance of using DNS to calibrate and assess RANS-type turbulence models is highlighted.
NASA Astrophysics Data System (ADS)
Jansen Van Rensburg, G. J.; Kok, S.; Wilke, D. N.
2017-10-01
Different roll pass reduction schedules have different effects on the through-thickness properties of hot-rolled metal slabs. In order to assess or improve a reduction schedule using the finite element method, a material model is required that captures the relevant deformation mechanisms and physics. The model should also report relevant field quantities to assess variations in material state through the thickness of a simulated rolled metal slab. In this paper, a dislocation density-based material model with recrystallization is presented and calibrated on the material response of a high-strength low-alloy steel. The model has the ability to replicate and predict material response to a fair degree thanks to the physically motivated mechanisms it is built on. An example study is also presented to illustrate the possible effect different reduction schedules could have on the through-thickness material state and the ability to assess these effects based on finite element simulations.
The U.S. EPA has published recommendations for calibrator cell equivalent (CCE) densities of enterococci in recreational waters determined by a qPCR method in its 2012 Recreational Water Quality Criteria (RWQC). The CCE quantification unit stems from the calibration model used to ...
Prostate Dose Escalation by an Innovative Inverse Planning-Driven IMRT
2008-11-01
density calibration was performed by scanning a phantom with inserts of known relative electron densities with respect to water (rwe) and calibrating the...sim. 312, 91–112 (2006). 11. M. J. Murphy, "Tracking moving organs in real time," Semin. Radiat. Oncol. 14(1), 91–100 (2004). 12. P. C. Chi et al
Ariga, Tomoko; Zhu, Yanbei; Ito, Mika; Takatsuka, Toshiko; Terauchi, Shinya; Kurokawa, Akira; Inagaki, Kazumi
2018-04-01
Area densities of Au/Ni/Cu layers on a Cr-coated quartz substrate were characterized to certify a multiple-metal-layer certified reference material (NMIJ CRM5208-a) intended for use in the analysis of layer area density and thickness by X-ray fluorescence spectrometry. The area densities of the Au/Ni/Cu layers were calculated from layer mass amounts and area. The layer mass amounts were determined by wet chemical analyses, namely inductively coupled plasma mass spectrometry (ICP-MS), isotope-dilution (ID-) ICP-MS, and inductively coupled plasma optical emission spectrometry (ICP-OES), after dissolving the layers with a diluted mixture of HCl and HNO₃ (1:1, v/v). Analytical results for the layer mass amounts obtained by these methods agreed well with one another within their uncertainty ranges. The area of the layer was determined using a high-resolution optical scanner calibrated with Japan Calibration Service System (JCSS) standard scales. The property values of area density were 1.84 ± 0.05 μg/mm² for Au, 8.69 ± 0.17 μg/mm² for Ni, and 8.80 ± 0.14 μg/mm² for Cu (mean ± expanded uncertainty, coverage factor k = 2). To assess the reliability of these values, the density of each metal layer, calculated from the property values of area density and the layer thickness measured with a scanning electron microscope, was compared with available literature values; good agreement was found between the observed values and those obtained in previous studies.
MTS-6 detectors calibration by using 239Pu-Be neutron source.
Wrzesień, Małgorzata; Albiniak, Łukasz; Al-Hameed, Hiba
2017-10-17
Thermoluminescent detectors of type MTS-6, containing the isotope 6Li (lithium), are sensitive in the thermal neutron energy range, whereas the 239Pu-Be (plutonium-beryllium) source emits neutrons in the energy range from 1 to 11 MeV. These seemingly contradictory elements may be reconciled by using a paraffin moderator, a determined density of thermal neutrons in the paraffin block, and a neutron-flux-to-kerma conversion coefficient, not forgetting the simultaneous registration of the photon radiation that inevitably accompanies the neutron radiation. The main aim of this work is to present a method for calibrating thermoluminescent detectors containing the 6Li isotope using a 239Pu-Be neutron radiation source. In this work, MTS-6 and MTS-7 thermoluminescent detectors and a plutonium-beryllium (239Pu-Be) neutron source were used; paraffin wax fills the block and acts as the moderator. The calibration is based on determining the dose equivalent rate from the average kerma rate, which is calculated using the empirically determined function describing the thermal neutron flux density in the paraffin block and the neutron-flux-to-kerma conversion coefficient. The calculated thermal neutron flux density was 1817.5 neutrons/cm2/s, the average kerma rate determined on this basis amounted to 244 μGy/h, and the dose equivalent rate to 610 μSv/h. These values allowed the required exposure time of the detectors directly in the paraffin block to be assessed. The calibration coefficient for the batch of detectors used is (6.80±0.42)×10-7 Sv/impulse. Med Pr 2017;68(6):705-710. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.
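A minimal sketch of the calibration arithmetic summarized in the abstract above. The effective kerma-to-dose-equivalent weighting is inferred here as the ratio of the two quoted rates (an assumption, not stated by the authors), and the 1 mSv target dose is purely illustrative:

```python
# Values quoted in the abstract; the weighting factor below is inferred
# from them, not stated by the authors.
kerma_rate_uGy_h = 244.0        # average kerma rate in the paraffin block
dose_eq_rate_uSv_h = 610.0      # quoted dose equivalent rate

# Effective kerma-to-dose-equivalent weighting implied by the two rates
weighting = dose_eq_rate_uSv_h / kerma_rate_uGy_h   # 2.5

# Exposure time needed to deliver a chosen calibration dose equivalent,
# e.g. 1 mSv (hypothetical target, for illustration only)
target_uSv = 1000.0
exposure_hours = target_uSv / dose_eq_rate_uSv_h

print(round(weighting, 2))       # 2.5
print(round(exposure_hours, 2))  # 1.64
```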
Estimating Density Using Precision Satellite Orbits from Multiple Satellites
NASA Astrophysics Data System (ADS)
McLaughlin, Craig A.; Lechtenberg, Travis; Fattig, Eric; Krishna, Dhaval Mysore
2012-06-01
This article examines atmospheric densities estimated using precision orbit ephemerides (POE) from several satellites, including CHAMP, GRACE, and TerraSAR-X. Atmospheric densities derived along the CHAMP and GRACE-A orbits using POEs are calibrated against those derived using accelerometers, for various levels of solar and geomagnetic activity, to examine the consistency of the calibration between the two satellites. Densities from CHAMP and GRACE are compared when GRACE is orbiting nearly directly above CHAMP. In addition, the densities derived simultaneously from CHAMP, GRACE-A, and TerraSAR-X are compared to the Jacchia 1971 and NRLMSISE-00 model densities to observe altitude effects and consistency in the offsets from the empirical models among all three satellites.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Jeong Ae; Sohn, Bong Won; Jung, Taehyun
We present the catalog of the KVN Calibrator Survey (KVNCS). This first part of the KVNCS is a single-dish radio survey simultaneously conducted at 22 (K band) and 43 GHz (Q band) using the Korean VLBI Network (KVN) from 2009 to 2011. A total of 2045 sources are selected from the VLBA Calibrator Survey with an extrapolated flux density limit of 100 mJy at the K band. The KVNCS contains 1533 sources in the K band with a flux density limit of 70 mJy and 553 sources in the Q band with a flux density limit of 120 mJy; it covers the whole sky down to −32.°5 in decl. We detected 513 sources simultaneously in the K and Q bands; ∼76% of them are flat-spectrum sources (−0.5 ≤ α ≤ 0.5). From the flux-flux relationship, we anticipate that most of the radiation of many of the sources comes from compact components. The sources listed in the KVNCS are therefore strong candidates for high-frequency VLBI calibrators.
MPN estimation of qPCR target sequence recoveries from whole cell calibrator samples.
Sivaganesan, Mano; Siefring, Shawn; Varma, Manju; Haugland, Richard A
2011-12-01
DNA extracts from enumerated target organism cells (calibrator samples) have been used for estimating Enterococcus cell equivalent densities in surface waters by a comparative cycle threshold (Ct) qPCR analysis method. To compare surface water Enterococcus density estimates from different studies by this approach, either a consistent source of calibrator cells must be used or the estimates must account for any differences in target sequence recoveries from different sources of calibrator cells. In this report we describe two methods for estimating target sequence recoveries from whole cell calibrator samples based on qPCR analyses of their serially diluted DNA extracts and most probable number (MPN) calculation. The first method employed a traditional MPN calculation approach. The second method employed a Bayesian hierarchical statistical modeling approach and a Monte Carlo Markov Chain (MCMC) simulation method to account for the uncertainty in these estimates associated with different individual samples of the cell preparations, different dilutions of the DNA extracts and different qPCR analytical runs. The two methods were applied to estimate mean target sequence recoveries per cell from two different lots of a commercially available source of enumerated Enterococcus cell preparations. The mean target sequence recovery estimates (and standard errors) per cell from Lot A and B cell preparations by the Bayesian method were 22.73 (3.4) and 11.76 (2.4), respectively, when the data were adjusted for potential false positive results. Means were similar for the traditional MPN approach which cannot comparably assess uncertainty in the estimates. Cell numbers and estimates of recoverable target sequences in calibrator samples prepared from the two cell sources were also used to estimate cell equivalent and target sequence quantities recovered from surface water samples in a comparative Ct method. 
Our results illustrate the utility of the Bayesian method in accounting for uncertainty, the high degree of precision attainable by the MPN approach and the need to account for the differences in target sequence recoveries from different calibrator sample cell sources when they are used in the comparative Ct method. Published by Elsevier B.V.
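The traditional MPN calculation referred to above is a small maximum-likelihood problem; a minimal, hedged sketch for a generic dilution series follows (not the authors' code, and the Bayesian/MCMC variant is not attempted here):

```python
import math

def mpn(volumes, n_tubes, n_positive, lo=1e-9, hi=1e9, iters=200):
    """Maximum-likelihood MPN (organisms per unit volume) from a dilution
    series: volumes[i] is the sample volume per tube at dilution i,
    n_tubes[i] the tubes inoculated, n_positive[i] the positive tubes."""
    total = sum(n * v for n, v in zip(n_tubes, volumes))

    def score(lam):
        # Derivative of the log-likelihood; its root is the MLE.
        return sum(p * v / (1.0 - math.exp(-lam * v))
                   for p, v in zip(n_positive, volumes) if p > 0) - total

    # score() decreases monotonically in lam; bisect for its root.
    for _ in range(iters):
        mid = math.sqrt(lo * hi)   # geometric bisection spans many decades
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

# Classic five-tube series at 10, 1 and 0.1 mL with 5-3-0 positive tubes
est = mpn([10.0, 1.0, 0.1], [5, 5, 5], [5, 3, 0])
print(round(est, 2))   # ≈ 0.79 per mL (tabulated MPN index ~79 per 100 mL)
```

The geometric bisection is a design convenience: it converges over the many decades of plausible densities without needing a starting guess.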
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keren, Y.; Bemporad, G.A.; Rubin, H.
This paper concerns an experimental evaluation of the basic aspects of operation of the advanced solar pond (ASP). Experiments were carried out in a laboratory test section in order to assess the feasibility of density gradient maintenance in stratified flowing layers. The density stratification was caused by a non-uniform distribution of temperatures in the flow field. Results of the experiments are reported and analyzed in the paper. The experimental data were used to calibrate a numerical model able to simulate heat and momentum transfer in the ASP. The numerical results confirmed the validity of the numerical model adopted and proved its applicability for the simulation of ASP performance.
Self-calibration of photometric redshift scatter in weak-lensing surveys
Zhang, Pengjie; Pen, Ue -Li; Bernstein, Gary
2010-06-11
Photo-z errors, especially catastrophic errors, are a major uncertainty for precision weak lensing cosmology. We find that the shear-(galaxy number) density and density-density cross correlation measurements between photo-z bins, available from the same lensing surveys, contain valuable information for self-calibration of the scattering probabilities between the true-z and photo-z bins. The self-calibration technique we propose relies neither on cosmological priors nor on a parameterization of the photo-z probability distribution function, and preserves all of the cosmological information available from shear-shear measurement. We estimate the calibration accuracy through the Fisher matrix formalism. We find that, for advanced lensing surveys such as the planned stage IV surveys, the rate of photo-z outliers can be determined with statistical uncertainties of 0.01-1% for z < 2 galaxies. Among the several sources of calibration error that we identify and investigate, the galaxy distribution bias is likely the most dominant systematic error, whereby photo-z outliers have different redshift distributions and/or bias than non-outliers from the same bin. This bias affects all photo-z calibration techniques based on correlation measurements. As a result, galaxy bias variations of O(0.1) produce biases in photo-z outlier rates similar to the statistical errors of our method, so this galaxy distribution bias may bias the reconstructed scatters at the several-σ level, but is unlikely to completely invalidate the self-calibration technique.
Test surfaces useful for calibration of surface profilometers
Yashchuk, Valeriy V; McKinney, Wayne R; Takacs, Peter Z
2013-12-31
The present invention provides test surfaces and methods for the calibration of surface profilometers, including interferometric and atomic force microscopes. Calibration is performed using a specially designed test surface, the Binary Pseudo-random (BPR) grating (array). Utilizing the BPR grating (array) to measure the power spectral density (PSD) spectrum, the profilometer is calibrated by determining the instrumental modulation transfer function.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Jiahua; Penfold, Scott N., E-mail: scott.penfold@adelaide.edu.au
Purpose: The accuracy of proton dose calculation is dependent on the ability to correctly characterize patient tissues with medical imaging. The most common method is to correlate computed tomography (CT) numbers obtained via single-energy CT (SECT) with proton stopping power ratio (SPR). CT numbers, however, cannot discriminate between a change in mass density and a change in chemical composition of patient tissues. This limitation can have consequences for SPR calibration accuracy. Dual-energy CT (DECT) is receiving increasing interest as an alternative imaging modality for proton therapy treatment planning due to its ability to discriminate between changes in patient density and chemical composition. In the current work we use a phantom of known composition to demonstrate the dosimetric advantages of proton therapy treatment planning with DECT over SECT. Methods: A phantom of known composition was scanned with a clinical SECT radiotherapy CT-simulator. The phantom was rescanned at a lower X-ray tube potential to generate a complementary DECT image set. A set of reference materials similar in composition to the phantom was used to perform a stoichiometric calibration of SECT CT number to proton SPRs. The same set of reference materials was used to perform a DECT stoichiometric calibration based on effective atomic number. The known composition of the phantom was used to assess the accuracy of SPR calibration with SECT and DECT. Intensity modulated proton therapy (IMPT) treatment plans were generated with the SECT and DECT image sets to assess the dosimetric effect of the imaging modality. Isodose difference maps and root mean square (RMS) error calculations were used to assess dose calculation accuracy. Results: SPR calculation accuracy was found to be superior, on average, with DECT relative to SECT. Maximum errors of 12.8% and 2.2% were found for SECT and DECT, respectively.
Qualitative examination of dose difference maps clearly showed the dosimetric advantages of DECT imaging over SECT imaging for IMPT dose calculation in the case investigated. Quantitatively, the maximum dose calculation error in the SECT plan was 7.8%, compared to 1.4% in the DECT plan. When considering the high-dose target region, the root mean square (RMS) error in dose calculation was 2.1% and 0.4% for SECT and DECT, respectively. Conclusions: DECT-based proton treatment planning in a commercial treatment planning system was successfully demonstrated for the first time. DECT is an attractive imaging modality for proton therapy treatment planning owing to its ability to characterize the density and chemical composition of patient tissues. SECT and DECT scans of a phantom of known composition have been used to demonstrate the dosimetric advantages obtainable in proton therapy treatment planning with DECT over the current approach based on SECT.
Establishing a method to measure bone structure using spectral CT
NASA Astrophysics Data System (ADS)
Ramyar, M.; Leary, C.; Raja, A.; Butler, A. P. H.; Woodfield, T. B. F.; Anderson, N. G.
2017-03-01
Combining bone structure and density measurement in 3D is required to assess site-specific fracture risk. Spectral molecular imaging can measure bone structure in relation to bone density by measuring the macro- and microstructure of bone in 3D. This study aimed to optimize spectral CT methodology to measure bone structure in excised bone samples. A MARS CT with a CdTe Medipix3RX detector was used with multiple energy bins to calibrate bone structure measurements. To calibrate the thickness measurement, eight different thicknesses of aluminium (Al) sheet were scanned, once in air and once around a falcon tube, and then analysed. To test whether trabecular thickness measurements differ depending on scan plane, a bone sample from sheep proximal tibia was scanned in two orthogonal directions. To assess the effect of air on thickness measurement, two parts of the same human femoral head were scanned in two conditions (in air and in PBS). The results showed that the MARS scanner (with 90 μm voxel size) is able to accurately measure Al thicknesses (in air) over 200 μm, but it underestimates thicknesses below 200 μm because of the partial volume effect at the Al-air interface. The Al thickness measured in the highest energy bin is overestimated at the Al-falcon tube interface. Bone scanning in two orthogonal directions gives the same trabecular thickness, and air in the bone structure reduced measurement accuracy. We have established a bone structure assessment protocol on the MARS scanner. The next step is to combine this with bone densitometry to assess bone strength.
Waveguide Calibrator for Multi-Element Probe Calibration
NASA Technical Reports Server (NTRS)
Sommerfeldt, Scott D.; Blotter, Jonathan D.
2007-01-01
A calibrator, referred to as the spider design, can be used to calibrate probes incorporating multiple acoustic sensing elements. The application is an acoustic energy density probe, although the calibrator can be used for other types of acoustic probes. The calibrator relies on acoustic waveguide technology to produce the same acoustic field at each of the sensing elements. As a result, the sensing elements can be separated from each other yet still calibrated through the acoustic waveguides. Standard calibration techniques involve placing an individual microphone into a small cavity with a known, uniform pressure. If a cavity is manufactured large enough to insert the energy density probe, it has been found that a uniform pressure field can only be created at very low frequencies: because of wave effects, the size of the probe prevents the same pressure from reaching each microphone in the cavity. The spider design is effective in calibrating multiple microphones separated from each other; it ensures that the same wave effects exist for each microphone, each with an individual sound path. The calibrator's speaker is mounted at one end of a small plane-wave tube, 14 cm long and 4.1 cm in diameter. This length was chosen so that the first evanescent cross mode of the plane-wave tube would be attenuated by about 90 dB, leaving just the plane wave at the termination plane of the tube. The tube terminates in a small acrylic plate with five holes placed symmetrically about the axis of the speaker. Four ports are included for the four microphones on the probe; the fifth port is for the pre-calibrated reference microphone. The ports in the acrylic plate are in turn connected to the probe sensing elements via flexible PVC tubes. These five tubes are the same length, so the acoustic wave effects are the same in each tube.
The flexible nature of the tubes allows them to be positioned so that each tube terminates at one of the microphones of the energy density probe, which is mounted in the acrylic structure, or the calibrated reference microphone. Tests performed verify that the pressure did not vary due to bends in the tubes. The results of these tests indicate that the average sound pressure level in the tubes varied by only 0.03 dB as the tubes were bent to various angles. The current calibrator design is effective up to a frequency of approximately 4.5 kHz. This upper design frequency is largely due to the diameter of the plane-wave tubes.
Variability of dental cone beam CT grey values for density estimations
Pauwels, R; Nackaerts, O; Bellaiche, N; Stamatakis, H; Tsiklakis, K; Walker, A; Bosmans, H; Bogaerts, R; Jacobs, R; Horner, K
2013-01-01
Objective: The aim of this study was to investigate the use of dental cone beam CT (CBCT) grey values for density estimations by calculating the correlation with multislice CT (MSCT) values and the grey value error after recalibration. Methods: A polymethyl methacrylate (PMMA) phantom was developed containing inserts of different density: air, PMMA, hydroxyapatite (HA) 50 mg cm−3, HA 100, HA 200 and aluminium. The phantom was scanned on 13 CBCT devices and 1 MSCT device. Correlation between CBCT grey values and CT numbers was calculated, and the average error of the CBCT values was estimated in the medium-density range after recalibration. Results: Pearson correlation coefficients ranged between 0.7014 and 0.9996 in the full-density range and between 0.5620 and 0.9991 in the medium-density range. The average error of CBCT voxel values in the medium-density range was between 35 and 1562. Conclusion: Even though most CBCT devices showed a good overall correlation with CT numbers, large errors can be seen when using the grey values in a quantitative way. Although it could be possible to obtain pseudo-Hounsfield units from certain CBCTs, alternative methods of assessing bone tissue should be further investigated. Advances in knowledge: The suitability of dental CBCT for density estimations was assessed, involving a large number of devices and protocols. The possibility of grey value calibration was thoroughly investigated. PMID:23255537
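A hedged sketch of the recalibration idea assessed above: fit a linear map from CBCT grey values to reference CT numbers over the phantom inserts, then report the Pearson correlation and the residual error. The insert values below are illustrative placeholders, not the study's data:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def linear_fit(x, y):
    """Least-squares slope and intercept for y ≈ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# Hypothetical grey values for air/PMMA/HA inserts and their MSCT numbers
grey = [-950.0, 80.0, 150.0, 230.0, 390.0]
hu_ref = [-1000.0, 120.0, 200.0, 300.0, 480.0]

a, b = linear_fit(grey, hu_ref)
recal = [a * g + b for g in grey]
max_err = max(abs(r - h) for r, h in zip(recal, hu_ref))
print(round(pearson_r(grey, hu_ref), 4), round(max_err, 1))
```

Even with a near-perfect correlation, the residual error after a linear recalibration can remain large in HU terms, which is the study's central caution.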
NASA Astrophysics Data System (ADS)
Feng, Yiwei; Tiedje, Henry F.; Gagnon, Katherine; Fedosejevs, Robert
2018-04-01
Radiochromic film is used extensively in many medical, industrial, and scientific applications. In particular, the film is used in analysis of proton generation and in high intensity laser-plasma experiments where very high dose levels can be obtained. The present study reports calibration of the dose response of Gafchromic EBT3 and HD-V2 radiochromic films up to high exposure densities. A 2D scanning confocal densitometer system is employed to carry out accurate optical density measurements up to optical density 5 on the exposed films at the peak spectral absorption wavelengths. Various wavelengths from 400 to 740 nm are also scanned to extend the practical dose range of such films by measuring the response at wavelengths removed from the peak response wavelengths. Calibration curves for the optical density versus exposure dose are determined and can be used for quantitative evaluation of measured doses based on the measured optical densities. It was found that blue and UV wavelengths allowed the largest dynamic range though at some trade-off with overall accuracy.
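As a hedged illustration of how such calibration curves are applied in practice, a measured optical density can be converted to dose by interpolating in a monotone (net optical density, dose) lookup table; the table entries below are made-up placeholders, not the measured EBT3/HD-V2 data:

```python
def dose_from_od(net_od, table):
    """Piecewise-linear interpolation; table is sorted by net optical density."""
    if not table[0][0] <= net_od <= table[-1][0]:
        raise ValueError("net OD outside calibrated range")
    for (od0, d0), (od1, d1) in zip(table, table[1:]):
        if net_od <= od1:
            t = (net_od - od0) / (od1 - od0)
            return d0 + t * (d1 - d0)

# (net optical density, dose in Gy) -- illustrative calibration points only
CAL = [(0.0, 0.0), (0.5, 2.0), (1.0, 5.0), (2.0, 15.0), (3.0, 40.0)]

print(dose_from_od(1.5, CAL))   # 10.0
```

Raising an error outside the calibrated range reflects the abstract's point that accuracy degrades where the film response saturates.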
Thomson scattering density calibration by Rayleigh and rotational Raman scattering on NSTX.
LeBlanc, B P
2008-10-01
The multipoint Thomson scattering diagnostic measures the profiles of the electron temperature Te(R) and density ne(R) on the horizontal midplane of NSTX. Normal operation makes use of Rayleigh scattering in nitrogen or argon to derive the density profile. While the Rayleigh scattering ne(R) calibration has been validated by comparison to other density measurements and through its correlation with plasma phenomena, it does require dedicated detectors at the laser wavelength in this filter polychromator based diagnostic. The presence of dust and/or stray laser light precludes routine use of these dedicated spectral channels for Thomson scattering measurement. Hence it is of interest to investigate the use of Raman scattering in nitrogen for the purpose of density calibration since it could free up detection equipment, which could then be used for the instrumentation of additional radial channels. In this paper the viewing optics "geometrical factor" profiles obtained from Rayleigh and Raman scattering are compared. While both techniques agree nominally, residual effects on the order of 10% remain and will be discussed.
Poster — Thur Eve — 15: Improvements in the stability of the tomotherapy imaging beam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belec, J
2014-08-15
Use of helical TomoTherapy-based MVCT imaging for adaptive planning requires the image values (HU) to remain stable over the course of treatment. In the past, the image value stability was suboptimal, which required frequent changes to the image-value-to-density calibration curve to avoid dose errors on the order of 2-4%. The stability of the image values at our center was recently improved by stabilizing the dose rate of the machine (dose control servo) and performing daily MVCT calibration corrections. In this work, we quantify the stability of the image values over treatment time by comparing patient treatment image density derived using MVCT and KVCT. The analysis includes 1) an MVCT − KVCT density difference histogram, 2) an MVCT vs. KVCT density spectrum, 3) a multiple average profile density comparison and 4) density differences in homogeneous locations. Over two months, the imaging beam stability was compromised several times due to a combination of target wobbling, spectral calibration, target change and magnetron issues. The stability of the image values was analyzed over the same period. Results show that the impact on the patient dose calculation is 0.7% ± 0.6%.
Radiometric analysis of photographic data by the effective exposure method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Constantine, B J
1972-04-01
The effective exposure method provides for radiometric analysis of photographic data. A three-dimensional model, where density is a function of energy and wavelength, is postulated to represent the film response function. Calibration exposures serve to eliminate the other factors which affect image density. The effective exposure causing an image can be determined by comparing the image density with that of a calibration exposure. If the relative spectral distribution of the source is known, irradiance and/or radiance can be unfolded from the effective exposure expression.
Morrell, Rachel E; Rogers, Andy
2004-12-21
Kodak EDR2 film has been calibrated across the range of exposure conditions encountered in our cardiac catheterization laboratory. Its dose-response function has been successfully modelled, up to the saturation point of 1 Gy. The most important factor affecting film sensitivity is the use of beam filtration. Spectral filtration and kVp together account for a variation in dose per optical density of -10% to +25%, at 160 mGy. The use of a dynamic wedge filter may cause doses to be underestimated by up to 6%. The film is relatively insensitive to variations in batch, field size, exposure rate, time to processing and day-to-day fluctuations in processor performance. Overall uncertainty in the calibration is estimated to be -20% to +40%, at 160 mGy. However, the uncertainty increases at higher doses, as the curve saturates. Artefacts were seen on a number of films, due to faults in the light-proofing of the film packets.
Calibration of phase contrast imaging on HL-2A Tokamak
NASA Astrophysics Data System (ADS)
Yu, Y.; Gong, S. B.; Xu, M.; Xiao, C. J.; Jiang, W.; Zhong, W. L.; Shi, Z. B.; Wang, H. J.; Wu, Y. F.; Yuan, B. D.; Lan, T.; Ye, M. Y.; Duan, X. R.; HL-2A Team
2017-10-01
Phase contrast imaging (PCI) has recently been developed on the HL-2A tokamak. In this article we present the calibration of this diagnostic. The system diagnoses chord-integrated density fluctuations by measuring the phase shift of a CO2 laser beam with a wavelength of 10.6 μm as the beam passes through the plasma. Sound waves are used to calibrate the PCI diagnostic. The signal series in different PCI channels show a pronounced modulation of the incident laser beam by the sound wave. A frequency-wavenumber spectrum is obtained. Calibrations with sound waves of different frequencies exhibit a maximal wavenumber response of 12 cm-1. The conversion factor between the chord-integrated plasma density fluctuation and the signal intensity is 2.3 × 1013 m-2/mV, indicating a high sensitivity.
Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) onboard calibration system
NASA Technical Reports Server (NTRS)
Chrien, Thomas G.; Eastwood, Mike; Green, Robert O.; Sarture, Charles; Johnson, Howell; Chovit, Chris; Hajek, Pavel
1995-01-01
The AVIRIS instrument uses an onboard calibration system to provide auxiliary calibration data. The system consists of a tungsten halogen-cycle lamp imaged onto a fiber bundle through an eight-position filter wheel. The fiber bundle illuminates the back side of the foreoptics shutter during pre-run and post-run calibration sequences. The filter wheel contains two neutral density filters, five spectral filters and one blocked position. This paper reviews the general workings of the onboard calibrator system and discusses recent modifications.
2007-09-01
[Figure-list excerpt] Calibration curves for CT number (Hounsfield units) vs. mineral density (g/cc) and vs. apparent density (g/cc). The CT number is named in Hounsfield units (HU) after Sir Godfrey Hounsfield and is given by K(μ − μw)/μw, where K is a magnifying constant that depends on the make of the CT scanner.
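The CT-number definition appearing in the excerpt above, K(μ − μw)/μw, reduces to the standard Hounsfield scale when K = 1000; a one-line sketch (the attenuation coefficient is an approximate illustrative value):

```python
def hounsfield(mu, mu_water, k=1000.0):
    """CT number of a material with linear attenuation coefficient mu."""
    return k * (mu - mu_water) / mu_water

MU_WATER = 0.1928   # cm^-1, approximate textbook value near 70 keV

print(round(hounsfield(MU_WATER, MU_WATER)))   # 0 (water)
print(round(hounsfield(0.0, MU_WATER)))        # -1000 (air, mu ≈ 0)
```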
The evolution of methods for establishing evolutionary timescales
Donoghue, Philip C J; Yang, Ziheng
2016-07-19
The fossil record is well known to be incomplete. Read literally, it provides a distorted view of the history of species divergence and extinction, because different species have different propensities to fossilize, and the amount of rock fluctuates over geological timescales, as does the nature of the environments that it preserves. Even so, patterns in the fossil evidence allow us to assess the incompleteness of the fossil record. While the molecular clock can be used to extend time estimates from fossil species to lineages not represented in the fossil record, fossils are the only source of information concerning absolute (geological) times in molecular dating analysis. We review different ways of incorporating fossil evidence in modern clock dating analyses, including node-calibrations, where lineage divergence times are constrained using probability densities, and tip-calibrations, where fossil species at the tips of the tree are assigned dates from dated rock strata. While node-calibrations are often constructed by a crude assessment of the fossil evidence and thus involve arbitrariness, tip-calibrations may be too sensitive to the prior on divergence times or the branching process, and may be unduly affected by well-known problems of morphological character evolution, such as environmental influence on morphological phenotypes, correlation among traits, and convergent evolution in disparate species. We discuss the utility of time information from fossils in phylogeny estimation and the search for ancestors in the fossil record. This article is part of the themed issue 'Dating species divergences using rocks and clocks'. © 2016 The Authors. PMID:27325838
Detonation Shock Dynamics (DSD) Calibration for LX-17
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aslam, Tariq D
2012-04-24
The goal of this report is to summarize the results of a detonation shock dynamics (DSD) calibration for the explosive LX-17. Considering that LX-17 is very similar to PBX 9502 (LX-17 is 92.5% TATB with 7.5% Kel-F 800 binder, while PBX 9502 is 95% TATB with 5% Kel-F 800 binder), we proceed with the analysis assuming many of the DSD constants are the same. We change only the parameters D_CJ, B and C̄_6 (C̄_6 controls how D_CJ changes with pressing density). The parameters D_CJ and C̄_6 were given by Josh Coe and Sam Shaw's EOS, so only B was optimized in fitting the calibration data. This report first discusses some general DSD background, followed by a presentation of the dataset available for the calibration, and finally gives the results of the calibration and draws some conclusions. A DSD calibration of LX-17 has been conducted using the existing diameter-effect data and shock-shape records. The new DSD fit is based on the current PBX 9502 calibration and takes into account the effect of pressing density. Utilizing the PBX 9502 calibration, the effects of initial temperature can also be taken into account.
Calibration of the Oscillating Screen Viscometer
NASA Technical Reports Server (NTRS)
Berg, Robert F.; Moldover, Michael R.
1993-01-01
We have devised a calibration procedure for the oscillating screen viscometer which can provide the accuracy needed for the flight measurement of viscosity near the liquid-vapor critical point of xenon. The procedure, which makes use of the viscometer's wide bandwidth and hydrodynamic similarity, allows the viscometer to be self-calibrating. To demonstrate the validity of this procedure we measured the oscillator's transfer function under a wide variety of conditions. We obtained data using CO2 at temperatures spanning a temperature range of 35 K and densities varying by a factor of 165, thereby encountering viscosity variations as great as 50%. In contrast the flight experiment will be performed over a temperature range of 29 K and at only a single density, and the viscosity is expected to change by less than 40%. The measurements show that, after excluding data above 10 Hz (where frequency-dependent corrections are poorly modeled) and making a plausible adjustment to the viscosity value used at high density, the viscometer's behavior is fully consistent with the use of hydrodynamic similarity for calibration. Achieving this agreement required understanding a 1% anelastic effect present in the oscillator's torsion fiber.
Nuclear moisture-density evaluation.
DOT National Transportation Integrated Search
1964-11-01
This report constitutes the results of a series of calibration curves prepared by comparing the Troxler Nuclear Density - Moisture Gauge count ratios with conventional densities as obtained by the Soiltest Volumeter and the sand displacement methods....
van Schaick, Willem; van Dooren, Bart T H; Mulder, Paul G H; Völker-Dieben, Hennie J M
2005-07-01
To report on the calibration of the Topcon SP-2000P specular microscope and the Endothelial Cell Analysis Module of the IMAGEnet 2000 software, and to establish the validity of the different endothelial cell density (ECD) assessment methods available in these instruments. Using an external microgrid, we calibrated the magnification of the SP-2000P and the IMAGEnet software. In both eyes of 36 volunteers, we validated 4 ECD assessment methods by comparing them to the gold standard, manual ECD (manual counting of cells on a video print). These methods were: the estimated ECD, estimation of ECD with a reference grid on the camera screen; the SP-2000P ECD, pointing out whole contiguous cells on the camera screen; the uncorrected IMAGEnet ECD, using automatically drawn cell borders; and the corrected IMAGEnet ECD, with manual correction of incorrectly drawn cell borders in the automated analysis. Validity of each method was evaluated by calculating both the mean difference with the manual ECD and the limits of agreement as described by Bland and Altman. Preset factory values of magnification were incorrect, resulting in errors in ECD of up to 9%. All assessments except one of the estimated ECDs differed significantly from manual ECDs, with most differences being similar (≤6.5%), except for the uncorrected IMAGEnet ECD (30.2%). The corrected IMAGEnet ECD showed the narrowest limits of agreement (-4.9 to +19.3%). We advise checking the calibration of magnification in any specular microscope or endothelial analysis software, as it may be erroneous. The corrected IMAGEnet ECD is the most valid of the investigated methods in the Topcon SP-2000P/IMAGEnet 2000 combination.
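The Bland-Altman agreement analysis used for validation above is simple to reproduce; a minimal sketch with invented ECD readings (not the study's data):

```python
import statistics

def bland_altman(method, reference):
    """Mean difference (bias) and 95% limits of agreement (Bland & Altman)."""
    diffs = [m - r for m, r in zip(method, reference)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical endothelial cell densities (cells/mm^2) vs. manual counts
auto_ecd = [2500, 2600, 2450, 2700, 2550]
manual_ecd = [2480, 2550, 2460, 2650, 2500]
bias, (lo, hi) = bland_altman(auto_ecd, manual_ecd)   # bias = 32 cells/mm^2
```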
A new form of the calibration curve in radiochromic dosimetry. Properties and results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tamponi, Matteo, E-mail: mtamponi@aslsassari.it; Bona, Rossana; Poggiu, Angela; Marini, Piergiorgio
Purpose: This work describes a new form of the calibration curve for radiochromic dosimetry that depends on one fit parameter. Some results are reported to show that the new curve performs as well as those previously used and, more importantly, significantly reduces the dependence on the lot of films, the film orientation on the scanner, and the time after exposure. Methods: The form of the response curve makes use of the net optical densities ratio against the dose and has been studied by means of the Beer-Lambert law and a simple modeling of the film. The new calibration curve has been applied to EBT3 films exposed at 6 and 15 MV energy beams of linear accelerators and read out in transmission mode by means of a flatbed color scanner. Its performance has been compared to that of two established forms of the calibration curve, which use the optical density and the net optical density against the dose. Four series of measurements with four lots of EBT3 films were used to evaluate the precision, accuracy, and dependence on the time after exposure, orientation on the scanner, and lot of films. Results: The new calibration curve is roughly subject to the same dose uncertainty, about 2% (1 standard deviation), and has the same accuracy, about 1.5% (dose values between 50 and 450 cGy), as the other calibration curves when films of the same lot are used. Moreover, the new calibration curve, albeit obtained from only one lot of film, shows a good agreement with experimental data from all other lots of EBT3 films used, with an accuracy of about 2% and a relative dose precision of 2.4% (1 standard deviation). The agreement also holds for changes of the film orientation and of the time after exposure. Conclusions: The dose accuracy of this new form of the calibration curve is always equal to or better than those obtained from the two types of curves previously used.
The use of the net optical densities ratio considerably reduces the dependence on the lot of films, the landscape/portrait orientation, and the time after exposure. This form of the calibration curve could become even more useful with new optical digital devices using monochromatic light.
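A net-optical-density calibration of this kind can be sketched numerically. The one-parameter functional form below, D(x) = a*x/(1 - x), and all numbers are illustrative assumptions; the abstract does not give the paper's actual curve:

```python
import math

# Sketch only: net optical density from scanner pixel values, plus a
# closed-form least-squares fit of a single-parameter calibration curve.

def net_od(pv_exposed, pv_unexposed):
    """Net optical density from pixel values (Beer-Lambert form)."""
    return math.log10(pv_unexposed / pv_exposed)

def fit_a(x_vals, doses):
    """Closed-form least squares for the single parameter a in
    D(x) = a * x / (1 - x) (an assumed, illustrative form)."""
    g = [x / (1.0 - x) for x in x_vals]
    return sum(gi * d for gi, d in zip(g, doses)) / sum(gi * gi for gi in g)

# Synthetic calibration points generated with a = 400 cGy
x = [0.1, 0.2, 0.3, 0.4]
dose = [400.0 * xi / (1.0 - xi) for xi in x]
a_fit = fit_a(x, dose)   # recovers a = 400 up to rounding
```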
A new form of the calibration curve in radiochromic dosimetry. Properties and results.
Tamponi, Matteo; Bona, Rossana; Poggiu, Angela; Marini, Piergiorgio
2016-07-01
Determination of the line shapes of atomic nitrogen resonance lines by magnetic scans
NASA Technical Reports Server (NTRS)
Lawrence, G. M.; Stone, E. J.; Kley, D.
1976-01-01
A technique is given for calibrating an atomic nitrogen resonance lamp for use in determining column densities of atoms in specific states. A discharge lamp emitting the NI multiplets at 1200 A and 1493 A is studied by obtaining absorption by atoms in a magnetic field (0-2.5 T). This magnetic scanning technique enables the determination of the absorbing atom column density, and an empirical curve of growth is obtained because the atomic f-value is known. Thus, the calibrated lamp can be used in the determination of atomic column densities.
SURFplus Model Calibration for PBX 9502
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menikoff, Ralph
2017-12-06
The SURFplus reactive burn model is calibrated for the TATB-based explosive PBX 9502 at three initial temperatures: hot (75 C), ambient (23 C), and cold (-55 C). The CJ state depends on the initial temperature due to the variation in the initial density and initial specific energy of the PBX reactants. For the reactants, a porosity model for full-density TATB is used. This allows the initial PBX density to be set to its measured value even though the coefficients of thermal expansion for the TATB and the PBX differ. The PBX products EOS is taken as independent of the initial PBX state. The initial temperature also affects the sensitivity to shock initiation. The model rate parameters are calibrated to Pop plot data, the failure diameter, the limiting detonation speed just above the failure diameter, and curvature effect data for small curvature.
Planning Training Workload in Football Using Small-Sided Games' Density.
Sangnier, Sebastien; Cotte, Thierry; Brachet, Olivier; Coquart, Jeremy; Tourny, Claire
2018-05-08
Sangnier, S, Cotte, T, Brachet, O, Coquart, J, and Tourny, C. Planning training workload in football using small-sided games' density. J Strength Cond Res XX(X): 000-000, 2018. To develop physical qualities, small-sided games' (SSGs) density may be essential in soccer. Small-sided games are games in which the pitch size, number of players, and rules differ from those of traditional soccer matches. The purpose was to assess the relation between training workload and SSGs' density. The 33 density values (from 41 practice games and 3 full games) were analyzed through global positioning system (GPS) data collected from 25 professional soccer players (80.7 ± 7.0 kg; 1.83 ± 0.05 m; 26.4 ± 4.9 years). The GPS data (total distance, metabolic power, sprint distance, and acceleration distance) were divided into 4 categories: endurance, power, speed, and strength. Statistical analysis compared the relation between GPS values and SSGs' densities, and 3 methods were applied to assess the models (R-squared, root-mean-square error, and Akaike information criterion). The results suggest that all the GPS data match the player's essential athletic skills. They were all correlated with the game's density. Acceleration distance, deceleration distance, metabolic power, and total distance followed a logarithmic regression model, whereas sprint distance and number of sprints followed a linear regression model. The research reveals options for monitoring the training workload. Coaches could anticipate the load resulting from SSGs and adjust the field size to the number of players. Taking the field size into account during SSGs enables coaches to target the most favorable density for developing the expected physical qualities. Calibrating intensity during SSGs would allow coaches to assess each athletic skill under the same intensity conditions as in competition.
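The model comparison described above (linear versus logarithmic regression, judged here by R-squared) can be sketched as follows; the data are invented and log-shaped by construction, so the logarithmic model should win:

```python
import numpy as np

def r2(y, y_hat):
    """Coefficient of determination."""
    ss_res = float(np.sum((y - y_hat) ** 2))
    ss_tot = float(np.sum((y - np.mean(y)) ** 2))
    return 1.0 - ss_res / ss_tot

# Invented example: total distance per player vs. SSG density (m^2 per player)
density = np.array([50.0, 75.0, 100.0, 150.0, 200.0, 300.0])
distance = 400.0 * np.log(density) - 1000.0   # synthetic logarithmic trend

lin_coef = np.polyfit(density, distance, 1)            # linear model
log_coef = np.polyfit(np.log(density), distance, 1)    # logarithmic model
r2_lin = r2(distance, np.polyval(lin_coef, density))
r2_log = r2(distance, np.polyval(log_coef, np.log(density)))
# r2_log exceeds r2_lin for this log-shaped data
```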
Precision calibration of the silicon doping level in gallium arsenide epitaxial layers
NASA Astrophysics Data System (ADS)
Mokhov, D. V.; Berezovskaya, T. N.; Kuzmenkov, A. G.; Maleev, N. A.; Timoshnev, S. N.; Ustinov, V. M.
2017-10-01
An approach to precision calibration of the silicon doping level in gallium arsenide epitaxial layers is discussed that is based on studying the dependence of the carrier density in the test GaAs layer on the silicon-source temperature using the Hall-effect and CV profiling techniques. The parameters are measured by standard or certified measuring techniques and approved measuring instruments. It is demonstrated that the use of CV profiling for controlling the carrier density in the test GaAs layer at the thorough optimization of the measuring procedure ensures the highest accuracy and reliability of doping level calibration in the epitaxial layers with a relative error of no larger than 2.5%.
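The Hall-effect step can be illustrated with the textbook single-carrier relation n = I·B/(q·V_H·t); this is a sketch with invented numbers, not the authors' certified measuring procedure:

```python
# Textbook single-carrier Hall relation for a layer of thickness t:
#   n = I * B / (q * V_H * t)
Q_E = 1.602176634e-19  # elementary charge, C

def hall_carrier_density(current_a, b_tesla, v_hall_v, thickness_m):
    """Volume carrier density (m^-3) from a Hall-bar measurement."""
    return current_a * b_tesla / (Q_E * v_hall_v * thickness_m)

# Invented example: 1 mA drive, 0.5 T field, 2 mV Hall voltage, 1 um layer
n = hall_carrier_density(1e-3, 0.5, 2e-3, 1e-6)   # ~1.56e24 m^-3 (~1.6e18 cm^-3)
```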
Soil specific re-calibration of water content sensors for a field-scale sensor network
NASA Astrophysics Data System (ADS)
Gasch, Caley K.; Brown, David J.; Anderson, Todd; Brooks, Erin S.; Yourek, Matt A.
2015-04-01
Obtaining accurate soil moisture data from a sensor network requires sensor calibration. Soil moisture sensors are factory calibrated, but multiple site specific factors may contribute to sensor inaccuracies. Thus, sensors should be calibrated for the specific soil type and conditions in which they will be installed. Lab calibration of a large number of sensors prior to installation in a heterogeneous setting may not be feasible, and it may not reflect the actual performance of the installed sensor. We investigated a multi-step approach to retroactively re-calibrate sensor water content data from the dielectric permittivity readings obtained by sensors in the field. We used water content data collected since 2009 from a sensor network installed at 42 locations and 5 depths (210 sensors total) within the 37-ha Cook Agronomy Farm with highly variable soils located in the Palouse region of the Northwest United States. First, volumetric water content was calculated from sensor dielectric readings using three equations: (1) a factory calibration using the Topp equation; (2) a custom calibration obtained empirically from an instrumented soil in the field; and (3) a hybrid equation that combines the Topp and custom equations. Second, we used soil physical properties (particle size and bulk density) and pedotransfer functions to estimate water content at saturation, field capacity, and wilting point for each installation location and depth. We also extracted the same reference points from the sensor readings, when available. Using these reference points, we re-scaled the sensor readings, such that water content was restricted to the range of values that we would expect given the physical properties of the soil. The re-calibration accuracy was assessed with volumetric water content measurements obtained from field-sampled cores taken on multiple dates. 
In general, the re-calibration was most accurate when all three reference points (saturation, field capacity, and wilting point) were represented in the sensor readings. We anticipate that obtaining water retention curves for field soils will improve the re-calibration accuracy by providing more precise estimates of saturation, field capacity, and wilting point. This approach may serve as an alternative method for sensor calibration in lieu of or to complement pre-installation calibration.
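The re-scaling step described above amounts to a linear map between the sensor's own reference readings and the physically expected water-content range; a minimal sketch with invented numbers, not the study's implementation:

```python
def rescale_reading(theta, sensor_lo, sensor_hi, ref_lo, ref_hi):
    """Linearly re-scale a sensor water-content reading so the sensor's own
    low/high reference readings (e.g. wilting point and saturation) map onto
    the values expected from soil physical properties."""
    frac = (theta - sensor_lo) / (sensor_hi - sensor_lo)
    return ref_lo + frac * (ref_hi - ref_lo)

# Hypothetical numbers: sensor reads 0.10-0.60 between wilting point and
# saturation, while pedotransfer functions predict 0.12-0.45.
theta_cal = rescale_reading(0.50, 0.10, 0.60, 0.12, 0.45)
print(round(theta_cal, 3))   # 0.384
```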
A Compound Sensor for Simultaneous Measurement of Packing Density and Moisture Content of Silage.
Meng, Delun; Meng, Fanjia; Sun, Wei; Deng, Shuang
2017-12-28
Packing density and moisture content are important factors in investigating the ensiling quality. Low packing density is a major cause of loss of sugar content. The moisture content also plays a determinant role in biomass degradation. To comprehensively evaluate the ensiling quality, this study focused on developing a compound sensor. In it, moisture electrodes and strain gauges were embedded into an ASABE Standard small cone for the simultaneous measurements of the penetration resistance (PR) and moisture content (MC) of silage. In order to evaluate the performance of the designed sensor and the theoretical analysis being used, relevant calibration and validation tests were conducted. The determination coefficients are 0.996 and 0.992 for PR calibration and 0.934 for MC calibration. The validation indicated that this measurement technique could determine the packing density and moisture content of the silage simultaneously and eliminate the influence of the friction between the penetration shaft and silage. In this study, we not only design a compound sensor but also provide an alternative way to investigate the ensiling quality which would be useful for further silage research.
Nuclear Gauge Calibration and Testing Guidelines for Hawaii
DOT National Transportation Integrated Search
2006-12-15
Project proposal brief: AASHTO and ASTM nuclear gauge testing procedures can lead to misleading density and moisture readings for certain Hawaiian soils. Calibration curves need to be established for these unique materials, along with clear standard ...
Absolute calibration of Phase Contrast Imaging on HL-2A tokamak
NASA Astrophysics Data System (ADS)
Yu, Yi; Gong, Shaobo; Xu, Min; Wu, Yifan; Yuan, Boda; Ye, Minyou; Duan, Xuru; HL-2A Team Team
2017-10-01
Phase contrast imaging (PCI) has recently been developed on the HL-2A tokamak. In this article we present the calibration of this diagnostic. The system diagnoses chord-integrated density fluctuations by measuring the phase shift of a CO2 laser beam (wavelength 10.6 μm) as it passes through the plasma. Sound waves are used to calibrate the PCI diagnostic. The signal series in different PCI channels show a pronounced modulation of the incident laser beam by the sound wave, and a frequency-wavenumber spectrum is obtained. Calibrations with sound waves of different frequencies exhibit a maximal wavenumber response of 12 cm⁻¹. The conversion factor between the chord-integrated plasma density fluctuation and the signal intensity is 2.3 × 10¹³ m⁻²/mV, indicating a high sensitivity. Supported by the National Magnetic Confinement Fusion Energy Research Project (Grants No. 2015GB120002 and 2013GB107001).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wullschleger, Stan D; Childs, Kenneth W; King, Anthony Wayne
2011-01-01
A variety of thermal approaches are used to estimate sap flux density in stems of woody plants. Models have proven valuable tools for interpreting the behavior of heat pulse, heat balance, and heat field deformation techniques, but have seldom been used to describe heat transfer dynamics for the heat dissipation method. Therefore, to better understand the behavior of heat dissipation probes, a model was developed that takes into account the thermal properties of wood, the physical dimensions and thermal characteristics of the probes, and the conductive and convective heat transfer that occurs due to water flow in the sapwood. Probes were simulated as aluminum tubes 20 mm in length and 2 mm in diameter, whereas sapwood, heartwood, and bark each had a density and water fraction that determined their thermal properties. Base simulations assumed a constant sap flux density with sapwood depth and no wounding or physical disruption of xylem beyond the 2 mm diameter hole drilled for probe installation. Simulations across a range of sap flux densities showed that the dimensionless quantity k, defined as (ΔTm - ΔT)/ΔT, where ΔTm is the temperature differential ΔT between the heated and unheated probe under zero-flow conditions, was dependent on the thermal conductivity of the sapwood. The relationship between sap flux density and k was also sensitive to radial gradients in sap flux density and to xylem disruption near the probe. Monte Carlo analysis, in which 1000 simulations were conducted while simultaneously varying thermal conductivity and wound diameter, revealed that sap flux density and k showed considerable departure from the original calibration equation used with this technique. The departure was greatest for abrupt patterns of radial variation typical of ring-porous species.
Depending on the specific combination of thermal conductivity and wound diameter, use of the original calibration equation resulted in an 81% under- to 48% over-estimation of sap flux density at modest flux rates. Future studies should verify these simulations and assess their utility in estimating sap flux density for this widely used technique.
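The flow index k and the original empirical calibration this abstract refers to (Granier 1985) can be written compactly; a sketch, with the caveat that the simulations above identify conditions under which this equation misestimates flux:

```python
def k_index(delta_t, delta_t_max):
    """Dimensionless flow index k = (dT_max - dT)/dT, where dT_max is the
    probe temperature differential at zero flow."""
    return (delta_t_max - delta_t) / delta_t

def granier_flux_density(k):
    """Original empirical heat-dissipation calibration (Granier 1985):
    sap flux density in m^3 m^-2 s^-1."""
    return 118.99e-6 * k ** 1.231

k = k_index(8.0, 10.0)   # dT = 8 K against a zero-flow maximum of 10 K
print(round(k, 2))       # 0.25
```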
Analysis of dental supportive structures in orthodontic therapy.
Pavicin, Ivana Savić; Ivosević-Magdalenić, Natasa; Badel, Tomislav; Basić, Kresimir; Keros, Jadranka
2012-09-01
The purpose was to define the impact of orthodontic appliances on the density of the underlying dental bone tissue. Radiographic images of teeth were made in 27 study subjects before and twelve months after fixed orthodontic appliances were placed. The radiographs were digitalized, and the levels of gray at sites where the greatest bone resorption was expected were transformed into optic density. For standardization and comparison of values from the first and second measurements, a copper calibration wedge (a stepwedge) was used. Optic densities in the observed sites were compared with optic densities of the calibration wedge and expressed as their thickness equivalent. The study results showed no statistically significant difference in bone densities, indicating that the orthodontic therapy was properly planned and carried out and that excessive forces were not used in the applied correctional procedures.
NASA Astrophysics Data System (ADS)
Fossati, M.; Wilman, D. J.; Fontanot, F.; De Lucia, G.; Monaco, P.; Hirschmann, M.; Mendel, J. T.; Beifiori, A.; Contini, E.
2015-01-01
A well-calibrated method to describe the environment of galaxies at all redshifts is essential for the study of structure formation. Such a calibration should include well-understood correlations with halo mass, and the possibility to identify galaxies which dominate their potential well (centrals), and their satellites. Focusing on z ~ 1 and 2, we propose a method of environmental calibration which can be applied to the next generation of low- to medium-resolution spectroscopic surveys. Using an up-to-date semi-analytic model of galaxy formation, we measure the local density of galaxies in fixed apertures on different scales. There is a clear correlation of density with halo mass for satellite galaxies, while a significant population of low-mass centrals is found at high densities in the neighbourhood of massive haloes. In this case, the density simply traces the mass of the most massive halo within the aperture. To identify central and satellite galaxies, we apply an observationally motivated stellar mass rank method which is both highly pure and complete, especially in the more massive haloes where such a division is most meaningful. Finally, we examine a test case for the recovery of environmental trends: the passive fraction of galaxies and its dependence on stellar and halo mass for centrals and satellites. With careful calibration, observationally defined quantities do a good job of recovering known trends in the model. This result stands even with reduced redshift accuracy, provided the sample is deep enough to preserve a wide dynamic range of density.
X-Ray Fluorescence Determination of the Surface Density of Chromium Nanolayers
NASA Astrophysics Data System (ADS)
Mashin, N. I.; Chernjaeva, E. A.; Tumanova, A. N.; Ershov, A. A.
2014-01-01
An auxiliary system consisting of thin-film layers of chromium deposited on a polymer film substrate is used to construct calibration curves for the relative intensities of the Kα lines of chromium on bulk substrates of different elements as functions of the chromium surface density in the reference samples. Correction coefficients are calculated to take into account the absorption of primary radiation from an x-ray tube and analytical lines of the constituent elements of the substrate. A method is developed for determining the surface density of thin films of chromium when test and calibration samples are deposited on substrates of different materials.
Yohannes, Indra; Kolditz, Daniel; Langner, Oliver; Kalender, Willi A
2012-03-07
Tissue- and water-equivalent materials (TEMs) are widely used in quality assurance and calibration procedures, both in radiodiagnostics and radiotherapy. In radiotherapy, particularly, the TEMs are often used for computed tomography (CT) number calibration in treatment planning systems. However, currently available TEMs may not be very accurate in the determination of the calibration curves due to their limitation in mimicking radiation characteristics of the corresponding real tissues in both low- and high-energy ranges. Therefore, we are proposing a new formulation of TEMs using a stoichiometric analysis method to obtain TEMs for the calibration purposes. We combined the stoichiometric calibration and the basic data method to compose base materials to develop TEMs matching standard real tissues from ICRU Report 44 and 46. First, the CT numbers of six materials with known elemental compositions were measured to get constants for the stoichiometric calibration. The results of the stoichiometric calibration were used together with the basic data method to formulate new TEMs. These new TEMs were scanned to validate their CT numbers. The electron density and the stopping power calibration curves were also generated. The absolute differences of the measured CT numbers of the new TEMs were less than 4 HU for the soft tissues and less than 22 HU for the bone compared to the ICRU real tissues. Furthermore, the calculated relative electron density and electron and proton stopping powers of the new TEMs differed by less than 2% from the corresponding ICRU real tissues. The new TEMs which were formulated using the proposed technique increase the simplicity of the calibration process and preserve the accuracy of the stoichiometric calibration simultaneously.
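The stoichiometric CT-number fit that underlies such calibrations can be sketched in linearized form. The exponents follow the standard Schneider et al. (1996) parameterization, but the fitting code, scanner coefficients, and materials below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Linearized sketch of a stoichiometric CT-number fit:
#   HU + 1000 = a*rho_e + b*rho_e*Z^1.86 + c*rho_e*Z^3.62
# (exponents after Schneider et al. 1996; coefficients/materials invented)

def fit_stoichiometric(rho_e, z_eff, hu):
    """Least-squares fit of the three scanner-dependent constants (a, b, c)
    from measured HU of materials with known rho_e and effective Z."""
    rho_e, z = np.asarray(rho_e), np.asarray(z_eff)
    design = np.column_stack([rho_e, rho_e * z**1.86, rho_e * z**3.62])
    coef, *_ = np.linalg.lstsq(design, np.asarray(hu) + 1000.0, rcond=None)
    return coef

def predict_hu(coef, rho_e, z_eff):
    a, b, c = coef
    return a * rho_e + b * rho_e * z_eff**1.86 + c * rho_e * z_eff**3.62 - 1000.0

# Invented scanner constants and three known materials (rho_e relative to
# water, effective atomic number), used to generate consistent HU values
true = (900.0, 10.0, 0.05)
rho = [1.000, 1.050, 1.500]
z = [7.42, 7.60, 12.00]
hu = [predict_hu(true, r, zz) for r, zz in zip(rho, z)]
coef = fit_stoichiometric(rho, z, hu)   # recovers the generating constants
```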
Do Skeletal Density Changes Within the Tissue Layer of Corals Affect Paleoclimate Reconstructions?
NASA Astrophysics Data System (ADS)
Griffiths, J. S.; DeLong, K. L.; Quinn, T.; Taylor, F. W.; Kilbourne, K. H.; Wagner, A. J.
2016-02-01
Sea surface temperature (SST) reconstructions from coral geochemistry provide information on past climate variability; however, not all coral studies agree on a common calibration slope. Therefore, understanding the impacts of coral skeletal growth on strontium-to-calcium ratios (Sr/Ca) and oxygen isotopic ratios (δ18O) is necessary to ensure accurate calibrations. The study of Gagan et al. (2012) suggests that for the Pacific coral genus Porites, SST calibrations for coral Sr/Ca and δ18O need to be adjusted to account for skeletal density changes in the tissue layer, which may attenuate the seasonal cycle in coral geochemistry. We attempt to duplicate those results and density patterns in several Porites lutea colonies from two locations, yet our results do not show an increase in density in the tissue layer. Another study with Montastraea faveolata reveals reduced seasonality in coral Sr/Ca compared to slower-growing Siderastrea siderea in close proximity and at the same water depth, suggesting the faster-growing M. faveolata geochemistry may be attenuated. By measuring skeletal density changes by micromilling a standard volume throughout the tissue layer and immediately below, we find no pattern of skeletal accumulation in the tissue layer of multiple colonies of M. faveolata and S. siderea from different locations. We conclude that these species lay down all of their skeletal material at the skeleton surface, thus skeletal density changes in the tissue layer do not account for reduced seasonality. We propose that time averaging occurs in M. faveolata as a result of the coral polyp's deep calyces mixing time intervals in the adjacent thecal wall, in which micromilling for geochemical analysis produces a sample area that contains several growth increments. Our results show that skeletal density growth effects cannot be applied to all coral genera and pave the way for new research on calyx depth as an alternative explanation for differences in coral calibration slopes.
NASA Astrophysics Data System (ADS)
Jasmin Sterken, Veerle; Moragas-Klostermeyer, Georg; Hillier, Jon; Fielding, Lee; Lovett, Joseph; Armes, Steven; Fechler, Nina; Srama, Ralf; Bugiel, Sebastian; Hornung, Klaus
2016-10-01
Impact ionization experiments have been performed for more than 40 years to calibrate cosmic dust detectors. A linear Van de Graaff dust accelerator was used to accelerate cosmic dust analogues of submicron to micron size to speeds up to 80 km s^-1. Different materials have been used for calibration: iron, carbon, metal-coated minerals and, most recently, minerals coated with conductive polymers. While different materials with different densities have been used for instrument calibration, a comparative analysis of dust impacts of equal material but different density is necessary: porous or aggregate-like particles are increasingly found to be present in the solar system, e.g. dust from comet 67P/Churyumov-Gerasimenko (Fulle et al. 2015), aggregate particles from the plumes of Enceladus (Gao et al. 2016), and low-density interstellar dust (Westphal et al. 2014; Sterken et al. 2015). These recalibrations are relevant for measuring the size distributions of interplanetary and interstellar dust and thus mass budgets such as the gas-to-dust mass ratio in the local interstellar cloud. We report on the calibrations that have been performed at the Heidelberg dust accelerator facility to investigate the influence of particle density on the impact ionization charge. We used the Cassini Cosmic Dust Analyzer as the target and compared hollow versus compact silica particles as a first attempt to investigate experimentally the influence of dust density on the signals obtained. Preliminary tests with carbon aerogel were also performed, along with (unsuccessful) attempts to accelerate silica aerogel. In this talk we explain the motivation of the study, the experimental set-up, the preparation of the materials used, the results, and plans and recommendations for future tests.
Fulle, M., et al. 2015, The Astrophysical Journal Letters, 802, L12.
Gao, P., et al. 2016, Icarus, 264, 227-238.
Westphal, A., et al. 2014, Science, 345, 786-791.
Sterken, V. J., et al. 2015, The Astrophysical Journal, 812, 141.
Microwave Interferometry (90 GHz) for Hall Thruster Plume Density Characterization
2005-06-01
Hall thruster. The interferometer has been modified to overcome initial difficulties encountered during the preliminary testing. The modifications include the ability to perform remote and automated calibrations as well as an aluminum enclosure to shield the interferometer from the Hall thruster plume. With these modifications, it will be possible to make unambiguous electron density measurements of the thruster plume as well as to rapidly and automatically calibrate the interferometer to eliminate the effects of signal drift. Due to the versatility
NASA Technical Reports Server (NTRS)
Gangopadhyay, P.; Judge, D. L.
1996-01-01
Our knowledge of the various heliospheric phenomena (location of the solar wind termination shock, heliopause configuration, and very local interstellar medium parameters) is limited by uncertainties in the available heliospheric plasma models and by calibration uncertainties in the observing instruments. There is, thus, a strong motivation to develop model-insensitive and calibration-independent methods to reduce the uncertainties in the relevant heliospheric parameters. We have developed such a method to constrain the downstream neutral hydrogen density inside the heliospheric tail. In our approach we have taken advantage of the relative insensitivity of the downstream neutral hydrogen density profile to the specific plasma model adopted. We have also used the fact that the presence of an asymmetric neutral hydrogen cavity surrounding the Sun, characteristic of all neutral density models, results in a higher multiple scattering contribution to the observed glow in the downstream region than in the upstream region. This allows us to approximate the actual density profile with one which is spatially uniform for the purpose of calculating the downstream backscattered glow. Using different spatially constant density profiles, radiative transfer calculations are performed, and the radial dependence of the predicted glow is compared with the observed 1/R dependence of Pioneer 10 UV data. Such a comparison bounds the large-distance heliospheric neutral hydrogen density in the downstream direction to a value between 0.05 and 0.1 cm^-3.
Improved cross-calibration of Thomson scattering and electron cyclotron emission with ECH on DIII-D
Brookman, M. W.; Austin, M. E.; McLean, A. G.; ...
2016-08-08
Thomson scattering (TS) produces n_e profiles from measurement of scattered laser beam intensity. In the case of Rayleigh scattering, it provides a first calibration of the relation n_e/I_TS, which depends on many factors (e.g. laser alignment and power, optics, and measurement systems). On DIII-D, the n_e calibration is adjusted for each laser and optic path against an absolute n_e measurement from a density-driven cutoff on the 48-channel 2nd-harmonic X-mode electron cyclotron emission (ECE) system. This method has been used to calibrate Thomson densities from the edge to near the core (r/a > 0.15). Application of core electron cyclotron heating (ECH) improves the quality of cutoff and the depth of its penetration into the core. ECH also changes underlying MHD activity. Furthermore, on removal of ECH power, cutoff penetrates in from the edge to the core and channels fall successively and smoothly into cutoff. This improves the quality of the TS n_e calibration while minimizing wall loading.
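The density-driven cutoff exploited above follows from the cold-plasma right-hand cutoff condition; a textbook sketch of the idea (not the DIII-D analysis code), with illustrative numbers:

```python
import math

E_CHARGE = 1.602176634e-19   # C
M_E = 9.1093837015e-31       # kg
EPS0 = 8.8541878128e-12      # F/m

def xmode_cutoff_density(freq_hz, b_tesla):
    """Electron density (m^-3) at which an X-mode wave of the given frequency
    reaches the right-hand cutoff, from the cold-plasma relation
    omega_pe^2 = omega * (omega - omega_ce)."""
    omega = 2.0 * math.pi * freq_hz
    omega_ce = E_CHARGE * b_tesla / M_E
    omega_pe_sq = omega * (omega - omega_ce)
    return EPS0 * M_E * omega_pe_sq / E_CHARGE**2

# Illustrative numbers: a 100 GHz channel in a 2 T field
n_cut = xmode_cutoff_density(100e9, 2.0)   # ~5.5e19 m^-3
```

Higher-frequency channels cut off at higher density, which is why channels fall into cutoff successively as the density rises.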
Absolute flux density calibrations of radio sources: 2.3 GHz
NASA Technical Reports Server (NTRS)
Freiley, A. J.; Batelaan, P. D.; Bathker, D. A.
1977-01-01
A detailed description of a NASA/JPL Deep Space Network program to improve S-band gain calibrations of large-aperture antennas is reported. The program is considered unique in at least three ways. First, absolute gain calibrations of high-quality suppressed-sidelobe dual-mode horns provide a high-accuracy foundation for the program. Second, a very careful transfer calibration technique using an artificial far-field coherent-wave source was used to accurately obtain the gain of one large (26 m) aperture. Third, using the calibrated large aperture directly, the absolute flux density of five selected galactic and extragalactic natural radio sources was determined with an absolute accuracy better than 2 percent, quoted at the familiar 1-sigma confidence level. The follow-on considerations for applying these results to an operational network of ground antennas are discussed. It is concluded that absolute gain accuracies within ±0.30 to 0.40 dB are possible, depending primarily on the repeatability (scatter) in the field data from Deep Space Network user stations.
Self-calibration performance in stereoscopic PIV acquired in a transonic wind tunnel
Beresh, Steven J.; Wagner, Justin L.; Smith, Barton L.
2016-03-16
Three stereoscopic PIV experiments have been examined to test the effectiveness of self-calibration under varied circumstances. Measurements taken in a streamwise plane yielded a robust self-calibration that returned common results regardless of the specific calibration procedure, but measurements in the crossplane exhibited substantial velocity bias errors whose nature was sensitive to the particulars of the self-calibration approach. Self-calibration is complicated by thick laser sheets and large stereoscopic camera angles and further exacerbated by small particle image diameters and high particle seeding density. In spite of the different answers obtained by varied self-calibrations, each implementation locked onto an apparently valid solution with small residual disparity and converged adjustment of the calibration plane. Thus, the convergence of self-calibration on a solution with small disparity is not sufficient to indicate negligible velocity error due to the stereo calibration.
(abstract) Absolute Flux Calibrations of Venus and Jupiter at 32 GHz
NASA Technical Reports Server (NTRS)
Gatti, Mark S.; Klein, Michael J.
1994-01-01
The microwave flux densities of Venus and Jupiter at 32 GHz have been measured using a calibration standard radio telescope system at the Owens Valley Radio Observatory (OVRO) during April and May of 1993. These measurements are part of a joint JPL/Caltech program to accurately calibrate a catalog of other radio sources using the two bright planets as flux standards.
NASA Astrophysics Data System (ADS)
Engeland, K.; Steinsland, I.; Petersen-Øverleir, A.; Johansen, S.
2012-04-01
The aim of this study is to assess the uncertainties in streamflow simulations when uncertainties in both the observed inputs (precipitation and temperature) and the streamflow observations used for calibration of the hydrological model are explicitly accounted for. To achieve this goal we applied the elevation-distributed HBV model, operating on daily time steps, to a small high-elevation catchment in Southern Norway where the seasonal snow cover is important. The uncertainties in precipitation inputs were quantified using conditional simulation. This procedure accounts for the uncertainty related to the density of the precipitation network, but neglects uncertainties related to measurement bias/errors and possible elevation gradients in precipitation. The uncertainties in temperature inputs were quantified using a Bayesian temperature interpolation procedure in which the temperature lapse rate is re-estimated every day. The uncertainty in the lapse rate was accounted for, whereas the sampling uncertainty related to network density was neglected. For every day, a random sample of precipitation and temperature inputs was drawn to be applied as inputs to the hydrologic model. The uncertainties in observed streamflow were assessed based on the uncertainties in the rating curve model. A Bayesian procedure was applied to estimate the probability of rating curve models with 1 to 3 segments and the uncertainties in their parameters. This method neglects uncertainties related to errors in observed water levels. Note that one rating curve was drawn to make one realisation of a whole time series of streamflow; thus the rating curve errors lead to a systematic bias in the streamflow observations. All these uncertainty sources were linked together in both calibration and evaluation of the hydrologic model using a DREAM-based MCMC routine. Effects of having less information (e.g. missing one streamflow measurement for defining the rating curve or missing one precipitation station) were also investigated.
NASA Astrophysics Data System (ADS)
Skowronek, Sandra; Van De Kerchove, Ruben; Rombouts, Bjorn; Aerts, Raf; Ewald, Michael; Warrie, Jens; Schiefer, Felix; Garzon-Lopez, Carol; Hattab, Tarek; Honnay, Olivier; Lenoir, Jonathan; Rocchini, Duccio; Schmidtlein, Sebastian; Somers, Ben; Feilhauer, Hannes
2018-06-01
Remote sensing is a promising tool for detecting invasive alien plant species. Mapping and monitoring those species requires accurate detection. So far, most studies have relied on models that are locally calibrated and validated against available field data. Consequently, detecting invasive alien species in new study areas requires the acquisition of additional field data, which can be expensive and time-consuming. Model transfer might thus provide a viable alternative. Here, we mapped the distribution of the invasive alien bryophyte Campylopus introflexus to i) assess the feasibility of spatially transferring locally calibrated models for species detection between four different heathland areas in Germany and Belgium and ii) test the potential of combining calibration data from different sites in one species distribution model (SDM). In a first step, four different SDMs were locally calibrated and validated by combining field data and airborne imaging spectroscopy data with a spatial resolution ranging from 1.8 m to 4 m and a spectral resolution of about 10 nm (244 bands). A one-class classifier, Maxent, which is based on the comparison of probability densities, was used to generate all SDMs. In a second step, each model was transferred to the three other study areas and the performance of the models for predicting C. introflexus occurrences was assessed. Finally, models combining calibration data from three study areas were built and tested on the remaining fourth site. In this step, different combinations of Maxent modelling parameters were tested. For the local models, the area under the curve for a test dataset (test AUC) was between 0.57 and 0.78, while the test AUC for the single transfer models ranged between 0.45 and 0.89. For the combined models the test AUC was between 0.54 and 0.90. The success of transferring models calibrated in one site to another site depended strongly on the respective study site; the combined models provided higher test AUC values than the locally calibrated models for three out of four study sites. Furthermore, we demonstrated the importance of optimizing the Maxent modelling parameters. Overall, our results indicate the potential of a combined model to map C. introflexus without the need for new calibration data.
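Test AUC, the skill metric reported above, reduces for presence/background data to a rank statistic: the probability that a random presence point outscores a random background point. A minimal pure-Python sketch with illustrative scores:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of presence/background pairs in which the presence
    point scores higher (ties count one half)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Perfectly separated suitability scores give AUC = 1.0;
# indistinguishable scores give 0.5 (no better than random).
print(auc([0.9, 0.8, 0.7], [0.4, 0.3, 0.2]))  # -> 1.0
```

By this reading, the transfer-model AUCs of 0.45-0.89 span the range from slightly worse than random to strongly discriminating, which is why transfer success is said to depend on the site.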
Krueger, Diane; Libber, Jessie; Sanfilippo, Jennifer; Yu, Hui Jing; Horvath, Blaine; Miller, Colin G; Binkley, Neil
2016-01-01
New densitometer installation requires cross-calibration for accurate longitudinal assessment. When replacing a unit with the same model, the International Society for Clinical Densitometry recommends cross-calibrating by scanning phantoms 10 times on each instrument and states that spine bone mineral density (BMD) should be within 1%, whereas total body lean, fat, and %fat mass should be within 2% of the prior instrument. However, there is limited validation that these recommendations provide adequate total body cross-calibration. Here, we report a total body cross-calibration experience with phantoms and humans. Cross-calibration between an existing and a new Lunar iDXA was performed using 3 encapsulated spine phantoms (GE [GE Lunar, Madison, WI], BioClinica [BioClinica Inc, Princeton, NJ], and Hologic [Hologic Inc, Bedford, MA]), 1 total body composition phantom (BioClinica), and 30 human volunteers. Thirty scans of each phantom and a total body scan of each human volunteer were obtained on each instrument. All spine phantom BMD means were similar (within 1%; <-0.010 g/cm² bias) between the existing and new dual-energy X-ray absorptiometry units. The BioClinica body composition phantom (BBCP) BMD and bone mineral content (BMC) values were within 2%, with biases of 0.005 g/cm² and -3.4 g. However, lean and fat mass and %fat differed by 4.6%-7.7%, with biases of +463 g, -496 g, and -2.8%, respectively. In vivo comparison supported the BBCP data; BMD and BMC were within ~2%, but lean and fat mass and %fat differed by 1.6% to 4.9%, with biases of +833 g, -860 g, and -1.1%. As all body composition comparisons exceeded the recommended 2%, the new densitometer was recalibrated. After recalibration, in vivo bias was lower (<0.05%) for lean and fat: -23 and -5 g, respectively. Similarly, BBCP lean and fat agreement improved. In conclusion, the BBCP behaves similarly, but not identically, to human in vivo measurements for densitometer cross-calibration. Spine phantoms, despite good BMD and BMC agreement, did not detect the substantial lean and fat differences observed using BBCP and in vivo assessments. Consequently, spine phantoms are inadequate for dual-energy X-ray absorptiometry whole body composition cross-calibration. Copyright © 2016 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.
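The pass/fail logic of such a cross-calibration can be sketched as a simple paired comparison against the 2% tolerance; the function name and the lean-mass values below are illustrative, not the study's data:

```python
def cross_calibration_check(old, new, tolerance_pct):
    """Compare paired measurements from the existing and replacement
    densitometer; return mean bias, mean absolute percent difference,
    and whether the difference is within tolerance."""
    diffs = [n - o for o, n in zip(old, new)]
    bias = sum(diffs) / len(diffs)
    pct = 100.0 * sum(abs(n - o) / o for o, n in zip(old, new)) / len(old)
    return bias, pct, pct <= tolerance_pct

# Illustrative lean-mass values (g): a systematic ~+1.2 kg offset
# exceeds a 2% tolerance and would trigger recalibration.
bias, pct, ok = cross_calibration_check([52000, 48000], [53200, 49300], 2.0)
print(round(bias), round(pct, 2), ok)  # -> 1250 2.51 False
```

This mirrors the study's workflow: the phantom and in vivo lean/fat differences exceeded 2%, so the new unit was recalibrated and the check repeated.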
A computer program for borehole compensation of dual-detector density well logs
Scott, James Henry
1978-01-01
The computer program described in this report was developed for applying a borehole-rugosity and mudcake compensation algorithm to dual-density logs using the following information: the water level in the drill hole, the hole diameter (from a caliper log if available, or the nominal drill diameter if not), and the two gamma-ray count rate logs from the near and far detectors of the density probe. The equations that represent the compensation algorithm and the calibration of the two detectors (for converting count rate to density) were derived specifically for a probe manufactured by Comprobe Inc. (5.4 cm O.D. dual-density-caliper); they are not applicable to other probes. However, equivalent calibration and compensation equations can be empirically determined for any other similar two-detector density probe and substituted in the computer program listed in this report. Use of brand names in this report does not necessarily constitute endorsement by the U.S. Geological Survey.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, J; Li, X; Liu, G
Purpose: We compare and investigate the dosimetric impact on pencil beam scanning (PBS) proton treatment plans generated with CT calibration curves from four different CT scanners and one averaged 'global' CT calibration curve. Methods: The four CT scanners are located at three different hospital locations within the same health system. CT density calibration curves were collected from these scanners using the same CT calibration phantom and acquisition parameters. Mass density to HU tables were then commissioned in a commercial treatment planning system. Five disease sites were chosen for dosimetric comparison: brain, lung, head and neck, adrenal, and prostate. Three types of PBS plans were generated at each treatment site using SFUD, IMPT, and robustness-optimized IMPT (RO-IMPT) techniques. 3D dose differences were investigated using 3D gamma analysis. Results: The CT calibration curves for all four scanners display very similar shapes. Large HU differences were observed at both the high-HU and low-HU regions of the curves. Large dose differences were generally observed at the distal edges of the beams, and they are beam-angle dependent. Of the five treatment sites, lung plans exhibit the largest overall range uncertainties and prostate plans have the greatest dose discrepancy. There are no significant differences between the SFUD, IMPT, and RO-IMPT methods. 3D gamma analysis with 3%/3 mm criteria showed all plans with greater than 95% passing rate. Two of the scanners with close HU values have negligible dose differences except for lung. Conclusion: Our study shows that there can be more than 5% dosimetric difference between different CT calibration curves. PBS treatment plans generated with SFUD, IMPT, and robustness-optimized IMPT have similar sensitivity to CT density uncertainty. More patient data and tighter gamma criteria based on structure location and size will be used for further investigation.
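The 3%/3 mm gamma analysis mentioned in the Results combines a dose-difference and a distance-to-agreement search (in the style of Low et al.); a minimal 1D sketch, assuming global dose normalization:

```python
import math

def gamma_index(ref_pos, ref_dose, eval_pos, eval_dose,
                dose_tol=0.03, dist_tol=3.0):
    """1D gamma analysis: for each reference point, minimise the
    combined dose/distance metric over the evaluated distribution.
    dose_tol is fractional (3% of the global maximum), dist_tol in mm.
    A reference point passes when its gamma is <= 1."""
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        g = min(math.sqrt(((ep - rp) / dist_tol) ** 2 +
                          ((ed - rd) / (dose_tol * max(ref_dose))) ** 2)
                for ep, ed in zip(eval_pos, eval_dose))
        gammas.append(g)
    return gammas

def passing_rate(gammas):
    return 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)

# Identical profiles pass everywhere (gamma = 0 at each point).
pos = [0.0, 1.0, 2.0, 3.0]
dose = [1.0, 0.9, 0.5, 0.1]
print(passing_rate(gamma_index(pos, dose, pos, dose)))  # -> 100.0
```

A clinical 3D implementation additionally interpolates the evaluated dose between grid points and restricts the search radius, but the pass criterion is the same.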
Physics of High Temperature, Dense Plasmas.
1984-01-01
…symmetry check; the amplitude and the time of arrival of these three signals can also be used to confirm the proper calibration of our data acquisition… calibration factors allow us to scale the oscillogram signals directly to current without having to calculate the magnetic field first… converted optical densities to relative spectral intensities, I(x,y), using the film response calibration information of Equation (1). At this point…
A comparison of calibration techniques for hot-wires operated in subsonic compressible slip flows
NASA Technical Reports Server (NTRS)
Jones, Gregory S.; Stainback, P. C.; Nagabushana, K. A.
1992-01-01
This paper focuses on the correlation of constant-temperature anemometer voltages with velocity, density, and total temperature in the transonic slip-flow regime. Three different calibration schemes were evaluated. The ultimate use of these hot-wire calibrations is to obtain fluctuations in the flow variables. Without the appropriate mean-flow sensitivities of the heated wire, these fluctuation measurements cannot be accurately determined.
Neural networks for calibration tomography
NASA Technical Reports Server (NTRS)
Decker, Arthur
1993-01-01
Artificial neural networks are suitable for performing pattern-to-pattern calibrations. These calibrations are potentially useful for facilities operations in aeronautics, the control of optical alignment, and the like. Computed tomography is compared with neural-net calibration tomography for estimating density from its x-ray transform. X-ray transforms are measured, for example, in diffuse-illumination holographic interferometry of fluids. Computed tomography and neural-net calibration tomography are shown to have comparable performance for a 10-degree viewing cone and 29 interferograms within that cone. The system of tomography discussed is proposed as a relevant test of neural networks and other parallel processors intended for use with flow-visualization data.
NASA Astrophysics Data System (ADS)
Golobokov, M.; Danilevich, S.
2018-04-01
To assess calibration reliability and automate such assessment, procedures for data collection and for simulation study of the thermal imager calibration procedure have been elaborated. The existing calibration techniques do not always provide high reliability. A new method for analyzing existing calibration techniques and developing new, efficient ones has been suggested and tested. Software has been studied that automatically generates instrument calibration reports, monitors their proper configuration, processes measurement results, and assesses instrument validity. Using such software reduces the man-hours spent finalizing calibration data by a factor of 2 to 5 and eliminates a whole set of typical operator errors.
SU-D-BRC-04: Development of Proton Tissue Equivalent Materials for Calibration and Dosimetry Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olguin, E; Flampouri, S; Lipnharski, I
Purpose: To develop new proton tissue equivalent materials (PTEMs), urethane and fiberglass based, for proton therapy calibration and dosimetry studies. Existing tissue equivalent plastics are applicable only for x-rays because they focus on matching mass attenuation coefficients. This study aims instead to create new plastics that match mass stopping powers for proton therapy applications. Methods: New PTEMs were constructed using urethane and fiberglass resin materials for soft, fat, bone, and lung tissue. The stoichiometric analysis method was first used to determine the elemental composition of each unknown constituent. New initial formulae were then developed for each of the 4 PTEMs using the new elemental compositions and various additives. Samples of each plastic were then created and exposed to a well-defined proton beam at the UF Health Proton Therapy Institute (UFHPTI) to validate their mass stopping powers. Results: The stoichiometric analysis method revealed the elemental composition of the 3 components used in creating the PTEMs. These urethane- and fiberglass-based resins were combined with additives such as calcium carbonate, aluminum hydroxide, and phenolic microspheres to achieve the desired mass stopping powers and densities. Validation at the UFHPTI revealed that adjustments had to be made to the formulae, but the plastics eventually had the desired properties after a few iterations. The mass stopping power, density, and Hounsfield Unit of each of the 4 PTEMs were within acceptable tolerances. Conclusion: Four proton tissue equivalent plastics were developed: soft, fat, bone, and lung tissue. These plastics match each corresponding tissue's mass stopping power, density, and Hounsfield Unit, meaning they are truly tissue equivalent for proton therapy applications. They can now be used to calibrate proton therapy treatment planning systems, improve range uncertainties, validate proton therapy Monte Carlo simulations, and assess in-field and out-of-field organ doses.
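Matching proton mass stopping powers, as opposed to x-ray attenuation coefficients, can be checked against the uncorrected Bethe formula; a hedged sketch (the material values are illustrative, not the paper's formulations):

```python
import math

M_E_C2 = 0.510998950e6  # electron rest energy, eV

def relative_stopping_power(rho_e_rel, i_material_ev, beta,
                            i_water_ev=75.0):
    """Water-relative proton stopping power from the Bethe formula
    (no shell or density corrections): RSP = rho_e_rel * L_m / L_w,
    where L = ln(2*m_e*c^2*beta^2 / (I*(1 - beta^2))) - beta^2,
    rho_e_rel is the electron density relative to water, and I is
    the mean excitation energy."""
    def bethe_log(i_ev):
        return math.log(2.0 * M_E_C2 * beta**2 /
                        (i_ev * (1.0 - beta**2))) - beta**2
    return rho_e_rel * bethe_log(i_material_ev) / bethe_log(i_water_ev)

# Illustrative bone-like material: relative electron density 1.7,
# I ~ 106 eV, at a proton speed beta ~ 0.57 (~150 MeV).
print(round(relative_stopping_power(1.7, 106.0, 0.57), 3))
```

Because the logarithmic term varies slowly, the RSP is dominated by the relative electron density, which is why a plastic tuned to a tissue's electron density and I-value can track its stopping power across therapeutic energies.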
[Automated analyser of organ cultured corneal endothelial mosaic].
Gain, P; Thuret, G; Chiquet, C; Gavet, Y; Turc, P H; Théillère, C; Acquart, S; Le Petit, J C; Maugery, J; Campos, L
2002-05-01
Until now, organ-cultured corneal endothelial mosaics have been assessed in France by cell counting using a calibrated graticule or by drawing cells on a computerized image. The former method is unsatisfactory because it lacks an objective evaluation of cell surface and hexagonality and requires an experienced technician. The latter is time-consuming and requires careful attention. We aimed to make an efficient, fast, easy-to-use, automated digital analyzer of video images of the corneal endothelium. The hardware included a Pentium III PC (800 MHz, 256 MB RAM), a Data Translation 3155 acquisition card, a Sony SC 75 CE CCD camera, and a 22-inch screen. Special functions for automated cell boundary determination consisted of plug-in programs included in the ImageTool software. Calibration was performed using a calibrated micrometer. Cell densities of 40 organ-cultured corneas measured by both manual and automated counting were compared using parametric tests (Student's t test for paired variables and the Pearson correlation coefficient). All steps were considered more ergonomic, i.e., endothelial image capture, image selection, thresholding of multiple areas of interest, automated cell count, automated detection of errors in cell boundary drawing, and presentation of the results in an HTML file including the number of counted cells, cell density, coefficient of variation of cell area, cell surface histogram, and cell hexagonality. The device was efficient: the global process lasted on average 7 minutes and did not require an experienced technician. The correlation between cell densities obtained with both methods was high (r=+0.84, p<0.001). The results showed an underestimation with manual counting (2191+/-322 vs. 2273+/-457 cells/mm², p=0.046) compared with the automated method. Our automated endothelial cell analyzer is efficient and gives reliable results quickly and easily.
A multicentric validation would allow us to standardize cell counts among cornea banks in our country.
Assessment of the Quality of Digital Terrain Model Produced from Unmanned Aerial System Imagery
NASA Astrophysics Data System (ADS)
Kosmatin Fras, M.; Kerin, A.; Mesarič, M.; Peterman, V.; Grigillo, D.
2016-06-01
Production of a digital terrain model (DTM) is one of the most common tasks when processing a photogrammetric point cloud generated from Unmanned Aerial System (UAS) imagery. The quality of a DTM produced in this way depends on different factors: the quality of the imagery, image orientation and camera calibration, point cloud filtering, interpolation methods, etc. However, the assessment of the real quality of the DTM is very important for its further use and applications. In this paper we first describe the main steps of UAS imagery acquisition and processing based on a practical test field survey and data. The main focus of this paper is to present the approach to DTM quality assessment and to give a practical example on the test field data. For the data processing and DTM quality assessment presented in this paper, mainly in-house developed computer programs have been used. The quality of a DTM comprises its accuracy, density, and completeness. Different accuracy measures are computed: RMSE, median, normalized median absolute deviation with their confidence intervals, and quantiles. Completeness is a very often overlooked quality parameter, but when a DTM is produced from a point cloud it should not be neglected, as some areas might be only sparsely covered by points. The original point density is presented with a density plot or map. The completeness is presented by the map of point density and the map of distances between grid points and terrain points. The results in the test area show the great potential of DTMs produced from UAS imagery, in the sense of detailed representation of the terrain as well as good height accuracy.
Comparison of density determination of liquid samples by density meters
NASA Astrophysics Data System (ADS)
Buchner, C.; Wolf, H.; Vámossy, C.; Lorefice, S.; Lenard, E.; Spohr, I.; Mares, G.; Perkin, M.; Parlic-Risovic, T.; Grue, L.-L.; Tammik, K.; van Andel, I.; Zelenka, Z.
2016-01-01
Hydrostatic density determinations of liquids as reference materials are mainly performed by National Metrology Institutes to provide means for calibrating or checking liquid density measuring instruments such as oscillation-type density meters. These density meters are used by most of the metrology institutes for their calibration and scientific work. The aim of this project was to compare the results of liquid density determination by the oscillating density meters of the participating laboratories. The results were linked to CCM.D.K-2 partly via Project EURAMET.M.D.K-2 (1019) "Comparison of liquid density standards" by hydrostatic weighing, piloted by BEV in 2008. In this comparison, pentadecane, water, and a high-viscosity oil were measured at atmospheric pressure using oscillation-type density meters over a temperature range from 15 °C to 40 °C. The measurement results were in some cases discrepant. Further studies and comparisons are essential to explore the capability and uncertainty of the density meters. The final report has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
40 CFR 92.122 - Smoke meter calibration.
Code of Federal Regulations, 2010 CFR
2010-07-01
... collection equipment response of zero; (b) Calibrated neutral density filters having approximately 10, 20, and 40 percent opacity shall be employed to check the linearity of the instrument. The filter(s) shall.... Filters with exposed filtering media should be checked for opacity every six months; all other filters...
40 CFR 92.122 - Smoke meter calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
... collection equipment response of zero; (b) Calibrated neutral density filters having approximately 10, 20, and 40 percent opacity shall be employed to check the linearity of the instrument. The filter(s) shall.... Filters with exposed filtering media should be checked for opacity every six months; all other filters...
40 CFR 92.122 - Smoke meter calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... collection equipment response of zero; (b) Calibrated neutral density filters having approximately 10, 20, and 40 percent opacity shall be employed to check the linearity of the instrument. The filter(s) shall.... Filters with exposed filtering media should be checked for opacity every six months; all other filters...
40 CFR 92.122 - Smoke meter calibration.
Code of Federal Regulations, 2011 CFR
2011-07-01
... collection equipment response of zero; (b) Calibrated neutral density filters having approximately 10, 20, and 40 percent opacity shall be employed to check the linearity of the instrument. The filter(s) shall.... Filters with exposed filtering media should be checked for opacity every six months; all other filters...
Aminzadeh, Reza; Thielens, Arno; Bamba, Aliou; Kone, Lamine; Gaillot, Davy Paul; Lienard, Martine; Martens, Luc; Joseph, Wout
2016-07-01
For the first time, the response of personal exposimeters (PEMs) is studied under diffuse-field exposure in indoor environments. To this aim, both numerical simulations, using the finite-difference time-domain method, and calibration measurements were performed in the range of 880-5875 MHz, covering 10 frequency bands in Belgium. Two PEMs were mounted on the body of a male human subject and calibrated on-body in an anechoic chamber (non-diffuse fields) and a reverberation chamber (RC) (diffuse fields). This was motivated by the fact that electromagnetic waves in indoor environments have both specular and diffuse components. Both calibrations show that PEMs underestimate the actual incident electromagnetic fields. This can be compensated by using an on-body response. Moreover, it is shown that these responses differ between the anechoic chamber and the RC. Therefore, it is advised to use an on-body calibration in an RC in future indoor PEM measurements where diffuse fields are present. Using the response averaged over two PEMs reduced measurement uncertainty compared to single PEMs. Following the calibration, measurements in a realistic indoor environment were done for the wireless fidelity (WiFi-5G) band. Measured power density values are maximally 8.9 mW/m² and 165.8 μW/m² on average. These satisfy the reference levels issued by the International Commission on Non-Ionizing Radiation Protection in 1998. Power density values obtained by applying the on-body calibration in the RC are higher than values obtained from no-body calibration (PEMs only) and on-body calibration in the anechoic room, by factors of 7.55 and 2.21, respectively. Bioelectromagnetics 37:298-309, 2016. © 2016 Wiley Periodicals, Inc.
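The reported power densities relate to incident field strength through the plane-wave impedance of free space; a small sketch assuming far-field conditions (the ICNIRP comparison value is from the 1998 guidelines):

```python
Z0 = 376.730  # impedance of free space, ohm

def power_density(e_field_vpm):
    """Plane-wave (far-field) power density S = E^2 / Z0 in W/m^2
    for an incident RMS field strength in V/m."""
    return e_field_vpm ** 2 / Z0

def field_from_power_density(s_wpm2):
    """Inverse relation: RMS field strength (V/m) from S (W/m^2)."""
    return (s_wpm2 * Z0) ** 0.5

# The reported 8.9 mW/m^2 maximum corresponds to roughly 1.8 V/m,
# far below the ICNIRP 1998 general-public reference level of
# ~10 W/m^2 at 5 GHz.
print(round(field_from_power_density(8.9e-3), 2))  # -> 1.83
```

In a reverberation chamber the field is not a single plane wave, which is one reason the diffuse-field (RC) calibration factor differs from the anechoic one.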
Evaluation of a microwave resonator for predicting grain moisture independent of bulk density
USDA-ARS?s Scientific Manuscript database
This work evaluated the ability of a planar whispering mode resonator to predict moisture considering moisture and densities expected in an on-harvester application. A calibration model was developed to accurately predict moisture over the moisture, density and temperature ranges evaluated. This mod...
Calibration of Thomson scattering system on VEST
NASA Astrophysics Data System (ADS)
Kim, Y.-G.; Lee, J.-H.; Kim, D.; Yoo, M.-G.; Lee, H.; Hwang, Y. S.; Na, Y.-S.
2017-12-01
The Thomson scattering system has recently been installed on the Versatile Experiment Spherical Torus (VEST) to measure the electron temperature and density of the core plasma. Since calibration of the system is required for accurate measurement of these parameters, the polychromator and the system efficiency are calibrated. The bias voltage of the detector is optimized and the relative responsivity of the polychromator is measured to analyse the spectral broadening. The tendency of decreasing responsivity due to ambient temperature change is also addressed. The efficiencies of alignment using the HeNe laser and the Nd:YAG laser are compared. After alignment using Rayleigh scattering, the efficiency is improved ~7 times while the peak stray-light signal is decreased. To evaluate the efficiency of the HeNe-laser alignment, it is compared with the efficiency of the fine alignment by Rayleigh scattering. After absolute calibration, the Thomson scattering signal is estimated theoretically. Bayesian analysis is tested using synthetic data, and the results show that the input temperature and density lie inside the contour of the 90% confidence level. The calibrated Thomson scattering system will provide meaningful information on the core plasma of VEST.
NASA Astrophysics Data System (ADS)
Hoadley, Keri; France, Kevin; Nell, Nicholas; Kane, Robert; Schultz, Ted; Beasley, Matthew; Green, James; Kulow, Jen; Kersgaard, Eliot; Fleming, Brian
2014-07-01
The Colorado High-resolution Echelle Stellar Spectrograph (CHESS) is a far-ultraviolet (FUV) rocket-borne experiment designed to study the atomic-to-molecular transitions within translucent interstellar clouds. CHESS is an objective echelle spectrograph operating at f/12.4 with a resolving power of 120,000 over a bandpass of 100-160 nm. The echelle flight grating is the product of a research and development project with LightSmyth Inc. and was coated at Goddard Space Flight Center (GSFC) with Al+LiF. It has an empirically determined groove density of 71.67 grooves/mm. At the Center for Astrophysics and Space Astronomy (CASA) at the University of Colorado (CU), we measured the efficiencies of the peak and adjacent dispersion orders throughout the 90-165 nm bandpass to characterize the behavior of the grating for pre-flight calibrations and to assess the scattered-light behavior. The cross-dispersing grating, developed and ruled by Horiba Jobin-Yvon, is a holographically ruled, low line density (351 grooves/mm), powered optic with a toroidal surface curvature. The CHESS cross-disperser was also coated at GSFC; Cr+Al+LiF was deposited to enhance far-UV efficiency. Results from final efficiency and reflectivity measurements of both optics are presented. We utilize a cross-strip anode microchannel plate (MCP) detector built by Sensor Sciences to achieve high resolution (25 μm spatial resolution) and high data collection rates (~10^6 photons/second) over a large format (40 mm round, digitized to 8k x 8k) for the first time in an astronomical sounding rocket flight. The CHESS instrument was successfully launched from White Sands Missile Range on 24 May 2014. We present pre-flight sensitivity, effective area calculations, lab spectra and calibration results, and touch on first results and post-flight calibration plans.
Variability in Students' Evaluating Processes in Peer Assessment with Calibrated Peer Review
ERIC Educational Resources Information Center
Russell, J.; Van Horne, S.; Ward, A. S.; Bettis, E. A., III; Gikonyo, J.
2017-01-01
This study investigated students' evaluating process and their perceptions of peer assessment when they engaged in peer assessment using Calibrated Peer Review. Calibrated Peer Review is a web-based application that facilitates peer assessment of writing. One hundred and thirty-two students in an introductory environmental science course…
The Plasma Environment at Enceladus and Europa Compared
NASA Astrophysics Data System (ADS)
Rymer, Abigail; Persoon, Ann; Morooka, Michiko; Heuer, Steven; Westlake, Joseph H.
2017-10-01
The plasma environment near Enceladus is complex, as revealed during 16 encounters by the Cassini spacecraft. The well-documented Enceladus plumes create a dusty, asymmetric exosphere in which electrons can attach to small ice particles - forming anions, and negatively charged nanograins and dust - to the extent that cations can be the lightest charged particles present and, as a result, the dominant current carriers. Several instruments on the Cassini spacecraft are able to measure this environment in both expected and unexpected ways. The Cassini Plasma Spectrometer (CAPS) is designed and calibrated to measure the thermal plasma ions and electrons, and also measures the energy/charge of charged nanograins when present. The Cassini Radio and Plasma Wave Science instrument (RPWS) measures the electron density as derived from the 'upper hybrid frequency', which is a function of the total free electron density and magnetic field strength, and provides a vital ground-truth measurement for Cassini calibration when the density is sufficiently high to be well measured. The Cassini Langmuir Probe (LP) measures the electron density and temperature via direct current measurement, and both CAPS and LP provide estimates of the spacecraft potential, which we compare. The plasma environment near Europa is similarly complex and, although less comprehensively instrumented and hampered by the non-deployment of its high-gain antenna, the Galileo spacecraft made similar measurements during 9 Europa flybys; recent observations have suggested that, like Enceladus, Europa might have active plumes. We present a detailed comparison of data from the Cassini and Galileo sensors in order to assess the plasma environment observed by the different instruments, discuss what is and is not consistent, and examine the implications for the plasma environment at Enceladus and Europa in the context of work to date, as well as implications for future studies.
Calibration of a Background Oriented Schlieren (BOS) Set-up
NASA Astrophysics Data System (ADS)
Porta, David; Echeverría, Carlos; Cardoso, Hiroki; Aguayo, Alejandro; Stern, Catalina
2014-11-01
We use two materials with different known indexes of refraction to calibrate a Background Oriented Schlieren (BOS) experimental set-up, and to validate the Lorenz-Lorentz equation. BOS is used in our experiments to determine local changes of density in the shock pattern of an axisymmetric supersonic air jet. It is important to validate, in particular, the Gladstone Dale approximation (index of refraction close to one) in our experimental conditions and determine the uncertainty of our density measurements. In some cases, the index of refraction of the material is well known, but in others the density is measured and related to the displacement field. We acknowledge support from UNAM through DGAPA PAPIIT IN117712 and the Graduate Program in Mechanical Engineering.
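The Gladstone-Dale approximation referenced in this abstract (refractive index close to one, linear in density) can be sketched as follows. This is an illustrative snippet, not the authors' calibration code; the Gladstone-Dale constant for air is a standard literature value at visible wavelengths, not a number from the abstract.

```python
# Gladstone-Dale relation: n - 1 = K * rho, valid for gases where the
# refractive index n is close to one. K_AIR is an approximate standard
# literature value for air in visible light (assumption, not from the text).

K_AIR = 2.26e-4  # m^3/kg, Gladstone-Dale constant for air

def density_from_index(n, k=K_AIR):
    """Recover gas density (kg/m^3) from refractive index n."""
    return (n - 1.0) / k

def index_from_density(rho, k=K_AIR):
    """Refractive index of a gas of density rho (kg/m^3)."""
    return 1.0 + k * rho

# Sea-level air (rho ~ 1.225 kg/m^3) gives n only ~2.8e-4 above unity,
# which is why BOS measures index *gradients* rather than n directly.
n_air = index_from_density(1.225)
```

In a BOS measurement, the displacement field gives the index-of-refraction gradient, and this relation converts it to a density gradient.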
Rangeland biomass estimation demonstration. [Texas Experimental Ranch]
NASA Technical Reports Server (NTRS)
Newton, R. W. (Principal Investigator); Boyd, W. E.; Clark, B. V.
1982-01-01
Because of their sensitivity to chlorophyll density, green leaf density, and leaf water density, two hand-held radiometers which have sensor bands coinciding with thematic mapper bands 3, 4, and 5 were used to calibrate green biomass to LANDSAT spectral ratios as a step towards using portable radiometers to speed up ground data acquisition. Two field reflectance panels monitored incoming radiation concurrently with sampling. Software routines were developed and used to extract data from uncorrected tapes of MSS data provided in NASA LANDSAT universal format. A LANDSAT biomass calibration curve estimated the range biomass over a four scene area and displayed this information spatially as a product in a format of use to ranchers. The regional biomass contour map is discussed.
40 CFR 92.122 - Smoke meter calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
... equipment response of zero; (b) Calibrated neutral density filters having approximately 10, 20, and 40 percent opacity shall be employed to check the linearity of the instrument. The filter(s) shall be... beam of light from the light source emanates, and the recorder response shall be noted. Filters with...
Busignies, Virginie; Leclerc, Bernard; Porion, Patrice; Evesque, Pierre; Couarraze, Guy; Tchoreloff, Pierre
2006-08-01
Direct compaction is a complex process that results in a density distribution inside the tablets which is often heterogeneous. Therefore, the density variations may affect the compact properties. A quantitative analysis of this phenomenon is still lacking. Recently, X-ray microtomography has been successfully used in pharmaceutical development to study qualitatively the impact of tablet shape and break-line on the density of pharmaceutical tablets. In this study, we evaluate the density profile in microcrystalline cellulose (Vivapur 12) compacts obtained at different mean porosities (ranging from 7.7% to 33.5%) using an X-ray tomography technique. First, the validity of the Beer-Lambert law is studied. Then, density calibration is performed, and density maps of cylindrical tablets are obtained and visualized using a colour-scale calibration plot, which is explained. As expected, important heterogeneity in density is observed and quantified. The higher densities in the peripheral region were particularly investigated and appraised with regard to the lower densities observed in the middle of the tablet. The results also underlined that, in the case of pharmaceutical tablets, it is important to differentiate the mechanical properties representative of the total tablet volume from those that only characterize the tablet surface, like Brinell hardness measurements.
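The Beer-Lambert-based density calibration this abstract validates can be sketched as a linear fit: since I = I0 exp(-mu_m * rho * t), the log-attenuation ln(I0/I) is linear in density at fixed thickness. The sketch below uses entirely synthetic numbers (a hypothetical mass attenuation coefficient and standards), not values from the study.

```python
import numpy as np

# Beer-Lambert density calibration sketch: I = I0 * exp(-mu_m * rho * t),
# so ln(I0/I) is linear in density rho for fixed path length t. Fitting
# the slope from standards of known density yields the mass attenuation
# coefficient, after which intensity maps convert directly to density maps.
# All numbers below are made up for illustration.

def fit_density_calibration(rho_std, i_std, i0, t):
    """Least-squares slope of ln(I0/I) vs rho, divided by t -> mu_m."""
    y = np.log(i0 / np.asarray(i_std))
    return np.polyfit(np.asarray(rho_std), y, 1)[0] / t

def density_from_intensity(i, i0, mu_m, t):
    """Invert Beer-Lambert for density (same units as the standards)."""
    return np.log(i0 / i) / (mu_m * t)

# Synthetic calibration standards: densities in g/cm^3, t = 0.5 cm.
rho_std = np.array([0.8, 1.0, 1.2, 1.4, 1.6])
i0, t, mu_true = 1000.0, 0.5, 0.2
i_std = i0 * np.exp(-mu_true * rho_std * t)
mu_fit = fit_density_calibration(rho_std, i_std, i0, t)
```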
The upgrade of the Thomson scattering system for measurement on the C-2/C-2U devices.
Zhai, K; Schindler, T; Kinley, J; Deng, B; Thompson, M C
2016-11-01
The C-2/C-2U Thomson scattering system has been substantially upgraded during the latter phase of the C-2/C-2U program. A Rayleigh channel has been added to each of the three polychromators of the C-2/C-2U Thomson scattering system. Onsite spectral calibration has been applied to avoid the issue of different channel responses at different spots on the photomultiplier tube surface. With the added Rayleigh channel, the absolute intensity response of the system is calibrated with Rayleigh scattering in argon gas from 0.1 to 4 Torr, where the Rayleigh scattering signal is comparable to the Thomson scattering signal at electron densities from 1 × 10^13 to 4 × 10^14 cm^-3. A new signal processing algorithm, using a maximum likelihood method and including detailed analysis of different noise contributions within the system, has been developed to obtain electron temperature and density profiles. The system setup, the spectral and intensity calibration procedures and their outcomes, the data analysis, and the results of electron temperature/density profile measurements will be presented.
Fracture prediction and calibration of a Canadian FRAX® tool: a population-based report from CaMos
Fraser, L.-A.; Langsetmo, L.; Berger, C.; Ioannidis, G.; Goltzman, D.; Adachi, J. D.; Papaioannou, A.; Josse, R.; Kovacs, C. S.; Olszynski, W. P.; Towheed, T.; Hanley, D. A.; Kaiser, S. M.; Prior, J.; Jamal, S.; Kreiger, N.; Brown, J. P.; Johansson, H.; Oden, A.; McCloskey, E.; Kanis, J. A.
2016-01-01
Summary A new Canadian WHO fracture risk assessment (FRAX®) tool to predict 10-year fracture probability was compared with observed 10-year fracture outcomes in a large Canadian population-based study (CaMos). The Canadian FRAX tool showed good calibration and discrimination for both hip and major osteoporotic fractures. Introduction The purpose of this study was to validate a new Canadian WHO fracture risk assessment (FRAX®) tool in a prospective, population-based cohort, the Canadian Multi-centre Osteoporosis Study (CaMos). Methods A FRAX tool calibrated to the Canadian population was developed by the WHO Collaborating Centre for Metabolic Bone Diseases using national hip fracture and mortality data. Ten-year FRAX probabilities with and without bone mineral density (BMD) were derived for CaMos women (N=4,778) and men (N=1,919) and compared with observed fracture outcomes to 10 years (Kaplan–Meier method). Cox proportional hazard models were used to investigate the contribution of individual FRAX variables. Results Mean overall 10-year FRAX probability with BMD for major osteoporotic fractures was not significantly different from the observed value in men [predicted 5.4% vs. observed 6.4% (95%CI 5.2–7.5%)] and only slightly lower in women [predicted 10.8% vs. observed 12.0% (95%CI 11.0–12.9%)]. FRAX was well calibrated for hip fracture assessment in women [predicted 2.7% vs. observed 2.7% (95%CI 2.2–3.2%)] but underestimated risk in men [predicted 1.3% vs. observed 2.4% (95%CI 1.7–3.1%)]. FRAX with BMD showed better fracture discrimination than FRAX without BMD or BMD alone. Age, body mass index, prior fragility fracture and femoral neck BMD were significant independent predictors of major osteoporotic fractures; sex, age, prior fragility fracture and femoral neck BMD were significant independent predictors of hip fractures. 
Conclusion The Canadian FRAX tool provides predictions consistent with observed fracture rates in Canadian women and men, thereby providing a valuable tool for Canadian clinicians assessing patients at risk of fracture. PMID:21161508
Falcone, James A.; Carlisle, Daren M.; Weber, Lisa C.
2010-01-01
Characterizing the relative severity of human disturbance in watersheds is often part of stream assessments and is frequently done with the aid of Geographic Information System (GIS)-derived data. However, the choice of variables and how they are used to quantify disturbance are often subjective. In this study, we developed a number of disturbance indices by testing sets of variables, scoring methods, and weightings of 33 potential disturbance factors derived from readily available GIS data. The indices were calibrated using 770 watersheds located in the western United States for which the severity of disturbance had previously been classified from detailed local data by the United States Environmental Protection Agency (USEPA) Environmental Monitoring and Assessment Program (EMAP). The indices were calibrated by determining which variable or variable combinations and aggregation method best differentiated between least- and most-disturbed sites. Indices composed of several variables performed better than any individual variable, and best results came from a threshold method of scoring using six uncorrelated variables: housing unit density, road density, pesticide application, dam storage, land cover along a mainstem buffer, and distance to nearest canal/pipeline. The final index was validated with 192 withheld watersheds and correctly classified about two-thirds (68%) of least- and most-disturbed sites. These results provide information about the potential for using a disturbance index as a screening tool for a priori ranking of watersheds at a regional/national scale, and which landscape variables and methods of combination may be most helpful in doing so.
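The threshold-scoring approach described above can be sketched in a few lines: each watershed variable contributes a point when it crosses a disturbance threshold, and the index is the sum. The variable names and thresholds below are hypothetical placeholders for illustration, not the calibrated values from the study.

```python
# Sketch of threshold scoring for a watershed disturbance index: score 1
# for each variable exceeding its threshold; higher totals mean more
# disturbance. Thresholds are hypothetical, not from the calibrated index.
# (The study's sixth variable, distance to nearest canal/pipeline, scores
# in the opposite direction and is omitted here for simplicity.)

THRESHOLDS = {
    "housing_unit_density": 10.0,   # units/km^2
    "road_density": 1.5,            # km/km^2
    "pesticide_application": 5.0,   # kg/km^2
    "dam_storage": 100.0,           # megaliters
    "buffer_developed_pct": 5.0,    # % non-natural cover in mainstem buffer
}

def disturbance_index(watershed, thresholds=THRESHOLDS):
    """Count of variables exceeding their threshold for one watershed."""
    return sum(1 for k, thr in thresholds.items()
               if watershed.get(k, 0.0) > thr)

pristine = {"road_density": 0.2, "dam_storage": 0.0}
urban = {"housing_unit_density": 250.0, "road_density": 4.0,
         "buffer_developed_pct": 40.0}
```

The design choice here mirrors the abstract's finding: a handful of uncorrelated variables scored against thresholds separates least- from most-disturbed sites better than any single variable.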
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, L.; Ding, W. X.; Brower, D. L.
2010-10-15
Differential interferometry employs two parallel laser beams with a small spatial offset (less than the beam width) and frequency difference (1-2 MHz) using common optics and a single mixer for heterodyne detection. The differential approach allows measurement of the electron density gradient, its fluctuations, as well as the equilibrium density distribution. This novel interferometry technique is immune to fringe skip errors and is particularly useful in harsh plasma environments. Accurate calibration of the beam spatial offset, accomplished by use of a rotating dielectric wedge, is required to enable broad application of this approach. Differential interferometry has been successfully used on the Madison Symmetric Torus reversed-field pinch plasma to directly measure fluctuation-induced transport along with equilibrium density profile evolution during pellet injection. In addition, by combining differential and conventional interferometry, both linear and nonlinear terms of the electron density fluctuation energy equation can be determined, thereby allowing quantitative investigation of the origin of the density fluctuations. The concept, calibration, and application of differential interferometry are presented.
Wang, X.; Chou, I-Ming; Hu, W.; Burruss, Robert; Sun, Q.; Song, Y.
2011-01-01
Raman spectroscopy is a powerful method for the determination of CO2 densities in fluid inclusions, especially for those with small size and/or low fluid density. The relationship between the CO2 Fermi diad split (Δ, cm^-1) and CO2 density (ρ, g/cm^3) has been documented by several previous studies. However, significant discrepancies exist among these studies, mainly because of inconsistent calibration procedures and a lack of measurements for CO2 fluids having densities between 0.21 and 0.75 g/cm^3, where liquid and vapor phases coexist near room temperature. In this study, a high-pressure optical cell and fused silica capillary capsules were used to prepare pure CO2 samples with densities between 0.0472 and 1.0060 g/cm^3. The measured CO2 Fermi diad splits were calibrated against two well-established Raman bands of benzonitrile at 1192.6 and 1598.9 cm^-1. The relationship between the CO2 Fermi diad split and density can be represented by: ρ = 47513.64243 − 1374.824414 × Δ + 13.25586152 × Δ^2 − 0.04258891551 × Δ^3 (r^2 = 0.99835, σ = 0.0253 g/cm^3), and this relationship was tested with synthetic fluid inclusions and natural CO2-rich fluid inclusions. The effects of temperature and the presence of H2O and CH4 on this relationship were also examined.
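The published densimeter above is just a cubic in the Fermi diad split and can be evaluated directly. The coefficients below are transcribed from the abstract; note the large, nearly cancelling terms, which is why all the quoted digits matter.

```python
# CO2 density (g/cm^3) from the Fermi diad split Delta (cm^-1), using the
# cubic calibration quoted in the abstract. The four terms nearly cancel,
# so the coefficients must be kept at full quoted precision.

def co2_density(delta):
    """CO2 density (g/cm^3) for Fermi diad split delta (cm^-1)."""
    return (47513.64243
            - 1374.824414 * delta
            + 13.25586152 * delta**2
            - 0.04258891551 * delta**3)
```

The calibration is only meaningful within the measured range (densities 0.0472 to 1.0060 g/cm^3); outside the corresponding split range the cubic extrapolates unphysically.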
NASA Astrophysics Data System (ADS)
Chowdhury, A. F. M. K.; Lockart, N.; Willgoose, G. R.; Kuczera, G. A.; Kiem, A.; Nadeeka, P. M.
2016-12-01
One of the key objectives of stochastic rainfall modelling is to capture the full variability of the climate system for future drought and flood risk assessment. However, it is not clear how well these models can capture future climate variability when they are calibrated to Global/Regional Climate Model (GCM/RCM) data, as these datasets are usually available only for short future periods (e.g. 20 years). This study assessed the ability of two stochastic daily rainfall models to capture climate variability by calibrating them to a dynamically downscaled RCM dataset in an east Australian catchment for the 1990-2010, 2020-2040, and 2060-2080 epochs. The two stochastic models are: (1) a hierarchical Markov Chain (MC) model, which we developed in a previous study, and (2) a semi-parametric MC model developed by Mehrotra and Sharma (2007). Our hierarchical model uses stochastic parameters for the MC and a Gamma distribution, while the semi-parametric model uses a modified MC process with memory of past periods and kernel density estimation. This study generated multiple realizations of rainfall series using the parameters of each model calibrated to the RCM dataset for each epoch. The generated rainfall series were used to generate synthetic streamflow with the SimHyd hydrology model. Assessing the synthetic rainfall and streamflow series, this study found that both stochastic models can incorporate a range of variability in rainfall, as well as in streamflow generation, for both current and future periods. However, the hierarchical model tends to overestimate the multiyear variability of wet spell lengths (and is therefore less likely to simulate long periods of drought and flood), while the semi-parametric model tends to overestimate mean annual rainfall depths and streamflow volumes (hence simulated droughts are likely to be less severe). The sensitivity of future drought and flood risk assessment to these limitations of both stochastic models will be discussed.
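The hierarchical and semi-parametric models above are beyond a short snippet, but the core building block both share, a Markov chain for wet/dry occurrence with a distribution for wet-day depths, can be sketched minimally. Transition probabilities and gamma parameters below are hypothetical, not the calibrated values from the study.

```python
import random

# Minimal first-order two-state Markov-chain daily rainfall generator with
# gamma-distributed wet-day depths. All parameters are illustrative
# placeholders, not calibrated to any RCM dataset.

P_DRY_TO_WET = 0.3   # P(wet today | dry yesterday)
P_WET_TO_DRY = 0.5   # P(dry today | wet yesterday)
GAMMA_SHAPE, GAMMA_SCALE = 0.8, 10.0  # wet-day depth distribution (mm)

def simulate_rainfall(n_days, rng=None):
    """Generate a daily rainfall series (mm); 0.0 marks a dry day."""
    rng = rng or random.Random()
    wet = False
    series = []
    for _ in range(n_days):
        p_wet = P_DRY_TO_WET if not wet else 1.0 - P_WET_TO_DRY
        wet = rng.random() < p_wet
        series.append(rng.gammavariate(GAMMA_SHAPE, GAMMA_SCALE) if wet else 0.0)
    return series

series = simulate_rainfall(100_000, random.Random(42))
wet_fraction = sum(d > 0 for d in series) / len(series)
# Stationary wet probability: p01 / (p01 + p10) = 0.3 / 0.8 = 0.375.
```

A long simulated series should reproduce the chain's stationary wet-day fraction; matching multiyear variability is exactly where, per the abstract, such simple formulations need the hierarchical or semi-parametric extensions.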
Linear Calibration of Radiographic Mineral Density Using Video-Digitizing Methods
NASA Technical Reports Server (NTRS)
Martin, R. Bruce; Papamichos, Thomas; Dannucci, Greg A.
1990-01-01
Radiographic images can provide quantitative as well as qualitative information if they are subjected to densitometric analysis. Using modern video-digitizing techniques, such densitometry can be readily accomplished using relatively inexpensive computer systems. However, such analyses are made more difficult by the fact that the density values read from the radiograph have a complex, nonlinear relationship to bone mineral content. This article derives the relationship between these variables from the nature of the intermediate physical processes, and presents a simple mathematical method for obtaining a linear calibration function using a step wedge or other standard.
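The step-wedge idea above can be sketched simply: digitized gray values relate nonlinearly to mineral content, but interpolating measured gray values against a wedge of known steps yields a mapping that is linear in mineral density by construction. The snippet below is an illustration with synthetic numbers (monotone interpolation standing in for the article's derived calibration function).

```python
import numpy as np

# Step-wedge linearization sketch: a wedge of known areal mineral
# densities is imaged alongside the specimen; interpolating specimen gray
# values against the wedge's (gray, mineral) pairs removes the nonlinear
# film/digitizer response. All numbers are synthetic.

wedge_mineral = np.array([0.0, 0.5, 1.0, 1.5, 2.0])       # g/cm^2 per step
wedge_gray = np.array([210.0, 160.0, 125.0, 100.0, 82.0])  # measured gray values

def gray_to_mineral(gray):
    """Map a gray value to areal mineral density via the wedge calibration."""
    # np.interp requires increasing x, so reverse the (decreasing) gray axis.
    return np.interp(gray, wedge_gray[::-1], wedge_mineral[::-1])
```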
Self-Calibrating Wave-Encoded Variable-Density Single-Shot Fast Spin Echo Imaging.
Chen, Feiyu; Taviani, Valentina; Tamir, Jonathan I; Cheng, Joseph Y; Zhang, Tao; Song, Qiong; Hargreaves, Brian A; Pauly, John M; Vasanawala, Shreyas S
2018-04-01
It is highly desirable in clinical abdominal MR scans to accelerate single-shot fast spin echo (SSFSE) imaging and reduce blurring due to T2 decay and partial-Fourier acquisition. To develop and investigate the clinical feasibility of wave-encoded variable-density SSFSE imaging for improved image quality and scan time reduction. Prospective controlled clinical trial. With Institutional Review Board approval and informed consent, the proposed method was assessed on 20 consecutive adult patients (10 male, 10 female; age range 24-84 years). A wave-encoded variable-density SSFSE sequence was developed for clinical 3.0T abdominal scans to enable high acceleration (3.5×) with full-Fourier acquisitions by: 1) introducing wave encoding with self-refocusing gradient waveforms to improve acquisition efficiency; 2) developing self-calibrated estimation of the wave-encoding point-spread function and coil sensitivity to improve motion robustness; and 3) incorporating a parallel imaging and compressed sensing reconstruction to reconstruct highly accelerated datasets. Image quality was compared pairwise with standard Cartesian acquisition independently and blindly by two radiologists on a scale from -2 to 2 for noise, contrast, confidence, sharpness, and artifacts. The average ratio of scan times between the two approaches was also compared. Wilcoxon signed-rank tests were used, with P < 0.05 considered statistically significant. Wave-encoded variable-density SSFSE significantly reduced the perceived noise level and improved the sharpness of the abdominal wall and the kidneys compared with standard acquisition (mean scores 0.8, 1.2, and 0.8, respectively; P < 0.003). No significant difference was observed for the other features (P = 0.11). An average 21% decrease in scan time was achieved using the proposed method.
Wave-encoded variable-density sampling SSFSE achieves improved image quality with clinically relevant echo time and reduced scan time, thus providing a fast and robust approach for clinical SSFSE imaging. Level of Evidence: 1. Technical Efficacy: Stage 6. J. Magn. Reson. Imaging 2018;47:954-966. © 2017 International Society for Magnetic Resonance in Medicine.
Finding trap stiffness of optical tweezers using digital filters.
Almendarez-Rangel, Pedro; Morales-Cruzado, Beatriz; Sarmiento-Gómez, Erick; Pérez-Gutiérrez, Francisco G
2018-02-01
Obtaining the trap stiffness and calibrating the position detection system are the basis of a force measurement using optical tweezers. Both calibration quantities can be calculated using several experimental methods available in the literature. In most cases, stiffness determination and detection system calibration are performed separately, often requiring procedures under very different conditions, so the consistency of the calibration is not assured if the environment changes between them. In this work, a new method to simultaneously obtain both the detection system calibration and the trap stiffness is presented. The method is based on the calculation of the power spectral density of positions through digital filters to obtain the harmonic contributions of the position signal. This method has the advantage of calculating both the trap stiffness and the photodetector calibration factor from the same dataset in situ. It also provides a direct means of avoiding unwanted frequencies that could greatly affect the calibration procedure, such as electric noise.
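For context, the standard power-spectral-density calibration that this work builds on can be sketched as follows. This is the textbook corner-frequency method, not the authors' digital-filter scheme: the position PSD of a trapped bead is a Lorentzian with corner frequency fc = k/(2*pi*gamma), so fitting fc yields the stiffness k. The physical parameters are typical illustrative values.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import welch

# Textbook corner-frequency trap calibration (illustrative, not the
# digital-filter method of the abstract). The bead position is simulated
# as an overdamped Ornstein-Uhlenbeck process, its PSD is a Lorentzian
# S(f) = d / (2*pi^2*(fc^2 + f^2)), and fitting fc recovers the stiffness.

kB_T = 4.11e-21            # J, thermal energy near room temperature
gamma = 9.42e-9            # kg/s, Stokes drag of a ~1 um bead in water
k_true = 1.0e-6            # N/m, trap stiffness to recover
fs, n = 10_000.0, 2**17    # sampling rate (Hz), number of samples
dt = 1.0 / fs

# Euler-Maruyama simulation of the overdamped bead.
rng = np.random.default_rng(0)
decay = 1.0 - (k_true / gamma) * dt
noise = np.sqrt(2.0 * kB_T * dt / gamma) * rng.standard_normal(n)
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = decay * x[i - 1] + noise[i]

# Averaged periodogram, then Lorentzian fit over the resolved band.
f, psd = welch(x, fs=fs, nperseg=4096)
mask = (f > 1.0) & (f < 1000.0)

def lorentzian(f, d, fc):
    return d / (2.0 * np.pi**2 * (fc**2 + f**2))

(d_fit, fc_fit), _ = curve_fit(lorentzian, f[mask], psd[mask],
                               p0=(2.0 * kB_T / gamma, 20.0))
fc_fit = abs(fc_fit)                 # fc enters squared; fix the sign
k_fit = 2.0 * np.pi * gamma * fc_fit  # recovered stiffness (N/m)
```

Here the true corner frequency is about 17 Hz; the fit should land close to it, and the recovered stiffness close to k_true.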
Absolute photometric calibration of IRAC: lessons learned using nine years of flight data
NASA Astrophysics Data System (ADS)
Carey, S.; Ingalls, J.; Hora, J.; Surace, J.; Glaccum, W.; Lowrance, P.; Krick, J.; Cole, D.; Laine, S.; Engelke, C.; Price, S.; Bohlin, R.; Gordon, K.
2012-09-01
Significant improvements in our understanding of various photometric effects have occurred over the more than nine years of flight operations of the Infrared Array Camera aboard the Spitzer Space Telescope. With the accumulation of calibration data, photometric variations that are intrinsic to the instrument can now be mapped with high fidelity. Using all existing data on calibration stars, the array location-dependent photometric correction (the variation of flux with position on the array) and the correction for intra-pixel sensitivity variation (pixel-phase) have been modeled simultaneously. Examination of the warm mission data enabled the characterization of the underlying form of the pixel-phase variation in cryogenic data. In addition to the accumulation of calibration data, significant improvements in the calibration of the truth spectra of the calibrators have taken place. Using the work of Engelke et al. (2006), the KIII calibrators show no offset compared to the AV calibrators, providing a second pillar of the calibration scheme. The current cryogenic calibration is better than 3% in an absolute sense, with most of the uncertainty still in the knowledge of the true flux densities of the primary calibrators. We present the final state of the cryogenic IRAC calibration and a comparison of the IRAC calibration to an independent calibration methodology using the HST primary calibrators.
Corsini, Niccolò R C; Greco, Andrea; Hine, Nicholas D M; Molteni, Carla; Haynes, Peter D
2013-08-28
We present an implementation in a linear-scaling density-functional theory code of an electronic enthalpy method, which has been found to be natural and efficient for the ab initio calculation of finite systems under hydrostatic pressure. Based on a definition of the system volume as that enclosed within an electronic density isosurface [M. Cococcioni, F. Mauri, G. Ceder, and N. Marzari, Phys. Rev. Lett. 94, 145501 (2005)], it supports both geometry optimizations and molecular dynamics simulations. We introduce an approach for calibrating the parameters defining the volume in the context of geometry optimizations and discuss their significance. Results in good agreement with simulations using explicit solvents are obtained, validating our approach. Size-dependent pressure-induced structural transformations and variations in the energy gap of hydrogenated silicon nanocrystals are investigated, including one comparable in size to recent experiments. A detailed analysis of the polyamorphic transformations reveals three types of amorphous structures and their persistence on depressurization is assessed.
Calibration of Ge gamma-ray spectrometers for complex sample geometries and matrices
NASA Astrophysics Data System (ADS)
Semkow, T. M.; Bradt, C. J.; Beach, S. E.; Haines, D. K.; Khan, A. J.; Bari, A.; Torres, M. A.; Marrantino, J. C.; Syed, U.-F.; Kitto, M. E.; Hoffman, T. J.; Curtis, P.
2015-11-01
A comprehensive study of the efficiency calibration and calibration verification of Ge gamma-ray spectrometers was performed using semi-empirical, computational Monte-Carlo (MC), and transfer methods. The aim of this study was to evaluate the accuracy of the quantification of gamma-emitting radionuclides in complex matrices normally encountered in environmental and food samples. A wide range of gamma energies from 59.5 to 1836.0 keV and geometries from a 10-mL jar to a 1.4-L Marinelli beaker were studied on four Ge spectrometers with relative efficiencies between 102% and 140%. Density and coincidence summing corrections were applied. Innovative techniques were developed for the preparation of artificial complex matrices from materials such as acidified water, polystyrene, ethanol, sugar, and sand, resulting in densities ranging from 0.3655 to 2.164 g cm^-3. They were spiked with gamma activity traceable to international standards and used for calibration verifications. A quantitative method of tuning MC calculations to experiment was developed based on a multidimensional chi-square paraboloid.
Uncertainty quantification in LES of channel flow
Safta, Cosmin; Blaylock, Myra; Templeton, Jeremy; ...
2016-07-12
In this paper, we present a Bayesian framework for estimating joint densities for large eddy simulation (LES) sub-grid scale model parameters based on canonical forced isotropic turbulence direct numerical simulation (DNS) data. The framework accounts for noise in the independent variables, and we present alternative formulations for accounting for discrepancies between model and data. To generate probability densities for flow characteristics, posterior densities for sub-grid scale model parameters are propagated forward through LES of channel flow and compared with DNS data. Synthesis of the calibration and prediction results demonstrates that the model parameters have an explicit filter-width dependence and are highly correlated. Discrepancies between DNS and calibrated LES results point to additional model-form inadequacies that need to be accounted for.
THE KCAL VERA 22 GHz CALIBRATOR SURVEY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrov, L.; Honma, M.; Shibata, S. M., E-mail: Leonid.Petrov@lpetrov.net
2012-02-15
We observed a sample of 1536 sources with correlated flux densities brighter than 200 mJy at 8 GHz with the very long baseline interferometry (VLBI) array VLBI Exploration of Radio Astrometry at 22 GHz. One half of the target sources were detected; the detection limit was around 200 mJy. We derived the correlated flux densities of 877 detected sources in three ranges of projected baseline lengths. The objective of these observations was to determine the suitability of given sources as phase calibrators for dual-beam and phase-referencing observations at high frequencies. Preliminary results indicate that the number of compact extragalactic sources at 22 GHz brighter than a given correlated flux density level is a factor of two smaller than at 8 GHz.
NASA Astrophysics Data System (ADS)
Kumar, Anil; Kumar, Harish; Mandal, Goutam; Das, M. B.; Sharma, D. C.
The present paper discusses the establishment of traceability of reference grade hydrometers at the National Physical Laboratory, India (NPLI). The reference grade hydrometers are calibrated and traceable to the primary solid density standard. The calibration has been performed according to a standard procedure based on Cuckow's method, and the calibrated reference grade hydrometers cover a wide range. The uncertainty of the reference grade hydrometers has been computed, and corrections are calculated for the scale readings at which observations are taken.
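The core of Cuckow's method can be sketched to first order (neglecting air buoyancy and surface-tension corrections, which a real calibration must include): weighing the hydrometer in air and then suspended to a given scale line in a reference liquid of known density gives the immersed volume at that line, and hence the density that line corresponds to when the hydrometer floats freely. The numeric values below are made up for illustration.

```python
# First-order sketch of Cuckow's method. Real calibrations apply air
# buoyancy and surface-tension corrections omitted here.

def cuckow_density(m_air, m_liquid, rho_ref):
    """Density (kg/m^3) at the scale line.

    m_air: mass of the hydrometer weighed in air (kg)
    m_liquid: apparent mass when suspended to the line in the
              reference liquid (kg)
    rho_ref: density of the reference liquid (kg/m^3)
    """
    v_line = (m_air - m_liquid) / rho_ref  # immersed volume at the line
    return m_air / v_line                  # floating condition: m = rho * V

# Hypothetical example: 80 g hydrometer, 15 g apparent mass at the line
# in water near 20 degrees C (rho_ref ~ 998.2 kg/m^3).
rho_mark = cuckow_density(0.080, 0.015, 998.2)
```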
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Raymond H.; Truax, Ryan A.; Lankford, David A.
Solid-phase iron concentrations and generalized composite surface complexation models were used to evaluate procedures for determining uranium sorption on oxidized aquifer material at a proposed U in situ recovery (ISR) site. At the proposed Dewey Burdock ISR site in South Dakota, USA, oxidized aquifer material occurs downgradient of the U ore zones. Solid-phase Fe concentrations did not explain our batch sorption test results, though total extracted Fe appeared to be positively correlated with overall measured U sorption. Batch sorption test results were used to develop generalized composite surface complexation models that incorporated the full generic sorption potential of each sample, without detailed mineralogic characterization. The resultant models provide U sorption parameters (site densities and equilibrium constants) for reactive transport modeling. The generalized composite surface complexation sorption models were calibrated to batch sorption data from three oxidized core samples using inverse modeling, and gave larger sorption parameters than U sorption on the measured solid-phase Fe alone. These larger sorption parameters can significantly influence reactive transport modeling, potentially increasing U attenuation. Because of the limited number of calibration points, inverse modeling required reducing the number of estimated parameters by fixing two of them. The best-fit models used fixed values for the equilibrium constants, with the sorption site densities estimated by the inversion process. While these inverse routines did provide best-fit sorption parameters, local minima and correlated parameters might require further evaluation. Despite our limited number of proxy samples, the procedures presented provide a valuable methodology to consider for sites where metal sorption parameters are required.
Furthermore, these sorption parameters can be used in reactive transport modeling to assess downgradient metal attenuation, especially when no other calibration data are available, such as at proposed U ISR sites.
Johnson, Raymond H.; Truax, Ryan A.; Lankford, David A.; ...
2016-02-03
Status of the prototype Pulsed Photonuclear Assessment (PPA) inspection system
NASA Astrophysics Data System (ADS)
Jones, James L.; Blackburn, Brandon W.; Norman, Daren R.; Watson, Scott M.; Haskell, Kevin J.; Johnson, James T.; Hunt, Alan W.; Harmon, Frank; Moss, Calvin
2007-08-01
The Idaho National Laboratory, in collaboration with Idaho State University's Idaho Accelerator Center and the Los Alamos National Laboratory, continues to develop the Pulsed Photonuclear Assessment (PPA) technique for shielded nuclear material detection in large volume configurations, such as cargo containers. In recent years, the Department of Homeland Security has supported the development of a prototype PPA cargo inspection system. This PPA system integrates novel neutron and gamma-ray detectors for nuclear material detection along with a complementary and unique gray scale, density mapping component for significant shield material detection. This paper will present the developmental status of the prototype system, its detection performance using several INL Calibration Pallets, and planned enhancements to further increase its nuclear material detection capability.
Calibration and evaluation of a nuclear density and moisture measuring apparatus.
DOT National Transportation Integrated Search
1963-11-01
The research objectives of this project were to investigate a new method of in-place determination of soil densities and moisture levels employing a nuclear physics principle of the gamma radiation function as the measurement technique, with s...
Experimental power density distribution benchmark in the TRIGA Mark II reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snoj, L.; Stancar, Z.; Radulovic, V.
2012-07-01
In order to improve the power calibration process and to benchmark the existing computational model of the TRIGA Mark II reactor at the Josef Stefan Inst. (JSI), a bilateral project was started as part of the agreement between the French Commissariat a l'energie atomique et aux energies alternatives (CEA) and the Ministry of higher education, science and technology of Slovenia. One of the objectives of the project was to analyze and improve the power calibration process of the JSI TRIGA reactor (procedural improvement and uncertainty reduction) by using absolutely calibrated CEA fission chambers (FCs). This is one of the few available power density distribution benchmarks for testing not only the fission rate distribution but also the absolute values of the fission rates. Our preliminary calculations indicate that the total experimental uncertainty of the measured reaction rate is sufficiently low that the experiments could be considered as benchmark experiments. (authors)
D.J. Miller; K.M. Burnett
2007-01-01
We use regionally available digital elevation models and land-cover data, calibrated with ground- and photo-based landslide inventories, to produce spatially distributed estimates of shallow, translational landslide density (number/unit area) for the Oregon Coast Range. We resolve relationships between landslide density and forest cover. We account for topographic...
NASA Technical Reports Server (NTRS)
Deyoung, James A.; Klepczynski, William J.; Mckinley, Angela Davis; Powell, William M.; Mai, Phu V.; Hetzel, P.; Bauch, A.; Davis, J. A.; Pearce, P. R.; Baumont, Francoise S.
1995-01-01
The international transatlantic time and frequency transfer experiment was designed by the participating laboratories and was implemented during 1994 to test the international communications path involving a large number of transmitting stations. This paper will present empirically determined clock and time scale differences, time and frequency domain instabilities, and a representative power spectral density analysis. The co-location experiments, which will allow absolute calibration of the participating laboratories, have been performed. Absolute time differences and accuracy levels of this experiment will be assessed in the near future.
NATIONAL GEODATABASE OF TIDAL STREAM POWER RESOURCE IN USA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Brennan T; Neary, Vincent S; Stewart, Kevin M
2012-01-01
A geodatabase of tidal constituents is developed to present the regional assessment of tidal stream power resource in the USA. Tidal currents are numerically modeled with the Regional Ocean Modeling System (ROMS) and calibrated with the available measurements of tidal current speeds and water level surfaces. The performance of the numerical model in predicting the tidal currents and water levels is assessed by an independent validation. The geodatabase is published on a public domain via a spatial database engine with interactive tools to select, query and download the data. Regions with the maximum average kinetic power density exceeding 500 W/m^2 (corresponding to a current speed of ~1 m/s), total surface area larger than 0.5 km^2 and depth greater than 5 m are defined as hotspots and documented. The regional assessment indicates that the state of Alaska (AK) has the largest number of locations with considerably high kinetic power density, followed by Maine (ME), Washington (WA), Oregon (OR), California (CA), New Hampshire (NH), Massachusetts (MA), New York (NY), New Jersey (NJ), North and South Carolina (NC, SC), Georgia (GA), and Florida (FL).
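The hotspot criteria quoted above can be sketched as a simple screening filter. The kinetic power density relation P = 0.5*rho*v^3 (which gives roughly 512 W/m^2 at 1 m/s for seawater) is standard, but the site records and field names below are illustrative assumptions, not entries from the published geodatabase.

```python
RHO_SEAWATER = 1025.0  # kg/m^3, assumed mean seawater density

def kinetic_power_density(speed_ms):
    # P = 0.5 * rho * v^3, in W/m^2
    return 0.5 * RHO_SEAWATER * speed_ms ** 3

def is_hotspot(site):
    # Criteria from the assessment: mean kinetic power density > 500 W/m^2
    # (~1 m/s current), surface area > 0.5 km^2, depth > 5 m.
    return (kinetic_power_density(site["mean_speed"]) > 500.0
            and site["area_km2"] > 0.5
            and site["depth_m"] > 5.0)

sites = [  # hypothetical site records, not geodatabase entries
    {"name": "A", "mean_speed": 1.2, "area_km2": 0.8, "depth_m": 12.0},
    {"name": "B", "mean_speed": 0.6, "area_km2": 2.0, "depth_m": 20.0},
]
hotspots = [s["name"] for s in sites if is_hotspot(s)]
print(hotspots)  # ['A']: 0.5*1025*1.2^3 is about 886 W/m^2, clearing the threshold
```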
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fowler, E. E.; Sellers, T. A.; Lu, B.
Purpose: The Breast Imaging Reporting and Data System (BI-RADS) breast composition descriptors are used for standardized mammographic reporting and are assessed visually. This reporting is clinically relevant because breast composition can impact mammographic sensitivity and is a breast cancer risk factor. New techniques are presented and evaluated for generating automated BI-RADS breast composition descriptors using both raw and calibrated full field digital mammography (FFDM) image data. Methods: A matched case-control dataset with FFDM images was used to develop three automated measures for the BI-RADS breast composition descriptors. Histograms of each calibrated mammogram in the percent glandular (pg) representation were processed to create the new BR_pg measure. Two previously validated measures of breast density derived from calibrated and raw mammograms were converted to the new BR_vc and BR_vr measures, respectively. These three measures were compared with the radiologist-reported BI-RADS composition assessments from the patient records. The authors used two optimization strategies with differential evolution to create these measures: method-1 used breast cancer status; and method-2 matched the reported BI-RADS descriptors. Weighted kappa (κ) analysis was used to assess the agreement between the new measures and the reported measures. Each measure's association with breast cancer was evaluated with odds ratios (ORs) adjusted for body mass index, breast area, and menopausal status. ORs were estimated as per unit increase with 95% confidence intervals. Results: The three BI-RADS measures generated by method-1 had κ between 0.25-0.34. These measures were significantly associated with breast cancer status in the adjusted models: (a) OR = 1.87 (1.34, 2.59) for BR_pg; (b) OR = 1.93 (1.36, 2.74) for BR_vc; and (c) OR = 1.37 (1.05, 1.80) for BR_vr. The measures generated by method-2 had κ between 0.42-0.45.
Two of these measures were significantly associated with breast cancer status in the adjusted models: (a) OR = 1.95 (1.24, 3.09) for BR_pg; (b) OR = 1.42 (0.87, 2.32) for BR_vc; and (c) OR = 2.13 (1.22, 3.72) for BR_vr. The radiologist-reported measures from the patient records showed a similar association, OR = 1.49 (0.99, 2.24), although only borderline statistically significant. Conclusions: A general framework was developed and validated for converting calibrated mammograms and continuous measures of breast density to fully automated approximations for the BI-RADS breast composition descriptors. The techniques are general and suitable for a broad range of clinical and research applications.
2012-08-07
…sealed quartz ampoule under a mercury overpressure in a conventional clam-shell furnace. The reduction in the dislocation density has been studied as… [report excerpt; table-of-contents fragments: 2.6.4 Etch Pit Characterization; 3 Furnace Setup and Calibration; 3.1.2 Furnace Calibration; 4 In Situ]
The upgrade of the Thomson scattering system for measurement on the C-2/C-2U devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhai, K.; Schindler, T.; Kinley, J.
The C-2/C-2U Thomson scattering system has been substantially upgraded during the latter phase of the C-2/C-2U program. A Rayleigh channel has been added to each of the three polychromators of the C-2/C-2U Thomson scattering system. Onsite spectral calibration has been applied to avoid the issue of different channel responses at different spots on the photomultiplier tube surface. With the added Rayleigh channel, the absolute intensity response of the system is calibrated with Rayleigh scattering in argon gas from 0.1 to 4 Torr, where the Rayleigh scattering signal is comparable to the Thomson scattering signal at electron densities from 1 × 10^13 to 4 × 10^14 cm^-3. A new signal processing algorithm, using a maximum likelihood method and including detailed analysis of different noise contributions within the system, has been developed to obtain electron temperature and density profiles. The system setup, spectral and intensity calibration procedure and its outcome, data analysis, and the results of electron temperature/density profile measurements will be presented.
NASA Astrophysics Data System (ADS)
Bozhenkov, S. A.; Beurskens, M.; Dal Molin, A.; Fuchert, G.; Pasch, E.; Stoneking, M. R.; Hirsch, M.; Höfel, U.; Knauer, J.; Svensson, J.; Trimino Mora, H.; Wolf, R. C.
2017-10-01
The optimized stellarator Wendelstein 7-X started operation in December 2015 with a 10 week limiter campaign. Divertor experiments will begin in the second half of 2017. The W7-X Thomson scattering system is an essential diagnostic for electron density and temperature profiles. In this paper the Thomson scattering diagnostic is described in detail, including its design, calibration, data evaluation and first experimental results. Plans for further development are also presented. The W7-X Thomson system is a Nd:YAG setup with up to five lasers, two sets of light collection lenses viewing the entire plasma cross-section, fiber bundles and filter based polychromators. To reduce hardware costs, two or three scattering volumes are measured with a single polychromator. The relative spectral calibration is carried out with the aid of a broadband supercontinuum light source. The absolute calibration is performed by observing Raman scattering in nitrogen. The electron temperatures and densities are recovered by Bayesian modelling. In the first campaign, the diagnostic was equipped for 10 scattering volumes. It provided temperature profiles comparable to those measured using an electron cyclotron emission diagnostic and line integrated densities within 10% of those from a dispersion interferometer.
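The absolute-calibration logic described above (scattering from a gas at a known density fixes the system response, which then converts a Thomson signal into an electron density) can be sketched as a round trip. All numerical values and cross-sections below are placeholders in arbitrary units, not W7-X or C-2U parameters.

```python
def calibration_constant(gas_signal, gas_density, gas_cross_section):
    # Absolute intensity response: detected signal per scatterer per unit
    # cross-section, measured with a fill gas (e.g. nitrogen for Raman,
    # argon for Rayleigh) at a known density in the vessel.
    return gas_signal / (gas_density * gas_cross_section)

def electron_density(thomson_signal, cal_const, thomson_cross_section):
    # With the response fixed, a Thomson signal converts to electron density.
    return thomson_signal / (cal_const * thomson_cross_section)

# Placeholder numbers for a consistency check (arbitrary units)
cal = calibration_constant(2.0e3, 2.5e23, 1.0e-34)
n_true = 1.0e19
signal = cal * 6.65e-29 * n_true  # forward model of the Thomson signal
print(electron_density(signal, cal, 6.65e-29))  # recovers ~1e19
```

The point of using a gas target is that its density is set by pressure and temperature alone, so the calibration does not depend on knowing the detector response from first principles.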
NASA Astrophysics Data System (ADS)
Helble, Tyler Adam
Passive acoustic monitoring of marine mammal calls is an increasingly important method for assessing population numbers, distribution, and behavior. Automated methods are needed to aid in the analyses of the recorded data. When a mammal vocalizes in the marine environment, the received signal is a filtered version of the original waveform emitted by the marine mammal. The waveform is reduced in amplitude and distorted due to propagation effects that are influenced by the bathymetry and environment. It is important to account for these effects to determine a site-specific probability of detection for marine mammal calls in a given study area. A knowledge of that probability function over a range of environmental and ocean noise conditions allows vocalization statistics from recordings of single, fixed, omnidirectional sensors to be compared across sensors and at the same sensor over time with less bias and uncertainty in the results than direct comparison of the raw statistics. This dissertation focuses on both the development of new tools needed to automatically detect humpback whale vocalizations from single-fixed omnidirectional sensors as well as the determination of the site-specific probability of detection for monitoring sites off the coast of California. Using these tools, detected humpback calls are "calibrated" for environmental properties using the site-specific probability of detection values, and presented as call densities (calls per square kilometer per time). A two-year monitoring effort using these calibrated call densities reveals important biological and ecological information on migrating humpback whales off the coast of California. Call density trends are compared between the monitoring sites and at the same monitoring site over time. Call densities also are compared to several natural and human-influenced variables including season, time of day, lunar illumination, and ocean noise. 
The results reveal substantial differences in call densities between the two sites which were not noticeable using uncorrected (raw) call counts. Additionally, a Lombard effect was observed for humpback whale vocalizations in response to increasing ocean noise. The results presented in this thesis develop techniques to accurately measure marine mammal abundances from passive acoustic sensors.
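The correction from raw call counts to calibrated call densities amounts to dividing detections by the monitored area weighted by the site-specific probability of detection. This is a minimal sketch of that idea; the counts, areas, and detection probabilities are hypothetical, not values from the California monitoring sites.

```python
def call_density(n_detected, area_km2, p_detection):
    # Correct raw counts with the site-specific probability of detection:
    # estimated call density = detected calls / (monitored area * P(detect)).
    return n_detected / (area_km2 * p_detection)

# Hypothetical sites: equal raw counts, different detectability
site_a = call_density(120, 1000.0, 0.6)  # quiet site, P(detect) = 0.6
site_b = call_density(120, 1000.0, 0.3)  # noisy site, P(detect) = 0.3
print(site_a, site_b)  # 0.2 vs 0.4 calls/km^2: raw counts alone would hide this
```

This is exactly the kind of between-site difference the dissertation reports: identical raw statistics can correspond to very different true call densities once propagation and noise conditions are accounted for.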
NASA Astrophysics Data System (ADS)
Shah, A. K.; Boyd, O. S.; Sowers, T.; Thompson, E.
2017-12-01
Seismic hazard assessments depend on an accurate prediction of ground motion, which in turn depends on a base knowledge of three-dimensional variations in density, seismic velocity, and attenuation. We are building a National Crustal Model (NCM) using a physical theoretical foundation, 3-D geologic model, and measured data for calibration. An initial version of the NCM for the western U.S. is planned to be available in mid-2018 and for the remainder of the U.S. in 2019. The theoretical foundation of the NCM couples Biot-Gassmann theory for the porous composite with mineral physics calculations for the solid mineral matrix. The 3-D geologic model is defined through integration of results from a range of previous studies including maps of surficial porosity, surface and subsurface lithology, and the depths to bedrock and crystalline basement or seismic equivalent. The depths to bedrock and basement are estimated using well, seismic, and gravity data; in many cases these data are compiled by combining previous studies. Two parameters controlling how porosity changes with depth are assumed to be a function of lithology and calibrated using measured shear- and compressional-wave velocity and density profiles. Uncertainties in parameters derived from the model increase with depth and are dependent on the quantity and quality of input data sets. An interface to the model provides parameters needed for ground motion prediction equations in the Western U.S., including, for example, the time-averaged shear-wave velocity in the upper 30 meters (VS30) and the depths to 1.0 and 2.5 km/s shear-wave speeds (Z1.0 and Z2.5), which have a very rough correlation to the depths to bedrock and basement, as well as interpolated 3D models for use with various Urban Hazard Mapping strategies. 
We compare parameters needed for ground motion prediction equations including VS30, Z1.0, and Z2.5 between those derived from existing models, for example, 3-D velocity models for southern California available from the Southern California Earthquake Center, and those derived from the NCM and assess their ability to reduce the variance of observed ground motions.
Atmospheric drag model calibrations for spacecraft lifetime prediction
NASA Technical Reports Server (NTRS)
Binebrink, A. L.; Radomski, M. S.; Samii, M. V.
1989-01-01
Although solar activity prediction uncertainty normally dominates the decay prediction error budget for near-Earth spacecraft, the effect of drag force modeling errors for given levels of solar activity needs to be considered. Two atmospheric density models, the modified Harris-Priester model and the Jacchia-Roberts model, were analyzed for their ability to reproduce the decay histories of the Solar Mesosphere Explorer (SME) and Solar Maximum Mission (SMM) spacecraft in the 490- to 540-kilometer altitude range. Historical solar activity data were used in the input to the density computations. For each spacecraft and atmospheric model, a drag scaling adjustment factor was determined for a high-solar-activity year, such that the observed annual decay in the mean semimajor axis was reproduced by an averaged variation-of-parameters (VOP) orbit propagation. The SME (SMM) calibration was performed using calendar year 1983 (1982). The resulting calibration factors differ by 20 to 40 percent from the predictions of the prelaunch ballistic coefficients. The orbit propagations for each spacecraft were extended to the middle of 1988 using the calibrated drag models. For the Jacchia-Roberts density model, the observed decay in the mean semimajor axis of SME (SMM) over the 4.5-year (5.5-year) predictive period was reproduced to within 1.5 (4.4) percent. The corresponding figure for the Harris-Priester model was 8.6 (20.6) percent. Detailed results and conclusions regarding the importance of accurate drag force modeling for lifetime predictions are presented.
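The drag scaling adjustment described above reduces, in its simplest form, to a ratio of observed to modeled decay that rescales the ballistic coefficient. The decay figures and coefficient below are hypothetical illustrations, not the SME/SMM values.

```python
def drag_scale_factor(observed_decay, modeled_decay):
    # Calibration factor: ratio that scales the modeled drag so the propagated
    # annual decay in mean semimajor axis matches the observed decay.
    return observed_decay / modeled_decay

# Hypothetical annual decay figures (km/yr), not the SME/SMM values
factor = drag_scale_factor(9.6, 8.0)
calibrated_bc = 0.010 * factor  # rescale an assumed ballistic coefficient
print(factor)  # 1.2, i.e. a 20% adjustment, within the 20-40% range reported
```

In practice the factor is found by iterating a full orbit propagation rather than by a single ratio, but the calibrated quantity is the same: an effective drag scaling applied to the prelaunch ballistic coefficient.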
On evaluating the robustness of spatial-proximity-based regionalization methods
NASA Astrophysics Data System (ADS)
Lebecherel, Laure; Andréassian, Vazken; Perrin, Charles
2016-08-01
In the absence of streamflow data to calibrate a hydrological model, its parameters must be inferred by a regionalization method. In this technical note, we discuss a specific class of regionalization methods, those based on spatial proximity, which transfer hydrological information (typically calibrated parameter sets) from neighboring gauged stations to the target ungauged station. The efficiency of any spatial-proximity-based regionalization method will depend on the density of the available streamgauging network, and the purpose of this note is to discuss how to assess the robustness of the regionalization method (i.e., its resilience to an increasingly sparse hydrometric network). We compare two options: (i) the random hydrometrical reduction (HRand) method, which consists in sub-sampling the existing gauging network around the target ungauged station, and (ii) the hydrometrical desert method (HDes), which consists in ignoring the closest gauged stations. Our tests suggest that the HDes method should be preferred, because it provides a more realistic view of regionalization performance.
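The HDes idea (transfer parameters from the nearest gauged stations, but exclude everything inside a "desert" radius around the target) can be sketched as a donor-selection routine. The station coordinates, IDs, and radius below are hypothetical.

```python
def nearest_donors(target, stations, n_donors, desert_radius_km=0.0):
    # Spatial-proximity regionalization: pick the n nearest gauged stations,
    # optionally skipping all stations inside a "hydrometrical desert" radius
    # around the ungauged target (the HDes robustness test).
    def dist(s):
        return ((s["x"] - target["x"]) ** 2 + (s["y"] - target["y"]) ** 2) ** 0.5
    eligible = [s for s in stations if dist(s) > desert_radius_km]
    return [s["id"] for s in sorted(eligible, key=dist)[:n_donors]]

# Hypothetical gauged stations at 5, 20, 40 and 80 km from the target
stations = [{"id": i, "x": x, "y": 0.0} for i, x in enumerate([5, 20, 40, 80])]
target = {"x": 0.0, "y": 0.0}
print(nearest_donors(target, stations, 2))                       # [0, 1]
print(nearest_donors(target, stations, 2, desert_radius_km=30))  # [2, 3]
```

Growing the desert radius and watching how transferred-parameter performance degrades is precisely the robustness curve the note advocates.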
Thermospheric density and wind retrieval from Swarm observations
NASA Astrophysics Data System (ADS)
Visser, Pieter; Doornbos, Eelco; van den IJssel, Jose; Teixeira da Encarnação, João
2013-11-01
The three-satellite ESA Swarm mission aims at mapping the Earth's global geomagnetic field at unprecedented spatial and temporal resolution and precision. Swarm also aims at observing thermospheric density and possibly horizontal winds. Precise orbit determination (POD) and Thermospheric Density and Wind (TDW) chains form part of the Swarm Constellation and Application Facility (SCARF), which will provide the so-called Level 2 products. The POD and TDW chains generate the orbit, accelerometer calibration, and thermospheric density and wind Level 2 products. The POD and TDW chains have been tested with data from the CHAMP and GRACE missions, indicating that a 3D orbit precision of about 10 cm can be reached. In addition, POD allows the determination of daily accelerometer bias and scale factor values with a precision of around 10-15 nm/s^2 and 0.01-0.02, respectively, for the flight direction. With these accelerometer calibration parameter values, derived thermospheric density is consistent at the 9-11% level (standard deviation) with values predicted by models (taking into account that model values are 20-30% higher). The retrieval of crosswinds forms part of the processing chain, but will be challenging. The Swarm observations will be used for further developing and improving density and wind retrieval algorithms.
High accuracy satellite drag model (HASDM)
NASA Astrophysics Data System (ADS)
Storz, M.; Bowman, B.; Branson, J.
The dominant error source in the force models used to predict low perigee satellite trajectories is atmospheric drag. Errors in operational thermospheric density models cause significant errors in predicted satellite positions, since these models do not account for dynamic changes in atmospheric drag for orbit predictions. The Air Force Space Battlelab's High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out three days) a dynamically varying high-resolution density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm that solves for the phases and amplitudes of the diurnal, semidiurnal and terdiurnal variations of thermospheric density near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites. The density correction is expressed as a function of latitude, local solar time and altitude. In HASDM, a time series prediction filter relates the extreme ultraviolet (EUV) energy index E10.7 and the geomagnetic storm index ap to the DCA density correction parameters. The E10.7 index is generated by the SOLAR2000 model, the first full spectrum model of solar irradiance. The estimated and predicted density fields will be used operationally to significantly improve the accuracy of predicted trajectories for all low perigee satellites.
High accuracy satellite drag model (HASDM)
NASA Astrophysics Data System (ADS)
Storz, Mark F.; Bowman, Bruce R.; Branson, Major James I.; Casali, Stephen J.; Tobiska, W. Kent
The dominant error source in force models used to predict low-perigee satellite trajectories is atmospheric drag. Errors in operational thermospheric density models cause significant errors in predicted satellite positions, since these models do not account for dynamic changes in atmospheric drag for orbit predictions. The Air Force Space Battlelab's High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out three days) a dynamically varying global density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm that solves for the phases and amplitudes of the diurnal and semidiurnal variations of thermospheric density near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites. The density correction is expressed as a function of latitude, local solar time and altitude. In HASDM, a time series prediction filter relates the extreme ultraviolet (EUV) energy index E10.7 and the geomagnetic storm index ap, to the DCA density correction parameters. The E10.7 index is generated by the SOLAR2000 model, the first full spectrum model of solar irradiance. The estimated and predicted density fields will be used operationally to significantly improve the accuracy of predicted trajectories for all low-perigee satellites.
2015-09-30
…interpolation was used to estimate fin whale density in between the hydrophone locations, and the result plotted as a density image. This was repeated every 5… singing fin whale density throughout the year for the study location off Portugal. Color indicates whale density, with calibration scale at right; yellow… spots are hydrophone locations; timeline at top indicates the time of year; circle at lower right is 1000 km^2, the area used in the unit of whale…
NASA Astrophysics Data System (ADS)
Peng, Yong; Li, Hongqiang; Shen, Chunlong; Guo, Shun; Zhou, Qi; Wang, Kehong
2017-06-01
The power density distribution of electron beam welding (EBW) is a key factor reflecting beam quality. A beam quality test system was designed to measure the actual beam power density distribution of high-voltage EBW. After analysis of the characteristics and phase relationship between the deflection control signal and the acquisition signal, a Post-Trigger mode was proposed for signal acquisition, while the same external clock source was shared by the control signal and the sampling clock. The power density distribution of the beam cross-section was reconstructed from the one-dimensional signal, which was processed by median filtering, two rounds of signal segmentation and spatial scale calibration. The diameter of the beam cross-section was defined by an amplitude method and an integral method, respectively. The measured diameter under the integral definition is larger than that under the amplitude definition, whereas for the ideal distribution the former is smaller than the latter. The measured distribution, which is not symmetrical, is less concentrated than a Gaussian distribution.
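The two diameter definitions contrasted above can be sketched on a one-dimensional profile: an amplitude definition (extent above a fixed fraction of the peak) and an integral definition (equivalent flat-top width, area divided by peak). The Gaussian test profile and the 50% threshold are illustrative assumptions, not the paper's exact definitions.

```python
import math

def width_amplitude(xs, profile, fraction=0.5):
    # Amplitude definition: extent over which the profile exceeds a fixed
    # fraction of its peak (50% here, i.e. a FWHM-style width).
    peak = max(profile)
    inside = [x for x, p in zip(xs, profile) if p >= fraction * peak]
    return max(inside) - min(inside)

def width_integral(xs, profile):
    # Integral definition: equivalent width of a flat-top profile with the
    # same area and the same peak value, w = (integral of profile) / peak.
    dx = xs[1] - xs[0]
    return sum(profile) * dx / max(profile)

# Gaussian test profile, sigma = 1, sampled on [-5, 5]
xs = [i * 0.01 - 5.0 for i in range(1001)]
profile = [math.exp(-0.5 * x * x) for x in xs]
print(round(width_amplitude(xs, profile), 2))  # ~2.34 on this grid (FWHM = 2.355*sigma)
print(round(width_integral(xs, profile), 2))   # ~2.51 (sqrt(2*pi)*sigma)
```

For an ideal Gaussian the integral width exceeds the 50%-amplitude width; which definition gives the larger number on measured data depends on the profile's tails, which is the comparison the abstract reports.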
On the importance of geological data for hydraulic tomography analysis: Laboratory sandbox study
NASA Astrophysics Data System (ADS)
Zhao, Zhanfeng; Illman, Walter A.; Berg, Steven J.
2016-11-01
This paper investigates the importance of geological data in Hydraulic Tomography (HT) through sandbox experiments. In particular, four groundwater models with homogeneous geological units constructed with borehole data of varying accuracy are jointly calibrated with multiple pumping test data of two different pumping and observation densities. The results are compared to those from a geostatistical inverse model. Model calibration and validation performances are quantitatively assessed using drawdown scatterplots. We find that both accurate and inaccurate geological models can be well calibrated, despite the estimated K values for the poor geological models being quite different from the actual values. Model validation results reveal that inaccurate geological models yield poor drawdown predictions, but using more calibration data improves their predictive capability. Moreover, comparisons between a highly parameterized geostatistical model and layer-based geological models show that (1) as the number of pumping tests and monitoring locations is reduced, the performance gap between the approaches decreases, and (2) a simplified geological model with a smaller number of layers is more reliable than one based on a wrong description of stratigraphy. Finally, using a geological model as prior information in geostatistical inverse models results in the preservation of geological features, especially in areas where drawdown data are not available. Overall, our sandbox results emphasize the importance of incorporating geological data in HT surveys when data from pumping tests are sparse. These findings have important implications for field applications of HT where well distances are large.
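The quantitative assessment via drawdown scatterplots typically reduces to a misfit statistic between observed and simulated drawdowns. A minimal sketch of one such metric follows; the drawdown values are hypothetical, not the sandbox measurements.

```python
def validation_rmse(observed, simulated):
    # Quantitative calibration/validation metric for drawdown scatterplots:
    # root-mean-square error between observed and simulated drawdowns.
    n = len(observed)
    return (sum((o - s) ** 2 for o, s in zip(observed, simulated)) / n) ** 0.5

# Hypothetical drawdowns (m) at three observation ports
obs = [1.0, 2.0, 3.0]
sim = [1.1, 1.9, 3.2]
print(round(validation_rmse(obs, sim), 3))  # 0.141
```

Comparing this statistic between calibration data (fitted tests) and validation data (held-out tests) is what distinguishes a well-calibrated model from one with genuine predictive capability.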
Rondeau, M; Rouleau, M
1981-06-01
Using semen from bull, boar and stallion as well as different spectrophotometers, we established the calibration curves relating the optical density of a sperm sample to the sperm count obtained on the hemacytometer. The results show that, for a given spectrophotometer, the calibration curve is not characteristic of the animal species we studied. The differences in size of the spermatozoa are probably too small to account for the anticipated specificity of the calibration curve. Furthermore, the fact that different dilution rates must be used, because of the vastly different concentrations of spermatozoa characteristic of those species, has no effect on the calibration curves, since the effect of the dilution rate is shown to be artefactual. On the other hand, for a given semen, the calibration curve varies depending upon the spectrophotometer used. However, if two instruments have the same characteristics in terms of spectral bandwidth, the calibration curves are not statistically different.
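A calibration curve of the kind described above is, in its simplest form, a least-squares line relating optical density to hemacytometer count. The sketch below uses ordinary least squares on hypothetical, idealized data points; real calibration data would carry scatter and possibly nonlinearity at high concentrations.

```python
def fit_line(xs, ys):
    # Ordinary least squares for the calibration line: count = a*OD + b
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical calibration points: optical density vs. hemacytometer
# count (10^6 sperm/mL); the perfect linearity here is idealized
od = [0.10, 0.20, 0.40, 0.80]
count = [50.0, 100.0, 200.0, 400.0]
a, b = fit_line(od, count)
print(round(a), round(b))  # slope 500, intercept 0 for this linear sketch
```

The paper's finding that the curve depends on the instrument (via spectral bandwidth) but not on the species means, in these terms, that the fitted slope a must be re-estimated per spectrophotometer.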
In-flight calibration of mesospheric rocket plasma probes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Havnes, Ove; University Studies Svalbard; Hartquist, Thomas W.
Many effects and factors can influence the efficiency of a rocket plasma probe. These include payload charging, solar illumination, rocket payload orientation and rotation, and dust impact induced secondary charge production. As a consequence, considerable uncertainties can arise in the determination of the effective cross sections of plasma probes and measured electron and ion densities. We present a new method for calibrating mesospheric rocket plasma probes and obtaining reliable measurements of plasma densities. This method can be used if a payload also carries a probe for measuring the dust charge density. It is based on the fact that a dust probe's effective cross section for measuring the charged component of dust is normally nearly equal to its geometric cross section, and it involves the comparison of variations in the dust charge density measured with the dust detector to the corresponding current variations measured with the electron and/or ion probes. In cases in which the dust charge density is significantly smaller than the electron density, the relation between plasma and dust charge density variations can be simplified and used to infer the effective cross sections of the plasma probes. We illustrate the utility of the method by analysing the data from a specific rocket flight of a payload containing both dust and electron probes.
In-flight calibration of mesospheric rocket plasma probes.
Havnes, Ove; Hartquist, Thomas W; Kassa, Meseret; Morfill, Gregor E
2011-07-01
Many effects and factors can influence the efficiency of a rocket plasma probe. These include payload charging, solar illumination, rocket payload orientation and rotation, and dust impact induced secondary charge production. As a consequence, considerable uncertainties can arise in the determination of the effective cross sections of plasma probes and measured electron and ion densities. We present a new method for calibrating mesospheric rocket plasma probes and obtaining reliable measurements of plasma densities. This method can be used if a payload also carries a probe for measuring the dust charge density. It is based on the fact that a dust probe's effective cross section for measuring the charged component of dust is normally nearly equal to its geometric cross section, and it involves the comparison of variations in the dust charge density measured with the dust detector to the corresponding current variations measured with the electron and/or ion probes. In cases in which the dust charge density is significantly smaller than the electron density, the relation between plasma and dust charge density variations can be simplified and used to infer the effective cross sections of the plasma probes. We illustrate the utility of the method by analysing the data from a specific rocket flight of a payload containing both dust and electron probes.
NASA Astrophysics Data System (ADS)
Ben-Jaffel, Lotfi; Holberg, J. B.
2016-06-01
The data harvest from the Voyagers’ (V1 and V2) Ultraviolet Spectrometers (UVS) covers encounters with the outer planets, measurements of the heliosphere sky-background, and stellar spectrophotometry. Because their period of operation overlaps with many ultraviolet missions, the calibration of V1 and V2 UVS with other spectrometers is invaluable. Here we revisit the UVS calibration to assess the intriguing sensitivity enhancements of 243% (V1) and 156% (V2) proposed recently. Using the Lyα airglow from Saturn, observed in situ by both Voyagers, and remotely by the International Ultraviolet Explorer (IUE), we match the Voyager values to IUE, taking into account the shape of the Saturn Lyα line observed with the Goddard High Resolution Spectrograph on board the Hubble Space Telescope. For all known ranges of the interplanetary hydrogen density, we show that the V1 and V2 UVS sensitivities cannot be enhanced by the amounts thus far proposed. The same diagnostic holds for distinct channels covering the diffuse He I 58.4 nm emission. Our prescription is to keep the original calibration of the Voyager UVS with a maximum uncertainty of 30%, making both instruments some of the most stable EUV/FUV spectrographs in the history of space exploration. In that frame, we reassess the excess Lyα emission detected by Voyager UVS deep in the heliosphere, to show its consistency with a heliospheric but not galactic origin. Our finding confirms results obtained nearly two decades ago—namely, the UVS discovery of the distortion of the heliosphere and the corresponding obliquity of the local interstellar magnetic field (~40° from upwind) in the solar system neighborhood—without requiring any revision of the Voyager UVS calibration.
NASA Technical Reports Server (NTRS)
Holekamp, Kara; Aaron, David; Thome, Kurtis
2006-01-01
Radiometric calibration of commercial imaging satellite products is required to ensure that science and application communities can better understand their properties. Inaccurate radiometric calibrations can lead to erroneous decisions and invalid conclusions and can limit intercomparisons with other systems. To address this calibration need, satellite at-sensor radiance values were compared to those estimated by each independent team member to determine the sensor's radiometric accuracy. The combined results of this evaluation provide the user community with an independent assessment of these commercially available high spatial resolution sensors' absolute calibration values.
Evaluating the importance of faecal sources in human-impacted waters.
Schoen, Mary E; Soller, Jeffrey A; Ashbolt, Nicholas J
2011-04-01
Quantitative microbial risk assessment (QMRA) was used to evaluate the relative contribution of faecal indicators and pathogens when a mixture of human sources impacts a recreational waterbody. The waterbody was assumed to be impacted with a mixture of secondary-treated disinfected municipal wastewater and untreated (or poorly treated) sewage, using Norovirus as the reference pathogen and enterococci as the reference faecal indicator. The contribution made by each source to the total waterbody volume, indicator density, pathogen density, and illness risk was estimated for a number of scenarios that accounted for pathogen and indicator inactivation based on the age of the effluent (source-to-receptor), possible sedimentation of microorganisms, and the addition of a non-pathogenic source of faecal indicators (such as old sediments or an animal population with low occurrence of human-infectious pathogens). The waterbody indicator density was held constant at 35 CFU 100 mL(-1) enterococci to compare results across scenarios. For the combinations evaluated, either the untreated sewage or the non-pathogenic source of faecal indicators dominated the recreational waterbody enterococci density assuming a culture method. In contrast, indicator density assayed by qPCR, pathogen density, and bather gastrointestinal illness risks were largely dominated by secondary disinfected municipal wastewater, with untreated sewage being increasingly less important as the faecal indicator load increased from a non-pathogenic source. The results support the use of a calibrated qPCR total enterococci indicator, compared to a culture-based assay, to index infectious human enteric viruses released in treated human wastewater, and illustrate that the source contributing the majority of risk in a mixture may be overlooked when only assessing faecal indicators by a culture-based method. Published by Elsevier Ltd.
Wöstheinrich, K; Schmidt, P C
2000-06-01
The instrumentation and validation of a laboratory-scale fluidized bed apparatus is described. For continuous control of the process, the apparatus is instrumented with sensors for temperature, relative humidity (RH), and air velocity. Conditions of inlet air, fluidizing air, product, and exhaust air were determined. The temperature sensors were calibrated at temperatures of 0.0 degree C and 99.9 degrees C. The calibration of the humidity sensors covered the range from 12% RH to 98% RH using saturated electrolyte solutions. The calibration of the anemometer took place in a wind tunnel at defined air velocities. The calibrations led to satisfying results concerning sensitivity and precision. To evaluate the reproducibility of the process, 15 granules were prepared under identical conditions. The influence of the type of pump used for delivering the granulating liquid was investigated. Particle size distribution, bulk density, and tapped density were determined. Granules were tableted on a rotary press at four different compression force levels, followed by determination of tablet properties such as weight, crushing strength, and disintegration time. The apparatus was found to produce granules with good reproducibility concerning the granule and tablet properties.
Reactive Burn Model Calibration for PETN Using Ultra-High-Speed Phase Contrast Imaging
NASA Astrophysics Data System (ADS)
Johnson, Carl; Ramos, Kyle; Bolme, Cindy; Sanchez, Nathaniel; Barber, John; Montgomery, David
2017-06-01
A 1D reactive burn model (RBM) calibration for a plastic bonded high explosive (HE) requires run-to-detonation data. In PETN (pentaerythritol tetranitrate, 1.65 g/cc) the shock to detonation transition (SDT) is on the order of a few millimeters. This rapid SDT imposes experimental length scales that preclude application of traditional calibration methods such as embedded electromagnetic gauge methods (EEGM), which are very effective when used to study 10 - 20 mm thick HE specimens. In recent work at Argonne National Laboratory's Advanced Photon Source we have obtained run-to-detonation data in PETN using ultra-high-speed dynamic phase contrast imaging (PCI). A reactive burn model calibration valid for 1D shock waves is obtained using density profiles spanning the transition to detonation, as opposed to particle velocity profiles from EEGM. Particle swarm optimization (PSO) methods were used to operate the LANL hydrocode FLAG iteratively to refine SURF RBM parameters until a suitable parameter set was attained. These methods will be presented along with model validation simulations. The novel method described is generally applicable to `sensitive' energetic materials, particularly those with areal densities amenable to radiography.
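The PSO-driven calibration loop described above can be sketched in miniature. The two-parameter forward model below is a stand-in assumption for a FLAG/SURF hydrocode run (not the actual SURF model), and the "measured" density profile is synthetic:

```python
import random

# Stand-in forward model: a density profile from two burn parameters
# (a, b); a placeholder for a FLAG/SURF hydrocode run, not SURF itself.
def model_profile(a, b, xs):
    return [a * x + b * x * x for x in xs]

def misfit(params, xs, observed):
    # Sum-of-squares mismatch between simulated and observed profiles
    sim = model_profile(params[0], params[1], xs)
    return sum((s - o) ** 2 for s, o in zip(sim, observed))

def pso(objective, bounds, n_particles=20, iters=60, seed=1):
    # Minimal particle swarm optimizer with a global-best topology
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

xs = [0.1 * k for k in range(1, 11)]      # depth samples, arbitrary units
observed = model_profile(2.0, -0.5, xs)   # synthetic "PCI" profile
best, best_val = pso(lambda p: misfit(p, xs, observed),
                     bounds=[(0.0, 5.0), (-2.0, 2.0)])
```

In the actual workflow each objective evaluation is a full hydrocode run, so the swarm size and iteration count trade accuracy against compute cost.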
Evaluation of calibration efficacy under different levels of uncertainty
Heo, Yeonsook; Graziano, Diane J.; Guzowski, Leah; ...
2014-06-10
This study examines how calibration performs under different levels of uncertainty in model input data. It specifically assesses the efficacy of Bayesian calibration to enhance the reliability of EnergyPlus model predictions. A Bayesian approach can be used to update uncertain values of parameters, given measured energy-use data, and to quantify the associated uncertainty. We assess the efficacy of Bayesian calibration under a controlled virtual-reality setup, which enables rigorous validation of the accuracy of calibration results in terms of both calibrated parameter values and model predictions. Case studies demonstrate the performance of Bayesian calibration of base models developed from audit data with differing levels of detail in building design, usage, and operation.
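The Bayesian updating step described here can be illustrated with a toy surrogate. The linear energy model, the uniform prior bounds, and the measured value below are all illustrative assumptions, not the paper's EnergyPlus setup:

```python
import math
import random

# Toy surrogate for a building energy model: predicted energy use as a
# linear function of one uncertain parameter theta (hypothetical).
def predict(theta):
    return 100.0 + 40.0 * theta

def log_posterior(theta, measured, sigma=5.0):
    if not (0.0 <= theta <= 1.0):      # uniform prior on [0, 1]
        return float("-inf")
    r = measured - predict(theta)
    return -0.5 * (r / sigma) ** 2     # Gaussian measurement likelihood

def metropolis(measured, n=20000, step=0.05, seed=3):
    # Random-walk Metropolis sampler for the posterior of theta
    rng = random.Random(seed)
    theta, lp = 0.5, log_posterior(0.5, measured)
    samples = []
    for _ in range(n):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = log_posterior(prop, measured)
        if math.log(rng.random() + 1e-300) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta)
    return samples[n // 2:]            # discard burn-in half

post = metropolis(measured=124.0)      # consistent with theta = 0.6
post_mean = sum(post) / len(post)
```

The posterior samples both update the uncertain parameter and quantify its remaining uncertainty, mirroring the two roles of the Bayesian approach described in the abstract.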
Boncyk, Wayne C.; Markham, Brian L.; Barker, John L.; Helder, Dennis
1996-01-01
The Landsat-7 Image Assessment System (IAS), part of the Landsat-7 Ground System, will calibrate and evaluate the radiometric and geometric performance of the Enhanced Thematic Mapper Plus (ETM +) instrument. The IAS incorporates new instrument radiometric artifact correction and absolute radiometric calibration techniques which overcome some limitations to calibration accuracy inherent in historical calibration methods. Knowledge of ETM + instrument characteristics gleaned from analysis of archival Thematic Mapper in-flight data and from ETM + prelaunch tests allow the determination and quantification of the sources of instrument artifacts. This a priori knowledge will be utilized in IAS algorithms designed to minimize the effects of the noise sources before calibration, in both ETM + image and calibration data.
Mixture EMOS model for calibrating ensemble forecasts of wind speed.
Baran, S; Lerch, S
2016-03-01
Ensemble model output statistics (EMOS) is a statistical tool for post-processing forecast ensembles of weather variables obtained from multiple runs of numerical weather prediction models in order to produce calibrated predictive probability density functions. The EMOS predictive probability density function is given by a parametric distribution with parameters depending on the ensemble forecasts. We propose an EMOS model for calibrating wind speed forecasts based on weighted mixtures of truncated normal (TN) and log-normal (LN) distributions, where model parameters and component weights are estimated by optimizing the values of proper scoring rules over a rolling training period. The new model is tested on wind speed forecasts of the 50-member European Centre for Medium-range Weather Forecasts ensemble, the 11-member Aire Limitée Adaptation dynamique Développement International-Hungary Ensemble Prediction System ensemble of the Hungarian Meteorological Service, and the eight-member University of Washington mesoscale ensemble, and its predictive performance is compared with that of various benchmark EMOS models based on single parametric families and combinations thereof. The results indicate improved calibration of probabilistic forecasts and accuracy of point forecasts in comparison with the raw ensemble and climatological forecasts. The mixture EMOS model significantly outperforms the TN and LN EMOS methods; moreover, it provides better calibrated forecasts than the TN-LN combination model and offers an increased flexibility while avoiding covariate selection problems. © 2016 The Authors Environmetrics Published by John Wiley & Sons Ltd.
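A minimal sketch of the TN/LN mixture predictive density follows. The parameter values are illustrative; in the actual EMOS model the weight and the distribution parameters would be affine functions of the ensemble members, fitted by optimizing a proper scoring rule:

```python
import math

def tn_pdf(x, mu, sigma):
    # Normal density truncated to [0, inf)
    if x < 0:
        return 0.0
    z = (x - mu) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    p_pos = 0.5 * (1.0 + math.erf((mu / sigma) / math.sqrt(2.0)))
    return phi / (sigma * p_pos)

def ln_pdf(x, mu, sigma):
    # Log-normal density
    if x <= 0:
        return 0.0
    z = (math.log(x) - mu) / sigma
    return math.exp(-0.5 * z * z) / (x * sigma * math.sqrt(2.0 * math.pi))

def mixture_pdf(x, w, tn_params, ln_params):
    # Weighted TN/LN mixture predictive density for wind speed
    return w * tn_pdf(x, *tn_params) + (1.0 - w) * ln_pdf(x, *ln_params)

# Predictive density at 4 m/s with illustrative parameter values
p4 = mixture_pdf(4.0, 0.6, (5.0, 2.0), (1.2, 0.5))
```

Because both components are proper densities on the positive half-line, any weight in [0, 1] yields a valid calibrated predictive density.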
Precision Orbit Derived Atmospheric Density: Development and Performance
NASA Astrophysics Data System (ADS)
McLaughlin, C.; Hiatt, A.; Lechtenberg, T.; Fattig, E.; Mehta, P.
2012-09-01
Precision orbit ephemerides (POE) are used to estimate atmospheric density along the orbits of CHAMP (Challenging Minisatellite Payload) and GRACE (Gravity Recovery and Climate Experiment). The densities are calibrated against accelerometer derived densities, taking ballistic coefficient estimation results into account. The 14-hour density solutions are stitched together using a linear weighted blending technique to obtain continuous solutions over the entire mission life of CHAMP and through 2011 for GRACE. POE derived densities outperform the High Accuracy Satellite Drag Model (HASDM), Jacchia 71 model, and NRLMSISE-00 model densities when comparing cross correlation and RMS with accelerometer derived densities. Drag is the largest error source for estimating and predicting orbits for low Earth orbit satellites. This is one of the major areas that should be addressed to improve overall space surveillance capabilities, in particular catalog maintenance. Generally, density is the largest error source in satellite drag calculations, and current empirical density models such as Jacchia 71 and NRLMSISE-00 have significant errors. Dynamic calibration of the atmosphere (DCA) has provided measurable improvements to the empirical density models, and accelerometer derived densities of extremely high precision are available for a few satellites. However, DCA generally relies on observations of limited accuracy, and accelerometer derived densities are extremely limited in terms of measurement coverage at any given time. The goal of this research is to provide an additional data source using satellites that have precision orbits available from Global Positioning System measurements and/or satellite laser ranging. These measurements strike a balance between the global coverage provided by DCA and the precise measurements of accelerometers.
The temporal resolution of the POE derived density estimates is around 20-30 minutes, which is significantly worse than that of accelerometer derived density estimates. However, major variations in density are observed in the POE derived densities. These POE derived densities in combination with other data sources can be assimilated into physics based general circulation models of the thermosphere and ionosphere with the possibility of providing improved density forecasts for satellite drag analysis. POE derived density estimates were initially developed using CHAMP and GRACE data so comparisons could be made with accelerometer derived density estimates. This paper presents the results of the most extensive calibration of POE derived densities compared to accelerometer derived densities and provides the reasoning for selecting certain parameters in the estimation process. The factors taken into account for these selections are the cross correlation and RMS performance compared to the accelerometer derived densities and the output of the ballistic coefficient estimation that occurs simultaneously with the density estimation. This paper also presents the complete data set of CHAMP and GRACE results and shows that the POE derived densities match the accelerometer densities better than empirical models or DCA. This paves the way to expand the POE derived densities to include other satellites with quality GPS and/or satellite laser ranging observations.
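The cross correlation and RMS comparisons used to rate the POE derived densities against accelerometer derived densities can be sketched as follows; the short density series are made-up placeholders, not CHAMP/GRACE data:

```python
import math

def cross_correlation(a, b):
    # Zero-lag (Pearson) cross correlation of two density series
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def rms_diff(a, b):
    # Root-mean-square difference between the two series
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# Made-up short density series (kg/m^3) standing in for real data
poe = [2.1e-12, 2.4e-12, 2.2e-12, 2.8e-12, 2.6e-12]
accel = [2.0e-12, 2.5e-12, 2.1e-12, 2.9e-12, 2.5e-12]
cc = cross_correlation(poe, accel)
rms = rms_diff(poe, accel)
```

A density source that tracks the accelerometer data well scores a cross correlation near 1 and a small RMS, which is the sense in which the POE densities are said to outperform the empirical models.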
Development of a Multileaf Collimator for Proton Therapy
2012-11-01
Hounsfield Units (HU) into density bins (of width 10 kg/m^3), we now define a unique density for each Hounsfield Unit. The density resolution is thus ... patient basis given some knowledge about any implants they might have. The calibration of CT Hounsfield unit to material type and density was ... that region, resulting in a hot ring around the cold spot. It was determined that the Hounsfield unit values corresponding to the voxels in the cold ...
Prediction of Liquefaction Potential of Dredge Fill Sand by DCP and Dynamic Probing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alam, Md. Jahangir; Azad, Abul Kalam; Rahman, Ziaur
2008-07-08
Many studies have shown that the liquefaction potential of sand is mainly a function of relative density and confining pressure. During routine site investigations, high-quality sampling and laboratory testing of sands are not feasible because of inevitable sample disturbance effects and budgetary constraints. On the other hand, quality control of a sand fill can be done by determining the in situ density of the sand layer by layer, which is expensive and time consuming. In this paper, TRL DCP (Transportation Research Laboratory Dynamic Cone Penetration) and DPL (Dynamic Probing Light) are calibrated to predict the relative density of a sand deposit. For this purpose, sand of known relative density is prepared in a calibration chamber, a mild steel cylinder 0.5 m in diameter and 1.0 m in height. The relative density of the sand is varied by controlling the height of fall and the diameter of the hole of the sand discharge bowl. After each filling, DPL and DCP tests are performed and the penetration of the cone is recorded for every blow. N10 is then calculated from the penetration records. A database is thus compiled in which N10 and relative densities are known, and a correlation between N10 and relative density is made for two types of sand. A good correlation of N10 and relative density is found.
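A correlation of the kind described could be fitted, for example, by ordinary least squares; the chamber data below are hypothetical illustrations, not the paper's measurements:

```python
def linear_fit(xs, ys):
    # Ordinary least squares fit y = m*x + c
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c = (sy - m * sx) / n
    return m, c

# Hypothetical chamber data: blow count N10 vs relative density Dr (%)
n10 = [2.0, 4.0, 6.0, 9.0, 12.0]
dr = [25.0, 40.0, 52.0, 68.0, 83.0]
m, c = linear_fit(n10, dr)
dr_pred = m * 6.0 + c   # predicted Dr for a field test giving N10 = 6
```

Once fitted per sand type, the line lets a quick DCP/DPL sounding substitute for direct in situ density measurement.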
NASA Astrophysics Data System (ADS)
Weber, Isabell P.; Yun, Seok Hyun; Scarcelli, Giuliano; Franze, Kristian
2017-12-01
Cells in the central nervous system (CNS) respond to the stiffness of their environment. CNS tissue is mechanically highly heterogeneous, thus providing motile cells with region-specific mechanical signals. While CNS mechanics has been measured with a variety of techniques, reported values of tissue stiffness vary greatly, and the morphological structures underlying spatial changes in tissue stiffness remain poorly understood. We here exploited two complementary techniques, contact-based atomic force microscopy and contact-free Brillouin microscopy, to determine the mechanical properties of ruminant retinae, which are built up of different tissue layers. As in all vertebrate retinae, layers of high cell body densities (‘nuclear layers’) alternate with layers of low cell body densities (‘plexiform layers’). Different tissue layers varied significantly in their mechanical properties, with the photoreceptor layer being the stiffest region of the retina and the inner plexiform layer belonging to the softest regions. As both techniques yielded similar results, our measurements allowed us to calibrate the Brillouin microscopy measurements and convert the Brillouin shift into a quantitative assessment of elastic tissue stiffness with optical resolution. As in the mouse spinal cord and the developing Xenopus brain, we found a strong correlation between nuclear densities and tissue stiffness. Hence, the cellular composition of retinae appears to strongly contribute to local tissue stiffness, and Brillouin microscopy shows great potential for in vivo measurement of the mechanical properties of transparent tissues.
Absolute Density Calibration Cell for Laser Induced Fluorescence Erosion Rate Measurements
NASA Technical Reports Server (NTRS)
Domonkos, Matthew T.; Stevens, Richard E.
2001-01-01
Flight qualification of ion thrusters typically requires testing on the order of 10,000 hours. Extensive knowledge of wear mechanisms and rates is necessary to establish design confidence prior to long duration tests. Consequently, real-time erosion rate measurements offer the potential both to reduce development costs and to enhance knowledge of the dependency of component wear on operating conditions. Several previous studies have used laser-induced fluorescence (LIF) to measure real-time, in situ erosion rates of ion thruster accelerator grids. Those studies provided only relative measurements of the erosion rate. In the present investigation, a molybdenum tube was resistively heated such that the evaporation rate yielded densities within the tube on the order of those expected from accelerator grid erosion. This work examines the suitability of the density cell as an absolute calibration source for LIF measurements, and the intrinsic error was evaluated.
Monitoring forest land from high altitude and from space
NASA Technical Reports Server (NTRS)
1971-01-01
Forest inventory, forest stress, and standardization and calibration studies are presented. These include microscale photointerpretation of forest and nonforest land classes, multiseasonal film densities for automated forest and nonforest land classification, trend and spread of bark beetle infestations from 1968 through 1971, aerial photography for determining optimum levels of stand density to reduce such infestations, use of airborne spectrometers and multispectral scanners for previsual detection of Ponderosa pine trees under stress from insects and diseases, establishment of an earth resources technology satellite test site in the Black Hills and the identification of natural resolution targets, detection of root disease impact on forest stands by sequential orbital and suborbital multispectral photography, and calibration of color aerial photography.
The Plasma Environment at Enceladus
NASA Astrophysics Data System (ADS)
Rymer, Abigail; Morooka, Michiko; Persoon, Ann
2016-10-01
The plasma environment near Enceladus is complex. The well documented Enceladus plumes create a dusty, asymmetric exosphere in which electrons can attach to small ice particles - forming anions, and negatively charged nanograins and dust - to the extent that cations can be the lightest charged particles present and, as a result, the dominant current carriers. Several instruments on the Cassini spacecraft are able to measure this environment in both expected and unexpected ways. Cassini Plasma Spectrometer (CAPS) is designed and calibrated to measure the thermal plasma ions and electrons and also measures the energy/charge of charged nanograins when present. Cassini Radio Plasma Wave Sensor (RPWS) measures electron density as derived from the 'upper hybrid frequency', which is a function of the total free electron density and magnetic field strength, and provides a vital ground truth measurement for Cassini calibration when the density is sufficiently high for it to be well measured. Cassini Langmuir Probe (LP) measures the electron density and temperature via direct current measurement, and both CAPS and LP can provide estimates for the spacecraft potential, which we compare. Cassini Magnetospheric Imaging Instrument (MIMI) directly measures energetic particles, which are manifest in the CAPS measurements as penetrating background in this region and, while not particularly efficient ionisers, cause sputtering and surface weathering of Enceladus' surface. MIMI also measures energetic neutral atoms produced during the charge exchange interactions in and near the plumes. In this presentation we exploit two almost identical Cassini-Enceladus flybys, 'E17' and 'E18', which took place in March/April 2012.
We present a detailed comparison of data from these Cassini sensors in order to assess the plasma environment observed by the different instruments, discuss what is consistent and otherwise, and the implications for the plasma environment at Enceladus in the context of work to date as well as implications for future studies.
Saito, Masatoshi; Tsukihara, Masayoshi
2014-07-01
For accurate tissue inhomogeneity correction in radiotherapy treatment planning, the authors had previously proposed a novel conversion of the energy-subtracted CT number to an electron density (ΔHU-ρe conversion), which provides a single linear relationship between ΔHU and ρe over a wide ρe range. The purpose of this study is to address the limitations of the conversion method with respect to atomic number (Z) by elucidating the role of partial photon interactions in the ΔHU-ρe conversion process. The authors performed numerical analyses of the ΔHU-ρe conversion for 105 human body tissues, as listed in ICRU Report 46, and elementary substances with Z = 1-40. Total and partial attenuation coefficients for these materials were calculated using the XCOM photon cross section database. The effective x-ray energies used to calculate the attenuation were chosen to imitate a dual-source CT scanner operated at 80-140 kV/Sn under well-calibrated and poorly calibrated conditions. The accuracy of the resultant calibrated electron density, [Formula: see text], for the ICRU-46 body tissues fully satisfied the IPEM-81 tolerance levels in radiotherapy treatment planning. If a criterion of [Formula: see text]ρe - 1 is assumed to be within ± 2%, the predicted upper limit of Z applicable for the ΔHU-ρe conversion under the well-calibrated condition is Z = 27. In the case of the poorly calibrated condition, the upper limit of Z is approximately 16. The deviation from the ΔHU-ρe linearity for higher Z substances is mainly caused by the anomalous variation in the photoelectric-absorption component. Compensation among the three partial components of the photon interactions provides for sufficient linearity of the ΔHU-ρe conversion to be applicable for most human tissues even for poorly conditioned scans in which there exists a large variation of effective x-ray energies owing to beam-hardening effects arising from the mismatch between the sizes of the object and the calibration phantom.
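A two-point version of a single linear ΔHU-ρe calibration can be sketched as follows. The energy-subtraction weighting factor alpha and the reference points are illustrative assumptions, not the authors' calibrated values:

```python
def delta_hu(hu_low, hu_high, alpha=0.4):
    # Energy-subtracted CT number from a low/high-kV scan pair;
    # the weighting factor alpha is an illustrative assumption.
    return (1.0 + alpha) * hu_low - alpha * hu_high

def calibrate_two_point(p1, p2):
    # Single linear calibration rho_e = a * dHU + b through two
    # reference points (dHU, rho_e), e.g. water and a bone substitute.
    (x1, r1), (x2, r2) = p1, p2
    a = (r2 - r1) / (x2 - x1)
    b = r1 - a * x1
    return a, b

# Water (dHU = 0, rho_e = 1.0) and an illustrative bone-like point
a, b = calibrate_two_point((0.0, 1.0), (600.0, 1.5))
rho_e = a * delta_hu(hu_low=80.0, hu_high=50.0) + b
```

The appeal of the single linear relation is exactly this simplicity: one slope and intercept cover soft tissue and bone alike, within the Z limits discussed in the abstract.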
Balance Calibration – A Method for Assigning a Direct-Reading Uncertainty to an Electronic Balance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mike Stears
2010-07-01
Paper Title: Balance Calibration – A method for assigning a direct-reading uncertainty to an electronic balance. Intended Audience: Those who calibrate or use electronic balances. Abstract: As a calibration facility, we provide on-site (at the customer’s location) calibrations of electronic balances for customers within our company. In our experience, most of our customers are not using their balance as a comparator, but simply putting an unknown quantity on the balance and reading the displayed mass value. Manufacturer’s specifications for balances typically include specifications such as readability, repeatability, linearity, and sensitivity temperature drift, but what does this all mean when the balance user simply reads the displayed mass value and accepts the reading as the true value? This paper discusses a method for assigning a direct-reading uncertainty to a balance based upon the observed calibration data and the environment where the balance is being used. The method requires input from the customer regarding the environment where the balance is used and encourages discussion with the customer regarding sources of uncertainty and possible means for improvement; the calibration process becomes an educational opportunity for the balance user as well as calibration personnel. This paper will cover the uncertainty analysis applied to the calibration weights used for the field calibration of balances; the uncertainty is calculated over the range of environmental conditions typically encountered in the field and the resulting range of air density. The temperature stability in the area of the balance is discussed with the customer and the temperature range over which the balance calibration is valid is decided upon; the decision is based upon the uncertainty needs of the customer and the desired rigor in monitoring by the customer.
Once the environmental limitations are decided, the calibration is performed and the measurement data is entered into a custom spreadsheet. The spreadsheet uses measurement results, along with the manufacturer’s specifications, to assign a direct-read measurement uncertainty to the balance. The fact that the assigned uncertainty is a best-case uncertainty is discussed with the customer; the assigned uncertainty contains no allowance for contributions associated with the unknown weighing sample, such as density, static charges, magnetism, etc. The attendee will learn uncertainty considerations associated with balance calibrations along with one method for assigning an uncertainty to a balance used for non-comparison measurements.
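One common way to combine spec-sheet limits and calibration data into a single direct-reading standard uncertainty is a root-sum-square of standard components, as in the GUM. All component values below (in mg) are assumed for illustration, not taken from the paper's spreadsheet:

```python
import math

def direct_read_uncertainty(readability, repeatability_sd, linearity,
                            temp_drift_ppm, span, delta_t):
    # Root-sum-square of standard uncertainty components (all in mg).
    # Spec limits are treated as rectangular distributions (/sqrt(3)).
    u_read = (readability / 2.0) / math.sqrt(3.0)   # digitization
    u_rep = repeatability_sd                        # from calibration data
    u_lin = linearity / math.sqrt(3.0)              # linearity spec limit
    u_temp = temp_drift_ppm * 1e-6 * span * delta_t / math.sqrt(3.0)
    return math.sqrt(u_read**2 + u_rep**2 + u_lin**2 + u_temp**2)

# Illustrative 200 g balance used over a +/- 5 K temperature window
u = direct_read_uncertainty(readability=0.1, repeatability_sd=0.08,
                            linearity=0.2, temp_drift_ppm=2.0,
                            span=200000.0, delta_t=5.0)
U = 2.0 * u   # expanded uncertainty, coverage factor k = 2
```

Note how the allowed temperature window dominates in this example, which is why the paper stresses agreeing the environmental limits with the customer before assigning the uncertainty.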
System for characterizing semiconductor materials and photovoltaic devices through calibration
Sopori, Bhushan L.; Allen, Larry C.; Marshall, Craig; Murphy, Robert C.; Marshall, Todd
1998-01-01
A method and apparatus for measuring characteristics of a piece of material, typically semiconductor materials including photovoltaic devices. The characteristics may include dislocation defect density, grain boundaries, reflectance, external LBIC, internal LBIC, and minority carrier diffusion length. The apparatus includes a light source, an integrating sphere, and a detector communicating with a computer. The measurement or calculation of the characteristics is calibrated to provide accurate, absolute values. The calibration is performed by substituting a standard sample for the piece of material, the sample having a known quantity of one or more of the relevant characteristics. The quantity measured by the system of the relevant characteristic is compared to the known quantity and a calibration constant is created thereby.
System for characterizing semiconductor materials and photovoltaic devices through calibration
Sopori, B.L.; Allen, L.C.; Marshall, C.; Murphy, R.C.; Marshall, T.
1998-05-26
A method and apparatus are disclosed for measuring characteristics of a piece of material, typically semiconductor materials including photovoltaic devices. The characteristics may include dislocation defect density, grain boundaries, reflectance, external LBIC, internal LBIC, and minority carrier diffusion length. The apparatus includes a light source, an integrating sphere, and a detector communicating with a computer. The measurement or calculation of the characteristics is calibrated to provide accurate, absolute values. The calibration is performed by substituting a standard sample for the piece of material, the sample having a known quantity of one or more of the relevant characteristics. The quantity measured by the system of the relevant characteristic is compared to the known quantity and a calibration constant is created thereby. 44 figs.
Spectral irradiance calibration in the infrared. I - Ground-based and IRAS broadband calibrations
NASA Technical Reports Server (NTRS)
Cohen, Martin; Walker, Russell G.; Barlow, Michael J.; Deacon, John R.
1992-01-01
Absolutely calibrated versions of realistic model atmosphere calculations for Sirius and Vega by Kurucz (1991) are presented and used as a basis to offer a new absolute calibration of infrared broad and narrow filters. In-band fluxes for Vega are obtained and defined to be zero magnitude at all wavelengths shortward of 20 microns. Existing infrared photometry is used differentially to establish an absolute scale of the new Sirius model, yielding an angular diameter within 1 sigma of the mean determined interferometrically by Hanbury Brown et al. (1974). The use of Sirius as a primary infrared stellar standard beyond the 20 micron region is suggested. Isophotal wavelengths and monochromatic flux densities for both Vega and Sirius are tabulated.
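With Vega defined as zero magnitude in a band, a source's magnitude follows directly from its in-band flux density; the flux values below are placeholder numbers, not the tabulated calibration:

```python
import math

def magnitude(flux, flux_vega):
    # Vega-based magnitude: Vega defines zero magnitude in the band,
    # so m = -2.5 * log10(F / F_Vega)
    return -2.5 * math.log10(flux / flux_vega)

# A source ten times fainter than Vega (placeholder flux densities)
m_src = magnitude(flux=3.5e-11, flux_vega=3.5e-10)
```

This is why tabulated in-band (or isophotal) flux densities for Vega amount to an absolute calibration: they anchor the zero point of every magnitude measured through those filters.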
The effect of density gradients on hydrometers
NASA Astrophysics Data System (ADS)
Heinonen, Martti; Sillanpää, Sampo
2003-05-01
Hydrometers are simple but effective instruments for measuring the density of liquids. In this work, we studied the effect of non-uniform density of liquid on a hydrometer reading. The effect induced by vertical temperature gradients was investigated theoretically and experimentally. A method for compensating for the effect mathematically was developed and tested with experimental data obtained with the MIKES hydrometer calibration system. In the tests, the method was found reliable. However, the reliability depends on the available information on the hydrometer dimensions and density gradients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nor, Mohd. Fazrul Hisyam Mohd.; Othman, Hafidzah; Abidin, Abd. Rashid Zainal
2009-07-07
This paper presents the density measurement of tridecane using a hydrostatic weighing system, as currently practised in the Density Laboratory of the National Metrology Laboratory (NML), SIRIM Berhad. In this system, a crystal sphere is weighed while immersed in the tridecane. The volume and mass in air of the crystal sphere were calibrated at KRISS, Korea; the uncertainties of the volume and mass in air of the crystal sphere were 4 ppm and 0.3 ppm, respectively.
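The hydrostatic-weighing principle reduces to the buoyancy relation rho_liquid = (m - m_apparent) / V for a calibrated sphere. The sphere values below are illustrative, and air-buoyancy corrections on the counterweights are neglected in this sketch:

```python
def liquid_density(mass, apparent_mass, volume):
    # rho_liquid = (m - m_apparent) / V: the buoyant mass deficit of a
    # sphere of calibrated mass and volume gives the liquid density.
    # Air buoyancy on the counterweights is neglected in this sketch.
    return (mass - apparent_mass) / volume

# Illustrative values in SI units (kg, m^3); not the NML/KRISS data
rho = liquid_density(mass=0.2200, apparent_mass=0.1462, volume=9.7e-5)
```

The ppm-level sphere calibrations matter because the mass and volume uncertainties propagate almost directly into the relative uncertainty of the liquid density.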
SWAT: Model use, calibration, and validation
USDA-ARS?s Scientific Manuscript database
SWAT (Soil and Water Assessment Tool) is a comprehensive, semi-distributed river basin model that requires a large number of input parameters which complicates model parameterization and calibration. Several calibration techniques have been developed for SWAT including manual calibration procedures...
EM calibration based on Post OPC layout analysis
NASA Astrophysics Data System (ADS)
Sreedhar, Aswin; Kundu, Sandip
2010-03-01
Design for Manufacturability (DFM) involves changes to the design and CAD tools to help increase pattern printability and improve process control. Design for Reliability (DFR) makes analogous changes to improve device reliability against failure mechanisms such as electromigration (EM), gate-oxide breakdown, hot carrier injection (HCI), Negative Bias Temperature Instability (NBTI) and mechanical stress effects. Electromigration occurs due to migration or displacement of atoms as a result of the movement of electrons through a conducting medium. The rate of migration determines the Mean Time to Failure (MTTF), which is modeled as a function of temperature and current density. The model itself is calibrated through failure analysis (FA) of parts deemed to have failed due to EM, against design parameters such as linewidth. Reliability Verification (RV) of a design involves verifying that every conducting line meets a certain MTTF threshold. To perform RV, the current density of each wire must be computed. Current itself is a function of the parasitics determined through RC extraction. The standard practice is to perform the RC extraction and current density calculation on drawn, pre-OPC layouts; if a wire fails to meet the MTTF threshold, it may be resized. Subsequently, mask preparation steps such as OPC and PSM introduce extra features such as SRAFs, jogs, hammerheads and serifs that change the resistance, capacitance and current density values. Hence, calibrating the EM model on pre-OPC layouts will lead to different results than on post-OPC layouts. In this work, we compare EM model calibration and reliability checks based on the drawn layout versus the predicted layout, where the drawn layout is the pre-OPC layout and the predicted layout is based on litho simulation of the post-OPC layout. Results show significant divergence between the two approaches, making a case for a methodology based on the predicted layout.
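The temperature and current-density dependence of MTTF described above is conventionally modeled with Black's equation; the sketch below uses hypothetical parameter values (A, n, Ea) that would in practice come from the FA-based calibration the abstract describes.

```python
import math

K_B_EV = 8.617333262e-5  # Boltzmann constant, eV/K

def mttf_black(current_density, temp_k, a=1.0, n=2.0, ea_ev=0.9):
    """Black's equation: MTTF = A * J^(-n) * exp(Ea / (k*T)).
    Parameter values here are illustrative defaults."""
    return a * current_density ** (-n) * math.exp(ea_ev / (K_B_EV * temp_k))

# If OPC-era features effectively narrow a wire by 10%, current density
# rises by a factor 1/0.9, and predicted MTTF (with n = 2) drops to ~81%.
j_drawn, j_post_opc = 1.0, 1.0 / 0.9
ratio = mttf_black(j_post_opc, 378.0) / mttf_black(j_drawn, 378.0)
print(ratio)
```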
Guillong, M.; Hametner, K.; Reusser, E.; Wilson, S.A.; Gunther, D.
2005-01-01
New glass reference materials GSA-1G, GSC-1G, GSD-1G and GSE-1G have been characterised using a prototype solid state laser ablation system capable of producing wavelengths of 193 nm, 213 nm and 266 nm. This system allowed comparison of the effects of different laser wavelengths under nearly identical ablation and ICP operating conditions. The wavelengths 213 nm and 266 nm were also used at higher energy densities to evaluate the influence of energy density on quantitative analysis. In addition, the glass reference materials were analysed using commercially available 266 nm Nd:YAG and 193 nm ArF excimer lasers. Laser ablation analysis was carried out using both single spot and scanning mode ablation. Using laser ablation ICP-MS, concentrations of fifty-eight elements were determined with external calibration to the NIST SRM 610 glass reference material. Instead of applying the more common internal standardisation procedure, the total of all element oxide concentrations was normalised to 100%. Major element concentrations were compared with those determined by electron microprobe. In addition to NIST SRM 610 for external calibration, USGS BCR-2G was used as a more closely matrix-matched reference material in order to compare the effect of matrix-matched and non-matrix-matched calibration on quantitative analysis. The results show that the various laser wavelengths and energy densities applied produced similar results, with the exception of scanning mode ablation at 266 nm without matrix-matched calibration, where deviations up to 60% from the average were found. However, results acquired using a scanning mode with a matrix-matched calibration agreed with results obtained by spot analysis. The increased abundance of large particles produced when using a scanning ablation mode with NIST SRM 610 is responsible for elemental fractionation effects caused by incomplete vaporisation of large particles in the ICP.
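The normalisation strategy mentioned above (scaling the element oxide total to 100% instead of using an internal standard) can be sketched as follows; the oxide names and raw values are invented for illustration.

```python
def normalise_oxides(raw_oxide_wt_pct):
    """Scale raw oxide concentrations so their total is 100 wt%,
    removing the need for an internal-standard element."""
    total = sum(raw_oxide_wt_pct.values())
    return {oxide: 100.0 * c / total for oxide, c in raw_oxide_wt_pct.items()}

raw = {"SiO2": 52.1, "Al2O3": 13.0, "CaO": 7.4, "other": 24.6}  # sums to 97.1
norm = normalise_oxides(raw)
print(round(sum(norm.values()), 6))  # → 100.0
```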
Spectra of Th/Ar and U/Ne hollow cathode lamps for spectrograph calibration
NASA Astrophysics Data System (ADS)
Nave, Gillian; Shlosberg, Ariel; Kerber, Florian; Den Hartog, Elizabeth; Neureiter, Bianca
2018-01-01
Low-current Th/Ar hollow cathode lamps have long been used for calibration of astronomical spectrographs on ground-based telescopes. Thorium is an attractive element for calibration as it has a single isotope, has narrow spectral lines, and has a dense spectrum covering the whole of the visible region. However, the high density of the spectrum that makes it attractive for calibrating high-resolution spectrographs is a detriment for lower resolution spectrographs, and this is not obvious from examination of existing linelists. In addition, recent changes in regulations regarding the handling of thorium have led to a degradation in the quality of Th/Ar calibration lamps, with contamination by molecular ThO lines that are strong enough to obscure the calibration lines of interest. We are pursuing two approaches to these problems. First, we have expanded and improved the NIST Standard Reference Database 161, "Spectrum of Th-Ar Hollow Cathode Lamps", to cover the region 272 nm to 5500 nm. Spectra of hollow cathode lamps at up to 3 different currents can now be displayed simultaneously. Interactive zooming and the ability to convolve any of the spectra with a Gaussian or uploaded instrument profile enable the user to see immediately what the spectrum would look like at the particular resolution of their spectrograph. Second, we have measured the spectrum of a recent, contaminated Th/Ar hollow cathode lamp using a high-resolution echelle spectrograph (Madison, Wisconsin) at a resolving power of R ≈ 250,000. This significantly exceeds the resolving power of most astronomical spectrographs and resolves many of the molecular lines of ThO.
With these spectra we are measuring and calibrating the positions of these molecular lines in order to make them suitable for spectrograph calibration. In the near-infrared region, U/Ne hollow cathode lamps give a higher density of calibration lines than Th/Ar lamps and will be implemented on the upgraded CRIRES+ spectrograph on ESO's Very Large Telescope in Chile. A new atlas of the U/Ne spectrum as measured by CRIRES will be presented.
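The database's convolution feature described above amounts to smoothing a high-resolution line list with an instrument profile. A minimal sketch, assuming a uniform pixel grid and a Gaussian profile (the real tool also accepts uploaded profiles):

```python
import numpy as np

def degrade_resolution(intensity, fwhm_pixels):
    """Convolve a spectrum with a normalized Gaussian instrument
    profile to simulate a lower-resolution spectrograph."""
    sigma = fwhm_pixels / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    half = int(np.ceil(5.0 * sigma))
    x = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(intensity, kernel, mode="same")

# A single emission line is broadened while its integrated flux is kept.
spec = np.zeros(200)
spec[100] = 1.0
low_res = degrade_resolution(spec, fwhm_pixels=4.0)
print(round(float(low_res.sum()), 6))  # → 1.0
```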
The development of an electrochemical technique for in situ calibrating of combustible gas detectors
NASA Technical Reports Server (NTRS)
Shumar, J. W.; Lantz, J. B.; Schubert, F. H.
1976-01-01
A program to determine the feasibility of performing in situ calibration of combustible gas detectors was successfully completed. Several possible techniques were proposed. The most promising approach involved the use of a miniature water vapor electrolysis cell to generate hydrogen within the flame arrestor of the combustible gas detector being calibrated. A preliminary breadboard of the in situ calibration hardware was designed, fabricated and assembled. The breadboard equipment consisted of a commercially available combustible gas detector, modified to incorporate a water vapor electrolysis cell, and the instrumentation required for controlling the electrolysis and for controlling and calibrating the detector. The results showed that operating the water vapor electrolysis cell at a given current density for a specific time period produced a hydrogen concentration plateau within the flame arrestor of the combustible gas detector.
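The amount of hydrogen generated by an electrolysis cell at a given current follows Faraday's law of electrolysis; the sketch below uses hypothetical current and duration values, not figures from the NASA report.

```python
FARADAY_C_PER_MOL = 96485.332  # Faraday constant, C/mol

def hydrogen_mol(current_a, duration_s):
    """Water electrolysis transfers 2 electrons per H2 molecule, so
    n(H2) = I * t / (2 * F)."""
    return current_a * duration_s / (2.0 * FARADAY_C_PER_MOL)

# Hypothetical duty: 50 mA for 10 minutes.
print(hydrogen_mol(0.050, 600.0))
```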
Beaver Colony Density Trends on the Chequamegon-Nicolet National Forest, 1987 – 2013
Ribic, Christine A.; Donner, Deahn M.; Beck, Albert J.; Rugg, David J.; Reinecke, Sue; Eklund, Dan
2017-01-01
The North American beaver (Castor canadensis) is a managed species in the United States. In northern Wisconsin, as part of the state-wide beaver management program, the Chequamegon-Nicolet National Forest removes beavers from targeted trout streams on U.S. Forest Service lands. However, the success of this management program has not been evaluated. Targeted removals comprise only 3% of the annual beaver harvest, a level of effort that may not affect the beaver population. We used colony location data along Forest streams from 1987–2013 (Nicolet, northeast Wisconsin) and 1997–2013 (Chequamegon, northwest Wisconsin) to assess trends in beaver colony density on targeted trout streams compared to non-targeted streams. On the Chequamegon, colony density on non-targeted trout and non-trout streams did not change over time, while colony density on targeted trout streams declined and then stabilized. On the Nicolet, beaver colony density decreased on both non-targeted streams and targeted trout streams. However, colony density on targeted trout streams declined faster. The impact of targeted trapping was similar across the two sides of the Forest (60% reduction relative to non-targeted trout streams). Exploratory analyses of weather influences found that very dry conditions and severe winters were associated with transient reductions in beaver colony density on non-targeted streams on both sides of the Forest. Our findings may help land management agencies weigh more finely calibrated beaver control measures against continued large-scale removal programs. PMID:28081271
SENSIT.FOR: A program for sensitometric reduction
NASA Astrophysics Data System (ADS)
Maury, A.; Marchal, J.
1984-09-01
A FORTRAN program for sensitometric evaluation of processes involved in hypering astronomical plates was written. It contains subroutines for full or quick description of the operation being done; choice of type of sensitogram; creation of 16 subfiles in the scan; density filtering; correction for area; specular PDS to diffuse ISO density calibration; and fog correction.
EUV efficiency of a 6000-grooves per mm diffraction grating
NASA Technical Reports Server (NTRS)
Hurwitz, Mark; Bowyer, Stuart; Edelstein, Jerry; Harada, Tatsuo; Kita, Toshiaki
1990-01-01
In order to explore whether grooves ruled mechanically at a density of 6000 per mm can perform well at EUV wavelengths, a sample grating is measured with this density in an EUV calibration facility. Measurements are presented of the planar uniform line-space diffraction grating's efficiency and large-angle scattering.
LOFAR 150-MHz observations of the Boötes field: catalogue and source counts
NASA Astrophysics Data System (ADS)
Williams, W. L.; van Weeren, R. J.; Röttgering, H. J. A.; Best, P.; Dijkema, T. J.; de Gasperin, F.; Hardcastle, M. J.; Heald, G.; Prandoni, I.; Sabater, J.; Shimwell, T. W.; Tasse, C.; van Bemmel, I. M.; Brüggen, M.; Brunetti, G.; Conway, J. E.; Enßlin, T.; Engels, D.; Falcke, H.; Ferrari, C.; Haverkorn, M.; Jackson, N.; Jarvis, M. J.; Kapińska, A. D.; Mahony, E. K.; Miley, G. K.; Morabito, L. K.; Morganti, R.; Orrú, E.; Retana-Montenegro, E.; Sridhar, S. S.; Toribio, M. C.; White, G. J.; Wise, M. W.; Zwart, J. T. L.
2016-08-01
We present the first wide area (19 deg2), deep (≈120-150 μJy beam-1), high-resolution (5.6 × 7.4 arcsec) LOFAR High Band Antenna image of the Boötes field made at 130-169 MHz. This image is at least an order of magnitude deeper and 3-5 times higher in angular resolution than previously achieved for this field at low frequencies. The observations and data reduction, which includes full direction-dependent calibration, are described here. We present a radio source catalogue containing 6 276 sources detected over an area of 19 deg2, with a peak flux density threshold of 5σ. As the first thorough test of the facet calibration strategy, introduced by van Weeren et al., we investigate the flux and positional accuracy of the catalogue. We present differential source counts that reach an order of magnitude deeper in flux density than previously achieved at these low frequencies, and show flattening at 150-MHz flux densities below 10 mJy associated with the rise of the low flux density star-forming galaxies and radio-quiet AGN.
ERIC Educational Resources Information Center
Oliveri, Maria Elena; von Davier, Matthias
2014-01-01
In this article, we investigate the creation of comparable score scales across countries in international assessments. We examine potential improvements to current score scale calibration procedures used in international large-scale assessments. Our approach seeks to improve fairness in scoring international large-scale assessments, which often…
An Evaluation of Antarctica as a Calibration Target for Passive Microwave Satellite Missions
NASA Technical Reports Server (NTRS)
Kim, Edward
2012-01-01
Passive microwave remote sensing at L-band (1.4 GHz) is sensitive to soil moisture and sea surface salinity, both important climate variables. Science studies involving these variables can now take advantage of new satellite L-band observations. The first mission with regular global passive microwave observations at L-band is the European Space Agency's Soil Moisture and Ocean Salinity (SMOS), launched November, 2009. A second mission, NASA's Aquarius, was launched June, 2011. A third mission, NASA's Soil Moisture Active Passive (SMAP) is scheduled to launch in 2014. Together, these three missions may provide a decade-long data record -- provided that they are intercalibrated. The intercalibration is best performed at the radiance (brightness temperature) level, and Antarctica is proving to be a key calibration target. However, Antarctica has thus far not been fully characterized as a potential target. This paper will present evaluations of Antarctica as a microwave calibration target for the above satellite missions. Preliminary analyses have identified likely target areas, such as the vicinity of Dome-C and larger areas within East Antarctica. Physical sources of temporal and spatial variability of polar firn are key to assessing calibration uncertainty. These sources include spatial variability of accumulation rate, compaction, surface characteristics (dunes, micro-topography), wind patterns, and vertical profiles of density and temperature. Using primarily SMOS data, variability is being empirically characterized and attempts are being made to attribute observed variability to physical sources. One expected outcome of these studies is the potential discovery of techniques for remotely sensing--over all of Antarctica--parameters such as surface temperature.
Temperature corrected-calibration of GRACE's accelerometer
NASA Astrophysics Data System (ADS)
Encarnacao, J.; Save, H.; Siemes, C.; Doornbos, E.; Tapley, B. D.
2017-12-01
Since April 2011, the thermal control of the accelerometers on board the GRACE satellites has been turned off. The time series of along-track bias clearly show a drastic change in the behaviour of this parameter, while the calibration model has remained unchanged throughout the entire mission lifetime. In an effort to improve the quality of the gravity field models produced at CSR in future mission-long re-processing of GRACE data, we quantify the added value of different calibration strategies. In one approach, the temperature effects that distort the raw accelerometer measurements collected without thermal control are corrected using the housekeeping temperature readings. In this way, a single calibration strategy can be applied consistently over the whole mission lifetime, since it is valid for the thermal conditions both before and after April 2011. Finally, we illustrate that the resulting calibrated accelerations are suitable for neutral thermospheric density studies.
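One way to picture a housekeeping-temperature correction (the actual CSR model form is not specified in the abstract, so the linear bias(T) model and all values below are assumptions for illustration) is a fitted temperature-dependent bias term removed from the raw accelerations:

```python
import numpy as np

def fit_bias_model(temps_k, biases):
    """Least-squares linear bias-vs-temperature model; a deliberate
    simplification of whatever form a production calibration uses."""
    return np.polyfit(temps_k, biases, deg=1)

def correct(raw_accel, temp_k, coeffs):
    """Subtract the modeled temperature-driven bias."""
    return raw_accel - np.polyval(coeffs, temp_k)

# Synthetic housekeeping data: 1e-8 m/s^2 per K sensitivity (invented).
t = np.linspace(290.0, 310.0, 50)
b = 1e-8 * (t - 300.0) + 2e-7
coeffs = fit_bias_model(t, b)
residual = correct(b, t, coeffs)  # correcting the bias itself → ~0
print(float(np.max(np.abs(residual))))
```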
Absolute calibration of neutron detectors on the C-2U advanced beam-driven FRC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Magee, R. M., E-mail: rmagee@trialphaenergy.com; Clary, R.; Korepanov, S.
2016-11-15
In the C-2U fusion energy experiment, high power neutral beam injection creates a large fast ion population that sustains a field-reversed configuration (FRC) plasma. The diagnosis of the fast ion pressure in these high-performance plasmas is therefore critical, and the measurement of the flux of neutrons from the deuterium-deuterium (D-D) fusion reaction is well suited to the task. Here we describe the absolute, in situ calibration of scintillation neutron detectors via two independent methods: firing deuterium beams into a high density gas target and calibration with a 2 × 10^7 n/s AmBe source. The practical issues of each method are discussed and the resulting calibration factors are shown to be in good agreement. Finally, the calibration factor is applied to C-2U experimental data where the measured neutron rate is found to exceed the classical expectation.
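The source-based arm of the calibration reduces to a simple ratio: a source of known emission rate fixes the single factor (folding in geometry and efficiency) that converts detector counts to an absolute neutron rate. The measured count rates below are invented for illustration; only the 2 × 10^7 n/s AmBe source rate comes from the text.

```python
def source_calibration_factor(source_rate_n_per_s, measured_count_rate):
    """Neutrons emitted per detected count, with detector efficiency
    and solid angle absorbed into the single factor."""
    return source_rate_n_per_s / measured_count_rate

def absolute_neutron_rate(count_rate, factor):
    """Apply the calibration factor to a plasma-shot count rate."""
    return count_rate * factor

f = source_calibration_factor(2.0e7, 4.0e3)  # AmBe source; counts invented
print(absolute_neutron_rate(1.2e4, f))       # hypothetical plasma shot
```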
The Evaluation of HOMER as a Marine Corps Expeditionary Energy Predeployment Tool
2010-09-01
experiment was used to ensure the HOMER models were accurate. Following the calibration, the concept of expeditionary energy density as it pertains to power … The following process was used to analyze HOMER's modeling capability: conduct a photovoltaic (PV) experiment; develop a calibration process to match the HOMER …
Calibration of a High Resolution X-ray Spectrometer for High-Energy-Density Plasmas on NIF
NASA Astrophysics Data System (ADS)
Kraus, B.; Gao, L.; Hill, K. W.; Bitter, M.; Efthimion, P.; Schneider, M. B.; Chen, H.; Ayers, J.; Beiersdorfer, P.; Liedahl, D.; Macphee, A. G.; Thorn, D. B.; Bettencourt, R.; Kauffman, R.; Le, H.; Nelson, D.
2017-10-01
A high-resolution, DIM-based (Diagnostic Instrument Manipulator) x-ray crystal spectrometer has been calibrated for and deployed at the National Ignition Facility (NIF) to diagnose plasma conditions and mix in ignition capsules near stagnation times. Two conical crystals in the Hall geometry focus rays from the Kr He-α, Ly-α, and He-β complexes onto a streak camera for time-resolved spectra, in order to measure electron density and temperature by observing Stark broadening and relative intensities of dielectronic satellites. Signals from these two crystals are correlated with a third crystal that time-integrates the intervening energy range. The spectrometer has been absolutely calibrated using a microfocus x-ray source, an array of CCD and single-photon-counting detectors, and K- and L-absorption edge filters. Measurements of the integrated reflectivity, energy range, and energy resolution for each crystal will be presented. The implications of the calibration for signal levels from NIF implosions and x-ray filter choices will be discussed. This work was performed under the auspices of the U.S. DoE by Princeton Plasma Physics Laboratory under contract DE-AC02-09CH11466 and by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344.
Freedman, Laurence S; Commins, John M; Willett, Walter; Tinker, Lesley F; Spiegelman, Donna; Rhodes, Donna; Potischman, Nancy; Neuhouser, Marian L; Moshfegh, Alanna J; Kipnis, Victor; Baer, David J; Arab, Lenore; Prentice, Ross L; Subar, Amy F
2017-07-01
Calibrating dietary self-report instruments is recommended as a way to adjust for measurement error when estimating diet-disease associations. Because biomarkers available for calibration are limited, most investigators use self-reports (e.g., 24-hour recalls (24HRs)) as the reference instrument. We evaluated the performance of 24HRs as reference instruments for calibrating food frequency questionnaires (FFQs), using data from the Validation Studies Pooling Project, comprising 5 large validation studies using recovery biomarkers. Using 24HRs as reference instruments, we estimated attenuation factors, correlations with truth, and calibration equations for FFQ-reported intakes of energy and of protein, potassium, and sodium and their densities, and we compared them with values derived using biomarkers. Based on 24HRs, FFQ attenuation factors were substantially overestimated for energy and sodium intakes, less so for protein and potassium, and minimally for nutrient densities. FFQ correlations with truth based on 24HRs were substantially overestimated for all dietary components. Calibration equations did not capture dependencies on body mass index. We also compared predicted bias in estimated relative risks adjusted using 24HRs as reference instruments with bias when making no adjustment. In disease models with energy and 1 or more nutrient intakes, predicted bias in estimated nutrient relative risks was reduced on average, but bias in the energy risk coefficient was unchanged.
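The attenuation factor at the heart of this comparison is the regression-calibration slope: the slope of the reference (ideally true) intake regressed on the error-prone FFQ report. A minimal sketch with simulated data (all intake numbers invented; the true attenuation factor is 0.5 by construction):

```python
import numpy as np

def attenuation_factor(reference_intake, ffq_intake):
    """lambda = cov(R, Q) / var(Q); an observed log relative risk is
    divided by lambda to correct for FFQ measurement error."""
    q = np.asarray(ffq_intake, dtype=float)
    r = np.asarray(reference_intake, dtype=float)
    return float(np.cov(r, q, ddof=1)[0, 1] / np.var(q, ddof=1))

rng = np.random.default_rng(0)
truth = rng.normal(2000.0, 300.0, size=5000)     # energy intake, kcal/day
ffq = truth + rng.normal(0.0, 300.0, size=5000)  # noisy self-report
print(attenuation_factor(truth, ffq))            # ≈ 0.5 by construction
```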
Improving gross count gamma-ray logging in uranium mining with the NGRS probe
NASA Astrophysics Data System (ADS)
Carasco, C.; Pérot, B.; Ma, J.-L.; Toubon, H.; Dubille-Auchère, A.
2018-01-01
AREVA Mines and the Nuclear Measurement Laboratory of CEA Cadarache are collaborating to improve the sensitivity and precision of uranium concentration measurement by means of gamma-ray logging. The determination of uranium concentration in boreholes is performed with the Natural Gamma Ray Sonde (NGRS), based on a NaI(Tl) scintillation detector. The total gamma count rate is converted into uranium concentration using a calibration coefficient measured in concrete blocks with known uranium concentration at the AREVA Mines calibration facility located in Bessines, France. Until now, to take into account gamma attenuation across a variety of borehole diameters, tubing materials, diameters and thicknesses, and filling fluid densities and compositions, a semi-empirical formula was used to correct the calibration coefficient measured at the Bessines facility. In this work, we propose to use Monte Carlo simulations to improve the gamma attenuation corrections. To this purpose, the NGRS probe and the calibration measurements in the standard concrete blocks have been modeled with the MCNP computer code. The calibration coefficient determined by simulation, 5.3 s-1.ppmU-1 ± 10%, is in good agreement with the measured value of 5.2 s-1.ppmU-1. Based on the validated MCNP model, several parametric studies have been performed. For instance, the rock density and chemical composition proved to have a limited impact on the calibration coefficient. However, gamma self-absorption in uranium leads to a nonlinear relationship between count rate and uranium concentration beyond approximately 1% uranium weight fraction, the underestimation of the uranium content reaching more than a factor of 2.5 for a 50% uranium weight fraction. Next steps will concern parametric studies with different tubing materials, diameters and thicknesses, as well as different borehole filling fluids representative of real measurement conditions.
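In the linear regime (below roughly 1 wt% uranium, per the study above), the conversion is a simple division by the calibration coefficient; a sketch with an invented count rate:

```python
def uranium_concentration_ppm(net_count_rate_s, calib_coeff=5.3):
    """ppm U = count rate / (counts per second per ppm U).
    Valid only where self-absorption is negligible (below ~1 wt% U);
    attenuation corrections for tubing and fluid are ignored here."""
    return net_count_rate_s / calib_coeff

# Hypothetical net rate of 1060 counts/s with the simulated coefficient:
print(uranium_concentration_ppm(1060.0))
```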
Calibration of an Item Bank for the Assessment of Basque Language Knowledge
ERIC Educational Resources Information Center
Lopez-Cuadrado, Javier; Perez, Tomas A.; Vadillo, Jose A.; Gutierrez, Julian
2010-01-01
The main requisite for a functional computerized adaptive testing system is a calibrated item bank. This text presents the tasks carried out during the calibration of an item bank for assessing knowledge of the Basque language. It has been done in terms of the 3-parameter logistic model provided by item response theory. Besides, this…
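The 3-parameter logistic model used for the calibration has a closed form; a minimal sketch, with all parameter values invented for illustration:

```python
import math

def p_correct_3pl(theta, a, b, c):
    """3PL IRT model: guessing floor c, discrimination a, difficulty b.
    P(theta) = c + (1 - c) / (1 + exp(-1.7 * a * (theta - b)))."""
    return c + (1.0 - c) / (1.0 + math.exp(-1.7 * a * (theta - b)))

# At theta == b the examinee sits halfway between the guessing floor
# and certainty: c + (1 - c) / 2.
print(p_correct_3pl(theta=0.0, a=1.2, b=0.0, c=0.2))
```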
Assessment of MODIS On-Orbit Calibration Using a Deep Convective Cloud Technique
NASA Technical Reports Server (NTRS)
Mu, Qiaozhen; Wu, Aisheng; Chang, Tiejun; Angal, Amit; Link, Daniel; Xiong, Xiaoxiong; Doelling, David R.; Bhatt, Rajendra
2016-01-01
The MODerate Resolution Imaging Spectroradiometer (MODIS) sensors onboard the Terra and Aqua satellites are calibrated on-orbit with a solar diffuser (SD) for the reflective solar bands (RSB). The MODIS sensors are operating beyond their designed lifetime and hence present a major challenge to maintaining calibration accuracy. The degradation of the onboard SD is tracked by a solar diffuser stability monitor (SDSM) over a wavelength range from 0.41 to 0.94 micrometers; any degradation of the SD beyond 0.94 micrometers therefore cannot be captured by the SDSM. The uncharacterized degradation at wavelengths beyond this limit could adversely affect the Level 1B (L1B) product. To reduce the calibration uncertainties caused by SD degradation, invariant Earth-scene targets are used to monitor and calibrate the MODIS L1B product. The use of deep convective clouds (DCCs) is one such method, and it is particularly significant for the short-wave infrared (SWIR) bands in assessing their long-term calibration stability. In this study, we use the DCC technique to assess the performance of the Terra and Aqua MODIS Collection-6 L1B for RSB 1, 3-7, and 26, with spectral coverage from 0.47 to 2.13 micrometers. Results show relatively stable trends in Terra and Aqua MODIS reflectance for most bands. Careful attention needs to be paid to Aqua band 1 and Terra bands 3 and 26, as their trends exceed 1% over the study period. We check the feasibility of using the DCC technique to assess the stability of MODIS bands 17-19. The assessment test on response versus scan angle (RVS) calibration shows a substantial trend difference for Aqua band 1 between different angles of incidence (AOIs). The DCC technique can be used to improve the RVS calibration in the future.
Ma, Jinhui; Siminoski, Kerry; Alos, Nathalie; Halton, Jacqueline; Ho, Josephine; Lentle, Brian; Matzinger, MaryAnn; Shenouda, Nazih; Atkinson, Stephanie; Barr, Ronald; Cabral, David A; Couch, Robert; Cummings, Elizabeth A; Fernandez, Conrad V; Grant, Ronald M; Rodd, Celia; Sbrocchi, Anne Marie; Scharke, Maya; Rauch, Frank; Ward, Leanne M
2015-03-01
Our objectives were to assess the magnitude of the disparity in lumbar spine bone mineral density (LSBMD) Z-scores generated by different reference databases and to evaluate whether the relationship between LSBMD Z-scores and vertebral fractures (VF) varies with the choice of database. Children with leukemia underwent LSBMD measurement by cross-calibrated dual-energy x-ray absorptiometry, with Z-scores generated according to the Hologic and Lunar databases. VF were assessed by the Genant method on spine radiographs. Logistic regression was used to assess the association between fractures and LSBMD Z-scores. Net reclassification improvement and area under the receiver operating characteristic curve were calculated to assess the predictive accuracy of LSBMD Z-scores for VF. For the 186 children from 0 to 18 years of age, 6 different age ranges were studied. The Z-scores generated for the 0 to 18 group were highly correlated (r ≥ 0.90), but the proportion of children with LSBMD Z-scores ≤ -2.0 among those with VF varied substantially (from 38% to 66%). Odds ratios (OR) for the association between LSBMD Z-score and VF were similar regardless of database (ranging from OR = 1.92, 95% confidence interval 1.44-2.56, to OR = 2.70, 95% confidence interval 1.70-4.28). Area under the receiver operating characteristic curve and net reclassification improvement ranged from 0.71 to 0.75 and from -0.15 to 0.07, respectively. Although the use of an LSBMD Z-score threshold as part of the definition of osteoporosis in a child with VF does not appear valid, the study of relationships between BMD and VF is valid regardless of the BMD database that is used.
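The abstract's predictive-accuracy metric, area under the ROC curve, can be sketched via its equivalence to the normalized Mann-Whitney U statistic. This is an illustrative computation only, not the authors' code, and the toy scores and labels below are invented, not study data.

```python
# Hedged sketch: AUC as the fraction of (event, non-event) pairs in which
# the event case receives the higher risk score; ties count as one half.
# For the study above, a risk score might be the negated LSBMD Z-score
# (lower Z-score -> higher fracture risk) -- an assumption for illustration.

def roc_auc(scores, labels):
    """labels: 1 = event (e.g. vertebral fracture), 0 = no event."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Count concordant pairs (event scored above non-event); ties get 0.5.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example with invented numbers:
scores = [0.2, 0.4, 0.35, 0.8, 0.9, 0.7]
labels = [0,   0,   0,    1,   1,   1]
print(roc_auc(scores, labels))  # 1.0 for this perfectly separated toy set
```

An AUC of 0.71-0.75, as reported above, means roughly three of every four randomly drawn fracture/no-fracture pairs are ranked correctly by the Z-score.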
Saito, Masatoshi; Sagara, Shota
2017-06-01
The main objective of this study is to propose a simple formulation (which we called DEEDZ) for deriving effective atomic numbers (Zeff) via electron density (ρe) calibration from dual-energy (DE) CT data. We carried out numerical analysis of this DEEDZ method for a large variety of materials with known elemental compositions and mass densities using an available photon cross sections database. The new conversion approach was also applied to previously published experimental DECT data to validate its practical feasibility. We performed numerical analysis of the DEEDZ conversion method for tissue surrogates that have the same chemical compositions and mass densities as a commercial tissue-characterization phantom in order to determine the parameters necessary for the ρe and Zeff calibrations in the DEEDZ conversion. These parameters were then applied to the human-body-equivalent tissues of ICRU Report 46 as objects of interest with unknown ρe and Zeff. The attenuation coefficients of these materials were calculated using the XCOM photon cross sections database. We also applied the DEEDZ conversion to experimental DECT data available in the literature, measured for two commercial phantoms of different shapes and sizes using a dual-source CT scanner at 80 kV and 140 kV/Sn. The simulated Zeff values were in excellent agreement with the reference values for almost all of the ICRU-46 human tissues over the Zeff range from 5.83 (gallstones-cholesterol) to 16.11 (bone mineral-hydroxyapatite). The relative deviations from the reference Zeff were within ±0.3% for all materials, except for one outlier, the thyroid, which presented a -3.1% deviation. The reason for this discrepancy is that the thyroid contains a small amount of iodine, an element with a large atomic number (Z = 53). In the experimental case, we confirmed that the simple formulation, with fewer fit parameters, enables Zeff to be calibrated as accurately as the existing calibration procedure.
The DEEDZ conversion method, based on the proposed simple formulation, could facilitate the construction of ρe and Zeff images from acquired DECT data. © 2017 American Association of Physicists in Medicine.
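The abstract does not reproduce the DEEDZ formulas, so as a hedged illustration of what an "effective atomic number" of a mixture means, here is the common power-law definition computed from elemental composition. The exponent m = 3.1 is an assumed textbook value and the function name is invented; this is not necessarily the formulation used by the authors.

```python
# Illustrative sketch only (not the DEEDZ method): the classic power-law
# effective atomic number, Zeff = (sum_i f_i * Z_i**m)**(1/m), where f_i is
# the electron fraction of element i and m ~ 3.1 is an assumed exponent.

def zeff_power_law(composition, m=3.1):
    """composition: list of (Z, A, mass_fraction) tuples for each element."""
    # Electron count of each element is proportional to w * Z / A.
    ne = [w * z / a for z, a, w in composition]
    total = sum(ne)
    f = [n / total for n in ne]                      # electron fractions
    return sum(fi * z ** m for fi, (z, a, w) in zip(f, composition)) ** (1.0 / m)

# Water, H2O: (Z, A, mass fraction) for hydrogen and oxygen.
water = [(1, 1.008, 0.1119), (8, 15.999, 0.8881)]
print(round(zeff_power_law(water), 2))  # close to the textbook value ~7.4
```

Calibration schemes like DEEDZ aim to recover such a quantity per voxel from the two CT numbers alone, without knowing the composition.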
High-density volatiles in the system C-O-H-N for the calibration of a laser Raman microprobe
Chou, I.-Ming; Pasteris, J.D.; Seitz, J.C.
1990-01-01
Three methods have been used to produce high-density volatiles in the system C-O-H-N for the calibration of a laser Raman microprobe (LRM): synthetic fluid-inclusion, sealed fused-quartz-tube, and high-pressure-cell methods. Because quantitative interpretation of a Raman spectrum of mixed-volatile fluid inclusions requires accurate knowledge of pressure- and composition-sensitive Raman scattering efficiencies or quantification factors for each species, calibrations of these parameters for mixtures of volatiles of known composition and pressure are necessary. Two advantages of the synthetic fluid-inclusion method are that the inclusions can be used readily in complementary microthermometry (MT) studies and that they have sizes and optical properties like those in natural samples. Some disadvantages are that producing H2O-free volatile mixtures is difficult, the composition may vary from one inclusion to another, the exact composition and density of the inclusions are difficult to obtain, and the experimental procedures are complicated. The primary advantage of the method using sealed fused-quartz tubes is its simplicity. Some disadvantages are that exact compositions for complex volatile mixtures are difficult to predict, densities can be approximated only, and complementary MT studies on the tubes are difficult to conduct. The advantages of the high-pressure-cell method are that specific, known compositions of volatile mixtures can be produced and that their pressures can be varied easily and are monitored during calibration. Some disadvantages are that complementary MT analysis is impossible, and the setup is bulky. Among the three methods for the calibration of an LRM, the high-pressure-cell method is the most reliable and convenient for control of composition and total pressure. We have used the high-pressure cell to obtain preliminary data on (1) the ratio of the Raman quantification factors for CH4 and N2 in an equimolar CH4-N2 mixture and (2) the spectral peak position of ν1 of CH4 in that mixture, as well as in pure CH4, at pressures up to 690 bars. These data were successfully applied to natural inclusions from the Duluth Complex in order to derive their compositions and total pressures. © 1990.
Assessment of opacimeter calibration according to International Standard Organization 10155.
Gomes, J F
2001-01-01
This paper compares the calibration method for opacimeters issued by the International Organization for Standardization (ISO) as ISO 10155 with the manual reference method for determination of dust content in stack gases. ISO 10155 requires at least nine operational measurements, three per dust emission range within the stack. The procedure is assessed by comparison with previous opacimeter calibration methods using only two operational measurements, drawn from a set of measurements made at pulp mill stacks. The results show that even though the international standard requires the calibration curve to be obtained using 3 × 3 points, a calibration curve derived using 3 points can at times be statistically acceptable, provided that the amplitude of the individual measurements is low.
Comparison of Satellite based Ion Density Measurements with Digisonde electron density measurements
NASA Astrophysics Data System (ADS)
Wilson, G.; Balthazor, R. L.; Reinisch, B. W.; McHarg, M.; Maldonado, C.
2017-12-01
The integrated Miniaturized Electrostatic Analyzer (IMESA) flying on the STPSat-3 satellite has collected more than 3 years of ion density data. This instrument is the first in a constellation of up to 6 instruments. We plan to integrate the data from all IMESAs into an appropriate ionospheric model. Our first step is to validate the IMESA data and calibrate the instrument. In this presentation we discuss our process for preparing IMESA data and comparing it to ground-based measurements. Lastly, we present a number of comparisons between IMESA ion density measurements and digisonde electron density measurements.
Cassini Ion Mass Spectrometer Peak Calibrations from Statistical Analysis of Flight Data
NASA Astrophysics Data System (ADS)
Woodson, A. K.; Johnson, R. E.
2017-12-01
The Cassini Ion Mass Spectrometer (IMS) is an actuating time-of-flight (TOF) instrument capable of resolving ion mass, energy, and trajectory over a field of view that captures nearly the entire sky. One of three instruments composing the Cassini Plasma Spectrometer, IMS sampled plasma throughout the Kronian magnetosphere from 2004 through 2012 when it was permanently disabled due to an electrical malfunction. Initial calibration of the flight instrument at Southwest Research Institute (SwRI) was limited to a handful of ions and energies due to time constraints, with only about 30% of planned measurements carried out prior to launch. Further calibration measurements were subsequently carried out after launch at SwRI and Goddard Space Flight Center using the instrument prototype and engineering model, respectively. However, logistical differences among the three calibration efforts raise doubts as to how accurately the post-launch calibrations describe the behavior of the flight instrument. Indeed, derived peak parameters for some ion species differ significantly from one calibration to the next. In this study we instead perform a statistical analysis on 8 years of flight data in order to extract ion peak parameters that depend only on the response of the flight instrument itself. This is accomplished by first sorting the TOF spectra based on their apparent compositional similarities (e.g. primarily water group ions, primarily hydrocarbon ions, etc.) and normalizing each spectrum. The sorted, normalized data are then binned according to TOF, energy, and counts in order to generate energy-dependent probability density maps of each ion peak contour. Finally, by using these density maps to constrain a stochastic peak fitting algorithm we extract confidence intervals for the model parameters associated with various measured ion peaks, establishing a logistics-independent calibration of the body of IMS data gathered over the course of the Cassini mission.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, J; Penfold, S; Royal Adelaide Hospital, Adelaide, SA
2015-06-15
Purpose: To investigate the robustness of dual-energy CT (DECT) and single-energy CT (SECT) proton stopping power calibration techniques and quantify the associated errors when imaging a phantom differing in chemical composition from that used during stopping power calibration. Methods: The CIRS tissue substitute phantom was scanned in a CT simulator at 90 kV and 140 kV. This image set was used to generate a DECT proton SPR calibration based on a relationship between effective atomic number and mean excitation energy. A SECT proton SPR calibration based only on Hounsfield units (HUs) was also generated. DECT and SECT scans of a second phantom of known density and chemical composition were performed. The SPR of the second phantom was calculated with the DECT approach (SPR-DECT), the SECT approach (SPR-SECT), and finally the known density and chemical composition of the phantom (SPR-ref). The DECT and SECT image sets were imported into the Pinnacle³ research release of the proton therapy treatment planning system. The difference in dose when exposed to a common pencil beam distribution was investigated. Results: SPR-DECT was found to be in better agreement with SPR-ref than SPR-SECT. The mean difference in SPR for all materials was 0.51% for DECT and 6.89% for SECT. With the exception of Teflon, SPR-DECT was found to agree with SPR-ref to within 1%. Significant differences in calculated dose were found when using the DECT image set or the SECT image set. Conclusion: The DECT calibration technique was found to be more robust in situations where the physical properties of the test materials differed from the materials used during SPR calibration. Furthermore, it was demonstrated that the DECT and SECT SPR calibration techniques can result in significantly different calculated dose distributions.
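The link the abstract draws between electron density, mean excitation energy, and proton SPR can be sketched with the ratio of Bethe stopping numbers. This is a hedged illustration, not the study's implementation: the proton energy, constants, and I-values below are assumed for the example.

```python
import math

# Hedged sketch: a common way to turn relative electron density (rho_e) and
# mean excitation energy I [eV] into a proton stopping-power ratio is
#   SPR = rho_e * [ln(2 me c^2 beta^2 / (I (1 - beta^2))) - beta^2]
#             / [same bracket evaluated with I_water].
# I_water = 75 eV and 200 MeV protons are illustrative assumptions.

ME_C2 = 0.5109989e6   # electron rest energy, eV
MP_C2 = 938.272e6     # proton rest energy, eV
I_WATER = 75.0        # assumed mean excitation energy of water, eV

def beta_sq(kinetic_energy_ev):
    gamma = 1.0 + kinetic_energy_ev / MP_C2
    return 1.0 - 1.0 / gamma ** 2

def spr(rho_e, i_medium_ev, t_ev=200e6):
    b2 = beta_sq(t_ev)
    num = math.log(2 * ME_C2 * b2 / (i_medium_ev * (1 - b2))) - b2
    den = math.log(2 * ME_C2 * b2 / (I_WATER * (1 - b2))) - b2
    return rho_e * num / den

print(round(spr(1.0, I_WATER), 3))  # water relative to itself: 1.0
print(spr(1.04, 75.3))              # muscle-like values: close to rho_e
```

The sketch makes the abstract's point concrete: SPR is dominated by rho_e, with I entering only logarithmically, which is why an I estimate from DECT effective atomic number can tighten the calibration.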
The EAGLE simulations: atomic hydrogen associated with galaxies
NASA Astrophysics Data System (ADS)
Crain, Robert A.; Bahé, Yannick M.; Lagos, Claudia del P.; Rahmati, Alireza; Schaye, Joop; McCarthy, Ian G.; Marasco, Antonino; Bower, Richard G.; Schaller, Matthieu; Theuns, Tom; van der Hulst, Thijs
2017-02-01
We examine the properties of atomic hydrogen (H I) associated with galaxies in the Evolution and Assembly of GaLaxies and their Environments (EAGLE) simulations of galaxy formation. EAGLE's feedback parameters were calibrated to reproduce the stellar mass function and galaxy sizes at z = 0.1, and we assess whether this calibration also yields realistic H I properties. We estimate the self-shielding density with a fitting function calibrated using radiation transport simulations, and correct for molecular hydrogen with empirical or theoretical relations. The `standard-resolution' simulations systematically underestimate H I column densities, leading to an H I deficiency in low-mass (M⋆ < 1010 M⊙) galaxies and poor reproduction of the observed H I mass function. These shortcomings are largely absent from EAGLE simulations featuring a factor of 8 (2) better mass (spatial) resolution, within which the H I mass of galaxies evolves more mildly from z = 1 to 0 than in the standard-resolution simulations. The largest volume simulation reproduces the observed clustering of H I systems, and its dependence on H I richness. At fixed M⋆, galaxies acquire more H I in simulations with stronger feedback, as they become associated with more massive haloes and higher infall rates. They acquire less H I in simulations with a greater star formation efficiency, since the star formation and feedback necessary to balance the infall rate is produced by smaller gas reservoirs. The simulations indicate that the H I of present-day galaxies was acquired primarily by the smooth accretion of ionized, intergalactic gas at z ≃ 1, which later self-shields, and that only a small fraction is contributed by the reincorporation of gas previously heated strongly by feedback. H I reservoirs are highly dynamic: over 40 per cent of H I associated with z = 0.1 galaxies is converted to stars or ejected by z = 0.
NASA Astrophysics Data System (ADS)
Lorefice, S.; Malengo, A.; Vámossy, C.; Bettin, H.; Toth, H.; do Céu Ferreira, M.; Gosset, A.; Madec, T.; Heinonen, M.; Buchner, C.; Lenard, E.; Spurny, R.; Akcadag, U.; Domostroeva, N.
2008-01-01
The main objective of EUROMET project 702 was to assess the comparability of eleven participating European national metrology institutes (INRIM (IT), OMH (HU), PTB (DE), BEV (AT), IPQ (PT), LNE (FR), MIKES (FI), GUM (PL), SMU (SK), UME (TR) and VNIIM (RU)) in performing calibrations of high-resolution hydrometers for liquid density determination in the range between 600 kg m⁻³ and 1300 kg m⁻³. By means of two groups of four similar transfer standards of excellent metrological characteristics, the participating laboratories were initially divided into two groups (petals) linked by the three density laboratories of INRIM, OMH and PTB. The results of the participating laboratories are analyzed in this report, and good agreement was found between the results provided by most of the participants. These results also allowed determination of the degrees of equivalence of each participating NMI with respect to the EUROMET key comparison reference values (EU-KCRV); they will provide a basis for the review of the Calibration and Measurement Capabilities (CMC) entries on hydrometer calibration, and they allowed the degree of equivalence between pairs of NMIs to be established. The Istituto Nazionale di Ricerca Metrologica (INRIM), Italy, formerly IMGC-CNR, coordinated the project. The main text appears in Appendix B of the BIPM key comparison database, kcdb.bipm.org. The final report has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
NASA Astrophysics Data System (ADS)
Luo, Ning; Zhao, Zhanfeng; Illman, Walter A.; Berg, Steven J.
2017-11-01
Transient hydraulic tomography (THT) is a robust method of aquifer characterization for estimating the spatial distributions (or tomograms) of both hydraulic conductivity (K) and specific storage (Ss). However, the highly parameterized nature of the geostatistical inversion approach renders it computationally intensive for large-scale investigations. In addition, geostatistics-based THT may produce overly smooth tomograms when the head data used to constrain the inversion are limited. Therefore, alternative model conceptualizations for THT need to be examined. To investigate this, we simultaneously calibrated different groundwater models with varying parameterizations and zonations using two cases of different pumping and monitoring data densities from a laboratory sandbox. Specifically, one effective parameter model, four geology-based zonation models with varying accuracy and resolution, and five geostatistical models with different prior information were calibrated. Model performance is quantitatively assessed by examining the calibration and validation results. Our study reveals that highly parameterized geostatistical models perform the best among the models compared, while the zonation model with excellent knowledge of stratigraphy also yields comparable results. When few pumping tests with sparse monitoring intervals are available, the incorporation of accurate or simplified geological information into geostatistical models reveals more details in heterogeneity and yields more robust validation results. However, results deteriorate when inaccurate geological information is incorporated. Finally, our study reveals that transient inversions are necessary to obtain reliable K and Ss estimates for making accurate predictions of transient drawdown events.
New bioreactor for in situ simultaneous measurement of bioluminescence and cell density
NASA Astrophysics Data System (ADS)
Picart, Pascal; Bendriaa, Loubna; Daniel, Philippe; Horry, Habib; Durand, Marie-José; Jouvanneau, Laurent; Thouand, Gérald
2004-03-01
This article presents a new device devoted to the simultaneous measurement of bioluminescence and optical density of a bioluminescent bacterial culture. It features an optoelectronic bioreactor with a fully autoclavable module, in which the bioluminescent bacteria are cultivated, a modulated laser diode dedicated to optical density measurement, and a detection head for the acquisition of both bioluminescence and optical density signals. Light is detected through a bifurcated fiber bundle. This setup allows the simultaneous estimation of the bioluminescence and the cell density of the culture medium without any sampling. The bioluminescence is measured through a highly sensitive photomultiplier unit, which has been photometrically calibrated to allow light flux measurements. This was achieved by considering the bioluminescence spectrum and the full optical transmission of the device. The instrument makes it possible to measure a very weak light flux of only a few pW. The optical density is determined through the laser diode and a photodiode using numerical synchronous detection, which is based on the power spectrum density of the recorded signal. The detection was calibrated to measure optical density up to 2.5. The device was validated using the Vibrio fischeri bacterium, which was cultivated under continuous culture conditions. A very good correlation between manual and automatic measurements processed with this instrument has been demonstrated. Furthermore, the optoelectronic bioreactor enables determination of the luminance of the bioluminescent bacteria, which is estimated to be 6×10⁻⁵ W sr⁻¹ m⁻² for an optical density of 0.3. Experimental results are presented and discussed.
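The synchronous-detection idea the abstract describes can be sketched numerically: multiply the photodiode signal by a reference at the laser modulation frequency and average, so that components away from that frequency cancel out. All names, frequencies, and amplitudes below are assumptions for illustration, not the instrument's actual parameters.

```python
import math

# Hedged sketch of numerical synchronous (lock-in) detection: project the
# sampled signal onto sine and cosine references at the modulation
# frequency; the magnitude recovers the modulated amplitude even in the
# presence of a DC drift or out-of-band disturbance.

def lock_in_amplitude(samples, freq, dt):
    """Amplitude of the component of `samples` at `freq` (Hz), step `dt` (s)."""
    n = len(samples)
    s = sum(x * math.sin(2 * math.pi * freq * k * dt) for k, x in enumerate(samples))
    c = sum(x * math.cos(2 * math.pi * freq * k * dt) for k, x in enumerate(samples))
    return 2.0 * math.hypot(s, c) / n

# Synthesize a 1 kHz modulated signal of amplitude 0.37 plus a DC offset,
# sampled over exactly 10 modulation periods (invented numbers).
fs, f_mod, amp = 100_000.0, 1_000.0, 0.37
n = 1000
sig = [amp * math.sin(2 * math.pi * f_mod * k / fs) + 0.05 for k in range(n)]
print(round(lock_in_amplitude(sig, f_mod, 1 / fs), 3))  # -> 0.37
```

The recovered amplitude tracks the transmitted laser power, from which an optical density follows via the Beer-Lambert law against a blank reference.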
Bone quality evaluation at dental implant site using multislice CT, micro-CT, and cone beam CT.
Parsa, Azin; Ibrahim, Norliza; Hassan, Bassam; van der Stelt, Paul; Wismeijer, Daniel
2015-01-01
The first purpose of this study was to analyze the correlation between bone volume fraction (BV/TV) and calibrated radiographic bone density Hounsfield units (HU) in human jaws, derived from micro-CT and multislice computed tomography (MSCT), respectively. The second aim was to assess the accuracy of cone beam computed tomography (CBCT) in evaluating trabecular bone density and microstructure using MSCT and micro-CT, respectively, as reference gold standards. Twenty partially edentulous human mandibular cadavers were scanned by three types of CT modalities: MSCT (Philips, Best, the Netherlands), CBCT (3D Accuitomo 170, J Morita, Kyoto, Japan), and micro-CT (SkyScan 1173, Kontich, Belgium). Image analysis was performed using Amira (v4.1, Visage Imaging Inc., Carlsbad, CA, USA), 3Diagnosis (v5.3.1, 3diemme, Cantu, Italy), Geomagic (Studio® 2012, Morrisville, NC, USA), and CTAn (v1.11, SkyScan). MSCT, CBCT, and micro-CT scans of each mandible were matched to select the exact region of interest (ROI). MSCT HU, micro-CT BV/TV, and CBCT gray value and bone volume fraction of each ROI were derived. Statistical analysis was performed to assess the correlations between corresponding measurement parameters. Strong correlations were observed between CBCT and MSCT density (r = 0.89) and between CBCT and micro-CT BV/TV measurements (r = 0.82). Excellent correlation was observed between MSCT HU and micro-CT BV/TV (r = 0.91). However, significant differences were found between all comparison pairs (P < 0.001) except for the mean measurement between CBCT BV/TV and micro-CT BV/TV (P = 0.147). An excellent correlation exists between bone volume fraction and bone density as assessed on micro-CT and MSCT, respectively. This suggests that bone density measurements could be used to estimate bone microstructural parameters.
A strong correlation also was found between CBCT gray values and BV/TV and their gold standards, suggesting the potential of this modality in bone quality assessment at implant site. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
A microwave method for measuring moisture content, density, and grain angle of wood
W. L. James; Y.-H. Yen; R. J. King
1985-01-01
The attenuation, phase shift and depolarization of a polarized 4.81-gigahertz wave as it is transmitted through a wood specimen can provide estimates of the moisture content (MC), density, and grain angle of the specimen. Calibrations are empirical, and computations are complicated, with considerable interaction between parameters. Measured dielectric parameters,...
ERIC Educational Resources Information Center
Snyder, James
2010-01-01
This dissertation research examined the changes in item RIT calibration that occurred when adding audio to a set of currently calibrated RIT items and then placing these new items as field test items in the modified assessments on the NWEA MAP test platform. The researcher used test results from over 600 students in the Poway School District in…
NASA Astrophysics Data System (ADS)
Sanchez-Valle, Carmen; Malfait, Wim J.
2016-04-01
Although silicate melts comprise only a minor volume fraction of the present-day Earth, they play a critical role in the Earth's geochemical and geodynamical evolution. Their physical properties, particularly density, are a key control on many magmatic processes, including magma chamber dynamics and volcanic eruptions, melt extraction from residual rocks during partial melting, and crystal settling and melt migration. However, quantitative modeling of these processes has long been limited by the scarcity of data on the density and compressibility of volatile-bearing silicate melts at relevant pressure and temperature conditions. In the last decade, new experimental designs, notably combining large-volume presses with synchrotron-based techniques, have opened the possibility of determining in situ the density of a wide range of dry and volatile-bearing (H2O and CO2) silicate melt compositions at high-pressure, high-temperature conditions. In this contribution we illustrate some of this progress, with a focus on recent results on the density of dry and hydrous felsic and intermediate melt compositions (rhyolite, phonolite and andesite melts) at crustal and upper-mantle conditions (up to 4 GPa and 2000 K). The new data on felsic-intermediate melts have been combined with in situ data on (ultra)mafic systems and with ambient-pressure dilatometry and sound velocity data to calibrate a continuous, predictive density model for hydrous and CO2-bearing silicate melts, with applications to magmatic processes down to the conditions of the mantle transition zone (up to 2773 K and 22 GPa). The calibration dataset consists of more than 370 density measurements on high-pressure and/or water- and CO2-bearing melts and is formulated in terms of the partial molar properties of the oxide components.
The model predicts the density of volatile-bearing liquids to within 42 kg/m³ in the calibration interval, and model extrapolations up to 3000 K and 100 GPa are in good agreement with results from ab initio calculations. The density model has been applied to examine mineral-melt buoyancy relations at depth; the implications of these results for the dynamics of magma chambers, crystal settling, and the stability and mobility of magmas in the upper mantle will be discussed.
Calibration Of Partial-Pressure-Of-Oxygen Sensors
NASA Technical Reports Server (NTRS)
Yount, David W.; Heronimus, Kevin
1995-01-01
This report presents an analysis of, and improvements to, the procedure for calibrating partial-pressure-of-oxygen sensors to satisfy Spacelab calibration requirements. The sensors exhibit fast drift, which results in a calibration period too short to be suitable for Spacelab. By assessing the complete process of determining the total available drift range, the calibration procedure was modified to eliminate errors and still satisfy the requirements without compromising the integrity of the system.
Approach to derivation of SIR-C science requirements for calibration. [Shuttle Imaging Radar
NASA Technical Reports Server (NTRS)
Dubois, Pascale C.; Evans, Diane; Van Zyl, Jakob
1992-01-01
Many of the experiments proposed for the forthcoming SIR-C mission require calibrated data, for example those which emphasize (1) deriving quantitative geophysical information (e.g., surface roughness and dielectric constant), (2) monitoring daily and seasonal changes in the Earth's surface (e.g., soil moisture), (3) extending local case studies to regional and worldwide scales, and (4) using SIR-C data with other spaceborne sensors (e.g., ERS-1, JERS-1, and Radarsat). There are three different aspects to the SIR-C calibration problem: radiometric and geometric calibration, which have been previously reported, and polarimetric calibration. The study described in this paper is an attempt at determining the science requirements for polarimetric calibration for SIR-C. A model describing the effect of miscalibration is presented first, followed by an example describing how to assess the calibration requirements specific to an experiment. The effects of miscalibration on some commonly used polarimetric parameters are also discussed. It is shown that polarimetric calibration requirements are strongly application dependent. In consequence, the SIR-C investigators are advised to assess the calibration requirements of their own experiment. A set of numbers summarizing SIR-C polarimetric calibration goals concludes this paper.
NASA Technical Reports Server (NTRS)
Misra, Ajay K.
1988-01-01
Liquid densities were determined for a number of fluoride salt mixtures suitable for heat storage in space power applications, using a procedure that consisted of measuring the loss of weight of an inert bob in the melt. The density apparatus was calibrated with pure LiF and NaF at different temperatures. Density data for some binary and ternary fluoride salt eutectics and congruently melting intermediate compounds are presented. In addition, a comparison was made between the volumetric heat storage capacities of different salt mixtures.
The impact of on-site wastewater from high density cluster developments on groundwater quality
NASA Astrophysics Data System (ADS)
Morrissey, P. J.; Johnston, P. M.; Gill, L. W.
2015-11-01
The net impact on groundwater quality from high density clusters of unsewered housing across a range of hydro(geo)logical settings has been assessed. Four separate cluster development sites were selected, each representative of different aquifer vulnerability categories. Groundwater samples were collected on a monthly basis over a two year period for chemical and microbiological analysis from nested multi-horizon sampling boreholes upstream and downstream of the study sites. The field results showed no statistically significant difference between upstream and downstream water quality at any of the study areas, although there were higher breakthroughs in contaminants in the High and Extreme vulnerability sites linked to high intensity rainfall events; these however, could not be directly attributed to on-site effluent. Linked numerical models were then built for each site using HYDRUS 2D to simulate the attenuation of contaminants through the unsaturated zone from which the resulting hydraulic and contaminant fluxes at the water table were used as inputs into MODFLOW MT3D models to simulate the groundwater flows. The results of the simulations confirmed the field observations at each site, indicating that the existing clustered on-site wastewater discharges would only cause limited and very localised impacts on groundwater quality, with contaminant loads being quickly dispersed and diluted downstream due to the relatively high groundwater flow rates. Further simulations were then carried out using the calibrated models to assess the impact of increasing cluster densities revealing little impact at any of the study locations up to a density of 6 units/ha with the exception of the Extreme vulnerability site.
Forecasting Lightning Threat Using WRF Proxy Fields
NASA Technical Reports Server (NTRS)
McCaul, E. W., Jr.
2010-01-01
Objectives: Given that high-resolution WRF forecasts can capture the character of convective outbreaks, we seek to: 1. Create WRF forecasts of LTG threat (1-24 h), based on 2 proxy fields from explicitly simulated convection: - graupel flux near -15 C (captures LTG time variability) - vertically integrated ice (captures LTG threat area). 2. Calibrate each threat to yield accurate quantitative peak flash rate densities. 3. Also evaluate threats for areal coverage, time variability. 4. Blend threats to optimize results. 5. Examine sensitivity to model mesh, microphysics. Methods: 1. Use high-resolution 2-km WRF simulations to prognose convection for a diverse series of selected case studies. 2. Evaluate graupel fluxes; vertically integrated ice (VII). 3. Calibrate WRF LTG proxies using peak total LTG flash rate densities from NALMA; relationships look linear, with regression line passing through origin. 4. Truncate low threat values to make threat areal coverage match NALMA flash extent density obs. 5. Blend proxies to achieve optimal performance 6. Study CAPS 4-km ensembles to evaluate sensitivities.
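The calibration step in method 3 above (a linear proxy/flash-rate relation with the regression line passing through the origin) can be sketched as follows. The data pairs are invented for illustration, not NALMA observations.

```python
# Hedged sketch of calibrating a WRF lightning proxy against peak flash
# rate densities: for a line through the origin, the least-squares slope
# is simply sum(x*y) / sum(x*x).

def slope_through_origin(x, y):
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

# Hypothetical pairs of (graupel-flux proxy value, observed peak flash
# rate density) -- made-up numbers:
proxy = [0.5, 1.0, 2.0, 3.5, 5.0]
flash = [1.1, 1.9, 4.2, 6.8, 10.1]
k = slope_through_origin(proxy, flash)
calibrated = [k * p for p in proxy]   # proxy field -> flash rate density
print(round(k, 2))  # -> 2.0 for this toy data
```

After this scaling, low threat values would be truncated so the threat areal coverage matches the observed flash extent density, as step 4 of the methods describes.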
A new method to measure electron density and effective atomic number using dual-energy CT images
NASA Astrophysics Data System (ADS)
Ramos Garcia, Luis Isaac; Pérez Azorin, José Fernando; Almansa, Julio F.
2016-01-01
The purpose of this work is to present a new method to extract the electron density (ρe) and the effective atomic number (Zeff) from dual-energy CT images, based on a Karhunen-Loeve expansion (KLE) of the atomic cross section per electron. This method was used to calibrate a Siemens Definition CT using the CIRS phantom. The predicted electron density and effective atomic number using 80 kVp and 140 kVp were compared with a calibration phantom and an independent set of samples. The mean absolute deviations between the theoretical and calculated values for all the samples were 1.7% ± 0.1% for ρe and 4.1% ± 0.3% for Zeff. Finally, these results were compared with another stoichiometric method. The application of the KLE to represent the atomic cross section per electron is a promising method for calculating ρe and Zeff using dual-energy CT images.
NASA Astrophysics Data System (ADS)
Bonifazi, Giuseppe; Capobianco, Giuseppe; Serranti, Silvia
2018-06-01
The aim of this work was to recognize different polymer flakes from mixed plastic waste through an innovative hierarchical classification strategy based on hyperspectral imaging, with particular reference to low-density polyethylene (LDPE) and high-density polyethylene (HDPE). A plastic waste composition assessment, including LDPE and HDPE identification, may help to define optimal recycling strategies for product quality control. Correct handling of plastic waste is essential for its further "sustainable" recovery, maximizing the sorting performance in particular for plastics with characteristics as similar as those of LDPE and HDPE. Five different plastic waste samples were chosen for the investigation: polypropylene (PP), LDPE, HDPE, polystyrene (PS) and polyvinyl chloride (PVC). A calibration dataset was built using the corresponding virgin polymers. Hyperspectral imaging in the short-wave infrared range (1000-2500 nm) was then applied to evaluate the spectral attributes of the different plastics in order to perform their recognition/classification. After exploring polymer spectral differences by principal component analysis (PCA), a hierarchical partial least squares discriminant analysis (PLS-DA) model was built that allowed the five different polymers to be recognized. The proposed methodology, based on hierarchical classification, is powerful and fast, recognizing the five different polymers in a single step.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng, H; Xing, L; Kanehira, T
2016-06-15
Purpose: The aim of this study is to evaluate the feasibility of using dual-energy CBCT (DECBCT) in proton therapy treatment planning to allow for accurate electron density estimation. Methods: For direct comparison, two scenarios were selected: a dual-energy fan-beam CT (high: 140 kVp, low: 80 kVp) and a DECBCT (high: 125 kVp, low: 80 kVp). A Gammex 467 tissue characterization phantom was used, including rods of air, water, bone (B2–30% mineral), cortical bone (SB3), lung (LN-300), brain, liver and adipose. For the CBCT, Hounsfield Unit (HU) numbers were first obtained from the reconstructed images after a calibration was made based on water (= 0) and air (= −1000). For each tissue surrogate, region-of-interest (ROI) analyses were made to derive high-energy and low-energy HU values (HUhigh and HUlow), which were subsequently used to estimate electron density based on the algorithm previously described by Hunemohr et al. Parameters k1 and k2 are energy dependent and can be derived from calibration materials. Results: For the dual-energy FBCT, the electron density is found to be within ±3% error relative to the values provided by the phantom vendor: −1.8% (water), 0.03% (lung), 1.1% (brain), −2.82% (adipose), −0.49% (liver) and −1.89% (cortical bone). For the DECBCT, the estimation of electron density exhibits a relatively larger variation: −1.76% (water), −36.7% (lung), −1.92% (brain), −3.43% (adipose), 8.1% (liver) and 9.5% (cortical bone). Conclusion: For DECBCT, the accuracy of electron density estimation is inferior to that of a FBCT, especially for materials of either low density (lung) or high density (cortical bone) compared to water. This limitation arises from inaccurate HU number derivation in a CBCT. Advanced scatter-correction and HU calibration routines, as well as the deployment of photon-counting CT detectors, need to be investigated to minimize the difference between FBCT and CBCT.
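A sketch of the dual-energy electron-density estimate this abstract refers to, in the spirit of Hunemohr et al.: ρe ≈ k1·u_high + k2·u_low with u = HU/1000 + 1, where k1 and k2 are fitted from calibration rods of known electron density. The HU values and weights below are synthetic illustrations, not the study's data.

```python
import numpy as np

def fit_k1_k2(hu_high, hu_low, rho_e_known):
    """Least-squares fit of rho_e = k1*u_high + k2*u_low from calibration rods."""
    u = np.column_stack([np.asarray(hu_high) / 1000.0 + 1.0,
                         np.asarray(hu_low) / 1000.0 + 1.0])
    k, *_ = np.linalg.lstsq(u, np.asarray(rho_e_known, dtype=float), rcond=None)
    return k  # [k1, k2]

def predict_rho_e(hu_high, hu_low, k):
    return k[0] * (hu_high / 1000.0 + 1.0) + k[1] * (hu_low / 1000.0 + 1.0)

# Synthetic calibration set where the "true" weights are k1=0.6, k2=0.4
hu_high = np.array([-700.0, -50.0, 0.0, 60.0, 900.0])
hu_low  = np.array([-720.0, -60.0, 0.0, 80.0, 1200.0])
rho_true = 0.6 * (hu_high / 1000 + 1) + 0.4 * (hu_low / 1000 + 1)
k = fit_k1_k2(hu_high, hu_low, rho_true)    # recovers [0.6, 0.4]
```

The conclusion above follows directly from this structure: any bias in the CBCT HU values (scatter, calibration drift) propagates linearly into the estimated electron density.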
The contributions of breast density and common genetic variation to breast cancer risk.
Vachon, Celine M; Pankratz, V Shane; Scott, Christopher G; Haeberle, Lothar; Ziv, Elad; Jensen, Matthew R; Brandt, Kathleen R; Whaley, Dana H; Olson, Janet E; Heusinger, Katharina; Hack, Carolin C; Jud, Sebastian M; Beckmann, Matthias W; Schulz-Wendtland, Ruediger; Tice, Jeffrey A; Norman, Aaron D; Cunningham, Julie M; Purrington, Kristen S; Easton, Douglas F; Sellers, Thomas A; Kerlikowske, Karla; Fasching, Peter A; Couch, Fergus J
2015-05-01
We evaluated whether a 76-locus polygenic risk score (PRS) and Breast Imaging Reporting and Data System (BI-RADS) breast density were independent risk factors within three studies (1643 case patients, 2397 control patients) using logistic regression models. We incorporated the PRS odds ratio (OR) into the Breast Cancer Surveillance Consortium (BCSC) risk-prediction model while accounting for its attributable risk and compared five-year absolute risk predictions between models using area under the curve (AUC) statistics. All statistical tests were two-sided. BI-RADS density and PRS were independent risk factors across all three studies (P interaction = .23). Relative to those with scattered fibroglandular densities and average PRS (2nd quartile), women with extreme density and highest-quartile PRS had 2.7-fold (95% confidence interval [CI] = 1.74 to 4.12) increased risk, while those with low density and PRS had reduced risk (OR = 0.30, 95% CI = 0.18 to 0.51). PRS added independent information (P < .001) to the BCSC model and improved discriminatory accuracy from AUC = 0.66 to AUC = 0.69. Although the BCSC-PRS model was well calibrated in case-control data, independent cohort data are needed to test calibration in the general population.
NASA Astrophysics Data System (ADS)
Breuillard, H.; Henri, P.; Vallières, X.; Eriksson, A. I.; Odelstad, E.; Johansson, F. L.; Richter, I.; Goetz, C.; Wattieaux, G.; Tsurutani, B.; Hajra, R.; Le Contel, O.
2017-12-01
For two years, the groundbreaking ESA/Rosetta mission escorted comet 67P, where previous cometary missions were limited to flybys. This enabled, for the first time, in-situ measurements of the evolution of a comet's plasma environment. The density and temperature measured by Rosetta are derived from the RPC Mutual Impedance Probe (MIP) and RPC Langmuir Probe (LAP). On one hand, low-time-resolution electron densities are calculated using the plasma frequency extracted from the MIP mutual impedance spectra. On the other hand, high-time-resolution density fluctuations are estimated from the spacecraft potential measured by LAP. In this study, using a simple spacecraft charging model, we perform a cross-calibration of MIP plasma density and LAP spacecraft potential variations to obtain high-time-resolution measurements of the electron density. These results are also used to constrain the electron temperature. We then make use of this new dataset, together with RPC-MAG magnetic field measurements, to investigate for the first time the compressibility of, and the correlations between, plasma and magnetic field variations for both singing-comet waves and steepened waves, observed during low and high cometary outgassing activity respectively, in the plasma environment of comet 67P.
Long-Term Stability Assessment of Sonoran Desert for Vicarious Calibration of GOES-R
NASA Astrophysics Data System (ADS)
Kim, W.; Liang, S.; Cao, C.
2012-12-01
Vicarious calibration refers to calibration techniques that do not depend on onboard calibration devices. Although sensors and onboard calibration devices undergo rigorous validation processes before launch, the performance of sensors often degrades after launch due to exposure to the harsh space environment and the aging of devices. Such in-flight changes can be identified and adjusted through vicarious calibration activities, where the sensor degradation is measured in reference to exterior calibration sources such as the Sun, the Moon, and the Earth's surface. The Sonoran Desert is one of the best calibration sites in North America available for vicarious calibration of the GOES-R satellite. To accurately calibrate sensors onboard the GOES-R satellite (e.g. the Advanced Baseline Imager (ABI)), the temporal stability of the Sonoran Desert needs to be assessed precisely. However, short- and mid-term variations in top-of-atmosphere (TOA) reflectance caused by meteorological variables such as water vapor amount and aerosol loading are often difficult to retrieve, complicating the use of TOA reflectance time series for the stability assessment of the site. In this paper, we address this normalization of TOA reflectance time series using a time series analysis algorithm: the seasonal-trend decomposition procedure based on LOESS (STL) (Cleveland et al., 1990). The algorithm is basically a collection of smoothing filters which decompose a time series into three additive components: seasonal, trend, and remainder. Since this non-linear technique is capable of extracting seasonal patterns in the presence of trend changes, the seasonal variation can be effectively identified in time series of remote sensing data subject to various environmental changes.
Experimental results with Landsat 5 TM data show that the decomposition acquired for the Sonoran Desert area produces normalized series with much less uncertainty than those of traditional BRDF models, which leads to a more accurate stability assessment.
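The additive decomposition the abstract describes (series = seasonal + trend + remainder) can be sketched as follows. Real STL uses iterated LOESS smoothing; this simplified stand-in uses a centered moving average for the trend and per-phase means for the seasonal component, on a synthetic monthly "reflectance" series rather than actual Landsat data.

```python
import numpy as np

def decompose_additive(y, period):
    """Simplified additive seasonal-trend decomposition (STL-like sketch)."""
    y = np.asarray(y, dtype=float)
    # Trend: centered moving average over one full period
    kernel = np.ones(period) / period
    trend = np.convolve(y, kernel, mode="same")
    detrended = y - trend
    # Seasonal: mean detrended value at each phase of the cycle
    seasonal = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(seasonal, len(y) // period + 1)[: len(y)]
    seasonal -= seasonal.mean()          # center so it carries no trend
    remainder = y - trend - seasonal     # everything left over
    return seasonal, trend, remainder

t = np.arange(120)                        # 10 years of monthly samples
y = 0.01 * t + np.sin(2 * np.pi * t / 12) # linear trend + annual cycle
seasonal, trend, remainder = decompose_additive(y, period=12)
```

For production use, `statsmodels.tsa.seasonal.STL` implements the actual Cleveland et al. procedure, including its robustness iterations.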
Design and Theoretical Analysis of a Resonant Sensor for Liquid Density Measurement
Zheng, Dezhi; Shi, Jiying; Fan, Shangchun
2012-01-01
In order to increase the accuracy of on-line liquid density measurements, a sensor equipped with a tuning fork as the resonant sensitive component is designed in this paper. It is a quasi-digital sensor with a simple structure and high precision. The sensor is based on resonance theory and composed of a sensitive unit and a closed-loop control unit, where the sensitive unit consists of the actuator, the resonant tuning fork and the detector, and the closed-loop control unit comprises a preconditioning circuit, a digital signal processing and control unit, an analog-to-digital converter and a digital-to-analog converter. An approximate parameter model of the tuning fork is established and the impact of liquid density, position of the tuning fork, temperature and structural parameters on the natural frequency of the tuning fork is also analyzed. On this basis, a tuning-fork liquid density measurement sensor is developed. In addition, experimental testing of the sensor has been carried out on standard calibration facilities at a constant 20 °C, and the sensor coefficients are calibrated. The experimental results show that the repeatability error is about 0.03% and the accuracy is about 0.4 kg/m3. The results also confirm that the method to increase the accuracy of liquid density measurement is feasible. PMID:22969378
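A hedged sketch of the coefficient-calibration step for a resonant densitometer of this kind. A common vibrating-element model relates liquid density to the oscillation period τ as ρ = K0 + K1·τ + K2·τ²; the K coefficients play the role of the "sensor coefficients" calibrated in the paper. The functional form and all numbers here are illustrative assumptions, not the authors' model.

```python
import numpy as np

def fit_density_coefficients(tau, rho):
    """Fit rho = K0 + K1*tau + K2*tau^2 from reference calibration liquids."""
    return np.polyfit(np.asarray(tau), np.asarray(rho), deg=2)  # [K2, K1, K0]

def density_from_period(tau, coeffs):
    """Convert a measured oscillation period to density via the fitted curve."""
    return np.polyval(coeffs, tau)

tau = np.array([1.00, 1.05, 1.10, 1.20, 1.30])   # ms, synthetic periods
rho = 200.0 - 150.0 * tau + 500.0 * tau ** 2     # kg/m^3, synthetic truth
coeffs = fit_density_coefficients(tau, rho)       # recovers [500, -150, 200]
```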
A Critical Evaluation of the Thermophysical Properties of Mercury
NASA Astrophysics Data System (ADS)
Holman, G. J. F.; ten Seldam, C. A.
1994-09-01
For the use of a mercury column for precise pressure measurements—such as the pressurized 30-meter mercury-in-steel column used at the Van der Waals-Zeeman Laboratory for the calibration of piston gauges up to nearly 300 MPa—it is highly important to have accurate knowledge of such properties of mercury as density, isobaric secant and tangent volume thermal expansion coefficients, and isothermal secant and tangent compressibilities as functions of temperature and pressure. In this paper we present a critical assessment of the available information on these properties. Recommended values are given for the properties mentioned and, in addition, for properties derived from these, such as entropy, enthalpy, internal energy, and the specific heat capacities.
Assessment of Noise and Associated Health Impacts at Selected Secondary Schools in Ibadan, Nigeria
Ana, Godson R. E. E.; Shendell, Derek G.; Brown, G. E.; Sridhar, M. K. C.
2009-01-01
Background. Most schools in Ibadan, Nigeria, are located near major roads (mobile line sources). We conducted an initial assessment of noise levels and adverse noise-related health and learning effects. Methods. For this descriptive, cross-sectional study, four schools were selected randomly from the eight participating in the overall project. We administered 200 questionnaires, 50 per school, assessing health- and learning-related outcomes. Noise levels (A-weighted decibels, dBA) were measured with calibrated sound level meters. Traffic density was assessed for the school with the highest measured dBA. Observational checklists assessed noise control parameters and building physical attributes. Results. Short-term, cross-sectional school-day noise levels ranged from 68.3 to 84.7 dBA. Over 60% of respondents reported that vehicular traffic was the major source of noise, and over 70% complained of being disturbed by noise. Three schools reported tiredness, and one school lack of concentration, as the most prevalent noise-related health problems. Conclusion. Secondary school occupants in Ibadan, Nigeria, were potentially affected by exposure to noise from mobile line sources. PMID:20041025
Spectroscopic imaging of metal halide high-intensity discharge lamps
NASA Astrophysics Data System (ADS)
Bonvallet, Geoffrey A.
The body of this work consists of three main research projects. An optical- and near-ultraviolet-wavelength absorption study sought to determine absolute densities of ground and excited level Sc atoms, ground level Sc + ions, and ground level Na atoms in a commercial 250 W metal halide high intensity discharge lamp during operation. These measurements also allowed the determination of the arc temperature and absolute electron density as functions of radius. Through infrared emission spectroscopy, relative densities of sodium and scandium were determined as functions of radius. Using the absolute densities gained from the optical experiment, these relative densities were calibrated. In addition, direct observation of the infrared emission allowed us to characterize the infrared power losses of the lamp. When considered as a fraction of the overall power consumption, the near-infrared spectral power losses were not substantial enough to warrant thorough investigation of their reduction in these lamps. The third project was an attempt to develop a portable x-ray diagnostic experiment. Two-dimensional spatial maps of the lamps were analyzed to determine absolute elemental mercury densities and the arc temperature as a function of radius. Two methods were used to improve the calibration of the density measurements and to correct for the spread in x-ray energy: known solutions of mercury in nitric acid, and an arc lamp which was uniformly heated to evaporate the mercury content. Although many complexities arose in this experiment, its goal was successfully completed.
Lacour, C; Joannis, C; Chebbo, G
2009-05-01
This article presents a methodology for assessing annual wet-weather Suspended Solids (SS) and Chemical Oxygen Demand (COD) loads in combined sewers, along with the associated uncertainties, from continuous turbidity measurements. The proposed method is applied to data from various urban catchments in the cities of Paris and Nantes. The focus here is the impact of the number of rain events sampled for calibration (i.e. for establishing linear SS/turbidity or COD/turbidity relationships) on the uncertainty of annual pollutant load assessments. Two calculation methods are investigated, both of which rely on Monte Carlo simulations: random assignment of event-specific calibration relationships to each individual rain event, and the use of an overall relationship built from the entire available data set. Since results indicate fairly low inter-event variability for the calibration relationship parameters, an accurate assessment of pollutant loads can be derived even when fewer than 10 events are sampled for calibration purposes. For operational applications, these results suggest that turbidity could provide a more precise evaluation of pollutant loads at lower cost than typical sampling methods.
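The first Monte Carlo scheme described above can be sketched as follows: an SS/turbidity relation SS = a·T + b is fitted per calibration event, and the annual load uncertainty follows from randomly assigning event-specific (a, b) pairs across draws. All numbers are synthetic stand-ins, not the Paris/Nantes data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend each of 8 sampled rain events yielded its own (a, b) calibration
# pair, with the low inter-event variability the abstract reports.
a_events = rng.normal(1.5, 0.05, size=8)     # mg/L per NTU
b_events = rng.normal(10.0, 1.0, size=8)     # mg/L offset

turbidity = rng.uniform(50, 300, size=1000)  # NTU, continuous record
flow = rng.uniform(0.5, 2.0, size=1000)      # m^3/s, concurrent flow record

loads = []
for _ in range(2000):
    # Monte Carlo draw: assign one event-specific relationship to the record
    i = rng.integers(len(a_events))
    ss = a_events[i] * turbidity + b_events[i]   # mg/L
    loads.append(np.sum(ss * flow))               # relative load units
loads = np.array(loads)

mean_load = loads.mean()
spread = loads.std() / loads.mean()   # relative uncertainty of the annual load
```

With low inter-event variability in (a, b), the relative spread stays small, which is the paper's argument for needing few calibration events.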
NASA Astrophysics Data System (ADS)
Zhao, Guihua; Chen, Hong; Li, Xingquan; Zou, Xiaoliang
The paper presents the concepts of lever arm and boresight angle, the design requirements of calibration sites, and an integrated method for calibrating the boresight angles of a digital camera and a laser scanner. Taking test data collected by Applanix's LandMark system as an example, the camera calibration method is introduced, based on tying together three consecutive stereo images and an on-the-fly (OTF) calibration using ground control points. The laser boresight-angle calibration uses combined manual and automatic methods with ground control points. Integrated calibration between the digital camera and the laser scanner is introduced to improve the systemic precision of the two sensors. Analysis of the measured differences between ground control points and their corresponding image points in sequence images shows that object positions derived from the camera and images agree to within about 15 cm in relative error and 20 cm in absolute error. Comparing ground control points with their corresponding laser point clouds, the error is less than 20 cm. These experimental results indicate that the mobile mapping system is an efficient and reliable system for generating high-accuracy and high-density road spatial data rapidly.
Electron Density Calibration for Radiotherapy Treatment Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herrera-Martinez, F.; Rodriguez-Villafuerte, M.; Martinez-Davalos, A.
2006-09-08
Computed tomography (CT) images are used as basic input data for most modern radiosurgery treatment planning systems (TPS). CT data not only provide anatomic information to delineate target volumes, but also allow the introduction of corrections for tissue inhomogeneities into dose calculations during the treatment planning procedure. These corrections involve the determination of a relationship between tissue electron density (ρe) and their corresponding Hounsfield Units (HU). In this work, an elemental analysis of different commercial tissue-equivalent materials using Scanning Electron Microscopy was carried out to characterize their chemical composition. The tissue-equivalent materials were chosen to ensure a large range of ρe to be included in the CT scanner calibration. A phantom was designed and constructed with these materials to simulate the size of a human head.
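Once the measured HU values and known electron densities of the tissue substitutes are tabulated, a TPS-style HU-to-ρe conversion is typically a piecewise-linear lookup between calibration points. The table below is an illustrative assumption, not the phantom's actual calibration data.

```python
import numpy as np

# Hypothetical calibration table: measured HU vs. electron density
# relative to water, sorted by HU as np.interp requires.
hu_table  = np.array([-1000.0, -100.0, 0.0, 60.0, 1000.0])
rho_table = np.array([0.001, 0.93, 1.00, 1.05, 1.70])

def hu_to_rho_e(hu):
    """Piecewise-linear interpolation of the HU-to-electron-density curve."""
    return np.interp(hu, hu_table, rho_table)

rho_water = hu_to_rho_e(0.0)     # -> 1.0 (water by construction)
rho_soft  = hu_to_rho_e(30.0)    # halfway between the 0 HU and 60 HU points
```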
Geothermometer calculations for geothermal assessment
Reed, M.J.; Mariner, R.H.
2007-01-01
Geothermal exploration programs have relied on the calculation of geothermometers from hot spring chemistry as an early estimation of geothermal reservoir temperatures. Calibration of the geothermometers has evolved from experimental determinations of mineral solubility as a function of temperature to calibration from analyses of water chemistry from known depths and temperatures in thermal wells. Most of the geothermometers were calibrated from analyses of sodium-chloride type waters, and the application of some geothermometers should be restricted to waters of the chemical types that were used in their calibration. Chemical analyses must be determined to be reliable before they are used to calculate geothermometers. The USGS Geothermal Resource Assessment will rely on the silica geothermometer developed by Giggenbach that approximates the transition between chalcedony at 20 °C and quartz at 200 °C. Above 200 °C, the assessment will rely on the quartz geothermometer. In addition, the assessment will also rely on the potassium-magnesium geothermometer.
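A worked example of a silica geothermometer of the family discussed above. This is the widely used quartz (no steam loss) calibration attributed to Fournier, T(°C) = 1309 / (5.19 − log10[SiO2 in mg/kg]) − 273.15, shown for illustration; the Giggenbach chalcedony-quartz transition curve the assessment relies on has different constants.

```python
import math

def quartz_geothermometer(sio2_mg_per_kg):
    """Fournier quartz (no steam loss) reservoir temperature estimate, in °C."""
    return 1309.0 / (5.19 - math.log10(sio2_mg_per_kg)) - 273.15

# A spring water with 300 mg/kg dissolved silica implies a reservoir
# temperature of roughly 209 °C under this calibration.
t_estimate = quartz_geothermometer(300.0)
```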
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scime, Earl E.
The magnitude and spatial dependence of neutral density in magnetic confinement fusion experiments is a key physical parameter, particularly in the plasma edge. Modeling codes require precise measurements of the neutral density to calculate charge-exchange power losses and drag forces on rotating plasmas. However, direct measurements of the neutral density are problematic. In this work, we proposed to construct a laser-based diagnostic capable of providing spatially resolved measurements of the neutral density in the edge plasma of the DIII-D tokamak. The diagnostic concept is based on two-photon absorption laser induced fluorescence (TALIF). By injecting two beams of 205 nm light (co- or counter-propagating), ground state hydrogen (or deuterium or tritium) can be excited from the n = 1 level to the n = 3 level at the location where the two beams intersect. Individually, the beams experience no absorption, and therefore have no difficulty penetrating even dense plasmas. After excitation, a fraction of the hydrogen atoms decay from the n = 3 level to the n = 2 level and emit photons at 656 nm (the Hα line). Calculations based on the results of previous TALIF experiments in magnetic fusion devices indicated that a laser pulse energy of approximately 3 mJ delivered in 5 ns would provide sufficient signal-to-noise for detection of the fluorescence. In collaboration with the DIII-D engineering staff and experts in plasma edge diagnostics for DIII-D from Oak Ridge National Laboratory (ORNL), WVU researchers designed a TALIF system capable of providing spatially resolved measurements of neutral deuterium densities in the DIII-D edge plasma. The laser systems were specified, purchased, and assembled at WVU. The TALIF system was tested on a low-power hydrogen discharge at WVU and the plan was to move the instrument to DIII-D for installation in collaboration with ORNL researchers.
After budget cuts at DIII-D, the DIII-D facility declined to support installation on their tokamak. Instead, after a no-cost extension, the apparatus was moved to the University of Washington-Seattle and successfully tested on the HIT-SI3 spheromak experiment. As a result of this project, absolutely calibrated TALIF measurements of neutral hydrogen and deuterium densities were obtained in a helicon source and in a spheromak, designs were developed for installation of a TALIF system on a tokamak, and a new, xenon-based calibration scheme was proposed and demonstrated. The xenon calibration scheme eliminates significant problems that were identified with the standard krypton calibration scheme.
Ding, Huanjun; Sennung, David; Cho, Hyo-Min; Molloi, Sabee
2016-01-01
Purpose: The positive predictive power for malignancy can potentially be improved, if the chemical compositions of suspicious breast lesions can be reliably measured in screening mammography. The purpose of this study is to investigate the feasibility of quantifying breast lesion composition, in terms of water and lipid contents, with spectral mammography. Methods: Phantom and tissue samples were imaged with a spectral mammography system based on silicon-strip photon-counting detectors. Dual-energy calibration was performed for material decomposition, using plastic water and adipose-equivalent phantoms as the basis materials. The step wedge calibration phantom consisted of 20 calibration configurations, which ranged from 2 to 8 cm in thickness and from 0% to 100% in plastic water density. A nonlinear rational fitting function was used in dual-energy calibration of the imaging system. Breast lesion phantoms, made from various combinations of plastic water and adipose-equivalent disks, were embedded in a breast mammography phantom with a heterogeneous background pattern. Lesion phantoms with water densities ranging from 0% to 100% were placed at different locations of the heterogeneous background phantom. The water density in the lesion phantoms was measured using dual-energy material decomposition. The thickness and density of the background phantom were varied to test the accuracy of the decomposition technique in different configurations. In addition, an in vitro study was also performed using mixtures of lean and fat bovine tissue of 25%, 50%, and 80% lean weight percentages as the background. Lesions were simulated by using breast lesion phantoms, as well as small bovine tissue samples, composed of carefully weighed lean and fat bovine tissues. The water densities in tissue samples were measured using spectral mammography and compared to measurement using chemical decomposition of the tissue. 
Results: The measured and known water-content thicknesses were compared for various lesion configurations. There was a good linear correlation between the measured and the known values. The root-mean-square errors in water thickness measurements were 0.3 and 0.2 mm for the plastic phantom and bovine tissue backgrounds, respectively. Conclusions: The results indicate that spectral mammography can be used to accurately characterize breast lesion composition in terms of equivalent water and lipid contents. PMID:27782705
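The measurement above rests on two-material (water/lipid) decomposition of the two energy-bin signals. In an idealized monoenergetic model the log-signals are linear in the basis-material thicknesses, so decomposition reduces to a 2×2 solve; the attenuation coefficients below are made up for illustration, and the paper itself uses an empirical rational fit calibrated on phantoms rather than this linear model.

```python
import numpy as np

# Assumed linear attenuation coefficients (1/cm) for [water, lipid]
# at the low- and high-energy bins. Illustrative values only.
mu = np.array([[0.25, 0.20],
               [0.18, 0.16]])

def decompose(log_low, log_high):
    """Solve mu @ [t_water, t_lipid] = [log_low, log_high] for thicknesses."""
    return np.linalg.solve(mu, np.array([log_low, log_high]))

t_true = np.array([3.0, 2.0])     # cm of water and lipid in the "lesion"
signals = mu @ t_true             # noiseless forward model of the log-signals
t_est = decompose(*signals)       # recovers [3.0, 2.0]
```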
Lee, K R; Dipaolo, B; Ji, X
2000-06-01
Calibration is the process of fitting a model based on reference data points (x, y), then using the model to estimate an unknown x based on a new measured response, y. In a DNA assay, x is the concentration, and y is the measured signal volume. A four-parameter logistic model is frequently used for calibration of immunoassays when the response is optical density for an enzyme-linked immunosorbent assay (ELISA) or an adjusted radioactivity count for a radioimmunoassay (RIA). Here, it is shown that the same model, or a linearized version of the curve, is equally useful for the calibration of a chemiluminescent hybridization assay for residual DNA in recombinant protein drugs and for calculation of performance measures of the assay.
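The four-parameter logistic (4PL) model mentioned above has an analytic inverse, which is exactly the calibration step: read an unknown concentration x back from a measured response y. Parameter values below are illustrative, not the paper's fitted assay parameters.

```python
# 4PL: y = d + (a - d) / (1 + (x/c)^b), where a is the zero-dose response,
# d the infinite-dose asymptote, c the inflection point and b the slope.
def logistic4(x, a, b, c, d):
    return d + (a - d) / (1.0 + (x / c) ** b)

# Analytic inverse, used to calibrate: x = c * ((a-d)/(y-d) - 1)^(1/b)
def inverse_logistic4(y, a, b, c, d):
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

a, b, c, d = 0.05, 1.2, 50.0, 2.0            # synthetic assay parameters
x = 30.0                                      # true concentration
y = logistic4(x, a, b, c, d)                  # "measured" optical density
x_back = inverse_logistic4(y, a, b, c, d)     # recovers 30.0
```

In practice the four parameters are themselves estimated by nonlinear least squares (e.g. `scipy.optimize.curve_fit`) on the reference standards before the inverse is applied to unknowns.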
NASA Astrophysics Data System (ADS)
Teixeira, Filipe; Melo, André; Cordeiro, M. Natália D. S.
2010-09-01
A linear least-squares methodology was used to determine the vibrational scaling factors for the X3LYP density functional. Uncertainties for these scaling factors were calculated according to the method devised by Irikura et al. [J. Phys. Chem. A 109, 8430 (2005)]. The calibration set was systematically partitioned according to several of its descriptors and the scaling factors for X3LYP were recalculated for each subset. The results show that the scaling factors are only significant up to the second digit, irrespective of the calibration set used. Furthermore, multivariate statistical analysis allowed us to conclude that the scaling factors and the associated uncertainties are independent of the size of the calibration set and strongly suggest the practical impossibility of obtaining vibrational scaling factors with more than two significant digits.
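The least-squares scaling factor described above has a closed form: minimizing Σ(λω_i − ν_i)² over the calibration set gives λ = Σ(ω_i ν_i) / Σ(ω_i²), with ω the computed harmonic frequencies and ν the reference fundamentals (the Scott-Radom form). The frequencies below are synthetic stand-ins, not the paper's calibration set.

```python
import numpy as np

def scaling_factor(omega, nu):
    """Closed-form least-squares vibrational scaling factor."""
    omega = np.asarray(omega, dtype=float)
    nu = np.asarray(nu, dtype=float)
    return float(np.sum(omega * nu) / np.sum(omega ** 2))

omega = np.array([3100.0, 1650.0, 1200.0, 800.0])  # computed harmonics, cm^-1
nu = 0.96 * omega                                   # idealized reference set
lam = scaling_factor(omega, nu)                     # -> 0.96
```

The paper's point is that the uncertainty of λ (per Irikura et al.) is large enough that only the first two digits are meaningful, whatever the calibration set.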
The Wally plot approach to assess the calibration of clinical prediction models.
Blanche, Paul; Gerds, Thomas A; Ekstrøm, Claus T
2017-12-06
A prediction model is calibrated if, roughly, for any percentage x we can expect that x subjects out of 100 experience the event among all subjects that have a predicted risk of x%. Typically, the calibration assumption is assessed graphically, but in practice it is often challenging to judge whether a "disappointing" calibration plot is the consequence of a departure from the calibration assumption or just "bad luck" due to sampling variability. To address this issue, we propose a graphical approach which enables visualization of how much a calibration plot agrees with the calibration assumption. The approach is mainly based on the idea of generating new plots which mimic the available data under the calibration assumption. The method handles the common non-trivial situations in which the data contain censored observations and occurrences of competing events. This is done by building on ideas from constrained non-parametric maximum likelihood estimation methods. Two examples from large cohort data illustrate our proposal. The 'wally' R package is provided to make the methodology easily usable.
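The core idea above, sketched without the censoring and competing-risk machinery the paper actually handles: given each subject's predicted risk p_i, simulate new outcome vectors under the calibration assumption (Y_i ~ Bernoulli(p_i)) and draw their binned calibration curves, showing how much a real plot may wiggle from sampling variability alone.

```python
import numpy as np

rng = np.random.default_rng(1)
p = rng.uniform(0.05, 0.95, size=5000)      # predicted risks for 5000 subjects

def calibration_curve(p, y, bins=10):
    """Binned observed event frequency vs. mean predicted risk."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    idx = np.digitize(p, edges[1:-1])       # bin index 0..bins-1 per subject
    pred = np.array([p[idx == k].mean() for k in range(bins)])
    obs = np.array([y[idx == k].mean() for k in range(bins)])
    return pred, obs

# One simulated plot under perfect calibration; a Wally-style display would
# show several such simulated curves alongside the real one.
y_sim = rng.binomial(1, p)
pred, obs = calibration_curve(p, y_sim)
max_gap = np.max(np.abs(pred - obs))        # wiggle from sampling alone
```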
Anopheles atroparvus density modeling using MODIS NDVI in a former malarious area in Portugal.
Lourenço, Pedro M; Sousa, Carla A; Seixas, Júlia; Lopes, Pedro; Novo, Maria T; Almeida, A Paulo G
2011-12-01
Malaria is dependent on environmental factors and considered as potentially re-emerging in temperate regions. Remote sensing data have been used successfully for monitoring environmental conditions that influence the patterns of such arthropod vector-borne diseases. Anopheles atroparvus density data were collected from 2002 to 2005, on a bimonthly basis, at three sites in a former malarial area in Southern Portugal. The development of the Remote Vector Model (RVM) was based upon two main variables: temperature and the Normalized Differential Vegetation Index (NDVI) from the Moderate Resolution Imaging Spectroradiometer (MODIS) Terra satellite. Temperature influences the mosquito life cycle and affects its intra-annual prevalence, and MODIS NDVI was used as a proxy for suitable habitat conditions. Mosquito data were used for calibration and validation of the model. For areas with high mosquito density, the model validation demonstrated a Pearson correlation of 0.68 (p<0.05) and a modelling efficiency/Nash-Sutcliffe of 0.44 representing the model's ability to predict intra- and inter-annual vector density trends. RVM estimates the density of the former malarial vector An. atroparvus as a function of temperature and of MODIS NDVI. RVM is a satellite data-based assimilation algorithm that uses temperature fields to predict the intra- and inter-annual densities of this mosquito species using MODIS NDVI. RVM is a relevant tool for vector density estimation, contributing to the risk assessment of transmission of mosquito-borne diseases and can be part of the early warning system and contingency plans providing support to the decision making process of relevant authorities. © 2011 The Society for Vector Ecology.
NASA Astrophysics Data System (ADS)
Fu, X.; Hu, L.; Lee, K. M.; Zou, J.; Ruan, X. D.; Yang, H. Y.
2010-10-01
This paper presents a method for dry calibration of an electromagnetic flowmeter (EMF). The method, which determines the voltage induced in the EMF as conductive liquid flows through a magnetic field, numerically solves a coupled set of multiphysical equations with measured boundary conditions for the magnetic, electric, and flow fields in the measuring pipe of the flowmeter. Specifically, this paper details the formulation of dry calibration and an efficient algorithm for computing the sensitivity of the EMF, one that adaptively minimizes the number of measurements and requires only the normal component of the magnetic flux density on the pipe surface as boundary conditions to reconstruct the magnetic field involved. Along with an in-depth discussion of factors that could significantly affect the final precision of a dry-calibrated EMF, the effects of flow disturbance on measuring errors have been studied experimentally by installing a baffle at the inflow port of the EMF. Results of the dry calibration of an actual EMF were compared against flow-rig calibration; excellent agreement (within 0.3%) between dry calibration and flow-rig tests verifies the multiphysical computation of the fields and the robustness of the method. Because it requires no actual flow, dry calibration is particularly useful for calibrating large-diameter EMFs, where conventional flow-rig methods are often costly and difficult to implement.
Viking S-band Doppler RMS phase fluctuations used to calibrate the mean 1976 equatorial corona
NASA Technical Reports Server (NTRS)
Berman, A. L.; Wackley, J. A.
1977-01-01
Viking S-band Doppler RMS phase fluctuations (noise) and comparisons of Viking Doppler noise to Viking differenced S-X range measurements are used to construct a mean equatorial electron density model for 1976. Using Pioneer Doppler noise results (at high heliographic latitudes, also from 1976), an equivalent nonequatorial electron density model is approximated.
Matula, Svatopluk; Báťková, Kamila; Legese, Wossenu Lemma
2016-11-15
Non-destructive soil water content determination is a fundamental component of many agricultural and environmental applications. The accuracy and cost of the sensors determine the measurement scheme and its ability to fit naturally heterogeneous conditions. The aim of this study was to evaluate five commercially available and relatively cheap sensors, usually grouped with impedance and FDR sensors. ThetaProbe ML2x (impedance) and ECH₂O EC-10, ECH₂O EC-20, ECH₂O EC-5, and ECH₂O TE (all FDR) were tested on silica sand and loess of defined characteristics under controlled laboratory conditions. The calibrations were carried out at nine consecutive soil water contents, from dry to saturated conditions (pure water and saline water). The gravimetric method was used as the reference method for the statistical evaluation (ANOVA with significance level 0.05). Generally, the results showed that our own calibrations led to more accurate soil moisture estimates. Variance component analysis ranked the factors contributing to the total variation as follows: calibration (42%), sensor type (29%), material (18%), and dry bulk density (11%). All the tested sensors performed very well within the whole range of water content, especially the sensors ECH₂O EC-5 and ECH₂O TE, which also performed surprisingly well in saline conditions.
An HF and lower VHF spectrum assessment system exploiting instantaneously wideband capture
NASA Astrophysics Data System (ADS)
Barnes, Rod I.; Singh, Malkiat; Earl, Fred
2017-09-01
We report on a spectral environment evaluation and recording (SEER) system for instantaneously wideband spectral capture and characterization in the HF and lower VHF bands, utilizing a direct digital receiver coupled to a data recorder. The system is designed to contend with a wide variety of electromagnetic environments and to provide accurately calibrated spectral characterization and display from very short (ms) to synoptic scales. The system incorporates a novel RF front end with automated gain and equalization filter selection, which provides an analogue frequency-dependent gain characteristic that mitigates the high dynamic range found across the HF and lower VHF spectrum. The system accurately calibrates its own internal noise and automatically subtracts this from low-variance external spectral estimates, further extending the dynamic range over which robust characterization is possible. Laboratory and field experiments demonstrate that the implementation of these concepts has been effective. The sensitivity of the internal noise reduction process to varying antenna load impedance has been examined. Software algorithms for extraction and visualization of spectral behavior over narrowband, wideband, short, and synoptic scales are provided. Applications in HF noise spectral density monitoring, spectral signal strength assessment, and electromagnetic interference detection are demonstrated with examples. The instantaneously full-bandwidth collection enables some innovative applications, as demonstrated by the collection of discrete lightning emissions, which form fast ionograms called "flashagrams" in power-delay-frequency plots.
Sternohyoid and diaphragm muscle form and function during postnatal development in the rat.
O'Connell, R A; Carberry, J; O'Halloran, K D
2013-09-01
What is the central question of this study? Co-ordinated activity of the thoracic pump and pharyngeal dilator muscles is critical for maintaining airway calibre and respiratory homeostasis. Whilst postnatal maturation of the diaphragm has been well characterized, surprisingly little is known about the developmental programme in the airway dilator muscles. What is the main finding and its importance? Developmental increases in force-generating capacity and fatigue in the sternohyoid and diaphragm muscles are attributed to a maturational shift in muscle myosin heavy chain phenotype. This maturation is accelerated in the sternohyoid muscle relative to the diaphragm and may have implications for the control of airway calibre in vivo. The striated muscles of breathing, including the thoracic pump and pharyngeal dilator muscles, play a critical role in maintaining respiratory homeostasis. Whilst postnatal maturation of the diaphragm has been well characterized, surprisingly little is known about the developmental programme in airway dilator muscles given that co-ordinated activity of both sets of muscles is needed for the maintenance of airway calibre and effective pulmonary ventilation. The form and function of sternohyoid and diaphragm muscles from Wistar rat pups [postnatal day (PD) 10, 20 and 30] was determined. Isometric contractile and endurance properties were examined in tissue baths containing Krebs solution at 35°C. Myosin heavy chain (MHC) isoform composition was determined using immunofluorescence. Muscle oxidative and glycolytic capacity was assessed by measuring the activities of succinate dehydrogenase and glycerol-3-phosphate dehydrogenase using semi-quantitative histochemistry. Sternohyoid and diaphragm peak isometric force and fatigue increased significantly with postnatal maturation. 
Developmental myosin disappeared by PD20, whereas MHC2B areal density increased significantly from PD10 to PD30, emerging earlier and to a much greater extent in the sternohyoid muscle. The numerical density of fibres expressing MHC2X and MHC2B increased significantly during development in the sternohyoid. Diaphragm succinate dehydrogenase activity and sternohyoid glycerol-3-phosphate dehydrogenase activity increased significantly with age. Developmental increases in force-generating capacity and fatigue in the sternohyoid and diaphragm muscles are attributed to a postnatal shift in muscle MHC phenotype. The accelerated maturation of the sternohyoid muscle relative to the diaphragm may have implications for the control of airway calibre in vivo.
Improved cross-calibration of Thomson scattering and electron cyclotron emission with ECH on DIII-D.
Brookman, M W; Austin, M E; McLean, A G; Carlstrom, T N; Hyatt, A W; Lohr, J
2016-11-01
Thomson scattering produces n_e profiles from measurement of scattered laser beam intensity. Rayleigh scattering provides a first calibration of the relation n_e ∝ I_TS, which depends on many factors (e.g., laser alignment and power, optics, and measurement systems). On DIII-D, the n_e calibration is adjusted against an absolute n_e obtained from the density-driven cutoff of the 48-channel 2nd-harmonic X-mode electron cyclotron emission system. This method has been used to calibrate Thomson n_e from the edge to near the core (r/a > 0.15). Application of core electron cyclotron heating improves the quality of the cutoff and the depth of its penetration into the core, and also changes the underlying MHD activity, minimizing crashes that confound calibration. Less fueling is needed, as "ECH pump-out" generates a plasma ready to take up gas. On removal of gyrotron power, the cutoff penetrates into the core as channels fall successively and smoothly into cutoff.
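The density at which a given emission frequency reaches the right-hand (X-mode) cutoff follows from the cold-plasma relation f_pe² = f(f − f_ce). A rough sketch under idealized assumptions (uniform magnetic field, illustrative B value; this is not the DIII-D geometry or the actual calibration code):

```python
import math

E = 1.602176634e-19      # elementary charge, C
ME = 9.1093837015e-31    # electron mass, kg
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def xmode_cutoff_density(f_hz, b_tesla):
    """Electron density (m^-3) at which frequency f reaches the right-hand
    (X-mode) cutoff, i.e. where f_pe^2 = f * (f - f_ce)."""
    f_ce = E * b_tesla / (2.0 * math.pi * ME)   # electron cyclotron frequency
    f_pe_sq = f_hz * (f_hz - f_ce)
    if f_pe_sq <= 0:
        raise ValueError("frequency at or below f_ce: no right-hand cutoff")
    # f_pe^2 = n_e e^2 / (4 pi^2 eps0 m_e)  ->  solve for n_e
    return f_pe_sq * 4.0 * math.pi**2 * EPS0 * ME / E**2

# Example: a 2nd-harmonic channel at an assumed B = 2 T (illustrative only)
b = 2.0
f2 = 2.0 * E * b / (2.0 * math.pi * ME)  # 2 f_ce
print(f"{xmode_cutoff_density(f2, b):.3e}")  # cutoff density in m^-3
```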
Density of α-pinene, β-pinene, limonene, and essence of turpentine
NASA Astrophysics Data System (ADS)
Tavares Sousa, A.; Nieto de Castro, C. A.
1992-03-01
Densities of α-pinene, β-pinene, limonene, and essence of turpentine have been measured at 293.15, 298.15, 303.15, 308.15, and 313.15 K, at atmospheric pressure, with a mechanical oscillator densimeter. Benzene and cyclohexane were used as calibration fluids. The precision is of the order of 0.01 kg·m⁻³, while the accuracy is estimated to be 0.1%. A linear representation of the variation of the density with temperature reproduces the experimental data within 0.2%.
Investigation of Photographic Image Quality Estimators
1980-04-01
Key words: Resolving Power, Acutance, SENTINEL SIGMA Math Model, Modulation Transfer. ... Biberman (1973) describes acutance as being "expressed in terms of the mean square of the gradient of ... density (in a photographic image)" ... the density difference ΔD is read for each interval from the (smoothed) microdensitometer trace (calibrated in density units), and the gradient is then computed ...
Ding, Huanjun; Molloi, Sabee
2012-08-07
A simple and accurate measurement of breast density is crucial for understanding its impact on breast cancer risk models. The feasibility of quantifying volumetric breast density with a photon-counting spectral mammography system has been investigated using both computer simulations and physical phantom studies. A computer simulation model involving polyenergetic spectra from a tungsten-anode x-ray tube and a Si-based photon-counting detector was evaluated for breast density quantification. The figure-of-merit (FOM), defined as the signal-to-noise ratio of the dual energy image with respect to the square root of the mean glandular dose, was chosen to optimize the imaging protocols in terms of tube voltage and splitting energy. A scanning multi-slit photon-counting spectral mammography system was employed in the experimental study to quantitatively measure breast density using dual energy decomposition with glandular and adipose equivalent phantoms of uniform thickness. Four phantom studies were designed to evaluate the accuracy of the technique, each addressing one specific variable in the phantom configurations: thickness, density, area, and shape. In addition to the standard calibration fitting function used for dual energy decomposition, a modified fitting function has been proposed, which incorporates the tube voltage used in the imaging task as a third variable in the dual energy decomposition. For an average-sized 4.5 cm thick breast, the FOM was maximized with a tube voltage of 46 kVp and a splitting energy of 24 keV. To be consistent with the tube voltage used in current clinical screening exams (∼32 kVp), the optimal splitting energy was proposed to be 22 keV, which offered a FOM greater than 90% of the optimal value.
In the experimental investigation, the root-mean-square (RMS) error in breast density quantification for all four phantom studies was estimated to be approximately 1.54% using the standard calibration function. The results from the modified fitting function, which integrated the tube voltage as a variable in the calibration, indicated an RMS error of approximately 1.35% for all four studies. The results of the current study suggest that photon-counting spectral mammography systems may potentially be implemented for an accurate quantification of volumetric breast density, with an RMS error of less than 2%, using the proposed dual energy imaging technique.
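The FOM defined above is easy to evaluate once the SNR and dose are known. A minimal sketch of the protocol sweep; the `snr` and `mgd` arrays are made-up placeholders, not the paper's simulation results:

```python
import numpy as np

def figure_of_merit(snr, mean_glandular_dose):
    """FOM as defined in the abstract: dual-energy image SNR divided by the
    square root of the mean glandular dose."""
    return snr / np.sqrt(mean_glandular_dose)

# Hypothetical sweep over splitting energies (illustrative values only)
split_kev = np.array([20, 22, 24, 26, 28])
snr = np.array([4.1, 4.8, 5.0, 4.6, 3.9])   # assumed simulated SNRs
mgd = np.array([1.2, 1.2, 1.2, 1.2, 1.2])   # mGy, assumed constant here
fom = figure_of_merit(snr, mgd)
best = int(split_kev[np.argmax(fom)])
print(best)  # splitting energy with the highest FOM in this toy sweep
```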
NASA Astrophysics Data System (ADS)
Kanematsu, Nobuyuki; Inaniwa, Taku; Nakao, Minoru
2016-07-01
In the conventional procedure for accurate Monte Carlo simulation of radiotherapy, the CT number given to each pixel of a patient image is directly converted to mass density and elemental composition using respective functions that have been calibrated specifically for the relevant x-ray CT system. We propose an alternative approach that converts in two steps: the first from CT number to mass density and the second from density to composition. Based on the latest compilation of standard tissues for reference adult male and female phantoms, we sorted the standard tissues into groups by mass density and defined representative tissues by averaging the material properties per group. With these representative tissues, we formulated polyline relations between mass density and each of the following: electron density, stopping-power ratio, and elemental densities. We also revised a procedure of stoichiometric calibration for CT-number conversion and demonstrated the two-step conversion method for a theoretically emulated CT system with hypothetical 80 keV photons. For the standard tissues, high correlation was generally observed between mass density and the other densities, excluding those of C and O for the light spongiosa tissues between 1.0 g cm-3 and 1.1 g cm-3, which occupy 1% of the human body mass. The polylines fitted to the dominant tissues were generally consistent with similar formulations in the literature. The two-step conversion procedure was demonstrated to be practical and will potentially facilitate Monte Carlo simulation for treatment planning and for retrospective analysis of treatment plans, with little impact on the management of planning CT systems.
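The two-step polyline conversion can be sketched with simple piecewise-linear interpolation. All node values below are hypothetical placeholders, not the calibrated tables from the paper:

```python
import numpy as np

# Hypothetical polyline nodes for step 2 (mass density in g/cm^3 ->
# electron density relative to water); the paper's actual nodes differ.
rho_nodes = np.array([0.001, 0.95, 1.05, 1.10, 1.92])
ed_nodes  = np.array([0.001, 0.95, 1.04, 1.08, 1.78])

def hu_to_density(hu, hu_nodes, rho_tab):
    """Step 1: CT number -> mass density via a calibrated polyline."""
    return np.interp(hu, hu_nodes, rho_tab)

def density_to_electron_density(rho):
    """Step 2: mass density -> relative electron density via a second polyline."""
    return np.interp(rho, rho_nodes, ed_nodes)

# Toy stoichiometric HU calibration for step 1 (assumed values)
hu_nodes = np.array([-1000.0, -100.0, 0.0, 100.0, 1500.0])
rho_tab  = np.array([0.001, 0.93, 1.00, 1.07, 1.92])

rho = hu_to_density(40.0, hu_nodes, rho_tab)
print(round(float(density_to_electron_density(rho)), 3))
```

Only the first polyline depends on the CT system; the second is scanner-independent, which is the practical appeal of the two-step scheme.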
NASA Technical Reports Server (NTRS)
Everhart, Joel L.
1996-01-01
Orifice-to-orifice inconsistencies in data acquired with an electronically scanned pressure system at the beginning of a wind tunnel experiment forced modifications to the standard instrument calibration procedures. These modifications included a large increase in the number of calibration points, which allowed a critical examination of the calibration curve-fit process and a subsequent post-test reduction of the pressure data. Evaluation of these data has resulted in an improved functional representation of the pressure-voltage signature for electronically scanned pressure sensors, which can reduce the errors due to calibration curve fit to under 0.10 percent of reading, compared to the manufacturer-specified 0.10 percent of full scale. Application of the improved calibration function allows a more rational selection of the calibration set-point pressures: these pressures should be adjusted to achieve a voltage output that matches the physical shape of the pressure-voltage signature of the sensor, in lieu of the more traditional approach in which a calibration pressure is specified and the resulting sensor voltage is recorded. The fifteen calibrations acquired over the two-week duration of the wind tunnel test were further used to perform a preliminary statistical assessment of the variation in the calibration process. The results allowed estimation of the bias uncertainty for a single instrument calibration; they form the precursor for more extensive and more controlled studies in the laboratory.
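The curve-fit step and the percent-of-reading metric can be illustrated with a least-squares polynomial fit of pressure against voltage; the sensor model, coefficients, and noise level here are invented for illustration and are not the calibration data from the test:

```python
import numpy as np

# Hypothetical calibration data: set-point pressures (kPa) and sensor output (V),
# modelled as a mildly nonlinear response plus a small amount of noise.
p_cal = np.linspace(5.0, 100.0, 15)
rng = np.random.default_rng(1)
v_cal = 0.048 * p_cal + 2e-5 * p_cal**2 + rng.normal(0.0, 1e-4, p_cal.size)

# Fit pressure as a function of voltage (the direction used in data reduction)
coef = np.polyfit(v_cal, p_cal, 3)
p_fit = np.polyval(coef, v_cal)

# Percent-of-reading error, as opposed to percent of full scale
err_reading = 100.0 * np.abs(p_fit - p_cal) / p_cal
print(round(float(err_reading.max()), 4))
```

Percent of reading penalizes errors at low pressures far more than percent of full scale does, which is why the improved fit matters most near the bottom of the range.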
Design of system calibration for effective imaging
NASA Astrophysics Data System (ADS)
Varaprasad Babu, G.; Rao, K. M. M.
2006-12-01
A CCD-based characterization setup comprising a light source, a CCD linear array, electronics for signal conditioning and amplification, and a PC interface has been developed to generate images at varying densities and at multiple view angles. This arrangement is used to simulate and evaluate images produced by the super-resolution technique with multiple overlaps and yaw-rotated images at different view angles. The setup also generates images at different densities to analyze the response of the detector port-wise. The light intensity produced by the source needs to be calibrated for proper imaging by the highly sensitive CCD detector over the FOV. One approach is to design a complex integrating-sphere arrangement, which is costly for such applications. Another approach is to provide intensity feedback correction, in which the current through the lamp is controlled in a closed loop; this method is generally used where the light source is a point source. A third method is to control the exposure time inversely to the lamp variations when the lamp intensity cannot be controlled directly; the light intensity at the start of each line is sampled and the correction factor is applied to the full line. A fourth method is to provide correction through a look-up table, in which the responses of all the detectors are normalized through a digital transfer function. A fifth method is a light-line arrangement, in which light from a single source is carried through multiple fiber-optic cables arranged in a line; this is generally applicable and economical only for small widths. In our application, a new method is used wherein an inverse multi-density filter is designed, which provides effective calibration over the full swath even at low light intensities. The light intensity along the length is measured, an inverse density is computed, and a correction filter is generated and implemented in the CCD-based characterization setup. This paper describes these novel techniques for the design and implementation of system calibration for effective imaging, to produce better-quality data products, especially when handling high-resolution data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, T; Sun, B; Li, H
Purpose: The current standard for calculation of photon and electron dose requires conversion of Hounsfield Units (HU) to Electron Density (ED) by applying a calibration curve specifically constructed for the corresponding CT tube voltage. This practice limits the use of the CT scanner to a single tube voltage and hinders the freedom to select the optimal tube voltage for better image quality. The objective of this study is to report a prototype CT reconstruction algorithm that provides direct ED images from the raw CT data, independently of the tube voltage used during acquisition. Methods: A tissue substitute phantom was scanned for stoichiometric CT calibrations at tube voltages of 70kV, 80kV, 100kV, 120kV and 140kV, respectively. HU images and direct ED images were acquired sequentially on a thoracic anthropomorphic phantom at the same tube voltages. Electron densities converted from the HU images were compared to ED obtained from the direct ED images. A 7-field treatment plan was made on all HU and ED images. Gamma analysis was performed to quantify the dosimetric difference between the two schemes of acquiring ED. Results: The average deviation of EDs obtained from the direct ED images was −1.5%±2.1% from the EDs from HU images with the corresponding CT calibration curves applied. Gamma analysis of dose calculated on the direct ED images and the HU images acquired at the same tube voltage indicated negligible differences, with the lowest passing rate at 99.9%. Conclusion: Direct ED images require no CT calibration while demonstrating dosimetry equivalent to that obtained from standard HU images. The ability to acquire direct ED images simplifies current practice and makes it safer by eliminating CT calibration from commissioning and HU conversion from treatment planning. Furthermore, it unlocks a wider range of CT tube voltages for better imaging quality while maintaining similar dosimetric accuracy.
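Gamma analysis of the kind used above to compare two dose calculations can be sketched in one dimension. This generic 3%/3 mm global gamma is an illustration of the metric, not the TPS implementation:

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, x, dose_tol=0.03, dist_tol_mm=3.0):
    """1-D global gamma analysis sketch: for each reference point, take the
    minimum over evaluated points of the combined dose-difference /
    distance-to-agreement metric, and report the fraction with gamma <= 1."""
    d_max = dose_ref.max()
    gammas = []
    for xr, dr in zip(x, dose_ref):
        dd = (dose_eval - dr) / (dose_tol * d_max)   # dose-difference term
        dx = (x - xr) / dist_tol_mm                  # distance term
        gammas.append(np.min(np.sqrt(dd**2 + dx**2)))
    gammas = np.array(gammas)
    return 100.0 * float(np.mean(gammas <= 1.0))

# Toy profiles: evaluated dose shifted by 1 mm and scaled by 1%
x = np.arange(0.0, 100.0, 1.0)
ref = np.exp(-((x - 50.0) / 15.0) ** 2)
ev = 1.01 * np.exp(-((x - 51.0) / 15.0) ** 2)
print(gamma_pass_rate(ref, ev, x))
```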
Klop, Corinne; de Vries, Frank; Bijlsma, Johannes W J; Leufkens, Hubert G M; Welsing, Paco M J
2016-01-01
Objectives FRAX incorporates rheumatoid arthritis (RA) as a dichotomous predictor for predicting the 10-year risk of hip and major osteoporotic fracture (MOF). However, fracture risk may deviate with disease severity, duration or treatment. Aims were to validate, and if needed to update, UK FRAX for patients with RA and to compare predictive performance with the general population (GP). Methods Cohort study within UK Clinical Practice Research Datalink (CPRD) (RA: n=11 582, GP: n=38 755), also linked to hospital admissions for hip fracture (CPRD-Hospital Episode Statistics, HES) (RA: n=7221, GP: n=24 227). Predictive performance of UK FRAX without bone mineral density was assessed by discrimination and calibration. Updating methods included recalibration and extension. Differences in predictive performance were assessed by the C-statistic and Net Reclassification Improvement (NRI) using the UK National Osteoporosis Guideline Group intervention thresholds. Results UK FRAX significantly overestimated fracture risk in patients with RA, both for MOF (mean predicted vs observed 10-year risk: 13.3% vs 8.4%) and hip fracture (CPRD: 5.5% vs 3.1%, CPRD-HES: 5.5% vs 4.1%). Calibration was good for hip fracture in the GP (CPRD-HES: 2.7% vs 2.4%). Discrimination was good for hip fracture (RA: 0.78, GP: 0.83) and moderate for MOF (RA: 0.69, GP: 0.71). Extension of the recalibrated UK FRAX using CPRD-HES with duration of RA disease, glucocorticoids (>7.5 mg/day) and secondary osteoporosis did not improve the NRI (0.01, 95% CI −0.04 to 0.05) or C-statistic (0.78). Conclusions UK FRAX overestimated fracture risk in RA, but performed well for hip fracture in the GP after linkage to hospitalisations. Extension of the recalibrated UK FRAX did not improve predictive performance. PMID:26984006
Hydrogen slush density reference system
NASA Technical Reports Server (NTRS)
Weitzel, D. H.; Lowe, L. T.; Ellerbruch, D. A.; Cruz, J. E.; Sindt, C. F.
1971-01-01
A hydrogen slush density reference system was designed for calibration of field-type instruments and/or transfer standards. The device is based on the buoyancy principle of Archimedes: the solids are weighed in a low-mass container so arranged that solids and container are buoyed by triple-point liquid hydrogen during the weighing process. Several types of hydrogen slush density transducers were developed and tested for possible use as transfer standards. The most successful transducers were those that depend on a change in dielectric constant, with the Clausius-Mossotti function then used to relate dielectric constant and density.
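Relating dielectric constant to density via the Clausius-Mossotti function amounts to inverting a simple ratio. A sketch with a purely hypothetical specific polarizability; real values for triple-point hydrogen must come from reference data:

```python
def clausius_mossotti_density(eps_r, specific_polarizability):
    """Invert the Clausius-Mossotti relation
        (eps_r - 1) / (eps_r + 2) = p * rho
    for the mass density rho, where p is the specific polarizability of the
    fluid (an assumed, fluid-dependent constant here, in m^3/kg)."""
    return (eps_r - 1.0) / ((eps_r + 2.0) * specific_polarizability)

# Illustrative constant only, not a measured hydrogen value
p = 1.0e-3  # m^3/kg, hypothetical
for eps in (1.20, 1.22, 1.25):
    print(round(clausius_mossotti_density(eps, p), 1))
```

Because the relation is monotonic in density, a capacitance measurement of eps_r maps one-to-one onto slush density, which is what makes this transducer type attractive as a transfer standard.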
Development and Calibration of an Item Bank for PE Metrics Assessments: Standard 1
ERIC Educational Resources Information Center
Zhu, Weimo; Fox, Connie; Park, Youngsik; Fisette, Jennifer L.; Dyson, Ben; Graber, Kim C.; Avery, Marybell; Franck, Marian; Placek, Judith H.; Rink, Judy; Raynes, De
2011-01-01
The purpose of this study was to develop and calibrate an assessment system, or bank, using the latest measurement theories and methods to promote valid and reliable student assessment in physical education. Using an anchor-test equating design, a total of 30 items or assessments were administered to 5,021 (2,568 boys and 2,453 girls) students in…
NASA Technical Reports Server (NTRS)
Rogers, Raymond R.; Hostetler, Chris A.; Hair, Johnathan W.; Ferrare, Richard A.; Liu, Zhaoyan; Obland, Michael D.; Harper, David B.; Cook, Anthony L.; Powell, Kathleen A.; Vaughan, Mark A.;
2011-01-01
The Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) instrument on the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) spacecraft has provided global, high-resolution vertical profiles of aerosols and clouds since it became operational on 13 June 2006. On 14 June 2006, the NASA Langley Research Center (LaRC) High Spectral Resolution Lidar (HSRL) was deployed aboard the NASA Langley B-200 aircraft for the first of a series of 86 underflights of the CALIPSO satellite to provide validation measurements for the CALIOP data products. To better assess the range of conditions under which CALIOP data products are produced, these validation flights were conducted under both daytime and nighttime lighting conditions, in multiple seasons, and over a large range of latitudes and aerosol and cloud conditions. This paper presents a quantitative assessment of the CALIOP 532 nm calibration (through the 532 nm total attenuated backscatter) using internally calibrated airborne HSRL underflight data, and is the most extensive study of the CALIOP 532 nm calibration to date. Results show that HSRL and CALIOP 532 nm total attenuated backscatter agree on average within 2.7% +/- 2.1% (CALIOP lower) at night and within 2.9% +/- 3.9% (CALIOP lower) during the day, demonstrating the accuracy of the CALIOP 532 nm calibration algorithms. Additionally, comparisons with HSRL show consistency of the CALIOP calibration before and after the laser switch in 2009, as well as improvements in the daytime version 3 calibration scheme compared with the version 2 scheme. Potential systematic uncertainties in the methodology relevant to validating satellite lidar measurements with an airborne lidar system are discussed and found to be less than 3.7% for this validation effort. Results from this study are also compared to those from prior assessments of CALIOP calibration and attenuated backscatter.
Characterization and Calibration of the 12-m Antenna in Warkworth, New Zealand
NASA Technical Reports Server (NTRS)
Gulyaev, Sergei; Natusch, Tim; Wilson, David
2010-01-01
The New Zealand 12-m antenna is scheduled to start participating in regular IVS VLBI sessions from the middle of 2010. Characterization procedures and results of calibration of the New Zealand 12-m radio telescope are presented, including the main reflector surface accuracy measurement, pointing model creation, and the system equivalent flux density (SEFD) determination in both S and X bands. Important issues of network connectivity, co-located geodetic systems, and the use of the antenna in education are also discussed.
Boboc, A; Bieg, B; Felton, R; Dalley, S; Kravtsov, Yu
2015-09-01
In this paper, we present the implementation of a new calibration for the JET real-time polarimeter, based on the complex amplitude ratio technique, together with a new self-validation mechanism for the data. This allowed easy integration of the polarimetry measurements into the JET plasma density control (gas feedback control) as well as the machine protection systems (neutral beam injection heating safety interlocks). The new addition was used successfully during the 2014 JET campaign, and it is envisaged that it will operate routinely from the 2015 campaign onwards in any plasma condition (including ITER-relevant scenarios). This mode of operation has elevated the importance of polarimetry as a diagnostic tool in view of future fusion experiments.
NASA Astrophysics Data System (ADS)
Becerra, Luis Omar
2009-01-01
This SIM comparison on the calibration of high accuracy hydrometers was carried out among fourteen laboratories in the density range from 600 kg/m3 to 1300 kg/m3 in order to evaluate the degree of equivalence among the participant laboratories. This key comparison anticipates the planned key comparison CCM.D-K4 and is intended to be linked with CCM.D-K4 when results are available. The main text of this paper appears in Appendix B of the BIPM key comparison database (kcdb.bipm.org). The final report has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
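Degree of equivalence in key comparisons is commonly summarized with a normalized error. A small sketch with invented hydrometer numbers; the E_n form shown is the usual convention, not necessarily the exact analysis used in this comparison:

```python
import math

def en_number(x_lab, u_lab, x_ref, u_ref, k=2.0):
    """Normalized error E_n = (x_lab - x_ref) / sqrt(U_lab^2 + U_ref^2),
    with expanded uncertainties U = k * u (k = 2 for ~95% coverage).
    |E_n| <= 1 is conventionally taken to indicate agreement."""
    return (x_lab - x_ref) / math.hypot(k * u_lab, k * u_ref)

# Hypothetical hydrometer result vs reference value (densities in kg/m^3)
print(round(en_number(998.25, 0.05, 998.20, 0.03), 2))
```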
Status and performance of the AS array of the Tibet ASγ experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amenomori, M.; Bai, Z.W.; Cao, Z.
1991-04-05
The Tibet ASγ experiment, in operation since January 1990, is located at an altitude of 4,300 m at Yangbajing in Tibet, China (90.5° E, 30.1° N). The air-shower array is composed of 49 scintillation counters for fast timing (FT), each counter having an area of 0.5 m², arranged in a grid pattern with a spacing of 15 m, plus 16 density detectors. The density detectors surrounding the FT detectors are intended to select showers whose cores fall inside the FT array. For constant and continuous operation, systems for laser calibration, ADC/TDC module calibration, and power checking were installed. A Rb clock is used for the phase analysis of γ-rays.
The Thomson scattering system at Wendelstein 7-X
NASA Astrophysics Data System (ADS)
Pasch, E.; Beurskens, M. N. A.; Bozhenkov, S. A.; Fuchert, G.; Knauer, J.; Wolf, R. C.
2016-11-01
This paper describes the design of the Thomson scattering system at the Wendelstein 7-X stellarator. For the first operation campaign we installed a 10-spatial-channel system to cover a radial half profile of the plasma cross section. The start-up system is based on one Nd:YAG laser with 10 Hz repetition frequency, one set of observation optics, five fiber bundles with one delay line each, and five interference-filter polychromators with five spectral channels and silicon avalanche diodes as detectors. High-dynamic-range analog-to-digital converters (14 bit, 1 GS/s) are used to digitize the signals. The spectral calibration of the system was done using a pulsed supercontinuum laser together with a monochromator. For density calibration we used Raman scattering in nitrogen gas. Peaked temperature profiles and flat density profiles are observed in helium and hydrogen discharges.
The 32-GHz performance of the DSS-14 70-meter antenna: 1989 configuration
NASA Technical Reports Server (NTRS)
Gatti, M. S.; Klein, M. J.; Kuiper, T. B. H.
1989-01-01
The results of preliminary 32 GHz calibrations of the 70 meter antenna at Goldstone are presented. Measurements were made between March and July 1989 using Virgo A and Venus as the primary efficiency calibrators. The flux densities of these radio sources at 32 GHz are not known with high accuracy, but were extrapolated from calibrated data at lower frequencies. The measured efficiency (0.35) agreed closely with the predicted value (0.32), and the results are very repeatable. Flux densities of secondary sources used in the observations were subsequently derived. These measurements were performed using a beam-switching radiometer that employed an uncooled high-electron-mobility transistor (HEMT) low-noise amplifier. This system was installed primarily to determine the performance of the antenna in its 1989 configuration, but the experience will also aid future calibration of the Deep Space Network (DSN) at this frequency.
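Extrapolating flux densities from calibrated lower-frequency data, as described above, typically assumes a power-law spectrum S(ν) = S_ref (ν/ν_ref)^α. A minimal sketch; the reference flux, frequencies, and spectral index below are made-up illustrative values, not the paper's:

```python
def extrapolate_flux_density(s_ref, nu_ref, nu, alpha):
    """Power-law extrapolation S(nu) = S_ref * (nu / nu_ref)**alpha."""
    return s_ref * (nu / nu_ref) ** alpha

# Hypothetical example: a 100 Jy source at 8 GHz with spectral index -0.8
s_32 = extrapolate_flux_density(100.0, 8.0, 32.0, -0.8)
```

A steep negative spectral index makes the 32 GHz prediction much fainter than the low-frequency value, which is why accurate indices matter for such extrapolations.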
Reduction and Analysis of GALFACTS Data in Search of Compact Variable Sources
NASA Astrophysics Data System (ADS)
Wenger, Trey; Barenfeld, S.; Ghosh, T.; Salter, C.
2012-01-01
The Galactic ALFA Continuum Transit Survey (GALFACTS) is an all-Arecibo sky, full-Stokes survey from 1225 to 1525 MHz using the multibeam Arecibo L-band Feed Array (ALFA). Using data from survey field N1, the first field covered by GALFACTS, we are searching for compact sources that vary in intensity and/or polarization. The multistep procedure for reducing the data includes radio frequency interference (RFI) removal, source detection, Gaussian fitting in multiple dimensions, polarization leakage calibration, and gain calibration. We have developed code to analyze and calculate the calibration parameters from the N1 calibration sources, and apply these to the data of the main run. For detected compact sources, our goal is to compare results from multiple passes over a source to search for rapid variability, as well as to compare our flux densities with those from the NRAO VLA Sky Survey (NVSS) to search for longer time-scale variations.
NASA Astrophysics Data System (ADS)
Smith, K. A.; Barker, L. J.; Harrigan, S.; Prudhomme, C.; Hannaford, J.; Tanguy, M.; Parry, S.
2017-12-01
Earth and environmental models are relied upon to investigate system responses that cannot otherwise be examined. In simulating physical processes, models have adjustable parameters which may, or may not, have a physical meaning. Determining the values to assign to these parameters is an enduring challenge for earth and environmental modellers. Selecting different error metrics by which model results are compared to observations will lead to different sets of calibrated parameters, and thus different model results. Furthermore, models may exhibit 'equifinal' behaviour, where multiple combinations of parameters lead to equally acceptable performance against observations. These calibration decisions introduce uncertainty that must be considered when model results are used to inform environmental decision-making. This presentation focusses on the uncertainties that derive from the calibration of a four-parameter lumped catchment hydrological model (GR4J). The GR models contain an inbuilt automatic calibration algorithm that can satisfactorily calibrate against four error metrics in only a few seconds. However, a single, deterministic model result does not provide information on parameter uncertainty. Furthermore, a modeller interested in extreme events, such as droughts, may wish to calibrate against more low-flow-specific error metrics. In a comprehensive assessment, the GR4J model has been run with 500,000 Latin hypercube sampled parameter sets across 303 catchments in the United Kingdom. These parameter sets have been assessed against six error metrics, including two drought-specific metrics. This presentation compares the two approaches and demonstrates that the inbuilt automatic calibration can outperform the Latin hypercube approach in single-metric performance. However, there are also many merits to the more comprehensive assessment, which allows for probabilistic model results, multi-objective optimisation, and better tailoring of the calibration to specific applications such as drought event characterisation. Modellers and decision-makers may be constrained in their choice of calibration method, so it is important that they recognise the strengths and limitations of their chosen approach.
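The Latin hypercube experiment described above combines stratified parameter sampling with one or more error metrics such as Nash-Sutcliffe efficiency. A minimal sketch; the parameter bounds, sample count, and choice of metric are illustrative, not the study's exact configuration:

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube sample: one stratified draw per interval per parameter."""
    rng = random.Random(seed)
    samples = [[0.0] * len(bounds) for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        strata = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(strata)  # decouple strata across dimensions
        for i in range(n_samples):
            samples[i][d] = lo + strata[i] * (hi - lo)
    return samples

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations."""
    mean_o = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_o) ** 2 for o in observed)
    return 1.0 - num / den
```

Each sampled parameter set would be run through the hydrological model and scored with `nse` (or a low-flow-specific metric), yielding a distribution of acceptable parameter sets rather than a single deterministic optimum.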
Boomsma, Martijn F; Slouwerhof, Inge; van Dalen, Jorn A; Edens, Mireille A; Mueller, Dirk; Milles, Julien; Maas, Mario
2015-11-01
The purpose of this research is to study the use of an internal reference standard for fat and muscle as a replacement for an external reference standard with a phantom. By using a phantomless internal reference standard, Hounsfield unit (HU) measurements of various tissues can potentially be assessed in patients with a CT scan of the pelvis without an added phantom at the time of CT acquisition. This paves the way for the development of a tool for quantifying the change in tissue density within one patient over time and between patients, and could make every CT scan made without contrast available for research purposes. Fifty patients with unilateral metal-on-metal total hip replacements, scanned together with a calibration reference phantom used in bone mineral density measurements, were included in this study. On computed tomography scans of the pelvis without intravenous iodine contrast, reference values for fat and muscle were measured in the phantom as well as within the patient's body. The conformity between the references was examined with the intra-class correlation coefficient. The mean HU (±SD) of the reference values for fat was -91.5 (±7.0) for the internal reference and -90.9 (±7.8) for the phantom reference. For muscle, the mean HU (±SD) was 59.2 (±6.2) and 60.0 (±7.2) for the internal and phantom references, respectively. The intra-class correlation coefficients for fat and muscle were 0.90 and 0.84, respectively, showing excellent agreement between the phantom and internal references. Internal references can be used with accuracy similar to references from an external phantom, so there is no need for an external phantom to assess CT density measurements of body tissue.
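The intra-class correlation used above to compare internal and phantom references can be computed from a two-way ANOVA decomposition. The abstract does not say which ICC variant was used, so the consistency form ICC(3,1) for two raters below is an illustrative sketch:

```python
def icc_3_1(x, y):
    """ICC(3,1) for two measurement methods on the same n subjects."""
    n, k = len(x), 2
    grand = (sum(x) + sum(y)) / (n * k)
    row_means = [(a + b) / 2 for a, b in zip(x, y)]   # per-subject means
    col_means = [sum(x) / n, sum(y) / n]              # per-method means
    ss_total = sum((v - grand) ** 2 for v in x + y)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_err = ss_total - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
```

Two methods that agree up to a constant offset score 1.0 in this consistency form; absolute-agreement variants would additionally penalize the offset.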
Processing of Swarm Accelerometer Data into Thermospheric Neutral Densities
NASA Astrophysics Data System (ADS)
Doornbos, E.; Siemes, C.; Encarnacao, J.; Peřestý, R.; Grunwaldt, L.; Kraus, J.; Holmdahl Olsen, P. E.; van den IJssel, J.; Flury, J.; Apelbaum, G.
2015-12-01
The Swarm satellites were launched on 22 November 2013 and carry accelerometers and GPS receivers as part of their scientific payload. The GPS receivers are not only used for locating the position and time of the magnetic measurements, but also for determining non-gravitational forces like drag and radiation pressure acting on the spacecraft. The accelerometers measure these forces directly, at much finer resolution than the GPS receivers, from which thermospheric neutral densities and potentially winds can be derived. Unfortunately, the acceleration measurements suffer from a variety of disturbances, the most prominent being slow temperature-induced bias variations and sudden bias changes. These disturbances have caused a significant delay of the accelerometer data release. In this presentation, we describe the new three-stage processing that is required for transforming the disturbed acceleration measurements into scientifically valuable thermospheric neutral densities. In the first stage, the sudden bias changes in the acceleration measurements are removed using a dedicated software tool. The second stage is the calibration of the accelerometer measurements against the non-gravitational accelerations derived from the GPS receiver, which includes the correction for the slow temperature-induced bias variations. The third stage consists of transforming the corrected and calibrated accelerations into thermospheric neutral densities. We describe the methods used in each stage, highlight the difficulties encountered, and comment on the quality of the thermospheric neutral density data set, which covers the geomagnetic storm on 17 March 2015.
Jiang, Ze-Hui; Wang, Yu-Rong; Fei, Ben-Hua; Fu, Feng; Hse, Chung-Yun
2007-06-01
Rapid prediction of the annual ring density of standing Paulownia elongata trees using near-infrared (NIR) spectroscopy was studied. Sample collection was non-destructive: wood cores 5 mm in diameter were extracted at breast height from standing trees rather than from felled trees. Spectral data were then collected with the autoscan method of the NIR instrument. The annual ring density was determined by mercury immersion, and models were built and analyzed by partial least squares (PLS) regression with full cross-validation over the 350-2500 nm wavelength range. The results showed high correlations between the annual ring density and the NIR-fitted data. The correlation coefficient of the prediction model was 0.88 and 0.91 for middle-diameter and larger-diameter trees, respectively. For the middle-diameter trees, the correlation coefficients of the calibration and prediction models were 0.90 and 0.83, and the standard errors of calibration (SEC) and prediction (SEP) were 0.012 and 0.016, respectively. The method provides a simple, rapid, and non-destructive estimate of the annual ring density of standing Paulownia elongata trees close to cutting age.
Shepherd, John A; Fan, Bo; Lu, Ying; Wu, Xiao P; Wacker, Wynn K; Ergun, David L; Levine, Michael A
2012-10-01
Dual-energy x-ray absorptiometry (DXA) is used to assess bone mineral density (BMD) and body composition, but measurements vary among instruments from different manufacturers. We sought to develop cross-calibration equations for whole-body bone density and composition derived using GE Healthcare Lunar and Hologic DXA systems. This multinational study recruited 199 adult and pediatric participants from a site in the US (n = 40, ages 6 through 16 years) and one in China (n = 159, ages 5 through 81 years). The mean age of the participants was 44.2 years. Each participant was scanned on both GE Healthcare Lunar and Hologic Discovery or Delphi DXA systems on the same day (US) or within 1 week (China), and all scans were centrally analyzed by a single technologist using GE Healthcare Lunar Encore version 14.0 and Hologic Apex version 3.0. Paired t-tests were used to test the differences in results between the systems. Multiple regression and Deming regression were used to derive the cross-conversion equations between the GE Healthcare Lunar and Hologic whole-body scans. Bone and soft tissue measures were highly correlated between the GE Healthcare Lunar and Hologic systems, with r ranging from 0.96 (percent fat [PFAT]) to 0.98 (BMC). Significant differences were found between the two systems, with average absolute differences for PFAT, BMC, and BMD of 1.4%, 176.8 g, and 0.013 g/cm², respectively. After cross-calibration, no significant differences remained between GE Healthcare Lunar measured results and the results converted from Hologic. The equations we derived reduce differences between BMD and body composition as determined by GE Healthcare Lunar and Hologic systems and will facilitate combining study results in clinical or epidemiological studies. Copyright © 2012 American Society for Bone and Mineral Research.
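Deming regression, used above for the cross-conversion equations, fits a line while allowing measurement error in both instruments. A minimal sketch with equal error variances (δ = 1); the paired readings in the test are made up, not the study's data:

```python
import math

def deming(x, y, delta=1.0):
    """Deming regression (slope, intercept) with error-variance ratio delta."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x) / (n - 1)
    syy = sum((b - my) ** 2 for b in y) / (n - 1)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    slope = ((syy - delta * sxx
              + math.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2))
             / (2 * sxy))
    return slope, my - slope * mx
```

Unlike ordinary least squares, this treats neither instrument as the error-free "truth", which is the appropriate model when cross-calibrating two DXA systems.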
Walsh, Colin G; Sharman, Kavya; Hripcsak, George
2017-12-01
Prior to implementing predictive models in novel settings, analyses of calibration and clinical usefulness remain as important as discrimination, but they are not frequently discussed. Calibration is a model's reflection of actual outcome prevalence in its predictions. Clinical usefulness refers to the utilities, costs, and harms of using a predictive model in practice. A decision analytic approach to calibrating and selecting an optimal intervention threshold may help maximize the impact of readmission risk and other preventive interventions. To select a pragmatic means of calibrating predictive models that requires a minimum amount of validation data and that performs well in practice. To evaluate the impact of miscalibration on utility and cost via clinical usefulness analyses. Observational, retrospective cohort study with electronic health record data from 120,000 inpatient admissions at an urban, academic center in Manhattan. The primary outcome was thirty-day readmission for three causes: all-cause, congestive heart failure, and chronic coronary atherosclerotic disease. Predictive modeling was performed via L1-regularized logistic regression. Calibration methods were compared including Platt Scaling, Logistic Calibration, and Prevalence Adjustment. Performance of predictive modeling and calibration was assessed via discrimination (c-statistic), calibration (Spiegelhalter Z-statistic, Root Mean Square Error [RMSE] of binned predictions, Sanders and Murphy Resolutions of the Brier Score, Calibration Slope and Intercept), and clinical usefulness (utility terms represented as costs). The amount of validation data necessary to apply each calibration algorithm was also assessed. C-statistics by diagnosis ranged from 0.7 for all-cause readmission to 0.86 (0.78-0.93) for congestive heart failure. 
Logistic Calibration and Platt Scaling performed best, and distinguishing them required analyzing multiple calibration metrics simultaneously, in particular Calibration Slopes and Intercepts. Clinical usefulness analyses provided optimal risk thresholds, which varied by reason for readmission, outcome prevalence, and calibration algorithm. Utility analyses also suggested maximum tolerable intervention costs, e.g., $1720 for all-cause readmissions based on a published readmission cost of $11,862. The choice of calibration method depends on the availability of validation data and on performance. Improperly calibrated models may contribute to higher intervention costs as measured via clinical usefulness. Decision-makers must understand the utilities or costs inherent in the use case at hand to assess usefulness; doing so yields the optimal risk threshold for triggering intervention, along with limits on intervention cost. Copyright © 2017 Elsevier Inc. All rights reserved.
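Platt scaling, one of the calibration methods compared above, fits a sigmoid mapping from raw model scores to calibrated probabilities. A bare-bones gradient-descent sketch (production implementations use regularized targets and a proper optimizer; the toy scores and labels in the test are made up):

```python
import math

def platt_scale(scores, labels, lr=0.1, iters=2000):
    """Fit p = sigmoid(a*s + b) to binary labels by gradient descent on log loss."""
    a, b = 1.0, 0.0
    n = len(scores)
    for _ in range(iters):
        ga = gb = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(-(a * s + b)))
            ga += (p - y) * s / n   # d(logloss)/da
            gb += (p - y) / n       # d(logloss)/db
        a -= lr * ga
        b -= lr * gb
    return a, b
```

The fitted (a, b) are then applied to held-out scores; a calibration slope near 1 and intercept near 0 on validation data indicate the recalibrated model reflects actual outcome prevalence.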
Interferometric Imaging Directly with Closure Phases and Closure Amplitudes
NASA Astrophysics Data System (ADS)
Chael, Andrew A.; Johnson, Michael D.; Bouman, Katherine L.; Blackburn, Lindy L.; Akiyama, Kazunori; Narayan, Ramesh
2018-04-01
Interferometric imaging now achieves angular resolutions as fine as ∼10 μas, probing scales that are inaccessible to single telescopes. Traditional synthesis imaging methods require calibrated visibilities; however, interferometric calibration is challenging, especially at high frequencies. Nevertheless, most studies present only a single image of their data after a process of “self-calibration,” an iterative procedure where the initial image and calibration assumptions can significantly influence the final image. We present a method for efficient interferometric imaging directly using only closure amplitudes and closure phases, which are immune to station-based calibration errors. Closure-only imaging provides results that are as noncommittal as possible and allows for reconstructing an image independently from separate amplitude and phase self-calibration. While closure-only imaging eliminates some image information (e.g., the total image flux density and the image centroid), this information can be recovered through a small number of additional constraints. We demonstrate that closure-only imaging can produce high-fidelity results, even for sparse arrays such as the Event Horizon Telescope, and that the resulting images are independent of the level of systematic amplitude error. We apply closure imaging to VLBA and ALMA data and show that it is capable of matching or exceeding the performance of traditional self-calibration and CLEAN for these data sets.
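The station-gain immunity of closure phases is easy to verify numerically: each baseline visibility picks up the gains of its two stations, but around a closed triangle those gain phases cancel in the bispectrum. A small sketch with made-up visibilities and gains:

```python
import cmath

def closure_phase(v12, v23, v31):
    """Closure phase (radians): the phase of the bispectrum around a triangle."""
    return cmath.phase(v12 * v23 * v31)

# Made-up true visibilities on a station triangle
v12, v23, v31 = 1.0 + 0.5j, 0.8 - 0.3j, -0.2 + 0.9j

# Complex station gains; baseline ij observes g_i * conj(g_j) * v_ij
g1 = 1.2 * cmath.exp(0.4j)
g2 = 0.7 * cmath.exp(-1.1j)
g3 = 1.5 * cmath.exp(2.0j)
corrupted = (g1 * g2.conjugate() * v12,
             g2 * g3.conjugate() * v23,
             g3 * g1.conjugate() * v31)
```

The corrupted bispectrum equals |g1|²|g2|²|g3|² times the true one, so its phase is untouched by arbitrary station-based phase errors; closure amplitudes cancel the gain amplitudes analogously.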
Augmenting epidemiological models with point-of-care diagnostics data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pullum, Laura L.; Ramanathan, Arvind; Nutaro, James J.
2016-04-20
Although adoption of newer Point-of-Care (POC) diagnostics is increasing, there is a significant challenge using POC diagnostics data to improve epidemiological models. In this work, we propose a method to process zip-code level POC datasets and apply these processed data to calibrate an epidemiological model. We specifically develop a calibration algorithm using simulated annealing and calibrate a parsimonious equation-based model of modified Susceptible-Infected-Recovered (SIR) dynamics. The results show that parsimonious models are remarkably effective in predicting the dynamics observed in the number of infected patients and our calibration algorithm is sufficiently capable of predicting peak loads observed in POC diagnostics data while staying within reasonable and empirical parameter ranges reported in the literature. Additionally, we explore the future use of the calibrated values by testing the correlation between peak load and population density from Census data. Our results show that linearity assumptions for the relationships among various factors can be misleading, therefore further data sources and analysis are needed to identify relationships between additional parameters and existing calibrated ones. As a result, calibration approaches such as ours can determine the values of newly added parameters along with existing ones and enable policy-makers to make better multi-scale decisions.
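A parsimonious SIR model calibrated by simulated annealing, as described above, can be sketched in a few dozen lines. The Euler step size, proposal width, and linear cooling schedule here are illustrative choices, not the authors' exact algorithm:

```python
import math
import random

def sir_infected(beta, gamma, s0, i0, days, dt=0.1):
    """Euler-integrated SIR; returns the infected fraction sampled daily."""
    s, i = s0, i0
    out = []
    for _ in range(days):
        for _ in range(int(1 / dt)):
            new_inf = beta * s * i * dt
            new_rec = gamma * i * dt
            s -= new_inf
            i += new_inf - new_rec
        out.append(i)
    return out

def calibrate(observed, s0, i0, days, iters=2000, seed=0):
    """Simulated annealing over (beta, gamma); returns best-seen pair and cost."""
    rng = random.Random(seed)
    def cost(p):
        sim = sir_infected(p[0], p[1], s0, i0, days)
        return sum((a - b) ** 2 for a, b in zip(sim, observed))
    cur = best = (0.5, 0.2)
    cur_c = best_c = cost(cur)
    for k in range(iters):
        temp = max(1e-6, 0.01 * (1.0 - k / iters))  # linear cooling
        cand = (max(1e-3, cur[0] + rng.gauss(0.0, 0.05)),
                max(1e-3, cur[1] + rng.gauss(0.0, 0.05)))
        c = cost(cand)
        # accept downhill moves always; uphill moves with Boltzmann probability
        if c < cur_c or rng.random() < math.exp(-(c - cur_c) / temp):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand, c
    return best, best_c
```

Tracking the best-seen parameters guarantees the returned fit is never worse than the starting guess, even though the annealer occasionally accepts uphill moves while exploring.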
Photometric Calibration of Consumer Video Cameras
NASA Technical Reports Server (NTRS)
Suggs, Robert; Swift, Wesley, Jr.
2007-01-01
Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used).
To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to analyze. The light source used to generate the calibration images is an artificial variable star comprising a Newtonian collimator illuminated by a light source modulated by a rotating variable neutral- density filter. This source acts as a point source, the brightness of which varies at a known rate. A video camera to be calibrated is aimed at this source. Fixed neutral-density filters are inserted in or removed from the light path as needed to make the video image of the source appear to fluctuate between dark and saturated bright. The resulting video-image data are analyzed by use of custom software that determines the integrated signal in each video frame and determines the system response curve (measured output signal versus input brightness). These determinations constitute the calibration, which is thereafter used in automatic, frame-by-frame processing of the data from the video images to be analyzed.
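The end product of such a calibration is a measured-signal-versus-brightness response curve that is inverted when analyzing science frames. A minimal sketch of that inversion by linear interpolation over a monotonic calibration table; the table values below are made up to mimic a saturating camera response:

```python
from bisect import bisect_left

def invert_response(signal, cal_signal, cal_brightness):
    """Map a measured signal back to source brightness via a calibration curve.

    cal_signal must be sorted ascending; out-of-range signals are clamped.
    """
    if signal <= cal_signal[0]:
        return cal_brightness[0]
    if signal >= cal_signal[-1]:
        return cal_brightness[-1]
    j = bisect_left(cal_signal, signal)
    t = (signal - cal_signal[j - 1]) / (cal_signal[j] - cal_signal[j - 1])
    return cal_brightness[j - 1] + t * (cal_brightness[j] - cal_brightness[j - 1])

# Made-up saturating response: signal flattens out at high brightness
brightness = [0.0, 1.0, 2.0, 4.0, 8.0]
signal = [0.0, 90.0, 160.0, 230.0, 255.0]
```

Because the table is built from the same end-to-end signal chain as the science frames, nonlinearity and recording distortions are folded into the lookup rather than modelled explicitly.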
NASA Astrophysics Data System (ADS)
Yatom, Shurik; Luo, Yuchen; Xiong, Qing; Bruggeman, Peter J.
2017-10-01
Gas-phase non-equilibrium plasma jets containing water vapor are of growing interest for many applications. In this manuscript, we report a detailed study of an atmospheric pressure nanosecond pulsed Ar + 0.26% H2O plasma jet. The plasma jet operates in an atmospheric pressure air surrounding but is shielded with a coaxial argon flow to limit air diffusion into the jet effluent core. The jet impinges on a metal plate electrode and produces a stable plasma filament (transient spark) between the needle electrode in the jet and the metal plate. The stable plasma filament is characterized by spatially and temporally resolved electrical and optical diagnostics, including Rayleigh scattering, Stark broadening of the hydrogen Balmer lines, and two-photon absorption laser-induced fluorescence (TaLIF), which yield the gas temperature, the electron density, and the atomic hydrogen density, respectively. Electron densities and atomic hydrogen densities up to 5 × 10²² m⁻³ and 2 × 10²² m⁻³ have been measured. This shows that atomic hydrogen is one of the main species in high-density Ar-H2O plasmas. The gas temperature does not exceed 550 K in the core of the plasma. To enable in situ calibration of the H TaLIF at atmospheric pressure, a previously published O density calibration scheme is extended with a correction for the line profiles through overlap integrals, as required for H TaLIF. The H TaLIF line width, which is due to collisional broadening, shows the same trend as the neutral density obtained by Rayleigh scattering, suggesting the possibility of using this technique to probe neutral gas densities in situ.
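Electron density is commonly recovered from a measured Stark width through a power-law relation of the form n_e = n_ref (Δλ/Δλ_ref)^p. The coefficients below are placeholders for illustration only, not the calibrated values used in the study; real analyses take them from tabulated Stark-broadening parameters for the specific Balmer line:

```python
def electron_density_from_stark(fwhm_nm, fwhm_ref_nm, n_ref, p):
    """Invert a power-law Stark-width relation for the electron density."""
    return n_ref * (fwhm_nm / fwhm_ref_nm) ** p

# Placeholder coefficients (NOT tabulated values): n_ref = 1e23 m^-3 at 1.0 nm, p = 1.5
ne = electron_density_from_stark(0.5, 1.0, 1e23, 1.5)
```

In practice the measured line profile must first be deconvolved from instrumental, Doppler, and van der Waals contributions before the Stark width is extracted.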
AN ALTERNATIVE CALIBRATION OF CR-39 DETECTORS FOR RADON DETECTION BEYOND THE SATURATION LIMIT.
Franci, Daniele; Aureli, Tommaso; Cardellini, Francesco
2016-12-01
Time-integrated measurements of indoor radon levels are commonly carried out using solid-state nuclear track detectors (SSNTDs), due to the numerous advantages offered by this radiation detection technique. However, the use of SSNTDs also presents some problems that may affect the accuracy of the results. The effect of overlapping tracks often results in underestimation of the detected track density, which reduces the counting efficiency with increasing radon exposure. This article addresses the effect of overlapping tracks by proposing an alternative calibration technique based on measuring the fraction of the detector surface covered by alpha tracks. The method has been tested against a set of Monte Carlo data and then applied to a set of experimental data collected at the radon chamber of the Istituto Nazionale di Metrologia delle Radiazioni Ionizzanti, at the ENEA centre in Casaccia, using CR-39 detectors. The method is shown to extend the detectable range of radon exposure far beyond the intrinsic limit imposed by the standard calibration based on track density. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
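The saturation from overlapping tracks can be modelled with a simple Poisson overlap relation: if N tracks of mean area a land at random positions, the covered surface fraction is F = 1 − exp(−aN), which remains invertible long after direct track counting saturates. A sketch of that relation and its inversion (the track-area value in the test is made up; it is not the paper's calibration):

```python
import math

def covered_fraction(track_density, mean_track_area):
    """Fraction of detector surface covered: F = 1 - exp(-a * N)."""
    return 1.0 - math.exp(-mean_track_area * track_density)

def track_density_from_coverage(fraction, mean_track_area):
    """Invert the coverage relation to recover the true areal track density N."""
    return -math.log(1.0 - fraction) / mean_track_area
```

At low exposure F ≈ aN and the two calibrations agree; at high exposure the logarithmic inversion keeps resolving N while a naive track count flattens out.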
Determination of Stark parameters by cross-calibration in a multi-element laser-induced plasma
NASA Astrophysics Data System (ADS)
Liu, Hao; Truscott, Benjamin S.; Ashfold, Michael N. R.
2016-05-01
We illustrate a Stark broadening analysis of the electron density Ne and temperature Te in a laser-induced plasma (LIP), using a model free of assumptions regarding local thermodynamic equilibrium (LTE). The method relies on Stark parameters that are likewise determined without assuming LTE, and which are often unknown or unavailable in the literature. Here, we demonstrate that the necessary values can be obtained in situ by cross-calibration between the spectral lines of different charge states, and even different elements, given determinations of Ne and Te based on appropriate parameters for at least one observed transition. This approach enables an essentially free choice of the species on which to base the analysis, extending the range over which these properties can be measured and giving improved access to low-density plasmas out of LTE. Because suitable tabulated values are available for several charge states of both Si and C, the example of a SiC LIP is taken to illustrate the consistency and accuracy of the procedure. The cross-calibrated Stark parameters are at least as reliable as values obtained by other means, offering a straightforward route to extending the literature in this area.
Computer Generated Hologram System for Wavefront Measurement System Calibration
NASA Technical Reports Server (NTRS)
Olczak, Gene
2011-01-01
Computer Generated Holograms (CGHs) have been used for some time to calibrate interferometers that require nulling optics. A typical scenario is the testing of aspheric surfaces with an interferometer placed near the paraxial center of curvature. Existing CGH technology suffers from a reduced capacity to calibrate middle and high spatial frequencies. The root cause of this shortcoming is as follows: the CGH is not placed at an image conjugate of the asphere due to limitations imposed by the geometry of the test and the allowable size of the CGH. This innovation provides a calibration system where the imaging properties in calibration can be made comparable to the test configuration. Thus, if the test is designed to have good imaging properties, then middle and high spatial frequency errors in the test system can be well calibrated. The improved imaging properties are provided by a rudimentary auxiliary optic as part of the calibration system. The auxiliary optic is simple to characterize and align to the CGH. Use of the auxiliary optic also reduces the size of the CGH required for calibration and the density of the lines required for the CGH. The resulting CGH is less expensive than the existing technology and has reduced write error and alignment error sensitivities. This CGH system is suitable for any kind of calibration using an interferometer when high spatial resolution is required. It is especially well suited for tests that include segmented optical components or large apertures.
Flux density calibration in diffuse optical tomographic systems.
Biswas, Samir Kumar; Rajan, Kanhirodan; Vasu, Ram M
2013-02-01
The solution of the forward equation that models the transport of light through a highly scattering tissue material in diffuse optical tomography (DOT) using the finite element method gives the flux density (Φ) at the nodal points of the mesh. The experimentally measured flux (U_measured) on the boundary over a finite surface area in a DOT system has to be corrected to account for the system transfer functions (R) of the various building blocks of the measurement system. We present two methods to compensate for the perturbations caused by R and estimate the true flux density (Φ) from U_measured. In the first approach, the measurement data from a homogeneous phantom (U_measured^homo) are used to calibrate the measurement system. The second scheme estimates the homogeneous phantom measurement using only the measurement from a heterogeneous phantom (U_measured^hetero), thereby eliminating the necessity of a homogeneous phantom; this is done by statistically averaging the data and redistributing it to the corresponding detector positions. The experiments carried out on tissue-mimicking phantoms with single and multiple inhomogeneities, a human hand, and a pork tissue phantom demonstrate the robustness of the approach.
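The first calibration scheme above reduces to a per-detector ratio correction: each heterogeneous measurement is scaled by the modelled-to-measured ratio obtained with the homogeneous phantom. A minimal sketch; the detector readings in the test are made up, and this is a simplification of the full transfer-function correction:

```python
def calibrate_flux_density(u_hetero, u_homo, phi_homo_model):
    """Per-detector ratio calibration: Phi_i ~ U_het_i * (Phi_model_i / U_homo_i).

    u_hetero:       measured boundary flux for the heterogeneous object
    u_homo:         measured boundary flux for the homogeneous phantom
    phi_homo_model: forward-model flux density for the homogeneous phantom
    """
    return [u * (p / h) for u, h, p in zip(u_hetero, u_homo, phi_homo_model)]
```

The ratio Phi_model_i / U_homo_i absorbs each detector channel's transfer function R, so the calibrated values are directly comparable with the finite-element forward model during reconstruction.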
Faraday-effect polarimeter-interferometer system for current density measurement on EAST
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, H. Q.; Jie, Y. X., E-mail: yx-jie@ipp.ac.cn; Zou, Z. Y.
2014-11-15
A multichannel far-infrared laser-based POlarimeter-INTerferometer (POINT) system utilizing the three-wave technique is under development for current density and electron density profile measurements in the EAST tokamak. Novel molybdenum retro-reflectors are mounted on the inner wall for the double-pass optical arrangement. A Digital Phase Detector with 250 kHz bandwidth, which will provide real-time Faraday rotation angle and density phase shift output, has been developed for use on the POINT system. Initial calibration indicates that the electron line-integrated density resolution is less than 5 × 10¹⁶ m⁻² (∼2°), and the Faraday rotation angle rms phase noise is <0.1°.
ERIC Educational Resources Information Center
Murrieta, Hector; Amerson, Gordon
2011-01-01
The purpose of this study was to validate the development and proposal of what the authors call STEMs (Standards Tests to Evaluate Mastery) and have defined them as calibrated classroom assessments that increase student motivation and provide authentic evaluation of student learning. Theoretical and empirical research on classroom assessment and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farmer, J; Brown, B; Bayles, B
The overall goal is to develop high-performance corrosion-resistant iron-based amorphous-metal coatings for prolonged trouble-free use in very aggressive environments: seawater & hot geothermal brines. The specific technical objectives are: (1) Synthesize Fe-based amorphous-metal coating with corrosion resistance comparable/superior to Ni-based Alloy C-22; (2) Establish processing parameter windows for applying and controlling coating attributes (porosity, density, bonding); (3) Assess possible cost savings through substitution of Fe-based material for more expensive Ni-based Alloy C-22; (4) Demonstrate practical fabrication processes; (5) Produce quality materials and data with complete traceability for nuclear applications; and (6) Develop, validate and calibrate computational models to enable life prediction and process design.
NASA Technical Reports Server (NTRS)
Faulkner, K. G.; Gluer, C. C.; Grampp, S.; Genant, H. K.
1993-01-01
Quantitative computed tomography (QCT) has been shown to be a precise and sensitive method for evaluating spinal bone mineral density (BMD) and skeletal response to aging and therapy. Precise and accurate determination of BMD using QCT requires a calibration standard to compensate for and reduce the effects of beam-hardening artifacts and scanner drift. The first standards were based on dipotassium hydrogen phosphate (K2HPO4) solutions. Recently, several manufacturers have developed stable solid calibration standards based on calcium hydroxyapatite (CHA) in water-equivalent plastic. Due to differences in attenuating properties of the liquid and solid standards, the calibrated BMD values obtained with each system do not agree. In order to compare and interpret the results obtained on both systems, cross-calibration measurements were performed in phantoms and patients using the University of California San Francisco (UCSF) liquid standard and the Image Analysis (IA) solid standard on the UCSF GE 9800 CT scanner. From the phantom measurements, a highly linear relationship was found between the liquid- and solid-calibrated BMD values. No influence on the cross-calibration due to simulated variations in body size or vertebral fat content was seen, though a significant difference in the cross-calibration was observed between scans acquired at 80 and 140 kVp. From the patient measurements, a linear relationship between the liquid (UCSF) and solid (IA) calibrated values was derived for GE 9800 CT scanners at 80 kVp (IA = [1.15 x UCSF] - 7.32).
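Since the reported cross-calibration is a single linear regression, applying it is a one-line conversion. A sketch using the relation quoted in the abstract (valid, per the authors, only for GE 9800 scans at 80 kVp; the function name is ours):

```python
def ucsf_to_ia(bmd_ucsf):
    """Map liquid-calibrated (UCSF) spinal BMD to the solid (IA) scale
    using the patient-derived regression reported for GE 9800 CT scans
    at 80 kVp: IA = 1.15 * UCSF - 7.32. Units follow the input
    (e.g. mg/cm^3)."""
    return 1.15 * bmd_ucsf - 7.32

print(round(ucsf_to_ia(100.0), 2))   # 107.68
```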
Bonifazi, Giuseppe; Capobianco, Giuseppe; Serranti, Silvia
2018-06-05
The aim of this work was to recognize different polymer flakes from mixed plastic waste through an innovative hierarchical classification strategy based on hyperspectral imaging, with particular reference to low-density polyethylene (LDPE) and high-density polyethylene (HDPE). A plastic waste composition assessment, including LDPE and HDPE identification, may help to define optimal recycling strategies for product quality control. Correct handling of plastic waste is essential for its further "sustainable" recovery, maximizing the sorting performance in particular for plastics with similar characteristics, such as LDPE and HDPE. Five different plastic waste samples were chosen for the investigation: polypropylene (PP), LDPE, HDPE, polystyrene (PS) and polyvinyl chloride (PVC). A calibration dataset was realized utilizing the corresponding virgin polymers. Hyperspectral imaging in the short-wave infrared range (1000-2500 nm) was thus applied to evaluate the different plastic spectral attributes in order to perform their recognition/classification. After exploring polymer spectral differences by principal component analysis (PCA), a hierarchical partial least squares discriminant analysis (PLS-DA) model was built allowing the five different polymers to be recognized. The proposed methodology, based on hierarchical classification, is very powerful and fast, allowing recognition of the five different polymers in a single step.
NASA Astrophysics Data System (ADS)
Morales, Abed; Quiroga, Aldo; Daued, Arturo; Cantero, Diana; Sequeira, Francisco; Castro, Luis Carlos; Becerra, Luis Omar; Salazar, Manuel; Vega, Maria
2017-01-01
A supplementary comparison was made between SIM laboratories concerning the calibration of four hydrometers within the range of 600 kg/m3 to 2000 kg/m3. The main objective of the comparison was to evaluate the degrees of equivalence of SIM NMIs in the calibration of hydrometers of high accuracy. The participating NMIs were: CENAM, IBMETRO, INEN, INDECOPI, INM, INTN and LACOMET. Main text: To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
NASA Technical Reports Server (NTRS)
Murphy, J.; Butlin, T.; Duff, P.; Fitzgerald, A.
1984-01-01
Observations of raw image data, raw radiometric calibration data, and background measurements extracted from the raw data streams on high density tape reveal major shortcomings in a technique proposed by the Canadian Center for Remote Sensing in 1982 for the radiometric correction of TM data. Results are presented which correlate measurements of the DC background with variations in both image data background and calibration samples. The effect on both raw data and data corrected using the earlier proposed technique is explained and the correction required for these factors as a function of individual scan line number for each detector is described. How the revised technique can be incorporated into an operational environment is demonstrated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopatiuk-Tirpak, O.; Langen, K. M.; Meeks, S. L.
2008-09-15
The performance of a next-generation optical computed tomography scanner (OCTOPUS-5X) is characterized in the context of three-dimensional gel dosimetry. Large-volume (2.2 L), muscle-equivalent, radiation-sensitive polymer gel dosimeters (BANG-3) were used. Improvements in scanner design leading to shorter acquisition times are discussed. The spatial resolution, detectable absorbance range, and reproducibility are assessed. An efficient method for calibrating gel dosimeters using the depth-dose relationship is applied, with photon- and electron-based deliveries yielding equivalent results. A procedure involving a preirradiation scan was used to reduce the edge artifacts in reconstructed images, thereby increasing the useful cross-sectional area of the dosimeter by nearly a factor of 2. Dose distributions derived from optical density measurements using the calibration coefficient show good agreement with the treatment planning system simulations and radiographic film measurements. The feasibility of use for motion (four-dimensional) dosimetry is demonstrated on an example comparing dose distributions from static and dynamic delivery of a single-field photon plan. The capability to visualize three-dimensional dose distributions is also illustrated.
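The conversion from optical density to dose described above can be sketched as a pixel-wise operation: subtract the pre-irradiation scan, then scale by the depth-dose-derived calibration coefficient. A minimal illustration with assumed values (the coefficient and the maps are hypothetical, not from the study):

```python
# Illustrative sketch of optical-CT gel dosimetry readout.
# A pre-irradiation scan is subtracted to remove background and edge
# artifacts, and an assumed calibration coefficient k converts the
# change in optical density to absorbed dose.

def dose_map(od_post, od_pre, k):
    """Dose (Gy) from pre/post optical density maps; k in Gy per OD unit."""
    return [[k * (post - pre) for post, pre in zip(rp, rq)]
            for rp, rq in zip(od_post, od_pre)]

od_pre  = [[0.10, 0.12], [0.11, 0.10]]   # scanner/edge background (assumed)
od_post = [[0.30, 0.52], [0.41, 0.10]]
k = 5.0                                   # hypothetical calibration coefficient

print(dose_map(od_post, od_pre, k))       # ~[[1.0, 2.0], [1.5, 0.0]] Gy
```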
NASA Astrophysics Data System (ADS)
Gustafsson, Johan; Brolin, Gustav; Cox, Maurice; Ljungberg, Michael; Johansson, Lena; Sjögreen Gleisner, Katarina
2015-11-01
A computer model of a patient-specific clinical 177Lu-DOTATATE therapy dosimetry system is constructed and used for investigating the variability of renal absorbed dose and biologically effective dose (BED) estimates. As patient models, three anthropomorphic computer phantoms coupled to a pharmacokinetic model of 177Lu-DOTATATE are used. Aspects included in the dosimetry-process model are the gamma-camera calibration via measurement of the system sensitivity, selection of imaging time points, generation of mass-density maps from CT, SPECT imaging, volume-of-interest delineation, calculation of absorbed-dose rate via a combination of local energy deposition for electrons and Monte Carlo simulations of photons, curve fitting and integration to absorbed dose and BED. By introducing variabilities in these steps the combined uncertainty in the output quantity is determined. The importance of different sources of uncertainty is assessed by observing the decrease in standard deviation when removing a particular source. The obtained absorbed dose and BED standard deviations are approximately 6% and slightly higher if considering the root mean square error. The most important sources of variability are the compensation for partial volume effects via a recovery coefficient and the gamma-camera calibration via the system sensitivity.
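The importance-ranking step, removing one variability source and observing the drop in standard deviation, can be sketched with a toy Monte Carlo chain. Everything here (the dose chain, the sigma values) is an assumed stand-in for the paper's far more detailed dosimetry model:

```python
# Toy variance-budget sketch: propagate several uncertainty sources
# through a simplified dosimetry chain by Monte Carlo, then gauge each
# source's importance by the drop in relative standard deviation when
# that source is frozen at its nominal value.
import random, statistics

def toy_absorbed_dose(calib, recovery, fit):
    return 10.0 * calib * recovery * fit   # nominal chain, arbitrary units

def run(freeze=None, n=20000, seed=1):
    rng = random.Random(seed)
    sigmas = {"calib": 0.02, "recovery": 0.05, "fit": 0.01}  # assumed
    out = []
    for _ in range(n):
        draw = {k: 1.0 if k == freeze else rng.gauss(1.0, s)
                for k, s in sigmas.items()}
        out.append(toy_absorbed_dose(draw["calib"], draw["recovery"], draw["fit"]))
    return statistics.stdev(out) / statistics.mean(out)

full = run()
for src in ("calib", "recovery", "fit"):
    # the biggest drop marks the dominant source (here "recovery")
    print(src, "removed ->", round(run(freeze=src), 4))
```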
Planck 2013 results. VIII. HFI photometric calibration and mapmaking
NASA Astrophysics Data System (ADS)
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bertincourt, B.; Bielewicz, P.; Bobin, J.; Bock, J. J.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bridges, M.; Bucher, M.; Burigana, C.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chary, R.-R.; Chen, X.; Chiang, H. C.; Chiang, L.-Y.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dupac, X.; Efstathiou, G.; Enßlin, T. A.; Eriksen, H. K.; Filliard, C.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Galeotta, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Héraud, Y.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Hanson, D.; Harrison, D.; Helou, G.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Laureijs, R. J.; Lawrence, C. R.; Le Jeune, M.; Lellouch, E.; Leonardi, R.; Leroy, C.; Lesgourgues, J.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maffei, B.; Mandolesi, N.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Matthai, F.; Maurin, L.; Mazzotta, P.; McGehee, P.; Meinhold, P. 
R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Moreno, R.; Morgante, G.; Mortlock, D.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Osborne, S.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rusholme, B.; Santos, D.; Savini, G.; Scott, D.; Shellard, E. P. S.; Spencer, L. D.; Starck, J.-L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sunyaev, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Techene, S.; Terenzi, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; Yvon, D.; Zacchei, A.; Zonca, A.
2014-11-01
This paper describes the methods used to produce photometrically calibrated maps from the Planck High Frequency Instrument (HFI) cleaned, time-ordered information. HFI observes the sky over a broad range of frequencies, from 100 to 857 GHz. To obtain the best calibration accuracy over such a large range, two different photometric calibration schemes have to be used. The 545 and 857 GHz data are calibrated by comparing flux-density measurements of Uranus and Neptune with models of their atmospheric emission. The lower frequencies (below 353 GHz) are calibrated using the solar dipole. A component of this anisotropy is time-variable, owing to the orbital motion of the satellite in the solar system. Photometric calibration is thus tightly linked to mapmaking, which also addresses low-frequency noise removal. By comparing observations taken more than one year apart in the same configuration, we have identified apparent gain variations with time. These variations are induced by non-linearities in the read-out electronics chain. We have developed an effective correction to limit their effect on calibration. We present several methods to estimate the precision of the photometric calibration. We distinguish relative uncertainties (between detectors, or between frequencies) and absolute uncertainties. Absolute uncertainties lie in the range from 0.54% to 10% from 100 to 857 GHz. We describe the pipeline used to produce the maps from the HFI timelines, based on the photometric calibration parameters, and the scheme used to set the zero level of the maps a posteriori. We also discuss the cross-calibration between HFI and the SPIRE instrument on board Herschel. Finally we summarize the basic characteristics of the set of HFI maps included in the 2013 Planck data release.
Multi-Dimensional Calibration of Impact Dynamic Models
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Annett, Martin S.; Jackson, Karen E.
2011-01-01
NASA Langley, under the Subsonic Rotary Wing Program, recently completed two helicopter tests in support of an in-house effort to study crashworthiness. As part of this effort, work is on-going to investigate model calibration approaches and calibration metrics for impact dynamics models. Model calibration of impact dynamics problems has traditionally assessed model adequacy by comparing time histories from analytical predictions to test data at only a few critical locations. Although this approach provides for a direct measure of the model predictive capability, overall system behavior is only qualitatively assessed using full vehicle animations. In order to understand the spatial and temporal relationships of impact loads as they migrate throughout the structure, a more quantitative approach is needed. In this work, impact shapes derived from simulated time history data are used to recommend sensor placement and to assess model adequacy using time-based metrics and multi-dimensional orthogonality metrics. An approach for model calibration is presented that includes metric definitions, uncertainty bounds, parameter sensitivity, and numerical optimization to estimate parameters to reconcile test with analysis. The process is illustrated using simulated experiment data.
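One common multi-dimensional orthogonality measure for comparing shape vectors is the modal assurance criterion (MAC); the abstract does not state which metric was used, so this is an assumed illustration:

```python
# MAC-style orthogonality metric between two shape vectors:
# values near 1 mean the shapes are parallel, near 0 orthogonal.

def mac(u, v):
    num = sum(a * b for a, b in zip(u, v)) ** 2
    den = sum(a * a for a in u) * sum(b * b for b in v)
    return num / den

test_shape     = [1.0, 2.0, 3.0]   # hypothetical impact shape from test
analysis_shape = [1.1, 1.9, 3.2]   # hypothetical shape from analysis

print(round(mac(test_shape, analysis_shape), 3))   # 0.997: shapes agree well
```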
NASA Astrophysics Data System (ADS)
Liu, X.; Kowalewski, M. G.; Janz, S. J.; Bhartia, P. K.; Chance, K.; Krotkov, N. A.; Pickering, K. E.; Crawford, J. H.
2011-12-01
The DISCOVER-AQ (Deriving Information on Surface Conditions from Column and Vertically Resolved Observations Relevant to Air Quality) mission has just finished its first flight campaign in the Baltimore-Washington D.C. area in July 2011. The ACAM, flown on board the NASA UC-12 aircraft, includes two spectrographs covering the spectral region 304-900 nm and a high-definition video camera, and is expected to provide column measurements of several important air quality trace gases and aerosols for the DISCOVER-AQ mission. The quick-look results for NO2 have been shown to be very useful in capturing the strong spatiotemporal variability of NO2. Preliminary fitting of UV/Visible spectra has shown that ACAM measurements have adequate signal-to-noise ratio to measure the trace gases O2, NO2, HCHO, and maybe SO2 and CHOCHO, at individual pixel resolution, although a great deal of effort is needed to improve the instrument calibration and derive a proper reference spectrum for retrieving absolute trace gas column densities. In this study, we present analysis of ACAM instrument calibration including slit function, wavelength registration, and radiometric calibration for both nadir-viewing and zenith-sky measurements. Based on this analysis, an irradiance reference spectrum at ACAM resolution will be derived from a high-resolution reference spectrum with additional correction to account for instrument calibration. Using the derived reference spectrum and/or the measured zenith-sky measurements, we will perform non-linear least squares fitting to investigate the retrievals of slant column densities of these trace gases from ACAM measurements, and also use an optimal-estimation-based algorithm including full radiative transfer calculations to derive the vertical column densities of these trace gases. The initial results will be compared with available in-situ and ground-based measurements taken during the DISCOVER-AQ campaign.
The local interstellar helium density - Corrected
NASA Technical Reports Server (NTRS)
Freeman, J.; Paresce, F.; Bowyer, S.
1979-01-01
An upper bound for the number density of neutral helium in the local interstellar medium of 0.004 ± 0.0022 per cu cm was previously reported, based on extreme-ultraviolet telescope observations at 584 A made during the 1975 Apollo-Soyuz Test Project. A variety of evidence is found which indicates that the 584-A sensitivity of the instrument declined by a factor of 2 between the last laboratory calibration and the time of the measurements. The upper bound on the helium density is therefore revised to 0.0089 ± 0.005 per cu cm.
Sunku, Raghavendra; Roopesh, R; Kancherla, Pavan; Perumalla, Kiran Kumar; Yudhistar, Palla Venkata; Reddy, V Sridhar
2011-11-01
The objective of this study was to evaluate density changes around the apices of teeth during orthodontic treatment by using digital subtraction radiography to measure the densities around six teeth (maxillary central incisors, lateral incisors, and canines) before and after orthodontic treatment in 36 patients and also assess treatment variables and their correlation with root resorption. A total of 36 consecutive patient files were selected initially. The selected patients presented with a class I or II relationship and were treated with or without premolar extractions and fixed appliances. Some class II patients were treated additionally with extraoral forces or functional appliances. External apical root resorption (EARR) per tooth in millimeters was calculated and was also expressed as a percentage of the original root length. Image reconstruction and subtraction were performed using the software Regeemy Image Registration and Mosaicing (version 0.2.43-RCB, DPI-INPE, Sao Jose dos Campos, Sao Paulo, Brazil) by a single operator. A region of interest (ROI) was defined in the apical third of the root and density calibration was made in Image J® using enamel (gray value = 255) as reference in the same image. The mean gray values in the ROIs were reflective of the change in the density values between the two images. The root resorption of the tooth and the factors of malocclusion were analyzed with a one-way ANOVA. An independent t-test was performed to compare the mean amount of resorption between males and females and between extraction and nonextraction cases. The density changes after orthodontic treatment were analyzed using the Wilcoxon signed-rank test. In addition, the density changes in different teeth were analyzed using the Kruskal-Wallis test. The cut-off for statistical significance was a p-value of 0.05. All the statistical analyses were carried out using SPSS (version 13.0 for Windows, Chicago, IL, USA). 
Gender, the age at which treatment was started and Angle's classification were not statistically related to observed root resorption. The mean percentage density reduction as assessed by DSR was greatest in both central incisors: 27.2 and 25.2% in the upper-right and upper-left central incisors, respectively, followed by the upper-right and upper-left canine teeth (23.5 and 21.0%) and then the upper-right and upper-left lateral incisors (19.1 and 17.4%). Tooth extraction prior to treatment initiation and the duration of orthodontic treatment were positively correlated with the amount of root resorption. DSR is useful for evaluating density changes around teeth during orthodontic treatment. The density around the apices of teeth reduced significantly after the application of orthodontic forces during treatment. Assessment of density changes on treatment radiographs of patients undergoing orthodontic therapy may help in the monitoring of external apical root resorption during the course of treatment.
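The enamel-referenced density measurement can be sketched as follows, with hypothetical gray values; the normalisation to enamel (gray value 255) in the same image is the calibration step described above:

```python
# Illustrative sketch (assumed values): apical-ROI gray values are first
# calibrated against enamel (set to 255) in the same image, then the
# percent density reduction between pre- and post-treatment images is
# computed from the calibrated means.

def mean(vals):
    return sum(vals) / len(vals)

def percent_density_reduction(roi_pre, roi_post, enamel_pre, enamel_post):
    pre  = mean(roi_pre)  * 255.0 / enamel_pre    # enamel-normalised
    post = mean(roi_post) * 255.0 / enamel_post
    return 100.0 * (pre - post) / pre

roi_pre, roi_post = [120, 130, 125], [95, 100, 90]   # hypothetical ROI pixels
print(round(percent_density_reduction(roi_pre, roi_post, 250.0, 240.0), 1))  # 20.8
```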
Tice, Jeffrey A.; Cummings, Steven R.; Smith-Bindman, Rebecca; Ichikawa, Laura; Barlow, William E.; Kerlikowske, Karla
2009-01-01
Background Current models for assessing breast cancer risk are complex and do not include breast density, a strong risk factor for breast cancer that is routinely reported with mammography. Objective To develop and validate an easy-to-use breast cancer risk prediction model that includes breast density. Design Empirical model based on Surveillance, Epidemiology, and End Results incidence, and relative hazards from a prospective cohort. Setting Screening mammography sites participating in the Breast Cancer Surveillance Consortium. Patients 1 095 484 women undergoing mammography who had no previous diagnosis of breast cancer. Measurements Self-reported age, race or ethnicity, family history of breast cancer, and history of breast biopsy. Community radiologists rated breast density by using 4 Breast Imaging Reporting and Data System categories. Results During 5.3 years of follow-up, invasive breast cancer was diagnosed in 14 766 women. The breast density model was well calibrated overall (expected–observed ratio, 1.03 [95% CI, 0.99 to 1.06]) and in racial and ethnic subgroups. It had modest discriminatory accuracy (concordance index, 0.66 [CI, 0.65 to 0.67]). Women with low-density mammograms had 5-year risks less than 1.67% unless they had a family history of breast cancer and were older than age 65 years. Limitation The model has only modest ability to discriminate between women who will develop breast cancer and those who will not. Conclusion A breast cancer prediction model that incorporates routinely reported measures of breast density can estimate 5-year risk for invasive breast cancer. Its accuracy needs to be further evaluated in independent populations before it can be recommended for clinical use. PMID:18316752
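The two performance measures reported, the expected-observed ratio for calibration and the concordance index for discrimination, can be computed as below; the data here are synthetic, not from the cohort:

```python
# Minimal sketch of the two risk-model performance measures.

def expected_observed(pred_risks, outcomes):
    """Calibration: predicted events divided by observed events."""
    return sum(pred_risks) / sum(outcomes)

def concordance_index(pred_risks, outcomes):
    """Discrimination: fraction of case/non-case pairs where the case
    received the higher predicted risk (ties count 0.5)."""
    conc = pairs = 0
    for i, (ri, yi) in enumerate(zip(pred_risks, outcomes)):
        for rj, yj in zip(pred_risks[i + 1:], outcomes[i + 1:]):
            if yi != yj:
                pairs += 1
                hi = ri if yi == 1 else rj      # risk assigned to the case
                lo = rj if yi == 1 else ri
                conc += 1.0 if hi > lo else 0.5 if hi == lo else 0.0
    return conc / pairs

risks    = [0.05, 0.20, 0.10, 0.40]   # synthetic 5-year risks
outcomes = [0,    1,    0,    1]      # synthetic observed events
print(expected_observed(risks, outcomes))    # ~0.375: under-prediction here
print(concordance_index(risks, outcomes))    # 1.0: cases ranked above non-cases
```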
Calibration of groundwater vulnerability mapping using the generalized reduced gradient method.
Elçi, Alper
2017-12-01
Groundwater vulnerability assessment studies are essential in water resources management. Overlay-and-index methods such as DRASTIC are widely used for mapping groundwater vulnerability; however, these methods suffer mainly from a subjective selection of model parameters. The objective of this study is to introduce a calibration procedure that results in a more accurate assessment of groundwater vulnerability. The improvement of the assessment is formulated as a parameter optimization problem using an objective function that is based on the correlation between actual groundwater contamination and vulnerability index values. The non-linear optimization problem is solved with the generalized-reduced-gradient (GRG) method, a numerical optimization algorithm. To demonstrate the applicability of the procedure, a vulnerability map for the Tahtali stream basin is calibrated using nitrate concentration data. The calibration procedure is easy to implement and aims to maximize the correlation between observed pollutant concentrations and groundwater vulnerability index values. The influence of each vulnerability parameter on the calculation of the vulnerability index is assessed by performing a single-parameter sensitivity analysis. Results of the sensitivity analysis show that all factors have an effect on the final vulnerability index. Calibration of the vulnerability map improves the correlation between index values and measured nitrate concentrations by 19%; the regression coefficient increases from 0.280 to 0.485. It is evident that the spatial distribution and the proportions of vulnerability class areas are significantly altered by the calibration process. Although the calibration method is demonstrated on the DRASTIC model, the approach is not specific to a certain model and can also be easily applied to other overlay-and-index methods.
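The calibration objective, choosing parameters to maximise the correlation between the vulnerability index and observed nitrate, can be illustrated with a deliberately tiny stand-in. The study used the GRG method over all DRASTIC parameters; this sketch only grid-searches one assumed weight to show the objective function:

```python
# Simplified stand-in for correlation-based vulnerability calibration:
# adjust the weight of one DRASTIC-style factor so that the Pearson
# correlation between the weighted index and observed nitrate is maximised.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

depth    = [3.0, 1.0, 4.0, 2.0]       # hypothetical factor ratings per cell
recharge = [2.0, 5.0, 1.0, 4.0]
# "observed" nitrate, generated here as depth + 4*recharge so the
# recoverable weight is known:
nitrate  = [11.0, 21.0, 8.0, 18.0]

best = max((w / 10.0 for w in range(0, 51)),
           key=lambda w: pearson([d + w * r for d, r in zip(depth, recharge)],
                                 nitrate))
print(best)   # 4.0: the weight used to generate the data
```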
The effect of rainfall measurement uncertainties on rainfall-runoff processes modelling.
Stransky, D; Bares, V; Fatka, P
2007-01-01
Rainfall data are a crucial input for various tasks concerning the wet weather period. Nevertheless, their measurement is affected by random and systematic errors that cause an underestimation of the rainfall volume. Therefore, the general objective of the presented work was to assess the credibility of measured rainfall data and to evaluate the effect of measurement errors on urban drainage modelling tasks. Within the project, a calibration methodology for the tipping bucket rain gauge (TBR) was defined and assessed in terms of uncertainty analysis. A set of 18 TBRs was calibrated and the results were compared to the previous calibration, which enabled an evaluation of the ageing of the TBRs. A propagation of calibration and other systematic errors through the rainfall-runoff model was performed on an experimental catchment. It was found that TBR calibration is important mainly for tasks connected with the assessment of peak values and high-flow durations. The omission of calibration leads to up to 30% underestimation, and the effect of other systematic errors can add a further 15%. TBR calibration should be done every two years in order to keep pace with the ageing of the TBR mechanics. Further, the authors recommend adjusting the dynamic test duration in proportion to the generated rainfall intensity.
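The role of the calibration can be sketched as a correction applied to tip-derived intensities; both the bucket depth and the calibration curve below are assumed illustrative values, not the paper's results:

```python
# Hypothetical sketch of why TBR calibration matters: tipping-bucket
# gauges systematically undercount at high rain intensities, so a
# dynamic calibration curve (an assumed linear form, not from the paper)
# corrects the raw tip-derived intensity before it feeds a runoff model.

BUCKET_MM = 0.2   # rainfall depth per tip (typical value, assumed)

def raw_intensity(tips, minutes):
    """Uncorrected rain intensity in mm/h from a tip count."""
    return tips * BUCKET_MM * 60.0 / minutes

def calibrated_intensity(i_raw):
    # assumed calibration: relative underestimation grows with intensity
    return i_raw * (1.0 + 0.002 * i_raw)

i = raw_intensity(tips=50, minutes=10)
print(round(i, 1), "->", round(calibrated_intensity(i), 1))   # 60.0 -> 67.2
```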
NASA Astrophysics Data System (ADS)
Burchell, M. J.; Kearsley, A. T.; Wozniakiewicz, P. J.; Hörz, F.; Borg, J.; Graham, G. A.; Leroux, H.; Bridges, J. C.; Bland, P. A.; Bradley, J. P.; Dai, Z. R.; Teslich, N.; See, T.; Warren, J.; Bastien, R.; Hoppe, P.; Heck, P. R.; Huth, J.; Stadermann, F. J.; Floss, C.; Marhas, K.; Stephan, T.; Leitner, J.; Green, S. F.
2007-08-01
The NASA Stardust mission (1) to comet 81P/Wild-2 returned to Earth in January 2006 carrying a cargo of dust captured intact in aerogel and as residue-rich craters in aluminium foils (2). Although the aerogel (and its content of dust grains) has gathered most attention, the foils have also been subject to extensive analysis. Many groups contributed to the dimensional characterization of representative populations of foil craters in the Preliminary Examination; combined with a laboratory calibration, this yielded a particle size distribution of the dust encountered during the flyby of the comet (3). The calibration experiments will be described in this paper in detail. They involved using the two-stage light gas gun at the University of Kent (4) to impact Stardust-grade aluminium foils (from the same batch as used on Stardust) with projectiles at 6.1 km/s (the cometary encounter speed). A variety of projectiles were used to simulate possible cometary dust grain composition, morphology and structure. Prior to the return of Stardust, glass beads were used to provide the initial calibration (5), which was used to obtain the size distribution reported in (3). A range of projectiles of differing density were then used (6) to determine the sensitivity of the results to impactor density (also allowed for in (5)). Subsequently this work was significantly extended (7) to allow for a greater range of projectile densities and strengths. The work has now been extended further to allow for aggregate impactors which have a high individual grain density but a low overall bulk density. In addition, the results have been extended down in impactor size from the previous lower limit of 10 microns to 1.5 micron impactor diameter. The application of these new calibration results to the measurement of the cometary dust size distribution will be discussed. It will be shown that the changes are within the range originally presented in (3). 
The results will be compared to the dust size distribution obtained from the tracks in the aerogel and the combined results contrasted to those obtained with active impact detectors in real time during the cometary encounter (8, 9). At small dust grain sizes (a few microns and below) a significant discrepancy is seen which is still unexplained. References (1) Brownlee D.E. et al., J. Geophys. Res. 108, E10, 8111, 2003. (2) Brownlee D.E. et al., Science 314, 1711 - 1716, 2006. (3) Hörz F. et al., Science 314, 1716 - 1719, 2006. (4) Burchell M.J. et al., Meas. Sci. Technol. 10, 41 - 50, 1999. (5) Kearsley A.T. et al., MAPS 41, 167 - 180, 2006. (6) Kearsley A.T. et al., MAPS 42, 191 - 210, 2007. (7) Kearsley A.T. et al., MAPS submitted, 2007. (8) Tuzzolino A.J. et al., Science 304, 1776 - 1780. (9) Green, S.F. et al., J. Geophys. Res. 109, E12S04, 2004.
NASA Astrophysics Data System (ADS)
Lv, Hongkui; He, Huihai; Sheng, Xiangdong; Liu, Jia; Chen, Songzhan; Liu, Ye; Hou, Chao; Zhao, Jing; Zhang, Zhongquan; Wu, Sha; Wang, Yaping; Lhaaso Collaboration
2018-07-01
In the Large High Altitude Air Shower Observatory (LHAASO), the one-square-kilometer array (KM2A), with 5242 electromagnetic particle detectors (EDs) and 1171 muon detectors (MDs), is designed to study ultra-high-energy gamma-ray astronomy and cosmic-ray physics. The remoteness of the site and the large number of detectors place extreme demands on a robust and automatic calibration procedure. In this paper, a self-calibration method that relies on the measurement of charged particles within extensive air showers is proposed. The method is fully validated by Monte Carlo simulation and has been successfully applied in a KM2A prototype array experiment. Experimental results show that the self-calibration method can determine the detector time-offset constants at the sub-nanosecond level and the number density of particles collected by each ED with an accuracy of a few percent, which is adequate to meet the physical requirements of the LHAASO experiment. This software calibration also offers an ideal way to monitor detector performance in real time for next-generation ground-based EAS experiments covering areas of a square kilometer and above.
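The timing part of such a self-calibration can be illustrated with a toy version: fit a plane shower front to each event's arrival times, then average the per-detector fit residuals over many showers to estimate the time offsets. Everything below (array geometry, offset and jitter magnitudes) is a hypothetical stand-in, not LHAASO code:

```python
import numpy as np

rng = np.random.default_rng(0)
C = 0.2998  # speed of light, m/ns

# Hypothetical array geometry and unknown per-detector time offsets.
n_det = 50
xy = rng.uniform(-500, 500, size=(n_det, 2))
true_offsets = rng.normal(0, 2.0, n_det)  # ns

def simulate_shower():
    """Arrival times of a plane shower front, plus offsets and jitter."""
    theta = rng.uniform(0, 0.5)                 # zenith angle, rad
    phi = rng.uniform(0, 2 * np.pi)
    l, m = np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi)
    t = (l * xy[:, 0] + m * xy[:, 1]) / C       # plane-front delay, ns
    return t + true_offsets + rng.normal(0, 1.0, n_det)

# Self-calibration: fit a plane to each event's times, accumulate residuals.
A = np.column_stack([xy, np.ones(n_det)])       # [x, y, 1] design matrix
resid_sum = np.zeros(n_det)
n_showers = 2000
for _ in range(n_showers):
    t = simulate_shower()
    coef, *_ = np.linalg.lstsq(A, t, rcond=None)
    resid_sum += t - A @ coef
est = resid_sum / n_showers
est -= est.mean()                               # offsets defined up to a constant
err = est - (true_offsets - true_offsets.mean())
print(f"rms offset recovery error: {err.std():.2f} ns")
```

In a real procedure one would iterate: apply the estimated offsets, refit the shower fronts, and repeat until the residual offsets converge.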
Bayesian calibration of the Community Land Model using surrogates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi
2014-02-01
We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditional on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that surrogate models can be created for CLM in most cases. The posterior distributions are more predictive than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters' distributions significantly. The structural error model reveals a correlation time-scale which can be used to identify the physical process that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive.
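The surrogate-plus-MCMC workflow described in this abstract can be sketched in miniature: a cheap polynomial surrogate is fitted to a few runs of an "expensive" model, and a Metropolis sampler then explores the posterior using only the surrogate. The one-parameter toy model, prior range, and noise level below are illustrative assumptions, not the authors' CLM setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for an expensive simulator; the true parameter generates
# 48 noisy "monthly" observations.
def expensive_model(p):
    return np.sin(p) + 0.5 * p

true_p = 1.3
obs = expensive_model(true_p) + rng.normal(0, 0.05, 48)

# 1) Build a cheap polynomial surrogate from a small design of runs.
design = np.linspace(0.0, 3.0, 12)
runs = np.array([expensive_model(p) for p in design])
coef = np.polyfit(design, runs, deg=3)
surrogate = lambda p: np.polyval(coef, p)

# 2) Metropolis sampling of the posterior, evaluating only the surrogate.
def log_post(p):
    if not 0.0 <= p <= 3.0:          # uniform prior on [0, 3]
        return -np.inf
    r = obs - surrogate(p)
    return -0.5 * np.sum(r**2) / 0.05**2

chain, p = [], 1.0
lp = log_post(p)
for _ in range(20000):
    q = p + rng.normal(0, 0.1)       # random-walk proposal
    lq = log_post(q)
    if np.log(rng.uniform()) < lq - lp:
        p, lp = q, lq
    chain.append(p)
post = np.array(chain[5000:])        # discard burn-in
print(f"posterior mean {post.mean():.3f} vs true {true_p}")
```

The surrogate is what makes the sampler affordable: the expensive model is run only 12 times, while the chain evaluates the polynomial 20,000 times.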
Improved cross-calibration of Thomson scattering and electron cyclotron emission with ECH on DIII-D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brookman, M. W., E-mail: brookmanmw@fusion.gat.com; Austin, M. E.; McLean, A. G.
2016-11-15
Thomson scattering produces n_e profiles from measurement of scattered laser beam intensity. Rayleigh scattering provides a first calibration of the relation n_e ∝ I_TS, which depends on many factors (e.g., laser alignment and power, optics, and measurement systems). On DIII-D, the n_e calibration is adjusted against an absolute n_e from the density-driven cutoff of the 48-channel 2nd-harmonic X-mode electron cyclotron emission system. This method has been used to calibrate Thomson n_e from the edge to near the core (r/a > 0.15). Application of core electron cyclotron heating improves the quality of the cutoff and the depth of its penetration into the core, and also changes underlying MHD activity, minimizing crashes which confound calibration. Less fueling is needed as "ECH pump-out" generates a plasma ready to take up gas. On removal of gyrotron power, the cutoff penetrates into the core as channels fall successively and smoothly into cutoff.
NASA Astrophysics Data System (ADS)
Ressel, Simon; Bill, Florian; Holtz, Lucas; Janshen, Niklas; Chica, Antonio; Flower, Thomas; Weidlich, Claudia; Struckmann, Thorsten
2018-02-01
The operation of vanadium redox flow batteries requires reliable in situ state of charge (SOC) monitoring. In this study, two SOC estimation approaches for the negative half cell are investigated. First, in situ open circuit potential measurements are combined with Coulomb counting in a one-step calibration of SOC and Nernst potential which does not require additional reference SOCs. In-sample and out-of-sample SOCs are estimated and analyzed; estimation errors ≤ 0.04 are obtained. In the second approach, temperature-corrected in situ electrolyte density measurements are used for the first time in vanadium redox flow batteries for SOC estimation. In-sample and out-of-sample SOC estimation errors ≤ 0.04 demonstrate the feasibility of this approach. Both methods allow recalibration during battery operation. The actual capacity obtained from SOC calibration can be used in a state of health model.
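The open-circuit-potential approach rests on inverting the Nernst relation for the negative half cell (V3+/V2+). A minimal sketch of that inversion, with the formal potential treated as a hypothetical lumped constant (in practice it absorbs activity coefficients and is itself a fitted calibration parameter):

```python
import math

F = 96485.332      # Faraday constant, C/mol
R = 8.314462       # gas constant, J/(mol K)

def soc_from_ocp(E, E0=-0.255, T=298.15):
    """Invert the Nernst relation E = E0 + (RT/F) ln((1-SOC)/SOC)
    for the negative half cell. E0 is a hypothetical formal
    potential (V) lumping activity effects; T in kelvin."""
    return 1.0 / (1.0 + math.exp((E - E0) * F / (R * T)))

# At E = E0 the electrolyte is exactly half converted (SOC = 0.5);
# more negative potentials correspond to higher SOC.
print(soc_from_ocp(-0.255), soc_from_ocp(-0.30))
```

The one-step calibration in the abstract effectively fits E0 (and the capacity) so that this curve agrees with Coulomb counting, rather than assuming E0 is known.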
NASA Astrophysics Data System (ADS)
Lorefice, Salvatore; Malta, Dalni; Julio Pinheiro, José; Marteleto, Paulo Roberto
2010-01-01
The results of the SIM.M.D-S2 bilateral comparison between INRIM (Italy) and INMETRO (Brazil) are summarized in this report. The aims of this comparison were to check the stated uncertainty levels and the degrees of equivalence between the two institutes on the calibration of hydrometers for liquid density in the range of 800 kg m-3 to 1000 kg m-3 at 20 °C, by means of two transfer standards of excellent metrological characteristics. This text appears in Appendix B of the BIPM key comparison database (kcdb.bipm.org). The final report has been peer-reviewed and approved for publication by SIM, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
Atmospheric densities derived from CHAMP/STAR accelerometer observations
NASA Astrophysics Data System (ADS)
Bruinsma, S.; Tamagnan, D.; Biancale, R.
2004-03-01
The satellite CHAMP carries the accelerometer STAR in its payload and thanks to the GPS and SLR tracking systems accurate orbit positions can be computed. Total atmospheric density values can be retrieved from the STAR measurements, with an absolute uncertainty of 10-15%, under the condition that an accurate radiative force model, satellite macro-model, and STAR instrumental calibration parameters are applied, and that the upper-atmosphere winds are less than 150 m/s. The STAR calibration parameters (i.e. a bias and a scale factor) of the tangential acceleration were accurately determined using an iterative method, which required the estimation of the gravity field coefficients in several iterations, the first result of which was the EIGEN-1S (Geophys. Res. Lett. 29 (14) (2002) 10.1029) gravity field solution. The procedure to derive atmospheric density values is as follows: (1) a reduced-dynamic CHAMP orbit is computed, the positions of which are used as pseudo-observations, for reference purposes; (2) a dynamic CHAMP orbit is fitted to the pseudo-observations using calibrated STAR measurements, which are saved in a data file containing all necessary information to derive density values; (3) the data file is used to compute density values at each orbit integration step, for which accurate terrestrial coordinates are available. This procedure was applied to 415 days of data over a total period of 21 months, yielding 1.2 million useful observations. The model predictions of DTM-2000 (EGS XXV General Assembly, Nice, France), DTM-94 (J. Geod. 72 (1998) 161) and MSIS-86 (J. Geophys. Res. 92 (1987) 4649) were evaluated by analysing the density ratios (i.e. "observed" to "computed" ratio) globally, and as functions of solar activity, geographical position and season. The global mean of the density ratios showed that the models underestimate density by 10-20%, with an rms of 16-20%.
The binning as a function of local time revealed that the diurnal and semi-diurnal components are too strong in the DTM models, while all three models model the latitudinal gradient inaccurately. Using DTM-2000 as a priori, certain model coefficients were re-estimated using the STAR-derived densities, yielding the DTM-STAR test model. The mean and rms of the global density ratios of this preliminary model are 1.00 and 15%, respectively, while the tidal and latitudinal modelling errors become small. This test model is only representative of high solar activity conditions, while the seasonal effect is probably not estimated accurately due to correlation with the solar activity effect. At least one more year of data is required to separate the seasonal effect from the solar activity effect, and data taken under low solar activity conditions must also be assimilated to construct a model representative under all circumstances.
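The evaluation metric used above, the mean and rms of observed-to-computed density ratios, is simple to compute; a sketch on purely synthetic densities chosen to mimic a model that underestimates by roughly 15% with roughly 16% scatter (all numbers illustrative, not CHAMP/STAR data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic thermosphere-like densities and a model biased low by ~15%
# with ~16% lognormal scatter, for illustration only.
rho_model = rng.uniform(1e-12, 5e-12, 1000)            # kg/m^3
rho_obs = rho_model * 1.15 * rng.lognormal(0, 0.16, 1000)

r = rho_obs / rho_model          # "observed" to "computed" ratio
mean = r.mean()
rms = r.std() / r.mean()         # relative scatter about the mean
print(f"mean ratio {mean:.2f}, rms {100 * rms:.0f}%")
```

Binning the same ratios by local time, latitude, or solar activity, as done in the study, is just a matter of grouping `r` by those covariates before averaging.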
NASA Metrology and Calibration, 1980
NASA Technical Reports Server (NTRS)
1981-01-01
The proceedings of the fourth annual NASA Metrology and Calibration Workshop are presented. This workshop covered (1) review and assessment of NASA metrology and calibration activities by NASA Headquarters, (2) results of audits by the Office of Inspector General, (3) review of a proposed NASA Equipment Management System, (4) current and planned field center activities, (5) National Bureau of Standards (NBS) calibration services for NASA, (6) review of NBS's Precision Measurement and Test Equipment Project activities, (7) NASA instrument loan pool operations at two centers, (8) mobile cart calibration systems at two centers, (9) calibration intervals and decals, (10) NASA Calibration Capabilities Catalog, and (11) development of plans and objectives for FY 1981. Several papers in this proceedings are slide presentations only.
NASA Technical Reports Server (NTRS)
Wu, Aisheng; Xiong, Xiaoxiong; Cao, Changyong; Chiang, Kwo-Fu
2016-01-01
The first Visible Infrared Imaging Radiometer Suite (VIIRS) is onboard the Suomi National Polar-orbiting Partnership (SNPP) satellite. As a primary sensor, it collects imagery and radiometric measurements of the land, atmosphere, cryosphere, and oceans in the spectral regions from visible (VIS) to long-wave infrared. NASA's National Polar-orbiting Partnership (NPP) VIIRS Characterization Support Team has been actively involved in the VIIRS radiometric and geometric calibration to support its Science Team Principal Investigators for their independent quality assessment of VIIRS Environmental Data Records. This paper presents the performance assessment of the radiometric calibration stability of the VIIRS VIS and NIR spectral bands using measurements from SNPP VIIRS and Aqua MODIS simultaneous nadir overpasses and over the invariant surface targets at the Libya-4 desert and Antarctic Dome Concordia snow sites. The VIIRS sensor data records (SDRs) used in this paper are reprocessed by the NASA SNPP Land Product Evaluation and Analysis Tool Element. This paper shows that the reprocessed VIIRS SDRs have been consistently calibrated from the beginning of the mission, and the calibration stability is similar to or better than MODIS. Results from different approaches indicate that the calibrations of the VIIRS VIS and NIR spectral bands are maintained to be stable to within 1% over the first three-year mission. The absolute calibration differences between VIIRS and MODIS are within 2%, with an exception for the 0.865-µm band, after correction of their spectral response differences.
Wavelength calibration with PMAS at 3.5 m Calar Alto Telescope using a tunable astro-comb
NASA Astrophysics Data System (ADS)
Chavez Boggio, J. M.; Fremberg, T.; Bodenmüller, D.; Sandin, C.; Zajnulina, M.; Kelz, A.; Giannone, D.; Rutowska, M.; Moralejo, B.; Roth, M. M.; Wysmolek, M.; Sayinc, H.
2018-05-01
On-sky tests conducted with an astro-comb using the Potsdam Multi-Aperture Spectrograph (PMAS) at the 3.5 m Calar Alto Telescope are reported. The proposed astro-comb approach is based on cascaded four-wave mixing between two lasers propagating through dispersion-optimized nonlinear fibers. This approach allows for a line spacing that can be continuously tuned over a broad range (from tens of GHz to beyond 1 THz), making it suitable for calibration of low-, medium-, and high-resolution spectrographs. The astro-comb provides 300 calibration lines and its line spacing is tracked with a wavemeter having 0.3 pm absolute accuracy. First, we assess the accuracy of Neon calibration by measuring the astro-comb lines with (Neon-calibrated) PMAS. The results are compared with expected line positions from the wavemeter measurements, showing an offset of ∼5-20 pm (4%-16% of one resolution element). This might be the footprint of the accuracy limits of the actual Neon calibration. Then, the astro-comb performance as a calibrator is assessed through measurements of the Ca triplet from stellar objects HD3765 and HD219538 as well as with the sky line spectrum, showing the advantage of the proposed astro-comb for wavelength calibration at any resolution.
Ding, Huanjun; Molloi, Sabee
2012-01-01
Purpose: A simple and accurate measurement of breast density is crucial for understanding its impact in breast cancer risk models. The feasibility of quantifying volumetric breast density with a photon-counting spectral mammography system has been investigated using both computer simulations and physical phantom studies. Methods: A computer simulation model involving polyenergetic spectra from a tungsten-anode x-ray tube and a Si-based photon-counting detector was evaluated for breast density quantification. The figure-of-merit (FOM), defined as the signal-to-noise ratio (SNR) of the dual energy image with respect to the square root of the mean glandular dose (MGD), was chosen to optimize the imaging protocols in terms of tube voltage and splitting energy. A scanning multi-slit photon-counting spectral mammography system was employed in the experimental study to quantitatively measure breast density using dual energy decomposition with glandular and adipose equivalent phantoms of uniform thickness. Four different phantom studies were designed to evaluate the accuracy of the technique, each of which addressed one specific variable in the phantom configurations: thickness, density, area, and shape. In addition to the standard calibration fitting function used for dual energy decomposition, a modified fitting function has been proposed, which introduces the tube voltage used in the imaging task as a third variable in dual energy decomposition. Results: For an average-sized breast of 4.5 cm thickness, the FOM was maximized with a tube voltage of 46 kVp and a splitting energy of 24 keV. To be consistent with the tube voltage used in current clinical screening exams (~32 kVp), the optimal splitting energy was proposed to be 22 keV, which offered a FOM greater than 90% of the optimal value.
In the experimental investigation, the root-mean-square (RMS) error in breast density quantification for all four phantom studies was estimated to be approximately 1.54% using the standard calibration function. The results from the modified fitting function, which integrated the tube voltage as a variable in the calibration, indicated an RMS error of approximately 1.35% for all four studies. Conclusions: The results of the current study suggest that photon-counting spectral mammography systems may potentially be implemented for an accurate quantification of volumetric breast density, with an RMS error of less than 2%, using the proposed dual energy imaging technique.
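The dual energy decomposition underlying this measurement can be sketched as a two-material basis inversion: the low- and high-energy log attenuations are solved for equivalent glandular and adipose thicknesses, from which volumetric density follows. The attenuation coefficients and thicknesses below are hypothetical placeholders, not the authors' calibration values:

```python
import numpy as np

# Hypothetical effective linear attenuation coefficients (1/cm) of
# glandular and adipose tissue in the two energy bins of a
# photon-counting detector; real systems fit these from phantom scans.
MU = np.array([[0.80, 0.55],    # low-energy bin:  [glandular, adipose]
               [0.45, 0.35]])   # high-energy bin

def decompose(log_atten):
    """Solve -ln(I/I0) = MU @ t for the two basis thicknesses t (cm)."""
    return np.linalg.solve(MU, log_atten)

def breast_density(t_g, t_a):
    """Volumetric density as the glandular fraction of total thickness."""
    return t_g / (t_g + t_a)

# Round trip: a 4.5 cm breast that is 30% glandular by thickness.
t = np.array([1.35, 3.15])          # glandular, adipose thickness (cm)
t_hat = decompose(MU @ t)           # simulate log attenuations, then invert
print(f"recovered density {breast_density(*t_hat):.3f}")
```

The calibration fitting functions in the study play the role of `MU` here: they map measured signals to basis thicknesses, with the modified version additionally parameterized by tube voltage.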
Primary calibrations of radionuclide solutions and sources for the EML quality assessment program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fisenne, I.M.
1993-12-31
The quality assurance procedures established for the operation of the U.S. Department of Energy's Environmental Measurements Laboratory (DOE-EML's) Quality Assessment Program (QAP) are essentially the same as those that are in effect for any EML program involving radiometric measurements. All these programs have at their core the use of radionuclide standards for their instrument calibration. This paper focuses on EML's approach to the acquisition, calibration and application of a wide range of radionuclide sources that are required to meet its programmatic needs.
Electron density measurements in STPX plasmas
NASA Astrophysics Data System (ADS)
Clark, Jerry; Williams, R.; Titus, J. B.; Mezonlin, E. D.; Akpovo, C.; Thomas, E.
2017-10-01
Diagnostics have been installed to measure the electron density of Spheromak Turbulent Physics Experiment (STPX) plasmas at Florida A&M University. An insertable probe, provided by Auburn University, combining a triple-tipped Langmuir probe and a radial array of three ion-saturation-current/floating-potential rings, has been installed to measure instantaneous plasma density, temperature, and plasma potential. As the ramp-up of the experimental program commences, initial electron density measurements from the triple probe show that the electron density is on the order of 10^19 particles/m^3. For a passive measurement, a CO2 interferometer system has been designed and installed to measure line-averaged densities and to corroborate the Langmuir measurements. We describe the design, calibration, and performance of these diagnostic systems on large-volume STPX plasmas.
NASA Astrophysics Data System (ADS)
Dhooghe, Frederik; De Keyser, Johan; Altwegg, Kathrin; Calmonte, Ursina; Fuselier, Stephen; Hässig, Myrtha; Berthelier, Jean-Jacques; Mall, Urs; Gombosi, Tamas; Fiethe, Björn
2014-05-01
Rosetta will rendezvous with comet 67P/Churyumov-Gerasimenko in May 2014. The Rosetta Orbiter Spectrometer for Ion and Neutral Analysis (ROSINA) instrument comprises three sensors: the pressure sensor (COPS) and two mass spectrometers (RTOF and DFMS). The double focusing mass spectrometer DFMS is optimized for mass resolution and consists of an ion source, a mass analyser and a detector package operated in analogue mode. The magnetic sector of the analyser provides the mass dispersion needed for use with the position-sensitive microchannel plate (MCP) detector. Ions that hit the MCP release electrons that are recorded digitally using a linear electron detector array with 512 pixels. Raw data for a given commanded mass are obtained as ADC counts as a function of pixel number. We have developed a computer-assisted approach to address the problem of calibrating such raw data. Mass calibration: Ion identification is based on their mass-over-charge (m/Z) ratio and requires an accurate correlation of pixel number and m/Z. The m/Z scale depends on the commanded mass and the magnetic field and can be described by an offset of the pixel associated with the commanded mass from the centre of the detector array and a scaling factor. Mass calibration is aided by the built-in gas calibration unit (GCU), which allows one to inject a known gas mixture into the instrument. In a first, fully automatic step of the mass calibration procedure, the calibration uses all GCU spectra and extracts information about the mass peak closest to the centre pixel, since those peaks can be identified unambiguously. This preliminary mass-calibration relation can then be applied to all spectra. Human-assisted identification of additional mass peaks further improves the mass calibration. Ion flux calibration: ADC counts per pixel are converted to ion counts per second using the overall gain, the individual pixel gain, and the total data accumulation time. 
DFMS can perform an internal scan to determine the pixel gain and related detector aging. The software automatically corrects for these effects to calibrate the fluxes. The COPS sensor can be used for an a posteriori calibration of the fluxes. Neutral gas number densities: Neutrals are ionized in the ion source before they are transferred to the mass analyser, but during this process fragmentation may occur. Our software allows one to identify which neutrals entered the instrument, given the ion fragments that are detected. First, multiple spectra with a limited mass range are combined to provide an overview of as many ion fragments as possible. We then exploit a fragmentation database to assist in figuring out the relation between entering species and recorded fragments. Finally, using experimentally determined sensitivities, gas number densities are obtained. The instrument characterisation (experimental determination of sensitivities, fragmentation patterns for the most common neutral species, etc.) has been conducted by the consortium using an instrument copy in the University of Bern test facilities during the cruise phase of the mission.
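The pixel-to-m/Z step of the mass calibration described above can be sketched as a fit to known calibration-gas peaks. The exponential dispersion law and the peak positions below are illustrative assumptions, not DFMS instrument constants:

```python
import numpy as np

# Known GCU calibration species with hypothetical peak pixel positions.
peaks_mz = np.array([18.010, 28.006, 44.010])   # H2O, CO/N2, CO2
peaks_px = np.array([231.0, 262.1, 293.5])

# Assume an exponential dispersion m/Z(p) = m_ref * exp((p - p_ref)/k),
# i.e. ln(m/Z) linear in pixel number; fit slope and intercept by
# least squares over the identified peaks.
slope, intercept = np.polyfit(peaks_px, np.log(peaks_mz), 1)

def mz_of_pixel(p):
    """Map a detector pixel to a mass-over-charge value."""
    return np.exp(intercept + slope * p)

for mz, px in zip(peaks_mz, peaks_px):
    print(f"pixel {px:5.1f}: fitted m/Z {mz_of_pixel(px):6.3f} (known {mz})")
```

In the procedure described in the abstract, peaks near the centre pixel seed this relation automatically, and further human-identified peaks refine the fit; ion identification then reduces to reading `mz_of_pixel` at each peak position.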
Poster — Thur Eve — 14: Improving Tissue Segmentation for Monte Carlo Dose Calculation using DECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di Salvio, A.; Bedwani, S.; Carrier, J-F.
2014-08-15
Purpose: To improve Monte Carlo dose calculation accuracy through a new tissue segmentation technique with dual energy CT (DECT). Methods: Electron density (ED) and effective atomic number (EAN) can be extracted directly from DECT data with a stoichiometric calibration method. Images are acquired with Monte Carlo CT projections using the user code egs-cbct and reconstructed using an FDK backprojection algorithm. Calibration is performed using projections of a numerical RMI phantom. A weighted parameter algorithm then uses both EAN and ED to assign materials to voxels from DECT simulated images. This new method is compared to a standard tissue characterization from single energy CT (SECT) data using a segmented calibrated Hounsfield unit (HU) to ED curve. Both methods are compared to the reference numerical head phantom. Monte Carlo simulations on uniform phantoms of different tissues using dosxyz-nrc show discrepancies in depth-dose distributions. Results: Both SECT and DECT segmentation methods show similar performance assigning soft tissues. Performance is however improved with DECT in regions with higher density, such as bones, where it assigns materials correctly 8% more often than segmentation with SECT, considering the same set of tissues and simulated clinical CT images, i.e. including noise and reconstruction artifacts. Furthermore, Monte Carlo results indicate that kV photon beam depth-dose distributions can double between two tissues of density higher than muscle. Conclusions: A direct acquisition of ED and the added information of EAN with DECT data improves tissue segmentation and increases the accuracy of Monte Carlo dose calculation in kV photon beams.
Calibration of Automatically Generated Items Using Bayesian Hierarchical Modeling.
ERIC Educational Resources Information Center
Johnson, Matthew S.; Sinharay, Sandip
For complex educational assessments, there is an increasing use of "item families," which are groups of related items. However, calibration or scoring for such an assessment requires fitting models that take into account the dependence structure inherent among the items that belong to the same item family. C. Glas and W. van der Linden…
The fossilized birth–death process for coherent calibration of divergence-time estimates
Heath, Tracy A.; Huelsenbeck, John P.; Stadler, Tanja
2014-01-01
Time-calibrated species phylogenies are critical for addressing a wide range of questions in evolutionary biology, such as those that elucidate historical biogeography or uncover patterns of coevolution and diversification. Because molecular sequence data are not informative on absolute time, external data—most commonly, fossil age estimates—are required to calibrate estimates of species divergence dates. For Bayesian divergence time methods, the common practice for calibration using fossil information involves placing arbitrarily chosen parametric distributions on internal nodes, often disregarding most of the information in the fossil record. We introduce the “fossilized birth–death” (FBD) process—a model for calibrating divergence time estimates in a Bayesian framework, explicitly acknowledging that extant species and fossils are part of the same macroevolutionary process. Under this model, absolute node age estimates are calibrated by a single diversification model and arbitrary calibration densities are not necessary. Moreover, the FBD model allows for inclusion of all available fossils. We performed analyses of simulated data and show that node age estimation under the FBD model results in robust and accurate estimates of species divergence times with realistic measures of statistical uncertainty, overcoming major limitations of standard divergence time estimation methods. We used this model to estimate the speciation times for a dataset composed of all living bears, indicating that the genus Ursus diversified in the Late Miocene to Middle Pliocene.
1998 Calibration of the Mach 4.7 and Mach 6 Arc-Heated Scramjet Test Facility Nozzles
NASA Technical Reports Server (NTRS)
Witte, David W.; Irby, Richard G.; Auslender, Aaron H.; Rock, Kenneth E.
2004-01-01
A calibration of the Arc-Heated Scramjet Test Facility (AHSTF) Mach 4.7 and Mach 6 nozzles was performed in 1998. For each nozzle, three different typical facility operating test points were selected for calibration. Each survey consisted of measurements, at 340 separate locations across the 11-inch-square nozzle exit plane, of pitot pressure, static pressure, and total temperature. Measurement density was higher (4/inch) in the boundary layer near the nozzle wall than in the core nozzle flow (1/inch). The results generated for each of these calibration surveys were contour plots at the nozzle exit plane of the measured and calculated flow properties, which completely defined the thermodynamic state of the nozzle exit flow. An area integration of the mass flux at the nozzle exit for each survey was compared to the AHSTF mass flow meter results to provide an indication of the overall quality of the calibration performed. The percent difference between the integrated nozzle exit mass flow and the flow meter ranged from 0.0 to 1.3 percent for the six surveys. Finally, a comparison of this 1998 calibration was made with the 1986 calibration. Differences of less than 10 percent were found within the nozzle core flow, while in the boundary layer differences on the order of 20 percent were quite common.
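The mass-flux area integration used above as a quality check can be sketched numerically: integrate the measured mass flux over the exit plane and compare with the facility flow meter. The flow profile and flow-meter value below are hypothetical stand-ins for the survey data:

```python
import numpy as np

# Hypothetical 11 in x 11 in exit plane sampled on a uniform grid.
side = 11 * 0.0254                 # exit plane edge length, m
n = 34
x = np.linspace(0.0, side, n)
X, Y = np.meshgrid(x, x)

def ramp(u):
    """Unit core flow with thin linear boundary layers at the walls."""
    return np.minimum(1.0, 20 * u / side) * np.minimum(1.0, 20 * (side - u) / side)

rho_u = 60.0 * ramp(X) * ramp(Y)   # illustrative mass flux, kg/(m^2 s)

# Two-dimensional trapezoidal area integration of the mass flux.
h = x[1] - x[0]
cell_avg = (rho_u[:-1, :-1] + rho_u[1:, :-1] + rho_u[:-1, 1:] + rho_u[1:, 1:]) / 4
mdot = float(cell_avg.sum() * h * h)

meter = 4.23                       # hypothetical flow-meter reading, kg/s
diff_pct = 100 * abs(mdot - meter) / meter
print(f"integrated {mdot:.3f} kg/s vs meter {meter:.2f} kg/s ({diff_pct:.1f}% diff)")
```

The denser sampling near the walls in the actual surveys serves exactly this integral: the steep boundary-layer gradients dominate the quadrature error, so extra points there buy the most accuracy.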
Calibration of the clumped isotope thermometer for planktic foraminifers
NASA Astrophysics Data System (ADS)
Meinicke, N.; Ho, S. L.; Nürnberg, D.; Tripati, A. K.; Jansen, E.; Dokken, T.; Schiebel, R.; Meckler, A. N.
2017-12-01
Many proxies for past ocean temperature suffer from secondary influences or require species-specific calibrations that might not be applicable on longer time scales. Being thermodynamically based and thus independent of seawater composition, clumped isotopes in carbonates (Δ47) have the potential to circumvent such issues affecting other proxies and provide reliable temperature reconstructions far back in time and in unknown settings. Although foraminifers are commonly used for paleoclimate reconstructions, their use for clumped isotope thermometry has been hindered so far by large sample-size requirements. Existing calibration studies suggest that data from a variety of foraminifer species agree with synthetic carbonate calibrations (Tripati, et al., GCA, 2010; Grauel, et al., GCA, 2013). However, these studies did not include a sufficient number of samples to fully assess the existence of species-specific effects, and data coverage was especially sparse in the low temperature range (<10 °C). To expand the calibration database of clumped isotopes in planktic foraminifers, especially for colder temperatures (<10 °C), we present new Δ47 data analysed on 14 species of planktic foraminifers from 13 sites, covering a temperature range of 1-29 °C. Our method allows for analysis of smaller sample sizes (3-5 mg), hence also the measurement of multiple species from the same samples. We analyzed surface-dwelling (0-50 m) species and deep-dwelling (habitat depth up to several hundred meters) planktic foraminifers from the same sites to evaluate species-specific effects and to assess the feasibility of temperature reconstructions for different water depths. We also assess the effects of different techniques in estimating foraminifer calcification temperature on the calibration. Finally, we compare our calibration to existing clumped isotope calibrations.
Our results confirm previous findings that indicate no species-specific effects on the Δ47-temperature relationship measured in planktic foraminifers.
Photonic sensing in highly concentrated biotechnical processes by photon density wave spectroscopy
NASA Astrophysics Data System (ADS)
Hass, Roland; Sandmann, Michael; Reich, Oliver
2017-04-01
Photon Density Wave (PDW) spectroscopy is introduced as a new approach for photonic sensing in highly concentrated biotechnical processes. It independently quantifies the absorption and reduced scattering coefficients, calibration-free and as a function of time, thus describing the optical properties of the biomaterial in the vis/NIR range during processing. As examples of industrial relevance, enzymatic milk coagulation, beer mashing, and algae cultivation in photo bioreactors are discussed.
NASA Astrophysics Data System (ADS)
Ouaras, K.; Magne, L.; Pasquiers, S.; Tardiveau, P.; Jeanney, P.; Bournonville, B.
2018-04-01
The spatiotemporal distributions of the OH radical density are measured using planar laser-induced fluorescence in the afterglow of a nanosecond diffuse discharge at atmospheric pressure in humid air. The diffuse discharge is generated between a pin electrode and a grounded plate within a gap of 18 mm. The high voltage pulse applied to the pin ranges from 65 to 85 kV with a rise time of 2 ns. The specific electrical energy transferred to the gas ranges from 5 to 40 J l^-1. The influence of H2O concentration is studied from 0.5% to 1.5%. An absolute calibration of OH density is performed using a six-level transient rate equation model to simulate the dynamics of OH excitation by the laser, taking into account collisional processes during the optical pumping and the fluorescence. Rayleigh scattering measurements are used to achieve the geometrical part of the calibration. A local maximum of OH density is found in the pin area whatever the operating conditions. For 85 kV and 1% of H2O, this peak reaches a value of 2.0 × 10^16 cm^-3, corresponding to 8% of H2O dissociation. The temporal decay of the spatially averaged OH density is found to be similar to that in the afterglow of a homogeneous photo-triggered discharge, for which a self-consistent modeling is done. These tools are then used to inform the discussion of OH kinetics.
Comparing Single-Point and Multi-point Calibration Methods in Modulated DSC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Buskirk, Caleb Griffith
2017-06-14
Heat capacity measurements for High Density Polyethylene (HDPE) and Ultra-high Molecular Weight Polyethylene (UHMWPE) were performed using Modulated Differential Scanning Calorimetry (mDSC) over a wide temperature range, -70 to 115 °C, with a TA Instruments Q2000 mDSC. The default calibration method for this instrument involves measuring the heat capacity of a sapphire standard at a single temperature near the middle of the temperature range of interest. However, this method often fails for temperature ranges that exceed a 50 °C interval, likely because of drift or non-linearity in the instrument's heat capacity readings over time or over the temperature range. Therefore, in this study a method was developed to calibrate the instrument using multiple temperatures and the same sapphire standard.
Absolute Calibration of Image Plate for electrons at energy between 100 keV and 4 MeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, H; Back, N L; Eder, D C
2007-12-10
The authors measured the absolute response of image plate (Fuji BAS SR2040) for electrons at energies between 100 keV and 4 MeV using an electron spectrometer. The electron source was produced by a short-pulse laser irradiating solid-density targets. This paper presents the calibration results for the image plate in photostimulated luminescence (PSL) per electron over this energy range. Results from the Monte Carlo radiation transport code MCNPX are also presented for three representative incident angles onto the image plates, along with the corresponding electron energy deposition at each angle. Together these provide a complete set of tools that allows the absolute calibration to be extended to other spectrometer settings in this electron energy range.
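In practice a calibration of this kind is applied by interpolating the measured response to the electron energy of each spectrometer bin. A sketch with an invented response table (the values are illustrative, not the published calibration):

```python
import numpy as np

# Hypothetical response table: PSL recorded per incident electron versus
# electron energy (MeV); the shape, not the values, is the point here.
energy_MeV = np.array([0.1, 0.5, 1.0, 2.0, 4.0])
psl_per_e  = np.array([0.008, 0.015, 0.020, 0.022, 0.021])

def electrons_from_psl(measured_psl, E_MeV):
    """Convert a measured PSL signal in a spectrometer energy bin into an
    absolute electron number via the interpolated response."""
    response = np.interp(E_MeV, energy_MeV, psl_per_e)
    return measured_psl / response
```
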
Temperature effect on laser-induced breakdown spectroscopy spectra of molten and solid salts
NASA Astrophysics Data System (ADS)
Hanson, Cynthia; Phongikaroon, Supathorn; Scott, Jill R.
2014-07-01
Laser-induced breakdown spectroscopy (LIBS) has been investigated as a potential analytical tool to improve operations and safeguards for electrorefiners, such as those used in processing spent nuclear fuel. This study set out to better understand the effect of sample temperature and physical state on LIBS spectra of molten and solid salts by building calibration curves of cerium and assessing self-absorption, plasma temperature, electron density, and local thermal equilibrium (LTE). Samples were composed of a LiCl-KCl eutectic salt, an internal standard of MnCl2, and varying concentrations of CeCl3 (0.1, 0.3, 0.5, 0.8, and 1.0 wt.% Ce) under different temperatures (773, 723, 673, 623, and 573 K). Analysis of salts in their molten form is preferred as plasma plumes from molten samples experienced less self-absorption, less variability in plasma temperature, and higher clearance of the minimum electron density required for local thermal equilibrium. These differences are attributed to plasma dynamics as a result of phase changes. Spectral reproducibility was also better in the molten state due to sample homogeneity.
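Calibration curves with an internal standard, as used in this study, reduce shot-to-shot variability by ratioing the analyte line to the standard line before fitting. A sketch with invented intensity ratios (not the paper's data):

```python
import numpy as np

# Invented line intensities: a Ce analyte line ratioed to the MnCl2
# internal-standard line at each CeCl3 loading (wt.% Ce).
conc_wt = np.array([0.1, 0.3, 0.5, 0.8, 1.0])
ratio   = np.array([0.21, 0.58, 1.02, 1.63, 2.05])   # I_Ce / I_Mn

# Linear calibration curve: ratio = m * conc + b
m, b = np.polyfit(conc_wt, ratio, 1)

def predict_conc(measured_ratio):
    """Invert the calibration curve for an unknown sample."""
    return (measured_ratio - b) / m
```
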
Kwon, Young-Hoo; Casebolt, Jeffrey B
2006-01-01
One of the most serious obstacles to accurate quantification of the underwater motion of a swimmer's body is image deformation caused by refraction. Refraction occurs at the water-air interface plane (glass) owing to the density difference. Camera calibration-reconstruction algorithms commonly used in aquatic research do not have the capability to correct this refraction-induced nonlinear image deformation and produce large reconstruction errors. The aim of this paper is to provide a thorough review of: the nature of the refraction-induced image deformation and its behaviour in underwater object-space plane reconstruction; the intrinsic shortcomings of the Direct Linear Transformation (DLT) method in underwater motion analysis; experimental conditions that interact with refraction; and alternative algorithms and strategies that can be used to improve the calibration-reconstruction accuracy. Although it is impossible to remove the refraction error completely in conventional camera calibration-reconstruction methods, it is possible to improve the accuracy to some extent by manipulating experimental conditions or calibration frame characteristics. Alternative algorithms, such as the localized DLT and the double-plane method are also available for error reduction. The ultimate solution for the refraction problem is to develop underwater camera calibration and reconstruction algorithms that have the capability to correct refraction.
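The refraction-induced deformation the review discusses follows directly from Snell's law at the water-air interface. A minimal sketch of the effect, using standard refractive indices and a simplified flat-interface geometry:

```python
import math

N_WATER, N_AIR = 1.333, 1.000  # standard refractive indices

def refracted_image_angle(theta_w):
    """Snell's law at the water-air interface: angle (from the normal) of
    the ray in air for an underwater ray at angle theta_w (radians).
    The ray bends away from the normal, distorting the image."""
    return math.asin(min(1.0, (N_WATER / N_AIR) * math.sin(theta_w)))

def apparent_depth(true_depth):
    """Paraxial approximation: a submerged point appears shallower by the
    index ratio; off-axis the error grows nonlinearly, which is the
    deformation a plain DLT calibration cannot absorb."""
    return true_depth * N_AIR / N_WATER
```
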
Finite Element-Based Mechanical Assessment of Bone Quality on the Basis of In Vivo Images.
Pahr, Dieter H; Zysset, Philippe K
2016-12-01
Beyond bone mineral density (BMD), bone quality designates the mechanical integrity of bone tissue. In vivo images based on X-ray attenuation, such as CT reconstructions, provide size, shape, and local BMD distribution and may be exploited as input for finite element analysis (FEA) to assess bone fragility. Further key input parameters of FEA are the material properties of bone tissue. This review discusses the main determinants of bone mechanical properties and emphasizes the added value, as well as the important assumptions underlying finite element analysis. Bone tissue is a sophisticated, multiscale composite material that undergoes remodeling but exhibits a rather narrow band of tissue mineralization. Mechanically, bone tissue behaves elastically under physiologic loads and yields by cracking beyond critical strain levels. Through adequate cell-orchestrated modeling, trabecular bone tunes its mechanical properties by volume fraction and fabric. With proper calibration, these mechanical properties may be incorporated in quantitative CT-based finite element analysis that has been validated extensively with ex vivo experiments and has been applied increasingly in clinical trials to assess treatment efficacy against osteoporosis.
NASA Astrophysics Data System (ADS)
Wentworth, Mami Tonoe
Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models, and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. 
We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through the direct numerical evaluation of Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes' formula. We also perform similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ the energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs. We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. 
To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affect the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters whereas subspace selection identifies a linear combination of parameters that impacts the model responses significantly. We employ active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
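The core of the Bayesian calibration machinery discussed in this dissertation abstract is a Metropolis sampler. The toy sketch below implements plain random-walk Metropolis, the non-adaptive core that DRAM and DREAM refine, on a one-parameter Gaussian problem; the model and data are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy calibration problem (invented): observations y = theta + noise,
# Gaussian likelihood with known sigma = 0.5, broad N(0, 10^2) prior.
y = rng.normal(2.0, 0.5, size=50)

def log_post(theta):
    """Unnormalized log posterior: log likelihood + log prior."""
    return (-0.5 * np.sum((y - theta) ** 2) / 0.5**2
            - 0.5 * theta**2 / 10.0**2)

# Plain random-walk Metropolis.
chain = np.empty(5000)
theta, lp = 0.0, log_post(0.0)
for i in range(chain.size):
    prop = theta + 0.2 * rng.standard_normal()   # symmetric proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain[i] = theta

posterior_mean = chain[1000:].mean()             # discard burn-in
```

For this conjugate toy problem the posterior mean can be checked against the near-flat-prior analytic value (the sample mean of the data), which is how the dissertation's verification-by-direct-evaluation idea works in miniature.
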
Absolute Effective Area of the Chandra High-Resolution Mirror Assembly
NASA Technical Reports Server (NTRS)
Schwartz, D. A.; David, L. P.; Donnelly, R. H.; Edgar, R. J.; Gaetz, T. J.; Jerius, D.; Juda, M.; Kellogg, E. M.; McNamara, B. R.; Dewey, D.
2000-01-01
The Chandra X-ray Observatory was launched in July 1999, and is returning exquisite sub-arcsecond x-ray images of star groups, supernova remnants, galaxies, quasars, and clusters of galaxies. In addition to being the premier X-ray observatory in terms of angular and spectral resolution, Chandra is the best calibrated X-ray facility ever flown. We discuss here the calibration of the effective area of the High Resolution Mirror Assembly. Because we do not know the absolute X-ray flux density of any celestial source, this must be based primarily on ground measurements and on modeling. In particular, we must remove the calibrated modeled responses of the detectors and gratings to obtain the mirror area. For celestial sources which may be assumed to have smoothly varying spectra, such as the Crab Nebula, we may verify the continuity of the area calibration as a function of energy. This is of significance in energy regions such as the Ir M-edges, or near the critical grazing angle cutoff of the various mirror shells.
Gregor, M. C.; Boni, R.; Sorce, A.; ...
2016-11-29
Experiments in high-energy-density physics often use optical pyrometry to determine temperatures of dynamically compressed materials. In combination with simultaneous shock-velocity and optical-reflectivity measurements using velocity interferometry, these experiments provide accurate equation-of-state data at extreme pressures (P > 1 Mbar) and temperatures (T > 0.5 eV). This paper reports on the absolute calibration of the streaked optical pyrometer (SOP) at the Omega Laser Facility. The wavelength-dependent system response was determined by measuring the optical emission from a National Institute of Standards and Technology–traceable tungsten-filament lamp through various narrowband (40 nm-wide) filters. The integrated signal over the SOP’s ~250-nm operating range is then related to that of a blackbody radiator using the calibrated response. We present a simple closed-form equation for the brightness temperature as a function of streak-camera signal derived from this calibration. As a result, error estimates indicate that brightness temperature can be inferred to a precision of <5%.
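The closed-form brightness-temperature relation is, at heart, an inversion of Planck's law. A single-wavelength sketch of that inversion (not the band-integrated formula derived in the paper, which folds in the measured system response):

```python
import math

H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e8      # speed of light, m/s
KB = 1.380649e-23     # Boltzmann constant, J/K

def planck_radiance(wavelength_m, T):
    """Blackbody spectral radiance, W m^-2 sr^-1 m^-1."""
    a = 2.0 * H * C**2 / wavelength_m**5
    x = H * C / (wavelength_m * KB * T)
    return a / math.expm1(x)

def brightness_temperature(L, wavelength_m):
    """Exact inverse of planck_radiance at one effective wavelength:
    the temperature of the blackbody that would produce radiance L."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return H * C / (wavelength_m * KB * math.log1p(a / L))
```

The `expm1`/`log1p` pair keeps the round trip numerically exact even at low signal levels.
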
Coordinated Observations of AR 11726 by Hinode/EIS and EUNIS-2013
NASA Astrophysics Data System (ADS)
Ancheta, A. J.; Daw, A. N.; Brosius, J. W.
2016-12-01
The Extreme-Ultraviolet Normal-Incidence Spectrograph (EUNIS) sounding rocket payload was flown on 2013 April 23 with two independent channels covering the 300-370 A and 525-635 A wavebands. EUNIS-2013 observed two targets on the solar disk that included quiet sun, active regions, a flare, and a micro-flare. The active region AR 11726 was co-observed with the EUV Imaging Spectrometer (EIS) on Hinode. The radiometric response of EUNIS is measured in the laboratory using a NIST-calibrated photodiode and hollow cathode discharge lamp. A density- and temperature- insensitive line intensity ratio technique can be used to derive an in-flight calibration update of Hinode/EIS. Measurements of EIS emission lines with respect to EUNIS lines, including Fe X to Fe XII and Si X, provide a comparison between the calibrations of the two instruments. The radiometric calibration of EUNIS-2013 is also validated using the same insensitive ratio technique with emission lines such as Mg VIII, Fe XI, Fe XVI, and Si IX.
Yang, M; Zhu, X R; Park, PC; Titt, Uwe; Mohan, R; Virshup, G; Clayton, J; Dong, L
2012-01-01
The purpose of this study was to analyze factors affecting proton stopping-power-ratio (SPR) estimations and range uncertainties in proton therapy planning using the standard stoichiometric calibration. The SPR uncertainties were grouped into five categories according to their origins and then estimated based on previously published reports or measurements. For the first time, the impact of tissue composition variations on SPR estimation was assessed and the uncertainty estimates of each category were determined for low-density (lung), soft, and high-density (bone) tissues. A composite, 95th percentile water-equivalent-thickness uncertainty was calculated from multiple beam directions in 15 patients with various types of cancer undergoing proton therapy. The SPR uncertainties (1σ) were quite different (ranging from 1.6% to 5.0%) in different tissue groups, although the final combined uncertainty (95th percentile) for different treatment sites was fairly consistent at 3.0–3.4%, primarily because soft tissue is the dominant tissue type in the human body. The dominant contributing factor for uncertainties in soft tissues was the degeneracy of Hounsfield Numbers in the presence of tissue composition variations. To reduce the overall uncertainties in SPR estimation, the use of dual-energy computed tomography is suggested. The values recommended in this study based on typical treatment sites and a small group of patients roughly agree with the commonly referenced value (3.5%) used for margin design. By using tissue-specific range uncertainties, one could estimate the beam-specific range margin by accounting for different types and amounts of tissues along a beam, which may allow for customization of range uncertainty for each beam direction. PMID:22678123
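Beam-specific range margins of the kind proposed here can be estimated by combining per-tissue SPR uncertainties in quadrature, weighted by each tissue's water-equivalent thickness along the beam. A sketch; the per-group sigma values below are hypothetical assignments loosely spanning the 1.6%-5.0% range quoted in the abstract:

```python
import math

# Hypothetical per-tissue-group 1-sigma SPR uncertainties (fractions).
spr_sigma = {"lung": 0.050, "soft": 0.016, "bone": 0.024}

def beam_range_sigma(wet_cm_by_tissue):
    """Combine independent per-tissue SPR uncertainties in quadrature,
    weighted by each tissue's water-equivalent thickness (cm) along the
    beam, giving a beam-specific range uncertainty in cm."""
    return math.sqrt(sum((wet * spr_sigma[t]) ** 2
                         for t, wet in wet_cm_by_tissue.items()))

# e.g. a lateral prostate beam: mostly soft tissue plus some bone
sigma_cm = beam_range_sigma({"soft": 20.0, "bone": 3.0})
```

Because the contributions add in quadrature, the soft-tissue term dominates whenever soft tissue dominates the path, mirroring the abstract's observation about composite uncertainties.
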
Characterizing Intra-Urban Air Quality Gradients with a Spatially-Distributed Network
NASA Astrophysics Data System (ADS)
Zimmerman, N.; Ellis, A.; Schurman, M. I.; Gu, P.; Li, H.; Snell, L.; Gu, J.; Subramanian, R.; Robinson, A. L.; Apte, J.; Presto, A. A.
2016-12-01
City-wide air pollution measurements have typically relied on regulatory or research monitoring sites with low spatial density to assess population-scale exposure. However, air pollutant concentrations exhibit significant spatial variability depending on local sources and features of the built environment, which may not be well captured by the existing monitoring regime. To better understand urban spatial and temporal pollution gradients at 1 km resolution, a network of 12 real-time air quality monitoring stations was deployed beginning July 2016 in Pittsburgh, PA. The stations were deployed at sites along an urban-rural transect and in urban locations with a range of traffic, restaurant, and tall building densities to examine the impact of various modifiable factors. Measurements from the stationary monitoring stations were further supported by mobile monitoring, which provided higher spatial resolution pollutant measurements on nearby roadways and enabled routine calibration checks. The stationary monitoring measurements comprise ultrafine particle number (Aerosol Dynamics "MAGIC" CPC), PM2.5 (Met One Neighborhood PM Monitor), black carbon (Met One BC 1050), and a new low-cost air quality monitor, the Real-time Affordable Multi-Pollutant (RAMP) sensor package for measuring CO, NO2, SO2, O3, CO2, temperature and relative humidity. High time-resolution (sub-minute) measurements across the distributed monitoring network enable insight into dynamic pollutant behaviour. Our preliminary findings show that our instruments are sensitive to PM2.5 gradients exceeding 2 micrograms per cubic meter and ultrafine particle gradients exceeding 1000 particles per cubic centimeter. Additionally, we have developed rigorous calibration protocols to characterize the RAMP sensor response and drift, as well as multiple linear regression models to convert sensor response into pollutant concentrations that are comparable to reference instrumentation.
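Low-cost sensor calibration via multiple linear regression, as described for the RAMP packages, can be sketched on synthetic co-location data. All coefficients and interference terms below are invented for illustration; a real deployment fits against a reference instrument:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic co-location data (invented): the raw sensor signal responds
# to the true CO concentration plus temperature and RH interference.
n = 200
co_ref = rng.uniform(0.1, 2.0, n)      # reference-instrument CO, ppm
temp = rng.uniform(0.0, 35.0, n)       # deg C
rh = rng.uniform(20.0, 90.0, n)        # % relative humidity
signal = 150.0 * co_ref + 2.0 * temp - 0.5 * rh + rng.normal(0.0, 5.0, n)

# Multiple linear regression: CO ~ signal + temp + rh + intercept
X = np.column_stack([signal, temp, rh, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, co_ref, rcond=None)

def sensor_to_ppm(sig, T, RH):
    """Convert a raw reading to ppm with the fitted calibration model."""
    return float(coef @ np.array([sig, T, RH, 1.0]))
```
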
Comparison of endothelial cell density of organ cultured corneas with cornea donor study.
Campolmi, Nelly; He, Zhiguo; Acquart, Sophie; Trone, Marie-Caroline; Bernard, Aurélien; Gauthier, Anne-Sophie; Garraud, Olivier; Forest, Fabien; Péocʼh, Michel; Gain, Philippe; Thuret, Gilles
2014-06-01
Determination of the endothelial cell density (ECD) by eye banks is paramount in donor cornea qualification. Unbiased measurement avoids wastage and grafts with an increased risk of premature failure. Internal calibration of the counting method is essential, but external validation would add an extra stage in the assessment of reliability. In this respect, data published by the multicenter Cornea Donor Study (CDS) in 2005 is a reference. The aim of the study was to compare ECD determined within a single eye bank, which uses calibrated image analysis software designed for transmitted light microscopy images of organ cultured corneas, with the CDS data determined on specular microscopy images of corneas stored at 4°C. ECD of consecutive corneas retrieved between 2005 and 2013 was determined after exposure to 0.9% NaCl. More than 300 ECs were counted on 3 fields of the central 8 mm. Endothelial cell boundaries were automatically drawn and verified by a skilled technician who performed all necessary corrections. Three thousand fifty-two corneas were analyzed, of which 48.5% of donors were >75 years (the CDS upper age limit). Between 10 and 75 years, the ECD varied according to donor age exactly in the same manner as in the CDS, but was consistently higher by 100 ± 25 cells per square millimeter (P < 0.001). ECD determined by a computer-aided method from transmitted light microscopy images compares favorably with the American CDS reference series. The slight systematic difference on either side of the Atlantic Ocean could be due to (1) differences in counting principles and/or (2) higher shrinkage of the cornea caused by stromal edema in organ culture.
NASA Astrophysics Data System (ADS)
Susiluoto, Jouni; Raivonen, Maarit; Backman, Leif; Laine, Marko; Makela, Jarmo; Peltola, Olli; Vesala, Timo; Aalto, Tuula
2018-03-01
Estimating methane (CH4) emissions from natural wetlands is complex, and the estimates contain large uncertainties. The models used for the task are typically heavily parameterized and the parameter values are not well known. In this study, we perform a Bayesian model calibration for a new wetland CH4 emission model to improve the quality of the predictions and to understand the limitations of such models. The detailed process model that we analyze contains descriptions for CH4 production from anaerobic respiration, CH4 oxidation, and gas transportation by diffusion, ebullition, and the aerenchyma cells of vascular plants. The processes are controlled by several tunable parameters. We use a hierarchical statistical model to describe the parameters and obtain the posterior distributions of the parameters and uncertainties in the processes with adaptive Markov chain Monte Carlo (MCMC), importance resampling, and time series analysis techniques. For the estimation, the analysis utilizes measurement data from the Siikaneva flux measurement site in southern Finland. The uncertainties related to the parameters and the modeled processes are described quantitatively. At the process level, the flux measurement data are able to constrain the CH4 production processes, methane oxidation, and the different gas transport processes. The posterior covariance structures explain how the parameters and the processes are related. Additionally, the flux and flux component uncertainties are analyzed both at the annual and daily levels. The parameter posterior densities obtained provide information regarding importance of the different processes, which is also useful for development of wetland methane emission models other than the square root HelsinkI Model of MEthane buiLd-up and emIssion for peatlands (sqHIMMELI). The hierarchical modeling allows us to assess the effects of some of the parameters on an annual basis. 
The results of the calibration and the cross validation suggest that the early spring net primary production could be used to predict parameters affecting the annual methane production. Even though the calibration is specific to the Siikaneva site, the hierarchical modeling approach is well suited for larger-scale studies and the results of the estimation pave the way for a regional or global-scale Bayesian calibration of wetland emission models.
Gurarie, David; Karl, Stephan; Zimmerman, Peter A; King, Charles H; St Pierre, Timothy G; Davis, Timothy M E
2012-01-01
Agent-based modeling of Plasmodium falciparum infection offers an attractive alternative to the conventional Ross-Macdonald methodology, as it allows simulation of heterogeneous communities subjected to realistic transmission (inoculation patterns). We developed a new agent-based model that accounts for the essential in-host processes: parasite replication and its regulation by innate and adaptive immunity. The model also incorporates a simplified version of antigenic variation by Plasmodium falciparum. We calibrated the model using data from malaria-therapy (MT) studies, and developed a novel calibration procedure that accounts for a deterministic and a pseudo-random component in the observed parasite density patterns. Using the parasite density patterns of 122 MT patients, we generated a large number of calibrated parameters. The resulting data set served as a basis for constructing and simulating heterogeneous agent-based (AB) communities of MT-like hosts. We conducted several numerical experiments subjecting AB communities to realistic inoculation patterns reported from previous field studies, and compared the model output to the observed malaria prevalence in the field. There was overall consistency, supporting the potential of this agent-based methodology to represent transmission in realistic communities. Our approach represents a novel, convenient and versatile method to model Plasmodium falciparum infection.
A Kennicutt-Schmidt relation at molecular cloud scales and beyond
NASA Astrophysics Data System (ADS)
Khoperskov, Sergey A.; Vasiliev, Evgenii O.
2017-06-01
Using N-body/gasdynamic simulations of a Milky Way-like galaxy, we analyse a Kennicutt-Schmidt (KS) relation, Σ_SFR ∝ Σ_gas^N, at different spatial scales. We simulate synthetic observations in CO lines and the ultraviolet (UV) band. We adopt the star formation rate (SFR) defined in two ways: based on free-fall collapse of a molecular cloud, Σ_SFR,cl, and calculated by using a UV flux calibration, Σ_SFR,UV. We study the KS relation for spatially smoothed maps with effective spatial resolution from molecular cloud scales to several hundred parsecs. We find that for spatially and kinematically resolved molecular clouds the Σ_SFR,cl ∝ Σ_gas^N relation follows a power law with index N ≈ 1.4. Using the UV flux as SFR calibrator, we confirm a systematic offset between the Σ_SFR,UV and Σ_gas distributions on scales comparable to molecular cloud sizes. Degrading the resolution of our simulated maps of gas and SFR surface densities, we establish that there is no Σ_SFR,UV-Σ_gas relation below a resolution of ~50 pc. We find a transition range around scales of ~50-120 pc, where the power-law index N increases from 0 to 1-1.8 and saturates for scales larger than ~120 pc. The saturated value of the index depends on the surface gas density threshold and becomes steeper for a higher Σ_gas threshold. Averaging over scales of ≳150 pc, the power-law index N equals 1.3-1.4 for a surface gas density threshold of ~5 M⊙ pc⁻². At scales ≳120 pc, the surface SFR densities determined using CO data and UV flux, Σ_SFR,UV/Σ_SFR,cl, show a discrepancy of about a factor of 3. We argue that this may originate from overestimated (constant) values of the conversion factor, star formation efficiency, or UV calibration used in our analysis.
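Fitting the KS power-law index N reduces to measuring a slope in log-log space. A sketch on synthetic surface densities (the normalization, index, and scatter are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic pixel surface densities obeying Sigma_SFR = A * Sigma_gas^N
# with 0.1 dex lognormal scatter (A and N are illustrative inputs).
N_true, A_true = 1.4, 1e-4
sigma_gas = rng.uniform(5.0, 100.0, 500)                     # Msun / pc^2
sigma_sfr = A_true * sigma_gas**N_true * 10**rng.normal(0.0, 0.1, 500)

# The KS index is the slope of the relation in log-log space.
slope, intercept = np.polyfit(np.log10(sigma_gas), np.log10(sigma_sfr), 1)
```
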
The effect of gas physics on the halo mass function
NASA Astrophysics Data System (ADS)
Stanek, R.; Rudd, D.; Evrard, A. E.
2009-03-01
Cosmological tests based on cluster counts require accurate calibration of the space density of massive haloes, but most calibrations to date have ignored complex gas physics associated with halo baryons. We explore the sensitivity of the halo mass function to baryon physics using two pairs of gas-dynamic simulations that are likely to bracket the true behaviour. Each pair consists of a baseline model involving only gravity and shock heating, and a refined physics model aimed at reproducing the observed scaling of the hot, intracluster gas phase. One pair consists of billion-particle resimulations of the original 500 h⁻¹ Mpc Millennium Simulation of Springel et al., run with the smoothed particle hydrodynamics (SPH) code GADGET-2 and using a refined physics treatment approximated by pre-heating (PH) at high redshift. The other pair are high-resolution simulations from the adaptive-mesh refinement code ART, for which the refined treatment includes cooling, star formation and supernova feedback (CSF). We find that, although the mass functions of the gravity-only (GO) treatments are consistent with the recent calibration of Tinker et al. (2008), both pairs of simulations with refined baryon physics show significant deviations. Relative to the GO case, the masses of ~10¹⁴ h⁻¹ M⊙ haloes in the PH and CSF treatments are shifted by averages of −15 ± 1 and +16 ± 2 per cent, respectively. These mass shifts cause ~30 per cent deviations in number density relative to the Tinker function, significantly larger than the 5 per cent statistical uncertainty of that calibration.
NASA Astrophysics Data System (ADS)
Ma, Rui; Zheng, Chunmiao; Zachara, John M.; Tonkin, Matthew
2012-08-01
A tracer test using both bromide and heat tracers conducted at the Integrated Field Research Challenge site in Hanford 300 Area (300A), Washington, provided an instrument for evaluating the utility of bromide and heat tracers for aquifer characterization. The bromide tracer data were critical to improving the calibration of the flow model complicated by the highly dynamic nature of the flow field. However, most bromide concentrations were obtained from fully screened observation wells, lacking depth-specific resolution for vertical characterization. On the other hand, depth-specific temperature data were relatively simple and inexpensive to acquire. However, temperature-driven fluid density effects influenced heat plume movement. Moreover, the temperature data contained "noise" caused by heating during fluid injection and sampling events. Using the hydraulic conductivity distribution obtained from the calibration of the bromide transport model, the temperature depth profiles and arrival times of temperature peaks simulated by the heat transport model were in reasonable agreement with observations. This suggested that heat can be used as a cost-effective proxy for solute tracers for calibration of the hydraulic conductivity distribution, especially in the vertical direction. However, a heat tracer test must be carefully designed and executed to minimize fluid density effects and sources of noise in temperature data. A sensitivity analysis also revealed that heat transport was most sensitive to hydraulic conductivity and porosity, less sensitive to thermal distribution factor, and least sensitive to thermal dispersion and heat conduction. This indicated that the hydraulic conductivity remains the primary calibration parameter for heat transport.
NASA Astrophysics Data System (ADS)
Li, N.; Yue, X. Y.
2018-03-01
Macroscopic root water uptake models proportional to a root density distribution function (RDDF) are most commonly used to model water uptake by plants. As water uptake is difficult and labor intensive to measure, these models are often calibrated by inverse modeling. Most previous inversion studies assume the RDDF to be constant with depth and time, or dependent on depth only, for simplification. However, under field conditions this function varies with soil type and root growth and thus changes with both depth and time. This study proposes an inverse method to calibrate an RDDF varying in both space and time in unsaturated water flow modeling. To overcome the difficulty imposed by the ill-posedness, the calibration is formulated as an optimization problem in the framework of Tikhonov regularization theory, adding an additional constraint to the objective function. The formulated nonlinear optimization problem is then solved numerically with an efficient algorithm based on the finite element method. The advantage of our method is that the inverse problem is translated into a Tikhonov regularization functional minimization problem and then solved via the variational construction, which circumvents the computational complexity of calculating the sensitivity matrix involved in many derivative-based parameter estimation approaches (e.g., Levenberg-Marquardt optimization). Moreover, the proposed method optimizes the RDDF without any assumed prior form, which makes it applicable to more general root water uptake models. Numerical examples are presented to illustrate the applicability and effectiveness of the proposed method. Finally, discussions on the stability and extension of this method are presented.
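The Tikhonov idea the abstract relies on can be illustrated on a toy linear inverse problem. The sketch below (pure Python; the matrices are illustrative stand-ins, not the paper's root-uptake model) shows how the penalty term λ‖x‖² stabilizes an ill-conditioned least-squares solve.

```python
# Sketch of Tikhonov-regularized inversion: solve
#   min ||A x - b||^2 + lam * ||x||^2
# via the regularized normal equations (A^T A + lam*I) x = A^T b.
def solve(M, v):
    """Gauss-Jordan elimination with partial pivoting for M x = v."""
    n = len(v)
    a = [row[:] + [v[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(a[r][c]))
        a[c], a[p] = a[p], a[c]
        for r in range(n):
            if r != c:
                f = a[r][c] / a[c][c]
                a[r] = [x - f * y for x, y in zip(a[r], a[c])]
    return [a[i][n] / a[i][i] for i in range(n)]

def tikhonov(A, b, lam):
    """Regularized least squares; lam damps the ill-conditioned directions."""
    m, n = len(A), len(A[0])
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) + (lam if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    Atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    return solve(AtA, Atb)

# Nearly singular toy system whose exact solution is [1, 1]:
A = [[1.0, 1.0], [1.0, 1.0001]]
b = [2.0, 2.0001]
x = tikhonov(A, b, lam=1e-6)
```

For this nearly singular system the regularized solve still recovers a solution close to [1, 1], while an unregularized solve of a noisy version of the same system would be dominated by the near-null direction.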
Dosimetric accuracy of Kodak EDR2 film for IMRT verifications.
Childress, Nathan L; Salehpour, Mohammad; Dong, Lei; Bloch, Charles; White, R Allen; Rosen, Isaac I
2005-02-01
Patient-specific intensity-modulated radiotherapy (IMRT) verifications require an accurate two-dimensional dosimeter that is not labor-intensive. We assessed the precision and reproducibility of film calibrations over time, measured the elemental composition of the film, measured the intermittency effect, and measured the dosimetric accuracy and reproducibility of calibrated Kodak EDR2 film for single-beam verifications in a solid water phantom and for full-plan verifications in a Rexolite phantom. Repeated measurements of the film sensitometric curve in a single experiment yielded overall uncertainties in dose of 2.1% local and 0.8% relative to 300 cGy. 547 film calibrations over an 18-month period, exposed to a range of doses from 0 to a maximum of 240 MU or 360 MU and using 6 MV or 18 MV energies, had optical density (OD) standard deviations that were 7%-15% of their average values. This indicates that daily film calibrations are essential when EDR2 film is used to obtain absolute dose results. An elemental analysis of EDR2 film revealed that it contains 60% as much silver and 20% as much bromine as Kodak XV2 film. EDR2 film also has an unusual 1.69:1 silver:halide molar ratio, compared with the XV2 film's 1.02:1 ratio, which may affect its chemical reactions. To test EDR2's intermittency effect, the OD generated by a single 300 MU exposure was compared to the ODs generated by exposing the film 1 MU, 2 MU, and 4 MU at a time to a total of 300 MU. An ion chamber recorded the relative dose of all intermittency measurements to account for machine output variations. Using small MU bursts to expose the film resulted in delivery times of 4 to 14 minutes and lowered the film's OD by approximately 2% for both 6 and 18 MV beams. This effect may result in EDR2 film underestimating absolute doses for patient verifications that require long delivery times. 
After using a calibration to convert EDR2 film's OD to dose values, film measurements agreed within 2% relative difference and 2 mm criteria to ion chamber measurements for both sliding window and step-and-shoot fluence map verifications. Calibrated film results agreed with ion chamber measurements to within 5%/2 mm criteria for transverse-plane full-plan verifications, but were consistently low. When properly calibrated, EDR2 film can be an adequate two-dimensional dosimeter for IMRT verifications, although it may underestimate doses in regions with long exposure times.
NASA Astrophysics Data System (ADS)
Boersma, C.; Bregman, J.; Allamandola, L. J.
2015-06-01
Polycyclic aromatic hydrocarbon (PAH) emission in the Spitzer/IRS spectral map of the northwest photon dominated region (PDR) in NGC 7023 is analyzed. Here, results from fitting the 5.2-14.5 μm spectrum at each pixel using exclusively PAH spectra from the NASA Ames PAH IR Spectroscopic Database (www.astrochem.org/pahdb/) and observed PAH band strength ratios, determined after isolating the PAH bands, are combined. This enables the first quantitative and spectrally consistent calibration of PAH charge proxies. Calibration is straightforward because the 6.2/11.2 μm PAH band strength ratio varies linearly with the ionized fraction (PAH ionization parameter) as determined from the intrinsic properties of the individual PAHs comprising the database. This, in turn, can be related to the local radiation field, electron density, and temperature. From these relations diagnostic templates are developed to deduce the PAH ionization fraction and astronomical environment in other objects. The commonly used 7.7/11.2 μm PAH band strength ratio fails as a charge proxy over a significant fraction of the nebula. The 11.2/12.7 μm PAH band strength ratio, commonly used as a PAH erosion indicator, is revealed to be a better tracer for PAH charge across NGC 7023. Attempts to calibrate the 12.7/11.2 μm PAH band strength ratio against the PAH hydrogen adjacency ratio (duo+trio)/solo reveal, unexpectedly, that the two are anti-correlated. This work both validates and extends the results from Paper I and Paper II.
NASA Technical Reports Server (NTRS)
Sun, Xiaoli; Neumann, Gregory A.; Abshire, James B.; Zuber, Maria T.
2005-01-01
The Mars Orbiter Laser Altimeter not only provides surface topography from the laser pulse time-of-flight, but also two radiometric measurements, the active measurement of transmitted and reflected laser pulse energy, and the passive measurement of reflected solar illumination. The passive radiometry measurement is accomplished in a novel fashion by monitoring the noise density at the output of the photodetector and solving for the amount of background light. The passive radiometry measurements provide images of Mars at 1064-nm wavelength over a 2 nm bandwidth with sub-km spatial resolution and with 2% or better precision under full illumination. We describe in this paper the principle of operation, the receiver mathematical model, its calibration, and performance assessment from sample measurement data.
Calibrating Item Families and Summarizing the Results Using Family Expected Response Functions
ERIC Educational Resources Information Center
Sinharay, Sandip; Johnson, Matthew S.; Williamson, David M.
2003-01-01
Item families, which are groups of related items, are becoming increasingly popular in complex educational assessments. For example, in automatic item generation (AIG) systems, a test may consist of multiple items generated from each of a number of item models. Item calibration or scoring for such an assessment requires fitting models that can…
Landsat-7 Enhanced Thematic Mapper plus radiometric calibration
Markham, B.L.; Boncyk, Wayne C.; Helder, D.L.; Barker, J.L.
1997-01-01
Landsat-7 is currently being built and tested for launch in 1998. The Enhanced Thematic Mapper Plus (ETM+) sensor for Landsat-7, a derivative of the highly successful Thematic Mapper (TM) sensors on Landsats 4 and 5, and the Landsat-7 ground system are being built to provide enhanced radiometric calibration performance. In addition, regular vicarious calibration campaigns are being planned to provide additional information for calibration of the ETM+ instrument. The primary upgrades to the instrument include the addition of two solar calibrators: the full aperture solar calibrator, a deployable diffuser, and the partial aperture solar calibrator, a passive device that allows the ETM+ to image the sun. The ground processing incorporates for the first time an off-line facility, the Image Assessment System (IAS), to perform calibration, evaluation and analysis. Within the IAS, processing capabilities include radiometric artifact characterization and correction, radiometric calibration from the multiple calibrator sources, inclusion of results from vicarious calibration and statistical trending of calibration data to improve calibration estimation. The Landsat Product Generation System, the portion of the ground system responsible for producing calibrated products, will incorporate the radiometric artifact correction algorithms and will use the calibration information generated by the IAS. This calibration information will also be supplied to ground processing systems throughout the world.
Measurements of Flow Turbulence in the NASA Langley Transonic Dynamics Tunnel
NASA Technical Reports Server (NTRS)
Wiesman, Carol D.; Sleeper, Robert K.
2005-01-01
An assessment of the flow turbulence in the NASA Langley Transonic Dynamics Tunnel (TDT) was conducted during calibration activities following the facility conversion from a Freon-12 heavy-gas test medium to an R134a heavy-gas test medium. Total pressure, static pressure, and acoustic pressure levels were measured at several locations on a sting-mounted rake. The test measured wall static pressures at several locations, although this paper presents only those from one location. The test used two data acquisition systems, one sampling at 1000 Hz and the second sampling at 125 000 Hz, for acquiring time-domain data. This paper presents standard deviations and power spectral densities of the turbulence at points throughout the wind tunnel envelope in air and R134a. The objective of this paper is to present the turbulence characteristics for the test section. No attempt is made to assess the causes of the turbulence. The present paper examines turbulence in terms of pressure fluctuations; Reference 1 examined tunnel turbulence in terms of velocity fluctuations.
ERIC Educational Resources Information Center
Dinsmore, Daniel L.; Parkinson, Meghan M.
2013-01-01
Although calibration has been widely studied, questions remain about how best to capture confidence ratings, how to calculate continuous variable calibration indices, and on what exactly students base their reported confidence ratings. Undergraduates in a research methods class completed a prior knowledge assessment, two sets of readings and…
Device for determining frost depth and density
NASA Technical Reports Server (NTRS)
Huneidi, F.
1983-01-01
A hand held device having a forward open window portion adapted to be pushed downwardly into the frost on a surface, and a rear container portion adapted to receive the frost removed from the window area are described. A graph on a side of the container enables an observer to determine the density of the frost from certain measurements noted. The depth of the frost is noted from calibrated lines on the sides of the open window portion.
SU-C-213-02: Characterizing 3D Printing in the Fabrication of Variable Density Phantoms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madamesila, J; McGeachy, P; Villarreal-Barajas, J
Purpose: In this work, we present characterization, process flow, quality control and application of 3D-fabricated low density phantoms for radiotherapy quality assurance. Methods: A Rostock delta 3D printer using polystyrene filament of diameter 1.75 mm was used to print geometric volumes of 2×2×1 cm³ of varying densities. The variable densities of 0.1 to 0.75 g/cm³ were created by modulating the infill. A computed tomography (CT) scan was performed to establish an infill-density calibration curve as well as characterize the quality of the print, such as uniformity and the infill pattern. The time required to print these volumes was also recorded. Using the calibration, two low density cones (0.19, 0.52 g/cm³) were printed and benchmarked against commercially available phantoms. The dosimetric validation of the low density scaling of the Anisotropic Analytical Algorithm (AAA) was performed by using a 0.5 g/cm³ slab of 10×10×2.4 cm³ with EBT3 GafChromic film. The gamma analysis at 3%/3mm criteria was compared for the measured and computed dose planes. Results: Analysis of the volume of air pockets in the infill resulted in reasonable uniformity for densities 0.4 to 0.75 g/cm³. Printed phantoms with densities below 0.4 g/cm³ exhibited a higher ratio of air to polystyrene, resulting in large non-uniformity. Compared to the commercial inserts, good agreement was observed only for the printed 0.52 g/cm³ cone. Dosimetric comparison for a printed low density volume placed between layers of solid water resulted in >95% gamma agreement between AAA-calculated dose planes and measured EBT3 films for a 6 MV 5×5 cm² clinical beam. The comparison showed disagreement in the penumbra region. Conclusion: 3D printing technology opens the door to desktop fabrication of variable density phantoms at economical prices, in an efficient manner, for the quality assurance needs of a small clinic.
EUNIS Underflight Calibrations of CDS, EIT, TRACE, EIS, and EUVI
NASA Technical Reports Server (NTRS)
Thomas, Roger J.; Wang, Tongjiang; Rabin, Douglas M.; Jess, David B.
2008-01-01
The Extreme-Ultraviolet Normal-Incidence Spectrograph (EUNIS) is a sounding rocket instrument that obtains imaged high-resolution solar spectra. It has now had two successful flights, on 2006 April 12 and 2007 November 16, providing data to support underflight calibrations for a number of orbiting solar experiments on both occasions. A regular part of each campaign is the end-to-end radiometric calibration of the rocket payload carried out at RAL in the UK, using the same facility that provided pre-flight CDS and EIS calibrations. The measurements, traceable to primary radiometric standards, can establish the absolute EUNIS response within a total uncertainty of 10% over its full longwave bandpass of 300-370 Å. During each EUNIS flight, coordinated observations are made of overlapping solar locations by all participating space experiments, and identified by subsequent image co-registrations, allowing the EUNIS calibrations to be applied to these other instruments as well. The calibration transfer is straightforward for wavelengths within the EUNIS LW bandpass, and is extended to other wavelengths by means of a series of temperature- and density-insensitive line ratios, with one line of each pair in the calibrated band and the other in the transfer band. In this way, the EUNIS-06 flight is able to update the radiometric calibrations of CDS NIS1 (and 2nd-order NIS2 near 2×304 Å), all four channels of EIT, and the three EUV channels of TRACE. The EUNIS-07 flight will further update those missions, as well as both channels of Hinode/EIS and all four channels of STEREO/SECCHI/EUVI. Future EUNIS flights have been proposed that will continue this underflight calibration service. EUNIS is supported by the NASA Heliophysics Division through its Low Cost Access to Space Program in Solar and Heliospheric Physics.
Dual-angle, self-calibrating Thomson scattering measurements in RFX-MOD
NASA Astrophysics Data System (ADS)
Giudicotti, L.; Pasqualotto, R.; Fassina, A.
2014-11-01
In the multipoint Thomson scattering (TS) system of the RFX-MOD experiment the signals from a few spatial positions can be observed simultaneously under two different scattering angles. In addition, the detection system uses optical multiplexing by signal delays in fiber optic cables of different length, so that the two sets of TS signals can be observed by the same polychromator. Owing to the dependence of the TS spectrum on the scattering angle, it was then possible to implement self-calibrating TS measurements in which the electron temperature Te, the electron density ne, and the relative calibration coefficients Ci of the spectral channel sensitivities were simultaneously determined by a suitable analysis of the two sets of TS data collected at the two angles. The analysis has shown that, in spite of the small difference between the spectra obtained at the two angles, reliable values of the relative calibration coefficients can be determined from good S/N dual-angle spectra recorded in a few tens of plasma shots. This analysis suggests that in RFX-MOD the calibration of the entire set of TS polychromators by means of the similar, dual-laser (Nd:YAG/Nd:YLF) TS technique should be feasible.
Linear positioning laser calibration setup of CNC machine tools
NASA Astrophysics Data System (ADS)
Sui, Xiulin; Yang, Congjing
2002-10-01
The linear positioning laser calibration setup of CNC machine tools is capable of executing machine tool laser calibration and backlash compensation. Using this setup, hole locations on CNC machine tools will be correct and machine tool geometry will be evaluated and adjusted. Machine tool laser calibration and backlash compensation is a simple and straightforward process. First, the setup 'finds' the stroke limits of the axis and the laser head is brought into correct alignment. Second, the machine axis is moved to the other extreme and the laser head alignment is refined using rotation and elevation adjustments. Finally, the machine is moved to the start position and final alignment is verified. The stroke of the machine and the machine compensation interval dictate the amount of data required for each axis. These factors determine the amount of time required for a thorough compensation of the linear positioning accuracy. The Laser Calibrator System monitors the material temperature and the air density; this takes into consideration machine thermal growth and laser beam frequency. This linear positioning laser calibration setup can be used on CNC machine tools, CNC lathes, horizontal machining centers and vertical machining centers.
NASA Astrophysics Data System (ADS)
Piniewski, Mikołaj
2016-05-01
The objective of this study was to apply a previously developed large-scale and high-resolution SWAT model of the Vistula and the Odra basins, calibrated with the focus of natural flow simulation, in order to assess the impact of three different dam reservoirs on streamflow using the Indicators of Hydrologic Alteration (IHA). A tailored spatial calibration approach was designed, in which calibration was focused on a large set of relatively small non-nested sub-catchments with semi-natural flow regime. These were classified into calibration clusters based on the flow statistics similarity. After performing calibration and validation that gave overall positive results, the calibrated parameter values were transferred to the remaining part of the basins using an approach based on hydrological similarity of donor and target catchments. The calibrated model was applied in three case studies with the purpose of assessing the effect of dam reservoirs (Włocławek, Siemianówka and Czorsztyn Reservoirs) on streamflow alteration. Both the assessment based on gauged streamflow (Before-After design) and the one based on simulated natural streamflow showed large alterations in selected flow statistics related to magnitude, duration, high and low flow pulses and rate of change. Some benefits of using a large-scale and high-resolution hydrological model for the assessment of streamflow alteration include: (1) providing an alternative or complementary approach to the classical Before-After designs, (2) isolating the climate variability effect from the dam (or any other source of alteration) effect, (3) providing a practical tool that can be applied at a range of spatial scales over large area such as a country, in a uniform way. Thus, presented approach can be applied for designing more natural flow regimes, which is crucial for river and floodplain ecosystem restoration in the context of the European Union's policy on environmental flows.
Hot-wire calibration in subsonic/transonic flow regimes
NASA Technical Reports Server (NTRS)
Nagabushana, K. A.; Ash, Robert L.
1995-01-01
A different approach for calibrating hot-wires, which simplifies the calibration procedure and reduces the tunnel run-time by an order of magnitude, was sought. In general, it is accepted that the directly measurable quantities in any flow are velocity, density, and total temperature. Very few facilities have the capability of varying the total temperature over an adequate range. However, if the overheat temperature parameter, a_w, is used to calibrate the hot-wire, then the directly measurable quantity, voltage, will be a function of the flow variables and the overheat parameter, i.e., E = f(u, ρ, a_w, T_w), where a_w will contain the needed total temperature information. In this report, various methods of evaluating sensitivities with different dependent and independent variables to calibrate a 3-wire hot-wire probe using a constant temperature anemometer (CTA) in subsonic/transonic flow regimes are presented. The advantage of using a_w as the independent variable instead of total temperature, T_o, or overheat temperature parameter, tau, is that while running a calibration test it is not necessary to know the recovery factor or the coefficients of the wire resistance-temperature relationship for a given probe. It was deduced that the method employing the relationship E = f(u, ρ, a_w) should result in the most accurate calibration of hot-wire probes. Any other method would require additional measurements. This method will also allow calibration and determination of accurate temperature fluctuation information even in atmospheric wind tunnels, where there is at present no ability to obtain temperature sensitivity information. This technique greatly simplifies the calibration process for hot-wires, provides the calibration information needed to obtain temperature fluctuations, and reduces both the tunnel run-time and the test matrix required to calibrate hot-wires.
Some of the results using the above techniques are presented in an appendix.
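As a rough illustration of a calibration relationship of the form E = f(u, ρ, a_w), the sketch below fits a King's-law-style model E² = A + B(ρu)^n at a fixed overheat. The exponent and the synthetic calibration points are assumptions for illustration, not values from the report.

```python
# Sketch: fit A, B in the King's-law-style model  E^2 = A + B * (rho*u)^n
# at fixed overheat a_w, by ordinary linear least squares in x = (rho*u)^n.
def fit_kings_law(rho_u, E, n=0.45):
    """Least-squares estimates of A, B from mass-flux and voltage samples."""
    x = [ru ** n for ru in rho_u]
    y = [e * e for e in E]
    m = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    B = (m * sxy - sx * sy) / (m * sxx - sx * sx)
    A = (sy - B * sx) / m
    return A, B

# Synthetic calibration points generated from assumed A = 1.2, B = 0.8:
rho_u = [5.0, 10.0, 20.0, 40.0]
E = [(1.2 + 0.8 * ru ** 0.45) ** 0.5 for ru in rho_u]
A, B = fit_kings_law(rho_u, E)
```

Because the synthetic voltages are generated from the model itself, the fit recovers A and B essentially exactly; with real CTA data the residuals would reflect measurement noise and model error.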
A calibration method for patient specific IMRT QA using a single therapy verification film
Shukla, Arvind Kumar; Oinam, Arun S.; Kumar, Sanjeev; Sandhu, I.S.; Sharma, S.C.
2013-01-01
Aim The aim of the present study is to develop and verify a single-film calibration procedure for use in intensity-modulated radiation therapy (IMRT) quality assurance. Background Radiographic films have been regularly used in routine commissioning of treatment modalities and verification of treatment planning systems (TPS). Film-based radiation dosimetry can give absolute two-dimensional dose distributions and is preferred for IMRT quality assurance. A single therapy verification film gives a quick and reliable method for IMRT verification. Materials and methods A single extended dose range (EDR2) film was used to generate the sensitometric curve of film optical density versus radiation dose. The EDR2 film was exposed with nine 6 cm × 6 cm fields of a 6 MV photon beam from a medical linear accelerator at 5 cm depth in a solid water phantom. The nine regions of the single film were exposed with radiation doses ranging from 10 to 362 cGy. The actual dose measurements inside the field regions were performed using a 0.6 cm³ ionization chamber. The exposed film was processed after irradiation and scanned with a VIDAR film scanner, and the optical density was noted for each region. Ten IMRT plans of head and neck carcinoma were verified using a dynamic IMRT technique and evaluated against the TPS-calculated dose distribution using the gamma index method. Results A sensitometric curve was generated from the single film exposed at nine field regions to allow quantitative dose verification of IMRT treatments. The radiation scatter factor was observed to decrease exponentially with increasing distance from the centre of each field region. The IMRT plans verified against the calibration curve using the gamma index method were found to be within acceptable criteria.
Conclusion The single-film method proved superior to the traditional calibration method and provides fast daily film calibration for highly accurate IMRT verification. PMID:24416558
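A minimal sketch of the single-film idea: the nine (dose, OD) points from one film define the sensitometric curve, and measured ODs are then converted to dose by piecewise-linear interpolation. The sample points below are illustrative, not the paper's data.

```python
# Sketch: build an OD -> dose lookup from one film's sensitometric points
# using monotone piecewise-linear interpolation.
def make_od_to_dose(od_points, dose_points):
    """Return a function mapping optical density to dose (cGy)."""
    pairs = sorted(zip(od_points, dose_points))
    def od_to_dose(od):
        for (o0, d0), (o1, d1) in zip(pairs, pairs[1:]):
            if o0 <= od <= o1:
                return d0 + (d1 - d0) * (od - o0) / (o1 - o0)
        raise ValueError("OD outside calibrated range")
    return od_to_dose

# Illustrative nine-region calibration (doses in cGy, matching the 10-362 range):
doses = [10, 50, 100, 150, 200, 250, 300, 330, 362]
ods = [0.05, 0.22, 0.43, 0.63, 0.82, 1.00, 1.17, 1.26, 1.35]
od_to_dose = make_od_to_dose(ods, doses)
dose_at_mid = od_to_dose(0.91)  # interpolated between the 200 and 250 cGy points
```

In practice the sensitometric curve is nonlinear, so a denser set of calibration points (or a fitted analytic curve) would be used; the interpolation structure is the same.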
Solid energy calibration standards for P K-edge XANES: electronic structure analysis of PPh4Br.
Blake, Anastasia V; Wei, Haochuan; Donahue, Courtney M; Lee, Kyounghoon; Keith, Jason M; Daly, Scott R
2018-03-01
P K-edge X-ray absorption near-edge structure (XANES) spectroscopy is a powerful method for analyzing the electronic structure of organic and inorganic phosphorus compounds. Like all XANES experiments, P K-edge XANES requires well defined and readily accessible calibration standards for energy referencing so that spectra collected at different beamlines or under different conditions can be compared. This is especially true for ligand K-edge X-ray absorption spectroscopy, which has well established energy calibration standards for Cl (Cs2CuCl4) and S (Na2S2O3·5H2O), but not neighboring P. This paper presents a review of common P K-edge XANES energy calibration standards and analysis of PPh4Br as a potential alternative. The P K-edge XANES region of commercially available PPh4Br revealed a single, highly resolved pre-edge feature with a maximum at 2146.96 eV. PPh4Br also showed no evidence of photodecomposition when repeatedly scanned over the course of several days. In contrast, we found that PPh3 rapidly decomposes under identical conditions. Density functional theory calculations performed on PPh3 and PPh4+ revealed large differences in the molecular orbital energies that were ascribed to differences in the phosphorus oxidation state (III versus V) and molecular charge (neutral versus +1). Time-dependent density functional theory calculations corroborated the experimental data and allowed the spectral features to be assigned. The first pre-edge feature in the P K-edge XANES spectrum of PPh4Br was assigned to P 1s → P-C π* transitions, whereas those at higher energy were P 1s → P-C σ*. Overall, the analysis suggests that PPh4Br is an excellent alternative to other solid energy calibration standards commonly used in P K-edge XANES experiments.
NASA Technical Reports Server (NTRS)
Cohen, Martin; Witteborn, Fred C.; Carbon, Duane F.; Davies, John K.; Wooden, Diane H.; Bregman, Jesse D.
1996-01-01
We present five new absolutely calibrated continuous stellar spectra constructed as far as possible from spectral fragments observed from the ground, the Kuiper Airborne Observatory (KAO), and the IRAS Low Resolution Spectrometer. These stars (alpha Boo, gamma Dra, alpha Cet, gamma Cru, and mu UMa) augment our six published, absolutely calibrated spectra of K and early-M giants. All spectra have a common calibration pedigree. A revised composite for alpha Boo has been constructed from higher quality spectral fragments than our previously published one. The spectrum of gamma Dra was created in direct response to the needs of instruments aboard the Infrared Space Observatory (ISO); this star's location near the north ecliptic pole renders it highly visible throughout the mission. We compare all our low-resolution composite spectra with Kurucz model atmospheres and find good agreement in shape, with the obvious exception of the SiO fundamental, still lacking in current grids of model atmospheres. The CO fundamental seems slightly too deep in these models, but this could reflect our use of generic models with solar metal abundances rather than models specific to the metallicities of the individual stars. Angular diameters derived from these spectra and models are in excellent agreement with the best observed diameters. The ratio of our adopted Sirius and Vega models is vindicated by spectral observations. We compare IRAS fluxes predicted from our cool stellar spectra with those observed and conclude that, at 12 and 25 microns, flux densities measured by IRAS should be revised downwards by about 4.1% and 5.7%, respectively, for consistency with our absolute calibration. We have provided extrapolated continuum versions of these spectra to 300 microns, in direct support of ISO (PHT and LWS instruments). These spectra are consistent with IRAS flux densities at 60 and 100 microns.
Assessment of MODIS RSB Detector Uniformity Using Deep Convective Clouds
NASA Technical Reports Server (NTRS)
Chang, Tiejun; Xiong, Xiaoxiong (Jack); Angal, Amit; Mu, Qiaozhen
2016-01-01
For satellite sensors, the striping observed in images is typically associated with relative gain differences among multiple detectors derived from the calibration. A method using deep convective cloud (DCC) measurements to assess the difference among detectors after calibration is proposed and demonstrated for select reflective solar bands (RSBs) of the Moderate Resolution Imaging Spectroradiometer (MODIS). Each MODIS RSB detector is calibrated independently using a solar diffuser (SD). Although the SD is expected to accurately characterize detector response, the uncertainties associated with SD degradation and characterization result in inadequacies in the estimation of each detector's gain. This work takes advantage of the DCC technique to assess detector uniformity and scan mirror side differences for the RSBs. The detector differences for Terra MODIS Collection 6 are less than 1% for bands 1, 3-5, and 18 and up to 2% for bands 6, 19, and 26. The largest difference is up to 4% for band 7. Most Aqua bands have detector differences less than 0.5%, except bands 19 and 26 with up to 1.5%. Normally, large differences occur for edge detectors. The long-term trending shows seasonal oscillations in detector differences for some bands, which are correlated with the instrument temperature. The detector uniformities were evaluated for both unaggregated and aggregated detectors for MODIS band 1 and bands 3-7, and their consistency was verified. The assessment results were validated by applying a direct correction to reflectance images. These assessments can lead to improvements to the calibration algorithm and therefore a reduction in striping observed in the calibrated imagery.
The Measurement of Magnetic Fields
ERIC Educational Resources Information Center
Berridge, H. J. J.
1973-01-01
Discusses five experimental methods used by senior high school students to provide an accurate calibration curve of magnet current against the magnetic flux density produced by an electromagnet. Compares the relative merits of the five methods, both as measurements and from an educational viewpoint. (JR)
Single x-ray transmission system for bone mineral density determination
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jimenez-Mendoza, Daniel; Vargas-Vazquez, Damian; Espinosa-Arbelaez, Diego G.
2011-12-15
Bones are the support of the body. They are composed of many inorganic compounds and other organic materials that all together can be used to determine the mineral density of the bones. The bone mineral density is a measure index that is widely used as an indicator of the health of the bone. A typical manner to evaluate the quality of the bone is a densitometry study; a dual x-ray absorptiometry system based study that has been widely used to assess the mineral density of some animals' bones. However, despite the success stories of utilizing these systems in many different applications, it is a very expensive method that requires frequent calibration processes to work properly. Moreover, its usage in small species applications (e.g., rodents) has not been quite demonstrated yet. Following this argument, it is suggested that there is a need for an instrument that would perform such a task in a more reliable and economical manner. Therefore, in this paper we explore the possibility to develop a new, affordable, and reliable single x-ray absorptiometry system. The method consists of utilizing a single x-ray source, an x-ray image sensor, and a computer platform that all together, as a whole, will allow us to calculate the mineral density of the bone. Utilizing an x-ray transmission theory modified through a version of the Lambert-Beer law equation, a law that expresses the relationship among the energy absorbed, the thickness, and the absorption coefficient of the sample at the x-rays wavelength to calculate the mineral density of the bone can be advantageous. Having determined the parameter equation that defines the ratio of the pixels in radiographies and the bone mineral density [measured in mass per unit of area (g/cm²)], we demonstrated the utility of our novel methodology by calculating the mineral density of Wistar rats' femur bones.
Single x-ray transmission system for bone mineral density determination
NASA Astrophysics Data System (ADS)
Jimenez-Mendoza, Daniel; Espinosa-Arbelaez, Diego G.; Giraldo-Betancur, Astrid L.; Hernandez-Urbiola, Margarita I.; Vargas-Vazquez, Damian; Rodriguez-Garcia, Mario E.
2011-12-01
Bones are the support of the body. They are composed of many inorganic compounds and other organic materials that all together can be used to determine the mineral density of the bones. The bone mineral density is a measure index that is widely used as an indicator of the health of the bone. A typical manner to evaluate the quality of the bone is a densitometry study; a dual x-ray absorptiometry system based study that has been widely used to assess the mineral density of some animals' bones. However, despite the success stories of utilizing these systems in many different applications, it is a very expensive method that requires frequent calibration processes to work properly. Moreover, its usage in small species applications (e.g., rodents) has not been quite demonstrated yet. Following this argument, it is suggested that there is a need for an instrument that would perform such a task in a more reliable and economical manner. Therefore, in this paper we explore the possibility to develop a new, affordable, and reliable single x-ray absorptiometry system. The method consists of utilizing a single x-ray source, an x-ray image sensor, and a computer platform that all together, as a whole, will allow us to calculate the mineral density of the bone. Utilizing an x-ray transmission theory modified through a version of the Lambert-Beer law equation, a law that expresses the relationship among the energy absorbed, the thickness, and the absorption coefficient of the sample at the x-rays wavelength to calculate the mineral density of the bone can be advantageous. Having determined the parameter equation that defines the ratio of the pixels in radiographies and the bone mineral density [measured in mass per unit of area (g/cm2)], we demonstrated the utility of our novel methodology by calculating the mineral density of Wistar rats' femur bones.
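The Lambert-Beer relation invoked in the two abstracts above can be sketched as follows. This is a generic illustration, not the authors' parameterization: writing the transmitted intensity as I = I0·exp(−μm·ρA), with μm the mass attenuation coefficient (cm²/g), the areal density ρA (g/cm²) follows from the measured intensity ratio. The numerical values are hypothetical.

```python
import numpy as np

def areal_density(i_transmitted, i_incident, mu_mass):
    """Areal density rho_A in g/cm^2 from the Lambert-Beer law,
    I = I0 * exp(-mu_mass * rho_A), solved for rho_A."""
    return -np.log(i_transmitted / i_incident) / mu_mass

# Illustrative values: 40% transmission through bone with an
# assumed mass attenuation coefficient of 0.5 cm^2/g.
rho = areal_density(0.4, 1.0, 0.5)  # areal density in g/cm^2
```

In an imaging system the same formula would be applied per pixel, with the pixel gray levels standing in for the intensity ratio after flat-field calibration.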
Methodology for the development and calibration of the SCI-QOL item banks
Tulsky, David S.; Kisala, Pamela A.; Victorson, David; Choi, Seung W.; Gershon, Richard; Heinemann, Allen W.; Cella, David
2015-01-01
Objective: To develop a comprehensive, psychometrically sound, and conceptually grounded patient reported outcomes (PRO) measurement system for individuals with spinal cord injury (SCI). Methods: Individual interviews (n = 44) and focus groups (n = 65 individuals with SCI and n = 42 SCI clinicians) were used to select key domains for inclusion and to develop PRO items. Verbatim items from other cutting-edge measurement systems (i.e. PROMIS, Neuro-QOL) were included to facilitate linkage and cross-population comparison. Items were field tested in a large sample of individuals with traumatic SCI (n = 877). Dimensionality was assessed with confirmatory factor analysis. Local item dependence and differential item functioning were assessed, and items were calibrated using the item response theory (IRT) graded response model. Finally, computer adaptive tests (CATs) and short forms were administered in a new sample (n = 245) to assess test-retest reliability and stability. Participants and Procedures: A calibration sample of 877 individuals with traumatic SCI across five SCI Model Systems sites and one Department of Veterans Affairs medical center completed SCI-QOL items in interview format. Results: We developed 14 unidimensional calibrated item banks and 3 calibrated scales across physical, emotional, and social health domains. When combined with the five Spinal Cord Injury – Functional Index physical function banks, the final SCI-QOL system consists of 22 IRT-calibrated item banks/scales. Item banks may be administered as CATs or short forms. Scales may be administered in a fixed-length format only. Conclusions: The SCI-QOL measurement system provides SCI researchers and clinicians with a comprehensive, relevant and psychometrically robust system for measurement of physical-medical, physical-functional, emotional, and social outcomes. All SCI-QOL instruments are freely available on Assessment CenterSM. PMID:26010963
Methodology for the development and calibration of the SCI-QOL item banks.
Tulsky, David S; Kisala, Pamela A; Victorson, David; Choi, Seung W; Gershon, Richard; Heinemann, Allen W; Cella, David
2015-05-01
To develop a comprehensive, psychometrically sound, and conceptually grounded patient reported outcomes (PRO) measurement system for individuals with spinal cord injury (SCI). Individual interviews (n=44) and focus groups (n=65 individuals with SCI and n=42 SCI clinicians) were used to select key domains for inclusion and to develop PRO items. Verbatim items from other cutting-edge measurement systems (i.e. PROMIS, Neuro-QOL) were included to facilitate linkage and cross-population comparison. Items were field tested in a large sample of individuals with traumatic SCI (n=877). Dimensionality was assessed with confirmatory factor analysis. Local item dependence and differential item functioning were assessed, and items were calibrated using the item response theory (IRT) graded response model. Finally, computer adaptive tests (CATs) and short forms were administered in a new sample (n=245) to assess test-retest reliability and stability. A calibration sample of 877 individuals with traumatic SCI across five SCI Model Systems sites and one Department of Veterans Affairs medical center completed SCI-QOL items in interview format. We developed 14 unidimensional calibrated item banks and 3 calibrated scales across physical, emotional, and social health domains. When combined with the five Spinal Cord Injury--Functional Index physical function banks, the final SCI-QOL system consists of 22 IRT-calibrated item banks/scales. Item banks may be administered as CATs or short forms. Scales may be administered in a fixed-length format only. The SCI-QOL measurement system provides SCI researchers and clinicians with a comprehensive, relevant and psychometrically robust system for measurement of physical-medical, physical-functional, emotional, and social outcomes. All SCI-QOL instruments are freely available on Assessment CenterSM.
NASA Technical Reports Server (NTRS)
Boyce, L.
1992-01-01
A probabilistic general material strength degradation model has been developed for structural components of aerospace propulsion systems subjected to diverse random effects. The model has been implemented in two FORTRAN programs, PROMISS (Probabilistic Material Strength Simulator) and PROMISC (Probabilistic Material Strength Calibrator). PROMISS calculates the random lifetime strength of an aerospace propulsion component due to as many as eighteen diverse random effects. Results are presented in the form of probability density functions and cumulative distribution functions of lifetime strength. PROMISC calibrates the model by calculating the values of empirical material constants.
Williams, Richard M.; Aalseth, C. E.; Brandenberger, J. M.; ...
2017-02-17
Here, this paper describes the generation of 39Ar, via reactor irradiation of potassium carbonate, followed by quantitative analysis (length-compensated proportional counting) to yield two calibration standards that are respectively 50 and 3 times atmospheric background levels. Measurements were performed in Pacific Northwest National Laboratory's shallow underground counting laboratory studying the effect of gas density on beta-transport; these results are compared with simulation. The total expanded uncertainty of the specific activity for the ~50 × 39Ar in P10 standard is 3.6% (k=2).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bohm, P., E-mail: bohm@ipp.cas.cz; Bilkova, P.; Melich, R.
2014-11-15
The core Thomson scattering diagnostic (TS) on the COMPASS tokamak was put in operation and reported earlier. Implementation of edge TS, with spatial resolution along the laser beam up to ∼1/100 of the tokamak minor radius, is presented now. The procedure for spatial calibration and alignment of both core and edge systems is described. Several further upgrades of the TS system, like a triggering unit and piezo motor driven vacuum window shutter, are introduced as well. The edge TS system, together with the core TS, is now in routine operation and provides electron temperature and density profiles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Richard M.; Aalseth, C. E.; Brandenberger, J. M.
Here, this paper describes the generation of 39Ar, via reactor irradiation of potassium carbonate, followed by quantitative analysis (length-compensated proportional counting) to yield two calibration standards that are respectively 50 and 3 times atmospheric background levels. Measurements were performed in Pacific Northwest National Laboratory's shallow underground counting laboratory studying the effect of gas density on beta-transport; these results are compared with simulation. The total expanded uncertainty of the specific activity for the ~50 × 39Ar in P10 standard is 3.6% (k=2).
2013-01-01
Background: The extent to which psychosocial and diet behavior factors affect dietary self-report remains unclear. We examine the contribution of these factors to measurement error of self-report. Methods: In 450 postmenopausal women in the Women's Health Initiative Observational Study doubly labeled water and urinary nitrogen were used as biomarkers of objective measures of total energy expenditure and protein. Self-report was captured from food frequency questionnaire (FFQ), four day food record (4DFR) and 24 hr. dietary recall (24HR). Using regression calibration we estimated bias of self-reported dietary instruments including psychosocial factors from the Stunkard-Sorenson Body Silhouettes for body image perception, the Crowne-Marlowe Social Desirability Scale, and the Three Factor Eating Questionnaire (R-18) for cognitive restraint for eating, uncontrolled eating, and emotional eating. We included a diet behavior factor on number of meals eaten at home using the 4DFR. Results: Three categories were defined for each of the six psychosocial and diet behavior variables (low, medium, high). Participants with high social desirability scores were more likely to under-report on the FFQ for energy (β = -0.174, SE = 0.054, p < 0.05) and protein intake (β = -0.142, SE = 0.062, p < 0.05) compared to participants with low social desirability scores. Participants consuming a high percentage of meals at home were less likely to under-report on the FFQ for energy (β = 0.181, SE = 0.053, p < 0.05) and protein (β = 0.127, SE = 0.06, p < 0.05) compared to participants consuming a low percentage of meals at home. In the calibration equations combining FFQ, 4DFR, 24HR with age, body mass index, race, and the psychosocial and diet behavior variables, the six psychosocial and diet variables explained 1.98%, 2.24%, and 2.15% of biomarker variation for energy, protein, and protein density respectively. 
The variations explained are significantly different between the calibration equations with or without the six psychosocial and diet variables for protein density (p = 0.02), but not for energy (p = 0.119) or protein intake (p = 0.077). Conclusions: The addition of psychosocial and diet behavior factors to calibration equations significantly increases the amount of total variance explained for protein density and their inclusion would be expected to strengthen the precision of calibration equations correcting self-report for measurement error. Trial registration: ClinicalTrials.gov identifier: NCT00000611 PMID:23679960
Mossavar-Rahmani, Yasmin; Tinker, Lesley F; Huang, Ying; Neuhouser, Marian L; McCann, Susan E; Seguin, Rebecca A; Vitolins, Mara Z; Curb, J David; Prentice, Ross L
2013-05-16
The extent to which psychosocial and diet behavior factors affect dietary self-report remains unclear. We examine the contribution of these factors to measurement error of self-report. In 450 postmenopausal women in the Women's Health Initiative Observational Study doubly labeled water and urinary nitrogen were used as biomarkers of objective measures of total energy expenditure and protein. Self-report was captured from food frequency questionnaire (FFQ), four day food record (4DFR) and 24 hr. dietary recall (24HR). Using regression calibration we estimated bias of self-reported dietary instruments including psychosocial factors from the Stunkard-Sorenson Body Silhouettes for body image perception, the Crowne-Marlowe Social Desirability Scale, and the Three Factor Eating Questionnaire (R-18) for cognitive restraint for eating, uncontrolled eating, and emotional eating. We included a diet behavior factor on number of meals eaten at home using the 4DFR. Three categories were defined for each of the six psychosocial and diet behavior variables (low, medium, high). Participants with high social desirability scores were more likely to under-report on the FFQ for energy (β = -0.174, SE = 0.054, p < 0.05) and protein intake (β = -0.142, SE = 0.062, p < 0.05) compared to participants with low social desirability scores. Participants consuming a high percentage of meals at home were less likely to under-report on the FFQ for energy (β = 0.181, SE = 0.053, p < 0.05) and protein (β = 0.127, SE = 0.06, p < 0.05) compared to participants consuming a low percentage of meals at home. In the calibration equations combining FFQ, 4DFR, 24HR with age, body mass index, race, and the psychosocial and diet behavior variables, the six psychosocial and diet variables explained 1.98%, 2.24%, and 2.15% of biomarker variation for energy, protein, and protein density respectively. 
The variations explained are significantly different between the calibration equations with or without the six psychosocial and diet variables for protein density (p = 0.02), but not for energy (p = 0.119) or protein intake (p = 0.077). The addition of psychosocial and diet behavior factors to calibration equations significantly increases the amount of total variance explained for protein density and their inclusion would be expected to strengthen the precision of calibration equations correcting self-report for measurement error. ClinicalTrials.gov identifier: NCT00000611.
Automated image quality assessment for chest CT scans.
Reeves, Anthony P; Xie, Yiting; Liu, Shuang
2018-02-01
Medical image quality needs to be maintained at standards sufficient for effective clinical reading. Automated computer analytic methods may be applied to medical images for quality assessment. For chest CT scans in a lung cancer screening context, an automated quality assessment method is presented that characterizes image noise and image intensity calibration. This is achieved by image measurements in three automatically segmented homogeneous regions of the scan: external air, trachea lumen air, and descending aorta blood. Profiles of CT scanner behavior are also computed. The method has been evaluated on both phantom and real low-dose chest CT scans and results show that repeatable noise and calibration measures may be realized by automated computer algorithms. Noise and calibration profiles show relevant differences between different scanners and protocols. Automated image quality assessment may be useful for quality control for lung cancer screening and may enable performance improvements to automated computer analysis methods. © 2017 American Association of Physicists in Medicine.
Davies, Stephen R; Jones, Kai; Goldys, Anna; Alamgir, Mahuiddin; Chan, Benjamin K H; Elgindy, Cecile; Mitchell, Peter S R; Tarrant, Gregory J; Krishnaswami, Maya R; Luo, Yawen; Moawad, Michael; Lawes, Douglas; Hook, James M
2015-04-01
Quantitative NMR spectroscopy (qNMR) has been examined for purity assessment using a range of organic calibration standards of varying structural complexities, certified using the traditional mass balance approach. Demonstrated equivalence between the two independent purity values confirmed the accuracy of qNMR and highlighted the benefit of using both methods in tandem to minimise the potential for hidden bias, thereby conferring greater confidence in the overall purity assessment. A comprehensive approach to purity assessment is detailed, utilising, where appropriate, multiple peaks in the qNMR spectrum, chosen on the basis of scientific reason and statistical analysis. Two examples are presented in which differences between the purity assignment by qNMR and mass balance are addressed in different ways depending on the requirement of the end user, affording fit-for-purpose calibration standards in a cost-effective manner.
Bhandari, Ammar B; Nelson, Nathan O; Sweeney, Daniel W; Baffaut, Claire; Lory, John A; Senaviratne, Anomaa; Pierzynski, Gary M; Janssen, Keith A; Barnes, Philip L
2017-11-01
Process-based computer models have been proposed as a tool to generate data for Phosphorus (P) Index assessment and development. Although models are commonly used to simulate P loss from agriculture using managements that are different from the calibration data, this use of models has not been fully tested. The objective of this study is to determine if the Agricultural Policy Environmental eXtender (APEX) model can accurately simulate runoff, sediment, total P, and dissolved P loss from 0.4 to 1.5 ha of agricultural fields with managements that are different from the calibration data. The APEX model was calibrated with field-scale data from eight different managements at two locations (management-specific models). The calibrated models were then validated, either with the same management used for calibration or with different managements. Location models were also developed by calibrating APEX with data from all managements. The management-specific models resulted in satisfactory performance when used to simulate runoff, total P, and dissolved P within their respective systems, with > 0.50, Nash-Sutcliffe efficiency > 0.30, and percent bias within ±35% for runoff and ±70% for total and dissolved P. When applied outside the calibration management, the management-specific models only met the minimum performance criteria in one-third of the tests. The location models had better model performance when applied across all managements compared with management-specific models. Our results suggest that models only be applied within the managements used for calibration and that data be included from multiple management systems for calibration when using models to assess management effects on P loss or evaluate P Indices. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
Calibration of LR-115 for 222Rn monitoring taking into account the plateout effect.
Da Silva, A A R; Yoshimura, E M
2003-01-01
The dose received by people exposed to indoor radon is mainly due to radon progeny. This fact points to the establishment of techniques that access either radon and progeny together, or only radon progeny concentration. In this work a low cost and easy to use methodology is presented to determine the total indoor alpha emission concentration. It is based on passive detection using LR-115 and CR-39 detectors, taking into account the plateout effect. A calibration of LR-115 track density response was done by indoor exposure in controlled environments and dwellings, places where 222Rn and progeny concentration were measured with CR-39. The calibration factor obtained showed great dependence on the ambient condition: (0.69 ± 0.04) cm for controlled environments and (0.43 ± 0.03) cm for dwellings.
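The role of the calibration factor above can be sketched generically. This is an assumed form, not the authors' equation: a measured net track density is divided by an environment-specific calibration factor k and the exposure time to yield an activity concentration, so the environment dependence of k reported in the abstract directly rescales the result. All numerical inputs are hypothetical.

```python
def concentration(track_density, exposure_hours, k):
    """Activity concentration inferred from a net alpha track
    density (tracks/cm^2) using calibration factor k and the
    exposure duration. Units of the result depend on those of k."""
    return track_density / (k * exposure_hours)

# The same hypothetical exposure evaluated with the two calibration
# factors reported: 0.69 (controlled environments) vs 0.43 (dwellings).
c_controlled = concentration(500.0, 2000.0, 0.69)
c_dwelling = concentration(500.0, 2000.0, 0.43)
```

Because the dwelling factor is smaller (more plateout, fewer registered tracks per unit concentration), the same track density implies a higher inferred concentration in a dwelling than in a controlled environment.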
NASA Astrophysics Data System (ADS)
Lechtenberg, Travis; McLaughlin, Craig A.; Locke, Travis; Krishna, Dhaval Mysore
2013-01-01
This paper examines atmospheric density estimated using precision orbit ephemerides (POE) from the CHAMP and GRACE satellites during short periods of greater atmospheric density variability. The results of the calibration of CHAMP densities derived using POEs with those derived using accelerometers are examined for three different types of density perturbations, [traveling atmospheric disturbances (TADs), geomagnetic cusp phenomena, and midnight density maxima] in order to determine the temporal resolution of POE solutions. In addition, the densities are compared to High-Accuracy Satellite Drag Model (HASDM) densities to compare temporal resolution for both types of corrections. The resolution for these models of thermospheric density was found to be inadequate to sufficiently characterize the short-term density variations examined here. Also examined in this paper is the effect of differing density estimation schemes by propagating an initial orbit state forward in time and examining induced errors. The propagated POE-derived densities incurred errors of a smaller magnitude than the empirical models and errors on the same scale or better than those incurred using the HASDM model.
CALIPSO lidar calibration at 532 nm: version 4 nighttime algorithm
NASA Astrophysics Data System (ADS)
Kar, Jayanta; Vaughan, Mark A.; Lee, Kam-Pui; Tackett, Jason L.; Avery, Melody A.; Garnier, Anne; Getzewich, Brian J.; Hunt, William H.; Josset, Damien; Liu, Zhaoyan; Lucker, Patricia L.; Magill, Brian; Omar, Ali H.; Pelon, Jacques; Rogers, Raymond R.; Toth, Travis D.; Trepte, Charles R.; Vernier, Jean-Paul; Winker, David M.; Young, Stuart A.
2018-03-01
Data products from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) on board Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) were recently updated following the implementation of new (version 4) calibration algorithms for all of the Level 1 attenuated backscatter measurements. In this work we present the motivation for and the implementation of the version 4 nighttime 532 nm parallel channel calibration. The nighttime 532 nm calibration is the most fundamental calibration of CALIOP data, since all of CALIOP's other radiometric calibration procedures - i.e., the 532 nm daytime calibration and the 1064 nm calibrations during both nighttime and daytime - depend either directly or indirectly on the 532 nm nighttime calibration. The accuracy of the 532 nm nighttime calibration has been significantly improved by raising the molecular normalization altitude from 30-34 km to the upper possible signal acquisition range of 36-39 km to substantially reduce stratospheric aerosol contamination. Due to the greatly reduced molecular number density and consequently reduced signal-to-noise ratio (SNR) at these higher altitudes, the signal is now averaged over a larger number of samples using data from multiple adjacent granules. Additionally, an enhanced strategy for filtering the radiation-induced noise from high-energy particles was adopted. Further, the meteorological model used in the earlier versions has been replaced by the improved Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2), model. An aerosol scattering ratio of 1.01 ± 0.01 is now explicitly used for the calibration altitude. These modifications lead to globally revised calibration coefficients which are, on average, 2-3 % lower than in previous data releases. 
Further, the new calibration procedure is shown to eliminate biases at high altitudes that were present in earlier versions and consequently leads to an improved representation of stratospheric aerosols. Validation results using airborne lidar measurements are also presented. Biases relative to collocated measurements acquired by the Langley Research Center (LaRC) airborne High Spectral Resolution Lidar (HSRL) are reduced from 3.6 % ± 2.2 % in the version 3 data set to 1.6 % ± 2.4 % in the version 4 release.
EARLY SCIENCE WITH THE KOREAN VLBI NETWORK: THE QCAL-1 43 GHz CALIBRATOR SURVEY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrov, Leonid; Lee, Sang-Sung; Kim, Jongsoo
2012-11-01
This paper presents the catalog of correlated flux densities in three ranges of baseline projection lengths of 637 sources from a 43 GHz (Q band) survey observed with the Korean VLBI Network. Of them, 14 objects used as calibrators were previously observed, but 623 sources have not been observed before in the Q band with very long baseline interferometry (VLBI). The goal of this work in the early science phase of the new VLBI array is twofold: to evaluate the performance of the new instrument that operates in a frequency range of 22-129 GHz and to build a list of objects that can be used as targets and as calibrators. We have observed the list of 799 target sources with declinations down to −40°. Among them, 724 were observed before with VLBI at 22 GHz and had correlated flux densities greater than 200 mJy. The overall detection rate is 78%. The detection limit, defined as the minimum flux density for a source to be detected with 90% probability in a single observation, was in the range of 115-180 mJy depending on declination. However, some sources as weak as 70 mJy have been detected. Of 623 detected sources, 33 objects are detected for the first time in VLBI mode. We determined their coordinates with a median formal uncertainty of 20 mas. The results of this work set the basis for future efforts to build the complete flux-limited sample of extragalactic sources at frequencies of 22 GHz and higher at 3/4 of the celestial sphere.
Klapperstück, Thomas; Glanz, Dagobert; Hanitsch, Stefan; Klapperstück, Manuela; Markwardt, Fritz; Wohlrab, Johannes
2013-07-01
Quantitative determinations of the cell membrane potential of lymphocytes (Wilson et al., J Cell Physiol 1985;125:72-81) and thymocytes (Krasznai et al., J Photochem Photobiol B 1995;28:93-99) using the anionic dye DiBAC4 (3) proved that dye depletion in the extracellular medium as a result of cellular uptake can be negligible over a wide range of cell densities. In contrast, most flow cytometric studies have not verified this condition but rather assumed it from the start. Consequently, the initially prepared extracellular dye concentration has usually been used for the calculation of the Nernst potential of the dye. In this study, however, external dye depletion could be observed in both large IGR-1 and small LCL-HO cells under experimental conditions, which have often been applied routinely in spectrofluorimetry and flow cytometry. The maximum cell density at which dye depletion could be virtually avoided was dependent on cell size and membrane potential and definitely needed to be taken into account to ensure reliable results. In addition, accepted calibration procedures based on the partition of sodium and potassium (Goldman-Hodgkin-Katz equation) or potassium alone (Nernst equation) were performed by flow cytometry on cell suspensions with an appropriately low cell density. The observed extensive lack of concordance between the correspondingly calculated membrane potential and the equilibrium potential of DiBAC4 (3) revealed that these methods require the additional measurement of cation parameters (membrane permeability and/or intracellular concentration). In contrast, due to the linear relation between fluorescence and low DiBAC4 (3) concentrations, the Nernst potential of the dye for totally depolarized cells can be reliably used for calibration with an essentially lower effort and expense. Copyright © 2013 International Society for Advancement of Cytometry.
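The Nernst potential that anchors the dye-based calibration above has a standard closed form, V = (RT/zF)·ln([out]/[in]) with z = −1 for an anionic dye such as DiBAC4(3). The sketch below is a generic illustration of that textbook relation, not the study's calibration code, and the concentration ratio is hypothetical.

```python
import math

R = 8.314      # gas constant, J/(mol*K)
F = 96485.0    # Faraday constant, C/mol

def nernst_mV(c_out, c_in, z=-1, temp_K=310.0):
    """Nernst equilibrium potential in mV for an ion of valence z
    at the given temperature (default 37 C)."""
    return 1000.0 * (R * temp_K) / (z * F) * math.log(c_out / c_in)

# A tenfold intracellular accumulation of an anionic dye
# (c_in = 10 * c_out) corresponds to a positive membrane potential:
v = nernst_mV(1.0, 10.0)  # roughly +61 mV at 37 C
```

This is why, as the abstract stresses, using the initially prepared external concentration when the extracellular dye is actually depleted shifts the inferred equilibrium potential: the ratio c_out/c_in entering the logarithm is wrong.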
Four in vivo g-ratio-weighted imaging methods: Comparability and repeatability at the group level.
Ellerbrock, Isabel; Mohammadi, Siawoosh
2018-01-01
A recent method, denoted in vivo g-ratio-weighted imaging, has related the microscopic g-ratio, only accessible by ex vivo histology, to noninvasive MRI markers for the fiber volume fraction (FVF) and myelin volume fraction (MVF). Different MRI markers have been proposed for g-ratio weighted imaging, leaving open the question which combination of imaging markers is optimal. To address this question, the repeatability and comparability of four g-ratio methods based on different combinations of, respectively, two imaging markers for FVF (tract-fiber density, TFD, and neurite orientation dispersion and density imaging, NODDI) and two imaging markers for MVF (magnetization transfer saturation rate, MT, and, from proton density maps, macromolecular tissue volume, MTV) were tested in a scan-rescan experiment in two groups. Moreover, it was tested how the repeatability and comparability were affected by two key processing steps, namely the masking of unreliable voxels (e.g., due to partial volume effects) at the group level and the calibration value used to link MRI markers to MVF (and FVF). Our data showed that repeatability and comparability depend largely on the marker for the FVF (NODDI outperformed TFD), and that they were improved by masking. Overall, the g-ratio method based on NODDI and MT showed the highest repeatability (90%) and lowest variability between groups (3.5%). Finally, our results indicate that the calibration procedure is crucial, for example, calibration to a lower g-ratio value (g = 0.6) than the commonly used one (g = 0.7) can change not only repeatability and comparability but also the reported dependency on the FVF imaging marker. Hum Brain Mapp 39:24-41, 2018. © 2017 Wiley Periodicals, Inc.
Dental examiners consistency in applying the ICDAS criteria for a caries prevention community trial.
Nelson, S; Eggertsson, H; Powell, B; Mandelaris, J; Ntragatakis, M; Richardson, T; Ferretti, G
2011-09-01
To examine dental examiners' one-year consistency in utilizing the International Caries Detection and Assessment System (ICDAS) criteria after baseline training and calibration. A total of three examiners received baseline training/calibration by a "gold standard" examiner, and one year later re-calibration was conducted. For the baseline training/calibration, subjects aged 8-16 years, and for the re-calibration subjects aged five to six years were recruited for the study. The ICDAS criteria were used to classify visual caries lesion severity (0-6 scale), lesion activity (active/inactive), and presence of filling material (0-9 scale) of all available tooth surfaces of permanent and primary teeth. The examination used a clinical light, mirror and air syringe. Kappa (weighted: Wkappa, unweighted: Kappa) statistics were used to determine inter-and intra-examiner reliability at baseline and re-calibration. For lesion severity and filling criteria, the baseline calibration on 35 subjects indicated an inter-rater Wkappa ranging from 0.69-0.92 and intra-rater Wkappa ranging from 0.81-0.92. Re-calibration on 22 subjects indicated an inter-rater Wkappa of 0.77-0.98 and intra-rater Wkappa ranged from 0.93-1.00. The Wkappa for filling was consistently in the excellent range, while lesion severity was in the good to excellent range. Activity kappa was in the poor to good range. All examiners improved with time. The baseline training/calibration in ICDAS was crucial to maintain the stability of the examiners reliability over a one year period. The ICDAS can be an effective assessment tool for community-based clinical trials.
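The Wkappa statistics reported above can be computed from paired examiner scores. A minimal sketch of weighted Cohen's kappa for ordinal codes such as the ICDAS 0-6 scale (the linear weighting scheme is an assumption; quadratic weights are also common):

```python
def weighted_kappa(rater_a, rater_b, categories):
    """Linearly weighted Cohen's kappa for ordinal codes (e.g. ICDAS 0-6)."""
    k = len(categories)
    index = {c: i for i, c in enumerate(categories)}
    n = len(rater_a)
    # Observed joint frequencies
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(rater_a, rater_b):
        obs[index[a]][index[b]] += 1.0 / n
    # Marginals for the expected (chance) agreement matrix
    pa = [sum(row) for row in obs]
    pb = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    # Linear disagreement weights: w_ij = |i - j| / (k - 1)
    w = lambda i, j: abs(i - j) / (k - 1)
    d_obs = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    d_exp = sum(w(i, j) * pa[i] * pb[j] for i in range(k) for j in range(k))
    return 1.0 - d_obs / d_exp

# Perfect agreement between two examiners yields kappa = 1
print(weighted_kappa([0, 1, 2, 3], [0, 1, 2, 3], [0, 1, 2, 3]))  # -> 1.0
```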
Carbon-14 wiggle-match dating of peat deposits: advantages and limitations
NASA Astrophysics Data System (ADS)
Blaauw, Maarten; van Geel, Bas; Mauquoy, Dmitri; van der Plicht, Johannes
2004-02-01
Carbon-14 wiggle-match dating (WMD) of peat deposits uses the non-linear relationship between 14C age and calendar age to match the shape of a series of closely spaced peat 14C dates with the 14C calibration curve. The method of WMD is discussed, and its advantages and limitations are compared with calibration of individual dates. A numerical approach to WMD is introduced that makes it possible to assess the precision of WMD chronologies. During several intervals of the Holocene, the 14C calibration curve shows less pronounced fluctuations. We assess whether wiggle-matching is also a feasible strategy for these parts of the 14C calibration curve. High-precision chronologies, such as obtainable with WMD, are needed for studies of rapid climate changes and their possible causes during the Holocene.
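The matching itself can be illustrated as a weighted least-squares slide of the dated series along the calibration curve. This is a deliberately simplified sketch that assumes known calendar-year spacing between samples (e.g. from a constant accumulation rate) and ignores calibration-curve uncertainty, both of which real WMD implementations must handle:

```python
def wiggle_match(sample_ages_14c, sample_errors, year_offsets,
                 cal_curve, cal_start_year):
    """Find the calendar age of the first sample that best matches a series
    of 14C dates to a calibration curve, by weighted least squares.

    cal_curve[i] is the 14C age at calendar year cal_start_year + i.
    year_offsets are the (assumed known) calendar-year offsets of each
    sample relative to the first.
    """
    best = (float("inf"), None)
    max_offset = max(year_offsets)
    for t0 in range(len(cal_curve) - max_offset):
        sse = sum(((cal_curve[t0 + dt] - age) / err) ** 2
                  for age, err, dt in zip(sample_ages_14c, sample_errors,
                                          year_offsets))
        if sse < best[0]:
            best = (sse, cal_start_year + t0)
    return best[1], best[0]

# Toy calibration curve with a square wiggle; two dates 10 calendar years apart
curve = [2500 - y + 15 * ((y // 20) % 2) for y in range(200)]
t, sse = wiggle_match([curve[50], curve[60]], [10.0, 10.0], [0, 10], curve, 3000)
print(t)  # -> 3050 (exact match at offset 50)
```

Because the series is matched as a whole, wiggles in the curve that would make a single date ambiguous can pin down the entire chronology.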
Hutchinson, Kasey J.; Christiansen, Daniel E.
2013-01-01
The U.S. Geological Survey, in cooperation with the Iowa Department of Natural Resources, used the Soil and Water Assessment Tool to simulate streamflow and nitrate loads within the Cedar River Basin, Iowa. The goal was to assess the ability of the Soil and Water Assessment Tool to estimate streamflow and nitrate loads in gaged and ungaged basins in Iowa. The Cedar River Basin model uses measured streamflow data from 12 U.S. Geological Survey streamflow-gaging stations for hydrology calibration. The U.S. Geological Survey software program, Load Estimator, was used to estimate annual and monthly nitrate loads based on measured nitrate concentrations and streamflow data from three Iowa Department of Natural Resources Storage and Retrieval/Water Quality Exchange stations, located throughout the basin, for nitrate load calibration. The hydrology of the model was calibrated for the period of January 1, 2000, to December 31, 2004, and validated for the period of January 1, 2005, to December 31, 2010. Simulated daily, monthly, and annual streamflow resulted in Nash-Sutcliffe coefficient of model efficiency (ENS) values ranging from 0.44 to 0.83, 0.72 to 0.93, and 0.56 to 0.97, respectively, and coefficient of determination (R2) values ranging from 0.55 to 0.87, 0.74 to 0.94, and 0.65 to 0.99, respectively, for the calibration period. The percent bias ranged from -19 to 10, -16 to 10, and -19 to 10 for daily, monthly, and annual simulation, respectively. The validation period resulted in daily, monthly, and annual ENS values ranging from 0.49 to 0.77, 0.69 to 0.91, and -0.22 to 0.95, respectively; R2 values ranging from 0.59 to 0.84, 0.74 to 0.92, and 0.36 to 0.92, respectively; and percent bias ranging from -16 for all time steps to percent bias of 14, 15, and 15, respectively. The nitrate calibration was based on a small subset of the locations used in the hydrology calibration with limited measured data. 
Model performance ranges from unsatisfactory to very good for the calibration period (January 1, 2000, to December 31, 2004). Results for the validation period (January 1, 2005, to December 31, 2010) indicate a need for more measured data as well as more refined documented management practices at a higher resolution. Simulated nitrate loads resulted in monthly and annual ENS values ranging from 0.28 to 0.82 and 0.61 to 0.86, respectively, and monthly and annual R2 values ranging from 0.65 to 0.81 and 0.65 to 0.88, respectively, for the calibration period. The monthly and annual calibration percent bias ranged from 4 to 7 and 5 to 7, respectively. The validation period resulted in all but two ENS values less than zero. Monthly and annual validation R2 values ranged from 0.5 to 0.67 and 0.25 to 0.48, respectively. Monthly and annual validation percent bias ranged from 46 to 68. A daily calibration and validation for nitrate loads was not performed because of the poor monthly and annual results; measured daily nitrate data are available only for intervals in 2009 and 2010, during which a successful monthly and annual calibration could not be achieved. The Cedar River Basin is densely gaged relative to other basins in Iowa; therefore, an alternative hydrology scenario was created to assess the predictive capabilities of the Soil and Water Assessment Tool using fewer locations of measured data for model hydrology calibration. Although the ability of the model to reproduce measured values improves with the number of calibration locations, results indicate that the Soil and Water Assessment Tool can be used to adequately estimate streamflow in less densely gaged basins throughout the State, especially at the monthly time step. However, results also indicate that caution should be used when calibrating a subbasin that consists of physically distinct regions based on only one streamflow-gaging station.
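The Nash-Sutcliffe coefficient (ENS) and percent bias used throughout these results can be sketched in a few lines. Note that percent-bias sign conventions vary between studies; the version here follows the common PBIAS = 100 * sum(obs - sim) / sum(obs), under which positive values indicate underestimation:

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    1.0 is a perfect fit; values <= 0 mean the model is no better than
    predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / sst

def percent_bias(obs, sim):
    """Percent bias: 100 * sum(obs - sim) / sum(obs)."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

obs = [10.0, 20.0, 30.0, 40.0]
print(nash_sutcliffe(obs, obs))                    # perfect fit -> 1.0
print(percent_bias(obs, [o * 1.1 for o in obs]))   # ~ -10 (10% overestimate)
```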
Klop, Corinne; de Vries, Frank; Bijlsma, Johannes W J; Leufkens, Hubert G M; Welsing, Paco M J
2016-12-01
FRAX incorporates rheumatoid arthritis (RA) as a dichotomous predictor for predicting the 10-year risk of hip and major osteoporotic fracture (MOF). However, fracture risk may deviate with disease severity, duration or treatment. Aims were to validate, and if needed to update, UK FRAX for patients with RA and to compare predictive performance with the general population (GP). Cohort study within UK Clinical Practice Research Datalink (CPRD) (RA: n=11 582, GP: n=38 755), also linked to hospital admissions for hip fracture (CPRD-Hospital Episode Statistics, HES) (RA: n=7221, GP: n=24 227). Predictive performance of UK FRAX without bone mineral density was assessed by discrimination and calibration. Updating methods included recalibration and extension. Differences in predictive performance were assessed by the C-statistic and Net Reclassification Improvement (NRI) using the UK National Osteoporosis Guideline Group intervention thresholds. UK FRAX significantly overestimated fracture risk in patients with RA, both for MOF (mean predicted vs observed 10-year risk: 13.3% vs 8.4%) and hip fracture (CPRD: 5.5% vs 3.1%, CPRD-HES: 5.5% vs 4.1%). Calibration was good for hip fracture in the GP (CPRD-HES: 2.7% vs 2.4%). Discrimination was good for hip fracture (RA: 0.78, GP: 0.83) and moderate for MOF (RA: 0.69, GP: 0.71). Extension of the recalibrated UK FRAX using CPRD-HES with duration of RA disease, glucocorticoids (>7.5 mg/day) and secondary osteoporosis did not improve the NRI (0.01, 95% CI -0.04 to 0.05) or C-statistic (0.78). UK FRAX overestimated fracture risk in RA, but performed well for hip fracture in the GP after linkage to hospitalisations. Extension of the recalibrated UK FRAX did not improve predictive performance. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
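The "recalibration" updating method mentioned above is commonly implemented as a logistic recalibration of the predicted risks, adjusting on the log-odds scale. A hedged sketch; the intercept value in the example is illustrative only, chosen so that the 13.3% mean predicted MOF risk maps onto the 8.4% observed risk, and is not a coefficient from the paper:

```python
import math

def recalibrate(prob, intercept, slope):
    """Logistic recalibration of a predicted risk: shift and scale on the
    log-odds scale (calibration-in-the-large via the intercept,
    calibration slope via the slope), then map back to a probability."""
    logit = math.log(prob / (1.0 - prob))
    z = intercept + slope * logit
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative downward shift for an overestimating model (intercept -0.514)
print(round(recalibrate(0.133, -0.514, 1.0), 3))  # -> 0.084
```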
Petya K. Entcheva Campbell; Elizabeth M. Middleton; Kurt J. Thome; Raymond F. Kokaly; Karl Fred Huemmrich; David Lagomasino; Kimberly A. Novick; Nathaniel A. Brunsell
2013-01-01
This study evaluated Earth Observing 1 (EO-1) Hyperion reflectance time series at established calibration sites to assess the instrument stability and suitability for monitoring vegetation functional parameters. Our analysis using three pseudo-invariant calibration sites in North America indicated that the reflectance time series are devoid of apparent spectral trends...
The GOCE end-to-end system simulator
NASA Astrophysics Data System (ADS)
Catastini, G.; Cesare, S.; de Sanctis, S.; Detoma, E.; Dumontel, M.; Floberghagen, R.; Parisch, M.; Sechi, G.; Anselmi, A.
2003-04-01
The idea of an end-to-end simulator was conceived in the early stages of the GOCE programme as an essential tool for assessing the satellite system performance, which cannot be fully tested on the ground. The simulator in its present form has been under development at Alenia Spazio for ESA since the beginning of Phase B and is being used for checking the consistency of the spacecraft and of the payload specifications with the overall system requirements, supporting trade-off, sensitivity and worst-case analyses, and preparing and testing the on-ground and in-flight calibration concepts. The software simulates the GOCE flight along an orbit resulting from the application of Earth's gravity field, non-conservative environmental disturbances (atmospheric drag, coupling with Earth's magnetic field, etc.) and control forces/torques. The drag-free control forces as well as the attitude control torques are generated by the current design of the dedicated algorithms. Realistic sensor models (star tracker, GPS receiver and gravity gradiometer) feed the control algorithms, and the commanded forces are applied through realistic thruster models. The output of this stage of the simulator is a time series of Level-0 data, namely the gradiometer raw measurements and spacecraft ancillary data.
The next stage of the simulator transforms Level-0 data into Level-1b (gravity gradient tensor) data by implementing the following steps:
- transformation of the raw measurements of each pair of accelerometers into common and differential accelerations;
- calibration of the common and differential accelerations;
- application of the post-facto algorithm to rectify the phase of the accelerations and to estimate the GOCE angular velocity and attitude;
- computation of the Level-1b gravity gradient tensor from calibrated accelerations and estimated angular velocity in different reference frames (orbital, inertial, earth-fixed);
- computation of the spectral density of the error of the tensor diagonal components (measured gravity gradient minus input gravity gradient), in order to verify the requirement on the gravity gradient error of 4 mE/sqrt(Hz) within the gradiometer measurement bandwidth (5 to 100 mHz);
- computation of the spectral density of the tensor trace, in order to verify the requirement of 4 sqrt(3) mE/sqrt(Hz) within the measurement bandwidth;
- processing of GPS observations for orbit reconstruction within the required 10 m accuracy and for gradiometer measurement geolocation.
The current version of the end-to-end simulator, essentially focusing on the gradiometer payload, is undergoing detailed testing based on a time span of 10 days of simulated flight. This testing phase, ending in January 2003, will verify the current implementation and conclude the assessment of numerical stability and precision. Following that, the exercise will be repeated on a longer-duration simulated flight, and the lessons learnt so far will be exploited to further improve the simulator's fidelity. The paper will describe the simulator's current status and will illustrate its capabilities for supporting the assessment of the quality of the scientific products resulting from the current spacecraft and payload design.
NASA Astrophysics Data System (ADS)
Hancock, G. R.; Webb, A. A.; Turner, L.
2017-11-01
Sediment transport and soil erosion can be determined by a variety of field and modelling approaches. Computer based soil erosion and landscape evolution models (LEMs) offer the potential to be reliable assessment and prediction tools. An advantage of such models is that they provide both erosion and deposition patterns as well as total catchment sediment output. However, before use, like all models they require calibration and validation. In recent years LEMs have been used for a variety of both natural and disturbed landscape assessment. However, these models have not been evaluated for their reliability in steep forested catchments. Here, the SIBERIA LEM is calibrated and evaluated for its reliability for two steep forested catchments in south-eastern Australia. The model is independently calibrated using two methods. Firstly, hydrology and sediment transport parameters are inferred from catchment geomorphology and soil properties and secondly from catchment sediment transport and discharge data. The results demonstrate that both calibration methods provide similar parameters and reliable modelled sediment transport output. A sensitivity study of the input parameters demonstrates the model's sensitivity to correct parameterisation and also how the model could be used to assess potential timber harvesting as well as the removal of vegetation by fire.
Vessel calibre—a potential MRI biomarker of tumour response in clinical trials
Emblem, Kyrre E.; Farrar, Christian T.; Gerstner, Elizabeth R.; Batchelor, Tracy T.; Borra, Ronald J. H.; Rosen, Bruce R.; Sorensen, A. Gregory; Jain, Rakesh K.
2015-01-01
Our understanding of the importance of blood vessels and angiogenesis in cancer has increased considerably over the past decades, and the assessment of tumour vessel calibre and structure has become increasingly important for in vivo monitoring of therapeutic response. The preferred method for in vivo imaging of most solid cancers is MRI, and the concept of vessel-calibre MRI has evolved since its initial inception in the early 1990s. Almost a quarter of a century later, unlike traditional contrast-enhanced MRI techniques, vessel-calibre MRI remains widely inaccessible to the general clinical community. The narrow availability of the technique is, in part, attributable to limited awareness and a lack of imaging standardization. Thus, the role of vessel-calibre MRI in early phase clinical trials remains to be determined. By contrast, regulatory approvals of antiangiogenic agents that are not directly cytotoxic have created an urgent need for clinical trials incorporating advanced imaging analyses, going beyond traditional assessments of tumour volume. To this end, we review the field of vessel-calibre MRI and summarize the emerging evidence supporting the use of this technique to monitor response to anticancer therapy. We also discuss the potential use of this biomarker assessment in clinical imaging trials and highlight relevant avenues for future research. PMID:25113840
NASA Astrophysics Data System (ADS)
Xiao, Han; Wang, Dingbao; Hagen, Scott C.; Medeiros, Stephen C.; Hall, Carlton R.
2016-11-01
A three-dimensional variable-density groundwater flow and salinity transport model is implemented using the SEAWAT code to quantify the spatial variation of water-table depth and salinity of the surficial aquifer in Merritt Island and Cape Canaveral Island in east-central Florida (USA) under steady-state 2010 hydrologic and hydrogeologic conditions. The developed model is referred to as the `reference' model and calibrated against field-measured groundwater levels and a map of land use and land cover. Then, five prediction/projection models are developed based on modification of the boundary conditions of the calibrated `reference' model to quantify climate change impacts under various scenarios of sea-level rise and precipitation change projected to 2050. Model results indicate that west Merritt Island will encounter lowland inundation and saltwater intrusion due to its low elevation and flat topography, while climate change impacts on Cape Canaveral Island and east Merritt Island are not significant. The SEAWAT models developed for this study are useful and effective tools for water resources management, land use planning, and climate-change adaptation decision-making in these and other low-lying coastal alluvial plains and barrier island systems.
NOTE: Dose area product evaluations with Gafchromic® XR-R films and a flat-bed scanner
NASA Astrophysics Data System (ADS)
Rampado, O.; Garelli, E.; Deagostini, S.; Ropolo, R.
2006-12-01
Gafchromic® XR-R films are a useful tool to evaluate entrance skin dose in interventional radiology. Another dosimetric quantity of interest in diagnostic and interventional radiology is the dose area product (DAP). In this study, a method to evaluate DAP using Gafchromic® XR-R films and a flat-bed scanner was developed and tested. Film samples were exposed to an x-ray beam of 80 kVp over a dose range of 0-10 Gy. DAP measurements with films were obtained from the digitization of a film sample positioned over the x-ray beam window during the exposure. DAP values obtained with this method were compared for 23 cardiological interventional procedures with the DAP values displayed by the equipment. The overall one-sigma dose measurement uncertainty depended on the absorbed dose, with values below 6% for doses above 1 Gy. A maximum discrepancy of 16% was found, which is of the order of the differences in DAP measurements that may occur with different calibration procedures. Based on the results presented, after an accurate calibration procedure and a thorough inspection of the relationship between the actual dose and the directly measured quantity (net optical density or net pixel value variation), Gafchromic® XR-R films can be used to assess the DAP.
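Once the scanned film is converted to a dose map, the DAP is simply the dose integrated over the irradiated area. A minimal sketch (the uniform dose map below is a hypothetical example, not the paper's calibration chain from optical density to dose):

```python
def dose_area_product(dose_map_gy, pixel_area_cm2):
    """Dose area product (Gy*cm^2) from a 2-D dose map digitized from film:
    the integral of dose over the irradiated area, approximated as the sum
    of per-pixel dose times pixel area."""
    return sum(d for row in dose_map_gy for d in row) * pixel_area_cm2

# Uniform 2 Gy over a 10 x 10 pixel field, 0.1 cm^2 per pixel -> 20 Gy*cm^2
dose_map = [[2.0] * 10 for _ in range(10)]
print(dose_area_product(dose_map, 0.1))
```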
Unifying distance-based goodness-of-fit indicators for hydrologic model assessment
NASA Astrophysics Data System (ADS)
Cheng, Qinbo; Reinhardt-Imjela, Christian; Chen, Xi; Schulte, Achim
2014-05-01
The goodness-of-fit indicator, i.e. the efficiency criterion, is very important for model calibration. However, current knowledge about goodness-of-fit indicators is largely empirical and lacks theoretical support. Based on likelihood theory, a unified distance-based goodness-of-fit indicator termed the BC-GED model is proposed, which uses the Box-Cox (BC) transformation to remove the heteroscedasticity of model errors and a zero-mean generalized error distribution (GED) to fit the distribution of model errors after the BC transformation. The BC-GED model can unify all recent distance-based goodness-of-fit indicators, and it reveals that the widely used mean square error (MSE) and mean absolute error (MAE) imply the statistical assumptions that model errors follow the Gaussian distribution and the zero-mean Laplace distribution, respectively. Empirical knowledge about goodness-of-fit indicators can also be easily interpreted by the BC-GED model; e.g., the sensitivity to high flow of goodness-of-fit indicators with a large power of model errors results from the low probability of large model errors in the distribution assumed by these indicators. In order to assess the effect of the BC-GED model parameters (i.e. the BC transformation parameter λ and the GED kurtosis coefficient β, also termed the power of model errors) on hydrologic model calibration, six cases of the BC-GED model were applied in the Baocun watershed (East China) with the SWAT-WB-VSA model. Comparison of the inferred model parameters and model simulation results among the six indicators demonstrates that these indicators can be clearly separated into two classes by the GED kurtosis β: β > 1 and β ≤ 1.
SWAT-WB-VSA calibrated by the class β > 1 of distance-based goodness-of-fit indicators captures high flow very well but reproduces baseflow very poorly, whereas calibration with the class β ≤ 1 reproduces baseflow very well. There are two reasons: first, the larger the value of β, the greater the emphasis put on high flow; second, the derivative of the GED probability density function at zero is zero for β > 1, but discontinuous for β ≤ 1, and even infinite for β < 1, in which case maximum likelihood estimation drives the model errors toward zero as far as possible. The BC-GED method, which estimates the parameters of the BC-GED model (i.e. λ and β) together with the hydrologic model parameters, is the best distance-based goodness-of-fit indicator, because not only is the model validation using groundwater levels very good, but the model errors also fulfill the statistical assumptions best. However, in cases of model calibration with few observations, e.g. calibration of a single-event model, the MAE, i.e. the boundary indicator (β = 1) between the two classes, can replace the BC-GED to avoid estimating the parameters of the BC-GED model, because the model validation of the MAE is best.
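The role of the power β can be illustrated with the distance-based objective implied by a zero-mean GED after a Box-Cox transformation. This is a simplified sketch with the GED scale parameter dropped, which leaves minimization of the sum of |error|^β; β = 2 recovers a sum-of-squares (MSE-like) criterion and β = 1 a sum-of-absolute-errors (MAE-like) criterion, matching the Gaussian/Laplace correspondence described above:

```python
import math

def box_cox(y, lam):
    """Box-Cox transform, used to remove heteroscedasticity of model errors."""
    return math.log(y) if lam == 0 else (y ** lam - 1.0) / lam

def ged_objective(obs, sim, lam, beta):
    """Distance-based objective implied by a zero-mean generalized error
    distribution for Box-Cox-transformed residuals: sum of |error|^beta.
    Larger beta puts more emphasis on the largest (high-flow) errors."""
    return sum(abs(box_cox(o, lam) - box_cox(s, lam)) ** beta
               for o, s in zip(obs, sim))

obs = [1.0, 2.0, 4.0]
sim = [1.5, 2.0, 3.0]
print(ged_objective(obs, sim, lam=1.0, beta=2))  # sum of squared errors
print(ged_objective(obs, sim, lam=1.0, beta=1))  # sum of absolute errors
```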
New methods of data calibration for high power-aperture lidar.
Guan, Sai; Yang, Guotao; Chang, Qihai; Cheng, Xuewu; Yang, Yong; Gong, Shaohua; Wang, Jihong
2013-03-25
For high power-aperture lidar sounding over wide atmospheric dynamic ranges, as in middle-upper atmospheric probing, photomultiplier tube (PMT) pulse pile-up effects and signal-induced noise (SIN) complicate the extraction of information from the lidar return signal, especially from the fluorescence signal of metal layers. The pursuit of a sophisticated description of metal layer characteristics at far range (80-130 km) with one PMT of high quantum efficiency (QE) and good SNR contradicts the requirement for signals of wide linear dynamic range (i.e. from approximately 10^2 to 10^8 counts/s). In this article, substantial improvements in the experimental simulation of lidar signals affected by the PMT are reported, in order to evaluate the PMT distortions in our high power-aperture sodium lidar system. A new method for pile-up calibration is proposed that treats the PMT and the high-speed data acquisition card as an integrated black box, together with a new experimental method for identifying and removing SIN from the raw lidar signals. The contradiction between the limited linear dynamic range of the raw signal (55-80 km) and the requirement for wider acceptable linearity has been effectively resolved without complicating the current lidar system. The validity of these methods was demonstrated by applying the calibrated data to retrieve atmospheric parameters (i.e. atmospheric density, temperature and sodium absolute number density), in comparison with measurements from the TIMED satellite and an atmosphere model. Good agreement is obtained between results derived from the calibrated signal and the reference measurements: differences in atmospheric density and temperature are less than 5% in the stratosphere and less than 10 K from 30 km to the mesosphere, respectively. Additionally, changes of approximately 30% are shown in the sodium concentration at its peak value.
By means of the proposed methods to recover the true signal independently of the detector, the authors achieve a new balance between maintaining the linearity of an adequate signal (20-110 km) and guaranteeing good SNR (i.e. 10^4:1 around 90 km) without debasing the QE, in one single detecting channel. For the first time, a PMT in photon-counting mode is independently applied to extract reliable information on atmospheric parameters with wide acceptable linearity over an altitude range from the stratosphere up to the lower thermosphere (20-110 km).
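The paper's black-box calibration is not reproduced here, but the classic non-paralyzable dead-time model it improves upon is a useful reference point for what pile-up correction does to a photon-counting signal (a textbook formula, not the proposed method):

```python
def correct_pileup(measured_rate_cps, dead_time_s):
    """Standard non-paralyzable dead-time (pile-up) correction for a
    photon-counting channel: n_true = n_meas / (1 - n_meas * tau).
    Valid only while the measured rate stays below 1/tau."""
    loss = 1.0 - measured_rate_cps * dead_time_s
    if loss <= 0.0:
        raise ValueError("measured rate exceeds the model's valid range")
    return measured_rate_cps / loss

# 1e7 counts/s measured with a 20 ns dead time: 20% of counts were lost,
# so the corrected rate is 1.25e7 counts/s.
print(correct_pileup(1.0e7, 20e-9))
```

At the low count rates of the far-range metal-layer signal the correction is negligible, while near the peak of the return it dominates, which is why a single channel spanning ~10^2 to ~10^8 counts/s needs a careful pile-up calibration.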
Dausman, Alyssa M.; Doherty, John; Langevin, Christian D.
2010-01-01
Pilot points for parameter estimation were creatively used to address heterogeneity at both the well-field and regional scales in a variable-density groundwater flow and solute transport model designed to test multiple hypotheses for upward migration of fresh effluent injected into a highly transmissive saline carbonate aquifer. Two sets of pilot points were used within multiple model layers, with one set of inner pilot points (totaling 158) having high spatial density to represent hydraulic conductivity at the site, while a second set of outer points (totaling 36) of lower spatial density was used to represent hydraulic conductivity farther from the site. Use of a lower spatial density outside the site allowed (1) the total number of pilot points to be reduced while maintaining flexibility to accommodate heterogeneity at different scales, and (2) development of a model with greater areal extent in order to simulate proper boundary conditions that have a limited effect on the area of interest. The parameters associated with the inner pilot points were log-transformed hydraulic conductivity multipliers of the conductivity field obtained by interpolation from the outer pilot points. The use of this dual inner-outer scale parameterization (with inner parameters constituting multipliers for outer parameters) allowed a smooth transition of hydraulic conductivity from the site scale, where greater spatial variability of hydraulic properties exists, to the regional scale, where less spatial variability was necessary for model calibration. While the model is highly parameterized to accommodate potential aquifer heterogeneity, the total number of pilot points is kept at a minimum to enable reasonable calibration run times.
NASA Astrophysics Data System (ADS)
Chiarucci, Simone; Wijnholds, Stefan J.
2018-02-01
Blind calibration, i.e. calibration without a priori knowledge of the source model, is robust to the presence of unknown sources such as transient phenomena or (low-power) broad-band radio frequency interference that escaped detection. In this paper, we present a novel method for blind calibration of a radio interferometric array assuming that the observed field only contains a small number of discrete point sources. We show the huge computational advantage over previous blind calibration methods and we assess its statistical efficiency and robustness to noise and the quality of the initial estimate. We demonstrate the method on actual data from a Low-Frequency Array low-band antenna station showing that our blind calibration is able to recover the same gain solutions as the regular calibration approach, as expected from theory and simulations. We also discuss the implications of our findings for the robustness of regular self-calibration to poor starting models.
NASA Astrophysics Data System (ADS)
Müller Schmied, Hannes; Döll, Petra
2017-04-01
The estimation of the world's water resources has a long tradition, and numerous methods for quantification exist. The resulting numbers vary significantly, leaving room for improvement. For some decades, global hydrological models (GHMs) have been used for large-scale water budget assessments. GHMs are designed to represent the macro-scale hydrological processes, and many of these models include human water management, e.g. irrigation or reservoir operation, making them currently the first choice for global-scale assessments of the terrestrial water balance within the Anthropocene. The Water - Global Assessment and Prognosis (WaterGAP) is a model framework that comprises both the natural and human water dimensions and has been in development and application since the 1990s. In recent years, efforts were made to assess the sensitivity of water balance components to alternative climate forcing input data and, e.g., how this sensitivity is affected by WaterGAP's calibration scheme. This presentation shows the current best estimate of terrestrial water balance components as simulated with WaterGAP by 1) assessing global and continental water balance components for the climate period 1971-2000 and the IPCC reference period 1986-2005 for the most current WaterGAP version using homogenized climate forcing data, 2) investigating variations of water balance components for a number of state-of-the-art climate forcing data sets and 3) discussing the benefit of the calibration approach for a better observation-constrained global water budget. For the most current WaterGAP version 2.2b and a homogenized combination of the two WATCH Forcing Datasets, global-scale (excluding Antarctica and Greenland) river discharge into oceans and inland sinks (Q) is assessed to be 40 000 km3 yr-1 for 1971-2000 and 39 200 km3 yr-1 for 1986-2005. Actual evapotranspiration (AET) is similar for the two periods at around 70 600 (70 700) km3 yr-1, as is water consumption at 1000 (1100) km3 yr-1.
The main reason for the differing Q is varying precipitation (P, 111 600 km3 yr-1 vs. 110 900 km3 yr-1). The sensitivity of water balance components to alternative climate forcing data is high. Applying 5 state-of-the-art climate forcing data sets, long-term average P differs globally by 8000 km3 yr-1, mainly due to different handling of precipitation undercatch correction (or neglecting it). AET differs by 5500 km3 yr-1, whereas Q varies by 3000 km3 yr-1. The sensitivity of human water consumption to alternative climate input data is only about 5%. WaterGAP's calibration approach forces simulated long-term river discharge to be approximately equal to observed values at 1319 gauging stations during the time period selected for calibration. This scheme greatly reduces the impact of uncertain climate input on simulated Q in these upstream drainage basins (as well as downstream). In calibration areas, the Q variation among the climate input data is much lower (1.6%) than in non-calibrated areas (18.5%). However, variation of Q at the grid-cell level is still high (an average of 37% for Q in grid cells in calibration areas vs. 74% outside). Due to the closed water balance, variation of AET is higher in calibrated areas than in non-calibrated areas. The main challenges in assessing the world's water resources with GHMs like WaterGAP are 1) the need for consistent long-term climate forcing input data sets, especially considering a suitable handling of P undercatch, 2) the accessibility of in-situ river discharge data or alternative calibration data for currently non-calibrated areas, and 3) an improved simulation in semi-arid and arid river basins. As an outlook, a multi-model, multi-forcing study of global water balance components within the frame of the Inter-Sectoral Impact Model Intercomparison Project is proposed.
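The quoted components can be checked for long-term balance closure, Q ≈ P - AET - consumption, with storage change neglected over a climate period (figures rounded as in the text; the 1971-2000 set closes exactly at this rounding):

```python
def closes_balance(p, aet, consumption, q, tol):
    """Long-term terrestrial water balance closure: Q = P - AET - consumption,
    with storage change assumed negligible over a multi-decade period.
    Returns True if the residual is within the given tolerance."""
    return abs(p - aet - consumption - q) <= tol

# Rounded global WaterGAP 2.2b figures (km^3/yr) for 1971-2000, from the text
print(closes_balance(p=111_600, aet=70_600, consumption=1_000,
                     q=40_000, tol=100))  # -> True
```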
Solid laboratory calibration of a nonimaging spectroradiometer.
Schaepman, M E; Dangel, S
2000-07-20
Field-based nonimaging spectroradiometers are often used in vicarious calibration experiments for airborne or spaceborne imaging spectrometers. The calibration uncertainties associated with these ground measurements contribute substantially to the overall modeling error in radiance- or reflectance-based vicarious calibration experiments. Because of limitations in the radiometric stability of compact field spectroradiometers, vicarious calibration experiments are based primarily on reflectance measurements rather than on radiance measurements. To characterize the overall uncertainty of radiance-based approaches and assess the sources of uncertainty, we carried out a full laboratory calibration. This laboratory calibration of a nonimaging spectroradiometer is based on a measurement plan targeted at achieving a calibration uncertainty of ≤10%. The individual calibration steps include characterization of the signal-to-noise ratio, the noise equivalent signal, the dark current, the wavelength calibration, the spectral sampling interval, the nonlinearity, directional and positional effects, the spectral scattering, the field of view, the polarization, the size-of-source effects, and the temperature dependence of a particular instrument. The traceability of the radiance calibration is established to a secondary National Institute of Standards and Technology calibration standard by use of a 95% confidence interval and results in an uncertainty of less than ±7.1% for all spectroradiometer bands.
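The way individual calibration components roll up into an overall figure like ±7.1% can be illustrated with a root-sum-of-squares uncertainty budget. This is a simplified sketch that assumes independent components; the component values below are made up for illustration and are not the paper's budget:

```python
import math

def combined_uncertainty(components_percent):
    """Root-sum-of-squares combination of independent relative uncertainty
    components, the standard simplification for an uncertainty budget."""
    return math.sqrt(sum(c * c for c in components_percent))

# Hypothetical components: lamp standard 5%, nonlinearity 4%, stray light 2%
print(round(combined_uncertainty([5.0, 4.0, 2.0]), 2))  # -> 6.71
```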
Automatic Calibration of a Semi-Distributed Hydrologic Model Using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Bekele, E. G.; Nicklow, J. W.
2005-12-01
Hydrologic simulation models need to be calibrated and validated before they are used for operational predictions. Spatially distributed hydrologic models generally have a large number of parameters to capture the various physical characteristics of a hydrologic system. Manual calibration of such models is a very tedious and daunting task, and its success depends on the subjective assessment of a particular modeler, which includes knowledge of the basic approaches and interactions in the model. In order to alleviate these shortcomings, an automatic calibration model, which employs an evolutionary optimization technique known as the Particle Swarm Optimizer (PSO) for parameter estimation, is developed. PSO is a heuristic search algorithm inspired by the social behavior of bird flocking and fish schooling. The newly developed calibration model is integrated with the U.S. Department of Agriculture's Soil and Water Assessment Tool (SWAT). SWAT is a physically based, semi-distributed hydrologic model that was developed to predict the long-term impacts of land management practices on water, sediment, and agricultural chemical yields in large complex watersheds with varying soils, land use, and management conditions. SWAT was calibrated for streamflow and sediment concentration. The calibration process involves parameter specification, whereby sensitive model parameters are identified, and parameter estimation. In order to reduce the number of parameters to be calibrated, parameterization was performed. The methodology is applied to a demonstration watershed known as Big Creek, located in southern Illinois. Application results show the effectiveness of the approach, and model predictions are significantly improved.
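The PSO velocity and position update that the abstract refers to can be sketched in a few lines; the snippet below is a minimal illustrative optimizer on a toy quadratic "calibration" objective (hypothetical bounds and targets), not the SWAT-coupled code of the paper:

```python
import random

def pso(objective, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer: each particle's velocity is pulled
    toward its personal best and the swarm's global best position."""
    dim = len(bounds)
    rng = random.Random(42)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # keep each parameter inside its calibration bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy "calibration": recover two parameters by minimizing squared error to a target.
target = [0.3, 1.2]
best, err = pso(lambda p: sum((a - b) ** 2 for a, b in zip(p, target)),
                bounds=[(0.0, 1.0), (0.0, 2.0)])
```

In the paper's setting, the objective would instead run a SWAT simulation and score it against observed streamflow and sediment concentration.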
Recent improvements of the JET lithium beam diagnostic
NASA Astrophysics Data System (ADS)
Brix, M.; Dodt, D.; Dunai, D.; Lupelli, I.; Marsen, S.; Melson, T. F.; Meszaros, B.; Morgan, P.; Petravich, G.; Refy, D. I.; Silva, C.; Stamp, M.; Szabolics, T.; Zastrow, K.-D.; Zoletnik, S.; JET-EFDA Contributors
2012-10-01
A 60 kV neutral lithium diagnostic beam probes the edge plasma of JET for the measurement of electron density profiles. This paper describes recent enhancements of the diagnostic setup, new calibration procedures, and protection measures for the lithium ion gun during massive gas puffs for disruption mitigation. New light-splitting optics allow beam emission measurements in parallel with a new double-entrance-slit CCD spectrometer (spectrally resolved) and a new interference-filter avalanche photodiode camera (fast density and fluctuation studies).
Mahmoudi, Zeinab; Johansen, Mette Dencker; Christiansen, Jens Sandahl
2014-01-01
Background: The purpose of this study was to investigate the effect of using a 1-point calibration approach instead of a 2-point calibration approach on the accuracy of a continuous glucose monitoring (CGM) algorithm. Method: A previously published real-time CGM algorithm was compared with its updated version, which used a 1-point calibration instead of a 2-point calibration. In addition, the contribution of the corrective intercept (CI) to the calibration performance was assessed. Finally, the sensor background current was estimated both in real time and retrospectively. The study was performed on 132 type 1 diabetes patients. Results: Replacing the 2-point calibration with the 1-point calibration improved the CGM accuracy, with the greatest improvement achieved in hypoglycemia (18.4% median absolute relative differences [MARD] in hypoglycemia for the 2-point calibration, and 12.1% MARD in hypoglycemia for the 1-point calibration). Using 1-point calibration increased the percentage of sensor readings in zone A+B of the Clarke error grid analysis (EGA) in the full glycemic range, and also enhanced hypoglycemia sensitivity. Exclusion of CI from calibration reduced hypoglycemia accuracy, while slightly increasing euglycemia accuracy. Both real-time and retrospective estimation of the sensor background current suggest that the background current can be considered zero in the calibration of the SCGM1 sensor. Conclusions: The sensor readings calibrated with the 1-point calibration approach were found to have higher accuracy than those calibrated with the 2-point calibration approach. PMID:24876420
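The difference between the two schemes can be sketched with elementary linear calibration: a 2-point calibration fits slope and intercept from two paired samples, while a 1-point calibration anchors the line at the sensor background current (which the study suggests can be taken as zero for the SCGM1 sensor). Function names and numbers below are illustrative, not the published algorithm:

```python
def calibrate_two_point(raw1, ref1, raw2, ref2):
    """Slope and intercept from two paired (sensor signal, reference glucose) samples."""
    slope = (ref2 - ref1) / (raw2 - raw1)
    return slope, ref1 - slope * raw1

def calibrate_one_point(raw, ref, background=0.0):
    """One paired sample; the line is forced through the background current
    (assumed zero here, per the study's finding for the SCGM1 sensor)."""
    slope = ref / (raw - background)
    return slope, -slope * background

def to_glucose(raw, slope, intercept):
    """Convert a raw sensor signal to an estimated glucose value."""
    return slope * raw + intercept

# Synthetic sensor with true slope 0.5 and zero background current:
s1, i1 = calibrate_one_point(raw=240.0, ref=120.0)
g = to_glucose(180.0, s1, i1)  # → 90.0
```

With only one reference sample needed, the 1-point scheme avoids the second fingerstick that the 2-point scheme requires, at the cost of relying on the background-current assumption.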
Measurement of the relative afferent pupillary defect in retinal detachment.
Bovino, J A; Burton, T C
1980-07-01
A swinging flashlight test and calibrated neutral density filters were used to quantitate the depth of relative afferent pupillary defects in ten patients with retinal detachment. Postoperatively, the pupillary responses returned to normal in seven of nine patients with anatomically successful surgery.
Predicting the dynamic fracture of steel via a non-local strain-energy density failure criterion.
DOT National Transportation Integrated Search
2014-06-01
Predicting the onset of fracture in a material subjected to dynamic loading conditions has typically been heavily mesh-dependent, and models often must be specifically calibrated for each geometric design. This can lead to costly models and even costlier ...
Indirect Field Measurement of Wine-Grape Vineyard Canopy Leaf Area Index
NASA Technical Reports Server (NTRS)
Johnson, Lee F.; Pierce, Lars L.; Skiles, J. W. (Technical Monitor)
2002-01-01
Leaf area index (LAI) indirect measurements were made at 12 study plots in California's Napa Valley commercial wine-grape vineyards with a LI-COR LI-2000 Plant Canopy Analyzer (PCA). The plots encompassed different trellis systems, biological varieties, and planting densities. LAI ranged from 0.5 to 2.25 sq m leaf area per sq m ground area according to direct (defoliation) measurements. Indirect LAI reported by the PCA was significantly related to direct LAI (r² = 0.78, p < 0.001). However, the PCA tended to underestimate direct LAI by about a factor of two. Narrowing the instrument's conical field of view from 148 deg to 56 deg served to increase readings by approximately 30%. The PCA offers a convenient way to discern relative differences in vineyard canopy density. Calibration by direct measurement (defoliation) is recommended in cases where absolute LAI is desired. Calibration equations provided herein may be inverted to retrieve actual vineyard LAI from PCA readings.
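The inversion the abstract mentions amounts to fitting a line between PCA readings and direct LAI and then applying it to new readings. The sketch below uses hypothetical numbers that mimic the reported factor-of-two underestimate; it is not the paper's actual calibration equation:

```python
def fit_line(xs, ys):
    """Ordinary least squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical paired data: PCA underestimates direct LAI by about 2x.
pca_lai = [0.25, 0.50, 0.75, 1.00]
direct_lai = [0.50, 1.00, 1.50, 2.00]
a, b = fit_line(pca_lai, direct_lai)

# "Invert" the calibration: estimate actual LAI from a new PCA reading.
lai_estimate = a * 0.6 + b  # → 1.2
```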
Kowalski, M P; Barbee, T W; Heidemann, K F; Gursky, H; Rife, J C; Hunter, W R; Fritz, G G; Cruddace, R G
1999-11-01
We have fabricated the four flight gratings for a sounding rocket high-resolution spectrometer using a holographic ion-etching technique. The gratings are spherical (4000-mm radius of curvature), large (160 mm x 90 mm), and have a laminar groove profile of high density (3600 grooves/mm). They have been coated with a high-reflectance multilayer of Mo/Si. Using an atomic force microscope, we examined the surface characteristics of the first grating before and after multilayer coating. The average roughness is approximately 3 Å rms after coating. Using synchrotron radiation, we completed an efficiency calibration map over the wavelength range 225-245 Å. At an angle of incidence of 5 degrees and a wavelength of 234 Å, the average efficiency in the first inside order is 10.4 ± 0.5%, and the derived groove efficiency is 34.8 ± 1.6%. These values exceed all previously published results for a high-density grating.
NASA Astrophysics Data System (ADS)
Becerra, Luis Omar; Lorefice, Salvatore
2009-01-01
Hydrometers are instruments usually made of glass which are widely used at different levels of precision to measure liquid density and related quantities to control different products and processes. This bilateral comparison on the calibration of hydrometers shows that results reported by CENAM-Mexico and INRIM-Italy are consistent within the claimed uncertainty in the range of 800 kg/m³ to 1200 kg/m³. This bilateral comparison is intended to link the two regional comparisons SIM.M.D-K4 and EURAMET.D-K4. The final report has been peer-reviewed and approved for publication by SIM, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
Use of Vertically Integrated Ice in WRF-Based Forecasts of Lightning Threat
NASA Technical Reports Server (NTRS)
McCaul, E. W., jr.; Goodman, S. J.
2008-01-01
Previously reported methods of forecasting lightning threat using fields of graupel flux from WRF simulations are extended to include the simulated field of vertically integrated ice within storms. Although the ice integral shows less temporal variability than graupel flux, it provides more areal coverage, and can thus be used to create a lightning forecast that better matches the areal coverage of the lightning threat found in observations of flash extent density. A blended lightning forecast threat can be constructed that retains much of the desirable temporal sensitivity of the graupel flux method, while also incorporating the coverage benefits of the ice integral method. The graupel flux and ice integral fields contributing to the blended forecast are calibrated against observed lightning flash origin density data, based on Lightning Mapping Array observations from a series of case studies chosen to cover a wide range of flash rate conditions. Linear curve fits that pass through the origin are found to be statistically robust for the calibration procedures.
NASA Astrophysics Data System (ADS)
Venkata, Santhosh Krishnan; Roy, Binoy Krishna
2016-03-01
Design of an intelligent flow measurement technique using a venturi flow meter is reported in this paper. The objectives of the present work are: (1) to extend the linearity range of measurement to 100% of the full-scale input range, (2) to make the measurement technique adaptive to variations in discharge coefficient, diameter ratio of venturi nozzle and pipe (β), liquid density, and liquid temperature, and (3) to achieve objectives (1) and (2) using an optimized neural network. The output of the venturi flow meter is a differential pressure, which is converted to voltage by a suitable data conversion unit. A suitable optimized artificial neural network (ANN) is added in place of the conventional calibration circuit. The ANN is trained and tested with simulated data considering variations in discharge coefficient, diameter ratio between venturi nozzle and pipe, liquid density, and liquid temperature. The proposed technique is then validated against practical data. Results show that the proposed technique fulfills the objectives.
Data-driven sensitivity inference for Thomson scattering electron density measurement systems.
Fujii, Keisuke; Yamada, Ichihiro; Hasuo, Masahiro
2017-01-01
We developed a method to infer the calibration parameters of multichannel measurement systems, such as channel variations of sensitivity and noise amplitude, from experimental data. We regard such uncertainties of the calibration parameters as dependent noise. The statistical properties of the dependent noise and that of the latent functions were modeled and implemented in the Gaussian process kernel. Based on their statistical difference, both parameters were inferred from the data. We applied this method to the electron density measurement system by Thomson scattering for the Large Helical Device plasma, which is equipped with 141 spatial channels. Based on the 210 sets of experimental data, we evaluated the correction factor of the sensitivity and noise amplitude for each channel. The correction factor varies by ≈10%, and the random noise amplitude is ≈2%, i.e., the measurement accuracy increases by a factor of 5 after this sensitivity correction. The certainty improvement in the spatial derivative inference was demonstrated.
New Thomson scattering diagnostic on RFX-mod.
Alfier, A; Pasqualotto, R
2007-01-01
This article describes the completely renovated Thomson scattering (TS) diagnostic employed on the modified Reversed Field eXperiment (RFX-mod) since it restarted operation in 2005. The system measures plasma electron temperature and density profiles along an equatorial diameter at 84 positions with 7 mm spatial resolution. The custom-built Nd:YLF laser produces a burst of 10 pulses at 50 Hz with an energy of 3 J, providing ten profile measurements in a plasma discharge of about 300 ms duration. An optical delay system accommodates three scattering volumes in each of the 28 interference filter spectrometers. Avalanche photodiodes detect the Thomson scattering signals, which are recorded by means of waveform digitizers. Electron temperature is obtained using an alternative relative calibration method, based on the use of a supercontinuum light source. Rotational Raman scattering in nitrogen has supplied the absolute calibration for the electron density measurements. During the RFX-mod experimental campaigns in 2005, the TS diagnostic demonstrated its performance, routinely providing reliable high-resolution profiles.
Nizzetto, Luca; Bussi, Gianbattista; Futter, Martyn N; Butterfield, Dan; Whitehead, Paul G
2016-08-10
The presence of microplastics (MPs) in the environment is a problem of growing concern. While research has focused on MP occurrence and impacts in the marine environment, very little is known about their release on land, storage in soils and sediments and transport by run-off and rivers. This study describes a first theoretical assessment of these processes. A mathematical model of catchment hydrology, soil erosion and sediment budgets was upgraded to enable description of MP fate. The Thames River in the UK was used as a case study. A general lack of data on MP emissions to soils and rivers and the mass of MPs in agricultural soils, limits the present work to serve as a purely theoretical, nevertheless rigorous, assessment that can be used to guide future monitoring and impact evaluations. The fundamental assumption on which modelling is based is that the same physical controls on soil erosion and natural sediment transport (for which model calibration and validation are possible), also control MP transport and storage. Depending on sub-catchment soil characteristics and precipitation patterns, approximately 16-38% of the heavier-than-water MPs hypothetically added to soils (e.g. through routine applications of sewage sludge) are predicted to be stored locally. In the stream, MPs < 0.2 mm are generally not retained, regardless of their density. Larger MPs with densities marginally higher than water can instead be retained in the sediment. It is, however, anticipated that high flow periods can remobilize this pool. Sediments of river sections experiencing low stream power are likely hotspots for deposition of MPs. Exposure and impact assessments should prioritize these environments.
Sarshar, Mohammad; Wong, Winson T.; Anvari, Bahman
2014-01-01
Optical tweezers have become an important instrument in force measurements associated with various physical, biological, and biophysical phenomena. Quantitative use of optical tweezers relies on accurate calibration of the stiffness of the optical trap. Using the same optical tweezers platform operating at 1064 nm and beads of two different diameters, we present a comparative study of viscous drag force, equipartition theorem, Boltzmann statistics, and power spectral density (PSD) as methods for calibrating the stiffness of a single-beam gradient-force optical trap at trapping laser powers in the range of 0.05 to 1.38 W at the focal plane. The equipartition theorem and Boltzmann statistics methods demonstrate a linear stiffness with trapping laser powers up to 355 mW when used in conjunction with video position sensing. The PSD of a trapped particle's Brownian motion or measurements of the particle displacement against known viscous drag forces can be reliably used for stiffness calibration of an optical trap over a greater range of trapping laser powers. The viscous drag stiffness calibration method produces results relevant to applications where the trapped particle undergoes large displacements and, at a given position sensing resolution, can be used for stiffness calibration at higher trapping laser powers than the PSD method. PMID:25375348
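The equipartition method compared above rests on a one-line formula: for a harmonic trap, each degree of freedom carries kT/2 of thermal energy, so the stiffness is k = kB·T / ⟨x²⟩, where ⟨x²⟩ is the variance of the bead's position. A minimal sketch on synthetic Gaussian position data (the trap stiffness and temperature are illustrative values, not the paper's):

```python
import math
import random

K_B = 1.380649e-23  # Boltzmann constant, J/K

def trap_stiffness_equipartition(positions_m, temperature_k=295.0):
    """Estimate trap stiffness from positional variance: k = kB*T / <x^2>."""
    mean = sum(positions_m) / len(positions_m)
    var = sum((x - mean) ** 2 for x in positions_m) / len(positions_m)
    return K_B * temperature_k / var

# Synthetic check: positions drawn from the Boltzmann distribution of a
# harmonic trap with known (illustrative) stiffness recover that stiffness.
rng = random.Random(0)
k_true = 1e-5  # N/m
sigma = math.sqrt(K_B * 295.0 / k_true)  # ~20 nm rms excursion
xs = [rng.gauss(0.0, sigma) for _ in range(20000)]
k_est = trap_stiffness_equipartition(xs)
```

Note the method needs only a position-calibrated detector and the temperature, which is why it pairs naturally with video position sensing at low powers.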
Electron-density-sensitive Line Ratios of Fe XIII–XVI from Laboratory Sources Compared to CHIANTI
NASA Astrophysics Data System (ADS)
Weller, M. E.; Beiersdorfer, P.; Soukhanovskii, V. A.; Scotti, F.; LeBlanc, B. P.
2018-02-01
We present electron-density-sensitive line ratios for Fe XIII–XVI measured in the spectral wavelength range of 200–440 Å and an electron density range of (1–4) × 10¹³ cm⁻³. The results provide a test at the high-density limit of density-sensitive line ratios useful for astrophysical studies. The measurements were performed on the National Spherical Torus Experiment-Upgrade, where electron densities were measured independently by the laser Thomson scattering diagnostic. Spectra were collected with a flat-field grazing-incidence spectrometer, which provided a spectral resolution of up to 0.3 Å, i.e., high resolution across the broad wavelength range. The response of the instrument was relatively calibrated using spectroscopic techniques in order to improve accuracy. The line ratios are compared to other laboratory sources and the latest version of CHIANTI (8.0.2), and agreement within 30% is found.
Vuong, Kylie; Armstrong, Bruce K; Weiderpass, Elisabete; Lund, Eiliv; Adami, Hans-Olov; Veierod, Marit B; Barrett, Jennifer H; Davies, John R; Bishop, D Timothy; Whiteman, David C; Olsen, Catherine M; Hopper, John L; Mann, Graham J; Cust, Anne E; McGeechan, Kevin
2016-08-01
Identifying individuals at high risk of melanoma can optimize primary and secondary prevention strategies. To develop and externally validate a risk prediction model for incident first-primary cutaneous melanoma using self-assessed risk factors, we used unconditional logistic regression to develop a multivariable risk prediction model. Relative risk estimates from the model were combined with Australian melanoma incidence and competing mortality rates to obtain absolute risk estimates. A risk prediction model was developed using the Australian Melanoma Family Study (629 cases and 535 controls) and externally validated using 4 independent population-based studies: the Western Australia Melanoma Study (511 case-control pairs), Leeds Melanoma Case-Control Study (960 cases and 513 controls), Epigene-QSkin Study (44 544 participants, of whom 766 had melanoma), and Swedish Women's Lifestyle and Health Cohort Study (49 259 women, of whom 273 had melanoma). We validated model performance internally and externally by assessing discrimination using the area under the receiver operating characteristic curve (AUC). Additionally, using the Swedish Women's Lifestyle and Health Cohort Study, we assessed model calibration and clinical usefulness. The risk prediction model included hair color, nevus density, first-degree family history of melanoma, previous nonmelanoma skin cancer, and lifetime sunbed use. On internal validation, the AUC was 0.70 (95% CI, 0.67-0.73). On external validation, the AUC was 0.66 (95% CI, 0.63-0.69) in the Western Australia Melanoma Study, 0.67 (95% CI, 0.65-0.70) in the Leeds Melanoma Case-Control Study, 0.64 (95% CI, 0.62-0.66) in the Epigene-QSkin Study, and 0.63 (95% CI, 0.60-0.67) in the Swedish Women's Lifestyle and Health Cohort Study. Model calibration showed close agreement between predicted and observed numbers of incident melanomas across all deciles of predicted risk.
In the external validation setting, there was higher net benefit when using the risk prediction model to classify individuals as high risk compared with classifying all individuals as high risk. The melanoma risk prediction model performs well and may be useful in prevention interventions reliant on a risk assessment using self-assessed risk factors.
LIDAR TS for ITER core plasma. Part III: calibration and higher edge resolution
NASA Astrophysics Data System (ADS)
Nielsen, P.; Gowers, C.; Salzmann, H.
2017-12-01
Calibration, after initial installation, of the proposed two-wavelength LIDAR Thomson Scattering System requires no access to the front end and does not require a foreign gas fill for Raman scattering. As already described, the variation of the solid angle of collection with scattering position is a simple geometrical variation over the unvignetted region. The additional loss over the vignetted region can easily be estimated and, in the case of a small beam dump located between the Be tiles, is within the specified accuracy of the density. The only additional calibration is the absolute spectral transmission of the front-end optics. Over time we expect the transmission of the two front-end mirrors to deteriorate, mainly due to depositions. The reduction in transmission is likely to be worse towards the blue end of the scattering spectrum. It is therefore necessary to have a method to monitor such changes and to determine their spectral variation. Standard methods use two lasers at different wavelengths with a small time separation. Using the two-wavelength approach, a method has been developed to determine the relative spectral variation of the transmission loss, using only the measured signals in plasmas with peak temperatures of 4-6 keV. Comparing the calculated line integral of the fitted density over the full chord to the corresponding interferometer data, we also have an absolute calibration. At the outer plasma boundary, the standard resolution of the LIDAR Thomson Scattering System is not sufficient to determine the edge gradient in an H-mode plasma. However, because of the step-like nature of the signal here, it is possible to carry out a deconvolution of the scattered signals, thereby achieving an effective resolution of ~1-2 cm in the outer 10-20 cm.
Śliwińska, Magdalena; Garcia-Hernandez, Celia; Kościński, Mikołaj; Dymerski, Tomasz; Wardencki, Waldemar; Namieśnik, Jacek; Śliwińska-Bartkowiak, Małgorzata; Jurga, Stefan; Garcia-Cabezon, Cristina; Rodriguez-Mendez, Maria Luz
2016-01-01
The capability of a phthalocyanine-based voltammetric electronic tongue to analyze strong alcoholic beverages has been evaluated and compared with the performance of spectroscopic techniques coupled to chemometrics. Nalewka Polish liqueurs prepared from five apple varieties have been used as a model of strong liqueurs. Principal Component Analysis has demonstrated that the best discrimination between liqueurs prepared from different apple varieties is achieved using the e-tongue and UV-Vis spectroscopy. Raman spectra coupled to chemometrics have not been efficient in discriminating liqueurs. The calculated Euclidean distances and the k-Nearest Neighbors algorithm (kNN) confirmed these results. The main advantage of the e-tongue is that, using PLS-1, good correlations have been found simultaneously with the phenolic content measured by the Folin–Ciocalteu method (R2 of 0.97 in calibration and R2 of 0.93 in validation) and also with the density, a marker of the alcoholic content method (R2 of 0.93 in calibration and R2 of 0.88 in validation). UV-Vis coupled with chemometrics has shown good correlations only with the phenolic content (R2 of 0.99 in calibration and R2 of 0.99 in validation) but correlations with the alcoholic content were low. Raman coupled with chemometrics has shown good correlations only with density (R2 of 0.96 in calibration and R2 of 0.85 in validation). In summary, from the three holistic methods evaluated to analyze strong alcoholic liqueurs, the voltammetric electronic tongue using phthalocyanines as sensing elements is superior to Raman or UV-Vis techniques because it shows an excellent discrimination capability and remarkable correlations with both antioxidant capacity and alcoholic content—the most important parameters to be measured in this type of liqueurs. PMID:27735832
Virtual environment assessment for laser-based vision surface profiling
NASA Astrophysics Data System (ADS)
ElSoussi, Adnane; Al Alami, Abed ElRahman; Abu-Nabah, Bassam A.
2015-03-01
Oil and gas businesses have been raising the demand on original equipment manufacturers (OEMs) to implement a reliable metrology method for assessing surface profiles of welds before and after grinding. This mandates a move away from the commonly used surface measurement gauges, which are not only operator-dependent, but also limited to discrete measurements along the weld. Due to their potential accuracy and speed, laser-based vision surface profiling systems have been progressively adopted as part of manufacturing quality control. This effort presents a virtual environment that lends itself to developing and evaluating existing laser vision sensor (LVS) calibration and measurement techniques. A combination of two known calibration techniques is implemented to deliver a calibrated LVS system. System calibration is implemented virtually and experimentally to scan simulated and 3D-printed features of known profiles, respectively. Scanned data are inverted and compared with the input profiles to validate the virtual environment's capability for LVS surface profiling and to preliminarily assess the measurement technique for weld profiling applications. Moreover, this effort brings 3D scanning capability a step closer towards robust quality control applications in a manufacturing environment.
Barba-Montoya, Jose; Dos Reis, Mario; Yang, Ziheng
2017-09-01
Fossil calibrations are the principal source of information for resolving the distances between molecular sequences into estimates of absolute times and absolute rates in molecular clock dating analysis. The quality of calibrations is thus expected to have a major impact on divergence time estimates even if a huge amount of molecular data is available. In Bayesian molecular clock dating, fossil calibration information is incorporated in the analysis through the prior on divergence times (the time prior). Here, we evaluate three strategies for converting fossil calibrations (in the form of minimum- and maximum-age bounds) into the prior on times, which differ according to whether they borrow information from the maximum age of ancestral nodes and minimum age of descendant nodes to form constraints for any given node on the phylogeny. We study a simple example that is analytically tractable, and analyze two real datasets (one of 10 primate species and another of 48 seed plant species) using three Bayesian dating programs: MCMCTree, MrBayes and BEAST2. We examine how different calibration strategies, the birth-death process, and automatic truncation (to enforce the constraint that ancestral nodes are older than descendant nodes) interact to determine the time prior. In general, truncation has a great impact on calibrations, so that the effective priors on the calibration node ages after truncation can be very different from the user-specified calibration densities. The different strategies for generating the effective prior also had considerable impact, leading to very different marginal effective priors. Arbitrary parameters used to implement minimum-bound calibrations were found to have a strong impact upon the prior and posterior of the divergence times. Our results highlight the importance of inspecting the joint time prior used by the dating program before any Bayesian dating analysis. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliveira, P. A.; Santos, J. A. M., E-mail: joao.santos@ipoporto.min-saude.pt; Serviço de Física Médica do Instituto Português de Oncologia do Porto Francisco Gentil, EPE, Porto
2014-07-15
Purpose: An original radionuclide calibrator method for activity determination is presented. The method could be used for intercomparison surveys for short half-life radioactive sources used in Nuclear Medicine, such as {sup 99m}Tc or most positron emission tomography radiopharmaceuticals. Methods: By evaluating the resulting net optical density (netOD) using a standardized scanning method of irradiated Gafchromic XRQA2 film, the netOD measurement can be compared with a previously determined calibration curve, and the difference between the tested radionuclide calibrator and a radionuclide calibrator used as reference device can be calculated. To estimate the total expected measurement uncertainties, a careful analysis of the methodology, for the case of {sup 99m}Tc, was performed: reproducibility determination, scanning conditions, and possible fadeout effects. Since every factor of the activity measurement procedure can influence the final result, the method also evaluates correct syringe positioning inside the radionuclide calibrator. Results: As an alternative to using a calibrated source sent to the surveyed site, which requires a relatively long half-life of the nuclide, or sending a portable calibrated radionuclide calibrator, the proposed method uses a source prepared in situ. An indirect activity determination is achieved by the irradiation of a radiochromic film using {sup 99m}Tc under strictly controlled conditions, and calculation of the cumulated activity from the initial activity and total irradiation time. The irradiated Gafchromic film and the irradiator, without the source, can then be sent to a National Metrology Institute for evaluation of the results.
Conclusions: The methodology described in this paper was shown to have good potential for accurate (3%) radionuclide calibrator intercomparison studies for {sup 99m}Tc between Nuclear Medicine centers without source transfer, and it can easily be adapted to other short half-life radionuclides.
A calibration hierarchy for risk models was defined: from utopia to empirical data.
Van Calster, Ben; Nieboer, Daan; Vergouwe, Yvonne; De Cock, Bavo; Pencina, Michael J; Steyerberg, Ewout W
2016-06-01
Calibrated risk models are vital for valid decision support. We define four levels of calibration and describe implications for model development and external validation of predictions. We present results based on simulated data sets. A common definition of calibration is "having an event rate of R% among patients with a predicted risk of R%," which we refer to as "moderate calibration." Weaker forms of calibration only require the average predicted risk (mean calibration) or the average prediction effects (weak calibration) to be correct. "Strong calibration" requires that the event rate equal the predicted risk for every covariate pattern. This implies that the model is fully correct for the validation setting. We argue that this is unrealistic: the model type may be incorrect, the linear predictor is only asymptotically unbiased, and all nonlinear and interaction effects would have to be correctly modeled. In addition, we prove that moderate calibration guarantees nonharmful decision making. Finally, results indicate that a flexible assessment of calibration in small validation data sets is problematic. Strong calibration is desirable for individualized decision support but unrealistic and counterproductive, stimulating the development of overly complex models. Model development and external validation should focus on moderate calibration. Copyright © 2016 Elsevier Inc. All rights reserved.
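The "mean" and "weak" calibration levels above can be checked with a standard recalibration regression: fit a logistic model of the observed outcomes on the logit of the predicted risks, and read off the intercept (mean calibration) and slope (weak calibration). A minimal sketch in numpy, using simulated data (the function name and the Newton-Raphson fit are illustrative, not from the paper):

```python
import numpy as np

def weak_calibration(y, p, iters=25):
    """Calibration intercept and slope from a logistic regression of
    outcomes y on the logit of the predicted risks p (Newton-Raphson)."""
    lp = np.log(p / (1.0 - p))                  # linear predictor
    X = np.column_stack([np.ones_like(lp), lp])
    beta = np.zeros(2)
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))  # fitted event probabilities
        w = mu * (1.0 - mu)
        grad = X.T @ (y - mu)                   # score vector
        hess = X.T @ (X * w[:, None])           # observed information
        beta += np.linalg.solve(hess, grad)
    return beta[0], beta[1]  # intercept ~ mean calibration, slope ~ weak calibration

# Simulated validation set in which the model is perfectly calibrated
rng = np.random.default_rng(0)
p = rng.uniform(0.05, 0.95, 20000)
y = (rng.uniform(size=p.size) < p).astype(float)
intercept, slope = weak_calibration(y, p)
```

For a perfectly calibrated model the intercept should be near 0 and the slope near 1; overfitted models typically show slopes below 1 on external validation.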
Finite Element Model Calibration Approach for Ares I-X
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Gaspar, James L.; Lazor, Daniel R.; Parks, Russell A.; Bartolotta, Paul A.
2010-01-01
Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of non-conventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pretest predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.
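The abstract's central computational trick, replacing expensive finite element solutions with response surface models before running a variance-based sensitivity analysis, can be illustrated with a toy example. The sketch below (all functions and numbers are hypothetical stand-ins, not the Ares I-X models) fits a quadratic response surface to a small sample of "expensive" runs, then estimates first-order variance fractions on the cheap surrogate with the pick-freeze estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_model(x1, x2):
    """Stand-in for a finite element solution (toy response)."""
    return 4.0 * x1 + 0.5 * x2 ** 2

# Fit a quadratic response surface from a small sample of 'FEM runs'
X = rng.uniform(-1.0, 1.0, size=(60, 2))
y = expensive_model(X[:, 0], X[:, 1])
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def surrogate(x1, x2):
    return (coef[0] + coef[1] * x1 + coef[2] * x2
            + coef[3] * x1 ** 2 + coef[4] * x2 ** 2 + coef[5] * x1 * x2)

# First-order variance fractions via the pick-freeze estimator,
# evaluated on the cheap surrogate instead of the FEM model
n = 20000
x1, x2 = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
yA = surrogate(x1, x2)
yC = surrogate(x1, rng.uniform(-1, 1, n))   # x1 frozen, x2 resampled
yB = surrogate(rng.uniform(-1, 1, n), x2)   # x2 frozen, x1 resampled
S1 = (np.mean(yA * yC) - np.mean(yA) * np.mean(yC)) / yA.var()
S2 = (np.mean(yA * yB) - np.mean(yA) * np.mean(yB)) / yA.var()
```

Here `S1` dominates because the toy response is driven mainly by the first parameter; in the calibration workflow such rankings identify which stiffness or mass parameters matter at each frequency.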
NASA Astrophysics Data System (ADS)
Kunnath-Poovakka, A.; Ryu, D.; Renzullo, L. J.; George, B.
2016-04-01
Calibration of spatially distributed hydrologic models is frequently limited by the availability of ground observations. Remotely sensed (RS) hydrologic information provides an alternative source of observations to inform models and extend modelling capability beyond the limits of ground observations. This study examines the capability of RS evapotranspiration (ET) and soil moisture (SM) in calibrating a hydrologic model and its efficacy to improve streamflow predictions. SM retrievals from the Advanced Microwave Scanning Radiometer-EOS (AMSR-E) and daily ET estimates from the CSIRO MODIS ReScaled potential ET (CMRSET) are used to calibrate a simplified Australian Water Resource Assessment - Landscape model (AWRA-L) for a selection of parameters. The Shuffled Complex Evolution Uncertainty Algorithm (SCE-UA) is employed for parameter estimation at eleven catchments in eastern Australia. A subset of parameters for calibration is selected based on the variance-based Sobol' sensitivity analysis. The efficacy of 15 objective functions for calibration is assessed based on streamflow predictions relative to control cases, and the relative merits of each are discussed. Synthetic experiments were conducted to examine the effect of bias in RS ET observations on calibration. The objective function containing the root mean square deviation (RMSD) of ET results in the best streamflow predictions, and its efficacy is superior for catchments with medium to high average runoff. Synthetic experiments revealed that an accurate ET product can improve streamflow predictions in catchments with low average runoff.
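The idea of calibrating a model parameter against RS ET using an RMSD objective can be sketched with a deliberately tiny stand-in for AWRA-L and SCE-UA (a one-parameter ET model and a grid search; all names and numbers below are illustrative assumptions):

```python
import numpy as np

def rmsd(sim, obs):
    """Root mean square deviation between simulated and observed series."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def toy_et_model(k, pet):
    """Toy model: actual ET as a fraction k of potential ET."""
    return k * pet

# Synthetic 'satellite' ET generated with k_true = 0.6, then recovered
# by minimizing the ET-RMSD objective over a parameter grid
rng = np.random.default_rng(2)
pet = rng.uniform(1.0, 6.0, 365)                     # potential ET, mm/day
obs_et = 0.6 * pet + rng.normal(0.0, 0.1, pet.size)  # noisy RS-like ET

ks = np.linspace(0.1, 1.0, 91)
scores = [rmsd(toy_et_model(k, pet), obs_et) for k in ks]
k_best = float(ks[int(np.argmin(scores))])
```

In the study the search space has many parameters, so a grid is replaced by SCE-UA, but the role of the RMSD-of-ET objective is the same: it scores each candidate parameter set against the satellite product.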
Estimation of k-ε parameters using surrogate models and jet-in-crossflow data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lefantzi, Sophia; Ray, Jaideep; Arunajatesan, Srinivasan
2014-11-01
We demonstrate a Bayesian method that can be used to calibrate computationally expensive 3D RANS (Reynolds-Averaged Navier Stokes) models with complex response surfaces. Such calibrations, conditioned on experimental data, can yield turbulence model parameters as probability density functions (PDF), concisely capturing the uncertainty in the parameter estimates. Methods such as Markov chain Monte Carlo (MCMC) estimate the PDF by sampling, with each sample requiring a run of the RANS model. Consequently, a quick-running surrogate is used in place of the RANS simulator. The surrogate can be very difficult to design if the model's response, i.e., the dependence of the calibration variable (the observable) on the parameter being estimated, is complex. We show how the training data used to construct the surrogate can be employed to isolate a promising and physically realistic part of the parameter space, within which the response is well behaved and easily modeled. We design a classifier, based on treed linear models, to model the "well-behaved region". This classifier serves as a prior in a Bayesian calibration study aimed at estimating three k-ε parameters (Cμ, Cε2, Cε1) from experimental data of a transonic jet-in-crossflow interaction. The robustness of the calibration is investigated by checking its predictions of variables not included in the calibration data. We also check the limit of applicability of the calibration by testing at off-calibration flow regimes. We find that calibration yields turbulence model parameters which predict the flowfield far better than when the nominal values of the parameters are used. Substantial improvements are still obtained when we use the calibrated RANS model to predict jet-in-crossflow at Mach numbers and jet strengths quite different from those used to generate the experimental (calibration) data.
Thus the primary reason for the poor predictive skill of RANS, when using nominal values of the turbulence model parameters, was parametric uncertainty, which was rectified by calibration. Post-calibration, the dominant contribution to model inaccuracies is the structural error in RANS.
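The core machinery described above, MCMC sampling against a cheap surrogate, with a classifier restricting proposals to the well-behaved region, can be sketched in one dimension. Everything below (the quadratic "surrogate", the feasibility box, the observation error) is a toy assumption standing in for the treed-linear-model classifier and the RANS response surface:

```python
import numpy as np

rng = np.random.default_rng(3)

def surrogate(c):
    """Cheap stand-in for the RANS response surface (toy quadratic)."""
    return 2.0 * c + 0.5 * c ** 2

def feasible(c):
    """Stand-in for the treed-model classifier: True inside the
    'well-behaved' part of parameter space."""
    return 0.0 <= c <= 1.5

obs = surrogate(0.9)                   # synthetic 'experiment' at c_true = 0.9
sigma = 0.05                           # assumed observation error

def log_post(c):
    if not feasible(c):
        return -np.inf                 # classifier acts as a hard prior
    return -0.5 * ((surrogate(c) - obs) / sigma) ** 2

# Random-walk Metropolis, each step costing a surrogate (not RANS) call
chain = []
c = 0.5
lp = log_post(c)
for _ in range(20000):
    prop = c + rng.normal(0.0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
        c, lp = prop, lp_prop
    chain.append(c)
post_mean = float(np.mean(chain[5000:]))       # discard burn-in
```

The posterior mean recovers the "true" parameter, and the chain's spread is the parametric uncertainty the abstract refers to; in the paper each likelihood evaluation would otherwise cost a full 3D RANS run.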
Radiometric Characterization of the IKONOS, QuickBird, and OrbView-3 Sensors
NASA Technical Reports Server (NTRS)
Holekamp, Kara
2006-01-01
Radiometric calibration of commercial imaging satellite products is required to ensure that science and application communities can better understand their properties. Inaccurate radiometric calibrations can lead to erroneous decisions and invalid conclusions and can limit intercomparisons with other systems. To address this calibration need, satellite at-sensor radiance values were compared to those estimated by each independent team member to determine the sensor's radiometric accuracy. The combined results of this evaluation provide the user community with an independent assessment of these commercially available high spatial resolution sensors' absolute calibration values.
Aquarius Radiometer Performance: Early On-Orbit Calibration and Results
NASA Technical Reports Server (NTRS)
Piepmeier, Jeffrey R.; LeVine, David M.; Yueh, Simon H.; Wentz, Frank; Ruf, Christopher
2012-01-01
The Aquarius/SAC-D observatory was launched into a 657-km altitude, 6-PM ascending node, sun-synchronous polar orbit from Vandenberg, California, USA on June 10, 2011. The Aquarius instrument was commissioned two months after launch and began operating in mission mode August 25. The Aquarius radiometer meets all engineering requirements, exhibited initial calibration biases within expected error bars, and continues to operate well. A review of the instrument design, discussion of early on-orbit performance and calibration assessment, and investigation of an on-going calibration drift are summarized in this abstract.
Croft, Stephen; Burr, Thomas Lee; Favalli, Andrea; ...
2015-12-10
We report that the declared linear density of 238U and 235U in fresh low enriched uranium light water reactor fuel assemblies can be verified for nuclear safeguards purposes using a neutron coincidence counter collar in passive and active mode, respectively. The active mode calibration of the Uranium Neutron Collar – Light water reactor fuel (UNCL) instrument is normally performed using a non-linear fitting technique. The fitting technique relates the measured neutron coincidence rate (the predictor) to the linear density of 235U (the response) in order to estimate model parameters of the nonlinear Padé equation, which traditionally is used to model the calibration data. Alternatively, following a simple data transformation, the fitting can also be performed using standard linear fitting methods. This paper compares the performance of the nonlinear technique to that of the linear technique, using a range of possible error variance magnitudes in the measured neutron coincidence rate. We develop the required formalism and then apply the traditional (nonlinear) and alternative (linear) approaches to the same experimental and corresponding simulated representative datasets. Lastly, we find that, in this context, because of the magnitude of the errors in the predictor, it is preferable not to transform to a linear model, and it is preferable not to adjust for the errors in the predictor when inferring the model parameters.
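The "simple data transformation" works because a Padé-type calibration curve of the form R(m) = a·m/(1 + b·m) linearizes as m/R = 1/a + (b/a)·m. A minimal sketch with hypothetical constants and noiseless data (the actual UNCL calibration uses measured rates with non-negligible errors):

```python
import numpy as np

a_true, b_true = 120.0, 0.015        # hypothetical calibration constants
m = np.linspace(50.0, 400.0, 12)     # linear density of 235U (arbitrary units)
R = a_true * m / (1.0 + b_true * m)  # noiseless coincidence rate, Pade form

# The transform m/R = 1/a + (b/a)*m is linear in m, so ordinary
# least squares recovers the parameters exactly here
z = m / R
A = np.column_stack([np.ones_like(m), m])
(c0, c1), *_ = np.linalg.lstsq(A, z, rcond=None)
a_fit = 1.0 / c0
b_fit = c1 * a_fit
```

With noisy rates the transform also reshapes the error structure of the data (errors in R propagate into m/R non-uniformly), which is essentially why the authors conclude it is preferable not to transform to the linear model in practice.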
Lafon, Belen; Henin, Simon; Huang, Yu; Friedman, Daniel; Melloni, Lucia; Thesen, Thomas; Doyle, Werner; Buzsáki, György; Devinsky, Orrin; Parra, Lucas C; Liu, Anli
2018-02-28
It has come to our attention that we did not specify whether the stimulation magnitudes we report in this Article are peak amplitudes or peak-to-peak. All references to intensity given in mA in the manuscript refer to peak-to-peak amplitudes, except in Fig. 2, where the model is calibrated to 1 mA peak amplitude, as stated. In the original version of the paper we incorrectly calibrated the computational models to 1 mA peak-to-peak, rather than 1 mA peak amplitude. This means that we divided by a value twice as large as we should have. The correct estimated fields are therefore twice as large as shown in the original Fig. 2 and Supplementary Figure 11. The corrected figures are now properly calibrated to 1 mA peak amplitude. Furthermore, the sentence in the first paragraph of the Results section 'Intensity ranged from 0.5 to 2.5 mA (current density 0.125-0.625 mA/cm 2 ), which is stronger than in previous reports', should have read 'Intensity ranged from 0.5 to 2.5 mA peak to peak (peak current density 0.0625-0.3125 mA/cm 2 ), which is stronger than in previous reports.' These errors do not affect any of the Article's conclusions.
SWAT Model Configuration, Calibration and Validation for Lake Champlain Basin
The Soil and Water Assessment Tool (SWAT) model was used to develop phosphorus loading estimates for sources in the Lake Champlain Basin. This document describes the model setup and parameterization, and presents calibration results.
Cinelli, Giorgia; Tositti, Laura; Mostacci, Domiziano; Baré, Jonathan
2016-05-01
In view of assessing natural radioactivity with on-site quantitative gamma spectrometry, efficiency calibration of NaI(Tl) detectors is investigated. A calibration based on Monte Carlo simulation of detector response is proposed, to render reliable quantitative analysis practicable in field campaigns. The method is developed with reference to contact geometry, in which measurements are taken placing the NaI(Tl) probe directly against the solid source to be analyzed. The Monte Carlo code used for the simulations was MCNP. Experimental verification of the calibration goodness is obtained by comparison with appropriate standards, as reported. On-site measurements yield a quick quantitative assessment of natural radioactivity levels present ((40)K, (238)U and (232)Th). On-site gamma spectrometry can prove particularly useful insofar as it provides information on materials from which samples cannot be taken. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
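Once a Monte Carlo run provides the full-energy-peak efficiency for a given geometry, converting a measured net count rate to a specific activity is a one-line calculation. A sketch with entirely hypothetical counts and field numbers (the 10.66% branching ratio of the 1461 keV 40K gamma is a published nuclear-data value; the rest is illustrative):

```python
def mc_efficiency(n_detected, n_emitted):
    """Full-energy-peak efficiency estimated from a Monte Carlo run."""
    return n_detected / n_emitted

def specific_activity(net_count_rate, efficiency, branching_ratio, mass_kg):
    """Specific activity (Bq/kg) from a net peak count rate (counts/s)."""
    return net_count_rate / (efficiency * branching_ratio * mass_kg)

eff = mc_efficiency(4200, 1_000_000)               # simulated photons in peak
a_k40 = specific_activity(1.26, eff, 0.1066, 2.0)  # 1461 keV line of 40K
```

The whole point of the MCNP-based calibration is that `eff` can be recomputed for each contact geometry and matrix without preparing a matched physical standard.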
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yashchuk, V.V.; Takacs, P.; Anderson, E.H.
A modulation transfer function (MTF) calibration method based on binary pseudorandom (BPR) gratings and arrays has been proven to be an effective MTF calibration method for interferometric microscopes and a scatterometer. Here we report on a further expansion of the application range of the method. We describe the MTF calibration of a 6 in. phase shifting Fizeau interferometer. Beyond providing a direct measurement of the interferometer's MTF, tests with a BPR array surface have revealed an asymmetry in the instrument's data processing algorithm that fundamentally limits its bandwidth. Moreover, the tests have illustrated the effects of the instrument's detrending and filtering procedures on power spectral density measurements. The details of the development of a BPR test sample suitable for calibration of scanning and transmission electron microscopes are also presented. Such a test sample is realized as a multilayer structure with the layer thicknesses of two materials corresponding to the BPR sequence. The investigations confirm the universal character of the method that makes it applicable to a large variety of metrology instrumentation with spatial wavelength bandwidths from a few nanometers to hundreds of millimeters.
Establishing a NORM based radiation calibration facility.
Wallace, J
2016-05-01
An environmental radiation calibration facility has been constructed by the Radiation and Nuclear Sciences unit of Queensland Health at the Forensic and Scientific Services Coopers Plains campus in Brisbane. This facility consists of five low density concrete pads, spiked with a NORM source, to simulate soil and effectively provide a number of semi-infinite uniformly distributed sources for improved energy response calibrations of radiation equipment used in NORM measurements. The pads have been sealed with an environmental epoxy compound to restrict radon loss and so enhance the quality of secular equilibrium achieved. Monte Carlo models (MCNP), used to establish suitable design parameters and identify appropriate geometric correction factors linking the air kerma measured above these calibration pads to that predicted for an infinite plane using adjusted ICRU53 data, are discussed. Use of these correction factors as well as adjustments for cosmic radiation and the impact of surrounding low levels of NORM in the soil, allows for good agreement between the radiation fields predicted and measured above the pads at both 0.15 m and 1 m. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chen, Yuan-Ho
2017-05-01
In this work, we propose a counting-weighted calibration method for a field-programmable-gate-array (FPGA)-based time-to-digital converter (TDC) to provide non-linearity calibration for use in positron emission tomography (PET) scanners. To deal with the non-linearity in FPGA, we developed a counting-weighted delay line (CWD) to count the delay time of the delay cells in the TDC in order to reduce the differential non-linearity (DNL) values based on code density counts. The linearity performance of the proposed CWD-TDC far exceeds that of a TDC with a traditional tapped delay line (TDL) architecture, without the need for non-linearity calibration. When implemented in a Xilinx Virtex-5 FPGA device, the proposed CWD-TDC achieved a time resolution of 60 ps with integral non-linearity (INL) and DNL of [-0.54, 0.24] and [-0.66, 0.65] least-significant-bit (LSB), respectively. This is a clear indication of the suitability of the proposed FPGA-based CWD-TDC for use in PET scanners.
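The DNL/INL figures quoted above come from a code-density test: feed the converter a uniformly distributed input, histogram the output codes, and compare each bin to the ideal count. A minimal sketch of that standard calculation (the histogram here is synthetic):

```python
import numpy as np

def dnl_inl_from_code_density(hist):
    """DNL and INL (in LSB) from a code-density histogram collected
    with a uniformly distributed input."""
    hist = np.asarray(hist, dtype=float)
    expected = hist.sum() / hist.size   # ideal counts per code
    dnl = hist / expected - 1.0         # per-code bin-width error
    inl = np.cumsum(dnl)                # accumulated deviation
    return dnl, inl

# An ideal converter: every code bin collects the same number of hits
dnl, inl = dnl_inl_from_code_density([1000] * 64)
```

A real TDL-based TDC shows uneven bins (nonzero DNL) wherever delay cells straddle FPGA clock-region boundaries, which is exactly what the counting-weighted delay line is designed to suppress.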
The Second "Ring of Towers": Over-sampling the Mid Continent Intensive region CO2 mixing ratio?
NASA Astrophysics Data System (ADS)
Richardson, S.; Miles, N.; Davis, K.; Crosson, E.; Denning, S.; Zupanksi, D.; Uliasz, M.
2007-12-01
A central barrier preventing the scientific community from understanding the carbon balance of the continent is methodological; it is technically difficult to bridge the gap in spatial scales that exists between the detailed understanding of ecological processes that can be gathered via intensive local field study, and the overarching but mechanistically poor understanding of the global carbon cycle that is gained by analyzing the atmospheric CO2 budget. The NACP's Midcontinental Intensive (MCI) study seeks to bridge this gap by conducting a rigorous methodological test of our ability to measure the terrestrial carbon balance of the upper Midwest. A critical need in bridging this gap is increased data density. A primary goal of the project is to increase the regional atmospheric CO2 data density so that 1) atmospheric inversions can derive well-constrained regional ecosystem carbon flux estimates and 2) the trade-off between data density and accuracy of the flux estimates can be determined quantitatively using field observations, thus providing guidance to future observational network designs. Our work adds a regional network of five communications-tower-based atmospheric CO2 observations to the planned long-term atmospheric CO2 observing network (tall towers, flux towers and aircraft profiles) in the midcontinent intensive region. Measurements began in April-June 2007. If the measurements are shown to be spatially dense enough to oversample the CO2 mixing ratio, the experiment will provide an upper bound on the density of measurements required to produce the most accurate flux possible with current atmospheric inversions. The five sites for "Ring 2" and deployment dates are Centerville, IA (Apr 07), Round Lake, MN (May 07), Kewanee, IL (Apr 07), Mead, NE (Apr 07), and Galesville, WI (June 07). Two heights are sampled at each tower (30 m AGL and between 110 and 140 m AGL). More details are available at www.ring2.psu.edu.
In addition, two systems in PSU's network of well-calibrated CO2 mixing ratio measurements deployed at Ameriflux towers are within the midcontinental region: Ozark, MO (30 m AGL) and Mead, NE (3-6 m AGL) (www.amerifluxco2.psu.edu). The instruments chosen for the Ring 2 deployment are Picarro Inc., Cavity Ring-Down Spectroscopy (CRDS) instruments. One advantage of the CRDS instruments is the reduced need for calibration compared to the systems used in PSU's Ameriflux CO2 network which are calibrated every four hours using four calibration tanks. Although the long-term stability is not exactly known, tests have shown accuracy to within 0.2 ppm on a monthly time scale without additional calibration. Preliminary results show spatial differences in daytime CO2 across the ring that are as large as 40 ppm, and highly variable in time. We will present observations and preliminary interpretation of these data.
40 CFR 86.884-11 - Instrument checks.
Code of Federal Regulations, 2010 CFR
2010-07-01
... collection equipment response of zero; (3) Calibrated neutral density filters having approximately 10, 20, and 40 percent opacity shall be employed to check the linearity of the instrument. The filter(s) shall.... Filters with exposed filtering media should be checked for opacity every six months; all other filters...
40 CFR 86.884-11 - Instrument checks.
Code of Federal Regulations, 2012 CFR
2012-07-01
... collection equipment response of zero; (3) Calibrated neutral density filters having approximately 10, 20, and 40 percent opacity shall be employed to check the linearity of the instrument. The filter(s) shall.... Filters with exposed filtering media should be checked for opacity every six months; all other filters...
40 CFR 86.884-11 - Instrument checks.
Code of Federal Regulations, 2011 CFR
2011-07-01
... collection equipment response of zero; (3) Calibrated neutral density filters having approximately 10, 20, and 40 percent opacity shall be employed to check the linearity of the instrument. The filter(s) shall.... Filters with exposed filtering media should be checked for opacity every six months; all other filters...
40 CFR 86.884-11 - Instrument checks.
Code of Federal Regulations, 2013 CFR
2013-07-01
... collection equipment response of zero; (3) Calibrated neutral density filters having approximately 10, 20, and 40 percent opacity shall be employed to check the linearity of the instrument. The filter(s) shall.... Filters with exposed filtering media should be checked for opacity every six months; all other filters...
USDA-ARS?s Scientific Manuscript database
Directed soil sampling based on geospatial measurements of apparent soil electrical conductivity (ECa) is a potential means of characterizing the spatial variability of any soil property that influences ECa including soil salinity, water content, texture, bulk density, organic matter, and cation exc...
Natural geochemical analogues of the near field of high-level nuclear waste repositories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Apps, J.A.
1995-09-01
United States practice has been to design high-level nuclear waste (HLW) geological repositories with waste densities sufficiently high that repository temperatures surrounding the waste will exceed 100{degrees}C and could reach 250{degrees}C. Basalt and devitrified vitroclastic tuff are among the host rocks considered for waste emplacement. Near-field repository thermal behavior and chemical alteration in such rocks is expected to be similar to that observed in many geothermal systems. Therefore, the predictive modeling required for performance assessment studies of the near field could be validated and calibrated using geothermal systems as natural analogues. Examples are given which demonstrate the need for refinement of the thermodynamic databases used in geochemical modeling of near-field natural analogues and the extent to which present models can predict conditions in geothermal fields.
Melville, Sarah; Teskey, Robert; Philip, Shona; Simpson, Jeremy A; Lutchmedial, Sohrab
2018-01-01
Background Clinical guidelines recommend monitoring of blood pressure at home using an automatic blood pressure device for the management of hypertension. Devices are not often calibrated against direct blood pressure measures, leaving health care providers and patients with less reliable information than is possible with current technology. Rigorous assessments of medical devices are necessary for establishing clinical utility. Objective The purpose of our study was 2-fold: (1) to assess the validity and perform iterative calibration of indirect blood pressure measurements by a noninvasive wrist cuff blood pressure device in direct comparison with simultaneously recorded peripheral and central intra-arterial blood pressure measurements and (2) to assess the validity of the measurements thereafter of the noninvasive wrist cuff blood pressure device in comparison with measurements by a noninvasive upper arm blood pressure device to the Canadian hypertension guidelines. Methods The cloud-based blood pressure algorithms for an oscillometric wrist cuff device were iteratively calibrated to direct pressure measures in 20 consented patient participants. We then assessed measurement validity of the device, using Bland-Altman analysis during routine cardiovascular catheterization. Results The precalibrated absolute mean difference between direct intra-arterial to wrist cuff pressure measurements were 10.8 (SD 9.7) for systolic and 16.1 (SD 6.3) for diastolic. The postcalibrated absolute mean difference was 7.2 (SD 5.1) for systolic and 4.3 (SD 3.3) for diastolic pressures. This is an improvement in accuracy of 33% systolic and 73% diastolic with a 48% reduction in the variability for both measures. Furthermore, the wrist cuff device demonstrated similar sensitivity in measuring high blood pressure compared with the direct intra-arterial method. 
The device, when calibrated to direct aortic pressures, demonstrated the potential to reduce a treatment gap in high blood pressure measurements. Conclusions The systolic pressure measurements of the wrist cuff have been iteratively calibrated using gold standard central (ascending aortic) pressure. This improves the accuracy of the indirect measures and potentially reduces the treatment gap. Devices that undergo auscultatory (indirect) calibration for licensing can be greatly improved by additional iterative calibration via intra-arterial (direct) measures of blood pressure. Further clinical trials with repeated use of the device over time are needed to assess the reliability of the device in accordance with current and evolving guidelines for informed decision making in the management of hypertension. Trial Registration ClinicalTrials.gov NCT03015363; https://clinicaltrials.gov/ct2/show/NCT03015363 (Archived by WebCite at http://www.webcitation.org/6xPZgseYS) PMID:29695375
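The validity assessment described above rests on Bland-Altman analysis: compute the paired differences between the two measurement methods, then report the mean bias, its SD, and the 95% limits of agreement. A minimal sketch with hypothetical readings (not the study's data):

```python
import numpy as np

def bland_altman(a, b):
    """Mean bias, SD of differences, and 95% limits of agreement."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias, sd = float(d.mean()), float(d.std(ddof=1))
    return bias, sd, (bias - 1.96 * sd, bias + 1.96 * sd)

cuff = [128, 135, 142, 119, 150, 131]    # hypothetical systolic cuff readings
intra = [121, 130, 133, 115, 141, 126]   # simultaneous intra-arterial values
bias, sd, loa = bland_altman(cuff, intra)
```

The study's "absolute mean difference" figures (e.g. 10.8 mmHg systolic pre-calibration vs 7.2 post-calibration) are summaries of exactly this kind of paired comparison, and the reported 48% reduction in variability corresponds to a narrowing of the limits of agreement.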
NASA Astrophysics Data System (ADS)
Kroonblawd, Matthew; Goldman, Nir
2017-06-01
First principles molecular dynamics using highly accurate density functional theory (DFT) is a common tool for predicting chemistry, but the accessible time and space scales are often orders of magnitude beyond the resolution of experiments. Semi-empirical methods such as density functional tight binding (DFTB) offer up to a thousand-fold reduction in required CPU hours and can approach experimental scales. However, standard DFTB parameter sets lack good transferability and calibration for a particular system is usually necessary. Force matching the pairwise repulsive energy term in DFTB to short DFT trajectories can improve the former's accuracy for reactions that are fast relative to DFT simulation times (<10 ps), but the effects on slow reactions and the free energy surface are not well-known. We present a force matching approach to improve the chemical accuracy of DFTB. Accelerated sampling techniques are combined with path collective variables to generate the reference DFT data set and validate fitted DFTB potentials. Accuracy of force-matched DFTB free energy surfaces is assessed for slow peptide-forming reactions by direct comparison to DFT for particular paths. Extensions to model prebiotic chemistry under shock conditions are discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
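Because the DFTB repulsive energy is pairwise and its parameters enter linearly when expanded in a fixed basis, force matching it to DFT reference forces reduces to a linear least-squares problem. A toy sketch (the polynomial basis in (r_cut - r), the cutoff, and all coefficients are illustrative assumptions, not a published parameterization):

```python
import numpy as np

rng = np.random.default_rng(4)

# The repulsive force correction is modeled as a polynomial in
# (r_cut - r); the 'observed' residual plays the role of the DFT
# force minus the electronic-DFTB force at sampled pair distances
r_cut = 3.0
c_true = np.array([1.8, -0.6, 0.07])   # hypothetical coefficients

r = rng.uniform(1.0, r_cut, 200)       # sampled pair distances
basis = np.column_stack([(r_cut - r) ** k for k in (1, 2, 3)])
f_resid = basis @ c_true + rng.normal(0.0, 1e-3, r.size)

# Force matching reduces to linear least squares in the coefficients
c_fit, *_ = np.linalg.lstsq(basis, f_resid, rcond=None)
```

In practice the reference forces come from short DFT trajectories (and, per this work, from accelerated-sampling configurations along reaction paths), but the fitting step itself has this simple linear structure.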
NASA Astrophysics Data System (ADS)
Kroonblawd, Matthew; Goldman, Nir
First principles molecular dynamics using highly accurate density functional theory (DFT) is a common tool for predicting chemistry, but the accessible time and space scales are often orders of magnitude beyond the resolution of experiments. Semi-empirical methods such as density functional tight binding (DFTB) offer up to a thousand-fold reduction in required CPU hours and can approach experimental scales. However, standard DFTB parameter sets lack good transferability and calibration for a particular system is usually necessary. Force matching the pairwise repulsive energy term in DFTB to short DFT trajectories can improve the former's accuracy for chemistry that is fast relative to DFT simulation times (<10 ps), but the effects on slow chemistry and the free energy surface are not well-known. We present a force matching approach to increase the accuracy of DFTB predictions for free energy surfaces. Accelerated sampling techniques are combined with path collective variables to generate the reference DFT data set and validate fitted DFTB potentials without a priori knowledge of transition states. Accuracy of force-matched DFTB free energy surfaces is assessed for slow peptide-forming reactions by direct comparison to DFT results for particular paths. Extensions to model prebiotic chemistry under shock conditions are discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Biases in Multicenter Longitudinal PET Standardized Uptake Value Measurements1
Doot, Robert K; Pierce, Larry A; Byrd, Darrin; Elston, Brian; Allberg, Keith C; Kinahan, Paul E
2014-01-01
This study investigates measurement biases in longitudinal positron-emission tomography/computed tomography (PET/CT) studies that are due to instrumentation variability including human error. Improved estimation of variability between patient scans is of particular importance for assessing response to therapy and multicenter trials. We used National Institute of Standards and Technology-traceable calibration methodology for solid germanium-68/gallium-68 (68Ge/68Ga) sources used as surrogates for fluorine-18 (18F) in radionuclide activity calibrators. One cross-calibration kit was constructed for both dose calibrators and PET scanners using the same 9-month half-life batch of 68Ge/68Ga in epoxy. Repeat measurements occurred in a local network of PET imaging sites to assess standardized uptake value (SUV) errors over time for six dose calibrators from two major manufacturers and for six PET/CT scanners from three major manufacturers. Bias in activity measures by dose calibrators ranged from -50% to 9% and was relatively stable over time except at one site that modified settings between measurements. Bias in activity concentration measures by PET scanners ranged from -27% to 13% with a median of 174 days between the six repeat scans (range, 29 to 226 days). Corresponding errors in SUV measurements ranged from -20% to 47%. SUV biases were not stable over time with longitudinal differences for individual scanners ranging from -11% to 59%. Bias in SUV measurements varied over time and between scanner sites. These results suggest that attention should be paid to PET scanner calibration for longitudinal studies and use of dose calibrator and scanner cross-calibration kits could be helpful for quality assurance and control. PMID:24772207
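The way dose-calibrator and scanner biases combine into SUV error follows directly from the SUV definition (activity concentration divided by injected activity per body weight). A sketch with hypothetical numbers showing how a -10% scanner bias and a +5% dose-calibrator bias compound:

```python
def suv(activity_conc_bq_ml, injected_bq, weight_g):
    """Body-weight SUV, assuming ~1 g/mL tissue density."""
    return activity_conc_bq_ml / (injected_bq / weight_g)

def percent_bias(measured, truth):
    return 100.0 * (measured - truth) / truth

# Scanner reads concentration 10% low; dose calibrator reads dose 5% high
true_suv = suv(5000.0, 370e6, 70000.0)
meas_suv = suv(5000.0 * 0.90, 370e6 * 1.05, 70000.0)
suv_bias = percent_bias(meas_suv, true_suv)
```

The two instrument biases multiply (0.90/1.05), giving an SUV error larger than either alone, which is why the study advocates cross-calibration kits that tie the dose calibrator and the scanner to the same traceable source.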
Ishino, A; Takahashi, T; Suzuki, J; Nakazawa, Y; Iwabuchi, T; Tajima, M
2014-11-01
Androgenetic alopecia (AGA) is the most common type of baldness in men. The balding process is associated with the gradual miniaturization of hair follicles and successive hair loss. However, the relative contributions of hair density and diameter to AGA are still unclear. Hair density and hair diameter were investigated in Japanese men with or without AGA to elucidate the importance of these factors in the balding process. Male Japanese subjects with or without AGA (n = 369) were included in this study. Hair appearance at the vertex was evaluated by comparison with a series of standard photographs. Hair density was measured using a phototrichogram-based videomicroscopy technique, and hair diameter was assessed by comparison with a series of calibrated threads on the phototrichogram image. All subjects with AGA were ≥ 25 years of age. The mean percentage of thick hairs (> 80 μm) in all subjects with AGA was significantly lower than that in subjects without AGA aged ≥ 25 years (P < 0·01), but the mean percentage of vellus hairs (< 40 μm) in subjects with AGA was significantly higher (P < 0·001). By contrast, the mean density of the hair in all patients with AGA did not significantly differ from the density of those without AGA aged ≥ 25 years. However, the mean density of the hair in subjects without AGA aged < 25 years was significantly higher than that of both subjects without AGA aged ≥ 25 years (P < 0·001) and all subjects with AGA. Hair loss in men with AGA results mainly from the miniaturization of hair follicles rather than the loss of hair (shedding), at least for individuals who are ≥ 25 years of age and present with AGA. © 2014 British Association of Dermatologists.
Spectral characterization and calibration of AOTF spectrometers and hyper-spectral imaging system
NASA Astrophysics Data System (ADS)
Katrašnik, Jaka; Pernuš, Franjo; Likar, Boštjan
2010-02-01
The goal of this article is to present a novel method for spectral characterization and calibration of spectrometers and hyper-spectral imaging systems based on non-collinear acousto-optic tunable filters (AOTFs). The method characterizes the spectral tuning curve (frequency-wavelength characteristic) of the AOTF by matching the acquired and modeled spectra of an HgAr calibration lamp, which emits a line spectrum that can be modeled well via the AOTF transfer function. In this way, not only tuning curve characterization and corresponding spectral calibration but also spectral resolution assessment is performed. The obtained results indicate that the proposed method is efficient, accurate and feasible for routine calibration of AOTF spectrometers and hyper-spectral imaging systems, and is thereby a highly competitive alternative to existing calibration methods.
The development, distribution and density of the PMCA2 calcium pump in rat cochlear hair cells
Chen, Qingguo; Mahendrasingam, Shanthini; Tickle, Jacqueline A.; Hackney, Carole M.; Furness, David N.; Fettiplace, Robert
2012-01-01
Calcium is tightly regulated in cochlear outer hair cells (OHCs). It enters mainly via mechanotransducer (MT) channels and is extruded by the PMCA2 isoform of the plasma membrane calcium ATPase, mutations in which cause hearing loss. To assess how pump expression matches the demands of Ca2+ homeostasis, the distribution of PMCA2 at different cochlear locations during development was quantified using immunofluorescence and post-embedding immunogold labeling. The PMCA2 isoform was confined to stereociliary bundles, first appearing at the base of the cochlea around post-natal day 0 (P0) followed by the middle and then the apex by P3, and was unchanged after P8. The developmental appearance matches maturation of the MT channels in rat OHCs. High-resolution immunogold labeling in adult rats showed PMCA2 was distributed along the membranes of all three rows of OHC stereocilia at similar densities and at about a quarter the density in IHC stereocilia. The difference between OHCs and inner hair cells (IHCs) is similar to the ratio of their MT channel resting open probabilities. Gold particle counts revealed no difference in PMCA2 density between low- and high-frequency OHC bundles despite larger MT currents in high-frequency OHCs. The PMCA2 density in OHC stereocilia was determined in low- and high-frequency regions from calibration of immunogold particle counts as 2200/μm2 from which an extrusion rate of ~200 ions·s−1 per pump was inferred. The limited ability of PMCA2 to extrude the Ca2+ load through MT channels may constitute a major cause of OHC vulnerability and high-frequency hearing loss. PMID:22672315
Freedman, Laurence S; Commins, John M; Moler, James E; Willett, Walter; Tinker, Lesley F; Subar, Amy F; Spiegelman, Donna; Rhodes, Donna; Potischman, Nancy; Neuhouser, Marian L; Moshfegh, Alanna J; Kipnis, Victor; Arab, Lenore; Prentice, Ross L
2015-04-01
We pooled data from 5 large validation studies (1999-2009) of dietary self-report instruments that used recovery biomarkers as referents, to assess food frequency questionnaires (FFQs) and 24-hour recalls (24HRs). Here we report on total potassium and sodium intakes, their densities, and their ratio. Results were similar by sex but were heterogeneous across studies. For potassium, potassium density, sodium, sodium density, and sodium:potassium ratio, average correlation coefficients for the correlation of reported intake with true intake on the FFQs were 0.37, 0.47, 0.16, 0.32, and 0.49, respectively. For the same nutrients measured with a single 24HR, they were 0.47, 0.46, 0.32, 0.31, and 0.46, respectively, rising to 0.56, 0.53, 0.41, 0.38, and 0.60 for the average of three 24HRs. Average underreporting was 5%-6% with an FFQ and 0%-4% with a single 24HR for potassium but was 28%-39% and 4%-13%, respectively, for sodium. Higher body mass index was related to underreporting of sodium. Calibration equations for true intake that included personal characteristics provided improved prediction, except for sodium density. In summary, self-reports capture potassium intake quite well but sodium intake less well. Using densities improves the measurement of potassium and sodium on an FFQ. Sodium:potassium ratio is measured much better than sodium itself on both FFQs and 24HRs. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.
NASA Technical Reports Server (NTRS)
Murphy, J.; Butlin, T.; Duff, P.; Fitzgerald, A.
1984-01-01
A technique for the radiometric correction of LANDSAT-4 Thematic Mapper data was proposed by the Canada Center for Remote Sensing. Subsequent detailed observations of raw image data, raw radiometric calibration data and background measurements extracted from the raw data stream on High Density Tape highlighted major shortcomings in the proposed method which, if left uncorrected, can cause severe radiometric striping in the output product. Results are presented which correlate measurements of the DC background with variations in both image data background and calibration samples. The effect on both raw data and on data corrected using the earlier proposed technique is explained, and the correction required for these factors as a function of individual scan line number for each detector is described. It is shown how the revised technique can be incorporated into an operational environment.
Ozone Correction for AM0 Calibrated Solar Cells for the Aircraft Method
NASA Technical Reports Server (NTRS)
Snyder, David B.; Scheiman, David A.; Jenkins, Phillip P.; Lyons, Valerie J. (Technical Monitor)
2002-01-01
The aircraft solar cell calibration method has provided cells calibrated to space conditions for 37 years. However, it is susceptible to systematic errors due to ozone concentration in the stratosphere. The present correction procedure applies a 1% increase to the measured Isc values. High band-gap cells are more sensitive to ozone-absorbed wavelengths, so it has become important to reassess the correction technique. This paper evaluates the ozone correction to be 1 + {O3}·Fo, where {O3} is the ozone column density in Dobson units (d.u.) and Fo is 29.5x10(exp -6)/d.u. for a silicon solar cell and 42.2x10(exp -6)/d.u. for a GaAs cell. Results will be presented for high band-gap cells. A comparison with flight data indicates that this method of correcting for the ozone density improves the uncertainty of AM0 Isc to 0.5%.
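Taken at face value, the abstract's correction is multiplicative in the ozone column. A sketch in Python; the ~300 d.u. column used in the comment is a typical mid-latitude value, not a figure from the paper:

```python
# Per-Dobson-unit coefficients quoted in the abstract; the multiplicative
# form 1 + [O3]*F_o is as read from the abstract.
F_O = {"Si": 29.5e-6, "GaAs": 42.2e-6}  # per Dobson unit (d.u.)

def ozone_corrected_isc(isc_measured, ozone_du, cell="Si"):
    """Scale the measured short-circuit current up to AM0 conditions."""
    return isc_measured * (1 + ozone_du * F_O[cell])

# A typical mid-latitude ozone column of ~300 d.u. (illustrative) gives a
# ~0.9% correction for Si, close to the legacy flat 1% increase.
```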
Dudev, Todor; Devereux, Mike; Meuwly, Markus; Lim, Carmay; Piquemal, Jean-Philip; Gresh, Nohad
2015-02-15
The alkali metal cations in the series Li(+)-Cs(+) act as major partners in a diversity of biological processes and in bioinorganic chemistry. In this article, we present the results of their calibration in the context of the SIBFA polarizable molecular mechanics/dynamics procedure. It relies on quantum-chemistry (QC) energy-decomposition analyses of their monoligated complexes with representative O-, N-, S-, and Se- ligands, performed with the aug-cc-pVTZ(-f) basis set at the Hartree-Fock level. Close agreement with QC is obtained for each individual contribution, even though the calibration involves only a limited set of cation-specific parameters. This agreement is preserved in tests on polyligated complexes with four and six O- ligands, water and formamide, indicating the transferability of the procedure. Preliminary extensions to density functional theory calculations are reported. © 2014 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Tamaru, S.; Kubota, H.; Yakushiji, K.; Fukushima, A.; Yuasa, S.
2017-11-01
This work presents a technique to calibrate the spin torque oscillator (STO) measurement system by utilizing the whiteness of shot noise. The raw shot noise spectrum in a magnetic tunnel junction based STO in the microwave frequency range is obtained by first subtracting the baseline noise, and then excluding the field dependent mag-noise components reflecting the thermally excited spin wave resonances. As the shot noise is guaranteed to be completely white, the total gain of the signal path should be proportional to the shot noise spectrum obtained by the above procedure, which allows for an accurate gain calibration of the system and a quantitative determination of each noise power. The power spectral density of the shot noise as a function of bias voltage obtained by this technique was compared with a theoretical calculation, which showed excellent agreement when the Fano factor was assumed to be 0.99.
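A minimal sketch of the idea, assuming the Poissonian form S_I = 2eI·F for the shot-noise power spectral density (the paper's bias-dependent theory for a magnetic tunnel junction is more detailed): because shot noise is white, any frequency structure left in the cleaned shot-noise spectrum is attributed to the signal-path gain.

```python
import numpy as np

def shot_noise_psd(current_a, fano=0.99):
    """One-sided current shot-noise PSD, S_I = 2*e*I*F (A^2/Hz)."""
    e = 1.602176634e-19  # elementary charge, C
    return 2.0 * e * current_a * fano

def relative_gain(cleaned_shot_spectrum):
    """Shot noise is white, so any frequency structure remaining in the
    measured shot-noise spectrum (baseline subtracted, mag-noise excluded)
    is attributed to the signal-path gain; normalize to the mean level."""
    s = np.asarray(cleaned_shot_spectrum, dtype=float)
    return s / s.mean()
```

Dividing a measured spectrum by this relative gain then yields calibrated noise powers, up to one overall scale factor fixed by the known shot-noise level.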
Fu, Hongbo; Wang, Huadong; Jia, Junwei; Ni, Zhibo; Dong, Fengzhong
2018-01-01
Accurate quantitative analysis with calibration-free laser-induced breakdown spectroscopy (CF-LIBS) is difficult in practice due to the self-absorption of major elements' lines, the scarcity of observable spectral lines of trace elements, and the need for relative efficiency correction of the experimental system. To overcome these difficulties, the standard reference line (SRL) method combined with one-point calibration (OPC) is used to analyze six elements in three stainless-steel and five heat-resistant steel samples. The Stark broadening and the Saha-Boltzmann plot of Fe lines are used to calculate the electron density and the plasma temperature, respectively. In the present work, we tested the original SRL method, the SRL with the OPC method, and intercept with the OPC method. The final calculation results show that the latter two methods can effectively improve the overall accuracy of quantitative analysis and the detection limits of trace elements.
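The plasma-temperature step can be sketched as a standard Boltzmann-plot linear fit (the paper uses Fe lines and a Saha-Boltzmann plot, which additionally couples ionization stages; the inputs below are synthetic):

```python
import numpy as np

K_B_EV = 8.617333262e-5  # Boltzmann constant, eV/K

def boltzmann_plot_temperature(e_upper_ev, intensities, wavelengths, g_times_a):
    """Classic Boltzmann plot: ln(I*lambda/(g*A)) is linear in the upper-level
    energy with slope -1/(k_B*T); fit a line and invert the slope."""
    y = np.log(np.asarray(intensities) * np.asarray(wavelengths)
               / np.asarray(g_times_a))
    slope, _intercept = np.polyfit(e_upper_ev, y, 1)
    return -1.0 / (K_B_EV * slope)
```

On synthetic line intensities generated for a 10,000 K plasma, the fit recovers the input temperature, which is the usual self-consistency check before applying the method to measured spectra.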
High-throughput accurate-wavelength lens-based visible spectrometer.
Bell, Ronald E; Scotti, Filippo
2010-10-01
A scanning visible spectrometer has been prototyped to complement fixed-wavelength transmission grating spectrometers for charge exchange recombination spectroscopy. Fast f/1.8 200 mm commercial lenses are used with a large 2160 mm(-1) grating for high throughput. A stepping-motor controlled sine drive positions the grating, which is mounted on a precision rotary table. A high-resolution optical encoder on the grating stage allows the grating angle to be measured with an absolute accuracy of 0.075 arc sec, corresponding to a wavelength error ≤0.005 Å. At this precision, changes in grating groove density due to thermal expansion and variations in the refractive index of air are important. An automated calibration procedure determines all the relevant spectrometer parameters to high accuracy. Changes in bulk grating temperature, atmospheric temperature, and pressure are monitored between the time of calibration and the time of measurement to ensure a persistent wavelength calibration.
High accuracy wavelength calibration for a scanning visible spectrometer.
Scotti, Filippo; Bell, Ronald E
2010-10-01
Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤0.2 Å. An automated calibration, which is stable over time and environmental conditions without the need to recalibrate after each grating movement, was developed for a scanning spectrometer to achieve high wavelength accuracy over the visible spectrum. This method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, an accuracy of ∼0.25 Å has been demonstrated. With the addition of a high resolution (0.075 arc sec) optical encoder on the grating stage, greater precision (∼0.005 Å) is possible, allowing absolute velocity measurements within ∼0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.
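The quoted wavelength precisions follow from propagating the encoder's angular error through the sine-drive grating equation. A sketch under assumed geometry; the order m = 1, half included angle K = 15°, and grating angle θ = 45° are illustrative values, not taken from the paper:

```python
import math

ARCSEC = math.pi / (180.0 * 3600.0)  # radians per arcsecond

def wavelength_error_angstrom(grooves_per_mm, order, half_included_angle_deg,
                              grating_angle_deg, angle_error_arcsec):
    """For the constant-deviation (sine-drive) monochromator equation
    lambda = (2/(m*G)) * cos(K) * sin(theta), the angular sensitivity is
    d(lambda)/d(theta) = (2/(m*G)) * cos(K) * cos(theta)."""
    d_mm = 1.0 / grooves_per_mm              # groove spacing, mm
    k = math.radians(half_included_angle_deg)
    theta = math.radians(grating_angle_deg)
    dlam_dtheta_mm = (2.0 * d_mm / order) * math.cos(k) * math.cos(theta)
    return dlam_dtheta_mm * angle_error_arcsec * ARCSEC * 1.0e7  # mm -> A
```

With the 2160 mm(-1) grating and the 0.075 arc sec encoder accuracy, this geometry yields a wavelength error of a few thousandths of an angstrom, consistent with the ∼0.005 Å figure quoted above.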
Gearba, Raluca I.; Mueller, Kory M.; Veneman, Peter A.; ...
2015-05-09
Owing to its high conductivity, graphene holds promise as an electrode for energy devices such as batteries and photovoltaics. However, to this end, the work function and doping levels in graphene need to be precisely tuned. One promising route for modifying graphene’s electronic properties is via controlled covalent electrochemical grafting of molecules. We show that by employing diaryliodonium salts instead of the commonly used diazonium salts, spontaneous functionalization is avoided. This then allows for precise tuning of the grafting density. Moreover, by employing bis(4-nitrophenyl)iodonium(III) tetrafluoroborate (DNP) salt calibration curves, the surface functionalization density (coverage) of glassy carbon was controlled using cyclic voltammetry in varying salt concentrations. These electro-grafting conditions and calibration curves translated directly over to modifying single layer epitaxial graphene substrates (grown on insulating 6H-SiC (0 0 0 1)). In addition to quantifying the functionalization densities using electrochemical methods, samples with low grafting densities were characterized by low-temperature scanning tunneling microscopy (LT-STM). We show that the use of buffer-layer free graphene substrates is required for clear observation of the nitrophenyl modifications. Furthermore, atomically-resolved STM images of single site modifications were obtained, showing no preferential grafting at defect sites or SiC step edges as supposed previously in the literature. Most of the grafts exhibit threefold symmetry, but occasional extended modifications (larger than 4 nm) were observed as well.
Desroches, Joannie; Bouchard, Hugo; Lacroix, Frédéric
2010-04-01
The purpose of this study is to determine the effect on the measured optical density of scanning on either side of Gafchromic EBT and EBT2 film using an Epson (Epson Canada Ltd., Toronto, Ontario) 10000XL flat-bed scanner. Calibration curves were constructed using EBT2 film scanned in landscape orientation in both reflection and transmission mode on an Epson 10000XL scanner. Calibration curves were also constructed using EBT film. Potential errors due to an optical density difference from scanning the film on either side ("face up" or "face down") were simulated. Scanning the film face up or face down on the scanner bed while keeping the film angular orientation constant affects the measured optical density when scanning in reflection mode. In contrast, no statistically significant effect was seen when scanning in transmission mode. This effect can significantly affect relative and absolute dose measurements. As an application example, the authors demonstrate that inverting the film scanning side produces potential errors of 17.8% in the gamma index for 3%-3 mm criteria on a head and neck intensity modulated radiotherapy plan, and errors in absolute dose measurements ranging from 10% to 35% between 2 and 5 Gy. Process consistency is the key to obtaining accurate and precise results in Gafchromic film dosimetry. When scanning in reflection mode, care must be taken to place the film consistently on the same side on the scanner bed.
Courtès, Franck; Ebel, Bruno; Guédon, Emmanuel; Marc, Annie
2016-05-01
The aim of this work was to develop a new strategy combining near-infrared (NIR) and dielectric spectroscopies for real-time monitoring and in-depth characterization of populations of Chinese hamster ovary cells throughout cultures performed in bioreactors. Spectral data processing was based on off-line analyses of the cells, including the trypan blue exclusion method and lactate dehydrogenase (LDH) activity measurements. Viable cell density showed a linear correlation with permittivity up to 6 × 10(6) cells ml(-1), while a logarithmic correlation was found between non-lysed dead cell density and conductivity up to 10(7) cells ml(-1). Additionally, the partial least squares technique was used to develop a calibration model of the supernatant LDH activity based on online NIR spectra, with an RMSEC of 55 U l(-1). Considering the LDH content of viable cells, measured to be 110 U per 10(9) cells, the lysed dead cell density could then be estimated. These calibration models provided real-time prediction accuracy (R(2) ≥ 0.95) for the three types of cell populations. The high potential of a dual spectroscopy strategy to enhance online bioprocess characterization is demonstrated, since it allows the simultaneous determination of viable, dead and lysed cell populations in real time.
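The lysed-cell estimate reduces to dividing the NIR-predicted supernatant LDH activity by the measured per-cell LDH content (110 U per 10(9) viable cells). A sketch, with units assumed to be U/l in and cells/ml out:

```python
LDH_PER_CELL_U = 110.0 / 1e9  # U of LDH per viable cell, from the study

def lysed_cell_density_per_ml(supernatant_ldh_u_per_l):
    """Supernatant LDH activity (U/l) divided by the per-cell LDH content
    gives lysed cells per litre; divide by 1000 for cells/ml."""
    return supernatant_ldh_u_per_l / LDH_PER_CELL_U / 1000.0

# e.g. 110 U/l of supernatant LDH corresponds to 1e6 lysed cells/ml.
```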
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boersma, C.; Bregman, J.; Allamandola, L. J., E-mail: Christiaan.Boersma@nasa.gov
2015-06-10
Polycyclic aromatic hydrocarbon (PAH) emission in the Spitzer/IRS spectral map of the northwest photon dominated region (PDR) in NGC 7023 is analyzed. Here, results from fitting the 5.2–14.5 μm spectrum at each pixel using exclusively PAH spectra from the NASA Ames PAH IR Spectroscopic Database (www.astrochem.org/pahdb/) and observed PAH band strength ratios, determined after isolating the PAH bands, are combined. This enables the first quantitative and spectrally consistent calibration of PAH charge proxies. Calibration is straightforward because the 6.2/11.2 μm PAH band strength ratio varies linearly with the ionized fraction (PAH ionization parameter) as determined from the intrinsic properties of the individual PAHs comprising the database. This, in turn, can be related to the local radiation field, electron density, and temperature. From these relations diagnostic templates are developed to deduce the PAH ionization fraction and astronomical environment in other objects. The commonly used 7.7/11.2 μm PAH band strength ratio fails as a charge proxy over a significant fraction of the nebula. The 11.2/12.7 μm PAH band strength ratio, commonly used as a PAH erosion indicator, is revealed to be a better tracer for PAH charge across NGC 7023. Attempting to calibrate the 12.7/11.2 μm PAH band strength ratio against the PAH hydrogen adjacency ratio (duo+trio)/solo is, unexpectedly, anti-correlated. This work both validates and extends the results from Paper I and Paper II.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan
2016-07-04
The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically-average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. Analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.
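The tabulated summaries (MAP mode and 95% credibility interval) can be computed directly from posterior samples. A minimal sketch using a hypothetical 1-D chain; a histogram peak stands in for the MAP estimate:

```python
import numpy as np

def posterior_summary(samples, bins=50):
    """Histogram-peak mode (a simple MAP estimate) and the central 95%
    credibility interval of a 1-D chain of posterior samples."""
    counts, edges = np.histogram(samples, bins=bins)
    k = int(np.argmax(counts))
    mode = 0.5 * (edges[k] + edges[k + 1])     # midpoint of the peak bin
    lo, hi = np.percentile(samples, [2.5, 97.5])
    return mode, (lo, hi)

# Hypothetical chain: for a Gaussian posterior the mode sits at the mean
# and the interval spans roughly +/- 1.96 standard deviations.
rng = np.random.default_rng(0)
chain = rng.normal(5.0, 1.0, 100000)
mode, (lo, hi) = posterior_summary(chain)
```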
Assessment of Radiometer Calibration with GPS Radio Occultation for the MiRaTA CubeSat Mission.
Marinan, Anne D; Cahoy, Kerri L; Bishop, Rebecca L; Lui, Susan S; Bardeen, James R; Mulligan, Tamitha; Blackwell, William J; Leslie, R Vincent; Osaretin, Idahosa; Shields, Michael
2016-12-01
The Microwave Radiometer Technology Acceleration (MiRaTA) is a 3U CubeSat mission sponsored by the NASA Earth Science Technology Office (ESTO). The science payload on MiRaTA consists of a tri-band microwave radiometer and Global Positioning System (GPS) radio occultation (GPSRO) sensor. The microwave radiometer takes measurements of all-weather temperature (V-band, 50-57 GHz), water vapor (G-band, 175-191 GHz), and cloud ice (G-band, 205 GHz) to provide observations used to improve weather forecasting. The Aerospace Corporation's GPSRO experiment, called the Compact TEC (Total Electron Content) and Atmospheric GPS Sensor (CTAGS), measures profiles of temperature and pressure in the upper troposphere/lower stratosphere (∼20 km) and electron density in the ionosphere (over 100 km). The MiRaTA mission will validate new technologies in both passive microwave radiometry and GPS radio occultation: (1) new ultra-compact and low-power technology for multi-channel and multi-band passive microwave radiometers, (2) the application of a commercial off the shelf (COTS) GPS receiver and custom patch antenna array technology to obtain neutral atmospheric GPSRO retrieval from a nanosatellite, and (3) a new approach to spaceborne microwave radiometer calibration using adjacent GPSRO measurements. In this paper, we focus on objective (3), developing operational models to meet a mission goal of 100 concurrent radiometer and GPSRO measurements, and estimating the temperature measurement precision for the CTAGS instrument based on thermal noise. Based on an analysis of thermal noise of the CTAGS instrument, the expected temperature retrieval precision is between 0.17 K and 1.4 K, which supports the improvement of radiometric calibration to 0.25 K.
Harris, Liam W.; Davies, T. Jonathan
2016-01-01
Explaining the uneven distribution of species richness across the branches of the tree of life has been a major challenge for evolutionary biologists. Advances in phylogenetic reconstruction, allowing the generation of large, well-sampled, phylogenetic trees have provided an opportunity to contrast competing hypotheses. Here, we present a new time-calibrated phylogeny of seed plant families using Bayesian methods and 26 fossil calibrations. While there are various published phylogenetic trees for plants which have a greater density of species sampling, we are still a long way from generating a complete phylogeny for all ~300,000+ plants. Our phylogeny samples all seed plant families and is a useful tool for comparative analyses. We use this new phylogenetic hypothesis to contrast two alternative explanations for differences in species richness among higher taxa: time for speciation versus ecological limits. We calculated net diversification rate for each clade in the phylogeny and assessed the relationship between clade age and species richness. We then fit models of speciation and extinction to individual branches in the tree to identify major rate-shifts. Our data suggest that the majority of lineages are diversifying very slowly while a few lineages, distributed throughout the tree, are diversifying rapidly. Diversification is unrelated to clade age, no matter the age range of the clades being examined, contrary to both the assumption of an unbounded lineage increase through time, and the paradigm of fixed ecological limits. These findings are consistent with the idea that ecology plays a role in diversification, but rather than imposing a fixed limit, it may have variable effects on per lineage diversification rates through time. PMID:27706173
Assessment of Radiometer Calibration with GPS Radio Occultation for the MiRaTA CubeSat Mission
Marinan, Anne D.; Cahoy, Kerri L.; Bishop, Rebecca L.; Lui, Susan S.; Bardeen, James R.; Mulligan, Tamitha; Blackwell, William J.; Leslie, R. Vincent; Osaretin, Idahosa; Shields, Michael
2017-01-01
The Microwave Radiometer Technology Acceleration (MiRaTA) is a 3U CubeSat mission sponsored by the NASA Earth Science Technology Office (ESTO). The science payload on MiRaTA consists of a tri-band microwave radiometer and Global Positioning System (GPS) radio occultation (GPSRO) sensor. The microwave radiometer takes measurements of all-weather temperature (V-band, 50-57 GHz), water vapor (G-band, 175-191 GHz), and cloud ice (G-band, 205 GHz) to provide observations used to improve weather forecasting. The Aerospace Corporation's GPSRO experiment, called the Compact TEC (Total Electron Content) and Atmospheric GPS Sensor (CTAGS), measures profiles of temperature and pressure in the upper troposphere/lower stratosphere (∼20 km) and electron density in the ionosphere (over 100 km). The MiRaTA mission will validate new technologies in both passive microwave radiometry and GPS radio occultation: (1) new ultra-compact and low-power technology for multi-channel and multi-band passive microwave radiometers, (2) the application of a commercial off the shelf (COTS) GPS receiver and custom patch antenna array technology to obtain neutral atmospheric GPSRO retrieval from a nanosatellite, and (3) a new approach to spaceborne microwave radiometer calibration using adjacent GPSRO measurements. In this paper, we focus on objective (3), developing operational models to meet a mission goal of 100 concurrent radiometer and GPSRO measurements, and estimating the temperature measurement precision for the CTAGS instrument based on thermal noise. Based on an analysis of thermal noise of the CTAGS instrument, the expected temperature retrieval precision is between 0.17 K and 1.4 K, which supports the improvement of radiometric calibration to 0.25 K. PMID:28828144
Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan; ...
2016-06-01
The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. As a result, analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.
Novel Payload Architectures for LISA
NASA Astrophysics Data System (ADS)
Johann, Ulrich A.; Gath, Peter F.; Holota, Wolfgang; Schulte, Hans Reiner; Weise, Dennis
2006-11-01
As part of the current LISA Mission Formulation Study, and based on prior internal investigations, Astrium Germany has defined and preliminarily assessed novel payload architectures, potentially reducing overall complexity and improving budgets and costs. A promising concept is characterized by a single active inertial sensor attached to a single optical bench and serving both adjacent interferometer arms via two rigidly connected off-axis telescopes. The in-plane triangular constellation "breathing angle" compensation is accomplished by common telescope in-field-of-view pointing actuation of the transmitted/received beams' line of sight. A dedicated actuation mechanism located on the optical bench is required, in addition to the on-bench actuators, for differential pointing of the transmit and receive directions perpendicular to the constellation plane. Both actuators operate with a sinusoidal yearly period. A technical challenge is the actuation mechanism's pointing jitter and the monitoring and calibration of the laser phase walk that occurs as the optical path inside the optical assembly changes during re-pointing. Calibration or monitoring of instrument-internal phase effects, e.g., by a laser metrology truss derived from the existing interferometry, is required. The architecture fully exploits the two-step interferometry (strap-down) concept, functionally separating inter-spacecraft and intra-spacecraft interferometry (reference mass laser metrology degrees-of-freedom sensing). The single test mass remains cubic, but is in free fall in the lateral degrees of freedom within the constellation plane. The option of a completely free spherical test mass with full laser interferometer readout has also been conceptually investigated. The spherical test mass would rotate slowly and would be allowed to tumble.
Imperfections in roundness and density would be calibrated from differential wave front sensing in a tetrahedral arrangement, supported by added attitude information via a grid of tick marks etched onto the surface and monitored by the laser readout.
NASA Astrophysics Data System (ADS)
Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan; Ren, Huiying; Liu, Ying; Swiler, Laura
2016-07-01
The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. Analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.
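The Bayesian calibration workflow summarized above (posterior PDF, MAP value, 95% credibility interval conditional on observed fluxes) can be sketched with a toy Metropolis sampler. The forward model, observation value, noise level, and prior range below are illustrative stand-ins, not CLM quantities:

```python
import math
import random

random.seed(0)

# Toy forward model standing in for CLM: predicted mean latent heat flux
# (W/m^2) as a function of a single hydrological parameter `theta`.
# The quadratic form and all constants are invented for illustration.
def forward(theta):
    return 60.0 + 40.0 * theta - 10.0 * theta ** 2

OBS = 85.0      # climatological mean latent heat flux "observation"
SIGMA = 5.0     # assumed observational uncertainty (W/m^2)

def log_posterior(theta):
    if not (0.0 <= theta <= 2.0):        # uniform prior on [0, 2]
        return -math.inf
    r = (forward(theta) - OBS) / SIGMA   # Gaussian likelihood
    return -0.5 * r * r

def metropolis(n_steps=20000, step=0.1):
    theta, lp = 1.0, log_posterior(1.0)
    samples = []
    for _ in range(n_steps):
        prop = theta + random.gauss(0.0, step)
        lp_prop = log_posterior(prop)
        if math.log(random.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop     # accept the proposal
        samples.append(theta)
    return samples[n_steps // 2:]         # discard burn-in

samples = sorted(metropolis())
lo = samples[int(0.025 * len(samples))]   # 95% credibility bounds
hi = samples[int(0.975 * len(samples))]
map_est = max(samples, key=log_posterior) # maximum a posteriori sample
print(f"MAP ~ {map_est:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The same recipe scales to the joint PDF over several parameters; the paper's actual sampler and CLM forward runs are, of course, far more expensive than this scalar toy.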
Performance assessment of FY-3C/MERSI on early orbit
NASA Astrophysics Data System (ADS)
Hu, Xiuqing; Xu, Na; Wu, Ronghua; Chen, Lin; Min, Min; Wang, Ling; Xu, Hanlie; Sun, Ling; Yang, Zhongdong; Zhang, Peng
2014-11-01
FY-3C/MERSI incorporates several remarkable improvements over the previous MERSI instruments, including better spectral response function (SRF) consistency among the detectors within each band, an added capability for lunar observation through the space view (SV), and improved radiometric response stability of the solar bands. During the in-orbit verification (IOV) commissioning phase, early results indicating representative MERSI performance were derived, including the signal-to-noise ratio (SNR), dynamic range, MTF, band-to-band (B2B) registration, calibration bias, and instrument stability. The SNRs of the solar bands (Bands 1-4 and 6-20) were largely beyond the specifications, except for two NIR bands. In-flight calibration and verification for these bands also rely heavily on vicarious techniques such as the China Radiometric Calibration Sites (CRCS), cross-calibration, lunar calibration, DCC calibration, stability monitoring using Pseudo-Invariant Calibration Sites (PICS), and multi-site radiance simulation. This paper gives the results of these calibration methods and of monitoring the instrument degradation during the early on-orbit period.
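For readers unfamiliar with the SNR figure of merit quoted above: it is the mean signal divided by its standard deviation over repeated views of a uniform scene. A minimal sketch with made-up detector readings (not MERSI data):

```python
import math
import random

random.seed(1)

def snr(samples):
    """Signal-to-noise ratio: mean over sample standard deviation."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean / math.sqrt(var)

# Hypothetical repeated readings over a uniform calibration scene:
# a true mean of 500 counts with 2-count noise gives an SNR near 250.
readings = [random.gauss(500.0, 2.0) for _ in range(2000)]
print(f"SNR ~ {snr(readings):.0f}")
```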
The Landsat Data Continuity Mission Operational Land Imager (OLI) Radiometric Calibration
NASA Technical Reports Server (NTRS)
Markham, Brian L.; Dabney, Philip W.; Murphy-Morris, Jeanine E.; Knight, Edward J.; Kvaran, Geir; Barsi, Julia A.
2010-01-01
The Operational Land Imager (OLI) on the Landsat Data Continuity Mission (LDCM) has a comprehensive radiometric characterization and calibration program beginning with the instrument design and extending through integration and test, on-orbit operations, and science data processing. Key instrument design features for radiometric calibration include dual solar diffusers and multi-lamped on-board calibrators. The radiometric calibration transfer procedure from NIST standards includes multiple checks on the radiometric scale throughout the process and uses a heliostat as part of the transfer to orbit of the radiometric calibration. On-orbit lunar imaging will be used to track the instrument's stability, and side-slither maneuvers will be used in addition to the solar diffuser to flat-field across the thousands of detectors per band. A Calibration Validation Team is continuously involved in the process from design to operations. This team uses the Image Assessment System (IAS), part of the ground system, to characterize and calibrate the on-orbit data.
Four years of Landsat-7 on-orbit geometric calibration and performance
Lee, D.S.; Storey, James C.; Choate, M.J.; Hayes, R.W.
2004-01-01
Unlike its predecessors, Landsat-7 has undergone regular geometric and radiometric performance monitoring and calibration since its launch in April 1999. This ongoing activity, which includes issuing quarterly updates to calibration parameters, has generated a wealth of geometric performance data over the four-year on-orbit period of operations. A suite of geometric characterization (measurement and evaluation) procedures and calibration (deriving improved estimates of instrument parameters) methods is employed by the Landsat-7 Image Assessment System to maintain the geometric calibration and to track specific aspects of geometric performance, including geodetic accuracy, band-to-band registration accuracy, and image-to-image registration accuracy. These characterization and calibration activities maintain image product geometric accuracy at a high level: by monitoring performance to determine when calibration is necessary, by generating new calibration parameters, and by verifying that the new parameters achieve the desired improvements in accuracy. Landsat-7 continues to meet and exceed all geometric accuracy requirements, although aging components have begun to affect performance.
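Band-to-band registration of the kind tracked here reduces to estimating the spatial offset between two bands' images of the same scene. A schematic integer-pixel version using normalized cross-correlation on toy 1-D scan lines (all data invented; operational systems work in 2-D at sub-pixel resolution):

```python
def best_shift(ref, tgt, max_shift=5):
    """Return the integer shift of `tgt` relative to `ref` that maximizes
    the normalized cross-correlation over the overlapping window."""
    def ncc(a, b):
        ma = sum(a) / len(a)
        mb = sum(b) / len(b)
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        da = sum((x - ma) ** 2 for x in a) ** 0.5
        db = sum((y - mb) ** 2 for y in b) ** 0.5
        return num / (da * db)

    scores = {}
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a, b = ref[s:], tgt[:len(tgt) - s]
        else:
            a, b = ref[:s], tgt[-s:]
        scores[s] = ncc(a, b)
    return max(scores, key=scores.get)

ref = [0, 1, 3, 7, 9, 7, 3, 1, 0, 0, 2, 5, 2, 0]   # "reference band" scan line
tgt = ref[2:] + [0, 0]                              # "target band", shifted by 2 px
print(best_shift(ref, tgt))                         # prints 2
```

The estimated offsets, accumulated over many scenes, are what feed the calibration-parameter updates described in the abstract.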
NASA Astrophysics Data System (ADS)
Davis, C.; Rozo, E.; Roodman, A.; Alarcon, A.; Cawthon, R.; Gatti, M.; Lin, H.; Miquel, R.; Rykoff, E. S.; Troxel, M. A.; Vielzeuf, P.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Annis, J.; Bechtol, K.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Buckley-Geer, E.; Burke, D. L.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Castander, F. J.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Doel, P.; Drlica-Wagner, A.; Fausti Neto, A.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gaztanaga, E.; Gerdes, D. W.; Giannantonio, T.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; Jain, B.; James, D. J.; Jeltema, T.; Krause, E.; Kuehn, K.; Kuhlmann, S.; Kuropatkin, N.; Lahav, O.; Li, T. S.; Lima, M.; March, M.; Marshall, J. L.; Martini, P.; Melchior, P.; Ogando, R. L. C.; Plazas, A. A.; Romer, A. K.; Sanchez, E.; Scarpine, V.; Schindler, R.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, M.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Vikram, V.; Walker, A. R.; Wechsler, R. H.
2018-06-01
Galaxy cross-correlations with high-fidelity redshift samples hold the potential to precisely calibrate systematic photometric redshift uncertainties arising from the unavailability of complete and representative training and validation samples of galaxies. However, application of this technique in the Dark Energy Survey (DES) is hampered by the relatively low number density, small area, and modest redshift overlap between photometric and spectroscopic samples. We propose instead using photometric catalogues with reliable photometric redshifts for photo-z calibration via cross-correlations. We verify the viability of our proposal using redMaPPer clusters from the Sloan Digital Sky Survey (SDSS) to successfully recover the redshift distribution of SDSS spectroscopic galaxies. We demonstrate how to combine photo-z with cross-correlation data to calibrate photometric redshift biases while marginalizing over possible clustering bias evolution in either the calibration or unknown photometric samples. We apply our method to DES Science Verification (DES SV) data in order to constrain the photometric redshift distribution of a galaxy sample selected for weak lensing studies, constraining the mean of the tomographic redshift distributions to a statistical uncertainty of Δz ˜ ±0.01. We forecast that our proposal can, in principle, control photometric redshift uncertainties in DES weak lensing experiments at a level near the intrinsic statistical noise of the experiment over the range of redshifts where redMaPPer clusters are available. Our results provide strong motivation to launch a programme to fully characterize the systematic errors from bias evolution and photo-z shapes in our calibration procedure.
Davis, C.; Rozo, E.; Roodman, A.; ...
2018-03-26
Galaxy cross-correlations with high-fidelity redshift samples hold the potential to precisely calibrate systematic photometric redshift uncertainties arising from the unavailability of complete and representative training and validation samples of galaxies. However, application of this technique in the Dark Energy Survey (DES) is hampered by the relatively low number density, small area, and modest redshift overlap between photometric and spectroscopic samples. We propose instead using photometric catalogs with reliable photometric redshifts for photo-z calibration via cross-correlations. We verify the viability of our proposal using redMaPPer clusters from the Sloan Digital Sky Survey (SDSS) to successfully recover the redshift distribution of SDSS spectroscopic galaxies. We demonstrate how to combine photo-z with cross-correlation data to calibrate photometric redshift biases while marginalizing over possible clustering bias evolution in either the calibration or unknown photometric samples. We apply our method to DES Science Verification (DES SV) data in order to constrain the photometric redshift distribution of a galaxy sample selected for weak lensing studies, constraining the mean of the tomographic redshift distributions to a statistical uncertainty of Δz ˜ ±0.01. We forecast that our proposal can in principle control photometric redshift uncertainties in DES weak lensing experiments at a level near the intrinsic statistical noise of the experiment over the range of redshifts where redMaPPer clusters are available. Here, our results provide strong motivation to launch a program to fully characterize the systematic errors from bias evolution and photo-z shapes in our calibration procedure.
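The clustering-redshift idea behind these cross-correlations can be sketched schematically: the cross-correlation amplitude between the unknown sample and reference objects in a narrow redshift slice scales with the unknown sample's n(z) times the two bias factors, so dividing out the (assumed known) biases recovers n(z). The numbers below are synthetic and constructed to satisfy exactly that relation, so this illustrates the estimator's algebra, not a real measurement:

```python
import math

z_bins = [0.2 + 0.1 * i for i in range(8)]     # reference redshift slices

def true_nz(z):
    # Invented "true" redshift distribution of the unknown sample.
    return math.exp(-0.5 * ((z - 0.55) / 0.12) ** 2)

b_ref = [1.0 + 0.5 * z for z in z_bins]        # assumed reference bias evolution
b_unk = 1.3                                    # assumed bias of the unknown sample

# Synthetic cross-correlation amplitudes: w_ur(z) ∝ b_unk * b_ref(z) * n_u(z).
w_ur = [b_unk * br * true_nz(z) for z, br in zip(z_bins, b_ref)]

# Estimator: divide out the bias factors, then normalize to get n_u(z).
nz_est = [w / (b_unk * br) for w, br in zip(w_ur, b_ref)]
norm = sum(nz_est)
nz_est = [x / norm for x in nz_est]

mean_z = sum(z * n for z, n in zip(z_bins, nz_est))
print(f"recovered mean z ~ {mean_z:.3f}")      # prints: recovered mean z ~ 0.550
```

The hard part in practice, marginalizing over unknown bias evolution, is exactly what the abstract's method addresses; here the biases are simply assumed known.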
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, S. N.; Revet, G.; Fuchs, J.
Radiochromic films (RCFs) are commonly used in dosimetry for a wide range of radiation sources (electrons, protons, and photons) in medical, industrial, and scientific applications. They are multi-layered, comprising plastic substrate layers and sensitive layers that incorporate a radiation-sensitive dye. Quantitative dose can be retrieved by digitizing the film, provided that a prior calibration exists. Here, to calibrate the newly developed EBT3 and HDv2 RCFs from Gafchromic™, we used the Stanford Medical LINAC to deposit various doses of 10 MeV photons in the films, and scanned the films using three independent EPSON Precision 2450 scanners, three independent EPSON V750 scanners, and two independent EPSON 11000XL scanners. The films were scanned in separate RGB channels, as well as in black and white, and the film orientation was varied. We found that the green channel of the RGB scan and the grayscale channel are in fact quite consistent across the different scanner models, although this comes at the cost of a reduction in sensitivity (by a factor of ∼2.5 compared to the red channel). To allow any user to extend the absolute calibration reported here to any other scanner, we furthermore provide a calibration curve of the EPSON 2450 scanner based on absolutely calibrated, commercially available optical density filters.
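Applying a calibration like this means converting scanned pixel values to net optical density and mapping OD to dose through a fitted curve. A sketch using a two-parameter dose(OD) model fitted by linear least squares; all pixel values, doses, and the model form are invented for illustration, not the published EBT3/HDv2 calibration:

```python
import math

# Hypothetical 16-bit green-channel pixel values for calibration films
# exposed to known 10 MeV photon doses (Gy); numbers are illustrative.
PV_UNEXPOSED = 42000.0
readings = [(0.5, 38000), (1.0, 34500), (2.0, 29000),
            (4.0, 22500), (8.0, 16000)]          # (dose, pixel value)

def net_od(pv):
    # Net optical density relative to the unexposed film.
    return math.log10(PV_UNEXPOSED / pv)

# Fit dose = a*OD + b*OD^2 by linear least squares (2x2 normal equations).
xs = [net_od(pv) for _, pv in readings]
ys = [d for d, _ in readings]
s11 = sum(x * x for x in xs)
s12 = sum(x ** 3 for x in xs)
s22 = sum(x ** 4 for x in xs)
t1 = sum(x * y for x, y in zip(xs, ys))
t2 = sum(x * x * y for x, y in zip(xs, ys))
det = s11 * s22 - s12 * s12
a = (t1 * s22 - t2 * s12) / det
b = (t2 * s11 - t1 * s12) / det

def dose_from_pv(pv):
    """Map a scanned pixel value back to absorbed dose via the fit."""
    od = net_od(pv)
    return a * od + b * od * od

print(f"fit: dose ≈ {a:.2f}*OD + {b:.2f}*OD^2")
```

Published film calibrations typically use more points and richer functional forms (e.g. rational functions of net OD), but the pipeline — digitize, compute net OD, invert the fitted curve — is the same.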