Tian, Hai-Qing; Wang, Chun-Guang; Zhang, Hai-Jun; Yu, Zhi-Hong; Li, Jian-Kang
2012-11-01
Outlier samples strongly influence the precision of the calibration model in soluble solids content measurement of melons using NIR spectra. According to the possible sources of outlier samples, three methods (predicted concentration residual test; Chauvenet test; leverage and studentized residual test) were used to discriminate these outliers. Nine suspicious outliers were detected in a calibration set of 85 fruit samples. Because the 9 suspicious outliers might include non-outlier samples, they were returned to the model one by one to test whether they affected the model and its prediction precision. In this way, 5 samples that were helpful to the model rejoined the calibration set, and a new model was developed with a correlation coefficient (r) of 0.889 and a root mean square error of calibration (RMSEC) of 0.601 degrees Brix. For 35 unknown samples, the root mean square error of prediction (RMSEP) was 0.854 degrees Brix. This model performed better than the model developed without eliminating any outliers from the calibration set (r = 0.797, RMSEC = 0.849 degrees Brix, RMSEP = 1.19 degrees Brix), and was more representative and stable than the model developed with all 9 samples eliminated from the calibration set (r = 0.892, RMSEC = 0.605 degrees Brix, RMSEP = 0.862 degrees Brix).
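The leverage and studentized residual test named above can be sketched generically as follows; the variable names, data dimensions and the planted outlier are illustrative, not the authors' melon data:

```python
import numpy as np

def leverage_studentized(X, y):
    """Screen calibration samples via leverage and (internally) studentized
    residuals of an ordinary least-squares fit of y on X."""
    n, p = X.shape
    Xd = np.column_stack([np.ones(n), X])       # add intercept column
    H = Xd @ np.linalg.pinv(Xd.T @ Xd) @ Xd.T   # hat matrix
    h = np.diag(H)                              # leverage of each sample
    resid = y - H @ y                           # least-squares residuals
    s2 = resid @ resid / (n - p - 1)            # residual variance estimate
    t = resid / np.sqrt(s2 * (1.0 - h))         # studentized residuals
    return h, t

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=40)
y[5] += 3.0                                    # plant one reference-value outlier
h, t = leverage_studentized(X, y)
print(int(np.argmax(np.abs(t))))               # the planted outlier stands out
```

Samples with a large |t| (suspect reference value) or large leverage h (spectrally extreme) would then be flagged as the "suspicious outliers" to be re-examined one by one.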
Ottaway, Josh; Farrell, Jeremy A; Kalivas, John H
2013-02-05
An essential part of calibration is establishing the analyte calibration reference samples. These samples must characterize the sample matrix and measurement conditions (chemical, physical, instrumental, and environmental) of any sample to be predicted. Calibration usually requires measuring spectra for numerous reference samples in addition to determining the corresponding analyte reference values. Both tasks are typically time-consuming and costly. This paper reports on a method named pure component Tikhonov regularization (PCTR) that does not require laboratory prepared or determined reference values. Instead, an analyte pure component spectrum is used in conjunction with nonanalyte spectra for calibration. Nonanalyte spectra can be from different sources including pure component interference samples, blanks, and constant analyte samples. The approach is also applicable to calibration maintenance when the analyte pure component spectrum is measured in one set of conditions and nonanalyte spectra are measured in new conditions. The PCTR method balances the trade-offs between calibration model shrinkage and the degree of orthogonality to the nonanalyte content (model direction) in order to obtain accurate predictions. Using visible and near-infrared (NIR) spectral data sets, the PCTR results are comparable to those obtained using ridge regression (RR) with reference calibration sets. The flexibility of PCTR also allows including reference samples if such samples are available.
Zhang, Hong-guang; Lu, Jian-gang
2016-02-01
To overcome the problems of significant differences among samples and of nonlinearity between the property and the spectra of samples in quantitative spectral analysis, a local regression algorithm is proposed in this paper. In this algorithm, net analyte signal (NAS) analysis is first used to obtain the net analyte signal of the calibration samples and unknown samples; the Euclidean distance between the net analyte signal of an unknown sample and those of the calibration samples is then calculated and used as a similarity index. According to this similarity index, a local calibration set is selected individually for each unknown sample. Finally, a local PLS regression model is built on each local calibration set for each unknown sample. The proposed method was applied to a set of near-infrared spectra of meat samples. The results demonstrate that the prediction precision and model complexity of the proposed method are superior to those of the global PLS regression method and of a conventional local regression algorithm based on spectral Euclidean distance.
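The local-set selection step can be sketched as below. For brevity the distance is computed on the raw spectra (the "conventional" variant the paper compares against); the paper itself computes it on the NAS vectors after projecting out interferent contributions:

```python
import numpy as np

def local_calibration_set(X_cal, x_new, m):
    """Return indices of the m calibration samples nearest to x_new
    by Euclidean distance; a local PLS model would then be fit on them."""
    d = np.linalg.norm(X_cal - x_new, axis=1)
    return np.argsort(d)[:m]

# Toy 2-channel "spectra": two samples near the unknown, two far away.
X_cal = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0], [6.0, 5.0]])
idx = local_calibration_set(X_cal, np.array([5.2, 5.1]), 2)
print(sorted(idx.tolist()))   # the two nearby samples are selected
```

A separate local model per unknown sample is what lets the method cope with nonlinearity: each model only needs to be linear in a small neighborhood.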
Zhan, Xue-yan; Zhao, Na; Lin, Zhao-zhou; Wu, Zhi-sheng; Yuan, Rui-juan; Qiao, Yan-jiang
2014-12-01
The choice of algorithm for calibration set selection is one of the key factors in building a good NIR quantitative model. Several algorithms are available, such as Random Sampling (RS), Conventional Selection (CS), Kennard-Stone (KS) and Sample set Partitioning based on joint x-y distance (SPXY). However, systematic comparisons among these algorithms are lacking. In the present paper, NIR quantitative models for determining the asiaticoside content in Centella total glucosides were established, for which 7 indexes were classified and selected, and the effects of the CS, KS and SPXY algorithms for calibration set selection on the accuracy and robustness of the models were investigated. The accuracy indexes of models with calibration sets selected by the SPXY algorithm differed significantly from those with calibration sets selected by the CS or KS algorithm, while the robustness indexes, such as RMSECV and |RMSEP-RMSEC|, did not differ significantly. Therefore, the SPXY algorithm for calibration set selection can improve the predictive accuracy of NIR quantitative models for determining asiaticoside content in Centella total glucosides without significantly affecting their robustness, which provides a reference for choosing an appropriate calibration set selection algorithm when NIR quantitative models are established for solid systems of traditional Chinese medicine.
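The Kennard-Stone algorithm compared here has a standard maximin form, sketched below on toy 2-D data (SPXY is the same procedure with the distance augmented by a y-distance term):

```python
import numpy as np

def kennard_stone(X, k):
    """Kennard-Stone calibration-set selection: seed with the two most
    distant samples, then repeatedly add the sample whose minimum distance
    to the already-selected set is largest."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(D), D.shape)
    selected = [int(i), int(j)]
    while len(selected) < k:
        remaining = [r for r in range(len(X)) if r not in selected]
        dmin = D[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining[int(np.argmax(dmin))])
    return selected

X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 0.0], [0.5, 0.9]])
print(kennard_stone(X, 3))   # a spread-out subset; the near-duplicate of sample 0 is skipped
```

Because selection is driven purely by distances in x-space (or joint x-y space for SPXY), the chosen calibration set covers the sample space evenly, which is the property the paper links to predictive accuracy.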
Improvement of Predictive Ability by Uniform Coverage of the Target Genetic Space
Bustos-Korts, Daniela; Malosetti, Marcos; Chapman, Scott; Biddulph, Ben; van Eeuwijk, Fred
2016-01-01
Genome-enabled prediction provides breeders with the means to increase the number of genotypes that can be evaluated for selection. One of the major challenges in genome-enabled prediction is how to construct a training set of genotypes from a calibration set that represents the target population of genotypes, where the calibration set is composed of a training and validation set. A random sampling protocol of genotypes from the calibration set will lead to low quality coverage of the total genetic space by the training set when the calibration set contains population structure. As a consequence, predictive ability will be affected negatively, because some parts of the genotypic diversity in the target population will be under-represented in the training set, whereas other parts will be over-represented. Therefore, we propose a training set construction method that uniformly samples the genetic space spanned by the target population of genotypes, thereby increasing predictive ability. To evaluate our method, we constructed training sets alongside the identification of corresponding genomic prediction models for four genotype panels that differed in the amount of population structure they contained (maize Flint, maize Dent, wheat, and rice). Training sets were constructed using uniform sampling, stratified-uniform sampling, stratified sampling and random sampling. We compared these methods with a method that maximizes the generalized coefficient of determination (CD). Several training set sizes were considered. We investigated four genomic prediction models: multi-locus QTL models, GBLUP models, combinations of QTL and GBLUPs, and Reproducing Kernel Hilbert Space (RKHS) models. For the maize and wheat panels, construction of the training set under uniform sampling led to a larger predictive ability than under stratified and random sampling. The results of our methods were similar to those of the CD method.
For the rice panel, all training set construction methods led to similar predictive ability, a reflection of the very strong population structure in this panel. PMID:27672112
Kuligowski, Julia; Carrión, David; Quintás, Guillermo; Garrigues, Salvador; de la Guardia, Miguel
2011-01-01
The selection of an appropriate calibration set is a critical step in multivariate method development. In this work, the effect of using different calibration sets, based on a previous classification of unknown samples, on the partial least squares (PLS) regression model performance has been discussed. As an example, attenuated total reflection (ATR) mid-infrared spectra of deep-fried vegetable oil samples from three botanical origins (olive, sunflower, and corn oil), with increasing polymerized triacylglyceride (PTG) content induced by a deep-frying process were employed. The use of a one-class-classifier partial least squares-discriminant analysis (PLS-DA) and a rooted binary directed acyclic graph tree provided accurate oil classification. Oil samples fried without foodstuff could be classified correctly, independent of their PTG content. However, class separation of oil samples fried with foodstuff, was less evident. The combined use of double-cross model validation with permutation testing was used to validate the obtained PLS-DA classification models, confirming the results. To discuss the usefulness of the selection of an appropriate PLS calibration set, the PTG content was determined by calculating a PLS model based on the previously selected classes. In comparison to a PLS model calculated using a pooled calibration set containing samples from all classes, the root mean square error of prediction could be improved significantly using PLS models based on the selected calibration sets using PLS-DA, ranging between 1.06 and 2.91% (w/w).
Yu, Shaohui; Xiao, Xue; Ding, Hong; Xu, Ge; Li, Haixia; Liu, Jing
2017-08-05
Quantitative analysis is very difficult for excitation-emission fluorescence spectroscopy of multi-component mixtures whose fluorescence peaks overlap severely. As an effective method for quantitative analysis, partial least squares can extract latent variables from both the independent and the dependent variables, so it can model multiple correlations between variables. However, several factors usually affect the prediction results of partial least squares, such as noise and the distribution and number of samples in the calibration set. This work focuses on the calibration set problems mentioned above. First, outliers in the calibration set are removed by leave-one-out cross-validation. Then, according to two different prediction requirements, the EWPLS and VWPLS methods are proposed. The independent and dependent variables are weighted in the EWPLS method by the maximum error of the recovery rate and in the VWPLS method by the maximum variance of the recovery rate. Three organic substances with severely overlapping excitation-emission fluorescence spectra were selected for the experiments. The step adjustment parameter, the number of iterations and the number of samples in the calibration set are discussed. The results show that the EWPLS and VWPLS methods are superior to the PLS method, especially in the case of small calibration sets.
Wolski, Witold E; Lalowski, Maciej; Jungblut, Peter; Reinert, Knut
2005-01-01
Background Peptide Mass Fingerprinting (PMF) is a widely used mass spectrometry (MS) method of analysis of proteins and peptides. It relies on the comparison between experimentally determined and theoretical mass spectra. The PMF process requires calibration, usually performed with external or internal calibrants of known molecular masses. Results We have introduced two novel MS calibration methods. The first method utilises the local similarity of peptide maps generated after separation of complex protein samples by two-dimensional gel electrophoresis. It computes a multiple peak-list alignment of the data set using a modified Minimum Spanning Tree (MST) algorithm. The second method exploits the idea that hundreds of MS samples are measured in parallel on one sample support. It improves the calibration coefficients by applying a two-dimensional Thin Plate Splines (TPS) smoothing algorithm. We studied the novel calibration methods utilising data generated by three different MALDI-TOF-MS instruments. We demonstrate that a PMF data set can be calibrated without resorting to external or relying on widely occurring internal calibrants. The methods developed here were implemented in R and are part of the BioConductor package mscalib. Conclusion The MST calibration algorithm is well suited to calibrate MS spectra of protein samples resulting from two-dimensional gel electrophoretic separation. The TPS based calibration algorithm might be used to correct systematic mass measurement errors observed for large MS sample supports. As compared to other methods, our combined MS spectra calibration strategy increases the peptide/protein identification rate by an additional 5-15%. PMID:16102175
40 CFR 91.315 - Analyzer initial calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
... in § 91.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers and record the values. The same gas flow rates shall be used as when sampling exhaust. (d) Rechecking of zero...
(The same provision appears unchanged in the 2010, 2011, 2012 and 2014 editions of the CFR.)
Development of composite calibration standard for quantitative NDE by ultrasound and thermography
NASA Astrophysics Data System (ADS)
Dayal, Vinay; Benedict, Zach G.; Bhatnagar, Nishtha; Harper, Adam G.
2018-04-01
Inspection of aircraft components for damage utilizing ultrasonic Non-Destructive Evaluation (NDE) is a time intensive endeavor. Additional time spent during aircraft inspections translates to added cost to the company performing them, and as such, reducing this expenditure is of great importance. There is also great variance in the calibration samples from one entity to another due to the lack of a common calibration set. By characterizing damage types, we can condense the required calibration sets and reduce the time required to perform calibration, while also providing procedures for the fabrication of these standard sets. We present here our effort to fabricate composite samples with known defects and to quantify the size and location of defects such as delaminations and impact damage. Ultrasonic and thermographic images are digitally enhanced to accurately measure the damage size. Ultrasonic NDE is compared with thermography.
Melfsen, Andreas; Hartung, Eberhard; Haeussermann, Angelika
2013-02-01
The robustness of in-line raw milk analysis with near-infrared spectroscopy (NIRS) was tested with respect to the prediction of the raw milk contents fat, protein and lactose. Near-infrared (NIR) spectra of raw milk (n = 3119) were acquired on three different farms during the milking process of 354 milkings over a period of six months. Calibration models were calculated for: a random data set of each farm (fully random internal calibration); first two thirds of the visits per farm (internal calibration); whole datasets of two of the three farms (external calibration), and combinations of external and internal datasets. Validation was done either on the remaining data set per farm (internal validation) or on data of the remaining farms (external validation). Excellent calibration results were obtained when fully randomised internal calibration sets were used for milk analysis. In this case, RPD values of around ten, five and three for the prediction of fat, protein and lactose content, respectively, were achieved. Farm internal calibrations achieved much poorer prediction results especially for the prediction of protein and lactose with RPD values of around two and one respectively. The prediction accuracy improved when validation was done on spectra of an external farm, mainly due to the higher sample variation in external calibration sets in terms of feeding diets and individual cow effects. The results showed that further improvements were achieved when additional farm information was added to the calibration set. One of the main requirements towards a robust calibration model is the ability to predict milk constituents in unknown future milk samples. The robustness and quality of prediction increases with increasing variation of, e.g., feeding and cow individual milk composition in the calibration model.
Rincent, R; Laloë, D; Nicolas, S; Altmann, T; Brunel, D; Revilla, P; Rodríguez, V M; Moreno-Gonzalez, J; Melchinger, A; Bauer, E; Schoen, C-C; Meyer, N; Giauffret, C; Bauland, C; Jamin, P; Laborde, J; Monod, H; Flament, P; Charcosset, A; Moreau, L
2012-10-01
Genomic selection refers to the use of genotypic information for predicting breeding values of selection candidates. A prediction formula is calibrated with the genotypes and phenotypes of reference individuals constituting the calibration set. The size and the composition of this set are essential parameters affecting the prediction reliabilities. The objective of this study was to maximize reliabilities by optimizing the calibration set. Different criteria based on the diversity or on the prediction error variance (PEV) derived from the realized additive relationship matrix-best linear unbiased predictions model (RA-BLUP) were used to select the reference individuals. For the latter, we considered the mean of the PEV of the contrasts between each selection candidate and the mean of the population (PEVmean) and the mean of the expected reliabilities of the same contrasts (CDmean). These criteria were tested with phenotypic data collected on two diversity panels of maize (Zea mays L.) genotyped with a 50k SNPs array. In the two panels, samples chosen based on CDmean gave higher reliabilities than random samples for various calibration set sizes. CDmean also appeared superior to PEVmean, which can be explained by the fact that it takes into account the reduction of variance due to the relatedness between individuals. Selected samples were close to optimality for a wide range of trait heritabilities, which suggests that the strategy presented here can efficiently sample subsets in panels of inbred lines. A script to optimize reference samples based on CDmean is available on request.
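The greedy flavor of calibration-set optimization described above can be sketched as follows. This is a simplified stand-in, not Rincent et al.'s exact CDmean criterion: reliability is approximated as k_i'(K_CC + lam*I)^{-1} k_i / K_ii from a kinship matrix K, and the set grows greedily to maximize its mean; the kinship values, lam, and the two-family toy panel are illustrative assumptions:

```python
import numpy as np

def greedy_reliability(K, n_ref, lam=1.0):
    """Greedily choose n_ref reference individuals from a kinship matrix K so
    that the mean approximate GBLUP reliability over all candidates is
    maximized at each step (a simplified CDmean-style criterion)."""
    n = K.shape[0]
    selected = []
    for _ in range(n_ref):
        best, best_score = None, -np.inf
        for cand in range(n):
            if cand in selected:
                continue
            C = selected + [cand]
            Kcc_inv = np.linalg.inv(K[np.ix_(C, C)] + lam * np.eye(len(C)))
            rel = np.einsum('ij,jk,ik->i', K[:, C], Kcc_inv, K[:, C]) / np.diag(K)
            if rel.mean() > best_score:
                best, best_score = cand, rel.mean()
        selected.append(best)
    return selected

# Two families of closely related genotypes (block-structured kinship).
K = np.array([[1.0, 0.9, 0.0, 0.0],
              [0.9, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.9],
              [0.0, 0.0, 0.9, 1.0]])
chosen = greedy_reliability(K, 2)
print(chosen)   # one genotype per family: related individuals add little new information
```

This reproduces, in miniature, the paper's point that criteria accounting for relatedness avoid wasting reference slots on near-duplicates, unlike random sampling.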
NASA Astrophysics Data System (ADS)
Dillner, A. M.; Takahama, S.
2014-11-01
Organic carbon (OC) can constitute 50% or more of the mass of atmospheric particulate matter. Typically, the organic carbon concentration is measured using thermal methods such as Thermal-Optical Reflectance (TOR) from quartz fiber filters. Here, methods are presented whereby Fourier Transform Infrared (FT-IR) absorbance spectra from polytetrafluoroethylene (PTFE or Teflon) filters are used to accurately predict TOR OC. Transmittance FT-IR analysis is rapid, inexpensive, and non-destructive to the PTFE filters. To develop and test the method, FT-IR absorbance spectra are obtained from 794 samples from seven Interagency Monitoring of PROtected Visual Environment (IMPROVE) sites sampled during 2011. Partial least squares regression is used to calibrate sample FT-IR absorbance spectra to artifact-corrected TOR OC. The FT-IR spectra are divided into calibration and test sets by sampling site and date, which leads to precise and accurate OC predictions by FT-IR, as indicated by a high coefficient of determination (R2; 0.96), low bias (0.02 μg m-3; all μg m-3 values are based on the nominal IMPROVE sample volume of 32.8 m3), low error (0.08 μg m-3) and low normalized error (11%). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision and accuracy to collocated TOR measurements. FT-IR spectra are also divided into calibration and test sets by OC mass and by OM / OC, which reflects the organic composition of the particulate matter and is obtained from organic functional group composition; this division also leads to precise and accurate OC predictions. Low OC concentrations have higher bias and normalized error due to TOR analytical errors and artifact correction errors, not due to the range of OC mass of the samples in the calibration set.
However, samples with low OC mass can be used to predict samples with high OC mass, indicating that the calibration is linear. Using samples in the calibration set that have different OM / OC or ammonium / OC distributions than the test set leads to only a modest increase in bias and normalized error in the predicted samples. We conclude that FT-IR analysis with partial least squares regression is a robust method for accurately predicting TOR OC in IMPROVE network samples, providing complementary information to the organic functional group composition and organic aerosol mass estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).
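The core calibration step, regressing spectra onto a reference value by partial least squares, can be sketched with a minimal PLS1 (NIPALS-style) implementation. The synthetic data below stand in for FT-IR spectra and TOR OC values; the dimensions and latent-factor structure are illustrative assumptions:

```python
import numpy as np

def pls1_fit(X, y, a):
    """Minimal PLS1 (NIPALS-style) calibration of spectra X to reference
    values y with `a` latent variables; returns coefficients and intercept."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(a):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)        # weight vector
        t = Xc @ w                    # scores
        tt = t @ t
        p = Xc.T @ t / tt             # X loadings
        qa = yc @ t / tt              # y loading
        Xc = Xc - np.outer(t, p)      # deflate X
        yc = yc - qa * t              # deflate y
        W.append(w); P.append(p); q.append(qa)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)
    return B, y_mean - x_mean @ B

rng = np.random.default_rng(2)
scores = rng.normal(size=(60, 2))                 # two latent "chemical" factors
X = scores @ rng.normal(size=(2, 30)) + 0.01 * rng.normal(size=(60, 30))
y = scores @ np.array([1.0, 0.5])                 # reference value (TOR-OC stand-in)
B, b0 = pls1_fit(X[:40], y[:40], 2)               # calibration set: first 40 samples
rmsep = np.sqrt(np.mean((X[40:] @ B + b0 - y[40:]) ** 2))
print(rmsep)                                      # held-out prediction error is small
```

Splitting the rows into calibration and test sets before fitting, as done here, mirrors the site/date, OC-mass and OM/OC splits the paper evaluates.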
An active learning representative subset selection method using net analyte signal.
He, Zhonghai; Ma, Zhenhe; Luan, Jingmin; Cai, Xi
2018-05-05
To guarantee accurate predictions, representative samples are needed when building a calibration model for spectroscopic measurements. However, in general, it is not known whether a sample is representative prior to measuring its concentration, which is both time-consuming and expensive. In this paper, a method to determine whether a sample should be selected into a calibration set is presented. The selection is based on the difference in the Euclidean norm of the net analyte signal (NAS) vector between the candidate and existing samples. First, the concentrations and spectra of a group of samples are used to compute the projection matrix, NAS vectors, and scalar values. Next, the NAS vectors of candidate samples are computed by multiplying the projection matrix with the spectra of the samples; the scalar value of each NAS is obtained by computing its norm. The distance between the candidate set and the selected set is then computed, and the samples with the largest distance are added to the selected set sequentially. Last, the analyte concentration of each selected sample is measured so that it can serve as a calibration sample. A validation test shows that the presented method is more efficient than random selection. As a result, the amount of time and money spent on reference measurements is greatly reduced.
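The sequential selection step can be sketched as below. The NAS norms are given directly as a toy array (in the paper they come from projecting each spectrum onto the analyte subspace first); function and variable names are illustrative:

```python
import numpy as np

def select_by_nas_norm(nas_norms, n_select, seed_idx=0):
    """Sequentially add the candidate whose NAS norm is farthest, in the
    minimum-distance sense, from the NAS norms of already-selected samples."""
    selected = [seed_idx]
    while len(selected) < n_select:
        rest = [i for i in range(len(nas_norms)) if i not in selected]
        gaps = [min(abs(nas_norms[i] - nas_norms[j]) for j in selected)
                for i in rest]
        selected.append(rest[int(np.argmax(gaps))])
    return selected

norms = np.array([0.10, 0.12, 0.11, 0.95, 0.50])   # hypothetical NAS norms
print(select_by_nas_norm(norms, 3))                # skips the near-duplicates of sample 0
```

Only the selected samples then need reference concentration measurements, which is where the claimed savings in time and money come from.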
New Teff and [Fe/H] spectroscopic calibration for FGK dwarfs and GK giants
NASA Astrophysics Data System (ADS)
Teixeira, G. D. C.; Sousa, S. G.; Tsantaki, M.; Monteiro, M. J. P. F. G.; Santos, N. C.; Israelian, G.
2016-10-01
Context. The ever-growing number of large spectroscopic survey programs has increased the importance of fast and reliable methods with which to determine precise stellar parameters. Some of these methods are highly dependent on correct spectroscopic calibrations. Aims: The goal of this work is to obtain a new spectroscopic calibration for a fast estimate of Teff and [Fe/H] for a wide range of stellar spectral types. Methods: We used spectra from a joint sample of 708 stars, compiled from 451 FGK dwarfs and 257 GK-giant stars. We used homogeneously determined spectroscopic stellar parameters to derive temperature calibrations using a set of selected EW line-ratios, and [Fe/H] calibrations using a set of selected Fe I lines. Results: We have derived 322 EW line-ratios and 100 Fe I lines that can be used to compute Teff and [Fe/H], respectively. We show that these calibrations are effective for FGK dwarfs and GK-giant stars in the following ranges: 4500 K
Decoder calibration with ultra small current sample set for intracortical brain-machine interface
NASA Astrophysics Data System (ADS)
Zhang, Peng; Ma, Xuan; Chen, Luyao; Zhou, Jin; Wang, Changyong; Li, Wei; He, Jiping
2018-04-01
Objective. Intracortical brain-machine interfaces (iBMIs) aim to restore efficient communication and movement ability for paralyzed patients. However, frequent recalibration is required for consistency and reliability, and each recalibration requires a relatively large set of current samples. The aim of this study is to develop an effective decoder calibration method that achieves good performance while minimizing recalibration time. Approach. Two rhesus macaques implanted with intracortical microelectrode arrays were trained separately on movement and sensory paradigms. Neural signals were recorded to decode reaching positions or grasping postures. A novel principal component analysis-based domain adaptation (PDA) method was proposed to recalibrate the decoder with only an ultra-small current sample set by taking advantage of large historical data, and the decoding performance was compared with three other calibration methods for evaluation. Main results. The PDA method closed the gap between historical and current data effectively, and made it possible to take advantage of large historical data for decoder recalibration in current data decoding. Using only an ultra-small current sample set (five trials of each category), the decoder calibrated using the PDA method achieved much better and more robust performance in all sessions than the three other calibration methods in both monkeys. Significance. (1) This study brings transfer learning theory into iBMI decoder calibration for the first time. (2) Unlike most transfer learning studies, the target data in this study were an ultra-small sample set and were transferred to the source data. (3) By taking advantage of historical data, the PDA method was demonstrated to be effective in reducing recalibration time for both the movement and sensory paradigms, indicating viable generalization.
By reducing the demand for large current training data, this new method may facilitate the application of intracortical brain-machine interfaces in clinical practice.
Dambergs, Robert G; Mercurio, Meagan D; Kassara, Stella; Cozzolino, Daniel; Smith, Paul A
2012-06-01
Information relating to tannin concentration in grapes and wine is not currently available simply and rapidly enough to inform decision-making by grape growers, winemakers, and wine researchers. Spectroscopy and chemometrics have been implemented for the analysis of critical grape and wine parameters and offer a possible solution for rapid tannin analysis. We report here the development and validation of an ultraviolet (UV) spectral calibration for the prediction of tannin concentration in red wines. Such spectral calibrations reduce the time and resource requirements involved in measuring tannins. A diverse calibration set (n = 204) was prepared with samples of Australian wines of five varieties (Cabernet Sauvignon, Shiraz, Merlot, Pinot Noir, and Durif), from regions spanning the wine grape growing areas of Australia, with varying climate and soils, and with vintages ranging from 1991 to 2007. The relationship between tannin measured by the methyl cellulose precipitation (MCP) reference method at 280 nm and tannin predicted with a multiple linear regression (MLR) calibration, using ultraviolet (UV) absorbance at 250, 270, 280, 290, and 315 nm, was strong (r(2)val = 0.92; SECV = 0.20 g/L). An independent validation set (n = 85) was predicted using the MLR algorithm developed with the calibration set and gave confidence in the ability to predict new samples, independent of the samples used to prepare the calibration (r(2)val = 0.94; SEP = 0.18 g/L). The MLR algorithm could also predict tannin in fermenting wines (r(2)val = 0.76; SEP = 0.18 g/L), but worked best from the second day of fermentation onward. This study also explored instrument-to-instrument transfer of a spectral calibration for MCP tannin. After slope and bias adjustments of the calibration, efficient calibration transfer to other laboratories was clearly demonstrated, with all instruments in the study effectively giving identical results on a transfer set.
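The two chemometric steps described, an MLR calibration on a handful of UV absorbances and a slope/bias adjustment for transfer to a second instrument, can be sketched as follows. The synthetic absorbances, coefficient values and the simple gain/offset instrument difference are illustrative assumptions:

```python
import numpy as np

def fit_mlr(A, tannin):
    """Least-squares MLR of tannin concentration on absorbances at a few
    wavelengths (the paper uses 250, 270, 280, 290 and 315 nm);
    returns coefficients including an intercept."""
    A1 = np.column_stack([np.ones(len(A)), A])
    coef, *_ = np.linalg.lstsq(A1, tannin, rcond=None)
    return coef

def slope_bias_correct(pred_master, ref):
    """Slope/bias adjustment for calibration transfer: regress reference
    values on the master-model predictions for a transfer sample set."""
    slope, bias = np.polyfit(pred_master, ref, 1)
    return slope, bias

rng = np.random.default_rng(3)
A = rng.random((30, 5))                                   # absorbances, instrument 1
tannin = A @ np.array([1.0, 0.4, 2.0, 0.3, -0.5]) + 0.2   # synthetic reference values
coef = fit_mlr(A, tannin)

A2 = 1.05 * A + 0.02                                      # same samples, instrument 2
pred2 = np.column_stack([np.ones(len(A2)), A2]) @ coef    # master model, raw transfer
slope, bias = slope_bias_correct(pred2, tannin)
corrected = slope * pred2 + bias                          # adjusted predictions
print(np.max(np.abs(corrected - tannin)))                 # transfer error is tiny
```

Because the second instrument differs only by a gain and offset here, a single slope/bias pair recovers the reference values, which is the idealized version of the transfer result the paper reports.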
NASA Astrophysics Data System (ADS)
Dillner, A. M.; Takahama, S.
2015-03-01
Organic carbon (OC) can constitute 50% or more of the mass of atmospheric particulate matter. Typically, organic carbon is measured from a quartz fiber filter that has been exposed to a volume of ambient air and analyzed using thermal methods such as thermal-optical reflectance (TOR). Here, methods are presented that show the feasibility of using Fourier transform infrared (FT-IR) absorbance spectra from polytetrafluoroethylene (PTFE or Teflon) filters to accurately predict TOR OC. This work marks an initial step in proposing a method that can reduce the operating costs of large air quality monitoring networks with an inexpensive, non-destructive analysis technique using routinely collected PTFE filter samples which, in addition to OC concentrations, can concurrently provide information regarding the composition of organic aerosol. This feasibility study suggests that the minimum detection limit and errors (or uncertainty) of FT-IR predictions are on par with TOR OC such that evaluation of long-term trends and epidemiological studies would not be significantly impacted. To develop and test the method, FT-IR absorbance spectra are obtained from 794 samples from seven Interagency Monitoring of PROtected Visual Environment (IMPROVE) sites collected during 2011. Partial least-squares regression is used to calibrate sample FT-IR absorbance spectra to TOR OC. The FT-IR spectra are divided into calibration and test sets by sampling site and date. The calibration produces precise and accurate TOR OC predictions of the test set samples by FT-IR, as indicated by a high coefficient of determination (R2; 0.96), low bias (0.02 μg m-3; the nominal IMPROVE sample volume is 32.8 m3), low error (0.08 μg m-3) and low normalized error (11%). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision to collocated TOR measurements.
FT-IR spectra are also divided into calibration and test sets by OC mass and by OM / OC ratio, which reflects the organic composition of the particulate matter and is obtained from organic functional group composition; these divisions also lead to precise and accurate OC predictions. Low OC concentrations have higher bias and normalized error due to TOR analytical errors and artifact-correction errors, not due to the range of OC mass of the samples in the calibration set. However, samples with low OC mass can be used to predict samples with high OC mass, indicating that the calibration is linear. Using samples in the calibration set that have different OM / OC or ammonium / OC distributions than the test set leads to only a modest increase in bias and normalized error in the predicted samples. We conclude that FT-IR analysis with partial least-squares regression is a robust method for accurately predicting TOR OC in IMPROVE network samples - providing complementary information to the organic functional group composition and organic aerosol mass estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).
Classification of plum spirit drinks by synchronous fluorescence spectroscopy.
Sádecká, J; Jakubíková, M; Májek, P; Kleinová, A
2016-04-01
Synchronous fluorescence spectroscopy was used in combination with principal component analysis (PCA) and linear discriminant analysis (LDA) for the differentiation of plum spirits according to their geographical origin. A total of 14 Czech, 12 Hungarian and 18 Slovak plum spirit samples were used. The samples were divided into two categories: colorless (22 samples) and colored (22 samples). Synchronous fluorescence spectra (SFS) obtained at a wavelength difference of 60 nm provided the best results. Considering the PCA-LDA applied to the SFS of all samples, Czech, Hungarian and Slovak colorless samples were properly classified in both the calibration and prediction sets. Correct classification of 100% was also obtained for Czech and Hungarian colored samples. However, one group of Slovak colored samples was classified as belonging to the Hungarian group in the calibration set. Thus, the total correct classifications obtained were 94% and 100% for the calibration and prediction steps, respectively. The results were compared with those obtained using near-infrared (NIR) spectroscopy. Applying PCA-LDA to NIR spectra (5500-6000 cm(-1)), the total correct classifications were 91% and 92% for the calibration and prediction steps, respectively, which were slightly lower than those obtained using SFS. Copyright © 2015 Elsevier Ltd. All rights reserved.
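The PCA-LDA combination used here is a standard two-stage classifier: PCA compresses the spectra into a few scores, and LDA discriminates the origins in that score space. A minimal sketch follows; the three-class "spectra" are synthetic stand-ins for the Czech, Hungarian and Slovak samples, not the paper's data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n_per_class, n_points = 15, 80

# Three "origins" with scaled versions of one fluorescence-like profile.
base = np.sin(np.linspace(0, 3 * np.pi, n_points))
X = np.vstack([base * s + 0.05 * rng.standard_normal((n_per_class, n_points))
               for s in (1.0, 1.2, 1.4)])
y = np.repeat([0, 1, 2], n_per_class)

# PCA scores feed the LDA classifier, as in the PCA-LDA approach above.
model = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
model.fit(X, y)
accuracy = float(model.score(X, y))   # calibration-set classification rate
```

In practice the prediction-set rate (here the abstract's 100%) is the meaningful figure; the calibration-set score above corresponds to the 94% quoted for the calibration step.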
NASA Astrophysics Data System (ADS)
Cao, X.; Tian, F.; Telford, R.; Ni, J.; Xu, Q.; Chen, F.; Liu, X.; Stebich, M.; Zhao, Y.; Herzschuh, U.
2017-12-01
Pollen-based quantitative reconstruction of past climate variables is a standard palaeoclimatic approach. Despite knowing that the spatial extent of the calibration-set affects the reconstruction result, guidance is lacking as to how to determine a suitable spatial extent of the pollen-climate calibration-set. In this study, past mean annual precipitation (Pann) during the Holocene (since 11.5 cal ka BP) is reconstructed repeatedly for pollen records from Qinghai Lake (36.7°N, 100.5°E; north-east Tibetan Plateau), Gonghai Lake (38.9°N, 112.2°E; north China) and Sihailongwan Lake (42.3°N, 126.6°E; north-east China) using calibration-sets of varying spatial extents extracted from the modern pollen dataset of China and Mongolia (2559 sampling sites and 168 pollen taxa in total). Results indicate that the spatial extent of the calibration-set has a strong impact on model performance, analogue quality and reconstruction diagnostics (absolute value, range, trend, optimum). Generally, these effects are stronger with the modern analogue technique (MAT) than with weighted averaging partial least squares (WA-PLS). With respect to fossil spectra from northern China, the spatial extent of calibration-sets should be restricted to ca. 1000 km in radius, because small-scale calibration-sets (<800 km radius) will likely fail to include enough spatial variation in the modern pollen assemblages to reflect the temporal range shifts during the Holocene, while too broad a calibration-set (>1500 km radius) will include taxa with very different pollen-climate relationships.
Based on our results we conclude that the optimal calibration-set should 1) cover a reasonably large spatial extent with an even distribution of modern pollen samples; 2) possess good model performance as indicated by cross-validation, high analogue quality, and excellent fit with the target fossil pollen spectra; 3) possess high taxonomic resolution; and 4) respect the modern and past distribution ranges of taxa inferred from palaeo-genetic and macrofossil studies.
Detection of Tetracycline in Milk using NIR Spectroscopy and Partial Least Squares
NASA Astrophysics Data System (ADS)
Wu, Nan; Xu, Chenshan; Yang, Renjie; Ji, Xinning; Liu, Xinyuan; Yang, Fan; Zeng, Ming
2018-02-01
The feasibility of measuring tetracycline in milk was investigated by near infrared (NIR) spectroscopy combined with the partial least squares (PLS) method. The NIR transmittance spectra of 40 pure milk samples and 40 tetracycline-adulterated milk samples with different concentrations (from 0.005 to 40 mg/L) were obtained. The pure milk and tetracycline-adulterated milk samples were properly assigned to their categories with 100% accuracy in the calibration set, and a correct classification rate of 96.3% was obtained in the prediction set. For the quantitation of tetracycline in adulterated milk, the root mean square errors for the calibration and prediction models were 0.61 mg/L and 4.22 mg/L, respectively. The PLS model fitted the calibration set well; however, its predictive ability was limited, especially for low tetracycline concentration samples. Overall, this approach can be considered a promising tool for discrimination of tetracycline-adulterated milk, as a supplement to high performance liquid chromatography.
7 CFR 28.956 - Prescribed fees.
Code of Federal Regulations, 2014 CFR
2014-01-01
.... sample 42.00. 3.0 Furnishing standard color tiles for calibrating cotton colorimeters, per set of five tiles ... outside continental United States 165.00. 3.1 Furnishing single color calibration tiles for use with specific instruments or as replacements in above sets, each tile: a. f.o.b. Memphis, Tennessee 22.00. b...
7 CFR 28.956 - Prescribed fees.
Code of Federal Regulations, 2012 CFR
2012-01-01
.... sample 42.00. 3.0 Furnishing standard color tiles for calibrating cotton colorimeters, per set of five tiles ... outside continental United States 165.00. 3.1 Furnishing single color calibration tiles for use with specific instruments or as replacements in above sets, each tile: a. f.o.b. Memphis, Tennessee 22.00. b...
Method for in-situ calibration of electrophoretic analysis systems
Liu, Changsheng; Zhao, Hequan
2005-05-08
An electrophoretic system having a plurality of separation lanes is provided with an automatic calibration feature in which each lane is separately calibrated. For each lane, the calibration coefficients map a spectrum of received channel intensities onto values reflective of the relative likelihood of each of a plurality of dyes being present. Individual peaks, reflective of the influence of a single dye, are isolated from among the various sets of detected light intensity spectra, and these can be used to both detect the number of dye components present, and also to establish exemplary vectors for the calibration coefficients which may then be clustered and further processed to arrive at a calibration matrix for the system. The system of the present invention thus permits one to use different dye sets to tag DNA nucleotides in samples which migrate in separate lanes, and also allows for in-situ calibration with new, previously unused dye sets.
NASA Astrophysics Data System (ADS)
Suhandy, D.; Yulia, M.; Ogawa, Y.; Kondo, N.
2018-05-01
In the present research, an evaluation of using near infrared (NIR) spectroscopy in tandem with full spectrum partial least squares (FS-PLS) regression for quantification of the degree of adulteration in civet coffee was conducted. A total of 126 ground roasted coffee samples with degrees of adulteration of 0-51% were prepared. Spectral data were acquired using a NIR spectrometer equipped with an integrating sphere for diffuse reflectance measurement in the range of 1300-2500 nm. The samples were divided into two groups: a calibration sample set (84 samples) and a prediction sample set (42 samples). The calibration model was developed on the original spectra using FS-PLS regression with the full cross-validation method. The calibration model exhibited determination coefficients of R2 = 0.96 for calibration and R2 = 0.92 for validation. The prediction resulted in a low root mean square error of prediction (RMSEP) (4.67%) and a high ratio of prediction to deviation (RPD) (3.75). In conclusion, the degree of adulteration in civet coffee has been quantified successfully by using NIR spectroscopy and FS-PLS regression in a non-destructive, economical, precise, and highly sensitive method, which uses very simple sample preparation.
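The two figures of merit quoted here, RMSEP and RPD, are simple to compute once reference and predicted values are in hand. A short sketch follows, taking RPD as the standard deviation of the reference values divided by RMSEP; the numbers below are illustrative, not the coffee data.

```python
import numpy as np

def rmsep(y_ref, y_pred):
    """Root mean square error of prediction."""
    y_ref, y_pred = np.asarray(y_ref, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_pred - y_ref) ** 2)))

def rpd(y_ref, y_pred):
    """Ratio of prediction to deviation: SD of reference values / RMSEP."""
    return float(np.std(np.asarray(y_ref, float), ddof=1) / rmsep(y_ref, y_pred))

# Toy prediction-set example (adulteration percentages, made up).
y_ref = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
y_pred = y_ref + np.array([1.0, -2.0, 0.5, -0.5, 2.0, -1.0])
```

An RPD above roughly 3, as reported in the abstract, is conventionally read as a model suitable for quantitative prediction, since the prediction error is small relative to the natural spread of the reference values.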
Nuopponen, Mari H; Birch, Gillian M; Sykes, Rob J; Lee, Steve J; Stewart, Derek
2006-01-11
Sitka spruce (Picea sitchensis) samples (491) from 50 different clones as well as 24 different tropical hardwoods and 20 Scots pine (Pinus sylvestris) samples were used to construct diffuse reflectance mid-infrared Fourier transform (DRIFT-MIR) based partial least squares (PLS) calibrations on lignin, cellulose, and wood resin contents and densities. Calibrations for density, lignin, and cellulose were established for all wood species combined into one data set as well as for the separate Sitka spruce data set. Relationships between wood resin and MIR data were constructed for the Sitka spruce data set as well as the combined Scots pine and Sitka spruce data sets. Calibrations containing only five wavenumbers instead of the spectral ranges 4000-2800 and 1800-700 cm(-1) were also established. In addition, chemical factors contributing to wood density were studied. Chemical composition and density assessed from DRIFT-MIR calibrations had R2 and Q2 values in the ranges of 0.6-0.9 and 0.6-0.8, respectively. The PLS models gave root mean square error of prediction (RMSEP) values of 1.6-1.9, 2.8-3.7, and 0.4 for lignin, cellulose, and wood resin contents, respectively. Density test sets had RMSEP values ranging from 50 to 56. A reduced number of wavenumbers can thus be used to predict the chemical composition and density of wood, which should allow measurement of these properties using a hand-held device. MIR spectral data indicated that low-density samples had somewhat higher lignin contents than high-density samples. Correspondingly, high-density samples contained slightly more polysaccharides than low-density samples. This observation was consistent with the wet chemical data.
Measurement of pH in whole blood by near-infrared spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alam, M. Kathleen; Maynard, John D.; Robinson, M. Ries
1999-03-01
Whole blood pH has been determined in vitro by using near-infrared spectroscopy over the wavelength range of 1500 to 1785 nm with multivariate calibration modeling of the spectral data obtained from two different sample sets. In the first sample set, the pH of whole blood was varied without controlling cell size and oxygen saturation (O2 Sat) variation. The result was that the red blood cell (RBC) size and O2 Sat correlated with pH. Although the partial least-squares (PLS) multivariate calibration of these data produced a good pH prediction (cross-validation standard error of prediction (CVSEP) = 0.046, R2 = 0.982), the spectral data were dominated by scattering changes due to changing RBC size that correlated with the pH changes. A second experiment was carried out where the RBC size and O2 Sat were varied orthogonally to the pH variation. A PLS calibration of the spectral data obtained from these samples produced a pH prediction with an R2 of 0.954 and a cross-validated standard error of prediction of 0.064 pH units. The robustness of the PLS calibration models was tested by predicting the data obtained from the other sets. The predicted pH values obtained from both data sets yielded R2 values greater than 0.9 once the data were corrected for differences in hemoglobin concentration. For example, with the use of the calibration produced from the second sample set, the pH values from the first sample set were predicted with an R2 of 0.92 after the predictions were corrected for bias and slope. It is shown that spectral information specific to pH-induced chemical changes in the hemoglobin molecule is contained within the PLS loading vectors developed for both the first and second data sets. It is this pH-specific information that allows the spectra dominated by pH-correlated scattering changes to provide robust pH predictive ability in the uncorrelated data, and vice versa.
© 1999 Society for Applied Spectroscopy
Augmented classical least squares multivariate spectral analysis
Haaland, David M.; Melgaard, David K.
2004-02-03
A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
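A minimal numerical sketch of the ACLS idea follows, under simplifying assumptions (noiseless synthetic spectra, a single unmodeled interferent): the ordinary CLS pure-component estimates are augmented with the leading right singular vector of the spectral residuals, which restores accurate concentration prediction for samples containing the interferent. All data and dimensions are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 60, 100
C = rng.uniform(0, 1, size=(n, 2))           # known analyte concentrations
K_true = rng.random((2, p))                  # true pure-component spectra
interf = rng.random(p)                       # unmodeled interferent spectrum
c_int = rng.uniform(0, 1, size=(n, 1))       # its (unknown) concentrations
A = C @ K_true + c_int @ interf[None, :]     # calibration spectra

# Ordinary CLS estimate of the pure spectra, then the spectral residuals.
K_hat = np.linalg.pinv(C) @ A
E = A - C @ K_hat

# Augment with the leading right singular vector of the residual matrix,
# which here captures the interferent's spectral direction.
_, _, Vt = np.linalg.svd(E, full_matrices=False)
K_aug = np.vstack([K_hat, Vt[:1]])

# Predict concentrations for a new sample containing the interferent;
# only the first two (analyte) coefficients are retained.
c_new = np.array([[0.3, 0.7]])
a_new = c_new @ K_true + 0.5 * interf[None, :]
c_pred = (a_new @ np.linalg.pinv(K_aug))[:, :2]
```

Because the augmented basis spans both the analyte spectra and the interferent direction, the least-squares fit no longer forces the interferent's signal into the analyte coefficients, which is the qualitative behavior the abstract describes.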
Augmented Classical Least Squares Multivariate Spectral Analysis
Haaland, David M.; Melgaard, David K.
2005-07-26
A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
Augmented Classical Least Squares Multivariate Spectral Analysis
Haaland, David M.; Melgaard, David K.
2005-01-11
A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
Esquinas, Pedro L; Tanguay, Jesse; Gonzalez, Marjorie; Vuckovic, Milan; Rodríguez-Rodríguez, Cristina; Häfeli, Urs O; Celler, Anna
2016-12-01
In the nuclear medicine department, the activity of radiopharmaceuticals is measured using dose calibrators (DCs) prior to patient injection. The DC consists of an ionization chamber that measures the current generated by ionizing radiation (emitted from the radiotracer). In order to obtain an activity reading, the current is converted into units of activity by applying an appropriate calibration factor (also referred to as the DC dial setting). Accurate determination of DC dial settings is crucial to ensure that patients receive the appropriate dose in diagnostic scans or radionuclide therapies. The goals of this study were (1) to describe a practical method to experimentally determine dose calibrator settings using a thyroid-probe (TP) and (2) to investigate the accuracy, reproducibility, and uncertainties of the method. As an illustration, the TP method was applied to determine 188Re dial settings for two dose calibrator models: Atomlab 100plus and Capintec CRC-55tR. Using the TP to determine dose calibrator settings involved three measurements. First, the energy-dependent efficiency of the TP was determined from energy spectra measurements of two calibration sources (152Eu and 22Na). Second, the gamma emissions from the investigated isotope (188Re) were measured using the TP and its activity was determined using γ-ray spectroscopy methods. Ambient background, scatter, and source-geometry corrections were applied during the efficiency and activity determination steps. Third, the TP-based 188Re activity was used to determine the dose calibrator settings following the calibration curve method [B. E. Zimmerman et al., J. Nucl. Med. 40, 1508-1516 (1999)]. The interobserver reproducibility of TP measurements was determined by the coefficient of variation (COV), and the uncertainties associated with each step of the measuring process were estimated.
The accuracy of activity measurements using the proposed method was evaluated by comparing the TP activity estimates of 99mTc, 188Re, 131I, and 57Co samples to high purity Ge (HPGe) γ-ray spectroscopy measurements. The experimental 188Re dial settings determined with the TP were 76.5 ± 4.8 and 646 ± 43 for the Atomlab 100plus and Capintec CRC-55tR, respectively. In the case of the Atomlab 100plus, the TP-based dial settings improved the accuracy of 188Re activity measurements (confirmed by HPGe measurements) as compared to manufacturer-recommended settings. For the Capintec CRC-55tR, the TP-based settings were in agreement with previous results [B. E. Zimmerman et al., J. Nucl. Med. 40, 1508-1516 (1999)], which demonstrated that manufacturer-recommended settings overestimate 188Re activity by more than 20%. The largest source of uncertainty in the experimentally determined dial settings was due to the application of a geometry correction factor, followed by the uncertainty of the scatter-corrected photopeak counts and the uncertainty of the TP efficiency calibration experiment. When using the most intense photopeak of the sample's emissions, the TP method yielded accurate (within 5% error) and reproducible (COV = 2%) measurements of the sample's activity. The relative uncertainties associated with such measurements ranged from 6% to 8% (expanded uncertainty at 95% confidence interval, k = 2). Accurate determination/verification of dose calibrator dial settings can be performed using a thyroid-probe in the nuclear medicine department.
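The core γ-ray spectroscopy step above, converting background- and scatter-corrected photopeak counts into activity through the detector efficiency and the emission probability of the photopeak, reduces to a one-line formula. The counts, live time, efficiency, and emission probability below are made-up illustrative numbers, not values from the study.

```python
def activity_bq(net_counts, live_time_s, efficiency, emission_probability):
    """Activity (Bq) from corrected photopeak counts:
    A = N / (t_live * eff * p_gamma)."""
    return net_counts / (live_time_s * efficiency * emission_probability)

# Hypothetical measurement of a 188Re-like photopeak; all inputs assumed.
a = activity_bq(net_counts=1.2e5, live_time_s=600.0,
                efficiency=0.004, emission_probability=0.155)
```

The geometry correction the abstract identifies as the dominant uncertainty would enter as an additional multiplicative factor on the efficiency.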
Simultaneous determination of specific alpha and beta emitters by LSC-PLS in water samples.
Fons-Castells, J; Tent-Petrus, J; Llauradó, M
2017-01-01
Liquid scintillation counting (LSC) is a commonly used technique for the determination of alpha and beta emitters. However, LSC has poor resolution, and the continuous spectra of beta emitters hinder the simultaneous determination of several alpha and beta emitters from the same spectrum. In this paper, the feasibility of multivariate calibration by partial least squares (PLS) models for the determination of several alpha (natU, 241Am and 226Ra) and beta emitters (40K, 60Co, 90Sr/90Y, 134Cs and 137Cs) in water samples is reported. A set of alpha and beta spectra from radionuclide calibration standards was used to construct three PLS models. Experimentally mixed radionuclides and intercomparison materials were used to validate the models. The results had a maximum relative bias of 25% when all the radionuclides in the sample were included in the calibration set; otherwise the relative bias was over 100% for some radionuclides. The results obtained show that LSC-PLS is a useful approach for the simultaneous determination of alpha and beta emitters in multi-radionuclide samples. However, to obtain useful results, it is important to include all the radionuclides expected in the studied scenario in the calibration set. Copyright © 2016 Elsevier Ltd. All rights reserved.
Kramer, Kirsten E; Small, Gary W
2009-02-01
Fourier transform near-infrared (NIR) transmission spectra are used for quantitative analysis of glucose for 17 sets of prediction data sampled as much as six months outside the timeframe of the corresponding calibration data. Aqueous samples containing physiological levels of glucose in a matrix of bovine serum albumin and triacetin are used to simulate clinical samples such as blood plasma. Background spectra of a single analyte-free matrix sample acquired during the instrumental warm-up period on the prediction day are used for calibration updating and for determining the optimal frequency response of a preprocessing infinite impulse response time-domain digital filter. By tuning the filter and the calibration model to the specific instrumental response associated with the prediction day, the calibration model is given enhanced ability to operate over time. This methodology is demonstrated in conjunction with partial least squares calibration models built with a spectral range of 4700-4300 cm(-1). By using a subset of the background spectra to evaluate the prediction performance of the updated model, projections can be made regarding the success of subsequent glucose predictions. If a threshold standard error of prediction (SEP) of 1.5 mM is used to establish successful model performance with the glucose samples, the corresponding threshold for the SEP of the background spectra is found to be 1.3 mM. For calibration updating in conjunction with digital filtering, SEP values of all 17 prediction sets collected over 3-178 days displaced from the calibration data are below 1.5 mM. In addition, the diagnostic based on the background spectra correctly assesses the prediction performance in 16 of the 17 cases.
Airado-Rodríguez, Diego; Høy, Martin; Skaret, Josefine; Wold, Jens Petter
2014-05-01
The potential of multispectral imaging of autofluorescence to map sensory flavour properties and fluorophore concentrations in cod caviar paste has been investigated. Cod caviar paste was used as a case product and it was stored over time, under different headspace gas composition and light exposure conditions, to obtain a relevant span in lipid oxidation and sensory properties. Samples were divided into two sets, a calibration set and a test set, with 16 and 7 samples, respectively. A third set of samples was prepared with induced gradients in lipid oxidation and sensory properties by light exposure of certain parts of the sample surface. Front-face fluorescence emission images were obtained for an excitation wavelength of 382 nm at 11 different channels ranging from 400 to 700 nm. The analysis of the obtained sets of images was divided into two parts: First, in an effort to compress and extract relevant information, multivariate curve resolution was applied on the calibration set and three spectral components and their relative concentrations in each sample were obtained. The obtained profiles were employed to estimate the concentrations of each component in the images of the heterogeneous samples, giving chemical images of the distribution of fluorescent oxidation products, protoporphyrin IX and photoprotoporphyrin. Second, regression models for sensory attributes related to lipid oxidation were constructed based on the spectra of homogeneous samples from the calibration set. These models were successfully validated with the test set. The models were then applied for pixel-wise estimation of sensory flavours in the heterogeneous images, giving rise to sensory images. As far as we know this is the first time that sensory images of odour and flavour have been obtained based on multispectral imaging. Copyright © 2014 Elsevier B.V. All rights reserved.
Soldado, A; Fearn, T; Martínez-Fernández, A; de la Roza-Delgado, B
2013-02-15
As a first step in a project whose aim is to implement near infrared (NIR) analysis of animal feed on the farm, the present work has examined the possibility of transferring undried grass silage calibrations for dry matter, crude protein, and neutral detergent fiber from a dispersive laboratory NIR instrument (Foss NIRSystem 6500) to a diode array on-site NIR instrument (Zeiss Corona 45 visNIR 1.7). Because the samples are complex and heterogeneous and have high humidity levels, it is not easy to establish good calibrations, and it is even more of a challenge to transfer them. By cutting the spectral range to 1100-1650 nm and treating with a first or second derivative followed by standard normal variate (SNV) scatter correction, it was possible to obtain very similar spectra from the two instruments. To make the transfer, two approaches were tried. Simply correcting the Corona spectra by subtracting the mean difference spectrum from a transfer set met with only limited success. Making a calibration on the Foss using a calibration set of 503 samples with spectra orthogonalized to all the difference spectra in the transfer set of 10 samples resulted in a successful transfer for all three calibrations, as judged by performance on two prediction sets of size 22 and 29. Measuring 5 replicate subsamples with the Corona allows it to see a similar surface area to that of 3 replicates in the Foss transport cell, and it is suggested that this is an appropriate level of replication for the Corona. Copyright © 2012 Elsevier B.V. All rights reserved.
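The orthogonalization step described above can be sketched as projecting the calibration spectra onto the orthogonal complement of the subspace spanned by the transfer-set difference spectra, so the model is built blind to between-instrument variation. Everything below is synthetic and illustrative rather than the Foss/Corona data.

```python
import numpy as np

def orthogonalize(X, D):
    """Project the rows of X onto the orthogonal complement of the rows of D."""
    Q, _ = np.linalg.qr(D.T)            # columns of Q span D's row space
    return X - (X @ Q) @ Q.T

rng = np.random.default_rng(3)
n_cal, n_transfer, p = 50, 10, 120

X_master = rng.random((n_cal, p))       # master-instrument calibration spectra
shift = rng.random(p)                   # instrument-difference direction
D = np.tile(shift, (n_transfer, 1)) + 0.01 * rng.standard_normal((n_transfer, p))

X_orth = orthogonalize(X_master, D)

# Residual component of the spectra along the difference direction,
# before and after orthogonalization.
u = shift / np.linalg.norm(shift)
orig = float(np.max(np.abs(X_master @ u)))
leak = float(np.max(np.abs(X_orth @ u)))
```

After projection the calibration spectra carry almost no component along the difference direction (`leak` is orders of magnitude below `orig`), so a model built on `X_orth` is largely insensitive to that instrument shift.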
Metallicity calibrations for dwarf stars and giants in the Geneva photometric system
NASA Astrophysics Data System (ADS)
Netopil, Martin
2017-08-01
We use the most homogeneous Geneva seven-colour photometric system to derive new metallicity calibrations for early A- to K-type stars that cover both dwarf stars and giants. The calibrations are based on several spectroscopic data sets that were merged to a common scale, and we applied them to open cluster data to obtain additional proof of the metallicity scale and accuracy. In total, metallicities of 54 open clusters are presented. The accuracy of the calibrations for single stars is in general below 0.1 dex, but for the open cluster sample, with mean values based on several stars, we find a much better precision, a scatter as low as about 0.03 dex. Furthermore, we combine the new results with another comprehensive photometric data set to present a catalogue of mean metallicities for more than 3000 F- and G-type dwarf stars with σ ˜ 0.06 dex. The list was extended by more than 1200 hotter stars up to about 8500 K (or spectral type A3) by taking advantage of their almost reddening-free characteristic in the new Geneva metallicity calibrations. These two large samples are well suited as primary or secondary calibrators of other data, and we already identified about 20 spectroscopic data sets that show offsets up to about 0.4 dex.
Domain-Invariant Partial-Least-Squares Regression.
Nikzad-Langerodi, Ramin; Zellinger, Werner; Lughofer, Edwin; Saminger-Platz, Susanne
2018-05-11
Multivariate calibration models often fail to extrapolate beyond the calibration samples because of changes associated with the instrumental response, environmental condition, or sample matrix. Most of the current methods used to adapt a source calibration model to a target domain exclusively apply to calibration transfer between similar analytical devices, while generic methods for calibration-model adaptation are largely missing. To fill this gap, we here introduce domain-invariant partial-least-squares (di-PLS) regression, which extends ordinary PLS by a domain regularizer in order to align the source and target distributions in the latent-variable space. We show that a domain-invariant weight vector can be derived in closed form, which allows the integration of (partially) labeled data from the source and target domains as well as entirely unlabeled data from the latter. We test our approach on a simulated data set where the aim is to desensitize a source calibration model to an unknown interfering agent in the target domain (i.e., unsupervised model adaptation). In addition, we demonstrate unsupervised, semisupervised, and supervised model adaptation by di-PLS on two real-world near-infrared (NIR) spectroscopic data sets.
2016 NIST (133Xe) and Transfer (131mXe, 133mXe, 135Xe) Calibration Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, Troy A.
A significantly improved calibration of the High Purity Germanium detectors used by the Idaho National Laboratory Noble Gas Laboratory was performed during the annual NIST calibration. New sample spacers provide reproducible and secure support of samples at distances of 4, 12, 24, 50 and 100 cm. Bean, 15 mL and 50 mL Schlenk tube geometries were calibrated. Also included in this year's calibration was a correlation of detector dead-time with sample activity that can be used to predict the schedule of counting the samples at each distance for each geometry. This schedule prediction will help staff members set calendar reminders so that collection of calibration data at each geometry will not be missed. This report also correlates the counting efficiencies between detectors, so that if the counting efficiency on one detector is not known, it can be estimated from the same geometry on another detector.
ASTM clustering for improving coal analysis by near-infrared spectroscopy.
Andrés, J M; Bona, M T
2006-11-15
Multivariate analysis techniques have been applied to near-infrared (NIR) spectra of coals to investigate the relationship between nine coal properties (moisture (%), ash (%), volatile matter (%), fixed carbon (%), heating value (kcal/kg), carbon (%), hydrogen (%), nitrogen (%) and sulphur (%)) and the corresponding predictor variables. In this work, a whole set of coal samples was grouped into six more homogeneous clusters following the ASTM reference method for classification, prior to applying calibration methods to each coal set. The results showed a considerable improvement in determination error compared with the calibration for the whole sample set. For some groups, the established calibrations approached the quality required by the ASTM/ISO norms for laboratory analysis. To predict property values for a new coal sample, it is necessary to assign that sample to its respective group. Thus, the ability to discriminate and classify coal samples by Diffuse Reflectance Infrared Fourier Transform Spectroscopy (DRIFTS) in the NIR range was also studied by applying Soft Independent Modelling of Class Analogy (SIMCA) and Linear Discriminant Analysis (LDA) techniques. Modelling of the groups by SIMCA led to overlapping models that cannot discriminate for unique classification. On the other hand, Linear Discriminant Analysis improved the classification of the samples, but not enough to be satisfactory for every group considered.
GEMAS: Colours of dry and moist agricultural soil samples of Europe
NASA Astrophysics Data System (ADS)
Klug, Martin; Fabian, Karl; Reimann, Clemens
2016-04-01
High resolution HDR colour images of all Ap samples from the GEMAS survey were acquired using a GeoTek Linescan camera. Three measurements of dry and wet samples with increasing exposure time and increasing illumination settings produced a set of colour images at 50 μm resolution. Automated image processing was used to calibrate the six images per sample with respect to the synchronously measured X-Rite ColorChecker chart. The calibrated images were then fitted to Munsell soil colours that were measured in the same way. The results provide overview maps of dry and moist European soil colours. Because colour is closely linked to iron mineralogy, carbonate, silicate and organic carbon content, the results can be correlated to magnetic, mineralogical, and geochemical properties. In combination with the full GEMAS chemical and physical measurements, this yields a valuable data set for calibration and interpretation of visible satellite colour data with respect to chemical composition and geological background, soil moisture, and soil degradation. This data set will help to develop new methods for world-wide characterization and monitoring of agricultural soils, which is essential for quantifying geologic and human impact on the critical zone environment. It furthermore enables the scientific community and governmental authorities to monitor consequences of climatic change, to plan and administrate economic and ecological land use, and to use the data set for forensic applications.
2017-09-01
[List-of-figures excerpt: ADCP locations used for model calibration; sample water figures; examples of fine and coarse sediment samples (Set d, samples B30 and B05); Turning Basin average sediment size distribution curve.]
NASA Astrophysics Data System (ADS)
Li, Zhe; Feng, Jinchao; Liu, Pengyu; Sun, Zhonghua; Li, Gang; Jia, Kebin
2018-05-01
Temperature is usually treated as a nuisance fluctuation in near-infrared spectral measurement, and chemometric methods have been extensively studied to correct for temperature variations. However, temperature can also be treated as a constructive parameter that provides detailed chemical information when systematically varied during the measurement. Our group has studied the relationship between temperature-induced spectral variation (TSVC) and normalized squared temperature. In this study, we focused on the influence of the temperature distribution in the calibration set. A multi-temperature calibration set selection (MTCS) method was proposed to improve prediction accuracy by considering the temperature distribution of the calibration samples. Furthermore, a double-temperature calibration set selection (DTCS) method was proposed based on MTCS and the relationship between TSVC and normalized squared temperature. We compared the prediction performance of PLS models based on random sampling and on the proposed methods. Experimental results showed that the prediction performance was improved by the proposed methods. MTCS and DTCS are therefore promising alternatives for improving prediction accuracy in near-infrared spectral measurement.
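The spirit of a multi-temperature calibration-set selection can be sketched in a few lines: instead of drawing calibration samples at random, draw an equal number from each temperature bin so the calibration set covers the full temperature range. This is a toy illustration; the paper's MTCS criterion may differ in detail, and the bin count and sampling scheme here are assumptions.

```python
import numpy as np

def select_calibration_set(temps, n_cal, n_bins=5, seed=0):
    """Toy multi-temperature calibration-set selection: sample an equal
    number of spectra from each temperature bin (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    temps = np.asarray(temps)
    edges = np.linspace(temps.min(), temps.max(), n_bins + 1)
    # Assign each sample to a bin 0..n_bins-1 (the top edge folds into the last bin).
    bins = np.clip(np.digitize(temps, edges[1:-1]), 0, n_bins - 1)
    chosen = []
    per_bin = n_cal // n_bins
    for b in range(n_bins):
        idx = np.flatnonzero(bins == b)
        take = min(per_bin, len(idx))
        chosen.extend(rng.choice(idx, size=take, replace=False))
    return np.sort(np.array(chosen))

# Hypothetical measurement campaign: 40 spectra at each of five temperatures.
temps = np.repeat([20.0, 30.0, 40.0, 50.0, 60.0], 40)
cal = select_calibration_set(temps, n_cal=50, n_bins=5)
```

A random draw of 50 samples could easily under-represent one temperature; the stratified draw guarantees 10 samples per temperature level here.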
Quantitative LIBS analysis of vanadium in samples of hexagonal mesoporous silica catalysts.
Pouzar, Miloslav; Kratochvíl, Tomás; Capek, Libor; Smoláková, Lucie; Cernohorský, Tomás; Krejcová, Anna; Hromádko, Ludek
2011-02-15
A method for the analysis of vanadium in hexagonal mesoporous silica (V-HMS) catalysts using Laser Induced Breakdown Spectrometry (LIBS) is proposed. A commercially available LIBS spectrometer was calibrated with authentic V-HMS samples previously analyzed by ICP OES after microwave digestion. Deposition of the sample on the surface of adhesive tape was adopted as the sample preparation method. A strong matrix effect connected with the catalyst preparation technique was observed (in the first technique, vanadium is added during HMS synthesis; in the second, an already synthesized silica matrix is impregnated with vanadium). The concentration range of V in the set of nine calibration standards was 1.3-4.5% (w/w). The limit of detection was 0.13% (w/w), calculated as three times the standard deviation of five replicate determinations of vanadium in a real sample with a very low vanadium concentration. Comparable results for LIBS and ED XRF were obtained when the same set of standards was used to calibrate both methods and vanadium was measured in the same type of real samples. A LIBS calibration constructed using V-HMS impregnated samples failed when measuring V-HMS synthesized samples; the LIBS measurements appear to be strongly influenced by the different chemical forms of vanadium in impregnated and synthesized samples. The combination of LIBS and ED XRF can provide new information about the measured samples (in our case, for example, about the catalyst preparation procedure). Copyright © 2010 Elsevier B.V. All rights reserved.
Murillo Pulgarín, J A; Alañón Molina, A; Boras, N
2013-03-20
A new method for the simultaneous determination of danofloxacin and flumequine in milk samples was developed using the nonlinear variable-angle synchronous fluorescence technique to acquire data and a partial least-squares chemometric algorithm to process them. A calibration set of standard samples was designed by combining a two-level factorial design with a central star design. Whey was used as the third component of the calibration matrix. To assess the goodness of the proposed method, a prediction set of 11 synthetic samples was analyzed, yielding recoveries between 96.1% and 104.0%. Limits of detection, calculated by means of a new criterion, were 0.90 and 12.4 ng mL(-1) for danofloxacin and flumequine, respectively. Finally, the simultaneous determination of both fluoroquinolones in milk samples containing the analytes was successfully carried out, with average recoveries of 99.3 ± 4.4% for danofloxacin and 100.7 ± 4.4% for flumequine.
Fang, Cheng; Butler, David Lee
2013-05-01
In this paper, an innovative method for CMM (Coordinate Measuring Machine) self-calibration is proposed. In contrast to conventional CMM calibration, which relies heavily on a high-precision reference standard such as a laser interferometer, the proposed method is based on a low-cost artefact fabricated from commercially available precision ball bearings. By optimizing the mathematical model and rearranging the data sampling positions, the experimental process and data analysis can be simplified; in mathematical terms, the number of samples can be minimized by eliminating redundant equations from those configured by the experimental data array. The section lengths of the artefact are measured at arranged positions, from which an equation set can be configured to determine the measurement errors at the corresponding positions. With the proposed method, the equation set is short of one equation, which can be supplemented either by measuring the total length of the artefact with a higher-precision CMM or by calibrating the single-point error at the extreme position with a laser interferometer; in this paper, the latter is selected. With spline interpolation, the error compensation curve can be determined. To verify the proposed method, a simple calibration system was set up on a commercial CMM. Experimental results showed that with the error compensation curve, the uncertainty of the measurement can be reduced by 50%.
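The last step, turning a handful of position-wise error values into a compensation curve, is easy to sketch. The positions and error values below are made up for illustration, and linear interpolation stands in for the spline interpolation the paper uses.

```python
import numpy as np

# Sketch: error-compensation curve from errors determined at discrete
# artefact positions (values are hypothetical, for illustration only).
positions = np.array([0.0, 100.0, 200.0, 300.0, 400.0])  # mm along the axis
errors = np.array([0.0, 1.2, 2.1, 2.6, 3.0])             # µm, from the equation set

def compensate(raw_position_mm, raw_reading_mm):
    """Correct a CMM reading by interpolating the error map at the probed
    position (the paper uses splines; linear keeps the sketch short)."""
    err_um = np.interp(raw_position_mm, positions, errors)
    return raw_reading_mm - err_um * 1e-3  # subtract the systematic error (µm -> mm)

corrected = compensate(150.0, 150.0021)
```

At 150 mm the interpolated error is 1.65 µm, so the corrected reading is the raw reading minus 0.00165 mm.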
Uncertainty quantification for constitutive model calibration of brain tissue.
Brewick, Patrick T; Teferra, Kirubel
2018-05-31
The results of a study comparing model calibration techniques for Ogden's constitutive model, which describes the hyperelastic behavior of brain tissue, are presented. One- and two-term Ogden models are fit to two different sets of stress-strain experimental data for brain tissue using both least-squares optimization and Bayesian estimation. For the Bayesian estimation, the joint posterior distribution of the constitutive parameters is calculated by employing Hamiltonian Monte Carlo (HMC) sampling, a type of Markov chain Monte Carlo method. The HMC method is enriched in this work to intrinsically enforce the Drucker stability criterion by formulating a nonlinear parameter constraint function, which ensures the constitutive model produces physically meaningful results. Through application of the nested sampling technique, 95% confidence bounds on the constitutive model parameters are identified, and these bounds are then propagated through the constitutive model to produce the resultant bounds on the stress-strain response. The behavior of the model calibration procedures and the effect of the characteristics of the experimental data are extensively evaluated. It is demonstrated that increasing model complexity (i.e., adding a term to the Ogden model) improves the accuracy of the best-fit parameters while also increasing the uncertainty via the widening of the confidence bounds of the calibrated parameters. Despite some similarity between the two data sets, the resulting distributions are noticeably different, highlighting the sensitivity of the calibration procedures to the characteristics of the data. For example, the amount of uncertainty reported on the experimental data plays an essential role in how data points are weighted during calibration, and this significantly affects how the parameters are calibrated when combining experimental data sets from disparate sources. Published by Elsevier Ltd.
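The constrained Bayesian calibration described above can be sketched with a much simpler sampler. The sketch below assumes the standard one-term incompressible Ogden uniaxial stress form and uses random-walk Metropolis instead of HMC, with the Drucker condition for a one-term Ogden model (mu * alpha > 0) enforced as a hard constraint; the data are synthetic, not the paper's brain-tissue measurements.

```python
import numpy as np

def ogden_stress(lam, mu, alpha):
    """Uniaxial Cauchy stress for an incompressible one-term Ogden model
    (standard textbook form, assumed here for illustration)."""
    return 2.0 * mu / alpha * (lam**alpha - lam**(-alpha / 2.0))

def metropolis_calibrate(lam, sigma_obs, noise_sd, n_iter=20000, seed=1):
    """Random-walk Metropolis sketch of the Bayesian calibration (the paper
    uses Hamiltonian Monte Carlo). Proposals violating the Drucker
    stability constraint mu * alpha > 0 are rejected outright."""
    rng = np.random.default_rng(seed)
    theta = np.array([1.0, 2.0])  # initial (mu, alpha)

    def log_lik(t):
        r = sigma_obs - ogden_stress(lam, *t)
        return -0.5 * np.sum((r / noise_sd) ** 2)

    ll = log_lik(theta)
    samples = []
    for _ in range(n_iter):
        prop = theta + rng.normal(scale=[0.02, 0.05])
        if prop[0] * prop[1] <= 0.0:              # Drucker stability constraint
            continue
        ll_prop = log_lik(prop)
        if np.log(rng.uniform()) < ll_prop - ll:  # Metropolis accept/reject
            theta, ll = prop, ll_prop
        samples.append(theta.copy())
    return np.array(samples)

# Synthetic "experiment": data generated from mu = 0.6, alpha = 3.0 plus noise.
rng = np.random.default_rng(0)
lam = np.linspace(0.8, 1.3, 20)
sigma_obs = ogden_stress(lam, 0.6, 3.0) + rng.normal(scale=0.05, size=lam.size)
samples = metropolis_calibrate(lam, sigma_obs, noise_sd=0.05)
mu_hat, alpha_hat = samples[len(samples) // 2:].mean(axis=0)
```

The spread of the retained `samples` plays the role of the posterior distribution from which the paper's 95% bounds on parameters, and hence on the stress-strain response, would be read off.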
40 CFR 85.2233 - Steady state test equipment calibrations, adjustments, and quality control-EPA 91.
Code of Federal Regulations, 2013 CFR
2013-07-01
... tolerance range. The pressure in the sample cell must be the same with the calibration gas flowing during... this chapter. The check is done at 30 mph (48 kph), and a power absorption load setting to generate a... in § 85.2225(c)(1) are not met. (2) Leak checks. Each time the sample line integrity is broken, a...
40 CFR 85.2233 - Steady state test equipment calibrations, adjustments, and quality control-EPA 91.
Code of Federal Regulations, 2011 CFR
2011-07-01
... tolerance range. The pressure in the sample cell must be the same with the calibration gas flowing during... this chapter. The check is done at 30 mph (48 kph), and a power absorption load setting to generate a... in § 85.2225(c)(1) are not met. (2) Leak checks. Each time the sample line integrity is broken, a...
40 CFR 85.2233 - Steady state test equipment calibrations, adjustments, and quality control-EPA 91.
Code of Federal Regulations, 2012 CFR
2012-07-01
... tolerance range. The pressure in the sample cell must be the same with the calibration gas flowing during... this chapter. The check is done at 30 mph (48 kph), and a power absorption load setting to generate a... in § 85.2225(c)(1) are not met. (2) Leak checks. Each time the sample line integrity is broken, a...
Lanvers-Kaminsky, Claudia; Rüffer, Andrea; Würthwein, Gudrun; Gerss, Joachim; Zucchetti, Massimo; Ballerini, Andrea; Attarbaschi, Andishe; Smisek, Petr; Nath, Christa; Lee, Samiuela; Elitzur, Sara; Zimmermann, Martin; Möricke, Anja; Schrappe, Martin; Rizzari, Carmelo; Boos, Joachim
2018-02-01
In the international AIEOP-BFM ALL 2009 trial, asparaginase (ASE) activity was monitored after each dose of pegylated Escherichia coli ASE (PEG-ASE). Two methods were used: the aspartic acid β-hydroxamate (AHA) test and medac asparaginase activity test (MAAT). As the latter method overestimates PEG-ASE activity because it calibrates using E. coli ASE, method comparison was performed using samples from the AIEOP-BFM ALL 2009 trial. PEG-ASE activities were determined using MAAT and AHA test in 2 sets of samples (first set: 630 samples and second set: 91 samples). Bland-Altman analysis was performed on ratios between MAAT and AHA tests. The mean difference between both methods, limits of agreement, and 95% confidence intervals were calculated and compared for all samples and samples grouped according to the calibration ranges of the MAAT and the AHA test. PEG-ASE activity determined using the MAAT was significantly higher than when determined using the AHA test (P < 0.001; Wilcoxon signed-rank test). Within the calibration range of the MAAT (30-600 U/L), PEG-ASE activities determined using the MAAT were on average 23% higher than PEG-ASE activities determined using the AHA test. This complies with the mean difference reported in the MAAT manual. With PEG-ASE activities >600 U/L, the discrepancies between MAAT and AHA test increased. Above the calibration range of the MAAT (>600 U/L) and the AHA test (>1000 U/L), a mean difference of 42% was determined. Because more than 70% of samples had PEG-ASE activities >600 U/L and required additional sample dilution, an overall mean difference of 37% was calculated for all samples (37% for the first and 34% for the second set). Comparison of the MAAT and AHA test for PEG-ASE activity confirmed a mean difference of 23% between MAAT and AHA test for PEG-ASE activities between 30 and 600 U/L. 
The discrepancy increased in samples with >600 U/L PEG-ASE activity, which will be especially relevant when evaluating high PEG-ASE activities in relation to toxicity, efficacy, and population pharmacokinetics.
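The ratio-based Bland-Altman comparison used above reduces to a mean ratio and limits of agreement. The paired activity values below are hypothetical (roughly mimicking one method reading ~20% above the other), not data from the trial.

```python
import numpy as np

def bland_altman_ratio(a, b):
    """Bland-Altman analysis on the ratios a/b between two methods
    (log-scale ratios are also common; plain ratios mirror the comparison
    described above). Returns the mean ratio and 95% limits of agreement."""
    r = np.asarray(a, float) / np.asarray(b, float)
    mean, sd = r.mean(), r.std(ddof=1)
    return mean, (mean - 1.96 * sd, mean + 1.96 * sd)

# Hypothetical paired activities (U/L): method A reading ~20% above method B.
b = np.array([120.0, 250.0, 400.0, 550.0, 600.0])
a = b * np.array([1.18, 1.22, 1.25, 1.20, 1.21])
mean_ratio, (lo, hi) = bland_altman_ratio(a, b)
```

A mean ratio of ~1.21 with narrow limits of agreement would correspond to the kind of systematic ~20% offset between assays reported in the abstract.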
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban
2018-05-01
We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given "training" set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of a FDEM computer simulation for a range of "training" settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t,max and, to a lesser extent, the maximum tensile strength σ_n,max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear) E_t largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.
Spinning angle optical calibration apparatus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, S.K.; Pratt, H.R.
1991-02-26
This patent describes an optical calibration apparatus for calibrating and reproducing spinning angles in cross-polarization nuclear magnetic resonance spectroscopy. An illuminated magnifying apparatus enables optical setting and accurate reproduction of magic spinning angles in cross-polarization nuclear magnetic resonance experiments. A reference mark scribed on an edge of a spinning-angle test sample holder is illuminated by a light source and viewed through a magnifying scope. When the magic angle of a sample material used as a standard is attained by varying the angular position of the sample holder, the coordinate position of the reference mark relative to a graduation or graduations on a reticle in the magnifying scope is noted.
Bocquet, S.; Saro, A.; Mohr, J. J.; ...
2015-01-30
Here, we present a velocity-dispersion-based mass calibration of the South Pole Telescope Sunyaev-Zel'dovich effect survey (SPT-SZ) galaxy cluster sample. Using a homogeneously selected sample of 100 cluster candidates from 720 deg² of the survey along with 63 velocity dispersion (σ_v) and 16 X-ray Y_X measurements of sample clusters, we simultaneously calibrate the mass-observable relation and constrain cosmological parameters. Our method accounts for cluster selection, cosmological sensitivity, and uncertainties in the mass calibrators. The calibrations using σ_v and Y_X are consistent at the 0.6σ level, with the σ_v calibration preferring ~16% higher masses. We use the full SPTCL data set (SZ clusters + σ_v + Y_X) to measure σ_8(Ω_m/0.27)^0.3 = 0.809 ± 0.036 within a flat ΛCDM model. The SPT cluster abundance is lower than preferred by either the WMAP9 or Planck+WMAP9 polarization (WP) data, but assuming that the sum of the neutrino masses is Σm_ν = 0.06 eV, we find the data sets to be consistent at the 1.0σ level for WMAP9 and 1.5σ for Planck+WP. Allowing for larger Σm_ν further reconciles the results. When we combine the SPTCL and Planck+WP data sets with information from baryon acoustic oscillations and Type Ia supernovae, the preferred cluster masses are 1.9σ higher than the Y_X calibration and 0.8σ higher than the σ_v calibration. Given the scale of these shifts (~44% and ~23% in mass, respectively), we execute a goodness-of-fit test; it reveals no tension, indicating that the best-fit model provides an adequate description of the data. Using the multi-probe data set, we measure Ω_m = 0.299 ± 0.009 and σ_8 = 0.829 ± 0.011. Within a νCDM model we find Σm_ν = 0.148 ± 0.081 eV. We present a consistency test of the cosmic growth rate using SPT clusters.
Allowing both the growth index γ and the dark energy equation-of-state parameter w to vary, we find γ = 0.73 ± 0.28 and w = -1.007 ± 0.065, demonstrating that the expansion and growth histories are consistent with a ΛCDM universe (γ = 0.55; w = -1).
NASA Astrophysics Data System (ADS)
Bocquet, S.; Saro, A.; Mohr, J. J.; Aird, K. A.; Ashby, M. L. N.; Bautz, M.; Bayliss, M.; Bazin, G.; Benson, B. A.; Bleem, L. E.; Brodwin, M.; Carlstrom, J. E.; Chang, C. L.; Chiu, I.; Cho, H. M.; Clocchiatti, A.; Crawford, T. M.; Crites, A. T.; Desai, S.; de Haan, T.; Dietrich, J. P.; Dobbs, M. A.; Foley, R. J.; Forman, W. R.; Gangkofner, D.; George, E. M.; Gladders, M. D.; Gonzalez, A. H.; Halverson, N. W.; Hennig, C.; Hlavacek-Larrondo, J.; Holder, G. P.; Holzapfel, W. L.; Hrubes, J. D.; Jones, C.; Keisler, R.; Knox, L.; Lee, A. T.; Leitch, E. M.; Liu, J.; Lueker, M.; Luong-Van, D.; Marrone, D. P.; McDonald, M.; McMahon, J. J.; Meyer, S. S.; Mocanu, L.; Murray, S. S.; Padin, S.; Pryke, C.; Reichardt, C. L.; Rest, A.; Ruel, J.; Ruhl, J. E.; Saliwanchik, B. R.; Sayre, J. T.; Schaffer, K. K.; Shirokoff, E.; Spieler, H. G.; Stalder, B.; Stanford, S. A.; Staniszewski, Z.; Stark, A. A.; Story, K.; Stubbs, C. W.; Vanderlinde, K.; Vieira, J. D.; Vikhlinin, A.; Williamson, R.; Zahn, O.; Zenteno, A.
2015-02-01
We present a velocity-dispersion-based mass calibration of the South Pole Telescope Sunyaev-Zel'dovich effect survey (SPT-SZ) galaxy cluster sample. Using a homogeneously selected sample of 100 cluster candidates from 720 deg² of the survey along with 63 velocity dispersion (σ_v) and 16 X-ray Y_X measurements of sample clusters, we simultaneously calibrate the mass-observable relation and constrain cosmological parameters. Our method accounts for cluster selection, cosmological sensitivity, and uncertainties in the mass calibrators. The calibrations using σ_v and Y_X are consistent at the 0.6σ level, with the σ_v calibration preferring ~16% higher masses. We use the full SPTCL data set (SZ clusters + σ_v + Y_X) to measure σ_8(Ω_m/0.27)^0.3 = 0.809 ± 0.036 within a flat ΛCDM model. The SPT cluster abundance is lower than preferred by either the WMAP9 or Planck+WMAP9 polarization (WP) data, but assuming that the sum of the neutrino masses is Σm_ν = 0.06 eV, we find the data sets to be consistent at the 1.0σ level for WMAP9 and 1.5σ for Planck+WP. Allowing for larger Σm_ν further reconciles the results. When we combine the SPTCL and Planck+WP data sets with information from baryon acoustic oscillations and Type Ia supernovae, the preferred cluster masses are 1.9σ higher than the Y_X calibration and 0.8σ higher than the σ_v calibration. Given the scale of these shifts (~44% and ~23% in mass, respectively), we execute a goodness-of-fit test; it reveals no tension, indicating that the best-fit model provides an adequate description of the data. Using the multi-probe data set, we measure Ω_m = 0.299 ± 0.009 and σ_8 = 0.829 ± 0.011. Within a νCDM model we find Σm_ν = 0.148 ± 0.081 eV. We present a consistency test of the cosmic growth rate using SPT clusters.
Allowing both the growth index γ and the dark energy equation-of-state parameter w to vary, we find γ = 0.73 ± 0.28 and w = -1.007 ± 0.065, demonstrating that the expansion and growth histories are consistent with a ΛCDM universe (γ = 0.55; w = -1).
Hernandez, Silvia R; Kergaravat, Silvina V; Pividori, Maria Isabel
2013-03-15
An approach based on electrochemical detection of the horseradish peroxidase enzymatic reaction by square-wave voltammetry was developed for the determination of phenolic compounds in environmental samples. First, a systematic optimization of three factors involved in the enzymatic reaction was carried out using response surface methodology through a central composite design. Second, the enzymatic electrochemical detection, coupled with a multivariate calibration method based on the partial least-squares technique, was optimized for the determination of a mixture of five phenolic compounds: phenol, p-aminophenol, p-chlorophenol, hydroquinone and pyrocatechol. Calibration and validation sets were built and assessed. In the calibration model, the LODs for the phenolic compounds ranged from 0.6 × 10(-6) to 1.4 × 10(-6) mol L(-1). Recoveries for prediction samples were higher than 85%. These compounds were analyzed simultaneously in spiked samples and in water samples collected close to tanneries and landfills. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Liu, Fei; He, Yong; Wang, Li
2007-11-01
To implement fast discrimination of milk tea powders with different internal qualities, visible and near-infrared (Vis/NIR) spectroscopy combined with effective wavelengths (EWs) and a BP neural network (BPNN) was investigated as a new approach. Five brands of milk tea were obtained; 225 samples were selected randomly for the calibration set and 75 samples for the validation set. The EWs were selected according to x-loading weights and regression coefficients from PLS analysis after preprocessing. A total of 18 EWs (400, 401, 452, 453, 502, 503, 534, 535, 594, 595, 635, 636, 688, 689, 987, 988, 995 and 996 nm) were selected as the inputs of the BPNN model. The performance was validated on the calibration and validation sets. With a threshold prediction error of ±0.1, excellent recognition ratios of 100% for the calibration set and 98.7% for the validation set were achieved. The prediction results indicated that the EWs reflected the main characteristics of milk teas of different brands based on Vis/NIR spectroscopy and the BPNN model, and that the EWs would be useful for developing a portable instrument to discriminate the variety and detect adulteration of instant milk tea powders.
Photometric calibration of the COMBO-17 survey with the Softassign Procrustes Matching method
NASA Astrophysics Data System (ADS)
Sheikhbahaee, Z.; Nakajima, R.; Erben, T.; Schneider, P.; Hildebrandt, H.; Becker, A. C.
2017-11-01
Accurate photometric calibration of optical data is crucial for photometric redshift estimation. We present the Softassign Procrustes Matching (SPM) method to improve the colour calibration upon the commonly used Stellar Locus Regression (SLR) method for the COMBO-17 survey. Our colour calibration approach can be categorised as a point-set matching method, which is frequently used in medical imaging and pattern recognition. We attain a photometric redshift precision Δz/(1 + zs) of better than 2 per cent. Our method is based on aligning the stellar locus of the uncalibrated stars to that of a spectroscopic sample of the Sloan Digital Sky Survey standard stars. We achieve our goal by finding a correspondence matrix between the two point-sets and applying the matrix to estimate the appropriate translations in multidimensional colour space. The SPM method is able to find the translation between two point-sets, despite the existence of noise and incompleteness of the common structures in the sets, as long as there is a distinct structure in at least one of the colour-colour pairs. We demonstrate the precision of our colour calibration method with a mock catalogue. The SPM colour calibration code is publicly available at https://neuronphysics@bitbucket.org/neuronphysics/spm.git.
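The heart of such a point-set matching calibration, estimating the colour-space translation between an uncalibrated stellar locus and a reference locus via soft correspondences, can be sketched compactly. This toy version uses fixed Gaussian-weighted correspondences and only solves for a translation; it is written in the spirit of softassign matching, not as the published SPM algorithm, and the point sets are synthetic.

```python
import numpy as np

def soft_translation(src, ref, sigma=0.3, n_iter=30):
    """Estimate the translation aligning point set `src` onto `ref` using
    soft, Gaussian-weighted correspondences (a toy sketch in the spirit of
    softassign point-set matching, not the published SPM algorithm)."""
    t = np.zeros(src.shape[1])
    for _ in range(n_iter):
        d2 = ((src + t)[:, None, :] - ref[None, :, :]) ** 2
        w = np.exp(-d2.sum(axis=-1) / (2.0 * sigma**2))
        w /= w.sum(axis=1, keepdims=True)  # row-normalized soft correspondences
        matched = w @ ref                  # soft "nearest" reference point
        t = (matched - src).mean(axis=0)   # re-estimate the colour-space shift
    return t

# Toy stellar loci in a colour-colour plane: the uncalibrated set is the
# reference locus shifted by (0.30, -0.20) magnitudes plus scatter.
rng = np.random.default_rng(3)
ref = rng.uniform(0.0, 5.0, size=(200, 2))
true_shift = np.array([0.30, -0.20])
src = ref - true_shift + rng.normal(scale=0.02, size=ref.shape)
t_est = soft_translation(src, ref)
```

Because the correspondences are soft, the estimate tolerates scatter and imperfect one-to-one matching between the two sets, which is the property the abstract highlights for noisy and incomplete loci.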
Reeves, J. B.; Smith, D.B.
2009-01-01
In 2004, soils were collected at 220 sites along two transects across the USA and Canada as a pilot study for a planned soil geochemical survey of North America (North American Soil Geochemical Landscapes Project). The objective of the current study was to examine the potential of diffuse reflectance (DR) Fourier Transform (FT) mid-infrared (mid-IR) and near-infrared (NIRS) spectroscopy to reduce the need for conventional analysis for the determination of major and trace elements in such continental-scale surveys. Soil samples (n = 720) were collected from two transects (east-west across the USA, and north-south from Manitoba, Canada to El Paso, Texas (USA), n = 453 and 267, respectively). The samples came from 19 USA states and the province of Manitoba in Canada. They represented 31 types of land use (e.g., national forest, rangeland, etc.), and 123 different land covers (e.g., soybeans, oak forest, etc.). The samples represented a combination of depth-based sampling (0-5 cm) and horizon-based sampling (O, A and C horizons) with 123 different depths identified. The set was very diverse, with few samples similar in land use, land cover, etc. All samples were analyzed by conventional means for the near-total concentration of 49 analytes (Ctotal, Ccarbonate and Corganic, and 46 major and trace elements). Spectra were obtained from dried, ground samples using a Digilab FTS-7000 FT spectrometer in the mid- (4000-400 cm-1) and near-infrared (10,000-4000 cm-1) at 4 cm-1 resolution (64 co-added scans per spectrum) using a Pike AutoDIFF DR autosampler. Partial least squares calibrations were developed using: (1) all samples as a calibration set; (2) samples evenly divided into calibration and validation sets based on spectral diversity; and (3) samples divided to have matching analyte concentrations in calibration and validation sets.
In general, results supported the conclusion that neither mid-IR nor NIRS would be particularly useful in reducing the need for conventional analysis of soils from this continental-scale geochemical survey. The extreme sample diversity, likely caused by the widely varied parent material, land use at the site of collection (e.g., grazing, recreation, agriculture, etc.), and climate resulted in poor calibrations even for Ctotal, Corganic and Ccarbonate. The results indicated potential for mid-IR and NIRS to differentiate soils containing high concentrations (>100 mg/kg) of some metals (e.g., Co, Cr, Ni) from low-level samples (<50 mg/kg). However, because of the small number of high-level samples, it is possible that differentiation was based on factors other than metal concentration. Results for Mg and Sr were good, but results for other metals examined were fair to poor, at best. In essence, it appears that the great variation in chemical and physical properties seen in soils from this continental-scale survey resulted in each sample being virtually unique. Thus, suitable spectroscopic calibrations were generally not possible.
Liu, Gui-Song; Guo, Hao-Song; Pan, Tao; Wang, Ji-Hua; Cao, Gan
2014-10-01
Based on Savitzky-Golay (SG) smoothing screening, principal component analysis (PCA) combined with separately supervised linear discriminant analysis (LDA) and unsupervised hierarchical clustering analysis (HCA) were used for non-destructive visible and near-infrared (Vis-NIR) detection for breed screening of transgenic sugarcane. A random and stability-dependent framework of calibration, prediction, and validation was proposed. A total of 456 samples of sugarcane leaves planting in the elongating stage were collected from the field, which was composed of 306 transgenic (positive) samples containing Bt and Bar gene and 150 non-transgenic (negative) samples. A total of 156 samples (negative 50 and positive 106) were randomly selected as the validation set; the remaining samples (negative 100 and positive 200, a total of 300 samples) were used as the modeling set, and then the modeling set was subdivided into calibration (negative 50 and positive 100, a total of 150 samples) and prediction sets (negative 50 and positive 100, a total of 150 samples) for 50 times. The number of SG smoothing points was ex- panded, while some modes of higher derivative were removed because of small absolute value, and a total of 264 smoothing modes were used for screening. The pairwise combinations of first three principal components were used, and then the optimal combination of principal components was selected according to the model effect. Based on all divisions of calibration and prediction sets and all SG smoothing modes, the SG-PCA-LDA and SG-PCA-HCA models were established, the model parameters were optimized based on the average prediction effect for all divisions to produce modeling stability. Finally, the model validation was performed by validation set. With SG smoothing, the modeling accuracy and stability of PCA-LDA, PCA-HCA were signif- icantly improved. 
For the optimal SG-PCA-LDA model, the recognition rates for positive and negative validation samples were 94.3% and 96.0%, respectively; for the optimal SG-PCA-HCA model they were 92.5% and 98.0%. Vis-NIR spectroscopic pattern recognition combined with SG smoothing could thus be used for accurate recognition of transgenic sugarcane leaves, providing a convenient screening method for transgenic sugarcane breeding.
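The SG-PCA-LDA chain described above can be sketched in a few lines. The spectra below are simulated stand-ins for the leaf data, and the smoothing window, polynomial order, and number of retained PCs are illustrative choices, not the paper's screened optima.

```python
# Minimal SG-PCA-LDA sketch on simulated two-class "spectra" (assumed data).
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
wl = np.linspace(0, 1, 200)
# Positive and negative classes differ by the amplitude of one broad band.
pos = 0.5 * np.exp(-((wl - 0.4) / 0.1) ** 2) + rng.normal(0, 0.05, (40, 200))
neg = 0.3 * np.exp(-((wl - 0.4) / 0.1) ** 2) + rng.normal(0, 0.05, (40, 200))
X = np.vstack([pos, neg])
y = np.array([1] * 40 + [0] * 40)

# 1) Savitzky-Golay smoothing (window/order would be screened in practice).
Xs = savgol_filter(X, window_length=15, polyorder=2, axis=1)

# 2) PCA by SVD on mean-centred spectra; keep the first 3 components.
Xc = Xs - Xs.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T

# 3) Two-class LDA on the PC scores: project onto the direction given by
#    the pooled within-class covariance and the class-mean difference.
m1, m0 = scores[y == 1].mean(axis=0), scores[y == 0].mean(axis=0)
Sw = np.cov(scores[y == 1].T) + np.cov(scores[y == 0].T)
w = np.linalg.solve(Sw, m1 - m0)
proj = scores @ w
threshold = 0.5 * (proj[y == 1].mean() + proj[y == 0].mean())
accuracy = ((proj > threshold).astype(int) == y).mean()
```

In the paper the whole pipeline is re-run over many calibration/prediction splits and all smoothing modes; this sketch shows a single pass only.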
Parastar, Hadi; Mostafapour, Sara; Azimi, Gholamhasan
2016-01-01
Comprehensive two-dimensional gas chromatography with flame ionization detection, combined with unfolded partial least squares, is proposed as a simple, fast and reliable method to assess the quality of gasoline and to detect its potential adulterants. The data for the calibration set are first baseline-corrected using a two-dimensional asymmetric least squares algorithm. Blends of gasoline with kerosene, white spirit and paint thinner, as frequently used adulterants, are used to make the calibration samples. The number of significant partial least squares components to build the model is determined by the minimum root-mean-square error of leave-one-out cross validation, which was 4. Appropriate statistical parameters (regression coefficient of 0.996-0.998, root-mean-square error of prediction of 0.005-0.010 and relative error of prediction of 1.54-3.82% for the calibration set) show the reliability of the developed method. In addition, the developed method is externally validated with three samples in the validation set (relative error of prediction below 10.0%). Finally, to test the applicability of the proposed strategy to real samples, five gasoline samples collected from gas stations were analyzed; their gasoline proportions were in the range of 70-85%. The relative standard deviations were below 8.5% for the different samples in the prediction set. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
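The component-selection step (picking the number of latent variables at the minimum leave-one-out RMSECV) can be illustrated with a compact PLS1 implementation. The three-component mixture data below are invented for the sketch and are not the chromatographic data of the paper.

```python
# Sketch: choose the PLS component count by leave-one-out cross-validation.
import numpy as np

def pls1_train(X, y, ncomp):
    """PLS1 via NIPALS deflation; returns regression vector and centring terms."""
    xm, ym = X.mean(axis=0), y.mean()
    Xr, yr = X - xm, y - ym
    W, P, Q = [], [], []
    for _ in range(ncomp):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)
        t = Xr @ w
        p = Xr.T @ t / (t @ t)
        q = (yr @ t) / (t @ t)
        Xr = Xr - np.outer(t, p)
        yr = yr - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    b = W @ np.linalg.solve(P.T @ W, Q)
    return b, xm, ym

def pls1_predict(model, X):
    b, xm, ym = model
    return (X - xm) @ b + ym

rng = np.random.default_rng(1)
grid = np.linspace(0, 1, 100)
# Mixtures of three overlapping "pure" signals; y is the first concentration.
pures = np.array([np.exp(-((grid - c) / 0.08) ** 2) for c in (0.3, 0.5, 0.7)])
C = rng.uniform(0, 1, (25, 3))
X = C @ pures + rng.normal(0, 0.01, (25, 100))
y = C[:, 0]

rmsecv = []
for ncomp in range(1, 7):
    errs = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        model = pls1_train(X[mask], y[mask], ncomp)
        errs.append((pls1_predict(model, X[i:i + 1])[0] - y[i]) ** 2)
    rmsecv.append(np.sqrt(np.mean(errs)))
best_ncomp = int(np.argmin(rmsecv)) + 1
```

With three independently varying constituents, the RMSECV curve drops sharply until the third component, mirroring how the paper arrived at 4 components for its blends.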
Hou, Siyuan; Riley, Christopher B; Mitchell, Cynthia A; Shaw, R Anthony; Bryanton, Janet; Bigsby, Kathryn; McClure, J Trenton
2015-09-01
Immunoglobulin G (IgG) is crucial for the protection of the host from invasive pathogens. Due to its importance for human health, tools that enable the monitoring of IgG levels are highly desired. Consequently there is a need for methods to determine the IgG concentration that are simple, rapid, and inexpensive. This work explored the potential of attenuated total reflectance (ATR) infrared spectroscopy as a method to determine IgG concentrations in human serum samples. Venous blood samples were collected from adults and children, and from the umbilical cord of newborns. The serum was harvested and tested using ATR infrared spectroscopy. Partial least squares (PLS) regression provided the basis to develop the new analytical methods. Three PLS calibrations were determined: one for the combined set of the venous and umbilical cord serum samples, the second for only the umbilical cord samples, and the third for only the venous samples. The number of PLS factors was chosen by critical evaluation of Monte Carlo-based cross validation results. The predictive performance for each PLS calibration was evaluated using the Pearson correlation coefficient, scatter plot and Bland-Altman plot, and percent deviations for independent prediction sets. The repeatability was evaluated by standard deviation and relative standard deviation. The results showed that ATR infrared spectroscopy is potentially a simple, quick, and inexpensive method to measure IgG concentrations in human serum samples. The results also showed that it is possible to build a united calibration curve for the umbilical cord and the venous samples. Copyright © 2015 Elsevier B.V. All rights reserved.
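The Monte Carlo-based cross-validation used above to choose the number of PLS factors can be sketched as repeated random calibration/test splits. For brevity the model below is principal component regression rather than PLS, and all spectra and concentrations are simulated; the split fraction and repeat count are arbitrary choices.

```python
# Sketch: Monte Carlo cross-validation to pick model complexity (assumed data).
import numpy as np

rng = np.random.default_rng(2)
grid = np.linspace(0, 1, 80)
pures = np.array([np.exp(-((grid - c) / 0.1) ** 2) for c in (0.35, 0.65)])
C = rng.uniform(0, 1, (40, 2))
X = C @ pures + rng.normal(0, 0.02, (40, 80))
y = C[:, 0] + 0.5 * C[:, 1]          # property depends on both constituents

def mccv_rmse(ncomp, n_splits=100, test_frac=0.3):
    """Average test RMSE of a PCR model over random splits."""
    errs, n = [], len(y)
    for _ in range(n_splits):
        idx = rng.permutation(n)
        test, train = idx[:int(n * test_frac)], idx[int(n * test_frac):]
        xm, ym = X[train].mean(axis=0), y[train].mean()
        Xc = X[train] - xm
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        T = Xc @ Vt[:ncomp].T
        coef = np.linalg.lstsq(T, y[train] - ym, rcond=None)[0]
        pred = (X[test] - xm) @ Vt[:ncomp].T @ coef + ym
        errs.append(np.mean((pred - y[test]) ** 2))
    return np.sqrt(np.mean(errs))

rmse = {k: mccv_rmse(k) for k in (1, 2, 3, 4)}
best = min(rmse, key=rmse.get)
```

Averaging over many random splits stabilizes the error estimate, which is the rationale for critically evaluating Monte Carlo results rather than a single split.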
Wesolowski, Edwin A.
1999-01-01
A streamflow and water-quality model was developed for reaches of Sand and Caddo Creeks in south-central Oklahoma to simulate the effects of wastewater discharge from a refinery and a municipal treatment plant. The purpose of the model was to simulate conditions during low streamflow when the conditions controlling dissolved-oxygen concentrations are most severe. Data collected to calibrate and verify the streamflow and water-quality model include continuously monitored streamflow and water-quality data at two gaging stations and three temporary monitoring stations; wastewater discharge from two wastewater plants; two sets each of five water-quality samples at nine sites during a 24-hour period; dye and propane samples; periphyton samples; and sediment oxygen demand measurements. The water-quality sampling, at a 6-hour frequency, was based on a Lagrangian reference frame in which the same volume of water was sampled at each site. To represent the unsteady streamflows and the dynamic water-quality conditions, a transport modeling system was used that included both a model to route streamflow and a model to transport dissolved conservative constituents with linkage to reaction kinetics similar to the U.S. Environmental Protection Agency QUAL2E model to simulate nonconservative constituents. These model codes are the Diffusion Analogy Streamflow Routing Model (DAFLOW) and the branched Lagrangian transport model (BLTM) and BLTM/QUAL2E that, collectively, as calibrated models, are referred to as the Ardmore Water-Quality Model. The Ardmore DAFLOW model was calibrated with three sets of streamflows that collectively ranged from 16 to 3,456 cubic feet per second. The model uses only one set of calibrated coefficients and exponents to simulate streamflow over this range. The Ardmore BLTM was calibrated for transport by simulating dye concentrations collected during a tracer study when streamflows ranged from 16 to 23 cubic feet per second.
Therefore, the model is expected to be most useful for low streamflow simulations. The Ardmore BLTM/QUAL2E model was calibrated and verified with water-quality data from nine sites where two sets of five samples were collected. The streamflow during the water-quality sampling in Caddo Creek at site 7 ranged from 8.4 to 20 cubic feet per second, of which about 5.0 to 9.7 cubic feet per second was contributed by Sand Creek. The model simulates the fate and transport of 10 water-quality constituents. The model was verified by running it using data that were not used in calibration; only phytoplankton were not verified. Measured and simulated concentrations of dissolved oxygen exhibited a marked daily pattern that was attributable to waste loading and algal activity. Dissolved-oxygen measurements during this study and simulated dissolved-oxygen concentrations using the Ardmore Water-Quality Model, for the conditions of this study, illustrate that the dissolved-oxygen sag curve caused by the upstream wastewater discharges is confined to Sand Creek.
Li, Weiyong; Worosila, Gregory D
2005-05-13
This research note demonstrates the simultaneous quantitation of a pharmaceutical active ingredient and three excipients in a simulated powder blend containing acetaminophen, Prosolv and Crospovidone. An experimental design approach was used to generate a 5-level (%, w/w) calibration sample set of 125 samples. The samples were prepared by weighing suitable amounts of powders into separate 20-mL scintillation vials and mixing manually. Partial least squares (PLS) regression was used for calibration model development. The models generated accurate results for the quantitation of Crospovidone (at 5%, w/w) and magnesium stearate (at 0.5%, w/w). Further testing demonstrated that 2-level models were as effective as the 5-level ones, reducing the number of calibration samples to 50. The models showed a small bias in the quantitation of acetaminophen (at 30%, w/w) and Prosolv (at 64.5%, w/w) in the blend. The implication of this bias is discussed.
40 CFR 86.1321-94 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... to be used for the analysis of natural gas-fueled vehicle hydrocarbon samples, the methane response... following initial and periodic calibration. The HFID used with petroleum-fueled, natural gas-fueled and liquefied petroleum gas-fueled diesel engines shall be operated to a set point ±10 °F (±5.5 °C) between 365...
40 CFR 89.313 - Initial calibration of analyzers.
Code of Federal Regulations, 2012 CFR
2012-07-01
... the HFID analyzer shall be optimized in order to meet the specifications in § 89.319(b)(2). (c) Zero... analyzers shall be set at zero. (2) Introduce the appropriate calibration gases to the analyzers and the values recorded. The same gas flow rates shall be used as when sampling exhaust. (d) Rechecking of zero...
NASA Astrophysics Data System (ADS)
Harris, C. D.; Profeta, Luisa T. M.; Akpovo, Codjo A.; Johnson, Lewis; Stowe, Ashley C.
2017-05-01
A calibration model was created to illustrate the detection capabilities of laser ablation molecular isotopic spectrometry (LAMIS) for isotopic analysis. The sample set contained boric acid pellets that varied in their isotopic concentrations of 10B and 11B. Each sample set was interrogated with a Q-switched Nd:YAG ablation laser operating at 532 nm. A minimum of four band heads of the β-system B2Σ → X2Σ transitions were identified and verified against previous literature on BO molecular emission lines. Isotopic shifts were observed in the spectra for each transition and used as the predictors in the calibration model. The spectra, along with their respective 10B/11B isotopic ratios, were analyzed using partial least squares regression (PLSR). A novel IUPAC approach for determining a multivariate limit of detection (LOD) interval was used to predict the detection of the desired isotopic ratios. The predicted multivariate LOD depends on the variation of the instrumental signal and other components in the calibration model space.
[Discrimination of donkey meat by NIR and chemometrics].
Niu, Xiao-Ying; Shao, Li-Min; Dong, Fang; Zhao, Zhi-Lei; Zhu, Yan
2014-10-01
Donkey meat samples (n = 167) from different parts of the donkey body (neck, costalia, rump, and tendon), together with beef (n = 47), pork (n = 51) and mutton (n = 32) samples, were used to establish near-infrared reflectance spectroscopy (NIR) classification models in the spectral range of 4,000~12,500 cm(-1). The accuracies of classification models constructed by Mahalanobis distance analysis, soft independent modeling of class analogy (SIMCA) and least squares-support vector machine (LS-SVM), each combined with pretreatments of Savitzky-Golay smoothing (5, 15 and 25 points), derivatives (first and second), multiplicative scatter correction and standard normal variate, were compared. The optimal models for intact samples were obtained by Mahalanobis distance analysis with the first 11 principal components (PCs) from the original spectra as inputs and by LS-SVM with the first 6 PCs as inputs, which correctly classified 100% of the calibration set and 98.96% of the prediction set. For minced samples of 7 mm diameter the optimal result was attained by LS-SVM with the first 5 PCs from the original spectra as inputs, which achieved an accuracy of 100% for calibration and 97.53% for prediction. For a minced diameter of 5 mm, a SIMCA model with the first 8 PCs from the original spectra as inputs correctly classified 100% of both calibration and prediction sets. For a minced diameter of 3 mm, Mahalanobis distance analysis and SIMCA models both achieved 100% accuracy for calibration and prediction, with the first 7 and 9 PCs from the original spectra as inputs, respectively. In all these models, donkey meat samples were correctly classified at 100% in both calibration and prediction. The results show that NIR spectroscopy combined with chemometric methods is feasible for discriminating donkey meat from other meats.
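The Mahalanobis-distance classifier on PC scores used above can be sketched briefly. The two simulated classes below stand in for the meat spectra, the number of PCs is arbitrary, and the sketch classifies the calibration samples themselves rather than an independent prediction set.

```python
# Sketch: Mahalanobis-distance classification on PCA scores (assumed data).
import numpy as np

rng = np.random.default_rng(3)
wl = np.linspace(0, 1, 120)
band = np.exp(-((wl - 0.5) / 0.15) ** 2)
classA = 1.0 * band + rng.normal(0, 0.05, (30, 120))
classB = 0.6 * band + rng.normal(0, 0.05, (30, 120))
X = np.vstack([classA, classB])
labels = np.array([0] * 30 + [1] * 30)

# PCA scores (first 5 PCs) from mean-centred spectra.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:5].T

def mahalanobis_classify(s):
    """Assign to the class with the smaller Mahalanobis distance to its centroid."""
    dists = []
    for c in np.unique(labels):
        grp = scores[labels == c]
        diff = s - grp.mean(axis=0)
        dists.append(diff @ np.linalg.inv(np.cov(grp.T)) @ diff)
    return int(np.argmin(dists))

pred = np.array([mahalanobis_classify(s) for s in scores])
accuracy = (pred == labels).mean()
```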
Palou, Anna; Miró, Aira; Blanco, Marcelo; Larraz, Rafael; Gómez, José Francisco; Martínez, Teresa; González, Josep Maria; Alcalà, Manel
2017-06-01
Although the feasibility of using near infrared (NIR) spectroscopy combined with partial least squares (PLS) regression for the prediction of physico-chemical properties of biodiesel/diesel blends has been widely demonstrated, including in the calibration sets the whole variability of diesel samples from diverse production origins remains an important challenge when constructing the models. This work presents a useful strategy for the systematic selection of calibration sets of biodiesel/diesel blend samples from diverse origins, based on a binary code, principal component analysis (PCA) and the Kennard-Stone algorithm. Results show that with this methodology the models keep their robustness over time. PLS calculations were performed both with specialized chemometric software and with the software of the NIR instrument installed in the plant, and both produced RMSEP values below the reproducibility of the reference methods. The models were demonstrated for the on-line simultaneous determination of seven properties: density, cetane index, fatty acid methyl ester (FAME) content, cloud point, boiling point at 95% recovery, flash point and sulphur content.
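The Kennard-Stone step mentioned above selects a calibration subset that spans the sample space: it starts from the two most distant samples and then repeatedly adds the sample farthest from the already-selected set. A minimal sketch, with toy one-dimensional points rather than blend spectra:

```python
# Sketch of the Kennard-Stone subset selection algorithm.
import numpy as np

def kennard_stone(X, k):
    """Return indices of k samples chosen by the Kennard-Stone criterion."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    i, j = np.unravel_index(np.argmax(d), d.shape)             # two most distant
    selected = [int(i), int(j)]
    while len(selected) < k:
        remaining = [r for r in range(len(X)) if r not in selected]
        # each candidate's distance to its nearest already-selected sample
        min_d = d[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining[int(np.argmax(min_d))])
    return selected

# Toy demo: points on a line; the endpoints are picked first, then midpoints.
pts = np.arange(10, dtype=float).reshape(-1, 1)
sel = kennard_stone(pts, 4)
```

In practice (as in the paper) the distances are computed in PCA score space so the selection reflects the dominant spectral variation.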
Spinning angle optical calibration apparatus
Beer, Stephen K.; Pratt, II, Harold R.
1991-01-01
An optical calibration apparatus is provided for calibrating and reproducing spinning angles in cross-polarization nuclear magnetic resonance spectroscopy. An illuminated magnifying apparatus enables optical setting and accurate reproducing of spinning "magic angles" in cross-polarization NMR experiments. A reference mark scribed on an edge of a spinning-angle test sample holder is illuminated by a light source and viewed through a magnifying scope. When the "magic angle" of a sample material used as a standard is attained by varying the angular position of the sample holder, the coordinate position of the reference mark relative to a graduation or graduations on a reticle in the magnifying scope is noted. Thereafter, the spinning "magic angle" of a test material having similar nuclear properties to the standard is attained by returning the sample holder to the originally noted coordinate position.
de Moraes, Alex Silva; Tech, Lohane; Melquíades, Fábio Luiz; Bastos, Rodrigo Oliveira
2014-11-01
Considering the importance of understanding the behavior of elements in different natural and/or anthropic processes, this study aimed to verify the accuracy of a multielement analysis method for rock characterization using soil standards as calibration references. An EDXRF spectrometer was used. The analyses were performed on samples doped with known concentrations of Mn, Zn, Rb, Sr and Zr, to obtain the calibration curves, and on a certified rock sample to check the accuracy of the analytical curves. A set of rock samples from Rio Bonito, located in Figueira city, Paraná State, Brazil, was then analyzed. The concentration values obtained, in ppm, for Mn, Rb, Sr and Zr varied, respectively, from 175 to 1084, 7.4 to 268, 28 to 2247 and 15 to 761.
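The calibration-curve step can be sketched as a straight-line fit of measured intensity against doped concentration, inverted to estimate an unknown. All numbers below are invented for illustration, not the study's measurements.

```python
# Sketch: univariate calibration curve from doped standards (hypothetical values).
import numpy as np

conc = np.array([50.0, 200.0, 500.0, 1000.0])      # ppm in the doped standards
intensity = np.array([12.1, 47.8, 119.5, 241.0])   # hypothetical net peak counts

# Least-squares straight line: intensity = slope * conc + intercept
slope, intercept = np.polyfit(conc, intensity, 1)

def predict_ppm(counts):
    """Invert the calibration curve to estimate a concentration."""
    return (counts - intercept) / slope

# A check sample (e.g., the certified rock) measured at 60.0 counts:
estimate = predict_ppm(60.0)
```

The certified reference sample in the study plays exactly this role: its predicted concentration, compared against the certified value, gauges the accuracy of the curve.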
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bocquet, S.; Saro, A.; Mohr, J. J.
2015-02-01
We present a velocity-dispersion-based mass calibration of the South Pole Telescope Sunyaev-Zel'dovich effect survey (SPT-SZ) galaxy cluster sample. Using a homogeneously selected sample of 100 cluster candidates from 720 deg^2 of the survey along with 63 velocity dispersion (σ_v) and 16 X-ray Y_X measurements of sample clusters, we simultaneously calibrate the mass-observable relation and constrain cosmological parameters. Our method accounts for cluster selection, cosmological sensitivity, and uncertainties in the mass calibrators. The calibrations using σ_v and Y_X are consistent at the 0.6σ level, with the σ_v calibration preferring ∼16% higher masses. We use the full SPT_CL data set (SZ clusters + σ_v + Y_X) to measure σ_8(Ω_m/0.27)^0.3 = 0.809 ± 0.036 within a flat ΛCDM model. The SPT cluster abundance is lower than preferred by either the WMAP9 or Planck+WMAP9 polarization (WP) data, but assuming that the sum of the neutrino masses is ∑m_ν = 0.06 eV, we find the data sets to be consistent at the 1.0σ level for WMAP9 and 1.5σ for Planck+WP. Allowing for larger ∑m_ν further reconciles the results. When we combine the SPT_CL and Planck+WP data sets with information from baryon acoustic oscillations and Type Ia supernovae, the preferred cluster masses are 1.9σ higher than the Y_X calibration and 0.8σ higher than the σ_v calibration. Given the scale of these shifts (∼44% and ∼23% in mass, respectively), we execute a goodness-of-fit test; it reveals no tension, indicating that the best-fit model provides an adequate description of the data. Using the multi-probe data set, we measure Ω_m = 0.299 ± 0.009 and σ_8 = 0.829 ± 0.011. Within a νCDM model we find ∑m_ν = 0.148 ± 0.081 eV. We present a consistency test of the cosmic growth rate using SPT clusters. Allowing both the growth index γ and the dark energy equation-of-state parameter w to vary, we find γ = 0.73 ± 0.28 and w = −1.007 ± 0.065, demonstrating that the expansion and the growth histories are consistent with a ΛCDM universe (γ = 0.55; w = −1).
Hybrid least squares multivariate spectral analysis methods
Haaland, David M.
2004-03-23
A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following prediction or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The hybrid method herein means a combination of an initial calibration step with subsequent analysis by an inverse multivariate analysis method. A spectral shape herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The shape can be continuous, discontinuous, or even discrete points illustrative of the particular effect.
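The core idea, adding a spectral shape that was absent from the calibration into the prediction step, can be sketched with classical least squares (CLS) as the base model. The shapes, concentrations, and the drift spectrum below are invented; the patent covers a broader family of inverse methods than this minimal CLS example.

```python
# Sketch: augmenting a CLS model with a non-calibrated spectral shape.
import numpy as np

rng = np.random.default_rng(4)
wl = np.linspace(0, 1, 150)
shape = lambda c, w: np.exp(-((wl - c) / w) ** 2)

K = np.vstack([shape(0.3, 0.08), shape(0.6, 0.08)])  # calibrated pure components
drift = shape(0.8, 0.2)                              # uncalibrated effect/interferent

true_conc = np.array([0.7, 0.4])
sample = true_conc @ K + 0.5 * drift + rng.normal(0, 0.005, 150)

# Plain CLS: solve sample ≈ conc @ K; the unmodelled drift biases the result.
plain = np.linalg.lstsq(K.T, sample, rcond=None)[0]

# Hybrid step: append the drift's spectral shape to the shape matrix and re-fit;
# the estimates for the original components are now essentially unbiased.
K_aug = np.vstack([K, drift])
hybrid = np.linalg.lstsq(K_aug.T, sample, rcond=None)[0][:2]
```

As the abstract notes, the added "shape" need not be a chemical spectrum; a temperature-drift or instrument-shift signature can be appended in exactly the same way.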
Hybrid least squares multivariate spectral analysis methods
Haaland, David M.
2002-01-01
A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following estimation or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The "hybrid" method herein means a combination of an initial classical least squares analysis calibration step with subsequent analysis by an inverse multivariate analysis method. A "spectral shape" herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The "shape" can be continuous, discontinuous, or even discrete points illustrative of the particular effect.
Duchêne, David; Duchêne, Sebastian; Ho, Simon Y W
2015-07-01
Phylogenetic estimation of evolutionary timescales has become routine in biology, forming the basis of a wide range of evolutionary and ecological studies. However, there are various sources of bias that can affect these estimates. We investigated whether tree imbalance, a property that is commonly observed in phylogenetic trees, can lead to reduced accuracy or precision of phylogenetic timescale estimates. We analysed simulated data sets with calibrations at internal nodes and at the tips, taking into consideration different calibration schemes and levels of tree imbalance. We also investigated the effect of tree imbalance on two empirical data sets: mitogenomes from primates and serial samples of the African swine fever virus. In analyses calibrated using dated, heterochronous tips, we found that tree imbalance had a detrimental impact on precision and produced a bias in which the overall timescale was underestimated. A pronounced effect was observed in analyses with shallow calibrations. The greatest decreases in accuracy usually occurred in the age estimates for medium and deep nodes of the tree. In contrast, analyses calibrated at internal nodes did not display a reduction in estimation accuracy or precision due to tree imbalance. Our results suggest that molecular-clock analyses can be improved by increasing taxon sampling, with the specific aims of including deeper calibrations, breaking up long branches and reducing tree imbalance. © 2014 John Wiley & Sons Ltd.
Dillner, A. M.; Takahama, S.
2015-10-01
Elemental carbon (EC) is an important constituent of atmospheric particulate matter because it absorbs solar radiation influencing climate and visibility and it adversely affects human health. The EC measured by thermal methods such as thermal-optical reflectance (TOR) is operationally defined as the carbon that volatilizes from quartz filter samples at elevated temperatures in the presence of oxygen. Here, methods are presented to accurately predict TOR EC using Fourier transform infrared (FT-IR) absorbance spectra from atmospheric particulate matter collected on polytetrafluoroethylene (PTFE or Teflon) filters. This method is similar to the procedure developed for OC in prior work (Dillner and Takahama, 2015). Transmittance FT-IR analysis is rapid, inexpensive and nondestructive to the PTFE filter samples which are routinely collected for mass and elemental analysis in monitoring networks. FT-IR absorbance spectra are obtained from 794 filter samples from seven Interagency Monitoring of PROtected Visual Environment (IMPROVE) sites collected during 2011. Partial least squares regression is used to calibrate sample FT-IR absorbance spectra to collocated TOR EC measurements. The FT-IR spectra are divided into calibration and test sets. Two calibrations are developed: one developed from uniform distribution of samples across the EC mass range (Uniform EC) and one developed from a uniform distribution of Low EC mass samples (EC < 2.4 μg, Low Uniform EC). A hybrid approach which applies the Low EC calibration to Low EC samples and the Uniform EC calibration to all other samples is used to produce predictions for Low EC samples that have mean error on par with parallel TOR EC samples in the same mass range and an estimate of the minimum detection limit (MDL) that is on par with TOR EC MDL. 
For all samples, this hybrid approach leads to precise and accurate TOR EC predictions by FT-IR as indicated by high coefficient of determination (R²; 0.96), no bias (0.00 μg m⁻³, a concentration value based on the nominal IMPROVE sample volume of 32.8 m³), low error (0.03 μg m⁻³) and reasonable normalized error (21 %). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision and accuracy to collocated TOR measurements. Only the normalized error is higher for the FT-IR EC measurements than for collocated TOR. FT-IR spectra are also divided into calibration and test sets by the ratios OC/EC and ammonium/EC to determine the impact of OC and ammonium on EC prediction. We conclude that FT-IR analysis with partial least squares regression is a robust method for accurately predicting TOR EC in IMPROVE network samples, providing complementary information to TOR OC predictions (Dillner and Takahama, 2015) and the organic functional group composition and organic matter estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).
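The hybrid rule (apply the Low EC calibration when a sample appears to be low-mass, otherwise the full-range calibration) can be sketched with univariate stand-ins for the two PLS models. All data, the threshold, and the one-variable "signal" below are invented for illustration.

```python
# Sketch of the hybrid calibration routing rule (assumed data and models).
import numpy as np

rng = np.random.default_rng(5)
true_ec = np.concatenate([rng.uniform(0.1, 2.4, 60), rng.uniform(2.4, 20, 40)])
signal = 1.1 * true_ec + 0.05 + rng.normal(0, 0.05, 100)  # proxy spectral signal

low = true_ec < 2.4
# Two univariate calibrations standing in for the Low EC and Uniform EC models:
b_low = np.polyfit(signal[low], true_ec[low], 1)
b_all = np.polyfit(signal, true_ec, 1)

def hybrid_predict(s, threshold=2.4):
    """Use the low-EC model whenever the full-range model says the sample is low."""
    first_pass = np.polyval(b_all, s)
    return np.where(first_pass < threshold, np.polyval(b_low, s), first_pass)

pred = hybrid_predict(signal)
err = np.abs(pred - true_ec).mean()
```

The point of the routing is that the low-range model, trained only on low-mass samples, is not pulled by the high-mass end and so keeps its error (and the implied detection limit) small where it matters.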
Dillner, A. M.; Takahama, S.
2015-06-01
Elemental carbon (EC) is an important constituent of atmospheric particulate matter because it absorbs solar radiation influencing climate and visibility and it adversely affects human health. The EC measured by thermal methods such as Thermal-Optical Reflectance (TOR) is operationally defined as the carbon that volatilizes from quartz filter samples at elevated temperatures in the presence of oxygen. Here, methods are presented to accurately predict TOR EC using Fourier Transform Infrared (FT-IR) absorbance spectra from atmospheric particulate matter collected on polytetrafluoroethylene (PTFE or Teflon) filters. This method is similar to the procedure tested and developed for OC in prior work (Dillner and Takahama, 2015). Transmittance FT-IR analysis is rapid, inexpensive, and non-destructive to the PTFE filter samples which are routinely collected for mass and elemental analysis in monitoring networks. FT-IR absorbance spectra are obtained from 794 filter samples from seven Interagency Monitoring of PROtected Visual Environment (IMPROVE) sites collected during 2011. Partial least squares regression is used to calibrate sample FT-IR absorbance spectra to collocated TOR EC measurements. The FT-IR spectra are divided into calibration and test sets. Two calibrations are developed: one from a uniform distribution of samples across the EC mass range (Uniform EC) and one from a uniform distribution of low EC mass samples (EC < 2.4 μg, Low Uniform EC). A hybrid approach which applies the low EC calibration to low EC samples and the Uniform EC calibration to all other samples produces predictions for low EC samples that have mean error on par with parallel TOR EC samples in the same mass range, and an estimate of the minimum detection limit (MDL) that is on par with the TOR EC MDL.
For all samples, this hybrid approach leads to precise and accurate TOR EC predictions by FT-IR, as indicated by a high coefficient of determination (R²; 0.96), no bias (0.00 μg m⁻³, a concentration value based on the nominal IMPROVE sample volume of 32.8 m³), low error (0.03 μg m⁻³) and reasonable normalized error (21 %). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision and accuracy to collocated TOR measurements. Only the normalized error is higher for the FT-IR EC measurements than for collocated TOR. FT-IR spectra are also divided into calibration and test sets by the ratios OC/EC and ammonium/EC to determine the impact of OC and ammonium on EC prediction. We conclude that FT-IR analysis with partial least squares regression is a robust method for accurately predicting TOR EC in IMPROVE network samples, providing complementary information to TOR OC predictions (Dillner and Takahama, 2015) and the organic functional group composition and organic matter (OM) estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).
Fine PM measurements: personal and indoor air monitoring.
Jantunen, M; Hänninen, O; Koistinen, K; Hashim, J H
2002-12-01
This review compiles personal and indoor microenvironment particulate matter (PM) monitoring needs from recently set research objectives, most importantly the NRC publication "Research Priorities for Airborne Particulate Matter" (1998). Techniques and equipment used over the last 20 years to monitor personal PM exposures, microenvironment concentrations, and the constituents of the sampled PM are then reviewed. Development objectives are set and discussed for personal and microenvironment PM samplers and monitors, for filter materials, and for analytical laboratory techniques for equipment calibration, filter weighing and laboratory climate control. The trend is towards smaller sample flows and lighter, silent, independent (battery-powered) monitors with data-logging capacity to store microenvironment- or activity-relevant sensor data, advanced flow controls and continuous recording of the concentration. The best filters are non-hygroscopic, chemically pure and inert, and physically robust against mechanical wear. Semiautomatic and primary-standard-equivalent positive displacement flow meters are replacing less accurate flow-calibration methods, and personal sampling flow rates should also become mass-flow controlled (with or without volumetric compensation for pressure and temperature changes). In the weighing laboratory the alternatives are climatic control (set temperature and relative humidity) and mechanically simpler thermostatic heating, air conditioning and dehumidification systems combined with numerical control of temperature, humidity and pressure effects on flow calibration and filter weighing.
Lacour, C; Joannis, C; Chebbo, G
2009-05-01
This article presents a methodology for assessing annual wet weather Suspended Solids (SS) and Chemical Oxygen Demand (COD) loads in combined sewers, along with the associated uncertainties from continuous turbidity measurements. The proposed method is applied to data from various urban catchments in the cities of Paris and Nantes. The focus here concerns the impact of the number of rain events sampled for calibration (i.e. through establishing linear SS/turbidity or COD/turbidity relationships) on the uncertainty of annual pollutant load assessments. Two calculation methods are investigated, both of which rely on Monte Carlo simulations: random assignment of event-specific calibration relationships to each individual rain event, and the use of an overall relationship built from the entire available data set. Since results indicate a fairly low inter-event variability for calibration relationship parameters, an accurate assessment of pollutant loads can be derived, even when fewer than 10 events are sampled for calibration purposes. For operational applications, these results suggest that turbidity could provide a more precise evaluation of pollutant loads at lower cost than typical sampling methods.
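The first calculation method, random assignment of event-specific calibration relationships, can be sketched as follows. All event volumes, turbidities, and calibration slopes below are invented; the real method also propagates intercepts and measurement uncertainties.

```python
# Sketch: Monte Carlo propagation of inter-event calibration variability
# into an annual pollutant load estimate (all numbers assumed).
import numpy as np

rng = np.random.default_rng(6)
n_events = 12
event_volume = rng.uniform(1e3, 1e4, n_events)    # m3 discharged per rain event
event_turbidity = rng.uniform(50, 300, n_events)  # NTU, event-mean turbidity

# Slopes of SS/turbidity relationships fitted on 8 sampled calibration events.
cal_slopes = rng.normal(1.5, 0.1, 8)              # (mg/L) per NTU

loads = []
for _ in range(2000):
    # Randomly assign one calibrated event relationship to each rain event.
    slopes = rng.choice(cal_slopes, size=n_events)
    # mg/L equals g/m3, so slope * turbidity * volume is grams; /1e6 gives tonnes.
    loads.append(np.sum(slopes * event_turbidity * event_volume) / 1e6)
loads = np.array(loads)

mean_load = loads.mean()
ci_low, ci_high = np.percentile(loads, [2.5, 97.5])
rel_halfwidth = (ci_high - ci_low) / (2 * mean_load)
```

Because the per-event slopes vary little (the low inter-event variability reported above), the annual-load interval stays narrow even with few calibration events, which is the article's operational argument for turbidity-based monitoring.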
Determination of polarimetric parameters of honey by near-infrared transflectance spectroscopy.
García-Alvarez, M; Ceresuela, S; Huidobro, J F; Hermida, M; Rodríguez-Otero, J L
2002-01-30
NIR transflectance spectroscopy was used to determine polarimetric parameters (direct polarization, polarization after inversion, specific rotation in dry matter, and polarization due to nonmonosaccharides) and sucrose in honey. In total, 156 honey samples were collected during 1992 (45 samples), 1995 (56 samples), and 1996 (55 samples). Samples were analyzed by NIR spectroscopy and polarimetric methods. Calibration (118 samples) and validation (38 samples) sets were constructed; honeys from the three years were included in both sets. Calibrations were performed by modified partial least-squares regression with scatter correction by standard normal variate (SNV) and detrend methods. For direct polarization, polarization after inversion, specific rotation in dry matter, and polarization due to nonmonosaccharides, good statistics (bias, SEV, and R(2)) were obtained for the validation set, and no statistically significant (p = 0.05) differences were found between instrumental and polarimetric methods for these parameters. Statistical data for sucrose were not as good as those of the other parameters. Therefore, NIR spectroscopy is not an effective method for quantitative analysis of sucrose in these honey samples. However, NIR spectroscopy may be an acceptable method for semiquantitative evaluation of sucrose for honeys, such as those in our study, containing up to 3% sucrose. Further work is necessary to validate the uncertainty at higher levels.
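The standard normal variate scatter correction used here is simple to state: each spectrum is centred on its own mean and scaled by its own standard deviation. A minimal sketch with an invented five-point spectrum:

```python
import math

def snv(spectrum):
    """Standard normal variate: centre each spectrum on its own mean and
    scale by its own standard deviation to correct scatter effects."""
    n = len(spectrum)
    mean = sum(spectrum) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in spectrum) / (n - 1))
    return [(x - mean) / sd for x in spectrum]

raw = [0.52, 0.58, 0.61, 0.70, 0.66]   # illustrative absorbances
corrected = snv(raw)
```

After SNV every spectrum has zero mean and unit standard deviation, so multiplicative and additive scatter differences between samples are suppressed before regression.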
Bakri, Barbara; Weimer, Marco; Hauck, Gerrit; Reich, Gabriele
2015-11-01
The scope of the study was (1) to develop a lean quantitative calibration for real-time near-infrared (NIR) blend monitoring, which meets the requirements in early development of pharmaceutical products, and (2) to compare the prediction performance of this approach with the results obtained from stratified sampling using a sample thief in combination with off-line high pressure liquid chromatography (HPLC) and at-line near-infrared chemical imaging (NIRCI). Tablets were manufactured from powder blends and analyzed with NIRCI and HPLC to verify the real-time results. The model formulation contained 25% w/w naproxen as a cohesive active pharmaceutical ingredient (API), microcrystalline cellulose and croscarmellose sodium as cohesive excipients, and free-flowing mannitol. Five in-line NIR calibration approaches, all using the spectra from the end of the blending process as reference for PLS modeling, were compared in terms of selectivity, precision, prediction accuracy and robustness. High selectivity could be achieved with a "reduced", i.e. API- and time-saving, approach (35% reduction in API amount) based on six concentration levels of the API, with three levels realized by three independent powder blends and the additional levels obtained by simply increasing the API concentration in these blends. Accuracy and robustness were further improved by combining this calibration set with a second independent data set comprising different excipient concentrations and reflecting different environmental conditions. The combined calibration model was used to monitor the blending process of independent batches. For this model formulation the target concentration of the API could be achieved within 3 min, indicating a short blending time. The in-line NIR approach was verified by stratified sampling HPLC and NIRCI results. All three methods revealed comparable results regarding blend end point determination.
Differences in both mean API concentration and RSD values could be attributed to differences in effective sample size and thief sampling errors. This conclusion was supported by HPLC and NIRCI analysis of tablets manufactured from powder blends after different blending times. In summary, the study clearly demonstrates the ability to develop efficient and robust quantitative calibrations for real-time NIR powder blend monitoring with a reduced set of powder blends while avoiding any bias caused by physical sampling.
Otsuka, Eri; Abe, Hiroyuki; Aburada, Masaki; Otsuka, Makoto
2010-07-01
A suppository dosage form has a rapid therapeutic effect because it dissolves in the rectum, is absorbed into the bloodstream, and bypasses first-pass hepatic metabolism. However, the dosage form can be unstable: a suppository is prepared in a semisolid form, so it is not easy to mix the bulk drug powder uniformly into the base. This article describes a nondestructive method of determining the drug content of suppositories using near-infrared (NIR) spectrometry combined with chemometrics. Suppositories (aspirin content: 1.8, 2.7, 4.5, 7.3, and 9.1%, w/w) were produced by mixing an aspirin bulk powder with hard fat at 50 degrees C and pouring the melted mixture into a plastic mold (2.25 mL). NIR spectra of 12 calibration and 12 validation sample sets were recorded 5 times. A total of 60 spectral data were used as a calibration set to establish a calibration model to predict drug content with partial least-squares (PLS) regression analysis. NIR data of the suppository samples were divided into two wavenumber ranges, 4000-12500 cm(-1) (LR) and 5900-6300 cm(-1) (SR). Calibration models for the aspirin content of the suppositories were calculated based on the LR and SR ranges of second-derivative NIR spectra using PLS. The models for LR and SR consisted of five and one principal components (PC), respectively. The plots of predicted against actual values gave straight lines with regression coefficients of 0.9531 and 0.9749, respectively. The mean bias and mean accuracy of the calibration models calculated from the SR validation data sets were lower than those for LR. Limiting the wavenumber range of the spectral data sets is useful both for understanding the calibration model, because of noise cancellation, and for measuring the objective functions.
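The pretreatment used in this record, second-derivative spectra restricted to a narrow wavenumber window, can be illustrated with a central second difference as a simple stand-in for the Savitzky-Golay filtering typically used in practice; the wavenumbers and absorbances below are invented:

```python
def second_derivative(spectrum):
    """Central second difference: a simple stand-in for a
    Savitzky-Golay second-derivative pretreatment."""
    return [spectrum[i - 1] - 2 * spectrum[i] + spectrum[i + 1]
            for i in range(1, len(spectrum) - 1)]

def restrict_range(wavenumbers, spectrum, lo, hi):
    """Keep only the variables inside a target wavenumber window,
    e.g. 5900-6300 cm(-1) for the SR models."""
    pairs = [(w, a) for w, a in zip(wavenumbers, spectrum) if lo <= w <= hi]
    return [w for w, _ in pairs], [a for _, a in pairs]

wn = [5800, 5900, 6000, 6100, 6200, 6300, 6400]          # cm(-1)
absorbance = [0.10, 0.14, 0.20, 0.33, 0.28, 0.18, 0.12]  # illustrative
wn_sr, ab_sr = restrict_range(wn, absorbance, 5900, 6300)
d2 = second_derivative(ab_sr)
```

Second-derivative pretreatment removes baseline offset and slope, and restricting the window discards noisy, uninformative variables, which is the noise-cancellation benefit the abstract describes for the SR model.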
Spinning angle optical calibration apparatus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, S.K.; Pratt, H.R. II.
1989-09-12
An optical calibration apparatus is provided for calibrating and reproducing spinning angles in cross-polarization nuclear magnetic resonance spectroscopy. An illuminated magnifying apparatus enables optical setting and accurate reproduction of spinning magic angles in cross-polarization NMR experiments. A reference mark scribed on an edge of a spinning-angle test sample holder is illuminated by a light source and viewed through a magnifying scope. When the magic angle of a sample material used as a standard is attained by varying the angular position of the sample holder, the coordinate position of the reference mark relative to a graduation or graduations on a reticle in the magnifying scope is noted. Thereafter, the spinning magic angle of a test material having similar nuclear properties to the standard is attained by returning the sample holder to the originally noted coordinate position. 2 figs.
NASA Astrophysics Data System (ADS)
Dinç, Erdal; Ertekin, Zehra Ceren; Büker, Eda
2017-09-01
In this study, excitation-emission matrix datasets with strongly overlapping bands were processed using four different chemometric calibration algorithms, parallel factor analysis, Tucker3, three-way partial least squares and unfolded partial least squares, for the simultaneous quantitative estimation of valsartan and amlodipine besylate in tablets. No preliminary separation step was used before applying these approaches to the analysis of the related drug substances in samples. A three-way excitation-emission matrix data array was obtained by concatenating the excitation-emission matrices of the calibration set, validation set, and commercial tablet samples. This data array was used to build the parallel factor analysis, Tucker3, three-way partial least squares and unfolded partial least squares calibrations and to predict the amounts of valsartan and amlodipine besylate in samples. For all the methods, calibration and prediction of valsartan and amlodipine besylate were performed in the working concentration range of 0.25-4.50 μg/mL. The validity and performance of all the proposed methods were checked using validation parameters. From the analysis results, it was concluded that the described two-way and three-way algorithmic methods are very useful for the simultaneous quantitative resolution and routine analysis of the related drug substances in marketed samples.
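The "unfolding" behind unfolded partial least squares simply flattens each sample's excitation-emission matrix into one long row vector before ordinary two-way PLS is applied; a sketch with a tiny invented data cube:

```python
# Two samples, each a 3 (excitation) x 2 (emission) intensity matrix --
# intensities are illustrative only.
eem = [
    [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
    [[0.5, 1.5], [2.5, 3.5], [4.5, 5.5]],
]

def unfold(cube):
    """Unfold a (samples x excitation x emission) cube into a
    (samples x excitation*emission) two-way matrix."""
    return [[v for row in sample for v in row] for sample in cube]

X = unfold(eem)   # ready for ordinary two-way PLS regression
```

Trilinear methods such as parallel factor analysis keep the cube intact and exploit its three-way structure, whereas unfolding trades that structure for the flexibility of standard two-way calibration.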
Xiao, Hui; Sun, Ke; Sun, Ye; Wei, Kangli; Tu, Kang; Pan, Leiqing
2017-11-22
Near-infrared (NIR) spectroscopy was applied for the determination of the total soluble solids content (SSC) of single Ruby Seedless grape berries using both benchtop Fourier transform (VECTOR 22/N) and portable grating scanning (SupNIR-1500) spectrometers in this study. The results showed that the best SSC prediction was obtained by the VECTOR 22/N in the range of 12,000-4000 cm(-1) (833-2500 nm) for Ruby Seedless, with a determination coefficient of prediction (Rp²) of 0.918 and a root mean square error of prediction (RMSEP) of 0.758% based on least squares support vector machine (LS-SVM) modeling. Calibration transfer was conducted on the shared spectral range of the two instruments (1000-1800 nm) based on the LS-SVM model. By using the Kennard-Stone (KS) algorithm to divide the sample sets, selecting the optimal number of standardization samples, and applying Passing-Bablok regression to choose the optimal instrument as the master instrument, a modified calibration transfer method between the two spectrometers was developed. When 45 samples were selected for the standardization set, linear interpolation-piecewise direct standardization (linear interpolation-PDS) performed well for calibration transfer, with an Rp² of 0.857 and an RMSEP of 1.099% in the spectral region of 1000-1800 nm. Re-calculating the standardization samples into the master model was also shown to improve the performance of calibration transfer in this study. This work indicated that NIR could be used as a rapid and non-destructive method for SSC prediction, and provides a feasible way to address the transfer difficulty between entirely different NIR spectrometers.
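The Kennard-Stone algorithm used here to divide the sample sets repeatedly picks the sample whose nearest already-chosen neighbour is furthest away, so the selection spans the data space. A small sketch (the sample coordinates are illustrative, not spectra from the study):

```python
def kennard_stone(X, k):
    """Kennard-Stone selection: pick k samples that span the data space,
    starting from the two most distant points."""
    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    n = len(X)
    # Seed with the pair of samples furthest apart.
    i0, j0 = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                 key=lambda p: d2(X[p[0]], X[p[1]]))
    chosen = [i0, j0]
    while len(chosen) < k:
        # Add the sample whose nearest chosen neighbour is furthest away.
        rest = [i for i in range(n) if i not in chosen]
        nxt = max(rest, key=lambda i: min(d2(X[i], X[j]) for j in chosen))
        chosen.append(nxt)
    return chosen

X = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.5, 0.4], [0.9, 0.1]]
picked = kennard_stone(X, 3)
```

In calibration transfer the same idea is used to choose standardization samples that are representative of the whole spectral variation rather than clustered in one region.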
Goicoechea, H C; Olivieri, A C
2001-07-01
A newly developed multivariate method involving net analyte preprocessing (NAP) was tested using central composite calibration designs of progressively decreasing size regarding the multivariate simultaneous spectrophotometric determination of three active components (phenylephrine, diphenhydramine and naphazoline) and one excipient (methylparaben) in nasal solutions. Its performance was evaluated and compared with that of partial least-squares (PLS-1). Minimisation of the calibration predicted error sum of squares (PRESS) as a function of a moving spectral window helped to select appropriate working spectral ranges for both methods. The comparison of NAP and PLS results was carried out using two tests: (1) the elliptical joint confidence region for the slope and intercept of a predicted versus actual concentrations plot for a large validation set of samples and (2) the D-optimality criterion concerning the information content of the calibration data matrix. Extensive simulations and experimental validation showed that, unlike PLS, the NAP method is able to furnish highly satisfactory results when the calibration set is reduced from a full four-component central composite to a fractional central composite, as expected from the modelling requirements of net analyte based methods.
Calibrated Noise Measurements with Induced Receiver Gain Fluctuations
NASA Technical Reports Server (NTRS)
Racette, Paul; Walker, David; Gu, Dazhen; Rajola, Marco; Spevacek, Ashly
2011-01-01
The lack of well-developed techniques for modeling changing statistical moments in our observations has stymied the application of stochastic process theory in science and engineering. These limitations were encountered when modeling the performance of radiometer calibration architectures and algorithms in the presence of nonstationary receiver fluctuations. Analyses of measured signals have traditionally been limited to a single measurement series, whereas in a radiometer that samples a set of noise references, the data collection can be treated as an ensemble set of measurements of the receiver state. Noise Assisted Data Analysis (NADA) is a growing field of study with significant potential for aiding the understanding and modeling of nonstationary processes. Typically, NADA entails adding noise to a signal to produce an ensemble set on which statistical analysis is performed. Alternatively, as in radiometric measurements, mixing a signal with calibrated noise provides, through the calibration process, the means to detect deviations from the stationarity assumption and thereby a measurement tool to characterize the signal's nonstationary properties. Data sets comprised of calibrated noise measurements have been limited to those collected with naturally occurring fluctuations in the radiometer receiver. To examine the application of NADA using calibrated noise, a Receiver Gain Modulation Circuit (RGMC) was designed and built to modulate the gain of a radiometer receiver using an external signal. In 2010, an RGMC was installed and operated at the National Institute of Standards and Technology (NIST) using their Noise Figure Radiometer (NFRad) and national standard noise references. The data collected are the first known set of calibrated noise measurements from a receiver with an externally modulated gain.
As an initial step, sinusoidal and step-function signals were used to modulate the receiver gain, to evaluate the circuit characteristics and to study the performance of a variety of calibration algorithms. The receiver noise temperature and time-bandwidth product of the NFRad are calculated from the data. Statistical analysis using temporal-dependent calibration algorithms reveals that the naturally occurring fluctuations in the receiver are stationary over long intervals (hundreds of seconds); however, the receiver exhibits local nonstationarity over the interval during which one set of reference measurements is collected. A variety of calibration algorithms have been applied to the data to assess their performance with the gain fluctuation signals. This presentation will describe the RGMC, the experiment design, and a comparative analysis of calibration algorithms.
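Calibration against a set of noise references, as in these measurements, rests on a two-reference linear conversion: each pass over the references yields a gain and offset that map detector counts to noise temperature. A generic sketch of that conversion (the reference temperatures and counts are invented, not NFRad values, and this is not the NIST algorithm itself):

```python
# Reference noise temperatures (kelvin) -- illustrative values only.
T_HOT, T_COLD = 9500.0, 300.0

def calibrate(c_hot, c_cold):
    """Build a counts-to-kelvin converter from one pair of
    hot/cold reference measurements."""
    gain = (c_hot - c_cold) / (T_HOT - T_COLD)   # counts per kelvin
    offset = c_cold - gain * T_COLD
    return lambda counts: (counts - offset) / gain

# One set of reference measurements (counts) and a scene measurement.
to_kelvin = calibrate(c_hot=8200.0, c_cold=1750.0)
t_scene = to_kelvin(3000.0)
```

Because gain and offset are re-estimated from every pass over the references, drift in the estimated gain between passes is exactly the signature of receiver nonstationarity that the ensemble analysis described above sets out to characterize.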
A New Electromagnetic Instrument for Thickness Gauging of Conductive Materials
NASA Technical Reports Server (NTRS)
Fulton, J. P.; Wincheski, B.; Nath, S.; Reilly, J.; Namkung, M.
1994-01-01
Eddy current techniques are widely used to measure the thickness of electrically conducting materials. The approach, however, requires an extensive set of calibration standards and can be quite time consuming to set up and perform. Recently, an electromagnetic sensor was developed which eliminates the need for impedance measurements. The ability to monitor the magnitude of a voltage output independent of the phase enables the use of extremely simple instrumentation. Using this new sensor a portable hand-held instrument was developed. The device makes single point measurements of the thickness of nonferromagnetic conductive materials. The technique utilized by this instrument requires calibration with two samples of known thicknesses that are representative of the upper and lower thickness values to be measured. The accuracy of the instrument depends upon the calibration range, with a larger range giving a larger error. The measured thicknesses are typically within 2-3% of the calibration range (the difference between the thin and thick sample) of their actual values. In this paper the design, operational and performance characteristics of the instrument along with a detailed description of the thickness gauging algorithm used in the device are presented.
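The two-point calibration this instrument requires amounts to a linear map between the two known thicknesses; a sketch with hypothetical sensor voltages (the instrument's actual output quantity and values are not given in the abstract):

```python
def two_point_calibration(v_thin, t_thin, v_thick, t_thick):
    """Linear two-point calibration: map a sensor reading to thickness
    using two reference samples bracketing the measurement range."""
    slope = (t_thick - t_thin) / (v_thick - v_thin)
    return lambda v: t_thin + slope * (v - v_thin)

# Hypothetical references: a 1.0 mm sample reads 0.20 V, a 3.0 mm reads 0.80 V.
to_thickness = two_point_calibration(0.20, 1.0, 0.80, 3.0)
t = to_thickness(0.50)   # mid-range reading, in mm
```

With this 2 mm calibration range, the quoted 2-3% of range accuracy would correspond to roughly ±0.04 to ±0.06 mm, which is why the abstract notes that a wider calibration range gives a larger absolute error.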
Guo, Ying; Little, Roderick J; McConnell, Daniel S
2012-01-01
Covariate measurement error is common in epidemiologic studies. Current methods for correcting measurement error with information from external calibration samples are insufficient to provide valid adjusted inferences. We consider the problem of estimating the regression of an outcome Y on covariates X and Z, where Y and Z are observed, X is unobserved, but a variable W that measures X with error is observed. Information about measurement error is provided in an external calibration sample where data on X and W (but not Y and Z) are recorded. We describe a method that uses summary statistics from the calibration sample to create multiple imputations of the missing values of X in the regression sample, so that the regression coefficients of Y on X and Z and associated standard errors can be estimated using simple multiple imputation combining rules, yielding valid statistical inferences under the assumption of a multivariate normal distribution. The proposed method is shown by simulation to provide better inferences than existing methods, namely the naive method, classical calibration, and regression calibration, particularly for correction for bias and achieving nominal confidence levels. We also illustrate our method with an example using linear regression to examine the relation between serum reproductive hormone concentrations and bone mineral density loss in midlife women in the Michigan Bone Health and Metabolism Study. Existing methods fail to adjust appropriately for bias due to measurement error in the regression setting, particularly when measurement error is substantial. The proposed method corrects this deficiency.
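Regression calibration, one of the comparison methods, replaces the error-prone measurement W with the fitted E[X|W] learned from the external calibration sample before running the outcome regression. A sketch with invented data (not the Michigan study values), showing only this comparison method, not the proposed multiple-imputation procedure:

```python
# External calibration sample: gold-standard X and error-prone W.
calib_w = [1.1, 2.0, 2.9, 4.2, 5.1]   # error-prone measurements
calib_x = [1.0, 2.1, 3.0, 4.0, 5.0]   # gold-standard values

# Ordinary least squares of X on W gives the calibration line E[X|W].
n = len(calib_w)
mw = sum(calib_w) / n
mx = sum(calib_x) / n
slope = sum((w - mw) * (x - mx) for w, x in zip(calib_w, calib_x)) / \
        sum((w - mw) ** 2 for w in calib_w)
intercept = mx - slope * mw

# In the main study only W is observed; substitute the calibrated value.
main_w = [1.5, 3.3, 4.8]
imputed_x = [slope * w + intercept for w in main_w]
```

The paper's point is that this single deterministic substitution understates uncertainty; multiple imputation instead draws several plausible X values per subject and combines the analyses, which is what restores valid standard errors.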
NASA Astrophysics Data System (ADS)
Verardo, E.; Atteia, O.; Rouvreau, L.
2015-12-01
In-situ bioremediation is a commonly used remediation technology to clean up the subsurface of petroleum-contaminated sites. Forecasting remedial performance (in terms of flux and mass reduction) is a challenge due to uncertainties associated with source properties and with the contribution and efficiency of concentration-reducing mechanisms. In this study, predictive uncertainty analysis of bioremediation system efficiency is carried out with the null-space Monte Carlo (NSMC) method, which combines the calibration solution-space parameters with an ensemble of null-space parameters, creating sets of calibration-constrained parameters for input to follow-on predictions of remedial efficiency. The first step in the NSMC methodology for uncertainty analysis is model calibration. The model was calibrated by matching simulated BTEX concentrations to a total of 48 observations from historical data before implementation of treatment. Two different bioremediation designs were then implemented in the calibrated model. The first consists of pumping/injection wells and the second of a permeable barrier coupled with infiltration across slotted piping. The NSMC method was used to calculate 1000 calibration-constrained parameter sets for the two different models. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. The first variant implementation of the NSMC is based on a single calibrated model. In the second variant, models were calibrated from different initial parameter sets, and NSMC calibration-constrained parameter sets were sampled from these different calibrated models. We demonstrate that, in the context of a nonlinear model, the second variant avoids underestimating parameter uncertainty, which could otherwise lead to a poor quantification of predictive uncertainty.
Application of the proposed approach to manage bioremediation of groundwater in a real site shows that it is effective to provide support in management of the in-situ bioremediation systems. Moreover, this study demonstrates that the NSMC method provides a computationally efficient and practical methodology of utilizing model predictive uncertainty methods in environmental management.
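The null-space idea behind NSMC can be shown with a toy model: perturbing parameters along directions the calibration data cannot constrain leaves the simulated observations, and hence the fit, unchanged while still exploring parameter uncertainty. A minimal sketch, not the PEST/NSMC implementation used in the study:

```python
import random
import statistics

# Toy model: one observation y = p1 + p2, calibrated to y_obs = 10.
# The calibration null space is the direction (1, -1): moving along it
# leaves the simulated observation (and thus the calibration fit) unchanged.
p_cal = (6.0, 4.0)   # one calibrated parameter set

def simulate(p):
    return p[0] + p[1]

random.seed(1)
ensemble = []
for _ in range(200):
    a = random.gauss(0.0, 1.0)               # random step in the null space
    ensemble.append((p_cal[0] + a, p_cal[1] - a))

# Every ensemble member still reproduces the calibration target.
misfits = [abs(simulate(p) - 10.0) for p in ensemble]
spread = statistics.pstdev([p[0] for p in ensemble])   # parameter uncertainty
```

In a real nonlinear model the null-space projection only approximately preserves the fit, which is why NSMC re-checks (and if necessary re-adjusts) each candidate parameter set, and why starting from several calibrated models, as in the study's second variant, widens the explored uncertainty.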
Zamora, D; Torres, A
2014-01-01
Reliable estimations of the evolution of water quality parameters by using in situ technologies make it possible to follow the operation of a wastewater treatment plant (WWTP), as well as improving the understanding and control of the operation, especially in the detection of disturbances. However, ultraviolet (UV)-Vis sensors have to be calibrated by means of a local fingerprint laboratory reference concentration-value data-set. The detection of outliers in these data-sets is therefore important. This paper presents a method for detecting outliers in UV-Vis absorbances coupled to water quality reference laboratory concentrations for samples used for calibration purposes. Application to samples from the influent of the San Fernando WWTP (Medellín, Colombia) is shown. After the removal of outliers, improvements in the predictability of the influent concentrations using absorbance spectra were found.
This SOP describes the procedures to set up, calibrate, initiate and terminate air sampling for persistent organic pollutants. This method is used to sample air, indoors and outdoors, at homes and at day care centers over a 48-hr period.
Chotimah, Chusnul; Sudjadi; Riyanto, Sugeng; Rohman, Abdul
2015-01-01
Purpose: Analysis of drugs in a multicomponent system is officially carried out using chromatographic techniques; however, these are laborious and involve sophisticated instrumentation. Therefore, UV-VIS spectrophotometry coupled with multivariate calibration by partial least squares (PLS) was developed for the quantitative analysis of metamizole, thiamin and pyridoxin in the presence of cyanocobalamine without any separation step. Methods: Calibration and validation samples were prepared. The calibration model was built from a series of sample mixtures containing these drugs in certain proportions. Cross-validation of the calibration samples using the leave-one-out technique was used to identify the smaller set of components that provides the greatest predictive ability. The evaluation of the calibration model was based on the coefficient of determination (R2) and the root mean square error of calibration (RMSEC). Results: The coefficient of determination (R2) for the relationship between actual and predicted values for all studied drugs was higher than 0.99, indicating good accuracy. The RMSEC values obtained were relatively low, indicating good precision. The accuracy and precision of the developed method showed no significant difference from those obtained by the official HPLC method. Conclusion: The developed method (UV-VIS spectrophotometry in combination with PLS) was successfully used for the analysis of metamizole, thiamin and pyridoxin in a tablet dosage form. PMID:26819934
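The RMSEC and leave-one-out cross-validation figures of merit used above can be illustrated for a univariate least-squares calibration (the multivariate PLS case follows the same logic); the absorbance/concentration pairs are invented:

```python
import math

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def rmse(pred, actual):
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred))

# Hypothetical absorbance/concentration calibration pairs for one analyte.
absorb = [0.11, 0.22, 0.29, 0.41, 0.50]
conc = [1.0, 2.0, 3.0, 4.0, 5.0]

slope, intercept = fit_line(absorb, conc)
rmsec = rmse([slope * a + intercept for a in absorb], conc)

# Leave-one-out cross-validation: refit without each sample in turn,
# then predict the held-out sample with the reduced model.
loo_pred = []
for i in range(len(absorb)):
    s, b = fit_line(absorb[:i] + absorb[i + 1:], conc[:i] + conc[i + 1:])
    loo_pred.append(s * absorb[i] + b)
rmsecv = rmse(loo_pred, conc)
```

The cross-validated error is always at least as large as the resubstitution RMSEC, which is why the leave-one-out criterion, rather than RMSEC alone, is used to choose the number of PLS components.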
Aleixandre-Tudo, José Luis; Nieuwoudt, Helené; Aleixandre, José Luis; Du Toit, Wessel J
2015-02-04
The validation of ultraviolet-visible (UV-vis) spectroscopy combined with partial least-squares (PLS) regression to quantify red wine tannins is reported. The methylcellulose precipitable (MCP) tannin assay and the bovine serum albumin (BSA) tannin assay were used as reference methods. To take the high variability of wine tannins into account when the calibration models were built, a diverse data set was collected from samples of South African red wines comprising 18 different cultivars, from regions spanning the wine grape-growing areas of South Africa with their various sites, climates, and soils, ranging in vintage from 2000 to 2012. A total of 240 wine samples were analyzed, and these were divided into a calibration set (n = 120) and a validation set (n = 120) to evaluate the predictive ability of the models. To test the robustness of the PLS calibration models, the predictive ability for the classifying variables cultivar, vintage year, and experimental versus commercial wines was also tested. In general, the statistics obtained when BSA was used as a reference method were slightly better than those obtained with MCP. Despite this, the MCP tannin assay should also be considered a valid reference method for developing PLS calibrations. The best calibration statistics for the prediction of new samples were a coefficient of correlation (R2val) = 0.89, root mean square error of prediction (RMSEP) = 0.16, and residual predictive deviation (RPD) = 3.49 for MCP, and R2val = 0.93, RMSEP = 0.08, and RPD = 4.07 for BSA, when only the UV region (260-310 nm) was selected, which also led to a faster analysis time. In addition, a difference in the results obtained when the predictive ability for the classifying variables vintage, cultivar, or commercial versus experimental wines was studied suggests that tannin composition is highly affected by many factors.
This study also discusses the correlations in tannin values between the methylcellulose and protein precipitation methods.
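The residual predictive deviation quoted above is simply the standard deviation of the reference values divided by the RMSEP of the predictions; a small sketch with illustrative numbers (not the wine data):

```python
import math

def rpd(reference, predicted):
    """Residual predictive deviation: SD of the reference values
    divided by the RMSEP of the predictions."""
    n = len(reference)
    mean = sum(reference) / n
    sd = math.sqrt(sum((y - mean) ** 2 for y in reference) / (n - 1))
    rmsep = math.sqrt(sum((y - p) ** 2
                          for y, p in zip(reference, predicted)) / n)
    return sd / rmsep

ref = [1.2, 1.8, 2.4, 3.0, 3.6]    # reference tannin values, illustrative
pred = [1.3, 1.7, 2.5, 2.9, 3.7]   # model predictions, illustrative
value = rpd(ref, pred)
```

An RPD above about 3, as reported for both the MCP and BSA models, indicates that prediction error is small relative to the natural spread of the property, i.e. the model genuinely discriminates among samples.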
Ouyang, Qin; Zhao, Jiewen; Chen, Quansheng
2015-01-01
The non-sugar solids (NSS) content is one of the most important nutrition indicators of Chinese rice wine. This study proposed a rapid method for the measurement of NSS content in Chinese rice wine using near infrared (NIR) spectroscopy. We also systematically studied efficient spectral variable selection algorithms for the modeling step. A new algorithm of synergy interval partial least squares with competitive adaptive reweighted sampling (Si-CARS-PLS) was proposed for modeling. The performance of the final model was evaluated using the root mean square error of calibration (RMSEC) and correlation coefficient (Rc) in the calibration set, and similarly tested using the root mean square error of prediction (RMSEP) and correlation coefficient (Rp) in the prediction set. The optimum model by the Si-CARS-PLS algorithm was achieved when 7 PLS factors and 18 variables were included, and the results were as follows: Rc = 0.95 and RMSEC = 1.12 in the calibration set, Rp = 0.95 and RMSEP = 1.22 in the prediction set. In addition, the Si-CARS-PLS algorithm showed its superiority when compared with commonly used algorithms in multivariate calibration. This work demonstrated that the NIR spectroscopy technique combined with a suitable multivariate calibration algorithm has high potential for rapid measurement of NSS content in Chinese rice wine.
Cárdenas, V; Cordobés, M; Blanco, M; Alcalà, M
2015-10-10
The pharmaceutical industry is under stringent regulations on quality control of its products because quality is critical for both the production process and consumer safety. Within the framework of "process analytical technology" (PAT), a complete understanding of the process and stepwise monitoring of manufacturing are required. Near infrared spectroscopy (NIRS) combined with chemometrics has lately proven efficient, useful and robust for pharmaceutical analysis. One crucial step in developing effective NIRS-based methodologies is selecting an appropriate calibration set to construct models affording accurate predictions. In this work, we developed calibration models for a pharmaceutical formulation during its three manufacturing stages: blending, compaction and coating. A novel methodology is proposed for selecting the calibration set (the "process spectrum"), into which physical changes in the samples at each stage are algebraically incorporated. We also established a "model space" defined by Hotelling's T(2) and Q-residuals statistics for outlier identification (inside/outside the defined space) in order to select objectively the factors to be used in calibration set construction. The results obtained confirm the efficacy of the proposed methodology for stepwise pharmaceutical quality control, and the relevance of the study as a guideline for the implementation of this easy and fast methodology in the pharma industry.
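The Hotelling's T(2)/Q-residuals "model space" can be sketched for a one-component PCA model: T(2) measures how far a sample sits within the model plane, while Q measures how far it sits off the plane. The loading vector and score variance below are illustrative, not values from the study:

```python
# One-component PCA "model space": a unit loading vector plus the score
# variance estimated from the calibration set (illustrative values).
loading = (0.8, 0.6)    # unit length: 0.64 + 0.36 = 1
score_var = 4.0         # variance of calibration-set scores

def t2_and_q(x):
    """Hotelling's T2 (distance inside the model plane) and Q residual
    (squared distance off the plane) for a mean-centred sample x."""
    t = x[0] * loading[0] + x[1] * loading[1]      # score on the component
    recon = (t * loading[0], t * loading[1])       # model reconstruction
    q = (x[0] - recon[0]) ** 2 + (x[1] - recon[1]) ** 2
    return t * t / score_var, q

inlier = (1.6, 1.2)     # lies exactly along the loading direction
outlier = (-1.2, 1.6)   # orthogonal to it
t2_in, q_in = t2_and_q(inlier)
t2_out, q_out = t2_and_q(outlier)
```

A sample is kept inside the "model space" only when both statistics fall below their calibration-set limits: a large T(2) flags an extreme but model-consistent sample, while a large Q flags a sample the model cannot describe at all.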
Sharma, H S S; Reinard, N
2004-12-01
Flax fiber must be mechanically prepared to improve fineness and homogeneity of the sliver before chemical processing and wet-spinning. The changes in fiber characteristics are monitored by an airflow method, which is labor intensive and requires 90 minutes to process one sample. This investigation was carried out to develop robust visible and near-infrared calibrations that can be used as a rapid tool for quality assessment of input fibers and of changes in fineness at the doubling (blending), first, second, third, and fourth drawing frames, and at the roving stage. The partial least squares (PLS) and principal component regression (PCR) methods were employed to generate models from different segments of the spectra (400-1100, 1100-1700, 1100-2498, 1700-2498, and 400-2498 nm) and a calibration set consisting of 462 samples obtained from the six processing stages. The calibrations were successfully validated with an independent set of 97 samples, and standard errors of prediction of 2.32 and 2.62 dtex were achieved with the best PLS (400-2498 nm) and PCR (1100-2498 nm) models, respectively. An optimized PLS model of the visible-near-infrared (vis-NIR) spectra explained 97% of the variation (R(2) = 0.97) in the sample set, with a standard error of calibration (SEC) of 2.45 dtex and a standard error of cross-validation (SECV) of 2.51 dtex (R(2) = 0.96). The mean error of the reference airflow method was 1.56 dtex, making it more accurate than the NIR calibration. The improvement in fiber fineness of the validation set obtained from the six production lines was predicted with an error range of -6.47 to +7.19 dtex for input fibers, -1.44 to +5.77 dtex for blended fibers at the doubling, and -4.72 to +3.59 dtex at the drawing frame stages. This level of precision is adequate for wet-spinners to monitor the fineness of input fibers and of fibers during preparation.
The advantage of vis-NIR spectroscopy is the potential capability of the technique to assess fineness and other important quality characteristics of a fiber sample simultaneously in less than 30 minutes; the disadvantages are the expensive instrumentation and the expertise required for operating the instrument compared to the reference method. These factors need to be considered by the industry before installing an off-line NIR system for predicting quality parameters of input materials and changes in fiber characteristics during mechanical processing.
Calibration of the clumped isotope thermometer for planktic foraminifers
NASA Astrophysics Data System (ADS)
Meinicke, N.; Ho, S. L.; Nürnberg, D.; Tripati, A. K.; Jansen, E.; Dokken, T.; Schiebel, R.; Meckler, A. N.
2017-12-01
Many proxies for past ocean temperature suffer from secondary influences or require species-specific calibrations that might not be applicable on longer time scales. Being thermodynamically based and thus independent of seawater composition, clumped isotopes in carbonates (Δ47) have the potential to circumvent such issues affecting other proxies and to provide reliable temperature reconstructions far back in time and in unknown settings. Although foraminifers are commonly used for paleoclimate reconstructions, their use for clumped isotope thermometry has so far been hindered by large sample-size requirements. Existing calibration studies suggest that data from a variety of foraminifer species agree with synthetic carbonate calibrations (Tripati et al., GCA, 2010; Grauel et al., GCA, 2013). However, these studies did not include a sufficient number of samples to fully assess the existence of species-specific effects, and data coverage was especially sparse in the low temperature range (<10 °C). To expand the calibration database of clumped isotopes in planktic foraminifers, especially for colder temperatures (<10 °C), we present new Δ47 data analysed on 14 species of planktic foraminifers from 13 sites, covering a temperature range of 1-29 °C. Our method allows for the analysis of smaller sample sizes (3-5 mg), and hence the measurement of multiple species from the same samples. We analyzed surface-dwelling (0-50 m) and deep-dwelling (habitat depth up to several hundred meters) planktic foraminifers from the same sites to evaluate species-specific effects and to assess the feasibility of temperature reconstructions for different water depths. We also assess the effect on the calibration of different techniques for estimating foraminifer calcification temperature. Finally, we compare our calibration to existing clumped isotope calibrations.
Our results confirm previous findings that indicate no species-specific effects on the Δ47-temperature relationship measured in planktic foraminifers.
NASA Astrophysics Data System (ADS)
Kelson, Julia R.; Huntington, Katharine W.; Schauer, Andrew J.; Saenger, Casey; Lechler, Alex R.
2017-01-01
Carbonate clumped isotope (Δ47) thermometry has been applied to a wide range of problems in earth, ocean and biological sciences over the last decade, but is still plagued by discrepancies among empirical calibrations that show a range of Δ47-temperature sensitivities. The most commonly suggested causes of these discrepancies are the method of mineral precipitation and analytical differences, including the temperature of phosphoric acid used to digest carbonates. However, these mechanisms have yet to be tested in a consistent analytical setting, which makes it difficult to isolate the cause(s) of discrepancies and to evaluate which synthetic calibration is most appropriate for natural samples. Here, we systematically explore the impact of synthetic carbonate precipitation by replicating precipitation experiments of previous workers under a constant analytical setting. We (1) precipitate 56 synthetic carbonates at temperatures of 4-85 °C using different procedures to degas CO2, with and without the use of the enzyme carbonic anhydrase (CA) to promote rapid dissolved inorganic carbon (DIC) equilibration; (2) digest samples in phosphoric acid at both 90 °C and 25 °C; (3) hold constant all analytical methods including acid preparation, CO2 purification, and mass spectrometry; and (4) reduce our data with 17O corrections that are appropriate for our samples. We find that the CO2 degassing method does not influence Δ47 values of these synthetic carbonates, and therefore probably only influences natural samples with very rapid degassing rates, like speleothems that precipitate out of drip solution with high pCO2. CA in solution does not influence Δ47 values in this work, suggesting that disequilibrium in the DIC pool is negligible. We also find the Δ47 values of samples reacted in 25 and 90 °C acid are within error of each other (once corrected with a constant acid fractionation factor).
Taken together, our results show that the Δ47-temperature relationship does not measurably change with either the precipitation methods used in this study or acid digestion temperature. This leaves phosphoric acid preparation, CO2 gas purification, and/or data reduction methods as the possible sources of the discrepancy among published calibrations. In particular, the use of appropriate 17O corrections has the potential to reduce disagreement among calibrations. Our study nearly doubles the available synthetic carbonate calibration data for Δ47 thermometry (adding 56 samples to the 74 previously published samples). This large population size creates a robust calibration that enables us to examine the potential for calibration slope aliasing due to small sample size. The similarity of Δ47 values among carbonates precipitated under such diverse conditions suggests that many natural samples grown at 4-85 °C in moderate pH conditions (6-10) may also be described by our Δ47-temperature relationship.
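Calibration lines of this kind are conventionally fit in 1/T² space. As a minimal illustration (the Δ47 values and fitted coefficients below are synthetic, not the paper's data), an ordinary least-squares fit and its inversion to recover temperature might look like:

```python
import numpy as np

# Standard calibration form: Delta47 = a * 10^6 / T^2 + b, T in kelvin.
# All numbers below are synthetic stand-ins for illustration only.
T_celsius = np.array([4.0, 25.0, 45.0, 65.0, 85.0])
T_kelvin = T_celsius + 273.15
x = 1e6 / T_kelvin**2                            # regressor: 10^6 / T^2
d47 = np.array([0.75, 0.67, 0.61, 0.56, 0.52])   # synthetic clumped-isotope values

# Ordinary least squares for slope a and intercept b
a, b = np.polyfit(x, d47, 1)

def d47_to_temperature(delta47):
    """Invert the fitted line: temperature in deg C implied by a Delta47 value."""
    T2 = 1e6 * a / (delta47 - b)
    return np.sqrt(T2) - 273.15
```

The inversion step is the part actually used on natural samples: a measured Δ47 is mapped back through the calibration to a temperature.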
Burns, J; Hou, S; Riley, C B; Shaw, R A; Jewett, N; McClure, J T
2014-01-01
Rapid, economical, and quantitative assays for measurement of camelid serum immunoglobulin G (IgG) are limited. In camelids, failure of transfer of maternal immunoglobulins has a reported prevalence of up to 20.5%, so an accurate method for quantifying serum IgG concentrations is required. The objective was to develop an infrared spectroscopy-based assay for measurement of alpaca serum IgG and compare its performance to the reference standard radial immunodiffusion (RID) assay. One hundred and seventy-five privately owned, healthy alpacas were studied. Eighty-two serum samples were collected as convenience samples during routine herd visits, whereas 93 samples were recruited from a separate study. Serum IgG concentrations were determined by RID assays and mid-infrared spectra were collected for each sample. Fifty samples were set aside as the test set, and the remaining 125 training samples were employed to build a calibration model using partial least squares (PLS) regression, with Monte Carlo cross-validation used to determine the optimum number of PLS factors. The predictive performance of the calibration model was evaluated on the test set. Correlation coefficients for the IR-based assay were 0.93 and 0.87, respectively, for the entire data set and the test set. Sensitivity in the diagnosis of failure of transfer of passive immunity (FTPI) ([IgG] <1,000 mg/dL) was 71.4% and specificity was 100% for the IR-based method (test set), as gauged relative to the RID reference method. This study indicated that infrared spectroscopy, in combination with chemometrics, is an effective method for measurement of IgG in alpaca serum. Copyright © 2014 by the American College of Veterinary Internal Medicine.
A Robust Bayesian Random Effects Model for Nonlinear Calibration Problems
Fong, Y.; Wakefield, J.; De Rosa, S.; Frahm, N.
2013-01-01
In the context of a bioassay or an immunoassay, calibration means fitting a curve, usually nonlinear, through the observations collected on a set of samples containing known concentrations of a target substance, and then using the fitted curve and observations collected on samples of interest to predict the concentrations of the target substance in these samples. Recent technological advances have greatly improved our ability to quantify minute amounts of substance from a tiny volume of biological sample. This has in turn led to a need to improve statistical methods for calibration. In this paper, we focus on developing calibration methods robust to dependent outliers. We introduce a novel normal mixture model with dependent error terms to model the experimental noise. In addition, we propose a re-parameterization of the five-parameter logistic nonlinear regression model that allows us to better incorporate prior information. We examine the performance of our methods with simulation studies and show that they lead to a substantial increase in performance measured in terms of mean squared error of estimation and a measure of the average prediction accuracy. A real data example from the HIV Vaccine Trials Network Laboratory is used to illustrate the methods. PMID:22551415
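A five-parameter logistic (5PL) curve of the kind re-parameterized in the paper is commonly written as f(x) = d + (a − d)/(1 + (x/c)^b)^g. Below is a minimal sketch of the classical least-squares baseline — fitting such a curve to a synthetic standard series and inverting it to predict concentration — not the authors' robust Bayesian method; all parameter values are made up:

```python
import numpy as np
from scipy.optimize import curve_fit

def fivepl(x, a, d, c, b, g):
    """Five-parameter logistic: response as a function of concentration."""
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

def fivepl_inverse(y, a, d, c, b, g):
    """Invert the fitted curve to predict concentration from a response."""
    return c * (((a - d) / (y - d)) ** (1.0 / g) - 1.0) ** (1.0 / b)

# Synthetic standard curve (known concentrations -> measured response)
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
true = (3.0, 0.05, 5.0, 1.2, 0.8)            # a, d, c, b, g (illustrative)
rng = np.random.default_rng(2)
resp = fivepl(conc, *true) * (1 + 0.01 * rng.normal(size=conc.size))

popt, _ = curve_fit(fivepl, conc, resp, p0=(3.0, 0.0, 4.0, 1.0, 1.0),
                    bounds=([0.1, -1.0, 0.1, 0.1, 0.1],
                            [10.0, 1.0, 100.0, 5.0, 5.0]),
                    maxfev=20000)

# Calibrate: map an observed response back to concentration (round trip)
x_hat = fivepl_inverse(fivepl(10.0, *popt), *popt)
```

The round trip demonstrates the calibration use of the curve: an observed response from a sample of interest is pushed back through the fitted curve to an estimated concentration.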
NASA Astrophysics Data System (ADS)
Grunert, Patrick; Rosenthal, Yair; Jorissen, Frans; Holbourn, Ann; Zhou, Xiaoli; Piller, Werner E.
2018-01-01
Costate species of Bulimina are cosmopolitan, infaunal benthic foraminifers which are common in the fossil record since the Paleogene. In the present study, we evaluate the temperature dependency of Mg/Ca ratios in Bulimina inflata, B. mexicana and B. costata from an extensive set of core-top samples from the Atlantic, Pacific and Indian Oceans. The results show no significant offset in Mg/Ca values between costate morphospecies when present in the same sample. The apparent lack of significant inter-specific/inter-morphotype differences amongst the analyzed costate buliminids allows for the combined use of their data-sets for our core-top calibration. Over a bottom-water temperature (BWT) range of 3-13 °C, the Bulimina species show a sensitivity of ∼0.12 mmol/mol/°C, which is comparable to that of epifaunal Cibicidoides species and higher than that of the shallow infaunal Uvigerina spp., the most commonly used taxon in Mg/Ca-based palaeotemperature reconstruction. The reliability and accuracy of the new Mg/Ca-temperature calibration are corroborated in the fossil record by a case study in the Timor Sea which demonstrates the presence of southern-sourced waters at intermediate depths for the past 26,000 years. Costate species of Bulimina might thus provide a valuable alternative for BWT reconstruction in mesotrophic to eutrophic settings where many of the commonly used (more oligotrophic) species are rare or absent, and be particularly useful in hypoxic settings such as permanent upwelling zones where costate buliminids often dominate foraminiferal assemblages. The evaluation further reveals a mean positive offset of ∼0.2 mmol/mol of the Atlantic data-set over the Indo-Pacific data-set which contributes to the scatter in our calibration. Although an explanation for this offset is not straightforward and further research is necessary, we hypothesize that different levels of export production and carbonate ion concentrations in pore waters are likely reasons.
Hao, Z Q; Li, C M; Shen, M; Yang, X Y; Li, K H; Guo, L B; Li, X Y; Lu, Y F; Zeng, X Y
2015-03-23
Laser-induced breakdown spectroscopy (LIBS) with partial least squares regression (PLSR) has been applied to measuring the acidity of iron ore, which can be defined by the concentrations of the oxides CaO, MgO, Al₂O₃, and SiO₂. With conventional internal standard calibration, it is difficult to establish calibration curves for CaO, MgO, Al₂O₃, and SiO₂ in iron ore due to serious matrix effects. PLSR is effective in addressing this problem owing to its excellent performance in compensating for matrix effects. In this work, fifty samples were used to construct the PLSR calibration models for the above-mentioned oxides. These calibration models were validated by 10-fold cross-validation, selecting the models with the minimum root-mean-square error (RMSE). Another ten samples were used as a test set. The acidities were calculated from the concentrations of CaO, MgO, Al₂O₃, and SiO₂ estimated with the PLSR models. The average relative error (ARE) and RMSE of the acidity were 3.65% and 0.0048, respectively, for the test samples.
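As an illustration of the final step, and assuming the common convention that acidity is the ratio of acidic oxides (SiO₂ + Al₂O₃) to basic oxides (CaO + MgO) — the paper may use a different definition — the acidity could be computed from PLSR-predicted oxide concentrations like so:

```python
def iron_ore_acidity(cao, mgo, al2o3, sio2):
    """Acidity from oxide mass fractions (percent).

    Assumes one common definition: ratio of acidic oxides
    (SiO2 + Al2O3) to basic oxides (CaO + MgO). This convention
    is an assumption, not taken from the paper.
    """
    return (sio2 + al2o3) / (cao + mgo)

# Hypothetical PLSR-predicted oxide concentrations (wt%)
acidity = iron_ore_acidity(cao=8.0, mgo=2.5, al2o3=1.2, sio2=5.3)
```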
40 CFR 92.119 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2011 CFR
2011-07-01
... performed: (i) According to the procedures outlined in Society of Automotive Engineers (SAE) paper No... operating adjustments. (B) Set the oven temperature 5 °C hotter than the required sample-line temperature...
Luo, Yu; Li, Wen-Long; Huang, Wen-Hua; Liu, Xue-Hua; Song, Yan-Gang; Qu, Hai-Bin
2017-05-01
A near infrared spectroscopy (NIRS) approach was established for quality control of the alcohol precipitation liquid in the manufacture of Codonopsis Radix. By applying NIRS with multivariate analysis, it was possible to build variation into the calibration sample set: the Plackett-Burman design, the Box-Behnken design, and a concentrating-diluting method were used to obtain a sample set covering sufficient fluctuation of the process parameters and extended concentration information. NIR data were calibrated to predict the four quality indicators using partial least squares regression (PLSR). For the four calibration models, the root mean square errors of prediction (RMSEP) were 1.22 μg/ml, 10.5 μg/ml, 1.43 μg/ml, and 0.433% for lobetyolin, total flavonoids, pigments, and total solid contents, respectively. The results indicated that multi-component quantification of the alcohol precipitation liquid of Codonopsis Radix can be achieved with an NIRS-based method, which offers a useful tool for real-time release testing (RTRT) of intermediates in the manufacture of Codonopsis Radix.
NASA Astrophysics Data System (ADS)
Caldwell, A.; Cossavella, F.; Majorovits, B.; Palioselitis, D.; Volynets, O.
2015-07-01
A pulse-shape discrimination method based on artificial neural networks was applied to pulses simulated for different background, signal and signal-like interactions inside a germanium detector. The simulated pulses were used to investigate variations of efficiencies as a function of the training set used. It is verified that neural networks are well suited to identify background pulses in true-coaxial high-purity germanium detectors. The systematic uncertainty on the signal recognition efficiency, derived using signal-like evaluation samples from calibration measurements, is estimated to be 5%. This uncertainty is due to differences between signal and calibration samples.
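A minimal sketch of this kind of neural-network pulse discrimination, using toy pulse-shape features and scikit-learn's MLP rather than the authors' detector simulation and network architecture (everything below is an illustrative assumption):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for simulated detector pulses: signal-like (1)
# vs background (0) events, summarized by toy pulse-shape features
# (e.g. rise time, width); the feature model is purely illustrative.
rng = np.random.default_rng(3)
n = 2000
labels = rng.integers(0, 2, size=n)
features = np.column_stack([
    rng.normal(loc=labels * 1.5, scale=1.0),
    rng.normal(loc=-labels * 1.0, scale=1.0),
])

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=4)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=5).fit(X_train, y_train)

# Signal-recognition efficiency on a signal-like evaluation sample
efficiency = clf.score(X_test[y_test == 1], y_test[y_test == 1])
```

In the paper's setting, the analogous efficiency is evaluated on signal-like calibration samples, and its spread across training sets gives the quoted systematic uncertainty.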
P.D. Jones; L.R. Schimleck; G.F. Peter; R.F. Daniels; A. Clark
2005-01-01
Preliminary studies based on small sample sets show that near infrared (NIR) spectroscopy has the potential for rapidly estimating many important wood properties. However, if NIR is to be used operationally, then calibrations using several hundred samples from a wide variety of growing conditions need to be developed and their performance tested on samples from new...
The purpose of this SOP is to describe the procedure for sampling personal air for metals and pesticides during a predetermined time period. The SOP includes the set up of the samplers for collection of either a metals sample or a pesticides sample, the calibration and initial c...
Guild, Georgia E.; Stangoulis, James C. R.
2016-01-01
Within the HarvestPlus program there are many collaborators currently using X-Ray Fluorescence (XRF) spectroscopy to measure Fe and Zn in their target crops. In India, five HarvestPlus wheat collaborators have laboratories that conduct this analysis and their throughput has increased significantly. The benefits of using XRF are its ease of use, minimal sample preparation and high throughput analysis. The lack of commercially available calibration standards has led to a need for alternative calibration arrangements for many of the instruments. Consequently, the majority of instruments have either been installed with an electronic transfer of an original grain calibration set developed by a preferred lab, or a locally supplied calibration. Unfortunately, neither of these methods has been entirely successful. The electronic transfer is unable to account for small variations between the instruments, whereas the use of a locally provided calibration set is heavily reliant on the accuracy of the reference analysis method, which is particularly difficult to achieve when analyzing low levels of micronutrient. Consequently, we have developed a calibration method that uses non-matrix matched glass disks. Here we present the validation of this method and show this calibration approach can improve the reproducibility and accuracy of whole grain wheat analysis on 5 different XRF instruments across the HarvestPlus breeding program. PMID:27375644
Meat mixture detection in Iberian pork sausages.
Ortiz-Somovilla, V; España-España, F; De Pedro-Sanz, E J; Gaitán-Jurado, A J
2005-11-01
Five homogenized meat mixture treatments of Iberian (I) and/or Standard (S) pork were set up. Each treatment was analyzed by NIRS as a fresh product (N=75) and as dry-cured sausage (N=75). Spectra acquisition was carried out using DA 7000 equipment (Perten Instruments), obtaining a total of 750 spectra. Several absorption peaks and bands were selected as the most representative for homogenized dry-cured and fresh sausages. Discriminant models and mixture prediction equations were developed from the spectral data gathered. The best results using discriminant models were for fresh products, with 98.3% (calibration) and 60% (validation) correct classification. For dry-cured sausages, 91.7% (calibration) and 80% (validation) of the samples were correctly classified. Models developed using mixture prediction equations showed SECV=4.7, r(2)=0.98 (calibration), and 73.3% of the validation set correctly classified for the fresh product. The corresponding values for dry-cured sausages were SECV=5.9, r(2)=0.99 (calibration), and 93.3% correctly classified for validation.
Knaack, Jennifer S; Zhou, Yingtao; Abney, Carter W; Prezioso, Samantha M; Magnuson, Matthew; Evans, Ronald; Jakubowski, Edward M; Hardy, Katelyn; Johnson, Rudolph C
2012-11-20
We have developed a novel immunomagnetic scavenging technique for extracting cholinesterase inhibitors from aqueous matrixes using biological targeting and antibody-based extraction. The technique was characterized using the organophosphorus nerve agent VX. The limit of detection for VX in high-performance liquid chromatography (HPLC)-grade water, defined as the lowest calibrator concentration, was 25 pg/mL in a small, 500 μL sample. The method was characterized over the course of 22 sample sets containing calibrators, blanks, and quality control samples. Method precision, expressed as the mean relative standard deviation, was less than 9.2% for all calibrators. Quality control sample accuracy was 102% and 100% of the mean for VX spiked into HPLC-grade water at concentrations of 2.0 and 0.25 ng/mL, respectively. This method was successfully applied to aqueous extracts from soil, hamburger, and finished tap water spiked with VX. Recovery was 65%, 81%, and 100% from these matrixes, respectively. Biologically based extractions of organophosphorus compounds represent a new technique for sample extraction that provides an increase in extraction specificity and sensitivity.
Goicoechea, Héctor C; Olivieri, Alejandro C; Tauler, Romà
2010-03-01
Correlation constrained multivariate curve resolution-alternating least-squares is shown to be a feasible method for processing first-order instrumental data and achieve analyte quantitation in the presence of unexpected interferences. Both for simulated and experimental data sets, the proposed method could correctly retrieve the analyte and interference spectral profiles and perform accurate estimations of analyte concentrations in test samples. Since no information concerning the interferences was present in calibration samples, the proposed multivariate calibration approach including the correlation constraint facilitates the achievement of the so-called second-order advantage for the analyte of interest, which is known to be present for more complex higher-order richer instrumental data. The proposed method is tested using a simulated data set and two experimental data systems, one for the determination of ascorbic acid in powder juices using UV-visible absorption spectral data, and another for the determination of tetracycline in serum samples using fluorescence emission spectroscopy.
Biaxial Anisotropic Material Development and Characterization using Rectangular to Square Waveguide
2015-03-26
Figure 29: Measurement setup with test port cables and network analyzer. ... The VNA and the waveguide adapters are torqued to specification with ... calibrated torque wrenches and waveguide flanges are aligned using precision alignment pins. A TRL calibration is performed prior to measuring the sample as ... set to 0.0001. This enables the frequency-domain solver to refine the mesh until the tolerance is achieved. Tightening the error tolerance results in ...
Brouckaert, D; Uyttersprot, J-S; Broeckx, W; De Beer, T
2018-03-01
Calibration transfer or standardisation aims at creating a uniform spectral response on different spectroscopic instruments or under varying conditions, without requiring a full recalibration for each situation. In the current study, this strategy is applied to construct at-line multivariate calibration models and consequently employ them in-line in a continuous industrial production line, using the same spectrometer. Firstly, quantitative multivariate models are constructed at-line at laboratory scale for predicting the concentration of two main ingredients in hard surface cleaners. By regressing the Raman spectra of a set of small-scale calibration samples against their reference concentration values, partial least squares (PLS) models are developed to quantify the surfactant levels in the liquid detergent compositions under investigation. After evaluating the models' performance with a set of independent validation samples, a univariate slope/bias correction is applied in view of transporting these at-line calibration models to an in-line manufacturing set-up. This standardisation technique allows a fast and easy transfer of the PLS regression models, by simply correcting the model predictions on the in-line set-up, without adjusting anything in the original multivariate calibration models. An extensive statistical analysis is performed in order to assess the predictive quality of the transferred regression models. Before and after transfer, the R² and RMSEP of both models are compared to evaluate whether their magnitudes are similar. T-tests are then performed to investigate whether the slope and intercept of the transferred regression line differ statistically from 1 and 0, respectively, and it is verified that no significant bias is present. F-tests are executed as well, for assessing the linearity of the transfer regression line and for investigating the statistical coincidence of the transfer and validation regression lines.
Finally, a paired t-test is performed to compare the original at-line model to the slope/bias corrected in-line model, using interval hypotheses. It is shown that the calibration models of Surfactant 1 and Surfactant 2 yield satisfactory in-line predictions after slope/bias correction. While Surfactant 1 passes seven out of eight statistical tests, the recommended validation parameters are 100% successful for Surfactant 2. It is hence concluded that the proposed strategy for transferring at-line calibration models to an in-line industrial environment via a univariate slope/bias correction of the predicted values offers a successful standardisation approach. Copyright © 2017 Elsevier B.V. All rights reserved.
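The univariate slope/bias correction itself is simple: regress the reference values against the at-line model's predictions on a small transfer set, then apply the fitted line to future in-line predictions. A sketch with synthetic numbers (the drift model below is an assumption for illustration):

```python
import numpy as np

def slope_bias_correction(y_pred_transfer, y_ref_transfer):
    """Fit a univariate slope/bias correction on a transfer set.

    Returns a function that corrects predictions made by the original
    (at-line) model so they match the new (in-line) conditions.
    """
    slope, bias = np.polyfit(y_pred_transfer, y_ref_transfer, 1)
    return lambda y_pred: slope * np.asarray(y_pred) + bias

# Hypothetical example: at-line PLS predictions drift on the in-line set-up
y_ref = np.array([5.0, 7.5, 10.0, 12.5, 15.0])   # reference surfactant levels (%)
y_pred = 0.9 * y_ref + 0.8                       # biased in-line predictions

correct = slope_bias_correction(y_pred, y_ref)
corrected = correct(y_pred)                      # slope/bias-corrected predictions
```

Note that only the predictions are adjusted; the multivariate PLS model itself stays untouched, which is exactly what makes the transfer cheap.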
NASA Astrophysics Data System (ADS)
Ouellette, G., Jr.; DeLong, K. L.
2016-02-01
High-resolution proxy records of sea surface temperature (SST) are increasingly being produced using trace element and isotope variability within the skeletal materials of marine organisms such as corals, mollusks, sclerosponges, and coralline algae. Translating the geochemical variations within these organisms into records of SST requires calibration against SST observations using linear regression methods, preferably with in situ SST records that span several years. However, locations with such records are sparse; therefore, calibration is often accomplished using gridded SST data products such as the Hadley Centre's HadSST (5°) and interpolated HadISST (1°) data sets, NOAA's extended reconstructed SST data set (ERSST; 2°), optimum interpolation SST (OISST; 1°), and the Kaplan SST data set (5°). From these data products, the SST used for proxy calibration is obtained for a single grid cell that includes the proxy's study site. The gridded data sets are based on the International Comprehensive Ocean-Atmosphere Data Set (ICOADS), and each uses different methods of interpolation to produce globally and temporally complete data products, except for HadSST, which is quality controlled but not interpolated. This study compares SST for a single site from these gridded data products, a high-resolution satellite-based SST data set from NOAA (Pathfinder; 4 km), in situ SST data, and coral Sr/Ca variability at our study site in Haiti to assess differences between these SST records, with a focus on seasonal variability. Our results indicate substantial differences, on the order of 1-3 °C, in the seasonal variability captured for the same site among these data sets. This analysis suggests that, of the data products, high-resolution satellite SST best captured seasonal variability at the study site. Unfortunately, satellite SST records are limited to the past few decades. If satellite SST records are to be used to calibrate proxy records, collecting modern, living samples is desirable.
NASA Astrophysics Data System (ADS)
Lazic, V.; De Ninno, A.
2017-11-01
Laser-induced plasma spectroscopy was applied to particles attached to a substrate, a silica wafer covered with a thin oil film. The substrate itself interacts weakly with a ns Nd:YAG laser (1064 nm), while the presence of particles strongly enhances the plasma emission, detected here by a compact spectrometer array. Variations of the sample mass from one laser spot to another exceed one order of magnitude, as estimated by on-line photography and an initial image calibration for different sample loadings. Consequently, the spectral lines from particles show extreme intensity fluctuations from one sampling point to another, ranging between the detection threshold and the detector's saturation in some cases. Under such conditions, the common calibration approach based on averaged spectra, also when considering ratios of the element lines, i.e. concentrations, produces errors too large for measuring the sample composition. On the other hand, the intensities of an analytical and a reference line from single-shot spectra are linearly correlated. The corresponding slope depends on the concentration ratio and is weakly sensitive to fluctuations of the plasma temperature within the data set. Using the slopes to construct the calibration graphs significantly reduces the error bars, but it does not eliminate the point scattering caused by the matrix effect, which is also responsible for large differences in the average plasma temperatures among the samples. Well-aligned calibration points were obtained after identifying couples of transitions less sensitive to variations of the plasma temperature, which was achieved by simple theoretical simulations. Such a selection of the analytical lines minimizes the matrix effect and, together with the chosen calibration approach, allows measuring the relative element concentrations even in highly unstable laser-induced plasmas.
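The slope-based calibration can be illustrated with synthetic single-shot intensities (all values below are made up): the ablated mass varies wildly shot-to-shot, but analyte and reference line intensities stay proportional, so their regression slope tracks the concentration ratio far more robustly than the ratio of averaged spectra.

```python
import numpy as np

rng = np.random.default_rng(6)
n_shots = 200
mass = rng.lognormal(mean=0.0, sigma=1.0, size=n_shots)  # >1 order of magnitude spread
conc_ratio = 0.4                                         # analyte/reference (synthetic)

# Single-shot line intensities: both scale with the ablated mass,
# each carrying independent ~5% measurement noise
I_ref = 1000.0 * mass * (1 + 0.05 * rng.normal(size=n_shots))
I_analyte = conc_ratio * I_ref * (1 + 0.05 * rng.normal(size=n_shots))

# Slope of analytical vs reference line over the single-shot spectra,
# used as the calibration quantity in place of averaged-spectrum ratios
slope, intercept = np.polyfit(I_ref, I_analyte, 1)
```

Plotting slope against known concentration ratio for a set of reference samples would then give the calibration graph described above.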
Xue, Gang; Song, Wen-qi; Li, Shu-chao
2015-01-01
In order to achieve rapid identification of fire-resistive coatings for steel structures of different brands in circulation, a new method for fast discrimination of varieties of fire-resistive coating by near infrared spectroscopy was proposed. A raster-scanning near infrared spectroscopy instrument and near infrared diffuse reflectance spectroscopy were used to collect the spectral curves of different brands of fire-resistive coating, and the spectral data were preprocessed with standard normal variate (SNV) transformation and the Norris second derivative. Principal component analysis (PCA) was applied to the near infrared spectra for cluster analysis. The analysis showed that the cumulative reliability of PC1 to PC5 was 99.791%. A 3-dimensional plot was drawn with the scores of PC1, PC2 and PC3×10, which appeared to provide the best clustering of the varieties of fire-resistive coating. A total of 150 fire-resistive coating samples were divided randomly into a calibration set and a validation set; the calibration set had 125 samples with 25 of each variety, and the validation set had 25 samples with 5 of each variety. Based on the principal component scores of unknown samples, Mahalanobis distances between each variety and the unknown samples were calculated to discriminate the different varieties. In external verification, the qualitative analysis model achieved a 100% recognition ratio for unknown samples. The results demonstrated that this method can rapidly and accurately identify the classification of fire-resistive coatings for steel structures and provide a technical reference for market regulation.
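The classification step — PCA scores plus Mahalanobis distance to each variety's centroid — can be sketched as follows, with synthetic spectra in place of the NIR measurements; the use of a single global covariance in score space is a simplifying assumption (per-class covariances are also common):

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.spatial.distance import mahalanobis

# Synthetic stand-in for NIR spectra of several coating "brands"
rng = np.random.default_rng(7)
brands, per_brand, p = 5, 25, 100
centers = rng.normal(scale=3.0, size=(brands, p))
X = np.vstack([c + rng.normal(scale=0.5, size=(per_brand, p)) for c in centers])
y = np.repeat(np.arange(brands), per_brand)

pca = PCA(n_components=5).fit(X)
scores = pca.transform(X)

# Per-brand mean and global inverse covariance in PC-score space
means = np.array([scores[y == k].mean(axis=0) for k in range(brands)])
cov_inv = np.linalg.inv(np.cov(scores, rowvar=False))

def classify(spectrum):
    """Assign an unknown spectrum to the brand with smallest Mahalanobis distance."""
    s = pca.transform(spectrum.reshape(1, -1))[0]
    d = [mahalanobis(s, m, cov_inv) for m in means]
    return int(np.argmin(d))

unknown = centers[2] + rng.normal(scale=0.5, size=p)  # sample near brand 2
```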
NASA Astrophysics Data System (ADS)
Rahn, Helene; Alexiou, Christoph; Trahms, Lutz; Odenbach, Stefan
2014-06-01
X-ray computed tomography is nowadays used for a wide range of applications in medicine, science and technology. X-ray microcomputed tomography (XμCT) follows the same principles used for conventional medical CT scanners, but improves the spatial resolution to a few micrometers. We present an example of an application of X-ray microtomography: a study of the 3-dimensional biodistribution, along with the quantification, of nanoparticle content in tumoral tissue after minimally invasive cancer therapy. One of these minimally invasive cancer treatments is magnetic drug targeting, where magnetic nanoparticles are used as controllable drug carriers. The quantification is based on a calibration of the XμCT equipment. The developed calibration procedure is based on a phantom system which allows the discrimination between the various gray values of the data set. These phantoms consist of a biological tissue substitute and magnetic nanoparticles. The phantoms have been studied with XμCT and have been examined magnetically. The obtained gray values and nanoparticle concentrations lead to a calibration curve. This curve can be applied to tomographic data sets. Accordingly, this calibration enables a voxel-wise assignment of gray values in the digital tomographic data set to nanoparticle content. Thus, the calibration procedure enables a 3-dimensional study of nanoparticle distribution as well as concentration.
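Assuming, as an illustration, a monotonic calibration curve, the voxel-wise mapping from gray value to nanoparticle concentration can be sketched with hypothetical phantom values:

```python
import numpy as np

# Hypothetical calibration curve from phantom measurements:
# mean gray value vs magnetic-nanoparticle concentration (mg/mL)
gray_cal = np.array([80.0, 110.0, 150.0, 200.0, 260.0])
conc_cal = np.array([0.0, 2.0, 5.0, 10.0, 17.0])

def gray_to_concentration(volume):
    """Map gray values to concentrations voxel-wise by interpolation
    along the (assumed monotonic) phantom calibration curve."""
    return np.interp(volume, gray_cal, conc_cal)

# Toy 3-D tomographic data set of gray values (1 x 2 x 2 voxels)
volume = np.array([[[80.0, 150.0], [200.0, 260.0]]])
conc_map = gray_to_concentration(volume)   # same shape, concentration per voxel
```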
Polarization Imaging Apparatus with Auto-Calibration
NASA Technical Reports Server (NTRS)
Zou, Yingyin Kevin (Inventor); Zhao, Hongzhi (Inventor); Chen, Qiushui (Inventor)
2013-01-01
A polarization imaging apparatus measures the Stokes image of a sample. The apparatus consists of an optical lens set, a first variable phase retarder (VPR) with its optical axis aligned at 22.5 deg, a second variable phase retarder with its optical axis aligned at 45 deg, a linear polarizer, an imaging sensor for sensing the intensity images of the sample, a controller and a computer. The two variable phase retarders were controlled independently by a computer through a controller unit which generates a sequence of voltages to control the phase retardations of the first and second variable phase retarders. An auto-calibration procedure was incorporated into the polarization imaging apparatus to correct the misalignment of the first and second VPRs, as well as the half-wave voltage of the VPRs. A set of four intensity images, I(sub 0), I(sub 1), I(sub 2) and I(sub 3), of the sample were captured by the imaging sensor when the phase retardations of the VPRs were set at (0,0), (pi,0), (pi,pi) and (pi/2,pi), respectively. Then the four Stokes components of a Stokes image, S(sub 0), S(sub 1), S(sub 2) and S(sub 3), were calculated using the four intensity images.
Polarization imaging apparatus with auto-calibration
Zou, Yingyin Kevin; Zhao, Hongzhi; Chen, Qiushui
2013-08-20
A polarization imaging apparatus measures the Stokes image of a sample. The apparatus consists of an optical lens set, a first variable phase retarder (VPR) with its optical axis aligned at 22.5.degree., a second variable phase retarder with its optical axis aligned at 45.degree., a linear polarizer, an imaging sensor for sensing the intensity images of the sample, a controller and a computer. The two variable phase retarders were controlled independently by a computer through a controller unit which generates a sequence of voltages to control the phase retardations of the first and second variable phase retarders. An auto-calibration procedure was incorporated into the polarization imaging apparatus to correct the misalignment of the first and second VPRs, as well as the half-wave voltage of the VPRs. A set of four intensity images, I.sub.0, I.sub.1, I.sub.2 and I.sub.3, of the sample were captured by the imaging sensor when the phase retardations of the VPRs were set at (0,0), (.pi.,0), (.pi.,.pi.) and (.pi./2,.pi.), respectively. Then the four Stokes components of a Stokes image, S.sub.0, S.sub.1, S.sub.2 and S.sub.3, were calculated using the four intensity images.
NASA Astrophysics Data System (ADS)
Fournier, A.; Morzfeld, M.; Hulot, G.
2013-12-01
For a suitable choice of parameters, the system of three ordinary differential equations (ODE) presented by Gissinger [1] was shown to exhibit chaotic reversals whose statistics compared well with those from the paleomagnetic record. In order to further assess the geophysical relevance of this low-dimensional model, we resort to data assimilation methods to calibrate it using reconstructions of the fluctuation of the virtual axial dipole moment spanning the past 2 million years. Moreover, we test to what extent a properly calibrated model could possibly be used to predict a reversal of the geomagnetic field. We calibrate the ODE model to the geomagnetic field over the past 2 Ma using the SINT data set of Valet et al. [2]. To this end, we consider four data assimilation algorithms: the ensemble Kalman filter (EnKF), a variational method and two Monte Carlo (MC) schemes, prior importance sampling and implicit sampling. We observe that EnKF performs poorly and that prior importance sampling is inefficient. We obtain the most accurate reconstructions of the geomagnetic data using implicit sampling with five data points per assimilation sweep (of duration 5 kyr). The variational scheme performs equally well, but it does not provide us with quantitative information about the uncertainty of the estimates, which makes this method difficult to use for robust prediction under uncertainty. A calibration of the model using the PADM2M data set of Ziegler et al. [3] confirms these findings. We study the predictive capability of the ODE model using statistics computed from synthetic data experiments. For each experiment, we produce 2 Myr of synthetic data (with error levels similar to the ones found in real data), then calibrate the model to this record and then check if this calibrated model can correctly and reliably predict a reversal within the next 10 kyr (say).
By performing 100 such experiments, we can assess how reliably our calibrated model can predict a (non-) reversal. It is found that the 5 kyr ahead predictions of reversals produced by the model appear to be accurate and reliable. These encouraging results prompted us to also test predictions of the five reversals of the SINT (and PADM2M) data set, using a similarly calibrated model. Results will be presented and discussed. [1] Gissinger, C., 2012, A new deterministic model for chaotic reversals, European Physical Journal B, 85:137. [2] Valet, J.-P., Meynadier, L. and Guyodo, Y., 2005, Geomagnetic field strength and reversal rate over the past 2 Million years, Nature, 435, 802-805. [3] Ziegler, L. B., Constable, C. G., Johnson, C. L. and Tauxe, L., 2011, PADM2M: a penalized maximum likelihood model of the 0-2 Ma paleomagnetic axial dipole moment, Geophysical Journal International, 184, 1069-1089.
Song, Tao; Zhang, Feng-ping; Liu, Yao-min; Wu, Zong-wen; Suo, You-rui
2012-08-01
In the present research, a novel method was established for the determination of five fatty acids in soybean oil by transmission reflection-near infrared spectroscopy. The optimum conditions for the mathematical models of the five components (C16:0, C18:0, C18:1, C18:2 and C18:3) were studied, including sample set selection, chemical value analysis, and the detection methods and conditions. Chemical values were determined by gas chromatography. One hundred fifty-eight samples were selected: 138 for the modeling set, 10 for the testing set and 10 for the unknown sample set. All samples were placed in sample pools and scanned by transmission reflection-near infrared spectroscopy after ultrasonic cleaning for 10 minutes. The 1100-2500 nm spectral region was analyzed, with an acquisition interval of 2 nm. The modified partial least squares method was chosen for building the calibration model. Results demonstrated that the 1-VR values of the five fatty acids, between the reference values of the modeling sample set and the near-infrared predicted values, were 0.8839, 0.5830, 0.9001, 0.9776 and 0.9596, respectively, and the corresponding SECV values were 0.42, 0.29, 0.83, 0.46 and 0.21. The standard errors of calibration of the five fatty acids between the reference values of the testing sample set and the near-infrared predicted values were 0.891, 0.790, 0.900, 0.976 and 0.942, respectively. It was shown that the near-infrared predicted values were linearly related to the chemical values and that the mathematical models established for the fatty acids of soybean oil were feasible. For validation, 10 unknown samples were analyzed by near-infrared spectroscopy. The results demonstrated that the relative standard deviation between predicted and chemical values was less than 5.50%, indicating that transmission reflection-near infrared spectroscopy has good accuracy for the analysis of fatty acids in soybean oil.
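As a sketch of how such a PLS calibration is built, here is a minimal PLS1 (NIPALS) implementation in numpy; standard PLS stands in for the modified PLS algorithm used in the study, and all data below are synthetic:

```python
import numpy as np

def pls1_calibrate(X, y, n_components):
    """Fit a PLS1 calibration (NIPALS) and return a prediction function.

    X rows are spectra, y the reference values (e.g. GC fatty-acid contents).
    Standard PLS1 stands in here for the modified PLS used in the study.
    """
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xk, yk = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk                  # weight: covariance direction
        w /= np.linalg.norm(w)
        t = Xk @ w                     # score
        tt = t @ t
        p = Xk.T @ t / tt              # X loading
        q.append(yk @ t / tt)          # y loading
        Xk = Xk - np.outer(t, p)       # deflate X
        yk = yk - q[-1] * t            # deflate y
        W.append(w)
        P.append(p)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    b = W @ np.linalg.solve(P.T @ W, q)  # regression vector in X space
    return lambda Xnew: (Xnew - x_mean) @ b + y_mean
```

The returned closure predicts reference values for new spectra after mean-centering with the training means.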
NASA Astrophysics Data System (ADS)
Guerrero, C.; Zornoza, R.; Gómez, I.; Mataix-Solera, J.; Navarro-Pedreño, J.; Mataix-Beneyto, J.; García-Orenes, F.
2009-04-01
Near infrared (NIR) reflectance spectroscopy offers important advantages because it is a non-destructive technique, sample pre-treatment is minimal, and the spectrum of a sample is obtained in less than 1 minute without the need for chemical reagents. For these reasons, NIR is a fast and cost-effective method. Moreover, NIR allows the analysis of several constituents or parameters simultaneously from the same spectrum once it is obtained. A necessary step for this is the development of soil spectral libraries (sets of samples analysed and scanned) and calibrations (using multivariate techniques). The calibrations should contain the variability of the target site soils in which the calibration is to be used. Often this premise is not easy to fulfil, especially in recently developed libraries. A classical way to solve this problem is through the repopulation of libraries and the subsequent recalibration of the models. In this work we studied the changes in the accuracy of the predictions as a consequence of successively adding samples during repopulation. In general, calibrations with a high number of samples and high diversity are desired. But we hypothesized that calibrations with fewer samples (smaller size) would absorb the spectral characteristics of the target site more easily. Thus, we suspected that the size of the calibration (model) to be repopulated could be important, and we also studied this effect on the accuracy of predictions of the repopulated models. In this study we used those spectra of our library which contained data on soil Kjeldahl nitrogen (NKj) content (nearly 1500 samples). First, the spectra from the target site were removed from the spectral library. Then, different quantities of samples from the library were selected (representing 5, 10, 25, 50, 75 and 100% of the total library). These samples were used to develop calibrations of different sizes.
We used partial least squares regression with leave-one-out cross-validation as the calibration method. Two methods were used to select the different quantities (model sizes) of samples: (1) Based on Characteristics of Spectra (BCS), and (2) Based on NKj Values of Samples (BVS). Both methods aimed to select representative samples. Each of the calibrations (containing 5, 10, 25, 50, 75 or 100% of the total samples of the library) was repopulated with samples from the target site and then recalibrated (by leave-one-out cross-validation). This procedure was sequential: in each step, 2 samples from the target site were added to the models, which were then recalibrated. This process was repeated 10 times, for a total of 20 added samples. A local model was also created with the 20 samples used for repopulation. The repopulated, non-repopulated and local calibrations were used to predict the NKj content in those samples from the target site not included in repopulation. To measure the accuracy of the predictions, the r2, RMSEP and slopes were calculated by comparing predicted with analysed NKj values. This scheme was repeated for each of the four target sites studied. In general, few differences were found between results obtained with BCS and BVS models. We observed that the repopulation of models increased the r2 of the predictions in sites 1 and 3. Repopulation caused scarce changes in the r2 of the predictions in sites 2 and 4, possibly due to the high initial values (using non-repopulated models, r2 > 0.90). As a consequence of repopulation, the RMSEP decreased in all sites except site 2, where a very low RMSEP was obtained before repopulation (0.4 g kg-1). The slopes tended to approach 1, but this value was reached only in site 4 and only after repopulation with 20 samples. In sites 3 and 4, accurate predictions were obtained using the local models.
Predictions obtained with models of similar size (similar %) were averaged with the aim of describing the main patterns. Predictions obtained with larger models were not more accurate (in terms of r2) than those obtained with smaller models. After repopulation, the RMSEP of predictions using models of smaller size (5, 10 and 25% of the samples of the library) was lower than the RMSEP obtained with larger sizes (75 and 100%), indicating that small models can more easily integrate the variability of the soils from the target site. The results suggest that calibrations of small size can be repopulated and "converted" into local calibrations. Accordingly, most of the effort can be focused on obtaining highly accurate analytical values for a reduced set of samples (including some samples from the target sites). The patterns observed here are in opposition to the idea of global models. These results could encourage the expansion of this technique, because very large databases seem not to be needed. Future studies with very different samples will help to confirm the robustness of the observed patterns. The authors acknowledge "Bancaja-UMH" for the financial support of the project "NIRPROS".
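The repopulation experiment can be sketched as a loop that adds target-site samples to a library calibration in steps of two and tracks RMSEP on held-out target-site samples. Ordinary least squares stands in here for the study's PLS models, and all names and data are illustrative:

```python
import numpy as np

def repopulation_rmsep(X_lib, y_lib, X_new, y_new, X_test, y_test, step=2):
    """Track RMSEP as target-site samples are added to a library calibration.

    X_lib/y_lib   : library calibration samples
    X_new/y_new   : target-site samples added 'step' at a time
    X_test/y_test : held-out target-site samples for prediction
    Ordinary least squares stands in for the PLS models of the study.
    """
    rmseps = []
    for k in range(0, len(y_new) + 1, step):
        Xk = np.vstack([X_lib, X_new[:k]])
        yk = np.concatenate([y_lib, y_new[:k]])
        Xa = np.column_stack([np.ones(len(yk)), Xk])   # add intercept
        beta, *_ = np.linalg.lstsq(Xa, yk, rcond=None)
        pred = np.column_stack([np.ones(len(y_test)), X_test]) @ beta
        rmseps.append(np.sqrt(np.mean((pred - y_test) ** 2)))
    return rmseps
```

With a target site whose samples are systematically offset from the library, the returned RMSEP sequence shrinks as repopulation proceeds, mirroring the pattern reported for sites 1 and 3.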
Switzer, P.; Harden, J.W.; Mark, R.K.
1988-01-01
A statistical method for estimating rates of soil development in a given region based on calibration from a series of dated soils is used to estimate ages of soils in the same region that are not dated directly. The method is designed specifically to account for sampling procedures and uncertainties that are inherent in soil studies. Soil variation and measurement error, uncertainties in calibration dates and their relation to the age of the soil, and the limited number of dated soils are all considered. Maximum likelihood (ML) is employed to estimate a parametric linear calibration curve, relating soil development to time or age on suitably transformed scales. Soil variation on a geomorphic surface of a certain age is characterized by replicate sampling of soils on each surface; such variation is assumed to have a Gaussian distribution. The age of a geomorphic surface is described by older and younger bounds. This technique allows age uncertainty to be characterized by either a Gaussian distribution or by a triangular distribution using minimum, best-estimate, and maximum ages. The calibration curve is taken to be linear after suitable (in certain cases logarithmic) transformations, if required, of the soil parameter and age variables. Soil variability, measurement error, and departures from linearity are described in a combined fashion using Gaussian distributions with variances particular to each sampled geomorphic surface and the number of sample replicates. Uncertainty in age of a geomorphic surface used for calibration is described using three parameters by one of two methods. In the first method, upper and lower ages are specified together with a coverage probability; this specification is converted to a Gaussian distribution with the appropriate mean and variance. 
In the second method, "absolute" older and younger ages are specified together with a most probable age; this specification is converted to an asymmetric triangular distribution with mode at the most probable age. The statistical variability of the ML-estimated calibration curve is assessed by a Monte Carlo method in which simulated data sets are repeatedly drawn from the distributional specification; calibration parameters are reestimated for each such simulation in order to assess their statistical variability. Several examples are used for illustration. The age of undated soils in a related setting may be estimated from the soil data using the fitted calibration curve. A second simulation to assess age estimate variability is described and applied to the examples. © 1988 International Association for Mathematical Geology.
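The Monte Carlo assessment of calibration-curve variability can be sketched as follows for the Gaussian age-uncertainty case (the triangular case is analogous); all inputs are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_calibration(ages_mean, ages_sd, soil_mean, soil_sd, n_sims=400):
    """Monte Carlo spread of a linear calibration fitted to uncertain data.

    Each simulation perturbs both the surface ages and the soil-property
    values with Gaussian errors, refits the line, and records the
    parameters; the spread of the collected slopes/intercepts estimates
    the statistical variability of the calibration curve.
    """
    slopes, intercepts = [], []
    for _ in range(n_sims):
        t = rng.normal(ages_mean, ages_sd)     # perturbed ages
        s = rng.normal(soil_mean, soil_sd)     # perturbed soil values
        b, a = np.polyfit(t, s, 1)             # slope, intercept
        slopes.append(b)
        intercepts.append(a)
    return np.array(slopes), np.array(intercepts)
```

The standard deviations of the returned arrays play the role of the parameter uncertainties assessed in the paper.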
Wang, Jun; Kliks, Michael M; Jun, Soojin; Jackson, Mel; Li, Qing X
2010-03-01
Quantitative analysis of glucose, fructose, sucrose, and maltose in honey samples of different geographic origins using Fourier transform infrared (FTIR) spectroscopy and chemometrics, such as partial least squares (PLS) and principal component regression, was studied. The calibration series consisted of 45 standard mixtures made up of glucose, fructose, sucrose, and maltose. There were distinct peak variations of all sugar mixtures in the spectral "fingerprint" region between 1500 and 800 cm(-1). The calibration model was successfully validated using 7 synthetic blend sets of sugars. The PLS 2nd-derivative model showed the highest degree of prediction accuracy, with an R(2) value of 0.999. Along with canonical variate analysis, the calibration model was further validated by high-performance liquid chromatography measurements of commercial honey samples, demonstrating that FTIR can qualitatively and quantitatively determine the presence of glucose, fructose, sucrose, and maltose in honey samples from multiple regions.
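The 2nd-derivative preprocessing behind the best-performing PLS model can be sketched with plain numerical differentiation (the study's actual software and smoothing parameters are not specified here):

```python
import numpy as np

def second_derivative(spectra, wavenumbers):
    """Numerical 2nd-derivative preprocessing of spectra.

    spectra     : array (n_samples, n_points), one spectrum per row
    wavenumbers : array (n_points,), the spectral axis
    Differentiating twice suppresses baseline offsets and linear slopes,
    which is why derivative spectra often improve PLS calibrations.
    """
    d1 = np.gradient(spectra, wavenumbers, axis=1)
    return np.gradient(d1, wavenumbers, axis=1)
```

In practice a smoothing derivative (e.g. Savitzky-Golay) is usually preferred to plain finite differences on noisy spectra; the plain version is shown only to make the idea concrete.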
Coedo, A G; Padilla, I; Dorado, M T
2004-12-01
This paper describes a study designed to determine the possibility of using a dried aerosol solution for calibration in laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS). The relative sensitivities of tested materials mobilized by laser ablation and by aqueous nebulization were established, and the experimentally determined relative sensitivity factors (RSFs) were used in conjunction with aqueous calibration for the analysis of solid steel samples. For this purpose, a set of CRM carbon steel samples (SS-451/1 to SS-460/1) was sampled into an ICP-MS instrument by solution nebulization, using a microconcentric nebulizer with membrane desolvation (D-MCN), and by laser ablation (LA). Both systems were applied with the same ICP-MS operating parameters and the analyte signals were compared. The RSF (desolvated aerosol response/ablated solid response) values were close to 1 for the analytes Cr, Ni, Co, V, and W, about 1.3 for Mo, and 1.7 for As, P, and Mn. Complementary tests were carried out using CRM SS-455/1 as a solid standard for one-point calibration, applying LAMTRACE software for data reduction and quantification. The analytical results are in good agreement with the certified values in all cases, showing that calibration with dried aerosol solutions is a good alternative for laser ablation sampling.
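Applying an RSF to bridge aqueous calibration and solid sampling reduces to a one-line correction; a hedged sketch with illustrative function and parameter names:

```python
def concentration_from_la(signal_la, rsf, aqueous_slope):
    """Convert a laser-ablation analyte signal to concentration.

    signal_la     : net analyte signal from the ablated solid
    rsf           : relative sensitivity factor
                    (dried-aerosol response / ablated-solid response)
    aqueous_slope : slope of the aqueous (dried-aerosol) calibration,
                    in signal units per concentration unit

    Multiplying by the RSF rescales the solid-sampling response onto the
    aqueous calibration before the calibration slope is applied.
    """
    return signal_la * rsf / aqueous_slope
```

For elements with RSF close to 1 (Cr, Ni, Co, V, W in the study), the correction is nearly neutral; for As, P and Mn (RSF about 1.7) it matters.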
NASA Astrophysics Data System (ADS)
Bennett, B. N.; Martin, M. Z.; Leonard, D. N.; Garlea, E.
2018-03-01
Handheld laser-induced breakdown spectroscopy (HH LIBS) was used to study the elemental composition of four copper alloys and four aluminum alloys to produce calibration curves. The HH LIBS instrument used is a commercially available SciAps Z-500, which contains a class-1 solid-state laser with an output wavelength of 1532 nm, a laser energy of 5 mJ/pulse, and a pulse duration of 5 ns. Test samples were solid specimens of copper and aluminum alloys, and data were collected from the samples' surfaces at three different locations, employing a 12-point grid pattern for each data set. The three spectral data sets were averaged, and the background-subtracted intensities were used to produce the elemental calibration curves. Calibration curves are presented for the matrix elements, copper and aluminum, as well as several minor elements. The surface damage produced by the laser was examined by microscopy. The alloys were tested in air and in a glovebox to evaluate the instrument's ability to identify the constituents of materials under different environmental conditions. The main objective of using this HH LIBS technology is to determine its capability to fingerprint the presence of certain elements at sub-percent levels within materials, in real time and in situ, as a starting point for future complex material characterization work.
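Building an elemental calibration curve from replicate, background-subtracted intensities can be sketched as follows (names and data are illustrative, not the study's):

```python
import numpy as np

def element_calibration_curve(concentrations, replicate_intensities, background):
    """Fit a linear calibration curve from replicate LIBS line intensities.

    concentrations        : known element concentrations of the standards
    replicate_intensities : array (n_standards, n_replicates) of raw peak
                            intensities (three grid-averaged locations here)
    background            : background intensity to subtract (scalar or
                            per-standard array)

    Returns (slope, intercept) of corrected mean intensity vs concentration.
    """
    corrected = np.mean(replicate_intensities, axis=1) - background
    slope, intercept = np.polyfit(concentrations, corrected, 1)
    return slope, intercept
```

Averaging the replicates before the fit is the same order of operations described in the abstract: average the three data sets, subtract background, then build the curve.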
The Application of FT-IR Spectroscopy for Quality Control of Flours Obtained from Polish Producers
Ceglińska, Alicja; Reder, Magdalena; Ciemniewska-Żytkiewicz, Hanna
2017-01-01
Samples of wheat, spelt, rye, and triticale flours produced by different Polish mills were studied by both classic chemical methods and FT-IR MIR spectroscopy. An attempt was made to statistically correlate FT-IR spectral data with reference data with regard to the content of various components, for example, proteins, fats, ash, and fatty acids, as well as properties such as moisture, falling number, and energetic value. This correlation resulted in calibrated and validated statistical models for versatile evaluation of unknown flour samples. The calibration data set was used to construct calibration models using the CSR and PLS methods with the leave-one-out cross-validation technique. The calibrated models were validated with a validation data set. The results obtained confirmed that the application of statistical models based on MIR spectral data is a robust, accurate, precise, rapid, inexpensive, and convenient methodology for determining flour characteristics, as well as for detecting the content of selected flour ingredients. The obtained models' characteristics were as follows: R2 = 0.97, PRESS = 2.14; R2 = 0.96, PRESS = 0.69; R2 = 0.95, PRESS = 1.27; R2 = 0.94, PRESS = 0.76, for content of proteins, lipids, ash, and moisture level, respectively. The best results for CSR models were obtained for protein, ash, and crude fat (R2 = 0.86, 0.82, and 0.78, respectively). PMID:28243483
Survival analysis with error-prone time-varying covariates: a risk set calibration approach
Liao, Xiaomei; Zucker, David M.; Li, Yi; Spiegelman, Donna
2010-01-01
Occupational, environmental, and nutritional epidemiologists are often interested in estimating the prospective effect of time-varying exposure variables, such as cumulative exposure or cumulative updated average exposure, in relation to chronic disease endpoints such as cancer incidence and mortality. From exposure validation studies, it is apparent that many of the variables of interest are measured with moderate to substantial error. Although the ordinary regression calibration approach is approximately valid and efficient for measurement error correction of relative risk estimates from the Cox model with time-independent point exposures when the disease is rare, it is not adaptable for use with time-varying exposures. By re-calibrating the measurement error model within each risk set, a risk set regression calibration (RRC) method is proposed for this setting. An algorithm for a bias-corrected point estimate of the relative risk using the RRC approach is presented, followed by the derivation of an estimate of its variance, resulting in a sandwich estimator. Emphasis is on methods applicable to the main study/external validation study design, which arises in important applications. Simulation studies under several assumptions about the error model were carried out, and demonstrated the validity and efficiency of the method in finite samples. The method was applied to a study of diet and cancer from Harvard's Health Professionals Follow-up Study (HPFS). PMID:20486928
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jankovic, John; Zontek, Tracy L.; Ogle, Burton R.
We examined the calibration records of two direct reading instruments designated as condensation particle counters in order to determine the number of times they were found to be out of tolerance at annual manufacturer's recalibration. Both instruments were found to be out of tolerance more times than within tolerance, and it was concluded that annual calibration alone was insufficient to provide operational confidence in an instrument's response. Thus, a method based on subsequent agreement with data gathered from a newly calibrated instrument was developed to confirm operational readiness between annual calibrations, hereafter referred to as bump testing. The method consists of measuring source particles produced by a gas grille spark igniter in a gallon-size jar. Sampling from this chamber with a newly calibrated instrument to determine the calibrated response over the particle concentration range of interest serves as a reference. Agreement between this reference response and subsequent responses at later dates implies that the instrument is performing as it was at the time of calibration. Side-by-side sampling allows the level of agreement between two or more instruments to be determined. This is useful when simultaneously collected data are compared for differences, i.e., background with process aerosol concentrations. A reference set of data was obtained using the spark igniter. The generation system was found to be reproducible and suitable to form the basis of calibration verification. Finally, the bump test is simple enough to be performed periodically throughout the calibration year or prior to field monitoring.
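A bump-test acceptance check against the stored reference response might look like the following sketch; the 10% tolerance is a hypothetical acceptance level, not one given by the authors:

```python
def bump_test_ok(reference_response, measured_response, tolerance=0.10):
    """Check field readiness against a stored reference response.

    reference_response : counts recorded by a freshly calibrated instrument
                         across the concentration range of interest
    measured_response  : counts from the instrument under test, sampled
                         side-by-side from the same spark-igniter chamber
    tolerance          : allowed relative deviation at each point
                         (10% is a hypothetical acceptance level)
    """
    return all(abs(m - r) <= tolerance * r
               for r, m in zip(reference_response, measured_response))
```

A failed check would prompt a full recalibration rather than continued field use.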
Jankovic, John; Zontek, Tracy L.; Ogle, Burton R.; ...
2015-01-27
Meijer, Piet; Kynde, Karin; van den Besselaar, Antonius M H P; Van Blerk, Marjan; Woods, Timothy A L
2018-04-12
This study was designed to obtain an overview of the analytical quality of the prothrombin time, reported as international normalized ratio (INR) and to assess the variation of INR results between European laboratories, the difference between Quick-type and Owren-type methods and the effect of using local INR calibration or not. In addition, we assessed the variation in INR results obtained for a single donation in comparison with a pool of several plasmas. A set of four different lyophilized plasma samples were distributed via national EQA organizations to participating laboratories for INR measurement. Between-laboratory variation was lower in the Owren group than in the Quick group (on average: 6.7% vs. 8.1%, respectively). Differences in the mean INR value between the Owren and Quick group were relatively small (<0.20 INR). Between-laboratory variation was lower after local INR calibration (CV: 6.7% vs. 8.6%). For laboratories performing local calibration, the between-laboratory variation was quite similar for the Owren and Quick group (on average: 6.5% and 6.7%, respectively). Clinically significant differences in INR results (difference in INR>0.5) were observed between different reagents. No systematic significant differences in the between-laboratory variation for a single-plasma sample and a pooled plasma sample were observed. The comparability for laboratories using local calibration of their thromboplastin reagent is better than for laboratories not performing local calibration. Implementing local calibration is strongly recommended for the measurement of INR.
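The between-laboratory variation quoted above is a coefficient of variation; for a single distributed EQA sample it can be computed as:

```python
import numpy as np

def between_lab_cv(inr_results):
    """Between-laboratory coefficient of variation (%) for one EQA sample.

    inr_results : INR values reported by the participating laboratories
    Uses the sample standard deviation (ddof=1), as is usual for
    between-laboratory statistics.
    """
    r = np.asarray(inr_results, dtype=float)
    return 100.0 * r.std(ddof=1) / r.mean()
```

Comparing this statistic between subgroups (Owren vs. Quick, locally calibrated vs. not) reproduces the kind of figures reported in the abstract (e.g. 6.7% vs. 8.1%).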
NASA Astrophysics Data System (ADS)
Conti, Claudia; Realini, Marco; Colombo, Chiara; Botteon, Alessandra; Bertasa, Moira; Striova, Jana; Barucci, Marco; Matousek, Pavel
2016-12-01
We present a method for estimating the thickness of thin turbid layers using defocusing micro-spatially offset Raman spectroscopy (micro-SORS). The approach, applicable to highly turbid systems, enables one to probe depths in excess of those accessible with conventional Raman microscopy. The technique can be used, for example, to establish the paint layer thickness on cultural heritage objects, such as panel canvases, mural paintings, painted statues and decorated objects. Other applications include analysis in polymer, biological and biomedical disciplines, catalytic and forensic sciences, where highly turbid overlayers are often present and where invasive probing may not be possible or is undesirable. The method comprises two stages: (i) a calibration step for training the method on a well characterized sample set with a known thickness, and (ii) a prediction step where the prediction of layer thickness is carried out non-invasively on samples of unknown thickness of the same chemical and physical make-up as the calibration set. An illustrative example of a practical deployment of this method is the analysis of larger areas of paintings. In this case, first, a calibration would be performed on a fragment of painting of a known thickness (e.g. derived from cross-sectional analysis) and subsequently the analysis of thickness across larger areas of painting could then be carried out non-invasively. The performance of the method is compared with that of the more established optical coherence tomography (OCT) technique on an identical sample set. This article is part of the themed issue "Raman spectroscopy in art and archaeology".
Kim, Ki-Hyun; Anthwal, A; Pandey, Sudhir Kumar; Kabir, Ehsanul; Sohn, Jong Ryeul
2010-11-01
In this study, a series of GC calibration experiments was conducted to examine the feasibility of the thermal desorption approach for the quantification of five carbonyl compounds (acetaldehyde, propionaldehyde, butyraldehyde, isovaleraldehyde, and valeraldehyde) in conjunction with two internal standard compounds. The gaseous working standards of carbonyls were calibrated with the aid of thermal desorption as a function of standard concentration and of loading volume. The detection properties were then compared against two types of external calibration data sets, derived by the fixed standard volume and fixed standard concentration approaches. According to this comparison, the fixed standard volume-based calibration of carbonyls should be more sensitive and reliable than its fixed standard concentration counterpart. Moreover, the use of an internal standard can improve the analytical reliability of aromatics and some carbonyls to a considerable extent. Our preliminary test on real samples, however, indicates that the performance of internal calibration, when tested using samples of varying dilution ranges, can be moderately different from that derived from standard gases. This suggests that the reliability of calibration approaches should be examined carefully, with consideration of the interactive relationships between compound-specific properties and the operating conditions of the instrumental setups.
An update on 'dose calibrator' settings for nuclides used in nuclear medicine.
Bergeron, Denis E; Cessna, Jeffrey T
2018-06-01
Most clinical measurements of radioactivity, whether for therapeutic or imaging nuclides, rely on commercial re-entrant ionization chambers ('dose calibrators'). The National Institute of Standards and Technology (NIST) maintains a battery of representative calibrators and works to link calibration settings ('dial settings') to primary radioactivity standards. Here, we provide a summary of NIST-determined dial settings for 22 radionuclides. We collected previously published dial settings and determined some new ones using either the calibration curve method or the dialing-in approach. The dial settings with their uncertainties are collected in a comprehensive table. In general, current manufacturer-provided calibration settings give activities that agree with National Institute of Standards and Technology standards to within a few percent.
Refined Estimates of Carbon Abundances for Carbon-Enhanced Metal-Poor Stars
NASA Astrophysics Data System (ADS)
Rossi, S.; Placco, V. M.; Beers, T. C.; Marsteller, B.; Kennedy, C. R.; Sivarani, T.; Masseron, T.; Plez, B.
2008-03-01
We present results from a refined set of procedures for estimation of metallicities ([Fe/H]) and carbon abundance ratios ([C/Fe]) based on a much larger sample of calibration objects (on the order of 500 stars) than was available to Rossi et al. (2005), due to a dramatic increase in the number of stars with measurements obtained from high-resolution analyses in the past few years. We compare results obtained from a new calibration of the KP and GP indices with those obtained from a custom set of spectral syntheses based on MOOG. In cases where the GP index approaches saturation, it is clear that only spectral synthesis achieves reliable results.
Wang, Wei; Young, Bessie A.; Fülöp, Tibor; de Boer, Ian H.; Boulware, L. Ebony; Katz, Ronit; Correa, Adolfo; Griswold, Michael E.
2015-01-01
Background: Calibration to Isotope Dilution Mass Spectrometry (IDMS) traceable creatinine is essential for valid use of the new Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation to estimate the glomerular filtration rate (GFR). Methods: For 5,210 participants in the Jackson Heart Study (JHS), serum creatinine was measured with a multipoint enzymatic spectrophotometric assay at the baseline visit (2000–2004) and re-measured using the Roche enzymatic method, traceable to IDMS, in a subset of 206 subjects. The 200 eligible samples (6 were excluded: 1 for failure of the re-measurement and 5 as outliers) were divided into three disjoint sets - training, validation, and test - to select a calibration model, estimate true errors, and assess performance of the final calibration equation. The calibration equation was applied to serum creatinine measurements of the 5,210 participants to estimate GFR and the prevalence of CKD. Results: The selected Deming regression model provided a slope of 0.968 (95% Confidence Interval (CI), 0.904 to 1.053) and an intercept of −0.0248 (95% CI, −0.0862 to 0.0366), with R squared 0.9527. Calibrated serum creatinine showed high agreement with actual measurements when applied to the unused test set (concordance correlation coefficient 0.934, 95% CI, 0.894 to 0.960). The baseline prevalence of CKD in the JHS (2000–2004) was 6.30% using calibrated values, compared with 8.29% using non-calibrated serum creatinine with the CKD-EPI equation (P < 0.001). Conclusions: A Deming regression model was chosen to optimally calibrate baseline serum creatinine measurements in the JHS, and the calibrated values provide a lower CKD prevalence estimate. PMID:25806862
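Deming regression, unlike ordinary least squares, allows for measurement error in both the reference and the re-measured creatinine. A minimal sketch of the closed-form estimator, with the error-variance ratio as a parameter:

```python
import numpy as np

def deming(x, y, lam=1.0):
    """Deming regression slope and intercept.

    x, y : paired measurements, both subject to error
    lam  : ratio of the y-error variance to the x-error variance
           (lam = 1.0 gives orthogonal regression)
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x.mean(), y.mean()
    sxx = np.mean((x - xm) ** 2)
    syy = np.mean((y - ym) ** 2)
    sxy = np.mean((x - xm) * (y - ym))
    slope = (syy - lam * sxx
             + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    return slope, ym - slope * xm
```

Fitted on the training subset, such a model yields the slope/intercept pair used to recalibrate all baseline creatinine values.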
De Girolamo, A; Lippolis, V; Nordkvist, E; Visconti, A
2009-06-01
Fourier transform near-infrared spectroscopy (FT-NIR) was used for rapid and non-invasive analysis of deoxynivalenol (DON) in durum and common wheat. The relevance of using ground wheat samples with a homogeneous particle size distribution to minimize measurement variations and avoid DON segregation among particles of different sizes was established. Calibration models for durum wheat, common wheat and durum + common wheat samples, with particle size <500 microm, were obtained by using partial least squares (PLS) regression with an external validation technique. Values of root mean square error of prediction (RMSEP, 306-379 microg kg(-1)) were comparable and not too far from values of root mean square error of cross-validation (RMSECV, 470-555 microg kg(-1)). Coefficients of determination (r(2)) indicated an "approximate to good" level of prediction of the DON content by FT-NIR spectroscopy in the PLS calibration models (r(2) = 0.71-0.83), and a "good" discrimination between low and high DON contents in the PLS validation models (r(2) = 0.58-0.63). A "limited to good" practical utility of the models was ascertained by range error ratio (RER) values higher than 6. A qualitative model, based on 197 calibration samples, was developed to discriminate between blank and naturally contaminated wheat samples by setting a cut-off at 300 microg kg(-1) DON to separate the two classes. The model correctly classified 69% of the 65 validation samples with most misclassified samples (16 of 20) showing DON contamination levels quite close to the cut-off level. These findings suggest that FT-NIR analysis is suitable for the determination of DON in unprocessed wheat at levels far below the maximum permitted limits set by the European Commission.
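The RMSEP and range error ratio (RER) figures quoted above are simple to compute from a validation set; a minimal sketch (illustrative, not the authors' code):

```python
import math

def rmsep(y_ref, y_pred):
    """Root mean square error of prediction over a validation set."""
    return math.sqrt(sum((r - p) ** 2 for r, p in zip(y_ref, y_pred)) / len(y_ref))

def range_error_ratio(y_ref, y_pred):
    """RER: reference-value range divided by RMSEP.

    Values above ~6 are often read as 'limited to good' practical utility."""
    return (max(y_ref) - min(y_ref)) / rmsep(y_ref, y_pred)
```

The same two functions apply unchanged whether the predictions come from PLS, as above, or any other calibration model.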
LIBS analysis of artificial calcified tissues matrices.
Kasem, M A; Gonzalez, J J; Russo, R E; Harith, M A
2013-04-15
In most laser-based analytical methods, the reproducibility of quantitative measurements strongly depends on maintaining uniform and stable experimental conditions. For LIBS analysis this means that, for accurate estimation of elemental concentrations using calibration curves obtained from reference samples, the plasma parameters have to be kept as constant as possible. In addition, calcified tissues such as bone are normally less "tough" in texture than many other samples, especially metals. Thus, the ablation process can change the sample's morphological features rapidly and result in poor reproducibility statistics. In the present work, three artificial reference sample sets have been fabricated. These samples represent three different calcium-based matrices: a CaCO3 matrix, a bone ash matrix and a Ca hydroxyapatite matrix. A comparative study of UV (266 nm) and IR (1064 nm) LIBS for these three sets of samples has been performed under similar experimental conditions for the two systems (laser energy, spot size, repetition rate, irradiance, etc.) to examine the wavelength effect. The analytical results demonstrated that UV-LIBS provides improved reproducibility and precision, more stable plasma conditions, better linear fits, and reduced matrix effects. Bone ash could be used as a suitable standard reference material for calcified tissue calibration using LIBS with a 266 nm excitation wavelength. Copyright © 2013 Elsevier B.V. All rights reserved.
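Quantification from calibration curves of reference samples, as described above, reduces to fitting a line of signal against concentration and inverting it for unknowns; a hedged sketch (names and data are illustrative, not from the paper):

```python
def fit_calibration(conc, intensity):
    """Ordinary least squares fit of the line intensity = a + b * conc."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(intensity) / n
    b = sum((x - mx) * (y - my) for x, y in zip(conc, intensity)) / \
        sum((x - mx) ** 2 for x in conc)
    a = my - b * mx
    return a, b

def predict_conc(a, b, signal):
    """Invert the calibration line to estimate concentration from a measured signal."""
    return (signal - a) / b
```

The linearity of this fit (and the scatter of the residuals) is exactly what degrades when plasma conditions drift between shots, which is why the abstract emphasizes stable ablation conditions.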
Mixed Model Association with Family-Biased Case-Control Ascertainment.
Hayeck, Tristan J; Loh, Po-Ru; Pollack, Samuela; Gusev, Alexander; Patterson, Nick; Zaitlen, Noah A; Price, Alkes L
2017-01-05
Mixed models have become the tool of choice for genetic association studies; however, standard mixed model methods may be poorly calibrated or underpowered under family sampling bias and/or case-control ascertainment. Previously, we introduced a liability threshold-based mixed model association statistic (LTMLM) to address case-control ascertainment in unrelated samples. Here, we consider family-biased case-control ascertainment, where case and control subjects are ascertained non-randomly with respect to family relatedness. Previous work has shown that this type of ascertainment can severely bias heritability estimates; we show here that it also impacts mixed model association statistics. We introduce a family-based association statistic (LT-Fam) that is robust to this problem. Similar to LTMLM, LT-Fam is computed from posterior mean liabilities (PML) under a liability threshold model; however, LT-Fam uses published narrow-sense heritability estimates to avoid the problem of biased heritability estimation, enabling correct calibration. In simulations with family-biased case-control ascertainment, LT-Fam was correctly calibrated (average χ² = 1.00-1.02 for null SNPs), whereas the Armitage trend test (ATT), standard mixed model association (MLM), and case-control retrospective association test (CARAT) were mis-calibrated (e.g., average χ² = 0.50-1.22 for MLM, 0.89-2.65 for CARAT). LT-Fam also attained higher power than other methods in some settings. In 1,259 type 2 diabetes-affected case subjects and 5,765 control subjects from the CARe cohort, downsampled to induce family-biased ascertainment, LT-Fam was correctly calibrated whereas ATT, MLM, and CARAT were again mis-calibrated. Our results highlight the importance of modeling family sampling bias in case-control datasets with related samples. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
Quentin, A G; Rodemann, T; Doutreleau, M-F; Moreau, M; Davies, N W; Millard, Peter
2017-01-31
Near-infrared reflectance spectroscopy (NIRS) is frequently used for the assessment of key nutrients of forage or crops but remains underused in ecological and physiological studies, especially to quantify non-structural carbohydrates. The aim of this study was to develop calibration models to assess the content of soluble sugars (fructose, glucose, sucrose) and starch in foliar material of Eucalyptus globulus. A partial least squares (PLS) regression was used on the sample spectral data and was compared to the contents measured using standard wet chemistry methods. The calibration models were validated using a completely independent set of samples. We used key indicators such as the ratio of prediction to deviation (RPD) and the range error ratio to assess the performance of the calibration models. Accurate calibration models were obtained for fructose and sucrose content (R2 > 0.85, root mean square error of prediction (RMSEP) of 0.95%–1.26% in the validation models), followed by glucose and total soluble sugar content (R2 ~ 0.70 and RMSEP > 2.3%). In comparison to the others, the starch calibration model performed very poorly, with RPD = 1.70. This study establishes the ability of the NIRS calibration model to infer soluble sugar content in foliar samples of E. globulus in a rapid and cost-effective way. We suggest a complete redevelopment of the starch analysis using a more specific quantification method, such as an HPLC-based technique, to reach higher performance in the starch model. Overall, NIRS could serve as a high-throughput phenotyping tool to study plant response to stress factors.
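The ratio of prediction to deviation (RPD) used to judge the models above is the standard deviation of the reference values divided by the RMSEP; a minimal sketch (illustrative, not the authors' code):

```python
import math
import statistics

def rpd(y_ref, y_pred):
    """Ratio of prediction to deviation: SD of reference values over RMSEP.

    RPD near 1 (like the starch model above) means the model barely beats
    predicting the mean; higher values indicate more useful calibrations."""
    err = math.sqrt(sum((r - p) ** 2 for r, p in zip(y_ref, y_pred)) / len(y_ref))
    return statistics.stdev(y_ref) / err
```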
NASA Astrophysics Data System (ADS)
Zaytsev, Sergey M.; Krylov, Ivan N.; Popov, Andrey M.; Zorov, Nikita B.; Labutin, Timur A.
2018-02-01
We have investigated matrix effects and spectral interferences in laser-induced breakdown spectroscopy (LIBS), using the determination of lead in different types of soils as an example. A comparison of the analytical performance of univariate and multivariate calibrations using different laser wavelengths for ablation (532, 355 and 266 nm) is reported. A set of 17 soil samples (Ca-rich, Fe-rich, lean soils, etc.; 8.5-280 ppm of Pb) was used to construct the calibration models. Spectral interferences from major components (Ca, Fe, Ti, Mg) and trace components (Mn, Nb, Zr) were estimated by spectral modeling, and they accounted for significant differences between the univariate calibration models obtained separately for three different soil types (black, red, gray). Use of the 3rd harmonic of the Nd:YAG laser in combination with a multivariate calibration model based on PCR with 3 principal components provided the best analytical results: the RMSEC was lowered to 8 ppm. A substantial improvement in relative uncertainty (to 5-10%) in comparison with univariate calibration was observed at Pb concentrations > 50 ppm, while accuracy remains a problem for some samples with Pb concentrations near the 20 ppm level. We have also discussed a few possible ways to estimate the LOD without a blank sample. The most rigorous criterion resulted in an LOD for Pb in soils of 13 ppm. Finally, good agreement was demonstrated between the lead contents predicted by LIBS (46 ± 5 ppm) and XRF (42.1 ± 3.3 ppm) in an unknown soil sample from the Lomonosov Moscow State University area.
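One common way to estimate an LOD without a blank sample, consistent in spirit with (though not necessarily identical to) the criteria discussed above, is 3 times the residual standard deviation of the calibration line divided by its slope; a hedged sketch with illustrative data:

```python
import math

def lod_from_calibration(conc, signal):
    """LOD estimate without a blank: 3 * s(residuals) / slope of the calibration line."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(signal) / n
    b = sum((x - mx) * (y - my) for x, y in zip(conc, signal)) / \
        sum((x - mx) ** 2 for x in conc)
    a = my - b * mx
    # Residual standard deviation about the fitted line (n - 2 degrees of freedom)
    resid = [y - (a + b * x) for x, y in zip(conc, signal)]
    s_res = math.sqrt(sum(r * r for r in resid) / (n - 2))
    return 3 * s_res / b
```

This substitutes the calibration residual scatter for the blank's signal noise, which is exactly the kind of workaround needed when no true blank matrix is available.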
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, T; Graham, C L; Sundsmo, T
This procedure provides instructions for the calibration and use of the Canberra iSolo Low Background Alpha/Beta Counting System (iSolo) that is used for counting air filters and swipe samples. This detector is capable of providing radioisotope identification (e.g., it can discriminate between radon daughters and plutonium). This procedure includes step-by-step instructions for: (1) Performing periodic or daily 'Background' and 'Efficiency QC' checks; (2) Setting-up the iSolo for counting swipes and air filters; (3) Counting swipes and air filters for alpha and beta activity; and (4) Annual calibration.
Automatic Camera Calibration Using Multiple Sets of Pairwise Correspondences.
Vasconcelos, Francisco; Barreto, Joao P; Boyer, Edmond
2018-04-01
We propose a new method to add an uncalibrated node into a network of calibrated cameras using only pairwise point correspondences. While previous methods perform this task using triple correspondences, these are often difficult to establish when there is limited overlap between different views. In such challenging cases we must rely on pairwise correspondences and our solution becomes more advantageous. Our method includes an 11-point minimal solution for the intrinsic and extrinsic calibration of a camera from pairwise correspondences with other two calibrated cameras, and a new inlier selection framework that extends the traditional RANSAC family of algorithms to sampling across multiple datasets. Our method is validated on different application scenarios where a lack of triple correspondences might occur: addition of a new node to a camera network; calibration and motion estimation of a moving camera inside a camera network; and addition of views with limited overlap to a Structure-from-Motion model.
NASA Astrophysics Data System (ADS)
Hernández-Almeida, I.; Cortese, G.; Yu, P.-S.; Chen, M.-T.; Kucera, M.
2017-08-01
Radiolarians are a very diverse microzooplanktonic group, often distributed in regionally restricted assemblages and responding to specific environmental factors. These properties of radiolarian assemblages make the group well suited to the development and application of basin-wide ecological models. Here we use a new surface sediment data set from the western Pacific to demonstrate that ecological patterns derived from basin-wide open-ocean data sets cannot be transferred to semirestricted marginal seas. The data set consists of 160 surface sediment samples from three tropical-subtropical regions (East China Sea, South China Sea, and western Pacific), combining 54 new assemblage counts with taxonomically harmonized data from previous studies. Multivariate statistical analyses indicate that winter sea surface temperature at 10 m depth (SSTw) was the most significant environmental variable affecting the composition of radiolarian assemblages, allowing the development of an optimal calibration model (Locally Weighted-Weighted Averaging regression with inverse deshrinking, R2cv = 0.88, root-mean-square error of prediction = 1.6°C). The dominant effect of SSTw on radiolarian assemblage composition in the western Pacific is attributed to the East Asian Winter Monsoon (EAWM), which is particularly strong in the marginal seas. To test the applicability of the calibration model to fossil radiolarian assemblages from the marginal seas, the calibration model was applied to two downcore records from the Okinawa Trough, covering the last 18 ka. We observe that these assemblages find their most appropriate analogs among modern samples from the marginal basins (East China Sea and South China Sea). Downcore temperature reconstructions at both sites show similarities to known regional SST reconstructions, providing proof of concept for the new radiolarian-based SSTw calibration model.
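The weighted-averaging family of transfer functions behind the calibration above (here shown without the locally weighted and deshrinking refinements the authors use) reduces to two abundance-weighted means; a simplified sketch with illustrative data:

```python
def wa_optima(abundances, env):
    """Taxon optima: abundance-weighted mean of the environmental variable.

    abundances: one list of taxon abundances per surface sample;
    env: the environmental value (e.g., SSTw) for each sample."""
    n_taxa = len(abundances[0])
    optima = []
    for k in range(n_taxa):
        num = sum(site[k] * t for site, t in zip(abundances, env))
        den = sum(site[k] for site in abundances)
        optima.append(num / den)
    return optima

def wa_reconstruct(sample, optima):
    """Reconstruction: abundance-weighted mean of taxon optima (deshrinking omitted)."""
    return sum(a * u for a, u in zip(sample, optima)) / sum(sample)

# Toy example: taxon 0 lives only in 10 °C sites, taxon 1 only in 20 °C sites
optima = wa_optima([[1, 0], [0, 1]], [10.0, 20.0])
estimate = wa_reconstruct([1, 1], optima)  # a mixed fossil assemblage
```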
Tang, Jun; Wang, Qing; Tong, Hong; Liao, Xiang; Zhang, Zheng-fang
2016-03-01
This work aimed to identify lavender essential oil by attenuated total reflectance Fourier transform infrared spectroscopy, establishing a model for lavender variety and quality analysis; 96 samples were tested. For all samples, the raw spectra were pretreated with a second-derivative transformation, and the 1750-900 cm(-1) region was selected for pattern recognition analysis on the basis of a variance calculation. The results showed that principal component analysis (PCA) could broadly discriminate lavender oil cultivars, with the first three principal components mainly representing ester, alcohol and terpenoid substances. For the orthogonal partial least-squares discriminant analysis (OPLS-DA) model, 68 samples were used as the calibration set. Determination coefficients of the OPLS-DA regression curves were 0.9592, 0.9764, and 0.9588 for the three varieties of lavender essential oil, respectively. The root mean square errors of prediction (RMSEP) in the validation set were 0.1429, 0.1273, and 0.1249 for the three varieties, respectively. The discrimination rate in the calibration set and the prediction rate in the validation set both reached 100%. The model has very good recognition capability for the variety and quality of lavender essential oil, providing a quick, intuitive and feasible method for discriminating lavender oils.
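The second-derivative pretreatment mentioned above can be illustrated with a central second difference over the spectrum (real chemometrics pipelines typically combine this with Savitzky-Golay smoothing; this is a bare-bones sketch):

```python
def second_derivative(spectrum):
    """Central second difference of an evenly spaced spectrum.

    Removes baseline offsets and linear slopes, sharpening overlapping bands;
    drops one point at each end of the spectrum."""
    return [spectrum[i - 1] - 2 * spectrum[i] + spectrum[i + 1]
            for i in range(1, len(spectrum) - 1)]
```

Because a constant or linear baseline has zero second derivative, this transformation suppresses baseline drift between samples, which is why it is a standard pretreatment before PCA or OPLS-DA.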
NASA Astrophysics Data System (ADS)
Rantakyrö, Fredrik T.
2017-09-01
The Gemini Planet Imager requires a large set of calibrations. These can be split into two major sets: one associated with each observation, and one related to biweekly calibrations. The observation set is used to optimize the correction of microshifts in the IFU spectra, while the latter set corrects for detector and instrument cosmetics.
NASA Astrophysics Data System (ADS)
Cantiello, Michele; Blakeslee, John P.; Ferrarese, Laura; Côté, Patrick; Roediger, Joel C.; Raimondo, Gabriella; Peng, Eric W.; Gwyn, Stephen; Durrell, Patrick R.; Cuillandre, Jean-Charles
2018-04-01
We describe a program to measure surface brightness fluctuation (SBF) distances to galaxies observed in the Next Generation Virgo Cluster Survey (NGVS), a photometric imaging survey covering 104 deg² of the Virgo cluster in the u*, g, i, and z bandpasses with the Canada–France–Hawaii Telescope. We describe the selection of the sample galaxies, the procedures for measuring the apparent i-band SBF magnitude m̄_i, and the calibration of the absolute M̄_i as a function of observed stellar population properties. The multiband NGVS data set provides multiple options for calibrating the SBF distances, and we explore various calibrations involving individual color indices as well as combinations of two different colors. Within the color range of the present sample, the two-color calibrations do not significantly improve the scatter with respect to wide-baseline, single-color calibrations involving u*. We adopt the (u* − z) calibration as a reference for the present galaxy sample, with an observed scatter of 0.11 mag. For a few cases that lack good u* photometry, we use an alternative relation based on a combination of (g − i) and (g − z) colors, with only a slightly larger observed scatter of 0.12 mag. The agreement of our measurements with the best existing distance estimates provides confidence that our measurements are accurate. We present a preliminary catalog of distances for 89 galaxies brighter than B_T ≈ 13.0 mag within the survey footprint, including members of the background M and W Clouds at roughly twice the distance of the main body of the Virgo cluster. The extension of the present work to fainter and bluer galaxies is in progress.
NASA Astrophysics Data System (ADS)
Smith, K. A.; Barker, L. J.; Harrigan, S.; Prudhomme, C.; Hannaford, J.; Tanguy, M.; Parry, S.
2017-12-01
Earth and environmental models are relied upon to investigate system responses that cannot otherwise be examined. In simulating physical processes, models have adjustable parameters which may, or may not, have a physical meaning. Determining the values to assign to these model parameters is an enduring challenge for earth and environmental modellers. Selecting different error metrics by which the model's results are compared to observations will lead to different sets of calibrated model parameters, and thus different model results. Furthermore, models may exhibit 'equifinal' behaviour, where multiple combinations of model parameters lead to equally acceptable model performance against observations. These decisions in model calibration introduce uncertainty that must be considered when model results are used to inform environmental decision-making. This presentation focusses on the uncertainties that derive from the calibration of a four-parameter lumped catchment hydrological model (GR4J). The GR models contain an inbuilt automatic calibration algorithm that can satisfactorily calibrate against four error metrics in only a few seconds. However, a single, deterministic model result does not provide information on parameter uncertainty. Furthermore, a modeller interested in extreme events, such as droughts, may wish to calibrate against error metrics specific to low flows. In a comprehensive assessment, the GR4J model has been run with 500,000 Latin hypercube sampled parameter sets across 303 catchments in the United Kingdom. These parameter sets have been assessed against six error metrics, including two drought-specific metrics. This presentation compares the two approaches, and demonstrates that the inbuilt automatic calibration can outperform the Latin hypercube experiment approach in single-metric assessed performance. However, it is also shown that there are many merits to the more comprehensive assessment, which allows for probabilistic model results, multi-objective optimisation, and better tailoring of the model calibration to specific applications such as drought event characterisation. Modellers and decision-makers may be constrained in their choice of calibration method, so it is important that they recognise the strengths and limitations of their chosen approach.
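Latin hypercube sampling of parameter sets, as used in the comprehensive assessment above, draws one stratified value per interval for each parameter and then shuffles across parameters; a minimal sketch (not the authors' implementation):

```python
import random

def latin_hypercube(n_samples, bounds, rng=None):
    """Latin hypercube sample: n_samples parameter sets within the given bounds.

    bounds: one (lo, hi) pair per parameter. Each parameter's range is split
    into n_samples equal strata; one value is drawn per stratum, then the
    strata are shuffled so parameters are paired randomly."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    columns = []
    for lo, hi in bounds:
        strata = [lo + (hi - lo) * (i + rng.random()) / n_samples
                  for i in range(n_samples)]
        rng.shuffle(strata)
        columns.append(strata)
    return list(zip(*columns))  # one tuple of parameter values per sample
```

Compared with plain random sampling, this guarantees every slice of each parameter's range is visited, which matters when a model such as GR4J is run only a finite number of times per catchment.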
NASA Astrophysics Data System (ADS)
Zheng, Lijuan; Cao, Fan; Xiu, Junshan; Bai, Xueshi; Motto-Ros, Vincent; Gilon, Nicole; Zeng, Heping; Yu, Jin
2014-09-01
Laser-induced breakdown spectroscopy (LIBS) provides a technique to directly determine metals in viscous liquids and especially in lubricating oils. A specific laser ablation configuration, with a thin layer of oil applied on the surface of a pure aluminum target, was used to evaluate the analytical figures of merit of LIBS for elemental analysis of lubricating oils. The analyzed oils comprised a certified 75 cSt blank mineral oil, 8 virgin lubricating oils (synthetic, semi-synthetic, or mineral, from 2 different manufacturers), 5 used oils (corresponding to 5 of the 8 virgin oils), and a cooking oil. The certified blank oil and 4 virgin lubricating oils were spiked with metallo-organic standards to obtain laboratory reference samples with different oil matrices. We first established calibration curves for 3 elements, Fe, Cr and Ni, with the 5 sets of laboratory reference samples in order to evaluate the matrix effect by comparison among the different oils. Our results show that generalized calibration curves can be built for the 3 analyzed elements by merging the measured line intensities of the 5 sets of spiked oil samples. Such merged calibration curves with good correlation of the merged data are only possible if no significant matrix effect affects the measurements of the different oils. In the second step, we spiked the remaining 4 virgin oils and the cooking oil with Fe, Cr and Ni. The accuracy and the precision of the concentration determination in these prepared oils were then evaluated using the generalized calibration curves. The concentrations of metallic elements in the 5 used lubricating oils were finally determined.
Douglas, R K; Nawar, S; Alamar, M C; Mouazen, A M; Coulon, F
2018-03-01
Visible and near infrared spectrometry (vis-NIRS) coupled with data mining techniques can offer fast and cost-effective quantitative measurement of total petroleum hydrocarbons (TPH) in contaminated soils. However, the literature shows significant differences in vis-NIRS performance between linear and non-linear calibration methods. This study compared the performance of linear partial least squares regression (PLSR) with non-linear random forest (RF) regression for the calibration of vis-NIRS when analysing TPH in soils. 88 soil samples (3 uncontaminated and 85 contaminated) collected from three sites located in the Niger Delta were scanned using an analytical spectral device (ASD) spectrophotometer (350-2500 nm) in diffuse reflectance mode. Sequential ultrasonic solvent extraction-gas chromatography (SUSE-GC) was used as the reference quantification method for TPH, taken as the sum of the aliphatic and aromatic fractions between C10 and C35. Prior to model development, spectra were subjected to pre-processing including noise cut, maximum normalization, first derivative and smoothing. Then 65 samples were selected as the calibration set and the remaining 20 samples as the validation set. Both the vis-NIR spectra and gas chromatography profiles of the 85 soil samples were subjected to RF and PLSR with leave-one-out cross-validation (LOOCV) for the calibration models. Results showed that the RF calibration model, with a coefficient of determination (R2) of 0.85, a root mean square error of prediction (RMSEP) of 68.43 mg kg(-1), and a residual prediction deviation (RPD) of 2.61, outperformed PLSR (R2 = 0.63, RMSEP = 107.54 mg kg(-1) and RPD = 2.55) in cross-validation. These results indicate that the RF modelling approach accounts for the non-linearity of the soil spectral responses, hence providing significantly higher prediction accuracy compared to the linear PLSR. It is recommended to adopt vis-NIRS coupled with the RF modelling approach as a portable and cost-effective method for the rapid quantification of TPH in soils. Copyright © 2017 Elsevier B.V. All rights reserved.
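Leave-one-out cross-validation, as used for the calibration models above, refits the model once per held-out sample; a generic sketch with a trivial mean-predictor standing in for RF or PLSR (illustrative, not the authors' code):

```python
def loocv_rmse(x, y, fit, predict):
    """Leave-one-out cross-validation RMSE for any fit/predict pair."""
    sq_errs = []
    for i in range(len(x)):
        # Train on everything except sample i, then predict sample i
        xs, ys = x[:i] + x[i + 1:], y[:i] + y[i + 1:]
        model = fit(xs, ys)
        sq_errs.append((predict(model, x[i]) - y[i]) ** 2)
    return (sum(sq_errs) / len(sq_errs)) ** 0.5

# Trivial stand-in model: always predict the mean of the training responses
def fit_mean(xs, ys):
    return sum(ys) / len(ys)

def predict_mean(model, xi):
    return model
```

Any real regressor slots into the same loop by swapping in its own fit and predict functions; the loop itself is what guarantees each prediction is made on an unseen sample.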
NASA Technical Reports Server (NTRS)
Caplin, R. S.; Royer, E. R.
1978-01-01
Attempts are made to provide a total design of a Microbial Load Monitor (MLM) system flight engineering model. Activities include assembly and testing of Sample Receiving and Card Loading Devices (SRCLDs), operator related software, and testing of biological samples in the MLM. Progress was made in assembling SRCLDs with minimal leaks and which operate reliably in the Sample Loading System. Seven operator commands are used to control various aspects of the MLM such as calibrating and reading the incubating reading head, setting the clock and reading time, and status of Card. Testing of the instrument, both in hardware and biologically, was performed. Hardware testing concentrated on SRCLDs. Biological testing covered 66 clinical and seeded samples. Tentative thresholds were set and media performance listed.
Zhang, Mengliang; Harrington, Peter de B
2015-01-01
A multivariate partial least squares (PLS) method was applied to the quantification of two complex polychlorinated biphenyl (PCB) commercial mixtures, Aroclor 1254 and 1260, in a soil matrix. PCBs in soil samples were extracted by headspace solid phase microextraction (SPME) and determined by gas chromatography/mass spectrometry (GC/MS). Decachlorobiphenyl (deca-CB) was used as the internal standard. After baseline correction was applied, four data representations, including extracted ion chromatograms (EIC) for Aroclor 1254, EIC for Aroclor 1260, EIC for both Aroclors, and two-way data sets, were constructed for PLS-1 and PLS-2 calibrations and evaluated with respect to quantitative prediction accuracy. The PLS model was optimized with respect to the number of latent variables using cross-validation of the calibration data set. The validation of the method was performed with certified soil samples and real field soil samples, and the predicted concentrations for both Aroclors using EIC data sets agreed with the certified values. The linear range of the method was from 10 μg kg(-1) to 1000 μg kg(-1) for both Aroclor 1254 and 1260 in soil matrices, and the detection limit was 4 μg kg(-1) for Aroclor 1254 and 6 μg kg(-1) for Aroclor 1260. This holistic approach to the determination of mixtures in complex samples has broad application to environmental forensics and modeling. Copyright © 2014 Elsevier Ltd. All rights reserved.
Fang, Li-Min; Lin, Min
2009-08-01
For the rapid detection of ethanol, pH and residual sugar in red wine, infrared (IR) spectra of 44 wine samples were analyzed. The fast independent component analysis (FastICA) algorithm was used to decompose the IR spectral data, yielding the independent components and the mixing matrix. Then, the ICA-NNR calibration model, with a three-level artificial neural network (ANN) structure, was built using the back-propagation (BP) algorithm. The models were used to estimate ethanol content, pH and residual sugar content in red wine samples in both the calibration and prediction sets. The correlation coefficient (r) of prediction and the root mean square error of prediction (RMSEP) were used as evaluation indexes. The results indicate that the r and RMSEP for the prediction of ethanol content, pH and residual sugar content are 0.953, 0.983 and 0.994, and 0.161, 0.017 and 0.181, respectively. The maximum relative deviations between the ICA-NNR predicted values and the reference values for the 22 samples in the prediction set are less than 4%. These results provide a foundation for the application and further development of an IR on-line red wine analyzer.
Parameter estimation uncertainty: Comparing apples and apples?
NASA Astrophysics Data System (ADS)
Hart, D.; Yoon, H.; McKenna, S. A.
2012-12-01
Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. 
Comparison of the M-NSMC and MSP methods suggests that M-NSMC can provide a computationally efficient and practical solution for predictive uncertainty analysis in highly nonlinear and complex subsurface flow and transport models. This material is based upon work supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Characterization of plastic blends made from mixed plastics waste of different sources.
Turku, Irina; Kärki, Timo; Rinne, Kimmo; Puurtinen, Ari
2017-02-01
This paper studies the recyclability of construction and household plastic waste collected from local landfills. Samples were processed from mixed plastic waste by injection moulding. In addition, blends of the pure plastics polypropylene and polyethylene were processed as a reference set. Reference samples with known plastic ratios were used as the calibration set for quantitative analysis of plastic fractions in the recycled blends. The samples were tested for tensile properties; scanning electron microscope-energy-dispersive X-ray spectroscopy was used for elemental analysis of the blend surfaces, and Fourier transform infrared (FTIR) analysis was used for quantification of the plastic content.
Fourier transform infrared spectroscopy for Kona coffee authentication.
Wang, Jun; Jun, Soojin; Bittenbender, H C; Gautz, Loren; Li, Qing X
2009-06-01
Kona coffee, the variety "Kona typica" grown in the north and south districts of the Kona region on the Big Island of Hawaii, U.S.A., carries a unique stamp of its region of origin. The excellent quality of Kona coffee places it among the best coffee products in the world. Fourier transform infrared (FTIR) spectroscopy integrated with an attenuated total reflectance (ATR) accessory and multivariate analysis was used for qualitative and quantitative analysis of ground and brewed Kona coffee and blends made with Kona coffee. The calibration set of Kona coffee consisted of 10 different blends of a Kona-grown original coffee mixture from 14 different farms in Hawaii and a non-Kona-grown original coffee mixture from 3 different sampling sites in Hawaii. Derivative transformations (1st and 2nd), mathematical enhancements such as mean centering and variance scaling, and multivariate regressions by partial least squares (PLS) and principal components regression (PCR) were implemented to develop and enhance the calibration model. The calibration model was successfully validated using 9 synthetic blend sets of 100% Kona coffee mixture and its adulterant, 100% non-Kona coffee mixture. There were distinct peak variations of ground and brewed coffee blends in the spectral "fingerprint" region between 800 and 1900 cm(-1). The PLS-2nd derivative calibration model based on brewed Kona coffee with mean centering data processing showed the highest degree of accuracy, with the lowest standard error of calibration value of 0.81 and the highest R(2) value of 0.999. The model was further validated by quantitative analysis of commercial Kona coffee blends. Results demonstrate that FTIR can be a rapid alternative for authenticating Kona coffee, requiring only quick and simple sample preparation.
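The mean-centering and principal components regression (PCR) steps described above can be sketched as follows. The spectra and reference values here are synthetic placeholders, not the Kona data, and the component count is an illustrative choice:

```python
import numpy as np

# Minimal sketch of principal components regression (PCR) with mean
# centering, as used for FTIR calibration models. Data are synthetic.
rng = np.random.default_rng(1)

X = rng.normal(size=(30, 50))                    # 30 spectra, 50 wavenumbers
beta_true = rng.normal(size=50)
y = X @ beta_true + 0.01 * rng.normal(size=30)   # e.g. % Kona content

# Mean centering
X_c = X - X.mean(axis=0)
y_c = y - y.mean()

# Keep the first k principal components and regress y on the scores.
k = 10
U, s, Vt = np.linalg.svd(X_c, full_matrices=False)
scores = U[:, :k] * s[:k]
coef_pc = np.linalg.lstsq(scores, y_c, rcond=None)[0]

# Back-transform to a coefficient vector in wavenumber space.
beta_pcr = Vt[:k].T @ coef_pc
y_hat = X_c @ beta_pcr + y.mean()
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

PLS differs in that the components are chosen to maximize covariance with y rather than variance of X alone, but the center-project-regress pattern is the same.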
Rudolff, Andrea S; Moens, Yves P S; Driessen, Bernd; Ambrisko, Tamas D
2014-07-01
To assess agreement between infrared (IR) analysers and a refractometer for measurements of isoflurane, sevoflurane and desflurane concentrations and to demonstrate the effect of customized calibration of IR analysers. In vitro experiment. Six IR anaesthetic monitors (Datex-Ohmeda) and a single portable refractometer (Riken). Both devices were calibrated following the manufacturer's recommendations. Gas samples were collected at common gas outlets of anaesthesia machines. A range of agent concentrations was produced by stepwise changes in dial settings: isoflurane (0-5% in 0.5% increments), sevoflurane (0-8% in 1% increments), or desflurane (0-18% in 2% increments). Oxygen flow was 2 L minute(-1) . The orders of testing IR analysers, agents and dial settings were randomized. Duplicate measurements were performed at each setting. The entire procedure was repeated 24 hours later. Bland-Altman analysis was performed. Measurements on day-1 were used to yield calibration equations (IR measurements as dependent and refractometry measurements as independent variables), which were used to modify the IR measurements on day-2. Bias ± limits of agreement for isoflurane, sevoflurane and desflurane were 0.2 ± 0.3, 0.1 ± 0.4 and 0.7 ± 0.9 volume%, respectively. There were significant linear relationships between differences and means for all agents. The IR analysers became less accurate at higher gas concentrations. After customized calibration, the bias became almost zero and the limits of agreement became narrower. If similar IR analysers are used in research studies, they need to be calibrated against a reference method using the agent in question at multiple calibration points overlapping the range of interest. © 2013 Association of Veterinary Anaesthetists and the American College of Veterinary Anesthesia and Analgesia.
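The correction described above is essentially an ordinary least-squares recalibration layered on a Bland-Altman analysis; a minimal sketch with made-up readings (not the study's data):

```python
import statistics as st

# Sketch of the day-1 -> day-2 correction: Bland-Altman bias and
# limits of agreement, then a customized linear calibration of the
# IR readings against the refractometer. Values are illustrative.
ref = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]   # refractometer, volume %
ir  = [0.1, 1.3, 2.4, 3.6, 4.7, 5.9]   # IR analyser readings

diffs = [i - r for i, r in zip(ir, ref)]
bias = st.mean(diffs)
loa = 1.96 * st.stdev(diffs)           # limits of agreement

# Customized calibration: fit ir = a + b * ref on day-1 data, then
# invert it to correct later IR readings.
mx, my = st.mean(ref), st.mean(ir)
b = sum((x - mx) * (y - my) for x, y in zip(ref, ir)) / \
    sum((x - mx) ** 2 for x in ref)
a = my - b * mx

corrected = [(y - a) / b for y in ir]
residual_bias = st.mean(c - r for c, r in zip(corrected, ref))
```

After inverting the fitted line, the residual bias is essentially zero, mirroring the paper's observation that customized calibration removed the bias and narrowed the limits of agreement.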
A Ricin Forensic Profiling Approach Based on a Complex Set of Biomarkers
Fredriksson, Sten-Ake; Wunschel, David S.; Lindstrom, Susanne Wiklund; ...
2018-03-28
A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids, and seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1 – PM4) ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data were collected using a range of analytical methods, and robust orthogonal partial least squares-discriminant analysis (OPLS-DA) models were constructed based on the calibration set. By use of a decision tree and two OPLS-DA models, the sample preparation methods of test set samples were determined. The model statistics of the two models were good, and a 100% rate of correct predictions of the test set was achieved.
NASA Astrophysics Data System (ADS)
Sinescu, C.; Bradu, A.; Duma, V.-F.; Topala, F. I.; Negrutiu, M. L.; Podoleanu, A. G.
2018-02-01
We present a recent investigation regarding the use of optical coherence tomography (OCT) in the monitoring of the calibration loss of sintering ovens for the manufacturing of metal ceramic dental prostheses. Differences between the actual temperatures of such ovens and their specifications lead to stress and even cracks in the prosthesis material, and therefore to the failure of the dental treatment. Current evaluation methods for oven calibration consist of firing supplemental samples; this is subjective, expensive, and time consuming. Using an in-house developed swept source (SS) OCT system, we have demonstrated that a quantitative assessment of the internal structure of the prostheses, and therefore of the temperature settings of the ovens, can be made. Using en-face OCT images acquired at similar depths inside the samples, the differences in reflectivity allow for the evaluation of the differences in granulation (i.e., in number and size of ceramic grains) of the prosthesis material. Fifty samples, divided into five groups, each sintered at a different temperature (lower than, higher than, or equal to the prescribed one), were analyzed. The consequences of the temperature variations with regard to the prescribed temperature were determined. Rules-of-thumb were extracted to monitor the settings of the oven objectively, using only OCT images of currently manufactured samples. The proposed method makes it possible to avoid producing prostheses with defects. While such rules-of-thumb achieve a qualitative assessment, an insight into our ongoing work on the quantitative assessment of such losses of calibration of dental ovens using OCT is also given.
[Determination of wine original regions using information fusion of NIR and MIR spectroscopy].
Xiang, Ling-Li; Li, Meng-Hua; Li, Jing-Mingz; Li, Jun-Hui; Zhang, Lu-Da; Zhao, Long-Lian
2014-10-01
Geographical origin of wine grapes is a significant factor affecting wine quality and wine prices. Tasters' evaluation is a good method but has some limitations, so it is important to discriminate different wine-origin regions quickly and accurately. The present paper proposed a method to determine wine-origin regions based on Bayesian information fusion of near-infrared (NIR) transmission spectra and mid-infrared (MIR) ATR spectra of wines. This method improved the determination results by expanding the sources of analysis information. NIR and MIR spectra of 153 wine samples from four different grape-growing regions were collected by near-infrared and mid-infrared Fourier transform spectrometers separately. These four regions, Huailai, Yantai, Gansu and Changli, are all typical geographical origins for Chinese wines. NIR and MIR discriminant models for the wine regions were established using partial least squares discriminant analysis (PLS-DA) based on the NIR and MIR spectra separately. In PLS-DA, the region of each wine sample is represented as a group of binary codes; with four wine regions in this paper, four output nodes stand for the categorical variables. The output node values for each sample in the NIR and MIR models were first normalized; these values represent the probabilities of each sample belonging to each category. They served as inputs to the Bayesian discriminant formula as prior probability values. The probabilities were substituted into the Bayesian formula to obtain posterior probabilities, from which the class membership of the samples was judged. Considering the stability of the PLS-DA models, all the wine samples were randomly divided into calibration and validation sets ten times.
The results of the NIR and MIR discriminant models for the four wine regions were as follows: the average accuracy rates of the calibration sets were 78.21% (NIR) and 82.57% (MIR), and the average accuracy rates of the validation sets were 82.50% (NIR) and 81.98% (MIR). With the method proposed in this paper, the accuracy rates of calibration and validation rose to 87.11% and 90.87%, respectively, better than the results of either individual spectroscopy. These results suggest that Bayesian information fusion of NIR and MIR spectra is feasible for fast identification of wine-origin regions.
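The fusion step described above, treating the normalized PLS-DA outputs of the two models as class probabilities and combining them through Bayes' formula, can be sketched as follows. The probability values are illustrative, not taken from the paper:

```python
# Sketch of the Bayesian fusion step: normalized PLS-DA outputs from
# the NIR and MIR models are treated as class probabilities and
# combined with a uniform prior. Numbers are illustrative.
regions = ["Huailai", "Yantai", "Gansu", "Changli"]
p_nir = [0.50, 0.25, 0.15, 0.10]   # normalized NIR model outputs
p_mir = [0.40, 0.45, 0.10, 0.05]   # normalized MIR model outputs

prior = [1.0 / len(regions)] * len(regions)

# Posterior proportional to prior * P(NIR evidence) * P(MIR evidence),
# assuming the two spectral sources are conditionally independent.
joint = [pr * a * b for pr, a, b in zip(prior, p_nir, p_mir)]
total = sum(joint)
posterior = [j / total for j in joint]

predicted = regions[posterior.index(max(posterior))]
```

Note how fusion can overturn a single-source decision: the MIR model alone favors Yantai, but the combined posterior favors Huailai because the NIR evidence for it is much stronger.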
Report: Industrial Hygiene: Safer Working through Analytical Chemistry.
ERIC Educational Resources Information Center
Hemingway, Ronald E.
1980-01-01
The analytical chemist is involved in the recognition, evaluation, and control of chemical hazards in the workplace environment. These goals can be achieved by setting up a monitoring program; this should be a combination of planning, calibration, sampling, and analysis of toxic substances. (SMB)
Modeling the effect of laser heating on the strength and failure of 7075-T6 aluminum
Florando, J. N.; Margraf, J. D.; Reus, J. F.; ...
2015-06-06
The effect of rapid laser heating on the response of 7075-T6 aluminum has been characterized using 3-D digital image correlation and a series of thermocouples. The experimental results indicate that as the samples are held under a constant load, the heating from the laser profile causes non-uniform temperature and strain fields, and the strain rate increases dramatically as the sample nears failure. Simulations have been conducted using the LLNL multi-physics code ALE3D and compared to the experiments. The strength and failure of the material were modeled using the Johnson–Cook strength and damage models. Here, in order to capture the response, a dual-condition criterion was utilized in which one set of parameters was calibrated to low-temperature, quasi-static strain-rate data, while the other parameter set was calibrated to high-temperature, high-strain-rate data. The thermal effects were captured using temperature-dependent thermal constants and invoking thermal transport with conduction, convection, and thermal radiation.
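For reference, the Johnson–Cook flow-stress form used above multiplies a strain-hardening term, a strain-rate term, and a thermal-softening term. The sketch below uses illustrative placeholder parameters, not the calibrated 7075-T6 sets from the paper:

```python
import math

# Johnson-Cook flow-stress model:
#   sigma = (A + B*eps^n) * (1 + C*ln(eps_dot/eps_dot0)) * (1 - T*^m)
# with homologous temperature T* = (T - T_room) / (T_melt - T_room).
# Parameter values below are illustrative placeholders.
def johnson_cook(eps_p, eps_dot, T,
                 A=520e6, B=477e6, n=0.52, C=0.001, m=1.0,
                 eps_dot0=1.0, T_room=293.0, T_melt=893.0):
    """Flow stress (Pa) vs plastic strain, strain rate and temperature."""
    strain_term = A + B * eps_p ** n
    rate_term = 1.0 + C * math.log(max(eps_dot / eps_dot0, 1e-12))
    T_star = (T - T_room) / (T_melt - T_room)
    thermal_term = 1.0 - max(T_star, 0.0) ** m
    return strain_term * rate_term * thermal_term

# Thermal softening at fixed strain and strain rate:
s_cold = johnson_cook(0.05, 1.0, 293.0)
s_hot = johnson_cook(0.05, 1.0, 600.0)
```

The dual-condition calibration in the paper amounts to using two such parameter sets, each fit in the strain-rate/temperature regime where it applies.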
Predicting geomagnetic reversals via data assimilation: a feasibility study
NASA Astrophysics Data System (ADS)
Morzfeld, Matthias; Fournier, Alexandre; Hulot, Gauthier
2014-05-01
The system of three ordinary differential equations (ODE) presented by Gissinger in [1] was shown to exhibit chaotic reversals whose statistics compare well with those from the paleomagnetic record. We explore the geophysical relevance of this low-dimensional model via data assimilation, i.e. we update the solution of the ODE with information from data of the dipole variable. The data set we use is 'SINT' (Valet et al. [2]), which provides the signed virtual axial dipole moment over the past 2 million years. We can obtain an accurate reconstruction of these dipole data using implicit sampling (a fully nonlinear Monte Carlo sampling strategy) and assimilating 5 kyr of data per sweep. We confirm our calibration of the model using the PADM2M dipole data set of Ziegler et al. [3]. The Monte Carlo sampling strategy provides us with quantitative information about the uncertainty of our estimates, and, in principle, we can use this information for making (robust) predictions under uncertainty. We perform synthetic data experiments to explore the predictive capability of the ODE model updated by data assimilation. For each experiment, we produce 2 Myr of synthetic data (with error levels similar to the ones found in the SINT data), calibrate the model to this record, and then check whether this calibrated model can reliably predict a reversal within the next 5 kyr. By performing a large number of such experiments, we can estimate the statistics that describe how reliably our calibrated model can predict a reversal of the geomagnetic field. It is found that the 1 kyr-ahead predictions of reversals produced by the model appear to be accurate and reliable. These encouraging results prompted us to also test predictions of the five reversals of the SINT (and PADM2M) data set, using a similarly calibrated model. Results will be presented and discussed.
References: [1] Gissinger, C., 2012, A new deterministic model for chaotic reversals, European Physical Journal B, 85:137. [2] Valet, J.P., Meynadier, L. and Guyodo, Y., 2005, Geomagnetic field strength and reversal rate over the past 2 million years, Nature, 435, 802-805. [3] Ziegler, L.B., Constable, C.G., Johnson, C.L. and Tauxe, L., 2011, PADM2M: a penalized maximum likelihood model of the 0-2 Ma paleomagnetic axial dipole moment, Geophysical Journal International, 184, 1069-1089.
Chirila, Madalina M; Sarkisian, Khachatur; Andrew, Michael E; Kwon, Cheol-Woong; Rando, Roy J; Harper, Martin
2015-04-01
The current measurement method for occupational exposure to wood dust is gravimetric analysis and is thus non-specific. In this work, diffuse reflection infrared Fourier transform spectroscopy (DRIFTS) for the analysis of only the wood component of dust was further evaluated by analysis of the same samples between two laboratories. Field samples were collected from six wood product factories using 25-mm glass fiber filters with the Button aerosol sampler. Gravimetric mass was determined in one laboratory by weighing the filters before and after aerosol collection. Diffuse reflection mid-infrared spectra were obtained from the wood dust on the filter, which was placed on a motorized stage inside the spectrometer. The metric used for the DRIFTS analysis was the intensity of the carbonyl band in cellulose and hemicellulose at ~1735 cm(-1). Calibration curves were constructed separately in both laboratories using the same sets of prepared filters from the inhalable sampling fraction of red oak, southern yellow pine, and western red cedar in the range of 0.125-4 mg of wood dust. Using the same procedure in both laboratories to build the calibration curve and analyze the field samples, 62.3% of the samples measured within 25% of the average result, with a mean difference between the laboratories of 18.5%. Some observations are included as to how the calibration and analysis can be improved. In particular, determining the wood type of each sample, to allow matching to the most appropriate calibration, increases the apparent proportion of wood dust in the sample, and this likely provides more realistic DRIFTS results. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2014.
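The quantification step above, a calibration curve of carbonyl-band intensity versus wood-dust mass, inverted to predict the mass on a field filter, can be sketched as follows. The intensities are made-up illustrative values; only the mass range matches the study:

```python
# Sketch of DRIFTS quantification: a linear calibration curve of
# carbonyl-band intensity (~1735 cm-1) versus wood-dust mass, then
# prediction of a field sample. Intensities are made-up values.
mass_mg = [0.125, 0.25, 0.5, 1.0, 2.0, 4.0]      # prepared filters
intensity = [0.031, 0.060, 0.119, 0.242, 0.480, 0.961]

n = len(mass_mg)
mx = sum(mass_mg) / n
my = sum(intensity) / n
slope = sum((x - mx) * (y - my) for x, y in zip(mass_mg, intensity)) / \
        sum((x - mx) ** 2 for x in mass_mg)
intercept = my - slope * mx

def predict_mass(i_sample):
    """Invert the calibration curve: intensity -> wood-dust mass (mg)."""
    return (i_sample - intercept) / slope

field_mass = predict_mass(0.30)
```

Matching each field sample to the calibration built from the most similar wood species, as the authors suggest, corresponds to swapping in a species-specific slope and intercept before inverting.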
Aw, Wen C; Ballard, J William O
2013-10-01
The age structure of natural populations is of interest in physiological, life history and ecological studies, but it is often difficult to determine. One methodological problem is that specimens may need to be sampled invasively, preventing subsequent taxonomic curation. A second problem is that it can be very expensive to accurately determine the age structure of a given population because large sample sizes are often necessary. In this study, we test the effects of temperature (17 °C, 23 °C and 26 °C) and diet (standard cornmeal and low-calorie diet) on the accuracy of the non-invasive, inexpensive and high-throughput near-infrared spectroscopy (NIRS) technique for determining the age of Drosophila flies. Composite and simplified calibration models were developed for each sex. Independent sets for each temperature and diet treatment, with flies not used in the calibration models, were then used to validate the accuracy of the calibration models. The composite NIRS calibration model was generated by including flies reared under all temperatures and diets. This approach permits rapid age measurement and age-structure determination in large populations of flies as less than or equal to 9 days old, or more than 9 days old, with 85-97% and 64-99% accuracy, respectively. The simplified calibration models were generated by including flies reared at 23 °C on the standard diet. Low accuracy rates were observed when the simplified calibration models were used to identify Drosophila reared (a) at 17 °C and 26 °C and (b) at 23 °C on the low-calorie diet. These results strongly suggest that appropriate calibration models need to be developed in the laboratory before this technique can be reliably used in the field. These calibration models should include the major environmental variables that change across space and time in the particular natural population to be studied. Copyright © 2013 Elsevier Ltd. All rights reserved.
Bayes classification of interferometric TOPSAR data
NASA Technical Reports Server (NTRS)
Michel, T. R.; Rodriguez, E.; Houshmand, B.; Carande, R.
1995-01-01
We report the Bayes classification of terrain types at different sites using airborne interferometric synthetic aperture radar (INSAR) data. A Gaussian maximum likelihood classifier was applied on multidimensional observations derived from the SAR intensity, the terrain elevation model, and the magnitude of the interferometric correlation. Training sets for forested, urban, agricultural, or bare areas were obtained either by selecting samples with known ground truth, or by k-means clustering of random sets of samples uniformly distributed across all sites, and subsequent assignments of these clusters using ground truth. The accuracy of the classifier was used to optimize the discriminating efficiency of the set of features that was chosen. The most important features include the SAR intensity, a canopy penetration depth model, and the terrain slope. We demonstrate the classifier's performance across sites using a unique set of training classes for the four main terrain categories. The scenes examined include San Francisco (CA) (predominantly urban and water), Mount Adams (WA) (forested with clear cuts), Pasadena (CA) (urban with mountains), and Antioch Hills (CA) (water, swamps, fields). Issues related to the effects of image calibration and the robustness of the classification to calibration errors are explored. The relative performance of single polarization Interferometric data classification is contrasted against classification schemes based on polarimetric SAR data.
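The Gaussian maximum likelihood classifier above assigns each observation to the class whose multivariate Gaussian best explains it. A minimal sketch with diagonal covariances and illustrative class statistics (the feature names follow the abstract, but the numbers are invented):

```python
import math

# Sketch of a Gaussian maximum likelihood classifier over per-pixel
# feature vectors (e.g. SAR intensity, penetration depth, terrain
# slope). Diagonal covariances keep the example short; class
# statistics are illustrative, estimated from training sets in practice.
classes = {
    "urban":  {"mean": [0.8, 0.1, 0.2], "var": [0.02, 0.01, 0.02]},
    "forest": {"mean": [0.5, 0.6, 0.3], "var": [0.03, 0.04, 0.03]},
    "water":  {"mean": [0.1, 0.0, 0.0], "var": [0.01, 0.01, 0.01]},
}

def log_likelihood(x, mean, var):
    # Sum of per-feature Gaussian log-densities (diagonal covariance).
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

def classify(x):
    return max(classes, key=lambda c: log_likelihood(
        x, classes[c]["mean"], classes[c]["var"]))

label = classify([0.75, 0.15, 0.25])
```

In the full method the class means and covariances come either from ground-truth training samples or from k-means clusters assigned using ground truth, and full covariance matrices replace the diagonal ones used here.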
Screening experiments of ecstasy street samples using near infrared spectroscopy.
Sondermann, N; Kovar, K A
1999-12-20
Twelve different sets of confiscated ecstasy samples were analysed applying both near infrared spectroscopy in reflectance mode (1100-2500 nm) and high-performance liquid chromatography (HPLC). The sets showed a large variance in composition. A calibration data set was generated based on the theory of factorial designs. It contained 221 N-methyl-3,4-methylenedioxyamphetamine (MDMA) samples, 167 N-ethyl-3,4-methylenedioxyamphetamine (MDE) samples, 111 amphetamine samples and 106 samples without a controlled substance, which will be called placebo samples hereafter. From this data set, PLS-1 models were calculated and successfully applied for validation of various external laboratory test sets. The transferability of these results to confiscated tablets is demonstrated here. It is shown that differentiation into placebo, amphetamine and ecstasy samples is possible. Analysis of intact tablets is practicable; however, more reliable results are obtained from pulverised samples, owing to ill-defined production procedures. The use of mathematically pretreated spectra improves the prediction quality of all the PLS-1 models studied. It is possible to improve discrimination between MDE and MDMA with the help of a second model based on raw spectra. Alternative strategies are briefly discussed.
Application of Hyperspectral Imaging to Detect Sclerotinia sclerotiorum on Oilseed Rape Stems
Kong, Wenwen; Zhang, Chu; Huang, Weihao
2018-01-01
Hyperspectral imaging covering the spectral range of 384–1034 nm combined with chemometric methods was used to detect Sclerotinia sclerotiorum (SS) on oilseed rape stems by two sample sets (60 healthy and 60 infected stems for each set). Second derivative spectra and PCA loadings were used to select the optimal wavelengths. Discriminant models were built and compared to detect SS on oilseed rape stems, including partial least squares-discriminant analysis, radial basis function neural network, support vector machine and extreme learning machine. The discriminant models using full spectra and optimal wavelengths showed good performance with classification accuracies of over 80% for the calibration and prediction set. Comparing all developed models, the optimal classification accuracies of the calibration and prediction set were over 90%. The similarity of selected optimal wavelengths also indicated the feasibility of using hyperspectral imaging to detect SS on oilseed rape stems. The results indicated that hyperspectral imaging could be used as a fast, non-destructive and reliable technique to detect plant diseases on stems. PMID:29300315
NASA Technical Reports Server (NTRS)
Mahaffy, Paul R.
2012-01-01
The measurement goals of the Sample Analysis at Mars (SAM) instrument suite on the "Curiosity" Rover of the Mars Science Laboratory (MSL) include chemical and isotopic analysis of organic and inorganic volatiles for both atmospheric and solid samples [1,2]. SAM directly supports the ambitious goals of the MSL mission to provide a quantitative assessment of habitability and preservation in Gale crater by means of a range of chemical and geological measurements [3]. The SAM FM combined calibration and environmental testing took place primarily in 2010, with a limited set of tests implemented after integration into the rover in January 2011. The scope of SAM FM testing was limited both to preserve SAM consumables, such as the lifetime of its electromechanical elements, and to minimize the level of terrestrial contamination in the SAM instrument. A more comprehensive calibration of a SAM-like suite of instruments will be implemented in 2012, with calibration runs planned for the SAM testbed. The SAM testbed is nearly identical to the SAM FM and operates in an ambient-pressure chamber. The SAM Instrument Suite: SAM's instruments are a Quadrupole Mass Spectrometer (QMS), a 6-column Gas Chromatograph (GC), and a 2-channel Tunable Laser Spectrometer (TLS). Gas chromatography-mass spectrometry is designed for identification of even trace organic compounds. The TLS [5] determines the C, H, and O isotopic composition in carbon dioxide, water, and methane. Sieved materials are delivered from the MSL sample acquisition and processing system to one of 68 cups of the Sample Manipulation System (SMS); 59 of these cups are fabricated from inert quartz. After sample delivery, a cup is inserted into one of 2 ovens for evolved gas analysis (EGA, ambient to >950 °C) by the QMS and TLS. A portion of the gas released can be trapped and subsequently analyzed by GCMS.
Nine sealed cups contain liquid solvents and chemical derivatization or thermochemolysis agents to extract and transform polar molecules such as amino acids, nucleobases, and carboxylic acids into compounds that are sufficiently volatile to transmit through the GC columns. The remaining 6 cups contain calibrants. SAM FM Calibration Overview: The SAM FM calibration in the Mars chamber employed a variety of pure gases, gas mixtures, and solid materials. Isotope calibration runs for the TLS utilized 13C-enriched CO2 standards and 0 enriched CH4. A variety of fluorocarbon compounds that spanned the entire mass range of the QMS, as well as C3-C6 hydrocarbons, were utilized for calibration of the GCMS. Solid samples consisting of a mixture of calcite, melanterite, and inert silica glass, either doped or not with fluorocarbons, were introduced into the SAM FM cups through the SAM inlet funnel/tube system.
Near-infrared spectroscopy used to predict soybean seed germination and vigor
USDA-ARS?s Scientific Manuscript database
The potential of using near-infrared (NIR) spectroscopy for differentiating levels in germination, vigor, and electrical conductivity of soybean seeds was investigated. For the 243 spectral data collected using the Perten DA7200, stratified sampling was used to obtain three calibration sets consisti...
40 CFR 90.315 - Analyzer initial calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... specifications in § 90.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX, and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers...) Rechecking of zero setting. Recheck the zero setting and, if necessary, repeat the procedure described in...
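The zero-and-span procedure excerpted above amounts to a two-point calibration: zero the analyzer on purified air, apply a calibration gas of known concentration, then correct subsequent readings. A minimal sketch with illustrative readings (not values from the regulation):

```python
# Two-point (zero/span) analyzer calibration sketch. Values are
# illustrative, not from 40 CFR 90.315.
zero_reading = 0.4     # analyzer output on purified synthetic air
span_conc = 100.0      # known calibration-gas concentration (ppm)
span_reading = 98.0    # analyzer output on that gas

gain = span_conc / (span_reading - zero_reading)

def correct(reading):
    """Map a raw analyzer reading to concentration (ppm)."""
    return gain * (reading - zero_reading)

# Rechecking the zero setting after calibration, as the rule requires:
assert abs(correct(zero_reading)) < 1e-9
sample_ppm = correct(49.2)
```

If the zero recheck fails, i.e. the corrected zero reading drifts from zero, the rule directs repeating the zero setting and calibration procedure.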
40 CFR 90.315 - Analyzer initial calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
... specifications in § 90.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX, and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers...) Rechecking of zero setting. Recheck the zero setting and, if necessary, repeat the procedure described in...
40 CFR 90.315 - Analyzer initial calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
... specifications in § 90.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX, and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers...) Rechecking of zero setting. Recheck the zero setting and, if necessary, repeat the procedure described in...
40 CFR 90.315 - Analyzer initial calibration.
Code of Federal Regulations, 2011 CFR
2011-07-01
... specifications in § 90.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX, and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers...) Rechecking of zero setting. Recheck the zero setting and, if necessary, repeat the procedure described in...
40 CFR 90.315 - Analyzer initial calibration.
Code of Federal Regulations, 2010 CFR
2010-07-01
... specifications in § 90.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX, and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers...) Rechecking of zero setting. Recheck the zero setting and, if necessary, repeat the procedure described in...
Elephant Seals and Temperature Data: Calibrations and Limitations.
NASA Astrophysics Data System (ADS)
Simmons, S. E.; Tremblay, Y.; Costa, D. P.
2006-12-01
In recent years, with technological advances, instruments deployed on diving marine animals have been used to sample the environment in addition to recording their behavior. Of all oceanographic variables, one of the most valuable and easiest to record is temperature. Here we report on a series of lab calibration and field validation experiments that consider the accuracy of temperature measurements from animal-borne ocean samplers. Additionally, we consider whether sampling frequency or animal behavior affects the quality of the temperature data collected by marine animals. Rapid-response, external temperature sensors on eight Wildlife Computers MK9 time-depth recorders (TDRs) were calibrated using water baths at the Naval Postgraduate School (Monterey, CA). These water baths are calibrated to 0.001 °C using a platinum thermistor. Instruments from different production batches were calibrated before and after deployments on adult female northern elephant seals, to examine tag performance over time and under "normal" usage. Tag performance in the field was validated by comparisons with temperature data from a Seabird CTD. In April/May of 2004, casts to 200 m were performed over the Monterey Canyon using a CTD array carrying MK9s. These casts were performed before and after the release of a juvenile elephant seal from the boat. The seal was also carrying an MK9 TDR, allowing the assessment of any animal effect on temperature profiles. Sampling frequency during these field validations was set at one-second intervals, and the data from the TDRs on both the CTD and the seals were sub-sampled at four, eight, 30 and 300 seconds (5 min). The sub-sampled data were used to determine thermocline depth, a thermocline depth zone, and temperature gradients, and to assess whether sampling frequency or animal behavior affects the quality of temperature data.
Preliminary analyses indicate that temperature sensors deployed on elephant seals can provide water column temperature data of high quality and precision.
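One common way to extract the thermocline depth mentioned above is to locate the depth interval with the steepest vertical temperature gradient; a minimal sketch on a synthetic profile (not the tag data):

```python
# Estimate thermocline depth from a temperature profile as the depth
# of the steepest (most negative) vertical temperature gradient.
# The profile below is synthetic, for illustration only.
depth_m = [0, 10, 20, 30, 40, 50, 60, 80, 100]
temp_c  = [15.0, 14.9, 14.7, 13.5, 11.0, 10.2, 9.9, 9.7, 9.6]

gradients = [(temp_c[i + 1] - temp_c[i]) / (depth_m[i + 1] - depth_m[i])
             for i in range(len(depth_m) - 1)]

# Steepest gradient marks the thermocline layer; take its midpoint.
i = gradients.index(min(gradients))
thermocline_depth = 0.5 * (depth_m[i] + depth_m[i + 1])
```

Sub-sampling the profile (e.g. keeping every fourth point) coarsens the gradient estimate, which is exactly the sensitivity the study probes with its 4 s to 300 s sub-sampled data.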
Masili, Alice; Puligheddu, Sonia; Sassu, Lorenzo; Scano, Paola; Lai, Adolfo
2012-11-01
In this work, we report a feasibility study on predicting the properties of neat crude oil samples from 300-MHz NMR spectral data and partial least squares (PLS) regression models. The study was carried out on 64 crude oil samples obtained from 28 different extraction fields and aims at developing a rapid and reliable method for characterizing crude oil in a fast and cost-effective way. The main properties generally employed for evaluating crudes' quality and behavior during refining were measured and used for calibration and testing of the PLS models. Among these, the UOP characterization factor K (K(UOP)) used to classify crude oils in terms of composition, density (D), total acidity number (TAN), sulfur content (S), and true boiling point (TBP) distillation yields were investigated. Test set validation with an independent set of data was used to evaluate model performance on the basis of standard error of prediction (SEP) statistics. Model performances are particularly good for the K(UOP) factor, TAN, and TBP distillation yields, whose standard error of calibration and SEP values match the analytical method precision, while the results obtained for D and S are less accurate but still useful for predictions. Furthermore, a strategy that reduces spectral data preprocessing and sample preparation procedures has been adopted. The models developed with such an ample crude oil set demonstrate that this methodology can be applied with success to modern refining process requirements. Copyright © 2012 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Bhartia, P. K.; Taylor, S.; Mcpeters, R. D.; Wellemeyer, C.
1995-01-01
The concept of the well-known Langley plot technique, used for the calibration of ground-based instruments, has been generalized for application to satellite instruments. In polar regions, near summer solstice, the solar backscattered ultraviolet (SBUV) instrument on the Nimbus 7 satellite samples the same ozone field at widely different solar zenith angles. These measurements are compared to assess the long-term drift in the instrument calibration. Although the technique provides only a relative wavelength-to-wavelength calibration, it can be combined with existing techniques to determine the drift of the instrument at any wavelength. Using this technique, we have generated a 12-year data set of ozone vertical profiles from SBUV with an estimated accuracy of +/- 5% at 1 mbar and +/- 2% at 10 mbar (95% confidence). Since the method is insensitive to true changes in the atmospheric ozone profile, it can also be used to compare the calibrations of similar SBUV instruments launched without temporal overlap.
Calibration of Wide-Field Deconvolution Microscopy for Quantitative Fluorescence Imaging
Lee, Ji-Sook; Wee, Tse-Luen (Erika); Brown, Claire M.
2014-01-01
Deconvolution enhances contrast in fluorescence microscopy images, especially in low-contrast, high-background wide-field microscope images, improving characterization of features within the sample. Deconvolution can also be combined with other imaging modalities, such as confocal microscopy, and most software programs seek to improve resolution as well as contrast. Quantitative image analyses require instrument calibration and with deconvolution, necessitate that this process itself preserves the relative quantitative relationships between fluorescence intensities. To ensure that the quantitative nature of the data remains unaltered, deconvolution algorithms need to be tested thoroughly. This study investigated whether the deconvolution algorithms in AutoQuant X3 preserve relative quantitative intensity data. InSpeck Green calibration microspheres were prepared for imaging, z-stacks were collected using a wide-field microscope, and the images were deconvolved using the iterative deconvolution algorithms with default settings. Afterwards, the mean intensities and volumes of microspheres in the original and the deconvolved images were measured. Deconvolved data sets showed higher average microsphere intensities and smaller volumes than the original wide-field data sets. In original and deconvolved data sets, intensity means showed linear relationships with the relative microsphere intensities given by the manufacturer. Importantly, upon normalization, the trend lines were found to have similar slopes. In original and deconvolved images, the volumes of the microspheres were quite uniform for all relative microsphere intensities. We were able to show that AutoQuant X3 deconvolution software data are quantitative. In general, the protocol presented can be used to calibrate any fluorescence microscope or image processing and analysis procedure. PMID:24688321
Photometric redshift analysis in the Dark Energy Survey Science Verification data
NASA Astrophysics Data System (ADS)
Sánchez, C.; Carrasco Kind, M.; Lin, H.; Miquel, R.; Abdalla, F. B.; Amara, A.; Banerji, M.; Bonnett, C.; Brunner, R.; Capozzi, D.; Carnero, A.; Castander, F. J.; da Costa, L. A. N.; Cunha, C.; Fausti, A.; Gerdes, D.; Greisel, N.; Gschwend, J.; Hartley, W.; Jouvel, S.; Lahav, O.; Lima, M.; Maia, M. A. G.; Martí, P.; Ogando, R. L. C.; Ostrovski, F.; Pellegrini, P.; Rau, M. M.; Sadeh, I.; Seitz, S.; Sevilla-Noarbe, I.; Sypniewski, A.; de Vicente, J.; Abbot, T.; Allam, S. S.; Atlee, D.; Bernstein, G.; Bernstein, J. P.; Buckley-Geer, E.; Burke, D.; Childress, M. J.; Davis, T.; DePoy, D. L.; Dey, A.; Desai, S.; Diehl, H. T.; Doel, P.; Estrada, J.; Evrard, A.; Fernández, E.; Finley, D.; Flaugher, B.; Frieman, J.; Gaztanaga, E.; Glazebrook, K.; Honscheid, K.; Kim, A.; Kuehn, K.; Kuropatkin, N.; Lidman, C.; Makler, M.; Marshall, J. L.; Nichol, R. C.; Roodman, A.; Sánchez, E.; Santiago, B. X.; Sako, M.; Scalzo, R.; Smith, R. C.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Tucker, D. L.; Uddin, S. A.; Valdés, F.; Walker, A.; Yuan, F.; Zuntz, J.
2014-12-01
We present results from a study of the photometric redshift performance of the Dark Energy Survey (DES), using the early data from a Science Verification period of observations in late 2012 and early 2013 that provided science-quality images for almost 200 sq. deg. at the nominal depth of the survey. We assess the photometric redshift (photo-z) performance using about 15 000 galaxies with spectroscopic redshifts available from other surveys. These galaxies are used, in different configurations, as a calibration sample, and photo-z's are obtained and studied using most of the existing photo-z codes. A weighting method in a multidimensional colour-magnitude space is applied to the spectroscopic sample in order to evaluate the photo-z performance with sets that mimic the full DES photometric sample, which is on average significantly deeper than the calibration sample due to the limited depth of spectroscopic surveys. Empirical photo-z methods using, for instance, artificial neural networks or random forests, yield the best performance in the tests, achieving core photo-z resolutions σ68 ˜ 0.08. Moreover, the results from most of the codes, including template-fitting methods, comfortably meet the DES requirements on photo-z performance, therefore, providing an excellent precedent for future DES data sets.
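The core scatter metric σ68 quoted above can be sketched as below. Defining σ68 as the half-width of the central 68% interval of (z_phot − z_spec)/(1 + z_spec) is one common convention (an assumption here, not necessarily the exact DES definition), and the synthetic catalogue is illustrative only:

```python
import numpy as np

def sigma_68(z_phot, z_spec):
    """Half-width of the central 68% interval of (z_phot - z_spec)/(1 + z_spec)."""
    dz = (z_phot - z_spec) / (1.0 + z_spec)
    lo, hi = np.percentile(dz, [15.9, 84.1])
    return 0.5 * (hi - lo)

# Synthetic photo-z catalogue with Gaussian scatter of 0.08 in (1 + z) units,
# mimicking the sigma_68 ~ 0.08 resolution reported for the best codes.
rng = np.random.default_rng(1)
z_spec = rng.uniform(0.2, 1.2, size=15000)
z_phot = z_spec + 0.08 * (1.0 + z_spec) * rng.normal(size=z_spec.size)
print(round(sigma_68(z_phot, z_spec), 3))
```

Using percentiles rather than a standard deviation makes the metric robust to catastrophic photo-z outliers, which is why it is favoured for survey requirements.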
NASA Astrophysics Data System (ADS)
Li, Baoxin; Wang, Dongmei; Lv, Jiagen; Zhang, Zhujun
2006-09-01
In this paper, a flow-injection chemiluminescence (CL) system is proposed for the simultaneous determination of Co(II) and Cr(III) with partial least squares calibration. The method is based on the fact that both Co(II) and Cr(III) catalyze the luminol-H2O2 CL reaction and that their catalytic activities differ significantly under the same reaction conditions. The CL intensity of Co(II) and Cr(III) was measured and recorded at different pH values of the reaction medium, and the obtained data were processed by the chemometric approach of partial least squares. The experimental calibration set was composed of nine sample solutions using an orthogonal calibration design for two-component mixtures. The calibration curve was linear over the concentration ranges of 2×10⁻⁷ to 8×10⁻¹⁰ and 2×10⁻⁶ to 4×10⁻⁹ g/ml for Co(II) and Cr(III), respectively. The proposed method offers the potential advantages of high sensitivity, simplicity and rapidity for Co(II) and Cr(III) determination, and was successfully applied to the simultaneous determination of both analytes in real water samples.
Foster, Charles S P; Sauquet, Hervé; van der Merwe, Marlien; McPherson, Hannah; Rossetto, Maurizio; Ho, Simon Y W
2017-05-01
The evolutionary timescale of angiosperms has long been a key question in biology. Molecular estimates of this timescale have shown considerable variation, being influenced by differences in taxon sampling, gene sampling, fossil calibrations, evolutionary models, and choices of priors. Here, we analyze a data set comprising 76 protein-coding genes from the chloroplast genomes of 195 taxa spanning 86 families, including novel genome sequences for 11 taxa, to evaluate the impact of models, priors, and gene sampling on Bayesian estimates of the angiosperm evolutionary timescale. Using a Bayesian relaxed molecular-clock method, with a core set of 35 minimum and two maximum fossil constraints, we estimated that crown angiosperms arose 221 (251-192) Ma during the Triassic. Based on a range of additional sensitivity and subsampling analyses, we found that our date estimates were generally robust to large changes in the parameters of the birth-death tree prior and of the model of rate variation across branches. We found an exception to this when we implemented fossil calibrations in the form of highly informative gamma priors rather than as uniform priors on node ages. Under all other calibration schemes, including trials of seven maximum age constraints, we consistently found that the earliest divergences of angiosperm clades substantially predate the oldest fossils that can be assigned unequivocally to their crown group. Overall, our results and experiments with genome-scale data suggest that reliable estimates of the angiosperm crown age will require increased taxon sampling, significant methodological changes, and new information from the fossil record. [Angiospermae, chloroplast, genome, molecular dating, Triassic.]. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Li, Xiang; Arzhantsev, Sergey; Kauffman, John F; Spencer, John A
2011-04-05
Four portable NIR instruments from the same manufacturer that were nominally identical were programmed with a PLS model for the detection of diethylene glycol (DEG) contamination in propylene glycol (PG)-water mixtures. The model was developed on one spectrometer and used on the other units after a calibration transfer procedure that used piecewise direct standardization. Although quantitative results were produced, in practice the instrument interface was programmed to report in Pass/Fail mode. The Pass/Fail determinations were made within 10 s and were based on a threshold that passed a blank sample with 95% confidence. The detection limit was then established as the concentration at which a sample would fail with 95% confidence. For a 1% DEG threshold, one false negative (Type II) and eight false positive (Type I) errors were found in over 500 samples measured. A representative test set produced standard errors of less than 2%. Since the range of diethylene glycol for economically motivated adulteration (EMA) is expected to be above 1%, the sensitivity of field-calibrated portable NIR instruments is sufficient to rapidly screen out potentially problematic materials. Following method development, the instruments were shipped to different sites around the country for a collaborative study with a fixed protocol to be carried out by different analysts. NIR spectra of replicate sets of calibration transfer, system suitability and test samples were all processed with the same chemometric model on multiple instruments to determine the overall analytical precision of the method. The combined results collected for all participants were statistically analyzed to determine a limit of detection (2.0% DEG) and limit of quantitation (6.5%) that can be expected for a method distributed to multiple field laboratories. Published by Elsevier B.V.
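The threshold logic above (pass a blank with 95% confidence; detection limit where a sample fails with 95% confidence) can be sketched under a Gaussian-noise assumption. The replicate blank predictions and the 0.3% DEG prediction noise below are invented for illustration, not the study's data:

```python
import numpy as np

Z95 = 1.645  # one-sided 95% standard-normal quantile

# Hypothetical predicted %DEG values for replicate blank measurements,
# assuming ~0.3% prediction noise (illustrative assumption only).
rng = np.random.default_rng(2)
blanks = 0.30 * rng.normal(size=50)

s = blanks.std(ddof=1)
threshold = blanks.mean() + Z95 * s   # passes a blank with ~95% confidence
lod = threshold + Z95 * s             # fails with ~95% confidence at this level
print(round(threshold, 2), round(lod, 2))
```

With this construction the detection limit sits roughly 2 × 1.645 noise standard deviations above the blank mean, which is why a ~0.3% prediction noise is consistent with a ~1% DEG screening threshold.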
Sun, Tong; Xu, Wen-Li; Hu, Tian; Liu, Mu-Hua
2013-12-01
The objective of the present research was to assess the soluble solids content (SSC) of Nanfeng mandarin by visible/near-infrared (Vis/NIR) spectroscopy combined with a new variable selection method, in order to simplify the prediction model and improve its performance. A total of 300 Nanfeng mandarin samples were used; the numbers of samples in the calibration, validation and prediction sets were 150, 75 and 75, respectively. Vis/NIR spectra of the samples were acquired by a QualitySpec spectrometer in the wavelength range of 350-1000 nm. Uninformative variable elimination (UVE) was used to eliminate wavelength variables that carried little SSC information, then independent component analysis (ICA) was used to extract independent components (ICs) from the spectra with the uninformative wavelength variables removed. Finally, least squares support vector machine (LS-SVM) regression was used to develop calibration models for SSC using the extracted ICs, and the 75 prediction samples that had not been used for model development were used to evaluate the performance of the SSC model. The results indicate that Vis/NIR spectroscopy combined with UVE-ICA-LS-SVM is suitable for assessing SSC of Nanfeng mandarin, with high prediction precision. UVE-ICA is an effective method to eliminate uninformative wavelength variables, extract important spectral information, simplify the prediction model and improve its performance. The SSC model developed by UVE-ICA-LS-SVM was superior to those developed by PLS, PCA-LS-SVM or ICA-LS-SVM; the coefficients of determination and root mean square errors in the calibration, validation and prediction sets were 0.978, 0.230%, 0.965, 0.301% and 0.967, 0.292%, respectively.
Clinical results from a noninvasive blood glucose monitor
NASA Astrophysics Data System (ADS)
Blank, Thomas B.; Ruchti, Timothy L.; Lorenz, Alex D.; Monfre, Stephen L.; Makarewicz, M. R.; Mattu, Mutua; Hazen, Kevin
2002-05-01
Non-invasive blood glucose monitoring has long been proposed as a means of advancing the management of diabetes through increased measurement and control. The use of a near-infrared (NIR) spectroscopy-based methodology for noninvasive monitoring has been pursued by a number of groups. The accuracy of the NIR measurement technology is limited by challenges related to the instrumentation, the heterogeneity and time-variant nature of skin tissue, and the complexity of the calibration methodology. In this work, we discuss results from a clinical study that targeted the evaluation of individual calibrations for each subject based on a series of controlled calibration visits. While the customization of the calibrations to individuals was intended to reduce model complexity, the extensive requirements for each individual set of calibration data were difficult to achieve and required several days of measurement. Through the careful selection of a small subset of data from all samples collected on the 138 study participants in a previous study, we have developed a methodology for applying a single standard calibration to multiple persons. The standard calibrations have been applied to a plurality of individuals and shown to be persistent over periods greater than 24 weeks.
da Silva, Neirivaldo Cavalcante; Honorato, Ricardo Saldanha; Pimentel, Maria Fernanda; Garrigues, Salvador; Cervera, Maria Luisa; de la Guardia, Miguel
2015-09-01
There is an increasing demand for herbal medicines in weight-loss treatment. Some synthetic chemicals, such as sibutramine (SB), have been detected as adulterants in herbal formulations. In this study, two strategies using near-infrared (NIR) spectroscopy were developed to evaluate potential adulteration of herbal medicines with SB: a qualitative screening approach and a quantitative methodology based on multivariate calibration. Samples comprised products commercialized as herbal medicines, as well as laboratory-adulterated samples. Spectra were obtained in the range of 14,000-4000 cm⁻¹. Using PLS-DA, a correct classification rate of 100% was achieved for the external validation set. In the quantitative approach, the root mean square error of prediction (RMSEP), for both PLS and MLR models, was 0.2% w/w. The results prove the potential of NIR spectroscopy and multivariate calibration for quantifying sibutramine in adulterated herbal medicine samples. © 2015 American Academy of Forensic Sciences.
Lin, Yiqing; Li, Weiyong; Xu, Jin; Boulas, Pierre
2015-07-05
The aim of this study is to develop an at-line near infrared (NIR) method for the rapid and simultaneous determination of four structurally similar active pharmaceutical ingredients (APIs) in powder blends intended for the manufacturing of tablets. Two of the four APIs in the formula are present in relatively small amounts, one at 0.95% and the other at 0.57%. Such small amounts in addition to the similarity in structures add significant complexity to the blend uniformity analysis. The NIR method is developed using spectra from six laboratory-created calibration samples augmented by a small set of spectra from a large-scale blending sample. Applying the quality by design (QbD) principles, the calibration design included concentration variations of the four APIs and a main excipient, microcrystalline cellulose. A bench-top FT-NIR instrument was used to acquire the spectra. The obtained NIR spectra were analyzed by applying principal component analysis (PCA) before calibration model development. Score patterns from the PCA were analyzed to reveal relationship between latent variables and concentration variations of the APIs. In calibration model development, both PLS-1 and PLS-2 models were created and evaluated for their effectiveness in predicting API concentrations in the blending samples. The final NIR method shows satisfactory specificity and accuracy. Copyright © 2015 Elsevier B.V. All rights reserved.
Procedures for adjusting regional regression models of urban-runoff quality using local data
Hoos, A.B.; Sisolak, J.K.
1993-01-01
Statistical operations termed model-adjustment procedures (MAPs) can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each MAP is a form of regression analysis in which the local data base is used as a calibration data set. Regression coefficients are determined from the local data base, and the resulting 'adjusted' regression models can then be used to predict storm-runoff quality at unmonitored sites. The response variable in the regression analyses is the observed load or mean concentration of a constituent in storm runoff for a single storm. The set of explanatory variables used in the regression analyses is different for each MAP, but always includes the predicted value of load or mean concentration from a regional regression model. The four MAPs examined in this study were: single-factor regression against the regional model prediction, P (termed MAP-1F-P); regression against P (termed MAP-R-P); regression against P and additional local variables (termed MAP-R-P+nV); and a weighted combination of P and a local-regression prediction (termed MAP-W). The procedures were tested by means of split-sample analysis, using data from three cities included in the Nationwide Urban Runoff Program: Denver, Colorado; Bellevue, Washington; and Knoxville, Tennessee. The MAP that provided the greatest predictive accuracy for the verification data set differed among the three test data bases and among model types (MAP-W for Denver and Knoxville, MAP-1F-P and MAP-R-P for Bellevue load models, and MAP-R-P+nV for Bellevue concentration models) and, in many cases, was not clearly indicated by the values of standard error of estimate for the calibration data set. A scheme to guide MAP selection, based on exploratory data analysis of the calibration data set, is presented and tested. The MAPs were also tested for sensitivity to the size of the calibration data set.
As expected, predictive accuracy of all MAPs for the verification data set decreased as the calibration data-set size decreased, but predictive accuracy was not as sensitive for the MAPs as it was for the local regression models.
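A single-factor MAP can be sketched as a regression of locally observed loads against the regional prediction P. The data below are invented, and fitting with an intercept in linear space is one plausible reading of the procedure (the original procedures may well operate in log space):

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented local calibration data: regional-model predictions P and observed
# storm loads at monitored local sites (lognormal scatter, illustration only)
P = rng.lognormal(mean=1.0, sigma=0.5, size=30)
observed = 0.7 * P * np.exp(0.2 * rng.normal(size=30))

# MAP-1F-P-style adjustment: regress observed load on the regional prediction
slope, intercept = np.polyfit(P, observed, 1)

def adjusted_prediction(p_regional):
    """Predict a local storm load from a regional-model prediction."""
    return intercept + slope * p_regional

print(slope, intercept)
```

The fitted slope and intercept capture the systematic local bias of the regional model, and the adjusted model is then applied at unmonitored local sites.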
Sampling strategies to improve passive optical remote sensing of river bathymetry
Legleiter, Carl; Overstreet, Brandon; Kinzel, Paul J.
2018-01-01
Passive optical remote sensing of river bathymetry involves establishing a relation between depth and reflectance that can be applied throughout an image to produce a depth map. Building upon the Optimal Band Ratio Analysis (OBRA) framework, we introduce sampling strategies for constructing calibration data sets that lead to strong relationships between an image-derived quantity and depth across a range of depths. Progressively excluding observations that exceed a series of cutoff depths from the calibration process improved the accuracy of depth estimates and allowed the maximum detectable depth ($d_{max}$) to be inferred directly from an image. Depth retrieval in two distinct rivers also was enhanced by a stratified version of OBRA that partitions field measurements into a series of depth bins to avoid biases associated with under-representation of shallow areas in typical field data sets. In the shallower, clearer of the two rivers, including the deepest field observations in the calibration data set did not compromise depth retrieval accuracy, suggesting that $d_{max}$ was not exceeded and the reach could be mapped without gaps. Conversely, in the deeper and more turbid stream, progressive truncation of input depths yielded a plausible estimate of $d_{max}$ consistent with theoretical calculations based on field measurements of light attenuation by the water column. This result implied that the entire channel, including pools, could not be mapped remotely. However, truncation improved the accuracy of depth estimates in areas shallower than $d_{max}$, which comprise the majority of the channel and are of primary interest for many habitat-oriented applications.
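The progressive-truncation idea can be sketched as follows: exclude calibration observations beyond a series of cutoff depths, refit the depth-versus-image-quantity regression, and take the cutoff with the best fit as an estimate of $d_{max}$. The saturating synthetic relation and the 3 m $d_{max}$ are assumptions for illustration, not the OBRA implementation:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic calibration data: an image-derived quantity X increases linearly
# with depth until light is fully attenuated near d_max = 3 m, then saturates.
d_max_true = 3.0
depth = rng.uniform(0.2, 5.0, size=200)
X = np.minimum(depth, d_max_true) + 0.03 * rng.normal(size=depth.size)

def calibration_r2(cutoff):
    """R^2 of a linear depth-vs-X fit using only observations <= cutoff deep."""
    m = depth <= cutoff
    coeffs = np.polyfit(X[m], depth[m], 1)
    resid = depth[m] - np.polyval(coeffs, X[m])
    return 1.0 - resid.var() / depth[m].var()

cutoffs = np.arange(1.0, 5.01, 0.5)
scores = [calibration_r2(c) for c in cutoffs]
d_max_est = cutoffs[int(np.argmax(scores))]
print(d_max_est)
```

Beyond the true $d_{max}$ the image quantity saturates, so including deeper observations degrades the fit; the cutoff that maximizes the calibration fit therefore indicates the maximum detectable depth.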
Kunz, Matthew Ross; Ottaway, Joshua; Kalivas, John H; Georgiou, Constantinos A; Mousdis, George A
2011-02-23
Detecting and quantifying extra virgin olive oil adulteration is of great importance to the olive oil industry. Many spectroscopic methods in conjunction with multivariate analysis have been used to address these issues. However, successes to date are limited because calibration models are built for a specific set of geographical regions, growing seasons, cultivars, and oil extraction methods (the composite primary condition). Samples from new geographical regions, growing seasons, etc. (secondary conditions) are not always correctly predicted by the primary model due to different olive oil and/or adulterant compositions stemming from secondary conditions not matching the primary conditions. Three Tikhonov regularization (TR) variants are used in this paper to allow adulterant (sunflower oil) concentration predictions in samples from geographical regions not part of the original primary calibration domain. Of the three TR variants, ridge regression with an additional 2-norm penalty provides the smallest validation sample prediction errors. Although the paper reports on using TR for model updating to predict adulterant oil concentration, the methods should also be applicable to updating models distinguishing adulterated samples from pure extra virgin olive oil. Additionally, the approaches are general and can be used with other spectroscopic methods and adulterants as well as with other agricultural products.
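Tikhonov regularization in its ridge form can be sketched via the stacked least-squares system, with weighted secondary-condition samples appended to the primary system for model updating; the data, weight and penalty below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def ridge_augmented(X, y, lam):
    """Solve min ||y - X b||^2 + lam^2 ||b||^2 via a stacked least-squares system."""
    p = X.shape[1]
    Xa = np.vstack([X, lam * np.eye(p)])
    ya = np.concatenate([y, np.zeros(p)])
    b, *_ = np.linalg.lstsq(Xa, ya, rcond=None)
    return b

# Invented model-updating setup: many primary-condition samples plus a few
# secondary-condition samples stacked onto the same system
rng = np.random.default_rng(6)
b_true = rng.normal(size=20)
Xp = rng.normal(size=(50, 20))
yp = Xp @ b_true + 0.05 * rng.normal(size=50)
Xs = rng.normal(size=(4, 20))
ys = Xs @ b_true + 0.05 * rng.normal(size=4)

w = 5.0  # up-weight the secondary samples so they influence the update
b = ridge_augmented(np.vstack([Xp, w * Xs]), np.concatenate([yp, w * ys]), lam=0.1)
rmse = float(np.sqrt(np.mean((b - b_true) ** 2)))
print(round(rmse, 3))
```

The penalty weight lam trades model shrinkage against fidelity to the data, which is the trade-off the TR variants in the paper tune when extending a primary calibration to new regions.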
NASA Astrophysics Data System (ADS)
Zhang, Chi; Reufer, Mathias; Gaudino, Danila; Scheffold, Frank
2017-11-01
Diffusing wave spectroscopy (DWS) can be employed as an optical rheology tool with numerous applications for studying the structure, dynamics and linear viscoelastic properties of complex fluids, foams, glasses and gels. To carry out DWS measurements, one first needs to quantify the static optical properties of the sample under investigation, i.e. the transport mean free path l* and the absorption length l_a. In the absence of absorption this can be done by comparing the diffuse optical transmission to a calibration sample whose l* is known. Performing this comparison, however, is cumbersome, time consuming, and prone to mistakes by the operator. Moreover, even weak absorption can lead to significant errors. In this paper, we demonstrate the implementation of an automated approach with which the DWS measurement procedure can be simplified significantly. By comparison with a comprehensive set of calibration measurements we cover the entire parameter space relating measured count rates (CR_t, CR_b) to (l*, l_a). Based on this approach we can determine l* and l_a of an unknown sample accurately, thus making the additional measurement of a calibration sample obsolete. We illustrate the use of this approach by monitoring the coarsening of a commercially available shaving foam with DWS.
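The lookup from measured count rates to (l*, l_a) can be sketched as interpolation over a pre-computed calibration table; the forward model generating the table below is a toy assumption, not DWS physics, and serves purely to show the mechanics:

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Hypothetical calibration table: count rates computed from a toy forward
# model for samples of known l* and l_a (illustration only).
lstar = np.linspace(50.0, 500.0, 10)    # transport mean free path (um)
la = np.linspace(1.0, 20.0, 10)         # absorption length (mm)
L, A = np.meshgrid(lstar, la)
CRt = 100.0 * L / (L + 2000.0 / A)                  # "transmitted" count rate
CRb = 80.0 * (1.0 - L / 1000.0) * A / (A + 5.0)     # "backscattered" count rate

pts = np.column_stack([CRt.ravel(), CRb.ravel()])
vals = np.column_stack([L.ravel(), A.ravel()])
invert = LinearNDInterpolator(pts, vals)  # (CR_t, CR_b) -> (l*, l_a)

# Sanity check: inverting the count rates of a known grid point recovers it
res = invert(np.array([[CRt[3, 4], CRb[3, 4]]]))[0]
print(res)  # ≈ [lstar[4], la[3]]
```

Once such a table spans the full parameter space, an unknown sample's (l*, l_a) follows directly from its measured count-rate pair, removing the need for a per-measurement calibration sample.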
An Enclosed Laser Calibration Standard
NASA Astrophysics Data System (ADS)
Adams, Thomas E.; Fecteau, M. L.
1985-02-01
We have designed, evaluated and calibrated an enclosed, safety-interlocked laser calibration standard for use in US Army Secondary Reference Calibration Laboratories. This Laser Test Set Calibrator (LTSC) represents the Army's first-generation field laser calibration standard. Twelve LTSCs are now being fielded world-wide. The main requirement on the LTSC is to provide calibration support for the Test Set (TS3620), which, in turn, is a GO/NO-GO tester of the Hand-Held Laser Rangefinder (AN/GVS-5). However, we believe its design is flexible enough to accommodate the calibration of other laser test, measurement and diagnostic equipment (TMDE) provided that single-shot capability is adequate to perform the task. In this paper we describe the salient aspects and calibration requirements of the AN/GVS-5 Rangefinder and the Test Set which drove the basic LTSC design. We also detail our evaluation and calibration of the LTSC, in particular the LTSC system standards. We conclude with a review of our error analysis, from which uncertainties were assigned to the LTSC calibration functions.
A ricin forensic profiling approach based on a complex set of biomarkers.
Fredriksson, Sten-Åke; Wunschel, David S; Lindström, Susanne Wiklund; Nilsson, Calle; Wahl, Karen; Åstot, Crister
2018-08-15
A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids and seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1-PM4), ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data were collected by use of a range of analytical methods, and robust orthogonal partial least squares discriminant analysis (OPLS-DA) models were constructed based on the calibration set. By use of a decision tree and two OPLS-DA models, the sample preparation methods of test set samples were determined. The model statistics of the two models were good and a 100% rate of correct predictions of the test set was achieved. Copyright © 2018 Elsevier B.V. All rights reserved.
Li, Feng; Li, Wen-Xia; Zhao, Guo-Liang; Tang, Shi-Jun; Li, Xue-Jiao; Wu, Hong-Mei
2014-10-01
A series of 354 polyester-cotton blend fabrics were studied by near-infrared spectroscopy (NIRS), and NIR qualitative analysis models for different spectral characteristics were established by the partial least squares (PLS) method combined with a qualitative identification coefficient. Dyed polyester-cotton blend fabrics showed two types of spectra: normal spectra and slash spectra. The slash spectra lose their spectral characteristics, which are affected by the samples' dyes, pigments, matting agents and other chemical additives. The recognition rate was low when a single model was established from the total sample set, so the samples were divided into two sets, a normal-spectrum sample set and a slash-spectrum sample set, and two NIR qualitative analysis models were established respectively. After the models were established, each model's spectral region, pretreatment methods and factors were optimized based on the validation results, thereby improving the robustness and reliability of the models. The results showed that the recognition rate improved greatly when the models were established separately, reaching up to 99% when the two models were verified by internal validation. The RC (relation coefficient of calibration) values of the normal-spectrum model and slash-spectrum model were 0.991 and 0.991, respectively; the RP (relation coefficient of prediction) values were 0.983 and 0.984, respectively; the SEC (standard error of calibration) values were 0.887 and 0.453, respectively; and the SEP (standard error of prediction) values were 1.131 and 0.573, respectively. A further series of 150 samples was used to verify the normal-spectrum model and slash-spectrum model, and the recognition rates reached up to 91.33% and 88.00%, respectively. This shows that the NIR qualitative analysis models can be used for identification of polyester-cotton blend fabrics at recycling sites.
Hoffmann, Uwe; Pfeifer, Frank; Hsuing, Chang; Siesler, Heinz W
2016-05-01
The aim of this contribution is to demonstrate the transfer of spectra that have been measured on two different laboratory Fourier transform near-infrared (FT-NIR) spectrometers to the format of a handheld instrument by measuring only a few samples with both spectrometer types. Thus, despite the extreme differences in spectral range and resolution, spectral data sets that have been collected and quantitative as well as qualitative calibrations that have been developed thereof, respectively, over a long period on a laboratory instrument can be conveniently transferred to the handheld system. Thus, the necessity to prepare completely new calibration samples and the effort required to develop calibration models when changing hardware platforms is minimized. The enabling procedure is based on piecewise direct standardization (PDS) and will be described for the data sets of a quantitative and a qualitative application case study. For this purpose the spectra measured on the FT-NIR laboratory spectrometers were used as "master" data and transferred to the "target" format of the handheld instrument. The quantitative test study refers to transmission spectra of three-component liquid solvent mixtures whereas the qualitative application example encompasses diffuse reflection spectra of six different current polymers. To prove the performance of the transfer procedure for quantitative applications, partial least squares (PLS-1) calibrations were developed for the individual components of the solvent mixtures with spectra transferred from the master to the target instrument and the cross-validation parameters were compared with the corresponding parameters obtained for spectra measured on the master and target instruments, respectively. 
To test the retention of the discrimination ability of the transferred polymer spectra sets principal component analyses (PCAs) were applied exemplarily for three of the six investigated polymers and their identification was demonstrated by Mahalanobis distance plots for all polymers. © The Author(s) 2016.
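A minimal PDS sketch, transferring master-instrument spectra toward the target format via a banded transfer matrix estimated from a few samples measured on both instruments; the toy smoothing-and-rescaling relation between the two instruments is an assumption for illustration:

```python
import numpy as np

def pds_transfer_matrix(master_std, target_std, window=2):
    """Banded matrix F with target ≈ master @ F, fitted channel-by-channel
    from transfer-standard spectra measured on both instruments."""
    n, p = master_std.shape
    F = np.zeros((p, p))
    for j in range(p):
        lo, hi = max(0, j - window), min(p, j + window + 1)
        b, *_ = np.linalg.lstsq(master_std[:, lo:hi], target_std[:, j], rcond=None)
        F[lo:hi, j] = b
    return F

# Toy relation between instruments: each "target" channel is a smoothed,
# rescaled blend of neighbouring "master" channels (assumption, not real data)
rng = np.random.default_rng(7)
master = rng.normal(size=(12, 60))
kernel = np.array([0.2, 0.6, 0.2])
smooth = lambda s: np.convolve(s, kernel, mode="same")
target = 0.8 * np.apply_along_axis(smooth, 1, master)

F = pds_transfer_matrix(master[:6], target[:6])  # six transfer samples
pred = master[6:] @ F                            # transfer the remaining spectra
err = float(np.sqrt(np.mean((pred - target[6:]) ** 2)))
print(err)
```

Because each target channel is regressed only on a small window of master channels, a handful of transfer samples suffices, which is what makes re-using a large historical calibration library on new hardware practical.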
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuehne, David Patrick; Lattin, Rebecca Renee
The Rad-NESHAP program, part of the Air Quality Compliance team of LANL’s Compliance Programs group (EPC-CP), and the Radiation Instrumentation & Calibration team, part of the Radiation Protection Services group (RP-SVS), frequently partner on issues relating to characterizing air flow streams. This memo documents the most recent example of this partnership, involving performance testing of sulfur hexafluoride detectors for use in stack gas mixing tests. Additionally, members of the Rad-NESHAP program performed a functional trending test on a pair of optical particle counters, comparing results from a non-calibrated instrument to a calibrated instrument. Prior to commissioning a new stack sampling system, the ANSI standard for stack sampling requires that the stack sample location meet several criteria, including uniformity of tracer gas and aerosol mixing in the air stream. For these mix tests, tracer media (sulfur hexafluoride gas or liquid oil aerosol particles) are injected into the stack air stream and the resulting air concentrations are measured across the plane of the stack at the proposed sampling location. The coefficient of variation of these media concentrations must be under 20% when evaluated over the central 2/3 area of the stack or duct. The instruments which measure these air concentrations must be tested prior to the stack tests in order to ensure their linear response to varying air concentrations of either tracer gas or tracer aerosol. The instruments used in tracer gas and aerosol mix testing cannot be calibrated by the LANL Standards and Calibration Laboratory, so they would normally be sent off-site for factory calibration by the vendor. Operational requirements can prevent formal factory calibration of some instruments after they have been used in hazardous settings, e.g., within a radiological facility with potential airborne contamination.
The performance tests described in this document are intended to demonstrate the reliable performance of the test instruments for the specific tests used in stack flow characterization.
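The 20% acceptance criterion above is simply a coefficient of variation computed over the concentrations measured across the stack cross-section; a minimal sketch (the sample standard deviation with `ddof=1` is an assumption, since the standard's exact estimator is not quoted here):

```python
import numpy as np

def mixing_cv_percent(concentrations):
    """Coefficient of variation (%) of tracer concentrations measured across
    the central two-thirds of the stack cross-section.  Acceptance per the
    ANSI stack-sampling criterion cited in the text: CV < 20%."""
    c = np.asarray(concentrations, dtype=float)
    return 100.0 * c.std(ddof=1) / c.mean()
```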
A TRMM-Calibrated Infrared Technique for Global Rainfall Estimation
NASA Technical Reports Server (NTRS)
Negri, Andrew J.; Adler, Robert F.
2002-01-01
The development of a satellite infrared (IR) technique for estimating convective and stratiform rainfall and its application in studying the diurnal variability of rainfall on a global scale is presented. The Convective-Stratiform Technique (CST), calibrated by coincident, physically retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR), is applied over the global tropics during 2001. The technique is calibrated separately over land and ocean, making ingenious use of the IR data from the TRMM Visible/Infrared Scanner (VIRS) before application to global geosynchronous satellite data. The low sampling rate of TRMM PR imposes limitations on calibrating IR-based techniques; however, our research shows that PR observations can be applied to improve IR-based techniques significantly by selecting adequate calibration areas and calibration length. The diurnal cycle of rainfall, as well as the division between convective and stratiform rainfall, will be presented. The technique is validated using available data sets and compared to other global rainfall products such as the Global Precipitation Climatology Project (GPCP) IR product, calibrated with TRMM Microwave Imager (TMI) data. The calibrated CST technique has the advantages of high spatial resolution (4 km), filtering of non-raining cirrus clouds, and the stratification of the rainfall into its convective and stratiform components, the latter being important for the calculation of vertical profiles of latent heating.
Analysis of calibration materials to improve dual-energy CT scanning for petrophysical applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ayyalasomavaiula, K.; McIntyre, D.; Jain, J.
2011-01-01
Dual energy CT-scanning is a rapidly emerging imaging technique employed in non-destructive evaluation of various materials. Although CT (Computerized Tomography) has been used for characterizing rocks and visualizing and quantifying multiphase flow through rocks for over 25 years, most of the scanning is done at a voltage setting above 100 kV to take advantage of the Compton scattering (CS) effect, which responds to density changes. Below 100 kV the photoelectric effect (PE) is dominant, which responds to the effective atomic number (Zeff), which is directly related to the photoelectric factor. Using the combination of the two effects helps in better characterization of reservoir rocks. The most common technique for dual energy CT-scanning relies on homogeneous calibration standards to produce the most accurate decoupled data. However, the use of calibration standards with impurities increases the probability of error in the reconstructed data and results in poor rock characterization. This work combines ICP-OES (inductively coupled plasma optical emission spectroscopy) and LIBS (laser induced breakdown spectroscopy) analytical techniques to quantify the type and level of impurities in a set of commercially purchased calibration standards used in dual-energy scanning. Zeff values for the calibration standards, with and without the impurity data, were calculated using the weighted linear combination of the various elements present and used in calculating Zeff with the dual energy technique. Results show a 2 to 5% difference in predicted Zeff values, which may affect the corresponding log calibrations. The effect that these techniques have on improving material identification data is discussed and analyzed. The workflow developed in this paper will translate to more accurate material identification estimates for unknown samples and improve calibration of well logging tools.
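The "weighted combination of the various elements present" can be illustrated with one common power-law mixture rule for Zeff; the exponent m ≈ 2.94 and the electron-fraction weighting below are conventional choices for the photoelectric-dominated regime, not necessarily the exact formula used in the paper:

```python
import numpy as np

def z_effective(weight_fractions, Z, A, m=2.94):
    """Effective atomic number of a mixture via a power-law mixture rule
    (m ~ 2.94 is the conventional photoelectric-regime exponent; this exact
    weighting is an assumption, not taken from the paper).

    weight_fractions, Z, A : per-element mass fractions, atomic numbers,
    and atomic masses of the mixture.
    """
    w = np.asarray(weight_fractions, float)
    Z = np.asarray(Z, float)
    A = np.asarray(A, float)
    alpha = (w * Z / A) / np.sum(w * Z / A)  # electron fractions per element
    return np.sum(alpha * Z**m) ** (1.0 / m)
```

For water (H: w = 0.112, O: w = 0.888) this rule gives Zeff ≈ 7.4, the commonly quoted value, which is the kind of reference point impurity corrections shift by a few percent.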
Probabilistic Open Set Recognition
NASA Astrophysics Data System (ADS)
Jain, Lalit Prithviraj
Real-world tasks in computer vision, pattern recognition and machine learning often touch upon the open set recognition problem: multi-class recognition with incomplete knowledge of the world and many unknown inputs. An obvious way to approach such problems is to develop a recognition system that thresholds probabilities to reject unknown classes. Traditional rejection techniques are not about the unknown; they are about the uncertain boundary and rejection around that boundary. Thus traditional techniques only represent the "known unknowns". However, a proper open set recognition algorithm is needed to reduce the risk from the "unknown unknowns". This dissertation examines this concept and finds existing probabilistic multi-class recognition approaches are ineffective for true open set recognition. We hypothesize that the cause is weak ad hoc assumptions combined with closed-world assumptions made by existing calibration techniques. Intuitively, if we could accurately model just the positive data for any known class without overfitting, we could reject the large set of unknown classes even under this assumption of incomplete class knowledge. For this, we formulate the problem as one of modeling positive training data by invoking statistical extreme value theory (EVT) near the decision boundary of positive data with respect to negative data. We provide a new algorithm called the PI-SVM for estimating the unnormalized posterior probability of class inclusion. This dissertation also introduces a new open set recognition model called Compact Abating Probability (CAP), where the probability of class membership decreases in value (abates) as points move from known data toward open space. We show that CAP models improve open set recognition for multiple algorithms.
Leveraging the CAP formulation, we go on to describe the novel Weibull-calibrated SVM (W-SVM) algorithm, which combines the useful properties of statistical EVT for score calibration with one-class and binary support vector machines. Building from the success of statistical EVT based recognition methods such as PI-SVM and W-SVM on the open set problem, we present a new general supervised learning algorithm for multi-class classification and multi-class open set recognition called the Extreme Value Local Basis (EVLB). The design of this algorithm is motivated by the observation that extrema from known negative class distributions are the closest negative points to any positive sample during training, and thus should be used to define the parameters of a probabilistic decision model. In the EVLB, the kernel distribution for each positive training sample is estimated via an EVT distribution fit over the distances to the separating hyperplane between positive training sample and closest negative samples, with a subset of the overall positive training data retained to form a probabilistic decision boundary. Using this subset as a frame of reference, the probability of a sample at test time decreases as it moves away from the positive class. Possessing this property, the EVLB is well-suited to open set recognition problems where samples from unknown or novel classes are encountered at test. Our experimental evaluation shows that the EVLB provides a substantial improvement in scalability compared to standard radial basis function kernel machines, as well as PI-SVM and W-SVM, with improved accuracy in many cases. We evaluate our algorithm on open set variations of the standard visual learning benchmarks, as well as with an open subset of classes from Caltech 256 and ImageNet. Our experiments show that PI-SVM, W-SVM and EVLB provide significant advances over the previous state-of-the-art solutions for the same tasks.
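The EVT ingredient shared by W-SVM and the EVLB — fitting a Weibull distribution to the extreme tail of positive-class distances and reading a probability of inclusion off the fitted CDF — can be sketched in a few lines (an illustration of the idea, not the dissertation's exact estimator; fixing the location parameter at zero with `scipy.stats.weibull_min` is an assumption):

```python
import numpy as np
from scipy.stats import weibull_min

def evt_inclusion_probability(tail_distances):
    """Fit a Weibull to the smallest positive-sample distances to the
    decision boundary (the extreme tail) and return a score-calibration
    function: larger distance from the boundary -> higher probability of
    class inclusion, via the fitted CDF."""
    shape, loc, scale = weibull_min.fit(tail_distances, floc=0.0)
    return lambda s: weibull_min.cdf(s, shape, loc=loc, scale=scale)
```

Because the probability is monotone in the score, any rejection threshold on it induces a corresponding open-space rejection region, which is the behavior the CAP model formalizes.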
NASA Astrophysics Data System (ADS)
Yan, Wen-juan; Yang, Ming; He, Guo-quan; Qin, Lin; Li, Gang
2014-11-01
In order to identify diabetic patients from the near-infrared (NIR) spectrum of the tongue, a spectral classification model of the NIR reflectivity of the tongue tip is proposed, based on the partial least squares (PLS) method. Tongue-tip NIR spectra were collected from healthy people and diabetic patients (39 samples from each group). After pretreatment of the reflectivity, the spectral data were set as the independent variable matrix and the classification information as the dependent variable matrix. The samples were divided into two groups, 53 as the calibration set and 25 as the prediction set, and PLS was used to build the classification model. The model constructed from the 53 calibration samples had a correlation coefficient of 0.9614 and a root mean square error of cross-validation (RMSECV) of 0.1387. The predictions for the 25 samples had a correlation coefficient of 0.9146 and an RMSECV of 0.2122. The experimental results show that the PLS method can achieve good classification of healthy people and diabetic patients.
Martínez Vega, Mabel V; Sharifzadeh, Sara; Wulfsohn, Dvoralai; Skov, Thomas; Clemmensen, Line Harder; Toldam-Andersen, Torben B
2013-12-01
Visible-near infrared spectroscopy remains a method of increasing interest as a fast alternative for the evaluation of fruit quality. The success of the method is assumed to be achieved by using large sets of samples to produce robust calibration models. In this study we used representative samples of an early and a late season apple cultivar to evaluate model robustness (in terms of prediction ability and error) for soluble solids content (SSC) and acidity prediction, in the wavelength range 400-1100 nm. A total of 196 middle-early season 'Aroma' and 219 late season 'Holsteiner Cox' apple (Malus domestica Borkh.) samples were used to construct spectral models for SSC and acidity. Partial least squares (PLS), ridge regression (RR) and elastic net (EN) models were used to build prediction models. Furthermore, we compared three sub-sample arrangements for forming training and test sets ('smooth fractionator', by date of measurement after harvest, and random). Using the 'smooth fractionator' sampling method, fewer spectral bands (26) and elastic net resulted in improved performance for SSC models of 'Aroma' apples, with a coefficient of variation CV(SSC) = 13%. The model showed consistently low errors and bias (PLS/EN: R(2) cal = 0.60/0.60; SEC = 0.88/0.88°Brix; Biascal = 0.00/0.00; R(2) val = 0.33/0.44; SEP = 1.14/1.03; Biasval = 0.04/0.03). However, the predictions of acidity and SSC (CV = 5%) for the late cultivar 'Holsteiner Cox' were inferior to those for 'Aroma'. It was possible to construct local SSC and acidity calibration models for early season apple cultivars with CVs of SSC and acidity around 10%. The overall model performance on these data sets also depends on the proper selection of training and test sets. The 'smooth fractionator' protocol provided an objective method for obtaining training and test sets that capture the existing variability of the fruit samples for construction of visible-NIR prediction models.
The implication is that by using such 'efficient' sampling methods for obtaining an initial sample of fruit that represents the variability of the population, and for sub-sampling to form training and test sets, it should be possible to use relatively small sample sizes to develop spectral predictions of fruit quality. Using feature selection and elastic net appears to improve the SSC model performance in terms of R(2), RMSECV and RMSEP for 'Aroma' apples. © 2013 Society of Chemical Industry.
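The 'smooth fractionator' idea — ranking the fruit by the response and dealing them out systematically so each subset spans the population variability — can be caricatured in a few lines (a simplification: the published protocol additionally smooths the ordering before the systematic allocation, which is omitted here):

```python
import numpy as np

def smooth_fractionator_split(values, n_sets=2):
    """Rank samples by a response value (e.g. SSC) and allocate every
    n_sets-th ranked sample to each subset, so that every subset covers
    the full range of the response.  Returns index arrays into `values`."""
    order = np.argsort(values)
    return [order[i::n_sets] for i in range(n_sets)]
```

A random split can by chance concentrate the extremes in one set; this ranked systematic allocation cannot, which is why the paper found it gave more representative training and test sets.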
Configurations and calibration methods for passive sampling techniques.
Ouyang, Gangfeng; Pawliszyn, Janusz
2007-10-19
Passive sampling technology has developed very quickly in the past 15 years, and is widely used for the monitoring of pollutants in different environments. The design and quantification of passive sampling devices require an appropriate calibration method. Current calibration methods that exist for passive sampling, including equilibrium extraction, linear uptake, and kinetic calibration, are presented in this review. A number of state-of-the-art passive sampling devices that can be used for aqueous and air monitoring are introduced according to their calibration methods.
Near-infrared diffuse reflection systems for chlorophyll content of tomato leaves measurement
NASA Astrophysics Data System (ADS)
Jiang, Huanyu; Ying, Yibin; Lu, Huishan
2006-10-01
In this study, two measuring systems for the chlorophyll content of tomato leaves were developed based on near-infrared spectral techniques. The systems mainly consist of an FT-IR spectrum analyzer, fiber-optic diffuse reflection accessories, and a data acquisition card. Diffuse reflectance of intact tomato leaves was measured with a fiber-optic diffuse reflection accessory and a smart diffuse reflection accessory. Calibration models were developed from spectral and constituent measurements; 90 samples served as the calibration set and 30 samples served as the validation set. Partial least squares (PLS) and principal component regression (PCR) techniques were used to develop the prediction models with different data preprocessing. The best model for chlorophyll content had a high correlation coefficient of 0.9348 and a low standard error of prediction (RMSEP) of 4.79, obtained with the full range (12500-4000 cm-1) and MSC path-length correction applied to log(1/R) spectra. The results of this study suggest that the FT-NIR method is feasible for rapid, nondestructive detection of the chlorophyll content of tomato leaves.
NASA Astrophysics Data System (ADS)
Meygret, Aimé; Santer, Richard P.; Berthelot, Béatrice
2011-10-01
The La Crau test site has been used by CNES since 1987 for vicarious calibration of SPOT cameras. The former calibration activities were conducted during field campaigns devoted to the characterization of the atmosphere and the site reflectances. In 1997, an automatic photometric station (ROSAS) was set up on the site on a 10 m high pole. This station measures, at different wavelengths, the solar extinction and the sky radiances to fully characterize the optical properties of the atmosphere. It also measures the upwelling radiance over the ground to fully characterize the surface reflectance properties. The photometer samples the spectrum from 380 nm to 1600 nm with 9 narrow bands. Every non-cloudy day the photometer automatically and sequentially performs its measurements. Data are transmitted by GSM (Global System for Mobile communications) to CNES and processed. The photometer is calibrated in situ over the sun for irradiance and cross-band calibration, and over the Rayleigh scattering for the short-wavelength radiance calibration. The data are processed by operational software which calibrates the photometer, estimates the atmosphere properties, computes the bidirectional reflectance distribution function of the site, then simulates the top-of-atmosphere radiance seen by any sensor over-passing the site and calibrates it. This paper describes the instrument, its measurement protocol and its calibration principle. Calibration results are discussed and compared to laboratory calibration. It details the surface reflectance characterization and presents SPOT4 calibration results deduced from the estimated TOA radiance. The results are compared to the official calibration.
A Report on the Validation of Beryllium Strength Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armstrong, Derek Elswick
2016-02-05
This report discusses work on validating beryllium strength models with flyer plate and Taylor rod experimental data. Strength models are calibrated with Hopkinson bar and quasi-static data. The Hopkinson bar data for beryllium provide strain rates up to about 4000 per second. A limitation of the Hopkinson bar data for beryllium is that they only provide information on strain up to about 0.15. The lack of high strain data at high strain rates makes it difficult to distinguish between various strength model settings. The PTW model has been calibrated many different times over the last 12 years. The lack of high strain data for high strain rates has resulted in these calibrated PTW models for beryllium exhibiting significantly different behavior when extrapolated to high strain. For beryllium, the α parameter of PTW has recently been calibrated to high precision shear modulus data. In the past the α value for beryllium was set based on expert judgment. The new α value for beryllium was used in a calibration of the beryllium PTW model by Sky Sjue. The calibration by Sjue used EOS table information to model the temperature dependence of the heat capacity. Also, the calibration by Sjue used EOS table information to model the density changes of the beryllium sample during the Hopkinson bar and quasi-static experiments. In this paper, the calibrated PTW model by Sjue is compared against experimental data and other strength models. The other strength models being considered are a PTW model calibrated by Shuh-Rong Chen and a Steinberg-Guinan type model by John Pedicini. The three strength models are used in a comparison against flyer plate and Taylor rod data. The results show that the Chen PTW model provides better agreement to this data. The Chen PTW model settings have been previously adjusted to provide a better fit to flyer plate data, whereas the Sjue PTW model has not been changed based on flyer plate data.
However, the Sjue model provides a reasonable fit to flyer plate and Taylor rod data, and also gives a better match to recently analyzed Z-machine data, which has a strain of about 0.35 and a strain rate of 3 × 10^5 s^-1.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reimer, P J; Baillie, M L; Bard, E
2005-10-02
Radiocarbon calibration curves are essential for converting radiocarbon dated chronologies to the calendar timescale. Prior to the 1980s numerous differently derived calibration curves based on radiocarbon ages of known age material were in use, resulting in ''apples and oranges'' comparisons between various records (Klein et al., 1982), further complicated by until then unappreciated inter-laboratory variations (International Study Group, 1982). The solution was to produce an internationally-agreed calibration curve based on carefully screened data with updates at 4-6 year intervals (Klein et al., 1982; Stuiver and Reimer, 1986; Stuiver and Reimer, 1993; Stuiver et al., 1998). The IntCal working group has continued this tradition with the active participation of researchers who produced the records that were considered for incorporation into the current, internationally-ratified calibration curves, IntCal04, SHCal04, and Marine04, for Northern Hemisphere terrestrial, Southern Hemisphere terrestrial, and marine samples, respectively (Reimer et al., 2004; Hughen et al., 2004; McCormac et al., 2004). Fairbanks et al. (2005), accompanied by a more technical paper, Chiu et al. (2005), and an introductory comment, Adkins (2005), recently published a ''calibration curve spanning 0-50,000 years''. Fairbanks et al. (2005) and Chiu et al. (2005) have made a significant contribution to the database on which the IntCal04 and Marine04 calibration curves are based. These authors have now taken the further step to derive their own radiocarbon calibration extending to 50,000 cal BP, which they claim is superior to that generated by the IntCal working group. In their papers, these authors are strongly critical of the IntCal calibration efforts for what they claim to be inadequate screening and sample pretreatment methods.
While these criticisms may ultimately be helpful in identifying a better set of protocols, we feel that there are also several erroneous and misleading statements made by these authors which require a response by the IntCal working group. Furthermore, we would like to comment on the sample selection criteria, pretreatment methods, and statistical methods utilized by Fairbanks et al. in derivation of their own radiocarbon calibration.
Local classification: Locally weighted-partial least squares-discriminant analysis (LW-PLS-DA).
Bevilacqua, Marta; Marini, Federico
2014-08-01
The possibility of devising a simple, flexible and accurate non-linear classification method, by extending the locally weighted partial least squares (LW-PLS) approach to the cases where the algorithm is used in a discriminant way (partial least squares discriminant analysis, PLS-DA), is presented. In particular, to assess which category an unknown sample belongs to, the proposed algorithm operates by identifying which training objects are most similar to the one to be predicted and building a PLS-DA model using these calibration samples only. Moreover, the influence of the selected training samples on the local model can be further modulated by adopting a non-uniform distance-based weighting scheme which allows the farthest calibration objects to have less impact than the closest ones. The performances of the proposed locally weighted-partial least squares-discriminant analysis (LW-PLS-DA) algorithm have been tested on three simulated data sets characterized by a varying degree of non-linearity: in all cases, a classification accuracy higher than 99% on external validation samples was achieved. Moreover, when also applied to a real data set (classification of rice varieties), characterized by a high extent of non-linearity, the proposed method provided an average correct classification rate of about 93% on the test set. Based on the preliminary results shown in this paper, the performance of the proposed LW-PLS-DA approach proved to be comparable to, and in some cases better than, that obtained by other non-linear methods (k nearest neighbors, kernel-PLS-DA and, in the case of rice, counterpropagation neural networks). Copyright © 2014 Elsevier B.V. All rights reserved.
Mirzaei, Hamid; Brusniak, Mi-Youn; Mueller, Lukas N; Letarte, Simon; Watts, Julian D; Aebersold, Ruedi
2009-08-01
As the application of quantitative proteomics in the life sciences has grown in recent years, so has the need for more robust and generally applicable methods for quality control and calibration. The reliability of quantitative proteomics is tightly linked to the reproducibility and stability of the analytical platforms, which are typically multicomponent (e.g. sample preparation, multistep separations, and mass spectrometry) with individual components contributing unequally to the overall system reproducibility. Variations in quantitative accuracy are thus inevitable, and quality control and calibration become essential for the assessment of the quality of the analyses themselves. Toward this end, the use of internal standards can not only assist in the detection and removal of outlier data acquired by an irreproducible system (quality control) but can also be used to detect changes in instruments for their subsequent performance assessment and calibration. Here we introduce a set of halogenated peptides as internal standards. The peptides are custom designed to have properties suitable for various quality control assessments, data calibration, and normalization processes. The unique isotope distribution of halogenated peptides makes their mass spectral detection easy and unambiguous when spiked into complex peptide mixtures. In addition, they were designed to elute sequentially over an entire aqueous to organic LC gradient and to have m/z values within the commonly scanned mass range (300-1800 Da). In a series of experiments in which these peptides were spiked into an enriched N-glycosite peptide fraction (i.e. from formerly N-glycosylated intact proteins in their deglycosylated form) isolated from human plasma, we show the utility and performance of these halogenated peptides for sample preparation and LC injection quality control as well as for retention time and mass calibration.
Further use of the peptides for signal intensity normalization and retention time synchronization for selected reaction monitoring experiments is also demonstrated.
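Because the standards elute sequentially across the whole gradient, they support a simple retention-time mapping between runs: fit observed standard retention times against their reference values, then apply the mapping to every peptide in the run. A minimal least-squares sketch (the linear model is an assumption; in practice piecewise or spline fits are also common, and the paper's exact pipeline is not specified here):

```python
import numpy as np

def rt_calibration(observed_rt, reference_rt):
    """Fit a linear map from observed retention times of spiked internal
    standards to their reference retention times, and return a function
    that projects any observed RT onto the reference time scale."""
    slope, intercept = np.polyfit(observed_rt, reference_rt, 1)
    return lambda t: slope * np.asarray(t, float) + intercept
```

The same standards-based regression idea applies to mass calibration (observed vs. theoretical m/z) and to intensity normalization across runs.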
Austin, Peter C.; van Klaveren, David; Vergouwe, Yvonne; Nieboer, Daan; Lee, Douglas S.; Steyerberg, Ewout W.
2017-01-01
Objective: Validation of clinical prediction models traditionally refers to the assessment of model performance in new patients. We studied different approaches to geographic and temporal validation in the setting of multicenter data from two time periods. Study Design and Setting: We illustrated different analytic methods for validation using a sample of 14,857 patients hospitalized with heart failure at 90 hospitals in two distinct time periods. Bootstrap resampling was used to assess internal validity. Meta-analytic methods were used to assess geographic transportability. Each hospital was used once as a validation sample, with the remaining hospitals used for model derivation. Hospital-specific estimates of discrimination (c-statistic) and calibration (calibration intercepts and slopes) were pooled using random effects meta-analysis methods. I2 statistics and prediction interval width quantified geographic transportability. Temporal transportability was assessed using patients from the earlier period for model derivation and patients from the later period for model validation. Results: Estimates of reproducibility, pooled hospital-specific performance, and temporal transportability were on average very similar, with c-statistics of 0.75. Between-hospital variation was moderate according to I2 statistics and prediction intervals for c-statistics. Conclusion: This study illustrates how performance of prediction models can be assessed in settings with multicenter data at different time periods. PMID:27262237
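The pooling step described — hospital-specific c-statistics combined by random-effects meta-analysis, with I2 quantifying between-hospital heterogeneity — can be sketched with the DerSimonian-Laird estimator (a standard choice, used here for illustration; the paper's exact estimator, and the logit transformation often applied to c-statistics, are not reproduced):

```python
import numpy as np

def dersimonian_laird(estimates, variances):
    """Random-effects pooling of per-hospital performance estimates
    (DerSimonian-Laird).  Returns the pooled estimate, tau^2 (the
    between-hospital variance) and the I^2 heterogeneity statistic (%)."""
    y = np.asarray(estimates, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                         # fixed-effect (inverse-variance) weights
    y_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fe) ** 2)     # Cochran's Q
    k = len(y)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)  # between-study variance, floored at 0
    w_re = 1.0 / (v + tau2)             # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    i2 = max(0.0, 100.0 * (Q - (k - 1)) / Q) if Q > 0 else 0.0
    return pooled, tau2, i2
```

Leaving each hospital out in turn as the validation sample, as the study does, yields the per-hospital estimates and variances that feed this pooling.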
ATLAS Tile calorimeter calibration and monitoring systems
NASA Astrophysics Data System (ADS)
Chomont, Arthur; ATLAS Collaboration
2017-11-01
The ATLAS Tile Calorimeter (TileCal) is the central section of the hadronic calorimeter of the ATLAS experiment and provides important information for reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength shifting fibres to photomultiplier tubes (PMTs), located on the outside of the calorimeter. The readout is segmented into about 5000 cells (longitudinally and transversally), each of them being read out by two PMTs in parallel. To calibrate and monitor the stability and performance of each part of the readout chain during the data taking, a set of calibration systems is used. The TileCal calibration system comprises cesium radioactive sources, Laser and charge injection elements, and allows for monitoring and equalization of the calorimeter response at each stage of the signal production, from scintillation light to digitization. Based on LHC Run 1 experience, several calibration systems were improved for Run 2. The lessons learned, the modifications, and the current LHC Run 2 performance are discussed.
NASA Astrophysics Data System (ADS)
Vincent, Mark B.; Chanover, Nancy J.; Beebe, Reta F.; Huber, Lyle
2005-10-01
The NASA Infrared Telescope Facility (IRTF) on Mauna Kea, Hawaii, set aside some time on about 500 nights from 1995 to 2002, when the NSFCAM facility infrared camera was mounted and Jupiter was visible, for a standardized set of observations of Jupiter in support of the Galileo mission. The program included observations of Jupiter, nearby reference stars, and dome flats in five filters: narrowband filters centered at 1.58, 2.28, and 3.53 μm, and broader L' and M' bands that probe the atmosphere from the stratosphere to below the main cloud layer. The reference stars were not cross-calibrated against standards. We performed follow-up observations to calibrate these stars and Jupiter in 2003 and 2004. We present a summary of the calibration of the Galileo support monitoring program data set. We present calibrated magnitudes of the six most frequently observed stars, calibrated reflectivities, and brightness temperatures of Jupiter from 1995 to 2004, and a simple method of normalizing the Jovian brightness to the 2004 results. Our study indicates that the NSFCAM's zero-point magnitudes were not stable from 1995 to early 1997, and that the best Jovian calibration possible with this data set is limited to about +/-10%. The raw images and calibration data have been deposited in the Planetary Data System.
One-calibrant kinetic calibration for on-site water sampling with solid-phase microextraction.
Ouyang, Gangfeng; Cui, Shufen; Qin, Zhipei; Pawliszyn, Janusz
2009-07-15
The existing solid-phase microextraction (SPME) kinetic calibration technique, using the desorption of preloaded standards to calibrate the extraction of the analytes, requires that the physicochemical properties of the standard be similar to those of the analyte, which has limited the application of the technique. In this study, a new method, termed the one-calibrant kinetic calibration technique, which can use the desorption of a single standard to calibrate all extracted analytes, was proposed. The theoretical considerations were validated by passive water sampling in the laboratory and rapid water sampling in the field. To mimic variations in the environment, such as temperature, turbulence, and the concentration of the analytes, the flow-through system for the generation of standard aqueous polycyclic aromatic hydrocarbon (PAH) solutions was modified. The experimental results of the passive samplings in the flow-through system illustrated that the effect of the environmental variables was successfully compensated for by the kinetic calibration technique, and all extracted analytes could be calibrated through the desorption of a single calibrant. On-site water sampling with rotated SPME fibers also illustrated the feasibility of the new technique for rapid on-site sampling of hydrophobic organic pollutants in water. This technique will accelerate the application of the kinetic calibration method and will also be useful for other microextraction techniques.
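The quantitative core of kinetic calibration is the isotropy relation n/n_e + q/q0 = 1: the fraction of preloaded calibrant desorbed tells you how far analyte uptake has progressed toward equilibrium. A minimal sketch (symbols follow the usual SPME conventions; in the one-calibrant variant the single standard's desorption supplies this fraction for all analytes after a compound-dependent correction, which is omitted here):

```python
def kinetic_calibration_conc(n_extracted, q_remaining, q0, K_fs, V_f):
    """Kinetic (in-fibre standard) calibration sketch for SPME.

    n_extracted : amount of analyte extracted during sampling
    q_remaining, q0 : calibrant remaining on the fibre and initial loading
    K_fs : fibre/sample distribution coefficient of the analyte
    V_f  : fibre-coating volume

    Isotropy relation n/n_e + q/q0 = 1 gives the equilibrium amount n_e,
    and n_e = K_fs * V_f * C gives the sample concentration C."""
    desorbed_fraction = 1.0 - q_remaining / q0
    n_equilibrium = n_extracted / desorbed_fraction
    return n_equilibrium / (K_fs * V_f)
```

For example, if half the calibrant has desorbed, the analyte is taken to be halfway to equilibrium, so the extracted amount is doubled before the equilibrium relation is applied.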
Elsohaby, Ibrahim; Burns, Jennifer B.; Riley, Christopher B.; Shaw, R. Anthony; McClure, J. Trenton
2017-01-01
The objective of this study was to develop and compare the performance of laboratory grade and portable attenuated total reflectance infrared (ATR-IR) spectroscopic approaches in combination with partial least squares regression (PLSR) for the rapid quantification of alpaca serum IgG concentration, and the identification of low IgG (<1000 mg/dL), which is consistent with the diagnosis of failure of transfer of passive immunity (FTPI) in neonates. Serum samples (n = 175) collected from privately owned, healthy alpacas were tested by the reference method of radial immunodiffusion (RID) assay, and by laboratory grade and portable ATR-IR spectrometers. Various pre-processing strategies were applied to the ATR-IR spectra, which were linked to the corresponding RID-IgG concentrations and then randomly split into two sets: calibration (training) and test sets. PLSR was applied to the calibration set to develop calibration models, and the test set was used to assess the accuracy of the analytical method. For the test set, the Pearson correlation coefficients between the IgG concentrations measured by RID and those predicted by the laboratory grade and portable ATR-IR spectrometers were both 0.91. The average differences between reference serum IgG concentrations and the two IR-based methods were 120.5 mg/dL and 71 mg/dL for the laboratory and portable ATR-IR-based assays, respectively. Adopting an IgG concentration <1000 mg/dL as the cut-point for FTPI cases, the sensitivity, specificity, and accuracy for identifying serum samples below this cut-point by the laboratory ATR-IR assay were 86, 100 and 98%, respectively (within the entire data set). Corresponding values for the portable ATR-IR assay were 95, 99 and 99%, respectively.
These results suggest that the two ATR-IR assays performed similarly for rapid qualitative evaluation of alpaca serum IgG and for diagnosis of IgG <1000 mg/dL; the portable ATR-IR spectrometer performed slightly better and provides more flexibility for potential application in the field. PMID:28651006
Delivery of calibration workshops covering herbicide application equipment : final report.
DOT National Transportation Integrated Search
2014-03-31
Proper herbicide sprayer set-up and calibration are critical to the success of the Oklahoma Department of Transportation (ODOT) herbicide program. Sprayer system set-up and calibration training is provided in annual continuing education herbicide wor...
Coastal Atmosphere and Sea Time Series (CoASTS)
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Zibordi, Giuseppe; Berthon, Jean-Francoise; Doyle, John P.; Grossi, Stefania; vanderLinde, Dirk; Targa, Cristina; Alberotanza, Luigi; McClain, Charles R. (Technical Monitor)
2002-01-01
The Coastal Atmosphere and Sea Time Series (CoASTS) Project, aimed at supporting ocean color research and applications from 1995 up to the time of publication of this document, has ensured the collection of a comprehensive atmospheric and marine data set from an oceanographic tower located in the northern Adriatic Sea. The instruments and the measurement methodologies used to gather quantities relevant to bio-optical modeling and to the calibration and validation of ocean color sensors are described. Particular emphasis is placed on four items: (1) the evaluation of perturbation effects in radiometric data (i.e., tower-shading, instrument self-shading, and bottom effects); (2) the intercomparison of seawater absorption coefficients from in situ measurements and from laboratory spectrometric analysis of discrete samples; (3) the intercomparison of two filter techniques for in vivo measurement of particulate absorption coefficients; and (4) the analysis of repeatability and reproducibility of the most relevant laboratory measurements carried out on seawater samples (i.e., particulate and yellow substance absorption coefficients, and pigment and total suspended matter concentrations). Sample data are also presented and discussed to illustrate the typical features of the CoASTS measurement site, in support of the suitability of the CoASTS data set for bio-optical modeling and ocean color calibration and validation.
Nuclear sensor signal processing circuit
Kallenbach, Gene A [Bosque Farms, NM; Noda, Frank T [Albuquerque, NM; Mitchell, Dean J [Tijeras, NM; Etzkin, Joshua L [Albuquerque, NM
2007-02-20
An apparatus and method are disclosed for a compact and temperature-insensitive nuclear sensor that can be calibrated with a non-hazardous radioactive sample. The nuclear sensor includes a gamma ray sensor that generates tail pulses from radioactive samples. An analog conditioning circuit conditions the tail-pulse signals from the gamma ray sensor, and a tail-pulse simulator circuit generates a plurality of simulated tail-pulse signals. A computer system processes the tail pulses from the gamma ray sensor and the simulated tail pulses from the tail-pulse simulator circuit. The nuclear sensor is calibrated under the control of the computer. The offset is adjusted using the simulated tail pulses. Since the offset is set to zero or near zero, the sensor gain can be adjusted with a non-hazardous radioactive source such as, for example, naturally occurring radiation and potassium chloride.
Arsenyev, P A; Trezvov, V V; Saratovskaya, N V
1997-01-01
This work presents a method that determines the phase composition of calcium hydroxylapatite from its infrared spectrum. The method applies factor analysis to the spectral data of a calibration set of samples to determine the minimal number of factors required to reproduce the spectra within experimental error. Multiple linear regression is then used to establish the correlation between the factor scores of the calibration standards and their properties, and the resulting regression equations can be used to predict the property values of unknown samples. A regression model was built for the determination of beta-tricalcium phosphate content in hydroxylapatite, and the quality of the model was estimated statistically. Applying factor analysis to the spectral data increases the accuracy of beta-tricalcium phosphate determination and extends the range of determination toward lower concentrations, while reproducibility of the results is retained.
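The factor-analysis-plus-regression scheme can be sketched with an SVD standing in for a dedicated factor-analysis routine: compress the calibration spectra to a few factor scores, then regress the property on those scores. The band shape, contents, and factor count below are made up for illustration:

```python
# Minimal sketch of factor analysis + multiple linear regression on
# spectra: SVD scores of mean-centered spectra serve as factors, and the
# property (e.g. beta-TCP content) is regressed on them. Synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n_std, n_points = 20, 200
content = rng.uniform(0, 10, n_std)                 # % beta-TCP in standards
band = np.sin(np.linspace(0, np.pi, n_points))      # stand-in component band
spectra = np.outer(content, band) + rng.normal(scale=0.05, size=(n_std, n_points))

centered = spectra - spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
n_factors = 2                                       # chosen from the error drop-off
scores = U[:, :n_factors] * s[:n_factors]           # factor scores of standards

# Multiple linear regression of the property on the factor scores.
design = np.column_stack([np.ones(n_std), scores])
coef, *_ = np.linalg.lstsq(design, content, rcond=None)
predicted = design @ coef
```

An unknown sample would be projected onto the same factors (via Vt) and its property predicted from the fitted regression equation.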
Wang, Hongxin; Yoda, Yoshitaka; Dong, Weibing; Huang, Songping D
2013-09-01
The conventional energy calibration for nuclear resonant vibrational spectroscopy (NRVS) is usually time-consuming, and taking NRVS samples out of the cryostat increases the chance of sample damage, which makes it impossible to carry out an energy calibration during a single NRVS measurement. In this study, by steering the 14.4 keV beam through the main measurement chamber without removing the NRVS sample, two alternative calibration procedures have been proposed and established: (i) an in situ calibration procedure, which measures the main NRVS sample at stage A and the calibration sample at stage B simultaneously and calibrates the energies for observing extremely small spectral shifts; for example, the 0.3 meV energy shift between 100%-(57)Fe-enriched [Fe4S4Cl4](=) and 10%-(57)Fe/90%-(54)Fe-labeled [Fe4S4Cl4](=) has been well resolved; and (ii) a quick-switching energy calibration procedure, which reduces each calibration from 3-4 h to about 30 min. Although the quick-switching calibration is not in situ, it is suitable for normal NRVS measurements.
Green, R.O.; Pieters, C.; Mouroulis, P.; Eastwood, M.; Boardman, J.; Glavich, T.; Isaacson, P.; Annadurai, M.; Besse, S.; Barr, D.; Buratti, B.; Cate, D.; Chatterjee, A.; Clark, R.; Cheek, L.; Combe, J.; Dhingra, D.; Essandoh, V.; Geier, S.; Goswami, J.N.; Green, R.; Haemmerle, V.; Head, J.; Hovland, L.; Hyman, S.; Klima, R.; Koch, T.; Kramer, G.; Kumar, A.S.K.; Lee, Kenneth; Lundeen, S.; Malaret, E.; McCord, T.; McLaughlin, S.; Mustard, J.; Nettles, J.; Petro, N.; Plourde, K.; Racho, C.; Rodriquez, J.; Runyon, C.; Sellar, G.; Smith, C.; Sobel, H.; Staid, M.; Sunshine, J.; Taylor, L.; Thaisen, K.; Tompkins, S.; Tseng, H.; Vane, G.; Varanasi, P.; White, M.; Wilson, D.
2011-01-01
The NASA Discovery Moon Mineralogy Mapper imaging spectrometer was selected to pursue a wide range of science objectives requiring measurement of composition at fine spatial scales over the full lunar surface. To pursue these objectives, a broad spectral range imaging spectrometer with high uniformity and high signal-to-noise ratio capable of measuring compositionally diagnostic spectral absorption features from a wide variety of known and possible lunar materials was required. For this purpose the Moon Mineralogy Mapper imaging spectrometer was designed and developed that measures the spectral range from 430 to 3000 nm with 10 nm spectral sampling through a 24 degree field of view with 0.7 milliradian spatial sampling. The instrument has a signal-to-noise ratio of greater than 400 for the specified equatorial reference radiance and greater than 100 for the polar reference radiance. The spectral cross-track uniformity is >90% and spectral instantaneous field-of-view uniformity is >90%. The Moon Mineralogy Mapper was launched on Chandrayaan-1 on the 22nd of October 2008. On the 18th of November 2008 the Moon Mineralogy Mapper was turned on and collected a first light data set within 24 h. During this early checkout period and throughout the mission the spacecraft thermal environment and orbital parameters varied more than expected and placed operational and data quality constraints on the measurements. On the 29th of August 2009, spacecraft communication was lost. Over the course of the flight mission 1542 downlinked data sets were acquired that provide coverage of more than 95% of the lunar surface. An end-to-end science data calibration system was developed and all measurements have been passed through this system and delivered to the Planetary Data System (PDS.NASA.GOV). An extensive effort has been undertaken by the science team to validate the Moon Mineralogy Mapper science measurements in the context of the mission objectives. 
A focused spectral, radiometric, spatial, and uniformity validation effort has been pursued with selected data sets including an Earth-view data set. With this effort an initial validation of the on-orbit performance of the imaging spectrometer has been achieved, including validation of the cross-track spectral uniformity and spectral instantaneous field of view uniformity. The Moon Mineralogy Mapper is the first imaging spectrometer to measure a data set of this kind at the Moon. These calibrated science measurements are being used to address the full set of science goals and objectives for this mission. Copyright 2011 by the American Geophysical Union.
Calibration of a Ti-in-muscovite geothermometer for ilmenite- and Al2SiO5-bearing metapelites
NASA Astrophysics Data System (ADS)
Wu, Chun-Ming; Chen, Hong-Xu
2015-01-01
The Ti-in-muscovite geothermometer was empirically calibrated as ln[T(°C)] = 7.258 + 0.289 ln(Ti) + 0.158[Mg/(Fe + Mg)] + 0.031 ln[P(kbar)] using ilmenite- and Al2SiO5-bearing assemblages in metapelites under P-T conditions of 450-800 °C and 0.1-1.4 GPa. The calibration applies to muscovites containing Ti = 0.01-0.07, Fe = 0.03-0.16, Mg = 0.01-0.32, and Mg/(Fe + Mg) = 0.05-0.73, on the basis of 11 oxygen atoms per formula unit. This compositional range covers more than 90% of natural muscovites, and the random error of the thermometer is estimated to be ±65 °C. The geothermometer was validated against independently determined temperatures in samples from different prograde, inverted, and contact metamorphic terranes. Application of this thermometer beyond the calibration conditions is not encouraged.
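The printed calibration can be evaluated directly by exponentiating the regression equation; the composition and pressure below are arbitrary mid-range inputs, and the stated ±65 °C random error applies to any result:

```python
# Direct implementation of the empirical Ti-in-muscovite geothermometer
# as printed above; inputs must lie within the stated calibration ranges
# (Ti = 0.01-0.07 apfu, Mg/(Fe+Mg) = 0.05-0.73, 450-800 degC, 0.1-1.4 GPa).
import math

def ti_in_muscovite_T(ti_apfu, mg_number, p_kbar):
    """Temperature (deg C) from ln[T] = 7.258 + 0.289 ln(Ti)
    + 0.158 [Mg/(Fe+Mg)] + 0.031 ln[P(kbar)]."""
    ln_T = (7.258 + 0.289 * math.log(ti_apfu)
            + 0.158 * mg_number + 0.031 * math.log(p_kbar))
    return math.exp(ln_T)

# Arbitrary mid-range composition at 5 kbar; result carries +/- 65 degC.
T = ti_in_muscovite_T(ti_apfu=0.04, mg_number=0.40, p_kbar=5.0)
```

Note the weak pressure dependence (coefficient 0.031 on ln P), which is why the thermometer is usable when pressure is only roughly known.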
A TRMM-Calibrated Infrared Technique for Global Rainfall Estimation
NASA Technical Reports Server (NTRS)
Negri, Andrew J.; Adler, Robert F.; Xu, Li-Ming
2003-01-01
This paper presents the development of a satellite infrared (IR) technique for estimating convective and stratiform rainfall and its application in studying the diurnal variability of rainfall on a global scale. The Convective-Stratiform Technique (CST), calibrated by coincident, physically retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR), is applied over the global tropics during summer 2001. The technique is calibrated separately over land and ocean, making ingenious use of the IR data from the TRMM Visible/Infrared Scanner (VIRS) before application to global geosynchronous satellite data. The low sampling rate of the TRMM PR imposes limitations on calibrating IR-based techniques; however, our research shows that PR observations can improve IR-based techniques significantly when adequate calibration areas and calibration lengths are selected. The diurnal cycle of rainfall, as well as the division between convective and stratiform rainfall, is presented. The technique is validated using available data sets and compared to other global rainfall products such as the Global Precipitation Climatology Project (GPCP) IR product, calibrated with TRMM Microwave Imager (TMI) data. The calibrated CST technique has the advantages of high spatial resolution (4 km), filtering of non-raining cirrus clouds, and stratification of the rainfall into its convective and stratiform components, the latter being important for calculating vertical profiles of latent heating.
A calibration hierarchy for risk models was defined: from utopia to empirical data.
Van Calster, Ben; Nieboer, Daan; Vergouwe, Yvonne; De Cock, Bavo; Pencina, Michael J; Steyerberg, Ewout W
2016-06-01
Calibrated risk models are vital for valid decision support. We define four levels of calibration and describe implications for model development and external validation of predictions, presenting results based on simulated data sets. A common definition of calibration is "having an event rate of R% among patients with a predicted risk of R%," which we refer to as "moderate calibration." Weaker forms of calibration only require the average predicted risk (mean calibration) or the average prediction effects (weak calibration) to be correct. "Strong calibration" requires that the event rate equal the predicted risk for every covariate pattern, which implies that the model is fully correct for the validation setting. We argue that this is unrealistic: the model type may be incorrect, the linear predictor is only asymptotically unbiased, and all nonlinear and interaction effects would have to be correctly modeled. In addition, we prove that moderate calibration guarantees nonharmful decision making. Finally, our results indicate that a flexible assessment of calibration in small validation data sets is problematic. Strong calibration is desirable for individualized decision support but unrealistic and counterproductive, because it stimulates the development of overly complex models. Model development and external validation should focus on moderate calibration. Copyright © 2016 Elsevier Inc. All rights reserved.
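The weaker calibration levels can be checked numerically: mean calibration compares the event rate with the average predicted risk, and weak calibration fits a logistic recalibration of the outcome on the logit of the predicted risk, where an intercept near 0 and a slope near 1 indicate weak calibration. A sketch on simulated risks (not the authors' simulations):

```python
# Mean and weak calibration checks on simulated risks. Outcomes are
# generated from the predicted risks themselves, so both checks should pass.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 50_000
p = rng.uniform(0.05, 0.95, n)             # predicted risks, here also the true risks
y = (rng.uniform(size=n) < p).astype(int)  # simulated outcomes

# Mean calibration ("calibration-in-the-large"): event rate vs mean prediction.
mean_calibration_gap = y.mean() - p.mean()

# Weak calibration: logistic recalibration on the logit of the predicted risk.
logit = np.log(p / (1 - p)).reshape(-1, 1)
recal = LogisticRegression(C=1e6).fit(logit, y)  # large C ~ unpenalized fit
calibration_slope = recal.coef_[0, 0]            # ~1 for a weakly calibrated model
```

A slope well below 1 would signal overfitting (predictions too extreme); moderate and strong calibration require grouped or covariate-pattern-level checks beyond this sketch.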
Brandstätter, Christian; Laner, David; Prantl, Roman; Fellner, Johann
2014-12-01
Municipal solid waste landfills pose a threat to the environment and human health, especially old landfills that lack facilities for the collection and treatment of landfill gas and leachate. Consequently, missing information about emission flows prevents site-specific environmental risk assessments. To close this gap, combining waste sampling and analysis with statistical modeling is one option for estimating present and future emission potentials. Optimizing the trade-off between investigation costs and reliable results requires knowing both the number of samples to be taken and the variables to be analyzed. This article aims to identify the optimal number of waste samples and variables needed to predict a larger set of variables. We therefore introduce a multivariate linear regression model and test its applicability in two case studies. Landfill A was used to set up and calibrate the model, based on 50 waste samples and twelve variables. The calibrated model was applied to Landfill B, comprising 36 waste samples and the same twelve variables, with four predictor variables. The case study results are twofold: first, reliable and accurate prediction of the twelve variables can be achieved from knowledge of four predictor variables (LOI, EC, pH and Cl); second, for Landfill B, only ten full measurements would be needed for a reliable prediction of most response variables. The four predictor variables have comparably low analytical costs relative to the full set of measurements, and this cost reduction could be used to increase the number of samples, yielding an improved understanding of the spatial heterogeneity of waste in landfills. In conclusion, future application of the developed model could improve the reliability of predicted emission potentials, and the model could become a standard screening tool for old landfills if its applicability and reliability are confirmed in additional case studies. Copyright © 2014 Elsevier Ltd. All rights reserved.
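The modeling setup (a few cheap predictors, twelve responses) amounts to a single multi-output least-squares fit, which can be sketched as follows; the data below are synthetic placeholders, not the landfill measurements:

```python
# Hedged sketch of a multivariate linear regression predicting twelve
# response variables from four predictors (stand-ins for LOI, EC, pH, Cl).
# All data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n_samples, n_predictors, n_responses = 50, 4, 12
X = rng.normal(size=(n_samples, n_predictors))          # e.g. LOI, EC, pH, Cl
true_map = rng.normal(size=(n_predictors, n_responses)) # unknown linear relations
Y = X @ true_map + rng.normal(scale=0.1, size=(n_samples, n_responses))

model = LinearRegression().fit(X, Y)   # one fit covers all twelve responses
Y_hat = model.predict(X)
```

In the article's workflow the fit would be calibrated on Landfill A and then applied to the cheaper four-variable measurements from Landfill B.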
NASA Astrophysics Data System (ADS)
St Jacques, J.; Cumming, B. F.; Sauchyn, D.; Vanstone, J. R.; Dickenson, J.; Smol, J. P.
2013-12-01
A vital component of paleoclimatology is the validation of paleoclimatological reconstructions. Unfortunately, scant instrumental data are available for this prior to the 20th century, so typically we can only do long-term validation using other proxy-inferred climate reconstructions. Minnesota, USA, with its long military fort climate records beginning in 1820 and an early, dense network of climate stations, offers a rare opportunity for proxy validation. We compare a high-resolution (4-year), millennium-scale, pollen-inferred paleoclimate record derived from varved Lake Mina in central Minnesota to early military fort records and dendroclimatological records. When inferring a paleoclimate record from a pollen record, we rely upon the pollen-climate relationship being constant in time. However, massive human impacts have significantly altered vegetation, and the relationship between modern instrumental climate data and the modern pollen rain has become altered from what it was in the past. In the Midwest, selective logging, fire suppression, deforestation and agriculture have strongly influenced the modern pollen rain since Euro-American settlement in the mid-1800s. We assess the signal distortion introduced by using the conventional method of modern post-settlement pollen and climate calibration sets to infer climate at Lake Mina from pre-settlement pollen data. Our first February and May temperature reconstructions are based on a pollen dataset contemporaneous with early settlement, to which corresponding climate data from the earliest instrumental records have been added to produce a 'pre-settlement' calibration set. The second February and May temperature reconstructions are based on a conventional 'modern' pollen-climate dataset from core-top pollen samples and modern climate normals. 
The temperature reconstructions are then compared to the earliest instrumental records from Fort Snelling, Minnesota, and it is shown that the reconstructions based on the pre-settlement calibration set give much more credible reconstructions. We then compare the temperature reconstructions based upon the two calibration sets for AD 1116-2002. Significant signal flattening and bias exist when using the conventional modern pollen-climate calibration set rather than the pre-settlement pollen-climate calibration set, resulting in an overestimation of Little Ice Age monthly mean temperatures of 0.5-1.5 °C. Therefore, regional warming from anthropogenic global warming is significantly underestimated when using the conventional method of building pollen-climate calibration sets. We also compare the Lake Mina pollen-inferred effective moisture record to early 19th century climate data and to a four-century tree-ring inferred moisture reconstruction based upon sites in Minnesota and the Dakotas. This comparison shows that regional tree-ring reconstructions are biased towards dry conditions and record wet periods poorly relative to high-resolution pollen reconstructions, giving a false impression of regional aridity. It also suggests that varve chronologies should be based upon cross-dating to ensure a more accurate chronology.
Li, Wen-xia; Li, Feng; Zhao, Guo-liang; Tang, Shi-jun; Liu, Xiao-ying
2014-12-01
A series of 376 cotton-polyester (PET) blend fabrics was studied with a portable near-infrared (NIR) spectrometer. A NIR semi-quantitative-qualitative calibration model was established by the partial least squares (PLS) method combined with a qualitative identification coefficient. In this process, the PLS method was used as the quantitative correction method, and the qualitative identification coefficient was set from the cotton and polyester contents of the blend fabrics. The model both identifies cotton-polyester blend fabrics qualitatively and yields their relative contents quantitatively, so it can be used for semi-quantitative identification analysis. In building the model, the noise and baseline drift of the spectra were eliminated by the Savitzky-Golay (S-G) derivative, and the influence of waveband selection and of different pre-processing methods on the qualitative calibration model was also studied. The major absorption bands of 100% cotton samples were in the 1400~1600 nm region, those of 100% polyester were around 1600~1800 nm, and the absorption intensity increased with increasing cotton or polyester content. The cotton-polyester major absorption region was therefore selected as the base waveband, and the optimal waveband (1100~2500 nm) was found by expanding the waveband in both directions (correlation coefficient 0.6, wave-point number 934). The validation samples were predicted by the calibration model; the results showed that the model evaluation parameters were optimal in the 1100~2500 nm region, with the combination of the S-G derivative, multiplicative scatter correction (MSC), and mean centering as the pre-processing method. The RC (correlation coefficient of calibration) was 0.978, the RP (correlation coefficient of prediction) was 0.940, the SEC (standard error of calibration) was 1.264, the SEP (standard error of prediction) was 1.590, and the recognition accuracy for the samples was up to 93.4%. These results show that cotton-polyester blend fabrics can be predicted by the semi-quantitative-qualitative calibration model.
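One of the pre-processing steps named above, multiplicative scatter correction (MSC), can be sketched in a few lines: each spectrum is regressed on a reference (usually the mean calibration spectrum) and corrected as (x - intercept) / slope, removing additive and multiplicative scatter effects. The two-spectrum example is synthetic:

```python
# Minimal multiplicative scatter correction (MSC) sketch: regress each
# spectrum on a reference spectrum and remove the fitted offset and scale.
import numpy as np

def msc(spectra, reference=None):
    """Return MSC-corrected spectra (rows = samples)."""
    ref = spectra.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(spectra)
    for i, x in enumerate(spectra):
        slope, intercept = np.polyfit(ref, x, 1)   # fit x ~ slope*ref + intercept
        corrected[i] = (x - intercept) / slope
    return corrected

# Two spectra that differ from the reference only by scatter (offset and
# scale) collapse back onto the reference after correction.
base = np.linspace(0.1, 1.0, 50)
spectra = np.vstack([1.5 * base + 0.2, 0.8 * base - 0.1])
corrected = msc(spectra, reference=base)
```

In a full pipeline this would be combined with the S-G derivative and mean centering before the PLS fit, as the abstract describes.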
Spectrometric Estimation of Total Nitrogen Concentration in Douglas-Fir Foliage
NASA Technical Reports Server (NTRS)
Johnson, Lee F.; Billow, Christine R.; Peterson, David L. (Technical Monitor)
1995-01-01
Spectral measurements of fresh and dehydrated Douglas-fir foliage, from trees cultivated under three fertilization treatments, were acquired with a laboratory spectrophotometer. The slope (first derivative) of the fresh- and dry-leaf absorbance spectra near known protein absorption features was strongly correlated with the total nitrogen (TN) concentration of the foliage samples. Particularly strong correlation was observed between the first-derivative spectra in the 2150-2170 nm region and TN, reaching a local extremum of -0.84 in the fresh-leaf spectra at 2160 nm. Stepwise regression was used to generate calibration equations relating first-derivative spectra from fresh, dry/intact, and dry/ground samples to TN concentration. Standard errors of calibration were 1.52 mg/g (fresh), 1.33 (dry/intact), and 1.20 (dry/ground), with goodness-of-fit values of 0.94 and greater. Cross-validation was performed with the fresh-leaf dataset to examine the predictive capability of the regression method: standard errors of prediction ranged from 1.47-2.37 mg/g across seven different validation sets, prediction goodness of fit ranged from 0.85-0.94, and wavelength selection was fairly insensitive to the membership of the calibration set. All regressions in this study tended to select wavelengths in the 2100-2350 nm region, with the primary selection in the 2142-2172 nm region. The study provides positive evidence concerning the feasibility of assessing TN status of fresh-leaf samples by spectrometric means. We assert that the ability to extract biochemical information from fresh-leaf spectra is a necessary but insufficient condition regarding the use of remote sensing for canopy-level biochemical estimation.
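The band-selection idea, correlating first-derivative spectra with the property of interest at every wavelength and locating the strongest band, can be illustrated with synthetic data; the band position, TN range, and noise level below are invented:

```python
# Correlate the first derivative of each spectrum with a TN-like property
# at every band and find the most informative wavelength. Synthetic data.
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_bands = 40, 120
tn = rng.uniform(8, 20, n_samples)                    # TN-like values, mg/g
band = np.exp(-0.5 * ((np.arange(n_bands) - 60) / 6) ** 2)
spectra = np.outer(tn, band) + rng.normal(scale=0.2, size=(n_samples, n_bands))

deriv = np.gradient(spectra, axis=1)                  # first-derivative spectra
corr = np.array([np.corrcoef(deriv[:, j], tn)[0, 1] for j in range(n_bands)])
best_band = int(np.argmax(np.abs(corr)))              # strongest |r| location
```

As in the study, the strongest correlations appear on the steep flanks of the absorption feature, where the first derivative is most sensitive to concentration.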
de Godoy, Luiz Antonio Fonseca; Hantao, Leandro Wang; Pedroso, Marcio Pozzobon; Poppi, Ronei Jesus; Augusto, Fabio
2011-08-05
The use of multivariate curve resolution (MCR) to build multivariate quantitative models using data obtained from comprehensive two-dimensional gas chromatography with flame ionization detection (GC×GC-FID) is presented and evaluated. The MCR algorithm has some important features, such as the second-order advantage and the recovery of the instrumental response for each pure component after optimization by an alternating least squares (ALS) procedure. A model to quantify the essential oil of rosemary was built using a calibration set containing only known concentrations of the essential oil, with cereal alcohol as solvent. A calibration curve correlating the concentration of the essential oil of rosemary with the instrumental response obtained from the MCR-ALS algorithm was constructed, and this calibration model was applied to predict the concentration of the oil in complex samples (mixtures of the essential oil, pineapple essence, and commercial perfume). The root mean square error of prediction (RMSEP) and the root mean square error of the percentage deviation (RMSPD) obtained were 0.4% (v/v) and 7.2%, respectively. Additionally, a second model was built and used to evaluate the accuracy of the method: a model to quantify the essential oil of lemon grass, whose concentration was predicted in the validation set and in real perfume samples. The RMSEP and RMSPD obtained were 0.5% (v/v) and 6.9%, respectively, and the concentration of the essential oil of lemon grass in perfume agreed with the value reported by the manufacturer. These results indicate that the MCR algorithm is adequate to resolve the target chromatogram from the complex sample and to build multivariate models of GC×GC-FID data. Copyright © 2011 Elsevier B.V. All rights reserved.
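A toy version of the ALS step, alternating least-squares resolution of bilinear data D ~ C S^T with a nonnegativity constraint, can be sketched as follows. The profiles are synthetic, and for this demonstration the concentration guess is initialized near the truth, whereas real MCR-ALS would use purest-variable or evolving-factor initial estimates:

```python
# Toy MCR-ALS sketch for bilinear data D ~ C @ S.T: alternate
# least-squares updates of concentration (C) and spectral (S) profiles
# with a nonnegativity clip. Didactic stand-in, not the authors' code.
import numpy as np

rng = np.random.default_rng(5)
n_mix, n_channels = 15, 80
S_true = np.abs(rng.normal(size=(n_channels, 2)))     # two pure "spectra"
C_true = rng.uniform(0.1, 1.0, size=(n_mix, 2))       # concentrations
D = C_true @ S_true.T + rng.normal(scale=0.01, size=(n_mix, n_channels))

# Toy initialization near the true concentrations (see lead-in note).
C = C_true * rng.uniform(0.5, 1.5, size=C_true.shape)
for _ in range(200):
    # Given C, solve for S; given S, solve for C; clip negatives to zero.
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0, None)
    C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0, None)

residual = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)
```

The recovered C column for a target component, integrated per sample, plays the role of the "instrumental response" used to build the calibration curve in the abstract.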
NASA Astrophysics Data System (ADS)
Anderson, V. J.; Shanahan, T. M.; Saylor, J.; Horton, B. K.
2012-12-01
Recently, the distribution of branched GDGTs (glycerol dialkyl glycerol tetraethers) has been proposed as a proxy for temperature and pH in soils via the MBT/CBT index, and has been used to reconstruct past temperature variations in a number of settings ranging from marine sediments to loess deposits and paleosols. However, empirical calibrations of the MBT/CBT index against temperature show significant scatter, leading to uncertainties as large as ±2 degrees C. In this study we seek to add to and improve upon the existing soil calibration using a new set of samples spanning a large elevation (and temperature) gradient in the Eastern Cordillera of Colombia. At each site we buried temperature loggers to constrain the diurnal and seasonal temperature experienced by each soil sample. Located only 5 degrees north of the equator, our sites experience very small seasonal temperature variation - most sites display an annual range of less than 4 degrees C. In addition, the pH of all of the soils is almost invariant across the transect, with the vast majority of samples having pH values between 4 and 5. This dataset represents a "best-case" scenario - small variations in seasonal temperature and pH, and well-constrained instrumental data - which allows us to examine the brGDGT-temperature relationship in the absence of major confounding factors such as seasonality and soil chemistry. Interestingly, the relationship between temperature and the MBT/CBT index is not improved using this dataset, suggesting that these factors are not the cause of the anomalous scatter in the calibration dataset. However, we find that using other parameterizations for the regression equation instead of the MBT and CBT indices, the errors in our temperature estimates are significantly reduced.
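Numerically, a proxy calibration of this kind reduces to regressing an index against logger-constrained temperatures and quantifying the residual scatter. The sketch below uses an invented linear index, not the actual MBT/CBT formulas:

```python
import numpy as np

def calibrate_proxy(index, temperature):
    """Least-squares linear calibration of a proxy index against
    measured soil temperature; returns the fit plus the RMSE of the
    calibration residuals (the 'scatter' discussed above)."""
    a, b = np.polyfit(index, temperature, 1)
    pred = a * index + b
    rmse = np.sqrt(np.mean((temperature - pred) ** 2))
    return a, b, rmse

# Hypothetical elevation transect: index increases with temperature
rng = np.random.default_rng(2)
temp = np.linspace(5, 25, 25)
index = 0.02 * temp + 0.1 + rng.normal(0, 0.02, 25)
a, b, rmse = calibrate_proxy(index, temp)
print(rmse < 3.0)
```

Comparing RMSE values across alternative index parameterizations is exactly the kind of test the abstract describes.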
NASA Astrophysics Data System (ADS)
Wilson, Adam A.
The ability to measure thermal properties of thin films and nanostructured materials is an important aspect of many fields of academic study. A strategy especially well-suited for nanoscale investigations of these properties is the scanning hot probe technique, which is unique in its ability to non-destructively interrogate the thermal properties with high resolution, both laterally as well as through the thickness of the material. Strategies to quantitatively determine sample thermal conductivity depend on probe calibration. State of the art calibration strategies assume that the area of thermal exchange between probe and sample does not vary with sample thermal conductivity. However, little investigation has gone into determining whether or not that assumption is valid. This dissertation provides a rigorous study into the probe-to-sample heat transfer through the air gap at diffusive distances for a variety of values of sample thermal conductivity. It is demonstrated that the thermal exchange radius and gap/contact thermal resistance vary with sample thermal conductivity as well as tip-to-sample clearance in non-contact mode. In contact mode, it is demonstrated that higher thermal conductivity samples lead to a reduction in thermal exchange radius for Wollaston probe tips. Conversely, in non-contact mode and in contact mode for sharper probe tips where air contributes the most to probe-to-sample heat transfer, the opposite trend occurs. This may be attributed to the relatively strong solid-to-solid conduction occurring between probe and sample for the Wollaston probes. A three-dimensional finite element (3DFE) model was developed to investigate how the calibrated thermal exchange parameters vary with sample thermal conductivity when calibrating the probe via the intersection method in non-contact mode at diffusive distances. The 3DFE model was then used to explore the limits of sensitivity of the experiment for a range of simulated experimental conditions.
It is determined that, when operating the scanning hot probe technique in air at standard temperature and pressure using Wollaston probes, the technique is capable of measuring, within 20% uncertainty, samples with values of thermal conductivity up to 10 Wm-1K-1 in contact mode and up to 2 Wm-1K-1 in non-contact mode. By increasing the thermal conductivity of the probe's surroundings (i.e. changing air to a more conductive gas), sensitivity in non-contact mode to sample thermal conductivity is improved, which suggests potential for future investigations using the non-contact scanning hot probe to measure samples of higher thermal conductivity. The ability of the technique to differentiate thin films from the substrate is investigated, and the sensitivity of the technique to thin films and samples with anisotropic properties is explored. The models (both analytical and finite element) developed and reported in this dissertation make it possible to measure samples that could not be accurately measured by the standard procedure that preceded this work. While other techniques failed to interrogate the film thermal conductivity of a full set of double-wall carbon nanotubes infused into polymers, the methods developed in this work allowed non-contact scanning hot probe measurements to obtain the film thermal conductivity for each sample. Finite element simulations accounting for the anisotropy of these thin-film-on-substrate materials show similar trends with independently measured in-plane thermal conductivity for the only two (of five) samples in the set that could be measured by the independent technique. Investigations in contact mode with high resolution Pd probes, whose probe-to-sample clearance is difficult to control in a repeatable fashion, show that surface roughness affects the thermal contact resistance.
This can lead to values of reported sample thermal conductivity that are misleading when using the standard calibrated thermal exchange parameters on samples with significantly different surface roughness than the calibration samples. This effect was taken into account to report the sample thermal conductivity of Bi2Te3 nanoflakes.
Zhao, Huaying; Ghirlando, Rodolfo; Alfonso, Carlos; Arisaka, Fumio; Attali, Ilan; Bain, David L; Bakhtina, Marina M; Becker, Donald F; Bedwell, Gregory J; Bekdemir, Ahmet; Besong, Tabot M D; Birck, Catherine; Brautigam, Chad A; Brennerman, William; Byron, Olwyn; Bzowska, Agnieszka; Chaires, Jonathan B; Chaton, Catherine T; Cölfen, Helmut; Connaghan, Keith D; Crowley, Kimberly A; Curth, Ute; Daviter, Tina; Dean, William L; Díez, Ana I; Ebel, Christine; Eckert, Debra M; Eisele, Leslie E; Eisenstein, Edward; England, Patrick; Escalante, Carlos; Fagan, Jeffrey A; Fairman, Robert; Finn, Ron M; Fischle, Wolfgang; de la Torre, José García; Gor, Jayesh; Gustafsson, Henning; Hall, Damien; Harding, Stephen E; Cifre, José G Hernández; Herr, Andrew B; Howell, Elizabeth E; Isaac, Richard S; Jao, Shu-Chuan; Jose, Davis; Kim, Soon-Jong; Kokona, Bashkim; Kornblatt, Jack A; Kosek, Dalibor; Krayukhina, Elena; Krzizike, Daniel; Kusznir, Eric A; Kwon, Hyewon; Larson, Adam; Laue, Thomas M; Le Roy, Aline; Leech, Andrew P; Lilie, Hauke; Luger, Karolin; Luque-Ortega, Juan R; Ma, Jia; May, Carrie A; Maynard, Ernest L; Modrak-Wojcik, Anna; Mok, Yee-Foong; Mücke, Norbert; Nagel-Steger, Luitgard; Narlikar, Geeta J; Noda, Masanori; Nourse, Amanda; Obsil, Tomas; Park, Chad K; Park, Jin-Ku; Pawelek, Peter D; Perdue, Erby E; Perkins, Stephen J; Perugini, Matthew A; Peterson, Craig L; Peverelli, Martin G; Piszczek, Grzegorz; Prag, Gali; Prevelige, Peter E; Raynal, Bertrand D E; Rezabkova, Lenka; Richter, Klaus; Ringel, Alison E; Rosenberg, Rose; Rowe, Arthur J; Rufer, Arne C; Scott, David J; Seravalli, Javier G; Solovyova, Alexandra S; Song, Renjie; Staunton, David; Stoddard, Caitlin; Stott, Katherine; Strauss, Holger M; Streicher, Werner W; Sumida, John P; Swygert, Sarah G; Szczepanowski, Roman H; Tessmer, Ingrid; Toth, Ronald T; Tripathy, Ashutosh; Uchiyama, Susumu; Uebel, Stephan F W; Unzai, Satoru; Gruber, Anna Vitlin; von Hippel, Peter H; Wandrey, Christine; Wang, Szu-Huan; Weitzel, Steven E; 
Wielgus-Kutrowska, Beata; Wolberger, Cynthia; Wolff, Martin; Wright, Edward; Wu, Yu-Sung; Wubben, Jacinta M; Schuck, Peter
2015-01-01
Analytical ultracentrifugation (AUC) is a first principles based method to determine absolute sedimentation coefficients and buoyant molar masses of macromolecules and their complexes, reporting on their size and shape in free solution. The purpose of this multi-laboratory study was to establish the precision and accuracy of basic data dimensions in AUC and validate previously proposed calibration techniques. Three kits of AUC cell assemblies containing radial and temperature calibration tools and a bovine serum albumin (BSA) reference sample were shared among 67 laboratories, generating 129 comprehensive data sets. These allowed for an assessment of many parameters of instrument performance, including accuracy of the reported scan time after the start of centrifugation, the accuracy of the temperature calibration, and the accuracy of the radial magnification. The range of sedimentation coefficients obtained for BSA monomer in different instruments and using different optical systems was from 3.655 S to 4.949 S, with a mean and standard deviation of (4.304 ± 0.188) S (4.4%). After the combined application of correction factors derived from the external calibration references for elapsed time, scan velocity, temperature, and radial magnification, the range of s-values was reduced 7-fold with a mean of 4.325 S and a 6-fold reduced standard deviation of ± 0.030 S (0.7%). In addition, the large data set provided an opportunity to determine the instrument-to-instrument variation of the absolute radial positions reported in the scan files, the precision of photometric or refractometric signal magnitudes, and the precision of the calculated apparent molar mass of BSA monomer and the fraction of BSA dimers. These results highlight the necessity and effectiveness of independent calibration of basic AUC data dimensions for reliable quantitative studies.
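The correction step described above amounts to multiplying each instrument's raw s-value by the product of its externally derived calibration factors, which collapses the inter-instrument spread. A minimal sketch with invented bias factors:

```python
import numpy as np

def apply_corrections(s_raw, f_time, f_temp, f_radial):
    """Combine external calibration factors (elapsed time / scan
    velocity, temperature, radial magnification) multiplicatively and
    correct the experimental sedimentation coefficients."""
    return s_raw * f_time * f_temp * f_radial

# Hypothetical inter-instrument spread before/after correction
rng = np.random.default_rng(3)
s_true = 4.3                                   # "true" s-value, svedbergs
bias = rng.normal(1.0, 0.04, 50)               # instrument-specific bias
s_raw = s_true * bias + rng.normal(0, 0.01, 50)
# factors estimated from the reference standards undo most of the bias
f = 1.0 / bias
s_corr = apply_corrections(s_raw, f, np.ones(50), np.ones(50))
print(np.std(s_corr) < np.std(s_raw))
```

The several-fold reduction in standard deviation after correction mirrors the 6-fold reduction reported in the study.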
A Comparison of Two Balance Calibration Model Building Methods
NASA Technical Reports Server (NTRS)
DeLoach, Richard; Ulbrich, Norbert
2007-01-01
Simulated strain-gage balance calibration data is used to compare the accuracy of two balance calibration model building methods for different noise environments and calibration experiment designs. The first building method obtains a math model for the analysis of balance calibration data after applying a candidate math model search algorithm to the calibration data set. The second building method uses stepwise regression analysis in order to construct a model for the analysis. Four balance calibration data sets were simulated in order to compare the accuracy of the two math model building methods. The simulated data sets were prepared using the traditional One Factor At a Time (OFAT) technique and the Modern Design of Experiments (MDOE) approach. Random and systematic errors were introduced in the simulated calibration data sets in order to study their influence on the math model building methods. Residuals of the fitted calibration responses and other statistical metrics were compared in order to evaluate the calibration models developed with different combinations of noise environment, experiment design, and model building method. Overall, predicted math models and residuals of both math model building methods show very good agreement. Significant differences in model quality were attributable to noise environment, experiment design, and their interaction. Generally, the addition of systematic error significantly degraded the quality of calibration models developed from OFAT data by either method, but MDOE experiment designs were more robust with respect to the introduction of a systematic component of the unexplained variance.
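Stepwise regression of the kind used by the second building method can be illustrated with a greedy forward-selection sketch; this is a simplified stand-in (real stepwise procedures also test term removal and use F- or p-value entry criteria), run on simulated responses:

```python
import numpy as np

def forward_stepwise(X, y, max_terms=3):
    """Greedy forward selection: repeatedly add the candidate regressor
    that most reduces the residual sum of squares."""
    selected = []
    for _ in range(max_terms):
        scores = []
        for j in range(X.shape[1]):
            if j in selected:
                scores.append(np.inf)
                continue
            cols = X[:, selected + [j]]
            beta, *_ = np.linalg.lstsq(cols, y, rcond=None)
            scores.append(np.sum((y - cols @ beta) ** 2))
        selected.append(int(np.argmin(scores)))
    return selected

# Simulated calibration responses: only regressors 0 and 2 matter
rng = np.random.default_rng(4)
X = rng.normal(size=(100, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(0, 0.1, 100)
chosen = forward_stepwise(X, y, max_terms=2)
print(sorted(chosen))  # → [0, 2]
```

The candidate-search approach of the first method differs in exploring whole candidate models rather than adding one term at a time.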
NASA Astrophysics Data System (ADS)
Fisher, W. P., Jr.; Petry, P.
2016-11-01
Many published research studies document item calibration invariance across samples using Rasch's probabilistic models for measurement. A new approach to outcomes evaluation for very small samples was employed for two workshop series focused on stress reduction and joyful living conducted for health system employees and caregivers since 2012. Rasch-calibrated self-report instruments measuring depression, anxiety and stress, and the joyful living effects of mindfulness behaviors were identified in peer-reviewed journal articles. Items from one instrument were modified for use with a US population, other items were simplified, and some new items were written. Participants provided ratings of their depression, anxiety and stress, and the effects of their mindfulness behaviors before and after each workshop series. The numbers of participants providing both pre- and post-workshop data were low (16 and 14). Analysis of these small data sets produces results showing that, with some exceptions, the item hierarchies defining the constructs retained the same invariant profiles they had exhibited in the published research (correlations (not disattenuated) range from 0.85 to 0.96). In addition, comparisons of the pre- and post-workshop measures for the three constructs showed substantively and statistically significant changes. Implications for program evaluation comparisons, quality improvement efforts, and the organization of communications concerning outcomes in clinical fields are explored.
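The invariance check reported above boils down to correlating published item calibrations with locally re-estimated ones. A stdlib-only sketch with hypothetical logit difficulties:

```python
import math

def invariance_correlation(published, local):
    """Pearson correlation between published item calibrations (logits)
    and locally re-estimated ones: the invariance statistic quoted in
    the abstract (0.85-0.96 there; the values below are invented)."""
    n = len(published)
    mp = sum(published) / n
    ml = sum(local) / n
    cov = sum((p - mp) * (l - ml) for p, l in zip(published, local))
    sp = math.sqrt(sum((p - mp) ** 2 for p in published))
    sl = math.sqrt(sum((l - ml) ** 2 for l in local))
    return cov / (sp * sl)

# Hypothetical item difficulties: published vs. small-sample estimate
published = [-1.2, -0.6, -0.1, 0.3, 0.9, 1.5]
local = [-1.0, -0.7, 0.0, 0.4, 0.8, 1.6]
r = invariance_correlation(published, local)
print(r > 0.95)
```

A high correlation indicates the item hierarchy (the construct definition) is preserved despite the small sample.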
Jurowski, Kamil; Buszewski, Bogusław; Piekoszewski, Wojciech
2015-01-01
Nowadays, studies related to the distribution of metallic elements in biological samples are one of the most important issues. There are many articles dedicated to specific analytical atomic spectrometry techniques used for mapping/(bio)imaging the metallic elements in various kinds of biological samples. However, in such literature, there is a lack of articles dedicated to reviewing calibration strategies, and their problems, nomenclature, definitions, ways and methods used to obtain quantitative distribution maps. The aim of this article was to characterize the analytical calibration in the (bio)imaging/mapping of the metallic elements in biological samples including (1) nomenclature; (2) definitions; and (3) selected, sophisticated examples of calibration strategies with analytical calibration procedures applied in the different analytical methods currently used to study an element's distribution in biological samples/materials such as LA ICP-MS, SIMS, EDS, XRF and others. The main emphasis was placed on the procedures and methodology of the analytical calibration strategy. Additionally, the aim of this work is to systematize the nomenclature for the calibration terms: analytical calibration, analytical calibration method, analytical calibration procedure and analytical calibration strategy. The authors also want to popularize the division of calibration methods that are different than those hitherto used. This article is the first work in the literature that refers to and emphasizes many different and complex aspects of analytical calibration problems in studies related to (bio)imaging/mapping metallic elements in different kinds of biological samples. Copyright © 2014 Elsevier B.V. All rights reserved.
Pérez Antón, Ana; Ramos, Álvaro García; Del Nogal Sánchez, Miguel; Pavón, José Luis Pérez; Cordero, Bernardo Moreno; Pozas, Ángel Pedro Crisolino
2016-07-01
We propose a new method for the rapid determination of five volatile compounds described in the literature as possible biomarkers of lung cancer in urine samples. The method is based on the coupling of a headspace sampler, a programmed temperature vaporizer in solvent-vent injection mode, and a mass spectrometer (HS-PTV-MS). This configuration is known as an electronic nose based on mass spectrometry. Once the method was developed, it was used for the analysis of urine samples from lung cancer patients and healthy individuals. Multivariate calibration models were employed to quantify the biomarker concentrations in the samples. The detection limits ranged between 0.16 and 21 μg/L. For the assignment of the samples to the patient group or the healthy individuals, the Wilcoxon signed-rank test was used, comparing the concentrations obtained with the median of a reference set of healthy individuals. To date, this is the first time that multivariate calibration and non-parametric methods have been combined to classify biological samples from profile signals obtained with an electronic nose. When significant differences in the concentration of one or more biomarkers were found with respect to the reference set, the sample was considered positive and a new analysis was performed using a chromatographic method (HS-PTV-GC/MS) to confirm the result. The main advantage of the proposed HS-PTV-MS methodology is that no prior chromatographic separation and no sample manipulation are required, which increases the number of samples analyzed per hour and restricts the use of time-consuming techniques to only when necessary. Graphical abstract: Schematic diagram of the developed methodology.
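The classification rule can be sketched with SciPy's Wilcoxon signed-rank test, comparing one sample's biomarker concentrations against the reference-set medians. All concentrations below are invented, and the one-sided alternative is an assumption of this sketch, not a detail taken from the paper:

```python
from scipy.stats import wilcoxon

def flag_sample(concentrations, reference_medians, alpha=0.05):
    """Wilcoxon signed-rank test of a sample's biomarker concentrations
    against the medians of a healthy reference set; flag the sample as
    positive when concentrations are significantly elevated."""
    stat, p = wilcoxon(concentrations, reference_medians, alternative="greater")
    return p < alpha

reference_medians = [1.0, 2.0, 0.5, 3.0, 1.5]       # hypothetical, ug/L
healthy_like = [1.04, 1.91, 0.58, 2.93, 1.44]       # close to the medians
elevated = [4.0, 6.1, 2.2, 8.5, 5.0]                # all biomarkers raised
print(flag_sample(healthy_like, reference_medians),
      flag_sample(elevated, reference_medians))
```

In the proposed workflow a flagged sample would then be re-analyzed by the confirmatory HS-PTV-GC/MS method.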
Tamburini, Elena; Vincenzi, Fabio; Costa, Stefania; Mantovi, Paolo; Pedrini, Paola; Castaldelli, Giuseppe
2017-10-17
Near-Infrared Spectroscopy is a cost-effective and environmentally friendly technique that could represent an alternative to conventional soil analysis methods, including total organic carbon (TOC). Soil fertility and quality are usually measured by traditional methods that involve the use of hazardous and strong chemicals. The effects of physical soil characteristics, such as moisture content and particle size, on spectral signals could be of great interest in order to understand and optimize prediction capability and set up a robust and reliable calibration model, with the future perspective of being applied in the field. Spectra of 46 soil samples were collected. Soil samples were divided into three data sets: unprocessed; only dried; and dried, ground and sieved, in order to evaluate the effects of moisture and particle size on spectral signals. Both separate and combined normalization methods including standard normal variate (SNV), multiplicative scatter correction (MSC) and normalization by closure (NCL), as well as smoothing using first and second derivatives (DV1 and DV2), were applied to a total of seven cases. Pretreatments for model optimization were designed and compared for each data set. The best combination of pretreatments was achieved by applying SNV and DV2 on partial least squares (PLS) modelling. There were no significant differences between the predictions using the three different data sets (p < 0.05). Finally, a unique database including all three data sets was built to include all the sources of sample variability that were tested and used for final prediction. External validation of TOC was carried out on 16 unknown soil samples to evaluate the predictive ability of the final combined calibration model. Hence, we demonstrate that sample preprocessing has minor influence on the quality of near infrared spectroscopy (NIR) predictions, laying the ground for a direct and fast in situ application of the method.
Data can be acquired outside the laboratory since the method is simple and does not need more than a simple band ratio of the spectra.
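The SNV and derivative pretreatments named above are straightforward to express. The sketch below uses a plain second difference in place of the Savitzky-Golay derivative usually applied in practice, on synthetic spectra:

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum (row),
    removing additive baseline and multiplicative scatter effects."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

def second_derivative(spectra):
    """Second difference along the wavelength axis: a simple DV2
    stand-in that removes linear baseline trends."""
    return np.diff(spectra, n=2, axis=1)

# Synthetic spectra with a sloping baseline offset
rng = np.random.default_rng(5)
raw = rng.random((10, 100)) + np.linspace(0, 1, 100)
pre = second_derivative(snv(raw))
print(pre.shape)  # → (10, 98)
```

The pretreated matrix would then feed a PLS regression against the TOC reference values.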
Flow through electrode with automated calibration
Szecsody, James E [Richland, WA; Williams, Mark D [Richland, WA; Vermeul, Vince R [Richland, WA
2002-08-20
The present invention is an improved automated flow through electrode liquid monitoring system. The automated system has a sample inlet to a sample pump, a sample outlet from the sample pump to at least one flow through electrode with a waste port. At least one computer controls the sample pump and records data from the at least one flow through electrode for a liquid sample. The improvement relies upon (a) at least one source of a calibration sample connected to (b) an injection valve connected to said sample outlet and connected to said source, said injection valve further connected to said at least one flow through electrode, wherein said injection valve is controlled by said computer to select between said liquid sample or said calibration sample. Advantages include improved accuracy because of more frequent calibrations, no additional labor for calibration, no need to remove the flow through electrode(s), and minimal interruption of sampling.
Benthic Foraminifera Clumped Isotope Calibration
NASA Astrophysics Data System (ADS)
Piasecki, A.; Marchitto, T. M., Jr.; Bernasconi, S. M.; Grauel, A. L.; Tisserand, A. A.; Meckler, N.
2017-12-01
Due to the widespread spatial and temporal distribution of benthic foraminifera within ocean sediments, they are commonly used for reconstructing past ocean temperatures and environmental conditions. Many foraminifera-based proxies, however, require calibration schemes that are species specific, which becomes complicated in deep time due to extinct species. Furthermore, calibrations often depend on seawater chemistry being stable and/or constrained, which is not always the case over significant climate state changes like the Eocene Oligocene Transition. Here we study the effect of varying benthic foraminifera species using the clumped isotope proxy for temperature. The benefit of this proxy is that it is independent of seawater chemistry, whereas the downside is that it requires a relatively large sample amount. Due to recent advancements in sample processing that reduce the sample weight by a factor of 10, clumped isotopes can now be applied to a range of paleoceanographic questions. First, however, we need to prove that, unlike for other proxies, there are no interspecies differences with clumped isotopes, as is predicted by first principles modeling. We used a range of surface sediment samples covering a temperature range of 1-20°C from the Pacific, Mediterranean, Bahamas, and the Atlantic, and measured the clumped isotope composition of 11 different species of benthic foraminifera. We find that there are indeed no discernible species-specific differences within the sample set. In addition, the samples have the same temperature response to the proxy as inorganic carbonate samples over the same temperature range. As a result, we can now apply this proxy to a wide range of samples and foraminifera species from different ocean basins with different ocean chemistry and be confident that observed signals reflect variations in temperature.
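Clumped isotope thermometry commonly uses a calibration of the form Δ47 = a·10^6/T^2 + b, with T in kelvin. The sketch below fits and inverts that form on synthetic data; the coefficients are made-up placeholders, not the calibration derived in this study:

```python
import numpy as np

def fit_clumped_calibration(delta47, temp_c):
    """Fit the commonly used linear form Δ47 = a * 1e6/T^2 + b
    (T in kelvin) to paired Δ47 and temperature data."""
    x = 1e6 / (np.asarray(temp_c) + 273.15) ** 2
    a, b = np.polyfit(x, delta47, 1)
    return a, b

def predict_temperature(delta47, a, b):
    """Invert the calibration to recover temperature in deg C."""
    x = (np.asarray(delta47) - b) / a
    return np.sqrt(1e6 / x) - 273.15

# Synthetic core-top data over the 1-20 deg C range discussed above
temp = np.linspace(1, 20, 12)
d47 = 0.039 * (1e6 / (temp + 273.15) ** 2) + 0.26   # invented coefficients
a, b = fit_clumped_calibration(d47, temp)
recovered = predict_temperature(d47, a, b)
print(np.allclose(recovered, temp, atol=0.01))
```

Pooling species into one such fit is only valid given the absence of species-specific offsets, which is what the study set out to demonstrate.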
Criado-García, Laura; Garrido-Delgado, Rocío; Arce, Lourdes; Valcárcel, Miguel
2013-07-15
A UV-Ion Mobility Spectrometer is a simple, rapid, inexpensive instrument widely used in environmental analysis among other fields. The advantageous features of its underlying technology can be of great help towards developing reliable, economical methods for determining gaseous compounds from gaseous, liquid and solid samples. Developing an effective method using UV-Ion Mobility Spectrometry (UV-IMS) to determine volatile analytes entails using appropriate gaseous standards for calibrating the spectrometer. In this work, two home-made sample introduction systems (SISs) and a commercial gas generator were used to obtain such gaseous standards. The first home-made SIS used was a static head-space to measure compounds present in liquid samples and the other home-made system was an exponential dilution set-up to measure compounds present in gaseous samples. Gaseous compounds generated by each method were determined on-line by UV-IMS. Target analytes chosen for this comparative study were ethanol, acetone, benzene, toluene, ethylbenzene and xylene isomers. The different alternatives were acceptable in terms of sensitivity, precision and selectivity. Copyright © 2013 Elsevier B.V. All rights reserved.
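For the exponential dilution set-up, the generated standard follows the textbook dilution law C(t) = C0 * exp(-Q t / V) for a well-mixed vessel of volume V purged at flow rate Q. A sketch with invented values (the specific analyte, concentration, and flow settings are assumptions, not taken from the paper):

```python
import math

def exponential_dilution(c0, flow_rate, volume, t):
    """Concentration in a stirred vessel flushed with clean gas:
    C(t) = C0 * exp(-Q t / V), the exponential-dilution law used to
    generate a descending series of gas-phase calibration standards."""
    return c0 * math.exp(-flow_rate * t / volume)

# Hypothetical: 100 ppm analyte, 0.5 L/min purge, 1 L vessel
for t in (0.0, 1.0, 2.0):  # minutes
    print(round(exponential_dilution(100.0, 0.5, 1.0, t), 1))
# → 100.0, 60.7, 36.8
```

Sampling the vessel at known times thus yields a continuum of standard concentrations from a single starting mixture.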
Mg/Ca temperature calibration for the benthic foraminifers Bulimina inflata and Bulimina mexicana
NASA Astrophysics Data System (ADS)
Grunert, Patrick; Rosenthal, Yair; Jorissen, Frans; Holbourn, Ann
2016-04-01
Bulimina inflata Seguenza 1862 and Bulimina mexicana Cushman 1922 are cosmopolitan, shallow infaunal benthic foraminifers which are common in the fossil record throughout the Neogene and Quaternary. The closely related species share a similar costate shell morphology that differs in the presence or absence of an apical spine. In the present study, we evaluate the temperature dependency of Mg/Ca ratios of these species from an extensive set of core-top samples from the Atlantic and Pacific oceans. The results show no significant offset in Mg/Ca values between B. inflata, B. mexicana, and two other costate morphospecies when present in the same sample. The apparent lack of significant inter-specific/inter-morphotype differences amongst the analysed costate buliminds allows for the combined use of their data-sets for our core-top calibration. Over a bottom-water temperature range of 3-14°C, the Bulimina inflata/mexicana group shows a sensitivity of ~0.12 mmol/mol/°C which is comparable to the epifaunal Cibicidoides pachyderma and higher than for the shallow infaunal Uvigerina spp., the most commonly used taxa in Mg/Ca-based palaeotemperature reconstruction. B. inflata and B. mexicana might thus be a valuable alternative in mesotrophic settings where many of the commonly used species are diminished or absent, and particularly useful in hypoxic settings where costate buliminds may dominate foraminiferal assemblages. This study was financially supported by the Max-Kade-Foundation and contributes to project P25831-N29 of the Austrian Science Fund (FWF).
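A linear Mg/Ca calibration with the ~0.12 mmol/mol/°C sensitivity quoted above can be inverted as follows. The intercept here is a made-up placeholder, not a value from the study:

```python
def mgca_temperature(mg_ca, slope=0.12, intercept=0.9):
    """Invert a linear Mg/Ca calibration, Mg/Ca = slope * BWT + intercept,
    to recover bottom-water temperature (BWT, deg C). The 0.12
    mmol/mol/degC slope matches the abstract; the intercept is assumed."""
    return (mg_ca - intercept) / slope

# Round-trip check over the 3-14 deg C calibration range
for bwt in (3.0, 8.0, 14.0):
    mgca = 0.12 * bwt + 0.9          # synthetic shell Mg/Ca, mmol/mol
    assert abs(mgca_temperature(mgca) - bwt) < 1e-9
print("ok")
```

A higher slope means a given analytical uncertainty in Mg/Ca translates into a smaller temperature error, which is why the elevated sensitivity relative to Uvigerina spp. is attractive.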
Bennett, B. N.; Martin, M. Z.; Leonard, D. N.; ...
2018-02-13
Handheld laser-induced breakdown spectroscopy (HH LIBS) was used to study the elemental composition of four copper alloys and four aluminum alloys to produce calibration curves. The HH LIBS instrument used is a SciAps Z-500, commercially available, that contains a class-1 solid-state laser with an output wavelength of 1532 nm, a laser energy of 5 mJ/pulse, and a pulse duration of 5 ns. Test samples were solid specimens comprising copper and aluminum alloys, and data were collected from the samples’ surface at three different locations, employing a 12-point-grid pattern for each data set. All three data sets of the spectra were averaged, and the intensity, corrected by subtraction of background, was used to produce the elemental calibration curves. Calibration curves are presented for the matrix elements, copper and aluminum, as well as several minor elements. The surface damage produced by the laser was examined by microscopy. The alloys were tested in air and in a glovebox to evaluate the instrument’s ability to identify the constituents within materials under different environmental conditions. The main objective of using this HH LIBS technology is to determine its capability to fingerprint the presence of certain elements at the subpercent level within materials in real time and in situ, as a starting point for undertaking future complex material characterization work.
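The calibration-curve construction described (replicate spots, background subtraction, linear fit of mean intensity against certified concentration) can be sketched as follows; all intensities and concentrations below are invented:

```python
import numpy as np

def calibration_curve(intensities, backgrounds, concentrations):
    """Average background-subtracted line intensities over replicate
    locations, then fit mean intensity against certified concentration
    to obtain a linear LIBS calibration curve."""
    corrected = np.asarray(intensities) - np.asarray(backgrounds)
    mean_I = corrected.mean(axis=1)      # average the replicate spots
    slope, intercept = np.polyfit(concentrations, mean_I, 1)
    return slope, intercept

# Hypothetical 4-standard set, 3 replicate locations each
conc = np.array([0.5, 1.0, 2.0, 4.0])    # element concentration, wt% (made up)
I = np.array([[120, 118, 122],
              [222, 220, 218],
              [420, 424, 416],
              [820, 818, 822]])           # raw line intensities, counts
bg = np.full_like(I, 20)                  # constant background estimate
slope, intercept = calibration_curve(I, bg, conc)
print(int(round(slope)))  # → 200
```

An unknown sample's concentration is then read off the curve as (I_corrected - intercept) / slope.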
Using modern analogues to reconstruct past landcover
NASA Astrophysics Data System (ADS)
Brewer, Simon
2016-04-01
The physical cover of the earth plays an important role in the earth system. It affects the climate through feedbacks such as albedo and surface roughness, forms part of the carbon cycle as both sink and source and is both affected by and can affect human societies. Reconstructing past changes in land use and land cover helps to understand how these interactions may have changed over time, and provides important boundary conditions for paleoclimate models. Pollen assemblages, extracted from sedimentary sequences, provide one of the most abundant sources of information about past changes in land cover over the Holocene period. However, the relationship between plant cover and sedimentary pollen abundance is complex and non-linear, being affected by differential dispersal, production and taxonomic resolution. One method to correct for this and provide quantified estimates of past land cover is to calibrate modern pollen assemblages against contemporary remotely sensed estimates of land cover. Results will be presented from developing such a calibration for a set of European modern pollen samples and AVHRR-based tree cover estimates. An emphasis will be placed on the output of validation tests of the calibration, and what this indicates for the predictive skill of this approach. The calibration will then be applied to a set of pollen sequences for the European continent for the past 11,000 years, and the patterns of reconstructed land cover will be discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennett, B. N.; Martin, M. Z.; Leonard, D. N.
Handheld laser-induced breakdown spectroscopy (HH LIBS) was used to study the elemental composition of four copper alloys and four aluminum alloys to produce calibration curves. The HH LIBS instrument used is a commercially available SciAps Z-500, which contains a class-1 solid-state laser with an output wavelength of 1532 nm, a laser energy of 5 mJ/pulse, and a pulse duration of 5 ns. Test samples were solid specimens comprising copper and aluminum alloys, and data were collected from the samples’ surface at three different locations, employing a 12-point-grid pattern for each data set. All three data sets of the spectra were averaged, and the intensity, corrected by subtraction of background, was used to produce the elemental calibration curves. Calibration curves are presented for the matrix elements, copper and aluminum, as well as several minor elements. The surface damage produced by the laser was examined by microscopy. The alloys were tested in air and in a glovebox to evaluate the instrument’s ability to identify the constituents within materials under different environmental conditions. The main objective of using this HH LIBS technology is to determine its capability to fingerprint the presence of certain elements at the subpercent level within materials in real time and in situ, as a starting point for undertaking future complex material characterization work.
Gas sampling system for reactive gas-solid mixtures
Daum, Edward D.; Downs, William; Jankura, Bryan J.; McCoury, Jr., John M.
1989-01-01
An apparatus and method for sampling a gas containing a reactive particulate solid phase flowing through a duct and for communicating a representative sample to a gas analyzer. A sample probe sheath 32 with an angular opening 34 extends vertically into a sample gas duct 30. The angular opening 34 faces opposite the gas flow. A gas sampling probe 36, concentrically located within sheath 32 along with calibration probe 40, partly extends into the sheath 32. Calibration probe 40 extends further into the sheath 32 than gas sampling probe 36 for purging the probe sheath area with a calibration gas during calibration.
Gas sampling system for reactive gas-solid mixtures
Daum, Edward D.; Downs, William; Jankura, Bryan J.; McCoury, Jr., John M.
1990-01-01
An apparatus and method for sampling gas containing a reactive particulate solid phase flowing through a duct and for communicating a representative sample to a gas analyzer. A sample probe sheath 32 with an angular opening 34 extends vertically into a sample gas duct 30. The angular opening 34 is opposite the gas flow. A gas sampling probe 36 concentrically located within sheath 32 along with calibration probe 40 partly extends in the sheath 32. Calibration probe 40 extends further in the sheath 32 than gas sampling probe 36 for purging the probe sheath area with a calibration gas during calibration.
NASA Astrophysics Data System (ADS)
Pool, Sandra; Viviroli, Daniel; Seibert, Jan
2017-11-01
Applications of runoff models usually rely on long and continuous runoff time series for model calibration. However, many catchments around the world are ungauged, and estimating runoff for these catchments is challenging. One approach is to perform a few runoff measurements in a previously fully ungauged catchment and to constrain a runoff model by these measurements. In this study we investigated the value of such individual runoff measurements, taken at strategic points in time, for applying a bucket-type runoff model (HBV) in ungauged catchments. Based on the assumption that only a limited number of runoff measurements can be taken, we sought the optimal sampling strategy (i.e., when to measure the streamflow) to obtain the most informative data for constraining the runoff model. We used twenty gauged catchments across the eastern US, treated them as if they were ungauged, and applied different runoff sampling strategies. All tested strategies consisted of twelve runoff measurements within one year and ranged from simply using monthly flow maxima to a more complex selection of observation times. In each case the twelve runoff measurements were used to select the 100 best parameter sets using a Monte Carlo calibration approach. Runoff simulations using these 'informed' parameter sets were then evaluated for an independent validation period in terms of the Nash-Sutcliffe efficiency of the hydrograph and the mean absolute relative error of the flow-duration curve. Model performance measures were normalized by relating them to an upper and a lower benchmark representing a well-informed and an uninformed model calibration, respectively. The hydrographs were best simulated with strategies that included high runoff magnitudes, whereas the flow-duration curves were generally better estimated with strategies that captured low and mean flows.
The choice of a sampling strategy covering the full range of runoff magnitudes enabled hydrograph and flow-duration curve simulations close to a well-informed model calibration. The differences among such strategies were small, indicating that the exact choice of strategy might be less crucial. Our study corroborates the information value of a small number of strategically selected runoff measurements for simulating runoff with a bucket-type runoff model in almost ungauged catchments.
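The evaluation described above combines Nash-Sutcliffe efficiency with a benchmark-based normalization. A minimal sketch of the two measures, using invented numbers rather than the study's catchment data:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than
    predicting the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def normalized_score(score, lower, upper):
    """Rescale a performance score so the uninformed (lower) benchmark
    maps to 0 and the well-informed (upper) benchmark maps to 1."""
    return (score - lower) / (upper - lower)

obs = np.array([1.0, 3.0, 2.0, 5.0, 4.0])   # illustrative runoff values
sim = np.array([1.2, 2.8, 2.1, 4.6, 4.3])
e = nse(obs, sim)
s = normalized_score(e, lower=0.2, upper=0.95)
```

A normalized score above 1 simply means the informed calibration outperformed the well-informed benchmark on that metric.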
NASA Astrophysics Data System (ADS)
Vaudour, E.; Gilliot, J. M.; Bel, L.; Lefevre, J.; Chehdi, K.
2016-07-01
This study aimed at identifying the potential of Vis-NIR airborne hyperspectral AISA-Eagle data for predicting the topsoil organic carbon (SOC) content of bare cultivated soils over a large peri-urban area (221 km2) with both contrasted soils and SOC contents, located in the western region of Paris, France. Soil types comprised haplic luvisols, calcaric cambisols and colluvic cambisols. Airborne AISA-Eagle data (400-1000 nm, 126 bands) with 1 m resolution were acquired on 17 April 2013 over 13 tracks. Tracks were atmospherically corrected and then mosaicked at a 2 m resolution using a set of 24 synchronous field spectra of bare soils, black and white targets, and impervious surfaces. The land use identification system layer (RPG) of 2012 was used to mask non-agricultural areas; calculation and thresholding of NDVI from an atmospherically corrected SPOT image acquired the same day then made it possible to map agricultural fields with bare soil. A total of 101 sites, sampled either in 2013, in the 3 previous years, or in 2015, were identified as bare by means of this map. Predictions were made from the mosaicked AISA spectra, which were related to topsoil SOC contents by means of partial least squares regression (PLSR). Regression robustness was evaluated through a series of 1000 bootstrap data sets of calibration-validation samples, considering only the 74 sites outside cloud shadows, and different sampling strategies for selecting calibration samples. Validation root-mean-square errors (RMSE) ranged between 3.73 and 4.49 g kg-1, with a median of ∼4 g kg-1. The best-performing models in terms of coefficient of determination (R2) and Residual Prediction Deviation (RPD) values were the calibration models derived either from Kennard-Stone or conditioned Latin Hypercube sampling on smoothed spectra.
The most generalizable model, yielding the lowest RMSE values (3.73 g kg-1 at the regional scale and 1.44 g kg-1 at the within-field scale) and low bias, was the leave-one-out cross-validated PLSR model constructed with the 28 near-synchronous samples and raw spectra.
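The bootstrap evaluation of calibration robustness can be sketched in a few lines. The snippet below is a hypothetical stand-in: a univariate ordinary-least-squares fit replaces the PLSR model and the "spectral index" and SOC values are synthetic, but the resampling logic (refit on each bootstrap draw, validate on the out-of-bag samples, summarize the RMSE distribution) follows the design described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spectral index" vs SOC content in g/kg (illustrative only).
x = rng.uniform(0, 1, 74)
y = 10.0 + 20.0 * x + rng.normal(0, 2.0, 74)

rmses = []
n = len(x)
for _ in range(1000):                        # 1000 bootstrap calibration sets
    idx = rng.integers(0, n, n)              # resample with replacement
    oob = np.setdiff1d(np.arange(n), idx)    # out-of-bag = validation samples
    if oob.size == 0:
        continue
    a, b = np.polyfit(x[idx], y[idx], 1)     # calibration fit (OLS stand-in)
    pred = a * x[oob] + b
    rmses.append(np.sqrt(np.mean((pred - y[oob]) ** 2)))

median_rmse = float(np.median(rmses))
```

With a true residual noise of 2 g/kg, the median out-of-bag RMSE lands near that value, mirroring how the study summarizes validation RMSE over its 1000 bootstrap sets.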
Tao, Lin-Li; Yang, Xiu-Juan; Deng, Jun-Ming; Zhang, Xi
2013-11-01
In contrast to conventional methods for the determination of meat chemical composition, near infrared reflectance spectroscopy enables rapid, simple, secure and simultaneous assessment of numerous meat properties. The present review focuses on the use of near infrared reflectance spectroscopy to predict meat chemical compositions. The potential of near infrared reflectance spectroscopy to predict crude protein, intramuscular fat, fatty acid, moisture, ash, myoglobin and collagen of beef, pork, chicken and lamb is reviewed, and open questions in the current research, together with their likely causes, are discussed. Although published results vary considerably, they suggest that near-infrared reflectance spectroscopy shows great potential to replace the expensive and time-consuming chemical analysis of meat composition. In particular, under commercial conditions where simultaneous measurements of different chemical components are required, near infrared reflectance spectroscopy is expected to be the method of choice. The majority of studies selected feature-related wavelengths using principal components regression, developed the calibration model using partial least squares or modified partial least squares, and estimated the prediction accuracy by means of cross-validation using the same sample set previously used for the calibration. Prediction of meat fatty acid composition by near-infrared spectroscopy, and non-destructive prediction and visualization of chemical composition in meat using near-infrared hyperspectral imaging and multivariate regression, are currently active research areas. On the other hand, the prediction performance of near infrared reflectance spectroscopy differs greatly across attributes of meat quality, and is closely related to the selection of the calibration sample set, the preprocessing of the near-infrared spectra and the modeling approach.
Sample preparation also has an important effect on the reliability of NIR prediction; in particular, lack of homogeneity of the meat samples influences the accuracy of estimation of chemical components. In general, predictions are most accurate for intramuscular fat, fatty acid and moisture, somewhat less accurate for crude protein and myoglobin, and least accurate for ash and collagen.
NASA Astrophysics Data System (ADS)
He, Zhihua; Vorogushyn, Sergiy; Unger-Shayesteh, Katy; Gafurov, Abror; Kalashnikova, Olga; Omorova, Elvira; Merz, Bruno
2018-03-01
This study refines the method for calibrating a glacio-hydrological model based on Hydrograph Partitioning Curves (HPCs), and evaluates its value in comparison to multidata set optimization approaches which use glacier mass balance, satellite snow cover images, and discharge. The HPCs are extracted from the observed flow hydrograph using catchment precipitation and temperature gradients. They indicate the periods when the various runoff processes, such as glacier melt or snow melt, dominate the basin hydrograph. The annual cumulative curve of the difference between average daily temperature and melt threshold temperature over the basin, as well as the annual cumulative curve of average daily snowfall on the glacierized areas are used to identify the starting and end dates of snow and glacier ablation periods. Model parameters characterizing different runoff processes are calibrated on different HPCs in a stepwise and iterative way. Results show that the HPC-based method (1) delivers model-internal consistency comparably to the tri-data set calibration method; (2) improves the stability of calibrated parameter values across various calibration periods; and (3) estimates the contributions of runoff components similarly to the tri-data set calibration method. Our findings indicate the potential of the HPC-based approach as an alternative for hydrological model calibration in glacierized basins where other calibration data sets than discharge are often not available or very costly to obtain.
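The HPC extraction relies on annual cumulative curves of temperature excess over a melt threshold to delimit ablation periods. A toy sketch of that idea, using an idealized sinusoidal temperature cycle rather than the study's basin data or gradients:

```python
import numpy as np

def ablation_window(daily_temp, melt_threshold=0.0):
    """Annual cumulative curve of max(T - T_melt, 0); melt-dominated
    periods are where the curve rises. Returns (start_day, end_day)
    of the rising limb as a crude proxy for the ablation period."""
    excess = np.asarray(daily_temp, float) - melt_threshold
    cum = np.cumsum(np.maximum(excess, 0.0))
    rising = np.flatnonzero(np.diff(cum) > 0)
    return int(rising[0]) + 1, int(rising[-1]) + 1

# Idealized annual cycle: cold shoulders, warm mid-year.
days = np.arange(365)
temp = -10.0 + 20.0 * np.sin(np.pi * days / 365.0)
start, end = ablation_window(temp)
```

On this idealized cycle the curve begins rising in early March (day 61) and flattens at the end of October (day 304), which is the kind of start/end date the HPC method reads off the real cumulative curves.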
SU-E-J-85: Leave-One-Out Perturbation (LOOP) Fitting Algorithm for Absolute Dose Film Calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chu, A; Ahmad, M; Chen, Z
2014-06-01
Purpose: To introduce an outlier-recognition fitting routine for film dosimetry. It is not only flexible with any linear and non-linear regression, but can also provide information on the minimal number of sampling points, critical sampling distributions, and the evaluation of analytical functions for absolute film-dose calibration. Methods: The technique, leave-one-out (LOO) cross validation, is often used for statistical analyses of model performance. We used LOO analyses with perturbed bootstrap fitting, called leave-one-out perturbation (LOOP), for film-dose calibration. Given a threshold, the LOO process detects unfit points (“outliers”) compared to other cohorts, and a bootstrap fitting process follows to seek any possibilities of using perturbations for further improvement. After that, outliers were reconfirmed by traditional t-test statistics and eliminated, and another LOOP feedback produced the final result. An over-sampled film-dose-calibration dataset was collected as a reference (dose range: 0-800 cGy), and various simulated conditions for outliers and sampling distributions were derived from the reference. Comparisons over the various conditions were made, and the performance of fitting functions, polynomial and rational, was evaluated. Results: (1) LOOP proves its sensitive outlier recognition by its statistical correlation to an exceptionally better goodness-of-fit as outliers are left out. (2) With sufficient statistical information, LOOP can correct outliers under some low-sampling conditions that other “robust fits”, e.g. Least Absolute Residuals, cannot. (3) Complete cross-validated analyses of LOOP indicate that the rational function demonstrates much superior performance compared to the polynomial. Even with 5 data points including one outlier, using LOOP with a rational function can restore more than 95% of the value back to its reference, while polynomial fitting completely failed under the same conditions.
Conclusion: LOOP can cooperate with any fitting routine, functioning as a “robust fit”. In addition, it can be set as a benchmark for film-dose calibration fitting performance.
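The core LOO outlier-recognition step can be illustrated with a small sketch. This is not the authors' LOOP algorithm: it uses a plain polynomial fit and a simple z-score threshold on leave-one-out residuals, with a synthetic dose-response curve and one injected outlier:

```python
import numpy as np

def loo_outliers(x, y, deg=2, z_thresh=2.0):
    """Leave-one-out residuals for a polynomial fit: each point is
    predicted by a model trained on all the others, and points whose
    LOO residual stands far from the cohort are flagged as outliers."""
    n = len(x)
    res = np.empty(n)
    for i in range(n):
        m = np.ones(n, bool)
        m[i] = False
        c = np.polyfit(x[m], y[m], deg)
        res[i] = y[i] - np.polyval(c, x[i])
    z = (res - res.mean()) / res.std()
    return np.flatnonzero(np.abs(z) > z_thresh)

dose = np.array([0., 100., 200., 300., 400., 500., 600., 700., 800.])
od = 0.001 * dose + 1e-6 * dose ** 2       # smooth "net optical density"
od_meas = od.copy()
od_meas[4] += 0.3                          # inject one outlier
flagged = loo_outliers(dose, od_meas)
```

Because the injected point is the only one whose leave-one-out residual is large, it dominates the z-scores and is flagged, which is the intuition behind LOOP's "exceptionally better goodness-of-fit as outliers are left out".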
Grelet, C; Bastin, C; Gelé, M; Davière, J-B; Johan, M; Werner, A; Reding, R; Fernandez Pierna, J A; Colinet, F G; Dardenne, P; Gengler, N; Soyeurt, H; Dehareng, F
2016-06-01
To manage negative energy balance and ketosis in dairy farms, rapid and cost-effective detection is needed. Among the milk biomarkers that could be useful for this purpose, acetone and β-hydroxybutyrate (BHB) have been proven to be molecules of interest regarding ketosis, and citrate was recently identified as an early indicator of negative energy balance. Because Fourier transform mid-infrared spectrometry can provide rapid and cost-effective predictions of milk composition, the objective of this study was to evaluate the ability of this technology to predict these biomarkers in milk. Milk samples were collected in commercial and experimental farms in Luxembourg, France, and Germany. Acetone, BHB, and citrate contents were determined by flow injection analysis. Milk mid-infrared spectra were recorded and standardized for all samples. After edits, a total of 548 samples were used in the calibration and validation data sets for acetone, 558 for BHB, and 506 for citrate. Acetone content ranged from 0.020 to 3.355 mmol/L with an average of 0.103 mmol/L; BHB content ranged from 0.045 to 1.596 mmol/L with an average of 0.215 mmol/L; and citrate content ranged from 3.88 to 16.12 mmol/L with an average of 9.04 mmol/L. Acetone and BHB contents were log-transformed, and a part of the samples with low values was randomly excluded to approach a normal distribution. The 3 edited data sets were then randomly divided into a calibration data set (3/4 of the samples) and a validation data set (1/4 of the samples). Prediction equations were developed using partial least squares regression. The coefficient of determination (R2) of cross-validation was 0.73 for acetone, 0.71 for BHB, and 0.90 for citrate, with root mean square errors of 0.248, 0.109, and 0.70 mmol/L, respectively. Finally, external validation was performed and the R2 values obtained were 0.67 for acetone, 0.63 for BHB, and 0.86 for citrate, with respective root mean square errors of validation of 0.196, 0.083, and 0.76 mmol/L.
Although the practical usefulness of the equations developed should be further verified with other field data, results from this study demonstrated the potential of Fourier transform mid-infrared spectrometry to predict citrate content with good accuracy and to supply indicative contents of BHB and acetone in milk, thereby providing rapid and cost-effective tools to manage ketosis and negative energy balance in dairy farms. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
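The modeling workflow above (log-transform skewed contents, split 3/4 calibration vs 1/4 validation, report R2 and RMSE) can be sketched with synthetic data. A univariate linear fit stands in for the partial least squares regression, and the "BHB" values and predictor are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic skewed "BHB content" and a correlated predictor (illustrative).
bhb = rng.lognormal(mean=-1.6, sigma=0.6, size=400)     # mmol/L, skewed
feature = np.log(bhb) + rng.normal(0, 0.3, 400)         # noisy spectral proxy

y = np.log(bhb)                      # log-transform to approach normality
k = int(0.75 * len(y))               # 3/4 calibration, 1/4 validation
idx = rng.permutation(len(y))
cal, val = idx[:k], idx[k:]

a, b = np.polyfit(feature[cal], y[cal], 1)       # calibration (PLS stand-in)
yhat = a * feature[val] + b
rmse = float(np.sqrt(np.mean((yhat - y[val]) ** 2)))
r2 = float(1 - np.sum((y[val] - yhat) ** 2)
             / np.sum((y[val] - y[val].mean()) ** 2))
```

With the assumed noise levels the external-validation R2 comes out near 0.8, the same order as the values reported for the milk biomarkers.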
ERIC Educational Resources Information Center
Sahin, Alper; Weiss, David J.
2015-01-01
This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…
Radiometric analysis of the longwave infrared channel of the Thematic Mapper on LANDSAT 4 and 5
NASA Technical Reports Server (NTRS)
Schott, John R.; Volchok, William J.; Biegel, Joseph D.
1986-01-01
The first objective was to evaluate the postlaunch radiometric calibration of the LANDSAT Thematic Mapper (TM) band 6 data. The second objective was to determine to what extent surface temperatures could be computed from the TM band 6 data using atmospheric propagation models. To accomplish this, ground truth data were compared to a single TM-4 band 6 data set. This comparison indicated satisfactory agreement over a narrow temperature range. The atmospheric propagation model (modified LOWTRAN 5A) was used to predict surface temperature values based on the radiance at the spacecraft. The aircraft data were calibrated using a multi-altitude profile calibration technique which had been extensively tested in previous studies. This aircraft calibration permitted measurement of surface temperatures based on the radiance reaching the aircraft. When these temperature values are evaluated, an error in the satellite's ability to predict surface temperatures can be estimated. This study indicated that, by carefully accounting for various sensor calibration and atmospheric propagation effects, the expected error (1 standard deviation) in surface temperature would be 0.9 K. This assumes no error in surface emissivity and no sampling error due to target location. These results indicate that the satellite calibration is within nominal limits, to within this study's ability to measure error.
NASA Astrophysics Data System (ADS)
Chang, Vivide Tuan-Chyan; Merisier, Delson; Yu, Bing; Walmer, David K.; Ramanujam, Nirmala
2011-03-01
A significant challenge in detecting cervical pre-cancer in low-resource settings is the lack of effective screening facilities and trained personnel to detect the disease before it is advanced. Light-based technologies, particularly quantitative optical spectroscopy, have the potential to provide an effective, low-cost, and portable solution for cervical pre-cancer screening in these communities. We have developed and characterized a portable USB-powered optical spectroscopic system to quantify total hemoglobin content, hemoglobin saturation, and the reduced scattering coefficient of cervical tissue in vivo. The system consists of a high-power LED as the light source, a bifurcated fiber-optic assembly, and two USB spectrometers for sample and calibration spectra acquisition. The system was subsequently tested in Leogane, Haiti, where diffuse reflectance spectra from 33 colposcopically normal sites in 21 patients were acquired. Two different calibration methods, i.e., a post-study diffuse reflectance standard measurement and a real-time self-calibration channel, were studied. Our results suggest that a self-calibration channel enabled more accurate extraction of scattering contrast through simultaneous real-time correction of intensity drifts in the system. A self-calibration system also minimizes operator bias and required training. Hence, future contact spectroscopy or imaging systems should incorporate a self-calibration channel to reliably extract scattering contrast.
NASA Astrophysics Data System (ADS)
Luo, L.
2011-12-01
Automated calibration of complex deterministic water quality models with a large number of biogeochemical parameters can reduce time-consuming iterative simulations involving empirical judgements of model fit. We undertook auto-calibration of the one-dimensional hydrodynamic-ecological lake model DYRESM-CAEDYM, using a Monte Carlo sampling (MCS) method, in order to test the applicability of this procedure for shallow, polymictic Lake Rotorua (New Zealand). The calibration procedure involved independently minimising the root-mean-square error (RMSE) and maximizing the Pearson correlation coefficient (r) and Nash-Sutcliffe efficiency coefficient (Nr) for comparisons of model state variables against measured data. An assigned number of parameter permutations was used for 10,000 simulation iterations. The 'optimal' temperature calibration produced an RMSE of 0.54 °C, an Nr-value of 0.99 and an r-value of 0.98 through the whole water column, based on comparisons with 540 observed water temperatures collected between 13 July 2007 and 13 January 2009. The modeled bottom dissolved oxygen concentration (20.5 m below surface) was compared with 467 available observations. The calculated RMSE of the simulations compared with the measurements was 1.78 mg L-1, the Nr-value was 0.75 and the r-value was 0.87. The auto-calibrated model was further tested on an independent data set by simulating bottom-water hypoxia events for the period 15 January 2009 to 8 June 2011 (875 days). This verification produced an accurate simulation of five hypoxic events corresponding to DO < 2 mg L-1 during the summers of 2009-2011. The RMSE was 2.07 mg L-1, the Nr-value 0.62 and the r-value 0.81, based on the available data set of 738 days. The auto-calibration software for DYRESM-CAEDYM developed here is substantially less time-consuming and more efficient in parameter optimisation than traditional manual calibration, which has been the standard approach for similar complex water quality models.
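The Monte Carlo calibration loop itself is simple to illustrate. The sketch below replaces DYRESM-CAEDYM with a toy one-parameter decay model and invented observations, and keeps the parameter draw that minimizes RMSE over 10,000 iterations, matching the iteration count described above:

```python
import numpy as np

rng = np.random.default_rng(42)

def model(k, t):
    """Toy stand-in for a complex water quality model: first-order decay."""
    return 10.0 * np.exp(-k * t)

t = np.linspace(0, 10, 50)
obs = model(0.3, t) + rng.normal(0, 0.2, t.size)   # synthetic observations

best_k, best_rmse = None, np.inf
for _ in range(10000):                   # Monte Carlo sampling of parameters
    k = rng.uniform(0.01, 1.0)           # random draw from the prior range
    rmse = np.sqrt(np.mean((model(k, t) - obs) ** 2))
    if rmse < best_rmse:
        best_k, best_rmse = k, rmse
```

In practice r and the Nash-Sutcliffe coefficient would be tracked alongside RMSE, as the study does, but the draw-simulate-score-keep pattern is the same.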
Principal Component Noise Filtering for NAST-I Radiometric Calibration
NASA Technical Reports Server (NTRS)
Tian, Jialin; Smith, William L., Sr.
2011-01-01
The National Polar-orbiting Operational Environmental Satellite System (NPOESS) Airborne Sounder Testbed-Interferometer (NAST-I) instrument is a high-resolution scanning interferometer that measures emitted thermal radiation between 3.3 and 18 microns. The NAST-I radiometric calibration is achieved using internal blackbody calibration references at ambient and hot temperatures. In this paper, we introduce a refined calibration technique that utilizes a principal component (PC) noise filter to compensate for instrument distortions and artifacts and thereby further improve the absolute radiometric calibration accuracy. To test the procedure and estimate the PC filter noise performance, we form dependent and independent test samples using odd and even sets of blackbody spectra. To determine the optimal number of eigenvectors, the PC filter algorithm is applied to both dependent and independent blackbody spectra with a varying number of eigenvectors. The optimal number of PCs is selected so that the total root-mean-square (RMS) error is minimized. To estimate the filter noise performance, we examine four different scenarios: applying PC filtering to both dependent and independent datasets, applying PC filtering to dependent calibration data only, applying PC filtering to independent data only, and using no PC filters. The independent blackbody radiances are predicted for each case and comparisons are made. The results show a significant reduction in noise in the final calibrated radiances with the implementation of the PC filtering algorithm.
Digital dental photography. Part 6: camera settings.
Ahmad, I
2009-07-25
Once the appropriate camera and equipment have been purchased, the next considerations involve setting up and calibrating the equipment. This article provides details regarding depth of field, exposure, colour spaces and white balance calibration, concluding with a synopsis of camera settings for a standard dental set-up.
A miniature 48-channel pressure sensor module capable of in situ calibration
NASA Technical Reports Server (NTRS)
Gross, C.; Juanarena, D. B.
1977-01-01
A new high-data-rate pressure sensor module with in situ calibration capability has been developed by the Langley Research Center to help reduce energy consumption in wind-tunnel facilities without loss of measurement accuracy. The sensor module allows for nearly a two-order-of-magnitude increase in data rates over conventional electromechanically scanned pressure sampling techniques. This module consists of 16 solid-state pressure sensor chips and signal multiplexing electronics integrally mounted to a four-position pressure selector switch. One of the four positions of the pressure selector switch allows in situ calibration of the 16 pressure sensors; the other three positions allow 48 channels (three sets of 16) of pressure inputs to be measured by the sensors. The small size of the sensor module will allow mounting within many wind-tunnel models, thus eliminating long tube lengths and their corresponding slow pressure response.
Effect of Correlated Precision Errors on Uncertainty of a Subsonic Venturi Calibration
NASA Technical Reports Server (NTRS)
Hudson, S. T.; Bordelon, W. J., Jr.; Coleman, H. W.
1996-01-01
An uncertainty analysis performed in conjunction with the calibration of a subsonic venturi for use in a turbine test facility produced some unanticipated results that may have a significant impact in a variety of test situations. Precision uncertainty estimates using the preferred propagation techniques in the applicable American National Standards Institute/American Society of Mechanical Engineers standards were an order of magnitude larger than precision uncertainty estimates calculated directly from a sample of results (discharge coefficient) obtained at the same experimental set point. The differences were attributable to the effect of correlated precision errors, which previously have been considered negligible. An analysis explaining this phenomenon is presented. The article is not meant to document the venturi calibration, but rather to give a real example of results where correlated precision terms are important. The significance of the correlated precision terms could apply to many test situations.
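The effect of correlated precision errors can be demonstrated numerically: when two measurements share a common error source, propagating their standard deviations as if they were independent grossly overestimates the precision uncertainty of a result in which the common error cancels. A hypothetical differential-pressure example, with all numbers invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two measured quantities sharing a common (correlated) precision error,
# e.g. both read through the same transducer and reference.
n = 100000
common = rng.normal(0, 1.0, n)                 # shared error component
p1 = 100.0 + common + rng.normal(0, 0.2, n)    # independent noise on top
p2 = 50.0 + common + rng.normal(0, 0.2, n)

r = p1 - p2                                    # result: differential pressure

# Propagation assuming independence: s_r^2 = s1^2 + s2^2
s_indep = np.sqrt(p1.std() ** 2 + p2.std() ** 2)
# Direct precision estimate from the sample of results
s_direct = r.std()
```

Here `s_indep` is about five times `s_direct`, the same qualitative gap (propagated estimates far exceeding the directly computed scatter of the result) that the venturi calibration exposed.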
A rapid identification of four medicinal chrysanthemum varieties with near infrared spectroscopy.
Han, Bangxing; Yan, Hui; Chen, Cunwu; Yao, Houjun; Dai, Jun; Chen, Naifu
2014-07-01
For genuine medicinal materials among Chinese herbs, efficient, rapid, and precise identification is a central challenge in the study of Chinese herbal medicines. Chrysanthemum morifolium, used as a herb, has a long cultivation history in China, which has produced high-quality material and distinct varieties. Different chrysanthemum varieties differ in quality, chemical composition, functions, and application. Therefore, chrysanthemum varieties in the market demand precise identification to provide a reference for reasonable and correct application as genuine medicinal material. A total of 244 batches of chrysanthemum samples were randomly divided into a calibration set (160 batches) and a prediction set (84 batches). The near infrared diffuse reflectance spectra of the chrysanthemum varieties were preprocessed by first-order derivative (D1) and autoscaling, and a model was built with partial least squares (PLS). In this study of the identification of four chrysanthemum varieties, the accuracy rates in the calibration sets of Boju, Chuju, Hangju, and Gongju were respectively 100, 100, 98.65, and 96.67%, while the accuracy rates in the prediction sets were 100%, except 99.1% for Hangju. The research results demonstrate that qualitative analysis can be conducted by machine learning combined with near infrared spectroscopy (NIR), which provides a new method for rapid and noninvasive identification of chrysanthemum varieties.
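The preprocessing chain (first-order derivative followed by autoscaling) is easy to sketch. Here `np.gradient` stands in for whatever derivative implementation the authors used, and the "spectra" are synthetic smooth curves:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic smooth-ish "spectra": 160 calibration samples x 100 wavelengths.
spectra = rng.normal(0, 1, (160, 100)).cumsum(axis=1)

d1 = np.gradient(spectra, axis=1)             # first-derivative preprocessing
z = (d1 - d1.mean(axis=0)) / d1.std(axis=0)   # autoscaling per wavelength
```

After autoscaling, every wavelength channel has zero mean and unit variance across the calibration set, which is what puts all channels on an equal footing before the PLS step.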
A real-time freehand ultrasound calibration system with automatic accuracy feedback and control.
Chen, Thomas Kuiran; Thurston, Adrian D; Ellis, Randy E; Abolmaesumi, Purang
2009-01-01
This article describes a fully automatic, real-time, freehand ultrasound calibration system. The system was designed to be simple and sterilizable, intended for operating-room usage. The calibration system employed an automatic-error-retrieval and accuracy-control mechanism based on a set of ground-truth data. Extensive validations were conducted on a data set of 10,000 images in 50 independent calibration trials to thoroughly investigate the accuracy, robustness, and performance of the calibration system. On average, the calibration accuracy (measured in three-dimensional reconstruction error against a known ground truth) of all 50 trials was 0.66 mm. In addition, the calibration errors converged to submillimeter in 98% of all trials within 12.5 s on average. Overall, the calibration system was able to consistently, efficiently and robustly achieve high calibration accuracy with real-time performance.
Ozdemir, Durmus; Dinc, Erdal
2004-07-01
Simultaneous determination of binary mixtures of pyridoxine hydrochloride and thiamine hydrochloride in a vitamin combination using UV-visible spectrophotometry with classical least squares (CLS) and three newly developed genetic algorithm (GA) based multivariate calibration methods was demonstrated. The three genetic multivariate calibration methods are Genetic Classical Least Squares (GCLS), Genetic Inverse Least Squares (GILS) and Genetic Regression (GR). The sample data set contains the UV-visible spectra of 30 synthetic mixtures (8 to 40 microg/ml) of these vitamins and 10 tablets containing 250 mg of each vitamin. The spectra cover the range from 200 to 330 nm in 0.1 nm intervals. Several calibration models were built with the four methods for the two components. Overall, the standard error of calibration (SEC) and the standard error of prediction (SEP) for the synthetic data were in the range of <0.01 to 0.43 microg/ml for all four methods. The SEP values for the tablets were in the range of 2.91 to 11.51 mg/tablet. A comparison of the genetic-algorithm-selected wavelengths for each component using the GR method was also included.
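A genetic wavelength-selection loop of the GILS flavor can be sketched as follows. Everything here is synthetic and simplified (invented Gaussian component spectra, a coarse wavelength grid, a crude mutate-and-select loop, SEC as the fitness), so it illustrates the idea rather than reproducing the authors' algorithms:

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic two-component mixture spectra: A = c1*s1 + c2*s2 + noise.
wl = np.linspace(200, 330, 130)                  # coarse grid for brevity
s1 = np.exp(-((wl - 250) / 15.0) ** 2)           # assumed component spectra
s2 = np.exp(-((wl - 290) / 15.0) ** 2)
c = rng.uniform(8, 40, (30, 2))                  # 30 mixtures, microg/ml
A = c @ np.vstack([s1, s2]) + rng.normal(0, 0.01, (30, 130))

def sec(mask):
    """Standard error of calibration for component 1 from an inverse
    least squares model built on the selected wavelengths."""
    X = np.column_stack([A[:, mask], np.ones(30)])
    coef, *_ = np.linalg.lstsq(X, c[:, 0], rcond=None)
    return np.sqrt(np.mean((X @ coef - c[:, 0]) ** 2))

# Toy genetic loop: keep the fittest wavelength masks, then mutate them.
pop = rng.random((20, 130)) < 0.1                # initial random masks
for _ in range(30):
    fit = np.array([sec(m) if m.any() else np.inf for m in pop])
    best = pop[np.argsort(fit)[:5]]              # selection
    children = best[rng.integers(0, 5, 20)].copy()
    flips = rng.random(children.shape) < 0.02    # mutation
    pop = children ^ flips

best_sec = min(sec(m) for m in pop if m.any())
```

A real GA would add crossover and a validation split to guard against overfitting; this stripped-down version only shows how a population of wavelength masks is scored and evolved by SEC.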
Thin film surface treatments for lowering dust adhesion on Mars Rover calibration targets
NASA Astrophysics Data System (ADS)
Sabri, F.; Werhner, T.; Hoskins, J.; Schuerger, A. C.; Hobbs, A. M.; Barreto, J. A.; Britt, D.; Duran, R. A.
The current generation of calibration targets on the Mars Rovers serves as a color and radiometric reference for the panoramic camera. They consist of a transparent silicon-based polymer tinted with either color or grey-scale pigments and cast with a microscopically rough Lambertian surface for a diffuse reflectance pattern. This material has successfully withstood the harsh conditions existing on Mars. However, the inherent roughness of the Lambertian surface (relative to the particle size of the Martian airborne dust) and the tackiness of the polymer in the calibration targets have led to a serious dust accumulation problem. In this work, non-invasive thin film technology was successfully implemented in the design of future-generation calibration targets, leading to a significant reduction of dust adhesion and capture. The new design consists of a μm-thick interfacial layer capped with a nm-thick optically transparent layer of pure metal. The combination of these two additional layers is effective in burying the relatively rough Lambertian surface while maintaining the diffuse properties of the samples, which is central to correct operation as calibration targets. A set of these targets is scheduled for flight on the Mars Phoenix mission.
Ding, Haiquan; Lu, Qipeng; Gao, Hongzhi; Peng, Zhongqi
2014-01-01
To facilitate non-invasive diagnosis of anemia, dedicated equipment was developed and a non-invasive hemoglobin (HB) detection method based on a back-propagation artificial neural network (BP-ANN) was studied. In this paper, we combined a broadband light source composed of 9 LEDs with a grating spectrograph and a Si photodiode array to build a high-performance spectrophotometric system. Using this equipment, fingertip spectra of 109 volunteers were measured. To remove redundant information, principal component analysis (PCA) was applied to reduce the dimensionality of the collected spectra. The principal components of the spectra were then taken as inputs to the BP-ANN model. On this basis we obtained the optimal network structure, in which the node numbers of the input, hidden, and output layers were 9, 11, and 1, respectively. Calibration and correction sample sets were used to analyze the accuracy of non-invasive hemoglobin measurement, and a prediction sample set was used to test the adaptability of the model. The correlation coefficient of the network model established by this method is 0.94, and the standard errors of calibration, correction, and prediction are 11.29 g/L, 11.47 g/L, and 11.01 g/L, respectively. These results show good correlations between the spectra of the three sample sets and the actual hemoglobin levels, and that the model is robust. The developed spectrophotometric system thus has potential for non-invasive detection of HB levels using BP-ANN combined with PCA. PMID:24761296
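The PCA preprocessing step described above can be sketched as follows. The spectra here are random placeholders; only the shapes (109 samples, 9 principal-component scores feeding the network) follow the abstract:

```python
import numpy as np

# PCA via SVD: project centred spectra onto the leading principal components,
# producing the low-dimensional scores that would be fed to the BP-ANN.
rng = np.random.default_rng(0)
spectra = rng.normal(size=(109, 64))          # 109 volunteers x 64 spectral points (synthetic)

X = spectra - spectra.mean(axis=0)            # centre the data
U, s, Vt = np.linalg.svd(X, full_matrices=False)
n_pc = 9                                      # 9 input nodes in the reported network
scores = X @ Vt[:n_pc].T                      # PC scores -> BP-ANN inputs
explained = float((s[:n_pc] ** 2).sum() / (s ** 2).sum())  # variance retained
```

With real spectra (which are highly collinear), a small number of components would capture most of the variance, which is the rationale for this reduction step.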
NASA Technical Reports Server (NTRS)
Racette, Paul; Lang, Roger; Zhang, Zhao-Nan; Zacharias, David; Krebs, Carolyn A. (Technical Monitor)
2002-01-01
Radiometers must be periodically calibrated because the receiver response fluctuates. Many techniques exist to correct for the time-varying response of a radiometer receiver. An analytical technique has been developed that uses generalized least squares regression (LSR) to predict the performance of a wide variety of calibration algorithms. The total measurement uncertainty, including the uncertainty of the calibration, can be computed using LSR. The uncertainties of the calibration samples used in the regression are based on treating the receiver fluctuations as non-stationary processes. Signals originating from the different sources of emission are treated as simultaneously existing random processes; the radiometer output is thus a series of samples obtained from these random processes. The samples are treated as random variables, but because the underlying processes are non-stationary, the statistics of the samples are treated as non-stationary as well. The statistics of the calibration samples depend upon the time for which the samples are to be applied: the statistics of the random variables are equated to the mean statistics of the non-stationary processes over the interval between the time of a calibration sample and the time it is applied. This analysis opens the opportunity for experimental investigation of the underlying properties of receiver non-stationarity through the use of multiple calibration references. In this presentation we discuss the application of LSR to the analysis of various calibration algorithms, requirements for experimental verification of the theory, and preliminary results from analyzing experimental measurements.
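As a rough illustration of regression-based radiometer calibration, a weighted least-squares fit of a linear counts-to-temperature transfer function from reference samples with unequal variances might look like the sketch below. All numbers are synthetic, and this simple two-reference fit is only a stand-in for the generalized LSR analysis described above:

```python
import numpy as np

# Calibration samples: receiver counts observed on two known references,
# with per-sample variances reflecting (non-stationary) receiver noise.
counts = np.array([1000.0, 1020.0, 2950.0, 3010.0])   # synthetic outputs
temps  = np.array([77.0, 77.0, 300.0, 300.0])         # reference temperatures, K
var    = np.array([4.0, 4.0, 9.0, 9.0])               # assumed per-sample variances

# Weighted least squares for T = gain * counts + offset
A = np.column_stack([counts, np.ones_like(counts)])
W = np.diag(1.0 / var)
gain, offset = np.linalg.solve(A.T @ W @ A, A.T @ W @ temps)
scene_temp = gain * 2000.0 + offset                   # calibrate an unknown scene reading
```

In the full formulation, the weight matrix would carry covariances derived from the non-stationary process statistics rather than a simple diagonal of sample variances.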
Chu, Byoung-Sun; Ngo, Thao P T; Cheng, Brian B; Dain, Stephen J
2014-07-01
The accuracy and precision of any instrument should not be taken for granted. While there is an international standard for checking focimeters, there is no report of any study on their performance. A sample set of 51 focimeters (11 brands) was used to measure the spherical power of a set of lenses, the prismatic power of two lenses complying with ISO 9342-1:2005 and of other calibrated prismatic lenses, and the spherical power of some grey filters. The mean measured spherical power corresponded very closely with the calibrated values; however, the spread of results was substantial and 10 focimeters did not comply with ISO 8598:1996. The measurement of prism was much more accurate and precise, and all the focimeters complied easily. With the grey filters, about one-third of the focimeters showed either erratic readings or an error with a filter equivalent to category 4 sunglasses. On the other hand, nine focimeters gave stable and accurate readings on a filter with a luminous transmittance of 0.5 per cent. These results confirm that, in common with all other measurement instruments, there is a need to ensure that a focimeter reads accurately and precisely over its range of refractive powers and luminous transmittances. The accurate and precise performance of an automated focimeter over its working life cannot be assumed. Checking before purchase with a set of calibrated lenses and some dark sunglass tints will indicate the suitability of a focimeter, and routine checking with the calibrated lenses will tell users whether a focimeter continues to read accurately. © 2014 The Authors. Clinical and Experimental Optometry © 2014 Optometrists Association Australia.
Near infrared spectroscopy (NIRS) for on-line determination of quality parameters in intact olives.
Salguero-Chaparro, Lourdes; Baeten, Vincent; Fernández-Pierna, Juan A; Peña-Rodríguez, Francisco
2013-08-15
The acidity, moisture and fat content of intact olive fruits were determined on-line using a NIR diode-array instrument operating over a conveyor belt. Four sets of calibration models were obtained from different combinations of samples collected during the 2009-2010 and 2010-2011 seasons, using full cross-validation and external validation. Several preprocessing treatments, such as derivatives and scatter correction, were investigated using the root mean square errors of cross-validation (RMSECV) and of prediction (RMSEP) as control parameters. The results showed RMSECV values of 2.54-3.26 for moisture, 2.35-2.71 for fat content and 2.50-3.26 for acidity, depending on the calibration model developed. Calibrations for moisture, fat content and acidity gave residual predictive deviation (RPD) values of 2.76, 2.37 and 1.60, respectively. Although the on-line NIRS prediction results were acceptable for the three parameters measured in moving intact olive samples, the models must be improved to increase their accuracy before final NIRS implementation at mills. Copyright © 2013 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maddalena, Randy; Parra, Amanda; Russell, Marion
Diffusive or passive sampling methods using commercially filled axial-sampling thermal desorption tubes are widely used for measuring volatile organic compounds (VOCs) in air. The passive sampling method provides a robust, cost-effective way to measure air quality with time-averaged concentrations spanning up to a week or more. Sampling rates for VOCs can be calculated from tube geometry and Fick's law assuming ideal diffusion behavior, or measured experimentally. There is evidence that uptake rates deviate from ideal behavior and may not be constant over time; experimentally measured sampling rates are therefore preferred. In this project, a calibration chamber with a continuous stirred-tank reactor design and a constant VOC source was combined with active sampling to generate a controlled dynamic calibration environment for passive samplers. The chamber air was augmented with a continuous source of 45 VOCs ranging from pentane to diethyl phthalate, representing a variety of chemical classes and physicochemical properties. Both passive and active samples were collected on commercially filled Tenax TA thermal desorption tubes over an 11-day period and used to calculate passive sampling rates. A second experiment was designed to determine the impact of ozone on passive sampling by using the calibration chamber to passively load five terpenes onto a set of Tenax tubes and then exposing the tubes to different ozone environments with and without ozone scrubbers attached to the tube inlets. During the sampling-rate experiment, the measured diffusive uptake was constant for up to seven days for most of the VOCs tested but deviated from linearity for some of the more volatile compounds between seven and eleven days. In the ozone experiment, both exposed and unexposed tubes showed a similar decline in terpene mass over time, indicating back-diffusion when uncapped tubes were transferred to a clean environment, but there was no indication of significant loss by ozone reaction.
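The ideal Fick's-law sampling rate mentioned above can be computed from tube geometry. The dimensions, diffusion coefficient and air concentration below are illustrative assumptions, not values from the study:

```python
import math

# Ideal diffusive uptake rate for an axial sampler: U = D * A / L,
# where D is the analyte's diffusion coefficient in air, A the tube
# cross-sectional area, and L the diffusion path length.
D = 3.0                        # diffusion coefficient, cm^2/min (hypothetical)
d = 0.5                        # tube inner diameter, cm (hypothetical)
L = 1.5                        # diffusion path length, cm (hypothetical)
A = math.pi * (d / 2.0) ** 2   # cross-sectional area, cm^2
U = D * A / L                  # uptake rate, cm^3/min

# Mass collected from a constant air concentration over a deployment:
C = 1e-11                      # g/cm^3 (equivalent to 10 ug/m^3)
t = 7 * 24 * 60                # 7-day deployment, minutes
mass_ng = U * C * t * 1e9      # collected mass, ng
```

An experimentally measured rate replaces U with mass recovered per unit of (concentration x time) from chamber exposures like those described, which is why constancy of uptake over the deployment matters.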
Internal calibration on adjacent samples (InCAS) with Fourier transform mass spectrometry.
O'Connor, P B; Costello, C E
2000-12-15
Using matrix-assisted laser desorption/ionization (MALDI) on a trapped-ion mass spectrometer such as a Fourier transform mass spectrometer (FTMS) allows accumulation of ions in the cell from multiple laser shots prior to detection. If ions from separate MALDI samples are accumulated simultaneously in the cell, ions from one sample can be used to calibrate ions from the other. Since the ions are detected simultaneously in the cell, this is, in effect, internal calibration, but there are no selective desorption effects in the MALDI source. This method of internal calibration on adjacent samples is demonstrated here on cesium iodide clusters, peptides, oligosaccharides, poly(propylene glycol), and fullerenes, and provides typical FTMS internal-calibration mass accuracy of <1 ppm.
NASA Astrophysics Data System (ADS)
Pavlovic, J.; Kinsey, J. S.; Hays, M. D.
2014-09-01
Thermal-optical analysis (TOA) is a widely used technique that fractionates carbonaceous aerosol particles into organic and elemental carbon (OC and EC), or carbonate. Thermal sub-fractions of evolved OC and EC are also used for source identification and apportionment; thus, oven temperature accuracy during TOA is essential. Evidence now indicates that the "actual" sample (filter) temperature and the temperature measured by the built-in oven thermocouple (or set-point temperature) can differ by as much as 50 °C. This difference can affect the OC-EC split point selection and consequently the OC and EC fraction and sub-fraction concentrations being reported, depending on the sample composition and the in-use TOA method and instrument. The present study systematically investigates the influence of an oven temperature calibration procedure for TOA. A dual-optical carbon analyzer that simultaneously measures transmission and reflectance (TOT and TOR) is used, functioning under the conditions of both the National Institute of Occupational Safety and Health Method 5040 (NIOSH) and Interagency Monitoring of Protected Visual Environments (IMPROVE) protocols. The application of the oven calibration procedure to our dual-optics instrument significantly changed the NIOSH 5040 carbon fractions (OC and EC) and the IMPROVE OC fraction. In addition, the well-known OC-EC split difference between the NIOSH and IMPROVE methods is perturbed even further following the instrument calibration. Further study is needed to determine whether widespread application of this oven temperature calibration procedure will indeed improve accuracy and our ability to compare among carbonaceous aerosol studies that use TOA.
Cernuda, Carlos; Lughofer, Edwin; Klein, Helmut; Forster, Clemens; Pawliczek, Marcin; Brandstetter, Markus
2017-01-01
During the production process of beer, it is of utmost importance to guarantee high consistency of the beer quality. For instance, bitterness is an essential quality parameter which has to be controlled within specifications at the beginning of the production process in the unfermented beer (wort) as well as in final products such as beer and beer mix beverages. Nowadays, analytical techniques for quality control in beer production are mainly based on manual supervision, i.e., samples are taken from the process and analyzed in the laboratory. This typically requires significant lab-technician effort for only a small fraction of samples to be analyzed, which leads to significant costs for breweries. Fourier transform mid-infrared (FT-MIR) spectroscopy was used in combination with nonlinear multivariate calibration techniques to overcome (i) the time-consuming off-line analyses in beer production and (ii) known limitations of standard linear chemometric methods, such as partial least squares (PLS), for important quality parameters such as bitterness, citric acid, total acids, free amino nitrogen, final attenuation, or foam stability (Speers et al., J. Inst. Brewing 2003;109(3):229-235; Zhang et al., J. Inst. Brewing 2012;118(4):361-367). The calibration models are established with enhanced nonlinear techniques based (i) on a new piecewise-linear version of PLS that employs fuzzy rules for locally partitioning the latent variable space and (ii) on extensions of support vector regression variants (ε-PLSSVR and ν-PLSSVR) that overcome high computation times in high-dimensional problems and time-intensive, inappropriate settings of the kernel parameters. Furthermore, we introduce a new model selection scheme based on bagged ensembles in order to improve robustness and thus the predictive quality of the final models.
The approaches are tested on real-world calibration data sets for wort and beer mix beverages and compared to linear methods, showing a clear out-performance in most cases and meeting the model quality requirements defined by the experts at the beer company. (Figure: workflow for calibration of nonlinear model ensembles from FT-MIR spectra in beer production.)
Investigation of archaeological metal artefacts by laser-induced breakdown spectroscopy (LIBS)
NASA Astrophysics Data System (ADS)
Tankova, V.; Malcheva, G.; Blagoev, K.; Leshtakov, L.
2018-03-01
In this work, laser-induced breakdown spectroscopy was applied to determine the elemental composition of a set of ancient bronze artefacts dated to the Late Bronze Age and Early Iron Age (14th-10th century BC). We used a Nd:YAG laser at 1064 nm with a pulse duration of 10 ns and an energy of 10 mJ, and determined the elemental composition of the bronze alloy used in manufacturing the samples under study. The concentrations of tin and lead in the bulk of the examined materials were estimated from calibration curves generated with a set of four standard samples. The preliminary results of the analysis will provide information on the artefacts' provenance and on the production process.
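The calibration-curve step can be sketched as an ordinary least-squares line through a handful of standards, inverted to estimate the concentration in an unknown sample. The intensities and concentrations below are made up, not the study's measurements:

```python
# Linear calibration curve: emission-line intensity vs. known Sn concentration
# for four standards (illustrative values only).
standards = [(2.0, 120.0), (5.0, 290.0), (8.0, 455.0), (12.0, 690.0)]  # (% Sn, intensity)

n = len(standards)
sx = sum(c for c, _ in standards)
sy = sum(i for _, i in standards)
sxx = sum(c * c for c, _ in standards)
sxy = sum(c * i for c, i in standards)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # least-squares slope
intercept = (sy - slope * sx) / n

# Invert the curve to estimate the concentration in an unknown sample
intensity_unknown = 380.0
sn_percent = (intensity_unknown - intercept) / slope
```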
An airborne sunphotometer for use with helicopters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walthall, C.L.; Halthore, R.N.; Elman, G.C.
1996-04-01
One solution for atmospheric correction and calibration of remotely sensed data from airborne platforms is the use of radiometrically calibrated instruments, sunphotometers and an atmospheric radiative transfer model. Sunphotometers are used to measure the direct solar irradiance at the level at which they are operating, and the data are used in the computation of atmospheric optical depth. Atmospheric optical depth is an input to atmospheric correction algorithms that convert at-sensor radiance to required surface properties such as reflectance and temperature. Airborne sun photometry has thus far seen limited use and has not been used with a helicopter platform. The hardware, software, calibration and deployment of an automatic sun-tracking sunphotometer specifically designed for use on a helicopter are described. Sample data sets taken with the system during the 1994 Boreal Ecosystem and Atmosphere Study (BOREAS) are presented. The addition of the sunphotometer to the helicopter system adds another tool for monitoring the environment and makes the helicopter remote sensing system capable of collecting calibrated, atmospherically corrected data independent of the need for measurements from other systems.
NASA Astrophysics Data System (ADS)
Nischkauer, Winfried; Vanhaecke, Frank; Bernacchi, Sébastien; Herwig, Christoph; Limbeck, Andreas
2014-11-01
Nebulising liquid samples and using the aerosol thus obtained for further analysis is the standard method in many current analytical techniques, including inductively coupled plasma (ICP)-based devices. With such a set-up, quantification via external calibration is usually straightforward for samples with aqueous or close-to-aqueous matrix composition. However, there is a variety of more complex samples. Such samples arise in medical, biological, technological and industrial contexts and range from body fluids, like blood or urine, to fuel additives or fermentation broths. Specialized nebulizer systems or careful digestion and dilution are required to tackle such demanding sample matrices. One alternative approach is to convert the liquid into a dried solid and to use laser ablation for sample introduction. Up to now, this approach required internal standards or matrix-adjusted calibration because of matrix effects. In this contribution, we show a way to circumvent these matrix effects while using simple external calibration for quantification. The representative sampling principle that we propose uses radial line-scans across the dried residue, which compensates for the centro-symmetric inhomogeneities typically observed in dried spots. The effectiveness of the proposed sampling strategy is exemplified via the determination of phosphorus in biochemical fermentation media, although the measurement protocol should be generally applicable. Detection limits using laser ablation ICP-optical emission spectrometry were on the order of 40 μg/mL, with a reproducibility of 10% relative standard deviation (n = 4, concentration = 10 times the quantification limit). The reported sensitivity is fit for purpose in the biochemical context described here, but could be improved using ICP-mass spectrometry should future analytical tasks require it.
Trueness of the proposed method was investigated by cross-validation against conventional liquid measurements and by analyzing the IAEA-153 reference material (Trace Elements in Milk Powder); good agreement with the certified value for phosphorus was obtained.
A new statistical distance scale for planetary nebulae
NASA Astrophysics Data System (ADS)
Ali, Alaa; Ismail, H. A.; Alsolami, Z.
2015-05-01
In the first part of this article we discuss the consistency among different individual distance methods for Galactic planetary nebulae, while in the second part we develop a new statistical distance scale based on a calibrating sample of well-determined distances. A set of 315 planetary nebulae with individual distances was extracted from the literature. Inspection of the data set indicates that the accuracy of the distances varies among the individual methods and also among sources where the same individual method was applied. We therefore derive a reliable weighted mean distance for each object by considering the distance error and the weight of each individual method. The results reveal that the individual methods are consistent with each other, except for the gravity method, which produces larger distances than the other individual methods. From the initial data set, we construct a standard calibrating sample consisting of 82 objects. This sample is restricted to objects with distances determined by at least two different individual methods, except for a few objects with trusted distances determined from the trigonometric, spectroscopic, and cluster-membership methods. In addition to its well-determined distances, this sample has several advantages over those used in prior distance scales. It is used to recalibrate the mass-radius and radio surface brightness temperature-radius relationships. An average error of ~30% is estimated for the new distance scale. The new scale is compared with the most widely used statistical scales in the literature, and the results show that it is roughly similar to the majority of them, within a ~±20% difference. Furthermore, the new scale yields a weighted mean distance to the Galactic center of 7.6±1.35 kpc, in good agreement with the recent measurement of Malkin (2013).
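The error-weighted mean distance described above can be sketched with inverse-variance weights. The distances and errors below are invented, not catalogue values:

```python
# Error-weighted mean distance for one nebula, combining several individual
# methods; weights are the inverse squares of the quoted errors.
distances = [1.10, 1.35, 0.95]     # kpc, e.g. trigonometric, spectroscopic, extinction
errors    = [0.10, 0.30, 0.20]     # kpc, quoted 1-sigma errors (hypothetical)

weights = [1.0 / e ** 2 for e in errors]
d_mean = sum(w * d for w, d in zip(weights, distances)) / sum(weights)
d_err  = (1.0 / sum(weights)) ** 0.5   # standard error of the weighted mean
```

The weighted mean is pulled toward the most precise method, and its formal error is smaller than any individual error, which is the motivation for combining methods before calibrating the statistical scale.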
Chatterjee, Nilanjan; Chen, Yi-Hau; Maas, Paige; Carroll, Raymond J.
2016-01-01
Information from various public and private data sources of extremely large sample size is now increasingly available for research purposes. Statistical methods are needed for utilizing information from such big data sources while analyzing data from individual studies that may collect the more detailed information required for addressing specific hypotheses of interest. In this article, we consider the problem of building regression models based on individual-level data from an “internal” study while utilizing summary-level information, such as information on parameters for reduced models, from an “external” big data source. We identify a set of very general constraints that link internal and external models. These constraints are used to develop a framework for semiparametric maximum likelihood inference that allows the distribution of covariates to be estimated using either the internal sample or an external reference sample. We develop extensions for handling complex stratified sampling designs, such as case-control sampling, for the internal study. Asymptotic theory and variance estimators are developed for each case. We use simulation studies and a real data application to assess the performance of the proposed methods in contrast to the generalized regression (GR) calibration methodology that is popular in the sample survey literature. PMID:27570323
McDonald, Linda S; Panozzo, Joseph F; Salisbury, Phillip A; Ford, Rebecca
2016-01-01
Field peas (Pisum sativum L.) are generally traded based on seed appearance, which subjectively defines broad market grades. In this study, we developed an objective Linear Discriminant Analysis (LDA) model to classify market grades of field peas based on seed colour, shape and size traits extracted from digital images. Seeds were imaged in a high-throughput system consisting of a camera and laser positioned over a conveyor belt. Six colour-intensity digital images were captured (under 405, 470, 530, 590, 660 and 850 nm light) for each seed, and surface height was measured at each pixel by the laser. Colour, shape and size traits were compiled across all seeds in each sample to determine the median trait values. Defective and non-defective seed samples were used to calibrate and validate the model. Colour components alone were sufficient to correctly classify all non-defective seed samples into the correct market grades. Defective samples required a combination of colour, shape and size traits to achieve 87% and 77% accuracy in market-grade classification of the calibration and validation sample sets, respectively. Following these results, we used the same colour, shape and size traits to develop an LDA model which correctly classified over 97% of all validation samples as defective or non-defective.
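A minimal two-class Fisher LDA of the kind applied above might be sketched as follows. The three "colour/shape/size" features, class means and sample counts are synthetic placeholders, not the study's data:

```python
import numpy as np

# Two-class Fisher LDA: project onto w = S^-1 (mu1 - mu0) and threshold
# at the midpoint of the projected class means.
rng = np.random.default_rng(1)
X0 = rng.normal([0.2, 0.5, 1.0], 0.1, size=(40, 3))   # non-defective samples (synthetic)
X1 = rng.normal([0.5, 0.4, 1.3], 0.1, size=(40, 3))   # defective samples (synthetic)

mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
S = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)   # pooled scatter
w = np.linalg.solve(S, mu1 - mu0)                          # discriminant direction
threshold = w @ (mu0 + mu1) / 2

# Fraction of each class called "defective" by the rule x @ w > threshold
pred0 = float((X0 @ w > threshold).mean())
pred1 = float((X1 @ w > threshold).mean())
```

A multi-grade classifier, as used for market grades, generalises this to more than two classes via the multi-class LDA eigenproblem.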
NASA Technical Reports Server (NTRS)
Groot, J. S.
1990-01-01
In August 1989 the NASA/JPL airborne P/L/C-band DC-8 SAR participated in several remote sensing campaigns in Europe. Amongst other test sites, data were obtained over the Flevopolder test site in the Netherlands on 16 August. The Dutch X-band SLAR was flown on the same date and imaged parts of the same area as the SAR. To calibrate the two imaging radars, a set of 33 calibration devices was deployed; 16 trihedrals were used to calibrate part of the SLAR data. This short paper outlines the X-band SLAR characteristics, the experimental set-up and the calibration method used to calibrate the SLAR data. Finally, some preliminary results are given.
NASA Astrophysics Data System (ADS)
Alaoui, G.; Leger, M.; Gagne, J.; Tremblay, L.
2009-05-01
The goal of this work was to evaluate the capability of infrared reflectance spectroscopy for fast quantification of the elemental and molecular composition of sedimentary and particulate organic matter (OM). A partial least-squares (PLS) regression model was used for analysis, and values were compared to those obtained by traditional methods (i.e., elemental, humic and HPLC analyses). PLS tools are readily accessible in software such as GRAMS (Thermo-Fisher) used in spectroscopy. This spectroscopic-chemometric approach has several advantages, including its rapidity and its use of whole, unaltered samples. To predict properties, a set of infrared spectra from representative samples must first be fitted to form a PLS calibration model. In this study, a large set (180) of sediments and particles on GFF filters from the St. Lawrence estuarine system was used. These samples are very heterogeneous (e.g., various tributaries, terrigenous vs. marine sources, events such as landslides and floods) and thus represent a challenging test for PLS prediction. For sediments, the infrared spectra were obtained with a diffuse reflectance, or DRIFT, accessory. Sedimentary carbon, nitrogen and humic substance contents, as well as humic substance proportions in OM and N:C ratios, were predicted by PLS. The relative root mean square errors of prediction (%RMSEP) for these properties were between 5.7% (humin content) and 14.1% (total humic substance yield) using the cross-validation, or leave-one-out, approach. The %RMSEP for carbon content was lower with the PLS model (7.6%) than with an external calibration method (11.7%) (Tremblay and Gagné, 2002, Anal. Chem., 74, 2985). Moreover, the PLS approach does not require the extraction of POM needed in external calibration. Results highlighted the importance of using a PLS calibration set representative of the unknown samples (e.g., from the same area).
For filtered particles, the infrared spectra were obtained using a novel approach based on attenuated total reflectance, or ATR, allowing the direct analysis of the filters. In addition to carbon and nitrogen contents, amino acid and muramic acid (a bacterial biomarker) yields were predicted using PLS. Calculated %RMSEP varied from 6.4% (total amino acid content) to 18.6% (muramic acid content) with cross-validation. PLS regression modeling does not require a priori knowledge of the spectral bands associated with the properties to be predicted. In turn, the spectral regions that give good PLS predictions provided valuable information on band assignment and geochemical processes. For instance, nitrogen and humin contents were largely determined by an absorption band caused by aluminosilicate OH groups. This supports the idea that OM-clay interactions, important in humin formation and OM preservation, are mediated by nitrogen-containing groups.
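The leave-one-out %RMSEP used above as a figure of merit can be sketched as follows, with a plain least-squares line standing in for the PLS model and synthetic property values in place of the spectra:

```python
# Leave-one-out cross-validated relative RMSE of prediction (%RMSEP):
# refit the model with each sample held out, predict it, and express the
# RMSE as a percentage of the mean observed value.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]          # surrogate predictor (synthetic)
y = [2.1, 3.9, 6.2, 8.0, 9.8, 12.3]          # surrogate property (synthetic)

sq_err = 0.0
for i in range(len(x)):
    xs = x[:i] + x[i + 1:]                   # leave sample i out
    ys = y[:i] + y[i + 1:]
    a, b = fit_line(xs, ys)
    sq_err += (y[i] - (a * x[i] + b)) ** 2   # error on the held-out sample

rmsecv = (sq_err / len(x)) ** 0.5
pct_rmsep = 100.0 * rmsecv / (sum(y) / len(y))
```

In the real workflow, `fit_line` would be replaced by a PLS fit on the held-out-sample-excluded spectra, but the cross-validation bookkeeping is identical.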
Parameter Estimation with Small Sample Size: A Higher-Order IRT Model Approach
ERIC Educational Resources Information Center
de la Torre, Jimmy; Hong, Yuan
2010-01-01
Sample size ranks as one of the most important factors that affect the item calibration task. However, due to practical concerns (e.g., item exposure) items are typically calibrated with much smaller samples than what is desired. To address the need for a more flexible framework that can be used in small sample item calibration, this article…
Detection of Unexpected High Correlations between Balance Calibration Loads and Load Residuals
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2014-01-01
An algorithm was developed for the assessment of strain-gage balance calibration data that makes it possible to systematically investigate potential sources of unexpected high correlations between calibration load residuals and applied calibration loads. The algorithm investigates correlations on a load-series by load-series basis. The linear correlation coefficient is used to quantify the correlations; it is computed for all possible pairs of calibration load residuals and applied calibration loads that can be constructed from the given balance calibration data set. An unexpected high correlation between a load residual and a load is detected if three conditions are met: (i) the absolute value of the correlation coefficient of the residual/load pair exceeds 0.95; (ii) the maximum of the absolute values of the residuals of the load series exceeds 0.25% of the load capacity; (iii) the load component of the load series is intentionally applied. Data from a baseline calibration of a six-component force balance are used to illustrate the application of the detection algorithm to a real-world data set. This analysis also showed that the detection algorithm can identify load alignment errors as long as the balance calibration data set contains repeat load series that do not suffer from load alignment problems.
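The three-condition detection rule can be sketched directly. The load series below is synthetic, chosen so that the residuals are nearly proportional to the applied load:

```python
# Flag a residual/load pair when (i) |r| > 0.95, (ii) the largest residual
# exceeds 0.25 % of the load capacity, and (iii) the load component was
# intentionally applied during the series.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def flag_pair(loads, residuals, capacity, intentionally_applied):
    r = pearson_r(loads, residuals)
    max_resid = max(abs(v) for v in residuals)
    return (abs(r) > 0.95
            and max_resid > 0.0025 * capacity
            and intentionally_applied)

loads = [0.0, 100.0, 200.0, 300.0, 400.0]      # applied loads (synthetic)
residuals = [0.0, 0.31, 0.58, 0.92, 1.24]      # residuals tracking the load
print(flag_pair(loads, residuals, capacity=400.0, intentionally_applied=True))  # True
```

Condition (ii) prevents flagging load series whose residuals are highly correlated with the load but negligibly small, and condition (iii) excludes components that were never deliberately loaded.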
Global Sensitivity Analysis and Parameter Calibration for an Ecosystem Carbon Model
NASA Astrophysics Data System (ADS)
Safta, C.; Ricciuto, D. M.; Sargsyan, K.; Najm, H. N.; Debusschere, B.; Thornton, P. E.
2013-12-01
We present uncertainty quantification results for a process-based ecosystem carbon model. The model employs 18 parameters and is driven by meteorological data corresponding to years 1992-2006 at the Harvard Forest site. Daily net ecosystem exchange (NEE) observations were available to calibrate the model parameters and test the performance of the model. Posterior distributions show good predictive capabilities for the calibrated model. A global sensitivity analysis was first performed to determine the important model parameters based on their contribution to the variance of NEE. We then calibrated the model parameters in a Bayesian framework. The daily discrepancies between measured and predicted NEE values were modeled as independent and identically distributed Gaussians with prescribed daily variance according to the recorded instrument error. All model parameters were assumed to have uninformative priors with bounds set according to expert opinion. The global sensitivity results show that the rate of leaf fall (LEAFALL) is responsible for approximately 25% of the total variance in the average NEE for 1992-2005. A set of 4 other parameters, nitrogen use efficiency (NUE), base rate for maintenance respiration (BR_MR), growth respiration fraction (RG_FRAC), and allocation to plant stem pool (ASTEM), contribute between 5% and 12% to the variance in average NEE, while the remaining parameters have smaller contributions. The posterior distributions, sampled with a Markov chain Monte Carlo algorithm, exhibit significant correlations between model parameters. However, LEAFALL, the most important parameter for the average NEE, is not informed by the observational data, while less important parameters show significant updates between their prior and posterior densities.
The Fisher information matrix values, indicating which parameters are most informed by the experimental observations, are examined to augment the comparison between the calibration and global sensitivity analysis results.
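As an illustration of the Bayesian calibration step described above, the sketch below uses a toy exponential model in place of the actual 18-parameter ecosystem model (the model form, bounds, and step sizes are assumptions for illustration): an iid Gaussian likelihood with prescribed observation variance, bounded uniform priors, and a random-walk Metropolis sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the ecosystem model: NEE ~ a*exp(-b*t). This is a
# hypothetical form for illustration, not the process model from the paper.
def model(theta, t):
    a, b = theta
    return a * np.exp(-b * t)

t = np.linspace(0.0, 10.0, 50)
true_theta = np.array([2.0, 0.3])
sigma = 0.1  # prescribed observation error (instrument error in the paper)
obs = model(true_theta, t) + rng.normal(0.0, sigma, t.size)

bounds = np.array([[0.0, 5.0], [0.0, 1.0]])  # uniform (uninformative) priors

def log_post(theta):
    # log-posterior: flat prior inside bounds + iid Gaussian likelihood
    if np.any(theta < bounds[:, 0]) or np.any(theta > bounds[:, 1]):
        return -np.inf  # outside prior support
    r = obs - model(theta, t)
    return -0.5 * np.sum(r**2) / sigma**2

# Random-walk Metropolis sampler
theta = np.array([1.0, 0.5])
lp = log_post(theta)
chain = []
for _ in range(5000):
    prop = theta + rng.normal(0.0, [0.05, 0.01])  # proposal step sizes assumed
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain)

post_mean = chain[2500:].mean(axis=0)  # posterior mean after burn-in
```

The posterior samples in `chain` can then be used to inspect parameter correlations, exactly as the abstract describes for the full model.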
Conway, Thomas [NOAA Climate Monitoring and Diagnostics Laboratory, Boulder, CO (USA); Tans, Pieter [NOAA Climate Monitoring and Diagnostics Laboratory, Boulder, CO (USA)
2009-01-01
The National Oceanic and Atmospheric Administration's Climate Monitoring and Diagnostics Laboratory (NOAA/CMDL) has measured CO2 in air samples collected weekly at a global network of sites since the late 1960s. Atmospheric CO2 mixing ratios reported in these files were measured by a nondispersive infrared absorption technique in air samples collected in glass flasks. All CMDL flask samples are measured relative to standards traceable to the World Meteorological Organization (WMO) CO2 mole fraction scale. These measurements constitute the most geographically extensive, carefully calibrated, internally consistent atmospheric CO2 data set available and are essential for studies aimed at better understanding the global carbon cycle budget.
Requirements for Calibration in Noninvasive Glucose Monitoring by Raman Spectroscopy
Lipson, Jan; Bernhardt, Jeff; Block, Ueyn; Freeman, William R.; Hofmeister, Rudy; Hristakeva, Maya; Lenosky, Thomas; McNamara, Robert; Petrasek, Danny; Veltkamp, David; Waydo, Stephen
2009-01-01
Background In the development of noninvasive glucose monitoring technology, it is highly desirable to derive a calibration that relies on neither person-dependent calibration information nor supplementary calibration points furnished by an existing invasive measurement technique (universal calibration). Method By appropriate experimental design and associated analytical methods, we establish the sufficiency of multiple factors required to permit such a calibration. Factors considered are the discrimination of the measurement technique, stabilization of the experimental apparatus, physics–physiology-based measurement techniques for normalization, the sufficiency of the size of the data set, and appropriate exit criteria to establish the predictive value of the algorithm. Results For noninvasive glucose measurements, using Raman spectroscopy, the sufficiency of the scale of data was demonstrated by adding new data into an existing calibration algorithm and requiring that (a) the prediction error should be preserved or improved without significant re-optimization, (b) the complexity of the model for optimum estimation not rise with the addition of subjects, and (c) the estimation for persons whose data were removed entirely from the training set should be no worse than the estimates on the remainder of the population. Using these criteria, we established guidelines empirically for the number of subjects (30) and skin sites (387) for a preliminary universal calibration. We obtained a median absolute relative difference for our entire data set of 30 mg/dl, with 92% of the data in the Clarke A and B ranges. Conclusions Because Raman spectroscopy has high discrimination for glucose, a data set of practical dimensions appears to be sufficient for universal calibration. 
Improvements based on reducing the variance of blood perfusion are expected to reduce the prediction errors substantially, and the inclusion of supplementary calibration points for the wearable device under development will be permissible and beneficial. PMID:20144354
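The headline error metric above can be computed in a few lines; the exact convention (absolute difference in mg/dl vs. percentage relative difference) is not fully specified in the abstract, so both variants are sketched here as assumptions:

```python
import numpy as np

def median_abs_diff(pred, ref):
    # Median absolute difference, in measurement units (e.g. mg/dl)
    return float(np.median(np.abs(np.asarray(pred, float) - np.asarray(ref, float))))

def mard_percent(pred, ref):
    # Median absolute relative difference, expressed as a percentage
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    return float(np.median(np.abs(pred - ref) / ref) * 100.0)
```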
NASA Astrophysics Data System (ADS)
Li, Wenlong; Cheng, Zhiwei; Wang, Yuefei; Qu, Haibin
2013-01-01
In this paper we describe the strategy used in the development and validation of a near infrared spectroscopy method for the rapid determination of baicalin, chlorogenic acid, ursodeoxycholic acid (UDCA), chenodeoxycholic acid (CDCA), and the total solid contents (TSCs) in the Tanreqing injection. To increase the representativeness of the calibration sample set, a concentrating-diluting method was adopted to artificially prepare samples. Partial least squares regression (PLSR) was used to establish calibration models, with which the five quality indicators can be determined with satisfactory accuracy and repeatability. In addition, the slope/bias (S/B) method was used for model transfer between two different types of NIR instruments from the same manufacturer, which helps enlarge the application range of the established models. With the presented method, a great deal of time, effort, and money can be saved when large amounts of Tanreqing injection samples need to be analyzed in a relatively short period of time, which is of great significance to the traditional Chinese medicine (TCM) industries.
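A minimal sketch of the slope/bias (S/B) transfer idea: predictions from the master-instrument model on slave-instrument spectra are regressed against reference values, and the fitted line corrects subsequent predictions. Function names are illustrative, not from the paper:

```python
import numpy as np

def slope_bias_correction(y_pred_slave, y_ref):
    """Fit the S/B correction y_corrected = slope * y_pred + bias
    from master-model predictions on slave spectra vs. reference values."""
    slope, bias = np.polyfit(np.asarray(y_pred_slave, float),
                             np.asarray(y_ref, float), 1)
    return slope, bias

def apply_correction(y_pred, slope, bias):
    # Correct new predictions made with the master model on slave spectra
    return slope * np.asarray(y_pred, float) + bias
```

Unlike full standardization methods, S/B only adjusts the predicted values, so it needs no spectral transformation between instruments.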
Toropov, Andrey A; Toropova, Alla P; Raska, Ivan; Benfenati, Emilio
2010-04-01
Three different splits into the subtraining set (n = 22), the set of calibration (n = 21), and the test set (n = 12) of 55 antineoplastic agents have been examined. By the correlation balance of SMILES-based optimal descriptors quite satisfactory models for the octanol/water partition coefficient have been obtained on all three splits. The correlation balance is the optimization of a one-variable model with a target function that provides both the maximal values of the correlation coefficient for the subtraining and calibration set and the minimum of the difference between the above-mentioned correlation coefficients. Thus, the calibration set is a preliminary test set.
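One plausible form of the correlation-balance target function described above (the exact weighting used by the authors may differ) rewards high correlation on both the subtraining and calibration sets while penalizing their difference:

```python
import numpy as np

def pearson_r(x, y):
    # Pearson correlation coefficient between predictions and observations
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

def correlation_balance(pred_sub, obs_sub, pred_cal, obs_cal, weight=1.0):
    """Candidate target function: maximize correlation on both subsets
    while minimizing their difference. The weight is an assumption."""
    r_sub = pearson_r(pred_sub, obs_sub)
    r_cal = pearson_r(pred_cal, obs_cal)
    return r_sub + r_cal - weight * abs(r_sub - r_cal)
```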
Green, Michael V; Seidel, Jurgen; Choyke, Peter L; Jagoda, Elaine M
2017-10-01
We describe a simple fixture that can be added to the imaging bed of a small-animal PET scanner that allows for automated counting of multiple organ or tissue samples from mouse-sized animals and counting of injection syringes prior to administration of the radiotracer. The combination of imaging and counting capabilities in the same machine offers advantages in certain experimental settings. A polyethylene block of plastic, sculpted to mate with the animal imaging bed of a small-animal PET scanner, is machined to receive twelve 5-ml containers, each capable of holding an entire organ from a mouse-sized animal. In addition, a triangular cross-section slot is machined down the centerline of the block to secure injection syringes from 1 ml to 3 ml in size. The sample holder is scanned in PET whole-body mode to image all samples or in one bed position to image a filled injection syringe. Total radioactivity in each sample or syringe is determined from the reconstructed images of these objects using volume re-projection of the coronal images and a single region-of-interest for each. We tested the accuracy of this method by comparing PET estimates of sample and syringe activity with well counter and dose calibrator estimates of these same activities. PET and well counting of the same samples gave near identical results (in MBq: R2 = 0.99, slope = 0.99, intercept = 0.00 MBq). PET syringe and dose calibrator measurements of syringe activity in MBq were also similar (R2 = 0.99, slope = 0.99, intercept = -0.22 MBq). A small-animal PET scanner can be easily converted into a multi-sample and syringe counting device by the addition of a sample block constructed for that purpose. This capability, combined with live animal imaging, can improve efficiency and flexibility in certain experimental settings.
Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.
Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin
2015-02-01
To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
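The Pareto-frontier definition above translates directly into a non-dominated filter over candidate input sets. A minimal sketch, assuming lower goodness-of-fit values are better (e.g. distances to each calibration target):

```python
import numpy as np

def pareto_frontier(gof):
    """Return indices of input sets on the Pareto frontier.
    gof: (n_sets, n_targets) array of fit values, lower is better."""
    gof = np.asarray(gof, float)
    n = gof.shape[0]
    keep = []
    for i in range(n):
        dominated = False
        for j in range(n):
            # j dominates i if j fits every target at least as well
            # and at least one target strictly better
            if j != i and np.all(gof[j] <= gof[i]) and np.any(gof[j] < gof[i]):
                dominated = True
                break
        if not dominated:
            keep.append(i)
    return keep
```

Because no weights are involved, the frontier is invariant to how individual target fits would have been combined into a single score.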
Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach
Enns, Eva A.; Cipriano, Lauren E.; Simons, Cyrena T.; Kong, Chung Yin
2014-01-01
Background To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single “goodness-of-fit” (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. Methods We demonstrate the Pareto frontier approach in the calibration of two models: a simple, illustrative Markov model and a previously-published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to two possible weighted-sum GOF scoring systems, and compare the health economic conclusions arising from these different definitions of best-fitting. Results For the simple model, outcomes evaluated over the best-fitting input sets according to the two weighted-sum GOF schemes were virtually non-overlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95%CI: 72,500 – 87,600] vs. $139,700 [95%CI: 79,900 - 182,800] per QALY gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95%CI: 64,900 – 156,200] per QALY gained). The TAVR model yielded similar results. Conclusions Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. PMID:24799456
Alcaráz, Mirta R; Bortolato, Santiago A; Goicoechea, Héctor C; Olivieri, Alejandro C
2015-03-01
Matrix augmentation is regularly employed in extended multivariate curve resolution-alternating least-squares (MCR-ALS), as applied to analytical calibration based on second- and third-order data. However, this highly useful concept has almost no correspondence in parallel factor analysis (PARAFAC) of third-order data. In the present work, we propose a strategy to process third-order chromatographic data with matrix fluorescence detection, based on an Augmented PARAFAC model. The latter involves decomposition of a three-way data array augmented along the elution time mode with data for the calibration samples and for each of the test samples. A set of excitation-emission fluorescence matrices, measured at different chromatographic elution times for drinking water samples, containing three fluoroquinolones and uncalibrated interferences, were evaluated using this approach. Augmented PARAFAC exploits the second-order advantage, even in the presence of significant changes in chromatographic profiles from run to run. The obtained relative errors of prediction were ca. 10 % for ofloxacin, ciprofloxacin, and danofloxacin, with a significant enhancement in analytical figures of merit in comparison with previous reports. The results are compared with those furnished by MCR-ALS.
Ikeogu, Ugochukwu N; Davrieux, Fabrice; Dufour, Dominique; Ceballos, Hernan; Egesi, Chiedozie N; Jannink, Jean-Luc
2017-01-01
Portable Vis/NIRS instruments are flexible tools for fast and unbiased analyses of constituents with minimal sample preparation. This study developed calibration models for dry matter content (DMC) and carotenoids in fresh cassava roots using a portable Vis/NIRS system. We examined the effects of eight data pre-treatment combinations on calibration models and assessed calibrations on processed and intact root samples. We also compared Vis/NIRS-derived DMC to other phenotyping methods. The results of the study showed that the combination of standard normal variate and de-trend (SNVD) with a first derivative calculated on two data points and no smoothing (SNVD+1111) was adequate for a robust model. Calibration performance was higher with processed than with intact root samples for all the traits, although intact root models for some traits, especially total carotenoid content (TCC) (R2c = 96%, R2cv = 90%, RPD = 3.6 and SECV = 0.63), were sufficient for screening purposes. Using three key quality traits as templates, we developed models with processed fresh root samples. Robust calibrations were established for DMC (R2c = 99%, R2cv = 95%, RPD = 4.5 and SECV = 0.9), TCC (R2c = 99%, R2cv = 91%, RPD = 3.5 and SECV = 2.1) and all-trans β-carotene (ATBC) (R2c = 98%, R2cv = 91%, RPD = 3.5 and SECV = 1.6). Coefficients of determination on an independent validation set (R2p) for these traits were also satisfactory for ATBC (91%), TCC (88%) and DMC (80%). Compared to other methods, Vis/NIRS-derived DMC from both intact and processed roots had a much higher correlation (>0.95) with the reference oven-drying method than the specific-gravity-derived DMC did (0.49). There was equally a high correlation (0.94) between the intact and processed Vis/NIRS DMC. Therefore, the portable Vis/NIRS system could be employed for the rapid analysis of DMC and quantification of carotenoids in cassava for nutritional and breeding purposes.
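The SNVD pre-treatment named above combines standard normal variate scaling, de-trending, and a two-point first derivative. A simplified numpy sketch (the "1111" settings are WinISI-style derivative codes; the plain two-point difference below is a stand-in, not the exact implementation):

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: center and scale each spectrum (row)."""
    s = np.asarray(spectra, float)
    return (s - s.mean(axis=1, keepdims=True)) / s.std(axis=1, keepdims=True)

def detrend(spectra):
    """Remove a linear baseline trend from each spectrum."""
    s = np.asarray(spectra, float)
    x = np.arange(s.shape[1])
    out = np.empty_like(s)
    for i, row in enumerate(s):
        coef = np.polyfit(x, row, 1)          # fit linear baseline
        out[i] = row - np.polyval(coef, x)    # subtract it
    return out

def first_derivative(spectra):
    """First derivative over two adjacent points, no smoothing."""
    return np.diff(np.asarray(spectra, float), axis=1)
```

A full pipeline would chain these (e.g. `first_derivative(detrend(snv(X)))`) before fitting the calibration model.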
Boiret, Mathieu; Meunier, Loïc; Ginot, Yves-Michel
2011-02-20
A near infrared (NIR) method was developed for determination of tablet potency of active pharmaceutical ingredient (API) in a complex coated tablet matrix. The calibration set contained samples from laboratory and production scale batches. The reference values were obtained by high performance liquid chromatography (HPLC) and partial least squares (PLS) regression was used to establish a model. The model was challenged by calculating tablet potency of two external test sets. Root mean square errors of prediction were respectively equal to 2.0% and 2.7%. To use this model with a second spectrometer from the production field, a calibration transfer method called piecewise direct standardisation (PDS) was used. After the transfer, the root mean square error of prediction of the first test set was 2.4% compared to 4.0% without transferring the spectra. A statistical technique using bootstrap of PLS residuals was used to estimate confidence intervals of tablet potency calculations. This method requires an optimised PLS model, selection of the bootstrap number and determination of the risk. In the case of a chemical analysis, the tablet potency value will be included within the confidence interval calculated by the bootstrap method. An easy to use graphical interface was developed to easily determine if the predictions, surrounded by minimum and maximum values, are within the specifications defined by the regulatory organisation.
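The bootstrap-of-residuals interval can be sketched as resampling calibration residuals around a point prediction; this simple percentile form is an assumption for illustration, not necessarily the exact procedure used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_prediction_interval(y_pred, residuals, n_boot=2000, risk=0.05):
    """Percentile bootstrap interval around a point prediction.
    residuals: calibration residuals (reference - predicted) from the
    optimized model; risk: two-sided significance level."""
    residuals = np.asarray(residuals, float)
    # resample residuals with replacement and add them to the prediction
    boot = y_pred + rng.choice(residuals, size=n_boot, replace=True)
    lo, hi = np.quantile(boot, [risk / 2.0, 1.0 - risk / 2.0])
    return lo, hi
```

The "selection of the bootstrap number and determination of the risk" in the abstract correspond to `n_boot` and `risk` here.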
ERIC Educational Resources Information Center
Bol, Linda; Hacker, Douglas J.; Walck, Camilla C.; Nunnery, John A.
2012-01-01
A 2 x 2 factorial design was employed in a quasi-experiment to investigate the effects of guidelines in group or individual settings on the calibration accuracy and achievement of 82 high school biology students. Significant main effects indicated that calibration practice with guidelines and practice in group settings increased prediction and…
NASA Technical Reports Server (NTRS)
Ohring, G.; Wielicki, B.; Spencer, R.; Emery, B.; Datla, R.
2004-01-01
Measuring the small changes associated with long-term global climate change from space is a daunting task. To address these problems and recommend directions for improvements in satellite instrument calibration, some 75 scientists, including researchers who develop and analyze long-term data sets from satellites, experts in the field of satellite instrument calibration, and physicists working on state-of-the-art calibration sources and standards, met November 12-14, 2002 and discussed the issues. The workshop defined the absolute accuracies and long-term stabilities of global climate data sets that are needed to detect expected trends, translated these data set accuracies and stabilities to required satellite instrument accuracies and stabilities, and evaluated the ability of current observing systems to meet these requirements. The workshop's recommendations include a set of basic axioms or overarching principles that must guide high quality climate observations in general, and a roadmap for improving satellite instrument characterization, calibration, inter-calibration, and associated activities to meet the challenge of measuring global climate change. It is also recommended that a follow-up workshop be conducted to discuss implementation of the roadmap developed at this workshop.
Davrieux, Fabrice; Allal, François; Piombo, Georges; Kelly, Bokary; Okulo, John B; Thiam, Massamba; Diallo, Ousmane B; Bouvet, Jean-Marc
2010-07-14
The Shea tree (Vitellaria paradoxa) is a major tree species in African agroforestry systems. Butter extracted from its nuts offers an opportunity for sustainable development in Sudanian countries and an attractive potential for the food and cosmetics industries. The purpose of this study was to develop near-infrared spectroscopy (NIRS) calibrations to characterize Shea nut fat profiles. Powders prepared from nuts collected from 624 trees in five African countries (Senegal, Mali, Burkina Faso, Ghana and Uganda) were analyzed for moisture content, fat content using solvent extraction, and fatty acid profiles using gas chromatography. Results confirmed the differences between East and West African Shea nut fat composition: eastern nuts had significantly higher fat and oleic acid contents. Near infrared reflectance spectra were recorded for each sample. Ten percent of the samples were randomly selected for validation and the remaining samples used for calibration. For each constituent, calibration equations were developed using modified partial least squares (MPLS) regression. The equation performances were evaluated using the ratio of performance to deviation (RPDp) and R2p parameters, obtained by comparison of the validation set NIR predictions and corresponding laboratory values. Moisture (RPDp = 4.45; R2p = 0.95) and fat (RPDp = 5.6; R2p = 0.97) calibrations enabled accurate determination of these traits. NIR models for stearic (RPDp = 6.26; R2p = 0.98) and oleic (RPDp = 7.91; R2p = 0.99) acids were highly efficient and enabled sharp characterization of these two major Shea butter fatty acids. This study demonstrated the ability of near-infrared spectroscopy for high-throughput phenotyping of Shea nuts.
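The RPDp figure of merit used above is the standard deviation of the reference values divided by the standard error of prediction on the validation set. Definitions of SEP vary between authors; the RMSEP form below is an assumption:

```python
import numpy as np

def rpd(y_ref, y_pred):
    """Ratio of performance to deviation: SD of reference values divided
    by the standard error of prediction (taken here as the RMSEP)."""
    y_ref = np.asarray(y_ref, float)
    y_pred = np.asarray(y_pred, float)
    sep = np.sqrt(np.mean((y_ref - y_pred) ** 2))
    return float(np.std(y_ref, ddof=1) / sep)
```

As a rule of thumb, RPD values above roughly 3 are usually considered adequate for quantitative prediction, consistent with how the abstract interprets its values.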
Improving Calibration of the MBH-σ* Relation for AGN with the BRAVE Program
NASA Astrophysics Data System (ADS)
Batiste, Merida; Bentz, Misty C.; Manne-Nicholas, Emily; Raimundo, Sandra I.; Onken, Christopher A.; Vestergaard, Marianne; Bershady, Matthew A.
2017-01-01
The MBH - σ* relation for AGN, which relates the mass of the central supermassive black hole (MBH) to the bulge stellar velocity dispersion (σ*) of the host galaxy, is a powerful tool for studying the evolution of structure across cosmic time. Accurate calibration of this relation is essential, and much effort has been put into improving MBH determinations with this in mind. However, calibration remains difficult because many nearby AGN with secure MBH determinations are hosted by late-type galaxies, with significant kinematic substructure such as bars, disks and rings. Kinematic substructure is known to contaminate and bias σ* determinations from long-slit and single aperture spectroscopy, ultimately limiting the utility of the MBH - σ* relation, and hampering efforts to investigate morphological dependencies. Integral-field spectroscopy (IFS) can be used to map the two-dimensional kinematics, providing a method for measuring σ* absent some of the biases inherent in other methods, and giving a more complete picture of the spatial variations in the dynamics. We present the first set of results from the BRAVE program, the long-term goal of which is to use IFS to more accurately determine σ* for the calibrating sample of reverberation-mapped AGN. We present IFS kinematic maps for the sample of galaxies we have so far observed, which show clearly how spatial variation can impact σ* determinations from long-slit spectroscopy. We present a new fit to the MBH - σ* relation for the sample of 16 reverberation-mapped AGN for which we currently have σ* determinations from IFS, as well as a new determination of the virial scaling factor, f, for use with reverberation mapping.
Improved uncertainty quantification in nondestructive assay for nonproliferation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burr, Tom; Croft, Stephen; Jarman, Ken
2016-12-01
This paper illustrates methods to improve uncertainty quantification (UQ) for non-destructive assay (NDA) measurements used in nuclear nonproliferation. First, it is shown that current bottom-up UQ applied to calibration data is not always adequate, for three main reasons: (1) Because there are errors in both the predictors and the response, calibration involves a ratio of random quantities, and calibration data sets in NDA usually consist of only a modest number of samples (3-10); therefore, asymptotic approximations involving quantities needed for UQ such as means and variances are often not sufficiently accurate; (2) Common practice overlooks that calibration implies a partitioning of total error into random and systematic error; and (3) In many NDA applications, test items exhibit non-negligible departures in physical properties from calibration items, so model-based adjustments are used, but item-specific bias remains in some data. Therefore, improved bottom-up UQ using calibration data should predict the typical magnitude of item-specific bias, and the suggestion is to do so by including sources of item-specific bias in synthetic calibration data that is generated using a combination of modeling and real calibration data. Second, for measurements of the same nuclear material item by both the facility operator and international inspectors, current empirical (top-down) UQ is described for estimating operator and inspector systematic and random error variance components. A Bayesian alternative is introduced that easily accommodates constraints on variance components, and is more robust than current top-down methods to the underlying measurement error distributions.
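As a concrete moment-based baseline for the top-down variance-component estimation described above, Grubbs' classical estimator for paired operator/inspector measurements can be sketched as follows (this is a textbook alternative, not the paper's Bayesian method):

```python
import numpy as np

def grubbs_variances(x_operator, y_inspector):
    """Grubbs' estimator for paired measurements of the same items:
    the cross-covariance estimates the item-to-item variance, and what
    remains of each party's marginal variance is that party's
    random-error variance."""
    x = np.asarray(x_operator, float)
    y = np.asarray(y_inspector, float)
    cov = np.cov(x, y)            # 2x2 sample covariance matrix
    var_item = cov[0, 1]          # shared item variation
    var_x = cov[0, 0] - var_item  # operator random-error variance
    var_y = cov[1, 1] - var_item  # inspector random-error variance
    return var_item, var_x, var_y
```

With small paired samples these moment estimates can go negative, which is one motivation for the constrained Bayesian alternative the abstract introduces.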
NASA Astrophysics Data System (ADS)
Teixeira, Filipe; Melo, André; Cordeiro, M. Natália D. S.
2010-09-01
A linear least-squares methodology was used to determine the vibrational scaling factors for the X3LYP density functional. Uncertainties for these scaling factors were calculated according to the method devised by Irikura et al. [J. Phys. Chem. A 109, 8430 (2005)]. The calibration set was systematically partitioned according to several of its descriptors and the scaling factors for X3LYP were recalculated for each subset. The results show that the scaling factors are only significant up to the second digit, irrespective of the calibration set used. Furthermore, multivariate statistical analysis allowed us to conclude that the scaling factors and the associated uncertainties are independent of the size of the calibration set and strongly suggest the practical impossibility of obtaining vibrational scaling factors with more than two significant digits.
Teixeira, Filipe; Melo, André; Cordeiro, M Natália D S
2010-09-21
A linear least-squares methodology was used to determine the vibrational scaling factors for the X3LYP density functional. Uncertainties for these scaling factors were calculated according to the method devised by Irikura et al. [J. Phys. Chem. A 109, 8430 (2005)]. The calibration set was systematically partitioned according to several of its descriptors and the scaling factors for X3LYP were recalculated for each subset. The results show that the scaling factors are only significant up to the second digit, irrespective of the calibration set used. Furthermore, multivariate statistical analysis allowed us to conclude that the scaling factors and the associated uncertainties are independent of the size of the calibration set and strongly suggest the practical impossibility of obtaining vibrational scaling factors with more than two significant digits.
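The linear least-squares scaling-factor fit described above has a closed form; the uncertainty expression below follows the commonly quoted Irikura-style form, which is stated here as an assumption rather than a quote from the paper:

```python
import numpy as np

def scaling_factor(omega_calc, nu_exp):
    """Least-squares vibrational scaling factor c minimizing
    sum (c*omega_calc - nu_exp)^2, with a simple RMS-based
    uncertainty estimate."""
    w = np.asarray(omega_calc, float)  # harmonic frequencies from DFT
    v = np.asarray(nu_exp, float)      # experimental frequencies
    c = np.sum(w * v) / np.sum(w * w)  # closed-form least-squares solution
    u = np.sqrt(np.sum((v - c * w) ** 2) / np.sum(w * w))
    return c, u
```

With realistic data the uncertainty `u` lands near 0.01-0.05, which is consistent with the paper's conclusion that only two digits of the scaling factor are significant.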
Visible spectroscopy calibration transfer model in determining pH of Sala mangoes
NASA Astrophysics Data System (ADS)
Yahaya, O. K. M.; MatJafri, M. Z.; Aziz, A. A.; Omar, A. F.
2015-05-01
The purpose of this study is to compare the efficiency of calibration transfer procedures between three spectrometers, two Ocean Optics Inc. spectrometers, namely QE65000 and Jaz, and an ASD FieldSpec 3, in measuring the pH of Sala mango by visible reflectance spectroscopy. This study evaluates the ability of these spectrometers to measure the pH of Sala mango by applying similar calibration algorithms through direct calibration transfer. This visible reflectance spectroscopy technique defines one spectrometer as the master instrument and another as the slave. The multiple linear regression (MLR) calibration model generated using the QE65000 spectrometer is transferred to the Jaz spectrometer and vice versa for Set 1. The same technique is applied for Set 2, with the QE65000 spectrometer model transferred to the FieldSpec 3 spectrometer and vice versa. For Set 1, the result showed that the QE65000 spectrometer established a calibration model with higher accuracy than that of the Jaz spectrometer. In addition, the calibration model developed on the Jaz spectrometer successfully predicted the pH of Sala mango measured using the QE65000 spectrometer, with a root mean square error of prediction RMSEP = 0.092 pH and coefficient of determination R2 = 0.892. Moreover, the best prediction result was obtained for Set 2, when the calibration model developed on the QE65000 spectrometer was successfully transferred to the FieldSpec 3 with R2 = 0.839 and RMSEP = 0.16 pH.
Implications of Version 8 TOMS and SBUV Data for Long-Term Trend Analysis
NASA Technical Reports Server (NTRS)
Frith, Stacey M.
2004-01-01
Total ozone data from the Total Ozone Mapping Spectrometer (TOMS) and profile/total ozone data from the Solar Backscatter Ultraviolet (SBUV; SBUV/2) series of instruments have recently been reprocessed using new retrieval algorithms (referred to as Version 8 for both) and updated calibrations. In this paper, we incorporate the Version 8 data into a TOMS/SBUV merged total ozone data set and an SBUV merged profile ozone data set. The Total Merged Ozone Data (Total MOD) combines data from multiple TOMS and SBUV instruments to form an internally consistent global data set with virtually complete time coverage from October 1978 through December 2003. Calibration differences between instruments are accounted for using external adjustments based on instrument intercomparisons during overlap periods. Previous results showed errors due to aerosol loading and sea glint are significantly reduced in the V8 TOMS retrievals. Using SBUV as a transfer standard, calibration differences between V8 Nimbus 7 and Earth Probe TOMS data are approx. 1.3%, suggesting small errors in calibration remain. We will present updated total ozone long-term trends based on the Version 8 data. The Profile Merged Ozone Data (Profile MOD) data set is constructed using data from the SBUV series of instruments. In previous versions, SAGE data were used to establish the long-term external calibration of the combined data set. For SBUV Version 8, we assess the V8 profile data through comparisons with SAGE and between SBUV instruments in overlap periods. We then construct a consistently-calibrated long-term time series. Updated zonal mean trends as a function of altitude and season from the new profile data set will be shown, and uncertainties in determining the best long-term calibration will be discussed.
[Identification of Pummelo Cultivars Based on Hyperspectral Imaging Technology].
Li, Xun-lan; Yi, Shi-lai; He, Shao-lan; Lü, Qiang; Xie, Rang-jin; Zheng, Yong-qiang; Deng, Lie
2015-09-01
Existing methods for the identification of pummelo cultivars are usually time-consuming and costly, and are therefore inconvenient in cases where rapid identification is needed. This research was aimed at identifying different pummelo cultivars by hyperspectral imaging technology, which can achieve rapid and highly sensitive measurements. A total of 240 leaf samples, 60 for each of the four cultivars, were investigated. Samples were divided into two groups, a calibration set (48 samples of each cultivar) and a validation set (12 samples of each cultivar), by a Kennard-Stone-based algorithm. Hyperspectral images of both the adaxial and abaxial surfaces of each leaf were obtained and segmented into a region of interest (ROI) using a simple threshold. Spectra of leaf samples were extracted from the ROI. To avoid obvious noise in the spectra, only the data in the spectral range of 400~1000 nm were used for analysis. Multiplicative scatter correction (MSC) and standard normal variate (SNV) were utilized for data preprocessing. Principal component analysis (PCA) was used to extract the best principal components, and the successive projections algorithm (SPA) was used to extract the effective wavelengths. A least squares support vector machine (LS-SVM) was used to build the discrimination model of the four pummelo cultivars. To find the optimal values of σ2 and γ, which are important parameters in LS-SVM modeling, a grid-search technique with cross-validation was applied. The first 10 and 11 principal components were extracted by PCA from the hyperspectral data of the adaxial and abaxial surfaces, respectively. There were 31 and 21 effective wavelengths selected by SPA from the hyperspectral data of the adaxial and abaxial surfaces, respectively. The best principal components and the effective wavelengths were used as inputs of the LS-SVM models, and the PCA-LS-SVM and SPA-LS-SVM models were built.
The results showed that identification accuracies of 99.46% and 98.44% were achieved in the calibration set for the PCA-LS-SVM and SPA-LS-SVM models, respectively, and an identification accuracy of 95.83% was achieved in the validation set for both models built on the hyperspectral data of the adaxial surface. Comparatively, the PCA-LS-SVM and SPA-LS-SVM models built on the hyperspectral data of the abaxial surface both achieved identification accuracies of 100% for the calibration and validation sets. The overall results demonstrated that hyperspectral data of the adaxial and abaxial leaf surfaces, coupled with PCA-LS-SVM and SPA-LS-SVM models, could achieve an accurate identification of pummelo cultivars. It is feasible to use hyperspectral imaging technology to identify different pummelo cultivars, and it provides an alternative way of rapidly identifying them. Moreover, the results demonstrated that data from the abaxial surface of the leaf were more sensitive for identifying pummelo cultivars. This study provides a new method for the fast discrimination of pummelo cultivars.
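A minimal binary LS-SVM in the regression (function-estimation) formulation, solving the standard dual linear system with an RBF kernel, can be sketched as follows. This is a generic sketch, not the authors' multi-class PCA-LS-SVM pipeline, and the σ2/γ values are illustrative placeholders for the ones their grid search would select:

```python
import numpy as np

def rbf_kernel(A, B, sig2):
    # Gaussian (RBF) kernel matrix between rows of A and rows of B
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2.0 * sig2))

class LSSVM:
    """Minimal binary LS-SVM (labels +1/-1): solves the dual system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    def __init__(self, gamma=10.0, sig2=1.0):
        self.gamma, self.sig2 = gamma, sig2

    def fit(self, X, y):
        n = X.shape[0]
        K = rbf_kernel(X, X, self.sig2)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = K + np.eye(n) / self.gamma  # regularized kernel block
        rhs = np.concatenate(([0.0], y))
        sol = np.linalg.solve(A, rhs)
        self.b, self.alpha, self.X = sol[0], sol[1:], X
        return self

    def predict(self, Xnew):
        K = rbf_kernel(Xnew, self.X, self.sig2)
        return np.sign(K @ self.alpha + self.b)
```

Unlike a standard SVM, the LS-SVM replaces the quadratic program with a single linear solve, which is why only σ2 and γ need tuning.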
NASA Astrophysics Data System (ADS)
Verma, Shivcharan; Mohanty, Biraja P.; Singh, Karn P.; Kumar, Ashok
2018-02-01
The proton beam facility at the variable energy cyclotron (VEC), Panjab University, Chandigarh, India is being used for Particle Induced X-ray Emission (PIXE) analysis of different environmental, biological and industrial samples. The PIXE method, however, does not provide any information on low-Z elements such as carbon, nitrogen, oxygen and fluorine. As a result of the increased need for rapid and multi-elemental analysis of biological and environmental samples, the PIXE facility was upgraded and standardized to facilitate simultaneous measurements using PIXE and Proton Elastic Scattering Analysis (PESA). Both the PIXE and PESA techniques were calibrated and standardized individually. Finally, the set-up was tested by carrying out simultaneous PIXE and PESA measurements using a 2 mm diameter proton beam of 2.7 MeV on a few multilayered thin samples. The results obtained show excellent agreement between the PIXE and PESA measurements and confirm the adequate sensitivity and precision of the experimental set-up.
A new spectroscopic calibration to determine Teff and [Fe/H] of FGK dwarfs and giants
NASA Astrophysics Data System (ADS)
Teixeira, G. D. C.; Sousa, S. G.; Tsantaki, M.; Monteiro, M. J. P. F. G.; Santos, N. C.; Israelian, G.
2017-10-01
We present a new spectroscopic calibration for a fast estimate of Teff and [Fe/H] for FGK dwarfs and GK giant stars. We used spectra from a joint sample of 708 stars, composed of 451 FGK dwarfs and 257 GK giants with homogeneously determined spectroscopic stellar parameters. We derived 322 EW line-ratios and 100 FeI lines that can be used to compute Teff and [Fe/H], respectively. We show that these calibrations are effective for FGK dwarfs and GK giants in the following ranges: 4500 K < Teff < 6500 K, 2.5 < log g < 4.9 dex, and -0.8 < [Fe/H] < 0.5 dex. The new calibration has a standard deviation of 74 K for Teff and 0.07 dex for [Fe/H]. We use four independent samples of stars to test and verify the new calibration: a sample of giant stars, a sample composed of Gaia FGK benchmark stars, a sample of GK giants from DR1 of the Gaia-ESO survey, and a sample of FGK dwarfs. We present a new computer code, GeTCal, for automatically producing new calibration files based on any new sample of stars.
Shi, Jingjin; Chen, Fei'er; Cai, Yunfei; Fan, Shichen; Cai, Jing; Chen, Renjie; Kan, Haidong; Lu, Yihan; Zhao, Zhuohui
2017-01-01
Portable direct-reading instruments based on the light-scattering method are increasingly used in airborne fine particulate matter (PM2.5) monitoring. However, there are limited calibration studies on such instruments applying the gravimetric method as the reference method in field tests. An 8-month sampling campaign was performed and 96 pairs of PM2.5 data from both the gravimetric method and simultaneous light-scattering real-time monitoring (QT-50) were obtained from July 2015 to February 2016 in Shanghai. Temperature and relative humidity (RH) were recorded. The Mann-Whitney U nonparametric test and Spearman correlation were used to investigate the differences between the two measurements, and a multiple linear regression (MLR) model was applied to set up the calibration model for the light-scattering device. The average PM2.5 concentration (median) was 48.1 μg/m3 (range 10.4-95.8 μg/m3) by the gravimetric method and 58.1 μg/m3 (19.2-315.9 μg/m3) by the light-scattering method. By time-trend analysis, the two measurements were significantly correlated (Spearman correlation coefficient 0.889, P < 0.01). By MLR, the calibration model for the light-scattering instrument was Y(calibrated) = 57.45 + 0.47 × X(QT-50 measurement) - 0.53 × RH - 0.41 × Temp, with both RH and temperature adjusted. The 10-fold cross-validation R2 and root mean squared error of the calibration model were 0.79 and 11.43 μg/m3, respectively. Light-scattering measurements of PM2.5 by the QT-50 instrument overestimated the concentration levels and were affected by temperature and RH. The calibration model for the QT-50 instrument was thus set up for the first time against the gravimetric method, with temperature and RH adjusted.
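The reported MLR calibration equation can be applied directly as a one-line function. This is a sketch; the example reading, RH, and temperature values below are hypothetical.

```python
def calibrate_qt50(x_qt50, rh, temp):
    """Calibrate a QT-50 light-scattering PM2.5 reading (ug/m3) with the
    study's MLR model: Y = 57.45 + 0.47*X - 0.53*RH - 0.41*Temp."""
    return 57.45 + 0.47 * x_qt50 - 0.53 * rh - 0.41 * temp

# Hypothetical raw reading of 100 ug/m3 at 60% RH and 20 degrees C.
y = calibrate_qt50(100.0, 60.0, 20.0)
print(round(y, 2))  # 64.45
```

Note how the corrected value falls below the raw reading, consistent with the paper's finding that the light-scattering instrument overestimates PM2.5.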
Shi, Jingjin; Chen, Fei’er; Cai, Yunfei; Fan, Shichen; Cai, Jing; Chen, Renjie; Kan, Haidong; Lu, Yihan
2017-01-01
Background Portable direct-reading instruments based on the light-scattering method are increasingly used in airborne fine particulate matter (PM2.5) monitoring. However, there are limited calibration studies on such instruments applying the gravimetric method as the reference method in field tests. Methods An 8-month sampling campaign was performed and 96 pairs of PM2.5 data from both the gravimetric method and simultaneous light-scattering real-time monitoring (QT-50) were obtained from July 2015 to February 2016 in Shanghai. Temperature and relative humidity (RH) were recorded. The Mann-Whitney U nonparametric test and Spearman correlation were used to investigate the differences between the two measurements, and a multiple linear regression (MLR) model was applied to set up the calibration model for the light-scattering device. Results The average PM2.5 concentration (median) was 48.1 μg/m3 (range 10.4–95.8 μg/m3) by the gravimetric method and 58.1 μg/m3 (19.2–315.9 μg/m3) by the light-scattering method. By time-trend analysis, the two measurements were significantly correlated (Spearman correlation coefficient 0.889, P<0.01). By MLR, the calibration model for the light-scattering instrument was Y(calibrated) = 57.45 + 0.47 × X(QT-50 measurement) – 0.53 × RH – 0.41 × Temp, with both RH and temperature adjusted. The 10-fold cross-validation R2 and root mean squared error of the calibration model were 0.79 and 11.43 μg/m3, respectively. Conclusion Light-scattering measurements of PM2.5 by the QT-50 instrument overestimated the concentration levels and were affected by temperature and RH. The calibration model for the QT-50 instrument was thus set up for the first time against the gravimetric method, with temperature and RH adjusted. PMID:29121101
NASA Astrophysics Data System (ADS)
Roberts, S. J.; Foster, L. C.; Pearson, E. J.; Steve, J.; Hodgson, D.; Saunders, K. M.; Verleyen, E.
2016-12-01
Temperature calibration models based on the relative abundances of sedimentary glycerol dialkyl glycerol tetraethers (GDGTs) have been used to reconstruct past temperatures in both marine and terrestrial environments, but have not been widely applied in high latitude environments. This is mainly because the performance of GDGT-temperature calibrations at lower temperatures and GDGT provenance in many lacustrine settings remains uncertain. To address these issues, we examined surface sediments from 32 Antarctic, sub-Antarctic and Southern Chilean lakes. First, we quantified GDGT compositions present and then investigated modern-day environmental controls on GDGT composition. GDGTs were found in all 32 lakes studied. Branched GDGTs (brGDGTs) were dominant in 31 lakes and statistical analyses showed that their composition was strongly correlated with mean summer air temperature (MSAT) rather than pH, conductivity or water depth. Second, we developed the first regional brGDGT-temperature calibration for Antarctic and sub-Antarctic lakes based on four brGDGT compounds (GDGT-Ib, GDGT-II, GDGT-III and GDGT-IIIb). Of these, GDGT-IIIb proved particularly important in cold lacustrine environments. Our brGDGT-Antarctic temperature calibration dataset has an improved statistical performance at low temperatures compared to previous global calibrations (r2=0.83, RMSE=1.45°C, RMSEP-LOO=1.68°C, n=36 samples), highlighting the importance of basing palaeotemperature reconstructions on regional GDGT-temperature calibrations, especially if specific compounds lead to improved model performance. Finally, we applied the new Antarctic brGDGT-temperature calibration to two key lake records from the Antarctic Peninsula and South Georgia. In both, downcore temperature reconstructions show similarities to known Holocene warm periods, providing proof of concept for the new Antarctic calibration model.
USDA-ARS?s Scientific Manuscript database
Although many near infrared (NIR) spectrometric calibrations exist for a variety of components in soy, current calibration methods are often limited by either a small sample size on which the calibrations are based or a wide variation in sample preparation and measurement methods, which yields unrel...
New approach to calibrating bed load samplers
Hubbell, D.W.; Stevens, H.H.; Skinner, J.V.; Beverage, J.P.
1985-01-01
Cyclic variations in bed load discharge at a point, which are an inherent part of the process of bed load movement, complicate calibration of bed load samplers and preclude the use of average rates to define sampling efficiencies. Calibration curves, rather than efficiencies, are derived by two independent methods using data collected with prototype versions of the Helley‐Smith sampler in a large calibration facility capable of continuously measuring transport rates across a 9 ft (2.7 m) width. Results from both methods agree. Composite calibration curves, based on matching probability distribution functions of samples and measured rates from different hydraulic conditions (runs), are obtained for six different versions of the sampler. Sampled rates corrected by the calibration curves agree with measured rates for individual runs.
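The composite-calibration idea above, matching probability distribution functions of sampled and measured transport rates, can be sketched as a quantile-matching curve. This is a simplified illustration with synthetic rates, not the authors' procedure; the quantile grid and interpolation choice are assumptions.

```python
import numpy as np

def quantile_match_curve(sampled, measured, n_points=5):
    """Build a calibration curve by pairing empirical quantiles of the
    sampler's bed load rates with quantiles of the measured rates."""
    qs = np.linspace(0.1, 0.9, n_points)
    x = np.quantile(sampled, qs)   # sampler readings at each quantile
    y = np.quantile(measured, qs)  # flume-measured rates at each quantile
    return x, y

def correct_rate(raw, x, y):
    """Correct a raw sampled rate by linear interpolation on the curve."""
    return float(np.interp(raw, x, y))

# Synthetic check: if the flume consistently measures twice the sampled
# rate, the curve should double a raw reading within the curve's range.
sampled = np.arange(1.0, 101.0)
measured = 2.0 * sampled
x, y = quantile_match_curve(sampled, measured)
print(correct_rate(50.0, x, y))  # ~100.0
```

Matching distributions rather than averaging ratios is what lets the curve absorb the cyclic variations the abstract describes.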
Guerra, Heidi B; Park, Kisoo; Kim, Youngchul
2013-01-01
Because the hydrologic quantity and quality of stormwater runoff are highly variable, requiring more complex models for proper prediction of treatment, relatively few, site-specific models for stormwater wetlands have been developed. In this study, regression models based on extensive operational data from wastewater wetlands were adapted to a stormwater wetland receiving both base flow and storm flow from an agricultural area. The models were calibrated in Excel Solver using 15 sets of operational data gathered from random sampling during dry days. The calibrated models were then applied to 20 sets of event mean concentration data from composite sampling during 20 independent rainfall events. For dry days, the models estimated effluent concentrations of nitrogen species that were close to the measured values. However, overestimations during wet days were made for NH(3)-N and total Kjeldahl nitrogen, which resulted from higher hydraulic loading rates and influent nitrogen concentrations during storm flows. The results showed that biological nitrification and denitrification were the major nitrogen removal mechanisms during dry days. Meanwhile, during wet days, the prevailing aerobic conditions decreased the denitrification capacity of the wetland, while sedimentation of particulate organic nitrogen and particle-associated forms of nitrogen increased.
Kong, W W; Zhang, C; Liu, F; Gong, A P; He, Y
2013-08-01
The objective of this study was to examine the possibility of applying visible and near-infrared spectroscopy to the quantitative detection of the irradiation dose of irradiated milk powder. A total of 150 samples were used: 100 for the calibration set and 50 for the validation set. The samples were irradiated at 5 different dose levels in the range 0 to 6.0 kGy. Six different pretreatment methods were compared. The prediction results of full spectra given by linear and nonlinear calibration methods suggested that Savitzky-Golay smoothing and the first derivative were suitable pretreatment methods in this study. Regression coefficient analysis was applied to select effective wavelengths (EWs). Fewer than 10 EWs were selected, which is useful for developing portable detection instruments or sensors. Partial least squares, extreme learning machine, and least squares support vector machine models were used. The best prediction performance was achieved by the EW-extreme learning machine model with first-derivative spectra (correlation coefficient = 0.97, root mean square error of prediction = 0.844). This study provides a new approach for the fast detection of the irradiation dose of milk powder, and the results could be helpful for quality detection and safety monitoring of milk powder. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Reliable noninvasive measurement of blood gases
Thomas, Edward V.; Robinson, Mark R.; Haaland, David M.; Alam, Mary K.
1994-01-01
Methods and apparatus for, preferably, determining noninvasively and in vivo at least two of the five blood gas parameters (i.e., pH, PCO2, [HCO3-], PO2, and O2 sat.) in a human. The non-invasive method includes the steps of: generating light at three or more different wavelengths in the range of 500 nm to 2500 nm; irradiating blood-containing tissue; measuring the intensities of the wavelengths emerging from the blood-containing tissue to obtain a set of at least three spectral intensities v. wavelengths; and determining the unknown values of at least two of pH, [HCO3-], PCO2, and a measure of oxygen concentration. The determined values are within the physiological ranges observed in blood-containing tissue. The method also includes the steps of providing calibration samples, determining if the spectral intensities v. wavelengths from the tissue represent an outlier, and determining if any of the calibration samples represents an outlier. The determination of the unknown values is performed by at least one multivariate algorithm using two or more variables and at least one calibration model. Preferably, there is a separate calibration for each blood gas parameter being determined. The method can be utilized in a pulse mode and can also be used invasively. The apparatus includes a tissue positioning device, a source, at least one detector, electronics, a microprocessor, memory, and apparatus for indicating the determined values.
Quantification of trace metals in infant formula premixes using laser-induced breakdown spectroscopy
NASA Astrophysics Data System (ADS)
Cama-Moncunill, Raquel; Casado-Gavalda, Maria P.; Cama-Moncunill, Xavier; Markiewicz-Keszycka, Maria; Dixit, Yash; Cullen, Patrick J.; Sullivan, Carl
2017-09-01
Infant formula is a human milk substitute generally based on fortified cow milk components. To mimic the composition of breast milk, trace elements such as copper, iron and zinc are usually added in a single operation using a premix. The correct addition of premixes must be verified to ensure that the target levels in infant formulae are achieved. In this study, a laser-induced breakdown spectroscopy (LIBS) system was assessed as a fast validation tool for trace-element premixes. LIBS is a promising emission spectroscopic technique for elemental analysis that offers real-time analysis, little to no sample preparation and ease of use. LIBS was employed for copper and iron determinations of premix samples ranging approximately from 0 to 120 mg/kg Cu and from 0 to 1640 mg/kg Fe. LIBS spectra are affected by several parameters, hindering subsequent quantitative analysis. This work aimed to test three matrix-matched calibration approaches (simple linear regression, multi-linear regression and partial least squares regression (PLS)) as means of enhancing the precision and accuracy of LIBS quantitative analysis. All calibration models were first developed using a training set and then validated with an independent test set. PLS yielded the best results; for instance, the PLS model for copper provided a coefficient of determination (R2) of 0.995 and a root mean square error of prediction (RMSEP) of 14 mg/kg. Furthermore, LIBS was employed to penetrate through the samples by repeatedly measuring the same spot, so that LIBS spectra can be obtained as a function of sample layer. This information was used to explore whether measuring deeper into the sample could reduce possible surface-contaminant effects and provide better quantification.
Taylor, Alexander J; Granwehr, Josef; Lesbats, Clémentine; Krupa, James L; Six, Joseph S; Pavlovskaya, Galina E; Thomas, Neil R; Auer, Dorothee P; Meersmann, Thomas; Faas, Henryk M
2016-01-01
Due to the low fluorine background signal in vivo, 19F is a good marker for studying the fate of exogenous molecules by magnetic resonance imaging (MRI) using equilibrium nuclear spin polarization schemes. Since 19F MRI applications require high sensitivity, it can be important to assess experimental feasibility already at the design stage by estimating the minimum detectable fluorine concentration. Here we propose a simple method for the calibration of MRI hardware, providing sensitivity estimates for a given scanner and coil configuration. An experimental "calibration factor" accounting for variations in coil configuration and hardware set-up is specified. Once it has been determined in a calibration experiment, the sensitivity of an experiment, or alternatively the minimum number of required spins or the minimum marker concentration, can be estimated without the need for a pilot experiment. The definition of this calibration factor is derived from standard equations for the sensitivity in magnetic resonance, yet the method is not restricted by the limited validity of these equations, since additional instrument-dependent factors are implicitly included during calibration. The method is demonstrated using MR spectroscopy and imaging experiments with different 19F samples, both paramagnetically and susceptibility broadened, to approximate a range of realistic environments.
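In its simplest form, the calibration-factor idea reduces to treating SNR as proportional to the number of detectable spins, with all hardware-dependent factors folded into one empirically measured constant. The sketch below illustrates that reasoning with hypothetical numbers; it is not the paper's formula.

```python
def min_detectable_concentration(cal_snr, cal_conc, target_snr=3.0):
    """Estimate the minimum 19F concentration reaching target_snr, given
    the SNR measured at a known concentration in a calibration
    experiment, assuming SNR scales linearly with spin count."""
    k = cal_snr / cal_conc  # empirical calibration factor: SNR per unit concentration
    return target_snr / k

# Hypothetical calibration: SNR of 120 observed at 10 mM fluorine.
mdc = min_detectable_concentration(cal_snr=120.0, cal_conc=10.0)
print(mdc)  # 0.25 (mM)
```

Because the factor k is measured, not derived, instrument-dependent effects are absorbed into it, which is the point the abstract makes about the limited validity of the analytic sensitivity equations.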
Dynamic calibration approach for determining catechins and gallic acid in green tea using LC-ESI/MS.
Bedner, Mary; Duewer, David L
2011-08-15
Catechins and gallic acid are antioxidant constituents of Camellia sinensis, or green tea. Liquid chromatography with both ultraviolet (UV) absorbance and electrospray ionization mass spectrometric (ESI/MS) detection was used to determine catechins and gallic acid in three green tea matrix materials that are commonly used as dietary supplements. The results from both detection modes were evaluated with 14 quantitation models, all of which were based on the analyte response relative to an internal standard. Half of the models were static, where quantitation was achieved with calibration factors that were constant over an analysis set. The other half were dynamic, with calibration factors calculated from interpolated response factor data at each time a sample was injected to correct for potential variations in analyte response over time. For all analytes, the relatively nonselective UV responses were found to be very stable over time and independent of the calibrant concentration; comparable results with low variability were obtained regardless of the quantitation model used. Conversely, the highly selective MS responses were found to vary both with time and as a function of the calibrant concentration. A dynamic quantitation model based on polynomial data-fitting was used to reduce the variability in the quantitative results using the MS data.
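The dynamic quantitation scheme, fitting calibrant response factors against injection time and evaluating the fit at each sample's injection time, can be sketched as follows. The response-factor values and the polynomial degree here are illustrative assumptions, not the paper's data.

```python
import numpy as np

def dynamic_response_factor(cal_times, cal_rf, sample_time, degree=2):
    """Fit a polynomial to calibrant response factors vs. injection time,
    then evaluate it at a sample's injection time to obtain that
    sample's dynamic calibration factor."""
    coeffs = np.polyfit(cal_times, cal_rf, degree)
    return float(np.polyval(coeffs, sample_time))

# Hypothetical drifting ESI/MS response: RF decays over the run (hours).
cal_times = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
cal_rf = np.array([1.00, 0.95, 0.88, 0.84, 0.80])
rf_mid = dynamic_response_factor(cal_times, cal_rf, 5.0)
print(round(rf_mid, 3))  # falls between the bracketing calibrant RFs
```

A static model would instead use one constant factor for the whole run, which is exactly what the paper found inadequate for the drifting MS responses.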
Feldman, Betsy J.; Crane, Heidi M.; Mugavero, Michael; Willig, James H.; Patrick, Donald; Schumacher, Joseph; Saag, Michael; Kitahata, Mari M.; Crane, Paul K.
2011-01-01
Purpose We provide detailed instructions for analyzing patient-reported outcome (PRO) data collected with an existing (legacy) instrument so that scores can be calibrated to the PRO Measurement Information System (PROMIS) metric. This calibration facilitates migration to computerized adaptive test (CAT) PROMIS data collection, while facilitating research using historical legacy data alongside new PROMIS data. Methods A cross-sectional convenience sample (n = 2,178) from the Universities of Washington and Alabama at Birmingham HIV clinics completed the PROMIS short form and Patient Health Questionnaire (PHQ-9) depression symptom measures between August 2008 and December 2009. We calibrated the tests using item response theory. We compared measurement precision of the PHQ-9, the PROMIS short form, and simulated PROMIS CAT. Results Dimensionality analyses confirmed the PHQ-9 could be calibrated to the PROMIS metric. We provide code used to score the PHQ-9 on the PROMIS metric. The mean standard errors of measurement were 0.49 for the PHQ-9, 0.35 for the PROMIS short form, and 0.37, 0.28, and 0.27 for 3-, 8-, and 9-item-simulated CATs. Conclusions The strategy described here facilitated migration from a fixed-format legacy scale to PROMIS CAT administration and may be useful in other settings. PMID:21409516
Gibbons, Laura E; Feldman, Betsy J; Crane, Heidi M; Mugavero, Michael; Willig, James H; Patrick, Donald; Schumacher, Joseph; Saag, Michael; Kitahata, Mari M; Crane, Paul K
2011-11-01
We provide detailed instructions for analyzing patient-reported outcome (PRO) data collected with an existing (legacy) instrument so that scores can be calibrated to the PRO Measurement Information System (PROMIS) metric. This calibration facilitates migration to computerized adaptive test (CAT) PROMIS data collection, while facilitating research using historical legacy data alongside new PROMIS data. A cross-sectional convenience sample (n = 2,178) from the Universities of Washington and Alabama at Birmingham HIV clinics completed the PROMIS short form and Patient Health Questionnaire (PHQ-9) depression symptom measures between August 2008 and December 2009. We calibrated the tests using item response theory. We compared measurement precision of the PHQ-9, the PROMIS short form, and simulated PROMIS CAT. Dimensionality analyses confirmed the PHQ-9 could be calibrated to the PROMIS metric. We provide code used to score the PHQ-9 on the PROMIS metric. The mean standard errors of measurement were 0.49 for the PHQ-9, 0.35 for the PROMIS short form, and 0.37, 0.28, and 0.27 for 3-, 8-, and 9-item-simulated CATs. The strategy described here facilitated migration from a fixed-format legacy scale to PROMIS CAT administration and may be useful in other settings.
40 CFR 86.309-79 - Sampling and analytical system; schematic drawing.
Code of Federal Regulations, 2010 CFR
2010-07-01
... or parts of components that are wetted by the sample or corrosive calibration gases shall be either... must be within 2 inches of the analyzer entrance port. (vi) Calibration or span gases for the NOX... calibration gases. (ii) V2—optional heated selector valve to purge the sample probe, perform leak checks, or...
40 CFR 86.309-79 - Sampling and analytical system; schematic drawing.
Code of Federal Regulations, 2011 CFR
2011-07-01
... or parts of components that are wetted by the sample or corrosive calibration gases shall be either... must be within 2 inches of the analyzer entrance port. (vi) Calibration or span gases for the NOX... calibration gases. (ii) V2—optional heated selector valve to purge the sample probe, perform leak checks, or...
Thermal infrared observations of Mars (7.5-12.8 microns) during the 1990 opposition
NASA Technical Reports Server (NTRS)
Roush, T. L.; Witteborn, F.; Lucy, P. G.; Graps, A.; Pollack, J. B.
1991-01-01
Thirteen spectra of Mars in the 7.5 to 12.8 micron wavelength range were obtained on 7 Dec. 1990 from the Infrared Telescope Facility (IRTF). For these observations, a grating with an ultimate resolving power of 120 to 250 was used, and wavelengths were calibrated for each grating setting by comparison with the absorption spectrum of polystyrene measured prior to each set of observations. By sampling at the Nyquist limit at the shortest wavelengths, an effective resolving power of about 120 over the entire wavelength range was achieved. A total of four grating settings were required to cover the entire wavelength region. A typical observing sequence consisted of: (1) positioning the grating in one of the intervals; (2) calibrating the wavelength positions; and (3) obtaining spectra for a number of spots on Mars. Several observations of the nearby standard star alpha Tauri were also acquired throughout the night. Each Mars spectrum represents an average of 4 to 6 measurements of the individual Mars spots. As a result of this observing sequence, the viewing geometry for a given location or spot on Mars does not change, but the actual location of the spot on Mars's surface varies somewhat between the different grating settings. Other aspects of the study are presented.
CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila
2015-03-10
We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using an MCMC exploration. Both methods find the same maximum likelihood region; however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.
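A minimal particle swarm optimizer of the kind described can be sketched as below. This is not the authors' code; the inertia/cognitive/social coefficients are common textbook defaults, and the toy misfit function standing in for the model-vs-observables comparison is an illustrative assumption.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=20, n_iters=100, seed=0):
    """Minimal PSO: each particle tracks its personal best, the swarm
    shares a global best, and velocities blend inertia with cognitive
    and social pulls toward those bests."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social coefficients
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(pbest_f.min())

# Toy stand-in for a model/observables misfit, minimum at (1, 2).
best, val = pso_minimize(lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2,
                         bounds=[(-5, 5), (-5, 5)])
print(best, val)
```

In the paper's setting each evaluation of f is a full SAM run, which is why needing an order of magnitude fewer evaluations than MCMC matters.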
First Demonstration of ECHO: an External Calibrator for Hydrogen Observatories
NASA Astrophysics Data System (ADS)
Jacobs, Daniel C.; Burba, Jacob; Bowman, Judd D.; Neben, Abraham R.; Stinnett, Benjamin; Turner, Lauren; Johnson, Kali; Busch, Michael; Allison, Jay; Leatham, Marc; Serrano Rodriguez, Victoria; Denney, Mason; Nelson, David
2017-03-01
Multiple instruments are pursuing constraints on dark energy, observing reionization and opening a window on the dark ages through the detection and characterization of the 21 cm hydrogen line for redshifts ranging from ~1 to 25. These instruments, including CHIME in the sub-meter and HERA in the meter bands, are wide-field arrays with multiple-degree beams, typically operating in transit mode. Accurate knowledge of their primary beams is critical for separation of bright foregrounds from the desired cosmological signals, but difficult to achieve through astronomical observations alone. Previous beam calibration work at low frequencies has focused on model verification and does not address the need of 21 cm experiments for routine beam mapping, to the horizon, of the as-built array. We describe the design and methodology of a drone-mounted calibrator, the External Calibrator for Hydrogen Observatories (ECHO), that aims to address this need. We report on a first set of trials to calibrate low-frequency dipoles at 137 MHz and compare ECHO measurements to an established beam-mapping system based on transmissions from the Orbcomm satellite constellation. We create beam maps of two dipoles at a 9° resolution and find sample noise ranging from 1% at the zenith to 100% in the far sidelobes. Assuming this sample noise represents the error in the measurement, the higher end of this range is not yet consistent with the desired requirement but is an improvement on Orbcomm. The overall performance of ECHO suggests that the desired precision and angular coverage is achievable in practice with modest improvements. We identify the main sources of systematic error and uncertainty in our measurements and describe the steps needed to overcome them.
Investigation into low-level anti-rubella virus IgG results reported by commercial immunoassays.
Dimech, Wayne; Arachchi, Nilukshi; Cai, Jingjing; Sahin, Terri; Wilson, Kim
2013-02-01
Since the 1980s, commercial anti-rubella virus IgG assays have been calibrated against a WHO International Standard and results have been reported in international units per milliliter (IU/ml). Laboratories testing routine patients' samples collected 100 samples that gave anti-rubella virus IgG results of 40 IU/ml or less from each of five different commercial immunoassays (CIAs). The 500 quantitative results thus obtained (100 samples from each CIA) were compared with results obtained from an in-house enzyme immunoassay (IH-EIA) calibrated using the WHO standard. All 500 samples were screened using a hemagglutination inhibition assay (HAI). Any sample having an HAI titer of 1:8 or less was assigned a negative anti-rubella virus antibody status. If the HAI titer was greater than 1:8, the sample was tested in an immunoblot (IB) assay. If the IB result was negative, the sample was assigned a negative anti-rubella virus IgG status; otherwise, the sample was assigned a positive status. Concordance between the CIA qualitative results and the assigned negative status ranged from 50.0 to 93.8%, and from 74.5 to 97.8% for the assigned positive status. Using a receiver operating characteristic analysis with the cutoff set at 10 IU/ml, the estimated sensitivity and specificity ranged from 70.2 to 91.2% and from 65.9 to 100%, respectively. There was poor correlation between the quantitative CIA results and those obtained by the IH-EIA, with the coefficient of determination (R2) ranging from 0.002 to 0.413. Although CIAs have been calibrated with the same international standard for more than two decades, the level of standardization continues to be poor. It may be time for the scientific community to reevaluate the relevance of quantification of anti-rubella virus IgG.
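The reference-status assignment described (HAI titer first, immunoblot as tie-breaker) is a simple decision rule; a sketch follows, with the function name and string encoding as illustrative choices.

```python
def assign_rubella_status(hai_titer_reciprocal, ib_positive=None):
    """Assign anti-rubella virus IgG status per the study's reference
    scheme: HAI titer of 1:8 or less -> negative; otherwise the
    immunoblot decides (negative IB -> negative, else positive)."""
    if hai_titer_reciprocal <= 8:
        return "negative"
    if ib_positive is None:
        raise ValueError("immunoblot result required when HAI titer > 1:8")
    return "positive" if ib_positive else "negative"

print(assign_rubella_status(8))          # negative
print(assign_rubella_status(16, True))   # positive
print(assign_rubella_status(16, False))  # negative
```

The CIA qualitative calls were then compared against this two-stage assigned status, which is where the 50.0-93.8% concordance figures come from.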
Investigation into Low-Level Anti-Rubella Virus IgG Results Reported by Commercial Immunoassays
Arachchi, Nilukshi; Cai, Jingjing; Sahin, Terri; Wilson, Kim
2013-01-01
Since the 1980s, commercial anti-rubella virus IgG assays have been calibrated against a WHO International Standard and results have been reported in international units per milliliter (IU/ml). Laboratories testing routine patients' samples collected 100 samples that gave anti-rubella virus IgG results of 40 IU/ml or less from each of five different commercial immunoassays (CIAs). The 500 quantitative results thus obtained (100 samples from each CIA) were compared with results obtained from an in-house enzyme immunoassay (IH-EIA) calibrated using the WHO standard. All 500 samples were screened using a hemagglutination inhibition assay (HAI). Any sample having an HAI titer of 1:8 or less was assigned a negative anti-rubella virus antibody status. If the HAI titer was greater than 1:8, the sample was tested in an immunoblot (IB) assay. If the IB result was negative, the sample was assigned a negative anti-rubella virus IgG status; otherwise, the sample was assigned a positive status. Concordance between the CIA qualitative results and the assigned negative status ranged from 50.0 to 93.8%, and from 74.5 to 97.8% for the assigned positive status. Using a receiver operating characteristic analysis with the cutoff set at 10 IU/ml, the estimated sensitivity and specificity ranged from 70.2 to 91.2% and from 65.9 to 100%, respectively. There was poor correlation between the quantitative CIA results and those obtained by the IH-EIA, with the coefficient of determination (R2) ranging from 0.002 to 0.413. Although CIAs have been calibrated with the same international standard for more than two decades, the level of standardization continues to be poor. It may be time for the scientific community to reevaluate the relevance of quantification of anti-rubella virus IgG. PMID:23254301
Joyce, Richard; Kuziene, Viktorija; Zou, Xin; Wang, Xueting; Pullen, Frank; Loo, Ruey Leng
2016-01-01
An ultra-performance liquid chromatography quadrupole time-of-flight mass spectrometry (UPLC-qTOF-MS) method using hydrophilic interaction liquid chromatography was developed and validated for the simultaneous quantification of 18 free amino acids in urine, with a total acquisition time, including column re-equilibration, of less than 18 min per sample. The method involves simple sample preparation: a 15-fold dilution with acetonitrile to give a final composition of 25 % aqueous and 75 % acetonitrile, without the need for any derivatization. The dynamic range of the calibration curve is approximately two orders of magnitude (120-fold from the lowest calibration point) with good linearity (r² ≥ 0.995 for all amino acids). Good separation of all amino acids as well as good intra- and inter-day accuracy (<15 %) and precision (<15 %) were observed using three quality control samples at concentrations in the low, medium and high range of the calibration curve. The limits of detection (LOD) and lower limits of quantification of the method ranged from approximately 1-300 nM and 0.01-0.5 µM, respectively. Amino acids in the prepared urine samples were stable for 72 h at 4 °C, after one freeze-thaw cycle, and for up to 4 weeks at -80 °C. We applied this method to quantify the content of 18 free amino acids in 646 urine samples from a dietary intervention study and were able to quantify all 18 free amino acids in these samples whenever they were present at a level above the LOD. We found the method to be reproducible (accuracy and precision were typically <10 % for QCL, QCM and QCH), and its relatively high-throughput nature potentially makes it a suitable alternative for the analysis of urine samples in a clinical setting.
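The linearity check described above reduces to an ordinary least-squares calibration line and its r², after which unknowns are read off the inverted line; a minimal sketch with synthetic responses (the assay's real peak areas are not given in the abstract):

```python
# Sketch of a calibration-curve linearity check with made-up responses.
conc = [0.5, 1.0, 5.0, 10.0, 25.0, 60.0]     # standard concentrations, µM
resp = [0.02 + 0.85 * c for c in conc]        # synthetic instrument responses

n = len(conc)
mx, my = sum(conc) / n, sum(resp) / n
sxx = sum((x - mx) ** 2 for x in conc)
sxy = sum((x - mx) * (y - my) for x, y in zip(conc, resp))
slope = sxy / sxx
intercept = my - slope * mx

# coefficient of determination of the fit
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(conc, resp))
ss_tot = sum((y - my) ** 2 for y in resp)
r2 = 1 - ss_res / ss_tot

# back-calculate an unknown sample from its response (should recover 12 µM)
unknown = (0.02 + 0.85 * 12.0 - intercept) / slope
```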
Reference measurement procedure for total glycerides by isotope dilution GC-MS.
Edwards, Selvin H; Stribling, Shelton L; Pyatt, Susan D; Kimberly, Mary M
2012-04-01
The CDC's Lipid Standardization Program established the chromotropic acid (CA) reference measurement procedure (RMP) as the accuracy base for standardization and metrological traceability for triglyceride testing. The CA RMP has several disadvantages, including lack of ruggedness. It uses obsolete instrumentation and hazardous reagents. To overcome these problems the CDC developed an isotope dilution GC-MS (ID-GC-MS) RMP for total glycerides in serum. We diluted serum samples with Tris-HCl buffer solution and spiked 200-μL aliquots with [(13)C(3)]-glycerol. These samples were incubated and hydrolyzed under basic conditions. The samples were dried, derivatized with acetic anhydride and pyridine, extracted with ethyl acetate, and analyzed by ID-GC-MS. Linearity, imprecision, and accuracy were evaluated by analyzing calibrator solutions, 10 serum pools, and a standard reference material (SRM 1951b). The calibration response was linear for the range of calibrator concentrations examined (0-1.24 mmol/L) with a slope and intercept of 0.717 (95% CI, 0.7123-0.7225) and 0.3122 (95% CI, 0.3096-0.3140), respectively. The limit of detection was 14.8 μmol/L. The mean %CV for the sample set (serum pools and SRM) was 1.2%. The mean %bias from NIST isotope dilution MS values for SRM 1951b was 0.7%. This ID-GC-MS RMP has the specificity and ruggedness to accurately quantify total glycerides in the serum pools used in the CDC's Lipid Standardization Program and demonstrates sufficiently acceptable agreement with the NIST primary RMP for total glyceride measurement.
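Given the reported calibration line (slope 0.717, intercept 0.3122), back-calculating a concentration from a measured response is a one-line inversion; a sketch, with the example response value invented:

```python
# Invert the linear calibration reported in the abstract:
# response = SLOPE * concentration + INTERCEPT
SLOPE, INTERCEPT = 0.717, 0.3122

def mmol_per_l(response):
    """Back-calculate total glycerides (mmol/L) from a measured response."""
    return (response - INTERCEPT) / SLOPE

# a response of 1.0292 corresponds to exactly 1.00 mmol/L on this line
c = mmol_per_l(1.0292)
```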
Shirazi, Mohammadali; Reddy Geedipally, Srinivas; Lord, Dominique
2017-01-01
Severity distribution functions (SDFs) are used in highway safety to estimate the severity of crashes and to conduct different types of safety evaluations and analyses. Developing a new SDF is a difficult task that demands significant time and resources. To simplify the process, the Highway Safety Manual (HSM) has started to document SDF models for different types of facilities; SDF models have recently been introduced for freeways and ramps in the HSM addendum. However, because these models are fitted and validated using data from a small number of selected states, they must be calibrated to local conditions when applied to a new jurisdiction. The HSM provides a methodology to calibrate the models through a scalar calibration factor, but this methodology was never validated through research, and there are no concrete guidelines for selecting a reliable sample size. Using extensive simulation, this paper documents an analysis that examined the bias between the 'true' and 'estimated' calibration factors. The results indicated that as the true calibration factor deviates further from '1', more bias is observed between the 'true' and 'estimated' calibration factors. In addition, simulation studies were performed to determine the calibration sample size for various conditions. It was found that as the average coefficient of variation (CV) of the 'KAB' and 'C' crashes increases, the analyst needs to collect a larger sample size to calibrate SDF models. Taking this observation into account, sample-size guidelines are proposed based on the average CV of the crash severities used in the calibration process. Copyright © 2016 Elsevier Ltd. All rights reserved.
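An HSM-style scalar calibration factor is the ratio of locally observed totals to model-predicted totals; a minimal sketch with hypothetical per-site crash counts:

```python
# Scalar calibration factor, HSM style:
# C = (sum of observed crashes) / (sum of model-predicted crashes).
# Counts below are invented for illustration.
observed  = [3, 1, 4, 2, 5]          # observed severe crashes per site
predicted = [2.5, 1.5, 3.0, 2.0, 4.0]  # model-predicted crashes per site

C = sum(observed) / sum(predicted)   # C > 1: model under-predicts locally
```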
Elsohaby, Ibrahim; Hou, Siyuan; McClure, J Trenton; Riley, Christopher B; Shaw, R Anthony; Keefe, Gregory P
2015-08-20
Following the recent development of a new approach to quantitative analysis of IgG concentrations in bovine serum using transmission infrared spectroscopy, the potential to measure IgG levels using technology and a device better designed for field use was investigated. A method using attenuated total reflectance infrared (ATR) spectroscopy in combination with partial least squares (PLS) regression was developed to measure bovine serum IgG concentrations. ATR spectroscopy has a distinct ease-of-use advantage that may open the door to routine point-of-care testing. Serum samples were collected from calves and adult cows, tested by a reference radial immunodiffusion (RID) method, and ATR spectra were acquired. The spectra were linked to the RID-IgG concentrations and then randomly split into two sets: calibration and prediction. The calibration set was used to build a calibration model, while the prediction set was used to assess the predictive performance and accuracy of the final model. The procedure was repeated for various spectral data preprocessing approaches. For the prediction set, the Pearson's and concordance correlation coefficients between the IgG measured by RID and predicted by ATR spectroscopy were both 0.93. The Bland-Altman plot revealed no obvious systematic bias between the two methods. ATR spectroscopy showed a sensitivity for detection of failure of transfer of passive immunity (FTPI) of 88 %, specificity of 100 % and accuracy of 94 % (with IgG <1000 mg/dL as the FTPI cut-off value). ATR spectroscopy in combination with multivariate data analysis shows potential as an alternative approach for rapid quantification of IgG concentrations in bovine serum and the diagnosis of FTPI in calves.
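The two agreement statistics reported (Pearson's r and the concordance correlation coefficient, CCC) can be computed directly from paired reference and predicted values; a sketch with invented IgG pairs:

```python
import math

# Pearson's r measures linear association; Lin's CCC additionally penalizes
# any systematic shift between methods. IgG values below are invented.
rid  = [400.0, 800.0, 1200.0, 1600.0, 2000.0]  # reference RID IgG, mg/dL
pred = [420.0, 780.0, 1150.0, 1700.0, 1950.0]  # ATR-predicted IgG, mg/dL

n = len(rid)
mx, my = sum(rid) / n, sum(pred) / n
sx2 = sum((x - mx) ** 2 for x in rid) / n
sy2 = sum((y - my) ** 2 for y in pred) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(rid, pred)) / n

pearson = sxy / math.sqrt(sx2 * sy2)
ccc = 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)
```

CCC is never larger than |r|; the two coincide only when the predicted values have no location or scale shift relative to the reference.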
A Nonparametric, Multiple Imputation-Based Method for the Retrospective Integration of Data Sets.
Carrig, Madeline M; Manrique-Vallier, Daniel; Ranby, Krista W; Reiter, Jerome P; Hoyle, Rick H
2015-01-01
Complex research questions often cannot be addressed adequately with a single data set. One sensible alternative to the high cost and effort associated with the creation of large new data sets is to combine existing data sets containing variables related to the constructs of interest. The goal of the present research was to develop a flexible, broadly applicable approach to the integration of disparate data sets that is based on nonparametric multiple imputation and the collection of data from a convenient, de novo calibration sample. We demonstrate proof of concept for the approach by integrating three existing data sets containing items related to the extent of problematic alcohol use and associations with deviant peers. We discuss both necessary conditions for the approach to work well and potential strengths and weaknesses of the method compared to other data set integration approaches. PMID:26257437
Sochor, Jiri; Ryvolova, Marketa; Krystofova, Olga; Salas, Petr; Hubalek, Jaromir; Adam, Vojtech; Trnkova, Libuse; Havel, Ladislav; Beklova, Miroslava; Zehnalek, Josef; Provaznik, Ivo; Kizek, Rene
2010-11-29
The aim of this study was to describe the behaviour, kinetics, time courses and limitations of six different fully automated spectrometric methods--DPPH, TEAC, FRAP, DMPD, Free Radicals and Blue CrO5. Absorption curves were measured and absorbance maxima were found. All methods were calibrated using the standard compounds Trolox® and/or gallic acid. Calibration curves were determined (relative standard deviation was within the range of 1.5 to 2.5%). The obtained characteristics were compared and discussed, and the data were used to optimize and automate all of the mentioned protocols. The automatic analyzer allowed us to analyse a larger set of samples simultaneously, to decrease the measurement time, to eliminate errors and to provide data of higher quality than manual analysis. The total analysis time for one sample was decreased to 10 min for all six methods, whereas manual spectrometric determination took approximately 120 min. The data showed good correlations between the studied methods (R=0.97-0.99).
A new method to measure electron density and effective atomic number using dual-energy CT images
NASA Astrophysics Data System (ADS)
Ramos Garcia, Luis Isaac; Pérez Azorin, José Fernando; Almansa, Julio F.
2016-01-01
The purpose of this work is to present a new method to extract the electron density (ρ_e) and the effective atomic number (Z_eff) from dual-energy CT images, based on a Karhunen-Loeve expansion (KLE) of the atomic cross section per electron. This method was used to calibrate a Siemens Definition CT using the CIRS phantom. The electron density and effective atomic number predicted using 80 kVp and 140 kVp images were compared with a calibration phantom and an independent set of samples. The mean absolute deviations between the theoretical and calculated values for all the samples were 1.7 % ± 0.1 % for ρ_e and 4.1 % ± 0.3 % for Z_eff. Finally, these results were compared with another stoichiometric method. The application of the KLE to represent the atomic cross section per electron is a promising method for calculating ρ_e and Z_eff from dual-energy CT images.
Bortolussi, Silva; Ciani, Laura; Postuma, Ian; Protti, Nicoletta; Luca Reversi; Bruschi, Piero; Ferrari, Cinzia; Cansolino, Laura; Panza, Luigi; Ristori, Sandra; Altieri, Saverio
2014-06-01
The possibility of measuring boron concentration with high precision in tissues that will be irradiated represents a fundamental step for a safe and effective BNCT treatment. In Pavia, two techniques have been used for this purpose: a quantitative method based on charged particle spectrometry and boron biodistribution imaging based on neutron autoradiography. A quantitative method to determine boron concentration by neutron autoradiography was recently set up and calibrated for the measurement of biological samples, both solid and liquid, in the frame of the feasibility study of BNCT. This technique was calibrated and the obtained results were cross-checked against those of α spectrometry in order to validate them. The comparisons were performed using tissues taken from animals treated with different boron administration protocols. Subsequently, quantitative neutron autoradiography was employed to measure osteosarcoma cell samples treated with BPA and with new boronated formulations. © 2013 Published by Elsevier Ltd.
Logistic model of nitrate in streams of the upper-midwestern United States
Mueller, D.K.; Ruddy, B.C.; Battaglin, W.A.
1997-01-01
Nitrate in surface water can have adverse effects on aquatic life and, in drinking-water supplies, can pose a risk to human health. As part of a regional study, nitrate as N (NO3-N) was analyzed in water samples collected from streams throughout 10 Midwestern states during synoptic surveys in 1989, 1990, and 1994. Data from the period immediately following crop planting at 124 sites were analyzed using logistic regression to relate discrete categories of NO3-N concentrations to characteristics of the basins upstream from the sites. The NO3-N data were divided into three categories: probable background concentrations, elevated concentrations, and concentrations exceeding the maximum contaminant level (MCL) of 10 mg L-1. Nitrate-N concentrations were positively correlated with streamflow, upstream area planted in corn (Zea mays L.), and upstream N-fertilizer application rates. Elevated NO3-N concentrations were associated with poorly drained soils and were weakly correlated with population density. Nitrate-N and streamflow data collected during 1989 and 1990 were used to calibrate the model, and data collected during 1994 were used for verification. The model correctly estimated NO3-N concentration categories for 79% of the samples in the calibration data set and 60% of the samples in the verification data set. The model was used to indicate where NO3-N concentrations might be elevated or exceed the MCL in streams throughout the study area. The potential for elevated NO3-N concentrations was predicted to be greatest for streams in Illinois, Indiana, Iowa, and western Ohio.
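The logistic step of such a model turns basin characteristics into a category probability; a toy sketch with made-up coefficients (not the paper's fitted values) for two of the covariates named above:

```python
import math

# Toy logistic model: probability that NO3-N is in the elevated category,
# from two basin covariates. All coefficients are invented for illustration.
b0, b_corn, b_fert = -4.0, 5.0, 0.02

def p_elevated(frac_corn, fert_kg_ha):
    """frac_corn: fraction of basin planted in corn; fert: N rate, kg/ha."""
    z = b0 + b_corn * frac_corn + b_fert * fert_kg_ha
    return 1.0 / (1.0 + math.exp(-z))

p_low  = p_elevated(0.05, 20.0)   # little corn, light fertilizer use
p_high = p_elevated(0.70, 150.0)  # corn-heavy, heavily fertilized basin
```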
Nondestructive evaluation of soluble solid content in strawberry by near infrared spectroscopy
NASA Astrophysics Data System (ADS)
Guo, Zhiming; Huang, Wenqian; Chen, Liping; Wang, Xiu; Peng, Yankun
This paper demonstrates the feasibility of using near infrared (NIR) spectroscopy combined with synergy interval partial least squares (siPLS) algorithms as a rapid nondestructive method to estimate the soluble solid content (SSC) in strawberry. Spectral preprocessing methods were selected by cross-validation during model calibration. The partial least squares (PLS) algorithm was used to build the regression model. The performance of the final model was evaluated by the root mean square error of calibration (RMSEC) and correlation coefficient (R²c) in the calibration set, and tested by the root mean square error of prediction (RMSEP) and correlation coefficient (R²p) in the prediction set. The optimal siPLS model was obtained after first-derivative spectral preprocessing. The best model achieved RMSEC = 0.2259 and R²c = 0.9590 in the calibration set, and RMSEP = 0.2892 and R²p = 0.9390 in the prediction set. This work demonstrates that NIR spectroscopy and siPLS with efficient spectral preprocessing are a useful tool for nondestructive evaluation of SSC in strawberry.
A large-scale, long-term study of scale drift: The micro view and the macro view
NASA Astrophysics Data System (ADS)
He, W.; Li, S.; Kingsbury, G. G.
2016-11-01
The development of measurement scales for use across years and grades in educational settings provides unique challenges, as instructional approaches, instructional materials, and content standards all change periodically. This study examined the measurement stability of a set of Rasch measurement scales that have been in place for almost 40 years. In order to investigate the stability of these scales, item responses were collected from a large set of students who took operational adaptive tests using items calibrated to the measurement scales. For the four scales that were examined, item samples ranged from 2183 to 7923 items. Each item was administered to at least 500 students in each grade level, resulting in approximately 3000 responses per item. Stability was examined at the micro level by analysing changes in item parameter estimates since the items were first calibrated. It was also examined at the macro level, involving groups of items and overall test scores for students. Results indicated that individual items had changes in their parameter estimates, which require further analysis and possible recalibration. At the same time, the results at the total score level indicate substantial stability in the measurement scales over the span of their use.
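A micro-level drift check of the kind described compares each item's original difficulty with its re-estimate and flags large shifts; a sketch with invented logit values and an invented flagging threshold:

```python
import math

def p_correct(theta, b):
    """Rasch model: probability of a correct response for ability theta
    on an item of difficulty b (both in logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# original vs. re-estimated item difficulties (logits), invented values
original = {"item1": -0.50, "item2": 0.20, "item3": 1.10}
recalib  = {"item1": -0.48, "item2": 0.65, "item3": 1.12}

DRIFT_FLAG = 0.30   # logit shift treated as practically significant
flagged = [k for k in original if abs(recalib[k] - original[k]) > DRIFT_FLAG]
```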
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cartas, Raul; Mimendia, Aitor; Valle, Manel del
2009-05-23
Calibration models for multi-analyte electronic tongues have commonly been built using a set of sensors, at least one per analyte under study. Complex signals recorded with these systems are formed by the sensors' responses to the analytes of interest plus interferents, from which a multivariate response model is then developed. This work describes a data treatment method for the simultaneous quantification of two species in solution employing the signal from a single sensor. The approach takes advantage of the complex information recorded in one electrode's transient after sample insertion to build the calibration models for both analytes. The signal from the electrode was first processed by a discrete wavelet transform to extract useful information and reduce its length, and then by artificial neural networks to fit a model. Two different potentiometric sensors were used as a case study to corroborate the effectiveness of the approach.
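The wavelet compression step can be illustrated with one level of a Haar transform (the abstract does not name the wavelet actually used); each level halves the signal length before the coefficients are passed to the network:

```python
# One level of a Haar discrete wavelet transform: pairwise averages
# (approximation) and pairwise half-differences (detail).
def haar_level(signal):
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

# invented electrode-transient samples
sig = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
approx, detail = haar_level(sig)   # approx halves the length for the ANN
```

Applying `haar_level` recursively to `approx` gives further levels of decomposition, trading resolution for compactness.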
Space environment simulation and sensor calibration facility
NASA Astrophysics Data System (ADS)
Engelhart, Daniel P.; Patton, James; Plis, Elena; Cooper, Russell; Hoffmann, Ryan; Ferguson, Dale; Hilmer, Robert V.; McGarity, John; Holeman, Ernest
2018-02-01
The Mumbo space environment simulation chamber discussed here comprises a set of tools to calibrate a variety of low flux, low energy electron and ion detectors used in satellite-mounted particle sensors. The chamber features electron and ion beam sources, a Lyman-alpha ultraviolet lamp, a gimbal table sensor mounting system, cryogenic sample mount and chamber shroud, and beam characterization hardware and software. The design of the electron and ion sources presented here offers a number of unique capabilities for space weather sensor calibration. Both sources create particle beams with narrow, well-characterized energetic and angular distributions with beam diameters that are larger than most space sensor apertures. The electron and ion sources can produce consistently low fluxes that are representative of quiescent space conditions. The particle beams are characterized by 2D beam mapping with several co-located pinhole aperture electron multipliers to capture relative variation in beam intensity and a large aperture Faraday cup to measure absolute current density.
Pattern sampling for etch model calibration
NASA Astrophysics Data System (ADS)
Weisbuch, François; Lutich, Andrey; Schatz, Jirka
2017-06-01
Successful patterning requires good control of the photolithography and etch processes. While compact litho models, mainly based on rigorous physics, can predict very well the contours printed in photoresist, pure empirical etch models are less accurate and more unstable. Compact etch models are based on geometrical kernels to compute the litho-etch biases that measure the distance between litho and etch contours. The definition of the kernels as well as the choice of calibration patterns is critical to get a robust etch model. This work proposes to define a set of independent and anisotropic etch kernels ("internal", "external", "curvature", "Gaussian", "z_profile") designed to capture the finest details of the resist contours and represent precisely any etch bias. By evaluating the etch kernels on various structures it is possible to map their etch signatures in a multi-dimensional space and analyze them to find an optimal sampling of structures to train an etch model. The method was specifically applied to a contact layer containing many different geometries and was used to successfully select appropriate calibration structures. The proposed kernels evaluated on these structures were combined to train an etch model significantly better than the standard one. We also illustrate the usage of the specific kernel "z_profile" which adds a third dimension to the description of the resist profile.
Plug-and-play, infrared, laser-mediated PCR in a microfluidic chip.
Pak, Nikita; Saunders, D Curtis; Phaneuf, Christopher R; Forest, Craig R
2012-04-01
Microfluidic polymerase chain reaction (PCR) systems have set milestones for small volume (100 nL-5 μL), amplification speed (100-400 s), and on-chip integration of upstream and downstream sample handling including purification and electrophoretic separation functionality. In practice, the microfluidic chips in these systems require either insertion of thermocouples or calibration prior to every amplification. These factors can offset the speed advantages of microfluidic PCR and have likely hindered commercialization. We present an infrared, laser-mediated, PCR system that features a single calibration, accurate and repeatable precision alignment, and systematic thermal modeling and management for reproducible, open-loop control of PCR in 1 μL chambers of a polymer microfluidic chip. Total cycle time is less than 12 min: 1 min to fill and seal, 10 min to amplify, and 1 min to recover the sample. We describe the design, basis for its operation, and the precision engineering in the system and microfluidic chip. From a single calibration, we demonstrate PCR amplification of a 500 bp amplicon from λ-phage DNA in multiple consecutive trials on the same instrument as well as multiple identical instruments. This simple, relatively low-cost plug-and-play design is thus accessible to persons who may not be skilled in assembly and engineering.
NASA Astrophysics Data System (ADS)
Tan, Chao; Chen, Hui; Wang, Chao; Zhu, Wanping; Wu, Tong; Diao, Yuanbo
2013-03-01
Near and mid-infrared (NIR/MIR) spectroscopy techniques have gained great acceptance in industry due to their multiple applications and versatility. However, the success of an application often depends heavily on the construction of accurate and stable calibration models. For this purpose, a simple multi-model fusion strategy is proposed. It combines the Kohonen self-organizing map (KSOM), mutual information (MI) and partial least squares (PLS), and is therefore named KMICPLS. It works as follows: first, the original training set is fed into a KSOM for unsupervised clustering of samples, from which a series of training subsets are constructed. Thereafter, on each training subset, an MI spectrum is calculated and only the variables with MI values above the mean are retained, based on which a candidate PLS model is constructed. Finally, a fixed number of PLS models are selected to produce a consensus model. Two NIR/MIR spectral datasets from the brewing industry are used for experiments. The results confirm its superior performance over two reference algorithms, the conventional PLS and genetic algorithm-PLS (GAPLS). It builds more accurate and stable calibration models without increasing complexity, and can be generalized to other NIR/MIR applications.
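The final consensus step amounts to averaging the selected candidate models' predictions; a minimal sketch in which the candidate PLS models are replaced by stand-in linear predictors with invented coefficients:

```python
# Stand-ins for the selected candidate PLS models (each maps a sample's
# spectral score to a predicted property value); coefficients are invented.
candidates = [
    lambda x: 0.90 * x + 0.2,
    lambda x: 1.05 * x - 0.1,
    lambda x: 1.02 * x + 0.0,
]

def consensus(x):
    """Consensus prediction: the mean of the candidate models' outputs."""
    preds = [m(x) for m in candidates]
    return sum(preds) / len(preds)

y = consensus(10.0)
```

Averaging over models trained on different sample clusters and variable subsets is what gives the fused model its stability relative to any single PLS fit.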
MacFarlane, Michael; Wong, Daniel; Hoover, Douglas A; Wong, Eugene; Johnson, Carol; Battista, Jerry J; Chen, Jeff Z
2018-03-01
In this work, we propose a new method of calibrating cone beam computed tomography (CBCT) data sets for radiotherapy dose calculation and plan assessment. The motivation for this patient-specific calibration (PSC) method is to develop an efficient, robust, and accurate CBCT calibration process that is less susceptible to deformable image registration (DIR) errors. Instead of mapping the CT numbers voxel-by-voxel as traditional DIR calibration methods do, the PSC method generates correlation plots between deformably registered planning CT and CBCT voxel values for each image slice. A linear calibration curve specific to each slice is then obtained by least-squares fitting and applied to the CBCT slice's voxel values. This allows each CBCT slice to be corrected using DIR without regional DIR errors altering the patient geometry. A retrospective study was performed on 15 head-and-neck cancer patients, each having routine CBCTs and a middle-of-treatment re-planning CT (reCT). The original treatment plan was re-calculated on the patient's reCT image set (serving as the gold standard) as well as the image sets produced by voxel-to-voxel DIR, density-overriding, and the new PSC calibration methods. Dose accuracy of each calibration method was compared to the reference reCT data set using common dose-volume metrics and 3D gamma analysis. A phantom study was also performed to assess the accuracy of the DIR and PSC CBCT calibration methods compared with planning CT. Compared with the gold standard using reCT, the average dose metric differences were ≤ 1.1% for all three methods (PSC: -0.3%; DIR: -0.7%; density-override: -1.1%). The average gamma pass rates with thresholds 3%, 3 mm were also similar among the three techniques (PSC: 95.0%; DIR: 96.1%; density-override: 94.4%). An automated patient-specific calibration method was developed which yielded strong dosimetric agreement with the results obtained using a re-planning CT for head-and-neck patients.
© 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
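The per-slice linear calibration at the heart of PSC is an ordinary least-squares fit of registered planning-CT values against CBCT values, applied slice by slice; a sketch with invented voxel values (the real method operates on full 2D slices):

```python
# Least-squares line mapping CBCT voxel values to registered planning-CT
# values for one slice; voxel values below are invented HU-like numbers.
def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((v - mx) ** 2 for v in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

cbct_slice = [-950.0, -100.0, 0.0, 60.0, 300.0]    # CBCT numbers in one slice
ct_slice   = [-1000.0, -110.0, -5.0, 58.0, 310.0]  # registered planning-CT

m, b = fit_line(cbct_slice, ct_slice)
calibrated = [m * v + b for v in cbct_slice]        # corrected CBCT slice
```

Because each slice gets its own (m, b), a registration error confined to one region rescales intensities there without deforming the patient geometry.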
Use of Naturally Available Reference Targets to Calibrate Airborne Laser Scanning Intensity Data
Vain, Ants; Kaasalainen, Sanna; Pyysalo, Ulla; Krooks, Anssi; Litkey, Paula
2009-01-01
We have studied the possibility of calibrating airborne laser scanning (ALS) intensity data using land targets typically available in urban areas. For this purpose, a test area around Espoonlahti Harbor, Espoo, Finland, for which a long time series of ALS campaigns is available, was selected. Different target samples (beach sand, concrete, asphalt, different types of gravel) were collected and measured in the laboratory. Using tarps with known backscattering properties, the natural samples were calibrated and studied, taking into account the atmospheric effect, incidence angle and flying height. Using data from different flights and altitudes, a time series for the natural samples was generated. By studying the stability of the samples, we could determine the most suitable types of natural targets for ALS radiometric calibration. Using the selected natural samples as reference, the ALS points of typical land targets were calibrated again and examined. Results showed the need for more accurate ground reference data before natural samples can be used in ALS intensity data calibration. An NIR camera-based field system was also used for collecting ground reference data; it proved to be a good means of collecting in situ reference data, especially for targets with inhomogeneous surface reflection properties. PMID:22574045
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estevez, Ivan; Concept Scientific Instruments, ZA de Courtaboeuf, 2 rue de la Terre de Feu, 91940 Les Ulis; Chrétien, Pascal
2014-02-24
On the basis of a home-made nanoscale impedance measurement device associated with a commercial atomic force microscope, a specific operating process is proposed to improve absolute (in the sense of "nonrelative") capacitance imaging by drastically reducing the parasitic effects due to stray capacitance, surface topography, and sample tilt. The method, combining a two-pass image acquisition with the exploitation of approach curves, has been validated on sets of calibration samples consisting of square parallel-plate capacitors for which theoretical capacitance values were numerically calculated.
Linear and nonlinear trending and prediction for AVHRR time series data
NASA Technical Reports Server (NTRS)
Smid, J.; Volf, P.; Slama, M.; Palus, M.
1995-01-01
The variability of the AVHRR calibration coefficients over time was analyzed using linear and nonlinear time-series algorithms. Specifically, we used spline trend modeling, autoregressive process analysis, an incremental neural network learning algorithm and redundancy functional testing. The analysis performed on the available AVHRR data sets revealed that (1) the calibration data have nonlinear dependencies, (2) the calibration data depend strongly on the target temperature, (3) both the calibration coefficients and the temperature time series can be modeled, to a first approximation, as autonomous dynamical systems, and (4) the high-frequency residuals of the analyzed data sets are best modeled as an autoregressive process of order 10. We have dealt with a nonlinear identification problem and the problem of noise filtering (data smoothing). System identification and filtering are significant problems for AVHRR data sets. The algorithms outlined in this study can be used for future EOS missions. Prediction and smoothing algorithms for time series of calibration data provide a functional characterization of the data and can be particularly useful when calibration data are incomplete or sparse.
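The autoregressive modeling in (4) regresses each value on its lagged predecessors; a minimal AR(1) version of the fit (the order-10 case simply uses ten lagged regressors) on an exact synthetic series:

```python
# Estimate the AR(1) coefficient by regressing x[t] on x[t-1].
# The series below is an exact phi = 0.5 signal, so the fit is exact too.
series = [0.5 ** t for t in range(20)]

x_prev = series[:-1]
x_next = series[1:]
phi = sum(a * b for a, b in zip(x_prev, x_next)) / sum(a * a for a in x_prev)
```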
NASA Technical Reports Server (NTRS)
Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.
2002-01-01
The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.
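The non-parametric way of estimating sampling uncertainty can be sketched by comparing accumulations computed from the full series against those from regular subsamples, over all sampling phases; the rain-rate series below is synthetic and purely illustrative:

```python
import math
import random

def sampling_rms_error(series, interval):
    """RMS difference between the mean rain rate computed from all time
    steps and from regular subsamples taken every `interval` steps,
    averaged over all sampling phases (a non-parametric estimate)."""
    full = sum(series) / len(series)
    errs = []
    for phase in range(interval):
        sub = series[phase::interval]
        est = sum(sub) / len(sub)
        errs.append((est - full) ** 2)
    return math.sqrt(sum(errs) / len(errs))

random.seed(1)
# Hypothetical hourly rain-rate series over a 30-day accumulation period
rain = [max(0.0, random.gauss(0.2, 1.0)) for _ in range(720)]

err_1h = sampling_rms_error(rain, 1)    # sampling every step: no error
err_12h = sampling_rms_error(rain, 12)  # 12-hourly snapshots
```

Continuous sampling reproduces the accumulation exactly, while coarser intervals leave a residual error of the kind the scaling law in the study characterizes.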
Wold, Jens Petter; Veiseth-Kent, Eva; Høst, Vibeke; Løvland, Atle
2017-01-01
The main objective of this work was to develop a method for rapid and non-destructive detection and grading of wooden breast (WB) syndrome in chicken breast fillets. Near-infrared (NIR) spectroscopy was chosen as the detection method, and an industrial NIR scanner was applied and tested for large-scale on-line detection of the syndrome. Two approaches were evaluated for discrimination of WB fillets: (1) linear discriminant analysis based on NIR spectra only, and (2) a regression model for protein based on NIR spectra, with the estimated protein concentrations used for discrimination. A sample set of 197 fillets was used for training and calibration. A test set was recorded under industrial conditions and contained spectra from 79 fillets. The classification methods obtained 99.5-100% correct classification of the calibration set and 100% correct classification of the test set. The NIR scanner was then installed in a commercial chicken processing plant and could detect incidence rates of WB in large batches of fillets. Examples of incidence are shown for three broiler flocks in which a high number of fillets (9063, 6330 and 10483) were effectively measured. Prevalences of WB of 0.1%, 6.6% and 8.5% were estimated for these flocks based on the complete sample volumes. Such an on-line system can be used to alleviate the challenges WB represents to the poultry meat industry. It enables automatic quality sorting of chicken fillets into different product categories, and laborious manual grading can be avoided. Incidences of WB from different farms and flocks can be tracked, and this information can be used to understand and identify the main causes of WB in chicken production. This knowledge can be used to improve production procedures and reduce today's extensive occurrence of WB.
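Approach 2 above reduces, in its simplest form, to thresholding a NIR-predicted protein concentration; a one-dimensional sketch with hypothetical protein values (the actual model is multivariate, and the numbers below are invented):

```python
def lda_threshold(class0, class1):
    """Midpoint decision boundary for a 1-D two-class problem with
    equal priors and pooled variance: threshold = (mu0 + mu1) / 2."""
    mu0 = sum(class0) / len(class0)
    mu1 = sum(class1) / len(class1)
    return (mu0 + mu1) / 2.0

# Hypothetical predicted protein concentrations (%): WB-affected fillets
# tend to have lower protein than normal fillets
normal = [23.1, 22.8, 23.4, 22.9, 23.2]
wooden = [20.1, 19.8, 20.5, 20.0, 19.7]

thr = lda_threshold(wooden, normal)

def classify(protein_pct):
    """Assign a fillet to a class from its predicted protein content."""
    return "wooden breast" if protein_pct < thr else "normal"
```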
Ecological Risk Assessment of Explosive Residues in Rodents, Reptiles, Amphibians, and Fish
2004-03-01
Oligonucleotide primers were designed according to the sequence for pendrin in Mus musculus, and PCR was carried out using a FailSafe kit (Epicentre, WI). PCR... Project No. T9700, Perchlorate Analytical, Phase V: as a calibration curve is run each time a set of samples is analyzed, we routinely include an... Final report, FY2002, SERDP Project ER-1235: identification of perchlorate-contaminated and reference sites.
Iterative Strain-Gage Balance Calibration Data Analysis for Extended Independent Variable Sets
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred
2011-01-01
A new method was developed that makes it possible to use an extended set of independent calibration variables for an iterative analysis of wind tunnel strain-gage balance calibration data. The new method permits the application of the iterative analysis method whenever the total number of balance loads and other independent calibration variables is greater than the total number of measured strain-gage outputs. The iteration equations used by the iterative analysis method have the limitation that the number of independent and dependent variables must match. The new method circumvents this limitation: it simply adds a missing dependent variable to the original data set by using an additional independent variable also as an additional dependent variable. Then, the desired solution of the regression analysis problem can be obtained that fits each gage output as a function of both the original and additional independent calibration variables. The final regression coefficients can be converted to data reduction matrix coefficients because the missing dependent variables were added to the data set without changing the regression analysis result for each gage output. Therefore, the new method still supports the application of the two load iteration equation choices that the iterative method traditionally uses for the prediction of balance loads during a wind tunnel test. An example discussed in the paper illustrates the application of the new method to a realistic simulated temperature-dependent calibration data set for a six-component balance.
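The key point, that each gage output is regressed independently so that appending an extra dependent variable leaves the original fits unchanged, can be sketched with a tiny two-variable least-squares example (all data hypothetical):

```python
def fit_two_var(y, x1, x2):
    """Least-squares fit y ≈ a*x1 + b*x2 via the 2x2 normal equations."""
    s11 = sum(a * a for a in x1)
    s22 = sum(b * b for b in x2)
    s12 = sum(a * b for a, b in zip(x1, x2))
    sy1 = sum(v * a for v, a in zip(y, x1))
    sy2 = sum(v * b for v, b in zip(y, x2))
    det = s11 * s22 - s12 * s12
    return ((s22 * sy1 - s12 * sy2) / det,
            (s11 * sy2 - s12 * sy1) / det)

# One balance load plus temperature as independent variables, a single
# gage output as a dependent variable (hypothetical data)
load = [0.0, 1.0, 2.0, 3.0, 4.0]
temp = [20.0, 22.0, 19.0, 25.0, 21.0]
gage = [0.5 * L + 0.01 * T for L, T in zip(load, temp)]

# Original fit for the gage output
coef_gage = fit_two_var(gage, load, temp)

# Augment the dependent set with temperature itself: its fit is the
# trivial identity, and the gage-output fit above is untouched
coef_temp = fit_two_var(temp, load, temp)
```

Because each dependent variable is fit separately, squaring the system this way does not alter the gage-output coefficients, which is what allows the converted data reduction matrix to remain valid.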
Evanoff, M G; Roehrig, H; Giffords, R S; Capp, M P; Rovinelli, R J; Hartmann, W H; Merritt, C
2001-06-01
This report discusses calibration and set-up procedures for medium-resolution monochrome cathode ray tubes (CRTs) undertaken in preparation for the oral portion of the board examination of the American Board of Radiology (ABR). The board examinations took place in more than 100 rooms of a hotel, with one display station (a computer and the associated CRT display) in each of the hotel rooms used for the examinations. The examinations covered the radiologic specialties cardiopulmonary, musculoskeletal, gastrointestinal, vascular, pediatric, and genitourinary. The software used for set-up and calibration was the VeriLUM 4.0 package from Image Smiths in Germantown, MD. The set-up included setting minimum and maximum luminance, as well as positioning the CRT in each examination room with respect to reflections of room lights. The calibration of the grey-scale rendition was done according to the Digital Imaging and Communications in Medicine (DICOM) Part 14 Grayscale Standard Display Function. We describe these procedures and present the calibration data in tables and graphs, listing initial values of minimum luminance, maximum luminance, and grey-scale rendition (DICOM Part 14 standard display function). Changes of these parameters over the duration of the examination were observed and recorded on 11 monitors in a particular room. These changes strongly suggest that all calibrated CRTs be monitored over the duration of the examination. In addition, other CRT performance data affecting image quality, such as spatial resolution, should be included in set-up and image quality-control procedures.
Evaluation of a physically based quasi-linear and a conceptually based nonlinear Muskingum methods
NASA Astrophysics Data System (ADS)
Perumal, Muthiah; Tayfur, Gokmen; Rao, C. Madhusudana; Gurarslan, Gurhan
2017-03-01
Two variants of the Muskingum flood routing method formulated to account for the nonlinearity of the channel routing process are investigated in this study. These variants are: (1) the three-parameter conceptual Nonlinear Muskingum (NLM) method advocated by Gill in 1978, and (2) the Variable Parameter McCarthy-Muskingum (VPMM) method recently proposed by Perumal and Price in 2013. The VPMM method does not require the rigorous calibration and validation procedures required in the case of the NLM method, due to established relationships of its parameters with flow and channel characteristics based on hydrodynamic principles. The parameters of the conceptual nonlinear storage equation used in the NLM method were calibrated using Artificial Intelligence Application (AIA) techniques, such as the Genetic Algorithm (GA), Differential Evolution (DE), Particle Swarm Optimization (PSO) and Harmony Search (HS). The calibration was carried out on a given set of hypothetical flood events obtained by routing a given inflow hydrograph through a set of 40 km long prismatic channel reaches using the Saint-Venant (SV) equations. The validation of the calibrated NLM method was investigated using a different set of hypothetical flood hydrographs obtained in the same set of channel reaches used for the calibration studies. Both sets of solutions obtained in the calibration and validation cases using the NLM method were compared with the corresponding solutions of the VPMM method based on some pertinent evaluation measures. The results of the study reveal that the physically based VPMM method is capable of accounting for the nonlinear characteristics of flood wave movement better than the conceptually based NLM method, which requires tedious calibration and validation procedures.
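For orientation, the nonlinear Muskingum storage relation S = K[xI + (1-x)O]^m can be routed with a simple explicit scheme; the sketch below uses hypothetical parameter values and a synthetic triangular inflow hydrograph, not the calibrated values from the study:

```python
def route_nlm(inflow, dt, K, x, m, o0):
    """Explicit-Euler routing with the nonlinear Muskingum storage
    S = K * (x*I + (1-x)*O)**m. A simplified sketch: production codes
    use implicit schemes and K, x, m calibrated by GA/DE/PSO/HS."""
    out = [o0]
    S = K * (x * inflow[0] + (1 - x) * o0) ** m
    for t in range(1, len(inflow)):
        I = inflow[t - 1]
        # invert the storage relation for the outflow
        O = ((S / K) ** (1.0 / m) - x * I) / (1 - x)
        O = max(O, 0.0)
        S += dt * (I - O)       # continuity: dS/dt = I - O
        out.append(O)
    return out

# Hypothetical triangular inflow hydrograph (m^3/s), hourly steps
inflow = [10 + 8 * t for t in range(11)] + [90 - 8 * t for t in range(1, 11)]
outflow = route_nlm(inflow, dt=1.0, K=5.0, x=0.2, m=1.3, o0=inflow[0])
```

The routed hydrograph is attenuated and lagged relative to the inflow, which is the behavior both NLM and VPMM aim to reproduce for real channels.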
Application of Handheld Laser-Induced Breakdown Spectroscopy (LIBS) to Geochemical Analysis.
Connors, Brendan; Somers, Andrew; Day, David
2016-05-01
While laser-induced breakdown spectroscopy (LIBS) has been in use for decades, only within the last two years has technology progressed to the point of enabling true handheld, self-contained instruments. Several instruments are now commercially available with a range of capabilities and features. In this paper, the SciAps Z-500 handheld LIBS instrument functionality and sub-systems are reviewed. Several assayed geochemical sample sets, including igneous rocks and soils, are investigated. Calibration data are presented for multiple elements of interest along with examples of elemental mapping in heterogeneous samples. Sample preparation and the data collection method from multiple locations and data analysis are discussed. © The Author(s) 2016.
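Handheld LIBS quantification typically rests on calibration curves built from assayed standards; a minimal least-squares sketch with invented intensity-concentration pairs (not the instrument's actual calibration data):

```python
def linear_calibration(conc, intensity):
    """Ordinary least-squares calibration line: intensity = a + b*conc."""
    n = len(conc)
    mx = sum(conc) / n
    my = sum(intensity) / n
    num = sum((x - mx) * (y - my) for x, y in zip(conc, intensity))
    den = sum((x - mx) ** 2 for x in conc)
    b = num / den
    a = my - b * mx
    return a, b

# Hypothetical emission-line intensities for assayed standards (ppm)
conc = [0.0, 50.0, 100.0, 200.0, 400.0]
intensity = [12.0, 260.0, 515.0, 1010.0, 2000.0]

a, b = linear_calibration(conc, intensity)

def predict_conc(signal):
    """Invert the calibration line to predict an unknown concentration."""
    return (signal - a) / b
```

In practice each element of interest gets its own curve, and heterogeneous samples are averaged over multiple measurement locations as described above.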
30 CFR 90.203 - Certified person; maintenance and calibration.
Code of Federal Regulations, 2010 CFR
2010-07-01
... COAL MINE SAFETY AND HEALTH MANDATORY HEALTH STANDARDS-COAL MINERS WHO HAVE EVIDENCE OF THE DEVELOPMENT OF PNEUMOCONIOSIS Sampling Procedures § 90.203 Certified person; maintenance and calibration. (a) Approved sampling devices shall be maintained and calibrated by a certified person. (b) To be certified, a...
NASA Astrophysics Data System (ADS)
Cécillon, Lauric; Baudin, François; Chenu, Claire; Houot, Sabine; Jolivet, Romain; Kätterer, Thomas; Lutfalla, Suzanne; Macdonald, Andy; van Oort, Folkert; Plante, Alain F.; Savignac, Florence; Soucémarianadin, Laure N.; Barré, Pierre
2018-05-01
Changes in global soil carbon stocks have considerable potential to influence the course of future climate change. However, a portion of soil organic carbon (SOC) has a very long residence time ( > 100 years) and may not contribute significantly to terrestrial greenhouse gas emissions during the next century. The size of this persistent SOC reservoir is presumed to be large. Consequently, it is a key parameter required for the initialization of SOC dynamics in ecosystem and Earth system models, but there is considerable uncertainty in the methods used to quantify it. Thermal analysis methods provide cost-effective information on SOC thermal stability that has been shown to be qualitatively related to SOC biogeochemical stability. The objective of this work was to build the first quantitative model of the size of the centennially persistent SOC pool based on thermal analysis. We used a unique set of 118 archived soil samples from four agronomic experiments in northwestern Europe with long-term bare fallow and non-bare fallow treatments (e.g., manure amendment, cropland and grassland) as a sample set for which estimating the size of the centennially persistent SOC pool is relatively straightforward. At each experimental site, we estimated the average concentration of centennially persistent SOC and its uncertainty by applying a Bayesian curve-fitting method to the observed declining SOC concentration over the duration of the long-term bare fallow treatment. Overall, the estimated concentrations of centennially persistent SOC ranged from 5 to 11 g C kg-1 of soil (lowest and highest boundaries of four 95 % confidence intervals). Then, by dividing the site-specific concentrations of persistent SOC by the total SOC concentration, we could estimate the proportion of centennially persistent SOC in the 118 archived soil samples and the associated uncertainty. 
The proportion of centennially persistent SOC ranged from 0.14 (standard deviation of 0.01) to 1 (standard deviation of 0.15). Samples were subjected to thermal analysis by Rock-Eval 6 that generated a series of 30 parameters reflecting their SOC thermal stability and bulk chemistry. We trained a nonparametric machine-learning algorithm (random forests multivariate regression model) to predict the proportion of centennially persistent SOC in new soils using Rock-Eval 6 thermal parameters as predictors. We evaluated the model predictive performance with two different strategies. We first used a calibration set (n = 88) and a validation set (n = 30) with soils from all sites. Second, to test the sensitivity of the model to pedoclimate, we built a calibration set with soil samples from three out of the four sites (n = 84). The multivariate regression model accurately predicted the proportion of centennially persistent SOC in the validation set composed of soils from all sites (R2 = 0.92, RMSEP = 0.07, n = 30). The uncertainty of the model predictions was quantified by a Monte Carlo approach that produced conservative 95 % prediction intervals across the validation set. The predictive performance of the model decreased when predicting the proportion of centennially persistent SOC in soils from one fully independent site with a different pedoclimate, yet the mean error of prediction only slightly increased (R2 = 0.53, RMSEP = 0.10, n = 34). This model based on Rock-Eval 6 thermal analysis can thus be used to predict the proportion of centennially persistent SOC with known uncertainty in new soil samples from different pedoclimates, at least for sites that have similar Rock-Eval 6 thermal characteristics to those included in the calibration set. 
Our study reinforces the evidence that there is a link between the thermal and biogeochemical stability of soil organic matter and demonstrates that Rock-Eval 6 thermal analysis can be used to quantify the size of the centennially persistent organic carbon pool in temperate soils.
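The site-level estimation step, fitting the declining bare-fallow SOC toward an asymptote representing the centennially persistent pool, can be sketched with a simple grid-search least-squares fit in place of the Bayesian method used in the study (synthetic data with known parameters):

```python
import math

def fit_persistent_soc(years, soc, cp_grid, k_grid):
    """Least-squares grid search for the exponential bare-fallow decline
    C(t) = Cp + (C0 - Cp) * exp(-k*t), where Cp is the persistent pool.
    A simple stand-in for the Bayesian curve fitting used in the study."""
    c0 = soc[0]
    best = (None, None, float("inf"))
    for cp in cp_grid:
        for k in k_grid:
            sse = sum((cp + (c0 - cp) * math.exp(-k * t) - c) ** 2
                      for t, c in zip(years, soc))
            if sse < best[2]:
                best = (cp, k, sse)
    return best[0], best[1]

# Synthetic decline with known Cp = 8 g C/kg soil and k = 0.02 per year
years = list(range(0, 81, 10))
soc = [8 + 12 * math.exp(-0.02 * t) for t in years]

cp_grid = [v / 10 for v in range(40, 121)]   # 4.0 .. 12.0 g C/kg
k_grid = [v / 1000 for v in range(5, 51)]    # 0.005 .. 0.050 per year
cp_hat, k_hat = fit_persistent_soc(years, soc, cp_grid, k_grid)
```

Dividing the fitted Cp by the total SOC concentration then yields the proportion of persistent carbon used as the regression target for the thermal-analysis model.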
Utilization of Expert Knowledge in a Multi-Objective Hydrologic Model Automatic Calibration Process
NASA Astrophysics Data System (ADS)
Quebbeman, J.; Park, G. H.; Carney, S.; Day, G. N.; Micheletty, P. D.
2016-12-01
Spatially distributed continuous-simulation hydrologic models have a large number of parameters available for adjustment during the calibration process. Traditional manual calibration of such a modeling system is extremely laborious, which has historically motivated the use of automatic calibration procedures. With a large selection of model parameters, high degrees of objective-space fitness - measured with typical metrics such as Nash-Sutcliffe, Kling-Gupta, RMSE, etc. - can easily be achieved using a range of evolutionary algorithms. A concern with this approach is the high degree of compensatory calibration, with many similarly performing solutions and yet grossly varying parameter-set solutions. To help alleviate this concern, and to mimic manual calibration processes, expert knowledge is proposed for inclusion within the multi-objective functions, which evaluate the parameter decision space. As a result, Pareto solutions are identified with high degrees of fitness, but the parameter sets also maintain and utilize available expert knowledge, resulting in more realistic and consistent solutions. This process was tested using the joint SNOW-17 and Sacramento Soil Moisture Accounting (SAC-SMA) model within the Animas River basin in Colorado. Three different elevation zones, each with a range of parameters, resulted in over 35 model parameters being calibrated simultaneously. As a result, high degrees of fitness were achieved, in addition to the development of more realistic and consistent parameter sets such as those typically achieved during manual calibration procedures.
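A minimal sketch of a penalized objective of this kind: a Nash-Sutcliffe fitness term combined with a quadratic penalty for parameters that stray from expert-expected values (the weights, data, and parameter values below are hypothetical, not those of the SNOW-17/SAC-SMA study):

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    svar = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / svar

def expert_penalty(params, expected, tolerance):
    """Quadratic penalty for parameters straying from expert-expected
    values; a simple stand-in for an expert-knowledge objective."""
    return sum(((p - e) / t) ** 2
               for p, e, t in zip(params, expected, tolerance))

# Hypothetical: two parameter sets with identical hydrograph fit but
# different plausibility; the penalized objective separates them
obs = [1.0, 2.0, 4.0, 3.0, 2.0]
sim = [1.1, 1.9, 4.2, 2.9, 2.1]

nse = nash_sutcliffe(obs, sim)
plausible = nse - 0.1 * expert_penalty([0.9], [1.0], [0.5])
implausible = nse - 0.1 * expert_penalty([3.0], [1.0], [0.5])
```

In a multi-objective setting the penalty would form its own objective axis rather than a weighted sum, but the effect is the same: equally fit solutions are ranked by parameter realism.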
Soil specific re-calibration of water content sensors for a field-scale sensor network
NASA Astrophysics Data System (ADS)
Gasch, Caley K.; Brown, David J.; Anderson, Todd; Brooks, Erin S.; Yourek, Matt A.
2015-04-01
Obtaining accurate soil moisture data from a sensor network requires sensor calibration. Soil moisture sensors are factory calibrated, but multiple site specific factors may contribute to sensor inaccuracies. Thus, sensors should be calibrated for the specific soil type and conditions in which they will be installed. Lab calibration of a large number of sensors prior to installation in a heterogeneous setting may not be feasible, and it may not reflect the actual performance of the installed sensor. We investigated a multi-step approach to retroactively re-calibrate sensor water content data from the dielectric permittivity readings obtained by sensors in the field. We used water content data collected since 2009 from a sensor network installed at 42 locations and 5 depths (210 sensors total) within the 37-ha Cook Agronomy Farm with highly variable soils located in the Palouse region of the Northwest United States. First, volumetric water content was calculated from sensor dielectric readings using three equations: (1) a factory calibration using the Topp equation; (2) a custom calibration obtained empirically from an instrumented soil in the field; and (3) a hybrid equation that combines the Topp and custom equations. Second, we used soil physical properties (particle size and bulk density) and pedotransfer functions to estimate water content at saturation, field capacity, and wilting point for each installation location and depth. We also extracted the same reference points from the sensor readings, when available. Using these reference points, we re-scaled the sensor readings, such that water content was restricted to the range of values that we would expect given the physical properties of the soil. The re-calibration accuracy was assessed with volumetric water content measurements obtained from field-sampled cores taken on multiple dates. 
In general, the re-calibration was most accurate when all three reference points (saturation, field capacity, and wilting point) were represented in the sensor readings. We anticipate that obtaining water retention curves for field soils will improve the re-calibration accuracy by providing more precise estimates of saturation, field capacity, and wilting point. This approach may serve as an alternative method for sensor calibration in lieu of or to complement pre-installation calibration.
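The re-scaling step can be sketched as a piecewise-linear map from the sensor-observed reference points to the pedotransfer-derived ones (the water-content values below are hypothetical):

```python
def rescale(reading, sensor_refs, soil_refs):
    """Piecewise-linear re-calibration: map a sensor water-content
    reading onto the range implied by soil physical reference points
    (wilting point, field capacity, saturation)."""
    pairs = sorted(zip(sensor_refs, soil_refs))
    # clamp readings outside the reference range
    if reading <= pairs[0][0]:
        return pairs[0][1]
    if reading >= pairs[-1][0]:
        return pairs[-1][1]
    for (x0, y0), (x1, y1) in zip(pairs, pairs[1:]):
        if x0 <= reading <= x1:
            return y0 + (reading - x0) * (y1 - y0) / (x1 - x0)

# Hypothetical reference points (volumetric water content, m^3/m^3)
sensor_refs = [0.10, 0.28, 0.42]   # sensor-observed wp, fc, sat
soil_refs = [0.13, 0.31, 0.47]     # pedotransfer-derived wp, fc, sat

corrected = rescale(0.35, sensor_refs, soil_refs)
```

Each installation location and depth gets its own reference points, so the map corrects for site-specific bias without a laboratory calibration of every sensor.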
CALIPSO lidar calibration at 532 nm: version 4 nighttime algorithm
NASA Astrophysics Data System (ADS)
Kar, Jayanta; Vaughan, Mark A.; Lee, Kam-Pui; Tackett, Jason L.; Avery, Melody A.; Garnier, Anne; Getzewich, Brian J.; Hunt, William H.; Josset, Damien; Liu, Zhaoyan; Lucker, Patricia L.; Magill, Brian; Omar, Ali H.; Pelon, Jacques; Rogers, Raymond R.; Toth, Travis D.; Trepte, Charles R.; Vernier, Jean-Paul; Winker, David M.; Young, Stuart A.
2018-03-01
Data products from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) on board Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) were recently updated following the implementation of new (version 4) calibration algorithms for all of the Level 1 attenuated backscatter measurements. In this work we present the motivation for and the implementation of the version 4 nighttime 532 nm parallel channel calibration. The nighttime 532 nm calibration is the most fundamental calibration of CALIOP data, since all of CALIOP's other radiometric calibration procedures - i.e., the 532 nm daytime calibration and the 1064 nm calibrations during both nighttime and daytime - depend either directly or indirectly on the 532 nm nighttime calibration. The accuracy of the 532 nm nighttime calibration has been significantly improved by raising the molecular normalization altitude from 30-34 km to 36-39 km, the upper limit of the signal acquisition range, to substantially reduce stratospheric aerosol contamination. Due to the greatly reduced molecular number density and consequently reduced signal-to-noise ratio (SNR) at these higher altitudes, the signal is now averaged over a larger number of samples using data from multiple adjacent granules. Additionally, an enhanced strategy for filtering the radiation-induced noise from high-energy particles was adopted. Further, the meteorological model used in the earlier versions has been replaced by the improved Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2), model. An aerosol scattering ratio of 1.01 ± 0.01 is now explicitly used for the calibration altitude. These modifications lead to globally revised calibration coefficients which are, on average, 2-3 % lower than in previous data releases.
Further, the new calibration procedure is shown to eliminate biases at high altitudes that were present in earlier versions and consequently leads to an improved representation of stratospheric aerosols. Validation results using airborne lidar measurements are also presented. Biases relative to collocated measurements acquired by the Langley Research Center (LaRC) airborne High Spectral Resolution Lidar (HSRL) are reduced from 3.6 % ± 2.2 % in the version 3 data set to 1.6 % ± 2.4 % in the version 4 release.
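The molecular normalization idea, averaging the ratio of the measured signal to the expected molecular return scaled by the assumed aerosol scattering ratio of 1.01, can be sketched as follows (profile values and the calibration constant are invented; the operational algorithm additionally handles range correction, transmission, and noise filtering):

```python
def calibration_constant(signal, molecular, scat_ratio=1.01):
    """Molecular-normalization sketch: average the ratio of the measured
    signal to the expected (molecular backscatter * scattering ratio)
    over the normalization altitude bins."""
    ratios = [s / (scat_ratio * m) for s, m in zip(signal, molecular)]
    return sum(ratios) / len(ratios)

# Hypothetical averaged profile in the 36-39 km normalization region
molecular = [2.4e-4, 2.1e-4, 1.8e-4, 1.6e-4]     # modeled backscatter
true_c = 3.0e10
signal = [true_c * 1.01 * m for m in molecular]  # noise-free example

c_hat = calibration_constant(signal, molecular)
```

In practice the low SNR at 36-39 km forces the averaging across many profiles and multiple adjacent granules before this ratio becomes stable.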
Optical laboratory facilities at the Finnish Meteorological Institute - Arctic Research Centre
NASA Astrophysics Data System (ADS)
Lakkala, Kaisa; Suokanerva, Hanne; Matti Karhu, Juha; Aarva, Antti; Poikonen, Antti; Karppinen, Tomi; Ahponen, Markku; Hannula, Henna-Reetta; Kontu, Anna; Kyrö, Esko
2016-07-01
This paper describes the laboratory facilities at the Finnish Meteorological Institute - Arctic Research Centre (FMI-ARC, http://fmiarc.fmi.fi). They comprise an optical laboratory, a facility for biological studies, and an office. A dark room has been built, in which an optical table and a fixed lamp test system are set up, and the electronics allow high-precision adjustment of the current. The Brewer spectroradiometer, NILU-UV multifilter radiometer, and Analytical Spectral Devices (ASD) spectroradiometer of the FMI-ARC are regularly calibrated or checked for stability in the laboratory. The facilities are ideal for responding to the needs of international multidisciplinary research, giving the possibility to calibrate and characterize the research instruments as well as handle and store samples.
Electronically scanned pressure sensor module with in SITU calibration capability
NASA Technical Reports Server (NTRS)
Gross, C. (Inventor)
1978-01-01
This high-data-rate pressure sensor module helps reduce energy consumption in wind tunnel facilities without loss of measurement accuracy. The sensor module allows a nearly two-order-of-magnitude increase in data rates over conventional electromechanically scanned pressure sampling techniques. The module consists of 16 solid-state pressure sensor chips and signal multiplexing electronics integrally mounted to a four-position pressure selector switch. One of the four positions of the pressure selector switch allows the in situ calibration of the 16 pressure sensors; the three other positions allow 48 channels (three sets of 16) of pressure inputs to be measured by the sensors. The small size of the sensor module allows mounting within many wind tunnel models, thus eliminating long tube lengths and their corresponding slow pressure response.
Computerized tomography calibrator
NASA Technical Reports Server (NTRS)
Engel, Herbert P. (Inventor)
1991-01-01
A set of interchangeable pieces comprising a computerized tomography calibrator, and a method of use thereof, permits focusing of a computerized tomographic (CT) system. The interchangeable pieces include a plurality of nestable, generally planar mother rings adapted for the receipt of planar inserts of predetermined sizes and material densities. The inserts further define openings therein for receipt of plural sub-inserts. All pieces are of known sizes and densities, permitting the assembly of different configurations of materials of known sizes and combinations of densities for calibration (i.e., focusing) of a computerized tomographic system through variation of its operating variables. Rather than serving as a phantom, which is intended to be representative of a particular workpiece to be tested, the set of interchangeable pieces permits simple and easy standardized calibration of a CT system. The calibrator and its related method of use further include the use of air or of particular fluids for filling various openings, as part of a selected configuration of the set of pieces.
Planktonic Foraminifera Proxies Calibration Off the NW Iberian Margin: Nutrients Approach
NASA Astrophysics Data System (ADS)
Salgueiro, E.; Castro, C. G.; Zuniga, D.; Martin, P. A.; Groeneveld, J.; de la Granda, F.; Villaceiros-Robineau, N.; Alonso-Perez, F.; Alberto, A.; Rodrigues, T.; Rufino, M. M.; Abrantes, F. F. G.; Voelker, A. H. L.
2014-12-01
Planktonic foraminifera (PF) shells preserved in marine sediments are a useful tool to reconstruct productivity conditions at different geological timescales. However, the accuracy of these paleoreconstructions depends on the data set and calibration quality. Several calibration studies have defined and improved the use of proxies for productivity and nutrient cycling parameters. Our contribution is centred on a multi-proxy calibration at a regional coastal upwelling system. To minimize the existing uncertainties affecting the use of trace elements and C stable isotopes as productivity proxies in high-productivity upwelling areas, we investigate the content and distribution of Ba/Ca and δ13C in the water column, their transference into planktonic foraminifera shells, and how the Ba/Ca and δ13C signal of living planktonic foraminifera is related to that of the same planktonic foraminiferal species preserved in the sediment record. This study is based on a large data set from two stations (RAIA - 75 m water depth, and CALIBERIA - 350 m water depth) located off the NW Iberian margin (41.5-42.5°N; 9-10°W), and includes: i) two years of monthly water column data (temperature, salinity, nutrients, chlorophyll a, Ba/Ca, and δ13C-DIC); ii) seasonal Ba/Ca and δ13C in several living PF species at both stations; and iii) Ba/Ca and δ13C in several PF species from a large set of core-top sediment samples in the study region. Additionally, total organic carbon and total alkenones were also measured in the sediment. Our results show the link between productivity proxies in the surface sediment foraminifera assemblage and the processes regulating present-day phytoplankton dynamics in an upwelling area. Understanding this relationship has special relevance since it gives fundamental information related to past oceanic biogeochemistry and/or climate and improves the prediction of future changes under possible climate variability due to anthropogenic forcing.
Shinozaki, Takashi; Watanabe, Ryuichi; Kawatsu, Kentaro; Sakurada, Kiyonari; Takahi, Shinya; Ueno, Ken-ichi; Matsushima, Ryoji; Suzuki, Toshiyuki
2013-01-01
We investigated the applicability of an enzyme-linked immunosorbent assay (PSP-ELISA) using a monoclonal antibody against paralytic shellfish toxins (PST) for screening oysters collected at several coastal areas in Kumamoto prefecture, Japan. Oysters collected between 2007 and 2010 were analyzed by PSP-ELISA. As an alternative calibrant, a naturally contaminated oyster extract was used to quantify toxins in the oyster samples. The toxicity of the calibrant oyster extract determined by the official testing method, the mouse bioassay (MBA), was 4 MU/g. Oyster samples collected over 3 years showed a toxin profile similar to the alternative standard, resulting in good agreement between the PSP-ELISA and the MBA. The PSP-ELISA method was better than the MBA in terms of sensitivity, indicating that it may be useful for earlier warning of contamination of oysters by PST in these distinct coastal areas. To use the PSP-ELISA as a screening method prior to MBA, we finally set a screening level of 2 MU/g by PSP-ELISA for oyster monitoring in Kumamoto prefecture. We confirmed that there were no samples exceeding the quarantine level (4 MU/g) in MBA among samples quantified as below the screening level by the PSP-ELISA. It was concluded that the use of PSP-ELISA could reduce the number of animals needed for MBA testing.
Zhang, Lin; Small, Gary W; Arnold, Mark A
2003-11-01
The transfer of multivariate calibration models is investigated between a primary (A) and two secondary Fourier transform near-infrared (near-IR) spectrometers (B, C). The application studied in this work is the use of bands in the near-IR combination region of 5000-4000 cm(-1) to determine physiological levels of glucose in a buffered aqueous matrix containing varying levels of alanine, ascorbate, lactate, triacetin, and urea. The three spectrometers are used to measure 80 samples produced through a randomized experimental design that minimizes correlations between the component concentrations and between the concentrations of glucose and water. Direct standardization (DS), piecewise direct standardization (PDS), and guided model reoptimization (GMR) are evaluated for use in transferring partial least-squares calibration models developed with the spectra of 64 samples from the primary instrument to the prediction of glucose concentrations in 16 prediction samples measured with each secondary spectrometer. The three algorithms are evaluated as a function of the number of standardization samples used in transferring the calibration models. Performance criteria for judging the success of the calibration transfer are established as the standard error of prediction (SEP) for internal calibration models built with the spectra of the 64 calibration samples collected with each secondary spectrometer. These SEP values are 1.51 and 1.14 mM for spectrometers B and C, respectively. When calibration standardization is applied, the GMR algorithm is observed to outperform DS and PDS. With spectrometer C, the calibration transfer is highly successful, producing an SEP value of 1.07 mM. However, an SEP of 2.96 mM indicates unsuccessful calibration standardization with spectrometer B. This failure is attributed to differences in the variance structure of the spectra collected with spectrometers A and B.
Diagnostic procedures are presented for use with the GMR algorithm that forecast the successful calibration transfer with spectrometer C and the unsatisfactory results with spectrometer B.
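The flavor of standardization-based transfer can be sketched in its simplest, one-channel-window form: a per-wavelength slope and offset fitted from standardization samples measured on both instruments (a toy 3-channel example with invented spectra, not the DS/PDS/GMR implementations evaluated in the study):

```python
def channelwise_standardization(secondary_std, primary_std):
    """Per-wavelength slope/offset transfer (conceptually, piecewise
    direct standardization with a one-channel window): for channel j,
    fit primary[j] ≈ slope*secondary[j] + offset over the
    standardization samples."""
    n_chan = len(secondary_std[0])
    maps = []
    for j in range(n_chan):
        x = [row[j] for row in secondary_std]
        y = [row[j] for row in primary_std]
        mx, my = sum(x) / len(x), sum(y) / len(y)
        sxx = sum((v - mx) ** 2 for v in x)
        slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sxx
        maps.append((slope, my - slope * mx))
    return maps

def apply_maps(spectrum, maps):
    """Transform a secondary-instrument spectrum into the primary space."""
    return [s * m[0] + m[1] for s, m in zip(spectrum, maps)]

# Hypothetical 3-channel standardization spectra: the secondary
# instrument has a gain and baseline shift relative to the primary
primary_std = [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0], [1.5, 3.0, 4.5]]
secondary_std = [[1.1 * a + 0.05 for a in row] for row in primary_std]

maps = channelwise_standardization(secondary_std, primary_std)
recovered = apply_maps(secondary_std[0], maps)
```

Full DS fits a transformation matrix across channels and PDS uses a moving window wider than one channel; this sketch only illustrates why matching variance structure between instruments matters for the transfer to succeed.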
Gómez-Carracedo, M P; Andrade, J M; Rutledge, D N; Faber, N M
2007-03-07
Selecting the correct dimensionality is critical for obtaining partial least squares (PLS) regression models with good predictive ability. Although calibration and validation sets are best established using experimental designs, industrial laboratories cannot afford such an approach. Typically, samples are collected in a (formally) undesigned way, spread over time, and their measurements are included in routine measurement processes. This makes it hard to evaluate PLS model dimensionality. In this paper, classical criteria (leave-one-out cross-validation and the adjusted Wold's criterion) are compared to recently proposed alternatives (smoothed PLS-PoLiSh and a randomization test) to seek out the optimum dimensionality of PLS models. Kerosene (jet fuel) samples were measured by attenuated total reflectance-mid-IR spectrometry and their spectra were used to predict eight important properties determined using reference methods that are time-consuming and prone to analytical errors. The alternative methods were shown to give reliable dimensionality predictions when compared to external validation. By contrast, the simpler methods seemed to be largely affected by the largest changes in the modeling capabilities of the first components.
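The adjusted Wold criterion mentioned above can be sketched as a stopping rule on the ratio of successive cross-validated PRESS values (the PRESS list and threshold below are hypothetical):

```python
def wold_dimensionality(press, threshold=0.95):
    """Adjusted Wold's R criterion: choose the first number of latent
    variables k at which PRESS(k+1)/PRESS(k) exceeds the threshold,
    i.e., adding a component no longer reduces prediction error enough."""
    for k in range(len(press) - 1):
        if press[k + 1] / press[k] > threshold:
            return k + 1  # components are 1-indexed
    return len(press)

# Hypothetical PRESS values from leave-one-out cross-validation,
# indexed by number of latent variables (1, 2, 3, ...)
press = [10.0, 4.0, 2.0, 1.2, 1.15, 1.14]
k_opt = wold_dimensionality(press)
```

With undesigned industrial sample sets, the PRESS curve often flattens gradually rather than showing a clear minimum, which is why the randomization-based alternatives compared in the paper can be more reliable.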
Calibration of GRB Luminosity Relations with Cosmography
NASA Astrophysics Data System (ADS)
Gao, He; Liang, Nan; Zhu, Zong-Hong
For the use of gamma-ray bursts (GRBs) to probe cosmology in a cosmology-independent way, a new method has been proposed to obtain luminosity distances of GRBs by interpolating directly from the Hubble diagram of SNe Ia, and then calibrating GRB relations at high redshift. In this paper, following the basic assumption of the interpolation method that objects at the same redshift should have the same luminosity distance, we propose another approach to calibrate GRB luminosity relations, with cosmographic fitting directly from SN Ia data. In cosmography, there is a well-known fitting formula that reflects the Hubble relation between luminosity distance and redshift through cosmographic parameters that can be fitted from observational data. Using the cosmographic fitting results from the Union set of SNe Ia, we calibrate five GRB relations using the GRB sample at z ≤ 1.4 and deduce the distance moduli of GRBs at 1.4 < z ≤ 6.6 by extrapolating the calibrated relations to high redshift. Finally, we constrain the dark energy parameterization models of Chevallier-Polarski-Linder (CPL), Jassal-Bagla-Padmanabhan (JBP) and Alam with the GRB data at high redshift, as well as with the cosmic microwave background radiation (CMB) and baryonic acoustic oscillation (BAO) observations, and we find that the ΛCDM model is consistent with the current data within the 1-σ confidence region.
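The cosmographic fitting formula referred to above is, to third order and in the flat case, a Taylor expansion of the luminosity distance in redshift with the Hubble constant H0, deceleration parameter q0 and jerk j0 as coefficients; a sketch (parameter values are illustrative defaults, not the Union-set fits):

```python
import math

C_KM_S = 299792.458  # speed of light, km/s

def dl_cosmography(z, h0=70.0, q0=-0.5, j0=1.0):
    """Third-order cosmographic luminosity distance (flat case), in Mpc."""
    term1 = (1.0 - q0) / 2.0
    term2 = -(1.0 - q0 - 3.0 * q0**2 + j0) / 6.0
    return (C_KM_S * z / h0) * (1.0 + term1 * z + term2 * z * z)

def distance_modulus(z, **kw):
    """mu = 5 log10(d_L / Mpc) + 25, used to place GRBs on the Hubble diagram."""
    return 5.0 * math.log10(dl_cosmography(z, **kw)) + 25.0
```

Fitting (h0, q0, j0) to SN Ia data and then evaluating `distance_modulus` at each GRB redshift is the calibration step the paper describes; the series form is only trustworthy over the redshift range where the expansion converges.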
High-efficiency non-uniformity correction for wide dynamic linear infrared radiometry system
NASA Astrophysics Data System (ADS)
Li, Zhou; Yu, Yi; Tian, Qi-Jie; Chang, Song-Tao; He, Feng-Yun; Yin, Yan-He; Qiao, Yan-Feng
2017-09-01
Several different integration times are always set for a wide-dynamic-range, linear, continuously variable integration time infrared radiometry system; traditional calibration-based non-uniformity correction (NUC) is therefore usually conducted for each integration time in turn, and several calibration sources are required, which makes the calibration and NUC process time-consuming. In this paper, the difference between NUC coefficients at different integration times is discussed, and a novel NUC method called high-efficiency NUC, which builds on traditional calibration-based NUC, is proposed. It obtains the correction coefficients for all integration times over the whole linear dynamic range by recording only three different images of a standard blackbody. The mathematical procedure of the proposed non-uniformity correction method is first validated, and its performance is then demonstrated on a 400 mm diameter ground-based infrared radiometry system. Experimental results show that the mean value of the Normalized Root Mean Square (NRMS) error is reduced from 3.78% to 0.24% by the proposed method. In addition, the results at 4 ms and 70 °C show that this method has a higher accuracy than traditional calibration-based NUC, while a good correction effect is still obtained at other integration times and temperatures. Moreover, the method greatly reduces the number of correction times and temperature sampling points, offers good real-time performance, and is suitable for field measurement.
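The classical calibration-based NUC that the proposed method generalizes derives a per-pixel gain and offset from two uniform blackbody frames; a minimal two-point sketch (the paper's high-efficiency variant, which covers all integration times from three blackbody images, is not reproduced here):

```python
import numpy as np

def two_point_nuc(x_low, x_high):
    """Per-pixel gain/offset from two uniform blackbody frames.

    Forces every pixel's response onto the line through the two
    frame means, the standard two-point correction.
    """
    m_low, m_high = x_low.mean(), x_high.mean()
    gain = (m_high - m_low) / (x_high - x_low)
    offset = m_low - gain * x_low
    return gain, offset

def correct(raw, gain, offset):
    """Apply the correction to a raw frame."""
    return gain * raw + offset
```

For a detector that is linear in flux, any intermediate uniform scene comes out spatially flat after correction, which is what the NRMS figures in the abstract quantify.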
ATLAS Tile Calorimeter calibration and monitoring systems
NASA Astrophysics Data System (ADS)
Cortés-González, Arely
2018-01-01
The ATLAS Tile Calorimeter is the central section of the hadronic calorimeter of the ATLAS experiment and provides important information for the reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength-shifting fibres to photomultiplier tubes located in the outer part of the calorimeter. Neutral particles may also produce a signal after interacting with the material and producing charged particles. The readout is segmented into about 5000 cells, each of them read out by two photomultipliers in parallel. To calibrate and monitor the stability and performance of each part of the readout chain during data taking, a set of calibration systems is used, comprising Cesium radioactive sources, a laser, charge-injection elements and an integrator-based readout system. Information from all systems makes it possible to monitor and equalise the calorimeter response at each stage of the signal production, from scintillation light to digitisation. Calibration runs are monitored from a data quality perspective and used as a cross-check for physics runs. The data quality efficiency achieved during 2016 was 98.9%. The calibration and stability results reported here show that the TileCal performance is within the design requirements and that the calorimeter has made an essential contribution to reconstructed objects and physics results.
NASA Astrophysics Data System (ADS)
Arabshahi, P.; Chao, Y.; Chien, S.; Gray, A.; Howe, B. M.; Roy, S.
2008-12-01
In many areas of Earth science, including climate change research, there is a need for near real-time integration of data from heterogeneous and spatially distributed sensors, in particular in-situ and space- based sensors. The data integration, as provided by a smart sensor web, enables numerous improvements, namely, 1) adaptive sampling for more efficient use of expensive space-based sensing assets, 2) higher fidelity information gathering from data sources through integration of complementary data sets, and 3) improved sensor calibration. The specific purpose of the smart sensor web development presented here is to provide for adaptive sampling and calibration of space-based data via in-situ data. Our ocean-observing smart sensor web presented herein is composed of both mobile and fixed underwater in-situ ocean sensing assets and Earth Observing System (EOS) satellite sensors providing larger-scale sensing. An acoustic communications network forms a critical link in the web between the in-situ and space-based sensors and facilitates adaptive sampling and calibration. After an overview of primary design challenges, we report on the development of various elements of the smart sensor web. These include (a) a cable-connected mooring system with a profiler under real-time control with inductive battery charging; (b) a glider with integrated acoustic communications and broadband receiving capability; (c) satellite sensor elements; (d) an integrated acoustic navigation and communication network; and (e) a predictive model via the Regional Ocean Modeling System (ROMS). Results from field experiments, including an upcoming one in Monterey Bay (October 2008) using live data from NASA's EO-1 mission in a semi closed-loop system, together with ocean models from ROMS, are described. Plans for future adaptive sampling demonstrations using the smart sensor web are also presented.
Tian, Kuang-da; Qiu, Kai-xian; Li, Zu-hong; Lü, Ya-qiong; Zhang, Qiu-ju; Xiong, Yan-mei; Min, Shun-geng
2014-12-01
The purpose of the present paper is to determine calcium and magnesium in tobacco using NIR combined with least squares-support vector machine (LS-SVM) regression. Five hundred ground and dried tobacco samples from Qujing city, Yunnan province, China, were surveyed by a MATRIX-I spectrometer (Bruker Optics, Bremen, Germany). At the beginning of data processing, outlier samples were eliminated for stability of the model. The remaining 487 samples were divided into several calibration sets and validation sets according to a hybrid modeling strategy. Monte-Carlo cross validation was used to choose the best spectral preprocessing method from multiplicative scatter correction (MSC), standard normal variate transformation (SNV), S-G smoothing, 1st derivative, etc., and their combinations. To optimize the parameters of the LS-SVM model, multilayer grid search and 10-fold cross validation were applied. The final LS-SVM models with the optimized parameters were trained on the calibration set and assessed with 287 validation samples picked by the Kennard-Stone method. For the quantitative model of calcium in tobacco, Savitzky-Golay FIR smoothing with frame size 21 showed the best performance. The regularization parameter λ of LS-SVM was e^16.11, while the bandwidth of the RBF kernel σ² was e^8.42. The determination coefficient for calibration (Rc²) was 0.9755 and the determination coefficient for prediction (Rp²) was 0.9422, better than the performance of the PLS model (Rc²=0.9593, Rp²=0.9344). For the quantitative analysis of magnesium, SNV made the regression model more precise than the other preprocessing methods. The optimized λ was e^15.25 and σ² was e^6.32. Rc² and Rp² were 0.9961 and 0.9301, respectively, better than the PLS model (Rc²=0.9716, Rp²=0.8924). After modeling, the whole process of NIR scanning and data analysis for one sample took only tens of seconds.
The overall results show that NIR spectroscopy combined with LS-SVM can be efficiently utilized for rapid and accurate analysis of calcium and magnesium in tobacco.
Galileo SSI/Ida Radiometrically Calibrated Images V1.0
NASA Astrophysics Data System (ADS)
Domingue, D. L.
2016-05-01
This data set includes Galileo Orbiter SSI radiometrically calibrated images of the asteroid 243 Ida, created using ISIS software and assuming nadir pointing. This is an original delivery of radiometrically calibrated files, not an update to existing files. All images archived include the asteroid within the image frame. Calibration was performed in 2013-2014.
Dinç, Erdal; Ozdemir, Abdil
2005-01-01
A multivariate chromatographic calibration technique was developed for the quantitative analysis of binary mixtures of enalapril maleate (EA) and hydrochlorothiazide (HCT) in tablets in the presence of losartan potassium (LST). The mathematical algorithm of the multivariate chromatographic calibration is based on linear regression equations constructed from the relationship between concentration and peak area at a set of five wavelengths. The algorithm of this calibration model, which has a simple mathematical content, is briefly described. The approach is a powerful mathematical tool for optimum chromatographic multivariate calibration and for the elimination of fluctuations arising from instrumental and experimental conditions. The multivariate chromatographic calibration involves the reduction of multivariate linear regression functions to a univariate data set. Validation of the model was carried out by analyzing various synthetic binary mixtures and by using the standard addition technique. The developed calibration technique was applied to the analysis of real pharmaceutical tablets containing EA and HCT. The results obtained were compared with those obtained by a classical HPLC method, and the proposed multivariate chromatographic calibration was observed to give better results.
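One plausible reading of the five-wavelength reduction, sketched under assumed synthetic data (all function and variable names are ours): each analyte's peak area is regressed on concentration at every wavelength, and a test sample's five readings are reduced to a single concentration estimate by least squares.

```python
import numpy as np

def calibrate(conc, areas):
    """Fit area = m*conc + c per wavelength; areas: (n_samples, n_wavelengths)."""
    n_wl = areas.shape[1]
    m, c = np.empty(n_wl), np.empty(n_wl)
    for w in range(n_wl):
        m[w], c[w] = np.polyfit(conc, areas[:, w], 1)
    return m, c

def predict(area_vec, m, c):
    """Reduce the multi-wavelength readings to one concentration by lstsq."""
    sol, *_ = np.linalg.lstsq(m.reshape(-1, 1), area_vec - c, rcond=None)
    return sol[0]
```

Pooling the five wavelengths this way averages out wavelength-specific instrumental fluctuations, which is the stated motivation of the approach.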
Task Identification and Evaluation System (TIES)
1991-08-01
…Calibrate AN/AVM-11A HUD test sets -- 127. Calibrate AN/AWM-55 ASCU test sets -- 128. Calibrate 500 RM tally punched tape readers -- 129. Perform … AVM-11A HUD test sets -- … 132. Perform fault isolation of AN/AWM-55 ASCU test sets -- 133. Perform fault isolation of 500 RM tally punched tape … -- … AN/AVM-11A HUD test sets -- 137. Perform self-tests of AN/AWM-55 ASCU test sets -- G. MAINTAINING A-7D MANUAL TEST SETS -- 138. Adjust SM-661/AS-388 air…
Oceanic Whitecaps and Associated, Bubble-Mediated, Air-Sea Exchange Processes
1992-10-01
…experiments performed in laboratory conditions using the Air-Sea Exchange Monitoring System (A-SEMS). EXPERIMENTAL SET-UP In a first look, the Air-Sea Exchange… Model 225, equipped with a Model 519 plug-in module. Other complementary information on A-SEMS along with results from first tests and calibration… between 9.5°C and 22.4°C within the first 24 hours after transferring the water sample into laboratory conditions. The results show an enhancement of…
A multi-objective approach to improve SWAT model calibration in alpine catchments
NASA Astrophysics Data System (ADS)
Tuo, Ye; Marcolini, Giorgia; Disse, Markus; Chiogna, Gabriele
2018-04-01
Multi-objective hydrological model calibration can represent a valuable solution to reduce model equifinality and parameter uncertainty. The Soil and Water Assessment Tool (SWAT) model is widely applied to investigate water quality and water management issues in alpine catchments. However, the model calibration is generally based on discharge records only, and most of the previous studies have defined a unique set of snow parameters for an entire basin. Only a few studies have considered snow observations to validate model results or have taken into account the possible variability of snow parameters for different subbasins. This work presents and compares three possible calibration approaches. The first two procedures are single-objective calibration procedures, for which all parameters of the SWAT model were calibrated according to river discharge alone. Procedures I and II differ from each other by the assumption used to define snow parameters: The first approach assigned a unique set of snow parameters to the entire basin, whereas the second approach assigned different subbasin-specific sets of snow parameters to each subbasin. The third procedure is a multi-objective calibration, in which we considered snow water equivalent (SWE) information at two different spatial scales (i.e. subbasin and elevation band), in addition to discharge measurements. We tested these approaches in the Upper Adige river basin where a dense network of snow depth measurement stations is available. Only the set of parameters obtained with this multi-objective procedure provided an acceptable prediction of both river discharge and SWE. These findings offer the large community of SWAT users a strategy to improve SWAT modeling in alpine catchments.
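A multi-objective calibration of the kind described scores each candidate parameter set on both discharge and SWE goodness-of-fit; the aggregation below (a weighted sum of 1 − NSE terms, to be minimised) is a common generic choice, not necessarily the exact formulation used in the study:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency; 1 is a perfect fit."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - ((obs - sim) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()

def multi_objective(q_obs, q_sim, swe_obs, swe_sim, w_q=0.5, w_swe=0.5):
    """Weighted distance from a perfect fit over discharge and SWE.

    Minimising this jointly penalises parameter sets that match the
    hydrograph while misrepresenting the snowpack, reducing equifinality.
    """
    return w_q * (1.0 - nse(q_obs, q_sim)) + w_swe * (1.0 - nse(swe_obs, swe_sim))
```

In the subbasin-specific variant, the SWE term would itself be a sum over subbasins or elevation bands, each compared against its own observation series.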
NASA Astrophysics Data System (ADS)
Black, D. E.; Abahazi, M. A.; Thunell, R. C.; Tappa, E. J.
2005-12-01
Most geochemical paleoclimate proxies are calibrated to different climate variables using laboratory culture, surface sediment, or sediment trap experiments. The varved, high-deposition rate sediments of the Cariaco Basin (Venezuela) provide the nearly unique opportunity to compare and calibrate paleoceanographic proxy data directly against true oceanic historical instrumental climate records. Here we present one of the first sediment-derived foraminiferal-Mg/Ca to SST calibrations spanning A. D. 1870-1990. The record of Mg/Ca-estimated tropical North Atlantic SSTs is then extended back to approximately A. D. 1200. Box core PL07-73 BC, recovered from the northeastern slope of Cariaco Basin, was sampled at consecutive 1 mm increments and processed for foraminiferal population, stable isotope, and Mg/Ca (by ICP-AES) analyses. The age model for this core was established by correlating faunal population records from PL07-73 to a nearby very well-dated Cariaco Basin box core, PL07-71 BC. The resulting age model yields consecutive sample intervals of one to two years. Mg/Ca ratios measured on Globigerina bulloides in samples deposited between A. D. 1870 and 1990 were calibrated to monthly SSTs from the Met Office Hadley Centre's SST data set for the Cariaco Basin grid square. Annual correlations between G. bulloides Mg/Ca and instrumental SST were highest (r=0.6, p<.0001, n=120) for the months of March, April, and May, the time when sediment trap studies indicate G. bulloides is most abundant in the basin. The full-length Mg/Ca-estimated SST record is characterized by decadal- and centennial-scale variability. The tropical western North Atlantic does not appear to have experienced a pronounced Medieval Warm Period relative to the complete record. However, strong Little Ice Age cooling of as much as 3 °C occurred between A. D. 1525 and 1625. Spring SSTs gradually rose between A. D. 1650 and 1900 followed by a 2.5 °C warming over the 20th century.
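Foraminiferal Mg/Ca thermometry of this kind rests on an exponential calibration, Mg/Ca = B·exp(A·T); the constants below are typical literature values for planktonic foraminifera, inserted purely for illustration and not the coefficients derived from this record:

```python
import math

# Illustrative exponential-calibration constants (not this study's fit)
A = 0.09  # sensitivity, per degC
B = 0.5   # pre-exponential constant, mmol/mol

def mgca_from_sst(t):
    """Forward calibration: shell Mg/Ca (mmol/mol) at temperature t (degC)."""
    return B * math.exp(A * t)

def sst_from_mgca(mgca):
    """Invert the exponential calibration to estimate SST (degC)."""
    return math.log(mgca / B) / A
```

The calibration step in the abstract amounts to fitting A and B against the instrumental SSTs for 1870-1990, then applying `sst_from_mgca` downcore to A. D. 1200.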
NASA Astrophysics Data System (ADS)
Iwema, J.; Rosolem, R.; Baatz, R.; Wagener, T.; Bogena, H. R.
2015-07-01
The Cosmic-Ray Neutron Sensor (CRNS) can provide soil moisture information at scales relevant to hydrometeorological modelling applications. Site-specific calibration is needed to translate CRNS neutron intensities into footprint-average soil moisture contents. We investigated temporal sampling strategies for calibration of three CRNS parameterisations (modified N0, HMF, and COSMIC) by assessing the effects of the number of sampling days and of soil wetness conditions on the performance of the calibration results, using actual neutron intensity measurements for three sites with distinct climate and land use: a semi-arid site, a temperate grassland, and a temperate forest. When calibrated with 1 year of data, both COSMIC and the modified N0 method performed better than HMF. The performance of COSMIC was remarkably good at the semi-arid site in the USA, while the modified N0 method performed best at the two temperate sites in Germany. The successful performance of COSMIC at all three sites can be attributed to the benefits of explicitly resolving individual soil layers (which is not accounted for in the other two parameterisations). To better calibrate these parameterisations, we recommend that in situ soil samples be collected on more than a single day; however, little improvement is observed for sampling on more than 6 days. At the semi-arid site, the modified N0 method was calibrated better under site-specific average wetness conditions, whereas HMF and COSMIC were calibrated better under drier conditions. Average soil wetness conditions gave better calibration results at the two humid sites. The calibration results for the HMF method were better when calibrated with combinations of days with similar soil wetness conditions, as opposed to the modified N0 method and COSMIC, which profited from using days with distinct wetness conditions. Errors in actual neutron intensities were translated into site-specific average errors.
At the semi-arid site, these errors were below the typical measurement uncertainties from in situ point-scale sensors and satellite remote sensing products. Nevertheless, at the two humid sites, reduction in uncertainty with increasing sampling days only reached typical errors associated with satellite remote sensing products. The outcomes of this study can be used by researchers as a CRNS calibration strategy guideline.
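The (unmodified) N0 method referred to above converts moderated neutron counts to soil moisture through a hyperbolic shape function; a sketch using the standard literature shape parameters (the modified N0 and COSMIC parameterisations evaluated in the study add further correction terms not shown here):

```python
# Standard N0-method shape parameters from the CRNS literature
A0, A1, A2 = 0.0808, 0.372, 0.115

def theta_g(N, N0):
    """Gravimetric water content (g/g) from neutron count rate N,
    given the site calibration parameter N0 (counts over dry soil)."""
    return A0 / (N / N0 - A1) - A2

def neutrons(theta, N0):
    """Inverse relation: expected count rate for water content theta."""
    return N0 * (A1 + A0 / (theta + A2))
```

Calibration then reduces to choosing N0 so that `theta_g` matches the footprint-average of the in situ soil samples on the chosen sampling days, which is why the number and wetness of those days matters.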
Hybrid dynamic radioactive particle tracking (RPT) calibration technique for multiphase flow systems
NASA Astrophysics Data System (ADS)
Khane, Vaibhav; Al-Dahhan, Muthanna H.
2017-04-01
The radioactive particle tracking (RPT) technique has been utilized to measure three-dimensional hydrodynamic parameters for multiphase flow systems. An analytical solution to the inverse problem of the RPT technique, i.e. finding the instantaneous tracer positions based upon the instantaneous counts received in the detectors, is not possible. Therefore, a calibration to obtain a counts-distance map is needed. The conventional RPT calibration method has major shortcomings that limit its applicability in practical applications. In this work, the design and development of a novel dynamic RPT calibration technique are carried out to overcome these shortcomings. The dynamic RPT calibration technique has been implemented around a test reactor 1 foot in diameter and 1 foot in height, using Cobalt-60 as an isotope tracer particle. Two sets of experiments have been carried out to test the capability of the novel dynamic RPT calibration. In the first set of experiments, a manual calibration apparatus was used to hold the tracer particle at known static locations. In the second set of experiments, the tracer particle was moved vertically downwards along a straight-line path in a controlled manner. The reconstructed tracer particle positions were compared with the actual known positions and the reconstruction errors were estimated. The obtained results revealed that the dynamic RPT calibration technique is capable of identifying tracer particle positions with a reconstruction error between 1 and 5.9 mm for the conditions studied, which could be improved depending on various factors outlined here.
NASA Astrophysics Data System (ADS)
Khrustalev, K.
2016-12-01
The current process for the calibration of the beta-gamma detectors used for radioxenon isotope measurements for CTBT purposes is laborious and time consuming. It uses a combination of point sources and gaseous sources, resulting in differences between energy and resolution calibrations. The emergence of high-resolution SiPIN-based electron detectors allows improvements to be made in the calibration and analysis process. Thanks to the high electron resolution of SiPIN detectors (8-9 keV at 129 keV) compared to plastic scintillators (35 keV at 129 keV), many more conversion-electron (CE) peaks (from radioxenon and radon progenies) can be resolved and used for energy and resolution calibration in the energy range of the CTBT-relevant radioxenon isotopes. The long-term stability of the SiPIN energy calibration allows one to significantly reduce the time of the QC measurements needed for checking the stability of the energy/resolution (E/R) calibration. The second-order polynomials currently used for fitting the E/R calibration are unphysical and should be replaced by a linear energy calibration for NaI and SiPIN, owing to the high linearity and dynamic range of modern digital DAQ systems, and the resolution calibration functions should be modified to reflect the underlying physical processes. Alternatively, one can abandon fitting functions entirely and use only point values of E/R (similar to the efficiency calibration currently used) at the energies relevant for the isotopes of interest (ROI, Regions Of Interest). The current analysis treats the detector as a set of single-channel analysers, with an established set of coefficients relating the positions of the ROIs to the positions of the QC peaks. The analysis of the spectra can be made more robust by using peak and background fitting in the ROIs, with a single free parameter (the peak area) for the potential peaks from the known isotopes and a fixed set of E/R calibration values.
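The linear energy calibration advocated above amounts to a first-degree least-squares fit of known CE peak energies against measured channel centroids; the peak list in this sketch is hypothetical, purely to show the mechanics:

```python
import numpy as np

def linear_energy_cal(channels, energies):
    """Least-squares linear fit E = a*ch + b to known peak positions."""
    a, b = np.polyfit(channels, energies, 1)
    return a, b

# Hypothetical CE peak centroids (channels) and known energies (keV);
# a real calibration would use resolved radioxenon/radon-progeny lines.
channels = np.array([52.0, 130.0, 200.0, 320.0])
energies = np.array([45.0, 129.4, 199.8, 320.1])
a, b = linear_energy_cal(channels, energies)
```

With a stable detector, `a` and `b` change slowly, so QC runs only need to confirm the fit rather than re-derive it, which is the time saving the abstract points to.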
NASA Astrophysics Data System (ADS)
Zou, Wen-bo; Chong, Xiao-meng; Wang, Yan; Hu, Chang-qin
2018-05-01
The accuracy of NIR quantitative models depends on calibration samples with concentration variability. Conventional sample-collecting methods have shortcomings, especially their time-consuming nature, which remains a bottleneck in the application of NIR models for Process Analytical Technology (PAT) control. A study was performed to solve the problem of sample collection for the construction of NIR quantitative models. Amoxicillin and potassium clavulanate oral dosage forms were used as examples. The aim was to find a general approach to rapidly construct NIR quantitative models using an NIR spectral library, based on the idea of a universal model [2021]. The NIR spectral library of amoxicillin and potassium clavulanate oral dosage forms consisted of spectra of 377 batches of samples produced by 26 domestic pharmaceutical companies, including tablets, dispersible tablets, chewable tablets, oral suspensions, and granules. The correlation coefficient (rT) was used to indicate the similarity of the spectra. The calibration sets were selected from the spectral library according to the median rT of the samples to be analyzed; the rT of the samples selected was close to the median rT, with a difference in rT of 1.0% to 1.5%. We concluded that sample selection is not a problem when constructing NIR quantitative models from a spectral library, in contrast to conventional methods of building universal models. Sample spectra with a suitable concentration range for the NIR models were collected quickly, and the models constructed through this method were more targeted.
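The median-rT selection rule can be sketched as follows; the `window` default mirrors the 1.0-1.5% rT spread quoted above, and all function names are ours, not the paper's:

```python
import numpy as np

def r_t(spec_a, spec_b):
    """Pearson correlation between two spectra (the similarity index rT)."""
    return np.corrcoef(spec_a, spec_b)[0, 1]

def select_calibration_set(target, library, window=0.015):
    """Pick library spectra whose rT to the target lies near the median rT."""
    r = np.array([r_t(target, s) for s in library])
    med = np.median(r)
    idx = np.where(np.abs(r - med) <= window)[0]
    return idx, r
```

Selecting around the median rather than the maximum avoids building the calibration set only from near-duplicates of the target spectrum, preserving concentration variability.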
Pareja, Jhon; López, Sebastian; Jaramillo, Daniel; Hahn, David W; Molina, Alejandro
2013-04-10
The performances of traditional laser-induced breakdown spectroscopy (LIBS) and laser ablation-LIBS (LA-LIBS) were compared by quantifying the total elemental concentration of potassium in highly heterogeneous solid samples, namely soils. Calibration curves for a set of fifteen samples with a wide range of potassium concentrations were generated. The LA-LIBS approach produced a superior linear response compared with the traditional LIBS scheme. The analytical response of LA-LIBS was tested with a large set of different soil samples for the quantification of the total concentration of Fe, Mn, Mg, Ca, Na, and K. Results showed an acceptable linear response for Ca, Fe, Mg, and K, while poor signal responses were found for Na and Mn. Signs of remaining matrix effects for the LA-LIBS approach in the case of soil analysis were found and discussed. Finally, some improvements and possibilities for future studies toward quantitative soil analysis with the LA-LIBS technique are suggested.
NASA Astrophysics Data System (ADS)
Romero-Dávila, E.; Miranda, J.; Pineda, J. C.
2015-07-01
Elemental analyses of samples of Mexican varieties of dried chili peppers were carried out using X-ray Fluorescence (XRF). Several specimens of Capsicum annuum L., Capsicum chinense, and Capsicum pubescens were analyzed and the results compared to previous studies of elemental contents in other varieties of Capsicum annuum (ancho, morita, chilpotle, guajillo, pasilla, and árbol). The first set of samples was bought packaged in markets. In the present work, the study focuses on home-grown samples of the árbol and chilpotle varieties, commercial habanero (Capsicum chinense), as well as commercial and home-grown specimens of manzano (Capsicum pubescens). Samples were freeze dried and pelletized. XRF analyses were carried out using a spectrometer based on an Rh X-ray tube with a Si-PIN detector. The detection system calibration was performed through the analysis of the NIST certified reference materials 1547 (peach leaves) and 1574 (tomato leaves), while accuracy was checked with the reference material 1571 (orchard leaves). Elemental contents of all elements in the new set of samples were similar to those of the first group. Nevertheless, it was found that commercial samples contain high amounts of Br, while home-grown varieties do not.
The Role of Feedback on Studying, Achievement and Calibration.
ERIC Educational Resources Information Center
Chu, Stephanie T. L.; Jamieson-Noel, Dianne L.; Winne, Philip H.
One set of hypotheses examined in this study was that various types of feedback (outcome, process, and corrective) supply different information about performance and have different effects on studying processes and on achievement. Another set of hypotheses concerned students' calibration, their accuracy in predicting and postdicting achievement…
Męczykowska, Hanna; Kobylis, Paulina; Stepnowski, Piotr; Caban, Magda
2017-05-04
Passive sampling is one of the most efficient methods of monitoring pharmaceuticals in environmental water. The reliability of the process relies on a correctly performed calibration experiment and a well-defined sampling rate (Rs) for target analytes. Therefore, in this review the state-of-the-art methods of passive sampler calibration for the most popular pharmaceuticals (antibiotics, hormones, β-blockers and non-steroidal anti-inflammatory drugs, NSAIDs), along with the variation in sampling rate, are presented. The advantages and difficulties of laboratory and field calibration are pointed out, according to the need to control the exact conditions. Sampling-rate calculating equations and all the factors affecting the Rs value (temperature, flow, pH, salinity of the donor phase and biofouling) are discussed. Moreover, various calibration parameters gathered from the literature published in the last 16 years, including the device types, are tabulated and compared. What is evident is that the sampling rate values for pharmaceuticals are affected by several factors whose influence is still unclear and unpredictable, while there is a large gap in experimental data. It appears that the calibration procedure needs to be improved; for example, there is a significant deficiency of PRCs (Performance Reference Compounds) for pharmaceuticals. One suggestion is to introduce correction factors for Rs values estimated under laboratory conditions.
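In the linear (integrative) uptake regime, the sampling rate links the analyte mass accumulated on the sampler to the time-weighted average water concentration; a one-line sketch of that standard relation:

```python
def twa_concentration(mass_ng, rs_l_per_day, days):
    """Time-weighted average water concentration (ng/L) in the linear
    uptake regime: C_TWA = M / (Rs * t)."""
    return mass_ng / (rs_l_per_day * days)
```

This is why an uncertain Rs propagates directly into the reported concentration: halving Rs doubles the estimated C_TWA, which motivates the correction factors and PRC approaches discussed above.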
Tan, Chao; Chen, Hui; Wang, Chao; Zhu, Wanping; Wu, Tong; Diao, Yuanbo
2013-03-15
Near- and mid-infrared (NIR/MIR) spectroscopy techniques have gained great acceptance in industry due to their multiple applications and versatility. However, successful application often depends heavily on the construction of accurate and stable calibration models. For this purpose, a simple multi-model fusion strategy is proposed. It is the combination of a Kohonen self-organizing map (KSOM), mutual information (MI) and partial least squares (PLS), and is therefore named KMICPLS. It works as follows: first, the original training set is fed into a KSOM for unsupervised clustering of samples, from which a series of training subsets is constructed. Thereafter, on each training subset, an MI spectrum is calculated and only the variables with MI values higher than the mean are retained, based on which a candidate PLS model is constructed. Finally, a fixed number of PLS models is selected to produce a consensus model. Two NIR/MIR spectral datasets from the brewing industry are used for experiments. The results confirm its superior performance over two reference algorithms, i.e., conventional PLS and genetic algorithm-PLS (GAPLS). It can build more accurate and stable calibration models without increasing the complexity, and can be generalized to other NIR/MIR applications.
The lick-index calibration of the Gemini multi-object spectrographs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Puzia, Thomas H.; Miller, Bryan W.; Trancho, Gelys
2013-06-01
We present the calibration of the spectroscopic Lick/IDS standard line-index system for measurements obtained with the Gemini Multi-Object Spectrographs known as GMOS-North and GMOS-South. We provide linear correction functions for each of the 25 standard Lick line indices for the B600 grism and two instrumental setups, one with 0.''5 slit width and 1 × 1 CCD pixel binning (corresponding to ∼2.5 Å spectral resolution) and the other with 0.''75 slit width and 2 × 2 binning (∼4 Å). We find small and well-defined correction terms for the set of Balmer indices Hβ, Hγ_A, and Hδ_A along with the metallicity-sensitive indices Fe5015, Fe5270, Fe5335, Fe5406, Mg_2, and Mgb that are widely used for stellar population diagnostics of distant stellar systems. We find that other indices are less robustly calibrated: indices that sample molecular absorption bands with very wide wavelength coverage, such as TiO_1 and TiO_2, indices that sample very weak molecular and atomic absorption features, such as Mg_1, and indices with particularly narrow passband definitions, such as Fe4384, Ca4455, Fe4531, Ca4227, and Fe5782. These indices should be used with caution.
The recalibration of the IUE scientific instrument
NASA Technical Reports Server (NTRS)
Imhoff, Catherine L.; Oliversen, Nancy A.; Nichols-Bohlin, Joy; Casatella, Angelo; Lloyd, Christopher
1988-01-01
The IUE instrument was recalibrated because of long time-scale changes in the scientific instrument, a better understanding of the performance of the instrument, improved sets of calibration data, and improved analysis techniques. Calibrations completed or planned include intensity transfer functions (ITF), low-dispersion absolute calibrations, high-dispersion ripple corrections and absolute calibrations, improved geometric mapping of the ITFs to spectral images, studies to improve the signal-to-noise, enhanced absolute calibrations employing corrections for time, temperature, and aperture dependence, and photometric and geometric calibrations for the FES.
Müller, Christoph; Vetter, Florian; Richter, Elmar; Bracher, Franz
2014-02-01
The occurrence of the bioactive components caffeine (a xanthine alkaloid) and myosmine and nicotine (pyridine alkaloids) in different edibles and plants is well known, but the content of myosmine and nicotine in milk and dark chocolate is still ambiguous. Therefore, a sensitive method for the determination of these components was established: a simple separation of the dissolved analytes from the matrix, followed by headspace solid-phase microextraction coupled with gas chromatography-tandem mass spectrometry (HS-SPME-GC-MS/MS). This is the first approach for simultaneous determination of caffeine, myosmine, and nicotine with a convenient SPME technique. Calibration curves were linear for the xanthine alkaloid (250 to 3000 mg/kg) and the pyridine alkaloids (0.000125 to 0.003000 mg/kg). Residuals of the calibration curves were lower than 15%; hence the limits of detection were set as the lowest points of the calibration curves. The limits of detection calculated from linearity data were 216 mg/kg for caffeine, 0.000110 mg/kg for myosmine, and 0.000120 mg/kg for nicotine. Thirty samples of 5 chocolate brands with varying cocoa contents (30% to 99%) were analyzed in triplicate. Caffeine and nicotine were detected in all samples of chocolate, whereas myosmine was not present in any sample. The caffeine content ranged from 420 to 2780 mg/kg (relative standard deviation 0.1 to 11.5%) and nicotine from 0.000230 to 0.001590 mg/kg (RSD 2.0 to 22.1%). © 2014 Institute of Food Technologists®
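The calibration scheme described above (linear fits, relative residuals under 15%, LOD set to the lowest standard) can be sketched as follows, using made-up standard concentrations and responses rather than the study's data:

```python
import numpy as np

def fit_calibration(conc, response):
    """Ordinary least-squares linear calibration: response = slope*conc + intercept.
    Also returns relative residuals of the back-calculated concentrations."""
    conc = np.asarray(conc, dtype=float)
    response = np.asarray(response, dtype=float)
    slope, intercept = np.polyfit(conc, response, 1)
    back_calc = (response - intercept) / slope     # concentrations implied by the fit
    rel_resid = np.abs(back_calc - conc) / conc
    return slope, intercept, rel_resid

# Hypothetical caffeine standards (mg/kg) and instrument responses
conc = [250, 500, 1000, 2000, 3000]
resp = [1.35, 2.6, 5.1, 10.1, 15.1]
slope, intercept, rel_resid = fit_calibration(conc, resp)
# LOD taken as the lowest standard only if the curve passes the residual check
lod = min(conc) if np.all(rel_resid < 0.15) else None
```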
NASA Astrophysics Data System (ADS)
Davis, C.; Rozo, E.; Roodman, A.; Alarcon, A.; Cawthon, R.; Gatti, M.; Lin, H.; Miquel, R.; Rykoff, E. S.; Troxel, M. A.; Vielzeuf, P.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Annis, J.; Bechtol, K.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Buckley-Geer, E.; Burke, D. L.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Castander, F. J.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Doel, P.; Drlica-Wagner, A.; Fausti Neto, A.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gaztanaga, E.; Gerdes, D. W.; Giannantonio, T.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; Jain, B.; James, D. J.; Jeltema, T.; Krause, E.; Kuehn, K.; Kuhlmann, S.; Kuropatkin, N.; Lahav, O.; Li, T. S.; Lima, M.; March, M.; Marshall, J. L.; Martini, P.; Melchior, P.; Ogando, R. L. C.; Plazas, A. A.; Romer, A. K.; Sanchez, E.; Scarpine, V.; Schindler, R.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, M.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Vikram, V.; Walker, A. R.; Wechsler, R. H.
2018-06-01
Galaxy cross-correlations with high-fidelity redshift samples hold the potential to precisely calibrate systematic photometric redshift uncertainties arising from the unavailability of complete and representative training and validation samples of galaxies. However, application of this technique in the Dark Energy Survey (DES) is hampered by the relatively low number density, small area, and modest redshift overlap between photometric and spectroscopic samples. We propose instead using photometric catalogues with reliable photometric redshifts for photo-z calibration via cross-correlations. We verify the viability of our proposal using redMaPPer clusters from the Sloan Digital Sky Survey (SDSS) to successfully recover the redshift distribution of SDSS spectroscopic galaxies. We demonstrate how to combine photo-z with cross-correlation data to calibrate photometric redshift biases while marginalizing over possible clustering bias evolution in either the calibration or unknown photometric samples. We apply our method to DES Science Verification (DES SV) data in order to constrain the photometric redshift distribution of a galaxy sample selected for weak lensing studies, constraining the mean of the tomographic redshift distributions to a statistical uncertainty of Δz ˜ ±0.01. We forecast that our proposal can, in principle, control photometric redshift uncertainties in DES weak lensing experiments at a level near the intrinsic statistical noise of the experiment over the range of redshifts where redMaPPer clusters are available. Our results provide strong motivation to launch a programme to fully characterize the systematic errors from bias evolution and photo-z shapes in our calibration procedure.
Davis, C.; Rozo, E.; Roodman, A.; ...
2018-03-26
Galaxy cross-correlations with high-fidelity redshift samples hold the potential to precisely calibrate systematic photometric redshift uncertainties arising from the unavailability of complete and representative training and validation samples of galaxies. However, application of this technique in the Dark Energy Survey (DES) is hampered by the relatively low number density, small area, and modest redshift overlap between photometric and spectroscopic samples. We propose instead using photometric catalogs with reliable photometric redshifts for photo-z calibration via cross-correlations. We verify the viability of our proposal using redMaPPer clusters from the Sloan Digital Sky Survey (SDSS) to successfully recover the redshift distribution of SDSS spectroscopic galaxies. We demonstrate how to combine photo-z with cross-correlation data to calibrate photometric redshift biases while marginalizing over possible clustering bias evolution in either the calibration or unknown photometric samples. We apply our method to DES Science Verification (DES SV) data in order to constrain the photometric redshift distribution of a galaxy sample selected for weak lensing studies, constraining the mean of the tomographic redshift distributions to a statistical uncertainty of Δz ∼ ±0.01. We forecast that our proposal can in principle control photometric redshift uncertainties in DES weak lensing experiments at a level near the intrinsic statistical noise of the experiment over the range of redshifts where redMaPPer clusters are available. Our results provide strong motivation to launch a program to fully characterize the systematic errors from bias evolution and photo-z shapes in our calibration procedure.
Dantas, B M; Lucena, E A; Dantas, A L A; Santos, M S; Julião, L Q C; Melo, D R; Sousa, W O; Fernandes, P C; Mesquita, S A
2010-10-01
Internal exposures may occur in nuclear power plants, radioisotope production, and in medicine and research laboratories. Such practices require quick response in case of accidents of a wide range of magnitudes. This work presents the design and calibration of a mobile laboratory for the assessment of accidents involving workers and the population as well as for routine monitoring. The system was set up in a truck with internal dimensions of 3.30 m × 1.60 m × 1.70 m and can identify photon emitters in the energy range of 100-3,000 keV in the whole body, organs, and in urine. A thyroid monitor consisting of a lead-collimated NaI(Tl) 3" × 3" (7.62 × 7.62 cm) detector was calibrated with a neck-thyroid phantom developed at the IRD (Instituto de Radioproteção e Dosimetria). Whole body measurements were performed with a NaI(Tl) 8" × 4" (20.32 × 10.16 cm) detector calibrated with a plastic-bottle phantom. Urine samples were measured with another NaI(Tl) 3" × 3" (7.62 × 7.62 cm) detector set up in a steel support. Standard solutions were provided by the National Laboratory for Metrology of Ionizing Radiation of the IRD. Urine measurements are based on a calibration of efficiency vs. energy for standard volumes. Detection limits were converted to minimum committed effective doses for the radionuclides of interest using standard biokinetic and dosimetric models in order to evaluate the applicability and limitations of the system. Sensitivities for high-energy activation and fission products show that the system is suitable for use in emergency and routine monitoring of individuals under risk of internal exposure by such radionuclides.
Multi-proxy experimental calibration in cold water corals for high resolution paleoreconstructions
NASA Astrophysics Data System (ADS)
Pelejero, C.; Martínez-Dios, A.; Ko, S.; Sherrell, R. M.; Kozdon, R.; López-Sanz, À.; Calvo, E.
2017-12-01
Cold-water corals (CWCs) display an almost cosmopolitan distribution over a wide range of depths. Similar to their tropical counterparts, they can provide continuous, high-resolution records of up to a century or more. Several CWC elemental and isotopic ratios have been suggested as useful proxies, but robust calibrations under controlled conditions in aquaria are needed. Whereas a few such calibrations have been performed for tropical corals, they are still pending for CWCs. This reflects the technical challenges involved in keeping these slow-growing animals alive during the long-term experiments required to achieve sufficient skeletal growth for geochemical analyses. We will show details of the setup and initial stages of a long-term experiment being run at the ICM (Barcelona), where live specimens (>150) of Desmophyllum dianthus sampled in Comau Fjord (Chile) are kept under controlled and manipulated physical chemistry (temperature, pH, phosphate, barium, cadmium) and feeding conditions. With this setup, we aim to calibrate experimentally several specific elemental ratios including P/Ca, Ba/Ca, Cd/Ca, B/Ca, U/Ca and Mg/Li as proxies of nutrient dynamics, pH, carbonate ion concentration and temperature. For the trace element analysis, we are analyzing coral skeletons using Laser Ablation Inductively Coupled Plasma Mass Spectrometry (LA-ICP-MS), running quantitative analyses on spot sizes of tens of microns, and comparing to micromilling and solution ICP-MS. Preliminary data obtained using these techniques will be presented, as well as measurements of calcification rate. Since cold-water corals are potentially vulnerable to ocean acidification, the same experiment is being exploited to assess potential effects of the pH stressor on D. dianthus; the main findings to date will be summarized.
Method and apparatus for calibrating multi-axis load cells in a dexterous robot
NASA Technical Reports Server (NTRS)
Wampler, II, Charles W. (Inventor); Platt, Jr., Robert J. (Inventor)
2012-01-01
A robotic system includes a dexterous robot having robotic joints, angle sensors adapted for measuring joint angles at a corresponding one of the joints, load cells for measuring a set of strain values imparted to a corresponding one of the load cells during a predetermined pose of the robot, and a host machine. The host machine is electrically connected to the load cells and angle sensors, and receives the joint angle values and strain values during the predetermined pose. The robot presses together mating pairs of load cells to form the poses. The host machine executes an algorithm to process the joint angles and strain values and, from the set of all calibration matrices that minimize error in the force balance equations, selects the set of calibration matrices that is closest in value to a pre-specified value. A method for calibrating the load cells via the algorithm is also provided.
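The patent selects, among matrices satisfying the force-balance equations, the set closest to a pre-specified value; a simplified numpy sketch of just the underlying least-squares step (not the full selection algorithm), recovering a calibration matrix that maps strain readings to force vectors from pose data:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_calibration_matrix(strains, forces):
    """Least-squares estimate of C such that forces ≈ strains @ C.T.

    strains: (n_poses, n_gauges) strain readings
    forces:  (n_poses, 3) known applied force vectors
    """
    x, *_ = np.linalg.lstsq(strains, forces, rcond=None)
    return x.T  # C has shape (3, n_gauges)

# Synthetic check: recover a known 3x6 matrix from noiseless pose data
c_true = rng.normal(size=(3, 6))
strains = rng.normal(size=(20, 6))
forces = strains @ c_true.T
c_est = fit_calibration_matrix(strains, forces)
```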
Bertini, Sabrina; Risi, Giulia; Guerrini, Marco; Carrick, Kevin; Szajek, Anita Y; Mulloy, Barbara
2017-07-19
In a collaborative study involving six laboratories in the USA, Europe, and India the molecular weight distributions of a panel of heparin sodium samples were determined, in order to compare heparin sodium of bovine intestinal origin with that of bovine lung and porcine intestinal origin. Porcine samples met the current criteria as laid out in the USP Heparin Sodium monograph. Bovine lung heparin samples had consistently lower average molecular weights. Bovine intestinal heparin was variable in molecular weight; some samples fell below the USP limits, some fell within these limits and others fell above the upper limits. These data will inform the establishment of pharmacopeial acceptance criteria for heparin sodium derived from bovine intestinal mucosa. The method for MW determination as described in the USP monograph uses a single, broad standard calibrant to characterize the chromatographic profile of heparin sodium on high-resolution silica-based GPC columns. These columns may be short-lived in some laboratories. Using the panel of samples described above, methods based on the use of robust polymer-based columns have been developed. In addition to the use of the USP's broad standard calibrant for heparin sodium with these columns, a set of conditions have been devised that allow light-scattering detected molecular weight characterization of heparin sodium, giving results that agree well with the monograph method. These findings may facilitate the validation of variant chromatographic methods with some practical advantages over the USP monograph method.
NASA Astrophysics Data System (ADS)
Vaudour, Emmanuelle; Gilliot, Jean-Marc; Bel, Liliane; Lefevre, Josias; Chehdi, Kacem
2016-04-01
This study was carried out in the framework of the TOSCA-PLEIADES-CO of the French Space Agency and benefited from data of the earlier PROSTOCK-Gessol3 project supported by the French Environment and Energy Management Agency (ADEME). It aimed at identifying the potential of airborne hyperspectral visible near-infrared AISA-Eagle data for predicting the topsoil organic carbon (SOC) content of bare cultivated soils over a large peri-urban area (221 km2) with intensive annual crop cultivation and both contrasted soils and SOC contents, located in the western region of Paris, France. Soils comprise hortic or glossic luvisols, calcaric, rendzic cambisols and colluvic cambisols. Airborne AISA-Eagle images (400-1000 nm, 126 bands) with 1 m resolution were acquired on 17 April 2013 over 13 tracks. Tracks were atmospherically corrected then mosaicked at a 2 m resolution using a set of 24 synchronous field spectra of bare soils, black and white targets and impervious surfaces. The land use identification system layer (RPG) of 2012 was used to mask non-agricultural areas; calculation and thresholding of NDVI from an atmospherically corrected SPOT4 image acquired the same day then made it possible to map agricultural fields with bare soil. A total of 101 sites, which were sampled either at the regional scale or within one field, were identified as bare by means of this map. Predictions were made from the mosaic AISA spectra, which were related to SOC contents by means of partial least squares regression (PLSR). Regression robustness was evaluated through a series of 1000 bootstrap data sets of calibration-validation samples, considering only the 75 sites outside cloud shadows, and different sampling strategies for selecting calibration samples. Validation root-mean-square errors (RMSE) ranged between 3.73 and 4.49 g kg-1, with a median of ~4 g kg-1.
The best-performing models in terms of coefficient of determination (R²) and Residual Prediction Deviation (RPD) values were the calibration models derived either from Kennard-Stone or conditioned Latin Hypercube sampling on smoothed spectra. However, the most generalizable model, leading to the lowest RMSE values of 3.73 g kg-1 at the regional scale and 1.44 g kg-1 at the within-field scale and low validation bias, was the cross-validated leave-one-out PLSR model constructed with the 28 near-synchronous samples and raw spectra.
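The bootstrap evaluation of calibration/validation splits used above can be sketched with ordinary least squares standing in for PLSR, a hedged simplification run on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_rmse(X, y, n_boot=200):
    """Median out-of-bag RMSE over bootstrap calibration/validation splits,
    with ordinary least squares standing in for the study's PLSR."""
    n = len(y)
    rmses = []
    for _ in range(n_boot):
        cal = rng.integers(0, n, size=n)           # bootstrap calibration draw
        val = np.setdiff1d(np.arange(n), cal)      # out-of-bag validation set
        if val.size == 0:
            continue
        coef, *_ = np.linalg.lstsq(X[cal], y[cal], rcond=None)
        resid = y[val] - X[val] @ coef
        rmses.append(np.sqrt(np.mean(resid ** 2)))
    return float(np.median(rmses))

# Synthetic spectra-like data: 60 samples, 5 predictors, noise sd 0.1
X = rng.normal(size=(60, 5))
y = X @ np.array([1.0, -0.5, 0.3, 0.0, 2.0]) + 0.1 * rng.normal(size=60)
rmse = bootstrap_rmse(X, y)
```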
WEIGHTED LIKELIHOOD ESTIMATION UNDER TWO-PHASE SAMPLING
Saegusa, Takumi; Wellner, Jon A.
2013-01-01
We develop asymptotic theory for weighted likelihood estimators (WLE) under two-phase stratified sampling without replacement. We also consider several variants of WLEs involving estimated weights and calibration. A set of empirical process tools is developed, including a Glivenko–Cantelli theorem, a theorem for rates of convergence of M-estimators, and a Donsker theorem for the inverse probability weighted empirical processes under two-phase sampling and sampling without replacement at the second phase. Using these general results, we derive asymptotic distributions of the WLE of a finite-dimensional parameter in a general semiparametric model where a nuisance parameter is estimable either at regular or nonregular rates. We illustrate these results and methods in the Cox model with right censoring and interval censoring. We compare the methods via their asymptotic variances under both sampling without replacement and the more usual (and easier to analyze) assumption of Bernoulli sampling at the second phase. PMID:24563559
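The inverse probability weighting at the heart of the WLE can be illustrated with a Hajek-style weighted mean over phase-two sampled units (a toy sketch, not the paper's semiparametric machinery):

```python
import numpy as np

def ipw_mean(y, sampled, prob):
    """Inverse-probability-weighted (Hajek) estimate of the population mean
    of y, using only units with sampled == 1 observed at the second phase."""
    y = np.asarray(y, dtype=float)
    w = np.asarray(sampled, dtype=float) / np.asarray(prob, dtype=float)
    return float(np.sum(w * y) / np.sum(w))
```

With everyone sampled at probability one this reduces to the ordinary sample mean; unequal sampling probabilities are undone by the weights.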
Laser-induced breakdown spectroscopy for detection of heavy metals in environmental samples
NASA Astrophysics Data System (ADS)
Wisbrun, Richard W.; Schechter, Israel; Niessner, Reinhard; Schroeder, Hartmut
1993-03-01
The application of LIBS technology as a sensor for heavy metals in solid environmental samples has been studied. This specific application introduces some new problems in the LIBS analysis. Some of them are related to the particular distribution of contaminants in the grained samples. Other problems are related to mechanical properties of the samples and to general matrix effects, such as the water and organic fiber content of the sample. An attempt has been made to optimize the experimental set-up with respect to the various parameters involved. The understanding of these factors has enabled the adjustment of the technique to the substrates of interest. The special importance of the grain size and of the laser-induced aerosol production is pointed out. Calibration plots for the analysis of heavy metals in diverse sand and soil samples have been established. The detection limits are shown to be generally below the concentrations restricted by recent regulations.
NASA Astrophysics Data System (ADS)
Yang, Haiqing; Wu, Di; He, Yong
2007-11-01
Near-infrared spectroscopy (NIRS) is a rapid, pollution-free method for quantitative and qualitative analysis, offering high speed, non-destructiveness, high precision, and reliable detection data. A new approach for variety discrimination of brown sugars using short-wave NIR spectroscopy (800-1050 nm) was developed in this work. The relationship between the absorbance spectra and brown sugar varieties was established. The spectral data were compressed by principal component analysis (PCA). The resulting features can be visualized in principal component (PC) space, which can lead to the discovery of structures correlated with the different classes of spectral samples. It appears to provide a reasonable variety clustering of brown sugars. The 2-D PC plot obtained using the first two PCs can be used for pattern recognition. Least-squares support vector machines (LS-SVM) were applied to solve the multivariate calibration problems in a relatively fast way. The work has shown that the short-wave NIR spectroscopy technique is suitable for brand identification of brown sugar, and that LS-SVM has better identification ability than PLS when the calibration set is small.
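The PCA compression used for the 2-D PC plot can be sketched in a few lines of numpy, via SVD of the mean-centered spectra:

```python
import numpy as np

def pca_scores(spectra, n_components=2):
    """Scores of mean-centered spectra on their leading principal components.

    spectra: (n_samples, n_wavelengths) absorbance matrix
    """
    X = spectra - spectra.mean(axis=0)
    # SVD of the centered data: rows of vt are the PC loading vectors,
    # ordered by decreasing explained variance
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:n_components].T

# Toy "spectra": 5 samples x 4 wavelengths
scores = pca_scores(np.array([[1.0, 2.0, 3.0, 4.0],
                              [1.1, 2.2, 2.9, 4.1],
                              [0.9, 1.8, 3.1, 3.9],
                              [2.0, 3.0, 4.0, 5.0],
                              [0.5, 1.5, 2.5, 3.5]]))
```

Plotting the two score columns against each other gives the 2-D PC plot used for visual clustering.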
Bagchi, Torit Baran; Sharma, Srigopal; Chattopadhyay, Krishnendu
2016-01-15
Given the growing recognition of the economic and nutritional importance of rice grain protein and the nutritional components of rice bran (RB), NIRS can be an effective tool for high-throughput screening in rice breeding programmes. Optimization of NIRS is a prerequisite for accurate prediction of grain quality parameters. In the present study, 173 brown rice (BR) and 86 RB samples with a wide range of values were used to compare the calibration models generated by different chemometrics for grain protein content (GPC) and amylose content (AC) of BR and proximate compositions (protein, crude oil, moisture, ash and fiber content) of RB. Various modified partial least squares (mPLS) models corresponding with the best mathematical treatments were identified for all components. Another set of 29 genotypes derived from the breeding programme was employed for the external validation of these calibration models. High accuracy of all these calibration and prediction models was ensured through paired t-tests and correlation regression analysis between reference and predicted values. Copyright © 2015 Elsevier Ltd. All rights reserved.
Near infrared spectroscopy for prediction of antioxidant compounds in the honey.
Escuredo, Olga; Seijo, M Carmen; Salvador, Javier; González-Martín, M Inmaculada
2013-12-15
The selection of antioxidant variables in honey is considered for the first time using the near infrared (NIR) spectroscopic technique. A total of 60 honey samples were used to develop the calibration models using the modified partial least squares (MPLS) regression method and 15 samples were used for external validation. Calibration models on the honey matrix for the estimation of phenols, flavonoids, vitamin C, antioxidant capacity (DPPH), oxidation index and copper using near infrared (NIR) spectroscopy have been satisfactorily obtained. These models were optimised by cross-validation, and the best model was evaluated according to the multiple correlation coefficient (RSQ), standard error of cross-validation (SECV), ratio performance deviation (RPD) and root mean standard error (RMSE) in the prediction set. These statistics suggested that the equations developed could be used for rapid determination of antioxidant compounds in honey. This work shows that near infrared spectroscopy can be considered a rapid tool for the nondestructive measurement of antioxidant constituents such as phenols, flavonoids, vitamin C and copper, as well as the antioxidant capacity of honey. Copyright © 2013 Elsevier Ltd. All rights reserved.
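Two of the model statistics reported above, RMSE and RPD, are simple to compute; a sketch using the common definition RPD = SD(reference)/RMSE (variant definitions exist in the NIR literature):

```python
import numpy as np

def prediction_stats(reference, predicted):
    """RMSE and ratio of performance to deviation (RPD) for an NIR model.

    RPD = sample standard deviation of the reference values / RMSE.
    """
    reference = np.asarray(reference, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = float(np.sqrt(np.mean((reference - predicted) ** 2)))
    rpd = float(np.std(reference, ddof=1) / rmse)
    return rmse, rpd

# Toy reference/predicted values (not the study's data)
rmse, rpd = prediction_stats([1.0, 2.0, 3.0, 4.0], [1.1, 2.1, 3.1, 4.1])
```

Higher RPD means the model's errors are small relative to the natural spread of the property being predicted.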
Digital TAcy: proof of concept
NASA Astrophysics Data System (ADS)
Bubel, Annie; Sylvain, Jean-François; Martin, François
2009-06-01
Anthocyanins are water-soluble pigments in plants that are recognized for their antioxidant properties. These pigments are found in high concentration in cranberries, giving them their characteristic dark red color. The Total Anthocyanin concentration (TAcy) measurement process takes considerable time, consumes chemical products and needs to be continuously repeated during the harvesting period. The idea of the digital TAcy system is to explore the possibility of estimating the TAcy by analysing the color of the fruits. A calibrated color image capture set-up was developed and characterized, allowing calibrated color data capture from hundreds of samples over two harvesting years (fall of 2007 and 2008). The acquisition system was designed in such a way as to avoid specular reflections and provide good-resolution images with an extended range of color values representative of the different stages of fruit ripeness. The chemical TAcy value being known for every sample, a mathematical model was developed to predict the TAcy based on color information. This model, which also takes into account bruised and rotten fruits, shows an RMS error of less than 6% over the TAcy range of interest [0-50].
Calibration and validation of a general infiltration model
NASA Astrophysics Data System (ADS)
Mishra, Surendra Kumar; Ranjan Kumar, Shashi; Singh, Vijay P.
1999-08-01
A general infiltration model proposed by Singh and Yu (1990) was calibrated and validated using a split sampling approach for 191 sets of infiltration data observed in the states of Minnesota and Georgia in the USA. Of the five model parameters, fc (the final infiltration rate), So (the available storage space) and exponent n were found to be more predictable than the other two parameters: m (exponent) and a (proportionality factor). A critical examination of the general model revealed that it is related to the Soil Conservation Service (1956) curve number (SCS-CN) method and its parameter So is equivalent to the potential maximum retention of the SCS-CN method and is, in turn, found to be a function of soil sorptivity and hydraulic conductivity. The general model was found to describe infiltration rate with time varying curve number.
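The link drawn above between the general model's storage parameter So and the SCS-CN potential maximum retention can be illustrated with the standard curve-number runoff equations (a textbook sketch, not the general model of Singh and Yu):

```python
def scs_cn_runoff(p_mm, cn):
    """SCS-CN direct runoff (mm) for rainfall p_mm and curve number cn.

    S is the potential maximum retention that the abstract equates with
    the general model's available storage space So.
    """
    s = 25400.0 / cn - 254.0          # retention S (mm) from the curve number
    ia = 0.2 * s                       # standard initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

At CN = 100 the retention vanishes and all rainfall becomes runoff; lower curve numbers retain more water.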
NASA Astrophysics Data System (ADS)
Rathmann, Söhnke; Hess, Silvia; Kuhnert, Henning; Mulitza, Stefan
2004-12-01
A laser ablation system connected to an inductively coupled plasma mass spectrometer was used to determine Mg/Ca ratios of the benthic foraminifera Oridorsalis umbonatus. A set of modern core top samples collected along a depth transect on the continental slope off Namibia (320-2300 m water depth; 2.9° to 10.4°C) was used to calibrate the Mg/Ca ratio against bottom water temperature. The resulting Mg/Ca-bottom water temperature relationship of O. umbonatus is described by the exponential equation Mg/Ca = 1.528*e0.09*BWT. The temperature sensitivity of this equation is similar to previously published calibrations based on Cibicidoides species, suggesting that the Mg/Ca ratio of O. umbonatus is a valuable proxy for thermocline and deep water temperature.
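The published calibration, Mg/Ca = 1.528 · e^(0.09·BWT), and its inversion to bottom water temperature can be sketched directly:

```python
import math

def mgca_from_temperature(bwt_c):
    """O. umbonatus calibration from the abstract: Mg/Ca = 1.528 * exp(0.09 * BWT)."""
    return 1.528 * math.exp(0.09 * bwt_c)

def temperature_from_mgca(mgca):
    """Invert the calibration to recover bottom water temperature (deg C)."""
    return math.log(mgca / 1.528) / 0.09
```

In paleoceanographic use it is the inverse form that matters: a measured Mg/Ca ratio of a fossil test yields an estimated past bottom water temperature.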
Matching Images to Models: Camera Calibration for 3-D Surface Reconstruction
NASA Technical Reports Server (NTRS)
Morris, Robin D.; Smelyanskiy, Vadim N.; Cheeseman. Peter C.; Norvig, Peter (Technical Monitor)
2001-01-01
In a previous paper we described a system which recursively recovers a super-resolved three dimensional surface model from a set of images of the surface. In that paper we assumed that the camera calibration for each image was known. In this paper we solve two problems. Firstly, if an estimate of the surface is already known, the problem is to calibrate a new image relative to the existing surface model. Secondly, if no surface estimate is available, the relative camera calibration between the images in the set must be estimated. This will allow an initial surface model to be estimated. Results of both types of estimation are given.
NASA Astrophysics Data System (ADS)
Beck, Hylke; de Roo, Ad; van Dijk, Albert; McVicar, Tim; Miralles, Diego; Schellekens, Jaap; Bruijnzeel, Sampurno; de Jeu, Richard
2015-04-01
Motivated by the lack of large-scale model parameter regionalization studies, a large set of 3328 small catchments (< 10000 km2) around the globe was used to set up and evaluate five model parameterization schemes at global scale. The HBV-light model was chosen because of its parsimony and flexibility to test the schemes. The catchments were calibrated against observed streamflow (Q) using an objective function incorporating both behavioral and goodness-of-fit measures, after which the catchment set was split into subsets of 1215 donor and 2113 evaluation catchments based on the calibration performance. The donor catchments were subsequently used to derive parameter sets that were transferred to similar grid cells based on a similarity measure incorporating climatic and physiographic characteristics, thereby producing parameter maps with global coverage. Overall, there was a lack of suitable donor catchments for mountainous and tropical environments. The schemes with spatially-uniform parameter sets (EXP2 and EXP3) achieved the worst Q estimation performance in the evaluation catchments, emphasizing the importance of parameter regionalization. The direct transfer of calibrated parameter sets from donor catchments to similar grid cells (scheme EXP1) performed best, although there was still a large performance gap between EXP1 and HBV-light calibrated against observed Q. The schemes with parameter sets obtained by simultaneously calibrating clusters of similar donor catchments (NC10 and NC58) performed worse than EXP1. The relatively poor Q estimation performance achieved by two (uncalibrated) macro-scale hydrological models suggests there is considerable merit in regionalizing the parameters of such models. The global HBV-light parameter maps and ancillary data are freely available via http://water.jrc.ec.europa.eu.
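The direct-transfer scheme (EXP1) assigns donor parameter sets to similar grid cells; a minimal sketch using nearest-donor Euclidean similarity in a normalized attribute space (the paper's similarity measure combines climatic and physiographic characteristics, which this toy version stands in for):

```python
import numpy as np

def transfer_parameters(cell_attrs, donor_attrs, donor_params):
    """Give each grid cell the parameter set of its most similar donor
    catchment, with similarity as Euclidean distance in attribute space.

    cell_attrs:  (n_cells, n_attrs) normalized grid-cell attributes
    donor_attrs: (n_donors, n_attrs) normalized donor-catchment attributes
    donor_params: (n_donors, n_params) calibrated parameter sets
    """
    d = np.linalg.norm(cell_attrs[:, None, :] - donor_attrs[None, :, :], axis=2)
    return donor_params[np.argmin(d, axis=1)]

donors = np.array([[0.0, 0.0], [1.0, 1.0]])
params = np.array([[10.0], [20.0]])
cells = np.array([[0.1, 0.1], [0.9, 0.9]])
assigned = transfer_parameters(cells, donors, params)
```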
Smart System for Bicarbonate Control in Irrigation for Hydroponic Precision Farming
Cambra, Carlos; Lacuesta, Raquel
2018-01-01
Improving the sustainability of agriculture is nowadays an important challenge. The automation of irrigation processes via low-cost sensors can help spread technological advances in a sector strongly influenced by economic costs. This article presents an auto-calibrated pH sensor able to detect and adjust imbalances in the pH levels of the nutrient solution used in hydroponic agriculture. The sensor is composed of a pH probe and a set of micropumps that sequentially pour the different liquid solutions to maintain the sensor calibration, as well as the water samples from the channels that contain the nutrient solution. To implement our architecture, we use an auto-calibrated pH sensor connected to a wireless node. Several nodes compose our wireless sensor network (WSN) to control our greenhouse. The sensors periodically measure the pH level of each hydroponic support and send the information to a database (DB) which stores and analyzes the data to inform farmers about the measurements. The data can then be accessed through a user-friendly, web-based interface available over the Internet on desktop or mobile devices. This paper also shows the design and test bench for both the auto-calibrated pH sensor and the wireless network to check their correct operation. PMID:29693611
Smart System for Bicarbonate Control in Irrigation for Hydroponic Precision Farming.
Cambra, Carlos; Sendra, Sandra; Lloret, Jaime; Lacuesta, Raquel
2018-04-25
Improving the sustainability of agriculture is nowadays an important challenge. The automation of irrigation processes via low-cost sensors can help spread technological advances in a sector strongly influenced by economic costs. This article presents an auto-calibrated pH sensor able to detect and adjust imbalances in the pH levels of the nutrient solution used in hydroponic agriculture. The sensor is composed of a pH probe and a set of micropumps that sequentially pour the different liquid solutions to maintain the sensor calibration, as well as the water samples from the channels that contain the nutrient solution. To implement our architecture, we use an auto-calibrated pH sensor connected to a wireless node. Several nodes compose our wireless sensor network (WSN) to control our greenhouse. The sensors periodically measure the pH level of each hydroponic support and send the information to a database (DB) which stores and analyzes the data to inform farmers about the measurements. The data can then be accessed through a user-friendly, web-based interface available over the Internet on desktop or mobile devices. This paper also shows the design and test bench for both the auto-calibrated pH sensor and the wireless network to check their correct operation.
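The paper does not spell out the auto-calibration routine; a common two-point buffer calibration (an assumption standing in for the authors' exact procedure) can be sketched as:

```python
def calibrate_ph(mv_buffer4, mv_buffer7):
    """Two-point pH calibration from probe readings (mV) in pH 4 and pH 7 buffers.

    Returns a function converting a raw mV reading to a pH value.
    This is a generic scheme, not the paper's specific routine.
    """
    slope = (7.0 - 4.0) / (mv_buffer7 - mv_buffer4)   # pH units per mV
    def to_ph(mv):
        return 4.0 + slope * (mv - mv_buffer4)
    return to_ph

# Illustrative values close to an ideal Nernstian probe at room temperature
to_ph = calibrate_ph(mv_buffer4=177.0, mv_buffer7=0.0)
```

The micropumps in the paper automate exactly this kind of periodic re-reading of reference buffers so the slope and offset stay current as the probe ages.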
40 CFR 1065.690 - Buoyancy correction for PM sample media.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Buoyancy correction for PM sample media. (a) General. Correct PM sample media for their buoyancy in air if you weigh them on a balance. The buoyancy correction depends on the sample media density, the density of air, and the density of the calibration weight used to calibrate the balance. The buoyancy...
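The buoyancy correction in §1065.690 multiplies the balance reading by a factor built from the three densities the regulation names. A minimal sketch in Python; the default density values below are illustrative assumptions (air, a steel calibration weight, PTFE-like membrane media), not values quoted from the regulation:

```python
def buoyancy_corrected_mass(m_uncorrected_mg, rho_air=1.2,
                            rho_weight=8000.0, rho_media=920.0):
    """Correct a balance reading of PM sample media for air buoyancy.

    m_uncorrected_mg: uncorrected balance reading (mg)
    rho_air:    density of air (kg/m^3)
    rho_weight: density of the calibration weight (kg/m^3)
    rho_media:  density of the sample media (kg/m^3)
    """
    # Correction factor: (1 - rho_air/rho_weight) / (1 - rho_air/rho_media).
    # Media less dense than the calibration weight displace more air,
    # so the corrected mass is larger than the reading.
    correction = (1 - rho_air / rho_weight) / (1 - rho_air / rho_media)
    return m_uncorrected_mg * correction
```

In vacuum (`rho_air=0`) the factor collapses to 1, which is a quick sanity check on the implementation.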
NASA Astrophysics Data System (ADS)
Dreißigacker, Anne; Köhler, Eberhard; Fabel, Oliver; van Gasselt, Stephan
2014-05-01
At the Planetary Sciences and Remote Sensing research group at Freie Universität Berlin, an SCD-based X-ray fluorescence spectrometer is being developed for deployment on planetary orbiters to conduct direct, passive, energy-dispersive X-ray fluorescence measurements of planetary surfaces, by measuring the emitted X-ray fluorescence induced by solar X-rays and high-energy particles. Because the Sun is a highly variable radiation source, the intensity of solar X-ray radiation has to be monitored constantly to allow for comparison and signal calibration of X-ray radiation from lunar surface materials. Measurements are obtained by indirectly monitoring incident solar X-rays via the fluorescence they excite in a calibration sample. This has the additional advantage of minimizing the risk of detector overload and damage during extreme solar events such as high-energy solar flares and particle storms, as only the sample targets receive the higher radiation load directly (the monitor never points directly towards the Sun). Quantitative data are obtained, and can subsequently be analysed, through synchronous measurement of the fluorescence of the Moon's surface by the XRF-S main instrument and of the X-ray fluorescence emitted by the calibration samples by the XRF-S-ISM (Indirect Solar Monitor). We are currently developing requirements for three sample tiles for onboard correction and calibration of XRF-S, each with an area of 3-9 cm2 and a maximum weight of 45 g. This includes the development of design concepts, determination of techniques for sample manufacturing, manufacturing and testing of prototypes, and statistical analysis of measurement characteristics and quantification of error sources for the advanced prototypes and final samples. Apart from using natural rock samples as calibration samples, we are currently investigating manufacturing techniques including laser sintering of rock glass on metals, SiO2-stabilized mineral powders, and artificial volcanic glass.
High precision measurements of the chemical composition of the final samples (EPMA, various energy-dispersive XRF) will serve as calibration standard for XRF-S. Development is funded by the German Aerospace Agency under grant 50 JR 1303.
Chan, George C. Y. [Bloomington, IN]; Hieftje, Gary M. [Bloomington, IN]
2010-08-03
A method for detecting and correcting inaccurate results in inductively coupled plasma-atomic emission spectrometry (ICP-AES). ICP-AES analysis is performed across a plurality of selected locations in the plasma on an unknown sample, collecting the light intensity at one or more selected wavelengths of one or more sought-for analytes, creating a first dataset. The first dataset is then calibrated with a calibration dataset, creating a calibrated first-dataset curve. If the calibrated first-dataset curve varies with location within the plasma for a selected wavelength, errors are present. Plasma-related errors are then corrected by diluting the unknown sample and performing the same ICP-AES analysis on the diluted sample, creating a calibrated second-dataset curve (accounting for the dilution) for the one or more sought-for analytes. The cross-over point of the two calibrated curves yields the corrected value (free from plasma-related errors) for each sought-for analyte.
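The cross-over step can be illustrated with a small helper that intersects two calibrated concentration-versus-location curves by linear interpolation. This is a generic sketch; the function name and data layout are assumptions for illustration, not the patent's implementation:

```python
def crossover_point(locations, curve_a, curve_b):
    """Find where two calibrated concentration curves intersect.

    locations: plasma observation positions (monotonic sequence)
    curve_a, curve_b: calibrated concentrations at each position
    Returns (location, concentration) of the first crossing, or None.
    """
    for i in range(len(locations) - 1):
        d0 = curve_a[i] - curve_b[i]
        d1 = curve_a[i + 1] - curve_b[i + 1]
        if d0 == 0:  # curves touch exactly at a sample point
            return locations[i], curve_a[i]
        if d0 * d1 < 0:  # sign change: crossing between i and i+1
            t = d0 / (d0 - d1)  # linear interpolation fraction
            loc = locations[i] + t * (locations[i + 1] - locations[i])
            val = curve_a[i] + t * (curve_a[i + 1] - curve_a[i])
            return loc, val
    return None
```

The returned concentration at the crossing is the dilution-independent (plasma-error-free) estimate described in the abstract.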
NASA Astrophysics Data System (ADS)
Yu, Jiajia; He, Yong
Mango is a popular tropical fruit, and soluble solids content is an important internal quality index. In this study, the visible and short-wave near-infrared spectroscopy (VIS/SWNIR) technique was applied. To investigate the feasibility of using VIS/SWNIR spectroscopy to measure the soluble solids content in mango, and to validate the performance of the selected sensitive bands, the calibration set was formed from 135 mango samples, while the remaining 45 mango samples formed the prediction set. The combination of partial least squares and backpropagation artificial neural networks (PLS-BP) was used to build the prediction model based on the raw spectral data. Based on PLS-BP, the determination coefficient for prediction (Rp) was 0.757, together with the corresponding root mean square error, and the process is simple and easy to operate. Compared with the partial least squares (PLS) result, the performance of PLS-BP is better.
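The prediction statistics quoted throughout these abstracts (correlation coefficient / determination coefficient and RMSEP over a prediction set) can be computed in a few lines of plain Python. A generic sketch, not code from any of the studies:

```python
import math

def prediction_stats(measured, predicted):
    """Return (r, rmsep) for a validation/prediction set:
    r     - Pearson correlation between measured and predicted values
    rmsep - root mean square error of prediction."""
    n = len(measured)
    mx = sum(measured) / n
    my = sum(predicted) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(measured, predicted))
    sx = math.sqrt(sum((x - mx) ** 2 for x in measured))
    sy = math.sqrt(sum((y - my) ** 2 for y in predicted))
    r = cov / (sx * sy)
    rmsep = math.sqrt(sum((x - y) ** 2
                          for x, y in zip(measured, predicted)) / n)
    return r, rmsep
```

The determination coefficient Rp² reported in several abstracts is simply `r ** 2`.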
Kinch, Kjartan M; Bell, James F; Goetz, Walter; Johnson, Jeffrey R; Joseph, Jonathan; Madsen, Morten Bo; Sohl-Dickstein, Jascha
2015-05-01
The Panoramic Cameras on NASA's Mars Exploration Rovers have each returned more than 17,000 images of their calibration targets. In order to make optimal use of this data set for reflectance calibration, a correction must be made for the presence of air fall dust. Here we present an improved dust correction procedure based on a two-layer scattering model, and we present a dust reflectance spectrum derived from long-term trends in the data set. The dust on the calibration targets appears brighter than dusty areas of the Martian surface. We derive detailed histories of dust deposition and removal revealing two distinct environments: At the Spirit landing site, half the year is dominated by dust deposition, the other half by dust removal, usually in brief, sharp events. At the Opportunity landing site the Martian year has a semiannual dust cycle with dust removal happening gradually throughout two removal seasons each year. The highest observed optical depth of settled dust on the calibration target is 1.5 on Spirit and 1.1 on Opportunity (at 601 nm). We derive a general prediction for dust deposition rates of 0.004 ± 0.001 in units of surface optical depth deposited per sol (Martian solar day) per unit atmospheric optical depth. We expect this procedure to lead to improved reflectance-calibration of the Panoramic Camera data set. In addition, it is easily adapted to similar data sets from other missions in order to deliver improved reflectance calibration as well as data on dust reflectance properties and deposition and removal history.
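The reported deposition rate (0.004 ± 0.001 surface optical depth per sol per unit atmospheric optical depth) implies a simple accumulation model for dust settling on the calibration target. The sketch below assumes deposition only, with no removal events, and illustrative inputs:

```python
def settled_dust_optical_depth(atmospheric_tau_by_sol, rate=0.004):
    """Accumulate the surface optical depth of settled dust.

    atmospheric_tau_by_sol: atmospheric optical depth for each sol
    rate: deposited surface tau per sol per unit atmospheric tau
    Returns the cumulative surface optical depth after each sol.
    """
    tau_surface = 0.0
    history = []
    for tau_atm in atmospheric_tau_by_sol:
        tau_surface += rate * tau_atm  # deposition only, no removal
        history.append(tau_surface)
    return history
```

In the real data sets, removal events (sharp on Spirit, gradual on Opportunity) interrupt this accumulation, so the model only describes the deposition-dominated seasons.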
Occupational Survey Report, Cardiopulmonary Laboratory, AFSC 4H0X1, OSSN: 2541
2004-02-01
Excerpted task-table fragments (percent members performing): E0211 Set up humidifiers (97); E0175 Instruct patients in use of incentive spirometers (97); A0031 Obtain sputum samples (97); D0137 Calibrate pulmonary function testing equipment (100); D0150 Perform routine spirometry tests (100); D0146 Perform lung diffusion tests (100); A0042 Perform...consultations, or procedures (31); D0150 Perform routine spirometry tests (23). TABLE A2: Representative tasks performed by members in the supervision cluster (excerpt truncated).
Evaluation and Improvement of Earth Radiation Budget Data Sets
NASA Technical Reports Server (NTRS)
Haeffelin, Martial P. A.
2001-01-01
The tasks performed during this grant are as follows: (1) Advanced scan patterns for enhanced spatial and angular sampling of ground targets; (2) Inter-calibration of polar orbiter in low Earth orbits (LEO) and geostationary (GEO) broadband radiance measurements; (3) Synergism between CERES on TRMM and Terra; (4) Improved surface solar irradiance measurements; (5) SW flux observations from Ultra Long Duration Balloons at 35 km altitude; (6) Nighttime cloud property retrieval algorithm; (7) Retrievals of overlapped and mixed-phase clouds.
NASA Astrophysics Data System (ADS)
Talia, M.; Cimatti, A.; Pozzetti, L.; Rodighiero, G.; Gruppioni, C.; Pozzi, F.; Daddi, E.; Maraston, C.; Mignoli, M.; Kurk, J.
2015-10-01
Aims: In this paper we use a well-controlled spectroscopic sample of galaxies at 1
Halogenated Peptides as Internal Standards (H-PINS)
Mirzaei, Hamid; Brusniak, Mi-Youn; Mueller, Lukas N.; Letarte, Simon; Watts, Julian D.; Aebersold, Ruedi
2009-01-01
As the application for quantitative proteomics in the life sciences has grown in recent years, so has the need for more robust and generally applicable methods for quality control and calibration. The reliability of quantitative proteomics is tightly linked to the reproducibility and stability of the analytical platforms, which are typically multicomponent (e.g. sample preparation, multistep separations, and mass spectrometry) with individual components contributing unequally to the overall system reproducibility. Variations in quantitative accuracy are thus inevitable, and quality control and calibration become essential for the assessment of the quality of the analyses themselves. Toward this end, the use of internal standards cannot only assist in the detection and removal of outlier data acquired by an irreproducible system (quality control) but can also be used for detection of changes in instruments for their subsequent performance and calibration. Here we introduce a set of halogenated peptides as internal standards. The peptides are custom designed to have properties suitable for various quality control assessments, data calibration, and normalization processes. The unique isotope distribution of halogenated peptides makes their mass spectral detection easy and unambiguous when spiked into complex peptide mixtures. In addition, they were designed to elute sequentially over an entire aqueous to organic LC gradient and to have m/z values within the commonly scanned mass range (300–1800 Da). In a series of experiments in which these peptides were spiked into an enriched N-glycosite peptide fraction (i.e. from formerly N-glycosylated intact proteins in their deglycosylated form) isolated from human plasma, we show the utility and performance of these halogenated peptides for sample preparation and LC injection quality control as well as for retention time and mass calibration. 
Further use of the peptides for signal intensity normalization and retention time synchronization for selected reaction monitoring experiments is also demonstrated. PMID:19411281
21 CFR 864.8185 - Calibrator for red cell and white cell counting.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Calibrator for red cell and white cell counting... Calibrator for red cell and white cell counting. (a) Identification. A calibrator for red cell and white cell counting is a device that resembles red or white blood cells and that is used to set instruments intended...
Statistical analysis on experimental calibration data for flowmeters in pressure pipes
NASA Astrophysics Data System (ADS)
Lazzarin, Alessandro; Orsi, Enrico; Sanfilippo, Umberto
2017-08-01
This paper presents a statistical analysis of experimental calibration data for flowmeters (i.e., electromagnetic, ultrasonic, and turbine flowmeters) in pressure pipes. The experimental calibration data set consists of the whole archive of calibration tests carried out on 246 flowmeters from January 2001 to October 2015 at Settore Portate of Laboratorio di Idraulica “G. Fantoli” of Politecnico di Milano, which is accredited as LAT 104 for a flow range between 3 l/s and 80 l/s, with a certified Calibration and Measurement Capability (CMC) - formerly known as Best Measurement Capability (BMC) - equal to 0.2%. The data set is split into three subsets, consisting of 94 electromagnetic, 83 ultrasonic, and 69 turbine flowmeters; each subset is analysed separately from the others, and a final comparison is then carried out. In particular, the main focus of the statistical analysis is the correction C, that is, the flow rate Q measured by the calibration facility (through the accredited procedures and the certified reference specimen) minus the flow rate QM contemporaneously recorded by the flowmeter under calibration, expressed as a percentage of QM.
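The correction C defined above is simple arithmetic; a one-line sketch (the function name is illustrative):

```python
def correction_percent(q_reference, q_meter):
    """Correction C: reference flow rate Q minus the meter reading QM,
    expressed as a percentage of QM."""
    return (q_reference - q_meter) / q_meter * 100.0
```

A flowmeter reading 100 l/s when the facility measures 102 l/s would thus carry a correction of +2%.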
Zuo, Yamin; Deng, Xuehua; Wu, Qing
2018-05-04
Discrimination of the geographical origin of Gastrodia elata (G. elata) is of great importance to pharmaceutical companies and consumers in China. This paper focuses on the feasibility of near-infrared spectroscopy (NIRS) combined with multivariate analysis as a rapid and non-destructive method for this purpose. Firstly, 16 batches of G. elata samples from four main cultivation regions in China were quantified by a traditional HPLC method, which showed that samples from different origins could not be efficiently differentiated by the contents of the four phenolic compounds considered in this study. Secondly, the raw near-infrared (NIR) spectra of those samples were acquired and two different pattern recognition techniques were used to classify the geographical origins. The results showed that, with the spectral transformation optimized, discriminant analysis (DA) provided 97% and 99% correct classification for the calibration and validation sets when discriminating the four main cultivation regions, and 98% and 99% correct classification for the calibration and validation sets when discriminating eight different cities, respectively, in both cases performing better than the principal component analysis (PCA) method. Thirdly, as phenolic compound content (PCC) is highly related to the quality of G. elata, synergy interval partial least squares (Si-PLS) was applied to build the PCC prediction model. The coefficient of determination for prediction (Rp²) of the Si-PLS model was 0.9209, and the root mean square error for prediction (RMSEP) was 0.338. The two regions selected by Si-PLS (4800-5200 cm−1 and 5600-6000 cm−1) correspond to absorptions of the aromatic ring in the basic phenolic structure. It can be concluded that NIR spectroscopy combined with PCA, DA and Si-PLS is a potential tool to provide a reference for the quality control of G. elata.
METHODOLOGIES FOR CALIBRATION AND PREDICTIVE ANALYSIS OF A WATERSHED MODEL
The use of a fitted-parameter watershed model to address water quantity and quality management issues requires that it be calibrated under a wide range of hydrologic conditions. However, rarely does model calibration result in a unique parameter set. Parameter nonuniqueness can l...
Towards a global network of gamma-ray detector calibration facilities
NASA Astrophysics Data System (ADS)
Tijs, Marco; Koomans, Ronald; Limburg, Han
2016-09-01
Gamma-ray logging tools are applied worldwide. At various locations, calibration facilities are used to calibrate these gamma-ray logging systems. Several attempts have been made to cross-correlate well-known calibration pits, but this cross-correlation does not include calibration facilities in Europe or private company calibration facilities. Our aim is to set up a framework that makes it possible to interlink all calibration facilities worldwide by using `tools of opportunity' - tools that have been calibrated in different calibration facilities, whether on a coordinated basis or by coincidence. To compare the measurements of different tools, it is important to understand the behaviour of the tools in the different calibration pits. Borehole properties, such as diameter, fluid, casing and probe diameter, strongly influence the outcome of gamma-ray borehole logging. Logs need to be properly calibrated and compensated for these borehole properties in order to obtain in-situ grades or to do cross-hole correlation. Some tool providers supply tool-specific correction curves for this purpose. Others rely on reference measurements against sources of known radionuclide concentration and geometry. In this article, we present an attempt to set up a framework for transferring `local' calibrations so that they can be applied `globally'. This framework includes corrections for any geometry and detector size to give absolute concentrations of radionuclides from borehole measurements. This model is used to compare measurements in the calibration pits of Grand Junction, located in the USA; Adelaide (previously known as AMDEL), located in Adelaide, Australia; and Stonehenge, located at Medusa Explorations BV in the Netherlands.
Hospital survey on patient safety culture: psychometric analysis on a Scottish sample.
Sarac, Cakil; Flin, Rhona; Mearns, Kathryn; Jackson, Jeanette
2011-10-01
To investigate the psychometric properties of the Hospital Survey on Patient Safety Culture on a Scottish NHS data set. The data were collected from 1969 clinical staff (estimated 22% response rate) from one acute hospital in each of seven Scottish Health Boards. Using a split-half validation technique, the data were randomly split; an exploratory factor analysis was conducted on the calibration data set, and confirmatory factor analyses were conducted on the validation data set to investigate and check the original US model fit in a Scottish sample. Following the split-half validation technique, exploratory factor analysis results showed a 10-factor optimal measurement model. The confirmatory factor analyses were then performed to compare the model fit of two competing models (10-factor alternative model vs 12-factor original model). A Satorra-Bentler scaled χ² difference test demonstrated that the original 12-factor model performed significantly better in the Scottish sample. Furthermore, reliability analyses of each component yielded satisfactory results. The mean scores on the climate dimensions in the Scottish sample were comparable with those found in other European countries. This study provided evidence that the original 12-factor structure of the Hospital Survey on Patient Safety Culture scale is replicated in this Scottish sample. Therefore, no modifications to the original 12-factor model are required, and its use is suggested, since it would allow researchers the possibility of cross-national comparisons.
NASA Astrophysics Data System (ADS)
Houchin, J. S.
2014-09-01
A common problem in the off-line validation of calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of those data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat-file input and output. However, this solved only part of the problem, as the toolkit and the methods for initiating the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of test data sets are staged for the algorithm once and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, new data sets are continually run through the algorithm, which requires significant effort to stage each of those data sets without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing efficient means to stage and process an input data set, to override static calibration coefficient look-up tables (LUTs) with experimental versions of those tables, and to manage a library containing multiple versions of each static LUT file in such a way that the correct set of LUTs required by each algorithm is automatically provided without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.
del Río, Joaquín; Aguzzi, Jacopo; Costa, Corrado; Menesatti, Paolo; Sbragaglia, Valerio; Nogueras, Marc; Sarda, Francesc; Manuèl, Antoni
2013-10-30
Field measurements of the swimming activity rhythms of fishes are scant due to the difficulty of counting individuals at high frequency over a long period of time. Cabled observatory video monitoring allows such sampling at high frequency over unlimited periods of time. Unfortunately, automation of the extraction of biological information (i.e., animals' visual counts per unit of time) is still a major bottleneck. In this study, we describe a new automated video-imaging protocol for the 24-h continuous counting of fishes in colorimetrically calibrated time-lapse photographic outputs, taken by a shallow-water (20 m depth) cabled video platform, the OBSEA. The spectral reflectance value for each patch was measured between 400 and 700 nm and then converted into standard RGB, used as a reference for all subsequent calibrations. All the images were acquired within a standardized Region Of Interest (ROI), represented by a 2 × 2 m methacrylate panel endowed with a 9-colour calibration chart, and calibrated using the recently implemented "3D Thin-Plate Spline" warping approach in order to numerically define colour by its coordinates in n-dimensional space. That operation was repeated on a subset of 500 images used as a training set, manually selected because they were acquired under optimum visibility conditions. All images, together with the training set, were then ordered through Principal Component Analysis, allowing the selection of 614 (67.6%) of the 908 total images, corresponding to 18 days (at 30 min frequency). The Roberts operator (used in image processing and computer vision for edge detection) was used to highlight regions of high spatial colour gradient corresponding to fishes' bodies. Time series of manual and automated visual counts were compared for efficiency evaluation. Periodogram and waveform analysis outputs provided very similar results, although the quantified parameters relating to the strength of the respective rhythms differed.
Results indicate that automation efficiency is limited by optimum visibility conditions. Data sets from manual counting show larger day-night fluctuations than those derived from automation. This comparison indicates that the automation protocol underestimates fish numbers but is nevertheless suitable for the study of community activity rhythms.
Stenlund, Hans; Johansson, Erik; Gottfries, Johan; Trygg, Johan
2009-01-01
Near infrared spectroscopy (NIR) was developed primarily for applications such as the quantitative determination of nutrients in the agricultural and food industries. Examples include the determination of water, protein, and fat within complex samples such as grain and milk. Because of its useful properties, NIR analysis has spread to other areas such as chemistry and pharmaceutical production. NIR spectra consist of infrared overtones and combinations thereof, making interpretation of the results complicated. It can be very difficult to assign peaks to known constituents in the sample. Thus, multivariate analysis (MVA) has been crucial in translating spectral data into information, mainly for predictive purposes. Orthogonal partial least squares (OPLS), a new MVA method, has prediction and modeling properties similar to those of other MVA techniques, e.g., partial least squares (PLS), a method with a long history of use for the analysis of NIR data. OPLS provides an intrinsic algorithmic improvement for the interpretation of NIR data. In this report, four sets of NIR data were analyzed to demonstrate the improved interpretation provided by OPLS. The first two sets included simulated data to demonstrate the overall principles; the third set comprised a statistically replicated design of experiments (DoE), to demonstrate how instrumental difference could be accurately visualized and correctly attributed to Wood's anomaly phenomena; the fourth set was chosen to challenge the MVA by using data relating to powder mixing, a crucial step in the pharmaceutical industry prior to tabletting. Improved interpretation by OPLS was demonstrated for all four examples, as compared to alternative MVA approaches. It is expected that OPLS will be used mostly in applications where improved interpretation is crucial; one such area is process analytical technology (PAT). 
PAT involves fewer independent samples, i.e., batches, than would be associated with agricultural applications; in addition, the Food and Drug Administration (FDA) demands "process understanding" in PAT. Both these issues make OPLS the ideal tool for a multitude of NIR calibrations. In conclusion, OPLS leads to better interpretation of spectrometry data (e.g., NIR) and improved understanding facilitates cross-scientific communication. Such improved knowledge will decrease risk, with respect to both accuracy and precision, when using NIR for PAT applications.
Molar mass characterization of sodium carboxymethyl cellulose by SEC-MALLS.
Shakun, Maryia; Maier, Helena; Heinze, Thomas; Kilz, Peter; Radke, Wolfgang
2013-06-05
Two series of sodium carboxymethyl celluloses (NaCMCs) derived from microcrystalline cellulose (Avicel samples) and cotton linters (BWL samples), with average degrees of substitution (DS) ranging from DS=0.45 to DS=1.55, were characterized by size exclusion chromatography with multi-angle laser light scattering detection (SEC-MALLS) in 100 mmol/L aqueous ammonium acetate (NH4OAc) as a vaporizable eluent system. The use of vaporizable NH4OAc allows future use of the eluent system in two-dimensional separations employing evaporative light scattering detection (ELSD). The losses of samples during filtration and during the chromatographic experiment were determined. The scaling exponent as of the relation [Formula: see text] was approximately 0.61, showing that NaCMCs exhibit an expanded coil conformation in solution. No systematic dependence of as on DS was observed. The dependence of molar mass on SEC elution volume for samples of different DS can be well described by a common calibration curve, which is advantageous, as it allows the determination of molar masses of unknown samples using the same calibration curve irrespective of the DS of the NaCMC sample. Since no commercial NaCMC standards are available, correction factors were determined that allow a pullulan-based calibration curve to be converted into an NaCMC calibration using the broad calibration approach. The weight-average molar masses derived using the calibration curve so established agree closely with those determined by light scattering, proving the accuracy of the correction factors. Copyright © 2013 Elsevier Ltd. All rights reserved.
Bittante, G; Ferragina, A; Cipolat-Gotet, C; Cecchinato, A
2014-10-01
Cheese yield is an important technological trait in the dairy industry. The aim of this study was to infer the genetic parameters of some cheese yield-related traits predicted using Fourier-transform infrared (FTIR) spectral analysis and compare the results with those obtained using an individual model cheese-producing procedure. A total of 1,264 model cheeses were produced using 1,500-mL milk samples collected from individual Brown Swiss cows, and individual measurements were taken for 10 traits: 3 cheese yield traits (fresh curd, curd total solids, and curd water as a percent of the weight of the processed milk), 4 milk nutrient recovery traits (fat, protein, total solids, and energy of the curd as a percent of the same nutrient in the processed milk), and 3 daily cheese production traits per cow (fresh curd, total solids, and water weight of the curd). Each unprocessed milk sample was analyzed using a MilkoScan FT6000 (Foss, Hillerød, Denmark) over the spectral range from 5,000 to 900 cm(-1). The FTIR spectrum-based prediction models for the previously mentioned traits were developed using modified partial least-squares regression. Cross-validation of the whole data set yielded coefficients of determination between the predicted and measured values in cross-validation of 0.65 to 0.95 for all traits, except for the recovery of fat (0.41). A 3-fold external validation was also used, in which the available data were partitioned into 2 subsets: a training set (one-third of the herds) and a testing set (two-thirds). The training set was used to develop calibration equations, whereas the testing subsets were used for external validation of the calibration equations and to estimate the heritabilities and genetic correlations of the measured and FTIR-predicted phenotypes.
The coefficients of determination between predicted and measured values in cross-validation obtained from the training sets were very similar to those obtained from the whole data set, but the coefficients of determination of validation for the external validation sets were much lower for all traits (0.30 to 0.73), and particularly for fat recovery (0.05 to 0.18). For each testing subset, the (co)variance components for the measured and FTIR-predicted phenotypes were estimated using bivariate Bayesian analyses and linear models. The intraherd heritabilities for the predicted traits obtained from our internal cross-validation using the whole data set ranged from 0.085 for daily yield of curd solids to 0.576 for protein recovery, and were similar to those obtained from the measured traits (0.079 to 0.586, respectively). The heritabilities estimated from the testing data set used for external validation were more variable but similar (on average) to the corresponding values obtained from the whole data set. Moreover, the genetic correlations between the predicted and measured traits were high in general (0.791 to 0.996), and they were always higher than the corresponding phenotypic correlations (0.383 to 0.995), especially for the external validation subset. In conclusion, we herein report that application of the cross-validation technique to the whole data set tended to overestimate the predictive ability of FTIR spectra, give more precise phenotypic predictions than the calibrations obtained using smaller data sets, and yield genetic correlations similar to those obtained from the measured traits. Collectively, our findings indicate that FTIR predictions have the potential to be used as indicator traits for the rapid and inexpensive selection of dairy populations for improvement of cheese yield, milk nutrient recovery in curd, and daily cheese production per cow. 
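The gap between random cross-validation and herd-wise external validation arises because herd-level structure leaks into random splits. A minimal numpy sketch with a toy herd-mean predictor (herd count, offsets, and split sizes are all hypothetical) illustrates the effect:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: 6 hypothetical herds, each with its own offset.
herds = np.repeat(np.arange(6), 40)
offsets = rng.normal(0.0, 2.0, 6)
y = offsets[herds] + rng.normal(0.0, 0.5, herds.size)

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def herd_mean_predict(train_mask, test_mask):
    # Predict each test sample by the mean of training samples from
    # the same herd; unseen herds fall back to the global train mean.
    fallback = y[train_mask].mean()
    preds = np.full(test_mask.sum(), fallback)
    for i, h in enumerate(herds[test_mask]):
        seen = train_mask & (herds == h)
        if seen.any():
            preds[i] = y[seen].mean()
    return preds

# Random split: every herd appears on both sides (leakage).
perm = rng.permutation(herds.size)
rand_train = np.zeros(herds.size, bool)
rand_train[perm[:120]] = True
r2_random = r2(y[~rand_train], herd_mean_predict(rand_train, ~rand_train))

# External split: whole herds held out, analogous to herd-based validation.
ext_train = herds < 4
r2_external = r2(y[~ext_train], herd_mean_predict(ext_train, ~ext_train))
```

The random split can exploit herd effects it has already seen, while the herd-wise split cannot, so `r2_random` exceeds `r2_external`, mirroring the overestimation the abstract describes.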
Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Kassamakov, Ivan; Maconi, Göran; Penttilä, Antti; Helander, Petteri; Gritsevich, Maria; Puranen, Tuomas; Salmi, Ari; Hæggström, Edward; Muinonen, Karri
2018-02-01
We present the design of a novel scatterometer for precise measurement of the angular Mueller matrix profile of a mm- to µm-sized sample held in place by sound. The scatterometer comprises a tunable multimode argon-krypton laser (with the possibility of selecting 1 of 12 wavelengths in the visible range), linear polarizers, a reference photomultiplier tube (PMT) for monitoring the beam intensity, and a micro-PMT module mounted radially towards the sample at an adjustable radius. The measurement angle is controlled by a motor-driven rotation stage with an accuracy of 15'. The system is fully automated using LabVIEW, including the FPGA-based data acquisition and the instrument's user interface. The calibration protocol ensures accurate measurements by using a control sphere sample (diameter 3 mm, refractive index 1.5) fixed first on a static holder, followed by accurate multi-wavelength measurements of the same sample levitated ultrasonically. To demonstrate the performance of the scatterometer, we conducted detailed measurements of light scattered by a particle derived from the Chelyabinsk meteorite, as well as by planetary analogue materials. The measurements are the first of their kind, since they are obtained using controlled spectral angular scattering, including linear polarization effects, for arbitrarily shaped objects. Thus, our novel approach permits a non-destructive, disturbance-free measurement with control of the orientation and location of the scattering object.
NASA Astrophysics Data System (ADS)
Mizukami, N.; Clark, M. P.; Newman, A. J.; Wood, A.; Gutmann, E. D.
2017-12-01
Estimating spatially distributed model parameters is a grand challenge for large-domain hydrologic modeling, especially in the context of applications such as streamflow forecasting. Multi-scale Parameter Regionalization (MPR) is a promising technique that accounts for the effects of fine-scale geophysical attributes (e.g., soil texture, land cover, topography, climate) and of nonlinear scaling on model parameters. MPR computes model parameters with transfer functions (TFs) that relate geophysical attributes to model parameters at the native input data resolution, and then scales them to the spatial resolution of the model implementation using scaling functions. One of the biggest challenges in the use of MPR is the identification of TFs for each model parameter: both the functional forms and the geophysical predictors. TFs used to estimate the parameters of hydrologic models have typically relied on previous studies or been derived in an ad-hoc, heuristic manner, potentially not exploiting the full information content of the geophysical attributes for optimal parameter identification. Thus, it is necessary to first uncover the relationships among geophysical attributes, model parameters, and hydrologic processes (i.e., hydrologic signatures) to gain insight into which geophysical attributes are related to model parameters, and to what extent. We perform multivariate statistical analysis on a large-sample catchment data set including various geophysical attributes as well as constrained VIC model parameters at 671 unimpaired basins over the CONUS. We first calibrate the VIC model at each catchment to obtain constrained parameter sets. Additionally, parameter sets sampled during the calibration process are used for sensitivity analysis with various hydrologic signatures as objectives, to understand the relationships among geophysical attributes, parameters, and hydrologic processes.
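The MPR chain of a transfer function evaluated at native resolution followed by upscaling can be sketched as follows; the texture-to-conductivity TF and the harmonic-mean scaling function are illustrative assumptions, not the study's actual choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fine-grid attributes (hypothetical): sand fraction on a 100-cell
# grid that aggregates 10:1 into 10 model grid cells.
sand = rng.uniform(0.1, 0.9, 100)
clay = 1.0 - sand

def transfer_function(sand, clay, a=10.0, b=0.5):
    # Illustrative TF: a conductivity-like parameter from soil
    # texture, evaluated at the native attribute resolution.
    return a * sand + b * clay

k_fine = transfer_function(sand, clay)

# Scaling function: harmonic mean over the 10 fine cells in each
# model cell (a common upscaling choice for conductivities).
k_model = 10.0 / np.sum(1.0 / k_fine.reshape(10, 10), axis=1)
```

Calibrating the TF coefficients (here `a` and `b`) rather than each grid cell's parameter is what keeps the MPR parameter space small and transferable across resolutions.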
NASA Astrophysics Data System (ADS)
Bijl, Piet; Reynolds, Joseph P.; Vos, Wouter K.; Hogervorst, Maarten A.; Fanning, Jonathan D.
2011-05-01
The TTP (Targeting Task Performance) metric, developed at NVESD, is the current standard US Army model to predict EO/IR target acquisition performance. This model, however, does not have a corresponding lab or field test to empirically assess the performance of a camera system. The TOD (Triangle Orientation Discrimination) method, developed at TNO in The Netherlands, provides such a measurement. In this study, we make a direct comparison between TOD performance for a range of sensors and the extensive historical US observer performance database built to develop and calibrate the TTP metric. The US perception data were collected from military personnel performing an identification task on a standard 12-target, 12-aspect tactical vehicle image set that was processed through simulated sensors for which the most fundamental sensor parameters, such as blur, sampling, and spatial and temporal noise, were varied. In the present study, we measured TOD sensor performance using exactly the same sensors processing a set of TOD triangle test patterns. The study shows that good overall agreement is obtained when the ratio between target characteristic size and TOD test pattern size at threshold equals 6.3. Note that this number is purely based on empirical data, without any intermediate modeling. The calibration of the TOD to the TTP is highly beneficial to the sensor modeling and testing community for a variety of reasons. These include: i) a connection between requirement specification and acceptance testing, and ii) a very efficient method to quickly validate or extend the TTP range prediction model to new systems and tasks.
[Determination of Carbaryl in Rice by Using FT Far-IR and THz-TDS Techniques].
Sun, Tong; Zhang, Zhuo-yong; Xiang, Yu-hong; Zhu, Ruo-hua
2016-02-01
Determination of carbaryl in rice by using Fourier transform far-infrared (FT-Far-IR) and terahertz time-domain spectroscopy (THz-TDS) combined with chemometrics was studied, and the spectral characteristics of carbaryl in the terahertz region were investigated. Samples were prepared by mixing carbaryl at different amounts with rice powder; pellets of 13 mm diameter and about 1 mm thickness, with polyethylene (PE) as the matrix, were then compressed under a pressure of 5-7 tons. Terahertz time-domain spectra of the pellets were measured at 0.5-1.5 THz, and the absorption spectra at 1.8-6.3 THz were acquired with Fourier transform far-IR spectroscopy. The sample preparation is simple and requires no separation or enrichment. Absorption peaks were found at 3.2 and 5.2 THz in the 1.8-6.3 THz range by Far-IR, and there are several weak absorption peaks in the 0.5-1.5 THz range by THz-TDS. These two kinds of characteristic absorption spectra were randomly divided into calibration and prediction sets by leave-N-out cross-validation, respectively. Finally, the partial least squares regression (PLSR) method was used to establish two quantitative analysis models. The root mean square error of cross-validation (RMSECV), the root mean square error of prediction (RMSEP) and the correlation coefficient of prediction (Rv) were used as the basis for evaluating model performance: a higher Rv is better, while lower RMSECV and RMSEP are better. The obtained results demonstrated that the predictive accuracy of the two models built with the PLSR method was satisfactory. For the FT-Far-IR model, the correlation between actual and predicted values of the prediction samples (Rv) was 0.99, RMSEP was 0.0086, and RMSECV was 0.0077. For the THz-TDS model, Rv was 0.98, RMSEP was 0.0044, and RMSECV was 0.0025. 
These results prove that FT-Far-IR and THz-TDS can be feasible tools for the quantitative determination of carbaryl in rice. This paper provides a new method for the quantitative determination of pesticides in other grain samples.
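The PLSR-with-cross-validation workflow used for both models can be sketched with a minimal single-response PLS (NIPALS) on synthetic spectra; the band shapes, noise level, and fold size below are hypothetical, not the paper's data:

```python
import numpy as np

def pls1(X, y, n_comp=2):
    # Minimal single-response PLS via NIPALS; a sketch, not a
    # validated chemometrics implementation.
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w = w / np.linalg.norm(w)
        t = Xc @ w
        tt = float(t @ t)
        p = Xc.T @ t / tt
        qh = float(yc @ t) / tt
        W.append(w); P.append(p); q.append(qh)
        Xc = Xc - np.outer(t, p)       # deflate X
        yc = yc - qh * t               # deflate y
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)
    return lambda Xn: (Xn - x_mean) @ B + y_mean

# Synthetic "spectra": 40 samples, 50 channels; analyte band near
# channel 15 plus one interfering band near channel 35.
rng = np.random.default_rng(7)
conc = rng.uniform(0.0, 1.0, 40)
other = rng.uniform(0.0, 1.0, 40)
chan = np.arange(50)
band_a = np.exp(-0.5 * ((chan - 15) / 4.0) ** 2)
band_b = np.exp(-0.5 * ((chan - 35) / 4.0) ** 2)
X = np.outer(conc, band_a) + np.outer(other, band_b)
X += rng.normal(0.0, 0.01, X.shape)

# Leave-4-out cross-validation, standing in for leave-N-out.
errors = []
for start in range(0, 40, 4):
    test = np.zeros(40, bool)
    test[start:start + 4] = True
    model = pls1(X[~test], conc[~test], n_comp=2)
    errors.append(model(X[test]) - conc[test])
rmsecv = float(np.sqrt(np.mean(np.concatenate(errors) ** 2)))
```

Two latent components suffice here because the synthetic data contain exactly two independent sources of variation; in practice the component count is itself chosen by cross-validation.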
Advanced fast 3D DSA model development and calibration for design technology co-optimization
NASA Astrophysics Data System (ADS)
Lai, Kafai; Meliorisz, Balint; Muelders, Thomas; Welling, Ulrich; Stock, Hans-Jürgen; Marokkey, Sajan; Demmerle, Wolfgang; Liu, Chi-Chun; Chi, Cheng; Guo, Jing
2017-04-01
Direct Optimization (DO) of a 3D DSA model is a better approach for a DTCO study, in terms of both accuracy and speed, than a Cahn-Hilliard equation solver. DO's shorter run time (10X to 100X faster) and linear scaling make it scalable to the area required for a DTCO study. However, the lack of temporal data output, as opposed to prior art, requires a new calibration method. The new method involves a specific set of calibration patterns, whose design is extremely important in the absence of temporal data for obtaining robust model parameters. A model calibrated to a hybrid DSA system with a set of device-relevant constructs indicates the effectiveness of using nontemporal data. Preliminary model prediction using programmed defects on chemo-epitaxy shows encouraging results and agrees qualitatively well with theoretical predictions from strong segregation theory.
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2018-01-01
Analysis and use of temperature-dependent wind tunnel strain-gage balance calibration data are discussed in the paper. First, three different methods are presented and compared that may be used to process temperature-dependent strain-gage balance data. The first method uses an extended set of independent variables in order to process the data and predict balance loads. The second method applies an extended load iteration equation during the analysis of balance calibration data. The third method uses temperature-dependent sensitivities for the data analysis. Physical interpretations of the most important temperature-dependent regression model terms are provided that relate temperature compensation imperfections and the temperature-dependent nature of the gage factor to sets of regression model terms. Finally, balance calibration recommendations are listed so that temperature-dependent calibration data can be obtained and successfully processed using the reviewed analysis methods.
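The first method above, regressing loads on an extended variable set that includes temperature terms, can be sketched as follows; the sensitivity drift and bias coefficients are hypothetical illustrations, not balance data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic single-gage calibration data (all coefficients are
# hypothetical): the gage output depends on applied load F through a
# temperature-dependent sensitivity (gage factor) plus a temperature bias.
F = rng.uniform(0.0, 100.0, 200)             # applied load
T = rng.uniform(10.0, 40.0, 200)             # gage temperature, deg C
sens = 0.02 * (1.0 + 0.005 * (T - 20.0))     # gage factor drifts with T
out = sens * F + 0.005 * (T - 20.0) + rng.normal(0.0, 0.001, 200)

def fit_rms(columns):
    # Least-squares load prediction from the given regressors.
    A = np.column_stack(columns + [np.ones_like(F)])
    coef, *_ = np.linalg.lstsq(A, F, rcond=None)
    return float(np.sqrt(np.mean((A @ coef - F) ** 2)))

rms_plain = fit_rms([out])                   # ignores temperature
rms_temp = fit_rms([out, T, out * T])        # extended variable set
```

Including the temperature and interaction terms lets the regression absorb both the temperature bias and, to first order, the temperature-dependent gage factor, so `rms_temp` comes out well below `rms_plain`.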
NASA Astrophysics Data System (ADS)
Becker, R.; Usman, M.
2017-12-01
A SWAT (Soil and Water Assessment Tool) model is applied in the semi-arid Punjab region of Pakistan. The physically based hydrological model is set up to simulate hydrological processes and water resources demands under future land use, climate change and irrigation management scenarios. To run the model successfully, detailed attention is paid to the calibration procedure. The study deals with the following calibration issues: i) the lack of reliable calibration/validation data, ii) the difficulty of accurately modeling a highly managed system with a physically based hydrological model, and iii) the use of alternative, spatially distributed data sets for model calibration. In our study area field observations are rare, and the entirely human-controlled irrigation system renders central calibration parameters (e.g., runoff/curve number) unsuitable, as it cannot be assumed that they represent the natural behavior of the hydrological system. Principal hydrological processes can, however, still be inferred from evapotranspiration (ET). Usman et al. (2015) derived satellite-based monthly ET data for our study area with SEBAL (Surface Energy Balance Algorithm for Land) and created a reliable ET data set, which we use in this study to calibrate our SWAT model. The initial SWAT model performance is evaluated with respect to the SEBAL results using correlation coefficients, RMSE, Nash-Sutcliffe efficiencies and mean differences. Particular focus is laid on the spatial patterns, investigating the potential of a spatially differentiated parameterization instead of spatially uniform calibration data. A sensitivity analysis reveals the most sensitive parameters with respect to changes in ET, which are then selected for the calibration process. Using the SEBAL ET product, we calibrate the SWAT model for the period 2005-2006 with a dynamically dimensioned search algorithm to minimize RMSE. 
The model improvement after the calibration procedure is finally evaluated, based on the previously chosen evaluation criteria, for the period 2007-2008. The study reveals the sensitivity of SWAT model parameters to changes in ET in a semi-arid, human-controlled system, and the potential of calibrating those parameters using satellite-derived ET data.
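The dynamically dimensioned search used for the calibration can be sketched as a simplified greedy DDS minimizing a toy RMSE surface; the bounds, perturbation size, and objective below are illustrative, not the study's parameter space:

```python
import numpy as np

def dds(objective, x0, lo, hi, max_iter=300, r=0.2, seed=0):
    # Dynamically dimensioned search: greedy acceptance, with the
    # expected number of perturbed dimensions shrinking over iterations.
    rng = np.random.default_rng(seed)
    x_best = np.asarray(x0, dtype=float)
    f_best = objective(x_best)
    n = x_best.size
    for i in range(1, max_iter + 1):
        p = max(1.0 - np.log(i) / np.log(max_iter), 1.0 / n)
        mask = rng.random(n) < p
        if not mask.any():
            mask[rng.integers(n)] = True
        x_new = x_best.copy()
        x_new[mask] += rng.normal(0.0, r * (hi - lo))[mask]
        x_new = np.where(x_new < lo, 2.0 * lo - x_new, x_new)  # reflect
        x_new = np.where(x_new > hi, 2.0 * hi - x_new, x_new)
        x_new = np.clip(x_new, lo, hi)
        f_new = objective(x_new)
        if f_new < f_best:
            x_best, f_best = x_new, f_new
    return x_best, f_best

# Toy stand-in for the model-vs-satellite-ET RMSE objective, with a
# hypothetical optimum at (2, -1).
def toy_rmse(params):
    return float(np.sqrt(np.mean((params - np.array([2.0, -1.0])) ** 2)))

lo, hi = np.array([-5.0, -5.0]), np.array([5.0, 5.0])
x_opt, f_opt = dds(toy_rmse, np.zeros(2), lo, hi)
```

Because acceptance is greedy, the best objective value never worsens; the shrinking perturbation probability shifts the search from global exploration to local refinement.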
The Mars Science Laboratory Organic Check Material
NASA Astrophysics Data System (ADS)
Conrad, Pamela G.; Eigenbrode, Jennifer L.; Von der Heydt, Max O.; Mogensen, Claus T.; Canham, John; Harpold, Dan N.; Johnson, Joel; Errigo, Therese; Glavin, Daniel P.; Mahaffy, Paul R.
2012-09-01
Mars Science Laboratory's Curiosity rover carries a set of five external verification standards in hermetically sealed containers that can be sampled as would be a Martian rock, by drilling and then portioning into the solid sample inlet of the Sample Analysis at Mars (SAM) suite. Each organic check material (OCM) canister contains a porous ceramic solid, which has been doped with a fluorinated hydrocarbon marker that can be detected by SAM. The purpose of the OCM is to serve as a verification tool for the organic cleanliness of those parts of the sample chain that cannot be cleaned other than by dilution, i.e., repeated sampling of Martian rock. SAM possesses internal calibrants for verification of both its performance and its internal cleanliness, and the OCM is not used for that purpose. Each OCM unit is designed for one use only, and the choice to do so will be made by the project science group (PSG).
Ait Kaci Azzou, S; Larribe, F; Froda, S
2016-10-01
In Ait Kaci Azzou et al. (2015) we introduced an Importance Sampling (IS) approach for estimating the demographic history of a sample of DNA sequences, the skywis plot. More precisely, we proposed a new nonparametric estimate of a population size that changes over time. We showed on simulated data that the skywis plot can work well in typical situations where the effective population size does not undergo very steep changes. In this paper, we introduce an iterative procedure which extends the previous method and gives good estimates under such rapid variations. In the iterative calibrated skywis plot we approximate the effective population size by a piecewise constant function, whose values are re-estimated at each step. These piecewise constant functions are used to generate the waiting times of non-homogeneous Poisson processes related to a coalescent process with mutation under a variable population size model. Moreover, the present IS procedure is based on a modified version of the Stephens and Donnelly (2000) proposal distribution. Finally, we apply the iterative calibrated skywis plot method to a simulated data set from a rapidly expanding exponential model, and we show that the method based on this new IS strategy correctly reconstructs the demographic history. Copyright © 2016. Published by Elsevier Inc.
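The waiting-time generation step under a piecewise-constant population size can be sketched by inverting the cumulative coalescence rate; this is an illustrative fragment, not the authors' full importance-sampling procedure:

```python
import math
import random

def coalescent_times(k0, breakpoints, sizes, seed=0):
    # Draw successive coalescence times for k0 lineages under a
    # piecewise-constant N(t): sizes[i] holds on
    # [breakpoints[i], breakpoints[i+1]), the last epoch is open-ended.
    # With k lineages the coalescence rate is k*(k-1)/(2*N(t)); the
    # waiting time inverts the cumulative rate against an Exp(1) draw.
    rng = random.Random(seed)
    t = 0.0
    times = []
    for k in range(k0, 1, -1):
        rate_coef = k * (k - 1) / 2.0
        target = -math.log(1.0 - rng.random())   # Exp(1) deviate
        acc = 0.0
        i = 0
        while i + 1 < len(breakpoints) and breakpoints[i + 1] <= t:
            i += 1                               # epoch containing t
        while True:
            seg_end = breakpoints[i + 1] if i + 1 < len(breakpoints) else math.inf
            start = max(t, breakpoints[i])
            seg_rate = rate_coef / sizes[i]
            if acc + seg_rate * (seg_end - start) >= target:
                t = start + (target - acc) / seg_rate
                break
            acc += seg_rate * (seg_end - start)
            i += 1
        times.append(t)
    return times

# 5 lineages; population size jumps from 1 to 10 at time 1.0
# (hypothetical sizes, coalescent time units).
times = coalescent_times(5, [0.0, 1.0], [1.0, 10.0])
```

Updating the `sizes` vector at each step and redrawing the waiting times is the re-estimation loop the iterative calibrated skywis plot builds on.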
Kumar, Keshav
2018-03-01
Excitation-emission matrix fluorescence (EEMF) and total synchronous fluorescence spectroscopy (TSFS) are the two fluorescence techniques commonly used for the analysis of multifluorophoric mixtures. The two techniques are conceptually different and offer certain advantages over each other. The manual analysis of such highly correlated, large-volume EEMF and TSFS data sets to develop a calibration model is difficult. Partial least squares (PLS) analysis can handle large volumes of EEMF and TSFS data by finding the important factors that maximize the correlation between the spectral and concentration information for each fluorophore. However, applying PLS analysis to the entire data set often does not yield a robust calibration model and requires a suitable pre-processing step. The present work evaluates the application of genetic algorithm (GA) analysis prior to PLS analysis of EEMF and TSFS data sets to improve the precision and accuracy of the calibration model. The GA essentially combines the advantages of stochastic methods with those of deterministic approaches and can find the set of EEMF and TSFS variables that correlate well with the concentration of each of the fluorophores present in the multifluorophoric mixtures. The utility of GA-assisted PLS analysis is successfully validated using (i) EEMF data sets acquired for dilute aqueous mixtures of four biomolecules and (ii) TSFS data sets acquired for dilute aqueous mixtures of four carcinogenic polycyclic aromatic hydrocarbons (PAHs). In the present work, it is shown that the GA can significantly improve the accuracy and precision of the PLS calibration models developed for both EEMF and TSFS data sets. Hence, GA should be considered a useful pre-processing technique when developing EEMF and TSFS calibration models.
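GA-based variable selection ahead of the regression step can be sketched as follows; for brevity the fitness uses a plain least-squares model in place of PLS, and all data, population sizes, and rates are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic mixture data (hypothetical): 30 samples, 40 spectral
# variables; only variables 5-9 carry the analyte signal.
n, p = 30, 40
conc = rng.uniform(0.0, 1.0, n)
X = 0.3 * rng.normal(0.0, 1.0, (n, p))
X[:, 5:10] += conc[:, None]

def cv_rmse(cols):
    # Leave-10-out RMSE of an OLS model on the selected columns
    # (a stand-in for the PLS model used in the paper).
    if not cols.any():
        return float("inf")
    err = []
    for start in range(0, n, 10):
        test = np.zeros(n, bool)
        test[start:start + 10] = True
        A = np.column_stack([X[~test][:, cols], np.ones(n - 10)])
        beta, *_ = np.linalg.lstsq(A, conc[~test], rcond=None)
        At = np.column_stack([X[test][:, cols], np.ones(10)])
        err.append(At @ beta - conc[test])
    return float(np.sqrt(np.mean(np.concatenate(err) ** 2)))

# Tiny elitist GA over binary variable-selection chromosomes.
pop = rng.random((20, p)) < 0.2
for _ in range(30):
    fitness = np.array([cv_rmse(ind) for ind in pop])
    pop = pop[np.argsort(fitness)]
    for j in range(10, 20):                    # breed over worst half
        a, b = pop[rng.integers(10)], pop[rng.integers(10)]
        cut = int(rng.integers(1, p))
        child = np.concatenate([a[:cut], b[cut:]])
        child = child ^ (rng.random(p) < 0.02)  # mutation
        pop[j] = child
best_rmse = cv_rmse(pop[0])
```

Because the top half of the population is carried over unchanged, the best cross-validated error never worsens across generations, which is the property that makes the GA a safe pre-processing step before the final calibration fit.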
Development of a Low-Level Ar-37 Calibration Standard
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Richard M.; Aalseth, Craig E.; Bowyer, Ted W.
Argon-37 is an important environmental signature of an underground nuclear explosion. Producing and quantifying low-level 37Ar standards is an important step in the development of sensitive field measurement instruments for use during an On-Site Inspection, a key provision of the Comprehensive Nuclear-Test-Ban Treaty. This paper describes progress at Pacific Northwest National Laboratory (PNNL) in the development of a process to generate and quantify low-level 37Ar standard material, which can then be used to calibrate sensitive field systems at activities consistent with soil background levels. The 37Ar used for our work was generated using a laboratory-scale, high-energy neutron source to irradiate powdered samples of calcium carbonate. Small aliquots of 37Ar were then extracted from the head space of the irradiated samples. The specific activity of the head space samples, mixed with P10 (90% stable argon:10% methane by mole fraction) count gas, is then derived using the accepted Length-Compensated Internal-Source Proportional Counting method. Due to the low activity of the samples, a set of three Ultra-Low Background Proportional-Counters designed and fabricated at PNNL from radio-pure electroformed copper was used to make the measurements in PNNL’s shallow underground counting laboratory. Very low background levels (<10 counts/day) have been observed in the spectral region near the 37Ar emission feature at 2.8 keV. Two separate samples from the same irradiation were measured. The first sample was counted for 12 days beginning 28 days after irradiation, the second sample was counted for 24 days beginning 70 days after irradiation (the half-life of 37Ar is 35.0 days). Both sets of measurements were analyzed and yielded very similar results for the starting activity (~0.1 Bq) and activity concentration (0.15 mBq/ccSTP argon) after P10 count gas was added. 
A detailed uncertainty model was developed based on the ISO Guide to the Expression of Uncertainty in Measurement. This paper presents a discussion of the measurement analysis, along with assumptions and uncertainty estimates.
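The decay correction linking the measured activities back to the ~0.1 Bq starting activity follows directly from the quoted 35.0-day half-life; the measured value used below is a hypothetical illustration:

```python
HALF_LIFE_AR37_D = 35.0   # days, value quoted above

def activity_at_t0(measured_bq, days_elapsed):
    # Back-correct a measured 37Ar activity to end of irradiation:
    # A0 = A * 2**(t / T_half)
    return measured_bq * 2.0 ** (days_elapsed / HALF_LIFE_AR37_D)

# A hypothetical reading of 0.0574 Bq taken 28 days after
# irradiation corresponds to roughly 0.1 Bq starting activity.
a0 = activity_at_t0(0.0574, 28.0)
```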
Construction and calibration of a low cost and fully automated vibrating sample magnetometer
NASA Astrophysics Data System (ADS)
El-Alaily, T. M.; El-Nimr, M. K.; Saafan, S. A.; Kamel, M. M.; Meaz, T. M.; Assar, S. T.
2015-07-01
A low-cost vibrating sample magnetometer (VSM) has been constructed using an electromagnet and an audio loudspeaker, both controlled by a data acquisition device. The constructed VSM records the magnetic hysteresis loop up to 8.3 kG at room temperature. The apparatus has been calibrated and tested using magnetic hysteresis data of several ferrite samples measured by two professionally calibrated magnetometers (Lake Shore model 7410 and LDJ Electronics Inc., Troy, MI). Our lab-built VSM design proved successful and reliable.
In-Space Calibration of a Gyro Quadruplet
NASA Technical Reports Server (NTRS)
Bar-Itzhack, Itzhack Y.; Harman, Richard R.
2001-01-01
This work presents a new approach to gyro calibration where, in addition to being used for computing attitude that is needed in the calibration process, the gyro outputs are also used as measurements in a Kalman filter. This work also presents an algorithm for calibrating a quadruplet rather than the customary triad gyro set. In particular, a new misalignment error model is derived for this case. The new calibration algorithm is applied to the EOS-AQUA satellite gyros. The effectiveness of the new algorithm is demonstrated through simulations.
NASA Technical Reports Server (NTRS)
Parker, Peter A. (Inventor)
2003-01-01
A single vector calibration system is provided which facilitates the calibration of multi-axis load cells, including wind tunnel force balances. The single vector system provides the capability to calibrate a multi-axis load cell using a single directional load, for example loading solely in the gravitational direction. The system manipulates the load cell in three-dimensional space, while keeping the uni-directional calibration load aligned. The use of a single vector calibration load reduces the set-up time for the multi-axis load combinations needed to generate a complete calibration mathematical model. The system also reduces load application inaccuracies caused by the conventional requirement to generate multiple force vectors. The simplicity of the system reduces calibration time and cost, while simultaneously increasing calibration accuracy.
Dual-angle, self-calibrating Thomson scattering measurements in RFX-MOD
NASA Astrophysics Data System (ADS)
Giudicotti, L.; Pasqualotto, R.; Fassina, A.
2014-11-01
In the multipoint Thomson scattering (TS) system of the RFX-MOD experiment the signals from a few spatial positions can be observed simultaneously under two different scattering angles. In addition the detection system uses optical multiplexing by signal delays in fiber optic cables of different length so that the two sets of TS signals can be observed by the same polychromator. Owing to the dependence of the TS spectrum on the scattering angle, it was then possible to implement self-calibrating TS measurements in which the electron temperature Te, the electron density ne and the relative calibration coefficients of spectral channels sensitivity Ci were simultaneously determined by a suitable analysis of the two sets of TS data collected at the two angles. The analysis has shown that, in spite of the small difference in the spectra obtained at the two angles, reliable values of the relative calibration coefficients can be determined by the analysis of good S/N dual-angle spectra recorded in a few tens of plasma shots. This analysis suggests that in RFX-MOD the calibration of the entire set of TS polychromators by means of the similar, dual-laser (Nd:YAG/Nd:YLF) TS technique, should be feasible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ovard R. Perry; David L. Georgeson
This report describes the April 2011 calibration of the Accuscan II HpGe In Vivo system for high energy lung counting. The source used for the calibration was a NIST traceable lung set manufactured at the University of Cincinnati UCLL43AMEU & UCSL43AMEU containing Am-241 and Eu-152 with energies from 26 keV to 1408 keV. The lung set was used in conjunction with a Realistic Torso phantom. The phantom was placed on the RMC II counting table (with pins removed) between the v-ridges on the backwall of the Accuscan II counter. The top of the detector housing was positioned perpendicular to the junction of the phantom clavicle with the sternum. This position places the approximate center line of the detector housing with the center of the lungs. The energy and efficiency calibrations were performed using a Realistic Torso phantom (Appendix I) and the University of Cincinnati lung set. This report includes an overview introduction and records for the energy/FWHM and efficiency calibration including performance verification and validation counting. The Accuscan II system was successfully calibrated for high energy lung counting and verified in accordance with ANSI/HPS N13.30-1996 criteria.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grana, Dario; Verma, Sumit; Pafeng, Josiane
We present a reservoir geophysics study, including rock physics modeling and seismic inversion, of a carbon dioxide sequestration site in Southwestern Wyoming, namely the Rock Springs Uplift, and build a petrophysical model for the potential injection reservoirs for carbon dioxide sequestration. Our objectives include the facies classification and the estimation of the spatial model of porosity and permeability for two sequestration targets of interest, the Madison Limestone and the Weber Sandstone. The available dataset includes a complete set of well logs at the location of the borehole available in the area, a set of 110 core samples, and a seismic survey acquired in the area around the well. The proposed study includes a formation evaluation analysis and facies classification at the well location, the calibration of a rock physics model to link petrophysical properties and elastic attributes using well log data and core samples, the elastic inversion of the pre-stack seismic data, and the estimation of the reservoir model of facies, porosity and permeability conditioned by seismic inverted elastic attributes and well log data. In particular, the rock physics relations are facies-dependent and include granular media equations for clean and shaley sandstone, and inclusion models for the dolomitized limestone. The permeability model has been computed by applying a facies-dependent porosity-permeability relation calibrated using core sample measurements. Finally, the study shows that both formations show good storage capabilities. The Madison Limestone includes a homogeneous layer of high-porosity high-permeability dolomite; the Weber Sandstone is characterized by a lower average porosity but the layer is thicker than the Madison Limestone.
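The facies-dependent porosity-permeability calibration can be sketched as a per-facies log-linear fit to core data; the coefficients and sample counts below are synthetic stand-ins, not the study's 110 cores:

```python
import numpy as np

rng = np.random.default_rng(9)

def make_cores(a, b, n):
    # Synthetic core measurements following log10(k) = a + b*phi
    # with measurement scatter (all values hypothetical).
    phi = rng.uniform(0.05, 0.25, n)
    logk = a + b * phi + rng.normal(0.0, 0.2, n)
    return phi, logk

phi_ss, logk_ss = make_cores(-2.0, 20.0, 60)   # sandstone-like trend
phi_do, logk_do = make_cores(-3.0, 15.0, 50)   # dolomite-like trend

# Calibrate one log-linear relation per facies, broadly analogous
# to fitting a facies-dependent relation on core samples.
coeffs = {
    "sandstone": np.polyfit(phi_ss, logk_ss, 1),  # [slope, intercept]
    "dolomite": np.polyfit(phi_do, logk_do, 1),
}

def permeability_md(facies, phi):
    slope, intercept = coeffs[facies]
    return 10.0 ** (intercept + slope * phi)
```

Keeping separate coefficients per facies is what lets the same porosity value map to very different permeabilities in the sandstone and the dolomitized limestone.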
A new time calibration method for switched-capacitor-array-based waveform samplers
NASA Astrophysics Data System (ADS)
Kim, H.; Chen, C.-T.; Eclov, N.; Ronzhin, A.; Murat, P.; Ramberg, E.; Los, S.; Moses, W.; Choong, W.-S.; Kao, C.-M.
2014-12-01
We have developed a new time calibration method for the DRS4 waveform sampler that enables us to precisely measure the non-uniform sampling interval inherent in the switched-capacitor cells of the DRS4. The method uses the proportionality between the differential amplitude and sampling interval of adjacent switched-capacitor cells responding to a sawtooth-shape pulse. In the experiment, a sawtooth-shape pulse with a 40 ns period generated by a Tektronix AWG7102 is fed to a DRS4 evaluation board for calibrating the sampling intervals of all 1024 cells individually. The electronic time resolution of the DRS4 evaluation board with the new time calibration is measured to be 2.4 ps RMS by using two simultaneous Gaussian pulses with 2.35 ns full-width at half-maximum and applying a Gaussian fit. The time resolution dependencies on the time difference with the new time calibration are measured and compared to results obtained by another method. The new method could be applicable for other switched-capacitor-array technology-based waveform samplers for precise time calibration.
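The proportionality between the differential amplitude and the sampling interval can be sketched as follows; the cell-interval spread, ramp slope, and noise level are illustrative, and the sum constraint stands in for the known generator period:

```python
import numpy as np

rng = np.random.default_rng(11)

# 1024 cells with non-uniform true sampling intervals around a
# 200 ps nominal value (spread chosen for illustration only).
n_cells = 1024
dt_true = rng.uniform(150e-12, 250e-12, n_cells)

# Sampling one linear flank of the sawtooth: the voltage step
# between adjacent cells is proportional to that cell's interval.
ramp_slope = 1.0e9                                   # V/s
dv = ramp_slope * dt_true + rng.normal(0.0, 1e-5, n_cells)

# Calibrated intervals from the proportionality, rescaled so their
# sum matches a known total (standing in for the generator period).
dt_cal = dv / ramp_slope
dt_cal *= dt_true.sum() / dt_cal.sum()

rms_err = float(np.sqrt(np.mean((dt_cal - dt_true) ** 2)))
```

In a real calibration the readout noise would be suppressed further by averaging the voltage steps over many sawtooth periods; the rescaling step removes any error in the assumed ramp slope.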
A New Time Calibration Method for Switched-capacitor-array-based Waveform Samplers
Kim, H.; Chen, C.-T.; Eclov, N.; Ronzhin, A.; Murat, P.; Ramberg, E.; Los, S.; Moses, W.; Choong, W.-S.; Kao, C.-M.
2014-01-01
We have developed a new time calibration method for the DRS4 waveform sampler that enables us to precisely measure the non-uniform sampling interval inherent in the switched-capacitor cells of the DRS4. The method uses the proportionality between the differential amplitude and sampling interval of adjacent switched-capacitor cells responding to a sawtooth-shape pulse. In the experiment, a sawtooth-shape pulse with a 40 ns period generated by a Tektronix AWG7102 is fed to a DRS4 evaluation board for calibrating the sampling intervals of all 1024 cells individually. The electronic time resolution of the DRS4 evaluation board with the new time calibration is measured to be ~2.4 ps RMS by using two simultaneous Gaussian pulses with 2.35 ns full-width at half-maximum and applying a Gaussian fit. The time resolution dependencies on the time difference with the new time calibration are measured and compared to results obtained by another method. The new method could be applicable for other switched-capacitor-array technology-based waveform samplers for precise time calibration. PMID:25506113
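The core of the calibration above is that, on a sawtooth of known slope, the voltage step between adjacent capacitor cells is proportional to that cell pair's sampling interval. A minimal sketch of the inversion, using made-up toy numbers (slope in V/ns, period in ns) rather than the paper's data:

```python
def calibrate_intervals(dV, slope, period):
    """Per-cell sampling intervals from adjacent-cell voltage steps dV
    measured on a sawtooth of known slope (V/ns); the result is rescaled
    so the intervals sum to the known sawtooth period (ns)."""
    raw = [dv / slope for dv in dV]   # dt_i ~ dV_i / slope
    scale = period / sum(raw)         # enforce sum(dt_i) == period
    return [dt * scale for dt in raw]

# toy example: 8 cells with a nominal 0.2 ns interval and cell-to-cell spread
dV = [0.021, 0.019, 0.020, 0.022, 0.018, 0.020, 0.021, 0.019]
dt = calibrate_intervals(dV, slope=0.1, period=1.6)
```

In the real calibration all 1024 DRS4 cells are treated this way, and the rescaling to the known period absorbs any error in the assumed sawtooth slope.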
Design and calibration of a scanning tunneling microscope for large machined surfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grigg, D.A.; Russell, P.E.; Dow, T.A.
During the last year the large-sample STM has been designed, built and used for the observation of several different samples. Calibration of the scanner for proper dimensional interpretation of surface features has been a chief concern, as well as corrections for non-linear effects such as hysteresis during scans. Several procedures used in the calibration and correction of the piezoelectric scanners used in the laboratory's STMs are described.
NASA Astrophysics Data System (ADS)
Dhara, Sangita; Misra, N. L.; Aggarwal, S. K.; Venugopal, V.
2010-06-01
An energy dispersive X-ray fluorescence method for the determination of cadmium (Cd) in a uranium (U) matrix using a continuum source of excitation was developed. Calibration and sample solutions of cadmium, with and without uranium, were prepared by mixing different volumes of standard solutions of cadmium and uranyl nitrate, both prepared in suprapure nitric acid. The concentration of Cd in the calibration solutions and samples was in the range of 6 to 90 µg/mL, whereas the concentration of Cd with respect to U ranged from 90 to 700 µg/g of U. From the calibration solutions and samples containing uranium, the major matrix uranium was selectively extracted using 30% tri-n-butyl phosphate in dodecane. Fixed volumes (1.5 mL) of the aqueous phases thus obtained were taken directly in specially designed, in-house fabricated, leak-proof Perspex sample cells for the energy dispersive X-ray fluorescence measurements, and calibration plots were made by plotting Cd Kα intensity against the respective Cd concentration. For the calibration solutions not containing uranium, the energy dispersive X-ray fluorescence spectra were measured without any extraction and Cd calibration plots were made accordingly. The results obtained showed a precision of 2% (1σ) and deviated from the expected values by <4% on average.
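The calibration plots described above are straight lines of Kα intensity versus concentration; a minimal least-squares sketch, with hypothetical intensities (real count rates depend on the instrument):

```python
def linear_fit(x, y):
    # ordinary least-squares slope/intercept for a calibration line
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    return a, b

def predict_conc(intensity, a, b):
    # invert the calibration line: concentration from a measured Ka intensity
    return (intensity - a) / b

conc = [6.0, 20.0, 40.0, 60.0, 90.0]            # ug/mL Cd (hypothetical)
counts = [130.0, 410.0, 810.0, 1210.0, 1810.0]  # Ka intensities (hypothetical)
a, b = linear_fit(conc, counts)
pred = predict_conc(1010.0, a, b)  # ~50 ug/mL for these toy numbers
```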
Veiseth-Kent, Eva; Høst, Vibeke; Løvland, Atle
2017-01-01
The main objective of this work was to develop a method for rapid and non-destructive detection and grading of wooden breast (WB) syndrome in chicken breast fillets. Near-infrared (NIR) spectroscopy was chosen as the detection method, and an industrial NIR scanner was applied and tested for large-scale on-line detection of the syndrome. Two approaches were evaluated for discrimination of WB fillets: 1) linear discriminant analysis based on NIR spectra only, and 2) a regression model for protein built from NIR spectra, with the estimated protein concentrations used for discrimination. A sample set of 197 fillets was used for training and calibration. A test set was recorded under industrial conditions and contained spectra from 79 fillets. The classification methods obtained 99.5–100% correct classification of the calibration set and 100% correct classification of the test set. The NIR scanner was then installed in a commercial chicken processing plant and could detect incidence rates of WB in large batches of fillets. Examples of incidence are shown for three broiler flocks in which a high number of fillets (9063, 6330 and 10483) were effectively measured. WB prevalences of 0.1%, 6.6% and 8.5% were estimated for these flocks based on the complete sample volumes. Such an on-line system can be used to alleviate the challenges WB represents to the poultry meat industry. It enables automatic quality sorting of chicken fillets into different product categories. Laborious manual grading can be avoided. Incidences of WB from different farms and flocks can be tracked, and the information can be used to understand and point out the main causes of WB in chicken production. This knowledge can be used to improve production procedures and reduce today's extensive occurrence of WB. PMID:28278170
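Approach 2 above reduces grading to a cut-off on NIR-predicted protein content. A sketch of that final step with a hypothetical threshold (the abstract does not report the actual cut-off used):

```python
def classify_wb(protein_pred, threshold=21.0):
    # WB-affected fillets show reduced protein; 21.0% is a made-up cut-off
    return "WB" if protein_pred < threshold else "normal"

def wb_incidence(protein_preds, threshold=21.0):
    # percentage of fillets in a batch flagged as WB
    flagged = sum(1 for p in protein_preds if p < threshold)
    return 100.0 * flagged / len(protein_preds)
```

Run per-fillet over a flock's predictions, this yields the batch prevalence figures of the kind reported above.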
Zhang, Hong-yan; Ding, Dong; Song, Li-qiang; Gu, Lin-na; Yang, Peng; Tang, Yu-guo
2005-06-01
The noninvasive measurement of human blood glucose was achieved with an NIR diffuse reflectance spectroscopic method. The thumb-fingertip NIR diffuse reflectance spectra of six healthy volunteers of different ages were collected using a Nexus-870 spectrometer and its NIR fiber-port smart accessory. Measurements were made as the blood glucose concentration changed between the fasting and satiated states of each volunteer. The calibration models were set up using the PLS method on spectra pretreated with smoothing, baseline correction and first derivatives in the 7500-8500 cm(-1) region, for a single volunteer, for combinations of volunteers of the same age, and for combinations of different ages. For each spectrum, the actual blood glucose value of the sample was determined using an ultraviolet spectrophotometer. The correlation between the calibration value and the true value for a single volunteer is better than that for combinations of volunteers; the correlation coefficients are all above 0.90471 and the RMSECs all below 0.171.
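The pretreatments named above (smoothing and first derivatives) are simple filters over the spectrum vector; minimal sketches, assuming evenly spaced wavenumber points:

```python
def smooth(y, w=3):
    # centered moving average of width w; endpoints are kept as-is
    half = w // 2
    out = list(y)
    for i in range(half, len(y) - half):
        out[i] = sum(y[i - half:i + half + 1]) / w
    return out

def first_derivative(y, dx=1.0):
    # central finite-difference first derivative of a spectrum
    return [(y[i + 1] - y[i - 1]) / (2 * dx) for i in range(1, len(y) - 1)]
```

In practice a Savitzky-Golay filter is the usual choice for both steps; the finite-difference version here is only the simplest illustration.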
NASA Technical Reports Server (NTRS)
Hook, Simon J.
1995-01-01
A lightweight, rugged, high-spectral-resolution interferometer has been built by Designs and Prototypes based on a set of specifications provided by the Jet Propulsion Laboratory and Dr. J. W. Salisbury (Johns Hopkins University). The instrument, the micro Fourier Transform Interferometer (mFTIR), permits the acquisition of infrared spectra of natural surfaces. Such data can be used to validate low and high spectral resolution data acquired remotely from aircraft and spacecraft in the 3-5 μm and 8-14 μm atmospheric windows. The instrument has a spectral resolution of 6 wavenumbers, weighs 16 kg including batteries and computer, and can be operated easily by two people in the field. Laboratory analysis indicates the instrument is spectrally calibrated to better than 1 wavenumber and the radiometric accuracy is <0.5 K if the radiances from the blackbodies used for calibration bracket the radiance from the sample.
Calibrating excitation light fluxes for quantitative light microscopy in cell biology
Grünwald, David; Shenoy, Shailesh M; Burke, Sean; Singer, Robert H
2011-01-01
Power output of light bulbs changes over time and the total energy delivered will depend on the optical beam path of the microscope, filter sets and objectives used, thus making comparison between experiments performed on different microscopes complicated. Using a thermocoupled power meter, it is possible to measure the exact amount of light applied to a specimen in fluorescence microscopy, regardless of the light source, as the light power measured can be translated into a power density at the sample. This widely used and simple tool forms the basis of a new degree of calibration precision and comparability of results among experiments and setups. Here we describe an easy-to-follow protocol that allows researchers to precisely estimate excitation intensities in the object plane, using commercially available opto-mechanical components. The total duration of this protocol for one objective and six filter cubes is 75 min including start-up time for the lamp. PMID:18974739
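The translation from a power-meter reading to a power density at the sample, mentioned above, is just power over illuminated area; a sketch, with the spot diameter as a hypothetical input that in practice must itself be measured in the object plane:

```python
import math

def power_density(power_mW, beam_diameter_um):
    """Excitation power density (W/cm^2) from a power-meter reading (mW)
    and the illuminated spot diameter in the object plane (um)."""
    radius_cm = beam_diameter_um * 1e-4 / 2.0
    area_cm2 = math.pi * radius_cm ** 2
    return (power_mW * 1e-3) / area_cm2

# e.g. 1 mW over a 20 um spot -> a few hundred W/cm^2 (made-up values)
density = power_density(1.0, 20.0)
```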
Simultaneous determination of three herbicides by differential pulse voltammetry and chemometrics.
Ni, Yongnian; Wang, Lin; Kokot, Serge
2011-01-01
A novel differential pulse voltammetry (DPV) method was researched and developed for the simultaneous determination of Pendimethalin, Dinoseb and sodium 5-nitroguaiacolate (5NG) with the aid of chemometrics. The voltammograms of these three compounds overlapped significantly, and to facilitate the simultaneous determination of the three analytes, chemometrics methods were applied. These included classical least squares (CLS), principal component regression (PCR), partial least squares (PLS) and radial basis function-artificial neural networks (RBF-ANN). A separately prepared verification data set was used to confirm the calibrations, which were built from the original and first-derivative data matrices of the voltammograms. On the basis of the relative prediction errors and recoveries of the analytes, the RBF-ANN and the DPLS (D - first derivative spectra) models performed best and are particularly recommended for application. The DPLS calibration model was applied satisfactorily for the prediction of the three analytes in market vegetables and lake water samples.
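Of the calibration methods listed, classical least squares (CLS) is the simplest: the measured voltammogram is modeled as a concentration-weighted sum of pure-component responses. A sketch with invented pure-component profiles, not data from the paper:

```python
import numpy as np

# pure-component responses (rows) at 5 potentials -- toy numbers
S = np.array([[1.0, 0.8, 0.2, 0.0, 0.0],
              [0.0, 0.3, 1.0, 0.3, 0.0],
              [0.0, 0.0, 0.2, 0.9, 1.0]])

def cls_concentrations(signal, S):
    # classical least squares: signal ~ c @ S, solved for the concentrations c
    c, *_ = np.linalg.lstsq(S.T, signal, rcond=None)
    return c

mix = 2.0 * S[0] + 1.0 * S[1] + 0.5 * S[2]  # noiseless synthetic mixture
c = cls_concentrations(mix, S)
```

CLS recovers the exact weights for a noiseless mixture; the overlapped real voltammograms are what push the paper toward PLS and RBF-ANN instead.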
The benefit of a tough skin: bullet holes, weathering and the preservation of heritage
Gomez-Heras, M.; Brassey, C.; Green, O.; Blenkinsop, T.
2017-01-01
Projectile damage to building stone is a widespread phenomenon. Sites damaged 100 years ago during the First World War still see daily use, while in a more contemporary setting numerous reports show the damage to buildings in Babylon, Mosul and Palmyra. While research has been carried out on the long-term effects of conflict such as fire damage, little is known about the protracted damage sustained through the impact of bullets, shrapnel and other metal projectiles outside of the field of engineering focused on ceramics and metals. To investigate alterations to mineral structure caused by projectile damage, impacts were created in medium-grained, well-compacted, mesoporous sandstone samples using 0.22 calibre lead bullets shot at a distance of 20 m. Half these samples were treated with a surface consolidant (Wacker OH 100), to mimic natural cementation of the rock surface. These samples were then tested for changes to surface hardness and moisture movement during temperature cycles of 15–65°C. Petrographic thin section analysis was carried out to investigate the micro-scale deformation associated with high-speed impact. The results surprisingly show that stress build-up behind pre-existing cementation of the surface, as found in heritage sites that have been exposed to moisture and temperature fluctuations for longer periods of time, can be alleviated with a bullet impact. However, fracture networks and alteration of the mineral matrices still form a weak point within the structure, even at a relatively low impact calibre. This initial study illustrates the need for geomorphologists, geologists, engineers and heritage specialists to work collectively to gain further insights into the long-term impact of higher calibre armed warfare on heritage deterioration. PMID:28386411
Gerbig, Stefanie; Stern, Gerold; Brunn, Hubertus E; Düring, Rolf-Alexander; Spengler, Bernhard; Schulz, Sabine
2017-03-01
Direct analysis of fruit and vegetable surfaces is an important tool for in situ detection of food contaminants such as pesticides. We tested three different ways to prepare samples for the qualitative desorption electrospray ionization mass spectrometry (DESI-MS) analysis of 32 pesticides found on nine authentic fruits collected from food control. Best recovery rates for topically applied pesticides (88%) were found by analyzing the surface of a glass slide which had been rubbed against the surface of the food. Pesticide concentration in all samples was at or below the maximum residue level allowed. In addition to the high sensitivity of the method for qualitative analysis, quantitative or, at least, semi-quantitative information is needed in food control. We developed a DESI-MS method for the simultaneous determination of linear calibration curves of multiple pesticides of the same chemical class using normalization to one internal standard (ISTD). The method was first optimized for food extracts and subsequently evaluated for the quantification of pesticides in three authentic food extracts. Next, pesticides and the ISTD were applied directly onto food surfaces, and the corresponding calibration curves were obtained. The determination of linear calibration curves was still feasible, as demonstrated for three different food surfaces. This proof-of-principle method was used to simultaneously quantify two pesticides on an authentic sample, showing that the method developed could serve as a fast and simple preselective tool for disclosure of pesticide regulation violations. Graphical Abstract Multiple pesticide residues were detected and quantified in-situ from an authentic set of food items and extracts in a proof of principle study.
The benefit of a tough skin: bullet holes, weathering and the preservation of heritage
NASA Astrophysics Data System (ADS)
Mol, Lisa; Gomez-Heras, M.; Brassey, C.; Green, O.; Blenkinsop, T.
2017-02-01
Projectile damage to building stone is a widespread phenomenon. Sites damaged 100 years ago during the First World War still see daily use, while in a more contemporary setting numerous reports show the damage to buildings in Babylon, Mosul and Palmyra. While research has been carried out on the long-term effects of conflict such as fire damage, little is known about the protracted damage sustained through the impact of bullets, shrapnel and other metal projectiles outside of the field of engineering focused on ceramics and metals. To investigate alterations to mineral structure caused by projectile damage, impacts were created in medium-grained, well-compacted, mesoporous sandstone samples using 0.22 calibre lead bullets shot at a distance of 20 m. Half these samples were treated with a surface consolidant (Wacker OH 100), to mimic natural cementation of the rock surface. These samples were then tested for changes to surface hardness and moisture movement during temperature cycles of 15-65°C. Petrographic thin section analysis was carried out to investigate the micro-scale deformation associated with high-speed impact. The results surprisingly show that stress build-up behind pre-existing cementation of the surface, as found in heritage sites that have been exposed to moisture and temperature fluctuations for longer periods of time, can be alleviated with a bullet impact. However, fracture networks and alteration of the mineral matrices still form a weak point within the structure, even at a relatively low impact calibre. This initial study illustrates the need for geomorphologists, geologists, engineers and heritage specialists to work collectively to gain further insights into the long-term impact of higher calibre armed warfare on heritage deterioration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Addair, Travis; Barno, Justin; Dodge, Doug
CCT is a Java-based application for calibrating 1D shear wave coda measurement models to observed data using a much smaller set of reference moment magnitudes (MWs) calculated by other means (waveform modeling, etc.). These calibrated measurement models can then be used in other tools to generate coda moment magnitude measurements, source spectra, estimated stress drop, and other useful measurements for any additional events and any new data collected in the calibrated region.
Ebrahimi-Najafabadi, Heshmatollah; Leardi, Riccardo; Oliveri, Paolo; Casolino, Maria Chiara; Jalali-Heravi, Mehdi; Lanteri, Silvia
2012-09-15
The current study presents an application of near infrared spectroscopy for identification and quantification of the fraudulent addition of barley in roasted and ground coffee samples. Nine different types of coffee including pure Arabica, Robusta and mixtures of them at different roasting degrees were blended with four types of barley. The blending degrees were between 2 and 20 wt% of barley. D-optimal design was applied to select 100 and 30 experiments to be used as calibration and test set, respectively. Partial least squares regression (PLS) was employed to build the models aimed at predicting the amounts of barley in coffee samples. In order to obtain simplified models, taking into account only informative regions of the spectral profiles, a genetic algorithm (GA) was applied. A completely independent external set was also used to test the model performances. The models showed excellent predictive ability with root mean square errors (RMSE) for the test and external set equal to 1.4% w/w and 0.8% w/w, respectively. Copyright © 2012 Elsevier B.V. All rights reserved.
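The RMSE figures quoted above (1.4% w/w and 0.8% w/w) follow the usual definition; for reference, a minimal implementation:

```python
import math

def rmse(y_true, y_pred):
    # root mean square error between reference and model-predicted values
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))
```

Computed on the calibration set this is the RMSEC, on a held-out test or external set the RMSEP.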
Stereoscopic 3D reconstruction using motorized zoom lenses within an embedded system
NASA Astrophysics Data System (ADS)
Liu, Pengcheng; Willis, Andrew; Sui, Yunfeng
2009-02-01
This paper describes a novel embedded system capable of estimating 3D positions of surfaces viewed by a stereoscopic rig consisting of a pair of calibrated cameras. Novel theoretical and technical aspects of the system are tied to two aspects of the design that deviate from typical stereoscopic reconstruction systems: (1) incorporation of a 10x zoom lens (Rainbow H10x8.5) and (2) implementation on an embedded system. The system components include a DSP running μClinux, an embedded version of the Linux operating system, and an FPGA. The DSP orchestrates data flow within the system and performs complex computational tasks, and the FPGA provides an interface to the system devices, which consist of a CMOS camera pair and a pair of servo motors which rotate (pan) each camera. Calibration of the camera pair is accomplished using a collection of stereo images that view a common chess board calibration pattern for a set of pre-defined zoom positions. Calibration settings for an arbitrary zoom setting are estimated by interpolation of the camera parameters. A low-computational-cost method for dense stereo matching is used to compute depth disparities for the stereo image pairs. Surface reconstruction is accomplished by classical triangulation of the matched points from the depth disparities. This article includes our methods and results for the following problems: (1) automatic computation of the focus and exposure settings for the lens and camera sensor, (2) calibration of the system for various zoom settings, and (3) stereo reconstruction results for several free-form objects.
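For rectified cameras, the classical triangulation step at the end reduces to Z = f·B/d; a sketch with hypothetical focal length, baseline and disparity values:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Rectified-stereo triangulation: Z = f * B / d, with the focal length
    in pixels, baseline in metres and disparity in pixels (toy values)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# e.g. f = 800 px, B = 10 cm, d = 40 px (made-up numbers)
z = depth_from_disparity(800.0, 0.1, 40.0)
```

In a zoom system like the one above, f_px changes with zoom position, which is exactly why the calibration must be interpolated across zoom settings.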
Dingari, Narahara Chari; Barman, Ishan; Kang, Jeon Woong; Kong, Chae-Ryon; Dasari, Ramachandra R.; Feld, Michael S.
2011-01-01
While Raman spectroscopy provides a powerful tool for noninvasive and real time diagnostics of biological samples, its translation to the clinical setting has been impeded by the lack of robustness of spectroscopic calibration models and the size and cumbersome nature of conventional laboratory Raman systems. Linear multivariate calibration models employing full spectrum analysis are often misled by spurious correlations, such as system drift and covariations among constituents. In addition, such calibration schemes are prone to overfitting, especially in the presence of external interferences that may create nonlinearities in the spectra-concentration relationship. To address both of these issues we incorporate residue error plot-based wavelength selection and nonlinear support vector regression (SVR). Wavelength selection is used to eliminate uninformative regions of the spectrum, while SVR is used to model curved effects such as those created by tissue turbidity and temperature fluctuations. Using glucose detection in tissue phantoms as a representative example, we show that even a substantial reduction in the number of wavelengths analyzed using SVR leads to calibration models of prediction accuracy equivalent to linear full spectrum analysis. Further, with clinical datasets obtained from human subject studies, we also demonstrate the prospective applicability of the selected wavelength subsets without sacrificing prediction accuracy, which has extensive implications for calibration maintenance and transfer. Additionally, such wavelength selection could substantially reduce the collection time of serial Raman acquisition systems. Given the reduced footprint of serial Raman systems in relation to conventional dispersive Raman spectrometers, we anticipate that the incorporation of wavelength selection in such hardware designs will enhance the possibility of miniaturized clinical systems for disease diagnosis in the near future. PMID:21895336
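Wavelength selection, as used above, discards uninformative spectral regions. The paper's residue-error-plot procedure is more involved; as a loosely analogous stand-in, here is a simple correlation-ranking filter over hypothetical data:

```python
def select_wavelengths(spectra, conc, k=2):
    """Rank wavelength indices by |Pearson correlation| between absorbance
    and concentration and keep the top k -- a simple stand-in for the
    residue-plot selection described in the abstract."""
    n, m = len(spectra), len(spectra[0])

    def score(j):
        col = [s[j] for s in spectra]
        mx, my = sum(col) / n, sum(conc) / n
        sxy = sum((x - mx) * (y - my) for x, y in zip(col, conc))
        sxx = sum((x - mx) ** 2 for x in col)
        syy = sum((y - my) ** 2 for y in conc)
        den = (sxx * syy) ** 0.5
        return abs(sxy) / den if den else 0.0

    return sorted(range(m), key=score, reverse=True)[:k]

# 3 toy spectra x 3 wavelengths: channels 0 and 2 track concentration, 1 is flat
idx = select_wavelengths([[1, 5, 3], [2, 5, 2], [3, 5, 1]], [1, 2, 3], k=2)
```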
Zhang, Bing-Fang; Yuan, Li-Bo; Kong, Qing-Ming; Shen, Wei-Zheng; Zhang, Bing-Xiu; Liu, Cheng-Hai
2014-10-01
In the present study, a new method using near infrared spectroscopy combined with optical fiber sensing technology was applied to the analysis of hogwash oil in blended oil. The 50 samples were blends of frying oil and "nine three" soybean oil in certain volume ratios. The near infrared transmission spectra were collected and quantitative analysis models of frying oil were established by partial least squares (PLS) and a BP artificial neural network. The coefficients of determination of the calibration sets were 0.908 and 0.934, respectively. The coefficients of determination of the validation sets were 0.961 and 0.952, the root mean square errors of calibration (RMSEC) were 0.184 and 0.136, and the root mean square error of prediction (RMSEP) was 0.1116 for both. They conform to the model application requirement. At the same time, frying oil and qualified edible oil were identified with principal component analysis (PCA), and the accuracy rate was 100%. The experiment proved that near infrared spectral technology not only can quickly and accurately identify hogwash oil, but also can quantitatively detect hogwash oil. This method has a wide application prospect in the detection of oil.
Estimation of water quality by UV/Vis spectrometry in the framework of treated wastewater reuse.
Carré, Erwan; Pérot, Jean; Jauzein, Vincent; Lin, Liming; Lopez-Ferber, Miguel
2017-07-01
The aim of this study is to investigate the potential of ultraviolet/visible (UV/Vis) spectrometry as a complementary method for routine monitoring of reclaimed water production. The robustness of the models and the compliance of their sensitivity with current quality limits are investigated. The following indicators are studied: total suspended solids (TSS), turbidity, chemical oxygen demand (COD) and nitrate. Partial least squares regression (PLSR) is used to find linear correlations between absorbances and the indicators of interest. Artificial samples are made by simulating a sludge leak on the wastewater treatment plant and added to the original dataset, which is then divided into calibration and prediction datasets. The models are built on the calibration set and then tested on the prediction set. The best models are developed with PLSR for COD (R²pred = 0.80), TSS (R²pred = 0.86) and turbidity (R²pred = 0.96), and with a simple linear regression on absorbance at 208 nm (R²pred = 0.95) for nitrate concentration. The input of artificial data significantly enhances the robustness of the models. The sensitivity of the UV/Vis spectrometry monitoring system developed is compatible with the quality requirements of reclaimed water production processes.
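The R²pred values above are coefficients of determination evaluated on the prediction set; for reference, the standard formula:

```python
def r_squared(y_true, y_pred):
    # coefficient of determination: 1 - SS_res / SS_tot
    my = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - my) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

On held-out data this can be negative when the model predicts worse than the mean of the reference values, which is why it is a useful robustness check here.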
On the Long-Term Calibration of the TOMS Total Ozone Record
NASA Technical Reports Server (NTRS)
Stolarski, Richard S.; McPeters, Richard; Labow, Gordon J.; Hollandsworth, Stacey; Flynn, Larry; Einaudi, Franco (Technical Monitor)
2000-01-01
Comparison of Total Ozone Mapping Spectrometer (TOMS) data to the network of ground-based Dobson/Brewer measurements reveals differences in the time dependence of the calibration of the two systems. We have been searching for a method to determine the time dependence of the TOMS calibration that is independent of the Dobson/Brewer network. In a separate paper by DeLand et al., calibrations of the Solar Backscatter UV Spectrometer (SBUV) instruments have been rederived using the D-pair (306/313 nm wavelengths) data at the equator. These calibrations have been applied to the data from the Nimbus 7 SBUV and the NOAA 9 and 11 SBUV/2 instruments to derive a new version 7 data set for each instrument. We have used these data to do a detailed comparison to the Nimbus 7 and Earth Probe TOMS data. Assuming that the D-pair establishes the correct calibration, these comparisons reveal some small calibration drifts (approximately 1%) in the TOMS data. They also reveal an offset in the D-pair calibration with respect to the Dobson network of approximately 8 Dobson units, with the Dobson being lower than the D-pair. The D-pair calibration offsets have been used to create a merged ozone data set from TOMS with a calibration determined independently of the Dobson/Brewer network. Trend analyses of these data will be presented and compared to trend analyses using the ground-based data.
Timothy J. Brady; Vicente J. Monleon; Andrew N. Gray
2010-01-01
We propose using future vascular plant abundances as indicators of future climate in a way analogous to the reconstruction of past environments by many palaeoecologists. To begin monitoring future short-term climate changes in the forests of Oregon and Washington, USA, we developed a set of transfer functions for a present-day calibration set consisting of climate...
Comparison of halocarbon measurements in an atmospheric dry whole air sample.
Rhoderick, George C; Hall, Bradley D; Harth, Christina M; Kim, Jin Seog; Lee, Jeongsoon; Montzka, Stephen A; Mühle, Jens; Reimann, Stefan; Vollmer, Martin K; Weiss, Ray F
The growing awareness of climate change/global warming, and continuing concerns regarding stratospheric ozone depletion, will require continued measurements and standards for many compounds, in particular halocarbons that are linked to these issues. In order to track atmospheric mole fractions and assess the impact of policy on emission rates, it is necessary to demonstrate measurement equivalence at the highest levels of accuracy for assigned values of standards. Precise measurements of these species aid in determining small changes in their atmospheric abundance. A common source of standards/scales and/or well-documented agreement of different scales used to calibrate the measurement instrumentation are key to understanding many sets of data reported by researchers. This report describes the results of a comparison study among National Metrology Institutes and atmospheric research laboratories for the chlorofluorocarbons (CFCs) dichlorodifluoromethane (CFC-12), trichlorofluoromethane (CFC-11), and 1,1,2-trichlorotrifluoroethane (CFC-113); the hydrochlorofluorocarbons (HCFCs) chlorodifluoromethane (HCFC-22) and 1-chloro-1,1-difluoroethane (HCFC-142b); and the hydrofluorocarbon (HFC) 1,1,1,2-tetrafluoroethane (HFC-134a), all in a dried whole air sample. The objective of this study is to compare calibration standards/scales and the measurement capabilities of the participants for these halocarbons at trace atmospheric levels. The results of this study show agreement among four independent calibration scales to better than 2.5% in almost all cases, with many of the reported agreements being better than 1.0%.
Comparison of halocarbon measurements in an atmospheric dry whole air sample
Hall, Bradley D.; Harth, Christina M.; Kim, Jin Seog; Lee, Jeongsoon; Montzka, Stephen A.; Mühle, Jens; Reimann, Stefan; Vollmer, Martin K.; Weiss, Ray F.
2015-01-01
The growing awareness of climate change/global warming, and continuing concerns regarding stratospheric ozone depletion, will require continued measurements and standards for many compounds, in particular halocarbons that are linked to these issues. In order to track atmospheric mole fractions and assess the impact of policy on emission rates, it is necessary to demonstrate measurement equivalence at the highest levels of accuracy for assigned values of standards. Precise measurements of these species aid in determining small changes in their atmospheric abundance. A common source of standards/scales and/or well-documented agreement of different scales used to calibrate the measurement instrumentation are key to understanding many sets of data reported by researchers. This report describes the results of a comparison study among National Metrology Institutes and atmospheric research laboratories for the chlorofluorocarbons (CFCs) dichlorodifluoromethane (CFC-12), trichlorofluoromethane (CFC-11), and 1,1,2-trichlorotrifluoroethane (CFC-113); the hydrochlorofluorocarbons (HCFCs) chlorodifluoromethane (HCFC-22) and 1-chloro-1,1-difluoroethane (HCFC-142b); and the hydrofluorocarbon (HFC) 1,1,1,2-tetrafluoroethane (HFC-134a), all in a dried whole air sample. The objective of this study is to compare calibration standards/scales and the measurement capabilities of the participants for these halocarbons at trace atmospheric levels. The results of this study show agreement among four independent calibration scales to better than 2.5% in almost all cases, with many of the reported agreements being better than 1.0%. PMID:26753167
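The "agreement to better than 2.5%" statements above are relative differences between values assigned on different calibration scales; a trivial helper, with hypothetical mole fractions:

```python
def scale_difference_pct(x_lab, x_ref):
    # relative difference (%) of one laboratory's assigned value vs a reference scale
    return 100.0 * (x_lab - x_ref) / x_ref

# e.g. a halocarbon assigned at 102.5 ppt by one lab vs 100.0 ppt on the
# reference scale (made-up numbers, not from the study)
diff = scale_difference_pct(102.5, 100.0)
```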
Phylogeny and temporal diversification of darters (Percidae: Etheostomatinae).
Near, Thomas J; Bossu, Christen M; Bradburd, Gideon S; Carlson, Rose L; Harrington, Richard C; Hollingsworth, Phillip R; Keck, Benjamin P; Etnier, David A
2011-10-01
Discussions aimed at resolution of the Tree of Life are most often focused on the interrelationships of major organismal lineages. In this study, we focus on the resolution of some of the most apical branches in the Tree of Life through exploration of the phylogenetic relationships of darters, a species-rich clade of North American freshwater fishes. With a near-complete taxon sampling of close to 250 species, we aim to investigate strategies for efficient multilocus data sampling and the estimation of divergence times using relaxed-clock methods when a clade lacks a fossil record. Our phylogenetic data set comprises a single mitochondrial DNA (mtDNA) gene and two nuclear genes sampled from 245 of the 248 darter species. This dense sampling allows us to determine if a modest amount of nuclear DNA sequence data can resolve relationships among closely related animal species. Darters lack a fossil record to provide age calibration priors in relaxed-clock analyses. Therefore, we use a near-complete species-sampled phylogeny of the perciform clade Centrarchidae, which has a rich fossil record, to assess two distinct strategies of external calibration in relaxed-clock divergence time estimates of darters: using ages inferred from the fossil record and molecular evolutionary rate estimates. Comparison of Bayesian phylogenies inferred from mtDNA and nuclear genes reveals that heterospecific mtDNA is present in approximately 12.5% of all darter species. 
We identify three patterns of mtDNA introgression in darters: proximal mtDNA transfer, which involves the transfer of mtDNA among extant and sympatric darter species, indeterminate introgression, which involves the transfer of mtDNA from a lineage that cannot be confidently identified because the introgressed haplotypes are not clearly referable to mtDNA haplotypes in any recognized species, and deep introgression, which is characterized by species diversification within a recipient clade subsequent to the transfer of heterospecific mtDNA. The results of our analyses indicate that DNA sequences sampled from single-copy nuclear genes can provide appreciable phylogenetic resolution for closely related animal species. A well-resolved near-complete species-sampled phylogeny of darters was estimated with Bayesian methods using a concatenated mtDNA and nuclear gene data set with all identified heterospecific mtDNA haplotypes treated as missing data. The relaxed-clock analyses resulted in very similar posterior age estimates across the three sampled genes and methods of calibration and therefore offer a viable strategy for estimating divergence times for clades that lack a fossil record. In addition, an informative rank-free clade-based classification of darters that preserves the rich history of nomenclature in the group and provides formal taxonomic communication of darter clades was constructed using the mtDNA and nuclear gene phylogeny. On the whole, the appeal of mtDNA for phylogeny inference among closely related animal species is diminished by the observations of extensive mtDNA introgression and by finding appreciable phylogenetic signal in a modest sampling of nuclear genes in our phylogenetic analyses of darters.
Ruiz-Jiménez, J; Priego-Capote, F; Luque de Castro, M D
2006-08-01
A study of the feasibility of Fourier transform medium infrared spectroscopy (FT-midIR) for the analytical determination of fatty acid profiles, including trans fatty acids, is presented. The training and validation sets, comprising 75% (102 samples) and 25% (36 samples) of the samples after removal of spectral outliers, were built from 140 commercial and home-made bakery products to develop FT-midIR general equations. The concentration of the analytes in the samples used for this study is within the typical range found in these kinds of products. Both sets were independent; thus, the validation set was used only for testing the equations. The criterion used for the selection of the validation set was samples with the highest number of neighbours and the greatest separation between them (H < 0.6). Partial least squares regression and cross-validation were used for multivariate calibration. The FT-midIR method does not require post-extraction manipulation and gives information about the fatty acid profile in two minutes. The 14:0, 16:0, 18:0, 18:1 and 18:2 fatty acids can be determined with excellent precision and other fatty acids with good precision according to the Shenk criteria (R² ≥ 0.90, SEP = 1-1.5 SEL and R² = 0.70-0.89, SEP = 2-3 SEL, respectively). The results obtained with the proposed method were compared with those provided by the conventional method based on GC-MS. At the 95% significance level, the differences between the values obtained for the different fatty acids were within the experimental error.
Calibration of CryojetHT and Cobra Plus Cryosystems used in X-ray diffraction studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dudka, A. P., E-mail: dudka@crys.ras.ru; Verin, I. A.; Smirnova, E. S.
CryoJetHT (Oxford Instruments) and Cobra Plus (Oxford Cryosystems) cryosystems, which are used for sample cooling in X-ray diffraction experiments, have been calibrated. It is shown that the real temperature in the vicinity of the sample differs significantly (the deviation is as high as 8–10 K at low temperatures) from the temperature recorded by the systems' built-in sensors. The calibration results are confirmed by measurements of the unit-cell parameters of a GdFe3(BO3)4 single crystal in the temperature range of its phase transition. It is shown that, to determine the real temperature of a sample, one must perform an independent calibration of the cryosystems rather than rely on their nominal readings.
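A calibration of this kind yields a correction table mapping the cryosystem's reported temperature to the independently measured sample temperature, which can then be applied by interpolation. The calibration pairs below are invented for illustration; the real deviations reported in the abstract reach 8–10 K at low temperatures:

```python
import numpy as np

# Hypothetical calibration pairs: temperature reported by the cryosystem's
# sensor vs. real temperature measured independently near the sample position.
sensor_T = np.array([90.0, 150.0, 200.0, 250.0, 295.0])   # K, reported
real_T   = np.array([99.0, 156.0, 204.0, 252.0, 295.0])   # K, measured

def corrected_temperature(t_sensor: float) -> float:
    """Estimate the real sample temperature from a sensor reading by
    linear interpolation between the calibration points."""
    return float(np.interp(t_sensor, sensor_T, real_T))

print(corrected_temperature(120.0))  # interpolated between the 90 K and 150 K points
```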
Chander, G.; Angal, A.; Choi, T.; Meyer, D.J.; Xiong, X.; Teillet, P.M.
2007-01-01
A cross-calibration methodology has been developed using coincident image pairs from the Terra Moderate Resolution Imaging Spectroradiometer (MODIS), the Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) and the Earth Observing EO-1 Advanced Land Imager (ALI) to verify the absolute radiometric calibration accuracy of these sensors with respect to each other. To quantify the effects due to different spectral responses, the Relative Spectral Responses (RSR) of these sensors were studied and compared by developing a set of "figures-of-merit." Seven cloud-free scenes collected over the Railroad Valley Playa, Nevada (RVPN), test site were used to conduct the cross-calibration study. This cross-calibration approach was based on image statistics from near-simultaneous observations made by different satellite sensors. Homogeneous regions of interest (ROI) were selected in the image pairs, and the mean target statistics were converted to absolute units of at-sensor reflectance. Using these reflectances, a set of cross-calibration equations was developed, giving a relative gain and bias between the sensor pair.
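The final step, fitting a relative gain and bias between the at-sensor reflectances of a sensor pair, is ordinary linear regression over the ROI means. The reflectance values below are invented placeholders, not the RVPN statistics:

```python
import numpy as np

# Hypothetical mean ROI at-sensor reflectances from near-simultaneous scenes
# for one spectral band: reference sensor (e.g. ETM+) vs. sensor under comparison.
ref_reflectance  = np.array([0.180, 0.220, 0.250, 0.310, 0.350, 0.400, 0.440])
test_reflectance = np.array([0.175, 0.216, 0.247, 0.305, 0.346, 0.395, 0.436])

# Cross-calibration equation: test = gain * ref + bias (ordinary least squares).
gain, bias = np.polyfit(ref_reflectance, test_reflectance, 1)
print(f"relative gain = {gain:.4f}, bias = {bias:.4f}")
```

A gain near 1 and a bias near 0 would indicate the two sensors agree radiometrically over the test site for that band.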
NASA Astrophysics Data System (ADS)
Bohlin, R. C.; Gordon, K. D.; Rieke, G. H.; Ardila, D.; Carey, S.; Deustua, S.; Engelbracht, C.; Ferguson, H. C.; Flanagan, K.; Kalirai, J.; Meixner, M.; Noriega-Crespo, A.; Su, K. Y. L.; Tremblay, P.-E.
2011-05-01
The absolute flux calibration of the James Webb Space Telescope (JWST) will be based on a set of stars observed by the Hubble and Spitzer Space Telescopes. In order to cross-calibrate the two facilities, several A, G, and white dwarf stars are observed with both Spitzer and Hubble and are the prototypes for a set of JWST calibration standards. The flux calibration constants for the four Spitzer IRAC bands 1-4 are derived from these stars and are 2.3%, 1.9%, 2.0%, and 0.5% lower than the official cold-mission IRAC calibration of Reach et al., i.e., in agreement within their estimated errors of ~2%. The causes of these differences lie primarily in the IRAC data reduction and secondarily in the spectral energy distributions of our standard stars. The independent IRAC 8 μm band-4 fluxes of Rieke et al. are about 1.5% ± 2% higher than those of Reach et al. and are also in agreement with our 8 μm result.
Teng, Wei-Zhuo; Song, Jia; Meng, Fan-Xin; Meng, Qing-Fan; Lu, Jia-Hui; Hu, Shuang; Teng, Li-Rong; Wang, Di; Xie, Jing
2014-10-01
Partial least squares (PLS) and radial basis function neural network (RBFNN) modeling combined with near infrared spectroscopy (NIR) were applied to develop models for cordycepic acid, polysaccharide and adenosine analysis in Paecilomyces hepialid fermentation mycelium. The developed models possess good generalization and predictive ability and can be applied to the determination of crude drugs and related products. During the experiment, 214 Paecilomyces hepialid mycelium samples were obtained via chemical mutagenesis combined with submerged fermentation. The contents of cordycepic acid, polysaccharide and adenosine were determined via traditional methods, and the near infrared spectroscopy data were collected. Outliers were removed and the size of the calibration set was confirmed via the Monte Carlo partial least squares (MCPLS) method. Based on the values of the degree of approach (Da), both moving window partial least squares (MWPLS) and moving window radial basis function neural network (MWRBFNN) were applied to optimize characteristic wavelength variables, optimum preprocessing methods and other important variables in the models. After comparison, RBFNN, RBFNN and PLS models were developed successfully for cordycepic acid, polysaccharide and adenosine detection, respectively, and the correlations between reference values and predictive values in the calibration set (R2c) and validation set (R2p) of the optimum models were 0.9417 and 0.9663, 0.9803 and 0.9850, and 0.9761 and 0.9728, respectively. All the data suggest that these models possess good fitness and predictive ability.
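An RBFNN calibration of the kind used here can be sketched as a Gaussian radial basis expansion with output weights fit by linear least squares. The one-dimensional synthetic data below stand in for the NIR spectra and reference assays; the center placement and kernel width are illustrative choices, not the paper's optimized settings:

```python
import numpy as np

# Synthetic 1-D stand-in for a spectra -> concentration calibration problem.
rng = np.random.default_rng(1)
x_train = rng.uniform(0.0, 10.0, (60, 1))
y_train = np.sin(x_train).ravel() + rng.normal(0, 0.05, 60)

centers = np.linspace(0.0, 10.0, 20).reshape(-1, 1)   # fixed Gaussian centers
width = 1.0                                           # shared kernel width

def rbf_features(x):
    """Gaussian RBF activations of each sample against each center."""
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

# Output layer: linear least-squares fit of the RBF activations to the targets.
weights, *_ = np.linalg.lstsq(rbf_features(x_train), y_train, rcond=None)

def predict(x):
    return rbf_features(x) @ weights

r2 = 1 - np.sum((y_train - predict(x_train)) ** 2) / np.sum(
    (y_train - y_train.mean()) ** 2)
print(f"calibration R^2 = {r2:.3f}")
```

In the paper's terms, this R² computed on held-in samples corresponds to R2c; R2p would be obtained by evaluating `predict` on an independent validation set.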
Enhanced ID Pit Sizing Using Multivariate Regression Algorithm
NASA Astrophysics Data System (ADS)
Krzywosz, Kenji
2007-03-01
EPRI is funding a program to enhance and improve the reliability of inside diameter (ID) pit sizing for balance-of-plant heat exchangers, such as condensers and component cooling water heat exchangers. More traditional approaches to ID pit sizing involve the use of frequency-specific amplitude or phase angles. The enhanced multivariate regression algorithm for ID pit depth sizing incorporates three simultaneous input parameters: frequency, amplitude, and phase angle. Calibration data sets consisting of machined pits of various rounded and elongated shapes and depths were acquired in the frequency range of 100 kHz to 1 MHz for stainless steel tubing having a nominal wall thickness of 0.028 inch. To add noise to the acquired data set, each test sample was rotated and test data acquired at the 3, 6, 9, and 12 o'clock positions. The ID pit depths were estimated using second-order and fourth-order regression functions by relying on normalized amplitude and phase angle information from multiple frequencies. Due to the unique damage morphology associated with microbiologically influenced ID pits, it was necessary to modify the elongated calibration standard-based algorithms by relying on the algorithm developed solely from the destructive sectioning results. This paper presents the use of a transformed multivariate regression algorithm to estimate ID pit depths and compares the results with the traditional univariate phase angle analysis. Both estimates were then compared with the destructive sectioning results.
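A second-order multivariate regression on normalized amplitude and phase angle of the sort described can be sketched as follows. All coefficients and data are synthetic stand-ins for a single frequency, not EPRI's calibration standards or destructive sectioning results:

```python
import numpy as np

# Synthetic calibration data for one test frequency.
rng = np.random.default_rng(2)
amplitude = rng.uniform(0.2, 1.0, 40)          # normalized amplitude
phase = rng.uniform(20.0, 160.0, 40)           # phase angle, degrees
depth = (5.0 + 30.0 * amplitude + 0.1 * phase + 8.0 * amplitude**2
         + rng.normal(0, 0.5, 40))             # pit depth, % through-wall (invented)

# Second-order design matrix in the two inputs: 1, a, p, a^2, a*p, p^2.
X = np.column_stack([np.ones_like(amplitude), amplitude, phase,
                     amplitude**2, amplitude * phase, phase**2])
coef, *_ = np.linalg.lstsq(X, depth, rcond=None)

fitted = X @ coef
rmse = np.sqrt(np.mean((depth - fitted) ** 2))
print(f"RMSE of fitted pit depth: {rmse:.3f} %TW")
```

Extending this to the paper's setup would mean stacking amplitude and phase columns from multiple frequencies into the design matrix, and optionally raising the polynomial order from two to four.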
Walker, J W; Campbell, E S; Lupton, C J; Taylor, C A; Waldron, D F; Landau, S Y
2007-02-01
The effects of breed, sex, and age of goats on fecal near-infrared reflectance spectroscopy-predicted percentage juniper in the diet were investigated, as were spectral differences in feces from goats differing in estimated genetic merit for juniper consumption. Eleven goats from each breed, sex, and age combination, representing 2 breeds (Angora and meat-type), 3 sex classifications (female, intact male, and castrated male), and 2 age categories [adult and kid (less than 12 mo of age)] were fed complete, pelleted rations containing 0 or 14% juniper. After 7 d on the same diet, fecal samples were collected for 3 d, and the spectra from the 3 replicate samples were averaged. Fecal samples were assigned to calibration or validation data sets. In a second experiment, Angora and meat goats with high or low estimated genetic merit for juniper consumption were fed the same diet to determine the effect of consumer group on fecal spectra. Feces were scanned in the 1,100- to 2,500-nm range with a scanning reflectance monochromator. Fecal spectra were analyzed for the difference in spectral characteristics and for differences in predicted juniper in the diet using internal and independent calibration equations. Internal calibration had a high precision (R² = 0.94), but the precision of independent validations (r² = 0.56) was low. Spectral differences were affected by diet, sex, breed, and age (P < 0.04). However, diet was the largest source of variation in spectral differences. Predicted percentage of juniper in the diet also showed that diet was the largest source of variation, accounting for 95% of the variation in predictions from internal calibrations and 51% of the variation in independent validations. Predictions from independent calibrations readily detected differences (P < 0.001) in the percentage of juniper in the 2 diets, and the predicted differences were similar to the actual differences. Predicted juniper in the diet was also affected by sex. 
Feces from goats from different juniper consumer groups fed a common diet were spectrally different, and the difference may have resulted from a greater intake by high- compared with low-juniper-consuming goats. Fecal near-infrared reflectance spectroscopy predictions of botanical composition of diets should be considered an interval scale of measurement.