Sample records for multivariate calibration methodology

  1. Chemiluminescence-based multivariate sensing of local equivalence ratios in premixed atmospheric methane-air flames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tripathi, Markandey M.; Krishnan, Sundar R.; Srinivasan, Kalyan K.

    Chemiluminescence emissions from OH*, CH*, C2, and CO2 formed within the reaction zone of premixed flames depend upon the fuel-air equivalence ratio in the burning mixture. In the present paper, a new partial least squares regression (PLS-R) based multivariate sensing methodology is investigated and compared with an OH*/CH* intensity ratio-based calibration model for sensing equivalence ratio in atmospheric methane-air premixed flames. Five replications of spectral data at nine different equivalence ratios ranging from 0.73 to 1.48 were used in the calibration of both models. During model development, the PLS-R model was initially validated with the calibration data set using the leave-one-out cross validation technique. Since the PLS-R model used the entire raw spectral intensities, it did not need the nonlinear background subtraction of CO2 emission that is required for typical OH*/CH* intensity ratio calibrations. An unbiased spectral data set (not used in the PLS-R model development), for 28 different equivalence ratio conditions ranging from 0.71 to 1.67, was used to predict equivalence ratios using the PLS-R and the intensity ratio calibration models. It was found that the equivalence ratios predicted with the PLS-R based multivariate calibration model matched the experimentally measured equivalence ratios within 7%, whereas the OH*/CH* intensity ratio calibration grossly underpredicted equivalence ratios in comparison to measured equivalence ratios, especially under rich conditions (> 1.2). The practical implications of the chemiluminescence-based multivariate equivalence ratio sensing methodology are also discussed.

  2. Determination of fragrance content in perfume by Raman spectroscopy and multivariate calibration

    NASA Astrophysics Data System (ADS)

    Godinho, Robson B.; Santos, Mauricio C.; Poppi, Ronei J.

    2016-03-01

    An alternative methodology is herein proposed for determination of fragrance content in perfumes and their classification according to the guidelines established by fine perfume manufacturers. The methodology is based on Raman spectroscopy associated with multivariate calibration, allowing the determination of fragrance content in a fast, nondestructive, and sustainable manner. The results were considered consistent with the conventional method, with standard error of prediction values lower than 1.0%. This result indicates that the proposed technology is a feasible analytical tool for determination of the fragrance content in a hydro-alcoholic solution for use in manufacturing, quality control, and regulatory agencies.

  3. Determination of fragrance content in perfume by Raman spectroscopy and multivariate calibration.

    PubMed

    Godinho, Robson B; Santos, Mauricio C; Poppi, Ronei J

    2016-03-15

    An alternative methodology is herein proposed for determination of fragrance content in perfumes and their classification according to the guidelines established by fine perfume manufacturers. The methodology is based on Raman spectroscopy associated with multivariate calibration, allowing the determination of fragrance content in a fast, nondestructive, and sustainable manner. The results were considered consistent with the conventional method, with standard error of prediction values lower than 1.0%. This result indicates that the proposed technology is a feasible analytical tool for determination of the fragrance content in a hydro-alcoholic solution for use in manufacturing, quality control, and regulatory agencies. Copyright © 2015 Elsevier B.V. All rights reserved.
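    The "standard error of prediction below 1.0%" criterion reduces to a short calculation; the reference (conventional-method) and predicted fragrance contents below are made-up numbers used only to show the formula.

```python
# Bias-corrected standard error of prediction (SEP) on assumed values.
import numpy as np

reference = np.array([8.0, 10.5, 12.0, 15.2, 18.0, 20.1])  # % fragrance, conventional method
predicted = np.array([8.3, 10.2, 12.4, 15.0, 18.5, 19.8])  # % fragrance, Raman + PLS model

residuals = predicted - reference
bias = residuals.mean()
# SEP: standard deviation of the residuals about their mean (bias-corrected).
sep = np.sqrt(np.sum((residuals - bias) ** 2) / (residuals.size - 1))
```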

  4. Salting-out assisted liquid-liquid extraction and partial least squares regression to assay low molecular weight polycyclic aromatic hydrocarbons leached from soils and sediments

    NASA Astrophysics Data System (ADS)

    Bressan, Lucas P.; do Nascimento, Paulo Cícero; Schmidt, Marcella E. P.; Faccin, Henrique; de Machado, Leandro Carvalho; Bohrer, Denise

    2017-02-01

    A novel method was developed to determine low molecular weight polycyclic aromatic hydrocarbons in aqueous leachates from soils and sediments using salting-out assisted liquid-liquid extraction, synchronous fluorescence spectrometry, and a multivariate calibration technique. Several experimental parameters were controlled, and the optimum conditions were: sodium carbonate as the salting-out agent at a concentration of 2 mol L⁻¹, 3 mL of acetonitrile as extraction solvent, 6 mL of aqueous leachate, vortexing for 5 min, and centrifuging at 4000 rpm for 5 min. The partial least squares calibration was optimized to the lowest values of root mean squared error, and five latent variables were chosen for each of the targeted compounds. The regression coefficients for the true versus predicted concentrations were higher than 0.99. Figures of merit for the multivariate method were calculated, namely sensitivity, multivariate detection limit, and multivariate quantification limit. The selectivity was also evaluated, and other polycyclic aromatic hydrocarbons did not interfere in the analysis. Likewise, high performance liquid chromatography was used as a comparative methodology, and the regression analysis between the methods showed no statistical difference (t-test). The proposed methodology was applied to soils and sediments of a Brazilian river, and the recoveries ranged from 74.3% to 105.8%. Overall, the proposed methodology was suitable for the targeted compounds, showing that the extraction method can be applied to spectrofluorometric analysis and that the multivariate calibration is also suitable for these compounds in leachates from real samples.

  5. An efficient swarm intelligence approach to feature selection based on invasive weed optimization: Application to multivariate calibration and classification using spectroscopic data

    NASA Astrophysics Data System (ADS)

    Sheykhizadeh, Saheleh; Naseri, Abdolhossein

    2018-04-01

    Variable selection plays a key role in classification and multivariate calibration. Variable selection methods aim to choose a set of variables, from a large pool of available predictors, relevant to estimating analyte concentrations or to achieving better classification results. Many variable selection techniques have been introduced, among which those based on swarm intelligence optimization have received particular attention in recent decades, since they are mainly inspired by nature. In this work, a simple new variable selection algorithm is proposed based on the invasive weed optimization (IWO) concept. IWO is a bio-inspired metaheuristic mimicking the ecological behavior of weeds in colonizing and finding an appropriate place for growth and reproduction; it has been shown to be very adaptive and robust to environmental changes. In this paper, the first application of IWO to variable selection is reported, using different experimental datasets including FTIR and NIR data, to undertake classification and multivariate calibration tasks. Accordingly, invasive weed optimization-linear discriminant analysis (IWO-LDA) and invasive weed optimization-partial least squares (IWO-PLS) are introduced for multivariate classification and calibration, respectively.

  6. An efficient swarm intelligence approach to feature selection based on invasive weed optimization: Application to multivariate calibration and classification using spectroscopic data.

    PubMed

    Sheykhizadeh, Saheleh; Naseri, Abdolhossein

    2018-04-05

    Variable selection plays a key role in classification and multivariate calibration. Variable selection methods aim to choose a set of variables, from a large pool of available predictors, relevant to estimating analyte concentrations or to achieving better classification results. Many variable selection techniques have been introduced, among which those based on swarm intelligence optimization have received particular attention in recent decades, since they are mainly inspired by nature. In this work, a simple new variable selection algorithm is proposed based on the invasive weed optimization (IWO) concept. IWO is a bio-inspired metaheuristic mimicking the ecological behavior of weeds in colonizing and finding an appropriate place for growth and reproduction; it has been shown to be very adaptive and robust to environmental changes. In this paper, the first application of IWO to variable selection is reported, using different experimental datasets including FTIR and NIR data, to undertake classification and multivariate calibration tasks. Accordingly, invasive weed optimization-linear discriminant analysis (IWO-LDA) and invasive weed optimization-partial least squares (IWO-PLS) are introduced for multivariate classification and calibration, respectively. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. New robust bilinear least squares method for the analysis of spectral-pH matrix data.

    PubMed

    Goicoechea, Héctor C; Olivieri, Alejandro C

    2005-07-01

    A new second-order multivariate method has been developed for the analysis of spectral-pH matrix data, based on a bilinear least-squares (BLLS) model achieving the second-order advantage and handling multiple calibration standards. A simulated Monte Carlo study of synthetic absorbance-pH data allowed comparison of the newly proposed BLLS methodology with constrained parallel factor analysis (PARAFAC) and with the combined multivariate curve resolution-alternating least-squares (MCR-ALS) technique under different conditions of sample-to-sample pH mismatch and analyte-background ratio. The results indicate an improved prediction ability for the new method. Experimental data generated by measuring absorption spectra of several calibration standards of ascorbic acid and samples of orange juice were subjected to second-order calibration analysis with PARAFAC, MCR-ALS, and the new BLLS method. The results indicate that the latter method provides the best analytical results with regard to analyte recovery in samples of complex composition requiring strict adherence to the second-order advantage. Linear dependencies appear when multivariate data are produced by using pH or a reaction time as one of the data dimensions, posing a challenge to classical multivariate calibration models. The presently discussed algorithm is useful for these latter systems.
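    The bilinear structure at the heart of such second-order models (data matrix ≈ concentration profiles × spectra) can be illustrated with a plain alternating least-squares fit. This is a generic sketch, not the authors' BLLS algorithm, and the rank-2 absorbance-pH data are synthetic.

```python
# Fit D ≈ C @ S.T by alternating least squares on synthetic rank-2 data.
import numpy as np

rng = np.random.default_rng(3)
n_ph, n_wl, k = 20, 80, 2
C_true = np.abs(rng.standard_normal((n_ph, k)))   # pH-dependent profiles
S_true = np.abs(rng.standard_normal((n_wl, k)))   # component spectra
D = C_true @ S_true.T + 0.001 * rng.standard_normal((n_ph, n_wl))

C = np.abs(rng.standard_normal((n_ph, k)))        # random starting profiles
for _ in range(100):
    S = np.linalg.lstsq(C, D, rcond=None)[0].T    # fix C, solve for spectra
    C = np.linalg.lstsq(S, D.T, rcond=None)[0].T  # fix S, solve for profiles
fit = 1.0 - np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)  # explained fraction
```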

  8. Reporting and Methodology of Multivariable Analyses in Prognostic Observational Studies Published in 4 Anesthesiology Journals: A Methodological Descriptive Review.

    PubMed

    Guglielminotti, Jean; Dechartres, Agnès; Mentré, France; Montravers, Philippe; Longrois, Dan; Laouénan, Cedric

    2015-10-01

    Prognostic research studies in anesthesiology aim to identify risk factors for an outcome (explanatory studies) or calculate the risk of this outcome on the basis of patients' risk factors (predictive studies). Multivariable models express the relationship between predictors and an outcome and are used in both explanatory and predictive studies. Model development demands a strict methodology and clear reporting to assess its reliability. In this methodological descriptive review, we critically assessed the reporting and methodology of multivariable analysis used in observational prognostic studies published in anesthesiology journals. A systematic search was conducted on Medline through Web of Knowledge, PubMed, and journal websites to identify observational prognostic studies with multivariable analysis published in Anesthesiology, Anesthesia & Analgesia, British Journal of Anaesthesia, and Anaesthesia in 2010 and 2011. Data were extracted by 2 independent readers. First, studies were analyzed with respect to reporting of outcomes, design, size, methods of analysis, model performance (discrimination and calibration), model validation, clinical usefulness, and the STROBE (i.e., Strengthening the Reporting of Observational Studies in Epidemiology) checklist. A reporting rate was calculated on the basis of 21 items covering the aforementioned points. Second, the studies were analyzed with respect to predefined methodological points. Eighty-six studies were included: 87.2% were explanatory and 80.2% investigated a postoperative event. The reporting was fairly good, with a median reporting rate of 79% (75% in explanatory studies and 100% in predictive studies).
Six items had a reporting rate <36% (i.e., the 25th percentile), with some of them not identified in the STROBE checklist: blinded evaluation of the outcome (11.9%), reason for sample size (15.1%), handling of missing data (36.0%), assessment of collinearity (17.4%), assessment of interactions (13.9%), and calibration (34.9%). When reported, a few methodological shortcomings were observed in both explanatory and predictive studies, such as an insufficient number of events of the outcome (44.6%), exclusion of cases with missing data (93.6%), or categorization of continuous variables (65.1%). The reporting of multivariable analysis was fairly good and could be further improved by consulting reporting guidelines and the EQUATOR Network website. Limiting the number of candidate variables, including cases with missing data, and not arbitrarily categorizing continuous variables should be encouraged.

  9. Enzymatic electrochemical detection coupled to multivariate calibration for the determination of phenolic compounds in environmental samples.

    PubMed

    Hernandez, Silvia R; Kergaravat, Silvina V; Pividori, Maria Isabel

    2013-03-15

    An approach based on the electrochemical detection of the horseradish peroxidase enzymatic reaction by means of square wave voltammetry was developed for the determination of phenolic compounds in environmental samples. First, a systematic optimization procedure for three factors involved in the enzymatic reaction was carried out using response surface methodology through a central composite design. Second, the enzymatic electrochemical detection coupled with a multivariate calibration method based on the partial least-squares technique was optimized for the determination of a mixture of five phenolic compounds, i.e. phenol, p-aminophenol, p-chlorophenol, hydroquinone, and pyrocatechol. The calibration and validation sets were built and assessed. In the calibration model, the LODs for the phenolic compounds ranged from 0.6 to 1.4 × 10⁻⁶ mol L⁻¹. Recoveries for prediction samples were higher than 85%. These compounds were analyzed simultaneously in spiked samples and in water samples collected close to tanneries and landfills. Published by Elsevier B.V.

  10. Optical and laser spectroscopic diagnostics for energy applications

    NASA Astrophysics Data System (ADS)

    Tripathi, Markandey Mani

    The continuing need for greater energy security and energy independence has motivated researchers to develop new energy technologies for better energy resource management and efficient energy usage. The focus of this dissertation is the development of optical (spectroscopic) sensing methodologies for various fuels and energy applications. A fiber-optic NIR sensing methodology was developed for predicting water content in bio-oil. The feasibility of using the designed near infrared (NIR) system for estimating water content in bio-oil was tested by applying multivariate analysis to NIR spectral data. The calibration results demonstrated that the spectral information can successfully predict the bio-oil water content (from 16% to 36%). The effect of ultraviolet (UV) light on the chemical stability of bio-oil was studied by employing laser-induced fluorescence (LIF) spectroscopy. To simulate UV light exposure, a laser in the UV region (325 nm) was employed for bio-oil excitation. The LIF, as a signature of chemical change, was recorded from the bio-oil. From this study, it was concluded that phenols present in the bio-oil show chemical instability when exposed to UV light. A laser-induced breakdown spectroscopy (LIBS)-based optical sensor was designed, developed, and tested for the detection of four important trace impurities in rocket fuel (hydrogen). The sensor can simultaneously measure the concentrations of nitrogen, argon, oxygen, and helium in hydrogen from storage tanks and supply lines. The sensor had estimated lower detection limits of 80 ppm for nitrogen, 97 ppm for argon, 10 ppm for oxygen, and 25 ppm for helium. Chemiluminescence-based spectroscopic diagnostics were performed to measure equivalence ratios in methane-air premixed flames. A partial least-squares regression (PLS-R)-based multivariate sensing methodology was investigated.
    It was found that the equivalence ratios predicted with the PLS-R-based multivariate calibration model matched the experimentally measured equivalence ratios within 7%. A comparative study was performed for equivalence ratio measurements in atmospheric premixed methane-air flames with ungated LIBS and chemiluminescence spectroscopy. It was reported that LIBS-based calibration, which carries spectroscopic information from a "point-like volume," provides better predictions of equivalence ratios compared to chemiluminescence-based calibration, which is essentially a "line-of-sight" measurement.

  11. Development of a Pattern Recognition Methodology for Determining Operationally Optimal Heat Balance Instrumentation Calibration Schedules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurt Beran; John Christenson; Dragos Nica

    2002-12-15

    The goal of the project is to enable plant operators to detect with high sensitivity and reliability the onset of decalibration drifts in all of the instrumentation used as input to the reactor heat balance calculations. To achieve this objective, the collaborators developed and implemented at DBNPS an extension of the Multivariate State Estimation Technique (MSET) pattern recognition methodology pioneered by ANL (Argonne National Laboratory). The extension was implemented during the second phase of the project and fully achieved the project goal.

  12. Near Infrared Spectroscopy Detection and Quantification of Herbal Medicines Adulterated with Sibutramine.

    PubMed

    da Silva, Neirivaldo Cavalcante; Honorato, Ricardo Saldanha; Pimentel, Maria Fernanda; Garrigues, Salvador; Cervera, Maria Luisa; de la Guardia, Miguel

    2015-09-01

    There is an increasing demand for herbal medicines in weight loss treatment. Some synthetic chemicals, such as sibutramine (SB), have been detected as adulterants in herbal formulations. In this study, two strategies using near infrared (NIR) spectroscopy have been developed to evaluate potential adulteration of herbal medicines with SB: a qualitative screening approach and a quantitative methodology based on multivariate calibration. Samples comprised products commercialized as herbal medicines as well as laboratory-adulterated samples. Spectra were obtained in the range of 14,000-4000 cm⁻¹. Using PLS-DA, a correct classification rate of 100% was achieved for the external validation set. In the quantitative approach, the root mean square error of prediction (RMSEP), for both PLS and MLR models, was 0.2% w/w. The results prove the potential of NIR spectroscopy and multivariate calibration for quantifying sibutramine in adulterated herbal medicine samples. © 2015 American Academy of Forensic Sciences.

  13. Speciation of adsorbates on surface of solids by infrared spectroscopy and chemometrics.

    PubMed

    Vilmin, Franck; Bazin, Philippe; Thibault-Starzyk, Frédéric; Travert, Arnaud

    2015-09-03

    Speciation, i.e. identification and quantification, of surface species on heterogeneous surfaces by infrared spectroscopy is important in many fields but remains a challenging task when facing strongly overlapped spectra of multiple adspecies. Here, we propose a new methodology combining state-of-the-art instrumental developments for quantitative infrared spectroscopy of adspecies with chemometric tools, mainly a novel data-processing algorithm called SORB-MCR (SOft modeling by Recursive Based-Multivariate Curve Resolution) and multivariate calibration. After formal transposition of the general linear mixture model to adsorption spectral data, the main issues, i.e. the validity of the Beer-Lambert law and rank deficiency problems, are theoretically discussed. The methodology is then demonstrated through application to two case studies, each characterized by a specific type of rank deficiency: (i) speciation of physisorbed water species over a hydrated silica surface, and (ii) speciation (chemisorption and physisorption) of a silane probe molecule over a dehydrated silica surface. In both cases, we demonstrate the relevance of this approach, which leads to a thorough surface speciation based on comprehensive and fully interpretable multivariate quantitative models. Limitations and drawbacks of the methodology are also underlined. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Strategic development of a multivariate calibration model for the uniformity testing of tablets by transmission NIR analysis.

    PubMed

    Sasakura, D; Nakayama, K; Sakamoto, T; Chikuma, T

    2015-05-01

    The use of transmission near infrared spectroscopy (TNIRS) is of particular interest in the pharmaceutical industry. This is because TNIRS does not require sample preparation and can analyze several tens of tablet samples in an hour. It has the capability to measure all relevant information from a tablet while still on the production line. However, TNIRS has a narrow spectral range, and overtone vibrations often overlap. To perform content uniformity testing of tablets by TNIRS, various properties in the tableting process need to be analyzed by a multivariate prediction model, such as partial least squares regression modeling. One issue is that typical approaches rely on several hundred reference samples as the basis of the method rather than on a strategically designed calibration set. This means that many batches are needed to prepare the reference samples, which requires time and is not cost effective. Our group investigated the concentration dependence of the calibration model with a strategic design. Consequently, we developed a more effective approach to building the TNIRS calibration model than the existing methodology.

  15. Calibration sets and the accuracy of vibrational scaling factors: A case study with the X3LYP hybrid functional

    NASA Astrophysics Data System (ADS)

    Teixeira, Filipe; Melo, André; Cordeiro, M. Natália D. S.

    2010-09-01

    A linear least-squares methodology was used to determine the vibrational scaling factors for the X3LYP density functional. Uncertainties for these scaling factors were calculated according to the method devised by Irikura et al. [J. Phys. Chem. A 109, 8430 (2005)]. The calibration set was systematically partitioned according to several of its descriptors and the scaling factors for X3LYP were recalculated for each subset. The results show that the scaling factors are only significant up to the second digit, irrespective of the calibration set used. Furthermore, multivariate statistical analysis allowed us to conclude that the scaling factors and the associated uncertainties are independent of the size of the calibration set and strongly suggest the practical impossibility of obtaining vibrational scaling factors with more than two significant digits.

  16. Calibration sets and the accuracy of vibrational scaling factors: a case study with the X3LYP hybrid functional.

    PubMed

    Teixeira, Filipe; Melo, André; Cordeiro, M Natália D S

    2010-09-21

    A linear least-squares methodology was used to determine the vibrational scaling factors for the X3LYP density functional. Uncertainties for these scaling factors were calculated according to the method devised by Irikura et al. [J. Phys. Chem. A 109, 8430 (2005)]. The calibration set was systematically partitioned according to several of its descriptors and the scaling factors for X3LYP were recalculated for each subset. The results show that the scaling factors are only significant up to the second digit, irrespective of the calibration set used. Furthermore, multivariate statistical analysis allowed us to conclude that the scaling factors and the associated uncertainties are independent of the size of the calibration set and strongly suggest the practical impossibility of obtaining vibrational scaling factors with more than two significant digits.
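    The least-squares scaling factor in records 15 and 16 is itself a one-line computation: λ = Σωᵢνᵢ / Σωᵢ², the value minimizing Σ(λωᵢ − νᵢ)². The harmonic/observed frequency pairs below are made-up numbers, and the rms residual shown is a simpler dispersion measure than the Irikura et al. uncertainty estimate cited in the abstract.

```python
# Least-squares vibrational scaling factor on assumed frequency pairs.
import numpy as np

omega = np.array([3100.0, 1650.0, 1200.0, 950.0, 600.0])  # calculated harmonic, cm^-1
nu = np.array([2980.0, 1590.0, 1160.0, 915.0, 575.0])     # observed fundamental, cm^-1

lam = np.sum(omega * nu) / np.sum(omega ** 2)    # minimizes sum((lam*omega - nu)**2)
rms = np.sqrt(np.mean((lam * omega - nu) ** 2))  # residual spread, cm^-1
```

    With residuals of a few cm⁻¹ against factors near 0.96, the third decimal of λ is already comparable to the scatter, which is the abstract's point about two significant digits.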

  17. Linear regression analysis and its application to multivariate chromatographic calibration for the quantitative analysis of two-component mixtures.

    PubMed

    Dinç, Erdal; Ozdemir, Abdil

    2005-01-01

    A multivariate chromatographic calibration technique was developed for the quantitative analysis of binary mixtures of enalapril maleate (EA) and hydrochlorothiazide (HCT) in tablets in the presence of losartan potassium (LST). The mathematical algorithm of the multivariate chromatographic calibration technique is based on linear regression equations constructed from the relationship between concentration and peak area at a five-wavelength set. The algorithm of this calibration model, which has a simple mathematical form, is briefly described. This approach is a powerful mathematical tool for optimal chromatographic multivariate calibration and for eliminating fluctuations arising from instrumental and experimental conditions. The multivariate chromatographic calibration involves reduction of multivariate linear regression functions to a univariate data set. The validation of the model was carried out by analyzing various synthetic binary mixtures and by using the standard addition technique. The developed calibration technique was applied to the analysis of real pharmaceutical tablets containing EA and HCT. The obtained results were compared with those obtained by a classical HPLC method. It was observed that the proposed multivariate chromatographic calibration gives better results than classical HPLC.

  18. Development of a multivariate calibration model for the determination of dry extract content in Brazilian commercial bee propolis extracts through UV-Vis spectroscopy

    NASA Astrophysics Data System (ADS)

    Barbeira, Paulo J. S.; Paganotti, Rosilene S. N.; Ássimos, Ariane A.

    2013-10-01

    This study had the objective of determining the dry extract content of commercial alcoholic extracts of bee propolis through Partial Least Squares (PLS) multivariate calibration and electronic spectroscopy. The PLS model provided a good prediction of dry extract content in commercial alcoholic extracts of bee propolis in the range of 2.7 to 16.8% (m/v), with the advantage of being less laborious and faster than the traditional gravimetric methodology. The PLS model was optimized with outlier detection tests according to ASTM E 1655-05. In this study it was possible to verify that a centrifugation stage is extremely important in order to avoid the presence of waxes, resulting in a more accurate model. Around 50% of the analyzed samples presented a dry extract content lower than the value established by Brazilian legislation; in most cases, the values found differed from those claimed on the product label.

  19. Objective calibration of numerical weather prediction models

    NASA Astrophysics Data System (ADS)

    Voudouri, A.; Khain, P.; Carmona, I.; Bellprat, O.; Grazzini, F.; Avgoustoglou, E.; Bettems, J. M.; Kaufmann, P.

    2017-07-01

    Numerical weather prediction (NWP) and climate models use parameterization schemes for physical processes, which often include free or poorly constrained parameters. Model developers normally calibrate the values of these parameters subjectively to improve the agreement of forecasts with available observations, a procedure referred to as expert tuning. A practicable objective multivariate calibration method built on a quadratic meta-model (MM), which has been applied to a regional climate model (RCM), has been shown to be at least as good as expert tuning. Based on these results, an approach for implementing the methodology in an NWP model is presented in this study. Challenges in transferring the methodology from RCM to NWP are not restricted to the use of higher resolution and different time scales. The sensitivity of NWP model quality with respect to the model parameter space has to be clarified, and the overall procedure has to be optimized in terms of the computing resources required for the calibration of an NWP model. Three free model parameters, mainly affecting turbulence parameterization schemes, were originally selected with respect to their influence on variables associated with daily forecasts, such as daily minimum and maximum 2 m temperature as well as 24 h accumulated precipitation. Preliminary results indicate that the approach is both affordable in terms of computing resources and meaningful in terms of improved forecast quality. In addition, the proposed methodology has the advantage of being a replicable procedure that can be applied when an updated model version is launched and/or when the same model implementation is customized for different climatological areas.
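    The quadratic meta-model idea — fit a second-order surface to a forecast-error score sampled at a modest number of parameter settings, then take the minimum of the fitted surface as the calibrated parameter set — can be sketched as follows. The two "free parameters" and the synthetic score are stand-ins for real NWP runs.

```python
# Quadratic meta-model calibration sketch on a synthetic score surface.
import numpy as np

rng = np.random.default_rng(5)
theta = rng.uniform(-1.0, 1.0, (30, 2))      # 30 "model runs" over 2 parameters
true_opt = np.array([0.3, -0.5])             # assumed best parameter setting
score = np.sum((theta - true_opt) ** 2, axis=1) + 0.01 * rng.standard_normal(30)

# Full quadratic design matrix in the two parameters.
t1, t2 = theta[:, 0], theta[:, 1]
A = np.column_stack([np.ones(30), t1, t2, t1 * t2, t1 ** 2, t2 ** 2])
c0, c1, c2, c12, c11, c22 = np.linalg.lstsq(A, score, rcond=None)[0]

# Minimize the fitted quadratic: set its gradient to zero (2x2 linear system).
H = np.array([[2.0 * c11, c12], [c12, 2.0 * c22]])
opt = np.linalg.solve(H, -np.array([c1, c2]))  # calibrated parameter values
```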

  20. Multivariate Formation Pressure Prediction with Seismic-derived Petrophysical Properties from Prestack AVO inversion and Poststack Seismic Motion Inversion

    NASA Astrophysics Data System (ADS)

    Yu, H.; Gu, H.

    2017-12-01

    A novel multivariate seismic formation pressure prediction methodology is presented, which incorporates high-resolution seismic velocity data from prestack AVO inversion and petrophysical data (porosity and shale volume) derived from poststack seismic motion inversion. In contrast to traditional seismic pressure prediction methods, the proposed methodology is based on a multivariate pressure prediction model and applies a trace-by-trace multivariate regression analysis to seismic-derived petrophysical properties to calibrate model parameters, in order to make accurate predictions with higher resolution in both the vertical and lateral directions. With the prestack time migration velocity as the initial velocity model, an AVO inversion was first applied to the prestack dataset to obtain a high-resolution, higher-frequency seismic velocity field to be used as the velocity input for seismic pressure prediction, along with a density dataset used to calculate an accurate overburden pressure (OBP). Seismic Motion Inversion (SMI) is an inversion technique based on Markov chain Monte Carlo simulation. Both the structural variability and the similarity of seismic waveforms are used to incorporate well log data and characterize the variability of the property to be obtained. In this research, porosity and shale volume were first interpreted on well logs and then combined with poststack seismic data using SMI to build porosity and shale volume datasets for seismic pressure prediction. A multivariate effective stress model is used to convert the velocity, porosity, and shale volume datasets to effective stress. After a thorough study of the regional stratigraphic and sedimentary characteristics, a regional normally compacted interval model is built, and the coefficients in the multivariate prediction model are then determined in a trace-by-trace multivariate regression analysis on the petrophysical data.
The coefficients are used to convert the velocity, porosity, and shale volume datasets to effective stress and then to calculate formation pressure with the OBP. Application of the proposed methodology to a research area in the East China Sea has shown that the method can bridge the gap between seismic and well log pressure prediction and give predicted pressure values close to pressure measurements from well testing.

  1. Establishing a standard calibration methodology for MOSFET detectors in computed tomography dosimetry.

    PubMed

    Brady, S L; Kaufman, R A

    2012-06-01

    The use of metal-oxide-semiconductor field-effect transistor (MOSFET) detectors for patient dosimetry has increased by ~25% since 2005. Despite this increase, no standard calibration methodology has been identified, nor has calibration uncertainty been quantified, for the use of MOSFET dosimetry in CT. This work compares three MOSFET calibration methodologies proposed in the literature and additionally investigates questions relating to the optimal time for signal equilibration and the exposure levels for maximum calibration precision. The calibration methodologies tested were (1) free in-air (FIA) with a radiographic x-ray tube, (2) FIA with a stationary CT x-ray tube, and (3) within a scatter phantom with a rotational CT x-ray tube. Each calibration was performed at absorbed dose levels of 10, 23, and 35 mGy. Times of 0 min or 5 min were investigated for signal equilibration before or after signal read out. Calibration precision was measured to be better than 5%-7%, 3%-5%, and 2%-4% for the 10, 23, and 35 mGy dose levels, respectively, and was independent of calibration methodology. No correlation was demonstrated between precision and signal equilibration time when allowing 5 min before or after signal read out. Differences in average calibration coefficients were demonstrated between the FIA with CT methodology (26.7 ± 1.1 mV cGy(-1)) and both the CT scatter phantom (29.2 ± 1.0 mV cGy(-1)) and FIA with x-ray (29.9 ± 1.1 mV cGy(-1)) methodologies. A decrease in MOSFET sensitivity was seen at an average change in read out voltage of ~3000 mV. The best measured calibration precision was obtained by exposing the MOSFET detectors to 23 mGy. No signal equilibration time is necessary to improve calibration precision. A significant difference between calibration outcomes was demonstrated for FIA with CT compared to the other two methodologies.
If the FIA with a CT calibration methodology is used to create calibration coefficients for eventual phantom dosimetry, a measurement error of ~12% will be reflected in the dosimetry results. The calibration process must emulate the eventual CT dosimetry process by matching or excluding scatter when calibrating the MOSFETs. Finally, the authors recommend that the MOSFETs be energy calibrated approximately every 2500-3000 mV. © 2012 American Association of Physicists in Medicine.
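A calibration coefficient of this kind is simply the mean voltage shift per unit dose, applied in reverse to convert later read-outs to dose. A minimal sketch, using made-up voltage shifts at the paper's three dose levels (10, 23, and 35 mGy, i.e. 1.0, 2.3, and 3.5 cGy):

```python
def calibration_coefficient(delta_mv, dose_cgy):
    """Average MOSFET threshold-voltage shift (mV) per cGy of absorbed dose."""
    return sum(dv / d for dv, d in zip(delta_mv, dose_cgy)) / len(delta_mv)

def dose_from_shift(delta_mv, coeff_mv_per_cgy):
    """Convert a measured voltage shift back to absorbed dose (cGy)."""
    return delta_mv / coeff_mv_per_cgy

# voltage shifts below are invented for illustration, not measured values
coeff = calibration_coefficient([29.2, 67.2, 102.2], [1.0, 2.3, 3.5])
dose = dose_from_shift(58.4, coeff)   # ~2.0 cGy
```

The paper's finding that FIA-with-CT coefficients differ from scatter-phantom coefficients means the two would yield systematically different doses for the same voltage shift, hence the requirement to match scatter conditions.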

  2. FT-IR spectroscopy and multivariate analysis as an auxiliary tool for diagnosis of mental disorders: Bipolar and schizophrenia cases

    NASA Astrophysics Data System (ADS)

    Ogruc Ildiz, G.; Arslan, M.; Unsalan, O.; Araujo-Andrade, C.; Kurt, E.; Karatepe, H. T.; Yilmaz, A.; Yalcinkaya, O. B.; Herken, H.

    2016-01-01

    In this study, a methodology based on Fourier-transform infrared spectroscopy, principal component analysis, and partial least squares methods is proposed for the analysis of blood plasma samples in order to identify spectral changes correlated with biomarkers associated with schizophrenia and bipolar disorder. Our main goal was to use the spectral information to calibrate statistical models that discriminate and classify blood plasma samples from bipolar and schizophrenic patients. IR spectra were collected from 30 blood plasma samples from each group: bipolar patients, schizophrenic patients, and healthy controls. The results obtained from principal component analysis (PCA) show a clear discrimination between the bipolar (BP), schizophrenic (SZ), and control group (CG) blood samples and also make it possible to identify three main spectral regions showing the major differences correlated with both mental disorders (biomarkers). Furthermore, a model for the classification of the blood samples was calibrated using partial least squares discriminant analysis (PLS-DA), allowing the correct classification of BP, SZ, and CG samples. The results obtained by applying this methodology suggest that it can be used as a complementary diagnostic tool for the detection and discrimination of these mental disorders.
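The discrimination step rests on projecting spectra onto principal components and inspecting group separation in score space. A minimal PCA-by-SVD sketch on synthetic "spectra" (the group difference is an invented absorption feature, not real plasma data):

```python
import numpy as np

def pca_scores(spectra, n_components=2):
    """Project mean-centred spectra onto their leading principal components."""
    centred = spectra - spectra.mean(axis=0)
    _, _, Vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ Vt[:n_components].T

# two synthetic groups of 'spectra'; the second group gets an invented
# absorption feature in channels 40-49 (illustration only)
rng = np.random.default_rng(1)
spectra = rng.normal(0.0, 0.01, (40, 100))
spectra[20:, 40:50] += 1.0
scores = pca_scores(spectra)
# the groups separate along PC1
gap = abs(scores[:20, 0].mean() - scores[20:, 0].mean())
```

In the study, the loadings of the discriminating components are what point back to the spectral regions (candidate biomarkers) driving the separation.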

  3. Hybrid least squares multivariate spectral analysis methods

    DOEpatents

    Haaland, David M.

    2004-03-23

    A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following prediction or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The hybrid method herein means a combination of an initial calibration step with subsequent analysis by an inverse multivariate analysis method. A spectral shape herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The shape can be continuous, discontinuous, or even discrete points illustrative of the particular effect.
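The augmentation idea can be illustrated in a few lines: calibrate pure-component spectra by classical least squares, then append a known spectral shape (here a hypothetical baseline drift) before estimating a new sample. This is a toy sketch of the general idea, not the patented procedure:

```python
import numpy as np

def cls_calibrate(C, A):
    """Classical least squares calibration: estimate pure-component spectra K
    from known concentrations C (samples x components) and spectra A."""
    K, *_ = np.linalg.lstsq(C, A, rcond=None)
    return K

def predict_augmented(K, extra_shape, spectrum):
    """Append a spectral shape absent from calibration, then estimate the
    contributions of all shapes to a new spectrum (inverse step)."""
    K_aug = np.vstack([K, extra_shape])
    c, *_ = np.linalg.lstsq(K_aug.T, spectrum, rcond=None)
    return c

# toy system: two calibrated components plus an un-calibrated drift shape
w = np.linspace(0.0, 1.0, 60)
k1 = np.exp(-((w - 0.3) / 0.05) ** 2)
k2 = np.exp(-((w - 0.7) / 0.05) ** 2)
drift = w                                  # hypothetical baseline drift
C = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
A = C @ np.vstack([k1, k2])
K = cls_calibrate(C, A)
test_spec = 0.4 * k1 + 0.6 * k2 + 0.2 * drift
c_hat = predict_augmented(K, drift, test_spec)   # ~[0.4, 0.6, 0.2]
```

Without the appended drift shape, the drift contribution would bias the estimates of the two calibrated components; with it, all three contributions are resolved.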

  4. Hybrid least squares multivariate spectral analysis methods

    DOEpatents

    Haaland, David M.

    2002-01-01

    A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following estimation or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The "hybrid" method herein means a combination of an initial classical least squares analysis calibration step with subsequent analysis by an inverse multivariate analysis method. A "spectral shape" herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The "shape" can be continuous, discontinuous, or even discrete points illustrative of the particular effect.

  5. Application of Fluorescence Spectrometry With Multivariate Calibration to the Enantiomeric Recognition of Fluoxetine in Pharmaceutical Preparations.

    PubMed

    Poláček, Roman; Májek, Pavel; Hroboňová, Katarína; Sádecká, Jana

    2016-04-01

    Fluoxetine is the most prescribed chiral antidepressant drug worldwide. Its enantiomers differ in their duration of serotonin inhibition. A novel, simple, and rapid method for determining the enantiomeric composition of fluoxetine in pharmaceutical pills is presented. Specifically, emission, excitation, and synchronous fluorescence techniques were employed to obtain the spectral data, which were then modeled with multivariate calibration methods, namely principal component regression (PCR) and partial least squares (PLS). The chiral recognition of fluoxetine enantiomers in the presence of β-cyclodextrin was based on the formation of diastereomeric complexes. The results of the multivariate calibration modeling indicated good prediction abilities. The results obtained for tablets were compared with those from chiral HPLC, and no significant differences were shown by Fisher's (F) test and Student's t-test. The smallest residuals between reference or nominal values and predicted values were achieved by multivariate calibration of the synchronous fluorescence spectral data. This conclusion is supported by the calculated figures of merit.

  6. Calibration of CORSIM models under saturated traffic flow conditions.

    DOT National Transportation Integrated Search

    2013-09-01

    This study proposes a methodology to calibrate microscopic traffic flow simulation models. The proposed methodology has the capability to calibrate simultaneously all the calibration parameters as well as demand patterns for any network topology....

  7. Applications of Quantum Cascade Laser Spectroscopy in the Analysis of Pharmaceutical Formulations.

    PubMed

    Galán-Freyle, Nataly J; Pacheco-Londoño, Leonardo C; Román-Ospino, Andrés D; Hernandez-Rivera, Samuel P

    2016-09-01

    Quantum cascade laser spectroscopy was used to quantify active pharmaceutical ingredient content in a model formulation. The analyses were conducted in non-contact mode by mid-infrared diffuse reflectance. Measurements were carried out at a distance of 15 cm, covering the spectral range 1000-1600 cm(-1). Calibrations were generated by applying multivariate analysis using partial least squares models. Among the figures of merit of the proposed methodology are a high analytical sensitivity equivalent to 0.05% active pharmaceutical ingredient in the formulation, high repeatability (2.7%), high reproducibility (5.4%), and a low limit of detection (1%). The relatively high power of the quantum-cascade-laser-based spectroscopic system enabled the design of detection and quantification methodologies for pharmaceutical applications with high accuracy and precision, comparable to those of methodologies based on near-infrared spectroscopy, attenuated total reflection mid-infrared Fourier transform infrared spectroscopy, and Raman spectroscopy. © The Author(s) 2016.
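Figures of merit such as the limit of detection are conventionally derived from the blank noise and the calibration sensitivity. A generic ICH-style sketch with invented numbers (not this paper's data):

```python
def limit_of_detection(blank_sd, slope, k=3.3):
    """Generic LOD estimate: k times the standard deviation of the blank
    response divided by the calibration slope (k = 3.3 is customary)."""
    return k * blank_sd / slope

# hypothetical numbers: response in arbitrary units per % API
lod = limit_of_detection(blank_sd=0.006, slope=0.02)   # ~0.99 % API
```

For a multivariate PLS model, the same formula is typically applied to predicted concentrations or to a pseudo-univariate calibration line rather than to a single wavelength.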

  8. Headspace-programmed temperature vaporization-mass spectrometry for the rapid determination of possible volatile biomarkers of lung cancer in urine.

    PubMed

    Pérez Antón, Ana; Ramos, Álvaro García; Del Nogal Sánchez, Miguel; Pavón, José Luis Pérez; Cordero, Bernardo Moreno; Pozas, Ángel Pedro Crisolino

    2016-07-01

    We propose a new method for the rapid determination of five volatile compounds described in the literature as possible biomarkers of lung cancer in urine samples. The method is based on the coupling of a headspace sampler, a programmed temperature vaporizer in solvent-vent injection mode, and a mass spectrometer (HS-PTV-MS). This configuration is known as an electronic nose based on mass spectrometry. Once the method was developed, it was used for the analysis of urine samples from lung cancer patients and healthy individuals. Multivariate calibration models were employed to quantify the biomarker concentrations in the samples. The detection limits ranged between 0.16 and 21 μg/L. For the assignment of the samples to the patient group or the healthy group, the Wilcoxon signed-rank test was used, comparing the concentrations obtained with the median of a reference set of healthy individuals. To date, this is the first time that multivariate calibration and non-parametric methods have been combined to classify biological samples from profile signals obtained with an electronic nose. When significant differences in the concentration of one or more biomarkers were found with respect to the reference set, the sample was considered positive and a new analysis was performed using a chromatographic method (HS-PTV-GC/MS) to confirm the result. The main advantage of the proposed HS-PTV-MS methodology is that no prior chromatographic separation and no sample manipulation are required, which allows an increase in the number of samples analyzed per hour and restricts the use of time-consuming techniques to only when necessary. Graphical abstract: Schematic diagram of the developed methodology.

  9. Fresh Biomass Estimation in Heterogeneous Grassland Using Hyperspectral Measurements and Multivariate Statistical Analysis

    NASA Astrophysics Data System (ADS)

    Darvishzadeh, R.; Skidmore, A. K.; Mirzaie, M.; Atzberger, C.; Schlerf, M.

    2014-12-01

    Accurate estimation of grassland biomass at peak productivity can provide crucial information regarding the functioning and productivity of rangelands. Hyperspectral remote sensing has proved valuable for the estimation of vegetation biophysical parameters such as biomass using different statistical techniques. However, in statistical analysis of hyperspectral data, multicollinearity is a common problem due to the large number of correlated hyperspectral reflectance measurements. The aim of this study was to examine the prospect of above-ground biomass estimation in a heterogeneous Mediterranean rangeland employing multivariate calibration methods. Canopy spectral measurements were made in the field using a GER 3700 spectroradiometer, along with concomitant in situ measurements of above-ground biomass for 170 sample plots. Multivariate calibration methods including partial least squares regression (PLSR), principal component regression (PCR), and least-squares support vector machines (LS-SVM) were used to estimate the above-ground biomass. The prediction accuracy of the multivariate calibration methods was assessed using cross-validated R2 and RMSE. The best model performance was obtained using LS-SVM and then PLSR, both calibrated with the first-derivative reflectance dataset, with R2cv = 0.88 and 0.86 and RMSEcv = 1.15 and 1.07, respectively. The weakest prediction accuracy appeared when PCR was used (R2cv = 0.31 and RMSEcv = 2.48). The obtained results highlight the importance of multivariate calibration methods for biomass estimation when hyperspectral data are used.
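The cross-validated R2 and RMSE used to compare the calibrations can be computed generically; the sketch below uses ordinary least squares as a stand-in for PLSR/PCR/LS-SVM and synthetic data:

```python
import numpy as np

def loo_cv_metrics(X, y, fit, predict):
    """Leave-one-out cross-validated R^2 and RMSE for any fit/predict pair."""
    n = len(y)
    preds = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i          # hold out sample i
        model = fit(X[keep], y[keep])
        preds[i] = predict(model, X[i:i + 1])[0]
    ss_res = np.sum((y - preds) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot, np.sqrt(ss_res / n)

# ordinary least squares as a stand-in model, on synthetic data
fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
predict = lambda b, X: X @ b
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(30), rng.uniform(0.0, 1.0, 30)])
y = X @ np.array([1.0, 4.0]) + rng.normal(0.0, 0.05, 30)
r2cv, rmsecv = loo_cv_metrics(X, y, fit, predict)
```

With 170 plots, k-fold rather than leave-one-out cross-validation would also be a reasonable choice; the metrics are computed identically from the held-out predictions.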

  10. Novel hyperspectral prediction method and apparatus

    NASA Astrophysics Data System (ADS)

    Kemeny, Gabor J.; Crothers, Natalie A.; Groth, Gard A.; Speck, Kathy A.; Marbach, Ralf

    2009-05-01

    Both the power and the challenge of hyperspectral technologies lie in the very large amount of data produced by spectral cameras. While off-line methodologies allow the collection of gigabytes of data, extended data analysis sessions are required to convert the data into useful information. In contrast, real-time monitoring, such as on-line process control, requires that compression of spectral data and analysis occur at a sustained full camera data rate. Efficient, high-speed practical methods for calibration and prediction are therefore sought to optimize the value of hyperspectral imaging. A novel method of matched filtering known as science based multivariate calibration (SBC) was developed for hyperspectral calibration. Classical (MLR) and inverse (PLS, PCR) methods are combined by spectroscopically measuring the spectral "signal" and statistically estimating the spectral "noise." The accuracy of the inverse model is thus combined with the easy interpretability of the classical model. The SBC method is optimized for hyperspectral data in the Hyper-Cal™ software used for the present work. The prediction algorithms can then be downloaded into a dedicated FPGA-based High-Speed Prediction Engine™ module. Spectral pretreatments and calibration coefficients are stored on interchangeable SD memory cards, and predicted compositions are produced on a USB interface at real-time camera output rates. Applications include minerals, pharmaceuticals, food processing and remote sensing.
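The matched-filter construction underlying this kind of calibration combines a measured signal spectrum g with an estimated noise covariance Sigma into the regression vector b = Sigma^-1 g / (g' Sigma^-1 g). A minimal sketch with toy numbers (a real SBC workflow additionally involves estimating Sigma from replicate or process spectra):

```python
import numpy as np

def matched_filter_regression(signal, noise_cov):
    """Matched-filter regression vector b = Sigma^-1 g / (g' Sigma^-1 g)."""
    si = np.linalg.solve(noise_cov, signal)   # Sigma^-1 g
    return si / (signal @ si)

g = np.array([0.0, 1.0, 2.0, 1.0, 0.0])   # measured analyte 'signal' shape
Sigma = 0.01 * np.eye(5)                   # estimated 'noise' covariance (toy)
b = matched_filter_regression(g, Sigma)
pred = g @ b                               # unit response to the pure signal
```

By construction the filter gives a unit response to the pure signal while down-weighting directions where the estimated noise is large, which is why such a compact per-pixel dot product suits FPGA execution at camera rates.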

  11. Analytical robustness of quantitative NIR chemical imaging for Islamic paper characterization

    NASA Astrophysics Data System (ADS)

    Mahgoub, Hend; Gilchrist, John R.; Fearn, Thomas; Strlič, Matija

    2017-07-01

    Recently, spectral imaging techniques such as multispectral imaging (MSI) and hyperspectral imaging (HSI) have gained importance in the field of heritage conservation. This paper explores the analytical robustness of quantitative chemical imaging for Islamic paper characterization by focusing on the effect of different measurement and processing parameters, i.e. acquisition conditions and calibration, on the accuracy of the collected spectral data. This provides a better understanding of a technique that can offer a measure of change in collections through imaging. For the quantitative model, a special calibration target was devised using 105 samples from a well-characterized reference Islamic paper collection. Two material properties were of interest: starch sizing and cellulose degree of polymerization (DP). Multivariate data analysis methods were used to develop discrimination and regression models, which served as an evaluation methodology for the metrology of quantitative NIR chemical imaging. Spectral data were collected using a pushbroom HSI scanner (Gilden Photonics Ltd) in the 1000-2500 nm range with a spectral resolution of 6.3 nm using a mirror scanning setup and halogen illumination. Data were acquired under different measurement conditions and acquisition parameters. Preliminary results showed the potential of the evaluation methodology to demonstrate that measurement parameters such as the use of different lenses and different scanning backgrounds may not have a great influence on the quantitative results. Moreover, the evaluation methodology allowed for the selection of the best pre-treatment method to be applied to the data.

  12. Augmented classical least squares multivariate spectral analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2004-02-03

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  13. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-07-26

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  14. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-01-11

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  15. Digital filtering and model updating methods for improving the robustness of near-infrared multivariate calibrations.

    PubMed

    Kramer, Kirsten E; Small, Gary W

    2009-02-01

    Fourier transform near-infrared (NIR) transmission spectra are used for quantitative analysis of glucose for 17 sets of prediction data sampled as much as six months outside the timeframe of the corresponding calibration data. Aqueous samples containing physiological levels of glucose in a matrix of bovine serum albumin and triacetin are used to simulate clinical samples such as blood plasma. Background spectra of a single analyte-free matrix sample acquired during the instrumental warm-up period on the prediction day are used for calibration updating and for determining the optimal frequency response of a preprocessing infinite impulse response time-domain digital filter. By tuning the filter and the calibration model to the specific instrumental response associated with the prediction day, the calibration model is given enhanced ability to operate over time. This methodology is demonstrated in conjunction with partial least squares calibration models built with a spectral range of 4700-4300 cm(-1). By using a subset of the background spectra to evaluate the prediction performance of the updated model, projections can be made regarding the success of subsequent glucose predictions. If a threshold standard error of prediction (SEP) of 1.5 mM is used to establish successful model performance with the glucose samples, the corresponding threshold for the SEP of the background spectra is found to be 1.3 mM. For calibration updating in conjunction with digital filtering, SEP values of all 17 prediction sets collected over 3-178 days displaced from the calibration data are below 1.5 mM. In addition, the diagnostic based on the background spectra correctly assesses the prediction performance in 16 of the 17 cases.
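The preprocessing filter referred to above is an infinite impulse response design whose frequency response is tuned to the prediction day's instrumental response; the sketch below shows only the simplest first-order IIR form, y[n] = a*x[n] + (1 - a)*y[n-1], as a generic illustration of time-domain IIR filtering, not the optimized filter of the paper:

```python
def iir_first_order(x, alpha):
    """First-order IIR low-pass: y[n] = alpha*x[n] + (1 - alpha)*y[n-1]."""
    y, prev = [], 0.0
    for v in x:
        prev = alpha * v + (1.0 - alpha) * prev
        y.append(prev)
    return y

# step response: the output converges geometrically toward the input level
smoothed = iir_first_order([1.0] * 10, alpha=0.5)
```

In the paper's scheme, the analogous filter parameters are chosen so that prediction-day background spectra, after filtering, best match the conditions under which the PLS model was calibrated.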

  16. Bivariate versus multivariate smart spectrophotometric calibration methods for the simultaneous determination of a quaternary mixture of mosapride, pantoprazole and their degradation products.

    PubMed

    Hegazy, M A; Yehia, A M; Moustafa, A A

    2013-05-01

    The ability of bivariate and multivariate spectrophotometric methods was demonstrated in the resolution of a quaternary mixture of mosapride, pantoprazole, and their degradation products. The bivariate calibrations comprised the bivariate spectrophotometric method (BSM) and the H-point standard addition method (HPSAM), which were able to determine the two drugs simultaneously, but not in the presence of their degradation products. The results showed that simultaneous determinations could be performed in the concentration ranges of 5.0-50.0 microg/ml for mosapride and 10.0-40.0 microg/ml for pantoprazole by the bivariate spectrophotometric method, and in the concentration ranges of 5.0-45.0 microg/ml for both drugs by the H-point standard addition method. Moreover, the applied multivariate calibration methods were able to determine mosapride, pantoprazole, and their degradation products using concentration residuals augmented classical least squares (CRACLS) and partial least squares (PLS). The proposed multivariate methods were applied to 17 synthetic samples in the concentration ranges of 3.0-12.0 microg/ml mosapride, 8.0-32.0 microg/ml pantoprazole, 1.5-6.0 microg/ml mosapride degradation products, and 2.0-8.0 microg/ml pantoprazole degradation products. The proposed bivariate and multivariate calibration methods were successfully applied to the determination of mosapride and pantoprazole in their pharmaceutical preparations.

  17. Discordance between net analyte signal theory and practical multivariate calibration.

    PubMed

    Brown, Christopher D

    2004-08-01

    Lorber's concept of net analyte signal is reviewed in the context of classical and inverse least-squares approaches to multivariate calibration. It is shown that, in the presence of device measurement error, the classical and inverse calibration procedures have radically different theoretical prediction objectives, and the assertion that the popular inverse least-squares procedures (including partial least squares, principal components regression) approximate Lorber's net analyte signal vector in the limit is disproved. Exact theoretical expressions for the prediction error bias, variance, and mean-squared error are given under general measurement error conditions, which reinforce the very discrepant behavior between these two predictive approaches, and Lorber's net analyte signal theory. Implications for multivariate figures of merit and numerous recently proposed preprocessing treatments involving orthogonal projections are also discussed.
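Lorber's net analyte signal is the portion of the analyte's spectrum orthogonal to the subspace spanned by the other components' spectra, computable with a projection matrix. A minimal sketch on toy vectors:

```python
import numpy as np

def net_analyte_signal(s_k, S_others):
    """Part of the analyte spectrum s_k orthogonal to the column space of
    the other components' spectra S_others (Lorber's construction)."""
    P = S_others @ np.linalg.pinv(S_others)   # projector onto interferent space
    return (np.eye(len(s_k)) - P) @ s_k

s_k = np.array([1.0, 1.0, 0.0])               # analyte spectrum (toy)
S_others = np.array([[1.0], [0.0], [0.0]])    # one interferent spectrum (toy)
nas = net_analyte_signal(s_k, S_others)       # -> [0., 1., 0.]
```

The abstract's point is that the regression vectors produced by inverse methods such as PLS do not, in general, converge to this vector under measurement error, even though both are often discussed under the same "net analyte signal" heading.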

  18. Wavelet Analysis Used for Spectral Background Removal in the Determination of Glucose from Near-Infrared Single-Beam Spectra

    PubMed Central

    Wan, Boyong; Small, Gary W.

    2010-01-01

    Wavelet analysis is developed as a preprocessing tool for use in removing background information from near-infrared (near-IR) single-beam spectra before the construction of multivariate calibration models. Three data sets collected with three different near-IR spectrometers are investigated that involve the determination of physiological levels of glucose (1-30 mM) in a simulated biological matrix containing alanine, ascorbate, lactate, triacetin, and urea in phosphate buffer. A factorial design is employed to optimize the specific wavelet function used and the level of decomposition applied, in addition to the spectral range and number of latent variables associated with a partial least-squares calibration model. The prediction performance of the computed models is studied with separate data acquired after the collection of the calibration spectra. This evaluation includes one data set collected over a period of more than six months. Preprocessing with wavelet analysis is also compared to the calculation of second-derivative spectra. Over the three data sets evaluated, wavelet analysis is observed to produce better-performing calibration models, with improvements in concentration predictions on the order of 30% being realized relative to models based on either second-derivative spectra or spectra preprocessed with simple additive and multiplicative scaling correction. This methodology allows the construction of stable calibrations directly with single-beam spectra, thereby eliminating the need for the collection of a separate background or reference spectrum. PMID:21035604
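The background-removal idea can be illustrated with a single-level Haar transform: the approximation coefficients carry the slowly varying background, so reconstructing from the detail coefficients alone suppresses it. The paper optimizes the wavelet function and decomposition level via a factorial design; this hand-rolled Haar example is only the simplest case:

```python
import numpy as np

def haar_detail_only(spec):
    """One-level Haar transform; reconstruct from detail coefficients only,
    zeroing the approximation that carries the smooth background."""
    d = (spec[0::2] - spec[1::2]) / 2.0   # detail coefficients
    y = np.empty_like(spec)
    y[0::2] = d                           # reconstruction with approximation = 0
    y[1::2] = -d
    return y

# sharp synthetic feature riding on a sloping baseline (illustration only)
x = np.linspace(0.0, 1.0, 64)
spectrum = 2.0 + 3.0 * x
spectrum[30] += 1.0
filtered = haar_detail_only(spectrum)     # baseline largely suppressed
```

Deeper decompositions give finer control over which background scales are discarded, which is exactly the level parameter the authors optimize.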

  19. Wavelet analysis used for spectral background removal in the determination of glucose from near-infrared single-beam spectra.

    PubMed

    Wan, Boyong; Small, Gary W

    2010-11-29

    Wavelet analysis is developed as a preprocessing tool for use in removing background information from near-infrared (near-IR) single-beam spectra before the construction of multivariate calibration models. Three data sets collected with three different near-IR spectrometers are investigated that involve the determination of physiological levels of glucose (1-30 mM) in a simulated biological matrix containing alanine, ascorbate, lactate, triacetin, and urea in phosphate buffer. A factorial design is employed to optimize the specific wavelet function used and the level of decomposition applied, in addition to the spectral range and number of latent variables associated with a partial least-squares calibration model. The prediction performance of the computed models is studied with separate data acquired after the collection of the calibration spectra. This evaluation includes one data set collected over a period of more than 6 months. Preprocessing with wavelet analysis is also compared to the calculation of second-derivative spectra. Over the three data sets evaluated, wavelet analysis is observed to produce better-performing calibration models, with improvements in concentration predictions on the order of 30% being realized relative to models based on either second-derivative spectra or spectra preprocessed with simple additive and multiplicative scaling correction. This methodology allows the construction of stable calibrations directly with single-beam spectra, thereby eliminating the need for the collection of a separate background or reference spectrum. Copyright © 2010 Elsevier B.V. All rights reserved.

  20. Predicting microbiologically defined infection in febrile neutropenic episodes in children: global individual participant data multivariable meta-analysis

    PubMed Central

    Phillips, Robert S; Sung, Lillian; Amman, Roland A; Riley, Richard D; Castagnola, Elio; Haeusler, Gabrielle M; Klaassen, Robert; Tissing, Wim J E; Lehrnbecher, Thomas; Chisholm, Julia; Hakim, Hana; Ranasinghe, Neil; Paesmans, Marianne; Hann, Ian M; Stewart, Lesley A

    2016-01-01

    Background: Risk-stratified management of fever with neutropenia (FN) allows intensive management of high-risk cases and early discharge of low-risk cases. No single, internationally validated prediction model of the risk of adverse outcomes exists for children and young people. An individual patient data (IPD) meta-analysis was undertaken to devise one. Methods: The ‘Predicting Infectious Complications in Children with Cancer’ (PICNICC) collaboration was formed by parent representatives and international clinical and methodological experts. Univariable and multivariable analyses, using random effects logistic regression, were undertaken to derive and internally validate a risk-prediction model for outcomes of episodes of FN based on clinical and laboratory data at presentation. Results: Data came from 22 different study groups from 15 countries, covering 5127 episodes of FN in 3504 patients. There were 1070 episodes in 616 patients from seven studies available for multivariable analysis. Univariable analyses showed associations with microbiologically defined infection (MDI) for many items, including higher temperature, lower white cell counts, and acute myeloid leukaemia, but not age. Osteosarcoma/Ewing's sarcoma and more severe mucositis were associated with a decreased risk of MDI. The predictive model included: malignancy type, temperature, clinically ‘severely unwell’, haemoglobin, white cell count, and absolute monocyte count. It showed moderate discrimination (AUROC 0.723, 95% confidence interval 0.711-0.759) and good calibration (calibration slope 0.95). The model was robust to bootstrap and cross-validation sensitivity analyses. Conclusions: This new prediction model for risk of MDI appears accurate. It requires prospective studies assessing implementation to assist clinicians and parents/patients in individualised decision making. PMID:26954719

  1. Calibrated Multivariate Regression with Application to Neural Semantic Basis Discovery.

    PubMed

    Liu, Han; Wang, Lie; Zhao, Tuo

    2015-08-01

    We propose a calibrated multivariate regression method named CMR for fitting high-dimensional multivariate regression models. Compared with existing methods, CMR calibrates the regularization for each regression task with respect to its noise level, so that it simultaneously attains improved finite-sample performance and insensitivity to tuning. Theoretically, we provide sufficient conditions under which CMR achieves the optimal rate of convergence in parameter estimation. Computationally, we propose an efficient smoothed proximal gradient algorithm with a worst-case numerical rate of convergence O(1/ε), where ε is a pre-specified accuracy of the objective function value. We conduct thorough numerical simulations to illustrate that CMR consistently outperforms other high-dimensional multivariate regression methods. We also apply CMR to solve a brain activity prediction problem and find that it is as competitive as a handcrafted model created by human experts. The R package camel implementing the proposed method is available on the Comprehensive R Archive Network: http://cran.r-project.org/web/packages/camel/.

  2. Multivariate estimation of the limit of detection by orthogonal partial least squares in temperature-modulated MOX sensors.

    PubMed

    Burgués, Javier; Marco, Santiago

    2018-08-17

    Metal oxide semiconductor (MOX) sensors are usually temperature-modulated and calibrated with multivariate models such as partial least squares (PLS) to increase the inherently low selectivity of this technology. The multivariate sensor response patterns exhibit heteroscedastic and correlated noise, which suggests that maximum likelihood methods should outperform PLS. One contribution of this paper is the comparison between PLS and maximum likelihood principal components regression (MLPCR) in MOX sensors. PLS is often criticized for the lack of interpretability when the model complexity increases beyond the chemical rank of the problem. This happens in MOX sensors due to cross-sensitivities to interferences, such as temperature or humidity, and to non-linearity. Additionally, the estimation of fundamental figures of merit, such as the limit of detection (LOD), is still not standardized in multivariate models. Orthogonalization methods, such as orthogonal projection to latent structures (O-PLS), have been successfully applied in other fields to reduce the complexity of PLS models. In this work, we propose a LOD estimation method based on applying the well-accepted univariate LOD formulas to the scores of the first component of an orthogonal PLS model. The resulting LOD is compared to the multivariate LOD range derived from error propagation. The methodology is applied to data extracted from temperature-modulated MOX sensors (FIS SB-500-12 and Figaro TGS 3870-A04), aiming at the detection of low concentrations of carbon monoxide in the presence of uncontrolled humidity (chemical noise). We found that PLS models were simpler and more accurate than MLPCR models. Average LOD values of 0.79 ppm (FIS) and 1.06 ppm (Figaro) were found using the approach described in this paper. These values were contained within the LOD ranges obtained with the error-propagation approach. The mean LOD increased to 1.13 ppm (FIS) and 1.59 ppm (Figaro) when considering validation samples collected two weeks after calibration, which represents a 43% and 46% degradation, respectively. The orthogonal score plot was a very convenient tool for visualizing MOX sensor data and validating the LOD estimates. Copyright © 2018 Elsevier B.V. All rights reserved.
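    The proposed LOD estimate reduces to applying the familiar univariate formula to a pseudo-univariate signal. The sketch below illustrates the idea on synthetic data, using an ordinary least-squares calibration line in place of the first O-PLS component; the IUPAC-style factor 3.3 and all numbers are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration: CO concentration (ppm) vs. a pseudo-univariate
# response (standing in for scores on the first predictive O-PLS component)
conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
scores = 2.0 * conc + rng.normal(0.0, 0.4, conc.size)  # slope 2, noise sd 0.4

# Ordinary least-squares line through the calibration points
slope, intercept = np.polyfit(conc, scores, 1)
resid_sd = np.std(scores - (slope * conc + intercept), ddof=2)

# Univariate IUPAC-style estimate: LOD = 3.3 * s / slope
lod = 3.3 * resid_sd / slope
print(f"slope={slope:.2f}, residual sd={resid_sd:.2f}, LOD={lod:.2f} ppm")
```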

  3. Determination of thiamine HCl and pyridoxine HCl in pharmaceutical preparations using UV-visible spectrophotometry and genetic algorithm based multivariate calibration methods.

    PubMed

    Ozdemir, Durmus; Dinc, Erdal

    2004-07-01

    Simultaneous determination of binary mixtures of pyridoxine hydrochloride and thiamine hydrochloride in a vitamin combination using UV-visible spectrophotometry with classical least squares (CLS) and three newly developed genetic algorithm (GA) based multivariate calibration methods was demonstrated. The three genetic multivariate calibration methods are Genetic Classical Least Squares (GCLS), Genetic Inverse Least Squares (GILS) and Genetic Regression (GR). The sample data set contains the UV-visible spectra of 30 synthetic mixtures (8 to 40 microg/ml) of these vitamins and 10 tablets containing 250 mg of each vitamin. The spectra cover the range from 200 to 330 nm in 0.1 nm intervals. Several calibration models were built with the four methods for the two components. Overall, the standard error of calibration (SEC) and the standard error of prediction (SEP) for the synthetic data ranged from <0.01 to 0.43 microg/ml for all four methods. The SEP values for the tablets ranged from 2.91 to 11.51 mg/tablet. A comparison of the genetic algorithm selected wavelengths for each component using the GR method was also included.
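    A genetic wavelength-selection scheme of the GILS flavor can be sketched as follows. This is a minimal illustration on synthetic spectra, not the authors' implementation; in particular, the fitness here is the raw SEC, whereas practical variants typically score subsets on validation error to guard against overfitting.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_wl = 30, 60

# Synthetic "spectra": only wavelengths 10 and 40 carry the analyte signal
conc = rng.uniform(8, 40, n_samples)
X = rng.normal(0, 1, (n_samples, n_wl))
X[:, 10] += 0.5 * conc
X[:, 40] += 0.3 * conc

def sec(mask):
    """Standard error of calibration for an ILS fit on selected wavelengths."""
    if mask.sum() == 0:
        return np.inf
    A = np.column_stack([X[:, mask], np.ones(n_samples)])
    coef, *_ = np.linalg.lstsq(A, conc, rcond=None)
    return np.sqrt(np.mean((A @ coef - conc) ** 2))

# Tiny genetic algorithm: a binary chromosome encodes a wavelength subset
pop = rng.random((20, n_wl)) < 0.1
for gen in range(40):
    fit = np.array([sec(ind) for ind in pop])
    pop = pop[np.argsort(fit)]              # elitist: best individuals first
    for i in range(10, 20):                 # refill worst half by crossover
        a, b = pop[rng.integers(0, 10, 2)]
        cut = rng.integers(1, n_wl)
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_wl) < 0.02      # mutation
        pop[i] = child ^ flip
best = pop[np.argmin([sec(ind) for ind in pop])]
print("selected wavelengths:", np.flatnonzero(best), "SEC:", round(sec(best), 3))
```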

  4. A GPU-Based Implementation of the Firefly Algorithm for Variable Selection in Multivariate Calibration Problems

    PubMed Central

    de Paula, Lauro C. M.; Soares, Anderson S.; de Lima, Telma W.; Delbem, Alexandre C. B.; Coelho, Clarimar J.; Filho, Arlindo R. G.

    2014-01-01

    Several variable selection algorithms in multivariate calibration can be accelerated using Graphics Processing Units (GPU). Among these algorithms, the Firefly Algorithm (FA) is a recently proposed metaheuristic that may be used for variable selection. This paper presents a GPU-based FA (FA-MLR) with a multiobjective formulation for variable selection in multivariate calibration problems and compares it with some traditional sequential algorithms from the literature. The advantage of the proposed implementation is demonstrated in an example involving a relatively large number of variables. The results showed that FA-MLR, in comparison with the traditional algorithms, is a more suitable choice and a relevant contribution to the variable selection problem. Additionally, the results also demonstrated that FA-MLR executed on a GPU can be five times faster than its sequential implementation. PMID:25493625

  6. Multivariate calibration on NIR data: development of a model for the rapid evaluation of ethanol content in bakery products.

    PubMed

    Bello, Alessandra; Bianchi, Federica; Careri, Maria; Giannetto, Marco; Mori, Giovanni; Musci, Marilena

    2007-11-05

    A new NIR method based on multivariate calibration for the determination of ethanol in industrially packed wholemeal bread was developed and validated. GC-FID was used as the reference method for determining the actual ethanol concentration of different samples of wholemeal bread with known amounts of added ethanol, ranging from 0 to 3.5% (w/w). Stepwise discriminant analysis was carried out on the NIR dataset in order to reduce the number of original variables by selecting those able to discriminate between samples of different ethanol concentrations. With the selected variables, a multivariate calibration model was then obtained by multiple linear regression. The prediction power of the linear model was optimized by a new "leave one out" method, further reducing the number of original variables.
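    The "leave one out" validation mentioned above can be sketched as follows; the data, variable count and coefficients are synthetic stand-ins, not the paper's NIR dataset.

```python
import numpy as np

rng = np.random.default_rng(7)
n, k = 25, 3   # samples, selected NIR variables

# Synthetic selected-variable data: ethanol content 0-3.5 % (w/w)
ethanol = rng.uniform(0.0, 3.5, n)
X = rng.normal(0, 1, (n, k)) + np.outer(ethanol, [1.0, 0.6, -0.4])

def loo_rmse(X, y):
    """Leave-one-out cross-validation of a multiple linear regression:
    each sample is predicted by a model fitted on the other n-1 samples."""
    errs = []
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        A = np.column_stack([X[keep], np.ones(keep.sum())])
        coef, *_ = np.linalg.lstsq(A, y[keep], rcond=None)
        pred = np.append(X[i], 1.0) @ coef
        errs.append(pred - y[i])
    return float(np.sqrt(np.mean(np.square(errs))))

print(f"LOO RMSE: {loo_rmse(X, ethanol):.3f} % (w/w)")
```

    Running the same loop for each candidate variable subset and keeping the subset with the lowest LOO RMSE is one simple way such a selection can proceed.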

  7. Corrections to the MODIS Aqua Calibration Derived From MODIS Aqua Ocean Color Products

    NASA Technical Reports Server (NTRS)

    Meister, Gerhard; Franz, Bryan Alden

    2013-01-01

    Ocean color products, such as chlorophyll-a concentration, can be derived from the top-of-atmosphere radiances measured by imaging sensors on earth-orbiting satellites. There are currently three National Aeronautics and Space Administration sensors in orbit capable of providing ocean color products. One of these sensors is the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Aqua satellite, whose ocean color products are currently the most widely used of the three. A recent improvement to the MODIS calibration methodology used land targets to improve the calibration accuracy. This study evaluates the new calibration methodology and describes further calibration improvements built upon it by including ocean measurements in the form of globally and temporally averaged water-leaving reflectance measurements. The calibration improvements presented here mainly modify the calibration at the scan edges, taking advantage of the good performance of the land-target trending in the center of the scan.

  8. Raman water vapor lidar calibration

    NASA Astrophysics Data System (ADS)

    Landulfo, E.; Da Costa, R. F.; Torres, A. S.; Lopes, F. J. S.; Whiteman, D. N.; Venable, D. D.

    2009-09-01

    We present new results of a Raman LIDAR calibration methodology effort, with emphasis on the assessment of the cross-section ratio between water vapor and nitrogen through the use of a calibrated, NIST-traceable tungsten lamp. We give a step-by-step procedure for employing such equipment by means of a mapping/scanning procedure over the receiving optics of a water vapor Raman LIDAR. This methodology has been used independently at the Howard University Raman LIDAR and at the IPEN Raman LIDAR, which strongly supports its reproducibility and points toward an independent calibration methodology that can be carried out within an experimental routine.

  9. Coping with matrix effects in headspace solid phase microextraction gas chromatography using multivariate calibration strategies.

    PubMed

    Ferreira, Vicente; Herrero, Paula; Zapata, Julián; Escudero, Ana

    2015-08-14

    SPME is extremely sensitive to the experimental parameters affecting liquid-gas and gas-solid distribution coefficients. Our aims were to measure the weights of these factors and to design a multivariate strategy, based on the addition of a pool of internal standards, to minimize matrix effects. Synthetic but real-like wines containing selected analytes and variable amounts of ethanol, non-volatile constituents and major volatile compounds were prepared following a factorial design. The ANOVA study revealed that, even with a strong matrix dilution, matrix effects are important and additive with non-significant interaction effects, and that the presence of major volatile constituents is the dominant factor. A single internal standard provided a robust calibration for 15 out of 47 analytes. Two different multivariate calibration strategies based on Partial Least Squares Regression were then run in order to build calibration functions, based on 13 different internal standards, able to cope with matrix effects. The first is based on the calculation of Multivariate Internal Standards (MIS), linear combinations of the normalized signals of the 13 internal standards, which provide the expected area of a given unit of analyte present in each sample. The second strategy is a direct calibration relating concentration to the 13 relative areas measured in each sample for each analyte. Overall, 47 different compounds can be reliably quantified in a single, fully automated method with overall uncertainties better than 15%. Copyright © 2015 Elsevier B.V. All rights reserved.
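    The MIS idea, a linear combination of internal-standard signals predicting the analyte's response factor, can be sketched on synthetic data. The multiplicative matrix effect, the number of standards and the noise levels below are assumptions for illustration, not the authors' wine data.

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_is = 40, 13   # calibration samples, internal standards

# A multiplicative matrix effect m_i perturbs every peak area in sample i
matrix = rng.uniform(0.6, 1.4, n)
conc = rng.uniform(1, 10, n)
analyte_area = conc * matrix * (1 + rng.normal(0, 0.02, n))
is_area = np.outer(matrix, rng.uniform(0.5, 2.0, n_is)) \
          * (1 + rng.normal(0, 0.02, (n, n_is)))

# "Multivariate internal standard": the linear combination of the 13 IS
# areas that best predicts the analyte's response factor (area per unit)
A = np.column_stack([is_area, np.ones(n)])
w, *_ = np.linalg.lstsq(A, analyte_area / conc, rcond=None)
mis = A @ w                      # expected area of one unit of analyte
pred = analyte_area / mis        # matrix-corrected concentration estimate

rel_err = np.abs(pred - conc) / conc
print(f"median relative error: {100 * np.median(rel_err):.1f} %")
```

    Because the internal standards track the same matrix effect as the analyte, dividing by the predicted response factor removes most of the sample-to-sample bias.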

  10. Total anthocyanin content determination in intact açaí (Euterpe oleracea Mart.) and palmitero-juçara (Euterpe edulis Mart.) fruit using near infrared spectroscopy (NIR) and multivariate calibration.

    PubMed

    Inácio, Maria Raquel Cavalcanti; de Lima, Kássio Michell Gomes; Lopes, Valquiria Garcia; Pessoa, José Dalton Cruz; de Almeida Teixeira, Gustavo Henrique

    2013-02-15

    The aim of this study was to evaluate the potential of near-infrared reflectance spectroscopy (NIR) and multivariate calibration as a rapid method to determine anthocyanin content in intact fruit (açaí and palmitero-juçara). Several multivariate calibration techniques, including partial least squares (PLS), interval partial least squares, genetic algorithm, successive projections algorithm, and net analyte signal, were compared and validated by establishing figures of merit. Suitable results were obtained with the PLS model (four latent variables and 5-point smoothing), with a detection limit of 6.2 g kg(-1), limit of quantification of 20.7 g kg(-1), accuracy estimated as root mean square error of prediction of 4.8 g kg(-1), mean selectivity of 0.79 g kg(-1), sensitivity of 5.04×10(-3) g kg(-1), precision of 27.8 g kg(-1), and signal-to-noise ratio of 1.04×10(-3) g kg(-1). These results suggest NIR spectroscopy and multivariate calibration can be effectively used to determine anthocyanin content in intact açaí and palmitero-juçara fruit. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. Linking the Weather Generator with Regional Climate Model

    NASA Astrophysics Data System (ADS)

    Dubrovsky, Martin; Farda, Ales; Skalak, Petr; Huth, Radan

    2013-04-01

    One of the downscaling approaches that transform raw outputs from climate models (GCMs or RCMs) into data with a more realistic structure is based on linking a stochastic weather generator to the climate model output. The present contribution, in which the parametric daily surface weather generator (WG) M&Rfi is linked to the RCM output, follows two aims: (1) Validation of the new simulations of the present climate (1961-1990) made by the ALADIN-Climate Regional Climate Model at 25 km resolution. The WG parameters are derived from the RCM-simulated surface weather series and compared to those derived from weather series observed at 125 Czech meteorological stations. The set of WG parameters includes statistics of the surface temperature and precipitation series (including the probability of wet day occurrence). (2) Presenting a methodology for linking the WG with RCM output. This methodology, which is based on merging information from observations and the RCM, may be interpreted as a downscaling procedure whose product is a gridded WG capable of producing realistic synthetic multivariate weather series for weather-ungauged locations. In this procedure, the WG is first calibrated with RCM-simulated multivariate weather series, and the grid-specific WG parameters are then de-biased by spatially interpolated correction factors based on a comparison of WG parameters calibrated with gridded RCM weather series and the spatially scarcer observations. The quality of the weather series produced by the resultant gridded WG is assessed in terms of selected climatic characteristics (focusing on characteristics related to the variability and extremes of surface temperature and precipitation).
Acknowledgements: The present experiment is made within the frame of projects ALARO-Climate (project P209/11/2405 sponsored by the Czech Science Foundation), WG4VALUE (project LD12029 sponsored by the Ministry of Education, Youth and Sports of CR) and VALUE (COST ES 1102 action).

  12. Classical least squares multivariate spectral analysis

    DOEpatents

    Haaland, David M.

    2002-01-01

    An improved classical least squares (CLS) multivariate spectral analysis method that adds spectral shapes describing non-calibrated components and system effects (other than baseline corrections) present in the analyzed mixture to the prediction phase of the method. These improvements decrease or eliminate many of the restrictions of CLS-type methods and greatly extend their capabilities, accuracy, and precision. One new application of this prediction-augmented CLS (PACLS) method is the ability to accurately predict unknown sample concentrations when new, unmodeled spectral components are present in the unknown samples. Other applications of PACLS include the incorporation of spectrometer drift into the quantitative multivariate model and the maintenance of a calibration on a drifting spectrometer. Finally, the ability of PACLS to transfer a multivariate model between spectrometers is demonstrated.
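    The core of the PACLS idea can be illustrated in a few lines: calibrate CLS on the analyte alone, then augment the prediction basis with the spectral shape of an interferent that was absent from calibration. The Gaussian band shapes and concentrations below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
wl = np.linspace(0, 1, 100)

def band(center, width):
    return np.exp(-((wl - center) / width) ** 2)

k_analyte = band(0.40, 0.10)   # pure-component spectrum in the calibration
k_interf = band(0.55, 0.12)    # interferent shape, absent from calibration

# Calibration mixtures contain the analyte only
c_cal = rng.uniform(0.5, 2.0, (12, 1))
A_cal = c_cal @ k_analyte[None, :] + rng.normal(0, 0.002, (12, 100))
K, *_ = np.linalg.lstsq(c_cal, A_cal, rcond=None)   # CLS pure-spectrum estimate

# The unknown sample contains an interferent the model never saw
a_unk = 1.2 * k_analyte + 0.8 * k_interf

c_cls, *_ = np.linalg.lstsq(K.T, a_unk, rcond=None)        # plain CLS: biased
K_aug = np.vstack([K, k_interf[None, :]])                  # augment the basis
c_pacls, *_ = np.linalg.lstsq(K_aug.T, a_unk, rcond=None)  # PACLS-style

print(f"true 1.20, plain CLS {c_cls[0]:.2f}, augmented {c_pacls[0]:.2f}")
```

    Because the two bands overlap, plain CLS absorbs part of the interferent signal into the analyte estimate; adding the interferent shape to the prediction basis removes that bias without recalibrating.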

  13. Methodological challenges to multivariate syndromic surveillance: a case study using Swiss animal health data.

    PubMed

    Vial, Flavie; Wei, Wei; Held, Leonhard

    2016-12-20

    In an era of ubiquitous electronic collection of animal health data, multivariate surveillance systems (which concurrently monitor several data streams) should have a greater probability of detecting disease events than univariate systems. However, despite their limitations, univariate aberration detection algorithms are used in most active syndromic surveillance (SyS) systems because of their ease of application and interpretation. On the other hand, a stochastic modelling-based approach to multivariate surveillance offers more flexibility, allowing for the retention of historical outbreaks, for overdispersion and for non-stationarity. While such methods are not new, they have yet to be applied to animal health surveillance data. We applied an example of such a stochastic model, Held and colleagues' two-component model, to two multivariate animal health datasets from Switzerland. In our first application, multivariate time series of the number of laboratory test requests were derived from Swiss animal diagnostic laboratories. We compared the performance of the two-component model to parallel monitoring using an improved Farrington algorithm and found that both methods yield a satisfactorily low false alarm rate. Moreover, the calibration test of the two-component model on the one-step-ahead predictions proved satisfactory, making such an approach suitable for outbreak prediction. In our second application, the two-component model was applied to the multivariate time series of the number of cattle abortions and the number of test requests for bovine viral diarrhea (a disease that often results in abortions). We found a two-day lagged effect from the number of abortions to the number of test requests. We further compared the joint and univariate modelling of the laboratory test request time series. The joint modelling approach showed evidence of superiority in terms of forecasting abilities. Stochastic modelling approaches offer the potential to address more realistic surveillance scenarios through, for example, the inclusion of time-series-specific parameters or of covariates known to have an impact on syndrome counts. Nevertheless, many methodological challenges to multivariate surveillance of animal SyS data remain. Deciding on the amount of corroboration among data streams that is required to escalate into an alert is not a trivial task, given the sparse data on the events under consideration (e.g. disease outbreaks).

  14. Evaluation of in-line Raman data for end-point determination of a coating process: Comparison of Science-Based Calibration, PLS-regression and univariate data analysis.

    PubMed

    Barimani, Shirin; Kleinebudde, Peter

    2017-10-01

    A multivariate analysis method, Science-Based Calibration (SBC), was used for the first time for endpoint determination of a tablet coating process using Raman data. Two types of tablet cores, placebo and caffeine cores, received a coating suspension comprising a polyvinyl alcohol-polyethylene glycol graft copolymer and titanium dioxide up to a maximum coating thickness of 80 µm. Raman spectroscopy was used as the in-line PAT tool. The spectra were acquired every minute and correlated to the amount of applied aqueous coating suspension. SBC was compared to another well-known multivariate analysis method, Partial Least Squares regression (PLS), and a simpler approach, Univariate Data Analysis (UVDA). All developed calibration models had coefficient of determination values (R²) higher than 0.99. The coating endpoints could be predicted with root mean square errors of prediction (RMSEP) of less than 3.1% of the applied coating suspensions. Compared to PLS and UVDA, SBC proved to be an alternative multivariate calibration method with high predictive power. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Laser ablation molecular isotopic spectroscopy (LAMIS) towards the determination of multivariate LODs via PLS calibration model of 10B and 11B Boric acid mixtures

    NASA Astrophysics Data System (ADS)

    Harris, C. D.; Profeta, Luisa T. M.; Akpovo, Codjo A.; Johnson, Lewis; Stowe, Ashley C.

    2017-05-01

    A calibration model was created to illustrate the detection capabilities of laser ablation molecular isotopic spectroscopy (LAMIS) for discrimination in isotopic analysis. The sample set contained boric acid pellets that varied in their isotopic concentrations of 10B and 11B. Each sample was interrogated with a Q-switched Nd:YAG ablation laser operating at 532 nm. A minimum of four band heads of the B²Σ → X²Σ transitions of the β system were identified and verified against previous literature on BO molecular emission lines. Isotopic shifts were observed in the spectra for each transition and used as the predictors in the calibration model. The spectra, along with their respective 10B/11B isotopic ratios, were analyzed using Partial Least Squares Regression (PLSR). A novel IUPAC-based approach for determining a multivariate limit of detection (LOD) interval was used to predict the detection of the desired isotopic ratios. The predicted multivariate LOD depends on the variation of the instrumental signal and other components in the calibration model space.

  16. A Review of Calibration Transfer Practices and Instrument Differences in Spectroscopy.

    PubMed

    Workman, Jerome J

    2018-03-01

    Calibration transfer for use with spectroscopic instruments, particularly for near-infrared, infrared, and Raman analysis, has been the subject of multiple articles, research papers, book chapters, and technical reviews. A myriad of approaches has been published, and claims made, for resolving the problems associated with transferring calibrations; however, the capability of attaining identical results over time from two or more instruments using an identical calibration still eludes technologists. Calibration transfer, in a precise definition, refers to a series of analytical approaches or chemometric techniques used to attempt to apply a single spectral database, and the calibration model developed using that database, to two or more instruments, with statistically retained accuracy and precision. Ideally, one would develop a single calibration for any particular application, move it indiscriminately across instruments, and achieve identical analysis or prediction results. There are many technical aspects involved in such precision calibration transfer, related to measuring-instrument reproducibility and repeatability, the reference chemical values used for the calibration, the multivariate mathematics used for calibration, and sample presentation repeatability and reproducibility. Ideally, a multivariate model developed on a single instrument would provide a statistically identical analysis when used on other instruments following transfer. This paper reviews common calibration transfer techniques, mostly related to instrument differences, and the mathematics of the uncertainty between instruments when making spectroscopic measurements of identical samples. It does not specifically address calibration maintenance or reference laboratory differences.
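    One of the classical transfer techniques such reviews cover is direct standardization, in which a transformation matrix maps slave-instrument spectra onto the master's response space using transfer samples measured on both instruments. A minimal sketch with a simulated gain-and-shift instrument difference (all data synthetic):

```python
import numpy as np

rng = np.random.default_rng(11)
n_trans, n_wl = 40, 20

# "Master" spectra of the transfer samples (smooth-ish random curves); the
# "slave" instrument adds a gain difference and a one-channel shift
master = rng.normal(0, 1, (n_trans, n_wl)).cumsum(axis=1)
slave = 1.1 * np.roll(master, 1, axis=1) + rng.normal(0, 0.01, master.shape)

# Direct standardization: find F such that slave @ F ~= master
F, *_ = np.linalg.lstsq(slave, master, rcond=None)

# A spectrum measured on the slave can now be translated to the master
# space, so the master's calibration model applies without refitting
new_master = rng.normal(0, 1, n_wl).cumsum()
new_slave = 1.1 * np.roll(new_master, 1) + rng.normal(0, 0.01, n_wl)
restored = new_slave @ F

print("mean mapping error:",
      round(float(np.abs(restored - new_master).mean()), 3))
```

    In practice the transfer set is small and carefully chosen, and piecewise (windowed) variants are used to keep F well-conditioned; this sketch simply uses more transfer samples than channels.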

  17. Forensic discrimination of blue ballpoint pens on documents by laser ablation inductively coupled plasma mass spectrometry and multivariate analysis.

    PubMed

    Alamilla, Francisco; Calcerrada, Matías; García-Ruiz, Carmen; Torre, Mercedes

    2013-05-10

    The differentiation of blue ballpoint pen inks written on documents through an LA-ICP-MS methodology is proposed. Small portions of common office paper containing ink strokes from 21 blue pens of known origin were cut and measured without any sample preparation. In a first step, Mg, Ca and Sr were proposed as internal standards (ISs) and used to normalize elemental intensities and subtract background signals from the paper. Then, specific criteria were designed and employed to identify target elements (Li, V, Mn, Co, Ni, Cu, Zn, Zr, Sn, W and Pb), which proved independent of the chosen IS in 98% of the cases and allowed a qualitative clustering of the samples. In a second step, an elemental ratio (ink ratio) based on the previously identified targets was used to obtain mass-independent intensities and perform pairwise comparisons by means of multivariate statistical analyses (MANOVA, Tukey's HSD and Hotelling's T2). This treatment improved the discrimination power (DP) and provided objective results, achieving complete differentiation among different brands and partial differentiation among pen inks of the same brand. The designed data treatment, together with the use of multivariate statistical tools, represents an easy and useful approach for differentiating among blue ballpoint pen inks, with minimal sample destruction and without the need for methodological calibrations, making its use potentially advantageous from a forensic-practice standpoint. To test the procedure, it was applied to the analysis of real handwritten questioned contracts, previously studied by the Department of Forensic Document Exams of the Criminalistics Service of the Civil Guard (Spain). The results showed that all questioned ink entries clustered in the same group, distinct from the remaining inks on the documents. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  18. Multivariate calibration in Laser-Induced Breakdown Spectroscopy quantitative analysis: The dangers of a 'black box' approach and how to avoid them

    NASA Astrophysics Data System (ADS)

    Safi, A.; Campanella, B.; Grifoni, E.; Legnaioli, S.; Lorenzetti, G.; Pagnotta, S.; Poggialini, F.; Ripoll-Seguer, L.; Hidalgo, M.; Palleschi, V.

    2018-06-01

    The introduction of the multivariate calibration curve approach in Laser-Induced Breakdown Spectroscopy (LIBS) quantitative analysis has led to a general improvement of LIBS analytical performance, since a multivariate approach makes it possible to exploit the redundancy of elemental information that is typically present in a LIBS spectrum. Software packages implementing multivariate methods are available in the most widely used commercial and open-source analytical programs; in most cases, the multivariate algorithms are robust against noise and operate in unsupervised mode. The flip side of the availability and ease of use of such packages is the (perceived) difficulty in assessing the reliability of the results obtained, which often leads to the multivariate algorithms being treated as 'black boxes' whose inner mechanism is supposed to remain hidden from the user. In this paper, we discuss the dangers of a 'black box' approach in LIBS multivariate analysis and how to overcome them using the chemical-physical knowledge that is at the base of any LIBS quantitative analysis.

  19. Multivariate meta-analysis of individual participant data helped externally validate the performance and implementation of a prediction model.

    PubMed

    Snell, Kym I E; Hua, Harry; Debray, Thomas P A; Ensor, Joie; Look, Maxime P; Moons, Karel G M; Riley, Richard D

    2016-01-01

    Our aim was to improve meta-analysis methods for summarizing a prediction model's performance when individual participant data are available from multiple studies for external validation. We suggest multivariate meta-analysis for jointly synthesizing calibration and discrimination performance, while accounting for their correlation. The approach estimates a prediction model's average performance, the heterogeneity in performance across populations, and the probability of "good" performance in new populations. This allows different implementation strategies (e.g., recalibration) to be compared. Application is made to a diagnostic model for deep vein thrombosis (DVT) and a prognostic model for breast cancer mortality. In both examples, multivariate meta-analysis reveals that calibration performance is excellent on average but highly heterogeneous across populations unless the model's intercept (baseline hazard) is recalibrated. For the cancer model, the probability of "good" performance (defined by C statistic ≥0.7 and calibration slope between 0.9 and 1.1) in a new population was 0.67 with recalibration but 0.22 without recalibration. For the DVT model, even with recalibration, there was only a 0.03 probability of "good" performance. Multivariate meta-analysis can be used to externally validate a prediction model's calibration and discrimination performance across multiple populations and to evaluate different implementation strategies. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
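    Recalibration of the intercept ("recalibration-in-the-large") can be sketched as follows: shift the model's linear predictor until the mean predicted risk matches the observed event rate in the new population. The data and the 0.8 shift below are synthetic assumptions, not either of the paper's case studies.

```python
import math
import random

def expit(x):
    return 1 / (1 + math.exp(-x))

random.seed(2)
# Linear predictors from an existing model, applied in a new population
# whose baseline risk is higher than the development population's
lp = [random.gauss(0.0, 1.0) for _ in range(5000)]
y = [1 if random.random() < expit(x + 0.8) else 0 for x in lp]  # true shift 0.8

# Recalibration-in-the-large: choose delta so that the average predicted
# probability equals the observed event rate (solved here by bisection,
# since the mean prediction increases monotonically with delta)
def mean_pred(delta):
    return sum(expit(x + delta) for x in lp) / len(lp)

target = sum(y) / len(y)
lo, hi = -5.0, 5.0
for _ in range(60):
    mid = (lo + hi) / 2
    if mean_pred(mid) < target:
        lo = mid
    else:
        hi = mid
delta = (lo + hi) / 2
print(f"estimated intercept update: {delta:.2f} (true shift 0.8)")
```

    Re-estimating the calibration slope as well, rather than just the intercept, is the fuller "recalibration" strategy the abstract compares against.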

  20. Interference modelling, experimental design and pre-concentration steps in validation of the Fenton's reagent for pesticides determination.

    PubMed

    Ostra, Miren; Ubide, Carlos; Zuriarrain, Juan

    2007-02-12

    The determination of atrazine in real samples (commercial pesticide preparations and water matrices) shows how the Fenton's reagent can be used for analytical purposes when kinetic methodology and multivariate calibration methods are applied. Binary mixtures of atrazine-alachlor and atrazine-bentazone in pesticide preparations have also been resolved. The work shows how interferences and the matrix effect can be modelled. Experimental design was used to optimize the experimental conditions, including the effect of the solvent (methanol) used for the extraction of atrazine from the sample. The determination of pesticides in commercial preparations was accomplished without any pre-treatment of the sample apart from evaporation of the solvent; the calibration model was developed for concentration ranges between 0.46 and 11.6 x 10(-5) mol L(-1), with mean relative errors under 4%. Solid-phase extraction through C(18) disks was used for the pre-concentration of atrazine in water samples, and the concentration range for determination was established between approximately 4 and 115 microg L(-1). Satisfactory results for the recovery of atrazine were always obtained.

  1. An overview of the dynamic calibration of piezoelectric pressure transducers

    NASA Astrophysics Data System (ADS)

    Theodoro, F. R. F.; Reis, M. L. C. C.; d’ Souto, C.

    2018-03-01

    Dynamic calibration is a research area that is still under development and is of great interest to the aerospace and automotive industries. This study discusses some concepts regarding dynamic measurements of pressure quantities and presents an overview of the dynamic calibration of pressure transducers. Studies conducted by the Institute of Aeronautics and Space on piezoelectric pressure transducer calibration in a shock tube are presented. We employed the Guide to the Expression of Uncertainty in Measurement (GUM) and a Monte Carlo method in the methodology. The results show that both the device and the methodology employed are adequate for calibrating the piezoelectric sensor.
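    A GUM Supplement 1 style Monte Carlo propagation for a transducer sensitivity S = V/p might look like the sketch below; the pressure step, voltage and uncertainty values are invented for illustration and are not the Institute's shock-tube data.

```python
import random
import statistics

random.seed(0)

# Assumed (illustrative) inputs: step pressure p from the shock tube and
# transducer output voltage V, each with a standard uncertainty
p_mean, p_sd = 700.0, 7.0     # kPa
v_mean, v_sd = 3.50, 0.02     # V

# Monte Carlo propagation: sample the inputs, push each draw through the
# measurement model S = V / p, then summarize the output distribution
trials = 20_000
S = [random.gauss(v_mean, v_sd) / random.gauss(p_mean, p_sd)
     for _ in range(trials)]

S_hat = statistics.mean(S)
u_S = statistics.stdev(S)
print(f"sensitivity: {S_hat * 1e3:.3f} mV/kPa, u = {u_S * 1e3:.4f} mV/kPa")
```

    Coverage intervals can be read off the sampled distribution directly (e.g. its 2.5th and 97.5th percentiles), which is the main advantage of the Monte Carlo route over the analytic GUM propagation when the model is non-linear.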

  2. Determination of rice syrup adulterant concentration in honey using three-dimensional fluorescence spectra and multivariate calibrations

    NASA Astrophysics Data System (ADS)

    Chen, Quansheng; Qi, Shuai; Li, Huanhuan; Han, Xiaoyan; Ouyang, Qin; Zhao, Jiewen

    2014-10-01

    To rapidly and efficiently detect the presence of adulterants in honey, a three-dimensional fluorescence spectroscopy (3DFS) technique was employed with the help of multivariate calibration. The 3D fluorescence spectral data were compressed using characteristic extraction and principal component analysis (PCA). Then, partial least squares (PLS) and back-propagation neural network (BP-ANN) algorithms were used for modeling. The models were optimized by cross-validation, and their performance was evaluated according to the root mean square error of prediction (RMSEP) and the correlation coefficient (R) in the prediction set. The results showed that the BP-ANN model was superior to the PLS model, and the optimum prediction results of the mixed-group (sunflower + longan + buckwheat + rape) model were as follows: RMSEP = 0.0235 and R = 0.9787 in the prediction set. The study demonstrated that 3D fluorescence spectroscopy combined with multivariate calibration has high potential for rapid, nondestructive, and accurate quantitative analysis of honey adulteration.
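    The two figures of merit used above, RMSEP and R, are straightforward to compute. A minimal sketch with hypothetical reference and predicted adulterant levels (not the study's data):

```python
import numpy as np

# Root mean square error of prediction (RMSEP) and correlation
# coefficient (R) between reference and predicted values.
y_ref  = np.array([0.05, 0.10, 0.20, 0.30, 0.40])   # reference fractions
y_pred = np.array([0.07, 0.09, 0.22, 0.28, 0.41])   # hypothetical predictions

rmsep = np.sqrt(np.mean((y_pred - y_ref) ** 2))
r = np.corrcoef(y_ref, y_pred)[0, 1]
print(f"RMSEP = {rmsep:.4f}, R = {r:.4f}")
```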

  3. Development of a non-destructive method for determining protein nitrogen in a yellow fever vaccine by near infrared spectroscopy and multivariate calibration.

    PubMed

    Dabkiewicz, Vanessa Emídio; de Mello Pereira Abrantes, Shirley; Cassella, Ricardo Jorgensen

    2018-08-05

    Near-infrared (NIR) spectroscopy with diffuse reflectance associated with multivariate calibration has as its main advantage the replacement of the physical separation of interferents by the mathematical separation of their signals, performed rapidly and with no reagent consumption, chemical waste production or sample manipulation. Seeking to optimize quality control analyses, this spectroscopic analytical method was shown to be a viable alternative to the classical Kjeldahl method for the determination of protein nitrogen in yellow fever vaccine. The most suitable multivariate calibration was achieved by the partial least squares (PLS) method with multiplicative signal correction (MSC) treatment and mean centering (MC) of the data, using a minimal number of latent variables (LV), equal to 1, with the lowest root mean squared prediction error (0.00330) associated with the highest percentage (91%) of samples. Accuracy, expressed as recovery, ranged from 95 to 105% in the 4000-5184 cm⁻¹ region. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Augmenting understanding of the relationship between situation awareness and confidence using calibration analysis.

    PubMed

    Lichacz, Frederick M J

    2008-10-01

    The present study represents a preliminary examination of the relationship between situation awareness (SA) and confidence within a distributed information-sharing environment using the calibration methodology. The calibration methodology uses the indices of calibration, resolution and over/under-confidence to examine the relationship between the accuracy of responses and the degree of confidence one has in those responses, which leads to a measure of an operator's meta-SA. The results revealed that, although the participants were slightly overconfident in their responses, overall they demonstrated good meta-SA: their subjective probability judgements corresponded to their pattern of SA response accuracy. It is concluded that calibration analysis represents a better methodology for expanding our understanding of the relationship between SA and confidence, and ultimately of how this relationship can affect decision-making and performance in applied settings, than examining SA measures alone.
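    The over/under-confidence index referred to above is conventionally the mean subjective confidence minus the mean response accuracy; positive values indicate overconfidence. A sketch with invented numbers:

```python
import numpy as np

# Over/under-confidence: mean probability judgement minus proportion correct.
confidence = np.array([0.9, 0.8, 0.7, 0.95, 0.85])  # subjective judgements
correct    = np.array([1,   1,   0,   1,    0])      # response accuracy (0/1)

over_under = confidence.mean() - correct.mean()
print(f"over/under-confidence = {over_under:+.2f}")  # > 0 means overconfident
```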

  5. Calibration methodology for proportional counters applied to yield measurements of a neutron burst.

    PubMed

    Tarifeño-Saldivia, Ariel; Mayer, Roberto E; Pavez, Cristian; Soto, Leopoldo

    2014-01-01

    This paper introduces a methodology for measuring the yield of a neutron burst using neutron proportional counters. The methodology is to be applied when single neutron events cannot be resolved in time by standard nuclear electronics, or when a continuous current cannot be measured at the output of the counter. It is based on calibration of the counter in pulse mode and the use of a statistical model to estimate the number of detected events from the charge accumulated during detection of the neutron burst. The model is developed and presented in full detail. The implementation of the methodology for measuring fast neutron yields from plasma focus experiments using a moderated proportional counter is discussed, and an experimental verification of its accuracy is presented. Using this methodology, an improvement of more than one order of magnitude in the accuracy of the detection system is obtained with respect to previous calibration methods.

  6. Development and validation of multivariate calibration methods for simultaneous estimation of Paracetamol, Enalapril maleate and hydrochlorothiazide in pharmaceutical dosage form

    NASA Astrophysics Data System (ADS)

    Singh, Veena D.; Daharwal, Sanjay J.

    2017-01-01

    Three multivariate calibration spectrophotometric methods were developed for the simultaneous estimation of Paracetamol (PARA), Enalapril maleate (ENM) and Hydrochlorothiazide (HCTZ) in tablet dosage form: multi-linear regression calibration (MLRC), trilinear regression calibration (TLRC) and classical least squares (CLS). The selectivity of the proposed methods was studied by analyzing a laboratory-prepared ternary mixture, and the methods were successfully applied to the combined dosage form. The proposed methods were validated as per ICH guidelines, and good accuracy, precision and specificity were confirmed within the concentration ranges of 5-35 μg mL⁻¹, 5-40 μg mL⁻¹ and 5-40 μg mL⁻¹ for PARA, HCTZ and ENM, respectively. The results were statistically compared with a reported HPLC method. Thus, the proposed methods can be effectively used for routine quality control analysis of these drugs in commercial tablet dosage form.

  7. Development of an Expert Judgement Elicitation and Calibration Methodology for Risk Analysis in Conceptual Vehicle Design

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Keating, Charles; Conway, Bruce; Chytka, Trina

    2004-01-01

    A comprehensive expert-judgment elicitation methodology to quantify input-parameter uncertainty and analysis-tool uncertainty in conceptual launch vehicle design analysis has been developed. The ten-phase methodology seeks to obtain expert opinion for quantifying uncertainties as probability distributions so that multidisciplinary risk analysis studies can be performed. The calibration and aggregation techniques presented as part of the methodology aim at improving individual expert estimates and provide an approach to aggregating multiple expert judgments into a single probability distribution. The purpose of this report is to document the methodology's development and its validation through application to a reference aerospace vehicle. A detailed summary of the application exercise, including calibration and aggregation results, is presented. A discussion of possible future steps in this research area is given.

  8. Firefly algorithm versus genetic algorithm as powerful variable selection tools and their effect on different multivariate calibration models in spectroscopy: A comparative study

    NASA Astrophysics Data System (ADS)

    Attia, Khalid A. M.; Nassar, Mohammed W. I.; El-Zeiny, Mohamed B.; Serag, Ahmed

    2017-01-01

    For the first time, a new variable selection method based on swarm intelligence, namely the firefly algorithm, is coupled with three different multivariate calibration models (concentration residual augmented classical least squares, artificial neural network and support vector regression) applied to UV spectral data. A comparative study between the firefly algorithm and the well-known genetic algorithm was carried out, and it revealed the superiority of the new algorithm over the genetic algorithm. Moreover, different statistical tests were performed and no significant differences were found between the models regarding their predictive abilities. This confirms that simpler and faster models were obtained without any deterioration in the quality of the calibration.
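    For readers unfamiliar with the firefly algorithm, its core movement rule (attraction toward brighter fireflies, decaying with distance, plus a shrinking random step) can be sketched on a toy continuous objective; in variable selection the same rule is applied to variable-inclusion vectors. All parameters and data here are illustrative:

```python
import numpy as np

# Firefly algorithm sketch on the 2-D sphere function (minimization).
# Brightness is evaluated once per sweep for simplicity.
rng = np.random.default_rng(3)

def objective(x):
    return np.sum(x ** 2, axis=-1)

n, dim, iters = 15, 2, 60
beta0, gamma, alpha = 1.0, 1.0, 0.05       # attractiveness, absorption, step
X = rng.uniform(-2, 2, size=(n, dim))      # initial firefly positions

for _ in range(iters):
    cost = objective(X)
    for i in range(n):
        for j in range(n):
            if cost[j] < cost[i]:          # j is "brighter": move i toward j
                r2 = np.sum((X[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                X[i] += beta * (X[j] - X[i]) + alpha * rng.normal(size=dim)
    alpha *= 0.95                          # cool the random step

best = X[np.argmin(objective(X))]
print("best point:", best, "cost:", objective(best))
```

The brightest firefly never moves in a sweep, so the best cost found is non-increasing; the decaying random step trades exploration for refinement.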

  9. Teaching Camera Calibration by a Constructivist Methodology

    ERIC Educational Resources Information Center

    Samper, D.; Santolaria, J.; Pastor, J. J.; Aguilar, J. J.

    2010-01-01

    This article describes the Metrovisionlab simulation software and practical sessions designed to teach the most important machine vision camera calibration aspects in courses for senior undergraduate students. By following a constructivist methodology, having received introductory theoretical classes, students use the Metrovisionlab application to…

  10. Laboratory calibration of pyrgeometers with known spectral responsivities.

    PubMed

    Gröbner, Julian; Los, Alexander

    2007-10-20

    A methodology is presented to calibrate pyrgeometers measuring atmospheric long-wave radiation when their spectral dome transmission is known. The new calibration procedure is based on a black-body cavity to retrieve the sensitivity of the pyrgeometer, combined with calculated atmospheric long-wave spectra to determine a correction function, dependent on the integrated atmospheric water vapor, that converts Planck radiation spectra to atmospheric long-wave spectra. The methodology was validated with two custom CG4 pyrgeometers with known dome transmissions by comparison to the World Infrared Standard Group of Pyrgeometers (WISG) at the World Radiation Center's Infrared Radiometry Section. The responses retrieved using the new laboratory calibration agree within 1% with those determined by comparison to the WISG, which is well within the uncertainties of both methodologies.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jornet, N; Carrasco de Fez, P; Jordi, O

    Purpose: To evaluate the accuracy of total scatter factor (Sc,p) determination for small fields using a commercial plastic scintillator detector (PSD). The manufacturer's spectral discrimination method for subtracting Cerenkov light from the signal is discussed. Methods: Sc,p for field sizes ranging from 0.5 to 10 cm was measured using the Exradin PSD (Standard Imaging) connected to a two-channel electrometer that records the signal in two different spectral regions in order to subtract the Cerenkov contribution from the PSD signal. A PinPoint 31006 ionisation chamber (PTW) and a non-shielded semiconductor detector EFD (Scanditronix) were used for comparison. Measurements were performed in a 6 MV X-ray beam. The Sc,p were measured at 10 cm depth in water at SSD = 100 cm and normalized to a 10 × 10 cm² field size at the isocenter. All detectors were placed with their symmetry axis parallel to the beam axis. We followed the manufacturer's recommended calibration methodology to subtract the Cerenkov contribution to the signal, as well as a modified method using smaller field sizes, and compared the Sc,p obtained with both calibration methodologies. Results: Sc,p measured with the semiconductor and PinPoint detectors agreed within 1.5% for field sizes between 10 × 10 and 1 × 1 cm². Sc,p measured with the PSD using the manufacturer's calibration methodology were systematically 4% higher than those measured with the semiconductor detector for field sizes smaller than 5 × 5 cm². Using a modified calibration methodology for small fields, and keeping the manufacturer's calibration methodology for fields larger than 5 × 5 cm², Sc,p matched the semiconductor results within 2% for field sizes larger than 1.5 cm. Conclusion: The calibration methodology proposed by the manufacturer is not appropriate for dose measurements in small fields; the calibration parameters are not independent of the incident radiation spectrum for this PSD.
This work was partially financed by a 2012 grant of the Barcelona board of the AECC.
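    One common way such two-channel Cerenkov subtraction is formulated: each channel reading is a linear mix of scintillation and Cerenkov light, so dose can be written as D = g·ch1 − h·ch2, with g and h fixed by two calibration irradiations of known dose and different Cerenkov fractions. A schematic sketch with invented numbers (not taken from this record):

```python
import numpy as np

# Two calibration shots with known dose and raw channel readings;
# the second setup has a larger Cerenkov fraction. Illustrative values.
D_cal = np.array([1.00, 1.00])           # known doses, arbitrary units
ch1   = np.array([10.0, 12.0])           # channel-1 readings
ch2   = np.array([2.0, 4.0])             # channel-2 readings

# Solve D = g*ch1 - h*ch2 for the calibration coefficients g, h.
g, h = np.linalg.solve(np.column_stack([ch1, -ch2]), D_cal)

# Apply the calibration to a new measurement:
dose = g * 11.0 - h * 3.0
print(f"g = {g:.3f}, h = {h:.3f}, dose = {dose:.3f}")
```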

  12. Methodology for the Calibration of the Data Acquisition with a Six-Degree-of-Freedom Acceleration Measurement Device

    DOT National Transportation Integrated Search

    1989-06-01

    This report describes a methodology for calibrating and gathering data with a six-degree-of-freedom acceleration measurement device that is intended to measure head acceleration of anthropomorphic dummies and human volunteers in automotive crash test...

  13. Clinical results from a noninvasive blood glucose monitor

    NASA Astrophysics Data System (ADS)

    Blank, Thomas B.; Ruchti, Timothy L.; Lorenz, Alex D.; Monfre, Stephen L.; Makarewicz, M. R.; Mattu, Mutua; Hazen, Kevin

    2002-05-01

    Non-invasive blood glucose monitoring has long been proposed as a means of advancing the management of diabetes through increased measurement and control. The use of a near-infrared (NIR) spectroscopy-based methodology for noninvasive monitoring has been pursued by a number of groups. The accuracy of the NIR measurement technology is limited by challenges related to the instrumentation, the heterogeneity and time-variant nature of skin tissue, and the complexity of the calibration methodology. In this work, we discuss results from a clinical study that evaluated individual calibrations for each subject based on a series of controlled calibration visits. While customizing the calibrations to individuals was intended to reduce model complexity, the extensive data requirements for each individual calibration were difficult to meet and required several days of measurement. Through the careful selection of a small subset of the samples collected from the 138 participants of a previous study, we have developed a methodology for applying a single standard calibration to multiple persons. The standard calibrations have been applied to a plurality of individuals and shown to be persistent over periods greater than 24 weeks.

  14. Chemometrics resolution and quantification power evaluation: Application on pharmaceutical quaternary mixture of Paracetamol, Guaifenesin, Phenylephrine and p-aminophenol

    NASA Astrophysics Data System (ADS)

    Yehia, Ali M.; Mohamed, Heba M.

    2016-01-01

    Three advanced chemometric-assisted spectrophotometric methods, namely Concentration Residuals Augmented Classical Least Squares (CRACLS), Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) and Principal Component Analysis-Artificial Neural Networks (PCA-ANN), were developed, validated and benchmarked against PLS calibration to resolve the severely overlapped spectra and simultaneously determine Paracetamol (PAR), Guaifenesin (GUA) and Phenylephrine (PHE) in their ternary mixture and in the presence of p-aminophenol (AP), the main degradation product and synthesis impurity of Paracetamol. The analytical performance of the proposed methods was described by percentage recoveries, root mean square error of calibration and standard error of prediction. The four multivariate calibration methods could be used directly, without any preliminary separation step, and were successfully applied to pharmaceutical formulation analysis, showing no interference from excipients.

  15. Firefly algorithm versus genetic algorithm as powerful variable selection tools and their effect on different multivariate calibration models in spectroscopy: A comparative study.

    PubMed

    Attia, Khalid A M; Nassar, Mohammed W I; El-Zeiny, Mohamed B; Serag, Ahmed

    2017-01-05

    For the first time, a new variable selection method based on swarm intelligence, namely the firefly algorithm, is coupled with three different multivariate calibration models (concentration residual augmented classical least squares, artificial neural network and support vector regression) applied to UV spectral data. A comparative study between the firefly algorithm and the well-known genetic algorithm was carried out, and it revealed the superiority of the new algorithm over the genetic algorithm. Moreover, different statistical tests were performed and no significant differences were found between the models regarding their predictive abilities. This confirms that simpler and faster models were obtained without any deterioration in the quality of the calibration. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Accuracy enhancement of a multivariate calibration for lead determination in soils by laser induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Zaytsev, Sergey M.; Krylov, Ivan N.; Popov, Andrey M.; Zorov, Nikita B.; Labutin, Timur A.

    2018-02-01

    We have investigated matrix effects and spectral interferences using the determination of lead in different types of soils by laser-induced breakdown spectroscopy (LIBS) as an example. A comparison between the analytical performances of univariate and multivariate calibrations using different laser wavelengths for ablation (532, 355 and 266 nm) is reported. A set of 17 soil samples (Ca-rich, Fe-rich, lean soils, etc.; 8.5-280 ppm of Pb) was used to construct the calibration models. Spectral interferences from major components (Ca, Fe, Ti, Mg) and trace components (Mn, Nb, Zr) were estimated by spectral modeling, and they explain the significant differences between the univariate calibration models obtained separately for three soil types (black, red, gray). Use of the 3rd harmonic of the Nd:YAG laser in combination with a multivariate calibration model based on PCR with 3 principal components provided the best analytical results: the RMSEC was lowered to 8 ppm. A substantial improvement in relative uncertainty (to 5-10%) compared with univariate calibration was observed at Pb concentrations above 50 ppm, while accuracy problems remain for some samples with Pb concentrations at the 20 ppm level. We have also discussed a few possible ways to estimate the LOD without a blank sample; the most rigorous criterion yielded an LOD of 13 ppm for Pb in soils. Finally, good agreement was demonstrated between the lead contents predicted by LIBS (46 ± 5 ppm) and XRF (42.1 ± 3.3 ppm) in an unknown soil sample from the Lomonosov Moscow State University area.
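    Principal component regression of the kind named above (scores on a few PCs, then least squares on the scores) can be sketched with synthetic stand-in spectra; the real work used LIBS spectra of 17 soils, and everything below is illustrative:

```python
import numpy as np

# PCR sketch: project mean-centered "spectra" onto the first k principal
# components, then fit ordinary least squares on the scores.
rng = np.random.default_rng(1)
n, p, k = 17, 200, 3                         # samples, channels, components
t = rng.normal(size=(n, 1))                  # latent concentration-like factor
loading = rng.normal(size=(1, p))
X = t @ loading * 5 + rng.normal(scale=0.1, size=(n, p))  # synthetic spectra
y = 2.0 * t.ravel() + rng.normal(scale=0.01, size=n)      # synthetic analyte

Xc = X - X.mean(axis=0)                      # mean-center the spectra
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
T = Xc @ Vt[:k].T                            # scores on the first k PCs
A = np.column_stack([np.ones(n), T])
coef, *_ = np.linalg.lstsq(A, y, rcond=None) # OLS on the scores

rmsec = np.sqrt(np.mean((A @ coef - y) ** 2))
print(f"RMSEC on synthetic data: {rmsec:.4f}")
```

Truncating to k components regularizes the regression: noise channels orthogonal to the dominant variance directions never enter the fit.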

  17. Improved intracellular PHA determinations with novel spectrophotometric quantification methodologies based on Sudan black dye.

    PubMed

    Porras, Mauricio A; Villar, Marcelo A; Cubitto, María A

    2018-05-01

    The presence of intracellular polyhydroxyalkanoates (PHAs) is usually studied using a Sudan black dye solution (SB). In a previous work it was shown that PHA can be quantified directly from the absorbance of SB fixed by PHA granules in wet cell samples. In the present paper, the optimum SB amount and the optimum conditions for SB assays were determined following an experimental design based on hybrid response surface methodology and a desirability function. In addition, a new methodology was developed showing that the amount of SB fixed by PHA granules can also be determined indirectly through the absorbance of the supernatant obtained from the stained cell samples. This alternative methodology allows a faster determination of the PHA content (23 min versus 42 min for the direct determination) and can be undertaken with basic laboratory equipment and reagents. The correlation between the PHA content of wet cell samples and the spectra of the SB-stained supernatant was determined by means of multivariate and linear regression analysis. The best calibration fit (R² = 0.91, RSE = 1.56%) and the good PHA prediction obtained (RSE = 1.81%) show that the proposed methodology constitutes a reasonably precise way to determine PHA content, and could thus anticipate the probable results of the direct determination mentioned above. Compared with the techniques most used in the scientific literature, the combined implementation of these two methodologies appears to be among the most economical and environmentally friendly, suitable for rapid monitoring of intracellular PHA content. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Multivariate analysis of remote LIBS spectra using partial least squares, principal component analysis, and related techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clegg, Samuel M; Barefield, James E; Wiens, Roger C

    2008-01-01

    Quantitative analysis with LIBS traditionally employs calibration curves that are complicated by chemical matrix effects. These matrix effects influence the LIBS plasma and the ratio of elemental composition to elemental emission-line intensity. Consequently, LIBS calibration typically requires a priori knowledge of the unknown, so that a series of calibration standards similar to the unknown can be employed. In this paper, three Multivariate Analysis (MVA) techniques are employed to analyze the LIBS spectra of 18 disparate igneous and highly metamorphosed rock samples. Partial Least Squares (PLS) analysis is used to generate a calibration model from which unknown samples can be analyzed. Principal Component Analysis (PCA) and Soft Independent Modeling of Class Analogy (SIMCA) are employed to generate a model and predict the rock type of the samples. These MVA techniques appear to exploit the matrix effects associated with the chemistries of these 18 samples.

  19. Radiometer calibration methods and resulting irradiance differences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    Accurate solar radiation measured by radiometers depends on instrument performance specifications, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of different calibration methodologies and the resulting differences provided by radiometric calibration service providers such as the National Renewable Energy Laboratory (NREL) and manufacturers of radiometers. Some of these methods calibrate radiometers indoors and some outdoors. To establish or understand the differences in calibration methodologies, we processed and analyzed field-measured data from radiometers deployed for 10 months at NREL's Solar Radiation Research Laboratory. These different methods of calibration resulted in a difference of ±1% to ±2% in solar irradiance measurements. Analyzing these differences will ultimately assist in determining the uncertainties of the field radiometer data and will help develop a consensus on a standard for calibration. Further advancing procedures for precisely calibrating radiometers to world reference standards that reduce measurement uncertainties will help the accurate prediction of the output of planned solar conversion projects and improve the bankability of financing solar projects.

  20. The Impact of Indoor and Outdoor Radiometer Calibration on Solar Measurements: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    2016-07-01

    Accurate solar radiation data sets are critical to reducing the expenses associated with mitigating performance risk for solar energy conversion systems, and they help utility planners and grid system operators understand the impacts of solar resource variability. The accuracy of solar radiation measured by radiometers depends on the instrument performance specification, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of calibration methodologies and the resulting calibration responsivities provided by radiometric calibration service providers such as the National Renewable Energy Laboratory (NREL) and manufacturers of radiometers. Some of these radiometers are calibrated indoors, and some are calibrated outdoors. To establish or understand the differences in calibration methodology, we processed and analyzed field-measured data from these radiometers. This study investigates calibration responsivities provided by NREL's broadband outdoor radiometer calibration (BORCAL) and a few prominent manufacturers. The reference radiometer calibrations are traceable to the World Radiometric Reference. These different methods of calibration demonstrated 1% to 2% differences in solar irradiance measurement. Analyzing these values will ultimately assist in determining the uncertainties of the radiometer data and will assist in developing consensus on a standard for calibration.

  1. Impact of influent data frequency and model structure on the quality of WWTP model calibration and uncertainty.

    PubMed

    Cierkens, Katrijn; Plano, Salvatore; Benedetti, Lorenzo; Weijers, Stefan; de Jonge, Jarno; Nopens, Ingmar

    2012-01-01

    Application of activated sludge models (ASMs) to full-scale wastewater treatment plants (WWTPs) is still hampered by the problem of calibrating these over-parameterised models. This requires either expert knowledge or global methods that explore a large parameter space. However, a better balance in structure between the submodels (ASM, hydraulic, aeration, etc.) and improved quality of influent data result in much smaller calibration efforts. In this contribution, a methodology is proposed that links data frequency and model structure to calibration quality and output uncertainty. It is composed of defining the model structure, the input data, an automated calibration, confidence interval computation and uncertainty propagation to the model output. Apart from the last step, the methodology is applied to an existing WWTP using three models differing only in the aeration submodel. A sensitivity analysis was performed on all models, allowing the most important parameters to be ranked and selected for the subsequent calibration step. The aeration submodel proved very important for obtaining good NH₄ predictions. Finally, the impact of data frequency was explored: lowering the frequency resulted in larger deviations of parameter estimates from their default values and larger confidence intervals. Autocorrelation due to high-frequency calibration data has the opposite effect on the confidence intervals. The proposed methodology opens the door to facilitating and improving calibration efforts and to designing measurement campaigns.

  2. Separation of motor oils, oily wastes and hydrocarbons from contaminated water by sorption on chrome shavings.

    PubMed

    Gammoun, A; Tahiri, S; Albizane, A; Azzi, M; Moros, J; Garrigues, S; de la Guardia, M

    2007-06-25

    In this paper, the ability of chrome shavings to remove motor oils, oily wastes and hydrocarbons from water has been studied. To determine the amount of hydrocarbons sorbed on the tanned wastes, an FT-NIR methodology was used, with multivariate calibration based on partial least squares (PLS) for data treatment. The low-density, porous tanned-waste granules float on the surface of the water and remove hydrocarbons and oil films. Waste fibers from the tannery industry have a high sorption capacity: these tanned solid wastes are capable of absorbing many times their weight in oil or hydrocarbons (6.5-7.6 g of oil and 6.3 g of hydrocarbons per gram of chrome shavings). The removal efficiency of the pollutants from water is complete, and the sorption of pollutants is a quasi-instantaneous process.

  3. Comparison of calibration strategies for optical 3D scanners based on structured light projection using a new evaluation methodology

    NASA Astrophysics Data System (ADS)

    Bräuer-Burchardt, Christian; Ölsner, Sandy; Kühmstedt, Peter; Notni, Gunther

    2017-06-01

    In this paper a new evaluation strategy for optical 3D scanners based on structured light projection is introduced. It can be used to characterize the expected measurement accuracy. Compared with the procedure proposed in the VDI/VDE guidelines for area-scanning optical 3D measurement systems, it requires less effort and provides greater impartiality. The methodology is suitable for evaluating sets of calibration parameters, which largely determine the quality of the measurement result. It was applied to several calibrations of a mobile stereo-camera-based optical 3D scanner. The calibrations followed different strategies regarding calibration bodies and the arrangement of the observed scene. The results obtained with the different calibration strategies are discussed and suggestions concerning future work in this area are given.

  4. Partial Least Squares Calibration Modeling Towards the Multivariate Limit of Detection for Enriched Isotopic Mixtures via Laser Ablation Molecular Isotopic Spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, Candace; Profeta, Luisa; Akpovo, Codjo

    The pseudo-univariate limit of detection was calculated for comparison with the multivariate interval. Compared with the pseudo-univariate LOD, the multivariate LOD accounts for additional factors (i.e., signal uncertainties) and reveals the value of building models that use not only the analyte's emission line but its entire molecular spectrum.

  5. A graphical method to evaluate spectral preprocessing in multivariate regression calibrations: example with Savitzky-Golay filters and partial least squares regression.

    PubMed

    Delwiche, Stephen R; Reeves, James B

    2010-01-01

    In multivariate regression analysis of spectroscopy data, spectral preprocessing is often performed to reduce unwanted background information (offsets, sloped baselines) or accentuate absorption features in intrinsically overlapping bands. These procedures, also known as pretreatments, are commonly smoothing operations or derivatives. While such operations are often useful in reducing the number of latent variables of the actual decomposition and lowering residual error, they also run the risk of misleading the practitioner into accepting calibration equations that are poorly adapted to samples outside of the calibration. The current study developed a graphical method to examine this effect on partial least squares (PLS) regression calibrations of near-infrared (NIR) reflection spectra of ground wheat meal with two analytes, protein content and sodium dodecyl sulfate sedimentation (SDS) volume (an indicator of the quantity of the gluten proteins that contribute to strong doughs). These two properties were chosen because of their differing abilities to be modeled by NIR spectroscopy: excellent for protein content, fair for SDS sedimentation volume. To further demonstrate the potential pitfalls of preprocessing, an artificial component, a randomly generated value, was included in PLS regression trials. Savitzky-Golay (digital filter) smoothing, first-derivative, and second-derivative preprocess functions (5 to 25 centrally symmetric convolution points, derived from quadratic polynomials) were applied to PLS calibrations of 1 to 15 factors. The results demonstrated the danger of an over reliance on preprocessing when (1) the number of samples used in a multivariate calibration is low (<50), (2) the spectral response of the analyte is weak, and (3) the goodness of the calibration is based on the coefficient of determination (R(2)) rather than a term based on residual error. 
The graphical method has application to the evaluation of other preprocess functions and various types of spectroscopy data.
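A minimal illustration of the smoothing step evaluated above (not code from the paper): a 5-point quadratic Savitzky-Golay pass using the textbook convolution weights (-3, 12, 17, 12, -3)/35, with endpoints left unsmoothed for simplicity.

```python
def savgol_smooth_5pt(y):
    """Savitzky-Golay smoothing with a 5-point quadratic kernel.

    The weights (-3, 12, 17, 12, -3)/35 are the standard centrally
    symmetric coefficients for a quadratic fit; the two samples at
    each end are left unsmoothed for simplicity.
    """
    w = [-3.0, 12.0, 17.0, 12.0, -3.0]
    out = list(y)
    for i in range(2, len(y) - 2):
        out[i] = sum(w[k] * y[i - 2 + k] for k in range(5)) / 35.0
    return out
```

A quick sanity check is that a quadratic signal passes through the filter unchanged; the paper's 5- to 25-point filters and the PLS step itself would normally be run with a chemometrics toolbox.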

  6. Simultaneous Determination of Metamizole, Thiamin and Pyridoxin Using UV-Spectroscopy in Combination with Multivariate Calibration

    PubMed Central

    Chotimah, Chusnul; Sudjadi; Riyanto, Sugeng; Rohman, Abdul

    2015-01-01

    Purpose: Analysis of drugs in multicomponent systems is officially carried out using chromatographic techniques; however, these techniques are laborious and involve sophisticated instrumentation. Therefore, UV-VIS spectrophotometry coupled with multivariate calibration by partial least squares (PLS) was developed for quantitative analysis of metamizole, thiamin and pyridoxin in the presence of cyanocobalamine without any separation step. Methods: Calibration and validation samples were prepared. The calibration model was built from a series of sample mixtures containing these drugs in set proportions. Cross-validation of the calibration samples using the leave-one-out technique was used to identify the smaller set of components that provides the greatest predictive ability. The calibration model was evaluated based on the coefficient of determination (R2) and the root mean square error of calibration (RMSEC). Results: The coefficient of determination (R2) for the relationship between actual and predicted values for all studied drugs was higher than 0.99, indicating good accuracy. The RMSEC values obtained were relatively low, indicating good precision. The accuracy and precision of the developed method showed no significant difference from those obtained by the official HPLC method. Conclusion: The developed method (UV-VIS spectrophotometry in combination with PLS) was successfully used for analysis of metamizole, thiamin and pyridoxin in tablet dosage form. PMID:26819934
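The leave-one-out procedure used above can be sketched for a univariate linear calibration (a deliberately simplified stand-in for the paper's PLS model; all names are illustrative):

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

def loo_rmsecv(xs, ys):
    """Root-mean-square error of leave-one-out cross-validation:
    refit the model n times, each time predicting the held-out sample."""
    sq = 0.0
    for i in range(len(xs)):
        a, b = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        sq += (ys[i] - (a + b * xs[i])) ** 2
    return (sq / len(xs)) ** 0.5
```

In the multivariate case the same loop is run for each candidate number of PLS components, and the component count minimizing the cross-validation error is retained.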

  7. Chemometrics resolution and quantification power evaluation: Application on pharmaceutical quaternary mixture of Paracetamol, Guaifenesin, Phenylephrine and p-aminophenol.

    PubMed

    Yehia, Ali M; Mohamed, Heba M

    2016-01-05

    Three advanced chemometric-assisted spectrophotometric methods, namely Concentration Residuals Augmented Classical Least Squares (CRACLS), Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) and Principal Component Analysis-Artificial Neural Networks (PCA-ANN), were developed, validated and benchmarked against PLS calibration to resolve the severely overlapped spectra and simultaneously determine Paracetamol (PAR), Guaifenesin (GUA) and Phenylephrine (PHE) in their ternary mixture and in the presence of p-aminophenol (AP), the main degradation product and synthesis impurity of Paracetamol. The analytical performance of the proposed methods was described by percentage recoveries, root mean square error of calibration and standard error of prediction. The four multivariate calibration methods could be used directly, without any preliminary separation step, and were successfully applied to pharmaceutical formulation analysis, showing no excipient interference. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Cider fermentation process monitoring by Vis-NIR sensor system and chemometrics.

    PubMed

    Villar, Alberto; Vadillo, Julen; Santos, Jose I; Gorritxategi, Eneko; Mabe, Jon; Arnaiz, Aitor; Fernández, Luis A

    2017-04-15

    Optimization of a multivariate calibration process has been undertaken for a Visible-Near Infrared (400-1100 nm) sensor system, applied in the monitoring of the fermentation process of the cider produced in the Basque Country (Spain). The main parameters monitored included alcoholic proof, l-lactic acid content, glucose+fructose content and acetic acid content. The multivariate calibration was carried out using a combination of different variable selection techniques, and the most suitable pre-processing strategies were selected based on the spectral characteristics obtained by the sensor system. The variable selection techniques studied in this work include the Martens Uncertainty test, interval Partial Least Squares Regression (iPLS) and a Genetic Algorithm (GA). This procedure arises from the need to improve the prediction ability of the calibration models for cider monitoring. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. A Monte-Carlo simulation analysis for evaluating the severity distribution functions (SDFs) calibration methodology and determining the minimum sample-size requirements.

    PubMed

    Shirazi, Mohammadali; Reddy Geedipally, Srinivas; Lord, Dominique

    2017-01-01

    Severity distribution functions (SDFs) are used in highway safety to estimate the severity of crashes and conduct different types of safety evaluations and analyses. Developing a new SDF is a difficult task and demands significant time and resources. To simplify the process, the Highway Safety Manual (HSM) has started to document SDF models for different types of facilities. As such, SDF models have recently been introduced for freeways and ramps in the HSM addendum. However, since these models are fitted and validated using data from a select number of states, they must be calibrated to local conditions when applied to a new jurisdiction. The HSM provides a methodology to calibrate the models through a scalar calibration factor. However, the proposed methodology to calibrate SDFs was never validated through research. Furthermore, there are no concrete guidelines for selecting a reliable sample size. Using extensive simulation, this paper documents an analysis that examined the bias between the 'true' and 'estimated' calibration factors. The results indicated that as the true calibration factor deviates further from '1', more bias is observed between the 'true' and 'estimated' calibration factors. In addition, simulation studies were performed to determine the calibration sample size for various conditions. It was found that, as the average coefficient of variation (CV) of the 'KAB' and 'C' crashes increases, the analyst needs to collect a larger sample size to calibrate SDF models. Taking this observation into account, sample-size guidelines are proposed based on the average CV of the crash severities used in the calibration process. Copyright © 2016 Elsevier Ltd. All rights reserved.
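The two quantities the simulation study revolves around can be illustrated with a hypothetical sketch (the simple observed/predicted ratio and the function names are assumptions for illustration, not taken from the paper or the HSM text):

```python
def calibration_factor(observed, predicted):
    """Scalar calibration factor: ratio of total observed crashes in
    the local jurisdiction to the total predicted by the base model."""
    return sum(observed) / sum(predicted)

def coefficient_of_variation(counts):
    """Sample CV (std/mean) of site-level crash counts; the abstract's
    sample-size guidance is keyed to the average CV of severities."""
    n = len(counts)
    m = sum(counts) / n
    var = sum((c - m) ** 2 for c in counts) / (n - 1)
    return (var ** 0.5) / m
```

A higher CV means noisier counts, so more calibration sites are needed before the estimated factor stabilizes around the true one.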

  10. Sustained prediction ability of net analyte preprocessing methods using reduced calibration sets. Theoretical and experimental study involving the spectrophotometric analysis of multicomponent mixtures.

    PubMed

    Goicoechea, H C; Olivieri, A C

    2001-07-01

    A newly developed multivariate method involving net analyte preprocessing (NAP) was tested using central composite calibration designs of progressively decreasing size regarding the multivariate simultaneous spectrophotometric determination of three active components (phenylephrine, diphenhydramine and naphazoline) and one excipient (methylparaben) in nasal solutions. Its performance was evaluated and compared with that of partial least-squares (PLS-1). Minimisation of the calibration predicted error sum of squares (PRESS) as a function of a moving spectral window helped to select appropriate working spectral ranges for both methods. The comparison of NAP and PLS results was carried out using two tests: (1) the elliptical joint confidence region for the slope and intercept of a predicted versus actual concentrations plot for a large validation set of samples and (2) the D-optimality criterion concerning the information content of the calibration data matrix. Extensive simulations and experimental validation showed that, unlike PLS, the NAP method is able to furnish highly satisfactory results when the calibration set is reduced from a full four-component central composite to a fractional central composite, as expected from the modelling requirements of net analyte based methods.

  11. Classical vs. evolved quenching parameters and procedures in scintillation measurements: The importance of ionization quenching

    NASA Astrophysics Data System (ADS)

    Bagán, H.; Tarancón, A.; Rauret, G.; García, J. F.

    2008-07-01

    The quenching parameters used to model detection efficiency variations in scintillation measurements have not evolved since the 1970s. Meanwhile, computer capabilities have increased enormously, and ionization quenching has appeared in practical measurements using plastic scintillation. This study compares the results obtained in activity quantification by plastic scintillation of 14C samples that contain colour and ionization quenchers, using classical (SIS, SCR-limited, SCR-non-limited, SIS(ext), SQP(E)) and evolved (MWA-SCR and WDW) parameters and following three calibration approaches: single step, which does not take the quenching mechanism into account; two steps, which does; and multivariate calibration. Two-step calibration (ionization followed by colour) yielded the lowest relative errors, which means that each quenching phenomenon must be modelled specifically. In addition, the sample activity was quantified more accurately when the evolved parameters were used. Multivariate calibration-PLS also yielded better results than those obtained using classical parameters, which confirms that the quenching phenomena must be taken into account. The detection limits for each calibration method and each parameter were close to those obtained theoretically using the Currie approach.

  12. Calibration of a Six-Degree-of-Freedom Acceleration Measurement Device

    DOT National Transportation Integrated Search

    1994-12-01

    This report describes the calibration of a six-degree-of-freedom acceleration measurement system designed for use in the measurement of linear and angular head accelerations of anthropomorphic dummies during crash tests. The calibration methodology, ...

  13. A graphical method to evaluate spectral preprocessing in multivariate regression calibrations: example with Savitzky-Golay filters and partial least squares regression

    USDA-ARS?s Scientific Manuscript database

    In multivariate regression analysis of spectroscopy data, spectral preprocessing is often performed to reduce unwanted background information (offsets, sloped baselines) or accentuate absorption features in intrinsically overlapping bands. These procedures, also known as pretreatments, are commonly ...

  14. Membrane Introduction Mass Spectrometry Combined with an Orthogonal Partial-Least Squares Calibration Model for Mixture Analysis.

    PubMed

    Li, Min; Zhang, Lu; Yao, Xiaolong; Jiang, Xingyu

    2017-01-01

    The emerging membrane introduction mass spectrometry technique has been successfully used to detect benzene, toluene, ethyl benzene and xylene (BTEX), while overlapped spectra have unfortunately hindered its further application to the analysis of mixtures. Multivariate calibration, an efficient method to analyze mixtures, has been widely applied. In this paper, we compared univariate and multivariate analyses for quantification of the individual components of mixture samples. The results showed that the univariate analysis creates poor models with regression coefficients of 0.912, 0.867, 0.440 and 0.351 for BTEX, respectively. For multivariate analysis, a comparison to the partial-least squares (PLS) model shows that the orthogonal partial-least squares (OPLS) regression exhibits an optimal performance with regression coefficients of 0.995, 0.999, 0.980 and 0.976, favorable calibration parameters (RMSEC and RMSECV) and a favorable validation parameter (RMSEP). Furthermore, the OPLS exhibits a good recovery of 73.86 - 122.20% and relative standard deviation (RSD) of the repeatability of 1.14 - 4.87%. Thus, MIMS coupled with the OPLS regression provides an optimal approach for a quantitative BTEX mixture analysis in monitoring and predicting water pollution.

  15. Strategy for design NIR calibration sets based on process spectrum and model space: An innovative approach for process analytical technology.

    PubMed

    Cárdenas, V; Cordobés, M; Blanco, M; Alcalà, M

    2015-10-10

    The pharmaceutical industry is under stringent regulations on quality control of its products because it is critical to both the production process and consumer safety. Within the framework of "process analytical technology" (PAT), a complete understanding of the process and stepwise monitoring of manufacturing are required. Near infrared spectroscopy (NIRS) combined with chemometrics has lately proven efficient, useful and robust for pharmaceutical analysis. One crucial step in developing effective NIRS-based methodologies is selecting an appropriate calibration set to construct models affording accurate predictions. In this work, we developed calibration models for a pharmaceutical formulation during its three manufacturing stages: blending, compaction and coating. A novel methodology is proposed for selecting the calibration set, the "process spectrum", into which physical changes in the samples at each stage are algebraically incorporated. We also established a "model space" defined by Hotelling's T(2) and Q-residuals statistics for outlier identification (inside/outside the defined space) in order to select objectively the factors to be used in calibration set construction. The results confirm the efficacy of the proposed methodology for stepwise pharmaceutical quality control, and the relevance of the study as a guideline for implementing this simple and fast methodology in the pharma industry. Copyright © 2015 Elsevier B.V. All rights reserved.
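A minimal sketch of the Hotelling's T(2)/Q-residual "model space" idea, using a one-component PCA fitted by power iteration (a simplification: real applications retain several components and scale T(2) by the score variance; all names here are illustrative):

```python
def first_pc(data, iters=200):
    """Mean vector and first principal component of the rows of `data`,
    found by power iteration on the centered cross-product matrix."""
    n, p = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(p)]
    x = [[row[j] - means[j] for j in range(p)] for row in data]
    v = [1.0] * p
    for _ in range(iters):
        s = [sum(x[i][j] * v[j] for j in range(p)) for i in range(n)]
        v = [sum(x[i][j] * s[i] for i in range(n)) for j in range(p)]
        norm = sum(c * c for c in v) ** 0.5
        v = [c / norm for c in v]
    return means, v

def t2_and_q(row, means, v):
    """Unscaled T2 (squared score on the model) and Q residual
    (squared distance off the model) for one sample."""
    xc = [a - b for a, b in zip(row, means)]
    t = sum(a * b for a, b in zip(xc, v))       # score on the component
    resid = [a - t * b for a, b in zip(xc, v)]  # part the model misses
    return t * t, sum(r * r for r in resid)
```

Samples with large T2 are extreme but still within the model plane; samples with large Q lie off the model entirely, and thresholds on both define the inside/outside decision.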

  16. Simultaneous determination of rifampicin, isoniazid and pyrazinamide in tablet preparations by multivariate spectrophotometric calibration.

    PubMed

    Goicoechea, H C; Olivieri, A C

    1999-08-01

    The use of multivariate spectrophotometric calibration is presented for the simultaneous determination of the active components of tablets used in the treatment of pulmonary tuberculosis. The resolution of ternary mixtures of rifampicin, isoniazid and pyrazinamide has been accomplished by using partial least squares (PLS-1) regression analysis. Although the components show an important degree of spectral overlap, they have been simultaneously determined with high accuracy and precision, rapidly and with no need of nonaqueous solvents for dissolving the samples. No interference has been observed from the tablet excipients. A comparison is presented with the related multivariate method of classical least squares (CLS) analysis, which is shown to yield less reliable results due to the severe spectral overlap among the studied compounds. This is highlighted in the case of isoniazid, due to the small absorbances measured for this component.

  17. A novel multivariate approach using science-based calibration for direct coating thickness determination in real-time NIR process monitoring.

    PubMed

    Möltgen, C-V; Herdling, T; Reich, G

    2013-11-01

    This study demonstrates an approach, using science-based calibration (SBC), for direct coating thickness determination on heart-shaped tablets in real-time. Near-Infrared (NIR) spectra were collected during four full industrial pan coating operations. The tablets were coated with a thin hydroxypropyl methylcellulose (HPMC) film up to a film thickness of 28 μm. The application of SBC permits the calibration of the NIR spectral data without using costly determined reference values. This is due to the fact that SBC combines classical methods to estimate the coating signal and statistical methods for the noise estimation. The approach enabled the use of NIR for the measurement of the film thickness increase from around 8 to 28 μm of four independent batches in real-time. The developed model provided a spectroscopic limit of detection for the coating thickness of 0.64 ± 0.03 μm root-mean square (RMS). In the commonly used statistical methods for calibration, such as Partial Least Squares (PLS), sufficiently varying reference values are needed for calibration. For thin non-functional coatings this is a challenge because the quality of the model depends on the accuracy of the selected calibration standards. The obvious and simple approach of SBC eliminates many of the problems associated with the conventional statistical methods and offers an alternative for multivariate calibration. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. Preliminary Multi-Variable Parametric Cost Model for Space Telescopes

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Hendrichs, Todd

    2010-01-01

    This slide presentation reviews the creation of a preliminary multi-variable cost model for the contract costs of making a space telescope. It discusses the methodology for collecting the data, the definition of the statistical analysis methodology, single-variable model results, testing of historical models and an introduction to the multi-variable models.

  19. Reversed inverse regression for the univariate linear calibration and its statistical properties derived using a new methodology

    NASA Astrophysics Data System (ADS)

    Kang, Pilsang; Koo, Changhoi; Roh, Hokyu

    2017-11-01

    Since simple linear regression theory was established at the beginning of the 1900s, it has been used in a variety of fields. Unfortunately, it cannot be used directly for calibration. In practical calibrations, the observed measurements (the inputs) are subject to errors, and hence they vary, thus violating the assumption that the inputs are fixed. Therefore, in the case of calibration, the regression line fitted using the method of least squares is not consistent with the statistical properties of simple linear regression as already established based on this assumption. To resolve this problem, "classical regression" and "inverse regression" have been proposed. However, they do not completely resolve the problem. As a fundamental solution, we introduce "reversed inverse regression" along with a new methodology for deriving its statistical properties. In this study, the statistical properties of this regression are derived using the "error propagation rule" and the "method of simultaneous error equations" and are compared with those of the existing regression approaches. The accuracy of the statistical properties thus derived is investigated in a simulation study. We conclude that the newly proposed regression and methodology constitute the complete regression approach for univariate linear calibrations.
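The contrast between the classical and inverse approaches discussed above can be sketched as follows (illustrative only; the paper's "reversed inverse regression" itself is not reproduced here, and the helper names are assumptions):

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

def classical_estimate(x_ref, y_obs, y_new):
    """Classical calibration: regress response y on reference x,
    then invert the fitted line to estimate x for a new response."""
    a, b = fit_line(x_ref, y_obs)
    return (y_new - a) / b

def inverse_estimate(x_ref, y_obs, y_new):
    """Inverse calibration: regress x directly on y and predict."""
    a, b = fit_line(y_obs, x_ref)
    return a + b * y_new
```

With noiseless data the two estimators coincide; their statistical properties diverge once the observed responses carry error, which is exactly the situation the abstract addresses.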

  20. A proposed standard methodology for estimating the wounding capacity of small calibre projectiles or other missiles.

    PubMed

    Berlin, R H; Janzon, B; Rybeck, B; Schantz, B; Seeman, T

    1982-01-01

    A standard methodology for estimating the energy transfer characteristics of small calibre bullets and other fast missiles is proposed, consisting of firings against targets made of soft soap. The target is evaluated by measuring the size of the permanent cavity remaining in it after the shot. The method is very simple to use and does not require access to any sophisticated measuring equipment. It can be applied under all circumstances, even under field conditions. Adequate methods of calibration to ensure good accuracy are suggested. The precision and limitations of the method are discussed.

  1. Methodological accuracy of image-based electron density assessment using dual-energy computed tomography.

    PubMed

    Möhler, Christian; Wohlfahrt, Patrick; Richter, Christian; Greilich, Steffen

    2017-06-01

    Electron density is the most important tissue property influencing photon and ion dose distributions in radiotherapy patients. Dual-energy computed tomography (DECT) enables the determination of electron density by combining the information on photon attenuation obtained at two different effective x-ray energy spectra. Most algorithms suggested so far use the CT numbers provided after image reconstruction as input parameters, i.e., are image-based. To explore the accuracy that can be achieved with these approaches, we quantify the intrinsic methodological and calibration uncertainty of the seemingly simplest approach. In the studied approach, electron density is calculated with a one-parametric linear superposition ('alpha blending') of the two DECT images, which is shown to be equivalent to an affine relation between the photon attenuation cross sections of the two x-ray energy spectra. We propose to use the latter relation for empirical calibration of the spectrum-dependent blending parameter. For a conclusive assessment of the electron density uncertainty, we chose to isolate the purely methodological uncertainty component from CT-related effects such as noise and beam hardening. Analyzing calculated spectrally weighted attenuation coefficients, we find universal applicability of the investigated approach to arbitrary mixtures of human tissue with an upper limit of the methodological uncertainty component of 0.2%, excluding high-Z elements such as iodine. The proposed calibration procedure is bias-free and straightforward to perform using standard equipment. Testing the calibration on five published data sets, we obtain very small differences in the calibration result in spite of different experimental setups and CT protocols used. Employing a general calibration per scanner type and voltage combination is thus conceivable.
Given the high suitability for clinical application of the alpha-blending approach in combination with a very small methodological uncertainty, we conclude that further refinement of image-based DECT-algorithms for electron density assessment is not advisable. © 2017 American Association of Physicists in Medicine.
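A hedged sketch of the one-parametric alpha-blending relation described above (the conversion of CT numbers to water-relative attenuation, u = HU/1000 + 1, and the function name are illustrative assumptions, not the paper's exact formulation):

```python
def electron_density_alpha_blend(hu_low, hu_high, alpha):
    """Relative electron density from a linear superposition of the
    low- and high-kVp CT numbers of the same voxel.

    CT numbers (HU) are first converted to attenuation relative to
    water, u = HU/1000 + 1; `alpha` is the single spectrum-dependent
    blending parameter obtained from empirical calibration.
    """
    u_low = hu_low / 1000.0 + 1.0
    u_high = hu_high / 1000.0 + 1.0
    return alpha * u_low + (1.0 - alpha) * u_high
```

By construction, water (0 HU in both images) maps to a relative electron density of 1 for any alpha, which is a convenient sanity check on the calibration.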

  2. Prospects of second generation artificial intelligence tools in calibration of chemical sensors.

    PubMed

    Braibanti, Antonio; Rao, Rupenaguntla Sambasiva; Ramam, Veluri Anantha; Rao, Gollapalli Nageswara; Rao, Vaddadi Venkata Panakala

    2005-05-01

    Multivariate data driven calibration models with neural networks (NNs) are developed for binary (Cu++ and Ca++) and quaternary (K+, Ca++, NO3- and Cl-) ion-selective electrode (ISE) data. The response profiles of ISEs with concentrations are non-linear and sub-Nernstian. This task represents function approximation of multivariate, multi-response, correlated, non-linear data with unknown noise structure, i.e. multi-component calibration/prediction in chemometric parlance. Radial basis function (RBF) and Fuzzy-ARTMAP-NN models implemented in the software packages TRAJAN and Professional II are employed for the calibration. The optimum NN models reported are based on residuals in concentration space. Being a data driven information technology, NN does not require a model, a prior or posterior distribution of data, or a noise structure. Missing information, spikes or newer trends in different concentration ranges can be modeled through novelty detection. Two simulated data sets generated from mathematical functions are modeled as a function of the number of data points and network parameters such as the number of neurons and nearest neighbors. The success of RBF and Fuzzy-ARTMAP-NNs in developing adequate calibration models for experimental data and function approximation models for more complex simulated data sets establishes AI2 (artificial intelligence, 2nd generation) as a promising technology in quantitation.

  3. Multivariate calibration standardization across instruments for the determination of glucose by Fourier transform near-infrared spectrometry.

    PubMed

    Zhang, Lin; Small, Gary W; Arnold, Mark A

    2003-11-01

    The transfer of multivariate calibration models is investigated between a primary (A) and two secondary Fourier transform near-infrared (near-IR) spectrometers (B, C). The application studied in this work is the use of bands in the near-IR combination region of 5000-4000 cm(-1) to determine physiological levels of glucose in a buffered aqueous matrix containing varying levels of alanine, ascorbate, lactate, triacetin, and urea. The three spectrometers are used to measure 80 samples produced through a randomized experimental design that minimizes correlations between the component concentrations and between the concentrations of glucose and water. Direct standardization (DS), piecewise direct standardization (PDS), and guided model reoptimization (GMR) are evaluated for use in transferring partial least-squares calibration models developed with the spectra of 64 samples from the primary instrument to the prediction of glucose concentrations in 16 prediction samples measured with each secondary spectrometer. The three algorithms are evaluated as a function of the number of standardization samples used in transferring the calibration models. Performance criteria for judging the success of the calibration transfer are established as the standard error of prediction (SEP) for internal calibration models built with the spectra of the 64 calibration samples collected with each secondary spectrometer. These SEP values are 1.51 and 1.14 mM for spectrometers B and C, respectively. When calibration standardization is applied, the GMR algorithm is observed to outperform DS and PDS. With spectrometer C, the calibration transfer is highly successful, producing an SEP value of 1.07 mM. However, an SEP of 2.96 mM indicates unsuccessful calibration standardization with spectrometer B. This failure is attributed to differences in the variance structure of the spectra collected with spectrometers A and B.
Diagnostic procedures are presented for use with the GMR algorithm that forecasts the successful calibration transfer with spectrometer C and the unsatisfactory results with spectrometer B.
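The standardization idea can be illustrated in its simplest form: a per-channel slope/offset map from the secondary to the primary instrument, i.e. the window-of-one limit of piecewise direct standardization (a deliberate simplification of the DS/PDS/GMR algorithms studied in the paper; all names are illustrative):

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

def channelwise_standardization(primary, secondary):
    """For each wavelength channel, fit a slope/offset map taking
    secondary-instrument responses onto the primary instrument,
    using spectra of the same standardization samples on both."""
    p = len(primary[0])
    maps = []
    for j in range(p):
        xs = [row[j] for row in secondary]
        ys = [row[j] for row in primary]
        maps.append(fit_line(xs, ys))
    return maps

def apply_standardization(spectrum, maps):
    """Transform one secondary-instrument spectrum into the
    primary instrument's response space."""
    return [a + b * s for (a, b), s in zip(maps, spectrum)]
```

After standardization, the primary instrument's PLS model can be applied to secondary-instrument spectra; full PDS additionally uses a small window of neighboring channels per map.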

  4. METHODOLOGIES FOR CALIBRATION AND PREDICTIVE ANALYSIS OF A WATERSHED MODEL

    EPA Science Inventory

    The use of a fitted-parameter watershed model to address water quantity and quality management issues requires that it be calibrated under a wide range of hydrologic conditions. However, rarely does model calibration result in a unique parameter set. Parameter nonuniqueness can l...

  5. Quantitative analysis of Sudan dye adulteration in paprika powder using FTIR spectroscopy.

    PubMed

    Lohumi, Santosh; Joshi, Ritu; Kandpal, Lalit Mohan; Lee, Hoonsoo; Kim, Moon S; Cho, Hyunjeong; Mo, Changyeun; Seo, Young-Wook; Rahman, Anisur; Cho, Byoung-Kwan

    2017-05-01

    As adulteration of foodstuffs with Sudan dye, especially paprika- and chilli-containing products, has been reported with some frequency, this issue has become one focal point for addressing food safety. FTIR spectroscopy has been used extensively as an analytical method for quality control and safety determination for food products. Thus, the use of FTIR spectroscopy for rapid determination of Sudan dye in paprika powder was investigated in this study. A net analyte signal (NAS)-based methodology, named HLA/GO (hybrid linear analysis in the literature), was applied to FTIR spectral data to predict Sudan dye concentration. The calibration and validation sets were designed to evaluate the performance of the multivariate method. The obtained results had a high determination coefficient (R2) of 0.98 and low root mean square error (RMSE) of 0.026% for the calibration set, and an R2 of 0.97 and RMSE of 0.05% for the validation set. The model was further validated using a second validation set and through the figures of merit, such as sensitivity, selectivity, and limits of detection and quantification. The proposed technique of FTIR combined with HLA/GO is rapid, simple and low cost, making this approach advantageous when compared with the main alternative methods based on liquid chromatography (LC) techniques.

  6. Firefly as a novel swarm intelligence variable selection method in spectroscopy.

    PubMed

    Goodarzi, Mohammad; dos Santos Coelho, Leandro

    2014-12-10

    A critical step in multivariate calibration is wavelength selection, which is used to build models with better prediction performance when applied to spectral data. Up to now, many feature selection techniques have been developed. Among the different types of feature selection techniques, those based on swarm intelligence optimization methodologies are particularly interesting, since they simulate animal and insect behavior such as finding the shortest path between a food source and the nest. The decision is made collectively by the swarm, leading to a more robust search that is less prone to local minima during the optimization cycle. This paper presents a novel feature selection approach for spectroscopic data, leading to more robust calibration models. The performance of the firefly algorithm, a swarm intelligence paradigm, was evaluated and compared with the genetic algorithm and particle swarm optimization. All three techniques were coupled with partial least squares (PLS) and applied to three spectroscopic data sets. They demonstrate improved prediction results compared with a PLS model built using all wavelengths. Results show that the firefly algorithm, as a novel swarm paradigm, leads to a lower number of selected wavelengths while the prediction performance of the built PLS model stays the same. Copyright © 2014. Published by Elsevier B.V.
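A minimal continuous firefly optimizer conveys the attraction-to-brighter-neighbors idea (illustrative only: wavelength selection uses a discrete variant, and beta0, gamma and the step size below are generic textbook defaults, not values from the paper):

```python
import math
import random

def firefly_minimize(f, dim, n=15, iters=60, seed=1):
    """Minimal firefly optimizer: each firefly moves toward every
    brighter (lower-cost) one with distance-damped attraction plus
    a small, gradually annealed random step."""
    rng = random.Random(seed)
    beta0, gamma, step = 1.0, 0.05, 0.2
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n)]
    cost = [f(p) for p in pop]
    best_i = min(range(n), key=lambda i: cost[i])
    best_p, best_c = list(pop[best_i]), cost[best_i]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)  # damped attraction
                    pop[i] = [a + beta * (b - a) + step * (rng.random() - 0.5)
                              for a, b in zip(pop[i], pop[j])]
                    cost[i] = f(pop[i])
                    if cost[i] < best_c:
                        best_p, best_c = list(pop[i]), cost[i]
        step *= 0.95  # anneal the random walk
    return best_p, best_c
```

In the wavelength-selection setting, the position vector would instead encode which channels enter the PLS model and the cost would be a cross-validated prediction error.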

  7. Calibration Adjustments to the MODIS Aqua Ocean Color Bands

    NASA Technical Reports Server (NTRS)

    Meister, Gerhard

    2012-01-01

    After the end of the SeaWiFS mission in 2010 and the MERIS mission in 2012, the ocean color products of the MODIS on Aqua are the only remaining source to continue the ocean color climate data record until the VIIRS ocean color products become operational (expected for summer 2013). The MODIS on Aqua is well beyond its expected lifetime, and the calibration accuracy of the short wavelengths (412 nm and 443 nm) has deteriorated in recent years. Initially, SeaWiFS data were used to improve the MODIS Aqua calibration, but this solution was not applicable after the end of the SeaWiFS mission. In 2012, a new calibration methodology was applied by the MODIS calibration and support team using desert sites to improve the degradation trending. This presentation describes further improvements to this new approach. The 2012 reprocessing of the MODIS Aqua ocean color products is based on the new methodology.

  8. A methodology for designing robust multivariable nonlinear control systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Grunberg, D. B.

    1986-01-01

    A new methodology is described for the design of nonlinear dynamic controllers for nonlinear multivariable systems providing guarantees of closed-loop stability, performance, and robustness. The methodology is an extension of the Linear-Quadratic-Gaussian with Loop-Transfer-Recovery (LQG/LTR) methodology for linear systems, thus hinging upon the idea of constructing an approximate inverse operator for the plant. A major feature of the methodology is a unification of both the state-space and input-output formulations. In addition, new results on stability theory, nonlinear state estimation, and optimal nonlinear regulator theory are presented, including the guaranteed global properties of the extended Kalman filter and optimal nonlinear regulators.

  9. Overview of hypersonic CFD code calibration studies

    NASA Technical Reports Server (NTRS)

    Miller, Charles G.

    1987-01-01

    The topics are presented in viewgraph form and include the following: definitions of computational fluid dynamics (CFD) code validation; the climate in hypersonics at LaRC when the first 'designed' CFD code calibration study was initiated; methodology from the experimentalist's perspective; hypersonic facilities; measurement techniques; and CFD code calibration studies.

  10. High-frequency measurements of aeolian saltation flux: Field-based methodology and applications

    NASA Astrophysics Data System (ADS)

    Martin, Raleigh L.; Kok, Jasper F.; Hugenholtz, Chris H.; Barchyn, Thomas E.; Chamecki, Marcelo; Ellis, Jean T.

    2018-02-01

    Aeolian transport of sand and dust is driven by turbulent winds that fluctuate over a broad range of temporal and spatial scales. However, commonly used aeolian transport models do not explicitly account for such fluctuations, likely contributing to substantial discrepancies between models and measurements. Underlying this problem is the absence of accurate sand flux measurements at the short time scales at which wind speed fluctuates. Here, we draw on extensive field measurements of aeolian saltation to develop a methodology for generating high-frequency (up to 25 Hz) time series of total (vertically-integrated) saltation flux, namely by calibrating high-frequency (HF) particle counts to low-frequency (LF) flux measurements. The methodology follows four steps: (1) fit exponential curves to vertical profiles of saltation flux from LF saltation traps, (2) determine empirical calibration factors through comparison of LF exponential fits to HF number counts over concurrent time intervals, (3) apply these calibration factors to subsamples of the saltation count time series to obtain HF height-specific saltation fluxes, and (4) aggregate the calibrated HF height-specific saltation fluxes into estimates of total saltation fluxes. When coupled to high-frequency measurements of wind velocity, this methodology offers new opportunities for understanding how aeolian saltation dynamics respond to variability in driving winds over time scales from tens of milliseconds to days.
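
    The four calibration steps above can be sketched numerically. A minimal sketch, assuming hypothetical trap heights, low-frequency (LF) fluxes, and high-frequency (HF) count rates; all numbers below are invented stand-ins for real trap and counter data:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical trap heights (m) and LF height-specific fluxes (g/m^2/s).
    z = np.array([0.05, 0.10, 0.20, 0.30, 0.45])
    q_lf = np.array([8.1, 5.0, 1.9, 0.75, 0.18])

    # Step 1: fit an exponential flux profile q(z) = q0 * exp(-z / zq).
    expo = lambda z, q0, zq: q0 * np.exp(-z / zq)
    (q0, zq), _ = curve_fit(expo, z, q_lf, p0=(10.0, 0.1))

    # Vertically integrated total flux Q = q0 * zq (g/m/s).
    Q_total = q0 * zq

    # Step 2: calibration factor relating HF counts to flux at the sensor height
    # (the sensor height and mean count rate are illustrative assumptions).
    z_sensor = 0.10
    counts_lf_mean = 120.0                        # mean HF count rate over the LF interval
    cal = expo(z_sensor, q0, zq) / counts_lf_mean # flux per count

    # Steps 3-4: apply the factor to an HF count series, then scale by the
    # fitted profile to estimate total (vertically integrated) fluxes.
    hf_counts = np.array([100.0, 140.0, 90.0, 160.0])
    q_hf = cal * hf_counts                               # HF height-specific flux
    Q_hf = q_hf * (Q_total / expo(z_sensor, q0, zq))     # total flux estimates
    ```

    The same fit-then-scale pattern applies per calibration interval; in practice the calibration factors would be re-estimated over concurrent LF/HF time windows as described above.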

  11. A methodology for reduced order modeling and calibration of the upper atmosphere

    NASA Astrophysics Data System (ADS)

    Mehta, Piyush M.; Linares, Richard

    2017-10-01

    Atmospheric drag is the largest source of uncertainty in accurately predicting the orbit of satellites in low Earth orbit (LEO). Accurately predicting drag for objects that traverse LEO is critical to space situational awareness. Atmospheric models used for orbital drag calculations can be characterized as either empirical or physics-based (first-principles based). Empirical models are fast to evaluate but offer limited real-time predictive/forecasting ability, while physics-based models offer greater predictive/forecasting ability but require dedicated parallel computational resources. Also, calibration with accurate data is required for either type of model. This paper presents a new methodology based on proper orthogonal decomposition toward development of a quasi-physical, predictive, reduced order model that combines the speed of empirical models with the predictive/forecasting capabilities of physics-based models. The methodology is developed to reduce the high dimensionality of physics-based models while maintaining their capabilities. We develop the methodology using the Naval Research Lab's Mass Spectrometer Incoherent Scatter model and show that the diurnal and seasonal variations can be captured using a small number of modes and parameters. We also present calibration of the reduced order model using the CHAMP and GRACE accelerometer-derived densities. Results show that the method performs well for modeling and calibration of the upper atmosphere.
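
    The proper orthogonal decomposition (POD) step at the heart of such a methodology can be sketched with a synthetic snapshot matrix; the "density fields" below are illustrative sinusoidal patterns, not MSIS output:

    ```python
    import numpy as np

    # Illustrative snapshot matrix (space x time): two dominant patterns,
    # standing in for diurnal/seasonal variations, plus small noise.
    rng = np.random.default_rng(0)
    n_space, n_time = 200, 50
    t = np.linspace(0, 2 * np.pi, n_time)
    x = np.linspace(0, 1, n_space)[:, None]
    snapshots = (np.sin(np.pi * x) * np.cos(t)
                 + 0.5 * np.cos(2 * np.pi * x) * np.sin(2 * t)
                 + 0.01 * rng.standard_normal((n_space, n_time)))

    mean_field = snapshots.mean(axis=1, keepdims=True)
    X = snapshots - mean_field

    # POD via SVD; columns of U are spatial modes, rows of Vt time coefficients.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)

    # Fraction of variance ("energy") captured by the first r modes.
    r = 2
    energy = (s[:r] ** 2).sum() / (s ** 2).sum()

    # Reduced-order reconstruction from r modes.
    X_rom = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
    rel_err = np.linalg.norm(X - X_rom) / np.linalg.norm(X)
    ```

    With a genuinely low-dimensional signal, a handful of modes capture nearly all the variance, which is the property the reduced order model exploits; calibration then amounts to estimating the small set of mode coefficients from data.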

  12. Geometrical Characterisation of a 2D Laser System and Calibration of a Cross-Grid Encoder by Means of a Self-Calibration Methodology

    PubMed Central

    Torralba, Marta; Díaz-Pérez, Lucía C.

    2017-01-01

    This article presents a self-calibration procedure and the experimental results for the geometrical characterisation of a 2D laser system operating along a large working range (50 mm × 50 mm) with submicrometre uncertainty. Its purpose is to correct the geometric errors of the 2D laser system setup generated when positioning the two laser heads and the plane mirrors used as reflectors. The non-calibrated artefact used in this procedure is a commercial grid encoder that is also a measuring instrument. Therefore, the self-calibration procedure also allows the determination of the geometrical errors of the grid encoder, including its squareness error. The precision of the proposed algorithm is tested using virtual data. Actual measurements are subsequently registered, and the algorithm is applied. Once the laser system is characterised, the error of the grid encoder is calculated along the working range, resulting in an expanded submicrometre calibration uncertainty (k = 2) for the X and Y axes. The results of the grid encoder calibration are comparable to the errors provided by the calibration certificate for its main central axes. It is, therefore, possible to confirm the suitability of the self-calibration methodology proposed in this article. PMID:28858239

  13. Innovative methodology for intercomparison of radionuclide calibrators using short half-life in situ prepared radioactive sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliveira, P. A.; Santos, J. A. M., E-mail: joao.santos@ipoporto.min-saude.pt; Serviço de Física Médica do Instituto Português de Oncologia do Porto Francisco Gentil, EPE, Porto

    2014-07-15

    Purpose: An original radionuclide calibrator method for activity determination is presented. The method could be used for intercomparison surveys for short half-life radioactive sources used in Nuclear Medicine, such as {sup 99m}Tc or most positron emission tomography radiopharmaceuticals. Methods: By evaluating the resulting net optical density (netOD) using a standardized scanning method of irradiated Gafchromic XRQA2 film, a comparison of the netOD measurement with a previously determined calibration curve can be made, and the difference between the tested radionuclide calibrator and a radionuclide calibrator used as a reference device can be calculated. To estimate the total expected measurement uncertainties, a careful analysis of the methodology, for the case of {sup 99m}Tc, was performed: reproducibility determination, scanning conditions, and possible fadeout effects. Since every factor of the activity measurement procedure can influence the final result, the method also evaluates correct syringe positioning inside the radionuclide calibrator. Results: As an alternative to sending a calibrated source to the surveyed site, which requires a relatively long half-life of the nuclide, or sending a portable calibrated radionuclide calibrator, the proposed method uses a source prepared in situ. An indirect activity determination is achieved by the irradiation of a radiochromic film using {sup 99m}Tc under strictly controlled conditions, and cumulated activity calculation from the initial activity and total irradiation time. The irradiated Gafchromic film and the irradiator, without the source, can then be sent to a National Metrology Institute for evaluation of the results. 
Conclusions: The methodology described in this paper was shown to have good potential for accurate (3%) radionuclide calibrator intercomparison studies for {sup 99m}Tc between Nuclear Medicine centers without source transfer, and can easily be adapted to other short half-life radionuclides.

  14. Sensitive analytical method for simultaneous analysis of some vasoconstrictors with highly overlapped analytical signals

    NASA Astrophysics Data System (ADS)

    Nikolić, G. S.; Žerajić, S.; Cakić, M.

    2011-10-01

    Multivariate calibration is a powerful mathematical tool that can be applied in analytical chemistry when analytical signals are highly overlapped. A method based on partial least squares regression is proposed for the simultaneous spectrophotometric determination of adrenergic vasoconstrictors in a decongestive solution containing two active components: phenylephrine hydrochloride and trimazoline hydrochloride. These sympathomimetic agents are frequently associated in pharmaceutical formulations against the common cold. The proposed method, which is simple and rapid, offers the advantages of sensitivity and a wide range of determinations without the need for extraction of the vasoconstrictors. In order to minimize the number of factors necessary to obtain the calibration matrix by multivariate calibration, different parameters were evaluated. The adequate selection of the spectral regions proved to be important for the number of factors. In order to simultaneously quantify both hydrochlorides among excipients, the spectral region between 250 and 290 nm was selected. Recoveries for the vasoconstrictors were 98-101%. The developed method was applied to the assay of two decongestive pharmaceutical preparations.
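
    A minimal sketch of PLS calibration for two spectrally overlapped components, using scikit-learn's PLSRegression; the Gaussian band shapes, noise level, and concentrations are invented for illustration and are not the actual spectra of these compounds:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)
    wavelengths = np.linspace(250, 290, 81)   # nm, the selected spectral region

    # Hypothetical, heavily overlapped pure-component spectra (Gaussian bands).
    s1 = np.exp(-((wavelengths - 268) / 8.0) ** 2)
    s2 = np.exp(-((wavelengths - 274) / 9.0) ** 2)

    # Calibration set: known concentration pairs, Beer-Lambert mixtures + noise.
    C_cal = rng.uniform(0.1, 1.0, size=(20, 2))
    X_cal = C_cal @ np.vstack([s1, s2]) + 0.002 * rng.standard_normal((20, 81))

    pls = PLSRegression(n_components=2)
    pls.fit(X_cal, C_cal)

    # Predict an unseen mixture; recoveries should be close to 100%.
    c_true = np.array([[0.40, 0.70]])
    x_test = c_true @ np.vstack([s1, s2]) + 0.002 * rng.standard_normal((1, 81))
    c_pred = pls.predict(x_test)
    recovery = 100 * c_pred / c_true
    ```

    Because PLS regresses concentrations on the whole spectral window, no band separation or background subtraction is needed, which is exactly why it handles overlapped signals like these.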

  15. Application of the correlation constrained multivariate curve resolution alternating least-squares method for analyte quantitation in the presence of unexpected interferences using first-order instrumental data.

    PubMed

    Goicoechea, Héctor C; Olivieri, Alejandro C; Tauler, Romà

    2010-03-01

    Correlation constrained multivariate curve resolution-alternating least squares is shown to be a feasible method for processing first-order instrumental data and achieving analyte quantitation in the presence of unexpected interferences. For both simulated and experimental data sets, the proposed method could correctly retrieve the analyte and interference spectral profiles and perform accurate estimations of analyte concentrations in test samples. Since no information concerning the interferences was present in the calibration samples, the proposed multivariate calibration approach including the correlation constraint achieves the so-called second-order advantage for the analyte of interest, which is normally associated with more complex, richer higher-order instrumental data. The proposed method is tested using a simulated data set and two experimental data systems: one for the determination of ascorbic acid in powder juices using UV-visible absorption spectral data, and another for the determination of tetracycline in serum samples using fluorescence emission spectroscopy.

  16. Calibration Uncertainties in the Droplet Measurement Technologies Cloud Condensation Nuclei Counter

    NASA Astrophysics Data System (ADS)

    Hibert, Kurt James

    Cloud condensation nuclei (CCN) serve as the nucleation sites for the condensation of water vapor in Earth's atmosphere and are important for their effect on climate and weather. The influence of CCN on cloud radiative properties (the aerosol indirect effect) is the most uncertain of the quantified radiative forcing changes that have occurred since pre-industrial times. CCN influence the weather because intrinsic and extrinsic aerosol properties affect cloud formation and precipitation development. To quantify these effects, it is necessary to accurately measure CCN, which requires accurate calibrations using a consistent methodology. Furthermore, the calibration uncertainties are required to compare measurements from different field projects. CCN uncertainties also aid the integration of CCN measurements with atmospheric models. The commercially available Droplet Measurement Technologies (DMT) CCN counter is used by many research groups, so it is important to quantify its calibration uncertainty. Uncertainties in the calibration of the DMT CCN counter exist in the flow rate and supersaturation values. The concentration depends on the accuracy of the flow rate calibration, which has a relatively small (4.3%) uncertainty. The supersaturation depends on chamber pressure, temperature, and flow rate. The supersaturation calibration is a complex process since the chamber's supersaturation must be inferred from a temperature difference measurement. Additionally, calibration errors can result from the Köhler theory assumptions, the fitting methods utilized, the influence of multiply-charged particles, and the calibration points used. In order to determine the calibration uncertainties and the pressure dependence of the supersaturation calibration, three calibrations are done at each pressure level: 700, 840, and 980 hPa. 
Typically 700 hPa is the pressure used for aircraft measurements in the boundary layer, 840 hPa is the calibration pressure at DMT in Boulder, CO, and 980 hPa is the average surface pressure at Grand Forks, ND. The supersaturation calibration uncertainty is 2.3, 3.1, and 4.4 % for calibrations done at 700, 840, and 980 hPa respectively. The supersaturation calibration change with pressure is on average 0.047 % supersaturation per 100 hPa. The supersaturation calibrations done at UND are 42-45 % lower than supersaturation calibrations done at DMT approximately 1 year previously. Performance checks confirmed that all major leaks developed during shipping were fixed before conducting the supersaturation calibrations. Multiply-charged particles passing through the Electrostatic Classifier may have influenced DMT's activation curves, which is likely part of the supersaturation calibration difference. Furthermore, the fitting method used to calculate the activation size and the limited calibration points are likely significant sources of error in DMT's supersaturation calibration. While the DMT CCN counter's calibration uncertainties are relatively small, and the pressure dependence is easily accounted for, the calibration methodology used by different groups can be very important. The insights gained from the careful calibration of the DMT CCN counter indicate that calibration of scientific instruments using complex methodology is not trivial.

  17. Measurement of non-sugar solids content in Chinese rice wine using near infrared spectroscopy combined with an efficient characteristic variables selection algorithm.

    PubMed

    Ouyang, Qin; Zhao, Jiewen; Chen, Quansheng

    2015-01-01

    The non-sugar solids (NSS) content is one of the most important nutrition indicators of Chinese rice wine. This study proposed a rapid method for the measurement of NSS content in Chinese rice wine using near infrared (NIR) spectroscopy. We also systematically studied efficient spectral variable selection algorithms for use in modeling. A new algorithm, synergy interval partial least squares with competitive adaptive reweighted sampling (Si-CARS-PLS), was proposed for modeling. The performance of the final model was evaluated using the root mean square error of calibration (RMSEC) and correlation coefficient (Rc) in the calibration set, and similarly tested by the root mean square error of prediction (RMSEP) and correlation coefficient (Rp) in the prediction set. The optimum model by the Si-CARS-PLS algorithm was achieved when 7 PLS factors and 18 variables were included, with the following results: Rc=0.95 and RMSEC=1.12 in the calibration set, and Rp=0.95 and RMSEP=1.22 in the prediction set. In addition, the Si-CARS-PLS algorithm showed its superiority when compared with the commonly used algorithms in multivariate calibration. This work demonstrated that the NIR spectroscopy technique combined with a suitable multivariate calibration algorithm has high potential for rapid measurement of NSS content in Chinese rice wine. Copyright © 2015 Elsevier B.V. All rights reserved.
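
    The RMSEC/RMSEP and correlation metrics used to evaluate such calibration models are straightforward to compute; the reference and predicted NSS values below are hypothetical:

    ```python
    import numpy as np

    def rmse(y_true, y_pred):
        """Root mean square error, as used for RMSEC (calibration set)
        and RMSEP (prediction set)."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

    def corr(y_true, y_pred):
        """Pearson correlation coefficient (Rc in calibration, Rp in prediction)."""
        return float(np.corrcoef(y_true, y_pred)[0, 1])

    # Hypothetical reference vs. predicted NSS contents (illustrative only).
    y_ref = np.array([12.0, 15.5, 18.2, 21.0, 24.8, 27.1])
    y_hat = np.array([12.9, 14.8, 19.0, 20.1, 25.6, 26.5])

    rmsep = rmse(y_ref, y_hat)
    rp = corr(y_ref, y_hat)
    ```

    Reporting both metrics on the calibration and prediction sets, as this study does, makes overfitting visible: an RMSEC much smaller than the RMSEP flags a model that memorizes the calibration samples.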

  18. Study on rapid valid acidity evaluation of apple by fiber optic diffuse reflectance technique

    NASA Astrophysics Data System (ADS)

    Liu, Yande; Ying, Yibin; Fu, Xiaping; Jiang, Xuesong

    2004-03-01

    Some issues related to nondestructive evaluation of valid acidity in intact apples by means of the Fourier transform near infrared (FTNIR) (800-2631 nm) method were addressed. A relationship was established between the diffuse reflectance spectra, recorded with a bifurcated optic fiber, and the valid acidity. The data were analyzed by multivariate calibration methods such as partial least squares (PLS) analysis and principal component regression (PCR). A total of 120 Fuji apples were tested, 80 of which were used to form a calibration data set. The influence of data preprocessing and different spectral treatments was also investigated. Models based on smoothed spectra were slightly worse than models based on derivative spectra, and the best result was obtained when the segment length was 5 and the gap size was 10. Depending on data preprocessing and multivariate calibration technique, the best prediction model, obtained by PLS analysis, had a correlation coefficient of 0.871, a low RMSEP (0.0677), a low RMSEC (0.056), and a small difference between RMSEP and RMSEC. The results point out the feasibility of FTNIR spectral analysis for predicting fruit valid acidity nondestructively. However, the ratio of the data standard deviation to the root mean square error of prediction (SDR) should be greater than 3 in calibration models, and the results here cannot yet meet the demands of actual application. Therefore, further study is required for better calibration and prediction.

  19. Calibration of a parsimonious distributed ecohydrological daily model in a data-scarce basin by exclusively using the spatio-temporal variation of NDVI

    NASA Astrophysics Data System (ADS)

    Ruiz-Pérez, Guiomar; Koch, Julian; Manfreda, Salvatore; Caylor, Kelly; Francés, Félix

    2017-12-01

    Ecohydrological modeling studies in developing countries, such as sub-Saharan Africa, often face the problem of extensive parametrical requirements and limited available data. Satellite remote sensing data may be able to fill this gap, but require novel methodologies to exploit their spatio-temporal information that could potentially be incorporated into model calibration and validation frameworks. The present study tackles this problem by suggesting an automatic calibration procedure, based on the empirical orthogonal function, for distributed ecohydrological daily models. The procedure is tested with the support of remote sensing data in a data-scarce environment - the upper Ewaso Ngiro river basin in Kenya. In the present application, the TETIS-VEG model is calibrated using only NDVI (Normalized Difference Vegetation Index) data derived from MODIS. The results demonstrate that (1) satellite data of vegetation dynamics can be used to calibrate and validate ecohydrological models in water-controlled and data-scarce regions, (2) the model calibrated using only satellite data is able to reproduce both the spatio-temporal vegetation dynamics and the observed discharge at the outlet and (3) the proposed automatic calibration methodology works satisfactorily and it allows for a straightforward incorporation of spatio-temporal data into the calibration and validation framework of a model.

  20. An overview of sensor calibration inter-comparison and applications

    USGS Publications Warehouse

    Xiong, Xiaoxiong; Cao, Changyong; Chander, Gyanesh

    2010-01-01

    Long-term climate data records (CDR) are often constructed using observations made by multiple Earth observing sensors over a broad range of spectra and a large scale in both time and space. These sensors can be of the same or different types operated on the same or different platforms. They can be developed and built with different technologies and are likely operated over different time spans. It has been known that the uncertainty of climate models and data records depends not only on the calibration quality (accuracy and stability) of individual sensors, but also on their calibration consistency across instruments and platforms. Therefore, sensor calibration inter-comparison and validation have become increasingly demanding and will continue to play an important role for a better understanding of the science product quality. This paper provides an overview of different methodologies, which have been successfully applied for sensor calibration inter-comparison. Specific examples using different sensors, including MODIS, AVHRR, and ETM+, are presented to illustrate the implementation of these methodologies.

  1. Multivariate Methods for Meta-Analysis of Genetic Association Studies.

    PubMed

    Dimou, Niki L; Pantavou, Katerina G; Braliou, Georgia G; Bagos, Pantelis G

    2018-01-01

    Multivariate meta-analysis of genetic association studies and genome-wide association studies has received remarkable attention, as it improves the precision of the analysis. Here, we review, summarize and present in a unified framework methods for multivariate meta-analysis of genetic association studies and genome-wide association studies. Starting with the statistical methods used for robust analysis and genetic model selection, we briefly present univariate methods for meta-analysis and then scrutinize multivariate methodologies. Multivariate models of meta-analysis for single gene-disease association studies, including models for haplotype association studies, multiple linked polymorphisms and multiple outcomes, are discussed. The popular Mendelian randomization approach and special cases of meta-analysis addressing issues such as the assumption of the mode of inheritance, deviation from Hardy-Weinberg Equilibrium and gene-environment interactions are also presented. All available methods are enriched with practical applications, and methodologies that could be developed in the future are discussed. Links for all available software implementing multivariate meta-analysis methods are also provided.

  2. VIIRS reflective solar bands on-orbit calibration and performance: a three-year update

    NASA Astrophysics Data System (ADS)

    Sun, Junqiang; Wang, Menghua

    2014-11-01

    The on-orbit calibration of the reflective solar bands (RSBs) of VIIRS and results from the analysis of the three years of mission data to date are presented. The VIIRS solar diffuser (SD) and lunar calibration methodologies are discussed, and the calibration coefficients, called F-factors, for the RSBs are given for the latest revision. The coefficients derived from the two calibrations are compared and the uncertainties of the calibrations are discussed. Numerous improvements have been made, with the major improvements to the calibration result coming mainly from the improved bidirectional reflectance factor (BRF) of the SD and the vignetting functions of both the SD screen and the sun-view screen. The very clean results, devoid of many previously known noises and artifacts, assure that VIIRS has performed well for the three years on orbit since launch, and in particular that the solar diffuser stability monitor (SDSM) is functioning essentially without flaws. The SD degradation, or H-factors, for the most part shows the expected decline, except for a surprising rise on day 830, lasting for 75 days, that signals a new degradation phenomenon. Nevertheless, the SDSM and the calibration methodology have successfully captured the SD degradation for RSB calibration. The overall improvement has the most significant and direct impact on the ocean color products, which demand high accuracy from RSB observations.

  3. A new methodology based on sensitivity analysis to simplify the recalibration of functional-structural plant models in new conditions.

    PubMed

    Mathieu, Amélie; Vidal, Tiphaine; Jullien, Alexandra; Wu, QiongLi; Chambon, Camille; Bayol, Benoit; Cournède, Paul-Henry

    2018-06-19

    Functional-structural plant models (FSPMs) describe explicitly the interactions between plants and their environment at organ to plant scale. However, the high level of description of the structure or model mechanisms makes this type of model very complex and hard to calibrate. A two-step methodology to facilitate the calibration process is proposed here. First, a global sensitivity analysis method was applied to the calibration loss function. It provided first-order and total-order sensitivity indexes that allow parameters to be ranked by importance in order to select the most influential ones. Second, the Akaike information criterion (AIC) was used to quantify the model's quality of fit after calibration with different combinations of selected parameters. The model with the lowest AIC gives the best combination of parameters to select. This methodology was validated by calibrating the model on an independent data set (same cultivar, another year) with the parameters selected in the second step. All the parameters were set to their nominal value; only the most influential ones were re-estimated. Sensitivity analysis applied to the calibration loss function is a relevant method to underline the most significant parameters in the estimation process. For the studied winter oilseed rape model, 11 out of 26 estimated parameters were selected. Then, the model could be recalibrated for a different data set by re-estimating only three parameters selected with the model selection method. Fitting only a small number of parameters dramatically increases the efficiency of recalibration, increases the robustness of the model and helps identify the principal sources of variation in varying environmental conditions. This innovative method still needs to be more widely validated but already gives interesting avenues to improve the calibration of FSPMs.
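
    The AIC-based selection in the second step can be sketched as follows. For least-squares fits with Gaussian residuals, AIC reduces to n·ln(RSS/n) + 2k, where k is the number of re-estimated parameters; the residuals and parameter counts below are contrived so that the penalty term decides:

    ```python
    import numpy as np

    def aic_ls(y_obs, y_fit, k):
        """AIC for a least-squares fit with k estimated parameters:
        n * ln(RSS / n) + 2k, assuming Gaussian residuals."""
        y_obs, y_fit = np.asarray(y_obs), np.asarray(y_fit)
        n = len(y_obs)
        rss = float(np.sum((y_obs - y_fit) ** 2))
        return n * np.log(rss / n) + 2 * k

    rng = np.random.default_rng(0)
    y = rng.standard_normal(40)   # stand-in observations

    # Candidate calibrations: the small one re-estimates 3 parameters, the
    # large one 11; the large fit only marginally reduces the residuals
    # (constant offsets, purely illustrative).
    fit_small = y + 0.10
    fit_large = y + 0.09

    aic_small = aic_ls(y, fit_small, k=3)
    aic_large = aic_ls(y, fit_large, k=11)
    best = "small" if aic_small < aic_large else "large"
    ```

    Here the marginal gain in fit does not pay for the eight extra parameters, so AIC selects the smaller model, which is the mechanism by which the study shrinks 26 parameters down to the few worth re-estimating.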

  4. Uncertainty Analysis of Inertial Model Attitude Sensor Calibration and Application with a Recommended New Calibration Method

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Statistical tools, previously developed for nonlinear least-squares estimation of multivariate sensor calibration parameters and the associated calibration uncertainty analysis, have been applied to single- and multiple-axis inertial model attitude sensors used in wind tunnel testing to measure angle of attack and roll angle. The analysis provides confidence and prediction intervals of calibrated sensor measurement uncertainty as functions of applied input pitch and roll angles. A comparative performance study of various experimental designs for inertial sensor calibration is presented along with corroborating experimental data. The importance of replicated calibrations over extended time periods has been emphasized; replication provides independent estimates of calibration precision and bias uncertainties, statistical tests for calibration or modeling bias uncertainty, and statistical tests for sensor parameter drift over time. A set of recommendations for a new standardized model attitude sensor calibration method and usage procedures is included. The statistical information provided by these procedures is necessary for the uncertainty analysis of aerospace test results now required by users of industrial wind tunnel test facilities.

  5. Airport Landside - Volume III : ALSIM Calibration and Validation.

    DOT National Transportation Integrated Search

    1982-06-01

    This volume discusses calibration and validation procedures applied to the Airport Landside Simulation Model (ALSIM), using data obtained at Miami, Denver and LaGuardia Airports. Criteria for the selection of a validation methodology are described. T...

  6. In-flight photogrammetric camera calibration and validation via complementary lidar

    NASA Astrophysics Data System (ADS)

    Gneeniss, A. S.; Mills, J. P.; Miller, P. E.

    2015-02-01

    This research assumes lidar as a reference dataset against which in-flight camera system calibration and validation can be performed. The methodology utilises a robust least squares surface matching algorithm to align a dense network of photogrammetric points to the lidar reference surface, allowing for the automatic extraction of so-called lidar control points (LCPs). Adjustment of the photogrammetric data is then repeated using the extracted LCPs in a self-calibrating bundle adjustment with additional parameters. This methodology was tested using two different photogrammetric datasets: a Microsoft UltraCamX large format camera and an Applanix DSS322 medium format camera. Systematic sensitivity testing explored the influence of the number and weighting of LCPs. For both camera blocks it was found that as the number of control points increases, the accuracy improves regardless of point weighting. The calibration results were compared with those obtained using ground control points, with good agreement found between the two.

  7. Calibration transfer of a Raman spectroscopic quantification method for the assessment of liquid detergent compositions from at-line laboratory to in-line industrial scale.

    PubMed

    Brouckaert, D; Uyttersprot, J-S; Broeckx, W; De Beer, T

    2018-03-01

    Calibration transfer or standardisation aims at creating a uniform spectral response on different spectroscopic instruments or under varying conditions, without requiring a full recalibration for each situation. In the current study, this strategy is applied to construct at-line multivariate calibration models and subsequently employ them in-line in a continuous industrial production line, using the same spectrometer. Firstly, quantitative multivariate models are constructed at-line at laboratory scale for predicting the concentration of two main ingredients in hard surface cleaners. By regressing the Raman spectra of a set of small-scale calibration samples against their reference concentration values, partial least squares (PLS) models are developed to quantify the surfactant levels in the liquid detergent compositions under investigation. After evaluating the models' performance with a set of independent validation samples, a univariate slope/bias correction is applied in view of transporting these at-line calibration models to an in-line manufacturing set-up. This standardisation technique allows a fast and easy transfer of the PLS regression models, by simply correcting the model predictions on the in-line set-up, without adjusting anything in the original multivariate calibration models. An extensive statistical analysis is performed in order to assess the predictive quality of the transferred regression models. Before and after transfer, the R2 and RMSEP of both models are compared to evaluate whether their magnitudes are similar. T-tests are then performed to investigate whether the slope and intercept of the transferred regression line are statistically different from 1 and 0, respectively. Furthermore, it is inspected whether any significant bias is present. F-tests are executed as well, for assessing the linearity of the transfer regression line and for investigating the statistical coincidence of the transfer and validation regression lines. 
Finally, a paired t-test is performed to compare the original at-line model to the slope/bias corrected in-line model, using interval hypotheses. It is shown that the calibration models of Surfactant 1 and Surfactant 2 yield satisfactory in-line predictions after slope/bias correction. While Surfactant 1 passes seven out of eight statistical tests, the recommended validation parameters are 100% successful for Surfactant 2. It is hence concluded that the proposed strategy for transferring at-line calibration models to an in-line industrial environment via a univariate slope/bias correction of the predicted values offers a successful standardisation approach. Copyright © 2017 Elsevier B.V. All rights reserved.
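
    A univariate slope/bias correction of the kind described can be sketched in a few lines; the raw in-line predictions and reference concentrations below are invented transfer-sample data:

    ```python
    import numpy as np

    # Hypothetical transfer samples: raw in-line model predictions vs.
    # reference concentrations (illustrative values only).
    y_pred_inline = np.array([4.8, 6.1, 7.0, 8.2, 9.1])   # raw predictions
    y_ref = np.array([5.0, 6.5, 7.5, 8.8, 9.9])           # reference values

    # Univariate slope/bias correction: regress the reference values on the
    # raw predictions, then correct future predictions with the fitted line.
    slope, bias = np.polyfit(y_pred_inline, y_ref, 1)

    def correct(y_raw):
        """Apply the slope/bias correction to raw in-line predictions."""
        return slope * y_raw + bias

    y_corrected = correct(y_pred_inline)
    rmsep_before = np.sqrt(np.mean((y_ref - y_pred_inline) ** 2))
    rmsep_after = np.sqrt(np.mean((y_ref - y_corrected) ** 2))
    ```

    The appeal of this approach, as the study notes, is that the multivariate PLS model itself is left untouched; only its univariate output is rescaled for the new measurement context.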

  8. Quantitative analysis of essential oils in perfume using multivariate curve resolution combined with comprehensive two-dimensional gas chromatography.

    PubMed

    de Godoy, Luiz Antonio Fonseca; Hantao, Leandro Wang; Pedroso, Marcio Pozzobon; Poppi, Ronei Jesus; Augusto, Fabio

    2011-08-05

    The use of multivariate curve resolution (MCR) to build multivariate quantitative models using data obtained from comprehensive two-dimensional gas chromatography with flame ionization detection (GC×GC-FID) is presented and evaluated. The MCR algorithm presents some important features, such as the second-order advantage and the recovery of the instrumental response for each pure component after optimization by an alternating least squares (ALS) procedure. A model to quantify the essential oil of rosemary was built using a calibration set containing only known concentrations of the essential oil and cereal alcohol as solvent. A calibration curve correlating the concentration of the essential oil of rosemary and the instrumental response obtained from the MCR-ALS algorithm was obtained, and this calibration model was applied to predict the concentration of the oil in complex samples (mixtures of the essential oil, pineapple essence and commercial perfume). The values of the root mean square error of prediction (RMSEP) and of the root mean square error of the percentage deviation (RMSPD) obtained were 0.4% (v/v) and 7.2%, respectively. Additionally, a second model was built and used to evaluate the accuracy of the method. A model to quantify the essential oil of lemon grass was built and its concentration was predicted in the validation set and real perfume samples. The RMSEP and RMSPD obtained were 0.5% (v/v) and 6.9%, respectively, and the concentration of the essential oil of lemon grass in perfume agreed with the value reported by the manufacturer. The result indicates that the MCR algorithm is adequate to resolve the target chromatogram from the complex sample and to build multivariate models of GC×GC-FID data. Copyright © 2011 Elsevier B.V. All rights reserved.

  9. Predicting trauma patient mortality: ICD [or ICD-10-AM] versus AIS based approaches.

    PubMed

    Willis, Cameron D; Gabbe, Belinda J; Jolley, Damien; Harrison, James E; Cameron, Peter A

    2010-11-01

    The International Classification of Diseases Injury Severity Score (ICISS) has been proposed as an International Classification of Diseases (ICD)-10-based alternative to mortality prediction tools that use Abbreviated Injury Scale (AIS) data, including the Trauma and Injury Severity Score (TRISS). To date, studies have not examined the performance of ICISS using Australian trauma registry data. This study aimed to compare the performance of ICISS with other mortality prediction tools in an Australian trauma registry. This was a retrospective review of prospectively collected data from the Victorian State Trauma Registry. A training dataset was created for model development and a validation dataset for evaluation. The multiplicative ICISS model was compared with a worst injury ICISS approach, Victorian TRISS (V-TRISS, using local coefficients), maximum AIS severity and a multivariable model including ICD-10-AM codes as predictors. Models were investigated for discrimination (C-statistic) and calibration (Hosmer-Lemeshow statistic). The multivariable approach had the highest level of discrimination (C-statistic 0.90) and the best calibration (H-L 7.65, P = 0.468). Worst injury ICISS, V-TRISS and maximum AIS had similar performance. The multiplicative ICISS produced the lowest level of discrimination (C-statistic 0.80) and poorest calibration (H-L 50.23, P < 0.001). The performance of ICISS may be affected by the data used to develop estimates, the ICD version employed, the methods for deriving estimates and the inclusion of covariates. In this analysis, a multivariable approach using ICD-10-AM codes was the best-performing method. A multivariable ICISS approach may therefore be a useful alternative to AIS-based methods and may have comparable predictive performance to locally derived TRISS models. © 2010 The Authors. ANZ Journal of Surgery © 2010 Royal Australasian College of Surgeons.

  10. Comparison of Portable and Bench-Top Spectrometers for Mid-Infrared Diffuse Reflectance Measurements of Soils.

    PubMed

    Hutengs, Christopher; Ludwig, Bernard; Jung, András; Eisele, Andreas; Vohland, Michael

    2018-03-27

    Mid-infrared (MIR) spectroscopy has received widespread interest as a method to complement traditional soil analysis. Recently available portable MIR spectrometers additionally offer potential for on-site applications, given sufficient spectral data quality. We therefore tested the performance of the Agilent 4300 Handheld FTIR (DRIFT spectra) in comparison to a Bruker Tensor 27 bench-top instrument in terms of (i) spectral quality and measurement noise quantified by wavelet analysis; (ii) accuracy of partial least squares (PLS) calibrations for soil organic carbon (SOC), total nitrogen (N), pH, clay and sand content with a repeated cross-validation analysis; and (iii) key spectral regions for these soil properties identified with a Monte Carlo spectral variable selection approach. Measurements and multivariate calibrations with the handheld device were as good as or slightly better than those obtained with the Bruker instrument equipped with a DRIFT accessory, but not as accurate as with directional hemispherical reflectance (DHR) data collected with an integrating sphere. Variations in noise did not markedly affect the accuracy of multivariate PLS calibrations. Identified key spectral regions for PLS calibrations provided a good match between Agilent and Bruker DHR data, especially for SOC and N. Our findings suggest that portable FTIR instruments are a viable alternative for MIR measurements in the laboratory and offer great potential for on-site applications.

  11. Identification and quantification of ciprofloxacin in urine through excitation-emission fluorescence and three-way PARAFAC calibration.

    PubMed

    Ortiz, M C; Sarabia, L A; Sánchez, M S; Giménez, D

    2009-05-29

    Due to the second-order advantage, calibration models based on parallel factor analysis (PARAFAC) decomposition of three-way data are becoming important in routine analysis. This work studies the possibility of fitting PARAFAC models with excitation-emission fluorescence data for the determination of ciprofloxacin in human urine. The final PARAFAC decomposition is built with calibration samples spiked with ciprofloxacin, together with other series of urine samples that were also spiked. One series of samples also contained a second drug, mesalazine, because the patient was taking it; mesalazine is a fluorescent substance that interferes with ciprofloxacin. Finally, the procedure is applied to samples from a patient who was being treated with ciprofloxacin. The trueness has been established by the regression "predicted concentration versus added concentration". The recovery factor is 88.3% for ciprofloxacin in urine, and the mean of the absolute values of the relative errors is 4.2% for 46 test samples. The multivariate sensitivity of the fitted calibration model is evaluated by a regression between the PARAFAC loadings linked to ciprofloxacin and the true concentration in spiked samples. The multivariate capability of discrimination is near 8 μg L-1 when the probabilities of false non-compliance and false compliance are fixed at 5%.

  12. Sustained modelling ability of artificial neural networks in the analysis of two pharmaceuticals (dextropropoxyphene and dipyrone) present in unequal concentrations.

    PubMed

    Cámara, María S; Ferroni, Félix M; De Zan, Mercedes; Goicoechea, Héctor C

    2003-07-01

    An improvement is presented on the simultaneous determination of two active ingredients present in unequal concentrations in injections. The analysis was carried out with spectrophotometric data and non-linear multivariate calibration methods, in particular artificial neural networks (ANNs). The presence of non-linearities, caused by concentrations of the major analyte that deviate from Beer's law, was confirmed by plotting actual vs. predicted concentrations and by observing curvature in the residuals of the concentrations estimated with linear methods. Mixtures of dextropropoxyphene and dipyrone have been analysed by using linear and non-linear partial least squares (PLS and NPLS) and ANNs. Notwithstanding the high degree of spectral overlap and the occurrence of non-linearities, rapid and simultaneous analysis has been achieved, with reasonably good accuracy and precision. A commercial sample was analysed by using the present methodology, and the obtained results show reasonably good agreement with those obtained by comparative high-performance liquid chromatography (HPLC) and UV-spectrophotometric methods.

  13. Qualitative and quantitative analysis of milk for the detection of adulteration by Laser Induced Breakdown Spectroscopy (LIBS).

    PubMed

    Moncayo, S; Manzoor, S; Rosales, J D; Anzano, J; Caceres, J O

    2017-10-01

    The present work focuses on the development of a fast and cost-effective method based on Laser Induced Breakdown Spectroscopy (LIBS) for the quality control, traceability and detection of adulteration in milk. Two adulteration cases have been studied: a qualitative analysis for the discrimination between different milk blends, and the quantification of melamine in adulterated toddler milk powder. Principal Component Analysis (PCA) and neural networks (NN) have been used to analyze LIBS spectra, obtaining a correct classification rate of 98% with 100% robustness. For the quantification of melamine, two methodologies have been developed: univariate analysis using the CN emission band and a multivariate NN calibration model, obtaining correlation coefficient (R2) values of 0.982 and 0.999, respectively. The results of the use of the LIBS technique coupled with chemometric analysis are discussed in terms of its potential use in the food industry to perform the quality control of this dairy product. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Assessment of the quality attributes of cod caviar paste by means of front-face fluorescence spectroscopy.

    PubMed

    Airado-Rodríguez, Diego; Skaret, Josefine; Wold, Jens Petter

    2010-05-12

    This paper describes the fluorescent behavior of cod caviar paste stored under different conditions, in terms of light exposure and concentration of oxygen in the headspace. Multivariate curve resolution was employed to decompose the overall fluorescence spectra into pure fluorescent components and to calculate the relative concentrations of these components in the different samples. Profiles corresponding to protoporphyrin IX, photoprotoporphyrin, and fluorescent oxidation products were identified. Sensory evaluation, TBARS, and analysis of volatiles are typical methods employed in the routine analysis and quality control of such foods. Successful calibration models were established between fluorescence and those routine methods. Correlation coefficients higher than 0.80 were found for 79%, and higher than 0.90 for 50%, of the assessed odors and flavors. For instance, R values of 0.94 and 0.96 were obtained for fresh and rancid flavors, respectively, and 0.89 for TBARS. On the basis of these data, it can be argued that front-face fluorescence spectroscopy can substitute for these expensive and tedious methodologies.

  15. Calibration requirements and methodology for remote sensors viewing the ocean in the visible

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1987-01-01

    The calibration requirements for ocean-viewing sensors are outlined, and the present methods of effecting such calibration are described in detail. For future instruments it is suggested that provision be made for the sensor to view solar irradiance in diffuse reflection and that the moon be used as a source of diffuse light for monitoring the sensor stability.

  16. Estimation of the limit of detection in semiconductor gas sensors through linearized calibration models.

    PubMed

    Burgués, Javier; Jiménez-Soto, Juan Manuel; Marco, Santiago

    2018-07-12

    The limit of detection (LOD) is a key figure of merit in chemical sensing. However, the estimation of this figure of merit is hindered by the non-linear calibration curve characteristic of semiconductor gas sensor technologies such as metal oxide (MOX), gasFET or thermoelectric sensors. Additionally, chemical sensors suffer from cross-sensitivities and temporal stability problems. The application of the International Union of Pure and Applied Chemistry (IUPAC) recommendations for univariate LOD estimation to non-linear semiconductor gas sensors is not straightforward due to the strong statistical requirements of the IUPAC methodology (linearity, homoscedasticity, normality). Here, we propose a methodological approach to LOD estimation through linearized calibration models. As an example, the methodology is applied to the detection of low concentrations of carbon monoxide using MOX gas sensors in a scenario where the main source of error is the presence of uncontrolled levels of humidity. Copyright © 2018 Elsevier B.V. All rights reserved.
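
    Once a calibration has been linearized, a common IUPAC-style estimate is LOD = 3.3 s/b, where b is the slope of the fitted line and s the standard deviation of the residuals. A hedged numpy sketch with hypothetical sensor data (not taken from the paper):

```python
import numpy as np

# Hypothetical linearized sensor responses vs. CO concentration (ppm);
# the values are illustrative only.
conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
resp = np.array([0.02, 1.05, 1.98, 3.10, 3.95, 5.04])

# Ordinary least-squares line through the calibration points
slope, intercept = np.polyfit(conc, resp, 1)

# Standard deviation of the residuals about the fitted line (n - 2 dof)
resid = resp - (slope * conc + intercept)
s_res = np.sqrt(np.sum(resid ** 2) / (len(conc) - 2))

# IUPAC-style estimate of the limit of detection
lod = 3.3 * s_res / slope
```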

  17. Calibration Modeling Methodology to Optimize Performance for Low Range Applications

    NASA Technical Reports Server (NTRS)

    McCollum, Raymond A.; Commo, Sean A.; Parker, Peter A.

    2010-01-01

    Calibration is a vital process in characterizing the performance of an instrument in an application environment and seeks to obtain acceptable accuracy over the entire design range. Often, project requirements specify a maximum total measurement uncertainty, expressed as a percent of full-scale. However in some applications, we seek to obtain enhanced performance at the low range, therefore expressing the accuracy as a percent of reading should be considered as a modeling strategy. For example, it is common to desire to use a force balance in multiple facilities or regimes, often well below its designed full-scale capacity. This paper presents a general statistical methodology for optimizing calibration mathematical models based on a percent of reading accuracy requirement, which has broad application in all types of transducer applications where low range performance is required. A case study illustrates the proposed methodology for the Mars Entry Atmospheric Data System that employs seven strain-gage based pressure transducers mounted on the heatshield of the Mars Science Laboratory mission.

  18. Uncertainty Analysis of Instrument Calibration and Application

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimating both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.
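
    The propagation of individual measurement uncertainties through a defining functional expression can be sketched with first-order (Taylor) propagation for independent inputs; this is a generic illustration, not the paper's full treatment of bias, correlation and covariance:

```python
import numpy as np

# First-order propagation of independent measurement uncertainties
# through a function f(x), using a central-difference gradient.
def propagate(f, x, u, h=1e-6):
    """Combined standard uncertainty of f at x, given per-input
    standard uncertainties u, assuming independent inputs."""
    x = np.asarray(x, dtype=float)
    u = np.asarray(u, dtype=float)
    grad = np.empty_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = h
        grad[i] = (f(x + step) - f(x - step)) / (2 * h)
    return np.sqrt(np.sum((grad * u) ** 2))

# Example: dynamic pressure q = 0.5 * rho * v**2 with hypothetical
# uncertainties in density and velocity
q = lambda p: 0.5 * p[0] * p[1] ** 2
u_q = propagate(q, x=[1.2, 50.0], u=[0.01, 0.5])  # ≈ 32.5
```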

  19. Variety identification of brown sugar using short-wave near infrared spectroscopy and multivariate calibration

    NASA Astrophysics Data System (ADS)

    Yang, Haiqing; Wu, Di; He, Yong

    2007-11-01

    Near-infrared spectroscopy (NIRS) is a pollution-free, rapid method for quantitative and qualitative analysis, characterized by high speed, non-destructiveness, high precision and reliable detection data. A new approach for variety discrimination of brown sugars using short-wave NIR spectroscopy (800-1050 nm) was developed in this work. The relationship between the absorbance spectra and brown sugar varieties was established. The spectral data were compressed by principal component analysis (PCA). The resulting features can be visualized in principal component (PC) space, which can lead to the discovery of structures correlated with the different classes of spectral samples, and appears to provide a reasonable variety clustering of brown sugars. The 2-D plot of the first two PCs can be used for pattern recognition. Least-squares support vector machines (LS-SVM) were applied to solve the multivariate calibration problem in a relatively fast way. The work has shown that the short-wave NIR spectroscopy technique is suitable for the brand identification of brown sugar, and that LS-SVM has better identification ability than PLS when the calibration set is small.
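
    The PCA compression step described above reduces each spectrum to a few principal-component scores that can be plotted in PC space. A minimal numpy sketch on synthetic spectra (real brown-sugar spectra would replace the random matrix):

```python
import numpy as np

# Synthetic stand-in for a spectral data matrix: 20 samples x 100 wavelengths
rng = np.random.default_rng(0)
spectra = rng.normal(size=(20, 100))

X = spectra - spectra.mean(axis=0)            # mean-centre each wavelength
U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = U * s                                # PC scores (samples x PCs)
explained = s ** 2 / np.sum(s ** 2)           # variance ratio per PC

pc12 = scores[:, :2]                          # first two PCs for a 2-D score plot
```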

  20. Analysis of Lard in Lipstick Formulation Using FTIR Spectroscopy and Multivariate Calibration: A Comparison of Three Extraction Methods.

    PubMed

    Waskitho, Dri; Lukitaningsih, Endang; Sudjadi; Rohman, Abdul

    2016-01-01

    Analysis of lard extracted from a lipstick formulation containing castor oil has been performed using an FTIR spectroscopic method combined with multivariate calibration. Three different extraction methods were compared, namely a saponification method followed by liquid/liquid extraction with hexane/dichloromethane/ethanol/water, a saponification method followed by liquid/liquid extraction with dichloromethane/ethanol/water, and the Bligh & Dyer method using chloroform/methanol/water as extracting solvent. Qualitative and quantitative analyses of lard were performed using principal component analysis (PCA) and partial least squares (PLS) analysis, respectively. The results showed that, in all samples prepared by the three extraction methods, PCA was capable of identifying lard in the wavelength region of 1200-800 cm-1, with the best result obtained by the Bligh & Dyer method. Furthermore, PLS analysis in the same wavelength region used for qualification showed that Bligh & Dyer was the most suitable extraction method, with the highest determination coefficient (R2) and the lowest root mean square error of calibration (RMSEC) as well as root mean square error of prediction (RMSEP) values.

  1. Assessing importance and satisfaction judgments of intermodal work commuters with electronic survey methodology.

    DOT National Transportation Integrated Search

    2013-09-01

    Recent advances in multivariate methodology provide an opportunity to further the assessment of service offerings in public transportation for work commuting. We offer methodologies that are alternative to direct rating scale and have advantages in t...

  2. Application of a multivariate normal distribution methodology to the dissociation of doubly ionized molecules: The DMDS (CH3 -SS-CH3 ) case.

    PubMed

    Varas, Lautaro R; Pontes, F C; Santos, A C F; Coutinho, L H; de Souza, G G B

    2015-09-15

    The ion-ion-coincidence mass spectrometry technique provides useful information about the fragmentation dynamics of doubly and multiply charged ionic species. We advocate the use of a matrix-parameter methodology in order to represent and interpret the entire ion-ion spectra associated with the ionic dissociation of doubly charged molecules. This method makes it possible, among other things, to infer fragmentation processes and to extract information about overlapped ion-ion coincidences. This important piece of information is difficult to obtain from other previously described methodologies. A Wiley-McLaren time-of-flight mass spectrometer was used to discriminate the positively charged fragment ions resulting from the sample ionization by a pulsed 800 eV electron beam. We exemplify the application of this methodology by analyzing the fragmentation and ionic dissociation of the dimethyl disulfide (DMDS) molecule as induced by fast electrons. The doubly charged dissociation was analyzed using the Multivariate Normal Distribution. The ion-ion spectrum of the DMDS molecule was obtained at an incident electron energy of 800 eV and was represented in matrix form using Multivariate Normal Distribution theory. The proposed methodology allows us to distinguish information among [CHnSHn]+/[CH3]+ (n = 1-3) fragment ions in the ion-ion coincidence spectra. Using the momenta balance methodology for the inferred parameters, a secondary decay mechanism is proposed for the [CHS]+ ion formation. As an additional check on the methodology, previously published data on the SiF4 molecule were re-analyzed with the present methodology and the results were shown to be statistically equivalent. The use of a Multivariate Normal Distribution allows for the representation of the whole ion-ion mass spectrum of doubly or multiply ionized molecules as a combination of parameters and the extraction of information among overlapped data. We have successfully applied this methodology to the analysis of the fragmentation of the DMDS molecule. Copyright © 2015 John Wiley & Sons, Ltd.

  3. Infrared stereo calibration for unmanned ground vehicle navigation

    NASA Astrophysics Data System (ADS)

    Harguess, Josh; Strange, Shawn

    2014-06-01

    The problem of calibrating two color cameras as a stereo pair has been heavily researched, and many off-the-shelf software packages, such as Robot Operating System and OpenCV, include calibration routines that work in most cases. However, the problem of calibrating two infrared (IR) cameras for the purposes of sensor fusion and point cloud generation is relatively new and many challenges exist. We present a comparison of color camera and IR camera stereo calibration using data from an unmanned ground vehicle. There are two main challenges in IR stereo calibration: the calibration board (material, design, etc.) and the accuracy of calibration pattern detection. We present our analysis of these challenges along with our IR stereo calibration methodology. Finally, we present our results both visually and analytically with computed reprojection errors.

  4. Use of partial least squares regression for the multivariate calibration of hazardous air pollutants in open-path FT-IR spectrometry

    NASA Astrophysics Data System (ADS)

    Hart, Brian K.; Griffiths, Peter R.

    1998-06-01

    Partial least squares (PLS) regression has been evaluated as a robust calibration technique for over 100 hazardous air pollutants (HAPs) measured by open-path Fourier transform infrared (OP/FT-IR) spectrometry. PLS has the advantage over the currently recommended calibration method, classical least squares (CLS), in that it can use the whole usable spectrum (700-1300 cm-1, 2000-2150 cm-1, and 2400-3000 cm-1) and detect several analytes simultaneously. Up to one hundred HAPs synthetically added to OP/FT-IR backgrounds have been simultaneously calibrated and detected using PLS. PLS also requires less preprocessing of spectra than CLS calibration schemes, allowing PLS to provide user-independent real-time analysis of OP/FT-IR spectra.
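
    For contrast with PLS, the CLS baseline mentioned above models each mixture spectrum as a linear combination of known pure-component spectra and solves for the concentrations by least squares. A synthetic numpy sketch (random stand-in spectra, not real HAP data):

```python
import numpy as np

# Classical least squares (CLS): mixture spectrum a = K @ c, where the
# columns of K are known pure-component spectra and c the concentrations.
rng = np.random.default_rng(1)
K = rng.random((200, 3))                 # 200 wavenumbers x 3 pure analytes
c_true = np.array([0.2, 0.5, 0.3])       # true concentrations
a = K @ c_true + rng.normal(scale=1e-3, size=200)  # noisy mixture spectrum

# Least-squares estimate of the concentrations from the mixture spectrum
c_hat, *_ = np.linalg.lstsq(K, a, rcond=None)
```

Unlike PLS, this requires the spectrum of every absorbing species in the mixture to be known in advance, which is the practical limitation the abstract alludes to.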

  5. The Calibration of AVHRR/3 Visible Dual Gain Using Meteosat-8 as a MODIS Calibration Transfer Medium

    NASA Technical Reports Server (NTRS)

    Avey, Lance; Garber, Donald; Nguyen, Louis; Minnis, Patrick

    2007-01-01

    This viewgraph presentation reviews the NOAA-17 AVHRR visible channels calibrated against MET-8/MODIS using dual gain regression methods. The topics include: 1) Motivation; 2) Methodology; 3) Dual Gain Regression Methods; 4) Examples of Regression methods; 5) AVHRR/3 Regression Strategy; 6) Cross-Calibration Method; 7) Spectral Response Functions; 8) MET8/NOAA-17; 9) Example of gain ratio adjustment; 10) Effect of mixed low/high count FOV; 11) Monitor dual gains over time; and 12) Conclusions

  6. Use of chemometrics to compare NIR and HPLC for the simultaneous determination of drug levels in fixed-dose combination tablets employed in tuberculosis treatment.

    PubMed

    Teixeira, Kelly Sivocy Sampaio; da Cruz Fonseca, Said Gonçalves; de Moura, Luís Carlos Brigido; de Moura, Mario Luís Ribeiro; Borges, Márcia Herminia Pinheiro; Barbosa, Euzébio Guimaraes; De Lima E Moura, Túlio Flávio Accioly

    2018-02-05

    The World Health Organization recommends that TB treatment be administered using combination therapy. The methodologies for simultaneously quantifying associated drugs are highly complex, costly, extremely time-consuming, and produce chemical residues harmful to the environment. The need to seek alternative techniques that minimize these drawbacks is widely discussed in the pharmaceutical industry. Therefore, the objective of this study was to develop and validate a multivariate calibration model in association with the near infrared spectroscopy technique (NIR) for the simultaneous determination of rifampicin, isoniazid, pyrazinamide and ethambutol. These models allow the quality control of these medicines to be optimized using simple, fast, low-cost techniques that produce no chemical waste. In the NIR-PLS method, spectra were acquired in the 10,000-4000 cm-1 range using an infrared spectrophotometer (IRPrestige-21, Shimadzu) with a resolution of 4 cm-1, 20 sweeps, under controlled temperature and humidity. For construction of the model, a central composite experimental design was employed in the program Statistica 13 (StatSoft Inc.). All spectra were treated by computational tools for multivariate analysis using partial least squares regression (PLS) in the software program Pirouette 3.11 (Infometrix, Inc.). Variable selections were performed with the QSAR modeling program. The models developed by NIR in association with multivariate analysis provided good prediction of the APIs for the external samples and were therefore validated. For the tablets, however, the slightly different quantitative compositions of excipients compared to the mixtures prepared for building the models led to results that were not statistically similar, despite having prediction errors considered acceptable in the literature. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Comparative study between derivative spectrophotometry and multivariate calibration as analytical tools applied for the simultaneous quantitation of Amlodipine, Valsartan and Hydrochlorothiazide

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2013-09-01

    Four simple, accurate and specific methods were developed and validated for the simultaneous estimation of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in commercial tablets. The derivative spectrophotometric methods include the Derivative Ratio Zero Crossing (DRZC) and Double Divisor Ratio Spectra-Derivative Spectrophotometry (DDRS-DS) methods, while the multivariate calibrations used are Principal Component Regression (PCR) and Partial Least Squares (PLS). The proposed methods were applied successfully to the determination of the drugs in laboratory-prepared mixtures and in commercial pharmaceutical preparations. The validity of the proposed methods was assessed using the standard addition technique. The linearity of the proposed methods was investigated in the ranges of 2-32, 4-44 and 2-20 μg/mL for AML, VAL and HCT, respectively.
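
    The ratio-spectra derivative step underlying methods of this family can be sketched numerically: divide the mixture spectrum by a standard spectrum of one component, then differentiate with respect to wavelength. A synthetic illustration with Gaussian bands (not real drug spectra):

```python
import numpy as np

# Synthetic wavelength axis and two overlapping Gaussian absorption bands
wl = np.linspace(200, 400, 201)                    # wavelength, nm
interf = np.exp(-((wl - 280) / 30) ** 2)           # divisor: interferent standard
analyte = np.exp(-((wl - 320) / 25) ** 2)          # analyte band
mixture = 0.6 * analyte + 0.4 * interf             # two-component mixture

# Ratio spectrum: the interferent contribution becomes a constant offset,
# which the derivative then removes
ratio = mixture / interf
deriv = np.gradient(ratio, wl)                     # first derivative vs. wavelength
```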

  8. Bayesian Framework for Water Quality Model Uncertainty Estimation and Risk Management

    EPA Science Inventory

    A formal Bayesian methodology is presented for integrated model calibration and risk-based water quality management using Bayesian Monte Carlo simulation and maximum likelihood estimation (BMCML). The primary focus is on lucid integration of model calibration with risk-based wat...

  9. A methodology for obtaining on-orbit SI-traceable spectral radiance measurements in the thermal infrared

    NASA Astrophysics Data System (ADS)

    Dykema, John A.; Anderson, James G.

    2006-06-01

    A methodology to achieve spectral thermal radiance measurements from space with demonstrable on-orbit traceability to the International System of Units (SI) is described. This technique results in measurements of infrared spectral radiance R(ν̃), with spectral index ν̃ in cm-1, with a relative combined uncertainty u_c[R(ν̃)] of 0.0015 (k = 1) for the average mid-infrared radiance emitted by the Earth. This combined uncertainty, expressed in brightness temperature units, is equivalent to ±0.1 K at 250 K at 750 cm-1. This measurement goal is achieved by utilizing a new method for infrared scale realization combined with an instrument design optimized to minimize component uncertainties and admit tests of radiometric performance. The SI traceability of the instrument scale is established by evaluation against source-based and detector-based infrared scales in defined laboratory protocols before launch. A novel strategy is executed to ensure fidelity of on-orbit calibration to the pre-launch scale. This strategy for on-orbit validation relies on the overdetermination of instrument calibration. The pre-launch calibration against scales derived from physically independent paths to the base SI units provides the foundation for a critical analysis of the overdetermined on-orbit calibration to establish an SI-traceable estimate of the combined measurement uncertainty. Redundant calibration sources and built-in diagnostic tests to assess component measurement uncertainties verify the SI traceability of the instrument calibration over the mission lifetime. This measurement strategy can be realized by a practical instrument, a prototype Fourier-transform spectrometer under development for deployment on a small satellite. The measurement record resulting from the methodology described here meets the observational requirements for climate monitoring and climate model testing and improvement.

  10. Langley Wind Tunnel Data Quality Assurance-Check Standard Results

    NASA Technical Reports Server (NTRS)

    Hemsch, Michael J.; Grubb, John P.; Krieger, William B.; Cler, Daniel L.

    2000-01-01

    A framework for statistical evaluation, control and improvement of wind tunnel measurement processes is presented. The methodology is adapted from elements of the Measurement Assurance Plans developed by the National Bureau of Standards (now the National Institute of Standards and Technology) for standards and calibration laboratories. The present methodology is based on the notions of statistical quality control (SQC) together with check standard testing and a small number of customer repeat-run sets. The results of check standard and customer repeat-run sets are analyzed using the statistical control chart methods of Walter A. Shewhart, long familiar to the SQC community. Control chart results are presented for various measurement processes in five facilities at Langley Research Center. The processes include test section calibration, force and moment measurements with a balance, and instrument calibration.

  11. MODIS In-flight Calibration Methodologies

    NASA Technical Reports Server (NTRS)

    Xiong, X.; Barnes, W.

    2004-01-01

    MODIS is a key instrument for NASA's Earth Observing System (EOS), currently operating on the Terra spacecraft launched in December 1999 and the Aqua spacecraft launched in May 2002. It is a cross-track scanning radiometer, making measurements over a wide field of view in 36 spectral bands with wavelengths from 0.41 to 14.5 micrometers and providing calibrated data products for science and research communities in their studies of the Earth's system of land, oceans, and atmosphere. A complete suite of on-board calibrators (OBC) has been designed for the instrument's in-flight calibration and characterization, including a solar diffuser (SD) and solar diffuser stability monitor (SDSM) system for the radiometric calibration of the 20 reflective solar bands (RSB), a blackbody (BB) for the radiometric calibration of the 16 thermal emissive bands (TEB), and a spectro-radiometric calibration assembly (SRCA) for the spatial (all bands) and spectral (RSB only) characterization. This paper discusses MODIS in-flight calibration methodologies using its on-board calibrators. Challenging issues and examples of tracking and correcting instrument on-orbit response changes are presented, including SD degradation (20% at 412 nm, 12% at 466 nm, and 7% at 530 nm over four and a half years) and response versus scan angle changes (10%, 4%, and 1% differences between the beginning and the end of the scan at 412 nm, 466 nm, and 530 nm) in the VIS spectral region. Current instrument performance and lessons learned are also provided.

  12. OrbView-3 Technical Performance Evaluation 2005: Modulation Transfer Function

    NASA Technical Reports Server (NTRS)

    Cole, Aaron

    2007-01-01

    The technical performance evaluation of OrbView-3 using the Modulation Transfer Function (MTF) is presented. The contents include: 1) MTF Results and Methodology; 2) Radiometric Calibration Methodology; and 3) Relative Radiometric Assessment Results.

  13. Calibration of Resistance Factors Needed in the LRFD Design of Drilled Shafts

    DOT National Transportation Integrated Search

    2010-09-01

    The first report on Load and Resistance Factor Design (LRFD) calibration of driven piles in Louisiana (LTRC Final Report 449) was completed in May 2009. As a continuing effort to implement the LRFD design methodology for deep foundations in Louisia...

  14. Calibration of resistance factors needed in the LRFD design of drilled shafts.

    DOT National Transportation Integrated Search

    2010-09-01

    The first report on Load and Resistance Factor Design (LRFD) calibration of driven piles in Louisiana (LTRC Final Report 449) was completed in May 2009. As a continuing effort to implement the LRFD design methodology for deep foundations in Louisiana...

  16. A Multivariate Methodological Workflow for the Analysis of FTIR Chemical Mapping Applied on Historic Paint Stratigraphies

    PubMed Central

    Sciutto, Giorgia; Oliveri, Paolo; Catelli, Emilio; Bonacini, Irene

    2017-01-01

    In the field of applied research in heritage science, the use of multivariate approaches is still quite limited, and the chemometric results obtained are often underinterpreted. Within this scenario, the present paper aims to disseminate the use of suitable multivariate methodologies and proposes a procedural workflow, applied to a representative group of case studies of considerable importance for conservation purposes, as a guideline for the processing and interpretation of FTIR data. Initially, principal component analysis (PCA) is performed and the score values are converted into chemical maps. Subsequently, the brushing approach is applied, demonstrating its usefulness for a deep understanding of the relationships between the multivariate map and the PC score space, as well as for the identification of the spectral bands mainly involved in the definition of each area localised within the score maps. PMID:29333162
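
    The PCA-to-chemical-map step described above can be sketched as follows. This is a generic illustration, not the authors' code: the spectra are synthetic stand-ins for FTIR data, and the 4x5 map size and embedded band are arbitrary assumptions.

```python
import numpy as np

# Illustrative sketch: PCA on a stack of spectra from a 4x5 mapped area,
# with the PC1 scores folded back into a spatial "chemical map".
rng = np.random.default_rng(0)
h, w, n_wavenumbers = 4, 5, 50
spectra = rng.normal(size=(h * w, n_wavenumbers))
# Embed a simple spatial pattern: the right part of the map has an extra band.
band = np.exp(-0.5 * ((np.arange(n_wavenumbers) - 30) / 2.0) ** 2)
mask = (np.arange(h * w) % w) >= w // 2
spectra[mask] += 5.0 * band

# PCA via SVD of the mean-centred data matrix.
X = spectra - spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = U * s                           # PC scores, one row per pixel
score_map = scores[:, 0].reshape(h, w)   # PC1 score map of the mapped area
loadings = Vt[0]                         # PC1 loading spectrum: bands driving the map
```

    The loading spectrum plays the role the brushing step exploits: it identifies which spectral bands are responsible for the areas that stand out in the score map.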

  17. Prediction of SOFC Performance with or without Experiments: A Study on Minimum Requirements for Experimental Data

    DOE PAGES

    Yang, Tao; Sezer, Hayri; Celik, Ismail B.; ...

    2015-06-02

    In the present paper, a physics-based procedure combining experiments and multi-physics numerical simulations is developed for the overall analysis of SOFC operational diagnostics and performance prediction. In this procedure, essential information for the fuel cell is first extracted by utilizing empirical polarization analysis in conjunction with experiments, and then refined by multi-physics numerical simulations via simultaneous analysis and calibration of the polarization curve and impedance behavior. The performance at different utilization cases and operating currents is also predicted to confirm the accuracy of the proposed model. It is demonstrated that, with the present electrochemical model, three air/fuel flow conditions are needed to produce a complete data set for better understanding of the processes occurring within SOFCs. After calibration against button cell experiments, the methodology can be used to assess the performance of a planar cell without further calibration. The proposed methodology would accelerate the calibration process and improve the efficiency of design and diagnostics.

  18. Single-Vector Calibration of Wind-Tunnel Force Balances

    NASA Technical Reports Server (NTRS)

    Parker, P. A.; DeLoach, R.

    2003-01-01

    An improved method of calibrating a wind-tunnel force balance involves the use of a unique load application system integrated with formal experimental design methodology. The Single-Vector Force Balance Calibration System (SVS) overcomes the productivity and accuracy limitations of prior calibration methods. A force balance is a complex structural spring element instrumented with strain gauges for measuring three orthogonal components of aerodynamic force (normal, axial, and side force) and three orthogonal components of aerodynamic torque (rolling, pitching, and yawing moments). Force balances remain the state-of-the-art instruments that provide these measurements on a scale model of an aircraft during wind tunnel testing. Ideally, each electrical channel of the balance would respond only to its respective component of load and have no response to other components of load. This is not entirely possible, even though balance designs are optimized to minimize these undesirable interaction effects. Ultimately, a calibration experiment is performed to obtain the data needed to generate a mathematical model and determine the force measurement accuracy. In order to set the independent variables of applied load for the calibration experiment, a high-precision mechanical system is required. Manual deadweight systems have been in use at Langley Research Center (LaRC) since the 1940s. These simple methodologies produce high-confidence results, but the process is mechanically complex and labor-intensive, requiring three to four weeks to complete. Over the past decade, automated balance calibration systems have been developed. In general, these systems were designed to automate the tedious manual calibration process, resulting in even more complex systems that degrade load application quality.
    The current calibration approach relies on a one-factor-at-a-time (OFAT) methodology, where each independent variable is incremented individually throughout its full-scale range while all other variables are held at a constant magnitude. This OFAT approach has been widely accepted because of its inherent simplicity and intuitive appeal to the balance engineer. LaRC has been conducting research in a "modern design of experiments" (MDOE) approach to force balance calibration. Formal experimental design techniques provide an integrated view of the entire calibration process, covering all three major aspects of an experiment: the design of the experiment, the execution of the experiment, and the statistical analysis of the data. In order to overcome the weaknesses of the available mechanical systems and to apply formal experimental techniques, a new mechanical system was required. The SVS enables the complete calibration of a six-component force balance with a series of single force vectors.
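
    The data-reduction step behind any such calibration can be sketched as a least-squares fit of a sensitivity matrix, interaction terms included. The purely linear model and the numbers below are illustrative assumptions; actual balance math models generally also carry higher-order terms.

```python
import numpy as np

# Hypothetical sketch: given applied six-component load vectors and the six
# bridge outputs they produced, estimate the linear sensitivity matrix,
# including the off-diagonal interaction terms, by least squares.
rng = np.random.default_rng(1)
true_C = np.eye(6) + 0.02 * rng.normal(size=(6, 6))  # small interaction effects
loads = rng.uniform(-1.0, 1.0, size=(40, 6))          # 40 applied load vectors
outputs = loads @ true_C.T                            # noise-free bridge responses

# Solve loads @ C.T = outputs for the sensitivity matrix C.
C_T, *_ = np.linalg.lstsq(loads, outputs, rcond=None)
C_hat = C_T.T
```

    With noise-free synthetic data the fit recovers the matrix exactly; in a real calibration the design of the load schedule (OFAT vs. MDOE) governs how well the interaction terms are estimated in the presence of noise.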

  19. Calibration of Safecast dose rate measurements.

    PubMed

    Cervone, Guido; Hultquist, Carolynne

    2018-10-01

    A methodology is presented to calibrate contributed Safecast dose rate measurements acquired between 2011 and 2016 in the Fukushima prefecture of Japan. The Safecast data are calibrated using observations acquired by the U.S. Department of Energy at the time of the 2011 Fukushima Daiichi power plant nuclear accident. The methodology performs a series of interpolations between the U.S. government and contributed datasets at specific temporal windows and at corresponding spatial locations. The coefficients found for all the different temporal windows are aggregated and interpolated using quadratic regressions to generate a time-dependent calibration function. Normal background radiation, decay rates, and missing values are taken into account during the analysis. Results show that the standard Safecast static transformation function overestimates the official measurements because it fails to capture the presence of two different cesium isotopes and their changing relative magnitudes with time. A model is created to predict the ratio of the isotopes from the time of the accident through 2020. The proposed time-dependent calibration takes this cesium isotope ratio into account and is shown to reduce the error between U.S. government and contributed data. The proposed calibration is needed through 2020, after which the errors introduced by ignoring the presence of different isotopes will become negligible. Copyright © 2018 Elsevier Ltd. All rights reserved.
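
    The isotope-ratio effect underlying the time-dependent calibration can be illustrated with the published half-lives of 134Cs (about 2.06 years) and 137Cs (about 30.17 years). The initial activity ratio of ~1 is a commonly reported assumption for the Fukushima release, not a value taken from this paper.

```python
import math

# Illustrative sketch: the 134Cs/137Cs activity ratio decays over time because
# the two isotopes have very different half-lives, so a static dose-rate
# transformation calibrated early after the accident drifts with time.
HALF_LIFE_CS134 = 2.06    # years (approximate)
HALF_LIFE_CS137 = 30.17   # years (approximate)

def activity_ratio(years_since_accident, initial_ratio=1.0):
    """134Cs/137Cs activity ratio after the given elapsed time."""
    decay_134 = math.exp(-math.log(2.0) * years_since_accident / HALF_LIFE_CS134)
    decay_137 = math.exp(-math.log(2.0) * years_since_accident / HALF_LIFE_CS137)
    return initial_ratio * decay_134 / decay_137

# By 2020, roughly nine years after March 2011, the ratio is small, which is
# why the abstract expects the isotope correction to become negligible.
ratio_2020 = activity_ratio(9.0)
```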

  20. A methodology to develop computational phantoms with adjustable posture for WBC calibration

    NASA Astrophysics Data System (ADS)

    Ferreira Fonseca, T. C.; Bogaerts, R.; Hunt, John; Vanhavere, F.

    2014-11-01

    A Whole Body Counter (WBC) is a facility for routinely assessing the internal contamination of exposed workers, especially in the case of radiation release accidents. The calibration of the counting device is usually done using anthropomorphic physical phantoms representing the human body. Because constructing representative physical phantoms is challenging, virtual calibration has been introduced: the use of computational phantoms and the Monte Carlo method to simulate radiation transport has been demonstrated to be a worthy alternative. In this study we introduce a methodology for the creation of realistic computational voxel phantoms with adjustable posture for WBC calibration. The methodology uses different software packages to enable the creation and modification of computational voxel phantoms, allowing voxel phantoms to be developed on demand for the calibration of different WBC configurations. This in turn helps to study the major source of uncertainty in the in vivo measurement routine, namely the difference between the calibration phantoms and the real persons being counted. The use of realistic computational phantoms also helps optimize the counting measurement. The open-source MakeHuman and Blender software packages were used for the creation and modelling of 3D humanoid characters based on polygonal mesh surfaces, and in-house software was developed to convert the binary 3D voxel grid into an MCNPX input file. This paper summarizes the development of a library of phantoms of the human body that uses two basic phantoms, MaMP and FeMP (Male and Female Mesh Phantoms), to create sets of male and female phantoms that vary in both height and weight. Two sets of MaMP and FeMP phantoms were developed and used for efficiency calibration of two different WBC set-ups: the Doel NPP WBC laboratory and the AGM laboratory of SCK-CEN in Mol, Belgium.

  2. Self-Calibration and Optimal Response in Intelligent Sensors Design Based on Artificial Neural Networks

    PubMed Central

    Rivera, José; Carrillo, Mariano; Chacón, Mario; Herrera, Gilberto; Bojorquez, Gilberto

    2007-01-01

    The development of smart sensors involves the design of reconfigurable systems capable of working with different input sensors. Reconfigurable systems should ideally spend the least possible amount of time on their calibration. An autocalibration algorithm for intelligent sensors should be able to fix major problems such as offset, gain variation and lack of linearity as accurately as possible. This paper describes a new autocalibration methodology for nonlinear intelligent sensors based on artificial neural networks (ANN). The methodology involves analysis of several network topologies and training algorithms. The proposed method was compared against the piecewise and polynomial linearization methods, using different numbers of calibration points and several nonlinearity levels of the input signal. The proposed method turned out to have better overall accuracy than the other two methods. Besides the experimental results and analysis of the complete study, the paper describes the implementation of the ANN in a microcontroller unit (MCU). To illustrate the method's capability to build autocalibrated and reconfigurable systems, a temperature measurement system was designed and tested. The proposed method is an improvement over classic autocalibration methodologies because it impacts the design process of intelligent sensors, autocalibration methodologies and their associated factors, such as time and cost.
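
    The piecewise-linearization baseline that the ANN method is compared against can be sketched as follows; the square-root sensor nonlinearity and the calibration points are made up for illustration.

```python
# Sketch of piecewise linearization: calibration points pair the raw
# (nonlinear) sensor output with the known reference input, and new readings
# are corrected by linear interpolation between the two nearest points.

def make_piecewise_calibration(raw_points, true_points):
    pairs = sorted(zip(raw_points, true_points))
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]

    def calibrate(raw):
        if raw <= xs[0]:
            return ys[0]
        if raw >= xs[-1]:
            return ys[-1]
        for i in range(1, len(xs)):
            if raw <= xs[i]:
                t = (raw - xs[i - 1]) / (xs[i] - xs[i - 1])
                return ys[i - 1] + t * (ys[i] - ys[i - 1])
    return calibrate

# Hypothetical sensor with a square-root nonlinearity: raw = sqrt(true).
true_inputs = [0.0, 1.0, 4.0, 9.0, 16.0]
raw_outputs = [x ** 0.5 for x in true_inputs]
calibrate = make_piecewise_calibration(raw_outputs, true_inputs)
```

    Between calibration points the correction is only approximate (a raw reading of 2.5 maps to 6.5 instead of the true 6.25), which is the residual nonlinearity the ANN approach aims to reduce.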

  3. Multivariate postprocessing techniques for probabilistic hydrological forecasting

    NASA Astrophysics Data System (ADS)

    Hemri, Stephan; Lisniak, Dmytro; Klein, Bastian

    2016-04-01

    Hydrologic ensemble forecasts driven by atmospheric ensemble prediction systems need statistical postprocessing in order to account for systematic errors in terms of both mean and spread. Runoff is an inherently multivariate process with typical events lasting from hours in case of floods to weeks or even months in case of droughts. This calls for multivariate postprocessing techniques that yield well calibrated forecasts in univariate terms and ensure a realistic temporal dependence structure at the same time. To this end, the univariate ensemble model output statistics (EMOS; Gneiting et al., 2005) postprocessing method is combined with two different copula approaches that ensure multivariate calibration throughout the entire forecast horizon. These approaches comprise ensemble copula coupling (ECC; Schefzik et al., 2013), which preserves the dependence structure of the raw ensemble, and a Gaussian copula approach (GCA; Pinson and Girard, 2012), which estimates the temporal correlations from training observations. Both methods are tested in a case study covering three subcatchments of the river Rhine that represent different sizes and hydrological regimes: the Upper Rhine up to the gauge Maxau, the river Moselle up to the gauge Trier, and the river Lahn up to the gauge Kalkofen. The results indicate that both ECC and GCA are suitable for modelling the temporal dependences of probabilistic hydrologic forecasts (Hemri et al., 2015). References Gneiting, T., A. E. Raftery, A. H. Westveld, and T. Goldman (2005), Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation, Monthly Weather Review, 133(5), 1098-1118, DOI: 10.1175/MWR2904.1. Hemri, S., D. Lisniak, and B. Klein, Multivariate postprocessing techniques for probabilistic hydrological forecasting, Water Resources Research, 51(9), 7436-7451, DOI: 10.1002/2014WR016473. Pinson, P., and R. Girard (2012), Evaluating the quality of scenarios of short-term wind power generation, Applied Energy, 96, 12-20, DOI: 10.1016/j.apenergy.2011.11.004. Schefzik, R., T. L. Thorarinsdottir, and T. Gneiting (2013), Uncertainty quantification in complex simulation models using ensemble copula coupling, Statistical Science, 28, 616-640, DOI: 10.1214/13-STS443.
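
    The ECC step described above can be sketched as a simple rank-reordering: at each lead time, the univariately postprocessed quantiles are reassigned to ensemble members according to the ranks of the raw ensemble, so the raw temporal dependence structure is preserved. This is a generic illustration, not the authors' code.

```python
# Ensemble copula coupling (ECC) sketch: reorder calibrated quantiles at each
# lead time to follow the rank structure of the raw ensemble.

def ecc_reorder(raw_ensemble, calibrated_quantiles):
    """Both arguments: list over lead times of equal-length member lists."""
    reordered = []
    for raw, quantiles in zip(raw_ensemble, calibrated_quantiles):
        sorted_q = sorted(quantiles)
        # rank of each raw member at this lead time (0 = smallest)
        order = sorted(range(len(raw)), key=lambda i: raw[i])
        out = [None] * len(raw)
        for rank, member in enumerate(order):
            out[member] = sorted_q[rank]
        reordered.append(out)
    return reordered

# Toy example: 3-member ensemble, 2 lead times; member 0 is the wettest
# member at both lead times in the raw ensemble, and stays so after ECC.
raw = [[2.0, 0.5, 1.0], [3.0, 1.0, 2.0]]
calib = [[10.0, 20.0, 30.0], [15.0, 25.0, 35.0]]
ecc = ecc_reorder(raw, calib)
```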

  4. Infrared Stereo Calibration for Unmanned Ground Vehicle Navigation

    DTIC Science & Technology

    2014-05-01

    Josh Harguess: E-mail: joshua.harguess@navy.mil, Telephone: 1 619 553 0777 SPIE Proc. 9084: Unmanned Systems Technology XVI, Baltimore, MD, May 6-8...G., Diniz, H., Silvino, S., and de Andrade, R., “Thermal/visible autonomous stereo vision system calibration methodology for non-controlled

  5. TNT Prout-Tompkins Kinetics Calibration with PSUADE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wemhoff, A P; Hsieh, H

    2007-04-11

    We used the code PSUADE to calibrate Prout-Tompkins kinetic parameters for pure recrystallized TNT. The calibration was based on ALE3D simulations of a series of One-Dimensional Time to Explosion (ODTX) experiments. Simulations using the resultant kinetic parameters differed from the TNT data points by an average error of 28%, which is slightly higher than the 23% previously obtained using a two-point optimization. The methodology described here provides a basis for future calibration studies using PSUADE. The files used in the procedure are listed in the Appendix.

  6. Comparative study between derivative spectrophotometry and multivariate calibration as analytical tools applied for the simultaneous quantitation of Amlodipine, Valsartan and Hydrochlorothiazide.

    PubMed

    Darwish, Hany W; Hassan, Said A; Salem, Maissa Y; El-Zeany, Badr A

    2013-09-01

    Four simple, accurate and specific methods were developed and validated for the simultaneous estimation of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in commercial tablets. The derivative spectrophotometric methods are Derivative Ratio Zero-Crossing (DRZC) and Double Divisor Ratio Spectra-Derivative Spectrophotometry (DDRS-DS), while the multivariate calibration methods are Principal Component Regression (PCR) and Partial Least Squares (PLS). The proposed methods were applied successfully to the determination of the drugs in laboratory-prepared mixtures and in commercial pharmaceutical preparations. The validity of the proposed methods was assessed using the standard addition technique. The linearity of the proposed methods was investigated in the ranges of 2-32, 4-44 and 2-20 μg/mL for AML, VAL and HCT, respectively. Copyright © 2013 Elsevier B.V. All rights reserved.
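
    A minimal PCR sketch of the kind of multivariate calibration named above, applied to synthetic Beer-Lambert mixtures; the pure-component spectra and concentrations are invented, not the paper's data.

```python
import numpy as np

# Principal component regression (PCR) sketch: regress concentrations on the
# first few PC scores of the mean-centred spectra instead of raw absorbances.
rng = np.random.default_rng(2)
n_samples, n_wavelengths = 12, 40
pure = rng.uniform(0.0, 1.0, size=(3, n_wavelengths))   # hypothetical pure spectra
conc = rng.uniform(2.0, 30.0, size=(n_samples, 3))      # three analytes (μg/mL)
spectra = conc @ pure                                   # Beer-Lambert mixing, no noise

X = spectra - spectra.mean(axis=0)
Y = conc - conc.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 3                                          # retain three components
T = U[:, :k] * s[:k]                           # PC scores
B = np.linalg.lstsq(T, Y, rcond=None)[0]       # regression on the scores
Y_hat = T @ B + conc.mean(axis=0)              # back to concentration units
```

    On noise-free rank-3 mixtures three components recover the concentrations exactly; with real spectra the number of retained components is chosen by cross-validation.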

  7. Quantification of rare earth elements using laser-induced breakdown spectroscopy

    DOE PAGES

    Martin, Madhavi; Martin, Rodger C.; Allman, Steve; ...

    2015-10-21

    In this paper, the optical emission as a function of concentration of laser-ablated yttrium (Y) and of six rare earth elements, europium (Eu), gadolinium (Gd), lanthanum (La), praseodymium (Pr), neodymium (Nd), and samarium (Sm), has been evaluated using the laser-induced breakdown spectroscopy (LIBS) technique. Statistical methodology using multivariate analysis has been used to obtain the sampling errors, coefficient of regression, calibration, and cross-validation of measurements as they relate to the LIBS analysis of graphite-matrix pellets that were doped with the elements at several concentrations. Each element (in oxide form) was mixed into the graphite matrix in percentages ranging from 1% to 50% by weight, and the LIBS spectra were obtained for each composition as well as for pure oxide samples. In addition, a single pellet was mixed with all the elements in equal oxide masses to determine whether the elemental peaks can be identified in a mixed pellet. This dataset is relevant for future application to studies of fission product content and distribution in irradiated nuclear fuels. These results demonstrate that the LIBS technique is inherently well suited for the future challenge of in situ analysis of nuclear materials. Finally, these studies also show that LIBS spectral analysis using statistical methodology can provide quantitative results and suggest an approach to the far more challenging future task of multielemental analysis of ~20 primary elements in high-burnup nuclear reactor fuel.
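
    The cross-validation step mentioned above can be illustrated with a leave-one-out loop over a univariate calibration curve; the element, concentrations and intensities below are invented for illustration, not measured values from the paper.

```python
# Leave-one-out cross-validation of a linear intensity-vs-concentration
# calibration: each standard is held out in turn, the line is refit on the
# rest, and the held-out concentration is predicted from its intensity.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def loo_cv_rmse(concentrations, intensities):
    errors = []
    for i in range(len(concentrations)):
        xs = concentrations[:i] + concentrations[i + 1:]
        ys = intensities[:i] + intensities[i + 1:]
        slope, intercept = fit_line(xs, ys)
        # invert the calibration line to predict the held-out concentration
        pred = (intensities[i] - intercept) / slope
        errors.append((pred - concentrations[i]) ** 2)
    return (sum(errors) / len(errors)) ** 0.5

conc = [1.0, 5.0, 10.0, 20.0, 50.0]        # wt% oxide, hypothetical standards
intensity = [2.1, 10.0, 19.8, 40.5, 99.0]  # near-linear emission response
rmse = loo_cv_rmse(conc, intensity)
```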

  8. Accounting for temporal variation in soil hydrological properties when simulating surface runoff on tilled plots

    NASA Astrophysics Data System (ADS)

    Chahinian, Nanée; Moussa, Roger; Andrieux, Patrick; Voltz, Marc

    2006-07-01

    Tillage operations are known to greatly influence local overland flow, infiltration and depressional storage by altering soil hydraulic properties and soil surface roughness. The calibration of runoff models for tilled fields is not identical to that of untilled fields, as it has to take into consideration the temporal variability of parameters due to the transient nature of surface crusts. In this paper, we seek the application of a rainfall-runoff model and the development of a calibration methodology to take into account the impact of tillage on overland flow simulation at the scale of a tilled plot (3240 m 2) located in southern France. The selected model couples the (Morel-Seytoux, H.J., 1978. Derivation of equations for variable rainfall infiltration. Water Resources Research. 14(4), 561-568). Infiltration equation to a transfer function based on the diffusive wave equation. The parameters to be calibrated are the hydraulic conductivity at natural saturation Ks, the surface detention Sd and the lag time ω. A two-step calibration procedure is presented. First, eleven rainfall-runoff events are calibrated individually and the variability of the calibrated parameters are analysed. The individually calibrated Ks values decrease monotonously according to the total amount of rainfall since tillage. No clear relationship is observed between the two parameters Sd and ω, and the date of tillage. However, the lag time ω increases inversely with the peakflow of the events. Fairly good agreement is observed between the simulated and measured hydrographs of the calibration set. Simple mathematical laws describing the evolution of Ks and ω are selected, while Sd is considered constant. The second step involves the collective calibration of the law of evolution of each parameter on the whole calibration set. This procedure is calibrated on 11 events and validated on ten runoff inducing and four non-runoff inducing rainfall events. 
The suggested calibration methodology seems robust and can be transposed to other gauged sites.

  9. Improved Quantitative Analysis of Ion Mobility Spectrometry by Chemometric Multivariate Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fraga, Carlos G.; Kerr, Dayle; Atkinson, David A.

    2009-09-01

    Traditional peak-area calibration and the multivariate calibration methods of principal component regression (PCR) and partial least squares (PLS), including unfolded PLS (U-PLS) and multi-way PLS (N-PLS), were evaluated for the quantification of 2,4,6-trinitrotoluene (TNT) and cyclo-1,3,5-trimethylene-2,4,6-trinitramine (RDX) in Composition B samples analyzed by temperature step desorption ion mobility spectrometry (TSD-IMS). The true TNT and RDX concentrations of eight Composition B samples were determined by high performance liquid chromatography with UV absorbance detection. Most of the Composition B samples were found to have distinct TNT and RDX concentrations. Applying PCR and PLS to the exact same IMS spectra used for the peak-area study improved quantitative accuracy and precision approximately 3- to 5-fold and 2- to 4-fold, respectively. This in turn improved the probability of correctly identifying Composition B samples based upon the estimated RDX and TNT concentrations from 11% with peak area to 44% and 89% with PLS. This improvement increases the potential of obtaining forensic information from IMS analyzers by providing some ability to differentiate or match Composition B samples based on their TNT and RDX concentrations.
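
    A compact PLS1 sketch of the kind of full-spectrum calibration evaluated above; this is generic textbook NIPALS-style PLS on synthetic data, not the authors' implementation. With as many components as predictors and noise-free data it reproduces the ordinary least-squares fit, which gives a simple correctness check.

```python
import numpy as np

# NIPALS-style PLS1: extract score/loading pairs that maximize covariance
# with the response, deflate, and assemble regression coefficients.
def pls1_fit(X, y, n_components):
    X = X - X.mean(axis=0)
    y = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = X.T @ y
        w = w / np.linalg.norm(w)    # weight vector
        t = X @ w                    # scores
        tt = t @ t
        p = X.T @ t / tt             # X loadings
        q = (y @ t) / tt             # y loading
        X = X - np.outer(t, p)       # deflate X
        y = y - q * t                # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    # coefficients in the original (centred) predictor space
    return W @ np.linalg.solve(P.T @ W, Q)

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 8))                 # 30 spectra, 8 channels
true_b = np.array([1.0, -2.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0])
y = X @ true_b                               # noise-free response
b = pls1_fit(X, y, n_components=8)
y_hat = (X - X.mean(axis=0)) @ b + y.mean()
```

    In practice far fewer components than predictors are retained, chosen by cross-validation, which is where PLS gains its advantage over peak-area calibration for overlapped spectra.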

  10. Improving the accuracy of hyaluronic acid molecular weight estimation by conventional size exclusion chromatography.

    PubMed

    Shanmuga Doss, Sreeja; Bhatt, Nirav Pravinbhai; Jayaraman, Guhan

    2017-08-15

    There is an unreasonably high variation in literature reports of the molecular weight of hyaluronic acid (HA) estimated using conventional size exclusion chromatography (SEC). This variation is most likely due to errors in estimation. Working with commercially available HA molecular weight standards, this work examines the extent of error in molecular weight estimation due to two factors: the use of non-HA-based calibration and the concentration of sample injected into the SEC column. We develop a multivariate regression correlation to correct for the concentration effect. Our analysis showed that SEC calibration based on non-HA standards such as polyethylene oxide and pullulan led to approximately 2- and 10-fold overestimation, respectively, when compared to HA-based calibration. Further, we found that the injected sample concentration has an effect on molecular weight estimation: even at 1 g/L injected sample concentration, HA molecular weight standards of 0.7 and 1.64 MDa showed appreciable underestimation of 11-24%. The multivariate correlation developed was found to reduce the error in estimates at 1 g/L to <4%. The correlation was also successfully applied to accurately estimate the molecular weight of HA produced by a recombinant Lactococcus lactis fermentation. Copyright © 2017 Elsevier B.V. All rights reserved.
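
    A concentration-effect correction of this kind can be sketched as a multivariate regression of the true on the apparent molecular weight and the injected concentration. The underestimation model and all numbers below are made-up stand-ins, not the paper's empirical correlation.

```python
import numpy as np

# Sketch: fit log10(M_true) from log10(M_apparent), injected concentration,
# and their interaction, then use the fit to correct apparent SEC estimates.
rng = np.random.default_rng(4)
n = 25
log_m_true = rng.uniform(5.0, 6.5, size=n)      # ~0.1-3 MDa, hypothetical
conc = rng.uniform(0.25, 1.0, size=n)           # injected concentration, g/L
# Assumed bias: underestimation grows with concentration and chain length.
log_m_app = log_m_true - 0.08 * conc * (log_m_true - 4.0)

A = np.column_stack([np.ones(n), log_m_app, conc, log_m_app * conc])
coef, *_ = np.linalg.lstsq(A, log_m_true, rcond=None)
log_m_corrected = A @ coef
```

    The corrected estimates track the true values far more closely than the raw apparent ones, which is the point of fitting concentration as an explicit predictor rather than extrapolating to zero concentration.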

  11. Testing Mean Differences among Groups: Multivariate and Repeated Measures Analysis with Minimal Assumptions

    PubMed Central

    Bathke, Arne C.; Friedrich, Sarah; Pauly, Markus; Konietschke, Frank; Staffen, Wolfgang; Strobl, Nicolas; Höller, Yvonne

    2018-01-01

    To date, there is a lack of satisfactory inferential techniques for the analysis of multivariate data in factorial designs when only minimal assumptions on the data can be made. Presently available methods are limited to very particular study designs or assume either multivariate normality or equal covariance matrices across groups, or they do not allow for an assessment of the interaction effects across within-subjects and between-subjects variables. We propose and methodologically validate a parametric bootstrap approach that does not suffer from any of the above limitations and thus provides a rather general and comprehensive methodological route to inference for multivariate and repeated measures data. As an example application, we consider data from two different Alzheimer's disease (AD) examination modalities that may be used for precise and early diagnosis, namely single-photon emission computed tomography (SPECT) and electroencephalography (EEG). These data violate the assumptions of classical multivariate methods, and indeed classical methods would not have yielded the same conclusions with regard to some of the factors involved. PMID:29565679
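
    A minimal parametric-bootstrap sketch in the spirit of the approach above, reduced to a two-group comparison of mean vectors; the paper's method is more general, covering factorial and repeated-measures designs, and this toy statistic is an assumption of the sketch.

```python
import numpy as np

# Parametric bootstrap test for equal group mean vectors: estimate a null
# model from the data, resample both groups from it, and compare the observed
# statistic against its bootstrap distribution.
rng = np.random.default_rng(5)

def bootstrap_p_value(x, y, n_boot=500):
    diff = x.mean(axis=0) - y.mean(axis=0)
    stat = float(diff @ diff)                # simple squared-distance statistic
    pooled = np.vstack([x - x.mean(axis=0), y - y.mean(axis=0)])
    cov = np.cov(pooled, rowvar=False)       # pooled covariance for the null model
    count = 0
    for _ in range(n_boot):
        # resample both groups under H0 (common mean, pooled covariance)
        bx = rng.multivariate_normal(np.zeros(x.shape[1]), cov, size=len(x))
        by = rng.multivariate_normal(np.zeros(y.shape[1]), cov, size=len(y))
        bdiff = bx.mean(axis=0) - by.mean(axis=0)
        if float(bdiff @ bdiff) >= stat:
            count += 1
    return count / n_boot

x = rng.normal(size=(20, 3))
y = rng.normal(size=(20, 3)) + np.array([2.0, 0.0, 0.0])  # clearly shifted group
p = bootstrap_p_value(x, y)
```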

  12. An adaptive modeling and simulation environment for combined-cycle data reconciliation and degradation estimation

    NASA Astrophysics Data System (ADS)

    Lin, Tsungpo

    Performance engineers face major challenges in modeling and simulation of after-market power systems due to system degradation and measurement errors. Currently, the majority of the power generation industry utilizes the deterministic data matching method to calibrate the model and cascade system degradation, which causes significant calibration uncertainty and adds risk to providing performance guarantees. In this research work, maximum-likelihood based simultaneous data reconciliation and model calibration (SDRMC) is used for power system modeling and simulation. By replacing the current deterministic data matching with SDRMC one can reduce the calibration uncertainty and mitigate the error propagation to the performance simulation. A modeling and simulation environment for a complex power system with certain degradation has been developed. In this environment multiple data sets are imported when carrying out simultaneous data reconciliation and model calibration. Calibration uncertainties are estimated through error analyses and populated to performance simulation by using the principle of error propagation. System degradation is then quantified by performance comparison between the calibrated model and its expected new & clean status. To mitigate smearing effects caused by gross errors, gross error detection (GED) is carried out in two stages. The first stage is a screening stage, in which serious gross errors are eliminated in advance. The GED techniques used in the screening stage are based on multivariate data analysis (MDA), including multivariate data visualization and principal component analysis (PCA). Subtle gross errors are treated at the second stage, in which serial bias compensation or a robust M-estimator is engaged. To achieve better efficiency in the combined scheme of least-squares based data reconciliation and hypothesis-testing based GED, the Levenberg-Marquardt (LM) algorithm is utilized as the optimizer. To reduce the computation time and stabilize the problem solving for a complex power system such as a combined cycle power plant, meta-modeling using the response surface equation (RSE) and system/process decomposition are incorporated into the simultaneous SDRMC scheme. The goal of this research work is to reduce the calibration uncertainties and, thus, the risks in providing performance guarantees that arise from uncertainties in performance simulation.

  13. Traceable Co-C eutectic points for thermocouple calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jahan, F.; Ballico, M. J.

    2013-09-11

    The National Measurement Institute of Australia (NMIA) has developed a miniature crucible design suitable for measurement by both thermocouples and radiation thermometry, and has established an ensemble of five Co-C eutectic-point cells based on this design. The cells in this ensemble have been individually calibrated using both ITS-90 radiation thermometry and thermocouples calibrated on the ITS-90 by the NMIA mini-coil methodology. The assigned ITS-90 temperatures obtained using these different techniques are both repeatable and consistent, despite the use of different furnaces and measurement conditions. The results demonstrate that, if individually calibrated, such cells can be practically used as part of a national traceability scheme for thermocouple calibration, providing a useful intermediate calibration point between Cu and Pd.

  14. Precipitable water vapour content from ESR/SKYNET sun-sky radiometers: validation against GNSS/GPS and AERONET over three different sites in Europe

    NASA Astrophysics Data System (ADS)

    Campanelli, Monica; Mascitelli, Alessandra; Sanò, Paolo; Diémoz, Henri; Estellés, Victor; Federico, Stefano; Iannarelli, Anna Maria; Fratarcangeli, Francesca; Mazzoni, Augusto; Realini, Eugenio; Crespi, Mattia; Bock, Olivier; Martínez-Lozano, Jose A.; Dietrich, Stefano

    2018-01-01

    The estimation of the precipitable water vapour content (W) with high temporal and spatial resolution is of great interest to both meteorological and climatological studies. Several methodologies based on remote sensing techniques have recently been developed to obtain accurate and frequent measurements of this atmospheric parameter. Among them, the relatively low cost and easy deployment of sun-sky radiometers, or sun photometers, operating in several international networks, has allowed the development of automatic estimations of W from these instruments with high temporal resolution. However, the main difficulty of this methodology is the estimation of the sun-photometric calibration parameters. The objective of this paper is to validate a new methodology based on the hypothesis that the calibration parameters characterizing the atmospheric transmittance at 940 nm depend on the vertical profiles of temperature, air pressure and moisture typical of each measurement site. Obtaining the calibration parameters requires simultaneous seasonal measurements of W from independent sources, taken over a large range of solar zenith angles and covering a wide range of W. In this work, yearly GNSS/GPS datasets were used to obtain a table of photometric calibration constants, and the methodology was applied and validated at three European ESR-SKYNET network sites characterized by different atmospheric and climatic conditions: Rome, Valencia and Aosta. Results were validated against the GNSS/GPS and AErosol RObotic NETwork (AERONET) W estimations. In both validations the agreement was very high, with a percentage RMSD of about 6, 13 and 8 % for the GPS intercomparison at Rome, Aosta and Valencia, respectively, and of 8 % for the AERONET comparison at Valencia.
Analysing the results by W classes, the present methodology was found to clearly improve W estimation at low W content when compared against AERONET in terms of percentage bias, improving the agreement with GPS (taken as the reference) from a bias of 5.76 % to 0.52 %.

  15. A Study of IR Loss Correction Methodologies for Commercially Available Pyranometers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Chuck; Andreas, Afshin; Augustine, John

    2017-03-24

    This presentation provides a high-level overview of a study of IR loss correction methodologies for commercially available pyranometers. The study investigates how various correction methodologies perform for several makes and models of commercially available pyranometers in common use, both when operated in ventilators with DC fans and without ventilators, as they typically are when calibrated.

  16. Validation and calibration of structural models that combine information from multiple sources.

    PubMed

    Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A

    2017-02-01

    Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.

  17. Calibration Designs for Non-Monolithic Wind Tunnel Force Balances

    NASA Technical Reports Server (NTRS)

    Johnson, Thomas H.; Parker, Peter A.; Landman, Drew

    2010-01-01

    This research paper investigates current experimental designs and regression models for calibrating internal wind tunnel force balances of non-monolithic design. Such calibration methods are necessary for this class of balance because it has an electrical response that is dependent upon the sign of the applied forces and moments. This dependency gives rise to discontinuities in the response surfaces that are not easily modeled using traditional response surface methodologies. An analysis of current recommended calibration models is shown to lead to correlated response model terms. Alternative modeling methods are explored which feature orthogonal or near-orthogonal terms.

  18. Fourier transform infrared spectroscopy for Kona coffee authentication.

    PubMed

    Wang, Jun; Jun, Soojin; Bittenbender, H C; Gautz, Loren; Li, Qing X

    2009-06-01

    Kona coffee, the variety of "Kona typica" grown in the north and south districts of Kona-Island, carries a unique stamp of the region of Big Island of Hawaii, U.S.A. The excellent quality of Kona coffee makes it among the best coffee products in the world. Fourier transform infrared (FTIR) spectroscopy integrated with an attenuated total reflectance (ATR) accessory and multivariate analysis was used for qualitative and quantitative analysis of ground and brewed Kona coffee and blends made with Kona coffee. The calibration set of Kona coffee consisted of 10 different blends of Kona-grown original coffee mixture from 14 different farms in Hawaii and a non-Kona-grown original coffee mixture from 3 different sampling sites in Hawaii. Derivative transformations (1st and 2nd), mathematical enhancements such as mean centering and variance scaling, multivariate regressions by partial least square (PLS), and principal components regression (PCR) were implemented to develop and enhance the calibration model. The calibration model was successfully validated using 9 synthetic blend sets of 100% Kona coffee mixture and its adulterant, 100% non-Kona coffee mixture. There were distinct peak variations of ground and brewed coffee blends in the spectral "fingerprint" region between 800 and 1900 cm(-1). The PLS-2nd derivative calibration model based on brewed Kona coffee with mean centering data processing showed the highest degree of accuracy with the lowest standard error of calibration value of 0.81 and the highest R(2) value of 0.999. The model was further validated by quantitative analysis of commercial Kona coffee blends. Results demonstrate that FTIR can be a rapid alternative to authenticate Kona coffee, which only needs very quick and simple sample preparations.
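
The derivative-plus-mean-centering preprocessing used before PLS calibration can be sketched as follows. This is an illustrative implementation, not the authors' code; it assumes a Savitzky-Golay filter for the second derivative and unit spacing between spectral channels:

```python
import numpy as np
from scipy.signal import savgol_filter

def preprocess_spectra(X, window=11, polyorder=3):
    """Second-derivative plus mean-centering preprocessing, as commonly
    applied before PLS calibration of FTIR spectra (illustrative sketch).

    X: (n_samples, n_channels) absorbance matrix.
    """
    # Savitzky-Golay 2nd derivative along the spectral axis; removes
    # baseline offsets and linear slopes while sharpening band structure.
    d2 = savgol_filter(X, window, polyorder, deriv=2, axis=1)
    # Mean-center each spectral variable across the sample set.
    return d2 - d2.mean(axis=0)
```

The preprocessed matrix would then be fed to a PLS or PCR regression against the known blend compositions.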

  19. PacRIM II: A review of AirSAR operations and system performance

    NASA Technical Reports Server (NTRS)

    Moller, D.; Chu, A.; Lou, Y.; Miller, T.; O'Leary, E.

    2001-01-01

    In this paper we briefly review the AirSAR system, its expected performance, and the quality of data obtained during the PacRIM II mission. We discuss the system hardware calibration methodologies and present quantitative performance values for radar backscatter and interferometric height errors (random and systematic) from PacRIM II calibration data.

  20. Chemical Contaminant and Decontaminant Test Methodology Source Document. Second Edition

    DTIC Science & Technology

    2012-07-01

    The abstract is garbled in extraction; the recoverable fragments cite "A Statistical Overview on Univariate Calibration, Inverse Regression, and Detection Limits: Application to Gas Chromatography/Mass Spectrometry Technique" and identify the preparing organization (…Applications International Corporation, Gunpowder, MD 21010-0068, July 2012; approved for public release, distribution unlimited).

  1. Impact and Estimation of Balance Coordinate System Rotations and Translations in Wind-Tunnel Testing

    NASA Technical Reports Server (NTRS)

    Toro, Kenneth G.; Parker, Peter A.

    2017-01-01

    Discrepancies between the model and balance coordinate systems lead to biases in the aerodynamic measurements during wind-tunnel testing. The reference coordinate system, relative to the calibration coordinate system in which the forces and moments are resolved, is crucial to the overall accuracy of force measurements. This paper discusses sources of discrepancies and estimates of coordinate system rotation and translation due to machining and assembly differences. A methodology for numerically estimating the coordinate system biases is developed and discussed. Two case studies are presented that use this methodology to estimate the model alignment. Examples span from angle measurement system shifts on the calibration system to discrepancies in actual wind-tunnel data. The results from these case studies will help aerodynamic researchers and force balance engineers to better understand and identify potential differences in calibration systems due to coordinate system rotation and translation.

  2. A review of calibrated blood oxygenation level-dependent (BOLD) methods for the measurement of task-induced changes in brain oxygen metabolism

    PubMed Central

    Blockley, Nicholas P.; Griffeth, Valerie E. M.; Simon, Aaron B.; Buxton, Richard B.

    2013-01-01

    The dynamics of the blood oxygenation level-dependent (BOLD) response are dependent on changes in cerebral blood flow, cerebral blood volume and the cerebral metabolic rate of oxygen consumption. Furthermore, the amplitude of the response is dependent on the baseline physiological state, defined by the haematocrit, oxygen extraction fraction and cerebral blood volume. As a result of this complex dependence, the accurate interpretation of BOLD data and robust intersubject comparisons when the baseline physiology is varied are difficult. The calibrated BOLD technique was developed to address these issues. However, the methodology is complex and its full promise has not yet been realised. In this review, the theoretical underpinnings of calibrated BOLD, and issues regarding this theory that are still to be resolved, are discussed. Important aspects of practical implementation are reviewed and reported applications of this methodology are presented. PMID:22945365

  3. Generalized Subset Designs in Analytical Chemistry.

    PubMed

    Surowiec, Izabella; Vikström, Ludvig; Hector, Gustaf; Johansson, Erik; Vikström, Conny; Trygg, Johan

    2017-06-20

    Design of experiments (DOE) is an established methodology in research, development, manufacturing, and production for screening, optimization, and robustness testing. Two-level fractional factorial designs remain the preferred approach due to their high information content while keeping the number of experiments low. These types of designs, however, have never been extended to a generalized multilevel reduced design type capable of including both qualitative and quantitative factors. In this article we describe a novel generalized fractional factorial design. In addition, it also provides complementary and balanced subdesigns analogous to a fold-over in two-level reduced factorial designs. We demonstrate how this design type can be applied with good results in three different applications in analytical chemistry: (a) multivariate calibration using microwave resonance spectroscopy for the determination of water in tablets, (b) a stability study in drug product development, and (c) representative sample selection in clinical studies. This demonstrates the potential of generalized fractional factorial designs to be applied in many other areas of analytical chemistry where representative, balanced, and complementary subsets are required, especially when a combination of quantitative and qualitative factors at multiple levels exists.

  4. Fusion And Inference From Multiple And Massive Disparate Distributed Dynamic Data Sets

    DTIC Science & Technology

    2017-07-01

    The abstract is garbled in extraction; the recoverable fragments describe a principled methodology for two-sample graph testing; a provably almost-surely perfect vertex clustering algorithm for block model graphs; and an embedding into Euclidean space that allows the full arsenal of statistical and machine learning methodology for multivariate Euclidean data to be deployed. Table-of-contents fragments: 3.7 Semi-Supervised Clustering Methodology; 3.8 Robust Hypothesis Testing.

  5. Simultaneous calibration of ensemble river flow predictions over an entire range of lead times

    NASA Astrophysics Data System (ADS)

    Hemri, S.; Fundel, F.; Zappa, M.

    2013-10-01

    Probabilistic estimates of future water levels and river discharge are usually simulated with hydrologic models using ensemble weather forecasts as the main inputs. As hydrologic models are imperfect and the meteorological ensembles tend to be biased and underdispersed, the ensemble forecasts for river runoff are typically biased and underdispersed, too. Thus, in order to achieve both reliable and sharp predictions, statistical postprocessing is required. In this work Bayesian model averaging (BMA) is applied to statistically postprocess raw ensemble runoff forecasts for a catchment in Switzerland, at lead times ranging from 1 to 240 h. The raw forecasts have been obtained using deterministic and ensemble forcing meteorological models with different forecast lead time ranges. First, BMA is applied based on mixtures of univariate normal distributions, subject to the assumption of independence between distinct lead times. Then, the independence assumption is relaxed in order to estimate multivariate runoff forecasts over the entire range of lead times simultaneously, based on a BMA version that uses multivariate normal distributions. Since river runoff is a highly skewed variable, Box-Cox transformations are applied in order to achieve approximate normality. Both univariate and multivariate BMA approaches are able to generate well calibrated probabilistic forecasts that are considerably sharper than climatological forecasts. Additionally, multivariate BMA provides a promising approach for incorporating temporal dependencies into the postprocessed forecasts. Its major advantage over univariate BMA is an increase in reliability when the forecast system is changing due to model availability.
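
A minimal sketch of the two building blocks named above, the Box-Cox transform and the BMA predictive mixture, might look like this. It is illustrative only: the actual BMA weight and variance estimation (typically by EM over a training period) is not shown, and a common spread parameter `sigma` is assumed for all members:

```python
import numpy as np
from scipy.stats import norm

def boxcox(y, lam):
    """Box-Cox transform: brings positively skewed runoff toward normality."""
    return np.log(y) if lam == 0 else (y ** lam - 1.0) / lam

def inv_boxcox(z, lam):
    """Back-transform from Box-Cox space to runoff units."""
    return np.exp(z) if lam == 0 else (lam * z + 1.0) ** (1.0 / lam)

def bma_mixture_pdf(y, member_means, weights, sigma):
    """BMA predictive density in Box-Cox space: a weighted mixture of
    normal kernels, one per (bias-corrected) ensemble member."""
    return sum(w * norm.pdf(y, loc=m, scale=sigma)
               for w, m in zip(weights, member_means))
```

In practice the weights reflect each member's skill over the training period and the density is evaluated in transformed space, with quantiles back-transformed via `inv_boxcox`.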

  6. Scalable tuning of building models to hourly data

    DOE PAGES

    Garrett, Aaron; New, Joshua Ryan

    2015-03-31

    Energy models of existing buildings are unreliable unless calibrated so they correlate well with actual energy usage. Manual tuning requires a skilled professional, is prohibitively expensive for small projects, imperfect, non-repeatable, non-transferable, and not scalable to the dozens of sensor channels that smart meters, smart appliances, and cheap/ubiquitous sensors are beginning to make available today. A scalable, automated methodology is needed to quickly and intelligently calibrate building energy models to all available data, increase the usefulness of those models, and facilitate speed-and-scale penetration of simulation-based capabilities into the marketplace for actualized energy savings. The "Autotune" project is a novel, model-agnostic methodology which leverages supercomputing, large simulation ensembles, and big data mining with multiple machine learning algorithms to allow automatic calibration of simulations that match measured experimental data in a way that is deployable on commodity hardware. This paper shares several methodologies employed to reduce the combinatorial complexity to a computationally tractable search problem for hundreds of input parameters. Furthermore, accuracy metrics are provided which quantify model error to measured data for either monthly or hourly electrical usage from a highly-instrumented, emulated-occupancy research home.

  7. Post-processing of multi-model ensemble river discharge forecasts using censored EMOS

    NASA Astrophysics Data System (ADS)

    Hemri, Stephan; Lisniak, Dmytro; Klein, Bastian

    2014-05-01

    When forecasting water levels and river discharge, ensemble weather forecasts are used as meteorological input to hydrologic process models. As hydrologic models are imperfect and the input ensembles tend to be biased and underdispersed, the output ensemble forecasts for river runoff typically are biased and underdispersed, too. Thus, statistical post-processing is required in order to achieve calibrated and sharp predictions. Standard post-processing methods such as Ensemble Model Output Statistics (EMOS) that have their origins in meteorological forecasting are now increasingly being used in hydrologic applications. Here we consider two sub-catchments of River Rhine, for which the forecasting system of the Federal Institute of Hydrology (BfG) uses runoff data that are censored below predefined thresholds. To address this methodological challenge, we develop a censored EMOS method that is tailored to such data. The censored EMOS forecast distribution can be understood as a mixture of a point mass at the censoring threshold and a continuous part based on a truncated normal distribution. Parameter estimates of the censored EMOS model are obtained by minimizing the Continuous Ranked Probability Score (CRPS) over the training dataset. Model fitting on Box-Cox transformed data allows us to take account of the positive skewness of river discharge distributions. In order to achieve realistic forecast scenarios over an entire range of lead-times, there is a need for multivariate extensions. To this end, we smooth the marginal parameter estimates over lead-times. In order to obtain realistic scenarios of discharge evolution over time, the marginal distributions have to be linked with each other. To this end, the multivariate dependence structure can either be adopted from the raw ensemble like in Ensemble Copula Coupling (ECC), or be estimated from observations in a training period. 
The censored EMOS model has been applied to multi-model ensemble forecasts issued on a daily basis over a period of three years. For the two catchments considered, this resulted in well calibrated and sharp forecast distributions over all lead-times from 1 to 114 h. Training observations tended to be better indicators for the dependence structure than the raw ensemble.
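
The censored EMOS predictive distribution described above, a point mass at the censoring threshold plus a normal tail, can be sketched as a CDF. This is illustrative only; the parameter fitting by CRPS minimization over the training data is not shown:

```python
from scipy.stats import norm

def censored_normal_cdf(y, mu, sigma, c):
    """Predictive CDF of a normal distribution censored below threshold c:
    all probability mass below c collapses into a point mass at c of size
    Phi((c - mu) / sigma), with a normal tail above c (sketch of the
    censored EMOS predictive distribution)."""
    if y < c:
        return 0.0
    return norm.cdf((y - mu) / sigma)
```

Evaluating this CDF at observed runoff values is the basis for verification tools such as PIT histograms for the censored forecast distribution.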

  8. Quantitative monitoring of sucrose, reducing sugar and total sugar dynamics for phenotyping of water-deficit stress tolerance in rice through spectroscopy and chemometrics

    NASA Astrophysics Data System (ADS)

    Das, Bappa; Sahoo, Rabi N.; Pargal, Sourabh; Krishna, Gopal; Verma, Rakesh; Chinnusamy, Viswanathan; Sehgal, Vinay K.; Gupta, Vinod K.; Dash, Sushanta K.; Swain, Padmini

    2018-03-01

    In the present investigation, the changes in sucrose, reducing sugar and total sugar content due to water-deficit stress in rice leaves were modeled using visible, near-infrared (VNIR) and shortwave-infrared (SWIR) spectroscopy. The objectives of the study were to identify the best vegetation indices and the most suitable multivariate technique based on precise analysis of hyperspectral data (350 to 2500 nm) and of sucrose, reducing sugar and total sugar content measured at different stress levels in 16 different rice genotypes. Spectral data analysis was done to identify suitable spectral indices and models for sucrose estimation. Novel spectral indices in the near-infrared (NIR) range, viz. the ratio spectral index (RSI) and normalised difference spectral indices (NDSI), sensitive to sucrose, reducing sugar and total sugar content were identified and subsequently calibrated and validated. The RSI and NDSI models had R2 values of 0.65, 0.71 and 0.67 and RPD values of 1.68, 1.95 and 1.66 for sucrose, reducing sugar and total sugar, respectively, for the validation dataset. Different multivariate spectral models such as artificial neural network (ANN), multivariate adaptive regression splines (MARS), multiple linear regression (MLR), partial least square regression (PLSR), random forest regression (RFR) and support vector machine regression (SVMR) were also evaluated. The best-performing multivariate models for sucrose, reducing sugars and total sugars were found to be MARS, ANN and MARS, respectively, with RPD values of 2.08, 2.44, and 1.93. Results indicated that VNIR and SWIR spectroscopy combined with multivariate calibration can be used as a reliable alternative to conventional methods for measurement of sucrose, reducing sugars and total sugars of rice under water-deficit stress, as this technique is fast, economic, and noninvasive.
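
The two index families named above follow standard definitions. A minimal sketch, where `r_a` and `r_b` stand for the reflectances at the two selected NIR bands (the band choices themselves come from the paper's wavelength search and are not reproduced here):

```python
def rsi(r_a, r_b):
    """Ratio spectral index of reflectances at two NIR bands."""
    return r_a / r_b

def ndsi(r_a, r_b):
    """Normalised difference spectral index of the same two bands."""
    return (r_a - r_b) / (r_a + r_b)
```

In an index search, these quantities are computed for every band pair and regressed against the measured sugar contents; the pair with the highest calibration R² defines the final index.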

  9. Automatic and robust extrinsic camera calibration for high-accuracy mobile mapping

    NASA Astrophysics Data System (ADS)

    Goeman, Werner; Douterloigne, Koen; Bogaert, Peter; Pires, Rui; Gautama, Sidharta

    2012-10-01

    A mobile mapping system (MMS) is the answer of the geoinformation community to the exponentially growing demand for various geospatial data with increasingly higher accuracies, captured by multiple sensors. As mobile mapping technology is pushed to explore its use for various applications on water, rail, or road, the need emerges for an external sensor calibration procedure which is portable, fast and easy to perform. This way, sensors can be mounted and demounted depending on the application requirements without the need for time-consuming calibration procedures. A new methodology is presented to provide a high-quality external calibration of cameras which is automatic, robust and foolproof. The MMS uses an Applanix POSLV420, which is a tightly coupled GPS/INS positioning system. The cameras used are Point Grey color video cameras synchronized with the GPS/INS system. The method uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well-studied absolute orientation problem needs to be solved. Here, a mutual-information-based image registration technique is studied for automatic alignment of the ranging pole. Finally, a few benchmarking tests are performed under various lighting conditions, demonstrating the methodology's robustness by showing high absolute stereo measurement accuracies of a few centimeters.

  10. Domain-Invariant Partial-Least-Squares Regression.

    PubMed

    Nikzad-Langerodi, Ramin; Zellinger, Werner; Lughofer, Edwin; Saminger-Platz, Susanne

    2018-05-11

    Multivariate calibration models often fail to extrapolate beyond the calibration samples because of changes associated with the instrumental response, environmental condition, or sample matrix. Most of the current methods used to adapt a source calibration model to a target domain exclusively apply to calibration transfer between similar analytical devices, while generic methods for calibration-model adaptation are largely missing. To fill this gap, we here introduce domain-invariant partial-least-squares (di-PLS) regression, which extends ordinary PLS by a domain regularizer in order to align the source and target distributions in the latent-variable space. We show that a domain-invariant weight vector can be derived in closed form, which allows the integration of (partially) labeled data from the source and target domains as well as entirely unlabeled data from the latter. We test our approach on a simulated data set where the aim is to desensitize a source calibration model to an unknown interfering agent in the target domain (i.e., unsupervised model adaptation). In addition, we demonstrate unsupervised, semisupervised, and supervised model adaptation by di-PLS on two real-world near-infrared (NIR) spectroscopic data sets.

  11. Radiometric calibration and performance trends of the Clouds and Earth's Radiant Energy System (CERES) instrument sensors onboard the Terra and Aqua spacecraft

    NASA Astrophysics Data System (ADS)

    Shankar, Mohan; Priestley, Kory; Smith, Nathaniel; Smith, Nitchie; Thomas, Susan; Walikainen, Dale

    2015-10-01

    The Clouds and Earth's Radiant Energy System (CERES) instruments help to study the impact of clouds on the earth's radiation budget. There are currently five instruments: two each on board the Terra and Aqua spacecraft and one on the Suomi NPP spacecraft to measure the earth's reflected shortwave and emitted longwave energy, which represent two components of the earth's radiation energy budget. Flight Models (FM) 1 and 2 are on Terra, FM3 and FM4 are on Aqua, and FM5 is on Suomi NPP. The measurements are made by three sensors on each instrument: a shortwave sensor that measures the 0.3-5 micron wavelength band, a window sensor that measures the water vapor window between 8 and 12 microns, and a total sensor that measures all incident energy (0.3 to >100 microns). The required accuracy of CERES measurements, 0.5% in the longwave and 1% in the shortwave, is achieved through an extensive pre-launch ground calibration campaign as well as on-orbit calibration and validation activities. On-orbit calibration is carried out using the Internal Calibration Module (ICM), which consists of a tungsten lamp, blackbodies, and a solar diffuser known as the Mirror Attenuator Mosaic (MAM). The ICM calibration provides information about the stability of the sensors' broadband radiometric gains on-orbit. Several validation studies are conducted in order to monitor the behavior of the instruments in various spectral bands. The CERES Edition-4 data products for the FM1-FM4 instruments incorporate the latest calibration methodologies to improve on the Edition-3 data products. In this paper, we discuss the updated calibration methodology and present some validation studies to demonstrate the improvement in the trends using the CERES Edition-4 data products for all four instruments.

  12. Efficient Calibration of Distributed Catchment Models Using Perceptual Understanding and Hydrologic Signatures

    NASA Astrophysics Data System (ADS)

    Hutton, C.; Wagener, T.; Freer, J. E.; Duffy, C.; Han, D.

    2015-12-01

    Distributed models offer the potential to resolve catchment systems in more detail, and therefore simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models may contain a large number of model parameters which are computationally expensive to calibrate. Even when calibration is possible, insufficient data can result in model parameter and structural equifinality. In order to help reduce the space of feasible models and supplement traditional outlet discharge calibration data, semi-quantitative information (e.g. knowledge of relative groundwater levels), may also be used to identify behavioural models when applied to constrain spatially distributed predictions of states and fluxes. The challenge is to combine these different sources of information together to identify a behavioural region of state-space, and efficiently search a large, complex parameter space to identify behavioural parameter sets that produce predictions that fall within this behavioural region. Here we present a methodology to incorporate different sources of data to efficiently calibrate distributed catchment models. Metrics of model performance may be derived from multiple sources of data (e.g. perceptual understanding and measured or regionalised hydrologic signatures). For each metric, an interval or inequality is used to define the behaviour of the catchment system, accounting for data uncertainties. These intervals are then combined to produce a hyper-volume in state space. The state space is then recast as a multi-objective optimisation problem, and the Borg MOEA is applied to first find, and then populate the hyper-volume, thereby identifying acceptable model parameter sets. We apply the methodology to calibrate the PIHM model at Plynlimon, UK by incorporating perceptual and hydrologic data into the calibration problem. Furthermore, we explore how to improve calibration efficiency through search initialisation from shorter model runs.
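
The limits-of-acceptability step described above, where a parameter set is retained only if every signature metric falls inside its behavioural interval, reduces to a simple membership check. A sketch with hypothetical metric names:

```python
def is_behavioural(metrics, intervals):
    """Return True only if every signature metric lies inside its
    (lower, upper) behavioural interval; intervals encode both perceptual
    knowledge and measured/regionalised hydrologic signatures, widened to
    account for data uncertainty."""
    return all(lo <= metrics[name] <= hi
               for name, (lo, hi) in intervals.items())
```

In the full methodology this check defines the hyper-volume that the multi-objective search (e.g., the Borg MOEA) first locates and then populates with acceptable parameter sets.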

  13. Management of groundwater in-situ bioremediation system using reactive transport modelling under parametric uncertainty: field scale application

    NASA Astrophysics Data System (ADS)

    Verardo, E.; Atteia, O.; Rouvreau, L.

    2015-12-01

    In-situ bioremediation is a commonly used remediation technology for cleaning up the subsurface of petroleum-contaminated sites. Forecasting remedial performance (in terms of flux and mass reduction) is challenging because of uncertainties in source properties and in the contribution and efficiency of the concentration-reducing mechanisms. In this study, predictive uncertainty analysis of bioremediation system efficiency is carried out with the null-space Monte Carlo (NSMC) method, which combines the calibration solution-space parameters with the ensemble of null-space parameters, creating sets of calibration-constrained parameters for input to follow-on remedial-efficiency predictions. The first step in the NSMC methodology for uncertainty analysis is model calibration. The model was calibrated by matching simulated BTEX concentrations to a total of 48 observations from historical data collected before implementation of treatment. Two different bioremediation designs were then implemented in the calibrated model: the first consists of pumping/injection wells, and the second of a permeable barrier coupled with infiltration across slotted piping. The NSMC method was used to calculate 1000 calibration-constrained parameter sets for the two different models. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. The first variant is based on a single calibrated model. In the second variant, models were calibrated from different initial parameter sets, and NSMC calibration-constrained parameter sets were sampled from these different calibrated models. We demonstrate that, in the context of a nonlinear model, the second variant avoids underestimating parameter uncertainty, which could otherwise lead to a poor quantification of predictive uncertainty.
Application of the proposed approach to manage bioremediation of groundwater at a real site shows that it provides effective support for management of in-situ bioremediation systems. Moreover, this study demonstrates that the NSMC method provides a computationally efficient and practical means of applying model predictive uncertainty methods in environmental management.
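
The core null-space step of NSMC can be sketched in a few lines: perturbations of the calibrated parameters are projected onto the (approximate) null space of the model's Jacobian, so that, to first order, the fit to the calibration data is preserved. This is an illustrative linear-algebra sketch, not the full workflow used in practice:

```python
import numpy as np

def nsmc_sample(p_cal, J, n_draws=5, tol=1e-8, seed=0):
    """Null-space Monte Carlo sketch: perturb the calibrated parameter
    vector p_cal only along the approximate null space of the Jacobian
    J = d(model outputs)/d(parameters), so the first-order fit to the
    calibration data is unchanged for every draw."""
    rng = np.random.default_rng(seed)
    U, s, Vt = np.linalg.svd(J)
    null_basis = Vt[np.sum(s > tol):]               # rows spanning the null space
    draws = []
    for _ in range(n_draws):
        dp = rng.normal(size=len(p_cal))            # random perturbation
        dp_null = null_basis.T @ (null_basis @ dp)  # project onto null space
        draws.append(p_cal + dp_null)
    return np.array(draws)
```

In the full method each draw is additionally re-minimized against the calibration data (since the model is nonlinear, the linear null space is only approximate) before being used for predictive uncertainty analysis.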

  14. Calibration of the ARID robot

    NASA Technical Reports Server (NTRS)

    Doty, Keith L

    1992-01-01

    The author has formulated a new, general model for specifying the kinematic properties of serial manipulators. The new model kinematic parameters do not suffer discontinuities when nominally parallel adjacent axes deviate from exact parallelism. From this new theory the author develops a first-order, lumped-parameter, calibration-model for the ARID manipulator. Next, the author develops a calibration methodology for the ARID based on visual and acoustic sensing. A sensor platform, consisting of a camera and four sonars attached to the ARID end frame, performs calibration measurements. A calibration measurement consists of processing one visual frame of an accurately placed calibration image and recording four acoustic range measurements. A minimum of two measurement protocols determine the kinematics calibration-model of the ARID for a particular region: assuming the joint displacements are accurately measured, the calibration surface is planar, and the kinematic parameters do not vary rapidly in the region. No theoretical or practical limitations appear to contra-indicate the feasibility of the calibration method developed here.

  15. The Impact of Indoor and Outdoor Radiometer Calibration on Solar Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    2016-06-02

    This study addresses the effect of calibration methodologies on calibration responsivities and the resulting impact on radiometric measurements. The calibration responsivities used in this study are provided by NREL's broadband outdoor radiometer calibration (BORCAL) and a few prominent manufacturers. The BORCAL method provides outdoor calibration responsivity of pyranometers and pyrheliometers at a 45-degree solar zenith angle, and responsivity as a function of solar zenith angle determined by clear-sky comparisons to reference irradiance. The BORCAL method also employs a thermal offset correction to the calibration responsivity of single-black thermopile detectors used in pyranometers. Indoor calibrations of radiometers by their manufacturers are performed using a stable artificial light source in a side-by-side comparison of the test radiometer under calibration to a reference radiometer of the same type. These different methods of calibration demonstrated 1 percent to 2 percent differences in solar irradiance measurement. Analyzing these values will ultimately enable a reduction in radiometric measurement uncertainties and assist in developing consensus on a standard for calibration.

  16. New NIR Calibration Models Speed Biomass Composition and Reactivity Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-09-01

    Obtaining accurate chemical composition and reactivity (measures of carbohydrate release and yield) information for biomass feedstocks in a timely manner is necessary for the commercialization of biofuels. This highlight describes NREL's work to use near-infrared (NIR) spectroscopy and partial least squares multivariate analysis to develop calibration models to predict the feedstock composition and the release and yield of soluble carbohydrates generated by a bench-scale dilute acid pretreatment and enzymatic hydrolysis assay. This highlight is being developed for the September 2015 Alliance S&T Board meeting.
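
    The PLS calibration workflow described here follows a standard pattern: regress a property of interest on centered spectra through a small number of latent variables. Below is a minimal single-response (PLS1) NIPALS sketch on synthetic "spectra"; all dimensions, noise levels, and the choice of three latent variables are illustrative assumptions, not NREL's models or data:

```python
import numpy as np

def pls1(X, y, n_comp):
    """Minimal PLS1 (NIPALS) regression; returns coefficients for centered data."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc                      # weight vector: covariance direction
        w /= np.linalg.norm(w)
        t = Xc @ w                         # scores
        tt = t @ t
        p = Xc.T @ t / tt                  # X loadings
        q = (yc @ t) / tt                  # y loading
        Xc = Xc - np.outer(t, p)           # deflate X
        yc = yc - q * t                    # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    return W @ np.linalg.solve(P.T @ W, Q)

rng = np.random.default_rng(1)
# Synthetic "spectra": 40 samples x 200 wavelengths from 3 latent constituents.
C = rng.uniform(0, 1, size=(40, 3))                  # constituent concentrations
S = rng.uniform(0, 1, size=(3, 200))                 # pure-component spectra
X = C @ S + 0.001 * rng.normal(size=(40, 200))       # mixture spectra + noise
y = C[:, 0]                                          # property tracked by constituent 0

b = pls1(X[:30], y[:30], n_comp=3)
y_pred = (X[30:] - X[:30].mean(axis=0)) @ b + y[:30].mean()
rmsep = np.sqrt(np.mean((y_pred - y[30:]) ** 2))
```

    In practice the number of latent variables is chosen by cross-validation and the model is validated against independent reference measurements.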

  17. Simultaneous determination of potassium guaiacolsulfonate, guaifenesin, diphenhydramine HCl and carbetapentane citrate in syrups by using HPLC-DAD coupled with partial least squares multivariate calibration.

    PubMed

    Dönmez, Ozlem Aksu; Aşçi, Bürge; Bozdoğan, Abdürrezzak; Sungur, Sidika

    2011-02-15

    A simple and rapid analytical procedure was proposed for the determination of chromatographic peaks by means of partial least squares (PLS) multivariate calibration of high-performance liquid chromatography with diode array detection (HPLC-DAD). The method is exemplified with the analysis of quaternary mixtures of potassium guaiacolsulfonate (PG), guaifenesin (GU), diphenhydramine HCl (DP) and carbetapentane citrate (CP) in syrup preparations. In this method, the peak area does not need to be directly measured, and predictions are more accurate. Although the chromatographic and spectral peaks of the analytes were heavily overlapped and interferents coeluted with the compounds studied, good recoveries of the analytes could be obtained with HPLC-DAD coupled with PLS calibration. The method was tested by analyzing synthetic mixtures of PG, GU, DP and CP, with a classical HPLC method used for comparison. The proposed methods were applied to syrup samples containing the four drugs, and the results were statistically compared with each other. Finally, the main advantages of the HPLC-PLS method over the classical HPLC method were emphasized: a simpler mobile phase, shorter analysis time, and no need for an internal standard or gradient elution. Copyright © 2010 Elsevier B.V. All rights reserved.

  18. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert M.

    2013-01-01

    A new regression model search algorithm was developed that may be applied to both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The algorithm is a simplified version of a more complex algorithm that was originally developed for the NASA Ames Balance Calibration Laboratory. The new algorithm performs regression model term reduction to prevent overfitting of data. It has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a regression model search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression model. Therefore, the simplified algorithm is not intended to replace the original algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new search algorithm.

  19. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred

    2013-01-01

    A new regression model search algorithm was developed in 2011 that may be used to analyze both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The new algorithm is a simplified version of a more complex search algorithm that was originally developed at the NASA Ames Balance Calibration Laboratory. The new algorithm has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression models. Therefore, the simplified search algorithm is not intended to replace the original search algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm either fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new regression model search algorithm.

  20. Comparison of univariate and multivariate calibration for the determination of micronutrients in pellets of plant materials by laser induced breakdown spectrometry

    NASA Astrophysics Data System (ADS)

    Braga, Jez Willian Batista; Trevizan, Lilian Cristina; Nunes, Lidiane Cristina; Rufini, Iolanda Aparecida; Santos, Dário, Jr.; Krug, Francisco José

    2010-01-01

    The application of laser-induced breakdown spectrometry (LIBS) aiming at the direct analysis of plant materials is a great challenge that still needs efforts for its development and validation. To this end, a series of experimental approaches has been carried out in order to show that LIBS can be used as an alternative to wet-acid-digestion-based methods for the analysis of agricultural and environmental samples. The large amount of information provided by LIBS spectra for these complex samples increases the difficulty of selecting the most appropriate wavelengths for each analyte. Some applications have suggested that improvements in both accuracy and precision can be achieved by applying multivariate calibration to LIBS data, in comparison with univariate regression developed with line emission intensities. In the present work, the performance of univariate and multivariate calibration, the latter based on partial least squares regression (PLSR), was compared for the analysis of pellets of plant materials made from an appropriate mixture of cryogenically ground samples with cellulose as the binding agent. The development of a specific PLSR model for each analyte and the selection of spectral regions containing only lines of the analyte of interest provided the best conditions for the analysis. In this particular application, the models showed similar performance, but PLSR seemed to be more robust due to a lower occurrence of outliers in comparison with the univariate method. The data suggest that efforts dealing with sample presentation and the fitness of standards for LIBS analysis must be made in order to fulfill the boundary conditions for matrix-independent development and validation.

  1. Item Response Modeling of Multivariate Count Data with Zero Inflation, Maximum Inflation, and Heaping

    ERIC Educational Resources Information Center

    Magnus, Brooke E.; Thissen, David

    2017-01-01

    Questionnaires that include items eliciting count responses are becoming increasingly common in psychology. This study proposes methodological techniques to overcome some of the challenges associated with analyzing multivariate item response data that exhibit zero inflation, maximum inflation, and heaping at preferred digits. The modeling…

  2. A Multivariate Descriptive Model of Motivation for Orthodontic Treatment.

    ERIC Educational Resources Information Center

    Hackett, Paul M. W.; And Others

    1993-01-01

    Motivation for receiving orthodontic treatment was studied among 109 young adults, and a multivariate model of the process is proposed. The combination of smallest space analysis and Partial Order Scalogram Analysis by base Coordinates (POSAC) illustrates an interesting methodology for health treatment studies and explores motivation for dental…

  3. A Multivariate Solution of the Multivariate Ranking and Selection Problem

    DTIC Science & Technology

    1980-02-01

    Taneja (1972)), a′a for a vector of constants c (Krishnaiah and Rizvi (1966)), the generalized variance (Gnanadesikan and Gupta (1970)), iegier (1976…Olkin, I. and Sobel, M. (1977). Selecting and Ordering Populations: A New Statistical Methodology, John Wiley & Sons, Inc., New York. Gnanadesikan

  4. Spectral multivariate calibration without laboratory prepared or determined reference analyte values.

    PubMed

    Ottaway, Josh; Farrell, Jeremy A; Kalivas, John H

    2013-02-05

    An essential part of calibration is establishing the analyte calibration reference samples. These samples must characterize the sample matrix and measurement conditions (chemical, physical, instrumental, and environmental) of any sample to be predicted. Calibration usually requires measuring spectra for numerous reference samples in addition to determining the corresponding analyte reference values. Both tasks are typically time-consuming and costly. This paper reports on a method named pure component Tikhonov regularization (PCTR) that does not require laboratory-prepared or laboratory-determined reference values. Instead, an analyte pure-component spectrum is used in conjunction with nonanalyte spectra for calibration. Nonanalyte spectra can come from different sources, including pure-component interference samples, blanks, and constant-analyte samples. The approach is also applicable to calibration maintenance when the analyte pure-component spectrum is measured under one set of conditions and nonanalyte spectra are measured under new conditions. The PCTR method balances the trade-offs between calibration model shrinkage and the degree of orthogonality to the nonanalyte content (model direction) in order to obtain accurate predictions. Using visible and near-infrared (NIR) spectral data sets, the PCTR results are comparable to those obtained using ridge regression (RR) with reference calibration sets. The flexibility of PCTR also allows including reference samples if such samples are available.
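
    The trade-off PCTR balances, a regression vector that gives unit response to the analyte pure-component spectrum, near-zero response to nonanalyte spectra, and is shrunk by a ridge penalty, can be posed as one stacked least-squares problem. A hedged sketch: the spectra, weighting, and λ below are invented for illustration and do not reproduce the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(2)
n_wav = 150

# Hypothetical spectra (all shapes and values illustrative, not the paper's data).
s = np.exp(-0.5 * ((np.arange(n_wav) - 60) / 8.0) ** 2)       # analyte pure-component spectrum
interf = np.exp(-0.5 * ((np.arange(n_wav) - 100) / 15.0) ** 2)
N = np.outer(rng.uniform(0.5, 2.0, size=20), interf)           # nonanalyte (interferent) spectra
N += 0.01 * rng.normal(size=N.shape)                           # blank/noise contributions

# PCTR-style estimate: want s @ b = 1 while N @ b ~ 0, with ridge shrinkage on b.
lam = 0.1
A = np.vstack([s[None, :], N, lam * np.eye(n_wav)])
target = np.concatenate([[1.0], np.zeros(len(N)), np.zeros(n_wav)])
b = np.linalg.lstsq(A, target, rcond=None)[0]

# Prediction on a synthetic mixture: analyte at "concentration" 0.7 plus interferent.
x = 0.7 * s + 1.3 * interf
print(x @ b)   # should be close to 0.7 despite the interferent
```

    Tuning λ (and, in a fuller treatment, a separate weight on the nonanalyte block) trades model shrinkage against orthogonality to the interferences, as the abstract describes.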

  5. Transient Inverse Calibration of Hanford Site-Wide Groundwater Model to Hanford Operational Impacts - 1943 to 1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cole, Charles R.; Bergeron, Marcel P.; Wurstner, Signe K.

    2001-05-31

    This report describes a new initiative to strengthen the technical defensibility of predictions made with the Hanford site-wide groundwater flow and transport model. The focus is on characterizing major uncertainties in the current model. PNNL will develop and implement a calibration approach and methodology that can be used to evaluate alternative conceptual models of the Hanford aquifer system. The calibration process will involve a three-dimensional transient inverse calibration of each numerical model to historical observations of hydraulic and water quality impacts to the unconfined aquifer system from Hanford operations since the mid-1940s.

  6. IN-SITU IONIC CHEMICAL ANALYSIS OF FRESH WATER VIA A NOVEL COMBINED MULTI-SENSOR / SIGNAL PROCESSING ARCHITECTURE

    NASA Astrophysics Data System (ADS)

    Mueller, A. V.; Hemond, H.

    2009-12-01

    The capability for comprehensive, real-time, in-situ characterization of the chemical constituents of natural waters is a powerful tool for the advancement of the ecological and geochemical sciences, e.g. by facilitating rapid high-resolution adaptive sampling campaigns and avoiding the potential errors and high costs related to traditional grab-sample collection, transportation and analysis. Portable field-ready instrumentation also promotes the goals of large-scale monitoring networks, such as CUAHSI and WATERS, without the financial and human-resources overhead required for traditional sampling at this scale. Problems of environmental remediation and monitoring of industrial waste waters would additionally benefit from such instrumental capacity. In-situ measurement of all major ions contributing to the charge makeup of natural fresh water is thus pursued via a combined multi-sensor / multivariate signal processing architecture. The instrument is based primarily on commercial electrochemical sensors, e.g. ion-selective electrodes (ISEs) and ion-selective field-effect transistors (ISFETs), to promote low cost as well as easy maintenance and reproduction. The system employs a novel architecture of multivariate signal processing to extract accurate information from in-situ data streams via an "unmixing" process that accounts for sensor non-linearities at low concentrations, as well as sensor cross-reactivities. Conductivity, charge neutrality and temperature are applied as additional mathematical constraints on the chemical state of the system. Including such non-ionic information assists in obtaining accurate and useful calibrations even in the non-linear portion of the sensor response curves, and measurements can be made without the traditionally required standard additions or ionic-strength adjustment. 
Initial work demonstrates the effectiveness of this methodology at predicting inorganic cations (Na+, NH4+, H+, Ca2+, and K+) in a simplified system containing only a single anion (Cl-) in addition to hydroxide, thus allowing charge neutrality to be easily and explicitly invoked. Calibration of every probe relative to each of the five cations present is undertaken, and resulting curves are used to create a representative environmental data set based on USGS data for New England waters. Signal processing methodologies, specifically artificial neural networks (ANNs), are extended to use a feedback architecture based on conductivity measurements and charge neutrality calculations. The algorithms are then tuned to optimize performance of the algorithm at predicting actual concentrations from these simulated signals. Results are compared to use of component probes as stand-alone sensors. Future extension of this instrument for multiple anions (including carbonate and bicarbonate, nitrate, and sulfate) will ultimately provide rapid, accurate field measurements of the entire charge balance of natural waters at high resolution, improving sampling abilities while reducing costs and errors related to transport and analysis of grab samples.

  7. Stochastic calibration and learning in nonstationary hydroeconomic models

    NASA Astrophysics Data System (ADS)

    Maneta, M. P.; Howitt, R.

    2014-05-01

    Concern about water scarcity and adverse climate events over agricultural regions has motivated a number of efforts to develop operational integrated hydroeconomic models to guide adaptation and the optimal use of water. Once calibrated, these models are used for water management and analysis on the assumption that they remain valid under future conditions. In this paper, we present and demonstrate a methodology that permits the recursive calibration of economic models of agricultural production from noisy but frequently available data. We use a standard economic calibration approach, namely positive mathematical programming (PMP), integrated in a data assimilation algorithm based on the ensemble Kalman filter (EnKF) equations to identify the economic model parameters. A moving-average kernel ensures that new and past information on agricultural activity are blended during the calibration process, avoiding loss of information and overcalibration to the conditions of a single year. A regularization constraint akin to standard Tikhonov regularization is included in the filter to ensure its stability even in the presence of parameters with low sensitivity to observations. The results show that implementing the PMP methodology within a data assimilation framework based on the EnKF equations is an effective way to calibrate models of agricultural production even with noisy information. The recursive nature of the method incorporates new information as an added value to known previous observations of agricultural activity without the need to store historical information. The robustness of the method opens the door to the use of new remote sensing algorithms for operational water management.
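
    A single parameter-estimation analysis step of the ensemble Kalman filter, the building block of the recursive calibration described, can be sketched on a toy linear model. The observation operator, noise levels, and ensemble size below are illustrative assumptions, not the hydroeconomic model of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear "production model": observations y = H @ theta + noise (values illustrative).
n_par, n_obs, n_ens = 4, 6, 200
H = rng.normal(size=(n_obs, n_par))
theta_true = np.array([1.0, -0.5, 2.0, 0.3])
obs_err = 0.05
y_obs = H @ theta_true + obs_err * rng.normal(size=n_obs)

# Prior parameter ensemble: broad, deliberately not centred on the truth.
ens = rng.normal(size=(n_ens, n_par))

# One EnKF analysis step built from sample covariances.
Y = ens @ H.T                                   # each member's predicted observations
A = ens - ens.mean(axis=0)                      # parameter anomalies
D = Y - Y.mean(axis=0)                          # predicted-observation anomalies
C_py = A.T @ D / (n_ens - 1)                    # parameter/observation cross-covariance
C_yy = D.T @ D / (n_ens - 1) + obs_err**2 * np.eye(n_obs)
K = C_py @ np.linalg.inv(C_yy)                  # Kalman gain

# Perturbed-observation update of every ensemble member.
y_pert = y_obs + obs_err * rng.normal(size=(n_ens, n_obs))
ens = ens + (y_pert - Y) @ K.T

theta_post = ens.mean(axis=0)                   # posterior mean parameter estimate
```

    Repeating this update as each new season of observations arrives gives the recursive behavior described, without storing the historical record.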

  8. A Kinematic Calibration Process for Flight Robotic Arms

    NASA Technical Reports Server (NTRS)

    Collins, Curtis L.; Robinson, Matthew L.

    2013-01-01

    The Mars Science Laboratory (MSL) robotic arm is ten times more massive than any Mars robotic arm before it, yet with similar accuracy and repeatability positioning requirements. In order to assess and validate these requirements, a higher-fidelity model and calibration processes were needed. Kinematic calibration of robotic arms is a common and necessary process to ensure good positioning performance. Most methodologies assume a rigid arm, high-accuracy data collection, and some kind of optimization of kinematic parameters. A new detailed kinematic and deflection model of the MSL robotic arm was formulated in the design phase and used to update the initial positioning and orientation accuracy and repeatability requirements. This model included a higher-fidelity link stiffness matrix representation, as well as a link level thermal expansion model. In addition, it included an actuator backlash model. Analytical results highlighted the sensitivity of the arm accuracy to its joint initialization methodology. Because of this, a new technique for initializing the arm joint encoders through hardstop calibration was developed. This involved selecting arm configurations to use in Earth-based hardstop calibration that had corresponding configurations on Mars with the same joint torque to ensure repeatability in the different gravity environment. The process used to collect calibration data for the arm included the use of multiple weight stand-in turrets with enough metrology targets to reconstruct the full six-degree-of-freedom location of the rover and tool frames. The follow-on data processing of the metrology data utilized a standard differential formulation and linear parameter optimization technique.

  9. Optimal Multicomponent Analysis Using the Generalized Standard Addition Method.

    ERIC Educational Resources Information Center

    Raymond, Margaret; And Others

    1983-01-01

    Describes an experiment on the simultaneous determination of chromium and magnesium by spectrophotometry, modified to include the Generalized Standard Addition Method computer program, a multivariate calibration method that provides optimal multicomponent analysis in the presence of interference and matrix effects. Provides instructions for…
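
    The Generalized Standard Addition Method reduces to two linear solves: response changes caused by known spiked additions determine the sensitivity matrix, and the unspiked response then yields the initial concentrations. A noise-free two-analyte, two-wavelength sketch with invented sensitivities and concentrations (not the Cr/Mg data of the experiment):

```python
import numpy as np

# Two analytes measured at two wavelengths with mutual spectral interference.
K_true = np.array([[1.00, 0.30],     # sensitivities of analyte 1 at wavelengths 1, 2
                   [0.25, 0.80]])    # sensitivities of analyte 2
c0 = np.array([0.40, 0.70])          # unknown initial concentrations in the sample

# Standard additions: add known amounts of each analyte, record the responses.
additions = np.array([[0.0, 0.0],
                      [0.5, 0.0],
                      [0.0, 0.5],
                      [0.5, 0.5]])
R = (c0 + additions) @ K_true        # measured responses (noise-free for clarity)

# GSAM step 1: response changes relative to the unspiked sample give K.
dR = R[1:] - R[0]
dC = additions[1:]
K_est = np.linalg.lstsq(dC, dR, rcond=None)[0]   # calibration matrix from additions

# GSAM step 2: initial concentrations from the unspiked response and estimated K.
c_est = np.linalg.lstsq(K_est.T, R[0], rcond=None)[0]
```

    With measurement noise, the same least-squares solves still apply; the additions simply need to span the analyte space well.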

  10. A stepwise, multi-objective, multi-variable parameter optimization method for the APEX model

    USDA-ARS?s Scientific Manuscript database

    Proper parameterization enables hydrological models to make reliable estimates of non-point source pollution for effective control measures. The automatic calibration of hydrologic models requires significant computational power limiting its application. The study objective was to develop and eval...

  11. Determination of main fruits in adulterated nectars by ATR-FTIR spectroscopy combined with multivariate calibration and variable selection methods.

    PubMed

    Miaw, Carolina Sheng Whei; Assis, Camila; Silva, Alessandro Rangel Carolino Sales; Cunha, Maria Luísa; Sena, Marcelo Martins; de Souza, Scheilla Vitorino Carvalho

    2018-07-15

    Grape, orange, peach and passion fruit nectars were formulated and adulterated by dilution with syrup, apple and cashew juices at 10 levels for each adulterant. Attenuated total reflectance Fourier transform mid infrared (ATR-FTIR) spectra were obtained. Partial least squares (PLS) multivariate calibration models allied to different variable selection methods, such as interval partial least squares (iPLS), ordered predictors selection (OPS) and genetic algorithm (GA), were used to quantify the main fruits. PLS improved by iPLS-OPS variable selection showed the highest predictive capacity to quantify the main fruit contents. The selected variables in the final models varied from 72 to 100; the root mean square errors of prediction were estimated from 0.5 to 2.6%; the correlation coefficients of prediction ranged from 0.948 to 0.990; and, the mean relative errors of prediction varied from 3.0 to 6.7%. All of the developed models were validated. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Absolute photometric calibration of IRAC: lessons learned using nine years of flight data

    NASA Astrophysics Data System (ADS)

    Carey, S.; Ingalls, J.; Hora, J.; Surace, J.; Glaccum, W.; Lowrance, P.; Krick, J.; Cole, D.; Laine, S.; Engelke, C.; Price, S.; Bohlin, R.; Gordon, K.

    2012-09-01

    Significant improvements in our understanding of various photometric effects have occurred over the more than nine years of flight operations of the Infrared Array Camera (IRAC) aboard the Spitzer Space Telescope. With the accumulation of calibration data, photometric variations that are intrinsic to the instrument can now be mapped with high fidelity. Using all existing data on calibration stars, the array location-dependent photometric correction (the variation of flux with position on the array) and the correction for intra-pixel sensitivity variation (pixel phase) have been modeled simultaneously. Examination of the warm mission data enabled the characterization of the underlying form of the pixel-phase variation in cryogenic data. In addition to the accumulation of calibration data, significant improvements in the calibration of the truth spectra of the calibrators have taken place. Using the work of Engelke et al. (2006), the K III calibrators show no offset compared to the A V calibrators, providing a second pillar of the calibration scheme. The current cryogenic calibration is better than 3% in an absolute sense, with most of the uncertainty still in the knowledge of the true flux densities of the primary calibrators. We present the final state of the cryogenic IRAC calibration and a comparison of the IRAC calibration to an independent calibration methodology using the HST primary calibrators.

  13. Evaluation of “Autotune” calibration against manual calibration of building energy models

    DOE PAGES

    Chaudhary, Gaurav; New, Joshua; Sanyal, Jibonananda; ...

    2016-08-26

    Our paper demonstrates the application of Autotune, a methodology aimed at automatically producing calibrated building energy models using measured data, in two case studies. In the first case, a building model is de-tuned by deliberately injecting faults into more than 60 parameters. This model was then calibrated using Autotune, and its accuracy with respect to the original model was evaluated in terms of the industry-standard normalized mean bias error (NMBE) and coefficient of variation of root mean squared error (CV(RMSE)) metrics set forth in ASHRAE Guideline 14. In addition to whole-building energy consumption, outputs including lighting, plug-load profiles, HVAC energy consumption, zone temperatures, and other variables were analyzed. In the second case, Autotune calibration is compared directly to experts' manual calibration of an emulated-occupancy, full-size residential building, with comparable calibration results in much less time. Lastly, our paper concludes with a discussion of the key strengths and weaknesses of auto-calibration approaches.
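
    The two ASHRAE Guideline 14 agreement metrics used for this evaluation are straightforward to compute. A sketch with invented monthly energy data; note that sign conventions for the bias metric differ between references:

```python
import numpy as np

def nmbe(measured, simulated):
    """Normalized mean bias error (%), per the ASHRAE Guideline 14 definition."""
    return 100.0 * np.sum(measured - simulated) / (len(measured) * np.mean(measured))

def cv_rmse(measured, simulated):
    """Coefficient of variation of the RMSE (%)."""
    rmse = np.sqrt(np.mean((measured - simulated) ** 2))
    return 100.0 * rmse / np.mean(measured)

# Hypothetical monthly energy use (kWh): measured vs. a calibrated model's output.
measured = np.array([900, 850, 800, 700, 650, 720, 880, 910, 760, 690, 740, 870.0])
simulated = measured * (1 + 0.02 * np.sin(np.arange(12)))   # small synthetic mismatch

print(nmbe(measured, simulated), cv_rmse(measured, simulated))
```

    Guideline 14 recommends tolerances on these metrics (e.g. tighter limits for monthly than for hourly data) against which a calibrated model's output is judged.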

  14. PepsNMR for 1H NMR metabolomic data pre-processing.

    PubMed

    Martin, Manon; Legat, Benoît; Leenders, Justine; Vanwinsberghe, Julien; Rousseau, Réjane; Boulanger, Bruno; Eilers, Paul H C; De Tullio, Pascal; Govaerts, Bernadette

    2018-08-17

    In the analysis of biological samples, control over experimental design and data acquisition procedures alone cannot ensure well-conditioned 1H NMR spectra with maximal information recovery for data analysis. A third major element affects the accuracy and robustness of results: the data pre-processing/pre-treatment, to which not enough attention is usually devoted, in particular in metabolomic studies. The usual approach is to use proprietary software provided by the analytical instruments' manufacturers to conduct the entire pre-processing strategy. This widespread practice has a number of advantages, such as a user-friendly interface with graphical facilities, but it involves non-negligible drawbacks: a lack of methodological information and automation, a dependency on subjective human choices, only standard processing possibilities and an absence of objective quality criteria to evaluate pre-processing quality. To meet these needs, this paper introduces PepsNMR, an R package dedicated to the whole processing chain prior to multivariate data analysis, including, among other tools, solvent signal suppression, internal calibration, phase, baseline and misalignment corrections, bucketing and normalisation. Methodological aspects are discussed and the package is compared to the gold-standard procedure with two metabolomic case studies. The use of PepsNMR on these data shows better information recovery and predictive power based on objective and quantitative quality criteria. Other key assets of the package are workflow processing speed, reproducibility, reporting and flexibility, graphical outputs and documented routines. Copyright © 2018 Elsevier B.V. All rights reserved.

  15. Comparison of Continuous Wave CO2 Doppler Lidar Calibration Using Earth Surface Targets in Laboratory and Airborne Measurements

    NASA Technical Reports Server (NTRS)

    Jarzembski, Maurice A.; Srivastava, Vandana

    1999-01-01

    Routine backscatter (beta) measurements by an airborne or space-based lidar from designated Earth surfaces with known and fairly uniform beta properties can potentially offer lidar calibration opportunities. These can in turn be used to obtain accurate atmospheric aerosol and cloud beta measurements on large spatial scales. This is important because achieving a precise calibration factor for large pulsed lidars then need not rest solely on a standard hard-target procedure. Furthermore, calibration from designated Earth surfaces would provide an in-flight performance evaluation of the lidar. Hence, for active remote sensing using lasers with high-resolution data, calibration of a space-based lidar using Earth's surfaces will be extremely useful. The calibration methodology using the Earth's surface initially requires measuring beta of various Earth surfaces simulated in the laboratory using a focused continuous-wave (CW) CO2 Doppler lidar, and then using these beta measurements as standards for the Earth-surface signal from airborne or space-based lidars. Since beta from the Earth's surface may be retrieved at different angles of incidence, beta would also need to be measured at various angles of incidence for the different surfaces. In general, Earth-surface reflectance measurements have been made in the infrared, but lidars have not been used to characterize these surfaces, nor has the Earth's surface in turn been used to calibrate lidars. The feasibility of this calibration methodology is demonstrated through a comparison of these laboratory measurements with actual Earth-surface beta retrieved from the same lidar during the NASA Multi-center Airborne Coherent Atmospheric Wind Sensor (MACAWS) mission on NASA's DC-8 aircraft from 13-26 September 1995. For the selected Earth surface from the airborne lidar data, an average beta for the surface was established and the statistics of lidar efficiency were determined. 
This was compared with the actual lidar efficiency determined with the standard calibrating hard target.

  16. Multicomponent kinetic spectrophotometric determination of pefloxacin and norfloxacin in pharmaceutical preparations and human plasma samples with the aid of chemometrics

    NASA Astrophysics Data System (ADS)

    Ni, Yongnian; Wang, Yong; Kokot, Serge

    2008-10-01

    A spectrophotometric method for the simultaneous determination of the important pharmaceuticals pefloxacin and its structurally similar metabolite norfloxacin is described for the first time. The analysis is based on monitoring a kinetic spectrophotometric reaction of the two analytes with potassium permanganate as the oxidant. The measurement of the reaction process followed the absorbance decrease of potassium permanganate at 526 nm and the accompanying increase of the product, potassium manganate, at 608 nm. It was essential to use multivariate calibrations to overcome severe spectral overlaps and similarities in reaction kinetics. Calibration curves for the individual analytes showed linear relationships over the concentration ranges of 1.0-11.5 mg L-1 at 526 and 608 nm for pefloxacin, and 0.15-1.8 mg L-1 at 526 and 608 nm for norfloxacin. Various multivariate calibration models were applied, at the two analytical wavelengths, for the simultaneous prediction of the two analytes, including classical least squares (CLS), principal component regression (PCR), partial least squares (PLS), radial basis function artificial neural network (RBF-ANN) and principal component radial basis function artificial neural network (PC-RBF-ANN). PLS and PC-RBF-ANN calibrations with the data collected at 526 nm were the preferred methods (%RPE_T of about 5), with LODs for pefloxacin and norfloxacin of 0.36 and 0.06 mg L-1, respectively. The proposed method was then applied successfully to the simultaneous determination of pefloxacin and norfloxacin in pharmaceutical and human plasma samples. The results compared well with those from an alternative HPLC analysis.
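
    Classical least squares (CLS), the simplest of the calibration models compared, treats each mixture signal as a concentration-weighted sum of pure kinetic profiles. A sketch on synthetic two-analyte kinetic curves; the rate constants, noise level, and concentrations are illustrative, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical kinetic profiles for two analytes over many time points.
t = np.linspace(0, 10, 50)
s1 = np.exp(-0.30 * t)          # analyte 1: absorbance decay over time
s2 = np.exp(-0.55 * t)          # analyte 2: faster decay, heavily overlapped with s1
S = np.vstack([s1, s2])

# Calibration set: known concentration pairs and their (noisy) mixture responses.
C_cal = rng.uniform(0.5, 2.0, size=(12, 2))
R_cal = C_cal @ S + 0.002 * rng.normal(size=(12, len(t)))

# CLS: estimate the pure profiles from calibration data, then invert for new samples.
S_est = np.linalg.lstsq(C_cal, R_cal, rcond=None)[0]
c_true = np.array([1.2, 0.8])
r_new = c_true @ S + 0.002 * rng.normal(size=len(t))
c_pred = np.linalg.lstsq(S_est.T, r_new, rcond=None)[0]
```

    When profiles overlap as strongly as the abstract describes, CLS tends to degrade faster than PLS or RBF-ANN models, which is consistent with the preference reported for the latter.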

  17. Assessing the Effect of an Old and New Methodology for Scale Conversion on Examinee Scores

    ERIC Educational Resources Information Center

    Rizavi, Saba; Smith, Robert; Carey, Jill

    2002-01-01

    Research has been done to look at the benefits of BILOG over LOGIST as well as the potential issues that can arise if transition from LOGIST to BILOG is desired. A serious concern arises when comparability is required between previously calibrated LOGIST parameter estimates and currently calibrated BILOG estimates. It is imperative to obtain an…

  18. Real-time determination of critical quality attributes using near-infrared spectroscopy: a contribution for Process Analytical Technology (PAT).

    PubMed

    Rosas, Juan G; Blanco, Marcel; González, Josep M; Alcalà, Manel

    2012-08-15

    Process Analytical Technology (PAT) is playing a central role in current regulations on pharmaceutical production processes. Proper understanding of all operations and variables connecting the raw materials to end products is one of the keys to ensuring product quality and continuous improvement in production. Near-infrared spectroscopy (NIRS) has been successfully used to develop fast, non-invasive quantitative methods for real-time prediction of critical quality attributes (CQAs) of pharmaceutical granulates (API content, pH, moisture, flowability, angle of repose and particle size). NIR spectra were acquired from the bin blender after the granulation process in a non-classified area, without the need for sample withdrawal. The methodology used for data acquisition, calibration modelling and method application in this context is relatively inexpensive and can be easily implemented by most pharmaceutical laboratories. The Partial Least-Squares (PLS) algorithm was used to calculate multivariate calibration models, which provided acceptable Root Mean Square Error of Prediction (RMSEP) values (RMSEP(API)=1.0 mg/g; RMSEP(pH)=0.1; RMSEP(Moisture)=0.1%; RMSEP(Flowability)=0.6 g/s; RMSEP(Angle of repose)=1.7° and RMSEP(Particle size)=2.5%) and allowed application to routine analyses of production batches. The proposed method affords quality assessment of end products and the determination of important parameters with a view to understanding production processes used by the pharmaceutical industry. As shown here, the NIRS technique is a highly suitable tool for Process Analytical Technology. Copyright © 2012 Elsevier B.V. All rights reserved.

  19. The critical role of NIR spectroscopy and statistical process control (SPC) strategy towards captopril tablets (25 mg) manufacturing process understanding: a case study.

    PubMed

    Curtivo, Cátia Panizzon Dal; Funghi, Nathália Bitencourt; Tavares, Guilherme Diniz; Barbosa, Sávio Fujita; Löbenberg, Raimar; Bou-Chacra, Nádia Araci

    2015-05-01

    In this work, a near-infrared spectroscopy (NIRS) method was used to evaluate the uniformity of dosage units of three commercial batches of captopril 25 mg tablets. The performance of the calibration method was assessed by determination of the Q value (0.9986), standard error of estimation (C-set SEE = 1.956), standard error of prediction (V-set SEP = 2.076) and consistency (106.1%). These results indicated the adequacy of the selected model. The method validation revealed agreement between the reference high-pressure liquid chromatography (HPLC) and NIRS methods. The process evaluation using the NIRS method showed that the variability was due to common causes and that the process delivered predictable results consistently. Cp and Cpk values were 2.05 and 1.80, respectively. These results revealed a non-centered process relative to the target average (100% w/w) within the specified range (85-115%). The probability of failure was 21 per 100 million captopril tablets. NIRS, combined with multivariate calibration by partial least squares (PLS) regression, allowed the development of a methodology for evaluating uniformity of dosage units of captopril 25 mg tablets. The statistical process control strategy associated with the NIRS method as PAT played a critical role in understanding the sources and degree of variation and their impact on the process. This approach led towards better process understanding and provided a sound scientific basis for continuous improvement.
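    The capability indices cited above follow directly from the spec limits and the batch mean and standard deviation. A minimal sketch, where the 85-115% limits come from the record but the mean and standard deviation are assumed values chosen to approximately reproduce the reported Cp = 2.05 and Cpk = 1.80:

```python
# Process capability indices for a content-uniformity spec of 85-115% of label claim.
# mean and sd below are assumed for illustration, not taken from the paper.
def capability(mean, sd, lsl=85.0, usl=115.0):
    cp = (usl - lsl) / (6 * sd)                    # spread of spec vs process
    cpk = min(usl - mean, mean - lsl) / (3 * sd)   # penalizes off-center processes
    return cp, cpk

cp, cpk = capability(mean=101.8, sd=2.44)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```

    Cp > Cpk whenever the process mean is off-center, which is exactly the situation the abstract describes for these batches.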

  20. The validation of the Z-Scan technique for the determination of plasma glucose

    NASA Astrophysics Data System (ADS)

    Alves, Sarah I.; Silva, Elaine A. O.; Costa, Simone S.; Sonego, Denise R. N.; Hallack, Maira L.; Coppini, Ornela L.; Rowies, Fernanda; Azzalis, Ligia A.; Junqueira, Virginia B. C.; Pereira, Edimar C.; Rocha, Katya C.; Fonseca, Fernando L. A.

    2013-11-01

    Glucose is the main energy source for the human body. The concentration of blood glucose is regulated by several hormones, including the antagonist pair insulin and glucagon. The quantification of glucose in blood is used for diagnosing metabolic disorders of carbohydrates, such as diabetes, idiopathic hypoglycemia and pancreatic diseases. Currently, this determination relies on an enzymatic colorimetric method with spectrophotometric detection. This study aimed to validate measurements of the nonlinear optical properties of plasma glucose via the Z-Scan technique. For this purpose, calibrator samples that simulate commercial patient samples (ELITech©) were used. Besides calibrators, sera with glucose levels within acceptable reference values (normal control serum - Brazilian Society of Clinical Pathology and Laboratory Medicine) and with elevated levels (pathological control serum - Brazilian Society of Clinical Pathology and Laboratory Medicine) were used in the proposed methodology. Calibrator dilutions were prepared and measured by the Z-Scan technique to construct the calibration curve. In conclusion, the Z-Scan method can be used to determine glucose levels in biological samples via the enzymatic colorimetric reaction, applying the same quality control parameters used in clinical biochemistry.

  1. Multivariable control of a twin lift helicopter system using the LQG/LTR design methodology

    NASA Technical Reports Server (NTRS)

    Rodriguez, A. A.; Athans, M.

    1986-01-01

    Guidelines for developing a multivariable centralized automatic flight control system (AFCS) for a twin lift helicopter system (TLHS) are presented. Singular value ideas are used to formulate performance and stability robustness specifications. A Linear Quadratic Gaussian with Loop Transfer Recovery (LQG/LTR) design is obtained and evaluated.

  2. Evaluation of a stepwise, multi-objective, multi-variable parameter optimization method for the APEX model

    USDA-ARS's Scientific Manuscript database

    Hydrologic models are essential tools for environmental assessment of agricultural non-point source pollution. The automatic calibration of hydrologic models, though efficient, demands significant computational power, which can limit its application. The study objective was to investigate a cost e...

  3. An efficient multistage algorithm for full calibration of the hemodynamic model from BOLD signal responses.

    PubMed

    Zambri, Brian; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem

    2017-11-01

    We propose a computational strategy that falls into the category of prediction/correction iterative-type approaches, for calibrating the hemodynamic model. The proposed method is used to estimate consecutively the values of the two sets of model parameters. Numerical results corresponding to both synthetic and real functional magnetic resonance imaging measurements for a single stimulus as well as for multiple stimuli are reported to highlight the capability of this computational methodology to fully calibrate the considered hemodynamic model. Copyright © 2017 John Wiley & Sons, Ltd.

  4. Probabilistic calibration of the SPITFIRE fire spread model using Earth observation data

    NASA Astrophysics Data System (ADS)

    Gomez-Dans, Jose; Wooster, Martin; Lewis, Philip; Spessa, Allan

    2010-05-01

    There is great interest in understanding how fire affects vegetation distribution and dynamics in the context of global vegetation modelling. A way to include these effects is through the development of embedded fire spread models. However, fire is a complex phenomenon and thus difficult to model. Statistical models based on fire return intervals or fire danger indices need large amounts of data for calibration, and are often prisoner to the epoch they were calibrated to. Mechanistic models, such as SPITFIRE, try to model the complete fire phenomenon based on simple physical rules, making these models mostly independent of calibration data. However, the processes expressed in models such as SPITFIRE require many parameters. These parametrisations are often reliant on site-specific experiments, or in some other cases, parameters might not be measured directly. Additionally, in many cases, changes in temporal and/or spatial resolution result in parameters becoming effective. To address the difficulties with parametrisation and the often-used fitting methodologies, we propose using a probabilistic framework to calibrate some areas of the SPITFIRE fire spread model. We calibrate the model against Earth Observation (EO) data, a global and ever-expanding source of relevant data. We develop a methodology that incorporates the limitations of the EO data and reasonable prior values for parameters, and that results in distributions of parameters, which can be used to infer uncertainty due to parameter estimates. Additionally, the covariance structure of parameters and observations is also derived, which can help inform data gathering efforts and model development, respectively. For this work, we focus on Southern African savannas, an important ecosystem for fire studies, and one with a good amount of EO data relevant to fire studies. As calibration datasets, we use burned area data, estimated number of fires and vegetation moisture dynamics.
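    The probabilistic framework described above, priors on parameters combined with a likelihood against observations to yield posterior parameter distributions, can be illustrated with a toy Metropolis-Hastings sampler. This is a generic sketch, not the SPITFIRE calibration; the linear model, noise level, and prior are all assumed for illustration.

```python
# Minimal Metropolis-Hastings calibration of a toy model y = theta * x against
# noisy synthetic "observations". Illustrates posterior sampling, not SPITFIRE.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
y_obs = 2.0 * x + 0.1 * rng.standard_normal(x.size)  # synthetic data, true theta = 2

def log_posterior(theta):
    log_prior = -0.5 * (theta / 5.0) ** 2            # weak Gaussian prior, sd = 5
    resid = y_obs - theta * x
    return log_prior - 0.5 * np.sum((resid / 0.1) ** 2)  # known observation noise

theta, samples = 0.0, []
lp = log_posterior(theta)
for _ in range(5000):
    prop = theta + 0.05 * rng.standard_normal()      # random-walk proposal
    lp_prop = log_posterior(prop)
    if np.log(rng.random()) < lp_prop - lp:          # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)

post = np.array(samples[1000:])                      # discard burn-in
print(f"posterior mean = {post.mean():.2f} +/- {post.std():.2f}")
```

    The retained samples approximate the posterior distribution, so parameter uncertainty (and, with more parameters, their covariance structure) falls out of the same computation.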

  5. Nonlinear and adaptive control

    NASA Technical Reports Server (NTRS)

    Athans, Michael

    1989-01-01

    The primary thrust of the research was to conduct fundamental research in the theories and methodologies for designing complex high-performance multivariable feedback control systems, and to conduct feasibility studies in application areas of interest to NASA sponsors that point out advantages and shortcomings of available control system design methodologies.

  6. Construction of Gridded Daily Weather Data and its Use in Central-European Agroclimatic Study

    NASA Astrophysics Data System (ADS)

    Dubrovsky, M.; Trnka, M.; Skalak, P.

    2013-12-01

    The regional-scale simulations of weather-sensitive processes (e.g. hydrology, agriculture and forestry) for the present and/or future climate often require high resolution meteorological inputs in terms of the time series of selected surface weather characteristics (typically temperature, precipitation, solar radiation, humidity, wind) for a set of stations or on a regular grid. As even the latest Global and Regional Climate Models (GCMs and RCMs) do not provide realistic representation of statistical structure of the surface weather, the model outputs must be postprocessed (downscaled) to achieve the desired statistical structure of the weather data before being used as an input to the follow-up simulation models. One of the downscaling approaches, which is employed also here, is based on a weather generator (WG), which is calibrated using the observed weather series, interpolated, and then modified according to the GCM- or RCM-based climate change scenarios. The present contribution, in which the parametric daily weather generator M&Rfi is linked to the high-resolution RCM output (ALADIN-Climate/CZ model) and GCM-based climate change scenarios, consists of two parts: The first part focuses on a methodology. Firstly, the gridded WG representing the baseline climate is created by merging information from observations and high resolution RCM outputs. In this procedure, WG is calibrated with RCM-simulated multi-variate weather series, and the grid specific WG parameters are then de-biased by spatially interpolated correction factors based on comparison of WG parameters calibrated with RCM-simulated weather series vs. spatially scarcer observations. To represent the future climate, the WG parameters are modified according to the 'WG-friendly' climate change scenarios. 
These scenarios are defined in terms of changes in WG parameters and include, apart from changes in the means, changes in parameters representing additional characteristics of the weather series (e.g. probability of wet day occurrence and lag-1 autocorrelation of daily mean temperature). The WG-friendly scenarios for the present experiment are based on a comparison of future vs. baseline surface weather series simulated by GCMs from the CMIP3 database. The second part will present results of a climate change impact study based on the above methodology applied to Central Europe. The changes in selected climatic characteristics (focusing on extreme precipitation and temperature) and agroclimatic characteristics (including the number of days during the vegetation season with heat and drought stresses) will be analysed. In discussing the results, the emphasis will be put on the 'added value' of various aspects of the above methodology (e.g. inclusion of changes in 'advanced' WG parameters in the climate change scenarios). Acknowledgements: The present experiment is made within the frame of projects WG4VALUE (project LD12029 sponsored by the Ministry of Education, Youth and Sports of CR), ALARO-Climate (project P209/11/2405 sponsored by the Czech Science Foundation), and VALUE (COST ES 1102 action).

  7. Spectroradiometric calibration of the thematic mapper and multispectral scanner system

    NASA Technical Reports Server (NTRS)

    Slater, Philip N.; Palmer, James M.

    1986-01-01

    A list of personnel who have contributed to the program is provided. Sixteen publications and presentations are also listed. A preprint summarizing five in-flight absolute radiometric calibrations of the solar reflective bands of the LANDSAT-5 Thematic Mapper is presented. The 23 band calibrations made on the five dates show a 2.5% RMS variation from the mean as a percentage of the mean. A preprint is also presented that discusses the reflectance-based results of the above preprint. It proceeds to analyze and present results of a second, independent calibration method based on radiance measurements from a helicopter. Radiative transfer through the atmosphere, model atmospheres, the calibration methodology used at White Sands, and the results of a sensitivity analysis of the reflectance-based approach are also discussed.

  8. Development of a semi-automated model identification and calibration tool for conceptual modelling of sewer systems.

    PubMed

    Wolfs, Vincent; Villazon, Mauricio Florencio; Willems, Patrick

    2013-01-01

    Applications such as real-time control, uncertainty analysis and optimization require an extensive number of model iterations. Full hydrodynamic sewer models are not sufficient for these applications due to the excessive computation time. Simplifications are consequently required. A lumped conceptual modelling approach results in a much faster calculation. The process of identifying and calibrating the conceptual model structure could, however, be time-consuming. Moreover, many conceptual models lack accuracy, or do not account for backwater effects. To overcome these problems, a modelling methodology was developed which is suited for semi-automatic calibration. The methodology is tested for the sewer system of the city of Geel in the Grote Nete river basin in Belgium, using both synthetic design storm events and long time series of rainfall input. A MATLAB/Simulink(®) tool was developed to guide the modeller through the step-wise model construction, significantly reducing the time required for the conceptual modelling process.

  9. Comparative study on ATR-FTIR calibration models for monitoring solution concentration in cooling crystallization

    NASA Astrophysics Data System (ADS)

    Zhang, Fangkun; Liu, Tao; Wang, Xue Z.; Liu, Jingxiang; Jiang, Xiaobin

    2017-02-01

    In this paper, calibration model building using ATR-FTIR spectroscopy is investigated for in-situ measurement of solution concentration during a cooling crystallization process. The cooling crystallization of L-glutamic acid (LGA) is studied as a case. It was found that using metastable zone (MSZ) data for model calibration can guarantee prediction accuracy across the operating window of cooling crystallization, compared to the traditional practice of using undersaturated zone (USZ) spectra for model building. Calibration experiments were made for LGA solutions at different concentrations. Four candidate calibration models were established using different zone data for comparison, applying a multivariate partial least-squares (PLS) regression algorithm to the collected spectra together with the corresponding temperature values. Experiments under different process conditions, including changes of solution concentration and operating temperature, were conducted. The results indicate that using the MSZ spectra for model calibration gives more accurate prediction of the solution concentration during the crystallization process, while maintaining accuracy under changes in operating temperature. The primary source of prediction error was identified as spectral nonlinearity between the USZ and MSZ for in-situ measurement. In addition, an LGA cooling crystallization experiment was performed to verify the sensitivity of these calibration models for monitoring the crystal growth process.

  10. Energy Performance Assessment of Radiant Cooling System through Modeling and Calibration at Component Level

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khan, Yasin; Mathur, Jyotirmay; Bhandari, Mahabir S

    2016-01-01

    The paper describes a case study of an information technology office building with a radiant cooling system and a conventional variable air volume (VAV) system installed side by side so that performance can be compared. First, a 3D model of the building involving architecture, occupancy, and HVAC operation was developed in EnergyPlus, a simulation tool. Second, a calibration methodology was applied to develop the base case for assessing the energy saving potential. This paper details the calibration of the whole building energy model down to the component level, including lighting, equipment, and HVAC components such as chillers, pumps, cooling towers, fans, etc. A new methodology for the systematic selection of influential parameters has also been developed for the calibration of simulation models that require long execution times. The error at the whole building level, measured as mean bias error (MBE), is 0.2%, and the coefficient of variation of root mean square error (CvRMSE) is 3.2%. The total errors in HVAC at the hourly level are MBE = 8.7% and CvRMSE = 23.9%, which meet the criteria of ASHRAE Guideline 14 (2002) for hourly calibration. Suggestions are offered for generalizing the energy savings of radiant cooling systems to existing building systems. A base case model was thus developed from the calibrated model to quantify the energy saving potential of the radiant cooling system. It was found that a base case radiant cooling system integrated with DOAS can save 28% energy compared with the conventional VAV system.
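    The MBE and CvRMSE statistics quoted above are standard calibration metrics from ASHRAE Guideline 14. A minimal sketch of their computation, using small hypothetical hourly energy values (the building data themselves are not available here):

```python
# ASHRAE Guideline 14 calibration metrics: mean bias error (MBE) and
# coefficient of variation of the RMSE (CvRMSE), both in percent.
import numpy as np

def mbe(measured, simulated):
    measured, simulated = np.asarray(measured), np.asarray(simulated)
    return 100.0 * np.sum(measured - simulated) / np.sum(measured)

def cv_rmse(measured, simulated):
    measured, simulated = np.asarray(measured), np.asarray(simulated)
    rmse = np.sqrt(np.mean((measured - simulated) ** 2))
    return 100.0 * rmse / np.mean(measured)

# Hypothetical hourly energy data (kWh), for illustration only.
meas = np.array([10.0, 12.0, 11.0, 9.0, 13.0])
sim  = np.array([10.5, 11.5, 11.2, 9.4, 12.6])
print(f"MBE = {mbe(meas, sim):.1f}%, CvRMSE = {cv_rmse(meas, sim):.1f}%")
```

    MBE measures net bias (over- vs under-prediction cancels out), while CvRMSE measures hour-by-hour scatter, which is why the hourly CvRMSE criterion in Guideline 14 is much looser than the bias criterion.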

  11. Analysis and calibration of Safecast data relative to the 2011 Fukushima Daiichi nuclear accident

    NASA Astrophysics Data System (ADS)

    Cervone, G.; Hultquist, C.

    2017-12-01

    Citizen-led movements producing scientific hazard data during disasters are increasingly common. After the Japanese earthquake-triggered tsunami in 2011, and the resulting radioactive releases at the damaged Fukushima Daiichi nuclear power plants, citizens monitored on-ground levels of radiation with innovative mobile devices built from off-the-shelf components. To date, the citizen-led Safecast project has recorded 50 million radiation measurements worldwide, with the majority of these measurements from Japan. A robust methodology is presented to calibrate contributed Safecast radiation measurements acquired between 2011 and 2016 in the Fukushima prefecture of Japan. The Safecast data are calibrated using official observations acquired by the U.S. Department of Energy at the time of the 2011 Fukushima Daiichi power plant nuclear accident. The methodology performs a series of interpolations between the official and contributed datasets at specific time windows and at corresponding spatial locations. The coefficients found are aggregated and interpolated using cubic and linear methods to generate a time-dependent calibration function. Normal background radiation, decay rates and missing values are taken into account during the analysis. Results show that the official Safecast static transformation function overestimates the official measurements because it fails to capture the presence of two different cesium isotopes and their changing ratio with time. The new time-dependent calibration function takes into account the presence of the different cesium isotopes and minimizes the error between official and contributed data. This time-dependent Safecast calibration function is necessary until 2030, after which the error caused by the isotope ratio will become negligible.

  12. [Fundamental aspects for accrediting medical equipment calibration laboratories in Colombia].

    PubMed

    Llamosa-Rincón, Luis E; López-Isaza, Giovanni A; Villarreal-Castro, Milton F

    2010-02-01

    This work analyses the fundamental methodological aspects that should be considered when drawing up calibration procedures for electro-medical equipment, thereby permitting international standard-based accreditation of electro-medical metrology laboratories in Colombia. NTC-ISO-IEC 17025:2005 and GTC-51-based procedures for calibrating electro-medical equipment were implemented and then used as patterns. The mathematical model for determining the estimated uncertainty value when calibrating electro-medical equipment, as applied by the Electrical Variable Metrology Laboratory's Electro-medical Equipment Calibration Area (accredited in compliance with Superintendence of Industry and Commerce Resolution 25771 of May 26th, 2009), consists of two equations depending on the case: E = (Ai + σAi) − (Ar + σAr + δAr1) and E = (Ai + σAi) − (Ar + σA + δAr1). The mathematical modelling implemented for measuring uncertainty at the Universidad Tecnológica de Pereira's Electrical Variable Metrology Laboratory (Electro-medical Equipment Calibration Area) should serve as a guide for calibration efforts initiated in other laboratories in Colombia and Latin America.

  13. Tracking Problem Solving by Multivariate Pattern Analysis and Hidden Markov Model Algorithms

    ERIC Educational Resources Information Center

    Anderson, John R.

    2012-01-01

    Multivariate pattern analysis can be combined with Hidden Markov Model algorithms to track the second-by-second thinking as people solve complex problems. Two applications of this methodology are illustrated with a data set taken from children as they interacted with an intelligent tutoring system for algebra. The first "mind reading" application…

  14. Improved modeling of in vivo confocal Raman data using multivariate curve resolution (MCR) augmentation of ordinary least squares models.

    PubMed

    Hancewicz, Thomas M; Xiao, Chunhong; Zhang, Shuliang; Misra, Manoj

    2013-12-01

    In vivo confocal Raman spectroscopy has become the measurement technique of choice for the skin health and skin care communities as a way of measuring functional chemistry aspects of skin that are key indicators for the care and treatment of various skin conditions. Chief among these measurements are stratum corneum water content, a critical health indicator for severe skin conditions related to dryness, and natural moisturizing factor components that are associated with skin protection and barrier health. In addition, in vivo Raman spectroscopy has proven to be a rapid and effective method for quantifying component penetration into skin for topically applied skin care formulations. The benefit of such a capability is that noninvasive analytical chemistry can be performed in vivo in a clinical setting, significantly simplifying studies aimed at evaluating product performance. This presumes, however, that the data and analysis methods used are compatible and appropriate for the intended purpose. The standard analysis method used by most researchers for in vivo Raman data is ordinary least squares (OLS) regression. The focus of the work described in this paper is the applicability of OLS for in vivo Raman analysis, with particular attention given to its use on non-ideal data that often violate the assumptions required for proper application of OLS. We then describe a newly developed in vivo Raman spectroscopic analysis methodology called multivariate curve resolution-augmented ordinary least squares (MCR-OLS), a relatively simple route to addressing many of the issues with OLS. The method is compared with the standard OLS method using the same in vivo Raman data set, with both qualitative and quantitative comparisons based on model fit error, adherence to known data constraints, and performance against calibration samples.
A clear improvement is shown in each comparison for MCR-OLS over standard OLS, thus supporting the premise that the MCR-OLS method is better suited for general-purpose multicomponent analysis of in vivo Raman spectral data. This suggests that the methodology is more readily adaptable to a wide range of component systems and is thus more generally applicable than standard OLS.
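    The OLS step underlying both methods above fits a measured spectrum as a linear combination of known pure-component spectra; MCR-OLS then augments that basis with data-derived components. A minimal sketch of the OLS step only, on synthetic Gaussian "spectra" (the component names and shapes are assumptions, not the paper's data):

```python
# Ordinary-least-squares fit of a measured spectrum as a linear combination
# of known pure-component spectra. Synthetic Gaussian bands for illustration.
import numpy as np

wavenumbers = np.arange(300)

def band(center, width):
    return np.exp(-0.5 * ((wavenumbers - center) / width) ** 2)

# Columns: pure spectra of two hypothetical components (e.g. water, NMF).
S = np.column_stack([band(100, 20), band(200, 25)])
true_conc = np.array([0.7, 0.3])
measured = S @ true_conc + 0.005 * np.random.default_rng(2).standard_normal(300)

# Least-squares solve for the component weights ("concentrations").
conc, *_ = np.linalg.lstsq(S, measured, rcond=None)
print("estimated concentrations:", np.round(conc, 3))
```

    When the measured spectrum contains contributions absent from the columns of S (baselines, unmodeled components), the OLS residual becomes structured; augmenting S with MCR-derived components is the paper's route to absorbing that structure.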

  15. A calibration rig for multi-component internal strain gauge balance using the new design-of-experiment (DOE) approach

    NASA Astrophysics Data System (ADS)

    Nouri, N. M.; Mostafapour, K.; Kamran, M.

    2018-02-01

    In a closed water-tunnel circuit, a multi-component strain gauge force and moment sensor (also known as a balance) is generally used to measure hydrodynamic forces and moments acting on scaled models. These balances are periodically calibrated by static loading. Their performance and accuracy depend significantly on the rig and the method of calibration. In this research, a new calibration rig was designed and constructed to calibrate multi-component internal strain gauge balances. The calibration rig has six degrees of freedom and six different component-loading structures that can be applied separately or synchronously. The system was designed based on the applicability of formal experimental design techniques, using gravity for balance loading and for balance positioning and alignment relative to gravity. To evaluate the calibration rig, a six-component internal balance developed by the Iran University of Science and Technology was calibrated using response surface methodology. According to the results, the calibration rig met all design criteria. This rig provides the means by which various formal experimental design techniques can be implemented. The simplicity of the rig saves time and money in the design of experiments and in balance calibration while simultaneously increasing the accuracy of these activities.

  16. SCALA: In situ calibration for integral field spectrographs

    NASA Astrophysics Data System (ADS)

    Lombardo, S.; Küsters, D.; Kowalski, M.; Aldering, G.; Antilogus, P.; Bailey, S.; Baltay, C.; Barbary, K.; Baugh, D.; Bongard, S.; Boone, K.; Buton, C.; Chen, J.; Chotard, N.; Copin, Y.; Dixon, S.; Fagrelius, P.; Feindt, U.; Fouchez, D.; Gangler, E.; Hayden, B.; Hillebrandt, W.; Hoffmann, A.; Kim, A. G.; Leget, P.-F.; McKay, L.; Nordin, J.; Pain, R.; Pécontal, E.; Pereira, R.; Perlmutter, S.; Rabinowitz, D.; Reif, K.; Rigault, M.; Rubin, D.; Runge, K.; Saunders, C.; Smadja, G.; Suzuki, N.; Taubenberger, S.; Tao, C.; Thomas, R. C.; Nearby Supernova Factory

    2017-11-01

    Aims: The scientific yield of current and future optical surveys is increasingly limited by systematic uncertainties in the flux calibration. This is the case for type Ia supernova (SN Ia) cosmology programs, where an improved calibration directly translates into improved cosmological constraints. Current methodology rests on models of stars. Here we aim to obtain flux calibration that is traceable to state-of-the-art detector-based calibration. Methods: We present the SNIFS Calibration Apparatus (SCALA), a color (relative) flux calibration system developed for the SuperNova integral field spectrograph (SNIFS), operating at the University of Hawaii 2.2 m (UH 88) telescope. Results: By comparing the color trend of the illumination generated by SCALA during two commissioning runs, and to previous laboratory measurements, we show that we can determine the light emitted by SCALA with a long-term repeatability better than 1%. We describe the calibration procedure necessary to control for system aging. We present measurements of the SNIFS throughput as estimated by SCALA observations. Conclusions: The SCALA calibration unit is now fully deployed at the UH 88 telescope, and with it color-calibration between 4000 Å and 9000 Å is stable at the percent level over a one-year baseline.

  17. Bayesian Local Contamination Models for Multivariate Outliers

    PubMed Central

    Page, Garritt L.; Dunson, David B.

    2013-01-01

    In studies where data are generated from multiple locations or sources it is common for there to exist observations that are quite unlike the majority. Motivated by the application of establishing a reference value in an inter-laboratory setting when outlying labs are present, we propose a local contamination model that is able to accommodate unusual multivariate realizations in a flexible way. The proposed method models the process level of a hierarchical model using a mixture with a parametric component and a possibly nonparametric contamination. Much of the flexibility in the methodology is achieved by allowing varying random subsets of the elements in the lab-specific mean vectors to be allocated to the contamination component. Computational methods are developed and the methodology is compared to three other possible approaches using a simulation study. We apply the proposed method to a NIST/NOAA sponsored inter-laboratory study which motivated the methodological development. PMID:24363465

  18. Calibration transfer of a Raman spectroscopic quantification method for the assessment of liquid detergent compositions between two at-line instruments installed at two liquid detergent production plants.

    PubMed

    Brouckaert, D; Uyttersprot, J-S; Broeckx, W; De Beer, T

    2017-09-01

    Calibration transfer of partial least squares (PLS) quantification models is established between two Raman spectrometers located at two liquid detergent production plants. As full recalibration of existing calibration models is time-consuming, labour-intensive and costly, it is investigated whether mathematical correction methods requiring only a handful of standardization samples can overcome the dissimilarities in spectral response observed between both measurement systems. Univariate and multivariate standardization approaches are investigated, ranging from simple slope/bias correction (SBC), local centring (LC) and single wavelength standardization (SWS) to more complex direct standardization (DS) and piecewise direct standardization (PDS). The results of these five calibration transfer methods are compared with one another, as well as with a full recalibration. Four PLS quantification models, each predicting the concentration of one of the four main ingredients in the studied liquid detergent composition, are targeted for transfer. Accuracy profiles are established from the original and transferred quantification models for validation purposes. A reliable representation of the calibration models' performance before and after transfer is thus established, based on β-expectation tolerance intervals. For each transferred model, it is investigated whether every future routine measurement will be close enough to the unknown true value of the sample. From this validation, it is concluded that instrument standardization is successful for three of the four investigated calibration models using multivariate (DS and PDS) transfer approaches. The fourth transferred PLS model could not be validated over the investigated concentration range, due to a lack of precision of the slave instrument.
Comparing these transfer results to a full recalibration on the slave instrument allows comparison of the predictive power of both Raman systems and leads to the formulation of guidelines for further standardization projects. It is concluded that it is essential to evaluate the performance of the slave instrument prior to transfer, even when it is theoretically identical to the master apparatus. Copyright © 2017 Elsevier B.V. All rights reserved.
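    Piecewise direct standardization (PDS), the transfer approach that validated best above, maps each master wavelength onto a small window of slave wavelengths by local least squares. A minimal sketch in Python on synthetic spectra; the window width and the simulated instrument mismatch are illustrative assumptions, not values from the paper:

```python
import numpy as np

def pds_transform(master, slave, window=2):
    """Estimate a piecewise direct standardization matrix F so that
    slave @ F approximates master; master/slave are (samples, wavelengths)
    spectra of the same standardization samples on the two instruments."""
    n, p = master.shape
    F = np.zeros((p, p))
    for j in range(p):
        lo, hi = max(0, j - window), min(p, j + window + 1)
        b, *_ = np.linalg.lstsq(slave[:, lo:hi], master[:, j], rcond=None)
        F[lo:hi, j] = b                 # banded column of the transform
    return F

# Synthetic demo: the slave differs from the master by a smooth,
# wavelength-dependent gain
rng = np.random.default_rng(0)
master = rng.normal(size=(10, 50))
gain = 1.0 + 0.1 * np.sin(np.linspace(0.0, 3.0, 50))
slave = master * gain
F = pds_transform(master, slave, window=3)
corrected = slave @ F
```

    Because F is estimated column by column over a moving window, only a handful of standardization samples are needed, which is the practical appeal noted in the abstract.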

  19. Enhanced ID Pit Sizing Using Multivariate Regression Algorithm

    NASA Astrophysics Data System (ADS)

    Krzywosz, Kenji

    2007-03-01

    EPRI is funding a program to enhance and improve the reliability of inside diameter (ID) pit sizing for balance-of-plant heat exchangers, such as condensers and component cooling water heat exchangers. More traditional approaches to ID pit sizing involve the use of frequency-specific amplitudes or phase angles. The enhanced multivariate regression algorithm for ID pit depth sizing incorporates three simultaneous input parameters: frequency, amplitude, and phase angle. A calibration data set consisting of machined pits of various rounded and elongated shapes and depths was acquired in the frequency range of 100 kHz to 1 MHz for stainless steel tubing having a nominal wall thickness of 0.028 inch. To add noise to the acquired data set, each test sample was rotated and test data were acquired at the 3, 6, 9, and 12 o'clock positions. The ID pit depths were estimated using second-order and fourth-order regression functions by relying on normalized amplitude and phase angle information from multiple frequencies. Due to the unique damage morphology associated with microbiologically influenced ID pits, it was necessary to modify the elongated calibration standard-based algorithms by relying on the algorithm developed solely from the destructive sectioning results. This paper presents the use of a transformed multivariate regression algorithm to estimate ID pit depths and compares the results with those of the traditional univariate phase angle analysis. Both estimates were then compared with the destructive sectioning results.
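    The enhanced sizing idea of regressing depth on several eddy-current features at once can be illustrated with an ordinary polynomial least-squares fit. The features, levels and the quadratic ground truth below are synthetic stand-ins, not the paper's calibration data:

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(X, degree=2):
    """Expand feature columns into all monomials up to `degree`, plus intercept."""
    n, m = X.shape
    cols = [np.ones(n)]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(m), d):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.column_stack(cols)

# Hypothetical calibration: pit depth as a quadratic function of normalized
# amplitude and phase angle (synthetic stand-in for the eddy-current data)
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(40, 2))         # [amplitude, phase]
depth = 0.1 + 0.3 * X[:, 0] + 0.5 * X[:, 1] ** 2
A = poly_features(X, degree=2)
coef, *_ = np.linalg.lstsq(A, depth, rcond=None)
pred = A @ coef
```

    A fourth-order variant, as mentioned in the abstract, would simply call `poly_features(X, degree=4)` with more calibration samples to keep the fit well determined.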

  20. Determination of boiling point of petrochemicals by gas chromatography-mass spectrometry and multivariate regression analysis of structural activity relationship.

    PubMed

    Fakayode, Sayo O; Mitchell, Breanna S; Pollard, David A

    2014-08-01

    Accurate knowledge of analyte boiling points (BP) is of critical importance in gas chromatographic (GC) separation and crude oil refinery operation in petrochemical industries. This study reported the first combined use of GC separation and partial-least-squares (PLS1) multivariate regression analysis (MRA) of petrochemical structure-activity relationships (SAR) for accurate BP determination of two commercially available (D3710 and MA VHP) calibration gas mix samples. The results of the BP determination using PLS1 multivariate regression were further compared with the results of the traditional simulated distillation method of BP determination. The developed PLS1 regression was able to correctly predict analyte BPs in the D3710 and MA VHP calibration gas mix samples, with root-mean-square percent relative errors (RMS%RE) of 6.4% and 10.8%, respectively. In contrast, the overall RMS%RE values of 32.9% and 40.4% obtained for BP determination in D3710 and MA VHP, respectively, using the traditional simulated distillation method were approximately four times larger than the corresponding RMS%RE of BP prediction using MRA, demonstrating the better predictive ability of MRA. The reported method is rapid, robust, and promising, and can potentially be used routinely for fast analysis, pattern recognition, and analyte BP determination in petrochemical industries. Copyright © 2014 Elsevier B.V. All rights reserved.
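    A PLS1 calibration scored by RMS%RE, as described above, can be sketched with a minimal NIPALS implementation. The synthetic "spectra" and boiling points below are illustrative; the real models were built on GC-MS data:

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """Minimal NIPALS PLS1: returns a regression vector b such that
    (X - X.mean(0)) @ b + y.mean() approximates y."""
    Xr = X - X.mean(axis=0)
    yr = y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)
        t = Xr @ w
        p = Xr.T @ t / (t @ t)
        qa = (yr @ t) / (t @ t)
        Xr = Xr - np.outer(t, p)        # deflate X
        yr = yr - qa * t                # deflate y
        W.append(w); P.append(p); q.append(qa)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.inv(P.T @ W) @ q

def rms_pct_re(y_true, y_pred):
    """Root-mean-square percent relative error, the figure of merit above."""
    return 100.0 * np.sqrt(np.mean(((y_pred - y_true) / y_true) ** 2))

# Synthetic rank-2 "spectra" whose boiling points depend on two latent factors
rng = np.random.default_rng(4)
T = rng.normal(size=(30, 2))
X = T @ rng.normal(size=(2, 10))
y = 100.0 + T @ np.array([5.0, -3.0])
b = pls1_fit(X, y, n_comp=2)
pred = (X - X.mean(axis=0)) @ b + y.mean()
```

    In practice the number of latent components would be chosen by cross-validation rather than known in advance.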

  1. The Wally plot approach to assess the calibration of clinical prediction models.

    PubMed

    Blanche, Paul; Gerds, Thomas A; Ekstrøm, Claus T

    2017-12-06

    A prediction model is calibrated if, roughly, for any percentage x we can expect that x subjects out of 100 experience the event among all subjects that have a predicted risk of x%. Typically, the calibration assumption is assessed graphically, but in practice it is often challenging to judge whether a "disappointing" calibration plot is the consequence of a departure from the calibration assumption, or alternatively just "bad luck" due to sampling variability. To address this issue, we propose a graphical approach which enables the visualization of how much a calibration plot agrees with the calibration assumption. The approach is mainly based on the idea of generating new plots which mimic the available data under the calibration assumption. The method handles the common non-trivial situations in which the data contain censored observations and occurrences of competing events. This is done by building on ideas from constrained non-parametric maximum likelihood estimation methods. Two examples from large cohort data illustrate our proposal. The 'wally' R package is provided to make the methodology easily usable.
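    The core idea of mimicking the data under the calibration assumption can be sketched by resampling outcomes as Bernoulli draws from the predicted risks and collecting the resulting calibration curves into an envelope. This is a simplified illustration, not the 'wally' package's lineup procedure, and it ignores censoring and competing risks:

```python
import numpy as np

def calibration_curve(pred, outcome, bins=5):
    """Observed event proportion within equal-width predicted-risk bins."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    idx = np.clip(np.digitize(pred, edges) - 1, 0, bins - 1)
    return np.array([outcome[idx == b].mean() if np.any(idx == b) else np.nan
                     for b in range(bins)])

def wally_band(pred, n_sim=200, bins=5, seed=0):
    """Simulate outcomes under the calibration assumption
    (event ~ Bernoulli(predicted risk)) and return pointwise 2.5%/97.5%
    envelopes of the simulated calibration curves."""
    rng = np.random.default_rng(seed)
    sims = np.array([calibration_curve(pred, rng.random(pred.size) < pred, bins)
                     for _ in range(n_sim)])
    return np.nanpercentile(sims, [2.5, 97.5], axis=0)

# If the observed curve leaves this band, a "disappointing" plot is unlikely
# to be bad luck alone
rng = np.random.default_rng(5)
pred = rng.random(1000)
outcome = (rng.random(1000) < pred).astype(int)
band = wally_band(pred)
curve = calibration_curve(pred, outcome)
```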

  2. Calibration of an electronic nose for poultry farm

    NASA Astrophysics Data System (ADS)

    Abdullah, A. H.; Shukor, S. A.; Kamis, M. S.; Shakaff, A. Y. M.; Zakaria, A.; Rahim, N. A.; Mamduh, S. M.; Kamarudin, K.; Saad, F. S. A.; Masnan, M. J.; Mustafa, H.

    2017-03-01

    Malodour from poultry farms causes air pollution and is therefore potentially dangerous to human and animal health. This issue also poses a sustainability risk to the poultry industry due to objections from the local community. The aim of this paper is to develop and calibrate a cost-effective and efficient electronic nose for poultry farm air monitoring. The instrument's main components include a sensor chamber, an array of specific sensors, a microcontroller, signal conditioning circuits and wireless sensor networks. The instrument was calibrated to allow classification of different concentrations of the main volatile compounds in poultry farm malodour. The outcome of the process also confirms the device's reliability prior to its use for poultry farm malodour assessment. Multivariate analysis (HCA and KNN) and artificial neural network (ANN) pattern recognition techniques were used to process the acquired data. The results show that the instrument is able to classify the samples using an ANN classification model with high accuracy. The findings verify the instrument's suitability for use as an effective poultry farm malodour monitor.

  3. Quantitation of active pharmaceutical ingredients and excipients in powder blends using designed multivariate calibration models by near-infrared spectroscopy.

    PubMed

    Li, Weiyong; Worosila, Gregory D

    2005-05-13

    This research note demonstrates the simultaneous quantitation of a pharmaceutical active ingredient and three excipients in a simulated powder blend containing acetaminophen, Prosolv and Crospovidone. An experimental design approach was used to generate a 5-level (%, w/w) calibration sample set that included 125 samples. The samples were prepared by weighing suitable amounts of powder into separate 20-mL scintillation vials and mixing manually. Partial least squares (PLS) regression was used in calibration model development. The models generated accurate results for the quantitation of Crospovidone (at 5%, w/w) and magnesium stearate (at 0.5%, w/w). Further testing demonstrated that 2-level models were as effective as the 5-level ones, which reduced the number of calibration samples to 50. The models had a small bias for the quantitation of acetaminophen (at 30%, w/w) and Prosolv (at 64.5%, w/w) in the blend. The implication of this bias is discussed.
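    A full 5-level design over three varied components yields exactly 5^3 = 125 candidate blends, matching the calibration set size quoted above. The concentration levels below are hypothetical placeholders, not the study's actual values:

```python
from itertools import product

# Hypothetical 5-level design over three varied components (%, w/w);
# these level values are placeholders, not the study's concentrations
levels = {
    "acetaminophen": [24, 27, 30, 33, 36],
    "crospovidone": [3, 4, 5, 6, 7],
    "mg_stearate": [0.1, 0.3, 0.5, 0.7, 0.9],
}
design = [dict(zip(levels, combo)) for combo in product(*levels.values())]
```

    Restricting each factor to its two extreme levels gives 2^3 = 8 distinct settings per replicate, which is how a 2-level design shrinks the sample burden as the abstract describes.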

  4. Determination of glucose in a biological matrix by multivariate analysis of multiple band-pass-filtered Fourier transform near-infrared interferograms.

    PubMed

    Mattu, M J; Small, G W; Arnold, M A

    1997-11-15

    A multivariate calibration method is described in which Fourier transform near-infrared interferogram data are used to determine clinically relevant levels of glucose in an aqueous matrix of bovine serum albumin (BSA) and triacetin. BSA and triacetin are used to model the protein and triglycerides in blood, respectively, and are present in levels spanning the normal human physiological range. A full factorial experimental design is constructed for the data collection, with glucose at 10 levels, BSA at 4 levels, and triacetin at 4 levels. Gaussian-shaped band-pass digital filters are applied to the interferogram data to extract frequencies associated with an absorption band of interest. Separate filters of various widths are positioned on the glucose band at 4400 cm-1, the BSA band at 4606 cm-1, and the triacetin band at 4446 cm-1. Each filter is applied to the raw interferogram, producing one, two, or three filtered interferograms, depending on the number of filters used. Segments of these filtered interferograms are used together in a partial least-squares regression analysis to build glucose calibration models. The optimal calibration model is realized by use of separate segments of interferograms filtered with three filters centered on the glucose, BSA, and triacetin bands. Over the physiological range of 1-20 mM glucose, this 17-term model exhibits values of R2, standard error of calibration, and standard error of prediction of 98.85%, 0.631 mM, and 0.677 mM, respectively. These results are comparable to those obtained in a conventional analysis of spectral data. The interferogram-based method operates without the use of a separate background measurement and employs only a short section of the interferogram.
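    The Gaussian band-pass filtering step can be sketched as multiplication of the interferogram's Fourier transform by a Gaussian centred on the band of interest. The sampling step, filter width and the two-cosine "interferogram" below are illustrative assumptions, not the paper's instrument parameters:

```python
import numpy as np

def bandpass_interferogram(ifg, center, fwhm, dx):
    """Gaussian band-pass in the wavenumber domain, applied by filtering the
    Fourier transform of the interferogram. dx is the path step in cm."""
    n = ifg.size
    nu = np.fft.rfftfreq(n, d=dx)                   # wavenumbers, cm^-1
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    g = np.exp(-0.5 * ((nu - center) / sigma) ** 2)
    return np.fft.irfft(np.fft.rfft(ifg) * g, n)

# Synthetic interferogram: a 4400 cm-1 component (the glucose band position
# named in the abstract) plus a 4606 cm-1 component to be rejected
dx = 1.0 / 15798.0                                  # HeNe-referenced step, cm
x = np.arange(4096) * dx
ifg = np.cos(2 * np.pi * 4400 * x) + np.cos(2 * np.pi * 4606 * x)
out = bandpass_interferogram(ifg, center=4400.0, fwhm=60.0, dx=dx)
```

    Segments of such filtered interferograms, one per analyte band, would then feed the PLS regression described in the abstract.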

  5. Calibration procedure for a laser triangulation scanner with uncertainty evaluation

    NASA Astrophysics Data System (ADS)

    Genta, Gianfranco; Minetola, Paolo; Barbato, Giulio

    2016-11-01

    Most of the low-cost 3D scanning devices that are nowadays available on the market are sold without a user calibration procedure to correct measurement errors related to changes in environmental conditions. In addition, there is no specific international standard defining a procedure to check the performance of a 3D scanner over time. This paper aims to detail a thorough methodology to calibrate a 3D scanner and assess its measurement uncertainty. The proposed procedure is based on the use of a reference ball plate and is applied to a triangulation laser scanner. Experimental results show that the metrological performance of the instrument can be greatly improved by the application of the calibration procedure, which corrects systematic errors and reduces the device's measurement uncertainty.

  6. Oral cancer calibration and diagnosis among professionals from the public health in São Paulo, Brazil.

    PubMed

    Alves, José Carlos; da Silva, Renato Pereira; Cortellazzi, Karine Laura; Vazquez, Fabiana de Lima; Marques, Regina Auxiliadora de Amorim; Pereira, Antonio Carlos; Meneghim, Marcelo de Castro; Mialhe, Fábio Luiz

    2013-01-01

    Oral cancer is a public health problem responsible for 13% of deaths worldwide in 2008, and screening programs can be useful to detect individuals more vulnerable to the disease, improving its prognosis. The aim of the present study was to evaluate oral cancer calibration (in lux and in vivo methodologies) and diagnosis processes performed by dental surgeons (DSs) of the public health system in São Paulo, Brazil. Thirty-three oral cancer photographs were examined during in lux calibration, while 560 individuals were examined during in vivo calibration. Oral conditions were coded as "0 - sound tissues", "1 - buccal lesions without malignant potential" and "2 - buccal lesions with malignant potential". The final sample for oral cancer screening was composed of 336 individuals aged 40 years or older. Kappa values for interexaminer agreement were 0.67 and 0.45 for the in lux and in vivo calibrations, respectively. The accuracy of both methodologies was over 80%. Oral cancer screening revealed 48 healthy individuals, 273 oral lesions coded as "1" and 12 oral lesions coded as "2". In spite of the low reproducibility, the validity of the visual examination in oral cancer screening was satisfactory, showing its importance as part of preventive oral cancer programs and public health system campaigns.

  7. Suomi-NPP VIIRS Day-Night Band On-Orbit Calibration and Performance

    NASA Technical Reports Server (NTRS)

    Chen, Hongda; Xiong, Xiaoxiong; Sun, Chengbo; Chen, Xuexia; Chiang, Kwofu

    2017-01-01

    The Suomi National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite (VIIRS) instrument has successfully operated since its launch in October 2011. The VIIRS day-night band (DNB) is a panchromatic channel covering wavelengths from 0.5 to 0.9 microns that is capable of observing Earth scenes during both daytime and nighttime at a spatial resolution of 750 m. To cover the large dynamic range, the DNB operates at low-, middle-, and high-gain stages, and it uses an on-board solar diffuser (SD) for its low-gain stage calibration. The SD observations also provide a means to compute the gain ratios of the low-to-middle and middle-to-high gain stages. This paper describes the DNB on-orbit calibration methodology used by the VIIRS characterization support team in supporting the NASA Earth science community with consistent VIIRS sensor data records made available by the land science investigator-led processing systems. It provides an assessment and update of the DNB on-orbit performance, including the SD degradation in the DNB spectral range, detector gain and gain ratio trending, and stray-light contamination and its correction. Also presented in this paper are performance validations based on Earth scenes and lunar observations, and comparisons to the calibration methodology used by the operational interface data processing segment.

  8. Statistical Knowledge for Teaching: Exploring it in the Classroom

    ERIC Educational Resources Information Center

    Burgess, Tim

    2009-01-01

    This paper first reports on the methodology of a study of teacher knowledge for statistics, conducted in a classroom at the primary school level. The methodology included videotaping of a sequence of lessons that involved students in investigating multivariate data sets, followed up by audiotaped interviews with each teacher. These stimulated…

  9. Identifying Pedophiles "Eligible" for Community Notification under Megan's Law: A Multivariate Model for Actuarially Anchored Decisions.

    ERIC Educational Resources Information Center

    Pallone, Nathaniel J.; Hennessy, James J.; Voelbel, Gerald T.

    1998-01-01

    A scientifically sound methodology for identifying offenders about whose presence the community should be notified is demonstrated. A stepwise multiple regression was calculated among incarcerated pedophiles (N=52) including both psychological and legal data; a precision-weighted equation produced 90.4% "true positives." This methodology can be…

  10. Calculating the sensitivity of wind turbine loads to wind inputs using response surfaces

    NASA Astrophysics Data System (ADS)

    Rinker, Jennifer M.

    2016-09-01

    This paper presents a methodology to calculate wind turbine load sensitivities to turbulence parameters through the use of response surfaces. A response surface is a high-dimensional polynomial surface that can be calibrated to any set of input/output data and then used to generate synthetic data at a low computational cost. Sobol sensitivity indices (SIs) can then be calculated with relative ease using the calibrated response surface. The proposed methodology is demonstrated by calculating the total sensitivity of the maximum blade root bending moment of the WindPACT 5 MW reference model to four turbulence input parameters: a reference mean wind speed, a reference turbulence intensity, the Kaimal length scale, and a novel parameter reflecting the nonstationarity present in the inflow turbulence. The input/output data used to calibrate the response surface were generated for a previous project. The fit of the calibrated response surface is evaluated in terms of the error between the model and the training data and in terms of convergence. The Sobol SIs are calculated using the calibrated response surface, and their convergence is examined. The Sobol SIs reveal that, of the four turbulence parameters examined in this paper, the variance contributions of the Kaimal length scale and the nonstationarity parameter are negligible. Thus, the findings in this paper represent the first systematic evidence that stochastic wind turbine load response statistics can be modeled purely by mean wind speed and turbulence intensity.
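    Once a cheap calibrated surface is available, first-order Sobol SIs can be estimated by pick-freeze Monte Carlo, as sketched below. The quadratic "response surface" is a synthetic stand-in for the calibrated WindPACT surrogate, and the uniform inputs are an illustrative assumption:

```python
import numpy as np

def first_order_sobol(model, dim, n=20000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol sensitivity
    indices for a model with inputs uniform on [0, 1]^dim."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, dim)), rng.random((n, dim))
    fA, fB = model(A), model(B)
    var = fA.var()
    si = []
    for i in range(dim):
        ABi = B.copy()
        ABi[:, i] = A[:, i]             # share only input i with A
        si.append(np.mean(fA * (model(ABi) - fB)) / var)
    return np.array(si)

# Cheap polynomial stand-in for a calibrated response surface: the first
# input dominates the output variance, the second barely matters
surface = lambda X: 3.0 * X[:, 0] + 0.1 * X[:, 1] + X[:, 0] * X[:, 1]
si = first_order_sobol(surface, dim=2)
```

    The estimator works because f(A) and f(AB_i) share only input i, so their covariance isolates that input's first-order variance contribution.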

  11. Predictive and mechanistic multivariate linear regression models for reaction development

    PubMed Central

    Santiago, Celine B.; Guo, Jing-Yao

    2018-01-01

    Multivariate Linear Regression (MLR) models utilizing computationally-derived and empirically-derived physical organic molecular descriptors are described in this review. Several reports demonstrating the effectiveness of this methodological approach towards reaction optimization and mechanistic interrogation are discussed. A detailed protocol to access quantitative and predictive MLR models is provided as a guide for model development and parameter analysis. PMID:29719711

  12. Principal Cluster Axes: A Projection Pursuit Index for the Preservation of Cluster Structures in the Presence of Data Reduction

    ERIC Educational Resources Information Center

    Steinley, Douglas; Brusco, Michael J.; Henson, Robert

    2012-01-01

    A measure of "clusterability" serves as the basis of a new methodology designed to preserve cluster structure in a reduced dimensional space. Similar to principal component analysis, which finds the direction of maximal variance in multivariate space, principal cluster axes find the direction of maximum clusterability in multivariate space.…

  13. Multi-Variable and Multi-Site Calibration and Validation of SWAT for Water Quality in the Kaskaskia River Watershed

    EPA Science Inventory

    The Future Midwest Landscape (FML) project is part of the U.S. Environmental Protection Agency’s new Ecosystem Services Research Program, undertaken to examine the variety of ways in which landscapes that include crop lands, conservation areas, wetlands, lakes and streams affect ...

  14. ASCAL: A Microcomputer Program for Estimating Logistic IRT Item Parameters.

    ERIC Educational Resources Information Center

    Vale, C. David; Gialluca, Kathleen A.

    ASCAL is a microcomputer-based program for calibrating items according to the three-parameter logistic model of item response theory. It uses a modified multivariate Newton-Raphson procedure for estimating item parameters. This study evaluated this procedure using Monte Carlo Simulation Techniques. The current version of ASCAL was then compared to…

  15. Multivariate approaches for stability control of the olive oil reference materials for sensory analysis - part II: applications.

    PubMed

    Valverde-Som, Lucia; Ruiz-Samblás, Cristina; Rodríguez-García, Francisco P; Cuadros-Rodríguez, Luis

    2018-02-09

    The organoleptic quality of virgin olive oil depends on positive and negative sensory attributes. These attributes are related to the volatile organic compounds and phenolic compounds that constitute the aroma and taste (flavour) of the virgin olive oil. The flavour is the characteristic that can be measured by a taster panel. However, as for any analytical measuring device, the tasters, individually, and the panel, as a whole, should be harmonized and validated, and proper olive oil standards are needed. In the present study, multivariate approaches are put into practice, in addition to the rules to build a multivariate control chart, from chromatographic volatile fingerprinting and chemometrics. Fingerprinting techniques provide analytical information without identifying and quantifying the analytes. This methodology is used to monitor the stability of sensory reference materials. Similarity indices have been calculated to build multivariate control charts for two certified olive oil reference materials, which were used as examples to monitor their stability. This methodology based on chromatographic data could be applied in parallel with the 'panel test' sensory method to reduce the workload of sensory analysis. © 2018 Society of Chemical Industry.
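    A similarity-index control chart of the kind described can be sketched as a correlation between each volatile fingerprint and the reference material's mean fingerprint, with 3-sigma control limits. The fingerprints and limits below are synthetic illustrations, not the paper's charts:

```python
import numpy as np

def similarity_index(x, ref):
    """Correlation-type similarity between a fingerprint and a reference profile."""
    xc, rc = x - x.mean(), ref - ref.mean()
    return float(xc @ rc / (np.linalg.norm(xc) * np.linalg.norm(rc)))

def control_limits(fingerprints):
    """Mean reference fingerprint plus a 3-sigma lower similarity limit
    (the upper limit is capped at the maximum similarity of 1)."""
    ref = fingerprints.mean(axis=0)
    s = np.array([similarity_index(f, ref) for f in fingerprints])
    return ref, s.mean() - 3.0 * s.std(), 1.0

# Synthetic volatile fingerprints of a stable reference material: one
# chromatographic "peak" plus small batch-to-batch noise
rng = np.random.default_rng(2)
base = np.exp(-0.5 * ((np.arange(100) - 40) / 5.0) ** 2)
batch = base + 0.01 * rng.normal(size=(20, 100))
ref, lcl, ucl = control_limits(batch)
drifted = 0.5 * base + 0.2 * rng.normal(size=100)   # degraded sample
```

    A sample whose similarity falls below the lower control limit would be flagged as out of control, signalling possible degradation of the reference material.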

  16. Comparative artificial neural network and partial least squares models for analysis of Metronidazole, Diloxanide, Spiramycin and Cliquinol in pharmaceutical preparations.

    PubMed

    Elkhoudary, Mahmoud M; Abdel Salam, Randa A; Hadad, Ghada M

    2014-09-15

    Metronidazole (MNZ) is a widely used antibacterial and amoebicide drug. Therefore, it is important to develop a rapid and specific analytical method for the determination of MNZ in mixture with Spiramycin (SPY), Diloxanide (DIX) and Cliquinol (CLQ) in pharmaceutical preparations. This work describes six simple, sensitive and reliable multivariate calibration methods, namely linear and nonlinear artificial neural networks preceded by genetic algorithm (GA-ANN) and principal component analysis (PCA-ANN), as well as partial least squares (PLS) either alone or preceded by genetic algorithm (GA-PLS), for the UV spectrophotometric determination of MNZ, SPY, DIX and CLQ in pharmaceutical preparations with no interference from pharmaceutical additives. The results illustrate the problem of nonlinearity and how models like ANN can handle it. The analytical performance of these methods was statistically validated with respect to linearity, accuracy, precision and specificity. The developed methods demonstrate the ability of the aforementioned multivariate calibration models to handle and resolve the UV spectra of the four-component mixtures using an easy and widely available UV spectrophotometer. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Simultaneous determination of Nifuroxazide and Drotaverine hydrochloride in pharmaceutical preparations by bivariate and multivariate spectral analysis

    NASA Astrophysics Data System (ADS)

    Metwally, Fadia H.

    2008-02-01

    The quantitative predictive abilities of the new and simple bivariate spectrophotometric method are compared with the results obtained by the use of multivariate calibration methods [classical least squares (CLS), principal component regression (PCR) and partial least squares (PLS)], using the information contained in the absorption spectra of the appropriate solutions. Mixtures of the two drugs Nifuroxazide (NIF) and Drotaverine hydrochloride (DRO) were resolved by application of the bivariate method. The different chemometric approaches were also applied with previous optimization of the calibration matrix, as they are useful for the simultaneous inclusion of many spectral wavelengths. The results found by application of the bivariate, CLS, PCR and PLS methods for the simultaneous determination of mixtures of both components containing 2-12 μg ml-1 of NIF and 2-8 μg ml-1 of DRO are reported. Both approaches were satisfactorily applied to the simultaneous determination of NIF and DRO in pure form and in pharmaceutical formulation. The results were in accordance with those given by the EVA Pharma reference spectrophotometric method.

  18. Fiber-optic evanescent-wave spectroscopy for fast multicomponent analysis of human blood

    NASA Astrophysics Data System (ADS)

    Simhi, Ronit; Gotshal, Yaron; Bunimovich, David; Katzir, Abraham; Sela, Ben-Ami

    1996-07-01

    A spectral analysis of human blood serum was undertaken by fiber-optic evanescent-wave spectroscopy (FEWS) with the use of a Fourier-transform infrared spectrometer. A special cell for the FEWS measurements was designed and built that incorporates an IR-transmitting silver halide fiber and a means for introducing the blood-serum sample. Further improvements in analysis were obtained by the adoption of multivariate calibration techniques that are already used in clinical chemistry. The partial least-squares algorithm was used to calculate the concentrations of cholesterol, total protein, urea, and uric acid in human blood serum. The estimated prediction errors obtained (in percent of the average value) were 6% for total protein, 15% for cholesterol, 30% for urea, and 30% for uric acid. These results were compared with another independent prediction method that used a neural-network model. This model yielded estimated prediction errors of 8.8% for total protein, 25% for cholesterol, and 21% for uric acid. Keywords: spectroscopy, fiber-optic evanescent-wave spectroscopy, Fourier-transform infrared spectrometer, blood, multivariate calibration, neural networks.

  19. Multivariate curve resolution-assisted determination of pseudoephedrine and methamphetamine by HPLC-DAD in water samples.

    PubMed

    Vosough, Maryam; Mohamedian, Hadi; Salemi, Amir; Baheri, Tahmineh

    2015-02-01

    In the present study, a simple strategy based on solid-phase extraction (SPE) with a cation exchange sorbent (Finisterre SCX) followed by fast high-performance liquid chromatography (HPLC) with diode array detection, coupled with chemometric tools, has been proposed for the determination of methamphetamine and pseudoephedrine in ground water and river water. First, the HPLC and SPE conditions were optimized and the analytical performance of the method was determined. In the case of ground water, determination of the analytes was successfully performed through univariate calibration curves. For the river water sample, multivariate curve resolution-alternating least squares (MCR-ALS) was implemented and the second-order advantage was achieved in samples containing uncalibrated interferences and uncorrected background signals. The calibration curves showed good linearity (r(2) > 0.994). The limits of detection for pseudoephedrine and methamphetamine were 0.06 and 0.08 μg/L, and the average recovery values were 104.7 and 102.3% in river water, respectively. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
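    MCR-ALS resolves the HPLC-DAD data matrix into non-negative concentration profiles and spectra by alternating least squares. A bare-bones sketch on a synthetic two-component chromatogram; the initial guess, elution profiles and "wavelengths" are all illustrative:

```python
import numpy as np

def mcr_als(D, C0, n_iter=100):
    """Bare-bones MCR-ALS: factor D (times x wavelengths) as D ≈ C @ S.T with
    non-negative concentration profiles C and spectra S."""
    C = C0.copy()
    for _ in range(n_iter):
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0.0, None)
        C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0.0, None)
    return C, S

# Synthetic two-component chromatogram with overlapping elution profiles
t = np.arange(60)
C_true = np.column_stack([np.exp(-0.5 * ((t - 20) / 4.0) ** 2),
                          np.exp(-0.5 * ((t - 28) / 4.0) ** 2)])
S_true = np.array([[1.0, 0.2, 0.0],
                   [0.1, 0.8, 0.4]]).T          # three "wavelengths"
D = C_true @ S_true.T
C, S = mcr_als(D, C0=C_true + 0.05)             # initial guess near the truth
```

    The non-negativity clipping is the constraint that makes the resolved profiles chemically interpretable, and it is what lets such models cope with uncalibrated interferences.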

  20. Ratio manipulating spectrophotometry versus chemometry as stability indicating methods for cefquinome sulfate determination

    NASA Astrophysics Data System (ADS)

    Yehia, Ali M.; Arafa, Reham M.; Abbas, Samah S.; Amer, Sawsan M.

    2016-01-01

    Spectral resolution of cefquinome sulfate (CFQ) in the presence of its degradation products was studied. Three selective, accurate and rapid spectrophotometric methods were developed for the determination of CFQ in the presence of either its hydrolytic, oxidative or photo-degradation products. The proposed ratio difference, derivative ratio and mean centering methods are ratio-manipulating spectrophotometric methods that were satisfactorily applied for the selective determination of CFQ within a linear range of 5.0-40.0 μg mL-1. Concentration Residuals Augmented Classical Least Squares was applied and evaluated for the determination of the cited drug in the presence of all its degradation products. Traditional Partial Least Squares regression was also applied and benchmarked against the proposed advanced multivariate calibration. Twenty-five experimentally designed synthetic mixtures of three factors at five levels were used to calibrate and validate the multivariate models. Advanced chemometrics succeeded in the quantitative and qualitative analyses of CFQ along with its hydrolytic, oxidative and photo-degradation products. The proposed methods were applied successfully to the analysis of different pharmaceutical formulations. These developed methods were simple and cost-effective compared with the manufacturer's RP-HPLC method.

  1. Detection of Butter Adulteration with Lard by Employing (1)H-NMR Spectroscopy and Multivariate Data Analysis.

    PubMed

    Fadzillah, Nurrulhidayah Ahmad; Man, Yaakob bin Che; Rohman, Abdul; Rosman, Arieff Salleh; Ismail, Amin; Mustafa, Shuhaimi; Khatib, Alfi

    2015-01-01

    The authentication of food products regarding the presence of components not allowed for certain religions, such as lard, is very important. In this study, we used proton nuclear magnetic resonance ((1)H-NMR) spectroscopy for the analysis of butter adulterated with lard, simultaneously quantifying all proton-bearing compounds, and consequently all relevant sample classes. Since the spectra obtained were too complex to be analyzed visually by the naked eye, classification of the spectra was carried out. The multivariate calibration method of partial least squares (PLS) regression was used to model the relationship between the actual and predicted lard values. The model yielded the highest regression coefficient (R(2)) of 0.998 and the lowest root mean square error of calibration (RMSEC) of 0.0091% and root mean square error of prediction (RMSEP) of 0.0090, respectively. Cross-validation testing evaluated the predictive power of the model. The PLS model was shown to be a good model, as the intercepts of R(2)Y and Q(2)Y were 0.0853 and -0.309, respectively.

  2. Comparative artificial neural network and partial least squares models for analysis of Metronidazole, Diloxanide, Spiramycin and Cliquinol in pharmaceutical preparations

    NASA Astrophysics Data System (ADS)

    Elkhoudary, Mahmoud M.; Abdel Salam, Randa A.; Hadad, Ghada M.

    2014-09-01

    Metronidazole (MNZ) is a widely used antibacterial and amoebicide drug. Therefore, it is important to develop a rapid and specific analytical method for the determination of MNZ in mixture with Spiramycin (SPY), Diloxanide (DIX) and Cliquinol (CLQ) in pharmaceutical preparations. This work describes six simple, sensitive and reliable multivariate calibration methods, namely linear and nonlinear artificial neural networks preceded by genetic algorithm (GA-ANN) and principal component analysis (PCA-ANN), as well as partial least squares (PLS) either alone or preceded by genetic algorithm (GA-PLS), for the UV spectrophotometric determination of MNZ, SPY, DIX and CLQ in pharmaceutical preparations with no interference from pharmaceutical additives. The results illustrate the problem of nonlinearity and how models like ANN can handle it. The analytical performance of these methods was statistically validated with respect to linearity, accuracy, precision and specificity. The developed methods demonstrate the ability of the aforementioned multivariate calibration models to handle and resolve the UV spectra of the four-component mixtures using an easy and widely available UV spectrophotometer.

  3. Gaussian-based routines to impute categorical variables in health surveys.

    PubMed

    Yucel, Recai M; He, Yulei; Zaslavsky, Alan M

    2011-12-20

    The multivariate normal (MVN) distribution is arguably the most popular parametric model used in imputation and is available in most software packages (e.g., SAS PROC MI, the R package norm). When it is applied to categorical variables as an approximation, practitioners often either apply simple rounding techniques for ordinal variables or create a distinct 'missing' category and/or disregard the nominal variable from the imputation phase. All of these practices can potentially lead to biased and/or uninterpretable inferences. In this work, we develop a new rounding methodology, calibrated to preserve observed distributions, to multiply impute missing categorical covariates. The major attraction of this method is its flexibility to use any 'working' imputation software, particularly software based on MVN, allowing practitioners to obtain usable imputations with small biases. A simulation study demonstrates the clear advantage of the proposed method in rounding ordinal variables and, in some scenarios, its plausibility in imputing nominal variables. We illustrate our methods on the widely used National Survey of Children with Special Health Care Needs, where incomplete values on race posed a threat to inferences pertaining to disparities. Copyright © 2011 John Wiley & Sons, Ltd.
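    For a binary variable, calibrated rounding amounts to choosing the rounding cutoff so that the imputed category frequency matches the observed one, rather than rounding at 0.5. A toy sketch under that interpretation; the data and the threshold choice are illustrative, not the authors' exact algorithm:

```python
import numpy as np

def calibrated_round(imputed, observed):
    """Round continuous imputations of a binary variable with a cutoff chosen
    so the imputed category frequency matches the observed frequency
    (a distribution-preserving alternative to rounding at 0.5)."""
    p1 = observed.mean()                        # observed share of category 1
    cutoff = np.quantile(imputed, 1.0 - p1)
    return (imputed >= cutoff).astype(int)

# Observed data are roughly 30% ones; the continuous MVN-style imputations are
# shifted low, so naive 0.5-rounding would understate category 1
rng = np.random.default_rng(3)
observed = (rng.random(500) < 0.3).astype(int)
imputed = rng.normal(loc=0.2, scale=0.3, size=200)
rounded = calibrated_round(imputed, observed)
```

    The appeal, as the abstract notes, is that the continuous draws can come from any working MVN-based imputation engine; only the rounding step is adapted.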

  4. Visible Infrared Imaging Radiometer Suite (VIIRS) and uncertainty in the ocean color calibration methodology

    NASA Astrophysics Data System (ADS)

    Turpie, Kevin R.; Eplee, Robert E.; Meister, Gerhard

    2015-09-01

    During the first few years of the Suomi National Polar-orbiting Partnership (NPP) mission, the NASA Ocean Color calibration team continued to improve its approach to the on-orbit calibration of the Visible Infrared Imaging Radiometer Suite (VIIRS). As the calibration was adjusted for changes in ocean band responsivity, the team also estimated a theoretical residual error in the calibration trends of well within a few tenths of a percent. This error translates into trend uncertainties in regional time series of surface reflectance and derived products, where biases as small as a few tenths of a percent in certain bands can have significant effects. This study examines spurious trends inherent to the calibration and biases that arise between reprocessing efforts because of extrapolation of the time-dependent calibration table. With the addition of new models for instrument and calibration system trend artifacts, new calibration trends led to improved estimates of ocean time series uncertainty. Table extrapolation biases are presented for the first time. The results further the understanding of uncertainty in measuring regional and global biospheric trends in the ocean using VIIRS, and better define the role of such records in climate research.

  5. Autocalibration method for non-stationary CT bias correction.

    PubMed

    Vegas-Sánchez-Ferrero, Gonzalo; Ledesma-Carbayo, Maria J; Washko, George R; Estépar, Raúl San José

    2018-02-01

    Computed tomography (CT) is a widely used imaging modality for screening and diagnosis. However, the deleterious effects of the radiation exposure inherent in CT imaging require the development of image reconstruction methods that can reduce exposure levels. The development of iterative reconstruction techniques is now enabling the acquisition of low-dose CT images whose quality is comparable to that of CT images acquired with much higher radiation dosages. However, the characterization and calibration of the CT signal due to changes in dosage and reconstruction approaches is crucial to provide clinically relevant data. Although CT scanners are calibrated as part of the imaging workflow, the calibration is limited to select global reference values and does not consider other inherent factors of the acquisition that depend on the subject scanned (e.g. photon starvation, partial volume effect, beam hardening) and result in a non-stationary noise response. In this work, we analyze the effect of reconstruction biases caused by non-stationary noise and propose an autocalibration methodology to compensate for it. Our contributions are: 1) the derivation of a functional relationship between observed bias and non-stationary noise, 2) a robust and accurate method to estimate the local variance, 3) an autocalibration methodology that does not necessarily rely on a calibration phantom, attenuates the bias caused by noise and removes the systematic bias observed in devices from different vendors. The validation of the proposed methodology was performed with a physical phantom and clinical CT scans acquired with different configurations (kernels, doses, algorithms including iterative reconstruction). The results confirmed the suitability of the proposed methods for removing the intra-device and inter-device reconstruction biases. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Application of virtual distances methodology to laser tracker verification with an indexed metrology platform

    NASA Astrophysics Data System (ADS)

    Acero, R.; Santolaria, J.; Pueo, M.; Aguilar, J. J.; Brau, A.

    2015-11-01

    High-range measuring equipment such as laser trackers needs large calibrated reference artifacts in its calibration and verification procedures. In this paper, a new verification procedure for portable coordinate measuring instruments, based on the generation and evaluation of virtual distances with an indexed metrology platform, is developed. This methodology enables the definition of an unlimited number of reference distances without materializing them in a physical gauge. The generation of the virtual points and the reference lengths derived from them is linked to the concept of the indexed metrology platform and to knowledge of the relative position and orientation of its upper and lower platforms with high accuracy. The measuring instrument and the indexed metrology platform remain stationary while the virtual mesh is rotated around them. As a first step, the virtual distances technique is applied to a laser tracker in this work. The experimental verification procedure of the laser tracker with virtual distances is simulated and then compared with the conventional verification procedure of the laser tracker with the indexed metrology platform. The results obtained in terms of volumetric performance of the laser tracker prove the suitability of the virtual distances methodology in calibration and verification procedures for portable coordinate measuring instruments, broadening the possibilities for the definition of reference distances in these procedures.

  7. Potential and Limitations of an Improved Method to Produce Dynamometric Wheels

    PubMed Central

    García de Jalón, Javier

    2018-01-01

    A new methodology for the estimation of tyre-contact forces is presented. The new procedure is an evolution of a previous method, based on harmonic elimination techniques, developed with the aim of producing low-cost dynamometric wheels. While the original method required stress measurement in many rim radial lines and the fulfillment of rigid symmetry conditions, the new methodology described in this article significantly reduces the number of required measurement points and greatly relaxes the symmetry constraints. This can be done without compromising the estimation error level. The reduction of the number of measuring radial lines increases the ripple of demodulated signals due to non-eliminated higher-order harmonics, so the calibration procedure must be adapted to this new scenario. A new calibration procedure that takes into account the angular position of the wheel is described in full. The new methodology is tested on a standard commercial five-spoke car wheel. The results obtained are qualitatively compared with those derived from the former methodology, leading to the conclusion that the new method is both simpler and more robust, owing to the reduction in the number of measuring points, while the contact-force estimation error remains at an acceptable level. PMID:29439427

  8. Use of Numerical Groundwater Model and Analytical Empirical Orthogonal Function for Calibrating Spatiotemporal pattern of Pumpage, Recharge and Parameter

    NASA Astrophysics Data System (ADS)

    Huang, C. L.; Hsu, N. S.; Hsu, F. C.; Liu, H. J.

    2016-12-01

    This study develops a novel methodology for spatiotemporal groundwater calibration of large numbers of recharge terms and parameters by coupling a specialized numerical model with analytical empirical orthogonal functions (EOF). The actual spatiotemporal patterns of groundwater pumpage are estimated by an originally developed back-propagation neural network-based response matrix with electrical consumption analysis. The spatiotemporal patterns of the recharge from surface water and the hydrogeological parameters (i.e. horizontal hydraulic conductivity and vertical leakance) are calibrated by EOF with the simulated error hydrograph of groundwater storage, in order to identify the multiple error sources and quantify the revised volume. The objective function of the optimization model minimizes the root mean square error of the simulated storage error percentage across multiple aquifers, subject to mass balance of the groundwater budget and the governing equation in transient state. The established method was applied to the groundwater system of the Chou-Shui River Alluvial Fan. The simulated period is from January 2012 to December 2014. The total numbers of hydraulic conductivity, vertical leakance and surface-water recharge terms among the four aquifers are 126, 96 and 1080, respectively. Results showed that the RMSE decreased dramatically during the calibration process and converged within six iterations, because of efficient filtering of the effects induced by the estimated error and recharge across the boundary. Moreover, the average simulated error percentage of groundwater level corresponding to the calibrated budget variables and parameters of aquifer one is as small as 0.11%. This demonstrates that the developed methodology not only can effectively detect the flow tendency and error sources in all aquifers to achieve accurate spatiotemporal calibration, but also can capture the peak and fluctuation of groundwater level in the shallow aquifer.
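A generic EOF decomposition of a space-time anomaly field, the analytical building block used above, can be sketched via the SVD; the sinusoidal field below is illustrative and is not the paper's groundwater model.

```python
import numpy as np

def eof_decompose(field):
    """EOF analysis of a space-time field (n_times x n_locations): returns spatial
    patterns (EOFs), temporal expansion coefficients (PCs) and the fraction of
    variance explained by each mode."""
    anom = field - field.mean(axis=0)                 # remove the temporal mean
    U, s, Vt = np.linalg.svd(anom, full_matrices=False)
    pcs = U * s                                       # temporal coefficients
    eofs = Vt                                         # spatial patterns (rows)
    frac = s ** 2 / np.sum(s ** 2)                    # explained-variance fractions
    return eofs, pcs, frac

# one dominant standing mode plus noise, standing in for a storage-error hydrograph field
rng = np.random.default_rng(5)
time = np.linspace(0, 4 * np.pi, 36)[:, None]
space = np.linspace(0, 1, 20)[None, :]
field = np.sin(time) * np.cos(np.pi * space) + 0.05 * rng.standard_normal((36, 20))
eofs, pcs, frac = eof_decompose(field)
```

The leading mode isolates the dominant coherent error pattern, which is what makes EOF useful for attributing simulated-storage errors to specific sources.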

  9. Calibration of a distributed hydrologic model for six European catchments using remote sensing data

    NASA Astrophysics Data System (ADS)

    Stisen, S.; Demirel, M. C.; Mendiguren González, G.; Kumar, R.; Rakovec, O.; Samaniego, L. E.

    2017-12-01

    While observed streamflow has been the single reference for most conventional hydrologic model calibration exercises, the availability of spatially distributed remote sensing observations provides new possibilities for multi-variable calibration assessing both the spatial and temporal variability of different hydrologic processes. In this study, we first identify the key transfer parameters of the mesoscale Hydrologic Model (mHM) controlling both the discharge and the spatial distribution of actual evapotranspiration (AET) across six central European catchments (Elbe, Main, Meuse, Moselle, Neckar and Vienne). These catchments are selected for their limited topographical and climatic variability, which makes it possible to evaluate the effect of spatial parameterization on the simulated evapotranspiration patterns. We develop a European-scale remote sensing based actual evapotranspiration dataset at a 1 km grid scale, driven primarily by land surface temperature observations from MODIS using the TSEB approach. Using the observed AET maps, we analyze the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mHM model. This model allows calibrating one basin at a time or all basins together, using its unique structure and multi-parameter regionalization approach. Results will indicate any trade-offs between spatial pattern and discharge simulation during model calibration and through validation against independent internal discharge locations. Moreover, the added value for internal water balances will be analyzed.

  10. Analysis of characteristics of Si in blast furnace pig iron and calibration methods in the detection by laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Mei, Yaguang; Cheng, Yuxin; Cheng, Shusen; Hao, Zhongqi; Guo, Lianbo; Li, Xiangyou; Zeng, Xiaoyan

    2017-10-01

    During the iron-making process in a blast furnace, the Si content in liquid pig iron is usually used to evaluate the quality of the liquid iron and the thermal state of the blast furnace. No effective method has been available for rapidly detecting the Si concentration of liquid iron. Laser-induced breakdown spectroscopy (LIBS) is an atomic emission spectrometry technique based on laser ablation. Its key advantage is rapid, in-situ, online analysis of element concentrations in open air without sample pretreatment. The characteristics of Si in liquid iron were analyzed from the perspectives of thermodynamic theory and metallurgical technology. The relationships between Si and C, Mn, S, P and other alloy elements were revealed based on thermodynamic calculation. Subsequently, LIBS was applied to the rapid detection of Si in pig iron in this work. During the LIBS detection process, several groups of standard pig iron samples were employed to calibrate the Si content. Calibration methods including linear, quadratic and cubic internal standard calibration, multivariate linear calibration and partial least squares (PLS) were compared with each other. The comparison revealed that PLS improved by normalization was the best calibration method for Si detection by LIBS.

  11. Calibration between Color Camera and 3D LIDAR Instruments with a Polygonal Planar Board

    PubMed Central

    Park, Yoonsu; Yun, Seokmin; Won, Chee Sun; Cho, Kyungeun; Um, Kyhyun; Sim, Sungdae

    2014-01-01

    Calibration between color camera and 3D Light Detection And Ranging (LIDAR) equipment is an essential process for data fusion. The goal of this paper is to improve the calibration accuracy between a camera and a 3D LIDAR. In particular, we are interested in calibrating a low resolution 3D LIDAR with a relatively small number of vertical sensors. Our goal is achieved by employing a new methodology for the calibration board, which exploits 2D-3D correspondences. The 3D corresponding points are estimated from the scanned laser points on the polygonal planar board with adjacent sides. Since the lengths of adjacent sides are known, we can estimate the vertices of the board as a meeting point of two projected sides of the polygonal board. The estimated vertices from the range data and those detected from the color image serve as the corresponding points for the calibration. Experiments using a low-resolution LIDAR with 32 sensors show robust results. PMID:24643005
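The vertex-estimation step, fitting a line to the scanned points along each adjacent side of the board and taking their meeting point, can be sketched in 2D as follows; the board-plane coordinates and point values are illustrative.

```python
import numpy as np

def fit_line(pts):
    """Total least-squares line through 2D points: returns centroid and unit direction."""
    c = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - c)
    return c, Vt[0]                       # principal direction of the edge points

def intersect(c1, d1, c2, d2):
    """Intersection of lines c1 + s*d1 and c2 + t*d2 (assumed non-parallel)."""
    A = np.column_stack([d1, -d2])
    s, _ = np.linalg.solve(A, c2 - c1)
    return c1 + s * d1

# scanned points along two adjacent board edges that meet at the vertex (1, 2)
s = np.linspace(0.1, 1.0, 10)[:, None]
edge1 = np.array([1.0, 2.0]) + s * np.array([1.0, 0.2])    # points along edge 1
edge2 = np.array([1.0, 2.0]) + s * np.array([-0.3, 1.0])   # points along edge 2
v = intersect(*fit_line(edge1), *fit_line(edge2))
```

Intersecting fitted edges rather than picking the closest scanned point is what lets a sparse, low-resolution LIDAR localize a vertex more precisely than its point spacing.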

  12. Cross calibration of the Landsat-7 ETM+ and EO-1 ALI sensor

    USGS Publications Warehouse

    Chander, G.; Meyer, D.J.; Helder, D.L.

    2004-01-01

    As part of the Earth Observer 1 (EO-1) Mission, the Advanced Land Imager (ALI) demonstrates a potential technological direction for Landsat Data Continuity Missions. To evaluate ALI's capabilities in this role, a cross-calibration methodology has been developed using image pairs from the Landsat-7 (L7) Enhanced Thematic Mapper Plus (ETM+) and EO-1 (ALI) to verify the radiometric calibration of ALI with respect to the well-calibrated L7 ETM+ sensor. Results have been obtained using two different approaches. The first involves calibration of nearly simultaneous surface observations based on image statistics from areas observed simultaneously by the two sensors. The second uses vicarious calibration techniques to compare the predicted top-of-atmosphere radiance derived from ground reference data collected during the overpass to the measured radiance obtained from the sensor. The results indicate that the relative sensor chip assembly gains agree with the ETM+ visible and near-infrared bands to within 2% and the shortwave infrared bands to within 4%.

  13. A robust approach to using of the redundant information in the temperature calibration

    NASA Astrophysics Data System (ADS)

    Strnad, R.; Kňazovická, L.; Šindelář, M.; Kukal, J.

    2013-09-01

    Calibration laboratories use standard procedures for calculating calibration model coefficients based on well-described standards (EN 60751, ITS-90, EN 60584, etc.). In practice, sensors are mostly calibrated at more points than the model requires, and the redundant information is used to validate the model. This paper presents the influence of including all measured points, with respect to their uncertainties, in the fitted models using standard weighted least squares methods. A special case concerning different levels of uncertainty of the measured points in a robust approach will be discussed. This leads to different minimization criteria and a different uncertainty propagation methodology. This approach also eliminates the influence of outlier measurements in the calibration. In the practical part, three cases of this approach are presented, namely an industrial calibration according to EN 60751, an SPRT according to the ITS-90 and a thermocouple according to EN 60584.
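The weighted least squares step can be illustrated with a Callendar-Van Dusen-type fit of the EN 60751 form R(t) = R0(1 + At + Bt²) for t ≥ 0 °C, weighting each calibration point by the inverse square of its uncertainty; the resistance and uncertainty values below are illustrative, not certified data.

```python
import numpy as np

# illustrative Pt100 calibration points (temperature / degC, resistance / ohm,
# standard uncertainty / ohm) - points do NOT all have the same uncertainty
t = np.array([0.0, 50.0, 100.0, 150.0, 200.0])
R = np.array([100.000, 119.40, 138.51, 157.33, 175.86])
u = np.array([0.002, 0.005, 0.005, 0.010, 0.010])

# design matrix for the linearized model R = b0 + b1*t + b2*t^2,
# where b0 = R0, b1 = R0*A, b2 = R0*B
X = np.column_stack([np.ones_like(t), t, t ** 2])
W = np.diag(1.0 / u ** 2)                              # weights = 1/u^2
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ R)       # weighted normal equations
R0, A, B = beta[0], beta[1] / beta[0], beta[2] / beta[0]
```

Giving the low-uncertainty points larger weight is exactly what changes the minimization criterion relative to ordinary least squares, and a robust loss would modify it further.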

  14. Investigating the Effects of Variable Water Type for VIIRS Calibration

    NASA Astrophysics Data System (ADS)

    Bowers, J.; Ladner, S.; Martinolich, P.; Arnone, R.; Lawson, A.; Crout, R. L.; Vandermeulen, R. A.

    2016-02-01

    The Naval Research Laboratory - Stennis Space Center (NRL-SSC) currently provides calibration and validation support for the Visible Infrared Imaging Radiometer Suite (VIIRS) satellite ocean color products. NRL-SSC utilizes the NASA Ocean Biology Processing Group (OBPG) methodology for on-orbit vicarious calibration with in situ data collected in blue ocean water by the Marine Optical Buoy (MOBY). An acceptable calibration consists of 20-40 satellite to in situ matchups that establish the radiance correlation at specific points within the operating range of the VIIRS instrument. While the current method improves VIIRS performance, the MOBY data alone do not represent the full range of radiance values seen in the coastal oceans. By also utilizing data from the AERONET-OC coastal sites, we expand our calibration matchups to cover a more realistic range of continuous values, particularly in the green and red spectral regions of the sensor. Improved calibration will provide more accurate data to support daily operations and enable construction of a valid climatology for future reference.

  15. Using inductively coupled plasma-mass spectrometry for calibration transfer between environmental CRMs.

    PubMed

    Turk, G C; Yu, L L; Salit, M L; Guthrie, W F

    2001-06-01

    Multielement analyses of environmental reference materials have been performed using existing certified reference materials (CRMs) as calibration standards for inductively coupled plasma-mass spectrometry. The analyses have been performed using a high-performance methodology that results in comparison measurement uncertainties that are significantly less than the uncertainties of the certified values of the calibration CRM. Consequently, the determined values have uncertainties that are very nearly equivalent to the uncertainties of the calibration CRM. Several uses of this calibration transfer are proposed, including re-certification measurements of replacement CRMs, establishing the traceability of one CRM to another, and demonstrating the equivalence of two CRMs. RM 8704, a river sediment, was analyzed using SRM 2704, Buffalo River Sediment, as the calibration standard. SRM 1632c, Trace Elements in Bituminous Coal, which is a replacement for SRM 1632b, was analyzed using SRM 1632b as the standard. SRM 1635, Trace Elements in Subbituminous Coal, was also analyzed using SRM 1632b as the standard.

  16. Data Assimilation - Advances and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Brian J.

    2014-07-30

    This presentation provides an overview of data assimilation (model calibration) for complex computer experiments. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Utilization of surrogate models and empirical adjustment for model form error in code calibration form the basis for the statistical methodology considered. The role of probabilistic code calibration in supporting code validation is discussed. Incorporation of model form uncertainty in rigorous uncertainty quantification (UQ) analyses is also addressed. Design criteria used within a batch sequential design algorithm are introduced for efficiently achieving predictive maturity and improved code calibration. Predictive maturity refers to obtaining stable predictive inference with calibrated computer codes. These approaches allow for augmentation of initial experiment designs for collecting new physical data. A standard framework for data assimilation is presented and techniques for updating the posterior distribution of the state variables based on particle filtering and the ensemble Kalman filter are introduced.
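A stochastic ensemble Kalman filter analysis step, one of the updating techniques mentioned, can be sketched as follows; the linear observation operator and scalar state are illustrative.

```python
import numpy as np

def enkf_update(ensemble, obs, H, R, rng):
    """Stochastic EnKF analysis step.
    ensemble: (n_ens x n_state) prior members; obs: (n_obs,) observation;
    H: linear observation operator; R: observation error covariance."""
    n_ens = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)          # state anomalies
    Y = X @ H.T                                   # predicted-observation anomalies
    Pyy = Y.T @ Y / (n_ens - 1) + R               # innovation covariance
    Pxy = X.T @ Y / (n_ens - 1)                   # state-observation cross covariance
    K = Pxy @ np.linalg.inv(Pyy)                  # Kalman gain
    perturbed = obs + rng.multivariate_normal(np.zeros(len(obs)), R, size=n_ens)
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

rng = np.random.default_rng(2)
ens = rng.normal(0.0, 1.0, size=(500, 1))         # prior: N(0, 1) on a scalar state
H = np.array([[1.0]])
R = np.array([[1.0]])
post = enkf_update(ens, np.array([2.0]), H, R, rng)
```

For this scalar case the exact Kalman posterior is N(1, 0.5), which the ensemble statistics should approximate.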

  17. Improvements to the Hubble Space Telescope COS/FUV Wavelength Calibration at Lifetime Position 4

    NASA Astrophysics Data System (ADS)

    Plesha, Rachel; Ake, Thomas B.; De Rosa, Gisella; Oliveira, Cristina M.; Penton, Steven V.; Snyder, Elaine M.

    2018-06-01

    The Cosmic Origins Spectrograph (COS) was installed on the Hubble Space Telescope in 2009, and the FUV detector is currently operating at the 4th lifetime position (LP4). The COS team at the Space Telescope Science Institute has been improving the wavelength calibration of the FUV channel at each lifetime position. For the LP4 solution we obtained special calibration data as well as new lamp spectra to update the lamp template used at LP4 with the goal of achieving a wavelength calibration accuracy of ± 3 pixels. Additionally, we derived a new solution for the G130M/1222 cenwave which we expect to be more frequently used at this lifetime position due to the COS2025 policy in place on the other G130M settings. Here we present the results and methodology behind the wavelength calibration solutions at LP4.

  18. Direct Breakthrough Curve Prediction From Statistics of Heterogeneous Conductivity Fields

    NASA Astrophysics Data System (ADS)

    Hansen, Scott K.; Haslauer, Claus P.; Cirpka, Olaf A.; Vesselinov, Velimir V.

    2018-01-01

    This paper presents a methodology to predict the shape of solute breakthrough curves in heterogeneous aquifers at early times and/or under high degrees of heterogeneity, both cases in which the classical macrodispersion theory may not be applicable. The methodology relies on the observation that breakthrough curves in heterogeneous media are generally well described by lognormal distributions, and mean breakthrough times can be predicted analytically. The log-variance of solute arrival is thus sufficient to completely specify the breakthrough curves, and this is calibrated as a function of aquifer heterogeneity and dimensionless distance from a source plane by means of Monte Carlo analysis and statistical regression. Using the ensemble of simulated groundwater flow and solute transport realizations employed to calibrate the predictive regression, reliability estimates for the prediction are also developed. Additional theoretical contributions include heuristics for the time until an effective macrodispersion coefficient becomes applicable, and also an expression for its magnitude that applies in highly heterogeneous systems. It is seen that the results here represent a way to derive continuous time random walk transition distributions from physical considerations rather than from empirical field calibration.
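Given the mean arrival time and the calibrated log-variance, the breakthrough curve follows directly from the lognormal CDF. A sketch with illustrative numbers (the mean arrival time and log-variance are invented, not regression outputs from the paper):

```python
import math

def lognormal_btc(t, mean_arrival, log_var):
    """Cumulative breakthrough at time t for a lognormal arrival-time model
    parameterized by the mean arrival time and the log-variance of arrival."""
    s = math.sqrt(log_var)
    m = math.log(mean_arrival) - log_var / 2.0   # chosen so that E[T] = mean_arrival
    z = (math.log(t) - m) / (s * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))             # lognormal CDF

mean_t, lv = 100.0, 0.25                         # illustrative mean arrival, log-variance
median_t = mean_t * math.exp(-lv / 2.0)          # median of the lognormal is exp(m)
```

Because the two parameters fully specify the curve, predicting a breakthrough curve reduces to predicting the mean arrival time analytically and reading the log-variance off the calibrated regression.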

  19. An international marine-atmospheric {sup 222}Rn measurement intercomparison in Bermuda. Part 1: NIST calibration and methodology for standardized sample additions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colle, R.; Unterweger, M.P.; Hodge, P.A.

    1996-01-01

    As part of an international 222Rn measurement intercomparison conducted at Bermuda in October 1991, NIST provided standardized sample additions of known, but undisclosed (blind) 222Rn concentrations that could be related to US national standards. The standardized sample additions were obtained with a calibrated 226Ra source and a specially-designed manifold used to obtain well-known dilution factors from simultaneous flow-rate measurements. The additions were introduced over sampling periods of several hours (typically 4 h) into a common streamline on a sampling tower used by the participating laboratories for their measurements. The standardized 222Rn activity concentrations for the intercomparison ranged from approximately 2.5 Bq · m−3 to 35 Bq · m−3 (of which the lower end of this range approached concentration levels for ambient Bermudian air) and had overall uncertainties, approximating a 3 standard deviation uncertainty interval, of about 6% to 13%. This paper describes the calibration and methodology for the standardized sample additions.

  20. An International Marine-Atmospheric 222Rn Measurement Intercomparison in Bermuda Part I: NIST Calibration and Methodology for Standardized Sample Additions

    PubMed Central

    Collé, R.; Unterweger, M. P.; Hodge, P. A.; Hutchinson, J. M. R.

    1996-01-01

    As part of an international 222Rn measurement intercomparison conducted at Bermuda in October 1991, NIST provided standardized sample additions of known, but undisclosed (“blind”) 222Rn concentrations that could be related to U.S. national standards. The standardized sample additions were obtained with a calibrated 226Ra source and a specially-designed manifold used to obtain well-known dilution factors from simultaneous flow-rate measurements. The additions were introduced over sampling periods of several hours (typically 4 h) into a common streamline on a sampling tower used by the participating laboratories for their measurements. The standardized 222Rn activity concentrations for the intercomparison ranged from approximately 2.5 Bq · m−3 to 35 Bq · m−3 (of which the lower end of this range approached concentration levels for ambient Bermudian air) and had overall uncertainties, approximating a 3 standard deviation uncertainty interval, of about 6 % to 13 %. This paper describes the calibration and methodology for the standardized sample additions. PMID:27805090

  1. Cole-Cole, linear and multivariate modeling of capacitance data for on-line monitoring of biomass.

    PubMed

    Dabros, Michal; Dennewald, Danielle; Currie, David J; Lee, Mark H; Todd, Robert W; Marison, Ian W; von Stockar, Urs

    2009-02-01

    This work evaluates three techniques for calibrating capacitance (dielectric) spectrometers used for on-line monitoring of biomass: modeling of cell properties using the theoretical Cole-Cole equation, linear regression of dual-frequency capacitance measurements on biomass concentration, and multivariate (PLS) modeling of scanning dielectric spectra. The performance and robustness of each technique are assessed during a sequence of validation batches in two experimental settings of differing signal noise. In noisier conditions, the Cole-Cole model had significantly higher biomass concentration prediction errors than the linear and multivariate models, and the PLS model was the most robust in handling signal noise. In less noisy conditions, the three models performed similarly. Estimates of the mean cell size were additionally obtained using the Cole-Cole and PLS models, with the latter technique giving more satisfactory results.
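For reference, the Cole-Cole dispersion underlying the first technique can be evaluated as below; the parameter values are illustrative (in biomass monitoring the dielectric increment scales with viable biomass and the characteristic frequency relates to cell size).

```python
import numpy as np

def cole_cole(f, eps_inf, delta_eps, f_c, alpha):
    """Cole-Cole dispersion: complex permittivity (or capacitance, up to a cell
    constant) as a function of frequency f. alpha = 0 recovers a pure Debye
    relaxation; alpha > 0 broadens the dispersion."""
    jwt = 1j * f / f_c                          # j*omega*tau in reduced form
    return eps_inf + delta_eps / (1.0 + jwt ** (1.0 - alpha))

f = np.logspace(4, 8, 200)                      # 10 kHz .. 100 MHz scan
eps = cole_cole(f, eps_inf=5.0, delta_eps=50.0, f_c=1e6, alpha=0.1)
```

Fitting this model to a scanned spectrum yields the dielectric increment and characteristic frequency directly, which is how the Cole-Cole approach produces both biomass and mean-cell-size estimates.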

  2. Energy expenditure estimation in beta-blocker-medicated cardiac patients by combining heart rate and body movement data.

    PubMed

    Kraal, Jos J; Sartor, Francesco; Papini, Gabriele; Stut, Wim; Peek, Niels; Kemps, Hareld Mc; Bonomi, Alberto G

    2016-11-01

    Accurate assessment of energy expenditure provides an opportunity to monitor physical activity during cardiac rehabilitation. However, the available assessment methods, based on the combination of heart rate (HR) and body movement data, are not applicable to patients using beta-blocker medication. Therefore, we developed an energy expenditure prediction model for beta-blocker-medicated cardiac rehabilitation patients. Sixteen male cardiac rehabilitation patients (age: 55.8 ± 7.3 years, weight: 93.1 ± 11.8 kg) underwent a physical activity protocol with 11 low- to moderate-intensity common daily life activities. Energy expenditure was assessed using a portable indirect calorimeter. HR and body movement data were recorded during the protocol using unobtrusive wearable devices. In addition, patients underwent a symptom-limited exercise test and resting metabolic rate assessment. Energy expenditure estimation models were developed using multivariate regression analyses based on HR and body movement data and/or patient characteristics. In addition, an HR-flex model was developed. The model combining HR and body movement data with patient characteristics showed the highest correlation and lowest error (r² = 0.84, root mean squared error = 0.834 kcal/minute) against total energy expenditure. The method based on individual calibration data (HR-flex) showed lower accuracy (r² = 0.83, root mean squared error = 0.992 kcal/minute). Our results show that combining HR and body movement data improves the accuracy of energy expenditure prediction models in cardiac patients, similar to methods that have been developed for healthy subjects. The proposed methodology does not require individual calibration and is based on data that are available in clinical practice. © The European Society of Cardiology 2016.
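The regression structure described, energy expenditure modeled from HR, body movement and a patient characteristic, can be sketched with synthetic data; all coefficients and variable ranges below are invented for illustration and are not the study's published model.

```python
import numpy as np

# synthetic per-minute observations: heart rate, accelerometer activity counts,
# body weight, and an energy expenditure built from an assumed linear model
rng = np.random.default_rng(4)
n = 120
hr = rng.uniform(60, 120, n)            # beats/min
counts = rng.uniform(0, 500, n)         # body-movement activity counts
weight = rng.uniform(70, 110, n)        # kg
ee = 0.03 * hr + 0.004 * counts + 0.02 * weight + rng.normal(0, 0.1, n)  # kcal/min

# multivariate linear regression of EE on HR, movement and weight
X = np.column_stack([np.ones(n), hr, counts, weight])
beta, *_ = np.linalg.lstsq(X, ee, rcond=None)
ee_hat = X @ beta
r2 = 1 - np.sum((ee - ee_hat) ** 2) / np.sum((ee - ee.mean()) ** 2)
```

Because the model is fitted across patients using characteristics already recorded in clinical practice, no per-patient calibration session is needed, which is the practical advantage claimed over the HR-flex approach.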

  3. Chemometrics enhanced HPLC-DAD performance for rapid quantification of carbamazepine and phenobarbital in human serum samples.

    PubMed

    Vosough, Maryam; Ghafghazi, Shiva; Sabetkasaei, Masoumeh

    2014-02-01

    This paper describes the development and validation of a simple and efficient bioanalytical procedure for the simultaneous determination of phenobarbital and carbamazepine in human serum samples using high performance liquid chromatography with photodiode-array detection (HPLC-DAD) and a fast elution methodology (less than 5 min). Briefly, the method consisted of a simple deproteinization step of the serum samples followed by HPLC analysis on a Bonus-RP column using an isocratic mode of elution with acetonitrile/K2HPO4 (pH=7.5) buffer solution (45:55). Because serum endogenous components are present as non-calibrated components in the sample, second-order calibration based on multivariate curve resolution-alternating least squares (MCR-ALS) was applied to a set of absorbance matrices collected as a function of retention time and wavelength. Acceptable resolution and quantification results were achieved in the presence of matrix interferences, and the second-order advantage was fully exploited. The average recoveries for carbamazepine and phenobarbital were 89.7% and 86.1%, and relative standard deviation values were lower than 9%. Additionally, the computed elliptical joint confidence region (EJCR) confirmed the accuracy of the proposed method and indicated the absence of both constant and proportional errors in the predicted concentrations. The developed method enabled the determination of the analytes in different serum samples in the presence of overlapped profiles, while keeping experimental time and extraction steps to a minimum. Finally, the serum concentration levels of carbamazepine in three time intervals were reported for morphine-dependent patients who had received carbamazepine to treat their neuropathic pain. © 2013 Elsevier B.V. All rights reserved.
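The MCR-ALS step can be sketched as alternating least squares on a time × wavelength absorbance matrix, factoring it into concentration profiles and spectra; non-negativity is imposed here by simple clipping rather than full NNLS, and the two overlapped Gaussian components are synthetic stand-ins for the serum chromatograms.

```python
import numpy as np

def mcr_als(D, C0, n_iter=200):
    """Minimal MCR-ALS sketch: factor D (times x wavelengths) as D ~ C @ S.T with
    non-negative concentration profiles C and spectra S, starting from C0."""
    C = C0.copy()
    for _ in range(n_iter):
        S = np.linalg.lstsq(C, D, rcond=None)[0].T     # solve for spectra
        S = np.clip(S, 0.0, None)                      # non-negativity constraint
        C = np.linalg.lstsq(S, D.T, rcond=None)[0].T   # solve for concentrations
        C = np.clip(C, 0.0, None)
    return C, S

# two overlapped Gaussian elution peaks with overlapped Gaussian spectra
t = np.linspace(0, 1, 60)[:, None]
wl = np.linspace(0, 1, 40)[:, None]
C_true = np.hstack([np.exp(-((t - 0.4) / 0.08) ** 2), np.exp(-((t - 0.6) / 0.08) ** 2)])
S_true = np.hstack([np.exp(-((wl - 0.3) / 0.1) ** 2), np.exp(-((wl - 0.5) / 0.1) ** 2)])
D = C_true @ S_true.T
C, S = mcr_als(D, C_true + 0.1 * np.random.default_rng(3).random(C_true.shape))
```

Resolving the data matrix into per-component profiles is what delivers the second-order advantage: uncalibrated serum interferents simply become extra components instead of biasing the analyte quantification.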

  4. Measuring coronary calcium on CT images adjusted for attenuation differences.

    PubMed

    Nelson, Jennifer Clark; Kronmal, Richard A; Carr, J Jeffrey; McNitt-Gray, Michael F; Wong, Nathan D; Loria, Catherine M; Goldin, Jonathan G; Williams, O Dale; Detrano, Robert

    2005-05-01

    To quantify scanner and participant variability in attenuation values for computed tomographic (CT) images assessed for coronary calcium and define a method for standardizing attenuation values and calibrating calcium measurements. Institutional review board approval and participant informed consent were obtained at all study sites. An image attenuation adjustment method involving the use of available calibration phantom data to define standard attenuation values was developed. The method was applied to images from two population-based multicenter studies: the Coronary Artery Risk Development in Young Adults study (3041 participants) and the Multi-Ethnic Study of Atherosclerosis (6814 participants). To quantify the variability in attenuation, analysis of variance techniques were used to compare the CT numbers of standardized torso phantom regions across study sites, and multivariate linear regression models of participant-specific calibration phantom attenuation values that included participant age, race, sex, body mass index (BMI), smoking status, and site as covariates were developed. To assess the effect of the calibration method on calcium measurements, Pearson correlation coefficients between unadjusted and attenuation-adjusted calcium measurements were computed. Multivariate models were used to examine the effect of sex, race, BMI, smoking status, unadjusted score, and site on Agatston score adjustments. Mean attenuation values (CT numbers) of a standard calibration phantom scanned beneath participants varied significantly according to scanner and participant BMI (P < .001 for both). Values were lowest for Siemens multi-detector row CT scanners (110.0 HU), followed by GE-Imatron electron-beam (116.0 HU) and GE LightSpeed multi-detector row scanners (121.5 HU). 
Values were also lower for morbidly obese (BMI ≥40.0 kg/m²) participants (108.9 HU), followed by obese (BMI 30.0-39.9 kg/m²) (114.8 HU), overweight (BMI 25.0-29.9 kg/m²) (118.5 HU), and normal-weight or underweight (BMI <25.0 kg/m²) (120.1 HU) participants. Agatston score calibration adjustments ranged from -650 to 1071 (mean, -8 ± 50 [standard deviation]) and increased with Agatston score (P < .001). The direction and magnitude of adjustment varied significantly according to scanner and BMI (P < .001 for both) and were consistent with phantom attenuation results in that calibration resulted in score decreases for images with higher phantom attenuation values. Image attenuation values vary by scanner and participant body size, producing calcium score differences that are not due to true calcium burden disparities. Use of calibration phantoms to adjust attenuation values and calibrate calcium measurements in research studies and clinical practice may improve the comparability of such measurements between persons scanned with different scanners and within persons over time.
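The standardization idea can be sketched as a linear rescaling anchored at water, driven by the phantom reading from each scan. The target value and function below are illustrative assumptions, not the study's actual algorithm:

```python
import numpy as np

# Illustrative target CT number for the calibration-phantom region
# (an assumption; the study defines its own standard attenuation values).
TARGET_PHANTOM_HU = 116.0

def standardize_attenuation(image_hu, measured_phantom_hu, water_hu=0.0):
    """Linearly rescale CT numbers so the phantom region reads the target.

    The scale is anchored at water (0 HU), mimicking the idea of using a
    phantom of known density to remove scanner- and body-size-dependent
    attenuation differences before scoring calcium.
    """
    scale = (TARGET_PHANTOM_HU - water_hu) / (measured_phantom_hu - water_hu)
    return water_hu + scale * (np.asarray(image_hu, dtype=float) - water_hu)

# A scan whose phantom reads 121.5 HU is scaled down toward the standard.
adjusted = standardize_attenuation([130.0, 250.0, 400.0], measured_phantom_hu=121.5)
```

Calcium scores would then be recomputed from the adjusted CT numbers, so scans from different scanners share a common attenuation scale.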

  5. OARE STS-87 (USMP-4)

    NASA Technical Reports Server (NTRS)

    Rice, James E.

    1998-01-01

    The report is organized into sections representing the phases of work performed in analyzing the STS-87 (USMP-4) results. Section 1 briefly outlines the OARE system features, coordinates, and measurement parameters. Section 2 describes the results from STS-87. The mission description, data calibration, and representative data obtained on STS-87 are presented. Finally, Section 3 presents a discussion of accuracy achieved and achievable with OARE. Appendix A discusses the calibration and data processing methodology in detail.

  6. LiDAR-IMU Time Delay Calibration Based on Iterative Closest Point and Iterated Sigma Point Kalman Filter.

    PubMed

    Liu, Wanli

    2017-03-08

The time delay calibration between Light Detection and Ranging (LiDAR) and Inertial Measurement Units (IMUs) is an essential prerequisite for their applications. However, the correspondences between LiDAR and IMU measurements are usually unknown and thus cannot be computed directly for the time delay calibration. In order to solve the problem of LiDAR-IMU time delay calibration, this paper presents a fusion method based on the iterative closest point (ICP) algorithm and the iterated sigma point Kalman filter (ISPKF), which combines the advantages of both. The ICP algorithm can precisely determine the unknown transformation between the LiDAR and IMU frames, and the ISPKF algorithm can optimally estimate the time delay calibration parameters. First, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement model and time delay error model of the LiDAR and IMU are established. Third, the methodology of the ICP and ISPKF procedure is presented for LiDAR-IMU time delay calibration. Experimental results are presented that validate the proposed method and demonstrate that the time delay error can be accurately calibrated.
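The ICP half of such a pipeline is a short algorithm: alternate nearest-neighbour matching with a closed-form (Kabsch) rigid alignment. A generic toy sketch with brute-force matching, not the paper's implementation:

```python
import numpy as np

def best_rigid_transform(A, B):
    """Least-squares rotation R and translation t with B ≈ A @ R.T + t (Kabsch)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (A.shape[1] - 1) + [d])   # guard against reflections
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

def icp(src, dst, iters=10):
    """Toy ICP: alternate brute-force nearest-neighbour matching and
    Kabsch alignment until the source cloud sits on the destination cloud."""
    cur = src.copy()
    for _ in range(iters):
        nn_idx = np.argmin(((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1), axis=1)
        R, t = best_rigid_transform(cur, dst[nn_idx])
        cur = cur @ R.T + t
    return cur

# A jittered grid and a slightly rotated/translated copy of it.
rng = np.random.default_rng(0)
g = np.arange(3.0)
gx, gy, gz = np.meshgrid(g, g, g, indexing="ij")
dst = np.c_[gx.ravel(), gy.ravel(), gz.ravel()] + rng.uniform(-0.1, 0.1, (27, 3))
th = 0.03
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0,         0.0,        1.0]])
src = dst @ Rz.T + np.array([0.02, -0.01, 0.03])
aligned = icp(src, dst)
```

Real LiDAR registration would use a spatial index (e.g. a k-d tree) for the matching step; the brute-force search here keeps the sketch self-contained.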

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Prinzio, Renato; Almeida, Carlos Eduardo de; Laboratorio de Ciencias Radiologicas-Universidade do Estado do Rio de Janeiro

In Brazil there are over 100 high dose rate (HDR) brachytherapy facilities using well-type chambers for the determination of the air kerma rate of ¹⁹²Ir sources. This paper presents the methodology developed and extensively tested by the Laboratorio de Ciencias Radiologicas (LCR) and presently in use to calibrate those types of chambers. The system was initially used to calibrate six well-type chambers of brachytherapy services, and a maximum deviation of only 1.0% was observed between the calibration coefficients obtained and those in the calibration certificates provided by the University of Wisconsin Accredited Dosimetry Calibration Laboratory (UWADCL). In addition to its traceability to the Brazilian National Standards, the whole system was taken to the UWADCL for a direct comparison, and the same formalism to calculate the air kerma was used. The comparison results between the two laboratories show an agreement of 0.9% for the calibration coefficients. Three Brazilian well-type chambers were calibrated at the UWADCL and by the LCR, in Brazil, using the developed system and a clinical HDR machine. The results of the calibration of the three well chambers show an agreement better than 1.0%. Uncertainty analyses involving the measurements made at both the UWADCL and LCR laboratories are discussed.

  8. Gamma/Hadron Separation for the HAWC Observatory

    NASA Astrophysics Data System (ADS)

    Gerhardt, Michael J.

The High-Altitude Water Cherenkov (HAWC) Observatory is a gamma-ray observatory sensitive to gamma rays from 100 GeV to 100 TeV with an instantaneous field of view of ~2 sr. It is located on the Sierra Negra plateau in Mexico at an elevation of 4,100 m and began full operation in March 2015. The purpose of the detector is to study relativistic particles produced by interstellar and intergalactic objects such as pulsars, supernova remnants, molecular clouds, and black holes. To achieve optimal angular resolution, energy reconstruction, and cosmic ray background suppression for the extensive air showers detected by HAWC, good timing and charge calibration are crucial, as is the optimization of quality cuts on background suppression variables. Additions to the HAWC timing calibration, in particular the automation of calibration quality checks, and a new method for background suppression using a multivariate analysis are presented in this thesis.

  9. Generic Raman-based calibration models enabling real-time monitoring of cell culture bioreactors.

    PubMed

    Mehdizadeh, Hamidreza; Lauri, David; Karry, Krizia M; Moshgbar, Mojgan; Procopio-Melino, Renee; Drapeau, Denis

    2015-01-01

Raman-based multivariate calibration models have been developed for real-time in situ monitoring of multiple process parameters within cell culture bioreactors. The developed models are generic, in the sense that they are applicable to various products, media, and cell lines based on Chinese hamster ovary (CHO) host cells, and are scalable to large pilot and manufacturing scales. Several batches using different CHO-based cell lines and corresponding proprietary media and process conditions have been used to generate calibration datasets, and models have been validated using independent datasets from separate batch runs. All models have been validated to be generic and capable of predicting process parameters with acceptable accuracy. The developed models allow monitoring of multiple key bioprocess metabolic variables and hence can be utilized as an important enabling tool for the Quality by Design approaches strongly supported by the U.S. Food and Drug Administration. © 2015 American Institute of Chemical Engineers.
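The core of such a calibration is a partial least-squares regression from spectra to a scalar process parameter. A minimal NIPALS PLS1 in plain NumPy, run on synthetic "spectra" (a generic sketch, not the authors' proprietary models):

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """Minimal PLS1 (NIPALS): returns (b, x_mean, y_mean) with
    y ≈ (X - x_mean) @ b + y_mean."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)              # weight vector
        t = Xc @ w                          # scores
        tt = t @ t
        p = Xc.T @ t / tt                   # X loadings
        qk = yc @ t / tt                    # y loading
        Xc = Xc - np.outer(t, p)            # deflate X
        yc = yc - qk * t                    # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    b = W @ np.linalg.solve(P.T @ W, q)     # regression vector
    return b, x_mean, y_mean

def pls1_predict(model, X):
    b, x_mean, y_mean = model
    return (X - x_mean) @ b + y_mean

# Synthetic two-component "spectra": an analyte peak plus an interferent peak.
rng = np.random.default_rng(1)
wl = np.arange(100)
analyte = rng.uniform(0.0, 1.0, 40)
interferent = rng.uniform(0.0, 1.0, 40)
X = (np.outer(analyte, np.exp(-(wl - 30) ** 2 / 50.0))
     + np.outer(interferent, np.exp(-(wl - 60) ** 2 / 80.0)))
model = pls1_fit(X, analyte, n_comp=2)
```

With noiseless rank-2 data, two latent variables recover the analyte exactly; real spectra need more components plus validation against independent batches, as the abstract describes.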

  10. Highly parameterized model calibration with cloud computing: an example of regional flow model calibration in northeast Alberta, Canada

    NASA Astrophysics Data System (ADS)

    Hayley, Kevin; Schumacher, J.; MacMillan, G. J.; Boutin, L. C.

    2014-05-01

Expanding groundwater datasets collected by automated sensors, and improved groundwater databases, have caused a rapid increase in calibration data available for groundwater modeling projects. Improved methods of subsurface characterization have increased the need for model complexity to represent geological and hydrogeological interpretations. The larger calibration datasets and the need for meaningful predictive uncertainty analysis have both increased the degree of parameterization necessary during model calibration. Due to these competing demands, modern groundwater modeling efforts require a massive degree of parallelization in order to remain computationally tractable. A methodology for the calibration of highly parameterized, computationally expensive models using the Amazon EC2 cloud computing service is presented. The calibration of a regional-scale model of groundwater flow in Alberta, Canada, is provided as an example. The model covers a 30,865 km² domain and includes 28 hydrostratigraphic units. Aquifer properties were calibrated to more than 1,500 static hydraulic head measurements and 10 years of measurements during industrial groundwater use. Three regionally extensive aquifers were parameterized (with spatially variable hydraulic conductivity fields), as was the areal recharge boundary condition, leading to 450 adjustable parameters in total. The PEST-based model calibration was parallelized on up to 250 computing nodes located on Amazon's EC2 servers.

  11. 10 CFR 35.12 - Application for license, amendment, or renewal.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... precautions and instructions; (ii) Methodology for measurement of dosages or doses to be administered to patients or human research subjects; and (iii) Calibration, maintenance, and repair of instruments and...

  12. Orbit-determination performance of Doppler data for interplanetary cruise trajectories. Part 1: Error analysis methodology

    NASA Technical Reports Server (NTRS)

    Ulvestad, J. S.; Thurman, S. W.

    1992-01-01

An error covariance analysis methodology is used to investigate different weighting schemes for two-way (coherent) Doppler data in the presence of transmission-media and observing-platform calibration errors. The analysis focuses on orbit-determination performance in the interplanetary cruise phase of deep-space missions. Analytical models for the Doppler observable and for transmission-media and observing-platform calibration errors are presented, drawn primarily from previous work. Previously published analytical models were improved upon by: (1) considering the effects of errors in the calibration of radio signal propagation through the troposphere and ionosphere as well as station-location errors; (2) modelling the spacecraft state transition matrix using a more accurate piecewise-linear approximation to represent the evolution of the spacecraft trajectory; and (3) incorporating Doppler data weighting functions of elevation angle, which reduce the sensitivity of the estimated spacecraft trajectory to troposphere and ionosphere calibration errors. The analysis is motivated by the need to develop suitable weighting functions for two-way Doppler data acquired at 8.4 GHz (X-band) and 32 GHz (Ka-band). This weighting is likely to differ from the weighting functions currently in use, which were constructed originally for 2.3 GHz (S-band) Doppler data; those data are affected much more strongly by the ionosphere than are the higher frequency data.
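An elevation-dependent weighting of the kind described might take the following shape. The 1/sin(elevation) air-mass scaling and every coefficient below are illustrative assumptions, not the paper's calibrated functions:

```python
import numpy as np

def doppler_sigma(elev_deg, sigma_floor=1e-3, tropo_coeff=5e-3):
    """Assumed Doppler noise model (arbitrary units): a flat floor plus a
    term growing with tropospheric path length, approximated here by the
    1/sin(elevation) air-mass factor."""
    return sigma_floor + tropo_coeff / np.sin(np.radians(elev_deg))

def doppler_weight(elev_deg, **kwargs):
    """Inverse-variance weight for a Doppler point at a given elevation,
    so low-elevation points (long troposphere path) count for less."""
    return 1.0 / doppler_sigma(elev_deg, **kwargs) ** 2
```

In a least-squares orbit fit, these weights would enter the normal equations, de-emphasizing data taken near the horizon where troposphere and ionosphere calibration errors dominate.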

  13. An operational epidemiological model for calibrating agent-based simulations of pandemic influenza outbreaks.

    PubMed

    Prieto, D; Das, T K

    2016-03-01

The uncertainty of pandemic influenza viruses continues to pose major preparedness challenges for public health policymakers. Decisions to mitigate influenza outbreaks often involve a tradeoff between the social costs of interventions (e.g., school closure) and the cost of uncontrolled spread of the virus. To achieve a balance, policymakers must assess the impact of mitigation strategies once an outbreak begins and the virus characteristics are known. Agent-based (AB) simulation is a useful tool for building highly granular disease spread models that incorporate the epidemiological features of the virus as well as the demographic and social behavioral attributes of tens of millions of affected people. Such disease spread models provide an excellent basis on which various mitigation strategies can be tested before they are adopted and implemented by policymakers. However, to serve as a testbed for mitigation strategies, AB simulation models must be operational. A critical requirement for operational AB models is that they are amenable to quick and simple calibration. The calibration process works as follows: the AB model accepts information available from the field and uses it to update its parameters such that some of its outputs in turn replicate the field data. In this paper, we present our epidemiological-model-based calibration methodology, which has low computational complexity and is easy to interpret. Our model accepts a field estimate of the basic reproduction number and then uses it to update (calibrate) the infection probabilities such that their effect, combined with the effects of the given virus epidemiology, demographics, and social behavior, results in an infection pattern yielding a similar value of the basic reproduction number. We evaluate the accuracy of the calibration methodology by applying it to an AB simulation model mimicking a regional outbreak in the US.
The calibrated model is shown to yield infection patterns closely replicating the input estimates of the basic reproduction number. The calibration method is also tested by replicating an initial infection incidence trend for an H1N1 outbreak like that of 2009.
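The calibration loop described here, adjusting infection probabilities until the model reproduces a field-estimated basic reproduction number, can be sketched with a deliberately simple forward model (daily per-contact transmission over a fixed infectious period; the contact counts and all other numbers are illustrative, and the authors' AB model is far richer):

```python
def model_r0(p_daily, contacts=12, infectious_days=5):
    """Toy forward model: expected secondary cases for one index case,
    where each of `contacts` contacts is infected with probability
    1 - (1 - p_daily)^infectious_days."""
    return contacts * (1.0 - (1.0 - p_daily) ** infectious_days)

def calibrate_daily_prob(target_r0, contacts=12, infectious_days=5, tol=1e-8):
    """Bisect the daily infection probability until the model's reproduction
    number matches the field estimate (the shape of the calibration loop,
    applied here to a toy model rather than an agent-based simulation)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if model_r0(mid, contacts, infectious_days) < target_r0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p_cal = calibrate_daily_prob(1.8)
```

Because the forward map is monotone in the infection probability, bisection converges reliably; with a stochastic AB model, each evaluation would instead be an average over replicated simulation runs.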

  14. SPAGETTA: a Multi-Purpose Gridded Stochastic Weather Generator

    NASA Astrophysics Data System (ADS)

    Dubrovsky, M.; Huth, R.; Rotach, M. W.; Dabhi, H.

    2017-12-01

SPAGETTA is a new multisite/gridded multivariate parametric stochastic weather generator (WG). Site-specific precipitation occurrence and amount are modelled by a Markov chain and a Gamma distribution, the non-precipitation variables are modelled by an autoregressive (AR) model conditioned on precipitation occurrence, and the spatial coherence of all variables is modelled following the Wilks (2009) approach. SPAGETTA may be run in two modes. Mode 1: it is run as a classical WG, which is calibrated using weather series from multiple sites; only then may it produce arbitrarily long synthetic series mimicking the spatial and temporal structure of the calibration data. To generate weather series representing the future climate, the WG parameters are modified according to a climate change scenario, typically derived from GCM or RCM simulations. Mode 2: the user provides only basic information (not necessarily realistic) on the temporal and spatial auto-correlation structure of the weather variables and their mean annual cycle; the generator itself derives the parameters of the underlying AR model, which produces the multi-site weather series. Optionally, the user may add a spatially varying trend, which is superimposed on the synthetic series. The contribution consists of the following parts: (a) Model of the WG. (b) Validation of the WG in terms of spatial temperature and precipitation characteristics, including characteristics of spatial hot/cold/dry/wet spells. (c) Results of the climate change impact experiment, in which the WG parameters representing spatial and temporal variability are modified using the climate change scenarios and the effect on the above spatial validation indices is analysed. In this experiment, the WG is calibrated using the E-OBS gridded daily weather data for several European regions, and the climate change scenarios are derived from selected RCM simulations (CORDEX database).
(d) The second mode of operation will be demonstrated by results obtained while developing the methodology for assessing the collective significance of trends in multi-site weather series. The performance of the proposed test statistics is assessed on the basis of a large number of realisations of synthetic series produced by the WG, assuming a given statistical structure and trend of the weather series.
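The precipitation component of such a generator, a two-state Markov chain for occurrence plus a Gamma law for wet-day amounts, can be sketched for a single site as follows. Parameter values are illustrative, and the Wilks (2009) multisite correlation machinery is omitted for brevity:

```python
import numpy as np

def generate_precip(n_days, p_wet_given_dry=0.25, p_wet_given_wet=0.6,
                    shape=0.8, scale=6.0, rng=None):
    """Single-site sketch of a Markov-chain/Gamma precipitation generator:
    occurrence follows a first-order two-state Markov chain, and wet-day
    amounts (mm) are drawn from a Gamma distribution."""
    rng = rng or np.random.default_rng(42)
    wet = False
    amounts = np.zeros(n_days)
    for day in range(n_days):
        p_wet = p_wet_given_wet if wet else p_wet_given_dry
        wet = rng.random() < p_wet
        if wet:
            amounts[day] = rng.gamma(shape, scale)
    return amounts

series = generate_precip(20000)
```

With these transition probabilities, the stationary wet-day fraction is 0.25 / (0.25 + 0.4) ≈ 0.38 and the mean wet-day amount is shape × scale = 4.8 mm; long synthetic runs should reproduce both, which is exactly the kind of check the validation step (b) performs on richer statistics.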

  15. Multivariate Analysis and Prediction of Dioxin-Furan ...

    EPA Pesticide Factsheets

Peer Review Draft of Regional Methods Initiative Final Report. Dioxins, which are bioaccumulative and environmentally persistent, pose an ongoing risk to human and ecosystem health. Fish constitute a significant source of dioxin exposure for humans and fish-eating wildlife. Current dioxin analytical methods are costly, time-consuming, and produce hazardous by-products. A Danish team developed a novel, multivariate statistical methodology based on the covariance of dioxin-furan congener Toxic Equivalences (TEQs) and fatty acid methyl esters (FAMEs) and applied it to North Atlantic Ocean fishmeal samples. The goal of the current study was to extend this Danish methodology to 77 whole and composite fish samples from three trophic groups: predator (whole largemouth bass), benthic (whole flathead and channel catfish), and forage fish (composite bluegill, pumpkinseed, and green sunfish) from two dioxin-contaminated rivers (Pocatalico R. and Kanawha R.) in West Virginia, USA. Multivariate statistical analyses, including Principal Components Analysis (PCA), Hierarchical Clustering, and Partial Least Squares Regression (PLS), were used to assess the relationship between the FAMEs and TEQs in these dioxin-contaminated freshwater fish from the Kanawha and Pocatalico Rivers. These three multivariate statistical methods all confirm that the pattern of FAMEs in these freshwater fish covaries with and is predictive of the WHO TE
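The PCA step of such an analysis, projecting the FAME matrix onto a few scores to look for trophic-group structure, is a few lines of linear algebra. The sketch below runs on a random stand-in matrix; no real fish data are involved:

```python
import numpy as np

def pca_scores(X, n_comp=2):
    """Sample scores of the first n_comp principal components, computed
    via SVD of the mean-centred data matrix (rows = samples, columns =
    variables, e.g. FAME concentrations)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :n_comp] * S[:n_comp]

rng = np.random.default_rng(3)
fame = rng.normal(size=(20, 8))      # stand-in for a samples-by-FAMEs matrix
scores = pca_scores(fame, n_comp=2)
```

Plotting the two score columns against each other (colored by trophic group) is the usual first look at whether congener-related patterns cluster, before fitting a PLS model from FAMEs to TEQs.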

  16. Express bus-fringe parking planning methodology.

    DOT National Transportation Integrated Search

    1975-01-01

    The conception, calibration, and evaluation of alternative disaggregate behavioral models of the express bus-fringe parking travel choice situation are described. Survey data collected for the Parham Express Service in Richmond, Virginia, are used to...

  17. Experimental methodology for turbocompressor in-duct noise evaluation based on beamforming wave decomposition

    NASA Astrophysics Data System (ADS)

    Torregrosa, A. J.; Broatch, A.; Margot, X.; García-Tíscar, J.

    2016-08-01

    An experimental methodology is proposed to assess the noise emission of centrifugal turbocompressors like those of automotive turbochargers. A step-by-step procedure is detailed, starting from the theoretical considerations of sound measurement in flow ducts and examining specific experimental setup guidelines and signal processing routines. Special care is taken regarding some limiting factors that adversely affect the measuring of sound intensity in ducts, namely calibration, sensor placement and frequency ranges and restrictions. In order to provide illustrative examples of the proposed techniques and results, the methodology has been applied to the acoustic evaluation of a small automotive turbocharger in a flow bench. Samples of raw pressure spectra, decomposed pressure waves, calibration results, accurate surge characterization and final compressor noise maps and estimated spectrograms are provided. The analysis of selected frequency bands successfully shows how different, known noise phenomena of particular interest such as mid-frequency "whoosh noise" and low-frequency surge onset are correlated with operating conditions of the turbocharger. Comparison against external inlet orifice intensity measurements shows good correlation and improvement with respect to alternative wave decomposition techniques.

  18. Evaluating the role of evapotranspiration remote sensing data in improving hydrological modeling predictability

    NASA Astrophysics Data System (ADS)

    Herman, Matthew R.; Nejadhashemi, A. Pouyan; Abouali, Mohammad; Hernandez-Suarez, Juan Sebastian; Daneshvar, Fariborz; Zhang, Zhen; Anderson, Martha C.; Sadeghi, Ali M.; Hain, Christopher R.; Sharifi, Amirreza

    2018-01-01

As the global demand for freshwater resources continues to rise, it has become increasingly important to ensure the sustainability of these resources. This is accomplished through management strategies that often rely on monitoring and on hydrological models. However, monitoring at large scales is not feasible, so model applications become challenging, especially when spatially distributed datasets, such as evapotranspiration, are needed to understand model performance. Because of these limitations, most hydrological models are calibrated only against data obtained from site/point observations, such as streamflow. Therefore, the main focus of this paper is to examine whether the incorporation of remotely sensed, spatially distributed datasets can improve the overall performance of the model. In this study, actual evapotranspiration (ETa) data were obtained from two different satellite-based remote sensing datasets. One dataset estimates ETa based on the Simplified Surface Energy Balance (SSEBop) model, while the other estimates ETa based on the Atmosphere-Land Exchange Inverse (ALEXI) model. The hydrological model used in this study is the Soil and Water Assessment Tool (SWAT), which was calibrated against spatially distributed ETa and single-point streamflow records for the Honeyoey Creek-Pine Creek Watershed, located in Michigan, USA. Two different techniques, multi-variable and genetic algorithm, were used to calibrate the SWAT model. Using the aforementioned datasets, the performance of the hydrological model in estimating ETa was improved with both calibration techniques, achieving Nash-Sutcliffe efficiency (NSE) values >0.5 (0.73-0.85), percent bias (PBIAS) values within ±25% (±21.73%), and root mean squared error - observations standard deviation ratio (RSR) values <0.7 (0.39-0.52).
However, the genetic algorithm technique, while more effective for the ETa calibration, significantly reduced the model performance for estimating streamflow (NSE: 0.32-0.52, PBIAS: ±32.73%, RSR: 0.63-0.82). Meanwhile, using the multi-variable technique, the model performance for estimating streamflow was maintained with a high level of accuracy (NSE: 0.59-0.61, PBIAS: ±13.70%, RSR: 0.63-0.64) while the evapotranspiration estimates were improved. Results from this assessment show that incorporating remotely sensed, spatially distributed data can improve hydrological model performance when coupled with the right calibration technique.
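The three goodness-of-fit statistics quoted above are straightforward to compute. A generic sketch (the PBIAS sign convention follows the common SWAT-literature definition, in which positive values indicate average underestimation):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 minus model SSE over observed variance."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias; positive values mean the simulation underestimates."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def rsr(obs, sim):
    """RMSE standardized by the standard deviation of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((obs - sim) ** 2)) / np.std(obs)
```

Note that RSR and NSE are tied together algebraically (RSR = √(1 − NSE)), which is why thresholds such as NSE > 0.5 and RSR < 0.7 tend to be satisfied, or violated, in tandem.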

  19. VIIRS reflective solar bands on-orbit calibration five-year update: extension and improvements

    NASA Astrophysics Data System (ADS)

    Sun, Junqiang; Wang, Menghua

    2016-09-01

The Suomi National Polar-orbiting Partnership (SNPP) Visible Infrared Imaging Radiometer Suite (VIIRS) has been on-orbit for almost five years. VIIRS has 22 spectral bands, fourteen of which are reflective solar bands (RSB) covering a spectral range from 0.410 to 2.25 μm. The SNPP VIIRS RSB have performed very well since launch, and their radiometric calibration has reached a mature stage. Numerous improvements have been made to the standard RSB calibration methodology. Additionally, a hybrid calibration method, which combines the advantages of solar diffuser calibration and lunar calibration while avoiding the drawbacks of each, completes the highly accurate calibration of the VIIRS RSB. The successfully calibrated RSB data record significantly benefits the ocean color products, whose stringent requirements make them especially sensitive to calibration accuracy, and has helped those products reach maturity and high quality. Nevertheless, many challenging issues remain to be investigated for further improvement of the VIIRS sensor data records (SDR). In this presentation, the robust results of the RSB calibrations and the ocean product performance will be presented. The reprocessed SDR is now undergoing additional science tests, beyond the ocean science tests completed one year ago, in preparation for becoming the mission-long operational SDR.

  20. Measurement of pH in whole blood by near-infrared spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alam, M. Kathleen; Maynard, John D.; Robinson, M. Ries

    1999-03-01

Whole blood pH has been determined in vitro by using near-infrared spectroscopy over the wavelength range of 1500 to 1785 nm with multivariate calibration modeling of the spectral data obtained from two different sample sets. In the first sample set, the pH of whole blood was varied without controlling cell size and oxygen saturation (O2 Sat) variation. The result was that the red blood cell (RBC) size and O2 Sat correlated with pH. Although the partial least-squares (PLS) multivariate calibration of these data produced a good pH prediction (cross-validation standard error of prediction (CVSEP) = 0.046, R² = 0.982), the spectral data were dominated by scattering changes due to changing RBC size that correlated with the pH changes. A second experiment was carried out in which the RBC size and O2 Sat were varied orthogonally to the pH variation. A PLS calibration of the spectral data obtained from these samples produced a pH prediction with an R² of 0.954 and a cross-validated standard error of prediction of 0.064 pH units. The robustness of the PLS calibration models was tested by predicting the data obtained from the other sets. The predicted pH values obtained from both data sets yielded R² values greater than 0.9 once the data were corrected for differences in hemoglobin concentration. For example, with the use of the calibration produced from the second sample set, the pH values from the first sample set were predicted with an R² of 0.92 after the predictions were corrected for bias and slope. It is shown that spectral information specific to pH-induced chemical changes in the hemoglobin molecule is contained within the PLS loading vectors developed for both the first and second data sets. It is this pH-specific information that allows the spectra dominated by pH-correlated scattering changes to provide robust pH predictive ability in the uncorrelated data, and vice versa.
© 1999 Society for Applied Spectroscopy
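The cross-validated standard error of prediction (CVSEP) reported above is the root-mean-square of leave-one-out prediction errors. A generic sketch, with an ordinary-least-squares fitter standing in for the PLS model:

```python
import numpy as np

def loo_cvsep(X, y, fit, predict):
    """RMS leave-one-out prediction error for any fit/predict pair:
    refit the model n times, each time holding one sample out."""
    n = len(y)
    errs = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        model = fit(X[keep], y[keep])
        errs[i] = predict(model, X[i:i + 1])[0] - y[i]
    return np.sqrt(np.mean(errs ** 2))

# OLS with intercept as an illustrative stand-in for the PLS calibration.
def ols_fit(X, y):
    return np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)[0]

def ols_predict(coef, X):
    return np.c_[X, np.ones(len(X))] @ coef

rng = np.random.default_rng(7)
X = rng.normal(size=(15, 2))
y = X @ np.array([1.0, -2.0]) + 3.0
```

For an exactly linear relationship the CVSEP is numerically zero; with measurement noise it approaches the noise level, which is why it serves as an honest estimate of predictive, rather than merely fitted, error.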

  1. Linear models of coregionalization for multivariate lattice data: Order-dependent and order-free cMCARs.

    PubMed

    MacNab, Ying C

    2016-08-01

This paper concerns multivariate conditional autoregressive models defined by linear combinations of independent or correlated underlying spatial processes. Known as linear models of coregionalization, the method offers a systematic and unified approach for formulating multivariate extensions to a broad range of univariate conditional autoregressive models. The resulting multivariate spatial models represent classes of coregionalized multivariate conditional autoregressive models that enable flexible modelling of multivariate spatial interactions, yielding coregionalization models with symmetric or asymmetric cross-covariances of different spatial variation and smoothness. In the context of multivariate disease mapping, for example, they facilitate borrowing strength both over space and across variables, allowing for more flexible multivariate spatial smoothing. Specifically, we present a broadened coregionalization framework that includes order-dependent, order-free, and order-robust multivariate models; a new class of order-free coregionalized multivariate conditional autoregressive models is introduced. We tackle computational challenges and present solutions that are integral for Bayesian analysis of these models. We also discuss two ways of computing the deviance information criterion for comparison among competing hierarchical models with or without unidentifiable prior parameters. The models and related methodology are developed in the broad context of modelling multivariate data on a spatial lattice and illustrated in the context of multivariate disease mapping. The coregionalization framework and related methods also present a general approach for building spatially structured cross-covariance functions for multivariate geostatistics. © The Author(s) 2016.
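The basic order-dependent construction can be made concrete: stack p independent proper-CAR fields and mix them with a lower-triangular matrix A, so that for phi = (A ⊗ I) psi the joint precision is (A⁻ᵀ ⊗ I) blockdiag(Q₁, …, Q_p) (A⁻¹ ⊗ I). A toy NumPy sketch of that algebra only, not the paper's full framework:

```python
import numpy as np

def car_precision(W, rho):
    """Proper-CAR precision on adjacency matrix W: D - rho * W,
    with D the diagonal matrix of neighbour counts."""
    return np.diag(W.sum(axis=1)) - rho * W

def coregionalized_precision(A, Q_list):
    """Joint precision of phi = (A ⊗ I) psi, where psi stacks independent
    latent CAR fields with precisions Q_list[k] (order-dependent sketch)."""
    n = Q_list[0].shape[0]
    Q_psi = np.zeros((len(Q_list) * n, len(Q_list) * n))
    for k, Qk in enumerate(Q_list):
        Q_psi[k * n:(k + 1) * n, k * n:(k + 1) * n] = Qk
    M = np.kron(np.linalg.inv(A).T, np.eye(n))
    return M @ Q_psi @ M.T

# 4-node path graph; two latent fields with different smoothing parameters,
# mixed by a lower-triangular A (values illustrative).
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A = np.array([[1.0, 0.0],
              [0.5, 1.0]])
Q = coregionalized_precision(A, [car_precision(W, 0.9), car_precision(W, 0.5)])
```

Because the construction is a congruence transform of a block-diagonal positive-definite matrix, the joint precision stays symmetric positive definite, so the implied multivariate prior is proper; the order-dependence the paper addresses comes from the triangular structure of A.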

  2. The analytical calibration in (bio)imaging/mapping of the metallic elements in biological samples--definitions, nomenclature and strategies: state of the art.

    PubMed

    Jurowski, Kamil; Buszewski, Bogusław; Piekoszewski, Wojciech

    2015-01-01

Nowadays, studies of the distribution of metallic elements in biological samples are among the most important research topics. There are many articles dedicated to specific analytical atomic spectrometry techniques used for mapping/(bio)imaging the metallic elements in various kinds of biological samples. However, this literature lacks articles reviewing calibration strategies and their problems, nomenclature, definitions, and the ways and methods used to obtain quantitative distribution maps. The aim of this article was to characterize the analytical calibration in the (bio)imaging/mapping of the metallic elements in biological samples, including (1) nomenclature; (2) definitions; and (3) selected examples of sophisticated calibration strategies, with analytical calibration procedures applied in the different analytical methods currently used to study an element's distribution in biological samples/materials, such as LA-ICP-MS, SIMS, EDS, XRF, and others. The main emphasis was placed on the procedures and methodology of the analytical calibration strategy. Additionally, this work aims to systematize the nomenclature for the calibration terms: analytical calibration, analytical calibration method, analytical calibration procedure, and analytical calibration strategy. The authors also want to popularize a division of calibration methods different from those hitherto used. This article is the first work in the literature that refers to and emphasizes the many different and complex aspects of analytical calibration problems in studies related to (bio)imaging/mapping metallic elements in different kinds of biological samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Experimental Design, Near-Infrared Spectroscopy, and Multivariate Calibration: An Advanced Project in a Chemometrics Course

    ERIC Educational Resources Information Center

    de Oliveira, Rodrigo R.; das Neves, Luiz S.; de Lima, Kassio M. G.

    2012-01-01

    A chemometrics course is offered to students in their fifth semester of the chemistry undergraduate program that includes an in-depth project. Students carry out the project over five weeks (three 8-h sessions per week) and conduct it in parallel to other courses or other practical work. The students conduct a literature search, carry out…

  4. Application of Multivariable Analysis and FTIR-ATR Spectroscopy to the Prediction of Properties in Campeche Honey

    PubMed Central

    Pat, Lucio; Ali, Bassam; Guerrero, Armando; Córdova, Atl V.; Garduza, José P.

    2016-01-01

Attenuated total reflectance-Fourier transform infrared (ATR-FTIR) spectrometry combined with a chemometrics model was used for the determination of physicochemical properties (pH, redox potential, free acidity, electrical conductivity, moisture, total soluble solids (TSS), ash, and HMF) in honey samples. The reference values of 189 honey samples of different botanical origin were determined using the methods of the Association of Official Analytical Chemists (AOAC, 1990), the Codex Alimentarius (2001), and the International Honey Commission (2002). Multivariate calibration models were built using partial least squares (PLS) for the measurands studied. The developed models were validated using cross-validation and external validation; several statistical parameters were obtained to determine the robustness of the calibration models: the optimum number of principal components (PCs), the standard error of cross-validation (SECV), the coefficient of determination of cross-validation (R²cal), the standard error of prediction (SEP), the coefficient of determination for external validation (R²val), and the coefficient of variation (CV). The prediction accuracy for pH, redox potential, electrical conductivity, moisture, TSS, and ash was good, while for free acidity and HMF it was poor. The results demonstrate that ATR-FTIR spectrometry is a valuable, rapid, and nondestructive tool for the quantification of physicochemical properties of honey. PMID:28070445
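Two of the validation statistics listed above, SEP and the coefficient of determination, have simple closed forms. A generic sketch (the bias-corrected SEP definition shown here is the one commonly used for external validation of NIR/FTIR calibrations):

```python
import numpy as np

def sep(y_ref, y_pred):
    """Bias-corrected standard error of prediction: the standard deviation
    of the residuals after removing their mean (the systematic bias)."""
    resid = np.asarray(y_pred, float) - np.asarray(y_ref, float)
    return np.sqrt(np.sum((resid - resid.mean()) ** 2) / (len(resid) - 1))

def r2(y_ref, y_pred):
    """Coefficient of determination of predicted vs. reference values."""
    y_ref, y_pred = np.asarray(y_ref, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_ref - y_pred) ** 2)
    ss_tot = np.sum((y_ref - y_ref.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

Because SEP is bias-corrected, a calibration with a constant offset can still show a small SEP while R² reveals the degraded agreement, which is why both are reported together.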

  5. Determination of alcohol and extract concentration in beer samples using a combined method of near-infrared (NIR) spectroscopy and refractometry.

    PubMed

    Castritius, Stefan; Kron, Alexander; Schäfer, Thomas; Rädle, Matthias; Harms, Diedrich

    2010-12-22

    A new approach combining near-infrared (NIR) spectroscopy and refractometry was developed in this work to determine the concentrations of alcohol and real extract in various beer samples. A partial least-squares (PLS) regression, as multivariate calibration method, was used to evaluate the correlation between the spectroscopy/refractometry data and the alcohol/extract concentrations. This multivariate combination of spectroscopy and refractometry enhanced the precision of the alcohol determination, compared to spectroscopy alone, due to the effect of high extract concentration on the spectral data, especially in nonalcoholic beer samples. For the NIR calibration, two mathematical pretreatments (first-order derivation and linear baseline correction) were applied to eliminate light scattering effects. A sample grouping of the refractometry data was also applied to increase the accuracy of the determined concentration. The root mean squared errors of validation (RMSEV) of the validation process concerning alcohol and extract concentration were 0.23 Mas% (method A), 0.12 Mas% (method B), and 0.19 Mas% (method C) and 0.11 Mas% (method A), 0.11 Mas% (method B), and 0.11 Mas% (method C), respectively.
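
    The two spectral pretreatments named above are standard and easy to reproduce. A minimal sketch, assuming a Savitzky-Golay filter for the first derivative (a common implementation choice, not necessarily the authors') and an endpoint-anchored line for the baseline; the demo spectrum is invented:

```python
import numpy as np
from scipy.signal import savgol_filter

def first_derivative(spectra, window=11, poly=2):
    """Savitzky-Golay first derivative, a common scatter-removal pretreatment."""
    return savgol_filter(spectra, window, poly, deriv=1, axis=-1)

def linear_baseline_correction(spectra):
    """Subtract the straight line through each spectrum's two endpoints."""
    n = spectra.shape[-1]
    x = np.arange(n)
    left, right = spectra[..., :1], spectra[..., -1:]
    return spectra - (left + (right - left) * x / (n - 1))

# demo spectrum: a peak riding on a sloped baseline
demo = np.linspace(0.2, 0.8, 100) + np.exp(-0.5 * ((np.arange(100) - 60) / 5.0) ** 2)
corrected = linear_baseline_correction(demo)
deriv = first_derivative(demo)
```

    Both pretreatments remove additive (and, for the derivative, slowly varying multiplicative) offsets so that the PLS model fits chemical absorption features rather than scattering artifacts.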

  6. Multivariate analysis applied to the study of spatial distributions found in drug-eluting stent coatings by confocal Raman microscopy.

    PubMed

    Balss, Karin M; Long, Frederick H; Veselov, Vladimir; Orana, Argjenta; Akerman-Revis, Eugena; Papandreou, George; Maryanoff, Cynthia A

    2008-07-01

    Multivariate data analysis was applied to confocal Raman measurements on stents coated with the polymers and drug used in the CYPHER Sirolimus-eluting Coronary Stents. Partial least-squares (PLS) regression was used to establish three independent calibration curves for the coating constituents: sirolimus, poly(n-butyl methacrylate) [PBMA], and poly(ethylene-co-vinyl acetate) [PEVA]. The PLS calibrations were based on average spectra generated from each spatial location profiled. The PLS models were tested on six unknown stent samples to assess accuracy and precision. The wt % difference between PLS predictions and laboratory assay values for sirolimus was less than 1 wt % for the composite of the six unknowns, while for the polymer models the difference was estimated at less than 0.5 wt % for the combined samples. The linearity and specificity of the three PLS models were also demonstrated. In contrast to earlier univariate models, the PLS models achieved mass balance with better accuracy. This analysis was extended to evaluate the spatial distribution of the three constituents. Quantitative bitmap images of drug-eluting stent coatings are presented for the first time to assess the local distribution of components.

  7. A novel second-order standard addition analytical method based on data processing with multidimensional partial least-squares and residual bilinearization.

    PubMed

    Lozano, Valeria A; Ibañez, Gabriela A; Olivieri, Alejandro C

    2009-10-05

    In the presence of analyte-background interactions and a significant background signal, both second-order multivariate calibration and standard addition are required for successful analyte quantitation achieving the second-order advantage. This report discusses a modified second-order standard addition method, in which the test data matrix is subtracted from the standard addition matrices, and quantitation proceeds via the classical external calibration procedure. It is shown that this novel data processing method allows one to apply not only parallel factor analysis (PARAFAC) and multivariate curve resolution-alternating least-squares (MCR-ALS), but also the recently introduced and more flexible partial least-squares (PLS) models coupled to residual bilinearization (RBL). In particular, the multidimensional variant N-PLS/RBL is shown to produce the best analytical results. The comparison is carried out with the aid of a set of simulated data, as well as two experimental data sets: one aimed at the determination of salicylate in human serum in the presence of naproxen as an additional interferent, and the second one devoted to the analysis of danofloxacin in human serum in the presence of salicylate.
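
    The core algebraic step of the modified method, subtracting the test-sample matrix from the standard-addition matrices so that the background cancels, can be illustrated with simulated second-order data. This sketch only demonstrates the subtraction and the recovery of the unit-concentration response; the subsequent PLS/RBL (or PARAFAC) quantitation step is not implemented, and all matrices are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical second-order data (e.g. excitation-emission matrices), 20 x 15:
# per-unit-concentration analyte response S plus an unknown background B
S = rng.random((20, 15))
B = rng.random((20, 15))
c_test = 0.7                                   # true analyte level in the test sample

test_matrix = c_test * S + B                   # measured test-sample matrix
additions = np.array([0.5, 1.0, 1.5])          # added standard concentrations
addition_matrices = [test_matrix + a * S for a in additions]

# subtracting the test matrix cancels both the background and the native
# analyte contribution, leaving pure-standard responses
diff = np.stack([m - test_matrix for m in addition_matrices])

# regress each matrix element on the added concentration: the slopes recover
# the unit-concentration response, on which classical external calibration
# (here via PLS/RBL in the paper) can then proceed
slopes, *_ = np.linalg.lstsq(additions.reshape(-1, 1),
                             diff.reshape(len(additions), -1), rcond=None)
S_hat = slopes.reshape(S.shape)
```

    In real data the differences still contain measurement noise and residual interferent structure, which is exactly what the residual bilinearization step of N-PLS/RBL is designed to absorb.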

  8. Evaluation of multivariate calibration models with different pre-processing and processing algorithms for a novel resolution and quantitation of spectrally overlapped quaternary mixture in syrup

    NASA Astrophysics Data System (ADS)

    Moustafa, Azza A.; Hegazy, Maha A.; Mohamed, Dalia; Ali, Omnia

    2016-02-01

    A novel approach for the resolution and quantitation of a severely overlapped quaternary mixture of carbinoxamine maleate (CAR), pholcodine (PHL), ephedrine hydrochloride (EPH) and sunset yellow (SUN) in syrup was demonstrated utilizing different spectrophotometrically assisted multivariate calibration methods. The applied methods used different processing and pre-processing algorithms. The proposed methods were partial least squares (PLS), concentration residuals augmented classical least squares (CRACLS), and a novel method: continuous wavelet transforms coupled with partial least squares (CWT-PLS). These methods were applied to a training set in the concentration ranges of 40-100 μg/mL, 40-160 μg/mL, 100-500 μg/mL and 8-24 μg/mL for the four components, respectively. The methods required no preliminary separation step or chemical pretreatment. The validity of the methods was evaluated by an external validation set. The selectivity of the developed methods was demonstrated by analyzing the drugs in their combined pharmaceutical formulation without any interference from additives. The obtained results were statistically compared with the official and reported methods, where no significant difference was observed regarding either accuracy or precision.

  9. Ratio manipulating spectrophotometry versus chemometry as stability indicating methods for cefquinome sulfate determination.

    PubMed

    Yehia, Ali M; Arafa, Reham M; Abbas, Samah S; Amer, Sawsan M

    2016-01-15

    Spectral resolution of cefquinome sulfate (CFQ) in the presence of its degradation products was studied. Three selective, accurate and rapid spectrophotometric methods were performed for the determination of CFQ in the presence of either its hydrolytic, oxidative or photo-degradation products. The proposed ratio difference, derivative ratio and mean centering are ratio-manipulating spectrophotometric methods that were satisfactorily applied for selective determination of CFQ within a linear range of 5.0-40.0 μg mL(-1). Concentration Residuals Augmented Classical Least Squares was applied and evaluated for the determination of the cited drug in the presence of all its degradation products. Traditional Partial Least Squares regression was also applied and benchmarked against the proposed advanced multivariate calibration. An experimental design of 25 synthetic mixtures of three factors at five levels was used to calibrate and validate the multivariate models. Advanced chemometrics succeeded in quantitative and qualitative analyses of CFQ along with its hydrolytic, oxidative and photo-degradation products. The proposed methods were applied successfully to the analysis of different pharmaceutical formulations. These developed methods were simple and cost-effective compared with the manufacturer's RP-HPLC method. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Cross-calibration of the Terra MODIS, Landsat 7 ETM+ and EO-1 ALI sensors using near-simultaneous surface observation over the Railroad Valley Playa, Nevada, test site

    USGS Publications Warehouse

    Chander, G.; Angal, A.; Choi, T.; Meyer, D.J.; Xiong, X.; Teillet, P.M.

    2007-01-01

    A cross-calibration methodology has been developed using coincident image pairs from the Terra Moderate Resolution Imaging Spectroradiometer (MODIS), the Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) and the Earth Observing-1 (EO-1) Advanced Land Imager (ALI) to verify the absolute radiometric calibration accuracy of these sensors with respect to each other. To quantify the effects due to different spectral responses, the Relative Spectral Responses (RSR) of these sensors were studied and compared by developing a set of "figures-of-merit." Seven cloud-free scenes collected over the Railroad Valley Playa, Nevada (RVPN), test site were used to conduct the cross-calibration study. This cross-calibration approach was based on image statistics from near-simultaneous observations made by different satellite sensors. Homogeneous regions of interest (ROI) were selected in the image pairs, and the mean target statistics were converted to absolute units of at-sensor reflectance. Using these reflectances, a set of cross-calibration equations was developed, giving a relative gain and bias between each sensor pair.

  11. Multivariate probability distribution for sewer system vulnerability assessment under data-limited conditions.

    PubMed

    Del Giudice, G; Padulano, R; Siciliano, D

    2016-01-01

    The lack of geometrical and hydraulic information about sewer networks often excludes the adoption of in-depth modeling tools to obtain prioritization strategies for funds management. The present paper describes a novel statistical procedure for defining the prioritization scheme for preventive maintenance strategies based on a small sample of failure data collected by the Sewer Office of the Municipality of Naples (IT). Novelty issues involve, among others, considering sewer parameters as continuous statistical variables and accounting for their interdependences. After a statistical analysis of maintenance interventions, the most important available factors affecting the process are selected and their mutual correlations identified. Then, after a Box-Cox transformation of the original variables, a methodology is provided for the evaluation of a vulnerability map of the sewer network by adopting a joint multivariate normal distribution with different parameter sets. The goodness-of-fit is eventually tested for each distribution by means of a multivariate plotting position. The developed methodology is expected to assist municipal engineers in identifying critical sewers, prioritizing sewer inspections in order to fulfill rehabilitation requirements.
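
    The Box-Cox-then-joint-Gaussian idea can be sketched compactly. This is a toy illustration with invented sewer attributes (the paper's actual variables and parameter sets are not reproduced), using SciPy's `boxcox` and `multivariate_normal`:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# two hypothetical sewer attributes: skewed and mutually correlated
age = rng.lognormal(mean=3.0, sigma=0.4, size=200)                        # years
diameter = 0.3 + 0.02 * age / age.mean() + rng.lognormal(-2.0, 0.3, 200)  # m

# Box-Cox each variable toward normality, then fit the joint Gaussian
t_age, lam_age = stats.boxcox(age)
t_dia, lam_dia = stats.boxcox(diameter)
Z = np.column_stack([t_age, t_dia])
mvn = stats.multivariate_normal(mean=Z.mean(axis=0), cov=np.cov(Z, rowvar=False))

# low joint density flags atypical reaches as candidates for inspection
density = mvn.pdf(Z)
```

    Working with the joint density (rather than per-variable thresholds) is what lets the interdependences between sewer parameters enter the vulnerability ranking.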

  12. Optical See-Through Head Mounted Display Direct Linear Transformation Calibration Robustness in the Presence of User Alignment Noise

    NASA Technical Reports Server (NTRS)

    Axholt, Magnus; Skoglund, Martin; Peterson, Stephen D.; Cooper, Matthew D.; Schoen, Thomas B.; Gustafsson, Fredrik; Ynnerman, Anders; Ellis, Stephen R.

    2010-01-01

    Augmented Reality (AR) is a technique by which computer generated signals synthesize impressions that are made to coexist with the surrounding real world as perceived by the user. Human smell, taste, touch and hearing can all be augmented, but most commonly AR refers to the human vision being overlaid with information otherwise not readily available to the user. A correct calibration is important on an application level, ensuring that e.g. data labels are presented at correct locations, but also on a system level to enable display techniques such as stereoscopy to function properly [SOURCE]. Calibration methodology is thus a research area vital to AR. While great achievements have already been made, some properties of current calibration methods for augmenting vision do not translate from their traditional use in automated camera calibration to their use with a human operator. This paper uses a Monte Carlo simulation of a standard direct linear transformation camera calibration to investigate how user-introduced head orientation noise affects the parameter estimation during a calibration procedure of an optical see-through head mounted display.
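
    The simulation structure, a basic DLT estimate repeated over noisy alignments, can be sketched as follows. The camera, points, and noise level are invented, and this is the plain (unnormalized) DLT rather than whatever refinements the authors applied:

```python
import numpy as np

def dlt(world, image):
    """Basic direct linear transformation: 3x4 projection from 3D-2D pairs."""
    A = []
    for (X, Y, Z), (u, v) in zip(world, image):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)  # right singular vector = projection matrix

def project(P, world):
    h = np.c_[world, np.ones(len(world))] @ P.T
    return h[:, :2] / h[:, 2:3]

rng = np.random.default_rng(3)
P_true = np.c_[np.eye(3), [[0.1], [0.2], [2.0]]]   # hypothetical camera
world = rng.uniform(-1, 1, size=(12, 3)) + [0.0, 0.0, 5.0]
clean = project(P_true, world)

# Monte Carlo: perturb the "user-aligned" image points with alignment noise
# and record the reprojection error of each estimated camera
errors = []
for _ in range(200):
    noisy = clean + rng.normal(scale=0.002, size=clean.shape)
    P_hat = dlt(world, noisy)
    errors.append(np.mean(np.linalg.norm(project(P_hat, world) - clean, axis=1)))
```

    Sweeping the noise scale and inspecting the distribution of the estimated parameters (rather than just the reprojection error) is the essence of the robustness study.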

  13. LiDAR-IMU Time Delay Calibration Based on Iterative Closest Point and Iterated Sigma Point Kalman Filter

    PubMed Central

    Liu, Wanli

    2017-01-01

    The time delay calibration between Light Detection and Ranging (LiDAR) and Inertial Measurement Units (IMUs) is an essential prerequisite for its applications. However, the correspondences between LiDAR and IMU measurements are usually unknown, and thus cannot be computed directly for the time delay calibration. In order to solve the problem of LiDAR-IMU time delay calibration, this paper presents a fusion method based on iterative closest point (ICP) and iterated sigma point Kalman filter (ISPKF), which combines the advantages of ICP and ISPKF. The ICP algorithm can precisely determine the unknown transformation between LiDAR-IMU; and the ISPKF algorithm can optimally estimate the time delay calibration parameters. First of all, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement model and time delay error model of LiDAR and IMU are established. Third, the methodology of the ICP and ISPKF procedure is presented for LiDAR-IMU time delay calibration. Experimental results are presented that validate the proposed method and demonstrate the time delay error can be accurately calibrated. PMID:28282897

  14. Gradient-based model calibration with proxy-model assistance

    NASA Astrophysics Data System (ADS)

    Burrows, Wesley; Doherty, John

    2016-02-01

    Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on the calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where a lack of integrity in finite-difference derivative calculations would otherwise have impeded it. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
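
    The division of labor described above (Jacobian from the cheap proxy, residuals from the original model) can be sketched as a Gauss-Newton loop. Both model functions here are invented two-parameter toys standing in for a long-running simulator and its fitted analytic surrogate:

```python
import numpy as np

# "original" model stands in for an expensive simulator; the proxy is a cheap
# analytic approximation linking the same outputs to the same parameters
def original_model(p):
    return np.array([np.exp(0.5 * p[0]) + p[1], p[0] * p[1], p[1] ** 2])

def proxy_model(p):
    return np.array([1.0 + 0.5 * p[0] + p[1], p[0] * p[1], p[1] ** 2])

def jacobian(f, p, h=1e-6):
    """Central finite-difference Jacobian, populated from the cheap proxy only."""
    J = np.empty((f(p).size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = h
        J[:, j] = (f(p + dp) - f(p - dp)) / (2.0 * h)
    return J

obs = original_model(np.array([0.3, 1.2]))  # synthetic "observations"
p = np.array([0.0, 1.0])                    # initial parameter guess
for _ in range(20):
    J = jacobian(proxy_model, p)            # derivatives: proxy runs only
    r = obs - original_model(p)             # residuals: original model runs
    p = p + np.linalg.lstsq(J, r, rcond=None)[0]
```

    Because the residual is always evaluated with the original model, the iteration converges to the true calibration even though the gradients are only approximate; an imperfect Jacobian merely slows convergence rather than biasing the answer.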

  15. Standing on the shoulders of giants: improving medical image segmentation via bias correction.

    PubMed

    Wang, Hongzhi; Das, Sandhitsu; Pluta, John; Craige, Caryne; Altinay, Murat; Avants, Brian; Weiner, Michael; Mueller, Susanne; Yushkevich, Paul

    2010-01-01

    We propose a simple strategy to improve automatic medical image segmentation. The key idea is that without deep understanding of a segmentation method, we can still improve its performance by directly calibrating its results with respect to manual segmentation. We formulate the calibration process as a bias correction problem, which is addressed by machine learning using training data. We apply this methodology on three segmentation problems/methods and show significant improvements for all of them.
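
    The calibration-as-bias-correction idea can be sketched with a one-dimensional stand-in: learn a regression from the automatic method's output to the manual reference on training data, then apply it to new cases. The data, bias form, and use of a linear corrector are all illustrative assumptions; the paper's actual correction operates on segmentations, not scalar volumes:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
# hypothetical training data: structure volumes from an automatic method
# versus manual (reference) segmentation
manual = rng.uniform(2.0, 5.0, size=50)
automatic = 0.8 * manual + 0.6 + rng.normal(scale=0.05, size=50)  # biased

# learn the bias as a regression from automatic output to manual reference
corrector = LinearRegression().fit(automatic.reshape(-1, 1), manual)

# correct new automatic results without knowing anything about the method
new_auto = np.array([[3.1], [4.2]])
corrected = corrector.predict(new_auto)
```

    The appeal is exactly what the abstract states: no understanding of the underlying segmentation method is needed, only paired training examples of its output and the manual ground truth.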

  16. A Study of the Vertical Component of Ocean Floor Vibrations in Two Geographical Chokepoints

    DTIC Science & Technology

    2017-05-30

    Reissued 30 May 2017 with Second Reader’s non-NPS affiliation added to title page. …choke points were considered to be a good representation of where these experimental bottom-mounted sensors might be located should they be built… calibrated data, and the methodology used to get the calibrated data is discussed in detail. The results showed that one OBS out of the four was highly…

  17. Relocation of Wyoming mine production blasts using calibration explosions

    USGS Publications Warehouse

    Finn, Carol A.; Kraft, Gordon D.; Sibol, Matthew S.; Jones, Ronald L.; Pulaski, Mark E.

    2001-01-01

    Given a set of well-recorded calibration events, it appears that the JHD methodology is a viable technique for improving the locational accuracy of future small events where the location depends on arrival times from predominantly local and/or regional stations. In this specific case, the International Association of Seismology and Physics of the Earth’s Interior (IASPEI) travel-time tables, coupled with JHD-derived travel-time corrections, may obviate the need for an accurately known regional velocity structure in the Powder River Basin region.

  18. Thickness measurement of nontransparent free films by double-side white-light interferometry: Calibration and experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poilane, C.; Sandoz, P.; Departement d'Optique PM Duffieux, Institut FEMTO-ST, UMR CNRS 6174, Universite de Franche-Comte, 25030 Besancon, Cedex

    2006-05-15

    A double-side optical profilometer based on white-light interferometry was developed for thickness measurement of nontransparent films. The profile of the sample is measured simultaneously on both sides of the film. The resulting data allow the computation of the roughness, the flatness and the parallelism of the sides of the film, and the average thickness of the film. The key point is the apparatus calibration, i.e., the accurate determination of the distance between the reference mirrors of the complementary interferometers. Specific samples were processed for that calibration. The system is adaptable to various thickness scales as long as calibration can be made accurately. A thickness accuracy better than 30 nm for films thinner than 200 μm is reported with the experimental material used. In this article, we present the principle of the method as well as the calibration methodology. Limitations and accuracy of the method are discussed. Experimental results are presented.

  19. Case-based Reasoning for Automotive Engine Performance Tune-up

    NASA Astrophysics Data System (ADS)

    Vong, C. M.; Huang, H.; Wong, P. K.

    2010-05-01

    The automotive engine performance tune-up is greatly affected by the calibration of its electronic control unit (ECU). ECU calibration is traditionally done by a trial-and-error method. This traditional method consumes a large amount of time and money because of the large number of dynamometer tests required. To resolve this problem, case-based reasoning (CBR) is employed, so that an existing and effective ECU setup can be adapted to fit another similar class of engines. The adaptation procedure is done through a more sophisticated step called case-based adaptation (CBA) [1, 2]. CBA is an effective knowledge management tool, which can interactively learn the expert adaptation knowledge. The paper briefly reviews the methodologies of CBR and CBA. Then the application to ECU calibration is described via a case study. With CBR and CBA, the efficiency of calibrating an ECU can be enhanced. A prototype system has also been developed to verify the usefulness of CBR in ECU calibration.

  20. Calibration of a stochastic health evolution model using NHIS data

    NASA Astrophysics Data System (ADS)

    Gupta, Aparna; Li, Zhisheng

    2011-10-01

    This paper presents and calibrates an individual's stochastic health evolution model. In this health evolution model, the uncertainty of health incidents is described by a stochastic process with a finite number of possible outcomes. We construct a comprehensive health status index (HSI) to describe an individual's health status, as well as a health risk factor system (RFS) to classify individuals into different risk groups. Based on the maximum likelihood estimation (MLE) method and the method of nonlinear least squares fitting, model calibration is formulated in terms of two mixed-integer nonlinear optimization problems. Using the National Health Interview Survey (NHIS) data, the model is calibrated for specific risk groups. Longitudinal data from the Health and Retirement Study (HRS) is used to validate the calibrated model, which displays good validation properties. The end goal of this paper is to provide a model and methodology, whose output can serve as a crucial component of decision support for strategic planning of health related financing and risk management.

  1. Machine tools error characterization and compensation by on-line measurement of artifact

    NASA Astrophysics Data System (ADS)

    Wahid Khan, Abdul; Chen, Wuyi; Wu, Lili

    2009-11-01

    Most manufacturing machine tools are utilized for mass production or batch production with high accuracy under a deterministic manufacturing principle. Volumetric accuracy of machine tools depends on the positional accuracy of the cutting tool, probe or end effector relative to the workpiece in the workspace volume. In this research paper, a methodology is presented for volumetric calibration of machine tools by on-line measurement of an artifact or an object of a similar type. The machine tool geometric error characterization was carried out using a standard or an artifact having geometry similar to the mass production or batch production product. The artifact was measured at an arbitrary position in the volumetric workspace with a calibrated Renishaw touch trigger probe system. Positional errors were stored in a computer for compensation purposes, so that the manufacturing batch could subsequently be run through compensated codes. This methodology proved quite effective for manufacturing high-precision components with improved dimensional accuracy and reliability. Calibration by on-line measurement offers the advantage of improving the manufacturing process through the deterministic manufacturing principle and was found to be efficient and economical, although it is limited to the workspace or envelope surface of the measured artifact's geometry or profile.

  2. Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.

    PubMed

    Samoli, Evangelia; Butland, Barbara K

    2017-12-01

    Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parameter bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be later applied, while methodological advances are needed under the multi-pollutants setting.
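
    Regression calibration, the first of the correction methods listed above, is simple enough to sketch end-to-end. This is a synthetic illustration of the attenuation effect and its correction; the exposure model, noise levels, and validation-subset design are invented:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000
X = rng.normal(10.0, 2.0, n)                 # true personal exposure
W = X + rng.normal(0.0, 1.5, n)              # error-prone modeled exposure
y = 1.0 + 0.3 * X + rng.normal(0.0, 0.5, n)  # health outcome, true slope 0.3

# naive regression on the mismeasured exposure attenuates the slope
naive = np.polyfit(W, y, 1)[0]

# regression calibration: on a validation subset where the true exposure is
# known, predict X from W, then use the calibrated exposure E[X|W] throughout
val = slice(0, 500)
a, b = np.polyfit(W[val], X[val], 1)
X_cal = a * W + b
corrected = np.polyfit(X_cal, y, 1)[0]
```

    The sketch also makes the review's closing point concrete: without the validation subset (true exposures for some subjects) the calibration regression cannot be fitted, which is precisely why the absence of exposure validation data limits the method's uptake.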

  3. Static and (quasi)dynamic calibration of stroboscopic scanning white light interferometer

    NASA Astrophysics Data System (ADS)

    Seppä, Jeremias; Kassamakov, Ivan; Nolvi, Anton; Heikkinen, Ville; Paulin, Tor; Lassila, Antti; Hao, Ling; Hæggström, Edward

    2013-04-01

    A scanning white light interferometer can characterize out of plane features and motion in M(N)EMS devices. Like any other form and displacement measuring instrument, the scanning interferometer results should be linked to the metre definition to be comparable and unambiguous. Traceability is built up by careful error characterization and calibration of the interferometer. The main challenge in this calibration is to have a reference device producing accurate and reproducible dynamic out-of-plane displacement when submitted to standard loads. We use a flat mirror attached to a piezoelectric transducer for static and (quasi)dynamic calibration of a stroboscopic scanning light interferometer. First we calibrated the piezo-scanned flexure guided transducer stage using a symmetric differential heterodyne laser interferometer developed at the Centre for Metrology and Accreditation (MIKES). The standard uncertainty of the piezo stage motion calibration was 3.0 nm. Then we used the piezo-stage as a transfer standard to calibrate our stroboscopic interferometer whose light source was pulsed at 200 Hz and 400 Hz with 0.5% duty cycle. We measured the static position and (quasi)dynamic motion of the attached mirror relative to a reference surface. This methodology permits calibrating the vertical scale of the stroboscopic scanning white light interferometer.

  4. The calibration and flight test performance of the space shuttle orbiter air data system

    NASA Technical Reports Server (NTRS)

    Dean, A. S.; Mena, A. L.

    1983-01-01

    The Space Shuttle air data system (ADS) is used by the guidance, navigation and control system (GN&C) to guide the vehicle to a safe landing. In addition, postflight aerodynamic analysis requires a precise knowledge of flight conditions. Since the orbiter is essentially an unpowered vehicle, the conventional methods of obtaining the ADS calibration were not available; therefore, the calibration was derived using a unique and extensive wind tunnel test program. This test program included subsonic tests with a 0.36-scale orbiter model, transonic and supersonic tests with a smaller 0.2-scale model, and numerous ADS probe-alone tests. The wind tunnel calibration was further refined with subsonic results from the approach and landing test (ALT) program, thus producing the ADS calibration for the orbital flight test (OFT) program. The calibration of the Space Shuttle ADS and its performance during flight are discussed in this paper. A brief description of the system is followed by a discussion of the calibration methodology, and then by a review of the wind tunnel and flight test programs. Finally, the flight results are presented, including an evaluation of the system performance for on-board systems use and a description of the calibration refinements developed to provide the best possible air data for postflight analysis work.

  5. Autofluorescence and diffuse reflectance patterns in cervical spectroscopy

    NASA Astrophysics Data System (ADS)

    Marin, Nena Maribel

    Fluorescence and diffuse reflectance spectroscopy are two new optical technologies, which have shown promise to aid in the real time, non-invasive identification of cancers and precancers. Spectral patterns carry a fingerprint of scattering, absorption and fluorescence properties in tissue. Scattering, absorption and fluorescence in tissue are directly affected by biological features that are diagnostically significant, such as nuclear size, micro-vessel density, volume fraction of collagen fibers, tissue oxygenation and cell metabolism. Thus, analysis of spectral patterns can unlock a wealth of information directly related with the onset and progression of disease. Data from a Phase II clinical trial to assess the technical efficacy of fluorescence and diffuse reflectance spectroscopy acquired from 850 women at three clinical locations with two research grade optical devices is calibrated and analyzed. Tools to process and standardize spectra so that data from multiple spectrometers can be combined and analyzed are presented. Methodologies for calibration and quality assurance of optical systems are established to simplify design issues and ensure validity of data for future clinical trials. Empirically based algorithms, using multivariate statistical approaches are applied to spectra and evaluated as a clinical diagnostic tool. Physically based algorithms, using mathematical models of light propagation in tissue are presented. The presented mathematical model combines a diffusion theory in P3 approximation reflectance model and a 2-layer fluorescence model using exponential attenuation and diffusion theory. The resulting adjoint fluorescence and reflectance model extracts twelve optical properties characterizing fluorescence efficiency of cervical epithelium and stroma fluorophores, stromal hemoglobin and collagen absorption, oxygen saturation, and stromal scattering strength and shape. 
    Validation with Monte Carlo simulations shows that the adjoint-model-extracted optical properties of the epithelium and the stroma can be estimated accurately. The adjoint model is applied to 926 clinical measurements from 503 patients. Mean values of the extracted optical properties were shown to characterize the biological changes associated with dysplastic progression. Finally, penalized logistic regression algorithms are applied to discriminate dysplastic stages in tissue based on the extracted optical features. This work provides understandable and interpretable information regarding the predictive and generalization ability of optical spectroscopy for neoplastic changes using a minimum subset of optical measurements. Ultimately these methodologies would facilitate the transfer of these optical technologies into clinical practice.

  6. Global Space-Based Inter-Calibration System Reflective Solar Calibration Reference: From Aqua MODIS to S-NPP VIIRS

    NASA Technical Reports Server (NTRS)

    Xiong, Xiaoxiong; Angal, Amit; Butler, James; Cao, Changyong; Doelling, Daivd; Wu, Aisheng; Wu, Xiangqian

    2016-01-01

    MODIS has operated successfully on-board NASA's EOS Terra and Aqua spacecraft for more than 16 and 14 years, respectively. The MODIS instrument was designed with stringent calibration requirements and comprehensive on-board calibration capability. In the reflective solar spectral region, Aqua MODIS has performed better than Terra MODIS and, therefore, has been chosen by the Global Space-based Inter-Calibration System (GSICS) operational community as the calibration reference sensor for cross-sensor calibration and calibration inter-comparisons. For the same reason, it has also been used by a number of earth observing sensors as their calibration reference. Considering that Aqua MODIS has already operated for nearly 14 years, it is essential to transfer its calibration to a follow-on reference sensor with a similar calibration capability and stable performance. VIIRS is a follow-on instrument to MODIS and has many design features similar to MODIS, including its on-board calibrators (OBC). As a result, VIIRS is an ideal candidate to replace MODIS as the future GSICS reference sensor. Since launch, the S-NPP VIIRS has operated for more than 4 years, and its overall performance has been extensively characterized and demonstrated to meet its design requirements. This paper provides an overview of Aqua MODIS and S-NPP VIIRS reflective solar bands (RSB) calibration methodologies and strategies, traceability, and their on-orbit performance. It describes and illustrates different methods and approaches that can be used to facilitate the calibration reference transfer, including the use of desert and Antarctic sites, deep convective clouds (DCC), and lunar observations.

  7. Calibration Plans for the Multi-angle Imaging SpectroRadiometer (MISR)

    NASA Astrophysics Data System (ADS)

    Bruegge, C. J.; Duval, V. G.; Chrien, N. L.; Diner, D. J.

    1993-01-01

    The EOS Multi-angle Imaging SpectroRadiometer (MISR) will study the ecology and climate of the Earth through acquisition of global multi-angle imagery. The MISR employs nine discrete cameras, each a push-broom imager. Of these, four point forward, four point aft and one views the nadir. Absolute radiometric calibration will be obtained pre-flight using high quantum efficiency (HQE) detectors and an integrating sphere source. After launch, instrument calibration will be provided using HQE detectors in conjunction with deployable diffuse calibration panels. The panels will be deployed at time intervals of one month and used to direct sunlight into the cameras, filling their fields-of-view and providing through-the-optics calibration. Additional techniques will be utilized to reduce systematic errors, and provide continuity as the methodology changes with time. For example, radiation-resistant photodiodes will also be used to monitor panel radiant exitance. These data will be acquired throughout the five-year mission, to maintain calibration in the latter years when it is expected that the HQE diodes will have degraded. During the mission, it is planned that the MISR will conduct semi-annual ground calibration campaigns, utilizing field measurements and higher resolution sensors (aboard aircraft or in-orbit platforms) to provide a check of the on-board hardware. These ground calibration campaigns are limited in number, but are believed to be the key to the long-term maintenance of MISR radiometric calibration.

  8. Uncertainty Evaluation of Computational Model Used to Support the Integrated Powerhead Demonstration Project

    NASA Technical Reports Server (NTRS)

    Steele, W. G.; Molder, K. J.; Hudson, S. T.; Vadasy, K. V.; Rieder, P. T.; Giel, T.

    2005-01-01

    NASA and the U.S. Air Force are working on a joint project to develop a new hydrogen-fueled, full-flow, staged combustion rocket engine. The initial testing and modeling work for the Integrated Powerhead Demonstrator (IPD) project is being performed by NASA Marshall and Stennis Space Centers. A key factor in the testing of this engine is the ability to predict and measure the transient fluid flow during the engine start and shutdown phases of operation. A model built by NASA Marshall in the ROCket Engine Transient Simulation (ROCETS) program is used to predict transient engine fluid flows. The model is initially calibrated to data from previous tests on the Stennis E1 test stand and is then used to predict the next run. Data from that run can in turn be used to recalibrate the model, providing a tool to guide the test program in incremental steps and reduce the risk to the prototype engine. In this paper, this type of model is defined as a calibrated model. This paper proposes a method to estimate the uncertainty of a model calibrated to a set of experimental test data; the method is similar to that used in the calibration of experimental instrumentation. For the IPD example used in this paper, the model uncertainty is determined for both LOX and LH flow rates using previous data. The model is then shown to predict another similar test run within the uncertainty bounds. The paper summarizes the uncertainty methodology when a model is continually recalibrated with new test data. The methodology is general and can be applied to other calibrated models.
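
    The calibrated-model uncertainty idea parallels instrument calibration: characterize the residuals between model predictions and test data, then check whether a new run falls inside the resulting bounds. A minimal sketch, with a coverage factor k = 2 assumed for illustration (the paper's exact statistics are not reproduced here):

```python
import math

def calibration_uncertainty(measured, predicted, coverage_k=2.0):
    """Estimate a calibrated model's bias and expanded uncertainty from
    residuals against calibration test data, analogous to the way an
    instrument calibration is characterized. Illustrative only."""
    residuals = [m - p for m, p in zip(measured, predicted)]
    n = len(residuals)
    bias = sum(residuals) / n
    # sample standard deviation of the residuals about the bias
    s = math.sqrt(sum((r - bias) ** 2 for r in residuals) / (n - 1))
    return bias, coverage_k * s

def within_bounds(measured, predicted, bias, expanded_u):
    """Check whether a new run's data fall within the model's bounds."""
    return all(abs((m - p) - bias) <= expanded_u
               for m, p in zip(measured, predicted))
```

    Recalibrating with each new run simply recomputes the residual statistics over the growing dataset.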

  9. Multivariate analysis of organic acids in fermented food from reversed-phase high-performance liquid chromatography data.

    PubMed

    Mortera, Pablo; Zuljan, Federico A; Magni, Christian; Bortolato, Santiago A; Alarcón, Sergio H

    2018-02-01

    Multivariate calibration coupled to RP-HPLC with diode array detection (HPLC-DAD) was applied to the identification and quantitative evaluation of short-chain organic acids (malic, oxalic, formic, lactic, acetic, citric, pyruvic, succinic, tartaric, propionic and α-ketoglutaric) in fermented food. The goal of the present study was to achieve successful resolution of a system with the combined occurrence of strongly coeluting peaks, retention-time shifts among chromatograms, and unexpected compounds not included in the calibration step. Second-order HPLC-DAD data matrices were obtained in a short time (10 min) on a C18 column with a chromatographic system operating in isocratic mode (the mobile phase was 20 mmol L(-1) phosphate buffer at pH 2.20) at a flow-rate of 1.0 mL min(-1) at room temperature. Parallel factor analysis (PARAFAC) and unfolded partial least-squares combined with residual bilinearization (U-PLS/RBL) were the second-order calibration algorithms selected for data processing. The analytical performance was good, with limits of detection (LODs) for the acids ranging from 0.15 to 10.0 mmol L(-1) in the validation samples. The method was applied to the analysis of several dairy products (yoghurt, cultured milk and cheese) and wine. It was shown to be an effective means of determining and following acid contents in fermented food, characterized by reproducibility, high resolution, and a simple, rapid procedure without derivatization of the analytes. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. A scattering methodology for droplet sizing of e-cigarette aerosols.

    PubMed

    Pratte, Pascal; Cosandey, Stéphane; Goujon-Ginglinger, Catherine

    2016-10-01

    Knowledge of the droplet size distribution of inhalable aerosols is important for predicting aerosol deposition yield at various locations in the human respiratory tract. Optical methodologies are usually preferred over the multi-stage cascade impactor for high-throughput measurements of aerosol particle/droplet size distributions. The aim was to evaluate Laser Aerosol Spectrometer technology, using a polystyrene latex sphere (PSL) calibration curve, for the experimental determination of droplet size distributions in the diameter range typical of commercial e-cigarette aerosols (147-1361 nm). This calibration procedure was tested on a TSI Laser Aerosol Spectrometer (LAS) operating at a wavelength of 633 nm and assessed against model di-ethyl-hexyl-sebacat (DEHS) droplets and e-cigarette aerosols. The PSL size response was measured, and intra- and between-day standard deviations calculated. DEHS droplet sizes were underestimated by 15-20% by the LAS when the PSL calibration curve was used; however, the intra- and between-day relative standard deviations were < 3%. This bias is attributed to the index of refraction of the PSL calibration particles differing from that of the test aerosols. The 15-20% figure does not include the droplet evaporation component, which may reduce droplet size before a measurement is performed. Aerosol concentration was measured accurately, with a maximum uncertainty of 20%. Count median diameters and mass median aerodynamic diameters of selected e-cigarette aerosols ranged from 130 to 191 nm and from 225 to 293 nm, respectively, similar to published values. The LAS instrument can be used to measure e-cigarette aerosol droplet size distributions with a bias underestimating the expected value by 15-20% when a precise PSL calibration curve is used. Controlled variability of DEHS size measurements can be achieved with the LAS system; however, this method can only be applied to test aerosols having a refractive index close to that of the PSL particles used for calibration.
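
    If the multiplicative bias of the PSL-based sizing is known for a given test aerosol, a first-order correction can be applied to reported diameters. A minimal sketch assuming a constant fractional underestimate, which deliberately ignores the evaporation component noted above:

```python
def correct_diameter(measured_nm, fractional_underestimate):
    """Correct an LAS-reported droplet diameter for a known constant
    fractional underestimate (e.g. 0.15-0.20 for DEHS against a PSL
    calibration curve). Assumes a purely multiplicative bias, which is
    a simplification for illustration."""
    if not 0.0 <= fractional_underestimate < 1.0:
        raise ValueError("fraction must be in [0, 1)")
    return measured_nm / (1.0 - fractional_underestimate)
```

    For example, a reported 170 nm diameter with a 15% underestimate corrects to about 200 nm.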

  11. Calibration Testing of Network Tap Devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popovsky, Barbara; Chee, Brian; Frincke, Deborah A.

    2007-11-14

    Understanding the behavior of network forensic devices is important to support prosecutions of malicious conduct on computer networks as well as legal remedies for false accusations of network management negligence. Individuals who seek to establish the credibility of network forensic data must speak competently about how the data was gathered and the potential for data loss. Unfortunately, manufacturers rarely provide information about the performance of low-layer network devices at a level that will survive legal challenges. This paper proposes a first step toward an independent calibration standard by establishing a validation testing methodology for evaluating forensic taps against manufacturer specifications. The methodology and the theoretical analysis that led to its development are offered as a conceptual framework for developing a standard and to "operationalize" network forensic readiness. This paper also provides details of an exemplar test, testing environment, procedures and results.

  12. Noise and performance calibration study of a Mach 2.2 supersonic cruise aircraft

    NASA Technical Reports Server (NTRS)

    Mascitti, V. R.; Maglieri, D. J.

    1979-01-01

    The baseline configuration of a Mach 2.2 supersonic cruise concept employing a 1980-1985 technology level, dry turbojet, mechanically suppressed engine was calibrated to identify differences in noise levels and performance as determined by the methodology and ground rules used. In addition, economic and noise information is provided consistent with a previous study based on an advanced technology Mach 2.7 configuration, reported separately. Results indicate that the difference between NASA and manufacturer performance methodology is small. Resizing the aircraft to NASA ground rules results in negligible changes in takeoff noise levels (less than 1 EPNdB), but approach noise is reduced by 5.3 EPNdB as a result of increasing approach speed. For the power setting chosen, engine oversizing resulted in no reduction in traded noise. In terms of summated noise level, a 6 EPNdB reduction is realized for a 5% increase in total operating costs.

  13. Screen-printed electrode based electrochemical detector coupled with ionic liquid dispersive liquid-liquid microextraction and microvolume back-extraction for determination of mercury in water samples.

    PubMed

    Fernández, Elena; Vidal, Lorena; Martín-Yerga, Daniel; Blanco, María del Carmen; Canals, Antonio; Costa-García, Agustín

    2015-04-01

    A novel approach is presented whereby gold nanostructured screen-printed carbon electrodes (SPCnAuEs) are combined with in-situ ionic liquid formation dispersive liquid-liquid microextraction (in-situ IL-DLLME) and microvolume back-extraction for the determination of mercury in water samples. In-situ IL-DLLME is based on a simple metathesis reaction between a water-miscible IL and a salt to form a water-immiscible IL in the sample solution. The mercury complex with ammonium pyrrolidinedithiocarbamate is extracted from the sample solution into the water-immiscible IL formed in-situ. An ultrasound-assisted procedure is then employed to back-extract the mercury into 10 µL of a 4 M HCl aqueous solution, which is finally analyzed using SPCnAuEs. The sample preparation methodology was optimized using a multivariate optimization strategy. Under optimized conditions, a linear range between 0.5 and 10 µg L(-1) was obtained with a correlation coefficient of 0.997 for six calibration points. The limit of detection obtained was 0.2 µg L(-1), which is lower than the threshold values established by the Environmental Protection Agency and the European Union (2 µg L(-1) and 1 µg L(-1), respectively). The repeatability of the proposed method was evaluated at two spiking levels (3 and 10 µg L(-1)), and a coefficient of variation of 13% was obtained in both cases. The performance of the proposed methodology was evaluated on real-world water samples including tap water, bottled water, river water and industrial wastewater. Relative recoveries between 95% and 108% were obtained. Copyright © 2014 Elsevier B.V. All rights reserved.
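
    The figures of merit quoted above (calibration linearity and limit of detection) follow from a standard least-squares calibration line. A generic sketch of those calculations, not the authors' exact computation:

```python
def fit_line(x, y):
    """Ordinary least-squares calibration line; returns slope, intercept
    and the correlation coefficient r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx * syy) ** 0.5
    return slope, intercept, r

def limit_of_detection(blank_sd, slope, k=3.0):
    """Conventional LOD = k * s_blank / slope (k = 3 is customary)."""
    return k * blank_sd / slope
```

    With six calibration standards spanning the linear range, the same routine yields the slope, intercept, and r used to judge linearity.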

  14. Comparison of Two Methodologies for Calibrating Satellite Instruments in the Visible and Near Infrared

    NASA Technical Reports Server (NTRS)

    Barnes, Robert A.; Brown, Steven W.; Lykke, Keith R.; Guenther, Bruce; Xiong, Xiaoxiong (Jack); Butler, James J.

    2010-01-01

    Traditionally, satellite instruments that measure Earth-reflected solar radiation in the visible and near-infrared wavelength regions have been calibrated for radiance response in a two-step method. In the first step, the spectral response of the instrument is determined using a nearly monochromatic light source, such as a lamp-illuminated monochromator. Such sources only provide a relative spectral response (RSR) for the instrument, since they do not act as calibrated sources of light, nor do they typically fill the field-of-view of the instrument. In the second step, the instrument views a calibrated source of broadband light, such as a lamp-illuminated integrating sphere. In the traditional method, the RSR and the sphere spectral radiance are combined with the instrument's response to determine the absolute spectral radiance responsivity of the instrument. More recently, an absolute calibration system using widely tunable monochromatic laser systems has been developed. Using these sources, the absolute spectral responsivity (ASR) of an instrument can be determined on a wavelength-by-wavelength basis. From these monochromatic ASRs, the responses of the instrument bands to broadband radiance sources can be calculated directly, eliminating the need for calibrated broadband light sources such as integrating spheres. Here we describe the laser-based calibration and the traditional broadband source-based calibration of the NPP VIIRS sensor, and compare the derived calibration coefficients for the instrument. Finally, we evaluate the impact of the new calibration approach on the on-orbit performance of the sensor.
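
    The key advantage of the laser-based approach is that a band's response to any broadband source follows directly from the measured ASRs by spectral integration. A sketch of that band-averaging step using simple trapezoidal integration; the wavelength grid and values are illustrative, not VIIRS data:

```python
def band_averaged_radiance(wavelengths_nm, asr, radiance):
    """Band-averaged radiance computed directly from monochromatic
    absolute spectral responsivities (ASR) and a broadband source
    spectrum, via trapezoidal integration. Sketch only; operational
    processing applies instrument-specific weighting."""
    def trapz(y):
        return sum((y[i] + y[i + 1])
                   * (wavelengths_nm[i + 1] - wavelengths_nm[i]) / 2.0
                   for i in range(len(y) - 1))
    weighted = [a * L for a, L in zip(asr, radiance)]
    return trapz(weighted) / trapz(asr)
```

    A spectrally flat source should return its own radiance, which makes a convenient sanity check on the integration.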

  15. Drift-insensitive distributed calibration of probe microscope scanner in nanometer range: Virtual mode

    NASA Astrophysics Data System (ADS)

    Lapshin, Rostislav V.

    2016-08-01

    A method of distributed calibration of a probe microscope scanner is suggested. The main idea consists in a search for a net of local calibration coefficients (LCCs) in the process of automatic measurement of a standard surface, whereby each point of the movement space of the scanner can be characterized by a unique set of scale factors. Feature-oriented scanning (FOS) methodology is used as a basis for implementing the distributed calibration, permitting in situ exclusion of the negative influence of thermal drift, creep and hysteresis on the obtained results. Possessing the calibration database enables correcting in one procedure all the spatial systematic distortions caused by nonlinearity, nonorthogonality and spurious crosstalk couplings of the microscope scanner piezomanipulators. To provide high precision of spatial measurements in the nanometer range, the calibration is carried out using natural standards, the constants of the crystal lattice. One of the useful modes of the developed calibration method is a virtual mode. In the virtual mode, instead of measuring a real surface of the standard, the calibration program makes a surface image "measurement" of the standard obtained earlier by conventional raster scanning. The virtual mode permits simulation of the calibration process and detailed analysis of the raster distortions occurring in both conventional and counter surface scanning. Moreover, the mode allows estimation of the thermal drift and creep velocities acting during surface scanning. Virtual calibration makes possible automatic characterization of a surface by scanning probe microscopy (SPM).
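
    A distributed calibration replaces one global scale factor with a net of local calibration coefficients (LCCs) queried at each scanner position. One way such a lookup might work is sketched below with bilinear interpolation on a regular net; the data structure is an assumption for illustration, not the paper's implementation:

```python
import math

def local_coefficient(net, x, y):
    """Bilinearly interpolate a local calibration coefficient at scanner
    position (x, y) from a regular net of coefficients. `net` maps
    integer grid nodes (i, j) to a coefficient; hypothetical layout."""
    i0, j0 = math.floor(x), math.floor(y)
    fx, fy = x - i0, y - j0
    c00 = net[(i0, j0)]
    c10 = net[(i0 + 1, j0)]
    c01 = net[(i0, j0 + 1)]
    c11 = net[(i0 + 1, j0 + 1)]
    return (c00 * (1 - fx) * (1 - fy) + c10 * fx * (1 - fy)
            + c01 * (1 - fx) * fy + c11 * fx * fy)
```

    At a net node the stored coefficient is returned exactly; between nodes the scale factor varies smoothly, which is what corrects position-dependent piezo distortions.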

  16. Bundle Adjustment-Based Stability Analysis Method with a Case Study of a Dual Fluoroscopy Imaging System

    NASA Astrophysics Data System (ADS)

    Al-Durgham, K.; Lichti, D. D.; Detchev, I.; Kuntze, G.; Ronsky, J. L.

    2018-05-01

    A fundamental task in photogrammetry is the temporal stability analysis of a camera/imaging system's calibration parameters. This is essential to validate the repeatability of the parameters' estimation, to detect any behavioural changes in the camera/imaging system, and to ensure precise photogrammetric products. Many stability analysis methods exist in the photogrammetric literature; each has different methodological bases, advantages and disadvantages. This paper presents a simple and rigorous stability analysis method that can be straightforwardly implemented for a single camera or an imaging system with multiple cameras. The basic collinearity model is used to capture differences between two calibration datasets and to establish the stability analysis methodology. Geometric simulation is used as a tool to derive image and object space scenarios. Experiments were performed on real calibration datasets from a dual fluoroscopy (DF; X-ray-based) imaging system. The calibration data consisted of hundreds of images and thousands of image observations from six temporal points over a two-day period for a precise evaluation of the DF system stability. The stability of the DF system was found to be within a range of 0.01 to 0.66 mm in terms of 3D coordinate root-mean-square error (RMSE) for the single-camera analysis, and 0.07 to 0.19 mm for the dual-camera analysis. To the authors' best knowledge, this work is the first to address the topic of DF stability analysis.
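
    The stability figures above are RMSE values over matched 3D coordinates reconstructed from two calibration epochs. One plausible way to compute such a metric (the paper's exact formulation may differ) is the root of the mean squared 3D point-to-point distance:

```python
def coords_rmse(coords_a, coords_b):
    """RMSE between two sets of matched 3D object-space coordinates,
    a simple stability metric between two calibration epochs.
    Each argument is a sequence of (x, y, z) tuples."""
    if len(coords_a) != len(coords_b):
        raise ValueError("coordinate sets must be matched")
    sq_dists = [sum((a - b) ** 2 for a, b in zip(pa, pb))
                for pa, pb in zip(coords_a, coords_b)]
    return (sum(sq_dists) / len(sq_dists)) ** 0.5
```

    A small RMSE across epochs indicates the calibration parameters reproduce the same object space, i.e. the system is stable.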

  17. Error propagation of partial least squares for parameters optimization in NIR modeling.

    PubMed

    Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng

    2018-03-05

    A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open-source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters. Error propagation of the modeling parameters for water quantity in corn and geniposide quantity in Gardenia was presented in terms of both type I and type II error. For example, when variable importance in the projection (VIP), interval partial least squares (iPLS) and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, compared with synergy interval partial least squares (SiPLS), the error weight varied from 5% to 65%, 55% and 15%. The results demonstrated how, and to what extent, the different modeling parameters affect error propagation of PLS for parameter optimization in NIR modeling: the larger the error weight, the worse the model. Finally, our trials established a workable process for developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, it could provide significant guidance for the selection of modeling parameters for other multivariate calibration models. Copyright © 2017. Published by Elsevier B.V.
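
    The comparison of variable-selection algorithms can be illustrated as a relative error increase over the best-performing setting. The formula and numbers below are hypothetical illustrations of that comparison idea, not the paper's definition of "error weight" or its results:

```python
def relative_error_weights(rmse_by_setting):
    """For each modeling setting, a hypothetical 'error weight': the
    relative RMSE increase over the best setting. Illustrative only;
    the paper's error-propagation definition is not reproduced here."""
    best = min(rmse_by_setting.values())
    return {name: (rmse - best) / best
            for name, rmse in rmse_by_setting.items()}
```

    The best setting gets weight zero; the larger a setting's weight, the worse its model relative to the optimum.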

  18. Error propagation of partial least squares for parameters optimization in NIR modeling

    NASA Astrophysics Data System (ADS)

    Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng

    2018-03-01

    A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open-source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters. Error propagation of the modeling parameters for water quantity in corn and geniposide quantity in Gardenia was presented in terms of both type I and type II error. For example, when variable importance in the projection (VIP), interval partial least squares (iPLS) and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, compared with synergy interval partial least squares (SiPLS), the error weight varied from 5% to 65%, 55% and 15%. The results demonstrated how, and to what extent, the different modeling parameters affect error propagation of PLS for parameter optimization in NIR modeling: the larger the error weight, the worse the model. Finally, our trials established a workable process for developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, it could provide significant guidance for the selection of modeling parameters for other multivariate calibration models.

  19. Determination of persimmon leaf chloride contents using near-infrared spectroscopy (NIRS).

    PubMed

    de Paz, José Miguel; Visconti, Fernando; Chiaravalle, Mara; Quiñones, Ana

    2016-05-01

    Early diagnosis of specific chloride toxicity in persimmon trees requires the reliable and fast determination of the leaf chloride content, which is usually performed by means of a cumbersome, expensive and time-consuming wet analysis. In this study, an alternative methodology has been developed to determine chloride in persimmon leaves using near-infrared spectroscopy (NIRS) in combination with multivariate calibration techniques. Based on a training dataset of 134 samples, a predictive model was developed from their NIR spectral data using the partial least squares regression (PLSR) method. The best model was obtained with the first derivative of the apparent absorbance and just 10 latent components. In the subsequent external validation, carried out with 35 external samples, this model reached r(2) = 0.93, RMSE = 0.16% and RPD = 3.6, with a standard error of 0.026% and a bias of -0.05%. From these results, the model based on NIR spectral readings can be used to speed up the laboratory determination of chloride in persimmon leaves with only a modest loss of precision. The intermolecular interaction between chloride ions and the peptide bonds in leaf proteins through hydrogen bonding, i.e. N-H···Cl, explains why chloride can be determined from NIR spectra.
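
    The validation statistics reported above (r², RMSE, RPD) are standard and easy to reproduce from paired reference and predicted values. A generic sketch of those formulas, taking RPD as the standard deviation of the reference values divided by RMSE, which is a common convention:

```python
def validation_metrics(reference, predicted):
    """Compute r^2, RMSE and RPD for an external validation set.
    RPD = SD(reference) / RMSE; sketch of the usual chemometric
    definitions, not necessarily the authors' exact software."""
    n = len(reference)
    mean_ref = sum(reference) / n
    ss_tot = sum((r - mean_ref) ** 2 for r in reference)
    ss_res = sum((r - p) ** 2 for r, p in zip(reference, predicted))
    rmse = (ss_res / n) ** 0.5
    r2 = 1.0 - ss_res / ss_tot
    sd_ref = (ss_tot / (n - 1)) ** 0.5
    rpd = sd_ref / rmse if rmse > 0 else float("inf")
    return r2, rmse, rpd
```

    An RPD above 3, as reported here, is conventionally taken to indicate a model adequate for quantitative screening.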

  20. Automated Mounting Bias Calibration for Airborne LIDAR System

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Jiang, W.; Jiang, S.

    2012-07-01

    Mounting bias is the major error source of an airborne LIDAR system. In this paper, an automated calibration method for estimating LIDAR system mounting parameters is introduced. The LIDAR direct geo-referencing model is used to calculate systematic errors. Because LIDAR footprints are discretely sampled, truly corresponding laser points rarely exist between different strips, so the traditional corresponding-point methodology does not apply well to LIDAR strip registration. We propose a Virtual Corresponding Point Model (VCPM) to resolve the correspondence problem among discrete laser points. Each VCPM contains a corresponding point and three real laser footprints, and two rules are defined to calculate the tie point coordinate from the real laser footprints. The Scale Invariant Feature Transform (SIFT) is used to extract corresponding points in LIDAR strips, and the automatic workflow of LIDAR system calibration based on the VCPM is described in detail. Practical examples illustrate the feasibility and effectiveness of the proposed calibration method.
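
    One simple rule for deriving a virtual tie point from three real footprints is to interpolate on the plane they define. The sketch below is a guess at the kind of geometric rule such a model might use, not the two rules actually defined in the paper:

```python
def plane_height(p1, p2, p3, x, y):
    """Height of a virtual point at planimetric position (x, y),
    interpolated on the plane through three real laser footprints.
    Hypothetical tie-point rule for illustration."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = p1, p2, p3
    # plane normal = (p2 - p1) x (p3 - p1)
    ux, uy, uz = x2 - x1, y2 - y1, z2 - z1
    vx, vy, vz = x3 - x1, y3 - y1, z3 - z1
    nx = uy * vz - uz * vy
    ny = uz * vx - ux * vz
    nz = ux * vy - uy * vx
    if nz == 0:
        raise ValueError("footprints are degenerate (collinear or vertical)")
    return z1 - (nx * (x - x1) + ny * (y - y1)) / nz
```

    With such a rule, a tie point seen in two strips can be compared even though no identical laser footprint exists in both.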

  1. Scalable methodology for large scale building energy improvement: Relevance of calibration in model-based retrofit analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heo, Yeonsook; Augenbroe, Godfried; Graziano, Diane

    2015-05-01

    The increasing interest in retrofitting of existing buildings is motivated by the need to make a major contribution to enhancing building energy efficiency and reducing energy consumption and CO2 emission by the built environment. This paper examines the relevance of calibration in model-based analysis to support decision-making for energy and carbon efficiency retrofits of individual buildings and portfolios of buildings. The authors formulate a set of real retrofit decision-making situations and evaluate the role of calibration by using a case study that compares predictions and decisions from an uncalibrated model with those of a calibrated model. The case study illustrates both the mechanics and outcomes of a practical alternative to the expert- and time-intense application of dynamic energy simulation models for large-scale retrofit decision-making under uncertainty.

  2. High-Dimensional Sparse Factor Modeling: Applications in Gene Expression Genomics

    PubMed Central

    Carvalho, Carlos M.; Chang, Jeffrey; Lucas, Joseph E.; Nevins, Joseph R.; Wang, Quanli; West, Mike

    2010-01-01

    We describe studies in molecular profiling and biological pathway analysis that use sparse latent factor and regression models for microarray gene expression data. We discuss breast cancer applications and key aspects of the modeling and computational methodology. Our case studies aim to investigate and characterize heterogeneity of structure related to specific oncogenic pathways, as well as links between aggregate patterns in gene expression profiles and clinical biomarkers. Based on the metaphor of statistically derived “factors” as representing biological “subpathway” structure, we explore the decomposition of fitted sparse factor models into pathway subcomponents and investigate how these components overlay multiple aspects of known biological activity. Our methodology is based on sparsity modeling of multivariate regression, ANOVA, and latent factor models, as well as a class of models that combines all components. Hierarchical sparsity priors address questions of dimension reduction and multiple comparisons, as well as scalability of the methodology. The models include practically relevant non-Gaussian/nonparametric components for latent structure, underlying often quite complex non-Gaussianity in multivariate expression patterns. Model search and fitting are addressed through stochastic simulation and evolutionary stochastic search methods that are exemplified in the oncogenic pathway studies. Supplementary supporting material provides more details of the applications, as well as examples of the use of freely available software tools for implementing the methodology. PMID:21218139

  3. Metrology: Calibration and measurement processes guidelines

    NASA Technical Reports Server (NTRS)

    Castrup, Howard T.; Eicke, Woodward G.; Hayes, Jerry L.; Mark, Alexander; Martin, Robert E.; Taylor, James L.

    1994-01-01

    The guide is intended as a resource to aid engineers and systems contractors in the design, implementation, and operation of metrology, calibration, and measurement systems, and to assist NASA personnel in the uniform evaluation of such systems supplied or operated by contractors. Methodologies and techniques acceptable in fulfilling metrology quality requirements for NASA programs are outlined. The measurement process is covered from a high level through more detailed discussions of key elements within the process. Emphasis is given to the flowdown of project requirements to measurement system requirements, then through the activities that will provide measurements with defined quality. In addition, innovations and techniques for error analysis, development of statistical measurement process control, optimization of calibration recall systems, and evaluation of measurement uncertainty are presented.

  4. Use of commercial off-the-shelf digital cameras for scientific data acquisition and scene-specific color calibration

    PubMed Central

    Akkaynak, Derya; Treibitz, Tali; Xiao, Bei; Gürkan, Umut A.; Allen, Justine J.; Demirci, Utkan; Hanlon, Roger T.

    2014-01-01

    Commercial off-the-shelf digital cameras are inexpensive and easy-to-use instruments that can be used for quantitative scientific data acquisition if images are captured in raw format and processed so that they maintain a linear relationship with scene radiance. Here we describe the image-processing steps required for consistent data acquisition with color cameras. In addition, we present a method for scene-specific color calibration that increases the accuracy of color capture when a scene contains colors that are not well represented in the gamut of a standard color-calibration target. We demonstrate applications of the proposed methodology in the fields of biomedical engineering, artwork photography, perception science, marine biology, and underwater imaging. PMID:24562030
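
    One elementary scene-specific step is to derive per-channel gains from a neutral (gray) patch measured in the scene itself, applied to black-level-subtracted linear raw values. A minimal sketch of that idea; the paper's full pipeline and color-target handling are more involved:

```python
def scene_white_balance(rgb_pixels, gray_patch_rgb):
    """Per-channel gains chosen so a neutral patch measured in the scene
    maps to equal R, G, B values, preserving linearity with radiance.
    Assumes black-level-subtracted linear raw input; illustrative only."""
    target = sum(gray_patch_rgb) / 3.0
    gains = [target / c for c in gray_patch_rgb]
    return [[p[0] * gains[0], p[1] * gains[1], p[2] * gains[2]]
            for p in rgb_pixels]
```

    Because the gains come from the scene itself, colors outside the gamut of a standard calibration target are still balanced consistently with the scene illuminant.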

  5. SPAGETTA, a Gridded Weather Generator: Calibration, Validation and its Use for Future Climate

    NASA Astrophysics Data System (ADS)

    Dubrovsky, Martin; Rotach, Mathias W.; Huth, Radan

    2017-04-01

    Spagetta is a new (started in 2016) stochastic multi-site multi-variate weather generator (WG). It can produce realistic synthetic daily (or monthly, or annual) weather series representing both present and future climate conditions at multiple sites (grids or stations irregularly distributed in space). The generator, whose model is based on Wilks' (1999) multi-site extension of the parametric (Richardson-type) single-site M&Rfi generator, may be run in two modes. In the first mode, it is run as a classical generator: it is first calibrated using weather data from multiple sites, and only then can it produce arbitrarily long synthetic time series mimicking the spatial and temporal structure of the calibration weather data. To generate weather series representing the future climate, the WG parameters are modified according to a climate change scenario, typically derived from GCM or RCM simulations. In the second mode, the user provides only basic information (not necessarily realistic) on the temporal and spatial auto-correlation structure of the surface weather variables and their mean annual cycle; the generator itself derives the parameters of the underlying autoregressive model, which produces the multi-site weather series. In the latter mode of operation, the user may prescribe a spatially varying trend, which is superimposed on the values produced by the generator; this feature has been implemented for use in developing the methodology for assessing the significance of trends in multi-site weather series (for more details see another EGU-2017 contribution: Huth and Dubrovsky, 2017, Evaluating collective significance of climatic trends: A comparison of methods on synthetic data; EGU2017-4993). This contribution focuses on the first (classical) mode.
    The poster will present (a) the model of the generator, (b) results of validation tests made in terms of spatial hot/cold/dry/wet spells, and (c) results of a pilot climate change impact experiment, in which (i) the WG parameters representing the spatial and temporal variability are modified using the climate change scenarios and then (ii) the effect on the above spatial validation indices, derived from the synthetic series produced by the modified WG, is analysed. In this experiment, the generator is calibrated using the E-OBS gridded daily weather data for several European regions, and the climate change scenarios are derived from a selected RCM simulation (taken from the CORDEX database).
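
    The core of a Wilks-type multi-site extension is turning independent random innovations into spatially correlated ones via a Cholesky factor of the inter-site correlation matrix. A self-contained sketch of that one step (the full generator adds temporal autocorrelation, precipitation occurrence, and annual cycles, none of which are shown here):

```python
def cholesky(a):
    """Lower-triangular Cholesky factor L of a symmetric positive-definite
    matrix, so that L L^T = a. Standard algorithm, small matrices only."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (a[i][i] - s) ** 0.5
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

def correlate(L, z):
    """Map independent standard normal innovations z (one per site)
    to spatially correlated innovations via L z."""
    n = len(z)
    return [sum(L[i][k] * z[k] for k in range(n)) for i in range(n)]
```

    Feeding independent normals through `correlate` site by site, day by day, reproduces the prescribed inter-site correlation in the synthetic series.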

  6. Method for predicting dry mechanical properties from wet wood and standing trees

    DOEpatents

    Meglen, Robert R.; Kelley, Stephen S.

    2003-08-12

    A method for determining the dry mechanical strength of a green wood comprising: illuminating a surface of the wood to be determined with light between 350-2,500 nm, the wood having a green moisture content; analyzing the surface using a spectrometric method, the method generating first spectral data; and using a multivariate analysis to predict the dry mechanical strength of the green wood when dry by comparing the first spectral data with a calibration model, the calibration model comprising second spectral data obtained by the spectrometric method from a reference wood having a green moisture content, the second spectral data correlated with a known mechanical strength analytical result obtained from the reference wood when dried to a dry moisture content.

  7. Dixie Valley Engineered Geothermal System Exploration Methodology Project, Baseline Conceptual Model Report

    DOE Data Explorer

    Iovenitti, Joe

    2014-01-02

    The Engineered Geothermal System (EGS) Exploration Methodology Project is developing an exploration approach for EGS through the integration of geoscientific data. The overall project area is 2500 km2, with the Calibration Area (Dixie Valley Geothermal Wellfield) being about 170 km2. The Final Scientific Report (FSR) is submitted in two parts (I and II). FSR Part I presents (1) an assessment of the readily available public domain data and some proprietary data provided by Terra-Gen Power, LLC, (2) a re-interpretation of these data as required, (3) an exploratory geostatistical data analysis, (4) the baseline geothermal conceptual model, and (5) the EGS favorability/trust mapping. The conceptual model presented applies to both the hydrothermal system and EGS in the Dixie Valley region. FSR Part II presents (1) 278 new gravity stations; (2) enhanced gravity-magnetic modeling; (3) 42 new ambient seismic noise survey stations; (4) an integration of the new seismic noise data with a regional seismic network; (5) a new methodology and approach to interpret these data; (6) a novel method to predict rock type and temperature based on the newly interpreted data; (7) 70 new magnetotelluric (MT) stations; (8) an integrated interpretation of the enhanced MT data set; (9) the results of a 308-station soil CO2 gas survey; (10) new conductive thermal modeling in the project area; (11) new convective modeling in the Calibration Area; (12) pseudo-convective modeling in the Calibration Area; (13) enhanced data implications and qualitative geoscience correlations at three scales: (a) Regional, (b) Project, and (c) Calibration Area; (14) quantitative geostatistical exploratory data analysis; and (15) responses to nine questions posed in the proposal for this investigation. Enhanced favorability/trust maps were not generated because there was not a sufficient amount of new, fully vetted (see below) rock type, temperature, and stress data. The enhanced seismic data did yield a new method to infer rock type and temperature; however, in the opinion of the Principal Investigator for this project, this new methodology needs to be tested and evaluated at other sites in the Basin and Range before it is used to generate the referenced maps. As in the baseline conceptual model, the enhanced findings can be applied to both the hydrothermal system and EGS in the Dixie Valley region.

  8. A methodology for investigating interdependencies between measured throughfall, meteorological variables and canopy structure on a small catchment.

    NASA Astrophysics Data System (ADS)

    Maurer, Thomas; Gustavos Trujillo Siliézar, Carlos; Oeser, Anne; Pohle, Ina; Hinz, Christoph

    2016-04-01

    In evolving initial landscapes, vegetation development depends on a variety of feedback effects. One of the less understood feedback loops is the interaction between throughfall and plant canopy development. The amount of throughfall is governed by the characteristics of the vegetation canopy, whereas vegetation pattern evolution may in turn depend on the spatio-temporal distribution of throughfall. Meteorological factors that may influence throughfall, while at the same time interacting with the canopy, include wind speed, wind direction and rainfall intensity. Our objective is to investigate how throughfall, vegetation canopy and meteorological variables interact in an exemplary eco-hydrological system in its initial development phase, in which the canopy is very heterogeneous and rapidly changing. For that purpose, we developed a methodological approach combining field methods, raster image analysis and multivariate statistics. The research area for this study is the Hühnerwasser ('Chicken Creek') catchment in Lower Lusatia, Brandenburg, Germany, where after eight years of succession the spatial distribution of plant species is highly heterogeneous, leading to increasingly differentiated throughfall patterns. The constructed 6-ha catchment offers ideal conditions for our study due to the rapidly changing vegetation structure and the availability of complementary monitoring data. Throughfall data were obtained from 50 tipping-bucket rain gauges arranged in two transects that cover the predominant vegetation types on the catchment (locust copses, dense sallow thorn bushes and reeds, base herbaceous and medium-rise small-reed vegetation, and open areas covered by moss and lichens), connected via a wireless sensor network. The spatial configuration of the vegetation canopy at each measurement site was described via digital image analysis of hemispheric photographs of the canopy using the ArcGIS Spatial Analyst, GapLight and ImageJ software.
Meteorological data from two on-site weather stations (wind direction, wind speed, air temperature, air humidity, insolation, soil temperature, precipitation) were provided by the 'Research Platform Chicken Creek' (https://www.tu-cottbus.de/projekte/en/oekosysteme/startseite.html). Data were combined, and multivariate statistical analyses (PCA, cluster analysis, regression trees) were conducted using the R software to (i) obtain statistical indices describing the relevant characteristics of the data and (ii) identify the determining factors for throughfall intensity. The methodology is currently being tested and results will be presented. Preliminary evaluation of the image-analysis approach showed only marginal systematic deviation among the results of the different software tools applied, which makes the developed workflow a viable tool for canopy characterization. Results from this study will have a broad spectrum of possible applications, for instance the development and calibration of rainfall interception models, incorporation into eco-hydrological models, or testing the fault tolerance of wireless rainfall sensor networks.

  9. Gridded Calibration of Ensemble Wind Vector Forecasts Using Ensemble Model Output Statistics

    NASA Astrophysics Data System (ADS)

    Lazarus, S. M.; Holman, B. P.; Splitt, M. E.

    2017-12-01

    A computationally efficient method is developed that performs gridded post-processing of ensemble wind vector forecasts. An expansive set of idealized WRF model simulations is generated to provide physically consistent high-resolution winds over a coastal domain characterized by an intricate land/water mask. Ensemble model output statistics (EMOS) is used to calibrate the ensemble wind vector forecasts at observation locations. The local EMOS predictive parameters (mean and variance) are then spread throughout the grid utilizing flow-dependent statistical relationships extracted from the downscaled WRF winds. Using data withdrawal and 28 east-central Florida stations, the method is applied to one year of 24 h wind forecasts from the Global Ensemble Forecast System (GEFS). Compared to the raw GEFS, the approach improves both deterministic and probabilistic forecast skill. Analysis of multivariate rank histograms indicates the post-processed forecasts are calibrated. Two downscaling case studies are presented: a quiescent easterly flow event and a frontal passage. Strengths and weaknesses of the approach are presented and discussed.
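The EMOS calibration step at a single observation site can be illustrated with a toy Gaussian model: the predictive distribution is N(a + b*ens_mean, c + d*ens_var). In this sketch a and b are fit by least squares and c and d by regressing squared residuals on ensemble variance, a moment-based stand-in for the CRPS minimization normally used in EMOS; all data are synthetic and the numbers hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic toy data: 500 forecast cases, 10-member ensemble with a bias
# and under-dispersion, plus matching observations (all values hypothetical).
n, m = 500, 10
truth = rng.normal(5.0, 2.0, n)
ens = truth[:, None] + 1.0 + rng.normal(0.0, 0.8, (n, m))  # biased, too sharp
obs = truth + rng.normal(0.0, 0.5, n)

ens_mean, ens_var = ens.mean(axis=1), ens.var(axis=1, ddof=1)

# Predictive distribution N(a + b*ens_mean, c + d*ens_var):
# a, b from least squares on the observations; c, d from regressing
# squared residuals on the ensemble variance.
A = np.column_stack([np.ones(n), ens_mean])
a, b = np.linalg.lstsq(A, obs, rcond=None)[0]
resid2 = (obs - (a + b * ens_mean)) ** 2
B = np.column_stack([np.ones(n), ens_var])
c, d = np.linalg.lstsq(B, resid2, rcond=None)[0]

print(round(a, 2), round(b, 2))  # the ensemble bias is absorbed into a and b
```

Gridded application would then spread a, b, c, d away from the stations using the flow-dependent relationships the abstract describes.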

  10. Two Analyte Calibration From The Transient Response Of Potentiometric Sensors Employed With The SIA Technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cartas, Raul; Mimendia, Aitor; Valle, Manel del

    2009-05-23

    Calibration models for multi-analyte electronic tongues have commonly been built using a set of sensors, at least one per analyte under study. Complex signals recorded with these systems are formed by the sensors' responses to the analytes of interest plus interferents, from which a multivariate response model is then developed. This work describes a data treatment method for the simultaneous quantification of two species in solution employing the signal from a single sensor. The approach takes advantage of the complex information recorded in one electrode's transient after sample insertion to build the calibration models for both analytes. The raw signal from the electrode was first processed by discrete wavelet transform to extract useful information and reduce its length, and then by artificial neural networks to fit a model. Two different potentiometric sensors were used as case studies to corroborate the effectiveness of the approach.
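The two-stage treatment above, wavelet compression of the transient followed by a regression model, can be sketched as follows. The Haar transform and the toy transients are illustrative only, and a plain least-squares map stands in for the artificial neural network used in the paper:

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.

    Returns approximation and detail coefficients; keeping only the
    approximation halves the signal length while preserving its shape.
    """
    s = np.asarray(signal, dtype=float)
    even, odd = s[0::2], s[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

rng = np.random.default_rng(1)

# Hypothetical transients: 64-point sensor responses whose amplitude and
# rise depend linearly on two analyte concentrations (toy surrogate data).
t = np.linspace(0, 1, 64)
conc = rng.uniform(0.1, 1.0, (80, 2))
signals = (conc[:, [0]] * np.exp(-3 * t) + conc[:, [1]] * (1 - np.exp(-5 * t))
           + rng.normal(0, 0.005, (80, 64)))

# Two DWT levels compress each 64-point transient to 16 coefficients.
feats = np.array([haar_dwt(haar_dwt(s)[0])[0] for s in signals])

# The paper fits an artificial neural network; a least-squares linear map
# is used here as a minimal stand-in for that regression step.
X = np.column_stack([np.ones(len(feats)), feats])
coef, *_ = np.linalg.lstsq(X, conc, rcond=None)
pred = X @ coef
print(np.max(np.abs(pred - conc)))  # small residual on this linear toy problem
```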

  11. Rapid analysis of glucose, fructose, sucrose, and maltose in honeys from different geographic regions using fourier transform infrared spectroscopy and multivariate analysis.

    PubMed

    Wang, Jun; Kliks, Michael M; Jun, Soojin; Jackson, Mel; Li, Qing X

    2010-03-01

    Quantitative analysis of glucose, fructose, sucrose, and maltose in honey samples of different geographic origins using Fourier transform infrared (FTIR) spectroscopy and chemometrics, namely partial least squares (PLS) regression and principal component regression, was studied. The calibration series consisted of 45 standard mixtures of glucose, fructose, sucrose, and maltose. There were distinct peak variations of all sugar mixtures in the spectral "fingerprint" region between 1500 and 800 cm(-1). The calibration model was successfully validated using 7 synthetic blend sets of sugars. The second-derivative PLS model showed the highest degree of prediction accuracy, with a highest R(2) value of 0.999. Along with canonical variate analysis, the calibration model was further validated by high-performance liquid chromatography measurements of commercial honey samples, demonstrating that FTIR can qualitatively and quantitatively determine the presence of glucose, fructose, sucrose, and maltose in honey samples from multiple regions.
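A minimal PLS1 (NIPALS) calibration of the kind used in such studies can be sketched on synthetic "spectra". The band positions, mixing weights, and noise level below are all hypothetical stand-ins for the sugar bands:

```python
import numpy as np

def pls1_nipals(X, y, n_comp):
    """Minimal PLS1 via NIPALS, returning regression coefficients for
    mean-centered data; a sketch of the multivariate calibration step."""
    Xc, yc = X - X.mean(0), y - y.mean()
    W, P, Q = [], [], []
    Xr, yr = Xc.copy(), yc.copy()
    for _ in range(n_comp):
        w = Xr.T @ yr                 # weight vector
        w /= np.linalg.norm(w)
        t = Xr @ w                    # scores
        p = Xr.T @ t / (t @ t)        # X loadings
        q = (yr @ t) / (t @ t)        # y loading
        Xr = Xr - np.outer(t, p)      # deflate
        yr = yr - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.inv(P.T @ W) @ Q
    return B, X.mean(0), y.mean()

rng = np.random.default_rng(2)

# Toy "spectra": linear mixtures of two overlapping Gaussian bands whose
# weights play the role of sugar concentrations (all values hypothetical).
wn = np.linspace(800, 1500, 200)
bands = np.exp(-0.5 * ((wn[None, :] - np.array([[1000.0], [1150.0]])) / 40) ** 2)
conc = rng.uniform(0, 1, (45, 2))
spectra = conc @ bands + rng.normal(0, 1e-3, (45, 200))

B, xm, ym = pls1_nipals(spectra, conc[:, 0], n_comp=2)
pred = (spectra - xm) @ B + ym
r2 = 1 - np.sum((pred - conc[:, 0]) ** 2) / np.sum((conc[:, 0] - conc[:, 0].mean()) ** 2)
print(round(r2, 3))
```

On real FTIR data one would of course assess r2 on an independent validation set, as the abstract's 7 blend sets do.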

  12. On using summary statistics from an external calibration sample to correct for covariate measurement error.

    PubMed

    Guo, Ying; Little, Roderick J; McConnell, Daniel S

    2012-01-01

    Covariate measurement error is common in epidemiologic studies. Current methods for correcting measurement error with information from external calibration samples are insufficient to provide valid adjusted inferences. We consider the problem of estimating the regression of an outcome Y on covariates X and Z, where Y and Z are observed, X is unobserved, but a variable W that measures X with error is observed. Information about measurement error is provided in an external calibration sample where data on X and W (but not Y and Z) are recorded. We describe a method that uses summary statistics from the calibration sample to create multiple imputations of the missing values of X in the regression sample, so that the regression coefficients of Y on X and Z and associated standard errors can be estimated using simple multiple imputation combining rules, yielding valid statistical inferences under the assumption of a multivariate normal distribution. The proposed method is shown by simulation to provide better inferences than existing methods, namely the naive method, classical calibration, and regression calibration, particularly for correction for bias and achieving nominal confidence levels. We also illustrate our method with an example using linear regression to examine the relation between serum reproductive hormone concentrations and bone mineral density loss in midlife women in the Michigan Bone Health and Metabolism Study. Existing methods fail to adjust appropriately for bias due to measurement error in the regression setting, particularly when measurement error is substantial. The proposed method corrects this deficiency.
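The imputation mechanics can be sketched with synthetic data. Note that this toy version imputes X from W alone (omitting Y, Z, and parameter-draw uncertainty), which attenuates the slope on X; that attenuation is precisely the kind of deficiency the paper's fully multivariate imputation corrects:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setup (all values hypothetical): X is the true covariate, W measures
# X with error, and Y depends on X and Z. X is unobserved in the main sample.
n_main, n_cal, M = 1000, 300, 20
X = rng.normal(0, 1, n_main); Z = rng.normal(0, 1, n_main)
W = X + rng.normal(0, 0.5, n_main)
Y = 1.0 + 2.0 * X + 0.5 * Z + rng.normal(0, 1, n_main)

# The external calibration sample records X and W only; we keep just its
# summary statistics (intercept, slope, residual SD), as in the paper's setting.
Xc = rng.normal(0, 1, n_cal)
Wc = Xc + rng.normal(0, 0.5, n_cal)
A = np.column_stack([np.ones(n_cal), Wc])
ab, res, *_ = np.linalg.lstsq(A, Xc, rcond=None)
sigma = np.sqrt(res[0] / (n_cal - 2))

# Multiple imputation: draw X | W from the calibration regression, fit
# Y ~ X + Z in each completed data set, pool the estimates (Rubin's rules
# for the point estimate; parameter-draw uncertainty omitted for brevity).
betas = []
for _ in range(M):
    Ximp = ab[0] + ab[1] * W + rng.normal(0, sigma, n_main)
    D = np.column_stack([np.ones(n_main), Ximp, Z])
    betas.append(np.linalg.lstsq(D, Y, rcond=None)[0])
pooled = np.mean(betas, axis=0)
print(np.round(pooled, 2))  # note the attenuated slope on X (true value is 2)
```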

  13. Prediction of valid acidity in intact apples with Fourier transform near infrared spectroscopy.

    PubMed

    Liu, Yan-De; Ying, Yi-Bin; Fu, Xia-Ping

    2005-03-01

    To develop nondestructive acidity prediction for intact Fuji apples, the potential of the Fourier transform near infrared (FT-NIR) method with fiber optics in interactance mode was investigated. Interactance in the 800 nm to 2619 nm region was measured for intact apples harvested from early to late maturity stages. Spectral data were analyzed by two multivariate calibration techniques: partial least squares (PLS) and principal component regression (PCR). A total of 120 Fuji apples were tested, and 80 of them were used to form a calibration data set. The influences of different data preprocessing and spectra treatments were also quantified. Calibration models based on smoothed spectra were slightly worse than those based on derivative spectra, and the best result was obtained when the segment length was 5 nm and the gap size was 10 points. Depending on data preprocessing and the PLS method, the best prediction model yielded a correlation coefficient of determination (r2) of 0.759, a low root mean square error of prediction (RMSEP) of 0.0677, and a low root mean square error of calibration (RMSEC) of 0.0562. The results indicated the feasibility of FT-NIR spectral analysis for predicting apple valid acidity in a nondestructive way.

  14. Prediction of valid acidity in intact apples with Fourier transform near infrared spectroscopy*

    PubMed Central

    Liu, Yan-de; Ying, Yi-bin; Fu, Xia-ping

    2005-01-01

    To develop nondestructive acidity prediction for intact Fuji apples, the potential of the Fourier transform near infrared (FT-NIR) method with fiber optics in interactance mode was investigated. Interactance in the 800 nm to 2619 nm region was measured for intact apples harvested from early to late maturity stages. Spectral data were analyzed by two multivariate calibration techniques: partial least squares (PLS) and principal component regression (PCR). A total of 120 Fuji apples were tested, and 80 of them were used to form a calibration data set. The influences of different data preprocessing and spectra treatments were also quantified. Calibration models based on smoothed spectra were slightly worse than those based on derivative spectra, and the best result was obtained when the segment length was 5 nm and the gap size was 10 points. Depending on data preprocessing and the PLS method, the best prediction model yielded a correlation coefficient of determination (r2) of 0.759, a low root mean square error of prediction (RMSEP) of 0.0677, and a low root mean square error of calibration (RMSEC) of 0.0562. The results indicated the feasibility of FT-NIR spectral analysis for predicting apple valid acidity in a nondestructive way. PMID:15682498

  15. On the development of a new methodology in sub-surface parameterisation on the calibration of groundwater models

    NASA Astrophysics Data System (ADS)

    Klaas, D. K. S. Y.; Imteaz, M. A.; Sudiayem, I.; Klaas, E. M. E.; Klaas, E. C. M.

    2017-10-01

    In groundwater modelling, robust parameterisation of sub-surface parameters is crucial to obtaining agreeable model performance. Pilot points are an alternative in the parameterisation step for correctly configuring the distribution of parameters in a model. However, the methodologies given by current studies are considered less practical for application to real catchment conditions. In this study, a practical approach using the geometric features of pilot points and the distribution of hydraulic gradient over the catchment area is proposed to efficiently configure the pilot point distribution in the calibration step of a groundwater model. A new pilot point distribution technique, the Head Zonation-based (HZB) technique, based on the hydraulic gradient distribution of groundwater flow, is presented. Seven models with seven zone ratios (1, 5, 10, 15, 20, 25 and 30) were constructed using the HZB technique on an eogenetic karst catchment on Rote Island, Indonesia, and their performances were assessed. This study also offers insights into the trade-off between restricting and maximising the number of pilot points, and a new methodology for selecting pilot point properties and distribution in the development of a physically-based groundwater model.

  16. Design Methodology for Magnetic Field-Based Soft Tri-Axis Tactile Sensors.

    PubMed

    Wang, Hongbo; de Boer, Greg; Kow, Junwai; Alazmani, Ali; Ghajari, Mazdak; Hewson, Robert; Culmer, Peter

    2016-08-24

    Tactile sensors are essential if robots are to safely interact with the external world and to dexterously manipulate objects. Current tactile sensors have limitations restricting their use, notably being too fragile or having limited performance. Magnetic field-based soft tactile sensors offer a potential improvement, being durable, low cost, accurate and high bandwidth, but they are relatively undeveloped because of the complexities involved in design and calibration. This paper presents a general design methodology for magnetic field-based three-axis soft tactile sensors, enabling researchers to easily develop specific tactile sensors for a variety of applications. All aspects (design, fabrication, calibration and evaluation) of the development of tri-axis soft tactile sensors are presented and discussed. A moving least squares approach is used to decouple and convert the magnetic field signal to force output to eliminate non-linearity and cross-talk effects. A case study of a tactile sensor prototype, MagOne, was developed. This achieved a resolution of 1.42 mN in normal force measurement (0.71 mN in shear force), good output repeatability and has a maximum hysteresis error of 3.4%. These results outperform comparable sensors reported previously, highlighting the efficacy of our methodology for sensor design.
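The moving least squares decoupling step can be sketched as a locally weighted linear fit from the three-axis field signal to the three-axis force. The signal model, Gaussian bandwidth, and calibration grid below are hypothetical, not MagOne's actual characteristics:

```python
import numpy as np

rng = np.random.default_rng(4)

def mls_predict(s_query, S_cal, F_cal, h=0.3):
    """Moving least squares: weighted linear fit around the query signal.

    Gaussian weights localize the fit, so a globally non-linear,
    cross-coupled signal-to-force map is handled piecewise-linearly.
    """
    w = np.exp(-np.sum((S_cal - s_query) ** 2, axis=1) / (2 * h * h))
    A = np.column_stack([np.ones(len(S_cal)), S_cal]) * w[:, None]
    b = F_cal * w[:, None]
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.concatenate([[1.0], s_query]) @ coef

# Toy calibration data (hypothetical): the 3-axis field signal is a smooth
# non-linear function of the 3-axis force, with cross-talk between axes.
F_cal = rng.uniform(-1, 1, (400, 3))

def field(F):
    return np.tanh(F + 0.2 * np.roll(F, 1, axis=-1)) + 0.05 * F ** 2

S_cal = field(F_cal)

# Decouple a query reading back into a force estimate.
F_true = np.array([0.3, -0.2, 0.5])
F_hat = mls_predict(field(F_true[None])[0], S_cal, F_cal)
print(np.round(F_hat, 2))
```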

  17. Design Methodology for Magnetic Field-Based Soft Tri-Axis Tactile Sensors

    PubMed Central

    Wang, Hongbo; de Boer, Greg; Kow, Junwai; Alazmani, Ali; Ghajari, Mazdak; Hewson, Robert; Culmer, Peter

    2016-01-01

    Tactile sensors are essential if robots are to safely interact with the external world and to dexterously manipulate objects. Current tactile sensors have limitations restricting their use, notably being too fragile or having limited performance. Magnetic field-based soft tactile sensors offer a potential improvement, being durable, low cost, accurate and high bandwidth, but they are relatively undeveloped because of the complexities involved in design and calibration. This paper presents a general design methodology for magnetic field-based three-axis soft tactile sensors, enabling researchers to easily develop specific tactile sensors for a variety of applications. All aspects (design, fabrication, calibration and evaluation) of the development of tri-axis soft tactile sensors are presented and discussed. A moving least squares approach is used to decouple and convert the magnetic field signal to force output to eliminate non-linearity and cross-talk effects. A case study of a tactile sensor prototype, MagOne, was developed. This achieved a resolution of 1.42 mN in normal force measurement (0.71 mN in shear force), good output repeatability and has a maximum hysteresis error of 3.4%. These results outperform comparable sensors reported previously, highlighting the efficacy of our methodology for sensor design. PMID:27563908

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Po-Feng; Tully, R. Brent; Jacobs, Bradley A.

    In this paper, we extend the use of the tip of the red giant branch (TRGB) method to near-infrared wavelengths from the previously used I-band, using the Hubble Space Telescope (HST) Wide Field Camera 3 (WFC3). Upon calibration of a color dependency of the TRGB magnitude, the IR TRGB yields a random uncertainty of ∼5% in relative distance. The IR TRGB methodology has an advantage over the previously used Advanced Camera for Surveys F606W and F814W filter set for galaxies that suffer from severe extinction. Using the IR TRGB methodology, we obtain distances toward three principal galaxies in the Maffei/IC 342 complex, which are located at low Galactic latitudes. New distance estimates using the TRGB method are 3.45 +0.13/-0.13 Mpc for IC 342, 3.37 +0.32/-0.23 Mpc for Maffei 1, and 3.52 +0.32/-0.30 Mpc for Maffei 2. The uncertainties are dominated by uncertain extinction, especially for Maffei 1 and Maffei 2. Our IR calibration demonstrates the viability of the TRGB methodology for observations with the James Webb Space Telescope.
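The TRGB detection underlying such distance estimates is, at heart, an edge-detection problem on the stellar luminosity function: star counts jump sharply at the tip magnitude. A toy sketch with a hypothetical tip magnitude and star counts (not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy luminosity function (hypothetical numbers): many RGB stars fainter
# than a tip at magnitude 24.0, plus sparse AGB contamination above it.
tip = 24.0
rgb = rng.uniform(tip, tip + 2.0, 4000)       # flat toy density below the tip
agb = rng.uniform(tip - 1.5, tip, 400)        # sparse stars brighter than tip
mags = np.concatenate([rgb, agb])

# Classic TRGB detection: histogram the magnitudes and locate the peak
# of a first-derivative edge filter, response[i] ~ counts[i+1] - counts[i-1].
bins = np.arange(22.0, 26.0, 0.05)
counts, edges = np.histogram(mags, bins=bins)
response = np.convolve(counts, [1, 0, -1], mode="same")
i = np.argmax(response)
tip_est = 0.5 * (edges[i] + edges[i + 1])     # center of the edge bin
print(round(tip_est, 2))
```

Real pipelines additionally smooth the luminosity function, apply color cuts, and correct the detected tip for the color dependency the abstract calibrates.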

  19. Optimal test selection for prediction uncertainty reduction

    DOE PAGES

    Mullins, Joshua; Mahadevan, Sankaran; Urbina, Angel

    2016-12-02

    Economic factors and experimental limitations often lead to sparse and/or imprecise data used for the calibration and validation of computational models. This paper addresses resource allocation for calibration and validation experiments, in order to maximize their effectiveness within given resource constraints. When observation data are used for model calibration, the quality of the inferred parameter descriptions is directly affected by the quality and quantity of the data. This paper characterizes parameter uncertainty within a probabilistic framework, which enables the uncertainty to be systematically reduced with additional data. The validation assessment is also uncertain in the presence of sparse and imprecise data; therefore, this paper proposes an approach for quantifying the resulting validation uncertainty. Since calibration and validation uncertainty affect the prediction of interest, the proposed framework explores the decision of cost versus importance of data in terms of the impact on the prediction uncertainty. Often, calibration and validation tests may be performed for different input scenarios, and this paper shows how the calibration and validation results from different conditions may be integrated into the prediction. Then, a constrained discrete optimization formulation that selects the number of tests of each type (calibration or validation at given input conditions) is proposed. Furthermore, the proposed test selection methodology is demonstrated on a microelectromechanical system (MEMS) example.

  20. Space-based infrared scanning sensor LOS determination and calibration using star observation

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Xu, Zhan; An, Wei; Deng, Xin-Pu; Yang, Jun-Gang

    2015-10-01

    This paper provides a novel methodology for removing sensor bias from a space-based infrared (IR) system (SBIRS) using stars detected in the background field of the sensor. A space-based IR system uses the line of sight (LOS) of a target for target location; LOS determination and calibration is thus a key precondition for accurate location and tracking of targets, and calibrating the LOS of a scanning sensor is one of the main difficulties. Subsequent changes in sensor bias are not taken into account in the conventional LOS determination and calibration process. Based on an analysis of the imaging process of the scanning sensor, a theoretical model for estimating bias angles from star observations is proposed: a process model for the bias angles and an observation model for the stars are established, an extended Kalman filter (EKF) estimates the bias angles, and the sensor LOS is then calibrated. Time-domain simulation results indicate that the proposed method determines and calibrates the sensor LOS with high precision and smooth performance. The timeliness and precision requirements of the target tracking process in a space-based IR tracking system can be met with the proposed algorithm.
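The bias-angle estimation loop can be sketched for the simplest case: constant bias angles observed through noisy star-position residuals, where the EKF of the paper reduces to a plain Kalman filter (the paper's observation model is non-linear; all numbers here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy problem (hypothetical numbers): two constant LOS bias angles, in
# milliradians, observed indirectly through noisy star-position residuals.
# For constant-bias dynamics and a linear measurement, the EKF reduces
# to the plain Kalman filter below.
true_bias = np.array([0.8, -0.5])
Q = 1e-8 * np.eye(2)          # near-zero process noise: bias is constant
R = 0.04 * np.eye(2)          # star-measurement noise covariance
x = np.zeros(2)               # initial bias estimate
P = np.eye(2)                 # initial estimate covariance

for _ in range(200):
    z = true_bias + rng.normal(0, 0.2, 2)   # one star observation
    P = P + Q                               # predict (identity dynamics)
    K = P @ np.linalg.inv(P + R)            # Kalman gain
    x = x + K @ (z - x)                     # update with the innovation
    P = (np.eye(2) - K) @ P

print(np.round(x, 2))  # the estimate should approach the true bias angles
```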

  1. Why are we regressing?

    PubMed

    Jupiter, Daniel C

    2012-01-01

    In this first of a series of statistical methodology commentaries for the clinician, we discuss the use of multivariate linear regression. Copyright © 2012 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.

  2. Updating, upgrading, refining, calibration and implementation of trade-off analysis methodology developed for INDOT.

    DOT National Transportation Integrated Search

    2012-11-01

    As part of the ongoing evolution towards integrated highway asset management, the Indiana Department of Transportation (INDOT), : through SPR studies in 2004 and 2010, sponsored research that developed an overall framework for asset management. This ...

  3. Calibrated Methodology for Assessing Adaptation Costs for Urban Drainage Systems

    EPA Science Inventory

    Changes in precipitation patterns associated with climate change may pose significant challenges for storm water management systems across much of the U.S. In particular, adapting these systems to more intense rainfall events will require significant investment. The assessment ...

  4. Revised Planning Methodology For Signalized Intersections And Operational Analysis Of Exclusive Left-Turn Lanes, Part-II: Models And Procedures (Final Report)

    DOT National Transportation Integrated Search

    1996-04-01

    This report also describes the procedures for direct estimation of intersection capacity with simulation, including a set of rigorous statistical tests for simulation parameter calibration from field data.

  5. Metabolomics of Ulva lactuca Linnaeus (Chlorophyta) exposed to oil fuels: Fourier transform infrared spectroscopy and multivariate analysis as tools for metabolic fingerprint.

    PubMed

    Pilatti, Fernanda Kokowicz; Ramlov, Fernanda; Schmidt, Eder Carlos; Costa, Christopher; Oliveira, Eva Regina de; Bauer, Claudia M; Rocha, Miguel; Bouzon, Zenilda Laurita; Maraschin, Marcelo

    2017-01-30

    Fossil fuels, e.g. gasoline and diesel oil, account for a substantial share of the pollution that affects marine ecosystems. Environmental metabolomics is an emerging field that may help unravel the effect of these xenobiotics on seaweeds and provide methodologies for biomonitoring coastal ecosystems. In the present study, FTIR and multivariate analysis were used to discriminate metabolic profiles of Ulva lactuca after in vitro exposure to diesel oil and gasoline, in combinations of concentrations (0.001%, 0.01%, 0.1%, and 1.0% v/v) and times of exposure (30 min, 1 h, 12 h, and 24 h). PCA and HCA performed on the entire mid-infrared spectral window were able to discriminate diesel oil-exposed thalli from gasoline-exposed ones. HCA performed on the spectral window related to protein absorbance (1700-1500 cm(-1)) enabled the best discrimination between gasoline-exposed samples with regard to time of exposure, and between diesel oil-exposed samples according to concentration. The results indicate that the combination of FTIR with multivariate analysis is a simple and efficient methodology for metabolic profiling with potential use in biomonitoring strategies. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Liquid chromatography with diode array detection and multivariate curve resolution for the selective and sensitive quantification of estrogens in natural waters.

    PubMed

    Pérez, Rocío L; Escandar, Graciela M

    2014-07-04

    Following the green analytical chemistry principles, an efficient strategy involving second-order data provided by liquid chromatography (LC) with diode array detection (DAD) was applied for the simultaneous determination of estriol, 17β-estradiol, 17α-ethinylestradiol and estrone in natural water samples. After a simple pre-concentration step, LC-DAD matrix data were rapidly obtained (in less than 5 min) with a chromatographic system operating isocratically. Applying a second-order calibration algorithm based on multivariate curve resolution with alternating least-squares (MCR-ALS), successful resolution was achieved in the presence of sample constituents that strongly coelute with the analytes. The flexibility of this multivariate model allowed the quantification of the four estrogens in tap, mineral, underground and river water samples. Limits of detection in the range between 3 and 13 ng L(-1), and relative prediction errors from 2 to 11% were achieved. Copyright © 2014 Elsevier B.V. All rights reserved.
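The MCR-ALS resolution step can be sketched on a toy bilinear LC-DAD matrix: two co-eluting components, with alternating least-squares updates of concentration profiles and spectra under a simple non-negativity clip. All peak positions, widths, and the noise level are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy LC-DAD data matrix: two co-eluting components, each the outer
# product of an elution profile and a UV spectrum (bilinear model D = C S^T).
t = np.linspace(0, 5, 120)
wl = np.linspace(200, 400, 80)
C_true = np.column_stack([np.exp(-0.5 * ((t - 2.0) / 0.4) ** 2),
                          np.exp(-0.5 * ((t - 2.5) / 0.4) ** 2)])
S_true = np.column_stack([np.exp(-0.5 * ((wl - 260) / 25) ** 2),
                          np.exp(-0.5 * ((wl - 300) / 25) ** 2)])
D = C_true @ S_true.T + rng.normal(0, 1e-3, (120, 80))

# MCR-ALS: alternate least-squares updates of C and S under a
# non-negativity constraint (clipping, the simplest implementation).
# Initialize C from two data columns near the expected band maxima.
C = np.clip(D[:, [24, 40]], 0, None)
for _ in range(100):
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0, None)
    C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0, None)

lof = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)  # relative lack of fit
print(round(lof, 4))
```

Full MCR-ALS implementations add further constraints (unimodality, closure, known spectra) to narrow the rotational ambiguity of the bilinear decomposition.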

  7. Assessment of annual pollutant loads in combined sewers from continuous turbidity measurements: sensitivity to calibration data.

    PubMed

    Lacour, C; Joannis, C; Chebbo, G

    2009-05-01

    This article presents a methodology for assessing annual wet weather Suspended Solids (SS) and Chemical Oxygen Demand (COD) loads in combined sewers from continuous turbidity measurements, along with the associated uncertainties. The proposed method is applied to data from various urban catchments in the cities of Paris and Nantes. The focus here is the impact of the number of rain events sampled for calibration (i.e. for establishing linear SS/turbidity or COD/turbidity relationships) on the uncertainty of annual pollutant load assessments. Two calculation methods are investigated, both of which rely on Monte Carlo simulations: random assignment of event-specific calibration relationships to each individual rain event, and the use of an overall relationship built from the entire available data set. Since results indicate a fairly low inter-event variability for calibration relationship parameters, an accurate assessment of pollutant loads can be derived even when fewer than 10 events are sampled for calibration purposes. For operational applications, these results suggest that turbidity could provide a more precise evaluation of pollutant loads at lower cost than typical sampling methods.
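The Monte Carlo assessment can be sketched as propagating the uncertainty of a linear SS/turbidity calibration into an annual load estimate. The calibration pairs, the continuous turbidity record, and the volume per time step are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical calibration: SS concentration (mg/L) vs turbidity (NTU)
# pairs from a handful of sampled rain events.
turb_cal = rng.uniform(50, 400, 30)
ss_cal = 1.8 * turb_cal + 20 + rng.normal(0, 25, 30)

A = np.column_stack([np.ones(30), turb_cal])
coef, res, *_ = np.linalg.lstsq(A, ss_cal, rcond=None)
cov = res[0] / (30 - 2) * np.linalg.inv(A.T @ A)   # coefficient covariance

# Continuous turbidity record for one "year" of wet weather (toy series),
# with an assumed constant discharged volume per time step.
turb_year = rng.uniform(50, 400, 2000)
volume = 10.0  # m3 per time step

# Monte Carlo: propagate calibration-coefficient uncertainty into the
# annual SS load (mg/L * m3 = g; factor 1e-6 converts to tonnes).
draws = rng.multivariate_normal(coef, cov, 2000)
loads = (draws[:, [0]] + draws[:, [1]] * turb_year[None, :]).sum(axis=1) * volume * 1e-6
print(round(loads.mean(), 1), round(loads.std(), 2))  # tonnes, +/- spread
```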

  8. An Equation of State for Foamed Divinylbenzene (DVB) Based on Multi-Shock Response

    NASA Astrophysics Data System (ADS)

    Aslam, Tariq; Schroen, Diana; Gustavsen, Richard; Bartram, Brian

    2013-06-01

    The methodology for making foamed Divinylbenzene (DVB) is described. For a variety of initial densities, foamed DVB is examined through multi-shock compression and release experiments. Results from multi-shock experiments on LANL's 2-stage gas gun will be presented. A simple conservative Lagrangian numerical scheme, utilizing total-variation-diminishing interpolation and an approximate Riemann solver, will be presented as well as the methodology of calibration. It has been previously demonstrated that a single Mie-Gruneisen fitting form can replicate foam multi-shock compression response at a variety of initial densities; such a methodology will be presented for foamed DVB.

  9. An investigation of hydraulic conductivity estimation in a ground-water flow study of Northern Long Valley, New Jersey

    USGS Publications Warehouse

    Hill, Mary C.

    1985-01-01

    The purpose of this study was to develop a methodology to be used to investigate the aquifer characteristics and water supply potential of an aquifer system. In particular, the geohydrology of northern Long Valley, New Jersey, was investigated. Geohydrologic data were collected and analyzed to characterize the site. Analysis was accomplished by interpreting the available data and by using a numerical simulation of the water-table aquifer. Special attention was given to the estimation of hydraulic conductivity values and hydraulic conductivity structure, which together define the hydraulic conductivity of the modeled aquifer. Hydraulic conductivity and all other aspects of the system were first estimated using the trial-and-error method of calibration. The estimation of hydraulic conductivity was improved using a least squares method to estimate hydraulic conductivity values and by improvements in the parameter structure. These efforts improved the calibration of the model far more than a preceding period of similar effort using the trial-and-error method of calibration. In addition, the proposed method provides statistical information on the reliability of estimated hydraulic conductivity values, calculated heads, and calculated flows. The methodology developed and applied in this work proved to be of substantial value in the evaluation of the aquifer considered.

  10. Mathematical modeling of malaria infection with innate and adaptive immunity in individuals and agent-based communities.

    PubMed

    Gurarie, David; Karl, Stephan; Zimmerman, Peter A; King, Charles H; St Pierre, Timothy G; Davis, Timothy M E

    2012-01-01

    Agent-based modeling of Plasmodium falciparum infection offers an attractive alternative to the conventional Ross-Macdonald methodology, as it allows simulation of heterogeneous communities subjected to realistic transmission (inoculation patterns). We developed a new, agent based model that accounts for the essential in-host processes: parasite replication and its regulation by innate and adaptive immunity. The model also incorporates a simplified version of antigenic variation by Plasmodium falciparum. We calibrated the model using data from malaria-therapy (MT) studies, and developed a novel calibration procedure that accounts for a deterministic and a pseudo-random component in the observed parasite density patterns. Using the parasite density patterns of 122 MT patients, we generated a large number of calibrated parameters. The resulting data set served as a basis for constructing and simulating heterogeneous agent-based (AB) communities of MT-like hosts. We conducted several numerical experiments subjecting AB communities to realistic inoculation patterns reported from previous field studies, and compared the model output to the observed malaria prevalence in the field. There was overall consistency, supporting the potential of this agent-based methodology to represent transmission in realistic communities. Our approach represents a novel, convenient and versatile method to model Plasmodium falciparum infection.

  11. A methodology to evaluate occupational internal exposure to fluorine-18.

    PubMed

    Oliveira, C M; Dantas, A L A; Dantas, B M

    2009-11-15

    The objective of this work is to develop procedures for internal monitoring of (18)F to be applied in cases of possible incorporation of fluoride and (18)FDG, using in vivo and in vitro methods of measurement. The NaI(Tl) 8" x 4" scintillation detector installed at the IRD Whole Body Counter was calibrated for measurements with a whole-body anthropomorphic phantom, simulating homogeneous distribution of (18)F in the body. The NaI(Tl) 3" x 3" scintillation detector installed at the IRD Whole Body Counter was calibrated for in vivo measurements with a brain phantom inserted in an artificial skull, simulating (18)FDG incorporation. The HPGe detection system installed at the IRD Bioassay Laboratory was calibrated for in vitro measurements of urine samples with 1-liter plastic bottles containing a standard liquid source. A methodology for bioassay data interpretation, based on standard ICRP models edited with the software AIDE-version 6, was established. It is concluded that in vivo measurements have sufficient sensitivity for monitoring (18)F in the forms of fluoride and (18)FDG. Combined in vivo and in vitro bioassay data can provide useful information in cases of accidental incorporation, helping to identify the chemical form of (18)F incorporated.

  12. Calibration of CT Hounsfield units for proton therapy treatment planning: use of kilovoltage and megavoltage images and comparison of parameterized methods

    NASA Astrophysics Data System (ADS)

    De Marzi, L.; Lesven, C.; Ferrand, R.; Sage, J.; Boulé, T.; Mazal, A.

    2013-06-01

    Proton beam range is of major concern, particularly when the images used for dose computation contain artifacts (for example, in patients with surgically treated bone tumors). We investigated several conditions and methods for determination of computed tomography Hounsfield unit (CT-HU) calibration curves, using two different conversion schemes. A stoichiometric methodology was used on either kilovoltage (kV) or megavoltage (MV) CT images, and the accuracy of the calibration methods was evaluated. We then studied the effects of metal artifacts on proton dose distributions using metallic implants in a rigid phantom mimicking clinical conditions. MV-CT images were used to evaluate relative proton stopping power in certain high-density implants, and a methodology is proposed for accurate delineation and dose calculation, using a combined set of kV- and MV-CT images. Our results show good agreement between measurements and dose calculations or relative proton stopping power determination (<5%). The results also show that range uncertainty increases when only kV-CT images are used or when no correction is made on artifacted images. However, differences between treatment plans calculated on corrected kV-CT data and MV-CT data remained insignificant in the investigated patient case, even with streak artifacts and volume effects that reduce the accuracy of manual corrections.
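    Operationally, a CT-HU calibration curve of this kind is a piecewise-linear lookup from Hounsfield units to relative proton stopping power. The node values below are invented for illustration only; they are not the stoichiometric calibration of this paper:

```python
import numpy as np

def hu_to_rsp(hu, hu_nodes, rsp_nodes):
    """Convert CT Hounsfield units to relative proton stopping power via
    a piecewise-linear calibration curve (np.interp clamps at the ends
    of the table)."""
    return np.interp(hu, hu_nodes, rsp_nodes)

# Hypothetical calibration nodes (air -> water -> bone-like -> implant),
# for illustration only:
hu_nodes = np.array([-1000.0, 0.0, 1000.0, 3000.0])
rsp_nodes = np.array([0.001, 1.0, 1.5, 2.5])
```

By construction, water (0 HU) maps to a relative stopping power of 1, and values between nodes are linearly interpolated.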

  13. Evaluation of the Tropical Pacific Observing System from the Data Assimilation Perspective

    DTIC Science & Technology

    2014-01-01

    hereafter, SIDA systems) have the capacity to assimilate salinity profiles imposing a multivariate (mainly T-S) balance relationship (summarized in...Fujii et al., 2011). Current SIDA systems in operational centers generally use Ocean General Circulation Models (OGCM) with resolution typically 1...long-term (typically 20-30 years) ocean DA runs are often performed with SIDA systems in operational centers for validation and calibration of SI

  14. Nomogram Prediction of Overall Survival After Curative Irradiation for Uterine Cervical Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seo, YoungSeok; Yoo, Seong Yul; Kim, Mi-Sook

    Purpose: The purpose of this study was to develop a nomogram capable of predicting the probability of 5-year survival after radical radiotherapy (RT) without chemotherapy for uterine cervical cancer. Methods and Materials: We retrospectively analyzed 549 patients that underwent radical RT for uterine cervical cancer between March 1994 and April 2002 at our institution. Multivariate analysis using Cox proportional hazards regression was performed and this Cox model was used as the basis for the devised nomogram. The model was internally validated for discrimination and calibration by bootstrap resampling. Results: By multivariate regression analysis, the model showed that age, hemoglobin level before RT, Federation Internationale de Gynecologie Obstetrique (FIGO) stage, maximal tumor diameter, lymph node status, and RT dose at Point A significantly predicted overall survival. The survival prediction model demonstrated good calibration and discrimination. The bootstrap-corrected concordance index was 0.67. The predictive ability of the nomogram proved to be superior to FIGO stage (p = 0.01). Conclusions: The devised nomogram offers a significantly better level of discrimination than the FIGO staging system. In particular, it improves predictions of survival probability and could be useful for counseling patients, choosing treatment modalities and schedules, and designing clinical trials. However, before this nomogram is used clinically, it should be externally validated.
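    The concordance index reported (0.67) is Harrell's c-index. A minimal sketch of the underlying statistic for right-censored survival data (without the bootstrap correction used in the paper) is:

```python
def concordance_index(time, event, risk):
    """Harrell's c-index for right-censored survival data.

    A pair (i, j) is comparable when the subject with the shorter time
    had an event (event = 1); the pair is concordant when that subject
    also has the higher predicted risk. Ties in risk count as 0.5.
    """
    num, den = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if event[i] and time[i] < time[j]:   # i fails first: comparable
                den += 1
                if risk[i] > risk[j]:
                    num += 1.0
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den
```

A perfectly risk-ordered cohort gives 1.0, a perfectly anti-ordered one gives 0.0, and random predictions hover around 0.5.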

  15. Comparative study between univariate spectrophotometry and multivariate calibration as analytical tools for quantitation of Benazepril alone and in combination with Amlodipine.

    PubMed

    Farouk, M; Elaziz, Omar Abd; Tawakkol, Shereen M; Hemdan, A; Shehata, Mostafa A

    2014-04-05

    Four simple, accurate, reproducible, and selective methods have been developed and subsequently validated for the determination of Benazepril (BENZ) alone and in combination with Amlodipine (AML) in pharmaceutical dosage form. The first method is pH-induced difference spectrophotometry, where BENZ can be measured in the presence of AML as it shows maximum absorption at 237 nm and 241 nm in 0.1 N HCl and 0.1 N NaOH, respectively, while AML shows no wavelength shift in either solvent. The second method is the new Extended Ratio Subtraction Method (EXRSM) coupled to the Ratio Subtraction Method (RSM) for determination of both drugs in commercial dosage form. The third and fourth methods are multivariate calibration methods: Principal Component Regression (PCR) and Partial Least Squares (PLS). A detailed validation of the methods was performed following the ICH guidelines; the standard curves were found to be linear in the range of 2-30 μg/mL for BENZ in the difference and extended ratio subtraction spectrophotometric methods, and 5-30 μg/mL for AML in the EXRSM method, with acceptable mean correlation coefficients for each analyte. The intra-day and inter-day precision and accuracy results were well within the acceptable limits. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Simultaneous chemometric determination of pyridoxine hydrochloride and isoniazid in tablets by multivariate regression methods.

    PubMed

    Dinç, Erdal; Ustündağ, Ozgür; Baleanu, Dumitru

    2010-08-01

    The sole use of pyridoxine hydrochloride during treatment of tuberculosis gives rise to pyridoxine deficiency. Therefore, a combination of pyridoxine hydrochloride and isoniazid is used in pharmaceutical dosage form in tuberculosis treatment to reduce this side effect. In this study, two chemometric methods, partial least squares (PLS) and principal component regression (PCR), were applied to the simultaneous determination of pyridoxine (PYR) and isoniazid (ISO) in their tablets. A concentration training set comprising 20 different binary mixtures of PYR and ISO was randomly prepared in 0.1 M HCl. Both multivariate calibration models were constructed using the relationships between the concentration data set (concentration data matrix) and the absorbance data matrix in the spectral region 200-330 nm. The accuracy and the precision of the proposed chemometric methods were validated by analyzing synthetic mixtures containing the investigated drugs. The recoveries obtained by applying the PCR and PLS calibrations to the artificial mixtures were between 100.0 and 100.7%. Satisfactory results were obtained by applying the PLS and PCR methods to both artificial and commercial samples. These results strongly support the use of the proposed methods for the quality control and routine analysis of marketed tablets containing PYR and ISO. Copyright © 2010 John Wiley & Sons, Ltd.
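    One of the two chemometric methods used here, principal component regression, can be sketched directly with an SVD: project the centered spectra onto the leading principal components, then regress the concentrations on the scores. This is a generic PCR implementation, not the authors' exact procedure:

```python
import numpy as np

def pcr_fit(X, y, n_components):
    """Principal component regression: project centered spectra onto the
    leading principal components, then regress y on the scores."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc = X - x_mean
    # principal directions from the SVD of the centered data matrix
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    V = vt[:n_components].T                 # loadings (p x k)
    T = Xc @ V                              # scores   (n x k)
    q, *_ = np.linalg.lstsq(T, y - y_mean, rcond=None)
    return x_mean, y_mean, V, q

def pcr_predict(model, X):
    x_mean, y_mean, V, q = model
    return y_mean + (X - x_mean) @ V @ q
```

When the number of components equals the rank of the centered data, PCR reproduces the ordinary least squares fit, so a noise-free linear response is recovered exactly.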

  17. Quality evaluation of frozen guava and yellow passion fruit pulps by NIR spectroscopy and chemometrics.

    PubMed

    Alamar, Priscila D; Caramês, Elem T S; Poppi, Ronei J; Pallone, Juliana A L

    2016-07-01

    The present study investigated the application of near infrared spectroscopy as a green, quick, and efficient alternative to the analytical methods currently used to evaluate the quality (moisture, total sugars, acidity, soluble solids, pH, and ascorbic acid) of frozen guava and passion fruit pulps. Fifty samples were analyzed by near infrared spectroscopy (NIR) and reference methods. Partial least squares regression (PLSR) was used to develop calibration models relating the NIR spectra to the reference values. Reference methods indicated adulteration by water addition in 58% of guava pulp samples and 44% of yellow passion fruit pulp samples. The PLS models produced low root mean square errors of calibration (RMSEC) and prediction (RMSEP) and coefficients of determination above 0.7. Moisture and total sugars presented the best calibration models (RMSEP of 0.240 and 0.269, respectively, for guava pulp; RMSEP of 0.401 and 0.413, respectively, for passion fruit pulp), which enables the application of these models to detect adulteration of guava and yellow passion fruit pulp by water or sugar addition. The models constructed for calibration of quality parameters of frozen fruit pulps in this study indicate that NIR spectroscopy coupled with multivariate calibration could be applied to determine the quality of guava and yellow passion fruit pulp. Copyright © 2016 Elsevier Ltd. All rights reserved.
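    The figures of merit quoted here (RMSEC, RMSEP, coefficient of determination) are straightforward to compute once a model's predictions are in hand; a minimal sketch:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error; conventionally called RMSEC when computed
    on the calibration set and RMSEP on an independent prediction set."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    """Coefficient of determination relative to the mean predictor."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)
```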

  18. Particle Count Limits Recommendation for Aviation Fuel

    DTIC Science & Technology

    2015-10-05

    Particle Counter Methodology: particle counts are taken utilizing calibration methodologies and standardized cleanliness code ratings (ISO 11171, ISO ...). [Fragmentary table of cleanliness-code limits (receipt / vehicle fuel tank / fuel injector) — Aviation fuel: DEF (AUST) 5695B 18/16/13; Parker 18/16/13, 14/10/7; Pamas / Parker / Particle Solutions 19/17...12; U.S. DOD 19/17/14/13*. Diesel fuel: World Wide Fuel Charter 5th 18/16/13; DEF (AUST) 5695B 18/16/13; Caterpillar 18/16/13; Detroit Diesel 18/16/13; MTU]

  19. Assessing Principal Component Regression Prediction of Neurochemicals Detected with Fast-Scan Cyclic Voltammetry

    PubMed Central

    2011-01-01

    Principal component regression is a multivariate data analysis approach routinely used to predict neurochemical concentrations from in vivo fast-scan cyclic voltammetry measurements. This mathematical procedure can rapidly be employed with present day computer programming languages. Here, we evaluate several methods that can be used to evaluate and improve multivariate concentration determination. The cyclic voltammetric representation of the calculated regression vector is shown to be a valuable tool in determining whether the calculated multivariate model is chemically appropriate. The use of Cook’s distance successfully identified outliers contained within in vivo fast-scan cyclic voltammetry training sets. This work also presents the first direct interpretation of a residual color plot and demonstrated the effect of peak shifts on predicted dopamine concentrations. Finally, separate analyses of smaller increments of a single continuous measurement could not be concatenated without substantial error in the predicted neurochemical concentrations due to electrode drift. Taken together, these tools allow for the construction of more robust multivariate calibration models and provide the first approach to assess the predictive ability of a procedure that is inherently impossible to validate because of the lack of in vivo standards. PMID:21966586

  20. Assessing principal component regression prediction of neurochemicals detected with fast-scan cyclic voltammetry.

    PubMed

    Keithley, Richard B; Wightman, R Mark

    2011-06-07

    Principal component regression is a multivariate data analysis approach routinely used to predict neurochemical concentrations from in vivo fast-scan cyclic voltammetry measurements. This mathematical procedure can rapidly be employed with present day computer programming languages. Here, we evaluate several methods that can be used to evaluate and improve multivariate concentration determination. The cyclic voltammetric representation of the calculated regression vector is shown to be a valuable tool in determining whether the calculated multivariate model is chemically appropriate. The use of Cook's distance successfully identified outliers contained within in vivo fast-scan cyclic voltammetry training sets. This work also presents the first direct interpretation of a residual color plot and demonstrated the effect of peak shifts on predicted dopamine concentrations. Finally, separate analyses of smaller increments of a single continuous measurement could not be concatenated without substantial error in the predicted neurochemical concentrations due to electrode drift. Taken together, these tools allow for the construction of more robust multivariate calibration models and provide the first approach to assess the predictive ability of a procedure that is inherently impossible to validate because of the lack of in vivo standards.
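    Cook's distance, used above to flag outliers in the training sets, can be computed for an ordinary least squares fit from the hat matrix. This is the generic formula, not the voltammetry-specific pipeline:

```python
import numpy as np

def cooks_distance(X, y):
    """Cook's distance for each observation of an OLS fit:
        D_i = r_i^2 * h_i / (p * s^2 * (1 - h_i)^2),
    with leverage h_i taken from the diagonal of the hat matrix."""
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T     # hat matrix
    h = np.diag(H)
    r = y - H @ y                            # residuals
    s2 = r @ r / (n - p)                     # residual variance
    return r**2 * h / (p * s2 * (1.0 - h)**2)
```

On a synthetic straight line with one grossly deviating point, that point receives the largest Cook's distance.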

  1. Estimation of railroad capacity using parametric methods.

    DOT National Transportation Integrated Search

    2013-12-01

    This paper reviews different methodologies used for railroad capacity estimation and presents a user-friendly method to measure capacity. The objective of this paper is to use multivariate regression analysis to develop a continuous relation of the d...

  2. Tracking problem solving by multivariate pattern analysis and Hidden Markov Model algorithms.

    PubMed

    Anderson, John R

    2012-03-01

    Multivariate pattern analysis can be combined with Hidden Markov Model algorithms to track the second-by-second thinking as people solve complex problems. Two applications of this methodology are illustrated with a data set taken from children as they interacted with an intelligent tutoring system for algebra. The first "mind reading" application involves using fMRI activity to track what students are doing as they solve a sequence of algebra problems. The methodology achieves considerable accuracy at determining both what problem-solving step the students are taking and whether they are performing that step correctly. The second "model discovery" application involves using statistical model evaluation to determine how many substates are involved in performing a step of algebraic problem solving. This research indicates that different steps involve different numbers of substates and these substates are associated with different fluency in algebra problem solving. Copyright © 2011 Elsevier Ltd. All rights reserved.
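    The Hidden Markov Model side of this methodology rests on standard decoding of the most likely hidden-state sequence. A minimal Viterbi sketch is below; the two-state parameters in the accompanying test are made up for illustration and have nothing to do with the fMRI model:

```python
import numpy as np

def viterbi(log_pi, log_A, log_B, obs):
    """Most likely hidden-state path given start log-probs (log_pi),
    transition log-probs (log_A[from, to]) and per-state observation
    log-likelihoods (log_B[state, symbol])."""
    n_states = len(log_pi)
    n_steps = len(obs)
    delta = np.empty((n_steps, n_states))
    psi = np.zeros((n_steps, n_states), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, n_steps):
        scores = delta[t - 1][:, None] + log_A       # (from, to)
        psi[t] = np.argmax(scores, axis=0)           # best predecessor
        delta[t] = scores[psi[t], np.arange(n_states)] + log_B[:, obs[t]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(n_steps - 1, 0, -1):              # backtrace
        path.append(int(psi[t, path[-1]]))
    return path[::-1]
```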

  3. Multivariate space - time analysis of PRE-STORM precipitation

    NASA Technical Reports Server (NTRS)

    Polyak, Ilya; North, Gerald R.; Valdes, Juan B.

    1994-01-01

    This paper presents the methodologies and results of the multivariate modeling and two-dimensional spectral and correlation analysis of PRE-STORM rainfall gauge data. Estimated parameters of the models for the specific spatial averages clearly indicate the eastward and southeastward wave propagation of rainfall fluctuations. A relationship between the coefficients of the diffusion equation and the parameters of the stochastic model of rainfall fluctuations is derived that leads directly to the exclusive use of rainfall data to estimate advection speed (about 12 m/s) as well as other coefficients of the diffusion equation of the corresponding fields. The statistical methodology developed here can be used for confirmation of physical models by comparison of the corresponding second-moment statistics of the observed and simulated data, for generating multiple samples of any size, for solving the inverse problem of the hydrodynamic equations, and for application in some other areas of meteorological and climatological data analysis and modeling.

  4. A methodology for multivariate phenotype-based genome-wide association studies to mine pleiotropic genes.

    PubMed

    Park, Sung Hee; Lee, Ji Young; Kim, Sangsoo

    2011-01-01

    Current Genome-Wide Association Studies (GWAS) are performed in a single-trait framework without considering genetic correlations between important disease traits. Hence, GWAS have limitations in discovering genetic risk factors with pleiotropic effects. This work reports a novel data mining approach to discover patterns of multiple phenotypic associations over 52 anthropometric and biochemical traits in KARE, and a new analytical scheme for GWAS of multivariate phenotypes defined by the discovered patterns. This methodology was applied to a GWAS of the multivariate phenotype highLDLhighTG, derived from the predicted patterns of the phenotypic associations. The patterns of the phenotypic associations were informative for drawing relations between plasma lipid levels, bone mineral density, and a cluster of common traits (obesity, hypertension, insulin resistance) related to Metabolic Syndrome (MS). A total of 15 SNPs in six genes (PAK7, C20orf103, NRIP1, BCL2, TRPM3, and NAV1) were identified as significantly associated with highLDLhighTG. Noteworthy findings were that the significant associations included a missense mutation (PAK7:R335P), a frameshift mutation (C20orf103), and SNPs in splicing sites (TRPM3). The six genes corresponded to rat and mouse quantitative trait loci (QTLs) that had shown associations with the common traits such as the well-characterized MS and even tumor susceptibility. Our findings suggest that the six genes may play important roles in the pleiotropic effects on lipid metabolism and the MS, which increase the risk of Type 2 Diabetes and cardiovascular disease. The use of multivariate phenotypes can be advantageous in identifying genetic risk factors, accounting for pleiotropic effects when the multivariate phenotypes have a common etiological pathway.

  5. Stochastic Residual-Error Analysis For Estimating Hydrologic Model Predictive Uncertainty

    EPA Science Inventory

    A hybrid time series-nonparametric sampling approach, referred to herein as semiparametric, is presented for the estimation of model predictive uncertainty. The methodology is a two-step procedure whereby a distributed hydrologic model is first calibrated, then followed by brute ...

  6. Asteroids: Does Space Weathering Matter?

    NASA Technical Reports Server (NTRS)

    Gaffey, Michael J.

    2001-01-01

    The interpretive calibrations and methodologies used to extract mineralogy from asteroidal spectra appear to remain valid until the space weathering process is advanced to a degree which appears to be rare or absent on asteroid surfaces. Additional information is contained in the original extended abstract.

  7. Three-dimensional Modeling of Water Quality and Ecology in Narragansett Bay

    EPA Science Inventory

    This report presents the methodology to apply, calibrate, and validate the three-dimensional water quality and ecological model provided with the Environmental Fluid Dynamics Code (EFDC). The required advection and dispersion mechanisms are generated simultaneously by the EFDC h...

  8. A METHODOLOGY FOR ESTIMATING UNCERTAINTY OF A DISTRIBUTED HYDROLOGIC MODEL: APPLICATION TO POCONO CREEK WATERSHED

    EPA Science Inventory

    Utility of distributed hydrologic and water quality models for watershed management and sustainability studies should be accompanied by rigorous model uncertainty analysis. However, the use of complex watershed models primarily follows the traditional {calibrate/validate/predict}...

  9. Calibration of raw accelerometer data to measure physical activity: A systematic review.

    PubMed

    de Almeida Mendes, Márcio; da Silva, Inácio C M; Ramires, Virgílio V; Reichert, Felipe F; Martins, Rafaela C; Tomasi, Elaine

    2018-03-01

    Most calibration studies based on accelerometry have used count-based analyses. In contrast, calibration studies based on raw acceleration signals are relatively recent and the evidence they provide is still incipient. The aim of the current study was to systematically review the literature in order to summarize the methodological characteristics and results of raw-data calibration studies. The review was conducted up to May 2017 using four databases: PubMed, Scopus, SPORTDiscus and Web of Science. Methodological quality of the included studies was evaluated using the Landis and Koch guidelines. Initially, 1669 titles were identified and, after assessing titles, abstracts and full articles, 20 studies were included. All studies were conducted in high-income countries, most of them with relatively small samples and specific population groups. Physical activity protocols differed among studies, and indirect calorimetry was the criterion measure most often used. High mean values of sensitivity, specificity and accuracy were observed for the intensity thresholds of cut-point-based studies (93.7%, 91.9% and 95.8%, respectively). The most frequent statistical approach was machine-learning-based modelling, with a mean coefficient of determination of 0.70 for predicting physical activity energy expenditure. Regarding the recognition of physical activity types, the mean values of accuracy for sedentary, household and locomotive activities were 82.9%, 55.4% and 89.7%, respectively. In conclusion, considering the construct of physical activity that each approach assesses, linear regression, machine-learning and cut-point-based approaches presented promising validity parameters. Copyright © 2017 Elsevier B.V. All rights reserved.
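    The sensitivity and specificity figures summarized for cut-point-based studies come from thresholding an acceleration summary against a criterion label. A minimal sketch with hypothetical data (the values and threshold below are invented, not from any reviewed study):

```python
import numpy as np

def cutpoint_metrics(values, labels, threshold):
    """Sensitivity, specificity and accuracy of classifying an epoch as
    'active' (label 1) when its acceleration summary exceeds threshold."""
    values = np.asarray(values, float)
    labels = np.asarray(labels, int)
    pred = (values > threshold).astype(int)
    tp = np.sum((pred == 1) & (labels == 1))
    tn = np.sum((pred == 0) & (labels == 0))
    fp = np.sum((pred == 1) & (labels == 0))
    fn = np.sum((pred == 0) & (labels == 1))
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / len(labels)
    return sens, spec, acc
```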

  10. A numerical identifiability test for state-space models--application to optimal experimental design.

    PubMed

    Hidalgo, M E; Ayesa, E

    2001-01-01

    This paper describes a mathematical tool for identifiability analysis, easily applicable to high-order non-linear systems modelled in state space and implementable in simulators with a time-discrete approach. This procedure also permits a rigorous analysis of the expected estimation errors (average and maximum) in calibration experiments. The methodology is based on the recursive numerical evaluation of the information matrix during the simulation of a calibration experiment and on the setting-up of a group of information parameters based on geometric interpretations of this matrix. As an example of the utility of the proposed test, the paper presents its application to an optimal experimental design of ASM Model No. 1 calibration, in order to estimate the maximum specific growth rate µH and the concentration of heterotrophic biomass XBH.
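    An information-matrix test of this kind can be approximated numerically: simulate the model, build output sensitivities by finite differences, and assemble the Fisher information matrix. This generic sketch assumes additive Gaussian measurement noise and is not the paper's ASM-specific implementation:

```python
import numpy as np

def fisher_information(simulate, theta, sigma2, eps=1e-6):
    """Approximate Fisher information matrix for parameters theta of a
    model y_k = simulate(theta)[k] + N(0, sigma2), using central
    finite-difference output sensitivities:  F = S^T S / sigma2.
    A (near-)singular F signals non-identifiable parameters."""
    theta = np.asarray(theta, float)
    y0 = np.asarray(simulate(theta), float)
    S = np.empty((y0.size, theta.size))      # sensitivity matrix
    for j in range(theta.size):
        d = np.zeros_like(theta)
        d[j] = eps
        S[:, j] = (np.asarray(simulate(theta + d)) -
                   np.asarray(simulate(theta - d))) / (2 * eps)
    return S.T @ S / sigma2
```

For a model that is linear in its parameters, y = X theta, the result reduces to the familiar X^T X / sigma2.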

  11. Correlation of porous and functional properties of food materials by NMR relaxometry and multivariate analysis.

    PubMed

    Haiduc, Adrian Marius; van Duynhoven, John

    2005-02-01

    The porous properties of food materials are known to determine important macroscopic parameters such as water-holding capacity and texture. In conventional approaches, understanding is built from a long process of establishing macrostructure-property relations in a rational manner. Only recently were multivariate approaches introduced for the same purpose. The model systems used here are oil-in-water emulsions, stabilised by protein, which form complex structures consisting of fat droplets dispersed in a porous protein phase. NMR time-domain decay curves were recorded for emulsions with varied levels of fat, protein and water. Hardness, dry matter content and water drainage were determined by classical means and analysed for correlation with the NMR data using multivariate techniques. Partial least squares can calibrate and predict these properties directly from the continuous NMR exponential decays, yielding regression coefficients higher than 82%. However, the calibration coefficients themselves belong to the continuous exponential domain and do little to explain the connection between NMR data and emulsion properties. Transformation of the NMR decays into a discrete domain with non-negative least squares permits the use of multilinear regression (MLR) on the resulting amplitudes as predictors and hardness or water drainage as responses. The MLR coefficients show that hardness is highly correlated with the components that have T2 distributions of about 20 and 200 ms, whereas water drainage is correlated with components that have T2 distributions around 400 and 1800 ms. These T2 distributions very likely correlate with water populations present in pores with different sizes and/or wall mobility. The results for the emulsions studied demonstrate that NMR time-domain decays can be employed to predict properties and to provide insight into the underlying microstructural features.
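    The non-negative inversion that turns a continuous relaxation decay into discrete amplitudes can be imitated with a simple projected-gradient NNLS solver. This is a toy stand-in for the actual inversion used in the paper, shown only to make the "non-negative least squares" step concrete:

```python
import numpy as np

def nnls_projected_gradient(A, b, n_iter=5000):
    """Non-negative least squares, min ||Ax - b||^2 s.t. x >= 0, solved
    by projected gradient descent with step 1/L, where L is the largest
    eigenvalue of A^T A. Simple and adequate for small well-conditioned
    problems; production NNLS solvers use active-set methods instead."""
    AtA, Atb = A.T @ A, A.T @ b
    step = 1.0 / np.linalg.eigvalsh(AtA)[-1]
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = np.maximum(0.0, x - step * (AtA @ x - Atb))
    return x
```

When the unconstrained solution is already non-negative it is recovered exactly; otherwise the offending amplitudes are clipped to zero, which is the behavior that makes the resulting T2 amplitudes physically interpretable.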

  12. An attempt at predicting blood β-hydroxybutyrate from Fourier-transform mid-infrared spectra of milk using multivariate mixed models in Polish dairy cattle.

    PubMed

    Belay, T K; Dagnachew, B S; Kowalski, Z M; Ådnøy, T

    2017-08-01

    Fourier transform mid-infrared (FT-MIR) spectra of milk are commonly used for phenotyping of traits of interest through links developed between the traits and milk FT-MIR spectra. Predicted traits are then used in genetic analysis for ultimate phenotypic prediction using a single-trait mixed model that accounts for cows' circumstances at a given test day. Here, this approach is referred to as indirect prediction (IP). Alternatively, FT-MIR spectral variables can be kept multivariate in the form of factor scores in REML and BLUP analyses. These BLUP predictions, including phenotype (predicted factor scores), were converted to a single trait through calibration outputs; this method is referred to as direct prediction (DP). The main aim of this study was to verify whether mixed modeling of milk spectra in the form of factor scores (DP) gives better prediction of blood β-hydroxybutyrate (BHB) than the univariate approach (IP). Models to predict blood BHB from milk spectra were also developed. Two data sets that contained milk FT-MIR spectra and other information on Polish dairy cattle were used in this study. Data set 1 (n = 826) also contained BHB measured in blood samples, whereas data set 2 (n = 158,028) did not contain measured blood values. Part of data set 1 was used to calibrate a prediction model (n = 496) and the remaining part of data set 1 (n = 330) was used to validate the calibration models, as well as to evaluate the DP and IP approaches. Dimensions of FT-MIR spectra in data set 2 were reduced either into 5 or 10 factor scores (DP) or into a single trait (IP) with calibration outputs. The REML estimates for these factor scores were found using WOMBAT. The BLUP values and predicted BHB for observations in the validation set were computed using the REML estimates. Blood BHB predicted from milk FT-MIR spectra by both approaches was regressed on reference blood BHB that had not been used in the model development. Coefficients of determination in cross-validation for untransformed blood BHB were from 0.21 to 0.32, whereas those for the log-transformed BHB were from 0.31 to 0.38. The corresponding estimates in validation were from 0.29 to 0.37 and 0.21 to 0.43, respectively, for untransformed and logarithmic BHB. Contrary to expectation, slightly better predictions of BHB were found when a univariate variance structure was used (IP) than when multivariate covariance structures were used (DP). Conclusive remarks on the importance of keeping spectral data in multivariate form for prediction of phenotypes may be found in data sets where the trait of interest has strong relationships with spectral variables. The Authors. Published by the Federation of Animal Science Societies and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).

  13. Method of predicting mechanical properties of decayed wood

    DOEpatents

    Kelley, Stephen S.

    2003-07-15

    A method for determining the mechanical properties of decayed wood that has been exposed to wood decay microorganisms, comprising: a) illuminating a surface of decayed wood that has been exposed to wood decay microorganisms with wavelengths from visible and near infrared (VIS-NIR) spectra; b) analyzing the surface of the decayed wood using a spectrometric method, the method generating a first spectral data of wavelengths in VIS-NIR spectra region; and c) using a multivariate analysis to predict mechanical properties of decayed wood by comparing the first spectral data with a calibration model, the calibration model comprising a second spectrometric method of spectral data of wavelengths in VIS-NIR spectra obtained from a reference decay wood, the second spectral data being correlated with a known mechanical property analytical result obtained from the reference decayed wood.

  14. A refined method for multivariate meta-analysis and meta-regression.

    PubMed

    Jackson, Daniel; Riley, Richard D

    2014-02-20

    Making inferences about the average treatment effect using the random effects model for meta-analysis is problematic in the common situation where there is a small number of studies. This is because estimates of the between-study variance are not precise enough to accurately apply the conventional methods for testing and deriving a confidence interval for the average effect. We have found that a refined method for univariate meta-analysis, which applies a scaling factor to the estimated effects' standard error, provides more accurate inference. We explain how to extend this method to the multivariate scenario and show that our proposal for refined multivariate meta-analysis and meta-regression can provide more accurate inferences than the more conventional approach. We explain how our proposed approach can be implemented using standard output from multivariate meta-analysis software packages and apply our methodology to two real examples. Copyright © 2013 John Wiley & Sons, Ltd.
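    For the univariate case, the refined method described (a scaling factor applied to the pooled effect's standard error, in the style of Hartung-Knapp/Sidik-Jonkman) can be sketched as follows; the multivariate extension proposed in the paper is not reproduced here:

```python
import numpy as np

def refined_meta_analysis(y, v):
    """Univariate random-effects meta-analysis with a refined variance:
    DerSimonian-Laird tau^2 for the between-study heterogeneity, then a
    scaling factor applied to the usual variance of the pooled effect
    (Hartung-Knapp/Sidik-Jonkman-type adjustment)."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)
    w = 1.0 / v
    mu_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fixed) ** 2)              # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)               # DerSimonian-Laird
    w_star = 1.0 / (v + tau2)
    mu = np.sum(w_star * y) / np.sum(w_star)
    # refined (scaled) variance of the pooled effect
    var_refined = np.sum(w_star * (y - mu) ** 2) / ((k - 1) * np.sum(w_star))
    return mu, np.sqrt(var_refined), tau2
```

With equal study variances the pooled effect is simply the mean, and the refined standard error is driven by the observed spread of the study effects rather than by the nominal within-study variances alone.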

  15. Designing Measurement Studies under Budget Constraints: Controlling Error of Measurement and Power.

    ERIC Educational Resources Information Center

    Marcoulides, George A.

    1995-01-01

    A methodology is presented for minimizing the mean error variance-covariance component in studies with resource constraints. The method is illustrated using a one-facet multivariate design. Extensions to other designs are discussed. (SLD)

  16. Seismic Hazard Estimates Using Ill-defined Macroseismic Data at Site

    NASA Astrophysics Data System (ADS)

    Albarello, D.; Mucciarelli, M.

A new approach to seismic hazard estimation is proposed, based on documentary data concerning the local history of seismic effects. The adopted methodology allows for the use of ``poor'' data, such as macroseismic data, within a formally coherent approach that overcomes a number of problems connected with forcing the available information into the frame of ``standard'' methodologies calibrated for instrumental data. The proposed methodology allows full exploitation of all the available information (which for many towns in Italy covers several centuries), making possible a correct use of macroseismic data characterized by different levels of completeness and reliability. As an application of the proposed methodology, seismic hazard estimates are presented for two towns located in Northern Italy: Bologna and Carpi.

  17. Decoding the Traumatic Memory among Women with PTSD: Implications for Neurocircuitry Models of PTSD and Real-Time fMRI Neurofeedback

    PubMed Central

    Cisler, Josh M.; Bush, Keith; James, G. Andrew; Smitherman, Sonet; Kilts, Clinton D.

    2015-01-01

    Posttraumatic Stress Disorder (PTSD) is characterized by intrusive recall of the traumatic memory. While numerous studies have investigated the neural processing mechanisms engaged during trauma memory recall in PTSD, these analyses have only focused on group-level contrasts that reveal little about the predictive validity of the identified brain regions. By contrast, a multivariate pattern analysis (MVPA) approach towards identifying the neural mechanisms engaged during trauma memory recall would entail testing whether a multivariate set of brain regions is reliably predictive of (i.e., discriminates) whether an individual is engaging in trauma or non-trauma memory recall. Here, we use a MVPA approach to test 1) whether trauma memory vs neutral memory recall can be predicted reliably using a multivariate set of brain regions among women with PTSD related to assaultive violence exposure (N=16), 2) the methodological parameters (e.g., spatial smoothing, number of memory recall repetitions, etc.) that optimize classification accuracy and reproducibility of the feature weight spatial maps, and 3) the correspondence between brain regions that discriminate trauma memory recall and the brain regions predicted by neurocircuitry models of PTSD. Cross-validation classification accuracy was significantly above chance for all methodological permutations tested; mean accuracy across participants was 76% for the methodological parameters selected as optimal for both efficiency and accuracy. Classification accuracy was significantly better for a voxel-wise approach relative to voxels within restricted regions-of-interest (ROIs); classification accuracy did not differ when using PTSD-related ROIs compared to randomly generated ROIs. ROI-based analyses suggested the reliable involvement of the left hippocampus in discriminating memory recall across participants and that the contribution of the left amygdala to the decision function was dependent upon PTSD symptom severity. 
These results have methodological implications for real-time fMRI neurofeedback of the trauma memory in PTSD and conceptual implications for neurocircuitry models of PTSD that attempt to explain core neural processing mechanisms mediating PTSD. PMID:26241958

  18. Decoding the Traumatic Memory among Women with PTSD: Implications for Neurocircuitry Models of PTSD and Real-Time fMRI Neurofeedback.

    PubMed

    Cisler, Josh M; Bush, Keith; James, G Andrew; Smitherman, Sonet; Kilts, Clinton D

    2015-01-01

    Posttraumatic Stress Disorder (PTSD) is characterized by intrusive recall of the traumatic memory. While numerous studies have investigated the neural processing mechanisms engaged during trauma memory recall in PTSD, these analyses have only focused on group-level contrasts that reveal little about the predictive validity of the identified brain regions. By contrast, a multivariate pattern analysis (MVPA) approach towards identifying the neural mechanisms engaged during trauma memory recall would entail testing whether a multivariate set of brain regions is reliably predictive of (i.e., discriminates) whether an individual is engaging in trauma or non-trauma memory recall. Here, we use a MVPA approach to test 1) whether trauma memory vs neutral memory recall can be predicted reliably using a multivariate set of brain regions among women with PTSD related to assaultive violence exposure (N=16), 2) the methodological parameters (e.g., spatial smoothing, number of memory recall repetitions, etc.) that optimize classification accuracy and reproducibility of the feature weight spatial maps, and 3) the correspondence between brain regions that discriminate trauma memory recall and the brain regions predicted by neurocircuitry models of PTSD. Cross-validation classification accuracy was significantly above chance for all methodological permutations tested; mean accuracy across participants was 76% for the methodological parameters selected as optimal for both efficiency and accuracy. Classification accuracy was significantly better for a voxel-wise approach relative to voxels within restricted regions-of-interest (ROIs); classification accuracy did not differ when using PTSD-related ROIs compared to randomly generated ROIs. ROI-based analyses suggested the reliable involvement of the left hippocampus in discriminating memory recall across participants and that the contribution of the left amygdala to the decision function was dependent upon PTSD symptom severity. 
These results have methodological implications for real-time fMRI neurofeedback of the trauma memory in PTSD and conceptual implications for neurocircuitry models of PTSD that attempt to explain core neural processing mechanisms mediating PTSD.
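The decoding approach in this record (testing whether a multivariate set of brain features reliably predicts trauma vs neutral memory recall) can be sketched with leave-one-out cross-validation. This toy example substitutes a nearest-centroid rule and synthetic two-class "activation patterns" for the actual fMRI features and classifiers used in the study:

```python
import numpy as np

def loocv_accuracy(X, y):
    """Leave-one-out cross-validated classification accuracy using a
    nearest-centroid decision rule (a simple stand-in for the MVPA
    classifiers used in fMRI decoding)."""
    correct = 0
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        Xtr, ytr = X[keep], y[keep]
        # Class centroids from the training folds only
        centroids = {c: Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)}
        pred = min(centroids, key=lambda c: np.linalg.norm(X[i] - centroids[c]))
        correct += int(pred == y[i])
    return correct / len(y)

# Synthetic, well-separated "trauma vs neutral recall" patterns
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (16, 50)), rng.normal(2, 0.5, (16, 50))])
y = np.array([0] * 16 + [1] * 16)
```

With real fMRI data, accuracy significantly above chance (as the study's 76% mean accuracy was) is what licenses the claim that the regions carry recall-discriminating information.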

  19. Calibration of pavement response models for the mechanistic-empirical pavement design method

    DOT National Transportation Integrated Search

    2007-09-01

    Most pavement design methodologies assume that the tire-pavement contact stress is equal to the tire inflation pressure and uniformly distributed over a circular contact area. However, tire-pavement contact area is not in a circular shape and the con...

  20. Tuning a fuzzy controller using quadratic response surfaces

    NASA Technical Reports Server (NTRS)

    Schott, Brian; Whalen, Thomas

    1992-01-01

    Response surface methodology, an alternative method to traditional tuning of a fuzzy controller, is described. An example based on a simulated inverted pendulum 'plant' shows that with (only) 15 trial runs, the controller can be calibrated using a quadratic form to approximate the response surface.
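The response-surface idea can be illustrated for a single tuning parameter: fit a quadratic to the trial runs, then take the vertex of the fitted parabola as the calibrated setting. A one-dimensional sketch with hypothetical trial data (the paper tunes a multi-parameter fuzzy controller, so its surface is multivariate):

```python
import numpy as np

def quadratic_optimum(settings, responses):
    """Fit a quadratic response surface to trial runs of a single tuning
    parameter and return the setting at the vertex of the fitted parabola."""
    a, b, c = np.polyfit(settings, responses, 2)
    return -b / (2 * a)  # stationary point of a*x^2 + b*x + c
```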

  1. Automatic Calibration of a Semi-Distributed Hydrologic Model Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Bekele, E. G.; Nicklow, J. W.

    2005-12-01

Hydrologic simulation models need to be calibrated and validated before they are used for operational predictions. Spatially-distributed hydrologic models generally have a large number of parameters to capture the various physical characteristics of a hydrologic system. Manual calibration of such models is a very tedious and daunting task, and its success depends on the subjective assessment of a particular modeler, which includes knowledge of the basic approaches and interactions in the model. In order to alleviate these shortcomings, an automatic calibration model, which employs an evolutionary optimization technique known as Particle Swarm Optimization (PSO) for parameter estimation, is developed. PSO is a heuristic search algorithm inspired by the social behavior of bird flocking and fish schooling. The newly-developed calibration model is integrated into the U.S. Department of Agriculture's Soil and Water Assessment Tool (SWAT). SWAT is a physically-based, semi-distributed hydrologic model that was developed to predict the long-term impacts of land management practices on water, sediment and agricultural chemical yields in large complex watersheds with varying soils, land use, and management conditions. SWAT was calibrated for streamflow and sediment concentration. The calibration process involves parameter specification, whereby sensitive model parameters are identified, and parameter estimation. In order to reduce the number of parameters to be calibrated, parameterization was performed. The methodology is applied to a demonstration watershed known as Big Creek, which is located in southern Illinois. Application results show the effectiveness of the approach, and model predictions are significantly improved.
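A minimal PSO loop of the kind used for automatic parameter estimation might look as follows. This is a generic sketch that minimizes a simple test objective rather than a SWAT calibration objective, and all parameter values (inertia, acceleration coefficients, swarm size) are conventional defaults, not the authors' settings:

```python
import random

def pso(objective, bounds, n_particles=30, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic particle swarm optimizer: minimize `objective` over a box
    defined by `bounds` (list of (lo, hi) per dimension)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity update: inertia + cognitive + social terms
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Position update, clamped to the parameter bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In a calibration setting, `objective` would run the hydrologic model with the candidate parameter vector and return an error measure (e.g., sum of squared deviations between simulated and observed streamflow).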

  2. Fermentation process tracking through enhanced spectral calibration modeling.

    PubMed

    Triadaphillou, Sophia; Martin, Elaine; Montague, Gary; Norden, Alison; Jeffkins, Paul; Stimpson, Sarah

    2007-06-15

The FDA process analytical technology (PAT) initiative will materialize in a significant increase in the number of installations of spectroscopic instrumentation. However, to attain the greatest benefit from the data generated, there is a need for calibration procedures that extract the maximum information content. For example, in fermentation processes, the interpretation of the resulting spectra is challenging as a consequence of the large number of wavelengths recorded, the underlying correlation structure that is evident between the wavelengths, and the impact of the measurement environment. Approaches to the development of calibration models have been based on the application of partial least squares (PLS) either to the full spectral signature or to a subset of wavelengths. This paper presents a new approach to calibration modeling that combines a wavelength selection procedure, spectral window selection (SWS), in which windows of wavelengths are automatically selected and subsequently used as the basis of the calibration model. However, because the windows selected are not unique when the algorithm is executed repeatedly, multiple models are constructed and then combined using stacking, thereby increasing the robustness of the final calibration model. The methodology is applied to data generated during the monitoring of broth concentrations in an industrial fermentation process from on-line near-infrared (NIR) and mid-infrared (MIR) spectrometers. It is shown that the proposed calibration modeling procedure outperforms traditional calibration procedures, as well as enabling the identification of the critical regions of the spectra with regard to the fermentation process.
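The window-selection-plus-stacking idea can be sketched as follows. For brevity this example fits ordinary least squares within each randomly chosen window in place of PLS, and stacks the window models by simple averaging of predictions; the window-selection rule and the synthetic spectra are illustrative assumptions, not the paper's SWS algorithm:

```python
import numpy as np

def window_models(X, y, n_windows=5, width=10, seed=0):
    """Fit one least-squares model per randomly chosen spectral window
    (a stand-in for PLS on automatically selected windows)."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_windows):
        start = rng.integers(0, X.shape[1] - width)
        cols = slice(start, start + width)
        Xw = np.column_stack([np.ones(len(y)), X[:, cols]])
        coef, *_ = np.linalg.lstsq(Xw, y, rcond=None)
        models.append((cols, coef))
    return models

def stacked_predict(models, X):
    """Stack the window models by averaging their predictions."""
    preds = []
    for cols, coef in models:
        Xw = np.column_stack([np.ones(X.shape[0]), X[:, cols]])
        preds.append(Xw @ coef)
    return np.mean(preds, axis=0)
```

Because each run of the selection step can pick different windows, averaging (or, in the paper, stacking with learned weights) over repeated runs makes the final calibration less sensitive to any one selection.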

  3. Comparison of Two Methodologies for Calibrating Satellite Instruments in the Visible and Near-Infrared

    NASA Technical Reports Server (NTRS)

    Barnes, Robert A.; Brown, Steven W.; Lykke, Keith R.; Guenther, Bruce; Butler, James J.; Schwarting, Thomas; Turpie, Kevin; Moyer, David; DeLuccia, Frank; Moeller, Christopher

    2015-01-01

    Traditionally, satellite instruments that measure Earth-reflected solar radiation in the visible and near infrared wavelength regions have been calibrated for radiance responsivity in a two-step method. In the first step, the relative spectral response (RSR) of the instrument is determined using a nearly monochromatic light source such as a lamp-illuminated monochromator. These sources do not typically fill the field-of-view of the instrument nor act as calibrated sources of light. Consequently, they only provide a relative (not absolute) spectral response for the instrument. In the second step, the instrument views a calibrated source of broadband light, such as a lamp-illuminated integrating sphere. The RSR and the sphere absolute spectral radiance are combined to determine the absolute spectral radiance responsivity (ASR) of the instrument. More recently, a full-aperture absolute calibration approach using widely tunable monochromatic lasers has been developed. Using these sources, the ASR of an instrument can be determined in a single step on a wavelength-by-wavelength basis. From these monochromatic ASRs, the responses of the instrument bands to broadband radiance sources can be calculated directly, eliminating the need for calibrated broadband light sources such as lamp-illuminated integrating spheres. In this work, the traditional broadband source-based calibration of the Suomi National Preparatory Project (SNPP) Visible Infrared Imaging Radiometer Suite (VIIRS) sensor is compared with the laser-based calibration of the sensor. Finally, the impact of the new full-aperture laser-based calibration approach on the on-orbit performance of the sensor is considered.

  4. Comparison of two methodologies for calibrating satellite instruments in the visible and near infrared

    PubMed Central

    Barnes, Robert A.; Brown, Steven W.; Lykke, Keith R.; Guenther, Bruce; Butler, James J.; Schwarting, Thomas; Moyer, David; Turpie, Kevin; DeLuccia, Frank; Moeller, Christopher

    2016-01-01

    Traditionally, satellite instruments that measure Earth-reflected solar radiation in the visible and near infrared wavelength regions have been calibrated for radiance responsivity in a two-step method. In the first step, the relative spectral response (RSR) of the instrument is determined using a nearly monochromatic light source such as a lamp-illuminated monochromator. These sources do not typically fill the field-of-view of the instrument nor act as calibrated sources of light. Consequently, they only provide a relative (not absolute) spectral response for the instrument. In the second step, the instrument views a calibrated source of broadband light, such as a lamp-illuminated integrating sphere. The RSR and the sphere absolute spectral radiance are combined to determine the absolute spectral radiance responsivity (ASR) of the instrument. More recently, a full-aperture absolute calibration approach using widely tunable monochromatic lasers has been developed. Using these sources, the ASR of an instrument can be determined in a single step on a wavelength-by-wavelength basis. From these monochromatic ASRs, the responses of the instrument bands to broadband radiance sources can be calculated directly, eliminating the need for calibrated broadband light sources such as integrating spheres. In this work, the traditional broadband source-based calibration of the Suomi National Preparatory Project (SNPP) Visible Infrared Imaging Radiometer Suite (VIIRS) sensor is compared with the laser-based calibration of the sensor. Finally, the impact of the new full-aperture laser-based calibration approach on the on-orbit performance of the sensor is considered. PMID:26836861
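The single-step calculation described in these two records, predicting a band's response to a broadband source directly from wavelength-by-wavelength ASRs, reduces to a numerical integration of responsivity times spectral radiance. A minimal sketch with hypothetical, dimensionless values:

```python
import numpy as np

def band_signal(wavelengths, responsivity, radiance):
    """Predicted instrument signal for a broadband source: trapezoidal
    integration of absolute spectral responsivity (ASR) times spectral
    radiance over the band."""
    vals = responsivity * radiance
    dl = np.diff(wavelengths)
    return 0.5 * np.sum(dl * (vals[1:] + vals[:-1]))
```

In the traditional two-step method the same integral is built from a relative spectral response and a separate absolute scale from a calibrated broadband source; the laser-based approach supplies the absolute responsivity at each wavelength directly.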

  5. SeaWiFS Postlaunch Calibration and Validation Analyses

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); McClain, Charles R.; Ainsworth, Ewa J.; Barnes, Robert A.; Eplee, Robert E., Jr.; Patt, Frederick S.; Robinson, Wayne D.; Wang, Menghua; Bailey, Sean W.

    2000-01-01

The effort to resolve data quality issues and improve on the initial data evaluation methodologies of the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Project was an extensive one. These evaluations have resulted, to date, in three major reprocessings of the entire data set where each reprocessing addressed the data quality issues that could be identified up to the time of each reprocessing. The number of chapters (21) needed to document this extensive work in the SeaWiFS Postlaunch Technical Report Series requires three volumes. The chapters in Volumes 9, 10, and 11 are in a logical order sequencing through sensor calibration, atmospheric correction, masks and flags, product evaluations, and bio-optical algorithms. The first chapter of Volume 9 is an overview of the calibration and validation program, including a table of activities from the inception of the SeaWiFS Project. Chapter 2 describes the fine adjustments of sensor detector knee radiances, i.e., radiance levels where three of the four detectors in each SeaWiFS band saturate. Chapters 3 and 4 describe the analyses of the lunar and solar calibration time series, respectively, which are used to track the temporal changes in radiometric sensitivity in each band. Chapter 5 outlines the procedure used to adjust band 7 relative to band 8 to derive reasonable aerosol radiances in band 7 as compared to those in band 8 in the vicinity of Lanai, Hawaii, the vicarious calibration site. Chapter 6 presents the procedure used to estimate the vicarious calibration gain adjustment factors for bands 1-6 using the water-leaving radiances from the Marine Optical Buoy (MOBY) offshore of Lanai. Chapter 7 provides the adjustments to the coccolithophore flag algorithm which were required for improved performance over the prelaunch version. Chapter 8 is an overview of the numerous modifications to the atmospheric correction algorithm that have been implemented. 
Chapter 9 describes the methodology used to remove artifacts of sun glint contamination for portions of the imagery outside the sun glint mask. Finally, Chapter 10 explains a modification to the ozone interpolation method to account for actual time differences between the SeaWiFS and Total Ozone Mapping Spectrometer (TOMS) orbits.

  6. Generation of Multivariate Surface Weather Series with Use of the Stochastic Weather Generator Linked to Regional Climate Model

    NASA Astrophysics Data System (ADS)

    Dubrovsky, M.; Farda, A.; Huth, R.

    2012-12-01

The regional-scale simulations of weather-sensitive processes (e.g. hydrology, agriculture and forestry) for the present and/or future climate often require high resolution meteorological inputs in terms of the time series of selected surface weather characteristics (typically temperature, precipitation, solar radiation, humidity, wind) for a set of stations or on a regular grid. As even the latest Global and Regional Climate Models (GCMs and RCMs) do not provide a realistic representation of the statistical structure of the surface weather, the model outputs must be postprocessed (downscaled) to achieve the desired statistical structure of the weather data before being used as an input to the follow-up simulation models. One of the downscaling approaches, which is also employed here, is based on a weather generator (WG), which is calibrated using the observed weather series and then modified (in the case of simulations for the future climate) according to the GCM- or RCM-based climate change scenarios. The present contribution uses the parametric daily weather generator M&Rfi to follow two aims: (1) Validation of the new simulations of the present climate (1961-1990) made by the ALADIN-Climate/CZ (v.2) Regional Climate Model at 25 km resolution. The WG parameters will be derived from the RCM-simulated surface weather series and compared to those derived from observational data in the Czech meteorological stations. The set of WG parameters will include selected statistics of the surface temperature and precipitation (characteristics of the mean, variability, interdiurnal variability and extremes). (2) Testing the potential of RCM output for calibrating the WG for ungauged locations. The methodology being examined consists of using a WG whose parameters are interpolated from the surrounding stations and then corrected based on RCM-simulated spatial variability. 
The quality of the weather series produced by the WG calibrated in this way will be assessed in terms of selected climatic characteristics focusing on extreme precipitation and temperature characteristics (including characteristics of dry/wet/hot/cold spells). Acknowledgements: The present experiment is made within the frame of projects ALARO (project P209/11/2405 sponsored by the Czech Science Foundation), WG4VALUE (project LD12029 sponsored by the Ministry of Education, Youth and Sports) and VALUE (COST ES 1102 action).

  7. Calibration methodology application of kerma area product meters in situ: Preliminary results

    NASA Astrophysics Data System (ADS)

    Costa, N. A.; Potiens, M. P. A.

    2014-11-01

The kerma-area product (KAP) is a useful quantity for establishing reference levels for conventional X-ray examinations. It can be obtained from measurements carried out with a KAP meter, a plane-parallel transmission ionization chamber mounted on the X-ray system. A KAP meter can be calibrated in the laboratory or in situ, where it is used. It is important to use a reference KAP meter in order to obtain reliable patient dose quantities. The Patient Dose Calibrator (PDC) is a new instrument from Radcal that measures KAP. It was manufactured following the recommendations of IEC 60580, an international standard for KAP meters. The aim of this study was to calibrate KAP meters in situ using the PDC. Previous studies and the quality control program of the PDC have shown that it performs well in characterization tests of ionization-chamber dosimeters and that it has low energy dependence. Three types of KAP meters were calibrated on four different diagnostic X-ray units. The voltages used in the first two calibrations were 50 kV, 70 kV, 100 kV and 120 kV; the other two used 50 kV, 70 kV and 90 kV, owing to equipment limitations. The field sizes used for the calibration were 10 cm, 20 cm and 30 cm. The calibrations were performed in three different cities in order to analyze the reproducibility of the PDC. The results gave the calibration coefficient for each KAP meter and showed that the PDC can be used as a reference instrument to calibrate clinical KAP meters.
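In-situ calibration of this kind reduces to computing, for each beam quality and field size, the ratio of the reference (PDC) reading to the clinical KAP meter's reading. A minimal sketch; the readings and condition keys are hypothetical:

```python
def kap_calibration(reference_readings, meter_readings):
    """Per-condition calibration coefficients for a clinical KAP meter:
    ratio of the reference KAP reading to the clinical meter's reading
    at each (tube voltage, field size) condition, plus their mean."""
    coeffs = {cond: ref / meter_readings[cond]
              for cond, ref in reference_readings.items()}
    mean = sum(coeffs.values()) / len(coeffs)
    return coeffs, mean
```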

  8. Automatic camera to laser calibration for high accuracy mobile mapping systems using INS

    NASA Astrophysics Data System (ADS)

    Goeman, Werner; Douterloigne, Koen; Gautama, Sidharta

    2013-09-01

A mobile mapping system (MMS) is a mobile multi-sensor platform developed by the geoinformation community to support the acquisition of huge amounts of geodata in the form of georeferenced high resolution images and dense laser clouds. Since data fusion and data integration techniques are increasingly able to combine the complementary strengths of different sensor types, the external calibration of a camera to a laser scanner is a common prerequisite on today's mobile platforms. The methods of calibration, nevertheless, are often relatively poorly documented, are almost always time-consuming, demand expert knowledge and often require a carefully constructed calibration environment. A new methodology is studied and explored to provide a high quality external calibration for a pinhole camera to a laser scanner which is automatic, easy to perform, robust and foolproof. The method presented here uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well-studied absolute orientation problem needs to be solved. In many cases, the camera and laser sensor are calibrated in relation to the INS system; therefore, the transformation from camera to laser contains the cumulated error of each sensor in relation to the INS. Here, the calibration of the camera is performed in relation to the laser frame, using the time synchronization between the sensors for data association. In this study, the use of the inertial relative movement will be explored to collect more useful calibration data. This results in a better intersensor calibration, allowing better coloring of the clouds and a more accurate depth mask for images, especially on the edges of objects in the scene.
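The absolute orientation problem mentioned above has a closed-form least-squares solution via SVD (the Kabsch method). A sketch recovering the rotation and translation between two corresponding point sets, assuming noiseless correspondences; the MMS pipeline would apply this to camera- and laser-frame observations of the ranging pole:

```python
import numpy as np

def absolute_orientation(P, Q):
    """Least-squares rotation R and translation t mapping point set P
    onto point set Q (row-wise correspondences), via the Kabsch/SVD
    solution of the absolute orientation problem: Q_i ~ R @ P_i + t."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```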

  9. Improvements in absolute seismometer sensitivity calibration using local earth gravity measurements

    USGS Publications Warehouse

    Anthony, Robert E.; Ringler, Adam; Wilson, David

    2018-01-01

    The ability to determine both absolute and relative seismic amplitudes is fundamentally limited by the accuracy and precision with which scientists are able to calibrate seismometer sensitivities and characterize their response. Currently, across the Global Seismic Network (GSN), errors in midband sensitivity exceed 3% at the 95% confidence interval and are the least‐constrained response parameter in seismic recording systems. We explore a new methodology utilizing precise absolute Earth gravity measurements to determine the midband sensitivity of seismic instruments. We first determine the absolute sensitivity of Kinemetrics EpiSensor accelerometers to 0.06% at the 99% confidence interval by inverting them in a known gravity field at the Albuquerque Seismological Laboratory (ASL). After the accelerometer is calibrated, we install it in its normal configuration next to broadband seismometers and subject the sensors to identical ground motions to perform relative calibrations of the broadband sensors. Using this technique, we are able to determine the absolute midband sensitivity of the vertical components of Nanometrics Trillium Compact seismometers to within 0.11% and Streckeisen STS‐2 seismometers to within 0.14% at the 99% confidence interval. The technique enables absolute calibrations from first principles that are traceable to National Institute of Standards and Technology (NIST) measurements while providing nearly an order of magnitude more precision than step‐table calibrations.

  10. The effect of rainfall measurement uncertainties on rainfall-runoff processes modelling.

    PubMed

    Stransky, D; Bares, V; Fatka, P

    2007-01-01

Rainfall data are a crucial input for various tasks concerning the wet weather period. Nevertheless, their measurement is affected by random and systematic errors that cause an underestimation of the rainfall volume. Therefore, the general objective of the presented work was to assess the credibility of measured rainfall data and to evaluate the effect of measurement errors on urban drainage modelling tasks. Within the project, a calibration methodology for tipping bucket rain gauges (TBRs) was defined and assessed in terms of uncertainty analysis. A set of 18 TBRs was calibrated and the results were compared to the previous calibration, which enabled us to evaluate the ageing of the TBRs. The propagation of calibration and other systematic errors through a rainfall-runoff model was studied on an experimental catchment. It was found that TBR calibration is important mainly for tasks connected with the assessment of peak values and high-flow durations. Omitting calibration leads to an underestimation of up to 30%, and the effect of other systematic errors can add a further 15%. TBR calibration should be performed every two years in order to keep up with the ageing of the TBR mechanics. Furthermore, the authors recommend adjusting the dynamic test duration in proportion to the generated rainfall intensity.
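A dynamic TBR calibration of the kind discussed can be sketched as fitting the relative volume underestimation as a function of rainfall intensity and then inverting it to correct field measurements. The linear error model here is an illustrative assumption, not the authors' exact procedure:

```python
import numpy as np

def fit_tbr_correction(intensity, true_volume, measured_volume):
    """Fit a linear dynamic-calibration curve for a tipping bucket rain
    gauge: relative underestimation as a function of rainfall intensity,
    from bench tests with known generated volumes."""
    rel_err = (true_volume - measured_volume) / true_volume
    slope, intercept = np.polyfit(intensity, rel_err, 1)
    return slope, intercept

def correct(measured_volume, intensity, slope, intercept):
    """Invert the fitted error curve to correct a field measurement."""
    return measured_volume / (1.0 - (slope * intensity + intercept))
```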

  11. Shortwave Radiometer Calibration Methods Comparison and Resulting Solar Irradiance Measurement Differences: A User Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

Banks financing solar energy projects require assurance that these systems will produce the energy predicted. Furthermore, utility planners and grid system operators need to understand the impact of the variable solar resource on solar energy conversion system performance. Accurate solar radiation data sets reduce the expense associated with mitigating performance risk and assist in understanding the impacts of solar resource variability. The accuracy of solar radiation measured by radiometers depends on the instrument performance specification, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of different calibration methods provided by radiometric calibration service providers, such as NREL and manufacturers of radiometers, on the resulting calibration responsivity. Some of these radiometers are calibrated indoors and some outdoors. To establish or understand the differences in calibration methodology, we processed and analyzed field-measured data from these radiometers. This study investigates calibration responsivities provided by NREL's broadband outdoor radiometer calibration (BORCAL) and a few prominent manufacturers. The BORCAL method provides the outdoor calibration responsivity of pyranometers and pyrheliometers at a 45 degree solar zenith angle, and as a function of solar zenith angle determined by clear-sky comparisons with reference irradiance. The BORCAL method also employs a thermal offset correction to the calibration responsivity of single-black thermopile detectors used in pyranometers. Indoor calibrations of radiometers by their manufacturers are performed using a stable artificial light source in a side-by-side comparison between the test radiometer under calibration and a reference radiometer of the same type. In both methods, the reference radiometer calibrations are traceable to the World Radiometric Reference (WRR). 
These different methods of calibration demonstrated +1% to +2% differences in solar irradiance measurement. Analyzing these differences will ultimately help determine the uncertainty of the field radiometer data and guide the development of a consensus standard for calibration. Further advancing procedures for precisely calibrating radiometers to world reference standards that reduce measurement uncertainty will allow more accurate prediction of solar output and improve the bankability of solar projects.

  12. Microbial signatures of oral dysbiosis, periodontitis and edentulism revealed by Gene Meter methodology.

    PubMed

    Hunter, M Colby; Pozhitkov, Alex E; Noble, Peter A

    2016-12-01

Conceptual models suggest that certain microorganisms (e.g., the "red" complex) are indicative of a specific disease state (e.g., periodontitis); however, recent studies have questioned the validity of these models. Here, the abundances of 500+ microbial species were determined in 16 patients with clinical signs of one of the following oral conditions: periodontitis, established caries, edentulism, and oral health. Our goal was to determine if the abundances of certain microorganisms reflect dysbiosis or a specific clinical condition that could be used as a 'signature' for dental research. Microbial abundances were determined by the analysis of 138,718 calibrated probes using Gene Meter methodology. Each 16S rRNA gene was targeted by an average of 194 unique probes (n = 25 nt). The calibration involved diluting pooled gene target samples, hybridizing each dilution to a DNA microarray, and fitting the probe intensities to adsorption models. The fit of the model to the experimental data was used to assess individual and aggregate probe behavior; good fits (R^2 > 0.90) were retained for back-calculating microbial abundances from patient samples. The abundance of a gene was determined from the median of all calibrated individual probes or from the calibrated abundance of all aggregated probes. With the exception of genes with low abundances (<2 arbitrary units), the abundances determined by the different calibrations were highly correlated (r~1.0). Seventeen genera were classified as 'signatures of dysbiosis' because they had significantly higher abundances in patients with periodontitis and edentulism when contrasted with health. Similarly, 13 genera were classified as 'signatures of periodontitis', and 14 genera were classified as 'signatures of edentulism'. The signatures could be used, individually or in combination, to assess the clinical status of a patient (e.g., evaluating treatments such as antibiotic therapies). 
Comparisons of the same patient samples revealed high false negatives (45%) for next-generation-sequencing results and low false positives (7%) for Gene Meter results. Copyright © 2016 Elsevier B.V. All rights reserved.
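    The R²-based probe filtering step described above can be sketched as follows; this is a minimal illustration assuming a simple linear fit as a stand-in for the paper's adsorption models, with invented probe data and hypothetical helper names (`fit_line`, `retained_probes`):

```python
# Sketch of the probe-calibration filtering step: fit each probe's
# dilution series with a model and keep probes whose fit has R^2 > 0.90.
# The linear fit is an illustrative stand-in for adsorption models.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def r_squared(xs, ys, a, b):
    """Coefficient of determination for the fitted line."""
    my = sum(ys) / len(ys)
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

def retained_probes(probes, threshold=0.90):
    """Keep only probes whose calibration fit exceeds the R^2 threshold."""
    kept = []
    for name, (xs, ys) in probes.items():
        a, b = fit_line(xs, ys)
        if r_squared(xs, ys, a, b) > threshold:
            kept.append(name)
    return kept

# Invented dilution-series data: one well-behaved probe, one noisy probe.
probes = {
    "probe_good": ([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]),
    "probe_bad":  ([1.0, 2.0, 3.0, 4.0], [2.0, 9.0, 1.0, 8.0]),
}
kept = retained_probes(probes)
```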

  13. Development and Validation of a Collocated Exposure Monitoring Methodology using Portable Air Monitors

    NASA Astrophysics Data System (ADS)

    Li, Z.; Che, W.; Frey, H. C.; Lau, A. K. H.

    2016-12-01

    Portable air monitors are currently being developed and used to enable a move towards exposure monitoring as opposed to fixed-site monitoring. Reliable methods for capturing spatial and temporal variability in exposure concentration are needed in order to obtain credible data from which to develop efficient exposure mitigation measures. However, there are few studies that quantify the validity and repeatability of the collected data. The objective of this study is to present and evaluate a collocated exposure monitoring (CEM) methodology, including the calibration of portable air monitors against stationary reference equipment, side-by-side comparison of portable air monitors, personal or microenvironmental exposure monitoring, and the processing and interpretation of the collected data. The CEM methodology was evaluated by applying it to the portable monitors TSI DustTrak II Aerosol Monitor 8530 for fine particulate matter (PM2.5) and TSI Q-Trak model 7575 with probe model 982 for CO, CO2, temperature and relative humidity. Using a school sampling campaign in Hong Kong in January and June 2015 as an example, the calibrated, side-by-side 1 Hz PM2.5 measurements showed good consistency between the two sets of portable air monitors. Because the side-by-side PM2.5 concentrations agreed within 2 percent most of the time, robust inferences could be drawn about the differences observed when the monitors measured in the classroom and at pedestrian level during school hours. The proposed CEM methodology can be widely applied in sampling campaigns with the objective of simultaneously characterizing pollutant concentrations in two or more locations or microenvironments. The further application of the CEM methodology to transportation exposure will be presented and discussed.
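    The calibration of a portable monitor against a stationary reference can be sketched as a simple linear correction; this is a minimal illustration assuming ordinary least squares, with invented collocated readings rather than actual DustTrak/Q-Trak data:

```python
# Sketch of the CEM calibration step: regress collocated reference
# readings on portable-monitor readings, then apply the fitted line
# as a correction to subsequent portable readings.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def make_corrector(portable, reference):
    """Return a function mapping raw portable readings to the reference scale."""
    a, b = fit_line(portable, reference)
    return lambda x: a * x + b

# Hypothetical collocated readings (ug/m3): this portable unit reads
# 2 units low relative to the reference instrument.
portable = [10.0, 20.0, 30.0, 40.0]
reference = [12.0, 22.0, 32.0, 42.0]
correct = make_corrector(portable, reference)
```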

  14. Mortality prediction using TRISS methodology in the Spanish ICU Trauma Registry (RETRAUCI).

    PubMed

    Chico-Fernández, M; Llompart-Pou, J A; Sánchez-Casado, M; Alberdi-Odriozola, F; Guerrero-López, F; Mayor-García, M D; Egea-Guerrero, J J; Fernández-Ortega, J F; Bueno-González, A; González-Robledo, J; Servià-Goixart, L; Roldán-Ramírez, J; Ballesteros-Sanz, M Á; Tejerina-Alvarez, E; Pino-Sánchez, F I; Homar-Ramírez, J

    2016-10-01

    To validate Trauma and Injury Severity Score (TRISS) methodology as an auditing tool in the Spanish ICU Trauma Registry (RETRAUCI). A prospective, multicenter registry evaluation was carried out. Thirteen Spanish Intensive Care Units (ICUs). Individuals with traumatic disease and available data admitted to the participating ICUs. Predicted mortality using TRISS methodology was compared with that observed in the pilot phase of the RETRAUCI from November 2012 to January 2015. Discrimination was evaluated using receiver operating characteristic (ROC) curves and the corresponding areas under the curves (AUCs) (95% CI), with calibration using the Hosmer-Lemeshow (HL) goodness-of-fit test. A value of p<0.05 was considered significant. Predicted and observed mortality. A total of 1405 patients were analyzed. The observed mortality rate was 18% (253 patients), while the predicted mortality rate was 16.9%. The area under the ROC curve was 0.889 (95% CI: 0.867-0.911). Patients with blunt trauma (n=1305) had an area under the ROC curve of 0.887 (95% CI: 0.864-0.910), and those with penetrating trauma (n=100) presented an area under the curve of 0.919 (95% CI: 0.859-0.979). In the global sample, the HL test yielded a value of 25.38 (p=0.001): 27.35 (p<0.0001) in blunt trauma and 5.91 (p=0.658) in penetrating trauma. TRISS methodology underestimated mortality in patients with low predicted mortality and overestimated mortality in patients with high predicted mortality. TRISS methodology in the evaluation of severe trauma in Spanish ICUs showed good discrimination, with inadequate calibration - particularly in blunt trauma. Copyright © 2015 Elsevier España, S.L.U. y SEMICYUC. All rights reserved.
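    The Hosmer-Lemeshow calibration statistic used above can be computed roughly as follows; this sketch assumes equal-size risk groups (real implementations differ in how ties and group boundaries are handled), with invented predictions:

```python
# Hosmer-Lemeshow goodness-of-fit sketch: sort cases by predicted risk,
# split into groups, and sum (O_g - E_g)^2 / (n_g * pbar_g * (1 - pbar_g)).
# A small statistic indicates predicted and observed mortality agree.

def hosmer_lemeshow(probs, outcomes, groups=10):
    pairs = sorted(zip(probs, outcomes))
    n = len(pairs)
    stat = 0.0
    for g in range(groups):
        chunk = pairs[g * n // groups:(g + 1) * n // groups]
        if not chunk:
            continue
        n_g = len(chunk)
        obs = sum(y for _, y in chunk)      # observed events in group
        exp = sum(p for p, _ in chunk)      # expected events in group
        pbar = exp / n_g
        if 0 < pbar < 1:
            stat += (obs - exp) ** 2 / (n_g * pbar * (1 - pbar))
    return stat

# Invented example: 20 cases, two risk strata (0.2 and 0.8).
probs = [0.2] * 10 + [0.8] * 10
outcomes_calibrated = [1, 1] + [0] * 8 + [1] * 8 + [0, 0]   # matches risks
outcomes_miscal = [0, 0] + [1] * 8 + [0] * 8 + [1, 1]       # inverted risks
```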

  15. Simultaneous determination of specific alpha and beta emitters by LSC-PLS in water samples.

    PubMed

    Fons-Castells, J; Tent-Petrus, J; Llauradó, M

    2017-01-01

    Liquid scintillation counting (LSC) is a commonly used technique for the determination of alpha and beta emitters. However, LSC has poor resolution, and the continuous spectra of beta emitters hinder the simultaneous determination of several alpha and beta emitters from the same spectrum. In this paper, the feasibility of multivariate calibration by partial least squares (PLS) models for the determination of several alpha (natU, ²⁴¹Am and ²²⁶Ra) and beta emitters (⁴⁰K, ⁶⁰Co, ⁹⁰Sr/⁹⁰Y, ¹³⁴Cs and ¹³⁷Cs) in water samples is reported. A set of alpha and beta spectra from radionuclide calibration standards was used to construct three PLS models. Experimentally mixed radionuclides and intercomparison materials were used to validate the models. The results had a maximum relative bias of 25% when all the radionuclides in the sample were included in the calibration set; otherwise the relative bias was over 100% for some radionuclides. The results obtained show that LSC-PLS is a useful approach for the simultaneous determination of alpha and beta emitters in multi-radionuclide samples. However, to obtain useful results, it is important to include all the radionuclides expected in the studied scenario in the calibration set. Copyright © 2016 Elsevier Ltd. All rights reserved.
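    The PLS calibration step can be illustrated with a minimal PLS1 (NIPALS) implementation; the synthetic "spectra" below are invented and far smaller than real LSC spectra:

```python
# Minimal PLS1 via the NIPALS algorithm: extract latent components from
# centered spectra X and response y, deflating after each component.
# Prediction applies the stored weights/loadings to a new spectrum.

def mean(v):
    return sum(v) / len(v)

def pls1_fit(X, y, ncomp):
    n, p = len(X), len(X[0])
    xm = [mean([row[j] for row in X]) for j in range(p)]
    ym = mean(y)
    E = [[row[j] - xm[j] for j in range(p)] for row in X]   # centered X
    f = [yi - ym for yi in y]                               # centered y
    W, P, Q = [], [], []
    for _ in range(ncomp):
        w = [sum(E[i][j] * f[i] for i in range(n)) for j in range(p)]
        norm = sum(wj * wj for wj in w) ** 0.5
        w = [wj / norm for wj in w]
        t = [sum(E[i][j] * w[j] for j in range(p)) for i in range(n)]
        tt = sum(ti * ti for ti in t)
        pl = [sum(E[i][j] * t[i] for i in range(n)) / tt for j in range(p)]
        q = sum(f[i] * t[i] for i in range(n)) / tt
        for i in range(n):                                  # deflate
            for j in range(p):
                E[i][j] -= t[i] * pl[j]
            f[i] -= q * t[i]
        W.append(w); P.append(pl); Q.append(q)
    return xm, ym, W, P, Q

def pls1_predict(model, x):
    xm, ym, W, P, Q = model
    e = [xj - xmj for xj, xmj in zip(x, xm)]
    yhat = ym
    for w, pl, q in zip(W, P, Q):
        t = sum(ej * wj for ej, wj in zip(e, w))
        e = [ej - t * plj for ej, plj in zip(e, pl)]
        yhat += t * q
    return yhat

# Invented "spectra" (3 channels, third collinear) and "activities".
X_cal = [[1.0, 0.0, 1.0], [0.0, 1.0, 1.0], [1.0, 1.0, 2.0],
         [2.0, 1.0, 3.0], [1.0, 2.0, 3.0]]
y_cal = [2.0, 3.0, 5.0, 7.0, 8.0]
model = pls1_fit(X_cal, y_cal, ncomp=2)
```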

  16. Spectrophotometric determination of ternary mixtures of thiamin, riboflavin and pyridoxal in pharmaceutical and human plasma by least-squares support vector machines.

    PubMed

    Niazi, Ali; Zolgharnein, Javad; Afiuni-Zadeh, Somaie

    2007-11-01

    Ternary mixtures of thiamin, riboflavin and pyridoxal have been simultaneously determined in synthetic and real samples by spectrophotometry combined with least-squares support vector machines (LS-SVM). The calibration graphs were linear in the ranges of 1.0-20.0, 1.0-10.0 and 1.0-20.0 μg ml⁻¹, with detection limits of 0.6, 0.5 and 0.7 μg ml⁻¹ for thiamin, riboflavin and pyridoxal, respectively. The experimental calibration matrix was designed with 21 mixtures of these chemicals, with concentrations varied within the linear ranges of the calibration graphs. The simultaneous determination of these vitamin mixtures by spectrophotometric methods is a difficult problem due to spectral interferences. Partial least squares (PLS) modeling and LS-SVM were used for the multivariate calibration of the spectrophotometric data. An excellent model was built using LS-SVM, with low prediction errors and performance superior to that of PLS. The root mean square errors of prediction (RMSEP) for thiamin, riboflavin and pyridoxal were 0.6926, 0.3755 and 0.4322 with PLS, and 0.0421, 0.0318 and 0.0457 with LS-SVM, respectively. The proposed method was satisfactorily applied to the rapid simultaneous determination of thiamin, riboflavin and pyridoxal in commercial pharmaceutical preparations and human plasma samples.
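    An LS-SVM regressor of the kind compared against PLS above can be sketched as follows, assuming the standard RBF-kernel dual formulation and invented data; `solve` is a naive Gaussian elimination, not a production solver:

```python
# LS-SVM regression sketch: solve the dual system
#   [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
# and predict with y(x) = sum_i alpha_i K(x, x_i) + b.
import math

def rbf(a, b, sigma):
    return math.exp(-sum((x - y) ** 2 for x, y in zip(a, b)) / (2 * sigma ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
    n = len(X)
    A = [[0.0] + [1.0] * n]
    for i in range(n):
        A.append([1.0] + [rbf(X[i], X[j], sigma) + (1.0 / gamma if i == j else 0.0)
                          for j in range(n)])
    sol = solve(A, [0.0] + list(y))
    return X, sigma, sol[0], sol[1:]

def lssvm_predict(model, x):
    X, sigma, b, alpha = model
    return b + sum(a * rbf(x, xi, sigma) for a, xi in zip(alpha, X))

# Invented one-dimensional data; large gamma gives near interpolation.
X_cal = [[0.0], [1.0], [2.0], [3.0]]
y_cal = [0.0, 1.0, 0.0, -1.0]
model = lssvm_fit(X_cal, y_cal, gamma=1e6, sigma=1.0)
```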

  17. Development and external multicenter validation of Chinese Prostate Cancer Consortium prostate cancer risk calculator for initial prostate biopsy.

    PubMed

    Chen, Rui; Xie, Liping; Xue, Wei; Ye, Zhangqun; Ma, Lulin; Gao, Xu; Ren, Shancheng; Wang, Fubo; Zhao, Lin; Xu, Chuanliang; Sun, Yinghao

    2016-09-01

    Substantial differences exist in the relationship between prostate cancer (PCa) detection rate and prostate-specific antigen (PSA) level in Western and Asian populations. The classic Western risk calculators, the European Randomized Study for Screening of Prostate Cancer Risk Calculator and the Prostate Cancer Prevention Trial Risk Calculator, were shown not to be applicable in Asian populations. We aimed to develop and validate a risk calculator for predicting the probability of PCa and high-grade PCa (defined as Gleason score sum 7 or higher) at initial prostate biopsy in Chinese men. Urology outpatients who underwent initial prostate biopsy according to the inclusion criteria were included. The multivariate logistic regression-based Chinese Prostate Cancer Consortium Risk Calculator (CPCC-RC) was constructed with cases from 2 hospitals in Shanghai. Discriminative ability, calibration and decision curve analysis were externally validated in 3 CPCC member hospitals. Of the 1,835 patients involved, PCa was identified in 338/924 (36.6%) and 294/911 (32.3%) men in the development and validation cohorts, respectively. Multivariate logistic regression analyses showed that 5 predictors (age, logPSA, logPV, free PSA ratio, and digital rectal examination) were associated with PCa (Model 1) or high-grade PCa (Model 2), respectively. The area under the curve of Model 1 and Model 2 was 0.801 (95% CI: 0.771-0.831) and 0.826 (95% CI: 0.796-0.857), respectively. Both models showed good calibration and substantial improvement in decision curve analysis over any single predictor at all threshold probabilities. Higher predictive accuracy, better calibration, and greater clinical benefit were achieved by CPCC-RC, compared with the European Randomized Study for Screening of Prostate Cancer Risk Calculator and the Prostate Cancer Prevention Trial Risk Calculator, in predicting PCa.
In external validation, CPCC-RC performed well in discrimination, calibration and decision curve analysis compared with the Western risk calculators. CPCC-RC may aid decision-making on prostate biopsy in Chinese men or in other Asian populations with similar genetic and environmental backgrounds. Copyright © 2016 Elsevier Inc. All rights reserved.
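    The construction of a logistic regression-based risk calculator can be illustrated schematically; the single predictor and the data below are invented, and this is not the published CPCC-RC model:

```python
# Toy multivariate-logistic-regression risk calculator: fit coefficients
# by full-batch gradient descent, then return a predicted probability.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, iters=5000):
    p = len(X[0])
    w, b = [0.0] * p, 0.0
    n = len(X)
    for _ in range(iters):
        gw, gb = [0.0] * p, 0.0
        for xi, yi in zip(X, y):
            err = sigmoid(b + sum(wj * xj for wj, xj in zip(w, xi))) - yi
            for j in range(p):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def predict_risk(w, b, x):
    """Predicted probability of a positive biopsy for predictor vector x."""
    return sigmoid(b + sum(wj * xj for wj, xj in zip(w, x)))

# Invented single-predictor cohort: risk rises with the predictor value.
X_cal = [[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]]
y_cal = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(X_cal, y_cal)
```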

  18. Measurement Musings.

    ERIC Educational Resources Information Center

    Fisher, William P., Jr.; Choi, Ellie; Fisher, William P.; Stenner, A. Jackson; Horabin, Ivan; Wright, Benjamin D.

    1998-01-01

    Comments on measurement aspects are presented in discussions of (1) methodology and morality (W. P. Fisher); (2) Rasch measurement (E. Choi); (3) novel wisdom of the Rasch approach (W. P. Fisher); (4) development of construct definition and calibration (A. J. Stenner and I. Horabin); and (5) origin of dimensions (B. D. Wright). (SLD)

  19. Univariate and multivariate skewness and kurtosis for measuring nonnormality: Prevalence, influence and estimation.

    PubMed

    Cain, Meghan K; Zhang, Zhiyong; Yuan, Ke-Hai

    2017-10-01

    Nonnormality of univariate data has been extensively examined previously (Blanca et al., Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 9(2), 78-84, 2013; Micceri, Psychological Bulletin, 105(1), 156, 1989). However, less is known about the potential nonnormality of multivariate data, although multivariate analysis is commonly used in psychological and educational research. Using univariate and multivariate skewness and kurtosis as measures of nonnormality, this study examined 1,567 univariate distributions and 254 multivariate distributions collected from authors of articles published in Psychological Science and the American Education Research Journal. We found that 74% of univariate distributions and 68% of multivariate distributions deviated from normal distributions. In a simulation study using typical values of skewness and kurtosis that we collected, we found that the resulting Type I error rates were 17% in a t-test and 30% in a factor analysis under some conditions. Hence, we argue that it is time to routinely report skewness and kurtosis along with other summary statistics such as means and variances. To facilitate future reporting of skewness and kurtosis, we provide a tutorial on how to compute univariate and multivariate skewness and kurtosis in SAS, SPSS, R and a newly developed Web application.
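    The univariate moment-based skewness and kurtosis used above can be computed as follows (a minimal sketch; the multivariate Mardia versions additionally require the inverse covariance matrix and are omitted here):

```python
# Moment-based univariate skewness and kurtosis:
#   skewness = m3 / m2^(3/2),  kurtosis = m4 / m2^2
# where m_k is the k-th central moment. A normal distribution has
# skewness 0 and kurtosis ~3 (subtract 3 for "excess" kurtosis).

def central_moment(x, k):
    m = sum(x) / len(x)
    return sum((xi - m) ** k for xi in x) / len(x)

def skewness(x):
    return central_moment(x, 3) / central_moment(x, 2) ** 1.5

def kurtosis(x):
    return central_moment(x, 4) / central_moment(x, 2) ** 2
```

For the symmetric sample `[1, 2, 3, 4, 5]`, skewness is 0 and kurtosis is 1.7 (excess kurtosis -1.3, flatter than normal).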

  20. Calibration of LR-115 for 222Rn monitoring taking into account the plateout effect.

    PubMed

    Da Silva, A A R; Yoshimura, E M

    2003-01-01

    The dose received by people exposed to indoor radon is mainly due to radon progeny. This fact points to the establishment of techniques that assess either radon and progeny together, or only the radon progeny concentration. In this work a low-cost and easy-to-use methodology is presented to determine the total indoor alpha emission concentration. It is based on passive detection using LR-115 and CR-39 detectors, taking into account the plateout effect. A calibration of the LR-115 track density response was done by indoor exposure in controlled environments and dwellings, places where 222Rn and progeny concentrations were measured with CR-39. The calibration factor obtained showed great dependence on the ambient conditions: (0.69 +/- 0.04) cm for controlled environments and (0.43 +/- 0.03) cm for dwellings.

  1. The use of experimental design to find the operating maximum power point of PEM fuel cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crăciunescu, Aurelian; Pătularu, Laurenţiu; Ciumbulea, Gloria

    2015-03-10

    Proton Exchange Membrane (PEM) Fuel Cells are difficult to model due to their complex nonlinear nature. In this paper, the development of a PEM Fuel Cell mathematical model based on the Design of Experiments methodology is described. The Design of Experiments provides a very efficient methodology for obtaining a mathematical model of the studied multivariable system with only a few experiments. The obtained results can be used for optimization and control of PEM Fuel Cell systems.

  2. Rapid quantification of multi-components in alcohol precipitation liquid of Codonopsis Radix using near infrared spectroscopy (NIRS).

    PubMed

    Luo, Yu; Li, Wen-Long; Huang, Wen-Hua; Liu, Xue-Hua; Song, Yan-Gang; Qu, Hai-Bin

    2017-05-01

    A near infrared spectroscopy (NIRS) approach was established for quality control of the alcohol precipitation liquid in the manufacture of Codonopsis Radix. By applying NIRS with multivariate analysis, it was possible to build variation into the calibration sample set; the Plackett-Burman design, the Box-Behnken design, and a concentrating-diluting method were used to obtain a sample set covering sufficient fluctuation of the process parameters and extended concentration information. NIR data were calibrated to predict the four quality indicators using partial least squares regression (PLSR). In the four calibration models, the root mean square errors of prediction (RMSEPs) were 1.22 μg/ml, 10.5 μg/ml, 1.43 μg/ml, and 0.433% for lobetyolin, total flavonoids, pigments, and total solid contents, respectively. The results indicated that multi-component quantification of the alcohol precipitation liquid of Codonopsis Radix could be achieved with an NIRS-based method, which offers a useful tool for real-time release testing (RTRT) of intermediates in the manufacture of Codonopsis Radix.
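    The RMSEP figures reported above follow the usual definition, sketched here with made-up numbers:

```python
# Root mean square error of prediction: the root of the mean squared
# difference between model predictions and reference values, computed
# on an independent prediction set.

def rmsep(predicted, reference):
    n = len(reference)
    return (sum((p - r) ** 2 for p, r in zip(predicted, reference)) / n) ** 0.5

# Invented predictions and reference assay values: every error is +/- 1,
# so RMSEP is exactly 1.0.
error = rmsep([2.0, 1.0, 4.0, 3.0], [1.0, 2.0, 3.0, 4.0])
```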

  3. International documentary standards and comparison of national physical measurement standards for the calibration of accelerometers

    NASA Astrophysics Data System (ADS)

    Evans, David J.

    2002-11-01

    The documentary standards defining internationally adopted methodologies and protocols for calibrating transducers used to measure vibration are currently developed under the International Organization for Standardization (ISO) Technical Committee 108 Sub Committee 3 (Use and calibration of vibration and shock measuring instruments). Recent revisions of the documentary standards on primary methods for the calibration of accelerometers used to measure rectilinear motion have been completed. These standards can be, and have been, used as references in the technical protocols of key international and regional comparisons between National Measurement Institutes (NMIs) on the calibration of accelerometers. These key comparisons are occurring in part as a result of the creation of the Mutual Recognition Arrangement between NMIs which has appendices that document the uncertainties, and the comparisons completed in support of the uncertainties, claimed by the National Laboratories that are signatories of the MRA. The measurements for the first international and the first Interamerican System of Metrology (SIM) regional key comparisons in vibration have been completed. These intercomparisons were promulgated via the relatively new Consultative Committee for Acoustics, Ultrasound and Vibration (CCAUV) of the International Committee for Weights and Measures (CIPM) and SIM Metrology Working Group (MWG) 9, respectively.

  4. Calibration of DEM parameters on shear test experiments using Kriging method

    NASA Astrophysics Data System (ADS)

    Bednarek, Xavier; Martin, Sylvain; Ndiaye, Abibatou; Peres, Véronique; Bonnefoy, Olivier

    2017-06-01

    Calibration of powder mixing simulations using the Discrete Element Method (DEM) is still an issue. Achieving good agreement with experimental results is difficult because time-efficient use of DEM involves strong assumptions. This work presents a methodology to calibrate DEM parameters using the Efficient Global Optimization (EGO) algorithm, which is based on the Kriging interpolation method. Classical shear test experiments are used as calibration experiments. The calibration is made on two parameters: the Young modulus and the friction coefficient. Determining the minimal number of grains to be used is a critical step: simulating too few grains would not represent the realistic behavior of the powder, whereas simulating a huge number of grains would be strongly time consuming. The optimization goal is the minimization of the objective function, defined as the distance between simulated and measured behaviors. The EGO algorithm maximizes the Expected Improvement criterion to find the next point to be simulated. This stochastic criterion makes use of the two quantities provided by the Kriging method: the prediction of the objective function and the estimation of the prediction error. It can thus quantify the improvement in the minimization that new simulations at specified DEM parameters would yield.
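    The Expected Improvement criterion maximized by EGO has a closed form in the Kriging prediction `mu` and its standard error `s`; a sketch under the minimization convention used in the abstract:

```python
# Expected Improvement for minimization:
#   EI = (ymin - mu) * Phi(z) + s * phi(z),  z = (ymin - mu) / s
# where ymin is the best objective value observed so far, and Phi/phi
# are the standard normal CDF/PDF. EI is large where the Kriging model
# predicts a low objective (exploitation) or is very uncertain (exploration).
import math

def normal_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, s, ymin):
    if s <= 0.0:                      # no predictive uncertainty
        return max(ymin - mu, 0.0)
    z = (ymin - mu) / s
    return (ymin - mu) * normal_cdf(z) + s * normal_pdf(z)
```

The next DEM simulation is run at the parameter point maximizing this quantity over the search domain.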

  5. The ethics of in vivo calibrations in oral health surveys.

    PubMed

    Andrade, Flávia Reis de; Narvai, Paulo Capel; Montagner, Miguel Ângelo

    2016-01-01

    To analyze the ethics of in vivo calibration, using the discourse of the administrators of the National Oral Health Survey (SBBrasil 2010) as a starting point. This is a qualitative study involving semi-structured individual interviews with 12 members of the Steering Group and Technical Advisory Committee of the Ministry of Health, and two coordinators, one State and the other Municipal. The discourse of the collective subject technique was used for data analysis. When asked about the experiences of SBBrasil 2010 that involved ethical aspects, respondents pointed to the standardization and training of the teams who collected field data. In their view, there is little scientific evidence to ethically support the way the training stage, including calibration, is carried out in oral health epidemiological surveys, as a certain unease can be expected among participants of these studies. The ethics of a study also derives from its methodological rigor; the training process, and calibration in particular, is a fundamental technical and ethical requirement in surveys such as SBBrasil 2010. The unease of the volunteers in the face of test repetition does not ethically invalidate in vivo calibration, but mechanisms to minimize it must be developed.

  6. The effects of AVIRIS atmospheric calibration methodology on identification and quantitative mapping of surface mineralogy, Drum Mountains, Utah

    NASA Technical Reports Server (NTRS)

    Kruse, Fred A.; Dwyer, John L.

    1993-01-01

    The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) measures reflected light in 224 contiguous spectral bands in the 0.4 to 2.45 micron region of the electromagnetic spectrum. Numerous studies have used these data for mineralogic identification and mapping based on the presence of diagnostic spectral features. Quantitative mapping requires conversion of the AVIRIS data to physical units (usually reflectance) so that analysis results can be compared and validated with field and laboratory measurements. This study evaluated two different techniques for calibrating AVIRIS data to ground reflectance, an empirically based method and an atmospheric-model-based method, to determine their effects on quantitative scientific analyses. Expert system analysis and linear spectral unmixing were applied to both calibrated data sets to determine the effect of the calibration on mineral identification and quantitative mapping results. Comparison of the image-map results and image reflectance spectra indicates that the model-based calibrated data can be used with automated mapping techniques to produce accurate maps showing the spatial distribution and abundance of surface mineralogy. This has positive implications for future operational mapping using AVIRIS or similar imaging spectrometer data sets without requiring a priori knowledge.

  7. Automated Gravimetric Calibration to Optimize the Accuracy and Precision of TECAN Freedom EVO Liquid Handler

    PubMed Central

    Bessemans, Laurent; Jully, Vanessa; de Raikem, Caroline; Albanese, Mathieu; Moniotte, Nicolas; Silversmet, Pascal; Lemoine, Dominique

    2016-01-01

    High-throughput screening technologies are increasingly integrated into the formulation development process of biopharmaceuticals. The performance of liquid handling systems depends on their ability to deliver accurate and precise volumes of specific reagents to ensure process quality. We have developed an automated gravimetric calibration procedure to adjust the accuracy and evaluate the precision of the TECAN Freedom EVO liquid handling system. Volumes from 3 to 900 µL using calibrated syringes and fixed tips were evaluated with various solutions, including aluminum hydroxide and phosphate adjuvants, β-casein, sucrose, sodium chloride, and phosphate-buffered saline. The methodology for setting up liquid class pipetting parameters for each solution split the process into three steps: (1) screening of predefined liquid classes, including different pipetting parameters; (2) adjustment of accuracy parameters based on a calibration curve; and (3) confirmation of the adjustment. The entire procedure, from running the appropriate pipetting scripts through data acquisition and reporting to the creation of a new liquid class in EVOware, was fully automated. The calibration and confirmation of the robotic system was simple, efficient, and precise and could accelerate data acquisition for a wide range of biopharmaceutical applications. PMID:26905719

  8. Space based optical staring sensor LOS determination and calibration using GCPs observation

    NASA Astrophysics Data System (ADS)

    Chen, Jun; An, Wei; Deng, Xinpu; Yang, Jungang; Sha, Zhichao

    2016-10-01

    Line of sight (LOS) attitude determination and calibration is a key prerequisite for tracking and locating targets in space-based infrared (IR) surveillance systems (SBIRS), and the LOS determination and calibration of a staring sensor is one of its difficulties. This paper provides a novel methodology for removing staring sensor bias through the use of Ground Control Points (GCPs) detected in the background field of the sensor. Based on a study of the imaging model and characteristics of the staring sensor in the geostationary earth orbit (GEO) part of SBIRS, a real-time LOS attitude determination and calibration algorithm using landmark control points is proposed. The factors contributing to the staring sensor LOS attitude error (including thermal distortion error, assembly error, and so on) are modeled as an equivalent bias angle of the LOS attitude. By establishing the observation equation of the GCPs and the state transition equation of the bias angle, and using an extended Kalman filter (EKF), real-time estimation of the bias angle and high-precision sensor LOS attitude determination and calibration are achieved. Simulation results show that the precision and timeliness of the proposed algorithm meet the requirements of the target tracking and location process in space-based infrared surveillance systems.
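    The bias-angle estimation can be illustrated with a drastically simplified scalar Kalman filter; the real algorithm is a multivariate EKF over attitude states, and all values below are invented:

```python
# Scalar stand-in for EKF bias estimation: the LOS bias angle is modeled
# as a (nearly) constant random-walk state observed through noisy
# GCP observation residuals; the filter converges to the bias.
import random

def estimate_bias(measurements, q=1e-8, r=0.01):
    """measurements: observed-minus-predicted GCP angles (bias + noise)."""
    x, p = 0.0, 1.0              # state estimate and its variance
    for z in measurements:
        p += q                   # predict: random-walk bias model
        k = p / (p + r)          # Kalman gain
        x += k * (z - x)         # update with innovation
        p *= (1.0 - k)
    return x

random.seed(7)
true_bias = 0.005                # radians, hypothetical
obs = [true_bias + random.gauss(0.0, 0.1) for _ in range(2000)]
est = estimate_bias(obs, r=0.01)
```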

  9. ICESAT Laser Altimeter Pointing, Ranging and Timing Calibration from Integrated Residual Analysis

    NASA Technical Reports Server (NTRS)

    Luthcke, Scott B.; Rowlands, D. D.; Carabajal, C. C.; Harding, D. H.; Bufton, J. L.; Williams, T. A.

    2003-01-01

    On January 12, 2003 the Ice, Cloud and land Elevation Satellite (ICESat) was successfully placed into orbit. The ICESat mission carries the Geoscience Laser Altimeter System (GLAS), whose primary measurement is short-pulse laser ranging to the Earth's surface at a 1064 nm wavelength at a rate of 40 pulses per second. The instrument has collected precise elevation measurements of the ice sheets, sea ice roughness and thickness, ocean and land surface elevations, and surface reflectivity. The accurate geolocation of GLAS's surface returns, the spots from which the laser energy reflects on the Earth's surface, is a critical issue in the scientific application of these data. Pointing, ranging, timing and orbit errors must be compensated to accurately geolocate the laser altimeter surface returns. Toward this end, the laser range observations can be fully exploited in an integrated residual analysis to accurately calibrate these geolocation/instrument parameters. ICESat laser altimeter data have been simultaneously processed as direct altimetry from ocean sweeps along with dynamic crossovers in order to calibrate pointing, ranging and timing. The calibration methodology and current calibration results are discussed along with future efforts.

  10. ICESat Laser Altimeter Pointing, Ranging and Timing Calibration from Integrated Residual Analysis: A Summary of Early Mission Results

    NASA Technical Reports Server (NTRS)

    Lutchke, Scott B.; Rowlands, David D.; Harding, David J.; Bufton, Jack L.; Carabajal, Claudia C.; Williams, Teresa A.

    2003-01-01

    On January 12, 2003 the Ice, Cloud and land Elevation Satellite (ICESat) was successfully placed into orbit. The ICESat mission carries the Geoscience Laser Altimeter System (GLAS), which consists of three near-infrared lasers that operate at 40 short pulses per second. The instrument has collected precise elevation measurements of the ice sheets, sea ice roughness and thickness, ocean and land surface elevations, and surface reflectivity. The accurate geolocation of GLAS's surface returns, the spots from which the laser energy reflects on the Earth's surface, is a critical issue in the scientific application of these data. Pointing, ranging, timing and orbit errors must be compensated to accurately geolocate the laser altimeter surface returns. Toward this end, the laser range observations can be fully exploited in an integrated residual analysis to accurately calibrate these geolocation/instrument parameters. Early mission ICESat data have been simultaneously processed as direct altimetry from ocean sweeps along with dynamic crossovers, resulting in a preliminary calibration of laser pointing, ranging and timing. The calibration methodology and early mission analysis results are summarized in this paper, along with future calibration activities.

  11. Calibration of Multiple In Silico Tools for Predicting Pathogenicity of Mismatch Repair Gene Missense Substitutions

    PubMed Central

    Thompson, Bryony A.; Greenblatt, Marc S.; Vallee, Maxime P.; Herkert, Johanna C.; Tessereau, Chloe; Young, Erin L.; Adzhubey, Ivan A.; Li, Biao; Bell, Russell; Feng, Bingjian; Mooney, Sean D.; Radivojac, Predrag; Sunyaev, Shamil R.; Frebourg, Thierry; Hofstra, Robert M.W.; Sijmons, Rolf H.; Boucher, Ken; Thomas, Alun; Goldgar, David E.; Spurdle, Amanda B.; Tavtigian, Sean V.

    2015-01-01

    Classification of rare missense substitutions observed during genetic testing for patient management is a considerable problem in clinical genetics. The Bayesian integrated evaluation of unclassified variants is a solution originally developed for BRCA1/2. Here, we take a step toward an analogous system for the mismatch repair (MMR) genes (MLH1, MSH2, MSH6, and PMS2) that confer colon cancer susceptibility in Lynch syndrome by calibrating in silico tools to estimate prior probabilities of pathogenicity for MMR gene missense substitutions. A qualitative five-class classification system was developed and applied to 143 MMR missense variants. This identified 74 missense substitutions suitable for calibration. These substitutions were scored using six different in silico tools (Align-Grantham Variation Grantham Deviation, multivariate analysis of protein polymorphisms [MAPP], Mut-Pred, PolyPhen-2.1, Sorting Intolerant From Tolerant, and Xvar), using curated MMR multiple sequence alignments where possible. The output from each tool was calibrated by regression against the classifications of the 74 missense substitutions; these calibrated outputs are interpretable as prior probabilities of pathogenicity. MAPP was the most accurate tool and MAPP + PolyPhen-2.1 provided the best-combined model (R² = 0.62 and area under the receiver operating characteristic curve = 0.93). The MAPP + PolyPhen-2.1 output is sufficiently predictive to feed as a continuous variable into the quantitative Bayesian integrated evaluation for clinical classification of MMR gene missense substitutions. PMID:22949387

  12. A surrogate-based sensitivity quantification and Bayesian inversion of a regional groundwater flow model

    NASA Astrophysics Data System (ADS)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.; Amerjeed, Mansoor

    2018-02-01

    Bayesian inference using Markov Chain Monte Carlo (MCMC) provides an explicit framework for stochastic calibration of hydrogeologic models accounting for uncertainties; however, MCMC sampling entails a large number of model calls and can easily become computationally unwieldy if the high-fidelity hydrogeologic model simulation is time consuming. This study proposes a surrogate-based Bayesian framework to address this notorious issue, and illustrates the methodology by inverse modeling of a regional MODFLOW model. The high-fidelity groundwater model is approximated by a fast statistical model using the Bagging Multivariate Adaptive Regression Spline (BMARS) algorithm, so that the MCMC sampling can be performed efficiently. In this study, the MODFLOW model is developed to simulate the groundwater flow in an arid region of Oman consisting of mountain-coast aquifers, and is used to run representative simulations to generate a training dataset for BMARS model construction. A BMARS-based Sobol' method is also employed to efficiently calculate input parameter sensitivities, which are used to evaluate and rank their importance for the groundwater flow model system. According to the sensitivity analysis, insensitive parameters are screened out of the Bayesian inversion of the MODFLOW model, further saving computing effort. The posterior probability distribution of the input parameters is efficiently inferred from the prescribed prior distribution using observed head data, demonstrating that the presented BMARS-based Bayesian framework is an efficient tool for reducing parameter uncertainties of a groundwater system.
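    The surrogate-based MCMC idea can be sketched with a toy Metropolis sampler in which the "surrogate" is literally a cheap closed-form function; everything below is an invented stand-in for the MODFLOW/BMARS setup:

```python
# Toy surrogate-based Bayesian inversion: a cheap surrogate replaces the
# expensive forward model inside a Metropolis MCMC loop; the posterior
# of one parameter k is sampled against noisy "head" observations.
import math, random

def surrogate(k):
    """Cheap stand-in for a trained emulator: head = 2*k + 1."""
    return 2.0 * k + 1.0

def log_posterior(k, obs, sigma=0.1, lo=0.0, hi=10.0):
    if not (lo <= k <= hi):              # uniform prior bounds
        return -math.inf
    return -sum((o - surrogate(k)) ** 2 for o in obs) / (2.0 * sigma ** 2)

def metropolis(obs, n_steps=20000, step=0.05, k0=5.0):
    random.seed(42)
    k, lp = k0, log_posterior(k0, obs)
    samples = []
    for _ in range(n_steps):
        cand = k + random.gauss(0.0, step)       # random-walk proposal
        lp_cand = log_posterior(cand, obs)
        if lp_cand >= lp or random.random() < math.exp(lp_cand - lp):
            k, lp = cand, lp_cand                # accept
        samples.append(k)
    return samples[n_steps // 2:]                # discard burn-in

# Synthetic observations generated with true k = 3.0 (head = 7.0).
random.seed(1)
obs = [7.0 + random.gauss(0.0, 0.1) for _ in range(20)]
k_mean = sum(metropolis(obs)) / (20000 // 2)
```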

  13. In situ continuous monitoring of nitrogen with ion-selective electrodes in a constructed wetland receiving treated wastewater: an operating protocol to obtain reliable data.

    PubMed

    Papias, Sandrine; Masson, Matthieu; Pelletant, Sébastien; Prost-Boucle, Stéphanie; Boutin, Catherine

    2018-03-01

    Constructed wetlands receiving treated wastewater (CWtw) are placed between wastewater treatment plants and receiving water bodies on the assumption that they improve water quality. A better understanding of CWtw functioning is required to evaluate their real performance. To achieve this, in situ continuous monitoring of nitrate and ammonium concentrations with ion-selective electrodes (ISEs) can provide valuable information. However, these measurements require precautions to ensure good data quality, especially in areas with high effluent quality requirements. In order to study the functioning of a CWtw instrumented with six ISE probes, we have developed an appropriate methodology for probe management and data processing. It is based on an evaluation of performance in the laboratory and an adapted field protocol for calibration, data treatment, and validation. The result is an operating protocol specifying an acceptable cleaning frequency of 2 weeks, a complementary calibration using CWtw water, a drift evaluation, and the determination of limits of quantification (1 mgN/L for ammonium and 0.5 mgN/L for nitrate). An example of a 9-month validated dataset confirms that it is fundamental to account for the technical limitations of the measuring equipment and to set appropriate maintenance and calibration methodologies in order to ensure an accurate interpretation of the data.

  14. Elemental analysis of soils using laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) and laser-induced breakdown spectroscopy (LIBS) with multivariate discrimination: tape mounting as an alternative to pellets for small forensic transfer specimens.

    PubMed

    Jantzi, Sarah C; Almirall, José R

    2014-01-01

    Elemental analysis of soil is a useful application of both laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) and laser-induced breakdown spectroscopy (LIBS) in geological, agricultural, environmental, archeological, planetary, and forensic sciences. In forensic science, the question to be answered is often whether soil specimens found on objects (e.g., shoes, tires, or tools) originated from the crime scene or other location of interest. Elemental analysis of the soil from the object and the locations of interest results in a characteristic elemental profile of each specimen, consisting of the amount of each element present. Because multiple elements are measured, multivariate statistics can be used to compare the elemental profiles in order to determine whether the specimen from the object is similar to one of the locations of interest. Previous work involved milling and pressing 0.5 g of soil into pellets before analysis using LA-ICP-MS and LIBS. However, forensic examiners prefer techniques that require smaller samples, are less time consuming, and are less destructive, allowing for future analysis by other techniques. An alternative sample introduction method was developed to meet these needs while still providing quantitative results suitable for multivariate comparisons. The tape-mounting method involved deposition of a thin layer of soil onto double-sided adhesive tape. A comparison of tape-mounting and pellet method performance is reported for both LA-ICP-MS and LIBS. Calibration standards and reference materials, prepared using the tape method, were analyzed by LA-ICP-MS and LIBS. As with the pellet method, linear calibration curves were achieved with the tape method, as well as good precision and low bias. Soil specimens from Miami-Dade County were prepared by both the pellet and tape methods and analyzed by LA-ICP-MS and LIBS. Principal components analysis and linear discriminant analysis were applied to the multivariate data. 
Results from both the tape method and the pellet method were nearly identical, with clear groupings and correct classification rates of >94%.
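A minimal version of the multivariate comparison step can be sketched with a two-class Fisher discriminant on synthetic elemental profiles. The element concentrations, site means, and labels below are invented for illustration; the actual study applied PCA and LDA to many more elements and specimens.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical elemental profiles (3 elements) for specimens from two sites.
site_a = rng.normal([50.0, 10.0, 5.0], [3.0, 1.0, 0.5], size=(20, 3))
site_b = rng.normal([40.0, 14.0, 6.0], [3.0, 1.0, 0.5], size=(20, 3))
X = np.vstack([site_a, site_b])
y = np.array([0] * 20 + [1] * 20)

# Two-class Fisher LDA: project onto w = Sw^{-1} (mu_a - mu_b) and
# threshold at the midpoint of the projected class means.
mu_a, mu_b = site_a.mean(axis=0), site_b.mean(axis=0)
Sw = np.cov(site_a, rowvar=False) + np.cov(site_b, rowvar=False)
w = np.linalg.solve(Sw, mu_a - mu_b)
threshold = w @ (mu_a + mu_b) / 2.0

# Site-a specimens score above the threshold by construction of w.
pred = np.where(X @ w > threshold, 0, 1)
accuracy = (pred == y).mean()
```

The same projection-and-threshold logic underlies classification rates like the >94% reported above, with cross-validation to keep the estimate honest.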

  15. Efficient solution methodology for calibrating the hemodynamic model using functional Magnetic Resonance Imaging (fMRI) measurements.

    PubMed

    Zambri, Brian; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem

    2015-08-01

    Our aim is to propose a numerical strategy for accurately and efficiently retrieving the biophysiological parameters, as well as the external stimulus characteristics, corresponding to the hemodynamic mathematical model that describes changes in blood flow and blood oxygenation during brain activation. The proposed method employs the TNM-CKF method developed in [1], but in a prediction/correction framework. We present numerical results using both real and synthetic functional Magnetic Resonance Imaging (fMRI) measurements to highlight the performance characteristics of this computational methodology.

  16. Body composition in Nepalese children using isotope dilution: the production of ethnic-specific calibration equations and an exploration of methodological issues.

    PubMed

    Devakumar, Delan; Grijalva-Eternod, Carlos S; Roberts, Sebastian; Chaube, Shiva Shankar; Saville, Naomi M; Manandhar, Dharma S; Costello, Anthony; Osrin, David; Wells, Jonathan C K

    2015-01-01

    Background. Body composition is important as a marker of both current and future health. Bioelectrical impedance (BIA) is a simple and accurate method for estimating body composition, but requires population-specific calibration equations. Objectives. (1) To generate population-specific calibration equations to predict lean mass (LM) from BIA in Nepalese children aged 7-9 years. (2) To explore methodological changes that may extend the range and improve accuracy. Methods. BIA measurements were obtained from 102 Nepalese children (52 girls) using the Tanita BC-418. Isotope dilution with deuterium oxide was used to measure total body water and to estimate LM. Prediction equations for estimating LM from BIA data were developed using linear regression, and estimates were compared with those obtained from the Tanita system. We assessed the effects of flexing the arms of children to extend the range of coverage towards lower weights. We also estimated the potential error if the number of children included in the study was reduced. Findings. Prediction equations were generated, incorporating height, impedance index, weight and sex as predictors (R² = 93%). The Tanita system tended to underestimate LM, with a mean error of 2.2%, but extending up to 25.8%. Flexing the arms to 90° increased the lower weight range, but produced a small error that was not significant when applied to children <16 kg (p = 0.42). Reducing the number of children increased the error at the tails of the weight distribution. Conclusions. Population-specific isotope calibration of BIA for Nepalese children has high accuracy. Arm position is important and can be used to extend the range of low weight covered. Smaller samples reduce resource requirements, but lead to large errors at the tails of the weight distribution.
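The calibration-equation step described under Methods amounts to an ordinary least-squares fit of lean mass on the impedance index (height²/impedance), weight, and sex. The sketch below uses fabricated data with assumed coefficients, not the Nepalese cohort:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 102  # same cohort size as the study; data entirely fabricated

height = rng.normal(125.0, 8.0, n)      # cm
impedance = rng.normal(700.0, 60.0, n)  # ohm
weight = rng.normal(22.0, 4.0, n)       # kg
sex = rng.integers(0, 2, n)             # 0/1 indicator

# Impedance index and a fabricated "true" lean-mass relationship.
index = height**2 / impedance
lean = 0.6 * index + 0.25 * weight + 1.0 * sex + 2.0 + rng.normal(0.0, 0.5, n)

# Ordinary least squares: LM ~ index + weight + sex + intercept.
A = np.column_stack([index, weight, sex, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, lean, rcond=None)
pred = A @ coef
r2 = 1.0 - np.sum((lean - pred) ** 2) / np.sum((lean - lean.mean()) ** 2)
```

With realistic noise levels this kind of fit reaches an R² comparable to the 93% reported, which is why a simple linear prediction equation suffices once it is calibrated to the target population.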

  17. Approaches to highly parameterized inversion-A guide to using PEST for groundwater-model calibration

    USGS Publications Warehouse

    Doherty, John E.; Hunt, Randall J.

    2010-01-01

    Highly parameterized groundwater models can create calibration difficulties. Regularized inversion-the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation-is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to parameters used to model that system. Though commonly used in other industries, regularized inversion is somewhat imperfectly understood in the groundwater field. There is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite-a frequently used tool for highly parameterized model calibration and one that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented after a logical progression of steps for building suitable PEST input. The discussion starts with use of pilot points as a parameterization device and processing/grouping observations to form multicomponent objective functions. A description of potential parameter solution methodologies and resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.
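The core idea of Tikhonov regularization, one of the PEST schemes mentioned above, can be illustrated on a toy linear problem: more parameters than observations, stabilized by a preferred-smoothness penalty. The Jacobian, noise level, and regularization weight below are arbitrary choices for the sketch, not PEST defaults.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy ill-posed problem: 40 pilot-point parameters, only 15 observations.
n_par, n_obs = 40, 15
J = rng.normal(size=(n_obs, n_par))            # sensitivity (Jacobian) matrix
p_true = np.sin(np.linspace(0.0, 3.0, n_par))  # smooth "true" parameter field
d = J @ p_true + rng.normal(0.0, 0.01, n_obs)  # noisy observations

# Tikhonov regularization: minimize ||J p - d||^2 + mu^2 ||L p||^2 with a
# first-difference operator L expressing a preferred-smoothness condition.
L = (np.eye(n_par) - np.eye(n_par, k=1))[:-1]
mu = 0.1
A = np.vstack([J, mu * L])
b = np.concatenate([d, np.zeros(n_par - 1)])
p_reg, *_ = np.linalg.lstsq(A, b, rcond=None)

# For comparison: the unregularized minimum-norm solution of the
# underdetermined 15 x 40 system.
p_min, *_ = np.linalg.lstsq(J, d, rcond=None)
err_reg = np.linalg.norm(p_reg - p_true)
err_min = np.linalg.norm(p_min - p_true)
```

The regularized solution tracks the smooth truth far better than the unregularized one, which is the essential payoff of combining many parameters with mathematical regularization.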

  18. A method to calibrate channel friction and bathymetry parameters of a Sub-Grid hydraulic model using SAR flood images

    NASA Astrophysics Data System (ADS)

    Wood, M.; Neal, J. C.; Hostache, R.; Corato, G.; Chini, M.; Giustarini, L.; Matgen, P.; Wagener, T.; Bates, P. D.

    2015-12-01

    Synthetic Aperture Radar (SAR) satellites are capable of all-weather, day-and-night observations that can discriminate between land and smooth open water surfaces over large scales. Because of this, there has been much interest in the use of SAR satellite data to improve our understanding of water processes, in particular fluvial flood inundation mechanisms. Past studies show that integrating SAR-derived data with hydraulic models can improve simulations of flooding. However, while much of this work focuses on improving model channel roughness values or inflows in ungauged catchments, improvement of model bathymetry is often overlooked. The provision of good bathymetric data is critical to the performance of hydraulic models, but there are only a small number of ways to obtain bathymetry information where no direct measurements exist. Spatially distributed river depths are also rarely available. We present a methodology for calibration of model average channel depth and roughness parameters concurrently using SAR images of flood extent and a Sub-Grid model utilising hydraulic geometry concepts. The methodology uses real data from the European Space Agency's archive of ENVISAT[1] Wide Swath Mode images of the River Severn between Worcester and Tewkesbury during flood peaks between 2007 and 2010. Historic ENVISAT WSM images are currently free and easy to access from the archive, but the methodology can be applied with any available SAR data. The approach makes use of the SAR image processing algorithm of Giustarini[2] et al. (2013) to generate binary flood maps. A unique feature of the calibration methodology is to also use parameter 'identifiability' to locate the parameters with higher accuracy from a pre-assigned range (adopting the DYNIA method proposed by Wagener[3] et al., 2003). [1] https://gpod.eo.esa.int/services/ [2] Giustarini. 2013. 'A Change Detection Approach to Flood Mapping in Urban Areas Using TerraSAR-X'.
IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 4. [3] Wagener. 2003. 'Towards reduced uncertainty in conceptual rainfall-runoff modelling: Dynamic identifiability analysis'. Hydrol. Process. 17, 455-476.

  19. Accurate Determination of the Frequency Response Function of Submerged and Confined Structures by Using PZT-Patches†.

    PubMed

    Presas, Alexandre; Valentin, David; Egusquiza, Eduard; Valero, Carme; Egusquiza, Mònica; Bossio, Matias

    2017-03-22

    Accurately determining the dynamic response of a structure is of considerable interest in many engineering applications. In particular, it is of paramount importance to determine the Frequency Response Function (FRF) for structures subjected to dynamic loads in order to avoid resonance and fatigue problems that can drastically reduce their useful life. One challenging case is the experimental determination of the FRF of submerged and confined structures, such as hydraulic turbines, which are greatly affected by dynamic problems, as reported in many cases in the past. The utilization of classical and calibrated exciters such as instrumented hammers or shakers to determine the FRF in such structures can be very complex due to the confinement of the structure, and because their use can disturb the boundary conditions and thereby affect the experimental results. For such cases, Piezoelectric Patches (PZTs), which are very light, thin and small, could be a very good option. Nevertheless, the main drawback of these exciters is that their calibration as dynamic force transducers (the voltage/force relationship) has not been successfully obtained in the past. Therefore, in this paper, a method to accurately determine the FRF of submerged and confined structures by using PZTs is developed and validated. The method consists of experimentally determining some characteristic parameters that define the FRF, with an uncalibrated PZT exciting the structure. These experimentally determined parameters are then introduced in a validated numerical model of the tested structure. In this way, the FRF of the structure can be estimated with good accuracy. With respect to previous studies, where only the natural frequencies and mode shapes were considered, this paper discusses and experimentally demonstrates the best excitation characteristics for also obtaining the damping ratios, and proposes a procedure to fully determine the FRF.
    The method proposed here has been validated for the structure vibrating in air by comparing the FRF experimentally obtained with a calibrated exciter (impact hammer) and the FRF obtained with the described method. Finally, the same methodology has been applied for the structure submerged and close to a rigid wall, where it is extremely important not to modify the boundary conditions for an accurate determination of the FRF. As experimentally shown in this paper, in such cases, the use of PZTs combined with the proposed methodology gives much more accurate estimates of the FRF than other calibrated exciters typically used for the same purpose. Therefore, the validated methodology proposed in this paper can be used to obtain the FRF of a generic submerged and confined structure, without a previous calibration of the PZT.
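For readers unfamiliar with the FRF itself, a single-degree-of-freedom receptance FRF and the classical half-power estimate of the damping ratio can be sketched as follows. The 50 Hz natural frequency and 2% damping ratio are arbitrary illustrative values, unrelated to the structure tested in the paper:

```python
import numpy as np

# Single-degree-of-freedom receptance FRF: H(w) = 1 / (k - m w^2 + i c w).
m = 1.0                          # kg
zeta = 0.02                      # 2% damping ratio (assumed)
k = (2.0 * np.pi * 50.0) ** 2    # stiffness giving a 50 Hz natural frequency
c = 2.0 * zeta * np.sqrt(k * m)

f = np.linspace(1.0, 100.0, 2000)
w = 2.0 * np.pi * f
H = 1.0 / (k - m * w**2 + 1j * c * w)

# Resonance peak and half-power (-3 dB) bandwidth estimate of damping.
f_peak = f[np.argmax(np.abs(H))]
Hmax = np.abs(H).max()
band = f[np.abs(H) >= Hmax / np.sqrt(2.0)]
zeta_est = (band.max() - band.min()) / (2.0 * f_peak)
```

Natural frequencies, damping ratios, and amplitudes like these are exactly the "characteristic parameters that define the FRF" that the paper extracts experimentally and feeds into a numerical model.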

  20. Graduate Recruitment and Development: Sector Influence on a Local Market/Regional Economy

    ERIC Educational Resources Information Center

    Heaton, Norma; McCracken, Martin; Harrison, Jeanette

    2008-01-01

    Purpose: The aim of this article is to illustrate how employers have used more innovative "localised" strategies to address what appears to be "globalised" problems of attracting and retaining high calibre applicants with the appropriate "work ready" skills. Design/methodology/approach: A series of interviews were…

  1. Relative radiometric calibration of LANDSAT TM reflective bands

    NASA Technical Reports Server (NTRS)

    Barker, J. L.

    1984-01-01

    A common scientific methodology and terminology are outlined for characterizing the radiometry of both TM sensors. The magnitudes of the most significant sources of radiometric variability are discussed, and methods are recommended for achieving the exceptional potential inherent in the radiometric precision and accuracy of the TM sensors.

  2. Validation of SMAP surface soil moisture products with core validation sites

    USDA-ARS?s Scientific Manuscript database

    The NASA Soil Moisture Active Passive (SMAP) mission has utilized a set of core validation sites as the primary methodology in assessing the soil moisture retrieval algorithm performance. Those sites provide well-calibrated in situ soil moisture measurements within SMAP product grid pixels for diver...

  3. Estimation of dose-response models for discrete and continuous data in weed science

    USDA-ARS?s Scientific Manuscript database

    Dose-response analysis is widely used in biological sciences and has application to a variety of risk assessment, bioassay, and calibration problems. In weed science, dose-response methodologies have typically relied on least squares estimation under an assumption of normality. Advances in computati...

  4. Risk prediction models for major adverse cardiac event (MACE) following percutaneous coronary intervention (PCI): A review

    NASA Astrophysics Data System (ADS)

    Manan, Norhafizah A.; Abidin, Basir

    2015-02-01

    Five percent of patients who underwent Percutaneous Coronary Intervention (PCI) experienced Major Adverse Cardiac Events (MACE) after the PCI procedure. Risk prediction of MACE following a PCI procedure is therefore helpful. This work presents a review of such prediction models currently in use. A literature search was performed on the PubMed and SCOPUS databases. Thirty publications were found, but only 4 studies were chosen based on the data used, design, and outcome of the study. Particular emphasis was given to the study design, population, sample size, modeling method, predictors, outcomes, and the discrimination and calibration of each model. All the models had acceptable discrimination ability (C-statistic >0.7) and good calibration (Hosmer-Lemeshow P-value >0.05). The most common modeling method was multivariate logistic regression, and the most popular predictor was age.
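The two reported metrics are easy to reproduce on synthetic data: fit a multivariate logistic regression, then compute the C-statistic as the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen control. The predictors and coefficients below are invented; real MACE models use many more covariates.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500

# Fabricated predictors and outcome from an assumed logistic model.
age = rng.normal(0.0, 1.0, n)        # standardized age
comorbid = rng.integers(0, 2, n)     # binary comorbidity flag
logit = -2.0 + 0.8 * age + 1.0 * comorbid
y = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))

# Multivariate logistic regression fitted by plain gradient ascent.
X = np.column_stack([np.ones(n), age, comorbid])
beta = np.zeros(3)
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 1.0 * X.T @ (y - p) / n

# C-statistic: probability a case's predicted risk exceeds a control's.
p = 1.0 / (1.0 + np.exp(-X @ beta))
cases, controls = p[y], p[~y]
c_stat = (cases[:, None] > controls[None, :]).mean()
```

A C-statistic above 0.7, as in the reviewed models, indicates that the model ranks cases above controls in more than 70% of case-control pairs.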

  5. Dixie Valley Engineered Geothermal System Exploration Methodology Project, Baseline Conceptual Model Report

    DOE Data Explorer

    Iovenitti, Joe

    2014-01-02

    The Engineered Geothermal System (EGS) Exploration Methodology Project is developing an exploration approach for EGS through the integration of geoscientific data. The Project chose the Dixie Valley Geothermal System in Nevada as a field laboratory site for methodology calibration purposes because, in the public domain, it is a highly characterized geothermal system in the Basin and Range with a considerable amount of geoscience and, most importantly, well data. The overall project area is 2500 km2, with the Calibration Area (Dixie Valley Geothermal Wellfield) being about 170 km2. The project was subdivided into five tasks: (1) collect and assess the existing public domain geoscience data; (2) design and populate a GIS database; (3) develop a baseline (existing data) geothermal conceptual model, evaluate geostatistical relationships, and generate baseline, coupled EGS favorability/trust maps from +1 km above sea level (asl) to -4 km asl for the Calibration Area at 0.5 km intervals to identify EGS drilling targets at a scale of 5 km x 5 km; (4) collect new geophysical and geochemical data; and (5) repeat Task 3 for the enhanced (baseline + new) data. Favorability maps were based on the integrated assessment of the three critical EGS exploration parameters of interest: rock type, temperature and stress. A complementary trust map was generated to accompany the favorability maps and graphically illustrate the cumulative confidence in the data used in the favorability mapping. The Final Scientific Report (FSR) is submitted in two parts, with Part I describing the results of project Tasks 1 through 3 and Part II covering the results of project Tasks 4 through 5, plus answering nine questions posed in the proposal for the overall project.
    FSR Part I presents (1) an assessment of the readily available public domain data and some proprietary data provided by Terra-Gen Power, LLC; (2) a re-interpretation of these data as required; (3) an exploratory geostatistical data analysis; (4) the baseline geothermal conceptual model; and (5) the EGS favorability/trust mapping. The conceptual model presented applies to both the hydrothermal system and EGS in the Dixie Valley region. FSR Part II presents (1) 278 new gravity stations; (2) enhanced gravity-magnetic modeling; (3) 42 new ambient seismic noise survey stations; (4) an integration of the new seismic noise data with a regional seismic network; (5) a new methodology and approach to interpret these data; (6) a novel method to predict rock type and temperature based on the newly interpreted data; (7) 70 new magnetotelluric (MT) stations; (8) an integrated interpretation of the enhanced MT data set; (9) the results of a 308-station soil CO2 gas survey; (10) new conductive thermal modeling in the project area; (11) new convective modeling in the Calibration Area; (12) pseudo-convective modeling in the Calibration Area; (13) enhanced data implications and qualitative geoscience correlations at three scales: (a) Regional, (b) Project, and (c) Calibration Area; (14) quantitative geostatistical exploratory data analysis; and (15) responses to nine questions posed in the proposal for this investigation. Enhanced favorability/trust maps were not generated because there was not a sufficient amount of new, fully vetted rock type, temperature, and stress data. The enhanced seismic data did generate a new method to infer rock type and temperature. However, in the opinion of the Principal Investigator for this project, this new methodology needs to be tested and evaluated at other sites in the Basin and Range before it is used to generate the referenced maps.
As in the baseline conceptual model, the enhanced findings can be applied to both the hydrothermal system and EGS in the Dixie Valley region.

  6. Fluorescence of the Flavin group in choline oxidase. Insights and analytical applications for the determination of choline and betaine aldehyde.

    PubMed

    Ortega, E; de Marcos, S; Sanz-Vicente, I; Ubide, C; Ostra, M; Vidal, M; Galbán, J

    2016-01-15

    Choline oxidase (ChOx) is a flavoenzyme catalysing the oxidation of choline (Ch) to betaine aldehyde (BA) and glycine betaine (GB). In this paper a fundamental study of the intrinsic fluorescence properties of ChOx due to Flavin Adenine Dinucleotide (FAD) is presented and some analytical applications are studied in detail. Firstly, an unusual alteration in the excitation spectra, in comparison with the absorption spectra, has been observed as a function of the pH. This is ascribed to a change of polarity in the excited state. Secondly, the evolution of the fluorescence spectra during the reaction seems to indicate that the reaction takes place in two consecutive, but partially overlapped, steps and each of them follows a different mechanism. Thirdly, the chemical system can be used to determine the Ch concentration in the range from 5×10⁻⁶ M to 5×10⁻⁵ M (univariate and multivariate calibration) in the presence of BA as interference, and the joint Ch+BA concentration in the range 5×10⁻⁶-5×10⁻⁴ M (multivariate calibration) with mean errors under 10%; a semiquantitative determination of the BA concentration can be deduced by difference. Finally, Ch has been successfully determined in an infant milk sample. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Laser-Induced Breakdown Spectroscopy (LIBS) Measurement of Uranium in Molten Salt.

    PubMed

    Williams, Ammon; Phongikaroon, Supathorn

    2018-01-01

    In this current study, the molten salt aerosol-laser-induced breakdown spectroscopy (LIBS) system was used to measure the uranium (U) content in a ternary UCl3-LiCl-KCl salt to investigate and assess a near real-time analytical approach for material safeguards and accountability. Experiments were conducted using five different U concentrations to determine the analytical figures of merit for the system with respect to U. In the analysis, three U lines were used to develop univariate calibration curves at the 367.01 nm, 385.96 nm, and 387.10 nm lines. The 367.01 nm line had the lowest limit of detection (LOD) of 0.065 wt% U. The 385.96 nm line had the best root mean square error of cross-validation (RMSECV) of 0.20 wt% U. In addition to the univariate calibration approach, a multivariate partial least squares (PLS) model was developed to further analyze the data. Using PLS modeling, an RMSECV of 0.085 wt% U was determined. The RMSECV from the multivariate approach was significantly better than in the univariate case, and the PLS model is recommended for future LIBS analysis. Overall, the aerosol-LIBS system performed well in monitoring the U concentration, and it is expected that the system could be used to quantitatively determine U compositions within the normal operational concentrations of U in pyroprocessing molten salts.
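A univariate calibration curve of the kind described, together with a 3σ limit of detection, can be sketched as follows. The slope, intercept, noise level, and concentrations are hypothetical, not the measured U line intensities:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical calibration standards: concentration (wt%) vs line intensity,
# with an assumed linear response (slope 120, offset 30, noise sd 4).
conc = np.array([0.5, 1.0, 2.0, 3.0, 5.0])
intensity = 120.0 * conc + 30.0 + rng.normal(0.0, 4.0, conc.size)
slope, intercept = np.polyfit(conc, intensity, 1)

# Limit of detection via the 3-sigma criterion on blank replicates.
blank = 30.0 + rng.normal(0.0, 4.0, 10)
lod = 3.0 * blank.std(ddof=1) / slope

# Inverse prediction of an unknown from its measured intensity.
measured = 120.0 * 2.5 + 30.0          # noise-free "measurement" at 2.5 wt%
unknown = (measured - intercept) / slope
```

The multivariate PLS model reported in the abstract improves on this by using many spectral channels at once instead of a single emission line.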

  8. Determination of Leaf Water Content by Visible and Near-Infrared Spectrometry and Multivariate Calibration in Miscanthus

    DOE PAGES

    Jin, Xiaoli; Shi, Chunhai; Yu, Chang Yeon; ...

    2017-05-19

    Leaf water content is one of the most common physiological parameters limiting efficiency of photosynthesis and biomass productivity in plants including Miscanthus. Therefore, it is of great significance to determine or predict the water content quickly and non-destructively. In this study, we explored the relationship between leaf water content and diffuse reflectance spectra in Miscanthus. Three multivariate calibrations including partial least squares (PLS), least squares support vector machine regression (LSSVR), and radial basis function (RBF) neural network (NN) were developed for the models of leaf water content determination. The non-linear models including RBF_LSSVR and RBF_NN showed higher accuracy than the PLS and Lin_LSSVR models. Moreover, 75 sensitive wavelengths were identified to be closely associated with the leaf water content in Miscanthus. The RBF_LSSVR and RBF_NN models for predicting leaf water content, based on 75 characteristic wavelengths, obtained the high determination coefficients of 0.9838 and 0.9899, respectively. The results indicated the non-linear models were more accurate than the linear models using both wavelength intervals. These results demonstrated that visible and near-infrared (VIS/NIR) spectroscopy combined with RBF_LSSVR or RBF_NN is a useful, non-destructive tool for determinations of the leaf water content in Miscanthus, and thus very helpful for development of drought-resistant varieties in Miscanthus.
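Since PLS recurs throughout these records, a compact PLS1 implementation via the NIPALS algorithm may be useful. The synthetic "spectra" below are built from two latent components; this is a generic sketch, not the Miscanthus data or the exact software used in the study:

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic "spectra": 60 samples x 50 wavelengths built from two latent
# spectral components; the response y depends linearly on the latent scores.
n_samp, n_wave = 60, 50
basis = np.vstack([np.sin(np.linspace(0, 3, n_wave)),
                   np.cos(np.linspace(0, 5, n_wave))])
scores = rng.normal(size=(n_samp, 2))
X = scores @ basis + rng.normal(0.0, 0.05, (n_samp, n_wave))
y = scores @ np.array([1.0, 0.5]) + rng.normal(0.0, 0.05, n_samp)

def pls1(X, y, ncomp):
    """PLS1 regression via NIPALS; returns coefficients and centering terms."""
    xm, ym = X.mean(axis=0), y.mean()
    Xk, yk = X - xm, y - ym
    W, P, Q = [], [], []
    for _ in range(ncomp):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)         # weight vector
        t = Xk @ w                     # scores
        p_load = Xk.T @ t / (t @ t)    # X loadings
        q = yk @ t / (t @ t)           # y loading
        Xk = Xk - np.outer(t, p_load)  # deflate X
        yk = yk - q * t                # deflate y
        W.append(w); P.append(p_load); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)  # B = W (P^T W)^{-1} q
    return B, xm, ym

B, xm, ym = pls1(X, y, 2)
pred = (X - xm) @ B + ym
rmsec = np.sqrt(np.mean((pred - y) ** 2))
```

Two latent variables are enough here because the data were generated from two components; in practice the number of components is chosen by cross-validation (minimizing RMSECV).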

  9. Determination of Leaf Water Content by Visible and Near-Infrared Spectrometry and Multivariate Calibration in Miscanthus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Xiaoli; Shi, Chunhai; Yu, Chang Yeon

    Leaf water content is one of the most common physiological parameters limiting efficiency of photosynthesis and biomass productivity in plants including Miscanthus. Therefore, it is of great significance to determine or predict the water content quickly and non-destructively. In this study, we explored the relationship between leaf water content and diffuse reflectance spectra in Miscanthus. Three multivariate calibrations including partial least squares (PLS), least squares support vector machine regression (LSSVR), and radial basis function (RBF) neural network (NN) were developed for the models of leaf water content determination. The non-linear models including RBF_LSSVR and RBF_NN showed higher accuracy than the PLS and Lin_LSSVR models. Moreover, 75 sensitive wavelengths were identified to be closely associated with the leaf water content in Miscanthus. The RBF_LSSVR and RBF_NN models for predicting leaf water content, based on 75 characteristic wavelengths, obtained the high determination coefficients of 0.9838 and 0.9899, respectively. The results indicated the non-linear models were more accurate than the linear models using both wavelength intervals. These results demonstrated that visible and near-infrared (VIS/NIR) spectroscopy combined with RBF_LSSVR or RBF_NN is a useful, non-destructive tool for determinations of the leaf water content in Miscanthus, and thus very helpful for development of drought-resistant varieties in Miscanthus.

  10. Application of near-infrared spectroscopy for the rapid quality assessment of Radix Paeoniae Rubra

    NASA Astrophysics Data System (ADS)

    Zhan, Hao; Fang, Jing; Tang, Liying; Yang, Hongjun; Li, Hua; Wang, Zhuju; Yang, Bin; Wu, Hongwei; Fu, Meihong

    2017-08-01

    Near-infrared (NIR) spectroscopy with multivariate analysis was used to quantify gallic acid, catechin, albiflorin, and paeoniflorin in Radix Paeoniae Rubra, and the feasibility of classifying samples originating from different areas was investigated. A new high-performance liquid chromatography method was developed and validated to analyze gallic acid, catechin, albiflorin, and paeoniflorin in Radix Paeoniae Rubra as the reference. Partial least squares (PLS), principal component regression (PCR), and stepwise multivariate linear regression (SMLR) were performed to calibrate the regression model. Different data pretreatments such as derivatives (1st and 2nd), multiplicative scatter correction, standard normal variate, Savitzky-Golay filter, and Norris derivative filter were applied to remove the systematic errors. The performance of the model was evaluated according to the root mean square error of calibration (RMSEC), root mean square error of prediction (RMSEP), root mean square error of cross-validation (RMSECV), and correlation coefficient (r). The results show that compared to PCR and SMLR, PLS had lower RMSEC, RMSECV, and RMSEP and higher r for all four analytes. PLS coupled with proper pretreatments showed good performance in both the fitting and predicting results. Furthermore, the geographic origins of the Radix Paeoniae Rubra samples were partly distinguished by principal component analysis. This study shows that NIR with PLS is a reliable, inexpensive, and rapid tool for the quality assessment of Radix Paeoniae Rubra.
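Among the pretreatments listed, the standard normal variate (SNV) transform is simple enough to show in full: each spectrum is centered and scaled by its own mean and standard deviation. In this synthetic example (the Gaussian band shape and scatter ranges are invented) SNV exactly removes per-sample multiplicative gain and additive baseline:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic NIR-like spectra: one common band shape distorted per sample by
# multiplicative scatter (gain) and an additive baseline (offset).
wavelengths = np.linspace(1000.0, 2500.0, 200)
shape = np.exp(-((wavelengths - 1700.0) / 150.0) ** 2)
gain = rng.uniform(0.8, 1.2, (20, 1))
offset = rng.uniform(-0.1, 0.1, (20, 1))
spectra = gain * shape + offset

# SNV: center and scale each spectrum by its own mean and standard deviation.
snv = (spectra - spectra.mean(axis=1, keepdims=True)) \
      / spectra.std(axis=1, keepdims=True)

# Across-sample spread at the worst wavelength, before and after SNV.
spread_raw = spectra.std(axis=0).max()
spread_snv = snv.std(axis=0).max()
```

After SNV the scatter-distorted spectra collapse onto a single curve, which is why such pretreatments improve the downstream PLS/PCR/SMLR calibrations.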

  11. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    USGS Publications Warehouse

    Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby

    2017-01-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test-set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
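The sub-model idea can be sketched on a one-dimensional toy problem with a kinked signal-concentration relationship: a single global model is biased, while range-limited sub-models, selected by a first-pass global prediction and refined iteratively, recover the truth. This hard-assignment version is a simplification; the actual ChemCam approach blends weighted PLS sub-models trained on overlapping composition ranges.

```python
import numpy as np

rng = np.random.default_rng(7)

# Kinked signal-concentration relationship: one global linear inverse model
# cannot fit both branches, but range-limited sub-models can.
def make_signal(c):
    return np.where(c < 50, 2.0 * c, 100.0 + 0.5 * (c - 50.0))

conc = rng.uniform(0.0, 100.0, 300)
signal = make_signal(conc) + rng.normal(0.0, 1.0, conc.size)

def fit(c, s):
    return np.polyfit(s, c, 1)  # linear model predicting concentration

full = fit(conc, signal)
low = fit(conc[conc < 50], signal[conc < 50])
high = fit(conc[conc >= 50], signal[conc >= 50])

def blended_predict(s):
    guess = np.polyval(full, s)          # first pass: global model
    for _ in range(2):                   # refine sub-model assignment
        guess = np.where(guess < 50, np.polyval(low, s), np.polyval(high, s))
    return guess

test_c = rng.uniform(0.0, 100.0, 100)
test_s = make_signal(test_c) + rng.normal(0.0, 1.0, test_c.size)
rmsep_full = np.sqrt(np.mean((np.polyval(full, test_s) - test_c) ** 2))
rmsep_sub = np.sqrt(np.mean((blended_predict(test_s) - test_c) ** 2))
```

The sub-model prediction error (RMSEP) drops well below the global model's, mirroring the improvement the abstract reports for almost all test cases.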

  12. Nearest neighbors by neighborhood counting.

    PubMed

    Wang, Hui

    2006-06-01

    Finding nearest neighbors is a general idea that underlies many artificial intelligence tasks, including machine learning, data mining, natural language understanding, and information retrieval. This idea is explicitly used in the k-nearest neighbors algorithm (kNN), a popular classification method. In this paper, this idea is adopted in the development of a general methodology, neighborhood counting, for devising similarity functions. We turn our focus from neighbors to neighborhoods, a region in the data space covering the data point in question. To measure the similarity between two data points, we consider all neighborhoods that cover both data points, and we propose to use the number of such neighborhoods as a measure of similarity. A neighborhood can be defined for different types of data in different ways. Here, we consider one definition of neighborhood for multivariate data and derive a formula for the resulting similarity, called the neighborhood counting measure, or NCM. NCM was tested experimentally in the framework of kNN. Experiments show that NCM is generally comparable to VDM and its variants, the state-of-the-art distance functions for multivariate data, and, at the same time, is consistently better for relatively large k values. Additionally, NCM consistently outperforms HEOM (a mixture of Euclidean and Hamming distances), the "standard" and most widely used distance function for multivariate data. NCM has a computational complexity of the same order as the standard Euclidean distance function, is task independent, and works for numerical and categorical data in a conceptually uniform way. The neighborhood counting methodology is thus shown experimentally to be sound for multivariate data. We hope it will work for other types of data as well.
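As a point of reference for the comparison above, the HEOM-based kNN baseline that NCM is said to outperform can be sketched as follows. The records, attributes, and labels are invented, and the NCM formula itself is deliberately not reproduced here:

```python
# Hypothetical mixed-type records: (age, income, colour) with class labels.
data = [
    (25, 30000, "red"), (27, 32000, "red"), (45, 80000, "blue"),
    (50, 90000, "blue"), (23, 28000, "red"), (48, 85000, "blue"),
]
labels = ["low", "low", "high", "high", "low", "high"]

age_rng = max(r[0] for r in data) - min(r[0] for r in data)
inc_rng = max(r[1] for r in data) - min(r[1] for r in data)

def heom(a, b):
    """HEOM: range-normalized difference for numeric attributes,
    0/1 overlap for categorical ones, combined Euclidean-style."""
    d_age = abs(a[0] - b[0]) / age_rng
    d_inc = abs(a[1] - b[1]) / inc_rng
    d_col = 0.0 if a[2] == b[2] else 1.0
    return (d_age**2 + d_inc**2 + d_col**2) ** 0.5

def knn_predict(query, k=3):
    order = sorted(range(len(data)), key=lambda i: heom(query, data[i]))
    votes = [labels[i] for i in order[:k]]
    return max(set(votes), key=votes.count)

pred = knn_predict((26, 31000, "red"))  # nearest records are all "low"
```

NCM slots into exactly this framework: only the distance/similarity function changes, which is why the paper can compare the measures head-to-head across k values.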

  13. Gaussian Mixture Models of Between-Source Variation for Likelihood Ratio Computation from Multivariate Data

    PubMed Central

    Franco-Pedroso, Javier; Ramos, Daniel; Gonzalez-Rodriguez, Joaquin

    2016-01-01

    In forensic science, trace evidence found at a crime scene and on a suspect has to be evaluated from the measurements performed on it, usually in the form of multivariate data (for example, several chemical compounds or physical characteristics). In order to assess the strength of that evidence, the likelihood ratio framework is being increasingly adopted. Several methods have been derived in order to obtain likelihood ratios directly from univariate or multivariate data by modelling both the variation appearing between observations (or features) coming from the same source (within-source variation) and that appearing between observations coming from different sources (between-source variation). In the widely used multivariate kernel likelihood ratio, the within-source distribution is assumed to be normally distributed and constant among different sources, and the between-source variation is modelled through a kernel density function (KDF). In order to better fit the observed distribution of the between-source variation, this paper presents a different approach in which a Gaussian mixture model (GMM) is used instead of a KDF. As will be shown, this approach provides better-calibrated likelihood ratios as measured by the log-likelihood ratio cost (Cllr) in experiments performed on freely available forensic datasets involving different types of trace evidence: inks, glass fragments and car paints. PMID:26901680
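
    The numerator/denominator structure of such a likelihood ratio can be sketched in a univariate toy form. The paper's method is multivariate and integrates over the suspect source's mean; the densities and numbers below are purely illustrative:

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def gmm_pdf(x, weights, means, sigmas):
    # Between-source model: a Gaussian mixture fitted over source means.
    return sum(w * normal_pdf(x, m, s) for w, m, s in zip(weights, means, sigmas))

def likelihood_ratio(x, suspect_mean, within_sigma, gmm):
    # Numerator: probability of the trace under the same-source hypothesis
    # (within-source density around the suspect's mean).
    # Denominator: probability under the different-source hypothesis
    # (between-source GMM over the relevant population).
    weights, means, sigmas = gmm
    return normal_pdf(x, suspect_mean, within_sigma) / gmm_pdf(x, weights, means, sigmas)
```

    A measurement close to the suspect's mean yields LR > 1 (support for the same-source hypothesis); a distant one yields LR < 1.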

  14. Segmented Gamma Scanner for Small Containers of Uranium Processing Waste- 12295

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, K.E.; Smith, S.K.; Gailey, S.

    2012-07-01

    The Segmented Gamma Scanner (SGS) is commonly utilized in the assay of 55-gallon drums containing radioactive waste. Successfully deployed calibration methods include measurement of vertical line source standards in representative matrices and mathematical efficiency calibrations. The SGS technique can also be utilized to assay smaller containers, such as those used for criticality safety in uranium processing facilities. For such an application, a Can SGS System is aptly suited for the identification and quantification of radionuclides present in fuel processing wastes. Additionally, since the significant presence of uranium lumping can confound even a simple 'pass/fail' measurement regimen, the high-resolution gamma spectroscopy allows for the use of lump-detection techniques. In this application a lump correction is not required, but the application of a differential peak approach is used to simply identify the presence of U-235 lumps. The Can SGS is similar to current drum SGSs, but differs in the methodology for vertical segmentation. In the current drum SGS, the drum is placed on a rotator at a fixed vertical position while the detector, collimator, and transmission source are moved vertically to effect vertical segmentation. For the Can SGS, segmentation is more efficiently done by raising and lowering the rotator platform upon which the small container is positioned. This also reduces the complexity of the system mechanism. The application of the Can SGS introduces new challenges to traditional calibration and verification approaches. In this paper, we revisit SGS calibration methodology in the context of smaller waste containers, and as applied to fuel processing wastes. Specifically, we discuss solutions to the challenges introduced by requiring source standards to fit within the confines of the small containers and the unavailability of high-enriched uranium source standards.
We also discuss the implementation of a previously used technique for identifying the presence of uranium lumping. The SGS technique is a well-accepted NDA technique applicable to containers of almost any size. It assumes a homogeneous matrix and activity distribution throughout the entire container; an assumption that is at odds with the detection of lumps within the assay item typical of uranium-processing waste. This fact, in addition to the difficulty in constructing small reference standards of uranium-bearing materials, required the methodology used for performing an efficiency curve calibration to be altered. The solution discussed in this paper is demonstrated to provide good results for both the segment activity and full container activity when measuring heterogeneous source distributions. The application of this approach will need to be based on process knowledge of the assay items, as biases can be introduced if used with homogeneous, or nearly homogeneous, activity distributions. The bias will need to be quantified for each combination of container geometry and SGS scanning settings. One recommended approach for using the heterogeneous calibration discussed here is to assay each item using a homogeneous calibration initially. Review of the segment activities compared to the full container activity will signal the presence of a non-uniform activity distribution as the segment activity will be grossly disproportionate to the full container activity. Upon seeing this result, the assay should either be reanalyzed or repeated using the heterogeneous calibration. (authors)

  15. Calibration Challenges and Improvements for Terra and Aqua MODIS Level-1B Data Product Quality

    NASA Astrophysics Data System (ADS)

    Xiong, X.; Angal, A.; Chen, H.; Geng, X.; Li, Y.; Link, D.; Salomonson, V.; Twedt, K.; Wang, Z.; Wilson, T.; Wu, A.

    2017-12-01

    Terra and Aqua MODIS instruments launched in 1999 and 2002, respectively, have provided the remote sensing community and users worldwide a series of high quality long-term data records. They have enabled a broad range of scientific studies of the Earth's system and changes in its key geophysical and environmental parameters. To date, both MODIS instruments continue to operate nominally with all on-board calibrators (OBC) functioning properly. MODIS reflective solar bands (RSB) are currently calibrated by a solar diffuser (SD) and solar diffuser stability monitor (SDSM) system, coupled with regularly scheduled lunar observations and trending results from selected ground reference targets. The thermal emissive bands (TEB) calibration is performed using an on-board blackbody (BB) on a scan-by-scan basis. The sensor's spectral and spatial characteristics are periodically tracked by the on-board spectroradiometric calibration assembly (SRCA). On-orbit changes in sensor responses or performance characteristics, often in non-deterministic ways, underscore the need for dedicated calibration efforts to be made over the course of the sensor's entire mission. For MODIS instruments, this task has been undertaken by the MODIS Characterization Support Team (MCST). In this presentation, we provide an overview of MODIS instrument operation and calibration activities with a focus on recent challenging issues. We describe the efforts made and methodologies developed to address various challenging issues, including on-orbit characterization of sensor response versus scan angle (RVS) and polarization sensitivities in the reflective solar spectral region, and electronic crosstalk impact on sensor calibration. Also discussed are the latest improvements made to the MODIS Level-1B data products as well as lessons that could benefit other instruments (e.g. VIIRS) for their on-orbit calibration and characterization.

  16. On the Long-Term Stability of Microwave Radiometers Using Noise Diodes for Calibration

    NASA Technical Reports Server (NTRS)

    Brown, Shannon T.; Desai, Shailen; Lu, Wenwen; Tanner, Alan B.

    2007-01-01

    Results are presented from the long-term monitoring and calibration of the National Aeronautics and Space Administration Jason Microwave Radiometer (JMR) on the Jason-1 ocean altimetry satellite and the ground-based Advanced Water Vapor Radiometers (AWVRs) developed for the Cassini Gravity Wave Experiment. Both radiometers retrieve the wet tropospheric path delay (PD) of the atmosphere and use internal noise diodes (NDs) for gain calibration. The JMR is the first radiometer to be flown in space that uses NDs for calibration. External calibration techniques are used to derive a time series of ND brightness for both instruments that spans more than four years. For the JMR, an optimal estimator is used to find the set of calibration coefficients that minimize the root-mean-square difference between the JMR brightness temperatures and the on-Earth hot and cold references. For the AWVR, continuous tip curves are used to derive the ND brightness. For the JMR and AWVR, both of which contain three redundant NDs per channel, it was observed that some NDs were very stable, whereas others experienced jumps and drifts in their effective brightness. Over the four-year time period, the ND stability ranged from 0.2% to 3% among the diodes for both instruments. The presented recalibration methodology demonstrates that long-term calibration stability can be achieved with frequent recalibration of the diodes using external calibration techniques. The JMR PD drift compared to ground truth over the four years since the launch was reduced from 3.9 to -0.01 mm/year with the recalibrated ND time series. The JMR brightness temperature calibration stability is estimated to be 0.25 K over ten days.

  17. Biases in Multicenter Longitudinal PET Standardized Uptake Value Measurements

    PubMed Central

    Doot, Robert K; Pierce, Larry A; Byrd, Darrin; Elston, Brian; Allberg, Keith C; Kinahan, Paul E

    2014-01-01

    This study investigates measurement biases in longitudinal positron-emission tomography/computed tomography (PET/CT) studies that are due to instrumentation variability including human error. Improved estimation of variability between patient scans is of particular importance for assessing response to therapy and multicenter trials. We used National Institute of Standards and Technology-traceable calibration methodology for solid germanium-68/gallium-68 (68Ge/68Ga) sources used as surrogates for fluorine-18 (18F) in radionuclide activity calibrators. One cross-calibration kit was constructed for both dose calibrators and PET scanners using the same 9-month half-life batch of 68Ge/68Ga in epoxy. Repeat measurements occurred in a local network of PET imaging sites to assess standardized uptake value (SUV) errors over time for six dose calibrators from two major manufacturers and for six PET/CT scanners from three major manufacturers. Bias in activity measures by dose calibrators ranged from -50% to 9% and was relatively stable over time except at one site that modified settings between measurements. Bias in activity concentration measures by PET scanners ranged from -27% to 13% with a median of 174 days between the six repeat scans (range, 29 to 226 days). Corresponding errors in SUV measurements ranged from -20% to 47%. SUV biases were not stable over time with longitudinal differences for individual scanners ranging from -11% to 59%. Bias in SUV measurements varied over time and between scanner sites. These results suggest that attention should be paid to PET scanner calibration for longitudinal studies and use of dose calibrator and scanner cross-calibration kits could be helpful for quality assurance and control. PMID:24772207
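
    Since SUV scales the imaged activity concentration by injected dose per unit body weight, calibration biases in the scanner and the dose calibrator propagate directly into SUV. A minimal sketch with purely illustrative numbers (not the study's data):

```python
def suv(activity_conc, injected_dose, body_weight):
    """Standardized uptake value: tissue activity concentration divided by
    injected dose per unit body weight. Units must be consistent, e.g.
    Bq/mL for concentration, Bq for dose, g for weight."""
    return activity_conc / (injected_dose / body_weight)

def suv_bias(scanner_bias, calibrator_bias):
    """Fractional SUV error when the PET scanner over/under-reads activity
    concentration by scanner_bias and the dose calibrator over/under-reads
    the injected dose by calibrator_bias (both as fractions)."""
    return (1.0 + scanner_bias) / (1.0 + calibrator_bias) - 1.0
```

    For example, a scanner reading 13% high combined with a dose calibrator reading 50% low inflates SUV by 126%, which is why cross-calibrating both instruments against the same traceable source matters.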

  18. Calibration of multivariate scatter plots for exploratory analysis of relations within and between sets of variables in genomic research.

    PubMed

    Graffelman, Jan; van Eeuwijk, Fred

    2005-12-01

    The scatter plot is a well known and easily applicable graphical tool to explore relationships between two quantitative variables. For the exploration of relations between multiple variables, generalisations of the scatter plot are useful. We present an overview of multivariate scatter plots focussing on the following situations. Firstly, we look at a scatter plot for portraying relations between quantitative variables within one data matrix. Secondly, we discuss a similar plot for the case of qualitative variables. Thirdly, we describe scatter plots for the relationships between two sets of variables where we focus on correlations. Finally, we treat plots of the relationships between multiple response and predictor variables, focussing on the matrix of regression coefficients. We will present both known and new results, where an important original contribution concerns a procedure for the inclusion of scales for the variables in multivariate scatter plots. We provide software for drawing such scales. We illustrate the construction and interpretation of the plots by means of examples on data collected in a genomic research program on taste in tomato.

  19. A Comparison of Multivariate and Pre-Processing Methods for Quantitative Laser-Induced Breakdown Spectroscopy of Geologic Samples

    NASA Technical Reports Server (NTRS)

    Anderson, R. B.; Morris, R. V.; Clegg, S. M.; Bell, J. F., III; Humphries, S. D.; Wiens, R. C.

    2011-01-01

    The ChemCam instrument selected for the Curiosity rover is capable of remote laser-induced breakdown spectroscopy (LIBS).[1] We used a remote LIBS instrument similar to ChemCam to analyze 197 geologic slab samples and 32 pressed-powder geostandards. The slab samples are well-characterized and have been used to validate the calibration of previous instruments on Mars missions, including CRISM [2], OMEGA [3], the MER Pancam [4], Mini-TES [5], and Moessbauer [6] instruments and the Phoenix SSI [7]. The resulting dataset was used to compare multivariate methods for quantitative LIBS and to determine the effect of grain size on calculations. Three multivariate methods - partial least squares (PLS), multilayer perceptron artificial neural networks (MLP ANNs) and cascade correlation (CC) ANNs - were used to generate models and extract the quantitative composition of unknown samples. PLS can be used to predict one element (PLS1) or multiple elements (PLS2) at a time, as can the neural network methods. Although MLP and CC ANNs were successful in some cases, PLS generally produced the most accurate and precise results.
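
    The accuracy comparison among the PLS and ANN models above is typically summarized by the root mean squared error of prediction (RMSEP) over held-out samples; a minimal sketch:

```python
def rmsep(y_true, y_pred):
    """Root mean squared error of prediction over a held-out test set:
    sqrt(mean of squared residuals between reference and predicted values)."""
    n = len(y_true)
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n) ** 0.5
```

    Lower RMSEP on unknowns (not the training set) is what distinguishes a genuinely accurate calibration from an overfit one.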

  20. Gain-scheduling multivariable LPV control of an irrigation canal system.

    PubMed

    Bolea, Yolanda; Puig, Vicenç

    2016-07-01

    The purpose of this paper is to present a multivariable linear parameter varying (LPV) controller with a gain scheduling Smith Predictor (SP) scheme applicable to open-flow canal systems. This LPV controller based on SP is designed taking into account the uncertainty in the estimation of delay and the variation of plant parameters according to the operating point. This new methodology can be applied to a class of delay systems that can be represented by a set of models that can be factorized into a rational multivariable model in series with left/right diagonal (multiple) delays, such as the case of irrigation canals. A multiple pool canal system is used to test and validate the proposed control approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  1. Wavelength selection-based nonlinear calibration for transcutaneous blood glucose sensing using Raman spectroscopy

    PubMed Central

    Dingari, Narahara Chari; Barman, Ishan; Kang, Jeon Woong; Kong, Chae-Ryon; Dasari, Ramachandra R.; Feld, Michael S.

    2011-01-01

    While Raman spectroscopy provides a powerful tool for noninvasive and real time diagnostics of biological samples, its translation to the clinical setting has been impeded by the lack of robustness of spectroscopic calibration models and the size and cumbersome nature of conventional laboratory Raman systems. Linear multivariate calibration models employing full spectrum analysis are often misled by spurious correlations, such as system drift and covariations among constituents. In addition, such calibration schemes are prone to overfitting, especially in the presence of external interferences that may create nonlinearities in the spectra-concentration relationship. To address both of these issues we incorporate residue error plot-based wavelength selection and nonlinear support vector regression (SVR). Wavelength selection is used to eliminate uninformative regions of the spectrum, while SVR is used to model the curved effects such as those created by tissue turbidity and temperature fluctuations. Using glucose detection in tissue phantoms as a representative example, we show that even a substantial reduction in the number of wavelengths analyzed using SVR leads to calibration models with prediction accuracy equivalent to that of linear full spectrum analysis. Further, with clinical datasets obtained from human subject studies, we also demonstrate the prospective applicability of the selected wavelength subsets without sacrificing prediction accuracy, which has extensive implications for calibration maintenance and transfer. Additionally, such wavelength selection could substantially reduce the collection time of serial Raman acquisition systems. Given the reduced footprint of serial Raman systems in relation to conventional dispersive Raman spectrometers, we anticipate that the incorporation of wavelength selection in such hardware designs will enhance the possibility of miniaturized clinical systems for disease diagnosis in the near future. PMID:21895336
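
    As a simplified stand-in for the residue-error-plot-based selection described above (not the authors' actual criterion), informative spectral channels can be ranked by the strength of their linear association with the reference concentrations:

```python
def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    if sxx == 0 or syy == 0:
        return 0.0                     # a flat channel carries no information
    return sxy / (sxx * syy) ** 0.5

def select_wavelengths(spectra, conc, k):
    """Rank spectral channels (columns of `spectra`, one row per sample) by
    |correlation| with the reference concentrations; return the top-k column
    indices in ascending order."""
    n_channels = len(spectra[0])
    scores = []
    for j in range(n_channels):
        channel = [row[j] for row in spectra]
        scores.append((abs(pearson_r(channel, conc)), j))
    scores.sort(reverse=True)
    return sorted(j for _, j in scores[:k])
```

    The surviving subset is then fed to the nonlinear regressor (SVR in the paper), so the model never sees the uninformative regions at all.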

  2. Physical resist models and their calibration: their readiness for accurate EUV lithography simulation

    NASA Astrophysics Data System (ADS)

    Klostermann, U. K.; Mülders, T.; Schmöller, T.; Lorusso, G. F.; Hendrickx, E.

    2010-04-01

    In this paper, we discuss the performance of EUV resist models in terms of predictive accuracy, and we assess the readiness of the corresponding model calibration methodology. The study is done on an extensive OPC data set collected at IMEC for the ShinEtsu resist SEVR-59 on the ASML EUV Alpha Demo Tool (ADT), with the data set including more than a thousand CD values. We address practical aspects such as the speed of calibration and selection of calibration patterns. The model is calibrated on 12 process window data series varying in pattern width (32, 36, 40 nm), orientation (H, V) and pitch (dense, isolated). The minimum measured feature size at nominal process condition is a 32 nm CD at a dense pitch of 64 nm. Mask metrology is applied to verify and eventually correct the nominal width of the drawn CD. Cross-sectional SEM information is included in the calibration to tune the simulated resist loss and sidewall angle. The achieved calibration RMS is ~ 1.0 nm. We show what elements are important to obtain a well calibrated model. We discuss the impact of 3D mask effects on the Bossung tilt. We demonstrate that a correct representation of the flare level during the calibration is important to achieve a high predictability at various flare conditions. Although the model calibration is performed on a limited subset of the measurement data (one dimensional structures only), its accuracy is validated based on a large number of OPC patterns (at nominal dose and focus conditions) not included in the calibration; validation RMS results as small as 1 nm can be reached. Furthermore, we study the model's extendibility to two-dimensional end of line (EOL) structures. Finally, we correlate the experimentally observed fingerprint of the CD uniformity to a model, where EUV tool specific signatures are taken into account.

  3. State Pupil Transportation Funding: Equity and Efficiency.

    ERIC Educational Resources Information Center

    Zeitlin, Laurie S.

    1990-01-01

    Explores the influences state departments of education have on the cost and quality of pupil transportation. Evaluates the following state funding methodologies: (1) actual costs incurred; (2) a flat rate per unit; or (3) a multivariate calculation in providing service efficiently and equitably between districts. (MLF)

  4. Association between bibliometric parameters, reporting and methodological quality of randomised controlled trials in vascular and endovascular surgery.

    PubMed

    Hajibandeh, Shahab; Hajibandeh, Shahin; Antoniou, George A; Green, Patrick A; Maden, Michelle; Torella, Francesco

    2017-04-01

    Purpose We aimed to investigate the association between bibliometric parameters, reporting and methodological quality of vascular and endovascular surgery randomised controlled trials. Methods The most recent 75 and oldest 75 randomised controlled trials published in leading journals over a 10-year period were identified. The reporting quality was analysed using the CONSORT statement, and methodological quality with the Intercollegiate Guidelines Network checklist. We used exploratory univariate and multivariable linear regression analysis to investigate associations. Findings Bibliometric parameters such as type of journal, study design reported in title, number of pages, external funding, industry sponsoring and number of citations are associated with reporting quality. Moreover, parameters such as type of journal, subject area and study design reported in title are associated with methodological quality. Conclusions The bibliometric parameters of randomised controlled trials may be independent predictors of their reporting and methodological quality. Moreover, the reporting quality of randomised controlled trials is associated with their methodological quality and vice versa.

  5. Effects of Contamination Upon the Performance of X-Ray Telescopes

    NASA Technical Reports Server (NTRS)

    O'Dell, Stephen L.; Elsner, Ronald F.; Oosterbroek, Tim

    2010-01-01

    Particulate and molecular contamination can each impact the performance of x-ray telescope systems. Furthermore, any changes in the level of contamination between on-ground calibration and in-space operation can compromise the validity of the calibration. Thus, it is important to understand the sensitivity of telescope performance, especially the net effective area and the wings of the point spread function to contamination. Here, we quantify this sensitivity and discuss the flow-down of science requirements to contamination-control requirements. As an example, we apply this methodology to the International X-ray Observatory (IXO), currently under joint study by ESA, JAXA, and NASA.

  6. Effects of contamination upon the performance of x-ray telescopes

    NASA Astrophysics Data System (ADS)

    O'Dell, Stephen L.; Elsner, Ronald F.; Oosterbroek, Tim

    2010-07-01

    Particulate and molecular contamination can each impact the performance of x-ray telescope systems. Furthermore, any changes in the level of contamination between on-ground calibration and in-space operation can compromise the validity of the calibration. Thus, it is important to understand the sensitivity of telescope performance---especially the net effective area and the wings of the point spread function---to contamination. Here, we quantify this sensitivity and discuss the flow-down of science requirements to contamination-control requirements. As an example, we apply this methodology to the International X-ray Observatory (IXO), currently under joint study by ESA, JAXA, and NASA.

  7. ISS Payload Racks Automated Flow Control Calibration Method

    NASA Technical Reports Server (NTRS)

    Simmonds, Boris G.

    2003-01-01

    Payload racks utilize MTL and/or LTL station water for cooling of payloads and avionics. Flow control ranges from valves fully closed up to 300 lbm/hr. Instrument accuracies are as high as ±7.5 lbm/hr for flow sensors and ±3 lbm/hr for the valve controller, for a total system accuracy of ±10.5 lbm/hr. An improved methodology was developed, tested and proven that reduces the error of the commanded flows to less than ±1 lbm/hr. The methodology could be packaged in a "calibration kit" for on-orbit flow sensor checkout and recalibration, extending rack operations before return to Earth.

  8. Training and calibration of interviewers for oral health literacy using the BREALD-30 in epidemiological studies.

    PubMed

    Vilella, Karina Duarte; Assunção, Luciana Reichert da Silva; Junkes, Mônica Carmem; Menezes, José Vitor Nogara Borges de; Fraiz, Fabian Calixto; Ferreira, Fernanda de Morais

    2016-08-22

    The objective of this study was to describe an interviewer training and calibration method to evaluate oral health literacy using the Brazilian Rapid Estimate of Adult Literacy in Dentistry (BREALD-30) in epidemiological studies. An experienced researcher (gold standard) conducted all training sessions. The interviewer training and calibration sessions included three different phases: theoretical training, practical training, and calibration. In the calibration phase, six interviewers (dentists) independently assessed 15 videos of individuals who had different levels of oral health literacy. Accuracy and reproducibility were evaluated using the kappa coefficient and the intraclass correlation coefficient (ICC). The percentage of agreement for each word in the instrument was also calculated. After training, the kappa values were higher than 0.911 and 0.893 for intra- and inter-rater agreement, respectively. When the results were analyzed separately for the different levels of literacy, the lowest agreement rate was found when evaluating the videos of individuals with low literacy (K = 0.871), but still within the range considered to be near-perfect agreement. The ICC values were higher than 0.990 and 0.975 for intra- and inter-rater agreement, respectively. The lowest percentage of agreement was 86.6% for the word "hipoplasia" (hypoplasia). This interviewer training and calibration method proved to be feasible and effective. Therefore, it can be used as a methodological tool in studies assessing oral health literacy using the BREALD-30.
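
    The inter-rater agreement statistics reported above rest on Cohen's kappa, which discounts the agreement two raters would reach by chance; a minimal sketch for two raters scoring the same items:

```python
def cohen_kappa(rater1, rater2):
    """Cohen's kappa for two raters scoring the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    labels = set(rater1) | set(rater2)
    # Chance agreement: product of each rater's marginal label frequencies.
    chance = sum((rater1.count(c) / n) * (rater2.count(c) / n) for c in labels)
    return (observed - chance) / (1.0 - chance)
```

    For example, two raters agreeing on 3 of 4 binary scores with balanced marginals give kappa = 0.5, well below the near-perfect values (> 0.89) achieved after the training described in the study.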

  9. Integrated HPTLC-based Methodology for the Tracing of Bioactive Compounds in Herbal Extracts Employing Multivariate Chemometrics. A Case Study on Morus alba.

    PubMed

    Chaita, Eliza; Gikas, Evagelos; Aligiannis, Nektarios

    2017-03-01

    In drug discovery, bioassay-guided isolation is a well-established procedure, and still the basic approach for the discovery of natural products with desired biological properties. However, in these procedures, the most laborious and time-consuming step is the isolation of the bioactive constituents. A prior identification of the compounds that contribute to the demonstrated activity of the fractions would enable the selection of proper chromatographic techniques and lead to targeted isolation. Objective - The development of an integrated HPTLC-based methodology for the rapid tracing of the bioactive compounds during bioassay-guided processes, using multivariate statistics. Materials and Methods - The methanol extract of Morus alba was fractionated employing CPC. Subsequently, fractions were assayed for tyrosinase inhibition and analyzed with HPTLC. The PLS-R algorithm was performed in order to correlate the analytical data with the biological response of the fractions and identify the compounds with the highest contribution. Two methodologies were developed for the generation of the dataset; one based on manual peak picking and the second based on chromatogram binning. Results and Discussion - Both methodologies afforded comparable results and were able to trace the bioactive constituents (e.g. oxyresveratrol, trans-dihydromorin, 2,4,3'-trihydroxydihydrostilbene). The suggested compounds were compared in terms of Rf values and UV spectra with compounds isolated from M. alba using a typical bioassay-guided process. Conclusion - Chemometric tools supported the development of a novel HPTLC-based methodology for the tracing of tyrosinase inhibitors in M. alba extract. All steps of the experimental procedure implemented techniques that afford essential key elements for application in high-throughput screening procedures for drug discovery purposes. Copyright © 2017 John Wiley & Sons, Ltd.

  10. Systematic design of membership functions for fuzzy-logic control: A case study on one-stage partial nitritation/anammox treatment systems.

    PubMed

    Boiocchi, Riccardo; Gernaey, Krist V; Sin, Gürkan

    2016-10-01

    A methodology is developed to systematically design the membership functions of fuzzy-logic controllers for multivariable systems. The methodology consists of a systematic derivation of the critical points of the membership functions as a function of predefined control objectives. Several constrained optimization problems corresponding to different qualitative operation states of the system are defined and solved to identify, in a consistent manner, the critical points of the membership functions for the input variables. The consistently identified critical points, together with the linguistic rules, determine the long term reachability of the control objectives by the fuzzy logic controller. The methodology is highlighted using a single-stage side-stream partial nitritation/Anammox reactor as a case study. As a result, a new fuzzy-logic controller for high and stable total nitrogen removal efficiency is designed. Rigorous simulations are carried out to evaluate and benchmark the performance of the controller. The results demonstrate that the novel control strategy is capable of rejecting the long-term influent disturbances, and can achieve a stable and high TN removal efficiency. Additionally, the controller was tested, and showed robustness, against measurement noise levels typical for wastewater sensors. A feedforward-feedback configuration using the present controller would give even better performance. In comparison, a previously developed fuzzy-logic controller using merely expert and intuitive knowledge performed worse. This proved the importance of using a systematic methodology for the derivation of the membership functions for multivariable systems. These results are promising for future applications of the controller in real full-scale plants. Furthermore, the methodology can be used as a tool to help systematically design fuzzy logic control applications for other biological processes. Copyright © 2016 Elsevier Ltd. All rights reserved.
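
    The "critical points" that the optimization identifies are, for instance, the breakpoints of triangular membership functions; a generic sketch with hypothetical breakpoint values (the actual points come from the paper's constrained optimization):

```python
def triangular_mf(x, a, b, c):
    """Triangular membership function with critical points a < b < c:
    degree 0 at or below a, rising linearly to 1 at b, falling linearly
    back to 0 at or above c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)
```

    For example, the degree to which a measured nitrogen concentration belongs to a "high" fuzzy set is obtained by evaluating such a function; the controller's linguistic rules then operate on these degrees.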

  11. A frequentist approach to computer model calibration

    DOE PAGES

    Wong, Raymond K. W.; Storlie, Curtis Byron; Lee, Thomas C. M.

    2016-05-05

    The paper considers the computer model calibration problem and provides a general frequentist solution. Under the framework proposed, the data model is semiparametric with a non-parametric discrepancy function which accounts for any discrepancy between physical reality and the computer model. In an attempt to solve a fundamentally important (but often ignored) identifiability issue between the computer model parameters and the discrepancy function, the paper proposes a new and identifiable parameterization of the calibration problem. It also develops a two-step procedure for estimating all the relevant quantities under the new parameterization. This estimation procedure is shown to enjoy excellent rates of convergence and can be straightforwardly implemented with existing software. For uncertainty quantification, bootstrapping is adopted to construct confidence regions for the quantities of interest. As a result, the practical performance of the methodology is illustrated through simulation examples and an application to a computational fluid dynamics model.

  12. Sentinel-1 Precise Orbit Calibration and Validation

    NASA Astrophysics Data System (ADS)

    Monti Guarnieri, Andrea; Mancon, Simone; Tebaldini, Stefano

    2015-05-01

    In this paper, we propose a model-based procedure to calibrate and validate Sentinel-1 orbit products by the Multi-Squint (MS) phase. The technique allows calibration of an interferometric pair geometry by refining the slave orbit with reference to the orbit of a master image. Accordingly, we state the geometric model of the InSAR phase as a function of positioning errors of targets and slave track, and the MS phase model as the derivative of the InSAR phase geometric model with respect to the squint angle. In this paper we focus on the TOPSAR acquisition modes of Sentinel-1 (IW and EW), assuming at most a linear error in the known slave trajectory. In particular, we describe a dedicated methodology to prevent InSAR phase artifacts on data acquired by the TOPSAR acquisition mode. Experimental results obtained from interferometric pairs acquired by the Sentinel-1 sensor are presented.

  13. The Distance Between Mars and Venus: Measuring Global Sex Differences in Personality

    PubMed Central

    Del Giudice, Marco; Booth, Tom; Irwing, Paul

    2012-01-01

    Background Sex differences in personality are believed to be comparatively small. However, research in this area has suffered from significant methodological limitations. We advance a set of guidelines for overcoming those limitations: (a) measure personality with a higher resolution than that afforded by the Big Five; (b) estimate sex differences on latent factors; and (c) assess global sex differences with multivariate effect sizes. We then apply these guidelines to a large, representative adult sample, and obtain what is presently the best estimate of global sex differences in personality. Methodology/Principal Findings Personality measures were obtained from a large US sample (N = 10,261) with the 16PF Questionnaire. Multigroup latent variable modeling was used to estimate sex differences on individual personality dimensions, which were then aggregated to yield a multivariate effect size (Mahalanobis D). We found a global effect size D = 2.71, corresponding to an overlap of only 10% between the male and female distributions. Even excluding the factor showing the largest univariate ES, the global effect size was D = 1.71 (24% overlap). These are extremely large differences by psychological standards. Significance The idea that there are only minor differences between the personality profiles of males and females should be rejected as based on inadequate methodology. PMID:22238596
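    The relationship reported above between the multivariate effect size (Mahalanobis D) and the percentage of distributional overlap can be sketched numerically. The overlap form below, OVL / (2 - OVL) with OVL = 2*Phi(-D/2), is the proportion-of-joint-overlap convention consistent with the 10% and 24% figures quoted; the function names are illustrative assumptions, not the authors' code:

```python
import numpy as np
from math import erf, sqrt

def mahalanobis_D(mu1, mu2, pooled_cov):
    # Multivariate effect size: D = sqrt((mu1 - mu2)' S^-1 (mu1 - mu2))
    d = np.asarray(mu1, float) - np.asarray(mu2, float)
    return float(np.sqrt(d @ np.linalg.solve(np.asarray(pooled_cov, float), d)))

def joint_overlap(D):
    # OVL = 2*Phi(-D/2) for two unit-variance normals a distance D apart;
    # the proportion of joint overlap is OVL / (2 - OVL).
    phi = 0.5 * (1.0 + erf((-D / 2.0) / sqrt(2.0)))
    ovl = 2.0 * phi
    return ovl / (2.0 - ovl)

# D = 2.71 gives roughly 10% overlap; D = 1.71 gives roughly 24%.
```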

  14. Imaging of polysaccharides in the tomato cell wall with Raman microspectroscopy

    PubMed Central

    2014-01-01

    Background The primary cell wall of fruits and vegetables is a structure mainly composed of polysaccharides (pectins, hemicelluloses, cellulose). These polysaccharides are assembled into a network and linked together. It is thought that the relative proportions of the components of the plant cell wall have an important influence on the mechanical properties of fruits and vegetables. Results In this study the Raman microspectroscopy technique was applied to visualize the distribution of polysaccharides in the cell wall of fruit. The methodology of sample preparation, measurement with the Raman microscope and multivariate image analysis is discussed. Single-band imaging (for preliminary analysis) and multivariate image analysis methods (principal component analysis and multivariate curve resolution) were used for the identification and localization of the components in the primary cell wall. Conclusions Raman microspectroscopy supported by multivariate image analysis methods is useful in distinguishing cellulose and pectins in the cell wall of tomatoes. It also shows how localization of biopolymers is possible with minimally prepared samples. PMID:24917885

  15. Fourier Transform Infrared Spectroscopy (FTIR) and Multivariate Analysis for Identification of Different Vegetable Oils Used in Biodiesel Production

    PubMed Central

    Mueller, Daniela; Ferrão, Marco Flôres; Marder, Luciano; da Costa, Adilson Ben; de Cássia de Souza Schneider, Rosana

    2013-01-01

    The main objective of this study was to use infrared spectroscopy to identify vegetable oils used as raw material for biodiesel production and apply multivariate analysis to the data. Six different vegetable oil sources—canola, cotton, corn, palm, sunflower and soybeans—were used to produce biodiesel batches. The spectra were acquired by Fourier transform infrared spectroscopy using a universal attenuated total reflectance sensor (FTIR-UATR). For the multivariate analysis, principal component analysis (PCA), hierarchical cluster analysis (HCA), interval principal component analysis (iPCA) and soft independent modeling of class analogy (SIMCA) were used. The results indicate that it is possible to develop a methodology to identify vegetable oils used as raw material in the production of biodiesel by FTIR-UATR applying multivariate analysis. It was also observed that iPCA found the best spectral range for separation of biodiesel batches using FTIR-UATR data, and with this result the SIMCA method classified 100% of the soybean biodiesel samples. PMID:23539030

  16. Multivariate longitudinal data analysis with censored and intermittent missing responses.

    PubMed

    Lin, Tsung-I; Lachos, Victor H; Wang, Wan-Lun

    2018-05-08

    The multivariate linear mixed model (MLMM) has emerged as an important analytical tool for longitudinal data with multiple outcomes. However, the analysis of multivariate longitudinal data could be complicated by the presence of censored measurements because of a detection limit of the assay in combination with unavoidable missing values arising when subjects miss some of their scheduled visits intermittently. This paper presents a generalization of the MLMM approach, called the MLMM-CM, for a joint analysis of the multivariate longitudinal data with censored and intermittent missing responses. A computationally feasible expectation maximization-based procedure is developed to carry out maximum likelihood estimation within the MLMM-CM framework. Moreover, the asymptotic standard errors of fixed effects are explicitly obtained via the information-based method. We illustrate our methodology by using simulated data and a case study from an AIDS clinical trial. Experimental results reveal that the proposed method is able to provide more satisfactory performance as compared with the traditional MLMM approach. Copyright © 2018 John Wiley & Sons, Ltd.

  17. A refined method for multivariate meta-analysis and meta-regression

    PubMed Central

    Jackson, Daniel; Riley, Richard D

    2014-01-01

    Making inferences about the average treatment effect using the random effects model for meta-analysis is problematic in the common situation where there is a small number of studies. This is because estimates of the between-study variance are not precise enough to accurately apply the conventional methods for testing and deriving a confidence interval for the average effect. We have found that a refined method for univariate meta-analysis, which applies a scaling factor to the estimated effects’ standard error, provides more accurate inference. We explain how to extend this method to the multivariate scenario and show that our proposal for refined multivariate meta-analysis and meta-regression can provide more accurate inferences than the more conventional approach. We explain how our proposed approach can be implemented using standard output from multivariate meta-analysis software packages and apply our methodology to two real examples. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:23996351

  18. Multivariate meta-analysis for non-linear and other multi-parameter associations

    PubMed Central

    Gasparrini, A; Armstrong, B; Kenward, M G

    2012-01-01

    In this paper, we formalize the application of multivariate meta-analysis and meta-regression to synthesize estimates of multi-parameter associations obtained from different studies. This modelling approach extends the standard two-stage analysis used to combine results across different sub-groups or populations. The most straightforward application is for the meta-analysis of non-linear relationships, described for example by regression coefficients of splines or other functions, but the methodology easily generalizes to any setting where complex associations are described by multiple correlated parameters. The modelling framework of multivariate meta-analysis is implemented in the package mvmeta within the statistical environment R. As an illustrative example, we propose a two-stage analysis for investigating the non-linear exposure–response relationship between temperature and non-accidental mortality using time-series data from multiple cities. Multivariate meta-analysis represents a useful analytical tool for studying complex associations through a two-stage procedure. Copyright © 2012 John Wiley & Sons, Ltd. PMID:22807043
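    The second stage of such a two-stage analysis pools the study-specific coefficient vectors. A minimal fixed-effect sketch of that pooling step is given below (the mvmeta package additionally estimates a between-study covariance; the function names here are assumptions):

```python
import numpy as np

def pool_fixed_effect(estimates, covariances):
    # Fixed-effect multivariate pooling of first-stage estimates b_i with
    # covariances V_i: b = (sum V_i^-1)^-1 * sum V_i^-1 b_i
    W = [np.linalg.inv(np.asarray(V, float)) for V in covariances]
    S = np.sum(W, axis=0)
    num = np.sum([w @ np.asarray(b, float) for w, b in zip(W, estimates)], axis=0)
    return np.linalg.solve(S, num), np.linalg.inv(S)

# Two studies, each reporting a 2-parameter (e.g. spline) coefficient vector:
pooled, pooled_cov = pool_fixed_effect(
    [[1.0, 2.0], [3.0, 4.0]], [np.eye(2), np.eye(2)]
)
```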

  19. Construction of robust dynamic genome-scale metabolic model structures of Saccharomyces cerevisiae through iterative re-parameterization.

    PubMed

    Sánchez, Benjamín J; Pérez-Correa, José R; Agosin, Eduardo

    2014-09-01

    Dynamic flux balance analysis (dFBA) has been widely employed in metabolic engineering to predict the effect of genetic modifications and environmental conditions in the cell's metabolism during dynamic cultures. However, the importance of the model parameters used in these methodologies has not been properly addressed. Here, we present a novel and simple procedure to identify dFBA parameters that are relevant for model calibration. The procedure uses metaheuristic optimization and pre/post-regression diagnostics, fixing iteratively the model parameters that do not have a significant role. We evaluated this protocol in a Saccharomyces cerevisiae dFBA framework calibrated for aerobic fed-batch and anaerobic batch cultivations. The model structures achieved have only significant, sensitive and uncorrelated parameters and are able to calibrate different experimental data. We show that consumption, suboptimal growth and production rates are more useful for calibrating dynamic S. cerevisiae metabolic models than Boolean gene expression rules, biomass requirements and ATP maintenance. Copyright © 2014 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.

  20. Improved CRDS δ13C Stability Through New Calibration Application For CO2 and CH4

    NASA Astrophysics Data System (ADS)

    Arata, C.; Rella, C.

    2014-12-01

    Stable carbon isotope ratio measurements of CO2 and CH4 provide valuable insight into global and regional sources and sinks of the two most important greenhouse gases. Methodologies based on Cavity Ring-Down Spectroscopy (CRDS) have been developed that deliver δ13C measurements with a precision better than 0.12 permil for CO2 and 0.4 permil for CH4 (1 hour window, 5 minute average). Here we present a method to further improve this measurement's stability. We have developed a two-point calibration method which corrects for δ13C drift due to a dependence on carbon species concentration. This method calibrates for both carbon species concentration and δ13C. We go on to show that this added stability is especially valuable when using carbon isotope data in linear regression models such as Keeling plots, where even small amounts of error can be magnified to give inconclusive results. The method is demonstrated in both laboratory and ambient atmospheric conditions, and we show how to select the calibration frequency.
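    The drift correction described above can be sketched as removal of a concentration-dependent offset fixed by two reference gases. The linear error model and all names below are illustrative assumptions, not the authors' implementation:

```python
def fit_two_point(c1, d_meas1, d_true1, c2, d_meas2, d_true2):
    # Two reference gases with known delta13C (d_true) measured at
    # concentrations c1, c2; assume error = a*c + b, linear in concentration.
    e1, e2 = d_meas1 - d_true1, d_meas2 - d_true2
    a = (e2 - e1) / (c2 - c1)
    return a, e1 - a * c1

def correct_delta13C(c, d_meas, a, b):
    # Remove the concentration-dependent offset from a measurement.
    return d_meas - (a * c + b)
```

With the two coefficients in hand, every subsequent measurement is corrected using its own measured concentration.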

  1. Calibration of modified Liulin detector for cosmic radiation measurements on-board aircraft.

    PubMed

    Kyselová, D; Ambrožová, I; Krist, P; Kubančák, J; Uchihori, Y; Kitamura, H; Ploc, O

    2015-06-01

    The annual effective doses of aircrew members often exceed the limit of 1 mSv for the public due to the increased level of cosmic radiation at flight altitudes, and it is therefore recommended to monitor them. Aircrew dosimetry is usually performed using special computer programs mostly based on the results of Monte Carlo simulations. Currently, detectors are used mostly for validation of these computer codes, verification of effective dose calculations and for research purposes. One such detector is the active silicon semiconductor deposited-energy spectrometer Liulin. The output quantities of a measurement with the Liulin detector are the absorbed dose in silicon D and the ambient dose equivalent H*(10); determining them requires two calibrations. The purpose of this work was to develop a calibration methodology that can be used to convert the signal from the detector to D independently of the calibration performed at the Heavy Ion Medical Accelerator facility in Chiba, Japan. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  2. ODERACS 2 White Spheres Optical Calibration Report

    NASA Technical Reports Server (NTRS)

    Culp, Robert D.; Gravseth, Ian; Gloor, Jason; Wantuch, Todd

    1995-01-01

    This report documents the status of the Orbital Debris Radar Calibration Spheres (ODERACS) 2 white spheres optical calibration study. The purpose of this study is to determine the spectral reflectivity and scattering characteristics in the visible wavelength region for the white spheres that were added to the project in the fall, 1994. Laboratory measurements were performed upon these objects and an analysis of the resulting data was conducted. These measurements are performed by illuminating the objects with a collimated beam of light and measuring the reflected light versus the phase angle. The phase angle is defined as the angle between the light source and the sensor, as viewed from the object. By measuring the reflected signal at the various phase angles, one is able to estimate the reflectance properties of the object. The methodology used in taking the measurements and reducing the data are presented. The results of this study will be used to support the calibration of ground-based optical instruments used in support of space debris research. Visible measurements will be made by the GEODDS, NASA and ILADOT telescopes.

  3. A statistical approach for segregating cognitive task stages from multivariate fMRI BOLD time series.

    PubMed

    Demanuele, Charmaine; Bähner, Florian; Plichta, Michael M; Kirsch, Peter; Tost, Heike; Meyer-Lindenberg, Andreas; Durstewitz, Daniel

    2015-01-01

    Multivariate pattern analysis can reveal new information from neuroimaging data to illuminate human cognition and its disturbances. Here, we develop a methodological approach, based on multivariate statistical/machine learning and time series analysis, to discern cognitive processing stages from functional magnetic resonance imaging (fMRI) blood oxygenation level dependent (BOLD) time series. We apply this method to data recorded from a group of healthy adults whilst performing a virtual reality version of the delayed win-shift radial arm maze (RAM) task. This task has been frequently used to study working memory and decision making in rodents. Using linear classifiers and multivariate test statistics in conjunction with time series bootstraps, we show that different cognitive stages of the task, as defined by the experimenter, namely, the encoding/retrieval, choice, reward and delay stages, can be statistically discriminated from the BOLD time series in brain areas relevant for decision making and working memory. Discrimination of these task stages was significantly reduced during poor behavioral performance in dorsolateral prefrontal cortex (DLPFC), but not in the primary visual cortex (V1). Experimenter-defined dissection of time series into class labels based on task structure was confirmed by an unsupervised, bottom-up approach based on Hidden Markov Models. Furthermore, we show that different groupings of recorded time points into cognitive event classes can be used to test hypotheses about the specific cognitive role of a given brain region during task execution. We found that whilst the DLPFC strongly differentiated between task stages associated with different memory loads but not between different visual-spatial aspects, the reverse was true for V1. 
Our methodology illustrates how different aspects of cognitive information processing during one and the same task can be separated and attributed to specific brain regions based on information contained in multivariate patterns of voxel activity.

  4. Structural Equation Modeling of Multivariate Time Series

    ERIC Educational Resources Information Center

    du Toit, Stephen H. C.; Browne, Michael W.

    2007-01-01

    The covariance structure of a vector autoregressive process with moving average residuals (VARMA) is derived. It differs from other available expressions for the covariance function of a stationary VARMA process and is compatible with current structural equation methodology. Structural equation modeling programs, such as LISREL, may therefore be…

  5. An Analysis of Methods Used to Examine Gender Differences in Computer-Related Behavior.

    ERIC Educational Resources Information Center

    Kay, Robin

    1992-01-01

    Review of research investigating gender differences in computer-related behavior examines statistical and methodological flaws. Issues addressed include sample selection, sample size, scale development, scale quality, the use of univariate and multivariate analyses, regressional analysis, construct definition, construct testing, and the…

  6. 77 FR 39287 - Self-Regulatory Organizations; Chicago Mercantile Exchange, Inc.; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-02

    ... Change To Adopt Changes That Would Affect Its Standard Portfolio Analysis of Risk Methodology for Certain... Rule Change CME proposes to adopt certain changes that would affect its Standard Portfolio Analysis of... calibrates the risk of portfolios, consisting of positions in highly similar and correlated futures and...

  7. Calibration of the Highway Safety Manual and development of new safety performance functions for rural multilane highways in Kansas : technical summary.

    DOT National Transportation Integrated Search

    2016-10-01

    Rural roads account for 90.3% of the 140,476 total centerline miles of roadways : in Kansas. In recent years, rural fatal crashes have accounted for about 66% : of all fatal crashes. The Highway Safety Manual (HSM) provides models and : methodologies...

  8. Preliminary radiometric calibration assessment of ALOS AVNIR-2

    USGS Publications Warehouse

    Bouvet, M.; Goryl, P.; Chander, G.; Santer, R.; Saunier, S.

    2008-01-01

    This paper summarizes the activities carried out in the frame of the data quality activities of the Advanced Visible and Near Infrared Radiometer type 2 (AVNIR-2) sensor onboard the Advanced Land Observing Satellite (ALOS). Assessment of the radiometric calibration of the AVNIR-2 multi-spectral imager is achieved via three intercomparisons to currently flying sensors over the Libyan desert, during the first year of operation. All three methodologies indicate a slight underestimation of AVNIR-2 in band 1 by 4 to 7% with respect to the other sensors' radiometric scale. Band 2 does not show any obvious bias. Results for band 3 are affected by saturation due to an inappropriate gain setting. Two methodologies indicate no significant bias in band 4. Preliminary results indicate possible degradations of the AVNIR-2 channels which, when modeled as exponentially decreasing functions, have decay rates of 13.2%/year, 8.8%/year and 0.1%/year in bands 1, 2 and 4, respectively (with respect to the radiometric scale of the MEdium Resolution Imaging Spectrometer, MERIS). Longer time series of AVNIR-2 data are needed to draw final conclusions. © 2007 IEEE.
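    A degradation rate of the kind quoted (e.g. 13.2%/year in band 1) can be recovered from a relative-response time series by a log-linear fit, assuming the simple model R(t) = exp(-r*t); this sketch with synthetic data is illustrative only:

```python
import numpy as np

def fit_decay_rate(t_years, response):
    # Log-linear least-squares fit of R(t) = exp(-r*t); returns r (per year).
    slope, _ = np.polyfit(t_years, np.log(response), 1)
    return -slope

t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
r = fit_decay_rate(t, np.exp(-0.132 * t))   # synthetic band-1-like series
```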

  9. Computational Methodology for Absolute Calibration Curves for Microfluidic Optical Analyses

    PubMed Central

    Chang, Chia-Pin; Nagel, David J.; Zaghloul, Mona E.

    2010-01-01

    Optical fluorescence and absorption are two of the primary techniques used for analytical microfluidics. We provide a thorough yet tractable method for computing the performance of diverse optical micro-analytical systems. Sample sizes range from nano- to many micro-liters and concentrations from nano- to milli-molar. Equations are provided to trace quantitatively the flow of the fundamental entities, namely photons and electrons, and the conversion of energy from the source, through optical components, samples and spectral-selective components, to the detectors and beyond. The equations permit facile computations of calibration curves that relate the concentrations or numbers of molecules measured to the absolute signals from the system. This methodology provides the basis for both detailed understanding and improved design of microfluidic optical analytical systems. It saves prototype turn-around time, and is much simpler and faster to use than ray tracing programs. Over two thousand spreadsheet computations were performed during this study. We found that some design variations produce higher signal levels and, for constant noise levels, lower minimum detection limits. Improvements of more than a factor of 1,000 were realized. PMID:22163573

  10. The application of item response theory in developing and validating a shortened version of the Emirate Marital Satisfaction Scale.

    PubMed

    Dodeen, Hamzeh; Al-Darmaki, Fatima

    2016-12-01

    The aim of this study was to determine the feasibility of generating a shorter version of the Emirati Marital Satisfaction Scale (EMSS) using item response theory (IRT)-based methodology. The EMSS is the first national scale used to provide an understanding of family function and the level of marital satisfaction within the cultural context of the United Arab Emirates. A sample of 1,049 Emirati married individuals of different ages, genders, places of residence, and monthly incomes participated in this study. Item parameters were calibrated using X-Calibre 4.2 and the graded response model. The analysis yielded a short form of the EMSS (7 items), which constitutes a promising alternative to the original scale for practitioners and researchers. This short version is reliable and valid, and it gives results very similar to the original scale. The results of this study confirmed the usefulness of IRT-based methodology for developing psychological and counseling scales. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  11. Fast methodology for the reliable determination of nonylphenol in water samples by minimal labeling isotope dilution mass spectrometry.

    PubMed

    Fabregat-Cabello, Neus; Castillo, Ángel; Sancho, Juan V; González, Florenci V; Roig-Navarro, Antoni Francesc

    2013-08-02

    In this work we have developed and validated an accurate and fast methodology for the determination of 4-nonylphenol (technical mixture) in complex-matrix water samples by UHPLC-ESI-MS/MS. The procedure is based on isotope dilution mass spectrometry (IDMS) in combination with isotope pattern deconvolution (IPD), which provides the concentration of the analyte directly from the spiked sample without requiring any methodological calibration graph. To avoid any possible isotopic effect during the analytical procedure, the in-house synthesized (13)C1-4-(3,6-dimethyl-3-heptyl)phenol was used as the labeled compound. This proposed surrogate was able to compensate for the matrix effect even in wastewater samples. An SPE pre-concentration step, together with exhaustive efforts to avoid contamination, was included to reach the signal-to-noise ratio necessary to detect the endogenous concentrations present in environmental samples. Calculations were performed acquiring only three transitions, achieving limits of detection lower than 100 ng/g for all water matrices assayed. Recoveries within 83-108% and coefficients of variation ranging from 1.5% to 9% were obtained. In contrast, a considerable overestimation was obtained with the most usual classical calibration procedure using 4-n-nonylphenol as internal standard, demonstrating the suitability of the minimal labeling approach. Copyright © 2013 Elsevier B.V. All rights reserved.
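    Isotope pattern deconvolution obtains the molar contributions of the natural and labeled compounds directly from the measured isotopologue intensities, which can be sketched as a small least-squares problem (the three-isotopologue patterns below are hypothetical, not taken from the paper):

```python
import numpy as np

def deconvolve(measured, pattern_nat, pattern_lab):
    # Solve measured = x_nat*pattern_nat + x_lab*pattern_lab in the
    # least-squares sense; returns the molar contributions (x_nat, x_lab).
    A = np.column_stack([pattern_nat, pattern_lab])
    x, *_ = np.linalg.lstsq(A, np.asarray(measured, float), rcond=None)
    return x

p_nat = [0.9, 0.1, 0.0]        # hypothetical natural-abundance pattern
p_lab = [0.0, 0.9, 0.1]        # hypothetical 13C-labeled pattern
x = deconvolve([1.8, 1.1, 0.1], p_nat, p_lab)
```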

  12. Company profile: PGXIS Ltd.

    PubMed

    McCarthy, Alun

    2011-09-01

    Pharmacogenomic Innovative Solutions Ltd (PGXIS) was established in 2007 by a group of pharmacogenomic (PGx) experts to make their expertise available to biotechnology and pharmaceutical companies. PGXIS has subsequently established a network of experts to broaden its access to relevant PGx knowledge and technologies. In addition, it has developed a novel multivariate analysis method called Taxonomy3 which is both a data integration tool and a targeting tool. Together with siRNA methodology from CytoPathfinder Inc., PGXIS now has an extensive range of diverse PGx methodologies focused on enhancing drug development.

  13. Analytical aspects of plant metabolite profiling platforms: current standings and future aims.

    PubMed

    Seger, Christoph; Sturm, Sonja

    2007-02-01

    Over the past years, metabolic profiling has been established as a comprehensive systems biology tool. Mass spectrometry or NMR spectroscopy-based technology platforms combined with unsupervised or supervised multivariate statistical methodologies allow a deep insight into the complex metabolite patterns of plant-derived samples. Within this review, we provide a thorough introduction to the analytical hard- and software requirements of metabolic profiling platforms. Methodological limitations are addressed, and the metabolic profiling workflow is exemplified by summarizing recent applications ranging from model systems to more applied topics.

  14. Linear model correction: A method for transferring a near-infrared multivariate calibration model without standard samples.

    PubMed

    Liu, Yan; Cai, Wensheng; Shao, Xueguang

    2016-12-05

    Calibration transfer is essential for practical applications of near infrared (NIR) spectroscopy because spectra may be measured on different instruments and the difference between the instruments must be corrected. Most calibration transfer methods require standard samples to construct the transfer model, using the spectra of the same samples measured on two instruments, named the master and slave instruments, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. As a consequence, the coefficients of the linear models constructed from spectra measured on different instruments are similar in profile. Therefore, by using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, accurate predictions can be obtained from the spectra using the transferred partial least squares (PLS) models. Because standard samples are not necessary, the method may be more useful in practical applications. Copyright © 2016 Elsevier B.V. All rights reserved.
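    The constrained optimization at the core of LMC can be sketched as a penalty pulling the slave coefficients toward the master coefficient profile. The ridge-style closed form below, with a quadratic penalty, is an illustrative stand-in and not the authors' exact formulation:

```python
import numpy as np

def transfer_coefficients(X_slave, y_slave, b_master, lam):
    # Minimize ||X b - y||^2 + lam * ||b - b_master||^2 (closed form).
    # Large lam keeps the slave coefficients close to the master profile.
    n_feat = X_slave.shape[1]
    A = X_slave.T @ X_slave + lam * np.eye(n_feat)
    rhs = X_slave.T @ y_slave + lam * np.asarray(b_master, float)
    return np.linalg.solve(A, rhs)

X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])   # a few slave spectra
y = np.array([1.0, 2.0, 3.0])                         # reference values
b = transfer_coefficients(X, y, np.zeros(2), 0.0)     # plain least squares
```

Increasing `lam` interpolates between the pure slave-side fit and the master coefficients.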

  15. Modelling exploration of non-stationary hydrological system

    NASA Astrophysics Data System (ADS)

    Kim, Kue Bum; Kwon, Hyun-Han; Han, Dawei

    2015-04-01

    Traditional hydrological modelling assumes that the catchment does not change with time (i.e., stationary conditions), which means the model calibrated for the historical period is valid for the future period. However, in reality, due to changes of climate and catchment conditions this stationarity assumption may not be valid in the future. It is a challenge to make the hydrological model adaptive to future climate and catchment conditions that are not observable at the present time. In this study a lumped conceptual rainfall-runoff model called IHACRES was applied to a catchment in southwest England. Long observation records from 1961 to 2008 were used and seasonal calibration was performed, since there are significant seasonal rainfall patterns (only the summer period is further explored here, because it is more sensitive to climate and land cover change than the other three seasons). We expect that the model performance can be improved by calibrating the model on individual seasons. The data are split into calibration and validation periods, with the intention of using the validation period to represent future unobserved situations. The success of the non-stationary model depends not only on good performance during the calibration period but also during the validation period. Initially, the calibration is based on changing the model parameters with time. A methodology is proposed to adapt the parameters using step forward and backward selection schemes. However, both the forward and backward multiple-parameter-changing models failed in validation. One problem is that regression against time is not reliable, since the trend may not be a monotonic linear function of time. The second issue is that changing multiple parameters makes the selection process very complex, which is time consuming and not effective in the validation period. As a result, two new concepts are explored. 
    First, only one parameter is selected for adjustment while the other parameters are kept constant. Secondly, the regression is made against climate conditions instead of against time. This new approach has been found to be very effective, and the resulting non-stationary model worked well in both the calibration and validation periods. Although the catchment is in southwest England and the data cover only the summer period, the methodology proposed in this study is general and applicable to other catchments. We hope this study will stimulate the hydrological community to explore a variety of sites so that valuable experience and knowledge can be gained to improve our understanding of such a complex modelling issue in climate change impact assessment.

  16. Prediction of road accidents: A Bayesian hierarchical approach.

    PubMed

    Deublein, Markus; Schubert, Matthias; Adey, Bryan T; Köhler, Jochen; Faber, Michael H

    2013-03-01

    In this paper a novel methodology for the prediction of the occurrence of road accidents is presented. The methodology utilizes a combination of three statistical methods: (1) gamma-updating of the occurrence rates of injury accidents and injured road users, (2) hierarchical multivariate Poisson-lognormal regression analysis taking into account correlations amongst multiple dependent model response variables and effects of discrete accident count data e.g. over-dispersion, and (3) Bayesian inference algorithms, which are applied by means of data mining techniques supported by Bayesian Probabilistic Networks in order to represent non-linearity between risk indicating and model response variables, as well as different types of uncertainties which might be present in the development of the specific models. Prior Bayesian Probabilistic Networks are first established by means of multivariate regression analysis of the observed frequencies of the model response variables, e.g. the occurrence of an accident, and observed values of the risk indicating variables, e.g. degree of road curvature. Subsequently, parameter learning is done using updating algorithms, to determine the posterior predictive probability distributions of the model response variables, conditional on the values of the risk indicating variables. The methodology is illustrated through a case study using data of the Austrian rural motorway network. In the case study, on randomly selected road segments the methodology is used to produce a model to predict the expected number of accidents in which an injury has occurred and the expected number of light, severe and fatally injured road users. Additionally, the methodology is used for geo-referenced identification of road sections with increased occurrence probabilities of injury accident events on a road link between two Austrian cities. 
It is shown that the proposed methodology can be used to develop models to estimate the occurrence of road accidents for any road network provided that the required data are available. Copyright © 2012 Elsevier Ltd. All rights reserved.
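    The gamma-updating step (method 1 above) is the standard conjugate update of a Poisson occurrence rate, which can be sketched as follows (rate parameterization of the gamma distribution assumed; the numbers are hypothetical):

```python
def gamma_update(alpha, beta, counts):
    # Gamma(alpha, beta) prior on a Poisson rate, updated with observed
    # accident counts: posterior is Gamma(alpha + sum(counts), beta + n).
    return alpha + sum(counts), beta + len(counts)

# Prior mean 2 accidents per segment-year; three observed yearly counts:
a_post, b_post = gamma_update(2.0, 1.0, [3, 4, 5])
posterior_mean = a_post / b_post   # updated occurrence-rate estimate
```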

  17. Sample classification for improved performance of PLS models applied to the quality control of deep-frying oils of different botanic origins analyzed using ATR-FTIR spectroscopy.

    PubMed

    Kuligowski, Julia; Carrión, David; Quintás, Guillermo; Garrigues, Salvador; de la Guardia, Miguel

    2011-01-01

    The selection of an appropriate calibration set is a critical step in multivariate method development. In this work, the effect of using different calibration sets, based on a prior classification of unknown samples, on the performance of partial least squares (PLS) regression models is discussed. As an example, attenuated total reflection (ATR) mid-infrared spectra of deep-fried vegetable oil samples from three botanical origins (olive, sunflower, and corn oil), with increasing polymerized triacylglyceride (PTG) content induced by a deep-frying process, were employed. The use of a one-class-classifier partial least squares-discriminant analysis (PLS-DA) and a rooted binary directed acyclic graph tree provided accurate oil classification. Oil samples fried without foodstuff could be classified correctly, independent of their PTG content. However, class separation of oil samples fried with foodstuff was less evident. The combined use of double-cross model validation with permutation testing was used to validate the obtained PLS-DA classification models, confirming the results. To assess the usefulness of selecting an appropriate PLS calibration set, the PTG content was determined by calculating a PLS model based on the previously selected classes. In comparison to a PLS model calculated using a pooled calibration set containing samples from all classes, the root mean square error of prediction could be improved significantly using PLS models based on the calibration sets selected by PLS-DA, ranging between 1.06 and 2.91% (w/w).
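The reason a class-specific calibration set can outperform a pooled one can be seen with a toy surrogate: when classes obey different signal-to-analyte relationships, a pooled model is forced into a compromise. A minimal sketch, using an ordinary least-squares fit as a stand-in for the PLS step; the synthetic "spectra" (one feature per sample) and class slopes are invented and are not the ATR-FTIR data of the paper:

```python
# Class-specific vs. pooled calibration on synthetic data.
# Class A and class B relate the same measured signal to the analyte
# with different slopes, so a pooled model misfits both.
import numpy as np

rng = np.random.default_rng(0)

def fit_slope(x, y):
    """Least-squares slope through the origin (toy calibration model)."""
    return float(np.sum(x * y) / np.sum(x * x))

def rmsep(y_true, y_pred):
    """Root mean square error of prediction."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Two "botanic origins" with different signal-to-analyte slopes
x_a = rng.uniform(0, 1, 50); y_a = 3.0 * x_a
x_b = rng.uniform(0, 1, 50); y_b = 1.5 * x_b

pooled = fit_slope(np.concatenate([x_a, x_b]), np.concatenate([y_a, y_b]))
per_class = fit_slope(x_a, y_a)  # calibration restricted to class A

# Predict unseen class-A samples with both models
x_test = rng.uniform(0, 1, 20); y_test = 3.0 * x_test
err_pooled = rmsep(y_test, pooled * x_test)
err_class = rmsep(y_test, per_class * x_test)
assert err_class < err_pooled  # class-specific calibration wins here
```

The pooled slope lands between the two class slopes, so its prediction error on either class is systematically worse; classifying first (here by construction, in the paper by PLS-DA) removes that bias.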

  18. Analytical and simulator study of advanced transport

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Rickard, W. W.

    1982-01-01

    An analytic methodology, based on the optimal-control pilot model, was demonstrated for assessing longitudinal-axis handling qualities of transport aircraft in final approach. Calibration of the methodology is largely in terms of closed-loop performance requirements, rather than specific vehicle response characteristics, and is based on a combination of published criteria, pilot preferences, physical limitations, and engineering judgment. Six longitudinal-axis approach configurations were studied, covering a range of handling-qualities problems including the presence of flexible aircraft modes. The analytical procedure was used to obtain predictions of Cooper-Harper ratings, a scalar quadratic performance index, and rms excursions of important system variables.

  19. Towards standardized testing methodologies for optical properties of components in concentrating solar thermal power plants

    NASA Astrophysics Data System (ADS)

    Sallaberry, Fabienne; Fernández-García, Aránzazu; Lüpfert, Eckhard; Morales, Angel; Vicente, Gema San; Sutter, Florian

    2017-06-01

    Precise knowledge of the optical properties of the components used in the solar field of concentrating solar thermal power plants is essential to ensure optimum power production. These properties are measured and evaluated with different techniques and equipment, in laboratory conditions and/or in the field. Standards for such measurements, and international consensus on the appropriate techniques, are in preparation. The reference materials used as standards for the calibration of the equipment are under discussion. This paper summarizes current testing methodologies and guidelines for the characterization of the optical properties of solar mirrors and absorbers.

  20. Quasidynamic calibration of stroboscopic scanning white light interferometer with a transfer standard

    NASA Astrophysics Data System (ADS)

    Seppä, Jeremias; Kassamakov, Ivan; Heikkinen, Ville; Nolvi, Anton; Paulin, Tor; Lassila, Antti; Hæggström, Edward

    2013-12-01

    A stroboscopic scanning white light interferometer (SSWLI) can characterize both static features and motion in micro(nano)electromechanical system devices. SSWLI measurement results should be linked to the meter definition to be comparable and unambiguous. This traceability is achieved by careful error characterization and calibration of the interferometer. The main challenge in vertical scale calibration is to have a reference device with reproducible out-of-plane movement. A piezo-scanned flexure-guided stage with capacitive sensor feedback was attached to a mirror and an Invar steel holder with a reference plane, forming a transfer standard that was calibrated by laser interferometry with 2.3 nm uncertainty. The vertical position of the moving mirror was then measured with the SSWLI, relative to the reference plane, between successive mirror position steps. A light-emitting diode pulsed at 100 Hz with 0.5% duty cycle, synchronized to the CCD camera, and a halogen light source were used. Within the scanned 14 μm range, the measured SSWLI scale amplification coefficient error was 0.12%, with 4.5 nm repeatability of the steps. For SWLI measurements using a halogen lamp, the corresponding results were 0.05% and 6.7 nm. The presented methodology should permit accurate traceable calibration of the vertical scale of any SWLI.
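The amplification-coefficient error quoted above is, in essence, the deviation from unity of the slope relating SSWLI readings to the calibrated transfer-standard positions. A minimal sketch of that regression; the position values below are fabricated for illustration (chosen to reproduce a 0.12% slope error) and are not the measured data of the paper:

```python
# Vertical-scale amplification check: regress instrument readings against
# calibrated reference positions; the slope minus 1 is the scale error.
# All position values are invented for illustration.

def slope(x, y):
    """Least-squares slope of y vs. x."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    return (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))

ref = [0.0, 2.0, 4.0, 6.0, 8.0]               # transfer-standard positions, um
meas = [0.0, 2.0024, 4.0048, 6.0072, 8.0096]  # hypothetical SSWLI readings, um

amp_error_percent = (slope(ref, meas) - 1.0) * 100
print(round(amp_error_percent, 2))  # -> 0.12
```

Using the slope of a fit over many steps, rather than any single step height, averages out the step-to-step repeatability (the 4.5 nm figure) and isolates the systematic scale error.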
