Science.gov

Sample records for multivariate calibration applied

  1. Multivariate calibration applied to the quantitative analysis of infrared spectra

    SciTech Connect

    Haaland, D.M.

    1991-01-01

    Multivariate calibration methods are very useful for improving the precision, accuracy, and reliability of quantitative spectral analyses. Spectroscopists can more effectively use these sophisticated statistical tools if they have a qualitative understanding of the techniques involved. A qualitative picture of the factor analysis multivariate calibration methods of partial least squares (PLS) and principal component regression (PCR) is presented using infrared calibrations based upon spectra of phosphosilicate glass thin films on silicon wafers. Comparisons of the relative prediction abilities of four different multivariate calibration methods are given based on Monte Carlo simulations of spectral calibration and prediction data. The success of multivariate spectral calibrations is demonstrated for several quantitative infrared studies. The infrared absorption and emission spectra of thin-film dielectrics used in the manufacture of microelectronic devices demonstrate rapid, nondestructive at-line and in-situ analyses using PLS calibrations. Finally, the application of multivariate spectral calibrations to reagentless analysis of blood is presented. We have found that the determination of glucose in whole blood taken from diabetics can be precisely monitored from the PLS calibration of either mid- or near-infrared spectra of the blood. Progress toward the non-invasive determination of glucose levels in diabetics is an ultimate goal of this research. 13 refs., 4 figs.

  2. Multivariate calibration techniques applied to NIRA (near infrared reflectance analysis) and FTIR (Fourier transform infrared) data

    NASA Astrophysics Data System (ADS)

    Long, C. L.

    1991-02-01

    Multivariate calibration techniques can reduce the time required for routine testing and can provide new methods of analysis. Multivariate calibration is commonly used with near infrared reflectance analysis (NIRA) and Fourier transform infrared (FTIR) spectroscopy. Two feasibility studies were performed to determine the capability of NIRA, using multivariate calibration techniques, to perform analyses on the types of samples that are routinely analyzed at this laboratory. The first study performed included a variety of samples and indicated that NIRA would be well-suited to perform analyses on selected materials properties such as water content and hydroxyl number on polyol samples, epoxy content on epoxy resins, water content of desiccants, and the amine values of various amine cure agents. A second study was performed to assess the capability of NIRA to perform quantitative analysis of hydroxyl numbers and water contents of hydroxyl-containing materials. Hydroxyl number and water content were selected for determination because these tests are frequently run on polyol materials and the hydroxyl number determination is time consuming. This study pointed out the necessity of obtaining calibration standards identical to the samples being analyzed for each type of polyol or other material being analyzed. Multivariate calibration techniques are frequently used with FTIR data to determine the composition of a large variety of complex mixtures. A literature search indicated many applications of multivariate calibration to FTIR data. Areas identified where quantitation by FTIR would provide a new capability are quantitation of components in epoxy and silicone resins, polychlorinated biphenyls (PCBs) in oils, and additives to polymers.

  3. Comparative study between derivative spectrophotometry and multivariate calibration as analytical tools applied for the simultaneous quantitation of Amlodipine, Valsartan and Hydrochlorothiazide.

    PubMed

    Darwish, Hany W; Hassan, Said A; Salem, Maissa Y; El-Zeany, Badr A

    2013-09-01

    Four simple, accurate and specific methods were developed and validated for the simultaneous estimation of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in commercial tablets. The derivative spectrophotometric methods include Derivative Ratio Zero Crossing (DRZC) and Double Divisor Ratio Spectra-Derivative Spectrophotometry (DDRS-DS) methods, while the multivariate calibrations used are Principal Component Regression (PCR) and Partial Least Squares (PLS). The proposed methods were applied successfully in the determination of the drugs in laboratory-prepared mixtures and in commercial pharmaceutical preparations. The validity of the proposed methods was assessed using the standard addition technique. The linearity of the proposed methods is investigated in the range of 2-32, 4-44 and 2-20 μg/mL for AML, VAL and HCT, respectively.

  4. Comparative study between derivative spectrophotometry and multivariate calibration as analytical tools applied for the simultaneous quantitation of Amlodipine, Valsartan and Hydrochlorothiazide

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2013-09-01

    Four simple, accurate and specific methods were developed and validated for the simultaneous estimation of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in commercial tablets. The derivative spectrophotometric methods include Derivative Ratio Zero Crossing (DRZC) and Double Divisor Ratio Spectra-Derivative Spectrophotometry (DDRS-DS) methods, while the multivariate calibrations used are Principal Component Regression (PCR) and Partial Least Squares (PLS). The proposed methods were applied successfully in the determination of the drugs in laboratory-prepared mixtures and in commercial pharmaceutical preparations. The validity of the proposed methods was assessed using the standard addition technique. The linearity of the proposed methods is investigated in the range of 2-32, 4-44 and 2-20 μg/mL for AML, VAL and HCT, respectively.

  5. Adaptable Multivariate Calibration Models for Spectral Applications

    SciTech Connect

    THOMAS,EDWARD V.

    1999-12-20

    Multivariate calibration techniques have been used in a wide variety of spectroscopic situations. In many of these situations spectral variation can be partitioned into meaningful classes. For example, suppose that multiple spectra are obtained from each of a number of different objects wherein the level of the analyte of interest varies within each object over time. In such situations the total spectral variation observed across all measurements has two distinct general sources of variation: intra-object and inter-object. One might want to develop a global multivariate calibration model that predicts the analyte of interest accurately both within and across objects, including new objects not involved in developing the calibration model. However, this goal might be hard to realize if the inter-object spectral variation is complex and difficult to model. If the intra-object spectral variation is consistent across objects, an effective alternative approach might be to develop a generic intra-object model that can be adapted to each object separately. This paper contains recommendations for experimental protocols and data analysis in such situations. The approach is illustrated with an example involving the noninvasive measurement of glucose using near-infrared reflectance spectroscopy. Extensions to calibration maintenance and calibration transfer are discussed.

  6. Exploration of new multivariate spectral calibration algorithms.

    SciTech Connect

    Van Benthem, Mark Hilary; Haaland, David Michael; Melgaard, David Kennett; Martin, Laura Elizabeth; Wehlburg, Christine Marie; Pell, Randy J.; Guenard, Robert D.

    2004-03-01

    A variety of multivariate calibration algorithms for quantitative spectral analyses were investigated and compared, and new algorithms were developed in the course of this Laboratory Directed Research and Development project. We were able to demonstrate the ability of the hybrid classical least squares/partial least squares (CLS/PLS) calibration algorithms to maintain calibrations in the presence of spectrometer drift and to transfer calibrations between spectrometers from the same or different manufacturers. These methods were found to be as good or better in prediction ability as the commonly used partial least squares (PLS) method. We also present the theory for an entirely new class of algorithms labeled augmented classical least squares (ACLS) methods. New factor selection methods are developed and described for the ACLS algorithms. These factor selection methods are demonstrated using near-infrared spectra collected from a system of dilute aqueous solutions. The ACLS algorithm is also shown to provide improved ease of use and better prediction ability than PLS when transferring calibrations between near-infrared spectrometers from the same manufacturer. Finally, simulations incorporating either ideal or realistic errors in the spectra were used to compare the prediction abilities of the new ACLS algorithm with those of PLS. We found that in the presence of realistic errors with non-uniform spectral error variance across spectral channels or with spectral errors correlated between frequency channels, ACLS methods generally out-performed the more commonly used PLS method. These results demonstrate the need for realistic error structure in simulations when the prediction abilities of various algorithms are compared. The combination of equal or superior prediction ability and the ease of use of the ACLS algorithms make the new ACLS methods the preferred algorithms to use for multivariate spectral calibrations.
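
    To make the ACLS idea above concrete, the sketch below shows a classical least squares (CLS) model whose pure-component matrix is augmented with spectral shapes estimated from the calibration residuals. This is a hedged illustration of the general ACLS concept only; the matrix names, the use of SVD loadings as augmentation vectors and the number of augmentation vectors are assumptions, not the exact algorithm of the report.

      # Minimal NumPy sketch of an augmented classical least squares (ACLS) step.
      # Calibration model: A = C K + E (A: spectra, C: concentrations, K: pure spectra).
      import numpy as np

      def acls_calibrate(A, C, n_aug=2):
          """A: (samples x channels) spectra; C: (samples x components) concentrations."""
          K = np.linalg.lstsq(C, A, rcond=None)[0]          # CLS estimate of pure-component spectra
          R = A - C @ K                                     # calibration residual spectra
          _, _, Vt = np.linalg.svd(R, full_matrices=False)  # principal residual shapes
          return np.vstack([K, Vt[:n_aug]])                 # K augmented with residual loadings

      def acls_predict(A_new, K_aug, n_components):
          # Project new spectra onto the augmented pure spectra by least squares;
          # only the first n_components coefficients are the analyte estimates.
          coefs = np.linalg.lstsq(K_aug.T, A_new.T, rcond=None)[0].T
          return coefs[:, :n_components]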

  7. Calibrated Multivariate Regression with Application to Neural Semantic Basis Discovery∗

    PubMed Central

    Liu, Han; Wang, Lie; Zhao, Tuo

    2016-01-01

    We propose a calibrated multivariate regression method named CMR for fitting high dimensional multivariate regression models. Compared with existing methods, CMR calibrates regularization for each regression task with respect to its noise level so that it simultaneously attains improved finite-sample performance and tuning insensitiveness. Theoretically, we provide sufficient conditions under which CMR achieves the optimal rate of convergence in parameter estimation. Computationally, we propose an efficient smoothed proximal gradient algorithm with a worst-case numerical rate of convergence O(1/ϵ), where ϵ is a pre-specified accuracy of the objective function value. We conduct thorough numerical simulations to illustrate that CMR consistently outperforms other high dimensional multivariate regression methods. We also apply CMR to solve a brain activity prediction problem and find that it is as competitive as a handcrafted model created by human experts. The R package camel implementing the proposed method is available on the Comprehensive R Archive Network http://cran.r-project.org/web/packages/camel/. PMID:28316509

  8. Multivariate Seismic Calibration for the Novaya Zemlya Test Site

    DTIC Science & Technology

    1992-09-30

    every multivariate magnitude combination. A classical confidence interval is presented to estimate future yields, based on estimates of the unknown...multivariate calibration parameters. A test of TTBT compliance and a definition of the F-number, based on the confidence interval, are also provided.

  9. Calibrated predictions for multivariate competing risks models.

    PubMed

    Gorfine, Malka; Hsu, Li; Zucker, David M; Parmigiani, Giovanni

    2014-04-01

    Prediction models for time-to-event data play a prominent role in assessing the individual risk of a disease, such as cancer. Accurate disease prediction models provide an efficient tool for identifying individuals at high risk, and provide the groundwork for estimating the population burden and cost of disease and for developing patient care guidelines. We focus on risk prediction of a disease in which family history is an important risk factor that reflects inherited genetic susceptibility, shared environment, and common behavior patterns. In this work family history is accommodated using frailty models, with the main novel feature being allowing for competing risks, such as other diseases or mortality. We show through a simulation study that naively treating competing risks as independent right censoring events results in non-calibrated predictions, with the expected number of events overestimated. Discrimination performance is not affected by ignoring competing risks. Our proposed prediction methodologies correctly account for competing events, are very well calibrated, and easy to implement.

  10. Local Strategy Combined with a Wavelength Selection Method for Multivariate Calibration

    PubMed Central

    Chang, Haitao; Zhu, Lianqing; Lou, Xiaoping; Meng, Xiaochen; Guo, Yangkuan; Wang, Zhongyu

    2016-01-01

    One of the essential factors influencing the prediction accuracy of multivariate calibration models is the quality of the calibration data. A local regression strategy, together with a wavelength selection approach, is proposed to build the multivariate calibration models based on partial least squares regression. The local algorithm is applied to create a calibration set of spectra similar to the spectrum of an unknown sample; the synthetic degree of grey relation coefficient is used to evaluate the similarity. A wavelength selection method based on simple-to-use interactive self-modeling mixture analysis minimizes the influence of noisy variables, and the most informative variables of the most similar samples are selected to build the multivariate calibration model based on partial least squares regression. To validate the performance of the proposed method, ultraviolet-visible absorbance spectra of mixed solutions of food coloring analytes in a concentration range of 20–200 µg/mL are measured. Experimental results show that the proposed method can not only enhance the prediction accuracy of the calibration model, but also greatly reduce its complexity. PMID:27271636
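
    As a rough illustration of the local strategy described above, the sketch below selects the calibration spectra most similar to a query spectrum using the standard grey relational grade and fits a local PLS model on that subset. The grade formula (rho = 0.5) stands in for the paper's "synthetic degree of grey relation coefficient", the SIMPLISMA-based wavelength selection step is omitted, and scikit-learn's PLSRegression is assumed as the regression engine.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      def grey_relational_grade(x_query, X_cal, rho=0.5):
          diff = np.abs(X_cal - x_query)           # |x0(k) - xi(k)| per wavelength
          d_min, d_max = diff.min(), diff.max()
          grc = (d_min + rho * d_max) / (diff + rho * d_max)
          return grc.mean(axis=1)                  # one similarity grade per calibration sample

      def local_pls_predict(x_query, X_cal, y_cal, n_local=30, n_comp=5):
          grade = grey_relational_grade(x_query, X_cal)
          idx = np.argsort(grade)[-n_local:]       # most similar calibration samples
          model = PLSRegression(n_components=min(n_comp, n_local - 1))
          model.fit(X_cal[idx], y_cal[idx])
          return model.predict(x_query.reshape(1, -1))[0, 0]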

  11. Linear regression analysis and its application to multivariate chromatographic calibration for the quantitative analysis of two-component mixtures.

    PubMed

    Dinç, Erdal; Ozdemir, Abdil

    2005-01-01

    A multivariate chromatographic calibration technique was developed for the quantitative analysis of binary mixtures of enalapril maleate (EA) and hydrochlorothiazide (HCT) in tablets in the presence of losartan potassium (LST). The mathematical algorithm of the multivariate chromatographic calibration technique is based on linear regression equations constructed from the relationship between concentration and peak area at a five-wavelength set. The algorithm of this mathematical calibration model, having a simple mathematical content, is briefly described. This approach is a powerful mathematical tool for optimum chromatographic multivariate calibration and for eliminating fluctuations arising from instrumental and experimental conditions. The multivariate chromatographic calibration involves the reduction of multivariate linear regression functions to a univariate data set. The validation of the model was carried out by analyzing various synthetic binary mixtures and by using the standard addition technique. The developed calibration technique was applied to the analysis of real pharmaceutical tablets containing EA and HCT. The obtained results were compared with those obtained by a classical HPLC method. It was observed that the proposed multivariate chromatographic calibration gives better results than the classical HPLC method.
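
    The general idea of reducing the multiwavelength chromatographic data to univariate calibration lines can be sketched as follows: a peak-area-versus-concentration line is fitted for each analyte at each of the five wavelengths, and an unknown binary mixture is then resolved by least squares across the wavelengths. This is a hedged sketch of that idea only; the exact algebra of the published algorithm may differ, and all variable names are placeholders.

      import numpy as np

      def fit_calibration_lines(conc, areas):
          """conc: (n_standards,); areas: (n_standards x n_wavelengths) -> slopes, intercepts."""
          slopes, intercepts = [], []
          for j in range(areas.shape[1]):
              m, b = np.polyfit(conc, areas[:, j], 1)   # peak area vs. concentration at wavelength j
              slopes.append(m)
              intercepts.append(b)
          return np.array(slopes), np.array(intercepts)

      def resolve_binary_mixture(mix_areas, slopes_a, icpt_a, slopes_b, icpt_b):
          # mix_areas(λ) ≈ m_a(λ)·c_a + m_b(λ)·c_b + b_a(λ) + b_b(λ), solved over all wavelengths
          M = np.column_stack([slopes_a, slopes_b])
          rhs = mix_areas - (icpt_a + icpt_b)
          c, *_ = np.linalg.lstsq(M, rhs, rcond=None)
          return c  # [c_analyte_a, c_analyte_b]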

  12. Multivariate analysis applied to tomato hybrid production.

    PubMed

    Balasch, S; Nuez, F; Palomares, G; Cuartero, J

    1984-11-01

    Twenty characters were measured on 60 tomato varieties cultivated in the open-air and in polyethylene plastic-house. Data were analyzed by means of principal components, factorial discriminant methods, Mahalanobis D(2) distances and principal coordinate techniques. Factorial discriminant and Mahalanobis D(2) distances methods, both of which require collecting data plant by plant, lead to similar conclusions as the principal components method that only requires taking data by plots. Characters that make up the principal components in both environments studied are the same, although the relative importance of each one of them varies within the principal components. By combining the information supplied by multivariate analysis with the inheritance mode of the characters, crosses among cultivars can be planned that will produce heterotic hybrids showing characters within previously established limits.

  13. Ultrasonic sensor for predicting sugar concentration using multivariate calibration.

    PubMed

    Krause, D; Hussein, W B; Hussein, M A; Becker, T

    2014-08-01

    This paper presents a multivariate regression method for the prediction of maltose concentration in aqueous solutions. For this purpose, the time and frequency domains of ultrasonic signals are analyzed. It is shown that the prediction of concentration at different temperatures is possible by using several multivariate regression models for individual temperature points. Combining these models by a linear approximation of each coefficient over temperature results in a unified solution, which takes temperature effects into account. The benefits of the proposed method are the low processing time required for analyzing online signals as well as the non-invasive sensor setup, which can be used in pipelines. In addition, the ultrasonic signal sections used in this investigation were extracted from buffer reflections, which remain largely unaffected by bubble and particle interferences. Model calibration was performed in order to investigate the feasibility of online monitoring in fermentation processes. The temperature range investigated was from 10 °C to 21 °C. This range fits the fermentation processes used in the brewing industry. This paper describes the processing of ultrasonic signals for regression, the model evaluation as well as the input variable selection. The statistical approach used for creating the final prediction solution was partial least squares (PLS) regression validated by cross validation. The overall minimum root mean squared error achieved was 0.64 g/100 g.
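
    The coefficient-interpolation idea described above can be sketched as follows: a PLS model is fitted at each calibration temperature, each regression coefficient is approximated as a linear function of temperature, and prediction at an arbitrary temperature uses the interpolated coefficients. The function names, component count and use of scikit-learn (>= 1.1 for intercept_) are assumptions for illustration, not the authors' implementation.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      def fit_temperature_models(X_by_T, y_by_T, temps, n_comp=5):
          coefs, intercepts = [], []
          for X, y in zip(X_by_T, y_by_T):                 # one calibration set per temperature point
              pls = PLSRegression(n_components=n_comp).fit(X, y)
              coefs.append(pls.coef_.ravel())              # regression vector at this temperature
              intercepts.append(pls.intercept_.item())     # scikit-learn >= 1.1
          coefs = np.array(coefs)                          # (n_temps x n_channels)
          # linear approximation of every coefficient over temperature: b_i(T) = q_i*T + p_i
          coef_fit = np.polyfit(np.asarray(temps), coefs, 1)        # shape (2, n_channels)
          icpt_fit = np.polyfit(np.asarray(temps), intercepts, 1)
          return coef_fit, icpt_fit

      def predict_at_temperature(x, T, coef_fit, icpt_fit):
          b = coef_fit[0] * T + coef_fit[1]
          b0 = icpt_fit[0] * T + icpt_fit[1]
          return float(x @ b + b0)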

  14. Classification of foodborne pathogens using near infrared (NIR) laser scatter imaging system with multivariate calibration

    PubMed Central

    Pan, Wenxiu; Zhao, Jiewen; Chen, Quansheng

    2015-01-01

    An optical sensor system, namely an NIR laser scatter imaging system, was developed for rapid and noninvasive classification of foodborne pathogens. The developed system was used for image acquisition. The current study is focused on exploring the potential of this system combined with multivariate calibrations in classifying three categories of common bacteria. Initially, normalization and Zernike moments extraction were performed, and the resultant translation, scale and rotation invariances were applied as the characteristic variables for subsequent discriminant analysis. Both linear (LDA, KNN and PLSDA) and nonlinear (BPANN, SVM and OSELM) pattern recognition methods were employed comparatively for modeling, and optimized by cross validation. Experimental results showed that the performances of the nonlinear tools were superior to those of the linear tools, especially for the OSELM model with a 95% discrimination rate in the prediction set. The overall results showed that rapid and noninvasive classification of foodborne pathogens is highly feasible using the developed system combined with appropriate multivariate calibration. PMID:25860918

  15. Determination of intrinsic viscosity of poly(ethylene terephthalate) using infrared spectroscopy and multivariate calibration method.

    PubMed

    Silva Spinacé, M A; Lucato, M U; Ferrão, M F; Davanzo, C U; De Paoli, M-A

    2006-05-15

    A methodology was developed to determine the intrinsic viscosity of poly(ethylene terephthalate) (PET) using diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) and multivariate calibration (MVC) methods. Multivariate partial least squares calibration was applied to the spectra using mean centering and cross validation. The results were correlated to the intrinsic viscosities determined by the standard chemical method (ASTM D 4603-01) and a very good correlation for values in the range from 0.346 to 0.780 dL g(-1) (relative viscosity values ca. 1.185-1.449) was observed. The spectrophotometer detector sensitivity and the humidity of the samples did not influence the results. The methodology developed is interesting because it does not produce hazardous wastes, avoids the use of time-consuming chemical methods and can rapidly predict the intrinsic viscosity of PET samples over a large range of values, which includes those of recycled materials.

  16. A Bayesian, multivariate calibration for Globigerinoides ruber Mg/Ca

    NASA Astrophysics Data System (ADS)

    Khider, D.; Huerta, G.; Jackson, C.; Stott, L. D.; Emile-Geay, J.

    2015-09-01

    The use of Mg/Ca in marine carbonates as a paleothermometer has been challenged by observations that implicate salinity as a contributing influence on Mg incorporation into biotic calcite and that dissolution at the sea-floor alters the original Mg/Ca. Yet, these factors have not yet been incorporated into a single calibration model. We introduce a new Bayesian calibration for Globigerinoides ruber Mg/Ca based on 186 globally distributed core top samples, which explicitly takes into account the effect of temperature, salinity, and dissolution on this proxy. Our reported temperature, salinity, and dissolution (here expressed as deep-water ΔCO3(2-)) sensitivities are (±2σ) 8.7±0.9%/°C, 3.9±1.2%/psu, and 3.3±1.3% per μmol/kg below a critical threshold of 21 μmol/kg, in good agreement with previous culturing and core-top studies. We then perform a sensitivity experiment on a published record from the western tropical Pacific to investigate the bias introduced by these secondary influences on the interpretation of past temperature variability. This experiment highlights the potential for misinterpretations of past oceanographic changes when the secondary influences of salinity and dissolution are not accounted for. Multiproxy approaches could potentially help deconvolve the contributing influences, but this awaits better characterization of the spatio-temporal relationship between salinity and δ18Osw over millennial and orbital timescales.
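
    As a back-of-the-envelope illustration of what sensitivities quoted in percent per unit imply, the snippet below treats them as a multiplicative (exponential) response and computes the resulting fractional change in Mg/Ca for an assumed set of changes in temperature, salinity and deep-water ΔCO3(2-). The exponential form and the input changes are assumptions for illustration only, not the paper's full Bayesian model.

      import math

      dT, dS, dCO3 = 2.0, 0.5, -5.0                  # assumed changes: °C, psu, μmol/kg (below the threshold)
      sens_T, sens_S, sens_D = 0.087, 0.039, 0.033   # 8.7 %/°C, 3.9 %/psu, 3.3 % per μmol/kg

      ratio = math.exp(sens_T * dT + sens_S * dS + sens_D * dCO3)
      print(f"Mg/Ca changes by a factor of {ratio:.3f}")   # about 1.029 for these inputs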

  17. Application of Fluorescence Spectrometry With Multivariate Calibration to the Enantiomeric Recognition of Fluoxetine in Pharmaceutical Preparations.

    PubMed

    Poláček, Roman; Májek, Pavel; Hroboňová, Katarína; Sádecká, Jana

    2016-04-01

    Fluoxetine is the most prescribed antidepressant chiral drug worldwide. Its enantiomers have a different duration of serotonin inhibition. A novel, simple and rapid method for determination of the enantiomeric composition of fluoxetine in pharmaceutical pills is presented. Specifically, emission, excitation, and synchronous fluorescence techniques were employed to obtain the spectral data, which were analyzed with the multivariate calibration methods of principal component regression (PCR) and partial least squares (PLS). The chiral recognition of fluoxetine enantiomers in the presence of β-cyclodextrin was based on diastereomeric complexes. The results of the multivariate calibration modeling indicated good prediction abilities. The obtained results for tablets were compared with those from chiral HPLC, and no significant differences were shown by Fisher's (F) test and Student's t-test. The smallest residuals between reference or nominal values and predicted values were achieved by multivariate calibration of the synchronous fluorescence spectral data. This conclusion is supported by the calculated figures of merit.

  18. Variety identification of brown sugar using short-wave near infrared spectroscopy and multivariate calibration

    NASA Astrophysics Data System (ADS)

    Yang, Haiqing; Wu, Di; He, Yong

    2007-11-01

    Near-infrared spectroscopy (NIRS), with its characteristics of high speed, non-destructiveness, high precision and reliable detection data, is a pollution-free, rapid, quantitative and qualitative analysis method. A new approach for variety discrimination of brown sugars using short-wave NIR spectroscopy (800-1050 nm) was developed in this work. The relationship between the absorbance spectra and brown sugar varieties was established. The spectral data were compressed by principal component analysis (PCA). The resulting features can be visualized in principal component (PC) space, which can lead to the discovery of structures correlated with the different classes of spectral samples. It appears to provide a reasonable variety clustering of brown sugars. The 2-D PC plot obtained using the first two PCs can be used for pattern recognition. Least-squares support vector machines (LS-SVM) were applied to solve the multivariate calibration problems in a relatively fast way. The work has shown that the short-wave NIR spectroscopy technique is suitable for the brand identification of brown sugar, and that LS-SVM has better identification ability than PLS when the calibration set is small.

  19. Multivariate Calibration Models for Sorghum Composition using Near-Infrared Spectroscopy

    SciTech Connect

    Wolfrum, E.; Payne, C.; Stefaniak, T.; Rooney, W.; Dighe, N.; Bean, B.; Dahlberg, J.

    2013-03-01

    NREL developed calibration models based on near-infrared (NIR) spectroscopy coupled with multivariate statistics to predict compositional properties relevant to cellulosic biofuels production for a variety of sorghum cultivars. A robust calibration population was developed in an iterative fashion. The quality of models developed using the same sample geometry on two different types of NIR spectrometers and two different sample geometries on the same spectrometer did not vary greatly.

  20. Multivariate calibration for the determination of total azadirachtin-related limonoids and simple terpenoids in neem extracts using vanillin assay.

    PubMed

    Dai, J; Yaylayan, V A; Raghavan, G S; Parè, J R; Liu, Z

    2001-03-01

    Two-component and multivariate calibration techniques were developed for the simultaneous quantification of total azadirachtin-related limonoids (AZRL) and simple terpenoids (ST) in neem extracts using vanillin assay. A mathematical modeling method was also developed to aid in the analysis of the spectra and to simplify the calculations. The mathematical models were used in a two-component calibration (using azadirachtin and limonene as standards) for samples containing mainly limonoids and terpenoids (such as neem seed kernel extracts). However, for the extracts from other parts of neem, such as neem leaf, a multivariate calibration was necessary to eliminate the possible interference from phenolics and other components in order to obtain the accurate content of AZRL and ST. It was demonstrated that the accuracy of the vanillin assay in predicting the content of azadirachtin in a model mixture containing limonene (25% w/w) can be improved from 50% overestimation to 95% accuracy using the two-component calibration, while predicting the content of limonene with 98% accuracy. Both calibration techniques were applied to estimate the content of AZRL and ST in different parts of the neem plant. The results of this study indicated that the relative content of limonoids was much higher than that of the terpenoids in all parts of the neem plant studied.

  1. Development and validation of multivariate calibration methods for simultaneous estimation of Paracetamol, Enalapril maleate and hydrochlorothiazide in pharmaceutical dosage form

    NASA Astrophysics Data System (ADS)

    Singh, Veena D.; Daharwal, Sanjay J.

    2017-01-01

    Three multivariate calibration spectrophotometric methods were developed for simultaneous estimation of Paracetamol (PARA), Enalapril maleate (ENM) and Hydrochlorothiazide (HCTZ) in tablet dosage form, namely multi-linear regression calibration (MLRC), the trilinear regression calibration (TLRC) method and the classical least squares (CLS) method. The selectivity of the proposed methods was studied by analyzing laboratory-prepared ternary mixtures, and the methods were successfully applied to the combined dosage form. The proposed methods were validated as per ICH guidelines, and good accuracy, precision and specificity were confirmed within the concentration ranges of 5-35 μg mL(-1), 5-40 μg mL(-1) and 5-40 μg mL(-1) for PARA, HCTZ and ENM, respectively. The results were statistically compared with a reported HPLC method. Thus, the proposed methods can be effectively used for the routine quality control analysis of these drugs in commercial tablet dosage form.

  2. Determination of imidacloprid in water samples via photochemically induced fluorescence and second-order multivariate calibration.

    PubMed

    Fuentes, Edwar; Cid, Camila; Báez, María E

    2015-03-01

    This paper presents a new method for the determination of imidacloprid, one of the most widely used neonicotinoid pesticides in the farming industry, in water samples. The method is based on the measurement of excitation-emission spectra of photo-induced fluorescence (PIF-EEMs) associated with second-order multivariate calibration using parallel factor analysis (PARAFAC) and unfolded partial least squares coupled to residual bilinearization (U-PLS/RBL). The second-order advantage permitted the determination of imidacloprid in the presence of potential interferences that also show photo-induced fluorescence (other pesticides and/or unexpected compounds in the real samples). The photoreaction was performed in 100-μl disposable micropipettes. As a preliminary step, solid phase extraction on C18 (SPE-C18) was applied to concentrate the analyte and lower the limit of detection. The LOD was approximately 1 ng mL(-1), which is suitable for detecting imidacloprid in water according to the guidelines established in North America and Europe. PIF-EEMs coupled to PARAFAC or U-PLS/RBL were successfully applied for the determination of imidacloprid in different real water samples, with an average recovery of 101±10%.

  3. Application of Multivariate Linear and Nonlinear Calibration and Classification Methods in Drug Design.

    PubMed

    Abdolmaleki, Azizeh; Ghasemi, Jahan B; Shiri, Fereshteh; Pirhadi, Somayeh

    2015-01-01

    Data manipulation and maximally efficient extraction of useful information require a range of searching, modeling, mathematical, and statistical approaches. Hence, an adequate multivariate characterization is the first necessary step in an investigation, and the results are interpreted after multivariate analysis. Multivariate data analysis is capable not only of managing large datasets but also of interpreting them reliably and rapidly. Application of chemometrics and cheminformatics methods may be useful for the design and discovery of new drug compounds. In this review, we present a variety of information sources on chemometrics, which we consider useful in different fields of drug design. This review describes exploratory analysis (PCA) and classification and multivariate calibration (PCR, PLS) methods for data analysis. It summarizes the main facts of linear and nonlinear multivariate data analysis in drug discovery and provides an introduction to the manipulation of data in this field. It covers the fundamental concepts of multivariate methods and the principles of projections (PCA and PLS) and introduces the popular modeling and classification techniques. Enough theory behind these methods, particularly concerning the chemometrics tools, is included for those with little experience in multivariate data analysis techniques such as PCA, PLS, SIMCA, etc. We describe each method while avoiding unnecessary equations and details of calculation algorithms. For each method, a synopsis is provided, followed by cases of applications in drug design (i.e., QSAR) and some of its features.

  4. Improved Multivariate Calibration Models for Corn Stover Feedstock and Dilute-Acid Pretreated Corn Stover

    SciTech Connect

    Wolfrum, E. J.; Sluiter, A. D.

    2009-01-01

    We have studied rapid calibration models to predict the composition of a variety of biomass feedstocks by correlating near-infrared (NIR) spectroscopic data to compositional data produced using traditional wet chemical analysis techniques. The rapid calibration models are developed using multivariate statistical analysis of the spectroscopic and wet chemical data. This work discusses the latest versions of the NIR calibration models for corn stover feedstock and dilute-acid pretreated corn stover. Measures of the calibration precision and uncertainty are presented. No statistically significant differences (p = 0.05) are seen between NIR calibration models built using different mathematical pretreatments. Finally, two common algorithms for building NIR calibration models are compared; no statistically significant differences (p = 0.05) are seen for the major constituents glucan, xylan, and lignin, but the algorithms did produce different predictions for total extractives. A single calibration model combining the corn stover feedstock and dilute-acid pretreated corn stover samples gave less satisfactory predictions than the separate models.

  5. A GPU-Based Implementation of the Firefly Algorithm for Variable Selection in Multivariate Calibration Problems

    PubMed Central

    de Paula, Lauro C. M.; Soares, Anderson S.; de Lima, Telma W.; Delbem, Alexandre C. B.; Coelho, Clarimar J.; Filho, Arlindo R. G.

    2014-01-01

    Several variable selection algorithms in multivariate calibration can be accelerated using Graphics Processing Units (GPU). Among these algorithms, the Firefly Algorithm (FA) is a recently proposed metaheuristic that may be used for variable selection. This paper presents a GPU-based FA (FA-MLR) with a multiobjective formulation for variable selection in multivariate calibration problems and compares it with some traditional sequential algorithms in the literature. The advantage of the proposed implementation is demonstrated in an example involving a relatively large number of variables. The results showed that FA-MLR, in comparison with the traditional algorithms, is a more suitable choice and a relevant contribution to the variable selection problem. Additionally, the results also demonstrated that FA-MLR performed on a GPU can be five times faster than its sequential implementation. PMID:25493625

  6. Predicting coliform concentrations in upland impoundments: design and calibration of a multivariate model.

    PubMed Central

    Kay, D; McDonald, A

    1983-01-01

    This paper reports on the calibration and use of a multiple regression model designed to predict concentrations of Escherichia coli and total coliforms in two upland British impoundments. The multivariate approach has improved predictive capability over previous univariate linear models because it includes predictor variables for the timing and magnitude of hydrological input to the reservoirs and physicochemical parameters of water quality. The significance of these results for catchment management research is considered. PMID:6639016

  7. Simultaneous Determination of Metamizole, Thiamin and Pyridoxin Using UV-Spectroscopy in Combination with Multivariate Calibration

    PubMed Central

    Chotimah, Chusnul; Sudjadi; Riyanto, Sugeng; Rohman, Abdul

    2015-01-01

    Purpose: Analysis of drugs in multicomponent systems is officially carried out using chromatographic techniques; however, these techniques are laborious and involve sophisticated instrumentation. Therefore, UV-VIS spectrophotometry coupled with multivariate calibration by partial least squares (PLS) was developed for quantitative analysis of metamizole, thiamin and pyridoxin in the presence of cyanocobalamine without any separation step. Methods: Calibration and validation samples were prepared. The calibration model was built from a series of sample mixtures containing these drugs in certain proportions. Cross-validation of the calibration samples using the leave-one-out technique was used to identify the smaller set of components that provides the greatest predictive ability. The evaluation of the calibration model was based on the coefficient of determination (R2) and the root mean square error of calibration (RMSEC). Results: The results showed that the coefficient of determination (R2) for the relationship between actual and predicted values for all studied drugs was higher than 0.99, indicating good accuracy. The RMSEC values obtained were relatively low, indicating good precision. The accuracy and precision of the developed method showed no significant difference compared to those obtained by the official HPLC method. Conclusion: The developed method (UV-VIS spectrophotometry in combination with PLS) was successfully used for analysis of metamizole, thiamin and pyridoxin in tablet dosage form. PMID:26819934
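
    The calibration and cross-validation workflow described above can be sketched in a few lines; the snippet below fits a PLS model and evaluates it by leave-one-out cross-validation with R2 and root-mean-square error. The file names, component count and use of scikit-learn are placeholders and assumptions, not the software or data of the study.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import LeaveOneOut, cross_val_predict
      from sklearn.metrics import r2_score, mean_squared_error

      X = np.loadtxt("calibration_spectra.csv", delimiter=",")   # (samples x wavelengths), hypothetical file
      Y = np.loadtxt("calibration_conc.csv", delimiter=",")      # (samples x 3): metamizole, thiamin, pyridoxin

      pls = PLSRegression(n_components=4)                        # component count chosen by cross-validation
      Y_cv = cross_val_predict(pls, X, Y, cv=LeaveOneOut())      # leave-one-out predictions

      for j, name in enumerate(["metamizole", "thiamin", "pyridoxin"]):
          r2 = r2_score(Y[:, j], Y_cv[:, j])
          rmse = mean_squared_error(Y[:, j], Y_cv[:, j]) ** 0.5
          print(f"{name}: R2 = {r2:.4f}, RMSE(CV) = {rmse:.3f}")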

  8. Enzymatic electrochemical detection coupled to multivariate calibration for the determination of phenolic compounds in environmental samples.

    PubMed

    Hernandez, Silvia R; Kergaravat, Silvina V; Pividori, Maria Isabel

    2013-03-15

    An approach based on the electrochemical detection of the horseradish peroxidase enzymatic reaction by means of square wave voltammetry was developed for the determination of phenolic compounds in environmental samples. First, a systematic optimization of three factors involved in the enzymatic reaction was carried out using response surface methodology through a central composite design. Second, the enzymatic electrochemical detection coupled with a multivariate calibration method based on the partial least-squares technique was optimized for the determination of a mixture of five phenolic compounds, i.e. phenol, p-aminophenol, p-chlorophenol, hydroquinone and pyrocatechol. The calibration and validation sets were built and assessed. In the calibration model, the LODs for the phenolic compounds ranged from 0.6 to 1.4 × 10(-6) mol L(-1). Recoveries for prediction samples were higher than 85%. These compounds were analyzed simultaneously in spiked samples and in water samples collected close to tanneries and landfills.

  9. Multi-energy calibration applied to atomic spectrometry.

    PubMed

    Virgilio, Alex; Gonçalves, Daniel A; McSweeney, Tina; Gomes Neto, José A; Nóbrega, Joaquim A; Donati, George L

    2017-08-22

    Multi-energy calibration (MEC) is a novel strategy that exploits the capacity of several analytes to generate analytical signals at many different wavelengths (transition energies). Contrasting with traditional methods, which employ a fixed transition energy and different analyte concentrations to build a calibration plot, MEC uses a fixed analyte concentration and multiple transition energies for calibration. Only two calibration solutions are required for the MEC method. Solution 1 is composed of 50% v v(-1) sample and 50% v v(-1) of a standard solution containing the analytes. Solution 2 has 50% v v(-1) sample and 50% v v(-1) blank. Calibration is performed by running each solution separately and monitoring the instrument response at several wavelengths for each analyte. Analytical signals from solutions 1 and 2 are plotted on the x-axis and y-axis, respectively, and the analyte concentration in the sample is calculated from the slope of the resulting calibration curve. The method has been applied to three different atomic spectrometric techniques (ICP OES, MIP OES and HR-CS FAAS). Six analytes were determined in complex samples (e.g. green tea, cola soft drink, cough medicine, soy sauce, and red wine), and the results were comparable with, and in several cases more accurate than, values obtained using the traditional external calibration, internal standardization, and standard additions methods. MEC is a simple, fast and efficient matrix-matching calibration method. It may be applied to any technique capable of simultaneous or fast sequential monitoring of multiple analytical signals. Copyright © 2017 Elsevier B.V. All rights reserved.
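
    The slope relation described above reduces to a few lines of arithmetic. In the sketch below, simulated per-wavelength sensitivities stand in for real instrument signals; with the signals of solution 1 on the x-axis and solution 2 on the y-axis, the slope m equals C_sample/(C_sample + C_standard), so C_sample = m·C_standard/(1 - m). The numbers are arbitrary illustration values.

      import numpy as np

      c_sample_true, c_standard = 4.0, 10.0                 # mg/L, hypothetical
      k = np.array([120.0, 85.0, 40.0, 22.0, 9.0, 3.5])     # per-wavelength sensitivities (arbitrary)

      signal_1 = k * (c_sample_true / 2 + c_standard / 2)   # 50% sample + 50% standard
      signal_2 = k * (c_sample_true / 2)                    # 50% sample + 50% blank

      m, _ = np.polyfit(signal_1, signal_2, 1)              # slope of solution-2 vs. solution-1 signals
      c_sample = m * c_standard / (1 - m)
      print(round(c_sample, 3))                             # recovers 4.0 in this noise-free sketch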

  10. A novel algorithm for linear multivariate calibration based on the mixed model of samples.

    PubMed

    Wu, Xuemei; Liu, Zhiqiang; Li, Hua

    2013-11-01

    We present a novel algorithm for linear multivariate calibration that can generate good prediction results. It is based on the idea that test samples can be expressed as mixtures of the calibration samples in proper proportions. Because the algorithm is based on this mixed model of samples, it is called the MMS algorithm. With both theoretical support and the analysis of two data sets, it is demonstrated that the MMS algorithm produces lower prediction errors than the partial least squares (PLS2) model and has prediction performance similar to PLS1. In the background anti-interference test, the MMS algorithm performs better than PLS2, and when some component information is lacking, it shows better robustness than PLS2. Copyright © 2013 Elsevier B.V. All rights reserved.
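
    A hedged sketch of the "mixed model of samples" idea follows: the test spectrum is expressed as a weighted mixture of the calibration spectra, and the property is predicted as the same mixture of the calibration reference values. Non-negative, normalized weights are an assumption used here for illustration; the published MMS algorithm may constrain the mixture differently.

      import numpy as np
      from scipy.optimize import nnls

      def mms_predict(x_test, X_cal, y_cal):
          """x_test: (channels,); X_cal: (n_cal x channels); y_cal: (n_cal,)."""
          w, _ = nnls(X_cal.T, x_test)      # mixture weights: x_test ≈ X_cal.T @ w
          w = w / w.sum()                   # normalize to proportions (assumption)
          return float(y_cal @ w)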

  11. Multivariate calibration for physiological samples using infrared spectra with choice of different intensity data

    NASA Astrophysics Data System (ADS)

    Heise, H. M.; Bittner, A.

    1995-03-01

    A multicomponent assay of several blood substrates is presented for human plasma using mid-infrared spectra recorded by the attenuated total reflection technique. Two different sample populations were analysed: a pool plasma spiked with different amounts of glucose and a hospital population of samples from 126 different patients. Partial least-squares was used for multivariate calibration based on spectral intervals in the fingerprint region selected for optimum prediction modeling. Different data, either plasma absorbance spectra with compensation of the water signal or single beam spectra, were considered. Results are reported for protein, glucose, cholesterol, triglycerides and urea.

  12. Determination of fragrance content in perfume by Raman spectroscopy and multivariate calibration

    NASA Astrophysics Data System (ADS)

    Godinho, Robson B.; Santos, Mauricio C.; Poppi, Ronei J.

    2016-03-01

    An alternative methodology is herein proposed for the determination of fragrance content in perfumes and their classification according to the guidelines established by fine perfume manufacturers. The methodology is based on Raman spectroscopy associated with multivariate calibration, allowing the determination of fragrance content in a fast, nondestructive, and sustainable manner. The results were considered consistent with the conventional method, with standard error of prediction values lower than 1.0%. This result indicates that the proposed technology is a feasible analytical tool for determination of the fragrance content in a hydro-alcoholic solution for use in manufacturing, quality control and regulatory agencies.

  13. Determination of fragrance content in perfume by Raman spectroscopy and multivariate calibration.

    PubMed

    Godinho, Robson B; Santos, Mauricio C; Poppi, Ronei J

    2016-03-15

    An alternative methodology is herein proposed for the determination of fragrance content in perfumes and their classification according to the guidelines established by fine perfume manufacturers. The methodology is based on Raman spectroscopy associated with multivariate calibration, allowing the determination of fragrance content in a fast, nondestructive, and sustainable manner. The results were considered consistent with the conventional method, with standard error of prediction values lower than 1.0%. This result indicates that the proposed technology is a feasible analytical tool for determination of the fragrance content in a hydro-alcoholic solution for use in manufacturing, quality control and regulatory agencies.

  14. Interactive visualization applied to multivariate geochemical data: A case study

    NASA Astrophysics Data System (ADS)

    Grünfeld, K.

    2003-05-01

    Geochemical survey data have commonly been analysed by combining methods from several disciplines: statistics, geostatistics, geographic information technology, and visualization. In the initial stages of analysis, tables are often used to describe the data and present statistical measures. Far too often the original data are manipulated in one way or another, for example, using mathematical transformations or interpolation of points to a surface. It is the author's opinion that raw geochemical data should be used in the initial stages of data description, thus preserving the original detail. This is not a simple task, as geochemical data are commonly complex, multivariate, and collected on an irregular grid. Data contain outliers, element contents vary within thousands of ppm (parts per million), and different chemical elements may be correlated. In the present study a graphical approach has been used to study the distribution of five heavy metals in glacial till. Using interactive visualization and multiple linked views of the data, the following issues were addressed: multi-element outliers, spatial trends, multi-element correlations and patterns. Interactive graphical techniques proved to be especially suitable for studying outliers and for identifying and locating samples that are redundant and may be removed from the data without loss of information. Visualization using linked views gave valuable insights into metal correlations and spatial trends. As the development of appropriate tools for analysing multivariate spatial data is still in its early stages, visualization freeware seems to be a good alternative, providing powerful, easy to use and intuitive techniques for exploratory data analysis.

  15. Bivariate versus multivariate smart spectrophotometric calibration methods for the simultaneous determination of a quaternary mixture of mosapride, pantoprazole and their degradation products.

    PubMed

    Hegazy, M A; Yehia, A M; Moustafa, A A

    2013-05-01

    The ability of bivariate and multivariate spectrophotometric methods was demonstrated in the resolution of a quaternary mixture of mosapride, pantoprazole and their degradation products. The bivariate calibrations include the bivariate spectrophotometric method (BSM) and the H-point standard addition method (HPSAM), which were able to determine the two drugs simultaneously, but not in the presence of their degradation products. The results showed that simultaneous determinations could be performed in the concentration ranges of 5.0-50.0 microg/ml for mosapride and 10.0-40.0 microg/ml for pantoprazole by the bivariate spectrophotometric method, and in the concentration range of 5.0-45.0 microg/ml for both drugs by the H-point standard addition method. Moreover, the applied multivariate calibration methods were able to determine mosapride, pantoprazole and their degradation products using concentration residuals augmented classical least squares (CRACLS) and partial least squares (PLS). The proposed multivariate methods were applied to 17 synthetic samples in the concentration ranges of 3.0-12.0 microg/ml mosapride, 8.0-32.0 microg/ml pantoprazole, 1.5-6.0 microg/ml mosapride degradation products and 2.0-8.0 microg/ml pantoprazole degradation products. The proposed bivariate and multivariate calibration methods were successfully applied to the determination of mosapride and pantoprazole in their pharmaceutical preparations.

  16. Simultaneous analysis of riboflavin and aromatic amino acids in beer using fluorescence and multivariate calibration methods.

    PubMed

    Sikorska, Ewa; Gliszczyńska-Swigło, Anna; Insińska-Rak, Małgorzata; Khmelinskii, Igor; De Keukeleire, Denis; Sikorski, Marek

    2008-04-21

    The study demonstrates an application of the front-face fluorescence spectroscopy combined with multivariate regression methods to the analysis of fluorescent beer components. Partial least-squares regressions (PLS1, PLS2, and N-way PLS) were utilized to develop calibration models between synchronous fluorescence spectra and excitation-emission matrices of beers, on one hand, and analytical concentrations of riboflavin and aromatic amino acids, on the other hand. The best results were obtained in the analysis of excitation-emission matrices using the N-way PLS2 method. The respective correlation coefficients, and the values of the root mean-square error of cross-validation (RMSECV), expressed as percentages of the respective mean analytic concentrations, were: 0.963 and 14% for riboflavin, 0.974 and 4% for tryptophan, 0.980 and 4% for tyrosine, and 0.982 and 19% for phenylalanine.

  17. Rapid detection of whey in milk powder samples by spectrophotometric and multivariate calibration.

    PubMed

    de Carvalho, Bruna Mara Aparecida; de Carvalho, Lorendane Millena; dos Reis Coimbra, Jane Sélia; Minim, Luis Antônio; de Souza Barcellos, Edilton; da Silva Júnior, Willer Ferreira; Detmann, Edenio; de Carvalho, Gleidson Giordano Pinto

    2015-05-01

    A rapid method for the detection and quantification of the adulteration of milk powder by the addition of whey was assessed by measuring glycomacropeptide (GMP) protein using mid-infrared spectroscopy (MIR). Fluid milk samples were dried and then spiked with different concentrations of GMP and whey. Calibration models were developed from the spectral data using multivariate techniques. For principal component analysis and discriminant analysis, excellent percentages of correct classification were achieved, in accordance with the increase in the proportion of whey in the samples. For partial least squares regression analysis, the correlation coefficient (r) and root mean square error of prediction (RMSEP) of the best model were 0.9885 and 1.17, respectively. The rapid analysis, low-cost monitoring and high throughput (number of samples tested per unit time) indicate that MIR spectroscopy may hold potential as a rapid and reliable method for detecting milk powder fraud using cheese whey.

  18. Handling complex effects in slurry-sampling-electrothermal atomic absorption spectrometry by multivariate calibration.

    PubMed

    Felipe-Sotelo, M; Cal-Prieto, M J; Gómez-Carracedo, M P; Andrade, J M; Carlosena, A; Prada, D

    2006-07-07

    Analysis of solid samples by slurry-sampling electrothermal atomic absorption spectrometry (SS-ETAAS) can involve spectral and chemical interferences caused by the large amount of concomitants introduced into the graphite furnace. Sometimes these cannot be resolved using stabilized temperature platform furnace (STPF) conditions or typical approaches (previous sample ashing, use of chemical modifiers, etc.), which are time consuming and quite expensive. A new approach to handling interferences using multivariate calibration (partial least squares, PLS, and artificial neural networks, ANN) is presented and exemplified with a real problem consisting of determining Sb in several solid matrices (soils, sediments and coal fly ash) as slurries by ETAAS. Experimental designs were implemented at different levels of Sb to develop the calibration matrix and to assess which concomitants (seven ions were considered) modified the atomic signal most. These were Na+ and Ca2+, and they induced simultaneous displacement, depletion (or enhancement) and broadening of the atomic peak. Here it is shown that these complex effects can be handled in a reliable, fast and cost-effective way to predict the concentration of Sb in slurry samples of several solid matrices. The method was validated by predicting the concentrations of five certified reference materials (CRMs) and by studying its robustness against common ETAAS problems. It is also shown that linear PLS can handle eventual non-linearities and that its results are comparable to those of more complex (non-linear) models, such as ANNs.

  19. An ensemble method based on uninformative variable elimination and mutual information for spectral multivariate calibration

    NASA Astrophysics Data System (ADS)

    Tan, Chao; Wang, Jinyue; Wu, Tong; Qin, Xin; Li, Menglong

    2010-12-01

    Based on the combination of uninformative variable elimination (UVE), bootstrapping and mutual information (MI), a simple ensemble algorithm, named ESPLS, is proposed for spectral multivariate calibration (MVC). In ESPLS, uninformative variables are first removed; then a preparatory training set is produced by bootstrapping, on which an MI spectrum of the retained variables is calculated. The variables that exhibit higher MI than a defined threshold form a subspace on which a candidate partial least-squares (PLS) model is constructed. This process is repeated. After a number of candidate models are obtained, a subset of the models is selected to construct an ensemble model by simple or weighted averaging. Four near/mid-infrared (NIR/MIR) spectral datasets concerning the determination of six components are used to verify the proposed ESPLS. The results indicate that ESPLS is superior to UVEPLS and to its combination with MI-based variable selection (SPLS) in terms of both accuracy and robustness. Besides, from the perspective of end-users, ESPLS does not increase the complexity of a calibration while enhancing its performance.
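
    The ensemble construction described above can be sketched as follows: each member is built on a bootstrap sample using only the variables whose mutual information with the response exceeds a threshold, and member predictions are averaged. The initial UVE step is omitted for brevity, and the member count, MI threshold and use of scikit-learn are illustrative assumptions rather than the published ESPLS settings.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.feature_selection import mutual_info_regression

      def ensemble_pls_predict(X, y, X_new, n_members=20, mi_quantile=0.6, n_comp=5, seed=0):
          rng = np.random.default_rng(seed)
          preds = []
          for _ in range(n_members):
              boot = rng.integers(0, len(X), len(X))                  # bootstrap training set
              mi = mutual_info_regression(X[boot], y[boot], random_state=0)
              keep = np.where(mi > np.quantile(mi, mi_quantile))[0]   # variables above the MI threshold
              pls = PLSRegression(n_components=min(n_comp, len(keep)))
              pls.fit(X[boot][:, keep], y[boot])
              preds.append(pls.predict(X_new[:, keep]).ravel())       # member prediction
          return np.mean(preds, axis=0)                               # simple ensemble average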

  20. Design of multivariable feedback control systems via spectral assignment. [as applied to aircraft flight control

    NASA Technical Reports Server (NTRS)

    Liberty, S. R.; Mielke, R. R.; Tung, L. J.

    1981-01-01

    Applied research in the area of spectral assignment in multivariable systems is reported. A frequency domain technique for determining the set of all stabilizing controllers for a single feedback loop multivariable system is described. It is shown that decoupling and tracking are achievable using this procedure. The technique is illustrated with a simple example.

  1. Multivariate calibration and instrument standardization for the rapid detection of diethylene glycol in glycerin by Raman spectroscopy.

    PubMed

    Gryniewicz-Ruzicka, Connie M; Arzhantsev, Sergey; Pelster, Lindsey N; Westenberger, Benjamin J; Buhse, Lucinda F; Kauffman, John F

    2011-03-01

    The transfer of a multivariate calibration model for quantitative determination of diethylene glycol (DEG) contaminant in pharmaceutical-grade glycerin between five portable Raman spectrometers was accomplished using piecewise direct standardization (PDS). The calibration set was developed using a multi-range ternary mixture design with successively reduced impurity concentration ranges. It was found that optimal selection of calibration transfer standards using the Kennard-Stone algorithm also required application of the algorithm to multiple successively reduced impurity concentration ranges. Partial least squares (PLS) calibration models were developed using the calibration set measured independently on each of the five spectrometers. The performance of the models was evaluated based on the root mean square error of prediction (RMSEP), calculated using independent validation samples. An F-test showed that no statistical differences in the variances were observed between models developed on different instruments. Direct cross-instrument prediction without standardization was performed between a single primary instrument and each of the four secondary instruments to evaluate the robustness of the primary instrument calibration model. Significant increases in the RMSEP values for the secondary instruments were observed due to instrument variability. Application of piecewise direct standardization using the optimal calibration transfer subset resulted in the lowest values of RMSEP for the secondary instruments. Using the optimal calibration transfer subset, an optimized calibration model was developed using a subset of the original calibration set, resulting in a DEG detection limit of 0.32% across all five instruments.
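
    The piecewise direct standardization (PDS) step referenced above can be sketched as follows: for every channel of the primary instrument, a small window of secondary-instrument channels measured on the shared transfer standards is regressed onto it, and the local coefficients are assembled into a banded transformation matrix. The window size and the plain least-squares fit are simplifications of common PDS practice, not the exact settings of the study.

      import numpy as np

      def pds_transform_matrix(S_primary, S_secondary, half_window=3):
          """S_primary, S_secondary: (n_standards x channels) spectra of the transfer standards."""
          n_chan = S_primary.shape[1]
          F = np.zeros((n_chan, n_chan))
          for j in range(n_chan):
              lo, hi = max(0, j - half_window), min(n_chan, j + half_window + 1)
              b, *_ = np.linalg.lstsq(S_secondary[:, lo:hi], S_primary[:, j], rcond=None)
              F[lo:hi, j] = b                        # local regression coefficients for channel j
          return F

      # Usage: spectra from a secondary instrument are standardized as X_std = X_secondary @ F
      # and then passed to the calibration model built on the primary instrument.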

  2. A strategy that iteratively retains informative variables for selecting optimal variable subset in multivariate calibration.

    PubMed

    Yun, Yong-Huan; Wang, Wei-Ting; Tan, Min-Li; Liang, Yi-Zeng; Li, Hong-Dong; Cao, Dong-Sheng; Lu, Hong-Mei; Xu, Qing-Song

    2014-01-07

    Nowadays, with the high dimensionality of datasets, the creation of effective methods that can select an optimal variable subset is a great challenge. In this study, a strategy that considers the possible interaction effects among variables through random combinations, called iteratively retaining informative variables (IRIV), is proposed. The variables are classified into four categories: strongly informative, weakly informative, uninformative and interfering variables. On this basis, IRIV retains both the strongly and weakly informative variables in every iterative round until no uninformative and interfering variables remain. Three datasets were employed to investigate the performance of IRIV coupled with partial least squares (PLS). The results show that IRIV is a good alternative variable selection strategy when compared with three well-established and frequently used variable selection methods: genetic algorithm-PLS, Monte Carlo uninformative variable elimination by PLS (MC-UVE-PLS) and competitive adaptive reweighted sampling (CARS). The MATLAB source code of IRIV can be freely downloaded for academic research at the website: http://code.google.com/p/multivariate-calibration/downloads/list. Copyright © 2013 Elsevier B.V. All rights reserved.
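
    A simplified, hedged sketch of the retention idea behind IRIV follows: variables are included at random in many sub-models, the cross-validated errors of sub-models that include versus exclude each variable are compared, and variables whose inclusion helps are retained for the next round. The published algorithm's four-way classification, statistical testing and final backward elimination are not reproduced here, and all settings are illustrative.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_score

      def iriv_like_selection(X, y, n_runs=200, n_comp=5, max_iter=10, seed=0):
          rng = np.random.default_rng(seed)
          keep = np.arange(X.shape[1])
          for _ in range(max_iter):
              B = rng.integers(0, 2, size=(n_runs, len(keep))).astype(bool)  # random inclusion matrix
              B[~B.any(axis=1)] = True                                       # avoid empty variable sets
              rmse = np.empty(n_runs)
              for i, row in enumerate(B):
                  pls = PLSRegression(n_components=min(n_comp, int(row.sum())))
                  scores = cross_val_score(pls, X[:, keep[row]], y, cv=5,
                                           scoring="neg_root_mean_squared_error")
                  rmse[i] = -scores.mean()
              # retain a variable if sub-models that include it are, on average, more accurate
              gain = np.array([rmse[~B[:, k]].mean() - rmse[B[:, k]].mean()
                               for k in range(len(keep))])
              new_keep = keep[gain > 0]
              if len(new_keep) == len(keep) or len(new_keep) == 0:
                  break
              keep = new_keep
          return keep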

  3. Potentiometric sensor array for the determination of lysine in feed samples using multivariate calibration methods.

    PubMed

    García-Villar, N; Saurina, J; Hernández-Cassou, S

    2001-12-01

    A potentiometric sensor array has been developed for the determination of lysine in feed samples. The sensor array consists of a lysine biosensor and seven ion-selective electrodes for NH4+, K+, Na+, Ca2+, Mg2+, Li+, and H+, all based on all-solid-state technology. The potentiometric lysine biosensor comprises a lysine oxidase membrane assembled on an NH4+ electrode. Because the selectivity of the lysine biosensor over other cationic species is not sufficient, these species interfere severely with the potentiometric response. This poor selectivity can be circumvented mathematically by analysis of the richer information contained in the multi-sensor data. The sensor array takes advantage of the fact that each electrode's cross-selectivity towards lysine differs from its response to the other species, and quantification of lysine in complex feed sample extracts is accomplished with multivariate calibration methods such as partial least-squares regression. The results obtained are in reasonable agreement with those given by the standard method for amino acid analysis.

  4. Square wave voltammetry with multivariate calibration tools for determination of eugenol, carvacrol and thymol in honey.

    PubMed

    Tonello, Natalia; Moressi, Marcela Beatriz; Robledo, Sebastián Noel; D'Eramo, Fabiana; Marioli, Juan Miguel

    2016-09-01

    The simultaneous determination of eugenol (EU), thymol (Ty) and carvacrol (CA) in honey samples, employing square wave voltammetry (SWV) and chemometric tools, is reported for the first time. For this purpose, a glassy carbon electrode (GCE) was used as the working electrode. The operating conditions and influencing parameters (involving several chemical and instrumental parameters) were first optimized by cyclic voltammetry (CV). Thus, the effects of the scan rate, pH and analyte concentration on the electrochemical response of the above-mentioned molecules were studied. The results show that the electrochemical responses of the three compounds are very similar and that the voltammetric traces present a high degree of overlap under all the experimental conditions used in this study. Therefore, two chemometric tools were tested to obtain the multivariate calibration model. One was partial least squares regression (PLS-1), which assumes linear behaviour. The other, nonlinear, method was an artificial neural network (ANN); in this case we used a supervised feed-forward network with Levenberg-Marquardt back-propagation training. From the analysis of the accuracy and precision of the estimated concentrations relative to the nominal values for both methods, it was inferred that the ANN was a good model to quantify EU, Ty and CA in honey samples. Recovery percentages were between 87% and 104%, except for two samples whose values were 136% and 72%. The analytical methodology was simple, fast and accurate.
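
    To make the linear-versus-nonlinear comparison concrete, the sketch below fits a PLS model and a small feed-forward network to the same (simulated) voltammetric data. Note that scikit-learn's MLPRegressor does not provide Levenberg-Marquardt training (it offers L-BFGS and stochastic solvers), so this is an analogous illustration only; the data, network size and all names are assumptions.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import mean_squared_error

      # Simulated stand-in for square-wave voltammograms (rows) and one analyte concentration.
      rng = np.random.default_rng(1)
      X = rng.normal(size=(60, 150))          # 60 voltammograms, 150 potential steps
      y = X[:, 40:60].sum(axis=1) + 0.05 * rng.normal(size=60)

      X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=1)

      pls = PLSRegression(n_components=4).fit(X_cal, y_cal)
      ann = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                         max_iter=5000, random_state=1).fit(X_cal, y_cal)

      for name, model in [("PLS-1", pls), ("ANN", ann)]:
          rmsep = mean_squared_error(y_val, np.ravel(model.predict(X_val))) ** 0.5
          print(f"{name}: RMSEP = {rmsep:.3f}")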

  5. Simultaneous determination of paracetamol, phenylephrine hydrochloride and chlorpheniramine maleate in pharmaceutical preparations using multivariate calibration 1.

    PubMed

    Samadi-Maybodi, Abdolraouf; Hassani Nejad-Darzi, Seyed Karim

    2010-04-01

    Resolution of binary mixtures of paracetamol, phenylephrine hydrochloride and chlorpheniramine maleate with minimum sample pre-treatment and without analyte separation has been successfully achieved by methods of partial least squares algorithm with one dependent variable, principal component regression and hybrid linear analysis. Data of analysis were obtained from UV-vis spectra of the above compounds. The method of central composite design was used in the ranges of 1-15 mg L(-1) for both calibration and validation sets. The models refinement procedure and their validation were performed by cross-validation. Figures of merit such as selectivity, sensitivity, analytical sensitivity and limit of detection were determined for all three compounds. The procedure was successfully applied to simultaneous determination of the above compounds in pharmaceutical tablets.

  6. Simultaneous determination of paracetamol, phenylephrine hydrochloride and chlorpheniramine maleate in pharmaceutical preparations using multivariate calibration 1

    NASA Astrophysics Data System (ADS)

    Samadi-Maybodi, Abdolraouf; Hassani Nejad-Darzi, Seyed Karim

    2010-04-01

    Resolution of binary mixtures of paracetamol, phenylephrine hydrochloride and chlorpheniramine maleate with minimum sample pre-treatment and without analyte separation has been successfully achieved by methods of partial least squares algorithm with one dependent variable, principal component regression and hybrid linear analysis. Data of analysis were obtained from UV-vis spectra of the above compounds. The method of central composite design was used in the ranges of 1-15 mg L -1 for both calibration and validation sets. The models refinement procedure and their validation were performed by cross-validation. Figures of merit such as selectivity, sensitivity, analytical sensitivity and limit of detection were determined for all three compounds. The procedure was successfully applied to simultaneous determination of the above compounds in pharmaceutical tablets.
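
    Principal component regression, one of the chemometric methods used in the two records above, can be written compactly as PCA score compression followed by ordinary least squares. The sketch below, with invented data and an arbitrary number of components, is included only to make that two-step structure explicit.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline

      # Invented UV-vis-like data: 30 mixture spectra and the concentration of one analyte.
      rng = np.random.default_rng(2)
      spectra = rng.normal(size=(30, 120))
      conc = spectra[:, 10] * 2.0 + spectra[:, 55] + 0.1 * rng.normal(size=30)

      # PCR = project spectra onto a few principal components, then regress on the scores.
      pcr = make_pipeline(PCA(n_components=5), LinearRegression())
      pcr.fit(spectra, conc)
      print("Predicted concentrations:", pcr.predict(spectra[:3]).round(2))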

  7. Multivariate calibration modeling of liver oxygen saturation using near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Cingo, Ndumiso A.; Soller, Babs R.; Puyana, Juan C.

    2000-05-01

    The liver has been identified as an ideal site to spectroscopically monitor changes in oxygen saturation during liver transplantation and shock because it is susceptible to reduced blood flow and oxygen transport. Near-IR spectroscopy, combined with multivariate calibration techniques, has been shown to be a viable technique for monitoring oxygen saturation changes in various organs in a minimally invasive manner. The liver has a dual circulation: blood enters the liver through the portal vein and hepatic artery, and leaves through the hepatic vein. Therefore, it is of utmost importance to determine how the liver NIR spectroscopic information correlates with the different regions of the hepatic lobule as the dual circulation flows from the presinusoidal space into the post-sinusoidal region of the central vein. For NIR spectroscopic information to reliably represent the status of liver oxygenation, the NIR oxygen saturation should correlate best with the post-sinusoidal region. In a series of six pigs undergoing induced hemorrhagic shock, NIR spectra collected from the liver were used together with oxygen saturation reference data from the hepatic and portal veins, and an average of the two, to build partial least-squares regression models. Results obtained from these models show that the hepatic vein and an average of the hepatic and portal veins provide information that correlates best with the NIR spectral information, while the portal vein reference measurement provides poorer correlation and accuracy. These results indicate that NIR determination of oxygen saturation in the liver can provide an assessment of liver oxygen utilization.

  8. Applied Statistics: From Bivariate through Multivariate Techniques [with CD-ROM

    ERIC Educational Resources Information Center

    Warner, Rebecca M.

    2007-01-01

    This book provides a clear introduction to widely used topics in bivariate and multivariate statistics, including multiple regression, discriminant analysis, MANOVA, factor analysis, and binary logistic regression. The approach is applied and does not require formal mathematics; equations are accompanied by verbal explanations. Students are asked…

  10. Unlocking interpretation in near infrared multivariate calibrations by orthogonal partial least squares.

    PubMed

    Stenlund, Hans; Johansson, Erik; Gottfries, Johan; Trygg, Johan

    2009-01-01

    Near infrared spectroscopy (NIR) was developed primarily for applications such as the quantitative determination of nutrients in the agricultural and food industries. Examples include the determination of water, protein, and fat within complex samples such as grain and milk. Because of its useful properties, NIR analysis has spread to other areas such as chemistry and pharmaceutical production. NIR spectra consist of infrared overtones and combinations thereof, making interpretation of the results complicated. It can be very difficult to assign peaks to known constituents in the sample. Thus, multivariate analysis (MVA) has been crucial in translating spectral data into information, mainly for predictive purposes. Orthogonal partial least squares (OPLS), a new MVA method, has prediction and modeling properties similar to those of other MVA techniques, e.g., partial least squares (PLS), a method with a long history of use for the analysis of NIR data. OPLS provides an intrinsic algorithmic improvement for the interpretation of NIR data. In this report, four sets of NIR data were analyzed to demonstrate the improved interpretation provided by OPLS. The first two sets included simulated data to demonstrate the overall principles; the third set comprised a statistically replicated design of experiments (DoE), to demonstrate how instrumental difference could be accurately visualized and correctly attributed to Wood's anomaly phenomena; the fourth set was chosen to challenge the MVA by using data relating to powder mixing, a crucial step in the pharmaceutical industry prior to tabletting. Improved interpretation by OPLS was demonstrated for all four examples, as compared to alternative MVA approaches. It is expected that OPLS will be used mostly in applications where improved interpretation is crucial; one such area is process analytical technology (PAT). PAT involves fewer independent samples, i.e., batches, than would be associated with agricultural applications; in

  11. Temperature insensitive prediction of glucose concentration in turbid medium using multivariable calibration based on external parameter orthogonalization

    NASA Astrophysics Data System (ADS)

    Han, Tongshuai; Zhang, Ziyang; Sun, Cuiying; Guo, Chao; Sun, Di; Liu, Jin

    2016-10-01

    The measurement accuracy of non-invasive blood glucose concentration (BGC) sensing with near-infrared spectroscopy is easily degraded by temperature variation in the tissue, because such variation induces unacceptable spectral changes and consequent prediction deviations. We use a multivariable correction method based on external parameter orthogonalization (EPO) to calibrate spectral data recorded at different temperatures and thereby reduce the spectral variation. The tested medium is a tissue phantom, an aqueous Intralipid solution. The correction uses a projection matrix that makes the spectral space orthogonal to the external parameter, i.e. temperature, while the useful spectral information related to glucose concentration is preserved. Moreover, training the projection matrix can be separated from building the calibration model for glucose prediction, because it only requires data from representative samples measured under temperature variation. The method is therefore less complex than building a robust prediction model from a comprehensive spectral dataset spanning both BGC and temperature variation. In our test, calibrated spectra with the same glucose concentration but different temperatures show significantly improved repeatability, and the glucose concentration predictions show a lower root mean squared error of prediction (RMSEP) than those from the robust calibration model that considers both variables. We also discuss the rationale for the representative samples chosen by EPO. This work may serve as a reference for temperature calibration in in vivo BGC sensing.
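
    A minimal sketch of the external parameter orthogonalization step is given below, under the assumption that spectra of the same representative samples are available at two temperatures: the difference spectra capture the temperature effect, their leading singular vectors span the subspace to be removed, and every spectrum is then projected onto the orthogonal complement. The array names, the toy data and the number of removed components are illustrative assumptions.

      import numpy as np

      def epo_projection(X_ref, X_perturbed, n_components=2):
          """Build the EPO projection matrix P = I - V V^T from difference spectra.

          X_ref       : spectra of representative samples at a reference temperature
          X_perturbed : spectra of the same samples at a different temperature
          """
          D = X_perturbed - X_ref                 # spectral variation caused by temperature
          # Leading right singular vectors of D span the temperature-related subspace.
          _, _, Vt = np.linalg.svd(D, full_matrices=False)
          V = Vt[:n_components].T                 # (n_wavelengths, n_components)
          return np.eye(X_ref.shape[1]) - V @ V.T

      # Usage: correct a calibration matrix X before building the glucose model.
      rng = np.random.default_rng(3)
      X_ref = rng.normal(size=(10, 300))
      X_hot = X_ref + 0.3 * np.outer(np.ones(10), np.sin(np.linspace(0, 3, 300)))
      P = epo_projection(X_ref, X_hot, n_components=1)
      X_corrected = X_ref @ P                     # temperature-related variation removed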

  12. A TRMM-Calibrated Infrared Rainfall Algorithm Applied Over Brazil

    NASA Technical Reports Server (NTRS)

    Negri, A. J.; Xu, L.; Adler, R. F.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    The development of a satellite infrared technique for estimating convective and stratiform rainfall and its application in studying the diurnal variability of rainfall in Amazonia are presented. The Convective-Stratiform Technique, calibrated by coincident, physically retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI), is applied during January to April 1999 over northern South America. The diurnal cycle of rainfall, as well as the division between convective and stratiform rainfall, is presented. Results compare well (with a one-hour lag) with the diurnal cycle derived from Tropical Ocean-Global Atmosphere (TOGA) radar-estimated rainfall in Rondonia. The satellite estimates reveal that the convective rain constitutes, in the mean, 24% of the rain area while accounting for 67% of the rain volume. The effects of geography (rivers, lakes, coasts) and topography on the diurnal cycle of convection are examined. In particular, the Amazon River, downstream of Manaus, is shown to both enhance early morning rainfall and inhibit afternoon convection. Monthly estimates from this technique, dubbed CST/TMI, are verified over a dense rain gage network in the state of Ceara, in northeast Brazil. The CST/TMI showed a high bias equal to +33% of the gage mean, indicating that possibly the TMI estimates alone are also high. The root mean square difference (after removal of the bias) equaled 36.6% of the gage mean. The correlation coefficient was 0.77 based on 72 station-months.

  13. A TRMM-Calibrated Infrared Rainfall Algorithm Applied Over Brazil

    NASA Technical Reports Server (NTRS)

    Negri, Andrew J.; Xu, L.; Adler, R. F.; Einaudi, Franco (Technical Monitor)

    2001-01-01

    A satellite infrared (IR) technique for estimating rainfall over northern South America is presented. The objectives are to examine the diurnal variability of rainfall and to investigate the relative contributions from the convective and stratiform components. In this study, we apply the Convective-Stratiform Technique (CST) of Adler and Negri (1988). The parameters of the original technique were re-calibrated using coincident rainfall estimates (Olson et al., 2000) derived from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) and GOES IR (11 micrometer) observations. Local circulations were found to play a major role in modulating the rainfall and its diurnal cycle. These included land/sea circulations (notably along the northeast Brazilian coast and in the Gulf of Panama), mountain/valley circulations (along the Andes Mountains), and circulations associated with the presence of rivers. This last category was examined in detail along the Amazon River east of Manaus. There we found an early morning rainfall maximum along the river (5 LT at 58 W, 3 LT at 56 W). Rainfall avoids the river in the afternoon (12 LT and later), notably at 56 W. The width of the river seems to be generating a land/river circulation which enhances early morning rainfall but inhibits afternoon rainfall. Results are compared to ground-based radar data collected during the Large-Scale Biosphere-Atmosphere (LBA) experiment in southwest Brazil, to monthly raingages in northeastern Brazil, and to data from the TRMM Precipitation Radar.

  14. Optimum Experimental Design applied to MEMS accelerometer calibration for 9-parameter auto-calibration model.

    PubMed

    Ye, Lin; Su, Steven W

    2015-01-01

    Optimum Experimental Design (OED) is an information gathering technique used to estimate parameters, which aims to minimize the variance of parameter estimation and prediction. In this paper, we further investigate an OED for MEMS accelerometer calibration of the 9-parameter auto-calibration model. Based on a linearized 9-parameter accelerometer model, we show the proposed OED is both G-optimal and rotatable, which are the desired properties for the calibration of wearable sensors for which only simple calibration devices are available. The experimental design is carried out with a newly developed wearable health monitoring device and desired experimental results have been achieved.
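
    A common form of a 9-parameter accelerometer auto-calibration model estimates three biases, three scale factors and three cross-axis terms from static measurements whose corrected magnitude should equal local gravity; the sketch below fits such a model by non-linear least squares. The parameterization, the placeholder data and all names are generic assumptions and not necessarily the model or experimental design used in the paper.

      import numpy as np
      from scipy.optimize import least_squares

      G = 9.81  # local gravity, m/s^2

      def correct(raw, p):
          """Apply bias b, scale k and cross-axis terms a to raw accelerometer samples."""
          b = p[0:3]
          k = p[3:6]
          a = p[6:9]
          T = np.array([[1.0, a[0], a[1]],
                        [0.0, 1.0,  a[2]],
                        [0.0, 0.0,  1.0]])        # upper-triangular misalignment matrix
          return (T @ (np.diag(k) @ (raw - b).T)).T

      def residuals(p, raw):
          # In each static orientation the corrected vector should have magnitude G.
          return np.linalg.norm(correct(raw, p), axis=1) - G

      # raw_static: (n_orientations, 3) mean readings in several static poses (placeholder data).
      rng = np.random.default_rng(4)
      true_dirs = rng.normal(size=(12, 3))
      raw_static = G * true_dirs / np.linalg.norm(true_dirs, axis=1, keepdims=True) + 0.05

      p0 = np.concatenate([np.zeros(3), np.ones(3), np.zeros(3)])   # start: no bias, unit scale
      fit = least_squares(residuals, p0, args=(raw_static,))
      print("Estimated parameters:", fit.x.round(3))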

  15. Calibrating and Evaluating Boomless Spray Systems for Applying Forest Herbicides

    Treesearch

    Michael A. Wehr; Russell W. Johnson; Robert L. Sajdak

    1985-01-01

    Describes a testing procedure used to calibrate and evaluate agricultural boomless spray systems. The tests allow the user to obtain dependable and satisfactory results when the systems are used in actual forest situations.

  16. Importance of prediction outlier diagnostics in determining a successful inter-vendor multivariate calibration model transfer.

    PubMed

    Guenard, Robert D; Wehlburg, Christine M; Pell, Randy J; Haaland, David M

    2007-07-01

    This paper reports on the transfer of calibration models between Fourier transform near-infrared (FT-NIR) instruments from four different manufacturers. The piecewise direct standardization (PDS) method is compared with the new hybrid calibration method known as prediction augmented classical least squares/partial least squares (PACLS/PLS). The success of a calibration transfer experiment is judged by prediction error and by the number of samples that are flagged as outliers that would not have been flagged as such if a complete recalibration were performed. Prediction results must be acceptable and the outlier diagnostics capabilities must be preserved for the transfer to be deemed successful. Previous studies have measured the success of a calibration transfer method by comparing only the prediction performance (e.g., the root mean square error of prediction, RMSEP). However, our study emphasizes the need to consider outlier detection performance as well. As our study illustrates, the RMSEP values for a calibration transfer can be within acceptable range; however, statistical analysis of the spectral residuals can show that differences in outlier performance can vary significantly between competing transfer methods. There was no statistically significant difference in the prediction error between the PDS and PACLS/PLS methods when the same subset sample selection method was used for both methods. However, the PACLS/PLS method was better at preserving the outlier detection capabilities and therefore was judged to have performed better than the PDS algorithm when transferring calibrations with the use of a subset of samples to define the transfer function. The method of sample subset selection was found to make a significant difference in the calibration transfer results using the PDS algorithm, while the transfer results were less sensitive to subset selection when the PACLS/PLS method was used.
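
    Outlier diagnostics of the kind discussed above typically compare each sample's spectral residual, after projection onto the calibration model's factor space, with the residual level of the calibration set. The sketch below shows one common way of doing this with an F-type ratio of summed squared residuals; the threshold, degrees of freedom and data are illustrative assumptions rather than the exact statistics used in the study.

      import numpy as np
      from scipy.stats import f as f_dist
      from sklearn.decomposition import PCA

      def flag_spectral_outliers(X_cal, X_new, n_components=5, alpha=0.01):
          """Flag new spectra whose residual variance greatly exceeds the calibration residuals."""
          pca = PCA(n_components=n_components).fit(X_cal)

          def residual_ss(X):
              X_hat = pca.inverse_transform(pca.transform(X))
              return ((X - X_hat) ** 2).sum(axis=1)

          q_cal = residual_ss(X_cal)
          q_new = residual_ss(X_new)
          # F-type ratio of each new sample's residual to the mean calibration residual.
          ratio = q_new / q_cal.mean()
          dof1 = X_cal.shape[1] - n_components
          dof2 = dof1 * len(X_cal)
          threshold = f_dist.ppf(1.0 - alpha, dof1, dof2)
          return ratio > threshold

      rng = np.random.default_rng(5)
      X_cal = rng.normal(size=(40, 100))
      X_new = np.vstack([rng.normal(size=(5, 100)), 5.0 + rng.normal(size=(1, 100))])
      print(flag_spectral_outliers(X_cal, X_new))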

  17. Simultaneous spectrophotometric-multivariate calibration determination of several components of ophthalmic solutions: phenylephrine, chloramphenicol, antipyrine, methylparaben and thimerosal.

    PubMed

    Collado, M S; Mantovani, V E; Goicoechea, H C; Olivieri, A C

    2000-08-16

    The use of multivariate spectrophotometric calibration for the simultaneous determination of several active components and excipients in ophthalmic solutions is presented. The resolution of five-component mixtures of phenylephrine, chloramphenicol, antipyrine, methylparaben and thimerosal has been accomplished by using partial least-squares (PLS-1) and a variant of the so-called hybrid linear analysis (HLA). Notwithstanding the presence of a large number of components and their high degree of spectral overlap, they have been determined simultaneously with high accuracy and precision, with no interference, rapidly and without resorting to extraction procedures using non-aqueous solvents. A simple and fast method for wavelength selection in the calibration step is presented, based on the minimisation of the predicted error sum of squares (PRESS) calculated as a function of a moving spectral window.
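
    The moving spectral window idea can be sketched as follows: slide a window across the wavelength axis, compute a cross-validated PRESS for a PLS model restricted to that window, and keep the window with the lowest value. The window width, step size, number of factors and data below are assumptions made for the example.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import LeaveOneOut, cross_val_predict

      def best_window(X, y, width=40, n_components=3):
          """Return the (PRESS, start, stop) window minimizing leave-one-out PRESS."""
          best = None
          for start in range(0, X.shape[1] - width + 1, 5):
              Xw = X[:, start:start + width]
              pls = PLSRegression(n_components=n_components)
              y_pred = cross_val_predict(pls, Xw, y, cv=LeaveOneOut())
              press = float(((y - np.ravel(y_pred)) ** 2).sum())
              if best is None or press < best[0]:
                  best = (press, start, start + width)
          return best

      rng = np.random.default_rng(6)
      X = rng.normal(size=(25, 200))
      y = X[:, 80:100].mean(axis=1) + 0.05 * rng.normal(size=25)
      press, lo, hi = best_window(X, y)
      print(f"best window: variables {lo}-{hi}, PRESS = {press:.3f}")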

  18. Determination of dexamethasone and two excipients (creatinine and propylparaben) in injections by using UV-spectroscopy and multivariate calibrations.

    PubMed

    Collado, M S; Robles, J C; De Zan, M; Cámara, M S; Mantovani, V E; Goicoechea, H C

    2001-10-23

    The use of multivariate spectrophotometric calibration for the simultaneous determination of dexamethasone and two typical excipients (creatinine and propylparaben) in injections is presented. The resolution of the three-component mixture in a matrix of excipients has been accomplished by using partial least-squares (PLS-1). Notwithstanding the elevated degree of spectral overlap, they have been rapidly and simultaneously determined with high accuracy and precision (comparable to the HPLC pharmacopeial method), with no interference, and without resorting to extraction procedures using non-aqueous solvents. A simple and fast method for wavelength selection in the calibration step is used, based on the minimisation of the predicted error sum of squares (PRESS) calculated as a function of a moving spectral window.

  19. Multivariate calibration for protein, cholesterol and triglycerides in human plasma using short-wave near infrared spectrometry

    NASA Astrophysics Data System (ADS)

    Bittner, A.; Marbach, R.; Heise, H. M.

    1995-04-01

    Recent progress in spectroscopy and chemometrics has brought the reagentless analysis of blood substrates by near infrared spectroscopy into clinical reach. Results for the in-vitro analysis of several blood substrates in human blood plasma using multivariate calibration by partial least squares are presented for 125 hospital samples. Whereas the relative mean-squared prediction error for total protein (1.4 %) using short-wave NIR data is comparable with previous results using conventional NIR spectroscopy, the errors found for total cholesterol (6.5 %) and triglycerides (13.8 %) are nearly a factor of two worse in this study.

  20. [Comparison of four multivariate calibration methods in simultaneous determination of air toxic organic compounds with FTIR spectroscopy].

    PubMed

    Li, Yan; Wang, Jun-de; Chen, Zuo-ru; Zhou, Xue-tie; Huang, Zhong-hua

    2002-10-01

    The concentration prediction abilities of four multivariate calibration methods--classical least squares (CLS), partial least squares (PLS), the Kalman filter method (KFM) and an artificial neural network (ANN)--were compared in this paper. Five air toxic organic compounds--1,3-butadiene, benzene, o-xylene, chlorobenzene, and acrolein--whose FTIR spectra seriously overlap each other were selected as the analytes. The evaluation criteria were the mean prediction error (MPE) and the mean relative error (MRE). Results showed that PLS was superior to the other methods for this multicomponent analysis problem, while there was no appreciable difference among CLS, KFM and ANN.
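
    Classical least squares, the simplest of the four methods compared above, models each mixture spectrum as a linear combination of pure-component contributions. The sketch below estimates the pure-component matrix from a calibration set and then inverts the model for an unknown mixture; the data and names are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(7)
      n_points, n_comp = 400, 5

      # Invented pure-component FTIR spectra (in practice these would be measured or estimated).
      pure = np.abs(rng.normal(size=(n_comp, n_points)))

      # Calibration mixtures: A = C K, with known concentrations C and measured spectra A.
      C_cal = rng.uniform(0.1, 1.0, size=(20, n_comp))
      A_cal = C_cal @ pure + 0.01 * rng.normal(size=(20, n_points))

      # CLS calibration step: estimate K (pure spectra) by least squares.
      K_hat, *_ = np.linalg.lstsq(C_cal, A_cal, rcond=None)

      # CLS prediction step: for an unknown spectrum a, solve a ~ c K_hat for c.
      c_true = np.array([0.2, 0.5, 0.1, 0.7, 0.3])
      a_unknown = c_true @ pure + 0.01 * rng.normal(size=n_points)
      c_est, *_ = np.linalg.lstsq(K_hat.T, a_unknown, rcond=None)
      print("true:", c_true, "estimated:", c_est.round(2))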

  1. Applying Hierarchical Model Calibration to Automatically Generated Items.

    ERIC Educational Resources Information Center

    Williamson, David M.; Johnson, Matthew S.; Sinharay, Sandip; Bejar, Isaac I.

    This study explored the application of hierarchical model calibration as a means of reducing, if not eliminating, the need for pretesting of automatically generated items from a common item model prior to operational use. Ultimately the successful development of automatic item generation (AIG) systems capable of producing items with highly similar…

  2. Calibration and investigation of infrared camera systems applying blackbody radiation

    NASA Astrophysics Data System (ADS)

    Hartmann, Juergen; Fischer, Joachim

    2001-03-01

    An experimental facility is presented which allows calibration and detailed investigation of infrared camera systems. Various blackbodies operating in the temperature range from -60 °C up to 3000 °C serve as standard radiation sources, enabling calibration of camera systems over a wide temperature and spectral range with the highest accuracy. Quantitative results and precise long-term investigations, especially in detecting climatic trends, require accurate traceability to the International Temperature Scale of 1990 (ITS-90). For the blackbodies used, traceability to ITS-90 is either realized by standard platinum resistance thermometers (in the temperature range below 962 °C) or by absolute and relative radiometry (in the temperature range above 962 °C). This traceability is fundamental for the implementation of quality assurance systems and the realization of different standardizations, for example according to ISO 9000. For investigation of the angular and temperature resolution, our set-up enables minimum resolvable (MRTD) and minimum detectable temperature difference (MDTD) measurements in the various temperature ranges. A collimator system may be used to image the MRTD and MDTD targets to infinity. As the internal calibration of infrared camera systems depends critically on the temperature of the surroundings, the calibration and investigation of the cameras is performed in a climate box, which allows detailed control of environmental parameters such as humidity and temperature. Experimental results obtained for different camera systems are presented and discussed.

  3. A new strategy for solving matrix effect in multivariate calibration standard addition data using combination of H-point curve isolation and H-point standard addition methods.

    PubMed

    Afkhami, Abbas; Abbasi-Tarighat, Maryam; Bahram, Morteza; Abdollahi, Hamid

    2008-04-21

    This work presents a new and simple strategy for solving matrix effects using a combination of the H-point curve isolation method (HPCIM) and the H-point standard addition method (HPSAM). The method uses spectrophotometric multivariate calibration data constructed by successive standard additions of an analyte into an unknown matrix. Under successive standard additions of the analyte, the concentrations of the remaining components (interferents) remain constant and therefore give a constant cumulative interferent spectrum in the unknown mixture. The proposed method first extracts this spectrum using the H-point curve isolation method and then applies the obtained cumulative interferent spectrum to the determination of the analyte by the H-point standard addition method. In order to evaluate the applicability of the method, a simulated data set as well as several experimental data sets were tested. The method was then applied to the determination of paracetamol in pharmaceutical tablets and copper in urine samples and in a copper alloy.

  4. Determination of amylose content in starch using Raman spectroscopy and multivariate calibration analysis.

    PubMed

    Almeida, Mariana R; Alves, Rafael S; Nascimbem, Laura B L R; Stephani, Rodrigo; Poppi, Ronei J; de Oliveira, Luiz Fernando C

    2010-08-01

    Fourier transform Raman spectroscopy and chemometric tools have been used for exploratory analysis of pure corn and cassava starch samples and mixtures of both starches, as well as for the quantification of amylose content in corn and cassava starch samples. The exploratory analysis using principal component analysis shows that two natural groups of similar samples can be obtained, according to the amylose content, and consequently the botanical origins. The Raman band at 480 cm(-1), assigned to the ring vibration of starches, has the major contribution to the separation of the corn and cassava starch samples. This region was used as a marker to identify the presence of starch in different samples, as well as to characterize amylose and amylopectin. Two calibration models were developed based on partial least squares regression involving pure corn and cassava, and a third model with both starch samples was also built; the results were compared with the results of the standard colorimetric method. The samples were separated into two groups of calibration and validation by employing the Kennard-Stone algorithm and the optimum number of latent variables was chosen by the root mean square error of cross-validation obtained from the calibration set by internal validation (leave one out). The performance of each model was evaluated by the root mean square errors of calibration and prediction, and the results obtained indicate that Fourier transform Raman spectroscopy can be used for rapid determination of apparent amylose in starch samples with prediction errors similar to those of the standard method.
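
    Selecting the number of latent variables by leave-one-out root mean square error of cross-validation, as described above, can be expressed in a few lines; the placeholder data and the candidate range of components below are assumptions.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import LeaveOneOut, cross_val_predict

      rng = np.random.default_rng(8)
      X = rng.normal(size=(30, 250))                                # stand-in for FT-Raman spectra
      y = X[:, 48] + 0.5 * X[:, 120] + 0.05 * rng.normal(size=30)   # stand-in for amylose content

      rmsecv = []
      for n in range(1, 11):
          y_cv = cross_val_predict(PLSRegression(n_components=n), X, y, cv=LeaveOneOut())
          rmsecv.append(np.sqrt(np.mean((y - np.ravel(y_cv)) ** 2)))

      best_n = int(np.argmin(rmsecv)) + 1
      print("RMSECV by latent variables:", np.round(rmsecv, 3))
      print("optimum number of latent variables:", best_n)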

  5. Determination of rice syrup adulterant concentration in honey using three-dimensional fluorescence spectra and multivariate calibrations

    NASA Astrophysics Data System (ADS)

    Chen, Quansheng; Qi, Shuai; Li, Huanhuan; Han, Xiaoyan; Ouyang, Qin; Zhao, Jiewen

    2014-10-01

    To rapidly and efficiently detect the presence of adulterants in honey, the three-dimensional fluorescence spectroscopy (3DFS) technique was employed with the help of multivariate calibration. The data of the 3D fluorescence spectra were compressed using characteristic extraction and principal component analysis (PCA). Then, partial least squares (PLS) and back propagation neural network (BP-ANN) algorithms were used for modeling. The model was optimized by cross validation, and its performance was evaluated according to the root mean square error of prediction (RMSEP) and correlation coefficient (R) in the prediction set. The results showed that the BP-ANN model was superior to the PLS models, and the optimum prediction results of the mixed group (sunflower + longan + buckwheat + rape) model were as follows: RMSEP = 0.0235 and R = 0.9787 in the prediction set. The study demonstrated that the 3D fluorescence spectroscopy technique combined with multivariate calibration has high potential for rapid, nondestructive, and accurate quantitative analysis of honey adulteration.

  6. [Input layer self-construction neural network and its use in multivariant calibration of infrared spectra].

    PubMed

    Gao, J B; Hu, X Y; Hu, D C

    2001-12-01

    In order to solve the problems of feature extraction and calibration modelling in quantitative infrared spectral analysis, an input-layer self-constructive neural network (ILSC-NN) is proposed. Before the NN training process, the training data are first analyzed and some prior knowledge about the problem is obtained. During the training process, the number of input neurons is determined adaptively based on this prior knowledge, and the network parameters are determined at the same time. This construction of the NN model helps to increase the efficiency of calibration modelling. A quantitative analysis experiment using simulated spectral data showed that this modelling method not only achieves efficient wavelength selection, but also remarkably reduces random and non-linear noise.

  7. Simultaneous spectrofluorimetric determination of (acetyl)salicylic acid, codeine and pyridoxine in pharmaceutical preparations using partial least-squares multivariate calibration.

    PubMed

    Martos, N R; Díaz, A M; Navalón, A; De Orbe Payá, I; Capitán Vallvey, L F

    2000-10-01

    A partial least-squares calibration (PLS) method for the simultaneous spectrofluorimetric determination of salicylic acid (SA), codeine (CO) and pyridoxine (PY) is proposed. The determination of SA, CO, and PY has been carried out in mixtures of up to three components by recording the emission fluorescence spectra between 300 and 500 nm (lambda(exc) = 220 nm). Owing to the strong spectral overlap among both the excitation and the emission spectra of these compounds, a prior separation would be required to determine them by conventional spectrofluorimetric methodologies. Here, a full-spectrum multivariate calibration PLS method is developed instead. The experimental calibration matrix was constructed with 14 samples. The concentration ranges considered were 0.1-2.0 (SA), 0.25-3.0 (CO) and 0.10-2.0 (PY) mg x l(-1). The optimum number of factors was selected by using the cross-validation method. The method also allows the simultaneous determination of acetylsalicylic acid (ASA), CO and PY by previous alkaline hydrolysis of ASA to SA. To check the accuracy of the proposed method, it was applied to the determination of these compounds in synthetic mixtures and in pharmaceuticals.

  8. Applying the multivariate time-rescaling theorem to neural population models

    PubMed Central

    Gerhard, Felipe; Haslinger, Robert; Pipa, Gordon

    2011-01-01

    Statistical models of neural activity are integral to modern neuroscience. Recently, interest has grown in modeling the spiking activity of populations of simultaneously recorded neurons to study the effects of correlations and functional connectivity on neural information processing. However any statistical model must be validated by an appropriate goodness-of-fit test. Kolmogorov-Smirnov tests based upon the time-rescaling theorem have proven to be useful for evaluating point-process-based statistical models of single-neuron spike trains. Here we discuss the extension of the time-rescaling theorem to the multivariate (neural population) case. We show that even in the presence of strong correlations between spike trains, models which neglect couplings between neurons can be erroneously passed by the univariate time-rescaling test. We present the multivariate version of the time-rescaling theorem, and provide a practical step-by-step procedure for applying it towards testing the sufficiency of neural population models. Using several simple analytically tractable models and also more complex simulated and real data sets, we demonstrate that important features of the population activity can only be detected using the multivariate extension of the test. PMID:21395436
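
    For a single spike train, the univariate time-rescaling check described above can be coded directly: integrate the model's conditional intensity between successive spikes, map the rescaled intervals to the unit interval, and compare them with a uniform distribution using a Kolmogorov-Smirnov test. The constant-rate intensity below is only a placeholder model; the multivariate extension discussed in the paper requires additional steps not shown here.

      import numpy as np
      from scipy.stats import kstest

      def time_rescaling_ks(spike_times, intensity, t_grid):
          """KS test of rescaled inter-spike intervals against Uniform(0, 1).

          intensity : model conditional intensity evaluated on t_grid (here a fitted rate).
          """
          # Cumulative integral Lambda(t) of the conditional intensity (rectangle rule).
          Lambda = np.concatenate([[0.0], np.cumsum(np.diff(t_grid) * intensity[:-1])])
          Lam_at_spikes = np.interp(spike_times, t_grid, Lambda)
          z = np.diff(Lam_at_spikes)            # rescaled intervals, ~ Exp(1) if the model is correct
          u = 1.0 - np.exp(-z)                  # ~ Uniform(0, 1) under the model
          return kstest(u, "uniform")

      # Placeholder: a homogeneous Poisson train tested against a constant-rate model.
      rng = np.random.default_rng(9)
      rate = 20.0
      spikes = np.cumsum(rng.exponential(1.0 / rate, size=500))
      grid = np.linspace(0.0, spikes[-1], 10000)
      print(time_rescaling_ks(spikes, np.full_like(grid, rate), grid))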

  9. Polarization properties of FEL lamps as applied to radiometric calibration.

    PubMed

    Voss, Kenneth J; Belmar da Costa, Leonardo

    2016-11-01

    The polarization of the irradiance from several 1000 W FEL lamps was measured between 450 and 900 nm. These lamps are universally used as irradiance calibration standards in radiometric laboratories. The irradiance was polarized between 2.3% and 3.2%, with the polarization axis aligned with the coiled filament, nearly perpendicular to the lamp axis. We present a simple model of the filament that explains the degree of polarization and the plane of polarization, based on the polarized emissivity of tungsten, and gives an approximate value for this polarization. While the irradiance is polarized, this polarization does not significantly affect the polarization of the light when reflected from a Spectralon plaque (Labsphere, Inc.). The polarization of these lamps should be considered when FEL lamps are used to characterize optical instruments, particularly grating spectrometers without polarization scrambling devices.

  10. Determination of the oxidation stability of biodiesel and oils by spectrofluorimetry and multivariate calibration.

    PubMed

    Meira, Marilena; Quintella, Cristina M; Tanajura, Alessandra Dos Santos; da Silva, Humbervânia Reis Gonçalves; Fernando, Jaques D'Erasmo Santos; da Costa Neto, Pedro R; Pepe, Iuri M; Santos, Mariana Andrade; Nascimento, Luciana Lordelo

    2011-07-15

    Oxidation stability is an important quality parameter for biodiesel. In general, the methods used to evaluate the oxidation stability of oils and biodiesels are time-consuming. This work reports the use of spectrofluorimetry, a fast analytical technique, combined with multivariate data analysis as a powerful analytical tool for predicting oxidation stability. The predicted oxidation stability showed good agreement with the results obtained by the EN 14112 reference method (Rancimat). The models presented high correlation (0.99276 and 0.97951) between real and predicted values. The R(2) values of 0.98557 and 0.95943 indicate the accuracy of the models in predicting the oxidation stability of soy oil and soy biodiesel, respectively. The residuals show no trend with respect to the predicted variables, indicating the good quality of the fits.

  11. Comparative study between univariate spectrophotometry and multivariate calibration as analytical tools for simultaneous quantitation of Moexipril and Hydrochlorothiazide

    NASA Astrophysics Data System (ADS)

    Tawakkol, Shereen M.; Farouk, M.; Elaziz, Omar Abd; Hemdan, A.; Shehata, Mostafa A.

    2014-12-01

    Three simple, accurate, reproducible, and selective methods have been developed and subsequently validated for the simultaneous determination of Moexipril (MOX) and Hydrochlorothiazide (HCTZ) in pharmaceutical dosage form. The first method is the new extended ratio subtraction method (EXRSM) coupled to ratio subtraction method (RSM) for determination of both drugs in commercial dosage form. The second and third methods are multivariate calibration which include Principal Component Regression (PCR) and Partial Least Squares (PLSs). A detailed validation of the methods was performed following the ICH guidelines and the standard curves were found to be linear in the range of 10-60 and 2-30 for MOX and HCTZ in EXRSM method, respectively, with well accepted mean correlation coefficient for each analyte. The intra-day and inter-day precision and accuracy results were well within the acceptable limits.

  12. Comparative study between univariate spectrophotometry and multivariate calibration as analytical tools for simultaneous quantitation of Moexipril and Hydrochlorothiazide.

    PubMed

    Tawakkol, Shereen M; Farouk, M; Elaziz, Omar Abd; Hemdan, A; Shehata, Mostafa A

    2014-12-10

    Three simple, accurate, reproducible, and selective methods have been developed and subsequently validated for the simultaneous determination of Moexipril (MOX) and Hydrochlorothiazide (HCTZ) in pharmaceutical dosage form. The first method is the new extended ratio subtraction method (EXRSM) coupled to ratio subtraction method (RSM) for determination of both drugs in commercial dosage form. The second and third methods are multivariate calibration which include Principal Component Regression (PCR) and Partial Least Squares (PLSs). A detailed validation of the methods was performed following the ICH guidelines and the standard curves were found to be linear in the range of 10-60 and 2-30 for MOX and HCTZ in EXRSM method, respectively, with well accepted mean correlation coefficient for each analyte. The intra-day and inter-day precision and accuracy results were well within the acceptable limits.

  13. Monitoring of technical oils in supercritical CO(2) under continuous flow conditions by NIR spectroscopy and multivariate calibration.

    PubMed

    Bürck, J; Wiegand, G; Roth, S; Mathieu, H; Krämer, K

    2006-02-28

    Metal parts and residues from machining processes are usually polluted with cutting or grinding oil and have to be cleaned before further use. Supercritical carbon dioxide can be used for extraction processes and precision cleaning of metal parts, as developed at Forschungszentrum Karlsruhe. For optimizing and efficiently conducting the extraction process, in-line analysis of the oil concentration is desirable. Therefore, a monitoring method using fiber-optic NIR spectroscopy in combination with PLS calibration has been developed. In an earlier paper we described the instrumental set-up and a calibration model using the model compound squalane in the spectral range of the CH combination bands from 4900 to 4200 cm(-1). With this model only poor prediction results were obtained when it was applied to technical oil samples in supercritical CO(2). In this paper we describe a new calibration model, which was set up for the squalane/carbon dioxide system covering the 323-353 K temperature and the 16-35.6 MPa pressure range. Here, calibration data in the spectral range from 6100 to 5030 cm(-1) have been used. This range includes the 5100 cm(-1) CO(2) band of the Fermi triad as well as the hydrocarbon first-overtone CH stretching bands, where the spectral features of oil compounds and squalane are more similar to each other. The root mean-squared errors of prediction obtained with this model are 4 mg cm(-3) for carbon dioxide and 0.4 mg cm(-3) for squalane, respectively. The utility of the newly developed PLS calibration model for predicting the oil concentration and CO(2) density of solutions of technical oils in supercritical carbon dioxide has been tested. Three types of "real world" cutting and grinding oil formulations were used in these experiments. The calibration proved to be suitable for determining the technical oil concentration with an error of 1.1 mg cm(-3) and the CO(2) density with an error of 6 mg cm(-3). Therefore, it seems possible to apply this in-line analytical approach on

  14. Use of multivariate calibration models based on UV-Vis spectra for seawater quality monitoring in Tianjin Bohai Bay, China.

    PubMed

    Liu, Xianhua; Wang, Lili

    2015-01-01

    A series of ultraviolet-visible (UV-Vis) spectra from seawater samples collected from sites along the coastline of Tianjin Bohai Bay in China were subjected to multivariate partial least squares (PLS) regression analysis. Calibration models were developed for monitoring chemical oxygen demand (COD) and concentrations of total organic carbon (TOC). Three different PLS models were developed using the spectra from raw samples (Model-1), diluted samples (Model-2), and diluted and raw samples combined (Model-3). Experimental results showed that: (i) possible nonlinearities in the signal concentration relationships were well accounted for by the multivariate PLS model; (ii) the predicted values of COD and TOC fit the analytical values well; the high correlation coefficients and small root mean squared error of cross-validation (RMSECV) showed that this method can be used for seawater quality monitoring; and (iii) compared with Model-1 and Model-2, Model-3 had the highest coefficient of determination (R2) and the lowest number of latent variables. This latter finding suggests that only large data sets that include data representing different combinations of conditions (i.e., various seawater matrices) will produce stable site-specific regressions. The results of this study illustrate the effectiveness of the proposed method and its potential for use as a seawater quality monitoring technique.

  15. Analysis of Lard in Lipstick Formulation Using FTIR Spectroscopy and Multivariate Calibration: A Comparison of Three Extraction Methods.

    PubMed

    Waskitho, Dri; Lukitaningsih, Endang; Sudjadi; Rohman, Abdul

    2016-01-01

    Analysis of lard extracted from a lipstick formulation containing castor oil has been performed using an FTIR spectroscopic method combined with multivariate calibration. Three different extraction methods were compared, namely a saponification method followed by liquid/liquid extraction with hexane/dichloromethane/ethanol/water, a saponification method followed by liquid/liquid extraction with dichloromethane/ethanol/water, and the Bligh & Dyer method using chloroform/methanol/water as the extracting solvent. Qualitative and quantitative analyses of lard were performed using principal component analysis (PCA) and partial least squares (PLS) regression, respectively. The results showed that, in all samples prepared by the three extraction methods, PCA was capable of identifying lard in the wavenumber region of 1200-800 cm(-1), with the best result obtained by the Bligh & Dyer method. Furthermore, PLS analysis in the same region used for qualification showed that Bligh & Dyer was the most suitable extraction method, giving the highest determination coefficient (R(2)) and the lowest root mean square error of calibration (RMSEC) and root mean square error of prediction (RMSEP) values.

  16. Development and analytical validation of a multivariate calibration method for determination of amoxicillin in suspension formulations by near infrared spectroscopy.

    PubMed

    Silva, Maurício A M; Ferreira, Marcus H; Braga, Jez W B; Sena, Marcelo M

    2012-01-30

    This paper proposes a new method for determination of amoxicillin in pharmaceutical suspension formulations, based on transflectance near infrared (NIR) measurements and partial least squares (PLS) multivariate calibration. A complete methodology was implemented for developing the proposed method, including an experimental design, data preprocessing by using multiple scatter correction (MSC) and outlier detection based on high values of leverage, and X and Y residuals. The best PLS model was obtained with seven latent variables in the range from 40.0 to 65.0 mg mL(-1) of amoxicillin, providing a root mean square error of prediction (RMSEP) of 1.6 mg mL(-1). The method was validated in accordance with Brazilian and international guidelines, through the estimate of figures of merit, such as linearity, precision, accuracy, robustness, selectivity, analytical sensitivity, limits of detection and quantitation, and bias. The results for determinations in four commercial pharmaceutical formulations were in agreement with the official high performance liquid chromatographic (HPLC) method at the 99% confidence level. A pseudo-univariate calibration curve was also obtained based on the net analyte signal (NAS). The proposed chemometric method presented the advantages of rapidity, simplicity, low cost, and no use of solvents, compared to the principal alternative methods based on HPLC.
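
    The scatter-correction preprocessing mentioned above (multiple scatter correction, more often called multiplicative scatter correction) regresses each spectrum against a reference, usually the mean spectrum, and removes the fitted offset and slope. The minimal sketch below uses invented data and is not tied to the transflectance measurements of the paper.

      import numpy as np

      def msc(X, reference=None):
          """Multiplicative scatter correction of spectra in X (samples x wavelengths)."""
          X = np.asarray(X, dtype=float)
          ref = X.mean(axis=0) if reference is None else reference
          corrected = np.empty_like(X)
          for i, spectrum in enumerate(X):
              # Fit spectrum = a + b * ref, then remove offset a and divide by slope b.
              b, a = np.polyfit(ref, spectrum, 1)
              corrected[i] = (spectrum - a) / b
          return corrected

      rng = np.random.default_rng(10)
      base = np.sin(np.linspace(0, 6, 300)) + 2.0
      X = np.array([1.3 * base + 0.4, 0.8 * base - 0.2, 1.1 * base + 0.1])
      print(msc(X).std(axis=0).max())   # scatter differences largely removed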

  17. APPLYING SPARSE CODING TO SURFACE MULTIVARIATE TENSOR-BASED MORPHOMETRY TO PREDICT FUTURE COGNITIVE DECLINE

    PubMed Central

    Zhang, Jie; Stonnington, Cynthia; Li, Qingyang; Shi, Jie; Bauer, Robert J.; Gutman, Boris A.; Chen, Kewei; Reiman, Eric M.; Thompson, Paul M.; Ye, Jieping; Wang, Yalin

    2016-01-01

    Alzheimer’s disease (AD) is a progressive brain disease. Accurate diagnosis of AD and its prodromal stage, mild cognitive impairment, is crucial for clinical trial design. There is also growing interest in identifying brain imaging biomarkers that help evaluate AD risk presymptomatically. Here, we applied a recently developed multivariate tensor-based morphometry (mTBM) method to extract features from hippocampal surfaces, derived from anatomical brain MRI. For such surface-based features, the feature dimension is usually much larger than the number of subjects. We used dictionary learning and sparse coding to effectively reduce the feature dimension. With the new features, an AdaBoost classifier was employed for binary group classification. In tests on publicly available data from the Alzheimer’s Disease Neuroimaging Initiative, the new framework outperformed several standard imaging measures in classifying different stages of AD. The new approach combines the efficiency of sparse coding with the sensitivity of surface mTBM, and boosts classification performance. PMID:27499829

  18. Simultaneous determination of potassium guaiacolsulfonate, guaifenesin, diphenhydramine HCl and carbetapentane citrate in syrups by using HPLC-DAD coupled with partial least squares multivariate calibration.

    PubMed

    Dönmez, Ozlem Aksu; Aşçi, Bürge; Bozdoğan, Abdürrezzak; Sungur, Sidika

    2011-02-15

    A simple and rapid analytical procedure was proposed for the quantification of overlapping chromatographic peaks by means of partial least squares (PLS) multivariate calibration applied to high-performance liquid chromatography with diode array detection (HPLC-DAD). The method is exemplified by the analysis of quaternary mixtures of potassium guaiacolsulfonate (PG), guaifenesin (GU), diphenhydramine HCl (DP) and carbetapentane citrate (CP) in syrup preparations. In this method, peak areas do not need to be measured directly, and predictions are more accurate. Though the chromatographic and spectral peaks of the analytes were heavily overlapped and interferents coeluted with the compounds studied, good recoveries of the analytes could be obtained with HPLC-DAD coupled with PLS calibration. The method was tested by analyzing synthetic mixtures of PG, GU, DP and CP; a classical HPLC method was used for comparison. The proposed methods were applied to syrup samples containing the four drugs, and the results were statistically compared with each other. Finally, the main advantages of the HPLC-PLS method over the classical HPLC method are a simpler mobile phase, a shorter analysis time, and no need for an internal standard or gradient elution. Copyright © 2010 Elsevier B.V. All rights reserved.

  19. UV-Vis Spectrophotometry and Multivariate Calibration Method for Simultaneous Determination of Theophylline, Montelukast and Loratadine in Tablet Preparations and Spiked Human Plasma.

    PubMed

    Hassaninejad-Darzi, Seyed Karim; Samadi-Maybodi, Abdolraouf; Nikou, Seyed Mohsen

    2016-01-01

    Resolution of binary mixtures of theophylline (THEO), montelukast (MKST) and loratadine (LORA) with minimum sample pre-treatment and without analyte separation has been successfully achieved by multivariate spectrophotometric calibration, together with partial least-squares (PLS-1), principal component regression (PCR) and hybrid linear analysis (HLA). Data of analysis were obtained from UV-Vis spectra of three compounds. The method of central composite design was used in the ranges of 2-14 and 3-11 mg L(-1) for calibration and validation sets, respectively. The models refinement procedure and their validation were performed by cross-validation. The minimum root mean square error of prediction (RMSEP) was 0.173 mg L(-1) for THEO with PCR, 0.187 mg L(-1) for MKST with PLS1 and 0.251 mg L(-1) for LORA with HLA techniques. The limit of detection was obtained 0.03, 0.05 and 0.05 mg L(-1) by PCR model for THEO, MKST and LORA, respectively. The procedure was successfully applied for simultaneous determination of the above compounds in pharmaceutical tablets and human plasma. Notwithstanding the spectral overlapping among three drugs, as well as the intrinsic variability of the latter in unknown samples, the recoveries are excellent.

  20. UV-Vis Spectrophotometry and Multivariate Calibration Method for Simultaneous Determination of Theophylline, Montelukast and Loratadine in Tablet Preparations and Spiked Human Plasma

    PubMed Central

    Hassaninejad-Darzi, Seyed Karim; Samadi-Maybodi, Abdolraouf; Nikou, Seyed Mohsen

    2016-01-01

    Resolution of binary mixtures of theophylline (THEO), montelukast (MKST) and loratadine (LORA) with minimum sample pre-treatment and without analyte separation has been successfully achieved by multivariate spectrophotometric calibration, together with partial least-squares (PLS-1), principal component regression (PCR) and hybrid linear analysis (HLA). Data of analysis were obtained from UV–Vis spectra of three compounds. The method of central composite design was used in the ranges of 2–14 and 3–11 mg L–1 for calibration and validation sets, respectively. The models refinement procedure and their validation were performed by cross-validation. The minimum root mean square error of prediction (RMSEP) was 0.173 mg L−1 for THEO with PCR, 0.187 mg L–1 for MKST with PLS1 and 0.251 mg L–1 for LORA with HLA techniques. The limit of detection was obtained 0.03, 0.05 and 0.05 mg L−1 by PCR model for THEO, MKST and LORA, respectively. The procedure was successfully applied for simultaneous determination of the above compounds in pharmaceutical tablets and human plasma. Notwithstanding the spectral overlapping among three drugs, as well as the intrinsic variability of the latter in unknown samples, the recoveries are excellent. PMID:27980573

  1. Multivariate Calibration and Model Integrity for Wood Chemistry Using Fourier Transform Infrared Spectroscopy

    PubMed Central

    Zhou, Chengfeng; Jiang, Wei; Cheng, Qingzheng; Via, Brian K.

    2015-01-01

    This research addressed a rapid method to monitor hardwood chemical composition by applying Fourier transform infrared (FT-IR) spectroscopy, with particular interest in model performance for interpretation and prediction. Partial least squares (PLS) and principal components regression (PCR) were chosen as the primary models for comparison. Standard laboratory chemistry methods were employed on a mixed genus/species hardwood sample set to collect the original data. PLS was found to provide better predictive capability while PCR exhibited a more precise estimate of loading peaks and suggests that PCR is better for model interpretation of key underlying functional groups. Specifically, when PCR was utilized, an error in peak loading of ±15 cm−1 from the true mean was quantified. Application of the first derivative appeared to assist in improving both PCR and PLS loading precision. Research results identified the wavenumbers important in the prediction of extractives, lignin, cellulose, and hemicellulose and further demonstrated the utility in FT-IR for rapid monitoring of wood chemistry. PMID:26576321

  2. Variable selection in multivariate calibration based on clustering of variable concept.

    PubMed

    Farrokhnia, Maryam; Karimi, Sadegh

    2016-01-01

    Recently we proposed a new variable selection algorithm based on the clustering of variables concept (CLoVA) for classification problems. With the same idea, this concept has now been applied to a regression problem, and the results have been compared with conventional variable selection strategies for PLS. The basic idea behind the clustering of variables is that the instrument channels are grouped into different clusters via clustering algorithms; the spectral data of each cluster are then subjected to PLS regression. Different real data sets (Cargill corn, Biscuit dough, ACE QSAR, Soy, and Tablet) have been used to evaluate the influence of variable clustering on the prediction performance of PLS. In almost all cases, the statistical parameters, especially the prediction errors, show the superiority of CLoVA-PLS with respect to the other variable selection strategies. Finally, synergy clustering of variables (sCLoVA-PLS), which uses combinations of clusters, is proposed as an efficient modification of the CLoVA algorithm. The statistical parameters indicate that variable clustering can separate the useful variables from the redundant ones, so that a stable model can be built from the informative clusters. Copyright © 2015 Elsevier B.V. All rights reserved.
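
    The clustering-of-variables idea can be illustrated by clustering the columns of the spectral matrix (for example with k-means on the standardized, transposed data), fitting a PLS model on each cluster of variables, and keeping the cluster with the best cross-validated error. The sketch below is a simplified stand-in for the CLoVA procedure, with all settings and names chosen arbitrarily.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.preprocessing import StandardScaler

      def clova_like(X, y, n_clusters=4, n_components=3):
          """Cluster variables, build a PLS model per cluster, return the best cluster's variables."""
          # Cluster the variables (columns): each standardized column is one observation for k-means.
          cols = StandardScaler().fit_transform(X).T
          labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(cols)

          best = None
          for c in range(n_clusters):
              idx = np.flatnonzero(labels == c)
              if len(idx) <= n_components:
                  continue
              score = cross_val_score(PLSRegression(n_components=n_components),
                                      X[:, idx], y, cv=5,
                                      scoring="neg_mean_squared_error").mean()
              if best is None or score > best[0]:
                  best = (score, idx)
          return best[1]

      rng = np.random.default_rng(11)
      X = rng.normal(size=(60, 200))
      y = X[:, 20:35].sum(axis=1) + 0.1 * rng.normal(size=60)
      print("variables in the best cluster:", clova_like(X, y)[:10], "...")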

  3. Fire impact on forest soils evaluated using near-infrared spectroscopy and multivariate calibration.

    PubMed

    Vergnoux, A; Dupuy, N; Guiliano, M; Vennetier, M; Théraulaz, F; Doumenq, P

    2009-11-15

    Physico-chemical properties of forest soils affected by fires were evaluated using near infrared reflectance (NIR) spectroscopy coupled with chemometric methods. In order to describe the soil properties, measurements were taken of the total organic carbon on the solid phase, the total nitrogen content, the organic carbon and the specific absorbances at 254 and 280 nm of humic substances, the organic carbon in humic and fulvic acids, and the concentrations of NH(4)(+), Ca(2+), Mg(2+), K(+) and phosphorus, in addition to NIR spectra. Then, a fire recurrence index was defined and calculated according to the extent of the different fires affecting the soils. This calculation includes the occurrence of fires as well as the time elapsed since the last fire. This study shows that NIR spectroscopy can be considered a tool for soil monitoring, particularly for the quantitative prediction of the total organic carbon, total nitrogen content, organic carbon in humic substances, concentrations of phosphorus, Mg(2+), Ca(2+) and NH(4)(+), and humic substances UVSA(254). Further validation is necessary, however, before successful predictions of K(+), organic carbon in humic and fulvic acids and the humic substances UVSA(280) can be made. Moreover, NIR coupled with PLS can also be useful for predicting the fire recurrence index in order to determine its spatial variability, and this method can be used to map more or less burned areas and possibly to apply adequate rehabilitation techniques, such as soil litter reconstitution with organic enrichments (industrial composts) or reforestation. Finally, the proposed recurrence index can be considered representative of the state of the soils.

  4. Multi-variable calibration of temperature estimation in individual non-encapsulated thermo liquid crystal micro particles

    NASA Astrophysics Data System (ADS)

    Segura, Rodrigo; Cierpka, Christian; Rossi, Massimiliano; Kähler, Christian J.

    2012-11-01

    An experimental method to track the temperature of individual non-encapsulated thermo-liquid crystal (TLC) particles is presented. TLC thermography has been investigated for several years, but the low quality of individual TLC particles, as well as the methods used to relate their color to temperature, has prevented the development of a reliable approach to track their temperature individually. In order to overcome these challenges, a Shirasu Porous Glass (SPG) membrane approach was used to produce an emulsion of stable non-encapsulated TLC micro particles with a narrower size distribution than that of commercially available encapsulated TLC solutions (Segura et al., Microfluid Nanofluid, 2012). In addition, a multi-variable calibration approach was used, as opposed to the well-known temperature-hue relationship, using the three components of the HSI color space measured in each particle image. A third-degree three-dimensional polynomial was fitted to the color data of thousands of particles to estimate their temperature individually. The method is able to measure individual temperatures over a range exceeding the nominal range of the TLC material, with lower uncertainty than any method used for individual particle thermography reported in the literature. Financial support from the German Research Foundation (DFG), under the Forschergruppe 856 grant program, is gratefully appreciated.
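    As a sketch of the multi-variable calibration step described above, the snippet below fits a third-degree polynomial in the three HSI color components to particle temperatures with scikit-learn; the data are synthetic placeholders, since the per-particle color measurements are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
hsi = rng.uniform(0.0, 1.0, size=(5000, 3))     # hue, saturation, intensity per particle image
temp = 25 + 8 * hsi[:, 0] + 2 * hsi[:, 1] * hsi[:, 2] + rng.normal(scale=0.2, size=5000)

# Third-degree polynomial in (H, S, I) fitted by ordinary least squares.
model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
model.fit(hsi, temp)

# Temperature estimate for a new particle from its measured HSI triplet:
print(model.predict([[0.4, 0.7, 0.5]]))
```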

  5. Determination of Leaf Water Content by Visible and Near-Infrared Spectrometry and Multivariate Calibration in Miscanthus

    PubMed Central

    Jin, Xiaoli; Shi, Chunhai; Yu, Chang Yeon; Yamada, Toshihiko; Sacks, Erik J.

    2017-01-01

    Leaf water content is one of the most common physiological parameters limiting efficiency of photosynthesis and biomass productivity in plants including Miscanthus. Therefore, it is of great significance to determine or predict the water content quickly and non-destructively. In this study, we explored the relationship between leaf water content and diffuse reflectance spectra in Miscanthus. Three multivariate calibrations including partial least squares (PLS), least squares support vector machine regression (LSSVR), and radial basis function (RBF) neural network (NN) were developed for the models of leaf water content determination. The non-linear models including RBF_LSSVR and RBF_NN showed higher accuracy than the PLS and Lin_LSSVR models. Moreover, 75 sensitive wavelengths were identified to be closely associated with the leaf water content in Miscanthus. The RBF_LSSVR and RBF_NN models for predicting leaf water content, based on the 75 characteristic wavelengths, obtained high determination coefficients of 0.9838 and 0.9899, respectively. The results indicated that the non-linear models were more accurate than the linear models using both wavelength intervals. These results demonstrated that visible and near-infrared (VIS/NIR) spectroscopy combined with RBF_LSSVR or RBF_NN is a useful, non-destructive tool for determination of leaf water content in Miscanthus, and thus very helpful for the development of drought-resistant Miscanthus varieties. PMID:28579992
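    A minimal sketch of a nonlinear RBF-kernel calibration for leaf water content follows; scikit-learn's epsilon-SVR stands in for the LSSVR used in the study, and the spectra, wavelength count, and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 75))       # reflectance at 75 selected wavelengths
y = 60 + 5 * np.tanh(X[:, 10]) + rng.normal(scale=0.5, size=120)   # leaf water content (%)

# RBF-kernel support vector regression as the nonlinear calibration model.
rbf_model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, gamma="scale"))
print(cross_val_score(rbf_model, X, y, cv=5, scoring="r2").mean())
```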

  6. Feasibility study on variety identification of rice vinegars using visible and near infrared spectroscopy and multivariate calibration

    NASA Astrophysics Data System (ADS)

    Liu, Fei; He, Yong; Wang, Li

    2008-02-01

    The feasibility of visible and near infrared (Vis/NIR) spectroscopy, in combination with a hybrid multivariate method of partial least squares (PLS) analysis and a back-propagation neural network (BPNN), was investigated to identify the variety of rice vinegars with different internal qualities. Five varieties of rice vinegar were prepared and 225 samples (45 for each variety) were selected randomly for the calibration set, while 75 samples (15 for each variety) formed the validation set. After pretreatment with a moving average and standard normal variate (SNV), partial least squares (PLS) analysis was implemented for the extraction of principal components (PCs), which were used as the inputs of the BP neural network (BPNN) according to their cumulative reliabilities. Finally, a PLS-BPNN model with a sigmoid transfer function was achieved. The performance was validated by the 75 unknown samples in the validation set. The threshold error of prediction was set as +/-0.1 and an excellent precision and a recognition ratio of 100% were achieved. Simultaneously, certain effective wavelengths for the identification of varieties were proposed from the x-loading weights and regression coefficients. The prediction results indicated that Vis/NIR spectroscopy can be used as a rapid and high-precision method for the identification of different varieties of rice vinegars.
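    The PLS-BPNN idea above can be sketched as follows: PLS latent-variable scores computed from the spectra are used as inputs to a back-propagation network with a sigmoid (logistic) transfer function. The data, class coding, score count, and network size below are assumptions for illustration, not the paper's settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import label_binarize

rng = np.random.default_rng(3)
X = rng.normal(size=(225, 600))          # calibration spectra
y = rng.integers(0, 5, size=225)         # five vinegar varieties

Y = label_binarize(y, classes=range(5))  # dummy-coded class matrix for PLS
pls = PLSRegression(n_components=8).fit(X, Y)
scores = pls.transform(X)                # latent-variable scores fed to the network

bpnn = MLPClassifier(hidden_layer_sizes=(10,), activation="logistic",
                     max_iter=2000, random_state=0).fit(scores, y)

# Predict the variety of new samples from their PLS scores:
X_new = rng.normal(size=(5, 600))
print(bpnn.predict(pls.transform(X_new)))
```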

  7. Determination of Leaf Water Content by Visible and Near-Infrared Spectrometry and Multivariate Calibration in Miscanthus

    DOE PAGES

    Jin, Xiaoli; Shi, Chunhai; Yu, Chang Yeon; ...

    2017-05-19

    Leaf water content is one of the most common physiological parameters limiting efficiency of photosynthesis and biomass productivity in plants including Miscanthus. Therefore, it is of great significance to determine or predict the water content quickly and non-destructively. In this study, we explored the relationship between leaf water content and diffuse reflectance spectra in Miscanthus. Three multivariate calibrations including partial least squares (PLS), least squares support vector machine regression (LSSVR), and radial basis function (RBF) neural network (NN) were developed for the models of leaf water content determination. The non-linear models including RBF_LSSVR and RBF_NN showed higher accuracy than the PLS and Lin_LSSVR models. Moreover, 75 sensitive wavelengths were identified to be closely associated with the leaf water content in Miscanthus. The RBF_LSSVR and RBF_NN models for predicting leaf water content, based on the 75 characteristic wavelengths, obtained high determination coefficients of 0.9838 and 0.9899, respectively. The results indicated that the non-linear models were more accurate than the linear models using both wavelength intervals. These results demonstrated that visible and near-infrared (VIS/NIR) spectroscopy combined with RBF_LSSVR or RBF_NN is a useful, non-destructive tool for determination of leaf water content in Miscanthus, and thus very helpful for the development of drought-resistant Miscanthus varieties.

  8. Multivariate Curve Resolution Applied to Hyperspectral Imaging Analysis of Chocolate Samples.

    PubMed

    Zhang, Xin; de Juan, Anna; Tauler, Romà

    2015-08-01

    This paper shows the application of Raman and infrared hyperspectral imaging combined with multivariate curve resolution (MCR) to the analysis of the constituents of commercial chocolate samples. The combination of different spectral data pretreatment methods made it possible to decrease the strong fluorescence contribution of whey to the Raman signal of the investigated chocolate samples. Using equality constraints during MCR analysis, estimations of the pure spectra of the chocolate sample constituents were improved, as well as their relative contributions and their spatial distribution in the analyzed samples. In addition, unknown constituents could also be resolved. The white chocolate constituents resolved from the Raman hyperspectral images indicate that, at the macro scale, the sucrose, lactose, fat, and whey constituents were intermixed in particles. Infrared hyperspectral imaging did not suffer from fluorescence and could be applied to both white and milk chocolate. As a conclusion of this study, micro-hyperspectral imaging coupled to the MCR method is confirmed to be an appropriate tool for the direct analysis of the constituents of chocolate samples and, by extension, it is proposed for the analysis of other mixture constituents in commercial food samples.
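    A bare-bones MCR-ALS loop is sketched below: alternating least squares with a non-negativity constraint resolves a pixel-by-wavelength matrix D into concentration maps C and pure spectra S (D ≈ C Sᵀ). This is a didactic illustration, not the constrained MCR software used in the study.

```python
import numpy as np

def mcr_als(D, n_components, n_iter=200, seed=0):
    # D: (n_pixels x n_channels) unfolded hyperspectral image.
    rng = np.random.default_rng(seed)
    S = np.abs(rng.normal(size=(D.shape[1], n_components)))      # initial spectral estimates
    for _ in range(n_iter):
        C = np.clip(D @ S @ np.linalg.pinv(S.T @ S), 0, None)    # solve D ~= C S^T for C
        S = np.clip(D.T @ C @ np.linalg.pinv(C.T @ C), 0, None)  # then for S
    return C, S

D = np.abs(np.random.default_rng(1).normal(size=(1000, 200)))    # placeholder image
C, S = mcr_als(D, n_components=4)   # C: pixel contributions, S: resolved spectra
```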

  9. Multivariable control theory applied to hierarchical attitude control for planetary spacecraft

    NASA Technical Reports Server (NTRS)

    Boland, J. S., III; Russell, D. W.

    1972-01-01

    Multivariable control theory is applied to the design of a hierarchical attitude control system for the CARD space vehicle. The system selected uses reaction control jets (RCJ) and control moment gyros (CMG). The RCJ system uses linear signal mixing and a no-fire region similar to that used on the Skylab program; the y-axis and z-axis systems, which are coupled, use a sum and difference feedback scheme. The CMG system uses the optimum steering law and the same feedback signals as the RCJ system. When both systems are active the design is such that the torques from each system are never in opposition. A state-space analysis was made of the CMG system to determine the general structure of the input matrices (steering law) and feedback matrices that will decouple the axes. It is shown that the optimum steering law and proportional-plus-rate feedback are special cases. A derivation of the disturbing torques on the space vehicle due to the motion of the on-board television camera is presented. A procedure for computing an upper bound on these torques (given the system parameters) is included.

  10. Multivariate Analyses Applied to Healthy Neurodevelopment in Fetal, Neonatal, and Pediatric MRI

    PubMed Central

    Levman, Jacob; Takahashi, Emi

    2016-01-01

    Multivariate analysis (MVA) is a class of statistical and pattern recognition techniques that involve the processing of data that contains multiple measurements per sample. MVA can be used to address a wide variety of neurological medical imaging related challenges including the evaluation of healthy brain development, the automated analysis of brain tissues and structures through image segmentation, evaluating the effects of genetic and environmental factors on brain development, evaluating sensory stimulation's relationship with functional brain activity and much more. Compared to adult imaging, pediatric, neonatal and fetal imaging have attracted less attention from MVA researchers, however, recent years have seen remarkable MVA research growth in pre-adult populations. This paper presents the results of a systematic review of the literature focusing on MVA applied to healthy subjects in fetal, neonatal and pediatric magnetic resonance imaging (MRI) of the brain. While the results of this review demonstrate considerable interest from the scientific community in applications of MVA technologies in brain MRI, the field is still young and significant research growth will continue into the future. PMID:26834576

  11. Multivariate analyses applied to fetal, neonatal and pediatric MRI of neurodevelopmental disorders

    PubMed Central

    Levman, Jacob; Takahashi, Emi

    2015-01-01

    Multivariate analysis (MVA) is a class of statistical and pattern recognition methods that involve the processing of data that contains multiple measurements per sample. MVA can be used to address a wide variety of medical neuroimaging-related challenges including identifying variables associated with a measure of clinical importance (i.e. patient outcome), creating diagnostic tests, assisting in characterizing developmental disorders, understanding disease etiology, development and progression, assisting in treatment monitoring and much more. Compared to adults, imaging of developing immature brains has attracted less attention from MVA researchers. However, remarkable MVA research growth has occurred in recent years. This paper presents the results of a systematic review of the literature focusing on MVA technologies applied to neurodevelopmental disorders in fetal, neonatal and pediatric magnetic resonance imaging (MRI) of the brain. The goal of this manuscript is to provide a concise review of the state of the scientific literature on studies employing brain MRI and MVA in a pre-adult population. Neurological developmental disorders addressed in the MVA research contained in this review include autism spectrum disorder, attention deficit hyperactivity disorder, epilepsy, schizophrenia and more. While the results of this review demonstrate considerable interest from the scientific community in applications of MVA technologies in pediatric/neonatal/fetal brain MRI, the field is still young and considerable research growth remains ahead of us. PMID:26640765

  12. Identification of potential antioxidant compounds in the essential oil of thyme by gas chromatography with mass spectrometry and multivariate calibration techniques.

    PubMed

    Masoum, Saeed; Mehran, Mehdi; Ghaheri, Salehe

    2015-02-01

    Thyme species are used in traditional medicine throughout the world and are known for their antiseptic, antispasmodic, and antitussive properties. Also, antioxidant activity is one of the interesting properties of thyme essential oil. In this research, we aim to identify peaks potentially responsible for the antioxidant activity of thyme oil from chromatographic fingerprints. Therefore, the chemical compositions of hydrodistilled essential oil of thyme species from different regions were analyzed by gas chromatography with mass spectrometry and antioxidant activities of essential oils were measured by a 1,1-diphenyl-2-picrylhydrazyl radical scavenging test. Several linear multivariate calibration techniques with different preprocessing methods were applied to the chromatograms of thyme essential oils to indicate the peaks responsible for the antioxidant activity. These techniques were applied on data both before and after alignment of chromatograms with correlation optimized warping. In this study, orthogonal projection to latent structures model was found to be a good technique to indicate the potential antioxidant active compounds in the thyme oil due to its simplicity and repeatability.

  13. Evaluation of multivariate calibration models with different pre-processing and processing algorithms for a novel resolution and quantitation of spectrally overlapped quaternary mixture in syrup

    NASA Astrophysics Data System (ADS)

    Moustafa, Azza A.; Hegazy, Maha A.; Mohamed, Dalia; Ali, Omnia

    2016-02-01

    A novel approach for the resolution and quantitation of a severely overlapped quaternary mixture of carbinoxamine maleate (CAR), pholcodine (PHL), ephedrine hydrochloride (EPH) and sunset yellow (SUN) in syrup was demonstrated utilizing different spectrophotometric assisted multivariate calibration methods. The applied methods used different processing and pre-processing algorithms. The proposed methods were partial least squares (PLS), concentration residuals augmented classical least squares (CRACLS), and a novel method, continuous wavelet transform coupled with partial least squares (CWT-PLS). These methods were applied to a training set in the concentration ranges of 40-100 μg/mL, 40-160 μg/mL, 100-500 μg/mL and 8-24 μg/mL for the four components, respectively. The methods did not require any preliminary separation step or chemical pretreatment. The validity of the methods was evaluated by an external validation set. The selectivity of the developed methods was demonstrated by analyzing the drugs in their combined pharmaceutical formulation without any interference from additives. The obtained results were statistically compared with the official and reported methods, where no significant difference was observed regarding either accuracy or precision.
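    The CWT-PLS idea above can be sketched as follows: each spectrum is replaced by its continuous wavelet transform coefficients at a chosen scale before PLS calibration. PyWavelets supplies the transform; the wavelet, scale, and data below are illustrative assumptions.

```python
import numpy as np
import pywt
from sklearn.cross_decomposition import PLSRegression

def cwt_features(X, scale=20, wavelet="mexh"):
    # Transform every spectrum (row) and keep the coefficients at one scale.
    return np.asarray([pywt.cwt(row, scales=[scale], wavelet=wavelet)[0][0] for row in X])

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 300))                          # zero-order absorption spectra
Y = rng.uniform(size=(50, 4)) * [100, 160, 500, 24]     # CAR, PHL, EPH, SUN concentrations

pls = PLSRegression(n_components=6).fit(cwt_features(X), Y)
print(pls.predict(cwt_features(X[:2])))                 # predictions for two validation spectra
```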

  14. Partial least squares-based multivariate spectral calibration method for simultaneous determination of beta-carboline derivatives in Peganum harmala seed extracts.

    PubMed

    Hemmateenejad, Bahram; Abbaspour, Abdolkarim; Maghami, Homeyra; Miri, Ramin; Panjehshahin, Mohhamad Reza

    2006-08-11

    The partial least squares regression method has been applied for the simultaneous spectrophotometric determination of harmine, harmane, harmalol and harmaline in Peganum harmala L. (Zygophyllaceae) seeds. The effect of pH was optimized employing multivariate definitions of selectivity and sensitivity, and the best results were obtained in basic media (pH>9). The calibration models were optimized for the number of latent variables by a cross-validation procedure. Determinations were made over the concentration range of 0.15-10 microg mL(-1). The proposed method was validated by applying it to the analysis of the beta-carbolines in synthetic quaternary mixtures at pH 9 and 11. The relative standard errors of prediction were less than 4% in most cases. Analysis of P. harmala seeds by the proposed models gave contents of the beta-carboline derivatives of 1.84%, 0.16%, 0.25% and 3.90% for harmine, harmane, harmaline and harmalol, respectively. The results were validated against an existing HPLC method and no significant differences were observed between the results of the two methods.
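    The cross-validation step mentioned above, optimizing the number of PLS latent variables, can be sketched as follows; synthetic spectra stand in for the harmala alkaloid mixtures and the PRESS criterion is one common choice.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(5)
X = rng.normal(size=(40, 150))                 # mixture absorbance spectra
Y = rng.uniform(0.15, 10.0, size=(40, 4))      # harmine, harmane, harmalol, harmaline (ug/mL)

def press(n_lv):
    # Predicted residual error sum of squares from 5-fold cross-validation.
    Y_cv = cross_val_predict(PLSRegression(n_components=n_lv), X, Y, cv=5)
    return float(np.sum((Y - Y_cv) ** 2))

best_lv = min(range(1, 11), key=press)         # number of latent variables minimizing PRESS
model = PLSRegression(n_components=best_lv).fit(X, Y)
```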

  15. Updating a near-infrared multivariate calibration model formed with lab-prepared pharmaceutical tablet types to new tablet types in full production.

    PubMed

    Farrell, Jeremy A; Higgins, Kevin; Kalivas, John H

    2012-03-05

    Determining active pharmaceutical ingredient (API) tablet concentrations rapidly and efficiently is of great importance to the pharmaceutical industry in order to assure quality control. Using near-infrared (NIR) spectra measured on tablets in conjunction with multivariate calibration has been shown to meet these objectives. However, the calibration is typically developed under one set of conditions (primary conditions) and new tablets are produced under different measurement conditions (secondary conditions). Hence, the accuracy of multivariate calibration is limited due to differences between primary and secondary conditions such as tablet variances (composition, dosage, and production processes and precision), different instruments, and/or new environmental conditions. This study evaluates application of Tikhonov regularization (TR) to update NIR calibration models developed in a controlled primary laboratory setting to predict API tablet concentrations manufactured in full production where conditions and tablets are significantly different than in the laboratory. With just a few new tablets from full production, it is found that TR provides reduced prediction errors by as much as 64% in one situation compared to no model-updating. TR prediction errors are reduced by as much as 51% compared to local centering, another calibration maintenance method. The TR updated primary models are also found to predict as well as a full calibration model formed in the secondary conditions. Copyright © 2011 Elsevier B.V. All rights reserved.
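    One simple reading of the Tikhonov-regularization updating strategy is sketched below: the primary (laboratory) calibration data are augmented with a ridge penalty and a few weighted production-tablet spectra, and the stacked least-squares problem is solved directly. The weights and penalty values are placeholders; the published tuning procedure is more involved.

```python
import numpy as np

def tr_update(X_primary, y_primary, X_new, y_new, lam=1.0, tau=5.0):
    # Minimize ||X_p b - y_p||^2 + lam^2 ||b||^2 + tau^2 ||X_new b - y_new||^2
    p = X_primary.shape[1]
    X_aug = np.vstack([X_primary, lam * np.eye(p), tau * X_new])
    y_aug = np.concatenate([y_primary, np.zeros(p), tau * y_new])
    b, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)
    return b        # updated regression vector; API prediction is x_new @ b
```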

  16. Fusion strategies for selecting multiple tuning parameters for multivariate calibration and other penalty based processes: A model updating application for pharmaceutical analysis.

    PubMed

    Tencate, Alister J; Kalivas, John H; White, Alexander J

    2016-05-19

    New multivariate calibration methods and other processes are being developed that require selection of multiple tuning parameter (penalty) values to form the final model. With one or more tuning parameters, using only one measure of model quality to select final tuning parameter values is not sufficient, and optimization of several model quality measures is challenging. Thus, three fusion ranking methods are investigated for simultaneous assessment of multiple measures of model quality for selecting tuning parameter values. One is a supervised learning fusion rule named sum of ranking differences (SRD). The other two are non-supervised learning processes based on the sum and median operations. The effect of the number of models evaluated on the three fusion rules is also evaluated using three procedures. One procedure uses all models from all possible combinations of the tuning parameters. To reduce the number of models evaluated, an iterative process (only applicable to SRD) is applied, and thresholding a model quality measure before applying the fusion rules is also used. A near infrared pharmaceutical data set requiring model updating is used to evaluate the three fusion rules. In this case, calibration under the primary conditions is for the active pharmaceutical ingredient (API) of tablets produced in a laboratory. The secondary conditions for calibration updating are for tablets produced in the full batch setting. Two model updating processes requiring selection of two unique tuning parameter values are studied. One is based on Tikhonov regularization (TR) and the other is a variation of partial least squares (PLS). The three fusion methods are shown to provide equivalent and acceptable results, allowing automatic selection of the tuning parameter values. The best tuning parameter values are selected when the model quality measures used with the fusion rules are computed on the small secondary sample set used to form the updated models. In this model updating situation, evaluation of
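    The non-supervised fusion rules mentioned above can be illustrated in a few lines: candidate tuning-parameter pairs are ranked on several quality measures and the ranks are fused by their sum or their median. The quality matrix below is invented for illustration, and the SRD variant is not reproduced.

```python
import numpy as np
from scipy.stats import rankdata

# Rows: candidate (lambda, tau) pairs. Columns: quality measures where smaller
# is better (e.g., RMSE on the update samples, a model-vector norm, a bias term).
quality = np.array([[0.30, 1.2, 0.05],
                    [0.25, 2.0, 0.04],
                    [0.40, 0.8, 0.07],
                    [0.26, 1.1, 0.03]])

ranks = np.column_stack([rankdata(quality[:, j]) for j in range(quality.shape[1])])
print("best by sum of ranks:   ", np.argmin(ranks.sum(axis=1)))
print("best by median of ranks:", np.argmin(np.median(ranks, axis=1)))
```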

  17. Simultaneous detection of trace metal ions in water by solid phase extraction spectroscopy combined with multivariate calibration.

    PubMed

    Wang, Lei; Cao, Peng; Li, Wei; Tong, Peijin; Zhang, Xiaofang; Du, Yiping

    2016-04-15

    Solid Phase Extraction Spectroscopy (SPES), developed in this paper, is a technique to measure the spectrum directly on the solid-phase material on which the analytes are concentrated during the SPE process. Membrane enrichment and UV-Visible spectroscopy were utilized to implement SPES, and the multivariate calibration method of partial least squares (PLS) was used to simultaneously detect the concentrations of trace cobalt (II) and zinc (II) in water samples. The proposed method is simple, sensitive and selective. The complexes of the analyte ions were collected on cellulose acetate membranes via membrane filtration after the complexation reaction with 1-(2-pyridylazo)-2-naphthol (PAN). The spectra of the membranes containing the complexes of the metal ions and PAN were measured directly without eluting. The analytical conditions, including pH, reaction time, sample volume, the amount of PAN, and flow rates, were optimized. The nonionic surfactant Brij-30 was absorbed on the membranes prior to SPES to modify them and improve the enrichment and spectrum measurement. The interference from other ions in the determination was investigated. Under the optimal conditions, the absorbance was linearly related to the concentration in the ranges of 0.1-3.0 μg/L and 0.1-2.0 μg/L, with correlation coefficients (R(2)) of 0.9977 and 0.9951 for Co (II) and Zn (II), respectively. The limits of detection were 0.066 μg/L for cobalt (II) and 0.104 μg/L for zinc (II). PLS regression with leave-one-out cross-validation was utilized to build models to detect cobalt (II) and zinc (II) in drinking water samples simultaneously. The correlation coefficients between ion concentration and spectra were 1.0000 for the calibration set and 0.9974 for the independent prediction set for cobalt (II), and 1.0000 and 0.9956 for zinc (II). For cobalt (II) and zinc (II), the errors of the prediction set were in the ranges 0.0406-0.1353 μg/L and 0.0025-0.1884 μg/L. Copyright © 2016. Published by Elsevier B.V.
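    The simultaneous two-analyte PLS calibration with leave-one-out cross-validation described above can be sketched as follows, with placeholder membrane spectra in place of the measured SPES data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(6)
X = rng.normal(size=(30, 250))             # UV-Vis spectra measured on the membranes
Y = rng.uniform(0.1, 3.0, size=(30, 2))    # [Co(II), Zn(II)] concentrations (ug/L)

Y_cv = cross_val_predict(PLSRegression(n_components=4), X, Y, cv=LeaveOneOut())
rmse = np.sqrt(((Y - Y_cv) ** 2).mean(axis=0))
print(dict(zip(["Co(II)", "Zn(II)"], rmse)))
```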

  18. Simultaneous detection of trace metal ions in water by solid phase extraction spectroscopy combined with multivariate calibration

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Cao, Peng; Li, Wei; Tong, Peijin; Zhang, Xiaofang; Du, Yiping

    2016-04-01

    Solid Phase Extraction Spectroscopy (SPES), developed in this paper, is a technique to measure the spectrum directly on the solid-phase material on which the analytes are concentrated during the SPE process. Membrane enrichment and UV-Visible spectroscopy were utilized to implement SPES, and the multivariate calibration method of partial least squares (PLS) was used to simultaneously detect the concentrations of trace cobalt (II) and zinc (II) in water samples. The proposed method is simple, sensitive and selective. The complexes of the analyte ions were collected on cellulose acetate membranes via membrane filtration after the complexation reaction with 1-(2-pyridylazo)-2-naphthol (PAN). The spectra of the membranes containing the complexes of the metal ions and PAN were measured directly without eluting. The analytical conditions, including pH, reaction time, sample volume, the amount of PAN, and flow rates, were optimized. The nonionic surfactant Brij-30 was absorbed on the membranes prior to SPES to modify them and improve the enrichment and spectrum measurement. The interference from other ions in the determination was investigated. Under the optimal conditions, the absorbance was linearly related to the concentration in the ranges of 0.1-3.0 μg/L and 0.1-2.0 μg/L, with correlation coefficients (R2) of 0.9977 and 0.9951 for Co (II) and Zn (II), respectively. The limits of detection were 0.066 μg/L for cobalt (II) and 0.104 μg/L for zinc (II). PLS regression with leave-one-out cross-validation was utilized to build models to detect cobalt (II) and zinc (II) in drinking water samples simultaneously. The correlation coefficients between ion concentration and spectra were 1.0000 for the calibration set and 0.9974 for the independent prediction set for cobalt (II), and 1.0000 and 0.9956 for zinc (II). For cobalt (II) and zinc (II), the errors of the prediction set were in the ranges 0.0406-0.1353 μg/L and 0.0025-0.1884 μg/L.

  19. Investigating the discrimination potential of linear and nonlinear spectral multivariate calibrations for analysis of phenolic compounds in their binary and ternary mixtures and calculation pKa values

    NASA Astrophysics Data System (ADS)

    Rasouli, Zolaikha; Ghavami, Raouf

    2016-08-01

    Vanillin (VA), vanillic acid (VAI) and syringaldehyde (SIA) are important food additives used as flavor enhancers. The current study is, for the first time, devoted to the application of partial least squares (PLS-1), partial robust M-regression (PRM) and feed-forward neural networks (FFNNs) as linear and nonlinear chemometric methods for the simultaneous detection of binary and ternary mixtures of VA, VAI and SIA, using data extracted directly from UV spectra with overlapped peaks of the individual analytes. Under the optimum experimental conditions, a linear calibration was obtained for each compound in the concentration ranges of 0.61-20.99 [LOD = 0.12], 0.67-23.19 [LOD = 0.13] and 0.73-25.12 [LOD = 0.15] μg mL-1 for VA, VAI and SIA, respectively. Four calibration sets of standard samples were designed by combining full and fractional factorial designs with seven and three levels per factor for the binary and ternary mixtures, respectively. The results of this study reveal that PLS-1 and PRM are similar in their ability to predict each binary mixture, while the resolution of the ternary mixture was accomplished by FFNNs. Multivariate curve resolution-alternating least squares (MCR-ALS) was applied to the spectra from the acid-base titration of each individual compound, i.e. to resolve the complex overlapping spectra and to interpret the extracted spectral and concentration profiles of the pure chemical species identified. Evolving factor analysis (EFA) and singular value decomposition (SVD) were used to determine the number of chemical species, and the corresponding dissociation constants were subsequently derived. Finally, FFNNs were used to detect the active compounds in real and spiked water samples.

  20. Investigating the discrimination potential of linear and nonlinear spectral multivariate calibrations for analysis of phenolic compounds in their binary and ternary mixtures and calculation pKa values.

    PubMed

    Rasouli, Zolaikha; Ghavami, Raouf

    2016-08-05

    Vanillin (VA), vanillic acid (VAI) and syringaldehyde (SIA) are important food additives used as flavor enhancers. The current study is, for the first time, devoted to the application of partial least squares (PLS-1), partial robust M-regression (PRM) and feed-forward neural networks (FFNNs) as linear and nonlinear chemometric methods for the simultaneous detection of binary and ternary mixtures of VA, VAI and SIA, using data extracted directly from UV spectra with overlapped peaks of the individual analytes. Under the optimum experimental conditions, a linear calibration was obtained for each compound in the concentration ranges of 0.61-20.99 [LOD=0.12], 0.67-23.19 [LOD=0.13] and 0.73-25.12 [LOD=0.15] μg mL(-1) for VA, VAI and SIA, respectively. Four calibration sets of standard samples were designed by combining full and fractional factorial designs with seven and three levels per factor for the binary and ternary mixtures, respectively. The results of this study reveal that PLS-1 and PRM are similar in their ability to predict each binary mixture, while the resolution of the ternary mixture was accomplished by FFNNs. Multivariate curve resolution-alternating least squares (MCR-ALS) was applied to the spectra from the acid-base titration of each individual compound, i.e. to resolve the complex overlapping spectra and to interpret the extracted spectral and concentration profiles of the pure chemical species identified. Evolving factor analysis (EFA) and singular value decomposition (SVD) were used to determine the number of chemical species, and the corresponding dissociation constants were subsequently derived. Finally, FFNNs were used to detect the active compounds in real and spiked water samples. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. A systematic study on the effect of noise and shift on multivariate figures of merit of second-order calibration algorithms.

    PubMed

    Ahmadvand, Mohammad; Parastar, Hadi; Sereshti, Hassan; Olivieri, Alejandro; Tauler, Roma

    2017-02-01

    In the present study, multivariate analytical figures of merit (AFOM) for three well-known second-order calibration algorithms, parallel factor analysis (PARAFAC), PARAFAC2 and multivariate curve resolution-alternating least squares (MCR-ALS), were investigated in simulated hyphenated chromatographic systems including different artifacts (e.g., noise and peak shifts). Different two- and three-component systems with interferences were simulated. Resolved profiles from the target components were used to build calibration curves and to calculate the multivariate AFOMs, sensitivity (SEN), analytical sensitivity (γ), selectivity (SEL) and limit of detection (LOD). The obtained AFOMs for different simulated data sets using different algorithms were used to compare the performance of the algorithms and their calibration ability. Furthermore, phenanthrene and anthracene were analyzed by GC-MS in a mixture of polycyclic aromatic hydrocarbons (PAHs) to confirm the applicability of multivariate AFOMs in real samples. It is concluded that the MCR-ALS method provided the best resolution performance among the tested methods and that more reliable AFOMs were obtained with this method for the studied chromatographic systems with various levels of noise, elution time shifts and presence of unknown interferences.
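    After second-order resolution, figures of merit are typically taken from a pseudo-univariate calibration built on the resolved component scores; the sketch below uses the common 3.3·s/slope convention for the LOD, which may differ in detail from the formulas used in the paper.

```python
import numpy as np

nominal = np.array([0.5, 1.0, 2.0, 4.0, 8.0])     # standard concentrations
score = np.array([0.9, 2.1, 4.0, 8.3, 16.1])      # resolved-profile areas for the analyte

slope, intercept = np.polyfit(nominal, score, 1)  # pseudo-univariate calibration line
residuals = score - (slope * nominal + intercept)
s = residuals.std(ddof=2)                         # residual standard deviation

sensitivity = slope                               # SEN of the pseudo-univariate curve
analytical_sensitivity = slope / s                # gamma = SEN / noise
lod = 3.3 * s / slope                             # limit of detection
print(sensitivity, analytical_sensitivity, lod)
```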

  2. Uncertainty analysis of the hydro-estimator and the self-calibrating multivariate precipitation retrieval over a mountainous region

    NASA Astrophysics Data System (ADS)

    Akcelik, M.; Yucel, I.; Kuligowski, R. J.

    2011-12-01

    This study investigates the performance of the National Oceanic and Atmospheric Administration / National Environmental Satellite, Data, and Information Service (NOAA/NESDIS) operational rainfall estimation algorithms, the Hydro-Estimator (HE) and the Self-Calibrating Multivariate Precipitation Retrieval (SCaMPR), in their depiction of the timing, intensity, and duration of rainfall in general, and compares the accuracies of the two algorithms over an elevation-based scale during both winter and summer seasons. An event-based rainfall observation network in northwest Mexico, established as part of the North American Monsoon Experiment (NAME), provides gauge-based precipitation measurements with sufficient temporal and spatial sampling characteristics to examine the climatological structure of diurnal convective activity over northwest Mexico. In this study, rainfall estimates from the HE and SCaMPR algorithms were evaluated against point observations collected from 49 rain gauges from August through the end of September 2002, from 76 gauges from August through the end of September 2003, and from 76 gauges from December 2003 through the end of March 2004. While both algorithms provide estimates of the spatial distribution and timing of diurnal convective events, elevation-dependent biases exist in both, characterized by an overestimate of the occurrence of precipitation. This overestimation is most evident at low elevations, whereas at high elevations the bias becomes somewhat smaller. The findings suggest that continued improvement of the orographic correction scheme is warranted in order to advance quantitative precipitation estimation in complex terrain regions and for use in hydrologic applications.

  3. Firefly algorithm versus genetic algorithm as powerful variable selection tools and their effect on different multivariate calibration models in spectroscopy: A comparative study

    NASA Astrophysics Data System (ADS)

    Attia, Khalid A. M.; Nassar, Mohammed W. I.; El-Zeiny, Mohamed B.; Serag, Ahmed

    2017-01-01

    For the first time, a new variable selection method based on swarm intelligence, namely the firefly algorithm, is coupled with three different multivariate calibration models, namely concentration residual augmented classical least squares, artificial neural network and support vector regression, for UV spectral data. A comparative study between the firefly algorithm and the well-known genetic algorithm was developed. The discussion revealed the superiority of using this new powerful algorithm over the well-known genetic algorithm. Moreover, different statistical tests were performed and no significant differences were found between any of the models regarding their predictive abilities. This ensures that simpler and faster models were obtained without any deterioration of the quality of the calibration.

  4. Authigenic oxide Neodymium Isotopic composition as a proxy of seawater: applying multivariate statistical analyses.

    NASA Astrophysics Data System (ADS)

    McKinley, C. C.; Scudder, R.; Thomas, D. J.

    2016-12-01

    The Neodymium Isotopic composition (Nd IC) of oxide coatings has been applied as a tracer of water mass composition and used to address fundamental questions about past ocean conditions. The leached authigenic oxide coating from marine sediment is widely assumed to reflect the dissolved trace metal composition of the bottom water interacting with sediment at the seafloor. However, recent studies have shown that readily reducible sediment components, in addition to trace metal fluxes from the pore water, are incorporated into the bottom water, influencing the trace metal composition of leached oxide coatings. This challenges the prevailing application of the authigenic oxide Nd IC as a proxy of seawater composition. Therefore, it is important to identify the component end-members that create sediments of different lithology and determine if, or how, they might contribute to the Nd IC of oxide coatings. To investigate lithologic influence on the results of sequential leaching, we selected two sites with complete bulk sediment statistical characterization. Site U1370 in the South Pacific Gyre is predominantly composed of rhyolite (~60%) and has a distinguishable (~10%) Fe-Mn oxyhydroxide component (Dunlea et al., 2015). Site 1149 near the Izu-Bonin Arc is predominantly composed of dispersed ash (~20-50%) and eolian dust from Asia (~50-80%) (Scudder et al., 2014). We perform a two-step leaching procedure: 14 mL of 0.02 M hydroxylamine hydrochloride (HH) in 20% acetic acid buffered to pH 4 for one hour, targeting metals bound to the Fe- and Mn-oxide fractions, and a second HH leach for 12 hours, designed to remove any remaining oxides from the residual component. We analyze all three resulting fractions for a large suite of major, trace and rare earth elements; a sub-set of the samples is also analyzed for Nd IC. We use multivariate statistical analyses of the resulting geochemical data to identify how each component of the sediment partitions across the sequential

  5. Seismic Texture Applied to Well Calibration and Reservoir Property Prediction in the North Central Appalachian Basin

    NASA Astrophysics Data System (ADS)

    Ghosh, Amartya Ghosh

    Enhancing seismic interpretation capabilities often relies on the application of object oriented attributes to better understand subsurface geology. This research intends to extract and calibrate seismic texture attributes with well log data for better characterization of the Marcellus gas shale in north central Appalachian basin. Seismic texture refers to the lateral and vertical variations in reflection amplitude and waveform at a specific sample location in the 3-D seismic domain. Among various texture analysis algorithms, here seismic texture is characterized via an algorithm called waveform model regression utilizing model-derived waveforms for reservoir property calibration. Altering the calibrating waveforms facilitates the conversion of amplitude volumes to purpose-driven texture volumes to be calibrated with well logs for prediction of reservoir properties in untested regions throughout the reservoir. Seismic data calibration is crucial due to the resolution and uncertainty in the interpretation of the data. Because texture is a more unique descriptor of seismic data than amplitude, it provides more statistically and geologically significant correlations to well data. Our new results show that seismic texture is a viable attribute not only for reservoir feature visualization and discrimination, but also for reservoir property calibration and prediction. Comparative analysis indicates that the new results help better define seismic signal properties that are important in predicting the heterogeneity of the unconventional reservoir in the basin. Provisions of this research include a case study applying seismic texture attributes and an assessment of the viability of the attributes to be calibrated with well data from the Marcellus Shale in the north central Appalachian basin. Examples from this study will provide insight in its capabilities in practical applications of seismic texture attributes in unconventional reservoirs in the Appalachian basin and other

  6. A novel multivariate approach using science-based calibration for direct coating thickness determination in real-time NIR process monitoring.

    PubMed

    Möltgen, C-V; Herdling, T; Reich, G

    2013-11-01

    This study demonstrates an approach, using science-based calibration (SBC), for direct coating thickness determination on heart-shaped tablets in real-time. Near-Infrared (NIR) spectra were collected during four full industrial pan coating operations. The tablets were coated with a thin hydroxypropyl methylcellulose (HPMC) film up to a film thickness of 28 μm. The application of SBC permits the calibration of the NIR spectral data without using costly determined reference values. This is due to the fact that SBC combines classical methods to estimate the coating signal and statistical methods for the noise estimation. The approach enabled the use of NIR for the measurement of the film thickness increase from around 8 to 28 μm of four independent batches in real-time. The developed model provided a spectroscopic limit of detection for the coating thickness of 0.64 ± 0.03 μm root-mean square (RMS). In the commonly used statistical methods for calibration, such as Partial Least Squares (PLS), sufficiently varying reference values are needed for calibration. For thin non-functional coatings this is a challenge because the quality of the model depends on the accuracy of the selected calibration standards. The obvious and simple approach of SBC eliminates many of the problems associated with the conventional statistical methods and offers an alternative for multivariate calibration.

  7. Chemometric methods applied to the calibration of a Vis-NIR sensor for gas engine's condition monitoring.

    PubMed

    Villar, Alberto; Gorritxategi, Eneko; Otaduy, Deitze; Ciria, Jose I; Fernandez, Luis A

    2011-10-31

    This paper describes the calibration process of a Visible-Near Infrared sensor for condition monitoring of a gas engine's lubricating oil, correlating transmittance oil spectra with the degradation of the oil via a regression model. Chemometric techniques were applied to determine different parameters: Base Number (BN), Acid Number (AN), insolubles in pentane and viscosity at 40 °C. A Visible-Near Infrared (400-1100 nm) sensor developed at the Tekniker research center was used to obtain the spectra of artificial and real gas engine oils. In order to improve the sensor's data, different preprocessing methods, such as smoothing by Savitzky-Golay or moving average, combined with Multivariate Scatter Correction or Standard Normal Variate to eliminate the scatter effect, were applied. A combination of these preprocessing methods was applied to each parameter. The regression models were developed by Partial Least Squares Regression (PLSR). In the end, it was shown that only some models were valid, fulfilling a set of quality requirements. The paper shows which models achieved the established validation requirements and which preprocessing methods perform better. A discussion follows regarding the potential improvement in the robustness of the models.
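    The preprocessing-plus-PLSR chain described above can be sketched as follows: Savitzky-Golay smoothing, then Standard Normal Variate scaling of each spectrum, then a PLS regression against one oil parameter. Window length, polynomial order, and component count are illustrative assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression

def snv(X):
    # Standard Normal Variate: center and scale each spectrum individually.
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

rng = np.random.default_rng(7)
X = rng.normal(size=(80, 350)) + 10.0      # Vis-NIR transmittance spectra of oil samples
y = rng.uniform(2.0, 9.0, size=80)         # e.g., Base Number (mg KOH/g)

X_pre = snv(savgol_filter(X, window_length=11, polyorder=2, axis=1))
model = PLSRegression(n_components=5).fit(X_pre, y)
```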

  8. Application of chemometric tools for coal classification and multivariate calibration by transmission and drift mid-infrared spectroscopy.

    PubMed

    Bona, M T; Andrés, J M

    2008-08-22

    This paper focuses on the determination of nine coal properties relevant to combustion power plants (moisture (%), ash (%), volatile matter (%), fixed carbon (%), heating value (kcal kg(-1)), carbon (%), hydrogen (%), nitrogen (%) and sulphur (%)) by mid-infrared spectroscopy. To this end, a wide and diverse coal sample set was clustered into new homogeneous coal subgroups by hierarchical clustering analysis. This process was performed including property values and spectral data (scores of principal component analysis, PCA) as independent variables. Once the clusters were defined, the corresponding property calibration models were built by partial least squares regression. Several mathematical pre-treatments were applied to the original spectral data in order to cope with some non-linearities. The accuracy and precision levels for each property were studied. The results revealed that coal properties related to organic components presented relative error values around 2% for some clusters, comparable to those provided by commercial online analysers. Finally, the discrimination level between those groups of samples was evaluated by linear discriminant analysis (LDA). The sensitivity of the system was studied, achieving percentages close to 100% when the samples were classified according only to their mid-infrared spectra.

  9. Moors and Christians: an example of multivariate analysis applied to human blood-groups.

    PubMed

    Reyment, R A

    1983-01-01

    Published data on the frequencies of the alleles of the ABO, MNS, and Rh systems for populations in the western Mediterranean region are analysed by the multivariate statistical methods of canonical variates, principal components, principal coordinates, correspondence analysis and discriminant functions. It is shown that there is a 'Moorish substrate' in the eastern and north-eastern parts of Spain and in southern Portugal. Serological effects, such as could derive from the assimilation of a large Jewish population, cannot be identified in the data available. The theory that most Hispano-Moslems and Spanish Jews were of indigenous origin is not gainsaid by the serological data available.

  10. A PID de-tuned method for multivariable systems, applied for HVAC plant

    NASA Astrophysics Data System (ADS)

    Ghazali, A. B.

    2015-09-01

    A simple yet effective de-tuning of PID parameters for multivariable applications is described. Although the method is felt to have wider application, it is simulated here in a 3-input/2-output building energy management system (BEMS) with known plant dynamics. Controller performance measures, such as the sum of squared output errors and the total energy consumption when the system is at steady-state conditions, are studied. This tuning methodology can also be extended to reduce the number of PID controllers, as well as the control inputs for specified output references, that are necessary for effective results, i.e. with good regulation performance being maintained.

  11. Improving Prediction Accuracy for WSN Data Reduction by Applying Multivariate Spatio-Temporal Correlation

    PubMed Central

    Carvalho, Carlos; Gomes, Danielo G.; Agoulmine, Nazim; de Souza, José Neuman

    2011-01-01

    This paper proposes a method based on multivariate spatial and temporal correlation to improve prediction accuracy in data reduction for Wireless Sensor Networks (WSN). Prediction of data not sent to the sink node is a technique used to save energy in WSNs by reducing the amount of data traffic. However, it may not be very accurate. Simulations were made involving simple linear regression and multiple linear regression functions to assess the performance of the proposed method. The results show a higher correlation between gathered inputs when compared to time, which is an independent variable widely used for prediction and forecasting. Prediction accuracy is lower when simple linear regression is used, whereas multiple linear regression is the most accurate one. In addition, our proposal outperforms some current solutions by about 50% in humidity prediction and 21% in light prediction. To the best of our knowledge, this is the first work to address prediction based on multivariate correlation for WSN data reduction. PMID:22346626
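    The gain from multivariate prediction described above can be illustrated with a toy comparison: one sensor reading is predicted either from time alone or from two co-located readings via multiple linear regression. All signals below are synthetic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(8)
t = np.arange(500.0)
temp = 20 + 0.01 * t + rng.normal(scale=0.3, size=500)
light = 100 + 5 * np.sin(t / 50) + rng.normal(scale=2.0, size=500)
humidity = 80 - 0.8 * temp + 0.02 * light + rng.normal(scale=0.5, size=500)

time_only = LinearRegression().fit(t.reshape(-1, 1), humidity)
multivar = LinearRegression().fit(np.column_stack([temp, light]), humidity)

print("R^2 from time only:      ", time_only.score(t.reshape(-1, 1), humidity))
print("R^2 from temp and light: ", multivar.score(np.column_stack([temp, light]), humidity))
```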

  12. Multivariate statistics applied to the reaction of common bean plants to parasitism by Meloidogyne javanica.

    PubMed

    Santos, L N S; Cabral, P D S; Neves, G A R; Alves, F R; Teixeira, M B; Cunha, F N; Silva, N F

    2017-03-16

    The availability of common bean cultivars tolerant to Meloidogyne javanica is limited in Brazil. Thus, the present study aimed to evaluate the reactions of 33 common bean genotypes (23 landrace, 8 commercial, 1 susceptible standard and 1 resistant standard) to M. javanica, employing multivariate statistics to discriminate the reaction of the genotypes. The experiment was conducted in a greenhouse using a completely randomized design with seven replicates. The seeds were sown in 1-L pots containing autoclaved soil and sand in a 1:1 ratio (v:v). On day 19, after emergence of the seedlings, the plants were treated with inoculum containing 4000 eggs + second-stage juveniles (J2). At 60 days after inoculation, the seedlings were evaluated based on biometric and parasitism-related traits, such as number of galls, final nematode population per root system, reproduction factor, and percent reduction in the reproduction factor of the nematode (%RRF). The data were subjected to analysis of variance using the F-test. The Mahalanobis generalized distance was used to obtain the dissimilarity matrix, and the average linkage between groups was used for clustering. The use of multivariate statistics allowed groups to be separated according to the resistance levels of genotypes, as observed in the %RRF. The landrace genotypes FORT-09, FORT-17, FORT-31, FORT-32, FORT-34 and FORT-36 presented resistance to M. javanica; thus, these genotypes can be considered potential sources of resistance.
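    The clustering step described above can be sketched in a few lines: a Mahalanobis distance matrix over the genotype trait values is built and then grouped by average-linkage hierarchical clustering. The trait values and group count below are placeholders.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(10)
traits = rng.normal(size=(33, 4))   # 33 genotypes x (galls, final population, RF, %RRF)

VI = np.linalg.inv(np.cov(traits, rowvar=False))       # inverse covariance matrix
d = pdist(traits, metric="mahalanobis", VI=VI)         # generalized distance matrix
groups = fcluster(linkage(d, method="average"), t=4, criterion="maxclust")
print(groups)                                          # cluster label per genotype
```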

  13. A multivariate calibration procedure for UV/VIS spectrometric quantification of organic matter and nitrate in wastewater.

    PubMed

    Langergraber, G; Fleischmann, N; Hofstädter, F

    2003-01-01

    A submersible UV/VIS spectrometer for in-situ real-time measurements is presented. It utilises the UV/VIS range (200-750 nm) for simultaneous measurement of COD, filtered COD, TSS and nitrate with just a single instrument. A global calibration is provided that is valid for typical municipal wastewater compositions. High correlation coefficients can usually be achieved using this standard setting. By running a local calibration, improvements in trueness, precision and long-term stability of the results can be achieved. The calibration model is built by means of PLS, various validation procedures and outlier tests to reach both high correlation quality and robustness. This paper describes the UV/VIS spectrometer and the calibration procedure.

  14. Laser ablation molecular isotopic spectroscopy (LAMIS) towards the determination of multivariate LODs via PLS calibration model of 10B and 11B Boric acid mixtures

    NASA Astrophysics Data System (ADS)

    Harris, C. D.; Profeta, Luisa T. M.; Akpovo, Codjo A.; Johnson, Lewis; Stowe, Ashley C.

    2017-05-01

    A calibration model was created to illustrate the detection capabilities of laser ablation molecular isotopic spectroscopy (LAMIS) for discrimination in isotopic analysis. The sample set contained boric acid pellets that varied in the isotopic concentrations of 10B and 11B. Each sample set was interrogated with a Q-switched Nd:YAG ablation laser operating at 532 nm. A minimum of four band heads of the β-system B2Σ → X2Σ transitions were identified and verified against previous literature on BO molecular emission lines. Isotopic shifts were observed in the spectra for each transition and used as the predictors in the calibration model. The spectra, along with their respective 10B/11B isotopic ratios, were analyzed using Partial Least Squares Regression (PLSR). A novel IUPAC approach for determining a multivariate Limit of Detection (LOD) interval was used to predict the detection of the desired isotopic ratios. The predicted multivariate LOD is dependent on the variation of the instrumental signal and other composites in the calibration model space.

  15. Calibration methodology for proportional counters applied to yield measurements of a neutron burst.

    PubMed

    Tarifeño-Saldivia, Ariel; Mayer, Roberto E; Pavez, Cristian; Soto, Leopoldo

    2014-01-01

    This paper introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. This methodology is to be applied when single neutron events cannot be resolved in time by nuclear standard electronics, or when a continuous current cannot be measured at the output of the counter. The methodology is based on the calibration of the counter in pulse mode, and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from the detection of the burst of neutrons. The model is developed and presented in full detail. For the measurement of fast neutron yields generated from plasma focus experiments using a moderated proportional counter, the implementation of the methodology is herein discussed. An experimental verification of the accuracy of the methodology is presented. An improvement of more than one order of magnitude in the accuracy of the detection system is obtained by using this methodology with respect to previous calibration methods.
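    A highly simplified sketch of the charge-calibration idea follows: the mean charge per detected event, measured in pulse mode, converts the charge accumulated during a burst into an estimated number of detected neutrons, with a first-order uncertainty estimate. The published statistical model is more complete than this, and all numbers below are placeholders.

```python
import numpy as np

# Pulse-mode calibration: distribution of single-event charges (placeholder values, pC).
pulse_charges = np.random.default_rng(9).normal(loc=1.8, scale=0.4, size=2000)
q_mean = pulse_charges.mean()
q_std = pulse_charges.std(ddof=1)

Q_burst = 5.4e3                    # charge integrated over the neutron burst (pC)
N_est = Q_burst / q_mean           # estimated number of detected events

# Relative uncertainty: Poisson counting term plus the single-event charge spread.
rel_u = np.sqrt(1.0 / N_est + (q_std / q_mean) ** 2 / N_est)
print(N_est, N_est * rel_u)
```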

  16. Calibration methodology for proportional counters applied to yield measurements of a neutron burst

    SciTech Connect

    Tarifeño-Saldivia, Ariel E-mail: atarisal@gmail.com; Pavez, Cristian; Soto, Leopoldo; Mayer, Roberto E.

    2014-01-15

    This paper introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. This methodology is to be applied when single neutron events cannot be resolved in time by nuclear standard electronics, or when a continuous current cannot be measured at the output of the counter. The methodology is based on the calibration of the counter in pulse mode, and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from the detection of the burst of neutrons. The model is developed and presented in full detail. For the measurement of fast neutron yields generated from plasma focus experiments using a moderated proportional counter, the implementation of the methodology is herein discussed. An experimental verification of the accuracy of the methodology is presented. An improvement of more than one order of magnitude in the accuracy of the detection system is obtained by using this methodology with respect to previous calibration methods.

  17. Multivariate cross-classification: applying machine learning techniques to characterize abstraction in neural representations

    PubMed Central

    Kaplan, Jonas T.; Man, Kingson; Greening, Steven G.

    2015-01-01

    Here we highlight an emerging trend in the use of machine learning classifiers to test for abstraction across patterns of neural activity. When a classifier algorithm is trained on data from one cognitive context, and tested on data from another, conclusions can be drawn about the role of a given brain region in representing information that abstracts across those cognitive contexts. We call this kind of analysis Multivariate Cross-Classification (MVCC), and review several domains where it has recently made an impact. MVCC has been important in establishing correspondences among neural patterns across cognitive domains, including motor-perception matching and cross-sensory matching. It has been used to test for similarity between neural patterns evoked by perception and those generated from memory. Other work has used MVCC to investigate the similarity of representations for semantic categories across different kinds of stimulus presentation, and in the presence of different cognitive demands. We use these examples to demonstrate the power of MVCC as a tool for investigating neural abstraction and discuss some important methodological issues related to its application. PMID:25859202

  18. Multivariate cross-classification: applying machine learning techniques to characterize abstraction in neural representations.

    PubMed

    Kaplan, Jonas T; Man, Kingson; Greening, Steven G

    2015-01-01

    Here we highlight an emerging trend in the use of machine learning classifiers to test for abstraction across patterns of neural activity. When a classifier algorithm is trained on data from one cognitive context, and tested on data from another, conclusions can be drawn about the role of a given brain region in representing information that abstracts across those cognitive contexts. We call this kind of analysis Multivariate Cross-Classification (MVCC), and review several domains where it has recently made an impact. MVCC has been important in establishing correspondences among neural patterns across cognitive domains, including motor-perception matching and cross-sensory matching. It has been used to test for similarity between neural patterns evoked by perception and those generated from memory. Other work has used MVCC to investigate the similarity of representations for semantic categories across different kinds of stimulus presentation, and in the presence of different cognitive demands. We use these examples to demonstrate the power of MVCC as a tool for investigating neural abstraction and discuss some important methodological issues related to its application.

  19. Simultaneous determination of the colorants sunset yellow FCF and quinoline yellow by solid-phase spectrophotometry using partial least squares multivariate calibration.

    PubMed

    Capitán-Vallvey, L F; Fernández, M D; de Orbe, I; Vilchez, J L; Avidad, R

    1997-04-01

    A method for the simultaneous determination of the colorants Sunset Yellow FCF and Quinoline Yellow using solid-phase spectrophotometry is proposed. The colorants were isolated in Sephadex DEAE A-25 gel at pH 5.0, the gel-colorants system was packed in a 1 mm silica cell and spectra were recorded between 400 and 600 nm against a blank. Statistical results were obtained by partial least squares (PLS) multivariate calibration. The optimized matrix by using the PLS-2 method enables the determination of the colorants in artificial mixtures and commercial soft drinks.

  20. A uniform nonlinearity criterion for rational functions applied to calibration curve and standard addition methods.

    PubMed

    Michałowska-Kaczmarczyk, Anna Maria; Asuero, Agustin G; Martin, Julia; Alonso, Esteban; Jurado, Jose Marcos; Michałowski, Tadeusz

    2014-12-01

    Rational functions of the Padé type are used for the purposes of the calibration curve method (CCM) and the standard addition method (SAM). In this paper, the related functions were applied to results obtained from the analyses of (a) nickel by the FAAS method, (b) potassium by the FAES method, and (c) salicylic acid by the HPLC-MS/MS method. A uniform, integral criterion of nonlinearity of the curves obtained according to CCM and SAM is suggested. This uniformity is based on normalization of the approximating functions within the frame of a unit area. Copyright © 2014 Elsevier B.V. All rights reserved.
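
    The sketch below illustrates, under stated assumptions, how a Padé-type rational function can be fitted to calibration data and how an area-based nonlinearity index can be computed on axes normalized to a unit square. The calibration points and the exact form of the index are illustrative and may differ from the criterion defined in the paper.

    ```python
    # Hedged sketch: fit y = (b0 + b1*x + b2*x^2) / (1 + c1*x) and compute an
    # area-based nonlinearity index on unit-normalized axes.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.integrate import quad

    def pade_21(x, b0, b1, b2, c1):
        return (b0 + b1 * x + b2 * x ** 2) / (1.0 + c1 * x)

    # Hypothetical calibration points (signal vs. concentration).
    x = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
    y = np.array([0.02, 0.21, 0.40, 0.74, 1.02, 1.25, 1.43])

    popt, _ = curve_fit(pade_21, x, y, p0=[0.0, 0.2, 0.0, 0.01])

    # Normalize both axes to [0, 1] and integrate the absolute deviation of the
    # normalized curve from the chord joining its end points.
    def f_norm(t):
        xv = x.min() + t * (x.max() - x.min())
        return (pade_21(xv, *popt) - y.min()) / (y.max() - y.min())

    chord = lambda t: f_norm(0.0) + t * (f_norm(1.0) - f_norm(0.0))
    nonlinearity, _ = quad(lambda t: abs(f_norm(t) - chord(t)), 0.0, 1.0)
    print(f"area-based nonlinearity index: {nonlinearity:.4f}")
    ```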

  1. A Multivariate Randomization Test of Association Applied to Cognitive Test Results

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert; Beard, Bettina

    2009-01-01

    Randomization tests provide a conceptually simple, distribution-free way to implement significance testing. We have applied this method to the problem of evaluating the significance of the association among a number (k) of variables. The randomization method was the random re-ordering of k-1 of the variables. The criterion variable was the value of the largest eigenvalue of the correlation matrix.
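
    A minimal Python sketch of the procedure described above, using hypothetical test scores: the association criterion is the largest eigenvalue of the correlation matrix, and the null distribution is generated by independently re-ordering k-1 of the variables.

    ```python
    # Randomization test of association among k variables.
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(50, 5))          # 50 observations of k = 5 test scores
    X[:, 1] += 0.6 * X[:, 0]              # inject some association for the demo

    def largest_eig(data):
        return np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[-1]

    observed = largest_eig(X)
    n_perm, exceed = 2000, 0
    for _ in range(n_perm):
        Xp = X.copy()
        for j in range(1, X.shape[1]):    # randomly re-order k-1 of the variables
            rng.shuffle(Xp[:, j])
        exceed += largest_eig(Xp) >= observed

    p_value = (exceed + 1) / (n_perm + 1)
    print(f"observed lambda_max = {observed:.3f}, permutation p ~ {p_value:.4f}")
    ```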

  2. Linear model correction: A method for transferring a near-infrared multivariate calibration model without standard samples.

    PubMed

    Liu, Yan; Cai, Wensheng; Shao, Xueguang

    2016-12-05

    Calibration transfer is essential for practical applications of near infrared (NIR) spectroscopy because spectra may be measured on different instruments and the difference between the instruments must be corrected. For most calibration transfer methods, standard samples are necessary to construct the transfer model from the spectra of the samples measured on two instruments, referred to as the master and slave instruments, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. Consequently, the coefficients of the linear models constructed from the spectra measured on different instruments are similar in profile. Therefore, using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not needed, the method may be more useful in practical applications.
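
    The sketch below illustrates only the general idea of transferring a master regression vector to a slave instrument using a few slave-measured spectra: the transferred coefficients are approximated by a fitted scale and offset. It is not the published LMC algorithm, and all spectra and reference values are simulated.

    ```python
    # Hedged sketch of coefficient-based calibration transfer between instruments.
    import numpy as np

    rng = np.random.default_rng(2)
    n_samples, n_wl = 60, 100
    X_master = rng.normal(size=(n_samples, n_wl))
    b_true = rng.normal(size=n_wl) * 0.05
    y = X_master @ b_true + rng.normal(scale=0.01, size=n_samples)
    X_slave = 1.05 * X_master + 0.02      # simulated inter-instrument difference

    # Master calibration model (ridge-regularized least squares).
    b_master = np.linalg.solve(X_master.T @ X_master + 1e-3 * np.eye(n_wl),
                               X_master.T @ y)

    # Estimate scale and offset of the transferred coefficient vector from a few
    # slave-instrument spectra with known reference values.
    idx = rng.choice(n_samples, size=5, replace=False)
    A = np.column_stack([X_slave[idx] @ b_master, np.ones(len(idx))])
    (scale, offset), *_ = np.linalg.lstsq(A, y[idx], rcond=None)

    y_pred = scale * (X_slave @ b_master) + offset
    rmsep = np.sqrt(np.mean((y_pred - y) ** 2))
    print(f"RMSEP on slave spectra after transfer: {rmsep:.4f}")
    ```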

  3. Linear model correction: A method for transferring a near-infrared multivariate calibration model without standard samples

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Cai, Wensheng; Shao, Xueguang

    2016-12-01

    Calibration transfer is essential for practical applications of near infrared (NIR) spectroscopy because spectra may be measured on different instruments and the difference between the instruments must be corrected. For most calibration transfer methods, standard samples are necessary to construct the transfer model from the spectra of the samples measured on two instruments, referred to as the master and slave instruments, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. Consequently, the coefficients of the linear models constructed from the spectra measured on different instruments are similar in profile. Therefore, using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not needed, the method may be more useful in practical applications.

  4. Multivariate class modeling techniques applied to multielement analysis for the verification of the geographical origin of chili pepper.

    PubMed

    Naccarato, Attilio; Furia, Emilia; Sindona, Giovanni; Tagarelli, Antonio

    2016-09-01

    Four class-modeling techniques (soft independent modeling of class analogy (SIMCA), unequal dispersed classes (UNEQ), potential functions (PF), and multivariate range modeling (MRM)) were applied to multielement distributions to build chemometric models able to authenticate chili pepper samples grown in Calabria with respect to those grown outside of Calabria. The multivariate techniques were applied by considering both all the variables (32 elements: Al, As, Ba, Ca, Cd, Ce, Co, Cr, Cs, Cu, Dy, Fe, Ga, La, Li, Mg, Mn, Na, Nd, Ni, Pb, Pr, Rb, Sc, Se, Sr, Tl, Tm, V, Y, Yb, Zn) and the variables selected by means of stepwise linear discriminant analysis (S-LDA). In the first case, satisfactory and comparable results in terms of CV efficiency are obtained with the use of SIMCA and MRM (82.3 and 83.2%, respectively), whereas MRM performs better than SIMCA in terms of forced model efficiency (96.5%). The selection of variables by S-LDA made it possible to build models characterized, in general, by higher efficiency. MRM again provided the best results for CV efficiency (87.7%, with an effective balance of sensitivity and specificity) as well as forced model efficiency (96.5%).

  5. Characteristics of couples applying for bibliotherapy via different recruitment strategies: a multivariate comparison.

    PubMed

    van Lankveld, J J; Grotjohann, Y; van Lokven, B M; Everaerd, W

    1999-01-01

    This study compared characteristics of couples with different sexual dysfunctions who were recruited for participation in a bibliotherapy program via two routes: in response to media advertisements and through their presence on a waiting list for therapist-administered treatment in an outpatient sexology clinic. Data were collected from 492 subjects (246 couples). Male sexology patients were younger than media-recruited males. However, type of sexual dysfunction accounted for a substantially larger proportion of variance in the demographic and psychometric data. An interaction effect of recruitment strategy and sexual dysfunction type was found with respect to female anorgasmia. We conclude from the absence of differences between the two study groups that the Wills and DePaulo (1991) model of help-seeking behavior for mental problems does not apply to couples with sexual dysfunctions joining a bibliotherapy program who either primarily requested professional treatment or who responded to media advertising.

  6. Tailored Excitation for Multivariable Stability-Margin Measurement Applied to the X-31A Nonlinear Simulation

    NASA Technical Reports Server (NTRS)

    Bosworth, John T.; Burken, John J.

    1997-01-01

    Safety and productivity of the initial flight test phase of a new vehicle have been enhanced by developing the ability to measure the stability margins of the combined control system and vehicle in flight. One shortcoming of performing this analysis is the long duration of the excitation signal required to provide results over a wide frequency range. For flight regimes such as high angle of attack or hypersonic flight, the ability to maintain flight condition for this time duration is difficult. Significantly reducing the required duration of the excitation input is possible by tailoring the input to excite only the frequency range where the lowest stability margin is expected. For a multiple-input/multiple-output system, the inputs can be simultaneously applied to the control effectors by creating each excitation input with a unique set of frequency components. Chirp-Z transformation algorithms can be used to match the analysis of the results to the specific frequencies used in the excitation input. This report discusses the application of a tailored excitation input to a high-fidelity X-31A linear model and nonlinear simulation. Depending on the frequency range, the results indicate the potential to significantly reduce the time required for stability measurement.
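
    The following sketch illustrates the idea of a tailored multi-input excitation in which each effector is driven by its own, non-overlapping set of frequencies concentrated in a band of interest; the effector names, frequency grids, and record length are illustrative assumptions, not values from the X-31A study.

    ```python
    # Tailored excitation sketch: unique frequency sets per control effector.
    import numpy as np

    fs, T = 100.0, 20.0                   # sample rate [Hz], record length [s]
    t = np.arange(0, T, 1 / fs)
    bands = {"effector_1": np.arange(0.5, 3.0, 0.4),   # Hz, interleaved so that
             "effector_2": np.arange(0.7, 3.2, 0.4)}   # no frequency is shared

    rng = np.random.default_rng(13)
    signals = {}
    for name, freqs in bands.items():
        phases = rng.uniform(0, 2 * np.pi, size=freqs.size)
        sig = np.sum(np.sin(2 * np.pi * freqs[:, None] * t + phases[:, None]), axis=0)
        signals[name] = sig / np.abs(sig).max()        # normalize amplitude
    print({name: sig.shape for name, sig in signals.items()})
    ```

    Because each input occupies its own frequency lines, the simultaneous responses can be separated in the frequency domain, which is what allows the excitation duration to be shortened.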

  7. Exploration of attenuated total reflectance mid-infrared spectroscopy and multivariate calibration to measure immunoglobulin G in human sera.

    PubMed

    Hou, Siyuan; Riley, Christopher B; Mitchell, Cynthia A; Shaw, R Anthony; Bryanton, Janet; Bigsby, Kathryn; McClure, J Trenton

    2015-09-01

    Immunoglobulin G (IgG) is crucial for the protection of the host from invasive pathogens. Due to its importance for human health, tools that enable the monitoring of IgG levels are highly desired. Consequently there is a need for methods to determine the IgG concentration that are simple, rapid, and inexpensive. This work explored the potential of attenuated total reflectance (ATR) infrared spectroscopy as a method to determine IgG concentrations in human serum samples. Venous blood samples were collected from adults and children, and from the umbilical cord of newborns. The serum was harvested and tested using ATR infrared spectroscopy. Partial least squares (PLS) regression provided the basis to develop the new analytical methods. Three PLS calibrations were determined: one for the combined set of the venous and umbilical cord serum samples, the second for only the umbilical cord samples, and the third for only the venous samples. The number of PLS factors was chosen by critical evaluation of Monte Carlo-based cross validation results. The predictive performance for each PLS calibration was evaluated using the Pearson correlation coefficient, scatter plot and Bland-Altman plot, and percent deviations for independent prediction sets. The repeatability was evaluated by standard deviation and relative standard deviation. The results showed that ATR infrared spectroscopy is potentially a simple, quick, and inexpensive method to measure IgG concentrations in human serum samples. The results also showed that it is possible to build a united calibration curve for the umbilical cord and the venous samples.
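
    The following sketch shows, with simulated spectra and reference values, how the number of PLS factors can be chosen by Monte Carlo cross-validation (repeated random train/test splits), in the spirit of the factor-selection step mentioned above.

    ```python
    # Choosing the number of PLS factors by Monte Carlo cross-validation.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import ShuffleSplit, cross_val_score

    rng = np.random.default_rng(3)
    X = rng.normal(size=(120, 300))                 # simulated ATR-IR absorbances
    y = X[:, :10].sum(axis=1) + rng.normal(scale=0.5, size=120)  # simulated IgG

    mccv = ShuffleSplit(n_splits=50, test_size=0.3, random_state=0)
    rmsecv = []
    for a in range(1, 16):
        scores = cross_val_score(PLSRegression(n_components=a), X, y,
                                 cv=mccv, scoring="neg_root_mean_squared_error")
        rmsecv.append(-scores.mean())

    best = int(np.argmin(rmsecv)) + 1
    print(f"selected number of PLS factors: {best}")
    ```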

  8. Multivariate analysis applied to monthly rainfall over Rio de Janeiro state, Brazil

    NASA Astrophysics Data System (ADS)

    Brito, Thábata T.; Oliveira-Júnior, José F.; Lyra, Gustavo B.; Gois, Givanildo; Zeri, Marcelo

    2016-10-01

    Spatial and temporal patterns of rainfall were identified over the state of Rio de Janeiro, southeast Brazil. The proximity to the coast and the complex topography create great diversity of rainfall over space and time. The dataset consisted of time series (1967-2013) of monthly rainfall over 100 meteorological stations. Clustering analysis made it possible to divide the stations into six groups (G1, G2, G3, G4, G5 and G6) with similar rainfall spatio-temporal patterns. A linear regression model was applied to each station's time series and a reference series. The reference series was calculated from the average rainfall within a group, using nearby stations with higher correlation (Pearson). Based on a t-test (p < 0.05), all stations had a linear spatiotemporal trend. According to the clustering analysis, the first group (G1) contains stations located over the coastal lowlands and also over the ocean-facing area of Serra do Mar (Sea ridge), a 1500 km long mountain range along coastal Southeastern Brazil. The second group (G2) contains stations spread over the whole state, from Serra da Mantiqueira (Mantiqueira Mountains) and Costa Verde (Green coast), to the south, up to stations in the northern parts of the state. Group 3 (G3) contains stations in the highlands over the state (Serrana region), while group 4 (G4) has stations over the northern areas and the continent-facing side of Serra do Mar. The last two groups were formed with stations around the Paraíba River (G5) and the metropolitan area of the city of Rio de Janeiro (G6). The driest months in all regions were June, July and August, while November, December and January were the rainiest months. Sharp transitions occurred when considering monthly accumulated rainfall: from January to February, and from February to March, likely associated with episodes of "veranicos", i.e., periods of 4-15 days with no rainfall.
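
    As a rough illustration of the grouping step, the sketch below clusters simulated monthly rainfall series into six groups with Ward's hierarchical clustering and builds a group-mean reference series; the station data, the standardization, and the linkage choice are assumptions for illustration only.

    ```python
    # Cluster station rainfall series into six groups and build reference series.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(4)
    n_stations, n_months = 100, 12 * 47        # 1967-2013 monthly series
    rain = rng.gamma(shape=2.0, scale=60.0, size=(n_stations, n_months))

    # Ward clustering on standardized series, cut into six groups (G1..G6).
    z = (rain - rain.mean(axis=1, keepdims=True)) / rain.std(axis=1, keepdims=True)
    groups = fcluster(linkage(z, method="ward"), t=6, criterion="maxclust")

    # Reference series per group: average rainfall of its member stations.
    references = {g: rain[groups == g].mean(axis=0) for g in np.unique(groups)}
    print({g: ref.shape for g, ref in references.items()})
    ```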

  9. A Rapid Coordinate Transformation Method Applied in Industrial Robot Calibration Based on Characteristic Line Coincidence.

    PubMed

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia

    2016-02-18

    Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite the high accuracy, this might result in no solution due to the use of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on solving equations but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the spatial geometric relations, the characteristic lines can be made to coincide by a series of rotations and translations. The transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot through the calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration.

  10. A Rapid Coordinate Transformation Method Applied in Industrial Robot Calibration Based on Characteristic Line Coincidence

    PubMed Central

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia

    2016-01-01

    Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite the high accuracy, this might result in no solution due to the use of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on solving equations but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the spatial geometric relations, the characteristic lines can be made to coincide by a series of rotations and translations. The transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot through the calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration. PMID:26901203

  11. Towards an effective calibration theory for a broadly applied land surface model (VIC)

    NASA Astrophysics Data System (ADS)

    Melsen, Lieke; Teuling, Adriaan; Torfs, Paul; Zappa, Massimiliano

    2014-05-01

    The Variable Infiltration Capacity (VIC, Liang et al., 1994) model has been used for a broad range of applications, in hydrology as well as in the fields of climate and global change. Despite the attention given to the model and its output, calibration is often not performed. To improve the calibration procedures for VIC applied at grid resolutions varying from meso-scale catchments to the 1 km 'hyper'resolution now used in several global modeling studies, the parameters of the model are studied in more detail. An earlier sensitivity analysis study on a selection of parameters of the VIC model by Demaria et al. (2007) showed that the model is insensitive, or only weakly sensitive, to many of its parameters. With improved sensitivity analysis methods and computational power, this study focuses on a broader spectrum of parameters using state-of-the-art methods: both the DELSA sensitivity analysis method (Rakovec et al., 2013) and the ABC method (Vrugt et al., 2013) will be applied in parallel to a single-cell VIC model of the Rietholzbach in Switzerland (representative of the 1 km hyperresolution), and to single- and multiple-cell VIC models of the meso-scale Thur basin in Switzerland. In the latter case, routing also plays an important role. By critically screening the parameters of the model, it is possible to define a framework for calibration of the model at multiple scales. References Demaria, E., B. Nijssen, and T. Wagener (2007), Monte Carlo sensitivity analysis of land surface parameters using the Variable Infiltration Capacity model, J. Geophys. Res., 112, D11,113. Liang, X., D. Lettenmaier, E. Wood, and S. Burges (1994), A simple hydrologically based model of land surface water and energy fluxes for general circulation models, J. Geophys. Res., 99 (D7),14,415-14,458. Rakovec, O., M. Hill, M. Clark, A. Weerts, A. Teuling, and R. Uijlenhoet (2013), A new computationally frugal method for sensitivity analysis of environmental models, Water Resour. Res., in press Vrugt, J.A. and M

  12. FT-IR/ATR univariate and multivariate calibration models for in situ monitoring of sugars in complex microalgal culture media.

    PubMed

    Girard, Jean-Michel; Deschênes, Jean-Sébastien; Tremblay, Réjean; Gagnon, Jonathan

    2013-09-01

    The objective of this work is to develop a quick and simple method for the in situ monitoring of sugars in biological cultures. A new technology based on Attenuated Total Reflectance-Fourier Transform Infrared (FT-IR/ATR) spectroscopy in combination with an external light guiding fiber probe was tested, first to build predictive models from solutions of pure sugars, and secondly to use those models to monitor the sugars in the complex culture medium of mixotrophic microalgae. Quantification results from the univariate model were correlated with the total dissolved solids content (R(2)=0.74). A vector normalized multivariate model was used to proportionally quantify the different sugars present in the complex culture medium and showed a predictive accuracy of >90% for sugars representing >20% of the total. This method offers an alternative to conventional sugar monitoring assays and could be used at-line or on-line in commercial scale production systems.

  13. A flow system for generation of concentration perturbation in two-dimensional correlation near-infrared spectroscopy: application to variable selection in multivariate calibration.

    PubMed

    Pereira, Claudete Fernandes; Pasquini, Celio

    2010-05-01

    A flow system is proposed to produce a concentration perturbation in liquid samples, aiming at the generation of two-dimensional correlation near-infrared spectra. The system presents advantages in relation to batch systems employed for the same purpose: the experiments are accomplished in a closed system; application of perturbation is rapid and easy; and the experiments can be carried out with micro-scale volumes. The perturbation system has been evaluated in the investigation and selection of relevant variables for multivariate calibration models for the determination of quality parameters of gasoline, including ethanol content, MON (motor octane number), and RON (research octane number). The main advantage of this variable selection approach is the direct association between spectral features and chemical composition, allowing easy interpretation of the regression models.

  14. Development of a multivariate calibration model for the determination of dry extract content in Brazilian commercial bee propolis extracts through UV-Vis spectroscopy

    NASA Astrophysics Data System (ADS)

    Barbeira, Paulo J. S.; Paganotti, Rosilene S. N.; Ássimos, Ariane A.

    2013-10-01

    This study had the objective of determining the dry extract content of commercial alcoholic extracts of bee propolis through Partial Least Squares (PLS) multivariate calibration and electronic spectroscopy. The PLS model provided a good prediction of dry extract content in commercial alcoholic extracts of bee propolis in the range of 2.7 to 16.8% (m/v), presenting the advantage of being less laborious and faster than the traditional gravimetric methodology. The PLS model was optimized with outlier detection tests according to ASTM E 1655-05. In this study it was possible to verify that a centrifugation stage is extremely important in order to avoid the presence of waxes, resulting in a more accurate model. Around 50% of the analyzed samples presented a dry extract content lower than the value established by Brazilian legislation; in most cases, the values found differed from those claimed on the product label.

  15. Direct estimation of dissolved organic carbon using synchronous fluorescence and independent component analysis (ICA): advantages of a multivariate calibration.

    PubMed

    De Almeida Brehm, Franciane; de Azevedo, Julio Cesar R; da Costa Pereira, Jorge; Burrows, Hugh D

    2015-11-01

    Dissolved organic carbon (DOC) is frequently used as a diagnostic parameter for the identification of environmental contamination in aqueous systems. Since this organic matter evolves and decays over time, samples collected under environmental conditions require some stabilization process until the corresponding analysis can be made, and this may affect the analysis results. This problem can be avoided by the direct determination of DOC. We report a study using in situ synchronous fluorescence spectra, with independent component analysis to retrieve the relevant major spectral contributions and their respective component contributions, for the direct determination of DOC. Fluorescence spectroscopy is a very powerful and sensitive technique for evaluating vestigial organic matter dissolved in water and is thus suited to the analytical task of directly monitoring dissolved organic matter in water, avoiding the need for the stabilization step. We also report the development of an accurate calibration model for dissolved organic carbon determinations using environmental samples of humic and fulvic acids. The method described opens the opportunity for fast, on-site DOC estimation in environmental or other field studies using a portable fluorescence spectrometer. This combines the benefits of the use of fresh samples, without the need for stabilizers, and also allows the interpretation of various additional spectral contributions based on their respective estimated properties. We show how independent component analysis may be used to describe tyrosine, tryptophan, humic acid and fulvic acid spectra and, thus, to retrieve the respective individual component contributions to the DOC.
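
    The sketch below mimics the workflow described above on simulated data: synchronous-fluorescence-like spectra are decomposed with FastICA and DOC is regressed on the recovered component contributions; the band shapes, concentrations, and DOC weights are hypothetical.

    ```python
    # ICA decomposition of fluorescence spectra followed by regression on DOC.
    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(5)
    wl = np.linspace(250, 600, 351)

    def band(center, width):                 # simple Gaussian band shape
        return np.exp(-0.5 * ((wl - center) / width) ** 2)

    # Four source spectra standing in for tyrosine-, tryptophan-, humic- and
    # fulvic-like contributions.
    sources = np.vstack([band(300, 15), band(340, 20), band(430, 40), band(480, 45)])
    conc = rng.uniform(0.1, 1.0, size=(60, 4))
    spectra = conc @ sources + rng.normal(scale=0.01, size=(60, len(wl)))
    doc = conc @ np.array([0.5, 0.8, 2.0, 1.5])   # simulated DOC values (mg/L)

    ica = FastICA(n_components=4, random_state=0)
    contrib = ica.fit_transform(spectra)          # per-sample component scores
    model = LinearRegression().fit(contrib, doc)
    print("R^2 of the DOC model on the training data:",
          round(model.score(contrib, doc), 3))
    ```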

  16. Determination of Propranolol Hydrochloride in Pharmaceutical Preparations Using Near Infrared Spectrometry with Fiber Optic Probe and Multivariate Calibration Methods

    PubMed Central

    Marques Junior, Jucelino Medeiros; Muller, Aline Lima Hermes; Foletto, Edson Luiz; da Costa, Adilson Ben; Bizzi, Cezar Augusto; Irineu Muller, Edson

    2015-01-01

    A method for the determination of propranolol hydrochloride in pharmaceutical preparations using near infrared spectrometry with a fiber optic probe (FTNIR/PROBE) combined with chemometric methods was developed. Calibration models were developed using two variable selection methods: interval partial least squares (iPLS) and synergy interval partial least squares (siPLS). Treatments based on mean-centered data and multiplicative scatter correction (MSC) were selected for model construction. A root mean square error of prediction (RMSEP) of 8.2 mg g−1 was achieved using the siPLS (s2i20PLS) algorithm with the spectra divided into 20 intervals and a combination of 2 intervals (8501 to 8801 and 5201 to 5501 cm−1). Results obtained by the proposed method were compared with those of the pharmacopoeia reference method and no significant difference was observed. Therefore, the proposed method allowed a fast, precise, and accurate determination of propranolol hydrochloride in pharmaceutical preparations. Furthermore, it is possible to carry out on-line analysis of this active principle in pharmaceutical formulations with the use of a fiber optic probe. PMID:25861516
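
    A minimal sketch of the synergy-interval idea with simulated spectra: the spectrum is split into intervals, PLS models are evaluated on every pair of intervals, and the pair with the lowest cross-validated error is retained; the interval count and PLS settings are illustrative.

    ```python
    # siPLS-style interval selection: evaluate PLS on every pair of intervals.
    import numpy as np
    from itertools import combinations
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(6)
    X = rng.normal(size=(80, 400))                 # simulated NIR spectra
    y = X[:, 120:140].sum(axis=1) + rng.normal(scale=0.3, size=80)

    intervals = np.array_split(np.arange(X.shape[1]), 20)
    best = (None, np.inf)
    for i, j in combinations(range(len(intervals)), 2):
        cols = np.concatenate([intervals[i], intervals[j]])
        rmse = -cross_val_score(PLSRegression(n_components=3), X[:, cols], y,
                                cv=5, scoring="neg_root_mean_squared_error").mean()
        if rmse < best[1]:
            best = ((i, j), rmse)

    print(f"best interval pair: {best[0]}, RMSECV = {best[1]:.3f}")
    ```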

  17. Rapid screening for metabolite overproduction in fermentor broths, using pyrolysis mass spectrometry with multivariate calibration and artificial neural networks.

    PubMed

    Goodacre, R; Trew, S; Wrigley-Jones, C; Neal, M J; Maddock, J; Ottley, T W; Porter, N; Kell, D B

    1994-11-20

    Binary mixtures of model systems consisting of the antibiotic ampicillin with either Escherichia coli or Staphylococcus aureus were subjected to pyrolysis mass spectrometry (PyMS). To deconvolute the pyrolysis mass spectra, so as to obtain quantitative information on the concentration of ampicillin in the mixtures, partial least squares regression (PLS), principal components regression (PCR), and fully interconnected feedforward artificial neural networks (ANNs) were studied. In the latter case, the weights were modified using the standard backpropagation algorithm, and the nodes used a sigmoidal squashing function. It was found that each of the methods could be used to provide calibration models which gave excellent predictions for the concentrations of ampicillin in samples on which they had not been trained. Furthermore, ANNs trained to predict the amount of ampicillin in E. coli were able to generalise so as to predict the concentration of ampicillin in a S. aureus background, illustrating the robustness of ANNs to rather substantial variations in the biological background. The PyMS of the complex mixture of ampicillin in bacteria could not be expressed simply in terms of additive combinations of the spectra describing the pure components of the mixtures and their relative concentrations. Intermolecular reactions took place in the pyrolysate, leading to a lack of superposition of the spectral components and to a dependence of the normalized mass spectrum on sample size. Samples from fermentations of a single organism in a complex production medium were also analyzed quantitatively for a drug of commercial interest. The drug could also be quantified in a variety of mutant-producing strains cultivated in the same medium. The combination of PyMS and ANNs constitutes a novel, rapid, and convenient method for exploitation in strain improvement screening programs. (c) 1994 John Wiley & Sons, Inc.
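
    The sketch below illustrates the ANN calibration step on simulated pyrolysis-like spectra, using a small feed-forward network with logistic (sigmoidal) hidden units trained by stochastic gradient descent; the spectra, concentrations, and network size are placeholders.

    ```python
    # Small backpropagation-trained network predicting drug concentration from spectra.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(7)
    n_samples, n_mz = 100, 150                    # samples x m/z intensities
    background = 0.1 * rng.random((n_samples, n_mz))
    drug_profile = rng.random(n_mz)
    conc = rng.uniform(0, 5, n_samples)           # drug concentration, arbitrary units
    spectra = background + np.outer(conc, drug_profile)

    ann = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(8,), activation="logistic",
                     solver="sgd", learning_rate_init=0.01, max_iter=5000,
                     random_state=0))
    ann.fit(spectra[:80], conc[:80])
    print("held-out R^2:", round(ann.score(spectra[80:], conc[80:]), 3))
    ```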

  18. Decoding Dynamic Brain Patterns from Evoked Responses: A Tutorial on Multivariate Pattern Analysis Applied to Time Series Neuroimaging Data.

    PubMed

    Grootswagers, Tijl; Wardle, Susan G; Carlson, Thomas A

    2017-04-01

    Multivariate pattern analysis (MVPA) or brain decoding methods have become standard practice in analyzing fMRI data. Although decoding methods have been extensively applied in brain-computer interfaces, these methods have only recently been applied to time series neuroimaging data such as MEG and EEG to address experimental questions in cognitive neuroscience. In a tutorial style review, we describe a broad set of options to inform future time series decoding studies from a cognitive neuroscience perspective. Using example MEG data, we illustrate the effects that different options in the decoding analysis pipeline can have on experimental results where the aim is to "decode" different perceptual stimuli or cognitive states over time from dynamic brain activation patterns. We show that decisions made at both preprocessing (e.g., dimensionality reduction, subsampling, trial averaging) and decoding (e.g., classifier selection, cross-validation design) stages of the analysis can significantly affect the results. In addition to standard decoding, we describe extensions to MVPA for time-varying neuroimaging data including representational similarity analysis, temporal generalization, and the interpretation of classifier weight maps. Finally, we outline important caveats in the design and interpretation of time series decoding experiments.
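
    The following sketch implements the temporal-generalization variant mentioned above on simulated sensor-by-time data: a classifier trained at each time point is tested at every other time point, producing a time-by-time accuracy matrix.

    ```python
    # Time-resolved decoding with temporal generalization on simulated MEG-like data.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(8)
    n_trials, n_sensors, n_times = 200, 32, 40
    X = rng.normal(size=(n_trials, n_sensors, n_times))
    y = rng.integers(0, 2, n_trials)
    X[y == 1, :5, 15:25] += 0.5          # inject a class difference in a time window

    tr, te = train_test_split(np.arange(n_trials), test_size=0.3, random_state=0)
    gen = np.zeros((n_times, n_times))
    for t_train in range(n_times):
        clf = LinearDiscriminantAnalysis().fit(X[tr, :, t_train], y[tr])
        for t_test in range(n_times):
            gen[t_train, t_test] = clf.score(X[te, :, t_test], y[te])

    print("peak generalization accuracy:", gen.max().round(2))
    ```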

  19. Zoom lens calibration with zoom- and focus-related intrinsic parameters applied to bundle adjustment

    NASA Astrophysics Data System (ADS)

    Zheng, Shunyi; Wang, Zheng; Huang, Rongyong

    2015-04-01

    A zoom lens is more flexible for photogrammetric measurements under diverse environments than a fixed lens. However, challenges in calibration of zoom-lens cameras preclude the wide use of zoom lenses in the field of close-range photogrammetry. Thus, a novel zoom lens calibration method is proposed in this study. In this method, instead of conducting modeling after monofocal calibrations, we summarize the empirical zoom/focus models of intrinsic parameters first and then incorporate these parameters into traditional collinearity equations to construct the fundamental mathematical model, i.e., collinearity equations with zoom- and focus-related intrinsic parameters. Similar to monofocal calibration, images taken at several combinations of zoom and focus settings are processed in a single self-calibration bundle adjustment. In the self-calibration bundle adjustment, three types of unknowns, namely, exterior orientation parameters, unknown space point coordinates, and model coefficients of the intrinsic parameters, are solved simultaneously. Experiments on three different digital cameras with zoom lenses support the feasibility of the proposed method, and their relative accuracies range from 1:4000 to 1:15,100. Furthermore, the nominal focal length written in the exchangeable image file header is found to lack reliability in experiments. Thereafter, the joint influence of zoom lens instability and zoom recording errors is further analyzed quantitatively. The analysis result is consistent with the experimental result and explains the reason why zoom lens calibration can never have the same accuracy as monofocal self-calibration.

  20. A novel calibration method for an infrared thermography system applied to heat transfer experiments

    NASA Astrophysics Data System (ADS)

    Ochs, M.; Horbach, T.; Schulz, A.; Koch, R.; Bauer, H.-J.

    2009-07-01

    In heat transfer measurements with highly non-uniform wall heat fluxes, high spatial resolution of wall temperatures is required to fully capture the complex thermal situation. Infrared thermography systems provide that spatial resolution. To meet the thermal accuracy, they are usually calibrated in situ using thermocouples embedded in the test surface, which have to cover the complete temperature range of interest. However, thermocouples which are placed in regions of high temperature and heat flux gradients often cannot be used for the calibration and the overall accuracy of the calibration decreases significantly. Therefore, in the present work a novel in situ calibration method is presented which does not require thermocouples over the complete surface temperature range. The number of free parameters of the calibration function is reduced by an optimized insensitivity of the system with respect to changes in operating conditions. Reference measurements demonstrate the advantages of the new method.

  1. Simultaneous determination of vitamin B12 and its derivatives using some of multivariate calibration 1 (MVC1) techniques

    NASA Astrophysics Data System (ADS)

    Samadi-Maybodi, Abdolraouf; Darzi, S. K. Hassani Nejad

    2008-10-01

    Resolution of binary mixtures of vitamin B12, methylcobalamin and B12 coenzyme with minimum sample pre-treatment and without analyte separation has been successfully achieved by partial least squares with one dependent variable (PLS1), orthogonal signal correction/partial least squares (OSC/PLS), principal component regression (PCR) and hybrid linear analysis (HLA). The analytical data were obtained from UV-vis spectra. The UV-vis spectra of vitamin B12, methylcobalamin and B12 coenzyme were recorded under the same spectral conditions. A central composite design was used in the ranges of 10-80 mg L(-1) for vitamin B12 and methylcobalamin and 20-130 mg L(-1) for B12 coenzyme. Model refinement and validation were performed by cross-validation. The minimum root mean square error of prediction (RMSEP) was 2.26 mg L(-1) for vitamin B12 with PLS1, 1.33 mg L(-1) for methylcobalamin with OSC/PLS and 3.24 mg L(-1) for B12 coenzyme with the HLA technique. Figures of merit such as selectivity, sensitivity, analytical sensitivity and LOD were determined for the three compounds. The procedure was successfully applied to the simultaneous determination of the three compounds in synthetic mixtures and in a pharmaceutical formulation.

  2. Multivariate calibration by near infrared spectroscopy for the determination of the vitamin E and the antioxidant properties of quinoa.

    PubMed

    Moncada, Guillermo Wells; González Martín, Ma Inmaculada; Escuredo, Olga; Fischer, Susana; Míguez, Montserrat

    2013-11-15

    Quinoa is a pseudocereal that is grown mainly in the Andes. It is a functional food supplement and an ingredient in the preparation of highly nutritious food. In this paper we evaluate the potential of near infrared spectroscopy (NIR) for the determination of vitamin E and of the antioxidant capacity of quinoa, expressed as total phenol content (TPC), radical scavenging activity by DPPH (2,2-diphenyl-1-picrylhydrazyl) and cupric reducing antioxidant capacity (CUPRAC), given as gallic acid equivalents (GAE). For recording the NIR spectra, a fiber optic remote reflectance probe was applied directly to the quinoa samples without any treatment. The regression method used was modified partial least squares (MPLS). The multiple correlation coefficients (RSQ) and the corrected standard errors of prediction (SEP(C)) were 0.841 and 1.70 mg 100 g(-1) for vitamin E, and for the antioxidants 0.947 and 0.08 mg GAE g(-1) for TPC, 0.952 and 0.23 mg GAE g(-1) for the DPPH radical, and 0.623 and 0.21 mg GAE g(-1) for CUPRAC, respectively. The prediction capacity of the models developed, measured by the ratio of performance to deviation (RPD) for vitamin E (2.51), TPC (4.33), the DPPH radical (4.55) and CUPRAC (1.55), indicated that NIRS with a fiber optic probe provides an alternative for the determination of vitamin E and the antioxidant properties of quinoa, at lower cost, with higher speed, and with results comparable to the chemical methods. © 2013 Elsevier B.V. All rights reserved.
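
    For reference, the two figures of merit quoted above can be computed as in the short sketch below; the reference and predicted values are hypothetical, and SEP(C) is taken here as the bias-corrected standard error of prediction, with RPD as the ratio of the reference standard deviation to SEP.

    ```python
    # Bias-corrected SEP and ratio of performance to deviation (RPD).
    import numpy as np

    def sep_and_rpd(y_ref, y_pred):
        bias = np.mean(y_pred - y_ref)
        sep = np.sqrt(np.sum((y_pred - y_ref - bias) ** 2) / (len(y_ref) - 1))
        rpd = np.std(y_ref, ddof=1) / sep
        return sep, rpd

    y_ref = np.array([2.1, 3.4, 4.0, 5.2, 6.3, 7.1, 8.4])   # hypothetical reference values
    y_pred = np.array([2.4, 3.1, 4.3, 5.0, 6.6, 7.4, 8.0])  # hypothetical NIR predictions
    sep, rpd = sep_and_rpd(y_ref, y_pred)
    print(f"SEP(C) = {sep:.2f}, RPD = {rpd:.2f}")
    ```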

  3. Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) with Raman Imaging Applied to Lunar Meteorites.

    PubMed

    Smith, Joseph P; Smith, Frank C; Booksh, Karl S

    2017-01-01

    Lunar meteorites provide a more random sampling of the surface of the Moon than do the returned lunar samples, and they provide valuable information to help estimate the chemical composition of the lunar crust, the lunar mantle, and the bulk Moon. As of July 2014, ∼96 lunar meteorites had been documented, and ten of these are unbrecciated mare basalts. Using Raman imaging with multivariate curve resolution-alternating least squares (MCR-ALS), we investigated portions of polished thin sections of paired, unbrecciated, mare-basalt lunar meteorites collected from the LaPaz Icefield (LAP) of Antarctica: LAP 02205 and LAP 04841. Polarized light microscopy shows that both meteorites are heterogeneous and consist of particles of varied size, shape, and chemical composition. For two distinct probed areas within each meteorite, the individual chemical species and associated chemical maps were elucidated using MCR-ALS applied to Raman hyperspectral images. For LAP 02205, spatially and spectrally resolved clinopyroxene, ilmenite, substrate-adhesive epoxy, and diamond polish were observed within the probed areas. Similarly, for LAP 04841, spatially resolved chemical images with corresponding resolved Raman spectra of clinopyroxene, troilite, a high-temperature polymorph of anorthite, substrate-adhesive epoxy, and diamond polish were generated. In both LAP 02205 and LAP 04841, substrate-adhesive epoxy and diamond polish were more readily observed within fracture/veinlet features. Spectrally diverse clinopyroxenes were resolved in LAP 04841. Factors that allow these resolved clinopyroxenes to be differentiated include crystal orientation, spatially distinct chemical zoning of the pyroxene crystals, and/or chemical and molecular composition. The minerals identified using this analytical methodology (clinopyroxene, anorthite, ilmenite, and troilite) are consistent with the results of previous studies of the two meteorites using electron microprobe
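
    The sketch below shows the core alternating-least-squares loop behind MCR-ALS on a simulated, unfolded hyperspectral matrix, with non-negativity imposed by simple clipping; real analyses additionally need sensible initial estimates (e.g., purest pixels) and convergence and rotational-ambiguity checks.

    ```python
    # MCR-ALS core loop: D (pixels x wavenumbers) is approximated by C @ S.T.
    import numpy as np

    rng = np.random.default_rng(9)
    n_pix, n_wn, n_comp = 400, 200, 3
    S_true = np.abs(rng.normal(size=(n_wn, n_comp)))     # pure spectra
    C_true = np.abs(rng.normal(size=(n_pix, n_comp)))    # concentration maps
    D = C_true @ S_true.T + rng.normal(scale=0.01, size=(n_pix, n_wn))

    C = np.abs(rng.normal(size=(n_pix, n_comp)))         # crude random initialization
    for _ in range(100):
        # Update spectra, then maps, clipping negatives after each least-squares step.
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0, None)
        C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0, None)

    lof = 100 * np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)
    print(f"lack of fit after 100 ALS iterations: {lof:.2f}%")
    ```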

  4. Calibration and uncertainty issues of a hydrological model (SWAT) applied to West Africa

    NASA Astrophysics Data System (ADS)

    Schuol, J.; Abbaspour, K. C.

    2006-09-01

    Distributed hydrological models like SWAT (Soil and Water Assessment Tool) are often highly over-parameterized, making parameter specification and parameter estimation inevitable steps in model calibration. Manual calibration is almost infeasible due to the complexity of large-scale models with many objectives. Therefore we used a multi-site, semi-automated inverse modelling routine (SUFI-2) for calibration and uncertainty analysis. Nevertheless, the question of when a model is sufficiently calibrated remains open, and requires a project-dependent definition. Due to the non-uniqueness of effective parameter sets, parameter calibration and prediction uncertainty of a model are intimately related. We address some calibration and uncertainty issues using SWAT to model a four million km2 area in West Africa, including mainly the basins of the rivers Niger, Volta and Senegal. This model is a case study in a larger project with the goal of quantifying the amount of global country-based available freshwater. Annual and monthly simulations with the "calibrated" model for West Africa show promising results with respect to the freshwater quantification but also point out the importance of evaluating the conceptual model uncertainty as well as the parameter uncertainty.

  5. Reflectance near-infrared spectroscopic method with a chemometric technique using partial least squares multivariate calibration for simultaneous determination of chondroitin, glucosamine, and ascorbic acid.

    PubMed

    El-Gindy, Alaa; Attia, Khalid Abdel-Salam; Nassar, Mohammad Wafaa; El-Abasawy, Nasr M A; Shoeib, Maisra Al-shabrawi

    2012-01-01

    A reflectance near-infrared (RNIR) spectroscopy method was developed for the simultaneous determination of chondroitin (CH), glucosamine (GO), and ascorbic acid (AS) in capsule powder. Simple sample preparation was carried out by grinding, sieving, and compressing the powder sample to improve the RNIR spectra. Partial least squares (PLS-1 and PLS-2) was successfully applied to quantify the three components in the studied mixture using the information included in the RNIR spectra in the 4240-9780 cm(-1) range. The calibration model was developed with the three drug concentrations ranging from 50 to 150% of the labeled amount. The calibration models using pure standards were evaluated by internal validation, cross-validation, and external validation using synthetic and pharmaceutical preparations. The proposed method was applied to the analysis of two pharmaceutical products. Both pharmaceutical products had the same active principles and similar excipients, but with different nominal concentration values. The results of the proposed method were compared with the results of a pharmacopoeial method for the same pharmaceutical products. No significant differences between the results were found. The standard error of prediction was 0.004 for CH, 0.003 for GO, and 0.005 for AS. The correlation coefficient was 0.9998 for CH, 0.9999 for GO, and 0.9997 for AS. The highly accurate and precise RNIR method can be used for QC of pharmaceutical products.

  6. Comparative study between univariate spectrophotometry and multivariate calibration as analytical tools for quantitation of Benazepril alone and in combination with Amlodipine.

    PubMed

    Farouk, M; Elaziz, Omar Abd; Tawakkol, Shereen M; Hemdan, A; Shehata, Mostafa A

    2014-04-05

    Four simple, accurate, reproducible, and selective methods have been developed and subsequently validated for the determination of Benazepril (BENZ) alone and in combination with Amlodipine (AML) in pharmaceutical dosage form. The first method is pH-induced difference spectrophotometry, where BENZ can be measured in the presence of AML as it shows maximum absorption at 237 nm and 241 nm in 0.1 N HCl and 0.1 N NaOH, respectively, while AML shows no wavelength shift in either solvent. The second method is the new Extended Ratio Subtraction Method (EXRSM) coupled to the Ratio Subtraction Method (RSM) for determination of both drugs in commercial dosage form. The third and fourth methods are multivariate calibration methods: Principal Component Regression (PCR) and Partial Least Squares (PLS). A detailed validation of the methods was performed following the ICH guidelines, and the standard curves were found to be linear in the range of 2-30 μg/mL for BENZ in the difference and extended ratio subtraction spectrophotometric methods, and 5-30 μg/mL for AML in the EXRSM method, with well-accepted mean correlation coefficients for each analyte. The intra-day and inter-day precision and accuracy results were well within the acceptable limits.

  7. Fast screening method for determining 2,4,6-trichloroanisole in wines using a headspace-mass spectrometry (HS-MS) system and multivariate calibration.

    PubMed

    Martí, M P; Boqué, R; Riu, M; Busto, O; Guasch, J

    2003-06-01

    The system based on coupling a headspace sampler to a mass spectrometer (HS-MS), which is considered one kind of electronic nose, is an emerging technique for ensuring and controlling quality in industry. It involves injecting the headspace of the sample into the ionization chamber of the mass spectrometer, where the analytes are fragmented. The result is a complex mass spectrum for each sample analyzed. When several samples are analyzed, the data matrix generated is processed with chemometric techniques to compare and classify the substances from their volatile composition, in other words, to compare and classify their flavor. So far, information from electronic nose applications has mainly been qualitative. In this paper we present a quantitative study that uses multivariate calibration. We analyzed several white wines using HS-MS to determine 2,4,6-trichloroanisole (TCA). This is an off-flavor that is a serious problem for the wine industry. The method is simple because it does not require sample preparation, only the addition of sodium chloride being necessary for sample conditioning. It also provides a fast screening (10 min/sample) of the quantity of TCA in wines at ultratrace (sub-μg L(-1)) levels.

  8. A non-linearity criterion applied to the calibration curve method involved with ion-selective electrodes.

    PubMed

    Michałowski, Tadeusz; Pilarski, Bogusław; Michałowska-Kaczmarczyk, Anna M; Kukwa, Agata

    2014-06-01

    Some rational functions of the Padé type, y=y(x; n,m), were applied to the calibration curve method (CCM) and compared with a parabolic function. The functions were tested on the results obtained from the calibration of ion-selective electrodes: NH4-ISE, Ca-ISE, and F-ISE. The validity of the functions y=y(x; 2,1), y=y(x; 1,1), and y=y(x; 2,0) (parabolic) was compared. A uniform, integral criterion of nonlinearity of calibration curves is suggested. This uniformity is based on normalization of the approximating functions within the frame of a unit area. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. Practical application of electromyogram radiotelemetry: the suitability of applying laboratory-acquired calibration data to field data

    SciTech Connect

    Geist, David R. ); Brown, Richard S.; Lepla, Ken; Chandler, James P.

    2001-12-01

    One of the practical problems with quantifying the amount of energy used by fish implanted with electromyogram (EMG) radio transmitters is that the signals emitted by the transmitter provide only a relative index of activity unless they are calibrated to the swimming speed of the fish. Ideally calibration would be conducted for each fish before it is released, but this is often not possible and calibration curves derived from more than one fish are used to interpret EMG signals from individuals which have not been calibrated. We tested the validity of this approach by comparing EMG data within three groups of three wild juvenile white sturgeon Acipenser transmontanus implanted with the same EMG radio transmitter. We also tested an additional six fish which were implanted with separate EMG transmitters. Within each group, a single EMG radio transmitter usually did not produce similar results in different fish. Grouping EMG signals among fish produced less accurate results than having individual EMG-swim speed relationships for each fish. It is unknown whether these differences were a result of different swimming performances among individual fish or inconsistencies in the placement or function of the EMG transmitters. In either case, our results suggest that caution should be used when applying calibration curves from one group of fish to another group of uncalibrated fish.

  10. Calibration methodology for proportional counters applied to yield measurements of a neutron burst

    NASA Astrophysics Data System (ADS)

    Tarifeño-Saldivia, Ariel; Mayer, Roberto E.; Pavez, Cristian; Soto, Leopoldo

    2015-03-01

    This work introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. The methodology is based on the calibration of the counter in pulse mode, and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from detection of the burst of neutrons. An improvement of more than one order of magnitude in the accuracy of a paraffin wax moderated 3He-filled tube is obtained by using this methodology with respect to previous calibration methods.
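
    A hedged sketch of the counting principle: once the mean charge per detected neutron is known from the pulse-mode calibration, the number of detections in a burst follows from the accumulated charge. The numbers below and the simple compound-Poisson uncertainty estimate are illustrative assumptions, not values from the paper.

    ```python
    # Estimate the number of detected events in a burst from accumulated charge.
    import numpy as np

    q_single = 1.8e-12   # assumed mean charge per detected neutron [C], from calibration
    sigma_q = 0.4e-12    # assumed spread of the single-event charge distribution [C]
    Q_burst = 5.6e-9     # assumed accumulated charge from the burst [C]

    n_det = Q_burst / q_single
    # Under a compound-Poisson model, Var(N_hat) ~ N * (1 + (sigma_q / q_single)^2).
    u_n = np.sqrt(n_det * (1 + (sigma_q / q_single) ** 2))
    print(f"estimated detected events: {n_det:.0f} +/- {u_n:.0f}")
    ```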

  11. Multivariate or Multivariable Regression?

    PubMed Central

    Goodman, Melody

    2013-01-01

    The terms multivariate and multivariable are often used interchangeably in the public health literature. However, these terms actually represent 2 very distinct types of analyses. We define the 2 types of analysis and assess the prevalence of use of the statistical term multivariate in a 1-year span of articles published in the American Journal of Public Health. Our goal is to make a clear distinction and to identify the nuances that make these types of analyses so distinct from one another. PMID:23153131

  12. Angles-centroids fitting calibration and the centroid algorithm applied to reverse Hartmann test

    NASA Astrophysics Data System (ADS)

    Zhao, Zhu; Hui, Mei; Xia, Zhengzheng; Dong, Liquan; Liu, Ming; Liu, Xiaohua; Kong, Lingqin; Zhao, Yuejin

    2017-02-01

    In this paper, we develop an angles-centroids fitting (ACF) system and a centroid algorithm to calibrate the reverse Hartmann test (RHT) with sufficient precision. The essence of the ACF calibration is to establish the relationship between ray angles and detector coordinates. Centroid computation is used to find correspondences between the rays of the datum marks and detector pixels. Here, the point spread function of the RHT is classified as a circle of confusion (CoC), and the fitting of a CoC spot with a 2D Gaussian profile to identify the centroid forms the basis of the centroid algorithm. Theoretical and experimental results of the centroid computation demonstrate that the Gaussian fitting method has a smaller centroid shift, or that the shift grows at a slower pace, when the quality of the image is reduced. In the ACF tests, the optical instrument alignments reach an overall accuracy of 0.1 pixel with the application of a laser spot centroid tracking program. With the crystal located at different positions, the feasibility and accuracy of the ACF calibration are further validated to a root-mean-square error of the calibration differences of 10(-6) to 10(-4) rad.
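
    The centroid step described above can be sketched as fitting a 2D Gaussian profile to a simulated circle-of-confusion spot and taking the fitted center as the centroid; the image size, noise level, and starting values are illustrative.

    ```python
    # Centroid of a spot by 2D Gaussian fitting.
    import numpy as np
    from scipy.optimize import curve_fit

    def gauss2d(coords, amp, x0, y0, sigma, offset):
        x, y = coords
        return (amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
                + offset).ravel()

    yy, xx = np.mgrid[0:32, 0:32].astype(float)
    true_params = (1.0, 17.3, 14.8, 3.0, 0.05)        # amp, x0, y0, sigma, offset
    rng = np.random.default_rng(10)
    img = gauss2d((xx, yy), *true_params).reshape(32, 32) \
          + rng.normal(scale=0.02, size=(32, 32))

    p0 = (img.max(), 16, 16, 2.0, 0.0)                # rough starting values
    popt, _ = curve_fit(gauss2d, (xx, yy), img.ravel(), p0=p0)
    print(f"fitted centroid: ({popt[1]:.2f}, {popt[2]:.2f})")
    ```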

  13. Practical color calibration for dermoscopy, applied to a digital epiluminescence microscope.

    PubMed

    Grana, C; Pellacani, G; Seidenari, S

    2005-11-01

    The assessment of colors is essential for melanoma (MM) diagnosis, both for pattern analysis on dermoscopic images and when using semiquantitative methods. Our aim was to provide a simple, precise characterization and reproducible calibration of the color response of dermoscopic instruments. Three processes were used to correct the non-uniform illumination pattern of the instrument, to easily estimate the camera gamma settings, and to describe the color space conversion matrices required to produce standard images in any color space. A specific color space was also developed to optimize the representation of dermatoscopic colors. The calibration technique was tested both on synthetic reference surfaces and on real images by comparing the difference between the image colors obtained with two different instruments. The differences between the images acquired by means of the two instruments, calculated on the reference patterns after calibration, were up to 10 times lower than before, while the comparison of histograms referring to real images showed an improvement of about seven times on average. A complete workflow for dermatologic image calibration, which allows the user to continue using his own software and algorithms, but with a much higher informative content, is presented. The technique is simple and may improve cooperation between different research centers, in teleconsulting contexts or for result comparisons.
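
    The sketch below illustrates one plausible reading of such a calibration chain: invert an assumed camera gamma and then fit a 3x3 color-space conversion matrix by least squares from measured reference-patch values to their standard values. The gamma value and patch data are placeholders, and the published method's specifics may differ.

    ```python
    # Gamma linearization followed by a least-squares 3x3 color correction matrix.
    import numpy as np

    gamma = 2.2                                   # assumed camera gamma estimate
    measured = np.array([[0.80, 0.30, 0.25],      # device RGB of reference patches
                         [0.35, 0.70, 0.30],
                         [0.25, 0.30, 0.75],
                         [0.60, 0.60, 0.60]])
    reference = np.array([[0.75, 0.25, 0.20],     # corresponding standard values
                          [0.30, 0.72, 0.28],
                          [0.22, 0.28, 0.78],
                          [0.58, 0.58, 0.58]])

    linear = measured ** gamma                    # undo the gamma nonlinearity
    M, *_ = np.linalg.lstsq(linear, reference, rcond=None)   # 3x3 conversion matrix

    def calibrate(rgb):
        return np.clip((rgb ** gamma) @ M, 0.0, 1.0)

    print(np.round(calibrate(measured), 3))
    ```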

  14. Augmented classical least squares multivariate spectral analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2004-02-03

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
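
    The following sketch illustrates the general idea behind augmenting a CLS model with residual-derived spectral shapes so that unmodeled variation is spanned at prediction time; it is a simplified illustration on simulated spectra, not the patented ACLS/PACLS algorithms in full.

    ```python
    # CLS calibration augmented with the leading principal component of its residuals.
    import numpy as np

    rng = np.random.default_rng(11)
    n_cal, n_wn = 40, 120
    K_true = np.abs(rng.normal(size=(2, n_wn)))          # two pure-component spectra
    C = rng.uniform(0, 1, size=(n_cal, 2))               # known concentrations
    interferent = np.abs(rng.normal(size=n_wn))          # unmodeled constituent
    A = C @ K_true + np.outer(rng.uniform(0, 0.3, n_cal), interferent)

    # Classical least squares estimate of the pure spectra: A ~ C @ K.
    K_hat = np.linalg.lstsq(C, A, rcond=None)[0]

    # Augment K with the leading spectral shape of the CLS residuals.
    resid = A - C @ K_hat
    _, _, Vt = np.linalg.svd(resid, full_matrices=False)
    K_aug = np.vstack([K_hat, Vt[:1]])

    # Prediction: project an unknown spectrum onto the augmented spectral basis and
    # keep only the entries corresponding to the real analytes.
    a_unknown = 0.4 * K_true[0] + 0.7 * K_true[1] + 0.2 * interferent
    c_hat = np.linalg.lstsq(K_aug.T, a_unknown, rcond=None)[0][:2]
    print(np.round(c_hat, 3))      # should be close to (0.4, 0.7)
    ```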

  15. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-07-26

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  16. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-01-11

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  17. Calibrated histochemistry applied to oxygen supply and demand in hypertrophied rat myocardium.

    PubMed

    Des Tombe, A L; Van Beek-Harmsen, B J; Lee-De Groot, M B E; Van Der Laarse, W J

    2002-09-01

    Oxygen supply and demand of individual cardiomyocytes during the development of myocardial hypertrophy is studied using calibrated histochemical methods. An oxygen diffusion model is used to calculate the critical extracellular oxygen tension (PO(2,crit)) required by cardiomyocytes to prevent hypoxia during hypertrophic growth, and determinants of PO(2,crit) are estimated using calibrated histochemical methods for succinate dehydrogenase activity, cardiomyocyte cross-sectional area, and myoglobin concentration. The model calculation demonstrates that it is essential to calibrate the histochemical methods, so that absolute values for the relevant parameters are obtained. The succinate dehydrogenase activity, which is proportional to the maximum rate of oxygen consumption, and the myoglobin concentration hardly change while the cardiomyocytes grow. The cross-sectional area of the cardiomyocytes, which increases up to threefold in the right ventricular wall due to pulmonary hypertension in monocrotaline-treated rats, is the most important determinant of PO(2,crit) in this model of myocardial hypertrophy. The relationship between oxygen supply and demand at the level of the cardiomyocyte can be investigated using paired determinations of spatially integrated succinate dehydrogenase activity and capillary density. Hypoxia-inducible factor 1alpha can be demonstrated by immunohistochemistry in cardiomyocytes with high PO(2,crit) and increased spatially integrated succinate dehydrogenase activity, indicating that limited oxygen supply affects gene expression in these cells. We conclude that a mismatch of oxygen supply and demand may develop during hypertrophic growth, which can play a role in the transition from myocardial hypertrophy to heart failure. Copyright 2002 Wiley-Liss, Inc.

  18. A hybrid clustering approach for multivariate time series - A case study applied to failure analysis in a gas turbine.

    PubMed

    Fontes, Cristiano Hora; Budman, Hector

    2017-09-16

    A clustering problem involving multivariate time series (MTS) requires the selection of similarity metrics. This paper shows the limitations of the PCA similarity factor (SPCA) as a single metric in nonlinear problems where there are differences in magnitude of the same process variables due to expected changes in operation conditions. A novel method for clustering MTS based on a combination between SPCA and the average-based Euclidean distance (AED) within a fuzzy clustering approach is proposed. Case studies involving either simulated or real industrial data collected from a large scale gas turbine are used to illustrate that the hybrid approach enhances the ability to recognize normal and fault operating patterns. This paper also proposes an oversampling procedure to create synthetic multivariate time series that can be useful in commonly occurring situations involving unbalanced data sets. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
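
    As a rough illustration of the kind of hybrid metric involved, the sketch below combines Krzanowski's PCA similarity factor with an average-based Euclidean distance for two multivariate time series; the equal weighting and the omission of the fuzzy clustering step are assumptions, not the authors' exact formulation.

        import numpy as np

        def pca_loadings(X, k):
            """Orthonormal loadings of the first k principal components (columns)."""
            Xc = X - X.mean(axis=0)
            _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
            return Vt[:k].T                      # shape (n_vars, k)

        def spca(X1, X2, k=2):
            """Krzanowski PCA similarity factor between two MTS (1 = identical subspaces)."""
            L1, L2 = pca_loadings(X1, k), pca_loadings(X2, k)
            return np.trace(L1.T @ L2 @ L2.T @ L1) / k

        def aed(X1, X2):
            """Average-based Euclidean distance between the variable means of two MTS."""
            return np.linalg.norm(X1.mean(axis=0) - X2.mean(axis=0))

        def hybrid_distance(X1, X2, k=2, w=0.5):
            """Mix of (1 - SPCA) and AED; w is an assumed weight, AED left unnormalized."""
            return w * (1.0 - spca(X1, X2, k)) + (1 - w) * aed(X1, X2)

        # toy usage: two 500-sample, 4-variable time series with different magnitudes
        rng = np.random.default_rng(1)
        X1 = rng.standard_normal((500, 4))
        X2 = X1 @ np.diag([1.0, 2.0, 1.0, 0.5]) + 0.3
        print(spca(X1, X2), aed(X1, X2), hybrid_distance(X1, X2))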

  19. Determination of tartrazine in beverage samples by stopped-flow analysis and three-way multivariate calibration of non-linear kinetic-spectrophotometric data.

    PubMed

    Schenone, Agustina V; Culzoni, María J; Marsili, Nilda R; Goicoechea, Héctor C

    2013-06-01

    The performance of MCR-ALS was studied in the modeling of non-linear kinetic-spectrophotometric data acquired by a stopped-flow system for the quantitation of tartrazine in the presence of brilliant blue and sunset yellow FCF as possible interferents. In the present work, MCR-ALS and U-PCA/RBL were firstly applied to remove the contribution of unexpected components not included in the calibration set. Secondly, a polynomial function was used to model the non-linear data obtained by the implementation of the algorithms. MCR-ALS was the only strategy that allowed the determination of tartrazine in test samples accurately. Therefore, it was applied for the analysis of tartrazine in beverage samples with minimum sample preparation and short analysis time. The proposed method was validated by comparison with a chromatographic procedure published in the literature. Mean recovery values between 98% and 100% and relative errors of prediction values between 4% and 9% were indicative of the good performance of the method. Copyright © 2012 Elsevier Ltd. All rights reserved.
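
    For orientation, a bare-bones MCR-ALS loop is sketched below, with non-negativity imposed by simple clipping; a real application such as the one described would use a dedicated implementation with properly constrained least squares.

        import numpy as np

        def mcr_als(D, S0, n_iter=50):
            """Bare-bones MCR-ALS: D (runs x wavelengths) ~ C @ S.T.

            S0 is an initial estimate of the pure spectra (n_wavelengths x n_components)."""
            S = S0.copy()
            for _ in range(n_iter):
                C, *_ = np.linalg.lstsq(S, D.T, rcond=None)   # concentration profiles
                C = np.clip(C.T, 0, None)
                S_t, *_ = np.linalg.lstsq(C, D, rcond=None)   # pure spectra
                S = np.clip(S_t.T, 0, None)
            return C, S

        # toy usage: 3-component kinetic-spectral data, 40 time points x 120 wavelengths
        rng = np.random.default_rng(2)
        C_true = np.abs(rng.standard_normal((40, 3)))
        S_true = np.abs(rng.standard_normal((120, 3)))
        D = C_true @ S_true.T + 0.01 * rng.standard_normal((40, 120))
        C_hat, S_hat = mcr_als(D, S_true + 0.1 * rng.random((120, 3)))
        print(C_hat.shape, S_hat.shape)   # (40, 3) (120, 3)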

  20. Dead-blow hammer design applied to a calibration target mechanism to dampen excessive rebound

    NASA Technical Reports Server (NTRS)

    Lim, Brian Y.

    1991-01-01

    An existing rotary electromagnetic driver was specified to deploy and restow a blackbody calibration target inside a spacecraft infrared science instrument. However, this target was much more massive than in any previous application of the inherited design. The target experienced unacceptable bounce when reaching its stops. Without any design modification, the momentum generated by the driver caused the target to bounce back to its starting position. Initially, elastomeric dampers were used between the driver and the target. However, this design could not prevent the bounce, and it compromised the positional accuracy of the calibration target. A design that successfully met all the requirements incorporated, in the back of the target, a sealed pocket 85 percent full of 0.75 mm diameter stainless steel balls to provide the effect of a dead-blow hammer. The energy dissipation resulting from the collisions of the balls in the pocket successfully dampened the excess momentum generated during target deployment. The disastrous effects of new requirements on a design with a successful flight history, the modifications that were necessary to make the device work, and the tests performed to verify its functionality are described.

  1. Improved quantification of important beer quality parameters based on nonlinear calibration methods applied to FT-MIR spectra.

    PubMed

    Cernuda, Carlos; Lughofer, Edwin; Klein, Helmut; Forster, Clemens; Pawliczek, Marcin; Brandstetter, Markus

    2017-01-01

    During the production of beer, it is of utmost importance to guarantee a high consistency of beer quality. For instance, bitterness is an essential quality parameter which has to be controlled within specifications at the beginning of the production process in the unfermented beer (wort) as well as in final products such as beer and beer mix beverages. Nowadays, analytical techniques for quality control in beer production are mainly based on manual supervision, i.e., samples are taken from the process and analyzed in the laboratory. This typically requires significant effort from laboratory technicians for only a small fraction of samples to be analyzed, which leads to significant costs for breweries and beverage companies. Fourier transform mid-infrared (FT-MIR) spectroscopy was used in combination with nonlinear multivariate calibration techniques to overcome (i) the time-consuming off-line analyses in beer production and (ii) known limitations of standard linear chemometric methods, such as partial least squares (PLS), for important quality parameters such as bitterness, citric acid, total acids, free amino nitrogen, final attenuation, or foam stability (Speers et al., J I Brewing. 2003;109(3):229-235; Zhang et al., J I Brewing. 2012;118(4):361-367). The calibration models are established with enhanced nonlinear techniques based (i) on a new piece-wise linear version of PLS that employs fuzzy rules for locally partitioning the latent variable space and (ii) on extensions of support vector regression variants (ε-PLSSVR and ν-PLSSVR), to overcome high computation times in high-dimensional problems and time-intensive, inappropriate settings of the kernel parameters. Furthermore, we introduce a new model selection scheme based on bagged ensembles in order to improve robustness and thus the predictive quality of the final models. The approaches are tested on real-world calibration data sets for wort and beer mix beverages, and successfully compared to
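
    The pairing of latent-variable compression with ε- and ν-type support vector regression can be illustrated generically with scikit-learn, as below; this is not the authors' fuzzy piece-wise PLS or their PLSSVR variants, and the spectra and quality parameter are synthetic stand-ins.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.metrics import mean_squared_error
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVR, NuSVR

        # synthetic stand-in for FT-MIR spectra (X) and one quality parameter, e.g. bitterness (y)
        rng = np.random.default_rng(3)
        X = rng.standard_normal((150, 800))
        y = X[:, :5].sum(axis=1) + 0.2 * np.tanh(X[:, 5]) + 0.05 * rng.standard_normal(150)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        # step 1: compress the spectra to a few PLS latent variables
        pls = PLSRegression(n_components=8).fit(X_tr, y_tr)
        T_tr, T_te = pls.transform(X_tr), pls.transform(X_te)

        # step 2: nonlinear regression on the latent variables (epsilon- and nu-SVR variants)
        for name, reg in [("eps-SVR", SVR(C=10.0, epsilon=0.01)),
                          ("nu-SVR", NuSVR(C=10.0, nu=0.5))]:
            reg.fit(T_tr, y_tr)
            rmsep = np.sqrt(mean_squared_error(y_te, reg.predict(T_te)))
            print(name, round(rmsep, 3))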

  2. Multivariate temporal pattern analysis applied to the study of rat behavior in the elevated plus maze: methodological and conceptual highlights.

    PubMed

    Casarrubea, M; Magnusson, M S; Roy, V; Arabo, A; Sorbera, F; Santangelo, A; Faulisi, F; Crescimanno, G

    2014-08-30

    The aim of this article is to illustrate the application of a multivariate approach known as t-pattern analysis to the study of rat behavior in the elevated plus maze. By means of this multivariate approach, significant relationships among behavioral events over time can be described. Both quantitative and t-pattern analyses were used to analyze data obtained from fifteen male Wistar rats following a trial 1-trial 2 protocol. In trial 2, in comparison with the initial exposure, mean occurrences of behavioral elements performed in protected zones of the maze showed a significant increase, counterbalanced by a significant decrease in mean occurrences of behavioral elements in unprotected zones. Multivariate t-pattern analysis revealed the presence of 134 t-patterns of different composition in trial 1. In trial 2, the temporal structure of behavior became simpler, with only 32 different t-patterns present. Behavioral strings and stripes (i.e., graphical representations of each t-pattern onset) are presented for all t-patterns in both trial 1 and trial 2. Finally, percent distributions in the three zones of the maze show a clear-cut increase of t-patterns in the closed arms and a significant reduction in the remaining zones. Results show that previous experience deeply modifies the temporal structure of rat behavior in the elevated plus maze. In addition, by highlighting several conceptual, methodological and illustrative aspects of the use of t-pattern analysis, this article could provide a useful background for employing such a refined approach in the study of rat behavior in the elevated plus maze.

  3. Sequential injection kinetic spectrophotometric determination of quaternary mixtures of carbamate pesticides in water and fruit samples using artificial neural networks for multivariate calibration

    NASA Astrophysics Data System (ADS)

    Chu, Ning; Fan, Shihua

    2009-12-01

    A new analytical method was developed for the simultaneous kinetic spectrophotometric determination of a quaternary carbamate pesticide mixture consisting of carbofuran, propoxur, metolcarb and fenobucarb using sequential injection analysis (SIA). The procedure was based on the different kinetic behaviour of the analytes reacting with the reagent in the flow system in non-stopped-flow mode, in which their hydrolysis products couple with diazotized p-nitroaniline in an alkaline medium to form the corresponding colored complexes. The absorbance data from the SIA peak-time profile were recorded at 510 nm and resolved by back-propagation artificial neural network (BP-ANN) algorithms for multivariate quantitative analysis. The experimental variables and main network parameters were optimized, and each of the pesticides could be determined in the concentration range of 0.5-10.0 μg mL(-1) at a sampling frequency of 18 h(-1). The proposed method was compared to other spectrophotometric methods for the simultaneous determination of mixtures of carbamate pesticides, proved to be adequately reliable, and was successfully applied to the simultaneous determination of the four pesticide residues in water and fruit samples, giving satisfactory results in recovery studies (84.7-116.0%).
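
    A generic stand-in for this kind of BP-ANN calibration can be sketched with scikit-learn's multilayer perceptron, as below; the kinetic profiles, concentrations and network size are illustrative assumptions.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # X: absorbance values sampled along each SIA peak-time profile at 510 nm
        # Y: concentrations of carbofuran, propoxur, metolcarb and fenobucarb (ug/mL)
        rng = np.random.default_rng(4)
        n_samples, n_times = 60, 80
        Y = rng.uniform(0.5, 10.0, (n_samples, 4))
        kinetics = np.exp(-np.outer(np.linspace(0, 3, n_times), [0.4, 0.9, 1.6, 2.5]))
        X = ((1 - kinetics) @ Y.T).T                       # pseudo first-order colour development
        X += 0.01 * rng.standard_normal((n_samples, n_times))

        model = make_pipeline(
            StandardScaler(),
            MLPRegressor(hidden_layer_sizes=(12,), max_iter=5000, random_state=0),
        )
        model.fit(X[:45], Y[:45])
        print(np.round(model.predict(X[45:48]), 2))        # predicted mixture concentrations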

  4. Applying Multivariate Clustering Techniques to Health Data: The 4 Types of Healthcare Utilization in the Paris Metropolitan Area

    PubMed Central

    Lefèvre, Thomas; Rondet, Claire; Parizot, Isabelle; Chauvin, Pierre

    2014-01-01

    Background Cost containment policies and the need to satisfy patients’ health needs and care expectations provide major challenges to healthcare systems. Identification of homogeneous groups in terms of healthcare utilisation could lead to a better understanding of how to adjust healthcare provision to society and patient needs. Methods This study used data from the third wave of the SIRS cohort study, a representative, population-based, socio-epidemiological study set up in 2005 in the Paris metropolitan area, France. The data were analysed using a cross-sectional design. In 2010, 3000 individuals were interviewed in their homes. Non-conventional multivariate clustering techniques were used to determine homogeneous user groups in data. Multinomial models assessed a wide range of potential associations between user characteristics and their pattern of healthcare utilisation. Results We identified four distinct patterns of healthcare use. Patterns of consumption and the socio-demographic characteristics of users differed qualitatively and quantitatively between these four profiles. Extensive and intensive use by older, wealthier and unhealthier people contrasted with narrow and parsimonious use by younger, socially deprived people and immigrants. Rare, intermittent use by young healthy men contrasted with regular targeted use by healthy and wealthy women. Conclusion The use of an original technique of massive multivariate analysis allowed us to characterise different types of healthcare users, both in terms of resource utilisation and socio-demographic variables. This method would merit replication in different populations and healthcare systems. PMID:25506916

  5. Applying multivariate clustering techniques to health data: the 4 types of healthcare utilization in the Paris metropolitan area.

    PubMed

    Lefèvre, Thomas; Rondet, Claire; Parizot, Isabelle; Chauvin, Pierre

    2014-01-01

    Cost containment policies and the need to satisfy patients' health needs and care expectations provide major challenges to healthcare systems. Identification of homogeneous groups in terms of healthcare utilisation could lead to a better understanding of how to adjust healthcare provision to society and patient needs. This study used data from the third wave of the SIRS cohort study, a representative, population-based, socio-epidemiological study set up in 2005 in the Paris metropolitan area, France. The data were analysed using a cross-sectional design. In 2010, 3000 individuals were interviewed in their homes. Non-conventional multivariate clustering techniques were used to determine homogeneous user groups in data. Multinomial models assessed a wide range of potential associations between user characteristics and their pattern of healthcare utilisation. We identified four distinct patterns of healthcare use. Patterns of consumption and the socio-demographic characteristics of users differed qualitatively and quantitatively between these four profiles. Extensive and intensive use by older, wealthier and unhealthier people contrasted with narrow and parsimonious use by younger, socially deprived people and immigrants. Rare, intermittent use by young healthy men contrasted with regular targeted use by healthy and wealthy women. The use of an original technique of massive multivariate analysis allowed us to characterise different types of healthcare users, both in terms of resource utilisation and socio-demographic variables. This method would merit replication in different populations and healthcare systems.

  6. Comparative study for determination of some polycyclic aromatic hydrocarbons ‘PAHs' by a new spectrophotometric method and multivariate calibration coupled with dispersive liquid-liquid extraction

    NASA Astrophysics Data System (ADS)

    Abdel-Aziz, Omar; El Kosasy, A. M.; El-Sayed Okeil, S. M.

    2014-12-01

    A modified dispersive liquid-liquid extraction (DLLE) procedure coupled with spectrophotometric techniques was adopted for the simultaneous determination of naphthalene, anthracene, benzo(a)pyrene, alpha-naphthol and beta-naphthol in water samples. Two different methods were used: the partial least-squares (PLS) method and a new derivative ratio method, namely extended derivative ratio (EDR). A PLS-2 model was established for simultaneous determination of the studied pollutants in methanol, using twenty mixtures as the calibration set and five mixtures as the validation set. Also, a novel EDR method was developed in methanol for determination of the studied pollutants, where each component in the mixture of the five PAHs was determined by using a mixture of the other four components as divisor. The chemometric and EDR methods could also be adopted for determination of the studied PAHs in water samples after transferring them from the aqueous medium to the organic one by means of the dispersive liquid-liquid extraction technique, where different parameters were investigated using a full factorial design. Both methods were compared and the proposed method was validated according to ICH guidelines and successfully applied to determine these PAHs simultaneously in spiked water samples, where satisfactory results were obtained. All the results obtained agreed with those of published methods, with no significant difference observed.
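
    A PLS-2 model of this type (one model predicting all five analytes simultaneously) can be sketched with scikit-learn as follows; the spectra are synthetic and only the calibration/validation split sizes mirror the abstract.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        # 20 calibration + 5 validation mixtures of the five analytes, spectra of 200 points
        rng = np.random.default_rng(5)
        C_cal, C_val = rng.uniform(0, 1, (20, 5)), rng.uniform(0, 1, (5, 5))
        pure = np.abs(rng.standard_normal((5, 200)))           # pseudo pure-component spectra
        make_spec = lambda C: C @ pure + 0.005 * rng.standard_normal((len(C), 200))
        X_cal, X_val = make_spec(C_cal), make_spec(C_val)

        # PLS-2: one model, five response columns predicted simultaneously
        pls2 = PLSRegression(n_components=6).fit(X_cal, C_cal)
        rmsep = np.sqrt(((pls2.predict(X_val) - C_val) ** 2).mean(axis=0))
        print(np.round(rmsep, 4))                              # one RMSEP per analyte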

  7. Comparative study for determination of some polycyclic aromatic hydrocarbons 'PAHs' by a new spectrophotometric method and multivariate calibration coupled with dispersive liquid-liquid extraction.

    PubMed

    Abdel-Aziz, Omar; El Kosasy, A M; El-Sayed Okeil, S M

    2014-12-10

    A modified dispersive liquid-liquid extraction (DLLE) procedure coupled with spectrophotometric techniques was adopted for the simultaneous determination of naphthalene, anthracene, benzo(a)pyrene, alpha-naphthol and beta-naphthol in water samples. Two different methods were used: the partial least-squares (PLS) method and a new derivative ratio method, namely extended derivative ratio (EDR). A PLS-2 model was established for simultaneous determination of the studied pollutants in methanol, using twenty mixtures as the calibration set and five mixtures as the validation set. Also, a novel EDR method was developed in methanol for determination of the studied pollutants, where each component in the mixture of the five PAHs was determined by using a mixture of the other four components as divisor. The chemometric and EDR methods could also be adopted for determination of the studied PAHs in water samples after transferring them from the aqueous medium to the organic one by means of the dispersive liquid-liquid extraction technique, where different parameters were investigated using a full factorial design. Both methods were compared and the proposed method was validated according to ICH guidelines and successfully applied to determine these PAHs simultaneously in spiked water samples, where satisfactory results were obtained. All the results obtained agreed with those of published methods, with no significant difference observed.

  8. Near-infrared spectroscopy quantitative determination of Pefloxacin mesylate concentration in pharmaceuticals by using partial least squares and principal component regression multivariate calibration

    NASA Astrophysics Data System (ADS)

    Xie, Yunfei; Song, Yan; Zhang, Yong; Zhao, Bing

    2010-05-01

    Pefloxacin mesylate, a broad-spectrum antibacterial fluoroquinolone, has been widely used in clinical practice. Therefore, it is very important to determine the concentration of Pefloxacin mesylate. In this research, near-infrared spectroscopy (NIRS) was applied to the quantitative analysis of 108 injection samples, which were randomly divided into a calibration set of 89 samples and a prediction set of 19 samples. In order to obtain satisfactory results, partial least squares (PLS) regression and principal component regression (PCR) were used to establish quantitative models. The process of establishing the models, the model parameters, and the prediction results are discussed in detail. For the PLS regression, the values of the coefficient of determination (R2) and the root mean square error of cross-validation (RMSECV) are 0.9263 and 0.00119, respectively. For comparison, the corresponding values obtained with the PCR method are 0.9685 and 0.00108. The standard errors of prediction (SEP) of the PLS and PCR models are 0.001480 and 0.001140, respectively. The results for the prediction set suggest that these two quantitative analysis models have excellent generalization ability and prediction precision. However, for these PFLX injection samples, the PCR quantitative analysis model achieved more accurate results than the PLS model. The experimental results showed that NIRS together with the PCR method provides rapid and accurate quantitative analysis of PFLX injection samples. Moreover, this study supplies technical support for the further analysis of other injection samples in pharmaceuticals.
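
    The PCR/PLS comparison can be reproduced in outline with scikit-learn, treating PCR as a PCA-plus-linear-regression pipeline; the spectra, concentrations and number of components below are illustrative, not the paper's data.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        # synthetic stand-in for NIR spectra of injection samples and their concentrations
        rng = np.random.default_rng(6)
        X = rng.standard_normal((108, 500))
        y = X[:, 10:14].mean(axis=1) + 0.02 * rng.standard_normal(108)

        models = {
            "PLS": PLSRegression(n_components=6),
            "PCR": make_pipeline(PCA(n_components=6), LinearRegression()),
        }
        for name, model in models.items():
            mse = -cross_val_score(model, X, y, cv=10,
                                   scoring="neg_mean_squared_error").mean()
            print(name, "RMSECV =", round(float(np.sqrt(mse)), 4))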

  9. EyeSys corneal topography measurement applied to calibrated ellipsoidal convex surfaces.

    PubMed Central

    Douthwaite, W A

    1995-01-01

    AIMS/BACKGROUND--This study was carried out to assess the accuracy of the EyeSys videokeratoscope by using convex ellipsoidal surfaces of known form. METHODS--PMMA convex ellipsoidal buttons were calibrated using Form Talysurf analysis which allowed subsequent calculation of the vertex radius and p value of the surface. The EyeSys videokeratoscope was used to examine the same ellipsoids. The tabular data provided by the instrument software were used to plot a graph of r2 versus y2 where r is the measured radius at y, the distance from the corneal point being measured to the surface vertex. The intercept on the ordinate of this graph gives the vertex radius and the slope the p value. The results arising from the Talysurf and the EyeSys techniques were compared. RESULTS--The EyeSys videokeratoscope gave readings for both vertex radius and p value that were higher than those of the Talysurf analysis. The vertex radius was around 0.1 mm greater. The p value results were similar by the two methods for p values around unity but the EyeSys results were higher and the discrepancy increased as the p value approached that of a paraboloid. CONCLUSIONS--Although the videokeratoscope may be useful in comparative studies of the cornea, there must be some doubt about the absolute values displayed. The disagreement is sufficiently large to suggest that the instrument may not be accurate enough for contact lens fitting purposes. PMID:7488595

  10. Light calibration and quality assessment methods for Reflectance Transformation Imaging applied to artworks' analysis

    NASA Astrophysics Data System (ADS)

    Giachetti, A.; Daffara, C.; Reghelin, C.; Gobbetti, E.; Pintus, R.

    2015-06-01

    In this paper we analyze some problems related to the acquisition of multiple-illumination images for Polynomial Texture Maps (PTM) or generic Reflectance Transformation Imaging (RTI). We show that nonuniformity in light intensity and direction can be a relevant issue when manually acquiring image sets with the standard highlight-based setup, using either a flash lamp or an LED light. To retain a cheap and flexible acquisition setup that can be used in the field and by non-experienced users, we propose a dynamic calibration and correction of the lights based on multiple intensity and direction estimates around the imaged object during acquisition. Preliminary tests were performed by acquiring a specifically designed 3D-printed pattern, to assess the accuracy obtained both for spatial discrimination of small structures and for normal estimation, and samples of different types of paper, to evaluate material discrimination. Building on this analysis and on the tools developed and under development, we plan to design a set of novel procedures and guidelines that can turn the cheap and common RTI acquisition setup from a simple way to enrich object visualization into a powerful method for quantitatively characterizing both surface geometry and the reflective properties of different materials. These results could have relevant applications in the Cultural Heritage domain, for example to recognize different materials used in paintings or to investigate the ageing of artifact surfaces.
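
    For context, the standard six-coefficient PTM luminance model fitted per pixel by least squares is sketched below; accurate per-image light directions (lu, lv), which the proposed calibration is meant to supply, are assumed to be known.

        import numpy as np

        def fit_ptm(lu, lv, intensities):
            """Fit I(lu, lv) = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5 per pixel,
            given one projected light direction (lu, lv) per image."""
            G = np.column_stack([lu**2, lv**2, lu*lv, lu, lv, np.ones_like(lu)])
            coeffs, *_ = np.linalg.lstsq(G, intensities, rcond=None)   # (6, n_pixels)
            return coeffs

        # toy usage: 40 images of a 50x50 patch under varying (calibrated) light directions
        rng = np.random.default_rng(7)
        angles, elev = rng.uniform(0, 2*np.pi, 40), rng.uniform(0.3, 1.0, 40)
        lu, lv = elev*np.cos(angles), elev*np.sin(angles)
        true = rng.uniform(-1, 1, (6, 2500))
        I = np.column_stack([lu**2, lv**2, lu*lv, lu, lv, np.ones_like(lu)]) @ true
        coeffs = fit_ptm(lu, lv, I + 0.01*rng.standard_normal(I.shape))
        print(coeffs.shape)   # (6, 2500) -> one 6-term PTM per pixel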

  11. Ultrasonic self-calibrated method applied to monitoring of sol-gel transition.

    PubMed

    Robin, Guillaume; Vander Meulen, François; Wilkie-Chancellier, Nicolas; Martinez, Loïc; Haumesser, Lionel; Fortineau, Jérôme; Griesmar, Pascal; Lethiecq, Marc; Feuillard, Guy

    2012-07-01

    In many industrial processes where online control is necessary, such as in the food industry, real-time monitoring of viscoelastic properties is essential to ensure product quality. Acoustic methods have shown that reliable properties can be obtained from measurements of velocity and attenuation. This paper proposes a simple, real-time ultrasound method for monitoring linear medium properties (phase velocity and attenuation) that vary in time. The method is based on a pulse-echo measurement and is self-calibrated. Results on a silica gel are reported, and the importance of taking into account changes in the mechanical loading on the front face of the transducer is shown. This is done through a modification of the emission and reception transfer parameters. The simultaneous measurement of the input and output currents and voltages enables these parameters to be calculated during the reaction. The variations of the transfer parameters are on the order of 6% and predominate over other effects. The evolution of the ultrasonic longitudinal wave phase velocity and attenuation as a function of time allows the characteristic times of the chemical reaction to be determined. The results correlate well with the gelation time measured by a rheological method at low frequency.

  12. Standard addition method applied to the urinary quantification of nicotine in the presence of cotinine and anabasine using surface enhanced Raman spectroscopy and multivariate curve resolution.

    PubMed

    Mamián-López, Mónica B; Poppi, Ronei J

    2013-01-14

    In this work, urinary nicotine was determined in the presence of the metabolite cotinine and the alkaloid anabasine using surface enhanced Raman spectroscopy with colloidal gold as the substrate. Spectra were decomposed using the multivariate curve resolution-alternating least squares method, and pure contributions were recovered. The standard addition method was applied by spiking urine samples with known amounts of the analyte, and the relative responses from curve resolution were used to build the analytical curves. The use of multivariate curve resolution in conjunction with the standard addition method proved to be an effective strategy that minimized the need for reagents and time-consuming procedures. The determination of the alkaloid nicotine was successfully accomplished at concentrations of 0.10, 0.20 and 0.30 μg mL(-1), and total error values of less than 10% were obtained. Copyright © 2012 Elsevier B.V. All rights reserved.
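
    The standard-addition step itself reduces to a linear extrapolation of the curve-resolution responses, as in the short sketch below (all numbers are illustrative).

        import numpy as np

        # added nicotine (ug/mL) and the corresponding MCR-recovered relative responses
        added = np.array([0.0, 0.10, 0.20, 0.30])
        response = np.array([0.215, 0.322, 0.431, 0.538])        # illustrative values

        slope, intercept = np.polyfit(added, response, 1)
        c_sample = intercept / slope                              # magnitude of the x-intercept
        print(f"estimated urinary nicotine: {c_sample:.3f} ug/mL")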

  13. A graphical method to evaluate spectral preprocessing in multivariate regression calibrations: example with Savitzky-Golay filters and partial least squares regression

    USDA-ARS?s Scientific Manuscript database

    In multivariate regression analysis of spectroscopy data, spectral preprocessing is often performed to reduce unwanted background information (offsets, sloped baselines) or accentuate absorption features in intrinsically overlapping bands. These procedures, also known as pretreatments, are commonly ...
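
    A typical preprocessing-plus-regression combination of the kind discussed can be sketched with scipy and scikit-learn, as below; the Savitzky-Golay window, polynomial order and synthetic spectra are assumptions for illustration.

        import numpy as np
        from scipy.signal import savgol_filter
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(8)
        X = np.cumsum(rng.standard_normal((80, 700)), axis=1)     # spectra with sloped baselines
        y = X[:, 350] - X[:, 300] + 0.05 * rng.standard_normal(80)

        for deriv in (0, 1, 2):                                   # raw, 1st and 2nd derivative
            Xp = savgol_filter(X, window_length=15, polyorder=2, deriv=deriv, axis=1)
            r2 = cross_val_score(PLSRegression(n_components=5), Xp, y, cv=5).mean()
            print(f"SG deriv={deriv}: mean CV R^2 = {r2:.3f}")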

  14. Dispersive liquid-liquid microextraction of quinolones in porcine blood: Validation of a CE method using univariate calibration or multivariate curve resolution-alternating least squares for overlapped peaks.

    PubMed

    Teglia, Carla M; Cámara, María S; Vera-Candioti, Luciana

    2017-02-08

    In the previously published part of this study, we detailed a novel strategy based on dispersive liquid-liquid microextraction to extract and preconcentrate nine fluoroquinolones in porcine blood. Moreover, we presented the optimized experimental conditions to obtain complete CE separation of the target analytes. Consequently, this second part reports the validation of the developed method to determine flumequine, difloxacin, enrofloxacin, marbofloxacin, ofloxacin and ciprofloxacin through univariate calibration, and enoxacin, danofloxacin and gatifloxacin through multivariate curve resolution analysis. The validation was performed according to the FDA guidelines for bioanalytical assay procedures and the European Directive 2002/657 to demonstrate that the results are reliable. The method was applied to the determination of fluoroquinolones in real samples. Results indicated high selectivity and excellent precision, with RSDs of less than 11.9% for the concentrations in intra- and inter-assay precision studies. Linearity was demonstrated over the range 4.00 to 30.00 mg/L, and recovery was investigated at four fortification levels, ranging from 89 to 113%. Several approaches found in the literature were used to determine the LODs and LOQs. Although all the strategies used were appropriate, different values were obtained with different methods. Estimating the S/N ratio from the mean noise level at the migration time of each fluoroquinolone turned out to be the best of the studied methods for evaluating the LODs and LOQs; the values were in the ranges of 1.55 to 4.55 mg/L and 5.17 to 9.62 mg/L, respectively.

  15. Evaluating treatment effect within a multivariate stochastic ordering framework: Nonparametric combination methodology applied to a study on multiple sclerosis.

    PubMed

    Brombin, Chiara; Di Serio, Clelia

    2016-02-01

    Multiple sclerosis is a complex autoimmune disease that affects the central nervous system. It has a multitude of symptoms that are observed in different people in many different ways. At this time, there is no definite cure for multiple sclerosis. However, therapies that slow the progression of disability, controlling symptoms and helping patients maintain a normal quality of life, are available. We focus on relapsing-remitting multiple sclerosis patients treated with interferons or glatiramer acetate. These treatments have been shown to be effective, but their relative effectiveness has not yet been well established. To assess the superiority of a treatment, instead of classical parametric methods, we propose a statistical approach based on the permutation setting and the nonparametric combination of dependent permutation tests. In this framework, hypothesis testing problems for multivariate monotonic stochastic ordering can be handled easily. This approach was motivated by the analysis of a large observational Italian multicentre study on multiple sclerosis, with several continuous and categorical outcomes measured at multiple time points.
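
    A minimal sketch of the nonparametric combination idea is given below: one-sided permutation tests per outcome, Fisher's combining function, and a global p-value from the same permutation distribution; this simplifies the methodology (two groups, mean-difference statistics) and uses synthetic data.

        import numpy as np

        def npc_fisher(x, y, n_perm=2000, seed=0):
            """Nonparametric combination of one-sided permutation tests (H1: x > y per outcome)."""
            rng = np.random.default_rng(seed)
            data = np.vstack([x, y]); n_x = len(x)
            obs = x.mean(axis=0) - y.mean(axis=0)                  # one statistic per outcome
            perm = np.empty((n_perm, data.shape[1]))
            for b in range(n_perm):
                idx = rng.permutation(len(data))
                perm[b] = data[idx[:n_x]].mean(axis=0) - data[idx[n_x:]].mean(axis=0)
            # partial p-values for the observed data and for every permutation
            all_stats = np.vstack([obs, perm])
            ranks = (all_stats[None, :, :] >= all_stats[:, None, :]).sum(axis=1)
            p_partial = ranks / (n_perm + 1)
            T = -2 * np.log(p_partial).sum(axis=1)                 # Fisher combining function
            p_global = (T >= T[0]).mean()
            return p_partial[0], p_global

        rng = np.random.default_rng(9)
        treated = rng.normal(0.3, 1, (30, 4))      # e.g. four clinical outcomes per patient
        control = rng.normal(0.0, 1, (30, 4))
        print(npc_fisher(treated, control))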

  16. Improving and Applying the Measurement of Erodibility: Examining and Calibrating Rock Mass Indices

    NASA Astrophysics Data System (ADS)

    Rodriguez, R. S.; Spotila, J. A.

    2011-12-01

    The Rock Mass Strength index (Selby, 1980) has become a standard test in geomorphology to quantify rock erodibility. Yet, the index combines a mixture of quantitative and qualitative parameters, yielding classification disparities arising from subjective user interpretations and producing final ratings that are effectively only comparable within a single researcher's dataset. Other methods, such as the Rock Quality Designation (Deere and Deere, 1988) and the Slope Mass Rating system (Bieniawski, 1989; Romana, 1995) employ some additional quantitative methods, but do not eliminate variability in user interpretation. Still, the idea of quantifying erodibility in an easily-applied field method holds great potential for furthering the understanding of large-scale landscape evolution. Therefore, we are applying several published and unpublished erodibility indices across a suite of rock types, varying the relative weights of index parameters and calculating ratings based on various potential interpretations of the index guidelines. To evaluate these results, we regress the iterations against the mean topographic slopes, allowing us to determine which index and weighting scheme is ideal overall. Results thus far have shown discrepancies between rating and slope in rocks that are more susceptible to chemical weathering (a parameter not typically included in erodibility indices). We are therefore examining the addition of chemical composition as an index parameter, or the possibility of creating weighting schema tailored to specific rock types and erosional environments. Preliminary results also suggest that beyond a threshold fracture density, high compressive rock strength is rendered moot, requiring further modification to existing indices.

  17. Applying multivariate statistics to discriminate uranium ore concentrate geolocations using (radio)chemical data in support of nuclear forensic investigations.

    PubMed

    Reading, David G; Croudace, Ian W; Warwick, Phillip E; Cigliana, Kassie A

    2016-10-01

    The application of Principal Components Analysis (PCA) to U and Th series gamma spectrometry data provides a discriminatory tool to help determine the provenance of illicitly recovered uranium ore concentrates (UOCs). The PCA is applied to a database of radiometric signatures from 19 historic UOCs from Australia, Canada and the USA, representing many uranium geological deposits. In this study, a key step in obtaining accurate radiometric data (gamma and alpha) is digestion of the uranium ores and UOCs using a lithium tetraborate fusion. Six UOCs from the same sample set were analysed 'blind' and compared against the database to identify their geolocation. These UOCs were all accurately linked to their correct geolocations, which can aid the forensic laboratory in deciding which further analytical techniques should be used to improve confidence in the assigned location. Copyright © 2016 Elsevier Ltd. All rights reserved.
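
    A generic PCA-plus-nearest-centroid matching step of the kind described can be sketched as follows; the signature variables, group structure and distance rule are illustrative assumptions, not the authors' exact workflow.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        # rows: UOC reference samples; columns: (radio)chemical signature variables
        rng = np.random.default_rng(10)
        labels = np.repeat(["Australia", "Canada", "USA"], 6)
        database = rng.standard_normal((18, 12)) + np.repeat(np.arange(3), 6)[:, None]

        scaler = StandardScaler().fit(database)
        pca = PCA(n_components=3).fit(scaler.transform(database))
        scores = pca.transform(scaler.transform(database))

        def assign_geolocation(sample):
            """Project an unknown UOC signature and return the nearest class centroid."""
            s = pca.transform(scaler.transform(sample.reshape(1, -1)))[0]
            centroids = {g: scores[labels == g].mean(axis=0) for g in np.unique(labels)}
            return min(centroids, key=lambda g: np.linalg.norm(s - centroids[g]))

        blind = database[7] + 0.1 * rng.standard_normal(12)       # pretend this is a seized sample
        print(assign_geolocation(blind))                          # expected: "Canada"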

  18. Performance Analysis of Extracted Rule-Base Multivariable Type-2 Self-Organizing Fuzzy Logic Controller Applied to Anesthesia

    PubMed Central

    Fan, Shou-Zen; Shieh, Jiann-Shing

    2014-01-01

    We compare type-1 and type-2 self-organizing fuzzy logic controller (SOFLC) using expert initialized and pretrained extracted rule-bases applied to automatic control of anaesthesia during surgery. We perform experimental simulations using a nonfixed patient model and signal noise to account for environmental and patient drug interaction uncertainties. The simulations evaluate the performance of the SOFLCs in their ability to control anesthetic delivery rates for maintaining desired physiological set points for muscle relaxation and blood pressure during a multistage surgical procedure. The performances of the SOFLCs are evaluated by measuring the steady state errors and control stabilities which indicate the accuracy and precision of control task. Two sets of comparisons based on using expert derived and extracted rule-bases are implemented as Wilcoxon signed-rank tests. Results indicate that type-2 SOFLCs outperform type-1 SOFLC while handling the various sources of uncertainties. SOFLCs using the extracted rules are also shown to outperform those using expert derived rules in terms of improved control stability. PMID:25587533

  19. Multivariate optimization of molecularly imprinted polymer solid-phase extraction applied to parathion determination in different water samples.

    PubMed

    Alizadeh, Taher; Ganjali, Mohammad Reza; Nourozi, Parviz; Zare, Mashaalah

    2009-04-13

    In this work, a parathion-selective molecularly imprinted polymer (MIP) was synthesized and applied as a highly selective adsorbent for parathion extraction and determination in aqueous samples. The method was based on the sorption of parathion in the MIP according to a simple batch procedure, followed by desorption using methanol and measurement with square wave voltammetry. Plackett-Burman and Box-Behnken designs were used to optimize the solid-phase extraction, in order to enhance the recovery and improve the preconcentration factor. Using the screening design, the effect of six factors on the extraction recovery was investigated. These factors were: pH, stirring rate (rpm), sample volume (V(1)), eluent volume (V(2)), organic solvent content of the sample (org%) and extraction time (t). The response surface design was carried out for the three factors (V(2)), (V(1)) and (org%), which were found to be the main effects. The mathematical model for the recovery was obtained as a function of these main effects. Finally, the main effects were adjusted according to the defined desirability function. It was found that recoveries of more than 95% could easily be obtained with the optimized method. Under the experimental conditions obtained in the optimization step, the method allowed selective determination of parathion in the linear dynamic range of 0.20-467.4 microg L(-1), with a detection limit of 49.0 ng L(-1) and an R.S.D. of 5.7% (n=5). The parathion content of water samples was successfully determined, demonstrating the potential of the developed procedure.

  20. Confocal Raman microscopy and multivariate statistical analysis for determination of different penetration abilities of caffeine and propylene glycol applied simultaneously in a mixture on porcine skin ex vivo.

    PubMed

    Mujica Ascencio, Saul; Choe, ChunSik; Meinke, Martina C; Müller, Rainer H; Maksimov, George V; Wigger-Alberti, Walter; Lademann, Juergen; Darvin, Maxim E

    2016-07-01

    Propylene glycol is one of the known substances added in cosmetic formulations as a penetration enhancer. Recently, nanocrystals have been employed also to increase the skin penetration of active components. Caffeine is a component with many applications and its penetration into the epidermis is controversially discussed in the literature. In the present study, the penetration ability of two components - caffeine nanocrystals and propylene glycol, applied topically on porcine ear skin in the form of a gel, was investigated ex vivo using two confocal Raman microscopes operated at different excitation wavelengths (785nm and 633nm). Several depth profiles were acquired in the fingerprint region and different spectral ranges, i.e., 526-600cm(-1) and 810-880cm(-1) were chosen for independent analysis of caffeine and propylene glycol penetration into the skin, respectively. Multivariate statistical methods such as principal component analysis (PCA) and linear discriminant analysis (LDA) combined with Student's t-test were employed to calculate the maximum penetration depths of each substance (caffeine and propylene glycol). The results show that propylene glycol penetrates significantly deeper than caffeine (20.7-22.0μm versus 12.3-13.0μm) without any penetration enhancement effect on caffeine. The results confirm that different substances, even if applied onto the skin as a mixture, can penetrate differently. The penetration depths of caffeine and propylene glycol obtained using two different confocal Raman microscopes are comparable showing that both types of microscopes are well suited for such investigations and that multivariate statistical PCA-LDA methods combined with Student's t-test are very useful for analyzing the penetration of different substances into the skin.

  1. Assessment of Coastal and Urban Flooding Hazards Applying Extreme Value Analysis and Multivariate Statistical Techniques: A Case Study in Elwood, Australia

    NASA Astrophysics Data System (ADS)

    Guimarães Nobre, Gabriela; Arnbjerg-Nielsen, Karsten; Rosbjerg, Dan; Madsen, Henrik

    2016-04-01

    Traditionally, flood risk assessment studies have been carried out from a univariate frequency analysis perspective. However, statistical dependence between hydrological variables, such as extreme rainfall and extreme sea surge, is plausible, since both variables are to some extent driven by common meteorological conditions. To overcome this limitation, multivariate statistical techniques have the potential to combine different sources of flooding in the investigation. The aim of this study was to apply a range of statistical methodologies for analyzing combined extreme hydrological variables that can lead to coastal and urban flooding. The study area is the Elwood Catchment, a highly urbanized catchment located in the city of Port Phillip, Melbourne, Australia. The first part of the investigation dealt with the marginal extreme value distributions. Two approaches to extracting extreme value series were applied (Annual Maximum and Partial Duration Series), and different probability distribution functions were fitted to the observed samples. Results obtained using the Generalized Pareto distribution demonstrate the ability of the Pareto family to model the extreme events. Advancing into multivariate extreme value analysis, the asymptotic properties of the extremal dependence were first investigated. As a weak positive asymptotic dependence between the bivariate extreme pairs was found, the conditional method proposed by Heffernan and Tawn (2004) was chosen. This approach is suitable for modelling bivariate extreme values that are relatively unlikely to occur together. The results show that the probability of an extreme sea surge occurring during an extreme one-hour precipitation event (or vice versa) can be twice as great as would be estimated under the assumption of independence. Therefore, presuming independence between these two variables would result in a severe underestimation of the flooding risk in the study area.
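
    The peaks-over-threshold part of such an analysis can be sketched with scipy's Generalized Pareto distribution, as below; the threshold choice, record length and return-level formula follow standard POT practice and the data are synthetic.

        import numpy as np
        from scipy.stats import genpareto

        # synthetic hourly rainfall-like series (mm), peaks-over-threshold extraction
        rng = np.random.default_rng(11)
        series = rng.exponential(2.0, size=20 * 365 * 24)          # ~20 years of hourly data
        threshold = np.quantile(series, 0.995)
        excesses = series[series > threshold] - threshold

        # fit the GPD to the excesses (location fixed at 0, as usual for POT)
        shape, _, scale = genpareto.fit(excesses, floc=0)

        # T-year return level under the POT model
        lam = len(excesses) / 20.0                                  # mean exceedances per year
        T = 100
        return_level = threshold + genpareto.ppf(1 - 1 / (lam * T), shape, loc=0, scale=scale)
        print(round(shape, 3), round(scale, 3), round(return_level, 2))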

  2. Content uniformity determination of pharmaceutical tablets using five near-infrared reflectance spectrometers: a process analytical technology (PAT) approach using robust multivariate calibration transfer algorithms.

    PubMed

    Sulub, Yusuf; LoBrutto, Rosario; Vivilecchia, Richard; Wabuyele, Busolo Wa

    2008-03-24

    Near-infrared calibration models were developed for the determination of content uniformity of pharmaceutical tablets containing 29.4% drug load for two dosage strengths (X and Y). Both dosage strengths have a circular geometry and differ only in size and weight. Strength X samples weigh approximately 425 mg with a diameter of 12 mm, while strength Y samples weigh approximately 1700 mg with a diameter of 20 mm. Data used in this study were acquired from five NIR instruments manufactured by two different vendors. One of these spectrometers is a dispersive NIR system, while the other four are Fourier transform (FT) based. The transferability of the optimized partial least-squares (PLS) calibration models developed on the primary instrument (A), located in a research facility, was evaluated using spectral data acquired from secondary instruments B, C, D and E. Instruments B and E were located in the same research facility as spectrometer A, while instruments C and D were located in a production facility 35 miles away. The same set of tablet samples was used to acquire spectral data from all instruments. This scenario mimics the conventional pharmaceutical technology transfer from research and development to production. Direct cross-instrument prediction without standardization was performed between the primary and each secondary instrument to evaluate the robustness of the primary instrument calibration model. For the strength Y samples, this approach was successful for data acquired on instruments B, C, and D, producing root mean square errors of prediction (RMSEP) of 1.05, 1.05, and 1.22%, respectively. However, for instrument E data, this approach was not successful, producing an RMSEP value of 3.40%. A similar deterioration was observed for the strength X samples, with RMSEP values of 2.78, 5.54, 3.40, and 5.78% corresponding to spectral data acquired on instruments B, C, D, and E, respectively. To minimize the effect of instrument variability
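
    One common calibration-transfer remedy, direct standardization (mapping secondary-instrument spectra into the primary instrument's space before applying the primary PLS model), is sketched below for orientation; it is not necessarily the algorithm evaluated in the paper, and the spectra and instrument distortion are simulated.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(12)
        n_std, n_chan = 30, 400
        X_primary = rng.standard_normal((n_std, n_chan))            # transfer samples, instrument A
        distortion = np.eye(n_chan) + 0.02 * rng.standard_normal((n_chan, n_chan))
        X_secondary = X_primary @ distortion + 0.05                  # same tablets on instrument E
        y = X_primary[:, :10].mean(axis=1)                           # e.g. content uniformity value

        # primary-instrument calibration model
        pls = PLSRegression(n_components=5).fit(X_primary, y)

        # direct standardization: F maps secondary spectra onto the primary domain
        F, *_ = np.linalg.lstsq(X_secondary, X_primary, rcond=None)

        # evaluated on the transfer samples themselves, for brevity
        rmse_raw = np.sqrt(np.mean((pls.predict(X_secondary).ravel() - y) ** 2))
        rmse_ds = np.sqrt(np.mean((pls.predict(X_secondary @ F).ravel() - y) ** 2))
        print(f"RMSEP without transfer: {rmse_raw:.3f}, with DS transfer: {rmse_ds:.3f}")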

  3. A multi-model fusion strategy for multivariate calibration using near and mid-infrared spectra of samples from brewing industry

    NASA Astrophysics Data System (ADS)

    Tan, Chao; Chen, Hui; Wang, Chao; Zhu, Wanping; Wu, Tong; Diao, Yuanbo

    2013-03-01

    Near and mid-infrared (NIR/MIR) spectroscopy techniques have gained wide acceptance in industry due to their multiple applications and versatility. However, successful application often depends heavily on the construction of accurate and stable calibration models. For this purpose, a simple multi-model fusion strategy is proposed. It is a combination of the Kohonen self-organizing map (KSOM), mutual information (MI) and partial least squares (PLS), and is therefore named KMICPLS. It works as follows: first, the original training set is fed into a KSOM for unsupervised clustering of samples, from which a series of training subsets is constructed. Thereafter, for each training subset, an MI spectrum is calculated and only the variables with MI values higher than the mean are retained; a candidate PLS model is then built on these variables. Finally, a fixed number of PLS models are selected to produce a consensus model. Two NIR/MIR spectral datasets from the brewing industry are used for the experiments. The results confirm its superior performance compared with two reference algorithms, i.e., conventional PLS and genetic algorithm-PLS (GAPLS). It can build more accurate and stable calibration models without increasing the complexity, and can be generalized to other NIR/MIR applications.

  4. Development and analytical validation of a simple multivariate calibration method using digital scanner images for sunset yellow determination in soft beverages.

    PubMed

    Botelho, Bruno G; de Assis, Luciana P; Sena, Marcelo M

    2014-09-15

    This paper proposes a novel methodology for the quantification of an artificial dye, sunset yellow (SY), in soft beverages, using image analysis (RGB histograms) and partial least squares regression. The developed method presents many advantages compared with alternative methodologies such as HPLC and UV/VIS spectrophotometry: it is faster, does not require sample pretreatment steps or any kind of solvents and reagents, and uses low-cost equipment, a commercial flatbed scanner. The method was able to quantify SY in isotonic drinks and orange sodas, in the range of 7.8-39.7 mg L(-1), with relative prediction errors lower than 10%. A multivariate validation was also performed according to Brazilian and international guidelines. Linearity, accuracy, sensitivity, bias, prediction uncertainty and a recently proposed tool, the β-expectation tolerance interval, were estimated. The application of digital images in food analysis is very promising, opening the possibility of automation.
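
    The feature-extraction-plus-regression pipeline can be sketched as below: per-channel RGB histograms as predictors and PLS for the dye concentration; the synthesized image patches and histogram binning are assumptions for illustration.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        def rgb_histogram(image, bins=32):
            """Concatenate per-channel histograms of an RGB image into one feature vector."""
            return np.concatenate([np.histogram(image[..., c], bins=bins,
                                                range=(0, 256), density=True)[0]
                                   for c in range(3)])

        # synthesize scanner-like patches whose colour shifts with dye concentration (mg/L)
        rng = np.random.default_rng(13)
        conc = rng.uniform(7.8, 39.7, 40)
        images = [np.clip(rng.normal([250 - 3*c, 200 - 2*c, 60], 8, (64, 64, 3)), 0, 255)
                  for c in conc]
        X = np.array([rgb_histogram(im) for im in images])

        pls = PLSRegression(n_components=4).fit(X[:30], conc[:30])
        pred = pls.predict(X[30:]).ravel()
        print(np.round(np.abs(pred - conc[30:]) / conc[30:] * 100, 1))   # relative errors (%)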

  5. Improving the sampling strategy of the Joint Danube Survey 3 (2013) by means of multivariate statistical techniques applied on selected physico-chemical and biological data.

    PubMed

    Hamchevici, Carmen; Udrea, Ion

    2013-11-01

    The concept of a basin-wide Joint Danube Survey (JDS) was launched by the International Commission for the Protection of the Danube River (ICPDR) as a tool for investigative monitoring under the Water Framework Directive (WFD), with a frequency of 6 years. The first JDS was carried out in 2001, and its success in providing key information for characterisation of the Danube River Basin District, as required by the WFD, led to the organisation of the second JDS in 2007, which was the world's biggest river research expedition that year. The present paper presents an approach for improving the survey strategy for the next planned survey, JDS3 (2013), by means of several multivariate statistical techniques. In order to design the optimum structure in terms of parameters and sampling sites, principal component analysis (PCA), factor analysis (FA) and cluster analysis were applied to JDS2 data for 13 selected physico-chemical elements and one biological element measured at 78 sampling sites located on the main course of the Danube. Results from PCA/FA showed that most of the dataset variance (above 75%) was explained by five varifactors loaded with 8 out of 14 variables: physical (transparency and total suspended solids), relevant nutrients (N-nitrates and P-orthophosphates), feedback effects of primary production (pH, alkalinity and dissolved oxygen) and algal biomass. Taking into account the representation of the factor scores given by FA versus sampling sites, together with the major groups generated by the clustering procedure, the spatial network of the next survey can be carefully tailored, leading to a reduction in the number of sampling sites of more than 30%. This target-oriented sampling strategy based on the selected multivariate statistics can provide a strong reduction in the dimensionality of the original data, and in the corresponding costs, without any loss of information.

  6. Multivariate normality

    NASA Technical Reports Server (NTRS)

    Crutcher, H. L.; Falls, L. W.

    1976-01-01

    Sets of experimentally determined or routinely observed data provide information about the past, present and, hopefully, future sets of similarly produced data. An infinite set of statistical models exists which may be used to describe the data sets. The normal distribution is one model. If it serves at all, it serves well. If a data set, or a transformation of the set, representative of a larger population can be described by the normal distribution, then valid statistical inferences can be drawn. There are several tests which may be applied to a data set to determine whether the univariate normal model adequately describes the set. The chi-square test based on Pearson's work in the late nineteenth and early twentieth centuries is often used. Like all tests, it has some weaknesses which are discussed in elementary texts. Extension of the chi-square test to the multivariate normal model is provided. Tables and graphs permit easier application of the test in the higher dimensions. Several examples, using recorded data, illustrate the procedures. Tests of maximum absolute differences, mean sum of squares of residuals, runs and changes of sign are included in these tests. Dimensions one through five with selected sample sizes 11 to 101 are used to illustrate the statistical tests developed.
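
    One standard check in this spirit compares squared Mahalanobis distances with a chi-square distribution with p degrees of freedom (exact for known parameters, approximate when mean and covariance are estimated); a short sketch follows.

        import numpy as np
        from scipy.stats import chi2, kstest

        def mahalanobis_sq(X):
            """Squared Mahalanobis distance of each row from the sample mean."""
            Xc = X - X.mean(axis=0)
            inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
            return np.einsum("ij,jk,ik->i", Xc, inv_cov, Xc)

        # example: 101 observations in 3 dimensions (cf. the report's sample sizes 11-101)
        rng = np.random.default_rng(14)
        X = rng.multivariate_normal(np.zeros(3), np.diag([1.0, 2.0, 0.5]), size=101)

        d2 = mahalanobis_sq(X)
        stat, p_value = kstest(d2, chi2(df=X.shape[1]).cdf)     # compare with chi-square(p)
        print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")  # large p: normality not rejected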

  7. Calibration laws based on multiple linear regression applied to matrix-assisted laser desorption/ionization Fourier transform ion cyclotron resonance mass spectrometry.

    PubMed

    Williams, D Keith; Chadwick, M Ashley; Williams, Taufika Islam; Muddiman, David C

    2008-12-01

    Operation of any mass spectrometer requires implementation of mass calibration laws to translate experimentally measured physical quantities into a m/z range. While internal calibration in Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR-MS) offers several attractive features, including exposure of calibrant and analyte ions to identical experimental conditions (e.g. space charge), external calibration affords simpler pulse sequences and higher throughput. The automatic gain control method used in hybrid linear trap quadrupole (LTQ) FT-ICR-MS to consistently obtain the same ion population is not readily amenable to matrix-assisted laser desorption/ionization (MALDI) FT-ICR-MS, due to the heterogeneous nature and poor spot-to-spot reproducibility of MALDI. This can be compensated for by taking external calibration laws into account that consider magnetic and electric fields, as well as relative and total ion abundances. Herein, an evaluation of external mass calibration laws applied to MALDI-FT-ICR-MS is performed to achieve higher mass measurement accuracy (MMA).
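
    As an illustration of fitting such a calibration law, the sketch below regresses m/z on 1/f and 1/f^2 (a common two-term FT-ICR external calibration form); the frequencies and constants are invented, and the abundance-dependent terms discussed in the paper are omitted.

        import numpy as np

        rng = np.random.default_rng(15)
        # calibrant cyclotron frequencies (Hz) and m/z values generated from an assumed law
        f = np.array([523_412.0, 381_776.0, 300_518.0, 247_699.0, 190_330.0])
        A_true, B_true = 1.0468e8, -2.0e8
        mz = A_true / f + B_true / f**2 + rng.normal(0, 2e-4, f.size)   # ~ppm-level noise

        # fit the two-term law m/z = A/f + B/f^2 by multiple linear regression
        G = np.column_stack([1.0 / f, 1.0 / f**2])
        (A, B), *_ = np.linalg.lstsq(G, mz, rcond=None)

        def freq_to_mz(freq):
            """Convert a measured frequency into m/z using the fitted calibration constants."""
            return A / freq + B / freq**2

        ppm_error = (freq_to_mz(f) - mz) / mz * 1e6
        print(np.round(ppm_error, 3))                     # residual mass measurement errors (ppm)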

  8. New error calibration tests for gravity models using subset solutions and independent data - Applied to GEM-T3

    NASA Technical Reports Server (NTRS)

    Lerch, F. J.; Nerem, R. S.; Chinn, D. S.; Chan, J. C.; Patel, G. B.; Klosko, S. M.

    1993-01-01

    A new method has been developed to provide a direct test of the error calibrations of gravity models based on actual satellite observations. The basic approach projects the error estimates of the gravity model parameters onto satellite observations, and the results of these projections are then compared with data residual computed from the orbital fits. To allow specific testing of the gravity error calibrations, subset solutions are computed based on the data set and data weighting of the gravity model. The approach is demonstrated using GEM-T3 to show that the gravity error estimates are well calibrated and that reliable predictions of orbit accuracies can be achieved for independent orbits.

  9. Quantitative analysis of binary polymorphs mixtures of fusidic acid by diffuse reflectance FTIR spectroscopy, diffuse reflectance FT-NIR spectroscopy, Raman spectroscopy and multivariate calibration.

    PubMed

    Guo, Canyong; Luo, Xuefang; Zhou, Xiaohua; Shi, Beijia; Wang, Juanjuan; Zhao, Jinqi; Zhang, Xiaoxia

    2017-03-10

    Vibrational spectroscopic techniques such as infrared, near-infrared and Raman spectroscopy have become popular for detecting and quantifying polymorphism in pharmaceuticals, since they are fast and non-destructive. This study assessed the ability of three vibrational spectroscopic techniques combined with multivariate analysis to quantify a low-content undesired polymorph within a binary polymorphic mixture. Partial least squares (PLS) regression and support vector machine (SVM) regression were employed to build quantitative models. Fusidic acid, a steroidal antibiotic, was used as the model compound. It was found that PLS regression performed slightly better than SVM regression for all three spectroscopic techniques. Root mean square errors of prediction (RMSEP) ranged from 0.48% to 1.17% for diffuse reflectance FTIR spectroscopy, 1.60-1.93% for diffuse reflectance FT-NIR spectroscopy and 1.62-2.31% for Raman spectroscopy. The results indicate that diffuse reflectance FTIR spectroscopy offers significant advantages in providing accurate measurement of polymorphic content in the fusidic acid binary mixtures, while Raman spectroscopy is the least accurate technique for quantitative analysis of these polymorphs.

  10. Multivariate Calibration Approach for Quantitative Determination of Cell-Line Cross Contamination by Intact Cell Mass Spectrometry and Artificial Neural Networks.

    PubMed

    Valletta, Elisa; Kučera, Lukáš; Prokeš, Lubomír; Amato, Filippo; Pivetta, Tiziana; Hampl, Aleš; Havel, Josef; Vaňhara, Petr

    2016-01-01

    Cross-contamination of eukaryotic cell lines used in biomedical research represents a highly relevant problem. Analysis of repetitive DNA sequences, such as Short Tandem Repeats (STR), or Simple Sequence Repeats (SSR), is a widely accepted, simple, and commercially available technique to authenticate cell lines. However, it provides only qualitative information that depends on the extent of reference databases for interpretation. In this work, we developed and validated a rapid and routinely applicable method for evaluation of cell culture cross-contamination levels based on mass spectrometric fingerprints of intact mammalian cells coupled with artificial neural networks (ANNs). We used human embryonic stem cells (hESCs) contaminated by either mouse embryonic stem cells (mESCs) or mouse embryonic fibroblasts (MEFs) as a model. We determined the contamination level using a mass spectra database of known calibration mixtures that served as training input for an ANN. The ANN was then capable of correct quantification of the level of contamination of hESCs by mESCs or MEFs. We demonstrate that MS analysis, when linked to proper mathematical instruments, is a tangible tool for unraveling and quantifying heterogeneity in cell cultures. The analysis is applicable in routine scenarios for cell authentication and/or cell phenotyping in general.

  11. Multivariate Calibration Approach for Quantitative Determination of Cell-Line Cross Contamination by Intact Cell Mass Spectrometry and Artificial Neural Networks

    PubMed Central

    Prokeš, Lubomír; Amato, Filippo; Pivetta, Tiziana; Hampl, Aleš; Havel, Josef; Vaňhara, Petr

    2016-01-01

    Cross-contamination of eukaryotic cell lines used in biomedical research represents a highly relevant problem. Analysis of repetitive DNA sequences, such as Short Tandem Repeats (STR), or Simple Sequence Repeats (SSR), is a widely accepted, simple, and commercially available technique to authenticate cell lines. However, it provides only qualitative information that depends on the extent of reference databases for interpretation. In this work, we developed and validated a rapid and routinely applicable method for evaluation of cell culture cross-contamination levels based on mass spectrometric fingerprints of intact mammalian cells coupled with artificial neural networks (ANNs). We used human embryonic stem cells (hESCs) contaminated by either mouse embryonic stem cells (mESCs) or mouse embryonic fibroblasts (MEFs) as a model. We determined the contamination level using a mass spectra database of known calibration mixtures that served as training input for an ANN. The ANN was then capable of correct quantification of the level of contamination of hESCs by mESCs or MEFs. We demonstrate that MS analysis, when linked to proper mathematical instruments, is a tangible tool for unraveling and quantifying heterogeneity in cell cultures. The analysis is applicable in routine scenarios for cell authentication and/or cell phenotyping in general. PMID:26821236

  13. Determination of calibration constants for the hole-drilling residual stress measurement technique applied to orthotropic composites. II - Experimental evaluations

    NASA Technical Reports Server (NTRS)

    Prasad, C. B.; Prabhakaran, R.; Tompkins, S.

    1987-01-01

    The first step in the extension of the semidestructive hole-drilling technique for residual stress measurement to orthotropic composite materials is the determination of the three calibration constants. Attention is presently given to an experimental determination of these calibration constants for a highly orthotropic, unidirectionally-reinforced graphite fiber-reinforced polyimide composite. A comparison of the measured values with theoretically obtained ones shows agreement to be good, in view of the many possible sources of experimental variation.

  15. Multivariate statistical monitoring as applied to clean-in-place (CIP) and steam-in-place (SIP) operations in biopharmaceutical manufacturing.

    PubMed

    Roy, Kevin; Undey, Cenk; Mistretta, Thomas; Naugle, Gregory; Sodhi, Manbir

    2014-01-01

    Multivariate statistical process monitoring (MSPM) is becoming increasingly utilized to further enhance process monitoring in the biopharmaceutical industry. MSPM can play a critical role when there are many measurements and these measurements are highly correlated, as is typical for many biopharmaceutical operations. Specifically, for processes such as cleaning-in-place (CIP) and steaming-in-place (SIP, also known as sterilization-in-place), control systems typically oversee the execution of the cycles, and verification of the outcome is based on offline assays. These offline assays add to delays and corrective actions may require additional setup times. Moreover, this conventional approach does not take interactive effects of process variables into account and cycle optimization opportunities as well as salient trends in the process may be missed. Therefore, more proactive and holistic online continued verification approaches are desirable. This article demonstrates the application of real-time MSPM to processes such as CIP and SIP with industrial examples. The proposed approach has significant potential for facilitating enhanced continuous verification, improved process understanding, abnormal situation detection, and predictive monitoring, as applied to CIP and SIP operations.
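
    A common way to implement such online MSPM is a PCA model built from historical good cycles, with Hotelling's T² and squared prediction error (SPE/Q) limits flagging abnormal new cycles; the sketch below is a generic illustration with invented data and empirical limits, not the article's specific implementation:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(2)

      # Rows = summarized historical CIP/SIP cycles, columns = correlated process
      # measurements (e.g. return temperature, flow, conductivity) -- all invented.
      cov = [[4.0, 2.0, 0.10],
             [2.0, 3.0, 0.10],
             [0.10, 0.10, 0.05]]
      X_ref = rng.multivariate_normal([70.0, 40.0, 5.0], cov, size=200)

      scaler = StandardScaler().fit(X_ref)
      pca = PCA(n_components=2).fit(scaler.transform(X_ref))

      def monitor(x_new):
          """Return Hotelling's T^2 and SPE (Q) for one new cycle vector."""
          z = scaler.transform(x_new.reshape(1, -1))
          t = pca.transform(z)                              # scores
          t2 = float(np.sum(t ** 2 / pca.explained_variance_))
          spe = float(np.sum((z - pca.inverse_transform(t)) ** 2))
          return t2, spe

      # Empirical control limits from the historical cycles.
      t2_ref, spe_ref = zip(*(monitor(x) for x in X_ref))
      t2_lim, spe_lim = np.percentile(t2_ref, 99), np.percentile(spe_ref, 99)

      t2, spe = monitor(np.array([78.0, 35.0, 5.6]))        # a suspicious new cycle
      print("alarm:", t2 > t2_lim or spe > spe_lim)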

  16. Multivariate optimization by exploratory analysis applied to the determination of microelements in fruit juice by inductively coupled plasma optical emission spectrometry

    NASA Astrophysics Data System (ADS)

    Froes, Roberta Eliane Santos; Neto, Waldomiro Borges; Silva, Nilton Oliveira Couto e.; Naveira, Rita Lopes Pereira; Nascentes, Clésia Cristina; da Silva, José Bento Borba

    2009-06-01

    A method for the direct determination (without sample pre-digestion) of microelements in fruit juice by inductively coupled plasma optical emission spectrometry has been developed. The method was optimized by a 2³ factorial design, which evaluated the plasma conditions (nebulization gas flow rate, applied power, and sample flow rate). A juice sample (Tetra Packed, peach flavor) diluted 1:1 with 2% HNO₃ and spiked with 0.5 mg L⁻¹ of Al, Ba, Cd, Co, Cr, Cu, Fe, Mn, Ni, Pb, Sb, Sn, and Zn was employed in the optimization. The results of the factorial design were evaluated by exploratory analysis (Hierarchical Cluster Analysis, HCA, and Principal Component Analysis, PCA) to determine the optimum analytical conditions for all elements. Differentiation of the central point condition (0.75 L min⁻¹, 1.3 kW, and 1.25 mL min⁻¹) was observed with both methods, Principal Component Analysis and Hierarchical Cluster Analysis, together with higher analytical signal values, suggesting that these are the optimal analytical conditions. F- and Student's t-tests were used to compare the slopes of the calibration curves for aqueous and matrix-matched standards. No significant differences were observed at the 95% confidence level. The correlation coefficient was higher than 0.99 for all the elements evaluated. The limits of quantification were: Al 253, Cu 3.6, Fe 84, Mn 0.4, Zn 71, Ni 67, Cd 69, Pb 129, Sn 206, Cr 79, Co 24, and Ba 2.1 µg L⁻¹. Spiking experiments with fruit juice samples resulted in recoveries between 80 and 120%, except for Co and Sn. Al, Cd, Pb, Sn and Cr could not be quantified in any of the samples investigated. The method was applied to the determination of several elements in fruit juice samples commercialized in Brazil.
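
    A rough sketch of how a 2³ factorial design of plasma conditions can be screened by PCA and HCA of the multi-element responses (factor levels, element list and signals below are made up for illustration):

      import numpy as np
      from itertools import product
      from scipy.cluster.hierarchy import fcluster, linkage
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(3)

      # 2^3 factorial design: nebulization gas flow (L/min), RF power (kW) and
      # sample flow rate (mL/min), plus a centre point (levels are made up).
      levels = {"neb": (0.6, 0.9), "power": (1.1, 1.5), "sample": (1.0, 1.5)}
      design = [dict(zip(levels, combo)) for combo in product(*levels.values())]
      design.append({"neb": 0.75, "power": 1.3, "sample": 1.25})   # centre point

      # Simulated emission intensities for a few elements at each condition.
      elements = ["Al", "Ba", "Cd", "Cu", "Mn", "Zn"]
      responses = rng.normal(1000, 150, (len(design), len(elements)))
      responses[-1] *= 1.3      # pretend the centre point gives the highest signals

      Z = StandardScaler().fit_transform(responses)
      scores = PCA(n_components=2).fit_transform(Z)                 # exploratory PCA
      clusters = fcluster(linkage(Z, method="ward"), t=3, criterion="maxclust")

      for cond, pc, cl in zip(design, scores, clusters):
          print(cond, "PC1/PC2 = %.2f/%.2f" % tuple(pc), "cluster", cl)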

  17. Full spectrum and selected spectrum based multivariate calibration methods for simultaneous determination of betamethasone dipropionate, clotrimazole and benzyl alcohol: Development, validation and application on commercial dosage form

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Elzanfaly, Eman S.; Saad, Ahmed S.; Abdelaleem, Abdelaziz El-Bayoumi

    2016-12-01

    Five different chemometric methods were developed for the simultaneous determination of betamethasone dipropionate (BMD), clotrimazole (CT) and benzyl alcohol (BA) in their combined dosage form (Lotriderm® cream). The applied methods included three full-spectrum chemometric techniques, namely principal component regression (PCR), partial least squares (PLS) and artificial neural networks (ANN), while the other two methods were PLS and ANN preceded by a genetic algorithm (GA-PLS and GA-ANN) as a wavelength selection step. A multilevel multifactor experimental design was adopted for proper construction of the models. A validation set composed of 12 mixtures containing different ratios of the three analytes was used to evaluate the predictive power of the suggested models. All the proposed methods except ANN were successfully applied to the analysis of the pharmaceutical formulation (Lotriderm® cream). The results demonstrated the efficiency of the four methods as quantitative tools for the analysis of the three analytes without prior separation procedures and without any interference from the co-formulated excipient. Additionally, the work highlighted the effect of the GA on increasing the predictive power of the PLS and ANN models.
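
    A compact, generic genetic-algorithm wavelength selector wrapped around PLS conveys the GA-PLS idea (population size, mutation rate and the cross-validated RMSE fitness are illustrative choices, not the paper's settings):

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_score

      def fitness(mask, X, y):
          """Negative cross-validated RMSE of a PLS model on the selected wavelengths."""
          if mask.sum() < 4:
              return -np.inf
          pls = PLSRegression(n_components=min(3, int(mask.sum())))
          return cross_val_score(pls, X[:, mask], y, cv=5,
                                 scoring="neg_root_mean_squared_error").mean()

      def ga_select(X, y, pop_size=20, n_gen=30, p_mut=0.02, seed=0):
          rng = np.random.default_rng(seed)
          n_wl = X.shape[1]
          pop = rng.random((pop_size, n_wl)) < 0.3           # random initial masks
          for _ in range(n_gen):
              scores = np.array([fitness(ind, X, y) for ind in pop])
              parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]
              children = []
              for _ in range(pop_size - len(parents)):
                  a, b = parents[rng.integers(len(parents), size=2)]
                  cut = rng.integers(1, n_wl)                # single-point crossover
                  child = np.concatenate([a[:cut], b[cut:]])
                  child ^= rng.random(n_wl) < p_mut          # bit-flip mutation
                  children.append(child)
              pop = np.vstack([parents, children])
          scores = np.array([fitness(ind, X, y) for ind in pop])
          return pop[np.argmax(scores)]                      # best wavelength mask

    The returned boolean mask would then be used to fit the final PLS (or ANN) model on the selected wavelengths only.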

  18. Full spectrum and selected spectrum based multivariate calibration methods for simultaneous determination of betamethasone dipropionate, clotrimazole and benzyl alcohol: Development, validation and application on commercial dosage form.

    PubMed

    Darwish, Hany W; Elzanfaly, Eman S; Saad, Ahmed S; Abdelaleem, Abdelaziz El-Bayoumi

    2016-12-05

    Five different chemometric methods were developed for the simultaneous determination of betamethasone dipropionate (BMD), clotrimazole (CT) and benzyl alcohol (BA) in their combined dosage form (Lotriderm® cream). The applied methods included three full-spectrum chemometric techniques, namely principal component regression (PCR), partial least squares (PLS) and artificial neural networks (ANN), while the other two methods were PLS and ANN preceded by a genetic algorithm (GA-PLS and GA-ANN) as a wavelength selection step. A multilevel multifactor experimental design was adopted for proper construction of the models. A validation set composed of 12 mixtures containing different ratios of the three analytes was used to evaluate the predictive power of the suggested models. All the proposed methods except ANN were successfully applied to the analysis of the pharmaceutical formulation (Lotriderm® cream). The results demonstrated the efficiency of the four methods as quantitative tools for the analysis of the three analytes without prior separation procedures and without any interference from the co-formulated excipient. Additionally, the work highlighted the effect of the GA on increasing the predictive power of the PLS and ANN models. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Determination of calibration constants for the hole-drilling residual stress measurement technique applied to orthotropic composites. I - Theoretical considerations

    NASA Technical Reports Server (NTRS)

    Prasad, C. B.; Prabhakaran, R.; Tompkins, S.

    1987-01-01

    The hole-drilling technique for the measurement of residual stresses using electrical resistance strain gages has been widely used for isotropic materials and has been adopted by the ASTM as a standard method. For thin isotropic plates, with a hole drilled through the thickness, the idealized hole-drilling calibration constants are obtained by making use of the well-known Kirsch's solution. In this paper, an analogous attempt is made to theoretically determine the three idealized hole-drilling calibration constants for thin orthotropic materials by employing Savin's (1961) complex stress function approach.

  20. The fractal calibration method applied to the characterization of polymers in solvent mixtures and in mixed gel packings by SEC.

    PubMed

    Porcar, Iolanda; García-Lopera, Rosa; Abad, Concepción; Campos, Agustín

    2007-08-01

    The size-exclusion chromatographic (SEC) behaviour of different solvent/polymer systems in three packing sets has been analysed from fractal considerations. The three column sets studied are: (i) 'pure' micro-styragel, (ii) 'mixed' TSK Gel H(HR + XL + HR) and (iii) mixed TSK Gel H(XL + HR + XL). The experimental data reveal that in most of the systems assayed the classical universal calibration (UC) is not fulfilled, indicating the existence of secondary effects accompanying the main SEC mechanism. In order to obtain an accurate characterization of polymers eluted in solvent mixtures and/or mixed packings, a reliable calibration curve is required. In this sense, two alternative procedures have been analysed: the specific (SC) and the fractal (FC) calibrations. The results show that using the FC instead of the classical universal method reduces the mean deviation of the calculated molar mass from the value given by the supplier by up to a factor of nine (in the case of the micro-styragel set). In the case of the TSK Gel-based sets, the mean deviation is reduced by half. The SC curve, built with standards of the sample under study, also reduces the mean deviation but needs a broad set of narrow standards, whereas the fractal approach only needs one polymeric sample to build the calibration curve.

  1. Combination of GC/FID/Mass spectrometry fingerprints and multivariate calibration techniques for recognition of antimicrobial constituents of Myrtus communis L. essential oil.

    PubMed

    Ebrahimabadi, Ebrahim H; Ghoreishi, Sayed Mehdi; Masoum, Saeed; Ebrahimabadi, Abdolrasoul H

    2016-01-01

    Myrtus communis L. is an aromatic evergreen shrub whose essential oil is known to possess powerful antimicrobial activity. However, the contribution of each component of the essential oil to the observed antimicrobial activity is unclear. In this study, the chemical components of essential oil samples of the plant were identified and quantified using a GC/FID/mass spectrometry system, the antimicrobial activity of these samples against three microbial strains was evaluated, and the two sets of data were correlated using chemometric methods. Three chemometric methods, principal component regression (PCR), partial least squares (PLS) and orthogonal projections to latent structures (OPLS), were applied. The methods gave similar results, but OPLS was selected as the preferred method owing to its predictive and interpretational ability, simplicity, repeatability and low time consumption. The results showed that α-pinene, 1,8-cineole, β-pinene and limonene are the largest contributors to the antimicrobial properties of M. communis essential oil. Other studies have reported high antimicrobial activity for plant essential oils rich in these compounds, confirming our findings.

  2. Qualitative and simultaneous quantitative analysis of cimetidine polymorphs by ultraviolet-visible and shortwave near-infrared diffuse reflectance spectroscopy and multivariate calibration models.

    PubMed

    Feng, Yuyan; Li, Xiangling; Xu, Kailin; Zou, Huayu; Li, Hui; Liang, Bing

    2015-02-01

    The objective of the present study was to investigate the feasibility of applying ultraviolet-visible and shortwave near-infrared diffuse reflectance spectroscopy (UV-vis-SWNIR DRS) coupled with chemometrics to the qualitative and simultaneous quantitative analysis of drug polymorphs, using cimetidine as a model drug. Three polymorphic forms (A, B and D) and a mixed crystal (M1) of cimetidine, obtained under different crystallization conditions, were characterized by microscopy, X-ray powder diffraction (XRPD) and infrared spectroscopy (IR). Discriminant models for the four forms (A, B, D and M1) were established by discriminant partial least squares (PLS-DA) using differently pretreated spectra. For the prediction set, the discriminant model built on the original spectra gave an R of 0.9959 and an RMSEP of 0.1004. Among the quantitative models for binary mixtures (A and D) established by partial least squares (PLS) and least squares-support vector machine (LS-SVM) with differently pretreated spectra, the LS-SVM models based on the original and MSC spectra gave the best predictions, with an R of 1.0000 and an RMSEP of 0.0134 for form A, and an R of 1.0000 and an RMSEP of 0.0024 for form D. For ternary mixtures, the PLS quantitative models based on normalized spectra gave relatively better predictions for forms A, B and D, with R values of 0.9901, 0.9820 and 0.9794 and RMSEP values of 0.0471, 0.0529 and 0.0594, respectively. This research indicates that UV-vis-SWNIR DRS can be used as a simple, rapid and nondestructive qualitative and quantitative method for the analysis of drug polymorphs.

  3. Simultaneous determination of aspartame, cyclamate, saccharin and acesulfame-K in powder tabletop sweeteners by FT-Raman spectroscopy associated with the multivariate calibration: PLS, iPLS and siPLS models were compared.

    PubMed

    Duarte, Lucas M; Paschoal, Diego; Izumi, Celly M S; Dolzan, Maressa D; Alves, Victor R; Micke, Gustavo A; Dos Santos, Hélio F; de Oliveira, Marcone A L

    2017-09-01

    For the first time, a procedure for the simultaneous determination of the main artificial sweeteners, aspartame (ASP), cyclamate (CYC), saccharin (SAC) and acesulfame-K (ACSK), by a spectroscopic method combined with multivariate calibration is proposed. These analytes were quantified in tabletop sweetener samples using FT-Raman spectroscopy. Liquid chromatography-tandem mass spectrometry (LC-MS/MS) was used as the reference method. Partial least squares (PLS), interval PLS (iPLS) and synergy interval PLS (siPLS) methods were evaluated in a comparative study, in which the interval-selection models gave the better results. The lowest root mean square errors of prediction (RMSEP) found were 0.027-0.031% w/w, 0.316-0.363% w/w, 0.082-0.184% w/w and 0.040-0.049% w/w for ASP, CYC, SAC and ACSK, respectively. The coefficients of determination for prediction (R²p) ranged over 0.978-0.979, 0.969-0.977, 0.952-0.994 and 0.959-0.965 for ASP, CYC, SAC and ACSK, respectively. Model residuals were analysed by bias and permutation tests to check for systematic and trend errors. The intervals selected by iPLS and siPLS were evaluated, and the bands related to the vibrational modes of the analytes were assigned with the aid of density functional theory (DFT) calculations. Copyright © 2017. Published by Elsevier Ltd.
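
    The interval-PLS idea can be sketched in a few lines: split the spectrum into contiguous windows, fit a PLS model on each window, and keep the window with the lowest cross-validated RMSE (a generic illustration, not the exact iPLS/siPLS configuration used in the paper):

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_score

      def ipls(X, y, n_intervals=10, n_components=3, cv=5):
          """Return (best spectral window, its cross-validated RMSE)."""
          edges = np.linspace(0, X.shape[1], n_intervals + 1, dtype=int)
          best = None
          for lo, hi in zip(edges[:-1], edges[1:]):
              pls = PLSRegression(n_components=min(n_components, hi - lo))
              rmse = -cross_val_score(pls, X[:, lo:hi], y, cv=cv,
                                      scoring="neg_root_mean_squared_error").mean()
              if best is None or rmse < best[1]:
                  best = (slice(lo, hi), rmse)
          return best

      # Usage (X: FT-Raman spectra, y: one sweetener's content in % w/w):
      # window, rmse = ipls(X, y)
      # final_model = PLSRegression(n_components=3).fit(X[:, window], y)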

  4. A flow-batch analyzer with piston propulsion applied to automatic preparation of calibration solutions for Mn determination in mineral waters by ET AAS.

    PubMed

    Almeida, Luciano F; Vale, Maria G R; Dessuy, Morgana B; Silva, Márcia M; Lima, Renato S; Santos, Vagner B; Diniz, Paulo H D; Araújo, Mário C U

    2007-10-31

    The increasing development of miniaturized flow systems and the continuous monitoring of chemical processes call for dramatically simplified, inexpensive flow schemes and instrumentation with a large potential for miniaturization and, consequently, portability. For these purposes, systems based on combined flow and batch technologies are a good alternative. Flow-batch analyzers (FBA) have been successfully applied to analytical procedures such as titrations, sample pre-treatment, analyte addition and screening analysis. In spite of these favourable characteristics, previously proposed FBAs use peristaltic pumps to propel the fluids; this kind of propulsion is costly and bulky, making miniaturization and portability unfeasible. To overcome these drawbacks, a low-cost, robust and compact FBA that dispenses with peristaltic pumps is proposed. It makes use of a lab-made piston coupled to a mixing chamber and a step motor controlled by a microcomputer. The piston-propelled FBA (PFBA) was applied to the automatic preparation of calibration solutions for manganese determination in mineral waters by electrothermal atomic absorption spectrometry (ET AAS). Comparing the results obtained with two sets of calibration curves (five prepared manually and five by the PFBA), no statistically significant differences at the 95% confidence level were observed by applying the paired t-test. The standard deviations of the manual and PFBA procedures were always smaller than 0.2 and 0.1 µg L⁻¹, respectively. Using the PFBA, it was possible to prepare about 80 calibration solutions per hour.

  5. Four-way calibration applied to the simultaneous determination of folic acid and methotrexate in urine samples.

    PubMed

    Muñoz de la Peña, A; Durán Merás, I; Jiménez Girón, A

    2006-08-01

    First-, second- and third-order calibration methods were investigated for the simultaneous determination of folic acid and methotrexate. The interest in the determination of these compounds is related to the fact that methotrexate inhibits the body's absorption of folic acid and prolonged treatment with methotrexate may lead to folic acid deficiency, and to the use of folic acid to cope with toxic side effects of methotrexate. Both analytes were converted into highly fluorescent compounds by oxidation with potassium permanganate, and the kinetics of the reaction was continuously monitored by recording the kinetics curves of fluorescence emission, the evolution with time of the emission spectra and the excitation-emission matrices (EEMs) of the samples at different reaction times. Direct determination of mixtures of both drugs in urine was accomplished on the basis of the evolution of the kinetics of EEMs by fluorescence measurements and four-way parallel-factor analysis (PARAFAC) or multiway partial least squares (N-PLS) chemometric calibration. The core consistency diagnostic (CORCONDIA) was employed to determine the correct number of factors in PARAFAC and the procedure converged to a choice of three factors, attributed to folic acid, methotrexate and to the sum of fluorescent species present in the urine.
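
    Decomposing a four-way array of kinetic excitation-emission data with PARAFAC can be sketched as follows, assuming the third-party tensorly package (whose API may differ between versions); the array shape and the rank of three echo the three factors reported above but are otherwise illustrative:

      import numpy as np
      import tensorly as tl
      from tensorly.decomposition import parafac

      rng = np.random.default_rng(4)

      # Hypothetical four-way array: samples x excitation x emission x reaction time
      # (a placeholder for the measured kinetic EEM data).
      data = rng.random((12, 30, 40, 8))

      # Decomposition with three factors (folic acid, methotrexate and the urine
      # background, as reported above).
      weights, factors = parafac(tl.tensor(data), rank=3, n_iter_max=500)
      sample_scores, excitation, emission, kinetics = factors

      # Analyte concentrations are then obtained by regressing the sample-mode
      # scores of the calibration samples against their known concentrations.
      print(sample_scores.shape, excitation.shape, emission.shape, kinetics.shape)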

  6. Calibration maintenance and transfer using Tikhonov regularization approaches.

    PubMed

    Kalivas, John H; Siano, Gabriel G; Andries, Erik; Goicoechea, Hector C

    2009-07-01

    Maintaining multivariate calibrations is essential and involves keeping models developed on an instrument applicable to predicting new samples over time. Sometimes a primary instrument model is needed to predict samples measured on secondary instruments. This situation is referred to as calibration transfer. This paper reports on using a Tikhonov regularization (TR) based method in both cases. A distinction of the TR design for calibration maintenance and transfer is a defined weighting scheme for a small set of new (transfer or standardization) samples augmented to the full set of calibration samples. Because straight application of basic TR theory is not always possible with calibration maintenance and transfer, this paper develops a generic solution to always enable application of TR. Harmonious (bias/variance tradeoff) and parsimonious (effective rank) considerations for TR are compared with the same TR format applied to partial least squares (PLS), showing that both approaches are viable solutions to the calibration maintenance and transfer problems.
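
    In spirit, the TR approach to maintenance and transfer augments the primary calibration data with a small, heavily weighted set of standardization samples measured under the new condition and solves a ridge-type problem; the sketch below is a minimal generic version with an arbitrary weighting, not the paper's exact formulation:

      import numpy as np

      def tr_update(X_cal, y_cal, X_new, y_new, lam=1e-3, weight=10.0):
          """Regression vector from the primary calibration set augmented with a few
          weighted transfer/standardization samples (Tikhonov/ridge-type solution)."""
          X_aug = np.vstack([X_cal, weight * X_new])
          y_aug = np.concatenate([y_cal, weight * y_new])
          n_var = X_aug.shape[1]
          # Solve min ||y_aug - X_aug b||^2 + lam * ||b||^2 via an augmented system.
          A = np.vstack([X_aug, np.sqrt(lam) * np.eye(n_var)])
          rhs = np.concatenate([y_aug, np.zeros(n_var)])
          coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)
          return coef

      # y_pred_secondary = X_secondary @ tr_update(X_primary, y_primary,
      #                                            X_transfer, y_transfer)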

  7. Uncertainty Analysis of Instrument Calibration and Application

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimating both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.

  8. Uncertainty Analysis of Inertial Model Attitude Sensor Calibration and Application with a Recommended New Calibration Method

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Statistical tools, previously developed for nonlinear least-squares estimation of multivariate sensor calibration parameters and the associated calibration uncertainty analysis, have been applied to single- and multiple-axis inertial model attitude sensors used in wind tunnel testing to measure angle of attack and roll angle. The analysis provides confidence and prediction intervals of calibrated sensor measurement uncertainty as functions of applied input pitch and roll angles. A comparative performance study of various experimental designs for inertial sensor calibration is presented along with corroborating experimental data. The importance of replicated calibrations over extended time periods has been emphasized; replication provides independent estimates of calibration precision and bias uncertainties, statistical tests for calibration or modeling bias uncertainty, and statistical tests for sensor parameter drift over time. A set of recommendations for a new standardized model attitude sensor calibration method and usage procedures is included. The statistical information provided by these procedures is necessary for the uncertainty analysis of aerospace test results now required by users of industrial wind tunnel test facilities.

  9. Standardisation of elemental analytical techniques applied to provenance studies of archaeological ceramics: an inter laboratory calibration study.

    PubMed

    Hein, A; Tsolakidou, A; Iliopoulos, I; Mommsen, H; Buxeda i Garrigós, J; Montana, G; Kilikoglou, V

    2002-04-01

    Chemical analysis is a well-established procedure for the provenancing of archaeological ceramics. Various analytical techniques are routinely used and large amounts of data have been accumulated so far in data banks. However, in order to exchange results obtained by different laboratories, the respective analytical procedures need to be tested in terms of their inter-comparability. In this study, the schemes of analysis used in four laboratories that are involved in archaeological pottery studies on a routine basis were compared. The techniques investigated were neutron activation analysis (NAA), X-ray fluorescence analysis (XRF), inductively coupled plasma optical emission spectrometry (ICP-OES) and inductively coupled plasma mass spectrometry (ICP-MS). For this comparison, series of measurements on different geological standard reference materials (SRM) were carried out and the results were statistically evaluated. An attempt was also made towards the establishment of calibration factors between pairs of analytical setups in order to smooth out the systematic differences among the results.

  10. Langley method applied in study of aerosol optical depth in the Brazilian semiarid region using 500, 670 and 870 nm bands for sun photometer calibration

    NASA Astrophysics Data System (ADS)

    Cerqueira, J. G.; Fernandez, J. H.; Hoelzemann, J. J.; Leme, N. M. P.; Sousa, C. T.

    2014-10-01

    Due to the high cost of commercial monitoring instruments, a portable sun photometer operating in four bands, two in the visible spectrum and two in the near infrared, was developed at the INPE/CRN laboratories. The instrument is calibrated by applying the classical Langley method. Application of the Langley methodology requires a site with high optical stability during the measurements, which is usually found at high altitude. However, far from an ideal site, Harrison et al. (1994) report success in applying the Langley method to some data from a site in Boulder, Colorado. More recently, Liu et al. (2011) showed that low-elevation sites far from urban and industrial centers can provide an optical depth as stable as that of high-altitude sites. In this study we investigated the feasibility of applying the methodology for sun photometer calibration in the semiarid region of northeastern Brazil, a low-altitude area far from pollution sources. Optical depth stability was examined over two measurement periods during the dry season in austral summer: the first in December, when the native vegetation dries naturally and loses all its leaves, and the second in September, in the middle of the dry season, when the vegetation still has leaves. The data were collected on four days in December 2012 and four days in September 2013, totaling eleven half-days of measurements split between mornings and afternoons, and V0 values were obtained by fitting a line to the data. Despite the high correlation between the collected data and the fitted lines, the variation between V0 values was greater than is acceptable for sun photometer calibration. The lowest V0 variations reached in this experiment, below 3% for the 500, 670 and 870 nm bands, are displayed in tables. The results indicate that the site needs to be better characterized with studies in more favorable periods, soon after the rainy season.
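
    The Langley calibration itself is a straight-line fit of the logarithm of the signal against air mass, extrapolated to zero air mass; a minimal sketch with invented voltages and air masses:

      import numpy as np

      # Langley method: ln(V) = ln(V0) - tau * m, with V the photometer signal,
      # m the optical air mass and tau the (assumed constant) total optical depth.
      m = np.array([1.2, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0])      # air masses (made up)
      noise = 1 + np.random.default_rng(5).normal(0, 0.005, m.size)
      V = 2.05 * np.exp(-0.18 * m) * noise                   # synthetic signals

      slope, intercept = np.polyfit(m, np.log(V), 1)
      V0, tau = np.exp(intercept), -slope
      print(f"extrapolated V0 = {V0:.3f}, optical depth tau = {tau:.3f}")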

  11. Influence of prosthesis type and retention mechanism on complications with fixed implant-supported prostheses: a systematic review applying multivariate analyses.

    PubMed

    Millen, Christopher; Brägger, Urs; Wittneben, Julia-Gabriela

    2015-01-01

    To identify the influence of fixed prosthesis type on biologic and technical complication rates in the context of screw versus cement retention. Furthermore, a multivariate analysis was conducted to determine which factors, when considered together, influence the complication and failure rates of fixed implant-supported prostheses. Electronic searches of MEDLINE (PubMed), EMBASE, and the Cochrane Library were conducted. Selected inclusion and exclusion criteria were used to limit the search. Data were analyzed statistically with simple and multivariate random-effects Poisson regressions. Seventy-three articles qualified for inclusion in the study. For single crowns, screw-retained prostheses showed a tendency toward more technical complications than cemented prostheses, and for fixed partial prostheses the difference was significant. Resin chipping and ceramic veneer chipping had high mean event rates, at 10.04 and 8.95 per 100 years, respectively, for full-arch screwed prostheses. For "all fixed prostheses" (prosthesis type not reported or not known), significantly fewer biologic and technical complications were seen with screw retention. Multivariate analysis revealed a significantly greater incidence of technical complications with cemented prostheses. Full-arch prostheses, cantilevered prostheses, and "all fixed prostheses" had significantly higher complication rates than single crowns. A significantly greater incidence of technical and biologic complications was seen with cemented prostheses. Screw-retained fixed partial prostheses demonstrated a significantly higher rate of technical complications, and screw-retained full-arch prostheses demonstrated a notably high rate of veneer chipping. When "all fixed prostheses" were considered, significantly higher rates of technical and biologic complications were seen for cement-retained prostheses. Multivariate Poisson regression analysis failed to show a significant difference between screw- and cement-retained prostheses.

  12. Use of KNN technique to improve the efficiency of SCE-UA optimisation method applied to the calibration of HBV Rainfall-Runoff model

    NASA Astrophysics Data System (ADS)

    Dakhlaoui, H.; Bargaoui, Z.

    2007-12-01

    The calibration of rainfall-runoff models can be viewed as an optimisation problem involving an objective function that measures model performance, expressed as a distance between observed and calculated discharges. Effectiveness (the ability to find the optimum) and efficiency (the cost, expressed as the number of objective function evaluations needed to reach the optimum) are the main criteria for choosing an optimisation method. SCE-UA is known as one of the most effective and efficient optimisation methods. In this work we tried to improve the efficiency of SCE-UA, in the case of the calibration of the HBV model, by using a KNN technique to estimate the objective function. After a number of SCE-UA iterations in which the objective function is evaluated by model simulation, a database of explored parameter sets and their objective function values is built up. From this database, the objective function in further iterations is estimated by interpolation over the nearest neighbours in a normalised parameter space, using a weighted Euclidean distance. The weights are chosen proportional to the sensitivity of each parameter to the objective function, which gives more importance to sensitive parameters. Model output is evaluated with the objective function RV = R² - w|RD|, where R² is the Nash-Sutcliffe coefficient of the discharges, w a weight and RD the relative bias. Applied to theoretical and practical cases in several catchments under different climatic conditions, Rottweil (Germany) and Tessa, Barbra and Sejnane (Tunisia), the hybrid SCE-UA is about 20 to 30% more efficient than the original SCE-UA. By using other techniques, such as parameter space transformation and modification of SCE-UA (2), an algorithm two to three times faster may be obtained. (1) Avi Ostfeld, Shani Salomons, "A hybrid genetic-instance learning algorithm for CE-QUAL-W2 calibration", Journal of Hydrology 310 (2005) 122-125. (2) Nitin Mutil and Shie-Yui Liong, "Improved robustness and Efficiency
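
    The KNN surrogate described above can be sketched as follows: parameter sets are normalised, distances are weighted by parameter sensitivities, and the objective function of a candidate set is approximated by inverse-distance weighting over its k nearest previously simulated neighbours (k, the weights and the archive handling are illustrative assumptions):

      import numpy as np

      def knn_objective(theta, archive_params, archive_obj, sensitivity, k=5):
          """Approximate the objective function at parameter set `theta` from an
          archive of already-simulated parameter sets and their objective values."""
          lo, hi = archive_params.min(axis=0), archive_params.max(axis=0)
          span = np.where(hi > lo, hi - lo, 1.0)
          z_archive = (archive_params - lo) / span        # normalised parameter space
          z_theta = (theta - lo) / span
          w = sensitivity / sensitivity.sum()             # sensitivity-based weights
          d = np.sqrt((w * (z_archive - z_theta) ** 2).sum(axis=1))
          nearest = np.argsort(d)[:k]
          inv = 1.0 / (d[nearest] + 1e-12)                # inverse-distance weighting
          return float((archive_obj[nearest] * inv).sum() / inv.sum())

      # In the hybrid search, knn_objective(...) replaces a full HBV simulation once
      # the archive of (parameter set, RV value) pairs is large enough.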

  13. Calibration and Assessment of a Distributed Hydrologic Model Applied to a Glacierized Basin in the Cordillera Blanca, Peru

    NASA Astrophysics Data System (ADS)

    Burns, P. J.; Nolin, A. W.; Lettenmaier, D. P.; Clarke, G. K.; Naz, B. S.; Gleason, K. E.

    2011-12-01

    Glacier retreat has been well documented in the Cordillera Blanca of the Peruvian Andes. It is becoming clearer that changes in glacier area and volume will negatively affect water resources in this region, particularly during the dry season (May to September). Previous studies focusing on this issue in the Cordillera Blanca have had success modeling runoff but did so using somewhat over-simplified hydrologic models. The question driving this study is: How well does the Distributed Hydrology Soil and Vegetation Model (DHSVM), coupled with a new dynamic glacier sub-model, replicate runoff in a test basin of the Cordillera Blanca, namely Llanganuco? During the 2011 dry season we collected data on stream discharge, meteorological conditions, soil, and vegetation in the basin. We installed two stage height recorders in the middle reaches of the watershed to complement a third which delineates the basin outlet. Flow data collected at these points will be used for model calibration and/or validation. For geochemical validation we collected spring and meltwater samples for use in a two-component isotopic mixing model. We also mapped dominant soil and vegetation types for model input. We use satellite imagery (ASTER and Landsat) to map the change in glacier extent over approximately the last 30 years, as this will be another model input. Coupled together, all of these data will be used to run, validate, and refine a model which will also be implemented in other regions of the world where glacier melt is crucial at certain times of the year.

  14. Precision limits and interval estimation in the calibration of 1-hydroxypyrene in urine and hexachlorbenzene in water, applying the regression triplet procedure on chromatographic data.

    PubMed

    Meloun, Milan; Dluhosová, Zdenka

    2008-04-01

    A method for the determination of 1-hydroxypyrene in urine and hexachlorbenzene in water, applying the regression triplet to the calibration of chromatographic data, has been developed. The detection limit and quantification limit are currently calculated on the basis of the standard deviation of replicate analyses at a single concentration. However, since the standard deviation depends on concentration, these single-concentration techniques result in limits that are directly dependent on the spiking concentration. A more rigorous approach requires careful attention to the three components of the regression triplet (data, model, method), examining (1) the quality of the data for the proposed model, (2) the quality of the model and (3) the least-squares method to be used, so that all least-squares assumptions are fulfilled. For the high-performance liquid chromatography determination of 1-hydroxypyrene in urine and the gas chromatography analysis of hexachlorbenzene in water, this paper describes the effects of deviations from five basic assumptions and their correction: influential points (namely outliers) must be identified, the calibration depends on the regression model used, and the least-squares method rests on the assumptions of normality, homoscedasticity and independence of the errors. Results show that the approach developed provides improved estimates of analytical limits and that the single-concentration approaches currently in wide use are seriously flawed.

  15. [Uncertainty of cross calibration-applied beam quality conversion factor for the Japan Society of Medical Physics 12].

    PubMed

    Kinoshita, Naoki; Kita, Akinobu; Takemura, Akihiro; Nishimoto, Yasuhiro; Adachi, Toshiki

    2014-09-01

    The uncertainty of the beam quality conversion factor (k(Q,Q0)) in the standard dosimetry of absorbed dose to water in external beam radiotherapy 12 (JSMP12) is determined by combining the uncertainties of the beam quality conversion factors calculated for each type of ionization chamber. However, there is no guarantee that ionization chambers of the same type have the same structure and thickness, so there may be individual variations. We evaluated the uncertainty of k(Q,Q0) for JSMP12 using an ionization chamber dosimeter and a linear accelerator, without any specific device or technique, taking the individual variation of ionization chambers into account and working in a clinical radiation field. The cross-calibration formula was modified and the beam quality conversion factor for the experimental values [(k(Q,Q0))field] was determined using the modified formula. Its uncertainty was calculated to be 1.9%. The differences between the experimental (k(Q,Q0))field values and k(Q,Q0) for Japan Society of Medical Physics 12 (JSMP12) were 0.73% and 0.88% for 6- and 10-MV photon beams, respectively, remaining within ±1.9%. This showed k(Q,Q0) for JSMP12 to be consistent with the experimental (k(Q,Q0))field values within the estimated uncertainty range. Although individual differences may arise even when the same type of ionization chamber is used, k(Q,Q0) for JSMP12 appears to be consistent with (k(Q,Q0))field within the estimated uncertainty range.

  16. A Portable Ground-Based Atmospheric Monitoring System (PGAMS) for the Calibration and Validation of Atmospheric Correction Algorithms Applied to Aircraft and Satellite Images

    NASA Technical Reports Server (NTRS)

    Schiller, Stephen; Luvall, Jeffrey C.; Rickman, Doug L.; Arnold, James E. (Technical Monitor)

    2000-01-01

    Detecting changes in the Earth's environment using satellite images of ocean and land surfaces must take into account atmospheric effects. As a result, major programs are underway to develop algorithms for image retrieval of atmospheric aerosol properties and atmospheric correction. However, because of the temporal and spatial variability of atmospheric transmittance, it is very difficult to model atmospheric effects and implement models in an operational mode. For this reason, simultaneous in situ ground measurements of atmospheric optical properties are vital to the development of accurate atmospheric correction techniques. Presented in this paper is a spectroradiometer system that provides an optimized set of surface measurements for the calibration and validation of atmospheric correction algorithms. The Portable Ground-based Atmospheric Monitoring System (PGAMS) obtains a comprehensive series of in situ irradiance, radiance, and reflectance measurements for the calibration of atmospheric correction algorithms applied to multispectral and hyperspectral images. The observations include: total downwelling irradiance, diffuse sky irradiance, direct solar irradiance, path radiance in the direction of the north celestial pole, path radiance in the direction of the overflying satellite, almucantar scans of path radiance, full sky radiance maps, and surface reflectance. Each of these parameters is recorded over a wavelength range from 350 to 1050 nm in 512 channels. The system is fast, with the potential to acquire the complete set of observations in only 8 to 10 minutes, depending on the selected spatial resolution of the sky path radiance measurements.

  17. Calibrating Wide Field Surveys

    NASA Astrophysics Data System (ADS)

    González Fernández, Carlos; Irwin, M.; Lewis, J.; González Solares, E.

    2017-09-01

    "In this talk I will review the strategies in CASU to calibrate wide field surveys, in particular applied to data taken with the VISTA telescope. These include traditional night-by-night calibrations along with the search for a global, coherent calibration of all the data once observations are finished. The difficulties of obtaining photometric accuracy of a few percent and a good absolute calibration will also be discussed."

  18. ORNL calibrations facility

    SciTech Connect

    Berger, C.D.; Gupton, E.D.; Lane, B.H.; Miller, J.H.; Nichols, S.W.

    1982-08-01

    The ORNL Calibrations Facility is operated by the Instrumentation Group of the Industrial Safety and Applied Health Physics Division. Its primary purpose is to maintain radiation calibration standards for calibration of ORNL health physics instruments and personnel dosimeters. This report includes a discussion of the radioactive sources and ancillary equipment in use and a step-by-step procedure for calibration of those survey instruments and personnel dosimeters in routine use at ORNL.

  19. Multivariate postprocessing techniques for probabilistic hydrological forecasting

    NASA Astrophysics Data System (ADS)

    Hemri, Stephan; Lisniak, Dmytro; Klein, Bastian

    2016-04-01

    Hydrologic ensemble forecasts driven by atmospheric ensemble prediction systems need statistical postprocessing in order to account for systematic errors in terms of both mean and spread. Runoff is an inherently multivariate process with typical events lasting from hours in case of floods to weeks or even months in case of droughts. This calls for multivariate postprocessing techniques that yield well calibrated forecasts in univariate terms and ensure a realistic temporal dependence structure at the same time. To this end, the univariate ensemble model output statistics (EMOS; Gneiting et al., 2005) postprocessing method is combined with two different copula approaches that ensure multivariate calibration throughout the entire forecast horizon. These approaches comprise ensemble copula coupling (ECC; Schefzik et al., 2013), which preserves the dependence structure of the raw ensemble, and a Gaussian copula approach (GCA; Pinson and Girard, 2012), which estimates the temporal correlations from training observations. Both methods are tested in a case study covering three subcatchments of the river Rhine that represent different sizes and hydrological regimes: the Upper Rhine up to the gauge Maxau, the river Moselle up to the gauge Trier, and the river Lahn up to the gauge Kalkofen. The results indicate that both ECC and GCA are suitable for modelling the temporal dependences of probabilistic hydrologic forecasts (Hemri et al., 2015). References Gneiting, T., A. E. Raftery, A. H. Westveld, and T. Goldman (2005), Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation, Monthly Weather Review, 133(5), 1098-1118, DOI: 10.1175/MWR2904.1. Hemri, S., D. Lisniak, and B. Klein, Multivariate postprocessing techniques for probabilistic hydrological forecasting, Water Resources Research, 51(9), 7436-7451, DOI: 10.1002/2014WR016473. Pinson, P., and R. Girard (2012), Evaluating the quality of scenarios of short-term wind power
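
    Ensemble copula coupling is simple to state in code: draw calibrated quantiles from the univariately postprocessed (e.g. EMOS) distributions and reorder them at each lead time according to the rank order of the raw ensemble, preserving its temporal dependence structure (the Gaussian predictive distributions below are a placeholder for a fitted EMOS model):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)
      n_members, n_leadtimes = 11, 24
      raw_ensemble = np.cumsum(rng.normal(0, 1, (n_members, n_leadtimes)), axis=1)

      # Placeholder for univariate EMOS output: a predictive mean/sd per lead time.
      mu = raw_ensemble.mean(axis=0)
      sd = 1.1 * raw_ensemble.std(axis=0) + 0.1

      # Draw equidistant calibrated quantiles, then reorder them at each lead time
      # according to the ranks of the raw ensemble (the ECC-Q variant).
      probs = (np.arange(1, n_members + 1) - 0.5) / n_members
      calibrated = stats.norm.ppf(probs[:, None], loc=mu, scale=sd)  # sorted samples
      ranks = raw_ensemble.argsort(axis=0).argsort(axis=0)           # raw ranks
      ecc_ensemble = np.take_along_axis(calibrated, ranks, axis=0)

      print(ecc_ensemble.shape)   # (members, lead times), raw dependence preserved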

  20. Improving self-calibration.

    PubMed

    Enßlin, Torsten A; Junklewitz, Henrik; Winderling, Lars; Greiner, Maksim; Selig, Marco

    2014-10-01

    Response calibration is the process of inferring how much the measured data depend on the signal one is interested in. It is essential for any quantitative signal estimation on the basis of the data. Here, we investigate self-calibration methods for linear signal measurements and linear dependence of the response on the calibration parameters. The common practice is to augment an external calibration solution using a known reference signal with an internal calibration on the unknown measurement signal itself. Contemporary self-calibration schemes try to find a self-consistent solution for signal and calibration by exploiting redundancies in the measurements. This can be understood in terms of maximizing the joint probability of signal and calibration. However, the full uncertainty structure of this joint probability around its maximum is thereby not taken into account by these schemes. Therefore, better schemes, in sense of minimal square error, can be designed by accounting for asymmetries in the uncertainty of signal and calibration. We argue that at least a systematic correction of the common self-calibration scheme should be applied in many measurement situations in order to properly treat uncertainties of the signal on which one calibrates. Otherwise, the calibration solutions suffer from a systematic bias, which consequently distorts the signal reconstruction. Furthermore, we argue that nonparametric, signal-to-noise filtered calibration should provide more accurate reconstructions than the common bin averages and provide a new, improved self-calibration scheme. We illustrate our findings with a simplistic numerical example.

  1. Three-way multivariate curve resolution applied to speciation of acid-base and thermal unfolding transitions of an alternating polynucleotide.

    PubMed

    Vives, M; Gargallo, R; Tauler, R

    2001-12-01

    Analytical speciation of acid-base equilibria and thermal unfolding transitions of an alternating random polynucleotide containing cytosine and hypoxanthine, poly(C, I), is studied. The results are compared with those obtained previously for single-stranded polynucleotides, poly(I) and poly(C), and for the double-stranded poly(I)·poly(C), to examine the influence of the secondary structure on the acid-base properties of bases. This study is based on monitoring acid-base titrations and thermal unfolding experiments by molecular absorption, CD, and molecular fluorescence spectroscopies. Experimental data were analyzed by a novel chemometric approach based on a recently developed three-way Multivariate Curve Resolution method, which allowed the simultaneous analysis of data from several spectroscopies. This procedure improves the resolution of the concentration profiles and pure spectra for the species and conformations present in folding-unfolding and acid-base equilibria. The results from acid-base studies showed the existence of only three species in the pH range 2-12 at 37 °C and 0.15 M ionic strength. No cooperative effects were detected from the resolved concentration profiles, showing that equilibria concerning alternating polynucleotides like poly(C, I) are simpler than those involving poly(I)·poly(C). Thermal unfolding experiments at neutral pH confirmed the existence of two transitions and one intermediate conformation. This intermediate conformation could only be detected and resolved without ambiguities when molecular absorption and CD spectral data were analyzed simultaneously. Copyright 2001 John Wiley & Sons, Inc. Biopolymers 59: 477-488, 2001

  2. Seasonal variation of benzo(a)pyrene in the Spanish airborne PM10. Multivariate linear regression model applied to estimate BaP concentrations.

    PubMed

    Callén, M S; López, J M; Mastral, A M

    2010-08-15

    The estimation of benzo(a)pyrene (BaP) concentrations in ambient air is very important from an environmental point of view, especially with the introduction of Directive 2004/107/EC and because of the carcinogenic character of this pollutant. Samples of particulate matter with aerodynamic diameter of 10 µm or less (PM10) were collected during a 2008-2009 campaign at four locations in Spain, and BaP concentrations were determined experimentally by gas chromatography-tandem mass spectrometry (GC-MS-MS). Multivariate linear regression models (MLRM) were used to predict BaP air concentrations at two sampling sites, taking PM10 and meteorological variables as possible predictors. The model obtained with data from the two sampling sites (all-sites model) (R²=0.817, PRESS/SSY=0.183) included the significant variables PM10, temperature, solar radiation and wind speed and was validated both internally and externally. The internal validation was performed by cross-validation and the external one against BaP concentrations from previous campaigns carried out in Zaragoza from 2001 to 2004. The proposed model constitutes a first approximation for estimating BaP concentrations in urban atmospheres, with very good internal prediction (Q²CV=0.813, PRESS/SSY=0.187) and with the best external prediction for the 2001-2002 campaign (Q²ext=0.679, PRESS/SSY=0.321) compared with the 2001-2004 campaign (Q²ext=0.551, PRESS/SSY=0.449).
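
    A toy version of such an MLRM, predicting BaP from PM10 and meteorological predictors with scikit-learn (all numbers are synthetic, not the campaign data):

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(7)

      # Hypothetical predictors: PM10 (µg/m3), temperature (°C), solar radiation
      # (W/m2) and wind speed (m/s); response: BaP concentration (ng/m3).
      X = np.column_stack([rng.uniform(10, 80, 150),      # PM10
                           rng.uniform(-5, 35, 150),      # temperature
                           rng.uniform(0, 900, 150),      # solar radiation
                           rng.uniform(0, 10, 150)])      # wind speed
      bap = (1.0 + 0.02 * X[:, 0] - 0.01 * X[:, 1]
             - 0.0005 * X[:, 2] - 0.03 * X[:, 3] + rng.normal(0, 0.1, 150))

      mlrm = LinearRegression().fit(X, bap)
      q2_cv = cross_val_score(mlrm, X, bap, cv=5, scoring="r2").mean()
      print("coefficients:", mlrm.coef_, "cross-validated R2:", round(q2_cv, 3))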

  3. A new and consistent parameter for measuring the quality of multivariate analytical methods: Generalized analytical sensitivity.

    PubMed

    Fragoso, Wallace; Allegrini, Franco; Olivieri, Alejandro C

    2016-08-24

    Generalized analytical sensitivity (γ) is proposed as a new figure of merit, which can be estimated from a multivariate calibration data set. It can be confidently applied to compare different calibration methodologies, and helps to solve literature inconsistencies on the relationship between classical sensitivity and prediction error. In contrast to the classical plain sensitivity, γ incorporates the noise properties in its definition, and its inverse is well correlated with root mean square errors of prediction in the presence of general noise structures. The proposal is supported by studying simulated and experimental first-order multivariate calibration systems with various models, namely multiple linear regression, principal component regression (PCR) and maximum likelihood PCR (MLPCR). The simulations included instrumental noise of different types: independently and identically distributed (iid), correlated (pink) and proportional noise, while the experimental data carried noise which is clearly non-iid. Copyright © 2016 Elsevier B.V. All rights reserved.
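
    The paper's exact definition is not reproduced here; one consistent way to fold the noise covariance Σ into a sensitivity-type figure of merit, which reduces to the classical analytical sensitivity 1/(σ‖b‖) for iid noise, is the inverse square root of bᵀΣb for the regression vector b, and the sketch below computes that quantity under this assumption:

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      def sensitivity_figures(X, y, noise_cov, n_components=3):
          """Classical sensitivity, iid analytical sensitivity and a generalized
          analytical sensitivity accounting for the noise covariance (this mapping
          is an assumed interpretation, not the paper's verbatim definition)."""
          b = PLSRegression(n_components=n_components).fit(X, y).coef_.ravel()
          sen = 1.0 / np.linalg.norm(b)                    # classical sensitivity
          sigma_iid = np.sqrt(np.trace(noise_cov) / noise_cov.shape[0])
          gamma_iid = sen / sigma_iid                      # analytical sensitivity
          gamma_gen = 1.0 / np.sqrt(b @ noise_cov @ b)     # noise-aware version
          return sen, gamma_iid, gamma_gen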

  4. Error estimates for ocean surface winds: Applying Desroziers diagnostics to the Cross-Calibrated, Multi-Platform analysis of wind speed

    NASA Astrophysics Data System (ADS)

    Hoffman, Ross N.; Ardizzone, Joseph V.; Leidner, S. Mark; Smith, Deborah K.; Atlas, Robert M.

    2013-04-01

    The cross-calibrated, multi-platform (CCMP) ocean surface wind project [Atlas et al., 2011] generates high-quality, high-resolution, vector winds over the world's oceans beginning with the 1987 launch of the SSM/I F08, using Remote Sensing Systems (RSS) microwave satellite wind retrievals, as well as in situ observations from ships and buoys. The variational analysis method [VAM, Hoffman et al., 2003] is at the center of the CCMP project's analysis procedures for combining observations of the wind. The VAM was developed as a smoothing spline and so implicitly defines the background error covariance by means of several constraints with adjustable weights, and does not provide an explicit estimate of the analysis error. Here we report on our research to develop wind speed uncertainty estimates for the VAM inputs and outputs, i.e., for the background (B), the observations (O) and the analysis (A) wind speed, based on the Desroziers et al. [2005] diagnostics (DD hereafter). The DD are applied to the CCMP ocean surface wind data sets to estimate wind speed errors of the ECMWF background, the microwave satellite observations and the resulting CCMP analysis. The DD confirm that the ECMWF operational surface wind speed error standard deviations vary with latitude in the range 0.7-1.5 m/s and that the cross-calibrated Remote Sensing Systems (RSS) wind speed retrieval standard deviations are in the range 0.5-0.8 m/s. Further, the estimated CCMP analysis wind speed standard deviations are in the range 0.2-0.4 m/s. The results suggest the need to revise the parameterization of the errors due to the FGAT (first guess at the appropriate time) procedure. Errors for wind speeds < 16 m/s are homogeneous, but for the relatively rare but critical higher wind speed situations, errors are much larger. Atlas, R., R. N. Hoffman, J. Ardizzone, S. M. Leidner, J. C. Jusem, D. K. Smith, and D. Gombos, A cross-calibrated, multi-platform ocean surface wind velocity product for
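
    The Desroziers et al. (2005) diagnostics estimate observation, background and analysis error variances in observation space from simple products of innovations and residuals; a minimal sketch with synthetic scalar wind-speed collocations standing in for the real O-B and O-A statistics:

      import numpy as np

      rng = np.random.default_rng(8)
      n = 100_000
      truth = rng.uniform(2, 14, n)                   # "true" wind speeds (m/s)

      sig_o, sig_b = 0.7, 1.2                         # assumed true error std devs
      obs = truth + rng.normal(0, sig_o, n)           # observations (O)
      bkg = truth + rng.normal(0, sig_b, n)           # background (B)
      w = sig_b**2 / (sig_b**2 + sig_o**2)            # optimal scalar analysis weight
      ana = bkg + w * (obs - bkg)                     # analysis (A)

      d_ob = obs - bkg                                # innovation, O - B
      d_oa = obs - ana                                # analysis residual, O - A
      d_ab = ana - bkg                                # analysis increment, A - B

      print("sigma_o estimate:", np.sqrt(np.mean(d_oa * d_ob)))   # ~ sig_o
      print("sigma_b estimate:", np.sqrt(np.mean(d_ab * d_ob)))   # ~ sig_b
      print("sigma_a estimate:", np.sqrt(np.mean(d_ab * d_oa)))   # analysis error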

  5. Applying diagnosis and pharmacy-based risk models to predict pharmacy use in Aragon, Spain: The impact of a local calibration

    PubMed Central

    2010-01-01

    Background In the financing of a national health system, where pharmaceutical spending is one of the main cost containment targets, predicting pharmacy costs for individuals and populations is essential for budget planning and care management. Although most efforts have focused on risk adjustment applying diagnostic data, the reliability of this information source has been questioned in the primary care setting. We sought to assess the usefulness of incorporating pharmacy data into claims-based predictive models (PMs). Developed primarily for the U.S. health care setting, a secondary objective was to evaluate the benefit of a local calibration in order to adapt the PMs to the Spanish health care system. Methods The population was drawn from patients within the primary care setting of Aragon, Spain (n = 84,152). Diagnostic, medication and prior cost data were used to develop PMs based on the Johns Hopkins ACG methodology. Model performance was assessed through r-squared statistics and predictive ratios. The capacity to identify future high-cost patients was examined through c-statistic, sensitivity and specificity parameters. Results The PMs based on pharmacy data had a higher capacity to predict future pharmacy expenses and to identify potential high-cost patients than the models based on diagnostic data alone and a capacity almost as high as that of the combined diagnosis-pharmacy-based PM. PMs provided considerably better predictions when calibrated to Spanish data. Conclusion Understandably, pharmacy spending is more predictable using pharmacy-based risk markers compared with diagnosis-based risk markers. Pharmacy-based PMs can assist plan administrators and medical directors in planning the health budget and identifying high-cost-risk patients amenable to care management programs. PMID:20092654

  6. Modular multivariable control improves hydrocracking

    SciTech Connect

    Chia, T.L.; Lefkowitz, I.; Tamas, P.D.

    1996-10-01

    Modular multivariable control (MMC), a system of interconnected, single process variable controllers, can be a user-friendly, reliable and cost-effective alternative to centralized, large-scale multivariable control packages. MMC properties and features derive directly from the properties of the coordinated controller which, in turn, is based on internal model control technology. MMC was applied to a hydrocracking unit involving two process variables and three controller outputs. The paper describes modular multivariable control, MMC properties, tuning considerations, application at the DCS level, constraints handling, and process application and results.

  7. Hybrid least squares multivariate spectral analysis methods

    DOEpatents

    Haaland, David M.

    2002-01-01

    A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following estimation or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The "hybrid" method herein means a combination of an initial classical least squares analysis calibration step with subsequent analysis by an inverse multivariate analysis method. A "spectral shape" herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The "shape" can be continuous, discontinuous, or even discrete points illustrative of the particular effect.

  8. Hybrid least squares multivariate spectral analysis methods

    DOEpatents

    Haaland, David M.

    2004-03-23

    A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following prediction or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The hybrid method herein means a combination of an initial calibration step with subsequent analysis by an inverse multivariate analysis method. A spectral shape herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The shape can be continuous, discontinuous, or even discrete points illustrative of the particular effect.

  9. Synthetic Multivariate Models to Accommodate Unmodeled Interfering Components During Quantitative Spectral Analyses

    SciTech Connect

    Haaland, David M.

    1999-07-14

    The analysis precision of any multivariate calibration method will be severely degraded if unmodeled sources of spectral variation are present in the unknown sample spectra. This paper describes a synthetic method for correcting for the errors generated by the presence of unmodeled components or other sources of unmodeled spectral variation. If the spectral shape of the unmodeled component can be obtained and mathematically added to the original calibration spectra, then a new synthetic multivariate calibration model can be generated to accommodate the presence of the unmodeled source of spectral variation. This new method is demonstrated for the presence of unmodeled temperature variations in the unknown sample spectra of dilute aqueous solutions of urea, creatinine, and NaCl. When constant-temperature PLS models are applied to spectra of samples of variable temperature, the standard errors of prediction (SEP) are approximately an order of magnitude higher than that of the original cross-validated SEPs of the constant-temperature partial least squares models. Synthetic models using the classical least squares estimates of temperature from pure water or variable-temperature mixture sample spectra reduce the errors significantly for the variable temperature samples. Spectrometer drift adds additional error to the analyte determinations, but a method is demonstrated that can minimize the effect of drift on prediction errors through the measurement of the spectra of a small subset of samples during both calibration and prediction. In addition, sample temperature can be predicted with high precision with this new synthetic model without the need to recalibrate using actual variable-temperature sample data. The synthetic methods eliminate the need for expensive generation of new calibration samples and collection of their spectra. The methods are quite general and can be applied using any known source of spectral variation and can be used with any multivariate
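    The sketch below illustrates the synthetic-calibration idea in miniature: a known temperature-related spectral shape is mathematically added, with random amplitudes, to constant-temperature calibration spectra before the PLS model is rebuilt. The spectra, shapes and amplitudes are simulated stand-ins, not the urea/creatinine/NaCl data of the paper.

```python
# Sketch of the "synthetic calibration" idea: add a known temperature-related
# spectral shape to the original calibration spectra, then rebuild the PLS
# model so it tolerates temperature variation. All data are simulated.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
wl = np.linspace(0, 1, 150)
analyte_shape = np.exp(-(wl - 0.5)**2 / 0.01)
temp_shape = np.exp(-(wl - 0.55)**2 / 0.05)       # stand-in for a temperature/water signature

# Constant-temperature calibration data
y_cal = rng.uniform(1, 10, size=40)
X_cal = np.outer(y_cal, analyte_shape) + 0.02 * rng.standard_normal((40, wl.size))

# Synthetic calibration set: superimpose random temperature amplitudes
t_amp = rng.uniform(-1, 1, size=40)
X_syn = X_cal + np.outer(t_amp, temp_shape)

# Variable-temperature "unknowns"
y_new = rng.uniform(1, 10, size=10)
X_new = (np.outer(y_new, analyte_shape)
         + np.outer(rng.uniform(-1, 1, 10), temp_shape)
         + 0.02 * rng.standard_normal((10, wl.size)))

for name, X in [("constant-T model", X_cal), ("synthetic model", X_syn)]:
    pls = PLSRegression(n_components=4).fit(X, y_cal)
    sep = np.sqrt(np.mean((pls.predict(X_new).ravel() - y_new) ** 2))
    print(f"{name}: SEP = {sep:.3f}")
```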

  10. RF impedance measurement calibration

    SciTech Connect

    Matthews, P.J.; Song, J.J.

    1993-02-12

    The intent of this note is not to explain all of the available calibration methods in detail. Instead, we will focus on the calibration methods of interest for RF impedance coupling measurements and attempt to explain: (1) the standards and measurements necessary for the various calibration techniques; (2) the advantages and disadvantages of each technique; (3) the mathematical manipulations that need to be applied to the measured standards and devices; and (4) an outline of the steps needed for writing a calibration routine that operates from a remote computer. For further details of the various techniques presented in this note, the reader should consult the references.

  11. Raman Microspectroscopic Mapping with Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) Applied to the High-Pressure Polymorph of Titanium Dioxide, TiO2-II.

    PubMed

    Smith, Joseph P; Smith, Frank C; Ottaway, Joshua; Krull-Davatzes, Alexandra E; Simonson, Bruce M; Glass, Billy P; Booksh, Karl S

    2017-08-01

    The high-pressure, α-PbO2-structured polymorph of titanium dioxide (TiO2-II) was recently identified in micrometer-sized grains recovered from four Neoarchean spherule layers deposited between ∼2.65 and ∼2.54 billion years ago. Several lines of evidence support the interpretation that these layers represent distal impact ejecta layers. The presence of shock-induced TiO2-II provides physical evidence to further support an impact origin for these spherule layers. Detailed characterization of the distribution of TiO2-II in these grains may be useful for correlating the layers, estimating the paleodistances of the layers from their source craters, and providing insight into the formation of the TiO2-II. Here we report the investigation of TiO2-II-bearing grains from these four spherule layers using multivariate curve resolution-alternating least squares (MCR-ALS) applied to Raman microspectroscopic mapping. Raman spectra provide evidence of grains consisting primarily of rutile (TiO2) and TiO2-II, as shown by Raman bands at 174 cm(-1) (TiO2-II), 426 cm(-1) (TiO2-II), 443 cm(-1) (rutile), and 610 cm(-1) (rutile). Principal component analysis (PCA) yielded a predominantly three-phase system comprised of rutile, TiO2-II, and substrate-adhesive epoxy. Scanning electron microscopy (SEM) suggests heterogeneous grains containing polydispersed micrometer- and submicrometer-sized particles. Multivariate curve resolution-alternating least squares applied to the Raman microspectroscopic mapping yielded up to five distinct chemical components: three phases of TiO2 (rutile, TiO2-II, and anatase), quartz (SiO2), and substrate-adhesive epoxy. Spectral profiles and spatially resolved chemical maps of the pure chemical components were generated using MCR-ALS applied to the Raman microspectroscopic maps. The spatial resolution of the Raman microspectroscopic maps was enhanced in comparable, cost-effective analysis times by limiting spectral resolution and optimizing

  12. Simultaneous determination of propranolol and amiloride in synthetic binary mixtures and pharmaceutical dosage forms by synchronous fluorescence spectroscopy: a multivariate approach

    NASA Astrophysics Data System (ADS)

    Divya, O.; Shinde, Mandakini

    2013-07-01

    A multivariate calibration model for the simultaneous estimation of propranolol (PRO) and amiloride (AMI) using synchronous fluorescence spectroscopic data has been presented in this paper. Two multivariate techniques, PCR (Principal Component Regression) and PLSR (Partial Least Square Regression), have been successfully applied for the simultaneous determination of AMI and PRO in synthetic binary mixtures and pharmaceutical dosage forms. The SF spectra of AMI and PRO (calibration mixtures) were recorded at several concentrations within their linear range between wavelengths of 310 and 500 nm at an interval of 1 nm. Calibration models were constructed using 32 samples and validated by varying the concentrations of AMI and PRO in the calibration range. The results indicated that the model developed was very robust and able to efficiently analyze the mixtures with low RMSEP values.
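    A minimal sketch of building PCR and PLSR models for a two-component system follows. The simulated overlapping bands stand in for the PRO/AMI synchronous fluorescence spectra; concentration ranges and band positions are illustrative assumptions, not taken from the record.

```python
# Minimal PCR vs. PLSR sketch for a simulated two-component system.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
wl = np.linspace(310, 500, 191)                  # 1 nm interval, as in the abstract
band = lambda c: np.exp(-(wl - c)**2 / (2 * 15**2))
S = np.vstack([band(360), band(430)])            # two overlapping "spectra"

C_train = rng.uniform(0.5, 5.0, size=(32, 2))    # 32 calibration mixtures
X_train = C_train @ S + 0.01 * rng.standard_normal((32, wl.size))
C_test = rng.uniform(0.5, 5.0, size=(10, 2))
X_test = C_test @ S + 0.01 * rng.standard_normal((10, wl.size))

pcr = make_pipeline(PCA(n_components=3), LinearRegression()).fit(X_train, C_train)
pls = PLSRegression(n_components=3).fit(X_train, C_train)

for name, model in [("PCR", pcr), ("PLSR", pls)]:
    rmsep = np.sqrt(np.mean((model.predict(X_test) - C_test) ** 2, axis=0))
    print(name, "RMSEP per analyte:", np.round(rmsep, 3))
```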

  13. Validation and application of high-performance liquid chromatographic and spectrophotometric methods for simultaneous estimation of nebivolol and hydrochlorothiazide: a novel approach to multivariate calibrations by R-Software Environment.

    PubMed

    Gowda, Nagaraj; Panghal, Surender; Vipul, Kalamkar; Rajshree, Mashru

    2009-01-01

    A fast, simple reversed-phase HPLC method and two spectrophotometric methods based on principal component regression and partial least squares calibrations were developed for determination of nebivolol (NEB) and hydrochlorothiazide (HCTZ) in formulations without prior separation or masking. The HPLC assay utilized a Phenomenex-Luna RP-18(2) 250 x 4.6 mm, 5 microm column with acetonitrile--0.03% aqueous formic acid, pH 3.3 (65 + 35, v/v), mobile phase at a flow rate of 1.0 mL/min, and UV detection at 277 nm. The retention times of NEB and HCTZ were 2.133 and 2.877 min, respectively. The total run time was < 4 min. Chemometric calibrations were constructed by using an absorption data matrix corresponding to a concentration data matrix, with measurements in the range of 231-310 nm (Delta lambda = 1 nm) in their zero-order spectra using 16 samples in a training set. The chemometric numerical computations were obtained by using R-Software Environment (Version 2.1.1). The proposed methods were validated for various International Conference on Harmonization regulatory parameters like linearity, range, accuracy, precision, robustness, LOD, LOQ, and HPLC system suitability. Laboratory-prepared mixtures and commercial tablet formulations were successfully analyzed using the developed methods. All results were acceptable and confirmed that the method is suitable for its intended use.

  14. Classical least squares multivariate spectral analysis

    DOEpatents

    Haaland, David M.

    2002-01-01

    An improved classical least squares multivariate spectral analysis method that adds spectral shapes describing non-calibrated components and system effects (other than baseline corrections) present in the analyzed mixture to the prediction phase of the method. These improvements decrease or eliminate many of the restrictions to the CLS-type methods and greatly extend their capabilities, accuracy, and precision. One new application of PACLS includes the ability to accurately predict unknown sample concentrations when new unmodeled spectral components are present in the unknown samples. Other applications of PACLS include the incorporation of spectrometer drift into the quantitative multivariate model and the maintenance of a calibration on a drifting spectrometer. Finally, the ability of PACLS to transfer a multivariate model between spectrometers is demonstrated.

  15. AeroADL: applying the integration of the Suomi-NPP science algorithms with the Algorithm Development Library to the calibration and validation task

    NASA Astrophysics Data System (ADS)

    Houchin, J. S.

    2014-09-01

    A common problem for the off-line validation of the calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of sets of test data are staged for the algorithm once, and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we are continually running new data sets through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, offering efficient means to stage and process an input data set, to override static calibration coefficient look-up tables (LUTs) with experimental versions of those tables, and to manage a library containing multiple versions of each of the static LUT files in such a way that the correct set of LUTs required for each algorithm is automatically provided to the algorithm without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.

  16. Calibration of Germanium Resistance Thermometers

    NASA Technical Reports Server (NTRS)

    Ladner, D.; Urban, E.; Mason, F. C.

    1987-01-01

    Largely completed thermometer-calibration cryostat and probe allows six germanium resistance thermometers to be calibrated at one time at superfluid-helium temperatures. In experiments involving several such thermometers, use of this calibration apparatus results in substantial cost savings. Cryostat maintains temperature less than 2.17 K through controlled evaporation and removal of liquid helium from Dewar. Probe holds thermometers to be calibrated and applies small amount of heat as needed to maintain precise temperature below 2.17 K.

  17. Transfer of multivariate regression models between high-resolution NMR instruments: application to authenticity control of sunflower lecithin.

    PubMed

    Monakhova, Yulia B; Diehl, Bernd W K

    2016-03-22

    In recent years the number of spectroscopic studies utilizing multivariate techniques and involving different laboratories has increased dramatically. In this paper a protocol for calibration transfer of a partial least squares regression model between high-resolution nuclear magnetic resonance (NMR) spectrometers of different frequencies and equipped with different probes was established. As the test system, a previously published quantitative model to predict the concentration of blended soy species in sunflower lecithin was used. For multivariate modelling, piecewise direct standardization (PDS), direct standardization, and hybrid calibration were employed. PDS showed the best performance for estimating lecithin falsification regarding its vegetable origin, resulting in a significant decrease in root mean square error of prediction from 5.0-7.3% without standardization to 2.9-3.2% with PDS. An acceptable calibration transfer model was obtained by direct standardization, but this standardization approach introduces unfavourable noise into the spectral data. Hybrid calibration is least recommended for high-resolution NMR data. The sensitivity of instrument transfer methods with respect to the type of spectrometer, the number of samples and the subset selection was also discussed. The study showed the necessity of applying a proper standardization procedure in cases when a multivariate model has to be applied to spectra recorded on a secondary NMR spectrometer, even one with the same magnetic field strength. Copyright © 2016 John Wiley & Sons, Ltd.
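    As a rough illustration of piecewise direct standardization, the sketch below regresses each primary-instrument channel on a small window of secondary-instrument channels using a handful of transfer samples, building a banded transformation matrix. The simulated instrument difference and window width are illustrative assumptions, not the NMR setup of the paper.

```python
# Sketch of piecewise direct standardization (PDS) on simulated spectra.
import numpy as np

def pds_transform(P, S, half_window=2, ridge=1e-6):
    """Banded transform F such that (secondary spectrum) @ F approximates the primary one.
    P, S: transfer spectra on the primary/secondary instrument (n_samples x n_channels)."""
    n, k = P.shape
    F = np.zeros((k, k))
    for j in range(k):
        lo, hi = max(0, j - half_window), min(k, j + half_window + 1)
        W = S[:, lo:hi]
        # ridge-regularized local least squares: b = (W'W + rI)^-1 W' p_j
        b = np.linalg.solve(W.T @ W + ridge * np.eye(hi - lo), W.T @ P[:, j])
        F[lo:hi, j] = b
    return F

rng = np.random.default_rng(3)
spectra = rng.random((12, 100))
primary = spectra
secondary = 0.9 * spectra + 0.05 * np.roll(spectra, 1, axis=1)  # simulated instrument difference

F = pds_transform(primary[:8], secondary[:8])      # 8 transfer samples
corrected = secondary[8:] @ F
print("RMS difference before:", np.sqrt(np.mean((secondary[8:] - primary[8:]) ** 2)).round(4))
print("RMS difference after :", np.sqrt(np.mean((corrected - primary[8:]) ** 2)).round(4))
```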

  18. MULTIVARIATE KERNEL PARTITION PROCESS MIXTURES

    PubMed Central

    Dunson, David B.

    2013-01-01

    Mixtures provide a useful approach for relaxing parametric assumptions. Discrete mixture models induce clusters, typically with the same cluster allocation for each parameter in multivariate cases. As a more flexible approach that facilitates sparse nonparametric modeling of multivariate random effects distributions, this article proposes a kernel partition process (KPP) in which the cluster allocation varies for different parameters. The KPP is shown to be the driving measure for a multivariate ordered Chinese restaurant process that induces a highly-flexible dependence structure in local clustering. This structure allows the relative locations of the random effects to inform the clustering process, with spatially-proximal random effects likely to be assigned the same cluster index. An exact block Gibbs sampler is developed for posterior computation, avoiding truncation of the infinite measure. The methods are applied to hormone curve data, and a dependent KPP is proposed for classification from functional predictors. PMID:24478563

  19. Laboratory Performance of Five Selected Soil Moisture Sensors Applying Factory and Own Calibration Equations for Two Soil Media of Different Bulk Density and Salinity Levels.

    PubMed

    Matula, Svatopluk; Báťková, Kamila; Legese, Wossenu Lemma

    2016-11-15

    Non-destructive soil water content determination is a fundamental component for many agricultural and environmental applications. The accuracy and costs of the sensors define the measurement scheme and the ability to fit the natural heterogeneous conditions. The aim of this study was to evaluate five commercially available and relatively cheap sensors usually grouped with impedance and FDR sensors. ThetaProbe ML2x (impedance) and ECH₂O EC-10, ECH₂O EC-20, ECH₂O EC-5, and ECH₂O TE (all FDR) were tested on silica sand and loess of defined characteristics under controlled laboratory conditions. The calibrations were carried out in nine consecutive soil water contents from dry to saturated conditions (pure water and saline water). The gravimetric method was used as a reference method for the statistical evaluation (ANOVA with significance level 0.05). Generally, the results showed that our own calibrations led to more accurate soil moisture estimates. Variance component analysis arranged the factors contributing to the total variation as follows: calibration (contributed 42%), sensor type (contributed 29%), material (contributed 18%), and dry bulk density (contributed 11%). All the tested sensors performed very well within the whole range of water content, especially the sensors ECH₂O EC-5 and ECH₂O TE, which also performed surprisingly well in saline conditions.
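    The following is a minimal sketch of what deriving an "own calibration" against gravimetric reference values looks like, compared with a factory equation. The sensor readings and the factory equation are invented for the example; they are not the instruments or coefficients evaluated in the study.

```python
# Illustrative "own calibration" of a dielectric soil-moisture sensor against
# gravimetric reference values; the readings and the factory equation below
# are made up for the example.
import numpy as np

theta_ref = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45])  # m3/m3, gravimetric
raw_mV    = np.array([260, 330, 400, 465, 535, 600, 660, 720, 775])           # sensor output

# Own calibration: linear fit of volumetric water content on raw output
slope, intercept = np.polyfit(raw_mV, theta_ref, 1)
own = slope * raw_mV + intercept

# Hypothetical factory equation for comparison
factory = 7.5e-4 * raw_mV - 0.15

for name, est in [("factory", factory), ("own", own)]:
    rmse = np.sqrt(np.mean((est - theta_ref) ** 2))
    print(f"{name} calibration RMSE: {rmse:.4f} m3/m3")
```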

  20. Laboratory Performance of Five Selected Soil Moisture Sensors Applying Factory and Own Calibration Equations for Two Soil Media of Different Bulk Density and Salinity Levels

    PubMed Central

    Matula, Svatopluk; Báťková, Kamila; Legese, Wossenu Lemma

    2016-01-01

    Non-destructive soil water content determination is a fundamental component for many agricultural and environmental applications. The accuracy and costs of the sensors define the measurement scheme and the ability to fit the natural heterogeneous conditions. The aim of this study was to evaluate five commercially available and relatively cheap sensors usually grouped with impedance and FDR sensors. ThetaProbe ML2x (impedance) and ECH2O EC-10, ECH2O EC-20, ECH2O EC-5, and ECH2O TE (all FDR) were tested on silica sand and loess of defined characteristics under controlled laboratory conditions. The calibrations were carried out in nine consecutive soil water contents from dry to saturated conditions (pure water and saline water). The gravimetric method was used as a reference method for the statistical evaluation (ANOVA with significance level 0.05). Generally, the results showed that our own calibrations led to more accurate soil moisture estimates. Variance component analysis arranged the factors contributing to the total variation as follows: calibration (contributed 42%), sensor type (contributed 29%), material (contributed 18%), and dry bulk density (contributed 11%). All the tested sensors performed very well within the whole range of water content, especially the sensors ECH2O EC-5 and ECH2O TE, which also performed surprisingly well in saline conditions. PMID:27854263

  1. Detrended fluctuation analysis of multivariate time series

    NASA Astrophysics Data System (ADS)

    Xiong, Hui; Shang, P.

    2017-01-01

    In this work, we generalize detrended fluctuation analysis (DFA) to the multivariate case, named multivariate DFA (MVDFA). The validity of the proposed MVDFA is illustrated by numerical simulations on synthetic multivariate processes, considering cases in which the initial data are generated independently from the same system, generated from different systems, and taken as correlated variates from one system. Moreover, the proposed MVDFA works well when applied to the multi-scale analysis of the returns of stock indices in the Chinese and US stock markets. Generally, connections between the multivariate system and the individual variates are uncovered, showing the solid performance of MVDFA and the multi-scale MVDFA.
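    A compact sketch of a multivariate DFA in the spirit of this record is shown below: the cumulative profile of each variate is detrended in windows and the fluctuation function averages the residual variance over all variates before the scaling exponent is fitted. It follows the generic DFA recipe rather than the paper's exact MVDFA definition.

```python
# Sketch of a multivariate detrended fluctuation analysis on synthetic data.
import numpy as np

def mvdfa(X, scales, order=1):
    """X: (n_samples, n_variates) array. Returns the fluctuation F(s) for each scale s."""
    Y = np.cumsum(X - X.mean(axis=0), axis=0)        # integrated profiles
    F = []
    for s in scales:
        n_seg = Y.shape[0] // s
        res = []
        for v in range(Y.shape[1]):
            for i in range(n_seg):
                seg = Y[i*s:(i+1)*s, v]
                t = np.arange(s)
                trend = np.polyval(np.polyfit(t, seg, order), t)
                res.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(res)))
    return np.array(F)

rng = np.random.default_rng(4)
X = np.cumsum(rng.standard_normal((4000, 3)), axis=0)   # three independent random walks
scales = np.unique(np.logspace(1, 3, 12).astype(int))
F = mvdfa(X, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print("estimated scaling exponent:", round(alpha, 2))   # ~1.5 for a random walk
```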

  2. Nonlinear calibration transfer based on hierarchical Bayesian models and Lagrange Multipliers: Error bounds of estimates via Monte Carlo - Markov Chain sampling.

    PubMed

    Seichter, Felicia; Vogt, Josef; Radermacher, Peter; Mizaikoff, Boris

    2017-01-25

    The calibration of analytical systems is time-consuming and the effort for daily calibration routines should therefore be minimized, while maintaining the analytical accuracy and precision. The 'calibration transfer' approach proposes to combine calibration data already recorded with actual calibration measurements. However, this strategy was developed for the multivariate, linear analysis of spectroscopic data, and thus, cannot be applied to sensors with a single response channel and/or a non-linear relationship between signal and desired analytical concentration. To fill this gap for a non-linear calibration equation, we assume that the coefficients for the equation, collected over several calibration runs, are normally distributed. Considering that coefficients of an actual calibration are a sample of this distribution, only a few standards are needed for a complete calibration data set. The resulting calibration transfer approach is demonstrated for a fluorescence oxygen sensor and implemented as a hierarchical Bayesian model, combined with a Lagrange Multipliers technique and Monte-Carlo Markov-Chain sampling. The latter provides realistic estimates for coefficients and predictions together with accurate error bounds by simulating known measurement errors and system fluctuations. Performance criteria for validation and optimal selection of a reduced set of calibration samples were developed and led to a setup which maintains the analytical performance of a full calibration. Strategies for a rapid determination of problems occurring in a daily calibration routine are proposed, thereby opening the possibility of correcting the problem just in time.
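    The heavily simplified sketch below captures only the core idea: the coefficients of a nonlinear calibration curve get a normal prior from previous calibration runs, and a Metropolis sampler combines that prior with a few fresh standards. The Stern-Volmer-type curve, the prior values and the measurement noise are illustrative assumptions; the authors' full hierarchical model, Lagrange multipliers and error-bound machinery are not reproduced.

```python
# Simplified Bayesian calibration-transfer sketch with random-walk Metropolis.
import numpy as np

rng = np.random.default_rng(5)

def model(conc, theta):
    i0, ksv = theta
    return i0 / (1.0 + ksv * conc)                # intensity vs. quencher concentration

prior_mean = np.array([100.0, 0.8])               # from many earlier calibrations (illustrative)
prior_sd   = np.array([5.0, 0.05])
sigma_meas = 1.0

# Only three fresh standards measured today
conc_std = np.array([0.0, 5.0, 15.0])
true_theta = np.array([103.0, 0.85])
y_std = model(conc_std, true_theta) + sigma_meas * rng.standard_normal(3)

def log_post(theta):
    if np.any(theta <= 0):
        return -np.inf
    lp = -0.5 * np.sum(((theta - prior_mean) / prior_sd) ** 2)     # normal prior
    ll = -0.5 * np.sum(((y_std - model(conc_std, theta)) / sigma_meas) ** 2)
    return lp + ll

theta = prior_mean.copy()
samples, lp_cur = [], log_post(theta)
for _ in range(20000):
    prop = theta + rng.normal(0, [1.0, 0.01])
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp_cur:
        theta, lp_cur = prop, lp_prop
    samples.append(theta)
samples = np.array(samples[5000:])                # discard burn-in
print("posterior mean:", samples.mean(axis=0))
print("posterior sd  :", samples.std(axis=0))
```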

  3. Applying recovery biomarkers to calibrate self-report measures of sodium and potassium in the Hispanic Community Health Study/Study of Latinos.

    PubMed

    Mossavar-Rahmani, Y; Sotres-Alvarez, D; Wong, W W; Loria, C M; Gellman, M D; Van Horn, L; Alderman, M H; Beasley, J M; Lora, C M; Siega-Riz, A M; Kaplan, R C; Shaw, P A

    2017-02-16

    Measurement error in assessment of sodium and potassium intake obscures associations with health outcomes. The level of this error in a diverse US Hispanic/Latino population is unknown. We investigated the measurement error in self-reported dietary intake of sodium and potassium and examined differences by background (Central American, Cuban, Dominican, Mexican, Puerto Rican and South American). In 2010-2012, we studied 447 participants aged 18-74 years from four communities (Miami, Bronx, Chicago and San Diego), obtaining objective 24-h urinary sodium and potassium excretion measures. Self-report was captured from two interviewer-administered 24-h dietary recalls. Twenty percent of the sample repeated the study. We examined bias in self-reported sodium and potassium from diet and the association of mismeasurement with participant characteristics. Linear regression relating self-report with objective measures was used to develop calibration equations. Self-report underestimated sodium intake by 19.8% and 20.8% and potassium intake by 1.3% and 4.6% in men and women, respectively. Sodium intake underestimation varied by Hispanic/Latino background (P<0.05) and was associated with higher body mass index (BMI). Potassium intake underestimation was associated with higher BMI, lower restaurant score (indicating lower consumption of foods prepared away from home and/or eaten outside the home) and supplement use. The R(2) was 19.7% and 25.0% for the sodium and potassium calibration models, respectively, increasing to 59.5 and 61.7% after adjusting for within-person variability in each biomarker. These calibration equations, corrected for subject-specific reporting error, have the potential to reduce bias in diet-disease associations within this largest cohort of Hispanics in the United States. Journal of Human Hypertension advance online publication, 16 February 2017; doi:10.1038/jhh.2016.98.

  4. GPI Calibrations

    NASA Astrophysics Data System (ADS)

    Rantakyrö, Fredrik T.

    2017-09-01

    "The Gemini Planet Imager requires a large set of Calibrations. These can be split into two major sets, one set associated with each observation and one set related to biweekly calibrations. The observation set is to optimize the correction of miscroshifts in the IFU spectra and the latter set is for correction of detector and instrument cosmetics."

  5. Weighted partial least squares method to improve calibration precision for spectroscopic noise-limited data

    SciTech Connect

    Haaland, D.M.; Jones, H.D.T.

    1997-09-01

    Multivariate calibration methods have been applied extensively to the quantitative analysis of Fourier transform infrared (FT-IR) spectral data. Partial least squares (PLS) methods have become the most widely used multivariate method for quantitative spectroscopic analyses. Most often these methods are limited by model error or the accuracy or precision of the reference methods. However, in some cases, the precision of the quantitative analysis is limited by the noise in the spectroscopic signal. In these situations, the precision of the PLS calibrations and predictions can be improved by the incorporation of weighting in the PLS algorithm. If the spectral noise of the system is known (e.g., in detector-noise-limited cases), then appropriate weighting can be incorporated into the multivariate spectral calibrations and predictions. A weighted PLS (WPLS) algorithm was developed to improve the precision of the analysis in the case of spectral-noise-limited data. This new PLS algorithm was then tested with real and simulated data, and the results compared with the unweighted PLS algorithm. Using near-infrared (NIR) data, calibration precision improved when the WPLS algorithm was applied. The best WPLS method improved prediction precision for the analysis of one of the minor components by a factor of nearly 9 relative to the unweighted PLS algorithm.
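    The published WPLS algorithm builds the weights into the PLS decomposition itself; as a rough stand-in for the idea, the sketch below simply scales each spectral channel by the inverse of its known noise standard deviation before an ordinary PLS fit, which has a similar effect when the data are detector-noise limited. All data are simulated.

```python
# 1/sigma channel weighting as a rough approximation of weighted PLS.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(6)
wl = np.linspace(0, 1, 120)
shape = np.exp(-(wl - 0.4)**2 / 0.01)
sigma = 0.005 + 0.1 * wl                          # strongly wavelength-dependent noise

y = rng.uniform(1, 10, 60)
X = np.outer(y, shape) + sigma * rng.standard_normal((60, wl.size))
y_t = rng.uniform(1, 10, 20)
X_t = np.outer(y_t, shape) + sigma * rng.standard_normal((20, wl.size))

plain = PLSRegression(n_components=2).fit(X, y)
weighted = PLSRegression(n_components=2).fit(X / sigma, y)

rmse_plain = np.sqrt(np.mean((plain.predict(X_t).ravel() - y_t) ** 2))
rmse_weighted = np.sqrt(np.mean((weighted.predict(X_t / sigma).ravel() - y_t) ** 2))
print(f"unweighted PLS RMSEP: {rmse_plain:.3f}")
print(f"1/sigma-scaled PLS RMSEP: {rmse_weighted:.3f}")
```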

  6. A new self-calibration method applied to TOMS and SBUV backscattered ultraviolet data to determine long-term global ozone change

    SciTech Connect

    Herman, J.R.; Hudson, R.; McPeters, R.; Stolarski, R.; Ahmad, Z.; Gu, X.Y.; Taylor, S.; Wellemeyer, C.

    1991-04-20

    The currently archived (1989) total ozone mapping spectrometer (TOMS) and solar backscattered ultraviolet (SBUV) total ozone data (version 5) show a global average decrease of about 9.0% from November 1978 to November 1988. This large decrease disagrees with an approximate 3.5% decrease estimated from the ground-based Dobson network. The primary source of disagreement was found to arise from an overestimate of reflectivity change and its incorrect wavelength dependence for the diffuser plate used when measuring solar irradiance. For total ozone measured by TOMS, a means has been found to use the measured radiance-irradiance ratio from several wavelength pairs to construct an internally self-consistent calibration. The method uses the wavelength dependence of the sensitivity to calibration errors and the requirement that albedo ratios for each wavelength pair yield the same total ozone amounts. Smaller errors in determining spacecraft attitude, synchronization problems with the photon counting electronics, and sea glint contamination of boundary reflectivity data have been corrected or minimized. New climatological low-ozone profiles have been incorporated into the TOMS algorithm that are appropriate for Antarctic ozone hole conditions and other low ozone cases. The combined corrections have led to a new determination of the global average total ozone trend (version 6) as a 2.9 ± 1.3% decrease over 11 years. Version 6 data are shown to be in agreement within error limits with the average of 39 ground-based Dobson stations and with the world standard Dobson spectrometer 83 at Mauna Loa, Hawaii.

  7. A variable acceleration calibration system

    NASA Astrophysics Data System (ADS)

    Johnson, Thomas H.

    2011-12-01

    A variable acceleration calibration system that applies loads using gravitational and centripetal acceleration serves as an alternative, efficient and cost effective method for calibrating internal wind tunnel force balances. Two proof-of-concept variable acceleration calibration systems are designed, fabricated and tested. The NASA UT-36 force balance served as the test balance for the calibration experiments. The variable acceleration calibration systems are shown to be capable of performing three component calibration experiments with an approximate applied load error on the order of 1% of the full scale calibration loads. Sources of error are identified using experimental design methods and a propagation of uncertainty analysis. Three types of uncertainty are identified for the systems and are attributed to prediction error, calibration error and pure error. Angular velocity uncertainty is shown to be the largest identified source of prediction error. The calibration uncertainties using a production variable acceleration based system are shown to be potentially equivalent to current methods. The production quality system can be realized using lighter materials and more precise instrumentation. Further research is needed to account for balance deflection, forcing effects due to vibration, and large tare loads. A gyroscope measurement technique is shown to be capable of resolving the balance deflection angle calculation. Long term research objectives include a demonstration of a six degree of freedom calibration, and a large capacity balance calibration.
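    A short worked example of the underlying load calculation follows: the centripetal force applied by a spinning calibration mass, F = m ω² r, and first-order propagation of the angular-velocity uncertainty, which the abstract identifies as the dominant source of prediction error. All numbers are illustrative, not taken from the thesis.

```python
# Worked example (illustrative numbers): centripetal calibration load and
# first-order propagation of the angular-velocity uncertainty.
import numpy as np

m = 2.0          # calibration mass, kg
r = 0.75         # radius of rotation, m
omega = 12.0     # angular velocity, rad/s
u_omega = 0.05   # standard uncertainty of omega, rad/s

F = m * omega**2 * r                    # applied centripetal force, N
dF_domega = 2.0 * m * omega * r         # sensitivity coefficient dF/domega
u_F = abs(dF_domega) * u_omega          # first-order propagated uncertainty

print(f"applied load: {F:.1f} N")
print(f"uncertainty : {u_F:.2f} N ({100 * u_F / F:.2f} % of the applied load)")
```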

  8. Multivariate meta-analysis: potential and promise.

    PubMed

    Jackson, Dan; Riley, Richard; White, Ian R

    2011-09-10

    The multivariate random effects model is a generalization of the standard univariate model. Multivariate meta-analysis is becoming more commonly used and the techniques and related computer software, although continually under development, are now in place. In order to raise awareness of the multivariate methods, and discuss their advantages and disadvantages, we organized a one day 'Multivariate meta-analysis' event at the Royal Statistical Society. In addition to disseminating the most recent developments, we also received an abundance of comments, concerns, insights, critiques and encouragement. This article provides a balanced account of the day's discourse. By giving others the opportunity to respond to our assessment, we hope to ensure that the various view points and opinions are aired before multivariate meta-analysis simply becomes another widely used de facto method without any proper consideration of it by the medical statistics community. We describe the areas of application that multivariate meta-analysis has found, the methods available, the difficulties typically encountered and the arguments for and against the multivariate methods, using four representative but contrasting examples. We conclude that the multivariate methods can be useful, and in particular can provide estimates with better statistical properties, but also that these benefits come at the price of making more assumptions which do not result in better inference in every case. Although there is evidence that multivariate meta-analysis has considerable potential, it must be even more carefully applied than its univariate counterpart in practice.

  9. Anemometer calibrator

    NASA Technical Reports Server (NTRS)

    Bate, T.; Calkins, D. E.; Price, P.; Veikins, O.

    1971-01-01

    Calibrator generates accurate flow velocities over wide range of gas pressure, temperature, and composition. Both pressure and flow velocity can be maintained within 0.25 percent. Instrument is essentially closed loop hydraulic system containing positive displacement drive.

  10. Multivariable Control Systems

    DTIC Science & Technology

    1968-01-01

    Examples abound of systems with numerous controlled variables, and the modern tendency is toward ever greater utilization of systems and plants of this kind. We call them multivariable control systems (MCS).

  11. Code Calibration Applied to the TCA High-Lift Model in the 14 x 22 Wind Tunnel (Simulation With and Without Model Post-Mount)

    NASA Technical Reports Server (NTRS)

    Lessard, Wendy B.

    1999-01-01

    The objective of this study is to calibrate a Navier-Stokes code for the TCA (30/10) baseline configuration (partial span leading edge flaps were deflected at 30 degs. and all the trailing edge flaps were deflected at 10 degs). The computational results for several angles of attack are compared with experimental forces, moments, and surface pressures. The code used in this study is CFL3D; mesh sequencing and multi-grid were used to full advantage to accelerate convergence. A multi-grid approach was used similar to that used for the Reference H configuration allowing point-to-point matching across all the trailing-edge block interfaces. From past experiences with the Reference H (i.e., good force, moment, and pressure comparisons were obtained), it was assumed that the mounting system would produce small effects; hence, it was not initially modeled. However, comparisons of lower surface pressures indicated the post mount significantly influenced the lower surface pressures, so the post geometry was inserted into the existing grid using Chimera (overset grids).

  13. STIS Calibration Pipeline

    NASA Astrophysics Data System (ADS)

    Hulbert, S.; Hodge, P.; Lindler, D.; Shaw, R.; Goudfrooij, P.; Katsanis, R.; Keener, S.; McGrath, M.; Bohlin, R.; Baum, S.

    1997-05-01

    Routine calibration of STIS observations in the HST data pipeline is performed by the CALSTIS task. CALSTIS can: subtract the over-scan region and a bias image from CCD observations; remove cosmic ray features from CCD observations; correct global nonlinearities for MAMA observations; subtract a dark image; and, apply flat field corrections. In the case of spectral data, CALSTIS can also: assign a wavelength to each pixel; apply a heliocentric correction to the wavelengths; convert counts to absolute flux; process the automatically generated spectral calibration lamp observations to improve the wavelength solution; rectify two-dimensional (longslit) spectra; subtract interorder and sky background; and, extract one-dimensional spectra. CALSTIS differs in significant ways from the current HST calibration tasks. The new code is written in ANSI C and makes use of a new C interface to IRAF. The input data, reference data, and output calibrated data are all in FITS format, using IMAGE or BINTABLE extensions. Error estimates are computed and include contributions from the reference images. The entire calibration can be performed by one task, but many steps can also be performed individually.

  14. Multivariate curve resolution-alternating least squares and kinetic modeling applied to near-infrared data from curing reactions of epoxy resins: mechanistic approach and estimation of kinetic rate constants.

    PubMed

    Garrido, M; Larrechi, M S; Rius, F X

    2006-02-01

    This study describes the combination of multivariate curve resolution-alternating least squares with a kinetic modeling strategy for obtaining the kinetic rate constants of a curing reaction of epoxy resins. The reaction between phenyl glycidyl ether and aniline is monitored by near-infrared spectroscopy under isothermal conditions for several initial molar ratios of the reagents. The data for all experiments, arranged in a column-wise augmented data matrix, are analyzed using multivariate curve resolution-alternating least squares. The concentration profiles recovered are fitted to a chemical model proposed for the reaction. The selection of the kinetic model is assisted by the information contained in the recovered concentration profiles. The nonlinear fitting provides the kinetic rate constants. The optimized rate constants are in agreement with values reported in the literature.
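    A minimal MCR-ALS loop with non-negativity constraints on simulated data from a first-order A → B reaction is sketched below. The augmented-matrix arrangement and the kinetic-model fitting of the paper are omitted; the rate constant, band positions and noise level are illustrative assumptions.

```python
# Minimal non-negativity-constrained MCR-ALS on simulated kinetic spectra.
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0, 10, 60)
wl = np.linspace(0, 1, 120)
cA = np.exp(-0.4 * t)                             # A -> B, simulated k = 0.4
C_true = np.column_stack([cA, 1 - cA])
S_true = np.vstack([np.exp(-(wl - 0.3)**2 / 0.01),
                    np.exp(-(wl - 0.7)**2 / 0.01)])
D = C_true @ S_true + 0.005 * rng.standard_normal((t.size, wl.size))

# Crude initial spectral estimates (first and last spectrum), then ALS iterations
S = np.vstack([D[0], D[-1]])
for _ in range(50):
    C = np.clip(D @ np.linalg.pinv(S), 0, None)   # concentration step, C >= 0
    S = np.clip(np.linalg.pinv(C) @ D, 0, None)   # spectral step, S >= 0

# Fit the recovered (scale-normalized) profile of component A to first-order kinetics
k_fit = np.polyfit(t, -np.log(np.clip(C[:, 0] / C[0, 0], 1e-6, None)), 1)[0]
print("recovered rate constant:", round(k_fit, 3))  # compare with the simulated k = 0.4
```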

  15. Calibration Techniques

    NASA Astrophysics Data System (ADS)

    Wurz, Peter; Balogh, Andre; Coffey, Victoria; Dichter, Bronislaw K.; Kasprzak, Wayne T.; Lazarus, Alan J.; Lennartsson, Walter; McFadden, James P.

    Calibration and characterization of particle instruments with supporting flight electronics are necessary for the correct interpretation of the returned data. Generally speaking, the instrument will always return a measurement value (typically in the form of a digital number), for example a count rate, for the measurement of an external quantity, which could be an ambient neutral gas density, an ion composition (species measured and amount), or electron density. The returned values are then used to derive parameters associated with the distribution such as temperature, bulk flow speed, differential energy flux and others. With the calibration of the instrument the direct relationship between the external quantity and the returned measurement value has to be established so that the data recorded during flight can be correctly interpreted. While calibration and characterization of an instrument are usually done in ground-based laboratories prior to integration of the instrument in the spacecraft, they can also be done in space.

  16. A Precipitation Satellite Downscaling & Re-Calibration Routine for TRMM 3B42 and GPM Data Applied to the Tropical Andes

    NASA Astrophysics Data System (ADS)

    Manz, B.; Buytaert, W.; Tobón, C.; Villacis, M.; García, F.

    2014-12-01

    With the imminent release of GPM it is essential for the hydrological user community to improve the spatial resolution of satellite precipitation products (SPPs), also retrospectively of historical time-series. Despite the growing number of applications, to date SPPs have two major weaknesses. Firstly, geosynchronous infrared (IR) SPPs, relying exclusively on cloud elevation/ IR temperature, fail to replicate ground rainfall rates especially for convective rainfall. Secondly, composite SPPs like TRMM include microwave and active radar to overcome this, but the coarse spatial resolution (0.25°) from infrequent orbital sampling often fails to: a) characterize precipitation patterns (especially extremes) in complex topography regions, and b) allow for gauge comparisons with adequate spatial support. This is problematic for satellite-gauge merging and subsequent hydrological modelling applications. We therefore present a new re-calibration and downscaling routine that is applicable to 0.25°/ 3-hrly TRMM 3B42 and Level 3 GPM time-series to generate 1 km estimates. 16 years of instantaneous TRMM radar (TPR) images were evaluated against a unique dataset of over 100 10-min rain gauges from the tropical Andes (Colombia & Ecuador) to develop a spatially distributed error surface. Long-term statistics on occurrence frequency, convective/ stratiform fraction and extreme precipitation probability (Gamma & Generalized Pareto distributions) were computed from TPR at the 1 km scale as well as from TPR and 3B42 at the 0.25° scale. To downscale from 0.25° to 1 km a stochastic generator was used to restrict precipitation occurrence to a fraction of the 1 km pixels within the 0.25° gridcell at every time-step. Regression modelling established a relationship between probability distributions at the 0.25° scale and rainfall amounts were assigned to the retained 1 km pixels by quantile-matching to the gridcell. The approach inherently provides mass conservation of the downscaled

  17. Sensitivity equation for quantitative analysis with multivariate curve resolution-alternating least-squares: theoretical and experimental approach.

    PubMed

    Bauza, María C; Ibañez, Gabriela A; Tauler, Romà; Olivieri, Alejandro C

    2012-10-16

    A new equation is derived for estimating the sensitivity when the multivariate curve resolution-alternating least-squares (MCR-ALS) method is applied to second-order multivariate calibration data. The validity of the expression is substantiated by extensive Monte Carlo noise addition simulations. The multivariate selectivity can be derived from the new sensitivity expression. Other important figures of merit, such as limit of detection, limit of quantitation, and concentration uncertainty of MCR-ALS quantitative estimations can be easily estimated from the proposed sensitivity expression and the instrumental noise. An experimental example involving the determination of an analyte in the presence of uncalibrated interfering agents is described in detail, involving second-order time-decaying sensitized lanthanide luminescence excitation spectra. The estimated figures of merit are reasonably correlated with the analytical features of the analyzed experimental system.

  18. Image Calibration

    NASA Technical Reports Server (NTRS)

    Peay, Christopher S.; Palacios, David M.

    2011-01-01

    Calibrate_Image calibrates images obtained from focal plane arrays so that the output image more accurately represents the observed scene. The function takes as input a degraded image along with a flat field image and a dark frame image produced by the focal plane array and outputs a corrected image. The three most prominent sources of image degradation are corrected for: dark current accumulation, gain non-uniformity across the focal plane array, and hot and/or dead pixels in the array. In the corrected output image the dark current is subtracted, the gain variation is equalized, and values for hot and dead pixels are estimated, using bicubic interpolation techniques.
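    A sketch of the standard correction chain this record describes (dark-frame subtraction, flat-field gain equalization, and bad-pixel repair) is given below. It is not the NASA Calibrate_Image code; for brevity the repair uses a local median rather than bicubic interpolation, and all image data are simulated.

```python
# Dark/flat/bad-pixel correction sketch (local-median repair, simulated data).
import numpy as np

def calibrate_image(raw, dark, flat, bad_mask):
    flat_norm = flat / np.median(flat)                # normalized gain map, ~1 on average
    img = (raw - dark) / np.where(flat_norm == 0, 1, flat_norm)
    out = img.copy()
    h, w = img.shape
    for y, x in zip(*np.nonzero(bad_mask)):           # replace hot/dead pixels
        y0, y1 = max(0, y - 1), min(h, y + 2)
        x0, x1 = max(0, x - 1), min(w, x + 2)
        patch = img[y0:y1, x0:x1][~bad_mask[y0:y1, x0:x1]]
        out[y, x] = np.median(patch) if patch.size else 0.0
    return out

rng = np.random.default_rng(8)
scene = rng.uniform(100, 200, (64, 64))
dark = rng.normal(10, 0.5, (64, 64))
flat = 1.0 + 0.1 * rng.standard_normal((64, 64))
raw = scene * flat + dark
bad = np.zeros((64, 64), bool)
bad[5, 7] = bad[30, 30] = True
raw[bad] = 4095                                        # saturated "hot" pixels

corrected = calibrate_image(raw, dark, flat, bad)
print("mean absolute error vs. true scene:",
      round(float(np.mean(np.abs(corrected - scene))), 2))
```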

  19. Multivariate processing strategies for enhancing qualitative and quantitative analysis based on infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Wan, Boyong

    2007-12-01

    Airborne passive Fourier transform infrared spectrometry is gaining increased attention in environmental applications because of its great flexibility. Usually, pattern recognition techniques are used for automatic analysis of large amounts of collected data. However, challenging problems are the constantly changing background and high calibration cost. As the aircraft flies, the background is constantly changing. Also, considering the great variety of backgrounds and the high expense of data collection from aircraft, the cost of collecting representative training data is formidable. Instead of using airborne data, data generated from simulation strategies can be used for training purposes. Training data collected under controlled conditions on the ground or synthesized from real backgrounds are both options. With both strategies, classifiers may be developed at much lower cost. For both strategies, signal processing techniques need to be used to extract analyte features. In this dissertation, signal processing methods are applied in either the interferogram or the spectral domain for feature extraction. Then, pattern recognition methods are applied to develop binary classifiers for automated detection of air-collected methanol and ethanol vapors. The results demonstrate that, with optimized signal processing methods and training set composition, classifiers trained from ground-collected or synthetic data can give good classification on real air-collected data. Near-infrared (NIR) spectrometry is emerging as a promising tool for noninvasive blood glucose detection. In combination with multivariate calibration techniques, NIR spectroscopy can give quick quantitative determinations of many species with minimal sample preparation. However, one main problem with NIR calibrations is degradation of the calibration model over time. The varying background information will worsen the prediction precision and complicate the multivariate models. To mitigate the need for frequent recalibration and

  20. Multivariate residues and maximal unitarity

    NASA Astrophysics Data System (ADS)

    Søgaard, Mads; Zhang, Yang

    2013-12-01

    We extend the maximal unitarity method to amplitude contributions whose cuts define multidimensional algebraic varieties. The technique is valid to all orders and is explicitly demonstrated at three loops in gauge theories with any number of fermions and scalars in the adjoint representation. Deca-cuts realized by replacement of real slice integration contours by higher-dimensional tori encircling the global poles are used to factorize the planar triple box onto a product of trees. We apply computational algebraic geometry and multivariate complex analysis to derive unique projectors for all master integral coefficients and obtain compact analytic formulae in terms of tree-level data.

  1. Multivariate Data EXplorer (MDX)

    SciTech Connect

    Steed, Chad Allen

    2012-08-01

    The MDX toolkit facilitates exploratory data analysis and visualization of multivariate datasets. MDX provides an interactive graphical user interface to load, explore, and modify multivariate datasets stored in tabular forms. MDX uses an extended version of the parallel coordinates plot and scatterplots to represent the data. The user can perform rapid visual queries using mouse gestures in the visualization panels to select rows or columns of interest. The visualization panel provides coordinated multiple views whereby selections made in one plot are propagated to the other plots. Users can also export selected data or reconfigure the visualization panel to explore relationships between columns and rows in the data.

  2. Multivariate bubbles and antibubbles

    NASA Astrophysics Data System (ADS)

    Fry, John

    2014-08-01

    In this paper we develop models for multivariate financial bubbles and antibubbles based on statistical physics. In particular, we extend a rich set of univariate models to higher dimensions. Changes in market regime can be explicitly shown to represent a phase transition from random to deterministic behaviour in prices. Moreover, our multivariate models are able to capture some of the contagious effects that occur during such episodes. We are able to show that declining lending quality helped fuel a bubble in the US stock market prior to 2008. Further, our approach offers interesting insights into the spatial development of UK house prices.

  4. Three calibration factors, applied to a rapid sweeping method, can accurately estimate Aedes aegypti (Diptera: Culicidae) pupal numbers in large water-storage containers at all temperatures at which dengue virus transmission occurs.

    PubMed

    Romero-Vivas, C M E; Llinás, H; Falconar, A K I

    2007-11-01

    The ability of a simple sweeping method, coupled to calibration factors, to accurately estimate the total numbers of Aedes aegypti (L.) (Diptera: Culicidae) pupae in water-storage containers (20-6412-liter capacities at different water levels) throughout their main dengue virus transmission temperature range was evaluated. Using this method, one set of three calibration factors were derived that could accurately estimate the total Ae. aegypti pupae in their principal breeding sites, large water-storage containers, found throughout the world. No significant differences were obtained using the method at different altitudes (14-1630 m above sea level) that included the range of temperatures (20-30 degrees C) at which dengue virus transmission occurs in the world. In addition, no significant differences were found in the results obtained between and within the 10 different teams that applied this method; therefore, this method was extremely robust. One person could estimate the Ae. aegypti pupae in each of the large water-storage containers in only 5 min by using this method, compared with two people requiring between 45 and 90 min to collect and count the total pupae population in each of them. Because the method was both rapid to perform and did not disturb the sediment layers in these domestic water-storage containers, it was more acceptable by the residents, and, therefore, ideally suited for routine surveillance purposes and to assess the efficacy of Ae. aegypti control programs in dengue virus-endemic areas throughout the world.

  5. Objective calibration of numerical weather prediction models

    NASA Astrophysics Data System (ADS)

    Voudouri, A.; Khain, P.; Carmona, I.; Bellprat, O.; Grazzini, F.; Avgoustoglou, E.; Bettems, J. M.; Kaufmann, P.

    2017-07-01

    Numerical weather prediction (NWP) and climate models use parameterization schemes for physical processes, which often include free or poorly confined parameters. Model developers normally calibrate the values of these parameters subjectively to improve the agreement of forecasts with available observations, a procedure referred to as expert tuning. A practicable objective multivariate calibration method built on a quadratic meta-model (MM), previously applied to a regional climate model (RCM), has been shown to be at least as good as expert tuning. Based on these results, an approach to implement the methodology in an NWP model is presented in this study. Challenges in transferring the methodology from RCM to NWP are not restricted to the use of higher resolution and different time scales. The sensitivity of the NWP model quality with respect to the model parameter space has to be clarified, and the overall procedure has to be optimized in terms of the amount of computing resources required for the calibration of an NWP model. Three free model parameters, affecting mainly turbulence parameterization schemes, were originally selected with respect to their influence on the variables associated with daily forecasts, such as daily minimum and maximum 2 m temperature as well as 24 h accumulated precipitation. Preliminary results indicate that the approach is both affordable in terms of computer resources and meaningful in terms of improved forecast quality. In addition, the proposed methodology has the advantage of being a replicable procedure that can be applied when an updated model version is launched and/or to customize the same model implementation over different climatological areas.
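    A toy sketch of the quadratic meta-model idea follows: evaluate a forecast-error score for a small design of parameter settings, fit a quadratic surrogate, and take its minimizer as the calibrated parameter set. The "model runs" here are a synthetic score function standing in for expensive NWP integrations; the number of parameters and design points is an illustrative assumption.

```python
# Quadratic meta-model (surrogate) calibration sketch with a synthetic score.
import numpy as np
from itertools import combinations_with_replacement

def score(p):                                        # pretend this is an expensive NWP run
    opt = np.array([0.3, -0.2, 0.7])
    return float(np.sum((p - opt) ** 2) + 0.05 * np.sum(p[:2] * p[1:]))

def quad_features(P):
    cols = [np.ones(len(P))] + [P[:, i] for i in range(P.shape[1])]
    cols += [P[:, i] * P[:, j]
             for i, j in combinations_with_replacement(range(P.shape[1]), 2)]
    return np.column_stack(cols)

rng = np.random.default_rng(9)
design = rng.uniform(-1, 1, size=(25, 3))            # 25 settings of 3 free parameters
y = np.array([score(p) for p in design])

beta, *_ = np.linalg.lstsq(quad_features(design), y, rcond=None)

# Minimize the fitted surrogate over a dense random sample of the parameter cube
cand = rng.uniform(-1, 1, size=(20000, 3))
best = cand[np.argmin(quad_features(cand) @ beta)]
print("calibrated parameters:", np.round(best, 2), " score:", round(score(best), 4))
```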

  6. SNLS calibrations

    NASA Astrophysics Data System (ADS)

    Regnault, N.

    2015-08-01

    The Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) is a massive imaging survey, conducted between 2003 and 2008, with the MegaCam instrument, mounted on the CFHT-3.6-m telescope. With a 1 degree wide focal plane, made of 36 2048 × 4612 sensors totalling 340 megapixels, MegaCam was at the time the largest imager on the sky. The Supernova Legacy Survey (SNLS) uses the cadenced observations of the 4 deg2 wide "DEEP" layer of the CFHTLS to search and follow-up Type Ia supernovae (SNe Ia) and study the acceleration of the cosmic expansion. The reduction and calibration of the CFHTLS/SNLS datasets has posed a series of challenges. In what follows, we give a brief account of the photometric calibration work that has been performed on the SNLS data over the last decade.

  7. Temperature Calibration

    NASA Astrophysics Data System (ADS)

    Wang, A. L.

    2013-12-01

    Accuracy of temperature measurements is vital to many experiments. In this project, we design an algorithm to calibrate thermocouples' temperature measurements. To collect data, we rely on incremental heating to calculate the diffusion coefficients of argon through sanidine glasses. These coefficients change according to an Arrhenius equation that depends on temperature, time, and the size and geometry of the glass; thus by fixing the type of glass and the time of each heating step, we obtain many data points by varying temperature. Because the dimension of temperature is continuous, obtaining data is simpler in noble gas diffusion experiments than in measuring the discrete melting points of various metals. Due to the nature of electrical connections, the need to reference the freezing point of ice, thermal gradients in the sample, the time-dependent dissipation of heat into the surroundings, and other inaccuracies with thermocouple temperature measurements, it is necessary to calibrate the experimental measurements with the expected or theoretical measurements. Since the diffusion constant equation is exponential with the inverse of temperature, we transform the exponential D vs T graph into a linear log(D) vs 1/T graph. Then a simple linear regression yields the equation of the line, and we find a mapping function from the experimental temperature to the expected temperature. By relying on the accuracy of the diffusion constant measurement, the mapping function provides the temperature calibration.
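    A short worked sketch of the mapping idea is given below: because log(D) is linear in 1/T, fitting that line to the thermocouple-indicated temperatures and comparing it with accepted Arrhenius parameters yields a function from indicated to corrected temperature. The Arrhenius parameters and the 15 K thermocouple offset are invented for illustration, not taken from the record.

```python
# Arrhenius-based temperature-calibration mapping with invented numbers.
import numpy as np

R = 8.314                                          # J/(mol K)
Ea_ref, lnD0_ref = 200e3, -10.0                    # "accepted" Arrhenius parameters (illustrative)

# Simulated experiment: the thermocouple reads 15 K low at every step
T_true = np.array([900., 950., 1000., 1050., 1100.])
T_indicated = T_true - 15.0
lnD_measured = lnD0_ref - Ea_ref / (R * T_true)    # D is set by the *true* temperature

# Fit log(D) against 1/T_indicated, then map indicated -> corrected temperature
slope, intercept = np.polyfit(1.0 / T_indicated, lnD_measured, 1)

def corrected_temperature(T_ind):
    lnD = intercept + slope / T_ind                # what the experiment would measure
    return -Ea_ref / (R * (lnD - lnD0_ref))        # temperature the reference law assigns to that D

print("indicated 1000 K -> corrected",
      round(corrected_temperature(1000.0), 1), "K (true 1015 K)")
```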

  8. Multivariable control of a rapid thermal processor using ultrasonic sensors

    NASA Astrophysics Data System (ADS)

    Dankoski, Paul C. P.

    The semiconductor manufacturing industry faces the need for tighter control of thermal budget and process variations as circuit feature sizes decrease. Strategies to meet this need include supervisory control, run-to-run control, and real-time feedback control. Typically, the level of control chosen depends upon the actuation and sensing available. Rapid Thermal Processing (RTP) is one step of the manufacturing cycle requiring precise temperature control and hence real-time feedback control. At the outset of this research, the primary ingredient lacking from in-situ RTP temperature control was a suitable sensor. This research looks at an alternative to the traditional approach of pyrometry, which is limited by the unknown and possibly time-varying wafer emissivity. The technique is based upon the temperature dependence of the propagation time of an acoustic wave in the wafer. The aim of this thesis is to evaluate the ultrasonic sensors as potentially viable sensors for control in RTP. To do this, an experimental implementation was developed at the Center for Integrated Systems. Because of the difficulty in applying a known temperature standard in an RTP environment, calibration to absolute temperature is nontrivial. Given reference propagation delays, multivariable model-based feedback control is applied to the system. The modelling and implementation details are described. The control techniques have been applied to a number of research processes including rapid thermal annealing and rapid thermal crystallization of thin silicon films on quartz/glass substrates.

  9. Transient multivariable sensor evaluation

    DOEpatents

    Vilim, Richard B.; Heifetz, Alexander

    2017-02-21

    A method and system for performing transient multivariable sensor evaluation. The method and system include a computer system for identifying a model form, providing training measurement data, generating a basis vector, monitoring system data from a sensor, loading the system data into a non-transient memory, performing an estimation to provide desired data, comparing the system data to the desired data, and outputting an alarm for a defective sensor.

  10. Multivariate Quantitative Chemical Analysis

    NASA Technical Reports Server (NTRS)

    Kinchen, David G.; Capezza, Mary

    1995-01-01

    Technique of multivariate quantitative chemical analysis devised for use in determining relative proportions of two components mixed and sprayed together onto object to form thermally insulating foam. Potentially adaptable to other materials, especially in process-monitoring applications in which necessary to know and control critical properties of products via quantitative chemical analyses of products. In addition to chemical composition, also used to determine such physical properties as densities and strengths.

  11. Multivariate meta-analysis: Potential and promise

    PubMed Central

    Jackson, Dan; Riley, Richard; White, Ian R

    2011-01-01

    The multivariate random effects model is a generalization of the standard univariate model. Multivariate meta-analysis is becoming more commonly used and the techniques and related computer software, although continually under development, are now in place. In order to raise awareness of the multivariate methods, and discuss their advantages and disadvantages, we organized a one day ‘Multivariate meta-analysis’ event at the Royal Statistical Society. In addition to disseminating the most recent developments, we also received an abundance of comments, concerns, insights, critiques and encouragement. This article provides a balanced account of the day's discourse. By giving others the opportunity to respond to our assessment, we hope to ensure that the various viewpoints and opinions are aired before multivariate meta-analysis simply becomes another widely used de facto method without any proper consideration of it by the medical statistics community. We describe the areas of application that multivariate meta-analysis has found, the methods available, the difficulties typically encountered and the arguments for and against the multivariate methods, using four representative but contrasting examples. We conclude that the multivariate methods can be useful, and in particular can provide estimates with better statistical properties, but also that these benefits come at the price of making more assumptions which do not result in better inference in every case. Although there is evidence that multivariate meta-analysis has considerable potential, it must be even more carefully applied than its univariate counterpart in practice. Copyright © 2011 John Wiley & Sons, Ltd. PMID:21268052

  12. Calibrating Gamma Ray Bursts from SN Ia

    NASA Astrophysics Data System (ADS)

    Montiel, Ariadna; Bretón, Nora

    2011-10-01

    To consider GRBs as standard candles, the circularity problem must be surmounted. To do this, GRBs are calibrated at low redshifts using SN Ia data, and the calibration is then extrapolated to higher redshifts. In this work we apply the GRB calibration to estimate the Hubble parameter, H(z), from the luminosity distance extracted from the calibration and, knowing H(z), we study the parameter w(z) of the equation of state of dark energy.

  13. ALTEA calibration

    NASA Astrophysics Data System (ADS)

    Zaconte, V.; Altea Team

    The ALTEA project is aimed at studying the possible functional damage to the Central Nervous System (CNS) due to particle radiation in the space environment. The project is an international and multi-disciplinary collaboration. The ALTEA facility is a helmet-shaped device that will concurrently study the passage of cosmic radiation through the brain, the functional status of the visual system and the electrophysiological dynamics of cortical activity. The basic instrumentation is composed of six active particle telescopes, one ElectroEncephaloGraph (EEG), a visual stimulator and a pushbutton. The telescopes are able to detect the passage of each particle, measuring its energy, trajectory and the energy released into the brain, and identifying the nuclear species. The EEG and the visual stimulator are able to measure the functional status of the visual system and the cortical electrophysiological activity, and to look for a correlation between incident particles, brain activity and light flash perceptions. These basic instruments can be used separately or in any combination, permitting several different experiments. ALTEA is scheduled to fly on the International Space Station (ISS) on 15 November 2004. In this paper the calibration of the Flight Model of the silicon telescopes (Silicon Detector Units - SDUs) is shown. These measurements were taken at the GSI heavy ion accelerator in Darmstadt. The first calibration was carried out in November 2003 on the SDU-FM1 using C nuclei at different energies: 100, 150, 400 and 600 MeV/n. We performed a complete beam scan of the SDU-FM1 to check the functionality and homogeneity of all strips of the silicon detector planes; for each beam energy we collected data to achieve good statistics, and finally we placed two different thicknesses of aluminium and Plexiglas in front of the detector in order to study fragmentation. This test was carried out with a Test Equipment to simulate the Digital Acquisition Unit (DAU). We are scheduled to

  14. Multivariate image analysis in biomedicine.

    PubMed

    Nattkemper, Tim W

    2004-10-01

    In recent years, multivariate imaging techniques have been developed and applied in biomedical research to an increasing degree. In research projects, and in clinical studies as well, m-dimensional multivariate images (MVI) are recorded and stored in databases for subsequent analysis. The complexity of the m-dimensional data and the growing number of high-throughput applications call for new strategies for applying image processing and data mining to support direct interactive analysis by human experts. This article provides an overview of proposed approaches for MVI analysis in biomedicine. After summarizing the biomedical MVI techniques, the two-level framework for MVI analysis is illustrated. Following this framework, state-of-the-art solutions from the fields of image processing and data mining are reviewed and discussed. Motivations for MVI data mining in biology and medicine are characterized, followed by an overview of graphical and auditory approaches for interactive data exploration. The paper concludes by summarizing open problems in MVI analysis and remarks upon the future development of biomedical MVI analysis.

  15. Introduction to multivariate discrimination

    NASA Astrophysics Data System (ADS)

    Kégl, Balázs

    2013-07-01

    Multivariate discrimination or classification is one of the best-studied problems in machine learning, with a plethora of well-tested and well-performing algorithms. There are also several good general textbooks [1-9] on the subject written for an average engineering, computer science, or statistics graduate student; most of them are also accessible to an average physics student with some background in computer science and statistics. Hence, instead of writing a generic introduction, we concentrate here on relating the subject to the practicing experimental physicist. After a short introduction on the basic setup (Section 1) we delve into the practical issues of complexity regularization, model selection, and hyperparameter optimization (Section 2), since it is this step that makes high-complexity non-parametric fitting so different from low-dimensional parametric fitting. To emphasize that this issue is not restricted to classification, we illustrate the concept on a low-dimensional but non-parametric regression example (Section 2.1). Section 3 describes the common algorithmic-statistical formal framework that unifies the main families of multivariate classification algorithms. We explain here the large-margin principle that partly explains why these algorithms work. Section 4 is devoted to the description of the three main (families of) classification algorithms: neural networks, the support vector machine, and AdaBoost. We do not go into the algorithmic details; the goal is to give an overview of the form of the functions these methods learn and of the objective functions they optimize. Besides their technical description, we also make an attempt to put these algorithms into a socio-historical context. We then briefly describe some rather heterogeneous applications to illustrate the pattern recognition pipeline and to show how widespread the use of these methods is (Section 5). We conclude the chapter with three essentially open research problems that are either
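    To make the model-selection step above concrete, here is a hedged sketch that tunes the hyperparameters of an RBF support vector machine by cross-validated grid search on a synthetic two-class problem; the dataset, grid, and scores are placeholders and are not taken from the chapter.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Synthetic two-class data standing in for a real discrimination problem.
    X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    # Complexity regularization: C and gamma control the capacity of the RBF SVM and are
    # chosen by cross-validation on the training set, never by peeking at the test set.
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": [1e-3, 1e-2, 1e-1]}
    search = GridSearchCV(model, grid, cv=5)
    search.fit(X_train, y_train)

    print("selected hyperparameters:", search.best_params_)
    print("held-out accuracy:", round(search.best_estimator_.score(X_test, y_test), 3))
    ```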

  16. Multivariate volume rendering

    SciTech Connect

    Crawfis, R.A.

    1996-03-01

    This paper presents a new technique for representing multivalued data sets defined on an integer lattice. It extends the state of the art in volume rendering to include nonhomogeneous volume representations, that is, volume rendering of materials with very fine detail (e.g., translucent granite) within a voxel. Multivariate volume rendering is achieved by introducing controlled amounts of noise within the volume representation. Varying the local amount of noise within the volume is used to represent a separate scalar variable. The technique can also be used in image synthesis to create more realistic clouds and fog.

  17. Multivariate polynomial interpolation under projectivities III

    NASA Astrophysics Data System (ADS)

    Mühlbach, G.; Gasca, M.

    1994-03-01

    This is the third part of a note on multivariate interpolation. Some remainder formulas for interpolation on knot sets that are perspective images of standard lower data sets are given. They apply to all knot systems considered in parts I and II.

  18. Multisite randomised controlled trial to evaluate polypropylene clips applied to the breech of lambs as an alternative to mulesing. II: multivariate analysis of relationships between clip treatment and operator, sheep, farm and environmental factors.

    PubMed

    Rabiee, A R; Playford, M C; Evans, I; Lindon, G; Stevenson, M; Lean, I J

    2012-11-01

    A multivariate analysis approach was used to evaluate both the effects of application of occlusive polypropylene clips to the breech on bare area measurements and scores of lambs, and the influence of operator, region, sheep, farm and environmental factors on outcomes. A randomised controlled trial using 32,028 lambs was conducted on 208 commercial wool-growing properties across Australia. Differences in bare area measurements and scores between groups were estimated and analysed using a mixed model to investigate the effects of operator differences, farm and environmental factors and the interactions among these factors. Clip-treated lambs with higher body weight at visit 1 had higher bare area measures and scores, but lower changes in dag and urine scores. Lambs with tight skin showed improved response in bare area scores and measurements after clip treatment, but lambs with a high wrinkle score at visit 1 showed less response to the treatment in their urine, dag and wrinkle and bare area scores. These effects of the clip treatment were not significantly influenced by estimated fleece fibre diameter, operator or region, but were significantly influenced by farm. The effect of occlusive clips on breech measurements and scores was significantly influenced by body weight, skin type and thickness, wrinkle score and sex of the lamb, but not by region, operator or estimated fibre diameter. The clip treatment significantly improved characteristics that influence the susceptibility of lambs to flystrike under most conditions. © 2012 The Authors. Australian Veterinary Journal © 2012 Australian Veterinary Association.

  19. Method of multivariate spectral analysis

    DOEpatents

    Keenan, Michael R.; Kotula, Paul G.

    2004-01-06

    A method of determining the properties of a sample from measured spectral data collected from the sample by performing a multivariate spectral analysis. The method can include: generating a two-dimensional matrix A containing measured spectral data; providing a weighted spectral data matrix D by performing a weighting operation on matrix A; factoring D into the product of two matrices, C and S^T, by performing a constrained alternating least-squares analysis of D = CS^T, where C is a concentration intensity matrix and S is a spectral shapes matrix; unweighting C and S by applying the inverse of the weighting used previously; and determining the properties of the sample by inspecting C and S. This method can be used to analyze X-ray spectral data generated by operating a Scanning Electron Microscope (SEM) with an attached Energy Dispersive Spectrometer (EDS).
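    A minimal, hedged sketch of a constrained alternating least-squares factorization of the kind the abstract describes, with non-negativity as the example constraint; the synthetic data, the number of components, and the crude clipping projection are illustrative choices and do not reproduce the patented weighting or procedure.

    ```python
    import numpy as np

    def mcr_als(D, n_components, n_iter=200, seed=0):
        """Factor D (rows x channels) into C (rows x k) and S (channels x k) so that
        D ≈ C @ S.T, by alternating least squares with non-negativity on both factors."""
        rng = np.random.default_rng(seed)
        S = rng.random((D.shape[1], n_components))              # initial spectral shapes
        for _ in range(n_iter):
            # Solve D ≈ C S^T for C, then clip to keep concentrations non-negative.
            C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0, None)
            # Solve D ≈ C S^T for S, then clip to keep spectral shapes non-negative.
            S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0, None)
        return C, S

    # Synthetic demo: two overlapping "spectral" components mixed across 100 pixels.
    x = np.linspace(0, 10, 256)
    pure = np.vstack([np.exp(-(x - 3) ** 2), np.exp(-(x - 6) ** 2)])        # 2 x 256
    conc = np.random.default_rng(1).random((100, 2))                        # 100 x 2
    D = conc @ pure + 0.01 * np.random.default_rng(2).normal(size=(100, 256))
    C, S = mcr_als(D, n_components=2)
    print(C.shape, S.shape)   # (100, 2) (256, 2)
    ```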

  20. TIME CALIBRATED OSCILLOSCOPE SWEEP

    DOEpatents

    Owren, H.M.; Johnson, B.M.; Smith, V.L.

    1958-04-22

    A time calibrator for an electric signal displayed on an oscilloscope is described. In contrast to the conventional technique of using time-calibrated divisions on the face of the oscilloscope, this invention provides means for directly superimposing equally time-spaced markers upon a signal displayed on an oscilloscope. More explicitly, the present invention includes generally a generator for developing a linear saw-tooth voltage and a circuit for combining a high-frequency sinusoidal voltage of suitable amplitude and frequency with the saw-tooth voltage to produce a resultant sweep deflection voltage having a wave shape which is substantially linear with respect to time between equally time-spaced incremental plateau regions occurring once each cycle of the sinusoidal voltage. The foregoing sweep voltage, when applied to the horizontal deflection plates in combination with a signal to be observed applied to the vertical deflection plates of a cathode ray oscilloscope, produces an image on the viewing screen which is essentially a display of the signal to be observed with respect to time. Intensified spots, or certain other conspicuous indications corresponding to the equally time-spaced plateau regions of said sweep voltage, appear superimposed upon said displayed signal, which indications are therefore suitable for direct time calibration purposes.

  1. Definition of the limit of quantification in the presence of instrumental and non-instrumental errors. Comparison among various definitions applied to the calibration of zinc by inductively coupled plasma-mass spectrometry

    NASA Astrophysics Data System (ADS)

    Badocco, Denis; Lavagnini, Irma; Mondin, Andrea; Favaro, Gabriella; Pastore, Paolo

    2015-12-01

    A definition of the limit of quantification (LOQ) in the presence of instrumental and non-instrumental errors is proposed. It is defined theoretically by combining the two-component variance regression and the LOQ schemas already present in the literature, and is applied to the calibration of zinc by the ICP-MS technique. At low concentration levels, the two-component variance LOQ definition should always be used, above all when a clean room is not available. Three LOQ definitions were considered: one in the concentration domain and two in the signal domain. The LOQ computed in the concentration domain, proposed by Currie, was completed by adding the third-order terms in the Taylor expansion, because they are of the same order of magnitude as the second-order terms and therefore cannot be neglected. In this context, the error propagation was simplified by eliminating the correlation contributions through the use of independent random variables. Among the signal-domain definitions, particular attention was devoted to the recently proposed approach based on at least one significant digit in the measurement; the resulting LOQ values were very large, preventing quantitative analysis. It was found that the Currie schemas in the signal and concentration domains gave similar LOQ values, but the former formulation is to be preferred as it is more easily computable.

  2. Schmidt decomposition and multivariate statistical analysis

    NASA Astrophysics Data System (ADS)

    Bogdanov, Yu. I.; Bogdanova, N. A.; Fastovets, D. V.; Luckichev, V. F.

    2016-12-01

    A new method of multivariate data analysis, based on extending a classical probability distribution to a quantum state and on the Schmidt decomposition, is presented. We consider the application of the Schmidt formalism to problems of statistical correlation analysis. The correlation of photons in the beam splitter output channels is examined for the case where the input photon statistics are given by a compound Poisson distribution. The developed formalism allows us to analyze multidimensional systems, and we have obtained analytical formulas for the Schmidt decomposition of multivariate Gaussian states. It is shown that the mathematical tools of quantum mechanics can significantly improve classical statistical analysis. The presented formalism is a natural approach for the analysis of both classical and quantum multivariate systems and can be applied in various tasks associated with the study of dependences.
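    For readers unfamiliar with the construction, the Schmidt decomposition of a pure bipartite state is simply the singular value decomposition of its reshaped coefficient matrix; the sketch below is a generic numerical illustration of that fact with arbitrary amplitudes, not code from the paper.

    ```python
    import numpy as np

    # A pure state of a 3 x 4 bipartite system written as a coefficient matrix psi[i, j]
    # for the basis states |i>|j>; the amplitudes are arbitrary.
    rng = np.random.default_rng(0)
    psi = rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4))
    psi /= np.linalg.norm(psi)                          # normalize the state

    # Schmidt decomposition = SVD of the coefficient matrix:
    # psi[i, j] = sum_k lam[k] * U[i, k] * Vh[k, j]
    U, lam, Vh = np.linalg.svd(psi, full_matrices=False)

    schmidt_probs = lam ** 2                            # weights of the Schmidt modes
    schmidt_number = 1.0 / np.sum(schmidt_probs ** 2)   # effective number of modes
    print("Schmidt weights:", np.round(schmidt_probs, 4))
    print("Schmidt number K =", round(schmidt_number, 3))
    ```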

  3. mmm: an R package for analyzing multivariate longitudinal data with multivariate marginal models.

    PubMed

    Asar, Özgür; İlk, Özlem

    2013-12-01

    Modeling multivariate longitudinal data poses many challenges in terms of both statistical and computational aspects. Statistical challenges occur because of complex dependence structures. Computational challenges are due to the complex algorithms, the use of numerical methods, and potential convergence problems. Therefore, there is a lack of software for such data. This paper introduces an R package, mmm, prepared for marginal modeling of multivariate longitudinal data. Parameter estimation is achieved by the generalized estimating equations approach. A real-life data set is used to illustrate the core features of the package, and sample R code snippets are provided. It is shown that the multivariate marginal models considered in this paper and mmm are valid for binary, continuous and count multivariate longitudinal responses.

  4. Relationship between Multiple Regression and Selected Multivariable Methods.

    ERIC Educational Resources Information Center

    Schumacker, Randall E.

    The relationship of multiple linear regression to various multivariate statistical techniques is discussed. The importance of the standardized partial regression coefficient (beta weight) in multiple linear regression as it is applied in path, factor, LISREL, and discriminant analyses is emphasized. The multivariate methods discussed in this paper…

  5. Muon Energy Calibration of the MINOS Detectors

    SciTech Connect

    Miyagawa, Paul S.

    2004-01-01

    MINOS is a long-baseline neutrino oscillation experiment designed to search for conclusive evidence of neutrino oscillations and to measure the oscillation parameters precisely. MINOS comprises two iron tracking calorimeters located at Fermilab and Soudan. The Calibration Detector at CERN is a third MINOS detector used as part of the detector response calibration programme. A correct energy calibration between these detectors is crucial for the accurate measurement of oscillation parameters. This thesis presents a calibration developed to produce a uniform response within a detector using cosmic muons. Reconstruction of tracks in cosmic ray data is discussed. This data is utilized to calculate calibration constants for each readout channel of the Calibration Detector. These constants have an average statistical error of 1.8%. The consistency of the constants is demonstrated both within a single run and between runs separated by a few days. Results are presented from applying the calibration to test beam particles measured by the Calibration Detector. The responses are calibrated to within 1.8% systematic error. The potential impact of the calibration on the measurement of oscillation parameters by MINOS is also investigated. Applying the calibration reduces the errors in the measured parameters by ~ 10%, which is equivalent to increasing the amount of data by 20%.

  6. Development of a univariate calibration model for pharmaceutical analysis based on NIR spectra.

    PubMed

    Blanco, M; Cruz, J; Bautista, M

    2008-12-01

    Near-infrared spectroscopy (NIRS) has been widely used in the pharmaceutical field because of its ability to provide quality information about drugs in near-real time. In practice, however, the NIRS technique requires construction of multivariate models in order to correct for collinearity and the typically poor selectivity of NIR spectra. In this work, a new methodology for constructing simple NIR calibration models has been developed, based on the spectrum of the target analyte (usually the active pharmaceutical ingredient, API), which is compared with that of the sample in order to calculate a correlation coefficient. To this end, calibration samples are prepared spanning an adequate concentration range for the API and their spectra are recorded. The model thus obtained by relating the correlation coefficient to the sample concentration is subjected to least-squares regression. The API concentration in validation samples is predicted by interpolating their correlation coefficients in the straight calibration line previously obtained. The proposed method affords quantitation of the API in pharmaceuticals undergoing physical changes during their production process (e.g. granulates, and coated and non-coated tablets). The results obtained with the proposed methodology, based on correlation coefficients, were compared with the predictions of PLS1 calibration models, for which a different model is required for each type of sample. Error values lower than 1-2% were obtained in the analysis of three types of sample using the same model; these errors are similar to those obtained by applying three PLS models for granules, and non-coated and coated samples. Based on the outcome, our methodology is a straightforward choice for constructing calibration models affording expeditious prediction of new samples with varying physical properties. This makes it an effective alternative to multivariate calibration, which requires use of a different model for each type of sample, depending on
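    A hedged sketch of the correlation-coefficient idea described above: each sample spectrum is correlated against a reference spectrum of the target analyte, and the resulting correlation coefficient is regressed against concentration to give a straight calibration line. The spectra below are simulated Gaussian stand-ins; the published work naturally used measured NIR spectra and real formulations.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    wl = np.linspace(1100, 2500, 400)                    # wavelength grid (nm)
    api_spectrum = np.exp(-((wl - 1700) / 60) ** 2)      # stand-in for the pure-API spectrum
    excipient = np.exp(-((wl - 2100) / 120) ** 2)        # stand-in for the matrix contribution

    def sample_spectrum(c_api):
        """Simulated spectrum of a sample whose API mass fraction is c_api."""
        return c_api * api_spectrum + (1 - c_api) * excipient + 0.005 * rng.normal(size=wl.size)

    # Calibration set spanning an adequate API concentration range.
    c_cal = np.linspace(0.05, 0.50, 10)
    r_cal = [np.corrcoef(sample_spectrum(c), api_spectrum)[0, 1] for c in c_cal]

    # Straight calibration line: concentration as a function of the correlation coefficient.
    slope, intercept = np.polyfit(r_cal, c_cal, 1)

    # Prediction: interpolate the correlation coefficient of an unknown sample on the line.
    r_unknown = np.corrcoef(sample_spectrum(0.30), api_spectrum)[0, 1]
    print("predicted API fraction:", round(slope * r_unknown + intercept, 3))
    ```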

  7. Traceable Pyrgeometer Calibrations

    SciTech Connect

    Dooraghi, Mike; Kutchenreiter, Mark; Reda, Ibrahim; Habte, Aron; Sengupta, Manajit; Andreas, Afshin; Newman, Martina

    2016-05-02

    This poster presents the development, implementation, and operation of the Broadband Outdoor Radiometer Calibrations (BORCAL) Longwave (LW) system at the Southern Great Plains Radiometric Calibration Facility for the calibration of pyrgeometers that provide traceability to the World Infrared Standard Group.

  8. Angles of multivariable root loci

    NASA Technical Reports Server (NTRS)

    Thompson, P. M.; Stein, G.; Laub, A. J.

    1982-01-01

    A generalized eigenvalue problem is demonstrated to be useful for computing the multivariable root locus, particularly when obtaining the arrival angles to finite transmission zeros. The multivariable root loci are found for a linear, time-invariant output feedback problem. The problem is then employed to compute a closed-loop eigenstructure. The method of computing angles on the root locus is demonstrated, and the method is extended to a multivariable optimal root locus.
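    As background for the generalized eigenvalue formulation mentioned above, the finite transmission zeros of a square state-space system (A, B, C, D) can be computed as the finite generalized eigenvalues of the Rosenbrock pencil; the matrices below are an arbitrary illustrative example, not taken from the paper, and the arrival-angle computation itself is not shown.

    ```python
    import numpy as np
    from scipy.linalg import eig

    # Illustrative 2-input, 2-output state-space system (A, B, C, D).
    A = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [-6.0, -11.0, -6.0]])
    B = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    C = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    D = np.zeros((2, 2))

    # Transmission zeros are the finite lambda at which the Rosenbrock system matrix
    # [[A - lambda*I, B], [C, D]] loses rank, i.e. the finite generalized eigenvalues
    # of the pencil (M, N) below.
    n = A.shape[0]
    M = np.block([[A, B], [C, D]])
    N = np.block([[np.eye(n), np.zeros_like(B)],
                  [np.zeros_like(C), np.zeros_like(D)]])

    zeros = eig(M, N, right=False)
    finite_zeros = zeros[np.isfinite(zeros)]
    print("transmission zeros:", np.round(finite_zeros, 4))
    ```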

  9. Calibration of sound calibrators: an overview

    NASA Astrophysics Data System (ADS)

    Milhomem, T. A. B.; Soares, Z. M. D.

    2016-07-01

    This paper presents an overview of the calibration of sound calibrators. Initially, traditional calibration methods are presented. Then, the international standard IEC 60942 is discussed, emphasizing parameters, target measurement uncertainty and criteria for conformance to the requirements of the standard. Finally, comparisons among Regional Metrology Organizations are summarized.

  10. Simultaneous chemometric determination of pyridoxine hydrochloride and isoniazid in tablets by multivariate regression methods.

    PubMed

    Dinç, Erdal; Ustündağ, Ozgür; Baleanu, Dumitru

    2010-08-01

    The sole use of pyridoxine hydrochloride during treatment of tuberculosis gives rise to pyridoxine deficiency. Therefore, a combination of pyridoxine hydrochloride and isoniazid is used in pharmaceutical dosage form in tuberculosis treatment to reduce this side effect. In this study, two chemometric methods, partial least squares (PLS) and principal component regression (PCR), were applied to the simultaneous determination of pyridoxine (PYR) and isoniazid (ISO) in their tablets. A concentration training set comprising binary mixtures of PYR and ISO, consisting of 20 different combinations, was randomly prepared in 0.1 M HCl. Both multivariate calibration models were constructed using the relationships between the concentration data set (concentration data matrix) and the absorbance data matrix in the spectral region 200-330 nm. The accuracy and the precision of the proposed chemometric methods were validated by analyzing synthetic mixtures containing the investigated drugs. The recoveries obtained by applying the PCR and PLS calibrations to the artificial mixtures were between 100.0 and 100.7%. Satisfactory results were also obtained by applying the PLS and PCR methods to commercial samples. These results strongly encourage the use of the methods for the quality control and routine analysis of marketed tablets containing the PYR and ISO drugs. Copyright © 2010 John Wiley & Sons, Ltd.
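    A hedged sketch of the two chemometric models named above, built with scikit-learn on simulated two-component absorbance data; the band shapes, noise level and concentration ranges are stand-ins for the published calibration set, and the component count is fixed at two purely for illustration.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    wl = np.linspace(200, 330, 131)                    # nm, the reported spectral region
    pure_pyr = np.exp(-((wl - 290) / 15) ** 2)         # stand-in spectrum for pyridoxine
    pure_iso = np.exp(-((wl - 263) / 12) ** 2)         # stand-in spectrum for isoniazid

    # Training set: 20 random binary mixtures (concentrations in arbitrary units).
    C_train = rng.uniform([2.0, 2.0], [12.0, 8.0], size=(20, 2))
    A_train = C_train @ np.vstack([pure_pyr, pure_iso]) + 0.002 * rng.normal(size=(20, wl.size))

    pls = PLSRegression(n_components=2).fit(A_train, C_train)
    pcr = make_pipeline(PCA(n_components=2), LinearRegression()).fit(A_train, C_train)

    # Synthetic validation mixture with known composition.
    c_true = np.array([[6.0, 4.0]])
    a_val = c_true @ np.vstack([pure_pyr, pure_iso]) + 0.002 * rng.normal(size=(1, wl.size))
    print("PLS prediction:", np.round(pls.predict(a_val), 2))
    print("PCR prediction:", np.round(pcr.predict(a_val), 2))
    ```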

  11. Methodological challenges to multivariate syndromic surveillance: a case study using Swiss animal health data.

    PubMed

    Vial, Flavie; Wei, Wei; Held, Leonhard

    2016-12-20

    In an era of ubiquitous electronic collection of animal health data, multivariate surveillance systems (which concurrently monitor several data streams) should have a greater probability of detecting disease events than univariate systems. However, despite their limitations, univariate aberration detection algorithms are used in most active syndromic surveillance (SyS) systems because of their ease of application and interpretation. On the other hand, a stochastic modelling-based approach to multivariate surveillance offers more flexibility, allowing for the retention of historical outbreaks, for overdispersion and for non-stationarity. While such methods are not new, they are yet to be applied to animal health surveillance data. We applied an example of such a stochastic model, Held and colleagues' two-component model, to two multivariate animal health datasets from Switzerland. In our first application, multivariate time series of the number of laboratory test requests were derived from Swiss animal diagnostic laboratories. We compared the performance of the two-component model to parallel monitoring using an improved Farrington algorithm and found that both methods yield a satisfactorily low false alarm rate. However, the calibration test of the two-component model on the one-step-ahead predictions proved satisfactory, making such an approach suitable for outbreak prediction. In our second application, the two-component model was applied to the multivariate time series of the number of cattle abortions and the number of test requests for bovine viral diarrhea (a disease that often results in abortions). We found that there is a two-day lagged effect from the number of abortions to the number of test requests. We further compared the joint modelling and univariate modelling of the laboratory test request time series. The joint modelling approach showed evidence of superiority in terms of forecasting abilities. Stochastic modelling approaches offer the

  12. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert M.

    2013-01-01

    A new regression model search algorithm was developed that may be applied to both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The algorithm is a simplified version of a more complex algorithm that was originally developed for the NASA Ames Balance Calibration Laboratory. The new algorithm performs regression model term reduction to prevent overfitting of data. It has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a regression model search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression model. Therefore, the simplified algorithm is not intended to replace the original algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new search algorithm.

  13. Influence of temperature on vibrational spectra and consequences for the predictive ability of multivariate models.

    PubMed

    Wülfert, F; Kok, W T; Smilde, A K

    1998-05-01

    Temperature, pressure, viscosity, and other process variables fluctuate during an industrial process. When vibrational spectra are measured on- or in-line for process analytical and control purposes, these fluctuations influence the shape of the spectra in a nonlinear manner. The influence of such temperature-induced spectral variations on the predictive ability of multivariate calibration models is assessed. Short-wave NIR spectra of ethanol/water/2-propanol mixtures are taken at different temperatures, and different local and global partial least-squares calibration strategies are applied. The resulting prediction errors and sensitivity vectors of a test set are compared. For data with no temperature variation, the local models perform best, with high sensitivity, but knowledge of the temperature of the prediction measurements cannot improve local model predictions when temperature variation is introduced. The prediction errors of global models are considerably lower when temperature variation is present in the data set, but at the expense of sensitivity. To be able to build temperature-stable calibration models with high sensitivity, a way of explicitly modeling the temperature should be found.

  14. Alternative calibration techniques for counteracting the matrix effects in GC-MS-SPE pesticide residue analysis - a statistical approach.

    PubMed

    Rimayi, Cornelius; Odusanya, David; Mtunzi, Fanyana; Tsoka, Shepherd

    2015-01-01

    This paper investigates the efficiency of applying four different multivariate calibration techniques, namely matrix-matched internal standard (MMIS), matrix-matched external standard (MMES), solvent-only internal standard (SOIS) and solvent-only external standard (SOES), to the detection and quantification of 20 organochlorine compounds in high-, low- and blank-matrix water samples by Gas Chromatography-Mass Spectrometry (GC-MS) coupled to solid phase extraction (SPE). Further statistical testing, using the Statistical Package for the Social Sciences (SPSS) to apply MANOVA, T-tests and Levene's F tests, indicates that matrix composition has a more significant effect on the efficiency of the analytical method than the calibration method of choice. Matrix effects are widely described as one of the major sources of error in GC-MS multiresidue analysis. Descriptive and inferential statistics showed that matrix-matched internal standard calibration was the best approach to use for samples of varying matrix composition, as it produced the most precise average mean recovery of 87% across all matrices tested. The use of internal standard calibration overall produced more precise total recoveries than external standard calibration, with mean values of 77% and 64%, respectively. The internal standard calibration technique produced a particularly high overall standard deviation of 38% at the 95% confidence level, indicating that it is less robust than the external standard calibration method, which had an overall standard error of 32% at the 95% confidence level. Overall, matrix-matched external standard calibration proved to be the best calibration approach for the analysis of low-matrix samples, which consisted of the real sample matrix, as it had the most precise recovery of 98% compared to the other calibration approaches for the low-matrix samples.
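    To make the calibration modes concrete, the following hedged sketch contrasts external-standard calibration (response vs. concentration) with internal-standard calibration (response ratio vs. concentration ratio) on made-up GC-MS peak areas; the analyte, spiking level and numbers are placeholders and not data from the study.

    ```python
    import numpy as np

    # Made-up calibration data for one organochlorine analyte (concentrations in ng/mL).
    conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0])
    area_analyte = np.array([1180.0, 2420.0, 6100.0, 12300.0, 24100.0])   # analyte peak areas
    area_istd = np.array([10050.0, 9980.0, 10120.0, 9890.0, 10210.0])     # internal standard areas
    conc_istd = 50.0                                                      # ng/mL spiked in every sample

    # External standard: fit peak area = m * concentration + b directly.
    m_ext, b_ext = np.polyfit(conc, area_analyte, 1)

    # Internal standard: fit (analyte area / ISTD area) = m * (conc / ISTD conc) + b, which
    # cancels injection-volume and matrix-suppression effects shared by the two peaks.
    m_int, b_int = np.polyfit(conc / conc_istd, area_analyte / area_istd, 1)

    # Quantify an unknown sample from its measured peak areas.
    unk_area, unk_istd_area = 8150.0, 9770.0
    c_external = (unk_area - b_ext) / m_ext
    c_internal = conc_istd * ((unk_area / unk_istd_area) - b_int) / m_int
    print(f"external-standard estimate: {c_external:.1f} ng/mL")
    print(f"internal-standard estimate: {c_internal:.1f} ng/mL")
    ```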

  15. Quality evaluation and prediction of Citrullus lanatus by 1H NMR-based metabolomics and multivariate analysis.

    PubMed

    Tarachiwin, Lucksanaporn; Masako, Osawa; Fukusaki, Eiichiro

    2008-07-23

    (1)H NMR spectrometry in combination with multivariate analysis was considered to provide more information for quality assessment than an ordinary sensory testing method, owing to its high reliability and high accuracy. The sensory quality evaluation of watermelon (Citrullus lanatus (Thunb.) Matsum. & Nakai) was carried out by means of (1)H NMR-based metabolomics. Multivariate analyses by partial least-squares projections to latent structures-discriminant analysis (PLS-DA) and PLS-regression offered extensive information for quality differentiation and quality evaluation, respectively. The impact of watermelon and rootstock cultivars on the sensory qualities of watermelon was determined on the basis of (1)H NMR metabolic fingerprinting and profiling. The significant metabolites contributing to the discrimination were also identified. A multivariate calibration model was successfully constructed by PLS-regression with extremely high reliability and accuracy. Thus, (1)H NMR-based metabolomics with multivariate analysis was considered to be one of the most suitable complementary techniques that could be applied to assess and predict the sensory quality of watermelons and other horticultural plants.

  16. Multivariate analysis of longitudinal rates of change.

    PubMed

    Bryan, Matthew; Heagerty, Patrick J

    2016-12-10

    Longitudinal data allow direct comparison of the change in patient outcomes associated with treatment or exposure. Frequently, several longitudinal measures are collected that either reflect a common underlying health status, or characterize processes that are influenced in a similar way by covariates such as exposure or demographic characteristics. Statistical methods that can combine multivariate response variables into common measures of covariate effects have been proposed in the literature. Current methods for characterizing the relationship between covariates and the rate of change in multivariate outcomes are limited to select models. For example, 'accelerated time' methods have been developed which assume that covariates rescale time in longitudinal models for disease progression. In this manuscript, we detail an alternative multivariate model formulation that directly structures longitudinal rates of change and that permits a common covariate effect across multiple outcomes. We detail maximum likelihood estimation for a multivariate longitudinal mixed model. We show via asymptotic calculations the potential gain in power that may be achieved with a common analysis of multiple outcomes. We apply the proposed methods to the analysis of a trivariate outcome for infant growth and compare rates of change for HIV infected and uninfected infants. Copyright © 2016 John Wiley & Sons, Ltd.

  17. Bayesian Calibration of Microsimulation Models.

    PubMed

    Rutter, Carolyn M; Miglioretti, Diana L; Savarino, James E

    2009-12-01

    Microsimulation models that describe disease processes synthesize information from multiple sources and can be used to estimate the effects of screening and treatment on cancer incidence and mortality at a population level. These models are characterized by simulation of individual event histories for an idealized population of interest. Microsimulation models are complex and invariably include parameters that are not well informed by existing data. Therefore, a key component of model development is the choice of parameter values. Microsimulation model parameter values are selected to reproduce expected or known results through the process of model calibration. Calibration may be done by perturbing model parameters one at a time or by using a search algorithm. As an alternative, we propose a Bayesian method to calibrate microsimulation models that uses Markov chain Monte Carlo. We show that this approach converges to the target distribution and use a simulation study to demonstrate its finite-sample performance. Although computationally intensive, this approach has several advantages over previously proposed methods, including the use of statistical criteria to select parameter values, simultaneous calibration of multiple parameters to multiple data sources, incorporation of information via prior distributions, description of parameter identifiability, and the ability to obtain interval estimates of model parameters. We develop a microsimulation model for colorectal cancer and use our proposed method to calibrate model parameters. The microsimulation model provides a good fit to the calibration data. We find evidence that some parameters are identified primarily through prior distributions. Our results underscore the need to incorporate multiple sources of variability (i.e., due to calibration data, unknown parameters, and estimated parameters and predicted values) when calibrating and applying microsimulation models.
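    The following hedged sketch illustrates the general idea of MCMC-based calibration on a toy model with a single unknown parameter; it is not the colorectal-cancer microsimulation, and the binomial likelihood, flat prior, proposal width and calibration target are all invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Calibration target: in the observed data, 1520 of 5000 individuals experienced the
    # event of interest; theta is the unknown model parameter governing that event.
    k_obs, n_obs = 1520, 5000

    def log_posterior(theta):
        """Binomial log-likelihood of the calibration target plus a flat prior on (0, 1)."""
        if not 0.0 < theta < 1.0:
            return -np.inf
        return k_obs * np.log(theta) + (n_obs - k_obs) * np.log(1.0 - theta)

    # Random-walk Metropolis-Hastings over the unknown parameter.
    theta = 0.5
    lp = log_posterior(theta)
    chain = []
    for _ in range(20000):
        proposal = theta + 0.02 * rng.normal()
        lp_prop = log_posterior(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:        # accept/reject step
            theta, lp = proposal, lp_prop
        chain.append(theta)

    posterior = np.array(chain[5000:])                  # discard burn-in
    print("posterior mean:", round(posterior.mean(), 3))
    print("95% interval:", np.round(np.percentile(posterior, [2.5, 97.5]), 3))
    ```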

  18. Quantitative analysis of essential oils in perfume using multivariate curve resolution combined with comprehensive two-dimensional gas chromatography.

    PubMed

    de Godoy, Luiz Antonio Fonseca; Hantao, Leandro Wang; Pedroso, Marcio Pozzobon; Poppi, Ronei Jesus; Augusto, Fabio

    2011-08-05

    The use of multivariate curve resolution (MCR) to build multivariate quantitative models using data obtained from comprehensive two-dimensional gas chromatography with flame ionization detection (GC×GC-FID) is presented and evaluated. The MCR algorithm presents some important features, such as the second-order advantage and the recovery of the instrumental response for each pure component after optimization by an alternating least squares (ALS) procedure. A model to quantify the essential oil of rosemary was built using a calibration set containing only known concentrations of the essential oil and cereal alcohol as solvent. A calibration curve correlating the concentration of the essential oil of rosemary with the instrumental response obtained from the MCR-ALS algorithm was obtained, and this calibration model was applied to predict the concentration of the oil in complex samples (mixtures of the essential oil, pineapple essence and commercial perfume). The values of the root mean square error of prediction (RMSEP) and of the root mean square error of the percentage deviation (RMSPD) obtained were 0.4% (v/v) and 7.2%, respectively. Additionally, a second model was built and used to evaluate the accuracy of the method. A model to quantify the essential oil of lemon grass was built and its concentration was predicted in the validation set and in real perfume samples. The RMSEP and RMSPD obtained were 0.5% (v/v) and 6.9%, respectively, and the concentration of the essential oil of lemon grass in the perfume agreed with the value provided by the manufacturer. These results indicate that the MCR algorithm is adequate to resolve the target chromatogram from the complex sample and to build multivariate models from GC×GC-FID data.

  19. Calibrating the MINERvA detector

    NASA Astrophysics Data System (ADS)

    Mousseau, Joel; Minerva Collaboration

    2012-08-01

    The MINERvA experiment, located at Fermilab, will use the NuMI beam line for measuring neutrino-nucleus interaction rates with very high precision. In order to obtain the unprecedented precision MINERvA is capable of, sophisticated calibration techniques are applied both prior to installation and in situ. Calibration of PMT gains and scintillator response is discussed.

  20. Multivariate Bias Correction Procedures for Improving Water Quality Predictions using Mechanistic Models

    NASA Astrophysics Data System (ADS)

    Libera, D.; Arumugam, S.

    2015-12-01

    Water quality observations are usually not available on a continuous basis because of the high cost and labor requirements, so calibrating and validating a mechanistic model is often difficult. Further, any model predictions inherently have bias (i.e., under/over-estimation) and require techniques that preserve the long-term mean monthly attributes. This study suggests and compares two multivariate bias-correction techniques to improve the performance of the SWAT model in predicting daily streamflow and TN loads across the southeast, based on split-sample validation. The first approach is a dimension reduction technique, canonical correlation analysis, that regresses the observed multivariate attributes on the SWAT model simulated values. The second approach, importance weighting, comes from signal processing and applies a weight, based on the ratio of the observed and model densities, to the model data in order to shift the mean, variance, and cross-correlation towards the observed values. These procedures were applied to three watersheds chosen from the Water Quality Network in the Southeast Region, specifically watersheds with sufficiently large drainage areas and numbers of observed data points. The performance of these two approaches is also compared with independent estimates from the USGS LOADEST model. Uncertainties in the bias-corrected estimates due to limited water quality observations are also discussed.

  1. Simultaneous determination of nifuroxazide and drotaverine hydrochloride in pharmaceutical preparations by bivariate and multivariate spectral analysis.

    PubMed

    Metwally, Fadia H

    2008-02-01

    The quantitative predictive abilities of the new and simple bivariate spectrophotometric method are compared with the results obtained by the use of multivariate calibration methods [classical least squares (CLS), principal component regression (PCR) and partial least squares (PLS)], using the information contained in the absorption spectra of the appropriate solutions. Mixtures of the two drugs Nifuroxazide (NIF) and Drotaverine hydrochloride (DRO) were resolved by application of the bivariate method. The different chemometric approaches were also applied with previous optimization of the calibration matrix, as they are useful for the simultaneous inclusion of many spectral wavelengths. The results found by application of the bivariate, CLS, PCR and PLS methods for the simultaneous determination of mixtures of both components containing 2-12 μg ml(-1) of NIF and 2-8 μg ml(-1) of DRO are reported. Both approaches were satisfactorily applied to the simultaneous determination of NIF and DRO in pure form and in pharmaceutical formulation. The results were in accordance with those given by the EVA Pharma reference spectrophotometric method.

  2. Simultaneous determination of Nifuroxazide and Drotaverine hydrochloride in pharmaceutical preparations by bivariate and multivariate spectral analysis

    NASA Astrophysics Data System (ADS)

    Metwally, Fadia H.

    2008-02-01

    The quantitative predictive abilities of the new and simple bivariate spectrophotometric method are compared with the results obtained by the use of multivariate calibration methods [classical least squares (CLS), principal component regression (PCR) and partial least squares (PLS)], using the information contained in the absorption spectra of the appropriate solutions. Mixtures of the two drugs Nifuroxazide (NIF) and Drotaverine hydrochloride (DRO) were resolved by application of the bivariate method. The different chemometric approaches were also applied with previous optimization of the calibration matrix, as they are useful for the simultaneous inclusion of many spectral wavelengths. The results found by application of the bivariate, CLS, PCR and PLS methods for the simultaneous determination of mixtures of both components containing 2-12 μg ml-1 of NIF and 2-8 μg ml-1 of DRO are reported. Both approaches were satisfactorily applied to the simultaneous determination of NIF and DRO in pure form and in pharmaceutical formulation. The results were in accordance with those given by the EVA Pharma reference spectrophotometric method.

  3. Phase A: calibration concepts for HIRES

    NASA Astrophysics Data System (ADS)

    Huke, Philipp; Origlia, Livia; Riva, Marco; Charsley, Jake; McCracken, Richard; Reid, Derryck; Kowzan, Grzegorz; Maslowski, Piotr; Disseau, Karen; Schäfer, Sebastian; Broeg, Christopher; Sarajlic, Mirsad; Dolon, François; Korhonen, Heidi; Reiners, Ansgar; Boisse, Isabelle; Perruchot, Sandrine; Ottogalli, Sebastien; Pepe, Francesco; Oliva, Ernesto

    2017-06-01

    The instrumentation plan for the E-ELT foresees a High Resolution Spectrograph (HIRES). Among its main goals are the detection of atmospheres of exoplanets and the determination of fundamental physical constants. For this, high radial velocity precision and accuracy are required. HIRES will be designed for maximum intrinsic stability. Systematic errors from effects like intrapixel variations or random errors like fiber noise need to be calibrated. Based on the main requirements for the calibration of HIRES, we discuss different potential calibration sources and how they can be applied. We outline the frequency calibration concept for HIRES using these sources.

  4. TOF PET offset calibration from clinical data

    NASA Astrophysics Data System (ADS)

    Werner, M. E.; Karp, J. S.

    2013-06-01

    In this paper, we present a timing calibration technique for time-of-flight positron emission tomography (TOF PET) that eliminates the need for a specialized data acquisition. By eliminating the acquisition, the process becomes fully automated, and can be performed with any clinical data set and whenever computing resources are available. It also can be applied retroactively to datasets for which a TOF offset calibration is missing or suboptimal. Since the method can use an arbitrary data set to perform a calibration prior to a TOF reconstruction, possibly of the same data set, one also can view this as reconstruction from uncalibrated data. We present a performance comparison with existing calibration techniques.

  5. Image based autodocking without calibration

    SciTech Connect

    Sutanto, H.; Sharma, R.; Varma, V.

    1997-03-01

    The calibration requirements for visual servoing can make it difficult to apply in many real-world situations. One approach to image-based visual servoing without calibration is to dynamically estimate the image Jacobian and use it as the basis for control. However, with the normal motion of a robot toward the goal, the estimation of the image Jacobian deteriorates over time. The authors propose the use of additional exploratory motion to considerably improve the estimation of the image Jacobian. They study the role of such exploratory motion in a visual servoing task. Simulations and experiments with a 6-DOF robot are used to verify the practical feasibility of the approach.
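    A hedged sketch of on-line image-Jacobian estimation of the kind the abstract refers to, using a Broyden rank-one update driven by observed feature and joint displacements plus a small exploratory motion; the simulated 2-DOF "robot", gains and noise level are invented for illustration and do not reproduce the authors' 6-DOF experiments.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def image_features(q):
        """Unknown (to the controller) mapping from joint angles to image features."""
        return np.array([np.cos(q[0]) + 0.5 * q[1], np.sin(q[1]) - 0.3 * q[0]])

    q = np.array([0.2, -0.1])                        # initial joint configuration
    s_goal = image_features(np.array([0.5, 0.8]))    # desired features (reachable by construction)
    J = np.eye(2)                                    # initial guess of the image Jacobian
    s = image_features(q)

    for step in range(300):
        # Image-based control law: move the joints to reduce the feature error.
        dq = 0.1 * np.linalg.pinv(J) @ (s_goal - s)
        # Small exploratory motion keeps the displacements informative for estimation.
        dq += 0.005 * rng.normal(size=2)
        s_new = image_features(q + dq)
        ds = s_new - s
        # Broyden rank-one update: correct J so it reproduces the observed (dq, ds) pair.
        J += np.outer(ds - J @ dq, dq) / (dq @ dq + 1e-12)
        q, s = q + dq, s_new

    print("final feature error:", np.round(s_goal - s, 3))
    ```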

  6. Multivariate analysis in thoracic research

    PubMed Central

    Mengual-Macenlle, Noemí; Marcos, Pedro J.; Golpe, Rafael

    2015-01-01

    Multivariate analysis is based on the observation and analysis of more than one statistical outcome variable at a time. In design and analysis, the technique is used to perform trade studies across multiple dimensions while taking into account the effects of all variables on the responses of interest. Multivariate methods emerged to analyze large databases and increasingly complex data. Since modeling is the best way to represent our knowledge of reality, we should use multivariate statistical methods. Multivariate methods are designed to analyze data sets simultaneously, i.e., to analyze different variables for each person or object studied. Keep in mind at all times that all variables must be treated in a way that accurately reflects the reality of the problem addressed. There are different types of multivariate analysis, and each one should be employed according to the type of variables to analyze: dependence, interdependence and structural methods. In conclusion, multivariate methods are ideal for the analysis of large data sets and for finding cause and effect relationships between variables; there is a wide range of analysis types that we can use. PMID:25922743

  7. Multivariate analysis in thoracic research.

    PubMed

    Mengual-Macenlle, Noemí; Marcos, Pedro J; Golpe, Rafael; González-Rivas, Diego

    2015-03-01

    Multivariate analysis is based on the observation and analysis of more than one statistical outcome variable at a time. In design and analysis, the technique is used to perform trade studies across multiple dimensions while taking into account the effects of all variables on the responses of interest. Multivariate methods emerged to analyze large databases and increasingly complex data. Since modeling is the best way to represent our knowledge of reality, we should use multivariate statistical methods. Multivariate methods are designed to analyze data sets simultaneously, i.e., to analyze different variables for each person or object studied. Keep in mind at all times that all variables must be treated in a way that accurately reflects the reality of the problem addressed. There are different types of multivariate analysis, and each one should be employed according to the type of variables to analyze: dependence, interdependence and structural methods. In conclusion, multivariate methods are ideal for the analysis of large data sets and for finding cause and effect relationships between variables; there is a wide range of analysis types that we can use.

  8. Calibration of pneumotachographs using a calibrated syringe.

    PubMed

    Tang, Yongquan; Turner, Martin J; Yem, Johnny S; Baker, A Barry

    2003-08-01

    Pneumotachographs require frequent calibration. Constant-flow methods allow polynomial calibration curves to be derived but are time consuming. The iterative syringe stroke technique is moderately efficient but results in discontinuous conductance arrays. This study investigated the derivation of first-, second-, and third-order polynomial calibration curves from 6 to 50 strokes of a calibration syringe. We used multiple linear regression to derive first-, second-, and third-order polynomial coefficients from two sets of 6-50 syringe strokes. In part A, peak flows did not exceed the specified linear range of the pneumotachograph, whereas flows in part B peaked at 160% of the maximum linear range. Conductance arrays were derived from the same data sets by using a published algorithm. Volume errors of the calibration strokes and of separate sets of 70 validation strokes (part A) and 140 validation strokes (part B) were calculated by using the polynomials and conductance arrays. Second- and third-order polynomials derived from 10 calibration strokes achieved volume variability equal to or better than conductance arrays derived from 50 strokes. We found that evaluating conductance arrays using the calibration syringe strokes yields falsely low volume variances. We conclude that accurate polynomial curves can be derived from as few as 10 syringe strokes, and the new polynomial calibration method is substantially more time efficient than previously published conductance methods.
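    A hedged sketch of the polynomial-calibration idea: each syringe stroke contributes one linear equation stating that the integrated, polynomial-corrected flow signal must equal the known syringe volume, and the polynomial coefficients are obtained by least squares. The flow profiles, sampling rate and transducer nonlinearity below are simulated placeholders, not the study's recorded data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    dt = 0.01                        # s, sampling interval
    syringe_volume = 3.0             # L, volume delivered by every calibration stroke

    def measured_flow(peak):
        """Simulated pneumotachograph signal for one 3 L stroke: a half-sine flow profile
        with peak flow `peak` (L/s), distorted by a made-up nonlinear transducer response."""
        T = syringe_volume * np.pi / (2.0 * peak)         # stroke duration giving exactly 3 L
        t = np.arange(0.0, T, dt)
        true_flow = peak * np.sin(np.pi * t / T)
        return true_flow * (1.0 - 0.03 * np.abs(true_flow))

    # Each stroke gives one equation:
    #   syringe_volume = a1*sum(x dt) + a2*sum(x^2 dt) + a3*sum(x^3 dt),  x = measured signal.
    strokes = [measured_flow(peak) for peak in rng.uniform(3.0, 8.0, size=10)]
    X = np.array([[np.sum(x ** k) * dt for k in (1, 2, 3)] for x in strokes])
    y = np.full(len(strokes), syringe_volume)
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)         # third-order polynomial coefficients
    print("calibration polynomial coefficients:", np.round(coeffs, 4))

    # Validation: apply the polynomial to a new stroke and integrate to recover its volume.
    x_new = measured_flow(5.0)
    volume = np.sum(sum(c * x_new ** k for c, k in zip(coeffs, (1, 2, 3)))) * dt
    print("recovered stroke volume (L):", round(volume, 3))
    ```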

  9. Hydraulic Calibrator for Strain-Gauge Balances

    NASA Technical Reports Server (NTRS)

    Skelly, Kenneth; Ballard, John

    1987-01-01

    Instrument for calibrating strain-gauge balances uses hydraulic actuators and load cells. Eliminates effects of nonparallelism, nonperpendicularity, and changes of cable directions upon vector sums of applied forces. Errors due to cable stretching, pulley friction, and weight inaccuracy also eliminated. New instrument rugged and transportable. Set up quickly. Developed to apply known loads to wind-tunnel models with encapsulated strain-gauge balances, also adapted for use in calibrating dynamometers, load sensors on machinery and laboratory instruments.

  10. Multivariate Visual Explanation for High Dimensional Datasets

    PubMed Central

    Barlowe, Scott; Zhang, Tianyi; Liu, Yujie; Yang, Jing; Jacobs, Donald

    2010-01-01

    Understanding multivariate relationships is an important task in multivariate data analysis. Unfortunately, existing multivariate visualization systems lose effectiveness when analyzing relationships among variables that span more than a few dimensions. We present a novel multivariate visual explanation approach that helps users interactively discover multivariate relationships among a large number of dimensions by integrating automatic numerical differentiation techniques and multidimensional visualization techniques. The result is an efficient workflow for multivariate analysis model construction, interactive dimension reduction, and multivariate knowledge discovery leveraging both automatic multivariate analysis and interactive multivariate data visual exploration. Case studies and a formal user study with a real dataset illustrate the effectiveness of this approach. PMID:20694164

  11. Nested Taylor decomposition in multivariate function decomposition

    NASA Astrophysics Data System (ADS)

    Baykara, N. A.; Gürvit, Ercan

    2014-12-01

    The Fluctuationlessness approximation applied to the remainder term of a Taylor decomposition expressed in integral form has already been used in many articles. Some forms of multi-point Taylor expansion have also been considered in some articles. This work is, in a sense, a combination of these: the Taylor decomposition of a function is taken with the remainder expressed in integral form. The integrand is then expanded in a Taylor series again, not necessarily around the same point as the first decomposition, and a second remainder is obtained. After making the necessary change of variables and converting the integration limits to the universal [0,1] interval, a multiple-integral expression involving a multivariate function is obtained. It is then intended to apply the Fluctuationlessness approximation to each of these integrals one by one, in order to obtain better results than with the single-node Taylor decomposition to which the Fluctuationlessness approximation is applied.

  12. Analytical multicollimator camera calibration

    USGS Publications Warehouse

    Tayman, W.P.

    1978-01-01

    Calibration with the U.S. Geological Survey multicollimator determines the calibrated focal length, the point of symmetry, the radial distortion referred to the point of symmetry, and the asymmetric characteristics of the camera lens. For this project, two cameras were calibrated, a Zeiss RMK A 15/23 and a Wild RC 8. Four test exposures were made with each camera. Results are tabulated for each exposure and averaged for each set. Copies of the standard USGS calibration reports are included. © 1978.

  13. Application of principal component analysis-multivariate adaptive regression splines for the simultaneous spectrofluorimetric determination of dialkyltins in micellar media.

    PubMed

    Ghasemi, Jahan B; Zolfonoun, Ehsan

    2013-11-01

    A new multicomponent analysis method, based on principal component analysis-multivariate adaptive regression splines (PC-MARS), is proposed for the determination of dialkyltin compounds. In Tween-20 micellar media, dimethyl- and dibutyltin react with morin to give fluorescent complexes with maximum emission peaks at 527 and 520 nm, respectively. The spectrofluorimetric matrix data, before building the MARS models, were subjected to principal component analysis and decomposed to PC scores as starting points for the MARS algorithm. The algorithm classifies the calibration data into several groups, in each of which a regression line or hyperplane is fitted. The performance of the proposed method was tested in terms of root mean square errors of prediction (RMSEP), using synthetic solutions. The results show the strong potential of PC-MARS, as a multivariate calibration method, to be applied to spectral data for multicomponent determinations. The effects of different experimental parameters on the performance of the method were studied and discussed. The prediction capability of the proposed method was compared with that of a GC-MS method for the determination of dimethyltin and/or dibutyltin. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Application of principal component analysis-multivariate adaptive regression splines for the simultaneous spectrofluorimetric determination of dialkyltins in micellar media

    NASA Astrophysics Data System (ADS)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan

    2013-11-01

    A new multicomponent analysis method, based on principal component analysis-multivariate adaptive regression splines (PC-MARS), is proposed for the determination of dialkyltin compounds. In Tween-20 micellar media, dimethyl- and dibutyltin react with morin to give fluorescent complexes with maximum emission peaks at 527 and 520 nm, respectively. The spectrofluorimetric matrix data, before building the MARS models, were subjected to principal component analysis and decomposed to PC scores as starting points for the MARS algorithm. The algorithm classifies the calibration data into several groups, in each of which a regression line or hyperplane is fitted. The performance of the proposed method was tested in terms of root mean square errors of prediction (RMSEP), using synthetic solutions. The results show the strong potential of PC-MARS, as a multivariate calibration method, to be applied to spectral data for multicomponent determinations. The effects of different experimental parameters on the performance of the method were studied and discussed. The prediction capability of the proposed method was compared with that of a GC-MS method for the determination of dimethyltin and/or dibutyltin.

  15. Assessing causality in multivariate accident models.

    PubMed

    Elvik, Rune

    2011-01-01

    This paper discusses the application of operational criteria of causality to multivariate statistical models developed to identify sources of systematic variation in accident counts, in particular the effects of variables representing safety treatments. Nine criteria of causality serving as the basis for the discussion have been developed. The criteria resemble criteria that have been widely used in epidemiology. To assess whether the coefficients estimated in a multivariate accident prediction model represent causal relationships or are non-causal statistical associations, all criteria of causality are relevant, but the most important criterion is how well a model controls for potentially confounding factors. Examples are given to show how the criteria of causality can be applied to multivariate accident prediction models in order to assess the relationships included in these models. It will often be the case that some of the relationships included in a model can reasonably be treated as causal, whereas for others such an interpretation is less supported. The criteria of causality are indicative only and cannot provide a basis for stringent logical proof of causality. Copyright © 2010 Elsevier Ltd. All rights reserved.

  16. Multivariate stochastic simulation with subjective multivariate normal distributions

    Treesearch

    P. J. Ince; J. Buongiorno

    1991-01-01

    In many applications of Monte Carlo simulation in forestry or forest products, it may be known that some variables are correlated. However, for simplicity, in most simulations it has been assumed that random variables are independently distributed. This report describes an alternative Monte Carlo simulation technique for subjectively assessed multivariate normal...

  17. Practical guidelines for reporting results in single- and multi-component analytical calibration: a tutorial.

    PubMed

    Olivieri, Alejandro C

    2015-04-08

    Practical guidelines for reporting analytical calibration results are provided. General topics, such as the number of reported significant figures and the optimization of analytical procedures, affect all calibration scenarios. In the specific case of single-component or univariate calibration, relevant issues discussed in the present Tutorial include: (1) how linearity can be assessed, (2) how to correctly estimate the limits of detection and quantitation, (3) when and how standard addition should be employed, (4) how to apply recovery studies for evaluating accuracy and precision, and (5) how average prediction errors can be compared for different analytical methodologies. For multi-component calibration procedures based on multivariate data, pertinent subjects here included are the choice of algorithms, the estimation of analytical figures of merit (detection capabilities, sensitivity, selectivity), the use of non-linear models, the consideration of the model regression coefficients for variable selection, and the application of certain mathematical pre-processing procedures such as smoothing. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. SUMS calibration test report

    NASA Technical Reports Server (NTRS)

    Robertson, G.

    1982-01-01

    Calibration was performed on the shuttle upper atmosphere mass spectrometer (SUMS). The results of the calibration and the as-run test procedures are presented. The output data are described, and engineering data conversion factors, tables and curves, and calibration of instrument gauges are included. Static calibration results are given, including instrument sensitivity versus external pressure for N2 and O2, data from each calibration scan, data plots for N2 and O2, sensitivity of SUMS at the inlet for N2 and O2, and the ratios of 14/28 for nitrogen and 16/32 for oxygen.

  19. Residual gas analyzer calibration

    NASA Technical Reports Server (NTRS)

    Lilienkamp, R. H.

    1972-01-01

    A technique which employs known gas mixtures to calibrate the residual gas analyzer (RGA) is described. The mass spectra from the RGA are recorded for each gas mixture. The mass spectra data and the mixture composition data each form a matrix. From the two matrices the calibration matrix may be computed. The matrix mathematics requires that the number of calibration gas mixtures be equal to or greater than the number of gases included in the calibration. This technique was evaluated using a mathematical model of an RGA to generate the mass spectra. This model included shot noise errors in the mass spectra. Errors in the gas concentrations were also included in the evaluation. The effects of these errors were studied by varying their magnitudes and comparing the resulting calibrations. Several methods of evaluating an actual calibration are presented. The effects of the number of gases included, the composition of the calibration mixtures, and the number of mixtures used are discussed.
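
    As a rough illustration of the matrix step described above, the sketch below builds a calibration matrix from simulated spectra of known gas mixtures by least squares and then inverts it for an unknown mixture; all array names, sizes and noise levels are illustrative assumptions, not details taken from the report.

        import numpy as np

        rng = np.random.default_rng(0)

        # Known composition of 5 calibration mixtures of 3 gases (rows sum to 1).
        C = rng.dirichlet(np.ones(3), size=5)              # mixtures x gases
        K_true = rng.uniform(0.5, 2.0, size=(3, 8))        # gases x mass channels
        S = C @ K_true + 0.01 * rng.normal(size=(5, 8))    # simulated spectra with noise

        # Least-squares estimate of the calibration matrix; this needs at least
        # as many mixtures as gases, as the abstract notes.
        K_hat, *_ = np.linalg.lstsq(C, S, rcond=None)

        # Recover the composition of an "unknown" mixture from its spectrum.
        s_unknown = np.array([0.2, 0.5, 0.3]) @ K_true
        c_hat, *_ = np.linalg.lstsq(K_hat.T, s_unknown, rcond=None)
        print(np.round(c_hat, 3))                          # close to [0.2, 0.5, 0.3]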

  20. Identification of multivariate linear systems

    SciTech Connect

    Griffith, J.M.

    1981-01-01

    This paper considers the problem of modeling multivariate linear systems where noisy output measurements are the only available data. The techniques presented are valid for a class of canonical forms. Results from several simulations demonstrate the capability for structure and parameter estimation.

  1. Multivariate Model of Infant Competence.

    ERIC Educational Resources Information Center

    Kierscht, Marcia Selland; Vietze, Peter M.

    This paper describes a multivariate model of early infant competence formulated from variables representing infant-environment transaction including: birthweight, habituation index, personality ratings of infant social orientation and task orientation, ratings of maternal responsiveness to infant distress and social signals, and observational…

  2. Fresh Biomass Estimation in Heterogeneous Grassland Using Hyperspectral Measurements and Multivariate Statistical Analysis

    NASA Astrophysics Data System (ADS)

    Darvishzadeh, R.; Skidmore, A. K.; Mirzaie, M.; Atzberger, C.; Schlerf, M.

    2014-12-01

    Accurate estimation of grassland biomass at its peak productivity can provide crucial information regarding the functioning and productivity of the rangelands. Hyperspectral remote sensing has proved to be valuable for the estimation of vegetation biophysical parameters such as biomass using different statistical techniques. However, in statistical analysis of hyperspectral data, multicollinearity is a common problem due to the large number of correlated hyperspectral reflectance measurements. The aim of this study was to examine the prospect of above-ground biomass estimation in a heterogeneous Mediterranean rangeland employing multivariate calibration methods. Canopy spectral measurements were made in the field using a GER 3700 spectroradiometer, along with concomitant in situ measurements of above-ground biomass for 170 sample plots. Multivariate calibrations including partial least squares regression (PLSR), principal component regression (PCR), and least-squares support vector machine (LS-SVM) were used to estimate the above-ground biomass. The prediction accuracy of the multivariate calibration methods was assessed using cross-validated R² and RMSE. The best model performance was obtained using LS-SVM and then PLSR, both calibrated with the first-derivative reflectance dataset, with R²cv = 0.88 and 0.86 and RMSEcv = 1.15 and 1.07, respectively. The weakest prediction accuracy appeared when PCR was used (R²cv = 0.31 and RMSEcv = 2.48). The obtained results highlight the importance of multivariate calibration methods for biomass estimation when hyperspectral data are used.
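
    For readers who want to reproduce the kind of cross-validated comparison reported above, the following sketch fits PLSR and PCR (two of the three calibrations compared) on synthetic data standing in for the canopy spectra; the component counts, array shapes and variable names are assumptions for illustration only, and LS-SVM is omitted.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import mean_squared_error, r2_score
        from sklearn.model_selection import cross_val_predict
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(1)
        X = rng.normal(size=(170, 300))                          # 170 plots x 300 bands (synthetic)
        y = X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=170)    # synthetic "biomass"

        models = {
            "PLSR": PLSRegression(n_components=10),
            "PCR": make_pipeline(PCA(n_components=10), LinearRegression()),
        }
        for name, model in models.items():
            y_cv = cross_val_predict(model, X, y, cv=10).reshape(-1)
            rmse_cv = mean_squared_error(y, y_cv) ** 0.5
            print(f"{name}: R2cv = {r2_score(y, y_cv):.2f}, RMSEcv = {rmse_cv:.2f}")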

  3. Multichannel hierarchical image classification using multivariate copulas

    NASA Astrophysics Data System (ADS)

    Voisin, Aurélie; Krylov, Vladimir A.; Moser, Gabriele; Serpico, Sebastiano B.; Zerubia, Josiane

    2012-03-01

    This paper focuses on the classification of multichannel images. The proposed supervised Bayesian classification method applied to histological (medical) optical images and to remote sensing (optical and synthetic aperture radar) imagery consists of two steps. The first step introduces the joint statistical modeling of the coregistered input images. For each class and each input channel, the class-conditional marginal probability density functions are estimated by finite mixtures of well-chosen parametric families. For optical imagery, the normal distribution is a well-known model. For radar imagery, we have selected generalized gamma, log-normal, Nakagami and Weibull distributions. Next, the multivariate d-dimensional Clayton copula, where d can be interpreted as the number of input channels, is applied to estimate multivariate joint class-conditional statistics. As a second step, we plug the estimated joint probability density functions into a hierarchical Markovian model based on a quadtree structure. Multiscale features are extracted by discrete wavelet transforms, or by using input multiresolution data. To obtain the classification map, we integrate an exact estimator of the marginal posterior mode.
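
    A minimal sketch of the copula step described above, assuming two channels with a normal and a Weibull marginal; the Clayton parameter, the marginals and all variable names are illustrative, and the hierarchical Markovian classification itself is not reproduced.

        import numpy as np
        from scipy import stats

        def clayton_density(u, theta):
            """d-dimensional Clayton copula density at points u (n x d), theta > 0."""
            n, d = u.shape
            coef = np.prod(1.0 + theta * np.arange(1, d))
            s = np.sum(u ** (-theta), axis=1) - d + 1.0
            return coef * np.prod(u, axis=1) ** (-(theta + 1)) * s ** (-(1.0 / theta + d))

        # Class-conditional joint density for a two-channel pixel: a normal marginal
        # (optical channel) and a Weibull marginal (radar channel), tied by the copula.
        x_opt, x_sar = 0.3, 1.2
        marg_opt = stats.norm(loc=0.0, scale=1.0)
        marg_sar = stats.weibull_min(c=1.5, scale=1.0)
        u = np.array([[marg_opt.cdf(x_opt), marg_sar.cdf(x_sar)]])
        joint_pdf = clayton_density(u, theta=2.0)[0] * marg_opt.pdf(x_opt) * marg_sar.pdf(x_sar)
        print(joint_pdf)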

  4. Tau identification using multivariate techniques in ATLAS

    NASA Astrophysics Data System (ADS)

    O'Neil, D. C.; ATLAS Collaboration

    2012-06-01

    Tau leptons play an important role in the physics program of the LHC. They are being used in electroweak measurements, in detector-related studies and in searches for new phenomena like the Higgs boson or Supersymmetry. In the detector, tau leptons are reconstructed as collimated jets with low track multiplicity. Due to the background from QCD multijet processes, efficient tau identification techniques with large fake rejection are essential. Since single-variable criteria are not enough to separate taus efficiently from jets and electrons, modern multivariate techniques are used. In ATLAS, several advanced algorithms are applied to identify taus, including a projective likelihood estimator and boosted decision trees. All multivariate methods applied to the ATLAS simulated data perform better than the baseline cut analysis. Their performance is shown using high energy data collected at the ATLAS experiment. The improvement ranges from a factor of 2 to 5 in rejection for the same efficiency, depending on the selected efficiency operating point and the number of prongs in the tau decay. The strengths and weaknesses of each technique are also discussed.

  5. Modelling lifetime data with multivariate Tweedie distribution

    NASA Astrophysics Data System (ADS)

    Nor, Siti Rohani Mohd; Yusof, Fadhilah; Bahar, Arifah

    2017-05-01

    This study aims to measure the dependence between individual lifetimes by applying the multivariate Tweedie distribution to lifetime data. Dependence between lifetimes incorporated in the mortality model is a new idea that has a significant impact on the risk of an annuity portfolio, and it runs against the assumption of standard actuarial methods that lifetimes are independent. Hence, this paper applies the Tweedie family of distributions to a portfolio of lifetimes to induce dependence between lives. The Tweedie distribution is chosen since it contains symmetric and non-symmetric, as well as light-tailed and heavy-tailed, distributions. Parameter estimation is modified in order to fit the Tweedie distribution to the data; this procedure is developed using the method of moments. In addition, a comparison is made to check the adequacy of fit between observed and expected mortality. Finally, the importance of including systematic mortality risk in the model is justified by Pearson's chi-squared test.

  6. Spitzer/JWST Cross Calibration: IRAC Observations of Potential Calibrators for JWST

    NASA Astrophysics Data System (ADS)

    Carey, Sean J.; Gordon, Karl D.; Lowrance, Patrick; Ingalls, James G.; Glaccum, William J.; Grillmair, Carl J.; E Krick, Jessica; Laine, Seppo J.; Fazio, Giovanni G.; Hora, Joseph L.; Bohlin, Ralph

    2017-06-01

    We present observations at 3.6 and 4.5 microns using IRAC on the Spitzer Space Telescope of a set of main sequence A stars and white dwarfs that are potential calibrators across the JWST instrument suite. The stars range from brightnesses of 4.4 to 15 mag in K band. The calibration observations use a similar redundancy to the observing strategy for the IRAC primary calibrators (Reach et al. 2005) and the photometry is obtained using identical methods and instrumental photometric corrections as those applied to the IRAC primary calibrators (Carey et al. 2009). The resulting photometry is then compared to the predictions based on spectra from the CALSPEC Calibration Database (http://www.stsci.edu/hst/observatory/crds/calspec.html) and the IRAC bandpasses. These observations are part of an ongoing collaboration between IPAC and STScI investigating absolute calibration in the infrared.

  7. Multivariate Analog of Hays Omega-Squared.

    ERIC Educational Resources Information Center

    Sachdeva, Darshan

    The multivariate analog of Hays omega-squared for estimating the strength of the relationship in the multivariate analysis of variance has been proposed in this paper. The multivariate omega-squared is obtained through the use of Wilks' lambda test criterion. Application of multivariate omega-squared to a numerical example has been provided so as…

  8. SAR calibration technology review

    NASA Technical Reports Server (NTRS)

    Walker, J. L.; Larson, R. W.

    1981-01-01

    Synthetic Aperture Radar (SAR) calibration technology is reviewed, including a general description of the primary calibration techniques and some of the factors which affect the performance of calibrated SAR systems. The use of reference reflectors for measurement of the total system transfer function, along with an on-board calibration signal generator for monitoring the temporal variations of the receiver-to-processor output, is a practical approach for SAR calibration. However, preliminary error analysis and previous experimental measurements indicate that reflectivity measurement accuracies of better than 3 dB will be difficult to achieve. This is not adequate for many applications and, therefore, improved end-to-end SAR calibration techniques are required.

  9. Multivariable PID control by decoupling

    NASA Astrophysics Data System (ADS)

    Garrido, Juan; Vázquez, Francisco; Morilla, Fernando

    2016-04-01

    This paper presents a new methodology to design multivariable proportional-integral-derivative (PID) controllers based on decoupling control. The method is presented for general n × n processes. In the design procedure, an ideal decoupling control with integral action is designed to minimise interactions. It depends on the desired open-loop processes that are specified according to realisability conditions and desired closed-loop performance specifications. These realisability conditions are stated and three common cases to define the open-loop processes are studied and proposed. Then, controller elements are approximated to PID structure. From a practical point of view, the wind-up problem is also considered and a new anti-wind-up scheme for multivariable PID controller is proposed. Comparisons with other works demonstrate the effectiveness of the methodology through the use of several simulation examples and an experimental lab process.
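
    The core decoupling idea can be sketched with a purely static decoupler computed from an assumed 2 x 2 steady-state gain matrix; this is only a simplified stand-in for the dynamic decouplers and PID approximations designed in the paper.

        import numpy as np

        # Steady-state gain matrix of a 2x2 process (illustrative numbers only).
        G0 = np.array([[ 2.0, 0.8],
                       [-0.6, 1.5]])

        # Static decoupler: choose D so that G0 @ D is diagonal (here, the identity),
        # so each loop's PID controller sees approximately a single-input channel.
        D = np.linalg.inv(G0)
        print(np.round(G0 @ D, 6))   # ~ identity: interactions removed at steady state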

  10. Information extraction from multivariate images

    NASA Technical Reports Server (NTRS)

    Park, S. K.; Kegley, K. A.; Schiess, J. R.

    1986-01-01

    An overview of several multivariate image processing techniques is presented, with emphasis on techniques based upon the principal component transformation (PCT). Multiimages in various formats have a multivariate pixel value, associated with each pixel location, which has been scaled and quantized into a gray-level vector; bivariate statistics describe the extent to which two component images are correlated. The PCT of a multiimage decorrelates the multiimage to reduce its dimensionality and reveal its intercomponent dependencies if some off-diagonal elements are not small, and for the purposes of display the principal component images must be postprocessed into multiimage format. The principal component analysis of a multiimage is a statistical analysis based upon the PCT whose primary application is to determine the intrinsic component dimensionality of the multiimage. Computational considerations are also discussed.

  11. Information extraction from multivariate images

    NASA Technical Reports Server (NTRS)

    Park, S. K.; Kegley, K. A.; Schiess, J. R.

    1986-01-01

    An overview of several multivariate image processing techniques is presented, with emphasis on techniques based upon the principal component transformation (PCT). Multiimages in various formats have a multivariate pixel value, associated with each pixel location, which has been scaled and quantized into a gray-level vector; bivariate statistics describe the extent to which two component images are correlated. The PCT of a multiimage decorrelates the multiimage to reduce its dimensionality and reveal its intercomponent dependencies if some off-diagonal elements are not small, and for the purposes of display the principal component images must be postprocessed into multiimage format. The principal component analysis of a multiimage is a statistical analysis based upon the PCT whose primary application is to determine the intrinsic component dimensionality of the multiimage. Computational considerations are also discussed.
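
    A small sketch of the principal component transformation described in the two records above, applied to a synthetic four-band multiimage; the image size, band correlation structure and variable names are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(2)
        rows, cols, bands = 64, 64, 4
        base = rng.normal(size=(rows, cols, 1))                          # shared structure
        multiimage = base + 0.3 * rng.normal(size=(rows, cols, bands))   # correlated bands
        pixels = multiimage.reshape(-1, bands)

        # Principal component transformation: project onto the eigenvectors of the
        # band-to-band covariance matrix, ordered by decreasing variance.
        mean = pixels.mean(axis=0)
        cov = np.cov(pixels, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)
        order = np.argsort(eigvals)[::-1]
        pc = (pixels - mean) @ eigvecs[:, order]                         # decorrelated components

        pc_images = pc.reshape(rows, cols, bands)                        # back to multiimage format
        print(np.round(np.cov(pc, rowvar=False), 3))                     # ~ diagonal covariance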

  12. Method of Calibrating a Force Balance

    NASA Technical Reports Server (NTRS)

    Parker, Peter A. (Inventor); Rhew, Ray D. (Inventor); Johnson, Thomas H. (Inventor); Landman, Drew (Inventor)

    2015-01-01

    A calibration system and method utilizes acceleration of a mass to generate a force on the mass. An expected value of the force is calculated based on the magnitude and acceleration of the mass. A fixture is utilized to mount the mass to a force balance, and the force balance is calibrated to provide a reading consistent with the expected force determined for a given acceleration. The acceleration can be varied to provide different expected forces, and the force balance can be calibrated for different applied forces. The acceleration may result from linear acceleration of the mass or rotational movement of the mass.

  13. MSSC: Multi-Source Self-Calibration

    NASA Astrophysics Data System (ADS)

    Radcliffe, Jack F.

    2017-09-01

    Multi-Source Self-Calibration (MSSC) provides direction-dependent calibration to standard phase referencing. The code combines multiple faint sources detected within the primary beam to derive phase corrections. Each source has its CLEAN model divided into the visibilities which results in multiple point sources that are stacked in the uv plane to increase the S/N, thus permitting self-calibration. This process applies only to wide-field VLBI data sets that detect and image multiple sources within one epoch.

  14. Multivariate sparse group lasso for the multivariate multiple linear regression with an arbitrary group structure

    PubMed Central

    Li, Yanming; Zhu, Ji

    2015-01-01

    Summary We propose a multivariate sparse group lasso variable selection and estimation method for data with high-dimensional predictors as well as high-dimensional response variables. The method is carried out through a penalized multivariate multiple linear regression model with an arbitrary group structure for the regression coefficient matrix. It suits many biology studies well in detecting associations between multiple traits and multiple predictors, with each trait and each predictor embedded in some biological functioning groups such as genes, pathways or brain regions. The method is able to effectively remove unimportant groups as well as unimportant individual coefficients within important groups, particularly for large p small n problems, and is flexible in handling various complex group structures such as overlapping or nested or multilevel hierarchical structures. The method is evaluated through extensive simulations with comparisons to the conventional lasso and group lasso methods, and is applied to an eQTL association study. PMID:25732839
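
    The group-sparsity ingredient of the method can be illustrated by the proximal operator of a non-overlapping group-lasso penalty, i.e. blockwise soft-thresholding; this sketch shows only that ingredient for a single coefficient vector and does not reproduce the authors' full multivariate sparse group lasso with arbitrary (e.g. overlapping) group structures.

        import numpy as np

        def group_soft_threshold(v, groups, lam):
            """Proximal operator of the non-overlapping group-lasso penalty
            lam * sum_g ||v_g||_2, applied by blockwise soft-thresholding."""
            out = np.zeros_like(v)
            for g in groups:
                norm = np.linalg.norm(v[g])
                if norm > lam:
                    out[g] = (1.0 - lam / norm) * v[g]
            return out

        v = np.array([0.1, -0.2, 0.05, 2.0, -1.5, 0.3])
        groups = [np.arange(0, 3), np.arange(3, 6)]      # two predictor groups
        print(group_soft_threshold(v, groups, lam=0.5))  # first group removed entirely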

  15. Down force calibration stand test report

    SciTech Connect

    BOGER, R.M.

    1999-08-13

    The Down Force Calibration Stand was developed to provide an improved means of calibrating equipment used to apply, display and record Core Sample Truck (CST) down force. Originally, four springs were used in parallel to provide a system of resistance that allowed increasing force over increasing displacement. This spring system, though originally deemed adequate, was eventually found to be unstable laterally. For this reason, it was determined that a new method for resisting down force was needed.

  16. Clines Arc through Multivariate Morphospace.

    PubMed

    Lohman, Brian K; Berner, Daniel; Bolnick, Daniel I

    2017-04-01

    Evolutionary biologists typically represent clines as spatial gradients in a univariate character (or a principal-component axis) whose mean changes as a function of location along a transect spanning an environmental gradient or ecotone. This univariate approach may obscure the multivariate nature of phenotypic evolution across a landscape. Clines might instead be plotted as a series of vectors in multidimensional morphospace, connecting sequential geographic sites. We present a model showing that clines may trace nonlinear paths that arc through morphospace rather than elongating along a single major trajectory. Arcing clines arise because different characters diverge at different rates or locations along a geographic transect. We empirically confirm that some clines arc through morphospace, using morphological data from threespine stickleback sampled along eight independent transects from lakes down their respective outlet streams. In all eight clines, successive vectors of lake-stream divergence fluctuate in direction and magnitude in trait space, rather than pointing along a single phenotypic axis. Most clines exhibit surprisingly irregular directions of divergence as one moves downstream, although a few clines exhibit more directional arcs through morphospace. Our results highlight the multivariate complexity of clines that cannot be captured with the traditional graphical framework. We discuss hypotheses regarding the causes, and implications, of such arcing multivariate clines.

  17. Solar-Reflectance-Based Calibration of Spectral Radiometers

    NASA Technical Reports Server (NTRS)

    Cattrall, Christopher; Carder, Kendall L.; Thome, Kurtis J.; Gordon, Howard R.

    2001-01-01

    A method by which to calibrate a spectral radiometer using the sun as the illumination source is discussed. Solar-based calibrations eliminate several uncertainties associated with applying a lamp-based calibration to field measurements. The procedure requires only a calibrated reflectance panel, relatively low aerosol optical depth, and measurements of atmospheric transmittance. Further, a solar-reflectance-based calibration (SRBC), by eliminating the need for extraterrestrial irradiance spectra, reduces calibration uncertainty to approximately 2.2% across the solar-reflective spectrum, significantly reducing uncertainty in measurements used to deduce the optical properties of a system illuminated by the sun (e.g., sky radiance). The procedure is very suitable for on-site calibration of long-term field instruments, thereby reducing the logistics and costs associated with transporting a radiometer to a calibration facility.

  18. Meteorological Sensor Calibration Facility

    NASA Technical Reports Server (NTRS)

    Schmidlin, F. J.

    1988-01-01

    The meteorological sensor calibration facility is designed to test and assess radiosonde measurement quality through actual flights in the atmosphere. United States radiosonde temperature measurements are deficient in that they require correction for errors introduced by long- and short-wave radiation. Not applying these corrections results in a large bias between daytime and nighttime measurements. This day/night bias has serious implications for users of radiosonde data, of which NASA is one. The derivation of corrections for the U.S. radiosonde is therefore quite important. Determination of corrections depends on solving the heat transfer equation of the thermistor using laboratory measurements of the emissivity and absorptivity of the thermistor coating. The U.S. radiosonde observations from the World Meteorological Organization International Radiosonde Intercomparison were used as the data base to test whether the day/night height bias can be removed. Twenty-five noontime and 26 nighttime observations were used. Corrected temperatures were used to calculate new geopotentials. The day/night bias in the geopotentials decreased significantly when corrections were introduced. Some testing of the thermal lag attendant with the standard carbon hygristor took place. Two radiosondes with small bead thermistors embedded in the hygristor were flown. Detailed analysis was not accomplished; however, cursory examination of the data showed that the hygristor is at a higher temperature than the external thermistor indicates.

  19. Calibration facility safety plan

    NASA Technical Reports Server (NTRS)

    Fastie, W. G.

    1971-01-01

    A set of requirements is presented to insure the highest practical standard of safety for the Apollo 17 Calibration Facility in terms of identifying all critical or catastrophic type hazard areas. Plans for either counteracting or eliminating these areas are presented. All functional operations in calibrating the ultraviolet spectrometer and the testing of its components are described.

  20. OLI Radiometric Calibration

    NASA Technical Reports Server (NTRS)

    Markham, Brian; Morfitt, Ron; Kvaran, Geir; Biggar, Stuart; Leisso, Nathan; Czapla-Myers, Jeff

    2011-01-01

    Goals: (1) Present an overview of the pre-launch radiance, reflectance & uniformity calibration of the Operational Land Imager (OLI) (1a) Transfer to orbit/heliostat (1b) Linearity (2) Discuss on-orbit plans for radiance, reflectance and uniformity calibration of the OLI

  1. Photogrammetric camera calibration

    USGS Publications Warehouse

    Tayman, W.P.; Ziemann, H.

    1984-01-01

    Section 2 (Calibration) of the document "Recommended Procedures for Calibrating Photogrammetric Cameras and Related Optical Tests" from the International Archives of Photogrammetry, Vol. XIII, Part 4, is reviewed in the light of recent practical work, and suggestions for changes are made. These suggestions are intended as a basis for a further discussion. © 1984.

  2. Estimating uncertainty in multivariate responses to selection.

    PubMed

    Stinchcombe, John R; Simonsen, Anna K; Blows, Mark W

    2014-04-01

    Predicting the responses to natural selection is one of the key goals of evolutionary biology. Two of the challenges in fulfilling this goal have been the realization that many estimates of natural selection might be highly biased by environmentally induced covariances between traits and fitness, and that many estimated responses to selection do not incorporate or report uncertainty in the estimates. Here we describe the application of a framework that blends the merits of the Robertson-Price Identity approach and the multivariate breeder's equation to address these challenges. The approach allows genetic covariance matrices, selection differentials, selection gradients, and responses to selection to be estimated without environmentally induced bias, direct and indirect selection and responses to selection to be distinguished, and if implemented in a Bayesian-MCMC framework, statistically robust estimates of uncertainty on all of these parameters to be made. We illustrate our approach with a worked example of previously published data. More generally, we suggest that applying both the Robertson-Price Identity and the multivariate breeder's equation will facilitate hypothesis testing about natural selection, genetic constraints, and evolutionary responses.
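
    The prediction step that the abstract builds on can be written with the textbook multivariate breeder's equation, Δz = Gβ with β = P⁻¹S; the sketch below uses illustrative covariance matrices and does not attempt the bias-corrected, Bayesian-MCMC estimation the authors describe.

        import numpy as np

        # Illustrative additive-genetic (G) and phenotypic (P) covariance matrices
        # for two traits, and a vector of selection differentials S.
        G = np.array([[0.40, 0.10],
                      [0.10, 0.25]])
        P = np.array([[1.00, 0.30],
                      [0.30, 0.80]])
        S = np.array([0.50, 0.20])

        beta = np.linalg.solve(P, S)   # selection gradients, beta = P^{-1} S
        delta_z = G @ beta             # predicted response, multivariate breeder's equation
        print(np.round(beta, 3), np.round(delta_z, 3))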

  3. A multivariate Bayesian model for embryonic growth.

    PubMed

    Willemsen, Sten P; Eilers, Paul H C; Steegers-Theunissen, Régine P M; Lesaffre, Emmanuel

    2015-04-15

    Most longitudinal growth curve models evaluate the evolution of each of the anthropometric measurements separately. When applied to a 'reference population', this exercise leads to univariate reference curves against which new individuals can be evaluated. However, growth should be evaluated in totality, that is, by evaluating all body characteristics jointly. Recently, Cole et al. suggested the Superimposition by Translation and Rotation (SITAR) model, which expresses individual growth curves by three subject-specific parameters indicating their deviation from a flexible overall growth curve. This model allows the characterization of normal growth in a flexible though compact manner. In this paper, we generalize the SITAR model in a Bayesian way to multiple dimensions. The multivariate SITAR model allows us to create multivariate reference regions, which is advantageous for prediction. The usefulness of the model is illustrated on longitudinal measurements of embryonic growth obtained in the first semester of pregnancy, collected in the ongoing Rotterdam Predict study. Further, we demonstrate how the model can be used to find determinants of embryonic growth.

  4. A multivariate Baltic Sea environmental index.

    PubMed

    Dippner, Joachim W; Kornilovs, Georgs; Junker, Karin

    2012-11-01

    Since 2001/2002, the correlation between the North Atlantic Oscillation index and biological variables in the North Sea and Baltic Sea has failed, which might be attributed to a global climate regime shift. To understand inter-annual and inter-decadal variability in environmental variables, a new multivariate index for the Baltic Sea is developed and presented here. The multivariate Baltic Sea Environmental (BSE) index is defined as the 1st principal component score of four z-transformed time series: the Arctic Oscillation index, the salinity between 120 and 200 m in the Gotland Sea, the integrated river runoff of all rivers draining into the Baltic Sea, and the relative vorticity of geostrophic wind over the Baltic Sea area. A statistical downscaling technique has been applied to project different climate indices onto the sea surface temperature in the Gotland Sea, the Landsort gauge, and the sea ice extent. The new BSE index shows a better performance than all other climate indices and is equivalent to the Chen index for physical properties. An application of the new index to zooplankton time series from the central Baltic Sea (Latvian EEZ) shows an excellent skill in potential predictability of environmental time series.
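
    The construction of the index itself is straightforward to sketch: z-transform the four input series and take the first principal component score. The series below are random stand-ins for the AO index, deep salinity, river runoff and wind vorticity, so only the mechanics are reproduced, not the actual BSE index (and the sign of a principal component is arbitrary).

        import numpy as np

        rng = np.random.default_rng(3)
        years = 40
        # Four environmental time series (synthetic stand-ins for the AO index,
        # Gotland Sea deep salinity, total river runoff and wind vorticity).
        series = rng.normal(size=(years, 4))
        z = (series - series.mean(axis=0)) / series.std(axis=0, ddof=1)

        # 1st principal component score of the z-transformed series = the index.
        cov = np.cov(z, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)
        pc1 = eigvecs[:, np.argmax(eigvals)]
        index = z @ pc1
        print(index[:5])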

  5. Sandia WIPP calibration traceability

    SciTech Connect

    Schuhen, M.D.; Dean, T.A.

    1996-05-01

    This report summarizes the work performed to establish calibration traceability for the instrumentation used by Sandia National Laboratories at the Waste Isolation Pilot Plant (WIPP) during testing from 1980-1985. Identifying the calibration traceability is an important part of establishing a pedigree for the data and is part of the qualification of existing data. In general, the requirement states that the calibration of Measuring and Test equipment must have a valid relationship to nationally recognized standards or the basis for the calibration must be documented. Sandia recognized that just establishing calibration traceability would not necessarily mean that all QA requirements were met during the certification of test instrumentation. To address this concern, the assessment was expanded to include various activities.

  6. Topics in Multivariate Approximation Theory.

    DTIC Science & Technology

    1982-05-01

    of the Bramble-Hilbert lemma (see Bramble & Hilbert [13]). Kergin's scheme raises some questions. In contrast to its univariate antecedent, it... J. R. Rice (1979), An adaptive algorithm for multivariate approximation giving optimal convergence rates, J. Approx. Theory 25, 337-359. J. H. Bramble ... J. Numer. Anal. 7, 112-124. J. H. Bramble & S. R. Hilbert (1971), Bounds for a class of linear functionals with applications to Hermite interpolation

  7. Software For Multivariate Bayesian Classification

    NASA Technical Reports Server (NTRS)

    Saul, Ronald; Laird, Philip; Shelton, Robert

    1996-01-01

    PHD is a general-purpose classifier computer program. It uses Bayesian methods to classify vectors of real numbers, based on a combination of statistical techniques that include multivariate density estimation, Parzen density kernels, and the EM (Expectation Maximization) algorithm. By means of a simple graphical interface, the user trains the classifier to recognize two or more classes of data and then uses it to identify new data. Written in ANSI C for Unix systems and optimized for online classification applications. It can be embedded in another program or run by itself using the simple graphical user interface. Online help files make the program easy to use.

  8. Evaluation of in-line Raman data for end-point determination of a coating process: Comparison of Science-Based Calibration, PLS-regression and univariate data analysis.

    PubMed

    Barimani, Shirin; Kleinebudde, Peter

    2017-10-01

    A multivariate analysis method, Science-Based Calibration (SBC), was used for the first time for endpoint determination of a tablet coating process using Raman data. Two types of tablet cores, placebo and caffeine cores, received a coating suspension comprising a polyvinyl alcohol-polyethylene glycol graft-copolymer and titanium dioxide to a maximum coating thickness of 80 µm. Raman spectroscopy was used as the in-line PAT tool. The spectra were acquired every minute and correlated to the amount of applied aqueous coating suspension. SBC was compared to another well-known multivariate analysis method, Partial Least Squares regression (PLS), and a simpler approach, Univariate Data Analysis (UVDA). All developed calibration models had coefficient of determination values (R²) higher than 0.99. The coating endpoints could be predicted with root mean square errors of prediction (RMSEP) less than 3.1% of the applied coating suspensions. Compared to PLS and UVDA, SBC proved to be an alternative multivariate calibration method with high predictive power. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Calibration method for spectroscopic systems

    DOEpatents

    Sandison, David R.

    1998-01-01

    Calibration spots of optically-characterized material placed in the field of view of a spectroscopic system allow calibration of the spectroscopic system. Response from the calibration spots is measured and used to calibrate for varying spectroscopic system operating parameters. The accurate calibration achieved allows quantitative spectroscopic analysis of responses taken at different times, different excitation conditions, and of different targets.

  10. Calibration method for spectroscopic systems

    DOEpatents

    Sandison, D.R.

    1998-11-17

    Calibration spots of optically-characterized material placed in the field of view of a spectroscopic system allow calibration of the spectroscopic system. Response from the calibration spots is measured and used to calibrate for varying spectroscopic system operating parameters. The accurate calibration achieved allows quantitative spectroscopic analysis of responses taken at different times, different excitation conditions, and of different targets. 3 figs.

  11. Univariate Analysis of Multivariate Outcomes in Educational Psychology.

    ERIC Educational Resources Information Center

    Hubble, L. M.

    1984-01-01

    The author examined the prevalence of multiple operational definitions of outcome constructs and an estimate of the incidence of Type I error rates when univariate procedures were applied to multiple variables in educational psychology. Multiple operational definitions of constructs were advocated and wider use of multivariate analysis was…

  12. Multivariate classification of infrared spectra of cell and tissue samples

    DOEpatents

    Haaland, David M.; Jones, Howland D. T.; Thomas, Edward V.

    1997-01-01

    Multivariate classification techniques are applied to spectra from cell and tissue samples irradiated with infrared radiation to determine if the samples are normal or abnormal (cancerous). Mid- and near-infrared radiation can be used for in vivo and in vitro classifications using at least two different wavelengths.

  13. The Optimization of Multivariate Generalizability Studies with Budget Constraints.

    ERIC Educational Resources Information Center

    Marcoulides, George A.; Goldstein, Zvi

    1992-01-01

    A method is presented for determining the optimal number of conditions to use in multivariate-multifacet generalizability designs when resource constraints are imposed. A decision maker can determine the number of observations needed to obtain the largest possible generalizability coefficient. The procedure easily applies to the univariate case.…

  14. Univariate Analysis of Multivariate Outcomes in Educational Psychology.

    ERIC Educational Resources Information Center

    Hubble, L. M.

    1984-01-01

    The author examined the prevalence of multiple operational definitions of outcome constructs and an estimate of the incidence of Type I error rates when univariate procedures were applied to multiple variables in educational psychology. Multiple operational definitions of constructs were advocated and wider use of multivariate analysis was…

  15. Multivariate-normality goodness-of-fit tests

    NASA Technical Reports Server (NTRS)

    Falls, L. W.; Crutcher, H. L.

    1977-01-01

    The computer program applies the Pearson chi-square test to multivariate statistics for application in any field in which data of two or more variables (dimensions) are sampled for statistical purposes. The program handles dimensions two through five, with up to a thousand data sets.
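
    One common way to set up such a Pearson chi-square check of multivariate normality is to bin squared Mahalanobis distances against their theoretical chi-square distribution; the sketch below follows that idea on simulated data and is not a reconstruction of the original program.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        d, n = 3, 500
        X = rng.multivariate_normal(np.zeros(d), np.eye(d), size=n)

        # Squared Mahalanobis distances of multivariate-normal data follow a
        # chi-square distribution with d degrees of freedom; bin them and apply
        # a Pearson chi-square test.
        mean, cov = X.mean(axis=0), np.cov(X, rowvar=False)
        diff = X - mean
        d2 = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)

        edges = stats.chi2.ppf(np.linspace(0, 1, 11), df=d)   # 10 equiprobable bins
        observed, _ = np.histogram(d2, bins=edges)
        expected = np.full(10, n / 10)
        chi2_stat, p_value = stats.chisquare(observed, expected)
        print(chi2_stat, p_value)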

  16. Regionalization in geology by multivariate classification

    USGS Publications Warehouse

    Harff, Jan; Davis, J.C.

    1990-01-01

    The concept of multivariate classification of "geological objects" can be combined with the concept of regionalized variables to yield a procedure for typification of geological objects, such as rock units, well records, or samples. Numerical classification is followed by subdivision of the area of investigation, and culminates in a regionalization or mapping of the classification onto the plane. Regions are subdivisions of the map area which are spatially contiguous and relatively homogeneous in their geological properties. The probability of correct classification of each point within a region as being part of that region can be assessed in terms of Bayesian probability as a space-dependent function. The procedure is applied to subsurface data from western Kansas. The geologic properties used are quantitative variables, and relationships are expressed by Mahalanobis' distances. These functions could be replaced by other metrics if qualitative or binary data derived from geological descriptions or appraisals were included in the analysis. © 1990 International Association for Mathematical Geology.

  17. Design of feedforward controllers for multivariable plants

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    Simple methods for the design of feedforward controllers to achieve steady-state disturbance rejection and command tracking in stable multivariable plants are developed in this paper. The controllers are represented by simple and low-order transfer functions and are not based on reconstruction of the states of the commands and disturbances. For unstable plants, it is shown that the present method can be applied directly when an additional feedback controller is employed to stabilize the plant. The feedback and feedforward controllers do not affect each other and can be designed independently based on the open-loop plant to achieve stability, disturbance rejection and command tracking, respectively. Numerical examples are given for illustration.

  18. Classification of adulterated honeys by multivariate analysis.

    PubMed

    Amiry, Saber; Esmaiili, Mohsen; Alizadeh, Mohammad

    2017-06-01

    In this research, honey samples were adulterated with date syrup (DS) and invert sugar syrup (IS) at three concentrations (7%, 15% and 30%). 102 adulterated samples were prepared in six batches with 17 replications for each batch. For each sample, 32 parameters including color indices and rheological, physical, and chemical parameters were determined. To classify the samples based on the type and concentration of adulterant, a multivariate analysis was applied using principal component analysis (PCA) followed by a linear discriminant analysis (LDA). Then, 21 principal components (PCs) were selected in five sets. Approximately two-thirds of the samples were identified correctly using color indices (62.75%) or rheological properties (67.65%). A more powerful discrimination was obtained using physical properties (97.06%), and the best separations were achieved using two sets of chemical properties (set 1: lactone, diastase activity, sucrose - 100%) (set 2: free acidity, HMF, ash - 95%).
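
    The PCA-followed-by-LDA classification scheme can be sketched as a simple pipeline with cross-validation; the synthetic data below only mimics the 102-sample, 32-parameter layout, so the accuracy it prints has nothing to do with the reported values.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(5)
        n_per_class, n_features = 17, 32                 # 17 replications, 32 measured parameters
        labels = np.repeat(np.arange(6), n_per_class)    # 6 adulteration batches
        X = rng.normal(size=(labels.size, n_features)) + labels[:, None] * 0.5  # synthetic

        clf = make_pipeline(PCA(n_components=21), LinearDiscriminantAnalysis())
        scores = cross_val_score(clf, X, labels, cv=5)   # accuracy per fold
        print(f"mean CV accuracy: {scores.mean():.2%}")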

  19. Acoustic calibration apparatus for calibrating plethysmographic acoustic pressure sensors

    NASA Technical Reports Server (NTRS)

    Zuckerwar, Allan J. (Inventor); Davis, David C. (Inventor)

    1995-01-01

    An apparatus for calibrating an acoustic sensor is described. The apparatus includes a transmission material having an acoustic impedance approximately matching the acoustic impedance of the actual acoustic medium existing when the acoustic sensor is applied in actual in-service conditions. An elastic container holds the transmission material. A first sensor is coupled to the container at a first location on the container and a second sensor coupled to the container at a second location on the container, the second location being different from the first location. A sound producing device is coupled to the container and transmits acoustic signals inside the container.

  20. Acoustic calibration apparatus for calibrating plethysmographic acoustic pressure sensors

    NASA Technical Reports Server (NTRS)

    Zuckerwar, Allan J. (Inventor); Davis, David C. (Inventor)

    1994-01-01

    An apparatus for calibrating an acoustic sensor is described. The apparatus includes a transmission material having an acoustic impedance approximately matching the acoustic impedance of the actual acoustic medium existing when the acoustic sensor is applied in actual in-service conditions. An elastic container holds the transmission material. A first sensor is coupled to the container at a first location on the container and a second sensor coupled to the container at a second location on the container, the second location being different from the first location. A sound producing device is coupled to the container and transmits acoustic signals inside the container.

  1. BXS Re-calibration

    SciTech Connect

    Welch, J; /SLAC

    2010-11-24

    indicated that the vacuum chamber was in fact in the proper position with respect to the magnet - not 19 mm off to one side - so the former possibility was discounted. Review of the Fiducial Report and an interview with Keith Caban convinced me that there was no error in the coordinate system used for magnet measurements. I went and interviewed Andrew Fischer, who did the magnetic measurements of BXS. He had extensive records, including photographs of the setups, and could quickly answer quite detailed questions about how the measurement was done. Before the interview, I had a suspicion there might have been a sign flip in the x coordinate which, because of the wedge, would result in the wrong path length and a miscalibration. Andrew was able to pin-point how this could have happened and later confirmed it by looking at measurement data from the BXG magnet done just after BXS and comparing photographs. It turned out that the sign of the horizontal stage travel that drives the measurement wire was opposite that of the x coordinate in the Traveler, and the sign difference wasn't applied to the data. The origin x = 0 was set up correctly, but the wire moved in the opposite direction to what was expected, just as if the arc had been flipped over about the origin. To quantitatively confirm that this was the cause of the observed difference in calibration, I used the 'grid data', which was taken with a Hall probe on the BXS magnet originally to measure the FINT (focusing effect) term, and combined it with the Hall probe data taken on the flipped trajectory, and performed the field integral on a path that should give the same result as the design path. This is best illustrated in Figure 2. The integration path is coincident with the desired path from the pivot points (x = 0) outward. Between the pivot points the integration path is a mirror image of the design path, but because the magnet is fairly uniform, for this portion it gives the same result. Most of the calibration error

  2. Calibration of Cryogenic Thermometers for the Lhc

    NASA Astrophysics Data System (ADS)

    Balle, Ch.; Casas-Cubillos, J.; Vauthier, N.; Thermeau, J. P.

    2008-03-01

    6000 cryogenic temperature sensors of resistive type covering the range from room temperature down to 1.6 K are installed on the LHC machine. In order to meet the stringent requirements on temperature control of the superconducting magnets, each single sensor needs to be calibrated individually. In the framework of a special contribution, IPN (Institut de Physique Nucléaire) in Orsay, France built and operated a calibration facility with a throughput of 80 thermometers per week. After reception from the manufacturer, the thermometer is first assembled onto a support specific to the measurement environment, and then thermally cycled ten times and calibrated at least once from 1.6 to 300 K. The procedure for each of these interventions includes various measurements, and the acquired data are recorded in an ORACLE® database. Furthermore, random calibrations on some samples are executed at CERN to cross-check the coherence between the approximation data obtained by both IPN and CERN. In the range of 1.5 K to 30 K, the calibration apparatuses at IPN and CERN are traceable to standards maintained in a national metrological laboratory by using a set of rhodium-iron temperature sensors of metrological quality. This paper presents the calibration procedure, the quality assurance applied, the results of the calibration campaigns and the return of experience.

  3. Implicit and Explicit Spacecraft Gyro Calibration

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.; Harman, Richard R.

    2004-01-01

    This paper presents a comparison between two approaches to sensor calibration. According to one approach, called explicit, an estimator compares the sensor readings to reference readings, and uses the difference between the two to estimate the calibration parameters. According to the other approach, called implicit, the sensor error is integrated to form a different entity, which is then compared with a reference quantity of this entity, and the calibration parameters are inferred from the difference. In particular this paper presents the comparison between these approaches when applied to in-flight spacecraft gyro calibration. Reference spacecraft rate is needed for gyro calibration when using the explicit approach; however, such reference rates are not readily available for in-flight calibration. Therefore the calibration parameter-estimator is expanded to include the estimation of that reference rate, which is based on attitude measurements in the form of attitude-quaternion. A comparison between the two approaches is made using simulated data. It is concluded that the performances of the two approaches are basically comparable. Sensitivity tests indicate that the explicit filter results are essentially insensitive to variations in given spacecraft dynamics model parameters.
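
    For a single gyro axis with an assumed reference rate, the explicit approach reduces to a least-squares fit of scale factor and bias; the sketch below shows only that reduced, illustrative case, not the implicit approach or the quaternion-based estimation of the reference rate described in the paper.

        import numpy as np

        rng = np.random.default_rng(7)
        omega_ref = rng.uniform(-1.0, 1.0, size=200)     # assumed reference body rates (rad/s)
        bias_true, scale_true = 0.02, 1.01
        omega_meas = scale_true * omega_ref + bias_true + 1e-3 * rng.normal(size=200)

        # Explicit approach: regress measured rate on reference rate to recover
        # the scale factor and bias of this single gyro axis.
        A = np.column_stack([omega_ref, np.ones_like(omega_ref)])
        (scale_hat, bias_hat), *_ = np.linalg.lstsq(A, omega_meas, rcond=None)
        print(scale_hat, bias_hat)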

  4. Tailored multivariate analysis for modulated enhanced diffraction

    SciTech Connect

    Caliandro, Rocco; Guccione, Pietro; Nico, Giovanni; Tutuncu, Goknur; Hanson, Jonathan C.

    2015-10-21

    Modulated enhanced diffraction (MED) is a technique allowing the dynamic structural characterization of crystalline materials subjected to an external stimulus, which is particularly suited for in situ and operando structural investigations at synchrotron sources. Contributions from the (active) part of the crystal system that varies synchronously with the stimulus can be extracted by an offline analysis, which can only be applied in the case of periodic stimuli and linear system responses. In this paper a new decomposition approach based on multivariate analysis is proposed. The standard principal component analysis (PCA) is adapted to treat MED data: specific figures of merit based on their scores and loadings are found, and the directions of the principal components obtained by PCA are modified to maximize such figures of merit. As a result, a general method to decompose MED data, called optimum constrained components rotation (OCCR), is developed, which produces very precise results on simulated data, even in the case of nonperiodic stimuli and/or nonlinear responses. Furthermore, the multivariate analysis approach is able to supply in one shot both the diffraction pattern related to the active atoms (through the OCCR loadings) and the time dependence of the system response (through the OCCR scores). When applied to real data, however, OCCR was able to supply only the latter information, as the former was hindered by changes in abundances of different crystal phases, which occurred besides structural variations in the specific case considered. Developing a decomposition procedure able to cope with this combined effect represents the next challenge in MED analysis.

  5. Tailored multivariate analysis for modulated enhanced diffraction

    SciTech Connect

    Caliandro, Rocco; Guccione, Pietro; Nico, Giovanni; Tutuncu, Goknur; Hanson, Jonathan C.

    2015-10-21

    Modulated enhanced diffraction (MED) is a technique allowing the dynamic structural characterization of crystalline materials subjected to an external stimulus, which is particularly suited for in situ and operando structural investigations at synchrotron sources. Contributions from the (active) part of the crystal system that varies synchronously with the stimulus can be extracted by an offline analysis, which can only be applied in the case of periodic stimuli and linear system responses. In this paper a new decomposition approach based on multivariate analysis is proposed. The standard principal component analysis (PCA) is adapted to treat MED data: specific figures of merit based on their scores and loadings are found, and the directions of the principal components obtained by PCA are modified to maximize such figures of merit. As a result, a general method to decompose MED data, called optimum constrained components rotation (OCCR), is developed, which produces very precise results on simulated data, even in the case of nonperiodic stimuli and/or nonlinear responses. The multivariate analysis approach is able to supply in one shot both the diffraction pattern related to the active atoms (through the OCCR loadings) and the time dependence of the system response (through the OCCR scores). When applied to real data, OCCR was able to supply only the latter information, as the former was hindered by changes in abundances of different crystal phases, which occurred besides structural variations in the specific case considered. Developing a decomposition procedure able to cope with this combined effect represents the next challenge in MED analysis.

  6. Tailored multivariate analysis for modulated enhanced diffraction

    DOE PAGES

    Caliandro, Rocco; Guccione, Pietro; Nico, Giovanni; ...

    2015-10-21

    Modulated enhanced diffraction (MED) is a technique allowing the dynamic structural characterization of crystalline materials subjected to an external stimulus, which is particularly suited for in situ and operando structural investigations at synchrotron sources. Contributions from the (active) part of the crystal system that varies synchronously with the stimulus can be extracted by an offline analysis, which can only be applied in the case of periodic stimuli and linear system responses. In this paper a new decomposition approach based on multivariate analysis is proposed. The standard principal component analysis (PCA) is adapted to treat MED data: specific figures of merit based on their scores and loadings are found, and the directions of the principal components obtained by PCA are modified to maximize such figures of merit. As a result, a general method to decompose MED data, called optimum constrained components rotation (OCCR), is developed, which produces very precise results on simulated data, even in the case of nonperiodic stimuli and/or nonlinear responses. Furthermore, the multivariate analysis approach is able to supply in one shot both the diffraction pattern related to the active atoms (through the OCCR loadings) and the time dependence of the system response (through the OCCR scores). When applied to real data, however, OCCR was able to supply only the latter information, as the former was hindered by changes in abundances of different crystal phases, which occurred besides structural variations in the specific case considered. Developing a decomposition procedure able to cope with this combined effect represents the next challenge in MED analysis.
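
    The starting point of OCCR, a plain PCA of the time-resolved patterns into loadings (diffraction-pattern-like vectors) and scores (the time response), can be sketched as follows on synthetic MED-like data; the constrained rotation itself is not reproduced, and all names, sizes and signal shapes below are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(6)
        n_times, n_points = 120, 400
        t = np.linspace(0, 4 * np.pi, n_times)

        # Synthetic MED-like data: a static pattern plus an "active" contribution
        # that varies with the external stimulus, plus noise.
        static = np.exp(-0.5 * ((np.arange(n_points) - 150) / 8.0) ** 2)
        active = np.exp(-0.5 * ((np.arange(n_points) - 250) / 6.0) ** 2)
        data = np.outer(np.ones(n_times), static) + np.outer(np.sin(t), 0.1 * active)
        data += 0.005 * rng.normal(size=data.shape)

        # Plain PCA via SVD of the mean-centred matrix: the first loading approximates
        # the active-atom difference pattern, the first score the time response.
        centred = data - data.mean(axis=0)
        U, s, Vt = np.linalg.svd(centred, full_matrices=False)
        loading_1 = Vt[0]            # ~ difference pattern (up to sign and scale)
        score_1 = U[:, 0] * s[0]     # ~ sin-like response to the stimulus
        print(abs(np.corrcoef(score_1, np.sin(t))[0, 1]))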

  7. Revised landsat-5 thematic mapper radiometric calibration

    USGS Publications Warehouse

    Chander, G.; Markham, B.L.; Barsi, J.A.

    2007-01-01

    Effective April 2, 2007, the radiometric calibration of Landsat-5 (L5) Thematic Mapper (TM) data that are processed and distributed by the U.S. Geological Survey (USGS) Center for Earth Resources Observation and Science (EROS) will be updated. The lifetime gain model that was implemented on May 5, 2003, for the reflective bands (1-5, 7) will be replaced by a new lifetime radiometric-calibration curve that is derived from the instrument's response to pseudoinvariant desert sites and from cross calibration with the Landsat-7 (L7) Enhanced TM Plus (ETM+). Although this calibration update applies to all archived and future L5 TM data, the principal improvements in the calibration are for the data acquired during the first eight years of the mission (1984-1991), where the changes in the instrument-gain values are as much as 15%. The radiometric scaling coefficients for bands 1 and 2 for approximately the first eight years of the mission have also been changed. Users will need to apply these new coefficients to convert the calibrated data product digital numbers to radiance. The scaling coefficients for the other bands have not changed.
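
    Applying rescaling coefficients of this kind amounts to a linear gain/bias conversion from calibrated digital numbers to at-sensor spectral radiance; the sketch below shows that conversion with illustrative LMIN/LMAX values only, which should not be confused with the revised coefficients published by USGS for specific bands and acquisition dates.

        import numpy as np

        def dn_to_radiance(qcal, lmin, lmax, qcalmin=1, qcalmax=255):
            """Convert calibrated digital numbers to at-sensor spectral radiance
            using the linear gain/bias (LMIN/LMAX) rescaling form."""
            gain = (lmax - lmin) / (qcalmax - qcalmin)
            return gain * (np.asarray(qcal, dtype=float) - qcalmin) + lmin

        # Illustrative LMIN/LMAX values only; actual values depend on band and date.
        radiance = dn_to_radiance(np.array([1, 128, 255]), lmin=-1.52, lmax=193.0)
        print(np.round(radiance, 2))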

  8. The COS Calibration Pipeline

    NASA Astrophysics Data System (ADS)

    Hodge, Philip E.; Kaiser, M. E.; Keyes, C. D.; Ake, T. B.; Aloisi, A.; Friedman, S. D.; Oliveira, C. M.; Shaw, B.; Sahnow, D. J.; Penton, S. V.; Froning, C. S.; Beland, S.; Osterman, S.; Green, J.; COS/STIS STScI Team; IDT, COS

    2008-05-01

    The Cosmic Origins Spectrograph, COS, (Green, J, et al., 2000, Proc SPIE, 4013) will be installed in the Hubble Space Telescope (HST) during the next servicing mission. This will be the most sensitive ultraviolet spectrograph ever flown aboard HST. The program (CALCOS) for pipeline calibration of HST/COS data has been developed by the Space Telescope Science Institute. As with other HST pipelines, CALCOS uses an association table to list the data files to be included, and it employs header keywords to specify the calibration steps to be performed and the reference files to be used. COS includes both a cross delay line detector for the far ultraviolet (FUV) and a MAMA detector for the near ultraviolet (NUV). CALCOS uses a common structure for both channels, but the specific calibration steps differ. The calibration steps include pulse-height filtering and geometric correction for FUV, and flat-field, deadtime, and Doppler correction for both detectors. A 1-D spectrum will be extracted and flux calibrated. Data will normally be taken in TIME-TAG mode, recording the time and location of each detected photon, although ACCUM mode will also be supported. The wavelength calibration uses an on-board spectral line lamp. To enable precise wavelength calibration, default operations will simultaneously record the science target and lamp spectrum by executing brief (tag-flash) lamp exposures at least once per external target exposure.

  9. Rice Seed Cultivar Identification Using Near-Infrared Hyperspectral Imaging and Multivariate Data Analysis

    PubMed Central

    Kong, Wenwen; Zhang, Chu; Liu, Fei; Nie, Pengcheng; He, Yong

    2013-01-01

    A near-infrared (NIR) hyperspectral imaging system was developed in this study. NIR hyperspectral imaging combined with multivariate data analysis was applied to identify rice seed cultivars. Spectral data were extracted from hyperspectral images. Along with Partial Least Squares Discriminant Analysis (PLS-DA), Soft Independent Modeling of Class Analogy (SIMCA), K-Nearest Neighbor Algorithm (KNN) and Support Vector Machine (SVM), a novel machine learning algorithm called Random Forest (RF) was applied in this study. Spectra from 1,039 nm to 1,612 nm were used as full spectra to build classification models. PLS-DA and KNN models obtained over 80% classification accuracy, and SIMCA, SVM and RF models obtained 100% classification accuracy in both the calibration and prediction set. Twelve optimal wavelengths were selected by weighted regression coefficients of the PLS-DA model. Based on optimal wavelengths, PLS-DA, KNN, SVM and RF models were built. All optimal wavelengths-based models (except PLS-DA) produced classification rates over 80%. The performances of full spectra-based models were better than optimal wavelengths-based models. The overall results indicated that hyperspectral imaging could be used for rice seed cultivar identification, and RF is an effective classification technique. PMID:23857260
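
    A brief sketch of the classification step on spectra extracted from the hyperspectral images, using scikit-learn; the spectra and cultivar labels are random placeholders, so the accuracies reported above are not reproduced:

        # Random Forest and SVM classification of extracted NIR spectra (placeholder data).
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)
        X = rng.random((200, 256))              # spectra: samples x wavelengths (placeholder)
        y = rng.integers(0, 4, size=200)        # cultivar labels (placeholder)

        X_cal, X_pred, y_cal, y_pred = train_test_split(X, y, test_size=0.3, random_state=0)
        for model in (RandomForestClassifier(random_state=0), SVC(kernel="rbf")):
            model.fit(X_cal, y_cal)
            print(type(model).__name__, model.score(X_pred, y_pred))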

  10. New technique for calibrating hydrocarbon gas flowmeters

    NASA Technical Reports Server (NTRS)

    Singh, J. J.; Puster, R. L.

    1984-01-01

    A technique for measuring calibration correction factors for hydrocarbon mass flowmeters is described. It is based on the Nernst theorem for matching the partial pressure of oxygen in the combustion products of the test hydrocarbon, burned in oxygen-enriched air, with that in normal air. It is applied to a widely used type of commercial thermal mass flowmeter for a number of hydrocarbons. The calibration correction factors measured using this technique are in good agreement with the values obtained by other independent procedures. The technique is successfully applied to the measurement of differences as low as one percent of the effective hydrocarbon content of the natural gas test samples.

  11. An Investigation of Multivariate Adaptive Regression Splines for Modeling and Analysis of Univariate and Semi-Multivariate Time Series Systems

    DTIC Science & Technology

    1991-09-01

    GRAFSTAT from IBM Research; I am grateful to Dr. Peter Welch for supplying GRAFSTAT. To P.A.W. Lewis, thank you for your support, confidence and... "Multivariate Adaptive Regression Splines", Annals of Statistics, v. 19, no. 2, pp. 1-142, 1991. Gelb, A., Applied Optimal Estimation, M.I.T. Press, Cambridge.

  12. AUTOMATIC CALIBRATING SYSTEM FOR PRESSURE TRANSDUCERS

    DOEpatents

    Amonette, E.L.; Rodgers, G.W.

    1958-01-01

    An automatic system for calibrating a number of pressure transducers is described. The disclosed embodiment of the invention uses a mercurial manometer to measure the air pressure applied to the transducer. A servo system follows the top of the mercury column as the pressure is changed and operates an analog-to-digital converter. This converter furnishes electrical pulses, each representing an increment of pressure change, to a reversible counter. The transducer furnishes a signal at each calibration point, causing an electric typewriter and a card-punch machine to record the pressure at the instant as indicated by the counter. Another counter keeps track of the calibration points so that a number identifying each point is recorded with the corresponding pressure. A special relay control system controls the pressure trend and programs the sequential calibration of several transducers.

  13. Jet energy calibration at the LHC

    DOE PAGES

    Schwartzman, Ariel

    2015-11-10

    In this study, jets are one of the most prominent physics signatures of high energy proton–proton (p–p) collisions at the Large Hadron Collider (LHC). They are key physics objects for precision measurements and searches for new phenomena. This review provides an overview of the reconstruction and calibration of jets at the LHC during its first Run. ATLAS and CMS developed different approaches for the reconstruction of jets, but use similar methods for the energy calibration. ATLAS reconstructs jets utilizing input signals from their calorimeters and use charged particle tracks to refine their energy measurement and suppress the effects of multiple p–p interactions (pileup). CMS, instead, combines calorimeter and tracking information to build jets from particle flow objects. Jets are calibrated using Monte Carlo (MC) simulations and a residual in situ calibration derived from collision data is applied to correct for the differences in jet response between data and Monte Carlo.

  14. Jet energy calibration at the LHC

    SciTech Connect

    Schwartzman, Ariel

    2015-11-10

    In this study, jets are one of the most prominent physics signatures of high energy proton–proton (p–p) collisions at the Large Hadron Collider (LHC). They are key physics objects for precision measurements and searches for new phenomena. This review provides an overview of the reconstruction and calibration of jets at the LHC during its first Run. ATLAS and CMS developed different approaches for the reconstruction of jets, but use similar methods for the energy calibration. ATLAS reconstructs jets utilizing input signals from their calorimeters and use charged particle tracks to refine their energy measurement and suppress the effects of multiple p–p interactions (pileup). CMS, instead, combines calorimeter and tracking information to build jets from particle flow objects. Jets are calibrated using Monte Carlo (MC) simulations and a residual in situ calibration derived from collision data is applied to correct for the differences in jet response between data and Monte Carlo.

  15. DIRBE External Calibrator (DEC)

    NASA Technical Reports Server (NTRS)

    Wyatt, Clair L.; Thurgood, V. Alan; Allred, Glenn D.

    1987-01-01

    Under NASA Contract No. NAS5-28185, the Center for Space Engineering at Utah State University has produced a calibration instrument for the Diffuse Infrared Background Experiment (DIRBE). DIRBE is one of the instruments aboard the Cosmic Background Explorer (COBE). The calibration instrument is referred to as the DEC (DIRBE External Calibrator). DEC produces a steerable, infrared beam of controlled spectral content and intensity and with selectable point source or diffuse source characteristics, that can be directed into the DIRBE to map fields and determine response characteristics. This report discusses the design of the DEC instrument, its operation and characteristics, and provides an analysis of the system's capabilities and performance.

  16. Airdata Measurement and Calibration

    NASA Technical Reports Server (NTRS)

    Haering, Edward A., Jr.

    1995-01-01

    This memorandum provides a brief introduction to airdata measurement and calibration. Readers will learn about typical test objectives, quantities to measure, and flight maneuvers and operations for calibration. The memorandum informs readers about tower-flyby, trailing cone, pacer, radar-tracking, and dynamic airdata calibration maneuvers. Readers will also begin to understand how some data analysis considerations and special airdata cases, including high-angle-of-attack flight, high-speed flight, and nonobtrusive sensors are handled. This memorandum is not intended to be all inclusive; this paper contains extensive reference and bibliography sections.

  17. Dynamic Pressure Calibration Standard

    NASA Technical Reports Server (NTRS)

    Schutte, P. C.; Cate, K. H.; Young, S. D.

    1986-01-01

    Vibrating columns of fluid used to calibrate transducers. Dynamic pressure calibration standard developed for calibrating flush diaphragm-mounted pressure transducers. Pressures up to 20 kPa (3 psi) accurately generated over frequency range of 50 to 1,800 Hz. System includes two conically shaped aluminum columns one 5 cm (2 in.) high for low pressures and another 11 cm (4.3 in.) high for higher pressures, each filled with viscous fluid. Each column mounted on armature of vibration exciter, which imparts sinusoidally varying acceleration to fluid column. Signal noise low, and waveform highly dependent on quality of drive signal in vibration exciter.

  18. Lidar Calibration Centre

    NASA Astrophysics Data System (ADS)

    Pappalardo, Gelsomina; Freudenthaler, Volker; Nicolae, Doina; Mona, Lucia; Belegante, Livio; D'Amico, Giuseppe

    2016-06-01

    This paper presents the newly established Lidar Calibration Centre, a distributed infrastructure in Europe, whose goal is to offer services for complete characterization and calibration of lidars and ceilometers. Mobile reference lidars, laboratories for testing and characterization of optics and electronics, facilities for inspection and debugging of instruments, as well as for training in good practices are open to users from the scientific community, operational services and private sector. The Lidar Calibration Centre offers support for trans-national access through the EC HORIZON2020 project ACTRIS-2.

  19. Compact radiometric microwave calibrator

    SciTech Connect

    Fixsen, D. J.; Wollack, E. J.; Kogut, A.; Limon, M.; Mirel, P.; Singal, J.; Fixsen, S. M.

    2006-06-15

    The calibration methods for the ARCADE II instrument are described and the accuracy estimated. The Steelcast coated aluminum cones which comprise the calibrator have a low reflection while maintaining 94% of the absorber volume within 5 mK of the base temperature (modeled). The calibrator demonstrates an absorber with the active part less than one wavelength thick and only marginally larger than the mouth of the largest horn and yet black (less than -40 dB or 0.01% reflection) over five octaves in frequency.

  20. PACS photometer calibration block analysis

    NASA Astrophysics Data System (ADS)

    Moór, A.; Müller, T. G.; Kiss, C.; Balog, Z.; Billot, N.; Marton, G.

    2014-07-01

    The absolute stability of the PACS bolometer response over the entire mission lifetime without applying any corrections is about 0.5 % (standard deviation) or about 8 % peak-to-peak. This fantastic stability allows us to calibrate all scientific measurements by a fixed and time-independent response file, without using any information from the PACS internal calibration sources. However, the analysis of calibration block observations revealed clear correlations of the internal source signals with the evaporator temperature and a signal drift during the first half hour after the cooler recycling. These effects are small, but can be seen in repeated measurements of standard stars. From our analysis we established corrections for both effects which push the stability of the PACS bolometer response to about 0.2 % (stdev) or 2 % in the blue, 3 % in the green and 5 % in the red channel (peak-to-peak). After both corrections we still see a correlation of the signals with PACS FPU temperatures, possibly caused by parasitic heat influences via the Kevlar wires which connect the bolometers with the PACS Focal Plane Unit. No aging effect or degradation of the photometric system during the mission lifetime has been found.

  1. Calibrating page sized Gafchromic EBT3 films

    SciTech Connect

    Crijns, W.; Maes, F.; Heide, U. A. van der; Van den Heuvel, F.

    2013-01-15

    Purpose: The purpose is the development of a novel calibration method for dosimetry with Gafchromic EBT3 films. The method should be applicable for pretreatment verification of volumetric modulated arc, and intensity modulated radiotherapy. Because the exposed area on film can be large for such treatments, lateral scan errors must be taken into account. The correction for the lateral scan effect is obtained from the calibration data itself. Methods: In this work, the film measurements were modeled using their relative scan values (Transmittance, T). Inside the transmittance domain a linear combination and a parabolic lateral scan correction described the observed transmittance values. The linear combination model combined a monomer transmittance state (T₀) and a polymer transmittance state (T∞) of the film. The dose domain was associated with the observed effects in the transmittance domain through a rational calibration function. On the calibration film only simple static fields were applied and page sized films were used for calibration and measurements (treatment verification). Four different calibration setups were considered and compared with respect to dose estimation accuracy. The first (I) used a calibration table from 32 regions of interest (ROIs) spread on 4 calibration films, the second (II) used 16 ROIs spread on 2 calibration films, the third (III), and fourth (IV) used 8 ROIs spread on a single calibration film. The calibration tables of the setups I, II, and IV contained eight dose levels delivered to different positions on the films, while for setup III only four dose levels were applied. Validation was performed by irradiating film strips with known doses at two different time points over the course of a week. Accuracy of the dose response and the lateral effect correction was estimated using the dose difference and the root mean squared error (RMSE), respectively. Results: A calibration based on two films was the optimal
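
    The record relates dose to the transmittance domain through a rational calibration function; as a hedged sketch only, a generic rational dose-transmittance fit can be set up as below, with an assumed functional form and hypothetical calibration points rather than the authors' model:

        # Generic rational dose-transmittance calibration fit (assumed form, hypothetical data).
        import numpy as np
        from scipy.optimize import curve_fit

        def dose_from_transmittance(t, a, b, c):
            # Dose grows as the measured transmittance approaches the polymer state.
            return a + b / (t - c)

        t_cal = np.array([0.80, 0.70, 0.62, 0.55, 0.50, 0.46, 0.43, 0.41])   # hypothetical transmittances
        d_cal = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5])           # hypothetical doses (Gy)

        params, _ = curve_fit(dose_from_transmittance, t_cal, d_cal, p0=(-0.5, 0.25, 0.35), maxfev=10000)
        print(dose_from_transmittance(0.58, *params))    # predicted dose for a measured transmittance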

  2. Multivariate image processing technique for noninvasive glucose sensing

    NASA Astrophysics Data System (ADS)

    Webb, Anthony J.; Cameron, Brent D.

    2010-02-01

    A potential noninvasive glucose sensing technique was investigated for application towards in vivo glucose monitoring for individuals afflicted with diabetes mellitus. Three dimensional ray tracing simulations using a realistic iris pattern integrated into an advanced human eye model are reported for physiological glucose concentrations ranging from 0 to 500 mg/dL. The anterior chamber of the human eye contains a clear fluid known as the aqueous humor. The optical refractive index of the aqueous humor varies on the order of 1.5×10⁻⁴ for a change in glucose concentration of 100 mg/dL. The simulation data were analyzed with a developed multivariate chemometrics procedure that utilizes iris-based images to form a calibration model. Results from these simulations show considerable potential for use of the developed method in the prediction of glucose. For further demonstration, an in vitro eye model was developed to validate the computer based modeling technique. In these experiments, a realistic iris pattern was placed in an analog eye model in which the glucose concentration within the fluid representing the aqueous humor was varied. A series of high resolution digital images were acquired using an optical imaging system. These images were then used to form an in vitro calibration model utilizing the same multivariate chemometric technique demonstrated in the 3-D optical simulations. In general, the developed method exhibits considerable applicability towards its use as an in vivo platform for the noninvasive monitoring of physiological glucose concentration.

  3. Calibration Fixture For Anemometer Probes

    NASA Technical Reports Server (NTRS)

    Lewis, Charles R.; Nagel, Robert T.

    1993-01-01

    Fixture facilitates calibration of three-dimensional sideflow thermal anemometer probes. With fixture, probe oriented at number of angles throughout its design range. Readings calibrated as function of orientation in airflow. Calibration repeatable and verifiable.

  5. Multivariate Strategies in Functional Magnetic Resonance Imaging

    ERIC Educational Resources Information Center

    Hansen, Lars Kai

    2007-01-01

    We discuss aspects of multivariate fMRI modeling, including the statistical evaluation of multivariate models and means for dimensional reduction. In a case study we analyze linear and non-linear dimensional reduction tools in the context of a "mind reading" predictive multivariate fMRI model.

  7. On the individual calibration of hailpads

    NASA Astrophysics Data System (ADS)

    Palencia, Covadonga; Berthet, Claude; Massot, Marta; Castro, Amaya; Dessens, Jean; Fraile, Roberto

    2007-02-01

    This paper is a comparative study between the two most common hailpad calibration systems: one annual calibration of a whole consignment of material, and the individual calibration of each plate after a hailfall. Individual calibration attempts to minimize errors due to differences in sensitivity to the impact of hailstones between plates from the same consignment, or due to differences in the inking process before the actual measurement. The comparison was carried out using calibration data from the past few years in the hailpad network in south-western France, and data from an individual calibration process on material provided by the hailpad network in Lleida (Spain). The same type of material was used in the two cases. The results confirm that the error in measuring hailstone sizes is smaller in the case of an individual calibration of hailpads than when one single calibration process was carried out for a whole consignment. The former is approximately 80% of the latter. However, this error could have been higher if it had not been the same person carrying out the single calibration process and the measuring of the dents: it has been found that differences in the inking process may account for up to 20% of the error in the case of small hailstones. Calibration errors affecting other variables, e.g. energy or parameter λ of the exponential size distribution are generally higher (5% and 18%, respectively) than errors due to the spatial variability of the hailstones. However, the calibration method does not influence the maximum size, since the relative error attributed to the spatial variability is about 8 times the calibration error. In conclusion, if errors in determining energy or parameter λ are to be reduced to a minimum, it is highly advisable to be consistent in applying the measuring procedure (if possible with the same person carrying out the measurements all the time), and even to use individual calibration on each plate, always bearing in mind that

  8. Review of robust multivariate statistical methods in high dimension.

    PubMed

    Filzmoser, Peter; Todorov, Valentin

    2011-10-31

    General ideas of robust statistics, and specifically robust statistical methods for calibration and dimension reduction are discussed. The emphasis is on analyzing high-dimensional data. The discussed methods are applied using the packages chemometrics and rrcov of the statistical software environment R. It is demonstrated how the functions can be applied to real high-dimensional data from chemometrics, and how the results can be interpreted. Copyright © 2011 Elsevier B.V. All rights reserved.

  9. Multivariate optimization of capillary electrophoresis methods: a critical review.

    PubMed

    Orlandini, Serena; Gotti, Roberto; Furlanetto, Sandra

    2014-01-01

    In this article, a review of recent applications of multivariate techniques for the optimization of electromigration methods is presented. Papers published in the period from August 2007 to February 2013 have been taken into consideration. Following a brief description of each of the CE operative modes involved, the characteristics of the chemometric strategies (type of design, factors and responses) applied to address a number of analytical challenges are presented. Finally, a critical discussion is provided, giving some practical advice and pointing out the most common issues involved in the multivariate set-up of CE methods.

  10. Steady-state decoupling and design of linear multivariable systems

    NASA Technical Reports Server (NTRS)

    Thaler, G. J.

    1974-01-01

    A constructive criterion for decoupling the steady states of a linear time-invariant multivariable system is presented. This criterion consists of a set of inequalities which, when satisfied, will cause the steady states of a system to be decoupled. Stability analysis and a new design technique for such systems are given. A new and simple connection between single-loop and multivariable cases is found. These results are then applied to the compensation design for NASA STOL C-8A aircraft. Both steady-state decoupling and stability are justified through computer simulations.

  11. Efficient Calibration of Computationally Intensive Hydrological Models

    NASA Astrophysics Data System (ADS)

    Poulin, A.; Huot, P. L.; Audet, C.; Alarie, S.

    2015-12-01

    A new hybrid optimization algorithm for the calibration of computationally-intensive hydrological models is introduced. The calibration of hydrological models is a blackbox optimization problem where the only information available to the optimization algorithm is the objective function value. In the case of distributed hydrological models, the calibration process is often known to be hampered by computational efficiency issues. Running a single simulation may take several minutes and since the optimization process may require thousands of model evaluations, the computational time can easily expand to several hours or days. A blackbox optimization algorithm, which can substantially improve the calibration efficiency, has been developed. It merges both the convergence analysis and robust local refinement from the Mesh Adaptive Direct Search (MADS) algorithm, and the global exploration capabilities from the heuristic strategies used by the Dynamically Dimensioned Search (DDS) algorithm. The new algorithm is applied to the calibration of the distributed and computationally-intensive HYDROTEL model on three different river basins located in the province of Quebec (Canada). Two calibration problems are considered: (1) calibration of a 10-parameter version of HYDROTEL, and (2) calibration of a 19-parameter version of the same model. A previous study by the authors had shown that the original version of DDS was the most efficient method for the calibration of HYDROTEL, when compared to the MADS and the very well-known SCEUA algorithms. The computational efficiency of the hybrid DDS-MADS method is therefore compared with the efficiency of the DDS algorithm based on a 2000 model evaluations budget. Results show that the hybrid DDS-MADS method can reduce the total number of model evaluations by 70% for the 10-parameter version of HYDROTEL and by 40% for the 19-parameter version without compromising the quality of the final objective function value.
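
    A minimal sketch of a DDS-style search loop (the MADS refinement of the hybrid method is not reproduced); the objective, bounds, budget, and bound handling below are placeholders and simplifications, not the HYDROTEL setup:

        # Minimal DDS-style search loop (greedy, single-solution; bound handling by clipping).
        import numpy as np

        def dds(objective, lower, upper, budget=2000, r=0.2, seed=0):
            rng = np.random.default_rng(seed)
            lower, upper = np.asarray(lower, float), np.asarray(upper, float)
            x_best = lower + rng.random(lower.size) * (upper - lower)
            f_best = objective(x_best)
            for i in range(1, budget):
                p = 1.0 - np.log(i) / np.log(budget)        # perturb fewer variables as the search matures
                mask = rng.random(lower.size) < p
                if not mask.any():
                    mask[rng.integers(lower.size)] = True
                x_new = x_best.copy()
                x_new[mask] += r * (upper - lower)[mask] * rng.standard_normal(mask.sum())
                x_new = np.clip(x_new, lower, upper)
                f_new = objective(x_new)
                if f_new < f_best:                          # greedy acceptance
                    x_best, f_best = x_new, f_new
            return x_best, f_best

        # Toy usage; a real calibration would call the hydrological model inside `objective`.
        print(dds(lambda x: float(np.sum((x - 0.3) ** 2)), lower=[0.0] * 10, upper=[1.0] * 10, budget=500))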

  12. Multivariate analyses in microbial ecology

    PubMed Central

    Ramette, Alban

    2007-01-01

    Environmental microbiology is undergoing a dramatic revolution due to the increasing accumulation of biological information and contextual environmental parameters. This will not only enable a better identification of diversity patterns, but will also shed more light on the associated environmental conditions, spatial locations, and seasonal fluctuations, which could explain such patterns. Complex ecological questions may now be addressed using multivariate statistical analyses, which represent a vast potential of techniques that are still underexploited. Here, well-established exploratory and hypothesis-driven approaches are reviewed, so as to foster their addition to the microbial ecologist toolbox. Because such tools aim at reducing data set complexity, at identifying major patterns and putative causal factors, they will certainly find many applications in microbial ecology. PMID:17892477

  13. Roundness calibration standard

    DOEpatents

    Burrus, Brice M.

    1984-01-01

    A roundness calibration standard is provided with a first arc constituting the major portion of a circle and a second arc lying between the remainder of the circle and the chord extending between the ends of said first arc.

  14. SRAM Detector Calibration

    NASA Technical Reports Server (NTRS)

    Soli, G. A.; Blaes, B. R.; Beuhler, M. G.

    1994-01-01

    Custom proton sensitive SRAM chips are being flown on the BMDO Clementine missions and Space Technology Research Vehicle experiments. This paper describes the calibration procedure for the SRAM proton detectors and their response to the space environment.

  15. Calibrated Properties Model

    SciTech Connect

    C. Ahlers; H. Liu

    2000-03-12

    The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the ''AMR Development Plan for U0035 Calibrated Properties Model REV00''. These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, as well as Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions.

  16. Meteorological radar calibration

    NASA Technical Reports Server (NTRS)

    Hodge, D. B.

    1978-01-01

    A meteorological radar calibration technique is developed. It is found that the integrated, range corrected, received power saturates under intense rain conditions in a manner analogous to that encountered for the radiometric path temperature. Furthermore, it is found that this saturation condition establishes a bound which may be used to determine an absolute radar calibration for the case of radars operating at attenuating wavelengths. In the case of less intense rainfall or for radars at nonattenuating wavelengths, the relationship for direct calibration in terms of an independent measurement of radiometric path temperature is developed. This approach offers the advantage that the calibration is in terms of an independent measurement of the rainfall through the same elevated region as that viewed by the radar.

  17. Traceable Pyrgeometer Calibrations

    SciTech Connect

    Dooraghi, Mike; Kutchenreiter, Mark; Reda, Ibrahim; Habte, Aron; Sengupta, Manajit; Andreas, Afshin; Newman, Martina; Webb, Craig

    2016-05-02

    This presentation provides a high-level overview of the progress on the Broadband Outdoor Radiometer Calibrations for all shortwave and longwave radiometers that are deployed by the Atmospheric Radiation Measurement program.

  18. Scanner calibration revisited

    PubMed Central

    2010-01-01

    Background Calibration of a microarray scanner is critical for accurate interpretation of microarray results. Shi et al. (BMC Bioinformatics, 2005, 6, Art. No. S11 Suppl. 2.) reported usage of a Full Moon BioSystems slide for calibration. Inspired by the Shi et al. work, we have calibrated microarray scanners in our previous research. We were puzzled however, that most of the signal intensities from a biological sample fell below the sensitivity threshold level determined by the calibration slide. This conundrum led us to re-investigate the quality of calibration provided by the Full Moon BioSystems slide as well as the accuracy of the analysis performed by Shi et al. Methods Signal intensities were recorded on three different microarray scanners at various photomultiplier gain levels using the same calibration slide from Full Moon BioSystems. Data analysis was conducted on raw signal intensities without normalization or transformation of any kind. Weighted least-squares method was used to fit the data. Results We found that initial analysis performed by Shi et al. did not take into account autofluorescence of the Full Moon BioSystems slide, which led to a grossly distorted microarray scanner response. Our analysis revealed that a power-law function, which is explicitly accounting for the slide autofluorescence, perfectly described a relationship between signal intensities and fluorophore quantities. Conclusions Microarray scanners respond in a much less distorted fashion than was reported by Shi et al. Full Moon BioSystems calibration slides are inadequate for performing calibration. We recommend against using these slides. PMID:20594322
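
    As an illustration of the functional form described above, a power-law response with an additive autofluorescence term can be fitted as follows; the quantities and signals are synthetic placeholders, not Full Moon BioSystems measurements:

        # Fit of a power-law response with an additive autofluorescence offset (synthetic data).
        import numpy as np
        from scipy.optimize import curve_fit

        def scanner_response(quantity, a, b, autofluorescence):
            return a * quantity ** b + autofluorescence

        quantity = np.logspace(-3, 1, 20)                     # placeholder fluorophore quantities
        signal = scanner_response(quantity, 5000.0, 0.9, 150.0)
        signal += np.random.default_rng(2).normal(0, 10.0, signal.size)

        params, _ = curve_fit(scanner_response, quantity, signal, p0=(1000.0, 1.0, 100.0))
        print(params)    # recovered (a, b, autofluorescence)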

  19. Scanner calibration revisited.

    PubMed

    Pozhitkov, Alexander E

    2010-07-01

    Calibration of a microarray scanner is critical for accurate interpretation of microarray results. Shi et al. (BMC Bioinformatics, 2005, 6, Art. No. S11 Suppl. 2.) reported usage of a Full Moon BioSystems slide for calibration. Inspired by the Shi et al. work, we have calibrated microarray scanners in our previous research. We were puzzled however, that most of the signal intensities from a biological sample fell below the sensitivity threshold level determined by the calibration slide. This conundrum led us to re-investigate the quality of calibration provided by the Full Moon BioSystems slide as well as the accuracy of the analysis performed by Shi et al. Signal intensities were recorded on three different microarray scanners at various photomultiplier gain levels using the same calibration slide from Full Moon BioSystems. Data analysis was conducted on raw signal intensities without normalization or transformation of any kind. Weighted least-squares method was used to fit the data. We found that initial analysis performed by Shi et al. did not take into account autofluorescence of the Full Moon BioSystems slide, which led to a grossly distorted microarray scanner response. Our analysis revealed that a power-law function, which is explicitly accounting for the slide autofluorescence, perfectly described a relationship between signal intensities and fluorophore quantities. Microarray scanners respond in a much less distorted fashion than was reported by Shi et al. Full Moon BioSystems calibration slides are inadequate for performing calibration. We recommend against using these slides.

  20. Chemiluminescence-based multivariate sensing of local equivalence ratios in premixed atmospheric methane-air flames

    SciTech Connect

    Tripathi, Markandey M.; Krishnan, Sundar R.; Srinivasan, Kalyan K.; Yueh, Fang-Yu; Singh, Jagdish P.

    2011-09-07

    Chemiluminescence emissions from OH*, CH*, C2, and CO2 formed within the reaction zone of premixed flames depend upon the fuel-air equivalence ratio in the burning mixture. In the present paper, a new partial least squares regression (PLS-R) based multivariate sensing methodology is investigated and compared with an OH*/CH* intensity ratio-based calibration model for sensing equivalence ratio in atmospheric methane-air premixed flames. Five replications of spectral data at nine different equivalence ratios ranging from 0.73 to 1.48 were used in the calibration of both models. During model development, the PLS-R model was initially validated with the calibration data set using the leave-one-out cross validation technique. Since the PLS-R model used the entire raw spectral intensities, it did not need the nonlinear background subtraction of CO2 emission that is required for typical OH*/CH* intensity ratio calibrations. An unbiased spectral data set (not used in the PLS-R model development), for 28 different equivalence ratio conditions ranging from 0.71 to 1.67, was used to predict equivalence ratios using the PLS-R and the intensity ratio calibration models. It was found that the equivalence ratios predicted with the PLS-R based multivariate calibration model matched the experimentally measured equivalence ratios within 7%; whereas, the OH*/CH* intensity ratio calibration grossly underpredicted equivalence ratios in comparison to measured equivalence ratios, especially under rich conditions (equivalence ratios > 1.2). The practical implications of the chemiluminescence-based multivariate equivalence ratio sensing methodology are also discussed.
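
    A compact sketch of a PLS regression calibration step using scikit-learn; the spectra below are random placeholders standing in for measured chemiluminescence spectra, and the component count is an assumption:

        # PLS regression of equivalence ratio on full spectra (placeholder data).
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(3)
        spectra = rng.random((45, 1024))               # 5 replicates x 9 equivalence ratios (placeholder)
        phi = np.repeat(np.linspace(0.73, 1.48, 9), 5)

        pls = PLSRegression(n_components=5)
        pls.fit(spectra, phi)
        print(pls.predict(spectra[:3]).ravel())        # predictions for the first few spectra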

  1. A multivariate heuristic model for fuzzy time-series forecasting.

    PubMed

    Huarng, Kun-Huang; Yu, Tiffany Hui-Kuang; Hsu, Yu Wei

    2007-08-01

    Fuzzy time-series models have been widely applied due to their ability to handle nonlinear data directly and because no rigid assumptions for the data are needed. In addition, many such models have been shown to provide better forecasting results than their conventional counterparts. However, since most of these models require complicated matrix computations, this paper proposes the adoption of a multivariate heuristic function that can be integrated with univariate fuzzy time-series models into multivariate models. Such a multivariate heuristic function can easily be extended and integrated with various univariate models. Furthermore, the integrated model can handle multiple variables to improve forecasting results and, at the same time, avoid complicated computations due to the inclusion of multiple variables.

  2. Integrated calibration sphere and calibration step fixture for improved coordinate measurement machine calibration

    DOEpatents

    Clifford, Harry J [Los Alamos, NM

    2011-03-22

    A method and apparatus for mounting a calibration sphere to a calibration fixture for Coordinate Measurement Machine (CMM) calibration and qualification is described, decreasing the time required for such qualification, thus allowing the CMM to be used more productively. A number of embodiments are disclosed that allow for new and retrofit manufacture to perform as integrated calibration sphere and calibration fixture devices. This invention renders unnecessary the removal of a calibration sphere prior to CMM measurement of calibration features on calibration fixtures, thereby greatly reducing the time spent qualifying a CMM.

  3. MIRO Continuum Calibration for Asteroid Mode

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon

    2011-01-01

    MIRO (Microwave Instrument for the Rosetta Orbiter) is a lightweight, uncooled, dual-frequency heterodyne radiometer. The MIRO encountered asteroid Steins in 2008, and during the flyby, MIRO used the Asteroid Mode to measure the emission spectrum of Steins. The Asteroid Mode is one of the seven modes of the MIRO operation, and is designed to increase the length of time that a spectral line is in the MIRO pass-band during a flyby of an object. This software is used to calibrate the continuum measurement of Steins emission power during the asteroid flyby. The MIRO raw measurement data need to be calibrated in order to obtain physically meaningful data. This software calibrates the MIRO raw measurements in digital units to the brightness temperature in Kelvin. The software uses two calibration sequences that are included in the Asteroid Mode. One sequence is at the beginning of the mode, and the other at the end. The first six frames contain the measurement of a cold calibration target, while the last six frames measure a warm calibration target. The targets have known temperatures and are used to provide reference power and gain, which can be used to convert MIRO measurements into brightness temperature. The software was developed to calibrate MIRO continuum measurements from Asteroid Mode. The software determines the relationship between the raw digital unit measured by MIRO and the equivalent brightness temperature by analyzing data from calibration frames. The found relationship is applied to non-calibration frames, which are the measurements of an object of interest such as asteroids and other planetary objects that MIRO encounters during its operation. This software characterizes the gain fluctuations statistically and determines which method to estimate gain between calibration frames. For example, if the fluctuation is lower than a statistically significant level, the averaging method is used to estimate the gain between the calibration frames. If the
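
    The cold/warm frame scheme described above amounts to a standard two-point radiometric calibration; a hedged sketch with placeholder counts and target temperatures (not MIRO values) is:

        # Two-point radiometric calibration from cold and warm target frames (placeholder values).
        import numpy as np

        def two_point_calibration(counts_cold, counts_warm, t_cold, t_warm):
            gain = (np.mean(counts_warm) - np.mean(counts_cold)) / (t_warm - t_cold)
            offset = np.mean(counts_cold) - gain * t_cold
            return gain, offset

        gain, offset = two_point_calibration(
            counts_cold=[10210, 10195, 10230], counts_warm=[15480, 15510, 15495],
            t_cold=150.0, t_warm=300.0)          # hypothetical target temperatures in kelvin

        print((12840 - offset) / gain)           # brightness temperature (K) for a science-frame reading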

  4. Multivariate Global Ocean Assimilation Studies

    DTIC Science & Technology

    2002-09-30

    representations of the four-dimensional circulation of the ocean using data assimilation methods. We intend these representations to be applied in a variety of military, academic, as well as commercial applications.

  5. Camera Calibration Accuracy at Different Uav Flying Heights

    NASA Astrophysics Data System (ADS)

    Yusoff, A. R.; Ariff, M. F. M.; Idris, K. M.; Majid, Z.; Chong, A. K.

    2017-02-01

    Unmanned Aerial Vehicles (UAVs) can be used to acquire highly accurate data in deformation survey, whereby low-cost digital cameras are commonly used in the UAV mapping. Thus, camera calibration is considered important in obtaining high-accuracy UAV mapping using low-cost digital cameras. The main focus of this study was to calibrate the UAV camera at different camera distances and check the measurement accuracy. The scope of this study included camera calibration in the laboratory and on the field, and the UAV image mapping accuracy assessment used calibration parameters of different camera distances. The camera distances used for the image calibration acquisition and mapping accuracy assessment were 1.5 metres in the laboratory, and 15 and 25 metres on the field using a Sony NEX6 digital camera. A large calibration field and a portable calibration frame were used as the tools for the camera calibration and for checking the accuracy of the measurement at different camera distances. Bundle adjustment concept was applied in Australis software to perform the camera calibration and accuracy assessment. The results showed that the camera distance at 25 metres is the optimum object distance as this is the best accuracy obtained from the laboratory as well as outdoor mapping. In conclusion, the camera calibration at several camera distances should be applied to acquire better accuracy in mapping and the best camera parameter for the UAV image mapping should be selected for highly accurate mapping measurement.

  6. Distance measure with improved lower bound for multivariate time series

    NASA Astrophysics Data System (ADS)

    Li, Hailin

    2017-02-01

    Lower bound functions are an important technique for fast search and indexing of time series data. Multivariate time series exhibit high dimensionality in two respects: the time-based dimension and the variable-based dimension. To address the influence of the variable-based dimension, a novel method is proposed for computing lower bound distances for multivariate time series. Like traditional methods, the proposed method first reduces the dimensionality of the time series and therefore does not apply the lower bound function directly to the multivariate data. In the reduction step, each multivariate time series is reduced to a univariate time series, denoted a center sequence, according to the principle of piecewise aggregate approximation. In addition, an extended lower bound function is designed to obtain good tightness and to measure the distance between any two center sequences quickly. The experimental results demonstrate that the proposed lower bound function has better tightness and improves the performance of similarity search in multivariate time series datasets.
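
    A hedged sketch of the reduction idea, assuming the center sequence is formed by PAA-style averaging over time segments and variables (the paper's exact construction and lower bound function are not reproduced):

        # Reduce a multivariate series to a short univariate "center sequence" (assumed construction),
        # then compare center sequences with a cheap Euclidean distance as a pre-filter.
        import numpy as np

        def center_sequence(series, n_segments):
            """series: (length x n_variables) array -> PAA-style means over time segments and variables."""
            segments = np.array_split(series, n_segments, axis=0)
            return np.array([seg.mean() for seg in segments])

        rng = np.random.default_rng(4)
        a = rng.random((300, 6))                  # two placeholder multivariate time series
        b = rng.random((300, 6))

        ca, cb = center_sequence(a, 16), center_sequence(b, 16)
        print(np.linalg.norm(ca - cb))            # candidate filtering before a full multivariate distance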

  7. Multicomponent seismic noise attenuation with multivariate order statistic filters

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Wang, Yun; Wang, Xiaokai; Xun, Chao

    2016-10-01

    The vector relationship between multicomponent seismic data is highly important for multicomponent processing and interpretation, but this vector relationship could be damaged when each component is processed individually. To overcome the drawback of standard component-by-component filtering, multivariate order statistic filters are introduced and extended to attenuate the noise of multicomponent seismic data by treating such datasets as a vector wavefield rather than a set of scalar fields. According to the characteristics of seismic signals, we implement this type of multivariate filtering along local events. First, the optimal local events are recognized according to the similarity between the vector signals which are windowed from neighbouring seismic traces with a sliding time window along each trial trajectory. An efficient strategy is used to reduce the computational cost of similarity measurement for vector signals. Next, one vector sample is extracted from each of the neighbouring traces along the optimal local event as the input data for a multivariate filter. Different multivariate filters are optimal for different noise. The multichannel modified trimmed mean (MTM) filter, as one of the multivariate order statistic filters, is applied to synthetic and field multicomponent seismic data to test its performance for attenuating white Gaussian noise. The results indicate that the multichannel MTM filter can attenuate noise while preserving the relative amplitude information of multicomponent seismic data more effectively than a single-channel filter.
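
    A minimal sketch of a multichannel modified-trimmed-mean style filter acting on one window of vector samples; the trimming rule and the data are assumptions, and the event alignment described above is not reproduced:

        # Modified-trimmed-mean style filtering of a window of vector (multicomponent) samples.
        import numpy as np

        def mtm_filter(window, q=1.5):
            """window: (n_traces x n_components) vector samples gathered along a local event."""
            median = np.median(window, axis=0)                 # component-wise (marginal) median
            dist = np.linalg.norm(window - median, axis=1)
            keep = dist <= q * np.median(dist)                 # trim outlying vector samples
            return window[keep].mean(axis=0)

        rng = np.random.default_rng(5)
        window = rng.normal(0.0, 0.1, (11, 3))
        window[0] = [3.0, -2.5, 4.0]                           # one strongly noisy vector sample
        print(mtm_filter(window))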

  8. Atmospheric visibility estimation and image contrast calibration

    NASA Astrophysics Data System (ADS)

    Hermansson, Patrik; Edstam, Klas

    2016-10-01

    A method, referred to as contrast calibration, has been developed for transforming digital color photos of outdoor scenes from the atmospheric conditions, illumination and visibility, prevailing at the time of capturing the image to a corresponding image for other atmospheric conditions. A photo captured on a hazy day can, for instance, be converted to resemble a photo of the same scene for good visibility conditions. Converting digital color images to specified lighting and transmission conditions is useful for image-based assessment of signature suppression solutions. The method uses "calibration objects" which are photographed at about the same time as the scene of interest. The calibration objects, which (indirectly) provide information on visibility and lighting conditions, consist of two flat boards, painted in different grayscale colors, and a commercial, neutral gray, reference card. Atmospheric extinction coefficient and sky intensity can be determined, in three wavelength bands, from image pixel values on the calibration objects, and using this information the image can be converted to other atmospheric conditions. The image is transformed in contrast and color. For illustration, contrast calibration is applied to sample images of a scene acquired at different times. It is shown that contrast calibration of the images to the same reference values of extinction coefficient and sky intensity results in images that are more alike than the original images. It is also exemplified how images can be transformed to various other atmospheric weather conditions. Limitations of the method are discussed and possibilities for further development are suggested.

  9. Calibration and Validation for VIIRS Ocean Products

    NASA Astrophysics Data System (ADS)

    Arnone, R.; Davis, C.; May, D.

    2008-12-01

    Satellite data products for ocean color and SST both require precise calibration and validation to meet the continuity of present satellite ocean products. Here we outline the proposed plan for calibration and validation of VIIRS ocean data. The primary ocean color Environmental Data Records (EDRs) are Remote Sensing Reflectances (RSRs); the other EDRs such as chlorophyll are derived from the RSRs. RSRs are derived from the VIIRS Sensor Data Records (SDR) by applying an atmospheric correction that removes the gas absorptions and Rayleigh, aerosol and sea-surface reflectances. Ocean color products require highly accurate calibration and refinement of the sensor calibration using highly accurate in-situ measurements of RSRs (vicarious calibration). Similarly, the SST EDR is strongly dependent on accurate "tuning" algorithm coefficients based on large ocean match-up data sets of buoy and skin temperatures. Ocean products require both a short term and long term monitoring of the sensor "calibration" in order to provide real time ocean products for Navy and NOAA operations. Validation of EDR ocean products requires characterizing the product uncertainty based on match up ocean data from various water and atmospheric types, spanning seasonal and latitudinal variability. Product validation includes matchups with AERONET SeaPRISM above water RSRs combined with in-situ measurements of optical properties, chlorophyll, SST (bulk and skin), and other products. Ocean product validation plans are exploring using an automated network of ocean data for assessing algorithm stability and product uncertainty in order to meet the present need for real-time operational products.

  10. Using multivariate regression modeling for sampling and predicting chemical characteristics of mixed waste in old landfills.

    PubMed

    Brandstätter, Christian; Laner, David; Prantl, Roman; Fellner, Johann

    2014-12-01

    Municipal solid waste landfills pose a threat to the environment and human health, especially old landfills which lack facilities for collection and treatment of landfill gas and leachate. Consequently, missing information about emission flows prevents site-specific environmental risk assessments. To overcome this gap, the combination of waste sampling and analysis with statistical modeling is one option for estimating present and future emission potentials. Optimizing the tradeoff between investigation costs and reliable results requires knowledge about both the number of samples to be taken and the variables to be analyzed. This article aims to identify the optimized number of waste samples and variables in order to predict a larger set of variables. Therefore, we introduce a multivariate linear regression model and tested its applicability by means of two case studies. Landfill A was used to set up and calibrate the model based on 50 waste samples and twelve variables. The calibrated model was applied to Landfill B including 36 waste samples and twelve variables with four predictor variables. The case study results are twofold: first, the reliable and accurate prediction of the twelve variables can be achieved with the knowledge of four predictor variables (LOI, EC, pH and Cl). For the second Landfill B, only ten full measurements would be needed for a reliable prediction of most response variables. The four predictor variables would exhibit comparably low analytical costs in comparison to the full set of measurements. This cost reduction could be used to increase the number of samples, yielding an improved understanding of the spatial waste heterogeneity in landfills. In conclusion, the future application of the developed model potentially improves the reliability of predicted emission potentials. The model could become a standard screening tool for old landfills if its applicability and reliability were tested in additional case studies. Copyright © 2014 Elsevier Ltd
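
    A short sketch of the modeling idea: a multi-output linear regression predicting several waste characteristics from the four predictor variables; the arrays are random placeholders, with columns only standing in for LOI, EC, pH and Cl:

        # Multi-output linear regression: several waste variables from four predictors (placeholder data).
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(6)
        predictors = rng.random((50, 4))            # columns standing in for LOI, EC, pH, Cl
        responses = predictors @ rng.random((4, 8)) + rng.normal(0, 0.05, (50, 8))  # 8 other variables

        model = LinearRegression().fit(predictors, responses)
        new_samples = rng.random((10, 4))           # e.g. a reduced sampling campaign at a second site
        print(model.predict(new_samples).shape)     # -> (10, 8) predicted waste characteristics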

  11. Multivariate analysis of remote LIBS spectra using partial least squares, principal component analysis, and related techniques

    SciTech Connect

    Clegg, Samuel M; Barefield, James E; Wiens, Roger C; Sklute, Elizabeth; Dyare, Melinda D

    2008-01-01

    Quantitative analysis with LIBS traditionally employs calibration curves that are complicated by the chemical matrix effects. These chemical matrix effects influence the LIBS plasma and the ratio of elemental composition to elemental emission line intensity. Consequently, LIBS calibration typically requires a priori knowledge of the unknown, in order for a series of calibration standards similar to the unknown to be employed. In this paper, three new Multivariate Analysis (MVA) techniques are employed to analyze the LIBS spectra of 18 disparate igneous and highly-metamorphosed rock samples. Partial Least Squares (PLS) analysis is used to generate a calibration model from which unknown samples can be analyzed. Principal Components Analysis (PCA) and Soft Independent Modeling of Class Analogy (SIMCA) are employed to generate a model and predict the rock type of the samples. These MVA techniques appear to exploit the matrix effects associated with the chemistries of these 18 samples.

  12. Psychophysical contrast calibration

    PubMed Central

    To, Long; Woods, Russell L; Goldstein, Robert B; Peli, Eli

    2013-01-01

    Electronic displays and computer systems offer numerous advantages for clinical vision testing. Laboratory and clinical measurements of various functions and in particular of (letter) contrast sensitivity require accurately calibrated display contrast. In the laboratory this is achieved using expensive light meters. We developed and evaluated a novel method that uses only psychophysical responses of a person with normal vision to calibrate the luminance contrast of displays for experimental and clinical applications. Our method combines psychophysical techniques (1) for detection (and thus elimination or reduction) of display saturating nonlinearities; (2) for luminance (gamma function) estimation and linearization without use of a photometer; and (3) to measure without a photometer the luminance ratios of the display’s three color channels that are used in a bit-stealing procedure to expand the luminance resolution of the display. Using a photometer we verified that the calibration achieved with this procedure is accurate for both LCD and CRT displays enabling testing of letter contrast sensitivity to 0.5%. Our visual calibration procedure enables clinical, internet and home implementation and calibration verification of electronic contrast testing. PMID:23643843

  13. Calibration Under Uncertainty.

    SciTech Connect

    Swiler, Laura Painton; Trucano, Timothy Guy

    2005-03-01

    This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.

  14. Polarimetric Palsar Calibration

    NASA Astrophysics Data System (ADS)

    Touzi, R.; Shimada, M.

    2008-11-01

    Polarimetric PALSAR system parameters are assessed using data sets collected over various calibration sites. The data collected over the Amazonian forest permit validating the zero Faraday rotation hypothesis near the equator. The analysis of the Amazonian forest data and the response of the corner reflectors deployed during the PALSAR acquisitions lead to the conclusion that the antenna is highly isolated (better than -35 dB). These results are confirmed using data collected over the Sweden and Ottawa calibration sites. The 5-m-high trihedrals deployed at the Sweden calibration site by the Chalmers University of Technology permit accurate measurement of antenna parameters, and detection of 2-3 degrees of Faraday rotation during day acquisition, whereas no Faraday rotation was noted during night acquisition. Small Faraday rotation angles (2-3 degrees) have been measured using acquisitions over the DLR Oberpfaffenhofen and the Ottawa calibration sites. The presence of small but still significant Faraday rotation (2-3 degrees) induces a corner reflector return at the cross-polarizations HV and VH that should not be interpreted as the actual antenna cross-talk. The PALSAR antenna is highly isolated (better than -35 dB), and diagonal antenna distortion matrices (with zero cross-talk terms) can be used for accurate calibration of PALSAR polarimetric data.

  15. GTC Photometric Calibration

    NASA Astrophysics Data System (ADS)

    di Cesare, M. A.; Hammersley, P. L.; Rodriguez Espinosa, J. M.

    2006-06-01

    We are currently developing the calibration programme for GTC using techniques similar to those used for space telescope calibration (Hammersley et al. 1998, A&AS, 128, 207; Cohen et al. 1999, AJ, 117, 1864). We are planning to produce a catalogue of calibration stars which are suitable for a 10-m telescope. These sources will not be variable or binary, and will not have infrared excesses if they are to be used in the infrared. The GTC science instruments require photometric calibration between 0.35 and 2.5 microns. The instruments are: OSIRIS (Optical System for Imaging low Resolution Integrated Spectroscopy), ELMER and EMIR (Espectrógrafo Multiobjeto Infrarrojo) and the Acquisition and Guiding boxes (Di Césare, Hammersley, & Rodriguez Espinosa 2005, RevMexAA Ser. Conf., 24, 231). The catalogue will consist of 30 star fields distributed across the Northern Hemisphere. We will use fields containing sources over the range 12 to 22 magnitude, spanning a wide range of spectral types (A to M), for the visible and near infrared. In the poster we will show the method used for selecting these fields and we will present the analysis of the data on the first calibration fields observed.

  16. Calculations for Calibration of a Mass Spectrometer

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon

    2008-01-01

    A computer program performs calculations to calibrate a quadrupole mass spectrometer in an instrumentation system for identifying trace amounts of organic chemicals in air. In the operation of the mass spectrometer, the mass-to-charge ratio (m/z) of ions being counted at a given instant of time is a function of the instantaneous value of a repeating ramp voltage waveform applied to electrodes. The count rate as a function of time can be converted to an m/z spectrum (equivalent to a mass spectrum for singly charged ions), provided that a calibration of m/z is available. The present computer program can perform the calibration in either or both of two ways: (1) Following a data-based approach, it can utilize the count-rate peaks and the times thereof measured when fed with air containing known organic compounds. (2) It can utilize a theoretical proportionality between the instantaneous m/z and the instantaneous value of an oscillating applied voltage. The program can also estimate the error of the calibration performed by the data-based approach. If calibrations are performed in both ways, then the results can be compared to obtain further estimates of errors.
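
    A hedged sketch of the data-based calibration route: fit m/z against the ramp-voltage positions of count-rate peaks from known compounds, assuming a simple linear relation; the peak positions below are hypothetical, not instrument values:

        # Data-based m/z calibration from known peaks (hypothetical peak positions).
        import numpy as np

        ramp_voltage_at_peak = np.array([1.29, 2.00, 2.86, 3.14])   # volts at which known peaks appear
        known_mz = np.array([18.0, 28.0, 40.0, 44.0])               # e.g. H2O, N2, Ar, CO2 (singly charged)

        slope, intercept = np.polyfit(ramp_voltage_at_peak, known_mz, deg=1)
        print(slope * 2.50 + intercept)    # m/z assigned to an unidentified peak at 2.50 V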

  17. A preliminary investigation of the dynamic force-calibration of a magnetic suspension and balance system

    NASA Technical Reports Server (NTRS)

    Goodyer, M. J.

    1985-01-01

    The aerodynamic forces and moments acting upon a magnetically suspended wind tunnel model are derived from calibrations of suspension electromagnet currents against known forces. As an alternative to the conventional calibration method of applying steady forces to the model, early experiences with dynamic calibration are outlined, that is, a calibration obtained by oscillating a model in suspension and deriving a force/current relationship from its inertia force and the unsteady components of the currents. Advantages of dynamic calibration are speed and simplicity. The two methods of calibration applied to one force component show good agreement.

  18. A Review of Sensor Calibration Monitoring for Calibration Interval Extension in Nuclear Power Plants

    SciTech Connect

    Coble, Jamie B.; Meyer, Ryan M.; Ramuhalli, Pradeep; Bond, Leonard J.; Hashemian, Hash; Shumaker, Brent; Cummins, Dara

    2012-08-31

    Currently in the United States, periodic sensor recalibration is required for all safety-related sensors, typically occurring at every refueling outage, and it has emerged as a critical path item for shortening outage duration in some plants. Online monitoring can be employed to identify those sensors that require calibration, allowing for calibration of only those sensors that need it. International application of calibration monitoring, such as at the Sizewell B plant in the United Kingdom, has shown that sensors may operate for eight years, or longer, within calibration tolerances. This issue is expected to also be important as the United States looks to the next generation of reactor designs (such as small modular reactors and advanced concepts), given the anticipated longer refueling cycles, proposed advanced sensors, and digital instrumentation and control systems. The U.S. Nuclear Regulatory Commission (NRC) accepted the general concept of online monitoring for sensor calibration monitoring in 2000, but no U.S. plants have been granted the necessary license amendment to apply it. This report presents a state-of-the-art assessment of online calibration monitoring in the nuclear power industry, including sensors, calibration practice, and online monitoring algorithms. This assessment identifies key research needs and gaps that prohibit integration of the NRC-approved online calibration monitoring system in the U.S. nuclear industry. Several needs are identified, including the quantification of uncertainty in online calibration assessment; accurate determination of calibration acceptance criteria and quantification of the effect of acceptance criteria variability on system performance; and assessment of the feasibility of using virtual sensor estimates to replace identified faulty sensors in order to extend operation to the next convenient maintenance opportunity. Understanding the degradation of sensors and the impact of this degradation on signals is key to supporting the extension of sensor calibration intervals.

  19. Designing a calibration set in spectral space for efficient development of an NIR method for tablet analysis.

    PubMed

    Alam, Md Anik; Drennen, James; Anderson, Carl

    2017-10-25

    Designing a calibration set is the first step in developing a multivariate spectroscopic calibration method for quantitative analysis of pharmaceutical tablets. This step is critical because successful model development depends on the suitability of the calibration data. For spectroscopic-based methods, traditional concentration-based techniques for designing calibration sets are prone to have redundant information while simultaneously lacking necessary information for a successful calibration model. A method for designing a calibration set in spectral space was developed. The pure component spectra of a tablet formulation were used to define the spectral space of that formulation. This method maximizes the information content of measurements and minimizes sample requirements to provide an efficient means for developing multivariate spectroscopic calibration. A comparative study was conducted between a commonly employed full factorial approach to calibration development and the newly developed technique. The comparison was based on a system to quantify a model drug, acetaminophen, in pharmaceutical compacts using near infrared spectroscopy. A 2-factor full factorial design (acetaminophen with 5 levels and MCC:Lactose with 3 levels) was used for calibration development. Three replicates at each design point resulted in a total of 45 tablets for the calibration set. Using the newly developed spectral-based method, 11 tablets were prepared for the calibration set. Partial least squares (PLS) models were developed from the respective calibration sets. Model performance was comprehensively assessed based on the ability to predict acetaminophen concentrations in multiple prediction sets. One prediction set contained information similar to the calibration set, while the other prediction sets contained information different from the calibration set, in order to assess model accuracy and robustness. Similar prediction performance was achieved using the 11-tablet spectral-space design as with the 45-tablet full factorial design.
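
    A minimal sketch of the PLS calibration step is given below, using scikit-learn's PLSRegression on synthetic NIR-like spectra standing in for the 11 calibration tablets; the spectra, concentrations, and number of latent variables are assumptions for illustration only.

```python
# Sketch of the PLS calibration step described above, using scikit-learn.
# The spectra and acetaminophen concentrations here are synthetic stand-ins;
# in practice the 11 calibration tablets' NIR spectra would be used.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
n_cal, n_wavelengths = 11, 200

# Synthetic "pure-component-like" spectra mixed according to concentration.
pure_api = np.exp(-0.5 * ((np.arange(n_wavelengths) - 80) / 15.0) ** 2)
pure_excipient = np.exp(-0.5 * ((np.arange(n_wavelengths) - 140) / 25.0) ** 2)
y_cal = rng.uniform(0.05, 0.35, n_cal)                      # API mass fraction
X_cal = (np.outer(y_cal, pure_api) + np.outer(1 - y_cal, pure_excipient)
         + 0.002 * rng.standard_normal((n_cal, n_wavelengths)))

pls = PLSRegression(n_components=2)
pls.fit(X_cal, y_cal)

# Independent prediction set to assess accuracy/robustness.
y_pred_true = rng.uniform(0.05, 0.35, 20)
X_pred = (np.outer(y_pred_true, pure_api) + np.outer(1 - y_pred_true, pure_excipient)
          + 0.002 * rng.standard_normal((20, n_wavelengths)))
rmsep = mean_squared_error(y_pred_true, pls.predict(X_pred).ravel()) ** 0.5
print(f"RMSEP on synthetic prediction set: {rmsep:.4f}")
```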

  20. Calibration Of Airborne Visible/IR Imaging Spectrometer

    NASA Technical Reports Server (NTRS)

    Vane, G. A.; Chrien, T. G.; Miller, E. A.; Reimer, J. H.

    1990-01-01

    Paper describes laboratory spectral and radiometric calibration of Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) applied to all AVIRIS science data collected in 1987. Describes instrumentation and procedures used and demonstrates that calibration accuracy achieved exceeds design requirements. Developed for use in remote-sensing studies in such disciplines as botany, geology, hydrology, and oceanography.

  2. Calibration and Temperature Profile of a Tungsten Filament Lamp

    ERIC Educational Resources Information Center

    de Izarra, Charles; Gitton, Jean-Michel

    2010-01-01

    The goal of this work, proposed for undergraduate students and teachers, is the calibration of a tungsten filament lamp from electric measurements that are both simple and precise, allowing one to determine the temperature of the tungsten filament as a function of the current intensity. This calibration procedure was first applied to a conventional filament…

  4. Multivariate Longitudinal Analysis with Bivariate Correlation Test.

    PubMed

    Adjakossa, Eric Houngla; Sadissou, Ibrahim; Hounkonnou, Mahouton Norbert; Nuel, Gregory

    2016-01-01

    In the context of multivariate multilevel data analysis, this paper focuses on the multivariate linear mixed-effects model, including all the correlations between the random effects when the residual terms of the different dimensions are assumed uncorrelated. Using the EM algorithm, we suggest more general expressions for the estimators of the model's parameters. These estimators can be used in the framework of multivariate longitudinal data analysis as well as in the more general context of the analysis of multivariate multilevel data. Using a likelihood ratio test, we test the significance of the correlations between the random effects of two dependent variables of the model, in order to investigate whether or not it is useful to model these dependent variables jointly. Simulation studies are done to assess both the parameter recovery performance of the EM estimators and the power of the test. Using two empirical data sets, which are of longitudinal multivariate type and multivariate multilevel type, respectively, the usefulness of the test is illustrated.

  5. Calibrating the genome.

    PubMed

    Markward, Nathan J; Fisher, William P

    2004-01-01

    This project demonstrates how to calibrate different samples and scales of genomic information to a common scale of genomic measurement. 1,113 persons were genotyped at the 13 Combined DNA Index System (CODIS) short tandem repeat (STR) marker loci used by the Federal Bureau of Investigation (FBI) for human identity testing. A measurement model of the form ln[P(nik)/(1 - P(nik))] = B(n) - D(i) - L(k) is used to construct person measures and locus calibrations from information contained in the CODIS database. Winsteps (Wright and Linacre, 2003) is employed to maximize initial estimates and to investigate the necessity and sufficiency of different rating classification schema. Model fit is satisfactory in all analyses. Study outcomes are found in Tables 1-6. Additive, divisible, and interchangeable measures and calibrations can be created from raw genomic information that transcend sample- and scale-dependencies associated with racial and ethnic descent, chromosomal location, and locus-specific allele expansion structures.
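
    For readers unfamiliar with the notation, the snippet below simply evaluates the stated logit model to show how a person measure B, a locus calibration D, and a rating threshold L combine into an expected probability; the parameter values are hypothetical, and the actual estimation in the study is done with Winsteps.

```python
# Direct transcription of the stated measurement model,
# ln[P(nik) / (1 - P(nik))] = B(n) - D(i) - L(k),
# showing how a person measure B, locus calibration D, and rating threshold L
# combine into an expected probability. Parameter values are illustrative;
# the study itself estimates them with Winsteps.
import math

def rasch_probability(B: float, D: float, L: float) -> float:
    """Probability implied by the logit model ln[P/(1-P)] = B - D - L."""
    logit = B - D - L
    return 1.0 / (1.0 + math.exp(-logit))

# Example: a person measure of 1.2 logits, a locus calibration of 0.4,
# and a rating-class threshold of 0.3 (all hypothetical).
print(round(rasch_probability(B=1.2, D=0.4, L=0.3), 3))
```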

  6. Calibration Systems Final Report

    SciTech Connect

    Myers, Tanya L.; Broocks, Bryan T.; Phillips, Mark C.

    2006-02-01

    The Calibration Systems project at Pacific Northwest National Laboratory (PNNL) is aimed at developing and demonstrating compact Quantum Cascade (QC) laser-based calibration systems for infrared imaging systems. These on-board systems will improve the calibration technology for passive sensors, which enable stand-off detection of the proliferation or use of weapons of mass destruction, by replacing on-board blackbodies with QC laser-based systems. This alternative technology can minimize the impact on instrument size and weight while improving the quality of instruments for a variety of missions. Replacing flight blackbodies is made feasible by the high output, stability, and repeatability of the QC laser spectral radiance.

  7. BATSE spectroscopy detector calibration

    NASA Technical Reports Server (NTRS)

    Band, D.; Ford, L.; Matteson, J.; Lestrade, J. P.; Teegarden, B.; Schaefer, B.; Cline, T.; Briggs, M.; Paciesas, W.; Pendleton, G.

    1992-01-01

    We describe the channel-to-energy calibration of the Spectroscopy Detectors of the Burst and Transient Source Experiment (BATSE) on the Compton Gamma Ray Observatory (GRO). These detectors consist of NaI(Tl) crystals viewed by photomultiplier tubes whose output in turn is measured by a pulse height analyzer. The calibration of these detectors has been complicated by frequent gain changes and by nonlinearities specific to the BATSE detectors. Nonlinearities in the light output from the NaI crystal and in the pulse height analyzer are shifted relative to each other by changes in the gain of the photomultiplier tube. We present the analytical model which is the basis of our calibration methodology, and outline how the empirical coefficients in this approach were determined. We also describe the complications peculiar to the Spectroscopy Detectors, and how our understanding of the detectors' operation led us to a solution to these problems.

  8. TA489A calibrator: SANDUS

    SciTech Connect

    LeBlanc, R.

    1987-08-01

    The TA489A Calibrator, designed to operate in the MA164 Digital Data Acquisition System, is used to calibrate up to 128 analog-to-digital recording channels. The TA489A calibrates using a dc Voltage Source or any of several special calibration modes. Calibration schemes are stored in the TA489A memory and are initiated locally or remotely through a Command Link.

  9. Iterative Magnetometer Calibration

    NASA Technical Reports Server (NTRS)

    Sedlak, Joseph

    2006-01-01

    This paper presents an iterative method for three-axis magnetometer (TAM) calibration that makes use of three existing utilities recently incorporated into the attitude ground support system used at NASA's Goddard Space Flight Center. The method combines attitude-independent and attitude-dependent calibration algorithms with a new spinning spacecraft Kalman filter to solve for biases, scale factors, nonorthogonal corrections to the alignment, and the orthogonal sensor alignment. The method is particularly well-suited to spin-stabilized spacecraft, but may also be useful for three-axis stabilized missions given sufficient data to provide observability.
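
    A hedged sketch of the attitude-independent piece of such a calibration is shown below: biases and scale factors are estimated so that the magnitude of the corrected magnetometer vector matches the reference field magnitude. The alignment terms and the spinning-spacecraft Kalman filter used in the paper are omitted, and all data are synthetic.

```python
# Hedged sketch of the attitude-independent part of TAM calibration:
# biases and scale factors are chosen so that the magnitude of the corrected
# magnetometer vector matches the reference field magnitude from a field model.
# (The full method in the paper also estimates alignment and uses a Kalman
# filter; that part is omitted here.) Data below are synthetic.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
n = 500
B_true = rng.normal(0.0, 30000.0, (n, 3))            # true field samples, nT
true_bias = np.array([250.0, -120.0, 80.0])
true_scale = np.array([1.02, 0.98, 1.01])
B_meas = B_true / true_scale + true_bias + rng.normal(0, 5.0, (n, 3))
B_ref_mag = np.linalg.norm(B_true, axis=1)            # from a geomagnetic field model

def residuals(p):
    bias, scale = p[:3], p[3:]
    corrected = scale * (B_meas - bias)
    return np.linalg.norm(corrected, axis=1) - B_ref_mag

p0 = np.concatenate([np.zeros(3), np.ones(3)])
sol = least_squares(residuals, p0)
print("estimated bias:", sol.x[:3])
print("estimated scale factors:", sol.x[3:])
```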

  10. Minerva Detector Calibration

    NASA Astrophysics Data System (ADS)

    Rakotondravohitra, Laza

    2013-04-01

    Current and future neutrino oscillation experiments depend on precise knowledge of neutrino-nucleus cross-sections. Minerva is a neutrino scattering experiment at Fermilab. Minerva was designed to make precision measurements of low energy neutrino and antineutrino cross sections on a variety of different materials (plastic scintillator, C, Fe, Pb, He and H2O). In order to make these measurements, it is crucial that the detector is carefully calibrated. This talk will describe how MINERvA uses muons from upstream neutrino interactions as a calibration source to convert electronics output to absolute energy deposition.

  11. Calibrated entanglement entropy

    NASA Astrophysics Data System (ADS)

    Bakhmatov, I.; Deger, N. S.; Gutowski, J.; Colgáin, E. Ó.; Yavartanoo, H.

    2017-07-01

    The Ryu-Takayanagi prescription reduces the problem of calculating entanglement entropy in CFTs to the determination of minimal surfaces in a dual anti-de Sitter geometry. For 3D gravity theories and BTZ black holes, we identify the minimal surfaces as special Lagrangian cycles calibrated by the real part of the holomorphic one-form of a spacelike hypersurface. We show that (generalised) calibrations provide a unified way to determine holographic entanglement entropy from minimal surfaces, which is applicable to warped AdS3 geometries. We briefly discuss generalisations to higher dimensions.

  12. Multivariate pluvial flood damage models

    SciTech Connect

    Van Ootegem, Luc; Verhofstadt, Elsy; Van Herck, Kristine; Creten, Tom

    2015-09-15

    Depth–damage functions, relating the monetary flood damage to the depth of the inundation, are commonly used in the case of fluvial floods (floods caused by a river overflowing). We construct four multivariate damage models for pluvial floods (caused by extreme rainfall) by differentiating on the one hand between ground floor floods and basement floods and on the other hand between damage to residential buildings and damage to housing contents. We not only take into account the effect of flood depth on damage, but also incorporate the effects of non-hazard indicators (building characteristics, behavioural indicators and socio-economic variables). By using a Tobit estimation technique on identified victims of pluvial floods in Flanders (Belgium), we take into account the effect of cases of reported zero damage. Our results show that flood depth is an important predictor of damage, but with a diverging impact between ground floor floods and basement floods. Non-hazard indicators are also important. For example, being aware of the risk just before the water enters the building reduces content damage considerably, underlining the importance of warning systems and policy in this case of pluvial floods.
    Highlights:
    • Prediction of damage of pluvial floods using also non-hazard information.
    • We include ‘no damage cases’ using a Tobit model.
    • The effect of flood depth on damage is stronger for ground floor than for basement floods.
    • Non-hazard indicators are especially important for content damage.
    • Potential gain of policies that increase awareness of flood risks.
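
    The sketch below shows what a Tobit damage model of this general kind looks like in code: a latent damage value linear in flood depth and a non-hazard indicator, censored at zero, fitted by maximum likelihood. The covariates, coefficients, and data are synthetic placeholders, not the Flemish survey data.

```python
# Minimal sketch of a Tobit (censored-at-zero) damage model of the kind
# described above: latent damage is linear in flood depth and non-hazard
# indicators, and reported damage is zero whenever the latent value is
# non-positive. Data and variable names are synthetic placeholders.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(3)
n = 400
depth = rng.uniform(0.0, 1.0, n)                 # flood depth (m)
aware = rng.integers(0, 2, n)                    # risk awareness indicator
X = np.column_stack([np.ones(n), depth, aware])

beta_true = np.array([-0.5, 3.0, -1.0])
sigma_true = 1.0
latent = X @ beta_true + rng.normal(0.0, sigma_true, n)
y = np.maximum(latent, 0.0)                      # observed damage, censored at 0

def neg_loglik(params):
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    xb = X @ beta
    censored = y <= 0.0
    ll = np.where(
        censored,
        stats.norm.logcdf(-xb / sigma),                       # P(latent <= 0)
        stats.norm.logpdf((y - xb) / sigma) - np.log(sigma),  # density of observed damage
    )
    return -ll.sum()

res = optimize.minimize(neg_loglik, x0=np.zeros(4), method="BFGS")
print("beta estimates:", res.x[:-1], " sigma:", np.exp(res.x[-1]))
```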

  13. Internal and External Validation of a multivariable Model to Define Hospital-Acquired Pneumonia After Esophagectomy.

    PubMed

    Weijs, Teus J; Seesing, Maarten F J; van Rossum, Peter S N; Koëter, Marijn; van der Sluis, Pieter C; Luyer, Misha D P; Ruurda, Jelle P; Nieuwenhuijzen, Grard A P; van Hillegersberg, Richard

    2016-04-01

    Pneumonia is an important complication following esophagectomy; however, a wide range of pneumonia incidence is reported. The lack of one generally accepted definition prevents valid inter-study comparisons. We aimed to simplify and validate an existing scoring model to define pneumonia following esophagectomy. The Utrecht Pneumonia Score, comprising pulmonary radiography findings, leucocyte count, and temperature, was simplified and internally validated using bootstrapping in the dataset (n = 185) in which it was developed. Subsequently, the intercept and (shrunk) coefficients of the developed multivariable logistic regression model were applied to an external dataset (n = 201). In the revised Uniform Pneumonia Score, points are assigned based on the temperature, the leucocyte count, and the findings of pulmonary radiography. The model discrimination was excellent in the internal validation set and in the external validation set (C-statistics 0.93 and 0.91, respectively); furthermore, the model calibrated well in both cohorts. The revised Uniform Pneumonia Score (rUPS) can serve as a means to define post-esophagectomy pneumonia. Utilization of a uniform definition for pneumonia will improve inter-study comparability and improve the evaluation of new therapeutic strategies to reduce the pneumonia incidence.

  14. Multivariate curve resolution of spectrophotometric data for the determination of artificial food colors.

    PubMed

    Lachenmeier, Dirk W; Kessler, Waltraud

    2008-07-23

    In the analysis of food additives, past emphasis was put on the development of chromatographic techniques to separate target components from a complex matrix. Especially in the case of artificial food colors, direct spectrophotometric measurement was seen to lack specificity due to a high spectral overlap between different components. Multivariate curve resolution (MCR) may be used to overcome this limitation. MCR is able to (i) extract from a complex spectral feature the number of involved components, (ii) attribute the resulting spectra to chemical compounds, and (iii) quantify the individual spectral contributions with or without a priori knowledge. We have evaluated MCR for the routine analysis of yellow and blue food colors in absinthe spirits. Using calibration standards, we were able to show that MCR performs equally well compared with partial least-squares regression, but with much improved chemical information contained in the predicted spectra. MCR was then applied to an authentic collective of different absinthes. As confirmed by reference analytics, the food colors were correctly assigned with a sensitivity of 0.93 and a specificity of 0.85. Besides the artificial colors, the algorithm detected a further component in some samples that could be assigned to natural coloring from chlorophyll.

  15. In-Space Calibration of a Gyro Quadruplet

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.; Harman, Richard R.

    2001-01-01

    This work presents a new approach to gyro calibration where, in addition to being used for computing attitude that is needed in the calibration process, the gyro outputs are also used as measurements in a Kalman filter. This work also presents an algorithm for calibrating a quadruplet rather than the customary triad gyro set. In particular, a new misalignment error model is derived for this case. The new calibration algorithm is applied to the EOS-AQUA satellite gyros. The effectiveness of the new algorithm is demonstrated through simulations.

  16. Syringe calibration factors for the NPL Secondary Standard Radionuclide Calibrator for selected medical radionuclides.

    PubMed

    Tyler, D K; Woods, M J

    2003-01-01

    Before a radiopharmaceutical is administered to a patient, its activity needs to be accurately assayed. This is normally done via a radionuclide calibrator, using a glass vial as the calibration device. The radionuclide is then transferred to a syringe and it is now becoming common practice to re-measure the syringe and use this value as the activity administered to the patient. Due to elemental composition and geometrical differences, etc. between the glass vial and the syringe, the calibration factors are different for the two containers and this can lead to an incorrect activity being given to the patient unless a correction is applied for these differences. To reduce the uncertainty on syringe measurements, syringe calibration factors and volume correction factors for the NPL Secondary Standard Radionuclide Calibrator have been derived by NPL for several medically important radionuclides. It was found that the differences between the calibration factors for the syringes and glass vials depend on the energies of the photon emissions from the decay of the radionuclides; the lower the energy, the greater the difference. As expected, large differences were observed for 125I (70%) and only small differences for 131I. However, for radionuclides such as 99mTc and 67Ga, differences of up to 30% have been observed. This work has shown the need for the use of specifically derived syringe calibration factors as well as highlighting the complexity of the problem with regard to syringe types, procurement, etc.
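
    The toy function below only illustrates the arithmetic of applying a container-specific factor; the convention for how a calibration factor is defined (raw response per unit activity, with the display assuming the vial factor) is an assumption made for the example, and the numbers are not NPL values.

```python
# Toy illustration of why a syringe-specific calibration factor matters.
# Convention assumed here (purely for illustration): a container factor is the
# calibrator's raw response per unit activity for that geometry, and the
# display divides the response by the vial factor. Values are hypothetical,
# not NPL-derived factors.
def true_activity_in_syringe(displayed_MBq: float,
                             vial_factor: float,
                             syringe_factor: float) -> float:
    """Correct a vial-calibrated display reading for a syringe geometry."""
    raw_response = displayed_MBq * vial_factor          # undo the vial factor
    return raw_response / syringe_factor                # apply the syringe factor

# A low-energy emitter where the two factors differ substantially (illustrative
# numbers only): a 30% factor difference changes the administered-activity
# estimate by the same 30%.
print(true_activity_in_syringe(displayed_MBq=500.0,
                               vial_factor=1.00,
                               syringe_factor=0.77))
```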

  17. Application of Optimal Designs to Item Calibration

    PubMed Central

    Lu, Hung-Yi

    2014-01-01

    In computerized adaptive testing (CAT), examinees are presented with various sets of items chosen from a precalibrated item pool. Consequently, the attrition speed of the items is extremely fast, and replenishing the item pool is essential. Therefore, item calibration has become a crucial concern in maintaining item banks. In this study, a two-parameter logistic model is used. We applied optimal designs and adaptive sequential analysis to solve this item calibration problem. The results indicated that the proposed optimal designs are cost effective and time efficient. PMID:25188318
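
    The sketch below illustrates the optimal-design idea for a two-parameter logistic item: examinee abilities are selected sequentially so that each choice maximizes the determinant of the accumulated Fisher information for the item parameters. The provisional item parameters and the candidate ability grid are assumptions for the example.

```python
# Sketch of the optimal-design idea for 2PL item calibration: examinees are
# chosen sequentially so that each new ability value maximizes the determinant
# of the accumulated Fisher information for the item parameters (a, b).
# Abilities and item parameters below are illustrative.
import numpy as np

def fisher_info_2pl(theta: float, a: float, b: float) -> np.ndarray:
    """Fisher information matrix for (a, b) of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    w = p * (1.0 - p)
    d = theta - b
    return w * np.array([[d * d, -a * d],
                         [-a * d, a * a]])

a_prov, b_prov = 1.2, 0.3                 # provisional item parameter estimates
candidates = np.linspace(-3.0, 3.0, 61)   # available examinee abilities

info = 1e-6 * np.eye(2)            # small ridge so the first determinant is defined
selected = []
for _ in range(10):                # pick 10 examinees for this item
    gains = [np.linalg.det(info + fisher_info_2pl(t, a_prov, b_prov)) for t in candidates]
    best = candidates[int(np.argmax(gains))]
    selected.append(best)
    info = info + fisher_info_2pl(best, a_prov, b_prov)

print("selected abilities:", np.round(selected, 2))
```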

  18. A nonlinearized multivariate dominant factor-based partial least squares (PLS) model for coal analysis by using laser-induced breakdown spectroscopy.

    PubMed

    Feng, Jie; Wang, Zhe; Li, Lizhi; Li, Zheng; Ni, Weidou

    2013-03-01

    A nonlinearized multivariate dominant factor-based partial least-squares (PLS) model was applied to coal elemental concentration measurement. For C concentration determination in bituminous coal, the intensities of multiple characteristic lines of the main elements in coal were applied to construct a comprehensive dominant factor that would provide main concentration results. A secondary PLS thereafter applied would further correct the model results by using the entire spectral information. In the dominant factor extraction, nonlinear transformation of line intensities (based on physical mechanisms) was embedded in the linear PLS to describe nonlinear self-absorption and inter-element interference more effectively and accurately. According to the empirical expression of self-absorption and Taylor expansion, nonlinear transformations of atomic and ionic line intensities of C were utilized to model self-absorption. Then, the line intensities of other elements, O and N, were taken into account for inter-element interference, considering the possible recombination of C with O and N particles. The specialty of coal analysis by using laser-induced breakdown spectroscopy (LIBS) was also discussed and considered in the multivariate dominant factor construction. The proposed model achieved a much better prediction performance than conventional PLS. Compared with our previous, already improved dominant factor-based PLS model, the present PLS model obtained the same calibration quality while decreasing the root mean square error of prediction (RMSEP) from 4.47 to 3.77%. Furthermore, with the leave-one-out cross-validation and L-curve methods, which avoid the overfitting issue in determining the number of principal components instead of minimum RMSEP criteria, the present PLS model also showed better performance for different splits of calibration and prediction samples, proving the robustness of the present PLS model.

  19. Multivariate Adaptive Regression Splines (Preprint)

    DTIC Science & Technology

    1990-08-01

    situations, but as with the previous examples, the variance of the ratio (GCV/PSE) dominates this small bias. . 4.5. Portuguese Olive Oil . For this...example MARS is applied to data from analytical chemistry. The observations consist of 417 samples of olive oil from Portugal (Forina, et al., 1983). On...extent to which olive oil from northeastern Portugal (Douro Valley - 90 samples) differed from that of the rest of Portugal (327 samples). One way to

  20. ODERACS preflight optical calibration

    NASA Astrophysics Data System (ADS)

    Madler, Ronald A.; Culp, Robert D.; Maclay, Timothy D.

    1993-09-01

    Detection and measurement of small space debris objects are vital to verify the validity of debris models for the low Earth orbit (LEO) environment. Calibration of optical instruments is necessary so that reliable estimates of the size and albedo of man-made orbiting objects can be found. The Orbital Debris Radar Calibration Spheres (ODERACS) project is being conducted by NASA and the DoD to calibrate both radar and optical tracking facilities for small objects. This paper discusses the pre-flight optical calibration of the spheres. The purpose of this study is to determine the spectral reflectivity, scattering characteristics and albedo for the visible wavelength region. The measurements are performed by illuminating the flight spheres with a collimated beam of light, and measuring the reflected visible light over possible phase angles. This allows one to estimate the specular and scattering characteristics as well as the albedo. Tests were conducted on several flight and test metal spheres with varying diameters and surface characteristics. The polished metal spheres are shown to be very good specular reflectors, while the diffuse surfaces exhibit both specular and scattering reflection characteristics.

  1. Improved Regression Calibration

    ERIC Educational Resources Information Center

    Skrondal, Anders; Kuha, Jouni

    2012-01-01

    The likelihood for generalized linear models with covariate measurement error cannot in general be expressed in closed form, which makes maximum likelihood estimation taxing. A popular alternative is regression calibration which is computationally efficient at the cost of inconsistent estimation. We propose an improved regression calibration…

  2. Optical detector calibrator system

    NASA Technical Reports Server (NTRS)

    Strobel, James P. (Inventor); Moerk, John S. (Inventor); Youngquist, Robert C. (Inventor)

    1996-01-01

    An optical detector calibrator system simulates a source of optical radiation to which a detector to be calibrated is responsive. A light source selected to emit radiation in a range of wavelengths corresponding to the spectral signature of the source is disposed within a housing containing a microprocessor for controlling the light source and other system elements. An adjustable iris and a multiple aperture filter wheel are provided for controlling the intensity of radiation emitted from the housing by the light source to adjust the simulated distance between the light source and the detector to be calibrated. The geared iris has an aperture whose size is adjustable by means of a first stepper motor controlled by the microprocessor. The multiple aperture filter wheel contains neutral density filters of different attenuation levels which are selectively positioned in the path of the emitted radiation by a second stepper motor that is also controlled by the microprocessor. An operator can select a number of detector tests including range, maximum and minimum sensitivity, and basic functionality. During the range test, the geared iris and filter wheel are repeatedly adjusted by the microprocessor as necessary to simulate an incrementally increasing simulated source distance. A light source calibration subsystem is incorporated in the system which insures that the intensity of the light source is maintained at a constant level over time.

  3. Thermistor mount efficiency calibration

    SciTech Connect

    Cable, J.W.

    1980-05-01

    Thermistor mount efficiency calibration is accomplished by use of the power equation concept and by complex signal-ratio measurements. A comparison of thermistor mounts at microwave frequencies is made by mixing the reference and the reflected signals to produce a frequency at which the amplitude and phase difference may be readily measured.

  4. Commodity-Free Calibration

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Commodity-free calibration is a reaction rate calibration technique that does not require the addition of any commodities. This technique is a specific form of the reaction rate technique, where all of the necessary reactants, other than the sample being analyzed, are either inherent in the analyzing system or specifically added or provided to the system for a reason other than calibration. After introduction, the component of interest is exposed to other reactants or flow paths already present in the system. The instrument detector records one of the following to determine the rate of reaction: the increase in the response of the reaction product, a decrease in the signal of the analyte response, or a decrease in the signal from the inherent reactant. With this data, the initial concentration of the analyte is calculated. This type of system can analyze and calibrate simultaneously, reduce the risk of false positives and exposure to toxic vapors, and improve accuracy. Moreover, having an excess of the reactant already present in the system eliminates the need to add commodities, which further reduces cost, logistic problems, and potential contamination. Also, the calculations involved can be simplified by comparison to those of the reaction rate technique. We conducted tests with hypergols as an initial investigation into the feasibility of the technique.

  5. Uncertainty in audiometer calibration

    NASA Astrophysics Data System (ADS)

    Aurélio Pedroso, Marcos; Gerges, Samir N. Y.; Gonçalves, Armando A., Jr.

    2004-02-01

    The objective of this work is to present a metrology study necessary for the accreditation of audiometer calibration procedures at the National Brazilian Institute of Metrology Standardization and Industrial Quality—INMETRO. A model for the calculation of measurement uncertainty was developed. Metrological aspects relating to audiometer calibration, traceability and measurement uncertainty were quantified through comparison between results obtained at the Industrial Noise Laboratory—LARI of the Federal University of Santa Catarina—UFSC and the Laboratory of Electric/acoustics—LAETA of INMETRO. Similar metrological performance of the measurement system used in both laboratories was obtained, indicating that the interlaboratory results are compatible with the expected values. The uncertainty calculation was based on the documents: EA-4/02 Expression of the Uncertainty of Measurement in Calibration (European Co-operation for Accreditation 1999 EA-4/02 p 79) and Guide to the Expression of Uncertainty in Measurement (International Organization for Standardization 1993 1st edn, corrected and reprinted in 1995, Geneva, Switzerland). Some sources of uncertainty were calculated theoretically (uncertainty type B) and other sources were measured experimentally (uncertainty type A). The global value of uncertainty calculated for the sound pressure levels (SPLs) is similar to that given by other calibration institutions. The results of uncertainty related to measurements of SPL were compared with the maximum uncertainties Umax given in the standard IEC 60645-1: 2001 (International Electrotechnical Commission 2001 IEC 60645-1 Electroacoustics—Audiological Equipment—Part 1:—Pure-Tone Audiometers).
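
    The snippet below shows the basic GUM-style arithmetic behind such a budget: Type A and Type B standard uncertainties are combined in quadrature and expanded with a coverage factor of 2. The individual contributions are hypothetical and are not the values obtained in the LARI/LAETA comparison.

```python
# Minimal illustration of how Type A and Type B contributions are combined
# into an expanded uncertainty for a calibrated sound pressure level, following
# the GUM / EA-4/02 approach cited above. The individual contributions are
# hypothetical, not the values from the INMETRO/UFSC comparison.
import math

# Standard uncertainties in dB (u = s/sqrt(n) for Type A; half-width/sqrt(3)
# for rectangular Type B distributions, etc.).
type_a = [0.08]                      # repeatability of the SPL measurement
type_b = [0.12, 0.05, 0.10]          # reference calibrator, resolution, mismatch

u_combined = math.sqrt(sum(u * u for u in type_a + type_b))
U_expanded = 2.0 * u_combined        # coverage factor k = 2 (~95 % coverage)

print(f"combined standard uncertainty: {u_combined:.3f} dB")
print(f"expanded uncertainty (k=2):    {U_expanded:.3f} dB")
```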

  6. TWSTFT Link Calibration Report

    DTIC Science & Technology

    2015-09-01

    Serrano, G. Brunetti (2013) Relative Calibration of the Time Transfer Link between CERN and LNGS for Precise Neutrino Time of Flight Measurements. Proc...Esteban, M. Pallavicini, Va. Pettiti, C. Plantard, A. Razeto (2012) Measurement of CNGS Muon Neutrinos Speed with Borexino: INRIM and ROA Contribution

  7. Computerized tomography calibrator

    NASA Technical Reports Server (NTRS)

    Engel, Herbert P. (Inventor)

    1991-01-01

    A set of interchangeable pieces comprising a computerized tomography calibrator, and a method of use thereof, permits focusing of a computerized tomographic (CT) system. The interchangeable pieces include a plurality of nestable, generally planar mother rings, adapted for the receipt of planar inserts of predetermined sizes, and of predetermined material densities. The inserts further define openings therein for receipt of plural sub-inserts. All pieces are of known sizes and densities, permitting the assembling of different configurations of materials of known sizes and combinations of densities, for calibration (i.e., focusing) of a computerized tomographic system through variation of operating variables thereof. Rather than serving as a phantom, which is intended to be representative of a particular workpiece to be tested, the set of interchangeable pieces permits simple and easy standardized calibration of a CT system. The calibrator and its related method of use further include use of air or of particular fluids for filling various openings, as part of a selected configuration of the set of pieces.

  8. NVLAP calibration laboratory program

    SciTech Connect

    Cigler, J.L.

    1993-12-31

    This paper presents an overview of the progress up to April 1993 in the development of the Calibration Laboratories Accreditation Program within the framework of the National Voluntary Laboratory Accreditation Program (NVLAP) at the National Institute of Standards and Technology (NIST).

  9. Calibrating Communication Competencies

    NASA Astrophysics Data System (ADS)

    Surges Tatum, Donna

    2016-11-01

    The Many-faceted Rasch measurement model is used in the creation of a diagnostic instrument by which communication competencies can be calibrated, the severity of observers/raters can be determined, the ability of speakers measured, and comparisons made between various groups.

  10. [Study on the absolute spectral irradiation calibration method for far ultraviolet spectrometer in remote sensing].

    PubMed

    Yu, Lei; Lin, Guan-Yu; Chen, Bin

    2013-01-01

    The present paper studied a spectral irradiance responsivity calibration method that can be applied to a far-ultraviolet spectrometer for upper atmosphere remote sensing. Calibration of a far-ultraviolet spectrometer is difficult to realize for several reasons: standard instruments for far-ultraviolet waveband calibration are few; a high degree of vacuum is required in the experimental system; the stability of the experiment is hard to maintain; and the limitations of the far-ultraviolet waveband make the traditional diffuser and integrating-sphere radiance calibration methods difficult to use. To solve these problems, a new absolute spectral irradiance calibration method was studied, which can be applied to far-ultraviolet calibration. We built a corresponding dedicated vacuum experiment system to verify the calibration method. The light source system consists of a calibrated deuterium lamp, a vacuum ultraviolet monochromator and a collimating system; a calibrated detector was used to obtain its irradiance, and these three instruments together compose the calibration irradiance source. We used the calibration irradiance source to illuminate the spectrometer prototype and obtained its spectral irradiance responsivities, thereby realizing absolute spectral irradiance calibration of the far-ultraviolet spectrometer by means of the calibrated detector. The absolute uncertainty of the calibration is 7.7%. The method is significant for the ground irradiance calibration of far-ultraviolet spectrometers for upper atmosphere remote sensing.

  11. Pleiades Absolute Calibration : Inflight Calibration Sites and Methodology

    NASA Astrophysics Data System (ADS)

    Lachérade, S.; Fourest, S.; Gamet, P.; Lebègue, L.

    2012-07-01

    In-flight calibration of space sensors once in orbit is a decisive step to be able to fulfil the mission objectives. This article presents the methods of the in-flight absolute calibration processed during the commissioning phase. Four In-flight calibration methods are used: absolute calibration, cross-calibration with reference sensors such as PARASOL or MERIS, multi-temporal monitoring and inter-bands calibration. These algorithms are based on acquisitions over natural targets such as African deserts, Antarctic sites, La Crau (Automatic calibration station) and Oceans (Calibration over molecular scattering) or also new extra-terrestrial sites such as the Moon and selected stars. After an overview of the instrument and a description of the calibration sites, it is pointed out how each method is able to address one or several aspects of the calibration. We focus on how these methods complete each other in their operational use, and how they help building a coherent set of information that addresses all aspects of in-orbit calibration. Finally, we present the perspectives that the high level of agility of PLEIADES offers for the improvement of its calibration and a better characterization of the calibration sites.

  12. Simplified Vicarious Radiometric Calibration

    NASA Technical Reports Server (NTRS)

    Stanley, Thomas; Ryan, Robert; Holekamp, Kara; Pagnutti, Mary

    2010-01-01

    A measurement-based radiance estimation approach for vicarious radiometric calibration of spaceborne multispectral remote sensing systems has been developed. This simplified process eliminates the use of radiative transfer codes and reduces the number of atmospheric assumptions required to perform sensor calibrations. Like prior approaches, the simplified method involves the collection of ground truth data coincident with the overpass of the remote sensing system being calibrated, but this approach differs from the prior techniques in both the nature of the data collected and the manner in which the data are processed. In traditional vicarious radiometric calibration, ground truth data are gathered using ground-viewing spectroradiometers and one or more sun photometer( s), among other instruments, located at a ground target area. The measured data from the ground-based instruments are used in radiative transfer models to estimate the top-of-atmosphere (TOA) target radiances at the time of satellite overpass. These TOA radiances are compared with the satellite sensor readings to radiometrically calibrate the sensor. Traditional vicarious radiometric calibration methods require that an atmospheric model be defined such that the ground-based observations of solar transmission and diffuse-to-global ratios are in close agreement with the radiative transfer code estimation of these parameters. This process is labor-intensive and complex, and can be prone to errors. The errors can be compounded because of approximations in the model and inaccurate assumptions about the radiative coupling between the atmosphere and the terrain. The errors can increase the uncertainty of the TOA radiance estimates used to perform the radiometric calibration. In comparison, the simplified approach does not use atmospheric radiative transfer models and involves fewer assumptions concerning the radiative transfer properties of the atmosphere. This new technique uses two neighboring uniform

  13. Aircraft measurement of electric field - Self-calibration

    NASA Technical Reports Server (NTRS)

    Winn, W. P.

    1993-01-01

    Aircraft measurement of electric fields is difficult as the electrically conducting surface of the aircraft distorts the electric field. Calibration requires determining the relations between the undistorted electric field in the absence of the vehicle and the signals from electric field meters that sense the local distorted fields in their immediate vicinity. This paper describes a generalization of a calibration method which uses pitch and roll maneuvers. The technique determines both the calibration coefficients and the direction of the electric vector. The calibration of individual electric field meters and the elimination of the aircraft's self-charge are described. Linear combinations of field mill signals are examined and absolute calibration and error analysis are discussed. The calibration method was applied to data obtained during a flight near thunderstorms.
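
    A highly simplified sketch of the underlying linear-algebra problem is given below: each mill signal is modeled as a linear combination of the three field components plus an aircraft-charge term, the coefficient matrix is estimated by least squares, and the field is then recovered while the charge term is discarded. The real calibration relies on pitch and roll maneuvers rather than known fields, so this is an assumption-laden illustration, not the paper's procedure.

```python
# Schematic version of the calibration problem: each field-mill signal is a
# linear combination of the three ambient field components plus a term from
# the aircraft's self-charge. With enough data the coefficient matrix can be
# estimated by least squares and then inverted to recover the undistorted
# field while rejecting the charge term. Everything below is synthetic.
import numpy as np

rng = np.random.default_rng(4)
n_samples, n_mills = 300, 5

# "True" mill coefficients: response to (Ex, Ey, Ez, q).
M_true = rng.normal(0.0, 1.0, (n_mills, 4))

# Ambient field and aircraft charge term during maneuvers (synthetic).
EQ = np.column_stack([rng.normal(0, 10e3, (n_samples, 3)),      # field, V/m
                      rng.normal(0, 1.0, n_samples)])            # charge term
signals = EQ @ M_true.T + rng.normal(0, 50.0, (n_samples, n_mills))

# Calibration: least-squares estimate of the coefficient matrix.
M_est, *_ = np.linalg.lstsq(EQ, signals, rcond=None)
M_est = M_est.T                                                   # (n_mills, 4)

# Field retrieval from new signals: pseudo-inverse, then drop the charge column.
retrieval = np.linalg.pinv(M_est)
E_recovered = (retrieval @ signals.T).T[:, :3]
print("RMS field retrieval error (V/m):",
      np.sqrt(np.mean((E_recovered - EQ[:, :3]) ** 2)))
```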

  14. Multimodal spatial calibration for accurately registering EEG sensor positions.

    PubMed

    Zhang, Jianhua; Chen, Jian; Chen, Shengyong; Xiao, Gang; Li, Xiaoli

    2014-01-01

    This paper proposes a fast and accurate calibration method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head and multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-view calibration process is implemented to obtain the transformations of multiple views. We first develop an efficient local repair algorithm to improve the depth map, and then a special calibration body is designed. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method can achieve good performance, which can be further applied to EEG source localization applications on the human brain.

  15. Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions

    PubMed Central

    Chen, Shengyong; Xiao, Gang; Li, Xiaoli

    2014-01-01

    This paper proposes a fast and accurate calibration method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head and multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-view calibration process is implemented to obtain the transformations of multiple views. We first develop an efficient local repair algorithm to improve the depth map, and then a special calibration body is designed. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method can achieve good performance, which can be further applied to EEG source localization applications on the human brain. PMID:24803954

  16. Apparatus and system for multivariate spectral analysis

    DOEpatents

    Keenan, Michael R.; Kotula, Paul G.

    2003-06-24

    An apparatus and system for determining the properties of a sample from measured spectral data collected from the sample by performing a method of multivariate spectral analysis. The method can include: generating a two-dimensional matrix A containing measured spectral data; providing a weighted spectral data matrix D by performing a weighting operation on matrix A; factoring D into the product of two matrices, C and S^T, by performing a constrained alternating least-squares analysis of D = C S^T, where C is a concentration intensity matrix and S is a spectral shapes matrix; unweighting C and S by applying the inverse of the weighting used previously; and determining the properties of the sample by inspecting C and S. This method can be used by a spectrum analyzer to process X-ray spectral data generated by a spectral analysis system that can include a Scanning Electron Microscope (SEM) with an Energy Dispersive Detector and Pulse Height Analyzer.
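
    The fragment below sketches the central factorization step, D = C S^T by alternating least squares with non-negativity imposed by simple clipping; the weighting/unweighting steps and the X-ray-specific details are omitted, and the data are synthetic.

```python
# Rough sketch of the factorization step described in the patent abstract:
# the (weighted) data matrix D is factored as D = C @ S.T by alternating
# least squares with non-negativity constraints on both factors. This uses a
# simple clipped-ALS update for illustration; production MCR-ALS codes are
# more sophisticated. Data are synthetic.
import numpy as np

rng = np.random.default_rng(5)
n_pixels, n_channels, n_components = 200, 120, 3

# Synthetic concentrations C and spectral shapes S, then D = C S^T + noise.
C_true = rng.uniform(0.0, 1.0, (n_pixels, n_components))
S_true = rng.uniform(0.0, 1.0, (n_channels, n_components))
D = C_true @ S_true.T + 0.01 * rng.standard_normal((n_pixels, n_channels))

# Alternating least squares with non-negativity imposed by clipping.
C = rng.uniform(0.0, 1.0, (n_pixels, n_components))
for _ in range(200):
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0.0, None)   # update spectra
    C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0.0, None) # update concentrations

residual = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)
print(f"relative reconstruction residual: {residual:.4f}")
```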

  17. Multivariate sensitivity to voice during auditory categorization.

    PubMed

    Lee, Yune Sang; Peelle, Jonathan E; Kraemer, David; Lloyd, Samuel; Granger, Richard

    2015-09-01

    Past neuroimaging studies have documented discrete regions of human temporal cortex that are more strongly activated by conspecific voice sounds than by nonvoice sounds. However, the mechanisms underlying this voice sensitivity remain unclear. In the present functional MRI study, we took a novel approach to examining voice sensitivity, in which we applied a signal detection paradigm to the assessment of multivariate pattern classification among several living and nonliving categories of auditory stimuli. Within this framework, voice sensitivity can be interpreted as a distinct neural representation of brain activity that correctly distinguishes human vocalizations from other auditory object categories. Across a series of auditory categorization tests, we found that bilateral superior and middle temporal cortex consistently exhibited robust sensitivity to human vocal sounds. Although the strongest categorization was in distinguishing human voice from other categories, subsets of these regions were also able to distinguish reliably between nonhuman categories, suggesting a general role in auditory object categorization. Our findings complement the current evidence of cortical sensitivity to human vocal sounds by revealing that the greatest sensitivity during categorization tasks is devoted to distinguishing voice from nonvoice categories within human temporal cortex.

  18. MULTIVARIATE VARYING COEFFICIENT MODEL FOR FUNCTIONAL RESPONSES

    PubMed Central

    Zhu, Hongtu; Li, Runze; Kong, Linglong

    2012-01-01

    Motivated by recent work studying massive imaging data in the neuroimaging literature, we propose multivariate varying coefficient models (MVCM) for modeling the relation between multiple functional responses and a set of covariates. We develop several statistical inference procedures for MVCM and systematically study their theoretical properties. We first establish the weak convergence of the local linear estimate of coefficient functions, as well as its asymptotic bias and variance, and then we derive asymptotic bias and mean integrated squared error of smoothed individual functions and their uniform convergence rate. We establish the uniform convergence rate of the estimated covariance function of the individual functions and its associated eigenvalue and eigenfunctions. We propose a global test for linear hypotheses of varying coefficient functions, and derive its asymptotic distribution under the null hypothesis. We also propose a simultaneous confidence band for each individual effect curve. We conduct Monte Carlo simulation to examine the finite-sample performance of the proposed procedures. We apply MVCM to investigate the development of white matter diffusivities along the genu tract of the corpus callosum in a clinical study of neurodevelopment. PMID:23645942

  20. Multivariate semiparametric spatial methods for imaging data.

    PubMed

    Chen, Huaihou; Cao, Guanqun; Cohen, Ronald A

    2017-04-01

    Univariate semiparametric methods are often used in modeling nonlinear age trajectories for imaging data, which may result in efficiency loss and lower power for identifying important age-related effects that exist in the data. As observed in multiple neuroimaging studies, age trajectories show similar nonlinear patterns for the left and right corresponding regions and for the different parts of a big organ such as the corpus callosum. To incorporate the spatial similarity information without assuming spatial smoothness, we propose a multivariate semiparametric regression model with a spatial similarity penalty, which constrains the variation of the age trajectories among similar regions. The proposed method is applicable to both cross-sectional and longitudinal region-level imaging data. We show the asymptotic rates for the bias and covariance functions of the proposed estimator and its asymptotic normality. Our simulation studies demonstrate that by borrowing information from similar regions, the proposed spatial similarity method improves the efficiency remarkably. We apply the proposed method to two neuroimaging data examples. The results reveal that accounting for the spatial similarity leads to more accurate estimators and better functional clustering results for visualizing brain atrophy pattern.
    Keywords: Functional clustering; Longitudinal magnetic resonance imaging (MRI); Penalized B-splines; Region of interest (ROI); Spatial penalty.

  1. MULTIVARIATE VARYING COEFFICIENT MODEL FOR FUNCTIONAL RESPONSES.

    PubMed

    Zhu, Hongtu; Li, Runze; Kong, Linglong

    2012-10-01

    Motivated by recent work studying massive imaging data in the neuroimaging literature, we propose multivariate varying coefficient models (MVCM) for modeling the relation between multiple functional responses and a set of covariates. We develop several statistical inference procedures for MVCM and systematically study their theoretical properties. We first establish the weak convergence of the local linear estimate of coefficient functions, as well as its asymptotic bias and variance, and then we derive asymptotic bias and mean integrated squared error of smoothed individual functions and their uniform convergence rate. We establish the uniform convergence rate of the estimated covariance function of the individual functions and its associated eigenvalue and eigenfunctions. We propose a global test for linear hypotheses of varying coefficient functions, and derive its asymptotic distribution under the null hypothesis. We also propose a simultaneous confidence band for each individual effect curve. We conduct Monte Carlo simulation to examine the finite-sample performance of the proposed procedures. We apply MVCM to investigate the development of white matter diffusivities along the genu tract of the corpus callosum in a clinical study of neurodevelopment.

  2. Multivariate sensitivity to voice during auditory categorization

    PubMed Central

    Peelle, Jonathan E.; Kraemer, David; Lloyd, Samuel; Granger, Richard

    2015-01-01

    Past neuroimaging studies have documented discrete regions of human temporal cortex that are more strongly activated by conspecific voice sounds than by nonvoice sounds. However, the mechanisms underlying this voice sensitivity remain unclear. In the present functional MRI study, we took a novel approach to examining voice sensitivity, in which we applied a signal detection paradigm to the assessment of multivariate pattern classification among several living and nonliving categories of auditory stimuli. Within this framework, voice sensitivity can be interpreted as a distinct neural representation of brain activity that correctly distinguishes human vocalizations from other auditory object categories. Across a series of auditory categorization tests, we found that bilateral superior and middle temporal cortex consistently exhibited robust sensitivity to human vocal sounds. Although the strongest categorization was in distinguishing human voice from other categories, subsets of these regions were also able to distinguish reliably between nonhuman categories, suggesting a general role in auditory object categorization. Our findings complement the current evidence of cortical sensitivity to human vocal sounds by revealing that the greatest sensitivity during categorization tasks is devoted to distinguishing voice from nonvoice categories within human temporal cortex. PMID:26245316

  3. Calibrating ultrasonic test equipment for checking thin metal strip stock

    NASA Technical Reports Server (NTRS)

    Peterson, R. M.

    1967-01-01

    Calibration technique detects minute laminar-type discontinuities in thin metal strip stock. Patterns of plastic tape are preselected to include minutely calculated discontinuities and the tape is applied to the strip stock to intercept the incident sonic beam.

  4. 40 CFR 1065.920 - PEMS calibrations and verifications.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... apply the measurement allowance during calibration using good engineering judgment. If the measurement... results passes the 95% confidence alternate-procedure statistics for field testing (t-test and F-test...

  5. 40 CFR 1065.920 - PEMS calibrations and verifications.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... apply the measurement allowance during calibration using good engineering judgment. If the measurement... results passes the 95% confidence alternate-procedure statistics for field testing (t-test and F-test...

  6. Mercury CEM Calibration

    SciTech Connect

    John F. Schabron; Joseph F. Rovani; Susan S. Sorini

    2007-03-31

    The Clean Air Mercury Rule (CAMR) which was published in the Federal Register on May 18, 2005, requires that calibration of mercury continuous emissions monitors (CEMs) be performed with NIST-traceable standards. Western Research Institute (WRI) is working closely with the Electric Power Research Institute (EPRI), the National Institute of Standards and Technology (NIST), and the Environmental Protection Agency (EPA) to facilitate the development of the experimental criteria for a NIST traceability protocol for dynamic elemental mercury vapor generators. The traceability protocol will be written by EPA. Traceability will be based on the actual analysis of the output of each calibration unit at several concentration levels ranging from about 2-40 µg/m³, and this analysis will be directly traceable to analyses by NIST using isotope dilution inductively coupled plasma/mass spectrometry (ID ICP/MS) through a chain of analyses linking the calibration unit in the power plant to the NIST ID ICP/MS. Prior to this project, NIST did not provide a recommended mercury vapor pressure equation or list mercury vapor pressure in its vapor pressure database. The NIST Physical and Chemical Properties Division in Boulder, Colorado was subcontracted under this project to study the issue in detail and to recommend a mercury vapor pressure equation that the vendors of mercury vapor pressure calibration units can use to calculate the elemental mercury vapor concentration in an equilibrium chamber at a particular temperature. As part of this study, a preliminary evaluation of calibration units from five vendors was made. The work was performed by NIST in Gaithersburg, MD and Joe Rovani from WRI who traveled to NIST as a Visiting Scientist.

  7. Signal inference with unknown response: Calibration-uncertainty renormalized estimator

    NASA Astrophysics Data System (ADS)

    Dorn, Sebastian; Enßlin, Torsten A.; Greiner, Maksim; Selig, Marco; Boehm, Vanessa

    2015-01-01

    The calibration of a measurement device is crucial for every scientific experiment, where a signal has to be inferred from data. We present CURE, the calibration-uncertainty renormalized estimator, to reconstruct a signal and simultaneously the instrument's calibration from the same data without knowing the exact calibration, but its covariance structure. The idea of the CURE method, developed in the framework of information field theory, is to start with an assumed calibration to successively include more and more portions of calibration uncertainty into the signal inference equations and to absorb the resulting corrections into renormalized signal (and calibration) solutions. Thereby, the signal inference and calibration problem turns into a problem of solving a single system of ordinary differential equations and can be identified with common resummation techniques used in field theories. We verify the CURE method by applying it to a simplistic toy example and compare it against existent self-calibration schemes, Wiener filter solutions, and Markov chain Monte Carlo sampling. We conclude that the method is able to keep up in accuracy with the best self-calibration methods and serves as a noniterative alternative to them.

  8. Signal inference with unknown response: calibration-uncertainty renormalized estimator.

    PubMed

    Dorn, Sebastian; Enßlin, Torsten A; Greiner, Maksim; Selig, Marco; Boehm, Vanessa

    2015-01-01

    The calibration of a measurement device is crucial for every scientific experiment, where a signal has to be inferred from data. We present CURE, the calibration-uncertainty renormalized estimator, to reconstruct a signal and simultaneously the instrument's calibration from the same data without knowing the exact calibration, but its covariance structure. The idea of the CURE method, developed in the framework of information field theory, is to start with an assumed calibration to successively include more and more portions of calibration uncertainty into the signal inference equations and to absorb the resulting corrections into renormalized signal (and calibration) solutions. Thereby, the signal inference and calibration problem turns into a problem of solving a single system of ordinary differential equations and can be identified with common resummation techniques used in field theories. We verify the CURE method by applying it to a simplistic toy example and compare it against existent self-calibration schemes, Wiener filter solutions, and Markov chain Monte Carlo sampling. We conclude that the method is able to keep up in accuracy with the best self-calibration methods and serves as a noniterative alternative to them.

  9. Fast Field Calibration of MIMU Based on the Powell Algorithm

    PubMed Central

    Ma, Lin; Chen, Wanwan; Li, Bin; You, Zheng; Chen, Zhigang

    2014-01-01

    The calibration of micro inertial measurement units is important in ensuring the precision of navigation systems, which are equipped with microelectromechanical system sensors that suffer from various errors. However, traditional calibration methods cannot meet the demand for fast field calibration. This paper presents a fast field calibration method based on the Powell algorithm. The key points of this calibration are that the norm of the accelerometer measurement vector is equal to the gravity magnitude, and the norm of the gyro measurement vector is equal to the rotational velocity input. To resolve the error parameters by judging the convergence of the nonlinear equations, the Powell algorithm is applied by establishing a mathematical error model of the novel calibration. All parameters can then be obtained in this manner. A comparison of the proposed method with the traditional calibration method through navigation tests demonstrates the performance of the proposed calibration method, which also requires less time than the traditional calibration method. PMID:25177801
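
    As a rough illustration of the core idea, the following sketch fits a simplified accelerometer error model (per-axis bias and scale factor only, no cross-coupling) by minimizing, with SciPy's Powell method, the deviation of each static measurement's norm from the gravity magnitude. The synthetic data and the reduced error model are assumptions made for illustration, not the paper's full MIMU model.

      # Hedged sketch: accelerometer field calibration with the Powell algorithm.
      # Simplified error model: a_true = scale * (a_meas - bias), per axis.
      # Cost: the norm of every static, calibrated sample should equal gravity.
      import numpy as np
      from scipy.optimize import minimize

      G = 9.80665  # standard gravity, m/s^2

      def cost(params, a_meas):
          bias, scale = params[:3], params[3:6]
          a_cal = (a_meas - bias) * scale            # apply the error model per sample
          norms = np.linalg.norm(a_cal, axis=1)      # magnitude of each static sample
          return np.sum((norms - G) ** 2)            # deviation from gravity magnitude

      # Synthetic static measurements taken in several orientations (illustrative only)
      rng = np.random.default_rng(0)
      true_bias = np.array([0.05, -0.03, 0.08])
      true_scale = np.array([1.01, 0.99, 1.02])
      dirs = rng.normal(size=(20, 3))
      dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
      a_meas = (G * dirs) / true_scale + true_bias + 0.001 * rng.normal(size=(20, 3))

      x0 = np.array([0, 0, 0, 1, 1, 1], dtype=float)  # initial guess: no bias, unit scale
      res = minimize(cost, x0, args=(a_meas,), method="Powell")
      print("estimated bias :", res.x[:3])
      print("estimated scale:", res.x[3:6])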

  10. A Comparison of Two Balance Calibration Model Building Methods

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard; Ulbrich, Norbert

    2007-01-01

    Simulated strain-gage balance calibration data is used to compare the accuracy of two balance calibration model building methods for different noise environments and calibration experiment designs. The first building method obtains a math model for the analysis of balance calibration data after applying a candidate math model search algorithm to the calibration data set. The second building method uses stepwise regression analysis in order to construct a model for the analysis. Four balance calibration data sets were simulated in order to compare the accuracy of the two math model building methods. The simulated data sets were prepared using the traditional One Factor At a Time (OFAT) technique and the Modern Design of Experiments (MDOE) approach. Random and systematic errors were introduced in the simulated calibration data sets in order to study their influence on the math model building methods. Residuals of the fitted calibration responses and other statistical metrics were compared in order to evaluate the calibration models developed with different combinations of noise environment, experiment design, and model building method. Overall, predicted math models and residuals of both math model building methods show very good agreement. Significant differences in model quality were attributable to noise environment, experiment design, and their interaction. Generally, the addition of systematic error significantly degraded the quality of calibration models developed from OFAT data by either method, but MDOE experiment designs were more robust with respect to the introduction of a systematic component of the unexplained variance.
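
    As a toy illustration of the second (stepwise) building method, the sketch below runs a bare-bones forward stepwise regression on synthetic two-component balance data, at each step adding the candidate term (linear, quadratic, or interaction) that most reduces the residual sum of squares and stopping when the relative improvement becomes small. It is an illustrative stand-in under these assumptions, not the candidate-search or regression tools used in the study.

      # Hedged sketch: forward stepwise selection of a balance calibration math model.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 200
      loads = rng.uniform(-1, 1, size=(n, 2))                  # two applied load components
      response = (1.5 * loads[:, 0] - 0.7 * loads[:, 1]
                  + 0.2 * loads[:, 0] * loads[:, 1] + 0.01 * rng.normal(size=n))

      # Candidate term library: linear, quadratic, and interaction terms
      terms = {"L1": loads[:, 0], "L2": loads[:, 1],
               "L1^2": loads[:, 0] ** 2, "L2^2": loads[:, 1] ** 2,
               "L1*L2": loads[:, 0] * loads[:, 1]}

      selected, X = [], np.ones((n, 1))                        # start with the intercept only
      best_sse = np.sum((response - response.mean()) ** 2)
      while True:
          gains = {}
          for name, col in terms.items():
              if name in selected:
                  continue
              Xc = np.column_stack([X, col])
              beta, *_ = np.linalg.lstsq(Xc, response, rcond=None)
              gains[name] = best_sse - np.sum((response - Xc @ beta) ** 2)
          if not gains:
              break
          name, gain = max(gains.items(), key=lambda kv: kv[1])
          if gain / best_sse < 0.01:                           # stop when the improvement is small
              break
          selected.append(name)
          X = np.column_stack([X, terms[name]])
          best_sse -= gain
      print("selected terms:", selected)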

  11. Dilution standard addition calibration: A practical calibration strategy for multiresidue organic compounds determination.

    PubMed

    Martins, Manoel L; Rizzetti, Tiele M; Kemmerich, Magali; Saibt, Nathália; Prestes, Osmar D; Adaime, Martha B; Zanella, Renato

    2016-08-19

    Among calibration approaches for the determination of organic compounds in complex matrices, external calibration, based on solutions of the analytes in solvent or in blank matrix extracts, is the most widely applied. Although matrix-matched calibration (MMC) can compensate for matrix effects, it does not compensate for low recovery results. Standard addition (SA) and procedural standard calibration (PSC) are the usual alternatives, although they consume more samples and/or matrix blanks and require extra sample preparation, time, and cost. Thus, the goal of this work was to establish a fast and efficient calibration approach, the dilution standard addition calibration (DSAC), based on successive dilutions of a spiked blank sample. In order to evaluate the proposed approach, solvent calibration (SC), MMC, PSC and DSAC were applied to evaluate recovery results of grape blank samples spiked with 66 pesticides. Samples were extracted with the acetate QuEChERS method and the compounds determined by ultra-high performance liquid chromatography tandem mass spectrometry (UHPLC-MS/MS). Results indicated that low recovery results for some pesticides were compensated by both the PSC and DSAC approaches. Considering recoveries from 70 to 120% with RSD <20% as adequate, DSAC presented 83%, 98% and 100% of compounds meeting these criteria for the spiking levels 10, 50 and 100 µg kg(-1), respectively. PSC presented the same results (83%, 98% and 100%), better than those obtained by MMC (79%, 95% and 97%) and by SC (62%, 70% and 79%). The DSAC strategy proved to be suitable for the calibration of multiresidue determination methods, producing adequate results in terms of trueness while being easier and faster to perform than the other approaches. Copyright © 2016 Elsevier B.V. All rights reserved.
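
    The sketch below illustrates only the quantification step implied above: a linear calibration is fitted to a dilution series of one spiked blank extract, replicate spikes are quantified against it, and the mean recovery and RSD are checked against the 70-120% and RSD < 20% criteria. All concentrations and instrument responses are made-up illustrative numbers, not data from the study.

      # Hedged sketch: calibration from successive dilutions of a spiked blank extract
      # (the DSAC idea), then a recovery check against 70-120% and RSD < 20%.
      import numpy as np

      cal_conc = np.array([100.0, 50.0, 25.0, 10.0, 5.0])             # ug/kg, illustrative
      cal_signal = np.array([205000, 101000, 52000, 20500, 10300])    # detector response

      slope, intercept = np.polyfit(cal_conc, cal_signal, 1)          # linear calibration curve

      def quantify(signal):
          return (signal - intercept) / slope

      spike_level = 10.0                                              # ug/kg
      replicate_signals = np.array([20100, 21200, 19800, 20700, 20400])
      found = quantify(replicate_signals)
      recovery = 100.0 * found / spike_level
      rsd = 100.0 * found.std(ddof=1) / found.mean()

      ok = (70 <= recovery.mean() <= 120) and (rsd < 20)
      print(f"mean recovery = {recovery.mean():.1f} %, RSD = {rsd:.1f} %, acceptable: {ok}")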

  12. Inertial Sensor Error Reduction through Calibration and Sensor Fusion.

    PubMed

    Lambrecht, Stefan; Nogueira, Samuel L; Bortole, Magdo; Siqueira, Adriano A G; Terra, Marco H; Rocon, Eduardo; Pons, José L

    2016-02-17

    This paper presents a comparison between cooperative and local Kalman filters (KF) for estimating the absolute segment angle under two calibration conditions: a simplified calibration that can be replicated in most laboratories, and a complex calibration similar to that applied by commercial vendors. The cooperative filters use information either from all inertial sensors attached to the body (Matricial KF) or from the inertial sensors and the potentiometers of an exoskeleton (Markovian KF). A one-minute walking trial of a subject walking with a 6-DoF exoskeleton was used to assess the absolute segment angle of the trunk, thigh, shank, and foot. The results indicate that regardless of the segment and filter applied, the more complex calibration always results in a significantly better performance compared to the simplified calibration. The interaction between filter and calibration suggests that when the quality of the calibration is unknown the Markovian KF is recommended. Applying the complex calibration, the Matricial and Markovian KF perform similarly, with average RMSE below 1.22 degrees. Cooperative KFs perform at least as well as local KFs; we therefore recommend using cooperative KFs instead of local KFs for the control or analysis of walking.
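
    To make the filtering step concrete, the sketch below implements a minimal local one-dimensional Kalman filter that integrates a gyroscope rate in the prediction step and corrects it with an accelerometer-derived inclination angle in the update step, applied to synthetic data. It is a generic illustration only, not the Matricial or Markovian cooperative filters evaluated in the paper.

      # Hedged sketch: a minimal local 1-D Kalman filter fusing gyro rate (prediction)
      # with an accelerometer-derived angle (measurement) to estimate a segment angle.
      import numpy as np

      def local_kf(gyro_rate, accel_angle, dt, q=0.01, r=0.05):
          """gyro_rate [rad/s] and accel_angle [rad] are equal-length arrays sampled at dt."""
          theta, p = accel_angle[0], 1.0            # state (angle) and its variance
          out = np.empty_like(accel_angle)
          for k in range(len(accel_angle)):
              theta = theta + gyro_rate[k] * dt     # predict: integrate the gyro rate
              p = p + q
              kgain = p / (p + r)                   # update: blend in the accelerometer angle
              theta = theta + kgain * (accel_angle[k] - theta)
              p = (1.0 - kgain) * p
              out[k] = theta
          return out

      # Synthetic one-minute trial at 100 Hz (illustrative)
      dt = 0.01
      t = np.arange(0, 60, dt)
      true_angle = 0.3 * np.sin(2 * np.pi * 1.0 * t)                               # 1 Hz swing, rad
      rng = np.random.default_rng(2)
      gyro = np.gradient(true_angle, dt) + 0.02 * rng.normal(size=t.size) + 0.01   # drifty rate
      accel = true_angle + 0.05 * rng.normal(size=t.size)                          # noisy angle
      est = local_kf(gyro, accel, dt)
      print(f"RMSE = {np.degrees(np.sqrt(np.mean((est - true_angle) ** 2))):.2f} deg")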

  13. Inertial Sensor Error Reduction through Calibration and Sensor Fusion

    PubMed Central

    Lambrecht, Stefan; Nogueira, Samuel L.; Bortole, Magdo; Siqueira, Adriano A. G.; Terra, Marco H.; Rocon, Eduardo; Pons, José L.

    2016-01-01

    This paper presents a comparison between cooperative and local Kalman filters (KF) for estimating the absolute segment angle under two calibration conditions: a simplified calibration that can be replicated in most laboratories, and a complex calibration similar to that applied by commercial vendors. The cooperative filters use information either from all inertial sensors attached to the body (Matricial KF) or from the inertial sensors and the potentiometers of an exoskeleton (Markovian KF). A one-minute walking trial of a subject walking with a 6-DoF exoskeleton was used to assess the absolute segment angle of the trunk, thigh, shank, and foot. The results indicate that regardless of the segment and filter applied, the more complex calibration always results in a significantly better performance compared to the simplified calibration. The interaction between filter and calibration suggests that when the quality of the calibration is unknown the Markovian KF is recommended. Applying the complex calibration, the Matricial and Markovian KF perform similarly, with average RMSE below 1.22 degrees. Cooperative KFs perform at least as well as local KFs; we therefore recommend using cooperative KFs instead of local KFs for the control or analysis of walking. PMID:26901198

  14. Calibration of a detector for nonlinear responses.

    PubMed

    Asnin, Leonid; Guiochon, Georges

    2005-09-30

    A calibration curve is often needed to derive the actual concentration profile of the eluate from the record of the detector signal in many studies of the thermodynamics and kinetics of adsorption by chromatography. The calibration task is complicated in the frequent cases in which the detector response is nonlinear. The simplest approach consists of preparing a series of solutions of known concentrations, flushing them successively through the detector cell, and recording the height of the plateau response obtained. However, this method requires relatively large amounts of the pure solutes studied, which are not always available, may be costly, and could be applied to better uses. An alternative procedure consists of deriving this calibration curve from a series of peaks recorded upon the injection of increasingly large pulses of the studied compound. We validated this new method in HPLC with a UV detector. Questions concerning the reproducibility and accuracy of the method are discussed.
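
    For the plateau-based approach described above, the sketch below shows the inversion step only: a mildly nonlinear response model is fitted to plateau signals recorded at known concentrations and then inverted to convert a detector signal back into concentration. The quadratic model and all numbers are assumptions made for illustration; the peak-based procedure validated in the paper is not reproduced here.

      # Hedged sketch: fit a nonlinear detector response S(c) from plateau signals,
      # then invert it to turn recorded signals back into concentrations.
      import numpy as np

      conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])            # g/L, illustrative
      plateau = np.array([0.0, 0.23, 0.44, 0.82, 1.44, 2.30])    # detector signal (AU)

      # Quadratic response model S = a*c + b*c**2 (a common mild nonlinearity)
      A = np.column_stack([conc, conc ** 2])
      (a, b), *_ = np.linalg.lstsq(A, plateau, rcond=None)

      def signal_to_conc(s):
          # invert S = a*c + b*c**2 for c >= 0 (physical root of the quadratic)
          return (-a + np.sqrt(a * a + 4 * b * s)) / (2 * b)

      signal_trace = np.array([0.1, 0.8, 1.5, 2.0])              # part of a recorded profile
      print(signal_to_conc(signal_trace))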

  15. Field calibration of volcanic surveillance cameras

    NASA Astrophysics Data System (ADS)

    Ospina, C. A.; Narvaez, A.; Corchuelo, I. D.

    2017-06-01

    Cameras are widely used in volcanic surveillance, providing striking images of volcanic eruptions as well as views of these imposing Earth structures. The Colombian Geological Service, through the Volcanological and Seismological Observatory of Popayán (OVSPo), operates 10 surveillance cameras watching three volcanoes in the provinces of Cauca, Huila and Tolima. However, these cameras had not previously been calibrated, which has limited the analysis and exploitation of the information until now. This work takes into account that the calibration process should not change camera parameters such as orientation and position, as well as the difficulty of reaching and staying at the camera stations in a volcanic environment. A calibration methodology was developed and applied to three (3) cameras in the field, improving the analysis and exploitation of the information contained in the images from the volcanic surveillance cameras.

  16. Mardia's Multivariate Kurtosis with Missing Data

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Lambert, Paul L.; Fouladi, Rachel T.

    2004-01-01

    Mardia's measure of multivariate kurtosis has been implemented in many statistical packages commonly used by social scientists. It provides important information on whether a commonly used multivariate procedure is appropriate for inference. Many statistical packages also have options for missing data. However, there is no procedure for applying…
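
    As a reminder of what the statistic is, the sketch below computes Mardia's multivariate kurtosis on complete data, b_2p = (1/n) * sum_i [(x_i - xbar)' S^{-1} (x_i - xbar)]^2, together with its large-sample expectation p(p+2) under multivariate normality. The missing-data procedures that are the subject of the article are not addressed here.

      # Hedged sketch: Mardia's multivariate kurtosis for a complete-data matrix X (n x p).
      import numpy as np

      def mardia_kurtosis(X):
          X = np.asarray(X, dtype=float)
          n, p = X.shape
          d = X - X.mean(axis=0)
          S = d.T @ d / n                                            # ML covariance estimate
          m = np.einsum("ij,jk,ik->i", d, np.linalg.inv(S), d)       # squared Mahalanobis distances
          return np.mean(m ** 2), p * (p + 2)                        # statistic, normal-theory expectation

      rng = np.random.default_rng(3)
      X = rng.multivariate_normal(np.zeros(4), np.eye(4), size=500)
      print(mardia_kurtosis(X))                                      # close to (24, 24) for p = 4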

  17. Multivariate Density Estimation and Remote Sensing

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1983-01-01

    Current efforts to develop methods and computer algorithms to effectively represent multivariate data commonly encountered in remote sensing applications are described. While this may involve scatter diagrams, multivariate representations of nonparametric probability density estimates are emphasized. The density function provides a useful graphical tool for looking at data and a useful theoretical tool for classification. This approach is called a thunderstorm data analysis.
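
    A minimal sketch of the kind of multivariate nonparametric density estimate discussed above follows, using SciPy's Gaussian kernel density estimator (with its default Scott's-rule bandwidth) on synthetic two-band data standing in for remote sensing measurements; the data and band names are assumptions made for illustration.

      # Hedged sketch: bivariate kernel density estimate on synthetic two-band data.
      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(4)
      band1 = np.concatenate([rng.normal(0.2, 0.05, 300), rng.normal(0.5, 0.08, 300)])
      band2 = np.concatenate([rng.normal(0.3, 0.05, 300), rng.normal(0.6, 0.08, 300)])
      data = np.vstack([band1, band2])          # shape (2, n), as gaussian_kde expects

      kde = gaussian_kde(data)                  # bandwidth from Scott's rule by default
      gx, gy = np.meshgrid(np.linspace(0, 0.8, 50), np.linspace(0, 0.9, 50))
      grid = np.vstack([gx.ravel(), gy.ravel()])
      density = kde(grid).reshape(50, 50)       # density surface for display or classification
      print(density.max())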

  18. Gravitational-wave detection using multivariate analysis

    NASA Astrophysics Data System (ADS)

    Adams, Thomas S.; Meacher, Duncan; Clark, James; Sutton, Patrick J.; Jones, Gareth; Minot, Ariana

    2013-09-01

    Searches for gravitational-wave bursts (transient signals, typically of unknown waveform) require identification of weak signals in background detector noise. The sensitivity of such searches is often critically limited by non-Gaussian noise fluctuations that are difficult to distinguish from real signals, posing a key problem for transient gravitational-wave astronomy. Current noise rejection tests are based on the analysis of a relatively small number of measured properties of the candidate signal, typically correlations between detectors. Multivariate analysis (MVA) techniques probe the full space of measured properties of events in an attempt to maximize the power to accurately classify events as signal or background. This is done by taking samples of known background events and (simulated) signal events to train the MVA classifier, which can then be applied to classify events of unknown type. We apply the boosted decision tree (BDT) MVA technique to the problem of detecting gravitational-wave bursts associated with gamma-ray bursts. We find that BDTs are able to increase the sensitive distance reach of the search by as much as 50%, corresponding to a factor of ˜3 increase in sensitive volume. This improvement is robust against trigger sky position, large sky localization error, poor data quality, and the simulated signal waveforms that are used. Critically, we find that the BDT analysis is able to detect signals that have different morphologies from those used in the classifier training and that this improvement extends to false alarm probabilities beyond the 3σ significance level. These findings indicate that MVA techniques may be used for the robust detection of gravitational-wave bursts with a priori unknown waveform.
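
    The sketch below mirrors the MVA workflow in miniature: a boosted-decision-tree classifier (scikit-learn's GradientBoostingClassifier, used here as a stand-in for the BDT implementation of the search) is trained on synthetic "signal" and "background" events described by a few hypothetical per-event properties, and held-out events are then ranked by the classifier score. The features and data are assumptions made for illustration.

      # Hedged sketch: train a boosted decision tree to separate signal from background events.
      import numpy as np
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(5)
      n = 5000
      # Hypothetical per-event features: detector correlation, SNR, a data-quality measure
      background = np.column_stack([rng.normal(0.0, 1.0, n),
                                    rng.exponential(2.0, n),
                                    rng.uniform(0, 1, n)])
      signal = np.column_stack([rng.normal(1.5, 1.0, n),
                                rng.exponential(4.0, n),
                                rng.uniform(0, 1, n)])
      X = np.vstack([background, signal])
      y = np.concatenate([np.zeros(n), np.ones(n)])

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
      bdt.fit(X_tr, y_tr)
      scores = bdt.predict_proba(X_te)[:, 1]    # rank candidate events; a cut sets the false-alarm rate
      print("held-out accuracy:", bdt.score(X_te, y_te))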

  19. Variable Acceleration Force Calibration System (VACS)

    NASA Technical Reports Server (NTRS)

    Rhew, Ray D.; Parker, Peter A.; Johnson, Thomas H.; Landman, Drew

    2014-01-01

    Conventionally, force balances have been calibrated manually, using a complex system of free hanging precision weights, bell cranks, and/or other mechanical components. Conventional methods may provide sufficient accuracy in some instances, but are often quite complex and labor-intensive, requiring three to four man-weeks to complete each full calibration. To ensure accuracy, gravity-based loading is typically utilized. However, this often causes difficulty when applying loads in three simultaneous, orthogonal axes. A complex system of levers, cranks, and cables must be used, introducing increased sources of systematic error, and significantly increasing the time and labor intensity required to complete the calibration. One aspect of the VACS is a method wherein the mass utilized for calibration is held constant, and the acceleration is changed to thereby generate relatively large forces with relatively small test masses. Multiple forces can be applied to a force balance without changing the test mass, and dynamic forces can be applied by rotation or oscillating acceleration. If rotational motion is utilized, a mass is rigidly attached to a force balance, and the mass is exposed to a rotational field. A large force can be applied by utilizing a large rotational velocity. A centrifuge or rotating table can be used to create the rotational field, and fixtures can be utilized to position the force balance. The acceleration may also be linear. For example, a table that moves linearly and accelerates in a sinusoidal manner may also be utilized. The test mass does not have to move in a path that is parallel to the ground, and no re-leveling is therefore required. Balance deflection corrections may be applied passively by monitoring the orientation of the force balance with a three-axis accelerometer package. Deflections are measured during each test run, and adjustments with respect to the true applied load can be made during the post-processing stage. This paper will
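
    The rotational variant reduces to elementary mechanics: a test mass m mounted at radius r on a table spinning at angular rate omega loads the balance with F = m*omega^2*r, so modest masses and speeds generate large calibration forces. The sketch below evaluates this relation for illustrative values only.

      # Hedged sketch: centripetal calibration force F = m * omega**2 * r for a spinning test mass.
      import numpy as np

      m = 0.5                                   # test mass, kg (illustrative)
      r = 0.25                                  # mounting radius, m
      rpm = np.array([60, 120, 300, 600])       # table speeds
      omega = 2 * np.pi * rpm / 60.0            # rad/s
      force = m * omega ** 2 * r                # newtons applied to the balance
      for s, f in zip(rpm, force):
          print(f"{int(s):4d} rpm -> {f:8.1f} N")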

  20. Mercury Calibration System

    SciTech Connect

    John Schabron; Eric Kalberer; Joseph Rovani; Mark Sanderson; Ryan Boysen; William Schuster

    2009-03-11

    U.S. Environmental Protection Agency (EPA) Performance Specification 12 in the Clean Air Mercury Rule (CAMR) states that a mercury CEM must be calibrated with National Institute of Standards and Technology (NIST)-traceable standards. In early 2009, a NIST traceable standard for elemental mercury CEM calibration still does not exist. Despite the vacatur of CAMR by a Federal appeals court in early 2008, a NIST traceable standard is still needed for whatever regulation is implemented in the future. Thermo Fisher is a major vendor providing complete integrated mercury continuous emissions monitoring (CEM) systems to the industry. WRI is participating with EPA, EPRI, NIST, and Thermo Fisher towards the development of the criteria that will be used in the traceability protocols to be issued by EPA. An initial draft of an elemental mercury calibration traceability protocol was distributed for comment to the participating research groups and vendors on a limited basis in early May 2007. In August 2007, EPA issued an interim traceability protocol for elemental mercury calibrators. Various working drafts of the new interim traceability protocols were distributed in late 2008 and early 2009 to participants in the Mercury Standards Working Committee project. The protocols include sections on qualification and certification. The qualification section describes in general terms tests that must be conducted by the calibrator vendors to demonstrate that their calibration equipment meets the minimum requirements to be established by EPA for use in CAMR monitoring. Variables to be examined include linearity, ambient temperature, back pressure, ambient pressure, line voltage, and effects of shipping. None of the procedures were described in detail in the draft interim documents; however they describe what EPA would like to eventually develop. WRI is providing the data and results to EPA for use in developing revised experimental procedures and realistic acceptance criteria based on

  1. Basics of Multivariate Analysis in Neuroimaging Data

    PubMed Central

    Habeck, Christian Georg

    2010-01-01

    Multivariate analysis techniques for neuroimaging data have recently received increasing attention as they have many attractive features that cannot be easily realized by the more commonly used univariate, voxel-wise techniques [1,5,6,7,8,9]. Multivariate approaches evaluate correlation/covariance of activation across brain regions, rather than proceeding on a voxel-by-voxel basis. Thus, their results can be more easily interpreted as a signature of neural networks. Univariate approaches, on the other hand, cannot directly address interregional correlation in the brain. Multivariate approaches can also result in greater statistical power when compared with univariate techniques, which are forced to employ very stringent corrections for voxel-wise multiple comparisons. Further, multivariate techniques also lend themselves much better to prospective application of results from the analysis of one dataset to entirely new datasets. Multivariate techniques are thus well placed to provide information about mean differences and correlations with behavior, similarly to univariate approaches, with potentially greater statistical power and better reproducibility checks. In contrast to these advantages is the high barrier of entry to the use of multivariate approaches, preventing more widespread application in the community. To the neuroscientist becoming familiar with multivariate analysis techniques, an initial survey of the field might present a bewildering variety of approaches that, although algorithmically similar, are presented with different emphases, typically by people with mathematics backgrounds. We believe that multivariate analysis techniques have sufficient potential to warrant better dissemination. Researchers should be able to employ them in an informed and accessible manner. The current article is an attempt at a didactic introduction of multivariate techniques for the novice. A conceptual introduction is followed by a very simple application to a diagnostic

  2. Calibration Matters: Advances in Strapdown Airborne Gravimetry

    NASA Astrophysics Data System (ADS)

    Becker, D.

    2015-12-01

    Using a commercial navigation-grade strapdown inertial measurement unit (IMU) for airborne gravimetry can be advantageous in terms of cost, handling, and space consumption compared to the classical stable-platform spring gravimeters. Up to now, however, large sensor errors made it impossible to reach the mGal level using IMUs of this type, as they are not designed or optimized for this kind of application. Apart from proper error modeling in the filtering process, specific calibration methods tailored to the application of aerogravity may help to bridge this gap and to improve their performance. Based on simulations, a quantitative analysis is presented on how much IMU sensor errors, such as biases, scale factors, cross-couplings, and thermal drifts, distort the determination of gravity and the deflection of the vertical (DOV). Several lab and in-field calibration methods are briefly discussed, and calibration results are shown for an iMAR RQH unit. In particular, a thermal lab calibration of its QA2000 accelerometers greatly improved the long-term drift behavior. Latest results from four recent airborne gravimetry campaigns confirm the effectiveness of the calibrations applied, with cross-over accuracies reaching 1.0 mGal (0.6 mGal after cross-over adjustment) and DOV accuracies reaching 1.1 arc seconds after cross-over adjustment.

  3. Estimating the decomposition of predictive information in multivariate systems

    NASA Astrophysics Data System (ADS)

    Faes, Luca; Kugiumtzis, Dimitris; Nollo, Giandomenico; Jurysta, Fabrice; Marinazzo, Daniele

    2015-03-01

    In the study of complex systems from observed multivariate time series, one often seeks insight into how the evolution of one system can be explained by the information storage of the system itself and by the information transfer from other interacting systems. We present a framework for the model-free estimation of information storage and information transfer computed as the terms composing the predictive information about the target of a multivariate dynamical process. The approach tackles the curse of dimensionality employing a nonuniform embedding scheme that selects progressively, among the past components of the multivariate process, only those that contribute most, in terms of conditional mutual information, to the present target process. Moreover, it computes all information-theoretic quantities using a nearest-neighbor technique designed to compensate the bias due to the different dimensionality of individual entropy terms. The resulting estimators of prediction entropy, storage entropy, transfer entropy, and partial transfer entropy are tested on simulations of coupled linear stochastic and nonlinear deterministic dynamic processes, demonstrating the superiority of the proposed approach over the traditional estimators based on uniform embedding. The framework is then applied to multivariate physiologic time series, resulting in physiologically well-interpretable information decompositions of cardiovascular and cardiorespiratory interactions during head-up tilt and of joint brain-heart dynamics during sleep.
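
    Under a linear-Gaussian approximation (not the model-free nonuniform-embedding, nearest-neighbor estimator developed in the paper), the decomposed quantities can be computed directly from covariance determinants. The sketch below does this for a pair of coupled autoregressive processes, splitting the predictive information about the target into information storage plus information transfer; the processes, embedding length, and parameters are assumptions made for illustration.

      # Hedged sketch: Gaussian (covariance-based) estimate of information storage and
      # transfer, with a uniform embedding of length m. Quantities are in nats.
      import numpy as np

      def embed(x, y, m):
          """Rows: [y_t, y_{t-1..t-m}, x_{t-1..t-m}] for target y and driver x."""
          n = len(y)
          rows = [[y[t]] + [y[t - k] for k in range(1, m + 1)]
                         + [x[t - k] for k in range(1, m + 1)] for t in range(m, n)]
          return np.array(rows)

      def gaussian_mi(C, a, b):
          """I(A;B) for jointly Gaussian variables, from covariance C and index lists a, b."""
          det = lambda idx: np.linalg.det(C[np.ix_(idx, idx)])
          return 0.5 * np.log(det(a) * det(b) / det(a + b))

      rng = np.random.default_rng(6)
      n, m = 20000, 2
      x, y = np.zeros(n), np.zeros(n)
      for t in range(1, n):                       # x drives y (illustrative coupled AR pair)
          x[t] = 0.6 * x[t - 1] + rng.normal()
          y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()

      C = np.cov(embed(x, y, m), rowvar=False)
      tgt, ypast, xpast = [0], list(range(1, m + 1)), list(range(m + 1, 2 * m + 1))

      storage = gaussian_mi(C, tgt, ypast)                 # I(y_t ; y_past)
      predictive = gaussian_mi(C, tgt, ypast + xpast)      # I(y_t ; y_past, x_past)
      transfer = predictive - storage                      # I(y_t ; x_past | y_past)
      print(f"storage = {storage:.3f} nats, transfer = {transfer:.3f} nats")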

  4. Synergy, redundancy, and multivariate information measures: an experimentalist's perspective.

    PubMed

    Timme, Nicholas; Alford, Wesley; Flecker, Benjamin; Beggs, John M

    2014-04-01

    Information theory has long been used to quantify interactions between two variables. With the rise of complex systems research, multivariate information measures have been increasingly used to investigate interactions between groups of three or more variables, often with an emphasis on so called synergistic and redundant interactions. While bivariate information measures are commonly agreed upon, the multivariate information measures in use today have been developed by many different groups, and differ in subtle, yet significant ways. Here, we will review these multivariate information measures with special emphasis paid to their relationship to synergy and redundancy, as well as examine the differences between these measures by applying them to several simple model systems. In addition to these systems, we will illustrate the usefulness of the information measures by analyzing neural spiking data from a dissociated culture through early stages of its development. Our aim is that this work will aid other researchers as they seek the best multivariate information measure for their specific research goals and system. Finally, we have made software available online which allows the user to calculate all of the information measures discussed within this paper.
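
    One of the simplest measures discussed above can be sketched directly: the interaction information I(X;Y;Z) computed from joint entropies, using the sign convention in which synergy is negative. For the XOR system (Z = X xor Y, with X and Y uniform and independent) it equals about -1 bit, the textbook signature of a purely synergistic interaction. This is a minimal illustration, not the authors' software toolbox.

      # Hedged sketch: interaction information for three discrete variables,
      # I(X;Y;Z) = H(X)+H(Y)+H(Z) - H(XY)-H(XZ)-H(YZ) + H(XYZ)  (synergy < 0 convention).
      import itertools
      import numpy as np

      def entropy(counts):
          p = counts / counts.sum()
          p = p[p > 0]
          return -np.sum(p * np.log2(p))

      def interaction_information(samples):
          """samples: list of (x, y, z) tuples of discrete values."""
          def H(idx):
              vals, counts = np.unique([tuple(s[i] for i in idx) for s in samples],
                                       axis=0, return_counts=True)
              return entropy(counts.astype(float))
          return (H([0]) + H([1]) + H([2])
                  - H([0, 1]) - H([0, 2]) - H([1, 2]) + H([0, 1, 2]))

      xor_samples = [(x, y, x ^ y) for x, y in itertools.product([0, 1], repeat=2)]
      print(interaction_information(xor_samples))   # approximately -1 bit: purely synergistic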

  5. Estimating the decomposition of predictive information in multivariate systems.

    PubMed

    Faes, Luca; Kugiumtzis, Dimitris; Nollo, Giandomenico; Jurysta, Fabrice; Marinazzo, Daniele

    2015-03-01

    In the study of complex systems from observed multivariate time series, one often seeks insight into how the evolution of one system can be explained by the information storage of the system itself and by the information transfer from other interacting systems. We present a framework for the model-free estimation of information storage and information transfer computed as the terms composing the predictive information about the target of a multivariate dynamical process. The approach tackles the curse of dimensionality employing a nonuniform embedding scheme that selects progressively, among the past components of the multivariate process, only those that contribute most, in terms of conditional mutual information, to the present target process. Moreover, it computes all information-theoretic quantities using a nearest-neighbor technique designed to compensate the bias due to the different dimensionality of individual entropy terms. The resulting estimators of prediction entropy, storage entropy, transfer entropy, and partial transfer entropy are tested on simulations of coupled linear stochastic and nonlinear deterministic dynamic processes, demonstrating the superiority of the proposed approach over the traditional estimators based on uniform embedding. The framework is then applied to multivariate physiologic time series, resulting in physiologically well-interpretable information decompositions of cardiovascular and cardiorespiratory interactions during head-up tilt and of joint brain-heart dynamics during sleep.

  6. MIRO Calibration Switch Mechanism

    NASA Technical Reports Server (NTRS)

    Suchman, Jason; Salinas, Yuki; Kubo, Holly

    2001-01-01

    The Jet Propulsion Laboratory has designed, analyzed, built, and tested a calibration switch mechanism for the MIRO instrument on the ROSETTA spacecraft. MIRO is the Microwave Instrument for the Rosetta Orbiter; this instrument is intended to investigate the origin of the solar system by studying the origin of comets. Specifically, the instrument will be the first to use submillimeter and millimeter wave heterodyne receivers to remotely examine comet 46P/Wirtanen. In order to calibrate the instrument, it needs to view a hot and a cold target. The purpose of the mechanism is to divert the instrument's field of view from the hot target to the cold target, and then back into space. This cycle is to be repeated every 30 minutes for the duration of the 1.5 year mission. The paper describes the development of the mechanism, as well as analysis and testing techniques.

  7. HIRDLS monochromator calibration equipment

    NASA Astrophysics Data System (ADS)

    Hepplewhite, Christopher L.; Barnett, John J.; Djotni, Karim; Whitney, John G.; Bracken, Justain N.; Wolfenden, Roger; Row, Frederick; Palmer, Christopher W. P.; Watkins, Robert E. J.; Knight, Rodney J.; Gray, Peter F.; Hammond, Geoffory

    2003-11-01

    A specially designed and built monochromator was developed for the spectral calibration of the HIRDLS instrument. The High Resolution Dynamics Limb Sounder (HIRDLS) is a precision infra-red remote sensing instrument with very tight requirements on the knowledge of the response to received radiation. A high performance, vacuum compatible monochromator, was developed with a wavelength range from 4 to 20 microns to encompass that of the HIRDLS instrument. The monochromator is integrated into a collimating system which is shared with a set of tiny broad band sources used for independent spatial response measurements (reported elsewhere). This paper describes the design and implementation of the monochromator and the performance obtained during the period of calibration of the HIRDLS instrument at Oxford University in 2002.

  8. Calibrated vapor generator source

    DOEpatents

    Davies, J.P.; Larson, R.A.; Goodrich, L.D.; Hall, H.J.; Stoddard, B.D.; Davis, S.G.; Kaser, T.G.; Conrad, F.J.

    1995-09-26

    A portable vapor generator is disclosed that can provide a controlled source of chemical vapors, such as, narcotic or explosive vapors. This source can be used to test and calibrate various types of vapor detection systems by providing a known amount of vapors to the system. The vapor generator is calibrated using a reference ion mobility spectrometer. A method of providing this vapor is described, as follows: explosive or narcotic is deposited on quartz wool, placed in a chamber that can be heated or cooled (depending on the vapor pressure of the material) to control the concentration of vapors in the reservoir. A controlled flow of air is pulsed over the quartz wool releasing a preset quantity of vapors at the outlet. 10 figs.

  9. Calibrated vapor generator source

    DOEpatents

    Davies, John P.; Larson, Ronald A.; Goodrich, Lorenzo D.; Hall, Harold J.; Stoddard, Billy D.; Davis, Sean G.; Kaser, Timothy G.; Conrad, Frank J.

    1995-01-01

    A portable vapor generator is disclosed that can provide a controlled source of chemical vapors, such as, narcotic or explosive vapors. This source can be used to test and calibrate various types of vapor detection systems by providing a known amount of vapors to the system. The vapor generator is calibrated using a reference ion mobility spectrometer. A method of providing this vapor is described, as follows: explosive or narcotic is deposited on quartz wool, placed in a chamber that can be heated or cooled (depending on the vapor pressure of the material) to control the concentration of vapors in the reservoir. A controlled flow of air is pulsed over the quartz wool releasing a preset quantity of vapors at the outlet.

  10. Phase calibration generator

    NASA Technical Reports Server (NTRS)

    Sigman, E. H.

    1988-01-01

    A phase calibration system was developed for the Deep Space Stations to generate reference microwave comb tones which are mixed in with signals received by the antenna. These reference tones are used to remove drifts of the station's receiving system from the detected data. This phase calibration system includes a cable stabilizer which transfers a 20 MHz reference signal from the control room to the antenna cone. The cable stabilizer compensates for delay changes in the long cable which connects its control room subassembly to its antenna cone subassembly in such a way that the 20 MHz is transferred to the cone with no significant degradation of the hydrogen maser atomic clock stability. The 20 MHz reference is used by the comb generator and is also available for use as a reference for receiver LO's in the cone.

  11. Pipeline Calibration for STIS

    NASA Astrophysics Data System (ADS)

    Hodge, P. E.; Hulbert, S. J.; Lindler, D.; Busko, I.; Hsu, J.-C.; Baum, S.; McGrath, M.; Goudfrooij, P.; Shaw, R.; Katsanis, R.; Keener, S.; Bohlin, R.

    The CALSTIS program for calibration of Space Telescope Imaging Spectrograph data in the OPUS pipeline differs in several significant ways from calibration for earlier HST instruments, such as the use of FITS format, computation of error estimates, and association of related exposures. Several steps are now done in the pipeline that previously had to be done off-line by the user, such as cosmic ray rejection and extraction of 1-D spectra. Although the program is linked with IRAF for image and table I/O, it is written in ANSI C rather than SPP, which should make the code more accessible. FITS extension I/O makes use of the new IRAF FITS kernel for images and the HEASARC FITSIO package for tables.

  12. Efficient multivariate linear mixed model algorithms for genome-wide association studies.

    PubMed

    Zhou, Xiang; Stephens, Matthew

    2014-04-01

    Multivariate linear mixed models (mvLMMs) are powerful tools for testing associations between single-nucleotide polymorphisms and multiple correlated phenotypes while controlling for population stratification in genome-wide association studies. We present efficient algorithms in the genome-wide efficient mixed model association (GEMMA) software for fitting mvLMMs and computing likelihood ratio tests. These algorithms offer improved computation speed, power and P-value calibration over existing methods, and can deal with more than two phenotypes.

  13. Calibration of hydrometers

    NASA Astrophysics Data System (ADS)

    Lorefice, Salvatore; Malengo, Andrea

    2006-10-01

    After a brief description of the different methods employed in the periodic calibration of hydrometers, used in most cases to measure the density of liquids in the range between 500 kg m-3 and 2000 kg m-3, particular emphasis is given to the multipoint procedure based on hydrostatic weighing, also known as Cuckow's method. The features of the calibration apparatus and the procedure used at the INRiM (formerly IMGC-CNR) density laboratory have been considered to assess all relevant contributions involved in the calibration of different kinds of hydrometers. The uncertainty is strongly dependent on the kind of hydrometer; in particular, the results highlight the importance of the density of the reference buoyant liquid, the temperature of calibration and the skill of the operator in reading the scale in the overall assessment of the uncertainty. It is also interesting to realize that for high-resolution hydrometers (division of 0.1 kg m-3), the uncertainty contribution of the density of the reference liquid is the main source of the total uncertainty, but its importance falls below about 50% for hydrometers with a division of 0.5 kg m-3 and becomes somewhat negligible for hydrometers with a division of 1 kg m-3, for which the reading uncertainty is the predominant part of the total uncertainty. At present the best INRiM result is obtained with commercially available hydrometers having a scale division of 0.1 kg m-3, for which the relative uncertainty is about 12 × 10-6.
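
    The essence of the hydrostatic-weighing (Cuckow) principle can be sketched as follows: weighing the hydrometer in air and then suspended in a reference liquid with the surface at the scale mark under calibration gives the volume up to that mark, from which the density at which the instrument would float freely to that mark follows. The surface-tension, air-buoyancy and thermal corrections applied in the full procedure are deliberately omitted, and all numbers are illustrative assumptions.

      # Hedged sketch: simplified Cuckow-style scale-mark check by hydrostatic weighing.
      m_air = 0.085000        # mass of the hydrometer weighed in air, kg (illustrative)
      m_liq = 0.015500        # apparent mass when immersed to the mark, kg
      rho_ref = 998.2         # density of the reference liquid, kg/m^3

      volume = (m_air - m_liq) / rho_ref      # volume up to the scale mark, m^3
      rho_mark = m_air / volume               # density at which it would float freely to that mark
      nominal = 1220.0                        # nominal scale reading at that mark, kg/m^3
      print(f"correction at the mark: {rho_mark - nominal:+.2f} kg/m^3")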

  14. Mesoscale hybrid calibration artifact

    DOEpatents

    Tran, Hy D.; Claudet, Andre A.; Oliver, Andrew D.

    2010-09-07

    A mesoscale calibration artifact, also called a hybrid artifact, suitable for hybrid dimensional measurement, and the method for making the artifact. The hybrid artifact has structural characteristics that make it suitable for dimensional measurement in both vision-based systems and touch-probe-based systems. The hybrid artifact employs the intersection of bulk-micromachined planes to fabricate edges that are sharp to the nanometer level and intersecting planes with crystal-lattice-defined angles.

  15. Calibration Chamber Testing

    DTIC Science & Technology

    1992-01-30

    penetrometers of different designs, (iii) the effect of rod friction, (iv) the effect of discontinuous operation, and (v) sensing an interface between two sand ... layers. Other test results on two designs of 10 cm² Fugro penetrometers, each with a different position of friction sleeve, assisted in the selection ... at different stages in the penetration of a specimen. The calibration tests had the prime purpose of establishing correlations between the penetration

  16. SOFIE instrument ground calibration

    NASA Astrophysics Data System (ADS)

    Hansen, Scott; Fish, Chad; Romrell, Devin; Gordley, Larry; Hervig, Mark

    2006-08-01

    Space Dynamics Laboratory (SDL), in partnership with GATS, Inc., designed and built an instrument to conduct the Solar Occultation for Ice Experiment (SOFIE). SOFIE is the primary infrared sensor in the NASA Aeronomy of Ice in the Mesosphere (AIM) instrument suite. AIM's mission is to study polar mesospheric clouds (PMCs). SOFIE will make measurements in 16 separate spectral bands, arranged in eight pairs between 0.29 and 5.3 μm. Each band pair will provide differential absorption limb-path transmission profiles for an atmospheric component of interest, by observing the sun through the limb of the atmosphere during solar occultation as AIM orbits Earth. A pointing mirror and imaging sun sensor coaligned with the detectors are used to track the sun during occultation events and maintain stable alignment of the sun on the detectors. Ground calibration experiments were performed to measure SOFIE end-to-end relative spectral response, nonlinearity, and spatial characteristics. SDL's multifunction infrared calibrator #1 (MIC1) was used to present sources to the instrument for calibration. Relative spectral response (RSR) measurements were performed using a step-scan Fourier transform spectrometer (FTS). Out-of-band RSR was measured to approximately 0.01% of in-band peak response using the cascaded filter Fourier transform spectrometer (CFFTS) method. Linearity calibration was performed using a calcium fluoride attenuator in combination with a 3000K blackbody. Spatial characterization was accomplished using a point source and the MIC1 pointing mirror. SOFIE sun sensor tracking algorithms were verified using a heliostat and relay mirrors to observe the sun from the ground. These techniques are described in detail, and resulting SOFIE performance parameters are presented.

  17. Calibrated Properties Model

    SciTech Connect

    H. H. Liu

    2003-02-14

    This report has documented the methodologies and the data used for developing rock property sets for three infiltration maps. Model calibration is necessary to obtain parameter values appropriate for the scale of the process being modeled. Although some hydrogeologic property data (prior information) are available, these data cannot be directly used to predict flow and transport processes because they were measured on scales smaller than those characterizing property distributions in models used for the prediction. Since model calibrations were done directly on the scales of interest, the upscaling issue was automatically considered. On the other hand, joint use of data and the prior information in inversions can further increase the reliability of the developed parameters compared with those for the prior information. Rock parameter sets were developed for both the mountain and drift scales because of the scale-dependent behavior of fracture permeability. Note that these parameter sets, except those for faults, were determined using the 1-D simulations. Therefore, they cannot be directly used for modeling lateral flow because of perched water in the unsaturated zone (UZ) of Yucca Mountain. Further calibration may be needed for two- and three-dimensional modeling studies. As discussed above in Section 6.4, uncertainties for these calibrated properties are difficult to accurately determine, because of the inaccuracy of simplified methods for this complex problem or the extremely large computational expense of more rigorous methods. One estimate of uncertainty that may be useful to investigators using these properties is the uncertainty used for the prior information. In most cases, the inversions did not change the properties very much with respect to the prior information. The Output DTNs (including the input and output files for all runs) from this study are given in Section 9.4.

  18. A force calibration standard for magnetic tweezers

    NASA Astrophysics Data System (ADS)

    Yu, Zhongbo; Dulin, David; Cnossen, Jelmer; Köber, Mariana; van Oene, Maarten M.; Ordu, Orkide; Berghuis, Bojk A.; Hensgens, Toivo; Lipfert, Jan; Dekker, Nynke H.

    2014-12-01

    To study the behavior of biological macromolecules and enzymatic reactions under force, advances in single-molecule force spectroscopy have proven instrumental. Magnetic tweezers form one of the most powerful of these techniques, due to their overall simplicity, non-invasive character, potential for high throughput measurements, and large force range. Drawbacks of magnetic tweezers, however, are that accurate determination of the applied forces can be challenging for short biomolecules at high forces and very time-consuming for long tethers at low forces below ˜1 piconewton. Here, we address these drawbacks by presenting a calibration standard for magnetic tweezers consisting of measured forces for four magnet configurations. Each such configuration is calibrated for two commonly employed commercially available magnetic microspheres. We calculate forces in both time and spectral domains by analyzing bead fluctuations. The resulting calibration curves, validated through the use of different algorithms that yield close agreement in their determination of the applied forces, span a range from 100 piconewtons down to tens of femtonewtons. These generalized force calibrations will serve as a convenient resource for magnetic tweezers users and diminish variations between different experimental configurations or laboratories.
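
    A widely used shortcut behind such calibrations is the time-domain, equipartition-based estimate sketched below: for a bead on a tether of extension L, the variance of its transverse fluctuations gives the force as F ~ kB*T*L/<dx^2>. The numbers are synthetic, and the spectral-domain analysis and camera-blur corrections discussed in the paper are omitted.

      # Hedged sketch: magnetic-tweezers force estimate from transverse bead fluctuations.
      import numpy as np

      kB = 1.380649e-23        # Boltzmann constant, J/K
      T = 298.15               # temperature, K
      L = 2.0e-6               # tether extension, m (illustrative)

      rng = np.random.default_rng(7)
      dx = rng.normal(0.0, 60e-9, size=20000)   # transverse bead positions, m (synthetic)
      force = kB * T * L / np.var(dx)           # F = kB*T*L / <dx^2>, in newtons
      print(f"estimated force = {force * 1e12:.2f} pN")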

  19. Fast calibration of gas flowmeters

    NASA Technical Reports Server (NTRS)

    Lisle, R. V.; Wilson, T. L.

    1981-01-01

    Digital unit automates calibration sequence using calculator IC and programmable read-only memory to solve calibration equations. Infrared sensors start and stop calibration sequence. Instrument calibrates mass flowmeters or rotameters where flow measurement is based on mass or volume. This automatic control reduces operator time by 80 percent. Solid-state components are very reliable, and digital character allows system accuracy to be determined primarily by accuracy of transducers.

  20. Multivariate normative comparisons using an aggregated database.

    PubMed

    Agelink van Rentergem, Joost A; Murre, Jaap M J; Huizenga, Hilde M

    2017-01-01

    In multivariate normative comparisons, a patient's profile of test scores is compared to those in a normative sample. Recently, it has been shown that these multivariate normative comparisons enhance the sensitivity of neuropsychological assessment. However, multivariate normative comparisons require multivariate normative data, which are often unavailable. In this paper, we show how a multivariate normative database can be constructed by combining healthy control group data from published neuropsychological studies. We show that three issues should be addressed to construct a multivariate normative database. First, the database may have a multilevel structure, with participants nested within studies. Second, not all tests are administered in every study, so many data may be missing. Third, a patient should be compared to controls of similar age, gender and educational background rather than to the entire normative sample. To address these issues, we propose a multilevel approach for multivariate normative comparisons that accounts for missing data and includes covariates for age, gender and educational background. Simulations show that this approach controls the number of false positives and has high sensitivity to detect genuine deviations from the norm. An empirical example is provided. Implications for other domains than neuropsychology are also discussed. To facilitate broader adoption of these methods, we provide code implementing the entire analysis in the open source software package R.
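
    At its core, a multivariate normative comparison asks how far a patient's profile lies from the normative centroid relative to the normative covariance. The sketch below shows this bare version using the Mahalanobis distance with a large-sample chi-square tail probability; the multilevel structure, missing data, and demographic covariates handled in the paper (and the authors' R code) are not modeled here, and all data are synthetic.

      # Hedged sketch: single-patient multivariate normative comparison via Mahalanobis distance.
      import numpy as np
      from scipy.stats import chi2

      rng = np.random.default_rng(8)
      # Normative sample: 500 controls, 4 standardized test scores (synthetic)
      cov = 0.3 * np.ones((4, 4)) + 0.7 * np.eye(4)
      controls = rng.multivariate_normal(np.zeros(4), cov, size=500)
      mu = controls.mean(axis=0)
      S = np.cov(controls, rowvar=False)

      patient = np.array([-1.2, -0.8, -1.5, -0.9])          # patient's profile of test scores
      d2 = (patient - mu) @ np.linalg.inv(S) @ (patient - mu)
      p_value = chi2.sf(d2, df=4)                           # tail probability of the chi-square(4)
      print(f"Mahalanobis D^2 = {d2:.2f}, p = {p_value:.4f}")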