Science.gov

Sample records for multivariate calibration applied

  1. Multivariate calibration applied to the quantitative analysis of infrared spectra

    NASA Astrophysics Data System (ADS)

    Haaland, David M.

    1992-03-01

    Multivariate calibration methods are very useful for improving the precision, accuracy, and reliability of quantitative spectral analyses. Spectroscopists can more effectively use these sophisticated statistical tools if they have a qualitative understanding of the techniques involved. A qualitative picture of the factor analysis multivariate calibration methods of partial least squares (PLS) and principal component regression (PCR) is presented using infrared calibrations based upon spectra of phosphosilicate glass thin films on silicon wafers. Comparisons of the relative prediction abilities of four different multivariate calibration methods are given based on Monte Carlo simulations of spectral calibration and prediction data. The success of multivariate spectral calibrations is demonstrated for several quantitative infrared studies. The infrared absorption and emission spectra of thin-film dielectrics used in the manufacture of microelectronic devices demonstrate rapid, nondestructive at-line and in-situ analyses using PLS calibrations. Finally, the application of multivariate spectral calibrations to reagentless analysis of blood is presented. We have found that the determination of glucose in whole blood taken from diabetics can be precisely monitored from the PLS calibration of either mid- or near-infrared spectra of the blood. Progress toward the noninvasive determination of glucose levels in diabetics is an ultimate goal of this research.

  2. Multivariate calibration applied to the quantitative analysis of infrared spectra

    SciTech Connect

    Haaland, D.M.

    1991-01-01

    Multivariate calibration methods are very useful for improving the precision, accuracy, and reliability of quantitative spectral analyses. Spectroscopists can more effectively use these sophisticated statistical tools if they have a qualitative understanding of the techniques involved. A qualitative picture of the factor analysis multivariate calibration methods of partial least squares (PLS) and principal component regression (PCR) is presented using infrared calibrations based upon spectra of phosphosilicate glass thin films on silicon wafers. Comparisons of the relative prediction abilities of four different multivariate calibration methods are given based on Monte Carlo simulations of spectral calibration and prediction data. The success of multivariate spectral calibrations is demonstrated for several quantitative infrared studies. The infrared absorption and emission spectra of thin-film dielectrics used in the manufacture of microelectronic devices demonstrate rapid, nondestructive at-line and in-situ analyses using PLS calibrations. Finally, the application of multivariate spectral calibrations to reagentless analysis of blood is presented. We have found that the determination of glucose in whole blood taken from diabetics can be precisely monitored from the PLS calibration of either mid- or near-infrared spectra of the blood. Progress toward the non-invasive determination of glucose levels in diabetics is an ultimate goal of this research. 13 refs., 4 figs.

  3. Multivariate Regression with Calibration*

    PubMed Central

    Liu, Han; Wang, Lie; Zhao, Tuo

    2014-01-01

    We propose a new method named calibrated multivariate regression (CMR) for fitting high dimensional multivariate regression models. Compared to existing methods, CMR calibrates the regularization for each regression task with respect to its noise level so that it is simultaneously tuning insensitive and achieves an improved finite-sample performance. Computationally, we develop an efficient smoothed proximal gradient algorithm which has a worst-case iteration complexity O(1/ε), where ε is a pre-specified numerical accuracy. Theoretically, we prove that CMR achieves the optimal rate of convergence in parameter estimation. We illustrate the usefulness of CMR by thorough numerical simulations and show that CMR consistently outperforms other high dimensional multivariate regression methods. We also apply CMR on a brain activity prediction problem and find that CMR is as competitive as the handcrafted model created by human experts. PMID:25620861

  4. Primer on multivariate calibration

    SciTech Connect

    Thomas, E.V.

    1994-08-01

    In analytical chemistry, calibration is the procedure that relates instrumental measurements to an analyte of interest. Typically, instrumental measurements are obtained from specimens in which the amount (or level) of the analyte has been determined by some independent and inherently accurate assay (e.g., wet chemistry). Together, the instrumental measurements and results from the independent assays are used to construct a model that relates the analyte level to the instrumental measurements. The advent of high-speed digital computers has greatly increased data acquisition and analysis capabilities and has provided the analytical chemist with opportunities to use many measurements - perhaps hundreds - for calibrating an instrument (e.g., absorbances at multiple wavelengths). To take advantage of this technology, however, new methods (i.e., multivariate calibration methods) were needed for analyzing and modeling the experimental data. The purpose of this report is to introduce several evolving multivariate calibration methods and to present some important issues regarding their use. 30 refs., 7 figs.
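
    A few lines of code make the workflow described above concrete. The sketch below, in Python with scikit-learn, simulates absorbance spectra at many wavelengths, relates them to reference analyte levels with partial least squares, and reports a cross-validated error. All data, names, and the component count are illustrative assumptions, not material from the report.

    ```python
    # Minimal multivariate calibration sketch: relate simulated absorbance
    # spectra (many wavelengths) to reference analyte levels with PLS.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    n_samples, n_wavelengths = 40, 200

    # Simulate spectra: an analyte band plus an interferent band and noise.
    conc = rng.uniform(0.0, 1.0, n_samples)          # reference analyte levels
    interferent = rng.uniform(0.0, 1.0, n_samples)
    axis = np.linspace(0.0, 1.0, n_wavelengths)
    band_a = np.exp(-((axis - 0.3) ** 2) / 0.002)    # analyte band
    band_b = np.exp(-((axis - 0.6) ** 2) / 0.005)    # interferent band
    spectra = (np.outer(conc, band_a) + np.outer(interferent, band_b)
               + rng.normal(0.0, 0.01, (n_samples, n_wavelengths)))

    # Calibrate with a few PLS factors; estimate error by cross-validation.
    pls = PLSRegression(n_components=3)
    predicted = cross_val_predict(pls, spectra, conc, cv=10).ravel()
    print("RMSECV:", np.sqrt(np.mean((predicted - conc) ** 2)))
    ```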

  5. Multivariate regression analysis applied to the calibration of equipment used in pig meat classification in Romania.

    PubMed

    Savescu, Roxana Florenta; Laba, Marian

    2016-06-01

    This paper highlights the statistical methodology used in a dissection experiment carried out in Romania to calibrate and standardize two classification devices, OptiGrade PRO (OGP) and Fat-o-Meat'er (FOM). One hundred forty-five carcasses were measured using the two probes and dissected according to the European reference method. To derive prediction formulas for each device, multiple linear regression analysis was performed on the relationship between the reference lean meat percentage and the back fat and muscle thicknesses, using the ordinary least squares technique. The root mean squared error of prediction calculated using the leave-one-out cross validation met European Commission (EC) requirements. The application of the new prediction equations reduced the gap between the lean meat percentage measured with the OGP and FOM from 2.43% (average for the period Q3/2006-Q2/2008) to 0.10% (average for the period Q3/2008-Q4/2014), providing the basis for a fair payment system for the pig producers. PMID:26835835
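
    A minimal sketch of this kind of calibration, assuming fabricated placeholder data rather than the study's dissection results: ordinary least squares relates the reference lean meat percentage to the two probe measurements, and leave-one-out cross-validation yields the RMSEP checked against the EC requirement.

    ```python
    # Sketch: OLS calibration of lean meat % against probe fat and muscle
    # depths, validated by leave-one-out RMSEP. All data are fabricated.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(1)
    n = 145                                  # carcasses, as in the study design
    fat = rng.uniform(8.0, 30.0, n)          # back fat thickness, mm (assumed)
    muscle = rng.uniform(40.0, 75.0, n)      # muscle thickness, mm (assumed)
    lean = 65.0 - 0.9 * fat + 0.1 * muscle + rng.normal(0.0, 1.5, n)

    X = np.column_stack([fat, muscle])
    pred = cross_val_predict(LinearRegression(), X, lean, cv=LeaveOneOut())
    rmsep = np.sqrt(np.mean((pred - lean) ** 2))
    print(f"leave-one-out RMSEP: {rmsep:.2f} percentage points")
    ```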

  6. Comparative study between derivative spectrophotometry and multivariate calibration as analytical tools applied for the simultaneous quantitation of Amlodipine, Valsartan and Hydrochlorothiazide

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2013-09-01

    Four simple, accurate and specific methods were developed and validated for the simultaneous estimation of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in commercial tablets. The derivative spectrophotometric methods include the Derivative Ratio Zero Crossing (DRZC) and Double Divisor Ratio Spectra-Derivative Spectrophotometry (DDRS-DS) methods, while the multivariate calibrations used are Principal Component Regression (PCR) and Partial Least Squares (PLS). The proposed methods were applied successfully to the determination of the drugs in laboratory-prepared mixtures and in commercial pharmaceutical preparations. The validity of the proposed methods was assessed using the standard addition technique. The linearity of the proposed methods was investigated over the ranges of 2-32, 4-44 and 2-20 μg/mL for AML, VAL and HCT, respectively.

  7. Comparative study between derivative spectrophotometry and multivariate calibration as analytical tools applied for the simultaneous quantitation of Amlodipine, Valsartan and Hydrochlorothiazide.

    PubMed

    Darwish, Hany W; Hassan, Said A; Salem, Maissa Y; El-Zeany, Badr A

    2013-09-01

    Four simple, accurate and specific methods were developed and validated for the simultaneous estimation of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in commercial tablets. The derivative spectrophotometric methods include the Derivative Ratio Zero Crossing (DRZC) and Double Divisor Ratio Spectra-Derivative Spectrophotometry (DDRS-DS) methods, while the multivariate calibrations used are Principal Component Regression (PCR) and Partial Least Squares (PLS). The proposed methods were applied successfully to the determination of the drugs in laboratory-prepared mixtures and in commercial pharmaceutical preparations. The validity of the proposed methods was assessed using the standard addition technique. The linearity of the proposed methods was investigated over the ranges of 2-32, 4-44 and 2-20 μg/mL for AML, VAL and HCT, respectively. PMID:23727675

  8. Adaptable Multivariate Calibration Models for Spectral Applications

    SciTech Connect

    Thomas, Edward V.

    1999-12-20

    Multivariate calibration techniques have been used in a wide variety of spectroscopic situations. In many of these situations spectral variation can be partitioned into meaningful classes. For example, suppose that multiple spectra are obtained from each of a number of different objects wherein the level of the analyte of interest varies within each object over time. In such situations the total spectral variation observed across all measurements has two distinct general sources of variation: intra-object and inter-object. One might want to develop a global multivariate calibration model that predicts the analyte of interest accurately both within and across objects, including new objects not involved in developing the calibration model. However, this goal might be hard to realize if the inter-object spectral variation is complex and difficult to model. If the intra-object spectral variation is consistent across objects, an effective alternative approach might be to develop a generic intra-object model that can be adapted to each object separately. This paper contains recommendations for experimental protocols and data analysis in such situations. The approach is illustrated with an example involving the noninvasive measurement of glucose using near-infrared reflectance spectroscopy. Extensions to calibration maintenance and calibration transfer are discussed.

  9. Exploration of new multivariate spectral calibration algorithms.

    SciTech Connect

    Van Benthem, Mark Hilary; Haaland, David Michael; Melgaard, David Kennett; Martin, Laura Elizabeth; Wehlburg, Christine Marie; Pell, Randy J.; Guenard, Robert D.

    2004-03-01

    A variety of multivariate calibration algorithms for quantitative spectral analyses were investigated and compared, and new algorithms were developed in the course of this Laboratory Directed Research and Development project. We were able to demonstrate the ability of the hybrid classical least squares/partial least squares (CLS/PLS) calibration algorithms to maintain calibrations in the presence of spectrometer drift and to transfer calibrations between spectrometers from the same or different manufacturers. These methods were found to be as good as or better than the commonly used partial least squares (PLS) method in prediction ability. We also present the theory for an entirely new class of algorithms labeled augmented classical least squares (ACLS) methods. New factor selection methods are developed and described for the ACLS algorithms. These factor selection methods are demonstrated using near-infrared spectra collected from a system of dilute aqueous solutions. The ACLS algorithm is also shown to provide improved ease of use and better prediction ability than PLS when transferring calibrations between near-infrared spectrometers from the same manufacturer. Finally, simulations incorporating either ideal or realistic errors in the spectra were used to compare the prediction abilities of the new ACLS algorithm with that of PLS. We found that in the presence of realistic errors with non-uniform spectral error variance across spectral channels, or with spectral errors correlated between frequency channels, ACLS methods generally outperformed the more commonly used PLS method. These results demonstrate the need for realistic error structure in simulations when the prediction abilities of various algorithms are compared. The combination of equal or superior prediction ability and the ease of use of the ACLS algorithms make the new ACLS methods the preferred algorithms for multivariate spectral calibrations.
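
    The classical least squares backbone that the ACLS methods augment can be sketched in a few lines of numpy; the augmentation and factor selection steps reported above are omitted, and the simulated spectra are assumptions.

    ```python
    # Classical least squares (CLS): estimate pure-component spectra K from
    # calibration spectra A and known concentrations C (A = C @ K + noise),
    # then invert for prediction. Sketch only; ACLS augmentation not shown.
    import numpy as np

    rng = np.random.default_rng(2)
    n_cal, n_comp, n_chan = 30, 2, 100
    C = rng.uniform(0.1, 1.0, (n_cal, n_comp))        # known concentrations
    K_true = rng.uniform(0.0, 1.0, (n_comp, n_chan))  # pure spectra (unknown)
    A = C @ K_true + rng.normal(0.0, 0.005, (n_cal, n_chan))

    K_hat = np.linalg.pinv(C) @ A                     # calibration step
    a_new = np.array([0.4, 0.7]) @ K_true             # spectrum of an unknown
    c_hat = a_new @ np.linalg.pinv(K_hat)             # prediction step
    print("estimated concentrations:", np.round(c_hat, 3))
    ```

    The ACLS idea is to augment the estimated K_hat with additional spectral shapes (e.g., drift or residual components) before the prediction step; that augmentation is the part this sketch leaves out.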

  10. Different approaches to multivariate calibration of nonlinear sensor data.

    PubMed

    Dieterle, Frank; Busche, Stefan; Gauglitz, Günter

    2004-10-01

    In this study, different approaches to the multivariate calibration of the vapors of two refrigerants are reported. As the relationships between the time-resolved sensor signals and the concentrations of the analytes are nonlinear, the widely used partial least-squares regression (PLS) fails. Therefore, different methods known to be able to deal with nonlinearities in data are used. First, the Box-Cox transformation, which transforms the dependent variables nonlinearly, was applied. The second approach, implicit nonlinear PLS regression, tries to account for nonlinearities by adding squared terms of the independent variables to the original independent variables. The third approach, quadratic PLS (QPLS), uses a nonlinear quadratic inner relationship for the model instead of the linear relationship of PLS. Tree algorithms, which split a nonlinear problem into smaller subproblems modeled by linear methods or discrete values, are also used. Finally, neural networks, which are able to model any relationship, are applied. Special implementations, such as genetic algorithms with neural networks and growing neural networks, are also used to prevent overfitting. Among the fast and simpler algorithms, QPLS shows good results. Different implementations of neural networks show excellent results. Among the different implementations, the most sophisticated and computing-intensive algorithms (growing neural networks) show the best results. Thus, the optimal method for the data set presented is a compromise between quality of calibration and complexity of the algorithm. PMID:15156303
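
    The first approach above, a Box-Cox transformation of the dependent variable ahead of an otherwise linear calibration, can be sketched as follows; the synthetic sensor data and the use of PLS afterwards are assumptions for illustration.

    ```python
    # Box-Cox-transform the dependent variable to linearize its relation to
    # the sensor signals, fit a linear model (PLS) in the transformed space,
    # and back-transform the predictions. Synthetic data, illustration only.
    import numpy as np
    from scipy.stats import boxcox
    from scipy.special import inv_boxcox
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(3)
    X = rng.uniform(0.0, 1.0, (60, 20))              # time-resolved signals
    conc = np.exp(2.0 * X[:, :3].mean(axis=1)) + rng.normal(0.0, 0.02, 60)

    y_bc, lam = boxcox(conc)                # lambda chosen by max. likelihood
    pls = PLSRegression(n_components=3).fit(X, y_bc)
    pred = inv_boxcox(pls.predict(X).ravel(), lam)
    print("max relative error:", np.max(np.abs(pred - conc) / conc))
    ```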

  11. Quantitative infrared determination of composition and properties of borophosphosilicate glass (BPSG) thin films using multivariate calibration

    SciTech Connect

    McGuire, J.A.; Adhihetty, I.S.; Niemczyk, T.M.; Haaland, D.M.; Taylor, D.F.; Blankenship, D.M.

    1991-01-01

    Partial least squares multivariate calibration methods were applied to the infrared spectra of a new set of borophosphosilicate glass (BPSG) thin films on silicon wafers. The calibration samples were prepared by a low pressure chemical vapor deposition (LPCVD) process. The statistically designed calibration set included data from nearly 400 coated Si wafers. Calibrations were attempted for properties such as dopant concentrations, thickness, etch rate, film stress, and electrical parameters. It was found that annealed films were predicted more precisely than unannealed films. B, P, and thickness measurements yielded the most precise results by these techniques. Multivariate calibration methods applied to the etch rate of annealed films and to unannealed film stress provided some limited predictive ability. The detection and removal of outliers greatly improved the analysis precision. Finally, variation in dopant uniformity within and between wafers may be responsible for degrading the precision of these analytical methods. 7 refs., 3 figs., 2 tabs.

  12. Transfer of multivariate calibration models between spectrometers: A progress report

    SciTech Connect

    Haaland, D.; Jones, H.; Rohrback, B.

    1994-12-31

    Multivariate calibration methods are extremely powerful for quantitative spectral analyses and have myriad uses in quality control and process monitoring. However, when analyses are to be completed at multiple sites or when spectrometers drift, recalibration is required. Often a full recalibration of an instrument can be impractical: the problem is particularly acute when the number of calibration standards is large or the standards are chemically unstable. Furthermore, simply using Instrument A's calibration model to predict unknowns on Instrument B can lead to enormous errors. Therefore, a mathematical procedure that would allow for the efficient transfer of a multivariate calibration model from one instrument to others using a small number of transfer standards is highly desirable. In this study, near-infrared spectral data have been collected from two sets of statistically designed round-robin samples on multiple FT-IR and grating spectrometers. One set of samples encompasses a series of dilute aqueous solutions of urea, creatinine, and NaCl, while the second set is derived from mixtures of heptane, monochlorobenzene, and toluene. A systematic approach has been used to compare the results from four published transfer algorithms in order to determine parameters that affect the quality of the transfer for each class of sample and each type of spectrometer.

  13. Local Strategy Combined with a Wavelength Selection Method for Multivariate Calibration.

    PubMed

    Chang, Haitao; Zhu, Lianqing; Lou, Xiaoping; Meng, Xiaochen; Guo, Yangkuan; Wang, Zhongyu

    2016-01-01

    One of the essential factors influencing the prediction accuracy of multivariate calibration models is the quality of the calibration data. A local regression strategy, together with a wavelength selection approach, is proposed to build multivariate calibration models based on partial least squares regression. The local algorithm is applied to create a calibration set of spectra similar to the spectrum of an unknown sample; the synthetic degree of grey relation coefficient is used to evaluate the similarity. A wavelength selection method based on simple-to-use interactive self-modeling mixture analysis minimizes the influence of noisy variables, and the most informative variables of the most similar samples are selected to build the multivariate calibration model based on partial least squares regression. To validate the performance of the proposed method, ultraviolet-visible absorbance spectra of mixed solutions of food coloring analytes in a concentration range of 20-200 µg/mL were measured. Experimental results show that the proposed method can not only enhance the prediction accuracy of the calibration model, but also greatly reduce its complexity. PMID:27271636

  14. Local Strategy Combined with a Wavelength Selection Method for Multivariate Calibration

    PubMed Central

    Chang, Haitao; Zhu, Lianqing; Lou, Xiaoping; Meng, Xiaochen; Guo, Yangkuan; Wang, Zhongyu

    2016-01-01

    One of the essential factors influencing the prediction accuracy of multivariate calibration models is the quality of the calibration data. A local regression strategy, together with a wavelength selection approach, is proposed to build multivariate calibration models based on partial least squares regression. The local algorithm is applied to create a calibration set of spectra similar to the spectrum of an unknown sample; the synthetic degree of grey relation coefficient is used to evaluate the similarity. A wavelength selection method based on simple-to-use interactive self-modeling mixture analysis minimizes the influence of noisy variables, and the most informative variables of the most similar samples are selected to build the multivariate calibration model based on partial least squares regression. To validate the performance of the proposed method, ultraviolet-visible absorbance spectra of mixed solutions of food coloring analytes in a concentration range of 20–200 µg/mL were measured. Experimental results show that the proposed method can not only enhance the prediction accuracy of the calibration model, but also greatly reduce its complexity. PMID:27271636
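
    The similarity measure at the heart of such a local strategy can be illustrated with standard grey relational analysis; the "synthetic degree of grey relation coefficient" used in the paper may differ in detail, and the spectra below are simulated.

    ```python
    # Rank calibration spectra by similarity to an unknown spectrum using
    # the standard grey relational grade (mean grey relational coefficient).
    import numpy as np

    def grey_relational_grade(reference, candidates, rho=0.5):
        """Mean grey relational coefficient of each candidate row vs. reference."""
        delta = np.abs(candidates - reference)       # pointwise deviations
        d_min, d_max = delta.min(), delta.max()
        grc = (d_min + rho * d_max) / (delta + rho * d_max)
        return grc.mean(axis=1)                      # one grade per candidate

    rng = np.random.default_rng(4)
    library = rng.normal(0.0, 1.0, (50, 120))        # calibration spectra
    unknown = library[7] + rng.normal(0.0, 0.05, 120)
    grades = grey_relational_grade(unknown, library)
    print("most similar samples:", np.argsort(grades)[::-1][:5])  # sample 7 first
    ```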

  15. Multi-Window Classical Least Squares Multivariate Calibration Methods for Quantitative ICP-AES Analyses

    SciTech Connect

    Chambers, William B.; Haaland, David M.; Keenan, Michael R.; Melgaard, David K.

    1999-10-01

    The advent of inductively coupled plasma-atomic emission spectrometers (ICP-AES) equipped with charge-coupled-device (CCD) detector arrays allows the application of multivariate calibration methods to the quantitative analysis of spectral data. We have applied classical least squares (CLS) methods to the analysis of a variety of samples containing up to 12 elements plus an internal standard. The elements included in the calibration models were Ag, Al, As, Au, Cd, Cr, Cu, Fe, Ni, Pb, Pd, and Se. By performing the CLS analysis separately in each of 46 spectral windows and by pooling the CLS concentration results for each element in all windows in a statistically efficient manner, we have been able to significantly improve the accuracy and precision of the ICP-AES analyses relative to the univariate and single-window multivariate methods supplied with the spectrometer. This new multi-window CLS (MWCLS) approach simplifies the analyses by providing a single concentration determination for each element from all spectral windows. Thus, the analyst does not have to perform the tedious task of reviewing the results from each window in an attempt to decide the correct value among discrepant analyses in one or more windows for each element. Furthermore, it is not necessary to construct a spectral correction model for each window prior to calibration and analysis. When one or more interfering elements was present, the new MWCLS method was able to reduce prediction errors for a selected analyte by more than 2 orders of magnitude compared to the worst-case single-window multivariate and univariate predictions. The MWCLS detection limits in the presence of multiple interferences are 15 ng/g (i.e., 15 ppb) or better for each element. In addition, errors with the new method are only slightly inflated when only a single target element is included in the calibration (i.e., knowledge of all other elements is excluded during calibration). The MWCLS method is found to be vastly
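
    The abstract does not spell out how the per-window CLS results are pooled; inverse-variance weighting is one statistically efficient choice, and the sketch below assumes it, with hypothetical per-window numbers.

    ```python
    # Pool per-window CLS concentration estimates into a single value per
    # element. Inverse-variance weighting is assumed; numbers are made up.
    import numpy as np

    def pool_estimates(estimates, variances):
        """Inverse-variance weighted mean of per-window estimates."""
        w = 1.0 / np.asarray(variances)
        return float(np.sum(w * np.asarray(estimates)) / np.sum(w))

    # Hypothetical results for one element; window 3 suffers an interference.
    window_conc = [14.8, 15.3, 21.0, 15.1]   # ppb
    window_var = [0.20, 0.25, 9.00, 0.30]    # noisy window gets little weight
    print(f"pooled concentration: {pool_estimates(window_conc, window_var):.1f} ppb")
    ```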

  16. Variety identification of brown sugar using short-wave near infrared spectroscopy and multivariate calibration

    NASA Astrophysics Data System (ADS)

    Yang, Haiqing; Wu, Di; He, Yong

    2007-11-01

    Near-infrared spectroscopy (NIRS) is a rapid, pollution-free method for quantitative and qualitative analysis, characterized by high speed, non-destructiveness, high precision and reliable detection data. A new approach for variety discrimination of brown sugars using short-wave NIR spectroscopy (800-1050 nm) was developed in this work. The relationship between the absorbance spectra and brown sugar varieties was established. The spectral data were compressed by principal component analysis (PCA). The resulting features can be visualized in principal component (PC) space, which can lead to the discovery of structures correlated with the different classes of spectral samples. It appears to provide a reasonable variety clustering of brown sugars. The 2-D PC plot obtained using the first two PCs can be used for pattern recognition. Least-squares support vector machines (LS-SVM) were applied to solve the multivariate calibration problems in a relatively fast way. The work has shown that the short-wave NIR spectroscopy technique is suitable for brand identification of brown sugar, and that LS-SVM has better identification ability than PLS when the calibration set is small.

  17. Improved Quantitative Analysis of Ion Mobility Spectrometry by Chemometric Multivariate Calibration

    SciTech Connect

    Fraga, Carlos G.; Kerr, Dayle; Atkinson, David A.

    2009-09-01

    Traditional peak-area calibration and the multivariate calibration methods of principal component regression (PCR) and partial least squares (PLS), including unfolded PLS (U-PLS) and multi-way PLS (N-PLS), were evaluated for the quantification of 2,4,6-trinitrotoluene (TNT) and cyclo-1,3,5-trimethylene-2,4,6-trinitramine (RDX) in Composition B samples analyzed by temperature step desorption ion mobility spectrometry (TSD-IMS). The true TNT and RDX concentrations of eight Composition B samples were determined by high performance liquid chromatography with UV absorbance detection. Most of the Composition B samples were found to have distinct TNT and RDX concentrations. Applying PCR and PLS to the exact same IMS spectra used for the peak-area study improved quantitative accuracy and precision approximately 3 to 5 fold and 2 to 4 fold, respectively. This in turn improved the probability of correctly identifying Composition B samples based upon the estimated RDX and TNT concentrations from 11% with peak area to 44% and 89% with PLS. This improvement increases the potential of obtaining forensic information from IMS analyzers by providing some ability to differentiate or match Composition B samples based on their TNT and RDX concentrations.

  18. Multivariate Calibration Models for Sorghum Composition using Near-Infrared Spectroscopy

    SciTech Connect

    Wolfrum, E.; Payne, C.; Stefaniak, T.; Rooney, W.; Dighe, N.; Bean, B.; Dahlberg, J.

    2013-03-01

    NREL developed calibration models based on near-infrared (NIR) spectroscopy coupled with multivariate statistics to predict compositional properties relevant to cellulosic biofuels production for a variety of sorghum cultivars. A robust calibration population was developed in an iterative fashion. The quality of models developed using the same sample geometry on two different types of NIR spectrometers and two different sample geometries on the same spectrometer did not vary greatly.

  19. Coping with matrix effects in headspace solid phase microextraction gas chromatography using multivariate calibration strategies.

    PubMed

    Ferreira, Vicente; Herrero, Paula; Zapata, Julián; Escudero, Ana

    2015-08-14

    SPME is extremely sensitive to experimental parameters affecting liquid-gas and gas-solid distribution coefficients. Our aims were to measure the weights of these factors and to design a multivariate strategy, based on the addition of a pool of internal standards, to minimize matrix effects. Synthetic but real-like wines containing selected analytes and variable amounts of ethanol, non-volatile constituents and major volatile compounds were prepared following a factorial design. The ANOVA study revealed that, even with a strong matrix dilution, matrix effects are important and additive with non-significant interaction effects, and that the presence of major volatile constituents is the most dominant factor. A single internal standard provided a robust calibration for 15 out of 47 analytes. Then, two different multivariate calibration strategies based on Partial Least Squares regression were run in order to build calibration functions based on 13 different internal standards able to cope with matrix effects. The first is based on the calculation of Multivariate Internal Standards (MIS), linear combinations of the normalized signals of the 13 internal standards, which provide the expected area of a given unit of analyte present in each sample. The second strategy is a direct calibration relating concentration to the 13 relative areas measured in each sample for each analyte. Overall, 47 different compounds can be reliably quantified in a single fully automated method with overall uncertainties better than 15%. PMID:26166296

  20. The estimation of total petroleum hydrocarbons content in waste water by IR spectrometry with multivariate calibrations.

    PubMed

    Vershinin, Viacheslav I; Petrov, Sergey V

    2016-02-01

    Alkanes, cycloalkanes and arenes have rather different sensitivities in IR-spectrometric determination, leading to high relative uncertainty (δc) in the total petroleum hydrocarbon index (TPH) of natural and waste waters. Another source of TPH uncertainty is the mismatch between the group composition of the hydrocarbon mixture in the sample and in the standard substance used for one-dimensional calibration. Increasing the number of wavelengths and using multivariate calibrations permit the reduction of δc to <10% rel. These calibrations may be constructed from the IR spectra of extracts from aqueous solutions with known hydrocarbon content. The method takes into account the losses of hydrocarbons during sample preparation. The accuracy of TPH estimation with this method is much better than with standard methods based on one-dimensional calibration against the Simard mixture. The new method is useful in produced waste water analysis. PMID:26653437

  21. A GPU-Based Implementation of the Firefly Algorithm for Variable Selection in Multivariate Calibration Problems

    PubMed Central

    de Paula, Lauro C. M.; Soares, Anderson S.; de Lima, Telma W.; Delbem, Alexandre C. B.; Coelho, Clarimar J.; Filho, Arlindo R. G.

    2014-01-01

    Several variable selection algorithms in multivariate calibration can be accelerated using Graphics Processing Units (GPU). Among these algorithms, the Firefly Algorithm (FA) is a recently proposed metaheuristic that may be used for variable selection. This paper presents a GPU-based FA (FA-MLR) with a multiobjective formulation for variable selection in multivariate calibration problems and compares it with some traditional sequential algorithms in the literature. The advantage of the proposed implementation is demonstrated in an example involving a relatively large number of variables. The results showed that the FA-MLR, in comparison with the traditional algorithms, is a more suitable choice and a relevant contribution to the variable selection problem. Additionally, the results also demonstrated that the FA-MLR executed on a GPU can be five times faster than its sequential implementation. PMID:25493625

  22. Predicting coliform concentrations in upland impoundments: design and calibration of a multivariate model.

    PubMed Central

    Kay, D; McDonald, A

    1983-01-01

    This paper reports on the calibration and use of a multiple regression model designed to predict concentrations of Escherichia coli and total coliforms in two upland British impoundments. The multivariate approach has improved predictive capability over previous univariate linear models because it includes predictor variables for the timing and magnitude of hydrological input to the reservoirs and physicochemical parameters of water quality. The significance of these results for catchment management research is considered. PMID:6639016

  23. Improved Multivariate Calibration Models for Corn Stover Feedstock and Dilute-Acid Pretreated Corn Stover

    SciTech Connect

    Wolfrum, E. J.; Sluiter, A. D.

    2009-01-01

    We have studied rapid calibration models to predict the composition of a variety of biomass feedstocks by correlating near-infrared (NIR) spectroscopic data to compositional data produced using traditional wet chemical analysis techniques. The rapid calibration models are developed using multivariate statistical analysis of the spectroscopic and wet chemical data. This work discusses the latest versions of the NIR calibration models for corn stover feedstock and dilute-acid pretreated corn stover. Measures of the calibration precision and uncertainty are presented. No statistically significant differences (p = 0.05) are seen between NIR calibration models built using different mathematical pretreatments. Finally, two common algorithms for building NIR calibration models are compared; no statistically significant differences (p = 0.05) are seen for the major constituents glucan, xylan, and lignin, but the algorithms did produce different predictions for total extractives. A single calibration model combining the corn stover feedstock and dilute-acid pretreated corn stover samples gave less satisfactory predictions than the separate models.

  24. Enzymatic electrochemical detection coupled to multivariate calibration for the determination of phenolic compounds in environmental samples.

    PubMed

    Hernandez, Silvia R; Kergaravat, Silvina V; Pividori, Maria Isabel

    2013-03-15

    An approach based on the electrochemical detection of the horseradish peroxidase enzymatic reaction by means of square wave voltammetry was developed for the determination of phenolic compounds in environmental samples. First, a systematic optimization of three factors involved in the enzymatic reaction was carried out using response surface methodology through a central composite design. Second, the enzymatic electrochemical detection coupled with a multivariate calibration method based on the partial least-squares technique was optimized for the determination of a mixture of five phenolic compounds, i.e. phenol, p-aminophenol, p-chlorophenol, hydroquinone and pyrocatechol. Calibration and validation sets were built and assessed. In the calibration model, the LODs for the phenolic compounds ranged from 0.6 to 1.4 × 10⁻⁶ mol L⁻¹. Recoveries for prediction samples were higher than 85%. These compounds were analyzed simultaneously in spiked samples and in water samples collected close to tanneries and landfills. PMID:23598144

  25. Simultaneous Determination of Metamizole, Thiamin and Pyridoxin Using UV-Spectroscopy in Combination with Multivariate Calibration

    PubMed Central

    Chotimah, Chusnul; Sudjadi; Riyanto, Sugeng; Rohman, Abdul

    2015-01-01

    Purpose: Analysis of drugs in multicomponent systems is officially carried out using chromatographic techniques; however, these are laborious and involve sophisticated instrumentation. Therefore, UV-VIS spectrophotometry coupled with multivariate calibration by partial least squares (PLS) was developed for the quantitative analysis of metamizole, thiamin and pyridoxin in the presence of cyanocobalamine without any separation step. Methods: Calibration and validation samples were prepared. The calibration model was built from a series of sample mixtures containing these drugs in fixed proportions. Cross validation of the calibration samples using the leave-one-out technique was used to identify the smaller set of components that provides the greatest predictive ability. The evaluation of the calibration model was based on the coefficient of determination (R2) and the root mean square error of calibration (RMSEC). Results: The coefficient of determination (R2) for the relationship between actual and predicted values for all studied drugs was higher than 0.99, indicating good accuracy. The RMSEC values obtained were relatively low, indicating good precision. The accuracy and precision of the developed method showed no significant difference from those obtained by the official HPLC method. Conclusion: The developed method (UV-VIS spectrophotometry in combination with PLS) was successfully used for the analysis of metamizole, thiamin and pyridoxin in tablet dosage form. PMID:26819934

  26. Multivariate versus classical univariate calibration methods for spectrofluorimetric data: application to simultaneous determination of olmesartan medoxamil and amlodipine besylate in their combined dosage form.

    PubMed

    Darwish, Hany W; Backeit, Ahmed H

    2013-01-01

    Olmesartan medoxamil (OLM, an angiotensin II receptor blocker) and amlodipine besylate (AML, a dihydropyridine calcium channel blocker) are co-formulated in a single-dose combination for the treatment of hypertensive patients whose blood pressure is not adequately controlled on either component monotherapy. In this work, four multivariate and two univariate calibration methods were applied for the simultaneous spectrofluorimetric determination of OLM and AML in their combined pharmaceutical tablets in all ratios approved by the FDA. The four multivariate methods are partial least squares (PLS), genetic algorithm PLS (GA-PLS), principal component ANN (PC-ANN) and GA-ANN. The two proposed univariate calibration methods are a direct spectrofluorimetric method for OLM and an isoabsorptive method for determination of the total concentration of OLM and AML, and hence of AML by subtraction. The results showed the superiority of the multivariate calibration methods over the univariate ones for the analysis of the binary mixture. The optimum assay conditions were established and the proposed multivariate calibration methods were successfully applied for the assay of the two drugs in a validation set and in combined pharmaceutical tablets with excellent recoveries. No interference was observed from common pharmaceutical additives. The results compared favorably with those obtained by a reference spectrophotometric method. PMID:22895851

  27. Key wavelengths screening using competitive adaptive reweighted sampling method for multivariate calibration.

    PubMed

    Li, Hongdong; Liang, Yizeng; Xu, Qingsong; Cao, Dongsheng

    2009-08-19

    By employing the simple but effective principle 'survival of the fittest' on which Darwin's Evolution Theory is based, a novel strategy for selecting an optimal combination of key wavelengths of multi-component spectral data, named competitive adaptive reweighted sampling (CARS), is developed. Key wavelengths are defined as the wavelengths with large absolute coefficients in a multivariate linear regression model, such as partial least squares (PLS). In the present work, the absolute values of regression coefficients of PLS model are used as an index for evaluating the importance of each wavelength. Then, based on the importance level of each wavelength, CARS sequentially selects N subsets of wavelengths from N Monte Carlo (MC) sampling runs in an iterative and competitive manner. In each sampling run, a fixed ratio (e.g. 80%) of samples is first randomly selected to establish a calibration model. Next, based on the regression coefficients, a two-step procedure including exponentially decreasing function (EDF) based enforced wavelength selection and adaptive reweighted sampling (ARS) based competitive wavelength selection is adopted to select the key wavelengths. Finally, cross validation (CV) is applied to choose the subset with the lowest root mean square error of CV (RMSECV). The performance of the proposed procedure is evaluated using one simulated dataset together with one near infrared dataset of two properties. The results reveal an outstanding characteristic of CARS that it can usually locate an optimal combination of some key wavelengths which are interpretable to the chemical property of interest. Additionally, our study shows that better prediction is obtained by CARS when compared to full spectrum PLS modeling, Monte Carlo uninformative variable elimination (MC-UVE) and moving window partial least squares regression (MWPLSR). PMID:19616692
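
    The CARS loop can be condensed into a short sketch: repeated Monte Carlo PLS fits, an exponentially shrinking number of retained wavelengths chosen by absolute regression coefficient, and RMSECV to pick the winning subset. The adaptive reweighted sampling step is simplified to deterministic top-ranking here, and the EDF constants are arbitrary, so this illustrates the idea rather than reproducing the authors' implementation.

    ```python
    # Condensed CARS-like wavelength selection (simplified; see caveats above).
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    def cars_sketch(X, y, n_runs=50, n_components=3):
        n_samples, n_vars = X.shape
        retained = np.arange(n_vars)
        best_subset, best_rmsecv = retained, np.inf
        for i in range(n_runs):
            # Monte Carlo sampling: fit PLS on a random 80% of the samples.
            idx = np.random.choice(n_samples, int(0.8 * n_samples), replace=False)
            pls = PLSRegression(n_components=n_components)
            pls.fit(X[np.ix_(idx, retained)], y[idx])
            weight = np.abs(pls.coef_).ravel()
            # Exponentially decreasing function: keep fewer variables each run.
            n_keep = max(n_components + 1,
                         int(n_vars * np.exp(-3.0 * (i + 1) / n_runs)))
            retained = retained[np.argsort(weight)[::-1][:n_keep]]
            # Score the current subset by cross-validated RMSE.
            pred = cross_val_predict(PLSRegression(n_components=n_components),
                                     X[:, retained], y, cv=5).ravel()
            rmsecv = np.sqrt(np.mean((pred - y) ** 2))
            if rmsecv < best_rmsecv:
                best_subset, best_rmsecv = retained.copy(), rmsecv
        return best_subset, best_rmsecv
    ```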

  28. Applying Multivariate Discrete Distributions to Genetically Informative Count Data.

    PubMed

    Kirkpatrick, Robert M; Neale, Michael C

    2016-03-01

    We present a novel method of conducting biometric analysis of twin data when the phenotypes are integer-valued counts, which often show an L-shaped distribution. Monte Carlo simulation is used to compare five likelihood-based approaches to modeling: our multivariate discrete method, when its distributional assumptions are correct, when they are incorrect, and three other methods in common use. With data simulated from a skewed discrete distribution, recovery of twin correlations and proportions of additive genetic and common environment variance was generally poor for the Normal, Lognormal and Ordinal models, but good for the two discrete models. Sex-separate applications to substance-use data from twins in the Minnesota Twin Family Study showed superior performance of two discrete models. The new methods are implemented using R and OpenMx and are freely available. PMID:26497008

  29. Determination of fragrance content in perfume by Raman spectroscopy and multivariate calibration.

    PubMed

    Godinho, Robson B; Santos, Mauricio C; Poppi, Ronei J

    2016-03-15

    An alternative methodology is herein proposed for the determination of fragrance content in perfumes and their classification according to the guidelines established by fine perfume manufacturers. The methodology is based on Raman spectroscopy associated with multivariate calibration, allowing the determination of fragrance content in a fast, nondestructive, and sustainable manner. The results were considered consistent with those of the conventional method, with standard error of prediction values lower than 1.0%. This result indicates that the proposed technology is a feasible analytical tool for determination of the fragrance content in a hydro-alcoholic solution for use in manufacturing, quality control and regulatory agencies. PMID:26771246

  30. Determination of fragrance content in perfume by Raman spectroscopy and multivariate calibration

    NASA Astrophysics Data System (ADS)

    Godinho, Robson B.; Santos, Mauricio C.; Poppi, Ronei J.

    2016-03-01

    An alternative methodology is herein proposed for the determination of fragrance content in perfumes and their classification according to the guidelines established by fine perfume manufacturers. The methodology is based on Raman spectroscopy associated with multivariate calibration, allowing the determination of fragrance content in a fast, nondestructive, and sustainable manner. The results were considered consistent with those of the conventional method, with standard error of prediction values lower than 1.0%. This result indicates that the proposed technology is a feasible analytical tool for determination of the fragrance content in a hydro-alcoholic solution for use in manufacturing, quality control and regulatory agencies.

  31. A linear semi-infinite programming strategy for constructing optimal wavelet transforms in multivariate calibration problems.

    PubMed

    Coelho, Clarimar José; Galvão, Roberto K H; de Araújo, Mário César U; Pimentel, Maria Fernanda; da Silva, Edvan Cirino

    2003-01-01

    A novel strategy for the optimization of wavelet transforms with respect to the statistics of the data set in multivariate calibration problems is proposed. The optimization follows a linear semi-infinite programming formulation, which does not display local maxima problems and can be reproducibly solved with modest computational effort. After the optimization, a variable selection algorithm is employed to choose a subset of wavelet coefficients with minimal collinearity. The selection allows the building of a calibration model by direct multiple linear regression on the wavelet coefficients. In an illustrative application involving the simultaneous determination of Mn, Mo, Cr, Ni, and Fe in steel samples by ICP-AES, the proposed strategy yielded more accurate predictions than PCR, PLS, and nonoptimized wavelet regression. PMID:12767151

  32. Design of multivariable feedback control systems via spectral assignment [as applied to aircraft flight control]

    NASA Technical Reports Server (NTRS)

    Liberty, S. R.; Mielke, R. R.; Tung, L. J.

    1981-01-01

    Applied research in the area of spectral assignment in multivariable systems is reported. A frequency domain technique for determining the set of all stabilizing controllers for a single feedback loop multivariable system is described. It is shown that decoupling and tracking are achievable using this procedure. The technique is illustrated with a simple example.

  33. Sum of ranking differences (SRD) to ensemble multivariate calibration model merits for tuning parameter selection and comparing calibration methods.

    PubMed

    Kalivas, John H; Héberger, Károly; Andries, Erik

    2015-04-15

    Most multivariate calibration methods require selection of tuning parameters, such as partial least squares (PLS) or the Tikhonov regularization variant ridge regression (RR). Tuning parameter values determine the direction and magnitude of respective model vectors, thereby setting the resultant prediction abilities of the model vectors. Simultaneously, tuning parameter values establish the corresponding bias/variance and the underlying selectivity/sensitivity tradeoffs. Selection of the final tuning parameter is often accomplished through some form of cross-validation and evaluation of the resultant root mean square error of cross-validation (RMSECV) values. However, selection of a "good" tuning parameter with this one model evaluation merit is almost impossible. Including additional model merits assists tuning parameter selection to provide better balanced models as well as allowing for a reasonable comparison between calibration methods. Using multiple merits requires decisions to be made on how to combine and weight the merits into an information criterion. An abundance of options is possible. Presented in this paper is the sum of ranking differences (SRD) to ensemble a collection of model evaluation merits varying across tuning parameters. It is shown that the SRD consensus ranking of model tuning parameters allows automatic selection of the final model, or a collection of models if so desired. Essentially, the user's preference for the degree of balance between bias and variance ultimately decides the merits used in SRD and hence, the tuning parameter values ranked lowest by SRD for automatic selection. The SRD process is also shown to allow simultaneous comparison of different calibration methods for a particular data set in conjunction with tuning parameter selection. Because SRD evaluates consistency across multiple merits, decisions on how to combine and weight merits are avoided. To demonstrate the utility of SRD, a near infrared spectral data set and a
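
    The SRD computation itself is compact: each candidate model (column) is scored by the sum of absolute differences between its ranking of the merit rows and a consensus ranking. Using the row-wise mean as the reference column is an assumption, as are the merit values below.

    ```python
    # Sum of ranking differences (SRD): lower score = more consistent with
    # the consensus across merits. Reference column assumed = row-wise mean.
    import numpy as np
    from scipy.stats import rankdata

    def srd_scores(merit_matrix):
        """Rows = model-evaluation merits, columns = candidate models."""
        ref_ranks = rankdata(merit_matrix.mean(axis=1))
        return np.array([np.abs(rankdata(col) - ref_ranks).sum()
                         for col in merit_matrix.T])

    # Hypothetical merits (e.g., RMSECV, model-vector norm, bias) for
    # four tuning-parameter candidates:
    merits = np.array([[0.12, 0.10, 0.11, 0.30],
                       [1.80, 2.10, 1.90, 0.90],
                       [0.05, 0.07, 0.06, 0.02]])
    scores = srd_scores(merits)
    print("SRD per candidate:", scores, "-> pick column", int(np.argmin(scores)))
    ```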

  34. Rapid detection of whey in milk powder samples by spectrophotometric and multivariate calibration.

    PubMed

    de Carvalho, Bruna Mara Aparecida; de Carvalho, Lorendane Millena; dos Reis Coimbra, Jane Sélia; Minim, Luis Antônio; de Souza Barcellos, Edilton; da Silva Júnior, Willer Ferreira; Detmann, Edenio; de Carvalho, Gleidson Giordano Pinto

    2015-05-01

    A rapid method for the detection and quantification of the adulteration of milk powder by the addition of whey was assessed by measuring glycomacropeptide (GMP) protein using mid-infrared spectroscopy (MIR). Fluid milk samples were dried and then spiked with different concentrations of GMP and whey. Calibration models were developed from the spectral data using multivariate techniques. For principal component analysis and discriminant analysis, excellent percentages of correct classification were achieved, in accordance with the increase in the proportion of whey in the samples. For partial least squares regression analysis, the correlation coefficient (r) and root mean square error of prediction (RMSEP) of the best model were 0.9885 and 1.17, respectively. The rapid analysis, low-cost monitoring and high throughput of samples tested per unit time indicate that MIR spectroscopy may hold potential as a rapid and reliable method for detecting milk powder fraud using cheese whey. PMID:25529644

  35. Multivariate standardisation for non-linear calibration range in the chemiluminescence determination of chromium.

    PubMed

    Tortajada-Genaro, L A; Campíns-Falcó, P

    2007-05-15

    Multivariate standardisation is proposed for the successful chemiluminescence determination of chromium based on the luminol-hydrogen peroxide reaction. In an extended concentration range, a non-linear calibration model is needed. The instrumental situations studied were different detection cells, instruments, assemblies, time and their possible combinations. Chemiluminescence kinetic registers have been transferred using the piecewise direct standardisation (PDS) method. The optimisation of transfer parameters was carried out based on prediction residual error criteria. Non-linear principal component regression (NL-PCR) and non-linear partial least squares regression (NL-PLS) were chosen for modelling the signal-concentration relationship of the transferred registers. Good accuracy and precision were obtained for water samples. The chromium concentrations were statistically in agreement with reference method values and with recovery studies. Therefore, it is possible to transfer chemiluminescence curves without losing predictive ability, even in the presence of non-linear behaviour. PMID:19071716

  36. Correlation of quantitative sensorial descriptors and chromatographic signals of beer using multivariate calibration strategies.

    PubMed

    da Silva, Gilmare A; Maretto, Danilo A; Bolini, Helena Maria A; Teófilo, Reinaldo F; Augusto, Fabio; Poppi, Ronei J

    2012-10-01

    In this study, two important sensorial parameters of beer quality - bitterness and grain taste - were correlated with data obtained from headspace solid phase microextraction - gas chromatography with mass spectrometric detection (HS-SPME-GC-MS) analysis. Sensorial descriptors of 32 samples of Pilsner beers from different brands were previously estimated by conventional quantitative descriptive analysis (QDA). Areas of 54 compounds systematically found in the HS-SPME-GC-MS chromatograms were used as input data. Multivariate calibration models were established between the chromatographic areas and the sensorial parameters. The peaks (compounds) relevant for building each multivariate calibration model were determined by genetic algorithm (GA) and ordered predictors selection (OPS), tools for variable selection. GA selected 11 and 15 chromatographic peak areas for bitterness and grain taste, respectively, while OPS selected 17 and 16 compounds for the same parameters. Seven variables were commonly pointed out by both variable selection methods for the bitterness parameter, and 10 variables were commonly selected for the grain taste attribute. The peak areas most significant for the evaluation of the parameters found by both variable selection methods were fed to the PLS algorithm to find the proper models. The obtained models estimated the sensorial descriptors with good accuracy and precision, showing that the utilised approaches were efficient in finding the evaluated correlations. Certainly, the combination of proper chemometric methodologies and instrumental data can be used as a potential tool for sensorial evaluation of foods and beverages, allowing for fast and secure replication of parameters usually measured by trained panellists. PMID:25005998

  37. An ensemble method based on uninformative variable elimination and mutual information for spectral multivariate calibration

    NASA Astrophysics Data System (ADS)

    Tan, Chao; Wang, Jinyue; Wu, Tong; Qin, Xin; Li, Menglong

    2010-12-01

    Based on the combination of uninformative variable elimination (UVE), bootstrap and mutual information (MI), a simple ensemble algorithm, named ESPLS, is proposed for spectral multivariate calibration (MVC). In ESPLS, uninformative variables are first removed; a preparatory training set is then produced by bootstrap, on which an MI spectrum of the retained variables is calculated. The variables that exhibit higher MI than a defined threshold form a subspace on which a candidate partial least-squares (PLS) model is constructed. This process is repeated. After a number of candidate models are obtained, a small subset of the models is picked out to construct an ensemble model by simple/weighted averaging. Four near/mid-infrared (NIR/MIR) spectral datasets concerning the determination of six components are used to verify the proposed ESPLS. The results indicate that ESPLS is superior to UVEPLS and to its combination with MI-based variable selection (SPLS) in terms of both accuracy and robustness. Besides, from the perspective of end-users, ESPLS does not increase the complexity of a calibration while enhancing its performance.
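
    A rough sketch of the ensemble idea: bootstrap the training set, keep variables whose mutual information with the property clears a threshold, fit a PLS sub-model, and average the sub-model predictions. The UVE pre-filtering step is omitted, and the threshold, model count and factor count are arbitrary choices.

    ```python
    # ESPLS-like ensemble (simplified): bootstrap + MI screening + PLS,
    # combined by a simple average. UVE pre-filtering is not shown.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.feature_selection import mutual_info_regression

    def ensemble_predict(X, y, X_new, n_models=20, mi_quantile=0.6, n_components=3):
        preds = []
        n = len(y)
        for _ in range(n_models):
            boot = np.random.choice(n, n, replace=True)      # bootstrap set
            mi = mutual_info_regression(X[boot], y[boot])    # MI per variable
            keep = np.where(mi >= np.quantile(mi, mi_quantile))[0]
            model = PLSRegression(n_components=n_components)
            model.fit(X[boot][:, keep], y[boot])
            preds.append(model.predict(X_new[:, keep]).ravel())
        return np.mean(preds, axis=0)                        # simple-average ensemble
    ```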

  38. Square wave voltammetry with multivariate calibration tools for determination of eugenol, carvacrol and thymol in honey.

    PubMed

    Tonello, Natalia; Moressi, Marcela Beatriz; Robledo, Sebastián Noel; D'Eramo, Fabiana; Marioli, Juan Miguel

    2016-09-01

    The simultaneous determination of eugenol (EU), thymol (Ty) and carvacrol (CA) in honey samples, employing square wave voltammetry (SWV) and chemometric tools, is reported for the first time. For this purpose, a glassy carbon electrode (GCE) was used as the working electrode. The operating conditions and influencing parameters (several chemical and instrumental parameters) were first optimized by cyclic voltammetry (CV). Thus, the effects of scan rate, pH and analyte concentration on the electrochemical response of the above-mentioned molecules were studied. The results show that the electrochemical responses of the three compounds are very similar and that the voltammetric traces present a high degree of overlap under all the experimental conditions used in this study. Therefore, two chemometric tools were tested to obtain the multivariate calibration model. One method was partial least squares regression (PLS-1), which assumes linear behaviour. The other, nonlinear, method was an artificial neural network (ANN). In this last case we used a supervised, feed-forward network with Levenberg-Marquardt back-propagation training. From the analysis of accuracy and precision between nominal and estimated concentrations calculated using both methods, it was inferred that the ANN method was a good model to quantify EU, Ty and CA in honey samples. Recovery percentages were between 87% and 104%, except for two samples whose values were 136% and 72%. The analytical methodology was simple, fast and accurate. PMID:27343610

  39. Applied Statistics: From Bivariate through Multivariate Techniques [with CD-ROM]

    ERIC Educational Resources Information Center

    Warner, Rebecca M.

    2007-01-01

    This book provides a clear introduction to widely used topics in bivariate and multivariate statistics, including multiple regression, discriminant analysis, MANOVA, factor analysis, and binary logistic regression. The approach is applied and does not require formal mathematics; equations are accompanied by verbal explanations. Students are asked…

  40. An integrated approach to the simultaneous selection of variables, mathematical pre-processing and calibration samples in partial least-squares multivariate calibration.

    PubMed

    Allegrini, Franco; Olivieri, Alejandro C

    2013-10-15

    A new optimization strategy for multivariate partial-least-squares (PLS) regression analysis is described. It was achieved by integrating three efficient strategies to improve PLS calibration models: (1) variable selection based on ant colony optimization, (2) mathematical pre-processing selection by a genetic algorithm, and (3) sample selection through a distance-based procedure. Outlier detection has also been included as part of the model optimization. All the above procedures have been combined into a single algorithm, whose aim is to find the best PLS calibration model within a Monte Carlo-type philosophy. Simulated and experimental examples are employed to illustrate the success of the proposed approach. PMID:24054659

  41. Multivariate calibration modeling of liver oxygen saturation using near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Cingo, Ndumiso A.; Soller, Babs R.; Puyana, Juan C.

    2000-05-01

    The liver has been identified as an ideal site to monitor spectroscopically for changes in oxygen saturation during liver transplantation and shock because it is susceptible to reduced blood flow and oxygen transport. Near-IR spectroscopy, combined with multivariate calibration techniques, has been shown to be a viable technique for monitoring oxygen saturation changes in various organs in a minimally invasive manner. The liver has a dual circulation: blood enters the liver through the portal vein and hepatic artery, and leaves through the hepatic vein. Therefore, it is of utmost importance to determine how the liver NIR spectroscopic information correlates with the different regions of the hepatic lobule as the dual circulation flows from the presinusoidal space into the post-sinusoidal region of the central vein. For NIR spectroscopic information to reliably represent the status of liver oxygenation, the NIR oxygen saturation should best correlate with the post-sinusoidal region. In a series of six pigs undergoing induced hemorrhagic shock, NIR spectra collected from the liver were used together with oxygen saturation reference data from the hepatic and portal veins, and an average of the two, to build partial least-squares regression models. Results obtained from these models show that the hepatic vein, and an average of the hepatic and portal veins, provide reference data that best correlate with the NIR spectral information, while the portal vein reference measurement provides poorer correlation and accuracy. These results indicate that NIR determination of oxygen saturation in the liver can provide an assessment of liver oxygen utilization.

  2. Simultaneous determination of paracetamol, phenylephrine hydrochloride and chlorpheniramine maleate in pharmaceutical preparations using multivariate calibration

    NASA Astrophysics Data System (ADS)

    Samadi-Maybodi, Abdolraouf; Hassani Nejad-Darzi, Seyed Karim

    2010-04-01

    Resolution of ternary mixtures of paracetamol, phenylephrine hydrochloride and chlorpheniramine maleate, with minimum sample pre-treatment and without analyte separation, has been successfully achieved by partial least squares with one dependent variable (PLS-1), principal component regression and hybrid linear analysis. The analytical data were obtained from UV-vis spectra of the above compounds. A central composite design was used in the range of 1-15 mg L-1 for both the calibration and validation sets. Model refinement and validation were performed by cross-validation. Figures of merit such as selectivity, sensitivity, analytical sensitivity and limit of detection were determined for all three compounds. The procedure was successfully applied to the simultaneous determination of the above compounds in pharmaceutical tablets.

  3. Multivariate analysis applied to the study of spatial distributions found in drug-eluting stent coatings by confocal Raman microscopy.

    PubMed

    Balss, Karin M; Long, Frederick H; Veselov, Vladimir; Orana, Argjenta; Akerman-Revis, Eugena; Papandreou, George; Maryanoff, Cynthia A

    2008-07-01

    Multivariate data analysis was applied to confocal Raman measurements on stents coated with the polymers and drug used in the CYPHER Sirolimus-eluting Coronary Stents. Partial least-squares (PLS) regression was used to establish three independent calibration curves for the coating constituents: sirolimus, poly(n-butyl methacrylate) [PBMA], and poly(ethylene-co-vinyl acetate) [PEVA]. The PLS calibrations were based on average spectra generated from each spatial location profiled. The PLS models were tested on six unknown stent samples to assess accuracy and precision. The difference between PLS predictions and laboratory assay values for sirolimus was less than 1 wt % for the composite of the six unknowns, while the polymer models differed by less than 0.5 wt % for the combined samples. The linearity and specificity of the three PLS models were also demonstrated. In contrast to earlier univariate models, the PLS models achieved mass balance with better accuracy. This analysis was extended to evaluate the spatial distribution of the three constituents. Quantitative bitmap images of drug-eluting stent coatings are presented for the first time to assess the local distribution of components. PMID:18510342

  4. Simultaneous Determination of 6-Mercaptopurine and its Oxidative Metabolites in Synthetic Solutions and Human Plasma using Spectrophotometric Multivariate Calibration Methods

    PubMed Central

    Sorouraddin, Mohammad-Hossein; Khani, Mohammad-Yaser; Amini, Kaveh; Naseri, Abdolhossein; Asgari, Davoud; Rashidi, Mohammad-Reza

    2011-01-01

    Introduction 6-Mercaptopurine (6MP) is an important chemotherapeutic drug in the conventional treatment of childhood acute lymphoblastic leukemia (ALL). It is catabolized to 6-thiouric acid (6TUA) through 8-hydroxy-6-mercaptopurine (8OH6MP) or 6-thioxanthine (6TX) intermediates. Methods High-performance liquid chromatography (HPLC) is usually used to determine the contents of therapeutic drugs, metabolites and other important biomedical analytes in biological samples. In the present study, the multivariate calibration methods partial least squares (PLS-1) and principal component regression (PCR) were developed and validated for the simultaneous determination of 6MP and its oxidative metabolites (6TUA, 8OH6MP and 6TX) without analyte separation in spiked human plasma. Mixtures of 6MP, 8OH6MP, 6TX and 6TUA were resolved by applying PLS-1 and PCR to their UV spectra. Results Recoveries (%) obtained for 6MP, 8OH6MP, 6TX and 6TUA were 94.5-97.5, 96.6-103.3, 95.1-96.9 and 93.4-95.8, respectively, using PLS-1, and 96.7-101.3, 96.2-98.8, 95.8-103.3 and 94.3-106.1, respectively, using PCR. The net analyte signal (NAS) concept was used to calculate multivariate analytical figures of merit such as limit of detection (LOD), selectivity and sensitivity. The limits of detection for 6MP, 8OH6MP, 6TX and 6TUA were calculated to be 0.734, 0.439, 0.797 and 0.482 μmol L-1, respectively, using PLS, and 0.724, 0.418, 0.783 and 0.535 μmol L-1, respectively, using PCR. HPLC was also applied as a validation method for the simultaneous determination of these thiopurines in the synthetic solutions and human plasma. Conclusion The combination of spectroscopic techniques and chemometric methods (PLS and PCR) provides a simple but powerful method for the simultaneous analysis of multicomponent mixtures. PMID:23678408

  5. Applying the multivariate time-rescaling theorem to neural population models

    PubMed Central

    Gerhard, Felipe; Haslinger, Robert; Pipa, Gordon

    2011-01-01

    Statistical models of neural activity are integral to modern neuroscience. Recently, interest has grown in modeling the spiking activity of populations of simultaneously recorded neurons to study the effects of correlations and functional connectivity on neural information processing. However, any statistical model must be validated by an appropriate goodness-of-fit test. Kolmogorov-Smirnov tests based upon the time-rescaling theorem have proven to be useful for evaluating point-process-based statistical models of single-neuron spike trains. Here we discuss the extension of the time-rescaling theorem to the multivariate (neural population) case. We show that even in the presence of strong correlations between spike trains, models which neglect couplings between neurons can erroneously pass the univariate time-rescaling test. We present the multivariate version of the time-rescaling theorem and provide a practical step-by-step procedure for applying it to test the sufficiency of neural population models. Using several simple analytically tractable models as well as more complex simulated and real data sets, we demonstrate that important features of the population activity can only be detected using the multivariate extension of the test. PMID:21395436
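
    For orientation, the univariate form of the test that the paper extends can be written in a few lines: if the model's conditional intensity λ(t) is correct, the rescaled inter-spike intervals are Exp(1), so z = 1 − exp(−τ) should be uniform on [0, 1], which a Kolmogorov-Smirnov test checks. The sketch below is a minimal illustration with a synthetic Poisson train, not the authors' population procedure; the integration grid size is an arbitrary choice.

```python
# A minimal univariate time-rescaling goodness-of-fit check.
import numpy as np
from scipy.stats import kstest

def time_rescaling_test(spike_times, intensity, t0=0.0, n_grid=200):
    """spike_times: sorted 1-D array; intensity: vectorized callable lambda(t)."""
    z, prev = [], t0
    for t in spike_times:
        grid = np.linspace(prev, t, n_grid)
        vals = intensity(grid)
        # trapezoidal integral of the intensity between consecutive spikes
        tau = np.sum((vals[1:] + vals[:-1]) / 2.0) * (grid[1] - grid[0])
        z.append(1.0 - np.exp(-tau))
        prev = t
    return kstest(np.asarray(z), "uniform")

# a homogeneous Poisson train tested against its true rate should pass
rng = np.random.default_rng(1)
spikes = np.cumsum(rng.exponential(1 / 5.0, size=500))      # rate = 5 Hz
print(time_rescaling_test(spikes, lambda t: np.full_like(t, 5.0)))
```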

  6. Applying Isotopic Effect in ITS-90 SPRT Calibrations

    NASA Astrophysics Data System (ADS)

    Pavese, F.

    2014-07-01

    The International Temperature Scale of 1990 (ITS-90) defines exact values for all fixed-point temperatures. For the standard platinum resistance thermometer (SPRT), for example, the resistance measured at each fixed point and the temperature defined in the ITS-90 are used as input data for the correction equations of the ITS-90. Starting from 2006, formal equations were added to the Technical Annex for the ITS-90 for computing the fixed-point temperatures of substances of different isotopic compositions, presently the e-H2 triple and vapor-pressure points, the Ne triple point, and the H2O triple point. This paper addresses the method required to apply the procedure defined in the ITS-90 for the calibration of an SPRT, according to the new requirements. The required procedure does not involve a "correction" of the fixed-point temperatures, since they are defined exactly by the ITS-90, but instead requires re-computing the measured resistances at the relevant fixed points. In those cases where resistance ratios with respect to the triple point of water are required, the re-computation must first be applied separately to the specific fixed points and to the triple point of water. Where re-computation is not possible because of insufficient information on the isotopic composition of the sample used, an additional component must be added to the total uncertainty budget.

  7. Quantitative analysis of chromium in potatoes by laser-induced breakdown spectroscopy coupled with linear multivariate calibration.

    PubMed

    Chen, Tianbing; Huang, Lin; Yao, Mingyin; Hu, Huiqin; Wang, Caihong; Liu, Muhua

    2015-09-01

    Laser-induced breakdown spectroscopy (LIBS) coupled with linear multivariate regression was used to analyze chromium (Cr) quantitatively in potatoes. The plasma was generated using a Nd:YAG laser, and the spectra were acquired with an Andor spectrometer integrated with an ICCD detector. Models relating the intensity of LIBS characteristic line(s) to the concentration of Cr were constructed to quantitatively predict the target content. Calibration curves with one, two, three, and four variables (unary to quaternary) were built to verify the accuracy of the linear regression. The intensities of the characteristic lines of Cr (Cr I: 425.43, 427.48, 428.97 nm) and Ca (Ca I: 422.67, 428.30, 430.25, 430.77, 431.86 nm) were used as input data for the multivariate calculations. According to the results, the quaternary linear regression model performed best among the four models. Good agreement was observed between the actual content provided by atomic absorption spectrometry and the values predicted by the quaternary linear regression model, with a relative error below 5.5% for validation samples S1 and S2. The results showed that the multivariate approach achieves better prediction accuracy than the univariate one, and suggested that LIBS coupled with linear multivariate calibration could be a useful tool for the rapid prediction of heavy metals in farm products, even when samples have similar elemental compositions. PMID:26368908
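
    The quaternary calibration amounts to an ordinary multivariate least-squares fit of Cr content on four line intensities. A schematic version, with placeholder intensities and an assumed line layout rather than the paper's data, might look like this:

```python
# Quaternary linear calibration: regress Cr content on four line intensities.
import numpy as np

# rows = calibration samples; columns = intensities at, e.g., Cr I 425.43 nm
# and Ca I 422.67 / 428.30 / 430.25 nm (an assumed layout, not the paper's)
I = np.array([[1.00, 0.80, 0.65, 0.70],
              [1.40, 0.82, 0.66, 0.71],
              [1.90, 0.85, 0.70, 0.74],
              [2.60, 0.88, 0.72, 0.76]])
c = np.array([0.5, 1.0, 1.5, 2.0])          # reference Cr content (placeholder)

A = np.column_stack([I, np.ones(len(c))])   # add an intercept term
coef, *_ = np.linalg.lstsq(A, c, rcond=None)

def predict(intensities):
    """Predict Cr content from a new set of four line intensities."""
    return np.append(intensities, 1.0) @ coef

print(predict([1.65, 0.84, 0.69, 0.73]))
```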

  8. A TRMM-Calibrated Infrared Rainfall Algorithm Applied Over Brazil

    NASA Technical Reports Server (NTRS)

    Negri, A. J.; Xu, L.; Adler, R. F.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    The development of a satellite infrared technique for estimating convective and stratiform rainfall and its application in studying the diurnal variability of rainfall in Amazonia are presented. The Convective-Stratiform Technique, calibrated by coincident, physically retrieved rain rates from the Tropical Rain Measuring Mission (TRMM) Microwave Imager (TMI), is applied during January to April 1999 over northern South America. The diurnal cycle of rainfall, as well as the division between convective and stratiform rainfall, is presented. Results compare well (with a one-hour lag) with the diurnal cycle derived from Tropical Ocean-Global Atmosphere (TOGA) radar-estimated rainfall in Rondonia. The satellite estimates reveal that convective rain constitutes, in the mean, 24% of the rain area while accounting for 67% of the rain volume. The effects of geography (rivers, lakes, coasts) and topography on the diurnal cycle of convection are examined. In particular, the Amazon River, downstream of Manaus, is shown to both enhance early morning rainfall and inhibit afternoon convection. Monthly estimates from this technique, dubbed CST/TMI, are verified over a dense rain gage network in the state of Ceara, in northeast Brazil. The CST/TMI showed a high bias equal to +33% of the gage mean, indicating that the TMI estimates alone may also be high. The root mean square difference (after removal of the bias) equaled 36.6% of the gage mean. The correlation coefficient was 0.77 based on 72 station-months.

  9. A TRMM-Calibrated Infrared Rainfall Algorithm Applied Over Brazil

    NASA Technical Reports Server (NTRS)

    Negri, Andrew J.; Xu, L.; Adler, R. F.; Einaudi, Franco (Technical Monitor)

    2001-01-01

    A satellite infrared (IR) technique for estimating rainfall over northern South America is presented. The objectives are to examine the diurnal variability of rainfall and to investigate the relative contributions from the convective and stratiform components. In this study, we apply the Convective-Stratiform Technique (CST) of Adler and Negri (1988). The parameters of the original technique were re-calibrated using coincident rainfall estimates (Olson et al., 2000) derived from the Tropical Rain Measuring Mission (TRMM) Microwave Imager (TMI) and GOES IR (11 micrometer) observations. Local circulations were found to play a major role in modulating the rainfall and its diurnal cycle. These included land/sea circulations (notably along the northeast Brazilian coast and in the Gulf of Panama), mountain/valley circulations (along the Andes Mountains), and circulations associated with the presence of rivers. This last category was examined in detail along the Amazon River east of Manaus. There we found an early morning rainfall maximum along the river (5 LT at 58W, 3 LT at 56W). Rainfall avoids the river in the afternoon (12 LT and later), notably at 56W. The width of the river seems to be generating a land/river circulation which enhances early morning rainfall but inhibits afternoon rainfall. Results are compared to ground-based radar data collected during the Large-Scale Biosphere-Atmosphere (LBA) experiment in southwest Brazil, to monthly raingages in northeastern Brazil, and to data from the TRMM Precipitation Radar.

  10. Determination of rice syrup adulterant concentration in honey using three-dimensional fluorescence spectra and multivariate calibrations

    NASA Astrophysics Data System (ADS)

    Chen, Quansheng; Qi, Shuai; Li, Huanhuan; Han, Xiaoyan; Ouyang, Qin; Zhao, Jiewen

    2014-10-01

    To rapidly and efficiently detect the presence of adulterants in honey, three-dimensional fluorescence spectroscopy (3DFS) was employed with the help of multivariate calibration. The 3D fluorescence spectra were compressed using characteristic extraction and principal component analysis (PCA). Then, partial least squares (PLS) and back-propagation neural network (BP-ANN) algorithms were used for modeling. The model was optimized by cross-validation, and its performance was evaluated according to the root mean square error of prediction (RMSEP) and the correlation coefficient (R) in the prediction set. The results showed that the BP-ANN model was superior to the PLS models, and the optimum prediction results of the mixed-group (sunflower + longan + buckwheat + rape) model were achieved as follows: RMSEP = 0.0235 and R = 0.9787 in the prediction set. The study demonstrated that 3D fluorescence spectroscopy combined with multivariate calibration has high potential for rapid, nondestructive, and accurate quantitative analysis of honey adulteration.
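
    As a rough sketch of this modeling route, the snippet below compresses the unfolded spectra with PCA and compares PLS against a small feed-forward network; scikit-learn's MLPRegressor stands in for the paper's BP-ANN, and the arrays, component counts, and split are synthetic placeholders.

```python
# PCA compression, then PLS vs. a back-propagation-style network (MLP).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((60, 500))            # placeholder: unfolded 3-D spectra
y = rng.random(60)                   # placeholder: adulterant fraction

scores = PCA(n_components=8).fit_transform(X)
split = 45                           # calibration / prediction split
pls = PLSRegression(n_components=5).fit(scores[:split], y[:split])
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                   random_state=0).fit(scores[:split], y[:split])

for name, model in [("PLS", pls), ("BP-ANN", ann)]:
    rmsep = mean_squared_error(y[split:], model.predict(scores[split:])) ** 0.5
    print(name, "RMSEP:", round(rmsep, 4))
```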

  11. Determination of rice syrup adulterant concentration in honey using three-dimensional fluorescence spectra and multivariate calibrations.

    PubMed

    Chen, Quansheng; Qi, Shuai; Li, Huanhuan; Han, Xiaoyan; Ouyang, Qin; Zhao, Jiewen

    2014-10-15

    To rapidly and efficiently detect the presence of adulterants in honey, three-dimensional fluorescence spectroscopy (3DFS) was employed with the help of multivariate calibration. The 3D fluorescence spectra were compressed using characteristic extraction and principal component analysis (PCA). Then, partial least squares (PLS) and back-propagation neural network (BP-ANN) algorithms were used for modeling. The model was optimized by cross-validation, and its performance was evaluated according to the root mean square error of prediction (RMSEP) and the correlation coefficient (R) in the prediction set. The results showed that the BP-ANN model was superior to the PLS models, and the optimum prediction results of the mixed-group (sunflower+longan+buckwheat+rape) model were achieved as follows: RMSEP=0.0235 and R=0.9787 in the prediction set. The study demonstrated that 3D fluorescence spectroscopy combined with multivariate calibration has high potential for rapid, nondestructive, and accurate quantitative analysis of honey adulteration. PMID:24830631

  12. APPLYING SPARSE CODING TO SURFACE MULTIVARIATE TENSOR-BASED MORPHOMETRY TO PREDICT FUTURE COGNITIVE DECLINE

    PubMed Central

    Zhang, Jie; Stonnington, Cynthia; Li, Qingyang; Shi, Jie; Bauer, Robert J.; Gutman, Boris A.; Chen, Kewei; Reiman, Eric M.; Thompson, Paul M.; Ye, Jieping; Wang, Yalin

    2016-01-01

    Alzheimer’s disease (AD) is a progressive brain disease. Accurate diagnosis of AD and its prodromal stage, mild cognitive impairment, is crucial for clinical trial design. There is also growing interest in identifying brain imaging biomarkers that help evaluate AD risk presymptomatically. Here, we applied a recently developed multivariate tensor-based morphometry (mTBM) method to extract features from hippocampal surfaces derived from anatomical brain MRI. For such surface-based features, the feature dimension is usually much larger than the number of subjects. We used dictionary learning and sparse coding to effectively reduce the feature dimension. With the new features, an AdaBoost classifier was employed for binary group classification. In tests on publicly available data from the Alzheimer’s Disease Neuroimaging Initiative, the new framework outperformed several standard imaging measures in classifying different stages of AD. The new approach combines the efficiency of sparse coding with the sensitivity of surface mTBM, and boosts classification performance. PMID:27499829
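
    A hedged sketch of this feature pipeline, with synthetic arrays standing in for the mTBM surface features, could look like the following; the dictionary size, sparsity level, and classifier settings are illustrative assumptions.

```python
# Dictionary learning + sparse coding for dimension reduction, then AdaBoost.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((80, 1000))    # 80 subjects x 1000 surface features
y = rng.integers(0, 2, size=80)        # binary labels (e.g., AD vs. control)

# dictionary learning + sparse coding: 1000 features -> 20 coefficients
dico = DictionaryLearning(n_components=20, transform_algorithm="lasso_lars",
                          transform_alpha=0.1, max_iter=20, random_state=0)
codes = dico.fit_transform(X)

clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(codes, y)
print("training accuracy:", clf.score(codes, y))
```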

  13. Multivariate Curve Resolution Applied to Infrared Reflection Measurements of Soil Contaminated with an Organophosphorus Analyte

    SciTech Connect

    Gallagher, Neal B.; Blake, Thomas A.; Gassman, Paul L.; Shaver, Jeremy M.; Windig, Willem

    2006-07-01

    Multivariate curve resolution (MCR) is a powerful technique for extracting chemical information from spectra measured on complex mixtures. The difficulty with applying MCR to soil reflectance measurements is that light scattering artifacts can contribute much more variance to the measurements than the analyte(s) of interest. Two methods were integrated into an MCR decomposition to account for light scattering effects. First, an extended mixture model using pure analyte spectra augmented with scattering ‘spectra’ was used for the measured spectra. Second, second-derivative preprocessed spectra, which have higher selectivity than the unprocessed spectra, were included as a second block in the decomposition. The conventional alternating least squares (ALS) algorithm was modified to simultaneously decompose the measured and second-derivative spectra in a two-block decomposition. Equality constraints were also included to incorporate information about sampling conditions. The result was an MCR decomposition that provided interpretable spectra from soil reflectance measurements.
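
    For readers new to MCR, the core alternating least-squares loop, stripped of the paper's second-derivative block and equality constraints, reduces to alternately solving non-negative least-squares problems for the concentration and spectral matrices, given D ≈ C Sᵀ. The sketch below uses synthetic two-component data.

```python
# A bare-bones MCR-ALS loop: alternate non-negative least-squares solutions
# for concentrations C and spectra S, given D ~ C @ S.T. The paper's variant
# additionally fits second-derivative spectra as a second block and applies
# equality constraints; both refinements are omitted here.
import numpy as np
from scipy.optimize import nnls

def mcr_als(D, S0, n_iter=50):
    S = S0.copy()                                       # (n_wavelengths, k)
    for _ in range(n_iter):
        C = np.array([nnls(S, row)[0] for row in D])    # (n_samples, k)
        S = np.array([nnls(C, col)[0] for col in D.T])  # (n_wavelengths, k)
    return C, S

rng = np.random.default_rng(0)
true_S = np.abs(rng.standard_normal((100, 2)))          # pure spectra
true_C = np.abs(rng.standard_normal((20, 2)))           # concentrations
D = true_C @ true_S.T + 0.01 * rng.standard_normal((20, 100))
C_hat, S_hat = mcr_als(D, np.abs(rng.standard_normal((100, 2))))
print(C_hat.shape, S_hat.shape)                         # (20, 2) (100, 2)
```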

  14. Comparative study between univariate spectrophotometry and multivariate calibration as analytical tools for simultaneous quantitation of Moexipril and Hydrochlorothiazide

    NASA Astrophysics Data System (ADS)

    Tawakkol, Shereen M.; Farouk, M.; Elaziz, Omar Abd; Hemdan, A.; Shehata, Mostafa A.

    2014-12-01

    Three simple, accurate, reproducible, and selective methods have been developed and subsequently validated for the simultaneous determination of Moexipril (MOX) and Hydrochlorothiazide (HCTZ) in pharmaceutical dosage form. The first method is the new extended ratio subtraction method (EXRSM) coupled to the ratio subtraction method (RSM) for determination of both drugs in commercial dosage form. The second and third methods are multivariate calibrations: Principal Component Regression (PCR) and Partial Least Squares (PLS). A detailed validation of the methods was performed following the ICH guidelines, and the standard curves were found to be linear in the ranges of 10-60 and 2-30 for MOX and HCTZ, respectively, in the EXRSM method, with well-accepted mean correlation coefficients for each analyte. The intra-day and inter-day precision and accuracy results were well within the acceptable limits.

  15. Use of multivariate calibration models based on UV-Vis spectra for seawater quality monitoring in Tianjin Bohai Bay, China.

    PubMed

    Liu, Xianhua; Wang, Lili

    2015-01-01

    A series of ultraviolet-visible (UV-Vis) spectra from seawater samples collected from sites along the coastline of Tianjin Bohai Bay in China were subjected to multivariate partial least squares (PLS) regression analysis. Calibration models were developed for monitoring chemical oxygen demand (COD) and concentrations of total organic carbon (TOC). Three different PLS models were developed using the spectra from raw samples (Model-1), diluted samples (Model-2), and diluted and raw samples combined (Model-3). Experimental results showed that: (i) possible nonlinearities in the signal-concentration relationships were well accounted for by the multivariate PLS model; (ii) the predicted values of COD and TOC fit the analytical values well; the high correlation coefficients and small root mean squared error of cross-validation (RMSECV) showed that this method can be used for seawater quality monitoring; and (iii) compared with Model-1 and Model-2, Model-3 had the highest coefficient of determination (R2) and the lowest number of latent variables. This latter finding suggests that only large data sets that include data representing different combinations of conditions (i.e., various seawater matrices) will produce stable site-specific regressions. The results of this study illustrate the effectiveness of the proposed method and its potential for use as a seawater quality monitoring technique. PMID:26442484

  16. Application of third-order multivariate calibration algorithms to the determination of carbaryl, naphthol and propoxur by kinetic spectroscopic measurements.

    PubMed

    Santa-Cruz, Pablo; García-Reiriz, Alejandro

    2014-10-01

    In the present work a new application of third-order multivariate calibration algorithms is presented for quantifying carbaryl, naphthol and propoxur using kinetic spectroscopic data. The time evolution of fluorescence data matrices was measured in order to follow the alkaline hydrolysis of the pesticides mentioned above. This experimental system has the additional complexity that one of the analytes is the reaction product of another analyte, and this fact generates linear dependency problems between concentration profiles. The data were analyzed by three different methods: parallel factor analysis (PARAFAC), unfolded partial least-squares (U-PLS) and multi-dimensional partial least-squares (N-PLS); the latter two methods were assisted by residual trilinearization (RTL) to model the presence of unexpected signals not included in the calibration step. The ability of the different algorithms to predict analyte concentrations was checked with validation samples. Samples with unexpected components, tiabendazole and carbendazim, were prepared, and spiked water samples from a natural stream were used to check the recovered concentrations. The best results were obtained with U-PLS/RTL and N-PLS/RTL, with average limits of detection of 0.035 for carbaryl, 0.025 for naphthol and 0.090 for propoxur (mg L(-1)), because these two methods are more flexible regarding the structure of the data. PMID:25059185
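
    The trilinear branch of such an analysis can be sketched with the TensorLy library (an assumed dependency, not one named in the paper): a PARAFAC decomposition of the stacked multi-way data yields sample-mode scores that can subsequently be regressed on concentration. Array sizes and rank below are placeholders.

```python
# PARAFAC on multi-way kinetic-fluorescence data via TensorLy (assumed).
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)
# placeholder array: samples x reaction time x emission x excitation
X = tl.tensor(rng.random((10, 20, 25, 15)))

weights, factors = parafac(X, rank=3, n_iter_max=200, tol=1e-8)
sample_scores = factors[0]        # (10, 3): one column per resolved component
print(sample_scores.shape)        # scores can be regressed on concentration
```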

  17. Simultaneous determination of potassium guaiacolsulfonate, guaifenesin, diphenhydramine HCl and carbetapentane citrate in syrups by using HPLC-DAD coupled with partial least squares multivariate calibration.

    PubMed

    Dönmez, Ozlem Aksu; Aşçi, Bürge; Bozdoğan, Abdürrezzak; Sungur, Sidika

    2011-02-15

    A simple and rapid analytical procedure was proposed for the determination of overlapped chromatographic peaks by means of partial least squares multivariate calibration (PLS) applied to high-performance liquid chromatography with diode array detection (HPLC-DAD). The method is exemplified with the analysis of quaternary mixtures of potassium guaiacolsulfonate (PG), guaifenesin (GU), diphenhydramine HCl (DP) and carbetapentane citrate (CP) in syrup preparations. In this method, the peak area does not need to be measured directly, and predictions are more accurate. Though the chromatographic and spectral peaks of the analytes were heavily overlapped and interferents coeluted with the compounds studied, good recoveries of the analytes could be obtained with HPLC-DAD coupled with PLS calibration. The method was tested by analyzing synthetic mixtures of PG, GU, DP and CP, with a classical HPLC method used for comparison. The proposed methods were applied to syrup samples containing the four drugs, and the results were statistically compared with each other. Finally, the main advantages of the HPLC-PLS method over the classical HPLC method are a simpler mobile phase, shorter analysis time, and no need for an internal standard or gradient elution. PMID:21238758

  18. Multivariate analyses applied to fetal, neonatal and pediatric MRI of neurodevelopmental disorders

    PubMed Central

    Levman, Jacob; Takahashi, Emi

    2015-01-01

    Multivariate analysis (MVA) is a class of statistical and pattern recognition methods that involve the processing of data that contains multiple measurements per sample. MVA can be used to address a wide variety of medical neuroimaging-related challenges including identifying variables associated with a measure of clinical importance (i.e. patient outcome), creating diagnostic tests, assisting in characterizing developmental disorders, understanding disease etiology, development and progression, assisting in treatment monitoring and much more. Compared to adults, imaging of developing immature brains has attracted less attention from MVA researchers. However, remarkable MVA research growth has occurred in recent years. This paper presents the results of a systematic review of the literature focusing on MVA technologies applied to neurodevelopmental disorders in fetal, neonatal and pediatric magnetic resonance imaging (MRI) of the brain. The goal of this manuscript is to provide a concise review of the state of the scientific literature on studies employing brain MRI and MVA in a pre-adult population. Neurological developmental disorders addressed in the MVA research contained in this review include autism spectrum disorder, attention deficit hyperactivity disorder, epilepsy, schizophrenia and more. While the results of this review demonstrate considerable interest from the scientific community in applications of MVA technologies in pediatric/neonatal/fetal brain MRI, the field is still young and considerable research growth remains ahead of us. PMID:26640765

  19. Multivariate Analyses Applied to Healthy Neurodevelopment in Fetal, Neonatal, and Pediatric MRI

    PubMed Central

    Levman, Jacob; Takahashi, Emi

    2016-01-01

    Multivariate analysis (MVA) is a class of statistical and pattern recognition techniques that involve the processing of data that contains multiple measurements per sample. MVA can be used to address a wide variety of neurological medical imaging related challenges including the evaluation of healthy brain development, the automated analysis of brain tissues and structures through image segmentation, evaluating the effects of genetic and environmental factors on brain development, evaluating sensory stimulation's relationship with functional brain activity and much more. Compared to adult imaging, pediatric, neonatal and fetal imaging have attracted less attention from MVA researchers, however, recent years have seen remarkable MVA research growth in pre-adult populations. This paper presents the results of a systematic review of the literature focusing on MVA applied to healthy subjects in fetal, neonatal and pediatric magnetic resonance imaging (MRI) of the brain. While the results of this review demonstrate considerable interest from the scientific community in applications of MVA technologies in brain MRI, the field is still young and significant research growth will continue into the future. PMID:26834576

  20. Multivariate analyses applied to fetal, neonatal and pediatric MRI of neurodevelopmental disorders.

    PubMed

    Levman, Jacob; Takahashi, Emi

    2015-01-01

    Multivariate analysis (MVA) is a class of statistical and pattern recognition methods that involve the processing of data that contains multiple measurements per sample. MVA can be used to address a wide variety of medical neuroimaging-related challenges including identifying variables associated with a measure of clinical importance (i.e. patient outcome), creating diagnostic tests, assisting in characterizing developmental disorders, understanding disease etiology, development and progression, assisting in treatment monitoring and much more. Compared to adults, imaging of developing immature brains has attracted less attention from MVA researchers. However, remarkable MVA research growth has occurred in recent years. This paper presents the results of a systematic review of the literature focusing on MVA technologies applied to neurodevelopmental disorders in fetal, neonatal and pediatric magnetic resonance imaging (MRI) of the brain. The goal of this manuscript is to provide a concise review of the state of the scientific literature on studies employing brain MRI and MVA in a pre-adult population. Neurological developmental disorders addressed in the MVA research contained in this review include autism spectrum disorder, attention deficit hyperactivity disorder, epilepsy, schizophrenia and more. While the results of this review demonstrate considerable interest from the scientific community in applications of MVA technologies in pediatric/neonatal/fetal brain MRI, the field is still young and considerable research growth remains ahead of us. PMID:26640765

  1. Multivariate Curve Resolution Applied to Hyperspectral Imaging Analysis of Chocolate Samples.

    PubMed

    Zhang, Xin; de Juan, Anna; Tauler, Romà

    2015-08-01

    This paper shows the application of Raman and infrared hyperspectral imaging combined with multivariate curve resolution (MCR) to the analysis of the constituents of commercial chocolate samples. Combining different spectral data pretreatment methods allowed the strong fluorescence contribution of whey to the Raman signal of the investigated chocolate samples to be decreased. Using equality constraints during MCR analysis improved the estimates of the pure spectra of the chocolate sample constituents, as well as of their relative contributions and their spatial distribution in the analyzed samples. In addition, unknown constituents could also be resolved. White chocolate constituents resolved from the Raman hyperspectral images indicate that, at the macro scale, the sucrose, lactose, fat, and whey constituents are intermixed in particles. Infrared hyperspectral imaging did not suffer from fluorescence and could be applied to both white and milk chocolate. In conclusion, micro-hyperspectral imaging coupled to the MCR method is confirmed to be an appropriate tool for the direct analysis of the constituents of chocolate samples and, by extension, is proposed for the analysis of other mixture constituents in commercial food samples. PMID:26162693

  2. Applying a multivariate statistical analysis model to evaluate the water quality of a watershed.

    PubMed

    Wu, Edward Ming-Yang; Kuo, Shu-Lung

    2012-12-01

    Multivariate statistics have been applied to evaluate the water quality data collected at six monitoring stations in the Feitsui Reservoir watershed of Taipei, Taiwan. The objective is to evaluate the mutual correlations among the various water quality parameters in order to reveal the primary factors that affect reservoir water quality, and the differences among the various water quality parameters in the watershed. The water quality samples were collected over a period of two and a half years, so that sufficient data sets were available to increase the stability, effectiveness, and reliability of the final factor analysis results. Additionally, results obtained using the proposed theory and method to analyze and interpret the statistical data were examined to verify their similarity to field data collected on the stream geographical and geological characteristics, the physical and chemical phenomena of stream self-purification, and the stream hydrological phenomena. These data sets can be valuable references for managing, regulating, and remediating water pollution in a reservoir watershed. PMID:23342938

  3. Multivariable control theory applied to hierarchical attitude control for planetary spacecraft

    NASA Technical Reports Server (NTRS)

    Boland, J. S., III; Russell, D. W.

    1972-01-01

    Multivariable control theory is applied to the design of a hierarchical attitude control system for the CARD space vehicle. The system selected uses reaction control jets (RCJ) and control moment gyros (CMG). The RCJ system uses linear signal mixing and a no-fire region similar to that used on the Skylab program; the coupled y-axis and z-axis systems use a sum-and-difference feedback scheme. The CMG system uses the optimum steering law and the same feedback signals as the RCJ system. When both systems are active, the design is such that the torques from each system are never in opposition. A state-space analysis was made of the CMG system to determine the general structure of the input matrices (steering law) and feedback matrices that will decouple the axes. It is shown that the optimum steering law and proportional-plus-rate feedback are special cases. A derivation of the disturbing torques on the space vehicle due to the motion of the on-board television camera is presented. A procedure for computing an upper bound on these torques (given the system parameters) is included.

  4. Differential Evolution algorithm applied to FSW model calibration

    NASA Astrophysics Data System (ADS)

    Idagawa, H. S.; Santos, T. F. A.; Ramirez, A. J.

    2014-03-01

    Friction Stir Welding (FSW) is a solid state welding process that can be modelled using a Computational Fluid Dynamics (CFD) approach. These models use adjustable parameters to control the heat transfer and the heat input to the weld. These parameters are used to calibrate the model, and they are generally determined using the conventional trial-and-error approach. Since this method is not very efficient, we used the Differential Evolution (DE) algorithm to determine these parameters successfully. In order to improve the success rate and to reduce the computational cost of the method, this work studied different characteristics of the DE algorithm, such as the evolution strategy, the objective function, the mutation scaling factor and the crossover rate. The DE algorithm was tested using a friction stir weld performed on UNS S32205 duplex stainless steel.
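
    SciPy ships a ready-made differential evolution optimizer, so a calibration loop of this kind can be prototyped compactly. In the hedged sketch below, a toy exponential heating model stands in for the CFD model; the parameter names, bounds, and synthetic measurements are illustrative assumptions, while the strategy, mutation, and recombination arguments correspond to the DE characteristics the study examined.

```python
# Differential evolution fitting two adjustable parameters of a toy model.
import numpy as np
from scipy.optimize import differential_evolution

t = np.linspace(0.0, 10.0, 50)
measured = 25.0 + 300.0 * 0.85 * (1.0 - np.exp(-0.4 * t))   # synthetic data

def model(params):
    eta, h = params                  # hypothetical efficiency and rate factor
    return 25.0 + 300.0 * eta * (1.0 - np.exp(-h * t))

def objective(params):               # sum of squared residuals
    return np.sum((model(params) - measured) ** 2)

result = differential_evolution(objective,
                                bounds=[(0.1, 1.0), (0.01, 1.0)],
                                strategy="best1bin",      # evolution strategy
                                mutation=(0.5, 1.0),      # mutation scaling
                                recombination=0.7,        # crossover rate
                                seed=0)
print(result.x)                      # recovers approximately [0.85, 0.4]
```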

  5. Variable selection in multivariate calibration based on clustering of variable concept.

    PubMed

    Farrokhnia, Maryam; Karimi, Sadegh

    2016-01-01

    Recently we have proposed a new variable selection algorithm based on the clustering of variables concept (CLoVA) for classification problems. With the same idea, this concept has been applied to a regression problem, and the results have been compared with conventional variable selection strategies for PLS. The basic idea behind the clustering of variables is that the instrument channels are clustered into different clusters via clustering algorithms. Then, the spectral data of each cluster are subjected to PLS regression. Different real data sets (Cargill corn, Biscuit dough, ACE QSAR, Soy, and Tablet) have been used to evaluate the influence of the clustering of variables on the prediction performance of PLS. In almost all cases, the statistical parameters, especially the prediction error, show the superiority of CLoVA-PLS with respect to the other variable selection strategies. Finally, synergy clustering of variables (sCLoVA-PLS), which uses a combination of clusters, is proposed as an efficient modification of the CLoVA algorithm. The obtained statistical parameters indicate that variable clustering can split the useful part from the redundant one, so that a stable model can be reached based on the informative clusters. PMID:26703255
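
    The clustering-of-variables idea can be illustrated with a compact stand-in: cluster the instrument channels (here, k-means on the transposed data matrix), fit a PLS model per cluster, and keep the cluster with the lowest cross-validated error. This is a simplified analogue with assumed settings, not the authors' exact CLoVA implementation.

```python
# A simplified CLoVA-like analogue: cluster channels, then PLS per cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def clova_like(X, y, n_clusters=5):
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(X.T)    # cluster the channels
    best = (np.inf, None)
    for k in range(n_clusters):
        Xk = X[:, labels == k]
        if Xk.shape[1] < 3:                             # skip tiny clusters
            continue
        pls = PLSRegression(n_components=min(5, Xk.shape[1]))
        rmse = -cross_val_score(pls, Xk, y, cv=5,
                                scoring="neg_root_mean_squared_error").mean()
        if rmse < best[0]:
            best = (rmse, k)
    return best                                         # (rmse, cluster index)

rng = np.random.default_rng(0)
X, y = rng.random((50, 300)), rng.random(50)            # placeholder data
print(clova_like(X, y))
```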

  6. Multivariate Calibration and Model Integrity for Wood Chemistry Using Fourier Transform Infrared Spectroscopy

    PubMed Central

    Zhou, Chengfeng; Jiang, Wei; Cheng, Qingzheng; Via, Brian K.

    2015-01-01

    This research addressed a rapid method to monitor hardwood chemical composition by applying Fourier transform infrared (FT-IR) spectroscopy, with particular interest in model performance for interpretation and prediction. Partial least squares (PLS) and principal components regression (PCR) were chosen as the primary models for comparison. Standard laboratory chemistry methods were employed on a mixed genus/species hardwood sample set to collect the original data. PLS was found to provide better predictive capability, while PCR exhibited a more precise estimate of loading peaks, which suggests that PCR is better for model interpretation of key underlying functional groups. Specifically, when PCR was utilized, an error in peak loading of ±15 cm−1 from the true mean was quantified. Application of the first derivative appeared to assist in improving both PCR and PLS loading precision. Research results identified the wavenumbers important in the prediction of extractives, lignin, cellulose, and hemicellulose and further demonstrated the utility of FT-IR for rapid monitoring of wood chemistry. PMID:26576321

  7. Multivariate Calibration and Model Integrity for Wood Chemistry Using Fourier Transform Infrared Spectroscopy.

    PubMed

    Zhou, Chengfeng; Jiang, Wei; Cheng, Qingzheng; Via, Brian K

    2015-01-01

    This research addressed a rapid method to monitor hardwood chemical composition by applying Fourier transform infrared (FT-IR) spectroscopy, with particular interest in model performance for interpretation and prediction. Partial least squares (PLS) and principal components regression (PCR) were chosen as the primary models for comparison. Standard laboratory chemistry methods were employed on a mixed genus/species hardwood sample set to collect the original data. PLS was found to provide better predictive capability, while PCR exhibited a more precise estimate of loading peaks, which suggests that PCR is better for model interpretation of key underlying functional groups. Specifically, when PCR was utilized, an error in peak loading of ±15 cm(-1) from the true mean was quantified. Application of the first derivative appeared to assist in improving both PCR and PLS loading precision. Research results identified the wavenumbers important in the prediction of extractives, lignin, cellulose, and hemicellulose and further demonstrated the utility of FT-IR for rapid monitoring of wood chemistry. PMID:26576321

  8. Evaluation of multivariate calibration models with different pre-processing and processing algorithms for a novel resolution and quantitation of spectrally overlapped quaternary mixture in syrup

    NASA Astrophysics Data System (ADS)

    Moustafa, Azza A.; Hegazy, Maha A.; Mohamed, Dalia; Ali, Omnia

    2016-02-01

    A novel approach for the resolution and quantitation of a severely overlapped quaternary mixture of carbinoxamine maleate (CAR), pholcodine (PHL), ephedrine hydrochloride (EPH) and sunset yellow (SUN) in syrup was demonstrated, utilizing different spectrophotometry-assisted multivariate calibration methods with different processing and pre-processing algorithms. The proposed methods were partial least squares (PLS), concentration residuals augmented classical least squares (CRACLS), and a novel method: continuous wavelet transforms coupled with partial least squares (CWT-PLS). These methods were applied to a training set in the concentration ranges of 40-100 μg/mL, 40-160 μg/mL, 100-500 μg/mL and 8-24 μg/mL for the four components, respectively. The methods did not require any preliminary separation step or chemical pretreatment. Their validity was evaluated by an external validation set, and their selectivity was demonstrated by analyzing the drugs in their combined pharmaceutical formulation without any interference from additives. The obtained results were statistically compared with the official and reported methods, and no significant difference was observed regarding either accuracy or precision.

  9. Identification of potential antioxidant compounds in the essential oil of thyme by gas chromatography with mass spectrometry and multivariate calibration techniques.

    PubMed

    Masoum, Saeed; Mehran, Mehdi; Ghaheri, Salehe

    2015-02-01

    Thyme species are used in traditional medicine throughout the world and are known for their antiseptic, antispasmodic, and antitussive properties. Antioxidant activity is also one of the interesting properties of thyme essential oil. In this research, we aim to identify the peaks potentially responsible for the antioxidant activity of thyme oil from chromatographic fingerprints. Therefore, the chemical compositions of the hydrodistilled essential oils of thyme species from different regions were analyzed by gas chromatography with mass spectrometry, and the antioxidant activities of the essential oils were measured by a 1,1-diphenyl-2-picrylhydrazyl radical scavenging test. Several linear multivariate calibration techniques with different preprocessing methods were applied to the chromatograms of the thyme essential oils to indicate the peaks responsible for the antioxidant activity. These techniques were applied to the data both before and after alignment of the chromatograms with correlation optimized warping. In this study, the orthogonal projection to latent structures model was found to be a good technique for indicating the potential antioxidant compounds in thyme oil, owing to its simplicity and repeatability. PMID:25403421

  10. Evaluation of multivariate calibration models with different pre-processing and processing algorithms for a novel resolution and quantitation of spectrally overlapped quaternary mixture in syrup.

    PubMed

    Moustafa, Azza A; Hegazy, Maha A; Mohamed, Dalia; Ali, Omnia

    2016-02-01

    A novel approach for the resolution and quantitation of a severely overlapped quaternary mixture of carbinoxamine maleate (CAR), pholcodine (PHL), ephedrine hydrochloride (EPH) and sunset yellow (SUN) in syrup was demonstrated, utilizing different spectrophotometry-assisted multivariate calibration methods with different processing and pre-processing algorithms. The proposed methods were partial least squares (PLS), concentration residuals augmented classical least squares (CRACLS), and a novel method: continuous wavelet transforms coupled with partial least squares (CWT-PLS). These methods were applied to a training set in the concentration ranges of 40-100 μg/mL, 40-160 μg/mL, 100-500 μg/mL and 8-24 μg/mL for the four components, respectively. The methods did not require any preliminary separation step or chemical pretreatment. Their validity was evaluated by an external validation set, and their selectivity was demonstrated by analyzing the drugs in their combined pharmaceutical formulation without any interference from additives. The obtained results were statistically compared with the official and reported methods, and no significant difference was observed regarding either accuracy or precision. PMID:26519913

  11. Feasibility study on variety identification of rice vinegars using visible and near infrared spectroscopy and multivariate calibration

    NASA Astrophysics Data System (ADS)

    Liu, Fei; He, Yong; Wang, Li

    2008-02-01

    The feasibility of visible and near infrared (Vis/NIR) spectroscopy, in combination with a hybrid multivariate method of partial least squares (PLS) analysis and a back-propagation neural network (BPNN), was investigated to identify the variety of rice vinegars with different internal qualities. Five varieties of rice vinegar were prepared, and 225 samples (45 for each variety) were selected randomly for the calibration set, while 75 samples (15 for each variety) formed the validation set. After some pretreatments with moving average and standard normal variate (SNV), PLS analysis was implemented for the extraction of principal components (PCs), which were used as the inputs of the BPNN according to their accumulative reliabilities. Finally, a PLS-BPNN model with a sigmoid transfer function was achieved. The performance was validated with the 75 unknown samples in the validation set. The threshold error of prediction was set as +/-0.1, and an excellent precision and recognition ratio of 100% was achieved. Simultaneously, certain effective wavelengths for the identification of varieties were proposed from the x-loading weights and regression coefficients. The prediction results indicated that Vis/NIR spectroscopy could be used as a rapid and high-precision method for the identification of different varieties of rice vinegars.

  12. Investigating the discrimination potential of linear and nonlinear spectral multivariate calibrations for analysis of phenolic compounds in their binary and ternary mixtures and calculation of pKa values.

    PubMed

    Rasouli, Zolaikha; Ghavami, Raouf

    2016-08-01

    Vanillin (VA), vanillic acid (VAI) and syringaldehyde (SIA) are important food additives used as flavor enhancers. The current study is devoted, for the first time, to the application of partial least squares (PLS-1), partial robust M-regression (PRM) and feed-forward neural networks (FFNNs) as linear and nonlinear chemometric methods for the simultaneous detection of binary and ternary mixtures of VA, VAI and SIA, using data extracted directly from UV spectra with overlapped peaks of the individual analytes. Under the optimum experimental conditions, a linear calibration was obtained for each compound in the concentration ranges of 0.61-20.99 [LOD=0.12], 0.67-23.19 [LOD=0.13] and 0.73-25.12 [LOD=0.15] μg mL(-1) for VA, VAI and SIA, respectively. Four calibration sets of standard samples were designed by combining full and fractional factorial designs with seven and three levels for each factor for the binary and ternary mixtures, respectively. The results of this study reveal that PLS-1 and PRM are similar in their ability to predict each binary mixture. The resolution of the ternary mixture was accomplished by FFNNs. Multivariate curve resolution-alternating least squares (MCR-ALS) was applied to describe the spectra from the acid-base titration systems of each individual compound, i.e., to resolve the complex overlapping spectra and to interpret the extracted spectral and concentration profiles of each pure chemical species identified. Evolving factor analysis (EFA) and singular value decomposition (SVD) were used to determine the number of chemical species, and their corresponding dissociation constants were subsequently derived. Finally, FFNNs were used to detect the active compounds in real and spiked water samples. PMID:27176001

  13. Investigating the discrimination potential of linear and nonlinear spectral multivariate calibrations for analysis of phenolic compounds in their binary and ternary mixtures and calculation of pKa values

    NASA Astrophysics Data System (ADS)

    Rasouli, Zolaikha; Ghavami, Raouf

    2016-08-01

    Vanillin (VA), vanillic acid (VAI) and syringaldehyde (SIA) are important food additives used as flavor enhancers. The current study is devoted, for the first time, to the application of partial least squares (PLS-1), partial robust M-regression (PRM) and feed-forward neural networks (FFNNs) as linear and nonlinear chemometric methods for the simultaneous detection of binary and ternary mixtures of VA, VAI and SIA, using data extracted directly from UV spectra with overlapped peaks of the individual analytes. Under the optimum experimental conditions, a linear calibration was obtained for each compound in the concentration ranges of 0.61-20.99 [LOD = 0.12], 0.67-23.19 [LOD = 0.13] and 0.73-25.12 [LOD = 0.15] μg mL-1 for VA, VAI and SIA, respectively. Four calibration sets of standard samples were designed by combining full and fractional factorial designs with seven and three levels for each factor for the binary and ternary mixtures, respectively. The results of this study reveal that PLS-1 and PRM are similar in their ability to predict each binary mixture. The resolution of the ternary mixture was accomplished by FFNNs. Multivariate curve resolution-alternating least squares (MCR-ALS) was applied to describe the spectra from the acid-base titration systems of each individual compound, i.e., to resolve the complex overlapping spectra and to interpret the extracted spectral and concentration profiles of each pure chemical species identified. Evolving factor analysis (EFA) and singular value decomposition (SVD) were used to determine the number of chemical species, and their corresponding dissociation constants were subsequently derived. Finally, FFNNs were used to detect the active compounds in real and spiked water samples.

  14. Fusion strategies for selecting multiple tuning parameters for multivariate calibration and other penalty based processes: A model updating application for pharmaceutical analysis.

    PubMed

    Tencate, Alister J; Kalivas, John H; White, Alexander J

    2016-05-19

    New multivariate calibration methods and other processes are being developed that require the selection of multiple tuning parameter (penalty) values to form the final model. With one or more tuning parameters, using only one measure of model quality to select final tuning parameter values is not sufficient, yet optimizing several model quality measures is challenging. Thus, three fusion ranking methods are investigated for the simultaneous assessment of multiple measures of model quality when selecting tuning parameter values. One is a supervised learning fusion rule named sum of ranking differences (SRD). The other two are non-supervised learning processes based on the sum and median operations. The effect of the number of models evaluated on the three fusion rules is also assessed using three procedures. One procedure uses all models from all possible combinations of the tuning parameters. To reduce the number of models evaluated, an iterative process (only applicable to SRD) is applied, and thresholding a model quality measure before applying the fusion rules is also used. A near infrared pharmaceutical data set requiring model updating is used to evaluate the three fusion rules. In this case, calibration of the primary conditions is for the active pharmaceutical ingredient (API) of tablets produced in a laboratory, and the secondary conditions for calibration updating are for tablets produced in the full batch setting. Two model updating processes requiring selection of two unique tuning parameter values are studied: one based on Tikhonov regularization (TR) and the other a variation of partial least squares (PLS). The three fusion methods are shown to provide equivalent and acceptable results, allowing automatic selection of the tuning parameter values. The best tuning parameter values are selected when the model quality measures used with the fusion rules are for the small secondary sample set used to form the updated models. In this model updating situation, evaluation of
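
    The non-supervised fusion rules are simple to state in code: rank each tuning-parameter candidate under every model-quality measure, fuse the ranks by their sum or median, and select the candidate with the best fused rank. The sketch below uses synthetic quality measures purely for illustration; it is not the SRD procedure.

```python
# Sum- and median-rank fusion over candidate tuning-parameter pairs.
import numpy as np
from scipy.stats import rankdata

candidates = [(a, b) for a in (0.01, 0.1, 1.0) for b in (0.01, 0.1, 1.0)]
rng = np.random.default_rng(0)
# rows = candidates; columns = quality measures (e.g., RMSE, model norm,
# bias), all oriented so that smaller values are better
quality = rng.random((len(candidates), 3))

ranks = np.column_stack([rankdata(col) for col in quality.T])
print("sum rule picks:   ", candidates[int(np.argmin(ranks.sum(axis=1)))])
print("median rule picks:", candidates[int(np.argmin(np.median(ranks, axis=1)))])
```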

  15. Simultaneous detection of trace metal ions in water by solid phase extraction spectroscopy combined with multivariate calibration

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Cao, Peng; Li, Wei; Tong, Peijin; Zhang, Xiaofang; Du, Yiping

    2016-04-01

    Solid Phase Extraction Spectroscopy (SPES), developed in this paper, is a technique for measuring spectra directly on the solid-phase material on which the analytes are concentrated during the SPE process. Membrane enrichment and UV-Visible spectroscopy were utilized to implement SPES, and the multivariate calibration method of partial least squares (PLS) was used to detect the concentrations of trace cobalt (II) and zinc (II) in water samples simultaneously. The proposed method is simple, sensitive and selective. The complexes of the analyte ions were collected on cellulose acetate membranes via membrane filtration after the complexation reaction with 1-(2-pyridylazo)-2-naphthol (PAN). The spectra of the membranes containing the complexes of the metal ions and PAN were measured directly, without eluting. The analytical conditions, including pH, reaction time, sample volume, the amount of PAN, and flow rates, were optimized. The nonionic surfactant Brij-30 was adsorbed on the membranes prior to SPES to improve the enrichment and the spectrum measurement. The interference of other ions with the determination was investigated. Under the optimal conditions, the absorbance was linearly related to concentration in the ranges of 0.1-3.0 μg/L and 0.1-2.0 μg/L, with correlation coefficients (R2) of 0.9977 and 0.9951 for Co (II) and Zn (II), respectively. The limits of detection were 0.066 μg/L for cobalt (II) and 0.104 μg/L for zinc (II). PLS regression with leave-one-out cross-validation was utilized to build models to detect cobalt (II) and zinc (II) in drinking water samples simultaneously. The correlation coefficients between ion concentration and spectrum for the calibration set and the independent prediction set were 1.0000 and 0.9974 for cobalt (II), and 1.0000 and 0.9956 for zinc (II). For cobalt (II) and zinc (II), the errors of the prediction set were in the ranges 0.0406-0.1353 μg/L and 0.0025-0.1884 μg/L.

  16. Simultaneous detection of trace metal ions in water by solid phase extraction spectroscopy combined with multivariate calibration.

    PubMed

    Wang, Lei; Cao, Peng; Li, Wei; Tong, Peijin; Zhang, Xiaofang; Du, Yiping

    2016-04-15

    Solid Phase Extraction Spectroscopy (SPES), developed in this paper, is a technique for measuring spectra directly on the solid-phase material on which the analytes are concentrated during the SPE process. Membrane enrichment and UV-Visible spectroscopy were utilized to implement SPES, and the multivariate calibration method of partial least squares (PLS) was used to detect the concentrations of trace cobalt (II) and zinc (II) in water samples simultaneously. The proposed method is simple, sensitive and selective. The complexes of the analyte ions were collected on cellulose acetate membranes via membrane filtration after the complexation reaction with 1-(2-pyridylazo)-2-naphthol (PAN). The spectra of the membranes containing the complexes of the metal ions and PAN were measured directly, without eluting. The analytical conditions, including pH, reaction time, sample volume, the amount of PAN, and flow rates, were optimized. The nonionic surfactant Brij-30 was adsorbed on the membranes prior to SPES to improve the enrichment and the spectrum measurement. The interference of other ions with the determination was investigated. Under the optimal conditions, the absorbance was linearly related to concentration in the ranges of 0.1-3.0 μg/L and 0.1-2.0 μg/L, with correlation coefficients (R(2)) of 0.9977 and 0.9951 for Co (II) and Zn (II), respectively. The limits of detection were 0.066 μg/L for cobalt (II) and 0.104 μg/L for zinc (II). PLS regression with leave-one-out cross-validation was utilized to build models to detect cobalt (II) and zinc (II) in drinking water samples simultaneously. The correlation coefficients between ion concentration and spectrum for the calibration set and the independent prediction set were 1.0000 and 0.9974 for cobalt (II), and 1.0000 and 0.9956 for zinc (II). For cobalt (II) and zinc (II), the errors of the prediction set were in the ranges 0.0406-0.1353 μg/L and 0.0025-0.1884 μg/L. PMID:26845581

  17. Determination of bromhexine in cough-cold syrups by absorption spectrophotometry and multivariate calibration using partial least-squares and hybrid linear analyses. Application of a novel method of wavelength selection.

    PubMed

    Goicoechea, H C; Olivieri, A C

    1999-07-12

    The mucolytic bromhexine [N-(2-amino-3,5-dibromobenzyl)-N-methylcyclohexylamine] has been determined in cough suppressant syrups by multivariate spectrophotometric calibration, together with partial least-squares (PLS-1) and hybrid linear analysis (HLA). Notwithstanding the spectral overlap between bromhexine and the syrup excipients, as well as the intrinsic variability of the latter in unknown samples, the recoveries are excellent. A novel method of wavelength selection was also applied, based on the concept of net analyte signal regression as adapted to the HLA methodology. This method improves the performance of both PLS-1 and HLA in samples containing nonmodeled interferences. PMID:18967655

  18. A PID de-tuned method for multivariable systems, applied for HVAC plant

    NASA Astrophysics Data System (ADS)

    Ghazali, A. B.

    2015-09-01

    A simple yet effective de-tuning of PID parameters for multivariable applications is described. Although the method is felt to have wider application, it is simulated here in a 3-input/2-output building energy management system (BEMS) with known plant dynamics. Controller performance measures, such as the sum of squared output errors and the total energy consumption at steady-state conditions, are studied. The tuning methodology can also be extended to reduce the number of PID controllers, as well as the control inputs for specified output references, while maintaining good regulation performance.

  19. Applying multivariate analysis as decision tool for evaluating sediment-specific remediation strategies.

    PubMed

    Pedersen, Kristine B; Lejon, Tore; Jensen, Pernille E; Ottosen, Lisbeth M

    2016-05-01

    Multivariate methodology was employed for finding optimum remediation conditions for electrodialytic remediation of harbour sediment from an Arctic location in Norway. The parts of the experimental domain in which both sediment- and technology-specific remediation objectives were met were identified. Objectives targeted were removal of the sediment-specific pollutants Cu and Pb, while minimising the effect on the sediment matrix by limiting the removal of naturally occurring metals while maintaining low energy consumption. Two different cell designs for electrochemical remediation were tested and final concentrations of Cu and Pb were below background levels in large parts of the experimental domain when operating at low current densities (<0.12 mA/cm²). However, energy consumption, remediation times and the effect on naturally occurring metals were different for the 2- and 3-compartment cells. PMID:26928331

  20. Determination of sucrose in date fruits (Phoenix dactylifera L.) growing in the Sultanate of Oman by NIR spectroscopy and multivariate calibration.

    PubMed

    Mabood, Fazal; Al-Harrasi, Ahmed; Boqué, Ricard; Jabeen, Farah; Hussain, Javid; Hafidh, A; Hind, K; Ahmed, M A G; Manzoor, A; Hussain, Hidayat; Ur Rehman, Najeeb; Iman, S H; Said, Jahina J; Hamood, Sara A

    2015-11-01

    A Near Infrared (NIR) spectroscopic method combined with multivariate calibration was developed for the determination of the amount of sucrose in date fruits growing in the Sultanate of Oman. In this study two groups of samples were used: one group of 48 sucrose standard solutions in the concentration range from 0.01% to 50% (w/v) and another group of 54 date fruit samples of 18 different varieties. The sucrose standard samples were split into two sets, i.e. a training set of 31 samples and a test set of 17 samples. All samples were measured with a NIR spectrophotometer in the wavelength range from 700 to 2500 nm. The spectra collected were preprocessed using baseline correction and a Savitzky-Golay 1st derivative. Partial least-squares regression (PLSR) was used to build the regression model with the training set of 31 samples. This model was then validated by random leave-one-out cross-validation. The PLS regression model was subsequently validated externally using the test set of 17 samples of known sucrose concentration. The root mean squared error of prediction (RMSEP) was found to be 1.5%, which shows a good prediction ability of the model. Finally, the PLS model was applied to the spectra of the 54 date fruit samples to quantify their sucrose content. It was found that the Khalas, Barnia Nizwi, Ajwa Almadina, Maan, and Khunizi varieties contain high amounts of sucrose, ranging from 36% to 60%, while the Naghal, Fardh, Nashu and Qash Tabaq varieties contain the lowest amounts, ranging from 3.5% to 8.1%. PMID:26048559
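
    The preprocessing and validation chain above maps directly onto a few library calls. A hedged sketch (Python/scikit-learn assumed; synthetic data): Savitzky-Golay first-derivative spectra, a PLS model fit on a training set, and RMSEP computed on an independent test set.

    ```python
    import numpy as np
    from scipy.signal import savgol_filter
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)
    X = rng.normal(size=(48, 500)).cumsum(axis=1)     # smooth synthetic "spectra"
    y = rng.uniform(0.01, 50.0, size=48)              # sucrose, % (w/v)
    X += np.outer(y, np.linspace(0.0, 1.0, 500))      # embed a y-dependent signal

    # baseline-insensitive first derivative, as in the paper's preprocessing
    X_d1 = savgol_filter(X, window_length=11, polyorder=2, deriv=1, axis=1)

    train, test = np.arange(31), np.arange(31, 48)    # 31/17 split as described
    model = PLSRegression(n_components=5).fit(X_d1[train], y[train])
    rmsep = np.sqrt(np.mean((model.predict(X_d1[test]).ravel() - y[test]) ** 2))
    print(f"RMSEP = {rmsep:.2f} %")
    ```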

  1. Multivariate cross-classification: applying machine learning techniques to characterize abstraction in neural representations

    PubMed Central

    Kaplan, Jonas T.; Man, Kingson; Greening, Steven G.

    2015-01-01

    Here we highlight an emerging trend in the use of machine learning classifiers to test for abstraction across patterns of neural activity. When a classifier algorithm is trained on data from one cognitive context, and tested on data from another, conclusions can be drawn about the role of a given brain region in representing information that abstracts across those cognitive contexts. We call this kind of analysis Multivariate Cross-Classification (MVCC), and review several domains where it has recently made an impact. MVCC has been important in establishing correspondences among neural patterns across cognitive domains, including motor-perception matching and cross-sensory matching. It has been used to test for similarity between neural patterns evoked by perception and those generated from memory. Other work has used MVCC to investigate the similarity of representations for semantic categories across different kinds of stimulus presentation, and in the presence of different cognitive demands. We use these examples to demonstrate the power of MVCC as a tool for investigating neural abstraction and discuss some important methodological issues related to its application. PMID:25859202

  2. Multivariate cross-classification: applying machine learning techniques to characterize abstraction in neural representations.

    PubMed

    Kaplan, Jonas T; Man, Kingson; Greening, Steven G

    2015-01-01

    Here we highlight an emerging trend in the use of machine learning classifiers to test for abstraction across patterns of neural activity. When a classifier algorithm is trained on data from one cognitive context, and tested on data from another, conclusions can be drawn about the role of a given brain region in representing information that abstracts across those cognitive contexts. We call this kind of analysis Multivariate Cross-Classification (MVCC), and review several domains where it has recently made an impact. MVCC has been important in establishing correspondences among neural patterns across cognitive domains, including motor-perception matching and cross-sensory matching. It has been used to test for similarity between neural patterns evoked by perception and those generated from memory. Other work has used MVCC to investigate the similarity of representations for semantic categories across different kinds of stimulus presentation, and in the presence of different cognitive demands. We use these examples to demonstrate the power of MVCC as a tool for investigating neural abstraction and discuss some important methodological issues related to its application. PMID:25859202
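
    The core MVCC recipe is compact: train a classifier on neural patterns from one cognitive context and test it on patterns from another; above-chance transfer accuracy is the evidence for an abstract representation. A toy sketch (Python/scikit-learn assumed; synthetic "voxel" data, not from any study):

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(2)
    n_trials, n_voxels = 100, 50
    pattern = rng.normal(size=n_voxels)               # shared spatial code

    def make_context(labels):
        # condition-dependent signal plus context-specific noise
        signal = np.outer(2 * labels - 1, pattern)
        return signal + rng.normal(size=(labels.size, n_voxels))

    labels_a = rng.integers(0, 2, n_trials)           # e.g. perception trials
    labels_b = rng.integers(0, 2, n_trials)           # e.g. imagery trials
    X_a, X_b = make_context(labels_a), make_context(labels_b)

    clf = LinearSVC(C=0.01).fit(X_a, labels_a)        # train in context A
    print("cross-classification accuracy:", clf.score(X_b, labels_b))
    ```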

  3. MERTIS: geometrical calibration of thermal infrared optical system by applying diffractive optical elements

    NASA Astrophysics Data System (ADS)

    Bauer, M.; Baumbach, D.; Buder, M.; Börner, A.; Grießbach, D.; Peter, G.; Santier, E.; Säuberlich, T.; Schischmanow, A.; Schrader, S.; Walter, I.

    2015-09-01

    Geometrical sensor calibration is essential for space applications based on high-accuracy optical measurements, in this case the thermal infrared push-broom imaging spectrometer MERTIS. The goal is the determination of the interior sensor orientation. A conventional method is to measure the line of sight of a subset of pixels by single-pixel illumination with collimated light; a manipulator construction is used to adjust the angles that define the line of sight of a pixel. A newer method for geometrical sensor calibration uses diffractive optical elements (DOEs) in connection with laser beam equipment. DOEs are optical microstructures used to split an incoming laser beam of a dedicated wavelength into a number of beams with well-known propagation directions. As the virtual sources of the diffracted beams are points at infinity, the resulting image is invariant against translation. This particular characteristic allows a complete geometrical sensor calibration with a single image, avoiding complex adjustment procedures and resulting in a significant reduction of calibration effort. We present a new method for the geometrical calibration of a thermal infrared optical system, comprising thermal infrared test optics and the MERTIS spectrometer bolometer detector. The fundamentals of this new approach to the geometrical calibration of infrared optical systems by applying diffractive optical elements, and the test equipment, are shown.

  4. Chemometric methods applied to the calibration of a Vis-NIR sensor for gas engine's condition monitoring.

    PubMed

    Villar, Alberto; Gorritxategi, Eneko; Otaduy, Deitze; Ciria, Jose I; Fernandez, Luis A

    2011-10-31

    This paper describes the calibration process of a Visible-Near Infrared sensor for condition monitoring of a gas engine's lubricating oil, correlating transmittance oil spectra with the degradation of the oil via a regression model. Chemometric techniques were applied to determine different parameters: Base Number (BN), Acid Number (AN), insolubles in pentane and viscosity at 40 °C. A Visible-Near Infrared (400-1100 nm) sensor developed at the Tekniker research center was used to obtain the spectra of artificial and real gas engine oils. To improve the sensor data, different preprocessing methods, such as Savitzky-Golay smoothing and moving average with Multivariate Scatter Correction or Standard Normal Variate to eliminate the scatter effect, were applied. A combination of these preprocessing methods was applied for each parameter. The regression models were developed by Partial Least Squares Regression (PLSR). In the end, it was shown that only some models were valid, fulfilling a set of quality requirements. The paper shows which models achieved the established validation requirements and which preprocessing methods perform better. A discussion follows regarding the potential improvement in the robustness of the models. PMID:21962360

  5. Multivariate class modeling techniques applied to multielement analysis for the verification of the geographical origin of chili pepper.

    PubMed

    Naccarato, Attilio; Furia, Emilia; Sindona, Giovanni; Tagarelli, Antonio

    2016-09-01

    Four class-modeling techniques (soft independent modeling of class analogy (SIMCA), unequal dispersed classes (UNEQ), potential functions (PF), and multivariate range modeling (MRM)) were applied to multielement distributions to build chemometric models able to authenticate chili pepper samples grown in Calabria with respect to those grown outside of Calabria. The multivariate techniques were applied considering both the full set of variables (32 elements: Al, As, Ba, Ca, Cd, Ce, Co, Cr, Cs, Cu, Dy, Fe, Ga, La, Li, Mg, Mn, Na, Nd, Ni, Pb, Pr, Rb, Sc, Se, Sr, Tl, Tm, V, Y, Yb, Zn) and the variables selected by stepwise linear discriminant analysis (S-LDA). In the first case, satisfactory and comparable results in terms of CV efficiency were obtained with SIMCA and MRM (82.3% and 83.2%, respectively), whereas MRM performed better than SIMCA in terms of forced model efficiency (96.5%). The selection of variables by S-LDA permitted building models characterized, in general, by higher efficiency. MRM again provided the best results for CV efficiency (87.7%, with an effective balance of sensitivity and specificity) as well as forced model efficiency (96.5%). PMID:27041319

  6. A Multivariate Randomization Test of Association Applied to Cognitive Test Results

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert; Beard, Bettina

    2009-01-01

    Randomization tests provide a conceptually simple, distribution-free way to implement significance testing. We have applied this method to the problem of evaluating the significance of the association among a number (k) of variables. The randomization method was the random re-ordering of k-1 of the variables. The criterion variable was the value of the largest eigenvalue of the correlation matrix.
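
    A compact version of that test (Python assumed; synthetic data): the statistic is the largest eigenvalue of the k-variable correlation matrix, and the null distribution comes from randomly re-ordering k-1 of the variables.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n, k = 60, 4
    data = rng.normal(size=(n, k))
    data[:, 1:] += 0.5 * data[:, [0]]          # build in a genuine association

    def top_eig(x):
        return np.linalg.eigvalsh(np.corrcoef(x, rowvar=False))[-1]

    observed = top_eig(data)
    null = np.empty(2000)
    for b in range(null.size):
        shuffled = data.copy()
        for j in range(1, k):                  # re-order k-1 of the variables
            rng.shuffle(shuffled[:, j])
        null[b] = top_eig(shuffled)
    print(f"lambda_max = {observed:.3f}, p = {np.mean(null >= observed):.4f}")
    ```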

  7. Firefly algorithm versus genetic algorithm as powerful variable selection tools and their effect on different multivariate calibration models in spectroscopy: A comparative study.

    PubMed

    Attia, Khalid A M; Nassar, Mohammed W I; El-Zeiny, Mohamed B; Serag, Ahmed

    2017-01-01

    For the first time, a new variable selection method based on swarm intelligence, namely the firefly algorithm, is coupled with three different multivariate calibration models, namely concentration residual augmented classical least squares, artificial neural networks and support vector regression, applied to UV spectral data. A comparative study between the firefly algorithm and the well-known genetic algorithm was carried out. The discussion revealed the superiority of this new, powerful algorithm over the well-known genetic algorithm. Moreover, different statistical tests were performed and no significant differences were found among the models regarding their predictive ability. This ensures that simpler and faster models were obtained without any deterioration of the quality of the calibration. PMID:27423110

  8. Correction of interstitial water changes in calibration methods applied to XRF core-scanning major elements in long sediment cores: Case study from the South China Sea

    NASA Astrophysics Data System (ADS)

    Chen, Quan; Kissel, Catherine; Govin, Aline; Liu, Zhifei; Xie, Xin

    2016-05-01

    Fast and nondestructive X-ray fluorescence (XRF) core scanning provides high-resolution element data that are widely used in paleoclimate studies. However, various matrix and specimen effects prevent the use of semiquantitative raw XRF core-scanning intensities for robust paleoenvironmental interpretations. We present here a case study of a 50.8 m-long piston core, MD12-3432, retrieved from the northern South China Sea. The absorption effect of interstitial water is identified as the major source of deviations between XRF core-scanning intensities and measured element concentrations. The two existing calibration methods, i.e., normalized median-scaled calibration (NMS) and multivariate log-ratio calibration (MLC), are tested on this sequence after the application of a water absorption correction. The results indicate that an improvement is still required to appropriately correct the influence of downcore changes in interstitial water content in the long sediment core. Consequently, we implement a new polynomial water content correction in the NMS and MLC methods, referred to as the NPS and P_MLC calibrations. Results calibrated by these two improved methods indicate that the influence of downcore water content changes is now appropriately corrected. We therefore recommend either of the two methods for robust paleoenvironmental interpretations of major elements measured by XRF scanning in long sediment sequences with significant downcore interstitial water content changes.

  9. Multivariate analysis applied to agglomerated macrobenthic data from an unpolluted estuary.

    PubMed

    Conde, Anxo; Novais, Júlio M; Domínguez, Jorge

    2013-01-01

    We agglomerated species into higher taxonomic aggregations and functional groups to analyse environmental gradients in an unpolluted estuary. We then applied non-metric Multidimensional Scaling and Redundancy Analysis (RDA) for ordination of the agglomerated data matrices. The correlation between the ordinations produced by the two methods was generally high. However, the performance of the RDA models depended on the data matrix used to fit the model. As a result, salinity and total nitrogen were found to be significant only when the aggregated data matrices were used rather than the species data matrix. We used the results to select an RDA model that explained a higher percentage of variance in the species data set than the parsimonious model. We conclude that the use of aggregated matrices may be considered complementary to the use of species data to obtain a broader insight into the distribution of macrobenthic assemblages in relation to environmental gradients. PMID:23684322

  10. Tailored Excitation for Multivariable Stability-Margin Measurement Applied to the X-31A Nonlinear Simulation

    NASA Technical Reports Server (NTRS)

    Bosworth, John T.; Burken, John J.

    1997-01-01

    Safety and productivity of the initial flight test phase of a new vehicle have been enhanced by developing the ability to measure the stability margins of the combined control system and vehicle in flight. One shortcoming of performing this analysis is the long duration of the excitation signal required to provide results over a wide frequency range. For flight regimes such as high angle of attack or hypersonic flight, the ability to maintain flight condition for this time duration is difficult. Significantly reducing the required duration of the excitation input is possible by tailoring the input to excite only the frequency range where the lowest stability margin is expected. For a multiple-input/multiple-output system, the inputs can be simultaneously applied to the control effectors by creating each excitation input with a unique set of frequency components. Chirp-Z transformation algorithms can be used to match the analysis of the results to the specific frequencies used in the excitation input. This report discusses the application of a tailored excitation input to a high-fidelity X-31A linear model and nonlinear simulation. Depending on the frequency range, the results indicate the potential to significantly reduce the time required for stability measurement.

  11. Linear model correction: A method for transferring a near-infrared multivariate calibration model without standard samples.

    PubMed

    Liu, Yan; Cai, Wensheng; Shao, Xueguang

    2016-12-01

    Calibration transfer is essential for practical applications of near-infrared (NIR) spectroscopy because spectra may be measured on different instruments and the difference between the instruments must be corrected. For most calibration transfer methods, standard samples are necessary to construct the transfer model from the spectra of the samples measured on the two instruments, named the master and the slave instrument, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. This fact makes the coefficients of the linear models constructed from the spectra measured on different instruments similar in profile. Therefore, by using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary, the method may be more convenient in practical use. PMID:27380302
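
    The paper's constrained optimization is not reproduced here, but the underlying idea can be sketched: since master and slave coefficient vectors are similar in profile, a handful of slave-instrument spectra can pull the master coefficients toward a slave model, e.g. via a penalized least-squares step (Python assumed; this is an illustrative stand-in, not the authors' exact LMC algorithm).

    ```python
    import numpy as np

    def transfer_coefficients(b_master, X_slave, y_slave, lam=1.0):
        """Solve min_b ||X b - y||^2 + lam * ||b - b_master||^2 in closed form."""
        p = b_master.size
        A = X_slave.T @ X_slave + lam * np.eye(p)
        return np.linalg.solve(A, X_slave.T @ y_slave + lam * b_master)

    rng = np.random.default_rng(4)
    p = 50
    b_master = rng.normal(size=p)                 # master-model coefficients
    X_slave = rng.normal(size=(5, p))             # a few slave-instrument spectra
    y_slave = X_slave @ (1.05 * b_master)         # slave response, similar profile
    b_slave = transfer_coefficients(b_master, X_slave, y_slave, lam=10.0)
    print("profile correlation:", np.corrcoef(b_master, b_slave)[0, 1].round(3))
    ```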

  12. Quantitation of active pharmaceutical ingredients and excipients in powder blends using designed multivariate calibration models by near-infrared spectroscopy.

    PubMed

    Li, Weiyong; Worosila, Gregory D

    2005-05-13

    This research note demonstrates the simultaneous quantitation of a pharmaceutical active ingredient and three excipients in a simulated powder blend containing acetaminophen, Prosolv and Crospovidone. An experimental design approach was used to generate a 5-level (%, w/w) calibration sample set that included 125 samples. The samples were prepared by weighing suitable amounts of the powders into separate 20-mL scintillation vials and mixing manually. Partial least squares (PLS) regression was used in calibration model development. The models generated accurate results for quantitation of Crospovidone (at 5%, w/w) and magnesium stearate (at 0.5%, w/w). Further testing demonstrated that 2-level models were as effective as the 5-level ones, which reduced the number of calibration samples to 50. The models had a small bias for quantitation of acetaminophen (at 30%, w/w) and Prosolv (at 64.5%, w/w) in the blend. The implication of the bias is discussed. PMID:15848006

  13. Calibration methodology for proportional counters applied to yield measurements of a neutron burst

    SciTech Connect

    Tarifeño-Saldivia, Ariel E-mail: atarisal@gmail.com; Pavez, Cristian; Soto, Leopoldo; Mayer, Roberto E.

    2014-01-15

    This paper introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. The methodology is to be applied when single neutron events cannot be resolved in time by standard nuclear electronics, or when a continuous current cannot be measured at the output of the counter. It is based on calibration of the counter in pulse mode and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from the detection of the burst of neutrons. The model is developed and presented in full detail. The implementation of the methodology for the measurement of fast neutron yields generated in plasma focus experiments, using a moderated proportional counter, is discussed. An experimental verification of the accuracy of the methodology is presented. An improvement of more than one order of magnitude in the accuracy of the detection system is obtained by using this methodology with respect to previous calibration methods.
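
    In simplified form, the charge-based estimate works as follows (a back-of-the-envelope sketch, not the paper's full statistical model; all numbers are illustrative): calibrate the counter in pulse mode to obtain the mean charge per detected neutron and its spread, then convert the burst's accumulated charge into an event count with a propagated uncertainty.

    ```python
    import numpy as np

    def events_from_charge(q_total, q_mean, q_std):
        """Estimate detected events from accumulated charge.

        Assumes the burst charge is a sum of independent single-event
        charges with mean q_mean and standard deviation q_std.
        """
        n_hat = q_total / q_mean                     # estimated number of events
        sigma_n = np.sqrt(n_hat) * (q_std / q_mean)  # var(Q) = n * q_std**2
        return n_hat, sigma_n

    n, dn = events_from_charge(q_total=4.2e-9, q_mean=2.1e-12, q_std=0.6e-12)
    print(f"detected events: {n:.0f} +/- {dn:.0f}")
    ```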

  14. Calibration

    NASA Astrophysics Data System (ADS)

    Kunze, Hans-Joachim

    Commercial spectrographic systems are usually supplied with some wavelength calibration, but it is essential that the experimenter performs his or her own calibration for reliable measurements. A number of sources emitting well-known emission lines are available, and the best values of their wavelengths may be taken from data banks accessible on the internet. Data have been critically evaluated for many decades by the National Institute of Standards and Technology (NIST) of the USA [13], see also p. 3. Special databases have been established by the astronomy and fusion communities (Appendix B).

  15. Exploration of attenuated total reflectance mid-infrared spectroscopy and multivariate calibration to measure immunoglobulin G in human sera.

    PubMed

    Hou, Siyuan; Riley, Christopher B; Mitchell, Cynthia A; Shaw, R Anthony; Bryanton, Janet; Bigsby, Kathryn; McClure, J Trenton

    2015-09-01

    Immunoglobulin G (IgG) is crucial for the protection of the host from invasive pathogens. Due to its importance for human health, tools that enable the monitoring of IgG levels are highly desired. Consequently, there is a need for methods to determine the IgG concentration that are simple, rapid, and inexpensive. This work explored the potential of attenuated total reflectance (ATR) infrared spectroscopy as a method to determine IgG concentrations in human serum samples. Venous blood samples were collected from adults and children, and from the umbilical cord of newborns. The serum was harvested and tested using ATR infrared spectroscopy. Partial least squares (PLS) regression provided the basis to develop the new analytical methods. Three PLS calibrations were determined: one for the combined set of venous and umbilical cord serum samples, a second for only the umbilical cord samples, and a third for only the venous samples. The number of PLS factors was chosen by critical evaluation of Monte Carlo-based cross-validation results. The predictive performance of each PLS calibration was evaluated using the Pearson correlation coefficient, scatter and Bland-Altman plots, and percent deviations for independent prediction sets. Repeatability was evaluated by the standard deviation and relative standard deviation. The results showed that ATR infrared spectroscopy is potentially a simple, quick, and inexpensive method to measure IgG concentrations in human serum samples. The results also showed that it is possible to build a unified calibration curve for the umbilical cord and venous samples. PMID:26003699
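
    Choosing the number of PLS factors by Monte Carlo cross-validation, as done above, amounts to repeated random splits and picking the factor count with the lowest average held-out error. A hedged sketch (Python/scikit-learn assumed; synthetic data; the split fraction and repeat count are illustrative):

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import ShuffleSplit, cross_val_score

    rng = np.random.default_rng(5)
    X = rng.normal(size=(80, 300))                       # "spectra"
    y = X[:, :5] @ rng.normal(size=5) + 0.1 * rng.normal(size=80)

    cv = ShuffleSplit(n_splits=100, test_size=0.25, random_state=0)
    rmse = [
        -cross_val_score(PLSRegression(n_components=k), X, y,
                         scoring="neg_root_mean_squared_error", cv=cv).mean()
        for k in range(1, 11)
    ]
    print("chosen number of PLS factors:", int(np.argmin(rmse)) + 1)
    ```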

  16. Development of a multivariate calibration model for the determination of dry extract content in Brazilian commercial bee propolis extracts through UV-Vis spectroscopy

    NASA Astrophysics Data System (ADS)

    Barbeira, Paulo J. S.; Paganotti, Rosilene S. N.; Ássimos, Ariane A.

    2013-10-01

    This study had the objective of determining the dry extract content of commercial alcoholic extracts of bee propolis through Partial Least Squares (PLS) multivariate calibration and electronic spectroscopy. The PLS model provided a good prediction of dry extract content in commercial alcoholic extracts of bee propolis in the range of 2.7 to 16.8% (m/v), presenting the advantage of being less laborious and faster than the traditional gravimetric methodology. The PLS model was optimized with outlier detection tests according to ASTM E 1655-05. In this study it was possible to verify that a centrifugation stage is extremely important in order to avoid the presence of waxes, resulting in a more accurate model. Around 50% of the analyzed samples presented a dry extract content lower than the value established by Brazilian legislation; in most cases, the values found differed from those claimed on the product's label.

  17. A flow system for generation of concentration perturbation in two-dimensional correlation near-infrared spectroscopy: application to variable selection in multivariate calibration.

    PubMed

    Pereira, Claudete Fernandes; Pasquini, Celio

    2010-05-01

    A flow system is proposed to produce a concentration perturbation in liquid samples, aiming at the generation of two-dimensional correlation near-infrared spectra. The system presents advantages in relation to batch systems employed for the same purpose: the experiments are accomplished in a closed system; application of perturbation is rapid and easy; and the experiments can be carried out with micro-scale volumes. The perturbation system has been evaluated in the investigation and selection of relevant variables for multivariate calibration models for the determination of quality parameters of gasoline, including ethanol content, MON (motor octane number), and RON (research octane number). The main advantage of this variable selection approach is the direct association between spectral features and chemical composition, allowing easy interpretation of the regression models. PMID:20482969
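
    Once the perturbation-ordered spectra are collected, the synchronous 2D correlation map follows from Noda's formulation, Φ = Ỹᵀ Ỹ / (m − 1), where Ỹ holds the mean-centered dynamic spectra. A minimal sketch (Python assumed; a synthetic absorption band stands in for the flow-system data):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    m, p = 15, 120                            # perturbation steps, wavelengths
    concentration = np.linspace(0.1, 1.0, m)
    band = np.exp(-0.5 * ((np.arange(p) - 60) / 8.0) ** 2)
    Y = np.outer(concentration, band) + 0.01 * rng.normal(size=(m, p))

    Y_dyn = Y - Y.mean(axis=0)                # dynamic (mean-centered) spectra
    sync = Y_dyn.T @ Y_dyn / (m - 1)          # synchronous correlation map
    print("strongest auto-peak at variable index", int(np.argmax(np.diag(sync))))
    ```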

  18. pKa determinations of xanthene derivatives in aqueous solutions by multivariate analysis applied to UV-Vis spectrophotometric data

    NASA Astrophysics Data System (ADS)

    Batistela, Vagner Roberto; Pellosi, Diogo Silva; de Souza, Franciane Dutra; da Costa, Willian Ferreira; de Oliveira Santin, Silvana Maria; de Souza, Vagner Roberto; Caetano, Wilker; de Oliveira, Hueder Paulo Moisés; Scarminio, Ieda Spacino; Hioka, Noboru

    2011-09-01

    Xanthenes form an important class of widely used dyes. Most of them present three acid-base groups: two phenolic sites and one carboxylic site. Therefore, the determination of the pKa values and the attribution of each group to the corresponding pKa is a very important task. Attempts to obtain reliable pKa values through potentiometric titration and through electronic absorption spectrophotometry using first- and second-order derivatives failed. Because of the close pKa values allied to strong UV-Vis spectral overlap, multivariate analysis, a powerful chemometric method, is applied in this work. The determination was performed for eosin Y, erythrosin B, and rose bengal B, and also for other synthesized derivatives such as 2-(3,6-dihydroxy-9-acridinyl)benzoic acid, 2,4,5,7-tetranitrofluorescein, eosin methyl ester, and erythrosin methyl ester in water. These last two compounds (esters) made it possible to attribute the pKa of the phenolic group, which is not easily recognizable for some of the investigated dyes. Besides the pKa determination, the chemometric analysis allowed estimation of the electronic spectra of some prevalent protolytic species and evaluation of substituent effects.

  19. Applying knowledge engineering and representation methods to improve support vector machine and multivariate probabilistic neural network CAD performance

    NASA Astrophysics Data System (ADS)

    Land, Walker H., Jr.; Anderson, Frances; Smith, Tom; Fahlbusch, Stephen; Choma, Robert; Wong, Lut

    2005-04-01

    Achieving consistent and correct database cases is crucial to the correct evaluation of any computer-assisted diagnostic (CAD) paradigm. This paper describes the application of artificial intelligence (AI), knowledge engineering (KE) and knowledge representation (KR) to a data set of ~2500 cases from six separate hospitals, with the objective of removing or reducing inconsistent outlier data. Several support vector machine (SVM) kernels were used to measure the diagnostic performance on the original and a "cleaned" data set. Specifically, KE and KR principles were applied to the two data sets, which were re-examined with respect to the environment and agents. One data set was found to contain 25 non-characterizable sets; the other contained 180 non-characterizable sets. CAD system performance was measured with both the original and "cleaned" data sets using two SVM kernels as well as a multivariate probabilistic neural network (PNN). Results demonstrated: (i) a 10% average improvement in overall Az and (ii) approximately a 50% average improvement in partial Az.

  20. A strategy for multivariate calibration based on modified single-index signal regression: Capturing explicit non-linearity and improving prediction accuracy

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoyu; Li, Qingbo; Zhang, Guangjun

    2013-11-01

    In this paper, a modified single-index signal regression (mSISR) method is proposed to construct a nonlinear and practical model with high accuracy. The mSISR method takes the optimal penalty tuning parameter in P-spline signal regression (PSR) as the initial tuning parameter and chooses the number of cycles by minimizing the root mean squared error of cross-validation (RMSECV). mSISR is superior to single-index signal regression (SISR) in terms of accuracy, computation time and convergence, and it can characterize the non-linearity between spectra and responses more precisely than SISR. Two spectral data sets from basic research experiments, including nondestructive measurement of plant chlorophyll and noninvasive measurement of human blood glucose, are employed to illustrate the advantages of mSISR. The results indicate that the mSISR method (i) obtains a smooth and helpful regression coefficient vector, (ii) explicitly exhibits the type and amount of the non-linearity, (iii) can take advantage of nonlinear features of the signals to improve prediction performance and (iv) has distinct adaptability for complex spectral models in comparison with other calibration methods. These results validate mSISR as a promising nonlinear modeling strategy for multivariate calibration.

  1. Direct estimation of dissolved organic carbon using synchronous fluorescence and independent component analysis (ICA): advantages of a multivariate calibration.

    PubMed

    De Almeida Brehm, Franciane; de Azevedo, Julio Cesar R; da Costa Pereira, Jorge; Burrows, Hugh D

    2015-11-01

    Dissolved organic carbon (DOC) is frequently used as a diagnostic parameter for the identification of environmental contamination in aqueous systems. Since this organic matter evolves and decays over time, samples collected under environmental conditions require some stabilization process until the corresponding analysis can be made, which may affect the analysis results. This problem can be avoided by direct determination of DOC. We report a study using in situ synchronous fluorescence spectra, with independent component analysis to retrieve the relevant major spectral contributions and their respective component contributions, for the direct determination of DOC. Fluorescence spectroscopy is a very powerful and sensitive technique for evaluating vestigial organic matter dissolved in water and is thus suited to the analytical task of directly monitoring dissolved organic matter in water, avoiding the need for a stabilization step. We also report the development of an accurate calibration model for dissolved organic carbon determination using environmental samples of humic and fulvic acids. The method described opens the opportunity for fast, on-site DOC estimation in environmental or other field studies using a portable fluorescence spectrometer. This combines the benefits of using fresh samples, without the need for stabilizers, and also allows the interpretation of various additional spectral contributions based on their respective estimated properties. We show how independent component analysis may be used to describe tyrosine, tryptophan, humic acid and fulvic acid spectra and, thus, to retrieve the respective individual component contributions to the DOC. PMID:26497563
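
    A hedged sketch of the decomposition-plus-calibration step (Python/scikit-learn assumed; synthetic mixtures; the component count is an assumption): FastICA retrieves component spectra and per-sample contributions from synchronous fluorescence data, and the contributions are regressed against measured DOC.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(7)
    n_samples, n_points = 40, 250
    sources = np.abs(rng.normal(size=(3, n_points)))    # e.g. humic, fulvic, protein-like
    mixing = np.abs(rng.normal(size=(n_samples, 3)))    # per-sample contributions
    spectra = mixing @ sources + 0.01 * rng.normal(size=(n_samples, n_points))

    ica = FastICA(n_components=3, random_state=0)
    component_spectra = ica.fit_transform(spectra.T)    # (n_points, 3) spectral shapes
    scores = ica.mixing_                                # (n_samples, 3) contributions

    doc = mixing @ np.array([5.0, 3.0, 1.0])            # synthetic "measured" DOC
    model = LinearRegression().fit(scores, doc)
    print("calibration R^2:", round(model.score(scores, doc), 3))
    ```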

  2. A Rapid Coordinate Transformation Method Applied in Industrial Robot Calibration Based on Characteristic Line Coincidence.

    PubMed

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia

    2016-01-01

    Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving equations of point clouds. Despite their high accuracy, this might result in no solution due to the use of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on equation solving but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the spatial geometric relations, the characteristic lines are made to coincide by a series of rotations and translations. The transformation matrix can be obtained using matrix transformation theory. Experiments were designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot through calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration. PMID:26901203

  3. A Rapid Coordinate Transformation Method Applied in Industrial Robot Calibration Based on Characteristic Line Coincidence

    PubMed Central

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia

    2016-01-01

    Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving equations of point clouds. Despite their high accuracy, this might result in no solution due to the use of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on equation solving but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the spatial geometric relations, the characteristic lines are made to coincide by a series of rotations and translations. The transformation matrix can be obtained using matrix transformation theory. Experiments were designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot through calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration. PMID:26901203
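
    For contrast, the conventional point-cloud approach that the paper compares against has a closed-form solution: the best rigid transform (rotation plus translation) between matched points via SVD, often called the Kabsch solution. A sketch (Python assumed; synthetic points):

    ```python
    import numpy as np

    def rigid_transform(src, dst):
        """R, t minimizing sum ||R @ src_i + t - dst_i||^2 over matched points."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
        R = Vt.T @ D @ U.T
        return R, dst_c - R @ src_c

    rng = np.random.default_rng(8)
    theta = 0.3
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0,            0.0,           1.0]])
    src = rng.normal(size=(10, 3))
    dst = src @ R_true.T + np.array([0.5, -1.0, 2.0])
    R, t = rigid_transform(src, dst)
    print("max residual:", np.abs(src @ R.T + t - dst).max())
    ```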

  4. Spectral variable selection for partial least squares calibration applied to authentication and quantification of extra virgin olive oils using Fourier transform Raman spectroscopy.

    PubMed

    Heise, H Michael; Damm, Uwe; Lampen, Peter; Davies, Antony N; McIntyre, Peter S

    2005-10-01

    The limits of quantitative multivariate assays for the analysis of extra virgin olive oil samples from various Greek sites adulterated with sunflower oil have been evaluated based on their Fourier transform (FT) Raman spectra. Different strategies for wavelength selection were tested for calculating optimal partial least squares (PLS) models. Compared to the full-spectrum methods previously applied, the optimum standard error of prediction (SEP) for the sunflower oil concentrations in spiked olive oil samples could be significantly reduced. One efficient approach (PMMS, pair-wise minima and maxima selection) used a special variable selection strategy based on pair-wise consideration of significant respective minima and maxima of PLS regression vectors, calculated for broad spectral intervals and a low number of PLS factors. PMMS provided robust calibration models with a small number of variables. On the other hand, the recently published Tabu search strategy (a search process guided by restrictions recorded in a Tabu list) achieved lower SEP values, but at the cost of extensive computing time when searching for a global minimum, and of less robust calibration models. Robustness was tested by using packages of ten and twenty randomly selected samples within cross-validation for calculating independent prediction values. The best SEP values for a single year's harvest, with a total of 66 Cretan samples, were obtained by such variable-optimized PLS calibration models using leave-20-out cross-validation (values between 0.5 and 0.7% by weight). For the more complex population of olive oil samples from all over Greece (92 samples in total), results were between 0.7 and 0.9% by weight with a cross-validation sample package size of 20. Notably, the calibration method with Tabu variable selection has been shown to be a valid chemometric approach by which a single model can be applied, with a low SEP of 1.4%, to olive oil samples across three different harvest years.

  5. Towards an effective calibration theory for a broadly applied land surface model (VIC)

    NASA Astrophysics Data System (ADS)

    Melsen, Lieke; Teuling, Adriaan; Torfs, Paul; Zappa, Massimiliano

    2014-05-01

    The Variable Infiltration Capacity (VIC; Liang et al., 1994) model has been used for a broad range of applications, in hydrology as well as in the fields of climate and global change. Despite the attention given to the model and its output, calibration is often not performed. To improve calibration procedures for VIC applied at grid resolutions varying from meso-scale catchments to the 1 km 'hyper'resolution now used in several global modeling studies, the parameters of the model are studied in more detail. An earlier sensitivity analysis of a selection of VIC parameters by Demaria et al. (2007) showed that the model is hardly sensitive to many of them. With improved sensitivity analysis methods and computational power, this study considers a broader spectrum of parameters with state-of-the-art methods: both the DELSA sensitivity analysis method (Rakovec et al., 2013) and the ABC method (Vrugt et al., 2013) are employed in parallel on a single-cell VIC model of the Rietholzbach in Switzerland (representative of the 1 km hyperresolution) and on single- and multiple-cell VIC models of the meso-scale Thur basin in Switzerland. In the latter case, routing also plays an important role. By critically screening the parameters of the model, it is possible to define a framework for calibration of the model at multiple scales. References: Demaria, E., B. Nijssen, and T. Wagener (2007), Monte Carlo sensitivity analysis of land surface parameters using the Variable Infiltration Capacity model, J. Geophys. Res., 112, D11,113. Liang, X., D. Lettenmaier, E. Wood, and S. Burges (1994), A simple hydrologically based model of land surface water and energy fluxes for general circulation models, J. Geophys. Res., 99(D7), 14,415-14,458. Rakovec, O., M. Hill, M. Clark, A. Weerts, A. Teuling, and R. Uijlenhoet (2013), A new computationally frugal method for sensitivity analysis of environmental models, Water Resour. Res., in press. Vrugt, J.A. and M

  6. Simultaneous determination of vitamin B12 and its derivatives using some of multivariate calibration 1 (MVC1) techniques

    NASA Astrophysics Data System (ADS)

    Samadi-Maybodi, Abdolraouf; Darzi, S. K. Hassani Nejad

    2008-10-01

    Resolution of binary mixtures of vitamin B12, methylcobalamin and B12 coenzyme, with minimum sample pre-treatment and without analyte separation, has been successfully achieved by partial least squares with one dependent variable (PLS1), orthogonal signal correction/partial least squares (OSC/PLS), principal component regression (PCR) and hybrid linear analysis (HLA). Analytical data were obtained from UV-vis spectra. The UV-vis spectra of vitamin B12, methylcobalamin and B12 coenzyme were recorded under the same spectral conditions. A central composite design was used over the ranges of 10-80 mg L⁻¹ for vitamin B12 and methylcobalamin and 20-130 mg L⁻¹ for B12 coenzyme. Model refinement and validation were performed by cross-validation. The minimum root mean square error of prediction (RMSEP) was 2.26 mg L⁻¹ for vitamin B12 with PLS1, 1.33 mg L⁻¹ for methylcobalamin with OSC/PLS and 3.24 mg L⁻¹ for B12 coenzyme with HLA. Figures of merit such as selectivity, sensitivity, analytical sensitivity and LOD were determined for the three compounds. The procedure was successfully applied to the simultaneous determination of the three compounds in synthetic mixtures and in a pharmaceutical formulation.

  7. Augmented classical least squares multivariate spectral analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2004-02-03

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  8. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-01-11

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  9. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-07-26

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
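
    A hedged sketch of the ACLS idea (Python assumed; a simplified reading of the patent text, omitting weighting and the PACLS updating step): fit CLS, extract the dominant shapes of the calibration residuals, and append them to the pure-component spectra so prediction can absorb unmodeled spectral variation.

    ```python
    import numpy as np

    def acls_fit(C, A, n_aug=1):
        """C: (samples, components) concentrations; A: (samples, wavelengths) spectra."""
        K = np.linalg.lstsq(C, A, rcond=None)[0]         # CLS pure-component estimates
        E = A - C @ K                                    # calibration residuals
        _, _, Vt = np.linalg.svd(E, full_matrices=False)
        return np.vstack([K, Vt[:n_aug]])                # augment with residual shapes

    def acls_predict(K_aug, A_new, n_components):
        coefs = np.linalg.lstsq(K_aug.T, A_new.T, rcond=None)[0].T
        return coefs[:, :n_components]                   # analyte concentrations only

    rng = np.random.default_rng(9)
    C = rng.uniform(0.0, 1.0, size=(20, 2))
    K_true = rng.normal(size=(2, 100))
    unmodeled = np.outer(rng.uniform(0.0, 1.0, 20), rng.normal(size=100))
    A = C @ K_true + unmodeled + 0.01 * rng.normal(size=(20, 100))
    C_hat = acls_predict(acls_fit(C, A), A, n_components=2)
    print("max concentration error:", np.abs(C_hat - C).max())
    ```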

  10. Comparative study between univariate spectrophotometry and multivariate calibration as analytical tools for quantitation of Benazepril alone and in combination with Amlodipine.

    PubMed

    Farouk, M; Elaziz, Omar Abd; Tawakkol, Shereen M; Hemdan, A; Shehata, Mostafa A

    2014-04-01

    Four simple, accurate, reproducible, and selective methods have been developed and subsequently validated for the determination of Benazepril (BENZ) alone and in combination with Amlodipine (AML) in pharmaceutical dosage form. The first method is pH-induced difference spectrophotometry, where BENZ can be measured in the presence of AML as it shows maximum absorption at 237 nm and 241 nm in 0.1 N HCl and 0.1 N NaOH, respectively, while AML shows no wavelength shift in either solvent. The second method is the new Extended Ratio Subtraction Method (EXRSM) coupled with the Ratio Subtraction Method (RSM) for determination of both drugs in commercial dosage form. The third and fourth methods are multivariate calibration methods: Principal Component Regression (PCR) and Partial Least Squares (PLS). A detailed validation of the methods was performed following the ICH guidelines; the standard curves were found to be linear in the range of 2-30 μg/mL for BENZ in the difference and extended ratio subtraction spectrophotometric methods, and 5-30 μg/mL for AML in the EXRSM method, with well-accepted mean correlation coefficients for each analyte. The intra-day and inter-day precision and accuracy results were well within the acceptable limits. PMID:24424258

  11. Applying transport-distance specific SOC distribution to calibrate soil erosion model WaTEM

    NASA Astrophysics Data System (ADS)

    Hu, Yaxian; Heckrath, Goswin J.; Kuhn, Nikolaus J.

    2016-04-01

    Slope-scale soil erosion, transport and deposition fundamentally determine the spatial redistribution of eroded sediments in terrestrial and aquatic systems, which further affects the burial and decomposition of eroded soil organic carbon (SOC). However, comparisons of SOC contents between an upper eroding slope and a lower depositional site cannot fully reflect the movement of eroded SOC in transit along hillslopes. The actual transport distance of eroded SOC is determined by its settling velocity. So far, the settling velocity distribution of eroded SOC has mostly been calculated from the mineral-particle-specific SOC distribution. Yet soil is mostly eroded in the form of aggregates, and the movement of aggregates differs significantly from that of individual mineral particles. This calls for an SOC erodibility parameter based on the actual transport distance distribution of eroded fractions to better calibrate soil erosion models. A previous field investigation on a freshly seeded cropland in Denmark showed immediate deposition of fast-settling soil fractions and the associated SOC at footslopes, followed by a fining trend at the slope tail. To further quantify the long-term effects of topography on the erosional redistribution of eroded SOC, the transport-distance-specific SOC distribution observed in the field was applied to the soil erosion model WaTEM (based on the USLE). After integration with a local DEM, our calibrated model succeeded in locating the hotspots of enrichment/depletion of eroded SOC at different topographic positions, corresponding much better to the real-world field observations. By extrapolating to repeated erosion events, our projected results on the spatial distribution of eroded SOC are also adequately consistent with the SOC properties in consecutive sample profiles along the slope.

  12. Calibration and uncertainty issues of a hydrological model (SWAT) applied to West Africa

    NASA Astrophysics Data System (ADS)

    Schuol, J.; Abbaspour, K. C.

    2006-09-01

    Distributed hydrological models like SWAT (Soil and Water Assessment Tool) are often highly over-parameterized, making parameter specification and parameter estimation inevitable steps in model calibration. Manual calibration is almost infeasible due to the complexity of large-scale models with many objectives. Therefore we used a multi-site, semi-automated inverse modelling routine (SUFI-2) for calibration and uncertainty analysis. Nevertheless, the question of when a model is sufficiently calibrated remains open, and requires a project-dependent definition. Due to the non-uniqueness of effective parameter sets, parameter calibration and the prediction uncertainty of a model are intimately related. We address some calibration and uncertainty issues using SWAT to model a four million km² area in West Africa, comprising mainly the basins of the Niger, Volta and Senegal rivers. This model is a case study in a larger project with the goal of quantifying globally available freshwater on a country basis. Annual and monthly simulations with the "calibrated" model for West Africa show promising results with respect to freshwater quantification, but also point out the importance of evaluating the conceptual model uncertainty as well as the parameter uncertainty.

  13. Applying Multivariate Clustering Techniques to Health Data: The 4 Types of Healthcare Utilization in the Paris Metropolitan Area

    PubMed Central

    Lefèvre, Thomas; Rondet, Claire; Parizot, Isabelle; Chauvin, Pierre

    2014-01-01

    Background Cost containment policies and the need to satisfy patients’ health needs and care expectations provide major challenges to healthcare systems. Identification of homogeneous groups in terms of healthcare utilisation could lead to a better understanding of how to adjust healthcare provision to society and patient needs. Methods This study used data from the third wave of the SIRS cohort study, a representative, population-based, socio-epidemiological study set up in 2005 in the Paris metropolitan area, France. The data were analysed using a cross-sectional design. In 2010, 3000 individuals were interviewed in their homes. Non-conventional multivariate clustering techniques were used to determine homogeneous user groups in data. Multinomial models assessed a wide range of potential associations between user characteristics and their pattern of healthcare utilisation. Results We identified four distinct patterns of healthcare use. Patterns of consumption and the socio-demographic characteristics of users differed qualitatively and quantitatively between these four profiles. Extensive and intensive use by older, wealthier and unhealthier people contrasted with narrow and parsimonious use by younger, socially deprived people and immigrants. Rare, intermittent use by young healthy men contrasted with regular targeted use by healthy and wealthy women. Conclusion The use of an original technique of massive multivariate analysis allowed us to characterise different types of healthcare users, both in terms of resource utilisation and socio-demographic variables. This method would merit replication in different populations and healthcare systems. PMID:25506916

  14. Total anthocyanin content determination in intact açaí (Euterpe oleracea Mart.) and palmitero-juçara (Euterpe edulis Mart.) fruit using near infrared spectroscopy (NIR) and multivariate calibration.

    PubMed

    Inácio, Maria Raquel Cavalcanti; de Lima, Kássio Michell Gomes; Lopes, Valquiria Garcia; Pessoa, José Dalton Cruz; de Almeida Teixeira, Gustavo Henrique

    2013-02-15

    The aim of this study was to evaluate the potential of near-infrared reflectance spectroscopy (NIR) and multivariate calibration as a rapid method to determine the anthocyanin content of intact fruit (açaí and palmitero-juçara). Several multivariate calibration techniques, including partial least squares (PLS), interval partial least squares, genetic algorithm, successive projections algorithm, and net analyte signal were compared and validated by establishing figures of merit. Suitable results were obtained with the PLS model (four latent variables and 5-point smoothing), with a detection limit of 6.2 g kg⁻¹, limit of quantification of 20.7 g kg⁻¹, accuracy estimated as a root mean square error of prediction of 4.8 g kg⁻¹, mean selectivity of 0.79 g kg⁻¹, sensitivity of 5.04×10⁻³ g kg⁻¹, precision of 27.8 g kg⁻¹, and signal-to-noise ratio of 1.04×10⁻³ g kg⁻¹. These results suggest that NIR spectroscopy and multivariate calibration can be effectively used to determine the anthocyanin content of intact açaí and palmitero-juçara fruit. PMID:23194509

  15. Sequential injection kinetic spectrophotometric determination of quaternary mixtures of carbamate pesticides in water and fruit samples using artificial neural networks for multivariate calibration

    NASA Astrophysics Data System (ADS)

    Chu, Ning; Fan, Shihua

    2009-12-01

    A new analytical method was developed for the simultaneous kinetic spectrophotometric determination of a quaternary carbamate pesticide mixture consisting of carbofuran, propoxur, metolcarb and fenobucarb using sequential injection analysis (SIA). The procedure was based upon the different kinetic behaviour of the analytes reacting with the reagent in the flow system in non-stopped-flow mode, in which their hydrolysis products couple with diazotized p-nitroaniline in an alkaline medium to form the corresponding colored complexes. The absorbance data from the SIA peak time profile were recorded at 510 nm and resolved by back-propagation artificial neural network (BP-ANN) algorithms for multivariate quantitative analysis. The experimental variables and main network parameters were optimized, and each of the pesticides could be determined in the concentration range of 0.5-10.0 μg mL⁻¹, at a sampling frequency of 18 h⁻¹. The proposed method was compared to other spectrophotometric methods for the simultaneous determination of mixtures of carbamate pesticides, proved to be adequately reliable, and was successfully applied to the simultaneous determination of the four pesticide residues in water and fruit samples, giving satisfactory results in recovery studies (84.7-116.0%).
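
    The calibration step above can be sketched with a small feed-forward network standing in for the paper's BP-ANN (Python/scikit-learn assumed; synthetic kinetic profiles replace the SIA data, and the rate constants are invented for illustration):

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(10)
    t = np.linspace(0.0, 60.0, 80)                   # points along the SIA peak profile
    rates = np.array([0.02, 0.05, 0.09, 0.15])       # distinct kinetics per analyte
    conc = rng.uniform(0.5, 10.0, size=(120, 4))     # ug/mL, four carbamates
    profiles = conc @ (1 - np.exp(-np.outer(rates, t))) \
        + 0.005 * rng.normal(size=(120, t.size))

    net = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                     random_state=0))
    net.fit(profiles[:100], conc[:100])              # multi-output regression
    recovery = 100 * net.predict(profiles[100:]) / conc[100:]
    print("mean recovery (%):", recovery.mean().round(1))
    ```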

  16. Practical application of electromyogram radiotelemetry: the suitability of applying laboratory-acquired calibration data to field data

    SciTech Connect

    Geist, David R. ); Brown, Richard S.; Lepla, Ken; Chandler, James P.

    2001-12-01

    One of the practical problems with quantifying the amount of energy used by fish implanted with electromyogram (EMG) radio transmitters is that the signals emitted by the transmitter provide only a relative index of activity unless they are calibrated to the swimming speed of the fish. Ideally, calibration would be conducted for each fish before it is released, but this is often not possible, and calibration curves derived from more than one fish are used to interpret EMG signals from individuals that have not been calibrated. We tested the validity of this approach by comparing EMG data within three groups of three wild juvenile white sturgeon Acipenser transmontanus implanted with the same EMG radio transmitter. We also tested an additional six fish which were implanted with separate EMG transmitters. Within each group, a single EMG radio transmitter usually did not produce similar results in different fish. Grouping EMG signals among fish produced less accurate results than having individual EMG-swim speed relationships for each fish. It is unknown whether these differences were a result of different swimming performances among individual fish or inconsistencies in the placement or function of the EMG transmitters. In either case, our results suggest that caution should be used when applying calibration curves from one group of fish to another group of uncalibrated fish.

  17. Copula Multivariate analysis of Gross primary production and its hydro-environmental driver; A BIOME-BGC model applied to the Antisana páramos

    NASA Astrophysics Data System (ADS)

    Minaya, Veronica; Corzo, Gerald; van der Kwast, Johannes; Galarraga, Remigio; Mynett, Arthur

    2014-05-01

    Simulations of carbon cycling are prone to uncertainties from different sources, which in general are related to the input data, the parameters and the representation capacity of the model itself. The gross carbon uptake in the cycle is represented by the gross primary production (GPP), which is subject to the spatio-temporal variability of precipitation and soil moisture dynamics. This variability, together with parameter uncertainty, can be modelled by multivariate probabilistic distributions. Our study presents a novel methodology that uses multivariate Copula analysis to assess the GPP. Multi-species and elevation variables are included in a first scenario of the analysis. Hydro-meteorological conditions that might generate a change in the next 50 or more years are included in a second scenario of this analysis. The biogeochemical model BIOME-BGC was applied in the Ecuadorian Andean region at elevations greater than 4000 masl with the presence of typical páramo vegetation. The change of GPP over time is crucial for climate scenarios of carbon cycling in this type of ecosystem. The results help to improve our understanding of ecosystem function and clarify the dynamics and the relationship with the change of climate variables. Keywords: multivariate analysis, Copula, BIOME-BGC, NPP, páramos
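
    A minimal sketch of the copula step, assuming a bivariate Gaussian copula and synthetic GPP/precipitation data (the abstract does not specify the copula family used):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        precip = rng.gamma(2.0, 50.0, 500)             # synthetic precipitation driver
        gpp = 0.8 * precip + rng.normal(0, 20, 500)    # synthetic GPP response

        # probability integral transform via ranks, then normal scores
        u = stats.rankdata(precip) / (precip.size + 1)
        v = stats.rankdata(gpp) / (gpp.size + 1)
        z = stats.norm.ppf(np.column_stack([u, v]))
        rho = np.corrcoef(z.T)[0, 1]                   # Gaussian copula parameter

        # draw new dependent pairs with uniform margins from the fitted copula
        draws = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], 1000)
        u_new, v_new = stats.norm.cdf(draws).T         # map back through marginal quantiles as needed
        print(f"estimated copula correlation: {rho:.2f}")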

  18. Comparative study for determination of some polycyclic aromatic hydrocarbons ‘PAHs' by a new spectrophotometric method and multivariate calibration coupled with dispersive liquid-liquid extraction

    NASA Astrophysics Data System (ADS)

    Abdel-Aziz, Omar; El Kosasy, A. M.; El-Sayed Okeil, S. M.

    2014-12-01

    A modified dispersive liquid-liquid extraction (DLLE) procedure coupled with spectrophotometric techniques was adopted for the simultaneous determination of naphthalene, anthracene, benzo(a)pyrene, alpha-naphthol and beta-naphthol in water samples. Two different methods were used: the partial least-squares (PLS) method and a new derivative ratio method, namely extended derivative ratio (EDR). A PLS-2 model was established for simultaneous determination of the studied pollutants in methanol, using twenty mixtures as a calibration set and five mixtures as a validation set. Also, in methanol a novel EDR method was developed for determination of the studied pollutants, where each component in the mixture of the five PAHs was determined by using a mixture of the other four components as divisor. The chemometric and EDR methods could also be adopted for determination of the studied PAHs in water samples after transferring them from the aqueous medium to the organic one by utilizing the dispersive liquid-liquid extraction technique, where different parameters were investigated using a full factorial design. Both methods were compared, and the proposed method was validated according to ICH guidelines and successfully applied to determine these PAHs simultaneously in spiked water samples, where satisfactory results were obtained. All the results obtained agreed with those of published methods, with no significant difference observed.

  19. Near-infrared spectroscopy quantitative determination of Pefloxacin mesylate concentration in pharmaceuticals by using partial least squares and principal component regression multivariate calibration

    NASA Astrophysics Data System (ADS)

    Xie, Yunfei; Song, Yan; Zhang, Yong; Zhao, Bing

    2010-05-01

    Pefloxacin mesylate, a broad-spectrum antibacterial fluoroquinolone, has been widely used in clinical practice. Therefore, it is very important to detect the concentration of Pefloxacin mesylate. In this research, near-infrared spectroscopy (NIRS) was applied to quantitatively analyze 108 injection samples, which were randomly divided into a calibration set containing 89 samples and a prediction set containing 19 samples. To obtain satisfactory results, partial least squares (PLS) regression and principal component regression (PCR) were utilized to establish quantitative models. The process of establishing the models, the parameters of the models, and the prediction results are discussed in detail. For the PLS regression, the values of the coefficient of determination (R2) and root mean square error of cross-validation (RMSECV) are 0.9263 and 0.00119, respectively. For comparison, the values of R2 and RMSECV obtained by applying the PCR method are 0.9685 and 0.00108, respectively. The values of the standard error of prediction (SEP) of the PLS and PCR models are 0.001480 and 0.001140. The results for the prediction set suggest that these two quantitative analysis models have excellent generalization ability and prediction precision. However, for these PFLX injection samples, the PCR quantitative analysis model achieved more accurate results than the PLS model. The experimental results showed that NIRS together with the PCR method provides rapid and accurate quantitative analysis of PFLX injection samples. Moreover, this study supplies technical support for the further analysis of other injection samples in pharmaceuticals.
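
    For readers unfamiliar with PCR, the following hedged sketch (synthetic data, scikit-learn; the component count and spectra are invented) shows the PCA-plus-regression construction and a cross-validated RMSECV of the kind quoted above.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(3)
        conc = rng.uniform(0.1, 1.0, 108)              # 108 samples, as in the abstract
        basis = rng.normal(0, 1, (1, 200))             # one invented spectral signature
        X = conc[:, None] * basis + rng.normal(0, 0.05, (108, 200))

        pcr = make_pipeline(PCA(n_components=5), LinearRegression())
        rmsecv = np.sqrt(-cross_val_score(pcr, X, conc, cv=10,
                                          scoring="neg_mean_squared_error").mean())
        print(f"RMSECV: {rmsecv:.5f}")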

  20. Dead-blow hammer design applied to a calibration target mechanism to dampen excessive rebound

    NASA Technical Reports Server (NTRS)

    Lim, Brian Y.

    1991-01-01

    An existing rotary electromagnetic driver was specified to be used to deploy and restow a blackbody calibration target inside a spacecraft infrared science instrument. However, this target was much more massive than those in any previous application of the inherited design. The target experienced unacceptable bounce when reaching its stops. Without any design modification, the momentum generated by the driver caused the target to bounce back to its starting position. Initially, elastomeric dampers were used between the driver and the target. However, this design could not prevent the bounce, and it compromised the positional accuracy of the calibration target. A design that successfully met all the requirements incorporated a sealed pocket 85 percent full of 0.75 mm diameter stainless steel balls in the back of the target to provide the effect of a dead-blow hammer. The energy dissipation resulting from the collision of balls in the pocket successfully dampened the excess momentum generated during the target deployment. The disastrous effects of new requirements on a design with a successful flight history, the modifications that were necessary to make the device work, and the tests performed to verify its functionality are described.

  1. Performance Analysis of Extracted Rule-Base Multivariable Type-2 Self-Organizing Fuzzy Logic Controller Applied to Anesthesia

    PubMed Central

    Fan, Shou-Zen; Shieh, Jiann-Shing

    2014-01-01

    We compare type-1 and type-2 self-organizing fuzzy logic controllers (SOFLCs) using expert-initialized and pretrained extracted rule-bases applied to automatic control of anaesthesia during surgery. We perform experimental simulations using a nonfixed patient model and signal noise to account for environmental and patient drug interaction uncertainties. The simulations evaluate the performance of the SOFLCs in their ability to control anesthetic delivery rates for maintaining desired physiological set points for muscle relaxation and blood pressure during a multistage surgical procedure. The performances of the SOFLCs are evaluated by measuring the steady-state errors and control stabilities, which indicate the accuracy and precision of the control task. Two sets of comparisons based on using expert-derived and extracted rule-bases are implemented as Wilcoxon signed-rank tests. Results indicate that type-2 SOFLCs outperform type-1 SOFLCs while handling the various sources of uncertainties. SOFLCs using the extracted rules are also shown to outperform those using expert-derived rules in terms of improved control stability. PMID:25587533
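
    The Wilcoxon signed-rank comparison described above can be reproduced in outline as follows; the paired steady-state errors are invented for illustration (SciPy assumed).

        from scipy.stats import wilcoxon

        # hypothetical paired steady-state errors from matched simulation runs
        type1_errors = [2.1, 1.8, 2.5, 2.2, 1.9, 2.4, 2.0, 2.3]
        type2_errors = [1.6, 1.5, 1.9, 1.7, 1.4, 1.8, 1.6, 1.7]

        stat, p = wilcoxon(type1_errors, type2_errors)
        print(f"W = {stat}, p = {p:.4f}")   # a small p favours the type-2 controller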

  2. Confocal Raman microscopy and multivariate statistical analysis for determination of different penetration abilities of caffeine and propylene glycol applied simultaneously in a mixture on porcine skin ex vivo.

    PubMed

    Mujica Ascencio, Saul; Choe, ChunSik; Meinke, Martina C; Müller, Rainer H; Maksimov, George V; Wigger-Alberti, Walter; Lademann, Juergen; Darvin, Maxim E

    2016-07-01

    Propylene glycol is one of the known substances added in cosmetic formulations as a penetration enhancer. Recently, nanocrystals have been employed also to increase the skin penetration of active components. Caffeine is a component with many applications and its penetration into the epidermis is controversially discussed in the literature. In the present study, the penetration ability of two components, caffeine nanocrystals and propylene glycol, applied topically on porcine ear skin in the form of a gel, was investigated ex vivo using two confocal Raman microscopes operated at different excitation wavelengths (785 nm and 633 nm). Several depth profiles were acquired in the fingerprint region and different spectral ranges, i.e., 526-600 cm(-1) and 810-880 cm(-1), were chosen for independent analysis of caffeine and propylene glycol penetration into the skin, respectively. Multivariate statistical methods such as principal component analysis (PCA) and linear discriminant analysis (LDA) combined with Student's t-test were employed to calculate the maximum penetration depths of each substance (caffeine and propylene glycol). The results show that propylene glycol penetrates significantly deeper than caffeine (20.7-22.0 μm versus 12.3-13.0 μm) without any penetration enhancement effect on caffeine. The results confirm that different substances, even if applied onto the skin as a mixture, can penetrate differently. The penetration depths of caffeine and propylene glycol obtained using two different confocal Raman microscopes are comparable, showing that both types of microscopes are well suited for such investigations and that multivariate statistical PCA-LDA methods combined with Student's t-test are very useful for analyzing the penetration of different substances into the skin. PMID:27108784
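
    A hedged sketch of the PCA-LDA idea, using simulated spectra rather than the study's Raman depth profiles: PCA scores feed an LDA that separates spectra with and without the analyte band, which is the kind of decision used to locate a maximum penetration depth.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(4)
        band = np.exp(-0.5 * ((np.arange(120) - 60) / 6) ** 2)   # analyte Raman band
        present = rng.normal(0, 0.3, (40, 120)) + band           # spectra above the penetration depth
        absent = rng.normal(0, 0.3, (40, 120))                   # spectra below it
        X = np.vstack([present, absent])
        y = np.array([1] * 40 + [0] * 40)

        clf = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
        clf.fit(X, y)
        print("training accuracy:", clf.score(X, y))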

  3. Assessment of Coastal and Urban Flooding Hazards Applying Extreme Value Analysis and Multivariate Statistical Techniques: A Case Study in Elwood, Australia

    NASA Astrophysics Data System (ADS)

    Guimarães Nobre, Gabriela; Arnbjerg-Nielsen, Karsten; Rosbjerg, Dan; Madsen, Henrik

    2016-04-01

    Traditionally, flood risk assessment studies have been carried out from a univariate frequency analysis perspective. However, statistical dependence between hydrological variables, such as extreme rainfall and extreme sea surge, is plausible, since both variables are to some extent driven by common meteorological conditions. Aiming to overcome this limitation, multivariate statistical techniques have the potential to combine different sources of flooding in the investigation. The aim of this study was to apply a range of statistical methodologies for analyzing combined extreme hydrological variables that can lead to coastal and urban flooding. The study area is the Elwood Catchment, a highly urbanized catchment located in the city of Port Phillip, Melbourne, Australia. The first part of the investigation dealt with the marginal extreme value distributions. Two approaches to extracting extreme value series were applied (Annual Maximum and Partial Duration Series), and different probability distribution functions were fitted to the observed sample. Results obtained by using the Generalized Pareto distribution demonstrate the ability of the Pareto family to model the extreme events. Advancing into multivariate extreme value analysis, an investigation regarding the asymptotic properties of extremal dependence was first carried out. As a weak positive asymptotic dependence between the bivariate extreme pairs was found, the Conditional method proposed by Heffernan and Tawn (2004) was chosen. This approach is suitable for modelling bivariate extreme values which are relatively unlikely to occur together. The results show that the probability of an extreme sea surge occurring during a one-hour intensity extreme precipitation event (or vice versa) can be twice as great as would be estimated when assuming independent events. Therefore, presuming independence between these two variables would result in severe underestimation of the flooding risk in the study area.
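
    The Partial Duration Series / Generalized Pareto step can be sketched as follows (synthetic rainfall, SciPy's genpareto; the threshold choice is illustrative only):

        import numpy as np
        from scipy.stats import genpareto

        rng = np.random.default_rng(5)
        rainfall = rng.exponential(10.0, 5000)            # synthetic hourly intensities
        threshold = np.quantile(rainfall, 0.95)           # illustrative POT threshold
        exceedances = rainfall[rainfall > threshold] - threshold

        shape, loc, scale = genpareto.fit(exceedances, floc=0)
        level_99 = threshold + genpareto.ppf(0.99, shape, loc, scale)
        print(f"shape={shape:.3f}, scale={scale:.2f}, 99% exceedance level={level_99:.1f}")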

  4. Multivariate normality

    NASA Technical Reports Server (NTRS)

    Crutcher, H. L.; Falls, L. W.

    1976-01-01

    Sets of experimentally determined or routinely observed data provide information about the past, present and, hopefully, future sets of similarly produced data. An infinite set of statistical models exists which may be used to describe the data sets. The normal distribution is one model. If it serves at all, it serves well. If a data set, or a transformation of the set, representative of a larger population can be described by the normal distribution, then valid statistical inferences can be drawn. There are several tests which may be applied to a data set to determine whether the univariate normal model adequately describes the set. The chi-square test based on Pearson's work in the late nineteenth and early twentieth centuries is often used. Like all tests, it has some weaknesses which are discussed in elementary texts. Extension of the chi-square test to the multivariate normal model is provided. Tables and graphs permit easier application of the test in the higher dimensions. Several examples, using recorded data, illustrate the procedures. Tests of maximum absolute differences, mean sum of squares of residuals, runs and changes of sign are included in these tests. Dimensions one through five with selected sample sizes 11 to 101 are used to illustrate the statistical tests developed.
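
    One common multivariate extension of the chi-square idea checks that squared Mahalanobis distances follow a chi-square distribution with p degrees of freedom; the sketch below (synthetic data, SciPy) illustrates that principle, not the report's exact tables-based procedure.

        import numpy as np
        from scipy.stats import chi2, kstest

        rng = np.random.default_rng(6)
        p, n = 3, 101                                  # dimension and a sample size from the report's range
        X = rng.multivariate_normal(np.zeros(p), np.eye(p), n)

        mu, S = X.mean(axis=0), np.cov(X, rowvar=False)
        d2 = np.einsum('ij,jk,ik->i', X - mu, np.linalg.inv(S), X - mu)

        # compare empirical distances to the chi-square(p) reference
        # (approximate, since mu and S are estimated from the sample)
        stat, pval = kstest(d2, chi2(df=p).cdf)
        print(f"KS statistic {stat:.3f}, p-value {pval:.3f}")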

  5. Light calibration and quality assessment methods for Reflectance Transformation Imaging applied to artworks' analysis

    NASA Astrophysics Data System (ADS)

    Giachetti, A.; Daffara, C.; Reghelin, C.; Gobbetti, E.; Pintus, R.

    2015-06-01

    In this paper we analyze some problems related to the acquisition of multiple illumination images for Polynomial Texture Maps (PTM) or generic Reflectance Transformation Imaging (RTI). We show that intensity and directionality nonuniformity can be a relevant issue when acquiring manual sets of images with the standard highlight-based setup, both using a flash lamp and a LED light. To maintain a cheap and flexible acquisition setup that can be used in the field and by non-experienced users, we propose a dynamic calibration and correction of the lights based on multiple intensity and direction estimations around the imaged object during the acquisition. Preliminary tests have been performed by acquiring a specifically designed 3D-printed pattern, in order to assess the accuracy of the acquisition both for spatial discrimination of small structures and for normal estimation, and by acquiring samples of different types of paper, in order to evaluate material discrimination. Building on this analysis and on the tools developed and under development, we plan to design a set of novel procedures and guidelines that can turn the cheap and common RTI acquisition setup from a simple way to enrich object visualization into a powerful method for extracting quantitative characterizations of both the surface geometry and the reflective properties of different materials. These results could have relevant applications in the Cultural Heritage domain, in order to recognize different materials used in paintings or investigate the ageing status of artifacts' surfaces.

  6. Multivariate curve resolution applied to in situ X-ray absorption spectroscopy data: an efficient tool for data processing and analysis.

    PubMed

    Voronov, Alexey; Urakawa, Atsushi; van Beek, Wouter; Tsakoumis, Nikolaos E; Emerich, Hermann; Rønning, Magnus

    2014-08-20

    Large datasets containing many spectra, commonly associated with in situ or operando experiments, call for new data treatment strategies, as conventional scan-by-scan data analysis methods have become a time-consuming bottleneck. Several convenient automated data processing procedures, like least-squares fitting of reference spectra, exist but are based on assumptions. Here we present the application of multivariate curve resolution (MCR) as a blind-source separation method to efficiently process a large data set of an in situ X-ray absorption spectroscopy experiment where the sample undergoes a periodic concentration perturbation. MCR was applied to data from a reversible reduction-oxidation reaction of a rhenium-promoted cobalt Fischer-Tropsch synthesis catalyst. The MCR algorithm was capable of extracting, in a highly automated manner, the component spectra with a different kinetic evolution together with their respective concentration profiles without the use of reference spectra. The modulative nature of our experiments allows for averaging of a number of identical periods and hence an increase in the signal-to-noise ratio (S/N), which is efficiently exploited by MCR. The practical and added value of the approach in extracting information from large and complex datasets, typical for in situ and operando studies, is highlighted. PMID:25086889
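
    A bare-bones MCR-ALS sketch conveys the alternating least-squares idea (real MCR toolboxes add closure constraints, convergence tests and better initial guesses; the two-component data set below is synthetic):

        import numpy as np

        def mcr_als(D, n_components, n_iter=100, seed=0):
            rng = np.random.default_rng(seed)
            C = rng.uniform(0, 1, (D.shape[0], n_components))    # initial concentrations
            for _ in range(n_iter):
                S = np.linalg.lstsq(C, D, rcond=None)[0]         # component spectra
                S = np.clip(S, 0, None)                          # non-negativity
                C = np.linalg.lstsq(S.T, D.T, rcond=None)[0].T   # concentration profiles
                C = np.clip(C, 0, None)
            return C, S

        # two overlapping components evolving in opposite directions over 50 scans
        x = np.linspace(0, 10, 200)
        spectra = np.vstack([np.exp(-(x - 4) ** 2), np.exp(-(x - 6) ** 2)])
        conc = np.vstack([np.linspace(1, 0, 50), np.linspace(0, 1, 50)]).T
        D = conc @ spectra + np.random.default_rng(1).normal(0, 0.01, (50, 200))

        C_hat, S_hat = mcr_als(D, 2)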

  7. Multivariate statistical monitoring as applied to clean-in-place (CIP) and steam-in-place (SIP) operations in biopharmaceutical manufacturing.

    PubMed

    Roy, Kevin; Undey, Cenk; Mistretta, Thomas; Naugle, Gregory; Sodhi, Manbir

    2014-01-01

    Multivariate statistical process monitoring (MSPM) is becoming increasingly utilized to further enhance process monitoring in the biopharmaceutical industry. MSPM can play a critical role when there are many measurements and these measurements are highly correlated, as is typical for many biopharmaceutical operations. Specifically, for processes such as cleaning-in-place (CIP) and steaming-in-place (SIP, also known as sterilization-in-place), control systems typically oversee the execution of the cycles, and verification of the outcome is based on offline assays. These offline assays add to delays, and corrective actions may require additional setup times. Moreover, this conventional approach does not take interactive effects of process variables into account, and cycle optimization opportunities as well as salient trends in the process may be missed. Therefore, more proactive and holistic online continued verification approaches are desirable. This article demonstrates the application of real-time MSPM to processes such as CIP and SIP with industrial examples. The proposed approach has significant potential for facilitating enhanced continuous verification, improved process understanding, abnormal situation detection, and predictive monitoring, as applied to CIP and SIP operations. PMID:24532460
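
    The abstract does not spell out the monitoring statistics, but a common MSPM recipe is PCA with a Hotelling T-squared control limit; the sketch below, on invented process data, is one plausible reading rather than the authors' implementation.

        import numpy as np
        from scipy.stats import f as f_dist
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(7)
        normal = rng.normal(0, 1, (100, 12))                    # 12 process variables, normal operation
        normal[:, 1] = normal[:, 0] + rng.normal(0, 0.1, 100)   # two correlated sensors

        scaler = StandardScaler().fit(normal)
        pca = PCA(n_components=3).fit(scaler.transform(normal))

        def t2(x):
            scores = pca.transform(scaler.transform(x.reshape(1, -1)))
            return float(np.sum(scores ** 2 / pca.explained_variance_))

        n, a = 100, 3                                    # samples, retained components
        limit = a * (n - 1) * (n + 1) / (n * (n - a)) * f_dist.ppf(0.99, a, n - a)

        faulty = normal[0].copy()
        faulty[:2] = 8.0                                 # a large coordinated excursion
        print(t2(faulty) > limit)                        # True: the cycle is flagged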

  8. Multivariate optimization by exploratory analysis applied to the determination of microelements in fruit juice by inductively coupled plasma optical emission spectrometry

    NASA Astrophysics Data System (ADS)

    Froes, Roberta Eliane Santos; Neto, Waldomiro Borges; Silva, Nilton Oliveira Couto e.; Naveira, Rita Lopes Pereira; Nascentes, Clésia Cristina; da Silva, José Bento Borba

    2009-06-01

    A method for the direct determination (without sample pre-digestion) of microelements in fruit juice by inductively coupled plasma optical emission spectrometry has been developed. The method has been optimized by a 2(3) factorial design, which evaluated the plasma conditions (nebulization gas flow rate, applied power, and sample flow rate). A 1:1 diluted juice sample with 2% HNO3 (Tetra Packed, peach flavor), spiked with 0.5 mg L(-1) of Al, Ba, Cd, Co, Cr, Cu, Fe, Mn, Ni, Pb, Sb, Sn, and Zn, was employed in the optimization. The results of the factorial design were evaluated by exploratory analysis (Hierarchical Cluster Analysis, HCA, and Principal Component Analysis, PCA) to determine the optimum analytical conditions for all elements. The central point condition (0.75 L min(-1), 1.3 kW, and 1.25 mL min(-1)) was differentiated by both methods, Principal Component Analysis and Hierarchical Cluster Analysis, with higher analytical signal values, suggesting that these are the optimal analytical conditions. F- and Student's t-tests were used to compare the slopes of the calibration curves for aqueous and matrix-matched standards. No significant differences were observed at the 95% confidence level. The correlation coefficient was higher than 0.99 for all the elements evaluated. The limits of quantification were: Al 253, Cu 3.6, Fe 84, Mn 0.4, Zn 71, Ni 67, Cd 69, Pb 129, Sn 206, Cr 79, Co 24, and Ba 2.1 µg L(-1). The spiking experiments with fruit juice samples resulted in recoveries between 80 and 120%, except for Co and Sn. Al, Cd, Pb, Sn and Cr could not be quantified in any of the samples investigated. The method was applied to the determination of several elements in fruit juice samples commercialized in Brazil.
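
    Generating the 2(3) design plus central point is straightforward; in the sketch below the low/high factor levels are assumed for illustration, while the central point values come from the abstract.

        from itertools import product

        factors = {                                       # low/high levels are assumed
            "nebulization_flow_L_min": (0.5, 1.0),
            "power_kW": (1.1, 1.5),
            "sample_flow_mL_min": (1.0, 1.5),
        }
        design = [dict(zip(factors, levels)) for levels in product(*factors.values())]
        design.append({"nebulization_flow_L_min": 0.75, "power_kW": 1.3,
                       "sample_flow_mL_min": 1.25})       # central point from the abstract
        for run in design:
            print(run)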

  9. Solid-phase extraction and simultaneous determination of trace amounts of sulphonated and azo sulphonated dyes using microemulsion-modified-zeolite and multivariate calibration.

    PubMed

    Al-Degs, Yahya S; El-Sheikh, Amjad H; Al-Ghouti, Mohammad A; Hemmateenejad, Bahram; Walker, Gavin M

    2008-05-30

    A simple and rapid analytical method for the determination of trace levels of five sulphonated and azo sulphonated reactive dyes: Cibacron Reactive Blue 2 (C-Blue, trisulphonated dye), Cibacron Reactive Red 4 (C-Red, tetrasulphonated azo dye), Cibacron Reactive Yellow 2 (C-Yellow, trisulphonated azo dye), Levafix Brilliant Red E-4BA (L-Red, trisulphonated dye), and Levafix Brilliant Blue E-4BA (L-Blue, disulphonated dye) in water is presented. Initially, the dyes were preconcentrated from 250 ml of water samples with solid-phase extraction using a natural zeolite sample previously modified with a microemulsion. The modified zeolite exhibited excellent extraction of the dyes from solution. The parameters that influence quantitative recovery of reactive dyes, such as the amount of extractant, volume of dye solution, pH, ionic strength, and extraction-elution flow rate, were varied and optimized. After elution of the adsorbed dyes, the concentration of dyes was determined spectrophotometrically with the aid of the principal component regression (PCR) method without separation of dyes. The results obtained from the PCR method were comparable to those obtained from the HPLC method, confirming the effectiveness of the proposed method. With the aid of SPE by M-zeolite, the concentration of dyes could be reproducibly detected over the range 25-200 ppb for C-Yellow and L-Blue and from 50 to 250 ppb for C-Blue, C-Red, and L-Red. The multivariate detection limits of dyes were found to be 15 ppb for C-Yellow and L-Blue and 25 ppb for C-Blue, C-Red, and L-Red dyes. The proposed chemometric method gave recoveries from 85.4 to 115.3% and R.S.D. from 1.0 to 14.5% for determination of the five dyes without any prior separation of solutes. PMID:18585163

  10. Multivariate Calibration Approach for Quantitative Determination of Cell-Line Cross Contamination by Intact Cell Mass Spectrometry and Artificial Neural Networks

    PubMed Central

    Prokeš, Lubomír; Amato, Filippo; Pivetta, Tiziana; Hampl, Aleš; Havel, Josef; Vaňhara, Petr

    2016-01-01

    Cross-contamination of eukaryotic cell lines used in biomedical research represents a highly relevant problem. Analysis of repetitive DNA sequences, such as Short Tandem Repeats (STR), or Simple Sequence Repeats (SSR), is a widely accepted, simple, and commercially available technique to authenticate cell lines. However, it provides only qualitative information that depends on the extent of reference databases for interpretation. In this work, we developed and validated a rapid and routinely applicable method for evaluation of cell culture cross-contamination levels based on mass spectrometric fingerprints of intact mammalian cells coupled with artificial neural networks (ANNs). We used human embryonic stem cells (hESCs) contaminated by either mouse embryonic stem cells (mESCs) or mouse embryonic fibroblasts (MEFs) as a model. We determined the contamination level using a mass spectra database of known calibration mixtures that served as training input for an ANN. The ANN was then capable of correct quantification of the level of contamination of hESCs by mESCs or MEFs. We demonstrate that MS analysis, when linked to proper mathematical instruments, is a tangible tool for unraveling and quantifying heterogeneity in cell cultures. The analysis is applicable in routine scenarios for cell authentication and/or cell phenotyping in general. PMID:26821236

  11. Multivariate Calibration Approach for Quantitative Determination of Cell-Line Cross Contamination by Intact Cell Mass Spectrometry and Artificial Neural Networks.

    PubMed

    Valletta, Elisa; Kučera, Lukáš; Prokeš, Lubomír; Amato, Filippo; Pivetta, Tiziana; Hampl, Aleš; Havel, Josef; Vaňhara, Petr

    2016-01-01

    Cross-contamination of eukaryotic cell lines used in biomedical research represents a highly relevant problem. Analysis of repetitive DNA sequences, such as Short Tandem Repeats (STR), or Simple Sequence Repeats (SSR), is a widely accepted, simple, and commercially available technique to authenticate cell lines. However, it provides only qualitative information that depends on the extent of reference databases for interpretation. In this work, we developed and validated a rapid and routinely applicable method for evaluation of cell culture cross-contamination levels based on mass spectrometric fingerprints of intact mammalian cells coupled with artificial neural networks (ANNs). We used human embryonic stem cells (hESCs) contaminated by either mouse embryonic stem cells (mESCs) or mouse embryonic fibroblasts (MEFs) as a model. We determined the contamination level using a mass spectra database of known calibration mixtures that served as training input for an ANN. The ANN was then capable of correct quantification of the level of contamination of hESCs by mESCs or MEFs. We demonstrate that MS analysis, when linked to proper mathematical instruments, is a tangible tool for unraveling and quantifying heterogeneity in cell cultures. The analysis is applicable in routine scenarios for cell authentication and/or cell phenotyping in general. PMID:26821236

  12. Full spectrum and selected spectrum based multivariate calibration methods for simultaneous determination of betamethasone dipropionate, clotrimazole and benzyl alcohol: Development, validation and application on commercial dosage form.

    PubMed

    Darwish, Hany W; Elzanfaly, Eman S; Saad, Ahmed S; Abdelaleem, Abdelaziz El-Bayoumi

    2016-12-01

    Five different chemometric methods were developed for the simultaneous determination of betamethasone dipropionate (BMD), clotrimazole (CT) and benzyl alcohol (BA) in their combined dosage form (Lotriderm® cream). The applied methods included three full-spectrum chemometric techniques, namely principal component regression (PCR), partial least squares (PLS) and artificial neural networks (ANN), while the other two methods were PLS and ANN preceded by a genetic algorithm (GA-PLS and GA-ANN) as a wavelength selection procedure. A multilevel multifactor experimental design was adopted for proper construction of the models. A validation set composed of 12 mixtures containing different ratios of the three analytes was used to evaluate the predictive power of the suggested models. All the proposed methods except ANN were successfully applied for the analysis of the pharmaceutical formulation (Lotriderm® cream). Results demonstrated the efficiency of the four methods as quantitative tools for analysis of the three analytes without prior separation procedures and without any interference from the co-formulated excipient. Additionally, the work highlighted the effect of GA on increasing the predictive power of the PLS and ANN models. PMID:27327260

  13. New error calibration tests for gravity models using subset solutions and independent data - Applied to GEM-T3

    NASA Technical Reports Server (NTRS)

    Lerch, F. J.; Nerem, R. S.; Chinn, D. S.; Chan, J. C.; Patel, G. B.; Klosko, S. M.

    1993-01-01

    A new method has been developed to provide a direct test of the error calibrations of gravity models based on actual satellite observations. The basic approach projects the error estimates of the gravity model parameters onto the satellite observations, and the results of these projections are then compared with data residuals computed from the orbital fits. To allow specific testing of the gravity error calibrations, subset solutions are computed based on the data set and data weighting of the gravity model. The approach is demonstrated using GEM-T3 to show that the gravity error estimates are well calibrated and that reliable predictions of orbit accuracies can be achieved for independent orbits.

  14. Combination of GC/FID/Mass spectrometry fingerprints and multivariate calibration techniques for recognition of antimicrobial constituents of Myrtus communis L. essential oil.

    PubMed

    Ebrahimabadi, Ebrahim H; Ghoreishi, Sayed Mehdi; Masoum, Saeed; Ebrahimabadi, Abdolrasoul H

    2016-01-01

    Myrtus communis L. is an aromatic evergreen shrub and its essential oil possesses known powerful antimicrobial activity. However, the contribution of each component of the plant essential oil to the observed antimicrobial ability is unclear. In this study, the chemical components of essential oil samples of the plant were identified qualitatively and quantitatively using a GC/FID/Mass spectrometry system, the antimicrobial activity of these samples against three microbial strains was evaluated, and these two sets of data were correlated using chemometric methods. Three chemometric methods, including principal component regression (PCR), partial least squares (PLS) and orthogonal projections to latent structures (OPLS), were applied for the study. These methods showed similar results, but OPLS was selected as the preferred method due to its predictive and interpretational ability, simplicity, repeatability and low time consumption. The results showed that α-pinene, 1,8-cineole, β-pinene and limonene are the largest contributors to the antimicrobial properties of M. communis essential oil. Other studies have reported high antimicrobial activities for plant essential oils rich in these compounds, confirming our findings. PMID:26625337

  15. Application of linear multivariate calibration techniques to identify the peaks responsible for the antioxidant activity of Satureja hortensis L. and Oliveria decumbens Vent. essential oils by gas chromatography-mass spectrometry.

    PubMed

    Samadi, Naser; Masoum, Saeed; Mehrara, Bahare; Hosseini, Hossein

    2015-09-15

    Satureja hortensis L. and Oliveria decumbens Vent. are known for their diverse effects in drug therapy and traditional medicine. One of the most interesting properties of their essential oils is good antioxidant activity. In this paper, essential oils of the aerial parts of S. hortensis L. and O. decumbens Vent. from different regions were obtained by hydrodistillation and were analyzed by gas chromatography-mass spectrometry (GC-MS). The essential oils were tested for their free radical scavenging activity using the 1,1-diphenyl-2-picrylhydrazyl (DPPH) assay, in order to identify the peaks potentially responsible for the antioxidant activity from the chromatographic fingerprints by numerous linear multivariate calibration techniques. Because of its simplicity and high repeatability, the orthogonal projections to latent structures (OPLS) model had the best performance in indicating the potential antioxidant compounds in S. hortensis L. and O. decumbens Vent. essential oils. In this study, p-cymene, carvacrol and β-bisabolene for S. hortensis L., and p-cymene, γ-terpinene, thymol, carvacrol, and 1,3-benzodioxole, 4-methoxy-6-(2-propenyl) for O. decumbens Vent., are suggested as the potentially antioxidant compounds. PMID:26262598

  16. Multivariate calibration-assisted high-performance liquid chromatography with dual UV and fluorimetric detection for the analysis of natural and synthetic sex hormones in environmental waters and sediments.

    PubMed

    Pérez, Rocío L; Escandar, Graciela M

    2016-02-01

    A green method is reported, based on non-sophisticated instrumentation, for the quantification of seven natural and synthetic estrogens, three progestagens and one androgen in the presence of real interferences. The method takes advantage of: (1) chromatography, allowing total or partial resolution of a large number of compounds; (2) dual detection, permitting selection of the most appropriate signal for each analyte; and (3) second-order calibration, enabling mathematical resolution of incompletely resolved chromatographic bands and analyte determination in the presence of interferents. Consumption of organic solvents for cleaning, extraction and separation is markedly decreased because of the coupling with MCR-ALS (multivariate curve resolution/alternating least-squares), which allows successful resolution in the presence of other co-eluting matrix constituents. Rigorous IUPAC detection limits were obtained: 6-24 ng L(-1) in water, and 0.1-0.9 ng g(-1) in sediments. Relative prediction errors were 2-10% (water) and 1-8% (sediments). PMID:26650083

  17. Multivariate statistical tools applied to the characterization of the proteomic profiles of two human lymphoma cell lines by two-dimensional gel electrophoresis.

    PubMed

    Marengo, Emilio; Robotti, Elisa; Bobba, Marco; Liparota, Maria Cristina; Rustichelli, Chiara; Zamò, Alberto; Chilosi, Marco; Righetti, Pier Giorgio

    2006-02-01

    Mantle cell lymphoma (MCL) cell lines have been difficult to generate: only a few have been described so far, and even fewer have been thoroughly characterized. Among them, there is only one cell line, called GRANTA-519, which is well established and universally adopted for most lymphoma studies. We succeeded in establishing a new MCL cell line, called MAVER-1, from a leukemic MCL, and performed a thorough phenotypical, cytogenetical and molecular characterization of the cell line. In the present report, the phenotypic expression of the GRANTA-519 and MAVER-1 cell lines has been compared and evaluated by a proteomic approach, exploiting 2-D map analysis. By univariate statistical analysis (Student's t-test, as commonly used in most commercial software packages), most of the protein spots were found to be identical between the two cell lines. Thirty spots were found to be unique to GRANTA-519, whereas another 11 polypeptides appeared to be expressed only by the MAVER-1 cell line. A number of these spots could be identified by MS. These data were confirmed and expanded by multivariate statistical tools (principal component analysis and soft independent modelling of class analogy) that allowed identification of a larger number of differentially expressed spots. Multivariate statistical tools have the advantage of reducing the risk of false positives and of identifying spots that are significantly altered in terms of correlated expression rather than absolute expression values. It is thus suggested that, in future work in differential proteomic profiling, both univariate and multivariate statistical tools should be adopted. PMID:16372308

  18. Multivariate postprocessing techniques for probabilistic hydrological forecasting

    NASA Astrophysics Data System (ADS)

    Hemri, Stephan; Lisniak, Dmytro; Klein, Bastian

    2016-04-01

    Hydrologic ensemble forecasts driven by atmospheric ensemble prediction systems need statistical postprocessing in order to account for systematic errors in terms of both mean and spread. Runoff is an inherently multivariate process, with typical events lasting from hours in the case of floods to weeks or even months in the case of droughts. This calls for multivariate postprocessing techniques that yield well-calibrated forecasts in univariate terms and ensure a realistic temporal dependence structure at the same time. To this end, the univariate ensemble model output statistics (EMOS; Gneiting et al., 2005) postprocessing method is combined with two different copula approaches that ensure multivariate calibration throughout the entire forecast horizon. These approaches comprise ensemble copula coupling (ECC; Schefzik et al., 2013), which preserves the dependence structure of the raw ensemble, and a Gaussian copula approach (GCA; Pinson and Girard, 2012), which estimates the temporal correlations from training observations. Both methods are tested in a case study covering three subcatchments of the river Rhine that represent different sizes and hydrological regimes: the Upper Rhine up to the gauge Maxau, the river Moselle up to the gauge Trier, and the river Lahn up to the gauge Kalkofen. The results indicate that both ECC and GCA are suitable for modelling the temporal dependences of probabilistic hydrologic forecasts (Hemri et al., 2015). References Gneiting, T., A. E. Raftery, A. H. Westveld, and T. Goldman (2005), Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation, Monthly Weather Review, 133(5), 1098-1118, DOI: 10.1175/MWR2904.1. Hemri, S., D. Lisniak, and B. Klein, Multivariate postprocessing techniques for probabilistic hydrological forecasting, Water Resources Research, 51(9), 7436-7451, DOI: 10.1002/2014WR016473. Pinson, P., and R. Girard (2012), Evaluating the quality of scenarios of short-term wind power
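
    The ECC step has a particularly compact form: postprocessed values are reordered at each lead time by the ranks of the raw ensemble. The sketch below uses synthetic arrays, with the "calibrated" member values standing in for EMOS-derived quantiles.

        import numpy as np

        def ecc(raw, calibrated):
            """raw, calibrated: (n_members, n_leadtimes); returns reordered forecast."""
            out = np.empty_like(calibrated)
            for t in range(raw.shape[1]):
                ranks = np.argsort(np.argsort(raw[:, t]))   # rank of each raw member
                out[:, t] = np.sort(calibrated[:, t])[ranks]
            return out

        rng = np.random.default_rng(8)
        raw = rng.normal(100, 10, (11, 48))                 # raw runoff ensemble
        calibrated = raw * 0.9 + rng.normal(0, 2, (11, 48)) # stand-in for EMOS samples
        forecast = ecc(raw, calibrated)                     # raw rank structure restored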

  19. A flow-batch analyzer with piston propulsion applied to automatic preparation of calibration solutions for Mn determination in mineral waters by ET AAS.

    PubMed

    Almeida, Luciano F; Vale, Maria G R; Dessuy, Morgana B; Silva, Márcia M; Lima, Renato S; Santos, Vagner B; Diniz, Paulo H D; Araújo, Mário C U

    2007-10-31

    The increasing development of miniaturized flow systems and the continuous monitoring of chemical processes require dramatically simplified and cheap flow schemes and instrumentation with large potential for miniaturization and consequent portability. For these purposes, the development of systems based on flow and batch technologies may be a good alternative. Flow-batch analyzers (FBA) have been successfully applied to implement analytical procedures such as titrations, sample pre-treatment, analyte addition and screening analysis. In spite of its favourable characteristics, the previously proposed FBA uses peristaltic pumps to propel the fluids, and this kind of propulsion presents high cost and large dimensions, making miniaturization and portability unfeasible. To overcome these drawbacks, a low-cost, robust and compact FBA that does not rely on peristaltic pumping is proposed. It makes use of a lab-made piston coupled to a mixing chamber and a step motor controlled by a microcomputer. The piston-propelled FBA (PFBA) was applied to the automatic preparation of calibration solutions for manganese determination in mineral waters by electrothermal atomic-absorption spectrometry (ET AAS). Comparing the results obtained with two sets of calibration curves (five by manual and five by PFBA preparation), no significant statistical differences at a 95% confidence level were observed by applying the paired t-test. The standard deviations of the manual and PFBA procedures were always smaller than 0.2 and 0.1 µg L(-1), respectively. By using the PFBA it was possible to prepare about 80 calibration solutions per hour. PMID:19073119
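
    The paired t-test comparison reported above looks like the following in outline; the slope values are invented for illustration (SciPy assumed).

        from scipy.stats import ttest_rel

        manual = [0.0151, 0.0149, 0.0153, 0.0150, 0.0152]   # hypothetical curve slopes
        pfba = [0.0150, 0.0151, 0.0152, 0.0149, 0.0151]

        stat, p = ttest_rel(manual, pfba)
        print(f"t = {stat:.3f}, p = {p:.3f}")   # p > 0.05: no significant difference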

  20. Uncertainty Analysis of Instrument Calibration and Application

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized; techniques for estimation of both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.

  1. Uncertainty Analysis of Inertial Model Attitude Sensor Calibration and Application with a Recommended New Calibration Method

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Statistical tools, previously developed for nonlinear least-squares estimation of multivariate sensor calibration parameters and the associated calibration uncertainty analysis, have been applied to single- and multiple-axis inertial model attitude sensors used in wind tunnel testing to measure angle of attack and roll angle. The analysis provides confidence and prediction intervals of calibrated sensor measurement uncertainty as functions of applied input pitch and roll angles. A comparative performance study of various experimental designs for inertial sensor calibration is presented along with corroborating experimental data. The importance of replicated calibrations over extended time periods has been emphasized; replication provides independent estimates of calibration precision and bias uncertainties, statistical tests for calibration or modeling bias uncertainty, and statistical tests for sensor parameter drift over time. A set of recommendations for a new standardized model attitude sensor calibration method and usage procedures is included. The statistical information provided by these procedures is necessary for the uncertainty analysis of aerospace test results now required by users of industrial wind tunnel test facilities.

  2. A new and consistent parameter for measuring the quality of multivariate analytical methods: Generalized analytical sensitivity.

    PubMed

    Fragoso, Wallace; Allegrini, Franco; Olivieri, Alejandro C

    2016-08-24

    Generalized analytical sensitivity (γ) is proposed as a new figure of merit, which can be estimated from a multivariate calibration data set. It can be confidently applied to compare different calibration methodologies, and helps to solve literature inconsistencies on the relationship between classical sensitivity and prediction error. In contrast to the classical plain sensitivity, γ incorporates the noise properties in its definition, and its inverse is well correlated with root mean square errors of prediction in the presence of general noise structures. The proposal is supported by studying simulated and experimental first-order multivariate calibration systems with various models, namely multiple linear regression, principal component regression (PCR) and maximum likelihood PCR (MLPCR). The simulations included instrumental noise of different types: independently and identically distributed (iid), correlated (pink) and proportional noise, while the experimental data carried noise which is clearly non-iid. PMID:27496995
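
    For orientation, the classical analytical sensitivity for first-order inverse calibration is γ = SEN/σx with SEN = 1/||b||. A generalized figure consistent with this abstract would replace the scalar noise level with the full noise covariance; this reading is our assumption, not the paper's verbatim definition:

        \gamma \;=\; \frac{\mathrm{SEN}}{\sigma_x} \;=\; \frac{1}{\sigma_x\,\lVert \mathbf{b} \rVert}
        \qquad\longrightarrow\qquad
        \gamma_{\mathrm{gen}} \;=\; \left( \mathbf{b}^{\top} \boldsymbol{\Sigma}\, \mathbf{b} \right)^{-1/2}

    Here b is the regression vector and Σ the measurement noise covariance; for iid noise (Σ = σx²I) the generalized form reduces to the classical one, which matches the stated correlation between 1/γ and prediction error.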

  3. Morphological evolution of coherent multivariant Ti{sub 11}Ni{sub 14} precipitates in Ti-Ni alloys under an applied stress -- A computer simulation study

    SciTech Connect

    Li, D.Y.; Chen, L.Q.

    1998-01-05

    Coherent precipitation of multi-variant Ti{sub 11}Ni{sub 14} precipitates in TiNi alloys was investigated by employing a continuum field kinetic model. The structural difference between the precipitate phase and the matrix, as well as the orientational differences between precipitate variants, are distinguished by nonconserved structural field variables, whereas the compositional difference between the precipitate and matrix is described by a conserved field variable. The temporal evolution of the spatially dependent field variables is determined by numerically solving the time-dependent Ginzburg-Landau (TDGL) equations for the structural variables and the Cahn-Hilliard diffusion equation for the composition. In particular, the interaction between precipitates and the growth morphology of Ti{sub 11}Ni{sub 14} precipitates under strain constraints were studied, without a priori assumptions on the precipitate shape and distribution. The predicted morphology and distribution of Ti{sub 11}Ni{sub 14} variants were compared with experimental observations. Excellent agreement between the simulation and experimental observations was found.
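
    The conserved composition field in such models obeys the Cahn-Hilliard equation; a generic semi-implicit spectral time step (illustrative parameters and free energy, not the paper's Ti-Ni model) looks like this:

        import numpy as np

        N, dx, dt, M, kappa = 128, 1.0, 0.1, 1.0, 1.0             # generic, dimensionless parameters
        c = 0.05 * np.random.default_rng(9).normal(size=(N, N))   # small fluctuations about c = 0

        k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
        k2 = k[:, None] ** 2 + k[None, :] ** 2

        for _ in range(1000):
            mu = c ** 3 - c                              # df/dc for f(c) = (c**2 - 1)**2 / 4
            c_hat = (np.fft.fft2(c) - dt * M * k2 * np.fft.fft2(mu)) \
                    / (1 + dt * M * kappa * k2 ** 2)     # implicit gradient-energy term
            c = np.fft.ifft2(c_hat).real                 # spinodal pattern coarsens over time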

  4. Langley method applied in study of aerosol optical depth in the Brazilian semiarid region using 500, 670 and 870 nm bands for sun photometer calibration

    NASA Astrophysics Data System (ADS)

    Cerqueira, J. G.; Fernandez, J. H.; Hoelzemann, J. J.; Leme, N. M. P.; Sousa, C. T.

    2014-10-01

    Due to the high costs of commercial monitoring instruments, a portable sun photometer was developed at the INPE/CRN laboratories, operating in four bands, with two bands in the visible spectrum and two in the near infrared. The instrument calibration process is performed by applying the classical Langley method. Application of the Langley methodology requires a site with high optical stability during the measurements, which is usually found at high altitudes. However, far from being at an ideal site, Harrison et al. (1994) report success in applying the Langley method to some data for a site in Boulder, Colorado. Recently, Liu et al. (2011) showed that low-elevation sites far away from urban and industrial centers can provide a stable optical depth, similar to high altitudes. In this study we investigated the feasibility of applying the methodology in the semiarid region of northeastern Brazil, far away from pollution sources and at low altitude, for sun photometer calibration. We investigated optical depth stability using two periods of measurements during the dry season in austral summer. The first was in December, when the native vegetation naturally dries, losing all its leaves, and the second was in September, in the middle of the dry season, when the vegetation still has leaves. The data were distributed over four days in December 2012 and four days in September 2013, totaling eleven half-days of collection across mornings and afternoons, and V0 values were found by fitting a line to the data. Despite the high correlation between the collected data and the fitted lines, the study showed a variation between the V0 values greater than allowed for sun photometer calibration. The lowest V0 variations reached in this experiment, with values lower than 3% for the 500, 670 and 870 nm bands, are displayed in tables. The results indicate that the site needs to be better characterized, with studies in more favorable periods, soon after the rainy season.
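
    The Langley method itself is a straight-line fit: ln V is regressed on airmass and extrapolated to zero airmass to recover V0. A minimal sketch with simulated readings (assumed optical depth and instrument constant):

        import numpy as np

        rng = np.random.default_rng(10)
        tau, V0_true = 0.12, 1.50            # assumed optical depth and instrument constant (V)
        airmass = np.linspace(1.2, 5.0, 30)  # one stable half-day of readings
        V = V0_true * np.exp(-tau * airmass) * (1 + rng.normal(0, 0.005, 30))

        slope, intercept = np.polyfit(airmass, np.log(V), 1)
        print(f"V0 = {np.exp(intercept):.4f} V, tau = {-slope:.4f}")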

  5. Hybrid least squares multivariate spectral analysis methods

    DOEpatents

    Haaland, David M.

    2004-03-23

    A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following prediction or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The hybrid method herein means a combination of an initial calibration step with subsequent analysis by an inverse multivariate analysis method. A spectral shape herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The shape can be continuous, discontinuous, or even discrete points illustrative of the particular effect.

  6. Hybrid least squares multivariate spectral analysis methods

    DOEpatents

    Haaland, David M.

    2002-01-01

    A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following estimation or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The "hybrid" method herein means a combination of an initial classical least squares analysis calibration step with subsequent analysis by an inverse multivariate analysis method. A "spectral shape" herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The "shape" can be continuous, discontinuous, or even discrete points illustrative of the particular effect.

  7. Synthetic Multivariate Models to Accommodate Unmodeled Interfering Components During Quantitative Spectral Analyses

    SciTech Connect

    Haaland, David M.

    1999-07-14

    The analysis precision of any multivariate calibration method will be severely degraded if unmodeled sources of spectral variation are present in the unknown sample spectra. This paper describes a synthetic method for correcting the errors generated by the presence of unmodeled components or other sources of unmodeled spectral variation. If the spectral shape of the unmodeled component can be obtained and mathematically added to the original calibration spectra, then a new synthetic multivariate calibration model can be generated to accommodate the presence of the unmodeled source of spectral variation. This new method is demonstrated for the presence of unmodeled temperature variations in the unknown sample spectra of dilute aqueous solutions of urea, creatinine, and NaCl. When constant-temperature PLS models are applied to spectra of samples of variable temperature, the standard errors of prediction (SEP) are approximately an order of magnitude higher than the original cross-validated SEPs of the constant-temperature partial least squares models. Synthetic models using the classical least squares estimates of temperature from pure water or variable-temperature mixture sample spectra reduce the errors significantly for the variable-temperature samples. Spectrometer drift adds additional error to the analyte determinations, but a method is demonstrated that can minimize the effect of drift on prediction errors through the measurement of the spectra of a small subset of samples during both calibration and prediction. In addition, sample temperature can be predicted with high precision with this new synthetic model without the need to recalibrate using actual variable-temperature sample data. The synthetic methods eliminate the need for expensive generation of new calibration samples and collection of their spectra. The methods are quite general, can be applied using any known source of spectral variation, and can be used with any multivariate calibration method.

  8. A Portable Ground-Based Atmospheric Monitoring System (PGAMS) for the Calibration and Validation of Atmospheric Correction Algorithms Applied to Aircraft and Satellite Images

    NASA Technical Reports Server (NTRS)

    Schiller, Stephen; Luvall, Jeffrey C.; Rickman, Doug L.; Arnold, James E. (Technical Monitor)

    2000-01-01

    Detecting changes in the Earth's environment using satellite images of ocean and land surfaces must take into account atmospheric effects. As a result, major programs are underway to develop algorithms for image retrieval of atmospheric aerosol properties and atmospheric correction. However, because of the temporal and spatial variability of atmospheric transmittance, it is very difficult to model atmospheric effects and implement the models in an operational mode. For this reason, simultaneous in situ ground measurements of atmospheric optical properties are vital to the development of accurate atmospheric correction techniques. Presented in this paper is a spectroradiometer system that provides an optimized set of surface measurements for the calibration and validation of atmospheric correction algorithms. The Portable Ground-based Atmospheric Monitoring System (PGAMS) obtains a comprehensive series of in situ irradiance, radiance, and reflectance measurements for the calibration of atmospheric correction algorithms applied to multispectral and hyperspectral images. The observations include: total downwelling irradiance, diffuse sky irradiance, direct solar irradiance, path radiance in the direction of the north celestial pole, path radiance in the direction of the overflying satellite, almucantar scans of path radiance, full sky radiance maps, and surface reflectance. Each of these parameters is recorded over a wavelength range from 350 to 1050 nm in 512 channels. The system is fast, with the potential to acquire the complete set of observations in only 8 to 10 minutes, depending on the selected spatial resolution of the sky path radiance measurements.

  9. Classical least squares multivariate spectral analysis

    DOEpatents

    Haaland, David M.

    2002-01-01

    An improved classical least squares multivariate spectral analysis method that adds spectral shapes describing non-calibrated components and system effects (other than baseline corrections) present in the analyzed mixture to the prediction phase of the method. These improvements decrease or eliminate many of the restrictions to the CLS-type methods and greatly extend their capabilities, accuracy, and precision. One new application of PACLS includes the ability to accurately predict unknown sample concentrations when new unmodeled spectral components are present in the unknown samples. Other applications of PACLS include the incorporation of spectrometer drift into the quantitative multivariate model and the maintenance of a calibration on a drifting spectrometer. Finally, the ability of PACLS to transfer a multivariate model between spectrometers is demonstrated.
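
    A toy version of the prediction-augmented idea: pure-component spectra estimated by CLS during calibration are augmented at prediction time with the shape of an unmodeled effect (here an invented baseline drift), so that the effect no longer biases the concentration estimates. A hedged numpy sketch:

        import numpy as np

        rng = np.random.default_rng(11)
        x = np.linspace(0, 10, 300)
        K = np.vstack([np.exp(-(x - 3) ** 2), np.exp(-(x - 5) ** 2)])  # two calibrated components
        drift = 0.02 * x                                               # invented unmodeled shape

        C_cal = rng.uniform(0, 1, (20, 2))
        D_cal = C_cal @ K + rng.normal(0, 0.001, (20, 300))
        K_hat = np.linalg.lstsq(C_cal, D_cal, rcond=None)[0]           # CLS calibration step

        d_unknown = np.array([0.4, 0.7]) @ K + drift                   # spectrum containing the drift
        K_aug = np.vstack([K_hat, drift])                              # augment at prediction time
        coef = np.linalg.lstsq(K_aug.T, d_unknown, rcond=None)[0]
        print("estimated concentrations:", coef[:2])                   # close to 0.4 and 0.7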

  10. Implicit Spacecraft Gyro Calibration

    NASA Technical Reports Server (NTRS)

    Harman, Richard; Bar-Itzhack, Itzhack Y.

    2003-01-01

    This paper presents an implicit algorithm for spacecraft onboard instrument calibration, particularly onboard gyro calibration. This work extends previous work in which an explicit gyro calibration algorithm was applied to the AQUA spacecraft gyros. The algorithm presented in this paper was tested using simulated data and real data downloaded from the Microwave Anisotropy Probe (MAP) spacecraft. The calibration tests gave very good results. A comparison between the implicit calibration algorithm used here and the explicit algorithm used for the AQUA spacecraft indicates that both provide an excellent estimation of the gyro calibration parameters with similar accuracies.

  11. Simultaneous determination of propranolol and amiloride in synthetic binary mixtures and pharmaceutical dosage forms by synchronous fluorescence spectroscopy: a multivariate approach

    NASA Astrophysics Data System (ADS)

    Divya, O.; Shinde, Mandakini

    2013-07-01

    A multivariate calibration model for the simultaneous estimation of propranolol (PRO) and amiloride (AMI) using synchronous fluorescence (SF) spectroscopic data is presented in this paper. Two multivariate techniques, principal component regression (PCR) and partial least squares regression (PLSR), have been successfully applied for the simultaneous determination of AMI and PRO in synthetic binary mixtures and pharmaceutical dosage forms. The SF spectra of the AMI and PRO calibration mixtures were recorded at several concentrations within their linear ranges, between wavelengths of 310 and 500 nm at an interval of 1 nm. Calibration models were constructed using 32 samples and validated by varying the concentrations of AMI and PRO within the calibration range. The results indicated that the model developed was very robust and able to efficiently analyze the mixtures with low RMSEP values.

  12. Multivariate Padé Approximations For Solving Nonlinear Diffusion Equations

    NASA Astrophysics Data System (ADS)

    Turut, V.

    2015-11-01

    In this paper, multivariate Padé approximation is applied to power series solutions of nonlinear diffusion equations. As seen from the tables, multivariate Padé approximation (MPA) gives reliable solutions and numerical results.

  13. Error estimates for ocean surface winds: Applying Desroziers diagnostics to the Cross-Calibrated, Multi-Platform analysis of wind speed

    NASA Astrophysics Data System (ADS)

    Hoffman, Ross N.; Ardizzone, Joseph V.; Leidner, S. Mark; Smith, Deborah K.; Atlas, Robert M.

    2013-04-01

    The cross-calibrated, multi-platform (CCMP) ocean surface wind project [Atlas et al., 2011] generates high-quality, high-resolution vector winds over the world's oceans beginning with the 1987 launch of the SSM/I F08, using Remote Sensing Systems (RSS) microwave satellite wind retrievals as well as in situ observations from ships and buoys. The variational analysis method [VAM, Hoffman et al., 2003] is at the center of the CCMP project's analysis procedures for combining observations of the wind. The VAM was developed as a smoothing spline and so implicitly defines the background error covariance by means of several constraints with adjustable weights, and does not provide an explicit estimate of the analysis error. Here we report on our research to develop uncertainty estimates for wind speed for the VAM inputs and outputs, i.e., for the background (B), the observations (O), and the analysis (A), based on the Desroziers et al. [2005] diagnostics (DD hereafter). The DD are applied to the CCMP ocean surface wind data sets to estimate wind speed errors of the ECMWF background, the microwave satellite observations, and the resulting CCMP analysis. The DD confirm that the ECMWF operational surface wind speed error standard deviations vary with latitude in the range 0.7-1.5 m/s and that the cross-calibrated RSS wind speed retrieval standard deviations are in the range 0.5-0.8 m/s. Further, the estimated CCMP analysis wind speed standard deviations are in the range 0.2-0.4 m/s. The results suggest the need to revise the parameterization of the errors due to the FGAT (first guess at the appropriate time) procedure. Errors for wind speeds < 16 m/s are homogeneous, but for the relatively rare, yet critical, higher wind speed situations, errors are much larger. Atlas, R., R. N. Hoffman, J. Ardizzone, S. M. Leidner, J. C. Jusem, D. K. Smith, and D. Gombos, A cross-calibrated, multi-platform ocean surface wind velocity product for
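
    For scalar wind speed, the Desroziers diagnostics reduce to simple products of the innovation, residual, and increment statistics. A schematic sketch, assuming collocated observation, background, and analysis values:

```python
# Schematic of the Desroziers et al. (2005) diagnostics for scalar wind
# speed: with d_ob = O - B, d_oa = O - A, d_ab = A - B, the expected
# products give the observation, background, and analysis error variances.
import numpy as np

def desroziers_variances(obs, background, analysis):
    d_ob = obs - background        # innovation
    d_oa = obs - analysis          # analysis residual
    d_ab = analysis - background   # analysis increment
    sigma2_o = np.mean(d_oa * d_ob)   # observation error variance
    sigma2_b = np.mean(d_ab * d_ob)   # background error variance
    sigma2_a = np.mean(d_ab * d_oa)   # analysis error variance
    return sigma2_o, sigma2_b, sigma2_a
```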

  14. Improving self-calibration

    NASA Astrophysics Data System (ADS)

    Enßlin, Torsten A.; Junklewitz, Henrik; Winderling, Lars; Greiner, Maksim; Selig, Marco

    2014-10-01

    Response calibration is the process of inferring how much the measured data depend on the signal one is interested in. It is essential for any quantitative signal estimation on the basis of the data. Here, we investigate self-calibration methods for linear signal measurements and linear dependence of the response on the calibration parameters. The common practice is to augment an external calibration solution using a known reference signal with an internal calibration on the unknown measurement signal itself. Contemporary self-calibration schemes try to find a self-consistent solution for signal and calibration by exploiting redundancies in the measurements. This can be understood in terms of maximizing the joint probability of signal and calibration. However, the full uncertainty structure of this joint probability around its maximum is thereby not taken into account by these schemes. Therefore, better schemes, in the sense of minimal squared error, can be designed by accounting for asymmetries in the uncertainty of signal and calibration. We argue that at least a systematic correction of the common self-calibration scheme should be applied in many measurement situations in order to properly treat uncertainties of the signal on which one calibrates. Otherwise, the calibration solutions suffer from a systematic bias, which consequently distorts the signal reconstruction. Furthermore, we argue that nonparametric, signal-to-noise-filtered calibration should provide more accurate reconstructions than the common bin averages, and we provide a new, improved self-calibration scheme. We illustrate our findings with a simplistic numerical example.

  15. Problems with Multivariate Normality: Can the Multivariate Bootstrap Help?

    ERIC Educational Resources Information Center

    Thompson, Bruce

    Multivariate normality is required for some statistical tests. This paper explores the implications of violating the assumption of multivariate normality and illustrates a graphical procedure for evaluating multivariate normality. The logic for using the multivariate bootstrap is presented. The multivariate bootstrap can be used when distribution…

  16. A multivariate CAR model for mismatched lattices.

    PubMed

    Porter, Aaron T; Oleson, Jacob J

    2014-10-01

    In this paper, we develop a multivariate Gaussian conditional autoregressive model for use on mismatched lattices. Most current multivariate CAR models are designed for each multivariate outcome to utilize the same lattice structure. In many applications, a change of basis will allow different lattices to be utilized, but this is not always the case, because a change of basis is not always desirable or even possible. Our multivariate CAR model allows each outcome to have a different neighborhood structure, which can utilize different lattices for each structure. The model is applied in two real data analyses. The first is a Bayesian learning example in mapping the 2006 Iowa mumps epidemic, which demonstrates the importance of utilizing multiple channels of infection flow in mapping infectious diseases. The second is a multivariate analysis of poverty levels and educational attainment in the American Community Survey. PMID:25457598

  17. Weighted partial least squares method to improve calibration precision for spectroscopic noise-limited data

    SciTech Connect

    Haaland, D.M.; Jones, H.D.T.

    1997-09-01

    Multivariate calibration methods have been applied extensively to the quantitative analysis of Fourier transform infrared (FT-IR) spectral data. Partial least squares (PLS) methods have become the most widely used multivariate method for quantitative spectroscopic analyses. Most often these methods are limited by model error or by the accuracy or precision of the reference methods. However, in some cases, the precision of the quantitative analysis is limited by the noise in the spectroscopic signal. In these situations, the precision of the PLS calibrations and predictions can be improved by incorporating weighting into the PLS algorithm. If the spectral noise of the system is known (e.g., in detector-noise-limited cases), then appropriate weighting can be incorporated into the multivariate spectral calibrations and predictions. A weighted PLS (WPLS) algorithm was developed to improve the precision of the analysis in the case of spectral-noise-limited data. This new PLS algorithm was then tested with real and simulated data, and the results were compared with those of the unweighted PLS algorithm. Using near-infrared (NIR) spectral data, improvements in calibration precision were obtained when the WPLS algorithm was applied. The best WPLS method improved prediction precision for the analysis of one of the minor components by a factor of nearly 9 relative to the unweighted PLS algorithm.
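
    One simple way to realize such weighting, sketched below under the assumption that the per-channel noise standard deviations are known, is to whiten the spectra before an ordinary PLS fit; this is a generic illustration, not the paper's specific WPLS algorithm:

```python
# A simple way to incorporate known spectral noise into a PLS calibration:
# scale every spectral channel by the inverse of its noise standard
# deviation before fitting, so noisy channels carry less weight.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def weighted_pls(X, y, noise_std, n_components=5):
    w = 1.0 / np.asarray(noise_std)   # per-channel weights
    pls = PLSRegression(n_components=n_components).fit(X * w, y)
    return pls, w

# Prediction must apply the same channel weighting:
#   y_hat = pls.predict(x_new * w)
```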

  18. Multivariate Data EXplorer (MDX)

    SciTech Connect

    Steed, Chad Allen

    2012-08-01

    The MDX toolkit facilitates exploratory data analysis and visualization of multivariate datasets. MDX provides an interactive graphical user interface to load, explore, and modify multivariate datasets stored in tabular forms. MDX uses an extended version of the parallel coordinates plot and scatterplots to represent the data. The user can perform rapid visual queries using mouse gestures in the visualization panels to select rows or columns of interest. The visualization panel provides coordinated multiple views whereby selections made in one plot are propagated to the other plots. Users can also export selected data or reconfigure the visualization panel to explore relationships between columns and rows in the data.

  19. An Integrated Flexible Self-calibration Approach for 2D Laser Scanning Range Finders Applied to the Hokuyo UTM-30LX-EW

    NASA Astrophysics Data System (ADS)

    Mader, D.; Westfeld, P.; Maas, H.-G.

    2014-06-01

    The paper presents a flexible approach for the geometric calibration of a 2D infrared laser scanning range finder. It does not require spatial object data, thus avoiding the time-consuming determination of reference distances or coordinates with superior accuracy. The core contribution is the development of an integrated bundle adjustment, based on the flexible principle of self-calibration. This method facilitates the precise definition of the geometry of the scanning device, including the estimation of range-measurement-specific correction parameters. The integrated calibration routine jointly adjusts distance and angular data from the laser scanning range finder as well as image data from a supporting DSLR camera, and automatically estimates optimum observation weights. The validation process carried out using a Hokuyo UTM-30LX-EW confirms the correctness of the proposed functional and stochastic contexts and allows detailed accuracy analyses. The level of accuracy of the observations is computed by variance component estimation. For the Hokuyo scanner, we obtained a range precision of 0.2% of the measured distance and an angular precision of 0.2 deg. The RMS error of a 3D coordinate after calibration is 5 mm in the lateral and 9 mm in the depth direction. Particular challenges arose due to the very large elliptical laser beam cross-section of the scanning device used.

  20. AeroADL: applying the integration of the Suomi-NPP science algorithms with the Algorithm Development Library to the calibration and validation task

    NASA Astrophysics Data System (ADS)

    Houchin, J. S.

    2014-09-01

    A common problem for the off-line validation of calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat-file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of test data sets are staged for the algorithm once and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, new data sets are continually run through the algorithm, which requires significant effort to stage each of those data sets without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing efficient means to stage and process an input data set, to override static calibration coefficient look-up tables (LUTs) with experimental versions of those tables, and to manage a library containing multiple versions of each of the static LUT files in such a way that the correct set of LUTs required for each algorithm is automatically provided without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.

  1. Multivariate Intraclass Correlation.

    ERIC Educational Resources Information Center

    Wiley, David E.; Hawkes, Thomas H.

    This paper is an explication of a statistical model which will permit an interpretable intraclass correlation coefficient that is negative, and a generalized extension of that model to cover a multivariate problem. The methodological problem has its practical roots in an attempt to find a statistic which could indicate the degree of similarity or…

  2. Multivariate postprocessing techniques for probabilistic hydrological forecasting

    NASA Astrophysics Data System (ADS)

    Hemri, S.; Lisniak, D.; Klein, B.

    2015-09-01

    Hydrologic ensemble forecasts driven by atmospheric ensemble prediction systems need statistical postprocessing in order to account for systematic errors in terms of both location and spread. Runoff is an inherently multivariate process, with typical events lasting from hours in the case of floods to weeks or even months in the case of droughts. This calls for multivariate postprocessing techniques that yield well-calibrated forecasts in univariate terms and ensure a realistic temporal dependence structure at the same time. To this end, the univariate ensemble model output statistics (EMOS) postprocessing method is combined with two different copula approaches that ensure multivariate calibration throughout the entire forecast horizon. The domain of this study covers three subcatchments of the river Rhine that represent different sizes and hydrological regimes: the Upper Rhine up to the gauge Maxau, the river Moselle up to the gauge Trier, and the river Lahn up to the gauge Kalkofen. The two approaches to modeling the temporal dependence structure are ensemble copula coupling (ECC), which preserves the dependence structure of the raw ensemble, and a Gaussian copula approach (GCA), which estimates the temporal correlations from training observations. The results indicate that both methods are suitable for modeling the temporal dependencies of probabilistic hydrologic forecasts.
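
    A minimal sketch of the ECC reordering step (the GCA variant would instead sample ranks from a fitted Gaussian copula); the array shapes are illustrative:

```python
# Sketch of ensemble copula coupling (ECC): at every lead time, reorder the
# samples drawn from the calibrated univariate distributions so that they
# follow the rank order of the raw ensemble, which restores the raw
# ensemble's temporal dependence structure.
import numpy as np

def ecc_reorder(raw_ensemble, calibrated_samples):
    """Both arrays have shape (n_members, n_lead_times)."""
    out = np.empty_like(calibrated_samples)
    for t in range(raw_ensemble.shape[1]):
        ranks = raw_ensemble[:, t].argsort().argsort()   # rank of each member
        out[:, t] = np.sort(calibrated_samples[:, t])[ranks]
    return out
```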

  3. Energy calibration via correlation

    NASA Astrophysics Data System (ADS)

    Maier, Daniel; Limousin, Olivier

    2016-03-01

    The main task of an energy calibration is to find a relation between pulse-height values and the corresponding energies. Doing this for each pulse-height channel individually requires an elaborate input spectrum with excellent counting statistics and a sophisticated data analysis. This work presents an easy-to-handle energy calibration process which can operate reliably on calibration measurements with low counting statistics. The method uses a parameter-based model for the energy calibration and determines the optimal parameters of the model by finding the best correlation between the measured pulse-height spectrum and multiple synthetic pulse-height spectra constructed from different sets of calibration parameters. A CdTe-based semiconductor detector and the line emissions of an 241Am source were used to test the performance of the correlation method in terms of systematic calibration errors for different counting statistics. Up to energies of 60 keV, systematic errors were measured to be less than ~0.1 keV. Energy calibration via correlation can be applied to any kind of calibration spectra and shows robust behavior at low counting statistics. It enables a fast and accurate calibration that can be used to monitor the spectroscopic properties of a detector system in near real time.
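
    A schematic of the correlation approach, assuming a linear pulse-height model E = gain · channel + offset and a known list of line energies; the line widths and parameter grids below are illustrative:

```python
# Sketch of calibration via correlation: build a synthetic pulse-height
# spectrum for each candidate (gain, offset) from the known line energies,
# and keep the parameters that maximize the correlation with the measured
# spectrum.
import numpy as np

def correlate_calibration(measured, line_energies_keV, gains, offsets, sigma_ch=2.0):
    channels = np.arange(measured.size)
    best, best_r = None, -np.inf
    for g in gains:
        for o in offsets:
            # Expected line positions in channel space for this (g, o).
            centers = (np.asarray(line_energies_keV) - o) / g
            synthetic = np.exp(
                -0.5 * ((channels[:, None] - centers) / sigma_ch) ** 2
            ).sum(axis=1)
            r = np.corrcoef(measured, synthetic)[0, 1]
            if r > best_r:
                best, best_r = (g, o), r
    return best, best_r
```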

  4. Multivariate processing strategies for enhancing qualitative and quantitative analysis based on infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Wan, Boyong

    2007-12-01

    Airborne passive Fourier transform infrared spectrometry is gaining increased attention in environmental applications because of its great flexibility. Usually, pattern recognition techniques are used for automated analysis of the large amounts of collected data. However, challenging problems are the constantly changing background and the high cost of calibration. As the aircraft flies, the background is constantly changing. Also, considering the great variety of backgrounds and the high expense of data collection from aircraft, the cost of collecting representative training data is formidable. Instead of using airborne data, data generated from simulation strategies can be used for training purposes. Training data collected under controlled conditions on the ground and data synthesized from real backgrounds are both options. With either strategy, classifiers may be developed at much lower cost. For both strategies, signal processing techniques need to be used to extract analyte features. In this dissertation, signal processing methods are applied in either the interferogram or the spectral domain for feature extraction. Then, pattern recognition methods are applied to develop binary classifiers for automated detection of air-collected methanol and ethanol vapors. The results demonstrate that, with optimized signal processing methods and training set composition, classifiers trained from ground-collected or synthetic data can give good classification on real air-collected data. Near-infrared (NIR) spectrometry is emerging as a promising tool for noninvasive blood glucose detection. In combination with multivariate calibration techniques, NIR spectroscopy can give quick quantitative determinations of many species with minimal sample preparation. However, one main problem with NIR calibrations is degradation of the calibration model over time. The varying background information will worsen the prediction precision and complicate the multivariate models. To mitigate the need for frequent recalibration and

  5. Calibration of Germanium Resistance Thermometers

    NASA Technical Reports Server (NTRS)

    Ladner, D.; Urban, E.; Mason, F. C.

    1987-01-01

    Largely completed thermometer-calibration cryostat and probe allows six germanium resistance thermometers to be calibrated at one time at superfluid-helium temperatures. In experiments involving several such thermometers, use of this calibration apparatus results in substantial cost savings. Cryostat maintains temperature less than 2.17 K through controlled evaporation and removal of liquid helium from Dewar. Probe holds thermometers to be calibrated and applies small amount of heat as needed to maintain precise temperature below 2.17 K.

  6. Multivariate Analysis in Metabolomics

    PubMed Central

    Worley, Bradley; Powers, Robert

    2015-01-01

    Metabolomics aims to provide a global snapshot of all small-molecule metabolites in cells and biological fluids, free of observational biases inherent to more focused studies of metabolism. However, the staggeringly high information content of such global analyses introduces a challenge of its own; efficiently forming biologically relevant conclusions from any given metabolomics dataset indeed requires specialized forms of data analysis. One approach to finding meaning in metabolomics datasets involves multivariate analysis (MVA) methods such as principal component analysis (PCA) and partial least squares projection to latent structures (PLS), where spectral features contributing most to variation or separation are identified for further analysis. However, as with any mathematical treatment, these methods are not a panacea; this review discusses the use of multivariate analysis for metabolomics, as well as common pitfalls and misconceptions. PMID:26078916

  7. Multivariate Data EXplorer (MDX)

    Energy Science and Technology Software Center (ESTSC)

    2012-08-01

    The MDX toolkit facilitates exploratory data analysis and visualization of multivariate datasets. MDX provides an interactive graphical user interface to load, explore, and modify multivariate datasets stored in tabular forms. MDX uses an extended version of the parallel coordinates plot and scatterplots to represent the data. The user can perform rapid visual queries using mouse gestures in the visualization panels to select rows or columns of interest. The visualization panel provides coordinated multiple views whereby selections made in one plot are propagated to the other plots. Users can also export selected data or reconfigure the visualization panel to explore relationships between columns and rows in the data.

  8. COSIMA data analysis using multivariate techniques

    NASA Astrophysics Data System (ADS)

    Silén, J.; Cottin, H.; Hilchenbach, M.; Kissel, J.; Lehto, H.; Siljeström, S.; Varmuza, K.

    2015-02-01

    We describe how to use multivariate analysis of complex TOF-SIMS (time-of-flight secondary ion mass spectrometry) spectra by introducing the method of random projections. The technique allows us to do full clustering and classification of the measured mass spectra. In this paper we use the tool for classification purposes. The presentation describes calibration experiments of 19 minerals on Ag and Au substrates using positive mode ion spectra. The discrimination between individual minerals gives a cross-validation Cohen κ for classification of typically about 80%. We intend to use the method as a fast tool to deduce a qualitative similarity of measurements.
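
    A toy sketch of the random-projection step followed by a simple classifier; the data, class count, and nearest-centroid rule are illustrative stand-ins for the TOF-SIMS pipeline:

```python
# Sketch of the random-projection idea: compress very wide mass spectra to a
# low-dimensional space with a random matrix (Johnson-Lindenstrauss style),
# then classify in that space.
import numpy as np

rng = np.random.default_rng(2)
n_spectra, n_masses, k = 190, 20000, 64
X = rng.random((n_spectra, n_masses))      # synthetic TOF-SIMS spectra
labels = rng.integers(0, 19, n_spectra)    # 19 mineral classes

R = rng.standard_normal((n_masses, k)) / np.sqrt(k)  # random projection matrix
Z = X @ R                                  # compressed spectra

# Nearest-centroid classification in the projected space.
centroids = np.array([Z[labels == c].mean(axis=0) for c in range(19)])
pred = np.argmin(((Z[:, None, :] - centroids) ** 2).sum(axis=2), axis=1)
print((pred == labels).mean())
```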

  9. Optimal and multivariable control of a turbogenerator

    NASA Astrophysics Data System (ADS)

    Lahoud, M. A.; Harley, R. G.; Secker, A.

    The use of modern control methods to design multivariable controllers which improve the performance of a turbogenerator was investigated. The turbogenerator nonlinear mathematical model from which a linearized model is deduced is presented. The inverse Nyquist Array method and the theory of optimal control are both applied to the linearized model to generate two alternative control schemes. The schemes are implemented on the nonlinear simulation model to assess their dynamic performance. Results from modern multivariable control schemes are compared with the classical automatic voltage regulator and speed governor system.

  10. Sparse Multivariate Regression With Covariance Estimation

    PubMed Central

    Rothman, Adam J.; Levina, Elizaveta; Zhu, Ji

    2014-01-01

    We propose a procedure for constructing a sparse estimator of a multivariate regression coefficient matrix that accounts for correlation of the response variables. This method, which we call multivariate regression with covariance estimation (MRCE), involves penalized likelihood with simultaneous estimation of the regression coefficients and the covariance structure. An efficient optimization algorithm and a fast approximation are developed for computing MRCE. Using simulation studies, we show that the proposed method outperforms relevant competitors when the responses are highly correlated. We also apply the new method to a finance example on predicting asset returns. An R-package containing this dataset and code for computing MRCE and its approximation are available online. PMID:24963268

  11. Method of multivariate spectral analysis

    DOEpatents

    Keenan, Michael R.; Kotula, Paul G.

    2004-01-06

    A method of determining the properties of a sample from measured spectral data collected from the sample by performing a multivariate spectral analysis. The method can include: generating a two-dimensional matrix A containing measured spectral data; providing a weighted spectral data matrix D by performing a weighting operation on matrix A; factoring D into the product of two matrices, C and S^T, by performing a constrained alternating least-squares analysis of D = C S^T, where C is a concentration intensity matrix and S is a spectral shapes matrix; unweighting C and S by applying the inverse of the weighting used previously; and determining the properties of the sample by inspecting C and S. This method can be used to analyze X-ray spectral data generated by operating a Scanning Electron Microscope (SEM) with an attached Energy Dispersive Spectrometer (EDS).
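
    A minimal sketch of the constrained alternating least-squares factorization D ≈ C S^T, with nonnegativity enforced by simple clipping; the weighting and unweighting steps of the patented method are omitted:

```python
# Nonnegativity-constrained alternating least squares for D ~ C @ S.T,
# where C holds concentration intensities and S spectral shapes. The
# constraint is enforced here by clipping after each update.
import numpy as np

def constrained_als(D, n_components, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    C = rng.random((D.shape[0], n_components))
    for _ in range(n_iter):
        # Update spectral shapes S from D ~ C @ S.T ...
        S = np.linalg.lstsq(C, D, rcond=None)[0].T.clip(min=0.0)
        # ... then concentrations C from D.T ~ S @ C.T.
        C = np.linalg.lstsq(S, D.T, rcond=None)[0].T.clip(min=0.0)
    return C, S
```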

  12. System identification for multivariable control

    NASA Astrophysics Data System (ADS)

    Vanzee, G. A.

    1981-05-01

    System identification methods and modern control theory are applied to industrial processes. These processes must often be controlled in order to meet certain requirements with respect to product quality, safety, energy consumption, and environmental load. Modern control system design methods that take the interaction phenomena and stochastic disturbances into account are used; these require an accurate dynamic mathematical model of the process, obtained by theoretical modelling and/or by system identification. The computational aspects of two important types of identification methods, i.e., stochastic realization and prediction-error-based parameter estimation, are studied. The studied computational aspects are the robustness, the accuracy, and the computational costs of the methods. Theoretical analyses and applications to a multivariable pilot-scale process operating under closed-loop conditions are presented.

  13. Multivariate respiratory motion prediction

    NASA Astrophysics Data System (ADS)

    Dürichen, R.; Wissel, T.; Ernst, F.; Schlaefer, A.; Schweikard, A.

    2014-10-01

    In extracranial robotic radiotherapy, tumour motion is compensated by tracking external and internal surrogates. To compensate for system-specific time delays, time series prediction of the external optical surrogates is used. We investigate whether the prediction accuracy can be increased by expanding the current clinical setup with an accelerometer, a strain belt and a flow sensor. Four previously published prediction algorithms are adapted to multivariate inputs—normalized least mean squares (nLMS), wavelet-based least mean squares (wLMS), support vector regression (SVR) and relevance vector machines (RVM)—and evaluated for three different prediction horizons. The measurement involves 18 subjects and consists of two phases, focusing on long-term trends (M1) and breathing artefacts (M2). To select the most relevant and least redundant sensors, a sequential forward selection (SFS) method is proposed. Using a multivariate setting, the results show that the clinically used nLMS algorithm is susceptible to large outliers. In the case of irregular breathing (M2), the mean root mean square error (RMSE) of a univariate nLMS algorithm is 0.66 mm and can be decreased to 0.46 mm by a multivariate RVM model (best algorithm on average). To investigate the full potential of this approach, the optimal sensor combination was also estimated on the complete test set. The results indicate that a further decrease in RMSE is possible for RVM (to 0.42 mm). This motivates further research on sensor selection methods. Besides the optical surrogates, the sensors most frequently selected by the algorithms are the accelerometer and the strain belt. These sensors could be easily integrated in the current clinical setup and would allow a more precise motion compensation.
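
    A sketch of a multivariate nLMS predictor of the kind described, assuming recorded surrogate signals and offline evaluation; the tap count and step size are illustrative:

```python
# Multivariate normalized least mean squares (nLMS) predictor: the target
# position 'horizon' steps ahead is predicted from a tap vector that
# concatenates the recent history of all surrogate signals; weights follow
# the normalized LMS update rule.
import numpy as np

def nlms_predict(signals, target, horizon, taps=10, mu=0.5, eps=1e-6):
    """signals: (n_steps, n_sensors); target: (n_steps,). Returns predictions."""
    n_steps, n_sensors = signals.shape
    w = np.zeros(taps * n_sensors)
    preds = np.zeros(n_steps)
    for t in range(taps, n_steps - horizon):
        x = signals[t - taps:t].ravel()      # multivariate tap vector
        preds[t + horizon] = w @ x
        err = target[t + horizon] - w @ x    # error once the sample arrives
        w += mu * err * x / (eps + x @ x)    # normalized LMS update
    return preds
```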

  14. Introduction to multivariate discrimination

    NASA Astrophysics Data System (ADS)

    Kégl, Balázs

    2013-07-01

    Multivariate discrimination or classification is one of the best-studied problems in machine learning, with a plethora of well-tested and well-performing algorithms. There are also several good general textbooks [1-9] on the subject written for an average engineering, computer science, or statistics graduate student; most of them are also accessible to an average physics student with some background in computer science and statistics. Hence, instead of writing a generic introduction, we concentrate here on relating the subject to a practicing experimental physicist. After a short introduction on the basic setup (Section 1) we delve into the practical issues of complexity regularization, model selection, and hyperparameter optimization (Section 2), since it is this step that makes high-complexity non-parametric fitting so different from low-dimensional parametric fitting. To emphasize that this issue is not restricted to classification, we illustrate the concept on a low-dimensional but non-parametric regression example (Section 2.1). Section 3 describes the common algorithmic-statistical formal framework that unifies the main families of multivariate classification algorithms. We explain here the large-margin principle that partly explains why these algorithms work. Section 4 is devoted to the description of the three main (families of) classification algorithms: neural networks, the support vector machine, and AdaBoost. We do not go into the algorithmic details; the goal is to give an overview of the form of the functions these methods learn and of the objective functions they optimize. Besides their technical description, we also make an attempt to put these algorithms into a socio-historical context. We then briefly describe some rather heterogeneous applications to illustrate the pattern recognition pipeline and to show how widespread the use of these methods is (Section 5). We conclude the chapter with three essentially open research problems that are either

  15. Multivariate volume rendering

    SciTech Connect

    Crawfis, R.A.

    1996-03-01

    This paper presents a new technique for representing multivalued data sets defined on an integer lattice. It extends the state of the art in volume rendering to include nonhomogeneous volume representations, that is, volume rendering of materials with very fine detail (e.g., translucent granite) within a voxel. Multivariate volume rendering is achieved by introducing controlled amounts of noise within the volume representation. Varying the local amount of noise within the volume is used to represent a separate scalar variable. The technique can also be used in image synthesis to create more realistic clouds and fog.

  16. A new self-calibration method applied to TOMS and SBUV backscattered ultraviolet data to determine long-term global ozone change

    SciTech Connect

    Herman, J.R.; Hudson, R.; McPeters, R.; Stolarski, R. ); Ahmad, Z.; Gu, X.Y., Taylor, S.; Wellemeyer, C. )

    1991-04-20

    The currently archived (1989) total ozone mapping spectrometer (TOMS) and solar backscattered ultraviolet (SBUV) total ozone data (version 5) show a global average decrease of about 9.0% from November 1978 to November 1988. This large decrease disagrees with an approximate 3.5% decrease estimated from the ground-based Dobson network. The primary source of disagreement was found to arise from an overestimate of the reflectivity change, and its incorrect wavelength dependence, for the diffuser plate used when measuring solar irradiance. For total ozone measured by TOMS, a means has been found to use the measured radiance-irradiance ratio from several wavelength pairs to construct an internally self-consistent calibration. The method uses the wavelength dependence of the sensitivity to calibration errors and the requirement that albedo ratios for each wavelength pair yield the same total ozone amounts. Smaller errors in determining spacecraft attitude, synchronization problems with the photon counting electronics, and sea glint contamination of boundary reflectivity data have been corrected or minimized. New climatological low-ozone profiles appropriate for Antarctic ozone hole conditions and other low-ozone cases have been incorporated into the TOMS algorithm. The combined corrections have led to a new determination of the global average total ozone trend (version 6) as a 2.9 ± 1.3% decrease over 11 years. Version 6 data are shown to be in agreement within error limits with the average of 39 ground-based Dobson stations and with the world standard Dobson spectrometer 83 at Mauna Loa, Hawaii.

  17. A complete procedure for multivariate index-flood model application

    NASA Astrophysics Data System (ADS)

    Requena, Ana Isabel; Chebana, Fateh; Mediero, Luis

    2016-04-01

    Multivariate frequency analyses are needed to study floods due to dependence existing among representative variables of the flood hydrograph. Particularly, multivariate analyses are essential when flood-routing processes significantly attenuate flood peaks, such as in dams and flood management in flood-prone areas. Besides, regional analyses improve at-site quantile estimates obtained at gauged sites, especially when short flow series exist, and provide estimates at ungauged sites where flow records are unavailable. However, very few studies deal simultaneously with both multivariate and regional aspects. This study seeks to introduce a complete procedure to conduct a multivariate regional hydrological frequency analysis (HFA), providing guidelines. The methodology joins recent developments achieved in multivariate and regional HFA, such as copulas, multivariate quantiles and the multivariate index-flood model. The proposed multivariate methodology, focused on the bivariate case, is applied to a case study located in Spain by using hydrograph volume and flood peak observed series. As a result, a set of volume-peak events under a bivariate quantile curve can be obtained for a given return period at a target site, providing flexibility to practitioners to check and decide what the design event for a given purpose should be. In addition, the multivariate regional approach can also be used for obtaining the multivariate distribution of the hydrological variables when the aim is to assess the structure failure for a given return period.

  18. Applying Recovery Biomarkers to Calibrate Self-Report Measures of Energy and Protein in the Hispanic Community Health Study/Study of Latinos.

    PubMed

    Mossavar-Rahmani, Yasmin; Shaw, Pamela A; Wong, William W; Sotres-Alvarez, Daniela; Gellman, Marc D; Van Horn, Linda; Stoutenberg, Mark; Daviglus, Martha L; Wylie-Rosett, Judith; Siega-Riz, Anna Maria; Ou, Fang-Shu; Prentice, Ross L

    2015-06-15

    We investigated measurement error in the self-reported diets of US Hispanics/Latinos, who are prone to obesity and related comorbidities, by background (Central American, Cuban, Dominican, Mexican, Puerto Rican, and South American) in 2010-2012. In 477 participants aged 18-74 years, doubly labeled water and urinary nitrogen were used as objective recovery biomarkers of energy and protein intakes. Self-report was captured from two 24-hour dietary recalls. All measures were repeated in a subsample of 98 individuals. We examined the bias of dietary recalls and their associations with participant characteristics using generalized estimating equations. Energy intake was underestimated by 25.3% (men, 21.8%; women, 27.3%), and protein intake was underestimated by 18.5% (men, 14.7%; women, 20.7%). Protein density was overestimated by 10.7% (men, 11.3%; women, 10.1%). Higher body mass index and Hispanic/Latino background were associated with underestimation of energy (P<0.05). For protein intake, higher body mass index, older age, nonsmoking, Spanish speaking, and Hispanic/Latino background were associated with underestimation (P<0.05). Systematic underreporting of energy and protein intakes and overreporting of protein density were found to vary significantly by Hispanic/Latino background. We developed calibration equations that correct for subject-specific error in reporting that can be used to reduce bias in diet-disease association studies. PMID:25995289

  19. Code Calibration Applied to the TCA High-Lift Model in the 14 x 22 Wind Tunnel (Simulation With and Without Model Post-Mount)

    NASA Technical Reports Server (NTRS)

    Lessard, Wendy B.

    1999-01-01

    The objective of this study is to calibrate a Navier-Stokes code for the TCA (30/10) baseline configuration (partial-span leading-edge flaps deflected 30 deg and all trailing-edge flaps deflected 10 deg). The computational results for several angles of attack are compared with experimental forces, moments, and surface pressures. The code used in this study is CFL3D; mesh sequencing and multigrid were used to full advantage to accelerate convergence. A multigrid approach similar to that used for the Reference H configuration was employed, allowing point-to-point matching across all the trailing-edge block interfaces. From past experience with the Reference H (i.e., good force, moment, and pressure comparisons were obtained), it was assumed that the mounting system would produce small effects; hence, it was not initially modeled. However, comparisons of lower surface pressures indicated that the post mount significantly influenced the lower surface pressures, so the post geometry was inserted into the existing grid using Chimera (overset grids).

  20. Applying Recovery Biomarkers to Calibrate Self-Report Measures of Energy and Protein in the Hispanic Community Health Study/Study of Latinos

    PubMed Central

    Mossavar-Rahmani, Yasmin; Shaw, Pamela A.; Wong, William W.; Sotres-Alvarez, Daniela; Gellman, Marc D.; Van Horn, Linda; Stoutenberg, Mark; Daviglus, Martha L.; Wylie-Rosett, Judith; Siega-Riz, Anna Maria; Ou, Fang-Shu; Prentice, Ross L.

    2015-01-01

    We investigated measurement error in the self-reported diets of US Hispanics/Latinos, who are prone to obesity and related comorbidities, by background (Central American, Cuban, Dominican, Mexican, Puerto Rican, and South American) in 2010–2012. In 477 participants aged 18–74 years, doubly labeled water and urinary nitrogen were used as objective recovery biomarkers of energy and protein intakes. Self-report was captured from two 24-hour dietary recalls. All measures were repeated in a subsample of 98 individuals. We examined the bias of dietary recalls and their associations with participant characteristics using generalized estimating equations. Energy intake was underestimated by 25.3% (men, 21.8%; women, 27.3%), and protein intake was underestimated by 18.5% (men, 14.7%; women, 20.7%). Protein density was overestimated by 10.7% (men, 11.3%; women, 10.1%). Higher body mass index and Hispanic/Latino background were associated with underestimation of energy (P < 0.05). For protein intake, higher body mass index, older age, nonsmoking, Spanish speaking, and Hispanic/Latino background were associated with underestimation (P < 0.05). Systematic underreporting of energy and protein intakes and overreporting of protein density were found to vary significantly by Hispanic/Latino background. We developed calibration equations that correct for subject-specific error in reporting that can be used to reduce bias in diet-disease association studies. PMID:25995289

  1. A variable acceleration calibration system

    NASA Astrophysics Data System (ADS)

    Johnson, Thomas H.

    2011-12-01

    A variable acceleration calibration system that applies loads using gravitational and centripetal acceleration serves as an alternative, efficient, and cost-effective method for calibrating internal wind tunnel force balances. Two proof-of-concept variable acceleration calibration systems are designed, fabricated, and tested. The NASA UT-36 force balance served as the test balance for the calibration experiments. The variable acceleration calibration systems are shown to be capable of performing three-component calibration experiments with an approximate applied load error on the order of 1% of the full-scale calibration loads. Sources of error are identified using experimental design methods and a propagation of uncertainty analysis. Three types of uncertainty are identified for the systems and are attributed to prediction error, calibration error, and pure error. Angular velocity uncertainty is shown to be the largest identified source of prediction error. The calibration uncertainties using a production variable acceleration based system are shown to be potentially equivalent to current methods. The production-quality system can be realized using lighter materials and more precise instrumentation. Further research is needed to account for balance deflection, forcing effects due to vibration, and large tare loads. A gyroscope measurement technique is shown to be capable of resolving the balance deflection angle calculation. Long-term research objectives include a demonstration of a six-degree-of-freedom calibration and a large-capacity balance calibration.
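
    The centripetal part of the applied load follows F = mω²r, so the relative load error is roughly twice the relative angular velocity error, consistent with ω uncertainty dominating the prediction error; a small illustration with hypothetical numbers:

```python
# Centripetal load F = m * w**2 * r: a relative error dw/w in angular
# velocity produces roughly twice that relative error in the applied load.
# All numbers below are illustrative, not from the dissertation.
m, r = 2.0, 0.5          # calibration mass [kg], rotation radius [m]
w, dw = 10.0, 0.01       # angular velocity [rad/s] and its uncertainty

F = m * w**2 * r
dF = 2.0 * m * w * r * dw    # first-order propagation of the w uncertainty
print(F, dF, dF / F)         # relative load error ~ 2 * dw / w
```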

  2. Multivariate Hypergeometric Similarity Measure

    PubMed Central

    Kaddi, Chanchala D.; Parry, R. Mitchell; Wang, May D.

    2016-01-01

    We propose a similarity measure based on the multivariate hypergeometric distribution for the pairwise comparison of images and data vectors. The formulation and performance of the proposed measure are compared with other similarity measures using synthetic data. A method of piecewise approximation is also implemented to facilitate application of the proposed measure to large samples. Example applications of the proposed similarity measure are presented using mass spectrometry imaging data and gene expression microarray data. Results from synthetic and biological data indicate that the proposed measure is capable of providing meaningful discrimination between samples, and that it can be a useful tool for identifying potentially related samples in large-scale biological data sets. PMID:24407308

  3. WFPC2 Pipeline Calibration

    NASA Astrophysics Data System (ADS)

    Burrows, Chris

    2004-03-01

    This document contains a listing of all WFPC2 reference files, grouped by type, that are presently available in the Calibration Data Base (CDB) System, and a summary of how they are used in the calibration of WFPC2 data. A summary memo is kept on STEIS and kept up to date as the reference files change. That memo is intended to inform observers as to the quality of the calibration applied to their data by the PODPS pipeline processing and to provide an aid in selecting appropriate reference files for the re-calibration of WFPC2 observations. The datafiles may be requested by name from the STScI in the same fashion as any other nonproprietary data products.

  4. Anemometer calibrator

    NASA Technical Reports Server (NTRS)

    Bate, T.; Calkins, D. E.; Price, P.; Veikins, O.

    1971-01-01

    Calibrator generates accurate flow velocities over wide range of gas pressure, temperature, and composition. Both pressure and flow velocity can be maintained within 0.25 percent. Instrument is essentially closed loop hydraulic system containing positive displacement drive.

  5. Heavy flavor identification using multivariate analysis at H1

    SciTech Connect

    Pandurovic, Mila; Bozovic-Jelisavcic, Ivanka; Mudrinic, Mihajlo

    2010-01-21

    We discuss b quark identification in deep inelastic scattering of electrons on protons at H1 by applying a multivariate analysis method. The separation between heavy and light flavors can be further used to extract the proton quark content.

  6. A Precipitation Satellite Downscaling & Re-Calibration Routine for TRMM 3B42 and GPM Data Applied to the Tropical Andes

    NASA Astrophysics Data System (ADS)

    Manz, B.; Buytaert, W.; Tobón, C.; Villacis, M.; García, F.

    2014-12-01

    With the imminent release of GPM it is essential for the hydrological user community to improve the spatial resolution of satellite precipitation products (SPPs), including retrospectively for historical time series. Despite the growing number of applications, to date SPPs have two major weaknesses. Firstly, geosynchronous infrared (IR) SPPs, relying exclusively on cloud elevation/IR temperature, fail to replicate ground rainfall rates, especially for convective rainfall. Secondly, composite SPPs like TRMM include microwave and active radar observations to overcome this, but their coarse spatial resolution (0.25°), a consequence of infrequent orbital sampling, often fails to: a) characterize precipitation patterns (especially extremes) in regions of complex topography, and b) allow for gauge comparisons with adequate spatial support. This is problematic for satellite-gauge merging and subsequent hydrological modelling applications. We therefore present a new re-calibration and downscaling routine that is applicable to 0.25°/3-hourly TRMM 3B42 and Level 3 GPM time series to generate 1 km estimates. Sixteen years of instantaneous TRMM radar (TPR) images were evaluated against a unique dataset of over 100 10-min rain gauges from the tropical Andes (Colombia & Ecuador) to develop a spatially distributed error surface. Long-term statistics on occurrence frequency, convective/stratiform fraction, and extreme precipitation probability (gamma and generalized Pareto distributions) were computed from TPR at the 1 km scale as well as from TPR and 3B42 at the 0.25° scale. To downscale from 0.25° to 1 km, a stochastic generator was used to restrict precipitation occurrence to a fraction of the 1 km pixels within the 0.25° gridcell at every time step. Regression modelling established a relationship between probability distributions at the 0.25° scale, and rainfall amounts were assigned to the retained 1 km pixels by quantile-matching to the gridcell. The approach inherently provides mass conservation of the downscaled

  7. STIS Calibration Pipeline

    NASA Astrophysics Data System (ADS)

    Hulbert, S.; Hodge, P.; Lindler, D.; Shaw, R.; Goudfrooij, P.; Katsanis, R.; Keener, S.; McGrath, M.; Bohlin, R.; Baum, S.

    1997-05-01

    Routine calibration of STIS observations in the HST data pipeline is performed by the CALSTIS task. CALSTIS can: subtract the over-scan region and a bias image from CCD observations; remove cosmic ray features from CCD observations; correct global nonlinearities for MAMA observations; subtract a dark image; and, apply flat field corrections. In the case of spectral data, CALSTIS can also: assign a wavelength to each pixel; apply a heliocentric correction to the wavelengths; convert counts to absolute flux; process the automatically generated spectral calibration lamp observations to improve the wavelength solution; rectify two-dimensional (longslit) spectra; subtract interorder and sky background; and, extract one-dimensional spectra. CALSTIS differs in significant ways from the current HST calibration tasks. The new code is written in ANSI C and makes use of a new C interface to IRAF. The input data, reference data, and output calibrated data are all in FITS format, using IMAGE or BINTABLE extensions. Error estimates are computed and include contributions from the reference images. The entire calibration can be performed by one task, but many steps can also be performed individually.

  8. Simultaneous calibration of ensemble river flow predictions over an entire range of lead times

    NASA Astrophysics Data System (ADS)

    Hemri, S.; Fundel, F.; Zappa, M.

    2013-10-01

    Probabilistic estimates of future water levels and river discharge are usually simulated with hydrologic models using ensemble weather forecasts as main inputs. As hydrologic models are imperfect and the meteorological ensembles tend to be biased and underdispersed, the ensemble forecasts for river runoff typically are biased and underdispersed, too. Thus, in order to achieve both reliable and sharp predictions, statistical postprocessing is required. In this work Bayesian model averaging (BMA) is applied to statistically postprocess raw ensemble runoff forecasts for a catchment in Switzerland, at lead times ranging from 1 to 240 h. The raw forecasts have been obtained using deterministic and ensemble forcing meteorological models with different forecast lead time ranges. First, BMA is applied based on mixtures of univariate normal distributions, subject to the assumption of independence between distinct lead times. Then, the independence assumption is relaxed in order to estimate multivariate runoff forecasts over the entire range of lead times simultaneously, based on a BMA version that uses multivariate normal distributions. Since river runoff is a highly skewed variable, Box-Cox transformations are applied in order to achieve approximate normality. Both univariate and multivariate BMA approaches are able to generate well-calibrated probabilistic forecasts that are considerably sharper than climatological forecasts. Additionally, multivariate BMA provides a promising approach for incorporating temporal dependencies into the postprocessed forecasts. Its major advantage over univariate BMA is an increase in reliability when the forecast system is changing due to model availability.

  9. Image Calibration

    NASA Technical Reports Server (NTRS)

    Peay, Christopher S.; Palacios, David M.

    2011-01-01

    Calibrate_Image calibrates images obtained from focal plane arrays so that the output image more accurately represents the observed scene. The function takes as input a degraded image along with a flat field image and a dark frame image produced by the focal plane array and outputs a corrected image. The three most prominent sources of image degradation are corrected for: dark current accumulation, gain non-uniformity across the focal plane array, and hot and/or dead pixels in the array. In the corrected output image the dark current is subtracted, the gain variation is equalized, and values for hot and dead pixels are estimated, using bicubic interpolation techniques.
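
    The three corrections combine into a simple per-pixel formula; the sketch below substitutes a local-mean fill for the bicubic interpolation used by Calibrate_Image and assumes a precomputed bad-pixel mask:

```python
# Standard focal-plane-array correction as described above: subtract the
# dark frame, equalize gain with the flat field, and replace hot/dead
# pixels. Bad pixels are filled here with a local mean for brevity.
import numpy as np

def calibrate_image(raw, dark, flat, bad_mask):
    # Dark subtraction and flat-field gain equalization.
    corrected = (raw - dark) / np.where(flat == 0.0, 1.0, flat)
    filled = corrected.copy()
    ys, xs = np.nonzero(bad_mask)
    for y, x in zip(ys, xs):
        patch = corrected[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
        good = patch[~bad_mask[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]]
        filled[y, x] = good.mean() if good.size else 0.0
    return filled
```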

  10. Simultaneous chemometric determination of pyridoxine hydrochloride and isoniazid in tablets by multivariate regression methods.

    PubMed

    Dinç, Erdal; Ustündağ, Ozgür; Baleanu, Dumitru

    2010-08-01

    The sole use of isoniazid during treatment of tuberculosis gives rise to pyridoxine deficiency. Therefore, a combination of pyridoxine hydrochloride and isoniazid is used in pharmaceutical dosage forms for tuberculosis treatment to reduce this side effect. In this study, two chemometric methods, partial least squares (PLS) and principal component regression (PCR), were applied to the simultaneous determination of pyridoxine (PYR) and isoniazid (ISO) in their tablets. A concentration training set comprising binary mixtures of PYR and ISO in 20 different combinations was randomly prepared in 0.1 M HCl. Both multivariate calibration models were constructed using the relationships between the concentration data matrix and the absorbance data matrix in the spectral region 200-330 nm. The accuracy and precision of the proposed chemometric methods were validated by analyzing synthetic mixtures containing the investigated drugs. The recoveries obtained by applying the PCR and PLS calibrations to the artificial mixtures were between 100.0 and 100.7%. Satisfactory results were obtained by applying the PLS and PCR methods to both artificial and commercial samples. These results strongly encourage the use of the methods for quality control and routine analysis of marketed tablets containing the PYR and ISO drugs. PMID:20645279
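
    A minimal PCR sketch for a two-component calibration of this kind, with synthetic spectra standing in for the 200-330 nm absorbance data:

```python
# Minimal principal component regression (PCR) sketch for a two-component
# calibration like the PYR/ISO mixtures: project the absorbance matrix onto
# its leading principal components, then regress concentrations on the
# scores. All data are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
pure = rng.random((2, 131))                 # two pure-component "spectra"
C = rng.uniform(0.1, 1.0, (20, 2))          # 20 training mixtures
A = C @ pure + 1e-3 * rng.standard_normal((20, 131))

pcr = make_pipeline(PCA(n_components=2), LinearRegression()).fit(A, C)
print(pcr.predict(A[:3]))                   # recovered concentrations
```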

  11. Analytical advantages of multivariate data processing. One, two, three, infinity?

    PubMed

    Olivieri, Alejandro C

    2008-08-01

    Multidimensional data are being abundantly produced by modern analytical instrumentation, calling for new and powerful data-processing techniques. Research in the last two decades has resulted in the development of a multitude of different processing algorithms, each equipped with its own sophisticated artillery. Analysts have slowly discovered that this body of knowledge can be appropriately classified, and that common aspects pervade all these seemingly different ways of analyzing data. As a result, going from univariate data (a single datum per sample, employed in the well-known classical univariate calibration) to multivariate data (data arrays per sample of increasingly complex structure and number of dimensions) is known to provide a gain in sensitivity and selectivity, combined with analytical advantages which cannot be overestimated. The first-order advantage, achieved using vector sample data, allows analysts to flag new samples which cannot be adequately modeled with the current calibration set. The second-order advantage, achieved with second- (or higher-) order sample data, allows one not only to mark new samples containing components which do not occur in the calibration phase but also to model their contribution to the overall signal, and most importantly, to accurately quantitate the calibrated analyte(s). No additional analytical advantages appear to be known for third-order data processing. Future research may permit, among other interesting issues, to assess if this "1, 2, 3, infinity" situation of multivariate calibration is really true. PMID:18613646

  12. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert M.

    2013-01-01

    A new regression model search algorithm was developed that may be applied to both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The algorithm is a simplified version of a more complex algorithm that was originally developed for the NASA Ames Balance Calibration Laboratory. The new algorithm performs regression model term reduction to prevent overfitting of data. It has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a regression model search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression model. Therefore, the simplified algorithm is not intended to replace the original algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new search algorithm.

  13. Calibration of a visible polarimeter

    NASA Astrophysics Data System (ADS)

    Gibney, Mark

    2012-06-01

    The calibration of a visible polarimeter is discussed. Calibration coefficients that provide a complete linear characterization of a polarimeter are represented in this paper by the analyzer vector, where the sensor response in counts is given by the dot product of the analyzer vector and the incoming Stokes vector. Using the analyzer vector to represent the effect of the sensor on the incoming Stokes vector, we can include elements of the calibration Stokes vector in the fit used to estimate the analyzer vectors/calibration coefficients. This technique allows us to alleviate some of the strict requirements usually levied on the source used to generate the calibration Stokes vectors, such as source temporal stability. Data are shown that validate the resultant analyzer vectors/calibration coefficients, using a novel technique with a tilted glass plate. How these techniques are applied to IR sensors is also briefly discussed.
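
    Because the response model counts = a · S is linear in the analyzer vector a, presenting several known calibration Stokes vectors and solving the stacked system in a least-squares sense recovers a; a sketch with synthetic inputs:

```python
# Least-squares estimate of the analyzer vector from a set of known
# calibration Stokes vectors: counts = S @ a, so a is recovered with lstsq.
import numpy as np

rng = np.random.default_rng(4)
a_true = np.array([1.0, 0.3, -0.2, 0.05])    # "true" analyzer vector
S = rng.uniform(-1.0, 1.0, (50, 4))          # calibration Stokes vectors
S[:, 0] = 1.0                                # fix the intensity component
counts = S @ a_true + 0.01 * rng.standard_normal(50)

a_hat, *_ = np.linalg.lstsq(S, counts, rcond=None)
print(a_hat)
```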

  14. Advancing emotion theory with multivariate pattern classification

    PubMed Central

    Kragel, Philip A.; LaBar, Kevin S.

    2016-01-01

    Characterizing how activity in the central and autonomic nervous systems corresponds to distinct emotional states is one of the central goals of affective neuroscience. Despite the ease with which individuals label their own experiences, identifying specific autonomic and neural markers of emotions remains a challenge. Here we explore how multivariate pattern classification approaches offer an advantageous framework for identifying emotion specific biomarkers and for testing predictions of theoretical models of emotion. Based on initial studies using multivariate pattern classification, we suggest that central and autonomic nervous system activity can be reliably decoded into distinct emotional states. Finally, we consider future directions in applying pattern classification to understand the nature of emotion in the nervous system.

  15. Multivariate Bias Correction Procedures for Improving Water Quality Predictions using Mechanistic Models

    NASA Astrophysics Data System (ADS)

    Libera, D.; Arumugam, S.

    2015-12-01

    Water quality observations are usually not available on a continuous basis because of the expense and labor involved, so calibrating and validating a mechanistic model is often difficult. Further, any model predictions inherently have bias (i.e., under/over estimation) and require techniques that preserve the long-term mean monthly attributes. This study suggests and compares two multivariate bias-correction techniques to improve the performance of the SWAT model in predicting daily streamflow and TN loads across the southeast based on split-sample validation. The first approach is a dimension reduction technique, canonical correlation analysis, that regresses the observed multivariate attributes against the SWAT model simulated values. The second approach, importance weighting from signal processing, applies a weight based on the ratio of the observed and model densities to the model data to shift the mean, variance, and cross-correlation towards the observed values. These procedures were applied to 3 watersheds chosen from the Water Quality Network in the Southeast Region, specifically watersheds with sufficiently large drainage areas and numbers of observed data points. The performance of these two approaches is also compared with independent estimates from the USGS LOADEST model. Uncertainties in the bias-corrected estimates due to limited water quality observations are also discussed.
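
    The importance-weighting idea can be sketched in a few lines: estimate the observed and simulated densities, weight each simulated value by their ratio, and compute bias-corrected moments from the weighted sample. The Gaussian synthetic data and kernel density estimates below are one illustrative reading of the abstract, not the authors' implementation.

        # Density-ratio importance weighting of biased model output.
        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(2)
        obs = rng.normal(10.0, 2.0, 500)          # observed values (synthetic)
        sim = rng.normal(12.0, 3.0, 500)          # biased model output (synthetic)

        w = gaussian_kde(obs)(sim) / gaussian_kde(sim)(sim)
        w /= w.sum()                              # normalized importance weights

        mean_corr = np.sum(w * sim)               # weighted mean shifts toward obs
        var_corr = np.sum(w * (sim - mean_corr) ** 2)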

  16. A multivariate regression model for detection of fumonisins content in maize from near infrared spectra.

    PubMed

    Giacomo, Della Riccia; Stefania, Del Zotto

    2013-12-15

    Fumonisins are mycotoxins produced by Fusarium species that commonly live in maize. Whereas the fungi damage plants, fumonisins cause disease both in cattle and in human beings. Legal limits set the tolerable daily intake of fumonisins for several maize-based feeds and foods. Chemical techniques assure the most reliable and accurate measurements, but they are expensive and time consuming. A method based on Near Infrared spectroscopy and multivariate statistical regression is described as a simpler, cheaper and faster alternative. We apply Partial Least Squares with full cross validation. Two models are described, having high correlations of calibration (0.995, 0.998) and of validation (0.908, 0.909), respectively. The description of the observed phenomenon is accurate and overfitting is avoided. Screening of contaminated maize with respect to the European legal limit of 4 mg kg(-1) should be assured. PMID:23993617
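
    A minimal sketch of the modeling step, PLS with full (leave-one-out) cross-validation, on synthetic stand-ins for the NIR spectra and fumonisin levels; the component count is an illustrative assumption.

        # PLS regression with leave-one-out cross-validation.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import LeaveOneOut, cross_val_predict

        rng = np.random.default_rng(3)
        X = rng.normal(size=(40, 100))                       # NIR spectra (synthetic)
        y = X[:, :5].sum(axis=1) + rng.normal(0, 0.1, 40)    # contamination (synthetic)

        pls = PLSRegression(n_components=5)
        y_cv = cross_val_predict(pls, X, y, cv=LeaveOneOut())
        r_val = np.corrcoef(y, y_cv.ravel())[0, 1]           # correlation of validation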

  17. Pattern recognition used to investigate multivariate data in analytical chemistry

    SciTech Connect

    Jurs, P.C.

    1986-06-06

    Pattern recognition and allied multivariate methods provide an approach to the interpretation of the multivariate data often encountered in analytical chemistry. Widely used methods include mapping and display, discriminant development, clustering, and modeling. Each has been applied to a variety of chemical problems, and examples are given. The results of two recent studies are shown, a classification of subjects as normal or cystic fibrosis heterozygotes and simulation of chemical shifts of carbon-13 nuclear magnetic resonance spectra by linear model equations.

  18. Simultaneous determination of Nifuroxazide and Drotaverine hydrochloride in pharmaceutical preparations by bivariate and multivariate spectral analysis

    NASA Astrophysics Data System (ADS)

    Metwally, Fadia H.

    2008-02-01

    The quantitative predictive abilities of the new and simple bivariate spectrophotometric method are compared with the results obtained by the use of multivariate calibration methods [classical least squares (CLS), principal component regression (PCR) and partial least squares (PLS)], using the information contained in the absorption spectra of the appropriate solutions. Mixtures of the two drugs Nifuroxazide (NIF) and Drotaverine hydrochloride (DRO) were resolved by application of the bivariate method. The different chemometric approaches were also applied, with prior optimization of the calibration matrix, as they are useful for the simultaneous inclusion of many spectral wavelengths. The results found by application of the bivariate, CLS, PCR and PLS methods for the simultaneous determination of mixtures of both components containing 2-12 μg ml⁻¹ of NIF and 2-8 μg ml⁻¹ of DRO are reported. Both approaches were satisfactorily applied to the simultaneous determination of NIF and DRO in pure form and in pharmaceutical formulation. The results were in accordance with those given by the EVA Pharma reference spectrophotometric method.

  19. Simultaneous determination of nifuroxazide and drotaverine hydrochloride in pharmaceutical preparations by bivariate and multivariate spectral analysis.

    PubMed

    Metwally, Fadia H

    2008-02-01

    The quantitative predictive abilities of the new and simple bivariate spectrophotometric method are compared with the results obtained by the use of multivariate calibration methods [classical least squares (CLS), principal component regression (PCR) and partial least squares (PLS)], using the information contained in the absorption spectra of the appropriate solutions. Mixtures of the two drugs Nifuroxazide (NIF) and Drotaverine hydrochloride (DRO) were resolved by application of the bivariate method. The different chemometric approaches were also applied, with prior optimization of the calibration matrix, as they are useful for the simultaneous inclusion of many spectral wavelengths. The results found by application of the bivariate, CLS, PCR and PLS methods for the simultaneous determination of mixtures of both components containing 2-12 μg ml⁻¹ of NIF and 2-8 μg ml⁻¹ of DRO are reported. Both approaches were satisfactorily applied to the simultaneous determination of NIF and DRO in pure form and in pharmaceutical formulation. The results were in accordance with those given by the EVA Pharma reference spectrophotometric method. PMID:17631041

  20. Multivariate streamflow forecasting using independent component analysis

    NASA Astrophysics Data System (ADS)

    Westra, Seth; Sharma, Ashish; Brown, Casey; Lall, Upmanu

    2008-02-01

    Seasonal forecasting of streamflow provides many benefits to society by improving our ability to plan and adapt to changing water supplies. A common approach to developing these forecasts is to use statistical methods that link a set of predictors representing climate state to historical streamflow, and then to use this model to project streamflow one or more seasons in advance based on the current or a projected climate state. We present an approach for forecasting multivariate time series using independent component analysis (ICA) to transform the multivariate data to a set of univariate time series that are mutually independent, thereby allowing the much broader class of univariate models to provide seasonal forecasts for each transformed series. Uncertainty is incorporated by bootstrapping the error component of each univariate model so that the probability distribution of the errors is maintained. Although all analyses are performed on univariate time series, the spatial dependence of the streamflow is captured by applying the inverse ICA transform to the predicted univariate series. We demonstrate the technique on a multivariate streamflow data set in Colombia, South America, by comparing the results to a range of other commonly used forecasting methods. The results show that the ICA-based technique is significantly better at representing spatial dependence, while not resulting in any loss of ability in capturing temporal dependence. As such, the ICA-based technique would be expected to yield considerable advantages when used in a probabilistic setting to manage large reservoir systems with multiple inflows or data collection points.
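
    The core of the method fits in a few lines: rotate the multivariate flows into independent components, forecast each component with a univariate model (an AR(1) stands in here for brevity), and rotate the forecasts back. The data and model orders below are illustrative assumptions.

        # ICA-based multivariate forecasting skeleton.
        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(4)
        Q = np.cumsum(rng.normal(size=(200, 3)), axis=0)  # three correlated flow series

        ica = FastICA(n_components=3, random_state=0)
        S = ica.fit_transform(Q)                          # mutually independent series

        S_next = np.empty(3)
        for j in range(3):                                # univariate AR(1) per component
            s = S[:, j]
            phi = s[:-1] @ s[1:] / (s[:-1] @ s[:-1])
            S_next[j] = phi * s[-1]

        Q_next = ica.inverse_transform(S_next[None, :])   # forecast in streamflow space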

  1. Damage detection using multivariate recurrence quantification analysis

    NASA Astrophysics Data System (ADS)

    Nichols, J. M.; Trickey, S. T.; Seaver, M.

    2006-02-01

    Recurrence-quantification analysis (RQA) has emerged as a useful tool for detecting subtle non-stationarities and/or changes in time-series data. Here, we extend RQA methods to multivariate observations and present a method by which the "length scale" parameter ɛ (the only parameter required for RQA) may be selected. We then apply the technique to the difficult engineering problem of damage detection. The structure considered is a finite element model of a rectangular steel plate where damage is represented as a cut in the plate, starting at one edge and extending from 0% to 25% of the plate width in 5% increments. Time series, recorded at nine separate locations on the structure, are used to reconstruct the phase space of the system's dynamics and subsequently generate the multivariate recurrence (and cross-recurrence) plots. Multivariate RQA is then used to detect damage-induced changes to the structural dynamics. These results are then compared with shifts in the plate's natural frequencies. Two of the RQA-based features are found to be more sensitive to damage than are the plate's frequencies.
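
    The object underlying multivariate RQA is the recurrence matrix: two time points recur when their multichannel state vectors lie within the length scale ɛ. A minimal sketch on synthetic nine-channel data; the paper's ɛ-selection rule is not reproduced.

        # Boolean recurrence matrix and the simplest RQA feature.
        import numpy as np

        def recurrence_matrix(X, eps):
            """X: (n_times, n_channels); entry (i, j) is True if states i and j recur."""
            d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
            return d < eps

        rng = np.random.default_rng(5)
        X = rng.normal(size=(300, 9))       # nine sensor channels (synthetic)
        R = recurrence_matrix(X, eps=2.5)
        recurrence_rate = R.mean()          # damage-sensitive features build on R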

  2. Regional dissociated heterochrony in multivariate analysis.

    PubMed

    Mitteroecker, P; Gunz, P; Weber, G W; Bookstein, F L

    2004-12-01

    Heterochrony, the classic framework to study ontogeny and phylogeny, in essence relies on a univariate concept of shape. Though principal component plots of multivariate shape data seem to resemble classical bivariate allometric plots, the language of heterochrony cannot be translated directly into general multivariate methodology. We simulate idealized multivariate ontogenetic trajectories and demonstrate their behavior in principal component plots in shape space and in size-shape space. The concept of "dissociation", which is conventionally regarded as a change in the relationship between shape change and size change, appears to be algebraically the same as regional dissociation - the variation of apparent heterochrony by region. Only if the trajectories of two related species lie along exactly the same path in shape space can the classic terminology of heterochrony apply so that pure dissociation of size change against shape change can be detected. We demonstrate a geometric morphometric approach to these issues using adult and subadult crania of 48 Pan paniscus and 47 P. troglodytes. On each specimen we digitized 47 landmarks and 144 semilandmarks on ridge curves and the external neurocranial surface. The relation between these two species' growth trajectories is too complex for a simple summary in terms of global heterochrony. PMID:15646279

  3. ALTEA calibration

    NASA Astrophysics Data System (ADS)

    Zaconte, V.; Altea Team

    The ALTEA project is aimed at studying the possible functional damage to the central nervous system (CNS) due to particle radiation in the space environment. The project is an international and multi-disciplinary collaboration. The ALTEA facility is a helmet-shaped device that will concurrently study the passage of cosmic radiation through the brain, the functional status of the visual system and the electrophysiological dynamics of the cortical activity. The basic instrumentation is composed of six active particle telescopes, one ElectroEncephaloGraph (EEG), a visual stimulator and a pushbutton. The telescopes are able to detect the passage of each particle, measuring its energy, trajectory and the energy released into the brain, and identifying the nuclear species. The EEG and the visual stimulator are able to measure the functional status of the visual system and the cortical electrophysiological activity, and to look for a correlation between incident particles, brain activity and light flash perceptions. These basic instruments can be used separately or in any combination, permitting several different experiments. ALTEA is scheduled to fly to the International Space Station (ISS) on November 15th, 2004. In this paper the calibration of the Flight Model of the silicon telescopes (Silicon Detector Units - SDUs) is shown. These measurements were taken at the GSI heavy-ion accelerator in Darmstadt. A first calibration was carried out in November 2003 on the SDU-FM1 using C nuclei at different energies: 100, 150, 400 and 600 MeV/n. We performed a complete beam scan of the SDU-FM1 to check the functionality and homogeneity of all strips of the silicon detector planes; for each beam energy we collected data to achieve good statistics, and finally we placed two different thicknesses of aluminium and Plexiglas in front of the detector in order to study fragmentation. This test was carried out with a Test Equipment simulating the Digital Acquisition Unit (DAU). We are scheduled to

  4. Local hadron calibration with ATLAS

    NASA Astrophysics Data System (ADS)

    Giovannini, Paola; ATLAS Liquid Argon Calorimeter Group

    2011-04-01

    The method of Local Hadron Calibration is used in ATLAS as one of the two major calibration schemes for the reconstruction of jets and missing transverse energy. The method starts from noise-suppressed clusters and corrects them for non-compensation effects and for losses due to the noise threshold and dead material. Jets are reconstructed using the calibrated clusters and are then corrected for out-of-cone effects. The performance of the corrections applied to the calorimeter clusters is tested with detailed GEANT4 information. Results obtained with this procedure are discussed both for single-pion simulations and for di-jet simulations. The calibration scheme is validated on data by comparing the calibrated cluster energy in data with Monte Carlo simulations. Preliminary results obtained with GeV collision data are presented. The agreement between data and Monte Carlo is within 5% for the final cluster scale.

  5. Nested Taylor decomposition in multivariate function decomposition

    NASA Astrophysics Data System (ADS)

    Baykara, N. A.; Gürvit, Ercan

    2014-12-01

    The Fluctuationlessness approximation applied to the remainder term of a Taylor decomposition expressed in integral form has already been used in many articles. Some forms of multi-point Taylor expansion have also been considered in some articles. This work is a combination of these: the Taylor decomposition of a function is taken with the remainder expressed in integral form. Then the integrand is decomposed into a Taylor series again, not necessarily around the same point as the first decomposition, and a second remainder is obtained. After the necessary changes of variables are taken into consideration and the integration limits are converted to the universal [0,1] interval, a multiple integration system formed by a multivariate function is obtained. It is then intended to apply the Fluctuationlessness approximation to each of these integrals one by one and to obtain better results compared with the single-node Taylor decomposition to which Fluctuationlessness is applied.
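
    For reference, the single-node Taylor decomposition with integral remainder on which the nesting is built has the standard form

        f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}\,(x-a)^k
               + \frac{1}{n!} \int_a^x (x-t)^n f^{(n+1)}(t)\, dt,

    and the nested scheme expands the integrand of the remainder in a second Taylor series, possibly around a different point, before mapping all integration limits to [0,1].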

  6. Linear and Nonlinear Calibration Methods for Predicting Mechanical Properties of Polypropylene Pellets Using Raman Spectroscopy.

    PubMed

    Banquet-Terán, Julio; Johnson-Restrepo, Boris; Hernández-Morelo, Alveiro; Ropero, Jorge; Fontalvo-Gomez, Miriam; Romañach, Rodolfo J

    2016-07-01

    A nondestructive and faster methodology to quantify mechanical properties of polypropylene (PP) pellets, obtained from an industrial plant, was developed with Raman spectroscopy. Raman spectra were obtained from several types of samples such as homopolymer PP, random ethylene-propylene copolymer, and impact ethylene-propylene copolymer. Multivariate calibration models were developed by relating the changes in the Raman spectra to mechanical properties determined by ASTM tests (Young's traction modulus, tensile strength at yield, elongation at yield on traction, and flexural modulus at 1% secant). Several strategies were evaluated to build robust models, including the use of preprocessing methods (baseline correction, vector normalization, de-trending, and standard normal variate), selecting the best subset of wavelengths to model the property response, and discarding irrelevant variables by applying a genetic algorithm (GA). Linear multivariate models such as partial least squares regression (PLS) and PLS with a genetic algorithm (GA-PLS) were investigated, while nonlinear models were implemented with an artificial neural network (ANN) preceded by GA (GA-ANN). The best multivariate calibration models were obtained when a combination of genetic algorithms and artificial neural networks was used on the Raman spectral data, with relative standard errors (%RSE) from 0.17 to 0.41% for the training and 0.42 to 0.88% for the validation data sets. PMID:27287847

  7. Definition of the limit of quantification in the presence of instrumental and non-instrumental errors. Comparison among various definitions applied to the calibration of zinc by inductively coupled plasma-mass spectrometry

    NASA Astrophysics Data System (ADS)

    Badocco, Denis; Lavagnini, Irma; Mondin, Andrea; Favaro, Gabriella; Pastore, Paolo

    2015-12-01

    A definition of the limit of quantification (LOQ) in the presence of instrumental and non-instrumental errors is proposed. It is theoretically defined by combining the two-component variance regression and LOQ schemas already present in the literature, and it is applied to the calibration of zinc by the ICP-MS technique. At low concentration levels, the two-component variance LOQ definition should always be used, above all when a clean room is not available. Three LOQ definitions were considered: one in the concentration domain and two in the signal domain. The LOQ computed in the concentration domain, proposed by Currie, was completed by adding the third-order terms in the Taylor expansion, because they are of the same order of magnitude as the second-order ones and therefore cannot be neglected. In this context, the error propagation was simplified by eliminating the correlation contributions through the use of independent random variables. Among the signal-domain definitions, particular attention was devoted to the recently proposed approach based on at least one significant digit in the measurement. The corresponding LOQ values turned out to be very large, preventing quantitative analysis. It was found that the Currie schemas in the signal and concentration domains gave similar LOQ values, but the former formulation is to be preferred as it is more easily computable.
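
    A minimal illustration of how a two-component variance model changes the LOQ (the paper's full treatment adds third-order Taylor terms and ICP-MS specifics): with signal standard deviation

        \sigma(c) = \sqrt{\sigma_0^2 + (\eta c)^2}

    and Currie's requirement c_Q = k_Q\,\sigma(c_Q) with k_Q = 10, solving for c_Q gives

        c_Q = \frac{k_Q \sigma_0}{\sqrt{1 - k_Q^2 \eta^2}}, \qquad k_Q \eta < 1,

    so a proportional (non-instrumental) error component \eta inflates the LOQ well beyond the homoscedastic value k_Q \sigma_0.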

  8. TIME CALIBRATED OSCILLOSCOPE SWEEP

    DOEpatents

    Owren, H.M.; Johnson, B.M.; Smith, V.L.

    1958-04-22

    The time calibrator of an electric signal displayed on an oscilloscope is described. In contrast to the conventional technique of using time-calibrated divisions on the face of the oscilloscope, this invention provides means for directly superimposing equal time spaced markers upon a signal displayed upon an oscilloscope. More explicitly, the present invention includes generally a generator for developing a linear saw-tooth voltage and a circuit for combining a high-frequency sinusoidal voltage of a suitable amplitude and frequency with the saw-tooth voltage to produce a resultant sweep deflection voltage having a wave shape which is substantially linear with respect to time between equal time spaced incremental plateau regions occurring once each cycle of the sinusoidal voltage. The foregoing sweep voltage when applied to the horizontal deflection plates in combination with a signal to be observed applied to the vertical deflection plates of a cathode ray oscilloscope produces an image on the viewing screen which is essentially a display of the signal to be observed with respect to time. Intensified spots, or certain other conspicuous indications corresponding to the equal time spaced plateau regions of said sweep voltage, appear superimposed upon said displayed signal, which indications are therefore suitable for direct time calibration purposes.

  9. Parameter Sensitivity in Multivariate Methods

    ERIC Educational Resources Information Center

    Green, Bert F., Jr.

    1977-01-01

    Interpretation of multivariate models requires knowing how much the fit of the model is impaired by changes in the parameters. The relation of parameter change to loss of goodness of fit can be called parameter sensitivity. Formulas are presented for assessing the sensitivity of multiple regression and principal component weights. (Author/JKS)

  10. Multivariate Model of Infant Competence.

    ERIC Educational Resources Information Center

    Kierscht, Marcia Selland; Vietze, Peter M.

    This paper describes a multivariate model of early infant competence formulated from variables representing infant-environment transaction including: birthweight, habituation index, personality ratings of infant social orientation and task orientation, ratings of maternal responsiveness to infant distress and social signals, and observational…

  11. A "Model" Multivariable Calculus Course.

    ERIC Educational Resources Information Center

    Beckmann, Charlene E.; Schlicker, Steven J.

    1999-01-01

    Describes a rich, investigative approach to multivariable calculus. Introduces a project in which students construct physical models of surfaces that represent real-life applications of their choice. The models, along with student-selected datasets, serve as vehicles to study most of the concepts of the course from both continuous and discrete…

  12. Multichannel hierarchical image classification using multivariate copulas

    NASA Astrophysics Data System (ADS)

    Voisin, Aurélie; Krylov, Vladimir A.; Moser, Gabriele; Serpico, Sebastiano B.; Zerubia, Josiane

    2012-03-01

    This paper focuses on the classification of multichannel images. The proposed supervised Bayesian classification method applied to histological (medical) optical images and to remote sensing (optical and synthetic aperture radar) imagery consists of two steps. The first step introduces the joint statistical modeling of the coregistered input images. For each class and each input channel, the class-conditional marginal probability density functions are estimated by finite mixtures of well-chosen parametric families. For optical imagery, the normal distribution is a well-known model. For radar imagery, we have selected generalized gamma, log-normal, Nakagami and Weibull distributions. Next, the multivariate d-dimensional Clayton copula, where d can be interpreted as the number of input channels, is applied to estimate multivariate joint class-conditional statistics. As a second step, we plug the estimated joint probability density functions into a hierarchical Markovian model based on a quadtree structure. Multiscale features are extracted by discrete wavelet transforms, or by using input multiresolution data. To obtain the classification map, we integrate an exact estimator of the marginal posterior mode.
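
    For reference, the d-dimensional Clayton copula named above has the standard form

        C(u_1, \dots, u_d) = \Big( \sum_{i=1}^{d} u_i^{-\theta} - d + 1 \Big)^{-1/\theta}, \qquad \theta > 0,

    which couples the per-channel marginal CDFs u_i = F_i(x_i) into a single joint class-conditional distribution in accordance with Sklar's theorem.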

  13. Fresh Biomass Estimation in Heterogeneous Grassland Using Hyperspectral Measurements and Multivariate Statistical Analysis

    NASA Astrophysics Data System (ADS)

    Darvishzadeh, R.; Skidmore, A. K.; Mirzaie, M.; Atzberger, C.; Schlerf, M.

    2014-12-01

    Accurate estimation of grassland biomass at peak productivity can provide crucial information regarding the functioning and productivity of rangelands. Hyperspectral remote sensing has proved to be valuable for the estimation of vegetation biophysical parameters such as biomass using different statistical techniques. However, in statistical analysis of hyperspectral data, multicollinearity is a common problem due to the large number of correlated hyperspectral reflectance measurements. The aim of this study was to examine the prospect of above-ground biomass estimation in a heterogeneous Mediterranean rangeland employing multivariate calibration methods. Canopy spectral measurements were made in the field using a GER 3700 spectroradiometer, along with concomitant in situ measurements of above-ground biomass for 170 sample plots. Multivariate calibrations including partial least squares regression (PLSR), principal component regression (PCR), and Least-Squares Support Vector Machines (LS-SVM) were used to estimate the above-ground biomass. The prediction accuracy of the multivariate calibration methods was assessed using cross-validated R2 and RMSE. The best model performance was obtained using LS-SVM and then PLSR, both calibrated with the first-derivative reflectance dataset (R2cv = 0.88 and 0.86, RMSEcv = 1.15 and 1.07, respectively). The weakest prediction accuracy appeared when PCR was used (R2cv = 0.31 and RMSEcv = 2.48). The obtained results highlight the importance of multivariate calibration methods for biomass estimation when hyperspectral data are used.
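
    The "first derivative reflectance" inputs behind the best models are typically obtained with a smoothing derivative filter; a minimal sketch on synthetic canopy spectra, with window and polynomial settings chosen purely for illustration:

        # First-derivative spectra via Savitzky-Golay filtering.
        import numpy as np
        from scipy.signal import savgol_filter

        rng = np.random.default_rng(6)
        R = rng.normal(size=(170, 600))   # 170 plot spectra x 600 bands (synthetic)
        dR = savgol_filter(R, window_length=11, polyorder=2, deriv=1, axis=1)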

  14. Extracting the MESA SR4000 calibrations

    NASA Astrophysics Data System (ADS)

    Charleston, Sean A.; Dorrington, Adrian A.; Streeter, Lee; Cree, Michael J.

    2015-05-01

    Time-of-flight range imaging cameras are capable of acquiring depth images of a scene. Some algorithms require these cameras to be run in "raw mode", where any calibrations from the off-the-shelf manufacturers are lost. The calibration of the MESA SR4000 is herein investigated, with an attempt to reconstruct the full calibration. Possession of the factory calibration enables calibrated data to be acquired and manipulated even in "raw mode." This work is motivated by the problem of motion correction, in which the calibration must be separated into component parts to be applied at different stages in the algorithm. There are also other applications, in which multiple frequencies are required, such as multipath interference correction. The other frequencies can be calibrated in a similar way, using the factory calibration as a base. A novel technique for capturing the calibration data is described; a retro-reflector is used on a moving platform, which acts as a point source at a distance, resulting in planar waves on the sensor. A number of calibrations are retrieved from the camera, and are then modelled and compared to the factory calibration. When comparing the factory calibration to both the "raw mode" data and the calibration described herein, a root mean squared error improvement of 51.3 mm was seen, with a standard deviation improvement of 34.9 mm.

  15. An uncertain journey around the tails of multivariate hydrological distributions

    NASA Astrophysics Data System (ADS)

    Serinaldi, Francesco

    2013-10-01

    Moving from univariate to multivariate frequency analysis, this study extends the Klemeš' critique of the widespread belief that the increasingly refined mathematical structures of probability functions increase the accuracy and credibility of the extrapolated upper tails of the fitted distribution models. In particular, we discuss key aspects of multivariate frequency analysis applied to hydrological data such as the selection of multivariate design events (i.e., appropriate subsets or scenarios of multiplets that exhibit the same joint probability to be used in design applications) and the assessment of the corresponding uncertainty. Since these problems are often overlooked or treated separately, and sometimes confused, we attempt to clarify properties, advantages, shortcomings, and reliability of results of frequency analysis. We suggest a selection method of multivariate design events with prescribed joint probability based on simple Monte Carlo simulations that accounts for the uncertainty affecting the inference results and the multivariate extreme quantiles. It is also shown that the exploration of the p-level probability regions of a joint distribution returns a set of events that is a subset of the p-level scenarios resulting from an appropriate assessment of the sampling uncertainty, thus tending to overlook more extreme and potentially dangerous events with the same (uncertain) joint probability. Moreover, a quantitative assessment of the uncertainty of multivariate quantiles is provided by introducing the concept of joint confidence intervals. From an operational point of view, the simulated event sets describing the distribution of the multivariate p-level quantiles can be used to perform multivariate risk analysis under sampling uncertainty. As an example of the practical implications of this study, we analyze two case studies already presented in the literature.

  16. Multivariable PID control by decoupling

    NASA Astrophysics Data System (ADS)

    Garrido, Juan; Vázquez, Francisco; Morilla, Fernando

    2016-04-01

    This paper presents a new methodology to design multivariable proportional-integral-derivative (PID) controllers based on decoupling control. The method is presented for general n × n processes. In the design procedure, an ideal decoupling control with integral action is designed to minimise interactions. It depends on the desired open-loop processes that are specified according to realisability conditions and desired closed-loop performance specifications. These realisability conditions are stated and three common cases to define the open-loop processes are studied and proposed. Then, controller elements are approximated to PID structure. From a practical point of view, the wind-up problem is also considered and a new anti-wind-up scheme for multivariable PID controller is proposed. Comparisons with other works demonstrate the effectiveness of the methodology through the use of several simulation examples and an experimental lab process.

  17. Information extraction from multivariate images

    NASA Technical Reports Server (NTRS)

    Park, S. K.; Kegley, K. A.; Schiess, J. R.

    1986-01-01

    An overview of several multivariate image processing techniques is presented, with emphasis on techniques based upon the principal component transformation (PCT). Multiimages in various formats have a multivariate pixel value associated with each pixel location, which has been scaled and quantized into a gray-level vector; bivariate statistics measure the extent to which two component images are correlated. The PCT of a multiimage decorrelates the multiimage to reduce its dimensionality and reveal its intercomponent dependencies when some off-diagonal elements are not small; for the purposes of display, the principal component images must be postprocessed into multiimage format. The principal component analysis of a multiimage is a statistical analysis based upon the PCT whose primary application is to determine the intrinsic component dimensionality of the multiimage. Computational considerations are also discussed.
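
    A minimal sketch of the PCT itself: stack the band values of each pixel, then rotate onto the eigenvectors of the band covariance matrix so the component images are decorrelated and ordered by variance. The three-band synthetic image is an illustrative assumption.

        # Principal component transformation of a multiimage.
        import numpy as np

        rng = np.random.default_rng(7)
        img = rng.normal(size=(64, 64, 3))        # rows x cols x bands (synthetic)
        pixels = img.reshape(-1, 3)
        pixels_c = pixels - pixels.mean(axis=0)   # center each band

        cov = np.cov(pixels_c, rowvar=False)      # band covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
        pc = pixels_c @ eigvecs[:, ::-1]          # components, max variance first
        pc_img = pc.reshape(64, 64, 3)            # principal component images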

  18. Muon Energy Calibration of the MINOS Detectors

    SciTech Connect

    Miyagawa, Paul S.

    2004-09-01

    MINOS is a long-baseline neutrino oscillation experiment designed to search for conclusive evidence of neutrino oscillations and to measure the oscillation parameters precisely. MINOS comprises two iron tracking calorimeters located at Fermilab and Soudan. The Calibration Detector at CERN is a third MINOS detector used as part of the detector response calibration programme. A correct energy calibration between these detectors is crucial for the accurate measurement of oscillation parameters. This thesis presents a calibration developed to produce a uniform response within a detector using cosmic muons. Reconstruction of tracks in cosmic ray data is discussed. This data is utilized to calculate calibration constants for each readout channel of the Calibration Detector. These constants have an average statistical error of 1.8%. The consistency of the constants is demonstrated both within a single run and between runs separated by a few days. Results are presented from applying the calibration to test beam particles measured by the Calibration Detector. The responses are calibrated to within 1.8% systematic error. The potential impact of the calibration on the measurement of oscillation parameters by MINOS is also investigated. Applying the calibration reduces the errors in the measured parameters by approximately 10%, which is equivalent to increasing the amount of data by 20%.

  19. Calibration of sound calibrators: an overview

    NASA Astrophysics Data System (ADS)

    Milhomem, T. A. B.; Soares, Z. M. D.

    2016-07-01

    This paper presents an overview of the calibration of sound calibrators. Initially, traditional calibration methods are presented. Next, the international standard IEC 60942 is discussed, emphasizing parameters, target measurement uncertainty, and criteria for conformance to the requirements of the standard. Last, comparisons among Regional Metrology Organizations are summarized.

  20. Multivariate-normality goodness-of-fit tests

    NASA Technical Reports Server (NTRS)

    Falls, L. W.; Crutcher, H. L.

    1977-01-01

    Computer program applies chi-square Pearson test to multivariate statistics for application in any field in which data of two or more variables (dimensions) are sampled for statistical purposes. Program handles dimensions two through five, with up to a thousand data sets.

  1. Multivariate classification of infrared spectra of cell and tissue samples

    DOEpatents

    Haaland, David M.; Jones, Howland D. T.; Thomas, Edward V.

    1997-01-01

    Multivariate classification techniques are applied to spectra from cell and tissue samples irradiated with infrared radiation to determine if the samples are normal or abnormal (cancerous). Mid- and near-infrared radiation can be used for in vivo and in vitro classifications using different wavelengths.

  2. Univariate Analysis of Multivariate Outcomes in Educational Psychology.

    ERIC Educational Resources Information Center

    Hubble, L. M.

    1984-01-01

    The author examined the prevalence of multiple operational definitions of outcome constructs and an estimate of the incidence of Type I error rates when univariate procedures were applied to multiple variables in educational psychology. Multiple operational definitions of constructs were advocated and wider use of multivariate analysis was…

  3. Calibration age and quartet divergence date estimation.

    PubMed

    Brochu, Christopher A

    2004-06-01

    The date of a single divergence point--between living alligators and crocodiles--was estimated with quartet dating using calibrations of widely divergent ages. For five mitochondrial sequence datasets, there is a clear relationship between calibration age and quartet estimate--quartets based on two relatively recent calibrations support younger divergence estimates than do quartets based on two older calibrations. Some of the estimates supported by young quartets are impossibly young and exclude the first appearance of the group in the fossil record as too old. The older estimates--those based on two relatively old calibrations--may be overestimates, and those based on one old and one recent calibration support divergence estimates very close to fossil data. This suggests that quartet dating methods may be most effective when calibrations are applied from different parts of a clade's history. PMID:15266985

  4. Multivariate Strategies in Functional Magnetic Resonance Imaging

    ERIC Educational Resources Information Center

    Hansen, Lars Kai

    2007-01-01

    We discuss aspects of multivariate fMRI modeling, including the statistical evaluation of multivariate models and means for dimensional reduction. In a case study we analyze linear and non-linear dimensional reduction tools in the context of a "mind reading" predictive multivariate fMRI model.

  5. Steady-state decoupling and design of linear multivariable systems

    NASA Technical Reports Server (NTRS)

    Thaler, G. J.

    1974-01-01

    A constructive criterion for decoupling the steady states of a linear time-invariant multivariable system is presented. This criterion consists of a set of inequalities which, when satisfied, will cause the steady states of a system to be decoupled. Stability analysis and a new design technique for such systems are given. A new and simple connection between single-loop and multivariable cases is found. These results are then applied to the compensation design for NASA STOL C-8A aircraft. Both steady-state decoupling and stability are justified through computer simulations.
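
    The steady-state part of the idea can be made concrete with a constant precompensator: cascading the inverse of the plant's DC-gain matrix makes the steady-state transfer diagonal. A toy numeric sketch (the 2x2 gain matrix is invented; the paper's inequality criterion is more general):

        # Static decoupling via the inverse DC gain.
        import numpy as np

        G0 = np.array([[2.0, 0.5],
                       [0.4, 1.0]])     # plant DC-gain matrix G(0) (illustrative)
        D = np.linalg.inv(G0)           # constant precompensator
        print(G0 @ D)                   # ~ identity: steady states are decoupled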

  6. Multivariable quadratic synthesis of an advanced turbofan engine controller

    NASA Technical Reports Server (NTRS)

    Dehoff, R. L.; Hall, W. E., Jr.

    1978-01-01

    A digital controller for an advanced turbofan engine utilizing multivariate feedback is described. The theoretical background of locally linearized control synthesis is reviewed briefly. The application of linear quadratic regulator techniques to the practical control problem is presented. The design procedure has been applied to the F100 turbofan engine, and details of the structure of this system are explained. Selected results from simulations of the engine and controller are utilized to illustrate the operation of the system. It is shown that the general multivariable design procedure will produce practical and implementable controllers for modern, high-performance turbine engines.

  7. Design of feedforward controllers for multivariable plants

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    Simple methods for the design of feedforward controllers to achieve steady-state disturbance rejection and command tracking in stable multivariable plants are developed in this paper. The controllers are represented by simple and low-order transfer functions and are not based on reconstruction of the states of the commands and disturbances. For unstable plants, it is shown that the present method can be applied directly when an additional feedback controller is employed to stabilize the plant. The feedback and feedforward controllers do not affect each other and can be designed independently based on the open-loop plant to achieve stability, disturbance rejection and command tracking, respectively. Numerical examples are given for illustration.

  8. Bayesian Local Contamination Models for Multivariate Outliers

    PubMed Central

    Page, Garritt L.; Dunson, David B.

    2013-01-01

    In studies where data are generated from multiple locations or sources it is common for there to exist observations that are quite unlike the majority. Motivated by the application of establishing a reference value in an inter-laboratory setting when outlying labs are present, we propose a local contamination model that is able to accommodate unusual multivariate realizations in a flexible way. The proposed method models the process level of a hierarchical model using a mixture with a parametric component and a possibly nonparametric contamination. Much of the flexibility in the methodology is achieved by allowing varying random subsets of the elements in the lab-specific mean vectors to be allocated to the contamination component. Computational methods are developed and the methodology is compared to three other possible approaches using a simulation study. We apply the proposed method to a NIST/NOAA sponsored inter-laboratory study which motivated the methodological development. PMID:24363465

  9. Software For Multivariate Bayesian Classification

    NASA Technical Reports Server (NTRS)

    Saul, Ronald; Laird, Philip; Shelton, Robert

    1996-01-01

    PHD general-purpose classifier computer program. Uses Bayesian methods to classify vectors of real numbers, based on combination of statistical techniques that include multivariate density estimation, Parzen density kernels, and EM (Expectation Maximization) algorithm. By means of simple graphical interface, user trains classifier to recognize two or more classes of data and then uses it to identify new data. Written in ANSI C for Unix systems and optimized for online classification applications. Can be embedded in another program or run by itself using simple graphical user interface. Online help files make program easy to use.

  10. Scalable Software for Multivariate Integration on Hybrid Platforms

    NASA Astrophysics Data System (ADS)

    de Doncker, E.; Yuasa, F.; Kapenga, J.; Olagbemi, O.

    2015-09-01

    The paper describes the software infrastructure of the PARINT package for multivariate numerical integration, layered over a hybrid parallel environment with distributed memory computations (on MPI). The parallel problem distribution is typically performed on the region level in the adaptive partitioning procedure. Our objective has been to provide the end-user with state of the art problem solving power packaged as portable software. We will give test results of the multivariate ParInt engine, with significant speedups for a set of 3-loop Feynman integrals. An extrapolation with respect to the dimensional regularization parameter (ε) is applied to sequences of multivariate ParInt results Q(ε) to obtain the leading asymptotic expansion coefficients as ε → 0. This paper further introduces a novel method for a parallel computation of the Q(ε) sequence as the components of the integral of a vector function.

  11. Neuro-sliding mode multivariable control of a powered wheelchair.

    PubMed

    Nguyen, Nghia; Nguyen, Hung T; Su, Steven

    2008-01-01

    This paper proposes a neuro-sliding mode multivariable control approach for the control of a powered wheelchair system. In the first stage, a systematic decoupling technique is applied to the wheelchair system in order to reduce the multivariable control problem into two independent scalar control problems. Then two Neuro-Sliding Mode Controllers (NSMCs) are designed for these independent subsystems to guarantee system robustness under model uncertainties and unknown external disturbances. Both off-line and on-line trainings are involved in the second stage. Real-time experimental results confirm that robust performance for this multivariable wheelchair control system under model uncertainties and unknown external disturbances can indeed be achieved. PMID:19163456

  12. Bayesian Calibration of Microsimulation Models.

    PubMed

    Rutter, Carolyn M; Miglioretti, Diana L; Savarino, James E

    2009-12-01

    Microsimulation models that describe disease processes synthesize information from multiple sources and can be used to estimate the effects of screening and treatment on cancer incidence and mortality at a population level. These models are characterized by simulation of individual event histories for an idealized population of interest. Microsimulation models are complex and invariably include parameters that are not well informed by existing data. Therefore, a key component of model development is the choice of parameter values. Microsimulation model parameter values are selected to reproduce expected or known results through the process of model calibration. Calibration may be done by perturbing model parameters one at a time or by using a search algorithm. As an alternative, we propose a Bayesian method to calibrate microsimulation models that uses Markov chain Monte Carlo. We show that this approach converges to the target distribution and use a simulation study to demonstrate its finite-sample performance. Although computationally intensive, this approach has several advantages over previously proposed methods, including the use of statistical criteria to select parameter values, simultaneous calibration of multiple parameters to multiple data sources, incorporation of information via prior distributions, description of parameter identifiability, and the ability to obtain interval estimates of model parameters. We develop a microsimulation model for colorectal cancer and use our proposed method to calibrate model parameters. The microsimulation model provides a good fit to the calibration data. We find evidence that some parameters are identified primarily through prior distributions. Our results underscore the need to incorporate multiple sources of variability (i.e., due to calibration data, unknown parameters, and estimated parameters and predicted values) when calibrating and applying microsimulation models. PMID:20076767
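
    A minimal Metropolis-Hastings sketch of the idea: propose a model parameter, run the (here trivial) simulator, and accept moves according to prior and calibration-data likelihood. The one-parameter simulator, prior, and proposal scale are illustrative stand-ins for a real microsimulation model.

        # Bayesian calibration of a toy simulator by Markov chain Monte Carlo.
        import numpy as np

        rng = np.random.default_rng(8)
        target, sigma = 0.30, 0.02            # calibration datum and its uncertainty

        def simulate(theta):                  # stand-in for the microsimulation output
            return 1.0 / (1.0 + np.exp(-theta))

        def log_post(theta):                  # N(0,1) prior + Gaussian likelihood
            return -0.5 * theta**2 - 0.5 * ((simulate(theta) - target) / sigma) ** 2

        theta, draws = 0.0, []
        for _ in range(5000):
            prop = theta + rng.normal(0, 0.2)
            if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
                theta = prop                  # accept proposal
            draws.append(theta)
        posterior = np.array(draws[1000:])    # interval estimates come from these draws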

  13. Bayesian Calibration of Microsimulation Models

    PubMed Central

    Rutter, Carolyn M.; Miglioretti, Diana L.; Savarino, James E.

    2009-01-01

    Microsimulation models that describe disease processes synthesize information from multiple sources and can be used to estimate the effects of screening and treatment on cancer incidence and mortality at a population level. These models are characterized by simulation of individual event histories for an idealized population of interest. Microsimulation models are complex and invariably include parameters that are not well informed by existing data. Therefore, a key component of model development is the choice of parameter values. Microsimulation model parameter values are selected to reproduce expected or known results through the process of model calibration. Calibration may be done by perturbing model parameters one at a time or by using a search algorithm. As an alternative, we propose a Bayesian method to calibrate microsimulation models that uses Markov chain Monte Carlo. We show that this approach converges to the target distribution and use a simulation study to demonstrate its finite-sample performance. Although computationally intensive, this approach has several advantages over previously proposed methods, including the use of statistical criteria to select parameter values, simultaneous calibration of multiple parameters to multiple data sources, incorporation of information via prior distributions, description of parameter identifiability, and the ability to obtain interval estimates of model parameters. We develop a microsimulation model for colorectal cancer and use our proposed method to calibrate model parameters. The microsimulation model provides a good fit to the calibration data. We find evidence that some parameters are identified primarily through prior distributions. Our results underscore the need to incorporate multiple sources of variability (i.e., due to calibration data, unknown parameters, and estimated parameters and predicted values) when calibrating and applying microsimulation models. PMID:20076767

  14. Multivariate image processing technique for noninvasive glucose sensing

    NASA Astrophysics Data System (ADS)

    Webb, Anthony J.; Cameron, Brent D.

    2010-02-01

    A potential noninvasive glucose sensing technique was investigated for application towards in vivo glucose monitoring for individuals afflicted with diabetes mellitus. Three-dimensional ray tracing simulations using a realistic iris pattern integrated into an advanced human eye model are reported for physiological glucose concentrations ranging from 0 to 500 mg/dL. The anterior chamber of the human eye contains a clear fluid known as the aqueous humor. The optical refractive index of the aqueous humor varies on the order of 1.5×10⁻⁴ for a change in glucose concentration of 100 mg/dL. The simulation data were analyzed with a developed multivariate chemometrics procedure that utilizes iris-based images to form a calibration model. Results from these simulations show considerable potential for use of the developed method in the prediction of glucose. For further demonstration, an in vitro eye model was developed to validate the computer-based modeling technique. In these experiments, a realistic iris pattern was placed in an analog eye model in which the glucose concentration within the fluid representing the aqueous humor was varied. A series of high-resolution digital images were acquired using an optical imaging system. These images were then used to form an in vitro calibration model utilizing the same multivariate chemometric technique demonstrated in the 3-D optical simulations. In general, the developed method exhibits considerable applicability towards its use as an in vivo platform for the noninvasive monitoring of physiological glucose concentration.

  15. Rice Seed Cultivar Identification Using Near-Infrared Hyperspectral Imaging and Multivariate Data Analysis

    PubMed Central

    Kong, Wenwen; Zhang, Chu; Liu, Fei; Nie, Pengcheng; He, Yong

    2013-01-01

    A near-infrared (NIR) hyperspectral imaging system was developed in this study. NIR hyperspectral imaging combined with multivariate data analysis was applied to identify rice seed cultivars. Spectral data were extracted from the hyperspectral images. Along with Partial Least Squares Discriminant Analysis (PLS-DA), Soft Independent Modeling of Class Analogy (SIMCA), the K-Nearest Neighbor Algorithm (KNN) and Support Vector Machine (SVM), a novel machine learning algorithm called Random Forest (RF) was applied in this study. Spectra from 1,039 nm to 1,612 nm were used as full spectra to build classification models. PLS-DA and KNN models obtained over 80% classification accuracy, and SIMCA, SVM and RF models obtained 100% classification accuracy in both the calibration and prediction sets. Twelve optimal wavelengths were selected by the weighted regression coefficients of the PLS-DA model. Based on the optimal wavelengths, PLS-DA, KNN, SVM and RF models were built. All optimal wavelengths-based models (except PLS-DA) produced classification rates over 80%. The performance of the full spectra-based models was better than that of the optimal wavelengths-based models. The overall results indicated that hyperspectral imaging could be used for rice seed cultivar identification, and RF is an effective classification technique. PMID:23857260

  16. Image based autodocking without calibration

    SciTech Connect

    Sutanto, H.; Sharma, R.; Varma, V.

    1997-03-01

    The calibration requirements for visual servoing can make it difficult to apply in many real-world situations. One approach to image-based visual servoing without calibration is to dynamically estimate the image Jacobian and use it as the basis for control. However, with the normal motion of a robot toward the goal, the estimation of the image Jacobian deteriorates over time. The authors propose the use of additional exploratory motion to considerably improve the estimation of the image Jacobian. They study the role of such exploratory motion in a visual servoing task. Simulations and experiments with a 6-DOF robot are used to verify the practical feasibility of the approach.
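
    A standard device for the dynamic estimation mentioned above (though not necessarily the authors' exact estimator) is the Broyden rank-one update, which corrects the image Jacobian after every motion using only the commanded joint move and the observed image move:

        # On-line image-Jacobian estimation via a Broyden update.
        import numpy as np

        def broyden_update(J, dq, dx):
            """J: current Jacobian estimate; dq: joint displacement; dx: image displacement."""
            dq = dq.reshape(-1, 1)
            dx = dx.reshape(-1, 1)
            return J + (dx - J @ dq) @ dq.T / float(dq.T @ dq)

    Exploratory motions enrich the set of (dq, dx) pairs, keeping the update well conditioned as the robot converges toward the goal.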

  17. Multivariable Burchnall-Chaundy theory.

    PubMed

    Previato, Emma

    2008-03-28

    Burchnall & Chaundy (Burchnall & Chaundy 1928 Proc. R. Soc. A 118, 557-583) classified the (rank 1) commutative subalgebras of the algebra of ordinary differential operators. To date, there is no such result for several variables. This paper presents the problem and the current state of the knowledge, together with an interpretation in differential Galois theory. It is known that the spectral variety of a multivariable commutative ring will not be associated to a KP-type hierarchy of deformations, but examples of related integrable equations were produced and are reviewed. Moreover, such an algebro-geometric interpretation is made to fit into A.N. Parshin's newer theory of commuting rings of partial pseudodifferential operators and KP-type hierarchies which uses higher local fields. PMID:17588865

  18. Assessing calibration of prognostic risk scores.

    PubMed

    Crowson, Cynthia S; Atkinson, Elizabeth J; Therneau, Terry M

    2016-08-01

    Current methods used to assess calibration are limited, particularly in the assessment of prognostic models. Methods for testing and visualizing calibration (e.g. the Hosmer-Lemeshow test and calibration slope) have been well thought out in the binary regression setting. However, extension of these methods to Cox models is less well known and could be improved. We describe a model-based framework for the assessment of calibration in the binary setting that provides natural extensions to the survival data setting. We show that Poisson regression models can be used to easily assess calibration in prognostic models. In addition, we show that a calibration test suggested for use in survival data has poor performance. Finally, we apply these methods to the problem of external validation of a risk score developed for the general population when assessed in a special patient population (i.e. patients with particular comorbidities, such as rheumatoid arthritis). PMID:23907781
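
    One simple Poisson-based check in the spirit of the paper is calibration-in-the-large: regress observed event counts on an intercept with the log of model-expected counts as an offset, so the exponentiated intercept estimates the observed-to-expected ratio. The decile grouping and synthetic counts below are illustrative assumptions.

        # Poisson GLM assessment of calibration (observed vs. expected events).
        import numpy as np
        import statsmodels.api as sm

        expected = np.array([5., 8., 12., 20., 30., 45., 60., 80., 110., 150.])
        observed = np.random.default_rng(9).poisson(1.2 * expected)  # miscalibrated data

        fit = sm.GLM(observed, np.ones((10, 1)),
                     family=sm.families.Poisson(),
                     offset=np.log(expected)).fit()
        oe_ratio = np.exp(fit.params[0])   # ~1 indicates good overall calibration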

  19. Development and validation of a dynamic range-extended LC-MS/MS multi-analyte method for 11 different postmortem matrices for redistribution studies applying solvent calibration and additional (13)C isotope monitoring.

    PubMed

    Staeheli, Sandra N; Poetzsch, Michael; Kraemer, Thomas; Steuer, Andrea E

    2015-11-01

    Postmortem redistribution (PMR) is one of numerous problems in postmortem toxicology making correct interpretation of measured drug concentrations difficult or even impossible. Time-dependent PMR in peripheral blood and especially in tissue samples is still under-explored. For further investigation, an easy applicable method for the simultaneous quantitation of over 80 forensically relevant compounds in 11 different postmortem matrices should be developed and validated overcoming the challenges of high inter-matrix and intra-matrix concentration variances. Biopsy samples (20 mg) or body fluids (20 μL) were spiked with an analyte mix and deuterated internal standards, extracted by liquid-liquid extraction, and analyzed by liquid chromatography tandem mass spectrometry (LC-MS/MS). For highest applicability, an easy solvent calibration was used. Furthermore, time-consuming dilution of high concentration samples showing detector saturation was circumvented by two overlapping calibration curves using (12)C isotope monitoring for low concentrations and (13)C isotopes for high concentration, respectively. The method was validated according to international guidelines with modifications. Matrix effects and extraction efficiency were strongly matrix and analyte dependent. In general, brain and adipose tissue produced the highest matrix effects, whereas cerebrospinal fluid showed the least matrix effects. Accuracy and precision results were rather matrix independent with some exceptions. Despite using an external solvent calibration, the accuracy requirements were fulfilled for 66 to 81 % of the 83 analytes. Depending on the matrix, 75-93 % of the analytes showed intra-day precisions at <20 %. (12)C and (13)C calibrations gave comparable results and proved to be a useful tool in expanding the dynamic range. PMID:26396081

  20. Hydraulic Calibrator for Strain-Gauge Balances

    NASA Technical Reports Server (NTRS)

    Skelly, Kenneth; Ballard, John

    1987-01-01

    Instrument for calibrating strain-gauge balances uses hydraulic actuators and load cells. Eliminates effects of nonparallelism, nonperpendicularity, and changes of cable directions upon vector sums of applied forces. Errors due to cable stretching, pulley friction, and weight inaccuracy also eliminated. New instrument rugged and transportable. Set up quickly. Developed to apply known loads to wind-tunnel models with encapsulated strain-gauge balances, also adapted for use in calibrating dynamometers, load sensors on machinery and laboratory instruments.

  1. Slab coupled optical fiber sensor calibration

    NASA Astrophysics Data System (ADS)

    Whitaker, B.; Noren, J.; Chadderdon, S.; Wang, W.; Forber, R.; Selfridge, R.; Schultz, S.

    2013-02-01

    This paper presents a method for calibrating slab coupled optical fiber sensors (SCOS). An automated system is presented for selecting the optimal laser wavelength for use in SCOS interrogation. The wavelength calibration technique uses a computer sound card both to create the applied electric field and to detect the signal. The method used to determine the ratio between the measured SCOS signal and the applied electric field is also described, along with a demonstration in which the calibrated SCOS measures the dielectric breakdown of air.

  2. Chemiluminescence-based multivariate sensing of local equivalence ratios in premixed atmospheric methane-air flames

    SciTech Connect

    Tripathi, Markandey M.; Krishnan, Sundar R.; Srinivasan, Kalyan K.; Yueh, Fang-Yu; Singh, Jagdish P.

    2011-09-07

    Chemiluminescence emissions from OH*, CH*, C2, and CO2 formed within the reaction zone of premixed flames depend upon the fuel-air equivalence ratio in the burning mixture. In the present paper, a new partial least squares regression (PLS-R) based multivariate sensing methodology is investigated and compared with an OH*/CH* intensity-ratio-based calibration model for sensing equivalence ratio in atmospheric methane-air premixed flames. Five replications of spectral data at nine different equivalence ratios ranging from 0.73 to 1.48 were used in the calibration of both models. During model development, the PLS-R model was initially validated against the calibration data set using the leave-one-out cross-validation technique. Since the PLS-R model used the entire raw spectral intensities, it did not need the nonlinear background subtraction of CO2 emission that is required for typical OH*/CH* intensity-ratio calibrations. An unbiased spectral data set (not used in the PLS-R model development), for 28 different equivalence ratio conditions ranging from 0.71 to 1.67, was used to predict equivalence ratios using the PLS-R and the intensity-ratio calibration models. It was found that the equivalence ratios predicted with the PLS-R based multivariate calibration model matched the experimentally measured equivalence ratios within 7%, whereas the OH*/CH* intensity-ratio calibration grossly underpredicted equivalence ratios in comparison to measured values, especially under rich conditions (equivalence ratio > 1.2). The practical implications of the chemiluminescence-based multivariate equivalence ratio sensing methodology are also discussed.
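
    As a rough illustration of the PLS-R workflow described above (not the authors' implementation), the sketch below fits a PLS regression from raw spectra to equivalence ratio and evaluates it with leave-one-out cross-validation, mirroring the 9-ratio by 5-replicate calibration design; the spectra are synthetic stand-ins.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(1)
    phi = np.repeat(np.linspace(0.73, 1.48, 9), 5)   # 9 ratios x 5 replicates
    # Synthetic "spectra": noise plus a composition-dependent trend.
    X = rng.normal(size=(45, 300)) + np.outer(phi, np.linspace(0, 1, 300))

    # Leave-one-out cross-validation of the PLS-R calibration.
    pls = PLSRegression(n_components=5)
    phi_cv = cross_val_predict(pls, X, phi, cv=LeaveOneOut()).ravel()
    print("max CV error:", np.abs(phi_cv - phi).max())

    pls.fit(X, phi)   # final model, usable on unseen spectra via pls.predict
    ```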

  3. Multivariable control altitude demonstration on the F100 turbofan engine

    NASA Technical Reports Server (NTRS)

    Lehtinen, B.; Dehoff, R. L.; Hackney, R. D.

    1979-01-01

    The control system designed under the Multivariable Control Synthesis (MVCS) program for the F100 turbofan engine is described. The MVCS program applied linear quadratic regulator (LQR) synthesis methods to the design of a multivariable engine control system, to obtain enhanced performance from cross-coupled controls, maximum use of engine variable geometry, and a systematic design procedure that can be applied efficiently to new engine systems. The basic components of the control system are described: a reference value generator for deriving a desired equilibrium state and an approximate control vector, a transition model to produce compatible reference-point trajectories during gross transients, gain schedules for producing feedback terms appropriate to the flight condition, and integral switching logic to produce acceptable steady-state performance without exceeding engine operating limits. Details of the F100 implementation are presented. The engine altitude test phase of the MVCS program is described, and engine responses at a variety of test operating points and power transitions are presented.

  4. Automated Camera Calibration

    NASA Technical Reports Server (NTRS)

    Chen, Siqi; Cheng, Yang; Willson, Reg

    2006-01-01

    Automated Camera Calibration (ACAL) is a computer program that automates the generation of calibration data for camera models used in machine vision systems. Machine vision camera models describe the mapping between points in three-dimensional (3D) space in front of the camera and the corresponding points in two-dimensional (2D) space in the camera's image. Calibrating a camera model requires a set of calibration data containing known 3D-to-2D point correspondences for the given camera system. Generating calibration data typically involves taking images of a calibration target where the 3D locations of the target's fiducial marks are known, and then measuring the 2D locations of the fiducial marks in the images. ACAL automates the analysis of calibration target images and greatly speeds the overall calibration process.
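
    ACAL automates a standard pipeline: detect fiducial marks with known 3D positions, measure their 2D image locations, and fit a camera model. The sketch below does the same with OpenCV's chessboard tools as a stand-in (not ACAL's actual algorithm); the pattern size and file names are hypothetical.

    ```python
    import cv2
    import numpy as np

    # Hypothetical calibration-target image paths; replace with real captures.
    images = [f"target_{i:02d}.png" for i in range(10)]

    pattern = (9, 6)  # inner-corner grid of the chessboard target (assumed)
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # known 3D grid

    obj_points, img_points, size = [], [], None
    for path in images:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        if gray is None:
            continue
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:                               # measured 2D fiducial locations
            obj_points.append(objp)
            img_points.append(corners)
            size = gray.shape[::-1]

    # Fit the pinhole camera model from the 3D-to-2D correspondences.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, size, None, None)
    print("RMS reprojection error (px):", rms)
    ```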

  5. Another look at volume self-calibration: calibration and self-calibration within a pinhole model of Scheimpflug cameras

    NASA Astrophysics Data System (ADS)

    Cornic, Philippe; Illoul, Cédric; Cheminet, Adam; Le Besnerais, Guy; Champagnat, Frédéric; Le Sant, Yves; Leclaire, Benjamin

    2016-09-01

    We address calibration and self-calibration of tomographic PIV experiments within a pinhole model of cameras. A complete and explicit pinhole model of a camera equipped with a two-tilt-angle Scheimpflug adapter is presented. It is then used in a calibration procedure based on a freely moving calibration plate. While the resulting calibrations are accurate enough for Tomo-PIV, we confirm, through a simple experiment, that they are not stable in time, and illustrate how the pinhole framework can be used to provide a quantitative evaluation of geometrical drifts in the setup. We propose an original self-calibration method based on global optimization of the extrinsic parameters of the pinhole model. These methods are successfully applied to the tomographic PIV of an air-jet experiment. An unexpected by-product of our work is to show that volume self-calibration induces a change in the world frame coordinates. Provided the calibration drift is small, as generally observed in PIV, the bias on the estimated velocity field is negligible, but the absolute location cannot be accurately recovered using standard calibration data.

  6. Multivariate Time Series Similarity Searching

    PubMed Central

    Wang, Jimin; Zhu, Yuelong; Li, Shijin; Wan, Dingsheng; Zhang, Pengcheng

    2014-01-01

    Multivariate time series (MTS) datasets are very common in financial, multimedia, and hydrological fields. In this paper, a dimension-combination method is proposed to search for similar sequences in MTS data. First, the similarity of each single-dimension series is calculated; then the overall similarity of the MTS is obtained by synthesizing the single-dimension similarities with a weighted BORDA voting method. The dimension-combination method can reuse existing single-dimension similarity measures. Several experiments, using classification accuracy as the measure, were performed on six datasets from the UCI KDD Archive to validate the method. The results show the advantage of the approach compared to traditional similarity measures, such as Euclidean distance (ED), dynamic time warping (DTW), point distribution (PD), PCA similarity factor (SPCA), and extended Frobenius norm (Eros), for MTS datasets in some respects. Our experiments also demonstrate that no measure fits all datasets, and the proposed measure is a viable choice for similarity searches. PMID:24895665
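
    A minimal sketch of the dimension-combination idea as described in the abstract: compute a similarity ranking per dimension, then fuse the rankings by weighted Borda voting. Plain Euclidean distance stands in here for whatever single-dimension measure is chosen; the weights and data are invented.

    ```python
    import numpy as np

    def borda_search(query, candidates, weights):
        """query: (d, T); candidates: (n, d, T); weights: (d,)."""
        n, d, _ = candidates.shape
        scores = np.zeros(n)
        for j in range(d):
            dist = np.linalg.norm(candidates[:, j, :] - query[j], axis=1)
            ranks = dist.argsort().argsort()        # 0 = most similar in dim j
            scores += weights[j] * (n - 1 - ranks)  # weighted Borda points
        return scores.argsort()[::-1]               # best overall match first

    rng = np.random.default_rng(2)
    cands = rng.normal(size=(20, 3, 50))
    query = cands[7] + 0.01 * rng.normal(size=(3, 50))
    print(borda_search(query, cands, weights=np.array([0.5, 0.3, 0.2]))[:3])
    ```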

  7. Multivariate time series similarity searching.

    PubMed

    Wang, Jimin; Zhu, Yuelong; Li, Shijin; Wan, Dingsheng; Zhang, Pengcheng

    2014-01-01

    Multivariate time series (MTS) datasets are very common in financial, multimedia, and hydrological fields. In this paper, a dimension-combination method is proposed to search for similar sequences in MTS data. First, the similarity of each single-dimension series is calculated; then the overall similarity of the MTS is obtained by synthesizing the single-dimension similarities with a weighted BORDA voting method. The dimension-combination method can reuse existing single-dimension similarity measures. Several experiments, using classification accuracy as the measure, were performed on six datasets from the UCI KDD Archive to validate the method. The results show the advantage of the approach compared to traditional similarity measures, such as Euclidean distance (ED), dynamic time warping (DTW), point distribution (PD), PCA similarity factor (SPCA), and extended Frobenius norm (Eros), for MTS datasets in some respects. Our experiments also demonstrate that no measure fits all datasets, and the proposed measure is a viable choice for similarity searches. PMID:24895665

  8. Analytical multicollimator camera calibration

    USGS Publications Warehouse

    Tayman, W.P.

    1978-01-01

    Calibration with the U.S. Geological Survey multicollimator determines the calibrated focal length, the point of symmetry, the radial distortion referred to the point of symmetry, and the asymmetric characteristics of the camera lens. For this project, two cameras were calibrated, a Zeiss RMK A 15/23 and a Wild RC 8. Four test exposures were made with each camera. Results are tabulated for each exposure and averaged for each set. Copies of the standard USGS calibration reports are included. © 1978.

  9. Output feedback for linear multivariable systems with parameter uncertainty.

    NASA Technical Reports Server (NTRS)

    Basuthakur, S.; Knapp, C. H.

    1973-01-01

    A minimax design method is applied to the problem of obtaining an acceptable output feedback matrix for linear multivariable systems with parameter uncertainty. The result is a set of nonlinear matrix equations (similar to those obtained by Levine and Athans (1970)), which must be solved for the feedback matrix. An example illustrates the technique and the fact that better results are achieved for large parameter variation than with a purely nominal design.

  10. Estimating the decomposition of predictive information in multivariate systems.

    PubMed

    Faes, Luca; Kugiumtzis, Dimitris; Nollo, Giandomenico; Jurysta, Fabrice; Marinazzo, Daniele

    2015-03-01

    In the study of complex systems from observed multivariate time series, the evolution of one system of interest can be explained in terms of the information storage of the system itself and the information transfer from other interacting systems. We present a framework for the model-free estimation of information storage and information transfer computed as the terms composing the predictive information about the target of a multivariate dynamical process. The approach tackles the curse of dimensionality employing a nonuniform embedding scheme that selects progressively, among the past components of the multivariate process, only those that contribute most, in terms of conditional mutual information, to the present target process. Moreover, it computes all information-theoretic quantities using a nearest-neighbor technique designed to compensate for the bias due to the different dimensionality of individual entropy terms. The resulting estimators of prediction entropy, storage entropy, transfer entropy, and partial transfer entropy are tested on simulations of coupled linear stochastic and nonlinear deterministic dynamic processes, demonstrating the superiority of the proposed approach over the traditional estimators based on uniform embedding. The framework is then applied to multivariate physiologic time series, resulting in physiologically well-interpretable information decompositions of cardiovascular and cardiorespiratory interactions during head-up tilt and of joint brain-heart dynamics during sleep. PMID:25871169

  11. Estimating the decomposition of predictive information in multivariate systems

    NASA Astrophysics Data System (ADS)

    Faes, Luca; Kugiumtzis, Dimitris; Nollo, Giandomenico; Jurysta, Fabrice; Marinazzo, Daniele

    2015-03-01

    In the study of complex systems from observed multivariate time series, the evolution of one system of interest can be explained in terms of the information storage of the system itself and the information transfer from other interacting systems. We present a framework for the model-free estimation of information storage and information transfer computed as the terms composing the predictive information about the target of a multivariate dynamical process. The approach tackles the curse of dimensionality employing a nonuniform embedding scheme that selects progressively, among the past components of the multivariate process, only those that contribute most, in terms of conditional mutual information, to the present target process. Moreover, it computes all information-theoretic quantities using a nearest-neighbor technique designed to compensate for the bias due to the different dimensionality of individual entropy terms. The resulting estimators of prediction entropy, storage entropy, transfer entropy, and partial transfer entropy are tested on simulations of coupled linear stochastic and nonlinear deterministic dynamic processes, demonstrating the superiority of the proposed approach over the traditional estimators based on uniform embedding. The framework is then applied to multivariate physiologic time series, resulting in physiologically well-interpretable information decompositions of cardiovascular and cardiorespiratory interactions during head-up tilt and of joint brain-heart dynamics during sleep.

  12. Multivariate pluvial flood damage models

    SciTech Connect

    Van Ootegem, Luc; Verhofstadt, Elsy; Van Herck, Kristine; Creten, Tom

    2015-09-15

    Depth-damage functions, relating monetary flood damage to the depth of the inundation, are commonly used in the case of fluvial floods (floods caused by a river overflowing). We construct four multivariate damage models for pluvial floods (caused by extreme rainfall) by differentiating between ground-floor floods and basement floods on the one hand, and between damage to residential buildings and damage to housing contents on the other. We take into account not only the effect of flood depth on damage, but also the effects of non-hazard indicators (building characteristics, behavioural indicators and socio-economic variables). By using a Tobit estimation technique on identified victims of pluvial floods in Flanders (Belgium), we account for cases of reported zero damage. Our results show that flood depth is an important predictor of damage, but with a diverging impact between ground-floor floods and basement floods. Non-hazard indicators are also important. For example, being aware of the risk just before the water enters the building reduces content damage considerably, underlining the importance of warning systems and policy in the case of pluvial floods. - Highlights: • Prediction of pluvial flood damage using non-hazard information as well. • 'No damage' cases are included using a Tobit model. • The effect of flood depth is stronger for ground-floor floods than for basement floods. • Non-hazard indicators are especially important for content damage. • Potential gain of policies that increase awareness of flood risks.
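
    To make the Tobit idea concrete: with damage censored at zero, the likelihood combines a normal density for observed positive damage and a normal CDF term for zero-damage cases. The sketch below codes a type-I Tobit log-likelihood directly (statsmodels has no built-in Tobit); the data and coefficients are simulated, not the paper's.

    ```python
    import numpy as np
    from scipy import optimize, stats

    rng = np.random.default_rng(3)
    n = 400
    X = np.column_stack([np.ones(n), rng.uniform(0, 2, n)])  # intercept, flood depth
    latent = X @ np.array([-0.5, 1.2]) + rng.normal(0, 1.0, n)
    y = np.maximum(latent, 0.0)                              # observed damage

    def neg_loglik(params):
        *beta, log_sigma = params
        sigma = np.exp(log_sigma)
        mu = X @ np.array(beta)
        ll = np.where(y > 0,
                      stats.norm.logpdf(y, mu, sigma),       # uncensored part
                      stats.norm.logcdf(-mu / sigma))        # P(latent <= 0)
        return -ll.sum()

    res = optimize.minimize(neg_loglik, x0=[0.0, 0.0, 0.0], method="BFGS")
    print("beta:", res.x[:2], "sigma:", np.exp(res.x[2]))
    ```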

  13. SUMS calibration test report

    NASA Technical Reports Server (NTRS)

    Robertson, G.

    1982-01-01

    Calibration was performed on the shuttle upper atmosphere mass spectrometer (SUMS). The results of the calibration and the as-run test procedures are presented. The output data are described, and engineering-data conversion factors, tables and curves, and calibration of the instrument gauges are included. Static calibration results are given, including: instrument sensitivity versus external pressure for N2 and O2; data from each calibration scan; data plots for N2 and O2; sensitivity of SUMS at the inlet for N2 and O2; and the 14/28 ratio for nitrogen and the 16/32 ratio for oxygen.

  14. Multivariate analysis of remote LIBS spectra using partial least squares, principal component analysis, and related techniques

    SciTech Connect

    Clegg, Samuel M; Barefield, James E; Wiens, Roger C; Sklute, Elizabeth; Dyar, Melinda D

    2008-01-01

    Quantitative analysis with LIBS traditionally employs calibration curves that are complicated by chemical matrix effects. These matrix effects influence the LIBS plasma and the ratio of elemental composition to elemental emission-line intensity. Consequently, LIBS calibration typically requires a priori knowledge of the unknown, so that a series of calibration standards similar to the unknown can be employed. In this paper, three new Multivariate Analysis (MVA) techniques are employed to analyze the LIBS spectra of 18 disparate igneous and highly metamorphosed rock samples. Partial Least Squares (PLS) analysis is used to generate a calibration model from which unknown samples can be analyzed. Principal Components Analysis (PCA) and Soft Independent Modeling of Class Analogy (SIMCA) are employed to generate a model and predict the rock type of the samples. These MVA techniques appear to exploit the matrix effects associated with the chemistries of these 18 samples.

  15. Calibration or inverse regression: Which is appropriate for crop surveys using LANDSAT data?

    NASA Technical Reports Server (NTRS)

    Chhikara, R. S.; Houston, A. G.

    1984-01-01

    Calibration and inverse regression estimators of crop proportions are investigated where the auxiliary variable is obtained from binary classification of multivariate LANDSAT data. The appropriate model relating classifier proportions and ground observed proportions for a given crop type is the calibration model. Under this model the inverse regression estimator is superior to the calibration estimator in estimating the crop acreage or proportion for a region of interest.
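
    The distinction between the two estimators is easy to state in code. Under the calibration model, the classifier proportion x is regressed on the ground-observed proportion y and the fitted line is inverted; under inverse regression, y is regressed on x directly. A minimal sketch with simulated proportions (all numbers hypothetical):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    y_train = rng.uniform(0.1, 0.6, 60)                       # ground-observed proportions
    x_train = 0.05 + 0.9 * y_train + rng.normal(0, 0.03, 60)  # classifier proportions

    b, a = np.polyfit(y_train, x_train, 1)   # calibration model: x = a + b*y
    d, c = np.polyfit(x_train, y_train, 1)   # inverse regression: y = c + d*x

    x_new = 0.05 + 0.9 * 0.35                # region whose true proportion is 0.35
    print("calibration estimate:   ", (x_new - a) / b)
    print("inverse regression est.:", c + d * x_new)
    ```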

  16. Residual gas analyzer calibration

    NASA Technical Reports Server (NTRS)

    Lilienkamp, R. H.

    1972-01-01

    A technique which employs known gas mixtures to calibrate the residual gas analyzer (RGA) is described. The mass spectrum from the RGA is recorded for each gas mixture. The mass spectra data and the mixture composition data each form a matrix; from the two matrices the calibration matrix may be computed. The matrix mathematics requires that the number of calibration gas mixtures be equal to or greater than the number of gases included in the calibration. This technique was evaluated using a mathematical model of an RGA to generate the mass spectra. The model included shot-noise errors in the mass spectra; errors in the gas concentrations were also included in the evaluation. The effects of these errors were studied by varying their magnitudes and comparing the resulting calibrations. Several methods of evaluating an actual calibration are presented. The effects of the number of gases included, the composition of the calibration mixtures, and the number of mixtures used are discussed.
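
    A minimal numerical sketch of the matrix arithmetic described above, assuming synthetic fragmentation patterns rather than real RGA data: rows of known calibration mixtures give a spectra matrix and a composition matrix, and least squares yields a calibration matrix mapping any new mass spectrum to gas concentrations.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    true_patterns = np.array([[100.0, 5.0, 0.0],   # fragmentation pattern, gas 1
                              [10.0, 80.0, 3.0],   # gas 2
                              [0.0, 4.0, 60.0]])   # gas 3 (mass channels in columns)

    comps = rng.dirichlet(np.ones(3), size=6)      # 6 known calibration mixtures
    spectra = comps @ true_patterns + rng.normal(0, 0.1, (6, 3))  # measured + noise

    # Least-squares calibration matrix C such that comps ~= spectra @ C.
    # Needs at least as many mixtures as gases, as the abstract notes.
    C, *_ = np.linalg.lstsq(spectra, comps, rcond=None)

    unknown = np.array([0.2, 0.5, 0.3]) @ true_patterns
    print("estimated composition:", unknown @ C)
    ```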

  17. Apparatus and system for multivariate spectral analysis

    DOEpatents

    Keenan, Michael R.; Kotula, Paul G.

    2003-06-24

    An apparatus and system for determining the properties of a sample from measured spectral data collected from the sample by performing a method of multivariate spectral analysis. The method can include: generating a two-dimensional matrix A containing measured spectral data; providing a weighted spectral data matrix D by performing a weighting operation on matrix A; factoring D into the product of two matrices, C and S^T, by performing a constrained alternating least-squares analysis of D = CS^T, where C is a concentration intensity matrix and S is a spectral shapes matrix; unweighting C and S by applying the inverse of the weighting used previously; and determining the properties of the sample by inspecting C and S. This method can be used by a spectrum analyzer to process X-ray spectral data generated by a spectral analysis system that can include a Scanning Electron Microscope (SEM) with an Energy Dispersive Detector and Pulse Height Analyzer.
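
    The core factorization D = CS^T can be sketched with a simple constrained alternating least-squares loop. This is an illustrative reading of the patent's method, not its actual implementation: nonnegativity is enforced here by clipping after each unconstrained solve, and the weighting/unweighting steps are omitted.

    ```python
    import numpy as np

    def als_factor(D, k, iters=200):
        """Factor D (rows = spectra, cols = channels) into C (rows x k) and S (channels x k)."""
        rng = np.random.default_rng(6)
        C = np.abs(rng.normal(size=(D.shape[0], k)))
        for _ in range(iters):
            # Alternate: solve for S given C, then for C given S, clipping to >= 0.
            S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0, None)
            C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0, None)
        return C, S

    rng = np.random.default_rng(7)
    D = np.abs(rng.normal(size=(100, 2)) @ rng.normal(size=(2, 64)))  # simulated data
    C, S = als_factor(D, k=2)
    print("relative residual:", np.linalg.norm(D - C @ S.T) / np.linalg.norm(D))
    ```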

  18. Implementation Challenges for Multivariable Control: What You Did Not Learn in School

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay

    2008-01-01

    Multivariable control allows controller designs that can provide decoupled command tracking and robust performance in the presence of modeling uncertainties. Although the last two decades have seen extensive development of multivariable control theory and example applications to complex systems in software/hardware simulations, there are no production flying systems, aircraft or spacecraft, that use multivariable control. This is because of the tremendous challenges associated with the implementation of such multivariable control designs. Unfortunately, school curricula do not provide sufficient time to expose students to these implementation challenges. The objective of this paper is to share the lessons learned by a practitioner of multivariable control in the process of applying modern control theory to the Integrated Flight Propulsion Control (IFPC) design for an advanced Short Take-Off Vertical Landing (STOVL) aircraft simulation.

  19. Bioharness™ Multivariable Monitoring Device: Part. I: Validity

    PubMed Central

    Johnstone, James A.; Ford, Paul A.; Hughes, Gerwyn; Watson, Tim; Garrett, Andrew T.

    2012-01-01

    The Bioharness™ monitoring system may provide physiological information on human performance, but there is limited information on its validity. The objective of this study was to assess the validity of all five Bioharness™ variables using a laboratory-based treadmill protocol. 22 healthy males participated. Heart rate (HR), breathing frequency (BF) and accelerometry (ACC) precision were assessed during a discontinuous incremental (0-12 km·h-1) treadmill protocol. Infrared skin temperature (ST) was assessed during a 45-min sub-maximal cycle ergometer test, completed twice, with environmental temperature controlled at 20 ± 0.1 °C and 30 ± 0.1 °C. Posture (P) was assessed using a tilt table moved through 160°. The criterion measurement devices were: HR: Polar T31 (Polar Electro); BF: spirometer (Cortex Metalyser); ACC: oxygen expenditure (Cortex Metalyser); ST: skin thermistors (Grant Instruments); P: goniometer (Leighton Flexometer). Strong relationships (r = .89 to .99, p < 0.01) were reported for HR, BF, ACC and P. Limits of agreement identified differences in HR (-3.05 ± 32.20 b·min-1), BF (-3.46 ± 43.70 br·min-1) and P (0.20 ± 2.62°). ST established a moderate relationship (-0.61 ± 1.98 °C; r = 0.76, p < 0.01). Higher velocities on the treadmill decreased the precision of measurement, especially for HR and BF. Overall, the results suggest that the Bioharness™ is a valid multivariable monitoring device within the laboratory environment. Key points: different levels of precision exist for each variable in the Bioharness™ (Version 1) multivariable monitoring device; the accelerometry and posture variables presented the most precise data; data from the heart rate and breathing frequency variables decrease in precision at velocities ≥ 10 km·h-1; a clear understanding of the limitations of new applied monitoring technology is required before it is used by the exercise scientist. PMID:24149346

  20. Quantitative characterization of lignocellulosic biomass using surrogate mixtures and multivariate techniques.

    PubMed

    Krasznai, Daniel J; Champagne, Pascale; Cunningham, Michael F

    2012-04-01

    PLS regression models were developed using mixtures of cellulose, xylan, and lignin in a ternary mixture experimental design for multivariate model calibration. Mid-infrared spectra of these representative samples were recorded using Attenuated Total Reflectance (ATR) Fourier Transform Infrared (FT-IR) spectroscopy and regressed against their known composition using Partial Least Squares (PLS) multivariate techniques. The regression models were cross-validated and then used to predict the unknown compositions of two Arabidopsis cultivars, B10 and C10. The effect of various data preprocessing techniques on the final predictive ability of the PLS regression models was also evaluated. The compositions of B10 and C10 predicted by the PLS regression model after second-derivative data preprocessing were similar to the results provided by a third-party analysis. This study suggests that mixture designs could be used as calibration standards in PLS regression for the compositional analysis of lignocellulosic materials if the infrared data are appropriately preprocessed. PMID:22342087
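
    A minimal sketch of the preprocessing/model combination the abstract singles out, second-derivative (Savitzky-Golay) preprocessing followed by PLS regression onto the ternary composition, using synthetic spectra in place of the ATR FT-IR measurements; the window length and component counts are arbitrary choices.

    ```python
    import numpy as np
    from scipy.signal import savgol_filter
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(8)
    comp = rng.dirichlet(np.ones(3), size=30)   # cellulose/xylan/lignin fractions
    # Synthetic component "spectra": one Gaussian band per component.
    bands = np.stack([np.exp(-0.5 * ((np.arange(400) - c) / 15) ** 2)
                      for c in (120, 200, 300)])
    spectra = comp @ bands + rng.normal(0, 0.01, (30, 400)) + 0.2  # baseline offset

    # Second-derivative preprocessing removes the additive baseline.
    X = savgol_filter(spectra, window_length=15, polyorder=3, deriv=2, axis=1)

    pls = PLSRegression(n_components=3).fit(X, comp)
    print("fitted composition of sample 0:", pls.predict(X[:1]).round(3))
    ```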

  1. Radiometric Calibration of OSMI Imagery Using Solar Calibration

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Han; Kim, Yong-Seung

    2000-12-01

    OSMI (Ocean Scanning Multi-Spectral Imager) raw image data (Level 0) were acquired and radiometrically corrected. We applied two methods to the radiometric correction of OSMI raw image data: using solar and dark calibration data from the OSMI sensor, and comparing with SeaWiFS data. First, we obtained gain and offset values for each pixel and each band by comparing the solar and dark calibration data with the solar input radiance, calculated from the transmittance, the BRDF (Bidirectional Reflectance Distribution Function), and the solar incidence angles of the OSMI sensor. Applying this calibration to the OSMI raw image data produced two anomalous results: radiometrically corrected values lower than expected, and a Venetian-blind effect in the corrected imagery. Second, comparison of the OSMI raw image data with SeaWiFS data gave reasonable results but revealed a new problem with the OSMI sensor.

  2. A method for designing robust multivariable feedback systems

    NASA Technical Reports Server (NTRS)

    Milich, David Albert; Athans, Michael; Valavani, Lena; Stein, Gunter

    1988-01-01

    A new methodology is developed for the synthesis of linear, time-invariant (LTI) controllers for multivariable LTI systems. The aim is to achieve stability and performance robustness of the feedback system in the presence of multiple unstructured uncertainty blocks; i.e., to satisfy a frequency-domain inequality in terms of the structured singular value. The design technique is referred to as the Causality Recovery Methodology (CRM). Starting with an initial (nominally) stabilizing compensator, the CRM produces a closed-loop system whose performance-robustness is at least as good as, and hopefully superior to, that of the original design. The robustness improvement is obtained by solving an infinite-dimensional, convex optimization program. A finite-dimensional implementation of the CRM was developed, and it was applied to a multivariate design example.

  3. A method for designing robust multivariable feedback systems

    NASA Technical Reports Server (NTRS)

    Milich, David A.; Athans, Michael; Valavani, Lena; Stein, Gunter

    1988-01-01

    A new methodology is developed for the synthesis of linear, time-invariant (LTI) controllers for multivariable LTI systems. The aim is to achieve stability and performance robustness of the feedback system in the presence of multiple unstructured uncertainty blocks; i.e., to satisfy a frequency-domain inequality in terms of the structured singular value. The design technique is referred to as the causality recovery methodology (CRM). Starting with an initial (nominally) stabilizing compensator, the CRM produces a closed-loop system whose performance-robustness is at least as good as, and hopefully superior to, that of the original design. The robustness improvement is obtained by solving an infinite-dimensional, convex optimization program. A finite-dimensional implementation of the CRM was developed, and it was applied to a multivariate design example.

  4. Multivariable control altitude demonstration on the F100 turbofan engine

    NASA Technical Reports Server (NTRS)

    Lehtinen, B.; Dehoff, R. L.; Hackney, R. D.

    1979-01-01

    The F100 Multivariable Control Synthesis (MVCS) program was aimed at demonstrating the benefits of LQR synthesis theory in the design of a multivariable engine control system for operation throughout the flight envelope. The advantages of such procedures include: (1) enhanced performance from cross-coupled controls, (2) maximum use of engine variable geometry, and (3) a systematic design procedure that can be applied efficiently to new engine systems. The control system designed, under the MVCS program, for the Pratt & Whitney F100 turbofan engine is described. Basic components of the control include: (1) a reference value generator for deriving a desired equilibrium state and an approximate control vector, (2) a transition model to produce compatible reference point trajectories during gross transients, (3) gain schedules for producing feedback terms appropriate to the flight condition, and (4) integral switching logic to produce acceptable steady-state performance without exceeding engine operating limits.

  5. Conventional univariate versus multivariate spectrophotometric assisted techniques for simultaneous determination of perindopril arginine and amlodipine besylate in presence of their degradation products.

    PubMed

    Hegazy, Maha A; Abbas, Samah S; Zaazaa, Hala E; Essam, Hebatallah M

    2015-01-01

    The resolving power of spectrophotometrically assisted mathematical techniques was demonstrated for the simultaneous determination of perindopril arginine (PER) and amlodipine besylate (AML) in the presence of their degradation products. The conventional univariate methods, the absorptivity factor method (AFM) and the absorption correction method (ACM), were able to determine the two drugs simultaneously, but not in the presence of their degradation products. In both methods, amlodipine was determined directly at 360 nm in the concentration range of 8-28 μg mL(-1); perindopril was determined by AFM at 222.2 nm and by ACM at 208 nm in the concentration range of 10-70 μg mL(-1). Moreover, the applied multivariate calibration methods, concentration residuals augmented classical least squares (CRACLS) and partial least squares (PLS), were able to determine perindopril and amlodipine in the presence of their degradation products. The proposed multivariate methods were applied to 19 synthetic samples in the concentration ranges of 60-100 μg mL(-1) perindopril and 20-40 μg mL(-1) amlodipine. Commercially available tablet formulations were successfully analysed using the developed methods without interference from other dosage-form additives, except for the PLS model, which failed to determine both drugs in their pharmaceutical dosage form. PMID:26123511

  6. SAR calibration technology review

    NASA Technical Reports Server (NTRS)

    Walker, J. L.; Larson, R. W.

    1981-01-01

    Synthetic Aperture Radar (SAR) calibration technology including a general description of the primary calibration techniques and some of the factors which affect the performance of calibrated SAR systems are reviewed. The use of reference reflectors for measurement of the total system transfer function along with an on-board calibration signal generator for monitoring the temporal variations of the receiver to processor output is a practical approach for SAR calibration. However, preliminary error analysis and previous experimental measurements indicate that reflectivity measurement accuracies of better than 3 dB will be difficult to achieve. This is not adequate for many applications and, therefore, improved end-to-end SAR calibration techniques are required.

  7. Method of Calibrating a Force Balance

    NASA Technical Reports Server (NTRS)

    Parker, Peter A. (Inventor); Rhew, Ray D. (Inventor); Johnson, Thomas H. (Inventor); Landman, Drew (Inventor)

    2015-01-01

    A calibration system and method utilizes acceleration of a mass to generate a force on the mass. An expected value of the force is calculated based on the magnitude of the mass and its acceleration. A fixture is utilized to mount the mass to a force balance, and the force balance is calibrated to provide a reading consistent with the expected force determined for a given acceleration. The acceleration can be varied to provide different expected forces, and the force balance can be calibrated for different applied forces. The acceleration may result from linear acceleration of the mass or rotational movement of the mass.
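
    The calibration logic amounts to regressing balance readings against the known inertial loads m·a. A minimal sketch with simulated readings (the mass, accelerations, and error terms are all invented):

    ```python
    import numpy as np

    mass = 2.5                          # kg, calibration mass
    accel = np.linspace(1.0, 30.0, 12)  # m/s^2, commanded accelerations
    expected_force = mass * accel       # N, known applied loads (F = m*a)

    # Simulated balance readings with gain and offset errors plus noise.
    rng = np.random.default_rng(9)
    raw_reading = 0.98 * expected_force + 0.4 + rng.normal(0, 0.05, accel.size)

    # Calibration: fit reading -> force so future readings report true load.
    gain, offset = np.polyfit(raw_reading, expected_force, 1)
    print(f"calibrated force = {gain:.4f} * reading + {offset:+.4f} N")
    ```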

  8. Radiometer Calibration and Characterization

    Energy Science and Technology Software Center (ESTSC)

    1994-12-31

    The Radiometer Calibration and Characterization (RCC) software is a data acquisition and data archival system for performing Broadband Outdoor Radiometer Calibrations (BORCAL). RCC provides a unique method of calibrating solar radiometers using techniques that reduce measurement uncertainty and better characterize a radiometer’s response profile. The RCC software automatically monitors and controls many of the components that contribute to uncertainty in an instrument’s responsivity.

  9. CLUSTERING CRITERIA AND MULTIVARIATE NORMAL MIXTURES

    EPA Science Inventory

    New clustering criteria for use when a mixture of multivariate normal distributions is an appropriate model are presented. They are derived from maximum likelihood and Bayesian approaches corresponding to different assumptions about the covariance matrices of the mixture components.

  10. A Course in... Multivariable Control Methods.

    ERIC Educational Resources Information Center

    Deshpande, Pradeep B.

    1988-01-01

    Describes an engineering course for graduate study in process control. Lists four major topics: interaction analysis, multiloop controller design, decoupling, and multivariable control strategies. Suggests a course outline and gives information about each topic. (MVL)

  11. LWIR polarimeter calibration

    NASA Astrophysics Data System (ADS)

    Blumer, Robert V.; Miller, Miranda A.; Howe, James D.; Stevens, Mark A.

    2002-01-01

    Previously reported efforts to calibrate a MWIR imaging polarimeter met with moderate success. Recent efforts to calibrate a LWIR sensor using a different technique have been much more fruitful. For our sensor, which is based on a rotating retarder, we have improved system calibration substantially by including nonuniformity correction at all measurement positions of the retarder in our polarization data analysis. This technique can account for effects, such as spurious optical reflections within the camera system, that had been masquerading as false polarization in our previous data-analysis methodology. Our techniques are described and our calibration results quantified. Data from field testing are presented.

  12. The Science of Calibration

    NASA Astrophysics Data System (ADS)

    Kent, S. M.

    2016-05-01

    This paper presents a broad overview of the many issues involved in calibrating astronomical data, covering the full electromagnetic spectrum from radio waves to gamma rays, and considering both ground-based and space-based missions. These issues include the science drivers for absolute and relative calibration, the physics behind calibration and the mechanisms used to transfer it from the laboratory to an astronomical source, the need for networks of calibrated astronomical standards, and some of the challenges faced by large surveys and missions.

  13. The COS Calibration Pipeline

    NASA Astrophysics Data System (ADS)

    Hodge, Philip E.; Keyes, C.; Kaiser, M.

    2007-12-01

    The COS calibration pipeline (CALCOS) includes three main components: basic calibration, wavelength calibration, and spectral extraction. Calibration of modes using the far ultraviolet (FUV) and near ultraviolet (NUV) detectors share a common structure, although the individual reference files differ and there are some additional steps for the FUV channel. The pipeline is designed to calibrate data acquired in either ACCUM or time-tag mode. The basic calibration includes pulse-height filtering and geometric correction for FUV, and flat-field, deadtime, and Doppler correction for both detectors. Wavelength calibration can be done either by using separate lamp exposures or by taking several short lamp exposures concurrently with a science exposure. For time-tag data, the latter mode ("tagflash") will allow better correction of potential drift of the spectrum on the detector. One-dimensional spectra will be extracted and saved in a FITS binary table. Separate columns will be used for the flux-calibrated spectrum, error estimate, and the associated wavelengths. CALCOS is written in Python, with some functions in C. It is similar in style to other HST pipeline code in that it uses an association table to specify which files to be included, and the calibration steps to be performed and the reference files to use are specified by header keywords. Currently, in conjunction with the Instrument Definition Team (led by J. Green), the ground-based reference files are being refined, delivered, and tested with the pipeline.

  14. Laser interferometer calibration station

    NASA Astrophysics Data System (ADS)

    Campolmi, R. W.; Krupski, S. J.

    1981-10-01

    The laser interferometer is a versatile tool, used for calibration over both long and short distances. It is considered traceable to the National Bureau of Standards. The system developed under this project was to be capable of providing for the calibration of many types of small linear measurement devices. The logistics of the original concept of one location for calibration of all mics, calipers, etc. at a large manufacturing facility proved unworkable. The equipment was instead used for the calibration of the large machines used to manufacture cannon tubes.

  15. Multivariate data analysis of proteome data.

    PubMed

    Engkilde, Kåre; Jacobsen, Susanne; Søndergaard, Ib

    2007-01-01

    We present the background for multivariate data analysis of proteomics data, with a hands-on section on how to transfer data between different software packages. The techniques can also be used for other biological and biochemical problems in which structure has to be found in a large amount of data. We cover digitization of the 2D gels, analysis using image-processing software, transfer of data, multivariate data analysis, and interpretation of the results, before finally returning to the biology. PMID:17093312

  16. Multivariate Longitudinal Analysis with Bivariate Correlation Test.

    PubMed

    Adjakossa, Eric Houngla; Sadissou, Ibrahim; Hounkonnou, Mahouton Norbert; Nuel, Gregory

    2016-01-01

    In the context of multivariate multilevel data analysis, this paper focuses on the multivariate linear mixed-effects model, including all the correlations between the random effects when the dimensional residual terms are assumed uncorrelated. Using the EM algorithm, we suggest more general expressions of the model's parameter estimators. These estimators can be used in the framework of the multivariate longitudinal data analysis as well as in the more general context of the analysis of multivariate multilevel data. By using a likelihood ratio test, we test the significance of the correlations between the random effects of two dependent variables of the model, in order to investigate whether or not it is useful to model these dependent variables jointly. Simulation studies are done to assess both the parameter recovery performance of the EM estimators and the power of the test. Using two empirical data sets which are of longitudinal multivariate type and multivariate multilevel type, respectively, the usefulness of the test is illustrated. PMID:27537692

  17. Multivariate Longitudinal Analysis with Bivariate Correlation Test

    PubMed Central

    Adjakossa, Eric Houngla; Sadissou, Ibrahim; Hounkonnou, Mahouton Norbert; Nuel, Gregory

    2016-01-01

    In the context of multivariate multilevel data analysis, this paper focuses on the multivariate linear mixed-effects model, including all the correlations between the random effects when the dimensional residual terms are assumed uncorrelated. Using the EM algorithm, we suggest more general expressions of the model's parameter estimators. These estimators can be used in the framework of the multivariate longitudinal data analysis as well as in the more general context of the analysis of multivariate multilevel data. By using a likelihood ratio test, we test the significance of the correlations between the random effects of two dependent variables of the model, in order to investigate whether or not it is useful to model these dependent variables jointly. Simulation studies are done to assess both the parameter recovery performance of the EM estimators and the power of the test. Using two empirical data sets which are of longitudinal multivariate type and multivariate multilevel type, respectively, the usefulness of the test is illustrated. PMID:27537692

  18. Toward Millimagnitude Photometric Calibration (Abstract)

    NASA Astrophysics Data System (ADS)

    Dose, E.

    2014-12-01

    (Abstract only) Asteroid rotation, exoplanet transits, and similar measurements will increasingly call for photometric precisions better than about 10 millimagnitudes, often between nights and ideally between distant observers. The present work applies detailed spectral simulations to test popular photometric calibration practices, and to test new extensions of these practices. Using 107 synthetic spectra of stars of diverse colors, detailed atmospheric transmission spectra computed by solar-energy software, realistic spectra of popular astronomy gear, and the option of three sources of noise added at realistic millimagnitude levels, we find that certain adjustments to current calibration practices can help remove small systematic errors, especially for imperfect filters, high airmasses, and possibly passing thin cirrus clouds.

  19. Meteorological Sensor Calibration Facility

    NASA Technical Reports Server (NTRS)

    Schmidlin, F. J.

    1988-01-01

    The meteorological sensor calibration facility is designed to test and assess radiosonde measurement quality through actual flights in the atmosphere. United States radiosonde temperature measurements are deficient in that they require correction for errors introduced by long- and short-wave radiation. The effect of not applying corrections is a large bias between daytime and nighttime measurements. This day/night bias has serious implications for users of radiosonde data, of which NASA is one. The derivation of corrections for the U.S. radiosonde is therefore quite important. Determination of corrections depends on solving the heat-transfer equation of the thermistor using laboratory measurements of the emissivity and absorptivity of the thermistor coating. The U.S. radiosonde observations from the World Meteorological Organization International Radiosonde Intercomparison were used as the database to test whether the day/night height bias can be removed. Twenty-five noontime and 26 nighttime observations were used. Corrected temperatures were used to calculate new geopotentials. The day/night bias in the geopotentials decreased significantly when corrections were introduced. Some testing of the thermal lag associated with the standard carbon hygristor also took place. Two radiosondes with small bead thermistors embedded in the hygristor were flown. Detailed analysis was not accomplished; however, cursory examination of the data showed that the hygristor is at a higher temperature than the external thermistor indicates.

  20. BXS Re-calibration

    SciTech Connect

    Welch, J; ,

    2010-11-24

    indicated that the vacuum chamber was in fact in the proper position with respect to the magnet - not 19 mm off to one side - so the former possibility was discounted. Review of the Fiducial Report and an interview with Keith Caban convinced me that there was no error in the coordinate system used for magnet measurements. I then interviewed Andrew Fischer, who did the magnetic measurements of BXS. He had extensive records, including photographs of the setups, and could quickly answer quite detailed questions about how the measurement was done. Before the interview, I suspected there might have been a sign flip in the x coordinate, which, because of the wedge, would result in the wrong path length and a miscalibration. Andrew was able to pinpoint how this could have happened and later confirmed it by looking at measurement data from the BXG magnet, done just after BXS, and comparing photographs. It turned out that the sign of the horizontal stage travel that drives the measurement wire was opposite that of the x coordinate in the Traveler, and the sign difference wasn't applied to the data. The origin x = 0 was set up correctly, but the wire moved in the opposite direction to what was expected, just as if the arc had been flipped over about the origin. To quantitatively confirm that this was the cause of the observed difference in calibration, I used the 'grid data', taken with a Hall probe on the BXS magnet originally to measure the FINT (focusing effect) term, combined it with the Hall probe data taken on the flipped trajectory, and performed the field integral on a path that should give the same result as the design path. This is best illustrated in Figure 2. The integration path is coincident with the desired path from the pivot points (x = 0) outward. Between the pivot points the integration path is a mirror image of the design path, but because the magnet is fairly uniform, for this portion it gives the same result. Most of the calibration error

  1. Multivariate Sensitivity Analysis of Time-of-Flight Sensor Fusion

    NASA Astrophysics Data System (ADS)

    Schwarz, Sebastian; Sjöström, Mårten; Olsson, Roger

    2014-09-01

    Obtaining three-dimensional scenery data is an essential task in computer vision, with diverse applications in areas such as manufacturing and quality control, security and surveillance, and user interaction and entertainment. Dedicated Time-of-Flight sensors can provide detailed scenery depth in real time and overcome shortcomings of traditional stereo analysis. Nonetheless, they do not provide texture information and have limited spatial resolution. Therefore such sensors are typically combined with high-resolution video sensors. Time-of-Flight sensor fusion is a highly active field of research. Over recent years, there have been multiple proposals addressing important topics such as texture-guided depth upsampling and depth-data denoising. In this article we take a step back and look at the underlying principles of ToF sensor fusion. We derive the ToF sensor fusion error model and evaluate its sensitivity to inaccuracies in camera calibration and depth measurements. In accordance with our findings, we propose certain courses of action to ensure high-quality fusion results. With this multivariate sensitivity analysis of the ToF sensor fusion model, we provide an important guideline for designing, calibrating, and running sophisticated Time-of-Flight sensor fusion capture systems.

  2. Photogrammetric camera calibration

    USGS Publications Warehouse

    Tayman, W.P.; Ziemann, H.

    1984-01-01

    Section 2 (Calibration) of the document "Recommended Procedures for Calibrating Photogrammetric Cameras and Related Optical Tests" from the International Archives of Photogrammetry, Vol. XIII, Part 4, is reviewed in the light of recent practical work, and suggestions for changes are made. These suggestions are intended as a basis for further discussion. © 1984.

  3. Calibration facility safety plan

    NASA Technical Reports Server (NTRS)

    Fastie, W. G.

    1971-01-01

    A set of requirements is presented to insure the highest practical standard of safety for the Apollo 17 Calibration Facility in terms of identifying all critical or catastrophic type hazard areas. Plans for either counteracting or eliminating these areas are presented. All functional operations in calibrating the ultraviolet spectrometer and the testing of its components are described.

  4. OLI Radiometric Calibration

    NASA Technical Reports Server (NTRS)

    Markham, Brian; Morfitt, Ron; Kvaran, Geir; Biggar, Stuart; Leisso, Nathan; Czapla-Myers, Jeff

    2011-01-01

    Goals: (1) Present an overview of the pre-launch radiance, reflectance & uniformity calibration of the Operational Land Imager (OLI) (1a) Transfer to orbit/heliostat (1b) Linearity (2) Discuss on-orbit plans for radiance, reflectance and uniformity calibration of the OLI

  5. Sandia WIPP calibration traceability

    SciTech Connect

    Schuhen, M.D.; Dean, T.A.

    1996-05-01

    This report summarizes the work performed to establish calibration traceability for the instrumentation used by Sandia National Laboratories at the Waste Isolation Pilot Plant (WIPP) during testing from 1980-1985. Identifying the calibration traceability is an important part of establishing a pedigree for the data and is part of the qualification of existing data. In general, the requirement states that the calibration of Measuring and Test equipment must have a valid relationship to nationally recognized standards or the basis for the calibration must be documented. Sandia recognized that just establishing calibration traceability would not necessarily mean that all QA requirements were met during the certification of test instrumentation. To address this concern, the assessment was expanded to include various activities.

  6. Assessment of opacimeter calibration on kraft pulp mills

    NASA Astrophysics Data System (ADS)

    Gomes, João F. P.

    This paper describes the methodology and specific techniques for calibrating automatic on-line industrial emission analysers, specifically equipment that measures total suspended dust installed in pulp mills within the scope of Portuguese Regulation No. 286/93 on air quality. The calibration of opacimeters is a multi-parameter relationship rather than the two-dimensional calibration used in industrial practice. For a stationary source from a pulp mill, such as the recovery boiler stack, which is subject to significant variations, the effects of parameters such as humidity and gas temperature, deviations from isokinetism, the size range of particles, and the characteristic transmittance of the equipment are analysed. Multivariable analysis of a considerable data set shows that equipment transmittance accounts for about 98% of the response relative to the other parameters, with a level of significance greater than 0.99, which validates the two-dimensional calibrations used in practice.

  7. Multivariate Analysis of Ladle Vibration

    NASA Astrophysics Data System (ADS)

    Yenus, Jaefer; Brooks, Geoffrey; Dunn, Michelle

    2016-05-01

    The homogeneity of composition and uniformity of temperature of the steel melt before it is transferred to the tundish are crucial in making high-quality steel product. The homogenization is performed by stirring the melt with inert gas in ladles. Continuous monitoring of this process is important to make sure the stirring action is constant throughout the ladle. Currently, the stirring process is monitored by process operators who rely largely on visual and acoustic phenomena from the ladle. However, due to the lack of measurable signals, the accuracy and suitability of this manual monitoring are problematic. The actual flow of argon gas to the ladle may not be the same as the flow-gage reading due to leakage along the gas-line components; as a result, the actual degree of stirring may not be correctly known. Various researchers have used one-dimensional vibration, sound, and image signals measured from the ladle to predict the degree of stirring inside, and have developed online sensors intended to monitor the stirring phenomena. In this investigation, triaxial vibration signals were measured from a cold water model of an industrial ladle. Three flow-rate ranges and varying bath heights were used to collect vibration signals. The Fast Fourier Transform was applied to the dataset before it was analyzed using principal component analysis (PCA) and partial least squares (PLS). PCA was used to unveil the structure in the experimental data; PLS was mainly applied to predict the stirring from the vibration response. It was found that for each flow-rate range considered in this study, the informative signals reside in different frequency ranges. The first latent variables in these frequency ranges explain more than 95 pct of the variation in the stirring process for the entire single-layer and double-layer data collected from the cold model. PLS analysis in these identified frequency ranges demonstrated that the latent

  8. Multivariate Analysis of Ladle Vibration

    NASA Astrophysics Data System (ADS)

    Yenus, Jaefer; Brooks, Geoffrey; Dunn, Michelle

    2016-08-01

    The homogeneity of composition and uniformity of temperature of the steel melt before it is transferred to the tundish are crucial in making high-quality steel product. The homogenization is performed by stirring the melt with inert gas in ladles. Continuous monitoring of this process is important to make sure the stirring action is constant throughout the ladle. Currently, the stirring process is monitored by process operators who rely largely on visual and acoustic phenomena from the ladle. However, due to the lack of measurable signals, the accuracy and suitability of this manual monitoring are problematic. The actual flow of argon gas to the ladle may not be the same as the flow-gage reading due to leakage along the gas-line components; as a result, the actual degree of stirring may not be correctly known. Various researchers have used one-dimensional vibration, sound, and image signals measured from the ladle to predict the degree of stirring inside, and have developed online sensors intended to monitor the stirring phenomena. In this investigation, triaxial vibration signals were measured from a cold water model of an industrial ladle. Three flow-rate ranges and varying bath heights were used to collect vibration signals. The Fast Fourier Transform was applied to the dataset before it was analyzed using principal component analysis (PCA) and partial least squares (PLS). PCA was used to unveil the structure in the experimental data; PLS was mainly applied to predict the stirring from the vibration response. It was found that for each flow-rate range considered in this study, the informative signals reside in different frequency ranges. The first latent variables in these frequency ranges explain more than 95 pct of the variation in the stirring process for the entire single-layer and double-layer data collected from the cold model. PLS analysis in these identified frequency ranges demonstrated that the latent

  9. Application of multivariate statistical techniques in microbial ecology.

    PubMed

    Paliy, O; Shankar, V

    2016-03-01

    Recent advances in high-throughput methods of molecular analysis have led to an explosion of studies generating large-scale ecological data sets. Particularly noticeable progress has been made in the field of microbial ecology, where new experimental approaches have provided in-depth assessments of the composition, functions, and dynamic changes of complex microbial communities. Because even a single high-throughput experiment produces a large amount of data, powerful statistical techniques of multivariate analysis are well suited to analyse and interpret these data sets. Many different multivariate techniques are available, and it is often not clear which method should be applied to a particular data set. In this review, we describe and compare the most widely used multivariate statistical techniques, including exploratory, interpretive, and discriminatory procedures. We consider several important limitations and assumptions of these methods, and we present examples of how these approaches have been utilized in recent studies to provide insight into the ecology of the microbial world. Finally, we offer suggestions for the selection of appropriate methods based on the research question and data set structure. PMID:26786791

  10. A pairwise interaction model for multivariate functional and longitudinal data

    PubMed Central

    Chiou, Jeng-Min; Müller, Hans-Georg

    2016-01-01

    Functional data vectors consisting of samples of multivariate data where each component is a random function are encountered increasingly often but have not yet been comprehensively investigated. We introduce a simple pairwise interaction model that leads to an interpretable and straightforward decomposition of multivariate functional data and of their variation into component-specific processes and pairwise interaction processes. The latter quantify the degree of pairwise interactions between the components of the functional data vectors, while the component-specific processes reflect the functional variation of a particular functional vector component that cannot be explained by the other components. Thus the proposed model provides an extension of the usual notion of a covariance or correlation matrix for multivariate vector data to functional data vectors and generates an interpretable functional interaction map. The decomposition provided by the model can also serve as a basis for subsequent analysis, such as study of the network structure of functional data vectors. The decomposition of the total variance into componentwise and interaction contributions can be quantified by an $R^2$-like decomposition. We provide consistency results for the proposed methods and illustrate the model by applying it to sparsely sampled longitudinal data from the Baltimore Longitudinal Study of Aging, examining the relationships between body mass index and blood fats. PMID:27279664

  11. Application of multivariate outlier detection to fluid velocity measurements

    NASA Astrophysics Data System (ADS)

    Griffin, John; Schultz, Todd; Holman, Ryan; Ukeiley, Lawrence S.; Cattafesta, Louis N.

    2010-07-01

    A statistics-based approach to detecting outliers in fluid-based velocity measurements is proposed. Outliers are effectively detected from experimental unimodal distributions by applying an existing multivariate outlier detection algorithm for asymmetric distributions (Hubert and Van der Veeken, J Chemom 22:235-246, 2008). This approach is an extension of previous methods that apply only to symmetric distributions. For fluid velocity measurements, rejection of statistical outliers, meaning erroneous as well as low-probability data, via multivariate outlier rejection is compared to a traditional method based on univariate statistics. For particle image velocimetry data, both tests are conducted after application of the current de facto standard spatial filter, the universal outlier detection test (Westerweel and Scarano, Exp Fluids 39:1096-1100, 2005). By doing so, the utility of statistical outlier detection in addition to spatial filters is demonstrated, and further, the differences between multivariate and univariate outlier detection are discussed. Since the proposed technique for outlier detection is an independent process, statistical outlier detection is complementary to spatial outlier detection and can be used as an additional validation tool.
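    A hedged illustration of the multivariate idea on synthetic velocity samples: the skew-adjusted outlyingness of Hubert and Van der Veeken is not available in common Python libraries, so a robust Mahalanobis distance (minimum covariance determinant) stands in here; it conveys multivariate flagging but not the asymmetric-distribution adjustment:

        # Hedged sketch: robust multivariate outlier flagging on (u, v) samples.
        import numpy as np
        from scipy.stats import chi2
        from sklearn.covariance import MinCovDet

        rng = np.random.default_rng(0)
        uv = rng.normal([5.0, 0.2], [0.4, 0.3], size=(5000, 2))  # synthetic vectors
        uv[:25] += rng.normal(0, 5, size=(25, 2))                # inject spurious vectors

        mcd = MinCovDet(random_state=0).fit(uv)
        d2 = mcd.mahalanobis(uv)              # squared robust distances
        cutoff = chi2.ppf(0.999, df=2)        # chi-square cutoff for 2 variables
        print(f"flagged {(d2 > cutoff).sum()} of {len(uv)} vectors")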

  12. Multivariate meta-analysis using individual participant data.

    PubMed

    Riley, R D; Price, M J; Jackson, D; Wardle, M; Gueyffier, F; Wang, J; Staessen, J A; White, I R

    2015-06-01

    When combining results across related studies, a multivariate meta-analysis allows the joint synthesis of correlated effect estimates from multiple outcomes. Joint synthesis can improve efficiency over separate univariate syntheses, may reduce selective outcome reporting biases, and enables joint inferences across the outcomes. A common issue is that within-study correlations needed to fit the multivariate model are unknown from published reports. However, provision of individual participant data (IPD) allows them to be calculated directly. Here, we illustrate how to use IPD to estimate within-study correlations, using a joint linear regression for multiple continuous outcomes and bootstrapping methods for binary, survival and mixed outcomes. In a meta-analysis of 10 hypertension trials, we then show how these methods enable multivariate meta-analysis to address novel clinical questions about continuous, survival and binary outcomes; treatment-covariate interactions; adjusted risk/prognostic factor effects; longitudinal data; prognostic and multiparameter models; and multiple treatment comparisons. Both frequentist and Bayesian approaches are applied, with example software code provided to derive within-study correlations and to fit the models. PMID:26099484
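    A minimal sketch of the bootstrap device described above for a within-study correlation between a continuous and a binary outcome, computed from toy IPD (the trial data and effect summaries are invented stand-ins, not the paper's models):

        # Hedged sketch: bootstrap the correlation between two outcome effects.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 400
        treat = rng.integers(0, 2, n)                  # randomized arm
        sbp = -8.0 * treat + rng.normal(150, 15, n)    # continuous outcome
        stroke = rng.binomial(1, 0.10 - 0.03 * treat)  # binary outcome

        def effects(idx):
            t, c = treat[idx] == 1, treat[idx] == 0
            mean_diff = sbp[idx][t].mean() - sbp[idx][c].mean()
            risk_diff = stroke[idx][t].mean() - stroke[idx][c].mean()
            return mean_diff, risk_diff

        # Resample participants, re-estimate both effects, correlate across draws
        boot = np.array([effects(rng.integers(0, n, n)) for _ in range(2000)])
        print(f"within-study correlation ~ {np.corrcoef(boot.T)[0, 1]:.2f}")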

  13. Multivariate piecewise exponential survival modeling.

    PubMed

    Li, Yan; Panagiotou, Orestis A; Black, Amanda; Liao, Dandan; Wacholder, Sholom

    2016-06-01

    In this article, we develop a piecewise Poisson regression method to analyze survival data from complex sample surveys involving cluster correlation, differential selection probabilities, and longitudinal responses, to conveniently draw inference on absolute risks in time intervals that are prespecified by investigators. Extensive simulations evaluate the developed methods, with extensions to multiple covariates, under various complex sample designs, including stratified sampling, sampling with selection probability proportional to a measure of size (PPS), and multi-stage cluster sampling. We applied our methods to a study of mortality in men diagnosed with prostate cancer in the Prostate, Lung, Colorectal, and Ovarian (PLCO) cancer screening trial to investigate whether a biomarker available from biospecimens collected near the time of diagnosis stratifies subsequent risk of death. Poisson regression coefficients and absolute risks of mortality (and the corresponding 95% confidence intervals) for prespecified age intervals by biomarker levels are estimated. We conclude with a brief discussion of the motivation, methods, and findings of the study. PMID:26583951
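    A minimal sketch of the piecewise-exponential idea via Poisson regression with a log person-time offset, the standard construction the method builds on; the survey-weighted extensions developed in the paper are omitted, and the cut points and data below are invented:

        # Hedged sketch: piecewise-exponential survival fit as a Poisson GLM.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 2000
        x = rng.integers(0, 2, n)                          # binary biomarker level
        t = rng.exponential(1.0 / np.exp(-2.0 + 0.7 * x))  # true log HR = 0.7
        obs = np.minimum(t, 5.0)                           # censor at 5 years
        event = (t <= 5.0).astype(int)

        # Split each subject's follow-up into prespecified intervals
        cuts = [0.0, 1.0, 3.0, 5.0]
        rows = []
        for ti, ei, xi in zip(obs, event, x):
            for j in range(len(cuts) - 1):
                lo, hi = cuts[j], cuts[j + 1]
                if ti <= lo:
                    break
                rows.append((j, xi, min(ti, hi) - lo, int(ei and ti <= hi)))
        interval, xcov, expo, d = map(np.array, zip(*rows))

        # Interval dummies act as piecewise baseline log-hazards
        X = np.column_stack([np.eye(3)[interval], xcov])
        fit = sm.GLM(d, X, family=sm.families.Poisson(), offset=np.log(expo)).fit()
        print(f"estimated log hazard ratio: {fit.params[-1]:.2f}")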

  14. Multivariate orthogonal regression in astronomy

    NASA Astrophysics Data System (ADS)

    Branham, Richard L., Jr.

    1995-03-01

    Total least squares considers the problem of data reduction when error resides both in the data themselves and in the equations of condition. Error may be found in all of the columns of the matrix of the equations of condition, or merely in some; the latter situation is referred to as a mixed total least squares problem. A covariance matrix may be derived for total least squares. Both memory and operation count requirements are more severe than for ordinary least squares: about four times more memory and, if the problem involves n unknowns, 15n + 4 more arithmetic operations. The method, applicable in any situation where ordinary least squares is relevant, including the estimation of scaled variables, is applied to three examples, one artificial and two taken from astronomy: the estimation of various parameters of Galactic kinematics, and the differential correction of a planetary orbit. In these two examples the results from total least squares are superior to those from ordinary least squares.
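    The classical SVD construction of total least squares is compact enough to sketch; below it runs on artificial data with noise in both the design matrix and the observations (the scaling choices the astronomical applications require are not reproduced):

        # Hedged sketch: total least squares via the SVD of the stacked matrix [A | b].
        import numpy as np

        rng = np.random.default_rng(3)
        n, p = 200, 2
        A_true = rng.normal(size=(n, p))
        x_true = np.array([1.5, -0.7])
        A = A_true + 0.05 * rng.normal(size=(n, p))      # error in the conditions
        b = A_true @ x_true + 0.05 * rng.normal(size=n)  # error in the data

        # The TLS solution comes from the last right singular vector of [A | b]
        # (assuming its final entry is nonzero).
        _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
        v = Vt[-1]
        x_tls = -v[:p] / v[p]
        x_ols = np.linalg.lstsq(A, b, rcond=None)[0]
        print("TLS:", x_tls, " OLS:", x_ols)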

  15. Multivariate Piecewise Exponential Survival Modeling

    PubMed Central

    Li, Yan; Panagiotou, Orestis A.; Black, Amanda; Liao, Dandan; Wacholder, Sholom

    2016-01-01

    Summary In this article, we develop a piecewise Poisson regression method to analyze survival data from complex sample surveys involving cluster correlation, differential selection probabilities, and longitudinal responses, to conveniently draw inference on absolute risks in time intervals that are prespecified by investigators. Extensive simulations evaluate the developed methods, with extensions to multiple covariates, under various complex sample designs, including stratified sampling, sampling with selection probability proportional to a measure of size (PPS), and multi-stage cluster sampling. We applied our methods to a study of mortality in men diagnosed with prostate cancer in the Prostate, Lung, Colorectal, and Ovarian (PLCO) cancer screening trial to investigate whether a biomarker available from biospecimens collected near the time of diagnosis stratifies subsequent risk of death. Poisson regression coefficients and absolute risks of mortality (and the corresponding 95% confidence intervals) for prespecified age intervals by biomarker levels are estimated. We conclude with a brief discussion of the motivation, methods, and findings of the study. PMID:26583951

  16. Acoustic calibration apparatus for calibrating plethysmographic acoustic pressure sensors

    NASA Technical Reports Server (NTRS)

    Zuckerwar, Allan J. (Inventor); Davis, David C. (Inventor)

    1995-01-01

    An apparatus for calibrating an acoustic sensor is described. The apparatus includes a transmission material having an acoustic impedance approximately matching the acoustic impedance of the actual acoustic medium existing when the acoustic sensor is applied under actual in-service conditions. An elastic container holds the transmission material. A first sensor is coupled to the container at a first location on the container, and a second sensor is coupled to the container at a second location, different from the first. A sound producing device is coupled to the container and transmits acoustic signals inside the container.

  17. Calibration method for spectroscopic systems

    DOEpatents

    Sandison, David R.

    1998-01-01

    Calibration spots of optically-characterized material placed in the field of view of a spectroscopic system allow calibration of the spectroscopic system. Response from the calibration spots is measured and used to calibrate for varying spectroscopic system operating parameters. The accurate calibration achieved allows quantitative spectroscopic analysis of responses taken at different times, different excitation conditions, and of different targets.

  18. Calibration method for spectroscopic systems

    DOEpatents

    Sandison, D.R.

    1998-11-17

    Calibration spots of optically-characterized material placed in the field of view of a spectroscopic system allow calibration of the spectroscopic system. Response from the calibration spots is measured and used to calibrate for varying spectroscopic system operating parameters. The accurate calibration achieved allows quantitative spectroscopic analysis of responses taken at different times, different excitation conditions, and of different targets. 3 figs.

  19. Determination of sulphate in water and biodiesel samples by a sequential injection analysis--multivariate curve resolution method.

    PubMed

    del Río, Vanessa; Larrechi, M Soledad; Callao, M Pilar

    2010-08-31

    A spectrophotometric sequential injection analysis (SIA-DAD) method linked to multivariate curve resolution-alternating least squares (MCR-ALS) has been developed for sulphate determination. This method involves the reaction, inside the tubes of the SIA system, of sulphate with the barium-dimethylsulphonazo (III) complex, Ba-DMSA (III), displacing Ba(2+) from the complex and forming DMSA (III). When the reaction products reach the detector, a data matrix is obtained, which allows a second-order calibration to be developed. The experimental conditions (concentration and sample and reagent volumes) giving the highest sensitivity were chosen by applying a 2^(4-1) fractional factorial design. The proposed sequential flow procedure permits up to 15 mg SO(4)(2-) L(-1) to be determined, with a limit of detection of 1.42 mg L(-1), and it is able to monitor sulphate in samples at a frequency of 15 samples per hour. The method was applied to determine sulphate in natural and residual waters and in biodiesel. The reliability of the method was established for water samples by parallel determination using a standard turbidimetric method for sulphate in natural and residual water samples, with results within statistical variation. For biodiesel samples, the method was validated by comparing the concentration of some spiked samples with the expected concentration using a t-test. PMID:20800738

  20. Implicit and Explicit Spacecraft Gyro Calibration

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.; Harman, Richard R.

    2004-01-01

    This paper presents a comparison between two approaches to sensor calibration. According to one approach, called explicit, an estimator compares the sensor readings to reference readings, and uses the difference between the two to estimate the calibration parameters. According to the other approach, called implicit, the sensor error is integrated to form a different entity, which is then compared with a reference quantity of this entity, and the calibration parameters are inferred from the difference. In particular this paper presents the comparison between these approaches when applied to in-flight spacecraft gyro calibration. Reference spacecraft rate is needed for gyro calibration when using the explicit approach; however, such reference rates are not readily available for in-flight calibration. Therefore the calibration parameter-estimator is expanded to include the estimation of that reference rate, which is based on attitude measurements in the form of attitude-quaternion. A comparison between the two approaches is made using simulated data. It is concluded that the performances of the two approaches are basically comparable. Sensitivity tests indicate that the explicit filter results are essentially insensitive to variations in given spacecraft dynamics model parameters.
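    A toy illustration of the explicit approach: given a known reference rate, a gyro scale factor and bias follow from ordinary least squares. The paper's filters additionally estimate the reference rate from attitude-quaternion measurements, which this sketch does not attempt; all signals below are invented.

        # Hedged sketch: explicit gyro calibration against a known reference rate.
        import numpy as np

        rng = np.random.default_rng(9)
        w_ref = 0.02 * np.sin(np.linspace(0, 10, 500))  # reference rate, rad/s
        w_meas = 1.03 * w_ref + 1e-3 + 1e-4 * rng.standard_normal(500)  # scale, bias, noise

        # Least squares on [rate, 1] recovers scale factor and bias
        A = np.column_stack([w_ref, np.ones_like(w_ref)])
        (scale, bias), *_ = np.linalg.lstsq(A, w_meas, rcond=None)
        print(f"scale factor ~ {scale:.3f}, bias ~ {bias:.2e} rad/s")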

  1. Calibration of Cryogenic Thermometers for the Lhc

    NASA Astrophysics Data System (ADS)

    Balle, Ch.; Casas-Cubillos, J.; Vauthier, N.; Thermeau, J. P.

    2008-03-01

    6000 cryogenic temperature sensors of resistive type, covering the range from room temperature down to 1.6 K, are installed on the LHC machine. In order to meet the stringent requirements on temperature control of the superconducting magnets, each sensor needs to be calibrated individually. In the framework of a special contribution, IPN (Institut de Physique Nucléaire) in Orsay, France, built and operated a calibration facility with a throughput of 80 thermometers per week. After reception from the manufacturer, each thermometer is first assembled onto a support specific to the measurement environment, then thermally cycled ten times and calibrated at least once from 1.6 to 300 K. The procedure for each of these interventions includes various measurements, and the acquired data are recorded in an ORACLE® database. Furthermore, random calibrations of some samples are performed at CERN to cross-check the consistency of the approximation data obtained by IPN and CERN. In the range of 1.5 K to 30 K, the calibration apparatuses at IPN and CERN are traceable to standards maintained in a national metrological laboratory through a set of rhodium-iron temperature sensors of metrological quality. This paper presents the calibration procedure, the quality assurance applied, the results of the calibration campaigns, and the lessons learned.

  2. Revised landsat-5 thematic mapper radiometric calibration

    USGS Publications Warehouse

    Chander, G.; Markham, B.L.; Barsi, J.A.

    2007-01-01

    Effective April 2, 2007, the radiometric calibration of Landsat-5 (L5) Thematic Mapper (TM) data that are processed and distributed by the U.S. Geological Survey (USGS) Center for Earth Resources Observation and Science (EROS) will be updated. The lifetime gain model that was implemented on May 5, 2003, for the reflective bands (1-5, 7) will be replaced by a new lifetime radiometric-calibration curve that is derived from the instrument's response to pseudoinvariant desert sites and from cross calibration with the Landsat-7 (L7) Enhanced TM Plus (ETM+). Although this calibration update applies to all archived and future L5 TM data, the principal improvements in the calibration are for the data acquired during the first eight years of the mission (1984-1991), where the changes in the instrument-gain values are as much as 15%. The radiometric scaling coefficients for bands 1 and 2 for approximately the first eight years of the mission have also been changed. Users will need to apply these new coefficients to convert the calibrated data product digital numbers to radiance. The scaling coefficients for the other bands have not changed. ?? 2007 IEEE.
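    The conversion users must perform is a linear rescaling of each band's digital numbers; a sketch with placeholder coefficients (not the published L5 TM constants) follows:

        # Hedged sketch: calibrated DN to at-sensor spectral radiance.
        import numpy as np

        dn = np.array([[45, 120], [200, 255]], dtype=float)  # calibrated digital numbers

        gain = 0.7658   # placeholder band gain, W/(m^2 sr um) per DN
        bias = -2.29    # placeholder band offset, W/(m^2 sr um)

        radiance = gain * dn + bias   # apply the band's rescaling coefficients
        print(radiance)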

  3. Comparing G: multivariate analysis of genetic variation in multiple populations.

    PubMed

    Aguirre, J D; Hine, E; McGuigan, K; Blows, M W

    2014-01-01

    The additive genetic variance-covariance matrix (G) summarizes the multivariate genetic relationships among a set of traits. The geometry of G describes the distribution of multivariate genetic variance, and generates genetic constraints that bias the direction of evolution. Determining if and how the multivariate genetic variance evolves has been limited by a number of analytical challenges in comparing G-matrices. Current methods for the comparison of G typically share several drawbacks: metrics that lack a direct relationship to evolutionary theory, the inability to be applied in conjunction with complex experimental designs, difficulties with determining statistical confidence in inferred differences, and an inherently pairwise focus. Here, we present a cohesive and general analytical framework for the comparative analysis of G that addresses these issues, and that incorporates and extends current methods with a strong geometrical basis. We describe the application of random skewers, common subspace analysis, the fourth-order genetic covariance tensor and the decomposition of the multivariate breeder's equation, all within a Bayesian framework. We illustrate these methods using data from an artificial selection experiment on eight traits in Drosophila serrata, where a multi-generational pedigree was available to estimate G in each of six populations. One method, the tensor, elegantly captures all of the variation in genetic variance among populations, and allows the identification of the trait combinations that differ most in genetic variance. The tensor approach is likely to be the most generally applicable method for the comparison of G-matrices from any sampling or experimental design. PMID:23486079

  4. Collision prediction models using multivariate Poisson-lognormal regression.

    PubMed

    El-Basyouny, Karim; Sayed, Tarek

    2009-07-01

    This paper advocates the use of multivariate Poisson-lognormal (MVPLN) regression to develop models for collision count data. The MVPLN approach presents an opportunity to incorporate the correlations across collision severity levels and their influence on safety analyses. The paper introduces a new multivariate hazardous location identification technique, which generalizes the univariate posterior probability of excess that has been commonly proposed and applied in the literature. In addition, the paper presents an alternative approach for quantifying the effect of the multivariate structure on the precision of expected collision frequency. The MVPLN approach is compared with independent (separate) univariate Poisson-lognormal (PLN) models with respect to model inference, goodness-of-fit, identification of hot spots and precision of expected collision frequency. The MVPLN model is estimated using the WinBUGS platform, which facilitates computation of posterior distributions and provides a goodness-of-fit measure for model comparisons. The results indicate that the estimates of the extra Poisson variation parameters were considerably smaller under MVPLN, leading to higher precision. The improvement in precision is due mainly to the fact that MVPLN accounts for the correlation between the latent variables representing property damage only (PDO) and injuries plus fatalities (I+F). This correlation was estimated at 0.758, which is highly significant, suggesting that higher PDO rates are associated with higher I+F rates, as the collision likelihood for both types is likely to rise due to similar deficiencies in roadway design and/or other unobserved factors. In terms of goodness-of-fit, the MVPLN model provided a superior fit compared with the independent univariate models. The multivariate hazardous location identification results demonstrated that some hazardous locations could be overlooked if the analysis were restricted to the univariate models. PMID:19540972
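    To make the model structure concrete, the sketch below simulates bivariate Poisson-lognormal counts for PDO and I+F severities with correlated lognormal site effects; the latent standard deviations are invented, and the model fitting itself (done in WinBUGS in the paper) is not shown:

        # Hedged sketch: simulate correlated (PDO, I+F) counts under an MVPLN model.
        import numpy as np

        rng = np.random.default_rng(4)
        n_sites = 500
        mu = np.log([4.0, 1.0])    # baseline log-rates for PDO and I+F
        corr = 0.758               # latent correlation reported in the paper
        sd = np.array([0.5, 0.6])  # hypothetical latent standard deviations
        cov = np.outer(sd, sd) * np.array([[1.0, corr], [corr, 1.0]])

        latent = rng.multivariate_normal(mu, cov, size=n_sites)
        counts = rng.poisson(np.exp(latent))  # site-level severity counts
        print(np.corrcoef(counts.T)[0, 1])    # induced correlation between severities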

  5. A direct-gradient multivariate index of biotic condition

    USGS Publications Warehouse

    Miranda, Leandro E.; Aycock, J.N.; Killgore, K. J.

    2012-01-01

    Multimetric indexes constructed by summing metric scores have been criticized despite many of their merits. A leading criticism is the potential for investigator bias involved in metric selection and scoring. Often there is a large number of competing metrics equally well correlated with environmental stressors, requiring a judgment call by the investigator to select the most suitable metrics to include in the index and how to score them. Data-driven procedures for multimetric index formulation published during the last decade have reduced this limitation, yet apprehension remains. Multivariate approaches that select metrics with statistical algorithms may reduce the level of investigator bias and alleviate a weakness of multimetric indexes. We investigated the suitability of a direct-gradient multivariate procedure to derive an index of biotic condition for fish assemblages in oxbow lakes in the Lower Mississippi Alluvial Valley. Although this multivariate procedure also requires that the investigator identify a set of suitable metrics potentially associated with a set of environmental stressors, it is different from multimetric procedures because it limits investigator judgment in selecting a subset of biotic metrics to include in the index and because it produces metric weights suitable for computation of index scores. The procedure, applied to a sample of 35 competing biotic metrics measured at 50 oxbow lakes distributed over a wide geographical region in the Lower Mississippi Alluvial Valley, selected 11 metrics that adequately indexed the biotic condition of five test lakes. Because the multivariate index includes only metrics that explain the maximum variability in the stressor variables rather than a balanced set of metrics chosen to reflect various fish assemblage attributes, it is fundamentally different from multimetric indexes of biotic integrity with advantages and disadvantages. As such, it provides an alternative to multimetric procedures.

  6. A practical approach for linearity assessment of calibration curves under the International Union of Pure and Applied Chemistry (IUPAC) guidelines for an in-house validation of method of analysis.

    PubMed

    Sanagi, M Marsin; Nasir, Zalilah; Ling, Susie Lu; Hermawan, Dadan; Ibrahim, Wan Aini Wan; Naim, Ahmedy Abu

    2010-01-01

    Linearity assessment as required in method validation has always been subject to different interpretations and definitions by various guidelines and protocols. However, very few applicable implementation procedures are available that a laboratory chemist can follow in assessing linearity. Thus, this work proposes a simple method for linearity assessment in method validation by a regression analysis that covers experimental design, estimation of the parameters, outlier treatment, and evaluation of the assumptions according to the International Union of Pure and Applied Chemistry guidelines. The suitability of this procedure was demonstrated by its application to an in-house validation for the determination of plasticizers in plastic food packaging by GC. PMID:20922968
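    A minimal sketch in the spirit of the proposed procedure: fit the calibration line by least squares and test lack of fit against pure error from replicate standards. The data are invented, and the sketch covers only the fitting and lack-of-fit steps, not the full outlier treatment:

        # Hedged sketch: calibration-line fit plus a lack-of-fit F-test.
        import numpy as np
        from scipy import stats

        conc = np.repeat([0.5, 1.0, 2.0, 4.0, 8.0], 3)  # replicated standards
        resp = 10.2 * conc + 1.1 + np.random.default_rng(5).normal(0, 0.8, conc.size)

        fit = stats.linregress(conc, resp)
        resid = resp - (fit.intercept + fit.slope * conc)

        # Pure error: replicate scatter within levels; lack of fit: the rest
        levels = np.unique(conc)
        ss_pe = sum(((resp[conc == c] - resp[conc == c].mean()) ** 2).sum()
                    for c in levels)
        ss_lof = (resid ** 2).sum() - ss_pe
        df_lof, df_pe = len(levels) - 2, conc.size - len(levels)
        F = (ss_lof / df_lof) / (ss_pe / df_pe)
        p = 1 - stats.f.cdf(F, df_lof, df_pe)
        print(f"r^2 = {fit.rvalue**2:.4f}, lack-of-fit F = {F:.2f}, p = {p:.3f}")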

  7. New technique for calibrating hydrocarbon gas flowmeters

    NASA Technical Reports Server (NTRS)

    Singh, J. J.; Puster, R. L.

    1984-01-01

    A technique for measuring calibration correction factors for hydrocarbon mass flowmeters is described. It is based on the Nernst theorem for matching the partial pressure of oxygen in the combustion products of the test hydrocarbon, burned in oxygen-enriched air, with that in normal air. It is applied to a widely used type of commercial thermal mass flowmeter for a number of hydrocarbons. The calibration correction factors measured using this technique are in good agreement with the values obtained by other independent procedures. The technique is successfully applied to the measurement of differences as low as one percent of the effective hydrocarbon content of the natural gas test samples.

  8. What's new in multivariable predictive control

    SciTech Connect

    Colwell, L.W.; Poe, W.A.; Papadopoulos, M.N.; Gamez, J.P.

    1995-11-01

    Multivariable control techniques have been successfully applied to a variety of gas processing operations. The technology has been applied to CO2 recovery towers, cryogenic demethanizers, lean oil absorbers, rich oil demethanizers, rich oil stills, deethanizers, depropanizers, deisobutanizers, amine treaters, sulfur recovery units, nitrogen rejection units and compressors. The system has been developed with a modular structure and employs process-model-based predictions of key plant variables. Modules for each type of operation are available and, with minimal modification, can be applied to a specific unit, since the key plant variables are usually common between plants and are affected by similar disturbances. Adaptive nonlinear multivariable control models allow continuous operation at optimum conditions within plant constraints. In most applications a personal computer (PC) containing the control software and supervisory control and data acquisition (SCADA) system operates under a UNIX operating system and interfaces with the plant's existing control system. The PC-based system dispatches setpoints that have been calculated to optimize the profitability of the plant on-line. A typical project can be implemented in 4-6 months with a payout of less than a year by increasing natural gas liquids (NGL) revenues and decreasing plant operating costs. This paper describes the technology and the initial installation results.

  9. Gemini facility calibration unit

    NASA Astrophysics Data System (ADS)

    Ramsay-Howat, Suzanne K.; Harris, John W.; Gostick, David C.; Laidlaw, Ken; Kidd, Norrie; Strachan, Mel; Wilson, Ken

    2000-08-01

    High-quality, efficient calibration instruments are a prerequisite for the modern observatory. Each of the Gemini telescopes will be equipped with identical facility calibration units (GCALs) designed to provide wavelength and flat-field calibrations for the suite of instruments. The broad range of instrumentation planned for the telescopes heavily constrains the design of GCAL. Short calibration exposures are required over wavelengths from 0.3 to 5 micrometers, field sizes up to 7 arcminutes, and spectral resolutions from R~5 to 50,000. The output from GCAL must mimic the f/16 beam of the telescope and provide uniform illumination of the focal plane. The calibration units are mounted on the Gemini Instrument Support Structure, two meters from the focal plane, necessitating the use of large optical components. We discuss the opto-mechanical design of the Gemini calibration unit, with reference to those features which allow these stringent requirements to be met. A novel reflector/diffuser unit replaces the integrating sphere more normally found in calibration systems; the efficiency of this system is an order of magnitude greater than for an integrating sphere. A system of two off-axis mirrors reproduces the telescope pupil and provides the 7 arcminute focal plane. The results of laboratory tests of the uniformity and throughput of the GCAL will be presented.

  10. The COS Calibration Pipeline

    NASA Astrophysics Data System (ADS)

    Hodge, Philip E.; Kaiser, M. E.; Keyes, C. D.; Ake, T. B.; Aloisi, A.; Friedman, S. D.; Oliveira, C. M.; Shaw, B.; Sahnow, D. J.; Penton, S. V.; Froning, C. S.; Beland, S.; Osterman, S.; Green, J.; COS/STIS STScI Team; IDT, COS

    2008-05-01

    The Cosmic Origins Spectrograph, COS, (Green, J, et al., 2000, Proc SPIE, 4013) will be installed in the Hubble Space Telescope (HST) during the next servicing mission. This will be the most sensitive ultraviolet spectrograph ever flown aboard HST. The program (CALCOS) for pipeline calibration of HST/COS data has been developed by the Space Telescope Science Institute. As with other HST pipelines, CALCOS uses an association table to list the data files to be included, and it employs header keywords to specify the calibration steps to be performed and the reference files to be used. COS includes both a cross delay line detector for the far ultraviolet (FUV) and a MAMA detector for the near ultraviolet (NUV). CALCOS uses a common structure for both channels, but the specific calibration steps differ. The calibration steps include pulse-height filtering and geometric correction for FUV, and flat-field, deadtime, and Doppler correction for both detectors. A 1-D spectrum will be extracted and flux calibrated. Data will normally be taken in TIME-TAG mode, recording the time and location of each detected photon, although ACCUM mode will also be supported. The wavelength calibration uses an on-board spectral line lamp. To enable precise wavelength calibration, default operations will simultaneously record the science target and lamp spectrum by executing brief (tag-flash) lamp exposures at least once per external target exposure.

  11. Steady state decoupling and design of linear multivariable systems

    NASA Technical Reports Server (NTRS)

    Huang, J. Y.; Thaler, G. J.

    1974-01-01

    A constructive criterion for decoupling the steady states of linear multivariable systems is developed. The criterion consists of n(n-1) inequalities with the type numbers of the compensator transfer functions as the unknowns. These unknowns can be chosen to satisfy the inequalities and hence achieve a steady state decoupling scheme. It turns out that pure integrators in the loops play an important role. An extended root locus design method is then developed to take care of the stability and transient response. The overall procedure is applied to the compensation design for STOL C-8A aircraft in the approach mode.

  12. Technical note: Multiple wavelet coherence for untangling scale-specific and localized multivariate relationships in geosciences

    NASA Astrophysics Data System (ADS)

    Hu, Wei; Si, Bing Cheng

    2016-08-01

    The scale-specific and localized bivariate relationships in geosciences can be revealed using bivariate wavelet coherence. The objective of this study was to develop a multiple wavelet coherence method for examining scale-specific and localized multivariate relationships. Stationary and non-stationary artificial data sets, generated with the response variable as the summation of five predictor variables (cosine waves) with different scales, were used to test the new method. Comparisons were also conducted using existing multivariate methods, including multiple spectral coherence and multivariate empirical mode decomposition (MEMD). Results show that multiple spectral coherence is unable to identify localized multivariate relationships, and underestimates the scale-specific multivariate relationships for non-stationary processes. The MEMD method was able to separate all variables into components at the same set of scales, revealing scale-specific relationships when combined with multiple correlation coefficients, but has the same weakness as multiple spectral coherence. However, multiple wavelet coherences are able to identify scale-specific and localized multivariate relationships, as they are close to 1 at multiple scales and locations corresponding to those of predictor variables. Therefore, multiple wavelet coherence outperforms other common multivariate methods. Multiple wavelet coherence was applied to a real data set and revealed the optimal combination of factors for explaining temporal variation of free water evaporation at the Changwu site in China at multiple scale-location domains. Matlab codes for multiple wavelet coherence were developed and are provided in the Supplement.
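    The artificial test set is easy to reproduce: the sketch below builds a response as the sum of five cosine predictors at different scales, plus a non-stationary variant with time-localized components (the wavelet machinery itself is omitted, and the lengths and periods are arbitrary choices):

        # Hedged sketch: artificial data for testing multiple wavelet coherence.
        import numpy as np

        t = np.arange(1024, dtype=float)
        scales = [8, 16, 32, 64, 128]      # periods of the five predictors
        predictors = np.array([np.cos(2 * np.pi * t / s) for s in scales])
        response = predictors.sum(axis=0)  # stationary case: sum of all predictors

        # Non-stationary variant: each predictor dominates one time segment
        segments = np.array_split(np.arange(t.size), len(scales))
        response_ns = np.zeros_like(t)
        for comp, idx in zip(predictors, segments):
            response_ns[idx] = comp[idx]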

  13. Depth-based hotspot identification and multivariate ranking using the full Bayes approach.

    PubMed

    El-Basyouny, Karim; Sayed, Tarek

    2013-01-01

    Although the multivariate structure of traffic accidents has been recognized in the safety literature for over a decade now, univariate identification and ranking of hotspots is still dominant. The present paper advocates the use of multivariate identification and ranking of hotspots based on statistical depth functions, which are useful tools for non-parametric multivariate analysis as they provide center-out ordering of multivariate data. Thus, a depth-based multivariate method is proposed for the identification and ranking of hotspots using the full Bayes (FB) approach. The proposed method is applied to a sample of 236 signalized intersections in the Greater Vancouver Area. Various multivariate Poisson log-normal (MVPLN) models were used for data analysis. For each model, the FB posterior estimates were obtained using Markov chain Monte Carlo (MCMC) techniques, and several goodness-of-fit measures were used for model selection. Using a depth threshold of 0.025, the proposed method identified 26 intersections (11%) as potential hotspots. The choice of a depth threshold is a delicate decision, and it is suggested that the threshold be determined according to the amount of funding available for safety improvement, which is the usual practice in univariate hotspot identification (HSID). Also, the results show that the performance of the proposed multivariate depth-based FB HSID method is superior to that of an analogous method based on the depths of accident frequency (AF) in terms of sensitivity, specificity and the sum of norms (lengths) of Poisson mean vectors. PMID:23018036
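    To convey how a depth function yields center-out ordering, the sketch below ranks toy (PDO, I+F) accident-frequency pairs by a simple Mahalanobis depth; the paper instead evaluates depths within a full Bayes MVPLN posterior, which is not reproduced here:

        # Hedged sketch: depth-based center-out ranking of intersections.
        import numpy as np

        rng = np.random.default_rng(6)
        af = rng.poisson([8, 2], size=(236, 2)).astype(float)  # toy (PDO, I+F) counts

        mu = af.mean(axis=0)
        prec = np.linalg.inv(np.cov(af.T))
        d2 = np.einsum('ij,jk,ik->i', af - mu, prec, af - mu)
        depth = 1.0 / (1.0 + d2)  # Mahalanobis depth: large = central

        # Extreme (low-depth) sites that exceed the mean in both severities
        # are candidate hotspots.
        order = np.argsort(depth)
        hotspots = [i for i in order if (af[i] > mu).all()][:26]
        print(hotspots[:10])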

  14. AUTOMATIC CALIBRATING SYSTEM FOR PRESSURE TRANSDUCERS

    DOEpatents

    Amonette, E.L.; Rodgers, G.W.

    1958-01-01

    An automatic system for calibrating a number of pressure transducers is described. The disclosed embodiment of the invention uses a mercurial manometer to measure the air pressure applied to the transducer. A servo system follows the top of the mercury column as the pressure is changed and operates an analog-to-digital converter. This converter furnishes electrical pulses, each representing an increment of pressure change, to a reversible counter. The transducer furnishes a signal at each calibration point, causing an electric typewriter and a card-punch machine to record the pressure at that instant as indicated by the counter. Another counter keeps track of the calibration points so that a number identifying each point is recorded with the corresponding pressure. A special relay control system controls the pressure trend and programs the sequential calibration of several transducers.

  15. Jet energy calibration at the LHC

    SciTech Connect

    Schwartzman, Ariel

    2015-11-10

    In this study, jets are one of the most prominent physics signatures of high energy proton–proton (p–p) collisions at the Large Hadron Collider (LHC). They are key physics objects for precision measurements and searches for new phenomena. This review provides an overview of the reconstruction and calibration of jets at the LHC during its first Run. ATLAS and CMS developed different approaches for the reconstruction of jets, but use similar methods for the energy calibration. ATLAS reconstructs jets utilizing input signals from their calorimeters and use charged particle tracks to refine their energy measurement and suppress the effects of multiple p–p interactions (pileup). CMS, instead, combines calorimeter and tracking information to build jets from particle flow objects. Jets are calibrated using Monte Carlo (MC) simulations and a residual in situ calibration derived from collision data is applied to correct for the differences in jet response between data and Monte Carlo.

  16. Jet energy calibration at the LHC

    DOE PAGESBeta

    Schwartzman, Ariel

    2015-11-10

    In this study, jets are one of the most prominent physics signatures of high energy proton–proton (p–p) collisions at the Large Hadron Collider (LHC). They are key physics objects for precision measurements and searches for new phenomena. This review provides an overview of the reconstruction and calibration of jets at the LHC during its first Run. ATLAS and CMS developed different approaches for the reconstruction of jets, but use similar methods for the energy calibration. ATLAS reconstructs jets utilizing input signals from their calorimeters and use charged particle tracks to refine their energy measurement and suppress the effects of multiple p–p interactions (pileup). CMS, instead, combines calorimeter and tracking information to build jets from particle flow objects. Jets are calibrated using Monte Carlo (MC) simulations and a residual in situ calibration derived from collision data is applied to correct for the differences in jet response between data and Monte Carlo.

  17. Calibrating page sized Gafchromic EBT3 films

    SciTech Connect

    Crijns, W.; Maes, F.; Heide, U. A. van der; Van den Heuvel, F.

    2013-01-15

    Purpose: The purpose is the development of a novel calibration method for dosimetry with Gafchromic EBT3 films. The method should be applicable for pretreatment verification of volumetric modulated arc and intensity modulated radiotherapy. Because the exposed area on film can be large for such treatments, lateral scan errors must be taken into account. The correction for the lateral scan effect is obtained from the calibration data itself. Methods: In this work, the film measurements were modeled using their relative scan values (transmittance, T). Inside the transmittance domain, a linear combination and a parabolic lateral scan correction described the observed transmittance values. The linear combination model combined a monomer transmittance state (T_0) and a polymer transmittance state (T_infinity) of the film. The dose domain was associated with the observed effects in the transmittance domain through a rational calibration function. On the calibration film only simple static fields were applied, and page-sized films were used for calibration and measurements (treatment verification). Four different calibration setups were considered and compared with respect to dose estimation accuracy. The first (I) used a calibration table from 32 regions of interest (ROIs) spread over 4 calibration films, the second (II) used 16 ROIs spread over 2 calibration films, and the third (III) and fourth (IV) used 8 ROIs spread over a single calibration film. The calibration tables of setups I, II, and IV contained eight dose levels delivered to different positions on the films, while for setup III only four dose levels were applied. Validation was performed by irradiating film strips with known doses at two different time points over the course of a week. Accuracy of the dose response and the lateral effect correction was estimated using the dose difference and the root mean squared error (RMSE), respectively. Results: A calibration based on two films was the optimal

  18. DIRBE External Calibrator (DEC)

    NASA Technical Reports Server (NTRS)

    Wyatt, Clair L.; Thurgood, V. Alan; Allred, Glenn D.

    1987-01-01

    Under NASA Contract No. NAS5-28185, the Center for Space Engineering at Utah State University has produced a calibration instrument for the Diffuse Infrared Background Experiment (DIRBE). DIRBE is one of the instruments aboard the Cosmic Background Experiment Observatory (COBE). The calibration instrument is referred to as the DEC (Dirbe External Calibrator). DEC produces a steerable, infrared beam of controlled spectral content and intensity and with selectable point source or diffuse source characteristics, that can be directed into the DIRBE to map fields and determine response characteristics. This report discusses the design of the DEC instrument, its operation and characteristics, and provides an analysis of the systems capabilities and performance.

  19. Airdata Measurement and Calibration

    NASA Technical Reports Server (NTRS)

    Haering, Edward A., Jr.

    1995-01-01

    This memorandum provides a brief introduction to airdata measurement and calibration. Readers will learn about typical test objectives, quantities to measure, and flight maneuvers and operations for calibration. The memorandum informs readers about tower-flyby, trailing cone, pacer, radar-tracking, and dynamic airdata calibration maneuvers. Readers will also begin to understand how some data analysis considerations and special airdata cases, including high-angle-of-attack flight, high-speed flight, and nonobtrusive sensors are handled. This memorandum is not intended to be all inclusive; this paper contains extensive reference and bibliography sections.

  20. Lidar Calibration Centre

    NASA Astrophysics Data System (ADS)

    Pappalardo, Gelsomina; Freudenthaler, Volker; Nicolae, Doina; Mona, Lucia; Belegante, Livio; D'Amico, Giuseppe

    2016-06-01

    This paper presents the newly established Lidar Calibration Centre, a distributed infrastructure in Europe, whose goal is to offer services for complete characterization and calibration of lidars and ceilometers. Mobile reference lidars, laboratories for testing and characterization of optics and electronics, facilities for inspection and debugging of instruments, as well as for training in good practices are open to users from the scientific community, operational services and private sector. The Lidar Calibration Centre offers support for trans-national access through the EC HORIZON2020 project ACTRIS-2.

  1. Compact radiometric microwave calibrator

    SciTech Connect

    Fixsen, D. J.; Wollack, E. J.; Kogut, A.; Limon, M.; Mirel, P.; Singal, J.; Fixsen, S. M.

    2006-06-15

    The calibration methods for the ARCADE II instrument are described and the accuracy estimated. The Steelcast-coated aluminum cones which comprise the calibrator have low reflection while maintaining 94% of the absorber volume within 5 mK of the base temperature (modeled). The calibrator demonstrates an absorber with the active part less than one wavelength thick and only marginally larger than the mouth of the largest horn, and yet black (less than -40 dB, or 0.01%, reflection) over five octaves in frequency.

  2. Multivariate analysis: A statistical approach for computations

    NASA Astrophysics Data System (ADS)

    Michu, Sachin; Kaushik, Vandana

    2014-10-01

    Multivariate analysis is a statistical approach commonly used in automotive diagnosis, education, cluster evaluation in finance, and, more recently, the health-related professions. The objective of the paper is to provide a detailed exploratory discussion of factor analysis (FA) in image retrieval and correlation analysis (CA) of network traffic. Image retrieval methods aim to retrieve relevant images from a collected database, based on their content. The problem is made more difficult by the high dimension of the variable space in which the images are represented. Multivariate correlation analysis proposes an anomaly detection and analysis method based on the correlation coefficient matrix. Anomalous behaviors in the network include the various attacks on the network, such as DDoS attacks and network scanning.
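    A small sketch of correlation-matrix-based anomaly detection in the spirit of the CA method mentioned above: score each traffic window by how far its feature correlation matrix drifts from a baseline. The features, window sizes, and attack signature are all invented:

        # Hedged sketch: flag a traffic window whose correlation structure shifts.
        import numpy as np

        rng = np.random.default_rng(8)
        cov_normal = np.eye(3)                               # uncoupled features
        cov_attack = np.full((3, 3), 0.9) + 0.1 * np.eye(3)  # strongly coupled features

        # 50 benign windows plus 1 attack window, each 200 samples of 3 features
        windows = [rng.multivariate_normal(np.zeros(3), cov_normal, 200)
                   for _ in range(50)]
        windows.append(rng.multivariate_normal(np.zeros(3), cov_attack, 200))

        base = np.mean([np.corrcoef(w.T) for w in windows[:50]], axis=0)
        scores = [np.linalg.norm(np.corrcoef(w.T) - base) for w in windows]
        print(int(np.argmax(scores)))   # index 50: the injected window stands out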

  3. A multivariable control scheme for robot manipulators

    NASA Technical Reports Server (NTRS)

    Tarokh, M.; Seraji, H.

    1991-01-01

    The article puts forward a simple scheme for multivariable control of robot manipulators to achieve trajectory tracking. The scheme is composed of an inner loop stabilizing controller and an outer loop tracking controller. The inner loop utilizes a multivariable PD controller to stabilize the robot by placing the poles of the linearized robot model at some desired locations. The outer loop employs a multivariable PID controller to achieve input-output decoupling and trajectory tracking. The gains of the PD and PID controllers are related directly to the linearized robot model by simple closed-form expressions. The controller gains are updated on-line to cope with variations in the robot model during gross motion and for payload change. Alternatively, the use of high gain controllers for gross motion and payload change is discussed. Computer simulation results are given for illustration.
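    A toy discrete PID update of the kind the outer tracking loop applies per joint, driven here against a unit-inertia joint model; the paper's closed-form pole-placement gain expressions are not reproduced, and the gains below are illustrative only:

        # Hedged sketch: per-joint PID tracking of a ramp reference.
        def pid_step(err, state, kp, ki, kd, dt):
            """One PID update; `state` carries (integral, previous error)."""
            integral, prev = state
            integral += err * dt
            deriv = (err - prev) / dt
            return kp * err + ki * integral + kd * deriv, (integral, err)

        dt, q, dq, state = 0.001, 0.0, 0.0, (0.0, 0.0)
        for k in range(2000):
            q_ref = 0.5 * k * dt                  # desired joint-angle ramp
            u, state = pid_step(q_ref - q, state, kp=400.0, ki=120.0, kd=40.0, dt=dt)
            dq += u * dt                          # unit-inertia joint: q'' = u
            q += dq * dt
        print(f"tracking error at t = 2 s: {q_ref - q:.4f} rad")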

  4. Biological sequence classification with multivariate string kernels.

    PubMed

    Kuksa, Pavel P

    2013-01-01

    String kernel-based machine learning methods have yielded great success in practical tasks of structured/sequential data analysis. They often exhibit state-of-the-art performance on many practical tasks of sequence analysis such as biological sequence classification, remote homology detection, or protein superfamily and fold prediction. However, typical string kernel methods rely on the analysis of discrete 1D string data (e.g., DNA or amino acid sequences). In this paper, we address multiclass biological sequence classification problems using multivariate representations in the form of sequences of feature vectors (as in biological sequence profiles, or sequences of individual amino acid physicochemical descriptors) and a class of multivariate string kernels that exploit these representations. On three protein sequence classification tasks, the proposed multivariate representations and kernels show significant 15-20 percent improvements compared to existing state-of-the-art sequence classification methods. PMID:24384708
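    For contrast with the multivariate kernels, the 1D baseline the paper generalizes, a plain k-mer (spectrum) kernel, is easy to state; the multivariate variants replace discrete symbols with per-position feature vectors, which this sketch does not attempt:

        # Hedged sketch: spectrum (k-mer count) kernel between two sequences.
        from collections import Counter

        def spectrum_kernel(s1, s2, k=3):
            """Dot product of the k-mer count vectors of two strings."""
            c1 = Counter(s1[i:i + k] for i in range(len(s1) - k + 1))
            c2 = Counter(s2[i:i + k] for i in range(len(s2) - k + 1))
            return sum(c1[m] * c2[m] for m in c1.keys() & c2.keys())

        print(spectrum_kernel("MKVLAAGIV", "MKVLSAGIV"))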

  5. Biological Sequence Analysis with Multivariate String Kernels.

    PubMed

    Kuksa, Pavel P

    2013-03-01

    String kernel-based machine learning methods have yielded great success in practical tasks of structured/sequential data analysis. They often exhibit state-of-the-art performance on many practical tasks of sequence analysis such as biological sequence classification, remote homology detection, or protein superfamily and fold prediction. However, typical string kernel methods rely on the analysis of discrete one-dimensional (1D) string data (e.g., DNA or amino acid sequences). In this work we address the multi-class biological sequence classification problems using multivariate representations in the form of sequences of feature vectors (as in biological sequence profiles, or sequences of individual amino acid physico-chemical descriptors) and a class of multivariate string kernels that exploit these representations. On a number of protein sequence classification tasks, the proposed multivariate representations and kernels show significant 15-20% improvements compared to existing state-of-the-art sequence classification methods. PMID:23509193

  6. Multivariate multiscale entropy for brain consciousness analysis.

    PubMed

    Ahmed, Mosabber Uddin; Li, Ling; Cao, Jianting; Mandic, Danilo P

    2011-01-01

    The recently introduced multiscale entropy (MSE) method accounts for long-range correlations over multiple time scales and can therefore reveal the complexity of biological signals. The existing MSE algorithm deals with scalar time series, whereas multivariate time series are common in experimental and biological systems. To that end, in this paper the MSE method is extended to the multivariate case. This allows us to gain greater insight into the complexity of the underlying signal-generating system, producing multifaceted and more robust estimates than standard single-channel MSE. Simulations on both synthetic data and brain consciousness analysis support the approach. PMID:22254434
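    The coarse-graining step at the heart of MSE is compact enough to sketch; below, a naive single-channel sample entropy is evaluated per scale (the paper's multivariate extension embeds all channels jointly and is not reproduced, and the tolerance convention here is simplified):

        # Hedged sketch: coarse-graining plus a naive sample-entropy estimate.
        import numpy as np

        def coarse_grain(x, scale):
            n = len(x) // scale
            return x[:n * scale].reshape(n, scale).mean(axis=1)

        def sample_entropy(x, m=2, r_factor=0.15):
            r = r_factor * x.std()
            def pairs(mm):
                emb = np.array([x[i:i + mm] for i in range(len(x) - mm)])
                d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
                return ((d <= r).sum() - len(emb)) / 2  # matched pairs, minus self-matches
            b, a = pairs(m), pairs(m + 1)
            return -np.log(a / b) if a > 0 and b > 0 else np.inf

        x = np.random.default_rng(7).standard_normal(1000)
        for scale in (1, 2, 4, 8):
            print(scale, round(sample_entropy(coarse_grain(x, scale)), 3))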

  7. Calibrated Properties Model

    SciTech Connect

    C.F. Ahlers, H.H. Liu

    2001-12-18

    The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the AMR Development Plan for U0035 Calibrated Properties Model REV00 (CRWMS M&O 1999c). These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models as well as Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions.

  8. SRAM Detector Calibration

    NASA Technical Reports Server (NTRS)

    Soli, G. A.; Blaes, B. R.; Beuhler, M. G.

    1994-01-01

    Custom proton sensitive SRAM chips are being flown on the BMDO Clementine missions and Space Technology Research Vehicle experiments. This paper describes the calibration procedure for the SRAM proton detectors and their response to the space environment.

  9. Roundness calibration standard

    DOEpatents

    Burrus, Brice M.

    1984-01-01

    A roundness calibration standard is provided with a first arc constituting the major portion of a circle and a second arc lying between the remainder of the circle and the chord extending between the ends of said first arc.

  10. Calibrated Properties Model

    SciTech Connect

    C. Ahlers; H. Liu

    2000-03-12

    The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the ''AMR Development Plan for U0035 Calibrated Properties Model REV00''. These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models as well as Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions.

  11. Forensic discrimination of blue ballpoint pens on documents by laser ablation inductively coupled plasma mass spectrometry and multivariate analysis.

    PubMed

    Alamilla, Francisco; Calcerrada, Matías; García-Ruiz, Carmen; Torre, Mercedes

    2013-05-10

    The differentiation of blue ballpoint pen inks written on documents through an LA-ICP-MS methodology is proposed. Small portions of common office paper containing ink strokes from 21 blue pens of known origin were cut and measured without any sample preparation. In a first step, Mg, Ca and Sr were proposed as internal standards (ISs) and used to normalize elemental intensities and subtract background signals from the paper. Then, specific criteria were designed and employed to identify target elements (Li, V, Mn, Co, Ni, Cu, Zn, Zr, Sn, W and Pb), which were independent of the IS chosen in 98% of the cases and allowed a qualitative clustering of the samples. In a second step, an elemental ratio (ink ratio) based on the targets previously identified was used to obtain mass-independent intensities and perform pairwise comparisons by means of multivariate statistical analyses (MANOVA, Tukey's HSD and Hotelling's T2). This treatment improved the discrimination power (DP) and provided objective results, achieving complete differentiation among different brands and partial differentiation within pen inks from the same brand. The designed data treatment, together with the use of multivariate statistical tools, represents an easy and useful way to differentiate among blue ballpoint pen inks, with hardly any sample destruction and without the need for methodological calibrations, making its use potentially advantageous from a forensic-practice standpoint. To test the procedure, it was applied to the analysis of real handwritten questioned contracts, previously studied by the Department of Forensic Document Exams of the Criminalistics Service of the Civil Guard (Spain). The results showed that all questioned ink entries were clustered in the same group, being different from the remaining ink on the document. PMID:23597731

  12. Multivariate distributions of soil hydraulic parameters

    NASA Astrophysics Data System (ADS)

    Qu, Wei; Pachepsky, Yakov; Huisman, Johan Alexander; Martinez, Gonzalo; Bogena, Heye; Vereecken, Harry

    2014-05-01

    Statistical distributions of soil hydraulic parameters have to be known when synthetic fields of soil hydraulic properties need to be generated in ensemble modeling of soil water dynamics and soil water content data assimilation. Pedotransfer functions that provide statistical distributions of water retention and hydraulic conductivity parameters for textural classes are most often used in the parameter field generation. The presence of strong correlations can substantially influence the parameter generation results. The objective of this work was to review and evaluate available data on correlations between van Genuchten-Mualem (VGM) model parameters. So far, two different approaches have been developed to estimate these correlations. The first approach uses pedotransfer functions to generate VGM parameters for a large number of soil compositions within a textural class, and then computes parameter correlations for each of the textural classes. The second approach computes the VGM parameter correlations directly from parameter values obtained by fitting the VGM model to measured water retention and hydraulic conductivity data for soil samples belonging to a textural class. Carsel and Parish (1988) used the Rawls et al. (1982) pedotransfer functions, and Meyer et al. (1997) used the Rosetta pedotransfer algorithms (Schaap, 2002) to develop correlations according to the first approach. We used the UNSODA database (Nemes et al. 2001), the US Southern Plains database (Timlin et al., 1999), and the Belgian database (Vereecken et al., 1989, 1990) to apply the second approach. A substantial number of large (>0.7) correlation coefficients were found. Large differences were encountered between parameter correlations obtained with different approaches and different databases for the same textural classes. The first of the two approaches resulted in generally higher values of correlation coefficients between VGM parameters. However, results of the first approach application depend

  13. HAWC Timing Calibration

    NASA Astrophysics Data System (ADS)

    Kelley-Hoskins, Nathan; Huentemeyer, Petra; Matthews, John; Dingus, Brenda; HAWC Collaboration

    2011-04-01

    The High-Altitude Water Cherenkov (HAWC) Experiment is a second-generation high-sensitivity gamma-ray and cosmic-ray detector that builds on the experience and technology of the Milagro observatory. HAWC utilizes the water Cherenkov technique to measure extensive air showers. Instead of a pond filled with water (as in Milagro), an array of closely packed water tanks with 3 PMTs each is used. A cosmic ray's direction will be reconstructed using the times at which the PMTs in each tank are triggered; the timing calibration will therefore be crucial for reaching an angular resolution as low as 0.1 degrees. We propose to use a laser calibration system, patterned after the calibration system in Milagro. The HAWC optical calibration system uses laser light pulses of less than 1 ns, directed into two optical fiber networks. Each network will use optical fan-outs and switches to direct light to specific tanks. The first network is used to measure the light transit time out to each pair of tanks, and the second network sends light to each tank, calibrating each tank's 3 PMTs. Time slewing corrections will be made using neutral density filters to control the light intensity over 4 orders of magnitude. This system is envisioned to run either continuously at a low rate or at a high rate with many intensity levels. In this presentation, we present the design of the calibration system and first measurements of its performance.

  14. Redox State of Iron in Lunar Glasses using X-ray Absorption Spectroscopy and Multivariate Analysis

    NASA Astrophysics Data System (ADS)

    Dyar, M. D.; McCanta, M. C.; Lanzirotti, A.; Sutton, S. R.; Carey, C. J.; Mahadevan, S.; Rutherford, M. J.

    2014-12-01

    The oxidation state of igneous materials on a planet is a critically important variable in understanding magma evolution on bodies in our solar system. However, direct and indirect methods for quantifying redox states are challenging, especially across the broad spectrum of silicate glass compositions found on airless bodies. On the Moon, early Mössbauer studies of bulk samples suggested the presence of significant Fe3+ (>10%) in lunar glasses (green, orange, brown); lunar analog glasses synthesized at fO2 < 10^-11 have similar Fe3+. All these Mössbauer spectra are challenging to interpret due to the presence of multiple coordination environments in the glasses. X-ray absorption spectroscopy (XAS) allows pico- and nano-scale interrogation of primitive planetary materials using the pre-edge, main edge, and EXAFS regions of absorption edge spectra. Current uses of XAS require the availability of standards with compositions similar to those of unknowns and complex procedures for curve-fitting of pre-edge features that produce results with poorly constrained accuracy. A new approach to accurate and quantitative redox measurements with XAS is to couple spectra from synthetic glass standards covering a broad compositional range with multivariate analysis (MVA) techniques. Mössbauer and XAS spectra from a suite of 33 synthetic glass standards covering a wide range of compositions and fO2 (Dyar et al., this meeting) were used to develop an MVA model that utilizes valuable predictive information not only in the major spectral peaks/features, but in all channels of the XAS region. Algorithms for multivariate analysis were used to "learn" the characteristics of a data set as a function of varying spectral characteristics. These models were applied to the study of lunar glasses, which provide a challenging test case for these newly developed techniques due to their very low fO2. Application of the new XAS calibration model to Apollo 15 green (15426, 15427 and 15425…

  15. Multivariate Outliers. Review of the Literature.

    ERIC Educational Resources Information Center

    Jarrell, Michele G.

    Research in the area of multivariate outliers is reviewed, emphasizing the problems associated with definition and identification. Treatment of the problem can be traced to 1777 and the work of D. Bernoulli. Most of the many procedures developed for identifying outliers proceed sequentially starting with the most aberrant observation, or proceed…

  16. DUALITY IN MULTIVARIATE RECEPTOR MODEL. (R831078)

    EPA Science Inventory

    Multivariate receptor models are used for source apportionment of multiple observations of compositional data of air pollutants that obey mass conservation. Singular value decomposition of the data leads to two sets of eigenvectors. One set of eigenvectors spans a space in whi...

  17. Using Matlab in a Multivariable Calculus Course.

    ERIC Educational Resources Information Center

    Schlatter, Mark D.

    The benefits of high-level mathematics packages such as Matlab include both a computer algebra system and the ability to provide students with concrete visual examples. This paper discusses how both capabilities of Matlab were used in a multivariate calculus class. Graphical user interfaces which display three-dimensional surfaces, contour plots,…

  18. Multivariate statistical mapping of spectroscopic imaging data.

    PubMed

    Young, Karl; Govind, Varan; Sharma, Khema; Studholme, Colin; Maudsley, Andrew A; Schuff, Norbert

    2010-01-01

    For magnetic resonance spectroscopic imaging studies of the brain, it is important to measure the distribution of metabolites in a regionally unbiased way; that is, without restrictions to a priori defined regions of interest. Since magnetic resonance spectroscopic imaging provides measures of multiple metabolites simultaneously at each voxel, there is furthermore great interest in utilizing the multidimensional nature of magnetic resonance spectroscopic imaging for gains in statistical power. Voxelwise multivariate statistical mapping is expected to address both of these issues, but it has not been previously employed for spectroscopic imaging (SI) studies of brain. The aims of this study were to (1) develop and validate multivariate voxel-based statistical mapping for magnetic resonance spectroscopic imaging and (2) demonstrate that multivariate tests can be more powerful than univariate tests in identifying patterns of altered brain metabolism. Specifically, we compared multivariate to univariate tests in identifying known regional patterns in simulated data and regional patterns of metabolite alterations due to amyotrophic lateral sclerosis, a devastating brain disease of the motor neurons. PMID:19953514
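
    The record above contrasts multivariate with univariate voxelwise tests. As an illustration only — the study's actual test statistic is not specified here — the two-sample Hotelling T² test is the standard multivariate analogue of the t-test for vectors of metabolite measures at one voxel (Python/NumPy/SciPy; the group data arrays are hypothetical):

        import numpy as np
        from scipy import stats

        def hotelling_t2(group1, group2):
            """Two-sample Hotelling T^2 test; returns the F statistic
            and its p-value for p-dimensional observations."""
            x, y = np.asarray(group1, float), np.asarray(group2, float)
            n1, n2, p = x.shape[0], y.shape[0], x.shape[1]
            d = x.mean(axis=0) - y.mean(axis=0)
            s_pooled = ((n1 - 1) * np.cov(x, rowvar=False)
                        + (n2 - 1) * np.cov(y, rowvar=False)) / (n1 + n2 - 2)
            t2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(s_pooled, d)
            f = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2
            pval = stats.f.sf(f, p, n1 + n2 - p - 1)
            return f, pval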

  19. Intermittent control of unstable multivariate systems.

    PubMed

    Loram, I; Gawthrop, P; Gollee, H

    2015-08-01

    A sensorimotor architecture inspired from biological, vertebrate control should (i) explain the interface between high dimensional sensory analysis, low dimensional goals and high dimensional motor mechanisms and (ii) provide both stability and flexibility. Our interest concerns whether single-input-single-output intermittent control (SISO_IC) generalized to multivariable intermittent control (MIC) can meet these requirements. We base MIC on the continuous-time observer-predictor-state-feedback architecture. MIC uses event detection. A system-matched hold (SMH), using the underlying continuous-time optimal control design, generates multivariate open-loop control signals between samples of the predicted state. Combined, this serial process provides a single channel of control with optimised sensor fusion and motor synergies. Quadratic programming provides constrained, optimised equilibrium control design to handle unphysical configurations and redundancy, and provides the minimum necessary reduction of open-loop instability through optimised joint impedance. In this multivariate form, dimensionality is linked to goals rather than neuromuscular or sensory degrees of freedom. The biological and engineering rationale for intermittent rather than continuous multivariate control is that the generalised hold sustains open-loop predictive control while the open-loop interval provides time within the feedback loop for online centralised, state-dependent optimisation and selection. PMID:26736539

  20. MDL and RMSEP assessment of spectral pretreatments by adding different noises in calibration/validation datasets.

    PubMed

    Zhao, Na; Wu, Zhisheng; Cheng, Yaqian; Shi, Xinyuan; Qiao, Yanjiang

    2016-06-15

    In multivariate calibration, the optimization of pretreatment methods is usually based on the prediction error alone, and robustness is rarely evaluated. This study investigated the robustness of pretreatment methods by adding different simulated noises to the validation dataset alone and to both the calibration and validation datasets. The root mean squared error of prediction (RMSEP) and multivariate detection limits (MDL) were simultaneously calculated to assess the robustness of different pretreatment methods. The results with two different near-infrared (NIR) datasets illustrated that multiplicative scatter correction (MSC) and standard normal variate (SNV) were substantially more robust to additive noise, with smaller RMSEP and MDL values. PMID:27031447
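
    A minimal sketch of the two pretreatments named above and of the RMSEP criterion (Python/NumPy; the array shapes and names are assumptions, not taken from the paper):

        import numpy as np

        def snv(spectra):
            """Standard normal variate: center and scale each spectrum
            (row) by its own mean and standard deviation."""
            mu = spectra.mean(axis=1, keepdims=True)
            sd = spectra.std(axis=1, keepdims=True)
            return (spectra - mu) / sd

        def msc(spectra, reference=None):
            """Multiplicative scatter correction against a reference
            (by default, the mean) spectrum."""
            ref = spectra.mean(axis=0) if reference is None else reference
            out = np.empty_like(spectra, dtype=float)
            for i, s in enumerate(spectra):
                slope, intercept = np.polyfit(ref, s, deg=1)
                out[i] = (s - intercept) / slope
            return out

        def rmsep(y_true, y_pred):
            """Root mean squared error of prediction."""
            err = np.asarray(y_true) - np.asarray(y_pred)
            return float(np.sqrt(np.mean(err ** 2)))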

  1. MDL and RMSEP assessment of spectral pretreatments by adding different noises in calibration/validation datasets

    NASA Astrophysics Data System (ADS)

    Zhao, Na; Wu, Zhisheng; Cheng, Yaqian; Shi, Xinyuan; Qiao, Yanjiang

    2016-06-01

    In multivariate calibration, the optimization of pretreatment methods is usually based on the prediction error alone, and robustness is rarely evaluated. This study investigated the robustness of pretreatment methods by adding different simulated noises to the validation dataset alone and to both the calibration and validation datasets. The root mean squared error of prediction (RMSEP) and multivariate detection limits (MDL) were simultaneously calculated to assess the robustness of different pretreatment methods. The results with two different near-infrared (NIR) datasets illustrated that multiplicative scatter correction (MSC) and standard normal variate (SNV) were substantially more robust to additive noise, with smaller RMSEP and MDL values.

  2. MIRO Continuum Calibration for Asteroid Mode

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon

    2011-01-01

    MIRO (Microwave Instrument for the Rosetta Orbiter) is a lightweight, uncooled, dual-frequency heterodyne radiometer. MIRO encountered asteroid Steins in 2008, and during the flyby, MIRO used the Asteroid Mode to measure the emission spectrum of Steins. The Asteroid Mode is one of the seven modes of MIRO operation, and is designed to increase the length of time that a spectral line is in the MIRO pass-band during a flyby of an object. This software is used to calibrate the continuum measurement of Steins' emission power during the asteroid flyby. The MIRO raw measurement data need to be calibrated in order to obtain physically meaningful data. This software calibrates the MIRO raw measurements in digital units to the brightness temperature in Kelvin. The software uses two calibration sequences that are included in the Asteroid Mode: one at the beginning of the mode and the other at the end. The first six frames contain the measurement of a cold calibration target, while the last six frames measure a warm calibration target. The targets have known temperatures and are used to provide reference power and gain, which can be used to convert MIRO measurements into brightness temperature. The software was developed to calibrate MIRO continuum measurements from Asteroid Mode. The software determines the relationship between the raw digital unit measured by MIRO and the equivalent brightness temperature by analyzing data from calibration frames. The found relationship is then applied to non-calibration frames, which are the measurements of an object of interest, such as asteroids and other planetary objects that MIRO encounters during its operation. This software characterizes the gain fluctuations statistically and determines how to estimate the gain between calibration frames. For example, if the fluctuation is lower than a statistically significant level, the averaging method is used to estimate the gain between the calibration frames. If the…
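
    A minimal sketch of the two-point (cold/warm target) step described above, assuming a linear radiometric response between calibration frames; the names and the linearity assumption are illustrative, not the flight software (Python/NumPy):

        import numpy as np

        def two_point_calibration(counts_cold, counts_warm, t_cold, t_warm):
            """Gain (Kelvin per digital unit) and offset from averaged
            frames of calibration targets with known temperatures."""
            c_cold, c_warm = np.mean(counts_cold), np.mean(counts_warm)
            gain = (t_warm - t_cold) / (c_warm - c_cold)
            offset = t_cold - gain * c_cold
            return gain, offset

        def to_brightness_temperature(counts, gain, offset):
            """Convert raw digital units to brightness temperature (K)."""
            return gain * np.asarray(counts, dtype=float) + offset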

  3. Integrated calibration sphere and calibration step fixture for improved coordinate measurement machine calibration

    DOEpatents

    Clifford, Harry J.

    2011-03-22

    A method and apparatus for mounting a calibration sphere to a calibration fixture for Coordinate Measurement Machine (CMM) calibration and qualification is described, decreasing the time required for such qualification, thus allowing the CMM to be used more productively. A number of embodiments are disclosed that allow for new and retrofit manufacture to perform as integrated calibration sphere and calibration fixture devices. This invention renders unnecessary the removal of a calibration sphere prior to CMM measurement of calibration features on calibration fixtures, thereby greatly reducing the time spent qualifying a CMM.

  4. Multivariate meta-analysis of individual participant data helped externally validate the performance and implementation of a prediction model

    PubMed Central

    Snell, Kym I.E.; Hua, Harry; Debray, Thomas P.A.; Ensor, Joie; Look, Maxime P.; Moons, Karel G.M.; Riley, Richard D.

    2016-01-01

    Objectives Our aim was to improve meta-analysis methods for summarizing a prediction model's performance when individual participant data are available from multiple studies for external validation. Study Design and Setting We suggest multivariate meta-analysis for jointly synthesizing calibration and discrimination performance, while accounting for their correlation. The approach estimates a prediction model's average performance, the heterogeneity in performance across populations, and the probability of “good” performance in new populations. This allows different implementation strategies (e.g., recalibration) to be compared. Application is made to a diagnostic model for deep vein thrombosis (DVT) and a prognostic model for breast cancer mortality. Results In both examples, multivariate meta-analysis reveals that calibration performance is excellent on average but highly heterogeneous across populations unless the model's intercept (baseline hazard) is recalibrated. For the cancer model, the probability of “good” performance (defined by C statistic ≥0.7 and calibration slope between 0.9 and 1.1) in a new population was 0.67 with recalibration but 0.22 without recalibration. For the DVT model, even with recalibration, there was only a 0.03 probability of “good” performance. Conclusion Multivariate meta-analysis can be used to externally validate a prediction model's calibration and discrimination performance across multiple populations and to evaluate different implementation strategies. PMID:26142114

  5. Multivariate statistical analysis software technologies for astrophysical research involving large data bases

    NASA Technical Reports Server (NTRS)

    Djorgovski, S. George

    1994-01-01

    We developed a package to process and analyze the data from the digital version of the Second Palomar Sky Survey. This system, called SKICAT, incorporates the latest in machine learning and expert systems software technology, in order to classify the detected objects objectively and uniformly, and facilitate handling of the enormous data sets from digital sky surveys and other sources. The system provides a powerful, integrated environment for the manipulation and scientific investigation of catalogs from virtually any source. It serves three principal functions: image catalog construction, catalog management, and catalog analysis. Through use of the GID3* Decision Tree artificial induction software, SKICAT automates the process of classifying objects within CCD and digitized plate images. To exploit these catalogs, the system also provides tools to merge them into a large, complete database which may be easily queried and modified when new data or better methods of calibrating or classifying become available. The most innovative feature of SKICAT is the facility it provides to experiment with and apply the latest in machine learning technology to the tasks of catalog construction and analysis. SKICAT provides a unique environment for implementing these tools for any number of future scientific purposes. Initial scientific verification and performance tests have been made using galaxy counts and measurements of galaxy clustering from small subsets of the survey data, and a search for very high redshift quasars. All of the tests were successful, and produced new and interesting scientific results. Attachments to this report give detailed accounts of the technical aspects. We also developed a user-friendly package for multivariate statistical analysis of small and moderate-size data sets, called STATPROG. The package was tested extensively on a number of real scientific applications, and has produced real, published results.

  6. Multivariate Statistical Analysis Software Technologies for Astrophysical Research Involving Large Data Bases

    NASA Technical Reports Server (NTRS)

    Djorgovski, S. G.

    1994-01-01

    We developed a package to process and analyze the data from the digital version of the Second Palomar Sky Survey. This system, called SKICAT, incorporates the latest in machine learning and expert systems software technology, in order to classify the detected objects objectively and uniformly, and facilitate handling of the enormous data sets from digital sky surveys and other sources. The system provides a powerful, integrated environment for the manipulation and scientific investigation of catalogs from virtually any source. It serves three principal functions: image catalog construction, catalog management, and catalog analysis. Through use of the GID3* Decision Tree artificial induction software, SKICAT automates the process of classifying objects within CCD and digitized plate images. To exploit these catalogs, the system also provides tools to merge them into a large, complex database which may be easily queried and modified when new data or better methods of calibrating or classifying become available. The most innovative feature of SKICAT is the facility it provides to experiment with and apply the latest in machine learning technology to the tasks of catalog construction and analysis. SKICAT provides a unique environment for implementing these tools for any number of future scientific purposes. Initial scientific verification and performance tests have been made using galaxy counts and measurements of galaxy clustering from small subsets of the survey data, and a search for very high redshift quasars. All of the tests were successful and produced new and interesting scientific results. Attachments to this report give detailed accounts of the technical aspects of the SKICAT system, and of some of the scientific results achieved to date. We also developed a user-friendly package for multivariate statistical analysis of small and moderate-size data sets, called STATPROG. The package was tested extensively on a number of real scientific applications and has

  7. Experimental Design, Near-Infrared Spectroscopy, and Multivariate Calibration: An Advanced Project in a Chemometrics Course

    ERIC Educational Resources Information Center

    de Oliveira, Rodrigo R.; das Neves, Luiz S.; de Lima, Kassio M. G.

    2012-01-01

    A chemometrics course is offered to students in their fifth semester of the chemistry undergraduate program that includes an in-depth project. Students carry out the project over five weeks (three 8-h sessions per week) and conduct it in parallel to other courses or other practical work. The students conduct a literature search, carry out…

  8. Comparison of Three Near Infrared Spectrophotometers for Infestation Detection in Wild Blueberries Using Multivariate Calibration Models

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A near-infrared spectroscopy (NIRS) method for automated non-destructive detection of insect infestation internal to small fruit is desirable because of the zero-to-zero tolerance of the fresh and processed fruit markets. Three NIRS instruments: the Ocean Optics SD2000, the Perten DA7000 and the Ori...

  9. Multivariate calibration of the degree of crystallinity in intact pellets by X-ray powder diffraction.

    PubMed

    Nikowitz, Krisztina; Domján, Attila; Pintye-Hódi, Klára; Regdon, Géza

    2016-04-11

    XRPD is the method of choice to determine crystalline content in an amorphous environment. While several studies describe its use on powders, little information is available on its performance on finished products. The method's use may be limited not only by the need for sample pretreatment and its validation but also by the propensity of some materials to recrystallize when exposed to heat or mechanical stress. In this work the authors describe an attempt at constructing a model based on the XRPD measurement of intact layered pellets, using univariate methods based on peak heights and PLS regression. Results indicate that neither the goodness-of-fit (below 0.9 for all tested variables) nor the RMSEC values (above 5 for all tested variables) of any model based on peak height were good enough to consider them for everyday use. PLS regression, however, provided a model with improved characteristics (R(2)=0.9581, RMSEC=3.04) despite the low API content, and individual loading characteristics also reflected the validity of the model. PLS analysis also indicated that a specific sample may be different in some formulation characteristic that did not register on other examinations. This further indicates the method's usefulness in the analysis of intact dosage forms. PMID:26899205
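
    A minimal sketch of the PLS branch of the comparison, using scikit-learn; the array names and component count are assumptions, and the paper's actual preprocessing and validation are not reproduced:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        # X: (n_pellets, n_2theta_channels) XRPD patterns of intact pellets
        # y: (n_pellets,) reference degree of crystallinity
        def fit_pls_calibration(X, y, n_components=5):
            pls = PLSRegression(n_components=n_components)
            pls.fit(X, y)
            y_fit = pls.predict(X).ravel()
            rmsec = float(np.sqrt(np.mean((y - y_fit) ** 2)))  # calibration error
            r2 = pls.score(X, y)                               # goodness-of-fit
            return pls, rmsec, r2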

  10. Using variable combination population analysis for variable selection in multivariate calibration.

    PubMed

    Yun, Yong-Huan; Wang, Wei-Ting; Deng, Bai-Chuan; Lai, Guang-Bi; Liu, Xin-bo; Ren, Da-Bing; Liang, Yi-Zeng; Fan, Wei; Xu, Qing-Song

    2015-03-01

    Variable (wavelength or feature) selection techniques have become a critical step for the analysis of datasets with a high number of variables and relatively few samples. In this study, a novel variable selection strategy, variable combination population analysis (VCPA), was proposed. This strategy consists of two crucial procedures. First, an exponentially decreasing function (EDF), embodying the simple and effective 'survival of the fittest' principle from Darwin's theory of natural evolution, is employed to determine the number of variables to keep and continuously shrink the variable space. Second, in each EDF run, a binary matrix sampling (BMS) strategy, which gives each variable the same chance to be selected and generates different variable combinations, is used to produce a population of subsets and construct a population of sub-models. Then, model population analysis (MPA) is employed to find the variable subsets with the lowest root mean squared error of cross-validation (RMSECV). The frequency with which each variable appears in the best 10% of sub-models is computed: the higher the frequency, the more important the variable. The performance of the proposed procedure was investigated using three real NIR datasets. The results indicate that VCPA is a good variable selection strategy when compared with four high-performing variable selection methods: genetic algorithm-partial least squares (GA-PLS), Monte Carlo uninformative variable elimination by PLS (MC-UVE-PLS), competitive adaptive reweighted sampling (CARS) and iteratively retains informative variables (IRIV). The MATLAB source code of VCPA is available for academic research on the website: http://www.mathworks.com/matlabcentral/fileexchange/authors/498750. PMID:25682424
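
    A minimal sketch of the BMS and frequency-counting steps at the core of VCPA (Python/NumPy; the subset counts are placeholders, and in the paper the errors come from cross-validated PLS sub-models):

        import numpy as np

        def binary_matrix_sampling(n_vars, n_subsets, seed=None):
            """Binary sampling matrix in which every variable appears in
            exactly half of the subsets, giving each the same chance."""
            rng = np.random.default_rng(seed)
            mask = np.zeros((n_subsets, n_vars), dtype=bool)
            for j in range(n_vars):
                rows = rng.permutation(n_subsets)[: n_subsets // 2]
                mask[rows, j] = True
            return mask

        def variable_frequencies(mask, errors, top_fraction=0.1):
            """Frequency of each variable among the best (lowest-error)
            10% of sub-models; higher frequency = more important."""
            k = max(1, int(len(errors) * top_fraction))
            best = np.argsort(errors)[:k]
            return mask[best].sum(axis=0) / k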

  11. Barometric calibration of a luminescent oxygen probe.

    PubMed

    Golub, Aleksander S; Pittman, Roland N

    2016-04-01

    The invention of the phosphorescence quenching method for the measurement of oxygen concentration in blood and tissue revolutionized physiological studies of oxygen transport in living organisms. Since the pioneering publication by Vanderkooi and Wilson in 1987, many researchers have contributed to the measurement of oxygen in the microcirculation, to oxygen imaging in tissues and microvessels, and to the development of new extracellular and intracellular phosphorescent probes. However, there is a problem of congruency among data from different laboratories because of interlaboratory variability of the calibration coefficients in the Stern-Volmer equation. Published calibrations for a common oxygen probe, Pd-porphyrin + bovine serum albumin (BSA), vary because of differences in the techniques used to form the oxygen standards: chemical titration, calibrated gas mixtures, and an oxygen electrode. Each of these methods in turn also needs calibration. We have designed a barometric method for the calibration of oxygen probes that uses a regulated vacuum to set multiple PO2 standards. The method is fast and accurate and can be applied to biological fluids obtained during or after an experiment. Calibration over the full physiological PO2 range (1-120 mmHg) takes ∼15 min and requires 1-2 mg of probe. PMID:26846556
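
    A minimal sketch of the calibration itself, assuming the lifetime form of the Stern-Volmer relation, 1/τ = 1/τ₀ + k_q·PO2; the variable names are illustrative, not the authors' code (Python/NumPy):

        import numpy as np

        def fit_stern_volmer(po2_standards, lifetimes):
            """Linear least-squares fit of 1/tau against the PO2 standards
            set by the barometric method; returns (kq, tau0)."""
            inv_tau = 1.0 / np.asarray(lifetimes, dtype=float)
            kq, inv_tau0 = np.polyfit(po2_standards, inv_tau, deg=1)
            return kq, 1.0 / inv_tau0

        def po2_from_lifetime(tau, kq, tau0):
            """Invert the calibrated relation to recover PO2 from a
            measured phosphorescence lifetime."""
            return (1.0 / tau - 1.0 / tau0) / kq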

  12. A preliminary investigation of the dynamic force-calibration of a magnetic suspension and balance system

    NASA Technical Reports Server (NTRS)

    Goodyer, M. J.

    1985-01-01

    The aerodynamic forces and moments acting upon a magnetically suspended wind tunnel model are derived from calibrations of suspension electromagnet currents against known forces. As an alternative to the conventional calibration method of applying steady forces to the model, early experiences with dynamic calibration are outlined; that is, a calibration obtained by oscillating a model in suspension and deriving a force/current relationship from its inertia force and the unsteady components of the currents. The advantages of dynamic calibration are speed and simplicity. The two methods of calibration, applied to one force component, show good agreement.

  13. Calibration of radionuclide calibrators in Canadian hospitals

    SciTech Connect

    Santry, D.C.

    1986-01-01

    The major user of radioactive isotopes in Canada is the medical profession. Because of this, a program has been initiated at the National Research Council of Canada (NRCC) to assist the nuclear medicine community in determining more accurately the rather large amounts of radioactive materials administered to patients for either therapeutic or diagnostic purposes. Since radiation exposure to the human body has deleterious effects, it is important for the patient that the correct amount of radioactive material be administered, to minimize the induction of a fatal cancer at a later time. Hospitals in many other countries have a legal requirement to have their instruments routinely calibrated and have previously entered into intercomparisons with other hospitals or their national standards laboratories. In Canada, hospitals and clinics can participate on a voluntary basis to have the proper operation of measuring devices (radionuclide calibrators in particular) examined through intercomparisons. The program looks primarily at laboratory performance. This includes not only the instrument's performance but also the performance of the individual doing the procedure and the technical procedure or method employed. In an effort to provide personal assistance to those having problems, it is essential that the comparisons be pertinent to the daily work of the laboratory and that the most capable technologist not be selected to carry out the assay.

  14. OPTIMUM FREQUENCY OF CALIBRATION MONITORING

    EPA Science Inventory

    The paper develops an algorithm by which to compute the optimal frequency of calibration monitoring to minimize the total cost of analyzing a set of samples and the required calibration standards. Optimum calibration monitoring is needed because of the high cost and calibration d...

  15. Psychophysical contrast calibration

    PubMed Central

    To, Long; Woods, Russell L; Goldstein, Robert B; Peli, Eli

    2013-01-01

    Electronic displays and computer systems offer numerous advantages for clinical vision testing. Laboratory and clinical measurements of various functions and in particular of (letter) contrast sensitivity require accurately calibrated display contrast. In the laboratory this is achieved using expensive light meters. We developed and evaluated a novel method that uses only psychophysical responses of a person with normal vision to calibrate the luminance contrast of displays for experimental and clinical applications. Our method combines psychophysical techniques (1) for detection (and thus elimination or reduction) of display saturating nonlinearities; (2) for luminance (gamma function) estimation and linearization without use of a photometer; and (3) to measure without a photometer the luminance ratios of the display’s three color channels that are used in a bit-stealing procedure to expand the luminance resolution of the display. Using a photometer we verified that the calibration achieved with this procedure is accurate for both LCD and CRT displays enabling testing of letter contrast sensitivity to 0.5%. Our visual calibration procedure enables clinical, internet and home implementation and calibration verification of electronic contrast testing. PMID:23643843
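
    A minimal sketch of the linearization step once a display's gamma has been estimated (psychophysically, as above, or otherwise); the 8-bit lookup-table form and the power-law display model are assumptions, not the authors' exact procedure (Python/NumPy):

        import numpy as np

        def inverse_gamma_lut(gamma, levels=256):
            """Lookup table mapping desired linear luminance to pixel
            values for a display following L = (v/(levels-1))**gamma."""
            target = np.linspace(0.0, 1.0, levels)
            v = np.round((levels - 1) * target ** (1.0 / gamma))
            return v.astype(np.uint8)

        def michelson_contrast(l_max, l_min):
            """Contrast definition commonly used for gratings in
            contrast sensitivity testing."""
            return (l_max - l_min) / (l_max + l_min)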

  16. Calibration Under Uncertainty.

    SciTech Connect

    Swiler, Laura Painton; Trucano, Timothy Guy

    2005-03-01

    This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
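
    A minimal sketch of the deterministic formulation criticized above — calibration as squared-error minimization, with no treatment of model error; the model and data here are toy placeholders (Python/SciPy):

        import numpy as np
        from scipy.optimize import least_squares

        def calibrate(model, theta0, x_obs, y_obs):
            """Find parameters minimizing the squared difference between
            model-computed and experimental data."""
            residuals = lambda theta: model(x_obs, theta) - y_obs
            return least_squares(residuals, theta0).x

        # Toy example: a two-parameter exponential decay model.
        model = lambda x, th: th[0] * np.exp(-th[1] * x)
        x = np.linspace(0.0, 5.0, 50)
        rng = np.random.default_rng(0)
        y = model(x, [2.0, 0.7]) + 0.05 * rng.standard_normal(x.size)
        theta_hat = calibrate(model, np.array([1.0, 1.0]), x, y)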

  17. GTC Photometric Calibration

    NASA Astrophysics Data System (ADS)

    di Cesare, M. A.; Hammersley, P. L.; Rodriguez Espinosa, J. M.

    2006-06-01

    We are currently developing the calibration programme for GTC using techniques similar to the ones used for space telescope calibration (Hammersley et al. 1998, A&AS, 128, 207; Cohen et al. 1999, AJ, 117, 1864). We are planning to produce a catalogue of calibration stars which are suitable for a 10-m telescope. These sources must be non-variable and non-binary, and must not have infrared excesses if they are to be used in the infrared. The GTC science instruments require photometric calibration between 0.35 and 2.5 microns. The instruments are: OSIRIS (Optical System for Imaging low Resolution Integrated Spectroscopy), ELMER and EMIR (Espectrógrafo Multiobjeto Infrarrojo) and the Acquisition and Guiding boxes (Di Césare, Hammersley, & Rodriguez Espinosa 2005, RevMexAA Ser. Conf., 24, 231). The catalogue will consist of 30 star fields distributed across the whole Northern Hemisphere. We will use fields containing sources over the range 12 to 22 magnitudes, spanning a wide range of spectral types (A to M), for the visible and near infrared. In the poster we will show the method used for selecting these fields and we will present the analysis of the data on the first calibration fields observed.

  18. Calibration Monitoring for Sensor Calibration Interval Extension: Gaps in the Current Science Base

    SciTech Connect

    Coble, Jamie B.; Ramuhalli, Pradeep; Meyer, Ryan M.; Hashemian, Hash; Shumaker, Brent; Cummins, Dara

    2012-10-09

    Currently in the United States, periodic sensor recalibration is required for all safety-related sensors, typically occurring at every refueling outage, and it has emerged as a critical path item for shortening outage duration in some plants. International application of calibration monitoring has shown that sensors may operate for longer periods within calibration tolerances. This issue is expected to also be important as the United States looks to the next generation of reactor designs (such as small modular reactors and advanced concepts), given the anticipated longer refueling cycles, proposed advanced sensors, and digital instrumentation and control systems. Online monitoring (OLM) can be employed to identify those sensors that require calibration, allowing for calibration of only those sensors that need it. The U.S. Nuclear Regulatory Commission (NRC) accepted the general concept of OLM for sensor calibration monitoring in 2000, but no U.S. plants have been granted the necessary license amendment to apply it. This paper summarizes a recent state-of-the-art assessment of online calibration monitoring in the nuclear power industry, including sensors, calibration practice, and OLM algorithms. This assessment identifies key research needs and gaps that prohibit integration of the NRC-approved online calibration monitoring system in the U.S. nuclear industry. Several technical needs were identified, including an understanding of the impacts of sensor degradation on measurements for both conventional and emerging sensors; the quantification of uncertainty in online calibration assessment; determination of calibration acceptance criteria and quantification of the effect of acceptance criteria variability on system performance; and assessment of the feasibility of using virtual sensor estimates to replace identified faulty sensors in order to extend operation to the next convenient maintenance opportunity.

  19. A Review of Sensor Calibration Monitoring for Calibration Interval Extension in Nuclear Power Plants

    SciTech Connect

    Coble, Jamie B.; Meyer, Ryan M.; Ramuhalli, Pradeep; Bond, Leonard J.; Hashemian, Hash; Shumaker, Brent; Cummins, Dara

    2012-08-31

    Currently in the United States, periodic sensor recalibration is required for all safety-related sensors, typically occurring at every refueling outage, and it has emerged as a critical path item for shortening outage duration in some plants. Online monitoring can be employed to identify those sensors that require calibration, allowing for calibration of only those sensors that need it. International application of calibration monitoring, such as at the Sizewell B plant in United Kingdom, has shown that sensors may operate for eight years, or longer, within calibration tolerances. This issue is expected to also be important as the United States looks to the next generation of reactor designs (such as small modular reactors and advanced concepts), given the anticipated longer refueling cycles, proposed advanced sensors, and digital instrumentation and control systems. The U.S. Nuclear Regulatory Commission (NRC) accepted the general concept of online monitoring for sensor calibration monitoring in 2000, but no U.S. plants have been granted the necessary license amendment to apply it. This report presents a state-of-the-art assessment of online calibration monitoring in the nuclear power industry, including sensors, calibration practice, and online monitoring algorithms. This assessment identifies key research needs and gaps that prohibit integration of the NRC-approved online calibration monitoring system in the U.S. nuclear industry. Several needs are identified, including the quantification of uncertainty in online calibration assessment; accurate determination of calibration acceptance criteria and quantification of the effect of acceptance criteria variability on system performance; and assessment of the feasibility of using virtual sensor estimates to replace identified faulty sensors in order to extend operation to the next convenient maintenance opportunity. Understanding the degradation of sensors and the impact of this degradation on signals is key to

  20. Calibration Of Airborne Visible/IR Imaging Spectrometer

    NASA Technical Reports Server (NTRS)

    Vane, G. A.; Chrien, T. G.; Miller, E. A.; Reimer, J. H.

    1990-01-01

    Paper describes laboratory spectral and radiometric calibration of Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) applied to all AVIRIS science data collected in 1987. Describes instrumentation and procedures used and demonstrates that calibration accuracy achieved exceeds design requirements. Developed for use in remote-sensing studies in such disciplines as botany, geology, hydrology, and oceanography.

  1. Calibration and Temperature Profile of a Tungsten Filament Lamp

    ERIC Educational Resources Information Center

    de Izarra, Charles; Gitton, Jean-Michel

    2010-01-01

    The goal of this work, proposed for undergraduate students and teachers, is the calibration of a tungsten filament lamp from electric measurements that are both simple and precise, allowing the temperature of the tungsten filament to be determined as a function of the current intensity. This calibration procedure was first applied to a conventional filament…

  2. A miniature remote deadweight calibrator

    NASA Astrophysics Data System (ADS)

    Supplee, Frank H., Jr.; Tcheng, Ping

    A miniature, computer-controlled, deadweight calibrator was developed to remotely calibrate a force transducer mounted in a cryogenic chamber. This simple mechanism allows automatic loading and unloading of deadweights placed onto a skin friction balance during calibrations. Equipment for the calibrator includes a specially designed set of five interlocking 200-milligram weights, a motorized lifting platform, and a controller box taking commands from a microcomputer on an IEEE interface. The computer is also used to record and reduce the calibration data and control other calibration parameters. The full-scale load for this device is 1,000 milligrams; however, the concept can be extended to accommodate other calibration ranges.

  3. Characterizing thermal features from multi-spectral remote sensing data using dynamic calibration procedures

    NASA Astrophysics Data System (ADS)

    Hardy, Colin C.

    A thermal infrared remote sensing project was implemented to develop methods for identifying, classifying, and mapping thermal features. This study is directed at geothermal features, with the expectation that new protocols developed here will apply to the wildland fire thermal environment. Airborne multi-spectral digital imagery was acquired over the geothermally active Norris Basin region of Yellowstone National Park, USA. Two image acquisitions were flown, one near solar noon and the other at night. The five-band image data included thermal infrared (TIR), near-infrared (NIR), and three visible bandpasses. While focused on TIR, the study relied on the multi-spectral visible and NIR data as well as on an ancillary hyperspectral data set. The raw, five-band data were uncalibrated, requiring implementation of two calibration protocols. First, a vicarious calibration procedure was developed to compute reflectance for the visible and NIR bands using an independently calibrated hyperspectral dataset. Second, a dynamic, in-scene calibration procedure was developed for the thermal sensor that exploited natural, pseudo-invariant thermal reference targets instrumented with kinetic temperature recorders. A suite of thermal attributes was derived, including daytime and nighttime radiant temperatures, a temperature difference (ΔT), albedo, one minus albedo, and apparent thermal inertia (ATI). The albedo terms were computed using a published weighted-average albedo algorithm based on ratios of the narrowband red and NIR reflectances to total solar irradiance for the respective red and NIR bandpasses. In the absence of verifiable "truth," a step-wise chain of unsupervised classification and multivariate analysis exercises was performed, drawing heavily on "fuzzy truth." A final classification synthesizes a "thermal phenomenology" comprising four components: spectral, statistical, geographical/contextual, and feature space. In situ measurements paired with image data…

  4. Usual Dietary Intakes: SAS Macros for Fitting Multivariate Measurement Error Models & Estimating Multivariate Usual Intake Distributions

    Cancer.gov

    The following SAS macros can be used to create a multivariate usual intake distribution for multiple dietary components that are consumed nearly every day or episodically. A SAS macro for performing balanced repeated replication (BRR) variance estimation is also included.

  5. Calibration validation revisited or how to make better use of available data: Sub-period calibration

    NASA Astrophysics Data System (ADS)

    Gharari, S.; Hrachowitz, M.; Fenicia, F.; Savenije, H.

    2012-12-01

    Parameter identification of conceptual hydrological models depends largely on calibration, as model parameters are typically non-measurable quantities. For hydrological modeling, the identification of "realistic" parameter sets is a key objective. As a model is intended to be used for prediction in the future, it is also crucial that the model parameters be time-transposable. However, previous studies showed that the "best" parameter set can vary significantly over time. Instead of using the "best fit", this study introduces sub-period (SuPer) calibration as a new framework to identify the most time-consistent parameterization, even though it is potentially sub-optimal in the calibration period. The SuPer calibration framework includes two steps. First, the time series is split into different sub-periods, such as years or seasons, and the model is calibrated separately for each sub-period, yielding a Pareto front as the "best fit" for every sub-period. In the second step, those parameter sets are selected that minimize the distance to the Pareto front of each sub-period, which involves an additional multi-objective optimization problem with dimensions equal to the number of sub-periods. The performance of the SuPer calibration framework is evaluated and compared with traditional calibration-validation frameworks for two sub-period configurations: (1) two consecutive years and (2) eight consecutive years as sub-periods. For this evaluation we used the HyMOD model applied to the Wark catchment in the Grand Duchy of Luxembourg. We show that, besides being a calibration framework, this approach also has diagnostic capabilities: it can indicate the parameter sets that perform consistently well for all the sub-periods, while not requiring subjective thresholds for defining behavioral parameter sets. It appears that SuPer calibration leads to feasible parameter ranges for the individual sub-periods which differ from parameter ranges defined by traditional model…
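
    A heavily simplified, single-objective sketch of the SuPer idea — score candidate parameter sets on every sub-period and keep the most time-consistent one. The `simulate` function and data layout are hypothetical, and the real framework works with Pareto fronts of multiple objectives rather than a single RMSE (Python/NumPy):

        import numpy as np

        def super_calibrate(candidates, sub_periods, simulate):
            """candidates: list of parameter vectors; sub_periods: list of
            dicts with 'forcing' and 'obs'; returns the candidate whose
            worst gap to the per-period best score is smallest."""
            scores = np.array([
                [float(np.sqrt(np.mean(
                    (simulate(theta, p["forcing"]) - p["obs"]) ** 2)))
                 for p in sub_periods]
                for theta in candidates
            ])                                    # (n_candidates, n_periods)
            best_per_period = scores.min(axis=0)  # stand-in for each front
            worst_gap = (scores - best_per_period).max(axis=1)
            return candidates[int(np.argmin(worst_gap))]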

  6. Targetless Camera Calibration

    NASA Astrophysics Data System (ADS)

    Barazzetti, L.; Mussio, L.; Remondino, F.; Scaioni, M.

    2011-09-01

    In photogrammetry a camera is considered calibrated if its interior orientation parameters are known. These encompass the principal distance, the principal point position, and some Additional Parameters used to model possible systematic errors. The current state of the art for automated camera calibration relies on the use of coded targets to accurately determine the image correspondences. This paper presents a new methodology for the efficient and rigorous photogrammetric calibration of digital cameras which no longer requires the use of targets. A set of images depicting a scene with good texture is sufficient for the extraction of natural corresponding image points. These are automatically matched with feature-based approaches and robust estimation techniques. The successive photogrammetric bundle adjustment retrieves the unknown camera parameters and their theoretical accuracies. Examples, considerations and comparisons with real data and different case studies are illustrated to show the potential of the proposed methodology.

  7. Automatic beamline calibration procedures

    SciTech Connect

    Corbett, W.J.; Lee, M.J.; Zambre, Y.

    1992-03-01

    Recent experience with the SLC and SPEAR accelerators has led to a well-defined set of procedures for calibration of the beamline model using the orbit fitting program RESOLVE. Difference orbit analysis is used to calibrate quadrupole strengths, BPM sensitivities, corrector strengths, and focusing effects from insertion devices, and to determine the source of dispersion and coupling errors. Absolute orbit analysis is used to locate quadrupole misalignments, BPM offsets, or beam loss. For light source applications, the photon beam source coordinates can be found. The result is an accurate model of the accelerator which can be used for machine control. In this paper, automatable beamline calibration procedures are outlined and illustrated with recent examples. 5 refs.

  8. Calibration Systems Final Report

    SciTech Connect

    Myers, Tanya L.; Broocks, Bryan T.; Phillips, Mark C.

    2006-02-01

    The Calibration Systems project at Pacific Northwest National Laboratory (PNNL) is aimed towards developing and demonstrating compact Quantum Cascade (QC) laser-based calibration systems for infrared imaging systems. These on-board systems will improve the calibration technology for passive sensors, which enable stand-off detection for the proliferation or use of weapons of mass destruction, by replacing on-board blackbodies with QC laser-based systems. This alternative technology can minimize the impact on instrument size and weight while improving the quality of instruments for a variety of missions. The potential of replacing flight blackbodies is made feasible by the high output, stability, and repeatability of the QC laser spectral radiance.

  9. Application of two tests of multivariate discordancy to fisheries data sets

    USGS Publications Warehouse

    Stapanian, M.A.; Kocovsky, P.M.; Garner, F.C.

    2008-01-01

    The generalized (Mahalanobis) distance and multivariate kurtosis are two powerful tests of multivariate discordancies (outliers). Unlike the generalized distance test, the multivariate kurtosis test has not been applied as a test of discordancy to fisheries data heretofore. We applied both tests, along with published algorithms for identifying suspected causal variable(s) of discordant observations, to two fisheries data sets from Lake Erie: total length, mass, and age from 1,234 burbot, Lota lota; and 22 combinations of unique subsets of 10 morphometrics taken from 119 yellow perch, Perca flavescens. For the burbot data set, the generalized distance test identified six discordant observations and the multivariate kurtosis test identified 24 discordant observations. In contrast with the multivariate tests, the univariate generalized distance test identified no discordancies when applied separately to each variable. Removing discordancies had a substantial effect on length-versus-mass regression equations. For 500-mm burbot, the percent difference in estimated mass after removing discordancies in our study was greater than the percent difference in masses estimated for burbot of the same length in lakes that differed substantially in productivity. The number of discordant yellow perch detected ranged from 0 to 2 with the multivariate generalized distance test and from 6 to 11 with the multivariate kurtosis test. With the kurtosis test, 108 yellow perch (90.7%) were identified as discordant in zero to two combinations, and five (4.2%) were identified as discordant in either all or 21 of the 22 combinations. The relationship among the variables included in each combination determined which variables were identified as causal. The generalized distance test identified between zero and six discordancies when applied separately to each variable. Removing the discordancies found in at least one-half of the combinations (k=5) had a marked effect on a principal components
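
    A minimal sketch of the generalized (Mahalanobis) distance test used above, flagging observations whose squared distance from the sample mean exceeds a chi-square cutoff; the significance level is an illustrative choice (Python/NumPy/SciPy):

        import numpy as np
        from scipy import stats

        def mahalanobis_outliers(X, alpha=0.005):
            """Squared Mahalanobis distances of rows of X from the sample
            mean, plus a boolean flag for suspected discordant rows."""
            X = np.asarray(X, dtype=float)
            d = X - X.mean(axis=0)
            cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
            d2 = np.einsum("ij,jk,ik->i", d, cov_inv, d)
            cutoff = stats.chi2.ppf(1.0 - alpha, df=X.shape[1])
            return d2, d2 > cutoff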

  10. Iterative Magnetometer Calibration

    NASA Technical Reports Server (NTRS)

    Sedlak, Joseph

    2006-01-01

    This paper presents an iterative method for three-axis magnetometer (TAM) calibration that makes use of three existing utilities recently incorporated into the attitude ground support system used at NASA's Goddard Space Flight Center. The method combines attitude-independent and attitude-dependent calibration algorithms with a new spinning spacecraft Kalman filter to solve for biases, scale factors, nonorthogonal corrections to the alignment, and the orthogonal sensor alignment. The method is particularly well-suited to spin-stabilized spacecraft, but may also be useful for three-axis stabilized missions given sufficient data to provide observability.

  11. Autonomous Phase Retrieval Calibration

    NASA Technical Reports Server (NTRS)

    Estlin, Tara A.; Chien, Steve A.; Castano, Rebecca; Gaines, Daniel M.; Doubleday, Joshua R.; Schoolcraft, Josua B.; Oyake, Amalaye; Vaughs, Ashton G.; Torgerson, Jordan L.

    2011-01-01

    The Palomar Adaptive Optics System actively corrects for changing aberrations in light due to atmospheric turbulence. However, the underlying internal static error is unknown and uncorrected by this process. The dedicated wavefront sensor device necessarily lies along a different path than the science camera, and, therefore, doesn't measure the true errors along the path leading to the final detected imagery. This is a standard problem in adaptive optics (AO) called "non-common path error." The Autonomous Phase Retrieval Calibration (APRC) software suite performs automated sensing and correction iterations to calibrate the Palomar AO system to levels that were previously unreachable.

  12. In-Space Calibration of a Gyro Quadruplet

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.; Harman, Richard R.

    2001-01-01

    This work presents a new approach to gyro calibration where, in addition to being used for computing attitude that is needed in the calibration process, the gyro outputs are also used as measurements in a Kalman filter. This work also presents an algorithm for calibrating a quadruplet rather than the customary triad gyro set. In particular, a new misalignment error model is derived for this case. The new calibration algorithm is applied to the EOS-AQUA satellite gyros. The effectiveness of the new algorithm is demonstrated through simulations.

  13. Quality Reporting of Multivariable Regression Models in Observational Studies

    PubMed Central

    Real, Jordi; Forné, Carles; Roso-Llorach, Albert; Martínez-Sánchez, Jose M.

    2016-01-01

    Controlling for confounders is a crucial step in analytical observational studies, and multivariable models are widely used as statistical adjustment techniques. However, the validation of the assumptions of the multivariable regression models (MRMs) should be made clear in scientific reporting. The objective of this study is to review the quality of statistical reporting of the most commonly used MRMs (logistic, linear, and Cox regression) that were applied in analytical observational studies published between 2003 and 2014 by journals indexed in MEDLINE. We reviewed a representative sample of articles indexed in MEDLINE (n = 428) with observational design and use of MRMs (logistic, linear, and Cox regression). We assessed the quality of reporting of model assumptions and goodness-of-fit, interactions, sensitivity analysis, crude and adjusted effect estimates, and specification of more than one adjusted model. The tests of underlying assumptions or goodness-of-fit of the MRMs used were described in 26.2% (95% CI: 22.0–30.3) of the articles, and 18.5% (95% CI: 14.8–22.1) reported the interaction analysis. Reporting of all items assessed was higher in articles published in journals with a higher impact factor. A low percentage of articles indexed in MEDLINE that used multivariable techniques provided information demonstrating rigorous application of the selected adjustment model. Given the importance of these methods to the final results and conclusions of observational studies, greater rigor is required in reporting the use of MRMs in the scientific literature. PMID:27196467

  14. Multi-application controls: Robust nonlinear multivariable aerospace controls applications

    NASA Technical Reports Server (NTRS)

    Enns, Dale F.; Bugajski, Daniel J.; Carter, John; Antoniewicz, Bob

    1994-01-01

    This viewgraph presentation describes the general methodology used to apply Honeywell's Multi-Application Control (MACH) and the specific application to the F-18 High Angle-of-Attack Research Vehicle (HARV), including piloted simulation handling qualities evaluation. The general steps include insertion of modeling data for geometry and mass properties, aerodynamics, propulsion data and assumptions, and requirements and specifications, e.g. definition of control variables, handling qualities, stability margins and statements for bandwidth, control power, priorities, position and rate limits. The specific steps include choice of independent variables for least squares fits to aerodynamic and propulsion data, modifications to the management of the controls with regard to integrator windup and actuation limiting and priorities (e.g. pitch priority over roll), and command limiting to prevent departures and/or undesirable inertial coupling or inability to recover to a stable trim condition. The HARV control problem is characterized by significant nonlinearities and multivariable interactions in the low-speed, high angle-of-attack, high angular rate flight regime. Systematic approaches to the control of vehicle motions modeled with coupled nonlinear equations of motion have been developed. This paper discusses the dynamic inversion approach, which explicitly accounts for nonlinearities in the control design. Multiple control effectors (including aerodynamic control surfaces and thrust vectoring control) and sensors are used to control the motions of the vehicles in several degrees of freedom. Several maneuvers are used to illustrate the performance of MACH in the high angle-of-attack flight regime. Analytical methods for assessing the robust performance of the multivariable control system in the presence of math modeling uncertainty, disturbances, and commands have reached a high level of maturity. The structured singular value (mu) frequency response methodology is presented…

  15. Design of multivariable controllers for robot manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1986-01-01

    The paper presents a simple method for the design of linear multivariable controllers for multi-link robot manipulators. The control scheme consists of multivariable feedforward and feedback controllers. The feedforward controller is the minimal inverse of the linearized model of robot dynamics and contains only proportional-double-derivative (PD2) terms. This controller ensures that the manipulator joint angles track any reference trajectories. The feedback controller is of proportional-integral-derivative (PID) type and achieves pole placement. This controller reduces any initial tracking error to zero as desired and also ensures that robust steady-state tracking of step-plus-exponential trajectories is achieved by the joint angles. The two controllers are independent of each other and are designed separately based on the linearized robot model and then integrated in the overall control scheme. The proposed scheme is simple and can be implemented for real-time control of robot manipulators.
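
    A minimal sketch of the two-part scheme for one joint: a PD2 feedforward built from the reference trajectory and its first two time derivatives, plus an independent PID feedback on the tracking error. All gains here are hypothetical placeholders, not a tuned design (Python):

        class PIDFeedback:
            """Discrete-time PID on the tracking error e = q_ref - q."""
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_error = 0.0

            def update(self, error):
                self.integral += error * self.dt
                derivative = (error - self.prev_error) / self.dt
                self.prev_error = error
                return (self.kp * error + self.ki * self.integral
                        + self.kd * derivative)

        def pd2_feedforward(q_ref, dq_ref, ddq_ref, k0, k1, k2):
            """Proportional-double-derivative (PD2) terms of the reference
            trajectory, as in the feedforward controller described above."""
            return k0 * q_ref + k1 * dq_ref + k2 * ddq_ref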

  16. Application of Optimal Designs to Item Calibration

    PubMed Central

    Lu, Hung-Yi

    2014-01-01

    In computerized adaptive testing (CAT), examinees are presented with various sets of items chosen from a precalibrated item pool. Consequently, the attrition speed of the items is extremely fast, and replenishing the item pool is essential. Therefore, item calibration has become a crucial concern in maintaining item banks. In this study, a two-parameter logistic model is used. We applied optimal designs and adaptive sequential analysis to solve this item calibration problem. The results indicated that the proposed optimal designs are cost effective and time efficient. PMID:25188318
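
    A minimal sketch of calibrating one item under the two-parameter logistic model, with examinee abilities treated as known — a common simplification in online calibration, and an assumption here rather than the paper's sequential design (Python/SciPy):

        import numpy as np
        from scipy.optimize import minimize

        def p_2pl(theta, a, b):
            """2PL item response function: P(correct | ability theta)."""
            return 1.0 / (1.0 + np.exp(-a * (theta - b)))

        def calibrate_item(responses, abilities):
            """Maximum-likelihood estimates of discrimination a and
            difficulty b from 0/1 responses of examinees with known theta."""
            def neg_log_lik(params):
                a, b = params
                p = np.clip(p_2pl(abilities, a, b), 1e-9, 1.0 - 1e-9)
                return -np.sum(responses * np.log(p)
                               + (1 - responses) * np.log(1.0 - p))
            return minimize(neg_log_lik, x0=np.array([1.0, 0.0])).x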

  17. Self-calibration for microrefractive lens measurements

    NASA Astrophysics Data System (ADS)

    Gardner, Neil W.; Davies, Angela D.

    2006-03-01

    The self-calibration test known as the random ball test (RBT) is adapted and applied to instrument calibration for measurements of microrefractive lens figure error. The RBT exploits the symmetry properties of a microsphere, resulting in a low-uncertainty estimate of the instrument biases. One hundred surface patches on a 1-mm-diam steel sphere are imaged by commercial instruments then averaged together in software to determine the instrument bias for a 500-µm radius of curvature test piece. The results show biases on the order of a few hundred nanometers peak-to-valley for a scanning white light interferometer and a Twyman-Green interferometer.
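
    A minimal sketch of the averaging at the heart of the RBT: because random patches of a good sphere differ only by their own form deviations, which average toward zero, the mean of many patch measurements estimates the instrument bias. The data layout is an assumption (Python/NumPy):

        import numpy as np

        def estimate_instrument_bias(patch_maps):
            """patch_maps: (n_patches, ny, nx) height maps of random sphere
            patches, each with the nominal spherical form removed."""
            patches = np.asarray(patch_maps, dtype=float)
            bias = patches.mean(axis=0)           # instrument bias estimate
            pv = float(bias.max() - bias.min())   # peak-to-valley of the bias
            return bias, pv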

  18. Multivariate linear recurrences and power series division

    PubMed Central

    Hauser, Herwig; Koutschan, Christoph

    2012-01-01

    Bousquet-Mélou and Petkovšek investigated the generating functions of multivariate linear recurrences with constant coefficients. We will give a reinterpretation of their results by means of division theorems for formal power series, which clarifies the structural background and provides short, conceptual proofs. In addition, extending the division to the context of differential operators, the case of recurrences with polynomial coefficients can be treated in an analogous way. PMID:23482936

  19. The Evolution of Multivariate Maternal Effects

    PubMed Central

    Kuijper, Bram; Johnstone, Rufus A.; Townley, Stuart

    2014-01-01

    There is a growing interest in predicting the social and ecological contexts that favor the evolution of maternal effects. Most predictions focus, however, on maternal effects that affect only a single character, whereas the evolution of maternal effects is poorly understood in the presence of suites of interacting traits. To overcome this, we simulate the evolution of multivariate maternal effects (captured by the matrix M) in a fluctuating environment. We find that the rate of environmental fluctuations has a substantial effect on the properties of M: in slowly changing environments, offspring are selected to have a multivariate phenotype roughly similar to the maternal phenotype, so that M is characterized by positive dominant eigenvalues; by contrast, rapidly changing environments favor Ms with dominant eigenvalues that are negative, as offspring favor a phenotype which substantially differs from the maternal phenotype. Moreover, when fluctuating selection on one maternal character is temporally delayed relative to selection on other traits, we find a striking pattern of cross-trait maternal effects in which maternal characters influence not only the same character in offspring, but also other offspring characters. Additionally, when selection on one character contains more stochastic noise relative to selection on other traits, large cross-trait maternal effects evolve from those maternal traits that experience the smallest amounts of noise. The presence of these cross-trait maternal effects shows that individual maternal effects cannot be studied in isolation, and that their study in a multivariate context may provide important insights about the nature of past selection. Our results call for more studies that measure multivariate maternal effects in wild populations. PMID:24722346

  20. Multivariable PID Controller For Robotic Manipulator

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun; Tarokh, Mahmoud

    1990-01-01

    Gains updated during operation to cope with changes in characteristics and loads. Conceptual multivariable controller for robotic manipulator includes proportional/derivative (PD) controller in inner feedback loop, and proportional/integral/derivative (PID) controller in outer feedback loop. PD controller places poles of transfer function (in Laplace-transform space) of control system for linearized mathematical model of dynamics of robot. PID controller tracks trajectory and decouples input and output.

  1. Simplified Linear Multivariable Control Of Robots

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1989-01-01

    Simplified method developed to design control system that makes joints of robot follow reference trajectories. Generic design includes independent multivariable feedforward and feedback controllers. Feedforward controller based on inverse of linearized model of dynamics of robot and implements control law that contains only proportional and first and second derivatives of reference trajectories with respect to time. Feedback controller, which implements control law of proportional, first-derivative, and integral terms, makes tracking errors converge toward zero as time passes.

  2. Calibrated parametric medical ultrasound imaging.

    PubMed

    Valckx, F M; Thijsse, J M; van Geemen, A J; Rotteveel, J J; Mullaart, R

    2000-01-01

    The goal of this study was to develop a calibrated on-line technique to extract as much diagnostically relevant information as possible from conventional video-format echograms. The final aim is to improve the diagnostic potential of medical ultrasound. Video-output images were acquired by a frame grabber board incorporated in a multiprocessor workstation. Calibration images were obtained from a stable tissue-mimicking phantom with known acoustic characteristics. Using these images as a reference, the depth dependence of the gray level could be largely corrected for the transducer performance characteristics, for the observer-dependent equipment settings, and for attenuation in the examined tissues. Second-order statistical parameters still displayed some inconsistent depth dependencies. The results obtained with two echoscanners for the same phantom were different; hence, an a posteriori normalization of clinical data with the phantom data is indicated. Prior to processing of clinical echograms, the anatomical reflections and echoless voids were removed automatically. The final step in the preprocessing concerned the compensation of the overall attenuation in the tissue. A 'sliding window' processing was then applied to a region of interest (ROI) in the 'back-scan converted' images. A number of first- and second-order statistical texture parameters and acoustical parameters were estimated in each window and assigned to the central pixel. This procedure results in a set of new 'parametric' images of the ROI, which can be inserted in the original echogram (gray value, color) or presented as a color overlay. A clinical example is presented to illustrate the potential of the developed technique. Depending on the choice of the parameters, four full-resolution calibrated parametric images can be calculated and simultaneously displayed within 5 to 20 seconds. In conclusion, an on-line technique has been developed to estimate acoustic and texture parameters with a reduced

  3. Prediction of the Thickness of a Thin Paint Film by Applying a Modified Partial-Least-Squares-1 Method to Data Obtained in Terahertz Reflectometry

    NASA Astrophysics Data System (ADS)

    Iwata, Tetsuo; Yoshioka, Shuji; Nakamura, Shota; Mizutani, Yasuhiro; Yasui, Takeshi

    2013-10-01

    We applied a multivariate analysis method to time-domain (TD) data obtained in terahertz (THz) reflectometry for predicting the thickness of a single-layered paint film deposited on a metal substrate. For prediction purposes, we built a calibration model from TD-THz waveforms obtained from films of different thicknesses but of the same kind. Because each TD-THz waveform is approximated by the superposition of two echo pulses (one reflected from the air-film boundary and the other from the film-substrate boundary), a difference in thickness produces a relative shift in time between the two echo pulses. We then predicted unknown thicknesses of the paint films by using the calibration model. Although any multivariate analysis method could be used, we proposed employing a modified partial-least-squares-1 (PLS1) method because it gives a superior calibration model in principle. The prediction procedure worked well for moderately thin films (typically, several to several tens of micrometers) rather than thicker ones.
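
    The physical relation behind the peak shift is compact: for a film of refractive index n on metal, the delay between the two echoes is Δt = 2nd/c. A trivial sketch, with variable names assumed:

    ```python
    def film_thickness(t_echo1, t_echo2, n_film, c=2.998e8):
        """Single-layer thickness from the echo-pulse separation:
        d = c * (t2 - t1) / (2 * n_film), with times in seconds."""
        return c * (t_echo2 - t_echo1) / (2.0 * n_film)

    # e.g., a 0.33 ps separation at n_film = 2 gives roughly 25 micrometers
    ```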

  4. Relative vs Absolute Antenna Calibrations: How, when, and why do they differ? A Comparison of Antenna Calibration Catalogs

    NASA Astrophysics Data System (ADS)

    Mader, G. L.; Bilich, A. L.

    2013-12-01

    Since 1994, NGS has computed relative antenna calibrations for more than 350 antenna models used by NGS customers and geodetic networks worldwide. In a 'relative' calibration, the antenna under test is calibrated relative to a standard reference antenna, the AOA D/M_T chokering. The majority of NGS calibrations have been made publicly available at the web site www.ngs.noaa.gov/ANTCAL as well as via the NGS master calibrations file ant_info.003. In the mid-2000s, institutions in Germany began distributing 'absolute' antenna calibrations, where the antenna under test is calibrated independently of any reference antenna. These calibration methods also overcame some limitations of relative calibrations by going to lower elevation angles and capturing azimuthal variations. Soon thereafter (2008), the International GNSS Service (IGS) initiated a geodetic community movement away from relative calibrations and toward absolute calibrations as the de facto standard. The IGS now distributes a catalog of absolute calibrations taken from several institutions, distributed as the IGS master calibrations file igs08.atx. The competing methods and files have raised many questions about when it is or is not valid to process a geodetic network using a combination of relative and absolute calibrations, and if/when it is valid to combine the NGS and IGS catalogs. Therefore, in this study, we compare the NGS catalog of relative calibrations against the IGS catalog of absolute calibrations. As of the writing of this abstract, there are 77 antenna+radome combinations common to both the NGS relative and IGS absolute catalogs, spanning 16 years of testing (1997 to present). Fifty different antenna models and 8 manufacturers are represented in the study sample. We apply the widely accepted standard method for converting relative to absolute, then difference the calibrations. Various statistics describe the observed differences between phase center offset (PCO), phase center variation
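
    The conversion mentioned above is, at heart, an addition of patterns: relative values are measured with respect to the AOA D/M_T reference antenna, so adding that reference antenna's absolute pattern produces absolute-equivalent values. A schematic sketch, with array shapes assumed to be matching elevation/azimuth grids:

    ```python
    import numpy as np

    def relative_to_absolute(pcv_rel, pcv_ref_abs):
        """Convert a relative phase-center-variation pattern to its
        absolute equivalent by adding the absolute pattern of the
        reference antenna used in the relative calibration."""
        return np.asarray(pcv_rel) + np.asarray(pcv_ref_abs)
    ```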

  5. Effect of contact stiffness on wedge calibration of lateral force in atomic force microscopy

    SciTech Connect

    Wang Fei; Zhao Xuezeng

    2007-04-15

    Quantitative friction measurement of nanomaterials in atomic force microscope requires accurate calibration method for lateral force. The effect of contact stiffness on lateral force calibration of atomic force microscope is discussed in detail and an improved calibration method is presented. The calibration factor derived from the original method increased with the applied normal load, which indicates that separate calibration should be required for every given applied normal load to keep the accuracy of friction measurement. We improve the original method by introducing the contact factor, which is derived from the contact stiffness between the tip and the sample, to the calculation of calibration factors. The improved method makes the calculation of calibration factors under different applied normal loads possible without repeating the calibration procedure. Comparative experiments on a silicon wafer have been done by both the two methods to validate the method in this article.

  6. Power of univariate and multivariate analyses of repeated measurements in controlled clinical trials.

    PubMed

    Overall, J E; Atlas, R S

    1999-04-01

    The power of univariate and multivariate tests of significance is compared in relation to linear and nonlinear patterns of treatment effects in a repeated measurement design. Bonferroni correction was used to control the experiment-wise error rate in combining results from univariate tests of significance accomplished separately on average level, linear, quadratic, and cubic trend components. Multivariate tests on these same components of the overall treatment effect, as well as a multivariate test for between-groups difference on the original repeated measurements, were also evaluated for power against the same representative patterns of treatment effects. Results emphasize the advantage of parsimony that is achieved by transforming multiple repeated measurements into a reduced set of meaningful composite variables representing average levels and rates of change. The Bonferroni correction applied to the separate univariate tests provided experiment-wise protection against Type I error, produced slightly greater experiment-wise power than a multivariate test applied to the same components of the data patterns, and provided substantially greater power than a multivariate test on the complete set of original repeated measurements. The separate univariate tests provide interpretive advantage regarding locus of the treatment effects. PMID:10348408
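
    A minimal sketch of the component-wise testing strategy, assuming each group is stored as a subjects-by-timepoints array; the contrast construction and two-sample t-tests are generic stand-ins for the procedure summarized above.

    ```python
    import numpy as np
    from scipy import stats

    def trend_component_tests(group_a, group_b, alpha=0.05):
        """Bonferroni-corrected univariate tests on trend components
        (level, linear, quadratic, cubic) of repeated measurements;
        requires at least 4 time points."""
        n_time = group_a.shape[1]
        # discrete orthogonal polynomial contrasts via QR of a Vandermonde matrix
        V = np.vander(np.arange(n_time), 4, increasing=True).astype(float)
        Q, _ = np.linalg.qr(V)               # columns: level..cubic contrasts
        out = []
        for k, name in enumerate(["level", "linear", "quadratic", "cubic"]):
            t, p = stats.ttest_ind(group_a @ Q[:, k], group_b @ Q[:, k])
            out.append((name, t, p, p < alpha / 4))  # Bonferroni over 4 tests
        return out
    ```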

  7. Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions

    PubMed Central

    Chen, Shengyong; Xiao, Gang; Li, Xiaoli

    2014-01-01

    This paper proposes a fast and accurate calibration method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head and multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-view calibration process is implemented to obtain the transformations between views. We first develop an efficient local repair algorithm to improve the depth map, and then a special calibration body is designed. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance, which can be further applied to EEG source localization applications on the human brain. PMID:24803954

  8. Aircraft measurement of electric field - Self-calibration

    NASA Technical Reports Server (NTRS)

    Winn, W. P.

    1993-01-01

    Aircraft measurement of electric fields is difficult as the electrically conducting surface of the aircraft distorts the electric field. Calibration requires determining the relations between the undistorted electric field in the absence of the vehicle and the signals from electric field meters that sense the local distorted fields in their immediate vicinity. This paper describes a generalization of a calibration method which uses pitch and roll maneuvers. The technique determines both the calibration coefficients and the direction of the electric vector. The calibration of individual electric field meters and the elimination of the aircraft's self-charge are described. Linear combinations of field mill signals are examined and absolute calibration and error analysis are discussed. The calibration method was applied to data obtained during a flight near thunderstorms.

  9. Multimodal spatial calibration for accurately registering EEG sensor positions.

    PubMed

    Zhang, Jianhua; Chen, Jian; Chen, Shengyong; Xiao, Gang; Li, Xiaoli

    2014-01-01

    This paper proposes a fast and accurate calibration method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head and multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-view calibration process is implemented to obtain the transformations between views. We first develop an efficient local repair algorithm to improve the depth map, and then a special calibration body is designed. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance, which can be further applied to EEG source localization applications on the human brain. PMID:24803954

  10. SAR calibration: A technology review

    NASA Technical Reports Server (NTRS)

    Larson, R. W.; Politis, D. T.; Shuchman, R. A.

    1983-01-01

    Various potential applications of amplitude-calibrated SAR systems are briefly described, along with an estimate of calibration performance requirements. A review of the basic SAR calibration problem is given. For background purposes and to establish consistent definition of terms, various conventional SAR performance parameters are reviewed along with three additional parameters which are directly related to calibrated SAR systems. Techniques for calibrating a SAR are described. Included in the results presented are: calibration philosophy and procedures; review of the calibration signal generator technology development with results describing both the development of instrumentation and internal calibration measurements for two SAR systems; summary of analysis and measurements required to determine optimum retroreflector design and configuration for use as a reference for the absolute calibration of a SAR system; and summary of techniques for in-flight measurements of SAR antenna response.

  11. Optical detector calibrator system

    NASA Technical Reports Server (NTRS)

    Strobel, James P. (Inventor); Moerk, John S. (Inventor); Youngquist, Robert C. (Inventor)

    1996-01-01

    An optical detector calibrator system simulates a source of optical radiation to which a detector to be calibrated is responsive. A light source selected to emit radiation in a range of wavelengths corresponding to the spectral signature of the source is disposed within a housing containing a microprocessor for controlling the light source and other system elements. An adjustable iris and a multiple aperture filter wheel are provided for controlling the intensity of radiation emitted from the housing by the light source to adjust the simulated distance between the light source and the detector to be calibrated. The geared iris has an aperture whose size is adjustable by means of a first stepper motor controlled by the microprocessor. The multiple aperture filter wheel contains neutral density filters of different attenuation levels which are selectively positioned in the path of the emitted radiation by a second stepper motor that is also controlled by the microprocessor. An operator can select a number of detector tests including range, maximum and minimum sensitivity, and basic functionality. During the range test, the geared iris and filter wheel are repeatedly adjusted by the microprocessor as necessary to simulate an incrementally increasing simulated source distance. A light source calibration subsystem is incorporated in the system which insures that the intensity of the light source is maintained at a constant level over time.

  12. NVLAP calibration laboratory program

    SciTech Connect

    Cigler, J.L.

    1993-12-31

    This paper presents an overview of the progress up to April 1993 in the development of the Calibration Laboratories Accreditation Program within the framework of the National Voluntary Laboratory Accreditation Program (NVLAP) at the National Institute of Standards and Technology (NIST).

  13. Calibration issues for MUSE

    NASA Astrophysics Data System (ADS)

    Kelz, Andreas; Roth, Martin; Bauer, Svend; Gerssen, Joris; Hahn, Thomas; Weilbacher, Peter; Laux, Uwe; Loupias, Magali; Kosmalski, Johan; McDermid, Richard; Bacon, Roland

    2008-07-01

    The Multi-Unit Spectroscopic Explorer (MUSE) is an integral-field spectrograph for the VLT for the next decade. Using an innovative field-splitting and slicing design, combined with an assembly of 24 spectrographs, MUSE will provide some 90,000 spectra in one exposure, covering a simultaneous spectral range from 465 to 930 nm. The design and manufacture of the Calibration Unit, the alignment tests of the Spectrograph and Detector sub-systems, and the development of the Data Reduction Software for MUSE are work packages under the responsibility of the AIP, which is a partner in a Europe-wide consortium of six institutes and ESO led by the Centre de Recherche Astronomique de Lyon. MUSE will be operated, and therefore has to be calibrated, in a variety of modes, which include seeing-limited and AO-assisted operations, providing wide and narrow fields of view. MUSE aims to obtain unprecedented ultra-deep 3D-spectroscopic exposures, involving integration times of the order of 80 hours at the VLT. To achieve the corresponding science goals, instrumental stability, accurate calibration, and adequate data reduction tools are needed. The paper describes the status at PDR of the AIP-related work packages, in particular with respect to the spatial, spectral, image-quality, and geometrical calibration and related data reduction aspects.

  14. Pseudo Linear Gyro Calibration

    NASA Technical Reports Server (NTRS)

    Harman, Richard; Bar-Itzhack, Itzhack Y.

    2003-01-01

    Previous high-fidelity onboard attitude algorithms estimated only the spacecraft attitude and gyro bias. The desire to promote spacecraft and ground autonomy, together with improvements in onboard computing power, has spurred development of more sophisticated calibration algorithms. Namely, there is a desire to provide for sensor calibration through calibration parameter estimation onboard the spacecraft as well as autonomous estimation on the ground. Gyro calibration is a particularly challenging area of research. There are a variety of gyro devices available for any prospective mission, ranging from inexpensive low-fidelity gyros with potentially unstable scale factors to much more expensive, extremely stable high-fidelity units. Much research has been devoted to designing dedicated estimators such as particular Extended Kalman Filter (EKF) algorithms or Square Root Information Filters. This paper builds upon previous attitude, rate, and specialized gyro parameter estimation work performed with the Pseudo Linear Kalman Filter (PSELIKA). The PSELIKA advantage is the use of the standard linear Kalman Filter algorithm. A PSELIKA algorithm for an orthogonal gyro set, which includes estimates of attitude, rate, gyro misalignments, gyro scale factors, and gyro bias, is developed and tested using simulated and flight data. The measurements PSELIKA uses include gyro and quaternion tracker data.
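
    To make the estimated quantities concrete, here is a generic gyro measurement model with the parameters named in the abstract; the parameterization is an illustrative assumption, not necessarily the one used by PSELIKA.

    ```python
    import numpy as np

    def gyro_measurement(omega_true, misalign, scale, bias, noise_std=0.0):
        """Generic three-axis gyro error model:
        omega_meas = (I + M + diag(s)) @ omega_true + b + white noise,
        with M the small off-diagonal misalignments, s the scale-factor
        errors, and b the bias."""
        H = np.eye(3) + misalign + np.diag(scale)
        return H @ omega_true + bias + np.random.normal(0.0, noise_std, 3)
    ```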

  15. Improved Regression Calibration

    ERIC Educational Resources Information Center

    Skrondal, Anders; Kuha, Jouni

    2012-01-01

    The likelihood for generalized linear models with covariate measurement error cannot in general be expressed in closed form, which makes maximum likelihood estimation taxing. A popular alternative is regression calibration which is computationally efficient at the cost of inconsistent estimation. We propose an improved regression calibration…

  16. Computerized tomography calibrator

    NASA Technical Reports Server (NTRS)

    Engel, Herbert P. (Inventor)

    1991-01-01

    A set of interchangeable pieces comprising a computerized tomography calibrator, and a method of use thereof, permits focusing of a computerized tomographic (CT) system. The interchangeable pieces include a plurality of nestable, generally planar mother rings, adapted for the receipt of planar inserts of predetermined sizes, and of predetermined material densities. The inserts further define openings therein for receipt of plural sub-inserts. All pieces are of known sizes and densities, permitting the assembling of different configurations of materials of known sizes and combinations of densities, for calibration (i.e., focusing) of a computerized tomographic system through variation of operating variables thereof. Rather than serving as a phantom, which is intended to be representative of a particular workpiece to be tested, the set of interchangeable pieces permits simple and easy standardized calibration of a CT system. The calibrator and its related method of use further includes use of air or of particular fluids for filling various openings, as part of a selected configuration of the set of pieces.

  17. Commodity-Free Calibration

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Commodity-free calibration is a reaction rate calibration technique that does not require the addition of any commodities. This technique is a specific form of the reaction rate technique, where all of the necessary reactants, other than the sample being analyzed, are either inherent in the analyzing system or specifically added or provided to the system for a reason other than calibration. After introduction, the component of interest is exposed to other reactants or flow paths already present in the system. The instrument detector records one of the following to determine the rate of reaction: the increase in the response of the reaction product, a decrease in the signal of the analyte response, or a decrease in the signal from the inherent reactant. With this data, the initial concentration of the analyte is calculated. This type of system can analyze and calibrate simultaneously, reduce the risk of false positives and exposure to toxic vapors, and improve accuracy. Moreover, having an excess of the reactant already present in the system eliminates the need to add commodities, which further reduces cost, logistic problems, and potential contamination. Also, the calculations involved can be simplified by comparison to those of the reaction rate technique. We conducted tests with hypergols as an initial investigation into the feasibility of the technique.

  18. Thermistor mount efficiency calibration

    SciTech Connect

    Cable, J.W.

    1980-05-01

    Thermistor mount efficiency calibration is accomplished by use of the power equation concept and by complex signal-ratio measurements. A comparison of thermistor mounts at microwave frequencies is made by mixing the reference and the reflected signals to produce a frequency at which the amplitude and phase difference may be readily measured.

  19. LOFAR Facet Calibration

    NASA Astrophysics Data System (ADS)

    van Weeren, R. J.; Williams, W. L.; Hardcastle, M. J.; Shimwell, T. W.; Rafferty, D. A.; Sabater, J.; Heald, G.; Sridhar, S. S.; Dijkema, T. J.; Brunetti, G.; Brüggen, M.; Andrade-Santos, F.; Ogrean, G. A.; Röttgering, H. J. A.; Dawson, W. A.; Forman, W. R.; de Gasperin, F.; Jones, C.; Miley, G. K.; Rudnick, L.; Sarazin, C. L.; Bonafede, A.; Best, P. N.; Bîrzan, L.; Cassano, R.; Chyży, K. T.; Croston, J. H.; Ensslin, T.; Ferrari, C.; Hoeft, M.; Horellou, C.; Jarvis, M. J.; Kraft, R. P.; Mevius, M.; Intema, H. T.; Murray, S. S.; Orrú, E.; Pizzo, R.; Simionescu, A.; Stroe, A.; van der Tol, S.; White, G. J.

    2016-03-01

    LOFAR, the Low-Frequency Array, is a powerful new radio telescope operating between 10 and 240 MHz. LOFAR allows detailed sensitive high-resolution studies of the low-frequency radio sky. At the same time LOFAR also provides excellent short baseline coverage to map diffuse extended emission. However, producing high-quality deep images is challenging due to the presence of direction-dependent calibration errors, caused by imperfect knowledge of the station beam shapes and the ionosphere. Furthermore, the large data volume and presence of station clock errors present additional difficulties. In this paper we present a new calibration scheme, which we name facet calibration, to obtain deep high-resolution LOFAR High Band Antenna images using the Dutch part of the array. This scheme solves and corrects the direction-dependent errors in a number of facets that cover the observed field of view. Facet calibration provides close to thermal noise limited images for a typical 8 hr observing run at ∼5″ resolution, meeting the specifications of the LOFAR Tier-1 northern survey.

  20. Calibration Of Oxygen Monitors

    NASA Technical Reports Server (NTRS)

    Zalenski, M. A.; Rowe, E. L.; Mcphee, J. R.

    1988-01-01

    Readings corrected for temperature, pressure, and humidity of air. Program for handheld computer developed to ensure accuracy of oxygen monitors in National Transonic Facility, where liquid nitrogen stored. Calibration values, determined daily, based on entries of data on barometric pressure, temperature, and relative humidity. Output provided directly in millivolts.

  1. Signal inference with unknown response: calibration-uncertainty renormalized estimator.

    PubMed

    Dorn, Sebastian; Enßlin, Torsten A; Greiner, Maksim; Selig, Marco; Boehm, Vanessa

    2015-01-01

    The calibration of a measurement device is crucial for every scientific experiment, where a signal has to be inferred from data. We present CURE, the calibration-uncertainty renormalized estimator, to reconstruct a signal and simultaneously the instrument's calibration from the same data without knowing the exact calibration, but its covariance structure. The idea of the CURE method, developed in the framework of information field theory, is to start with an assumed calibration to successively include more and more portions of calibration uncertainty into the signal inference equations and to absorb the resulting corrections into renormalized signal (and calibration) solutions. Thereby, the signal inference and calibration problem turns into a problem of solving a single system of ordinary differential equations and can be identified with common resummation techniques used in field theories. We verify the CURE method by applying it to a simplistic toy example and compare it against existing self-calibration schemes, Wiener filter solutions, and Markov chain Monte Carlo sampling. We conclude that the method is able to keep up in accuracy with the best self-calibration methods and serves as a noniterative alternative to them. PMID:25679743

  2. Signal inference with unknown response: Calibration-uncertainty renormalized estimator

    NASA Astrophysics Data System (ADS)

    Dorn, Sebastian; Enßlin, Torsten A.; Greiner, Maksim; Selig, Marco; Boehm, Vanessa

    2015-01-01

    The calibration of a measurement device is crucial for every scientific experiment, where a signal has to be inferred from data. We present CURE, the calibration-uncertainty renormalized estimator, to reconstruct a signal and simultaneously the instrument's calibration from the same data without knowing the exact calibration, but its covariance structure. The idea of the CURE method, developed in the framework of information field theory, is to start with an assumed calibration to successively include more and more portions of calibration uncertainty into the signal inference equations and to absorb the resulting corrections into renormalized signal (and calibration) solutions. Thereby, the signal inference and calibration problem turns into a problem of solving a single system of ordinary differential equations and can be identified with common resummation techniques used in field theories. We verify the CURE method by applying it to a simplistic toy example and compare it against existing self-calibration schemes, Wiener filter solutions, and Markov chain Monte Carlo sampling. We conclude that the method is able to keep up in accuracy with the best self-calibration methods and serves as a noniterative alternative to them.

  3. Fast Field Calibration of MIMU Based on the Powell Algorithm

    PubMed Central

    Ma, Lin; Chen, Wanwan; Li, Bin; You, Zheng; Chen, Zhigang

    2014-01-01

    The calibration of micro inertial measurement units is important in ensuring the precision of navigation systems, which are equipped with microelectromechanical system sensors that suffer from various errors. However, traditional calibration methods cannot meet the demand for fast field calibration. This paper presents a fast field calibration method based on the Powell algorithm. The key constraints of this calibration are that the norm of the accelerometer measurement vector equals the gravity magnitude and that the norm of the gyro measurement vector equals the rotational velocity input. To solve for the error parameters, a mathematical error model of the novel calibration is established and the Powell algorithm is applied, with convergence of the nonlinear equations used as the stopping criterion. All parameters can then be obtained in this manner. A comparison of the proposed method with the traditional calibration method through navigation tests demonstrates the performance of the proposed calibration method, which also requires considerably less time than the traditional method. PMID:25177801
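
    A sketch of the accelerometer half of such a cost function, minimized with Powell's derivative-free method via SciPy; the six-parameter scale/bias model is an assumed simplification of the paper's error model.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    G = 9.80665  # gravity magnitude (m/s^2)

    def accel_cost(params, raw_samples):
        """Static accelerometer calibration cost: after correcting with
        bias b and scale s, every static measurement vector should have
        norm equal to gravity. params = [sx, sy, sz, bx, by, bz]."""
        s, b = params[:3], params[3:]
        corrected = (raw_samples - b) * s
        return np.sum((np.linalg.norm(corrected, axis=1) - G) ** 2)

    # usage sketch: raw_samples is an (N, 3) array of static readings taken
    # in varied orientations; Powell's method needs no gradients.
    # result = minimize(accel_cost, x0=np.r_[np.ones(3), np.zeros(3)],
    #                   args=(raw_samples,), method="Powell")
    ```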

  4. Camera self-calibration method based on two vanishing points

    NASA Astrophysics Data System (ADS)

    Duan, Shaoli; Zang, Huaping; Xu, Mengmeng; Zhang, Xiaofang; Gong, Qiaoxia; Tian, Yongzhi; Liang, Erjun; Liu, Xiaomin

    2015-10-01

    Camera calibration is one of the indispensable processes for obtaining 3D depth information from 2D images in the field of computer vision. Camera self-calibration is more convenient and flexible, especially in applications with large depths of field, wide fields of view, and scene conversion, as well as other occasions like zooms. In this paper, a self-calibration method based on two vanishing points is proposed, in which the geometric properties of vanishing points formed by two groups of orthogonal parallel lines are applied to camera self-calibration. Using the orthogonality of the vectors connecting the optical center with the two vanishing points, constraint equations on the camera intrinsic parameters are established. With this method, four internal parameters of the camera can be solved for using only four images taken from different viewpoints in a scene. Compared with two other self-calibration methods, based on the absolute quadric and on a calibration plate, the method based on two vanishing points requires no calibration objects, no camera movement, and no information on the size and location of the parallel lines; it needs no strict experimental equipment and has a convenient calibration process and a simple algorithm. A comparison with experimental results from the calibration-plate self-calibration method, obtained with the machine vision software Halcon, verifies the practicability and effectiveness of the method proposed in this paper.
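
    The constraint itself fits in a few lines: the rays back-projected through two vanishing points of orthogonal line families must be orthogonal, which yields one equation on the intrinsic matrix K per image. A sketch, with homogeneous 3-vectors assumed:

    ```python
    import numpy as np

    def vp_constraint_residual(K, vp1, vp2):
        """For vanishing points of orthogonal direction pairs, the
        back-projected rays K^-1 v1 and K^-1 v2 must be orthogonal; this
        residual is driven to zero when solving for the intrinsics K."""
        Kinv = np.linalg.inv(K)
        d1, d2 = Kinv @ vp1, Kinv @ vp2
        return float(d1 @ d2)
    ```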

  5. A Comparison of Two Balance Calibration Model Building Methods

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard; Ulbrich, Norbert

    2007-01-01

    Simulated strain-gage balance calibration data is used to compare the accuracy of two balance calibration model building methods for different noise environments and calibration experiment designs. The first building method obtains a math model for the analysis of balance calibration data after applying a candidate math model search algorithm to the calibration data set. The second building method uses stepwise regression analysis in order to construct a model for the analysis. Four balance calibration data sets were simulated in order to compare the accuracy of the two math model building methods. The simulated data sets were prepared using the traditional One Factor At a Time (OFAT) technique and the Modern Design of Experiments (MDOE) approach. Random and systematic errors were introduced in the simulated calibration data sets in order to study their influence on the math model building methods. Residuals of the fitted calibration responses and other statistical metrics were compared in order to evaluate the calibration models developed with different combinations of noise environment, experiment design, and model building method. Overall, predicted math models and residuals of both math model building methods show very good agreement. Significant differences in model quality were attributable to noise environment, experiment design, and their interaction. Generally, the addition of systematic error significantly degraded the quality of calibration models developed from OFAT data by either method, but MDOE experiment designs were more robust with respect to the introduction of a systematic component of the unexplained variance.

  6. MULTIVARIATE RECEPTOR MODELS-CURRENT PRACTICE AND FUTURE TRENDS. (R826238)

    EPA Science Inventory

    Multivariate receptor models have been applied to the analysis of air quality data for some time. However, solving the general mixture problem is important in several other fields. This paper looks at the panoply of these models with a view of identifying common challenges and ...

  7. Integrating Supplementary Application-Based Tutorials in the Multivariable Calculus Course

    ERIC Educational Resources Information Center

    Verner, I. M.; Aroshas, S.; Berman, A.

    2008-01-01

    This article presents a study in which applications were integrated in the Multivariable Calculus course at the Technion in the framework of supplementary tutorials. The purpose of the study was to test the possibility of extending the conventional curriculum with optional applied problem-solving activities and to get initial evidence on the possible…

  8. Inertial Sensor Error Reduction through Calibration and Sensor Fusion.

    PubMed

    Lambrecht, Stefan; Nogueira, Samuel L; Bortole, Magdo; Siqueira, Adriano A G; Terra, Marco H; Rocon, Eduardo; Pons, José L

    2016-01-01

    This paper presents a comparison between cooperative and local Kalman filters (KF) for estimating the absolute segment angle under two calibration conditions: a simplified calibration that can be replicated in most laboratories, and a complex calibration similar to that applied by commercial vendors. The cooperative filters use information either from all inertial sensors attached to the body (Matricial KF) or from the inertial sensors and the potentiometers of an exoskeleton (Markovian KF). A one-minute walking trial of a subject walking with a 6-DoF exoskeleton was used to assess the absolute segment angle of the trunk, thigh, shank, and foot. The results indicate that, regardless of the segment and filter applied, the more complex calibration always results in significantly better performance than the simplified calibration. The interaction between filter and calibration suggests that when the quality of the calibration is unknown, the Markovian KF is recommended. With the complex calibration, the Matricial and Markovian KF perform similarly, with average RMSE below 1.22 degrees. Cooperative KFs perform better than, or at least as well as, local KFs; we therefore recommend using cooperative KFs instead of local KFs for control or analysis of walking. PMID:26901198

  9. Inertial Sensor Error Reduction through Calibration and Sensor Fusion

    PubMed Central

    Lambrecht, Stefan; Nogueira, Samuel L.; Bortole, Magdo; Siqueira, Adriano A. G.; Terra, Marco H.; Rocon, Eduardo; Pons, José L.

    2016-01-01

    This paper presents a comparison between cooperative and local Kalman filters (KF) for estimating the absolute segment angle under two calibration conditions: a simplified calibration that can be replicated in most laboratories, and a complex calibration similar to that applied by commercial vendors. The cooperative filters use information either from all inertial sensors attached to the body (Matricial KF) or from the inertial sensors and the potentiometers of an exoskeleton (Markovian KF). A one-minute walking trial of a subject walking with a 6-DoF exoskeleton was used to assess the absolute segment angle of the trunk, thigh, shank, and foot. The results indicate that, regardless of the segment and filter applied, the more complex calibration always results in significantly better performance than the simplified calibration. The interaction between filter and calibration suggests that when the quality of the calibration is unknown, the Markovian KF is recommended. With the complex calibration, the Matricial and Markovian KF perform similarly, with average RMSE below 1.22 degrees. Cooperative KFs perform better than, or at least as well as, local KFs; we therefore recommend using cooperative KFs instead of local KFs for control or analysis of walking. PMID:26901198

  10. Optimal model-free prediction from multivariate time series.

    PubMed

    Runge, Jakob; Donner, Reik V; Kurths, Jürgen

    2015-05-01

    Forecasting a time series from multivariate predictors constitutes a challenging problem, especially using model-free approaches. Most techniques, such as nearest-neighbor prediction, quickly suffer from the curse of dimensionality and overfitting for more than a few predictors which has limited their application mostly to the univariate case. Therefore, selection strategies are needed that harness the available information as efficiently as possible. Since often the right combination of predictors matters, ideally all subsets of possible predictors should be tested for their predictive power, but the exponentially growing number of combinations makes such an approach computationally prohibitive. Here a prediction scheme that overcomes this strong limitation is introduced utilizing a causal preselection step which drastically reduces the number of possible predictors to the most predictive set of causal drivers making a globally optimal search scheme tractable. The information-theoretic optimality is derived and practical selection criteria are discussed. As demonstrated for multivariate nonlinear stochastic delay processes, the optimal scheme can even be less computationally expensive than commonly used suboptimal schemes like forward selection. The method suggests a general framework to apply the optimal model-free approach to select variables and subsequently fit a model to further improve a prediction or learn statistical dependencies. The performance of this framework is illustrated on a climatological index of El Niño Southern Oscillation. PMID:26066231

  11. Fraud detection in medicare claims: A multivariate outlier detection approach

    SciTech Connect

    Burr, T.; Hale, C.; Kantor, M.

    1997-04-01

    We apply traditional and customized multivariate outlier detection methods to detect fraud in medicare claims. We use two sets of 11 derived features, and one set of the 22 combined features. The features are defined so that fraudulent medicare providers should tend to have larger feature values than non-fraudulent providers. Therefore we have an a priori direction ('large values') in high-dimensional feature space in which to search for the multivariate outliers. We focus on three issues: (1) outlier masking (example: the presence of one outlier can make it difficult to detect a second outlier), (2) the impact of having an a priori direction in which to search for fraud, and (3) how to compare our detection methods. Traditional methods include Mahalanobis distances (with and without dimension reduction), k-nearest neighbor, and density estimation methods. Some methods attempt to mitigate the outlier masking problem (for example, the minimum volume ellipsoid covariance estimator). Customized methods include ranking methods (such as Spearman rank ordering) that exploit the 'large is suspicious' notion. No two methods agree completely on which providers are most suspicious, so we present ways to compare our methods. One comparison method uses a list of known-fraudulent providers. All comparison methods restrict attention to the most suspicious providers.
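
    As a concrete baseline from the 'traditional' family above, the squared Mahalanobis distance scores each provider's feature vector against the sample distribution; robust variants such as the minimum volume ellipsoid swap in resistant estimates of the mean and covariance to reduce masking. The layout of X is an assumption.

    ```python
    import numpy as np

    def mahalanobis_scores(X):
        """Squared Mahalanobis distance of each row of X (providers x
        features) from the sample mean; larger scores flag candidate
        multivariate outliers."""
        mu = X.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
        diff = X - mu
        return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    ```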

  12. A Multivariate Granger Causality Concept towards Full Brain Functional Connectivity

    PubMed Central

    Schmid-Hertel, Nicole; Witte, Herbert; Wismüller, Axel; Leistritz, Lutz

    2016-01-01

    Detecting changes of spatially high-resolution functional connectivity patterns in the brain is crucial for improving the fundamental understanding of brain function in both health and disease, yet still poses one of the biggest challenges in computational neuroscience. Currently, classical multivariate Granger Causality analyses of directed interactions between single process components in coupled systems are commonly restricted to spatially low-dimensional data, which requires a pre-selection or aggregation of time series as a preprocessing step. In this paper we propose a new fully multivariate Granger Causality approach with embedded dimension reduction that makes it possible to obtain a representation of functional connectivity for spatially high-dimensional data. The resulting functional connectivity networks may consist of several thousand vertices and thus contain more detailed information compared to connectivity networks obtained from approaches based on particular regions of interest. Our large scale Granger Causality approach is applied to synthetic and resting state fMRI data with a focus on how well network community structure, which represents a functional segmentation of the network, is preserved. It is demonstrated that a number of different community detection algorithms, which utilize a variety of algorithmic strategies and exploit topological features differently, reveal meaningful information on the underlying network module structure. PMID:27064897

  13. A Method for Comparing Multivariate Time Series with Different Dimensions

    PubMed Central

    Tapinos, Avraam; Mendes, Pedro

    2013-01-01

    In many situations it is desirable to compare dynamical systems based on their behavior. Similarity of behavior often implies similarity of internal mechanisms or dependency on common extrinsic factors. While there are widely used methods for comparing univariate time series, most dynamical systems are characterized by multivariate time series. Yet, comparison of multivariate time series has been limited to cases where they share a common dimensionality. A semi-metric is a distance function that has the properties of non-negativity, symmetry and reflexivity, but not sub-additivity. Here we develop a semi-metric – SMETS – that can be used for comparing groups of time series that may have different dimensions. To demonstrate its utility, the method is applied to dynamic models of biochemical networks and to portfolios of shares. The former is an example of a case where the dependencies between system variables are known, while in the latter the system is treated (and behaves) as a black box. PMID:23393554

  14. Modeling pharmacokinetic data using heavy-tailed multivariate distributions.

    PubMed

    Lindsey, J K; Jones, B

    2000-08-01

    Pharmacokinetic studies of drug and metabolite concentrations in the blood are usually conducted as crossover trials, especially in Phases I and II. A longitudinal series of measurements is collected on each subject within each period. Dependence among such observations, within and between periods, will generally be fairly complex, requiring two levels of variance components, for the subjects and for the periods within subjects, and an autocorrelation within periods as well as a time-varying variance. Until now, the standard way in which this has been modeled is using a multivariate normal distribution. Here, we introduce procedures for simultaneously handling these various types of dependence in a wider class of distributions called the multivariate power exponential and Student t families. They can have the heavy tails required for handling the extreme observations that may occur in such contexts. We also consider various forms of serial dependence among the observations and find that they provide more improvement to our models than do the variance components. An integrated Ornstein-Uhlenbeck (IOU) stochastic process fits much better to our data set than the conventional continuous first-order autoregression, CAR(1). We apply these models to a Phase I study of the drug, flosequinan, and its metabolite. PMID:10959917

  15. Escaping the Curse of Dimensionality in Estimating Multivariate Transfer Entropy

    NASA Astrophysics Data System (ADS)

    Runge, Jakob; Heitzig, Jobst; Petoukhov, Vladimir; Kurths, Jürgen

    2012-06-01

    Multivariate transfer entropy (TE) is a model-free approach to detect causalities in multivariate time series. It is able to distinguish direct from indirect causality and common drivers without assuming any underlying model. But despite these advantages it has mostly been applied in a bivariate setting as it is hard to estimate reliably in high dimensions since its definition involves infinite vectors. To overcome this limitation, we propose to embed TE into the framework of graphical models and present a formula that decomposes TE into a sum of finite-dimensional contributions that we call decomposed transfer entropy. Graphical models further provide a richer picture because they also yield the causal coupling delays. To estimate the graphical model we suggest an iterative algorithm, a modified version of the PC-algorithm with a very low estimation dimension. We present an appropriate significance test and demonstrate the method’s performance using examples of nonlinear stochastic delay-differential equations and observational climate data (sea level pressure).

  16. Escaping the curse of dimensionality in estimating multivariate transfer entropy.

    PubMed

    Runge, Jakob; Heitzig, Jobst; Petoukhov, Vladimir; Kurths, Jürgen

    2012-06-22

    Multivariate transfer entropy (TE) is a model-free approach to detect causalities in multivariate time series. It is able to distinguish direct from indirect causality and common drivers without assuming any underlying model. But despite these advantages it has mostly been applied in a bivariate setting as it is hard to estimate reliably in high dimensions since its definition involves infinite vectors. To overcome this limitation, we propose to embed TE into the framework of graphical models and present a formula that decomposes TE into a sum of finite-dimensional contributions that we call decomposed transfer entropy. Graphical models further provide a richer picture because they also yield the causal coupling delays. To estimate the graphical model we suggest an iterative algorithm, a modified version of the PC-algorithm with a very low estimation dimension. We present an appropriate significance test and demonstrate the method's performance using examples of nonlinear stochastic delay-differential equations and observational climate data (sea level pressure). PMID:23004667

  17. Diagonal dominance for the multivariable Nyquist array using function minimization

    NASA Technical Reports Server (NTRS)

    Leininger, G. G.

    1977-01-01

    A new technique for the design of multivariable control systems using the multivariable Nyquist array method was developed. A conjugate direction function minimization algorithm is utilized to achieve a diagonal dominant condition over the extended frequency range of the control system. The minimization is performed on the ratio of the moduli of the off-diagonal terms to the moduli of the diagonal terms of either the inverse or direct open loop transfer function matrix. Several new feedback design concepts were also developed, including: (1) dominance control parameters for each control loop; (2) compensator normalization to evaluate open loop conditions for alternative design configurations; and (3) an interaction index to determine the degree and type of system interaction when all feedback loops are closed simultaneously. This new design capability was implemented on an IBM 360/75 in a batch mode but can be easily adapted to an interactive computer facility. The method was applied to the Pratt and Whitney F100 turbofan engine.
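
    The dominance criterion being minimized can be written compactly: for each loop and frequency, the ratio of the sum of off-diagonal moduli to the diagonal modulus. A sketch, assuming the open-loop (or inverse) transfer matrix has been sampled into an (n_freq, m, m) array:

    ```python
    import numpy as np

    def dominance_ratio(G_freq):
        """Row-dominance measure for a transfer-function matrix on a
        frequency grid: sum of off-diagonal moduli over the diagonal
        modulus, per loop and frequency. Values below 1 indicate diagonal
        dominance; the design minimizes this ratio over frequency."""
        G = np.abs(np.asarray(G_freq))        # (n_freq, m, m)
        diag = np.einsum("fii->fi", G)
        off = G.sum(axis=2) - diag
        return off / diag
    ```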

  18. Simplified Vicarious Radiometric Calibration

    NASA Technical Reports Server (NTRS)

    Stanley, Thomas; Ryan, Robert; Holekamp, Kara; Pagnutti, Mary

    2010-01-01

    A measurement-based radiance estimation approach for vicarious radiometric calibration of spaceborne multispectral remote sensing systems has been developed. This simplified process eliminates the use of radiative transfer codes and reduces the number of atmospheric assumptions required to perform sensor calibrations. Like prior approaches, the simplified method involves the collection of ground truth data coincident with the overpass of the remote sensing system being calibrated, but this approach differs from the prior techniques in both the nature of the data collected and the manner in which the data are processed. In traditional vicarious radiometric calibration, ground truth data are gathered using ground-viewing spectroradiometers and one or more sun photometer(s), among other instruments, located at a ground target area. The measured data from the ground-based instruments are used in radiative transfer models to estimate the top-of-atmosphere (TOA) target radiances at the time of satellite overpass. These TOA radiances are compared with the satellite sensor readings to radiometrically calibrate the sensor. Traditional vicarious radiometric calibration methods require that an atmospheric model be defined such that the ground-based observations of solar transmission and diffuse-to-global ratios are in close agreement with the radiative transfer code estimation of these parameters. This process is labor-intensive and complex, and can be prone to errors. The errors can be compounded because of approximations in the model and inaccurate assumptions about the radiative coupling between the atmosphere and the terrain. The errors can increase the uncertainty of the TOA radiance estimates used to perform the radiometric calibration. In comparison, the simplified approach does not use atmospheric radiative transfer models and involves fewer assumptions concerning the radiative transfer properties of the atmosphere. This new technique uses two neighboring uniform

  19. Dilution standard addition calibration: A practical calibration strategy for multiresidue organic compounds determination.

    PubMed

    Martins, Manoel L; Rizzetti, Tiele M; Kemmerich, Magali; Saibt, Nathália; Prestes, Osmar D; Adaime, Martha B; Zanella, Renato

    2016-08-19

    Among calibration approaches for the determination of organic compounds in complex matrices, external calibration, based on solutions of the analytes in solvent or in blank matrix extracts, is the most widely applied. Although matrix-matched calibration (MMC) can compensate for matrix effects, it does not compensate for low recovery results. Standard addition (SA) and procedural standard calibration (PSC) are the usual alternatives, although they consume more sample and/or matrix blanks and require extra sample preparations, more time, and higher costs. Thus, the goal of this work was to establish a fast and efficient calibration approach, the diluted standard addition calibration (DSAC), based on successive dilutions of a spiked blank sample. To evaluate the proposed approach, solvent calibration (SC), MMC, PSC, and DSAC were applied to evaluate recovery results for grape blank samples spiked with 66 pesticides. Samples were extracted with the acetate QuEChERS method and the compounds determined by ultra-high performance liquid chromatography tandem mass spectrometry (UHPLC-MS/MS). The results indicated that low recovery results for some pesticides were compensated by both the PSC and DSAC approaches. Considering recoveries from 70 to 120% with RSD <20% as adequate, DSAC yielded 83%, 98%, and 100% of compounds meeting these criteria at the spiking levels of 10, 50, and 100 µg kg(-1), respectively. PSC gave the same results (83%, 98%, and 100%), better than those obtained by MMC (79%, 95%, and 97%) and by SC (62%, 70%, and 79%). The DSAC strategy proved suitable for the calibration of multiresidue determination methods, producing adequate results in terms of trueness, and it is easier and faster to perform than the other approaches. PMID:27432791
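
    A schematic of the DSAC idea, assuming one blank sample spiked at a known level, carried through the full method, and its final extract serially diluted; fitting a line through the resulting points bakes both matrix effects and recovery into the calibration. Names and the dilution scheme are illustrative.

    ```python
    import numpy as np

    def dsac_curve(spike_conc, dilution_factors, responses):
        """Diluted standard addition calibration sketch: concentrations
        come from serial dilutions of one spiked, fully processed blank
        extract; a straight line is fitted to (concentration, response)."""
        conc = spike_conc / np.asarray(dilution_factors, dtype=float)
        slope, intercept = np.polyfit(conc, responses, 1)
        return slope, intercept

    # usage sketch: quantify an unknown from its instrument response
    # slope, intercept = dsac_curve(100.0, [1, 2, 5, 10, 20], resp)
    # c_unknown = (resp_unknown - intercept) / slope
    ```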

  20. Multichannel radiometer calibration: a new approach

    NASA Astrophysics Data System (ADS)

    Diaz, Susana; Booth, Charles R.; Armstrong, Roy; Brunat, Claudio; Cabrera, Sergio; Camilion, Carolina; Casiccia, Claudio; Deferrari, Guillermo; Fuenzalida, Humberto; Lovengreen, Charlotte; Paladini, Alejandro; Pedroni, Jorge; Rosales, Alejandro; Zagarese, Horacio; Vernet, Maria

    2005-09-01

    The error in irradiance measured with Sun-calibrated multichannel radiometers may be large when the solar zenith angle (SZA) increases. This could be particularly detrimental in radiometers installed at mid and high latitudes, where SZAs at noon are larger than 50° during part of the year. When a multiregressive methodology, including the total ozone column and SZA, was applied in the calculation of the calibration constant, an important improvement was observed. By combining two different equations, an improvement was obtained at almost all the SZAs in the calibration. An independent test that compared the irradiance of a multichannel instrument and a spectroradiometer installed in Ushuaia, Argentina, was used to confirm the results.
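
    A minimal stand-in for the multiregressive step: regress the calibration constant on solar zenith angle and total ozone column, the regressors named above; the linear functional form is my assumption.

    ```python
    import numpy as np

    def fit_calibration_constant(sza_deg, ozone_du, k_obs):
        """Least-squares fit of k ~ b0 + b1*SZA + b2*TOC; the fitted model
        then supplies an SZA- and ozone-dependent calibration constant."""
        A = np.column_stack([np.ones_like(sza_deg), sza_deg, ozone_du])
        coef, *_ = np.linalg.lstsq(A, k_obs, rcond=None)
        return coef
    ```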

  1. Multichannel radiometer calibration: a new approach.

    PubMed

    Diaz, Susana; Booth, Charles R; Armstrong, Roy; Brunat, Claudio; Cabrera, Sergio; Camilion, Carolina; Casiccia, Claudio; Deferrari, Guillermo; Fuenzalida, Humberto; Lovengreen, Charlotte; Paladini, Alejandro; Pedroni, Jorge; Rosales, Alejandro; Zagarese, Horacio; Vernet, Maria

    2005-09-10

    The error in irradiance measured with Sun-calibrated multichannel radiometers may be large when the solar zenith angle (SZA) increases. This could be particularly detrimental in radiometers installed at mid and high latitudes, where SZAs at noon are larger than 50 degrees during part of the year. When a multiregressive methodology, including the total ozone column and SZA, was applied in the calculation of the calibration constant, an important improvement was observed. By combining two different equations, an improvement was obtained at almost all the SZAs in the calibration. An independent test that compared the irradiance of a multichannel instrument and a spectroradiometer installed in Ushuaia, Argentina, was used to confirm the results. PMID:16161648

  2. Mercury CEM Calibration

    SciTech Connect

    John F. Schabron; Joseph F. Rovani; Susan S. Sorini

    2007-03-31

    The Clean Air Mercury Rule (CAMR), which was published in the Federal Register on May 18, 2005, requires that calibration of mercury continuous emissions monitors (CEMs) be performed with NIST-traceable standards. Western Research Institute (WRI) is working closely with the Electric Power Research Institute (EPRI), the National Institute of Standards and Technology (NIST), and the Environmental Protection Agency (EPA) to facilitate the development of the experimental criteria for a NIST traceability protocol for dynamic elemental mercury vapor generators. The traceability protocol will be written by EPA. Traceability will be based on the actual analysis of the output of each calibration unit at several concentration levels ranging from about 2 to 40 µg/m³, and this analysis will be directly traceable to analyses by NIST using isotope dilution inductively coupled plasma/mass spectrometry (ID ICP/MS) through a chain of analyses linking the calibration unit in the power plant to the NIST ID ICP/MS. Prior to this project, NIST did not provide a recommended mercury vapor pressure equation or list mercury vapor pressure in its vapor pressure database. The NIST Physical and Chemical Properties Division in Boulder, Colorado was subcontracted under this project to study the issue in detail and to recommend a mercury vapor pressure equation that the vendors of mercury vapor pressure calibration units can use to calculate the elemental mercury vapor concentration in an equilibrium chamber at a particular temperature. As part of this study, a preliminary evaluation of calibration units from five vendors was made. The work was performed by NIST in Gaithersburg, MD and by Joe Rovani from WRI, who traveled to NIST as a Visiting Scientist.

  3. Multivariate determination of hematocrit in whole blood by attenuated total reflection infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Kostrewa, S.; Paarmann, Ch.; Goemann, W.; Heise, H. M.

    1998-06-01

    A spectral analysis of whole blood was undertaken in the mid-infrared spectral range by using the attenuated total reflection technique. The reference hematocrit values of 109 blood samples were measured after centrifugation, with a range between 30% and 50%. Multivariate calibration with the partial least-squares (PLS) algorithm was performed using baseline-corrected absorbance spectra between 1600 and 1200 cm⁻¹. The relative prediction error achieved was 2.7% based on average hematocrit values. The performance is comparable to that of centrifugation or conductivity measurements. The spectral effects from protein adsorption onto the ATR crystal, as well as from erythrocyte sedimentation, have been investigated.
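
    For readers who want the flavor of such a calibration, a minimal PLS sketch using scikit-learn; the "spectra" below are random stand-ins rather than ATR infrared data, and the number of latent variables is an arbitrary choice:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(109, 400))      # 109 mock spectra, 400 wavenumber points
        y = 30 + 20 * rng.random(109)        # mock hematocrit references, 30-50%

        pls = PLSRegression(n_components=5)  # latent variables (arbitrary here)
        pls.fit(X, y)
        y_hat = pls.predict(X).ravel()
        print(f"mean relative calibration error: {100 * np.mean(np.abs(y_hat - y) / y):.1f}%")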

  4. Variable Acceleration Force Calibration System (VACS)

    NASA Technical Reports Server (NTRS)

    Rhew, Ray D.; Parker, Peter A.; Johnson, Thomas H.; Landman, Drew

    2014-01-01

    Conventionally, force balances have been calibrated manually, using a complex system of free hanging precision weights, bell cranks, and/or other mechanical components. Conventional methods may provide sufficient accuracy in some instances, but are often quite complex and labor-intensive, requiring three to four man-weeks to complete each full calibration. To ensure accuracy, gravity-based loading is typically utilized. However, this often causes difficulty when applying loads in three simultaneous, orthogonal axes. A complex system of levers, cranks, and cables must be used, introducing increased sources of systematic error, and significantly increasing the time and labor intensity required to complete the calibration. One aspect of the VACS is a method wherein the mass utilized for calibration is held constant, and the acceleration is changed to thereby generate relatively large forces with relatively small test masses. Multiple forces can be applied to a force balance without changing the test mass, and dynamic forces can be applied by rotation or oscillating acceleration. If rotational motion is utilized, a mass is rigidly attached to a force balance, and the mass is exposed to a rotational field. A large force can be applied by utilizing a large rotational velocity. A centrifuge or rotating table can be used to create the rotational field, and fixtures can be utilized to position the force balance. The acceleration may also be linear. For example, a table that moves linearly and accelerates in a sinusoidal manner may also be utilized. The test mass does not have to move in a path that is parallel to the ground, and no re-leveling is therefore required. Balance deflection corrections may be applied passively by monitoring the orientation of the force balance with a three-axis accelerometer package. Deflections are measured during each test run, and adjustments with respect to the true applied load can be made during the post-processing stage. This paper will
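
    The core of the rotational-field idea is elementary: a constant test mass m spun at angular rate omega and radius r sees a radial load F = m·omega²·r. A back-of-the-envelope sketch with invented numbers, not values from the VACS hardware:

        import math

        m = 0.5          # test mass, kg (invented)
        rpm = 300.0      # table speed, rev/min (invented)
        r = 0.25         # radius from the spin axis, m (invented)

        omega = 2 * math.pi * rpm / 60.0   # rad/s
        force = m * omega**2 * r           # F = m * omega^2 * r
        print(f"applied radial force: {force:.1f} N")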

  5. Linear models of coregionalization for multivariate lattice data: a general framework for coregionalized multivariate CAR models.

    PubMed

    MacNab, Ying C

    2016-09-20

    We present a general coregionalization framework for developing coregionalized multivariate Gaussian conditional autoregressive (cMCAR) models for Bayesian analysis of multivariate lattice data in general and multivariate disease mapping data in particular. This framework is inclusive of cMCARs that facilitate flexible modelling of spatially structured symmetric or asymmetric cross-variable local interactions, allowing a wide range of separable or non-separable covariance structures, and symmetric or asymmetric cross-covariances, to be modelled. We present a brief overview of established univariate Gaussian conditional autoregressive (CAR) models for univariate lattice data and develop coregionalized multivariate extensions. Classes of cMCARs are presented by formulating precision structures. The resulting conditional properties of the multivariate spatial models are established, which cast new light on cMCARs with richly structured covariances and cross-covariances of different spatial ranges. The related methods are illustrated via an in-depth Bayesian analysis of a Minnesota county-level cancer data set. We also bring a new dimension to the traditional enterprise of Bayesian disease mapping: estimating and mapping covariances and cross-covariances of the underlying disease risks. Maps of covariances and cross-covariances bring to light spatial characterizations of the cMCARs and inform on spatial risk associations between areas and diseases. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27091685

  6. Multi-cameras calibration from spherical targets

    NASA Astrophysics Data System (ADS)

    Zhao, Chengyun; Zhang, Jin; Deng, Huaxia; Yu, Liandong

    2016-01-01

    Multi-camera calibration using spheres is more convenient than using a planar target because spheres image well from different angles, and the internal and external parameters of multiple cameras can be obtained in a single calibration. In this paper, a novel multi-camera calibration method based on multiple spheres is proposed. A calibration target with multiple fixed balls is used, and the geometric properties of the sphere projection model are analyzed. During the experiment, the spherical target is placed in the common field of view of the multi-camera system, and the corresponding data are stored when the cameras are triggered by a signal generator. The contours of the balls are detected by a Hough transform and the center coordinates are determined with sub-pixel accuracy. The center coordinates are then used as input for calibration, and the internal as well as external parameters are calculated by Zhang's theory. When multiple cameras are calibrated simultaneously from different angles using multiple spheres, the center coordinates of each sphere can be determined accurately even when the target images are out of focus, so the method can improve calibration precision. Zhang's plane-template method is included as a comparison calibration experiment, and the error sources of the experiment are analyzed. The results indicate that the proposed method is suitable for multi-camera calibration.
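
    The contour-detection step can be sketched with OpenCV's Hough circle transform; the image below is synthetic (a drawn circle standing in for a ball contour) and the detector parameters are illustrative only:

        import numpy as np
        import cv2

        img = np.zeros((480, 640), dtype=np.uint8)
        cv2.circle(img, (320, 240), 80, 255, 2)      # stand-in for a ball contour

        circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1.5, minDist=100,
                                   param1=100, param2=30, minRadius=40, maxRadius=120)
        if circles is not None:
            for x, y, r in circles[0]:
                print(f"centre = ({x:.1f}, {y:.1f}), radius = {r:.1f} px")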

  7. PERSONALISED BODY COUNTER CALIBRATION USING ANTHROPOMETRIC PARAMETERS.

    PubMed

    Pölz, S; Breustedt, B

    2016-09-01

    Current calibration methods for body counting offer personalisation for lung counting predominantly with respect to ratios of body mass and height. Chest wall thickness is used as an intermediate parameter. This work revises and extends these methods using a series of computational phantoms derived from medical imaging data in combination with radiation transport simulation and statistical analysis. As an example, the method is applied to the calibration of the In Vivo Measurement Laboratory (IVM) at Karlsruhe Institute of Technology (KIT) comprising four high-purity germanium detectors in two partial body measurement set-ups. The Monte Carlo N-Particle (MCNP) transport code and the Extended Cardiac-Torso (XCAT) phantom series have been used. Analysis of the computed sample data consisting of 18 anthropometric parameters and calibration factors generated from 26 photon sources for each of the 30 phantoms reveals the significance of those parameters required for producing an accurate estimate of the calibration function. Body circumferences related to the source location perform best in the example, while parameters related to body mass show comparable but lower performances, and those related to body height and other lengths exhibit low performances. In conclusion, it is possible to give more accurate estimates of calibration factors using this proposed approach including estimates of uncertainties related to interindividual anatomical variation of the target population. PMID:26396263

  8. Calibration Matters: Advances in Strapdown Airborne Gravimetry

    NASA Astrophysics Data System (ADS)

    Becker, D.

    2015-12-01

    Using a commercial navigation-grade strapdown inertial measurement unit (IMU) for airborne gravimetry can be advantageous in terms of cost, handling, and space consumption compared to the classical stable-platform spring gravimeters. Up to now, however, large sensor errors made it impossible to reach the mGal level with such IMUs, as they are not designed or optimized for this kind of application. Apart from proper error modeling in the filtering process, specific calibration methods tailored to aerogravity may help to bridge this gap and improve their performance. Based on simulations, a quantitative analysis is presented of how much IMU sensor errors, such as biases, scale factors, cross-couplings, and thermal drifts, distort the determination of gravity and the deflection of the vertical (DOV). Several lab and in-field calibration methods are briefly discussed, and calibration results are shown for an iMAR RQH unit. In particular, a thermal lab calibration of its QA2000 accelerometers greatly improved the long-term drift behavior. Latest results from four recent airborne gravimetry campaigns confirm the effectiveness of the calibrations applied, with cross-over accuracies reaching 1.0 mGal (0.6 mGal after cross-over adjustment) and DOV accuracies reaching 1.1 arc seconds after cross-over adjustment.

  9. Mercury Calibration System

    SciTech Connect

    John Schabron; Eric Kalberer; Joseph Rovani; Mark Sanderson; Ryan Boysen; William Schuster

    2009-03-11

    U.S. Environmental Protection Agency (EPA) Performance Specification 12 in the Clean Air Mercury Rule (CAMR) states that a mercury CEM must be calibrated with National Institute for Standards and Technology (NIST)-traceable standards. In early 2009, a NIST-traceable standard for elemental mercury CEM calibration still does not exist. Despite the vacatur of CAMR by a Federal appeals court in early 2008, a NIST-traceable standard is still needed for whatever regulation is implemented in the future. Thermo Fisher is a major vendor providing complete integrated mercury continuous emissions monitoring (CEM) systems to the industry. WRI is participating with EPA, EPRI, NIST, and Thermo Fisher towards the development of the criteria that will be used in the traceability protocols to be issued by EPA. An initial draft of an elemental mercury calibration traceability protocol was distributed for comment to the participating research groups and vendors on a limited basis in early May 2007. In August 2007, EPA issued an interim traceability protocol for elemental mercury calibrators. Various working drafts of the new interim traceability protocols were distributed in late 2008 and early 2009 to participants in the Mercury Standards Working Committee project. The protocols include sections on qualification and certification. The qualification section describes in general terms tests that must be conducted by the calibrator vendors to demonstrate that their calibration equipment meets the minimum requirements to be established by EPA for use in CAMR monitoring. Variables to be examined include linearity, ambient temperature, back pressure, ambient pressure, line voltage, and effects of shipping. None of the procedures were described in detail in the draft interim documents; however, they describe what EPA would like to eventually develop. WRI is providing the data and results to EPA for use in developing revised experimental procedures and realistic acceptance criteria based on

  10. Multivariate Models for Normal and Binary Responses in Intervention Studies

    ERIC Educational Resources Information Center

    Pituch, Keenan A.; Whittaker, Tiffany A.; Chang, Wanchen

    2016-01-01

    Use of multivariate analysis (e.g., multivariate analysis of variance) is common when normally distributed outcomes are collected in intervention research. However, when mixed responses--a set of normal and binary outcomes--are collected, standard multivariate analyses are no longer suitable. While mixed responses are often obtained in…

  11. Application of glyph-based techniques for multivariate engineering visualization

    NASA Astrophysics Data System (ADS)

    Glazar, Vladimir; Marunic, Gordana; Percic, Marko; Butkovic, Zlatko

    2016-01-01

    This article presents a review of glyph-based techniques for engineering visualization as well as practical application for the multivariate visualization process. Two glyph techniques, Chernoff faces and star glyphs, uncommonly used in engineering practice, are described, applied to the selected data set, run through the chosen optimization methods and user evaluated. As an example of how these techniques function, a set of data for the optimization of a heat exchanger with a microchannel coil is adopted for visualization. The results acquired by the chosen visualization techniques are related to the results of optimization carried out by the response surface method and compared with the results of user evaluation. Based on the data set from engineering research and practice, the advantages and disadvantages of these techniques for engineering visualization are identified and discussed.

  12. A symmetric multivariate leakage correction for MEG connectomes

    PubMed Central

    Colclough, G.L.; Brookes, M.J.; Smith, S.M.; Woolrich, M.W.

    2015-01-01

    Ambiguities in the source reconstruction of magnetoencephalographic (MEG) measurements can cause spurious correlations between estimated source time-courses. In this paper, we propose a symmetric orthogonalisation method to correct for these artificial correlations between a set of multiple regions of interest (ROIs). This process enables the straightforward application of network modelling methods, including partial correlation or multivariate autoregressive modelling, to infer connectomes, or functional networks, from the corrected ROIs. Here, we apply the correction to simulated MEG recordings of simple networks and to a resting-state dataset collected from eight subjects, before computing the partial correlations between power envelopes of the corrected ROI time-courses. We show accurate reconstruction of our simulated networks, and in the analysis of real MEG resting-state connectivity, we find dense bilateral connections within the motor and visual networks, together with longer-range direct fronto-parietal connections. PMID:25862259
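
    The closest-orthonormal correction at the heart of such symmetric orthogonalisation can be sketched in a few lines of NumPy; this omits the paper's rank-reduction and rescaling details and uses simulated time-courses:

        import numpy as np

        rng = np.random.default_rng(1)
        X = rng.normal(size=(1000, 8))     # time points x ROIs (simulated)

        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_orth = U @ Vt                    # nearest matrix with orthonormal columns

        # Zero-lag correlations between the corrected time-courses vanish:
        print(np.allclose(X_orth.T @ X_orth, np.eye(8)))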

  13. A multivariate analysis approach for the Imaging Atmospheric Cerenkov Telescopes System H.E.S.S

    SciTech Connect

    Dubois, F.; Lamanna, G.

    2008-12-24

    We present a multivariate classification approach applied to the analysis of data from the H.E.S.S. Very High Energy (VHE) γ-ray IACT stereoscopic system. This approach combines three complementary analysis methods already successfully applied in H.E.S.S. data analysis. The proposed approach, with the combined effective estimator X_eff, is conceived to improve the signal-to-background ratio and is therefore particularly relevant to the morphological studies of faint extended sources.

  14. Multivariate Analysis of Genotype-Phenotype Association.

    PubMed

    Mitteroecker, Philipp; Cheverud, James M; Pavlicev, Mihaela

    2016-04-01

    With the advent of modern imaging and measurement technology, complex phenotypes are increasingly represented by large numbers of measurements, which may not bear biological meaning one by one. For such multivariate phenotypes, studying the pairwise associations between all measurements and all alleles is highly inefficient and prevents insight into the genetic pattern underlying the observed phenotypes. We present a new method for identifying patterns of allelic variation (genetic latent variables) that are maximally associated-in terms of effect size-with patterns of phenotypic variation (phenotypic latent variables). This multivariate genotype-phenotype mapping (MGP) separates phenotypic features under strong genetic control from less genetically determined features and thus permits an analysis of the multivariate structure of genotype-phenotype association, including its dimensionality and the clustering of genetic and phenotypic variables within this association. Different variants of MGP maximize different measures of genotype-phenotype association: genetic effect, genetic variance, or heritability. In an application to a mouse sample, scored for 353 SNPs and 11 phenotypic traits, the first dimension of genetic and phenotypic latent variables accounted for >70% of genetic variation present in all 11 measurements; 43% of variation in this phenotypic pattern was explained by the corresponding genetic latent variable. The first three dimensions together sufficed to account for almost 90% of genetic variation in the measurements and for all the interpretable genotype-phenotype association. Each dimension can be tested as a whole against the hypothesis of no association, thereby reducing the number of statistical tests from 7766 to 3-the maximal number of meaningful independent tests. Important alleles can be selected based on their effect size (additive or nonadditive effect on the phenotypic latent variable). This low dimensionality of the genotype-phenotype map

  15. Time varying, multivariate volume data reduction

    SciTech Connect

    Ahrens, James P; Fout, Nathaniel; Ma, Kwan - Liu

    2010-01-01

    Large-scale supercomputing is revolutionizing the way science is conducted. A growing challenge, however, is understanding the massive quantities of data produced by large-scale simulations. The data, typically time-varying, multivariate, and volumetric, can occupy from hundreds of gigabytes to several terabytes of storage space. Transferring and processing volume data of such sizes is prohibitively expensive and resource intensive. Although it may not be possible to entirely alleviate these problems, data compression should be considered as part of a viable solution, especially when the primary means of data analysis is volume rendering. In this paper we present our study of multivariate compression, which exploits correlations among related variables, for volume rendering. Two configurations for multidimensional compression based on vector quantization are examined. We emphasize quality reconstruction and interactive rendering, which leads us to a solution using graphics hardware to perform on-the-fly decompression during rendering. In this paper we present a solution which addresses the need for data reduction in large supercomputing environments where data resulting from simulations occupies tremendous amounts of storage. Our solution employs a lossy encoding scheme to achieve data reduction with several options in terms of rate-distortion behavior. We focus on encoding of multiple variables together, with optional compression in space and time. The compressed volumes can be rendered directly with commodity graphics cards at interactive frame rates and rendering quality similar to that of static volume renderers. Compression results using a multivariate time-varying data set indicate that encoding multiple variables results in acceptable performance in the case of spatial and temporal encoding as compared to independent compression of variables. The relative performance of spatial vs. temporal compression is data dependent, although temporal compression has the
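
    The vector-quantization building block can be sketched with SciPy: cluster the per-voxel variable tuples into a small codebook and store one index per voxel. The codebook size and data below are invented:

        import numpy as np
        from scipy.cluster.vq import kmeans, vq

        rng = np.random.default_rng(2)
        voxels = rng.normal(size=(10000, 3))   # 3 variables per voxel (simulated)

        codebook, _ = kmeans(voxels, 64)       # 64-entry codebook (arbitrary size)
        codes, dist = vq(voxels, codebook)     # one small index per voxel
        print(f"mean quantization error: {dist.mean():.3f}")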

  16. Laser-induced breakdown spectroscopy and multivariate statistics for the rapid identification of oxide inclusions in steel products

    NASA Astrophysics Data System (ADS)

    Boué-Bigne, Fabienne

    2016-05-01

    Laser-induced breakdown spectroscopy (LIBS) scanning measurements can generally be used to detect the presence of non-metallic inclusions in steel samples. However, the absence of appropriate standards for calibrating the LIBS instrument signal means that its application is limited to identifying simple diatomic inclusions and inclusions that are chemically fully distinct from one another. Oxide inclusions in steel products have varied and complex chemical content, with an approximate size of interest of 1 μm. Several oxide inclusion types have chemical elements in common, but it is the concentration of these elements that makes an inclusion type have little or, on the contrary, a deleterious impact on the final steel product quality. During the LIBS measurement of such inclusions, the spectroscopic signal is influenced not only by the inclusions' chemical concentrations but also by their varying size and associated laser ablation matrix effects. To address the complexity of calibrating the LIBS instrument signal for identifying such inclusion species, a new approach was developed where a calibration dataset was created, combining the elemental concentrations of typical oxide inclusions with the associated LIBS signal, in order to define a multivariate discriminant function capable of identifying oxide inclusions from LIBS data obtained from the measurement of unknown samples. The new method was applied to a variety of steel product samples. Inclusion populations consisting of mixtures of several complex oxides, with overlapping chemical content and size ranging typically from 1 to 5 μm, were identified and correlated well with validation data. The ability to identify complex inclusion types from LIBS data could open the way to new applications as, for a given sample area, the LIBS measurement is performed in a fraction of the time required by scanning electron microscopy, which is the conventional technique used for inclusion characterisation in steel
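
    The discriminant step can be approximated with an off-the-shelf linear discriminant analysis; the features and class labels below are synthetic stand-ins for LIBS line intensities and oxide types, not the paper's calibration dataset:

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(3)
        X = np.vstack([rng.normal(0, 1, (50, 6)), rng.normal(2, 1, (50, 6))])
        y = np.array([0] * 50 + [1] * 50)      # two mock inclusion classes

        lda = LinearDiscriminantAnalysis().fit(X, y)
        print("training accuracy:", lda.score(X, y))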

  17. Multivariate tests for trend in water quality

    NASA Astrophysics Data System (ADS)

    Loftis, Jim C.; Taylor, Charles H.; Chapman, Phillip L.

    1991-07-01

    Several methods of testing for multivariate trend have been discussed in the statistical and water quality literature. We review both parametric and nonparametric approaches and compare their performance using synthetic data. A new method, based on a robust estimation and testing approach suggested by Sen and Puri, performed very well for serially independent observations. A modified version of the covariance inversion approach presented by Dietz and Killeen also performed well for serially independent observations. For serially correlated observations, the covariance eigenvalue method suggested by Lettenmaier was the best performer.

  18. Multivariate curve-fitting in GAUSS

    USGS Publications Warehouse

    Bunck, C.M.; Pendleton, G.W.

    1988-01-01

    Multivariate curve-fitting techniques for repeated measures have been developed and an interactive program has been written in GAUSS. The program implements not only the one-factor design described in Morrison (1967) but also includes pairwise comparisons of curves and rates, a two-factor design, and other options. Strategies for selecting the appropriate degree for the polynomial are provided. The methods and program are illustrated with data from studies of the effects of environmental contaminants on ducklings, nesting kestrels and quail.
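
    The underlying curve fit is ordinary polynomial regression per group; a minimal sketch with invented growth data (not the duckling, kestrel, or quail measurements):

        import numpy as np

        days = np.array([1, 3, 5, 7, 9, 11], dtype=float)
        mass_control = np.array([32, 55, 88, 130, 178, 230], dtype=float)
        mass_treated = np.array([31, 50, 76, 108, 146, 188], dtype=float)

        p_control = np.polyfit(days, mass_control, 2)   # quadratic growth curves
        p_treated = np.polyfit(days, mass_treated, 2)
        print("control coefficients:", p_control)
        print("treated coefficients:", p_treated)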

  19. New multivariable capabilities of the INCA program

    NASA Technical Reports Server (NTRS)

    Bauer, Frank H.; Downing, John P.; Thorpe, Christopher J.

    1989-01-01

    The INteractive Controls Analysis (INCA) program was developed at NASA's Goddard Space Flight Center to provide a user-friendly, efficient environment for the design and analysis of control systems, specifically spacecraft control systems. Since its inception, INCA has found extensive use in the design, development, and analysis of control systems for spacecraft, instruments, robotics, and pointing systems. The INCA program was initially developed as a comprehensive classical design and analysis tool for small and large order control systems. The latest version of INCA, expected to be released in February of 1990, was expanded to include the capability to perform multivariable controls analysis and design.

  20. Algorithms for computing the multivariable stability margin

    NASA Technical Reports Server (NTRS)

    Tekawy, Jonathan A.; Safonov, Michael G.; Chiang, Richard Y.

    1989-01-01

    Stability margin for multiloop flight control systems has become a critical issue, especially in highly maneuverable aircraft designs where there are inherent strong cross-couplings between the various feedback control loops. To cope with this issue, we have developed computer algorithms, based on nondifferentiable optimization theory, for computing the Multivariable Stability Margin (MSM). The MSM of a dynamical system is the size of the smallest structured perturbation in component dynamics that will destabilize the system. These algorithms have been coded and appear to be reliable. As illustrated by examples, they provide the basis for evaluating the robustness and performance of flight control systems.

  1. Bayesian Transformation Models for Multivariate Survival Data

    PubMed Central

    DE CASTRO, MÁRIO; CHEN, MING-HUI; IBRAHIM, JOSEPH G.; KLEIN, JOHN P.

    2014-01-01

    In this paper we propose a general class of gamma frailty transformation models for multivariate survival data. The transformation class includes the commonly used proportional hazards and proportional odds models. The proposed class also includes a family of cure rate models. Under an improper prior for the parameters, we establish propriety of the posterior distribution. A novel Gibbs sampling algorithm is developed for sampling from the observed data posterior distribution. A simulation study is conducted to examine the properties of the proposed methodology. An application to a data set from a cord blood transplantation study is also reported. PMID:24904194

  2. The Calibration Reference Data System

    NASA Astrophysics Data System (ADS)

    Greenfield, P.; Miller, T.

    2016-07-01

    We describe a software architecture and implementation for using rules to determine which calibration files are appropriate for calibrating a given observation. This new system, the Calibration Reference Data System (CRDS), replaces what had been previously used for the Hubble Space Telescope (HST) calibration pipelines, the Calibration Database System (CDBS). CRDS will be used for the James Webb Space Telescope (JWST) calibration pipelines, and is currently being used for HST calibration pipelines. CRDS can be easily generalized for use in similar applications that need a rules-based system for selecting the appropriate item for a given dataset; we give some examples of such generalizations that will likely be used for JWST. The core functionality of the Calibration Reference Data System is available under an Open Source license. CRDS is briefly contrasted with a sampling of other similar systems used at other observatories.
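
    The general idea of rules-based reference selection is easy to sketch; the rule syntax, header keys, and file names below are invented for illustration and are not the actual CRDS rule format:

        # Toy rules-based reference selection: map observation metadata to a
        # calibration file through ordered match rules (all names hypothetical).
        rules = [
            ({"instrument": "NIRCAM", "detector": "A1"}, "nircam_a1_dark_007.fits"),
            ({"instrument": "NIRCAM"},                   "nircam_generic_dark.fits"),
        ]

        def best_reference(header):
            """Return the first reference file whose rule matches the header."""
            for criteria, ref_file in rules:
                if all(header.get(k) == v for k, v in criteria.items()):
                    return ref_file
            raise LookupError("no matching reference file")

        print(best_reference({"instrument": "NIRCAM", "detector": "A1"}))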

  3. A force calibration standard for magnetic tweezers

    NASA Astrophysics Data System (ADS)

    Yu, Zhongbo; Dulin, David; Cnossen, Jelmer; Köber, Mariana; van Oene, Maarten M.; Ordu, Orkide; Berghuis, Bojk A.; Hensgens, Toivo; Lipfert, Jan; Dekker, Nynke H.

    2014-12-01

    To study the behavior of biological macromolecules and enzymatic reactions under force, advances in single-molecule force spectroscopy have proven instrumental. Magnetic tweezers form one of the most powerful of these techniques, due to their overall simplicity, non-invasive character, potential for high throughput measurements, and large force range. Drawbacks of magnetic tweezers, however, are that accurate determination of the applied forces can be challenging for short biomolecules at high forces and very time-consuming for long tethers at low forces below ~1 piconewton. Here, we address these drawbacks by presenting a calibration standard for magnetic tweezers consisting of measured forces for four magnet configurations. Each such configuration is calibrated for two commonly employed commercially available magnetic microspheres. We calculate forces in both time and spectral domains by analyzing bead fluctuations. The resulting calibration curves, validated through the use of different algorithms that yield close agreement in their determination of the applied forces, span a range from 100 piconewtons down to tens of femtonewtons. These generalized force calibrations will serve as a convenient resource for magnetic tweezers users and diminish variations between different experimental configurations or laboratories.
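
    In its simplest time-domain form, the fluctuation analysis reduces to the equipartition estimate F = k_B·T·L / var(x) for tether extension L and transverse bead excursions x. A sketch with invented numbers; the spectral-domain corrections used in practice are omitted:

        import numpy as np

        kB_T = 4.11e-21     # thermal energy at ~298 K, J
        L = 1.0e-6          # tether extension, m (invented)
        x = np.random.default_rng(4).normal(0.0, 20e-9, 5000)  # bead x-positions, m

        force = kB_T * L / np.var(x)       # equipartition estimate
        print(f"estimated force: {force * 1e12:.2f} pN")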

  4. Phase calibration generator

    NASA Technical Reports Server (NTRS)

    Sigman, E. H.

    1988-01-01

    A phase calibration system was developed for the Deep Space Stations to generate reference microwave comb tones which are mixed in with signals received by the antenna. These reference tones are used to remove drifts of the station's receiving system from the detected data. This phase calibration system includes a cable stabilizer which transfers a 20 MHz reference signal from the control room to the antenna cone. The cable stabilizer compensates for delay changes in the long cable which connects its control room subassembly to its antenna cone subassembly in such a way that the 20 MHz is transferred to the cone with no significant degradation of the hydrogen maser atomic clock stability. The 20 MHz reference is used by the comb generator and is also available for use as a reference for receiver LO's in the cone.

  5. Pipeline Calibration for STIS

    NASA Astrophysics Data System (ADS)

    Hodge, P. E.; Hulbert, S. J.; Lindler, D.; Busko, I.; Hsu, J.-C.; Baum, S.; McGrath, M.; Goudfrooij, P.; Shaw, R.; Katsanis, R.; Keener, S.; Bohlin, R.

    The CALSTIS program for calibration of Space Telescope Imaging Spectrograph data in the OPUS pipeline differs in several significant ways from calibration for earlier HST instruments, such as the use of FITS format, computation of error estimates, and association of related exposures. Several steps are now done in the pipeline that previously had to be done off-line by the user, such as cosmic ray rejection and extraction of 1-D spectra. Although the program is linked with IRAF for image and table I/O, it is written in ANSI C rather than SPP, which should make the code more accessible. FITS extension I/O makes use of the new IRAF FITS kernel for images and the HEASARC FITSIO package for tables.

  6. Calibrated vapor generator source

    DOEpatents

    Davies, J.P.; Larson, R.A.; Goodrich, L.D.; Hall, H.J.; Stoddard, B.D.; Davis, S.G.; Kaser, T.G.; Conrad, F.J.

    1995-09-26

    A portable vapor generator is disclosed that can provide a controlled source of chemical vapors, such as, narcotic or explosive vapors. This source can be used to test and calibrate various types of vapor detection systems by providing a known amount of vapors to the system. The vapor generator is calibrated using a reference ion mobility spectrometer. A method of providing this vapor is described, as follows: explosive or narcotic is deposited on quartz wool, placed in a chamber that can be heated or cooled (depending on the vapor pressure of the material) to control the concentration of vapors in the reservoir. A controlled flow of air is pulsed over the quartz wool releasing a preset quantity of vapors at the outlet. 10 figs.

  7. Calibrated vapor generator source

    DOEpatents

    Davies, John P.; Larson, Ronald A.; Goodrich, Lorenzo D.; Hall, Harold J.; Stoddard, Billy D.; Davis, Sean G.; Kaser, Timothy G.; Conrad, Frank J.

    1995-01-01

    A portable vapor generator is disclosed that can provide a controlled source of chemical vapors, such as, narcotic or explosive vapors. This source can be used to test and calibrate various types of vapor detection systems by providing a known amount of vapors to the system. The vapor generator is calibrated using a reference ion mobility spectrometer. A method of providing this vapor is described, as follows: explosive or narcotic is deposited on quartz wool, placed in a chamber that can be heated or cooled (depending on the vapor pressure of the material) to control the concentration of vapors in the reservoir. A controlled flow of air is pulsed over the quartz wool releasing a preset quantity of vapors at the outlet.

  8. Faint Object Spectrograph (FOS) calibration

    NASA Technical Reports Server (NTRS)

    Harms, R. J.; Beaver, E. A.; Burbidge, E. M.; Angel, J. R. P.; Bartko, F.; Mccoy, J.; Ripp, L.; Bohlin, R.; Davidsen, A. F.; Ford, H.

    1982-01-01

    The Faint Object Spectrograph (FOS) designed for use with The Space Telescope (ST), is currently preparing for instrument assembly, integration, alignment, and calibration. Nearly all optical and detector elements have been completed and calibrated, and selection of flight detectors and all but a few optical elements has been made. Calibration results for the flight detectors and optics are presented, and plans for forthcoming system calibration are briefly described.

  9. Fast calibration of gas flowmeters

    NASA Technical Reports Server (NTRS)

    Lisle, R. V.; Wilson, T. L.

    1981-01-01

    Digital unit automates calibration sequence using calculator IC and programmable read-only memory to solve calibration equations. Infrared sensors start and stop calibration sequence. Instrument calibrates mass flowmeters or rotameters where flow measurement is based on mass or volume. This automatic control reduces operator time by 80 percent. Solid-state components are very reliable, and digital character allows system accuracy to be determined primarily by accuracy of transducers.

  10. Calibration of hydrometers

    NASA Astrophysics Data System (ADS)

    Lorefice, Salvatore; Malengo, Andrea

    2006-10-01

    After a brief description of the different methods employed in the periodic calibration of hydrometers, used in most cases to measure the density of liquids in the range between 500 kg m⁻³ and 2000 kg m⁻³, particular emphasis is given to the multipoint procedure based on hydrostatic weighing, also known as Cuckow's method. The features of the calibration apparatus and the procedure used at the INRiM (formerly IMGC-CNR) density laboratory have been considered to assess all relevant contributions involved in the calibration of different kinds of hydrometers. The uncertainty is strongly dependent on the kind of hydrometer; in particular, the results highlight the importance of the density of the reference buoyant liquid, the temperature of calibration and the skill of the operator in the reading of the scale in the whole assessment of the uncertainty. It is also interesting to realize that for high-resolution hydrometers (division of 0.1 kg m⁻³), the uncertainty contribution of the density of the reference liquid is the main source of the total uncertainty, but its importance falls below about 50% for hydrometers with a division of 0.5 kg m⁻³ and becomes somewhat negligible for hydrometers with a division of 1 kg m⁻³, for which the reading uncertainty is the predominant part of the total uncertainty. At present the best INRiM result is obtained with commercially available hydrometers having a scale division of 0.1 kg m⁻³, for which the relative uncertainty is about 12 × 10⁻⁶.

  11. Program Calibrates Strain Gauges

    NASA Technical Reports Server (NTRS)

    Okazaki, Gary D.

    1991-01-01

    Program dramatically reduces personnel and time requirements for acceptance tests of hardware. Data-acquisition system reads output from Wheatstone full-bridge strain-gauge circuit and calculates strain by use of shunt calibration technique. Program nearly instantaneously tabulates and plots strain data against load-cell outputs. Modified to acquire strain data for other specimens wherever full-bridge strain-gauge circuits used. Written in HP BASIC.
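
    The shunt calibration technique named above is one line of arithmetic: shunting a gauge of resistance R_g with a resistor R_s simulates a strain of magnitude R_g / (GF·(R_g + R_s)) for gauge factor GF. A sketch with typical, assumed values, not parameters from the program:

        R_g = 350.0        # gauge resistance, ohms (typical value, assumed)
        R_s = 100_000.0    # shunt resistor, ohms (assumed)
        GF = 2.0           # gauge factor (assumed)

        eps_sim = R_g / (GF * (R_g + R_s))   # simulated strain magnitude
        print(f"simulated strain: {eps_sim * 1e6:.0f} microstrain")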

  12. Calibration Facilities for NIF

    SciTech Connect

    Perry, T.S.

    2000-06-15

    The calibration facilities will be dynamic and will change to meet the needs of experiments. Small sources, such as the Manson source, should be available to everyone at any time. Carrying out experiments at Omega is providing ample opportunity for practice in pre-shot preparation. Hopefully, the needs demonstrated in these experiments will ensure the development (or continued service) of facilities at each of the laboratories that will be essential for in-house preparation for experiments at NIF.

  13. Calibrated Properties Model

    SciTech Connect

    H. H. Liu

    2003-02-14

    This report has documented the methodologies and the data used for developing rock property sets for three infiltration maps. Model calibration is necessary to obtain parameter values appropriate for the scale of the process being modeled. Although some hydrogeologic property data (prior information) are available, these data cannot be directly used to predict flow and transport processes because they were measured on scales smaller than those characterizing property distributions in models used for the prediction. Since model calibrations were done directly on the scales of interest, the upscaling issue was automatically considered. On the other hand, joint use of data and the prior information in inversions can further increase the reliability of the developed parameters compared with those for the prior information. Rock parameter sets were developed for both the mountain and drift scales because of the scale-dependent behavior of fracture permeability. Note that these parameter sets, except those for faults, were determined using the 1-D simulations. Therefore, they cannot be directly used for modeling lateral flow because of perched water in the unsaturated zone (UZ) of Yucca Mountain. Further calibration may be needed for two- and three-dimensional modeling studies. As discussed above in Section 6.4, uncertainties for these calibrated properties are difficult to accurately determine, because of the inaccuracy of simplified methods for this complex problem or the extremely large computational expense of more rigorous methods. One estimate of uncertainty that may be useful to investigators using these properties is the uncertainty used for the prior information. In most cases, the inversions did not change the properties very much with respect to the prior information. The Output DTNs (including the input and output files for all runs) from this study are given in Section 9.4.

  14. [Quality control dose calibrators].

    PubMed

    Montoza Aguado, M; Delgado García, A; Ramírez Navarro, A; Salgado García, C; Muros de Fuentes, M A; Ortega Lozano, S; Bellón Guardia, M E; Llamas Elvira, J M

    2004-01-01

    We have reviewed the legislation on the quality control of dose calibrators. Verifying the correct operation of these instruments is fundamental in the daily practice of radiopharmacy and nuclear medicine. Spanish legislation requires that these controls be included as part of the quality control of radiopharmaceuticals and of the quality assurance programme in nuclear medicine. We have reviewed guides and protocols from eminent international organizations, summarizing the recommended tests and their periodicity. PMID:15625064

  15. Mesoscale hybrid calibration artifact

    DOEpatents

    Tran, Hy D.; Claudet, Andre A.; Oliver, Andrew D.

    2010-09-07

    A mesoscale calibration artifact, also called a hybrid artifact, suitable for hybrid dimensional measurement, and the method for making the artifact. The hybrid artifact has structural characteristics that make it suitable for dimensional measurement in both vision-based systems and touch-probe-based systems. The hybrid artifact employs the intersection of bulk-micromachined planes to fabricate edges that are sharp to the nanometer level and intersecting planes with crystal-lattice-defined angles.

  16. Optical Calibration of SNO+

    NASA Astrophysics Data System (ADS)

    Maneira, J.; Peeters, S.; Sinclair, J.

    2015-04-01

    SNO is being upgraded to SNO+, which has as its main goal the search for neutrinoless double-beta decay. The upgrade is defined by filling the detector with a novel scintillator mixture containing 130Te. With a lower energy threshold than SNO, SNO+ will also be sensitive to other exciting new physics. Here we describe a new optical calibration system that has been developed to meet new, more stringent radiopurity requirements.

  17. F100 Multivariable Control Synthesis Program. Computer Implementation of the F100 Multivariable Control Algorithm

    NASA Technical Reports Server (NTRS)

    Soeder, J. F.

    1983-01-01

    As turbofan engines become more complex, the development of controls necessitates the use of multivariable control techniques. A control developed for the F100-PW-100(3) turbofan engine by using linear quadratic regulator theory and other modern multivariable control synthesis techniques is described. The assembly language implementation of this control on an SEL 810B minicomputer is described. This implementation was then evaluated by using a real-time hybrid simulation of the engine. The control software was modified to run with a real engine. These modifications, in the form of sensor and actuator failure checks and control executive sequencing, are discussed. Finally, recommendations for control software implementations are presented.
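
    The linear quadratic regulator step named above can be sketched with SciPy's Riccati solver; the two-state plant and weights below are stand-ins, not the F100 engine model:

        import numpy as np
        from scipy.linalg import solve_continuous_are

        A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # mock plant dynamics
        B = np.array([[0.0], [1.0]])
        Q = np.eye(2)                              # state weighting (arbitrary)
        R = np.array([[1.0]])                      # control weighting (arbitrary)

        P = solve_continuous_are(A, B, Q, R)       # algebraic Riccati solution
        K = np.linalg.solve(R, B.T @ P)            # optimal gain, u = -K x
        print("LQR gain:", K)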

  18. Multivariate Gene-Based Association Test on Family Data in MGAS.

    PubMed

    Vroom, César-Reyer; Posthuma, Danielle; Li, Miao-Xin; Dolan, Conor V; van der Sluis, Sophie

    2016-09-01

    In analyses of unrelated individuals, the program multivariate gene-based association test by extended Simes (MGAS), which facilitates multivariate gene-based association testing, was shown to have correct Type I error rate and superior statistical power compared to other multivariate gene-based approaches. Here we show, through simulation, that MGAS can also be applied to data including genetically related subjects (e.g., family data), by using p value information obtained in Plink or in generalized estimating equations (with the 'exchangeable' working correlation matrix), both of which account for the family structure on a univariate single nucleotide polymorphism-based level by applying a sandwich correction of standard errors. We show that when applied to family-data, MGAS has correct Type I error rate, and given the details of the simulation setup, adequate power. Application of MGAS to seven eye measurement phenotypes showed statistically significant association with two genes that were not discovered in previous univariate analyses of a composite score. We conclude that MGAS is a useful and convenient tool for multivariate gene-based genome-wide association analysis in both unrelated and related individuals. PMID:27048268

  19. Radiation calibration targets

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Several prominent features of Mars Pathfinder and surrounding terrain are seen in this image, taken by the Imager for Mars Pathfinder on July 4 (Sol 1), the spacecraft's first day on the Red Planet. Portions of a lander petal are at the lower part of the image. At the left, the mechanism for the high-gain antenna can be seen. The dark area along the right side of the image represents a portion of the low-gain antenna. The radiation calibration target is at the right. The calibration target is made up of a number of materials with well-characterized colors. The known colors of the calibration targets allow scientists to determine the true colors of the rocks and soils of Mars. Three bull's-eye rings provide a wide range of brightness for the camera, similar to a photographer's grayscale chart. In the middle of the bull's-eye is a 5-inch tall post that casts a shadow, which is distorted in this image due to its location with respect to the lander camera.

    A large rock is located at the near center of the image. Smaller rocks and areas of soil are strewn across the Martian terrain up to the horizon line.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C.

  20. Calibrated Properties Model

    SciTech Connect

    T. Ghezzehej

    2004-10-04

    The purpose of this model report is to document the calibrated properties model that provides calibrated property sets for unsaturated zone (UZ) flow and transport process models (UZ models). The calibration of the property sets is performed through inverse modeling. This work followed, and was planned in, ''Technical Work Plan (TWP) for: Unsaturated Zone Flow Analysis and Model Report Integration'' (BSC 2004 [DIRS 169654], Sections 1.2.6 and 2.1.1.6). Direct inputs to this model report were derived from the following upstream analysis and model reports: ''Analysis of Hydrologic Properties Data'' (BSC 2004 [DIRS 170038]); ''Development of Numerical Grids for UZ Flow and Transport Modeling'' (BSC 2004 [DIRS 169855]); ''Simulation of Net Infiltration for Present-Day and Potential Future Climates'' (BSC 2004 [DIRS 170007]); ''Geologic Framework Model'' (GFM2000) (BSC 2004 [DIRS 170029]). Additionally, this model report incorporates errata of the previous version and closure of the Key Technical Issue agreement TSPAI 3.26 (Section 6.2.2 and Appendix B), and it is revised for improved transparency.

  1. A joined multi-metric calibration of river discharge and nitrate loads with different performance measures

    NASA Astrophysics Data System (ADS)

    Haas, Marcelo B.; Guse, Björn; Pfannerstill, Matthias; Fohrer, Nicola

    2016-05-01

    Hydrological models are useful tools to investigate hydrology and water quality in catchments. The calibration of these models is a crucial step to adapt the model to the catchment conditions, allowing effective simulations of environmental processes. In the model calibration, different performance measures need to be considered to represent different hydrology and water quality conditions in combination. This study presents a joined multi-metric calibration of discharge and nitrate loads simulated with the ecohydrological model SWAT. For this purpose, a calibration approach based on flow duration curves (FDC) is advanced by also considering nitrate duration curves (NDC). Five segments of FDCs and of NDCs are evaluated separately to consider the different phases of hydrograph and nitrograph. To consider both magnitude and dynamics in river discharge and nitrate loads, the Kling-Gupta Efficiency (KGE) is used additionally as a statistical performance metric to achieve a joined multi-variable calibration. The results show that a separate assessment of five different magnitudes improves the calibrated nitrate loads. Subsequently, adequate model runs with good performance for different hydrological conditions both for discharge and nitrate are detected in a joined approach based on FDC, NDC, and KGE. In that manner, plausible results were obtained for discharge and nitrate loads in the same model run. Using a multi-metric performance approach, the simultaneous multi-variable calibration led to a balanced model result for all magnitudes of discharge and nitrate loads.
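
    The statistical metric used alongside the duration-curve segments is the Kling-Gupta efficiency, KGE = 1 - sqrt((r-1)² + (alpha-1)² + (beta-1)²), with correlation r, variability ratio alpha = sigma_sim/sigma_obs, and bias ratio beta = mu_sim/mu_obs. A minimal sketch with invented series:

        import numpy as np

        def kge(sim, obs):
            r = np.corrcoef(sim, obs)[0, 1]
            alpha = np.std(sim) / np.std(obs)      # variability ratio
            beta = np.mean(sim) / np.mean(obs)     # bias ratio
            return 1.0 - np.sqrt((r - 1)**2 + (alpha - 1)**2 + (beta - 1)**2)

        obs = np.array([1.0, 2.0, 4.0, 3.0, 2.5])  # invented observations
        sim = np.array([1.1, 1.9, 3.6, 3.2, 2.4])  # invented simulation
        print(f"KGE = {kge(sim, obs):.3f}")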

  2. Dynamic Calibration of Pressure Transducers

    NASA Technical Reports Server (NTRS)

    Hess, R. W.; Davis, W. T.; Davis, P. A.

    1985-01-01

    Sinusoidal calibration signal produced in 4- to 100-Hz range. Portable oscillating-pressure device measures dynamic characteristics of pressure transducers installed in models or aircraft at frequency and oscillating-pressure ranges encountered during unsteady-pressure-measurement tests. Calibration is over range of frequencies and amplitudes not available with commercial acoustic calibration devices.

  3. A Multivariate Model for Coastal Water Quality Mapping Using Satellite Remote Sensing Images

    PubMed Central

    Su, Yuan-Fong; Liou, Jun-Jih; Hou, Ju-Chen; Hung, Wei-Chun; Hsu, Shu-Mei; Lien, Yi-Ting; Su, Ming-Daw; Cheng, Ke-Sheng; Wang, Yeng-Fung

    2008-01-01

    This study demonstrates the feasibility of coastal water quality mapping using satellite remote sensing images. Water quality sampling campaigns were conducted over a coastal area in northern Taiwan for measurements of three water quality variables including Secchi disk depth, turbidity, and total suspended solids. SPOT satellite images nearly concurrent with the water quality sampling campaigns were also acquired. A spectral reflectance estimation scheme proposed in this study was applied to SPOT multispectral images for estimation of the sea surface reflectance. Two models, univariate and multivariate, for water quality estimation using the sea surface reflectance derived from SPOT images were established. The multivariate model takes into consideration the wavelength-dependent combined effect of individual seawater constituents on the sea surface reflectance and is superior to the univariate model. Finally, quantitative coastal water quality mapping was accomplished by substituting the pixel-specific spectral reflectance into the multivariate water quality estimation model.

  4. Network structure of multivariate time series.

    PubMed

    Lacasa, Lucas; Nicosia, Vincenzo; Latora, Vito

    2015-01-01

    Our understanding of a variety of phenomena in physics, biology and economics crucially depends on the analysis of multivariate time series. While a wide range of tools and techniques for time series analysis already exist, the increasing availability of massive data structures calls for new approaches for multidimensional signal processing. We present here a non-parametric method to analyse multivariate time series, based on the mapping of a multidimensional time series into a multilayer network, which allows one to extract information on a high dimensional dynamical system through the analysis of the structure of the associated multiplex network. The method is simple to implement, general, scalable, does not require ad hoc phase space partitioning, and is thus suitable for the analysis of large, heterogeneous and non-stationary time series. We show that simple structural descriptors of the associated multiplex networks allow one to extract and quantify nontrivial properties of coupled chaotic maps, including the transition between different dynamical phases and the onset of various types of synchronization. As a concrete example we then study financial time series, showing that a multiplex network analysis can efficiently discriminate crises from periods of financial stability, where standard methods based on time-series symbolization often fail. PMID:26487040
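
    One common layer construction for such multiplex networks is the horizontal visibility graph, which links samples i and j whenever every value between them is smaller than both; a minimal (brute-force) sketch, with one such graph per variable giving the layers:

        import numpy as np

        def hvg_edges(x):
            """Edges of the horizontal visibility graph of series x."""
            n = len(x)
            return [(i, j) for i in range(n - 1) for j in range(i + 1, n)
                    if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j))]

        x = np.array([0.8, 0.2, 0.6, 0.9, 0.3])   # toy series
        print(hvg_edges(x))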

  5. Network structure of multivariate time series

    PubMed Central

    Lacasa, Lucas; Nicosia, Vincenzo; Latora, Vito

    2015-01-01

    Our understanding of a variety of phenomena in physics, biology and economics crucially depends on the analysis of multivariate time series. While a wide range of tools and techniques for time series analysis already exist, the increasing availability of massive data structures calls for new approaches for multidimensional signal processing. We present here a non-parametric method to analyse multivariate time series, based on the mapping of a multidimensional time series into a multilayer network, which allows one to extract information on a high dimensional dynamical system through the analysis of the structure of the associated multiplex network. The method is simple to implement, general, scalable, does not require ad hoc phase space partitioning, and is thus suitable for the analysis of large, heterogeneous and non-stationary time series. We show that simple structural descriptors of the associated multiplex networks allow one to extract and quantify nontrivial properties of coupled chaotic maps, including the transition between different dynamical phases and the onset of various types of synchronization. As a concrete example we then study financial time series, showing that a multiplex network analysis can efficiently discriminate crises from periods of financial stability, where standard methods based on time-series symbolization often fail. PMID:26487040

  6. Fast Multivariate Search on Large Aviation Datasets

    NASA Technical Reports Server (NTRS)

    Bhaduri, Kanishka; Zhu, Qiang; Oza, Nikunj C.; Srivastava, Ashok N.

    2010-01-01

    Multivariate Time-Series (MTS) are ubiquitous, and are generated in areas as disparate as sensor recordings in aerospace systems, music and video streams, medical monitoring, and financial systems. Domain experts are often interested in searching for interesting multivariate patterns from these MTS databases, which can contain up to several gigabytes of data. Surprisingly, research on MTS search is very limited. Most existing work only supports queries with the same length of data, or queries on a fixed set of variables. In this paper, we propose an efficient and flexible subsequence search framework for massive MTS databases, that, for the first time, enables querying on any subset of variables with arbitrary time delays between them. We propose two provably correct algorithms to solve this problem: (1) an R-tree Based Search (RBS), which uses Minimum Bounding Rectangles (MBR) to organize the subsequences, and (2) a List Based Search (LBS) algorithm, which uses sorted lists for indexing. We demonstrate the performance of these algorithms using two large MTS databases from the aviation domain, each containing several million observations. Both tests show that our algorithms have very high prune rates (>95%), thus needing actual

  7. A semiparametric multivariate and multisite weather generator

    NASA Astrophysics Data System (ADS)

    Apipattanavis, Somkiat; Podestá, Guillermo; Rajagopalan, Balaji; Katz, Richard W.

    2007-11-01

    We propose a semiparametric multivariate weather generator with greater ability to reproduce the historical statistics, especially the wet and dry spells. The proposed approach has two steps: (1) a Markov chain for generating the precipitation state (i.e., no rain, rain, or heavy rain), and (2) a k-nearest neighbor (k-NN) bootstrap resampler for generating the multivariate weather variables. The Markov chain captures the spell statistics while the k-NN bootstrap captures the distributional and lag-dependence statistics of the weather variables. Traditional k-NN generators tend to under-simulate the wet and dry spells that are key to watershed and agricultural modeling for water planning and management; hence the motivation for this research. We demonstrate the utility of the proposed approach and its improvement over the traditional k-NN approach through an application to daily weather data from Pergamino in the Pampas region of Argentina. We show the applicability of the proposed framework in simulating weather scenarios conditional on the seasonal climate forecast and also at multiple sites in the Pampas region.
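
    Step (1) of the generator is a plain first-order Markov chain over precipitation states; a minimal sketch with an invented 3x3 transition matrix (the fitted probabilities in the paper will differ):

        import numpy as np

        states = ["dry", "wet", "very wet"]
        P = np.array([[0.70, 0.25, 0.05],    # from dry   (invented probabilities)
                      [0.40, 0.45, 0.15],    # from wet
                      [0.30, 0.45, 0.25]])   # from very wet

        rng = np.random.default_rng(5)
        s, chain = 0, []
        for _ in range(30):                  # simulate 30 days of states
            s = rng.choice(3, p=P[s])
            chain.append(states[s])
        print(chain)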

  8. Network structure of multivariate time series

    NASA Astrophysics Data System (ADS)

    Lacasa, Lucas; Nicosia, Vincenzo; Latora, Vito

    2015-10-01

    Our understanding of a variety of phenomena in physics, biology and economics crucially depends on the analysis of multivariate time series. While a wide range of tools and techniques for time series analysis already exist, the increasing availability of massive data structures calls for new approaches for multidimensional signal processing. We present here a non-parametric method to analyse multivariate time series, based on the mapping of a multidimensional time series into a multilayer network, which allows one to extract information on a high-dimensional dynamical system through the analysis of the structure of the associated multiplex network. The method is simple to implement, general, scalable, does not require ad hoc phase space partitioning, and is thus suitable for the analysis of large, heterogeneous and non-stationary time series. We show that simple structural descriptors of the associated multiplex networks allow one to extract and quantify nontrivial properties of coupled chaotic maps, including the transition between different dynamical phases and the onset of various types of synchronization. As a concrete example we then study financial time series, showing that a multiplex network analysis can efficiently discriminate crises from periods of financial stability, where standard methods based on time-series symbolization often fail.
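
    The abstract does not spell out the mapping. The sketch below therefore assumes one plausible realisation taken from the visibility-graph literature: each component of the series becomes one layer of a multiplex via its horizontal visibility graph (HVG), and the layers are summarised by their average edge overlap. Both the HVG rule and the descriptor choice are assumptions, not details from the record.

      import numpy as np

      def hvg_edges(x):
          """Horizontal visibility graph: i ~ j iff every point strictly
          between i and j lies below min(x[i], x[j])."""
          n, edges = len(x), set()
          for i in range(n - 1):
              edges.add((i, i + 1))        # neighbours always see each other
              top = x[i + 1]               # running max of intermediate points
              for j in range(i + 2, n):
                  if top >= x[i]:          # an intermediate blocks i's view
                      break
                  if x[j] > top:           # intermediates below both endpoints
                      edges.add((i, j))
                  top = max(top, x[j])
          return edges

      def edge_overlap(layers):
          """Average fraction of layers in which an edge of the multiplex
          is present: one simple structural descriptor."""
          union = set().union(*layers)
          counts = [sum(e in L for L in layers) for e in union]
          return np.mean(counts) / len(layers)

      series = np.random.default_rng(1).normal(size=(4, 500))  # 4 components
      layers = [hvg_edges(x) for x in series]
      print(edge_overlap(layers))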

  9. Internet-based calibration of a multifunction calibrator

    SciTech Connect

    BUNTING BACA,LISA A.; DUDA JR.,LEONARD E.; WALKER,RUSSELL M.; OLDHAM,NILE; PARKER,MARK

    2000-04-17

    A new way of providing calibration services is evolving which employs the Internet to expand present capabilities and make the calibration process more interactive. Sandia National Laboratories and the National Institute of Standards and Technology are collaborating to set up and demonstrate a remote calibration of multifunction calibrators using this Internet-based technique that is becoming known as e-calibration. This paper describes the measurement philosophy and the Internet resources that can provide real-time audio/video/data exchange, consultation and training, as well as web-accessible test procedures, software and calibration reports. The communication system utilizes commercial hardware and software that should be easy to integrate into most calibration laboratories.

  10. Internet-Based Calibration of a Multifunction Calibrator

    SciTech Connect

    BUNTING BACA,LISA A.; DUDA JR.,LEONARD E.; WALKER,RUSSELL M.; OLDHAM,NILE; PARKER,MARK

    2000-12-19

    A new way of providing calibration services is evolving which employs the Internet to expand present capabilities and make the calibration process more interactive. Sandia National Laboratories and the National Institute of Standards and Technology are collaborating to set up and demonstrate a remote calibration of multifunction calibrators using this Internet-based technique that is becoming known as e-calibration. This paper describes the measurement philosophy and the Internet resources that can provide real-time audio/video/data exchange, consultation and training, as well as web-accessible test procedures, software and calibration reports. The communication system utilizes commercial hardware and software that should be easy to integrate into most calibration laboratories.

  11. Sensitivity, Prediction Uncertainty, and Detection Limit for Artificial Neural Network Calibrations.

    PubMed

    Allegrini, Franco; Olivieri, Alejandro C

    2016-08-01

    With the proliferation of multivariate calibration methods based on artificial neural networks, expressions for the estimation of figures of merit such as sensitivity, prediction uncertainty, and detection limit are urgently needed. This would bring nonlinear multivariate calibration methodologies to the same status as their linear counterparts in terms of comparability. Currently only the average prediction error or the ratio of performance to deviation for a test sample set is employed to characterize and promote neural network calibrations. It is clear that additional information is required. We report for the first time expressions that easily allow one to compute three relevant figures: (1) the sensitivity, which turns out to be sample-dependent, as expected, (2) the prediction uncertainty, and (3) the detection limit. The approach resembles that employed for linear multivariate calibration, i.e., partial least-squares regression, specifically adapted to neural network calibration scenarios. As usual, both simulated and real (near-infrared) spectral data sets serve to illustrate the proposal. PMID:27363813

  12. Multivariate NIR spectroscopy models for moisture, ash and calorific content in biofuels using bi-orthogonal partial least squares regression.

    PubMed

    Lestander, Torbjörn A; Rhén, Christofer

    2005-08-01

    The multitude of biofuels in use and their widely different characteristics stress the need for improved characterisation of their chemical and physical properties. Industrial use of biofuels further demands rapid characterisation methods suitable for on-line measurements. The single most important property of biofuels is the calorific value. This is influenced by moisture and ash content as well as the chemical composition of the dry biomass. Near infrared (NIR) spectroscopy and bi-orthogonal partial least squares (BPLS) regression were used to model moisture and ash content as well as gross calorific value in ground samples of stem and branch wood. Samples from 16 individual trees of Norway spruce were artificially moistened into five classes (10, 20, 30, 40 and 50%). Three different models for decomposition of the spectral variation into structure and noise were applied. In total 16 BPLS models were used, all of which showed high accuracy in prediction for a test set, explaining 95.4-99.8% of the reference variable variation. The models for moisture content were spanned by the O-H and C-H overtones, i.e., between water and organic matter. The models for ash content appeared to be based on interactions in carbon chains. The models for calorific value were spanned by C-H stretching, by O-H stretching and bending, and by combinations of O-H and C-O stretching. -C=C- bonds also contributed to the prediction of calorific value. This study illustrates the possibility of using the NIR technique in combination with multivariate calibration to predict economically important properties of biofuels and to interpret the models. This concept may also be applied for on-line prediction in processes to standardize biofuels or in biofuelled plants for process monitoring. PMID:16021218

  13. Greenland Scotland overflow studied by hydro-chemical multivariate analysis

    NASA Astrophysics Data System (ADS)

    Fogelqvist, E.; Blindheim, J.; Tanhua, T.; Østerhus, S.; Buch, E.; Rey, F.

    2003-01-01

    Hydrographic, nutrient and halocarbon tracer data collected in July-August 1994 in the Norwegian Sea, the Faroe Bank Channel (FBC), the Iceland and Irminger Basins and the Iceland Sea are presented. Special attention was given to the Iceland-Scotland overflow water (ISOW), which was identified along its pathway in the Iceland Basin; entrainment of overlying water masses was quantified by multivariate analysis (MVA) using principal component analysis (PCA) and Partial Least Squares (PLS) calibration. It was concluded that the deeper portion of the ISOW in the FBC was a mixture of about equal parts of Norwegian Sea Deep Water (NSDW) and Norwegian Sea Arctic Intermediate Water (NSAIW). The mixing development of ISOW during its descent in the Iceland Basin was analysed in three sections across the plume. In the southern section at 61°N, where the ISOW core was observed at 2300 m depth, the fraction of waters originating north of the ridge was assessed to be 54%. MVA assessed the fractional composition of the ISOW to be 21% NSDW, 22% NSAIW, 18% Northeast Atlantic Water (NEAW), 11% Modified East Icelandic Water, 25% Labrador Sea Water (LSW) and 3% North East Atlantic Deep Water. It may be noted that the fraction of NEAW is of the same volume as the NSDW. On its further path around the Reykjanes Ridge, the ISOW mixed mainly with LSW, and at 63°N in the Irminger Basin, it was warmer and fresher (θ=2.8°C, S=34.92) than at 61°N east of the ridge (θ=2.37°C, S=34.97). The most intensive mixing occurred immediately west of the FBC, probably due to the high velocity of the overflow plume through the channel, where annual velocity means exceeded 1.1 m s⁻¹. This resulted in shear instabilities towards the overlying Atlantic waters and cross-stream velocities exceeding 0.3 m s⁻¹ in the bottom boundary layer. The role of NSAIW as a component of ISOW is increasing. Being largely a product of winter convection in the

  14. Calibration diagnostic and updating strategy based on quantitative modeling of near-infrared spectral residuals.

    PubMed

    Yu, Hua; Small, Gary W

    2015-02-01

    A diagnostic and updating strategy is explored for multivariate calibrations based on near-infrared spectroscopy. For use with calibration models derived from spectral fitting or decomposition techniques, the proposed method constructs models that relate the residual concentrations remaining after a prediction to the residual spectra remaining after the information associated with the calibration model has been extracted. This residual modeling approach is evaluated for use with partial least-squares (PLS) models for predicting physiological levels of glucose in a simulated biological matrix. Residual models are constructed with both PLS and a hybrid technique based on the use of PLS scores as inputs to support vector regression. Calibration and residual models are built with both absorbance and single-beam data collected over 416 days. Effective models for the spectral residuals are built with both types of data and demonstrate the ability to diagnose and correct deviations in performance of the calibration model with time. PMID:25473807
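
    A minimal sketch of the residual-modelling strategy on synthetic data, with plain PLS used for both stages (the paper also evaluates a hybrid in which PLS scores feed a support vector regression; that variant is omitted here). Data shapes and component counts are illustrative assumptions.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(3)
      X = rng.normal(size=(120, 200))                        # spectra (sample x channel)
      y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=120)  # toy analyte level

      # Primary calibration model.
      primary = PLSRegression(n_components=5, scale=False).fit(X, y)
      y_hat = primary.predict(X).ravel()

      # Residual spectra: what is left of X after removing the part captured
      # by the primary model's latent variables (X ~ mean + T P').
      T = primary.transform(X)
      R = (X - X.mean(axis=0)) - T @ primary.x_loadings_.T
      r = y - y_hat                                          # residual concentrations

      # Residual model: relate residual spectra to residual concentrations,
      # then use it to correct the primary predictions.
      residual_model = PLSRegression(n_components=3, scale=False).fit(R, r)
      y_updated = y_hat + residual_model.predict(R).ravel()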

  15. Calibration of triaxial fluxgate gradiometer

    SciTech Connect

    Vcelak, Jan

    2006-04-15

    This paper describes simple and fast calibration procedures for a double-probe triaxial fluxgate gradiometer. The calibration procedure consists of three basic steps. In the first step, both probes are calibrated independently so as to give a constant total-field reading in every orientation. In the second step, both probes are numerically aligned so that the gradient reading is zero in a homogeneous magnetic field. The third step consists of periodic drift calibration during measurement. The results and a detailed description of each calibration step are presented and discussed. The gradiometer is finally verified by detecting a metal object in a measuring grid.

  16. Calibration effects on orbit determination

    NASA Technical Reports Server (NTRS)

    Madrid, G. A.; Winn, F. B.; Zielenbach, J. W.; Yip, K. B.

    1974-01-01

    The effects of charged particle and tropospheric calibrations on the orbit determination (OD) process are analyzed. The calibration process consisted of correcting the Doppler observables for the media effects. Calibrated and uncalibrated Doppler data sets were used to obtain OD results for past missions as well as Mariner Mars 1971. Comparisons of these Doppler reductions show the significance of the calibrations. For the MM'71 mission, the media calibrations proved effective in diminishing the overall B-plane error and reducing the Doppler residual signatures.

  17. Primary calibration in acoustics metrology

    NASA Astrophysics Data System (ADS)

    Bacelar Milhomem, T. A.; Defilippo Soares, Z. M.

    2015-01-01

    The SI unit in acoustics is realized by reciprocity calibration of laboratory standard microphones in pressure field, free field and diffuse field. Calibrations in pressure field and in free field are already consolidated, and Inmetro already performs them. Calibration in diffuse field is not yet consolidated; however, some national metrology institutes, including Inmetro, are conducting research on this subject. This paper presents the reciprocity calibration method, Inmetro's results in recent key comparisons and the research being conducted toward the implementation of reciprocity calibration in diffuse field.

  18. Calibration of a Spacecraft Gyro Quadruplet

    NASA Technical Reports Server (NTRS)

    Harman, Richard R.; BarItzhack, Itzhack Y.; Bauer, Frank H. (Technical Monitor)

    2000-01-01

    This work presents a new approach to gyro calibration where, in addition to being used for computing the attitude that is needed in the calibration process, the gyro outputs are also used as measurements in a Kalman filter. Gyro calibration, as well as calibration of other instruments, occurs in two steps. In the first step, the instrument error parameters are estimated. During the second stage, those errors are continuously removed from the gyro readings. In the classical approach to gyro calibration, the gyro outputs are used to maintain or compute body orientation rather than being used as measurements in the context of filtering. In inertial navigation, for example, gyro errors cause erroneous computation of velocity and position, and then when the latter are compared to measured velocity and position, a great portion of the computed velocity and position errors can be determined. The latter errors are then fed into a Kalman filter (KF) that uses the INS error model to infer the gyro errors. Similarly, when applying the classical approach to spacecraft (SC) attitude determination, the gyro outputs are used to compute the attitude, and then attitude measurements are used to determine the attitude errors, which, again using a KF, indicate what the gyro errors are. In the approach adopted in this work, the gyro outputs are used as angular rate measurements and are compared to estimated angular rate measurements. However, this approach requires knowledge of the angular rate. In the past, the estimated angular rate was computed in a rather simplistic way, assuming that the rate was constant. In the present work, the estimated angular rate is derived using a KF whose input can be any kind of attitude measurement; therefore the angular rate experienced by the SC can be continuously changing, and yet a good estimate of the rate, necessary for calibration, can be obtained.
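
    A schematic Kalman filter along the lines described, assuming the simplest case where the state is a near-constant gyro bias, the gyro output is the measurement of rate plus bias, and the estimated angular rate is supplied externally; here the rate estimate is simulated, and all noise levels are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(7)
      n = 2000
      true_bias = np.array([0.01, -0.02, 0.005])                 # rad/s
      true_rate = 0.1 * np.sin(np.linspace(0, 20, n))[:, None] * np.ones(3)
      gyro = true_rate + true_bias + 1e-3 * rng.normal(size=(n, 3))
      rate_est = true_rate + 5e-4 * rng.normal(size=(n, 3))      # from attitude KF

      x = np.zeros(3)                  # bias estimate (the filter state)
      P = np.eye(3)                    # state covariance
      Q = np.eye(3) * 1e-10            # bias modelled as near-constant
      R = np.eye(3) * (1e-3**2 + 5e-4**2)   # gyro noise + rate-estimate noise
      for k in range(n):
          P = P + Q                              # predict (identity dynamics)
          innov = gyro[k] - rate_est[k] - x      # measurement: gyro = rate + bias
          S = P + R                              # innovation covariance (H = I)
          K = P @ np.linalg.inv(S)
          x = x + K @ innov
          P = (np.eye(3) - K) @ P
      print(x)   # converges towards true_bias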

  19. Mercury CEM Calibration

    SciTech Connect

    John Schabron; Joseph Rovani; Mark Sanderson

    2008-02-29

    Mercury continuous emissions monitoring systems (CEMS) are being implemented in over 800 coal-fired power plant stacks. The power industry desires to conduct at least a full year of monitoring before the formal monitoring and reporting requirement begins on January 1, 2009. It is important for the industry to have available reliable, turnkey equipment from CEM vendors. Western Research Institute (WRI) is working closely with the Electric Power Research Institute (EPRI), the National Institute of Standards and Technology (NIST), and the Environmental Protection Agency (EPA) to facilitate the development of the experimental criteria for a NIST traceability protocol for dynamic elemental mercury vapor generators. The generators are used to calibrate mercury CEMs at power plant sites. The Clean Air Mercury Rule (CAMR) which was published in the Federal Register on May 18, 2005 requires that calibration be performed with NIST-traceable standards (Federal Register 2007). Traceability procedures will be defined by EPA. An initial draft traceability protocol was issued by EPA in May 2007 for comment. In August 2007, EPA issued an interim traceability protocol for elemental mercury generators (EPA 2007). The protocol is based on the actual analysis of the output of each calibration unit at several concentration levels ranging initially from about 2-40 µg/m³ elemental mercury, and in the future down to 0.2 µg/m³, and this analysis will be directly traceable to analyses by NIST. The document is divided into two separate sections. The first deals with the qualification of generators by the vendors for use in mercury CEM calibration. The second describes the procedure that the vendors must use to certify the generator models that meet the qualification specifications. The NIST traceable certification is performance based, traceable to analysis using isotope dilution inductively coupled plasma/mass spectrometry performed by NIST in Gaithersburg, MD. The

  20. Development of a partial least-squares calibration model for simultaneous determination of elements by inductively coupled plasma-atomic emission spectrometry.

    PubMed

    Chaloosi, Marzieh; Asadollahi, Seyed Azadeh; Khanchi, Ali Reza; FirozZare, Mahmoud; Mahani, Mohamad Khayatzadeh

    2009-01-01

    A partial least-squares (PLS) calibration model was developed for simultaneous multicomponent elemental analysis with inductively coupled plasma-atomic emission spectrometry (ICP-AES) in the presence of spectral interference. The best calibration model was obtained using a PLS2 algorithm. Validation was performed with an artificial test set. Multivariate calibration models were constructed using two series of synthetic mixtures (Zn, Cu, Fe; and U, V). Accuracy of the method was evaluated with unknown synthetic and real samples. PMID:19382589
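
    In practice a PLS2 calibration is simply a PLS regression with a multivariate response block, so one model predicts several analyte concentrations at once from overlapping spectra. The toy sketch below uses synthetic data; the three-element mixture only loosely mirrors the paper's Zn/Cu/Fe and U/V series.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(4)
      C = rng.uniform(0, 10, size=(60, 3))              # concentrations, 3 elements
      S = rng.normal(size=(3, 150)) ** 2                # overlapping pure spectra
      X = C @ S + 0.05 * rng.normal(size=(60, 150))     # mixture spectra + noise

      pls2 = PLSRegression(n_components=3).fit(X, C)    # PLS2: multivariate Y block
      print(cross_val_score(pls2, X, C, cv=5).mean())   # predictive R^2 by CV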

  1. Self-Calibrating Pressure Transducer

    NASA Technical Reports Server (NTRS)

    Lueck, Dale E. (Inventor)

    2006-01-01

    A self-calibrating pressure transducer is disclosed. The device uses an embedded zirconia membrane which pumps a determined quantity of oxygen into the device. The associated pressure can be determined, and thus, the transducer pressure readings can be calibrated. The zirconia membrane obtains oxygen from the surrounding environment when possible. Otherwise, an oxygen reservoir or other source is utilized. In another embodiment, a reversible fuel cell assembly is used to pump oxygen and hydrogen into the system. Since a known amount of gas is pumped across the cell, the pressure produced can be determined, and thus, the device can be calibrated. An isolation valve system is used to allow the device to be calibrated in situ. Calibration is optionally automated so that calibration can be continuously monitored. The device is preferably a fully integrated MEMS device. Since the device can be calibrated without removing it from the process, reductions in costs and down time are realized.

  2. Automatic force balance calibration system

    NASA Technical Reports Server (NTRS)

    Ferris, Alice T. (Inventor)

    1996-01-01

    A system for automatically calibrating force balances is provided. The invention uses a reference balance aligned with the balance being calibrated to provide superior accuracy while minimizing the time required to complete the calibration. The reference balance and the test balance are rigidly attached together with closely aligned moment centers. Loads placed on the system equally affect each balance, and the differences in the readings of the two balances can be used to generate the calibration matrix for the test balance. Since the accuracy of the test calibration is determined by the accuracy of the reference balance, and current technology allows for reference balances to be calibrated to within ±0.05%, the entire system has an accuracy of ±0.2%. The entire apparatus is relatively small and can be mounted on a movable base for easy transport between test locations. The system can also accept a wide variety of reference balances, thus allowing calibration under diverse load and size requirements.

  3. Automatic force balance calibration system

    NASA Astrophysics Data System (ADS)

    Ferris, Alice T.

    1995-05-01

    A system for automatically calibrating force balances is provided. The invention uses a reference balance aligned with the balance being calibrated to provide superior accuracy while minimizing the time required to complete the calibration. The reference balance and the test balance are rigidly attached together with closely aligned moment centers. Loads placed on the system equally affect each balance, and the differences in the readings of the two balances can be used to generate the calibration matrix for the test balance. Since the accuracy of the test calibration is determined by the accuracy of the reference balance, and current technology allows for reference balances to be calibrated to within ±0.05%, the entire system has an accuracy of ±0.2%. The entire apparatus is relatively small and can be mounted on a movable base for easy transport between test locations. The system can also accept a wide variety of reference balances, thus allowing calibration under diverse load and size requirements.
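
    Reading the two records above, the calibration-matrix estimation reduces to a least-squares problem once the reference balance supplies the applied loads. A sketch under that reading, with illustrative channel counts, matrices, and noise:

      import numpy as np

      rng = np.random.default_rng(5)
      n_loads, n_chan = 200, 6                       # applied loadings, channels

      true_C = rng.normal(size=(n_chan, n_chan))     # unknown test-balance matrix
      loads = rng.normal(size=(n_loads, n_chan))     # loads as read by the reference
      # Raw test-balance readings: the inverse map of true_C, plus noise.
      readings = loads @ np.linalg.inv(true_C).T \
                 + 1e-3 * rng.normal(size=(n_loads, n_chan))

      # Least squares: find C such that loads ~= readings @ C.T
      sol, *_ = np.linalg.lstsq(readings, loads, rcond=None)
      C_hat = sol.T
      print(np.abs(C_hat - true_C).max())            # recovery error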

  4. Micromagnetometer calibration for accurate orientation estimation.

    PubMed

    Zhang, Zhi-Qiang; Yang, Guang-Zhong

    2015-02-01

    Micromagnetometers, together with inertial sensors, are widely used for attitude estimation in a wide variety of applications. However, appropriate sensor calibration, which is essential to the accuracy of attitude reconstruction, must be performed in advance. Thus far, many different magnetometer calibration methods have been proposed to compensate for errors such as scale, offset, and nonorthogonality. They have also been used to obviate magnetic errors due to soft and hard iron. However, in order to combine the magnetometer with an inertial sensor for attitude reconstruction, the alignment difference between the magnetometer and the axes of the inertial sensor must be determined as well. This paper proposes a practical means of sensor error correction by simultaneous consideration of sensor errors, magnetic errors, and alignment difference. We take the summation of the offset and hard iron error as the combined bias and then amalgamate the alignment difference and all the other errors as a transformation matrix. A two-step approach is presented to determine the combined bias and transformation matrix separately. In the first step, the combined bias is determined by finding an optimal ellipsoid that can best fit the sensor readings. In the second step, the intrinsic relationships of the raw sensor readings are explored to estimate the transformation matrix as a homogeneous linear least-squares problem. Singular value decomposition is then applied to estimate both the transformation matrix and magnetic vector. The proposed method is then applied to calibrate our sensor node. Although there is no ground truth for the combined bias and transformation matrix for our node, the consistency of calibration results among different trials and less than 3° root-mean-square error for orientation estimation have been achieved, which illustrates the effectiveness of the proposed sensor calibration method for practical applications. PMID:25265625
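
    Step one, estimating the combined bias as the centre of the ellipsoid that best fits the raw readings, can be posed as a linear least-squares problem. A sketch with synthetic readings follows; the distortion matrix, bias, and noise level are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(6)
      # True field directions on the unit sphere, distorted and biased.
      u = rng.normal(size=(500, 3))
      u /= np.linalg.norm(u, axis=1, keepdims=True)
      D = np.diag([1.2, 0.9, 1.1])
      bias = np.array([0.3, -0.2, 0.5])
      m = u @ D.T + bias + 0.005 * rng.normal(size=(500, 3))

      x, y, z = m.T
      # Linear fit of the quadric x'Ax + b'x = 1, with symmetric A
      # (6 quadratic + 3 linear unknowns).
      G = np.column_stack([x*x, y*y, z*z, 2*x*y, 2*x*z, 2*y*z, x, y, z])
      p, *_ = np.linalg.lstsq(G, np.ones(len(m)), rcond=None)
      A = np.array([[p[0], p[3], p[4]],
                    [p[3], p[1], p[5]],
                    [p[4], p[5], p[2]]])
      centre = -0.5 * np.linalg.solve(A, p[6:])   # ellipsoid centre = -A^{-1}b/2
      print(centre)                               # ~ [0.3, -0.2, 0.5], the bias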

  5. [Laser-based radiometric calibration].

    PubMed

    Li, Zhi-gang; Zheng, Yu-quan

    2014-12-01

    Advances in terrestrial, aeronautical and astronautical remote sensing, plasma physics, quantitative spectroscopy and related fields place increasingly high demands on spectral radiometric calibration accuracy and have driven the development of new tunable-laser-based spectral radiometric calibration technology. Internationally, a number of national metrology institutes, in the UK, the USA, Germany and elsewhere, have built tunable-laser-based spectral radiometric calibration facilities that are traceable to cryogenic radiometers and achieve low uncertainties for the spectral responsivity calibration and characterization of detectors and remote sensing instruments. Among them, the facility for spectral irradiance and radiance responsivity calibrations using uniform sources (SIRCCUS) at the National Institute of Standards and Technology (NIST) in the USA and the Tunable Lasers in Photometry (TULIP) facility at the Physikalisch-Technische Bundesanstalt (PTB) in Germany are the most representative. Compared with lamp-monochromator systems, laser-based spectral radiometric calibration has many advantages for radiometric calibration applications, such as narrow spectral bandwidth, high wavelength accuracy and low calibration uncertainty. In this paper, the development of laser-based spectral radiometric calibration is reviewed; the structures and performances of laser-based radiometric calibration facilities, represented by those at the National Physical Laboratory (NPL) in the UK, NIST and PTB, are presented; the technical advantages of laser-based spectral radiometric calibration are analyzed; and applications of the technology are further discussed. Laser-based spectral radiometric calibration facilities can be widely used in important high-accuracy system-level radiometric calibration measurements, including radiance temperature, radiance and irradiance calibrations for space remote sensing instruments, and promote the

  6. Gaussian Mixture Models of Between-Source Variation for Likelihood Ratio Computation from Multivariate Data

    PubMed Central

    Franco-Pedroso, Javier; Ramos, Daniel; Gonzalez-Rodriguez, Joaquin

    2016-01-01

    In forensic science, trace evidence found at a crime scene and on a suspect has to be evaluated from the measurements performed on it, usually in the form of multivariate data (for example, several chemical compounds or physical characteristics). In order to assess the strength of that evidence, the likelihood ratio framework is being increasingly adopted. Several methods have been derived to obtain likelihood ratios directly from univariate or multivariate data by modelling both the variation appearing between observations (or features) coming from the same source (within-source variation) and that appearing between observations coming from different sources (between-source variation). In the widely used multivariate kernel likelihood ratio, the within-source distribution is assumed to be normally distributed and constant among different sources, and the between-source variation is modelled through a kernel density function (KDF). In order to better fit the observed distribution of the between-source variation, this paper presents a different approach in which a Gaussian mixture model (GMM) is used instead of a KDF. As will be shown, this approach provides better-calibrated likelihood ratios as measured by the log-likelihood-ratio cost (Cllr) in experiments performed on freely available forensic datasets involving different types of trace evidence: inks, glass fragments and car paints. PMID:26901680
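
    The sketch below is a deliberately simplified, score-based variant of the idea rather than the paper's feature-based formulation: one GMM is fitted to same-source comparison scores, another to different-source scores, and the likelihood ratio is their density ratio. All data and component counts are illustrative.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(8)
      same = rng.normal(0.8, 0.3, size=(500, 1))     # toy same-source scores
      diff = rng.normal(-0.5, 0.6, size=(500, 1))    # toy different-source scores

      g_same = GaussianMixture(n_components=2, random_state=0).fit(same)
      g_diff = GaussianMixture(n_components=2, random_state=0).fit(diff)

      def likelihood_ratio(score):
          """Density ratio of the two fitted mixtures at the given score."""
          s = np.atleast_2d(score)
          return np.exp(g_same.score_samples(s) - g_diff.score_samples(s))

      print(likelihood_ratio(0.7))   # LR > 1 supports the same-source hypothesis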

  7. SAR antenna calibration techniques

    NASA Technical Reports Server (NTRS)

    Carver, K. R.; Newell, A. C.

    1978-01-01

    Calibration of SAR antennas requires a measurement of gain, elevation and azimuth pattern shape, boresight error, cross-polarization levels, and phase vs. angle and frequency. For spaceborne SAR antennas of SEASAT size operating at C-band or higher, some of these measurements can become extremely difficult using conventional far-field antenna test ranges. Near-field scanning techniques offer an alternative approach and for C-band or X-band SARs, give much improved accuracy and precision as compared to that obtainable with a far-field approach.

  8. Structured light camera calibration

    NASA Astrophysics Data System (ADS)

    Garbat, P.; Skarbek, W.; Tomaszewski, M.

    2013-03-01

    The structured light camera being designed with the joint effort of the Institute of Radioelectronics and the Institute of Optoelectronics (both large units of the Warsaw University of Technology within the Faculty of Electronics and Information Technology) combines various contemporary hardware and software technologies. In hardware, it integrates a high-speed stripe projector and a stripe camera with a standard high-definition video camera. In software, it is supported by sophisticated calibration techniques that enable advanced applications such as a real-time free-viewpoint 3D viewer of moving objects or a 3D modeller for still objects.

  9. Response Surface Modeling Using Multivariate Orthogonal Functions

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; DeLoach, Richard

    2001-01-01

    A nonlinear modeling technique was used to characterize response surfaces for non-dimensional longitudinal aerodynamic force and moment coefficients, based on wind tunnel data from a commercial jet transport model. Data were collected using two experimental procedures - one based on modern design of experiments (MDOE), and one using a classical one factor at a time (OFAT) approach. The nonlinear modeling technique used multivariate orthogonal functions generated from the independent variable data as modeling functions in a least squares context to characterize the response surfaces. Model terms were selected automatically using a prediction error metric. Prediction error bounds computed from the modeling data alone were found to be a good measure of actual prediction error for prediction points within the inference space. Root-mean-square model fit error and prediction error were less than 4 percent of the mean response value in all cases. Efficacy and prediction performance of the response surface models identified from both MDOE and OFAT experiments were investigated.
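
    A compact stand-in for the procedure: build candidate monomials, orthogonalize them with a QR factorization (equivalent to sequential Gram-Schmidt), and keep each orthogonal direction whose fit-error reduction exceeds a threshold, a simplified surrogate for the prediction-error metric. Data, term set, and threshold are illustrative assumptions.

      import numpy as np
      from itertools import combinations_with_replacement

      rng = np.random.default_rng(9)
      X = rng.uniform(-1, 1, size=(200, 3))   # three factors (e.g., alpha, beta, Mach)
      y = 1 + 2 * X[:, 0] - 3 * X[:, 1] * X[:, 2] + 0.05 * rng.normal(size=200)

      # Candidate modelling functions: all monomials up to second order.
      cols, names = [np.ones(len(X))], ["1"]
      for deg in (1, 2):
          for combo in combinations_with_replacement(range(3), deg):
              cols.append(np.prod(X[:, combo], axis=1))
              names.append("x" + "*x".join(str(i) for i in combo))
      C = np.column_stack(cols)

      # Orthonormalize candidates; each direction's RSS/N reduction is then
      # just the squared projection of y onto it.
      Q = np.linalg.qr(C)[0]
      gain = (Q.T @ y) ** 2 / len(y)
      selected = np.flatnonzero(gain > 0.01)   # keep terms that pay for themselves

      # Refit the final model on the selected original terms.
      coef, *_ = np.linalg.lstsq(C[:, selected], y, rcond=None)
      print([names[j] for j in selected], np.round(coef, 2))
      # typically recovers ['1', 'x0', 'x1*x2'] with coefficients ~ [1, 2, -3]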

  10. Compensator improvement for multivariable control systems

    NASA Technical Reports Server (NTRS)

    Mitchell, J. R.; Mcdaniel, W. L., Jr.; Gresham, L. L.

    1977-01-01

    A theory and the associated numerical technique are developed for an iterative design improvement of the compensation for linear, time-invariant control systems with multiple inputs and multiple outputs. A strict constraint algorithm is used in obtaining a solution of the specified constraints of the control design. The result of the research effort is the multiple input, multiple output Compensator Improvement Program (CIP). The objective of the Compensator Improvement Program is to modify in an iterative manner the free parameters of the dynamic compensation matrix so that the system satisfies frequency domain specifications. In this exposition, the underlying principles of the multivariable CIP algorithm are presented and the practical utility of the program is illustrated with space vehicle related examples.

  11. Multivariate Markov chain modeling for stock markets

    NASA Astrophysics Data System (ADS)

    Maskawa, Jun-ichi

    2003-06-01

    We study a multivariate Markov chain model as a stochastic model of the price changes of portfolios in the framework of the mean-field approximation. The time series of price changes are coded into sequences of up and down spins according to their signs. We start with the discussion for small portfolios consisting of two stock issues. The generalization of our model to portfolios of arbitrary size is constructed by a recurrence relation. The resultant form of the joint probability of the stationary state coincides with the Gibbs measure assigned to each configuration of a spin-glass model. Through the analysis of actual portfolios, it has been shown that the synchronization of the direction of the price changes is well described by the model.
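
    The coding and estimation steps are easy to sketch for the smallest, two-stock case; synthetic correlated returns stand in for real portfolio data, and the correlation level is an illustrative choice.

      import numpy as np

      rng = np.random.default_rng(10)
      cov = [[1.0, 0.6], [0.6, 1.0]]                 # correlated two-stock returns
      returns = rng.multivariate_normal([0, 0], cov, size=5000)
      spins = (returns > 0).astype(int)              # 1 = up spin, 0 = down spin

      # Joint configuration (s1, s2) of the portfolio coded as one state 0..3.
      states = spins[:, 0] * 2 + spins[:, 1]

      # Empirical transition matrix of the joint chain.
      T = np.zeros((4, 4))
      for a, b in zip(states[:-1], states[1:]):
          T[a, b] += 1
      T /= T.sum(axis=1, keepdims=True)

      # Synchronization of price-change directions shows up as excess mass on
      # the aligned configurations (both down = 0, both up = 3).
      freq = np.bincount(states, minlength=4) / len(states)
      print(np.round(freq, 3))
      print(np.round(T, 3))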

  12. Multivariable Harmonic Balance for Central Pattern Generators

    PubMed Central

    Iwasaki, Tetsuya

    2009-01-01

    The central pattern generator (CPG) is a nonlinear oscillator formed by a group of neurons, providing a fundamental control mechanism underlying rhythmic movements in animal locomotion. We consider a class of CPGs modeled by a set of interconnected identical neurons. Based on the idea of multivariable harmonic balance, we show how the oscillation profile is related to the connectivity matrix that specifies the architecture and strengths of the interconnections. Specifically, the frequency, amplitudes, and phases are essentially encoded in terms of a pair of eigenvalue and eigenvector. This basic principle is used to estimate the oscillation profile of a given CPG model. Moreover, a systematic method is proposed for designing a CPG-based nonlinear oscillator that achieves a prescribed oscillation profile. PMID:19956774

  13. Consequences of Secondary Calibrations on Divergence Time Estimates.

    PubMed

    Schenk, John J

    2016-01-01

    Secondary calibrations (calibrations based on the results of previous molecular dating studies) are commonly applied in divergence time analyses in groups that lack fossil data; however, the consequences of applying secondary calibrations in a relaxed-clock approach are not fully understood. I tested whether applying the posterior estimate from a primary study as a prior distribution in a secondary study results in consistent age and uncertainty estimates. I compared age estimates from simulations with 100 randomly replicated secondary trees. On average, the 95% credible intervals of node ages for secondary estimates were significantly younger and narrower than primary estimates. The primary and secondary age estimates were significantly different in 97% of the replicates after Bonferroni corrections. Greater error in magnitude was associated with deeper than shallower nodes, but the opposite was found when standardized by median node age, and a significant positive relationship was determined between the number of tips/age of secondary trees and the total amount of error. When two secondary calibrated nodes were analyzed, estimates remained significantly different, and although the minimum and median estimates were associated with less error, maximum age estimates and credible interval widths had greater error. The shape of the prior also influenced error, in which applying a normal, rather than uniform, prior distribution resulted in greater error. Secondary calibrations, in summary, lead to a false impression of precision and the distribution of age estimates shift away from those that would be inferred by the primary analysis. These results suggest that secondary calibrations should not be applied as the only source of calibration in divergence time analyses that test time-dependent hypotheses until the additional error associated with secondary calibrations is more properly modeled to take into account increased uncertainty in age estimates. PMID:26824760

  14. Consequences of Secondary Calibrations on Divergence Time Estimates

    PubMed Central

    Schenk, John J.

    2016-01-01

    Secondary calibrations (calibrations based on the results of previous molecular dating studies) are commonly applied in divergence time analyses in groups that lack fossil data; however, the consequences of applying secondary calibrations in a relaxed-clock approach are not fully understood. I tested whether applying the posterior estimate from a primary study as a prior distribution in a secondary study results in consistent age and uncertainty estimates. I compared age estimates from simulations with 100 randomly replicated secondary trees. On average, the 95% credible intervals of node ages for secondary estimates were significantly younger and narrower than primary estimates. The primary and secondary age estimates were significantly different in 97% of the replicates after Bonferroni corrections. Greater error in magnitude was associated with deeper than shallower nodes, but the opposite was found when standardized by median node age, and a significant positive relationship was determined between the number of tips/age of secondary trees and the total amount of error. When two secondary calibrated nodes were analyzed, estimates remained significantly different, and although the minimum and median estimates were associated with less error, maximum age estimates and credible interval widths had greater error. The shape of the prior also influenced error, in which applying a normal, rather than uniform, prior distribution resulted in greater error. Secondary calibrations, in summary, lead to a false impression of precision and the distribution of age estimates shift away from those that would be inferred by the primary analysis. These results suggest that secondary calibrations should not be applied as the only source of calibration in divergence time analyses that test time-dependent hypotheses until the additional error associated with secondary calibrations is more properly modeled to take into account increased uncertainty in age estimates. PMID:26824760

  15. Fabrication and calibration of sensitively photoelastic biocompatible gelatin spheres

    NASA Astrophysics Data System (ADS)

    Fu, Henry; Ceniceros, Ericson; McCormick, Zephyr

    2013-11-01

    Photoelastic gelatin can be used to measure forces generated by organisms in complex environments. We describe manufacturing, storage, and calibration techniques for sensitive photoelastic gelatin spheres to be used in aqueous environments. Calibration yields a correlation between photoelastic signal and applied force to be used in future studies. Images for calibration were collected with a digital camera attached to a linear polariscope and then processed in Matlab to determine the photoelastic response of each sphere. Composition, gelatin concentration, glycerol concentration, sphere size, and temperature were all examined for their effect on signal response. The minimum detectable force and the repeatability of our calibration technique were evaluated for the same sphere, for different spheres from the same fabrication batch, and for spheres from different batches. The minimum force detectable is 10 μN or less, depending on sphere size. Factors which significantly contribute to errors in the calibration were explored in detail and minimized.

  16. Binary Classifier Calibration Using a Bayesian Non-Parametric Approach

    PubMed Central

    Naeini, Mahdi Pakdaman; Cooper, Gregory F.; Hauskrecht, Milos

    2015-01-01

    Learning probabilistic predictive models that are well calibrated is critical for many prediction and decision-making tasks in data mining. This paper presents two new non-parametric methods for calibrating outputs of binary classification models: a method based on Bayes optimal selection and a method based on Bayesian model averaging. The advantage of these methods is that they are independent of the algorithm used to learn a predictive model, and they can be applied in a post-processing step, after the model is learned. This makes them applicable to a wide variety of machine learning models and methods. These calibration methods, as well as other methods, are tested on a variety of datasets in terms of both discrimination and calibration performance. The results show the methods either outperform or are comparable in performance to the state-of-the-art calibration methods. PMID:26613068
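
    The paper's Bayes-optimal-selection and model-averaging calibrators are not reproduced here; the sketch below shows only the post-processing workflow they plug into, with scikit-learn's isotonic regression as a stand-in calibrator. Dataset and split choices are illustrative.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.isotonic import IsotonicRegression
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=3000, random_state=0)
      X_fit, X_cal, y_fit, y_cal = train_test_split(
          X, y, test_size=0.5, random_state=0)

      model = LogisticRegression(max_iter=1000).fit(X_fit, y_fit)
      raw = model.predict_proba(X_cal)[:, 1]        # uncalibrated scores

      # Post-processing step: learn a monotone map from scores to probabilities
      # on held-out data, independent of how the underlying model was trained.
      calibrator = IsotonicRegression(out_of_bounds="clip").fit(raw, y_cal)
      calibrated = calibrator.predict(raw)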

  17. Applied Stratigraphy

    NASA Astrophysics Data System (ADS)

    Lucas, Spencer G.

    Stratigraphy is a cornerstone of the Earth sciences. The study of layered rocks, especially their age determination and correlation, which are integral parts of stratigraphy, are key to fields as diverse as geoarchaeology and tectonics. In the Anglophile history of geology, in the early 1800s, the untutored English surveyor William Smith was the first practical stratigrapher, constructing a geological map of England based on his own applied stratigraphy. Smith has, thus, been seen as the first “industrial stratigrapher,” and practical applications of stratigraphy have since been essential to most of the extractive industries from mining to petroleum. Indeed, gasoline is in your automobile because of a tremendous use of applied stratigraphy in oil exploration, especially during the latter half of the twentieth century. Applied stratigraphy, thus, is a subject of broad interest to Earth scientists.

  18. Multivariate singular spectrum analysis and the road to phase synchronization

    NASA Astrophysics Data System (ADS)

    Groth, Andreas; Ghil, Michael

    2010-05-01

    Singular spectrum analysis (SSA) and multivariate SSA (M-SSA) are based on the classical work of Kosambi (1943), Loeve (1945) and Karhunen (1946) and are closely related to principal component analysis. They have been introduced into information theory by Bertero, Pike and co-workers (1982, 1984) and into dynamical systems analysis by Broomhead and King (1986a,b). Ghil, Vautard and associates have applied SSA and M-SSA to the temporal and spatio-temporal analysis of short and noisy time series in climate dynamics and other fields in the geosciences since the late 1980s. M-SSA provides insight into the unknown or partially known dynamics of the underlying system by decomposing the delay-coordinate phase space of a given multivariate time series into a set of data-adaptive orthonormal components. These components can be classified essentially into trends, oscillatory patterns and noise, and allow one to reconstruct a robust "skeleton" of the dynamical system's structure. For an overview we refer to Ghil et al. (Rev. Geophys., 2002). In this talk, we present M-SSA in the context of synchronization analysis and illustrate its ability to unveil information about the mechanisms behind the adjustment of rhythms in coupled dynamical systems. The focus of the talk is on the special case of phase synchronization between coupled chaotic oscillators (Rosenblum et al., PRL, 1996). Several ways of measuring phase synchronization are in use, and the robust definition of a reasonable phase for each oscillator is critical in each of them. We illustrate here the advantages of M-SSA in the automatic identification of oscillatory modes and in drawing conclusions about the transition to phase synchronization. Without using any a priori definition of a suitable phase, we show that M-SSA is able to detect phase synchronization in a chain of coupled chaotic oscillators (Osipov et al., PRE, 1996). Recently, Muller et al. (PRE, 2005) and Allefeld et al. (Intl. J. Bif. Chaos, 2007) have
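
    A minimal M-SSA sketch on synthetic data: lagged copies of each channel are stacked into a joint trajectory matrix whose covariance eigendecomposition yields the data-adaptive components; a rhythm shared across channels shows up as a near-degenerate pair of leading eigenvalues. Window length and channel count are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(11)
      t = np.arange(1000)
      # Two noisy channels sharing one oscillation.
      common = np.sin(2 * np.pi * t / 37)
      X = np.column_stack([common + 0.5 * rng.normal(size=t.size),
                           0.8 * common + 0.5 * rng.normal(size=t.size)])

      M = 40                                     # embedding window
      N, D = X.shape
      K = N - M + 1
      # Joint (augmented) trajectory matrix: K rows, D*M lagged coordinates.
      A = np.column_stack([X[i:i + K, d] for d in range(D) for i in range(M)])
      A -= A.mean(axis=0)

      cov = A.T @ A / K
      eigval, eigvec = np.linalg.eigh(cov)
      eigval, eigvec = eigval[::-1], eigvec[:, ::-1]   # descending order
      # A near-equal leading pair of eigenvalues flags the shared oscillation.
      print(eigval[:4] / eigval.sum())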

  19. Principal Component Noise Filtering for NAST-I Radiometric Calibration

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Smith, William L., Sr.

    2011-01-01

    The National Polar-orbiting Operational Environmental Satellite System (NPOESS) Airborne Sounder Testbed-Interferometer (NAST-I) instrument is a high-resolution scanning interferometer that measures emitted thermal radiation between 3.3 and 18 microns. The NAST-I radiometric calibration is achieved using internal blackbody calibration references at ambient and hot temperatures. In this paper, we introduce a refined calibration technique that utilizes a principal component (PC) noise filter to compensate for instrument distortions and artifacts and thereby further improve the absolute radiometric calibration accuracy. To test the procedure and estimate the PC filter noise performance, we form dependent and independent test samples using odd and even sets of blackbody spectra. To determine the optimal number of eigenvectors, the PC filter algorithm is applied to both dependent and independent blackbody spectra with a varying number of eigenvectors. The optimal number of PCs is selected so that the total root-mean-square (RMS) error is minimized. To estimate the filter noise performance, we examine four different scenarios: PC filtering applied to both dependent and independent datasets, to the dependent calibration data only, to the independent data only, and no PC filtering. The independent blackbody radiances are predicted for each case and comparisons are made. The results show significant reduction in noise in the final calibrated radiances with the implementation of the PC filtering algorithm.
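
    The PC filter itself is a truncated principal-component reconstruction. The sketch below mimics the odd/even blackbody procedure on synthetic spectra to choose the number of retained components; the data, noise level, and selection range are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(12)
      wav = np.linspace(0, 6, 800)                  # toy wavenumber grid
      level = np.linspace(0.8, 1.2, 400)[:, None]   # slowly varying "temperature"
      spectra = level * np.sin(wav) + level**2 * np.cos(2 * wav) \
                + 0.02 * rng.normal(size=(400, 800))   # toy blackbody spectra

      dep, ind = spectra[0::2], spectra[1::2]       # odd/even split
      mean = dep.mean(axis=0)
      _, _, Vt = np.linalg.svd(dep - mean, full_matrices=False)

      def pc_filter(S, k):
          """Project spectra onto the k leading PCs of the dependent set."""
          P = Vt[:k]
          return mean + (S - mean) @ P.T @ P

      # Choose k by minimizing the RMS difference between the filtered
      # dependent spectra and their independently noisy even-set counterparts.
      errs = [np.sqrt(((pc_filter(dep, k) - ind) ** 2).mean())
              for k in range(1, 20)]
      k_opt = int(np.argmin(errs)) + 1              # typically ~2 here
      filtered = pc_filter(spectra, k_opt)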

  20. Aspects of model selection in multivariate analyses

    SciTech Connect

    Picard, R.

    1982-01-01

    Analysis of data sets that involve large numbers of variables usually entails some type of model fitting and data reduction. In regression problems, a fitted model that is obtained by a selection process can be difficult to evaluate because of optimism induced by the choice mechanism. Problems in areas such as discriminant analysis, calibration, and the like often lead to similar difficulties. The preceding sections reviewed some of the general ideas behind assessment of regression-type predictors and illustrated how they can be easily incorporated into a standard data analysis.