Multivariate calibration applied to the quantitative analysis of infrared spectra
Haaland, D.M.
1991-01-01
Multivariate calibration methods are very useful for improving the precision, accuracy, and reliability of quantitative spectral analyses. Spectroscopists can more effectively use these sophisticated statistical tools if they have a qualitative understanding of the techniques involved. A qualitative picture of the factor analysis multivariate calibration methods of partial least squares (PLS) and principal component regression (PCR) is presented using infrared calibrations based upon spectra of phosphosilicate glass thin films on silicon wafers. Comparisons of the relative prediction abilities of four different multivariate calibration methods are given based on Monte Carlo simulations of spectral calibration and prediction data. The success of multivariate spectral calibrations is demonstrated for several quantitative infrared studies. The infrared absorption and emission spectra of thin-film dielectrics used in the manufacture of microelectronic devices demonstrate rapid, nondestructive at-line and in-situ analyses using PLS calibrations. Finally, the application of multivariate spectral calibrations to reagentless analysis of blood is presented. We have found that the determination of glucose in whole blood taken from diabetics can be precisely monitored from the PLS calibration of either mind- or near-infrared spectra of the blood. Progress toward the non-invasive determination of glucose levels in diabetics is an ultimate goal of this research. 13 refs., 4 figs.
Multivariate Regression with Calibration*
Liu, Han; Wang, Lie; Zhao, Tuo
2014-01-01
We propose a new method named calibrated multivariate regression (CMR) for fitting high dimensional multivariate regression models. Compared to existing methods, CMR calibrates the regularization for each regression task with respect to its noise level so that it is simultaneously tuning insensitive and achieves an improved finite-sample performance. Computationally, we develop an efficient smoothed proximal gradient algorithm which has a worst-case iteration complexity O(1/ε), where ε is a pre-specified numerical accuracy. Theoretically, we prove that CMR achieves the optimal rate of convergence in parameter estimation. We illustrate the usefulness of CMR by thorough numerical simulations and show that CMR consistently outperforms other high dimensional multivariate regression methods. We also apply CMR on a brain activity prediction problem and find that CMR is as competitive as the handcrafted model created by human experts. PMID:25620861
Savescu, Roxana Florenta; Laba, Marian
2016-06-01
This paper highlights the statistical methodology used in a dissection experiment carried out in Romania to calibrate and standardize two classification devices, OptiGrade PRO (OGP) and Fat-o-Meat'er (FOM). One hundred forty-five carcasses were measured using the two probes and dissected according to the European reference method. To derive prediction formulas for each device, multiple linear regression analysis was performed on the relationship between the reference lean meat percentage and the back fat and muscle thicknesses, using the ordinary least squares technique. The root mean squared error of prediction calculated using the leave-one-out cross validation met European Commission (EC) requirements. The application of the new prediction equations reduced the gap between the lean meat percentage measured with the OGP and FOM from 2.43% (average for the period Q3/2006-Q2/2008) to 0.10% (average for the period Q3/2008-Q4/2014), providing the basis for a fair payment system for the pig producers. PMID:26835835
NASA Astrophysics Data System (ADS)
Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.
2013-09-01
Four simple, accurate and specific methods were developed and validated for the simultaneous estimation of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in commercial tablets. The derivative spectrophotometric methods include Derivative Ratio Zero Crossing (DRZC) and Double Divisor Ratio Spectra-Derivative Spectrophotometry (DDRS-DS) methods, while the multivariate calibrations used are Principal Component Regression (PCR) and Partial Least Squares (PLSs). The proposed methods were applied successfully in the determination of the drugs in laboratory-prepared mixtures and in commercial pharmaceutical preparations. The validity of the proposed methods was assessed using the standard addition technique. The linearity of the proposed methods is investigated in the range of 2-32, 4-44 and 2-20 μg/mL for AML, VAL and HCT, respectively.
Darwish, Hany W; Hassan, Said A; Salem, Maissa Y; El-Zeany, Badr A
2013-09-01
Four simple, accurate and specific methods were developed and validated for the simultaneous estimation of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in commercial tablets. The derivative spectrophotometric methods include Derivative Ratio Zero Crossing (DRZC) and Double Divisor Ratio Spectra-Derivative Spectrophotometry (DDRS-DS) methods, while the multivariate calibrations used are Principal Component Regression (PCR) and Partial Least Squares (PLSs). The proposed methods were applied successfully in the determination of the drugs in laboratory-prepared mixtures and in commercial pharmaceutical preparations. The validity of the proposed methods was assessed using the standard addition technique. The linearity of the proposed methods is investigated in the range of 2-32, 4-44 and 2-20 μg/mL for AML, VAL and HCT, respectively.
Adaptable Multivariate Calibration Models for Spectral Applications
THOMAS,EDWARD V.
1999-12-20
Multivariate calibration techniques have been used in a wide variety of spectroscopic situations. In many of these situations spectral variation can be partitioned into meaningful classes. For example, suppose that multiple spectra are obtained from each of a number of different objects wherein the level of the analyte of interest varies within each object over time. In such situations the total spectral variation observed across all measurements has two distinct general sources of variation: intra-object and inter-object. One might want to develop a global multivariate calibration model that predicts the analyte of interest accurately both within and across objects, including new objects not involved in developing the calibration model. However, this goal might be hard to realize if the inter-object spectral variation is complex and difficult to model. If the intra-object spectral variation is consistent across objects, an effective alternative approach might be to develop a generic intra-object model that can be adapted to each object separately. This paper contains recommendations for experimental protocols and data analysis in such situations. The approach is illustrated with an example involving the noninvasive measurement of glucose using near-infrared reflectance spectroscopy. Extensions to calibration maintenance and calibration transfer are discussed.
Exploration of new multivariate spectral calibration algorithms.
Van Benthem, Mark Hilary; Haaland, David Michael; Melgaard, David Kennett; Martin, Laura Elizabeth; Wehlburg, Christine Marie; Pell, Randy J.; Guenard, Robert D.
2004-03-01
A variety of multivariate calibration algorithms for quantitative spectral analyses were investigated and compared, and new algorithms were developed in the course of this Laboratory Directed Research and Development project. We were able to demonstrate the ability of the hybrid classical least squares/partial least squares (CLSIPLS) calibration algorithms to maintain calibrations in the presence of spectrometer drift and to transfer calibrations between spectrometers from the same or different manufacturers. These methods were found to be as good or better in prediction ability as the commonly used partial least squares (PLS) method. We also present the theory for an entirely new class of algorithms labeled augmented classical least squares (ACLS) methods. New factor selection methods are developed and described for the ACLS algorithms. These factor selection methods are demonstrated using near-infrared spectra collected from a system of dilute aqueous solutions. The ACLS algorithm is also shown to provide improved ease of use and better prediction ability than PLS when transferring calibrations between near-infrared calibrations from the same manufacturer. Finally, simulations incorporating either ideal or realistic errors in the spectra were used to compare the prediction abilities of the new ACLS algorithm with that of PLS. We found that in the presence of realistic errors with non-uniform spectral error variance across spectral channels or with spectral errors correlated between frequency channels, ACLS methods generally out-performed the more commonly used PLS method. These results demonstrate the need for realistic error structure in simulations when the prediction abilities of various algorithms are compared. The combination of equal or superior prediction ability and the ease of use of the ACLS algorithms make the new ACLS methods the preferred algorithms to use for multivariate spectral calibrations.
Daszykowski, M; Wrobel, M S; Czarnik-Matusewicz, H; Walczak, B
2008-11-01
Near-infrared reflectance spectroscopy (NIRS) is often applied when a rapid quantification of major components in feed is required. This technique is preferred over the other analytical techniques due to the relatively few requirements concerning sample preparations, high efficiency and low costs of the analysis. In this study, NIRS was used to control the content of crude protein, fat and fibre in extracted rapeseed meal which was produced in the local industrial crushing plant. For modelling the NIR data, the partial least squares approach (PLS) was used. The satisfactory prediction errors were equal to 1.12, 0.13 and 0.45 (expressed in percentages referring to dry mass) for crude protein, fat and fibre content, respectively. To point out the key spectral regions which are important for modelling, uninformative variable elimination PLS, PLS with jackknife-based variable elimination, PLS with bootstrap-based variable elimination and the orthogonal partial least squares approach were compared for the data studied. They enabled an easier interpretation of the calibration models in terms of absorption bands and led to similar predictions for test samples compared to the initial models.
Insights into multivariate calibration using errors-in-variables modeling
Thomas, E.V.
1996-09-01
A {ital q}-vector of responses, y, is related to a {ital p}-vector of explanatory variables, x, through a causal linear model. In analytical chemistry, y and x might represent the spectrum and associated set of constituent concentrations of a multicomponent sample which are related through Beer`s law. The model parameters are estimated during a calibration process in which both x and y are available for a number of observations (samples/specimens) which are collectively referred to as the calibration set. For new observations, the fitted calibration model is then used as the basis for predicting the unknown values of the new x`s (concentrations) form the associated new y`s (spectra) in the prediction set. This prediction procedure can be viewed as parameter estimation in an errors-in-variables (EIV) framework. In addition to providing a basis for simultaneous inference about the new x`s, consideration of the EIV framework yields a number of insights relating to the design and execution of calibration studies. A particularly interesting result is that predictions of the new x`s for individual samples can be improved by using seemingly unrelated information contained in the y`s from the other members of the prediction set. Furthermore, motivated by this EIV analysis, this result can be extended beyond the causal modeling context to a broader range of applications of multivariate calibration which involve the use of principal components regression.
Local Strategy Combined with a Wavelength Selection Method for Multivariate Calibration
Chang, Haitao; Zhu, Lianqing; Lou, Xiaoping; Meng, Xiaochen; Guo, Yangkuan; Wang, Zhongyu
2016-01-01
One of the essential factors influencing the prediction accuracy of multivariate calibration models is the quality of the calibration data. A local regression strategy, together with a wavelength selection approach, is proposed to build the multivariate calibration models based on partial least squares regression. The local algorithm is applied to create a calibration set of spectra similar to the spectrum of an unknown sample; the synthetic degree of grey relation coefficient is used to evaluate the similarity. A wavelength selection method based on simple-to-use interactive self-modeling mixture analysis minimizes the influence of noisy variables, and the most informative variables of the most similar samples are selected to build the multivariate calibration model based on partial least squares regression. To validate the performance of the proposed method, ultraviolet-visible absorbance spectra of mixed solutions of food coloring analytes in a concentration range of 20–200 µg/mL is measured. Experimental results show that the proposed method can not only enhance the prediction accuracy of the calibration model, but also greatly reduce its complexity. PMID:27271636
Local Strategy Combined with a Wavelength Selection Method for Multivariate Calibration.
Chang, Haitao; Zhu, Lianqing; Lou, Xiaoping; Meng, Xiaochen; Guo, Yangkuan; Wang, Zhongyu
2016-01-01
One of the essential factors influencing the prediction accuracy of multivariate calibration models is the quality of the calibration data. A local regression strategy, together with a wavelength selection approach, is proposed to build the multivariate calibration models based on partial least squares regression. The local algorithm is applied to create a calibration set of spectra similar to the spectrum of an unknown sample; the synthetic degree of grey relation coefficient is used to evaluate the similarity. A wavelength selection method based on simple-to-use interactive self-modeling mixture analysis minimizes the influence of noisy variables, and the most informative variables of the most similar samples are selected to build the multivariate calibration model based on partial least squares regression. To validate the performance of the proposed method, ultraviolet-visible absorbance spectra of mixed solutions of food coloring analytes in a concentration range of 20-200 µg/mL is measured. Experimental results show that the proposed method can not only enhance the prediction accuracy of the calibration model, but also greatly reduce its complexity. PMID:27271636
Generalized error-dependent prediction uncertainty in multivariate calibration.
Allegrini, Franco; Wentzell, Peter D; Olivieri, Alejandro C
2016-01-15
Most of the current expressions used to calculate figures of merit in multivariate calibration have been derived assuming independent and identically distributed (iid) measurement errors. However, it is well known that this condition is not always valid for real data sets, where the existence of many external factors can lead to correlated and/or heteroscedastic noise structures. In this report, the influence of the deviations from the classical iid paradigm is analyzed in the context of error propagation theory. New expressions have been derived to calculate sample dependent prediction standard errors under different scenarios. These expressions allow for a quantitative study of the influence of the different sources of instrumental error affecting the system under analysis. Significant differences are observed when the prediction error is estimated in each of the studied scenarios using the most popular first-order multivariate algorithms, under both simulated and experimental conditions.
Dinç, Erdal; Kanbur, Murat
2002-05-15
Four multivariate calibration-prediction techniques, classical least-squares, inverse least-squares, principal component regression and partial least-squares regression were applied to the spectrophotometric multicomponent analysis of a veterinary formulation containing oxfendazole (OXF) and oxyclozanide (OXC) without any separation step. The multivariate calibrations were constructed by measuring the absorbance values at 14 points in the 285-350 nm wavelength range and by using the training set of standard mixtures containing OXF and OXC in the different compositions. The validity of building multivariate calibrations was checked by using the synthetic mixtures of both drugs. The multivariate calibration models were successfully applied to the spectrophotometric determination of OXF and OXC in laboratory prepared mixtures and a veterinary formulation. The results obtained were statistically compared with each other.
Comparison of multivariate calibration methods for quantitative spectral analysis
Thomas, E.V.; Haaland, D.M. )
1990-05-15
The quantitative prediction abilities of four multivariate calibration methods for spectral analyses are compared by using extensive Monte Carlo simulations. The calibration methods compared include inverse least-squares (ILS), classical least-squares (CLS), partial least-squares (PLS), and principal component regression (PCR) methods. ILS is a frequency-limited method while the latter three are capable of full-spectrum calibration. The simulations were performed assuming Beer's law holds and that spectral measurement errors and concentration errors associated with the reference method are normally distributed. Eight different factors that could affect the relative performance of the calibration methods were varied in a two-level, eight-factor experimental design in order to evaluate their effect on the prediction abilities of the four methods. It is found that each of the three full-spectrum methods has its range of superior performance. The frequency-limited ILS method was never the best method, although in the presence of relatively large concentration errors it sometimes yields comparable analysis precision to the full-spectrum methods for the major spectral component. The importance of each factor in the absolute and relative performances of the four methods is compared.
Multivariate analysis applied to tomato hybrid production.
Balasch, S; Nuez, F; Palomares, G; Cuartero, J
1984-11-01
Twenty characters were measured on 60 tomato varieties cultivated in the open-air and in polyethylene plastic-house. Data were analyzed by means of principal components, factorial discriminant methods, Mahalanobis D(2) distances and principal coordinate techniques. Factorial discriminant and Mahalanobis D(2) distances methods, both of which require collecting data plant by plant, lead to similar conclusions as the principal components method that only requires taking data by plots. Characters that make up the principal components in both environments studied are the same, although the relative importance of each one of them varies within the principal components. By combining information supplied by multivariate analysis with the inheritance mode of characters, crossings among cultivars can be experimented with that will produce heterotic hybrids showing characters within previously established limits.
Ultrasonic sensor for predicting sugar concentration using multivariate calibration.
Krause, D; Hussein, W B; Hussein, M A; Becker, T
2014-08-01
This paper presents a multivariate regression method for the prediction of maltose concentration in aqueous solutions. For this purpose, time and frequency domain of ultrasonic signals are analyzed. It is shown, that the prediction of concentration at different temperatures is possible by using several multivariate regression models for individual temperature points. Combining these models by a linear approximation of each coefficient over temperature results in a unified solution, which takes temperature effects into account. The benefit of the proposed method is the low processing time required for analyzing online signals as well as the non-invasive sensor setup which can be used in pipelines. Also the ultrasonic signal sections used in the presented investigation were extracted out of buffer reflections which remain primarily unaffected by bubble and particle interferences. Model calibration was performed in order to investigate the feasibility of online monitoring in fermentation processes. The temperature range investigated was from 10 °C to 21 °C. This range fits to fermentation processes used in the brewing industry. This paper describes the processing of ultrasonic signals for regression, the model evaluation as well as the input variable selection. The statistical approach used for creating the final prediction solution was partial least squares (PLS) regression validated by cross validation. The overall minimum root mean squared error achieved was 0.64 g/100 g.
CHAMBERS,WILLIAM B.; HAALAND,DAVID M.; KEENAN,MICHAEL R.; MELGAARD,DAVID K.
1999-10-01
The advent of inductively coupled plasma-atomic emission spectrometers (ICP-AES) equipped with charge-coupled-device (CCD) detector arrays allows the application of multivariate calibration methods to the quantitative analysis of spectral data. We have applied classical least squares (CLS) methods to the analysis of a variety of samples containing up to 12 elements plus an internal standard. The elements included in the calibration models were Ag, Al, As, Au, Cd, Cr, Cu, Fe, Ni, Pb, Pd, and Se. By performing the CLS analysis separately in each of 46 spectral windows and by pooling the CLS concentration results for each element in all windows in a statistically efficient manner, we have been able to significantly improve the accuracy and precision of the ICP-AES analyses relative to the univariate and single-window multivariate methods supplied with the spectrometer. This new multi-window CLS (MWCLS) approach simplifies the analyses by providing a single concentration determination for each element from all spectral windows. Thus, the analyst does not have to perform the tedious task of reviewing the results from each window in an attempt to decide the correct value among discrepant analyses in one or more windows for each element. Furthermore, it is not necessary to construct a spectral correction model for each window prior to calibration and analysis: When one or more interfering elements was present, the new MWCLS method was able to reduce prediction errors for a selected analyte by more than 2 orders of magnitude compared to the worst case single-window multivariate and univariate predictions. The MWCLS detection limits in the presence of multiple interferences are 15 rig/g (i.e., 15 ppb) or better for each element. In addition, errors with the new method are only slightly inflated when only a single target element is included in the calibration (i.e., knowledge of all other elements is excluded during calibration). The MWCLS method is found to be vastly
A Bayesian, multivariate calibration for Globigerinoides ruber Mg/Ca
NASA Astrophysics Data System (ADS)
Khider, D.; Huerta, G.; Jackson, C.; Stott, L. D.; Emile-Geay, J.
2015-09-01
The use of Mg/Ca in marine carbonates as a paleothermometer has been challenged by observations that implicate salinity as a contributing influence on Mg incorporation into biotic calcite and that dissolution at the sea-floor alters the original Mg/Ca. Yet, these factors have not yet been incorporated into a single calibration model. We introduce a new Bayesian calibration for Globigerinoides ruber Mg/Ca based on 186 globally distributed core top samples, which explicitly takes into account the effect of temperature, salinity, and dissolution on this proxy. Our reported temperature, salinity, and dissolution (here expressed as deep-water ΔCO32-) sensitivities are (±2σ) 8.7±0.9%/°C, 3.9±1.2%/psu, and 3.3±1.3%/μmol.kg-1 below a critical threshold of 21 μmol/kg in good agreement with previous culturing and core-top studies. We then perform a sensitivity experiment on a published record from the western tropical Pacific to investigate the bias introduced by these secondary influences on the interpretation of past temperature variability. This experiment highlights the potential for misinterpretations of past oceanographic changes when the secondary influences of salinity and dissolution are not accounted for. Multiproxy approaches could potentially help deconvolve the contributing influences but this awaits better characterization of the spatio-temporal relationship between salinity and δ18Osw over millennial and orbital timescales.
Poláček, Roman; Májek, Pavel; Hroboňová, Katarína; Sádecká, Jana
2016-04-01
Fluoxetine is the most prescribed antidepressant chiral drug worldwide. Its enantiomers have a different duration of serotonin inhibition. A novel simple and rapid method for determination of the enantiomeric composition of fluoxetine in pharmaceutical pills is presented. Specifically, emission, excitation, and synchronous fluorescence techniques were employed to obtain the spectral data, which with multivariate calibration methods, namely, principal component regression (PCR) and partial least square (PLS), were investigated. The chiral recognition of fluoxetine enantiomers in the presence of β-cyclodextrin was based on diastereomeric complexes. The results of the multivariate calibration modeling indicated good prediction abilities. The obtained results for tablets were compared with those from chiral HPLC and no significant differences are shown by Fisher's (F) test and Student's t-test. The smallest residuals between reference or nominal values and predicted values were achieved by multivariate calibration of synchronous fluorescence spectral data. This conclusion is supported by calculated values of the figure of merit. PMID:26910793
Poláček, Roman; Májek, Pavel; Hroboňová, Katarína; Sádecká, Jana
2016-04-01
Fluoxetine is the most prescribed antidepressant chiral drug worldwide. Its enantiomers have a different duration of serotonin inhibition. A novel simple and rapid method for determination of the enantiomeric composition of fluoxetine in pharmaceutical pills is presented. Specifically, emission, excitation, and synchronous fluorescence techniques were employed to obtain the spectral data, which with multivariate calibration methods, namely, principal component regression (PCR) and partial least square (PLS), were investigated. The chiral recognition of fluoxetine enantiomers in the presence of β-cyclodextrin was based on diastereomeric complexes. The results of the multivariate calibration modeling indicated good prediction abilities. The obtained results for tablets were compared with those from chiral HPLC and no significant differences are shown by Fisher's (F) test and Student's t-test. The smallest residuals between reference or nominal values and predicted values were achieved by multivariate calibration of synchronous fluorescence spectral data. This conclusion is supported by calculated values of the figure of merit.
Improved Quantitative Analysis of Ion Mobility Spectrometry by Chemometric Multivariate Calibration
Fraga, Carlos G.; Kerr, Dayle; Atkinson, David A.
2009-09-01
Traditional peak-area calibration and the multivariate calibration methods of principle component regression (PCR) and partial least squares (PLS), including unfolded PLS (U-PLS) and multi-way PLS (N-PLS), were evaluated for the quantification of 2,4,6-trinitrotoluene (TNT) and cyclo-1,3,5-trimethylene-2,4,6-trinitramine (RDX) in Composition B samples analyzed by temperature step desorption ion mobility spectrometry (TSD-IMS). The true TNT and RDX concentrations of eight Composition B samples were determined by high performance liquid chromatography with UV absorbance detection. Most of the Composition B samples were found to have distinct TNT and RDX concentrations. Applying PCR and PLS on the exact same IMS spectra used for the peak-area study improved quantitative accuracy and precision approximately 3 to 5 fold and 2 to 4 fold, respectively. This in turn improved the probability of correctly identifying Composition B samples based upon the estimated RDX and TNT concentrations from 11% with peak area to 44% and 89% with PLS. This improvement increases the potential of obtaining forensic information from IMS analyzers by providing some ability to differentiate or match Composition B samples based on their TNT and RDX concentrations.
Multivariate Calibration Models for Sorghum Composition using Near-Infrared Spectroscopy
Wolfrum, E.; Payne, C.; Stefaniak, T.; Rooney, W.; Dighe, N.; Bean, B.; Dahlberg, J.
2013-03-01
NREL developed calibration models based on near-infrared (NIR) spectroscopy coupled with multivariate statistics to predict compositional properties relevant to cellulosic biofuels production for a variety of sorghum cultivars. A robust calibration population was developed in an iterative fashion. The quality of models developed using the same sample geometry on two different types of NIR spectrometers and two different sample geometries on the same spectrometer did not vary greatly.
Dai, J; Yaylayan, V A; Raghavan, G S; Parè, J R; Liu, Z
2001-03-01
Two-component and multivariate calibration techniques were developed for the simultaneous quantification of total azadirachtin-related limonoids (AZRL) and simple terpenoids (ST) in neem extracts using vanillin assay. A mathematical modeling method was also developed to aid in the analysis of the spectra and to simplify the calculations. The mathematical models were used in a two-component calibration (using azadirachtin and limonene as standards) for samples containing mainly limonoids and terpenoids (such as neem seed kernel extracts). However, for the extracts from other parts of neem, such as neem leaf, a multivariate calibration was necessary to eliminate the possible interference from phenolics and other components in order to obtain the accurate content of AZRL and ST. It was demonstrated that the accuracy of the vanillin assay in predicting the content of azadirachtin in a model mixture containing limonene (25% w/w) can be improved from 50% overestimation to 95% accuracy using the two-component calibration, while predicting the content of limonene with 98% accuracy. Both calibration techniques were applied to estimate the content of AZRL and ST in different parts of the neem plant. The results of this study indicated that the relative content of limonoids was much higher than that of the terpenoids in all parts of the neem plant studied. PMID:11312830
Ensemble preprocessing of near-infrared (NIR) spectra for multivariate calibration.
Xu, Lu; Zhou, Yan-Ping; Tang, Li-Juan; Wu, Hai-Long; Jiang, Jian-Hui; Shen, Guo-Li; Yu, Ru-Qin
2008-06-01
Preprocessing of raw near-infrared (NIR) spectral data is indispensable in multivariate calibration when the measured spectra are subject to significant noises, baselines and other undesirable factors. However, due to the lack of sufficient prior information and an incomplete knowledge of the raw data, NIR spectra preprocessing in multivariate calibration is still trial and error. How to select a proper method depends largely on both the nature of the data and the expertise and experience of the practitioners. This might limit the applications of multivariate calibration in many fields, where researchers are not very familiar with the characteristics of many preprocessing methods unique in chemometrics and have difficulties to select the most suitable methods. Another problem is many preprocessing methods, when used alone, might degrade the data in certain aspects or lose some useful information while improving certain qualities of the data. In order to tackle these problems, this paper proposes a new concept of data preprocessing, ensemble preprocessing method, where partial least squares (PLSs) models built on differently preprocessed data are combined by Monte Carlo cross validation (MCCV) stacked regression. Little or no prior information of the data and expertise are required. Moreover, fusion of complementary information obtained by different preprocessing methods often leads to a more stable and accurate calibration model. The investigation of two real data sets has demonstrated the advantages of the proposed method.
Wolfrum, E. J.; Sluiter, A. D.
2009-01-01
We have studied rapid calibration models to predict the composition of a variety of biomass feedstocks by correlating near-infrared (NIR) spectroscopic data to compositional data produced using traditional wet chemical analysis techniques. The rapid calibration models are developed using multivariate statistical analysis of the spectroscopic and wet chemical data. This work discusses the latest versions of the NIR calibration models for corn stover feedstock and dilute-acid pretreated corn stover. Measures of the calibration precision and uncertainty are presented. No statistically significant differences (p = 0.05) are seen between NIR calibration models built using different mathematical pretreatments. Finally, two common algorithms for building NIR calibration models are compared; no statistically significant differences (p = 0.05) are seen for the major constituents glucan, xylan, and lignin, but the algorithms did produce different predictions for total extractives. A single calibration model combining the corn stover feedstock and dilute-acid pretreated corn stover samples gave less satisfactory predictions than the separate models.
Kay, D; McDonald, A
1983-01-01
This paper reports on the calibration and use of a multiple regression model designed to predict concentrations of Escherichia coli and total coliforms in two upland British impoundments. The multivariate approach has improved predictive capability over previous univariate linear models because it includes predictor variables for the timing and magnitude of hydrological input to the reservoirs and physiochemical parameters of water quality. The significance of these results for catchment management research is considered. PMID:6639016
Hernandez, Silvia R; Kergaravat, Silvina V; Pividori, Maria Isabel
2013-03-15
An approach based on the electrochemical detection of the horseradish peroxidase enzymatic reaction by means of square wave voltammetry was developed for the determination of phenolic compounds in environmental samples. First, a systematic optimization procedure of three factors involved in the enzymatic reaction was carried out using response surface methodology through a central composite design. Second, the enzymatic electrochemical detection coupled with a multivariate calibration method based in the partial least-squares technique was optimized for the determination of a mixture of five phenolic compounds, i.e. phenol, p-aminophenol, p-chlorophenol, hydroquinone and pyrocatechol. The calibration and validation sets were built and assessed. In the calibration model, the LODs for phenolic compounds oscillated from 0.6 to 1.4 × 10(-6) mol L(-1). Recoveries for prediction samples were higher than 85%. These compounds were analyzed simultaneously in spiked samples and in water samples collected close to tanneries and landfills. PMID:23598144
Hernandez, Silvia R; Kergaravat, Silvina V; Pividori, Maria Isabel
2013-03-15
An approach based on the electrochemical detection of the horseradish peroxidase enzymatic reaction by means of square wave voltammetry was developed for the determination of phenolic compounds in environmental samples. First, a systematic optimization procedure of three factors involved in the enzymatic reaction was carried out using response surface methodology through a central composite design. Second, the enzymatic electrochemical detection coupled with a multivariate calibration method based in the partial least-squares technique was optimized for the determination of a mixture of five phenolic compounds, i.e. phenol, p-aminophenol, p-chlorophenol, hydroquinone and pyrocatechol. The calibration and validation sets were built and assessed. In the calibration model, the LODs for phenolic compounds oscillated from 0.6 to 1.4 × 10(-6) mol L(-1). Recoveries for prediction samples were higher than 85%. These compounds were analyzed simultaneously in spiked samples and in water samples collected close to tanneries and landfills.
Chotimah, Chusnul; Sudjadi; Riyanto, Sugeng; Rohman, Abdul
2015-01-01
Purpose: Analysis of drugs in multicomponent system officially is carried out using chromatographic technique, however, this technique is too laborious and involving sophisticated instrument. Therefore, UV-VIS spectrophotometry coupled with multivariate calibration of partial least square (PLS) for quantitative analysis of metamizole, thiamin and pyridoxin is developed in the presence of cyanocobalamine without any separation step. Methods: The calibration and validation samples are prepared. The calibration model is prepared by developing a series of sample mixture consisting these drugs in certain proportion. Cross validation of calibration sample using leave one out technique is used to identify the smaller set of components that provide the greatest predictive ability. The evaluation of calibration model was based on the coefficient of determination (R2) and root mean square error of calibration (RMSEC). Results: The results showed that the coefficient of determination (R2) for the relationship between actual values and predicted values for all studied drugs was higher than 0.99 indicating good accuracy. The RMSEC values obtained were relatively low, indicating good precision. The accuracy and presision results of developed method showed no significant difference compared to those obtained by official method of HPLC. Conclusion: The developed method (UV-VIS spectrophotometry in combination with PLS) was succesfully used for analysis of metamizole, thiamin and pyridoxin in tablet dosage form. PMID:26819934
Darwish, Hany W; Backeit, Ahmed H
2013-01-01
Olmesartan medoxamil (OLM, an angiotensin II receptor blocker) and amlodipine besylate (AML, a dihydropyridine calcium channel blocker), are co-formulated in a single-dose combination for the treatment of hypertensive patients whose blood pressure is not adequately controlled on either component monotherapy. In this work, four multivariate and two univariate calibration methods were applied for simultaneous spectrofluorimetric determination of OLM and AML in their combined pharmaceutical tablets in all ratios approved by FDA. The four multivariate methods are partial least squares (PLS), genetic algorithm PLS (GA-PLS), principal component ANN (PC-ANN) and GA-ANN. The two proposed univariate calibration methods are, direct spectrofluorimetric method for OLM and isoabsorpitive method for determination of total concentration of OLM and AML and hence AML by subtraction. The results showed the superiority of multivariate calibration methods over univariate ones for the analysis of the binary mixture. The optimum assay conditions were established and the proposed multivariate calibration methods were successfully applied for the assay of the two drugs in validation set and combined pharmaceutical tablets with excellent recoveries. No interference was observed from common pharmaceutical additives. The results were favorably compared with those obtained by a reference spectrophotometric method.
Efficient computation of net analyte signal vector in inverse multivariate calibration models.
Faber, N K
1998-12-01
The net analyte signal vector has been defined by Lorber as the part of a mixture spectrum that is unique for the analyte of interest; i.e., it is orthogonal to the spectra of the interferences. It plays a key role in the development of multivariate analytical figures of merit. Applications have been reported that imply its utility for spectroscopic wavelength selection as well as calibration method comparison. Currently available methods for computing the net analyte signal vector in inverse multivariate calibration models are based on the evaluation of projection matrices. Due to the size of these matrices (p × p, with p the number of wavelengths) the computation may be highly memory- and time-consuming. This paper shows that the net analyte signal vector can be obtained in a highly efficient manner by a suitable scaling of the regression vector. Computing the scaling factor only requires the evaluation of an inner product (p multiplications and additions). The mathematical form of the newly derived expression is discussed, and the generalization to multiway calibration models is briefly outlined.
Determination of fragrance content in perfume by Raman spectroscopy and multivariate calibration
NASA Astrophysics Data System (ADS)
Godinho, Robson B.; Santos, Mauricio C.; Poppi, Ronei J.
2016-03-01
An alternative methodology is herein proposed for determination of fragrance content in perfumes and their classification according to the guidelines established by fine perfume manufacturers. The methodology is based on Raman spectroscopy associated with multivariate calibration, allowing the determination of fragrance content in a fast, nondestructive, and sustainable manner. The results were considered consistent with the conventional method, whose standard error of prediction values was lower than the 1.0%. This result indicates that the proposed technology is a feasible analytical tool for determination of the fragrance content in a hydro-alcoholic solution for use in manufacturing, quality control and regulatory agencies.
Determination of fragrance content in perfume by Raman spectroscopy and multivariate calibration.
Godinho, Robson B; Santos, Mauricio C; Poppi, Ronei J
2016-03-15
An alternative methodology is herein proposed for determination of fragrance content in perfumes and their classification according to the guidelines established by fine perfume manufacturers. The methodology is based on Raman spectroscopy associated with multivariate calibration, allowing the determination of fragrance content in a fast, nondestructive, and sustainable manner. The results were considered consistent with the conventional method, whose standard error of prediction values was lower than the 1.0%. This result indicates that the proposed technology is a feasible analytical tool for determination of the fragrance content in a hydro-alcoholic solution for use in manufacturing, quality control and regulatory agencies.
Determination of fragrance content in perfume by Raman spectroscopy and multivariate calibration.
Godinho, Robson B; Santos, Mauricio C; Poppi, Ronei J
2016-03-15
An alternative methodology is herein proposed for determination of fragrance content in perfumes and their classification according to the guidelines established by fine perfume manufacturers. The methodology is based on Raman spectroscopy associated with multivariate calibration, allowing the determination of fragrance content in a fast, nondestructive, and sustainable manner. The results were considered consistent with the conventional method, whose standard error of prediction values was lower than the 1.0%. This result indicates that the proposed technology is a feasible analytical tool for determination of the fragrance content in a hydro-alcoholic solution for use in manufacturing, quality control and regulatory agencies. PMID:26771246
Coelho, Clarimar José; Galvão, Roberto K H; de Araújo, Mário César U; Pimentel, Maria Fernanda; da Silva, Edvan Cirino
2003-01-01
A novel strategy for the optimization of wavelet transforms with respect to the statistics of the data set in multivariate calibration problems is proposed. The optimization follows a linear semi-infinite programming formulation, which does not display local maxima problems and can be reproducibly solved with modest computational effort. After the optimization, a variable selection algorithm is employed to choose a subset of wavelet coefficients with minimal collinearity. The selection allows the building of a calibration model by direct multiple linear regression on the wavelet coefficients. In an illustrative application involving the simultaneous determination of Mn, Mo, Cr, Ni, and Fe in steel samples by ICP-AES, the proposed strategy yielded more accurate predictions than PCR, PLS, and nonoptimized wavelet regression. PMID:12767151
Müller, Aline Lima Hermes; Picoloto, Rochele Sogari; Ferrão, Marco Flores; da Silva, Fabiana Ernestina Barcellos; Müller, Edson Irineu; Flores, Erico Marlon de Moraes
2012-06-01
A method for simultaneous determination of clavulanic acid (CA) and amoxicillin (AMO) in commercial tablets was developed using diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) and multivariate calibration. Twenty-five samples (10 commercial and 15 synthetic) were used as a calibration set and 15 samples (10 commercial and 5 synthetic) were used for a prediction set. Calibration models were developed using partial least squares (PLS), interval PLS (iPLS), and synergy interval PLS (siPLS) algorithms. The best algorithm for CA determination was siPLS model with spectra divided in 30 intervals and combinations of 2 intervals. This model showed a root mean square error of prediction (RMSEP) of 5.1 mg g(-1). For AMO determination, the best siPLS model was obtained with spectra divided in 10 intervals and combinations of 4 intervals. This model showed a RMSEP of 22.3 mg g(-1). The proposed method was considered as a suitable for the simultaneous determination of CA and AMO in commercial pharmaceuticals products.
NASA Technical Reports Server (NTRS)
Liberty, S. R.; Mielke, R. R.; Tung, L. J.
1981-01-01
Applied research in the area of spectral assignment in multivariable systems is reported. A frequency domain technique for determining the set of all stabilizing controllers for a single feedback loop multivariable system is described. It is shown that decoupling and tracking are achievable using this procedure. The technique is illustrated with a simple example.
Tan, Chao; Wang, Jinyue; Wu, Tong; Qin, Xin; Li, Menglong
2010-12-01
Based on the combination of uninformative variable elimination (UVE), bootstrap and mutual information (MI), a simple ensemble algorithm, named ESPLS, is proposed for spectral multivariate calibration (MVC). In ESPLS, those uninformative variables are first removed; and then a preparatory training set is produced by bootstrap, on which a MI spectrum of retained variables is calculated. The variables that exhibit higher MI than a defined threshold form a subspace on which a candidate partial least-squares (PLS) model is constructed. This process is repeated. After a number of candidate models are obtained, a small part of models is picked out to construct an ensemble model by simple/weighted average. Four near/mid-infrared (NIR/MIR) spectral datasets concerning the determination of six components are used to verify the proposed ESPLS. The results indicate that ESPLS is superior to UVEPLS and its combination with MI-based variable selection (SPLS) in terms of both the accuracy and robustness. Besides, from the perspective of end-users, ESPLS does not increase the complexity of a calibration when enhancing its performance.
Lemes, Maykon A; Godinho, Mariana S; Rabelo, Denilson; Martins, Felipe T; Mesquita, Alexandre; Neto, Francisco N De Souza; Araujo, Olacir A; Oliveira, Anselmo E De
2014-01-01
Powder X-ray diffraction patterns for 29 samples of magnetite, acquired using a conventional diffractometer, were used to build PLS calibration-based methods and variable selection to estimate mean crystallite size of magnetite directly from powder X-ray diffraction patterns. The best IPLS model corresponds to the Bragg reflections at 35.4° (h k l = 3 1 1), 43.0° (h k l = 4 0 0), 53.6° (h k l = 4 2 2), and 57.0° (h k l = 5 1 1) in 2θ. The best model was a GA-PLS which produced a model with RMSEP of 0.9 nm, and a correlation coefficient of 0.9976 between mean crystallite sizes calculated using Williamson-Hall approach and the ones predicted by GA-PLS method. These results indicate that magnetite mean crystallite sizes can be predicted directly from Powder X-Ray Diffraction and multivariate calibration using PLS variable selection approach.
Tonello, Natalia; Moressi, Marcela Beatriz; Robledo, Sebastián Noel; D'Eramo, Fabiana; Marioli, Juan Miguel
2016-09-01
The simultaneous determination of eugenol (EU), thymol (Ty) and carvacrol (CA) in honey samples, employing square wave voltammetry (SWV) and chemometrics tools, is informed for the first time. For this purpose, a glassy carbon electrode (GCE) was used as working electrode. The operating conditions and influencing parameters (involving several chemical and instrumental parameters) were first optimized by cyclic voltammetry (CV). Thus, the effects of the scan rate, pH and analyte concentration on the electrochemical response of the above mentioned molecules were studied. The results show that the electrochemical responses of the three compounds are very similar and that the voltammetric traces present a high degree of overlap under all the experimental conditions used in this study. Therefore, two chemometric tools were tested to obtain the multivariate calibration model. One method was the partial least squares regression (PLS-1), which assumes a linear behaviour. The other nonlinear method was an artificial neural network (ANN). In this last case we used a supervised, feed-forward network with Levenberg-Marquardt back propagation training. From the accuracies and precisions analysis between nominal and estimated concentrations calculated by using both methods, it was inferred that the ANN method was a good model to quantify EU, Ty and CA in honey samples. Recovery percentages were between 87% and 104%, except for two samples whose values were 136% and 72%. The analytical methodology was simple, fast and accurate.
Yun, Yong-Huan; Wang, Wei-Ting; Tan, Min-Li; Liang, Yi-Zeng; Li, Hong-Dong; Cao, Dong-Sheng; Lu, Hong-Mei; Xu, Qing-Song
2014-01-01
Nowadays, with a high dimensionality of dataset, it faces a great challenge in the creation of effective methods which can select an optimal variables subset. In this study, a strategy that considers the possible interaction effect among variables through random combinations was proposed, called iteratively retaining informative variables (IRIV). Moreover, the variables are classified into four categories as strongly informative, weakly informative, uninformative and interfering variables. On this basis, IRIV retains both the strongly and weakly informative variables in every iterative round until no uninformative and interfering variables exist. Three datasets were employed to investigate the performance of IRIV coupled with partial least squares (PLS). The results show that IRIV is a good alternative for variable selection strategy when compared with three outstanding and frequently used variable selection methods such as genetic algorithm-PLS, Monte Carlo uninformative variable elimination by PLS (MC-UVE-PLS) and competitive adaptive reweighted sampling (CARS). The MATLAB source code of IRIV can be freely downloaded for academy research at the website: http://code.google.com/p/multivariate-calibration/downloads/list. PMID:24356218
Tonello, Natalia; Moressi, Marcela Beatriz; Robledo, Sebastián Noel; D'Eramo, Fabiana; Marioli, Juan Miguel
2016-09-01
The simultaneous determination of eugenol (EU), thymol (Ty) and carvacrol (CA) in honey samples, employing square wave voltammetry (SWV) and chemometrics tools, is informed for the first time. For this purpose, a glassy carbon electrode (GCE) was used as working electrode. The operating conditions and influencing parameters (involving several chemical and instrumental parameters) were first optimized by cyclic voltammetry (CV). Thus, the effects of the scan rate, pH and analyte concentration on the electrochemical response of the above mentioned molecules were studied. The results show that the electrochemical responses of the three compounds are very similar and that the voltammetric traces present a high degree of overlap under all the experimental conditions used in this study. Therefore, two chemometric tools were tested to obtain the multivariate calibration model. One method was the partial least squares regression (PLS-1), which assumes a linear behaviour. The other nonlinear method was an artificial neural network (ANN). In this last case we used a supervised, feed-forward network with Levenberg-Marquardt back propagation training. From the accuracies and precisions analysis between nominal and estimated concentrations calculated by using both methods, it was inferred that the ANN method was a good model to quantify EU, Ty and CA in honey samples. Recovery percentages were between 87% and 104%, except for two samples whose values were 136% and 72%. The analytical methodology was simple, fast and accurate. PMID:27343610
Yun, Yong-Huan; Wang, Wei-Ting; Tan, Min-Li; Liang, Yi-Zeng; Li, Hong-Dong; Cao, Dong-Sheng; Lu, Hong-Mei; Xu, Qing-Song
2014-01-01
Nowadays, with a high dimensionality of dataset, it faces a great challenge in the creation of effective methods which can select an optimal variables subset. In this study, a strategy that considers the possible interaction effect among variables through random combinations was proposed, called iteratively retaining informative variables (IRIV). Moreover, the variables are classified into four categories as strongly informative, weakly informative, uninformative and interfering variables. On this basis, IRIV retains both the strongly and weakly informative variables in every iterative round until no uninformative and interfering variables exist. Three datasets were employed to investigate the performance of IRIV coupled with partial least squares (PLS). The results show that IRIV is a good alternative for variable selection strategy when compared with three outstanding and frequently used variable selection methods such as genetic algorithm-PLS, Monte Carlo uninformative variable elimination by PLS (MC-UVE-PLS) and competitive adaptive reweighted sampling (CARS). The MATLAB source code of IRIV can be freely downloaded for academy research at the website: http://code.google.com/p/multivariate-calibration/downloads/list.
Cantarelli, Miguel A; Funes, Israel G; Marchevsky, Eduardo J; Camiña, José M
2009-12-15
A method for the determination of oleic acid in sunflower seeds is proposed. One hundred samples of sunflower seeds were analyzed by near-infrared diffuse reflectance spectroscopy (NIRDRS). The direct measures were realized in ground and sifted seeds. The PLS multivariate calibration model was obtained using first derivate absorbance values as response matrix, while the oleic acid concentration matrix was obtained analyzing the seed samples by gas chromatography with a flame ionization detector (GC-FID). The NIRDRS-PLS model was validated externally using unknown samples of sunflower seeds. The accuracy and precision of the method was evaluated using GC as reference method. The following figures of merit (FOM) were obtained: LOD=3.4% (w/w); LOQ=11.3% (w/w); SEN=8x10(-5); SEL=0.15; analytical sensibility (gamma)=1.5 and linear range (LR)=18.1-89.2% (w/w). This method is useful for the fast determination of oleic acid in sunflower seeds and for quality control of raw materials. PMID:19836509
Applied Statistics: From Bivariate through Multivariate Techniques [with CD-ROM
ERIC Educational Resources Information Center
Warner, Rebecca M.
2007-01-01
This book provides a clear introduction to widely used topics in bivariate and multivariate statistics, including multiple regression, discriminant analysis, MANOVA, factor analysis, and binary logistic regression. The approach is applied and does not require formal mathematics; equations are accompanied by verbal explanations. Students are asked…
Multivariate calibration modeling of liver oxygen saturation using near-infrared spectroscopy
NASA Astrophysics Data System (ADS)
Cingo, Ndumiso A.; Soller, Babs R.; Puyana, Juan C.
2000-05-01
The liver has been identified as an ideal site to spectroscopically monitor for changes in oxygen saturation during liver transplantation and shock because it is susceptible to reduced blood flow and oxygen transport. Near-IR spectroscopy, combined with multivariate calibration techniques, has been shown to be a viable technique for monitoring oxygen saturation changes in various organs in a minimally invasive manner. The liver has a dual system circulation. Blood enters the liver through the portal vein and hepatic artery, and leaves through the hepatic vein. Therefore, it is of utmost importance to determine how the liver NIR spectroscopic information correlates with the different regions of the hepatic lobule as the dual circulation flows from the presinusoidal space into the post sinusoidal region of the central vein. For NIR spectroscopic information to reliably represent the status of liver oxygenation, the NIR oxygen saturation should best correlate with the post-sinusoidal region. In a series of six pigs undergoing induced hemorrhagic chock, NIR spectra collected from the liver were used together with oxygen saturation reference data from the hepatic and portal veins, and an average of the two to build partial least-squares regression models. Results obtained from these models show that the hepatic vein and an average of the hepatic and portal veins provide information that is best correlate with NIR spectral information, while the portal vein reference measurement provides poorer correlation and accuracy. These results indicate that NIR determination of oxygen saturation in the liver can provide an assessment of liver oxygen utilization.
NASA Astrophysics Data System (ADS)
Samadi-Maybodi, Abdolraouf; Hassani Nejad-Darzi, Seyed Karim
2010-04-01
Resolution of binary mixtures of paracetamol, phenylephrine hydrochloride and chlorpheniramine maleate with minimum sample pre-treatment and without analyte separation has been successfully achieved by methods of partial least squares algorithm with one dependent variable, principal component regression and hybrid linear analysis. Data of analysis were obtained from UV-vis spectra of the above compounds. The method of central composite design was used in the ranges of 1-15 mg L -1 for both calibration and validation sets. The models refinement procedure and their validation were performed by cross-validation. Figures of merit such as selectivity, sensitivity, analytical sensitivity and limit of detection were determined for all three compounds. The procedure was successfully applied to simultaneous determination of the above compounds in pharmaceutical tablets.
Samadi-Maybodi, Abdolraouf; Hassani Nejad-Darzi, Seyed Karim
2010-04-01
Resolution of binary mixtures of paracetamol, phenylephrine hydrochloride and chlorpheniramine maleate with minimum sample pre-treatment and without analyte separation has been successfully achieved by methods of partial least squares algorithm with one dependent variable, principal component regression and hybrid linear analysis. Data of analysis were obtained from UV-vis spectra of the above compounds. The method of central composite design was used in the ranges of 1-15 mg L(-1) for both calibration and validation sets. The models refinement procedure and their validation were performed by cross-validation. Figures of merit such as selectivity, sensitivity, analytical sensitivity and limit of detection were determined for all three compounds. The procedure was successfully applied to simultaneous determination of the above compounds in pharmaceutical tablets.
Dieterle, F; Nopper, D; Gauglitz, G
2001-07-01
This paper presents several methods for analysis of data from reflectometric interference spectroscopic measurements (RIfS) of water samples. The set-up consists of three sensors with different polymer layers. Mixtures of butanol and ethanol in water were measured from 0 to 12,000 ppm each. The data space was characterized by principal component analysis (PCA). Calibration and prediction were achieved by multivariate methods, e.g. multiple linear regression (MLR), partial least squares (PLS) with additional predictors, and quadratic partial least squares (Q-PLS), and by use of artificial neural networks. Artificial neural networks gave the best results of all the calibration methods used. Calibration and prediction of the concentration of the two analytes by artificial neural nets were robust and the set-up could be reduced to only two sensors without deterioration of the prediction.
Balss, Karin M; Long, Frederick H; Veselov, Vladimir; Orana, Argjenta; Akerman-Revis, Eugena; Papandreou, George; Maryanoff, Cynthia A
2008-07-01
Multivariate data analysis was applied to confocal Raman measurements on stents coated with the polymers and drug used in the CYPHER Sirolimus-eluting Coronary Stents. Partial least-squares (PLS) regression was used to establish three independent calibration curves for the coating constituents: sirolimus, poly(n-butyl methacrylate) [PBMA], and poly(ethylene-co-vinyl acetate) [PEVA]. The PLS calibrations were based on average spectra generated from each spatial location profiled. The PLS models were tested on six unknown stent samples to assess accuracy and precision. The wt % difference between PLS predictions and laboratory assay values for sirolimus was less than 1 wt % for the composite of the six unknowns, while the polymer models were estimated to be less than 0.5 wt % difference for the combined samples. The linearity and specificity of the three PLS models were also demonstrated with the three PLS models. In contrast to earlier univariate models, the PLS models achieved mass balance with better accuracy. This analysis was extended to evaluate the spatial distribution of the three constituents. Quantitative bitmap images of drug-eluting stent coatings are presented for the first time to assess the local distribution of components. PMID:18510342
Collado, M S; Mantovani, V E; Goicoechea, H C; Olivieri, A C
2000-08-16
The use of multivariate spectrophotometric calibration for the simultaneous determination of several active components and excipients in ophthalmic solutions is presented. The resolution of five-component mixtures of phenylephrine, chloramphenicol, antipyrine, methylparaben and thimerosal has been accomplished by using partial least-squares (PLS-1) and a variant of the so-called hybrid linear analysis (HLA). Notwithstanding the presence of a large number of components and their high degree of spectral overlap, they have been determined simultaneously with high accuracy and precision, with no interference, rapidly and without resorting to extraction procedures using non aqueous solvents. A simple and fast method for wavelength selection in the calibration step is presented, based on the minimisation of the predicted error sum of squares (PRESS) calculated as a function of a moving spectral window.
Collado, M S; Robles, J C; De Zan, M; Cámara, M S; Mantovani, V E; Goicoechea, H C
2001-10-23
The use of multivariate spectrophotometric calibration for the simultaneous determination of dexamethasone and two typical excipients (creatinine and propylparaben) in injections is presented. The resolution of the three-component mixture in a matrix of excipients has been accomplished by using partial least-squares (PLS-1). Notwithstanding the elevated degree of spectral overlap, they have been rapidly and simultaneously determined with high accuracy and precision (comparable to the HPLC pharmacopeial method), with no interference, and without resorting to extraction procedures using non-aqueous solvents. A simple and fast method for wavelength selection in the calibration step is used, based on the minimisation of the predicted error sum of squares (PRESS) calculated as a function of a moving spectral window.
Applying the multivariate time-rescaling theorem to neural population models
Gerhard, Felipe; Haslinger, Robert; Pipa, Gordon
2011-01-01
Statistical models of neural activity are integral to modern neuroscience. Recently, interest has grown in modeling the spiking activity of populations of simultaneously recorded neurons to study the effects of correlations and functional connectivity on neural information processing. However any statistical model must be validated by an appropriate goodness-of-fit test. Kolmogorov-Smirnov tests based upon the time-rescaling theorem have proven to be useful for evaluating point-process-based statistical models of single-neuron spike trains. Here we discuss the extension of the time-rescaling theorem to the multivariate (neural population) case. We show that even in the presence of strong correlations between spike trains, models which neglect couplings between neurons can be erroneously passed by the univariate time-rescaling test. We present the multivariate version of the time-rescaling theorem, and provide a practical step-by-step procedure for applying it towards testing the sufficiency of neural population models. Using several simple analytically tractable models and also more complex simulated and real data sets, we demonstrate that important features of the population activity can only be detected using the multivariate extension of the test. PMID:21395436
A TRMM-Calibrated Infrared Rainfall Algorithm Applied Over Brazil
NASA Technical Reports Server (NTRS)
Negri, A. J.; Xu, L.; Adler, R. F.; Einaudi, Franco (Technical Monitor)
2000-01-01
The development of a satellite infrared technique for estimating convective and stratiform rainfall and its application in studying the diurnal variability of rainfall in Amazonia are presented. The Convective-Stratiform. Technique, calibrated by coincident, physically retrieved rain rates from the Tropical Rain Measuring Mission (TRMM) Microwave Imager (TMI), is applied during January to April 1999 over northern South America. The diurnal cycle of rainfall, as well as the division between convective and stratiform rainfall is presented. Results compare well (a one-hour lag) with the diurnal cycle derived from Tropical Ocean-Global Atmosphere (TOGA) radar-estimated rainfall in Rondonia. The satellite estimates reveal that the convective rain constitutes, in the mean, 24% of the rain area while accounting for 67% of the rain volume. The effects of geography (rivers, lakes, coasts) and topography on the diurnal cycle of convection are examined. In particular, the Amazon River, downstream of Manaus, is shown to both enhance early morning rainfall and inhibit afternoon convection. Monthly estimates from this technique, dubbed CST/TMI, are verified over a dense rain gage network in the state of Ceara, in northeast Brazil. The CST/TMI showed a high bias equal to +33% of the gage mean, indicating that possibly the TMI estimates alone are also high. The root mean square difference (after removal of the bias) equaled 36.6% of the gage mean. The correlation coefficient was 0.77 based on 72 station-months.
Sharma, Sandeep; Goodarzi, Mohammad; Ramon, Herman; Saeys, Wouter
2014-04-01
Partial Least Squares (PLS) regression is one of the most used methods for extracting chemical information from Near Infrared (NIR) spectroscopic measurements. The success of a PLS calibration relies largely on the representativeness of the calibration data set. This is not trivial, because not only the expected variation in the analyte of interest, but also the variation of other contributing factors (interferents) should be included in the calibration data. This also implies that changes in interferent concentrations not covered in the calibration step can deteriorate the prediction ability of the calibration model. Several researchers have suggested that PLS models can be robustified against changes in the interferent structure by incorporating expert knowledge in the preprocessing step with the aim to efficiently filter out the spectral influence of the spectral interferents. However, these methods have not yet been compared against each other. Therefore, in the present study, various preprocessing techniques exploiting expert knowledge were compared on two experimental data sets. In both data sets, the calibration and test set were designed to have a different interferent concentration range. The performance of these techniques was compared to that of preprocessing techniques which do not use any expert knowledge. Using expert knowledge was found to improve the prediction performance for both data sets. For data set-1, the prediction error improved nearly 32% when pure component spectra of the analyte and the interferents were used in the Extended Multiplicative Signal Correction framework. Similarly, for data set-2, nearly 63% improvement in the prediction error was observed when the interferent information was utilized in Spectral Interferent Subtraction preprocessing.
NASA Astrophysics Data System (ADS)
Chen, Quansheng; Qi, Shuai; Li, Huanhuan; Han, Xiaoyan; Ouyang, Qin; Zhao, Jiewen
2014-10-01
To rapidly and efficiently detect the presence of adulterants in honey, three-dimensional fluorescence spectroscopy (3DFS) technique was employed with the help of multivariate calibration. The data of 3D fluorescence spectra were compressed using characteristic extraction and the principal component analysis (PCA). Then, partial least squares (PLS) and back propagation neural network (BP-ANN) algorithms were used for modeling. The model was optimized by cross validation, and its performance was evaluated according to root mean square error of prediction (RMSEP) and correlation coefficient (R) in prediction set. The results showed that BP-ANN model was superior to PLS models, and the optimum prediction results of the mixed group (sunflower ± longan ± buckwheat ± rape) model were achieved as follow: RMSEP = 0.0235 and R = 0.9787 in the prediction set. The study demonstrated that the 3D fluorescence spectroscopy technique combined with multivariate calibration has high potential in rapid, nondestructive, and accurate quantitative analysis of honey adulteration.
NASA Astrophysics Data System (ADS)
Saleem, Aamer; Canal, Céline; Hutchins, David A.; Green, Roger J.
2011-05-01
The detection of specific chemicals when concealed behind a layer of clothing is reported using near infrared (NIR) spectroscopy. Concealment modifies the spectrum of a particular chemical when recorded at stand-off ranges of three meters in a diffuse reflection experiment. The subsequent analysis to identify a particular chemical has involved employing calibration models such as principal component regression (PCR) and partial least squares regression (PLSR). Additionally, detection has been attempted with good results using neural networks. The latter technique serves to overcome nonlinearities in the calibration/training dataset, affording more robust modelling. Finally, lock-in amplification of spectral data collected in through-transmission arrangement has been shown to allow detection at SNR as low as -60dB. The work has been shown to both allow detection of specific chemicals concealed behind a single intervening layer of fabric material, and to estimate the concentration of certain liquids.
Zhang, Jie; Stonnington, Cynthia; Li, Qingyang; Shi, Jie; Bauer, Robert J.; Gutman, Boris A.; Chen, Kewei; Reiman, Eric M.; Thompson, Paul M.; Ye, Jieping; Wang, Yalin
2016-01-01
Alzheimer’s disease (AD) is a progressive brain disease. Accurate diagnosis of AD and its prodromal stage, mild cognitive impairment, is crucial for clinical trial design. There is also growing interests in identifying brain imaging biomarkers that help evaluate AD risk presymptomatically. Here, we applied a recently developed multivariate tensor-based morphometry (mTBM) method to extract features from hippocampal surfaces, derived from anatomical brain MRI. For such surface-based features, the feature dimension is usually much larger than the number of subjects. We used dictionary learning and sparse coding to effectively reduce the feature dimensions. With the new features, an Adaboost classifier was employed for binary group classification. In tests on publicly available data from the Alzheimers Disease Neuroimaging Initiative, the new framework outperformed several standard imaging measures in classifying different stages of AD. The new approach combines the efficiency of sparse coding with the sensitivity of surface mTBM, and boosts classification performance. PMID:27499829
NASA Astrophysics Data System (ADS)
Tawakkol, Shereen M.; Farouk, M.; Elaziz, Omar Abd; Hemdan, A.; Shehata, Mostafa A.
2014-12-01
Three simple, accurate, reproducible, and selective methods have been developed and subsequently validated for the simultaneous determination of Moexipril (MOX) and Hydrochlorothiazide (HCTZ) in pharmaceutical dosage form. The first method is the new extended ratio subtraction method (EXRSM) coupled to ratio subtraction method (RSM) for determination of both drugs in commercial dosage form. The second and third methods are multivariate calibration which include Principal Component Regression (PCR) and Partial Least Squares (PLSs). A detailed validation of the methods was performed following the ICH guidelines and the standard curves were found to be linear in the range of 10-60 and 2-30 for MOX and HCTZ in EXRSM method, respectively, with well accepted mean correlation coefficient for each analyte. The intra-day and inter-day precision and accuracy results were well within the acceptable limits.
Tawakkol, Shereen M; Farouk, M; Elaziz, Omar Abd; Hemdan, A; Shehata, Mostafa A
2014-12-10
Three simple, accurate, reproducible, and selective methods have been developed and subsequently validated for the simultaneous determination of Moexipril (MOX) and Hydrochlorothiazide (HCTZ) in pharmaceutical dosage form. The first method is the new extended ratio subtraction method (EXRSM) coupled to ratio subtraction method (RSM) for determination of both drugs in commercial dosage form. The second and third methods are multivariate calibration which include Principal Component Regression (PCR) and Partial Least Squares (PLSs). A detailed validation of the methods was performed following the ICH guidelines and the standard curves were found to be linear in the range of 10-60 and 2-30 for MOX and HCTZ in EXRSM method, respectively, with well accepted mean correlation coefficient for each analyte. The intra-day and inter-day precision and accuracy results were well within the acceptable limits.
Santa-Cruz, Pablo; García-Reiriz, Alejandro
2014-10-01
In the present work a new application of third-order multivariate calibration algorithms is presented, in order to quantify carbaryl, naphthol and propoxur using kinetic spectroscopic data. The time evolution of fluorescence data matrices was measured, in order to follow the alkaline hydrolysis of the pesticides mentioned above. This experimental system has the additional complexity that one of the analytes is the reaction product of another analyte, and this fact generates linear dependency problems between concentration profiles. The data were analyzed by three different methods: parallel factor analysis (PARAFAC), unfolded partial least-squares (U-PLS) and multi-dimensional partial least-squares (N-PLS); these last two methods were assisted with residual trilinearization (RTL) to model the presence of unexpected signals not included in the calibration step. The ability of the different algorithms to predict analyte concentrations was checked with validation samples. Samples with unexpected components, tiabendazole and carbendazim, were prepared and spiked water samples of a natural stream were used to check the recovered concentrations. The best results were obtained with U-PLS/RTL and N-PLS/RTL with an average of the limits of detection of 0.035 for carbaryl, 0.025 for naphthol and 0.090 for propoxur (mg L(-1)), because these two methods are more flexible regarding the structure of the data.
Sasakura, D; Nakayama, K; Sakamoto, T; Chikuma, T
2015-05-01
The use of transmission near infrared spectroscopy (TNIRS) is of particular interest in the pharmaceutical industry. This is because TNIRS does not require sample preparation and can analyze several tens of tablet samples in an hour. It has the capability to measure all relevant information from a tablet, while still on the production line. However, TNIRS has a narrow spectrum range and overtone vibrations often overlap. To perform content uniformity testing in tablets by TNIRS, various properties in the tableting process need to be analyzed by a multivariate prediction model, such as a Partial Least Square Regression modeling. One issue is that typical approaches require several hundred reference samples to act as the basis of the method rather than a strategically designed method. This means that many batches are needed to prepare the reference samples; this requires time and is not cost effective. Our group investigated the concentration dependence of the calibration model with a strategic design. Consequently, we developed a more effective approach to the TNIRS calibration model than the existing methodology.
Dönmez, Ozlem Aksu; Aşçi, Bürge; Bozdoğan, Abdürrezzak; Sungur, Sidika
2011-02-15
A simple and rapid analytical procedure was proposed for the determination of chromatographic peaks by means of partial least squares multivariate calibration (PLS) of high-performance liquid chromatography with diode array detection (HPLC-DAD). The method is exemplified with analysis of quaternary mixtures of potassium guaiacolsulfonate (PG), guaifenesin (GU), diphenhydramine HCI (DP) and carbetapentane citrate (CP) in syrup preparations. In this method, the area does not need to be directly measured and predictions are more accurate. Though the chromatographic and spectral peaks of the analytes were heavily overlapped and interferents coeluted with the compounds studied, good recoveries of analytes could be obtained with HPLC-DAD coupled with PLS calibration. This method was tested by analyzing the synthetic mixture of PG, GU, DP and CP. As a comparison method, a classsical HPLC method was used. The proposed methods were applied to syrups samples containing four drugs and the obtained results were statistically compared with each other. Finally, the main advantage of HPLC-PLS method over the classical HPLC method tried to emphasized as the using of simple mobile phase, shorter analysis time and no use of internal standard and gradient elution. PMID:21238758
Calibration and integrity verification techniques applied to GPS simulators
NASA Astrophysics Data System (ADS)
Stulken, D. A.
Automated calibration and signal verification techniques which are used in GPS simulators to ensure a high level of fidelity of the test stimulus employed in evaluating the performance of GPS receivers have been developed. The present techniques involve satellite signal power levels, jammer signal power levels, time of arrival of satellite signals, and the coordinated timing of simulated satellite signals with respect to the simulation of the host vehicle interface signals. From initial simulation and evaluation system design efforts, a new family of GPS RF signal generators were developed, the multiple channel signal generator and the single channel signal generator.
Wiberg, Kent; Hagman, Anders; Jacobsson, Sven P
2003-01-01
A new method for the rapid determination of pharmaceutical solutions is proposed. A conventional HPLC system with a Diode Array Detector (DAD) was used with no chromatographic column connected. As eluent, purified water (Milli Q) was used. The pump and autosampler of the HPLC system were mainly utilised as an automatic and convenient way of introducing the sample into the DAD. The method was tested on the local anaesthetic compound lidocaine. The UV spectrum (245-290 nm) from the samples analysed in the detector was used for multivariate calibration for the determination of lidocaine solutions. The content was determined with PLS regression. The effect on the predictive ability of three factors: flow, data-collection rate and rise time as well as two ways of exporting a representative UV spectrum from the DAD file collected was investigated by means of an experimental design comprising 11 experiments. For each experiment, 14 solutions containing a known content of lidocaine were analysed (0.02-0.2 mg ml(-1)). From these 14 samples two calibration sets and two test sets were made and as the response in the experimental design the Root Mean Square Error of Prediction (RMSEP) values from the predictions of the two test sets were used. When the factor setting giving the lowest RMSEP was found, this setting was used when analysing a new calibration set of 12 lidocaine samples (0.1-0.2 mg ml(-1)). This calibration model was validated by two external test sets, A and B, analysed on separate occasions for the evaluation of repeatability (test set A) and determination over time (test set B). For comparison, the reference method, liquid chromatography, was also used for analysis of the ten samples in test set B. This comparison of the two methods was done twice on different occasions. The results show that in respect of accuracy, precision and repeatability the new method is comparable to the reference method. The main advantages compared with liquid chromatography are the
Wiberg, Kent; Hagman, Anders; Jacobsson, Sven P
2003-01-01
A new method for the rapid determination of pharmaceutical solutions is proposed. A conventional HPLC system with a Diode Array Detector (DAD) was used with no chromatographic column connected. As eluent, purified water (Milli Q) was used. The pump and autosampler of the HPLC system were mainly utilised as an automatic and convenient way of introducing the sample into the DAD. The method was tested on the local anaesthetic compound lidocaine. The UV spectrum (245-290 nm) from the samples analysed in the detector was used for multivariate calibration for the determination of lidocaine solutions. The content was determined with PLS regression. The effect on the predictive ability of three factors: flow, data-collection rate and rise time as well as two ways of exporting a representative UV spectrum from the DAD file collected was investigated by means of an experimental design comprising 11 experiments. For each experiment, 14 solutions containing a known content of lidocaine were analysed (0.02-0.2 mg ml(-1)). From these 14 samples two calibration sets and two test sets were made and as the response in the experimental design the Root Mean Square Error of Prediction (RMSEP) values from the predictions of the two test sets were used. When the factor setting giving the lowest RMSEP was found, this setting was used when analysing a new calibration set of 12 lidocaine samples (0.1-0.2 mg ml(-1)). This calibration model was validated by two external test sets, A and B, analysed on separate occasions for the evaluation of repeatability (test set A) and determination over time (test set B). For comparison, the reference method, liquid chromatography, was also used for analysis of the ten samples in test set B. This comparison of the two methods was done twice on different occasions. The results show that in respect of accuracy, precision and repeatability the new method is comparable to the reference method. The main advantages compared with liquid chromatography are the
Differential Evolution algorithm applied to FSW model calibration
NASA Astrophysics Data System (ADS)
Idagawa, H. S.; Santos, T. F. A.; Ramirez, A. J.
2014-03-01
Friction Stir Welding (FSW) is a solid state welding process that can be modelled using a Computational Fluid Dynamics (CFD) approach. These models use adjustable parameters to control the heat transfer and the heat input to the weld. These parameters are used to calibrate the model and they are generally determined using the conventional trial and error approach. Since this method is not very efficient, we used the Differential Evolution (DE) algorithm to successfully determine these parameters. In order to improve the success rate and to reduce the computational cost of the method, this work studied different characteristics of the DE algorithm, such as the evolution strategy, the objective function, the mutation scaling factor and the crossover rate. The DE algorithm was tested using a friction stir weld performed on a UNS S32205 Duplex Stainless Steel.
Multivariate analyses applied to fetal, neonatal and pediatric MRI of neurodevelopmental disorders
Levman, Jacob; Takahashi, Emi
2015-01-01
Multivariate analysis (MVA) is a class of statistical and pattern recognition methods that involve the processing of data that contains multiple measurements per sample. MVA can be used to address a wide variety of medical neuroimaging-related challenges including identifying variables associated with a measure of clinical importance (i.e. patient outcome), creating diagnostic tests, assisting in characterizing developmental disorders, understanding disease etiology, development and progression, assisting in treatment monitoring and much more. Compared to adults, imaging of developing immature brains has attracted less attention from MVA researchers. However, remarkable MVA research growth has occurred in recent years. This paper presents the results of a systematic review of the literature focusing on MVA technologies applied to neurodevelopmental disorders in fetal, neonatal and pediatric magnetic resonance imaging (MRI) of the brain. The goal of this manuscript is to provide a concise review of the state of the scientific literature on studies employing brain MRI and MVA in a pre-adult population. Neurological developmental disorders addressed in the MVA research contained in this review include autism spectrum disorder, attention deficit hyperactivity disorder, epilepsy, schizophrenia and more. While the results of this review demonstrate considerable interest from the scientific community in applications of MVA technologies in pediatric/neonatal/fetal brain MRI, the field is still young and considerable research growth remains ahead of us. PMID:26640765
Multivariate Analyses Applied to Healthy Neurodevelopment in Fetal, Neonatal, and Pediatric MRI
Levman, Jacob; Takahashi, Emi
2016-01-01
Multivariate analysis (MVA) is a class of statistical and pattern recognition techniques that involve the processing of data that contains multiple measurements per sample. MVA can be used to address a wide variety of neurological medical imaging related challenges including the evaluation of healthy brain development, the automated analysis of brain tissues and structures through image segmentation, evaluating the effects of genetic and environmental factors on brain development, evaluating sensory stimulation's relationship with functional brain activity and much more. Compared to adult imaging, pediatric, neonatal and fetal imaging have attracted less attention from MVA researchers, however, recent years have seen remarkable MVA research growth in pre-adult populations. This paper presents the results of a systematic review of the literature focusing on MVA applied to healthy subjects in fetal, neonatal and pediatric magnetic resonance imaging (MRI) of the brain. While the results of this review demonstrate considerable interest from the scientific community in applications of MVA technologies in brain MRI, the field is still young and significant research growth will continue into the future. PMID:26834576
Gries, J M; Verotta, D
2000-08-01
In a frequently performed pharmacokinetics study, different subjects are given different doses of a drug. After each dose is given, drug concentrations are observed according to the same sampling design. The goal of the experiment is to obtain a representation for the pharmacokinetics of the drug, and to determine if drug concentrations observed at different times after a dose are linear in respect to dose. The goal of this paper is to obtain a representation for concentration as a function of time and dose, which (a) makes no assumptions on the underlying pharmacokinetics of the drug; (b) takes into account the repeated measure structure of the data; and (c) detects nonlinearities in respect to dose. To address (a) we use a multivariate adaptive regression splines representation (MARS), which we recast into a linear mixed-effects model, addressing (b). To detect nonlinearity we describe a general algorithm that obtains nested (mixed-effect) MARS representations. In the pharmacokinetics application, the algorithm obtains representations containing time, and time and dose, respectively, with the property that the bases functions of the first representation are a subset of the second. Standard statistical model selection criteria are used to select representations linear or nonlinear in respect to dose. The method can be applied to a variety of pharmacokinetics (and pharmacodynamic) preclinical and phase I-III trials. Examples of applications of the methodology to real and simulated data are reported.
Multivariate analyses applied to fetal, neonatal and pediatric MRI of neurodevelopmental disorders.
Levman, Jacob; Takahashi, Emi
2015-01-01
Multivariate analysis (MVA) is a class of statistical and pattern recognition methods that involve the processing of data that contains multiple measurements per sample. MVA can be used to address a wide variety of medical neuroimaging-related challenges including identifying variables associated with a measure of clinical importance (i.e. patient outcome), creating diagnostic tests, assisting in characterizing developmental disorders, understanding disease etiology, development and progression, assisting in treatment monitoring and much more. Compared to adults, imaging of developing immature brains has attracted less attention from MVA researchers. However, remarkable MVA research growth has occurred in recent years. This paper presents the results of a systematic review of the literature focusing on MVA technologies applied to neurodevelopmental disorders in fetal, neonatal and pediatric magnetic resonance imaging (MRI) of the brain. The goal of this manuscript is to provide a concise review of the state of the scientific literature on studies employing brain MRI and MVA in a pre-adult population. Neurological developmental disorders addressed in the MVA research contained in this review include autism spectrum disorder, attention deficit hyperactivity disorder, epilepsy, schizophrenia and more. While the results of this review demonstrate considerable interest from the scientific community in applications of MVA technologies in pediatric/neonatal/fetal brain MRI, the field is still young and considerable research growth remains ahead of us. PMID:26640765
Multivariate Curve Resolution Applied to Hyperspectral Imaging Analysis of Chocolate Samples.
Zhang, Xin; de Juan, Anna; Tauler, Romà
2015-08-01
This paper shows the application of Raman and infrared hyperspectral imaging combined with multivariate curve resolution (MCR) to the analysis of the constituents of commercial chocolate samples. The combination of different spectral data pretreatment methods allowed decreasing the high fluorescent Raman signal contribution of whey in the investigated chocolate samples. Using equality constraints during MCR analysis, estimations of the pure spectra of the chocolate sample constituents were improved, as well as their relative contributions and their spatial distribution on the analyzed samples. In addition, unknown constituents could be also resolved. White chocolate constituents resolved from Raman hyperspectral image indicate that, at macro scale, sucrose, lactose, fat, and whey constituents were intermixed in particles. Infrared hyperspectral imaging did not suffer from fluorescence and could be applied for white and milk chocolate. As a conclusion of this study, micro-hyperspectral imaging coupled to the MCR method is confirmed to be an appropriate tool for the direct analysis of the constituents of chocolate samples, and by extension, it is proposed for the analysis of other mixture constituents in commercial food samples.
Multivariable control theory applied to hierarchial attitude control for planetary spacecraft
NASA Technical Reports Server (NTRS)
Boland, J. S., III; Russell, D. W.
1972-01-01
Multivariable control theory is applied to the design of a hierarchial attitude control system for the CARD space vehicle. The system selected uses reaction control jets (RCJ) and control moment gyros (CMG). The RCJ system uses linear signal mixing and a no-fire region similar to that used on the Skylab program; the y-axis and z-axis systems which are coupled use a sum and difference feedback scheme. The CMG system uses the optimum steering law and the same feedback signals as the RCJ system. When both systems are active the design is such that the torques from each system are never in opposition. A state-space analysis was made of the CMG system to determine the general structure of the input matrices (steering law) and feedback matrices that will decouple the axes. It is shown that the optimum steering law and proportional-plus-rate feedback are special cases. A derivation of the disturbing torques on the space vehicle due to the motion of the on-board television camera is presented. A procedure for computing an upper bound on these torques (given the system parameters) is included.
Zhou, Chengfeng; Jiang, Wei; Cheng, Qingzheng; Via, Brian K.
2015-01-01
This research addressed a rapid method to monitor hardwood chemical composition by applying Fourier transform infrared (FT-IR) spectroscopy, with particular interest in model performance for interpretation and prediction. Partial least squares (PLS) and principal components regression (PCR) were chosen as the primary models for comparison. Standard laboratory chemistry methods were employed on a mixed genus/species hardwood sample set to collect the original data. PLS was found to provide better predictive capability while PCR exhibited a more precise estimate of loading peaks and suggests that PCR is better for model interpretation of key underlying functional groups. Specifically, when PCR was utilized, an error in peak loading of ±15 cm−1 from the true mean was quantified. Application of the first derivative appeared to assist in improving both PCR and PLS loading precision. Research results identified the wavenumbers important in the prediction of extractives, lignin, cellulose, and hemicellulose and further demonstrated the utility in FT-IR for rapid monitoring of wood chemistry. PMID:26576321
Masoum, Saeed; Mehran, Mehdi; Ghaheri, Salehe
2015-02-01
Thyme species are used in traditional medicine throughout the world and are known for their antiseptic, antispasmodic, and antitussive properties. Also, antioxidant activity is one of the interesting properties of thyme essential oil. In this research, we aim to identify peaks potentially responsible for the antioxidant activity of thyme oil from chromatographic fingerprints. Therefore, the chemical compositions of hydrodistilled essential oil of thyme species from different regions were analyzed by gas chromatography with mass spectrometry and antioxidant activities of essential oils were measured by a 1,1-diphenyl-2-picrylhydrazyl radical scavenging test. Several linear multivariate calibration techniques with different preprocessing methods were applied to the chromatograms of thyme essential oils to indicate the peaks responsible for the antioxidant activity. These techniques were applied on data both before and after alignment of chromatograms with correlation optimized warping. In this study, orthogonal projection to latent structures model was found to be a good technique to indicate the potential antioxidant active compounds in the thyme oil due to its simplicity and repeatability.
NASA Astrophysics Data System (ADS)
Moustafa, Azza A.; Hegazy, Maha A.; Mohamed, Dalia; Ali, Omnia
2016-02-01
A novel approach for the resolution and quantitation of severely overlapped quaternary mixture of carbinoxamine maleate (CAR), pholcodine (PHL), ephedrine hydrochloride (EPH) and sunset yellow (SUN) in syrup was demonstrated utilizing different spectrophotometric assisted multivariate calibration methods. The applied methods have used different processing and pre-processing algorithms. The proposed methods were partial least squares (PLS), concentration residuals augmented classical least squares (CRACLS), and a novel method; continuous wavelet transforms coupled with partial least squares (CWT-PLS). These methods were applied to a training set in the concentration ranges of 40-100 μg/mL, 40-160 μg/mL, 100-500 μg/mL and 8-24 μg/mL for the four components, respectively. The utilized methods have not required any preliminary separation step or chemical pretreatment. The validity of the methods was evaluated by an external validation set. The selectivity of the developed methods was demonstrated by analyzing the drugs in their combined pharmaceutical formulation without any interference from additives. The obtained results were statistically compared with the official and reported methods where no significant difference was observed regarding both accuracy and precision.
Farrell, Jeremy A; Higgins, Kevin; Kalivas, John H
2012-03-01
Determining active pharmaceutical ingredient (API) tablet concentrations rapidly and efficiently is of great importance to the pharmaceutical industry in order to assure quality control. Using near-infrared (NIR) spectra measured on tablets in conjunction with multivariate calibration has been shown to meet these objectives. However, the calibration is typically developed under one set of conditions (primary conditions) and new tablets are produced under different measurement conditions (secondary conditions). Hence, the accuracy of multivariate calibration is limited due to differences between primary and secondary conditions such as tablet variances (composition, dosage, and production processes and precision), different instruments, and/or new environmental conditions. This study evaluates application of Tikhonov regularization (TR) to update NIR calibration models developed in a controlled primary laboratory setting to predict API tablet concentrations manufactured in full production where conditions and tablets are significantly different than in the laboratory. With just a few new tablets from full production, it is found that TR provides reduced prediction errors by as much as 64% in one situation compared to no model-updating. TR prediction errors are reduced by as much as 51% compared to local centering, another calibration maintenance method. The TR updated primary models are also found to predict as well as a full calibration model formed in the secondary conditions.
Rasouli, Zolaikha; Ghavami, Raouf
2016-08-01
Vanillin (VA), vanillic acid (VAI) and syringaldehyde (SIA) are important food additives as flavor enhancers. The current study for the first time is devote to the application of partial least square (PLS-1), partial robust M-regression (PRM) and feed forward neural networks (FFNNs) as linear and nonlinear chemometric methods for the simultaneous detection of binary and ternary mixtures of VA, VAI and SIA using data extracted directly from UV-spectra with overlapped peaks of individual analytes. Under the optimum experimental conditions, for each compound a linear calibration was obtained in the concentration range of 0.61-20.99 [LOD=0.12], 0.67-23.19 [LOD=0.13] and 0.73-25.12 [LOD=0.15] μgmL(-1) for VA, VAI and SIA, respectively. Four calibration sets of standard samples were designed by combination of a full and fractional factorial designs with the use of the seven and three levels for each factor for binary and ternary mixtures, respectively. The results of this study reveal that both the methods of PLS-1 and PRM are similar in terms of predict ability each binary mixtures. The resolution of ternary mixture has been accomplished by FFNNs. Multivariate curve resolution-alternating least squares (MCR-ALS) was applied for the description of spectra from the acid-base titration systems each individual compound, i.e. the resolution of the complex overlapping spectra as well as to interpret the extracted spectral and concentration profiles of any pure chemical species identified. Evolving factor analysis (EFA) and singular value decomposition (SVD) were used to distinguish the number of chemical species. Subsequently, their corresponding dissociation constants were derived. Finally, FFNNs has been used to detection active compounds in real and spiked water samples. PMID:27176001
NASA Astrophysics Data System (ADS)
Rasouli, Zolaikha; Ghavami, Raouf
2016-08-01
Vanillin (VA), vanillic acid (VAI) and syringaldehyde (SIA) are important food additives as flavor enhancers. The current study for the first time is devote to the application of partial least square (PLS-1), partial robust M-regression (PRM) and feed forward neural networks (FFNNs) as linear and nonlinear chemometric methods for the simultaneous detection of binary and ternary mixtures of VA, VAI and SIA using data extracted directly from UV-spectra with overlapped peaks of individual analytes. Under the optimum experimental conditions, for each compound a linear calibration was obtained in the concentration range of 0.61-20.99 [LOD = 0.12], 0.67-23.19 [LOD = 0.13] and 0.73-25.12 [LOD = 0.15] μg mL- 1 for VA, VAI and SIA, respectively. Four calibration sets of standard samples were designed by combination of a full and fractional factorial designs with the use of the seven and three levels for each factor for binary and ternary mixtures, respectively. The results of this study reveal that both the methods of PLS-1 and PRM are similar in terms of predict ability each binary mixtures. The resolution of ternary mixture has been accomplished by FFNNs. Multivariate curve resolution-alternating least squares (MCR-ALS) was applied for the description of spectra from the acid-base titration systems each individual compound, i.e. the resolution of the complex overlapping spectra as well as to interpret the extracted spectral and concentration profiles of any pure chemical species identified. Evolving factor analysis (EFA) and singular value decomposition (SVD) were used to distinguish the number of chemical species. Subsequently, their corresponding dissociation constants were derived. Finally, FFNNs has been used to detection active compounds in real and spiked water samples.
Tencate, Alister J; Kalivas, John H; White, Alexander J
2016-05-19
New multivariate calibration methods and other processes are being developed that require selection of multiple tuning parameter (penalty) values to form the final model. With one or more tuning parameters, using only one measure of model quality to select final tuning parameter values is not sufficient. Optimization of several model quality measures is challenging. Thus, three fusion ranking methods are investigated for simultaneous assessment of multiple measures of model quality for selecting tuning parameter values. One is a supervised learning fusion rule named sum of ranking differences (SRD). The other two are non-supervised learning processes based on the sum and median operations. The effect of the number of models evaluated on the three fusion rules are also evaluated using three procedures. One procedure uses all models from all possible combinations of the tuning parameters. To reduce the number of models evaluated, an iterative process (only applicable to SRD) is applied and thresholding a model quality measure before applying the fusion rules is also used. A near infrared pharmaceutical data set requiring model updating is used to evaluate the three fusion rules. In this case, calibration of the primary conditions is for the active pharmaceutical ingredient (API) of tablets produced in a laboratory. The secondary conditions for calibration updating is for tablets produced in the full batch setting. Two model updating processes requiring selection of two unique tuning parameter values are studied. One is based on Tikhonov regularization (TR) and the other is a variation of partial least squares (PLS). The three fusion methods are shown to provide equivalent and acceptable results allowing automatic selection of the tuning parameter values. Best tuning parameter values are selected when model quality measures used with the fusion rules are for the small secondary sample set used to form the updated models. In this model updating situation, evaluation of
Wang, Lei; Cao, Peng; Li, Wei; Tong, Peijin; Zhang, Xiaofang; Du, Yiping
2016-04-15
Solid Phase Extraction Spectroscopy (SPES) developed in this paper is a technique to measure spectrum directly on the solid phase material where the analytes are concentrated in SPE process. Membrane enrichment and UV-Visible spectroscopy were utilized to fulfill SPES, and multivariate calibration method of partial least squares (PLS) was used to simultaneously detect the concentrations of trace cobalt (II) and zinc (II) in water samples. The proposed method is simple, sensitive and selective. The complexes of analyte ions were collected on the cellulose acetate membranes via membrane filtration after the complexation reaction with 1-2-pyridylazo 2-naphthol (PAN). The spectra of the membranes which contained the complexes of metal ions and PAN were measured directly without eluting. The analytical conditions including pH, reaction time, sample volume, the amount of PAN, and flow rates were optimized. Nonionic surfactant Brij-30 was absorbed on the membranes prior to SPES to modify the membranes for improving the enrichment and spectrum measurement. The interference from other ions to the determination was investigated. Under the optimal condition, the absorbance was linearly related to the concentration at the range of 0.1-3.0 μg/L and 0.1-2.0 μg/L, with the correlation coefficients (R(2)) of 0.9977 and 0.9951 for Co (II) and Zn (II), respectively. The limits of detection were 0.066 μg/L for cobalt (II) and 0.104 μg/L for zinc (II). PLS regression with leave-one-out cross-validation was utilized to build models to detect cobalt (II) and zinc (II) in drinking water samples simultaneously. The correlation coefficient between ion concentration and spectrum of calibration set and independent prediction set were 1.0000 and 0.9974 for cobalt (II) and 1.0000 and 0.9956 for zinc (II). For cobalt (II) and zinc (II), the errors of the prediction set were in the range 0.0406-0.1353 μg/L and 0.0025-0.1884 μg/L.
NASA Astrophysics Data System (ADS)
Wang, Lei; Cao, Peng; Li, Wei; Tong, Peijin; Zhang, Xiaofang; Du, Yiping
2016-04-01
Solid Phase Extraction Spectroscopy (SPES) developed in this paper is a technique to measure spectrum directly on the solid phase material where the analytes are concentrated in SPE process. Membrane enrichment and UV-Visible spectroscopy were utilized to fulfill SPES, and multivariate calibration method of partial least squares (PLS) was used to simultaneously detect the concentrations of trace cobalt (II) and zinc (II) in water samples. The proposed method is simple, sensitive and selective. The complexes of analyte ions were collected on the cellulose acetate membranes via membrane filtration after the complexation reaction with 1-2-pyridylazo 2-naphthol (PAN). The spectra of the membranes which contained the complexes of metal ions and PAN were measured directly without eluting. The analytical conditions including pH, reaction time, sample volume, the amount of PAN, and flow rates were optimized. Nonionic surfactant Brij-30 was absorbed on the membranes prior to SPES to modify the membranes for improving the enrichment and spectrum measurement. The interference from other ions to the determination was investigated. Under the optimal condition, the absorbance was linearly related to the concentration at the range of 0.1-3.0 μg/L and 0.1-2.0 μg/L, with the correlation coefficients (R2) of 0.9977 and 0.9951 for Co (II) and Zn (II), respectively. The limits of detection were 0.066 μg/L for cobalt (II) and 0.104 μg/L for zinc (II). PLS regression with leave-one-out cross-validation was utilized to build models to detect cobalt (II) and zinc (II) in drinking water samples simultaneously. The correlation coefficient between ion concentration and spectrum of calibration set and independent prediction set were 1.0000 and 0.9974 for cobalt (II) and 1.0000 and 0.9956 for zinc (II). For cobalt (II) and zinc (II), the errors of the prediction set were in the range 0.0406-0.1353 μg/L and 0.0025-0.1884 μg/L.
Attia, Khalid A M; Nassar, Mohammed W I; El-Zeiny, Mohamed B; Serag, Ahmed
2017-01-01
For the first time, a new variable selection method based on swarm intelligence namely firefly algorithm is coupled with three different multivariate calibration models namely, concentration residual augmented classical least squares, artificial neural network and support vector regression in UV spectral data. A comparative study between the firefly algorithm and the well-known genetic algorithm was developed. The discussion revealed the superiority of using this new powerful algorithm over the well-known genetic algorithm. Moreover, different statistical tests were performed and no significant differences were found between all the models regarding their predictabilities. This ensures that simpler and faster models were obtained without any deterioration of the quality of the calibration.
Goicoechea, H C; Olivieri, A C
1999-07-12
The mucolitic bromhexine [N-(2-amino-3,5-dibromobenzyl)-N-methylcyclohexylamine] has been determined in cough suppressant syrups by multivariate spectrophotometric calibration, together with partial least-squares (PLS-1) and hybrid linear analysis (HLA). Notwithstanding the spectral overlapping between bromhexine and syrup excipients, as well as the intrinsic variability of the latter in unknown samples, the recoveries are excellent. A novel method of wavelength selection was also applied, based on the concept of net analyte signal regression, as adapted to the HLA methodology. This method allows one to improve the performance of both PLS-1 and HLA in samples containing nonmodeled interferences. PMID:18967655
NASA Astrophysics Data System (ADS)
Rientjes, T. H. M.; Muthuwatta, L. P.; Bos, M. G.; Booij, M. J.; Bhatti, H. A.
2013-11-01
A procedure is tested to complete energy balance based daily ETa series by MODIS data.The HVB model is calibrated on 2 water balance terms; ETa and stream flow (Q).HBV calibration on Q shows poor ETa results for inter-rainfall and recession periods.Multi-variable (MV) vs. single variable calibration showed best HBV performance.Large volume differences in Q and ETa do not essentially effect MV calibration.
Möltgen, C-V; Herdling, T; Reich, G
2013-11-01
This study demonstrates an approach, using science-based calibration (SBC), for direct coating thickness determination on heart-shaped tablets in real-time. Near-Infrared (NIR) spectra were collected during four full industrial pan coating operations. The tablets were coated with a thin hydroxypropyl methylcellulose (HPMC) film up to a film thickness of 28 μm. The application of SBC permits the calibration of the NIR spectral data without using costly determined reference values. This is due to the fact that SBC combines classical methods to estimate the coating signal and statistical methods for the noise estimation. The approach enabled the use of NIR for the measurement of the film thickness increase from around 8 to 28 μm of four independent batches in real-time. The developed model provided a spectroscopic limit of detection for the coating thickness of 0.64 ± 0.03 μm root-mean square (RMS). In the commonly used statistical methods for calibration, such as Partial Least Squares (PLS), sufficiently varying reference values are needed for calibration. For thin non-functional coatings this is a challenge because the quality of the model depends on the accuracy of the selected calibration standards. The obvious and simple approach of SBC eliminates many of the problems associated with the conventional statistical methods and offers an alternative for multivariate calibration.
A PID de-tuned method for multivariable systems, applied for HVAC plant
NASA Astrophysics Data System (ADS)
Ghazali, A. B.
2015-09-01
A simple yet effective de-tuning of PID parameters for multivariable applications has been described. Although the method is felt to have wider application it is simulated in a 3-input/ 2-output building energy management system (BEMS) with known plant dynamics. The controller performances such as the sum output squared error and total energy consumption when the system is at steady state conditions are studied. This tuning methodology can also be extended to reduce the number of PID controllers as well as the control inputs for specified output references that are necessary for effective results, i.e. with good regulation performances being maintained.
Pedersen, Kristine B; Lejon, Tore; Jensen, Pernille E; Ottosen, Lisbeth M
2016-05-01
Multivariate methodology was employed for finding optimum remediation conditions for electrodialytic remediation of harbour sediment from an Arctic location in Norway. The parts of the experimental domain in which both sediment- and technology-specific remediation objectives were met were identified. Objectives targeted were removal of the sediment-specific pollutants Cu and Pb, while minimising the effect on the sediment matrix by limiting the removal of naturally occurring metals while maintaining low energy consumption. Two different cell designs for electrochemical remediation were tested and final concentrations of Cu and Pb were below background levels in large parts of the experimental domain when operating at low current densities (<0.12 mA/cm(2)). However, energy consumption, remediation times and the effect on naturally occurring metals were different for the 2- and 3-compartment cells. PMID:26928331
Mouton, Nicolas; Devos, Olivier; Sliwa, Michel; de Juan, Anna; Ruckebusch, Cyril
2013-07-25
The main advantage of multivariate curve resolution - alternating least squares method (MCR-ALS) is the possibility to act as multiset analysis method, combining data coming from different experiments to provide a complete and more accurate description of a chemical system. Exploiting the multiset side, the combination of experiments obtained from two photo-active systems with complementary pathways and monitored by femtosecond UV-vis transient absorption spectroscopy is presented in this work. A multiset hard- and soft-multivariate curve resolution model (HS-MCR) was built allowing the description of the spectrokinetic features of the entire system. Additionally, reaction quantum yields were incorporated in the hard-model in order to describe branching ratios for intermediate species. The photodynamics of salicylidene aniline (SA) was investigated as a case study. The overall reaction scheme involves two competitive and parallel pathways. On the one hand, a photoinduced excited state intramolecular proton transfer (ESIPT) followed by a cis-trans isomerization leads to the so-called photochromic form of the molecule, which absorbs in the visible. The formation of the photochromic species is well characterized in the literature. On the other hand, a complex internal rotation of the molecule takes place, which is a competing reaction. The rotation mechanism is based on a trans-cis isomerization. This work aimed at providing a detailed spectrokinetic characterization of both reaction pathways for SA. For this purpose, the photodynamics of two molecules of identical parent structures and different substituent patterns were investigated in femtosecond transient absorption spectroscopy. For SA, the mechanism described above involving the two parallel pathways was observed, whereas for the derivative form of SA, the photochromic reaction was blocked because of the replacement of an H atom by a methyl group. The application of MCR approaches enabled to obtain transient
Mabood, Fazal; Al-Harrasi, Ahmed; Boqué, Ricard; Jabeen, Farah; Hussain, Javid; Hafidh, A; Hind, K; Ahmed, M A G; Manzoor, A; Hussain, Hidayat; Ur Rehman, Najeeb; Iman, S H; Said, Jahina J; Hamood, Sara A
2015-01-01
A Near Infrared (NIR) spectroscopic method combined with multivariate calibration was developed for the determination of the amount of sucrose in date fruits growing in the Sultanate of Oman. In this study two groups of samples were used: one group of 48 sucrose standard solutions in the concentration range from 0.01% to 50% (w/v) and another group of 54 date fruit samples of 18 different varieties. The sucrose standard samples were split in two sets, i.e. one training set of 31 samples and one test set of 17 samples. All samples were measured with a NIR spectrophotometer in the wavelength range from 700 to 2500 nm. The spectra collected were preprocessed using baseline correction and Savitzky-Golay 1st derivative. Partial least-squares regression (PLSR) was used to build the regression model with the training set of 31 samples. This model was then validated by using random leave-one-out cross-validation. Later, the PLS regression model was externally validated by using the test set of 17 samples of known sucrose concentration. The root mean squared error of prediction (RMSEP) was found to be of 1.5%, which shows a good prediction ability of the model. Finally, the PLS model was applied to the spectra of 54 date fruit samples to quantify their sucrose amount. It was found that the Khalas, Barnia Nizwi, Ajwa Almadina, Maan, and Khunizi varieties contain high amounts of sucrose, i.e. ranging from 36% to 60%, while Naghal, Fardh, Nashu and Qash Tabaq varieties contain the least amount of sucrose, ranging from 3.5% to 8.1%.
A Field Method for Backscatter Calibration Applied to NOAA's Reson 7125 Multibeam Echo-Sounders
NASA Astrophysics Data System (ADS)
Welton, Briana
Acoustic seafloor backscatter measurements made by multiple Reson multibeam echo-sounders (MBES) used for hydrographic survey are observed to be inconsistent, affecting the quality of data products and impeding large-scale processing efforts. A method to conduct a relative inter and intea sonar calibration in the field using dual frequency Reson 7125 MBES has been developed, tested, and evaluated to improve the consistency of backscatter measurements made from multiple MBES systems. The approach is unique in that it determines a set of corrections for power, gain, pulse length, and an angle dependent calibration term relative to a single Reson 7125 MBES calibrated in an acoustic test tank. These corrections for each MBES can then be applied during processing for any acquisition setting combination. This approach seeks to reduce the need for subjective and inefficient manual data or data product manipulation during post processing, providing a foundation for improved automated seafloor characterization using data from more than one MBES system.
Capitán-Vallvey, L F; Fernández, M D; de Orbe, I; Vilchez, J L; Avidad, R
1997-04-01
A method for the simultaneous determination of the colorants Sunset Yellow FCF and Quinoline Yellow using solid-phase spectrophotometry is proposed. The colorants were isolated in Sephadex DEAE A-25 gel at pH 5.0, the gel-colorants system was packed in a 1 mm silica cell and spectra were recorded between 400 and 600 nm against a blank. Statistical results were obtained by partial least squares (PLS) multivariate calibration. The optimized matrix by using the PLS-2 method enables the determination of the colorants in artificial mixtures and commercial soft drinks.
Attia, Khalid A M; Nassar, Mohammed W I; El-Zeiny, Mohamed B; Serag, Ahmed
2017-01-01
For the first time, a new variable selection method based on swarm intelligence namely firefly algorithm is coupled with three different multivariate calibration models namely, concentration residual augmented classical least squares, artificial neural network and support vector regression in UV spectral data. A comparative study between the firefly algorithm and the well-known genetic algorithm was developed. The discussion revealed the superiority of using this new powerful algorithm over the well-known genetic algorithm. Moreover, different statistical tests were performed and no significant differences were found between all the models regarding their predictabilities. This ensures that simpler and faster models were obtained without any deterioration of the quality of the calibration. PMID:27423110
NASA Astrophysics Data System (ADS)
Chen, Quan; Kissel, Catherine; Govin, Aline; Liu, Zhifei; Xie, Xin
2016-05-01
Fast and nondestructive X-ray fluorescence (XRF) core scanning provides high-resolution element data that are widely used in paleoclimate studies. However, various matrix and specimen effects prevent the use of semiquantitative raw XRF core-scanning intensities for robust paleoenvironmental interpretations. We present here a case study of a 50.8 m-long piston Core MD12-3432 retrieved from the northern South China Sea. The absorption effect of interstitial water is identified as the major source of deviations between XRF core-scanning intensities and measured element concentrations. The existing two calibration methods, i.e., normalized median-scaled calibration (NMS) and multivariate log-ratio calibration (MLC), are tested with this sequence after the application of water absorption correction. The results indicate that an improvement is still required to appropriately correct the influence of downcore changes in interstitial water content in the long sediment core. Consequently, we implement a new polynomial water content correction in NMS and MLC methods, referred as NPS and P_MLC calibrations. Results calibrated by these two improved methods indicate that the influence of downcore water content changes is now appropriately corrected. We therefore recommend either of the two methods to be applied for robust paleoenvironmental interpretations of major elements measured by XRF-scanning in long sediment sequences with significant downcore interstitial water content changes.
A Multivariate Randomization Text of Association Applied to Cognitive Test Results
NASA Technical Reports Server (NTRS)
Ahumada, Albert; Beard, Bettina
2009-01-01
Randomization tests provide a conceptually simple, distribution-free way to implement significance testing. We have applied this method to the problem of evaluating the significance of the association among a number (k) of variables. The randomization method was the random re-ordering of k-1 of the variables. The criterion variable was the value of the largest eigenvalue of the correlation matrix.
Naccarato, Attilio; Furia, Emilia; Sindona, Giovanni; Tagarelli, Antonio
2016-09-01
Four class-modeling techniques (soft independent modeling of class analogy (SIMCA), unequal dispersed classes (UNEQ), potential functions (PF), and multivariate range modeling (MRM)) were applied to multielement distribution to build chemometric models able to authenticate chili pepper samples grown in Calabria respect to those grown outside of Calabria. The multivariate techniques were applied by considering both all the variables (32 elements, Al, As, Ba, Ca, Cd, Ce, Co, Cr, Cs, Cu, Dy, Fe, Ga, La, Li, Mg, Mn, Na, Nd, Ni, Pb, Pr, Rb, Sc, Se, Sr, Tl, Tm, V, Y, Yb, Zn) and variables selected by means of stepwise linear discriminant analysis (S-LDA). In the first case, satisfactory and comparable results in terms of CV efficiency are obtained with the use of SIMCA and MRM (82.3 and 83.2% respectively), whereas MRM performs better than SIMCA in terms of forced model efficiency (96.5%). The selection of variables by S-LDA permitted to build models characterized, in general, by a higher efficiency. MRM provided again the best results for CV efficiency (87.7% with an effective balance of sensitivity and specificity) as well as forced model efficiency (96.5%).
Naccarato, Attilio; Furia, Emilia; Sindona, Giovanni; Tagarelli, Antonio
2016-09-01
Four class-modeling techniques (soft independent modeling of class analogy (SIMCA), unequal dispersed classes (UNEQ), potential functions (PF), and multivariate range modeling (MRM)) were applied to multielement distribution to build chemometric models able to authenticate chili pepper samples grown in Calabria respect to those grown outside of Calabria. The multivariate techniques were applied by considering both all the variables (32 elements, Al, As, Ba, Ca, Cd, Ce, Co, Cr, Cs, Cu, Dy, Fe, Ga, La, Li, Mg, Mn, Na, Nd, Ni, Pb, Pr, Rb, Sc, Se, Sr, Tl, Tm, V, Y, Yb, Zn) and variables selected by means of stepwise linear discriminant analysis (S-LDA). In the first case, satisfactory and comparable results in terms of CV efficiency are obtained with the use of SIMCA and MRM (82.3 and 83.2% respectively), whereas MRM performs better than SIMCA in terms of forced model efficiency (96.5%). The selection of variables by S-LDA permitted to build models characterized, in general, by a higher efficiency. MRM provided again the best results for CV efficiency (87.7% with an effective balance of sensitivity and specificity) as well as forced model efficiency (96.5%). PMID:27041319
Multivariate analysis applied to agglomerated macrobenthic data from an unpolluted estuary.
Conde, Anxo; Novais, Júlio M; Domínguez, Jorge
2013-01-01
We agglomerated species into higher taxonomic aggregations and functional groups to analyse environmental gradients in an unpolluted estuary. We then applied non-metric Multidimensional Scaling and Redundancy Analysis (RDA) for ordination of the agglomerated data matrices. The correlation between the ordinations produced by both methods was generally high. However, the performance of the RDA models depended on the data matrix used to fit the model. As a result, salinity and total nitrogen were only found significant when aggregated data matrices were used rather than species data matrix. We used the results to select a RDA model that explained a higher percentage of variance in the species data set than the parsimonious model. We conclude that the use of aggregated matrices may be considered complementary to the use of species data to obtain a broader insight into the distribution of macrobenthic assemblages in relation to environmental gradients. PMID:23684322
Multivariate analysis applied to monthly rainfall over Rio de Janeiro state, Brazil
NASA Astrophysics Data System (ADS)
Brito, Thábata T.; Oliveira-Júnior, José F.; Lyra, Gustavo B.; Gois, Givanildo; Zeri, Marcelo
2016-10-01
Spatial and temporal patterns of rainfall were identified over the state of Rio de Janeiro, southeast Brazil. The proximity to the coast and the complex topography create great diversity of rainfall over space and time. The dataset consisted of time series (1967-2013) of monthly rainfall over 100 meteorological stations. Clustering analysis made it possible to divide the stations into six groups (G1, G2, G3, G4, G5 and G6) with similar rainfall spatio-temporal patterns. A linear regression model was applied to a time series and a reference. The reference series was calculated from the average rainfall within a group, using nearby stations with higher correlation (Pearson). Based on t-test (p < 0.05) all stations had a linear spatiotemporal trend. According to the clustering analysis, the first group (G1) contains stations located over the coastal lowlands and also over the ocean facing area of Serra do Mar (Sea ridge), a 1500 km long mountain range over the coastal Southeastern Brazil. The second group (G2) contains stations over all the state, from Serra da Mantiqueira (Mantiqueira Mountains) and Costa Verde (Green coast), to the south, up to stations in the Northern parts of the state. Group 3 (G3) contains stations in the highlands over the state (Serrana region), while group 4 (G4) has stations over the northern areas and the continent-facing side of Serra do Mar. The last two groups were formed with stations around Paraíba River (G5) and the metropolitan area of the city of Rio de Janeiro (G6). The driest months in all regions were June, July and August, while November, December and January were the rainiest months. Sharp transitions occurred when considering monthly accumulated rainfall: from January to February, and from February to March, likely associated with episodes of "veranicos", i.e., periods of 4-15 days of duration with no rainfall.
Calibration methodology for proportional counters applied to yield measurements of a neutron burst.
Tarifeño-Saldivia, Ariel; Mayer, Roberto E; Pavez, Cristian; Soto, Leopoldo
2014-01-01
This paper introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. This methodology is to be applied when single neutron events cannot be resolved in time by nuclear standard electronics, or when a continuous current cannot be measured at the output of the counter. The methodology is based on the calibration of the counter in pulse mode, and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from the detection of the burst of neutrons. The model is developed and presented in full detail. For the measurement of fast neutron yields generated from plasma focus experiments using a moderated proportional counter, the implementation of the methodology is herein discussed. An experimental verification of the accuracy of the methodology is presented. An improvement of more than one order of magnitude in the accuracy of the detection system is obtained by using this methodology with respect to previous calibration methods.
Li, Weiyong; Worosila, Gregory D
2005-05-13
This research note demonstrates the simultaneous quantitation of a pharmaceutical active ingredient and three excipients in a simulated powder blend containing acetaminophen, Prosolv and Crospovidone. An experimental design approach was used in generating a 5-level (%, w/w) calibration sample set that included 125 samples. The samples were prepared by weighing suitable amount of powders into separate 20-mL scintillation vials and were mixed manually. Partial least squares (PLS) regression was used in calibration model development. The models generated accurate results for quantitation of Crospovidone (at 5%, w/w) and magnesium stearate (at 0.5%, w/w). Further testing of the models demonstrated that the 2-level models were as effective as the 5-level ones, which reduced the calibration sample number to 50. The models had a small bias for quantitation of acetaminophen (at 30%, w/w) and Prosolv (at 64.5%, w/w) in the blend. The implication of the bias is discussed.
Liu, Yan; Cai, Wensheng; Shao, Xueguang
2016-12-01
Calibration transfer is essential for practical applications of near infrared (NIR) spectroscopy because the measurements of the spectra may be performed on different instruments and the difference between the instruments must be corrected. For most of calibration transfer methods, standard samples are necessary to construct the transfer model using the spectra of the samples measured on two instruments, named as master and slave instrument, respectively. In this work, a method named as linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for the samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. The fact makes the coefficients of the linear models constructed by the spectra measured on different instruments are similar in profile. Therefore, by using the constrained optimization method, the coefficients of the master model can be transferred into that of the slave model with a few spectra measured on slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both the datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary in the method, it may be more useful in practical uses. PMID:27380302
Hou, Siyuan; Riley, Christopher B; Mitchell, Cynthia A; Shaw, R Anthony; Bryanton, Janet; Bigsby, Kathryn; McClure, J Trenton
2015-09-01
Immunoglobulin G (IgG) is crucial for the protection of the host from invasive pathogens. Due to its importance for human health, tools that enable the monitoring of IgG levels are highly desired. Consequently there is a need for methods to determine the IgG concentration that are simple, rapid, and inexpensive. This work explored the potential of attenuated total reflectance (ATR) infrared spectroscopy as a method to determine IgG concentrations in human serum samples. Venous blood samples were collected from adults and children, and from the umbilical cord of newborns. The serum was harvested and tested using ATR infrared spectroscopy. Partial least squares (PLS) regression provided the basis to develop the new analytical methods. Three PLS calibrations were determined: one for the combined set of the venous and umbilical cord serum samples, the second for only the umbilical cord samples, and the third for only the venous samples. The number of PLS factors was chosen by critical evaluation of Monte Carlo-based cross validation results. The predictive performance for each PLS calibration was evaluated using the Pearson correlation coefficient, scatter plot and Bland-Altman plot, and percent deviations for independent prediction sets. The repeatability was evaluated by standard deviation and relative standard deviation. The results showed that ATR infrared spectroscopy is potentially a simple, quick, and inexpensive method to measure IgG concentrations in human serum samples. The results also showed that it is possible to build a united calibration curve for the umbilical cord and the venous samples.
Michałowska-Kaczmarczyk, Anna Maria; Asuero, Agustin G; Martin, Julia; Alonso, Esteban; Jurado, Jose Marcos; Michałowski, Tadeusz
2014-12-01
Rational functions of the Padé type are used for the calibration curve (CCM), and standard addition (SAM) methods purposes. In this paper, the related functions were applied to results obtained from the analyses of (a) nickel with use of FAAS method, (b) potassium according to FAES method, and (c) salicylic acid according to HPLC-MS/MS method. A uniform, integral criterion of nonlinearity of the curves, obtained according to CCM and SAM, is suggested. This uniformity is based on normalization of the approximating functions within the frames of a unit area.
Pereira, Claudete Fernandes; Pasquini, Celio
2010-05-01
A flow system is proposed to produce a concentration perturbation in liquid samples, aiming at the generation of two-dimensional correlation near-infrared spectra. The system presents advantages in relation to batch systems employed for the same purpose: the experiments are accomplished in a closed system; application of perturbation is rapid and easy; and the experiments can be carried out with micro-scale volumes. The perturbation system has been evaluated in the investigation and selection of relevant variables for multivariate calibration models for the determination of quality parameters of gasoline, including ethanol content, MON (motor octane number), and RON (research octane number). The main advantage of this variable selection approach is the direct association between spectral features and chemical composition, allowing easy interpretation of the regression models. PMID:20482969
NASA Astrophysics Data System (ADS)
Barbeira, Paulo J. S.; Paganotti, Rosilene S. N.; Ássimos, Ariane A.
2013-10-01
This study had the objective of determining the content of dry extract of commercial alcoholic extracts of bee propolis through Partial Least Squares (PLS) multivariate calibration and electronic spectroscopy. The PLS model provided a good prediction of dry extract content in commercial alcoholic extracts of bee propolis in the range of 2.7 a 16.8% (m/v), presenting the advantage of being less laborious and faster than the traditional gravimetric methodology. The PLS model was optimized with outlier detection tests according to the ASTM E 1655-05. In this study it was possible to verify that a centrifugation stage is extremely important in order to avoid the presence of waxes, resulting in a more accurate model. Around 50% of the analyzed samples presented content of dry extract lower than the value established by Brazilian legislation, in most cases, the values found were different from the values claimed in the product's label.
Sessa, Clarimma; Bagán, Héctor; García, Jose Francisco
2014-10-01
Mid-infrared fiberoptics reflectance spectroscopy (mid-IR FORS) is a very interesting technique for artwork characterization purposes. However, the fact that the spectra obtained are a mixture of surface (specular) and volume (diffuse) reflection is a significant drawback. The physical and chemical features of the artwork surface may produce distortions in the spectra that hinder comparison with reference databases acquired in transmission mode. Several studies attempted to understand the influence of the different variables and propose procedures to improve the interpretation of the spectra. This article is focused on the application of mid-IR FORS and multivariate calibration to the analysis of easel paintings. The objectives are the evaluation of the influence of the surface roughness on the spectra, the influence of the matrix composition for the classification of unknown spectra, and the capability of obtaining pigment composition mappings. A first evaluation of a fast procedure for spectra management and pigment discrimination is discussed. The results demonstrate the capability of multivariate methods, principal component analysis (PCA), and partial least squares discrimination analysis (PLS-DA), to model the distortions of the reflectance spectra and to delimitate and discriminate areas of uniform composition. The roughness of the painting surface is found to be an important factor affecting the shape and relative intensity of the spectra. A mapping of the major pigments of a painting is possible using mid-IR FORS and PLS-DA when the calibration set is a palette that includes the potential pigments present in the artwork mixed with the appropriate binder and that shows the different paint textures.
NASA Astrophysics Data System (ADS)
Zhang, Xiaoyu; Li, Qingbo; Zhang, Guangjun
2013-11-01
In this paper, a modified single-index signal regression (mSISR) method is proposed to construct a nonlinear and practical model with high-accuracy. The mSISR method defines the optimal penalty tuning parameter in P-spline signal regression (PSR) as initial tuning parameter and chooses the number of cycles based on minimizing root mean squared error of cross-validation (RMSECV). mSISR is superior to single-index signal regression (SISR) in terms of accuracy, computation time and convergency. And it can provide the character of the non-linearity between spectra and responses in a more precise manner than SISR. Two spectra data sets from basic research experiments, including plant chlorophyll nondestructive measurement and human blood glucose noninvasive measurement, are employed to illustrate the advantages of mSISR. The results indicate that the mSISR method (i) obtains the smooth and helpful regression coefficient vector, (ii) explicitly exhibits the type and amount of the non-linearity, (iii) can take advantage of nonlinear features of the signals to improve prediction performance and (iv) has distinct adaptability for the complex spectra model by comparing with other calibration methods. It is validated that mSISR is a promising nonlinear modeling strategy for multivariate calibration.
Marques Junior, Jucelino Medeiros; Muller, Aline Lima Hermes; Foletto, Edson Luiz; da Costa, Adilson Ben; Bizzi, Cezar Augusto; Irineu Muller, Edson
2015-01-01
A method for determination of propranolol hydrochloride in pharmaceutical preparation using near infrared spectrometry with fiber optic probe (FTNIR/PROBE) and combined with chemometric methods was developed. Calibration models were developed using two variable selection models: interval partial least squares (iPLS) and synergy interval partial least squares (siPLS). The treatments based on the mean centered data and multiplicative scatter correction (MSC) were selected for models construction. A root mean square error of prediction (RMSEP) of 8.2 mg g−1 was achieved using siPLS (s2i20PLS) algorithm with spectra divided into 20 intervals and combination of 2 intervals (8501 to 8801 and 5201 to 5501 cm−1). Results obtained by the proposed method were compared with those using the pharmacopoeia reference method and significant difference was not observed. Therefore, proposed method allowed a fast, precise, and accurate determination of propranolol hydrochloride in pharmaceutical preparations. Furthermore, it is possible to carry out on-line analysis of this active principle in pharmaceutical formulations with use of fiber optic probe. PMID:25861516
De Almeida Brehm, Franciane; de Azevedo, Julio Cesar R; da Costa Pereira, Jorge; Burrows, Hugh D
2015-11-01
Dissolved organic carbon (DOC) is frequently used as a diagnostic parameter for the identification of environmental contamination in aqueous systems. Since this organic matter is evolving and decaying over time. If samples are collected under environmental conditions, some sample stabilization process is needed until the corresponding analysis can be made. This may affect the analysis results. This problem can be avoided using the direct determination of DOC. We report a study using in situ synchronous fluorescence spectra, with independent component analysis to retrieve relevant major spectral contributions and their respective component contributions, for the direct determination of DOC. Fluorescence spectroscopy is a very powerful and sensitive technique to evaluate vestigial organic matter dissolved in water and is thus suited for the analytical task of direct monitoring of dissolved organic matter in water, thus avoiding the need for the stabilization step. We also report the development of an accurate calibration model for dissolved organic carbon determinations using environmental samples of humic and fulvic acids. The method described opens the opportunity for a fast, in locus, DOC estimation in environmental or other field studies using a portable fluorescence spectrometer. This combines the benefits of the use of fresh samples, without the need of stabilizers, and also allows the interpretation of various additional spectral contributions based on their respective estimated properties. We show how independent component analysis may be used to describe tyrosine, tryptophan, humic acid and fulvic acid spectra and, thus, to retrieve the respective individual component contribution to the DOC. PMID:26497563
Marques Junior, Jucelino Medeiros; Muller, Aline Lima Hermes; Foletto, Edson Luiz; da Costa, Adilson Ben; Bizzi, Cezar Augusto; Irineu Muller, Edson
2015-01-01
A method for determination of propranolol hydrochloride in pharmaceutical preparation using near infrared spectrometry with fiber optic probe (FTNIR/PROBE) and combined with chemometric methods was developed. Calibration models were developed using two variable selection models: interval partial least squares (iPLS) and synergy interval partial least squares (siPLS). The treatments based on the mean centered data and multiplicative scatter correction (MSC) were selected for models construction. A root mean square error of prediction (RMSEP) of 8.2 mg g(-1) was achieved using siPLS (s2i20PLS) algorithm with spectra divided into 20 intervals and combination of 2 intervals (8501 to 8801 and 5201 to 5501 cm(-1)). Results obtained by the proposed method were compared with those using the pharmacopoeia reference method and significant difference was not observed. Therefore, proposed method allowed a fast, precise, and accurate determination of propranolol hydrochloride in pharmaceutical preparations. Furthermore, it is possible to carry out on-line analysis of this active principle in pharmaceutical formulations with use of fiber optic probe.
NASA Astrophysics Data System (ADS)
Land, Walker H., Jr.; Anderson, Frances; Smith, Tom; Fahlbusch, Stephen; Choma, Robert; Wong, Lut
2005-04-01
Achieving consistent and correct database cases is crucial to the correct evaluation of any computer-assisted diagnostic (CAD) paradigm. This paper describes the application of artificial intelligence (AI), knowledge engineering (KE) and knowledge representation (KR) to a data set of ~2500 cases from six separate hospitals, with the objective of removing/reducing inconsistent outlier data. Several support vector machine (SVM) kernels were used to measure diagnostic performance of the original and a "cleaned" data set. Specifically, KE and ER principles were applied to the two data sets which were re-examined with respect to the environment and agents. One data set was found to contain 25 non-characterizable sets. The other data set contained 180 non-characterizable sets. CAD system performance was measured with both the original and "cleaned" data sets using two SVM kernels as well as a multivariate probabilistic neural network (PNN). Results demonstrated: (i) a 10% average improvement in overall Az and (ii) approximately a 50% average improvement in partial Az.
Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia
2016-02-18
Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite the high accuracy, this might result in no solution due to the use of ill conditioned matrices. In this paper, a novel coordinate transformation method is proposed, not based on the equation solution but based on the geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the space geometry relation, the characteristic line scan is made to coincide by a series of rotations and translations. The transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot with the calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of robot manipulator is improved by 45.8% with the proposed method and robot calibration.
Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia
2016-01-01
Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite the high accuracy, this might result in no solution due to the use of ill conditioned matrices. In this paper, a novel coordinate transformation method is proposed, not based on the equation solution but based on the geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the space geometry relation, the characteristic line scan is made to coincide by a series of rotations and translations. The transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot with the calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of robot manipulator is improved by 45.8% with the proposed method and robot calibration. PMID:26901203
Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia
2016-01-01
Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite the high accuracy, this might result in no solution due to the use of ill conditioned matrices. In this paper, a novel coordinate transformation method is proposed, not based on the equation solution but based on the geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the space geometry relation, the characteristic line scan is made to coincide by a series of rotations and translations. The transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot with the calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of robot manipulator is improved by 45.8% with the proposed method and robot calibration. PMID:26901203
Michałowski, Tadeusz; Pilarski, Bogusław; Michałowska-Kaczmarczyk, Anna M; Kukwa, Agata
2014-06-01
Some rational functions of the Padé type, y=y(x; n,m), were applied to the calibration curve method (CCM), and compared with a parabolic function. The functions were tested on the results obtained from calibration of ion-selective electrodes: NH4-ISE, Ca-ISE, and F-ISE. A validity of the functions y=y(x; 2,1), y=y(x; 1,1), and y=y(x; 2,0) (parabolic) was compared. A uniform, integral criterion of nonlinearity of calibration curves is suggested. This uniformity is based on normalization of the approximating functions within the frames of a unit area.
Moncada, Guillermo Wells; González Martín, Ma Inmaculada; Escuredo, Olga; Fischer, Susana; Míguez, Montserrat
2013-11-15
Quinoa is a pseudocereal that is grown mainly in the Andes. It is a functional food supplement and ingredient in the preparation of highly nutritious food. In this paper we evaluate the potential of near infrared spectroscopy (NIR) for the determination of vitamin E and antioxidant capacity in the quinoa as total phenol content (TPC), radical scavenging activity by DPPH (2,2-diphenyl-2-picryl-hydrazyl) and cupric reducing antioxidant capacity (CUPRAC) expressed as gallic acid equivalent (GAE). For recording NIR a fiber optic remote reflectance probe applied directly on the quinoa samples without treatment was used. The regression method used was modified partial least squares (MPLS). The multiple correlation coefficients (RSQ) and the standard prediction error corrected (SEP(C)) were for the vitamin E (0.841 and 1.70 mg 100 g(-1)) and for the antioxidants TPC (0.947 and 0.08 mg GAE g(-1)), DPPH radical (0.952 and 0.23 mg GAE g(-1)) and CUPRAC ( 0.623 and 0.21 mg GAE g(-1)), respectively. The prediction capacity of the model developed measured by the ratio performance deviation (RPD) for vitamin E (2.51), antioxidants TPC (4.33), DPPH radical (4.55) and CUPRAC (1.55) indicated that NIRS with a fiber optic probe provides an alternative for the determination of vitamin E and antioxidant properties of the quinoa, with a lower cost, higher speed and results comparable with the chemical methods.
Farouk, M; Elaziz, Omar Abd; Tawakkol, Shereen M; Hemdan, A; Shehata, Mostafa A
2014-04-01
Four simple, accurate, reproducible, and selective methods have been developed and subsequently validated for the determination of Benazepril (BENZ) alone and in combination with Amlodipine (AML) in pharmaceutical dosage form. The first method is pH induced difference spectrophotometry, where BENZ can be measured in presence of AML as it showed maximum absorption at 237nm and 241nm in 0.1N HCl and 0.1N NaOH, respectively, while AML has no wavelength shift in both solvents. The second method is the new Extended Ratio Subtraction Method (EXRSM) coupled to Ratio Subtraction Method (RSM) for determination of both drugs in commercial dosage form. The third and fourth methods are multivariate calibration which include Principal Component Regression (PCR) and Partial Least Squares (PLSs). A detailed validation of the methods was performed following the ICH guidelines and the standard curves were found to be linear in the range of 2-30μg/mL for BENZ in difference and extended ratio subtraction spectrophotometric method, and 5-30 for AML in EXRSM method, with well accepted mean correlation coefficient for each analyte. The intra-day and inter-day precision and accuracy results were well within the acceptable limits.
Augmented Classical Least Squares Multivariate Spectral Analysis
Haaland, David M.; Melgaard, David K.
2005-01-11
A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
Augmented Classical Least Squares Multivariate Spectral Analysis
Haaland, David M.; Melgaard, David K.
2005-07-26
A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
Augmented classical least squares multivariate spectral analysis
Haaland, David M.; Melgaard, David K.
2004-02-03
A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
Calibration and uncertainty issues of a hydrological model (SWAT) applied to West Africa
NASA Astrophysics Data System (ADS)
Schuol, J.; Abbaspour, K. C.
2006-09-01
Distributed hydrological models like SWAT (Soil and Water Assessment Tool) are often highly over-parameterized, making parameter specification and parameter estimation inevitable steps in model calibration. Manual calibration is almost infeasible due to the complexity of large-scale models with many objectives. Therefore we used a multi-site semi-automated inverse modelling routine (SUFI-2) for calibration and uncertainty analysis. Nevertheless, the question of when a model is sufficiently calibrated remains open, and requires a project dependent definition. Due to the non-uniqueness of effective parameter sets, parameter calibration and prediction uncertainty of a model are intimately related. We address some calibration and uncertainty issues using SWAT to model a four million km2 area in West Africa, including mainly the basins of the river Niger, Volta and Senegal. This model is a case study in a larger project with the goal of quantifying the amount of global country-based available freshwater. Annual and monthly simulations with the "calibrated" model for West Africa show promising results in respect of the freshwater quantification but also point out the importance of evaluating the conceptual model uncertainty as well as the parameter uncertainty.
Cantarelli, Miguel A; Pellerano, Roberto G; Marchevsky, Eduardo J; Camiña, José M
2008-10-22
A new method to determine a mixture for sweetener sodium saccharin and aspartame in commercial noncaloric sweeteners is proposed. A classical full factorial design for standards was used in the calibration step to build the partial least-squares (PLS-2) model. Instrumental data were obtained by means of UV-visible spectrophotometry. Salicylic acid was used as an internal standard to evaluate the adjustment of the real samples to the PLS model. The concentration of analytes in the commercial samples was evaluated using the obtained model by UV spectral data. The PLS-2 method was validated by capillary zone electrophoresis (CZE), finding in all cases a relative error of less than 11% between the PLS-2 and the CZE methods. The proposed procedure was applied successfully to the determination of saccharin and aspartame in noncaloric commercial sweeteners.
Lefèvre, Thomas; Rondet, Claire; Parizot, Isabelle; Chauvin, Pierre
2014-01-01
Background Cost containment policies and the need to satisfy patients’ health needs and care expectations provide major challenges to healthcare systems. Identification of homogeneous groups in terms of healthcare utilisation could lead to a better understanding of how to adjust healthcare provision to society and patient needs. Methods This study used data from the third wave of the SIRS cohort study, a representative, population-based, socio-epidemiological study set up in 2005 in the Paris metropolitan area, France. The data were analysed using a cross-sectional design. In 2010, 3000 individuals were interviewed in their homes. Non-conventional multivariate clustering techniques were used to determine homogeneous user groups in data. Multinomial models assessed a wide range of potential associations between user characteristics and their pattern of healthcare utilisation. Results We identified four distinct patterns of healthcare use. Patterns of consumption and the socio-demographic characteristics of users differed qualitatively and quantitatively between these four profiles. Extensive and intensive use by older, wealthier and unhealthier people contrasted with narrow and parsimonious use by younger, socially deprived people and immigrants. Rare, intermittent use by young healthy men contrasted with regular targeted use by healthy and wealthy women. Conclusion The use of an original technique of massive multivariate analysis allowed us to characterise different types of healthcare users, both in terms of resource utilisation and socio-demographic variables. This method would merit replication in different populations and healthcare systems. PMID:25506916
Geist, David R. ); Brown, Richard S.; Lepla, Ken; Chandler, James P.
2001-12-01
One of the practical problems with quantifying the amount of energy used by fish implanted with electromyogram (EMG) radio transmitters is that the signals emitted by the transmitter provide only a relative index of activity unless they are calibrated to the swimming speed of the fish. Ideally calibration would be conducted for each fish before it is released, but this is often not possible and calibration curves derived from more than one fish are used to interpret EMG signals from individuals which have not been calibrated. We tested the validity of this approach by comparing EMG data within three groups of three wild juvenile white sturgeon Acipenser transmontanus implanted with the same EMG radio transmitter. We also tested an additional six fish which were implanted with separate EMG transmitters. Within each group, a single EMG radio transmitter usually did not produce similar results in different fish. Grouping EMG signals among fish produced less accurate results than having individual EMG-swim speed relationships for each fish. It is unknown whether these differences were a result of different swimming performances among individual fish or inconsistencies in the placement or function of the EMG transmitters. In either case, our results suggest that caution should be used when applying calibration curves from one group of fish to another group of uncalibrated fish.
Calibration methodology for proportional counters applied to yield measurements of a neutron burst
NASA Astrophysics Data System (ADS)
Tarifeño-Saldivia, Ariel; Mayer, Roberto E.; Pavez, Cristian; Soto, Leopoldo
2015-03-01
This work introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. The methodology is based on the calibration of the counter in pulse mode, and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from detection of the burst of neutrons. An improvement of more than one order of magnitude in the accuracy of a paraffin wax moderated 3He-filled tube is obtained by using this methodology with respect to previous calibration methods.
Schenone, Agustina V; Culzoni, María J; Marsili, Nilda R; Goicoechea, Héctor C
2013-06-01
The performance of MCR-ALS was studied in the modeling of non-linear kinetic-spectrophotometric data acquired by a stopped-flow system for the quantitation of tartrazine in the presence of brilliant blue and sunset yellow FCF as possible interferents. In the present work, MCR-ALS and U-PCA/RBL were firstly applied to remove the contribution of unexpected components not included in the calibration set. Secondly, a polynomial function was used to model the non-linear data obtained by the implementation of the algorithms. MCR-ALS was the only strategy that allowed the determination of tartrazine in test samples accurately. Therefore, it was applied for the analysis of tartrazine in beverage samples with minimum sample preparation and short analysis time. The proposed method was validated by comparison with a chromatographic procedure published in the literature. Mean recovery values between 98% and 100% and relative errors of prediction values between 4% and 9% were indicative of the good performance of the method.
NASA Astrophysics Data System (ADS)
Chu, Ning; Fan, Shihua
2009-12-01
A new analytical method was developed for the simultaneous kinetic spectrophotometric determination of a quaternary carbamate pesticide mixture consisting of carbofuran, propoxur, metolcarb and fenobucarb using sequential injection analysis (SIA). The procedure was based upon the different kinetic properties between the analytes reacted with reagent in flow system in the non-stopped-flow mode, in which their hydrolysis products coupled with diazotized p-nitroaniline in an alkaline medium to form the corresponding colored complexes. The absorbance data from SIA peak time profile were recorded at 510 nm and resolved by the use of back-propagation-artificial neural network (BP-ANN) algorithms for multivariate quantitative analysis. The experimental variables and main network parameters were optimized and each of the pesticides could be determined in the concentration range of 0.5-10.0 μg mL -1, at a sampling frequency of 18 h -1. The proposed method was compared to other spectrophotometric methods for simultaneous determination of mixtures of carbamate pesticides, and it was proved to be adequately reliable and was successfully applied to the simultaneous determination of the four pesticide residues in water and fruit samples, obtaining the satisfactory results based on recovery studies (84.7-116.0%).
NASA Astrophysics Data System (ADS)
Minaya, Veronica; Corzo, Gerald; van der Kwast, Johannes; Galarraga, Remigio; Mynett, Arthur
2014-05-01
Simulations of carbon cycling are prone to uncertainties from different sources, which in general are related to input data, parameters and the model representation capacities itself. The gross carbon uptake in the cycle is represented by the gross primary production (GPP), which deals with the spatio-temporal variability of the precipitation and the soil moisture dynamics. This variability associated with uncertainty of the parameters can be modelled by multivariate probabilistic distributions. Our study presents a novel methodology that uses multivariate Copulas analysis to assess the GPP. Multi-species and elevations variables are included in a first scenario of the analysis. Hydro-meteorological conditions that might generate a change in the next 50 or more years are included in a second scenario of this analysis. The biogeochemical model BIOME-BGC was applied in the Ecuadorian Andean region in elevations greater than 4000 masl with the presence of typical vegetation of páramo. The change of GPP over time is crucial for climate scenarios of the carbon cycling in this type of ecosystem. The results help to improve our understanding of the ecosystem function and clarify the dynamics and the relationship with the change of climate variables. Keywords: multivariate analysis, Copula, BIOME-BGC, NPP, páramos
Technology Transfer Automated Retrieval System (TEKTRAN)
In multivariate regression analysis of spectroscopy data, spectral preprocessing is often performed to reduce unwanted background information (offsets, sloped baselines) or accentuate absorption features in intrinsically overlapping bands. These procedures, also known as pretreatments, are commonly ...
Abdel-Aziz, Omar; El Kosasy, A M; El-Sayed Okeil, S M
2014-12-10
A modified dispersive liquid-liquid extraction (DLLE) procedure coupled with spectrophotometric techniques was adopted for simultaneous determination of naphthalene, anthracene, benzo(a)pyrene, alpha-naphthol and beta-naphthol in water samples. Two different methods were used, partial least-squares (PLS) method and a new derivative ratio method, namely extended derivative ratio (EDR). A PLS-2 model was established for simultaneous determination of the studied pollutants in methanol, by using twenty mixtures as calibration set and five mixtures as validation set. Also, in methanol a novel (EDR) method was developed for determination of the studied pollutants, where each component in the mixture of the five PAHs was determined by using a mixture of the other four components as divisor. Chemometric and EDR methods could be also adopted for determination of the studied PAH in water samples after transferring them from aqueous medium to the organic one by utilizing dispersive liquid-liquid extraction technique, where different parameters were investigated using a full factorial design. Both methods were compared and the proposed method was validated according to ICH guidelines and successfully applied to determine these PAHs simultaneously in spiked water samples, where satisfactory results were obtained. All the results obtained agreed with those of published methods, where no significant difference was observed. PMID:24934969
NASA Astrophysics Data System (ADS)
Xie, Yunfei; Song, Yan; Zhang, Yong; Zhao, Bing
2010-05-01
Pefloxacin mesylate, a broad-spectrum antibacterial fluoroquinolone, has been widely used in clinical practice. Therefore, it is very important to detect the concentration of Pefloxacin mesylate. In this research, the near-infrared spectroscopy (NIRS) has been applied to quantitatively analyze on 108 injection samples, which was divided into a calibration set containing 89 samples and a prediction set containing 19 samples randomly. In order to get a satisfying result, partial least square (PLS) regression and principal components regression (PCR) have been utilized to establish quantitative models. Also, the process of establishing the models, parameters of the models, and prediction results were discussed in detail. In the PLS regression, the values of the coefficient of determination ( R2) and root mean square error of cross-validation (RMSECV) of PLS regression are 0.9263 and 0.00119, respectively. For comparison, though applying PCR method to get the values of R2 and RMSECV we obtained are 0.9685 and 0.00108, respectively. And the values of the standard error of prediction set (SEP) of PLS and PCR models are 0.001480 and 0.001140. The result of the prediction set suggests that these two quantitative analysis models have excellent generalization ability and prediction precision. However, for this PFLX injection samples, the PCR quantitative analysis model achieved more accurate results than the PLS model. The experimental results showed that NIRS together with PCR method provide rapid and accurate quantitative analysis of PFLX injection samples. Moreover, this study supplied technical support for the further analysis of other injection samples in pharmaceuticals.
Dead-blow hammer design applied to a calibration target mechanism to dampen excessive rebound
NASA Technical Reports Server (NTRS)
Lim, Brian Y.
1991-01-01
An existing rotary electromagnetic driver was specified to be used to deploy and restow a blackbody calibration target inside of a spacecraft infrared science instrument. However, this target was much more massive than any other previously inherited design applications. The target experienced unacceptable bounce when reaching its stops. Without any design modification, the momentum generated by the driver caused the target to bounce back to its starting position. Initially, elastomeric dampers were used between the driver and the target. However, this design could not prevent the bounce, and it compromised the positional accuracy of the calibration target. A design that successfully met all the requirements incorporated a sealed pocket 85 percent full of 0.75 mm diameter stainless steel balls in the back of the target to provide the effect of a dead-blow hammer. The energy dissipation resulting from the collision of balls in the pocket successfully dampened the excess momentum generated during the target deployment. The disastrous effects of new requirements on a design with a successful flight history, the modifications that were necessary to make the device work, and the tests performed to verify its functionality are described.
Merhof, Dorit; Markiewicz, Pawel J; Platsch, Günther; Declerck, Jerome; Weih, Markus; Kornhuber, Johannes; Kuwert, Torsten; Matthews, Julian C; Herholz, Karl
2011-01-01
Multivariate image analysis has shown potential for classification between Alzheimer's disease (AD) patients and healthy controls with a high-diagnostic performance. As image analysis of positron emission tomography (PET) and single photon emission computed tomography (SPECT) data critically depends on appropriate data preprocessing, the focus of this work is to investigate the impact of data preprocessing on the outcome of the analysis, and to identify an optimal data preprocessing method. In this work, technetium-99methylcysteinatedimer ((99m)Tc-ECD) SPECT data sets of 28 AD patients and 28 asymptomatic controls were used for the analysis. For a series of different data preprocessing methods, which includes methods for spatial normalization, smoothing, and intensity normalization, multivariate image analysis based on principal component analysis (PCA) and Fisher discriminant analysis (FDA) was applied. Bootstrap resampling was used to investigate the robustness of the analysis and the classification accuracy, depending on the data preprocessing method. Depending on the combination of preprocessing methods, significant differences regarding the classification accuracy were observed. For (99m)Tc-ECD SPECT data, the optimal data preprocessing method in terms of robustness and classification accuracy is based on affine registration, smoothing with a Gaussian of 12 mm full width half maximum, and intensity normalization based on the 25% brightest voxels within the whole-brain region.
Fan, Shou-Zen; Shieh, Jiann-Shing
2014-01-01
We compare type-1 and type-2 self-organizing fuzzy logic controller (SOFLC) using expert initialized and pretrained extracted rule-bases applied to automatic control of anaesthesia during surgery. We perform experimental simulations using a nonfixed patient model and signal noise to account for environmental and patient drug interaction uncertainties. The simulations evaluate the performance of the SOFLCs in their ability to control anesthetic delivery rates for maintaining desired physiological set points for muscle relaxation and blood pressure during a multistage surgical procedure. The performances of the SOFLCs are evaluated by measuring the steady state errors and control stabilities which indicate the accuracy and precision of control task. Two sets of comparisons based on using expert derived and extracted rule-bases are implemented as Wilcoxon signed-rank tests. Results indicate that type-2 SOFLCs outperform type-1 SOFLC while handling the various sources of uncertainties. SOFLCs using the extracted rules are also shown to outperform those using expert derived rules in terms of improved control stability. PMID:25587533
NASA Astrophysics Data System (ADS)
Giachetti, A.; Daffara, C.; Reghelin, C.; Gobbetti, E.; Pintus, R.
2015-06-01
In this paper we analyze some problems related to the acquisition of multiple illumination images for Polynomial Texture Maps (PTM) or generic Reflectance Transform Imaging (RTI). We show that intensity and directionality nonuniformity can be a relevant issue when acquiring manual sets of images with the standard highlight-based setup both using a flash lamp and a LED light. To maintain a cheap and flexible acquisition setup that can be used on field and by non-experienced users we propose to use a dynamic calibration and correction of the lights based on multiple intensity and direction estimation around the imaged object during the acquisition. Preliminary tests on the results obtained have been performed by acquiring a specifically designed 3D printed pattern in order to see the accuracy of the acquisition obtained both for spatial discrimination of small structures and normal estimation, and on samples of different types of paper in order to evaluate material discrimination. We plan to design and build from our analysis and from the tools developed and under development a set of novel procedures and guidelines that can be used to turn the cheap and common RTI acquisition setup from a simple way to enrich object visualization into a powerful method for extracting quantitative characterization both of surface geometry and of reflective properties of different materials. These results could have relevant applications in the Cultural Heritage domain, in order to recognize different materials used in paintings or investigate the ageing status of artifacts' surface.
Mujica Ascencio, Saul; Choe, ChunSik; Meinke, Martina C; Müller, Rainer H; Maksimov, George V; Wigger-Alberti, Walter; Lademann, Juergen; Darvin, Maxim E
2016-07-01
Propylene glycol is one of the known substances added in cosmetic formulations as a penetration enhancer. Recently, nanocrystals have been employed also to increase the skin penetration of active components. Caffeine is a component with many applications and its penetration into the epidermis is controversially discussed in the literature. In the present study, the penetration ability of two components - caffeine nanocrystals and propylene glycol, applied topically on porcine ear skin in the form of a gel, was investigated ex vivo using two confocal Raman microscopes operated at different excitation wavelengths (785nm and 633nm). Several depth profiles were acquired in the fingerprint region and different spectral ranges, i.e., 526-600cm(-1) and 810-880cm(-1) were chosen for independent analysis of caffeine and propylene glycol penetration into the skin, respectively. Multivariate statistical methods such as principal component analysis (PCA) and linear discriminant analysis (LDA) combined with Student's t-test were employed to calculate the maximum penetration depths of each substance (caffeine and propylene glycol). The results show that propylene glycol penetrates significantly deeper than caffeine (20.7-22.0μm versus 12.3-13.0μm) without any penetration enhancement effect on caffeine. The results confirm that different substances, even if applied onto the skin as a mixture, can penetrate differently. The penetration depths of caffeine and propylene glycol obtained using two different confocal Raman microscopes are comparable showing that both types of microscopes are well suited for such investigations and that multivariate statistical PCA-LDA methods combined with Student's t-test are very useful for analyzing the penetration of different substances into the skin. PMID:27108784
NASA Astrophysics Data System (ADS)
Guimarães Nobre, Gabriela; Arnbjerg-Nielsen, Karsten; Rosbjerg, Dan; Madsen, Henrik
2016-04-01
Traditionally, flood risk assessment studies have been carried out from a univariate frequency analysis perspective. However, statistical dependence between hydrological variables, such as extreme rainfall and extreme sea surge, is plausible to exist, since both variables to some extent are driven by common meteorological conditions. Aiming to overcome this limitation, multivariate statistical techniques has the potential to combine different sources of flooding in the investigation. The aim of this study was to apply a range of statistical methodologies for analyzing combined extreme hydrological variables that can lead to coastal and urban flooding. The study area is the Elwood Catchment, which is a highly urbanized catchment located in the city of Port Phillip, Melbourne, Australia. The first part of the investigation dealt with the marginal extreme value distributions. Two approaches to extract extreme value series were applied (Annual Maximum and Partial Duration Series), and different probability distribution functions were fit to the observed sample. Results obtained by using the Generalized Pareto distribution demonstrate the ability of the Pareto family to model the extreme events. Advancing into multivariate extreme value analysis, first an investigation regarding the asymptotic properties of extremal dependence was carried out. As a weak positive asymptotic dependence between the bivariate extreme pairs was found, the Conditional method proposed by Heffernan and Tawn (2004) was chosen. This approach is suitable to model bivariate extreme values, which are relatively unlikely to occur together. The results show that the probability of an extreme sea surge occurring during a one-hour intensity extreme precipitation event (or vice versa) can be twice as great as what would occur when assuming independent events. Therefore, presuming independence between these two variables would result in severe underestimation of the flooding risk in the study area.
Mujica Ascencio, Saul; Choe, ChunSik; Meinke, Martina C; Müller, Rainer H; Maksimov, George V; Wigger-Alberti, Walter; Lademann, Juergen; Darvin, Maxim E
2016-07-01
Propylene glycol is one of the known substances added in cosmetic formulations as a penetration enhancer. Recently, nanocrystals have been employed also to increase the skin penetration of active components. Caffeine is a component with many applications and its penetration into the epidermis is controversially discussed in the literature. In the present study, the penetration ability of two components - caffeine nanocrystals and propylene glycol, applied topically on porcine ear skin in the form of a gel, was investigated ex vivo using two confocal Raman microscopes operated at different excitation wavelengths (785nm and 633nm). Several depth profiles were acquired in the fingerprint region and different spectral ranges, i.e., 526-600cm(-1) and 810-880cm(-1) were chosen for independent analysis of caffeine and propylene glycol penetration into the skin, respectively. Multivariate statistical methods such as principal component analysis (PCA) and linear discriminant analysis (LDA) combined with Student's t-test were employed to calculate the maximum penetration depths of each substance (caffeine and propylene glycol). The results show that propylene glycol penetrates significantly deeper than caffeine (20.7-22.0μm versus 12.3-13.0μm) without any penetration enhancement effect on caffeine. The results confirm that different substances, even if applied onto the skin as a mixture, can penetrate differently. The penetration depths of caffeine and propylene glycol obtained using two different confocal Raman microscopes are comparable showing that both types of microscopes are well suited for such investigations and that multivariate statistical PCA-LDA methods combined with Student's t-test are very useful for analyzing the penetration of different substances into the skin.
NASA Technical Reports Server (NTRS)
Crutcher, H. L.; Falls, L. W.
1976-01-01
Sets of experimentally determined or routinely observed data provide information about the past, present and, hopefully, future sets of similarly produced data. An infinite set of statistical models exists which may be used to describe the data sets. The normal distribution is one model. If it serves at all, it serves well. If a data set, or a transformation of the set, representative of a larger population can be described by the normal distribution, then valid statistical inferences can be drawn. There are several tests which may be applied to a data set to determine whether the univariate normal model adequately describes the set. The chi-square test based on Pearson's work in the late nineteenth and early twentieth centuries is often used. Like all tests, it has some weaknesses which are discussed in elementary texts. Extension of the chi-square test to the multivariate normal model is provided. Tables and graphs permit easier application of the test in the higher dimensions. Several examples, using recorded data, illustrate the procedures. Tests of maximum absolute differences, mean sum of squares of residuals, runs and changes of sign are included in these tests. Dimensions one through five with selected sample sizes 11 to 101 are used to illustrate the statistical tests developed.
Hamchevici, Carmen; Udrea, Ion
2013-11-01
The concept of basin-wide Joint Danube Survey (JDS) was launched by the International Commission for the Protection of the Danube River (ICPDR) as a tool for investigative monitoring under the Water Framework Directive (WFD), with a frequency of 6 years. The first JDS was carried out in 2001 and its success in providing key information for characterisation of the Danube River Basin District as required by WFD lead to the organisation of the second JDS in 2007, which was the world's biggest river research expedition in that year. The present paper presents an approach for improving the survey strategy for the next planned survey JDS3 (2013) by means of several multivariate statistical techniques. In order to design the optimum structure in terms of parameters and sampling sites, principal component analysis (PCA), factor analysis (FA) and cluster analysis were applied on JDS2 data for 13 selected physico-chemical and one biological element measured in 78 sampling sites located on the main course of the Danube. Results from PCA/FA showed that most of the dataset variance (above 75%) was explained by five varifactors loaded with 8 out of 14 variables: physical (transparency and total suspended solids), relevant nutrients (N-nitrates and P-orthophosphates), feedback effects of primary production (pH, alkalinity and dissolved oxygen) and algal biomass. Taking into account the representation of the factor scores given by FA versus sampling sites and the major groups generated by the clustering procedure, the spatial network of the next survey could be carefully tailored, leading to a decreasing of sampling sites by more than 30%. The approach of target oriented sampling strategy based on the selected multivariate statistics can provide a strong reduction in dimensionality of the original data and corresponding costs as well, without any loss of information.
Gómez Alvarez, Elena; Moreno, Mónica Vázquez; Gligorovski, Sasho; Wortham, Henri; Cases, Miguel Valcárcel
2012-01-15
A characterisation of a system designed for active sampling of gaseous compounds with Solid Phase Microextraction (SPME) fibres is described. This form of sampling is useful to automate sampling while considerably reducing the sampling times. However, the efficiency of this form of sampling is also prone to be affected by certain undesirable effects such as fibre saturation, competition or displacement effects between analytes, to which particular attention should be paid especially at high flow rates. Yet, the effect of different parameters on the quantitivity of the results has not been evaluated. For this reason, in this study a careful characterisation of the influence of the parameters involved in active sampling SPME has been performed. A versatile experimental set-up has been designed to test the influence of air velocities and fluid regime on the quantitivity and reproducibility of the results. The mathematical model applied to the calculation of physical parameters at the sampling points takes into consideration the inherent characteristics of gases, distinctive from liquids and makes use of easily determined experimental variables as initial/boundary conditions to get the model started. The studies were carried out in the high-volume outdoor environmental chambers, EUPHORE. The sample subjected to study was a mixture of three aldehydes: pentanal, hexanal and heptanal and the determination methodology was O-(2,3,4,5,6-pentafluorobenzyl)-hydroxylamine hydrochloride (PFBHA) on-fibre derivatisation. The present work proves that the determination procedure is quantitative and sensitive, independent from experimental conditions: temperature, relative humidity or ozone levels. With our methodology, the influence on adsorption of three inter-related variables, i.e., air velocity, flow rate and Reynolds numbers can be separated, since a change can be exerted in one of them while keeping the others constant.
Botelho, Bruno G; de Assis, Luciana P; Sena, Marcelo M
2014-09-15
This paper proposed a novel methodology for the quantification of an artificial dye, sunset yellow (SY), in soft beverages, using image analysis (RGB histograms) and partial least squares regression. The developed method presented many advantages if compared with alternative methodologies, such as HPLC and UV/VIS spectrophotometry. It was faster, did not require sample pretreatment steps or any kind of solvents and reagents, and used a low cost equipment, a commercial flatbed scanner. This method was able to quantify SY in isotonic drinks and orange sodas, in the range of 7.8-39.7 mg L(-1), with relative prediction errors lower than 10%. A multivariate validation was also performed according to the Brazilian and international guidelines. Linearity, accuracy, sensitivity, bias, prediction uncertainty and a recently proposed tool, the β-expectation tolerance intervals, were estimated. The application of digital images in food analysis is very promising, opening the possibility for automation.
Botelho, Bruno G; de Assis, Luciana P; Sena, Marcelo M
2014-09-15
This paper proposed a novel methodology for the quantification of an artificial dye, sunset yellow (SY), in soft beverages, using image analysis (RGB histograms) and partial least squares regression. The developed method presented many advantages if compared with alternative methodologies, such as HPLC and UV/VIS spectrophotometry. It was faster, did not require sample pretreatment steps or any kind of solvents and reagents, and used a low cost equipment, a commercial flatbed scanner. This method was able to quantify SY in isotonic drinks and orange sodas, in the range of 7.8-39.7 mg L(-1), with relative prediction errors lower than 10%. A multivariate validation was also performed according to the Brazilian and international guidelines. Linearity, accuracy, sensitivity, bias, prediction uncertainty and a recently proposed tool, the β-expectation tolerance intervals, were estimated. The application of digital images in food analysis is very promising, opening the possibility for automation. PMID:24767041
Voronov, Alexey; Urakawa, Atsushi; van Beek, Wouter; Tsakoumis, Nikolaos E; Emerich, Hermann; Rønning, Magnus
2014-08-20
Large datasets containing many spectra commonly associated with in situ or operando experiments call for new data treatment strategies as conventional scan by scan data analysis methods have become a time-consuming bottleneck. Several convenient automated data processing procedures like least square fitting of reference spectra exist but are based on assumptions. Here we present the application of multivariate curve resolution (MCR) as a blind-source separation method to efficiently process a large data set of an in situ X-ray absorption spectroscopy experiment where the sample undergoes a periodic concentration perturbation. MCR was applied to data from a reversible reduction-oxidation reaction of a rhenium promoted cobalt Fischer-Tropsch synthesis catalyst. The MCR algorithm was capable of extracting in a highly automated manner the component spectra with a different kinetic evolution together with their respective concentration profiles without the use of reference spectra. The modulative nature of our experiments allows for averaging of a number of identical periods and hence an increase in the signal to noise ratio (S/N) which is efficiently exploited by MCR. The practical and added value of the approach in extracting information from large and complex datasets, typical for in situ and operando studies, is highlighted. PMID:25086889
Roy, Kevin; Undey, Cenk; Mistretta, Thomas; Naugle, Gregory; Sodhi, Manbir
2014-01-01
Multivariate statistical process monitoring (MSPM) is becoming increasingly utilized to further enhance process monitoring in the biopharmaceutical industry. MSPM can play a critical role when there are many measurements and these measurements are highly correlated, as is typical for many biopharmaceutical operations. Specifically, for processes such as cleaning-in-place (CIP) and steaming-in-place (SIP, also known as sterilization-in-place), control systems typically oversee the execution of the cycles, and verification of the outcome is based on offline assays. These offline assays add to delays and corrective actions may require additional setup times. Moreover, this conventional approach does not take interactive effects of process variables into account and cycle optimization opportunities as well as salient trends in the process may be missed. Therefore, more proactive and holistic online continued verification approaches are desirable. This article demonstrates the application of real-time MSPM to processes such as CIP and SIP with industrial examples. The proposed approach has significant potential for facilitating enhanced continuous verification, improved process understanding, abnormal situation detection, and predictive monitoring, as applied to CIP and SIP operations.
Roy, Kevin; Undey, Cenk; Mistretta, Thomas; Naugle, Gregory; Sodhi, Manbir
2014-01-01
Multivariate statistical process monitoring (MSPM) is becoming increasingly utilized to further enhance process monitoring in the biopharmaceutical industry. MSPM can play a critical role when there are many measurements and these measurements are highly correlated, as is typical for many biopharmaceutical operations. Specifically, for processes such as cleaning-in-place (CIP) and steaming-in-place (SIP, also known as sterilization-in-place), control systems typically oversee the execution of the cycles, and verification of the outcome is based on offline assays. These offline assays add to delays and corrective actions may require additional setup times. Moreover, this conventional approach does not take interactive effects of process variables into account and cycle optimization opportunities as well as salient trends in the process may be missed. Therefore, more proactive and holistic online continued verification approaches are desirable. This article demonstrates the application of real-time MSPM to processes such as CIP and SIP with industrial examples. The proposed approach has significant potential for facilitating enhanced continuous verification, improved process understanding, abnormal situation detection, and predictive monitoring, as applied to CIP and SIP operations. PMID:24532460
Prokeš, Lubomír; Amato, Filippo; Pivetta, Tiziana; Hampl, Aleš; Havel, Josef; Vaňhara, Petr
2016-01-01
Cross-contamination of eukaryotic cell lines used in biomedical research represents a highly relevant problem. Analysis of repetitive DNA sequences, such as Short Tandem Repeats (STR), or Simple Sequence Repeats (SSR), is a widely accepted, simple, and commercially available technique to authenticate cell lines. However, it provides only qualitative information that depends on the extent of reference databases for interpretation. In this work, we developed and validated a rapid and routinely applicable method for evaluation of cell culture cross-contamination levels based on mass spectrometric fingerprints of intact mammalian cells coupled with artificial neural networks (ANNs). We used human embryonic stem cells (hESCs) contaminated by either mouse embryonic stem cells (mESCs) or mouse embryonic fibroblasts (MEFs) as a model. We determined the contamination level using a mass spectra database of known calibration mixtures that served as training input for an ANN. The ANN was then capable of correct quantification of the level of contamination of hESCs by mESCs or MEFs. We demonstrate that MS analysis, when linked to proper mathematical instruments, is a tangible tool for unraveling and quantifying heterogeneity in cell cultures. The analysis is applicable in routine scenarios for cell authentication and/or cell phenotyping in general. PMID:26821236
Darwish, Hany W; Elzanfaly, Eman S; Saad, Ahmed S; Abdelaleem, Abdelaziz El-Bayoumi
2016-12-01
Five different chemometric methods were developed for the simultaneous determination of betamethasone dipropionate (BMD), clotrimazole (CT) and benzyl alcohol (BA) in their combined dosage form (Lotriderm® cream). The applied methods included three full spectrum based chemometric techniques; namely principal component regression (PCR), Partial Least Squares (PLS) and Artificial Neural Networks (ANN), while the other two methods were PLS and ANN preceded by genetic algorithm procedure (GA-PLS and GA-ANN) as a wavelength selection procedure. A multilevel multifactor experimental design was adopted for proper construction of the models. A validation set composed of 12 mixtures containing different ratios of the three analytes was used to evaluate the predictive power of the suggested models. All the proposed methods except ANN, were successfully applied for the analysis of their pharmaceutical formulation (Lotriderm® cream). Results demonstrated the efficiency of the four methods as quantitative tool for analysis of the three analytes without prior separation procedures and without any interference from the co-formulated excipient. Additionally, the work highlighted the effect of GA on increasing the predictive power of PLS and ANN models. PMID:27327260
NASA Technical Reports Server (NTRS)
Lerch, F. J.; Nerem, R. S.; Chinn, D. S.; Chan, J. C.; Patel, G. B.; Klosko, S. M.
1993-01-01
A new method has been developed to provide a direct test of the error calibrations of gravity models based on actual satellite observations. The basic approach projects the error estimates of the gravity model parameters onto satellite observations, and the results of these projections are then compared with data residual computed from the orbital fits. To allow specific testing of the gravity error calibrations, subset solutions are computed based on the data set and data weighting of the gravity model. The approach is demonstrated using GEM-T3 to show that the gravity error estimates are well calibrated and that reliable predictions of orbit accuracies can be achieved for independent orbits.
NASA Technical Reports Server (NTRS)
Prasad, C. B.; Prabhakaran, R.; Tompkins, S.
1987-01-01
The first step in the extension of the semidestructive hole-drilling technique for residual stress measurement to orthotropic composite materials is the determination of the three calibration constants. Attention is presently given to an experimental determination of these calibration constants for a highly orthotropic, unidirectionally-reinforced graphite fiber-reinforced polyimide composite. A comparison of the measured values with theoretically obtained ones shows agreement to be good, in view of the many possible sources of experimental variation.
Ebrahimabadi, Ebrahim H; Ghoreishi, Sayed Mehdi; Masoum, Saeed; Ebrahimabadi, Abdolrasoul H
2016-01-01
Myrtus communis L. is an aromatic evergreen shrub and its essential oil possesses known powerful antimicrobial activity. However, the contribution of each component of the plant essential oil in observed antimicrobial ability is unclear. In this study, chemical components of the essential oil samples of the plant were identified qualitatively and quantitatively using GC/FID/Mass spectrometry system, antimicrobial activity of these samples against three microbial strains were evaluated and, these two set of data were correlated using chemometrics methods. Three chemometric methods including principal component regression (PCR), partial least squares (PLS) and orthogonal projections to latent structures (OPLS) were applied for the study. These methods showed similar results, but, OPLS was selected as preferred method due to its predictive and interpretational ability, facility, repeatability and low time-consuming. The results showed that α-pinene, 1,8 cineole, β-pinene and limonene are the highest contributors in antimicrobial properties of M. communis essential oil. Other researches have reported high antimicrobial activities for the plant essential oils rich in these compounds confirming our findings.
Ebrahimabadi, Ebrahim H; Ghoreishi, Sayed Mehdi; Masoum, Saeed; Ebrahimabadi, Abdolrasoul H
2016-01-01
Myrtus communis L. is an aromatic evergreen shrub and its essential oil possesses known powerful antimicrobial activity. However, the contribution of each component of the plant essential oil in observed antimicrobial ability is unclear. In this study, chemical components of the essential oil samples of the plant were identified qualitatively and quantitatively using GC/FID/Mass spectrometry system, antimicrobial activity of these samples against three microbial strains were evaluated and, these two set of data were correlated using chemometrics methods. Three chemometric methods including principal component regression (PCR), partial least squares (PLS) and orthogonal projections to latent structures (OPLS) were applied for the study. These methods showed similar results, but, OPLS was selected as preferred method due to its predictive and interpretational ability, facility, repeatability and low time-consuming. The results showed that α-pinene, 1,8 cineole, β-pinene and limonene are the highest contributors in antimicrobial properties of M. communis essential oil. Other researches have reported high antimicrobial activities for the plant essential oils rich in these compounds confirming our findings. PMID:26625337
Pérez, Rocío L; Escandar, Graciela M
2016-02-01
A green method is reported based on non-sophisticated instrumental for the quantification of seven natural and synthetic estrogens, three progestagens and one androgen in the presence of real interferences. The method takes advantage of: (1) chromatography, allowing total or partial resolution of a large number of compounds, (2) dual detection, permitting selection of the most appropriate signal for each analyte and, (3) second-order calibration, enabling mathematical resolution of incompletely resolved chromatographic bands and analyte determination in the presence of interferents. Consumption of organic solvents for cleaning, extraction and separation are markedly decreased because of the coupling with MCR-ALS (multivariate curve resolution/alternating least-squares) which allows the successful resolution in the presence of other co-eluting matrix constituents. Rigorous IUPAC detection limits were obtained: 6-24 ng L(-1) in water, and 0.1-0.9 ng g(-1) in sediments. Relative prediction errors were 2-10% (water) and 1-8% (sediments). PMID:26650083
Samadi, Naser; Masoum, Saeed; Mehrara, Bahare; Hosseini, Hossein
2015-09-15
Satureja hortensis L. and Oliveria decumbens Vent. are known for their diverse effects in drug therapy and traditional medicine. One of the most interesting properties of their essential oils is good antioxidant activity. In this paper, essential oils of aerial parts of S. hortensis L. and O. decumbens Vent. from different regions were obtained by hydrodistillation and were analyzed by gas chromatography-mass spectrometry (GC-MS). Essential oils were tested for their free radical scavenging activity using 1,1-diphenyl-2-picrylhydrazyl (DPPH) assay to identify the peaks potentially responsible for the antioxidant activity from chromatographic fingerprints by numerous linear multivariate calibration techniques. Because of its simplicity and high repeatability, orthogonal projection to latent structures (OPLS) model had the best performance in indicating the potential antioxidant compounds in S. hortensis L. and O. decumbens Vent. essential oils. In this study, P-cymene, carvacrol and β-bisabolene for S. hortensis L. and P-cymene, Ç-terpinen, thymol, carvacrol, and 1,3-benzodioxole, 4-methoxy-6-(2-propenyl) for O. decumbens Vent. are suggested as the potentially antioxidant compounds. PMID:26262598
Samadi, Naser; Masoum, Saeed; Mehrara, Bahare; Hosseini, Hossein
2015-09-15
Satureja hortensis L. and Oliveria decumbens Vent. are known for their diverse effects in drug therapy and traditional medicine. One of the most interesting properties of their essential oils is good antioxidant activity. In this paper, essential oils of aerial parts of S. hortensis L. and O. decumbens Vent. from different regions were obtained by hydrodistillation and were analyzed by gas chromatography-mass spectrometry (GC-MS). Essential oils were tested for their free radical scavenging activity using 1,1-diphenyl-2-picrylhydrazyl (DPPH) assay to identify the peaks potentially responsible for the antioxidant activity from chromatographic fingerprints by numerous linear multivariate calibration techniques. Because of its simplicity and high repeatability, orthogonal projection to latent structures (OPLS) model had the best performance in indicating the potential antioxidant compounds in S. hortensis L. and O. decumbens Vent. essential oils. In this study, P-cymene, carvacrol and β-bisabolene for S. hortensis L. and P-cymene, Ç-terpinen, thymol, carvacrol, and 1,3-benzodioxole, 4-methoxy-6-(2-propenyl) for O. decumbens Vent. are suggested as the potentially antioxidant compounds.
NASA Technical Reports Server (NTRS)
Prasad, C. B.; Prabhakaran, R.; Tompkins, S.
1987-01-01
The hole-drilling technique for the measurement of residual stresses using electrical resistance strain gages has been widely used for isotropic materials and has been adopted by the ASTM as a standard method. For thin isotropic plates, with a hole drilled through the thickness, the idealized hole-drilling calibration constants are obtained by making use of the well-known Kirsch's solution. In this paper, an analogous attempt is made to theoretically determine the three idealized hole-drilling calibration constants for thin orthotropic materials by employing Savin's (1961) complex stress function approach.
Bremen, Peter; Van der Willigen, Robert F; Van Wanrooij, Marc M; Schaling, David F; Martens, Marijn B; Van Grootel, Tom J; van Opstal, A John
2010-12-01
The double magnetic induction (DMI) method has successfully been used to record head-unrestrained gaze shifts in human subjects (Bremen et al., J Neurosci Methods 160:75-84, 2007a, J Neurophysiol, 98:3759-3769, 2007b). This method employs a small golden ring placed on the eye that, when positioned within oscillating magnetic fields, induces orientation-dependent voltages in a pickup coil in front of the eye. Here we develop and test a streamlined calibration routine for use with experimental animals, in particular, with monkeys. The calibration routine requires the animal solely to accurately follow visual targets presented at random locations in the visual field. Animals can readily learn this task. In addition, we use the fact that the pickup coil can be fixed rigidly and reproducibly on implants on the animal's skull. Therefore, accumulation of calibration data leads to increasing accuracy. As a first step, we simulated gaze shifts and the resulting DMI signals. Our simulations showed that the complex DMI signals can be effectively calibrated with the use of random target sequences, which elicit substantial decoupling of eye- and head orientations in a natural way. Subsequently, we tested our paradigm on three macaque monkeys. Our results show that the data for a successful calibration can be collected in a single recording session, in which the monkey makes about 1,500-2,000 goal-directed saccades. We obtained a resolution of 30 arc minutes (measurement range [-60,+60]°). This resolution compares to the fixation resolution of the monkey's oculomotor system, and to the standard scleral search-coil method.
Multivariate postprocessing techniques for probabilistic hydrological forecasting
NASA Astrophysics Data System (ADS)
Hemri, Stephan; Lisniak, Dmytro; Klein, Bastian
2016-04-01
Hydrologic ensemble forecasts driven by atmospheric ensemble prediction systems need statistical postprocessing in order to account for systematic errors in terms of both mean and spread. Runoff is an inherently multivariate process with typical events lasting from hours in case of floods to weeks or even months in case of droughts. This calls for multivariate postprocessing techniques that yield well calibrated forecasts in univariate terms and ensure a realistic temporal dependence structure at the same time. To this end, the univariate ensemble model output statistics (EMOS; Gneiting et al., 2005) postprocessing method is combined with two different copula approaches that ensure multivariate calibration throughout the entire forecast horizon. These approaches comprise ensemble copula coupling (ECC; Schefzik et al., 2013), which preserves the dependence structure of the raw ensemble, and a Gaussian copula approach (GCA; Pinson and Girard, 2012), which estimates the temporal correlations from training observations. Both methods are tested in a case study covering three subcatchments of the river Rhine that represent different sizes and hydrological regimes: the Upper Rhine up to the gauge Maxau, the river Moselle up to the gauge Trier, and the river Lahn up to the gauge Kalkofen. The results indicate that both ECC and GCA are suitable for modelling the temporal dependences of probabilistic hydrologic forecasts (Hemri et al., 2015). References Gneiting, T., A. E. Raftery, A. H. Westveld, and T. Goldman (2005), Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation, Monthly Weather Review, 133(5), 1098-1118, DOI: 10.1175/MWR2904.1. Hemri, S., D. Lisniak, and B. Klein, Multivariate postprocessing techniques for probabilistic hydrological forecasting, Water Resources Research, 51(9), 7436-7451, DOI: 10.1002/2014WR016473. Pinson, P., and R. Girard (2012), Evaluating the quality of scenarios of short-term wind power
Uncertainty Analysis of Instrument Calibration and Application
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, techniques for estimation of both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Statistical tools, previously developed for nonlinear least-squares estimation of multivariate sensor calibration parameters and the associated calibration uncertainty analysis, have been applied to single- and multiple-axis inertial model attitude sensors used in wind tunnel testing to measure angle of attack and roll angle. The analysis provides confidence and prediction intervals of calibrated sensor measurement uncertainty as functions of applied input pitch and roll angles. A comparative performance study of various experimental designs for inertial sensor calibration is presented along with corroborating experimental data. The importance of replicated calibrations over extended time periods has been emphasized; replication provides independent estimates of calibration precision and bias uncertainties, statistical tests for calibration or modeling bias uncertainty, and statistical tests for sensor parameter drift over time. A set of recommendations for a new standardized model attitude sensor calibration method and usage procedures is included. The statistical information provided by these procedures is necessary for the uncertainty analysis of aerospace test results now required by users of industrial wind tunnel test facilities.
Fragoso, Wallace; Allegrini, Franco; Olivieri, Alejandro C
2016-08-24
Generalized analytical sensitivity (γ) is proposed as a new figure of merit, which can be estimated from a multivariate calibration data set. It can be confidently applied to compare different calibration methodologies, and helps to solve literature inconsistencies on the relationship between classical sensitivity and prediction error. In contrast to the classical plain sensitivity, γ incorporates the noise properties in its definition, and its inverse is well correlated with root mean square errors of prediction in the presence of general noise structures. The proposal is supported by studying simulated and experimental first-order multivariate calibration systems with various models, namely multiple linear regression, principal component regression (PCR) and maximum likelihood PCR (MLPCR). The simulations included instrumental noise of different types: independently and identically distributed (iid), correlated (pink) and proportional noise, while the experimental data carried noise which is clearly non-iid. PMID:27496995
Fragoso, Wallace; Allegrini, Franco; Olivieri, Alejandro C
2016-08-24
Generalized analytical sensitivity (γ) is proposed as a new figure of merit, which can be estimated from a multivariate calibration data set. It can be confidently applied to compare different calibration methodologies, and helps to solve literature inconsistencies on the relationship between classical sensitivity and prediction error. In contrast to the classical plain sensitivity, γ incorporates the noise properties in its definition, and its inverse is well correlated with root mean square errors of prediction in the presence of general noise structures. The proposal is supported by studying simulated and experimental first-order multivariate calibration systems with various models, namely multiple linear regression, principal component regression (PCR) and maximum likelihood PCR (MLPCR). The simulations included instrumental noise of different types: independently and identically distributed (iid), correlated (pink) and proportional noise, while the experimental data carried noise which is clearly non-iid.
Li, D.Y.; Chen, L.Q.
1998-01-05
Coherent precipitation of multi-variant Ti{sub 11}Ni{sup 14} precipitates in TiNi alloys was investigated by employing a continuum field kinetic model. The structural difference between the precipitate phase and the matrix as well as the orientational differences between precipitate variants are distinguished by nonconserved structural field variables, whereas the compositional difference between the precipitate and matrix is described by a conserved field variable. The temporal evolution of the spatially dependent field variables is determined by numerically solving the time-dependent Ginzburg-Landau (TDGL) equations for the structural variables and the Cahn-Hilliard diffusion equation for the composition. In particular, the interaction between precipitates, and the growth morphology of Ti{sub 11}Ni{sub 14} precipitates under strain-constraints were studied, without a priori assumptions on the precipitate shape and distribution. The predicted morphology and distribution of Ti{sub 11}Ni{sub 14} variants were compared with experimental observations. Excellent agreement between the simulation and experimental observations was found.
NASA Astrophysics Data System (ADS)
Cerqueira, J. G.; Fernandez, J. H.; Hoelzemann, J. J.; Leme, N. M. P.; Sousa, C. T.
2014-10-01
Due to the high costs of commercial monitoring instruments, a portable sun photometer was developed at INPE/CRN laboratories, operating in four bands, with two bands in the visible spectrum and two in near infrared. The instrument calibration process is performed by applying the classical Langley method. Application of the Langley’s methodology requires a site with high optical stability during the measurements, which is usually found in high altitudes. However, far from being an ideal site, Harrison et al. (1994) report success with applying the Langley method to some data for a site in Boulder, Colorado. Recently, Liu et al. (2011) show that low elevation sites, far away from urban and industrial centers can provide a stable optical depth, similar to high altitudes. In this study we investigated the feasibility of applying the methodology in the semiarid region of northeastern Brazil, far away from pollution areas with low altitudes, for sun photometer calibration. We investigated optical depth stability using two periods of measurements in the year during dry season in austral summer. The first one was in December when the native vegetation naturally dries, losing all its leaves and the second one was in September in the middle of the dry season when the vegetation is still with leaves. The data were distributed during four days in December 2012 and four days in September 2013 totaling eleven half days of collections between mornings and afternoons and by means of fitted line to the data V0 values were found. Despite the high correlation between the collected data and the fitted line, the study showed a variation between the values of V0 greater than allowed for sun photometer calibration. The lowest V0 variation reached in this experiment with values lower than 3% for the bands 500, 670 and 870 nm are displayed in tables. The results indicate that the site needs to be better characterized with studies in more favorable periods, soon after the rainy season.
Callén, M S; López, J M; Mastral, A M
2010-08-15
The estimation of benzo(a)pyrene (BaP) concentrations in ambient air is very important from an environmental point of view especially with the introduction of the Directive 2004/107/EC and due to the carcinogenic character of this pollutant. A sampling campaign of particulate matter less or equal than 10 microns (PM10) carried out during 2008-2009 in four locations of Spain was collected to determine experimentally BaP concentrations by gas chromatography mass-spectrometry mass-spectrometry (GC-MS-MS). Multivariate linear regression models (MLRM) were used to predict BaP air concentrations in two sampling places, taking PM10 and meteorological variables as possible predictors. The model obtained with data from two sampling sites (all sites model) (R(2)=0.817, PRESS/SSY=0.183) included the significant variables like PM10, temperature, solar radiation and wind speed and was internally and externally validated. The first validation was performed by cross validation and the last one by BaP concentrations from previous campaigns carried out in Zaragoza from 2001-2004. The proposed model constitutes a first approximation to estimate BaP concentrations in urban atmospheres with very good internal prediction (Q(CV)(2)=0.813, PRESS/SSY=0.187) and with the maximal external prediction for the 2001-2002 campaign (Q(ext)(2)=0.679 and PRESS/SSY=0.321) versus the 2001-2004 campaign (Q(ext)(2)=0.551, PRESS/SSY=0.449).
Kinoshita, Naoki; Kita, Akinobu; Takemura, Akihiro; Nishimoto, Yasuhiro; Adachi, Toshiki
2014-09-01
The uncertainty of the beam quality conversion factor (k(Q,Q0)) of standard dosimetry of absorbed dose to water in external beam radiotherapy 12 (JSMP12) is determined by combining the uncertainty of each beam quality conversion factor calculated for each type of ionization chamber. However, there is no guarantee that ionization chambers of the same type have the same structure and thickness, so there may be individual variations. We evaluated the uncertainty of k(Q,Q0) for JSMP12 using an ionization chamber dosimeter and linear accelerator without a specific device or technique in consideration of the individual variation of ionization chambers and in clinical radiation field. The cross calibration formula was modified and the beam quality conversion factor for the experimental values [(k(Q,Q0))field] determined using the modified formula. It's uncertainty was calculated to be 1.9%. The differences between (k(Q,Q0))field of experimental values and k(Q,Q0) for Japan Society of Medical Physics 12 (JSMP12) were 0.73% and 0.88% for 6- and 10-MV photon beams, respectively, remaining within ± 1.9%. This showed k(Q,Q0) for JSMP12 to be consistent with (k(Q,Q0))field of experimental values within the estimated uncertainty range. Although inter-individual differences may be generated, even when the same type of ionized chamber is used, k(Q,Q0) for JSMP12 appears to be consistent within the estimated uncertainty range of (k(Q,Q0)field.
Hybrid least squares multivariate spectral analysis methods
Haaland, David M.
2004-03-23
A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following prediction or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The hybrid method herein means a combination of an initial calibration step with subsequent analysis by an inverse multivariate analysis method. A spectral shape herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The shape can be continuous, discontinuous, or even discrete points illustrative of the particular effect.
Hybrid least squares multivariate spectral analysis methods
Haaland, David M.
2002-01-01
A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following estimation or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The "hybrid" method herein means a combination of an initial classical least squares analysis calibration step with subsequent analysis by an inverse multivariate analysis method. A "spectral shape" herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The "shape" can be continuous, discontinuous, or even discrete points illustrative of the particular effect.
NASA Technical Reports Server (NTRS)
Schiller, Stephen; Luvall, Jeffrey C.; Rickman, Doug L.; Arnold, James E. (Technical Monitor)
2000-01-01
Detecting changes in the Earth's environment using satellite images of ocean and land surfaces must take into account atmospheric effects. As a result, major programs are underway to develop algorithms for image retrieval of atmospheric aerosol properties and atmospheric correction. However, because of the temporal and spatial variability of atmospheric transmittance it is very difficult to model atmospheric effects and implement models in an operational mode. For this reason, simultaneous in situ ground measurements of atmospheric optical properties are vital to the development of accurate atmospheric correction techniques. Presented in this paper is a spectroradiometer system that provides an optimized set of surface measurements for the calibration and validation of atmospheric correction algorithms. The Portable Ground-based Atmospheric Monitoring System (PGAMS) obtains a comprehensive series of in situ irradiance, radiance, and reflectance measurements for the calibration of atmospheric correction algorithms applied to multispectral. and hyperspectral images. The observations include: total downwelling irradiance, diffuse sky irradiance, direct solar irradiance, path radiance in the direction of the north celestial pole, path radiance in the direction of the overflying satellite, almucantar scans of path radiance, full sky radiance maps, and surface reflectance. Each of these parameters are recorded over a wavelength range from 350 to 1050 nm in 512 channels. The system is fast, with the potential to acquire the complete set of observations in only 8 to 10 minutes depending on the selected spatial resolution of the sky path radiance measurements
Implicit Spacecraft Gyro Calibration
NASA Technical Reports Server (NTRS)
Harman, Richard; Bar-Itzhack, Itzhack Y.
2003-01-01
This paper presents an implicit algorithm for spacecraft onboard instrument calibration, particularly to onboard gyro calibration. This work is an extension of previous work that was done where an explicit gyro calibration algorithm was applied to the AQUA spacecraft gyros. The algorithm presented in this paper was tested using simulated data and real data that were downloaded from the Microwave Anisotropy Probe (MAP) spacecraft. The calibration tests gave very good results. A comparison between the use of the implicit calibration algorithm used here with the explicit algorithm used for AQUA spacecraft indicates that both provide an excellent estimation of the gyro calibration parameters with similar accuracies.
NASA Astrophysics Data System (ADS)
Enßlin, Torsten A.; Junklewitz, Henrik; Winderling, Lars; Greiner, Maksim; Selig, Marco
2014-10-01
Response calibration is the process of inferring how much the measured data depend on the signal one is interested in. It is essential for any quantitative signal estimation on the basis of the data. Here, we investigate self-calibration methods for linear signal measurements and linear dependence of the response on the calibration parameters. The common practice is to augment an external calibration solution using a known reference signal with an internal calibration on the unknown measurement signal itself. Contemporary self-calibration schemes try to find a self-consistent solution for signal and calibration by exploiting redundancies in the measurements. This can be understood in terms of maximizing the joint probability of signal and calibration. However, the full uncertainty structure of this joint probability around its maximum is thereby not taken into account by these schemes. Therefore, better schemes, in sense of minimal square error, can be designed by accounting for asymmetries in the uncertainty of signal and calibration. We argue that at least a systematic correction of the common self-calibration scheme should be applied in many measurement situations in order to properly treat uncertainties of the signal on which one calibrates. Otherwise, the calibration solutions suffer from a systematic bias, which consequently distorts the signal reconstruction. Furthermore, we argue that nonparametric, signal-to-noise filtered calibration should provide more accurate reconstructions than the common bin averages and provide a new, improved self-calibration scheme. We illustrate our findings with a simplistic numerical example.
Classical least squares multivariate spectral analysis
Haaland, David M.
2002-01-01
An improved classical least squares multivariate spectral analysis method that adds spectral shapes describing non-calibrated components and system effects (other than baseline corrections) present in the analyzed mixture to the prediction phase of the method. These improvements decrease or eliminate many of the restrictions to the CLS-type methods and greatly extend their capabilities, accuracy, and precision. One new application of PACLS includes the ability to accurately predict unknown sample concentrations when new unmodeled spectral components are present in the unknown samples. Other applications of PACLS include the incorporation of spectrometer drift into the quantitative multivariate model and the maintenance of a calibration on a drifting spectrometer. Finally, the ability of PACLS to transfer a multivariate model between spectrometers is demonstrated.
NASA Astrophysics Data System (ADS)
Hoffman, Ross N.; Ardizzone, Joseph V.; Leidner, S. Mark; Smith, Deborah K.; Atlas, Robert M.
2013-04-01
The cross-calibrated, multi-platform (CCMP) ocean surface wind project [Atlas et al., 2011] generates high-quality, high-resolution, vector winds over the world's oceans beginning with the 1987 launch of the SSM/I F08, using Remote Sensing Systems (RSS) microwave satellite wind retrievals, as well as in situ observations from ships and buoys. The variational analysis method [VAM, Hoffman et al., 2003] is at the center of the CCMP project's analysis procedures for combining observations of the wind. The VAM was developed as a smoothing spline and so implicitly defines the background error covariance by means of several constraints with adjustable weights, and does not provide an explicit estimate of the analysis error. Here we report on our research to develop uncertainty estimates for wind speed for the VAM inputs and outputs, i.e., for the background (B), the observations (O) and the analysis (A) wind speed, based on the Desroziers et al. [2005] diagnostics (DD hereafter). The DD are applied to the CCMP ocean surface wind data sets to estimate wind speed errors of the ECMWF background, the microwave satellite observations and the resulting CCMP analysis. The DD confirm that the ECMWF operational surface wind speed error standard deviations vary with latitude in the range 0.7-1.5 m/s and that the cross-calibrated Remote Sensing Systems (RSS) wind speed retrievals standard deviations are in the range 0.5-0.8 m/s. Further the estimated CCMP analysis wind speed standard deviations are in the range 0.2-0.4 m/s. The results suggests the need to revise the parameterization of the errors due to the FGAT (first guess at the appropriate time) procedure. Errors for wind speeds < 16 m/s are homogeneous, but for the relatively rare, but critical higher wind speed situations, errors are much larger. Atlas, R., R. N. Hoffman, J. Ardizzone, S. M. Leidner, J. C. Jusem, D. K. Smith, and D. Gombos, A cross-calibrated, multi-platform ocean surface wind velocity product for
NASA Astrophysics Data System (ADS)
Divya, O.; Shinde, Mandakini
2013-07-01
A multivariate calibration model for the simultaneous estimation of propranolol (PRO) and amiloride (AMI) using synchronous fluorescence spectroscopic data has been presented in this paper. Two multivariate techniques, PCR (Principal Component Regression) and PLSR (Partial Least Square Regression), have been successfully applied for the simultaneous determination of AMI and PRO in synthetic binary mixtures and pharmaceutical dosage forms. The SF spectra of AMI and PRO (calibration mixtures) were recorded at several concentrations within their linear range between wavelengths of 310 and 500 nm at an interval of 1 nm. Calibration models were constructed using 32 samples and validated by varying the concentrations of AMI and PRO in the calibration range. The results indicated that the model developed was very robust and able to efficiently analyze the mixtures with low RMSEP values.
MULTIVARIATE KERNEL PARTITION PROCESS MIXTURES
Dunson, David B.
2013-01-01
Mixtures provide a useful approach for relaxing parametric assumptions. Discrete mixture models induce clusters, typically with the same cluster allocation for each parameter in multivariate cases. As a more flexible approach that facilitates sparse nonparametric modeling of multivariate random effects distributions, this article proposes a kernel partition process (KPP) in which the cluster allocation varies for different parameters. The KPP is shown to be the driving measure for a multivariate ordered Chinese restaurant process that induces a highly-flexible dependence structure in local clustering. This structure allows the relative locations of the random effects to inform the clustering process, with spatially-proximal random effects likely to be assigned the same cluster index. An exact block Gibbs sampler is developed for posterior computation, avoiding truncation of the infinite measure. The methods are applied to hormone curve data, and a dependent KPP is proposed for classification from functional predictors. PMID:24478563
Multivariate meta-analysis: potential and promise.
Jackson, Dan; Riley, Richard; White, Ian R
2011-09-10
The multivariate random effects model is a generalization of the standard univariate model. Multivariate meta-analysis is becoming more commonly used and the techniques and related computer software, although continually under development, are now in place. In order to raise awareness of the multivariate methods, and discuss their advantages and disadvantages, we organized a one day 'Multivariate meta-analysis' event at the Royal Statistical Society. In addition to disseminating the most recent developments, we also received an abundance of comments, concerns, insights, critiques and encouragement. This article provides a balanced account of the day's discourse. By giving others the opportunity to respond to our assessment, we hope to ensure that the various view points and opinions are aired before multivariate meta-analysis simply becomes another widely used de facto method without any proper consideration of it by the medical statistics community. We describe the areas of application that multivariate meta-analysis has found, the methods available, the difficulties typically encountered and the arguments for and against the multivariate methods, using four representative but contrasting examples. We conclude that the multivariate methods can be useful, and in particular can provide estimates with better statistical properties, but also that these benefits come at the price of making more assumptions which do not result in better inference in every case. Although there is evidence that multivariate meta-analysis has considerable potential, it must be even more carefully applied than its univariate counterpart in practice.
Bioelectronic tongue and multivariate analysis: a next step in BOD measurements.
Raud, Merlin; Kikas, Timo
2013-05-01
Seven biosensors based on different semi-specific and universal microorganisms were constructed for biochemical oxygen demand (BOD) measurements in various synthetic industrial wastewaters. All biosensors were calibrated using OECD synthetic wastewater and the resulting calibration curves were used in the calculations of the sensor-BOD values for all biosensors. In addition, the output signals of all biosensors were analyzed as a bioelectronic tongue and comprehensive multivariate data analysis was applied to extract qualitative and quantitative information from the samples. In the case of individual biosensor measurements, most accurate result was gained when semi-specific biosensor was applied to analyze sample specific to that biosensor. Universal biosensors or biosensors semi-specific to other samples underestimated the BOD7 of the sample 10-25%. PLS regression method was used for the multivariate calibration of the biosensor array. The calculated sensor-BOD values differed from BOD7 less than 5.6% in all types of samples. By applying PCA and using three first principal components, giving 99.66% of variation, it was possible to differentiate samples by their compositions.
Problems with Multivariate Normality: Can the Multivariate Bootstrap Help?
ERIC Educational Resources Information Center
Thompson, Bruce
Multivariate normality is required for some statistical tests. This paper explores the implications of violating the assumption of multivariate normality and illustrates a graphical procedure for evaluating multivariate normality. The logic for using the multivariate bootstrap is presented. The multivariate bootstrap can be used when distribution…
A multivariate CAR model for mismatched lattices.
Porter, Aaron T; Oleson, Jacob J
2014-10-01
In this paper, we develop a multivariate Gaussian conditional autoregressive model for use on mismatched lattices. Most current multivariate CAR models are designed for each multivariate outcome to utilize the same lattice structure. In many applications, a change of basis will allow different lattices to be utilized, but this is not always the case, because a change of basis is not always desirable or even possible. Our multivariate CAR model allows each outcome to have a different neighborhood structure which can utilize different lattices for each structure. The model is applied in two real data analysis. The first is a Bayesian learning example in mapping the 2006 Iowa Mumps epidemic, which demonstrates the importance of utilizing multiple channels of infection flow in mapping infectious diseases. The second is a multivariate analysis of poverty levels and educational attainment in the American Community Survey. PMID:25457598
NASA Astrophysics Data System (ADS)
Houchin, J. S.
2014-09-01
A common problem for the off-line validation of the calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of sets of test data are staged for the algorithm once, and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we are continually running new data sets through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing both efficient means to stage and process an input data set, to override static calibration coefficient look-up-tables (LUT) with experimental versions of those tables, and to manage a library containing multiple versions of each of the static LUT files in such a way that the correct set of LUTs required for each algorithm are automatically provided to the algorithm without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.
Multivariate residues and maximal unitarity
NASA Astrophysics Data System (ADS)
Søgaard, Mads; Zhang, Yang
2013-12-01
We extend the maximal unitarity method to amplitude contributions whose cuts define multidimensional algebraic varieties. The technique is valid to all orders and is explicitly demonstrated at three loops in gauge theories with any number of fermions and scalars in the adjoint representation. Deca-cuts realized by replacement of real slice integration contours by higher-dimensional tori encircling the global poles are used to factorize the planar triple box onto a product of trees. We apply computational algebraic geometry and multivariate complex analysis to derive unique projectors for all master integral coefficients and obtain compact analytic formulae in terms of tree-level data.
Calibration of Germanium Resistance Thermometers
NASA Technical Reports Server (NTRS)
Ladner, D.; Urban, E.; Mason, F. C.
1987-01-01
Largely completed thermometer-calibration cryostat and probe allows six germanium resistance thermometers to be calibrated at one time at superfluid-helium temperatures. In experiments involving several such thermometers, use of this calibration apparatus results in substantial cost savings. Cryostat maintains temperature less than 2.17 K through controlled evaporation and removal of liquid helium from Dewar. Probe holds thermometers to be calibrated and applies small amount of heat as needed to maintain precise temperature below 2.17 K.
Multivariate bubbles and antibubbles
NASA Astrophysics Data System (ADS)
Fry, John
2014-08-01
In this paper we develop models for multivariate financial bubbles and antibubbles based on statistical physics. In particular, we extend a rich set of univariate models to higher dimensions. Changes in market regime can be explicitly shown to represent a phase transition from random to deterministic behaviour in prices. Moreover, our multivariate models are able to capture some of the contagious effects that occur during such episodes. We are able to show that declining lending quality helped fuel a bubble in the US stock market prior to 2008. Further, our approach offers interesting insights into the spatial development of UK house prices.
Multivariate Data EXplorer (MDX)
Steed, Chad Allen
2012-08-01
The MDX toolkit facilitates exploratory data analysis and visualization of multivariate datasets. MDX provides and interactive graphical user interface to load, explore, and modify multivariate datasets stored in tabular forms. MDX uses an extended version of the parallel coordinates plot and scatterplots to represent the data. The user can perform rapid visual queries using mouse gestures in the visualization panels to select rows or columns of interest. The visualization panel provides coordinated multiple views whereby selections made in one plot are propagated to the other plots. Users can also export selected data or reconfigure the visualization panel to explore relationships between columns and rows in the data.
Garrido, M; Larrechi, M S; Rius, F X
2006-02-01
This study describes the combination of multivariate curve resolution-alternating least squares with a kinetic modeling strategy for obtaining the kinetic rate constants of a curing reaction of epoxy resins. The reaction between phenyl glycidyl ether and aniline is monitored by near-infrared spectroscopy under isothermal conditions for several initial molar ratios of the reagents. The data for all experiments, arranged in a column-wise augmented data matrix, are analyzed using multivariate curve resolution-alternating least squares. The concentration profiles recovered are fitted to a chemical model proposed for the reaction. The selection of the kinetic model is assisted by the information contained in the recovered concentration profiles. The nonlinear fitting provides the kinetic rate constants. The optimized rate constants are in agreement with values reported in the literature.
Olivieri, Alejandro C
2015-04-01
Practical guidelines for reporting analytical calibration results are provided. General topics, such as the number of reported significant figures and the optimization of analytical procedures, affect all calibration scenarios. In the specific case of single-component or univariate calibration, relevant issues discussed in the present Tutorial include: (1) how linearity can be assessed, (2) how to correctly estimate the limits of detection and quantitation, (2) when and how standard addition should be employed, (3) how to apply recovery studies for evaluating accuracy and precision, and (4) how average prediction errors can be compared for different analytical methodologies. For multi-component calibration procedures based on multivariate data, pertinent subjects here included are the choice of algorithms, the estimation of analytical figures of merit (detection capabilities, sensitivity, selectivity), the use of non-linear models, the consideration of the model regression coefficients for variable selection, and the application of certain mathematical pre-processing procedures such as smoothing.
NASA Astrophysics Data System (ADS)
Wan, Boyong
2007-12-01
Airborne passive Fourier transform infrared spectrometry is gaining increased attention in environmental applications because of its great flexibility. Usually, pattern recognition techniques are used for automatic analysis of large amount of collected data. However, challenging problems are the constantly changing background and high calibration cost. As aircraft is flying, background is always changing. Also, considering the great variety of backgrounds and high expense of data collection from aircraft, cost of collecting representative training data is formidable. Instead of using airborne data, data generated from simulation strategies can be used for training purposes. Training data collected under controlled conditions on the ground or synthesized from real backgrounds can be both options. With both strategies, classifiers may be developed with much lower cost. For both strategies, signal processing techniques need to be used to extract analyte features. In this dissertation, signal processing methods are applied either in interferogram or spectral domain for features extraction. Then, pattern recognition methods are applied to develop binary classifiers for automated detection of air-collected methanol and ethanol vapors. The results demonstrate, with optimized signal processing methods and training set composition, classifiers trained from ground-collected or synthetic data can give good classification on real air-collected data. Near-infrared (NIR) spectrometry is emerging as a promising tool for noninvasive blood glucose detection. In combination with multivariate calibration techniques, NIR spectroscopy can give quick quantitative determinations of many species with minimal sample preparation. However, one main problem with NIR calibrations is degradation of calibration model over time. The varying background information will worsen the prediction precision and complicate the multivariate models. To mitigate the needs for frequent recalibration and
Multivariate Analysis in Metabolomics
Worley, Bradley; Powers, Robert
2015-01-01
Metabolomics aims to provide a global snapshot of all small-molecule metabolites in cells and biological fluids, free of observational biases inherent to more focused studies of metabolism. However, the staggeringly high information content of such global analyses introduces a challenge of its own; efficiently forming biologically relevant conclusions from any given metabolomics dataset indeed requires specialized forms of data analysis. One approach to finding meaning in metabolomics datasets involves multivariate analysis (MVA) methods such as principal component analysis (PCA) and partial least squares projection to latent structures (PLS), where spectral features contributing most to variation or separation are identified for further analysis. However, as with any mathematical treatment, these methods are not a panacea; this review discusses the use of multivariate analysis for metabolomics, as well as common pitfalls and misconceptions. PMID:26078916
Multivariate Data EXplorer (MDX)
2012-08-01
The MDX toolkit facilitates exploratory data analysis and visualization of multivariate datasets. MDX provides and interactive graphical user interface to load, explore, and modify multivariate datasets stored in tabular forms. MDX uses an extended version of the parallel coordinates plot and scatterplots to represent the data. The user can perform rapid visual queries using mouse gestures in the visualization panels to select rows or columns of interest. The visualization panel provides coordinated multiple views wherebymore » selections made in one plot are propagated to the other plots. Users can also export selected data or reconfigure the visualization panel to explore relationships between columns and rows in the data.« less
Herman, J.R.; Hudson, R.; McPeters, R.; Stolarski, R. ); Ahmad, Z.; Gu, X.Y., Taylor, S.; Wellemeyer, C. )
1991-04-20
The currently archived (1989) total ozone mapping spectrometer (TOMS) and solar backscattered ultraviolet (SBUV) total ozone data (version 5) show a global average decrease of about 9.0% from November 1978 to November 1988. This large decrease disagrees with an approximate 3.5% decrease estimated from the ground-based Dobson network. The primary source of disagreement was found to arise from an overestimate of reflectivity change and its incorrect wavelengths dependence for the diffuser plate used when measuring solar irradiance. For total ozone measured by TOMS, a means has been found to use the measured radiance-irradiance ratio from several wavelengths pairs to construct an internally self consistent calibration. The method uses the wavelength dependence of the sensitivity to calibration errors and the requirement that albedo ratios for each wavelength pair yield the same total ozone amounts. Smaller errors in determining spacecraft attitude, synchronization problems with the photon counting electronics, and sea glint contamination of boundary reflectivity data have been corrected or minimized. New climatological low-ozone profiles have been incorporated into the TOMS algorithm that are appropriate for Antarctic ozone hole conditions and other low ozone cases. The combined corrections have led to a new determination of the global average total ozone trend (version 6) as a 2.9 {plus minus} 1.3% decrease over 11 years. Version 6 data are shown to be in agreement within error limits with the average of 39 ground-based Dobson stations and with the world standard Dobson spectrometer 83 at Mauna Loa, Hawaii.
A variable acceleration calibration system
NASA Astrophysics Data System (ADS)
Johnson, Thomas H.
2011-12-01
A variable acceleration calibration system that applies loads using gravitational and centripetal acceleration serves as an alternative, efficient and cost effective method for calibrating internal wind tunnel force balances. Two proof-of-concept variable acceleration calibration systems are designed, fabricated and tested. The NASA UT-36 force balance served as the test balance for the calibration experiments. The variable acceleration calibration systems are shown to be capable of performing three component calibration experiments with an approximate applied load error on the order of 1% of the full scale calibration loads. Sources of error are indentified using experimental design methods and a propagation of uncertainty analysis. Three types of uncertainty are indentified for the systems and are attributed to prediction error, calibration error and pure error. Angular velocity uncertainty is shown to be the largest indentified source of prediction error. The calibration uncertainties using a production variable acceleration based system are shown to be potentially equivalent to current methods. The production quality system can be realized using lighter materials and a more precise instrumentation. Further research is needed to account for balance deflection, forcing effects due to vibration, and large tare loads. A gyroscope measurement technique is shown to be capable of resolving the balance deflection angle calculation. Long term research objectives include a demonstration of a six degree of freedom calibration, and a large capacity balance calibration.
Introduction to multivariate discrimination
NASA Astrophysics Data System (ADS)
Kégl, Balázs
2013-07-01
Multivariate discrimination or classification is one of the best-studied problem in machine learning, with a plethora of well-tested and well-performing algorithms. There are also several good general textbooks [1-9] on the subject written to an average engineering, computer science, or statistics graduate student; most of them are also accessible for an average physics student with some background on computer science and statistics. Hence, instead of writing a generic introduction, we concentrate here on relating the subject to a practitioner experimental physicist. After a short introduction on the basic setup (Section 1) we delve into the practical issues of complexity regularization, model selection, and hyperparameter optimization (Section 2), since it is this step that makes high-complexity non-parametric fitting so different from low-dimensional parametric fitting. To emphasize that this issue is not restricted to classification, we illustrate the concept on a low-dimensional but non-parametric regression example (Section 2.1). Section 3 describes the common algorithmic-statistical formal framework that unifies the main families of multivariate classification algorithms. We explain here the large-margin principle that partly explains why these algorithms work. Section 4 is devoted to the description of the three main (families of) classification algorithms, neural networks, the support vector machine, and AdaBoost. We do not go into the algorithmic details; the goal is to give an overview on the form of the functions these methods learn and on the objective functions they optimize. Besides their technical description, we also make an attempt to put these algorithm into a socio-historical context. We then briefly describe some rather heterogeneous applications to illustrate the pattern recognition pipeline and to show how widespread the use of these methods is (Section 5). We conclude the chapter with three essentially open research problems that are either
Multivariate respiratory motion prediction
NASA Astrophysics Data System (ADS)
Dürichen, R.; Wissel, T.; Ernst, F.; Schlaefer, A.; Schweikard, A.
2014-10-01
In extracranial robotic radiotherapy, tumour motion is compensated by tracking external and internal surrogates. To compensate system specific time delays, time series prediction of the external optical surrogates is used. We investigate whether the prediction accuracy can be increased by expanding the current clinical setup by an accelerometer, a strain belt and a flow sensor. Four previously published prediction algorithms are adapted to multivariate inputs—normalized least mean squares (nLMS), wavelet-based least mean squares (wLMS), support vector regression (SVR) and relevance vector machines (RVM)—and evaluated for three different prediction horizons. The measurement involves 18 subjects and consists of two phases, focusing on long term trends (M1) and breathing artefacts (M2). To select the most relevant and least redundant sensors, a sequential forward selection (SFS) method is proposed. Using a multivariate setting, the results show that the clinically used nLMS algorithm is susceptible to large outliers. In the case of irregular breathing (M2), the mean root mean square error (RMSE) of a univariate nLMS algorithm is 0.66 mm and can be decreased to 0.46 mm by a multivariate RVM model (best algorithm on average). To investigate the full potential of this approach, the optimal sensor combination was also estimated on the complete test set. The results indicate that a further decrease in RMSE is possible for RVM (to 0.42 mm). This motivates further research about sensor selection methods. Besides the optical surrogates, the sensors most frequently selected by the algorithms are the accelerometer and the strain belt. These sensors could be easily integrated in the current clinical setup and would allow a more precise motion compensation.
Method of multivariate spectral analysis
Keenan, Michael R.; Kotula, Paul G.
2004-01-06
A method of determining the properties of a sample from measured spectral data collected from the sample by performing a multivariate spectral analysis. The method can include: generating a two-dimensional matrix A containing measured spectral data; providing a weighted spectral data matrix D by performing a weighting operation on matrix A; factoring D into the product of two matrices, C and S.sup.T, by performing a constrained alternating least-squares analysis of D=CS.sup.T, where C is a concentration intensity matrix and S is a spectral shapes matrix; unweighting C and S by applying the inverse of the weighting used previously; and determining the properties of the sample by inspecting C and S. This method can be used to analyze X-ray spectral data generated by operating a Scanning Electron Microscope (SEM) with an attached Energy Dispersive Spectrometer (EDS).
Mossavar-Rahmani, Yasmin; Shaw, Pamela A; Wong, William W; Sotres-Alvarez, Daniela; Gellman, Marc D; Van Horn, Linda; Stoutenberg, Mark; Daviglus, Martha L; Wylie-Rosett, Judith; Siega-Riz, Anna Maria; Ou, Fang-Shu; Prentice, Ross L
2015-06-15
We investigated measurement error in the self-reported diets of US Hispanics/Latinos, who are prone to obesity and related comorbidities, by background (Central American, Cuban, Dominican, Mexican, Puerto Rican, and South American) in 2010-2012. In 477 participants aged 18-74 years, doubly labeled water and urinary nitrogen were used as objective recovery biomarkers of energy and protein intakes. Self-report was captured from two 24-hour dietary recalls. All measures were repeated in a subsample of 98 individuals. We examined the bias of dietary recalls and their associations with participant characteristics using generalized estimating equations. Energy intake was underestimated by 25.3% (men, 21.8%; women, 27.3%), and protein intake was underestimated by 18.5% (men, 14.7%; women, 20.7%). Protein density was overestimated by 10.7% (men, 11.3%; women, 10.1%). Higher body mass index and Hispanic/Latino background were associated with underestimation of energy (P<0.05). For protein intake, higher body mass index, older age, nonsmoking, Spanish speaking, and Hispanic/Latino background were associated with underestimation (P<0.05). Systematic underreporting of energy and protein intakes and overreporting of protein density were found to vary significantly by Hispanic/Latino background. We developed calibration equations that correct for subject-specific error in reporting that can be used to reduce bias in diet-disease association studies.
NASA Technical Reports Server (NTRS)
Lessard, Wendy B.
1999-01-01
The objective of this study is to calibrate a Navier-Stokes code for the TCA (30/10) baseline configuration (partial span leading edge flaps were deflected at 30 degs. and all the trailing edge flaps were deflected at 10 degs). The computational results for several angles of attack are compared with experimental force, moments, and surface pressures. The code used in this study is CFL3D; mesh sequencing and multi-grid were used to full advantage to accelerate convergence. A multi-grid approach was used similar to that used for the Reference H configuration allowing point-to-point matching across all the trailingedge block interfaces. From past experiences with the Reference H (ie, good force, moment, and pressure comparisons were obtained), it was assumed that the mounting system would produce small effects; hence, it was not initially modeled. However, comparisons of lower surface pressures indicated the post mount significantly influenced the lower surface pressures, so the post geometry was inserted into the existing grid using Chimera (overset grids).
NASA Technical Reports Server (NTRS)
Bate, T.; Calkins, D. E.; Price, P.; Veikins, O.
1971-01-01
Calibrator generates accurate flow velocities over wide range of gas pressure, temperature, and composition. Both pressure and flow velocity can be maintained within 0.25 percent. Instrument is essentially closed loop hydraulic system containing positive displacement drive.
Relationship between Multiple Regression and Selected Multivariable Methods.
ERIC Educational Resources Information Center
Schumacker, Randall E.
The relationship of multiple linear regression to various multivariate statistical techniques is discussed. The importance of the standardized partial regression coefficient (beta weight) in multiple linear regression as it is applied in path, factor, LISREL, and discriminant analyses is emphasized. The multivariate methods discussed in this paper…
NASA Astrophysics Data System (ADS)
Manz, B.; Buytaert, W.; Tobón, C.; Villacis, M.; García, F.
2014-12-01
With the imminent release of GPM it is essential for the hydrological user community to improve the spatial resolution of satellite precipitation products (SPPs), also retrospectively of historical time-series. Despite the growing number of applications, to date SPPs have two major weaknesses. Firstly, geosynchronous infrared (IR) SPPs, relying exclusively on cloud elevation/ IR temperature, fail to replicate ground rainfall rates especially for convective rainfall. Secondly, composite SPPs like TRMM include microwave and active radar to overcome this, but the coarse spatial resolution (0.25°) from infrequent orbital sampling often fails to: a) characterize precipitation patterns (especially extremes) in complex topography regions, and b) allow for gauge comparisons with adequate spatial support. This is problematic for satellite-gauge merging and subsequent hydrological modelling applications. We therefore present a new re-calibration and downscaling routine that is applicable to 0.25°/ 3-hrly TRMM 3B42 and Level 3 GPM time-series to generate 1 km estimates. 16 years of instantaneous TRMM radar (TPR) images were evaluated against a unique dataset of over 100 10-min rain gauges from the tropical Andes (Colombia & Ecuador) to develop a spatially distributed error surface. Long-term statistics on occurrence frequency, convective/ stratiform fraction and extreme precipitation probability (Gamma & Generalized Pareto distributions) were computed from TPR at the 1 km scale as well as from TPR and 3B42 at the 0.25° scale. To downscale from 0.25° to 1 km a stochastic generator was used to restrict precipitation occurrence to a fraction of the 1 km pixels within the 0.25° gridcell at every time-step. Regression modelling established a relationship between probability distributions at the 0.25° scale and rainfall amounts were assigned to the retained 1 km pixels by quantile-matching to the gridcell. The approach inherently provides mass conservation of the downscaled
Multivariate Hypergeometric Similarity Measure
Kaddi, Chanchala D.; Parry, R. Mitchell; Wang, May D.
2016-01-01
We propose a similarity measure based on the multivariate hypergeometric distribution for the pairwise comparison of images and data vectors. The formulation and performance of the proposed measure are compared with other similarity measures using synthetic data. A method of piecewise approximation is also implemented to facilitate application of the proposed measure to large samples. Example applications of the proposed similarity measure are presented using mass spectrometry imaging data and gene expression microarray data. Results from synthetic and biological data indicate that the proposed measure is capable of providing meaningful discrimination between samples, and that it can be a useful tool for identifying potentially related samples in large-scale biological data sets. PMID:24407308
NASA Technical Reports Server (NTRS)
Peay, Christopher S.; Palacios, David M.
2011-01-01
Calibrate_Image calibrates images obtained from focal plane arrays so that the output image more accurately represents the observed scene. The function takes as input a degraded image along with a flat field image and a dark frame image produced by the focal plane array and outputs a corrected image. The three most prominent sources of image degradation are corrected for: dark current accumulation, gain non-uniformity across the focal plane array, and hot and/or dead pixels in the array. In the corrected output image the dark current is subtracted, the gain variation is equalized, and values for hot and dead pixels are estimated, using bicubic interpolation techniques.
NASA Technical Reports Server (NTRS)
1983-01-01
Flow Technology Inc. worked with Lewis Research Center to develop a system for monitoring two different propellants being supplied to a spacecraft rocket thruster. They then commercialized the technology in the Microtrack, an extremely precise low-flow calibration system. Moog Inc., one of the device's primary users, measures the flow rate or the speed at which hydraulic oil flows through pin sized holes in disc shaped sapphires with the Microtrack. Using this data, two orifices with exactly the same flow rate can be matched as a pair and used as masters in servovalve production. The microtrack can also be used to calibrate other equipment.
Tarachiwin, Lucksanaporn; Masako, Osawa; Fukusaki, Eiichiro
2008-07-23
(1)H NMR spectrometry in combination with multivariate analysis was considered to provide greater information on quality assessment over an ordinary sensory testing method due to its high reliability and high accuracy. The sensory quality evaluation of watermelon (Citrullus lanatus (Thunb.) Matsum. & Nakai) was carried out by means of (1)H NMR-based metabolomics. Multivariate analyses by partial least-squares projections to latent structures-discrimination analysis (PLS-DA) and PLS-regression offered extensive information for quality differentiation and quality evaluation, respectively. The impact of watermelon and rootstock cultivars on the sensory qualities of watermelon was determined on the basis of (1)H NMR metabolic fingerprinting and profiling. The significant metabolites contributing to the discrimination were also identified. A multivariate calibration model was successfully constructed by PLS-regression with extremely high reliability and accuracy. Thus, (1)H NMR-based metabolomics with multivariate analysis was considered to be one of the most suitable complementary techniques that could be applied to assess and predict the sensory quality of watermelons and other horticultural plants.
Analytical advantages of multivariate data processing. One, two, three, infinity?
Olivieri, Alejandro C
2008-08-01
Multidimensional data are being abundantly produced by modern analytical instrumentation, calling for new and powerful data-processing techniques. Research in the last two decades has resulted in the development of a multitude of different processing algorithms, each equipped with its own sophisticated artillery. Analysts have slowly discovered that this body of knowledge can be appropriately classified, and that common aspects pervade all these seemingly different ways of analyzing data. As a result, going from univariate data (a single datum per sample, employed in the well-known classical univariate calibration) to multivariate data (data arrays per sample of increasingly complex structure and number of dimensions) is known to provide a gain in sensitivity and selectivity, combined with analytical advantages which cannot be overestimated. The first-order advantage, achieved using vector sample data, allows analysts to flag new samples which cannot be adequately modeled with the current calibration set. The second-order advantage, achieved with second- (or higher-) order sample data, allows one not only to mark new samples containing components which do not occur in the calibration phase but also to model their contribution to the overall signal, and most importantly, to accurately quantitate the calibrated analyte(s). No additional analytical advantages appear to be known for third-order data processing. Future research may permit, among other interesting issues, to assess if this "1, 2, 3, infinity" situation of multivariate calibration is really true. PMID:18613646
Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert M.
2013-01-01
A new regression model search algorithm was developed that may be applied to both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The algorithm is a simplified version of a more complex algorithm that was originally developed for the NASA Ames Balance Calibration Laboratory. The new algorithm performs regression model term reduction to prevent overfitting of data. It has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a regression model search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression model. Therefore, the simplified algorithm is not intended to replace the original algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new search algorithm.
de Godoy, Luiz Antonio Fonseca; Hantao, Leandro Wang; Pedroso, Marcio Pozzobon; Poppi, Ronei Jesus; Augusto, Fabio
2011-08-01
The use of multivariate curve resolution (MCR) to build multivariate quantitative models using data obtained from comprehensive two-dimensional gas chromatography with flame ionization detection (GC×GC-FID) is presented and evaluated. The MCR algorithm presents some important features, such as second order advantage and the recovery of the instrumental response for each pure component after optimization by an alternating least squares (ALS) procedure. A model to quantify the essential oil of rosemary was built using a calibration set containing only known concentrations of the essential oil and cereal alcohol as solvent. A calibration curve correlating the concentration of the essential oil of rosemary and the instrumental response obtained from the MCR-ALS algorithm was obtained, and this calibration model was applied to predict the concentration of the oil in complex samples (mixtures of the essential oil, pineapple essence and commercial perfume). The values of the root mean square error of prediction (RMSEP) and of the root mean square error of the percentage deviation (RMSPD) obtained were 0.4% (v/v) and 7.2%, respectively. Additionally, a second model was built and used to evaluate the accuracy of the method. A model to quantify the essential oil of lemon grass was built and its concentration was predicted in the validation set and real perfume samples. The RMSEP and RMSPD obtained were 0.5% (v/v) and 6.9%, respectively, and the concentration of the essential oil of lemon grass in perfume agreed to the value informed by the manufacturer. The result indicates that the MCR algorithm is adequate to resolve the target chromatogram from the complex sample and to build multivariate models of GC×GC-FID data.
NASA Astrophysics Data System (ADS)
Libera, D.; Arumugam, S.
2015-12-01
Water quality observations are usually not available on a continuous basis because of the expensive cost and labor requirements so calibrating and validating a mechanistic model is often difficult. Further, any model predictions inherently have bias (i.e., under/over estimation) and require techniques that preserve the long-term mean monthly attributes. This study suggests and compares two multivariate bias-correction techniques to improve the performance of the SWAT model in predicting daily streamflow, TN Loads across the southeast based on split-sample validation. The first approach is a dimension reduction technique, canonical correlation analysis that regresses the observed multivariate attributes with the SWAT model simulated values. The second approach is from signal processing, importance weighting, that applies a weight based off the ratio of the observed and model densities to the model data to shift the mean, variance, and cross-correlation towards the observed values. These procedures were applied to 3 watersheds chosen from the Water Quality Network in the Southeast Region; specifically watersheds with sufficiently large drainage areas and number of observed data points. The performance of these two approaches are also compared with independent estimates from the USGS LOADEST model. Uncertainties in the bias-corrected estimates due to limited water quality observations are also discussed.
NASA Astrophysics Data System (ADS)
Zaconte, V.; Altea Team
The ALTEA project is aimed at studying the possible functional damages to the Central Nervous System (CNS) due to particle radiation in space environment. The project is an international and multi-disciplinary collaboration. The ALTEA facility is an helmet-shaped device that will study concurrently the passage of cosmic radiation through the brain, the functional status of the visual system and the electrophysiological dynamics of the cortical activity. The basic instrumentation is composed by six active particle telescopes, one ElectroEncephaloGraph (EEG), a visual stimulator and a pushbutton. The telescopes are able to detect the passage of each particle measuring its energy, trajectory and released energy into the brain and identifying nuclear species. The EEG and the Visual Stimulator are able to measure the functional status of the visual system, the cortical electrophysiological activity, and to look for a correlation between incident particles, brain activity and Light Flash perceptions. These basic instruments can be used separately or in any combination, permitting several different experiments. ALTEA is scheduled to fly in the International Space Station (ISS) in November, 15th 2004. In this paper the calibration of the Flight Model of the silicon telescopes (Silicon Detector Units - SDUs) will be shown. These measures have been taken at the GSI heavy ion accelerator in Darmstadt. First calibration has been taken out in November 2003 on the SDU-FM1 using C nuclei at different energies: 100, 150, 400 and 600 Mev/n. We performed a complete beam scan of the SDU-FM1 to check functionality and homogeneity of all strips of silicon detector planes, for each beam energy we collected data to achieve good statistics and finally we put two different thickness of Aluminium and Plexiglas in front of the detector in order to study fragmentations. This test has been carried out with a Test Equipment to simulate the Digital Acquisition Unit (DAU). We are scheduled to
Giacomo, Della Riccia; Stefania, Del Zotto
2013-12-15
Fumonisins are mycotoxins produced by Fusarium species that commonly live in maize. Whereas fungi damage plants, fumonisins cause disease both to cattle breedings and human beings. Law limits set fumonisins tolerable daily intake with respect to several maize based feed and food. Chemical techniques assure the most reliable and accurate measurements, but they are expensive and time consuming. A method based on Near Infrared spectroscopy and multivariate statistical regression is described as a simpler, cheaper and faster alternative. We apply Partial Least Squares with full cross validation. Two models are described, having high correlation of calibration (0.995, 0.998) and of validation (0.908, 0.909), respectively. Description of observed phenomenon is accurate and overfitting is avoided. Screening of contaminated maize with respect to European legal limit of 4 mg kg(-1) should be assured.
Local hadron calibration with ATLAS
NASA Astrophysics Data System (ADS)
Giovannini, Paola; ATLAS Liquid Argon Calorimeter Group
2011-04-01
The method of Local Hadron Calibration is used in ATLAS as one of the two major calibration schemes for the reconstruction of jets and missing transverse energy. The method starts from noise suppressed clusters and corrects them for non-compensation effects and for losses due to noise threshold and dead material. Jets are reconstructed using the calibrated clusters and are then corrected for out of cone effects. The performance of the corrections applied to the calorimeter clusters is tested with detailed GEANT4 information. Results obtained with this procedure are discussed both for single pion simulations and for di-jet simulations. The calibration scheme is validated on data, by comparing the calibrated cluster energy in data with Mote Carlo simulations. Preliminary results obtained with GeV collision data are presented. The agreement between data and Monte Carlo is within 5% for the final cluster scale.
Metwally, Fadia H
2008-02-01
The quantitative predictive abilities of the new and simple bivariate spectrophotometric method are compared with the results obtained by the use of multivariate calibration methods [the classical least squares (CLS), principle component regression (PCR) and partial least squares (PLS)], using the information contained in the absorption spectra of the appropriate solutions. Mixtures of the two drugs Nifuroxazide (NIF) and Drotaverine hydrochloride (DRO) were resolved by application of the bivariate method. The different chemometric approaches were applied also with previous optimization of the calibration matrix, as they are useful in simultaneous inclusion of many spectral wavelengths. The results found by application of the bivariate, CLS, PCR and PLS methods for the simultaneous determinations of mixtures of both components containing 2-12microgml(-1) of NIF and 2-8microgml(-1) of DRO are reported. Both approaches were satisfactorily applied to the simultaneous determination of NIF and DRO in pure form and in pharmaceutical formulation. The results were in accordance with those given by the EVA Pharma reference spectrophotometric method. PMID:17631041
Metwally, Fadia H
2008-02-01
The quantitative predictive abilities of the new and simple bivariate spectrophotometric method are compared with the results obtained by the use of multivariate calibration methods [the classical least squares (CLS), principle component regression (PCR) and partial least squares (PLS)], using the information contained in the absorption spectra of the appropriate solutions. Mixtures of the two drugs Nifuroxazide (NIF) and Drotaverine hydrochloride (DRO) were resolved by application of the bivariate method. The different chemometric approaches were applied also with previous optimization of the calibration matrix, as they are useful in simultaneous inclusion of many spectral wavelengths. The results found by application of the bivariate, CLS, PCR and PLS methods for the simultaneous determinations of mixtures of both components containing 2-12microgml(-1) of NIF and 2-8microgml(-1) of DRO are reported. Both approaches were satisfactorily applied to the simultaneous determination of NIF and DRO in pure form and in pharmaceutical formulation. The results were in accordance with those given by the EVA Pharma reference spectrophotometric method.
NASA Astrophysics Data System (ADS)
Metwally, Fadia H.
2008-02-01
The quantitative predictive abilities of the new and simple bivariate spectrophotometric method are compared with the results obtained by the use of multivariate calibration methods [the classical least squares (CLS), principle component regression (PCR) and partial least squares (PLS)], using the information contained in the absorption spectra of the appropriate solutions. Mixtures of the two drugs Nifuroxazide (NIF) and Drotaverine hydrochloride (DRO) were resolved by application of the bivariate method. The different chemometric approaches were applied also with previous optimization of the calibration matrix, as they are useful in simultaneous inclusion of many spectral wavelengths. The results found by application of the bivariate, CLS, PCR and PLS methods for the simultaneous determinations of mixtures of both components containing 2-12 μg ml -1 of NIF and 2-8 μg ml -1 of DRO are reported. Both approaches were satisfactorily applied to the simultaneous determination of NIF and DRO in pure form and in pharmaceutical formulation. The results were in accordance with those given by the EVA Pharma reference spectrophotometric method.
Pattern recognition used to investigate multivariate data in analytical chemistry
Jurs, P.C.
1986-06-06
Pattern recognition and allied multivariate methods provide an approach to the interpretation of the multivariate data often encountered in analytical chemistry. Widely used methods include mapping and display, discriminant development, clustering, and modeling. Each has been applied to a variety of chemical problems, and examples are given. The results of two recent studies are shown, a classification of subjects as normal or cystic fibrosis heterozygotes and simulation of chemical shifts of carbon-13 nuclear magnetic resonance spectra by linear model equations.
NASA Astrophysics Data System (ADS)
Badocco, Denis; Lavagnini, Irma; Mondin, Andrea; Favaro, Gabriella; Pastore, Paolo
2015-12-01
The limit of quantification (LOQ) in the presence of instrumental and non-instrumental errors was proposed. It was theoretically defined combining the two-component variance regression and LOQ schemas already present in the literature and applied to the calibration of zinc by the ICP-MS technique. At low concentration levels, the two-component variance LOQ definition should be always used above all when a clean room is not available. Three LOQ definitions were accounted for. One of them in the concentration and two in the signal domain. The LOQ computed in the concentration domain, proposed by Currie, was completed by adding the third order terms in the Taylor expansion because they are of the same order of magnitude of the second ones so that they cannot be neglected. In this context, the error propagation was simplified by eliminating the correlation contributions by using independent random variables. Among the signal domain definitions, a particular attention was devoted to the recently proposed approach based on at least one significant digit in the measurement. The relative LOQ values resulted very large in preventing the quantitative analysis. It was found that the Currie schemas in the signal and concentration domains gave similar LOQ values but the former formulation is to be preferred as more easily computable.
TIME CALIBRATED OSCILLOSCOPE SWEEP
Owren, H.M.; Johnson, B.M.; Smith, V.L.
1958-04-22
The time calibrator of an electric signal displayed on an oscilloscope is described. In contrast to the conventional technique of using time-calibrated divisions on the face of the oscilloscope, this invention provides means for directly superimposing equal time spaced markers upon a signal displayed upon an oscilloscope. More explicitly, the present invention includes generally a generator for developing a linear saw-tooth voltage and a circuit for combining a high-frequency sinusoidal voltage of a suitable amplitude and frequency with the saw-tooth voltage to produce a resultant sweep deflection voltage having a wave shape which is substantially linear with respect to time between equal time spaced incremental plateau regions occurring once each cycle of the sinusoidal voltage. The foregoing sweep voltage when applied to the horizontal deflection plates in combination with a signal to be observed applied to the vertical deflection plates of a cathode ray oscilloscope produces an image on the viewing screen which is essentially a display of the signal to be observed with respect to time. Intensified spots, or certain other conspicuous indications corresponding to the equal time spaced plateau regions of said sweep voltage, appear superimposed upon said displayed signal, which indications are therefore suitable for direct time calibration purposes.
Banquet-Terán, Julio; Johnson-Restrepo, Boris; Hernández-Morelo, Alveiro; Ropero, Jorge; Fontalvo-Gomez, Miriam; Romañach, Rodolfo J
2016-07-01
A nondestructive and faster methodology to quantify mechanical properties of polypropylene (PP) pellets, obtained from an industrial plant, was developed with Raman spectroscopy. Raman spectra data were obtained from several types of samples such as homopolymer PP, random ethylene-propylene copolymer, and impact ethylene-propylene copolymer. Multivariate calibration models were developed by relating the changes in the Raman spectra to mechanical properties determined by ASTM tests (Young's traction modulus, tensile strength at yield, elongation at yield on traction, and flexural modulus at 1% secant). Several strategies were evaluated to build robust models including the use of preprocessing methods (baseline correction, vector normalization, de-trending, and standard normal variate), selecting the best subset of wavelengths to model property response and discarding irrelevant variables by applying genetic algorithm (GA). Linear multivariable models were investigated such as partial least square regression (PLS) and PLS with genetic algorithm (GA-PLS) while nonlinear models were implemented with artificial neural network (ANN) preceded by GA (GA-ANN). The best multivariate calibration models were obtained when a combination of genetic algorithms and artificial neural network were used on Raman spectral data with relative standard errors (%RSE) from 0.17 to 0.41 for training and 0.42 to 0.88% validation data sets. PMID:27287847
Regional dissociated heterochrony in multivariate analysis.
Mitteroecker, P; Gunz, P; Weber, G W; Bookstein, F L
2004-12-01
Heterochrony, the classic framework to study ontogeny and phylogeny, in essence relies on a univariate concept of shape. Though principal component plots of multivariate shape data seem to resemble classical bivariate allometric plots, the language of heterochrony cannot be translated directly into general multivariate methodology. We simulate idealized multivariate ontogenetic trajectories and demonstrate their behavior in principal component plots in shape space and in size-shape space. The concept of "dissociation", which is conventionally regarded as a change in the relationship between shape change and size change, appears to be algebraically the same as regional dissociation - the variation of apparent heterochrony by region. Only if the trajectories of two related species lie along exactly the same path in shape space can the classic terminology of heterochrony apply so that pure dissociation of size change against shape change can be detected. We demonstrate a geometric morphometric approach to these issues using adult and subadult crania of 48 Pan paniscus and 47 P. troglodytes. On each specimen we digitized 47 landmarks and 144 semilandmarks on ridge curves and the external neurocranial surface. The relation between these two species' growth trajectories is too complex for a simple summary in terms of global heterochrony.
Assessing causality in multivariate accident models.
Elvik, Rune
2011-01-01
This paper discusses the application of operational criteria of causality to multivariate statistical models developed to identify sources of systematic variation in accident counts, in particular the effects of variables representing safety treatments. Nine criteria of causality serving as the basis for the discussion have been developed. The criteria resemble criteria that have been widely used in epidemiology. To assess whether the coefficients estimated in a multivariate accident prediction model represent causal relationships or are non-causal statistical associations, all criteria of causality are relevant, but the most important criterion is how well a model controls for potentially confounding factors. Examples are given to show how the criteria of causality can be applied to multivariate accident prediction models in order to assess the relationships included in these models. It will often be the case that some of the relationships included in a model can reasonably be treated as causal, whereas for others such an interpretation is less supported. The criteria of causality are indicative only and cannot provide a basis for stringent logical proof of causality.
Extracting the MESA SR4000 calibrations
NASA Astrophysics Data System (ADS)
Charleston, Sean A.; Dorrington, Adrian A.; Streeter, Lee; Cree, Michael J.
2015-05-01
Time-of-flight range imaging cameras are capable of acquiring depth images of a scene. Some algorithms require these cameras to be run in `raw mode', where any calibrations from the off-the-shelf manufacturers are lost. The calibration of the MESA SR4000 is herein investigated, with an attempt to reconstruct the full calibration. Possession of the factory calibration enables calibrated data to be acquired and manipulated even in "raw mode." This work is motivated by the problem of motion correction, in which the calibration must be separated into component parts to be applied at different stages in the algorithm. There are also other applications, in which multiple frequencies are required, such as multipath interference correction. The other frequencies can be calibrated in a similar way, using the factory calibration as a base. A novel technique for capturing the calibration data is described; a retro-reflector is used on a moving platform, which acts as a point source at a distance, resulting in planar waves on the sensor. A number of calibrations are retrieved from the camera, and are then modelled and compared to the factory calibration. When comparing the factory calibration to both the "raw mode" data, and the calibration described herein, a root mean squared error improvement of 51:3mm was seen, with a standard deviation improvement of 34:9mm.
NASA Astrophysics Data System (ADS)
Ghasemi, Jahan B.; Zolfonoun, Ehsan
2013-11-01
A new multicomponent analysis method, based on principal component analysis-multivariate adaptive regression splines (PC-MARS) is proposed for the determination of dialkyltin compounds. In Tween-20 micellar media, dimethyl and dibutyltin react with morin to give fluorescent complexes with the maximum emission peaks at 527 and 520 nm, respectively. The spectrofluorimetric matrix data, before building the MARS models, were subjected to principal component analysis and decomposed to PC scores as starting points for the MARS algorithm. The algorithm classifies the calibration data into several groups, in each a regression line or hyperplane is fitted. Performances of the proposed methods were tested in term of root mean square errors of prediction (RMSEP), using synthetic solutions. The results show the strong potential of PC-MARS, as a multivariate calibration method, to be applied to spectral data for multicomponent determinations. The effect of different experimental parameters on the performance of the method were studied and discussed. The prediction capability of the proposed method compared with GC-MS method for determination of dimethyltin and/or dibutyltin.
Muon Energy Calibration of the MINOS Detectors
Miyagawa, Paul S.
2004-09-01
MINOS is a long-baseline neutrino oscillation experiment designed to search for conclusive evidence of neutrino oscillations and to measure the oscillation parameters precisely. MINOS comprises two iron tracking calorimeters located at Fermilab and Soudan. The Calibration Detector at CERN is a third MINOS detector used as part of the detector response calibration programme. A correct energy calibration between these detectors is crucial for the accurate measurement of oscillation parameters. This thesis presents a calibration developed to produce a uniform response within a detector using cosmic muons. Reconstruction of tracks in cosmic ray data is discussed. This data is utilized to calculate calibration constants for each readout channel of the Calibration Detector. These constants have an average statistical error of 1.8%. The consistency of the constants is demonstrated both within a single run and between runs separated by a few days. Results are presented from applying the calibration to test beam particles measured by the Calibration Detector. The responses are calibrated to within 1.8% systematic error. The potential impact of the calibration on the measurement of oscillation parameters by MINOS is also investigated. Applying the calibration reduces the errors in the measured parameters by {approx} 10%, which is equivalent to increasing the amount of data by 20%.
NASA Astrophysics Data System (ADS)
Guiu, E.; Rodrigues, M.; Touboul, P.; Pradels, G.
The MICROSCOPE mission is planned for launch in early 2009. It aims to verify the Equivalence Principle to an accuracy of 10-15, which is currently difficult to obtain on Earth because of the intrinsic limitations of the torsion pendulum and disturbing phenomena, like seismic activity. In space the experiment can take advantage of the quiet environment provided by a drag-free satellite. The instrument used for the test is a differential electrostatic accelerometer composed of two inertial sensors with test-masses made of different materials: one in Platinum Rhodium alloy, the other in Titanium alloy. The space experiment will also benefit from a second differential accelerometer with both test-masses of the same material, which will be used as a reference instrument to characterise the disturbing signals and sensitivities. The in-orbit calibration of the instrument is mandatory to validate the space test and several procedures have been previously proposed, taking advantage of the satellite propulsion system or the a priori knowledge of natural in-orbit applied accelerations. Due to the actual configuration of the MICROSCOPE propulsion system, the possibility of accurate satellite manoeuvres is limited but sufficient. This paper presents the necessary compromise between the knowledge of satellite and instrument parameters and the calibration procedures. The scenario of the MICROSCOPE in-orbit calibration phase is finely defined in agreement with the required performances for the EP test accuracy.
Multivariate Model of Infant Competence.
ERIC Educational Resources Information Center
Kierscht, Marcia Selland; Vietze, Peter M.
This paper describes a multivariate model of early infant competence formulated from variables representing infant-environment transaction including: birthweight, habituation index, personality ratings of infant social orientation and task orientation, ratings of maternal responsiveness to infant distress and social signals, and observational…
Parameter Sensitivity in Multivariate Methods
ERIC Educational Resources Information Center
Green, Bert F., Jr.
1977-01-01
Interpretation of multivariate models requires knowing how much the fit of the model is impaired by changes in the parameters. The relation of parameter change to loss of goodness of fit can be called parameter sensitivity. Formulas are presented for assessing the sensitivity of multiple regression and principal component weights. (Author/JKS)
NASA Astrophysics Data System (ADS)
Darvishzadeh, R.; Skidmore, A. K.; Mirzaie, M.; Atzberger, C.; Schlerf, M.
2014-12-01
Accurate estimation of grassland biomass at their peak productivity can provide crucial information regarding the functioning and productivity of the rangelands. Hyperspectral remote sensing has proved to be valuable for estimation of vegetation biophysical parameters such as biomass using different statistical techniques. However, in statistical analysis of hyperspectral data, multicollinearity is a common problem due to large amount of correlated hyper-spectral reflectance measurements. The aim of this study was to examine the prospect of above ground biomass estimation in a heterogeneous Mediterranean rangeland employing multivariate calibration methods. Canopy spectral measurements were made in the field using a GER 3700 spectroradiometer, along with concomitant in situ measurements of above ground biomass for 170 sample plots. Multivariate calibrations including partial least squares regression (PLSR), principal component regression (PCR), and Least-Squared Support Vector Machine (LS-SVM) were used to estimate the above ground biomass. The prediction accuracy of the multivariate calibration methods were assessed using cross validated R2 and RMSE. The best model performance was obtained using LS_SVM and then PLSR both calibrated with first derivative reflectance dataset with R2cv = 0.88 & 0.86 and RMSEcv= 1.15 & 1.07 respectively. The weakest prediction accuracy was appeared when PCR were used (R2cv = 0.31 and RMSEcv= 2.48). The obtained results highlight the importance of multivariate calibration methods for biomass estimation when hyperspectral data are used.
Calibration of sound calibrators: an overview
NASA Astrophysics Data System (ADS)
Milhomem, T. A. B.; Soares, Z. M. D.
2016-07-01
This paper presents an overview of calibration of sound calibrators. Initially, traditional calibration methods are presented. Following, the international standard IEC 60942 is discussed emphasizing parameters, target measurement uncertainty and criteria for conformance to the requirements of the standard. Last, Regional Metrology Organizations comparisons are summarized.
Multichannel hierarchical image classification using multivariate copulas
NASA Astrophysics Data System (ADS)
Voisin, Aurélie; Krylov, Vladimir A.; Moser, Gabriele; Serpico, Sebastiano B.; Zerubia, Josiane
2012-03-01
This paper focuses on the classification of multichannel images. The proposed supervised Bayesian classification method applied to histological (medical) optical images and to remote sensing (optical and synthetic aperture radar) imagery consists of two steps. The first step introduces the joint statistical modeling of the coregistered input images. For each class and each input channel, the class-conditional marginal probability density functions are estimated by finite mixtures of well-chosen parametric families. For optical imagery, the normal distribution is a well-known model. For radar imagery, we have selected generalized gamma, log-normal, Nakagami and Weibull distributions. Next, the multivariate d-dimensional Clayton copula, where d can be interpreted as the number of input channels, is applied to estimate multivariate joint class-conditional statistics. As a second step, we plug the estimated joint probability density functions into a hierarchical Markovian model based on a quadtree structure. Multiscale features are extracted by discrete wavelet transforms, or by using input multiresolution data. To obtain the classification map, we integrate an exact estimator of the marginal posterior mode.
Multivariable PID control by decoupling
NASA Astrophysics Data System (ADS)
Garrido, Juan; Vázquez, Francisco; Morilla, Fernando
2016-04-01
This paper presents a new methodology to design multivariable proportional-integral-derivative (PID) controllers based on decoupling control. The method is presented for general n × n processes. In the design procedure, an ideal decoupling control with integral action is designed to minimise interactions. It depends on the desired open-loop processes that are specified according to realisability conditions and desired closed-loop performance specifications. These realisability conditions are stated and three common cases to define the open-loop processes are studied and proposed. Then, controller elements are approximated to PID structure. From a practical point of view, the wind-up problem is also considered and a new anti-wind-up scheme for multivariable PID controller is proposed. Comparisons with other works demonstrate the effectiveness of the methodology through the use of several simulation examples and an experimental lab process.
Oscillation pressure device for dynamic calibration of pressure transducers
NASA Technical Reports Server (NTRS)
Hess, Robert W. (Inventor); Davis, William T. (Inventor); Davis, Pamela A. (Inventor)
1987-01-01
Method and apparatus for obtaining dynamic calibrations of pressure transducers. A calibration head (15), a flexible tubing (23) and a bellows (20) enclose a volume of air at atmospheric pressure with a transducer (11) to be calibrated subject to the pressure inside the volume. All of the other apparatus in the drawing apply oscillations to bellows (20) causing the volume to change thereby applying oscillating pressures to transducer (11) whereby transducer (11) can be calibrated.
A multivariate Baltic Sea environmental index.
Dippner, Joachim W; Kornilovs, Georgs; Junker, Karin
2012-11-01
Since 2001/2002, the correlation between North Atlantic Oscillation index and biological variables in the North Sea and Baltic Sea fails, which might be addressed to a global climate regime shift. To understand inter-annual and inter-decadal variability in environmental variables, a new multivariate index for the Baltic Sea is developed and presented here. The multivariate Baltic Sea Environmental (BSE) index is defined as the 1st principal component score of four z-transformed time series: the Arctic Oscillation index, the salinity between 120 and 200 m in the Gotland Sea, the integrated river runoff of all rivers draining into the Baltic Sea, and the relative vorticity of geostrophic wind over the Baltic Sea area. A statistical downscaling technique has been applied to project different climate indices to the sea surface temperature in the Gotland, to the Landsort gauge, and the sea ice extent. The new BSE index shows a better performance than all other climate indices and is equivalent to the Chen index for physical properties. An application of the new index to zooplankton time series from the central Baltic Sea (Latvian EEZ) shows an excellent skill in potential predictability of environmental time series.
Multivariate statistical analysis of environmental monitoring data
Ross, D.L.
1997-11-01
EPA requires statistical procedures to determine whether soil or ground water adjacent to or below waste units is contaminated. These statistical procedures are often based on comparisons between two sets of data: one representing background conditions, and one representing site conditions. Since statistical requirements were originally promulgated in the 1980s, EPA has made several improvements and modifications. There are, however, problems which remain. One problem is that the regulations do not require a minimum probability that contaminated sites will be correctly identified. Another problems is that the effect of testing several correlated constituents on the probable outcome of the statistical tests has not been quantified. Results from computer simulations to determine power functions for realistic monitoring situations are presented here. Power functions for two different statistical procedures: the Student`s t-test, and the multivariate Hotelling`s T{sup 2} test, are compared. The comparisons indicate that the multivariate test is often more powerful when the tests are applied with significance levels to control the probability of falsely identifying clean sites as contaminated. This program could also be used to verify that statistical procedures achieve some minimum power standard at a regulated waste unit.
The Optimization of Multivariate Generalizability Studies with Budget Constraints.
ERIC Educational Resources Information Center
Marcoulides, George A.; Goldstein, Zvi
1992-01-01
A method is presented for determining the optimal number of conditions to use in multivariate-multifacet generalizability designs when resource constraints are imposed. A decision maker can determine the number of observations needed to obtain the largest possible generalizability coefficient. The procedure easily applies to the univariate case.…
Multivariate-normality goodness-of-fit tests
NASA Technical Reports Server (NTRS)
Falls, L. W.; Crutcher, H. L.
1977-01-01
Computer program applies chi-square Pearson test to multivariate statistics for application in any field in which data of two or more variables (dimensions) are sampled for statistical purposes. Program handles dimensions two through five, with up to thousand data sets.
Univariate Analysis of Multivariate Outcomes in Educational Psychology.
ERIC Educational Resources Information Center
Hubble, L. M.
1984-01-01
The author examined the prevalence of multiple operational definitions of outcome constructs and an estimate of the incidence of Type I error rates when univariate procedures were applied to multiple variables in educational psychology. Multiple operational definitions of constructs were advocated and wider use of multivariate analysis was…
Multivariate classification of infrared spectra of cell and tissue samples
Haaland, David M.; Jones, Howland D. T.; Thomas, Edward V.
1997-01-01
Multivariate classification techniques are applied to spectra from cell and tissue samples irradiated with infrared radiation to determine if the samples are normal or abnormal (cancerous). Mid and near infrared radiation can be used for in vivo and in vitro classifications using at least different wavelengths.
Blasco, F; Medina-Hernández, M J; Sagrado, S; Fernández, F M
1997-07-01
A method for simultaneous spectrophotometric determination of calcium and magnesium in mineral waters using multivariate calibration methods is proposed. The method is based on the development of the reaction between the analytes and Methylthymol Blue at pH 11. Two operational modes were used: static (spectral information) and flow injection (FI) (spectral and kinetic information). The selection of variables was studied. A series of synthetic solutions containing different concentrations of calcium and magnesium were used to check the prediction ability of the partial least-squares models. The method was applied to the analysis of mineral waters and the results were compared with those obtained by complexometry. No significant differences at the 95% confidence level were found. The proposed method is simple, accurate and reproducible, and it could be easily adapted as a portable (static mode) or automatic (FI) method.
SSA Sensor Calibration Best Practices
NASA Astrophysics Data System (ADS)
Johnson, T.
Best practices for calibrating orbit determination sensors in general and space situational awareness (SSA) sensors in particular are presented. These practices were developed over the last ten years within AGI and most recently applied to over 70 sensors in AGI's Commercial Space Operations Center (ComSpOC) and the US Air Force Space Command (AFSPC) Space Surveillance Network (SSN) to evaluate and configure new sensors and perform on-going system calibration. They are generally applicable to any SSA sensor and leverage some unique capabilities of an SSA estimation approach using an optimal sequential filter and smoother. Real world results are presented and analyzed.
Image based autodocking without calibration
Sutanto, H.; Sharma, R.; Varma, V.
1997-03-01
The calibration requirements for visual servoing can make it difficult to apply in many real-world situations. One approach to image-based visual servoing without calibration is to dynamically estimate the image Jacobian and use it as the basis for control. However, with the normal motion of a robot toward the goal, the estimation of the image Jacobian deteriorates over time. The authors propose the use of additional exploratory motion to considerably improve the estimation of the image Jacobian. They study the role of such exploratory motion in a visual servoing task. Simulations and experiments with a 6-DOF robot are used to verify the practical feasibility of the approach.
Multivariate Strategies in Functional Magnetic Resonance Imaging
ERIC Educational Resources Information Center
Hansen, Lars Kai
2007-01-01
We discuss aspects of multivariate fMRI modeling, including the statistical evaluation of multivariate models and means for dimensional reduction. In a case study we analyze linear and non-linear dimensional reduction tools in the context of a "mind reading" predictive multivariate fMRI model.
Bayesian Local Contamination Models for Multivariate Outliers
Page, Garritt L.; Dunson, David B.
2013-01-01
In studies where data are generated from multiple locations or sources it is common for there to exist observations that are quite unlike the majority. Motivated by the application of establishing a reference value in an inter-laboratory setting when outlying labs are present, we propose a local contamination model that is able to accommodate unusual multivariate realizations in a flexible way. The proposed method models the process level of a hierarchical model using a mixture with a parametric component and a possibly nonparametric contamination. Much of the flexibility in the methodology is achieved by allowing varying random subsets of the elements in the lab-specific mean vectors to be allocated to the contamination component. Computational methods are developed and the methodology is compared to three other possible approaches using a simulation study. We apply the proposed method to a NIST/NOAA sponsored inter-laboratory study which motivated the methodological development. PMID:24363465
Software For Multivariate Bayesian Classification
NASA Technical Reports Server (NTRS)
Saul, Ronald; Laird, Philip; Shelton, Robert
1996-01-01
PHD general-purpose classifier computer program. Uses Bayesian methods to classify vectors of real numbers, based on combination of statistical techniques that include multivariate density estimation, Parzen density kernels, and EM (Expectation Maximization) algorithm. By means of simple graphical interface, user trains classifier to recognize two or more classes of data and then use it to identify new data. Written in ANSI C for Unix systems and optimized for online classification applications. Embedded in another program, or runs by itself using simple graphical-user-interface. Online help files makes program easy to use.
Multivariate optimization of capillary electrophoresis methods: a critical review.
Orlandini, Serena; Gotti, Roberto; Furlanetto, Sandra
2014-01-01
In this article a review on the recent applications of multivariate techniques for optimization of electromigration methods, is presented. Papers published in the period from August 2007 to February 2013, have been taken into consideration. Upon a brief description of each of the involved CE operative modes, the characteristics of the chemometric strategies (type of design, factors and responses) applied to face a number of analytical challenges, are presented. Finally, a critical discussion, giving some practical advices and pointing out the most common issues involved in multivariate set-up of CE methods, is provided.
Steady-state decoupling and design of linear multivariable systems
NASA Technical Reports Server (NTRS)
Thaler, G. J.
1974-01-01
A constructive criterion for decoupling the steady states of a linear time-invariant multivariable system is presented. This criterion consists of a set of inequalities which, when satisfied, will cause the steady states of a system to be decoupled. Stability analysis and a new design technique for such systems are given. A new and simple connection between single-loop and multivariable cases is found. These results are then applied to the compensation design for NASA STOL C-8A aircraft. Both steady-state decoupling and stability are justified through computer simulations.
Hydraulic Calibrator for Strain-Gauge Balances
NASA Technical Reports Server (NTRS)
Skelly, Kenneth; Ballard, John
1987-01-01
Instrument for calibrating strain-gauge balances uses hydraulic actuators and load cells. Eliminates effects of nonparallelism, nonperpendicularity, and changes of cable directions upon vector sums of applied forces. Errors due to cable stretching, pulley friction, and weight inaccuracy also eliminated. New instrument rugged and transportable. Set up quickly. Developed to apply known loads to wind-tunnel models with encapsulated strain-gauge balances, also adapted for use in calibrating dynamometers, load sensors on machinery and laboratory instruments.
Ncube, Somandla; Poliwoda, Anna; Tutu, Hlanganani; Wieczorek, Piotr; Chimuka, Luke
2016-10-15
A liquid phase microextraction based on hollow fibre followed by liquid chromatographic determination was developed for the extraction and quantitation of the hallucinogenic muscimol from urine samples. Method applicability on polar hallucinogens was also tested on two alkaloids, a psychedelic hallucinogen, tryptamine and a polar amino acid, tryptophan which exists in its charged state in the entire pH range. A multivariate design of experiments was used in which a half fractional factorial approach was applied to screen six factors (donor phase pH, acceptor phase HCl concentration, carrier composition, stirring rate, extraction time and salt content) for their extent of vitality in carrier mediated liquid microextractions. Four factors were deemed essential for the effective extraction of each analyte. The vital factors were further optimized for the extraction of single-spiked analyte solutions using a central composite design. When the simultaneous extraction of analytes was performed under universal factor conditions biased towards maximizing the enrichment of muscimol, a good composite desirability value of 0.687 was obtained. The method was finally applied on spiked urine samples with acceptable enrichments of 4.1, 19.7 and 24.1 obtained for muscimol, tryptophan and tryptamine respectively. Matrix-based calibration curves were used to address matrix effects. The r(2) values of the matrix-based linear regression prediction models ranged from 0.9933 to 0.9986. The linearity of the regression line of the matrix-based calibration curves for each analyte was directly linked to the analyte enrichment repeatability which ranged from an RSD value of 8.3-13.1%. Limits of detection for the developed method were 5.12, 3.10 and 0.21ngmL(-1) for muscimol, tryptophan and tryptamine respectively. The developed method has proven to offer a viable alternative for the quantitation of muscimol in human urine samples.
Multivariate image processing technique for noninvasive glucose sensing
NASA Astrophysics Data System (ADS)
Webb, Anthony J.; Cameron, Brent D.
2010-02-01
A potential noninvasive glucose sensing technique was investigated for application towards in vivo glucose monitoring for individuals afflicted with diabetes mellitus. Three dimensional ray tracing simulations using a realistic iris pattern integrated into an advanced human eye model are reported for physiological glucose concentrations ranging between 0 to 500 mg/dL. The anterior chamber of the human eye contains a clear fluid known as the aqueous humor. The optical refractive index of the aqueous humor varies on the order of 1.5x10-4 for a change in glucose concentration of 100 mg/dL. The simulation data was analyzed with a developed multivariate chemometrics procedure that utilizes iris-based images to form a calibration model. Results from these simulations show considerable potential for use of the developed method in the prediction of glucose. For further demonstration, an in vitro eye model was developed to validate the computer based modeling technique. In these experiments, a realistic iris pattern was placed in an analog eye model in which the glucose concentration within the fluid representing the aqueous humor was varied. A series of high resolution digital images were acquired using an optical imaging system. These images were then used to form an in vitro calibration model utilizing the same multivariate chemometric technique demonstrated in the 3-D optical simulations. In general, the developed method exhibits considerable applicability towards its use as an in vivo platform for the noninvasive monitoring of physiological glucose concentration.
A multivariate prediction model for microarray cross-hybridization
Chen, Yian A; Chou, Cheng-Chung; Lu, Xinghua; Slate, Elizabeth H; Peck, Konan; Xu, Wenying; Voit, Eberhard O; Almeida, Jonas S
2006-01-01
Background Expression microarray analysis is one of the most popular molecular diagnostic techniques in the post-genomic era. However, this technique faces the fundamental problem of potential cross-hybridization. This is a pervasive problem for both oligonucleotide and cDNA microarrays; it is considered particularly problematic for the latter. No comprehensive multivariate predictive modeling has been performed to understand how multiple variables contribute to (cross-) hybridization. Results We propose a systematic search strategy using multiple multivariate models [multiple linear regressions, regression trees, and artificial neural network analyses (ANNs)] to select an effective set of predictors for hybridization. We validate this approach on a set of DNA microarrays with cytochrome p450 family genes. The performance of our multiple multivariate models is compared with that of a recently proposed third-order polynomial regression method that uses percent identity as the sole predictor. All multivariate models agree that the 'most contiguous base pairs between probe and target sequences,' rather than percent identity, is the best univariate predictor. The predictive power is improved by inclusion of additional nonlinear effects, in particular target GC content, when regression trees or ANNs are used. Conclusion A systematic multivariate approach is provided to assess the importance of multiple sequence features for hybridization and of relationships among these features. This approach can easily be applied to larger datasets. This will allow future developments of generalized hybridization models that will be able to correct for false-positive cross-hybridization signals in expression experiments. PMID:16509965
Multicomponent seismic noise attenuation with multivariate order statistic filters
NASA Astrophysics Data System (ADS)
Wang, Chao; Wang, Yun; Wang, Xiaokai; Xun, Chao
2016-10-01
The vector relationship between multicomponent seismic data is highly important for multicomponent processing and interpretation, but this vector relationship could be damaged when each component is processed individually. To overcome the drawback of standard component-by-component filtering, multivariate order statistic filters are introduced and extended to attenuate the noise of multicomponent seismic data by treating such dataset as a vector wavefield rather than a set of scalar fields. According to the characteristics of seismic signals, we implement this type of multivariate filtering along local events. First, the optimal local events are recognized according to the similarity between the vector signals which are windowed from neighbouring seismic traces with a sliding time window along each trial trajectory. An efficient strategy is used to reduce the computational cost of similarity measurement for vector signals. Next, one vector sample each from the neighbouring traces are extracted along the optimal local event as the input data for a multivariate filter. Different multivariate filters are optimal for different noise. The multichannel modified trimmed mean (MTM) filter, as one of the multivariate order statistic filters, is applied to synthetic and field multicomponent seismic data to test its performance for attenuating white Gaussian noise. The results indicate that the multichannel MTM filter can attenuate noise while preserving the relative amplitude information of multicomponent seismic data more effectively than a single-channel filter.
Díaz, T Galeano; Durán-Merás, I; Rodríguez Cáceres, M I; Murillo, B Roldán
2006-02-01
This paper deals with the simultaneous determination of the quaternary mixture of tocopherols (alpha-, beta-, gamma-, and delta-T) performed using fluorimetric techniques and partial least squares (PLS-1) multivariate analysis. In this study, PLS-1 was applied to matrices made up of fluorescence excitation and emission spectra (EEM) and with fluorescence excitation, emission, and synchronous spectra (EESM) of tocopherols dissolved in hexane: diethyl ether (70:30 v/v). A calibration set of 55 samples based in a central composite plus a full factorial plus a fractionated factorial design was constructed. When synthetic samples were analyzed, recoveries around 100% were obtained and detection limits were calculated using EEM and EESM. For the analysis of the oils, the samples, diluted in hexane, were cleaned in silica cartridges and tocopherols were eluted with hexane: diethyl ether (90:10 v/v). The developed method was applied to different edible oils. The results are satisfactory for alpha-, beta-, and gamma-, but they are worse for delta-T.
NASA Technical Reports Server (NTRS)
Chen, Siqi; Cheng, Yang; Willson, Reg
2006-01-01
Automated Camera Calibration (ACAL) is a computer program that automates the generation of calibration data for camera models used in machine vision systems. Machine vision camera models describe the mapping between points in three-dimensional (3D) space in front of the camera and the corresponding points in two-dimensional (2D) space in the camera s image. Calibrating a camera model requires a set of calibration data containing known 3D-to-2D point correspondences for the given camera system. Generating calibration data typically involves taking images of a calibration target where the 3D locations of the target s fiducial marks are known, and then measuring the 2D locations of the fiducial marks in the images. ACAL automates the analysis of calibration target images and greatly speeds the overall calibration process.
[A multivariate nonlinear model for quantitative analysis in laser-induced breakdown spectroscopy].
Chen, Xing-Long; Fu, Hong-Bo; Wang, Jing-Ge; Ni, Zhi-Bo; He, Wen-Gan; Xu, Jun; Rao Rui-zhong; Dong, Rui-Zhong
2014-11-01
Most quantitative models used in laser-induced breakdown spectroscopy (LIBS) are based on the hypothesis that laser-induced plasma approaches the state of local thermal equilibrium (LTE). However, the local equilibrium is possible only at a specific time segment during the evolution. As the populations of each energy level does not follow Boltzmann distribution in non-LTE condition, those quantitative models using single spectral line would be inaccurate. A multivariate nonlinear model, in which the LTE is not required, was proposed in this article to reduce the signal fluctuation and improve the accuracy of quantitative analysis. This multivariate nonlinear model was compared with the internal calibration model which is based on the LTE condition. The content of Mn in steel samples was determined by using the two models, respectively. A minor error and a minor relative standard deviation (RSD) were observed in multivariate nonlinear model. This result demonstrates that multivariate nonlinear model can improve measurement accuracy and repeatability.
NASA Astrophysics Data System (ADS)
Cornic, Philippe; Illoul, Cédric; Cheminet, Adam; Le Besnerais, Guy; Champagnat, Frédéric; Le Sant, Yves; Leclaire, Benjamin
2016-09-01
We address calibration and self-calibration of tomographic PIV experiments within a pinhole model of cameras. A complete and explicit pinhole model of a camera equipped with a 2-tilt angles Scheimpflug adapter is presented. It is then used in a calibration procedure based on a freely moving calibration plate. While the resulting calibrations are accurate enough for Tomo-PIV, we confirm, through a simple experiment, that they are not stable in time, and illustrate how the pinhole framework can be used to provide a quantitative evaluation of geometrical drifts in the setup. We propose an original self-calibration method based on global optimization of the extrinsic parameters of the pinhole model. These methods are successfully applied to the tomographic PIV of an air jet experiment. An unexpected by-product of our work is to show that volume self-calibration induces a change in the world frame coordinates. Provided the calibration drift is small, as generally observed in PIV, the bias on the estimated velocity field is negligible but the absolute location cannot be accurately recovered using standard calibration data.
Analytical multicollimator camera calibration
Tayman, W.P.
1978-01-01
Calibration with the U.S. Geological survey multicollimator determines the calibrated focal length, the point of symmetry, the radial distortion referred to the point of symmetry, and the asymmetric characteristiecs of the camera lens. For this project, two cameras were calibrated, a Zeiss RMK A 15/23 and a Wild RC 8. Four test exposures were made with each camera. Results are tabulated for each exposure and averaged for each set. Copies of the standard USGS calibration reports are included. ?? 1978.
NASA Technical Reports Server (NTRS)
Robertson, G.
1982-01-01
Calibration was performed on the shuttle upper atmosphere mass spectrometer (SUMS). The results of the calibration and the as run test procedures are presented. The output data is described, and engineering data conversion factors, tables and curves, and calibration on instrument gauges are included. Static calibration results which include: instrument sensitive versus external pressure for N2 and O2, data from each scan of calibration, data plots from N2 and O2, and sensitivity of SUMS at inlet for N2 and O2, and ratios of 14/28 for nitrogen and 16/32 for oxygen are given.
Standalone Calibration Toolset
NASA Astrophysics Data System (ADS)
Cooper, M.
2013-12-01
Radioxenon measurements require a well calibrated nuclear detector, which typically requires several weeks to perform a complex analysis of the resulting data to determine the detection efficiencies. To reduce the need to have an expert in nuclear physics, PNNL has developed a Standalone Calibration Toolset (SCT), which will aid an analyst in B-y nuclear detector calibration. SCT takes data generated from measurement of isotopically pure calibration samples: Xe-135, Xe-133, Xe-133m and Xe-131m, and generates nuclear detector configuration files. This will result in a simplified calibration and will make verification and corrections to b-g detectors routine.
Tripathi, Markandey M.; Krishnan, Sundar R.; Srinivasan, Kalyan K.; Yueh, Fang-Yu; Singh, Jagdish P.
2011-09-07
Chemiluminescence emissions from OH*, CH*, C2, and CO2 formed within the reaction zone of premixed flames depend upon the fuel-air equivalence ratio in the burning mixture. In the present paper, a new partial least square regression (PLS-R) based multivariate sensing methodology is investigated and compared with an OH*/CH* intensity ratio-based calibration model for sensing equivalence ratio in atmospheric methane-air premixed flames. Five replications of spectral data at nine different equivalence ratios ranging from 0.73 to 1.48 were used in the calibration of both models. During model development, the PLS-R model was initially validated with the calibration data set using the leave-one-out cross validation technique. Since the PLS-R model used the entire raw spectral intensities, it did not need the nonlinear background subtraction of CO2 emission that is required for typical OH*/CH* intensity ratio calibrations. An unbiased spectral data set (not used in the PLS-R model development), for 28 different equivalence ratio conditions ranging from 0.71 to 1.67, was used to predict equivalence ratios using the PLS-R and the intensity ratio calibration models. It was found that the equivalence ratios predicted with the PLS-R based multivariate calibration model matched the experimentally measured equivalence ratios within 7%; whereas, the OH*/CH* intensity ratio calibration grossly underpredicted equivalence ratios in comparison to measured equivalence ratios, especially under rich conditions ( > 1.2). The practical implications of the chemiluminescence-based multivariate equivalence ratio sensing methodology are also discussed.
A Semi-parametric Multivariate Gap-filling Model for Eddy Covariance Latent Heat Flux
NASA Astrophysics Data System (ADS)
Li, M.; Chen, Y.
2010-12-01
Quantitative descriptions of latent heat fluxes are important to study the water and energy exchanges between terrestrial ecosystems and the atmosphere. The eddy covariance approaches have been recognized as the most reliable technique for measuring surface fluxes over time scales ranging from hours to years. However, unfavorable micrometeorological conditions, instrument failures, and applicable measurement limitations may cause inevitable flux gaps in time series data. Development and application of suitable gap-filling techniques are crucial to estimate long term fluxes. In this study, a semi-parametric multivariate gap-filling model was developed to fill latent heat flux gaps for eddy covariance measurements. Our approach combines the advantages of a multivariate statistical analysis (principal component analysis, PCA) and a nonlinear interpolation technique (K-nearest-neighbors, KNN). The PCA method was first used to resolve the multicollinearity relationships among various hydrometeorological factors, such as radiation, soil moisture deficit, LAI, and wind speed. The KNN method was then applied as a nonlinear interpolation tool to estimate the flux gaps as the weighted sum latent heat fluxes with the K-nearest distances in the PCs’ domain. Two years, 2008 and 2009, of eddy covariance and hydrometeorological data from a subtropical mixed evergreen forest (the Lien-Hua-Chih Site) were collected to calibrate and validate the proposed approach with artificial gaps after standard QC/QA procedures. The optimal K values and weighting factors were determined by the maximum likelihood test. The results of gap-filled latent heat fluxes conclude that developed model successful preserving energy balances of daily, monthly, and yearly time scales. Annual amounts of evapotranspiration from this study forest were 747 mm and 708 mm for 2008 and 2009, respectively. Nocturnal evapotranspiration was estimated with filled gaps and results are comparable with other studies
Residual gas analyzer calibration
NASA Technical Reports Server (NTRS)
Lilienkamp, R. H.
1972-01-01
A technique which employs known gas mixtures to calibrate the residual gas analyzer (RGA) is described. The mass spectra from the RGA are recorded for each gas mixture. This mass spectra data and the mixture composition data each form a matrix. From the two matrices the calibration matrix may be computed. The matrix mathematics requires the number of calibration gas mixtures be equal to or greater than the number of gases included in the calibration. This technique was evaluated using a mathematical model of an RGA to generate the mass spectra. This model included shot noise errors in the mass spectra. Errors in the gas concentrations were also included in the valuation. The effects of these errors was studied by varying their magnitudes and comparing the resulting calibrations. Several methods of evaluating an actual calibration are presented. The effects of the number of gases in then, the composition of the calibration mixture, and the number of mixtures used are discussed.
Multivariate Time Series Similarity Searching
Wang, Jimin; Zhu, Yuelong; Li, Shijin; Wan, Dingsheng; Zhang, Pengcheng
2014-01-01
Multivariate time series (MTS) datasets are very common in various financial, multimedia, and hydrological fields. In this paper, a dimension-combination method is proposed to search similar sequences for MTS. Firstly, the similarity of single-dimension series is calculated; then the overall similarity of the MTS is obtained by synthesizing each of the single-dimension similarity based on weighted BORDA voting method. The dimension-combination method could use the existing similarity searching method. Several experiments, which used the classification accuracy as a measure, were performed on six datasets from the UCI KDD Archive to validate the method. The results show the advantage of the approach compared to the traditional similarity measures, such as Euclidean distance (ED), cynamic time warping (DTW), point distribution (PD), PCA similarity factor (SPCA), and extended Frobenius norm (Eros), for MTS datasets in some ways. Our experiments also demonstrate that no measure can fit all datasets, and the proposed measure is a choice for similarity searches. PMID:24895665
Multivariate time series similarity searching.
Wang, Jimin; Zhu, Yuelong; Li, Shijin; Wan, Dingsheng; Zhang, Pengcheng
2014-01-01
Multivariate time series (MTS) datasets are very common in various financial, multimedia, and hydrological fields. In this paper, a dimension-combination method is proposed to search similar sequences for MTS. Firstly, the similarity of single-dimension series is calculated; then the overall similarity of the MTS is obtained by synthesizing each of the single-dimension similarity based on weighted BORDA voting method. The dimension-combination method could use the existing similarity searching method. Several experiments, which used the classification accuracy as a measure, were performed on six datasets from the UCI KDD Archive to validate the method. The results show the advantage of the approach compared to the traditional similarity measures, such as Euclidean distance (ED), cynamic time warping (DTW), point distribution (PD), PCA similarity factor (SPCA), and extended Frobenius norm (Eros), for MTS datasets in some ways. Our experiments also demonstrate that no measure can fit all datasets, and the proposed measure is a choice for similarity searches. PMID:24895665
Radio interferometric calibration via ordered-subsets algorithms: OS-LS and OS-SAGE calibrations
NASA Astrophysics Data System (ADS)
Kazemi, S.; Yatawatta, S.; Zaroubi, S.
2013-10-01
The main objective of this work is to accelerate the maximum likelihood (ML) estimation procedure in radio interferometric calibration. We introduce the ordered-subsets-least-squares (OS-LS) and the ordered-subsets-space alternating generalized expectation (OS-SAGE) radio interferometric calibration methods, as a combination of the OS method with the LS and SAGE maximization calibration techniques, respectively. The OS algorithm speeds up the ML estimation and achieves nearly the same level of accuracy of solutions as the one obtained by the non-OS methods. We apply the OS-LS and OS-SAGE calibration methods to simulated observations and show that these methods have a much higher convergence rate relative to the conventional LS and SAGE techniques. Moreover, the obtained results show that the OS-SAGE calibration technique has a superior performance compared to the OS-LS calibration method in the sense of achieving more accurate results while having significantly less computational cost.
Review of robust multivariate statistical methods in high dimension.
Filzmoser, Peter; Todorov, Valentin
2011-10-31
General ideas of robust statistics, and specifically robust statistical methods for calibration and dimension reduction are discussed. The emphasis is on analyzing high-dimensional data. The discussed methods are applied using the packages chemometrics and rrcov of the statistical software environment R. It is demonstrated how the functions can be applied to real high-dimensional data from chemometrics, and how the results can be interpreted.
Brandstätter, Christian; Laner, David; Prantl, Roman; Fellner, Johann
2014-12-01
Municipal solid waste landfills pose a threat on environment and human health, especially old landfills which lack facilities for collection and treatment of landfill gas and leachate. Consequently, missing information about emission flows prevent site-specific environmental risk assessments. To overcome this gap, the combination of waste sampling and analysis with statistical modeling is one option for estimating present and future emission potentials. Optimizing the tradeoff between investigation costs and reliable results requires knowledge about both: the number of samples to be taken and variables to be analyzed. This article aims to identify the optimized number of waste samples and variables in order to predict a larger set of variables. Therefore, we introduce a multivariate linear regression model and tested the applicability by usage of two case studies. Landfill A was used to set up and calibrate the model based on 50 waste samples and twelve variables. The calibrated model was applied to Landfill B including 36 waste samples and twelve variables with four predictor variables. The case study results are twofold: first, the reliable and accurate prediction of the twelve variables can be achieved with the knowledge of four predictor variables (Loi, EC, pH and Cl). For the second Landfill B, only ten full measurements would be needed for a reliable prediction of most response variables. The four predictor variables would exhibit comparably low analytical costs in comparison to the full set of measurements. This cost reduction could be used to increase the number of samples yielding an improved understanding of the spatial waste heterogeneity in landfills. Concluding, the future application of the developed model potentially improves the reliability of predicted emission potentials. The model could become a standard screening tool for old landfills if its applicability and reliability would be tested in additional case studies.
Clegg, Samuel M; Barefield, James E; Wiens, Roger C; Sklute, Elizabeth; Dyare, Melinda D
2008-01-01
Quantitative analysis with LIBS traditionally employs calibration curves that are complicated by the chemical matrix effects. These chemical matrix effects influence the LIBS plasma and the ratio of elemental composition to elemental emission line intensity. Consequently, LIBS calibration typically requires a priori knowledge of the unknown, in order for a series of calibration standards similar to the unknown to be employed. In this paper, three new Multivariate Analysis (MV A) techniques are employed to analyze the LIBS spectra of 18 disparate igneous and highly-metamorphosed rock samples. Partial Least Squares (PLS) analysis is used to generate a calibration model from which unknown samples can be analyzed. Principal Components Analysis (PCA) and Soft Independent Modeling of Class Analogy (SIMCA) are employed to generate a model and predict the rock type of the samples. These MV A techniques appear to exploit the matrix effects associated with the chemistries of these 18 samples.
Method of calibrating clutches in transmissions
Bulgrien, G.H.
1991-02-05
This paper describes a microprocessor controlling a shuttle shift transmission programmed to effect a calibration of the final drive clutches in the transmission so that the microprocessor can efficiently effect engagement of each respective clutch by applying the proper hydraulic pressure to cause proper engagement thereof. This method of calibrating the final drive clutches in the transmission includes braking the output shaft of the transmission so that any engagement of the selected final drive clutch being calibrated will cause a load to be applied to the engine. The hydraulic pressure is then incrementally increased until the engine RPM's decrease because of the load being placed on the engine. The value of this engagement hydraulic pressure is stored in the microprocessor for use when effecting engagement of the selected clutch during operation of the transmission. Service indicators are programmed into the microprocessor should the selected clutch not be capable of being calibrated.
Wireless Inclinometer Calibration System
NASA Technical Reports Server (NTRS)
2008-01-01
A special system was fabricated to properly calibrate the wireless inclinometer, a new device that will measure the Orbiter s hang angle. The wireless inclinometer has a unique design and method of attachment to the Orbiter that will improve the accuracy of the measurements, as well as the safety and ease of the operation. The system properly calibrates the four attached inclinometers, in both the horizontal and vertical axes, without needing to remove any of the component parts. The Wireless Inclinometer Calibration System combines (1) a calibration fixture that emulates the point of attachment to the Orbiter in both the horizontal and vertical axes and the measurement surfaces, (2) an application-specific software program that accepts calibration data such as dates, zero functions, or offsets and tables, and (3) a wireless interface module that enables the wireless inclinometer to communicate with a calibration PC.
SAR calibration technology review
NASA Technical Reports Server (NTRS)
Walker, J. L.; Larson, R. W.
1981-01-01
Synthetic Aperture Radar (SAR) calibration technology including a general description of the primary calibration techniques and some of the factors which affect the performance of calibrated SAR systems are reviewed. The use of reference reflectors for measurement of the total system transfer function along with an on-board calibration signal generator for monitoring the temporal variations of the receiver to processor output is a practical approach for SAR calibration. However, preliminary error analysis and previous experimental measurements indicate that reflectivity measurement accuracies of better than 3 dB will be difficult to achieve. This is not adequate for many applications and, therefore, improved end-to-end SAR calibration techniques are required.
Method of Calibrating a Force Balance
NASA Technical Reports Server (NTRS)
Parker, Peter A. (Inventor); Rhew, Ray D. (Inventor); Johnson, Thomas H. (Inventor); Landman, Drew (Inventor)
2015-01-01
A calibration system and method utilizes acceleration of a mass to generate a force on the mass. An expected value of the force is calculated based on the magnitude and acceleration of the mass. A fixture is utilized to mount the mass to a force balance, and the force balance is calibrated to provide a reading consistent with the expected force determined for a given acceleration. The acceleration can be varied to provide different expected forces, and the force balance can be calibrated for different applied forces. The acceleration may result from linear acceleration of the mass or rotational movement of the mass.
Mardia's Multivariate Kurtosis with Missing Data
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Lambert, Paul L.; Fouladi, Rachel T.
2004-01-01
Mardia's measure of multivariate kurtosis has been implemented in many statistical packages commonly used by social scientists. It provides important information on whether a commonly used multivariate procedure is appropriate for inference. Many statistical packages also have options for missing data. However, there is no procedure for applying…
Radiometer Calibration and Characterization
1994-12-31
The Radiometer Calibration and Characterization (RCC) software is a data acquisition and data archival system for performing Broadband Outdoor Radiometer Calibrations (BORCAL). RCC provides a unique method of calibrating solar radiometers using techniques that reduce measurement uncertainty and better characterize a radiometers response profile. The RCC software automatically monitors and controls many of the components that contribute to uncertainty in an instruments responsivity.
Estimating the decomposition of predictive information in multivariate systems
NASA Astrophysics Data System (ADS)
Faes, Luca; Kugiumtzis, Dimitris; Nollo, Giandomenico; Jurysta, Fabrice; Marinazzo, Daniele
2015-03-01
In the study of complex systems from observed multivariate time series, insight into the evolution of one system may be under investigation, which can be explained by the information storage of the system and the information transfer from other interacting systems. We present a framework for the model-free estimation of information storage and information transfer computed as the terms composing the predictive information about the target of a multivariate dynamical process. The approach tackles the curse of dimensionality employing a nonuniform embedding scheme that selects progressively, among the past components of the multivariate process, only those that contribute most, in terms of conditional mutual information, to the present target process. Moreover, it computes all information-theoretic quantities using a nearest-neighbor technique designed to compensate the bias due to the different dimensionality of individual entropy terms. The resulting estimators of prediction entropy, storage entropy, transfer entropy, and partial transfer entropy are tested on simulations of coupled linear stochastic and nonlinear deterministic dynamic processes, demonstrating the superiority of the proposed approach over the traditional estimators based on uniform embedding. The framework is then applied to multivariate physiologic time series, resulting in physiologically well-interpretable information decompositions of cardiovascular and cardiorespiratory interactions during head-up tilt and of joint brain-heart dynamics during sleep.
Multivariate pluvial flood damage models
Van Ootegem, Luc; Verhofstadt, Elsy; Van Herck, Kristine; Creten, Tom
2015-09-15
Depth–damage-functions, relating the monetary flood damage to the depth of the inundation, are commonly used in the case of fluvial floods (floods caused by a river overflowing). We construct four multivariate damage models for pluvial floods (caused by extreme rainfall) by differentiating on the one hand between ground floor floods and basement floods and on the other hand between damage to residential buildings and damage to housing contents. We do not only take into account the effect of flood-depth on damage, but also incorporate the effects of non-hazard indicators (building characteristics, behavioural indicators and socio-economic variables). By using a Tobit-estimation technique on identified victims of pluvial floods in Flanders (Belgium), we take into account the effect of cases of reported zero damage. Our results show that the flood depth is an important predictor of damage, but with a diverging impact between ground floor floods and basement floods. Also non-hazard indicators are important. For example being aware of the risk just before the water enters the building reduces content damage considerably, underlining the importance of warning systems and policy in this case of pluvial floods. - Highlights: • Prediction of damage of pluvial floods using also non-hazard information • We include ‘no damage cases’ using a Tobit model. • The damage of flood depth is stronger for ground floor than for basement floods. • Non-hazard indicators are especially important for content damage. • Potential gain of policies that increase awareness of flood risks.
Down force calibration stand test report
BOGER, R.M.
1999-08-13
The Down Force Calibration Stand was developed to provide an improved means of calibrating equipment used to apply, display and record Core Sample Truck (CST) down force. Originally, four springs were used in parallel to provide a system of resistance that allowed increasing force over increasing displacement. This spring system, though originally deemed adequate, was eventually found to be unstable laterally. For this reason, it was determined that a new method for resisting down force was needed.
Lachenmeier, Dirk W; Kessler, Waltraud
2008-07-23
In the analysis of food additives, past emphasis was put on the development of chromatographic techniques to separate target components from a complex matrix. Especially in the case of artificial food colors, direct spectrophotometric measurement was seen to lack in specificity due to a high spectral overlap between different components. Multivariate curve resolution (MCR) may be used to overcome this limitation. MCR is able to (i) extract from a complex spectral feature the number of involved components, (ii) attribute the resulting spectra to chemical compounds, and (iii) quantify the individual spectral contributions with or without a priori knowledge. We have evaluated MCR for the routine analysis of yellow and blue food colors in absinthe spirits. Using calibration standards, we were able to show that MCR equally performs as compared to partial least-squares regression but with much improved chemical information contained in the predicted spectra. MCR was then applied to an authentic collective of different absinthes. As confirmed by reference analytics, the food colors were correctly assigned with a sensitivity of 0.93 and a specificity of 0.85. Besides the artificial colors, the algorithm detected a further component in some samples that could be assigned to natural coloring from chlorophyll.
NASA Astrophysics Data System (ADS)
Kent, S. M.
2016-05-01
This paper presents a broad overview of the many issues involved in calibrating astronomical data, covering the full electromagnetic spectrum from radio waves to gamma rays, and considering both ground-based and space-based missions. These issues include the science drivers for absolute and relative calibration, the physics behind calibration and the mechanisms used to transfer it from the laboratory to an astronomical source, the need for networks of calibrated astronomical standards, and some of the challenges faced by large surveys and missions.
Apparatus and system for multivariate spectral analysis
Keenan, Michael R.; Kotula, Paul G.
2003-06-24
An apparatus and system for determining the properties of a sample from measured spectral data collected from the sample by performing a method of multivariate spectral analysis. The method can include: generating a two-dimensional matrix A containing measured spectral data; providing a weighted spectral data matrix D by performing a weighting operation on matrix A; factoring D into the product of two matrices, C and S.sup.T, by performing a constrained alternating least-squares analysis of D=CS.sup.T, where C is a concentration intensity matrix and S is a spectral shapes matrix; unweighting C and S by applying the inverse of the weighting used previously; and determining the properties of the sample by inspecting C and S. This method can be used by a spectrum analyzer to process X-ray spectral data generated by a spectral analysis system that can include a Scanning Electron Microscope (SEM) with an Energy Dispersive Detector and Pulse Height Analyzer.
Multivariate sensitivity to voice during auditory categorization
Peelle, Jonathan E.; Kraemer, David; Lloyd, Samuel; Granger, Richard
2015-01-01
Past neuroimaging studies have documented discrete regions of human temporal cortex that are more strongly activated by conspecific voice sounds than by nonvoice sounds. However, the mechanisms underlying this voice sensitivity remain unclear. In the present functional MRI study, we took a novel approach to examining voice sensitivity, in which we applied a signal detection paradigm to the assessment of multivariate pattern classification among several living and nonliving categories of auditory stimuli. Within this framework, voice sensitivity can be interpreted as a distinct neural representation of brain activity that correctly distinguishes human vocalizations from other auditory object categories. Across a series of auditory categorization tests, we found that bilateral superior and middle temporal cortex consistently exhibited robust sensitivity to human vocal sounds. Although the strongest categorization was in distinguishing human voice from other categories, subsets of these regions were also able to distinguish reliably between nonhuman categories, suggesting a general role in auditory object categorization. Our findings complement the current evidence of cortical sensitivity to human vocal sounds by revealing that the greatest sensitivity during categorization tasks is devoted to distinguishing voice from nonvoice categories within human temporal cortex. PMID:26245316
Multivariate sensitivity to voice during auditory categorization.
Lee, Yune Sang; Peelle, Jonathan E; Kraemer, David; Lloyd, Samuel; Granger, Richard
2015-09-01
Past neuroimaging studies have documented discrete regions of human temporal cortex that are more strongly activated by conspecific voice sounds than by nonvoice sounds. However, the mechanisms underlying this voice sensitivity remain unclear. In the present functional MRI study, we took a novel approach to examining voice sensitivity, in which we applied a signal detection paradigm to the assessment of multivariate pattern classification among several living and nonliving categories of auditory stimuli. Within this framework, voice sensitivity can be interpreted as a distinct neural representation of brain activity that correctly distinguishes human vocalizations from other auditory object categories. Across a series of auditory categorization tests, we found that bilateral superior and middle temporal cortex consistently exhibited robust sensitivity to human vocal sounds. Although the strongest categorization was in distinguishing human voice from other categories, subsets of these regions were also able to distinguish reliably between nonhuman categories, suggesting a general role in auditory object categorization. Our findings complement the current evidence of cortical sensitivity to human vocal sounds by revealing that the greatest sensitivity during categorization tasks is devoted to distinguishing voice from nonvoice categories within human temporal cortex. PMID:26245316
Solar-Reflectance-Based Calibration of Spectral Radiometers
NASA Technical Reports Server (NTRS)
Cattrall, Christopher; Carder, Kendall L.; Thome, Kurtis J.; Gordon, Howard R.
2001-01-01
A method by which to calibrate a spectral radiometer using the sun as the illumination source is discussed. Solar-based calibrations eliminate several uncertainties associated with applying a lamp-based calibration to field measurements. The procedure requires only a calibrated reflectance panel, relatively low aerosol optical depth, and measurements of atmospheric transmittance. Further, a solar-reflectance-based calibration (SRBC), by eliminating the need for extraterrestrial irradiance spectra, reduces calibration uncertainty to approximately 2.2% across the solar-reflective spectrum, significantly reducing uncertainty in measurements used to deduce the optical properties of a system illuminated by the sun (e.g., sky radiance). The procedure is very suitable for on-site calibration of long-term field instruments, thereby reducing the logistics and costs associated with transporting a radiometer to a calibration facility.
Toward Millimagnitude Photometric Calibration (Abstract)
NASA Astrophysics Data System (ADS)
Dose, E.
2014-12-01
(Abstract only) Asteroid roation, exoplanet transits, and similar measurements will increasingly call for photometric precisions better than about 10 millimagnitudes, often between nights and ideally between distant observers. The present work applies detailed spectral simulations to test popular photometric calibration practices, and to test new extensions of these practices. Using 107 synthetic spectra of stars of diverse colors, detailed atmospheric transmission spectra computed by solar-energy software, realistic spectra of popular astronomy gear, and the option of three sources of noise added at realistic millimagnitude levels, we find that certain adjustments to current calibration practices can help remove small systematic errors, especially for imperfect filters, high airmasses, and possibly passing thin cirrus clouds.
Welch, J; /SLAC
2010-11-24
indicated that the vacuum chamber was in fact in the proper position with respect to the magnet - not 19 mm off to one side - so the former possibility was discounted. Review of the Fiducial Report and an interview with Keith Caban convinced me that there was no error in the coordinate system used for magnet measurements. I went and interviewed Andrew Fischer who did the magnetic measurements of BXS. He had extensive records, including photographs of the setups and could quickly answer quite detailed questions about how the measurement was done. Before the interview, I had a suspicion there might have been a sign flip in the x coordinate which because of the wedge would result in the wrong path length and a miscalibration. Andrew was able to pin-point how this could have happened and later confirmed it by looking an measurement data from the BXG magnet done just after BXS and comparing photographs. It turned out that the sign of the horizontal stage travel that drives the measurement wire was opposite that of the x coordinate in the Traveler, and the sign difference wasn't applied to the data. The origin x = 0 was set up correctly, but the wire moved in the opposite direction to what was expected, just as if the arc had been flipped over about the origin. To quantitatively confirm that this was the cause of the observed difference in calibration I used the 'grid data', which was taken with a Hall probe on the BXS magnet originally to measure the FINT (focusing effect) term, and combined it with the Hall probe data taken on the flipped trajectory, and performed the field integral on a path that should give the same result as the design path. This is best illustrated in Figure 2. The integration path is coincident with the desired path from the pivot points (x = 0) outward. Between the pivot points the integration path is a mirror image of the design path, but because the magnet is fairly uniform, for this portion it gives the same result. Most of the calibration error
Implementation Challenges for Multivariable Control: What You Did Not Learn in School
NASA Technical Reports Server (NTRS)
Garg, Sanjay
2008-01-01
Multivariable control allows controller designs that can provide decoupled command tracking and robust performance in the presence of modeling uncertainties. Although the last two decades have seen extensive development of multivariable control theory and example applications to complex systems in software/hardware simulations, there are no production flying systems aircraft or spacecraft, that use multivariable control. This is because of the tremendous challenges associated with implementation of such multivariable control designs. Unfortunately, the curriculum in schools does not provide sufficient time to be able to provide an exposure to the students in such implementation challenges. The objective of this paper is to share the lessons learned by a practitioner of multivariable control in the process of applying some of the modern control theory to the Integrated Flight Propulsion Control (IFPC) design for an advanced Short Take-Off Vertical Landing (STOVL) aircraft simulation.
Photogrammetric camera calibration
Tayman, W.P.; Ziemann, H.
1984-01-01
Section 2 (Calibration) of the document "Recommended Procedures for Calibrating Photogrammetric Cameras and Related Optical Tests" from the International Archives of Photogrammetry, Vol. XIII, Part 4, is reviewed in the light of recent practical work, and suggestions for changes are made. These suggestions are intended as a basis for a further discussion. ?? 1984.
Calibration and validation areas
NASA Astrophysics Data System (ADS)
Menard, Y.
1984-08-01
Difficulties in calibrating the SEASAT altimeter using the Bermuda laser site are recalled, and the use of Dakar (Senegal) for altimeter calibration is discussed. The site is flat, has clear skies for 200 to 250 days per year, and a local tide model is available. Atmospheric parameters can be studied using existing facilities with two additional weather stations.
NASA Technical Reports Server (NTRS)
Markham, Brian; Morfitt, Ron; Kvaran, Geir; Biggar, Stuart; Leisso, Nathan; Czapla-Myers, Jeff
2011-01-01
Goals: (1) Present an overview of the pre-launch radiance, reflectance & uniformity calibration of the Operational Land Imager (OLI) (1a) Transfer to orbit/heliostat (1b) Linearity (2) Discuss on-orbit plans for radiance, reflectance and uniformity calibration of the OLI
Sandia WIPP calibration traceability
Schuhen, M.D.; Dean, T.A.
1996-05-01
This report summarizes the work performed to establish calibration traceability for the instrumentation used by Sandia National Laboratories at the Waste Isolation Pilot Plant (WIPP) during testing from 1980-1985. Identifying the calibration traceability is an important part of establishing a pedigree for the data and is part of the qualification of existing data. In general, the requirement states that the calibration of Measuring and Test equipment must have a valid relationship to nationally recognized standards or the basis for the calibration must be documented. Sandia recognized that just establishing calibration traceability would not necessarily mean that all QA requirements were met during the certification of test instrumentation. To address this concern, the assessment was expanded to include various activities.
Multivariate diagnostics and anomaly detection for nuclear safeguards
Burr, T.; Jones, J.; Wangen, L.
1994-08-01
For process control and other reasons, new and future nuclear reprocessing plants are expected to be increasingly more automated than older plants. As a consequence of this automation, the quantity of data potentially available for safeguards may be much greater in future reprocessing plants than in current plants. The authors first review recent literature that applies multivariate Shewhart and multivariate cumulative sum (Cusum) tests to detect anomalous data. These tests are used to evaluate residuals obtained from a simulated three-tank problem in which five variables (volume, density, and concentrations of uranium, plutonium, and nitric acid) in each tank are modeled and measured. They then present results from several simulations involving transfers between the tanks and between the tanks and the environment. Residuals from a no-fault problem in which the measurements and model predictions are both correct are used to develop Cusum test parameters which are then used to test for faults for several simulated anomalous situations, such as an unknown leak or diversion of material from one of the tanks. The leak can be detected by comparing measurements, which estimate the true state of the tank system, with the model predictions, which estimate the state of the tank system as it ``should`` be. The no-fault simulation compares false alarm behavior for the various tests, whereas the anomalous problems allow one to compare the power of the various tests to detect faults under possible diversion scenarios. For comparison with the multivariate tests, univariate tests are also applied to the residuals.
Multivariate correction in laser-enhanced ionization with laser sampling
NASA Astrophysics Data System (ADS)
Popov, A. M.; Labutin, T. A.; Sychev, D. N.; Gorbatenko, A. A.; Zorov, N. B.
2007-03-01
The opportunity of normalizing laser-enhanced ionization (LEI) signals by several reference signals (RS) measured simultaneously has been examined in view of correcting variations of laser parameters and matrix interferences. Opto-acoustic, atomic emission and non-selective ionization signals and their paired combination were used as RS for Li determination in aluminum alloys (0-6% Mg, 0-5% Cu, 0-1% Sc, 0-1% Ag). The specific normalization procedure in case of RS essential multicollinearity has been proposed. LEI and RS for each definite ablation pulse energy were plotted in Cartesian co-ordinates ( x and y axes — the RS values, z axis — LEI signal). It was found that in the three-dimensional space the slope of the correlation line to the plane of RS depends on the analyte content in the solid sample. The use of this slope has therefore been proposed as a multivariate corrected analytical signal. Multivariate correlative normalization provides analytical signal free of matrix interferences for Al-Mg-Cu-Li alloys. The application of this novel approach to the determination of Li allows plotting unified calibration curves for Al-alloys of different matrix composition.
Hegazy, Maha A; Abbas, Samah S; Zaazaa, Hala E; Essam, Hebatallah M
2015-01-01
The resolving power of spectrophotometric assisted mathematical techniques were demonstrated for the simultaneous determination of perindopril arginin (PER) and amlodipine besylate (AML) in presence of their degradation products. The conventional univariate methods include the absorptivity factor method (AFM) and absorption correction method (ACM), which were able to determine the two drugs, simultaneously, but not in the presence of their degradation products. In both methods, amlodipine was determined directly at 360 nm in the concentration range of 8-28 μg mL(-1), on the other hand perindopril was determined by AFM at 222.2 nm and by ACM at 208 nm in the concentration range of 10-70 μg mL(-1). Moreover, the applied multivariate calibration methods were able for the determination of perindopril and amlodipine in presence of their degradation products using concentration residuals augmented classical least squares (CRACLS) and partial least squares (PLS). The proposed multivariate methods were applied to 19 synthetic samples in the concentration ranges of 60-100 μg mL(-1) perindopril and 20-40 μg mL(-1) amlodipine. Commercially available tablet formulations were successfully analysed using the developed methods without interference from other dosage form additives except PLS model, which failed to determine both drugs in their pharmaceutical dosage form. PMID:26123511
Calibration method for spectroscopic systems
Sandison, David R.
1998-01-01
Calibration spots of optically-characterized material placed in the field of view of a spectroscopic system allow calibration of the spectroscopic system. Response from the calibration spots is measured and used to calibrate for varying spectroscopic system operating parameters. The accurate calibration achieved allows quantitative spectroscopic analysis of responses taken at different times, different excitation conditions, and of different targets.
Calibration method for spectroscopic systems
Sandison, D.R.
1998-11-17
Calibration spots of optically-characterized material placed in the field of view of a spectroscopic system allow calibration of the spectroscopic system. Response from the calibration spots is measured and used to calibrate for varying spectroscopic system operating parameters. The accurate calibration achieved allows quantitative spectroscopic analysis of responses taken at different times, different excitation conditions, and of different targets. 3 figs.
Multivariate permutation test to compare survival curves for matched data
2013-01-01
Background In the absence of randomization, the comparison of an experimental treatment with respect to the standard may be done based on a matched design. When there is a limited set of cases receiving the experimental treatment, matching of a proper set of controls in a non fixed proportion is convenient. Methods In order to deal with the highly stratified survival data generated by multiple matching, we extend the multivariate permutation testing approach, since standard nonparametric methods for the comparison of survival curves cannot be applied in this setting. Results We demonstrate the validity of the proposed method with simulations, and we illustrate its application to data from an observational study for the comparison of bone marrow transplantation and chemotherapy in the treatment of paediatric leukaemia. Conclusions The use of the multivariate permutation testing approach is recommended in the highly stratified context of survival matched data, especially when the proportional hazards assumption does not hold. PMID:23399031
A novel definition of the multivariate coefficient of variation.
Albert, Adelin; Zhang, Lixin
2010-10-01
The coefficient of variation CV (%) is widely used to measure the relative variation of a random variable to its mean or to assess and compare the performance of analytical techniques/equipments. A review is made of the existing multivariate extensions of the univariate CV where, instead of a random variable, a random vector is considered, and a novel definition is proposed. The multivariate CV obtained only requires the calculation of the mean vector, the covariance matrix and simple quadratic forms. No matrix inversion is needed which makes the new approach equally attractive in high dimensional as in very small sample size problems. As an illustration, the method is applied to electrophoresis data from external quality assessment in laboratory medicine, to phenotypic characteristics of pocket gophers and to a microarray data set.
Multivariate geostatistical simulation by minimising spatial cross-correlation
NASA Astrophysics Data System (ADS)
Sohrabian, Babak; Tercan, Abdullah Erhan
2014-03-01
Joint simulation of attributes in multivariate geostatistics can be achieved by transforming spatially correlated variables into independent factors. In this study, a new approach for this transformation, Minimum Spatial Cross-correlation (MSC) method, is suggested. The method is based on minimising the sum of squares of cross-variograms at different distances. In the approach, the problem in higher space (N × N) is reduced to N×N-1/2 problems in the two-dimensional space and the reduced problem is solved iteratively using Gradient Descent Algorithm. The method is applied to the joint simulation of a set of multivariate data in a marble quarry and the results are compared with Minimum/Maximum Autocorrelation Factors (MAF) method.
Affymetrix GeneChip microarray preprocessing for multivariate analyses.
McCall, Matthew N; Almudevar, Anthony
2012-09-01
Affymetrix GeneChip microarrays are the most widely used high-throughput technology to measure gene expression, and a wide variety of preprocessing methods have been developed to transform probe intensities reported by a microarray scanner into gene expression estimates. There have been numerous comparisons of these preprocessing methods, focusing on the most common analyses-detection of differential expression and gene or sample clustering. Recently, more complex multivariate analyses, such as gene co-expression, differential co-expression, gene set analysis and network modeling, are becoming more common; however, the same preprocessing methods are typically applied. In this article, we examine the effect of preprocessing methods on some of these multivariate analyses and provide guidance to the user as to which methods are most appropriate.
A method for designing robust multivariable feedback systems
NASA Technical Reports Server (NTRS)
Milich, David Albert; Athans, Michael; Valavani, Lena; Stein, Gunter
1988-01-01
A new methodology is developed for the synthesis of linear, time-invariant (LTI) controllers for multivariable LTI systems. The aim is to achieve stability and performance robustness of the feedback system in the presence of multiple unstructured uncertainty blocks; i.e., to satisfy a frequency-domain inequality in terms of the structured singular value. The design technique is referred to as the Causality Recovery Methodology (CRM). Starting with an initial (nominally) stabilizing compensator, the CRM produces a closed-loop system whose performance-robustness is at least as good as, and hopefully superior to, that of the original design. The robustness improvement is obtained by solving an infinite-dimensional, convex optimization program. A finite-dimensional implementation of the CRM was developed, and it was applied to a multivariate design example.
Assessment of opacimeter calibration on kraft pulp mills
NASA Astrophysics Data System (ADS)
Gomes, Joa˜o. F. P.
This paper describes the methodology and specific techniques for calibrating automatic on-line industrial emission analysers, specifically equipments that measure total suspended dust installed in pulp mills within the scope of Portuguese Regulation No. 286/93 on air quality. The calibration of opacimeters is a multi-parameter relationship instead of the bidimensional calibration which is used in industrial practice. For a stationary source from a pulp mill such as the recovery boiler stack, which is subjected to significant variations, the effects of parameters such as the humidity and gas temperature, deviations of isokinetism, size range of particles and characteristic transmittance of equipment are analysed. The multivariable analysis of a considerable set of data leads to an estimate of about 98% of equipment transmittance over the other parameters with a level of significance greater than 0.99 which is a validation of the bidimensional practical calibrations.
Implicit and Explicit Spacecraft Gyro Calibration
NASA Technical Reports Server (NTRS)
Bar-Itzhack, Itzhack Y.; Harman, Richard R.
2004-01-01
This paper presents a comparison between two approaches to sensor calibration. According to one approach, called explicit, an estimator compares the sensor readings to reference readings, and uses the difference between the two to estimate the calibration parameters. According to the other approach, called implicit, the sensor error is integrated to form a different entity, which is then compared with a reference quantity of this entity, and the calibration parameters are inferred from the difference. In particular this paper presents the comparison between these approaches when applied to in-flight spacecraft gyro calibration. Reference spacecraft rate is needed for gyro calibration when using the explicit approach; however, such reference rates are not readily available for in-flight calibration. Therefore the calibration parameter-estimator is expanded to include the estimation of that reference rate, which is based on attitude measurements in the form of attitude-quaternion. A comparison between the two approaches is made using simulated data. It is concluded that the performances of the two approaches are basically comparable. Sensitivity tests indicate that the explicit filter results are essentially insensitive to variations in given spacecraft dynamics model parameters.
Calibration of Cryogenic Thermometers for the Lhc
NASA Astrophysics Data System (ADS)
Balle, Ch.; Casas-Cubillos, J.; Vauthier, N.; Thermeau, J. P.
2008-03-01
6000 cryogenic temperature sensors of resistive type covering the range from room temperature down to 1.6 K are installed on the LHC machine. In order to meet the stringent requirements on temperature control of the superconducting magnets, each single sensor needs to be calibrated individually. In the framework of a special contribution, IPN (Institut de Physique Nucléaire) in Orsay, France built and operated a calibration facility with a throughput of 80 thermometers per week. After reception from the manufacturer, the thermometer is first assembled onto a support specific to the measurement environment, and then thermally cycled ten times and calibrated at least once from 1.6 to 300 K. The procedure for each of these interventions includes various measurements and the acquired data is recorded in an ORACLE®-database. Furthermore random calibrations on some samples are executed at CERN to crosscheck the coherence between the approximation data obtained by both IPN and CERN. In the range of 1.5 K to 30 K, the calibration apparatuses at IPN and CERN are traceable to standards maintained in a national metrological laboratory by using a set of rhodium-iron temperature sensors of metrological quality. This paper presents the calibration procedure, the quality assurance applied, the results of the calibration campaigns and the return of experience.
Revised landsat-5 thematic mapper radiometric calibration
Chander, G.; Markham, B.L.; Barsi, J.A.
2007-01-01
Effective April 2, 2007, the radiometric calibration of Landsat-5 (L5) Thematic Mapper (TM) data that are processed and distributed by the U.S. Geological Survey (USGS) Center for Earth Resources Observation and Science (EROS) will be updated. The lifetime gain model that was implemented on May 5, 2003, for the reflective bands (1-5, 7) will be replaced by a new lifetime radiometric-calibration curve that is derived from the instrument's response to pseudoinvariant desert sites and from cross calibration with the Landsat-7 (L7) Enhanced TM Plus (ETM+). Although this calibration update applies to all archived and future L5 TM data, the principal improvements in the calibration are for the data acquired during the first eight years of the mission (1984-1991), where the changes in the instrument-gain values are as much as 15%. The radiometric scaling coefficients for bands 1 and 2 for approximately the first eight years of the mission have also been changed. Users will need to apply these new coefficients to convert the calibrated data product digital numbers to radiance. The scaling coefficients for the other bands have not changed. ?? 2007 IEEE.
Gemini facility calibration unit
NASA Astrophysics Data System (ADS)
Ramsay-Howat, Suzanne K.; Harris, John W.; Gostick, David C.; Laidlaw, Ken; Kidd, Norrie; Strachan, Mel; Wilson, Ken
2000-08-01
High-quality, efficient calibration instruments is a pre- requisite for the modern observatory. Each of the Gemini telescopes will be equipped with identical facility calibration units (GCALs) designed to provide wavelength and flat-field calibrations for the suite of instruments. The broad range of instrumentation planned for the telescopes heavily constrains the design of GCAL. Short calibration exposures are required over wavelengths from 0.3micrometers to 5micrometers , field sizes up to 7 arcminutes and spectral resolution from R-5 to 50,000. The output from GCAL must mimic the f-16 beam of the telescope and provide a uniform illumination of the focal plane. The calibration units are mounted on the Gemini Instrument Support Structure, two meters from the focal pane, necessitating the use of large optical components. We will discuss the opto-mechanical design of the Gemini calibration unit, with reference to those feature which allow these stringent requirements to be met. A novel reflector/diffuser unit replaces the integration sphere more normally found in calibration systems. The efficiency of this system is an order of magnitude greater than for an integration sphere. A system of two off-axis mirrors reproduces the telescope pupil and provides the 7 foot focal plane. The results of laboratory test of the uniformity and throughput of the GCAL will be presented.
New technique for calibrating hydrocarbon gas flowmeters
NASA Technical Reports Server (NTRS)
Singh, J. J.; Puster, R. L.
1984-01-01
A technique for measuring calibration correction factors for hydrocarbon mass flowmeters is described. It is based on the Nernst theorem for matching the partial pressure of oxygen in the combustion products of the test hydrocarbon, burned in oxygen-enriched air, with that in normal air. It is applied to a widely used type of commercial thermal mass flowmeter for a number of hydrocarbons. The calibration correction factors measured using this technique are in good agreement with the values obtained by other independent procedures. The technique is successfully applied to the measurement of differences as low as one percent of the effective hydrocarbon content of the natural gas test samples.
Jet energy calibration at the LHC
Schwartzman, Ariel
2015-11-10
In this study, jets are one of the most prominent physics signatures of high energy proton–proton (p–p) collisions at the Large Hadron Collider (LHC). They are key physics objects for precision measurements and searches for new phenomena. This review provides an overview of the reconstruction and calibration of jets at the LHC during its first Run. ATLAS and CMS developed different approaches for the reconstruction of jets, but use similar methods for the energy calibration. ATLAS reconstructs jets utilizing input signals from their calorimeters and use charged particle tracks to refine their energy measurement and suppress the effects of multiplemore » p–p interactions (pileup). CMS, instead, combines calorimeter and tracking information to build jets from particle flow objects. Jets are calibrated using Monte Carlo (MC) simulations and a residual in situ calibration derived from collision data is applied to correct for the differences in jet response between data and Monte Carlo.« less
AUTOMATIC CALIBRATING SYSTEM FOR PRESSURE TRANSDUCERS
Amonette, E.L.; Rodgers, G.W.
1958-01-01
An automatic system for calibrating a number of pressure transducers is described. The disclosed embodiment of the invention uses a mercurial manometer to measure the air pressure applied to the transducer. A servo system follows the top of the mercury column as the pressure is changed and operates an analog- to-digital converter This converter furnishes electrical pulses, each representing an increment of pressure change, to a reversible counterThe transducer furnishes a signal at each calibration point, causing an electric typewriter and a card-punch machine to record the pressure at the instant as indicated by the counter. Another counter keeps track of the calibration points so that a number identifying each point is recorded with the corresponding pressure. A special relay control system controls the pressure trend and programs the sequential calibration of several transducers.
Jet energy calibration at the LHC
Schwartzman, Ariel
2015-11-10
In this study, jets are one of the most prominent physics signatures of high energy proton–proton (p–p) collisions at the Large Hadron Collider (LHC). They are key physics objects for precision measurements and searches for new phenomena. This review provides an overview of the reconstruction and calibration of jets at the LHC during its first Run. ATLAS and CMS developed different approaches for the reconstruction of jets, but use similar methods for the energy calibration. ATLAS reconstructs jets utilizing input signals from their calorimeters and use charged particle tracks to refine their energy measurement and suppress the effects of multiple p–p interactions (pileup). CMS, instead, combines calorimeter and tracking information to build jets from particle flow objects. Jets are calibrated using Monte Carlo (MC) simulations and a residual in situ calibration derived from collision data is applied to correct for the differences in jet response between data and Monte Carlo.
PACS photometer calibration block analysis
NASA Astrophysics Data System (ADS)
Moór, A.; Müller, T. G.; Kiss, C.; Balog, Z.; Billot, N.; Marton, G.
2014-07-01
The absolute stability of the PACS bolometer response over the entire mission lifetime without applying any corrections is about 0.5 % (standard deviation) or about 8 % peak-to-peak. This fantastic stability allows us to calibrate all scientific measurements by a fixed and time-independent response file, without using any information from the PACS internal calibration sources. However, the analysis of calibration block observations revealed clear correlations of the internal source signals with the evaporator temperature and a signal drift during the first half hour after the cooler recycling. These effects are small, but can be seen in repeated measurements of standard stars. From our analysis we established corrections for both effects which push the stability of the PACS bolometer response to about 0.2 % (stdev) or 2 % in the blue, 3 % in the green and 5 % in the red channel (peak-to-peak). After both corrections we still see a correlation of the signals with PACS FPU temperatures, possibly caused by parasitic heat influences via the Kevlar wires which connect the bolometers with the PACS Focal Plane Unit. No aging effect or degradation of the photometric system during the mission lifetime has been found.
Calibration of acoustic transients.
Burkard, Robert
2006-05-26
This article reviews the appropriate stimulus parameters (click duration, toneburst envelope) that should be used when eliciting auditory brainstem responses from mice. Equipment specifications required to calibrate these acoustic transients are discussed. Several methods of calibrating the level of acoustic transients are presented, including the measurement of peak equivalent sound pressure level (peSPL) and peak sound pressure level (pSPL). It is hoped that those who collect auditory brainstem response thresholds in mice will begin to use standardized methods of acoustic calibration, so that hearing thresholds across mouse strains obtained in different laboratories can more readily be compared.
Dynamic Pressure Calibration Standard
NASA Technical Reports Server (NTRS)
Schutte, P. C.; Cate, K. H.; Young, S. D.
1986-01-01
Vibrating columns of fluid used to calibrate transducers. Dynamic pressure calibration standard developed for calibrating flush diaphragm-mounted pressure transducers. Pressures up to 20 kPa (3 psi) accurately generated over frequency range of 50 to 1,800 Hz. System includes two conically shaped aluminum columns one 5 cm (2 in.) high for low pressures and another 11 cm (4.3 in.) high for higher pressures, each filled with viscous fluid. Each column mounted on armature of vibration exciter, which imparts sinusoidally varying acceleration to fluid column. Signal noise low, and waveform highly dependent on quality of drive signal in vibration exciter.
NASA Astrophysics Data System (ADS)
Pappalardo, Gelsomina; Freudenthaler, Volker; Nicolae, Doina; Mona, Lucia; Belegante, Livio; D'Amico, Giuseppe
2016-06-01
This paper presents the newly established Lidar Calibration Centre, a distributed infrastructure in Europe, whose goal is to offer services for complete characterization and calibration of lidars and ceilometers. Mobile reference lidars, laboratories for testing and characterization of optics and electronics, facilities for inspection and debugging of instruments, as well as for training in good practices are open to users from the scientific community, operational services and private sector. The Lidar Calibration Centre offers support for trans-national access through the EC HORIZON2020 project ACTRIS-2.
Airdata Measurement and Calibration
NASA Technical Reports Server (NTRS)
Haering, Edward A., Jr.
1995-01-01
This memorandum provides a brief introduction to airdata measurement and calibration. Readers will learn about typical test objectives, quantities to measure, and flight maneuvers and operations for calibration. The memorandum informs readers about tower-flyby, trailing cone, pacer, radar-tracking, and dynamic airdata calibration maneuvers. Readers will also begin to understand how some data analysis considerations and special airdata cases, including high-angle-of-attack flight, high-speed flight, and nonobtrusive sensors are handled. This memorandum is not intended to be all inclusive; this paper contains extensive reference and bibliography sections.
Compact radiometric microwave calibrator
Fixsen, D. J.; Wollack, E. J.; Kogut, A.; Limon, M.; Mirel, P.; Singal, J.; Fixsen, S. M.
2006-06-15
The calibration methods for the ARCADE II instrument are described and the accuracy estimated. The Steelcast coated aluminum cones which comprise the calibrator have a low reflection while maintaining 94% of the absorber volume within 5 mK of the base temperature (modeled). The calibrator demonstrates an absorber with the active part less than one wavelength thick and only marginally larger than the mouth of the largest horn and yet black (less than -40 dB or 0.01% reflection) over five octaves in frequency.
DIRBE External Calibrator (DEC)
NASA Technical Reports Server (NTRS)
Wyatt, Clair L.; Thurgood, V. Alan; Allred, Glenn D.
1987-01-01
Under NASA Contract No. NAS5-28185, the Center for Space Engineering at Utah State University has produced a calibration instrument for the Diffuse Infrared Background Experiment (DIRBE). DIRBE is one of the instruments aboard the Cosmic Background Experiment Observatory (COBE). The calibration instrument is referred to as the DEC (Dirbe External Calibrator). DEC produces a steerable, infrared beam of controlled spectral content and intensity and with selectable point source or diffuse source characteristics, that can be directed into the DIRBE to map fields and determine response characteristics. This report discusses the design of the DEC instrument, its operation and characteristics, and provides an analysis of the systems capabilities and performance.
Multivariate Longitudinal Analysis with Bivariate Correlation Test.
Adjakossa, Eric Houngla; Sadissou, Ibrahim; Hounkonnou, Mahouton Norbert; Nuel, Gregory
2016-01-01
In the context of multivariate multilevel data analysis, this paper focuses on the multivariate linear mixed-effects model, including all the correlations between the random effects when the dimensional residual terms are assumed uncorrelated. Using the EM algorithm, we suggest more general expressions of the model's parameters estimators. These estimators can be used in the framework of the multivariate longitudinal data analysis as well as in the more general context of the analysis of multivariate multilevel data. By using a likelihood ratio test, we test the significance of the correlations between the random effects of two dependent variables of the model, in order to investigate whether or not it is useful to model these dependent variables jointly. Simulation studies are done to assess both the parameter recovery performance of the EM estimators and the power of the test. Using two empirical data sets which are of longitudinal multivariate type and multivariate multilevel type, respectively, the usefulness of the test is illustrated. PMID:27537692
Multivariate Longitudinal Analysis with Bivariate Correlation Test
Adjakossa, Eric Houngla; Sadissou, Ibrahim; Hounkonnou, Mahouton Norbert; Nuel, Gregory
2016-01-01
In the context of multivariate multilevel data analysis, this paper focuses on the multivariate linear mixed-effects model, including all the correlations between the random effects when the dimensional residual terms are assumed uncorrelated. Using the EM algorithm, we suggest more general expressions of the model’s parameters estimators. These estimators can be used in the framework of the multivariate longitudinal data analysis as well as in the more general context of the analysis of multivariate multilevel data. By using a likelihood ratio test, we test the significance of the correlations between the random effects of two dependent variables of the model, in order to investigate whether or not it is useful to model these dependent variables jointly. Simulation studies are done to assess both the parameter recovery performance of the EM estimators and the power of the test. Using two empirical data sets which are of longitudinal multivariate type and multivariate multilevel type, respectively, the usefulness of the test is illustrated. PMID:27537692
Calibration Fixture For Anemometer Probes
NASA Technical Reports Server (NTRS)
Lewis, Charles R.; Nagel, Robert T.
1993-01-01
Fixture facilitates calibration of three-dimensional sideflow thermal anemometer probes. With fixture, probe oriented at number of angles throughout its design range. Readings calibrated as function of orientation in airflow. Calibration repeatable and verifiable.
Sanagi, M Marsin; Nasir, Zalilah; Ling, Susie Lu; Hermawan, Dadan; Ibrahim, Wan Aini Wan; Naim, Ahmedy Abu
2010-01-01
Linearity assessment as required in method validation has always been subject to different interpretations and definitions by various guidelines and protocols. However, there are very limited applicable implementation procedures that can be followed by a laboratory chemist in assessing linearity. Thus, this work proposes a simple method for linearity assessment in method validation by a regression analysis that covers experimental design, estimation of the parameters, outlier treatment, and evaluation of the assumptions according to the International Union of Pure and Applied Chemistry guidelines. The suitability of this procedure was demonstrated by its application to an in-house validation for the determination of plasticizers in plastic food packaging by GC. PMID:20922968
Roundness calibration standard
Burrus, Brice M.
1984-01-01
A roundness calibration standard is provided with a first arc constituting the major portion of a circle and a second arc lying between the remainder of the circle and the chord extending between the ends of said first arc.
C.F. Ahlers, H.H. Liu
2001-12-18
The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the AMR Development Plan for U0035 Calibrated Properties Model REV00 (CRWMS M&O 1999c). These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models as well as Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions.
C. Ahlers; H. Liu
2000-03-12
The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the ''AMR Development Plan for U0035 Calibrated Properties Model REV00. These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models as well as Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions.
NASA Astrophysics Data System (ADS)
Francis, C. R.
1983-02-01
The operating principles and design of a radar altimeter representative of those proposed of ERS-1 are described and geophysical influences on the measurements are discussed. General aspects of calibration are examined, and the critical areas of time and frequency resolution pointed out. A method of internal calibration of delay and backscatter coefficient, by rerouting the tramsitter signal, is described. External prelaunch calibration can be carried out by airborne trials, or using a return signal simulator. It is established that airborne calibration requires high altitudes and high speeds, and is likely to be difficult and expensive. The design of a return signal simulator is shown to be very difficult. No feasible design is identified.
Multivariate Analysis of Ladle Vibration
NASA Astrophysics Data System (ADS)
Yenus, Jaefer; Brooks, Geoffrey; Dunn, Michelle
2016-08-01
The homogeneity of composition and uniformity of temperature of the steel melt before it is transferred to the tundish are crucial in making high-quality steel product. The homogenization process is performed by stirring the melt using inert gas in ladles. Continuous monitoring of this process is important to make sure the action of stirring is constant throughout the ladle. Currently, the stirring process is monitored by process operators who largely rely on visual and acoustic phenomena from the ladle. However, due to lack of measurable signals, the accuracy and suitability of this manual monitoring are problematic. The actual flow of argon gas to the ladle may not be same as the flow gage reading due to leakage along the gas line components. As a result, the actual degree of stirring may not be correctly known. Various researchers have used one-dimensional vibration, and sound and image signals measured from the ladle to predict the degree of stirring inside. They developed online sensors which are indeed to monitor the online stirring phenomena. In this investigation, triaxial vibration signals have been measured from a cold water model which is a model of an industrial ladle. Three flow rate ranges and varying bath heights were used to collect vibration signals. The Fast Fourier Transform was applied to the dataset before it has been analyzed using principal component analysis (PCA) and partial least squares (PLS). PCA was used to unveil the structure in the experimental data. PLS was mainly applied to predict the stirring from the vibration response. It was found that for each flow rate range considered in this study, the informative signals reside in different frequency ranges. The first latent variables in these frequency ranges explain more than 95 pct of the variation in the stirring process for the entire single layer and the double layer data collected from the cold model. PLS analysis in these identified frequency ranges demonstrated that the latent
Perturbative refinement of the geometric calibration in pinhole SPECT.
Defrise, Michel; Vanhove, Christian; Nuyts, Johan
2008-02-01
The paper investigates the geometric calibration of a rotating gamma camera for pinhole (PH) single photon emission computed tomography (SPECT) imaging. Most calibration methods previously applied in PH-SPECT assume that the motion of the camera around the object belongs to a well-defined class described by a small number of geometric parameters, for instance seven parameters for a circular acquisition with a single pinhole camera. The proposed new method refines an initial parametric calibration by applying to each position of the camera a rigid body transformation that is determined to improve the fit between the measured and calculated projections of the calibration sources. A stable estimate of this transformation can be obtained with only three calibration sources by linearizing the equations around the position estimated by the initial parametric calibration. The performance of the method is illustrated using simulated and measured micro-SPECT data.
Application of multivariate statistical techniques in microbial ecology.
Paliy, O; Shankar, V
2016-03-01
Recent advances in high-throughput methods of molecular analyses have led to an explosion of studies generating large-scale ecological data sets. In particular, noticeable effect has been attained in the field of microbial ecology, where new experimental approaches provided in-depth assessments of the composition, functions and dynamic changes of complex microbial communities. Because even a single high-throughput experiment produces large amount of data, powerful statistical techniques of multivariate analysis are well suited to analyse and interpret these data sets. Many different multivariate techniques are available, and often it is not clear which method should be applied to a particular data set. In this review, we describe and compare the most widely used multivariate statistical techniques including exploratory, interpretive and discriminatory procedures. We consider several important limitations and assumptions of these methods, and we present examples of how these approaches have been utilized in recent studies to provide insight into the ecology of the microbial world. Finally, we offer suggestions for the selection of appropriate methods based on the research question and data set structure.
Poisson and Multinomial Mixture Models for Multivariate SIMS Image Segmentation
Willse, Alan R.; Tyler, Bonnie
2002-11-08
Multivariate statistical methods have been advocated for analysis of spectral images, such as those obtained with imaging time-of-flight secondary ion mass spectrometry (TOF-SIMS). TOF-SIMS images using total secondary ion counts or secondary ion counts at individual masses often fail to reveal all salient chemical patterns on the surface. Multivariate methods simultaneously analyze peak intensities at all masses. We propose multivariate methods based on Poisson and multinomial mixture models to segment SIMS images into chemically homogeneous regions. The Poisson mixture model is derived from the assumption that secondary ion counts at any mass in a chemically homogeneous region vary according to the Poisson distribution. The multinomial model is derived as a standardized Poisson mixture model, which is analogous to standardizing the data by dividing by total secondary ion counts. The methods are adapted for contextual image segmentation, allowing for spatial correlation of neighboring pixels. The methods are applied to 52 mass units of a SIMS image with known chemical components. The spectral profile and relative prevalence for each chemical phase are obtained from estimates of model parameters.
Application of multivariate outlier detection to fluid velocity measurements
NASA Astrophysics Data System (ADS)
Griffin, John; Schultz, Todd; Holman, Ryan; Ukeiley, Lawrence S.; Cattafesta, Louis N.
2010-07-01
A statistical-based approach to detect outliers in fluid-based velocity measurements is proposed. Outliers are effectively detected from experimental unimodal distributions with the application of an existing multivariate outlier detection algorithm for asymmetric distributions (Hubert and Van der Veeken, J Chemom 22:235-246, 2008). This approach is an extension of previous methods that only apply to symmetric distributions. For fluid velocity measurements, rejection of statistical outliers, meaning erroneous as well as low probability data, via multivariate outlier rejection is compared to a traditional method based on univariate statistics. For particle image velocimetry data, both tests are conducted after application of the current de facto standard spatial filter, the universal outlier detection test (Westerweel and Scarano, Exp Fluids 39:1096-1100, 2005). By doing so, the utility of statistical outlier detection in addition to spatial filters is demonstrated, and further, the differences between multivariate and univariate outlier detection are discussed. Since the proposed technique for outlier detection is an independent process, statistical outlier detection is complementary to spatial outlier detection and can be used as an additional validation tool.
Application of multivariate statistical techniques in microbial ecology.
Paliy, O; Shankar, V
2016-03-01
Recent advances in high-throughput methods of molecular analyses have led to an explosion of studies generating large-scale ecological data sets. In particular, noticeable effect has been attained in the field of microbial ecology, where new experimental approaches provided in-depth assessments of the composition, functions and dynamic changes of complex microbial communities. Because even a single high-throughput experiment produces large amount of data, powerful statistical techniques of multivariate analysis are well suited to analyse and interpret these data sets. Many different multivariate techniques are available, and often it is not clear which method should be applied to a particular data set. In this review, we describe and compare the most widely used multivariate statistical techniques including exploratory, interpretive and discriminatory procedures. We consider several important limitations and assumptions of these methods, and we present examples of how these approaches have been utilized in recent studies to provide insight into the ecology of the microbial world. Finally, we offer suggestions for the selection of appropriate methods based on the research question and data set structure. PMID:26786791
MIRO Continuum Calibration for Asteroid Mode
NASA Technical Reports Server (NTRS)
Lee, Seungwon
2011-01-01
MIRO (Microwave Instrument for the Rosetta Orbiter) is a lightweight, uncooled, dual-frequency heterodyne radiometer. The MIRO encountered asteroid Steins in 2008, and during the flyby, MIRO used the Asteroid Mode to measure the emission spectrum of Steins. The Asteroid Mode is one of the seven modes of the MIRO operation, and is designed to increase the length of time that a spectral line is in the MIRO pass-band during a flyby of an object. This software is used to calibrate the continuum measurement of Steins emission power during the asteroid flyby. The MIRO raw measurement data need to be calibrated in order to obtain physically meaningful data. This software calibrates the MIRO raw measurements in digital units to the brightness temperature in Kelvin. The software uses two calibration sequences that are included in the Asteroid Mode. One sequence is at the beginning of the mode, and the other at the end. The first six frames contain the measurement of a cold calibration target, while the last six frames measure a warm calibration target. The targets have known temperatures and are used to provide reference power and gain, which can be used to convert MIRO measurements into brightness temperature. The software was developed to calibrate MIRO continuum measurements from Asteroid Mode. The software determines the relationship between the raw digital unit measured by MIRO and the equivalent brightness temperature by analyzing data from calibration frames. The found relationship is applied to non-calibration frames, which are the measurements of an object of interest such as asteroids and other planetary objects that MIRO encounters during its operation. This software characterizes the gain fluctuations statistically and determines which method to estimate gain between calibration frames. For example, if the fluctuation is lower than a statistically significant level, the averaging method is used to estimate the gain between the calibration frames. If the
Clifford, Harry J.
2011-03-22
A method and apparatus for mounting a calibration sphere to a calibration fixture for Coordinate Measurement Machine (CMM) calibration and qualification is described, decreasing the time required for such qualification, thus allowing the CMM to be used more productively. A number of embodiments are disclosed that allow for new and retrofit manufacture to perform as integrated calibration sphere and calibration fixture devices. This invention renders unnecessary the removal of a calibration sphere prior to CMM measurement of calibration features on calibration fixtures, thereby greatly reducing the time spent qualifying a CMM.
A direct-gradient multivariate index of biotic condition
Miranda, Leandro E.; Aycock, J.N.; Killgore, K. J.
2012-01-01
Multimetric indexes constructed by summing metric scores have been criticized despite many of their merits. A leading criticism is the potential for investigator bias involved in metric selection and scoring. Often there is a large number of competing metrics equally well correlated with environmental stressors, requiring a judgment call by the investigator to select the most suitable metrics to include in the index and how to score them. Data-driven procedures for multimetric index formulation published during the last decade have reduced this limitation, yet apprehension remains. Multivariate approaches that select metrics with statistical algorithms may reduce the level of investigator bias and alleviate a weakness of multimetric indexes. We investigated the suitability of a direct-gradient multivariate procedure to derive an index of biotic condition for fish assemblages in oxbow lakes in the Lower Mississippi Alluvial Valley. Although this multivariate procedure also requires that the investigator identify a set of suitable metrics potentially associated with a set of environmental stressors, it is different from multimetric procedures because it limits investigator judgment in selecting a subset of biotic metrics to include in the index and because it produces metric weights suitable for computation of index scores. The procedure, applied to a sample of 35 competing biotic metrics measured at 50 oxbow lakes distributed over a wide geographical region in the Lower Mississippi Alluvial Valley, selected 11 metrics that adequately indexed the biotic condition of five test lakes. Because the multivariate index includes only metrics that explain the maximum variability in the stressor variables rather than a balanced set of metrics chosen to reflect various fish assemblage attributes, it is fundamentally different from multimetric indexes of biotic integrity with advantages and disadvantages. As such, it provides an alternative to multimetric procedures.
Collision prediction models using multivariate Poisson-lognormal regression.
El-Basyouny, Karim; Sayed, Tarek
2009-07-01
This paper advocates the use of multivariate Poisson-lognormal (MVPLN) regression to develop models for collision count data. The MVPLN approach presents an opportunity to incorporate the correlations across collision severity levels and their influence on safety analyses. The paper introduces a new multivariate hazardous location identification technique, which generalizes the univariate posterior probability of excess that has been commonly proposed and applied in the literature. In addition, the paper presents an alternative approach for quantifying the effect of the multivariate structure on the precision of expected collision frequency. The MVPLN approach is compared with the independent (separate) univariate Poisson-lognormal (PLN) models with respect to model inference, goodness-of-fit, identification of hot spots and precision of expected collision frequency. The MVPLN is modeled using the WinBUGS platform which facilitates computation of posterior distributions as well as providing a goodness-of-fit measure for model comparisons. The results indicate that the estimates of the extra Poisson variation parameters were considerably smaller under MVPLN leading to higher precision. The improvement in precision is due mainly to the fact that MVPLN accounts for the correlation between the latent variables representing property damage only (PDO) and injuries plus fatalities (I+F). This correlation was estimated at 0.758, which is highly significant, suggesting that higher PDO rates are associated with higher I+F rates, as the collision likelihood for both types is likely to rise due to similar deficiencies in roadway design and/or other unobserved factors. In terms of goodness-of-fit, the MVPLN model provided a superior fit than the independent univariate models. The multivariate hazardous location identification results demonstrated that some hazardous locations could be overlooked if the analysis was restricted to the univariate models. PMID:19540972
Barometric calibration of a luminescent oxygen probe.
Golub, Aleksander S; Pittman, Roland N
2016-04-01
The invention of the phosphorescence quenching method for the measurement of oxygen concentration in blood and tissue revolutionized physiological studies of oxygen transport in living organisms. Since the pioneering publication by Vanderkooi and Wilson in 1987, many researchers have contributed to the measurement of oxygen in the microcirculation, to oxygen imaging in tissues and microvessels, and to the development of new extracellular and intracellular phosphorescent probes. However, there is a problem of congruency in data from different laboratories, because of interlaboratory variability of the calibration coefficients in the Stern-Volmer equation. Published calibrations for a common oxygen probe, Pd-porphyrin + bovine serum albumin (BSA), vary because of differences in the techniques used. These methods are used for the formation of oxygen standards: chemical titration, calibrated gas mixtures, and an oxygen electrode. Each method in turn also needs calibration. We have designed a barometric method for the calibration of oxygen probes by using a regulated vacuum to set multiple PO2 standards. The method is fast and accurate and can be applied to biological fluids obtained during or after an experiment. Calibration over the full physiological PO2 range (1-120 mmHg) takes ∼15 min and requires 1-2 mg of probe. PMID:26846556
DIANA NaI-Detector Energy Calibration
NASA Astrophysics Data System (ADS)
O'Connor, Kyle; Elofson, David; Lewis, Codie; O'Brien, Erin; Buggelli, Kelsey; Miller, Nevin; O'Rielly, Grant; Maxtagg Collaboration
2014-09-01
The DIANA detector is being used for measurements of near threshold pion photoproduction and high-energy nuclear Compton scattering being performed at the MAX-lab tagged photon facility in Lund, Sweden. Accurate energy calibrations are essential for determining the final results from both of these experiments. An energy calibration has been performed for DIANA, a single-crystal, large-volume, NaI detector. This calibration was made by placing the detector directly in the tagged photon beam with energies from 145 to 165 MeV and fitting the detector response to the known photon energies. The DIANA crystal is instrumented with 19 PMTs, pedestal corrections were applied and the PMTs were gain matched in order to combine the readout value from each PMT and determine the final detector response. This response was fitted to the tagged photon energies to provide the final energy calibration. The calibrations were performed with two triggers; one from the detector itself and one provided by the photon tagger. The quality of the final calibration fit and the energy resolution of the detector, σ ~ 2 . 4 MeV, will be shown.
40 CFR 1065.310 - Torque calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
... lever-arm length. Quantify the lever-arm length, NIST-traceable within ±0.5% uncertainty. The lever arm... torque, NIST-traceable within ±1% uncertainty, and account for it as part of the reference torque. (c...% uncertainty. (1) Dead-weight calibration. This technique applies a known force by hanging known weights at...
NASA Astrophysics Data System (ADS)
Hu, Wei; Si, Bing Cheng
2016-08-01
The scale-specific and localized bivariate relationships in geosciences can be revealed using bivariate wavelet coherence. The objective of this study was to develop a multiple wavelet coherence method for examining scale-specific and localized multivariate relationships. Stationary and non-stationary artificial data sets, generated with the response variable as the summation of five predictor variables (cosine waves) with different scales, were used to test the new method. Comparisons were also conducted using existing multivariate methods, including multiple spectral coherence and multivariate empirical mode decomposition (MEMD). Results show that multiple spectral coherence is unable to identify localized multivariate relationships, and underestimates the scale-specific multivariate relationships for non-stationary processes. The MEMD method was able to separate all variables into components at the same set of scales, revealing scale-specific relationships when combined with multiple correlation coefficients, but has the same weakness as multiple spectral coherence. However, multiple wavelet coherences are able to identify scale-specific and localized multivariate relationships, as they are close to 1 at multiple scales and locations corresponding to those of predictor variables. Therefore, multiple wavelet coherence outperforms other common multivariate methods. Multiple wavelet coherence was applied to a real data set and revealed the optimal combination of factors for explaining temporal variation of free water evaporation at the Changwu site in China at multiple scale-location domains. Matlab codes for multiple wavelet coherence were developed and are provided in the Supplement.
Depth-based hotspot identification and multivariate ranking using the full Bayes approach.
El-Basyouny, Karim; Sayed, Tarek
2013-01-01
Although the multivariate structure of traffic accidents has been recognized in the safety literature for over a decade now, univariate identification and ranking of hotspots is still dominant. The present paper advocates the use of multivariate identification and ranking of hotspots based on statistical depth functions, which are useful tools for non-parametric multivariate analysis as they provide center-out ordering of multivariate data. Thus, a depth-based multivariate method is proposed for the identification and ranking of hotspots using the full Bayes (FB) approach. The proposed method is applied to a sample of 236 signalized intersections in the Greater Vancouver Area. Various multivariate Poisson log-normal (MVPLN) models were used for data analysis. For each model, the FB posterior estimates were obtained using the Markov Chains Monte Carlo (MCMC) techniques and several goodness-of-fit measures were used for model selection. Using a depth threshold of 0.025, the proposed method identified 26 intersections (11%) as potential hotspots. The choice of a depth threshold is a delicate decision and it is suggested to determine the threshold according to the amount of funding available for safety improvement, which is the usual practice in univariate hotspot identification (HSID). Also, the results show that the performance of the proposed multivariate depth-based FB HSID method is superior to that of an analogous method based on the depths of accident frequency (AF) in terms of sensitivity, specificity and the sum of norms (lengths) of Poisson mean vectors. PMID:23018036
Multivariate analysis: A statistical approach for computations
NASA Astrophysics Data System (ADS)
Michu, Sachin; Kaushik, Vandana
2014-10-01
Multivariate analysis is a type of multivariate statistical approach commonly used in, automotive diagnosis, education evaluating clusters in finance etc and more recently in the health-related professions. The objective of the paper is to provide a detailed exploratory discussion about factor analysis (FA) in image retrieval method and correlation analysis (CA) of network traffic. Image retrieval methods aim to retrieve relevant images from a collected database, based on their content. The problem is made more difficult due to the high dimension of the variable space in which the images are represented. Multivariate correlation analysis proposes an anomaly detection and analysis method based on the correlation coefficient matrix. Anomaly behaviors in the network include the various attacks on the network like DDOs attacks and network scanning.
Calculations for Calibration of a Mass Spectrometer
NASA Technical Reports Server (NTRS)
Lee, Seungwon
2008-01-01
A computer program performs calculations to calibrate a quadrupole mass spectrometer in an instrumentation system for identifying trace amounts of organic chemicals in air. In the operation of the mass spectrometer, the mass-to-charge ratio (m/z) of ions being counted at a given instant of time is a function of the instantaneous value of a repeating ramp voltage waveform applied to electrodes. The count rate as a function of time can be converted to an m/z spectrum (equivalent to a mass spectrum for singly charged ions), provided that a calibration of m/z is available. The present computer program can perform the calibration in either or both of two ways: (1) Following a data-based approach, it can utilize the count-rate peaks and the times thereof measured when fed with air containing known organic compounds. (2) It can utilize a theoretical proportionality between the instantaneous m/z and the instantaneous value of an oscillating applied voltage. The program can also estimate the error of the calibration performed by the data-based approach. If calibrations are performed in both ways, then the results can be compared to obtain further estimates of errors.
Psychophysical contrast calibration
To, Long; Woods, Russell L; Goldstein, Robert B; Peli, Eli
2013-01-01
Electronic displays and computer systems offer numerous advantages for clinical vision testing. Laboratory and clinical measurements of various functions and in particular of (letter) contrast sensitivity require accurately calibrated display contrast. In the laboratory this is achieved using expensive light meters. We developed and evaluated a novel method that uses only psychophysical responses of a person with normal vision to calibrate the luminance contrast of displays for experimental and clinical applications. Our method combines psychophysical techniques (1) for detection (and thus elimination or reduction) of display saturating nonlinearities; (2) for luminance (gamma function) estimation and linearization without use of a photometer; and (3) to measure without a photometer the luminance ratios of the display’s three color channels that are used in a bit-stealing procedure to expand the luminance resolution of the display. Using a photometer we verified that the calibration achieved with this procedure is accurate for both LCD and CRT displays enabling testing of letter contrast sensitivity to 0.5%. Our visual calibration procedure enables clinical, internet and home implementation and calibration verification of electronic contrast testing. PMID:23643843
Polarimetric Palsar Calibration
NASA Astrophysics Data System (ADS)
Touzi, R.; Shimada, M.
2008-11-01
Polarimetric PALSAR system parameters are assessed using data sets collected over various calibration sites. The data collected over the Amazonian forest permits validating the zero Faraday rotation hypotheses near the equator. The analysis of the Amazonian forest data and the response of the corner reflectors deployed during the PALSAR acquisitions lead to the conclusion that the antenna is highly isolated (better than -35 dB). Theses results are confirmed using data collected over the Sweden and Ottawa calibration sites. The 5-m height trihedrals deployed in the Sweden calibration site by the Chalmers University of technology permits accurate measurement of antenna parameters, and detection of 2-3 degree Faraday rotation during day acquisition, whereas no Faraday rotation was noted during night acquisition. Small Faraday rotation angles (2-3 degree) have been measured using acquisitions over the DLR Oberpfaffenhofen and the Ottawa calibration sites. The presence of small but still significant Faraday rotation (2-3 degree) induces a CR return at the cross-polarization HV and VH that should not be interpreted as the actual antenna cross-talk. PALSAR antenna is highly isolated (better than -35 dB), and diagonal antenna distortion matrices (with zero cross-talk terms) can be used for accurate calibration of PALSAR polarimetric data.
Calibration Under Uncertainty.
Swiler, Laura Painton; Trucano, Timothy Guy
2005-03-01
This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the %22true%22 deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
POLCAL - POLARIMETRIC RADAR CALIBRATION
NASA Technical Reports Server (NTRS)
Vanzyl, J.
1994-01-01
Calibration of polarimetric radar systems is a field of research in which great progress has been made over the last few years. POLCAL (Polarimetric Radar Calibration) is a software tool intended to assist in the calibration of Synthetic Aperture Radar (SAR) systems. In particular, POLCAL calibrates Stokes matrix format data produced as the standard product by the NASA/Jet Propulsion Laboratory (JPL) airborne imaging synthetic aperture radar (AIRSAR). POLCAL was designed to be used in conjunction with data collected by the NASA/JPL AIRSAR system. AIRSAR is a multifrequency (6 cm, 24 cm, and 68 cm wavelength), fully polarimetric SAR system which produces 12 x 12 km imagery at 10 m resolution. AIRSTAR was designed as a testbed for NASA's Spaceborne Imaging Radar program. While the images produced after 1991 are thought to be calibrated (phase calibrated, cross-talk removed, channel imbalance removed, and absolutely calibrated), POLCAL can and should still be used to check the accuracy of the calibration and to correct it if necessary. Version 4.0 of POLCAL is an upgrade of POLCAL version 2.0 released to AIRSAR investigators in June, 1990. New options in version 4.0 include automatic absolute calibration of 89/90 data, distributed target analysis, calibration of nearby scenes with calibration parameters from a scene with corner reflectors, altitude or roll angle corrections, and calibration of errors introduced by known topography. Many sources of error can lead to false conclusions about the nature of scatterers on the surface. Errors in the phase relationship between polarization channels result in incorrect synthesis of polarization states. Cross-talk, caused by imperfections in the radar antenna itself, can also lead to error. POLCAL reduces cross-talk and corrects phase calibration without the use of ground calibration equipment. Removing the antenna patterns during SAR processing also forms a very important part of the calibration of SAR data. Errors in the
A Review of Sensor Calibration Monitoring for Calibration Interval Extension in Nuclear Power Plants
Coble, Jamie B.; Meyer, Ryan M.; Ramuhalli, Pradeep; Bond, Leonard J.; Hashemian, Hash; Shumaker, Brent; Cummins, Dara
2012-08-31
Currently in the United States, periodic sensor recalibration is required for all safety-related sensors, typically occurring at every refueling outage, and it has emerged as a critical path item for shortening outage duration in some plants. Online monitoring can be employed to identify those sensors that require calibration, allowing for calibration of only those sensors that need it. International application of calibration monitoring, such as at the Sizewell B plant in United Kingdom, has shown that sensors may operate for eight years, or longer, within calibration tolerances. This issue is expected to also be important as the United States looks to the next generation of reactor designs (such as small modular reactors and advanced concepts), given the anticipated longer refueling cycles, proposed advanced sensors, and digital instrumentation and control systems. The U.S. Nuclear Regulatory Commission (NRC) accepted the general concept of online monitoring for sensor calibration monitoring in 2000, but no U.S. plants have been granted the necessary license amendment to apply it. This report presents a state-of-the-art assessment of online calibration monitoring in the nuclear power industry, including sensors, calibration practice, and online monitoring algorithms. This assessment identifies key research needs and gaps that prohibit integration of the NRC-approved online calibration monitoring system in the U.S. nuclear industry. Several needs are identified, including the quantification of uncertainty in online calibration assessment; accurate determination of calibration acceptance criteria and quantification of the effect of acceptance criteria variability on system performance; and assessment of the feasibility of using virtual sensor estimates to replace identified faulty sensors in order to extend operation to the next convenient maintenance opportunity. Understanding the degradation of sensors and the impact of this degradation on signals is key to
Calibration Monitoring for Sensor Calibration Interval Extension: Gaps in the Current Science Base
Coble, Jamie B.; Ramuhalli, Pradeep; Meyer, Ryan M.; Hashemian, Hash; Shumaker, Brent; Cummins, Dara
2012-10-09
Currently in the United States, periodic sensor recalibration is required for all safety-related sensors, typically occurring at every refueling outage, and it has emerged as a critical path item for shortening outage duration in some plants. International application of calibration monitoring has shown that sensors may operate for longer periods within calibration tolerances. This issue is expected to also be important as the United States looks to the next generation of reactor designs (such as small modular reactors and advanced concepts), given the anticipated longer refueling cycles, proposed advanced sensors, and digital instrumentation and control systems. Online monitoring (OLM) can be employed to identify those sensors that require calibration, allowing for calibration of only those sensors that need it. The U.S. Nuclear Regulatory Commission (NRC) accepted the general concept of OLM for sensor calibration monitoring in 2000, but no U.S. plants have been granted the necessary license amendment to apply it. This paper summarizes a recent state-of-the-art assessment of online calibration monitoring in the nuclear power industry, including sensors, calibration practice, and OLM algorithms. This assessment identifies key research needs and gaps that prohibit integration of the NRC-approved online calibration monitoring system in the U.S. nuclear industry. Several technical needs were identified, including an understanding of the impacts of sensor degradation on measurements for both conventional and emerging sensors; the quantification of uncertainty in online calibration assessment; determination of calibration acceptance criteria and quantification of the effect of acceptance criteria variability on system performance; and assessment of the feasibility of using virtual sensor estimates to replace identified faulty sensors in order to extend operation to the next convenient maintenance opportunity.
A miniature remote deadweight calibrator
NASA Astrophysics Data System (ADS)
Supplee, Frank H., Jr.; Tcheng, Ping
A miniature, computer-controlled, deadweight calibrator was developed to remotely calibrate a force transducer mounted in a cryogenic chamber. This simple mechanism allows automatic loading and unloading of deadweights placed onto a skin friction balance during calibrations. Equipment for the calibrator includes a specially designed set of five interlocking 200-milligram weights, a motorized lifting platform, and a controller box taking commands from a microcomputer on an IEEE interface. The computer is also used to record and reduce the calibration data and control other calibration parameters. The full-scale load for this device is 1,000 milligrams; however, the concept can be extended to accommodate other calibration ranges.
Multivariate data analysis for outcome studies.
Spector, P E
1981-02-01
The use of multivariate statistical techniques for analyzing the complex data often gathered in outcome studies is discussed. The multivariate analysis of variance (MANOVA) is suggested for multiple group studies common to outcome studies. This technique can be utilized for a large number of specific research designs whenever multiple outcome measures are collected. MANOVA offers two specific advantages over more familiar univariate approaches: it presents better control over Type 1 error rates while preserving statistical power, and it allows more thorough analysis of complex data. PMID:7223728
40 CFR 1066.130 - Measurement instrument calibrations and verifications.
Code of Federal Regulations, 2014 CFR
2014-07-01
... CVS flow meters calibrated volumetrically as described in § 1066.140. 40 CFR 1065.345: Vacuum leak... an emissions test and after maintenance such as pre-filter changes. 40 CFR 1065.350(c), 1065.355(c... measurement instrument calibration and verification requirements in 40 CFR part 1065, subpart D, apply...
Calibration and Temperature Profile of a Tungsten Filament Lamp
ERIC Educational Resources Information Center
de Izarra, Charles; Gitton, Jean-Michel
2010-01-01
The goal of this work proposed for undergraduate students and teachers is the calibration of a tungsten filament lamp from electric measurements that are both simple and precise, allowing to determine the temperature of tungsten filament as a function of the current intensity. This calibration procedure was first applied to a conventional filament…
Calibration Of Airborne Visible/IR Imaging Spectrometer
NASA Technical Reports Server (NTRS)
Vane, G. A.; Chrien, T. G.; Miller, E. A.; Reimer, J. H.
1990-01-01
Paper describes laboratory spectral and radiometric calibration of Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) applied to all AVIRIS science data collected in 1987. Describes instrumentation and procedures used and demonstrates that calibration accuracy achieved exceeds design requirements. Developed for use in remote-sensing studies in such disciplines as botany, geology, hydrology, and oceanography.
Calibration validation revisited or how to make better use of available data: Sub-period calibration
NASA Astrophysics Data System (ADS)
Gharari, S.; Hrachowitz, M.; Fenicia, F.; Savenije, H.
2012-12-01
Parameter identification of conceptual hydrological models depends largely on calibration, as model parameters are typically non-measurable quantities. For hydrological modeling the identification of "realistic" parameter sets is a key objective. As a model is intended to be used for prediction in future it is also crucial that the model parameters be time transposable. However, previous studies showed that the "best" parameter set can significantly vary over time. Instead of using the "best fit", this study introduces sub-period (SuPer) calibration as a new framework to identify the most "time consistent" parameterization, although potentially sub-optimal in the calibration period. The SuPer calibration framework includes two steps. First, the time series is split into different sub-periods, such as years or seasons. Then the model is calibrated separately for each sub-period and a Pareto front is obtained as the "best fit" for every sub-period. In the second step those parameter sets are selected that minimize the distance to the Pareto front of each sub-period, which involves an additional multi-objective optimization problem with dimensions equal to the number of sub-periods. The performance of the SuPer calibration framework is evaluated and compared with traditional calibration validation frameworks for two sub-period combinations: 1) Two consecutive years; and 2) Eight consecutive years, as sub-periods. For this evaluation we used the HyMOD model applied to the Wark catchment in the Grand Duchy of Luxembourg. We show that besides being a calibration framework, this approach has also diagnostic capabilities. It can in fact indicate the parameter sets that perform consistently well for all the sub-periods while it does not require subjective thresholds for defining behavioral parameter sets. It appears that SuPer calibration leads to feasible parameter ranges for the individual sub-periods which differ from parameter ranges defined by traditional model
Alamilla, Francisco; Calcerrada, Matías; García-Ruiz, Carmen; Torre, Mercedes
2013-05-10
The differentiation of blue ballpoint pen inks written on documents through an LA-ICP-MS methodology is proposed. Small common office paper portions containing ink strokes from 21 blue pens of known origin were cut and measured without any sample preparation. In a first step, Mg, Ca and Sr were proposed as internal standards (ISs) and used in order to normalize elemental intensities and subtract background signals from the paper. Then, specific criteria were designed and employed to identify target elements (Li, V, Mn, Co, Ni, Cu, Zn, Zr, Sn, W and Pb) which resulted independent of the IS chosen in a 98% of the cases and allowed a qualitative clustering of the samples. In a second step, an elemental-related ratio (ink ratio) based on the targets previously identified was used to obtain mass independent intensities and perform pairwise comparisons by means of multivariate statistical analyses (MANOVA, Tukey's HSD and T2 Hotelling). This treatment improved the discrimination power (DP) and provided objective results, achieving a complete differentiation among different brands and a partial differentiation within pen inks from the same brands. The designed data treatment, together with the use of multivariate statistical tools, represents an easy and useful tool for differentiating among blue ballpoint pen inks, with hardly sample destruction and without the need for methodological calibrations, being its use potentially advantageous from a forensic-practice standpoint. To test the procedure, it was applied to analyze real handwritten questioned contracts, previously studied by the Department of Forensic Document Exams of the Criminalistics Service of Civil Guard (Spain). The results showed that all questioned ink entries were clustered in the same group, being those different from the remaining ink on the document. PMID:23597731
Piecewise aggregate representations and lower-bound distance functions for multivariate time series
NASA Astrophysics Data System (ADS)
Li, Hailin
2015-06-01
Dimensionality reduction is one of the most important methods to improve the efficiency of the techniques that are applied to the field of multivariate time series data mining. Due to multivariate time series with the variable-based and time-based dimensions, the reduction techniques must take both of them into consideration. To achieve this goal, we use a center sequence to represent a multivariate time series so that the new sequence can be seen as a univariate time series. Thus two sophisticated piecewise aggregate representations, including piecewise aggregate approximation and symbolization applied to univariate time series, are used to further represent the extended sequence that is derived from the center one. Furthermore, some distance functions are designed to measure the similarity between two representations. Through being proven by some related mathematical analysis, the proposed functions are lower bound on Euclidean distance and dynamic time warping. In this way, false dismissals can be avoided when they are used to index the time series. In addition, multivariate time series with different lengths can be transformed into the extended sequences with equal length, and their corresponding distance functions can measure the similarity between two unequal-length multivariate time series. The experimental results demonstrate that the proposed methods can reduce the dimensionality, and their corresponding distance functions satisfy the lower-bound condition, which can speed up the calculation of similarity search and indexing in the multivariate time series datasets.
Calibration Systems Final Report
Myers, Tanya L.; Broocks, Bryan T.; Phillips, Mark C.
2006-02-01
The Calibration Systems project at Pacific Northwest National Laboratory (PNNL) is aimed towards developing and demonstrating compact Quantum Cascade (QC) laser-based calibration systems for infrared imaging systems. These on-board systems will improve the calibration technology for passive sensors, which enable stand-off detection for the proliferation or use of weapons of mass destruction, by replacing on-board blackbodies with QC laser-based systems. This alternative technology can minimize the impact on instrument size and weight while improving the quality of instruments for a variety of missions. The potential of replacing flight blackbodies is made feasible by the high output, stability, and repeatability of the QC laser spectral radiance.
The Absolute Radiometric Calibration of Space - Sensors.
NASA Astrophysics Data System (ADS)
Holm, Ronald Gene
1987-09-01
The need for absolute radiometric calibration of space-based sensors will continue to increase as new generations of space sensors are developed. A reflectance -based in-flight calibration procedure is used to determine the radiance reaching the entrance pupil of the sensor. This procedure uses ground-based measurements coupled with a radiative transfer code to characterize the effects the atmosphere has on the signal reaching the sensor. The computed radiance is compared to the digital count output of the sensor associated with the image of a test site. This provides an update to the preflight calibration of the system and a check on the on-board internal calibrator. This calibration procedure was used to perform a series of five calibrations of the Landsat-5 Thematic Mapper (TM). For the 12 measurements made in TM bands 1-3, the RMS variation from the mean as a percentage of the mean is (+OR-) 1.9%, and for measurements in the IR, TM bands 4,5, and 7, the value is (+OR-) 3.4%. The RMS variation for all 23 measurements is (+OR-) 2.8%. The absolute calibration techniques were put to another test with a series of three calibration of the SPOT-1 High Resolution Visible, (HRV), sensors. The ratio, HRV-2/HRV-1, of absolute calibration coefficients compared very well with ratios of histogrammed data obtained when the cameras simultaneously imaged the same ground site. Bands PA, B1 and B3 agreed to within 3%, while band B2 showed a 7% difference. The procedure for performing a satellite calibration was then used to demonstrate how a calibrated satellite sensor can be used to quantitatively evaluate surface reflectance over a wide range of surface features. Predicted reflectance factors were compared to values obtained from aircraft -based radiometer data. This procedure was applied on four dates with two different surface conditions per date. A strong correlation, R('2) = .996, was shown between reflectance values determined from satellite imagery and low-flying aircraft
Multivariate distributions of soil hydraulic parameters
NASA Astrophysics Data System (ADS)
Qu, Wei; Pachepsky, Yakov; Huisman, Johan Alexander; Martinez, Gonzalo; Bogena, Heye; Vereecken, Harry
2014-05-01
Statistical distributions of soil hydraulic parameters have to be known when synthetic fields of soil hydraulic properties need to be generated in ensemble modeling of soil water dynamics and soil water content data assimilation. Pedotransfer functions that provide statistical distributions of water retention and hydraulic conductivity parameters for textural classes are most often used in the parameter field generation. Presence of strong correlations can substantially influence the parameter generation results. The objective of this work was to review and evaluate available data on correlations between van Genuchten-Mualem (VGM) model parameters. So far, two different approaches were developed to estimate these correlations. The first approach uses pedotransfer functions to generate VGM parameters for a large number of soil compositions within a textural class, and then computes parameter correlations for each of the textural classes. The second approach computes the VGM parameter correlations directly from parameter values obtained by fitting VGM model to measured water retention and hydraulic conductivity data for soil samples belonging to a textural class. Carsel and Parish (1988) used the Rawls et al. (1982) pedotransfer functions, and Meyer et al. (1997) used the Rosetta pedotransfer algorithms (Schaap, 2002) to develop correlations according to the first approach. We used the UNSODA database (Nemes et al. 2001), the US Southern Plains database (Timlin et al., 1999), and the Belgian database (Vereecken et al., 1989, 1990) to apply the second approach. A substantial number of considerable (>0.7) correlation coefficients were found. Large differences were encountered between parameter correlations obtained with different approaches and different databases for the same textural classes. The first of the two approaches resulted in generally higher values of correlation coefficients between VGM parameters. However, results of the first approach application depend
Redox State of Iron in Lunar Glasses using X-ray Absorption Spectroscopy and Multivariate Analysis
NASA Astrophysics Data System (ADS)
Dyar, M. D.; McCanta, M. C.; Lanzirotti, A.; Sutton, S. R.; Carey, C. J.; Mahadevan, S.; Rutherford, M. J.
2014-12-01
The oxidation state of igneous materials on a planet is a critically-important variable in understanding magma evolution on bodies in our solar system. However, direct and indirect methods for quantifying redox states are challenging, especially across the broad spectrum of silicate glass compositions found on airless bodies. On the Moon, early Mössbauer studies of bulk samples suggested the presence of significant Fe3+ (>10%) in lunar glasses (green, orange, brown); lunar analog glasses synthesized at fO2 <10-11 have similar Fe3+. All these Mössbauer spectra are challenging to interpret due to the presence of multiple coordination environments in the glasses. X-ray absorption spectroscopy (XAS) allows pico- and nano-scale interrogation of primitive planetary materials using the pre-edge, main edge, and EXAFS regions of absorption edge spectra. Current uses of XAS require availability of standards with compositions similar to those of unknowns and complex procedures for curve-fitting of pre-edge features that produce results with poorly constrained accuracy. A new approach to accurate and quantitative redox measurements with XAS is to couple use of spectra from synthetic glass standards covering a broad compositional range with multivariate analysis (MVA) techniques. Mössbauer and XAS spectra from a suite of 33 synthetic glass standards covering a wide range of compositions and fO2(Dyar et al., this meeting) were used to develop a MVA model that utilizes valuable predictive information not only in the major spectral peaks/features, but in all channels of the XAS region. Algorithms for multivariate analysis t were used to "learn" the characteristics of a data set as a function of varying spectral characteristics. These models were applied to the study of lunar glasses, which provide a challenging test case for these newly-developed techniques due to their very low fO2. Application of the new XAS calibration model to Apollo 15 green (15426, 15427 and 15425
Liu, Na; Li, Jun; Li, Bao-Guo
2014-11-01
The study of quality control of Chinese medicine has always been the hot and the difficulty spot of the development of traditional Chinese medicine (TCM), which is also one of the key problems restricting the modernization and internationalization of Chinese medicine. Multivariate statistical analysis is an analytical method which is suitable for the analysis of characteristics of TCM. It has been used widely in the study of quality control of TCM. Multivariate Statistical analysis was used for multivariate indicators and variables that appeared in the study of quality control and had certain correlation between each other, to find out the hidden law or the relationship between the data can be found,.which could apply to serve the decision-making and realize the effective quality evaluation of TCM. In this paper, the application of multivariate statistical analysis in the quality control of Chinese medicine was summarized, which could provided the basis for its further study. PMID:25775806
Multivariate Boosting for Integrative Analysis of High-Dimensional Cancer Genomic Data
Xiong, Lie; Kuan, Pei-Fen; Tian, Jianan; Keles, Sunduz; Wang, Sijian
2015-01-01
In this paper, we propose a novel multivariate component-wise boosting method for fitting multivariate response regression models under the high-dimension, low sample size setting. Our method is motivated by modeling the association among different biological molecules based on multiple types of high-dimensional genomic data. Particularly, we are interested in two applications: studying the influence of DNA copy number alterations on RNA transcript levels and investigating the association between DNA methylation and gene expression. For this purpose, we model the dependence of the RNA expression levels on DNA copy number alterations and the dependence of gene expression on DNA methylation through multivariate regression models and utilize boosting-type method to handle the high dimensionality as well as model the possible nonlinear associations. The performance of the proposed method is demonstrated through simulation studies. Finally, our multivariate boosting method is applied to two breast cancer studies. PMID:26609213
Multivariate permutation entropy and its application for complexity analysis of chaotic systems
NASA Astrophysics Data System (ADS)
He, Shaobo; Sun, Kehui; Wang, Huihai
2016-11-01
To measure the complexity of multivariate systems, the multivariate permutation entropy (MvPE) algorithm is proposed. It is employed to measure complexity of multivariate system in the phase space. As an application, MvPE is applied to analyze the complexity of chaotic systems, including hyperchaotic Hénon map, fractional-order simplified Lorenz system and financial chaotic system. Results show that MvPE algorithm is effective for analyzing the complexity of the multivariate systems. It also shows that fractional-order system does not become more complex with derivative order varying. Compared with PE, MvPE has better robustness for noise and sampling interval, and the results are not affected by different normalization methods.
Technology Transfer Automated Retrieval System (TEKTRAN)
A near-infrared spectroscopy (NIRS) method for automated non-destructive detection of insect infestation internal to small fruit is desirable because of the zero-to-zero tolerance of the fresh and processed fruit markets. Three NIRS instruments: the Ocean Optics SD2000, the Perten DA7000 and the Ori...
Using variable combination population analysis for variable selection in multivariate calibration.
Yun, Yong-Huan; Wang, Wei-Ting; Deng, Bai-Chuan; Lai, Guang-Bi; Liu, Xin-bo; Ren, Da-Bing; Liang, Yi-Zeng; Fan, Wei; Xu, Qing-Song
2015-03-01
Variable (wavelength or feature) selection techniques have become a critical step for the analysis of datasets with high number of variables and relatively few samples. In this study, a novel variable selection strategy, variable combination population analysis (VCPA), was proposed. This strategy consists of two crucial procedures. First, the exponentially decreasing function (EDF), which is the simple and effective principle of 'survival of the fittest' from Darwin's natural evolution theory, is employed to determine the number of variables to keep and continuously shrink the variable space. Second, in each EDF run, binary matrix sampling (BMS) strategy that gives each variable the same chance to be selected and generates different variable combinations, is used to produce a population of subsets to construct a population of sub-models. Then, model population analysis (MPA) is employed to find the variable subsets with the lower root mean squares error of cross validation (RMSECV). The frequency of each variable appearing in the best 10% sub-models is computed. The higher the frequency is, the more important the variable is. The performance of the proposed procedure was investigated using three real NIR datasets. The results indicate that VCPA is a good variable selection strategy when compared with four high performing variable selection methods: genetic algorithm-partial least squares (GA-PLS), Monte Carlo uninformative variable elimination by PLS (MC-UVE-PLS), competitive adaptive reweighted sampling (CARS) and iteratively retains informative variables (IRIV). The MATLAB source code of VCPA is available for academic research on the website: http://www.mathworks.com/matlabcentral/fileexchange/authors/498750. PMID:25682424
ERIC Educational Resources Information Center
de Oliveira, Rodrigo R.; das Neves, Luiz S.; de Lima, Kassio M. G.
2012-01-01
A chemometrics course is offered to students in their fifth semester of the chemistry undergraduate program that includes an in-depth project. Students carry out the project over five weeks (three 8-h sessions per week) and conduct it in parallel to other courses or other practical work. The students conduct a literature search, carry out…
Using variable combination population analysis for variable selection in multivariate calibration.
Yun, Yong-Huan; Wang, Wei-Ting; Deng, Bai-Chuan; Lai, Guang-Bi; Liu, Xin-bo; Ren, Da-Bing; Liang, Yi-Zeng; Fan, Wei; Xu, Qing-Song
2015-03-01
Variable (wavelength or feature) selection techniques have become a critical step for the analysis of datasets with high number of variables and relatively few samples. In this study, a novel variable selection strategy, variable combination population analysis (VCPA), was proposed. This strategy consists of two crucial procedures. First, the exponentially decreasing function (EDF), which is the simple and effective principle of 'survival of the fittest' from Darwin's natural evolution theory, is employed to determine the number of variables to keep and continuously shrink the variable space. Second, in each EDF run, binary matrix sampling (BMS) strategy that gives each variable the same chance to be selected and generates different variable combinations, is used to produce a population of subsets to construct a population of sub-models. Then, model population analysis (MPA) is employed to find the variable subsets with the lower root mean squares error of cross validation (RMSECV). The frequency of each variable appearing in the best 10% sub-models is computed. The higher the frequency is, the more important the variable is. The performance of the proposed procedure was investigated using three real NIR datasets. The results indicate that VCPA is a good variable selection strategy when compared with four high performing variable selection methods: genetic algorithm-partial least squares (GA-PLS), Monte Carlo uninformative variable elimination by PLS (MC-UVE-PLS), competitive adaptive reweighted sampling (CARS) and iteratively retains informative variables (IRIV). The MATLAB source code of VCPA is available for academic research on the website: http://www.mathworks.com/matlabcentral/fileexchange/authors/498750.
Iterative Magnetometer Calibration
NASA Technical Reports Server (NTRS)
Sedlak, Joseph
2006-01-01
This paper presents an iterative method for three-axis magnetometer (TAM) calibration that makes use of three existing utilities recently incorporated into the attitude ground support system used at NASA's Goddard Space Flight Center. The method combines attitude-independent and attitude-dependent calibration algorithms with a new spinning spacecraft Kalman filter to solve for biases, scale factors, nonorthogonal corrections to the alignment, and the orthogonal sensor alignment. The method is particularly well-suited to spin-stabilized spacecraft, but may also be useful for three-axis stabilized missions given sufficient data to provide observability.
Application of Optimal Designs to Item Calibration
Lu, Hung-Yi
2014-01-01
In computerized adaptive testing (CAT), examinees are presented with various sets of items chosen from a precalibrated item pool. Consequently, the attrition speed of the items is extremely fast, and replenishing the item pool is essential. Therefore, item calibration has become a crucial concern in maintaining item banks. In this study, a two-parameter logistic model is used. We applied optimal designs and adaptive sequential analysis to solve this item calibration problem. The results indicated that the proposed optimal designs are cost effective and time efficient. PMID:25188318
DUALITY IN MULTIVARIATE RECEPTOR MODEL. (R831078)
Multivariate receptor models are used for source apportionment of multiple observations of compositional data of air pollutants that obey mass conservation. Singular value decomposition of the data leads to two sets of eigenvectors. One set of eigenvectors spans a space in whi...
Using Matlab in a Multivariable Calculus Course.
ERIC Educational Resources Information Center
Schlatter, Mark D.
The benefits of high-level mathematics packages such as Matlab include both a computer algebra system and the ability to provide students with concrete visual examples. This paper discusses how both capabilities of Matlab were used in a multivariate calculus class. Graphical user interfaces which display three-dimensional surfaces, contour plots,…
MBIS: multivariate Bayesian image segmentation tool.
Esteban, Oscar; Wollny, Gert; Gorthi, Subrahmanyam; Ledesma-Carbayo, María-J; Thiran, Jean-Philippe; Santos, Andrés; Bach-Cuadra, Meritxell
2014-07-01
We present MBIS (Multivariate Bayesian Image Segmentation tool), a clustering tool based on the mixture of multivariate normal distributions model. MBIS supports multichannel bias field correction based on a B-spline model. A second methodological novelty is the inclusion of graph-cuts optimization for the stationary anisotropic hidden Markov random field model. Along with MBIS, we release an evaluation framework that contains three different experiments on multi-site data. We first validate the accuracy of segmentation and the estimated bias field for each channel. MBIS outperforms a widely used segmentation tool in a cross-comparison evaluation. The second experiment demonstrates the robustness of results on atlas-free segmentation of two image sets from scan-rescan protocols on 21 healthy subjects. Multivariate segmentation is more replicable than the monospectral counterpart on T1-weighted images. Finally, we provide a third experiment to illustrate how MBIS can be used in a large-scale study of tissue volume change with increasing age in 584 healthy subjects. This last result is meaningful as multivariate segmentation performs robustly without the need for prior knowledge.
NASA Astrophysics Data System (ADS)
Mader, G. L.; Bilich, A. L.
2013-12-01
Since 1994, NGS has computed relative antenna calibrations for more than 350 antenna models used by NGS customers and geodetic networks worldwide. In a 'relative' calibration, the antenna under test is calibrated relative to a standard reference antenna, the AOA D/M_T chokering. The majority of NGS calibrations have been made publicly available at the web site www.ngs.noaa.gov/ANTCAL as well as via the NGS master calibrations file ant_info.003. In the mid-2000's, institutions in Germany began distributing 'absolute' antenna calibrations, where the antenna under test is calibrated independent of any reference antenna. These calibration methods also overcame some limitations of relative calibrations by going to lower elevation angles and capturing azimuthal variations. Soon thereafter (2008), the International GNSS Service (IGS) initiated a geodetic community movement away from relative calibrations and toward absolute calibrations as the defacto standard. The IGS now distributes a catalog of absolute calibrations taken from several institutions, distributed as the IGS master calibrations file igs08.atx. The competing methods and files have raised many questions about when it is or is not valid to process a geodetic network using a combination of relative and absolute calibrations, and if/when it is valid to combine the NGS and IGS catalogs. Therefore, in this study, we compare the NGS catalog of relative calibrations against the IGS catalog of absolute calibrations. As of the writing of this abstract, there are 77 antenna+radome combinations which are common to both the NGS relative and IGS absolute catalogs, spanning 16 years of testing (1997 to present). 50 different antenna models and 8 manufacturers are represented in the study sample. We apply the widely-accepted standard method for converting relative to absolute, then difference the calibrations. Various statistics describe the observed differences between phase center offset (PCO), phase center variation
NASA Technical Reports Server (NTRS)
Djorgovski, S. George
1994-01-01
We developed a package to process and analyze the data from the digital version of the Second Palomar Sky Survey. This system, called SKICAT, incorporates the latest in machine learning and expert systems software technology, in order to classify the detected objects objectively and uniformly, and facilitate handling of the enormous data sets from digital sky surveys and other sources. The system provides a powerful, integrated environment for the manipulation and scientific investigation of catalogs from virtually any source. It serves three principal functions: image catalog construction, catalog management, and catalog analysis. Through use of the GID3* Decision Tree artificial induction software, SKICAT automates the process of classifying objects within CCD and digitized plate images. To exploit these catalogs, the system also provides tools to merge them into a large, complete database which may be easily queried and modified when new data or better methods of calibrating or classifying become available. The most innovative feature of SKICAT is the facility it provides to experiment with and apply the latest in machine learning technology to the tasks of catalog construction and analysis. SKICAT provides a unique environment for implementing these tools for any number of future scientific purposes. Initial scientific verification and performance tests have been made using galaxy counts and measurements of galaxy clustering from small subsets of the survey data, and a search for very high redshift quasars. All of the tests were successful, and produced new and interesting scientific results. Attachments to this report give detailed accounts of the technical aspects for multivariate statistical analysis of small and moderate-size data sets, called STATPROG. The package was tested extensively on a number of real scientific applications, and has produced real, published results.
NASA Technical Reports Server (NTRS)
Djorgovski, S. G.
1994-01-01
We developed a package to process and analyze the data from the digital version of the Second Palomar Sky Survey. This system, called SKICAT, incorporates the latest in machine learning and expert systems software technology, in order to classify the detected objects objectively and uniformly, and facilitate handling of the enormous data sets from digital sky surveys and other sources. The system provides a powerful, integrated environment for the manipulation and scientific investigation of catalogs from virtually any source. It serves three principal functions: image catalog construction, catalog management, and catalog analysis. Through use of the GID3* Decision Tree artificial induction software, SKICAT automates the process of classifying objects within CCD and digitized plate images. To exploit these catalogs, the system also provides tools to merge them into a large, complex database which may be easily queried and modified when new data or better methods of calibrating or classifying become available. The most innovative feature of SKICAT is the facility it provides to experiment with and apply the latest in machine learning technology to the tasks of catalog construction and analysis. SKICAT provides a unique environment for implementing these tools for any number of future scientific purposes. Initial scientific verification and performance tests have been made using galaxy counts and measurements of galaxy clustering from small subsets of the survey data, and a search for very high redshift quasars. All of the tests were successful and produced new and interesting scientific results. Attachments to this report give detailed accounts of the technical aspects of the SKICAT system, and of some of the scientific results achieved to date. We also developed a user-friendly package for multivariate statistical analysis of small and moderate-size data sets, called STATPROG. The package was tested extensively on a number of real scientific applications and has
Yu, Lei; Lin, Guan-Yu; Chen, Bin
2013-01-01
The present paper studied spectral irradiation responsivities calibration method which can be applied to the far ultraviolet spectrometer for upper atmosphere remote sensing. It is difficult to realize the calibration for far ultraviolet spectrometer for many reasons. Standard instruments for far ultraviolet waveband calibration are few, the degree of the vacuum experiment system is required to be high, the stabilities of the experiment are hardly maintained, and the limitation of the far ultraviolet waveband makes traditional diffuser and the integrating sphere radiance calibration method difficult to be used. To solve these problems, a new absolute spectral irradiance calibration method was studied, which can be applied to the far ultraviolet calibration. We build a corresponding special vacuum experiment system to verify the calibration method. The light source system consists of a calibrated deuterium lamp, a vacuum ultraviolet monochromater and a collimating system. We used the calibrated detector to obtain the irradiance responsivities of it. The three instruments compose the calibration irradiance source. We used the "calibration irradiance source" to illuminate the spectrometer prototype and obtained the spectral irradiance responsivities. It realized the absolute spectral irradiance calibration for the far ultraviolet spectrometer utilizing the calibrated detector. The absolute uncertainty of the calibration is 7.7%. The method is significant for the ground irradiation calibration of the far ultraviolet spectrometer in upper atmosphere remote sensing.
Multivariate linkage analysis of specific language impairment (SLI).
Monaco, Anthony P
2007-09-01
Specific language impairment (SLI) is defined as an inability to develop appropriate language skills without explanatory medical conditions, low intelligence or lack of opportunity. Previously, a genome scan of 98 families affected by SLI was completed by the SLI Consortium, resulting in the identification of two quantitative trait loci (QTL) on chromosomes 16q (SLI1) and 19q (SLI2). This was followed by a replication of both regions in an additional 86 families. Both these studies applied linkage methods to one phenotypic trait at a time. However, investigations have suggested that simultaneous analysis of several traits may offer more power. The current study therefore applied a multivariate variance-components approach to the SLI Consortium dataset using additional phenotypic data. A multivariate genome scan was completed and supported the importance of the SLI1 and SLI2 loci, whilst highlighting a possible novel QTL on chromosome 10. Further investigation implied that the effect of SLI1 on non-word repetition was equally as strong on reading and spelling phenotypes. In contrast, SLI2 appeared to have influences on a selection of expressive and receptive language phenotypes in addition to non-word repetition, but did not show linkage to literacy phenotypes.
Experimental evidence for multivariate stabilizing sexual selection.
Brooks, Robert; Hunt, John; Blows, Mark W; Smith, Michael J; Bussière, Luc F; Jennions, Michael D
2005-04-01
Stabilizing selection is a fundamental concept in evolutionary biology. In the presence of a single intermediate optimum phenotype (fitness peak) on the fitness surface, stabilizing selection should cause the population to evolve toward such a peak. This prediction has seldom been tested, particularly for suites of correlated traits. The lack of tests for an evolutionary match between population means and adaptive peaks may be due, at least in part, to problems associated with empirically detecting multivariate stabilizing selection and with testing whether population means are at the peak of multivariate fitness surfaces. Here we show how canonical analysis of the fitness surface, combined with the estimation of confidence regions for stationary points on quadratic response surfaces, may be used to define multivariate stabilizing selection on a suite of traits and to establish whether natural populations reside on the multivariate peak. We manufactured artificial advertisement calls of the male cricket Teleogryllus commodus and played them back to females in laboratory phonotaxis trials to estimate the linear and nonlinear sexual selection that female phonotactic choice imposes on male call structure. Significant nonlinear selection on the major axes of the fitness surface was convex in nature and displayed an intermediate optimum, indicating multivariate stabilizing selection. The mean phenotypes of four independent samples of males, from the same population as the females used in phonotaxis trials, were within the 95% confidence region for the fitness peak. These experiments indicate that stabilizing sexual selection may play an important role in the evolution of male call properties in natural populations of T. commodus.
New approach for the radiometric calibration of spectral imaging systems.
Kohler, David; Bissett, W; Steward, Robert; Davis, Curtiss
2004-05-31
The calibration of multispectral and hyperspectral imaging systems is typically done in the laboratory using an integrating sphere, which usually produces a signal that is red rich. Using such a source to calibrate environmental monitoring systems presents some difficulties. Not only is much of the calibration data outside the range and spectral quality of data values that are expected to be captured in the field, using these measurements alone may exaggerate the optical flaws found within the system. Left unaccounted for, these flaws will become embedded in to the calibration, and thus, they will be passed on to the field data when the calibration is applied. To address these issues, we used a series of well-characterized spectral filters within our calibration. It provided us with a set us stable spectral standards to test and account for inadequacies in the spectral and radiometric integrity of the optical imager.
In situ ultrahigh vacuum residual gas analyzer 'calibration'
Malyshev, O. B.; Middleman, K. J.
2008-11-15
Knowing the residual gas spectrum is essential for many applications and research in ultrahigh vacuum (UHV). Residual gas analyzers (RGAs) are used for both qualitative and quantitative gas analyses, where the quadrupole mass analyzers are now the most popular. It was found that RGAs supplied by different manufacturers are not necessarily well calibrated for quantitative gas analysis. A procedure applied for in situ RGA 'calibration' against a calibrated UHV total pressure gauge is described in this article. It was found that special attention should be paid to H{sub 2} calibration, as RGAs are usually much more sensitive to H{sub 2} than ionization gauges. The calibration coefficients are quite reproducible in Faraday cup mode, however, using the secondary electron multiplier requires frequent checks of the calibration coefficients. The coefficients obtained for the RGA allow the use of the RGA as an accurate device for gas spectrum analysis.
Improved Regression Calibration
ERIC Educational Resources Information Center
Skrondal, Anders; Kuha, Jouni
2012-01-01
The likelihood for generalized linear models with covariate measurement error cannot in general be expressed in closed form, which makes maximum likelihood estimation taxing. A popular alternative is regression calibration which is computationally efficient at the cost of inconsistent estimation. We propose an improved regression calibration…
Thermistor mount efficiency calibration
Cable, J.W.
1980-05-01
Thermistor mount efficiency calibration is accomplished by use of the power equation concept and by complex signal-ratio measurements. A comparison of thermistor mounts at microwave frequencies is made by mixing the reference and the reflected signals to produce a frequency at which the amplitude and phase difference may be readily measured.
Pseudo Linear Gyro Calibration
NASA Technical Reports Server (NTRS)
Harman, Richard; Bar-Itzhack, Itzhack Y.
2003-01-01
Previous high fidelity onboard attitude algorithms estimated only the spacecraft attitude and gyro bias. The desire to promote spacecraft and ground autonomy and improvements in onboard computing power has spurred development of more sophisticated calibration algorithms. Namely, there is a desire to provide for sensor calibration through calibration parameter estimation onboard the spacecraft as well as autonomous estimation on the ground. Gyro calibration is a particularly challenging area of research. There are a variety of gyro devices available for any prospective mission ranging from inexpensive low fidelity gyros with potentially unstable scale factors to much more expensive extremely stable high fidelity units. Much research has been devoted to designing dedicated estimators such as particular Extended Kalman Filter (EKF) algorithms or Square Root Information Filters. This paper builds upon previous attitude, rate, and specialized gyro parameter estimation work performed with Pseudo Linear Kalman Filter (PSELIKA). The PSELIKA advantage is the use of the standard linear Kalman Filter algorithm. A PSELIKA algorithm for an orthogonal gyro set which includes estimates of attitude, rate, gyro misalignments, gyro scale factors, and gyro bias is developed and tested using simulated and flight data. The measurements PSELIKA uses include gyro and quaternion tracker data.
Satellite altimeter calibration techniques
NASA Technical Reports Server (NTRS)
Kolenkiewicz, R.; Martin, C. F.
1990-01-01
This paper examines calibration techniques which can most effectively satisfy the requirements of future satellites carrying high-accuracy radar altimeters, such as the ESA ERS-1 and the NASA/CNES Topex/Poseidon satellites scheduled for launch during the next five years. The calibration accuracies and the advantages and disadvantages of the four currently proposed calibration techniques for over-water calibration are discussed: (1) a tide gauge on a tower at-sea and a nearby laser, (2) a laser and a tide gauge on an island with an offshore satellite pass and a geoid tie between the satellite ground track and the laser, (3) a tide gauge on a tower at-sea with satellite positioning from multiple lasers and a GPS, and (4) a laser and a tide gauge on a tower at-sea. Error budgets for these techniques, developed on the basis of state-of-the-art tracking systems, were found to have one sigma height uncertainties in the 2.8 to 4.9 cm range.
NASA Astrophysics Data System (ADS)
Chen, Christine; Muzerolle, James; Dixon, William Van Dyke; Izela Diaz, Rosa; Bushouse, Howard A.
2015-01-01
The James Webb Space Telescope will launch in 2018 and carry four science instruments that will observe the sky at 0.7 - 29 micron: the Near Infrared Camera (NIRCam), the Near Infrared Imager and Slitless Spectrograph (NIRISS), the Near Infrared Spectrograph (NIRSpec), and the Mid Infrared Instrument (MIRI). The Space Telescope Science Institute (STScI) is currently building a data reduction pipeline that will provide not only basic calibrated data but also higher level science products. All of the JWST detectors will be operated in non-destructive readout mode. Therefore, the first step in the pipeline will be to calculate the slopes of indivudal non-destructive readout ramps or integrations. The next step will be to generate calibrated slope images that are represent the basic calibrated data. The final step will be to combine data taken across multiple integrations and exposure. For the direct imaging and integral field spectroscopy modes, the pipeline will produce calibrated mosaicks. For the coronagraphic modes, the pipeline will produce contrast curves and PSF subtracted images.
Satellite altimeter calibration techniques
NASA Astrophysics Data System (ADS)
Kolenkiewicz, R.; Martin, C. F.
This paper examines calibration techniques which can most effectively satisfy the requirements of future satellites carrying high-accuracy radar altimeters, such as the ESA ERS-1 and the NASA/CNES Topex/Poseidon satellites scheduled for launch during the next five years. The calibration accuracies and the advantages and disadvantages of the four currently proposed calibration techniques for over-water calibration are discussed: (1) a tide gauge on a tower at-sea and a nearby laser, (2) a laser and a tide gauge on an island with an offshore satellite pass and a geoid tie between the satellite ground track and the laser, (3) a tide gauge on a tower at-sea with satellite positioning from multiple lasers and a GPS, and (4) a laser and a tide gauge on a tower at-sea. Error budgets for these techniques, developed on the basis of state-of-the-art tracking systems, were found to have one sigma height uncertainties in the 2.8 to 4.9 cm range.
NVLAP calibration laboratory program
Cigler, J.L.
1993-12-31
This paper presents an overview of the progress up to April 1993 in the development of the Calibration Laboratories Accreditation Program within the framework of the National Voluntary Laboratory Accreditation Program (NVLAP) at the National Institute of Standards and Technology (NIST).
Computerized tomography calibrator
NASA Technical Reports Server (NTRS)
Engel, Herbert P. (Inventor)
1991-01-01
A set of interchangeable pieces comprising a computerized tomography calibrator, and a method of use thereof, permits focusing of a computerized tomographic (CT) system. The interchangeable pieces include a plurality of nestable, generally planar mother rings, adapted for the receipt of planar inserts of predetermined sizes, and of predetermined material densities. The inserts further define openings therein for receipt of plural sub-inserts. All pieces are of known sizes and densities, permitting the assembling of different configurations of materials of known sizes and combinations of densities, for calibration (i.e., focusing) of a computerized tomographic system through variation of operating variables thereof. Rather than serving as a phanton, which is intended to be representative of a particular workpiece to be tested, the set of interchangeable pieces permits simple and easy standardized calibration of a CT system. The calibrator and its related method of use further includes use of air or of particular fluids for filling various openings, as part of a selected configuration of the set of pieces.
Calibration Of Oxygen Monitors
NASA Technical Reports Server (NTRS)
Zalenski, M. A.; Rowe, E. L.; Mcphee, J. R.
1988-01-01
Readings corrected for temperature, pressure, and humidity of air. Program for handheld computer developed to ensure accuracy of oxygen monitors in National Transonic Facility, where liquid nitrogen stored. Calibration values, determined daily, based on entries of data on barometric pressure, temperature, and relative humidity. Output provided directly in millivolts.
Pleiades Absolute Calibration : Inflight Calibration Sites and Methodology
NASA Astrophysics Data System (ADS)
Lachérade, S.; Fourest, S.; Gamet, P.; Lebègue, L.
2012-07-01
In-flight calibration of space sensors once in orbit is a decisive step to be able to fulfil the mission objectives. This article presents the methods of the in-flight absolute calibration processed during the commissioning phase. Four In-flight calibration methods are used: absolute calibration, cross-calibration with reference sensors such as PARASOL or MERIS, multi-temporal monitoring and inter-bands calibration. These algorithms are based on acquisitions over natural targets such as African deserts, Antarctic sites, La Crau (Automatic calibration station) and Oceans (Calibration over molecular scattering) or also new extra-terrestrial sites such as the Moon and selected stars. After an overview of the instrument and a description of the calibration sites, it is pointed out how each method is able to address one or several aspects of the calibration. We focus on how these methods complete each other in their operational use, and how they help building a coherent set of information that addresses all aspects of in-orbit calibration. Finally, we present the perspectives that the high level of agility of PLEIADES offers for the improvement of its calibration and a better characterization of the calibration sites.
Signal inference with unknown response: Calibration-uncertainty renormalized estimator
NASA Astrophysics Data System (ADS)
Dorn, Sebastian; Enßlin, Torsten A.; Greiner, Maksim; Selig, Marco; Boehm, Vanessa
2015-01-01
The calibration of a measurement device is crucial for every scientific experiment, where a signal has to be inferred from data. We present CURE, the calibration-uncertainty renormalized estimator, to reconstruct a signal and simultaneously the instrument's calibration from the same data without knowing the exact calibration, but its covariance structure. The idea of the CURE method, developed in the framework of information field theory, is to start with an assumed calibration to successively include more and more portions of calibration uncertainty into the signal inference equations and to absorb the resulting corrections into renormalized signal (and calibration) solutions. Thereby, the signal inference and calibration problem turns into a problem of solving a single system of ordinary differential equations and can be identified with common resummation techniques used in field theories. We verify the CURE method by applying it to a simplistic toy example and compare it against existent self-calibration schemes, Wiener filter solutions, and Markov chain Monte Carlo sampling. We conclude that the method is able to keep up in accuracy with the best self-calibration methods and serves as a noniterative alternative to them.
Fast Field Calibration of MIMU Based on the Powell Algorithm
Ma, Lin; Chen, Wanwan; Li, Bin; You, Zheng; Chen, Zhigang
2014-01-01
The calibration of micro inertial measurement units is important in ensuring the precision of navigation systems, which are equipped with microelectromechanical system sensors that suffer from various errors. However, traditional calibration methods cannot meet the demand for fast field calibration. This paper presents a fast field calibration method based on the Powell algorithm. As the key points of this calibration, the norm of the accelerometer measurement vector is equal to the gravity magnitude, and the norm of the gyro measurement vector is equal to the rotational velocity inputs. To resolve the error parameters by judging the convergence of the nonlinear equations, the Powell algorithm is applied by establishing a mathematical error model of the novel calibration. All parameters can then be obtained in this manner. A comparison of the proposed method with the traditional calibration method through navigation tests shows the classic performance of the proposed calibration method. The proposed calibration method also saves more time compared with the traditional calibration method. PMID:25177801
Signal inference with unknown response: calibration-uncertainty renormalized estimator.
Dorn, Sebastian; Enßlin, Torsten A; Greiner, Maksim; Selig, Marco; Boehm, Vanessa
2015-01-01
The calibration of a measurement device is crucial for every scientific experiment, where a signal has to be inferred from data. We present CURE, the calibration-uncertainty renormalized estimator, to reconstruct a signal and simultaneously the instrument's calibration from the same data without knowing the exact calibration, but its covariance structure. The idea of the CURE method, developed in the framework of information field theory, is to start with an assumed calibration to successively include more and more portions of calibration uncertainty into the signal inference equations and to absorb the resulting corrections into renormalized signal (and calibration) solutions. Thereby, the signal inference and calibration problem turns into a problem of solving a single system of ordinary differential equations and can be identified with common resummation techniques used in field theories. We verify the CURE method by applying it to a simplistic toy example and compare it against existent self-calibration schemes, Wiener filter solutions, and Markov chain Monte Carlo sampling. We conclude that the method is able to keep up in accuracy with the best self-calibration methods and serves as a noniterative alternative to them.
A Comparison of Two Balance Calibration Model Building Methods
NASA Technical Reports Server (NTRS)
DeLoach, Richard; Ulbrich, Norbert
2007-01-01
Simulated strain-gage balance calibration data is used to compare the accuracy of two balance calibration model building methods for different noise environments and calibration experiment designs. The first building method obtains a math model for the analysis of balance calibration data after applying a candidate math model search algorithm to the calibration data set. The second building method uses stepwise regression analysis in order to construct a model for the analysis. Four balance calibration data sets were simulated in order to compare the accuracy of the two math model building methods. The simulated data sets were prepared using the traditional One Factor At a Time (OFAT) technique and the Modern Design of Experiments (MDOE) approach. Random and systematic errors were introduced in the simulated calibration data sets in order to study their influence on the math model building methods. Residuals of the fitted calibration responses and other statistical metrics were compared in order to evaluate the calibration models developed with different combinations of noise environment, experiment design, and model building method. Overall, predicted math models and residuals of both math model building methods show very good agreement. Significant differences in model quality were attributable to noise environment, experiment design, and their interaction. Generally, the addition of systematic error significantly degraded the quality of calibration models developed from OFAT data by either method, but MDOE experiment designs were more robust with respect to the introduction of a systematic component of the unexplained variance.
Simplified Vicarious Radiometric Calibration
NASA Technical Reports Server (NTRS)
Stanley, Thomas; Ryan, Robert; Holekamp, Kara; Pagnutti, Mary
2010-01-01
A measurement-based radiance estimation approach for vicarious radiometric calibration of spaceborne multispectral remote sensing systems has been developed. This simplified process eliminates the use of radiative transfer codes and reduces the number of atmospheric assumptions required to perform sensor calibrations. Like prior approaches, the simplified method involves the collection of ground truth data coincident with the overpass of the remote sensing system being calibrated, but this approach differs from the prior techniques in both the nature of the data collected and the manner in which the data are processed. In traditional vicarious radiometric calibration, ground truth data are gathered using ground-viewing spectroradiometers and one or more sun photometer( s), among other instruments, located at a ground target area. The measured data from the ground-based instruments are used in radiative transfer models to estimate the top-of-atmosphere (TOA) target radiances at the time of satellite overpass. These TOA radiances are compared with the satellite sensor readings to radiometrically calibrate the sensor. Traditional vicarious radiometric calibration methods require that an atmospheric model be defined such that the ground-based observations of solar transmission and diffuse-to-global ratios are in close agreement with the radiative transfer code estimation of these parameters. This process is labor-intensive and complex, and can be prone to errors. The errors can be compounded because of approximations in the model and inaccurate assumptions about the radiative coupling between the atmosphere and the terrain. The errors can increase the uncertainty of the TOA radiance estimates used to perform the radiometric calibration. In comparison, the simplified approach does not use atmospheric radiative transfer models and involves fewer assumptions concerning the radiative transfer properties of the atmosphere. This new technique uses two neighboring uniform
Inertial Sensor Error Reduction through Calibration and Sensor Fusion.
Lambrecht, Stefan; Nogueira, Samuel L; Bortole, Magdo; Siqueira, Adriano A G; Terra, Marco H; Rocon, Eduardo; Pons, José L
2016-01-01
This paper presents the comparison between cooperative and local Kalman Filters (KF) for estimating the absolute segment angle, under two calibration conditions. A simplified calibration, that can be replicated in most laboratories; and a complex calibration, similar to that applied by commercial vendors. The cooperative filters use information from either all inertial sensors attached to the body, Matricial KF; or use information from the inertial sensors and the potentiometers of an exoskeleton, Markovian KF. A one minute walking trial of a subject walking with a 6-DoF exoskeleton was used to assess the absolute segment angle of the trunk, thigh, shank, and foot. The results indicate that regardless of the segment and filter applied, the more complex calibration always results in a significantly better performance compared to the simplified calibration. The interaction between filter and calibration suggests that when the quality of the calibration is unknown the Markovian KF is recommended. Applying the complex calibration, the Matricial and Markovian KF perform similarly, with average RMSE below 1.22 degrees. Cooperative KFs perform better or at least equally good as Local KF, we therefore recommend to use cooperative KFs instead of local KFs for control or analysis of walking. PMID:26901198
Inertial Sensor Error Reduction through Calibration and Sensor Fusion
Lambrecht, Stefan; Nogueira, Samuel L.; Bortole, Magdo; Siqueira, Adriano A. G.; Terra, Marco H.; Rocon, Eduardo; Pons, José L.
2016-01-01
This paper presents the comparison between cooperative and local Kalman Filters (KF) for estimating the absolute segment angle, under two calibration conditions. A simplified calibration, that can be replicated in most laboratories; and a complex calibration, similar to that applied by commercial vendors. The cooperative filters use information from either all inertial sensors attached to the body, Matricial KF; or use information from the inertial sensors and the potentiometers of an exoskeleton, Markovian KF. A one minute walking trial of a subject walking with a 6-DoF exoskeleton was used to assess the absolute segment angle of the trunk, thigh, shank, and foot. The results indicate that regardless of the segment and filter applied, the more complex calibration always results in a significantly better performance compared to the simplified calibration. The interaction between filter and calibration suggests that when the quality of the calibration is unknown the Markovian KF is recommended. Applying the complex calibration, the Matricial and Markovian KF perform similarly, with average RMSE below 1.22 degrees. Cooperative KFs perform better or at least equally good as Local KF, we therefore recommend to use cooperative KFs instead of local KFs for control or analysis of walking. PMID:26901198
Martins, Manoel L; Rizzetti, Tiele M; Kemmerich, Magali; Saibt, Nathália; Prestes, Osmar D; Adaime, Martha B; Zanella, Renato
2016-08-19
Among calibration approaches for organic compounds determination in complex matrices, external calibration, based in solutions of the analytes in solvent or in blank matrix extracts, is the most applied approach. Although matrix matched calibration (MMC) can compensates the matrix effects, it does not compensate low recovery results. In this way, standard addition (SA) and procedural standard calibration (PSC) are usual alternatives, despite they generate more sample and/or matrix blanks consumption need, extra sample preparations and higher time and costs. Thus, the goal of this work was to establish a fast and efficient calibration approach, the diluted standard addition calibration (DSAC), based on successive dilutions of a spiked blank sample. In order to evaluate the proposed approach, solvent calibration (SC), MMC, PSC and DSAC were applied to evaluate recovery results of grape blank samples spiked with 66 pesticides. Samples were extracted with the acetate QuEChERS method and the compounds determined by ultra-high performance liquid chromatography tandem mass spectrometry (UHPLC-MS/MS). Results indicated that low recovery results for some pesticides were compensated by both PSC and DSAC approaches. Considering recoveries from 70 to 120% with RSD <20% as adequate, DSAC presented 83%, 98% and 100% of compounds meeting this criteria for the spiking levels 10, 50 and 100μgkg(-1), respectively. PSC presented same results (83%, 98% and 100%), better than those obtained by MMC (79%, 95% and 97%) and by SC (62%, 70% and 79%). The DSAC strategy showed to be suitable for calibration of multiresidue determination methods, producing adequate results in terms of trueness and is easier and faster to perform than other approaches. PMID:27432791
Martins, Manoel L; Rizzetti, Tiele M; Kemmerich, Magali; Saibt, Nathália; Prestes, Osmar D; Adaime, Martha B; Zanella, Renato
2016-08-19
Among calibration approaches for organic compounds determination in complex matrices, external calibration, based in solutions of the analytes in solvent or in blank matrix extracts, is the most applied approach. Although matrix matched calibration (MMC) can compensates the matrix effects, it does not compensate low recovery results. In this way, standard addition (SA) and procedural standard calibration (PSC) are usual alternatives, despite they generate more sample and/or matrix blanks consumption need, extra sample preparations and higher time and costs. Thus, the goal of this work was to establish a fast and efficient calibration approach, the diluted standard addition calibration (DSAC), based on successive dilutions of a spiked blank sample. In order to evaluate the proposed approach, solvent calibration (SC), MMC, PSC and DSAC were applied to evaluate recovery results of grape blank samples spiked with 66 pesticides. Samples were extracted with the acetate QuEChERS method and the compounds determined by ultra-high performance liquid chromatography tandem mass spectrometry (UHPLC-MS/MS). Results indicated that low recovery results for some pesticides were compensated by both PSC and DSAC approaches. Considering recoveries from 70 to 120% with RSD <20% as adequate, DSAC presented 83%, 98% and 100% of compounds meeting this criteria for the spiking levels 10, 50 and 100μgkg(-1), respectively. PSC presented same results (83%, 98% and 100%), better than those obtained by MMC (79%, 95% and 97%) and by SC (62%, 70% and 79%). The DSAC strategy showed to be suitable for calibration of multiresidue determination methods, producing adequate results in terms of trueness and is easier and faster to perform than other approaches.
John F. Schabron; Joseph F. Rovani; Susan S. Sorini
2007-03-31
The Clean Air Mercury Rule (CAMR) which was published in the Federal Register on May 18, 2005, requires that calibration of mercury continuous emissions monitors (CEMs) be performed with NIST-traceable standards. Western Research Institute (WRI) is working closely with the Electric Power Research Institute (EPRI), the National Institute of Standards and Technology (NIST), and the Environmental Protection Agency (EPA) to facilitate the development of the experimental criteria for a NIST traceability protocol for dynamic elemental mercury vapor generators. The traceability protocol will be written by EPA. Traceability will be based on the actual analysis of the output of each calibration unit at several concentration levels ranging from about 2-40 ug/m{sup 3}, and this analysis will be directly traceable to analyses by NIST using isotope dilution inductively coupled plasma/mass spectrometry (ID ICP/MS) through a chain of analyses linking the calibration unit in the power plant to the NIST ID ICP/MS. Prior to this project, NIST did not provide a recommended mercury vapor pressure equation or list mercury vapor pressure in its vapor pressure database. The NIST Physical and Chemical Properties Division in Boulder, Colorado was subcontracted under this project to study the issue in detail and to recommend a mercury vapor pressure equation that the vendors of mercury vapor pressure calibration units can use to calculate the elemental mercury vapor concentration in an equilibrium chamber at a particular temperature. As part of this study, a preliminary evaluation of calibration units from five vendors was made. The work was performed by NIST in Gaithersburg, MD and Joe Rovani from WRI who traveled to NIST as a Visiting Scientist.
Uncertainty and calibration analysis
Coutts, D.A.
1991-03-01
All measurements contain some deviation from the true value which is being measured. In the common vernacular this deviation between the true value and the measured value is called an inaccuracy, an error, or a mistake. Since all measurements contain errors, it is necessary to accept that there is a limit to how accurate a measurement can be. The undertainty interval combined with the confidence level, is one measure of the accuracy for a measurement or value. Without a statement of uncertainty (or a similar parameter) it is not possible to evaluate if the accuracy of the measurement, or data, is appropriate. The preparation of technical reports, calibration evaluations, and design calculations should consider the accuracy of measurements and data being used. There are many methods to accomplish this. This report provides a consistent method for the handling of measurement tolerances, calibration evaluations and uncertainty calculations. The SRS Quality Assurance (QA) Program requires that the uncertainty of technical data and instrument calibrations be acknowledged and estimated. The QA Program makes some specific technical requirements related to the subject but does not provide a philosophy or method on how uncertainty should be estimated. This report was prepared to provide a technical basis to support the calculation of uncertainties and the calibration of measurement and test equipment for any activity within the Experimental Thermal-Hydraulics (ETH) Group. The methods proposed in this report provide a graded approach for estimating the uncertainty of measurements, data, and calibrations. The method is based on the national consensus standard, ANSI/ASME PTC 19.1.
Variable Acceleration Force Calibration System (VACS)
NASA Technical Reports Server (NTRS)
Rhew, Ray D.; Parker, Peter A.; Johnson, Thomas H.; Landman, Drew
2014-01-01
Conventionally, force balances have been calibrated manually, using a complex system of free hanging precision weights, bell cranks, and/or other mechanical components. Conventional methods may provide sufficient accuracy in some instances, but are often quite complex and labor-intensive, requiring three to four man-weeks to complete each full calibration. To ensure accuracy, gravity-based loading is typically utilized. However, this often causes difficulty when applying loads in three simultaneous, orthogonal axes. A complex system of levers, cranks, and cables must be used, introducing increased sources of systematic error, and significantly increasing the time and labor intensity required to complete the calibration. One aspect of the VACS is a method wherein the mass utilized for calibration is held constant, and the acceleration is changed to thereby generate relatively large forces with relatively small test masses. Multiple forces can be applied to a force balance without changing the test mass, and dynamic forces can be applied by rotation or oscillating acceleration. If rotational motion is utilized, a mass is rigidly attached to a force balance, and the mass is exposed to a rotational field. A large force can be applied by utilizing a large rotational velocity. A centrifuge or rotating table can be used to create the rotational field, and fixtures can be utilized to position the force balance. The acceleration may also be linear. For example, a table that moves linearly and accelerates in a sinusoidal manner may also be utilized. The test mass does not have to move in a path that is parallel to the ground, and no re-leveling is therefore required. Balance deflection corrections may be applied passively by monitoring the orientation of the force balance with a three-axis accelerometer package. Deflections are measured during each test run, and adjustments with respect to the true applied load can be made during the post-processing stage. This paper will
Implementation of a multivariate regional index-flood model
NASA Astrophysics Data System (ADS)
Requena, Ana Isabel; Chebana, Fateh; Mediero, Luis; Garrote, Luis
2014-05-01
A multivariate flood frequency approach is required to obtain appropriate estimates of the design flood associated to a given return period, as the nature of floods is multivariate. A regional frequency analysis is usually conducted to procure estimates or reduce the corresponding uncertainty when no information is available at ungauged sites or a short record is observed at gauged sites. In the present study a multivariate regional methodology based on the index-flood model is presented, seeking to enrich and complete the existing methods by i) considering more general two-parameter copulas for simulating synthetic homogeneous regions to test homogeneity; ii) using the latest definitions of bivariate return periods for quantile estimation; and iii) applying recent procedures for the selection of a subset of bivariate design events from the wider quantile curves. A complete description of the selection processes of both marginal distributions and copula is also included. The proposed methodology provides an entire procedure focused on its practical application. The proposed methodology was applied to a case study located in the Ebro basin in the north of Spain. Series of annual maximum flow peaks (Q) and their associated hydrograph volumes (V ) were selected as flood variables. The initial region was divided into two homogeneous sub-regions by a cluster analysis and a multivariate homogeneity test. The Gumbel and Generalised Extreme Value distributions were selected as marginal distributions to fit the two flood variables. The BB1 copula was found to be the best regional copula for characterising the dependence relation between variables. The OR bivariate joint return period related to the (non-exceedance) probability of the event{Q ≤ qδ§ V ≤ v}was considered for quantile estimation. The index flood was based on the mean of the flood variables. Multiple linear regressions were used to estimate the index flood at ungauged sites. Basin concentration time
Zhou, Fei; Peng, Jiyu; Zhao, Yajing; Huang, Weisu; Jiang, Yirong; Li, Maiquan; Wu, Xiaodan; Lu, Baiyi
2017-02-15
This study was aimed to classify the varieties and predict the antioxidant activity of Osmanthus fragrans flowers by UPLC-PDA/QTOF-MS and multivariable analysis. The PLS-DA model successfully classified the four varieties based on both the 21 identified compounds and the effective compounds. For the antioxidant activity prediction, PLS performed well to predict the antioxidant activity of O. fragrans flowers. Furthermore, acteoside, suspensaside A, ligustroside, forsythoside A, phillygenin and caffeic acid were selected as effective compounds by UVE-SPA for prediction. On the basis of effective compounds, PLS, MLR and PCR were applied to establish the calibration models. The UVE-SPA-MLR model was the optimal method to predict the antioxidant activity values with Rp of 0.9200, 0.9010 and 0.8905 for DPPH, ABTS and FRAP assays, respectively. The results revealed that the UPLC-PDA/QTOF-MS combined with chemometrics could be a new method to classify the varieties and predict the antioxidant activity of O. fragrans flowers. PMID:27664663
John Schabron; Eric Kalberer; Joseph Rovani; Mark Sanderson; Ryan Boysen; William Schuster
2009-03-11
U.S. Environmental Protection Agency (EPA) Performance Specification 12 in the Clean Air Mercury Rule (CAMR) states that a mercury CEM must be calibrated with National Institute for Standards and Technology (NIST)-traceable standards. In early 2009, a NIST traceable standard for elemental mercury CEM calibration still does not exist. Despite the vacature of CAMR by a Federal appeals court in early 2008, a NIST traceable standard is still needed for whatever regulation is implemented in the future. Thermo Fisher is a major vendor providing complete integrated mercury continuous emissions monitoring (CEM) systems to the industry. WRI is participating with EPA, EPRI, NIST, and Thermo Fisher towards the development of the criteria that will be used in the traceability protocols to be issued by EPA. An initial draft of an elemental mercury calibration traceability protocol was distributed for comment to the participating research groups and vendors on a limited basis in early May 2007. In August 2007, EPA issued an interim traceability protocol for elemental mercury calibrators. Various working drafts of the new interim traceability protocols were distributed in late 2008 and early 2009 to participants in the Mercury Standards Working Committee project. The protocols include sections on qualification and certification. The qualification section describes in general terms tests that must be conducted by the calibrator vendors to demonstrate that their calibration equipment meets the minimum requirements to be established by EPA for use in CAMR monitoring. Variables to be examined include linearity, ambient temperature, back pressure, ambient pressure, line voltage, and effects of shipping. None of the procedures were described in detail in the draft interim documents; however they describe what EPA would like to eventually develop. WRI is providing the data and results to EPA for use in developing revised experimental procedures and realistic acceptance criteria based on
Calibration Matters: Advances in Strapdown Airborne Gravimetry
NASA Astrophysics Data System (ADS)
Becker, D.
2015-12-01
Using a commercial navigation-grade strapdown inertial measurement unit (IMU) for airborne gravimetry can be advantageous in terms of cost, handling, and space consumption compared to the classical stable-platform spring gravimeters. Up to now, however, large sensor errors made it impossible to reach the mGal-level using such type IMUs as they are not designed or optimized for this kind of application. Apart from a proper error-modeling in the filtering process, specific calibration methods that are tailored to the application of aerogravity may help to bridge this gap and to improve their performance. Based on simulations, a quantitative analysis is presented on how much IMU sensor errors, as biases, scale factors, cross couplings, and thermal drifts distort the determination of gravity and the deflection of the vertical (DOV). Several lab and in-field calibration methods are briefly discussed, and calibration results are shown for an iMAR RQH unit. In particular, a thermal lab calibration of its QA2000 accelerometers greatly improved the long-term drift behavior. Latest results from four recent airborne gravimetry campaigns confirm the effectiveness of the calibrations applied, with cross-over accuracies reaching 1.0 mGal (0.6 mGal after cross-over adjustment) and DOV accuracies reaching 1.1 arc seconds after cross-over adjustment.
Multi-cameras calibration from spherical targets
NASA Astrophysics Data System (ADS)
Zhao, Chengyun; Zhang, Jin; Deng, Huaxia; Yu, Liandong
2016-01-01
Multi-cameras calibration using spheres is more convenient than using planar target because it has an obvious advantage in imaging in different angles. The internal and external parameters of multi-cameras can be obtained through once calibrat ion. In this paper, a novel mult i-cameras calibration method is proposed based on multiple spheres. A calibration target with fixed multiple balls is applied in this method and the geometric propert ies of the sphere projection model will be analyzed. During the experiment, the spherical target is placed in the public field of mult i-cameras system. Then the corresponding data can be stored when the cameras are triggered by signal generator. The contours of the balls are detected by Hough transform and the center coordinates are determined with sub-pixel accuracy. Then the center coordinates are used as input information for calibrat ion and the internal as well as external parameters can be calculated by Zhang's theory. When mult iple cameras are calibrated simultaneously from different angles using multiple spheres, the center coordinates of each sphere can be determined accurately even the target images taken out of focus. So this method can improve the calibration precision. Meanwhile, Zhang's plane template method is added to the contrast calibrat ion experiment. And the error sources of the experiment are analyzed. The results indicate that the method proposed in this paper is suitable for mult i-cameras calibrat ion.
PERSONALISED BODY COUNTER CALIBRATION USING ANTHROPOMETRIC PARAMETERS.
Pölz, S; Breustedt, B
2016-09-01
Current calibration methods for body counting offer personalisation for lung counting predominantly with respect to ratios of body mass and height. Chest wall thickness is used as an intermediate parameter. This work revises and extends these methods using a series of computational phantoms derived from medical imaging data in combination with radiation transport simulation and statistical analysis. As an example, the method is applied to the calibration of the In Vivo Measurement Laboratory (IVM) at Karlsruhe Institute of Technology (KIT) comprising four high-purity germanium detectors in two partial body measurement set-ups. The Monte Carlo N-Particle (MCNP) transport code and the Extended Cardiac-Torso (XCAT) phantom series have been used. Analysis of the computed sample data consisting of 18 anthropometric parameters and calibration factors generated from 26 photon sources for each of the 30 phantoms reveals the significance of those parameters required for producing an accurate estimate of the calibration function. Body circumferences related to the source location perform best in the example, while parameters related to body mass show comparable but lower performances, and those related to body height and other lengths exhibit low performances. In conclusion, it is possible to give more accurate estimates of calibration factors using this proposed approach including estimates of uncertainties related to interindividual anatomical variation of the target population. PMID:26396263
The following SAS macros can be used to create a multivariate usual intake distribution for multiple dietary components that are consumed nearly every day or episodically. A SAS macro for performing balanced repeated replication (BRR) variance estimation is also included.
NASA Astrophysics Data System (ADS)
Iwata, Tetsuo; Yoshioka, Shuji; Nakamura, Shota; Mizutani, Yasuhiro; Yasui, Takeshi
2013-10-01
We applied a multivariate analysis method to time-domain (TD) data obtained in terahertz (THz) reflectometry for predicting the thickness of a single-layered paint film deposited on a metal substrate. For prediction purposes, we built a calibration model from TD-THz waveforms obtained from films of different thicknesses but the same kind. Because each TD-THz wave is approximate by the superposition of two echo pulses (one is reflected from the air-film boundary and the other from the film-substrate boundary), a difference in thickness gives a relative peak shift in time in the two echo pulses. Then, we predicted unknown thicknesses of the paint films by using the calibration model. Although any multivariate analysis method can be used, we proposed employing a modified partial-least-squares-1 (PLS1) method because it gives a superior calibration model in principle. The prediction procedure worked well for a moderately thin film (typically, several to several tens of micrometers) rather than a thicker one.
Nonlinear aerodynamic modeling using multivariate orthogonal functions
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1993-01-01
A technique was developed for global modeling of nonlinear aerodynamic coefficients using multivariate orthogonal functions based on the data. Each orthogonal function retained in the model was decomposed into an expansion of ordinary polynomials in the independent variables, so that the final model could be interpreted as selectively retained terms from a multivariable power series expansion. A predicted squared-error metric was used to determine the orthogonal functions to be retained in the model; analytical derivatives were easily computed. The approach was demonstrated on the Z-body axis aerodynamic force coefficient (Cz) wind tunnel data for an F-18 research vehicle which came from a tabular wind tunnel and covered the entire subsonic flight envelope. For a realistic case, the analytical model predicted experimental values of Cz very well. The modeling technique is shown to be capable of generating a compact, global analytical representation of nonlinear aerodynamics. The polynomial model has good predictive capability, global validity, and analytical differentiability.
Multivariate Approaches to Classification in Extragalactic Astronomy
NASA Astrophysics Data System (ADS)
Fraix-Burnet, Didier; Thuillard, Marc; Chattopadhyay, Asis Kumar
2015-08-01
Clustering objects into synthetic groups is a natural activity of any science. Astrophysics is not an exception and is now facing a deluge of data. For galaxies, the one-century old Hubble classification and the Hubble tuning fork are still largely in use, together with numerous mono- or bivariate classifications most often made by eye. However, a classification must be driven by the data, and sophisticated multivariate statistical tools are used more and more often. In this paper we review these different approaches in order to situate them in the general context of unsupervised and supervised learning. We insist on the astrophysical outcomes of these studies to show that multivariate analyses provide an obvious path toward a renewal of our classification of galaxies and are invaluable tools to investigate the physics and evolution of galaxies.
Multivariate temporal dictionary learning for EEG.
Barthélemy, Q; Gouy-Pailler, C; Isaac, Y; Souloumiac, A; Larue, A; Mars, J I
2013-04-30
This article addresses the issue of representing electroencephalographic (EEG) signals in an efficient way. While classical approaches use a fixed Gabor dictionary to analyze EEG signals, this article proposes a data-driven method to obtain an adapted dictionary. To reach an efficient dictionary learning, appropriate spatial and temporal modeling is required. Inter-channels links are taken into account in the spatial multivariate model, and shift-invariance is used for the temporal model. Multivariate learned kernels are informative (a few atoms code plentiful energy) and interpretable (the atoms can have a physiological meaning). Using real EEG data, the proposed method is shown to outperform the classical multichannel matching pursuit used with a Gabor dictionary, as measured by the representative power of the learned dictionary and its spatial flexibility. Moreover, dictionary learning can capture interpretable patterns: this ability is illustrated on real data, learning a P300 evoked potential.
The Calibration Reference Data System
NASA Astrophysics Data System (ADS)
Greenfield, P.; Miller, T.
2016-07-01
We describe a software architecture and implementation for using rules to determine which calibration files are appropriate for calibrating a given observation. This new system, the Calibration Reference Data System (CRDS), replaces what had been previously used for the Hubble Space Telescope (HST) calibration pipelines, the Calibration Database System (CDBS). CRDS will be used for the James Webb Space Telescope (JWST) calibration pipelines, and is currently being used for HST calibration pipelines. CRDS can be easily generalized for use in similar applications that need a rules-based system for selecting the appropriate item for a given dataset; we give some examples of such generalizations that will likely be used for JWST. The core functionality of the Calibration Reference Data System is available under an Open Source license. CRDS is briefly contrasted with a sampling of other similar systems used at other observatories.
Application of two tests of multivariate discordancy to fisheries data sets
Stapanian, M.A.; Kocovsky, P.M.; Garner, F.C.
2008-01-01
The generalized (Mahalanobis) distance and multivariate kurtosis are two powerful tests of multivariate discordancies (outliers). Unlike the generalized distance test, the multivariate kurtosis test has not been applied as a test of discordancy to fisheries data heretofore. We applied both tests, along with published algorithms for identifying suspected causal variable(s) of discordant observations, to two fisheries data sets from Lake Erie: total length, mass, and age from 1,234 burbot, Lota lota; and 22 combinations of unique subsets of 10 morphometrics taken from 119 yellow perch, Perca flavescens. For the burbot data set, the generalized distance test identified six discordant observations and the multivariate kurtosis test identified 24 discordant observations. In contrast with the multivariate tests, the univariate generalized distance test identified no discordancies when applied separately to each variable. Removing discordancies had a substantial effect on length-versus-mass regression equations. For 500-mm burbot, the percent difference in estimated mass after removing discordancies in our study was greater than the percent difference in masses estimated for burbot of the same length in lakes that differed substantially in productivity. The number of discordant yellow perch detected ranged from 0 to 2 with the multivariate generalized distance test and from 6 to 11 with the multivariate kurtosis test. With the kurtosis test, 108 yellow perch (90.7%) were identified as discordant in zero to two combinations, and five (4.2%) were identified as discordant in either all or 21 of the 22 combinations. The relationship among the variables included in each combination determined which variables were identified as causal. The generalized distance test identified between zero and six discordancies when applied separately to each variable. Removing the discordancies found in at least one-half of the combinations (k=5) had a marked effect on a principal components
Sequencing human ribs into anatomical order by quantitative multivariate methods.
Cirillo, John; Henneberg, Maciej
2012-06-01
Little research has focussed on methods to anatomically sequence ribs. Correct anatomical sequencing of ribs assists in determining the location and distribution of regional trauma, age estimation, number of puncture wounds, number of individuals, and personal identification. The aim of the current study is to develop a method for placing fragmented and incomplete rib sets into correct anatomical position. Ribs 2-10 were used from eleven cadavers of an Australian population. Seven variables were measured from anatomical locations on the rib. General descriptive statistics were calculated for each variable along with an analysis of variance (ANOVA) and ANOVA with Bonferroni statistics. Considerable overlap was observed between ribs for univariate methods. Bivariate and multivariate methods were then applied. Results of the ANOVA with post hoc Bonferroni statistics show that ratios of various dimensions of a single rib could be used to sequence it within adjacent ribs. Using multiple regression formulae, the most accurate estimation of the anatomical rib number occurs when the entire rib is found in isolation. This however, is not always possible. Even when only the head and neck of the rib are preserved, a modified multivariate regression formula assigned 91.95% of ribs into correct anatomical position or as an adjacent rib. Using multivariate methods it is possible to sequence a single human rib with a high level of accuracy and they are superior to univariate methods. Left and right ribs were found to be highly symmetrical. Some rib dimensions were greater in males than in females, but overall the level of sexual dimorphism was low.
Multi-application controls: Robust nonlinear multivariable aerospace controls applications
NASA Technical Reports Server (NTRS)
Enns, Dale F.; Bugajski, Daniel J.; Carter, John; Antoniewicz, Bob
1994-01-01
This viewgraph presentation describes the general methodology used to apply Honywell's Multi-Application Control (MACH) and the specific application to the F-18 High Angle-of-Attack Research Vehicle (HARV) including piloted simulation handling qualities evaluation. The general steps include insertion of modeling data for geometry and mass properties, aerodynamics, propulsion data and assumptions, requirements and specifications, e.g. definition of control variables, handling qualities, stability margins and statements for bandwidth, control power, priorities, position and rate limits. The specific steps include choice of independent variables for least squares fits to aerodynamic and propulsion data, modifications to the management of the controls with regard to integrator windup and actuation limiting and priorities, e.g. pitch priority over roll, and command limiting to prevent departures and/or undesirable inertial coupling or inability to recover to a stable trim condition. The HARV control problem is characterized by significant nonlinearities and multivariable interactions in the low speed, high angle-of-attack, high angular rate flight regime. Systematic approaches to the control of vehicle motions modeled with coupled nonlinear equations of motion have been developed. This paper will discuss the dynamic inversion approach which explicity accounts for nonlinearities in the control design. Multiple control effectors (including aerodynamic control surfaces and thrust vectoring control) and sensors are used to control the motions of the vehicles in several degrees-of-freedom. Several maneuvers will be used to illustrate performance of MACH in the high angle-of-attack flight regime. Analytical methods for assessing the robust performance of the multivariable control system in the presence of math modeling uncertainty, disturbances, and commands have reached a high level of maturity. The structured singular value (mu) frequency response methodology is presented
Quality Reporting of Multivariable Regression Models in Observational Studies
Real, Jordi; Forné, Carles; Roso-Llorach, Albert; Martínez-Sánchez, Jose M.
2016-01-01
Abstract Controlling for confounders is a crucial step in analytical observational studies, and multivariable models are widely used as statistical adjustment techniques. However, the validation of the assumptions of the multivariable regression models (MRMs) should be made clear in scientific reporting. The objective of this study is to review the quality of statistical reporting of the most commonly used MRMs (logistic, linear, and Cox regression) that were applied in analytical observational studies published between 2003 and 2014 by journals indexed in MEDLINE. Review of a representative sample of articles indexed in MEDLINE (n = 428) with observational design and use of MRMs (logistic, linear, and Cox regression). We assessed the quality of reporting about: model assumptions and goodness-of-fit, interactions, sensitivity analysis, crude and adjusted effect estimate, and specification of more than 1 adjusted model. The tests of underlying assumptions or goodness-of-fit of the MRMs used were described in 26.2% (95% CI: 22.0–30.3) of the articles and 18.5% (95% CI: 14.8–22.1) reported the interaction analysis. Reporting of all items assessed was higher in articles published in journals with a higher impact factor. A low percentage of articles indexed in MEDLINE that used multivariable techniques provided information demonstrating rigorous application of the model selected as an adjustment method. Given the importance of these methods to the final results and conclusions of observational studies, greater rigor is required in reporting the use of MRMs in the scientific literature. PMID:27196467
The evolution of multivariate maternal effects.
Kuijper, Bram; Johnstone, Rufus A; Townley, Stuart
2014-04-01
There is a growing interest in predicting the social and ecological contexts that favor the evolution of maternal effects. Most predictions focus, however, on maternal effects that affect only a single character, whereas the evolution of maternal effects is poorly understood in the presence of suites of interacting traits. To overcome this, we simulate the evolution of multivariate maternal effects (captured by the matrix M) in a fluctuating environment. We find that the rate of environmental fluctuations has a substantial effect on the properties of M: in slowly changing environments, offspring are selected to have a multivariate phenotype roughly similar to the maternal phenotype, so that M is characterized by positive dominant eigenvalues; by contrast, rapidly changing environments favor Ms with dominant eigenvalues that are negative, as offspring favor a phenotype which substantially differs from the maternal phenotype. Moreover, when fluctuating selection on one maternal character is temporally delayed relative to selection on other traits, we find a striking pattern of cross-trait maternal effects in which maternal characters influence not only the same character in offspring, but also other offspring characters. Additionally, when selection on one character contains more stochastic noise relative to selection on other traits, large cross-trait maternal effects evolve from those maternal traits that experience the smallest amounts of noise. The presence of these cross-trait maternal effects shows that individual maternal effects cannot be studied in isolation, and that their study in a multivariate context may provide important insights about the nature of past selection. Our results call for more studies that measure multivariate maternal effects in wild populations.
Multivariate linear recurrences and power series division
Hauser, Herwig; Koutschan, Christoph
2012-01-01
Bousquet-Mélou and Petkovšek investigated the generating functions of multivariate linear recurrences with constant coefficients. We will give a reinterpretation of their results by means of division theorems for formal power series, which clarifies the structural background and provides short, conceptual proofs. In addition, extending the division to the context of differential operators, the case of recurrences with polynomial coefficients can be treated in an analogous way. PMID:23482936
A force calibration standard for magnetic tweezers
NASA Astrophysics Data System (ADS)
Yu, Zhongbo; Dulin, David; Cnossen, Jelmer; Köber, Mariana; van Oene, Maarten M.; Ordu, Orkide; Berghuis, Bojk A.; Hensgens, Toivo; Lipfert, Jan; Dekker, Nynke H.
2014-12-01
To study the behavior of biological macromolecules and enzymatic reactions under force, advances in single-molecule force spectroscopy have proven instrumental. Magnetic tweezers form one of the most powerful of these techniques, due to their overall simplicity, non-invasive character, potential for high throughput measurements, and large force range. Drawbacks of magnetic tweezers, however, are that accurate determination of the applied forces can be challenging for short biomolecules at high forces and very time-consuming for long tethers at low forces below ˜1 piconewton. Here, we address these drawbacks by presenting a calibration standard for magnetic tweezers consisting of measured forces for four magnet configurations. Each such configuration is calibrated for two commonly employed commercially available magnetic microspheres. We calculate forces in both time and spectral domains by analyzing bead fluctuations. The resulting calibration curves, validated through the use of different algorithms that yield close agreement in their determination of the applied forces, span a range from 100 piconewtons down to tens of femtonewtons. These generalized force calibrations will serve as a convenient resource for magnetic tweezers users and diminish variations between different experimental configurations or laboratories.
NASA Technical Reports Server (NTRS)
Sigman, E. H.
1988-01-01
A phase calibration system was developed for the Deep Space Stations to generate reference microwave comb tones which are mixed in with signals received by the antenna. These reference tones are used to remove drifts of the station's receiving system from the detected data. This phase calibration system includes a cable stabilizer which transfers a 20 MHz reference signal from the control room to the antenna cone. The cable stabilizer compensates for delay changes in the long cable which connects its control room subassembly to its antenna cone subassembly in such a way that the 20 MHz is transferred to the cone with no significant degradation of the hydrogen maser atomic clock stability. The 20 MHz reference is used by the comb generator and is also available for use as a reference for receiver LO's in the cone.
Environmental calibration chamber operations
NASA Technical Reports Server (NTRS)
Lester, D. L.
1988-01-01
Thermal vacuum capabilities are provided for the development, calibration, and functional operation checks of flight sensors, sources, and laboratory and field instruments. Two systems are available. The first is a 46 cm diameter diffusion pumped vacuum chambler of the bell jar variety. It has an internal thermal shroud, LN2 old trap, two viewing ports, and various electrical and fluid feedthroughs. The other, also an oil diffusion pumped system, consists of a 1.8 m diameter by 2.5 m long stainless steel vacuum tank, associated pumping and control equipment, a liquid nitrogen storage and transfer system and internal IR/visible calibration sources. This is a two story system with the chamber located on one floor and the pumping/cryogenic systems located on the floor below.
Calibrated vapor generator source
Davies, J.P.; Larson, R.A.; Goodrich, L.D.; Hall, H.J.; Stoddard, B.D.; Davis, S.G.; Kaser, T.G.; Conrad, F.J.
1995-09-26
A portable vapor generator is disclosed that can provide a controlled source of chemical vapors, such as, narcotic or explosive vapors. This source can be used to test and calibrate various types of vapor detection systems by providing a known amount of vapors to the system. The vapor generator is calibrated using a reference ion mobility spectrometer. A method of providing this vapor is described, as follows: explosive or narcotic is deposited on quartz wool, placed in a chamber that can be heated or cooled (depending on the vapor pressure of the material) to control the concentration of vapors in the reservoir. A controlled flow of air is pulsed over the quartz wool releasing a preset quantity of vapors at the outlet. 10 figs.
Calibrated vapor generator source
Davies, John P.; Larson, Ronald A.; Goodrich, Lorenzo D.; Hall, Harold J.; Stoddard, Billy D.; Davis, Sean G.; Kaser, Timothy G.; Conrad, Frank J.
1995-01-01
A portable vapor generator is disclosed that can provide a controlled source of chemical vapors, such as, narcotic or explosive vapors. This source can be used to test and calibrate various types of vapor detection systems by providing a known amount of vapors to the system. The vapor generator is calibrated using a reference ion mobility spectrometer. A method of providing this vapor is described, as follows: explosive or narcotic is deposited on quartz wool, placed in a chamber that can be heated or cooled (depending on the vapor pressure of the material) to control the concentration of vapors in the reservoir. A controlled flow of air is pulsed over the quartz wool releasing a preset quantity of vapors at the outlet.
Automatic volume calibration system
Gates, A.J.; Aaron, C.C.
1985-05-06
The Automatic Volume Calibration System presently consists of three independent volume-measurement subsystems and can possibly be expanded to five subsystems. When completed, the system will manually or automatically perform the sequence of valve-control and data-acquisition operations required to measure given volumes. An LSI-11 minicomputer controls the vacuum and pressure sources and controls solenoid control valves to open and close various volumes. The input data are obtained from numerous displacement, temperature, and pressure sensors read by the LSI-11. The LSI-11 calculates the unknown volume from the data acquired during the sequence of valve operations. The results, based on the Ideal Gas Law, also provide information for feedback and control. This paper describes the volume calibration system, its subsystems, and the integration of the various instrumentation used in the system's design and development. 11 refs., 13 figs., 4 tabs.
Fast calibration of gas flowmeters
NASA Technical Reports Server (NTRS)
Lisle, R. V.; Wilson, T. L.
1981-01-01
Digital unit automates calibration sequence using calculator IC and programmable read-only memory to solve calibration equations. Infrared sensors start and stop calibration sequence. Instrument calibrates mass flowmeters or rotameters where flow measurement is based on mass or volume. This automatic control reduces operator time by 80 percent. Solid-state components are very reliable, and digital character allows system accuracy to be determined primarily by accuracy of transducers.
Water quality change detection: multivariate algorithms
NASA Astrophysics Data System (ADS)
Klise, Katherine A.; McKenna, Sean A.
2006-05-01
In light of growing concern over the safety and security of our nation's drinking water, increased attention has been focused on advanced monitoring of water distribution systems. The key to these advanced monitoring systems lies in the combination of real time data and robust statistical analysis. Currently available data streams from sensors provide near real time information on water quality. Combining these data streams with change detection algorithms, this project aims to develop automated monitoring techniques that will classify real time data and denote anomalous water types. Here, water quality data in 1 hour increments over 3000 hours at 4 locations are used to test multivariate algorithms to detect anomalous water quality events. The algorithms use all available water quality sensors to measure deviation from expected water quality. Simulated anomalous water quality events are added to the measured data to test three approaches to measure this deviation. These approaches include multivariate distance measures to 1) the previous observation, 2) the closest observation in multivariate space, and 3) the closest cluster of previous water quality observations. Clusters are established using kmeans classification. Each approach uses a moving window of previous water quality measurements to classify the current measurement as normal or anomalous. Receiver Operating Characteristic (ROC) curves test the ability of each approach to discriminate between normal and anomalous water quality using a variety of thresholds and simulated anomalous events. These analyses result in a better understanding of the deviation from normal water quality that is necessary to sound an alarm.
Mesoscale hybrid calibration artifact
Tran, Hy D.; Claudet, Andre A.; Oliver, Andrew D.
2010-09-07
A mesoscale calibration artifact, also called a hybrid artifact, suitable for hybrid dimensional measurement and the method for make the artifact. The hybrid artifact has structural characteristics that make it suitable for dimensional measurement in both vision-based systems and touch-probe-based systems. The hybrid artifact employs the intersection of bulk-micromachined planes to fabricate edges that are sharp to the nanometer level and intersecting planes with crystal-lattice-defined angles.
NASA Astrophysics Data System (ADS)
Lorefice, Salvatore; Malengo, Andrea
2006-10-01
After a brief description of the different methods employed in periodic calibration of hydrometers used in most cases to measure the density of liquids in the range between 500 kg m-3 and 2000 kg m-3, particular emphasis is given to the multipoint procedure based on hydrostatic weighing, known as well as Cuckow's method. The features of the calibration apparatus and the procedure used at the INRiM (formerly IMGC-CNR) density laboratory have been considered to assess all relevant contributions involved in the calibration of different kinds of hydrometers. The uncertainty is strongly dependent on the kind of hydrometer; in particular, the results highlight the importance of the density of the reference buoyant liquid, the temperature of calibration and the skill of operator in the reading of the scale in the whole assessment of the uncertainty. It is also interesting to realize that for high-resolution hydrometers (division of 0.1 kg m-3), the uncertainty contribution of the density of the reference liquid is the main source of the total uncertainty, but its importance falls under about 50% for hydrometers with a division of 0.5 kg m-3 and becomes somewhat negligible for hydrometers with a division of 1 kg m-3, for which the reading uncertainty is the predominant part of the total uncertainty. At present the best INRiM result is obtained with commercially available hydrometers having a scale division of 0.1 kg m-3, for which the relative uncertainty is about 12 × 10-6.
H. H. Liu
2003-02-14
This report has documented the methodologies and the data used for developing rock property sets for three infiltration maps. Model calibration is necessary to obtain parameter values appropriate for the scale of the process being modeled. Although some hydrogeologic property data (prior information) are available, these data cannot be directly used to predict flow and transport processes because they were measured on scales smaller than those characterizing property distributions in models used for the prediction. Since model calibrations were done directly on the scales of interest, the upscaling issue was automatically considered. On the other hand, joint use of data and the prior information in inversions can further increase the reliability of the developed parameters compared with those for the prior information. Rock parameter sets were developed for both the mountain and drift scales because of the scale-dependent behavior of fracture permeability. Note that these parameter sets, except those for faults, were determined using the 1-D simulations. Therefore, they cannot be directly used for modeling lateral flow because of perched water in the unsaturated zone (UZ) of Yucca Mountain. Further calibration may be needed for two- and three-dimensional modeling studies. As discussed above in Section 6.4, uncertainties for these calibrated properties are difficult to accurately determine, because of the inaccuracy of simplified methods for this complex problem or the extremely large computational expense of more rigorous methods. One estimate of uncertainty that may be useful to investigators using these properties is the uncertainty used for the prior information. In most cases, the inversions did not change the properties very much with respect to the prior information. The Output DTNs (including the input and output files for all runs) from this study are given in Section 9.4.
T. Ghezzehej
2004-10-04
The purpose of this model report is to document the calibrated properties model that provides calibrated property sets for unsaturated zone (UZ) flow and transport process models (UZ models). The calibration of the property sets is performed through inverse modeling. This work followed, and was planned in, ''Technical Work Plan (TWP) for: Unsaturated Zone Flow Analysis and Model Report Integration'' (BSC 2004 [DIRS 169654], Sections 1.2.6 and 2.1.1.6). Direct inputs to this model report were derived from the following upstream analysis and model reports: ''Analysis of Hydrologic Properties Data'' (BSC 2004 [DIRS 170038]); ''Development of Numerical Grids for UZ Flow and Transport Modeling'' (BSC 2004 [DIRS 169855]); ''Simulation of Net Infiltration for Present-Day and Potential Future Climates'' (BSC 2004 [DIRS 170007]); ''Geologic Framework Model'' (GFM2000) (BSC 2004 [DIRS 170029]). Additionally, this model report incorporates errata of the previous version and closure of the Key Technical Issue agreement TSPAI 3.26 (Section 6.2.2 and Appendix B), and it is revised for improved transparency.
Traceable periodic force calibration
NASA Astrophysics Data System (ADS)
Schlegel, Ch; Kieckenap, G.; Glöckner, B.; Buß, A.; Kumme, R.
2012-06-01
A procedure for dynamic force calibration using sinusoidal excitations of force transducers is described. The method is based on a sinusoidal excitation of force transducers equipped with an additional top mass excited with an electrodynamic shaker system. The acting dynamic force can in this way be determined according to Newton's law as mass times acceleration, whereby the acceleration is measured on the surface of the top mass with the aid of laser interferometers. The dynamic sensitivity, which is the ratio of the electrical output signal of the force transducer and the acting dynamic force, is the main point of interest of such a dynamic calibration. In addition to the sensitivity, the parameter stiffness and damping of the transducer can also be determined. The first part of the paper outlines a mathematical model to describe the dynamic behaviour of a transducer. This is followed by a presentation of the traceability of the measured quantities involved and their uncertainties. The paper finishes with an example calibration of a 25 kN strain gauge force transducer.
NASA Technical Reports Server (NTRS)
1997-01-01
Several prominent features of Mars Pathfinder and surrounding terrain are seen in this image, taken by the Imager for Mars Pathfinder on July 4 (Sol 1), the spacecraft's first day on the Red Planet. Portions of a lander petal are at the lower part of the image. At the left, the mechanism for the high-gain antenna can be seen. The dark area along the right side of the image represents a portion of the low-gain antenna. The radiation calibration target is at the right. The calibration target is made up of a number of materials with well-characterized colors. The known colors of the calibration targets allow scientists to determine the true colors of the rocks and soils of Mars. Three bull's-eye rings provide a wide range of brightness for the camera, similar to a photographer's grayscale chart. In the middle of the bull's-eye is a 5-inch tall post that casts a shadow, which is distorted in this image due to its location with respect to the lander camera.
A large rock is located at the near center of the image. Smaller rocks and areas of soil are strewn across the Martian terrain up to the horizon line.
Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C.
Improved dewpoint-probe calibration
NASA Technical Reports Server (NTRS)
Stephenson, J. G.; Theodore, E. A.
1978-01-01
Relatively-simple pressure-control apparatus calibrates dewpoint probes considerably faster than conventional methods, with no loss of accuracy. Technique requires only pressure measurement at each calibration point and single absolute-humidity measurement at beginning of run. Several probes can be calibrated simultaneously and points can be checked above room temperature.
Dynamic Calibration of Pressure Transducers
NASA Technical Reports Server (NTRS)
Hess, R. W.; Davis, W. T.; Davis, P. A.
1985-01-01
Sinusoidal calibration signal produced in 4- to 100-Hz range. Portable oscillating-pressure device measures dynamic characteristics of pressure transducers installed in models or aircraft at frequency and oscillating-pressure ranges encountered during unsteady-pressure-measurement tests. Calibration is over range of frequencies and amplitudes not available with commercial acoustic calibration devices.
Adjustment of ocean color sensor calibration through multi-band statistics.
Stumpf, Richard P; Werdell, P Jeremy
2010-01-18
The band-by-band vicarious calibration of on-orbit satellite ocean color instruments, such as SeaWiFS and MODIS, using ground-based measurements has significant residual uncertainties. This paper applies spectral shape and population statistics to tune the calibration of the blue bands against each other to allow examination of the interband calibration and potentially provide an analysis of calibration trends. This adjustment does not require simultaneous matches of ground and satellite observations. The method demonstrates the spectral stability of the SeaWiFS calibration and identifies a drift in the MODIS instrument onboard Aqua that falls within its current calibration uncertainties.
Internet-based calibration of a multifunction calibrator
BUNTING BACA,LISA A.; DUDA JR.,LEONARD E.; WALKER,RUSSELL M.; OLDHAM,NILE; PARKER,MARK
2000-04-17
A new way of providing calibration services is evolving which employs the Internet to expand present capabilities and make the calibration process more interactive. Sandia National Laboratories and the National Institute of Standards and Technology are collaborating to set up and demonstrate a remote calibration of multifunction calibrators using this Internet-based technique that is becoming known as e-calibration. This paper describes the measurement philosophy and the Internet resources that can provide real-time audio/video/data exchange, consultation and training, as well as web-accessible test procedures, software and calibration reports. The communication system utilizes commercial hardware and software that should be easy to integrate into most calibration laboratories.
Internet-Based Calibration of a Multifunction Calibrator
BUNTING BACA,LISA A.; DUDA JR.,LEONARD E.; WALKER,RUSSELL M.; OLDHAM,NILE; PARKER,MARK
2000-12-19
A new way of providing calibration services is evolving which employs the Internet to expand present capabilities and make the calibration process more interactive. Sandia National Laboratories and the National Institute of Standards and Technology are collaborating to set up and demonstrate a remote calibration of multijunction calibrators using this Internet-based technique that is becoming known as e-calibration. This paper describes the measurement philosophy and the Internet resources that can provide real-time audio/video/data exchange, consultation and training, as well as web-accessible test procedures, software and calibration reports. The communication system utilizes commercial hardware and software that should be easy to integrate into most calibration laboratories.
Overall, J E; Atlas, R S
1999-04-01
The power of univariate and multivariate tests of significance is compared in relation to linear and nonlinear patterns of treatment effects in a repeated measurement design. Bonferroni correction was used to control the experiment-wise error rate in combining results from univariate tests of significance accomplished separately on average level, linear, quadratic, and cubic trend components. Multivariate tests on these same components of the overall treatment effect, as well as a multivariate test for between-groups difference on the original repeated measurements, were also evaluated for power against the same representative patterns of treatment effects. Results emphasize the advantage of parsimony that is achieved by transforming multiple repeated measurements into a reduced set of mean ngful composite variables representing average levels and rates of change. The Bonferroni correction applied to the separate univariate tests provided experiment-wise protection against Type I error, produced slightly greater experiment-wise power than a multivariate test applied to the same components of the data patterns, and provided substantially greater power than a multivariate test on the complete set of original repeated measurements. The separate univariate tests provide interpretive advantage regarding locus of the treatment effects. PMID:10348408
A Multivariate Granger Causality Concept towards Full Brain Functional Connectivity.
Schmidt, Christoph; Pester, Britta; Schmid-Hertel, Nicole; Witte, Herbert; Wismüller, Axel; Leistritz, Lutz
2016-01-01
Detecting changes of spatially high-resolution functional connectivity patterns in the brain is crucial for improving the fundamental understanding of brain function in both health and disease, yet still poses one of the biggest challenges in computational neuroscience. Currently, classical multivariate Granger Causality analyses of directed interactions between single process components in coupled systems are commonly restricted to spatially low- dimensional data, which requires a pre-selection or aggregation of time series as a preprocessing step. In this paper we propose a new fully multivariate Granger Causality approach with embedded dimension reduction that makes it possible to obtain a representation of functional connectivity for spatially high-dimensional data. The resulting functional connectivity networks may consist of several thousand vertices and thus contain more detailed information compared to connectivity networks obtained from approaches based on particular regions of interest. Our large scale Granger Causality approach is applied to synthetic and resting state fMRI data with a focus on how well network community structure, which represents a functional segmentation of the network, is preserved. It is demonstrated that a number of different community detection algorithms, which utilize a variety of algorithmic strategies and exploit topological features differently, reveal meaningful information on the underlying network module structure.
A Multivariate Granger Causality Concept towards Full Brain Functional Connectivity
Schmid-Hertel, Nicole; Witte, Herbert; Wismüller, Axel; Leistritz, Lutz
2016-01-01
Detecting changes of spatially high-resolution functional connectivity patterns in the brain is crucial for improving the fundamental understanding of brain function in both health and disease, yet still poses one of the biggest challenges in computational neuroscience. Currently, classical multivariate Granger Causality analyses of directed interactions between single process components in coupled systems are commonly restricted to spatially low- dimensional data, which requires a pre-selection or aggregation of time series as a preprocessing step. In this paper we propose a new fully multivariate Granger Causality approach with embedded dimension reduction that makes it possible to obtain a representation of functional connectivity for spatially high-dimensional data. The resulting functional connectivity networks may consist of several thousand vertices and thus contain more detailed information compared to connectivity networks obtained from approaches based on particular regions of interest. Our large scale Granger Causality approach is applied to synthetic and resting state fMRI data with a focus on how well network community structure, which represents a functional segmentation of the network, is preserved. It is demonstrated that a number of different community detection algorithms, which utilize a variety of algorithmic strategies and exploit topological features differently, reveal meaningful information on the underlying network module structure. PMID:27064897
Fraud detection in medicare claims: A multivariate outlier detection approach
Burr, T.; Hale, C.; Kantor, M.
1997-04-01
We apply traditional and customized multivariate outlier detection methods to detect fraud in medicare claims. We use two sets of 11 derived features, and one set of the 22 combined features. The features are defined so that fraudulent medicare providers should tend to have larger features values than non-fraudulent providers. Therefore we have an apriori direction ({open_quotes}large values{close_quotes}) in high dimensional feature space to search for the multivariate outliers. We focus on three issues: (1) outlier masking (Example: the presence of one outlier can make it difficult to detect a second outlier), (2) the impact of having an apriori direction to search for fraud, and (3) how to compare our detection methods. Traditional methods include Mahalanobis distances, (with and without dimension reduction), k-nearest neighbor, and density estimation methods. Some methods attempt to mitigate the outlier masking problem (for example: minimum volume ellipsoid covariance estimator). Customized methods include ranking methods (such as Spearman rank ordering) that exploit the {open_quotes}large is suspicious{close_quotes} notion. No two methods agree completely which providers are most suspicious so we present ways to compare our methods. One comparison method uses a list of known-fraudulent providers. All comparison methods restrict attention to the most suspicious providers.
MULTIVARIATE RECEPTOR MODELS-CURRENT PRACTICE AND FUTURE TRENDS. (R826238)
Multivariate receptor models have been applied to the analysis of air quality data for sometime. However, solving the general mixture problem is important in several other fields. This paper looks at the panoply of these models with a view of identifying common challenges and ...
Rotation in the Dynamic Factor Modeling of Multivariate Stationary Time Series.
ERIC Educational Resources Information Center
Molenaar, Peter C. M.; Nesselroade, John R.
2001-01-01
Proposes a special rotation procedure for the exploratory dynamic factor model for stationary multivariate time series. The rotation procedure applies separately to each univariate component series of a q-variate latent factor series and transforms such a component, initially represented as white noise, into a univariate moving-average.…
NASA Astrophysics Data System (ADS)
Haas, Marcelo B.; Guse, Björn; Pfannerstill, Matthias; Fohrer, Nicola
2016-05-01
Hydrological models are useful tools to investigate hydrology and water quality in catchments. The calibration of these models is a crucial step to adapt the model to the catchment conditions, allowing effective simulations of environmental processes. In the model calibration, different performance measures need to be considered to represent different hydrology and water quality conditions in combination. This study presents a joined multi-metric calibration of discharge and nitrate loads simulated with the ecohydrological model SWAT. For this purpose, a calibration approach based on flow duration curves (FDC) is advanced by also considering nitrate duration curves (NDC). Five segments of FDCs and of NDCs are evaluated separately to consider the different phases of hydrograph and nitrograph. To consider both magnitude and dynamics in river discharge and nitrate loads, the Kling-Gupta Efficiency (KGE) is used additionally as a statistical performance metric to achieve a joined multi-variable calibration. The results show that a separate assessment of five different magnitudes improves the calibrated nitrate loads. Subsequently, adequate model runs with good performance for different hydrological conditions both for discharge and nitrate are detected in a joined approach based on FDC, NDC, and KGE. In that manner, plausible results were obtained for discharge and nitrate loads in the same model run. Using a multi-metric performance approach, the simultaneous multi-variable calibration led to a balanced model result for all magnitudes of discharge and nitrate loads.
Calibration of triaxial fluxgate gradiometer
Vcelak, Jan
2006-04-15
The description of simple and fast calibration procedures used for double-probe triaxial fluxgate gradiometer is provided in this paper. The calibration procedure consists of three basic steps. In the first step both probes are calibrated independently in order to reach constant total field reading in every position. Both probes are numerically aligned in the second step in order that the gradient reading is zero in homogenous magnetic field. The third step consists of periodic drift calibration during measurement. The results and detailed description of each calibration step are presented and discussed in the paper. The gradiometer is finally verified during the detection of the metal object in the measuring grid.
On a Family of Multivariate Modified Humbert Polynomials
Aktaş, Rabia; Erkuş-Duman, Esra
2013-01-01
This paper attempts to present a multivariable extension of generalized Humbert polynomials. The results obtained here include various families of multilinear and multilateral generating functions, miscellaneous properties, and also some special cases for these multivariable polynomials. PMID:23935411
John Schabron; Joseph Rovani; Mark Sanderson
2008-02-29
Mercury continuous emissions monitoring systems (CEMS) are being implemented in over 800 coal-fired power plant stacks. The power industry desires to conduct at least a full year of monitoring before the formal monitoring and reporting requirement begins on January 1, 2009. It is important for the industry to have available reliable, turnkey equipment from CEM vendors. Western Research Institute (WRI) is working closely with the Electric Power Research Institute (EPRI), the National Institute of Standards and Technology (NIST), and the Environmental Protection Agency (EPA) to facilitate the development of the experimental criteria for a NIST traceability protocol for dynamic elemental mercury vapor generators. The generators are used to calibrate mercury CEMs at power plant sites. The Clean Air Mercury Rule (CAMR) which was published in the Federal Register on May 18, 2005 requires that calibration be performed with NIST-traceable standards (Federal Register 2007). Traceability procedures will be defined by EPA. An initial draft traceability protocol was issued by EPA in May 2007 for comment. In August 2007, EPA issued an interim traceability protocol for elemental mercury generators (EPA 2007). The protocol is based on the actual analysis of the output of each calibration unit at several concentration levels ranging initially from about 2-40 {micro}g/m{sup 3} elemental mercury, and in the future down to 0.2 {micro}g/m{sup 3}, and this analysis will be directly traceable to analyses by NIST. The document is divided into two separate sections. The first deals with the qualification of generators by the vendors for use in mercury CEM calibration. The second describes the procedure that the vendors must use to certify the generator models that meet the qualification specifications. The NIST traceable certification is performance based, traceable to analysis using isotope dilution inductively coupled plasma/mass spectrometry performed by NIST in Gaithersburg, MD. The
Development and calibration of a pedal with force and moment sensors.
Gurgel, Jonas; Porto, Flávia; Russomano, Thais; Cambraia, Rodrigo; de Azevedo, Dario F G; Glock, Flávio S; Beck, João Carlos Pinheiro; Helegda, Sergio
2006-01-01
An instrumented bicycle pedal was built and calibrated. The pedal has good linearity and sensibility, comparable to other instruments in the literature. This study aimed to perform accurate calibration of a tri-axial pedal, including forces applied, deformations, nonlinearities, hysteresis and standard error for each axis. Calibration was based on Hull and Davis method, which is based on the application of known loads on the pedal in order to create a calibration matrix. PMID:17946605
Heigl, N; Huck, C W; Rainer, M; Najam-Ul-Haq, M; Bonn, G K
2006-07-01
A method based on near-infrared spectroscopy (NIRS) was developed for the rapid and non-destructive determination and quantification of solid and dissolved amino acids. The statistical results obtained after optimisation of measurement conditions were evaluated on the basis of statistical parameters, Q-value (quality of calibrations), R(2), standard error of estimation (SEE), standard error of prediction (SEP), BIAS applying cluster and different multivariate analytical procedures. Experimental optimisation comprised the selection of the highest suitable optical thin-layer (0.5, 1.0, 1.5, 2.0, 2.5, 3.0 mm), sample temperature (10-30 degrees C), measurement option (light fibre, 0.5 mm optical thin-layer; boiling point tube; different types of cuvettes) and sample concentration in the range between 100 and 500 ppm. Applying the optimised conditions and a 115-QS Suprasil cuvette (V = 400 microl), the established qualitative model enabled to distinguish between different dissolved amino acids with a Q-value of 0.9555. Solid amino acids were investigated in the transflectance mode, allowing to differentiate them with a Q-value of 0.9155. For the qualitative and quantitative analysis of amino acids in complex matrices NIRS was established as a detection system directly onto the plate after prior separation on cellulose based thin-layer chromatography (TLC) sheets employing n-butanol, acetic acid and distilled water at a ratio of 8:4:2 (v/v/v) as an optimised mobile phase. Due to the prior separation step, the established calibration curve was found to be more stable than the one calculated from the dissolved amino acids. The found lower limit of detection was 0.01 mg/ml. Finally, this optimised TLC-NIRS method was successfully applied for the qualitative and quantitative analysis of L-lysine in apple juice. NIRS is shown not only to offer a fast, non-destructive detection tool but also to provide an easy-to-use alternative to more complicated detection methods such as
Self-Calibrating Pressure Transducer
NASA Technical Reports Server (NTRS)
Lueck, Dale E. (Inventor)
2006-01-01
A self-calibrating pressure transducer is disclosed. The device uses an embedded zirconia membrane which pumps a determined quantity of oxygen into the device. The associated pressure can be determined, and thus, the transducer pressure readings can be calibrated. The zirconia membrane obtains oxygen .from the surrounding environment when possible. Otherwise, an oxygen reservoir or other source is utilized. In another embodiment, a reversible fuel cell assembly is used to pump oxygen and hydrogen into the system. Since a known amount of gas is pumped across the cell, the pressure produced can be determined, and thus, the device can be calibrated. An isolation valve system is used to allow the device to be calibrated in situ. Calibration is optionally automated so that calibration can be continuously monitored. The device is preferably a fully integrated MEMS device. Since the device can be calibrated without removing it from the process, reductions in costs and down time are realized.
NASA Astrophysics Data System (ADS)
Boué-Bigne, Fabienne
2016-05-01
Laser induced breakdown spectroscopy (LIBS) scanning measurements can generally be used to detect the presence of non-metallic inclusions in steel samples. However, the inexistence of appropriate standards to calibrate the LIBS instrument signal means that its application is limited to identifying simple diatomic inclusions and inclusions that are chemically fully distinct from one another. Oxide inclusions in steel products have varied and complex chemical content, with an approximate size of interest of 1 μm. Several oxide inclusions types have chemical elements in common, but it is the concentration of these elements that makes an inclusion type have little or, on the contrary, deleterious impact on the final steel product quality. During the LIBS measurement of such inclusions, the spectroscopic signal is influenced not only by the inclusions' chemical concentrations but also by their varying size and associated laser ablation matrix effects. To address the complexity of calibrating the LIBS instrument signal for identifying such inclusion species, a new approach was developed where a calibration dataset was created, combining the elemental concentrations of typical oxide inclusions with the associated LIBS signal, in order to define a multivariate discriminant function capable of identifying oxide inclusions from LIBS data obtained from the measurement of unknown samples. The new method was applied to a variety of steel product samples. Inclusions populations consisting of mixtures of several complex oxides, with overlapping chemical content and size ranging typically from 1 to 5 μm, were identified and correlated well with validation data. The ability to identify complex inclusion types from LIBS data could open the way to new applications as, for a given sample area, the LIBS measurement is performed in a fraction of the time required by scanning electron microscopy, which is the conventional technique used for inclusion characterisation in steel
Yu, Hua; Small, Gary W
2015-02-01
A diagnostic and updating strategy is explored for multivariate calibrations based on near-infrared spectroscopy. For use with calibration models derived from spectral fitting or decomposition techniques, the proposed method constructs models that relate the residual concentrations remaining after a prediction to the residual spectra remaining after the information associated with the calibration model has been extracted. This residual modeling approach is evaluated for use with partial least-squares (PLS) models for predicting physiological levels of glucose in a simulated biological matrix. Residual models are constructed with both PLS and a hybrid technique based on the use of PLS scores as inputs to support vector regression. Calibration and residual models are built with both absorbance and single-beam data collected over 416 days. Effective models for the spectral residuals are built with both types of data and demonstrate the ability to diagnose and correct deviations in performance of the calibration model with time. PMID:25473807
[Laser-based radiometric calibration].
Li, Zhi-gang; Zheng, Yu-quan
2014-12-01
Increasingly higher demands are put forward to spectral radiometric calibration accuracy and the development of new tunable laser based spectral radiometric calibration technology is promoted, along with the development of studies of terrestrial remote sensing, aeronautical and astronautical remote sensing, plasma physics, quantitative spectroscopy, etc. Internationally a number of national metrology scientific research institutes have built tunable laser based spectral radiometric calibration facilities in succession, which are traceable to cryogenic radiometers and have low uncertainties for spectral responsivity calibration and characterization of detectors and remote sensing instruments in the UK, the USA, Germany, etc. Among them, the facility for spectral irradiance and radiance responsivity calibrations using uniform sources (SIRCCUS) at the National Institute of Standards and Technology (NIST) in the USA and the Tunable Lasers in Photometry (TULIP) facility at the Physikalisch-Technische Bundesanstalt (PTB) in Germany have more representatives. Compared with lamp-monochromator systems, laser based spectral radiometric calibrations have many advantages, such as narrow spectral bandwidth, high wavelength accuracy, low calibration uncertainty and so on for radiometric calibration applications. In this paper, the development of laser-based spectral radiometric calibration and structures and performances of laser-based radiometric calibration facilities represented by the National Physical Laboratory (NPL) in the UK, NIST and PTB are presented, technical advantages of laser-based spectral radiometric calibration are analyzed, and applications of this technology are further discussed. Laser-based spectral radiometric calibration facilities can be widely used in important system-level radiometric calibration measurements with high accuracy, including radiance temperature, radiance and irradiance calibrations for space remote sensing instruments, and promote the
[Laser-based radiometric calibration].
Li, Zhi-gang; Zheng, Yu-quan
2014-12-01
Increasingly higher demands are put forward to spectral radiometric calibration accuracy and the development of new tunable laser based spectral radiometric calibration technology is promoted, along with the development of studies of terrestrial remote sensing, aeronautical and astronautical remote sensing, plasma physics, quantitative spectroscopy, etc. Internationally a number of national metrology scientific research institutes have built tunable laser based spectral radiometric calibration facilities in succession, which are traceable to cryogenic radiometers and have low uncertainties for spectral responsivity calibration and characterization of detectors and remote sensing instruments in the UK, the USA, Germany, etc. Among them, the facility for spectral irradiance and radiance responsivity calibrations using uniform sources (SIRCCUS) at the National Institute of Standards and Technology (NIST) in the USA and the Tunable Lasers in Photometry (TULIP) facility at the Physikalisch-Technische Bundesanstalt (PTB) in Germany have more representatives. Compared with lamp-monochromator systems, laser based spectral radiometric calibrations have many advantages, such as narrow spectral bandwidth, high wavelength accuracy, low calibration uncertainty and so on for radiometric calibration applications. In this paper, the development of laser-based spectral radiometric calibration and structures and performances of laser-based radiometric calibration facilities represented by the National Physical Laboratory (NPL) in the UK, NIST and PTB are presented, technical advantages of laser-based spectral radiometric calibration are analyzed, and applications of this technology are further discussed. Laser-based spectral radiometric calibration facilities can be widely used in important system-level radiometric calibration measurements with high accuracy, including radiance temperature, radiance and irradiance calibrations for space remote sensing instruments, and promote the
Micromagnetometer calibration for accurate orientation estimation.
Zhang, Zhi-Qiang; Yang, Guang-Zhong
2015-02-01
Micromagnetometers, together with inertial sensors, are widely used for attitude estimation for a wide variety of applications. However, appropriate sensor calibration, which is essential to the accuracy of attitude reconstruction, must be performed in advance. Thus far, many different magnetometer calibration methods have been proposed to compensate for errors such as scale, offset, and nonorthogonality. They have also been used for obviate magnetic errors due to soft and hard iron. However, in order to combine the magnetometer with inertial sensor for attitude reconstruction, alignment difference between the magnetometer and the axes of the inertial sensor must be determined as well. This paper proposes a practical means of sensor error correction by simultaneous consideration of sensor errors, magnetic errors, and alignment difference. We take the summation of the offset and hard iron error as the combined bias and then amalgamate the alignment difference and all the other errors as a transformation matrix. A two-step approach is presented to determine the combined bias and transformation matrix separately. In the first step, the combined bias is determined by finding an optimal ellipsoid that can best fit the sensor readings. In the second step, the intrinsic relationships of the raw sensor readings are explored to estimate the transformation matrix as a homogeneous linear least-squares problem. Singular value decomposition is then applied to estimate both the transformation matrix and magnetic vector. The proposed method is then applied to calibrate our sensor node. Although there is no ground truth for the combined bias and transformation matrix for our node, the consistency of calibration results among different trials and less than 3(°) root mean square error for orientation estimation have been achieved, which illustrates the effectiveness of the proposed sensor calibration method for practical applications. PMID:25265625
Analyzing Multivariate Repeated Measures Designs When Covariance Matrices Are Heterogeneous.
ERIC Educational Resources Information Center
Lix, Lisa M.; And Others
Methods for the analysis of within-subjects effects in multivariate groups by trials repeated measures designs are considered in the presence of heteroscedasticity of the group variance-covariance matrices and multivariate nonnormality. Under a doubly multivariate model approach to hypothesis testing, within-subjects main and interaction effect…
Multivariate Models for Normal and Binary Responses in Intervention Studies
ERIC Educational Resources Information Center
Pituch, Keenan A.; Whittaker, Tiffany A.; Chang, Wanchen
2016-01-01
Use of multivariate analysis (e.g., multivariate analysis of variance) is common when normally distributed outcomes are collected in intervention research. However, when mixed responses--a set of normal and binary outcomes--are collected, standard multivariate analyses are no longer suitable. While mixed responses are often obtained in…
ALTEA: The instrument calibration
NASA Astrophysics Data System (ADS)
Zaconte, V.; Belli, F.; Bidoli, V.; Casolino, M.; di Fino, L.; Narici, L.; Picozza, P.; Rinaldi, A.; Sannita, W. G.; Finetti, N.; Nurzia, G.; Rantucci, E.; Scrimaglio, R.; Segreto, E.; Schardt, D.
2008-05-01
The ALTEA program is an international and multi-disciplinary project aimed at studying particle radiation in space environment and its effects on astronauts’ brain functions, as the anomalous perception of light flashes first reported during Apollo missions. The ALTEA space facility includes a 6-silicon telescopes particle detector, and is onboard the International Space Station (ISS) since July 2006. In this paper, the detector calibration at the heavy-ion synchrotron SIS18 at GSI Darmstadt will be presented and compared to the Geant 3 Monte Carlo simulation. Finally, the results of a neural network analysis that was used for ion discrimination on fragmentation data will also be presented.
Multivariate Analysis of Genotype-Phenotype Association.
Mitteroecker, Philipp; Cheverud, James M; Pavlicev, Mihaela
2016-04-01
With the advent of modern imaging and measurement technology, complex phenotypes are increasingly represented by large numbers of measurements, which may not bear biological meaning one by one. For such multivariate phenotypes, studying the pairwise associations between all measurements and all alleles is highly inefficient and prevents insight into the genetic pattern underlying the observed phenotypes. We present a new method for identifying patterns of allelic variation (genetic latent variables) that are maximally associated-in terms of effect size-with patterns of phenotypic variation (phenotypic latent variables). This multivariate genotype-phenotype mapping (MGP) separates phenotypic features under strong genetic control from less genetically determined features and thus permits an analysis of the multivariate structure of genotype-phenotype association, including its dimensionality and the clustering of genetic and phenotypic variables within this association. Different variants of MGP maximize different measures of genotype-phenotype association: genetic effect, genetic variance, or heritability. In an application to a mouse sample, scored for 353 SNPs and 11 phenotypic traits, the first dimension of genetic and phenotypic latent variables accounted for >70% of genetic variation present in all 11 measurements; 43% of variation in this phenotypic pattern was explained by the corresponding genetic latent variable. The first three dimensions together sufficed to account for almost 90% of genetic variation in the measurements and for all the interpretable genotype-phenotype association. Each dimension can be tested as a whole against the hypothesis of no association, thereby reducing the number of statistical tests from 7766 to 3-the maximal number of meaningful independent tests. Important alleles can be selected based on their effect size (additive or nonadditive effect on the phenotypic latent variable). This low dimensionality of the genotype-phenotype map
Time varying, multivariate volume data reduction
Ahrens, James P; Fout, Nathaniel; Ma, Kwan - Liu
2010-01-01
Large-scale supercomputing is revolutionizing the way science is conducted. A growing challenge, however, is understanding the massive quantities of data produced by large-scale simulations. The data, typically time-varying, multivariate, and volumetric, can occupy from hundreds of gigabytes to several terabytes of storage space. Transferring and processing volume data of such sizes is prohibitively expensive and resource intensive. Although it may not be possible to entirely alleviate these problems, data compression should be considered as part of a viable solution, especially when the primary means of data analysis is volume rendering. In this paper we present our study of multivariate compression, which exploits correlations among related variables, for volume rendering. Two configurations for multidimensional compression based on vector quantization are examined. We emphasize quality reconstruction and interactive rendering, which leads us to a solution using graphics hardware to perform on-the-fly decompression during rendering. In this paper we present a solution which addresses the need for data reduction in large supercomputing environments where data resulting from simulations occupies tremendous amounts of storage. Our solution employs a lossy encoding scheme to acrueve data reduction with several options in terms of rate-distortion behavior. We focus on encoding of multiple variables together, with optional compression in space and time. The compressed volumes can be rendered directly with commodity graphics cards at interactive frame rates and rendering quality similar to that of static volume renderers. Compression results using a multivariate time-varying data set indicate that encoding multiple variables results in acceptable performance in the case of spatial and temporal encoding as compared to independent compression of variables. The relative performance of spatial vs. temporal compression is data dependent, although temporal compression has the
A symmetric multivariate leakage correction for MEG connectomes
Colclough, G.L.; Brookes, M.J.; Smith, S.M.; Woolrich, M.W.
2015-01-01
Ambiguities in the source reconstruction of magnetoencephalographic (MEG) measurements can cause spurious correlations between estimated source time-courses. In this paper, we propose a symmetric orthogonalisation method to correct for these artificial correlations between a set of multiple regions of interest (ROIs). This process enables the straightforward application of network modelling methods, including partial correlation or multivariate autoregressive modelling, to infer connectomes, or functional networks, from the corrected ROIs. Here, we apply the correction to simulated MEG recordings of simple networks and to a resting-state dataset collected from eight subjects, before computing the partial correlations between power envelopes of the corrected ROItime-courses. We show accurate reconstruction of our simulated networks, and in the analysis of real MEGresting-state connectivity, we find dense bilateral connections within the motor and visual networks, together with longer-range direct fronto-parietal connections. PMID:25862259
A Pattern Mining Approach for Classifying Multivariate Temporal Data
Batal, Iyad; Valizadegan, Hamed; Cooper, Gregory F.; Hauskrecht, Milos
2012-01-01
We study the problem of learning classification models from complex multivariate temporal data encountered in electronic health record systems. The challenge is to define a good set of features that are able to represent well the temporal aspect of the data. Our method relies on temporal abstractions and temporal pattern mining to extract the classification features. Temporal pattern mining usually returns a large number of temporal patterns, most of which may be irrelevant to the classification task. To address this problem, we present the minimal predictive temporal patterns framework to generate a small set of predictive and non-spurious patterns. We apply our approach to the real-world clinical task of predicting patients who are at risk of developing heparin induced thrombocytopenia. The results demonstrate the benefit of our approach in learning accurate classifiers, which is a key step for developing intelligent clinical monitoring systems. PMID:22267987
Application of glyph-based techniques for multivariate engineering visualization
NASA Astrophysics Data System (ADS)
Glazar, Vladimir; Marunic, Gordana; Percic, Marko; Butkovic, Zlatko
2016-01-01
This article presents a review of glyph-based techniques for engineering visualization as well as practical application for the multivariate visualization process. Two glyph techniques, Chernoff faces and star glyphs, uncommonly used in engineering practice, are described, applied to the selected data set, run through the chosen optimization methods and user evaluated. As an example of how these techniques function, a set of data for the optimization of a heat exchanger with a microchannel coil is adopted for visualization. The results acquired by the chosen visualization techniques are related to the results of optimization carried out by the response surface method and compared with the results of user evaluation. Based on the data set from engineering research and practice, the advantages and disadvantages of these techniques for engineering visualization are identified and discussed.
Multivariate curve-fitting in GAUSS
Bunck, C.M.; Pendleton, G.W.
1988-01-01
Multivariate curve-fitting techniques for repeated measures have been developed and an interactive program has been written in GAUSS. The program implements not only the one-factor design described in Morrison (1967) but also includes pairwise comparisons of curves and rates, a two-factor design, and other options. Strategies for selecting the appropriate degree for the polynomial are provided. The methods and program are illustrated with data from studies of the effects of environmental contaminants on ducklings, nesting kestrels and quail.
Multivariate Lipschitz optimization: Survey and computational comparison
Hansen, P.; Gourdin, E.; Jaumard, B.
1994-12-31
Many methods have been proposed to minimize a multivariate Lipschitz function on a box. They pertain the three approaches: (i) reduction to the univariate case by projection (Pijavskii) or by using a space-filling curve (Strongin); (ii) construction and refinement of a single upper bounding function (Pijavskii, Mladineo, Mayne and Polak, Jaumard Hermann and Ribault, Wood...); (iii) branch and bound with local upper bounding functions (Galperin, Pint{acute e}r, Meewella and Mayne, the present authors). A survey is made, stressing similarities of algorithms, expressed when possible within a unified framework. Moreover, an extensive computational comparison is reported on.
Algorithms for computing the multivariable stability margin
NASA Technical Reports Server (NTRS)
Tekawy, Jonathan A.; Safonov, Michael G.; Chiang, Richard Y.
1989-01-01
Stability margin for multiloop flight control systems has become a critical issue, especially in highly maneuverable aircraft designs where there are inherent strong cross-couplings between the various feedback control loops. To cope with this issue, we have developed computer algorithms based on non-differentiable optimization theory. These algorithms have been developed for computing the Multivariable Stability Margin (MSM). The MSM of a dynamical system is the size of the smallest structured perturbation in component dynamics that will destabilize the system. These algorithms have been coded and appear to be reliable. As illustrated by examples, they provide the basis for evaluating the robustness and performance of flight control systems.
Robotti, Elisa; Marengo, Emilio
2016-01-01
2-D gel electrophoresis usually provides complex maps characterized by a low reproducibility: this hampers the use of spot volume data for the identification of reliable biomarkers. Under these circumstances, effective and robust methods for the comparison and classification of 2-D maps are fundamental for the identification of an exhaustive panel of candidate biomarkers. Multivariate methods are the most suitable since they take into consideration the relationships between the variables, i.e., effects of synergy and antagonism between the spots. Here the most common multivariate methods used in spot volume datasets analysis are presented. The methods are applied on a sample dataset to prove their effectiveness.
A multivariate analysis approach for the Imaging Atmospheric Cerenkov Telescopes System H.E.S.S
Dubois, F.; Lamanna, G.
2008-12-24
We present a multivariate classification approach applied to the analysis of data from the H.E.S.S. Very High Energy (VHE){gamma}-ray IACT stereoscopic system. This approach combines three complementary analysis methods already successfully applied in the H.E.S.S. data analysis. The proposed approach, with the combined effective estimator X{sub eff}, is conceived to improve the signal-to-background ratio and therefore particularly relevant to the morphological studies of faint extended sources.
Fabrication and calibration of sensitively photoelastic biocompatible gelatin spheres
NASA Astrophysics Data System (ADS)
Fu, Henry; Ceniceros, Ericson; McCormick, Zephyr
2013-11-01
Photoelastic gelatin can be used to measure forces generated by organisms in complex environments. We describe manufacturing, storage, and calibration techniques for sensitive photoelastic gelatin spheres to be used in aqueous environments. Calibration yields a correlation between photoelastic signal and applied force to be used in future studies. Images for calibration were collected with a digital camera attached to a linear polariscope. The images were then processed in Matlab to determine the photoelastic response of each sphere. The effect of composition, gelatin concentration, glycerol concentration, sphere size, and temperature were all examined for their effect on signal response. The minimum detectable force and the repeatability of our calibration technique were evaluated for the same sphere, different spheres from the same fabrication batch, and spheres from different batches. The minimum force detectable is 10 μN or less depending on sphere size. Factors which significantly contribute to errors in the calibration were explored in detail and minimized.
Consequences of Secondary Calibrations on Divergence Time Estimates
Schenk, John J.
2016-01-01
Secondary calibrations (calibrations based on the results of previous molecular dating studies) are commonly applied in divergence time analyses in groups that lack fossil data; however, the consequences of applying secondary calibrations in a relaxed-clock approach are not fully understood. I tested whether applying the posterior estimate from a primary study as a prior distribution in a secondary study results in consistent age and uncertainty estimates. I compared age estimates from simulations with 100 randomly replicated secondary trees. On average, the 95% credible intervals of node ages for secondary estimates were significantly younger and narrower than primary estimates. The primary and secondary age estimates were significantly different in 97% of the replicates after Bonferroni corrections. Greater error in magnitude was associated with deeper than shallower nodes, but the opposite was found when standardized by median node age, and a significant positive relationship was determined between the number of tips/age of secondary trees and the total amount of error. When two secondary calibrated nodes were analyzed, estimates remained significantly different, and although the minimum and median estimates were associated with less error, maximum age estimates and credible interval widths had greater error. The shape of the prior also influenced error, in which applying a normal, rather than uniform, prior distribution resulted in greater error. Secondary calibrations, in summary, lead to a false impression of precision and the distribution of age estimates shift away from those that would be inferred by the primary analysis. These results suggest that secondary calibrations should not be applied as the only source of calibration in divergence time analyses that test time-dependent hypotheses until the additional error associated with secondary calibrations is more properly modeled to take into account increased uncertainty in age estimates. PMID:26824760
Chaloosi, Marzieh; Asadollahi, Seyed Azadeh; Khanchi, Ali Reza; FirozZare, Mahmoud; Mahani, Mohamad Khayatzadeh
2009-01-01
A partial least-squares (PLS) calibration model was developed for simultaneous multicomponent elemental analysis with inductively coupled plasma-atomic emission spectrometry (ICP-AES) in the presence of spectral interference. The best calibration model was obtained using a PLS2 algorithm. Validation was performed with an artificial test set. Multivariate calibration models were constructed using 2 series of synthetic mixtures (Zn, Cu, Fe, and U, V). Accuracy of the method was evaluated with unknown synthetic and real samples. PMID:19382589
Use of Radiometrically Calibrated Flat-Plate Calibrators in Calibration of Radiation Thermometers
NASA Astrophysics Data System (ADS)
Cárdenas-García, D.; Méndez-Lango, E.
2015-08-01
Most commonly used, low-temperature, infrared thermometers have large fields of view sizes that make them difficult to be calibrated with narrow aperture blackbodies. Flat-plate calibrators with large emitting surfaces have been proposed for calibrating these infrared thermometers. Because the emissivity of the flat plate is not unity, its radiance temperature is wavelength dependent. For calibration, the wavelength pass band of the device under test should match that of the reference infrared thermometer. If the device under test and reference radiometer have different pass bands, then it is possible to calculate the corresponding correction if the emissivity of the flat plate is known. For example, a correction of at is required when calibrating a infrared thermometer with a "" radiometrically calibrated flat-plate calibrator. A method is described for using a radiometrically calibrated flat-plate calibrator that covers both cases of match and mismatch working wavelength ranges of a reference infrared thermometer and infrared thermometers to be calibrated with the flat-plate calibrator. Also, an application example is included in this paper.
Principal Component Noise Filtering for NAST-I Radiometric Calibration
NASA Technical Reports Server (NTRS)
Tian, Jialin; Smith, William L., Sr.
2011-01-01
The National Polar-orbiting Operational Environmental Satellite System (NPOESS) Airborne Sounder Testbed- Interferometer (NAST-I) instrument is a high-resolution scanning interferometer that measures emitted thermal radiation between 3.3 and 18 microns. The NAST-I radiometric calibration is achieved using internal blackbody calibration references at ambient and hot temperatures. In this paper, we introduce a refined calibration technique that utilizes a principal component (PC) noise filter to compensate for instrument distortions and artifacts, therefore, further improve the absolute radiometric calibration accuracy. To test the procedure and estimate the PC filter noise performance, we form dependent and independent test samples using odd and even sets of blackbody spectra. To determine the optimal number of eigenvectors, the PC filter algorithm is applied to both dependent and independent blackbody spectra with a varying number of eigenvectors. The optimal number of PCs is selected so that the total root-mean-square (RMS) error is minimized. To estimate the filter noise performance, we examine four different scenarios: apply PC filtering to both dependent and independent datasets, apply PC filtering to dependent calibration data only, apply PC filtering to independent data only, and no PC filters. The independent blackbody radiances are predicted for each case and comparisons are made. The results show significant reduction in noise in the final calibrated radiances with the implementation of the PC filtering algorithm.
NASA Technical Reports Server (NTRS)
Soeder, J. F.
1983-01-01
As turbofan engines become more complex, the development of controls necessitate the use of multivariable control techniques. A control developed for the F100-PW-100(3) turbofan engine by using linear quadratic regulator theory and other modern multivariable control synthesis techniques is described. The assembly language implementation of this control on an SEL 810B minicomputer is described. This implementation was then evaluated by using a real-time hybrid simulation of the engine. The control software was modified to run with a real engine. These modifications, in the form of sensor and actuator failure checks and control executive sequencing, are discussed. Finally recommendations for control software implementations are presented.
Calibration and validation of rockfall models
NASA Astrophysics Data System (ADS)
Frattini, Paolo; Valagussa, Andrea; Zenoni, Stefania; Crosta, Giovanni B.
2013-04-01
actual blocks, (2) the percentage of trajectories passing through the buffer of the actual rockfall path, (3) the mean distance between the location of arrest of each simulated blocks and the location of the nearest actual blocks; (4) the mean distance between the location of detachment of each simulated block and the location of detachment of the actual block located closer to the arrest position. By applying the four measures to the case studies, we observed that all measures are able to represent the model performance for validation purposes. However, the third measure is more simple and reliable than the others, and seems to be optimal for model calibration, especially when using a parameter estimation and optimization modelling software for automated calibration.
NASA Technical Reports Server (NTRS)
Bruegge, Carol J.; Diner, David J.; Duval, Valerie G.
1996-01-01
The Multiangle Imaging SpectroRadiometer (MISR) is currently under development for NASA's Earth Observing System. The instrument consists of nine pushbroom cameras, each with four spectral bands in the visible and near-infrared. The cameras point in different view directions to provide measurements from nadir to highly oblique view angles in the along-track plane. Multiple view-angle observations provide a unique resource for studies of clouds, aerosols, and the surface. MISR is built to challenging radiometric and geometric performance specifications. Radiometric accuracy, for example, must be within +/- 3%/ 1 sigma, and polarization insensitivity must be better than +/- 1 %. An onboard calibrator (OBC) provides monthly updates to the instrument gain coefficients. Spectralon diffuse panels are used within the OBC to provide a uniform target for the cameras to view. The absolute radiometric scale is established both preflight and in orbit through the use of detector standards. During the mission, ground data processing to accomplish radiometric calibration, geometric rectification and registration of the nine view-angle imagery, and geophysical retrievals will proceed in an automated fashion. A global dataset is produced every 9 days. This paper details the preflight characterization of the MISR instrument, the design of the OBC, and the radiance product processing.
A Simple Accelerometer Calibrator
NASA Astrophysics Data System (ADS)
Salam, R. A.; Islamy, M. R. F.; Munir, M. M.; Latief, H.; Irsyam, M.; Khairurrijal
2016-08-01
High possibility of earthquake could lead to the high number of victims caused by it. It also can cause other hazards such as tsunami, landslide, etc. In that case it requires a system that can examine the earthquake occurrence. Some possible system to detect earthquake is by creating a vibration sensor system using accelerometer. However, the output of the system is usually put in the form of acceleration data. Therefore, a calibrator system for accelerometer to sense the vibration is needed. In this study, a simple accelerometer calibrator has been developed using 12 V DC motor, optocoupler, Liquid Crystal Display (LCD) and AVR 328 microcontroller as controller system. The system uses the Pulse Wave Modulation (PWM) form microcontroller to control the motor rotational speed as response to vibration frequency. The frequency of vibration was read by optocoupler and then those data was used as feedback to the system. The results show that the systems could control the rotational speed and the vibration frequencies in accordance with the defined PWM.
NASA Astrophysics Data System (ADS)
Leming, Edward; SNO+ Collaboration
2015-04-01
Situated 2 km underground in Sudbury, Northern Ontario, the SNO + detector consists of an acrylic sphere 12 m in diameter containing 780 tons of target mass, surrounded by approximately 9,500 PMTs. For SNO, this target mass was heavy water, however the change to SNO + is defined by the change of this target mass to a novel scintillator. With the lower energy threshold, low intrinsic radioactivity levels and the best shielding against muons and cosmogenic activation of all existing neutrino experiments, SNO + will be sensitive to exciting new physics. The experiment will be studying solar, reactor, super nova and geo-neutrinos, though the main purpose of SNO + is the search for neutrinoless double-beta decay of Te-130. To meet the requirements imposed by the physics on detector performance, a detailed optical calibration is needed. Source deployment must be kept to a minimum and eliminated if possible, in order to meet the stringent radiopurity requirements. This led to the development of the Embedded LED/laser Light Injection Entity (ELLIE) system. This talk provides a summary of the upgrades to from SNO to SNO +, discussing the requirements on and methods of optical calibration, focusing on the deployed laserball and ELLIE system.
CryoSat-2 SIRAL Calibration: Strategy, Application and Results
NASA Astrophysics Data System (ADS)
Parrinello, T.; Fornari, M.; Bouzinac, C.; Scagliola, M.; Tagliani, N.
2012-04-01
The main payload of CryoSat-2 is a Ku band pulsewidth limited radar altimeter, called SIRAL (Synthetic interferometric radar altimeter), that transmits pulses at a high pulse repetition frequency thus making the received echoes phase coherent and suitable for azimuth processing. This allows to reach an along track resolution of about 250 meters which is an important improvement over traditional pulse-width limited altimeters. Due to the fact that SIRAL is a phase coherent pulse-width limited radar altimeter, a proper calibration approach has been developed. In fact, not only the corrections for transfer function amplitude with respect to frequency, gain and instrument path delay have to be computed but it is also needed to provide corrections for transfer function phase with respect to frequency and AGC setting as well as the phase variation across bursts of pulses. As a consequence, SIRAL performs regularly four types of calibrations: (1) CAL1 in order to calibrate the internal path delay and peak power variation, (2) CAL2 in order to compensate the instrument transfer function, (3) CAL4 to calibrate the interferometer and (4) AutoCal, a specific sequence in order to calibrate the gain and phase difference for each AGC setting. Commissioning phase results (April-December 2010) revealed high stability of the instrument, which made possible to reduce the calibration frequency during Operations. Internal calibration data are processed on ground by the CryoSat-2 Instrument Processing Facility (IPF1) and then applied to the science data. In this poster we will describe as first the calibration strategy and then how the four different types of calibration are applied to science data. Moreover the calibration results over almost 2 years of mission will be presented, analyzing their temporal evolution in order to highlight the stability of the instrument over its life.
Crop physiology calibration in the CLM
Bilionis, I.; Drewniak, B. A.; Constantinescu, E. M.
2015-04-15
Farming is using more of the land surface, as population increases and agriculture is increasingly applied for non-nutritional purposes such as biofuel production. This agricultural expansion exerts an increasing impact on the terrestrial carbon cycle. In order to understand the impact of such processes, the Community Land Model (CLM) has been augmented with a CLM-Crop extension that simulates the development of three crop types: maize, soybean, and spring wheat. The CLM-Crop model is a complex system that relies on a suite of parametric inputs that govern plant growth under a given atmospheric forcing and available resources. CLM-Crop development used measurements of gross primary productivity (GPP) and net ecosystem exchange (NEE) from AmeriFlux sites to choose parameter values that optimize crop productivity in the model. In this paper, we calibrate these parameters for one crop type, soybean, in order to provide a faithful projection in terms of both plant development and net carbon exchange. Calibration is performed in a Bayesian framework by developing a scalable and adaptive scheme based on sequential Monte Carlo (SMC). The model showed significant improvement of crop productivity with the new calibrated parameters. We demonstrate that the calibrated parameters are applicable across alternative years and different sites.
Crop physiology calibration in the CLM
Bilionis, I.; Drewniak, B. A.; Constantinescu, E. M.
2015-04-15
Farming is using more of the land surface, as population increases and agriculture is increasingly applied for non-nutritional purposes such as biofuel production. This agricultural expansion exerts an increasing impact on the terrestrial carbon cycle. In order to understand the impact of such processes, the Community Land Model (CLM) has been augmented with a CLM-Crop extension that simulates the development of three crop types: maize, soybean, and spring wheat. The CLM-Crop model is a complex system that relies on a suite of parametric inputs that govern plant growth under a given atmospheric forcing and available resources. CLM-Crop development used measurementsmore » of gross primary productivity (GPP) and net ecosystem exchange (NEE) from AmeriFlux sites to choose parameter values that optimize crop productivity in the model. In this paper, we calibrate these parameters for one crop type, soybean, in order to provide a faithful projection in terms of both plant development and net carbon exchange. Calibration is performed in a Bayesian framework by developing a scalable and adaptive scheme based on sequential Monte Carlo (SMC). The model showed significant improvement of crop productivity with the new calibrated parameters. We demonstrate that the calibrated parameters are applicable across alternative years and different sites.« less
More robust model built using SEM calibration
NASA Astrophysics Data System (ADS)
Wang, Ching-Heng; Liu, Qingwei; Zhang, Liguo
2007-10-01
More robust Optical Proximity Correction (OPC) model is highly required with integrated circuits' CD (Critical Dimension) being smaller. Generally a lot of wafer data of line-end features need to be collected for modeling. Scanning Electron Microscope (SEM) images are sources that include vast 2D information. Adding SEM images calibration into current model flow will be preferred. This paper presents a method using Mentor Graphics' Calibre SEMcal and ContourCal to integrated SEM calibration into model flow. Firstly simulated contour is generated and aligned with SEM image automatically. Secondly contour is edited by fixing the gap etc. CD measurement spots are applied also to get a more accurate contour. Lastly the final contour is extracted and inputted to the model flow. EPE will be calculated from SEM image contour. Thus a more stable and robust OPC model is generated. SEM calibration can accommodate structures such as asymmetrical CDs, line end pullbacks and corner rounding etc and save a lot of time on measuring line end wafer CD.
Computerized Techniques for Calibrating Pressure Balances
NASA Astrophysics Data System (ADS)
Simpson, D. I.
1994-01-01
Pressure balances are generally calibrated by the cross-floating technique, where the forces acting on two similar devices in hydrostatic equilibrium are compared. It is a skilled and time-consuming process which has not previously lent itself to significant automation; computers have mostly been used only to calculate results after measurements have been taken. The objective of the present work was to develop real-time computerized measurement techniques to ease the calibration task, which would fully integrate into a single package with versatile software for calculating and displaying results. The calibration process is now conducted by studying graphical computer displays which derive their inputs from differential-pressure transducers and capacitance or optical displacement sensors. The mass imbalance between oil-operated pressure balances is calculated by interpolating between changes in piston rate-of-fall. Differential-pressure transducers are used to estimate mass imbalances between gas-operated balances, and a quick in situ method for determining their sensitivity has been developed. The new techniques have been successfully applied to a variety of pressure balance designs and substantial reductions in calibration times have been achieved. Reduced levels of scatter have revealed small systematic differences between gauge and absolute modes of operation.
Online Sensor Calibration Monitoring Uncertainty Estimation
Hines, J. Wesley; Rasmussen, Brandon
2005-09-15
Empirical modeling techniques have been applied to online process monitoring to detect equipment and instrumentation degradations. However, few applications provide prediction uncertainty estimates, which can provide a measure of confidence in decisions. This paper presents the development of analytical prediction interval estimation methods for three common nonlinear empirical modeling strategies: artificial neural networks, neural network partial least squares, and local polynomial regression. The techniques are applied to nuclear power plant operational data for sensor calibration monitoring, and the prediction intervals are verified via bootstrap simulation studies.
Multivariate intralocus sexual conflict in seed beetles.
Berger, David; Berg, Elena C; Widegren, William; Arnqvist, Göran; Maklakov, Alexei A
2014-12-01
Intralocus sexual conflict (IaSC) is pervasive because males and females experience differences in selection but share much of the same genome. Traits with integrated genetic architecture should be reservoirs of sexually antagonistic genetic variation for fitness, but explorations of multivariate IaSC are scarce. Previously, we showed that upward artificial selection on male life span decreased male fitness but increased female fitness compared with downward selection in the seed beetle Callosobruchus maculatus. Here, we use these selection lines to investigate sex-specific evolution of four functionally integrated traits (metabolic rate, locomotor activity, body mass, and life span) that collectively define a sexually dimorphic life-history syndrome in many species. Male-limited selection for short life span led to correlated evolution in females toward a more male-like multivariate phenotype. Conversely, males selected for long life span became more female-like, implying that IaSC results from genetic integration of this suite of traits. However, while life span, metabolism, and body mass showed correlated evolution in the sexes, activity did not evolve in males but, surprisingly, did so in females. This led to sexual monomorphism in locomotor activity in short-life lines associated with detrimental effects in females. Our results thus support the general tenet that widespread pleiotropy generates IaSC despite sex-specific genetic architecture.
Fast Multivariate Search on Large Aviation Datasets
NASA Technical Reports Server (NTRS)
Bhaduri, Kanishka; Zhu, Qiang; Oza, Nikunj C.; Srivastava, Ashok N.
2010-01-01
Multivariate Time-Series (MTS) are ubiquitous, and are generated in areas as disparate as sensor recordings in aerospace systems, music and video streams, medical monitoring, and financial systems. Domain experts are often interested in searching for interesting multivariate patterns from these MTS databases which can contain up to several gigabytes of data. Surprisingly, research on MTS search is very limited. Most existing work only supports queries with the same length of data, or queries on a fixed set of variables. In this paper, we propose an efficient and flexible subsequence search framework for massive MTS databases, that, for the first time, enables querying on any subset of variables with arbitrary time delays between them. We propose two provably correct algorithms to solve this problem (1) an R-tree Based Search (RBS) which uses Minimum Bounding Rectangles (MBR) to organize the subsequences, and (2) a List Based Search (LBS) algorithm which uses sorted lists for indexing. We demonstrate the performance of these algorithms using two large MTS databases from the aviation domain, each containing several millions of observations Both these tests show that our algorithms have very high prune rates (>95%) thus needing actual
Network structure of multivariate time series.
Lacasa, Lucas; Nicosia, Vincenzo; Latora, Vito
2015-10-21
Our understanding of a variety of phenomena in physics, biology and economics crucially depends on the analysis of multivariate time series. While a wide range tools and techniques for time series analysis already exist, the increasing availability of massive data structures calls for new approaches for multidimensional signal processing. We present here a non-parametric method to analyse multivariate time series, based on the mapping of a multidimensional time series into a multilayer network, which allows to extract information on a high dimensional dynamical system through the analysis of the structure of the associated multiplex network. The method is simple to implement, general, scalable, does not require ad hoc phase space partitioning, and is thus suitable for the analysis of large, heterogeneous and non-stationary time series. We show that simple structural descriptors of the associated multiplex networks allow to extract and quantify nontrivial properties of coupled chaotic maps, including the transition between different dynamical phases and the onset of various types of synchronization. As a concrete example we then study financial time series, showing that a multiplex network analysis can efficiently discriminate crises from periods of financial stability, where standard methods based on time-series symbolization often fail.
Network structure of multivariate time series
NASA Astrophysics Data System (ADS)
Lacasa, Lucas; Nicosia, Vincenzo; Latora, Vito
2015-10-01
Our understanding of a variety of phenomena in physics, biology and economics crucially depends on the analysis of multivariate time series. While a wide range tools and techniques for time series analysis already exist, the increasing availability of massive data structures calls for new approaches for multidimensional signal processing. We present here a non-parametric method to analyse multivariate time series, based on the mapping of a multidimensional time series into a multilayer network, which allows to extract information on a high dimensional dynamical system through the analysis of the structure of the associated multiplex network. The method is simple to implement, general, scalable, does not require ad hoc phase space partitioning, and is thus suitable for the analysis of large, heterogeneous and non-stationary time series. We show that simple structural descriptors of the associated multiplex networks allow to extract and quantify nontrivial properties of coupled chaotic maps, including the transition between different dynamical phases and the onset of various types of synchronization. As a concrete example we then study financial time series, showing that a multiplex network analysis can efficiently discriminate crises from periods of financial stability, where standard methods based on time-series symbolization often fail.
Benchmarking a reduced multivariate polynomial pattern classifier.
Toh, Kar-Ann; Tran, Quoc-Long; Srinivasan, Dipti
2004-06-01
A novel method using a reduced multivariate polynomial model has been developed for biometric decision fusion where simplicity and ease of use could be a concern. However, much to our surprise, the reduced model was found to have good classification accuracy for several commonly used data sets from the Web. In this paper, we extend the single output model to a multiple outputs model to handle multiple class problems. The method is particularly suitable for problems with small number of features and large number of examples. Basic component of this polynomial model boils down to construction of new pattern features which are sums of the original features and combination of these new and original features using power and product terms. A linear regularized least-squares predictor is then built using these constructed features. The number of constructed feature terms varies linearly with the order of the polynomial, instead of having a power law in the case of full multivariate polynomials. The method is simple as it amounts to only a few lines of Matlab code. We perform extensive experiments on this reduced model using 42 data sets. Our results compared remarkably well with best reported results of several commonly used algorithms from the literature. Both the classification accuracy and efficiency aspects are reported for this reduced model.
An Integrated Multivariable Artificial Pancreas Control System
Turksoy, Kamuran; Quinn, Lauretta T.; Littlejohn, Elizabeth
2014-01-01
The objective was to develop a closed-loop (CL) artificial pancreas (AP) control system that uses continuous measurements of glucose concentration and physiological variables, integrated with a hypoglycemia early alarm module to regulate glucose concentration and prevent hypoglycemia. Eleven open-loop (OL) and 9 CL experiments were performed. A multivariable adaptive artificial pancreas (MAAP) system was used for the first 6 CL experiments. An integrated multivariable adaptive artificial pancreas (IMAAP) system consisting of MAAP augmented with a hypoglycemia early alarm system was used during the last 3 CL experiments. Glucose values and physical activity information were measured and transferred to the controller every 10 minutes and insulin suggestions were entered to the pump manually. All experiments were designed to be close to real-life conditions. Severe hypoglycemic episodes were seen several times during the OL experiments. With the MAAP system, the occurrence of severe hypoglycemia was decreased significantly (P < .01). No hypoglycemia was seen with the IMAAP system. There was also a significant difference (P < .01) between OL and CL experiments with regard to percentage of glucose concentration (54% vs 58%) that remained within target range (70-180 mg/dl). Integration of an adaptive control and hypoglycemia early alarm system was able to keep glucose concentration values in target range in patients with type 1 diabetes. Postprandial hypoglycemia and exercise-induced hypoglycemia did not occur when this system was used. Physical activity information improved estimation of the blood glucose concentration and effectiveness of the control system. PMID:24876613
An integrated multivariable artificial pancreas control system.
Turksoy, Kamuran; Quinn, Lauretta T; Littlejohn, Elizabeth; Cinar, Ali
2014-05-01
The objective was to develop a closed-loop (CL) artificial pancreas (AP) control system that uses continuous measurements of glucose concentration and physiological variables, integrated with a hypoglycemia early alarm module to regulate glucose concentration and prevent hypoglycemia. Eleven open-loop (OL) and 9 CL experiments were performed. A multivariable adaptive artificial pancreas (MAAP) system was used for the first 6 CL experiments. An integrated multivariable adaptive artificial pancreas (IMAAP) system consisting of MAAP augmented with a hypoglycemia early alarm system was used during the last 3 CL experiments. Glucose values and physical activity information were measured and transferred to the controller every 10 minutes and insulin suggestions were entered to the pump manually. All experiments were designed to be close to real-life conditions. Severe hypoglycemic episodes were seen several times during the OL experiments. With the MAAP system, the occurrence of severe hypoglycemia was decreased significantly (P < .01). No hypoglycemia was seen with the IMAAP system. There was also a significant difference (P < .01) between OL and CL experiments with regard to percentage of glucose concentration (54% vs 58%) that remained within target range (70-180 mg/dl). Integration of an adaptive control and hypoglycemia early alarm system was able to keep glucose concentration values in target range in patients with type 1 diabetes. Postprandial hypoglycemia and exercise-induced hypoglycemia did not occur when this system was used. Physical activity information improved estimation of the blood glucose concentration and effectiveness of the control system.
Network structure of multivariate time series
Lacasa, Lucas; Nicosia, Vincenzo; Latora, Vito
2015-01-01
Our understanding of a variety of phenomena in physics, biology and economics crucially depends on the analysis of multivariate time series. While a wide range tools and techniques for time series analysis already exist, the increasing availability of massive data structures calls for new approaches for multidimensional signal processing. We present here a non-parametric method to analyse multivariate time series, based on the mapping of a multidimensional time series into a multilayer network, which allows to extract information on a high dimensional dynamical system through the analysis of the structure of the associated multiplex network. The method is simple to implement, general, scalable, does not require ad hoc phase space partitioning, and is thus suitable for the analysis of large, heterogeneous and non-stationary time series. We show that simple structural descriptors of the associated multiplex networks allow to extract and quantify nontrivial properties of coupled chaotic maps, including the transition between different dynamical phases and the onset of various types of synchronization. As a concrete example we then study financial time series, showing that a multiplex network analysis can efficiently discriminate crises from periods of financial stability, where standard methods based on time-series symbolization often fail. PMID:26487040
NASA Metrology and Calibration, 1980
NASA Technical Reports Server (NTRS)
1981-01-01
The proceedings of the fourth annual NASA Metrology and Calibration Workshop are presented. This workshop covered (1) review and assessment of NASA metrology and calibration activities by NASA Headquarters, (2) results of audits by the Office of Inspector General, (3) review of a proposed NASA Equipment Management System, (4) current and planned field center activities, (5) National Bureau of Standards (NBS) calibration services for NASA, (6) review of NBS's Precision Measurement and Test Equipment Project activities, (7) NASA instrument loan pool operations at two centers, (8) mobile cart calibration systems at two centers, (9) calibration intervals and decals, (10) NASA Calibration Capabilities Catalog, and (11) development of plans and objectives for FY 1981. Several papers in this proceedings are slide presentations only.
A TRMM-Calibrated Infrared Technique for Global Rainfall Estimation
NASA Technical Reports Server (NTRS)
Negri, Andrew J.; Adler, Robert F.
2002-01-01
The development of a satellite infrared (IR) technique for estimating convective and stratiform rainfall and its application in studying the diurnal variability of rainfall on a global scale is presented. The Convective-Stratiform Technique (CST), calibrated by coincident, physically retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR), is applied over the global tropics during 2001. The technique is calibrated separately over land and ocean, making ingenious use of the IR data from the TRMM Visible/Infrared Scanner (VIRS) before application to global geosynchronous satellite data. The low sampling rate of TRMM PR imposes limitations on calibrating IR-based techniques; however, our research shows that PR observations can be applied to improve IR-based techniques significantly by selecting adequate calibration areas and calibration length. The diurnal cycle of rainfall, as well as the division between convective and stratiform rainfall will be presented. The technique is validated using available data sets and compared to other global rainfall products such as Global Precipitation Climatology Project (GPCP) IR product, calibrated with TRMM Microwave Imager (TMI) data. The calibrated CST technique has the advantages of high spatial resolution (4 km), filtering of non-raining cirrus clouds, and the stratification of the rainfall into its convective and stratiform components, the latter being important for the calculation of vertical profiles of latent heating.
A TRMM-Calibrated Infrared Technique for Global Rainfall Estimation
NASA Technical Reports Server (NTRS)
Negri, Andrew J.; Adler, Robert F.; Xu, Li-Ming
2003-01-01
This paper presents the development of a satellite infrared (IR) technique for estimating convective and stratiform rainfall and its application in studying the diurnal variability of rainfall on a global scale. The Convective-Stratiform Technique (CST), calibrated by coincident, physically retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR), is applied over the global tropics during summer 2001. The technique is calibrated separately over land and ocean, making ingenious use of the IR data from the TRMM Visible/Infrared Scanner (VIRS) before application to global geosynchronous satellite data. The low sampling rate of TRMM PR imposes limitations on calibrating IR- based techniques; however, our research shows that PR observations can be applied to improve IR-based techniques significantly by selecting adequate calibration areas and calibration length. The diurnal cycle of rainfall, as well as the division between convective and t i f m rainfall will be presented. The technique is validated using available data sets and compared to other global rainfall products such as Global Precipitation Climatology Project (GPCP) IR product, calibrated with TRMM Microwave Imager (TMI) data. The calibrated CST technique has the advantages of high spatial resolution (4 km), filtering of non-raining cirrus clouds, and the stratification of the rainfall into its convective and stratiform components, the latter being important for the calculation of vertical profiles of latent heating.
Systematic Calibration for a Backpacked Spherical Photogrammetry Imaging System
NASA Astrophysics Data System (ADS)
Rau, J. Y.; Su, B. W.; Hsiao, K. W.; Jhan, J. P.
2016-06-01
A spherical camera can observe the environment for almost 720 degrees' field of view in one shoot, which is useful for augmented reality, environment documentation, or mobile mapping applications. This paper aims to develop a spherical photogrammetry imaging system for the purpose of 3D measurement through a backpacked mobile mapping system (MMS). The used equipment contains a Ladybug-5 spherical camera, a tactical grade positioning and orientation system (POS), i.e. SPAN-CPT, and an odometer, etc. This research aims to directly apply photogrammetric space intersection technique for 3D mapping from a spherical image stereo-pair. For this purpose, several systematic calibration procedures are required, including lens distortion calibration, relative orientation calibration, boresight calibration for direct georeferencing, and spherical image calibration. The lens distortion is serious on the ladybug-5 camera's original 6 images. Meanwhile, for spherical image mosaicking from these original 6 images, we propose the use of their relative orientation and correct their lens distortion at the same time. However, the constructed spherical image still contains systematic error, which will reduce the 3D measurement accuracy. Later for direct georeferencing purpose, we need to establish a ground control field for boresight/lever-arm calibration. Then, we can apply the calibrated parameters to obtain the exterior orientation parameters (EOPs) of all spherical images. In the end, the 3D positioning accuracy after space intersection will be evaluated, including EOPs obtained by structure from motion method.
Extending Sensor Calibration Intervals in Nuclear Power Plants
Coble, Jamie B.; Meyer, Ryan M.; Ramuhalli, Pradeep; Bond, Leonard J.; Shumaker, Brent; Hashemian, Hash
2012-11-15
Currently in the USA, sensor recalibration is required at every refueling outage, and it has emerged as a critical path item for shortening outage duration. International application of calibration monitoring, such as at the Sizewell B plant in UK, has shown that sensors may operate for eight years, or longer, within calibration tolerances. Online monitoring can be employed to identify those sensors which require calibration, allowing for calibration of only those sensors which need it. The US NRC accepted the general concept of online monitoring for sensor calibration monitoring in 2000, but no plants have been granted the necessary license amendment to apply it. This project addresses key issues in advanced recalibration methodologies and provides the science base to enable adoption of best practices for applying online monitoring, resulting in a public domain standardized methodology for sensor calibration interval extension. Research to develop this methodology will focus on three key areas: (1) quantification of uncertainty in modeling techniques used for calibration monitoring, with a particular focus on non-redundant sensor models; (2) accurate determination of acceptance criteria and quantification of the effect of acceptance criteria variability on system performance; and (3) the use of virtual sensor estimates to replace identified faulty sensors to extend operation to the next convenient maintenance opportunity.
In situ hydrodynamic lateral force calibration of AFM colloidal probes.
Ryu, Sangjin; Franck, Christian
2011-11-01
Lateral force microscopy (LFM) is an application of atomic force microscopy (AFM) to sense lateral forces applied to the AFM probe tip. Recent advances in tissue engineering and functional biomaterials have shown a need for the surface characterization of their material and biochemical properties under the application of lateral forces. LFM equipped with colloidal probes of well-defined tip geometries has been a natural fit to address these needs but has remained limited to provide primarily qualitative results. For quantitative measurements, LFM requires the successful determination of the lateral force or torque conversion factor of the probe. Usually, force calibration results obtained in air are used for force measurements in liquids, but refractive index differences between air and liquids induce changes in the conversion factor. Furthermore, in the case of biochemically functionalized tips, damage can occur during calibration because tip-surface contact is inevitable in most calibration methods. Therefore, a nondestructive in situ lateral force calibration is desirable for LFM applications in liquids. Here we present an in situ hydrodynamic lateral force calibration method for AFM colloidal probes. In this method, the laterally scanned substrate surface generated a creeping Couette flow, which deformed the probe under torsion. The spherical geometry of the tip enabled the calculation of tip drag forces, and the lateral torque conversion factor was calibrated from the lateral voltage change and estimated torque. Comparisons with lateral force calibrations performed in air show that the hydrodynamic lateral force calibration method enables quantitative lateral force measurements in liquid using colloidal probes.
A six-component force/moment sensor calibration stand
NASA Astrophysics Data System (ADS)
Estlow, Edward G. W.; Kovacevic, Nebojsa
1990-06-01
A compact portable stand for calibration of multicomponent internal balances is described. The stand is designed to control/eliminate misalignments between load trains and the balance being calibrated; it generates forces and moments with pneumatic cylinders for all but rolling moment, which is applied with conventional weights. Load application control is discussed, and performance is analyzed. It is noted that the calibration stand has the ability to sense off-axis loads resulting from distortion/deflections due to the primary loading. Having sensed these off-axis loads, the system can be adjusted to minimize or eliminate them while retaining correct alignment of the primary load with the balance.
Calibration Designs for Non-Monolithic Wind Tunnel Force Balances
NASA Technical Reports Server (NTRS)
Johnson, Thomas H.; Parker, Peter A.; Landman, Drew
2010-01-01
This research paper investigates current experimental designs and regression models for calibrating internal wind tunnel force balances of non-monolithic design. Such calibration methods are necessary for this class of balance because it has an electrical response that is dependent upon the sign of the applied forces and moments. This dependency gives rise to discontinuities in the response surfaces that are not easily modeled using traditional response surface methodologies. An analysis of current recommended calibration models is shown to lead to correlated response model terms. Alternative modeling methods are explored which feature orthogonal or near-orthogonal terms.
New NIR Calibration Models Speed Biomass Composition and Reactivity Characterization
2015-09-01
Obtaining accurate chemical composition and reactivity (measures of carbohydrate release and yield) information for biomass feedstocks in a timely manner is necessary for the commercialization of biofuels. This highlight describes NREL's work to use near-infrared (NIR) spectroscopy and partial least squares multivariate analysis to develop calibration models to predict the feedstock composition and the release and yield of soluble carbohydrates generated by a bench-scale dilute acid pretreatment and enzymatic hydrolysis assay. This highlight is being developed for the September 2015 Alliance S&T Board meeting.