Lo, Kenneth
2011-01-01
Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components. PMID:22125375
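The Box-Cox transformation that introduces skewness handling in this model has a simple closed form. As a hedged sketch in Python (function names are illustrative, not from the authors' software; the case lambda = 0 is the log transform by continuity):

```python
import math

def box_cox(y, lam):
    """Box-Cox transform of a positive observation y; lam=0 gives log(y)."""
    if lam == 0:
        return math.log(y)
    return (y ** lam - 1.0) / lam

def box_cox_inverse(z, lam):
    """Invert the transform, mapping a transformed value back to the original scale."""
    if lam == 0:
        return math.exp(z)
    return (lam * z + 1.0) ** (1.0 / lam)
```

In the mixture model, a transformation parameter of this form is selected per component alongside the t-distribution parameters during EM.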
Lo, Kenneth; Gottardo, Raphael
2012-01-01
Deterministic annealing for density estimation by multivariate normal mixtures
NASA Astrophysics Data System (ADS)
Kloppenburg, Martin; Tavan, Paul
1997-03-01
An approach to maximum-likelihood density estimation by mixtures of multivariate normal distributions for large high-dimensional data sets is presented. Conventionally that problem is tackled by notoriously unstable expectation-maximization (EM) algorithms. We remove these instabilities by the introduction of soft constraints, enabling deterministic annealing. Our developments are motivated by the proof that algorithmically stable fuzzy clustering methods that are derived from statistical physics analogs are special cases of EM procedures.
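The annealing idea can be illustrated in miniature. The sketch below is my own toy construction, not the authors' algorithm: it runs EM for a one-dimensional, equal-variance mixture with tempered responsibilities, raising an inverse temperature beta toward 1 so that early iterations are smooth and hard assignments emerge only gradually.

```python
import math

def annealed_em(xs, k=2, betas=(0.2, 0.5, 1.0), iters=20, sigma=1.0):
    """Deterministic-annealing EM sketch for a 1-D equal-variance mixture."""
    xs_sorted = sorted(xs)
    # spread the initial means over the data quantiles
    mus = [xs_sorted[int((i + 0.5) * len(xs) / k)] for i in range(k)]
    for beta in betas:          # slowly "cool" toward the true posterior
        for _ in range(iters):
            # E-step: tempered responsibilities r_k(x) proportional to
            # exp(-beta * (x - mu_k)^2 / (2 sigma^2))
            resp = []
            for x in xs:
                w = [math.exp(-beta * (x - m) ** 2 / (2 * sigma ** 2)) for m in mus]
                s = sum(w)
                resp.append([wi / s for wi in w])
            # M-step: responsibility-weighted mean updates
            for j in range(k):
                den = sum(r[j] for r in resp)
                mus[j] = sum(r[j] * x for r, x in zip(resp, xs)) / den
    return sorted(mus)
```

At beta = 1 this reduces to ordinary EM for the means; the earlier, flatter stages play the role of the soft constraints that stabilize the optimization.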
Neelon, Brian; Gelfand, Alan E.; Miranda, Marie Lynn
2013-01-01
Researchers in the health and social sciences often wish to examine joint spatial patterns for two or more related outcomes. Examples include infant birth weight and gestational length, psychosocial and behavioral indices, and educational test scores from different cognitive domains. We propose a multivariate spatial mixture model for the joint analysis of continuous individual-level outcomes that are referenced to areal units. The responses are modeled as a finite mixture of multivariate normals, which accommodates a wide range of marginal response distributions and allows investigators to examine covariate effects within subpopulations of interest. The model has a hierarchical structure built at the individual level (i.e., individuals are nested within areal units), and thus incorporates both individual- and areal-level predictors as well as spatial random effects for each mixture component. Conditional autoregressive (CAR) priors on the random effects provide spatial smoothing and allow the shape of the multivariate distribution to vary flexibly across geographic regions. We adopt a Bayesian modeling approach and develop an efficient Markov chain Monte Carlo model fitting algorithm that relies primarily on closed-form full conditionals. We use the model to explore geographic patterns in end-of-grade math and reading test scores among school-age children in North Carolina. PMID:26401059
Hybrid least squares multivariate spectral analysis methods
Haaland, David M.
2004-03-23
A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following prediction or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The hybrid method herein means a combination of an initial calibration step with subsequent analysis by an inverse multivariate analysis method. A spectral shape herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The shape can be continuous, discontinuous, or even discrete points illustrative of the particular effect.
Hybrid least squares multivariate spectral analysis methods
Haaland, David M.
2002-01-01
A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following estimation or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The "hybrid" method herein means a combination of an initial classical least squares analysis calibration step with subsequent analysis by an inverse multivariate analysis method. A "spectral shape" herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The "shape" can be continuous, discontinuous, or even discrete points illustrative of the particular effect.
Multidimensional stochastic approximation using locally contractive functions
NASA Technical Reports Server (NTRS)
Lawton, W. M.
1975-01-01
A Robbins-Monro type multidimensional stochastic approximation algorithm which converges in mean square and with probability one to the fixed point of a locally contractive regression function is developed. The algorithm is applied to obtain maximum likelihood estimates of the parameters for a mixture of multivariate normal distributions.
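A scalar Robbins-Monro sketch conveys the idea (the paper's algorithm is multidimensional; the step-size schedule a/n and the example regression function below are my illustrative choices):

```python
import random

def robbins_monro(noisy_g, x0, steps=2000, a=1.0, seed=0):
    """Stochastic approximation: x_{n+1} = x_n - (a/n) * noisy_g(x_n).
    With decreasing steps, the iterates converge to the root of the
    regression function E[noisy_g]."""
    random.seed(seed)
    x = x0
    for n in range(1, steps + 1):
        x = x - (a / n) * noisy_g(x)
    return x

# noisy observations of g(x) = x - 3; the root (fixed point) is x = 3
g = lambda x: (x - 3.0) + random.gauss(0.0, 0.5)
```

In the mixture-estimation application, the role of `noisy_g` is played by a locally contractive map built from the likelihood equations.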
ERIC Educational Resources Information Center
Mun, Eun Young; von Eye, Alexander; Bates, Marsha E.; Vaschillo, Evgeny G.
2008-01-01
Model-based cluster analysis is a new clustering procedure to investigate population heterogeneity utilizing finite mixture multivariate normal densities. It is an inferentially based, statistically principled procedure that allows comparison of nonnested models using the Bayesian information criterion to compare multiple models and identify the…
1983-06-16
has been advocated by Gnanadesikan and Wilk (1969), and others in the literature. This suggests that, if we use the formal significance test type... American Statistical Association, 62, 1159-1178. Gnanadesikan, R., and Wilk, M. B. (1969). Data Analytic Methods in Multivariate Statistical Analysis. In
Proceedings of the Third Annual Symposium on Mathematical Pattern Recognition and Image Analysis
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.
1985-01-01
Topics addressed include: multivariate spline method; normal mixture analysis applied to remote sensing; image data analysis; classifications in spatially correlated environments; probability density functions; graphical nonparametric methods; subpixel registration analysis; hypothesis integration in image understanding systems; rectification of satellite scanner imagery; spatial variation in remotely sensed images; smooth multidimensional interpolation; and optimal frequency domain textural edge detection filters.
Franco-Pedroso, Javier; Ramos, Daniel; Gonzalez-Rodriguez, Joaquin
2016-01-01
In forensic science, trace evidence found at a crime scene and on a suspect has to be evaluated from the measurements performed on it, usually in the form of multivariate data (for example, several chemical compounds or physical characteristics). In order to assess the strength of that evidence, the likelihood ratio framework is being increasingly adopted. Several methods have been derived in order to obtain likelihood ratios directly from univariate or multivariate data by modelling both the variation appearing between observations (or features) coming from the same source (within-source variation) and that appearing between observations coming from different sources (between-source variation). In the widely used multivariate kernel likelihood-ratio approach, the within-source distribution is assumed to be normally distributed and constant among different sources, and the between-source variation is modelled through a kernel density function (KDF). In order to better fit the observed distribution of the between-source variation, this paper presents a different approach in which a Gaussian mixture model (GMM) is used instead of a KDF. As will be shown, this approach provides better-calibrated likelihood ratios as measured by the log-likelihood-ratio cost (Cllr) in experiments performed on freely available forensic datasets involving different types of trace evidence: inks, glass fragments and car paints. PMID:26901680
A method of using cluster analysis to study statistical dependence in multivariate data
NASA Technical Reports Server (NTRS)
Borucki, W. J.; Card, D. H.; Lyle, G. C.
1975-01-01
A technique is presented that uses both cluster analysis and a Monte Carlo significance test of clusters to discover associations between variables in multidimensional data. The method is applied to an example of a noisy function in three-dimensional space, to a sample from a mixture of three bivariate normal distributions, and to the well-known Fisher's Iris data.
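The Monte Carlo significance idea generalizes readily. A hedged generic sketch (names and the +1 correction convention are my choices, not the paper's implementation): compute the statistic on the observed data, then on many simulated null data sets, and report the fraction at least as extreme.

```python
import random

def mc_pvalue(stat, data, null_sampler, n_sim=999, seed=1):
    """Monte Carlo significance test: the p-value is the fraction of
    simulated null data sets whose statistic is at least as extreme as
    the observed one (with the usual +1 correction for the observed set)."""
    random.seed(seed)
    obs = stat(data)
    hits = sum(1 for _ in range(n_sim) if stat(null_sampler()) >= obs)
    return (hits + 1) / (n_sim + 1)
```

For cluster significance, `stat` would measure cluster compactness and `null_sampler` would draw unclustered data under the null hypothesis.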
Simultaneous calibration of ensemble river flow predictions over an entire range of lead times
NASA Astrophysics Data System (ADS)
Hemri, S.; Fundel, F.; Zappa, M.
2013-10-01
Probabilistic estimates of future water levels and river discharge are usually simulated with hydrologic models using ensemble weather forecasts as main inputs. As hydrologic models are imperfect and the meteorological ensembles tend to be biased and underdispersed, the ensemble forecasts for river runoff typically are biased and underdispersed, too. Thus, in order to achieve both reliable and sharp predictions, statistical postprocessing is required. In this work, Bayesian model averaging (BMA) is applied to statistically postprocess raw ensemble runoff forecasts for a catchment in Switzerland, at lead times ranging from 1 to 240 h. The raw forecasts have been obtained using deterministic and ensemble forcing meteorological models with different forecast lead time ranges. First, BMA is applied based on mixtures of univariate normal distributions, subject to the assumption of independence between distinct lead times. Then, the independence assumption is relaxed in order to estimate multivariate runoff forecasts over the entire range of lead times simultaneously, based on a BMA version that uses multivariate normal distributions. Since river runoff is a highly skewed variable, Box-Cox transformations are applied in order to achieve approximate normality. Both univariate and multivariate BMA approaches are able to generate well-calibrated probabilistic forecasts that are considerably sharper than climatological forecasts. Additionally, multivariate BMA provides a promising approach for incorporating temporal dependencies into the postprocessed forecasts. Its major advantage over univariate BMA is an increase in reliability when the forecast system is changing due to model availability.
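The univariate BMA predictive distribution described here is a weighted mixture of normals centred on the (bias-corrected) ensemble members. A minimal sketch, assuming a shared spread parameter (in practice the weights and spread are fitted by maximum likelihood over a training period):

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def bma_pdf(x, members, weights, sigma):
    """BMA predictive density: a weighted mixture of normals, each centred
    on one bias-corrected ensemble member; sigma is a shared spread."""
    return sum(w * normal_pdf(x, m, sigma) for m, w in zip(members, weights))
```

The multivariate version replaces each component with a multivariate normal over all lead times, which is what lets the postprocessed forecasts carry temporal dependence.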
Heggeseth, Brianna C; Jewell, Nicholas P
2013-07-20
Multivariate Gaussian mixtures are a class of models that provide a flexible parametric approach for the representation of heterogeneous multivariate outcomes. When the outcome is a vector of repeated measurements taken on the same subject, there is often inherent dependence between observations. However, a common covariance assumption is conditional independence: given the mixture component label, the outcomes for subjects are independent. In this paper, we study, through asymptotic bias calculations and simulation, the impact of covariance misspecification in multivariate Gaussian mixtures. Although maximum likelihood estimators of regression and mixing probability parameters are not consistent under misspecification, they have little asymptotic bias when mixture components are well separated or if the assumed correlation is close to the truth even when the covariance is misspecified. We also present a robust standard error estimator and show that it outperforms conventional estimators in simulations and can indicate that the model is misspecified. Body mass index data from a national longitudinal study are used to demonstrate the effects of misspecification on potential inferences made in practice. Copyright © 2013 John Wiley & Sons, Ltd.
Extensions to Multivariate Space Time Mixture Modeling of Small Area Cancer Data.
Carroll, Rachel; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Aregay, Mehreteab; Watjou, Kevin
2017-05-09
Oral cavity and pharynx cancer, even when considered together, is a fairly rare disease. Implementation of multivariate modeling with lung and bronchus cancer, as well as melanoma cancer of the skin, could lead to better inference for oral cavity and pharynx cancer. The multivariate structure of these models is accomplished via the use of shared random effects, as well as other multivariate prior distributions. The results in this paper indicate that care should be taken when executing these types of models, and that multivariate mixture models may not always be the ideal option, depending on the data of interest.
Dinç, Erdal; Ozdemir, Abdil
2005-01-01
A multivariate chromatographic calibration technique was developed for the quantitative analysis of binary mixtures of enalapril maleate (EA) and hydrochlorothiazide (HCT) in tablets in the presence of losartan potassium (LST). The mathematical algorithm of the technique is based on linear regression equations constructed from the relationship between concentration and peak area at a five-wavelength set. The algorithm of this calibration model, which has a simple mathematical content, is briefly described. The approach is a powerful mathematical tool for optimal chromatographic multivariate calibration and for the elimination of fluctuations arising from instrumental and experimental conditions; it involves the reduction of multivariate linear regression functions to a univariate data set. The model was validated by analyzing various synthetic binary mixtures and by using the standard addition technique. The developed calibration technique was then applied to the analysis of real pharmaceutical tablets containing EA and HCT, and the results were compared with those obtained by a classical HPLC method. The proposed multivariate chromatographic calibration gave better results than classical HPLC.
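The univariate reduction at the core of such calibration rests on ordinary least-squares calibration curves relating peak area to concentration. A hedged sketch of that building block (not the authors' exact algorithm, which combines several wavelengths):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (the calibration curve)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def predict_conc(area, a, b):
    """Invert the calibration curve: concentration from a measured peak area."""
    return (area - a) / b
```

In the multivariate setting, one such regression is fitted per wavelength and the resulting equations are combined to estimate each analyte's concentration.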
Problems with Multivariate Normality: Can the Multivariate Bootstrap Help?
ERIC Educational Resources Information Center
Thompson, Bruce
Multivariate normality is required for some statistical tests. This paper explores the implications of violating the assumption of multivariate normality and illustrates a graphical procedure for evaluating multivariate normality. The logic for using the multivariate bootstrap is presented. The multivariate bootstrap can be used when distribution…
Processes of Heat Transfer in Rheologically Unstable Mixtures of Organic Origin
NASA Astrophysics Data System (ADS)
Tkachenko, S. I.; Pishenina, N. V.; Rumyantseva, T. Yu.
2014-05-01
The dependence of the coefficient of heat transfer from the heat-exchange surface to a rheologically unstable organic mixture on the thermohydrodynamic state of the mixture and its prehistory has been established. A method for multivariant investigation of the process of heat transfer in compound organic mixtures has been proposed; this method makes it possible to evaluate the character and peculiarities of change in the rheological structure of the mixture as functions of the thermohydrodynamic conditions of its treatment. It has been shown that multivariant investigation of the heat-transfer intensity in rheologically unstable organic mixtures, taking their prehistory into account, makes it possible to evaluate the intensity of heat transfer in a biotechnological system for the production of energy carriers at the design stage.
Moran, Patrick W.; Nowell, Lisa H.; Kemble, Nile E.; Mahler, Barbara J.; Waite, Ian R.; Van Metre, Peter C.
2017-01-01
Simultaneous assessment of sediment chemistry, sediment toxicity, and macroinvertebrate communities can provide multiple lines of evidence when investigating relations between sediment contaminants and ecological degradation. These three measures were evaluated at 99 wadable stream sites across 11 states in the Midwestern United States during the summer of 2013 to assess sediment pollution across a large agricultural landscape. This evaluation considers an extensive suite of sediment chemistry totaling 274 analytes (polycyclic aromatic hydrocarbons, organochlorine compounds, polychlorinated biphenyls, polybrominated diphenyl ethers, trace elements, and current-use pesticides) and a mixture assessment based on the ratios of detected compounds to available effects-based benchmarks. The sediments were tested for toxicity with the amphipod Hyalella azteca (28-d exposure), the midge Chironomus dilutus (10-d), and, at a few sites, with the freshwater mussel Lampsilis siliquoidea (28-d). Sediment concentrations, normalized to organic carbon content, infrequently exceeded benchmarks for aquatic health, which was generally consistent with low rates of observed toxicity. However, the benchmark-based mixture score and the pyrethroid insecticide bifenthrin were significantly related to observed sediment toxicity. The sediment mixture score and bifenthrin were also significant predictors of the upper limits of several univariate measures of the macroinvertebrate community (EPT percent, MMI (Macroinvertebrate Multimetric Index) Score, Ephemeroptera and Trichoptera richness) using quantile regression. Multivariate pattern matching (Mantel-like tests) of macroinvertebrate species per site to identified contaminant metrics and sediment toxicity also indicate that the sediment mixture score and bifenthrin have weak, albeit significant, influence on the observed invertebrate community composition. 
Together, these three lines of evidence (toxicity tests, univariate metrics, and multivariate community analysis) suggest that elevated contaminant concentrations in sediments, in particular bifenthrin, are limiting macroinvertebrate communities in several of these Midwest streams.
Combining Mixture Components for Clustering*
Baudry, Jean-Patrick; Raftery, Adrian E.; Celeux, Gilles; Lo, Kenneth; Gottardo, Raphaël
2010-01-01
Model-based clustering consists of fitting a mixture model to data and identifying each cluster with one of its components. Multivariate normal distributions are typically used. The number of clusters is usually determined from the data, often using BIC. In practice, however, individual clusters can be poorly fitted by Gaussian distributions, and in that case model-based clustering tends to represent one non-Gaussian cluster by a mixture of two or more Gaussian distributions. If the number of mixture components is interpreted as the number of clusters, this can lead to overestimation of the number of clusters. This is because BIC selects the number of mixture components needed to provide a good approximation to the density, rather than the number of clusters as such. We propose first selecting the total number of Gaussian mixture components, K, using BIC and then combining them hierarchically according to an entropy criterion. This yields a unique soft clustering for each number of clusters less than or equal to K. These clusterings can be compared on substantive grounds, and we also describe an automatic way of selecting the number of clusters via a piecewise linear regression fit to the rescaled entropy plot. We illustrate the method with simulated data and a flow cytometry dataset. Supplemental Materials are available on the journal Web site and described at the end of the paper. PMID:20953302
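The entropy-based merging step described above can be sketched in a toy form, assuming the responsibility matrix (each row a data point's soft component memberships) is already available from a fitted mixture; the function names are illustrative, not from the authors' software:

```python
import math

def soft_entropy(resp):
    """Total entropy of a soft clustering: -sum_i sum_k r_ik log r_ik."""
    return -sum(r * math.log(r) for row in resp for r in row if r > 0)

def merge_pair(resp, j, k):
    """Merge components j and k by summing their responsibility columns."""
    merged = []
    for row in resp:
        new = [v for i, v in enumerate(row) if i not in (j, k)]
        new.append(row[j] + row[k])
        merged.append(new)
    return merged

def best_merge(resp):
    """Pick the pair of components whose merge yields the lowest entropy."""
    K = len(resp[0])
    best = None
    for j in range(K):
        for k in range(j + 1, K):
            e = soft_entropy(merge_pair(resp, j, k))
            if best is None or e < best[0]:
                best = (e, j, k)
    return best
```

Repeating `best_merge` hierarchically yields one soft clustering for each number of clusters up to K, which is the structure the paper's piecewise regression on the entropy plot then selects among.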
Chabreyrie, David; Chauvet, Serge; Guyon, François; Salagoïty, Marie-Hélène; Antinelli, Jean-François; Medina, Bernard
2008-08-27
Protein profiles, obtained by high-performance capillary electrophoresis (HPCE) on previously dialyzed white wines, combined with shikimic acid concentration and multivariate analysis, were used for the determination of the grape variety composition of a still white wine. Six varieties were studied through monovarietal wines elaborated in the laboratory: Chardonnay (24 samples), Chenin (24), Petit Manseng (7), Sauvignon (37), Semillon (24), and Ugni Blanc (9). Laboratory mixtures were prepared from authentic monovarietal wines according to a Plackett-Burman sampling plan. After protein peak area normalization, a matrix containing the protein results of the wines (mixtures and monovarietal) was assembled. Partial least-squares processing was applied to this matrix, allowing the construction of a model that provided a varietal quantification precision of around 20% for most of the grape varieties studied. The model was applied to commercial samples from various geographical origins, providing encouraging results for control purposes.
Carroll, Rachel; Lawson, Andrew B; Kirby, Russell S; Faes, Christel; Aregay, Mehreteab; Watjou, Kevin
2017-01-01
Many types of cancer have an underlying spatiotemporal distribution. Spatiotemporal mixture modeling can offer a flexible approach to risk estimation via the inclusion of latent variables. In this article, we examine the application and benefits of using four different spatiotemporal mixture modeling methods in the modeling of cancer of the lung and bronchus as well as "other" respiratory cancer incidences in the state of South Carolina. Of the methods tested, no single method outperforms the other methods; which method is best depends on the cancer under consideration. The lung and bronchus cancer incidence outcome is best described by the univariate modeling formulation, whereas the "other" respiratory cancer incidence outcome is best described by the multivariate modeling formulation. Spatiotemporal multivariate mixture methods can aid in the modeling of cancers with small and sparse incidences when including information from a related, more common type of cancer. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Schölzel, C.; Friederichs, P.
2008-10-01
Probability distributions of multivariate random variables are generally more complex than their univariate counterparts, owing to possible nonlinear dependence between the random variables. One approach to this problem is the use of copulas, which have become popular over recent years, especially in fields like econometrics, finance, risk management, and insurance. Since this newly emerging field includes various practices, a controversial discussion, and a vast literature, it is difficult to get an overview. The aim of this paper is therefore to provide a brief overview of copulas for application in meteorology and climate research. We examine the advantages and disadvantages compared to alternative approaches such as mixture models, summarize the current problem of goodness-of-fit (GOF) tests for copulas, and discuss the connection with multivariate extremes. An application to station data shows the simplicity and the capabilities as well as the limitations of this approach. Observations of daily precipitation and temperature are fitted to a bivariate model, demonstrating that copulas are a valuable complement to the commonly used methods.
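The copula construction separates marginal behaviour from dependence. A minimal sketch for the Gaussian copula (my own illustration, unrelated to the paper's station-data model): draw two correlated standard normals, then push each through the normal CDF so the margins are uniform while the dependence structure survives.

```python
import math
import random

def gaussian_copula_pair(rho, n, seed=2):
    """Sample n (u, v) pairs from a bivariate Gaussian copula with
    correlation parameter rho; both margins are uniform on (0, 1)."""
    random.seed(seed)
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # normal CDF
    pairs = []
    for _ in range(n):
        z1 = random.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * random.gauss(0.0, 1.0)
        pairs.append((phi(z1), phi(z2)))
    return pairs
```

To model real data, each uniform coordinate would then be mapped through the inverse CDF of the fitted marginal (e.g. a gamma for precipitation).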
NASA Astrophysics Data System (ADS)
Rachmawati; Rohaeti, E.; Rafi, M.
2017-05-01
Taro flour on the market is usually sold at a higher price than wheat and sago flour. This price difference creates an incentive to adulterate taro flour with wheat and sago flour, so methods for identification and authentication are needed. In this study, near-infrared (NIR) spectra combined with multivariate analysis were used to identify and authenticate taro flour adulterated with wheat and sago flour. The authentication model was developed using taro flour adulterated with 5%, 25%, and 50% wheat or sago flour. Before the multivariate analysis, signal preprocessing, namely normalization and standard normal variate correction, was applied to the NIR spectra. We used principal component analysis followed by discriminant analysis to build the identification and authentication model. About 90.48% of the taro flour mixed with wheat flour and 85% of the taro flour mixed with sago flour were successfully classified into their groups, so the combination of NIR spectra with chemometrics can be used for the identification and authentication of taro flour adulterated with wheat and sago flour.
Li, Min; Zhang, Lu; Yao, Xiaolong; Jiang, Xingyu
2017-01-01
The emerging membrane introduction mass spectrometry (MIMS) technique has been successfully used to detect benzene, toluene, ethylbenzene and xylene (BTEX), but overlapped spectra have hindered its further application to the analysis of mixtures. Multivariate calibration, an efficient method to analyze mixtures, has been widely applied. In this paper, we compared univariate and multivariate analyses for quantification of the individual components of mixture samples. The results showed that the univariate analysis produces poor models, with regression coefficients of 0.912, 0.867, 0.440 and 0.351 for BTEX, respectively. For multivariate analysis, a comparison with the partial least-squares (PLS) model shows that orthogonal partial least-squares (OPLS) regression exhibits the best performance, with regression coefficients of 0.995, 0.999, 0.980 and 0.976, favorable calibration parameters (RMSEC and RMSECV) and a favorable validation parameter (RMSEP). Furthermore, the OPLS model exhibits a good recovery of 73.86-122.20% and a repeatability relative standard deviation (RSD) of 1.14-4.87%. Thus, MIMS coupled with OPLS regression provides an optimal approach for quantitative BTEX mixture analysis in monitoring and predicting water pollution.
Estimation of value at risk and conditional value at risk using normal mixture distributions model
NASA Astrophysics Data System (ADS)
Kamaruzzaman, Zetty Ain; Isa, Zaidi
2013-04-01
The normal mixture distributions model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using a two-component univariate normal mixture distributions model. First, we present the application of the normal mixture distributions model in empirical finance, where we fit our real data. Second, we present its application in risk analysis, where we use the model to evaluate VaR and CVaR, with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distributions model fits the data well and performs better in estimating VaR and CVaR, as it can capture the stylized facts of non-normality and leptokurtosis in the returns distribution.
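Once the mixture is fitted, VaR at level alpha is the alpha-quantile of the return distribution. A hedged sketch (my own illustration, not the authors' estimation procedure) finds that quantile by bisection on the mixture CDF; CVaR would then follow by averaging the tail below the quantile.

```python
import math

def mixture_cdf(x, weights, mus, sigmas):
    """CDF of a univariate normal mixture distribution."""
    return sum(w * 0.5 * (1.0 + math.erf((x - m) / (s * math.sqrt(2.0))))
               for w, m, s in zip(weights, mus, sigmas))

def mixture_var(alpha, weights, mus, sigmas, lo=-50.0, hi=50.0):
    """Value at risk at level alpha: the alpha-quantile of the mixture,
    found by bisection on the (monotone) mixture CDF."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mixture_cdf(mid, weights, mus, sigmas) < alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Mixing in a wider second component fattens the lower tail, which is exactly how the two-component model captures leptokurtosis that a single normal misses.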
Analysis of Forest Foliage Using a Multivariate Mixture Model
NASA Technical Reports Server (NTRS)
Hlavka, C. A.; Peterson, David L.; Johnson, L. F.; Ganapol, B.
1997-01-01
Wet chemical measurements and near-infrared spectra of ground leaf samples were analyzed to test a multivariate regression technique for estimating component spectra that is based on a linear mixture model for absorbance. The resulting unmixed spectra for carbohydrates, lignin, and protein resemble the spectra of extracted plant starches, cellulose, lignin, and protein. The unmixed protein spectrum has prominent absorption features at wavelengths that have been associated with nitrogen bonds.
Multivariate Models for Normal and Binary Responses in Intervention Studies
ERIC Educational Resources Information Center
Pituch, Keenan A.; Whittaker, Tiffany A.; Chang, Wanchen
2016-01-01
Use of multivariate analysis (e.g., multivariate analysis of variance) is common when normally distributed outcomes are collected in intervention research. However, when mixed responses--a set of normal and binary outcomes--are collected, standard multivariate analyses are no longer suitable. While mixed responses are often obtained in…
The Effect of the Multivariate Box-Cox Transformation on the Power of MANOVA.
ERIC Educational Resources Information Center
Kirisci, Levent; Hsu, Tse-Chi
Most of the multivariate statistical techniques rely on the assumption of multivariate normality. The effects of non-normality on multivariate tests are assumed to be negligible when variance-covariance matrices and sample sizes are equal. Therefore, in practice, investigators do not usually attempt to remove non-normality. In this simulation…
Deconstructing multivariate decoding for the study of brain function.
Hebart, Martin N; Baker, Chris I
2017-08-04
Multivariate decoding methods were developed originally as tools to enable accurate predictions in real-world applications. The realization that these methods can also be employed to study brain function has led to their widespread adoption in the neurosciences. However, prior to the rise of multivariate decoding, the study of brain function was firmly embedded in a statistical philosophy grounded on univariate methods of data analysis. In this way, multivariate decoding for brain interpretation grew out of two established frameworks: multivariate decoding for predictions in real-world applications, and classical univariate analysis based on the study and interpretation of brain activation. We argue that this led to two confusions, one reflecting a mixture of multivariate decoding for prediction or interpretation, and the other a mixture of the conceptual and statistical philosophies underlying multivariate decoding and classical univariate analysis. Here we attempt to systematically disambiguate multivariate decoding for the study of brain function from the frameworks it grew out of. After elaborating these confusions and their consequences, we describe six, often unappreciated, differences between classical univariate analysis and multivariate decoding. We then focus on how the common interpretation of what is signal and noise changes in multivariate decoding. Finally, we use four examples to illustrate where these confusions may impact the interpretation of neuroimaging data. We conclude with a discussion of potential strategies to help resolve these confusions in interpreting multivariate decoding results, including the potential departure from multivariate decoding methods for the study of brain function. Copyright © 2017. Published by Elsevier Inc.
Calvo, Natalia L; Arias, Juan M; Altabef, Aída Ben; Maggio, Rubén M; Kaufman, Teodoro S
2016-09-10
Albendazole (ALB) is a broad-spectrum anthelmintic, which exhibits two solid-state forms (Forms I and II). Form I is the metastable crystal at room temperature, while Form II is the stable one. Because the drug has poor aqueous solubility and Form II is less soluble than Form I, it is desirable to have a method to assess the solid-state form of the drug employed for manufacturing purposes. Therefore, a Partial Least Squares (PLS) model was developed for the determination of Form I of ALB in its mixtures with Form II. For model development, both solid-state forms of ALB were prepared and characterized by microscopic (optical and with normal and polarized light), thermal (DSC) and spectroscopic (ATR-FTIR, Raman) techniques. Mixtures of solids in different ratios were prepared by weighing and mechanical mixing of the components. Their Raman spectra were acquired, and subjected to peak smoothing, normalization, standard normal variate correction and de-trending, before performing the PLS calculations. The optimal spectral region (1396-1280 cm(-1)) and number of latent variables (LV=3) were obtained employing a moving window of variable size strategy. The method was internally validated by means of the leave-one-out procedure, providing satisfactory statistics (r(2)=0.9729 and RMSD=5.6%) and figures of merit (LOD=9.4% and MDDC=1.4). Furthermore, the method's performance was also evaluated by analysis of two validation sets. Validation set I was used for assessment of linearity and range, and Validation set II to demonstrate accuracy and precision (Recovery=101.4% and RSD=2.8%). Additionally, a third set of spiked commercial samples was evaluated, exhibiting excellent recoveries (94.2±6.4%). The results suggest that the combination of Raman spectroscopy with multivariate analysis could be applied to the assessment of the main crystal form and its quantitation in samples of ALB bulk drug in the routine quality control laboratory.
Using cystoscopy to segment bladder tumors with a multivariate approach in different color spaces.
Freitas, Nuno R; Vieira, Pedro M; Lima, Estevao; Lima, Carlos S
2017-07-01
Nowadays the diagnosis of bladder lesions relies upon cystoscopy examination and depends on the interpreter's experience. State-of-the-art bladder tumor identification is based on 3D reconstruction, using CT images (virtual cystoscopy) or images in which the structures are enhanced with the use of pigmentation, but none uses white-light cystoscopy images. An initial attempt to automatically identify tumoral tissue was already developed by the authors, and this paper develops that idea. Traditional cystoscopy image processing has huge potential to improve early tumor detection and allows more effective treatment. This paper describes a multivariate approach to the segmentation of bladder cystoscopy images that will be used to automatically detect tumors and improve physician diagnosis. Each region can be assumed to follow a normal distribution with specific parameters, leading to the assumption that the distribution of intensities is a Gaussian Mixture Model (GMM). Regions of high-grade and low-grade tumors usually appear with higher intensity than normal regions. This paper proposes a Maximum a Posteriori (MAP) approach based on pixel intensities read simultaneously in different color channels from the RGB, HSV and CIELab color spaces. The Expectation-Maximization (EM) algorithm is used to estimate the best multivariate GMM parameters. Experimental results show that the proposed method performs bladder tumor segmentation into two classes more efficiently in RGB, even in cases where the tumor shape is not well defined. Results also show that eliminating the L component from the CIELab color space does not allow definition of the tumor shape.
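The GMM/MAP pipeline this abstract describes (multichannel pixel intensities, EM fitting, posterior assignment) can be sketched in a few lines. The synthetic two-class pixel data, the channel values, and the use of scikit-learn's GaussianMixture below are illustrative assumptions, not the authors' actual implementation:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic "image": dark background pixels plus a brighter region
# (a stand-in for tumor tissue), three color channels per pixel.
background = rng.normal(loc=[60.0, 50.0, 40.0], scale=8.0, size=(900, 3))
lesion = rng.normal(loc=[160.0, 120.0, 100.0], scale=10.0, size=(100, 3))
pixels = np.vstack([background, lesion])

# Fit a two-component multivariate GMM by EM on the pixel features.
gmm = GaussianMixture(n_components=2, covariance_type="full",
                      random_state=0).fit(pixels)

# MAP assignment: each pixel goes to the component with the highest
# posterior probability.
labels = gmm.predict(pixels)

# The component with the larger mean intensity should capture the
# lesion-like pixels.
bright = int(np.argmax(gmm.means_[:, 0]))
n_bright = int(np.sum(labels == bright))
print("pixels assigned to the bright class:", n_bright)
```

In a real application the feature vector per pixel would stack channels from several color spaces (RGB, HSV, CIELab) rather than a single synthetic triplet.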
Quantiles for Finite Mixtures of Normal Distributions
ERIC Educational Resources Information Center
Rahman, Mezbahur; Rahman, Rumanur; Pearson, Larry M.
2006-01-01
Quantiles for finite mixtures of normal distributions are computed. The difference between a linear combination of independent normal random variables and a linear combination of independent normal densities is emphasized. (Contains 3 tables and 1 figure.)
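As the abstract emphasizes, a mixture of normal densities is not itself normal, so its quantiles must be computed numerically rather than from a single normal quantile function. A minimal sketch, with illustrative weights and parameters, inverts the mixture CDF by root-finding:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

# Illustrative two-component mixture: 0.6*N(0, 1) + 0.4*N(3, 0.5^2).
weights = np.array([0.6, 0.4])
means = np.array([0.0, 3.0])
sds = np.array([1.0, 0.5])

def mixture_cdf(x):
    """CDF of the mixture: the weighted sum of component normal CDFs."""
    return float(np.sum(weights * norm.cdf(x, loc=means, scale=sds)))

def mixture_quantile(p):
    """Invert the mixture CDF numerically on a wide bracket."""
    lo = float(np.min(means - 10.0 * sds))
    hi = float(np.max(means + 10.0 * sds))
    return brentq(lambda x: mixture_cdf(x) - p, lo, hi)

median = mixture_quantile(0.5)
print("median of the mixture:", round(median, 4))
```

Note the contrast the paper draws: a linear combination of independent normal random variables is again normal, whereas this mixture of densities is not.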
A quantitative trait locus mixture model that avoids spurious LOD score peaks.
Feenstra, Bjarke; Skovgaard, Ib M
2004-01-01
In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented. PMID:15238544
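The LOD score defined above (the base-10 log of the likelihood ratio of the richer model to a single normal) can be illustrated on simulated phenotypes. The fully observed genotype groups below are a simplification of the latent mixture fitted in interval mapping, and all numerical values are illustrative:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Phenotypes from two genotype groups with different means.
pheno = np.concatenate([rng.normal(0.0, 1.0, 100),
                        rng.normal(1.5, 1.0, 100)])
group = np.repeat([0, 1], 100)

# Null model: a single normal fitted to all phenotypes.
ll0 = float(np.sum(norm.logpdf(pheno, pheno.mean(), pheno.std())))

# Alternative: a separate mean and spread per (here fully observed)
# genotype group, a simplification of the fitted mixture.
ll1 = 0.0
for g in (0, 1):
    x = pheno[group == g]
    ll1 += float(np.sum(norm.logpdf(x, x.mean(), x.std())))

# The LOD score is the base-10 logarithm of the likelihood ratio.
lod = (ll1 - ll0) / np.log(10)
print("LOD score:", round(lod, 2))
```

The paper's point is that when genotypes are only partially informative, the alternative becomes a genuine mixture, which always fits at least as well as a single normal, so the ratio can peak spuriously.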
An Alternative Method for Computing Mean and Covariance Matrix of Some Multivariate Distributions
ERIC Educational Resources Information Center
Radhakrishnan, R.; Choudhury, Askar
2009-01-01
Computing the mean and covariance matrix of some multivariate distributions, in particular, multivariate normal distribution and Wishart distribution are considered in this article. It involves a matrix transformation of the normal random vector into a random vector whose components are independent normal random variables, and then integrating…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, Candace; Profeta, Luisa; Akpovo, Codjo
The pseudo-univariate limit of detection (LOD) was calculated for comparison with the multivariate interval. Compared with results from the pseudo-univariate LOD, the multivariate LOD includes other factors (i.e., signal uncertainties) and reveals the significance of creating models that use not only the analyte's emission line but also its entire molecular spectrum.
Extracting Spurious Latent Classes in Growth Mixture Modeling with Nonnormal Errors
ERIC Educational Resources Information Center
Guerra-Peña, Kiero; Steinley, Douglas
2016-01-01
Growth mixture modeling is generally used for two purposes: (1) to identify mixtures of normal subgroups and (2) to approximate oddly shaped distributions by a mixture of normal components. Often in applied research this methodology is applied to both of these situations indistinctly: using the same fit statistics and likelihood ratio tests. This…
A scoring metric for multivariate data for reproducibility analysis using chemometric methods
Sheen, David A.; de Carvalho Rocha, Werickson Fortunato; Lippa, Katrice A.; Bearden, Daniel W.
2017-01-01
Process quality control and reproducibility in emerging measurement fields such as metabolomics is normally assured by interlaboratory comparison testing. As a part of this testing process, spectral features from a spectroscopic method such as nuclear magnetic resonance (NMR) spectroscopy are attributed to particular analytes within a mixture, and it is the metabolite concentrations that are returned for comparison between laboratories. However, data quality may also be assessed directly by using binned spectral data before the time-consuming identification and quantification. Use of the binned spectra has some advantages, including preserving information about trace constituents and enabling identification of process difficulties. In this paper, we demonstrate the use of binned NMR spectra to conduct a detailed interlaboratory comparison and composition analysis. Spectra of synthetic and biologically-obtained metabolite mixtures, taken from a previous interlaboratory study, are compared with cluster analysis using a variety of distance and entropy metrics. The individual measurements are then evaluated based on where they fall within their clusters, and a laboratory-level scoring metric is developed, which provides an assessment of each laboratory’s individual performance. PMID:28694553
MULTIVARIATE RECEPTOR MODELS-CURRENT PRACTICE AND FUTURE TRENDS. (R826238)
Multivariate receptor models have been applied to the analysis of air quality data for some time. However, solving the general mixture problem is important in several other fields. This paper looks at the panoply of these models with a view of identifying common challenges and ...
Using partially labeled data for normal mixture identification with application to class definition
NASA Technical Reports Server (NTRS)
Shahshahani, Behzad M.; Landgrebe, David A.
1992-01-01
The problem of estimating the parameters of a normal mixture density when, in addition to the unlabeled samples, sets of partially labeled samples are available is addressed. The density of the multidimensional feature space is modeled with a normal mixture. It is assumed that the set of components of the mixture can be partitioned into several classes and that training samples are available from each class. Since for any training sample the class of origin is known but the exact component of origin within the corresponding class is unknown, the training samples are considered to be partially labeled. The EM iterative equations are derived for estimating the parameters of the normal mixture in the presence of partially labeled samples. These equations can be used to combine the supervised and unsupervised learning processes.
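A minimal one-dimensional sketch of EM with partially labeled samples: labeled observations keep fixed responsibilities for their own component, while unlabeled ones receive posterior responsibilities in the E-step. One component per class and all numerical values are simplifying assumptions, not the authors' derivation:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
x_lab0 = rng.normal(0.0, 1.0, 20)      # labeled samples, class 0
x_lab1 = rng.normal(4.0, 1.0, 20)      # labeled samples, class 1
x_unl = np.concatenate([rng.normal(0.0, 1.0, 200),
                        rng.normal(4.0, 1.0, 200)])  # unlabeled

mu = np.array([-1.0, 1.0])   # rough initial means
sd = np.array([1.0, 1.0])
w = np.array([0.5, 0.5])
x_all = np.concatenate([x_lab0, x_lab1, x_unl])

for _ in range(50):
    # E-step: posterior responsibilities for unlabeled samples only.
    dens = w * norm.pdf(x_unl[:, None], mu, sd)
    r_unl = dens / dens.sum(axis=1, keepdims=True)
    # Labeled samples enter with fixed, known responsibilities.
    r0 = np.concatenate([np.ones(len(x_lab0)),
                         np.zeros(len(x_lab1)),
                         r_unl[:, 0]])
    r1 = 1.0 - r0
    # M-step: responsibility-weighted updates of the parameters.
    for k, r in enumerate((r0, r1)):
        w[k] = r.mean()
        mu[k] = np.sum(r * x_all) / r.sum()
        sd[k] = np.sqrt(np.sum(r * (x_all - mu[k]) ** 2) / r.sum())

print("estimated means:", np.round(mu, 2))
```

The labeled samples anchor the components to their classes, which is what lets the supervised and unsupervised information combine in one set of updates.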
Defining an additivity framework for mixture research in inducible whole-cell biosensors
NASA Astrophysics Data System (ADS)
Martin-Betancor, K.; Ritz, C.; Fernández-Piñas, F.; Leganés, F.; Rodea-Palomares, I.
2015-11-01
A novel additivity framework for mixture effect modelling in the context of whole-cell inducible biosensors has been mathematically developed and implemented in R. The proposed method is a multivariate extension of the effective dose (EDp) concept. Specifically, the extension accounts for differential maximal effects among analytes and response inhibition beyond the maximum permissive concentrations. This allows a multivariate extension of Loewe additivity, enabling direct application in a biphasic dose-response framework. The proposed additivity definition was validated, and its applicability illustrated, by studying the response of the cyanobacterial biosensor Synechococcus elongatus PCC 7942 pBG2120 to binary mixtures of Zn, Cu, Cd, Ag, Co and Hg. The novel method allowed, for the first time, complete dose-response profiles of an inducible whole-cell biosensor to mixtures to be modelled. In addition, the approach also allowed identification and quantification of departures from additivity (interactions) among analytes. The biosensor was found to respond in a near-additive way to heavy metal mixtures except when Hg, Co and Ag were present, in which case strong interactions occurred. The method is a useful contribution to the whole-cell biosensor discipline and related areas, allowing appropriate assessment of mixture effects in non-monotonic dose-response frameworks.
NASA Astrophysics Data System (ADS)
Riad, Safaa M.; Salem, Hesham; Elbalkiny, Heba T.; Khattab, Fatma I.
2015-04-01
Five accurate, precise, and sensitive univariate and multivariate spectrophotometric methods were developed for the simultaneous determination of a ternary mixture containing Trimethoprim (TMP), Sulphamethoxazole (SMZ) and Oxytetracycline (OTC) in wastewater samples collected from different sites (either production wastewater or livestock wastewater) after their solid-phase extraction using OASIS HLB cartridges. In the univariate methods, OTC was determined at its λmax 355.7 nm (0D), while TMP and SMZ were determined by three different univariate methods. Method (A) is based on the successive spectrophotometric resolution technique (SSRT). The technique starts with the ratio subtraction method followed by the ratio difference method for determination of TMP and SMZ. Method (B) is the successive derivative ratio technique (SDR). Method (C) is mean centering of the ratio spectra (MCR). The developed multivariate methods are principal component regression (PCR) and partial least squares (PLS). The specificity of the developed methods was investigated by analyzing laboratory-prepared mixtures containing different ratios of the three drugs. The obtained results are statistically compared with those obtained by the official methods, showing no significant difference with respect to accuracy and precision at p = 0.05.
ERIC Educational Resources Information Center
Haberman, Shelby J.; von Davier, Matthias; Lee, Yi-Hsuan
2008-01-01
Multidimensional item response models can be based on multivariate normal ability distributions or on multivariate polytomous ability distributions. For the case of simple structure in which each item corresponds to a unique dimension of the ability vector, some applications of the two-parameter logistic model to empirical data are employed to…
Intestinal absorption of an arginine-containing peptide in cystinuria
Asatoor, A. M.; Harrison, B. D. W.; Milne, M. D.; Prosser, D. I.
1972-01-01
Separate tolerance tests involving oral intake of the dipeptide, L-arginyl-L-aspartate, and of a corresponding free amino acid mixture, were carried out in a single type 2 cystinuric patient. Absorption of aspartate was within normal limits, whilst that of arginine was normal after the peptide but considerably reduced after the amino acid mixture. The results are compared with the increments of serum arginine found in eight normal subjects after the oral intake of the free amino acid mixture. Analyses of urinary pyrrolidine and of tetramethylenediamine in urine samples obtained after the two tolerance tests in the patient support the view that arginine absorption was subnormal after the amino acid mixture but within normal limits after the dipeptide. PMID:5045711
Modeling and analysis of personal exposures to VOC mixtures using copulas
Su, Feng-Chiao; Mukherjee, Bhramar; Batterman, Stuart
2014-01-01
Environmental exposures typically involve mixtures of pollutants, which must be understood to evaluate cumulative risks, that is, the likelihood of adverse health effects arising from two or more chemicals. This study uses several powerful techniques to characterize dependency structures of mixture components in personal exposure measurements of volatile organic compounds (VOCs) with aims of advancing the understanding of environmental mixtures, improving the ability to model mixture components in a statistically valid manner, and demonstrating broadly applicable techniques. We first describe characteristics of mixtures and introduce several terms, including the mixture fraction which represents a mixture component's share of the total concentration of the mixture. Next, using VOC exposure data collected in the Relationship of Indoor Outdoor and Personal Air (RIOPA) study, mixtures are identified using positive matrix factorization (PMF) and by toxicological mode of action. Dependency structures of mixture components are examined using mixture fractions and modeled using copulas, which address dependencies of multiple variables across the entire distribution. Five candidate copulas (Gaussian, t, Gumbel, Clayton, and Frank) are evaluated, and the performance of fitted models was evaluated using simulation and mixture fractions. Cumulative cancer risks are calculated for mixtures, and results from copulas and multivariate lognormal models are compared to risks calculated using the observed data. Results obtained using the RIOPA dataset showed four VOC mixtures, representing gasoline vapor, vehicle exhaust, chlorinated solvents and disinfection by-products, and cleaning products and odorants. Often, a single compound dominated the mixture, however, mixture fractions were generally heterogeneous in that the VOC composition of the mixture changed with concentration. 
Three mixtures were identified by mode of action, representing VOCs associated with hematopoietic, liver and renal tumors. Estimated lifetime cumulative cancer risks exceeded 10(-3) for about 10% of RIOPA participants. Factors affecting the likelihood of high concentration mixtures included city, participant ethnicity, and house air exchange rates. The dependency structures of the VOC mixtures fitted Gumbel (two mixtures) and t (four mixtures) copulas, types that emphasize tail dependencies. Significantly, the copulas reproduced both risk predictions and exposure fractions with a high degree of accuracy, and performed better than multivariate lognormal distributions. Copulas may be the method of choice for VOC mixtures, particularly for the highest exposures or extreme events, cases that poorly fit lognormal distributions and that represent the greatest risks. PMID:24333991
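The copula construction underlying this study (dependence modeled separately from the marginal distributions) can be sketched with a Gaussian copula, one of the five candidate families the authors evaluated. The lognormal marginals and the correlation value below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm, lognorm, spearmanr

rng = np.random.default_rng(3)
rho = 0.7
cov = [[1.0, rho], [rho, 1.0]]

# 1) Draw correlated standard normals (the dependence structure).
z = rng.multivariate_normal([0.0, 0.0], cov, size=5000)
# 2) Map to uniforms through the normal CDF (the copula step).
u = norm.cdf(z)
# 3) Apply inverse marginal CDFs to impose the target marginals.
voc_a = lognorm.ppf(u[:, 0], s=1.0)   # hypothetical VOC concentrations
voc_b = lognorm.ppf(u[:, 1], s=0.5)

# Rank dependence survives the (monotone) marginal transforms.
rho_s, _ = spearmanr(voc_a, voc_b)
print("Spearman rank correlation:", round(rho_s, 2))
```

Gumbel and t copulas, which the study found to fit best, differ from the Gaussian case precisely in how much dependence they place in the tails, i.e., among the highest exposures.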
Multivariate stochastic simulation with subjective multivariate normal distributions
P. J. Ince; J. Buongiorno
1991-01-01
In many applications of Monte Carlo simulation in forestry or forest products, it may be known that some variables are correlated. However, for simplicity, in most simulations it has been assumed that random variables are independently distributed. This report describes an alternative Monte Carlo simulation technique for subjectively assessed multivariate normal...
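The contrast drawn here between independent sampling and correlated multivariate normal sampling can be shown directly. The mean vector and covariance below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
mean = np.array([100.0, 50.0])        # e.g., hypothetical price and cost
cov = np.array([[25.0, 12.0],
                [12.0, 9.0]])         # positive covariance links them

draws = rng.multivariate_normal(mean, cov, size=20000)
margin = draws[:, 0] - draws[:, 1]    # simulated quantity of interest

# Correlated sampling changes the spread of derived quantities:
# Var(margin) = 25 + 9 - 2*12 = 10, versus 34 under independence.
print("simulated variance of margin:", round(float(margin.var()), 1))
```

Assuming independence here would overstate the variance of the margin by more than a factor of three, which is exactly the kind of distortion the report's technique avoids.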
USDA-ARS?s Scientific Manuscript database
Experimental designs developed to address mixtures are ideally suited for many areas of experimental biology including pheromone blend studies because they address the confounding of proportionality and concentration intrinsic to factorial and one-factor-at-a-time designs. Geometric multivariate des...
NASA Astrophysics Data System (ADS)
Naguib, Ibrahim A.; Darwish, Hany W.
2012-02-01
A comparison between support vector regression (SVR) and artificial neural network (ANN) multivariate regression methods is established, showing the underlying algorithm for each and comparing them to indicate their inherent advantages and limitations. In this paper we compare SVR to ANN with and without a variable selection procedure (genetic algorithm (GA)). To present the comparison in a sensible way, the methods are used for the stability-indicating quantitative analysis of mixtures of mebeverine hydrochloride and sulpiride in binary mixtures, as a case study, in the presence of their reported impurities and degradation products (summing up to 6 components) in raw materials and pharmaceutical dosage form via handling the UV spectral data. For proper analysis, a 6-factor 5-level experimental design was established, resulting in a training set of 25 mixtures containing different ratios of the interfering species. An independent test set consisting of 5 mixtures was used to validate the prediction ability of the suggested models. The proposed methods (linear SVR (without GA) and linear GA-ANN) were successfully applied to the analysis of pharmaceutical tablets containing mebeverine hydrochloride and sulpiride mixtures. The results manifest the problem of nonlinearity and how models like SVR and ANN can handle it. The methods indicate the ability of the mentioned multivariate calibration models to deconvolute the highly overlapped UV spectra of the 6-component mixtures, yet using cheap and easy-to-handle instruments like the UV spectrophotometer.
Continuous plutonium dissolution apparatus
Meyer, F.G.; Tesitor, C.N.
1974-02-26
This invention is concerned with continuous dissolution of metals such as plutonium. A high normality acid mixture is fed into a boiler vessel, vaporized, and subsequently condensed as a low normality acid mixture. The mixture is then conveyed to a dissolution vessel and contacted with the plutonium metal to dissolve the plutonium in the dissolution vessel, reacting therewith forming plutonium nitrate. The reaction products are then conveyed to the mixing vessel and maintained soluble by the high normality acid, with separation and removal of the desired constituent. (Official Gazette)
A Skew-Normal Mixture Regression Model
ERIC Educational Resources Information Center
Liu, Min; Lin, Tsung-I
2014-01-01
A challenge associated with traditional mixture regression models (MRMs), which rest on the assumption of normally distributed errors, is determining the number of unobserved groups. Specifically, even slight deviations from normality can lead to the detection of spurious classes. The current work aims to (a) examine how sensitive the commonly…
Motegi, Hiromi; Tsuboi, Yuuri; Saga, Ayako; Kagami, Tomoko; Inoue, Maki; Toki, Hideaki; Minowa, Osamu; Noda, Tetsuo; Kikuchi, Jun
2015-11-04
There is an increasing need to use multivariate statistical methods for understanding biological functions, identifying the mechanisms of diseases, and exploring biomarkers. In addition to classical analyses such as hierarchical cluster analysis, principal component analysis, and partial least squares discriminant analysis, various multivariate strategies, including independent component analysis, non-negative matrix factorization, and multivariate curve resolution, have recently been proposed. However, determining the number of components is problematic. Despite the proposal of several different methods, no satisfactory approach has yet been reported. To resolve this problem, we implemented a new idea: classifying a component as "reliable" or "unreliable" based on the reproducibility of its appearance, regardless of the number of components in the calculation. Using the clustering method for classification, we applied this idea to multivariate curve resolution-alternating least squares (MCR-ALS). Comparisons between conventional and modified methods applied to proton nuclear magnetic resonance ((1)H-NMR) spectral datasets derived from known standard mixtures and biological mixtures (urine and feces of mice) revealed that more plausible results are obtained by the modified method. In particular, clusters containing little information were detected with reliability. This strategy, named "cluster-aided MCR-ALS," will facilitate the attainment of more reliable results in the metabolomics datasets.
Iorgulescu, E; Voicu, V A; Sârbu, C; Tache, F; Albu, F; Medvedovici, A
2016-08-01
The influence of the experimental variability (instrumental repeatability, instrumental intermediate precision and sample preparation variability) and data pre-processing (normalization, peak alignment, background subtraction) on the discrimination power of multivariate data analysis methods (Principal Component Analysis -PCA- and Cluster Analysis -CA-) as well as a new algorithm based on linear regression was studied. Data used in the study were obtained through positive or negative ion monitoring electrospray mass spectrometry (+/-ESI/MS) and reversed phase liquid chromatography/UV spectrometric detection (RPLC/UV) applied to green tea extracts. Extractions in ethanol and heated water infusion were used as sample preparation procedures. The multivariate methods were directly applied to mass spectra and chromatograms, involving strictly a holistic comparison of shapes, without assignment of any structural identity to compounds. An alternative data interpretation based on linear regression analysis mutually applied to data series is also discussed. Slopes, intercepts and correlation coefficients produced by the linear regression analysis applied on pairs of very large experimental data series successfully retain information resulting from high frequency instrumental acquisition rates, obviously better defining the profiles being compared. Consequently, each type of sample or comparison between samples produces in the Cartesian space an ellipsoidal volume defined by the normal variation intervals of the slope, intercept and correlation coefficient. Distances between volumes graphically illustrate (dis)similarities between compared data. The instrumental intermediate precision had the major effect on the discrimination power of the multivariate data analysis methods. 
Mass spectra produced through ionization from the liquid state under atmospheric pressure conditions of bulk complex mixtures resulting from extracted materials of natural origins provided an excellent data basis for multivariate analysis methods, equivalent to data resulting from chromatographic separations. The alternative evaluation of very large data series based on linear regression analysis produced information equivalent to results obtained through application of PCA and CA.
Multiple imputation for handling missing outcome data when estimating the relative risk.
Sullivan, Thomas R; Lee, Katherine J; Ryan, Philip; Salter, Amy B
2017-09-06
Multiple imputation is a popular approach to handling missing data in medical research, yet little is known about its applicability for estimating the relative risk. Standard methods for imputing incomplete binary outcomes involve logistic regression or an assumption of multivariate normality, whereas relative risks are typically estimated using log binomial models. It is unclear whether misspecification of the imputation model in this setting could lead to biased parameter estimates. Using simulated data, we evaluated the performance of multiple imputation for handling missing data prior to estimating adjusted relative risks from a correctly specified multivariable log binomial model. We considered an arbitrary pattern of missing data in both outcome and exposure variables, with missing data induced under missing at random mechanisms. Focusing on standard model-based methods of multiple imputation, missing data were imputed using multivariate normal imputation or fully conditional specification with a logistic imputation model for the outcome. Multivariate normal imputation performed poorly in the simulation study, consistently producing estimates of the relative risk that were biased towards the null. Despite outperforming multivariate normal imputation, fully conditional specification also produced somewhat biased estimates, with greater bias observed for higher outcome prevalences and larger relative risks. Deleting imputed outcomes from analysis datasets did not improve the performance of fully conditional specification. Both multivariate normal imputation and fully conditional specification produced biased estimates of the relative risk, presumably since both use a misspecified imputation model. Based on simulation results, we recommend researchers use fully conditional specification rather than multivariate normal imputation and retain imputed outcomes in the analysis when estimating relative risks. 
However, fully conditional specification is not without its shortcomings, and so further research is needed to identify optimal approaches for relative risk estimation within the multiple imputation framework.
Inferring network structure in non-normal and mixed discrete-continuous genomic data.
Bhadra, Anindya; Rao, Arvind; Baladandayuthapani, Veerabhadran
2018-03-01
Inferring dependence structure through undirected graphs is crucial for uncovering the major modes of multivariate interaction among high-dimensional genomic markers that are potentially associated with cancer. Traditionally, conditional independence has been studied using sparse Gaussian graphical models for continuous data and sparse Ising models for discrete data. However, there are two clear situations when these approaches are inadequate. The first occurs when the data are continuous but display non-normal marginal behavior such as heavy tails or skewness, rendering an assumption of normality inappropriate. The second occurs when a part of the data is ordinal or discrete (e.g., presence or absence of a mutation) and the other part is continuous (e.g., expression levels of genes or proteins). In this case, the existing Bayesian approaches typically employ a latent variable framework for the discrete part that precludes inferring conditional independence among the data that are actually observed. The current article overcomes these two challenges in a unified framework using Gaussian scale mixtures. Our framework is able to handle continuous data that are not normal and data that are of mixed continuous and discrete nature, while still being able to infer a sparse conditional sign independence structure among the observed data. Extensive performance comparison in simulations with alternative techniques and an analysis of a real cancer genomics data set demonstrate the effectiveness of the proposed approach. © 2017, The International Biometric Society.
NASA Technical Reports Server (NTRS)
Marble, Frank E.; Ritter, William K.; Miller, Mahlon A.
1946-01-01
For the normal range of engine power the impeller provided marked improvement over the standard spray-bar injection system. Mixture distribution at cruising was excellent, maximum cylinder temperatures were reduced about 30 degrees F, and general temperature distribution was improved. The uniform mixture distribution restored the normal response of cylinder temperature to mixture enrichment and it reduced the possibility of carburetor icing, while no serious loss in supercharger pressure rise resulted from injection of fuel near the impeller outlet. The injection impeller also furnished a convenient means of adding water to the charge mixture for internal cooling.
[Effects of different excipients on properties of Tongsaimai mixture and pellet molding].
Wang, Jin; Lv, Zhiyang; Wu, Xiaoyan; Di, Liuqing; Dong, Yu; Cai, Baochang
2011-01-01
To conduct a preliminary study of the relationship between pellet molding and the properties of mixtures composed of Tongsaimai extract and different excipients. Multivariate regression analysis was used to investigate the correlation between the different mixtures and pellet molding by measuring the cohesion and liquid-plastic limit of each mixture and the powder properties of the pellets. The weighted coefficients of the powder properties were determined by the analytic hierarchy process combined with criteria importance through intercriteria correlation. The results showed that the liquid-plastic limit appeared to be the major factor, correlating positively with pellet molding, whereas cohesion correlated negatively with pellet molding over the measured range. The physical properties of the mixture have a marked influence on pellet molding.
Hegazy, M A; Yehia, A M; Moustafa, A A
2013-05-01
The ability of bivariate and multivariate spectrophotometric methods to resolve a quaternary mixture of mosapride, pantoprazole and their degradation products was demonstrated. The bivariate calibrations include the bivariate spectrophotometric method (BSM) and the H-point standard addition method (HPSAM), which were able to determine the two drugs simultaneously, but not in the presence of their degradation products. Simultaneous determinations could be performed in the concentration ranges of 5.0-50.0 microg/ml for mosapride and 10.0-40.0 microg/ml for pantoprazole by the bivariate spectrophotometric method, and in the concentration ranges of 5.0-45.0 microg/ml for both drugs by the H-point standard addition method. Moreover, the applied multivariate calibration methods were able to determine mosapride, pantoprazole and their degradation products using concentration residuals augmented classical least squares (CRACLS) and partial least squares (PLS). The proposed multivariate methods were applied to 17 synthetic samples in the concentration ranges of 3.0-12.0 microg/ml mosapride, 8.0-32.0 microg/ml pantoprazole, 1.5-6.0 microg/ml mosapride degradation products and 2.0-8.0 microg/ml pantoprazole degradation products. The proposed bivariate and multivariate calibration methods were successfully applied to the determination of mosapride and pantoprazole in their pharmaceutical preparations.
Not Quite Normal: Consequences of Violating the Assumption of Normality in Regression Mixture Models
ERIC Educational Resources Information Center
Van Horn, M. Lee; Smith, Jessalyn; Fagan, Abigail A.; Jaki, Thomas; Feaster, Daniel J.; Masyn, Katherine; Hawkins, J. David; Howe, George
2012-01-01
Regression mixture models, which have only recently begun to be used in applied research, are a new approach for finding differential effects. This approach comes at the cost of the assumption that error terms are normally distributed within classes. This study uses Monte Carlo simulations to explore the effects of relatively minor violations of…
Smith, Zachary J; Strombom, Sven; Wachsmann-Hogiu, Sebastian
2011-08-29
A multivariate optical computer has been constructed consisting of a spectrograph, digital micromirror device, and photomultiplier tube that is capable of determining absolute concentrations of individual components of a multivariate spectral model. We present experimental results on ternary mixtures, showing accurate quantification of chemical concentrations based on integrated intensities of fluorescence and Raman spectra measured with a single point detector. We additionally show in simulation that point measurements based on principal component spectra retain the ability to classify cancerous from noncancerous T cells.
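At heart, extracting absolute concentrations from integrated intensities is a linear least-squares problem against known pure-component spectra. A minimal numerical sketch of that calibration idea (the Gaussian band shapes and noise level are invented for illustration, not the instrument's actual spectra):

```python
import numpy as np

rng = np.random.default_rng(1)
wav = np.linspace(0.0, 1.0, 200)            # arbitrary wavelength axis

def band(center, width):
    return np.exp(-((wav - center) / width) ** 2)

# pure-component spectra of a ternary mixture (rows)
S = np.stack([band(0.3, 0.05), band(0.5, 0.08), band(0.7, 0.04)])
c_true = np.array([0.2, 0.5, 0.3])          # "unknown" concentrations
y = c_true @ S + 0.01 * rng.standard_normal(wav.size)   # measured mixture

# classical least-squares estimate of the concentrations
c_hat, *_ = np.linalg.lstsq(S.T, y, rcond=None)
```

The optical computer effectively evaluates such inner products in hardware, with the micromirror device applying the spectral weights before the single-point detector.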
NASA Astrophysics Data System (ADS)
Bilal, Muhammad; Kazi, Tasneem Gul; Afridi, Hassan Imran; Ali, Jamshed; Baig, Jameel Ahmed; Arain, Mohammad Balal; Khan, Mustafa
2017-08-01
A green tunable dispersive liquid-liquid microextraction (TDLLME) technique was established for the simultaneous enrichment of lead (Pb) and cadmium (Cd) from the water of different lakes before analysis by flame atomic absorption spectrometry (FAAS). A solvent known as a tunable polarity solvent (TPS), a mixture of 1,8-diazabicyclo-[5.4.0]-undec-7-ene (DBU) and 1-decanol, was employed as the extractant in aqueous medium. In the first step, this mixture is made polar by slowly bubbling the antisolvent trigger (CO2) through the solution, which produces a monophasic solution. During this step, hydrophobic complexes of the metals with 8-hydroxyquinoline (8-HQ) were extracted by the TPS. The mixture was then switched back to a hydrophobic one by heating and/or bubbling nitrogen, separating the mixture into two phases again. In the second step, the metals were leached out of the complexes entrapped in the TPS by treatment with a nitric acid solution and exposure of the mixture to CO2, which switched the mixture back into a single phase. N2 purging and/or heating then turned the mixture into two phases again. The acidic aqueous phase containing the metals was introduced to FAAS for analysis, whereas the TPS was recycled for the next experiment. The parameters affecting the efficiency of the technique were optimized by a multivariate approach. The method was applied to a certified reference material for water and to a real sample spiked with standards of known concentration, to confirm its validity and accuracy. The LODs obtained for Pb and Cd were 0.560 and 0.056 μg L-1, respectively. The developed method was applied successfully to real water samples from two lakes of Sindh, Pakistan.
Protanomaly-without-darkened-red is deuteranopia with rods
Shevell, Steven K.; Sun, Yang; Neitz, Maureen
2008-01-01
The Rayleigh match, a color match between a mixture of 545+670 nm lights and 589 nm light in modern instruments, is the definitive measurement for the diagnosis of inherited red/green color defects. All trichromats, whether normal or anomalous, have a limited range of 545+670 nm mixtures they perceive to match 589 nm: a typical color-normal match-range is about 50–55% of 670 nm in the mixture (deutan mode), while deuteranomals have a range that includes mixtures with less 670 nm than normal and protanomals a range that includes mixtures with more 670 nm than normal. Further, the matching luminance of the 589 nm light for deuteranomals is the same as for normals but for protanomals is below normal. An example of an unexpected Rayleigh match, therefore, is a match range above normal (typical of protanomaly) and a normal luminance setting for 589 nm (typical of deuteranomaly), a match that Pickford (1950) called protanomaly “when the red end of the spectrum is not darkened”. In this case, Rayleigh matching does not yield a clear diagnosis. Aside from Pickford, we are aware of only one other report of a similar observer (Pokorny and Smith, 1981); this study predated modern genetic techniques that can reveal the cone photopigment(s) in the red/green range. We recently had the opportunity to conduct genetic and psychophysical tests on such an observer. Genetic results predict he is a deuteranope. His Rayleigh match is consistent with L cones and a contribution from rods. Further, with a rod-suppressing background, his Rayleigh match is characteristic of a single L-cone photopigment (deuteranopia). PMID:18423511
Protanomaly without darkened red is deuteranopia with rods.
Shevell, Steven K; Sun, Yang; Neitz, Maureen
2008-11-01
The Rayleigh match, a color match between a mixture of 545+670 nm lights and 589 nm light in modern instruments, is the definitive measurement for the diagnosis of inherited red-green color defects. All trichromats, whether normal or anomalous, have a limited range of 545+670 nm mixtures they perceive to match 589 nm: a typical color-normal match range is about 50-55% of 670 nm in the mixture (deutan mode), while deuteranomals have a range that includes mixtures with less 670 nm than normal and protanomals a range that includes mixtures with more 670 nm than normal. Further, the matching luminance of the 589 nm light for deuteranomals is the same as for normals but for protanomals is below normal. An example of an unexpected Rayleigh match, therefore, is a match range above normal (typical of protanomaly) and a normal luminance setting for 589 nm (typical of deuteranomaly), a match called protanomaly "when the red end of the spectrum is not darkened" [Pickford, R.W. (1950). Three pedigrees for color blindness. Nature, 165, 182.]. In this case, Rayleigh matching does not yield a clear diagnosis. Aside from Pickford, we are aware of only one other report of a similar observer [Pokorny, J., & Smith, V. C. (1981). A variant of red-green color defect. Vision Research, 21, 311-317]; this study predated modern genetic techniques that can reveal the cone photopigment(s) in the red-green range. We recently had the opportunity to conduct genetic and psychophysical tests on such an observer. Genetic results predict he is a deuteranope. His Rayleigh match is consistent with L cones and a contribution from rods. Further, with a rod-suppressing background, his Rayleigh match is characteristic of a single L-cone photopigment (deuteranopia).
Batterman, Stuart; Su, Feng-Chiao; Li, Shi; Mukherjee, Bhramar; Jia, Chunrong
2015-01-01
INTRODUCTION Emission sources of volatile organic compounds (VOCs) are numerous and widespread in both indoor and outdoor environments. Concentrations of VOCs indoors typically exceed outdoor levels, and most people spend nearly 90% of their time indoors. Thus, indoor sources generally contribute the majority of VOC exposures for most people. VOC exposure has been associated with a wide range of acute and chronic health effects; for example, asthma, respiratory diseases, liver and kidney dysfunction, neurologic impairment, and cancer. Although exposures to most VOCs for most persons fall below health-based guidelines, and long-term trends show decreases in ambient emissions and concentrations, a subset of individuals experience much higher exposures that exceed guidelines. Thus, exposure to VOCs remains an important environmental health concern. The present understanding of VOC exposures is incomplete. With the exception of a few compounds, concentration and especially exposure data are limited; and like other environmental data, VOC exposure data can show multiple modes, low and high extreme values, and sometimes a large portion of data below method detection limits (MDLs). Field data also show considerable spatial or interpersonal variability, and although evidence is limited, temporal variability seems high. These characteristics can complicate modeling and other analyses aimed at risk assessment, policy actions, and exposure management. In addition to these analytic and statistical issues, exposure typically occurs as a mixture, and mixture components may interact or jointly contribute to adverse effects. However, most pollutant regulations, guidelines, and studies remain focused on single compounds, and thus may underestimate cumulative exposures and risks arising from coexposures. In addition, the composition of VOC mixtures has not been thoroughly investigated, and mixture components show varying and complex dependencies.
Finally, although many factors are known to affect VOC exposures, many personal, environmental, and socioeconomic determinants remain to be identified, and the significance and applicability of the determinants reported in the literature are uncertain. To help answer these unresolved questions and overcome limitations of previous analyses, this project used several novel and powerful statistical modeling and analysis techniques and two large data sets. The overall objectives of this project were (1) to identify and characterize exposure distributions (including extreme values), (2) evaluate mixtures (including dependencies), and (3) identify determinants of VOC exposure. METHODS VOC data were drawn from two large data sets: the Relationships of Indoor, Outdoor, and Personal Air (RIOPA) study (1999–2001) and the National Health and Nutrition Examination Survey (NHANES; 1999–2000). The RIOPA study used a convenience sample to collect outdoor, indoor, and personal exposure measurements in three cities (Elizabeth, NJ; Houston, TX; Los Angeles, CA). In each city, approximately 100 households with adults and children who did not smoke were sampled twice for 18 VOCs. In addition, information about 500 variables associated with exposure was collected. The NHANES used a nationally representative sample and included personal VOC measurements for 851 participants. NHANES sampled 10 VOCs in common with RIOPA. Both studies used similar sampling methods and study periods. Specific Aim 1 To estimate and model extreme value exposures, extreme value distribution models were fitted to the top 10% and 5% of VOC exposures. Health risks were estimated for individual VOCs and for three VOC mixtures. Simulated extreme value data sets, generated for each VOC and for fitted extreme value and lognormal distributions, were compared with measured concentrations (RIOPA observations) to evaluate each model’s goodness of fit. 
Mixture distributions were fitted with the conventional finite mixture of normal distributions and the semi-parametric Dirichlet process mixture (DPM) of normal distributions for three individual VOCs (chloroform, 1,4-DCB, and styrene). Goodness of fit for these full distribution models was also evaluated using simulated data. Specific Aim 2 Mixtures in the RIOPA VOC data set were identified using positive matrix factorization (PMF) and by toxicologic mode of action. Dependency structures of a mixture’s components were examined using mixture fractions and were modeled using copulas, which address correlations of multiple components across their entire distributions. Five candidate copulas (Gaussian, t, Gumbel, Clayton, and Frank) were evaluated, and the performance of fitted models was evaluated using simulation and mixture fractions. Cumulative cancer risks were calculated for mixtures, and results from copulas and multivariate lognormal models were compared with risks based on RIOPA observations. Specific Aim 3 Exposure determinants were identified using stepwise regressions and linear mixed-effects models (LMMs). RESULTS Specific Aim 1 Extreme value exposures in RIOPA typically were best fitted by three-parameter generalized extreme value (GEV) distributions, and sometimes by the two-parameter Gumbel distribution. In contrast, lognormal distributions significantly underestimated both the level and likelihood of extreme values. Among the VOCs measured in RIOPA, 1,4-dichlorobenzene (1,4-DCB) was associated with the greatest cancer risks; for example, for the highest 10% of measurements of 1,4-DCB, all individuals had risk levels above 10−4, and 13% of all participants had risk levels above 10−2. Of the full-distribution models, the finite mixture of normal distributions with two to four clusters and the DPM of normal distributions had superior performance in comparison with the lognormal models. 
DPM distributions provided slightly better fit than the finite mixture distributions; the advantages of the DPM model were avoiding certain convergence issues associated with the finite mixture distributions, adaptively selecting the number of needed clusters, and providing uncertainty estimates. Although the results apply to the RIOPA data set, GEV distributions and mixture models appear more broadly applicable. These models can be used to simulate VOC distributions, which are neither normally nor lognormally distributed, and they accurately represent the highest exposures, which may have the greatest health significance. Specific Aim 2 Four VOC mixtures were identified and apportioned by PMF; they represented gasoline vapor, vehicle exhaust, chlorinated solvents and disinfection byproducts, and cleaning products and odorants. The last mixture (cleaning products and odorants) accounted for the largest fraction of an individual’s total exposure (average of 42% across RIOPA participants). Often, a single compound dominated a mixture but the mixture fractions were heterogeneous; that is, the fractions of the compounds changed with the concentration of the mixture. Three VOC mixtures were identified by toxicologic mode of action and represented VOCs associated with hematopoietic, liver, and renal tumors. Estimated lifetime cumulative cancer risks exceeded 10−3 for about 10% of RIOPA participants. The dependency structures of the VOC mixtures in the RIOPA data set fitted Gumbel (two mixtures) and t copulas (four mixtures). These copula types emphasize dependencies found in the upper and lower tails of a distribution. The copulas reproduced both risk predictions and exposure fractions with a high degree of accuracy and performed better than multivariate lognormal distributions. 
Specific Aim 3 In an analysis focused on the home environment and the outdoor (close to home) environment, home VOC concentrations dominated personal exposures (66% to 78% of the total exposure, depending on VOC); this was largely the result of the amount of time participants spent at home and the fact that indoor concentrations were much higher than outdoor concentrations for most VOCs. In a different analysis focused on the sources inside the home and outside (but close to the home), it was assumed that 100% of VOCs from outside sources would penetrate the home. Outdoor VOC sources accounted for 5% (d-limonene) to 81% (carbon tetrachloride [CTC]) of the total exposure. Personal exposure and indoor measurements had similar determinants depending on the VOC. Gasoline-related VOCs (e.g., benzene and methyl tert-butyl ether [MTBE]) were associated with city, residences with attached garages, pumping gas, wind speed, and home air exchange rate (AER). Odorant and cleaning-related VOCs (e.g., 1,4-DCB and chloroform) also were associated with city, and a residence’s AER, size, and family members showering. Dry-cleaning and industry-related VOCs (e.g., tetrachloroethylene [or perchloroethylene, PERC] and trichloroethylene [TCE]) were associated with city, type of water supply to the home, and visits to the dry cleaner. These and other relationships were significant, they explained from 10% to 40% of the variance in the measurements, and are consistent with known emission sources and those reported in the literature. Outdoor concentrations of VOCs had only two determinants in common: city and wind speed. Overall, personal exposure was dominated by the home setting, although a large fraction of indoor VOC concentrations were due to outdoor sources. City of residence, personal activities, household characteristics, and meteorology were significant determinants. 
Concentrations in RIOPA were considerably lower than levels in the nationally representative NHANES for all VOCs except MTBE and 1,4-DCB. Differences between RIOPA and NHANES results can be explained by contrasts between the sampling designs and staging in the two studies, and by differences in the demographics, smoking, employment, occupations, and home locations. A portion of these differences are due to the nature of the convenience (RIOPA) and representative (NHANES) sampling strategies used in the two studies. CONCLUSIONS Accurate models for exposure data, which can feature extreme values, multiple modes, data below the MDL, heterogeneous interpollutant dependency structures, and other complex characteristics, are needed to estimate exposures and risks and to develop control and management guidelines and policies. Conventional and novel statistical methods were applied to data drawn from two large studies to understand the nature and significance of VOC exposures. Both extreme value distributions and mixture models were found to provide excellent fit to single VOC compounds (univariate distributions), and copulas may be the method of choice for VOC mixtures (multivariate distributions), especially for the highest exposures, which fit parametric models poorly and which may represent the greatest health risk. The identification of exposure determinants, including the influence of both certain activities (e.g., pumping gas) and environments (e.g., residences), provides information that can be used to manage and reduce exposures. The results obtained using the RIOPA data set add to our understanding of VOC exposures and further investigations using a more representative population and a wider suite of VOCs are suggested to extend and generalize results. PMID:25145040
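One way to see why the study models mixtures with copulas rather than independent marginals is that correlated pollutants co-exceed their high percentiles far more often than independence predicts. A small Monte Carlo sketch with a Gaussian copula and lognormal margins (the correlation and thresholds are illustrative, not RIOPA estimates):

```python
import numpy as np

rng = np.random.default_rng(2)
n, rho = 200_000, 0.7
# Gaussian copula: correlated latent normals, then lognormal margins
z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
voc1, voc2 = np.exp(z1), np.exp(z2)

t1, t2 = np.quantile(voc1, 0.95), np.quantile(voc2, 0.95)
p_joint = np.mean((voc1 > t1) & (voc2 > t2))  # joint 95th-percentile exceedance
p_indep = 0.05 * 0.05                          # what independence would predict
```

With rho = 0.7 the joint exceedance probability is several times the independent value; capturing exactly this upper-tail dependence is what the Gumbel and t copulas in the study are designed for.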
Two-sample tests and one-way MANOVA for multivariate biomarker data with nondetects.
Thulin, M
2016-09-10
Testing whether the mean vector of a multivariate set of biomarkers differs between several populations is an increasingly common problem in medical research. Biomarker data is often left censored because some measurements fall below the laboratory's detection limit. We investigate how such censoring affects multivariate two-sample and one-way multivariate analysis of variance tests. Type I error rates, power and robustness to increasing censoring are studied, under both normality and non-normality. Parametric tests are found to perform better than non-parametric alternatives, indicating that the current recommendations for analysis of censored multivariate data may have to be revised. Copyright © 2016 John Wiley & Sons, Ltd.
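As a rough sketch of the setting this paper studies, here is a two-sample Hotelling T² test on left-censored lognormal biomarkers, with nondetects replaced by MDL/2 (a common substitution convention, used only for illustration; the paper's actual comparison of parametric and non-parametric tests under censoring is far more systematic):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n1 = n2 = 60
x = np.exp(rng.standard_normal((n1, 3)))          # group 1: 3 biomarkers
y = np.exp(rng.standard_normal((n2, 3)) + 0.8)    # group 2: shifted mean vector
mdl = 0.5
x = np.where(x < mdl, mdl / 2, x)                 # substitute MDL/2 for nondetects
y = np.where(y < mdl, mdl / 2, y)

# two-sample Hotelling T^2 on the imputed data
d = x.mean(axis=0) - y.mean(axis=0)
S = ((n1 - 1) * np.cov(x, rowvar=False)
     + (n2 - 1) * np.cov(y, rowvar=False)) / (n1 + n2 - 2)
t2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(S, d)
p_dim = 3
f_stat = (n1 + n2 - p_dim - 1) / (p_dim * (n1 + n2 - 2)) * t2
pval = stats.f.sf(f_stat, p_dim, n1 + n2 - p_dim - 1)
```

With roughly a quarter of group 1 censored, the imputed test still detects the mean shift here; the paper quantifies how Type I error and power degrade as the censoring fraction grows.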
NASA Astrophysics Data System (ADS)
Yehia, Ali M.; Mohamed, Heba M.
2016-01-01
Three advanced chemometric-assisted spectrophotometric methods, namely Concentration Residuals Augmented Classical Least Squares (CRACLS), Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) and Principal Component Analysis-Artificial Neural Networks (PCA-ANN), were developed, validated and benchmarked against PLS calibration to resolve severely overlapped spectra and simultaneously determine Paracetamol (PAR), Guaifenesin (GUA) and Phenylephrine (PHE) in their ternary mixture and in the presence of p-aminophenol (AP), the main degradation product and synthesis impurity of Paracetamol. The analytical performance of the proposed methods was described by percentage recoveries, root mean square error of calibration and standard error of prediction. The four multivariate calibration methods could be used directly without any preliminary separation step and were successfully applied to pharmaceutical formulation analysis, showing no interference from excipients.
Romine, Jason G.; Perry, Russell W.; Johnston, Samuel V.; Fitzer, Christopher W.; Pagliughi, Stephen W.; Blake, Aaron R.
2013-01-01
Mixture models proved valuable as a means to differentiate between salmonid smolts and predators that consumed salmonid smolts. However, successful application of this method requires that telemetered fishes and their predators exhibit measurable differences in movement behavior. Our approach is flexible, allows inclusion of multiple track statistics and improves upon rule-based manual classification methods.
NASA Astrophysics Data System (ADS)
Rojek, Barbara; Wesolowski, Marek; Suchacz, Bogdan
2013-12-01
In this paper, infrared (IR) spectroscopy and the multivariate exploration techniques principal component analysis (PCA) and cluster analysis (CA) were applied as supportive methods for the detection of physicochemical incompatibilities between baclofen and excipients. In the course of the research, the most useful rotational strategy in PCA proved to be varimax normalized, while in CA, Ward's hierarchical agglomeration with the Euclidean distance measure yielded the most interpretable results. The chemometric calculations confirmed the suitability of PCA and CA as auxiliary methods for interpreting infrared spectra in order to recognize whether compatibilities or incompatibilities between the active substance and excipients occur. On the basis of the IR spectra and the results of PCA and CA, it was possible to demonstrate that the presence of lactose, β-cyclodextrin and meglumine in binary mixtures produces interactions with baclofen. The results were verified using differential scanning calorimetry, differential thermal analysis, thermogravimetry/differential thermogravimetry and X-ray powder diffraction analyses.
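The PCA-plus-cluster-analysis workflow on IR spectra can be sketched as follows; the synthetic spectra (a "compatible" mixture whose bands are unchanged and an "incompatible" one with a new interaction band) are invented for illustration and are not the paper's data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(4)
wn = np.linspace(400, 4000, 300)                  # wavenumber axis, cm^-1

def band(center, width):
    return np.exp(-((wn - center) / width) ** 2)

compatible = band(1700, 80) + band(2900, 120)     # bands of the pure components
incompatible = band(1700, 80) + band(3300, 150)   # new band: interaction product
spectra = np.array(
    [compatible + 0.02 * rng.standard_normal(wn.size) for _ in range(5)]
    + [incompatible + 0.02 * rng.standard_normal(wn.size) for _ in range(5)])

# PCA via SVD of the mean-centered spectra
Xc = spectra - spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :2] * s[:2]                         # first two PC scores

# Ward's hierarchical agglomeration with Euclidean distances, as in the paper
labels = fcluster(linkage(scores, method="ward"), t=2, criterion="maxclust")
```

Replicates of the two mixture types fall into separate clusters, which is the pattern the authors use to flag an incompatibility before confirming it thermoanalytically.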
Workshop on Algorithms for Time-Series Analysis
NASA Astrophysics Data System (ADS)
Protopapas, Pavlos
2012-04-01
Summary: This Workshop covered the four major subjects listed below in two 90-minute sessions. Each talk or tutorial allowed questions, and concluded with a discussion. Classification: Automatic classification using machine-learning methods is becoming a standard in surveys that generate large datasets. Ashish Mahabal (Caltech) reviewed various methods, and presented examples of several applications. Time-Series Modelling: Suzanne Aigrain (Oxford University) discussed autoregressive models and multivariate approaches such as Gaussian Processes. Meta-classification/mixture of expert models: Karim Pichara (Pontificia Universidad Católica, Chile) described the substantial promise which machine-learning classification methods are now showing in automatic classification, and discussed how the various methods can be combined together. Event Detection: Pavlos Protopapas (Harvard) addressed methods of fast identification of events with low signal-to-noise ratios, enlarging on the characterization and statistical issues of low signal-to-noise ratios and rare events.
NASA Astrophysics Data System (ADS)
Iwata, Takaki; Yamazaki, Yoshihiro; Kuninaka, Hiroto
2013-08-01
In this study, we examine the validity of the transition of the human height distribution from the log-normal distribution to the normal distribution during puberty, as suggested in an earlier study [Kuninaka et al.: J. Phys. Soc. Jpn. 78 (2009) 125001]. Our data analysis reveals that, in late puberty, the variation in height decreases as children grow. Thus, the classification of a height dataset by age at this stage leads us to analyze a mixture of distributions with larger means and smaller variations. This mixture distribution has a negative skewness and is consequently closer to the normal distribution than to the log-normal distribution. The opposite case occurs in early puberty and the mixture distribution is positively skewed, which resembles the log-normal distribution rather than the normal distribution. Thus, this scenario mimics the transition during puberty. Additionally, our scenario is realized through a numerical simulation based on a statistical model. The present study does not support the transition suggested by the earlier study.
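The sign argument — a mixture whose higher-mean component also has the larger spread is right-skewed, and vice versa — is easy to reproduce numerically. The means and standard deviations below are invented for illustration, not the study's height data:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(5)
# early puberty: the taller component also varies more -> positive skew,
# resembling a log-normal distribution
early = np.concatenate([rng.normal(140, 4, 10_000),
                        rng.normal(150, 8, 10_000)])
# late puberty: variation shrinks as the mean grows -> negative skew,
# closer to a normal than to a log-normal distribution
late = np.concatenate([rng.normal(160, 8, 10_000),
                       rng.normal(172, 3, 10_000)])
```

The positive skewness of the first mixture and negative skewness of the second mimic the apparent log-normal-to-normal transition discussed in the abstract.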
Abanto-Valle, C. A.; Bandyopadhyay, D.; Lachos, V. H.; Enriquez, I.
2009-01-01
A Bayesian analysis of stochastic volatility (SV) models using the class of symmetric scale mixtures of normal (SMN) distributions is considered. In the face of non-normality, this provides an appealing robust alternative to the routine use of the normal distribution. Specific distributions examined include the normal, Student-t, slash and variance gamma distributions. Using a Bayesian paradigm, an efficient Markov chain Monte Carlo (MCMC) algorithm is introduced for parameter estimation. Moreover, the mixing parameters obtained as a by-product of the scale mixture representation can be used to identify outliers. The methods developed are applied to analyze daily stock return data on the S&P500 index. Bayesian model selection criteria as well as out-of-sample forecasting results reveal that the SV models based on heavy-tailed SMN distributions provide significant improvement in model fit as well as prediction to the S&P500 index data over the usual normal model. PMID:20730043
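The scale-mixture representation the paper exploits can be illustrated outside any MCMC machinery: drawing a Gamma mixing variable and dividing a standard normal by its square root yields a Student-t, whose tails are visibly heavier than the normal's. (A sketch only; the paper embeds this representation inside an SV model.)

```python
import numpy as np

rng = np.random.default_rng(6)
nu, n = 5.0, 200_000
# lambda ~ Gamma(nu/2, rate nu/2)  =>  x = z / sqrt(lambda) ~ Student-t(nu)
lam = rng.gamma(nu / 2, 2 / nu, n)      # numpy uses the scale parameterization
x = rng.standard_normal(n) / np.sqrt(lam)

var_theory = nu / (nu - 2)              # variance of t_nu, here 5/3
tail = np.mean(np.abs(x) > 3)           # cf. ~0.0027 for a standard normal
```

In the SV models, the per-observation mixing parameter lam is what flags outliers as a by-product of estimation: unusually small values inflate the local variance.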
Method for identifying known materials within a mixture of unknowns
Wagner, John S.
2000-01-01
One or both of two methods and systems are used to determine concentration of a known material in an unknown mixture on the basis of the measured interaction of electromagnetic waves upon the mixture. One technique is to utilize a multivariate analysis patch technique to develop a library of optimized patches of spectral signatures of known materials containing only those pixels most descriptive of the known materials by an evolutionary algorithm. Identity and concentration of the known materials within the unknown mixture is then determined by minimizing the residuals between the measurements from the library of optimized patches and the measurements from the same pixels from the unknown mixture. Another technique is to train a neural network by the genetic algorithm to determine the identity and concentration of known materials in the unknown mixture. The two techniques may be combined into an expert system providing cross checks for accuracy.
System for identifying known materials within a mixture of unknowns
Wagner, John S.
1999-01-01
One or both of two methods and systems are used to determine concentration of a known material in an unknown mixture on the basis of the measured interaction of electromagnetic waves upon the mixture. One technique is to utilize a multivariate analysis patch technique to develop a library of optimized patches of spectral signatures of known materials containing only those pixels most descriptive of the known materials by an evolutionary algorithm. Identity and concentration of the known materials within the unknown mixture is then determined by minimizing the residuals between the measurements from the library of optimized patches and the measurements from the same pixels from the unknown mixture. Another technique is to train a neural network by the genetic algorithm to determine the identity and concentration of known materials in the unknown mixture. The two techniques may be combined into an expert system providing cross checks for accuracy.
System for identifying known materials within a mixture of unknowns
Wagner, J.S.
1999-07-20
One or both of two methods and systems are used to determine concentration of a known material in an unknown mixture on the basis of the measured interaction of electromagnetic waves upon the mixture. One technique is to utilize a multivariate analysis patch technique to develop a library of optimized patches of spectral signatures of known materials containing only those pixels most descriptive of the known materials by an evolutionary algorithm. Identity and concentration of the known materials within the unknown mixture is then determined by minimizing the residuals between the measurements from the library of optimized patches and the measurements from the same pixels from the unknown mixture. Another technique is to train a neural network by the genetic algorithm to determine the identity and concentration of known materials in the unknown mixture. The two techniques may be combined into an expert system providing cross checks for accuracy. 37 figs.
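The residual-minimization step common to these patents reduces, in its simplest form, to non-negative least squares against the library signatures. A toy sketch with random synthetic signatures (omitting the patch-selection, evolutionary-algorithm, and neural-network machinery the patents actually claim):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)
# library of spectral signatures of known materials over 50 pixels
A = np.abs(rng.standard_normal((50, 3)))
c_true = np.array([0.6, 0.0, 0.4])      # unknown mixture: materials 0 and 2 only
b = A @ c_true + 0.01 * rng.standard_normal(50)

# concentrations that minimize the residual, constrained non-negative
c_hat, resid = nnls(A, b)
```

The non-negativity constraint keeps absent materials near zero concentration, which is how residual minimization both identifies and quantifies the known materials in the unknown mixture.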
NASA Astrophysics Data System (ADS)
Moustafa, Azza Aziz; Salem, Hesham; Hegazy, Maha; Ali, Omnia
2015-02-01
Simple, accurate, and selective methods have been developed and validated for simultaneous determination of a ternary mixture of Chlorpheniramine maleate (CPM), Pseudoephedrine HCl (PSE) and Ibuprofen (IBF), in tablet dosage form. Four univariate methods manipulating ratio spectra were applied, method A is the double divisor-ratio difference spectrophotometric method (DD-RD). Method B is double divisor-derivative ratio spectrophotometric method (DD-RD). Method C is derivative ratio spectrum-zero crossing method (DRZC), while method D is mean centering of ratio spectra (MCR). Two multivariate methods were also developed and validated, methods E and F are Principal Component Regression (PCR) and Partial Least Squares (PLSs). The proposed methods have the advantage of simultaneous determination of the mentioned drugs without prior separation steps. They were successfully applied to laboratory-prepared mixtures and to commercial pharmaceutical preparation without any interference from additives. The proposed methods were validated according to the ICH guidelines. The obtained results were statistically compared with the official methods where no significant difference was observed regarding both accuracy and precision.
Anomaly detection of microstructural defects in continuous fiber reinforced composites
NASA Astrophysics Data System (ADS)
Bricker, Stephen; Simmons, J. P.; Przybyla, Craig; Hardie, Russell
2015-03-01
Ceramic matrix composites (CMC) with continuous fiber reinforcements have the potential to enable the next generation of high-speed hypersonic vehicles and/or significant improvements in gas turbine engine performance due to their exhibited toughness when subjected to high mechanical loads at extreme temperatures (2200°F+). Reinforced fiber composites (RFC) provide increased fracture toughness, crack growth resistance, and strength, though little is known about how stochastic variation and imperfections in the material affect material properties. In this work, tools are developed for quantifying anomalies within the microstructure at several scales. The detection and characterization of anomalous microstructure is a critical step in linking production techniques to properties, as well as in accurate material simulation and property prediction for the integrated computational materials engineering (ICME) of RFC-based components. It is desired to find statistical outliers for any number of material characteristics such as fibers, fiber coatings, and pores. Here, fiber orientation, or `velocity', and `velocity' gradient features are developed and examined for anomalous behavior. Categorizing anomalous behavior in the CMC is approached by multivariate Gaussian mixture modeling. A Gaussian mixture is employed to estimate the probability density function (PDF) of the features in question, and anomalies are classified by their low likelihood of belonging to the statistically normal behavior for that feature.
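The likelihood-thresholding step described above can be sketched with scikit-learn's `GaussianMixture`. The two synthetic features stand in for the `velocity' and `velocity'-gradient features; the two-component mixture, the injected outliers, and the 1% likelihood cutoff are illustrative assumptions, not values from the paper.

```python
# Anomaly detection by Gaussian mixture likelihood thresholding (sketch).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 2))      # bulk microstructure features
outliers = rng.normal(8.0, 0.5, size=(10, 2))     # anomalous observations
X = np.vstack([normal, outliers])

# Estimate the PDF of 'normal' behavior with a Gaussian mixture.
gmm = GaussianMixture(n_components=2, random_state=0).fit(normal)
log_dens = gmm.score_samples(X)                   # log PDF under the fitted mixture

# Flag points whose likelihood falls below the 1% quantile of normal behavior.
threshold = np.quantile(gmm.score_samples(normal), 0.01)
is_anomaly = log_dens < threshold
```

Points with low log-density under the fitted mixture are classified as anomalous microstructure; the cutoff quantile controls the expected false-positive rate.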
NASA Astrophysics Data System (ADS)
Hegazy, Maha A.; Lotfy, Hayam M.; Mowaka, Shereen; Mohamed, Ekram Hany
2016-07-01
Wavelets have been adapted for a vast number of signal-processing applications due to the amount of information that can be extracted from a signal. In this work, a comparative study was conducted on the efficiency of the continuous wavelet transform (CWT) as a signal-processing tool in univariate regression and as a pre-processing tool in multivariate analysis using partial least squares (CWT-PLS). These were applied to complex spectral signals of ternary and quaternary mixtures. The CWT-PLS method succeeded in the simultaneous determination of a quaternary mixture of drotaverine (DRO), caffeine (CAF), paracetamol (PAR) and p-aminophenol (PAP, the major impurity of paracetamol). The univariate CWT, in contrast, failed to determine the quaternary mixture simultaneously and was able to determine only PAR and PAP, as well as the ternary mixtures of DRO, CAF, and PAR and of CAF, PAR, and PAP. During the calculations of CWT, different wavelet families were tested. The univariate CWT method was validated according to the ICH guidelines. For the development of the CWT-PLS model, a calibration set was prepared by means of an orthogonal experimental design, and its absorption spectra were recorded and processed by CWT. The CWT-PLS model was constructed by regression between the wavelet coefficients and concentration matrices, and validation was performed by both cross validation and external validation sets. Both methods were successfully applied for determination of the studied drugs in pharmaceutical formulations.
14 CFR 23.1147 - Mixture controls.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Mixture controls. 23.1147 Section 23.1147... STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Powerplant Powerplant Controls and Accessories § 23.1147 Mixture controls. (a) If there are mixture controls, each engine must have a separate...
14 CFR 23.1147 - Mixture controls.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Mixture controls. 23.1147 Section 23.1147... STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Powerplant Powerplant Controls and Accessories § 23.1147 Mixture controls. (a) If there are mixture controls, each engine must have a separate...
Scale Mixture Models with Applications to Bayesian Inference
NASA Astrophysics Data System (ADS)
Qin, Zhaohui S.; Damien, Paul; Walker, Stephen
2003-11-01
Scale mixtures of uniform distributions are used to model non-normal data in time series and econometrics in a Bayesian framework. Heteroscedastic and skewed data models are also tackled using scale mixtures of uniform distributions.
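A minimal numerical check of the representation underlying the abstract: a standard normal arises as a scale mixture of uniforms, X | V ~ Uniform(-√V, +√V) with V ~ χ²(3). This specific mixing distribution is a standard result quoted here for illustration, not taken from the paper.

```python
# Generate N(0,1) draws via the uniform scale-mixture representation.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
V = rng.chisquare(df=3, size=n)               # mixing variable, chi-squared with 3 df
X = rng.uniform(-np.sqrt(V), np.sqrt(V))      # uniform draw given the scale

mean, var = X.mean(), X.var()                 # should be close to 0 and 1
```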
14 CFR 27.1147 - Mixture controls.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Mixture controls. 27.1147 Section 27.1147... STANDARDS: NORMAL CATEGORY ROTORCRAFT Powerplant Powerplant Controls and Accessories § 27.1147 Mixture controls. If there are mixture controls, each engine must have a separate control and the controls must be...
Normalization methods in time series of platelet function assays
Van Poucke, Sven; Zhang, Zhongheng; Roest, Mark; Vukicevic, Milan; Beran, Maud; Lauwereins, Bart; Zheng, Ming-Hua; Henskens, Yvonne; Lancé, Marcus; Marcus, Abraham
2016-01-01
Platelet function can be quantitatively assessed by specific assays, such as light-transmission aggregometry, multiple-electrode aggregometry measuring the response to adenosine diphosphate (ADP), arachidonic acid, collagen, and thrombin-receptor activating peptide, and by viscoelastic tests such as rotational thromboelastometry (ROTEM). The task of extracting meaningful statistical and clinical information from the high-dimensional data spaces of temporal multivariate clinical data represented as multivariate time series is complex. Building insightful visualizations for multivariate time series demands adequate usage of normalization techniques. In this article, various methods for data normalization (z-transformation, range transformation, proportion transformation, and interquartile range) are presented and visualized, and the most suitable approach for platelet function data series is discussed. Normalization was calculated per assay (test) across all time points and per time point across all tests. Interquartile range, range transformation, and z-transformation preserved the correlation, as calculated by the Spearman correlation test, when normalization was performed per assay (test) across all time points. When normalizing per time point across all tests, no correlation could be extracted from the charts, as was also the case when all data were normalized as one dataset. PMID:27428217
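The four normalizations named above can be sketched as follows for a single assay's time series (illustrative numbers, not study data); applying them per time point instead means running the same functions column-wise across assays.

```python
# The four normalizations discussed in the article, for one assay's series.
import numpy as np

def z_transform(x):                      # zero mean, unit (sample) variance
    return (x - x.mean()) / x.std(ddof=1)

def range_transform(x):                  # rescale to [0, 1]
    return (x - x.min()) / (x.max() - x.min())

def proportion_transform(x):             # each value as a share of the total
    return x / x.sum()

def iqr_transform(x):                    # robust: center on median, scale by IQR
    q1, q3 = np.percentile(x, [25, 75])
    return (x - np.median(x)) / (q3 - q1)

adp_response = np.array([42.0, 55.0, 61.0, 48.0, 70.0, 66.0])  # illustrative values
z = z_transform(adp_response)
r = range_transform(adp_response)
p = proportion_transform(adp_response)
iqr = iqr_transform(adp_response)
```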
Modeling abundance using multinomial N-mixture models
Royle, Andy
2016-01-01
Multinomial N-mixture models are a generalization of the binomial N-mixture models described in Chapter 6 that allows for more complex and informative sampling protocols beyond simple counts. Many commonly used protocols, such as multiple-observer sampling, removal sampling, and capture-recapture, produce a multivariate count frequency that has a multinomial distribution and for which multinomial N-mixture models can be developed. Such protocols typically result in more precise estimates than binomial mixture models because they provide direct information about parameters of the observation process. We demonstrate the analysis of these models in BUGS using several distinct formulations that afford great flexibility in the types of models that can be developed, and we demonstrate likelihood analysis using the unmarked package. Spatially stratified capture-recapture models are one class of models that fall into the multinomial N-mixture framework, and we discuss analysis of stratified versions of classical models such as Mb and Mh, and other classes of models that are only possible to describe within the multinomial N-mixture framework.
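As a sketch of why removal sampling yields a multinomial observation model: with per-pass detection probability p and K passes, first capture on pass k has probability p(1-p)^(k-1), and (1-p)^K is the "never detected" cell. The numbers below are illustrative, not from the chapter.

```python
# Multinomial cell probabilities for a K-pass removal protocol.
import numpy as np

def removal_cell_probs(p, K):
    """Capture-cell probabilities plus the 'never detected' probability."""
    pi = np.array([p * (1 - p) ** (k - 1) for k in range(1, K + 1)])
    return pi, (1 - p) ** K

pi, pi0 = removal_cell_probs(p=0.4, K=3)
N = 100                                   # local abundance
expected_counts = N * pi                  # expected removals per pass
```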
Westman, Eric; Aguilar, Carlos; Muehlboeck, J-Sebastian; Simmons, Andrew
2013-01-01
Automated structural magnetic resonance imaging (MRI) processing pipelines are gaining popularity for Alzheimer's disease (AD) research. They generate regional volumes, cortical thickness measures and other measures, which can be used as input for multivariate analysis. It is not clear which combination of measures and which normalization approach are most useful for AD classification and for predicting mild cognitive impairment (MCI) conversion. The current study includes MRI scans from 699 subjects [AD, MCI and controls (CTL)] from the Alzheimer's Disease Neuroimaging Initiative (ADNI). The FreeSurfer pipeline was used to generate regional volume, cortical thickness, gray matter volume, surface area, mean curvature, Gaussian curvature, folding index and curvature index measures. 259 variables were used for orthogonal partial least squares to latent structures (OPLS) multivariate analysis. Normalization approaches were explored and the optimal combination of measures was determined. Results indicate that cortical thickness measures should not be normalized, while volumes should probably be normalized by intracranial volume (ICV). Combining regional cortical thickness measures (not normalized) with cortical and subcortical volumes (normalized by ICV) using OPLS gave a prediction accuracy of 91.5% when distinguishing AD versus CTL. This model prospectively predicted future decline from MCI to AD, with 75.9% of converters correctly classified. Normalization strategy did not have a significant effect on the accuracies of multivariate models containing multiple MRI measures for this large dataset. The appropriate choice of input for multivariate analysis in AD and MCI is of great importance. The results support the use of unnormalized cortical thickness measures and volumes normalized by ICV.
NONPARAMETRIC MANOVA APPROACHES FOR NON-NORMAL MULTIVARIATE OUTCOMES WITH MISSING VALUES
He, Fanyin; Mazumdar, Sati; Tang, Gong; Bhatia, Triptish; Anderson, Stewart J.; Dew, Mary Amanda; Krafty, Robert; Nimgaonkar, Vishwajit; Deshpande, Smita; Hall, Martica; Reynolds, Charles F.
2017-01-01
Between-group comparisons often entail many correlated response variables. The multivariate linear model, with its assumption of multivariate normality, is the accepted standard tool for these tests. When this assumption is violated, the nonparametric multivariate Kruskal-Wallis (MKW) test is frequently used. However, this test requires complete cases with no missing values in response variables. Deletion of cases with missing values likely leads to inefficient statistical inference. Here we extend the MKW test to retain information from partially-observed cases. Results of simulated studies and analysis of real data show that the proposed method provides adequate coverage and superior power to complete-case analyses. PMID:29416225
Yehia, Ali M; Mohamed, Heba M
2016-01-05
Three advanced chemometric-assisted spectrophotometric methods, namely Concentration Residuals Augmented Classical Least Squares (CRACLS), Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS), and Principal Component Analysis-Artificial Neural Networks (PCA-ANN), were developed, validated, and benchmarked against PLS calibration to resolve the severely overlapped spectra and simultaneously determine Paracetamol (PAR), Guaifenesin (GUA) and Phenylephrine (PHE) in their ternary mixture and in the presence of p-aminophenol (AP), the main degradation product and synthesis impurity of Paracetamol. The analytical performance of the proposed methods was described by percentage recoveries, root mean square error of calibration, and standard error of prediction. The four multivariate calibration methods could be used directly without any preliminary separation step and were successfully applied to pharmaceutical formulation analysis, showing no excipients' interference. Copyright © 2015 Elsevier B.V. All rights reserved.
Sworn testimony of the model evidence: Gaussian Mixture Importance (GAME) sampling
NASA Astrophysics Data System (ADS)
Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.
2017-07-01
What is the "best" model? The answer to this question lies in part in the eyes of the beholder; nevertheless, a good model must blend rigorous theory with redeeming qualities such as parsimony and quality of fit. Model selection is used to make inferences, via weighted averaging, from a set of K candidate models, Mk, k = 1, …, K, and to help identify which model is most supported by the observed data, Y~ = (y~1, …, y~n). Here, we introduce a new and robust estimator of the model evidence, p(Y~|Mk), which acts as the normalizing constant in the denominator of Bayes' theorem and provides a single quantitative measure of relative support for each hypothesis that integrates model accuracy, uncertainty, and complexity. However, p(Y~|Mk) is analytically intractable for most practical modeling problems. Our method, coined GAussian Mixture importancE (GAME) sampling, uses bridge sampling of a mixture distribution fitted to samples of the posterior model parameter distribution derived from MCMC simulation. We benchmark the accuracy and reliability of GAME sampling by application to a diverse set of multivariate target distributions (up to 100 dimensions) with known values of p(Y~|Mk) and to hypothesis testing using numerical modeling of the rainfall-runoff transformation of the Leaf River watershed in Mississippi, USA. These case studies demonstrate that GAME sampling provides robust and unbiased estimates of the evidence at a relatively small computational cost, outperforming commonly used estimators. The GAME sampler is implemented in the MATLAB package of DREAM and considerably simplifies scientific inquiry through hypothesis testing and model selection.
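The core idea can be sketched with plain importance sampling rather than GAME's bridge-sampling refinement: fit a Gaussian mixture to posterior draws and use it as the proposal for estimating p(Y~|Mk). The conjugate normal-normal toy model below is an assumption chosen because its evidence is available in closed form for checking; the closed-form posterior draws stand in for MCMC output.

```python
# Evidence estimation with a Gaussian-mixture proposal (importance-sampling sketch).
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm, multivariate_normal
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n = 20
y = rng.normal(0.5, 1.0, size=n)          # observed data: y_i ~ N(theta, 1), prior theta ~ N(0, 1)

# Exact log evidence for this conjugate model, used only for checking.
log_Z_true = multivariate_normal.logpdf(y, mean=np.zeros(n),
                                        cov=np.eye(n) + np.ones((n, n)))

# Closed-form posterior draws (surrogate for an MCMC sample).
post_mean, post_var = y.sum() / (n + 1), 1.0 / (n + 1)
theta_post = rng.normal(post_mean, np.sqrt(post_var), size=(2000, 1))

# Fit a Gaussian-mixture proposal to the posterior sample, then importance-sample.
gmm = GaussianMixture(n_components=2, random_state=0).fit(theta_post)
theta_q, _ = gmm.sample(20_000)
log_q = gmm.score_samples(theta_q)
t = theta_q.ravel()
log_prior = norm.logpdf(t, 0.0, 1.0)
log_lik = norm.logpdf(y[None, :], t[:, None], 1.0).sum(axis=1)
log_Z_hat = logsumexp(log_prior + log_lik - log_q) - np.log(len(t))
```

Because the mixture proposal closely matches the posterior, the importance weights are nearly constant and the evidence estimate is stable; bridge sampling, as in GAME, further reduces the variance.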
Empirical performance of the multivariate normal universal portfolio
NASA Astrophysics Data System (ADS)
Tan, Choon Peng; Pang, Sook Theng
2013-09-01
Universal portfolios generated by the multivariate normal distribution are studied, with emphasis on the case where the variables are dependent, namely, where the covariance matrix is not diagonal. The moving-order multivariate normal universal portfolio requires a very long running time and a large amount of computer memory to implement. With the objective of reducing memory and running time, the finite-order universal portfolio is introduced. Some stock-price data sets are selected from the local stock exchange and the finite-order universal portfolio is run on them for small finite orders. Empirically, it is shown that the portfolio can outperform the moving-order Dirichlet universal portfolio of Cover and Ordentlich [2] for certain parameters in the selected data sets.
Use of an Amino Acid Mixture in Treatment of Phenylketonuria
Bentovim, A.; Clayton, Barbara E.; Francis, Dorothy E. M.; Shepherd, Jean; Wolff, O. H.
1970-01-01
Twelve children with phenylketonuria diagnosed and treated from the first few weeks of life were grouped into pairs. Before the trial all of them were receiving a commercial preparation containing a protein hydrolysate low in phenylalanine (Cymogran, Allen and Hanburys Ltd.) as a substitute for natural protein. One of each pair was given an amino acid mixture instead of Cymogran for about 6 months. Use of the mixture involved considerable modification of the diet, and in particular the inclusion of greater amounts of phenylalanine-free foods. All six accepted the new mixture without difficulty, food problems were greatly reduced, parents welcomed the new preparation, and the quality of family life improved. Normal growth was maintained, and with a mixture of L-amino acids the plasma and urinary amino acid levels were normal. Further studies are needed before the mixture can be recommended for children under 20 months of age. PMID:5477678
Stefanuto, Pierre-Hugues; Perrault, Katelynn A; Stadler, Sonja; Pesesse, Romain; LeBlanc, Helene N; Forbes, Shari L; Focant, Jean-François
2015-06-01
In forensic thanato-chemistry, the understanding of the process of soft tissue decomposition is still limited. A better understanding of the decomposition process and the characterization of the associated volatile organic compounds (VOC) can help to improve the training of victim recovery (VR) canines, which are used to search for trapped victims in natural disasters or to locate corpses during criminal investigations. The complexity of matrices and the dynamic nature of this process require the use of comprehensive analytical methods for investigation. Moreover, the variability of the environment and between individuals creates additional difficulties in terms of normalization. The resolution of the complex mixture of VOCs emitted by a decaying corpse can be improved using comprehensive two-dimensional gas chromatography (GC × GC), compared to classical single-dimensional gas chromatography (1DGC). This study combines the analytical advantages of GC × GC coupled to time-of-flight mass spectrometry (TOFMS) with the data handling robustness of supervised multivariate statistics to investigate the VOC profile of human remains during early stages of decomposition. Various supervised multivariate approaches are compared to interpret the large data set. Moreover, early decomposition stages of pig carcasses (typically used as human surrogates in field studies) are also monitored to obtain a direct comparison of the two VOC profiles and estimate the robustness of this human decomposition analog model. In this research, we demonstrate that pig and human decomposition processes can be described by the same trends for the major compounds produced during the early stages of soft tissue decomposition.
Classical least squares multivariate spectral analysis
Haaland, David M.
2002-01-01
An improved classical least squares (CLS) multivariate spectral analysis method adds spectral shapes describing non-calibrated components and system effects (other than baseline corrections) present in the analyzed mixture to the prediction phase of the method. These improvements decrease or eliminate many of the restrictions of CLS-type methods and greatly extend their capabilities, accuracy, and precision. One new application of this prediction-augmented CLS (PACLS) method is the ability to accurately predict unknown sample concentrations when new unmodeled spectral components are present in the unknown samples. Other applications of PACLS include the incorporation of spectrometer drift into the quantitative multivariate model and the maintenance of a calibration on a drifting spectrometer. Finally, the ability of PACLS to transfer a multivariate model between spectrometers is demonstrated.
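A sketch of the augmentation idea on synthetic spectra: plain CLS prediction is biased when an un-calibrated interferent is present, while appending the interferent's spectral shape at prediction time removes the bias. Peak positions, widths, and noise level are illustrative assumptions, not values from the patent.

```python
# CLS prediction vs. prediction augmented with an unmodeled spectral shape.
import numpy as np

rng = np.random.default_rng(2)
wl = np.linspace(0, 1, 100)

def gauss(c, w):
    return np.exp(-((wl - c) ** 2) / w)

S = np.stack([gauss(0.3, 0.01), gauss(0.6, 0.01)])   # calibrated pure-component spectra
interf = gauss(0.45, 0.02)                           # un-calibrated interferent shape

c_true = np.array([0.7, 0.4])
a = c_true @ S + 0.5 * interf + rng.normal(0, 0.001, wl.size)  # measured mixture spectrum

# Plain CLS: biased because the interferent is absorbed into the component fit.
c_cls, *_ = np.linalg.lstsq(S.T, a, rcond=None)

# Augmented CLS: append the interferent's shape at prediction time.
S_aug = np.vstack([S, interf])
c_pacls, *_ = np.linalg.lstsq(S_aug.T, a, rcond=None)
```

The first two entries of `c_pacls` recover the true concentrations to within the noise, while `c_cls` carries a substantial bias from the overlapping interferent.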
Linn, Kristin A; Gaonkar, Bilwaj; Satterthwaite, Theodore D; Doshi, Jimit; Davatzikos, Christos; Shinohara, Russell T
2016-05-15
Normalization of feature vector values is a common practice in machine learning. Generally, each feature is either rescaled to the unit hypercube or standardized to zero mean and unit variance. Classification decisions based on support vector machines (SVMs) or by other methods are sensitive to the specific normalization used on the features. In the context of multivariate pattern analysis using neuroimaging data, standardization effectively up- and down-weights features based on their individual variability. Since the standard approach uses the entire data set to guide the normalization, it utilizes the total variability of these features. This total variation is inevitably dependent on the amount of marginal separation between groups. Thus, such a normalization may attenuate the separability of the data in high dimensional space. In this work we propose an alternate approach that uses an estimate of the control-group standard deviation to normalize features before training. We study our proposed approach in the context of group classification using structural MRI data. We show that control-based normalization leads to better reproducibility of estimated multivariate disease patterns and improves the classifier performance in many cases. Copyright © 2016 Elsevier Inc. All rights reserved.
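A minimal sketch of the contrast drawn above, on synthetic data where the group difference is confined to one feature: scaling by the control group's standard deviation leaves that feature's separation intact, while pooled standardization shrinks it, because the between-group difference inflates the pooled variance.

```python
# Control-based feature normalization vs. pooled standardization (sketch).
import numpy as np

rng = np.random.default_rng(3)
controls = rng.normal(0.0, 1.0, size=(100, 5))
patients = rng.normal(0.0, 1.0, size=(100, 5))
patients[:, 0] += 3.0                       # group difference confined to feature 0

mu = controls.mean(axis=0)
sd_ctrl = controls.std(axis=0, ddof=1)      # control-group scale
X = np.vstack([controls, patients])
X_ctrl_norm = (X - mu) / sd_ctrl            # proposed: control-based normalization

# Pooled standardization for contrast: the separated feature gets down-weighted.
X_pooled = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
```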
NASA Astrophysics Data System (ADS)
Theodorakou, Chrysoula; Farquharson, Michael J.
2009-08-01
The motivation behind this study is to assess whether angular dispersive x-ray diffraction (ADXRD) data, processed using multivariate analysis techniques, can be used for classifying secondary colorectal liver cancer tissue and normal surrounding liver tissue in human liver biopsy samples. The ADXRD profiles from a total of 60 samples of normal liver tissue and colorectal liver metastases were measured using a synchrotron radiation source. The data were analysed for 56 samples using nonlinear peak-fitting software. Four peaks were fitted to all of the ADXRD profiles, and the amplitude, area, amplitude and area ratios for three of the four peaks were calculated and used for the statistical and multivariate analysis. The statistical analysis showed that there are significant differences between all the peak-fitting parameters and ratios between the normal and the diseased tissue groups. The technique of soft independent modelling of class analogy (SIMCA) was used to classify normal liver tissue and colorectal liver metastases resulting in 67% of the normal tissue samples and 60% of the secondary colorectal liver tissue samples being classified correctly. This study has shown that the ADXRD data of normal and secondary colorectal liver cancer are statistically different and x-ray diffraction data analysed using multivariate analysis have the potential to be used as a method of tissue classification.
NASA Astrophysics Data System (ADS)
Mignani, A. G.; Ciaccheri, L.; Smith, P. R.; Cimato, A.; Attilio, C.; Huertas, R.; Melgosa Latorre, Manuel; Bertho, A. C.; O'Rourke, B.; McMillan, N. D.
2005-05-01
Scattered colorimetry, i.e., multi-angle and multi-wavelength absorption spectroscopy performed in the visible spectral range, was used to map three kinds of liquids: extra virgin olive oils, frying oils, and detergents in water. By multivariate processing of the spectral data, the liquids could be classified according to their intrinsic characteristics: the geographic area of the extra virgin olive oils, the degradation of the frying oils, and the surfactant types and mixtures in water.
Hegazy, Maha A; Lotfy, Hayam M; Mowaka, Shereen; Mohamed, Ekram Hany
2016-07-05
Wavelets have been adapted for a vast number of signal-processing applications due to the amount of information that can be extracted from a signal. In this work, a comparative study was conducted on the efficiency of the continuous wavelet transform (CWT) as a signal-processing tool in univariate regression and as a pre-processing tool in multivariate analysis using partial least squares (CWT-PLS). These were applied to complex spectral signals of ternary and quaternary mixtures. The CWT-PLS method succeeded in the simultaneous determination of a quaternary mixture of drotaverine (DRO), caffeine (CAF), paracetamol (PAR) and p-aminophenol (PAP, the major impurity of paracetamol). The univariate CWT, in contrast, failed to determine the quaternary mixture simultaneously and was able to determine only PAR and PAP, as well as the ternary mixtures of DRO, CAF, and PAR and of CAF, PAR, and PAP. During the calculations of CWT, different wavelet families were tested. The univariate CWT method was validated according to the ICH guidelines. For the development of the CWT-PLS model, a calibration set was prepared by means of an orthogonal experimental design, and its absorption spectra were recorded and processed by CWT. The CWT-PLS model was constructed by regression between the wavelet coefficients and concentration matrices, and validation was performed by both cross validation and external validation sets. Both methods were successfully applied for determination of the studied drugs in pharmaceutical formulations. Copyright © 2016 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Gibbons, Robert D.; And Others
The probability integral of the multivariate normal distribution (ND) has received considerable attention since W. F. Sheppard's (1900) and K. Pearson's (1901) seminal work on the bivariate ND. This paper evaluates the formula that represents the "n x n" correlation matrix of the χi and the standardized multivariate…
Statistical analysis of multivariate atmospheric variables. [cloud cover
NASA Technical Reports Server (NTRS)
Tubbs, J. D.
1979-01-01
Topics covered include: (1) estimation in discrete multivariate distributions; (2) a procedure to predict cloud cover frequencies in the bivariate case; (3) a program to compute conditional bivariate normal parameters; (4) the transformation of nonnormal multivariate distributions to near-normal; (5) a test of fit for the extreme value distribution based upon the generalized minimum chi-square; (6) a test of fit for continuous distributions based upon the generalized minimum chi-square; (7) the effect of correlated observations on confidence sets based upon chi-square statistics; and (8) the generation of random variates from specified distributions.
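Item (3) in the list above, computing conditional bivariate normal parameters, reduces to two textbook formulas: E[Y|X=x] = μy + ρ(σy/σx)(x − μx) and Var(Y|X=x) = σy²(1 − ρ²). A small sketch:

```python
# Conditional distribution of Y given X = x for a bivariate normal.
import numpy as np

def conditional_bivariate_normal(mu, cov, x):
    """Return the mean and variance of Y | X = x for (X, Y) ~ N(mu, cov)."""
    mu_x, mu_y = mu
    var_x, var_y = cov[0, 0], cov[1, 1]
    rho = cov[0, 1] / np.sqrt(var_x * var_y)
    cond_mean = mu_y + rho * np.sqrt(var_y / var_x) * (x - mu_x)
    cond_var = var_y * (1.0 - rho ** 2)
    return cond_mean, cond_var

m, v = conditional_bivariate_normal(np.array([0.0, 1.0]),
                                    np.array([[4.0, 2.0], [2.0, 9.0]]), x=2.0)
```

For the illustrative parameters above (ρ = 1/3), the conditional mean is 2 and the conditional variance is 8.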
Spatiotemporal multivariate mixture models for Bayesian model selection in disease mapping.
Lawson, A B; Carroll, R; Faes, C; Kirby, R S; Aregay, M; Watjou, K
2017-12-01
It is often the case that researchers wish to simultaneously explore the behavior of and estimate overall risk for multiple, related diseases with varying rarity while accounting for potential spatial and/or temporal correlation. In this paper, we propose a flexible class of multivariate spatio-temporal mixture models to fill this role. Further, these models offer flexibility with the potential for model selection as well as the ability to accommodate lifestyle, socio-economic, and physical environmental variables with spatial, temporal, or both structures. Here, we explore the capability of this approach via a large scale simulation study and examine a motivating data example involving three cancers in South Carolina. The results which are focused on four model variants suggest that all models possess the ability to recover simulation ground truth and display improved model fit over two baseline Knorr-Held spatio-temporal interaction model variants in a real data application.
Batterman, Stuart; Su, Feng-Chiao; Li, Shi; Mukherjee, Bhramar; Jia, Chunrong
2014-06-01
Emission sources of volatile organic compounds (VOCs) are numerous and widespread in both indoor and outdoor environments. Concentrations of VOCs indoors typically exceed outdoor levels, and most people spend nearly 90% of their time indoors. Thus, indoor sources generally contribute the majority of VOC exposures for most people. VOC exposure has been associated with a wide range of acute and chronic health effects; for example, asthma, respiratory diseases, liver and kidney dysfunction, neurologic impairment, and cancer. Although exposures to most VOCs for most persons fall below health-based guidelines, and long-term trends show decreases in ambient emissions and concentrations, a subset of individuals experience much higher exposures that exceed guidelines. Thus, exposure to VOCs remains an important environmental health concern. The present understanding of VOC exposures is incomplete. With the exception of a few compounds, concentration and especially exposure data are limited; and like other environmental data, VOC exposure data can show multiple modes, low and high extreme values, and sometimes a large portion of data below method detection limits (MDLs). Field data also show considerable spatial or interpersonal variability, and although evidence is limited, temporal variability seems high. These characteristics can complicate modeling and other analyses aimed at risk assessment, policy actions, and exposure management. In addition to these analytic and statistical issues, exposure typically occurs as a mixture, and mixture components may interact or jointly contribute to adverse effects. However, most pollutant regulations, guidelines, and studies remain focused on single compounds, and thus may underestimate cumulative exposures and risks arising from coexposures. In addition, the composition of VOC mixtures has not been thoroughly investigated, and mixture components show varying and complex dependencies.
Finally, although many factors are known to affect VOC exposures, many personal, environmental, and socioeconomic determinants remain to be identified, and the significance and applicability of the determinants reported in the literature are uncertain. To help answer these unresolved questions and overcome limitations of previous analyses, this project used several novel and powerful statistical modeling and analysis techniques and two large data sets. The overall objectives of this project were (1) to identify and characterize exposure distributions (including extreme values), (2) to evaluate mixtures (including dependencies), and (3) to identify determinants of VOC exposure.
METHODS
VOC data were drawn from two large data sets: the Relationships of Indoor, Outdoor, and Personal Air (RIOPA) study (1999-2001) and the National Health and Nutrition Examination Survey (NHANES; 1999-2000). The RIOPA study used a convenience sample to collect outdoor, indoor, and personal exposure measurements in three cities (Elizabeth, NJ; Houston, TX; Los Angeles, CA). In each city, approximately 100 households with adults and children who did not smoke were sampled twice for 18 VOCs. In addition, information about 500 variables associated with exposure was collected. The NHANES used a nationally representative sample and included personal VOC measurements for 851 participants. NHANES sampled 10 VOCs in common with RIOPA. Both studies used similar sampling methods and study periods.
Specific Aim 1. To estimate and model extreme value exposures, extreme value distribution models were fitted to the top 10% and 5% of VOC exposures. Health risks were estimated for individual VOCs and for three VOC mixtures. Simulated extreme value data sets, generated for each VOC and for fitted extreme value and lognormal distributions, were compared with measured concentrations (RIOPA observations) to evaluate each model's goodness of fit.
Mixture distributions were fitted with the conventional finite mixture of normal distributions and the semi-parametric Dirichlet process mixture (DPM) of normal distributions for three individual VOCs (chloroform, 1,4-DCB, and styrene). Goodness of fit for these full distribution models was also evaluated using simulated data.
Specific Aim 2. Mixtures in the RIOPA VOC data set were identified using positive matrix factorization (PMF) and by toxicologic mode of action. Dependency structures of a mixture's components were examined using mixture fractions and were modeled using copulas, which address correlations of multiple components across their entire distributions. Five candidate copulas (Gaussian, t, Gumbel, Clayton, and Frank) were evaluated, and the performance of fitted models was evaluated using simulation and mixture fractions. Cumulative cancer risks were calculated for mixtures, and results from copulas and multivariate lognormal models were compared with risks based on RIOPA observations.
Specific Aim 3. Exposure determinants were identified using stepwise regressions and linear mixed-effects models (LMMs).
RESULTS
Specific Aim 1. Extreme value exposures in RIOPA typically were best fitted by three-parameter generalized extreme value (GEV) distributions, and sometimes by the two-parameter Gumbel distribution. In contrast, lognormal distributions significantly underestimated both the level and likelihood of extreme values. Among the VOCs measured in RIOPA, 1,4-dichlorobenzene (1,4-DCB) was associated with the greatest cancer risks; for example, for the highest 10% of measurements of 1,4-DCB, all individuals had risk levels above 10^-4, and 13% of all participants had risk levels above 10^-2. Of the full-distribution models, the finite mixture of normal distributions with two to four clusters and the DPM of normal distributions had superior performance in comparison with the lognormal models.
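The GEV fitting used in Specific Aim 1 can be sketched with `scipy.stats.genextreme` on simulated data (RIOPA measurements are not reproduced here). Note that SciPy's shape parameter c is the negative of the usual ξ, so c < 0 is the heavy-tailed Fréchet case that lognormal fits tend to understate.

```python
# Fit a generalized extreme value distribution by maximum likelihood (sketch).
import numpy as np
from scipy import stats

# Simulate 'extreme value' exposures from a known heavy-tailed GEV (c < 0),
# then check that maximum-likelihood fitting recovers the parameters.
true_c, true_loc, true_scale = -0.2, 10.0, 2.0
sample = stats.genextreme.rvs(true_c, loc=true_loc, scale=true_scale,
                              size=4000, random_state=0)

c_hat, loc_hat, scale_hat = stats.genextreme.fit(sample)

# A high quantile of the fitted tail, e.g. the level exceeded 1% of the time.
q99 = stats.genextreme.ppf(0.99, c_hat, loc=loc_hat, scale=scale_hat)
```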
DPM distributions provided slightly better fit than the finite mixture distributions; the advantages of the DPM model were avoiding certain convergence issues associated with the finite mixture distributions, adaptively selecting the number of needed clusters, and providing uncertainty estimates. Although the results apply to the RIOPA data set, GEV distributions and mixture models appear more broadly applicable. These models can be used to simulate VOC distributions, which are neither normally nor lognormally distributed, and they accurately represent the highest exposures, which may have the greatest health significance. Specific Aim 2. Four VOC mixtures were identified and apportioned by PMF; they represented gasoline vapor, vehicle exhaust, chlorinated solvents and disinfection byproducts, and cleaning products and odorants. The last mixture (cleaning products and odorants) accounted for the largest fraction of an individual's total exposure (average of 42% across RIOPA participants). Often, a single compound dominated a mixture but the mixture fractions were heterogeneous; that is, the fractions of the compounds changed with the concentration of the mixture. Three VOC mixtures were identified by toxicologic mode of action and represented VOCs associated with hematopoietic, liver, and renal tumors. Estimated lifetime cumulative cancer risks exceeded 10(-3) for about 10% of RIOPA participants. The dependency structures of the VOC mixtures in the RIOPA data set fitted Gumbel (two mixtures) and t copulas (four mixtures). These copula types emphasize dependencies found in the upper and lower tails of a distribution. The copulas reproduced both risk predictions and exposure fractions with a high degree of accuracy and performed better than multivariate lognormal distributions. Specific Aim 3. 
In an analysis focused on the home environment and the outdoor (close to home) environment, home VOC concentrations dominated personal exposures (66% to 78% of the total exposure, depending on VOC); this was largely the result of the amount of time participants spent at home and the fact that indoor concentrations were much higher than outdoor concentrations for most VOCs. In a different analysis focused on the sources inside the home and outside (but close to the home), it was assumed that 100% of VOCs from outside sources would penetrate the home. Outdoor VOC sources accounted for 5% (d-limonene) to 81% (carbon tetrachloride [CTC]) of the total exposure. Personal exposure and indoor measurements had similar determinants depending on the VOC. Gasoline-related VOCs (e.g., benzene and methyl tert-butyl ether [MTBE]) were associated with city, residences with attached garages, pumping gas, wind speed, and home air exchange rate (AER). Odorant and cleaning-related VOCs (e.g., 1,4-DCB and chloroform) also were associated with city, and a residence's AER, size, and family members showering. Dry-cleaning and industry-related VOCs (e.g., tetrachloroethylene [or perchloroethylene, PERC] and trichloroethylene [TCE]) were associated with city, type of water supply to the home, and visits to the dry cleaner. These and other relationships were significant, they explained from 10% to 40% of the variance in the measurements, and are consistent with known emission sources and those reported in the literature. Outdoor concentrations of VOCs had only two determinants in common: city and wind speed. Overall, personal exposure was dominated by the home setting, although a large fraction of indoor VOC concentrations were due to outdoor sources. City of residence, personal activities, household characteristics, and meteorology were significant determinants. 
Concentrations in RIOPA were considerably lower than levels in the nationally representative NHANES for all VOCs except MTBE and 1,4-DCB. Differences between RIOPA and NHANES results can be explained by contrasts between the sampling designs and staging in the two studies, and by differences in the demographics, smoking, employment, occupations, and home locations. (ABSTRACT TRUNCATED)
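The finite-mixture versus Dirichlet-process-mixture comparison from Specific Aim 1 can be illustrated with scikit-learn, whose BayesianGaussianMixture implements a truncated variational approximation to the DPM. This is a sketch on synthetic one-dimensional data, not the published fitting procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture, BayesianGaussianMixture

rng = np.random.default_rng(1)
# Synthetic stand-in for log-transformed VOC concentrations: two overlapping modes.
x = np.concatenate([rng.normal(-1.0, 0.5, 300),
                    rng.normal(1.5, 0.8, 200)]).reshape(-1, 1)

# Finite mixture: the number of clusters must be chosen up front.
fm = GaussianMixture(n_components=3, random_state=0).fit(x)

# Dirichlet-process mixture (truncated variational approximation): superfluous
# components are shrunk toward zero weight, adaptively selecting cluster count.
dpm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0, max_iter=500,
).fit(x)

active = np.sum(dpm.weights_ > 0.05)  # clusters that keep appreciable weight
print(active)
```

The adaptive pruning of components is the convergence and model-selection advantage of the DPM noted above.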
Guo, Canyong; Luo, Xuefang; Zhou, Xiaohua; Shi, Beijia; Wang, Juanjuan; Zhao, Jinqi; Zhang, Xiaoxia
2017-06-05
Vibrational spectroscopic techniques such as infrared, near-infrared and Raman spectroscopy have become popular for detecting and quantifying polymorphism in pharmaceutics because they are fast and non-destructive. This study assessed the ability of three vibrational spectroscopic techniques combined with multivariate analysis to quantify a low-content undesired polymorph within a binary polymorphic mixture. Partial least squares (PLS) regression and support vector machine (SVM) regression were employed to build quantitative models. Fusidic acid, a steroidal antibiotic, was used as the model compound. It was found that PLS regression performed slightly better than SVM regression for all three spectroscopic techniques. Root mean square errors of prediction (RMSEP) ranged from 0.48% to 1.17% for diffuse reflectance FTIR spectroscopy, 1.60% to 1.93% for diffuse reflectance FT-NIR spectroscopy, and 1.62% to 2.31% for Raman spectroscopy. The results indicate that diffuse reflectance FTIR spectroscopy offers significant advantages in providing accurate measurement of polymorphic content in the fusidic acid binary mixtures, while Raman spectroscopy is the least accurate technique for quantitative analysis of polymorphs. Copyright © 2017 Elsevier B.V. All rights reserved.
Chen, Ping; Harrington, Peter B
2008-02-01
A new method coupling multivariate self-modeling mixture analysis and pattern recognition has been developed to identify toxic industrial chemicals using fused positive and negative ion mobility spectra (dual scan spectra). A Smiths lightweight chemical detector (LCD), which can measure positive and negative ion mobility spectra simultaneously, was used to acquire the data. Simple-to-use interactive self-modeling mixture analysis (SIMPLISMA) was used to separate the analytical peaks in the ion mobility spectra from the background reactant ion peaks (RIP). The SIMPLISMA analytical components of the positive and negative ion peaks were combined in a butterfly representation (i.e., negative spectra are reported with negative drift times, reflected with respect to the ordinate, and juxtaposed with the positive ion mobility spectra). Temperature-constrained cascade-correlation neural network (TCCCN) models were built to classify the toxic industrial chemicals. Seven common toxic industrial chemicals were used in this project to evaluate the performance of the algorithm. Ten bootstrapped Latin partitions demonstrated that classification by neural networks using the SIMPLISMA components was statistically better than by neural network models trained with fused ion mobility spectra (IMS).
Ozdemir, Durmus; Dinc, Erdal
2004-07-01
Simultaneous determination of binary mixtures of pyridoxine hydrochloride and thiamine hydrochloride in a vitamin combination by UV-visible spectrophotometry was demonstrated using classical least squares (CLS) and three newly developed genetic algorithm (GA) based multivariate calibration methods. The three genetic multivariate calibration methods are Genetic Classical Least Squares (GCLS), Genetic Inverse Least Squares (GILS) and Genetic Regression (GR). The sample data set contains the UV-visible spectra of 30 synthetic mixtures (8 to 40 microg/ml) of these vitamins and 10 tablets containing 250 mg of each vitamin. The spectra cover the range from 200 to 330 nm in 0.1 nm intervals. Several calibration models were built with the four methods for the two components. Overall, the standard error of calibration (SEC) and the standard error of prediction (SEP) for the synthetic data ranged from <0.01 to 0.43 microg/ml for all four methods. The SEP values for the tablets ranged from 2.91 to 11.51 mg/tablet. A comparison of the wavelengths selected by the genetic algorithm for each component using the GR method was also included.
Grünhut, Marcos; Garrido, Mariano; Centurión, Maria E; Fernández Band, Beatriz S
2010-07-12
A combination of kinetic spectroscopic monitoring and multivariate curve resolution-alternating least squares (MCR-ALS) was proposed for the enzymatic determination of levodopa (LVD) and carbidopa (CBD) in pharmaceuticals. The enzymatic reaction was carried out in a reverse stopped-flow injection system and monitored by UV-vis spectroscopy. The spectra (292-600 nm) were recorded throughout the reaction and analyzed by MCR-ALS. A small calibration matrix containing nine mixtures was used in the model construction. Additionally, a set of six validation mixtures was used to evaluate the prediction ability of the model. The lack of fit obtained was 4.3%, the explained variance 99.8%, and the overall prediction error 5.5%. Tablets of commercial samples were analyzed, and the results were validated by the pharmacopeial method (high-performance liquid chromatography). No significant differences were found (alpha=0.05) between the reference values and those obtained with the proposed method. It is important to note that a single chemometric model made it possible to determine both analytes simultaneously. Copyright 2010 Elsevier B.V. All rights reserved.
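A minimal MCR-ALS iteration can be sketched as alternating least-squares solves for concentration profiles C and spectra S under non-negativity (here enforced crudely by clipping; published work uses dedicated MCR-ALS implementations, and the two-component kinetic data below are simulated):

```python
import numpy as np

rng = np.random.default_rng(3)
# Simulated kinetic run: D (times x wavelengths) = C (times x 2 species) @ S.T
t = np.linspace(0, 1, 50)
C_true = np.column_stack([np.exp(-3 * t), 1 - np.exp(-3 * t)])  # reactant/product
wav = np.linspace(0, 1, 80)
S_true = np.column_stack([np.exp(-((wav - 0.35) ** 2) / 0.01),
                          np.exp(-((wav - 0.65) ** 2) / 0.01)])
D = C_true @ S_true.T + rng.normal(0, 0.01, (50, 80))

# MCR-ALS: alternate least-squares solves for C and S under non-negativity.
C = np.abs(rng.normal(size=(50, 2)))  # random non-negative initial guess
for _ in range(200):
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0, None)
    C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0, None)

lof = 100 * np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)  # lack of fit, %
print(round(lof, 2))
```

The lack-of-fit percentage printed here plays the same role as the 4.3% figure reported above.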
On Some Multiple Decision Problems
1976-08-01
parameter space. Some recent results in the area of subset selection formulation are Gnanadesikan and Gupta [28], Gupta and Studden [43], Gupta and...York, pp. 363-376. [27] Gnanadesikan, M. (1966). Some Selection and Ranking Procedures for Multivariate Normal Populations. Ph.D. Thesis. Dept. of...Statist., Purdue Univ., West Lafayette, Indiana 47907. [28] Gnanadesikan, M. and Gupta, S. S. (1970). Selection procedures for multivariate normal
NASA Astrophysics Data System (ADS)
Wei, Linsheng; Xu, Min; Yuan, Dingkun; Zhang, Yafang; Hu, Zhaoji; Tan, Zhihong
2014-10-01
The electron drift velocity, electron energy distribution function (EEDF), density-normalized effective ionization coefficient and density-normalized longitudinal diffusion velocity are calculated in SF6-O2 and SF6-Air mixtures. Experimental results from a pulsed Townsend discharge are plotted for comparison with the numerical results. The reduced field strength varies from 40 Td to 500 Td (1 Townsend = 10^-17 V·cm^2) and the SF6 concentration ranges from 10% to 100%. A Boltzmann equation under the two-term spherical harmonic expansion approximation is solved to obtain the swarm parameters in the steady-state Townsend regime. Results show that the accuracy of the two-term Boltzmann solution in calculating the electron drift velocity, electron energy distribution function, and density-normalized effective ionization coefficient is acceptable. The effective ionization coefficient presents a distinct relationship with the SF6 content in the mixtures. Moreover, the E/Ncr values in SF6-Air mixtures are higher than those in SF6-O2 mixtures, and the calculated E/Ncr values in SF6-O2 and SF6-Air mixtures are lower than the value measured in SF6-N2. Parametric studies conducted with the Boltzmann analysis offer substantial insight into the plasma physics, as well as a basis for exploring the ozone generation process.
NASA Astrophysics Data System (ADS)
Khan, F.; Pilz, J.; Spöck, G.
2017-12-01
Spatio-temporal dependence structures play a pivotal role in understanding the meteorological characteristics of a basin or sub-basin. These structures in turn affect hydrological conditions, and analyses that do not account for them properly can give misleading results. In this study we modeled the spatial dependence structure between climate variables, including maximum temperature, minimum temperature, and precipitation, in the monsoon-dominated region of Pakistan. Six meteorological stations were considered for temperature and four for precipitation. For modelling the dependence structure between temperature and precipitation at multiple sites, we utilized C-Vine, D-Vine and Student t-copula models. Under the copula models, multivariate mixture normal distributions were used as marginals for temperature and gamma distributions for precipitation. C-Vine, D-Vine and Student t-copula models were compared on observed and simulated spatial dependence structures to choose an appropriate model for the climate data. The results show that all copula models performed well, although there are subtle differences in their performance. The copula models captured the patterns of spatial dependence structure between climate variables at multiple meteorological sites; however, the t-copula showed poor performance in reproducing the magnitude of the dependence structure. Important statistics of the observed data were closely approximated, except for the maximum values of maximum temperature and the minimum values of minimum temperature. Probability density functions of simulated data closely follow those of the observed data for all variables. C- and D-Vines are better tools for modelling the dependence between variables, although Student t-copulas compete closely for precipitation. 
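The copula construction used above (dependence modeled separately from the marginals) can be sketched for the simplest case, a bivariate Gaussian copula with gamma marginals; the correlation and gamma parameters are hypothetical, not fitted values from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical dependence: precipitation at two sites.
# Step 1: draw correlated standard normals (the Gaussian copula's latent layer).
rho = 0.7
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=5000)

# Step 2: map to uniforms via the normal CDF; the dependence is preserved.
u = stats.norm.cdf(z)

# Step 3: apply gamma marginals (as used for precipitation above) by inverse CDF.
site1 = stats.gamma.ppf(u[:, 0], a=2.0, scale=3.0)
site2 = stats.gamma.ppf(u[:, 1], a=1.5, scale=5.0)

tau, _ = stats.kendalltau(site1, site2)
print(round(tau, 2))  # rank dependence survives the marginal transform
```

Vine copulas extend this two-step recipe to more than two sites by composing bivariate copulas.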
Keywords: Copula model, C-Vine, D-Vine, Spatial dependence structure, Monsoon dominated region of Pakistan, Mixture models, EM algorithm.
Determining inert content in coal dust/rock dust mixture
Sapko, Michael J.; Ward, Jr., Jack A.
1989-01-01
A method and apparatus for determining the inert content of a coal dust and rock dust mixture uses a transparent window pressed against the mixture. An infrared light beam is directed through the window such that a portion of the beam is reflected from the mixture. The intensity of the reflected light is detected and a signal indicative of it is generated. A normalized value for the generated signal is determined according to the relationship phi = (log i_c - log i_c0) / (log i_c100 - log i_c0), where i_c0 is the measured signal at 0% rock dust, i_c100 is the measured signal at 100% rock dust, and i_c is the measured signal of the mixture. This normalized value is then correlated to a predetermined relationship between phi and rock dust percentage to determine the rock dust content of the mixture. The rock dust content is displayed where the percentage is between 30% and 100%, and an out-of-range indication is displayed where the rock dust percentage is less than 30%. Preferably, the rock dust percentage (RD%) is calculated from the predetermined relationship RD% = 100 + 30 log phi. Where the dust mixture initially includes moisture, it is dried before measurement using 8 to 12 mesh molecular sieves, which are shaken with the dust mixture and subsequently screened from it.
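The patent's normalization and calibration relations, phi = (log i_c - log i_c0) / (log i_c100 - log i_c0) and RD% = 100 + 30 log phi, can be sketched numerically; the signal values below are hypothetical, not instrument calibrations.

```python
import math

def rock_dust_percent(i_c, i_c0, i_c100):
    """Estimate rock dust % from reflected-light signals using the relations
    phi = (log i_c - log i_c0) / (log i_c100 - log i_c0) and
    RD% = 100 + 30*log10(phi). The calibration signals i_c0 (0% rock dust)
    and i_c100 (100% rock dust) are instrument-specific."""
    phi = ((math.log10(i_c) - math.log10(i_c0))
           / (math.log10(i_c100) - math.log10(i_c0)))
    rd = 100 + 30 * math.log10(phi)
    return rd if rd >= 30 else None  # out-of-range indication below 30%

# Hypothetical calibration: reflectance rises with rock dust content.
print(rock_dust_percent(i_c=80.0, i_c0=10.0, i_c100=100.0))
```

Note that phi = 1 (i_c equal to the 100% calibration signal) gives exactly RD% = 100, as the calibration relation requires.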
A Robust Bayesian Approach for Structural Equation Models with Missing Data
ERIC Educational Resources Information Center
Lee, Sik-Yum; Xia, Ye-Mao
2008-01-01
In this paper, normal/independent distributions, including but not limited to the multivariate t distribution, the multivariate contaminated distribution, and the multivariate slash distribution, are used to develop a robust Bayesian approach for analyzing structural equation models with complete or missing data. In the context of a nonlinear…
Comparative Robustness of Recent Methods for Analyzing Multivariate Repeated Measures Designs
ERIC Educational Resources Information Center
Seco, Guillermo Vallejo; Gras, Jaime Arnau; Garcia, Manuel Ato
2007-01-01
This study evaluated the robustness of two recent methods for analyzing multivariate repeated measures when the assumptions of covariance homogeneity and multivariate normality are violated. Specifically, the authors' work compares the performance of the modified Brown-Forsythe (MBF) procedure and the mixed-model procedure adjusted by the…
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1975-01-01
New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
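The iterative procedure discussed above is the EM algorithm for normal mixtures; a minimal univariate sketch (synthetic data, fixed iteration count rather than a formal convergence test) is:

```python
import numpy as np

rng = np.random.default_rng(5)
# Two-component univariate normal mixture; EM alternates E and M steps and
# converges locally to a maximum of the likelihood, as discussed above.
x = np.concatenate([rng.normal(0, 1, 400), rng.normal(4, 1, 600)])

w, mu, sig = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(100):
    # E-step: posterior responsibility of each component for each point
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update weights, means, and standard deviations
    n_k = r.sum(axis=0)
    w = n_k / len(x)
    mu = (r * x[:, None]).sum(axis=0) / n_k
    sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)

print(np.round(np.sort(mu), 1))
```

With well-separated components, the estimated means settle near the true values 0 and 4.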
NASA Astrophysics Data System (ADS)
Sneath, P. H. A.
A BASIC program is presented for significance tests to determine whether a dendrogram is derived from clustering of points that belong to a single multivariate normal distribution. The significance tests are based on statistics of the Kolmogorov-Smirnov type, obtained by comparing the observed cumulative graph of branch levels with a graph for the hypothesis of multivariate normality. The program also permits testing whether the dendrogram could be from a cluster of lower dimensionality due to character correlations. The program makes provision for three similarity coefficients: (1) Euclidean distances, (2) squared Euclidean distances, and (3) Simple Matching Coefficients; and for five cluster methods: (1) WPGMA, (2) UPGMA, (3) Single Linkage (or Minimum Spanning Trees), (4) Complete Linkage, and (5) Ward's Increase in Sums of Squares. The program is entitled DENBRAN.
Toccalino, Patricia L.; Norman, Julia E.; Scott, Jonathon C.
2012-01-01
Chemical mixtures are prevalent in groundwater used for public water supply, but little is known about their potential health effects. As part of a large-scale ambient groundwater study, we evaluated chemical mixtures across multiple chemical classes, and included more chemical contaminants than in previous studies of mixtures in public-supply wells. We (1) assessed the occurrence of chemical mixtures in untreated source-water samples from public-supply wells, (2) determined the composition of the most frequently occurring mixtures, and (3) characterized the potential toxicity of mixtures using a new screening approach. The U.S. Geological Survey collected one untreated water sample from each of 383 public wells distributed across 35 states, and analyzed the samples for as many as 91 chemical contaminants. Concentrations of mixture components were compared to individual human-health benchmarks; the potential toxicity of mixtures was characterized by addition of benchmark-normalized component concentrations. Most samples (84%) contained mixtures of two or more contaminants, each at concentrations greater than one-tenth of individual benchmarks. The chemical mixtures that most frequently occurred and had the greatest potential toxicity primarily were composed of trace elements (including arsenic, strontium, or uranium), radon, or nitrate. Herbicides, disinfection by-products, and solvents were the most common organic contaminants in mixtures. The sum of benchmark-normalized concentrations was greater than 1 for 58% of samples, suggesting that there could be potential for mixtures toxicity in more than half of the public-well samples. Our findings can be used to help set priorities for groundwater monitoring and suggest future research directions for drinking-water treatment studies and for toxicity assessments of chemical mixtures in water resources.
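The screening approach described above (normalize each component concentration by its individual health benchmark, then sum) can be sketched as follows; the concentrations and benchmarks are illustrative numbers, not values from the study.

```python
# Hypothetical sample: contaminant concentrations and their individual
# human-health benchmarks (same units per contaminant); values are illustrative.
conc = {"arsenic": 4.0, "nitrate": 6000.0, "radon": 450.0}
benchmark = {"arsenic": 10.0, "nitrate": 10000.0, "radon": 4000.0}

# Normalize each component by its benchmark and add; a sum greater than 1
# flags potential mixture toxicity, as in the screening approach above.
ratios = {k: conc[k] / benchmark[k] for k in conc}
mixture_score = sum(ratios.values())

flagged = mixture_score > 1
components = [k for k, v in ratios.items() if v > 0.1]  # >1/10 of benchmark
print(round(mixture_score, 3), flagged, sorted(components))
```

Here each component is below its own benchmark, yet the benchmark-normalized sum still exceeds 1, which is exactly the situation the mixture screen is designed to catch.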
Discrete Velocity Models for Polyatomic Molecules Without Nonphysical Collision Invariants
NASA Astrophysics Data System (ADS)
Bernhoff, Niclas
2018-05-01
An important aspect of constructing discrete velocity models (DVMs) for the Boltzmann equation is to obtain the right number of collision invariants. Unlike for the Boltzmann equation, for DVMs extra collision invariants, so-called spurious collision invariants, can appear in addition to the physical ones. A DVM with only physical collision invariants, and hence without spurious ones, is called normal. The construction of such normal DVMs has been studied extensively in the literature for single species, but also for binary mixtures and, recently, for multicomponent mixtures. In this paper, we address ways of constructing normal DVMs for polyatomic molecules (here represented by assigning each molecule an internal energy, to account for non-translational energies, which can change during collisions), under the assumption that the set of allowed internal energies is finite. We present general algorithms for constructing such models, and we also give concrete examples of such constructions. This approach can also be combined with similar constructions for multicomponent mixtures to obtain multicomponent mixtures with polyatomic molecules, which is also briefly outlined. Chemical reactions can then be added as well.
Dinç, Erdal; Ustündağ, Ozgür; Baleanu, Dumitru
2010-08-01
The sole use of pyridoxine hydrochloride during treatment of tuberculosis gives rise to pyridoxine deficiency. Therefore, a combination of pyridoxine hydrochloride and isoniazid is used in pharmaceutical dosage forms for tuberculosis treatment to reduce this side effect. In this study, two chemometric methods, partial least squares (PLS) and principal component regression (PCR), were applied to the simultaneous determination of pyridoxine (PYR) and isoniazid (ISO) in their tablets. A concentration training set comprising binary mixtures of PYR and ISO in 20 different combinations was randomly prepared in 0.1 M HCl. Both multivariate calibration models were constructed using the relationships between the concentration data matrix and the absorbance data matrix in the spectral region 200-330 nm. The accuracy and precision of the proposed chemometric methods were validated by analyzing synthetic mixtures containing the investigated drugs. The recoveries obtained by applying the PCR and PLS calibrations to the artificial mixtures were between 100.0 and 100.7%. Satisfactory results were obtained by applying the PLS and PCR methods to both artificial and commercial samples. These results strongly encourage the use of both methods for quality control and routine analysis of marketed tablets containing PYR and ISO. Copyright © 2010 John Wiley & Sons, Ltd.
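Principal component regression as used above can be sketched with scikit-learn: compress the absorbance matrix to a few principal-component scores, then regress concentrations on the scores. The simulated two-component spectra below are an assumption for illustration, not the study's data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
# Simulated UV spectra of two-component mixtures (Beer-Lambert: additive bands).
c = rng.uniform(5, 40, (30, 2))               # concentrations of two analytes
wav = np.linspace(0, 1, 130)
bands = np.column_stack([np.exp(-((wav - 0.4) ** 2) / 0.01),
                         np.exp(-((wav - 0.6) ** 2) / 0.01)])
A = c @ bands.T + rng.normal(0, 0.05, (30, 130))  # absorbance matrix

# Principal component regression: compress spectra to a few scores, then
# regress the concentration matrix on the scores.
pcr = make_pipeline(PCA(n_components=2), LinearRegression()).fit(A, c)
sec = np.sqrt(np.mean((pcr.predict(A) - c) ** 2))  # standard error of calibration
print(round(sec, 3))
```

PLS differs only in that the spectral compression is chosen to be predictive of the concentrations rather than to maximize spectral variance.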
Chemometric methods for the simultaneous determination of some water-soluble vitamins.
Mohamed, Abdel-Maaboud I; Mohamed, Horria A; Mohamed, Niveen A; El-Zahery, Marwa R
2011-01-01
Two spectrophotometric methods, derivative and multivariate methods, were applied for the determination of binary, ternary, and quaternary mixtures of the water-soluble vitamins thiamine HCl (I), pyridoxine HCl (II), riboflavin (III), and cyanocobalamin (IV). The first method is divided into first derivative and first derivative of ratio spectra methods, and the second into classical least squares and principal components regression methods. Both methods are based on spectrophotometric measurements of the studied vitamins in 0.1 M HCl solution in the range of 200-500 nm for all components. The linear calibration curves were obtained over 2.5-90 microg/mL, and the correlation coefficients ranged from 0.9991 to 0.9999. These methods were applied for the analysis of the following mixtures: (I) and (II); (I), (II), and (III); (I), (II), and (IV); and (I), (II), (III), and (IV). The described methods were successfully applied for the determination of vitamin combinations in synthetic mixtures and dosage forms from different manufacturers. The recovery ranged from 96.1 +/- 1.2 to 101.2 +/- 1.0% for the derivative methods and from 97.0 +/- 0.5 to 101.9 +/- 1.3% for the multivariate methods. The results of the developed methods were compared with those of reported methods and showed good accuracy and precision.
Tang, Yongqiang
2018-04-30
The controlled imputation method refers to a class of pattern mixture models that have been commonly used as sensitivity analyses of longitudinal clinical trials with nonignorable dropout in recent years. These pattern mixture models assume that participants in the experimental arm after dropout have similar response profiles to the control participants or have worse outcomes than otherwise similar participants who remain on the experimental treatment. In spite of its popularity, the controlled imputation has not been formally developed for longitudinal binary and ordinal outcomes partially due to the lack of a natural multivariate distribution for such endpoints. In this paper, we propose 2 approaches for implementing the controlled imputation for binary and ordinal data based respectively on the sequential logistic regression and the multivariate probit model. Efficient Markov chain Monte Carlo algorithms are developed for missing data imputation by using the monotone data augmentation technique for the sequential logistic regression and a parameter-expanded monotone data augmentation scheme for the multivariate probit model. We assess the performance of the proposed procedures by simulation and the analysis of a schizophrenia clinical trial and compare them with the fully conditional specification, last observation carried forward, and baseline observation carried forward imputation methods. Copyright © 2018 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Goodwin, Thomas J. (Inventor); Wolf, David A. (Inventor); Spaulding, Glenn F. (Inventor); Prewett, Tacey L. (Inventor)
1999-01-01
A process has been developed for culturing normal mammalian tissue from the three groups of organ, structural and blood tissue. The cells are grown in vitro under microgravity culture conditions and form three-dimensional cell aggregates with normal cell function. The microgravity culture conditions may be actual microgravity or simulated microgravity created in a horizontal rotating wall culture vessel. The medium used for culturing the cells, especially a mixture of epithelial and mesenchymal cells, contains a mixture of MEM-alpha and Leibovitz L-15 supplemented with glucose, galactose and fructose.
Systematic Proteomic Approach to Characterize the Impacts of ...
Chemical interactions have posed a major challenge in toxicity characterization and human health risk assessment of environmental mixtures. To characterize the impacts of chemical interactions on protein and cytotoxicity responses to environmental mixtures, we established a systems biology approach integrating proteomics, bioinformatics, statistics, and computational toxicology to measure expression or phosphorylation levels of 21 critical toxicity pathway regulators and 445 downstream proteins in human BEAS-2B cells treated with 4 concentrations of nickel, 2 concentrations each of cadmium and chromium, as well as 12 defined binary and 8 defined ternary mixtures of these metals in vitro. Multivariate statistical analysis and mathematical modeling of the metal-mediated proteomic response patterns showed a high correlation between changes in protein expression or phosphorylation and cellular toxic responses to both individual metals and metal mixtures. Of the identified correlated proteins, only a small set of proteins including HIF-1a is likely to be responsible for selective cytotoxic responses to different metals and metal mixtures. Furthermore, support vector machine learning was utilized to computationally predict protein responses to uncharacterized metal mixtures using experimentally generated protein response profiles corresponding to known metal mixtures. This study provides a novel proteomic approach for characterization and prediction of toxicities of
Kulcsár, Gyula
2009-02-01
Despite the substantial decline of the immune system in AIDS, only a few kinds of tumors increase in incidence. This shows that the immune system has no absolute role in the prevention of tumors. Therefore, the fact that tumors do not develop in the majority of the population during their lifetime indicates the existence of other defense system(s). According to our hypothesis, the defense is provided by certain substances of the circulatory system. Earlier, on the basis of this hypothesis, we experimentally selected 16 substances of the circulatory system and demonstrated that a mixture of them (called the active mixture) had a cytotoxic effect (inducing apoptosis) in vitro and in vivo on different tumor cell lines, but not on normal cells and animals. In this paper, we provide evidence that different cytostatic drugs or irradiation in combination with the active mixture killed significantly more cancer cells than either treatment alone. The active mixture decreased, to a certain extent, the toxicity of cytostatics and irradiation on normal cells, but the most important result was that the active mixture destroyed the multidrug-resistant cells. Our results offer the possibility of improving the efficacy and reducing the side effects of chemotherapy and radiation therapy, and of preventing relapse by killing the resistant cells.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallagher, Neal B.; Blake, Thomas A.; Gassman, Paul L.
2006-07-01
Multivariate curve resolution (MCR) is a powerful technique for extracting chemical information from spectra measured on complex mixtures. The difficulty with applying MCR to soil reflectance measurements is that light scattering artifacts can contribute much more variance to the measurements than the analyte(s) of interest. Two methods were integrated into an MCR decomposition to account for light scattering effects. First, an extended mixture model using pure analyte spectra augmented with scattering 'spectra' was used for the measured spectra. Second, second-derivative preprocessed spectra, which have higher selectivity than the unprocessed spectra, were included in a second block as part of the decomposition. The conventional alternating least squares (ALS) algorithm was modified to simultaneously decompose the measured and second-derivative spectra in a two-block decomposition. Equality constraints were also included to incorporate information about sampling conditions. The result was an MCR decomposition that provided interpretable spectra from soil reflectance measurements.
Alcaráz, Mirta R; Vera-Candioti, Luciana; Culzoni, María J; Goicoechea, Héctor C
2014-04-01
This paper presents the development of a capillary electrophoresis method with diode array detection coupled to multivariate curve resolution-alternating least squares (MCR-ALS) to resolve and quantitate a mixture of six quinolones in the presence of several unexpected components. Overlapping of time profiles between analytes and water matrix interferences was mathematically resolved by data modeling with the well-known MCR-ALS algorithm. To overcome the drawback posed by two compounds with similar spectra, a special strategy was implemented to model the complete electropherogram instead of dividing the data into regions, as usually done in previous works. The method was first applied to quantitate the analytes in standard mixtures randomly prepared in ultrapure water. Then, tap water samples spiked with several interferences were analyzed. Recoveries between 76.7 and 125% and limits of detection between 5 and 18 μg L(-1) were achieved.
NASA Astrophysics Data System (ADS)
Oh, Han Bin; Leach, Franklin E.; Arungundram, Sailaja; Al-Mafraji, Kanar; Venot, Andre; Boons, Geert-Jan; Amster, I. Jonathan
2011-03-01
The structural characterization of glycosaminoglycan (GAG) carbohydrates by mass spectrometry has been a long-standing analytical challenge due to the inherent heterogeneity of these biomolecules, specifically polydispersity, variability in sulfation, and hexuronic acid stereochemistry. Recent advances in tandem mass spectrometry methods employing threshold and electron-based ion activation have resulted in the ability to determine the location of the labile sulfate modification as well as assign the stereochemistry of hexuronic acid residues. To facilitate the analysis of complex electron detachment dissociation (EDD) spectra, principal component analysis (PCA) is employed to differentiate the hexuronic acid stereochemistry of four synthetic GAG epimers whose EDD spectra are nearly identical upon visual inspection. For comparison, PCA is also applied to infrared multiphoton dissociation spectra (IRMPD) of the examined epimers. To assess the applicability of multivariate methods in GAG mixture analysis, PCA is utilized to identify the relative content of two epimers in a binary mixture.
Kim, Sungduk; Chen, Ming-Hui; Ibrahim, Joseph G.; Shah, Arvind K.; Lin, Jianxin
2013-01-01
In this paper, we propose a class of Box-Cox transformation regression models with multidimensional random effects for analyzing multivariate responses for individual patient data (IPD) in meta-analysis. Our modeling formulation uses a multivariate normal response meta-analysis model with multivariate random effects, in which each response is allowed to have its own Box-Cox transformation. Prior distributions are specified for the Box-Cox transformation parameters as well as the regression coefficients in this complex model, and the Deviance Information Criterion (DIC) is used to select the best transformation model. Since the model is quite complex, a novel Markov chain Monte Carlo (MCMC) sampling scheme is developed to sample from the joint posterior of the parameters. This model is motivated by a very rich dataset comprising 26 clinical trials involving cholesterol-lowering drugs, where the goal is to jointly model the three-dimensional response consisting of Low Density Lipoprotein Cholesterol (LDL-C), High Density Lipoprotein Cholesterol (HDL-C), and Triglycerides (TG). Since the joint distribution of (LDL-C, HDL-C, TG) is not multivariate normal and in fact quite skewed, a Box-Cox transformation is needed to achieve normality. In the clinical literature, these three variables are usually analyzed univariately; however, a multivariate approach would be more appropriate since these variables are correlated with each other. A detailed analysis of these data is carried out using the proposed methodology. PMID:23580436
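As a small illustration of the transformation step, scipy's maximum-likelihood Box-Cox fit pulls a right-skewed variable toward normality. The lognormal sample here is synthetic, standing in for skewed lipid-like measurements; it is not the trial data from the paper.

```python
import numpy as np
from scipy import stats

# Sketch: Box-Cox transformation of a right-skewed positive variable,
# with the transformation parameter lambda chosen by maximum likelihood.
# The lognormal sample is synthetic, not the clinical-trial data.
rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=0.6, size=2000)

x_bc, lam = stats.boxcox(x)  # returns transformed data and fitted lambda

# Skewness should shrink markedly after the transformation.
print(round(stats.skew(x), 2), round(stats.skew(x_bc), 2))
```

In the paper's multivariate setting each of the three responses gets its own transformation parameter, with model choice driven by DIC rather than per-variable maximum likelihood.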
On measures of association among genetic variables
Gianola, Daniel; Manfredi, Eduardo; Simianer, Henner
2012-01-01
Systems involving many variables are important in population and quantitative genetics, for example, in multi-trait prediction of breeding values and in exploration of multi-locus associations. We studied departures of the joint distribution of sets of genetic variables from independence. New measures of association based on notions of statistical distance between distributions are presented. These are more general than correlations, which are pairwise measures, and lack a clear interpretation beyond the bivariate normal distribution. Our measures are based on logarithmic (Kullback-Leibler) and on relative ‘distances’ between distributions. Indexes of association are developed and illustrated for quantitative genetics settings in which the joint distribution of the variables is either multivariate normal or multivariate-t, and we show how the indexes can be used to study linkage disequilibrium in a two-locus system with multiple alleles and present applications to systems of correlated beta distributions. Two multivariate beta and multivariate beta-binomial processes are examined, and new distributions are introduced: the GMS-Sarmanov multivariate beta and its beta-binomial counterpart. PMID:22742500
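For the multivariate normal case, the Kullback-Leibler "distance" underlying such association indexes has a closed form. The sketch below compares a correlated bivariate normal with the independent product of its marginals; the correlation value is arbitrary, chosen only for illustration.

```python
import numpy as np

def kl_mvn(mu0, S0, mu1, S1):
    """Closed-form KL divergence KL(N(mu0, S0) || N(mu1, S1))."""
    k = len(mu0)
    S1_inv = np.linalg.inv(S1)
    d = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

# A distance-to-independence index: compare a correlated bivariate normal
# with the product of its marginals (same means/variances, zero correlation).
rho = 0.8
S = np.array([[1.0, rho], [rho, 1.0]])
mu = np.zeros(2)
kl = kl_mvn(mu, S, mu, np.eye(2))
print(round(kl, 4))  # equals -0.5 * log(1 - rho**2) for this case
```

The index grows without bound as |rho| → 1, unlike the correlation itself, which is one motivation for distance-based association measures.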
ERIC Educational Resources Information Center
Wall, Melanie M.; Guo, Jia; Amemiya, Yasuo
2012-01-01
Mixture factor analysis is examined as a means of flexibly estimating nonnormally distributed continuous latent factors in the presence of both continuous and dichotomous observed variables. A simulation study compares mixture factor analysis with normal maximum likelihood (ML) latent factor modeling. Different results emerge for continuous versus…
Almeida, Tiago P; Chu, Gavin S; Li, Xin; Dastagir, Nawshin; Tuan, Jiun H; Stafford, Peter J; Schlindwein, Fernando S; Ng, G André
2017-01-01
Purpose: Complex fractionated atrial electrogram (CFAE)-guided ablation after pulmonary vein isolation (PVI) has been used for persistent atrial fibrillation (persAF) therapy. This strategy has shown suboptimal outcomes due to, among other factors, undetected changes in the atrial tissue following PVI. In the present work, we investigate CFAE distribution before and after PVI in patients with persAF using a multivariate statistical model. Methods: 207 pairs of atrial electrograms (AEGs) were collected before and after PVI, respectively, from corresponding left atrial (LA) regions in 18 persAF patients. Twelve attributes were measured from the AEGs before and after PVI. Statistical models based on multivariate analysis of variance (MANOVA) and linear discriminant analysis (LDA) were used to characterize the atrial regions and AEGs. Results: PVI significantly reduced CFAEs in the LA (70 vs. 40%; P < 0.0001). Four types of LA regions were identified, based on the AEG characteristics: (i) fractionated before PVI that remained fractionated after PVI (31% of the collected points); (ii) fractionated that converted to normal (39%); (iii) normal prior to PVI that became fractionated (9%); and (iv) normal that remained normal (21%). Individually, the attributes failed to distinguish these LA regions, but the multivariate statistical models were effective in their discrimination (P < 0.0001). Conclusion: Our results reveal that some LA regions are resistant to PVI, while others are affected by it. Although traditional methods were unable to identify these different regions, the proposed multivariate statistical model discriminated LA regions resistant to PVI from those affected by it without prior ablation information.
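The multivariate point — attributes that overlap univariately can still discriminate classes when combined — can be sketched with LDA on synthetic two-attribute data. The covariance structure and class shift below are invented for illustration; they are not electrogram attributes.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Sketch: two synthetic "attribute" clouds whose marginals overlap heavily,
# but which separate well along a direction that runs against their strong
# correlation. LDA finds that discriminating direction automatically.
rng = np.random.default_rng(6)
n = 500
cov = np.array([[1.0, 0.9], [0.9, 1.0]])             # highly correlated attrs
a = rng.multivariate_normal([0.0, 0.0], cov, n)      # class 0
b = rng.multivariate_normal([1.0, -1.0], cov, n)     # class 1, shifted
X = np.vstack([a, b])
y = np.repeat([0, 1], n)

lda = LinearDiscriminantAnalysis().fit(X, y)
acc = lda.score(X, y)                                # mean accuracy
print(round(acc, 3))
```

Each attribute alone shifts by only one standard deviation (heavy overlap), yet the Mahalanobis separation of the joint distribution is large, so the fitted LDA classifies nearly all points correctly.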
Heterogeneous mixture distributions for multi-source extreme rainfall
NASA Astrophysics Data System (ADS)
Ouarda, T.; Shin, J.; Lee, T. S.
2013-12-01
Mixture distributions have been used to model hydro-meteorological variables showing mixture distributional characteristics, e.g. bimodality. Homogeneous mixture (HOM) distributions (e.g. Normal-Normal and Gumbel-Gumbel) have traditionally been applied to hydro-meteorological variables. However, there is no reason to restrict the mixture distribution to a combination of one identical type; it may be beneficial to characterize the statistical behavior of hydro-meteorological variables with heterogeneous mixture (HTM) distributions such as Normal-Gamma. In the present work, we focus on assessing the suitability of HTM distributions for the frequency analysis of hydro-meteorological variables. To estimate the parameters of the HTM distributions, a meta-heuristic algorithm (Genetic Algorithm) is employed to maximize the likelihood function. A number of distributions are compared, including the Gamma-Extreme value type-one (EV1) HTM distribution, the EV1-EV1 HOM distribution, and the EV1 distribution. The proposed distribution models are applied to annual maximum precipitation data in South Korea. The Akaike Information Criterion (AIC), the root mean squared error (RMSE) and the log-likelihood are used as measures of goodness-of-fit of the tested distributions. Results indicate that the HTM distribution (Gamma-EV1) provides the best fit and shows significant improvement in the estimation of quantiles corresponding to the 20-year return period. Extreme rainfall in the coastal region of South Korea presents strong heterogeneous mixture distributional characteristics, and HTM distributions are a good alternative for the frequency analysis of hydro-meteorological variables when disparate statistical characteristics are present.
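A heterogeneous mixture fit of this kind can be sketched by maximizing the mixture log-likelihood with an evolutionary optimizer. Here scipy's differential evolution stands in for the genetic algorithm, and the Gamma + Gumbel components, parameter bounds, and data are all illustrative assumptions.

```python
import numpy as np
from scipy import stats, optimize

# Sketch: maximum-likelihood fit of a heterogeneous two-component mixture
# (Gamma + Gumbel) using an evolutionary optimizer. Data are synthetic,
# drawn from the mixture with weight 0.6 on the Gamma component.
rng = np.random.default_rng(2)
x = np.concatenate([rng.gamma(shape=4.0, scale=10.0, size=600),
                    rng.gumbel(loc=120.0, scale=15.0, size=400)])

def neg_loglik(theta):
    w, g_shape, g_scale, loc, e_scale = theta
    pdf = (w * stats.gamma.pdf(x, a=g_shape, scale=g_scale)
           + (1 - w) * stats.gumbel_r.pdf(x, loc=loc, scale=e_scale))
    return -np.sum(np.log(pdf + 1e-300))  # guard against log(0)

bounds = [(0.05, 0.95), (0.5, 20.0), (1.0, 50.0), (50.0, 200.0), (1.0, 50.0)]
res = optimize.differential_evolution(neg_loglik, bounds, seed=2, maxiter=300)
w_hat = res.x[0]
print(round(w_hat, 2))
```

Because the two component families differ, there is no label-switching ambiguity, and the recovered weight should land near the true 0.6.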
Combustion of Gaseous Mixtures
NASA Technical Reports Server (NTRS)
Duchene, R
1932-01-01
This report not only presents matters of practical importance in the classification of engine fuels, for which other means have proved inadequate, but also makes a few suggestions. It confirms the results of Withrow and Boyd which localize the explosive wave in the last portions of the mixture burned. This being the case, it may be assumed that the greater the normal combustion, the less the energy developed in the explosive form. In order to combat the detonation, it is therefore necessary to try to render the normal combustion swift and complete, as produced in carbureted mixtures containing benzene (benzol), in which the flame propagation, beginning at the spark, yields a progressive and pronounced darkening on the photographic film.
NASA Astrophysics Data System (ADS)
Jawad, Enas A.
2018-05-01
In this paper, a Monte Carlo simulation program has been used to calculate the electron energy distribution function (EEDF) and electron transport parameters for gas mixtures of trifluoroiodomethane (CF3I), an 'environment friendly' gas, with the noble gases (argon, helium, krypton, neon and xenon). The electron transport parameters are assessed in the range of E/N (E is the electric field and N is the gas number density of background gas molecules) between 100 and 2000 Td (1 Townsend = 10(-17) V cm(2)) at room temperature. These parameters are the electron mean energy (ε), the density-normalized longitudinal diffusion coefficient (NDL) and the density-normalized mobility (μN). The impact of CF3I in the noble gas mixtures is strongly apparent in the values of the electron mean energy, the density-normalized longitudinal diffusion coefficient and the density-normalized mobility. The results of the calculation agreed well with the experimental results.
Mapping of quantitative trait loci using the skew-normal distribution.
Fernandes, Elisabete; Pacheco, António; Penha-Gonçalves, Carlos
2007-11-01
In standard interval mapping (IM) of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. When this assumption of normality is violated, the most commonly adopted strategy is to use the previous model after data transformation. However, an appropriate transformation may not exist or may be difficult to find. This approach can also raise interpretation issues. An interesting alternative is to consider a skew-normal mixture model in standard IM; the resulting method is here denoted skew-normal IM. This flexible model, which includes the usual symmetric normal distribution as a special case, allows continuous variation from normality to non-normality. In this paper we briefly introduce the main peculiarities of the skew-normal distribution. The maximum likelihood estimates of the parameters of the skew-normal distribution are obtained by the expectation-maximization (EM) algorithm. The proposed model is illustrated with real data from an intercross experiment that shows a significant departure from the normality assumption. The performance of skew-normal IM is assessed via stochastic simulation. The results indicate that skew-normal IM has higher power for QTL detection and better precision of QTL location as compared to standard IM and nonparametric IM.
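The EM iteration for the symmetric normal-mixture special case mentioned above can be sketched in a few lines. The skew-normal version adds a shape parameter per component; the data and starting values here are synthetic and chosen only for illustration.

```python
import numpy as np

# Minimal EM sketch for a two-component normal mixture (the symmetric
# special case of the skew-normal mixture). Data and starts are synthetic.
rng = np.random.default_rng(8)
x = np.concatenate([rng.normal(0.0, 1.0, 400), rng.normal(3.0, 1.0, 600)])

w = 0.5                                  # crude starting values
mu = np.array([0.5, 2.5])
sd = np.array([1.0, 1.0])
for _ in range(200):
    # E-step: responsibility of component 2 for each observation
    # (the 1/sqrt(2*pi) factor cancels in the ratio).
    p1 = (1 - w) * np.exp(-0.5 * ((x - mu[0]) / sd[0]) ** 2) / sd[0]
    p2 = w * np.exp(-0.5 * ((x - mu[1]) / sd[1]) ** 2) / sd[1]
    r = p2 / (p1 + p2)
    # M-step: update weight, means, and standard deviations.
    w = r.mean()
    mu = np.array([np.average(x, weights=1 - r), np.average(x, weights=r)])
    sd = np.sqrt([np.average((x - mu[0]) ** 2, weights=1 - r),
                  np.average((x - mu[1]) ** 2, weights=r)])

print(np.round(mu, 1))
```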
1981-08-01
RATIO TEST STATISTIC FOR SPHERICITY OF COMPLEX MULTIVARIATE NORMAL DISTRIBUTION* C. Fang, P. R. Krishnaiah, B. N. Nagarsenker** August 1981 Technical... and their applications in time series, the reader is referred to Krishnaiah (1976). Motivated by the applications in the area of inference on multiple... for practical purposes. Here, we note that Krishnaiah, Lee and Chang (1976) approximated the null distribution of certain power of the likeli
The effects of temperature on nitrous oxide and oxygen mixture homogeneity and stability.
Litwin, Patrick D
2010-10-15
For many long-standing practices, the rationale is often lost as time passes. This is the situation with respect to the storage and handling of equimolar 50% nitrous oxide and 50% oxygen volume/volume (v/v) mixtures. A review of the existing literature was undertaken to examine the developmental history of nitrous oxide and oxygen mixtures for anesthesia and analgesia, to ascertain whether sufficient bibliographic data were available to support the position that the contents of a cylinder of a 50%/50% v/v mixture of nitrous oxide and oxygen are in a homogeneous single gas phase under normal conditions of handling and storage, and to determine whether justification could be found for the standard instructions given for handling before use. After ranking and removing duplicates, a total of fifteen articles were identified by the various search strategies and formed the basis of this literature review. Several studies confirmed that a 50%/50% v/v mixture of nitrous oxide and oxygen is in a homogeneous single gas phase in a filled cylinder under normal conditions of handling and storage. The effect of temperature on the change of phase of the nitrous oxide in this mixture was further examined by several authors. These studies demonstrated that although it is possible to cause condensation and phase separation by cooling the cylinder, by allowing the cylinder to rewarm to room temperature for at least 48 hours, preferably in a horizontal orientation, and inverting it three times before use, the cylinder consistently delivered the proper proportions of the component gases as a homogeneous mixture. The contents of a cylinder of a 50%/50% v/v mixture of nitrous oxide and oxygen are thus in a homogeneous single gas phase under normal conditions of handling and storage, and the standard instructions given for handling before use are justified based on previously conducted studies.
Multivariate meta-analysis: a robust approach based on the theory of U-statistic.
Ma, Yan; Mazumdar, Madhu
2011-10-30
Meta-analysis is the methodology for combining findings from similar research studies asking the same question. When the question of interest involves multiple outcomes, multivariate meta-analysis is used to synthesize the outcomes simultaneously, taking into account the correlation between them. Likelihood-based approaches, in particular the restricted maximum likelihood (REML) method, are commonly utilized in this context. REML assumes a multivariate normal distribution for the random-effects model. This assumption is difficult to verify, especially for meta-analyses with a small number of component studies. The use of REML also requires iterative estimation between parameters, needing moderately high computation time, especially when the dimension of outcomes is large. A multivariate method of moments (MMM) is available and is shown to perform equally well to REML. However, there is a lack of information on the performance of these two methods when the true data distribution is far from normality. In this paper, we propose a new nonparametric and non-iterative method for multivariate meta-analysis on the basis of the theory of U-statistics and compare the properties of these three procedures under both normal and skewed data through simulation studies. It is shown that the effect on estimates from REML because of non-normal data distribution is marginal and that the estimates from MMM and U-statistic-based approaches are very similar. Therefore, we conclude that for performing multivariate meta-analysis, the U-statistic estimation procedure is a viable alternative to REML and MMM. Easy implementation of all three methods is illustrated by their application to data from two published meta-analyses from the fields of hip fracture and periodontal disease. We discuss ideas for future research based on U-statistics for testing the significance of between-study heterogeneity and for extending the work to the meta-regression setting. Copyright © 2011 John Wiley & Sons, Ltd.
Bridges, N.J.; McCammon, R.B.
1980-01-01
DISCRIM is an interactive computer graphics program that dissects mixtures of normal or lognormal distributions. The program was written in an effort to obtain a more satisfactory solution to the dissection problem than that offered by a graphical or numerical approach alone. It combines graphic and analytic techniques using a Tektronix terminal in a time-share computing environment. The main program and subroutines were written in the FORTRAN language. © 1980.
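The dissection task DISCRIM addresses — splitting a lognormal mixture into its components — can be sketched numerically with an EM-based normal mixture fit on log-transformed values. The data are synthetic, and this is of course not the program's interactive graphical procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Sketch: dissect a two-component lognormal mixture by fitting a normal
# mixture to the log-transformed values. Component parameters are synthetic.
rng = np.random.default_rng(3)
x = np.concatenate([rng.lognormal(0.0, 0.25, 700),
                    rng.lognormal(1.5, 0.25, 300)])

gm = GaussianMixture(n_components=2, random_state=3).fit(np.log(x)[:, None])
means = np.sort(gm.means_.ravel())   # fitted log-scale component means
print(np.round(means, 2))
```

With components this well separated, EM recovers the log-scale means (0.0 and 1.5) closely; overlapping components are where a combined graphic/analytic approach like DISCRIM earns its keep.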
NASA Astrophysics Data System (ADS)
Attia, Khalid A. M.; Nassar, Mohammed W. I.; El-Zeiny, Mohamed B.; Serag, Ahmed
2016-04-01
Three simple, specific, accurate and precise spectrophotometric methods were developed for the determination of cefprozil (CZ) in the presence of its alkaline-induced degradation product (DCZ). The first method was the bivariate method, while the two other, multivariate methods were partial least squares (PLS) and spectral residual augmented classical least squares (SRACLS). The multivariate methods were applied with and without a variable selection procedure (genetic algorithm, GA). These methods were tested by analyzing laboratory-prepared mixtures of the above drug with its alkaline-induced degradation product, and they were applied to its commercial pharmaceutical products.
A Review of Multivariate Distributions for Count Data Derived from the Poisson Distribution.
Inouye, David; Yang, Eunho; Allen, Genevera; Ravikumar, Pradeep
2017-01-01
The Poisson distribution has been widely studied and used for modeling univariate count-valued data. Multivariate generalizations of the Poisson distribution that permit dependencies, however, have been far less popular. Yet, real-world high-dimensional count-valued data found in word counts, genomics, and crime statistics, for example, exhibit rich dependencies, and motivate the need for multivariate distributions that can appropriately model this data. We review multivariate distributions derived from the univariate Poisson, categorizing these models into three main classes: 1) where the marginal distributions are Poisson, 2) where the joint distribution is a mixture of independent multivariate Poisson distributions, and 3) where the node-conditional distributions are derived from the Poisson. We discuss the development of multiple instances of these classes and compare the models in terms of interpretability and theory. Then, we empirically compare multiple models from each class on three real-world datasets that have varying data characteristics from different domains, namely traffic accident data, biological next generation sequencing data, and text data. These empirical experiments develop intuition about the comparative advantages and disadvantages of each class of multivariate distribution that was derived from the Poisson. Finally, we suggest new research directions as explored in the subsequent discussion section.
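Class 2) above — a finite mixture of independent multivariate Poisson distributions — is easy to simulate, and the simulation shows how mixing over a latent component induces dependence between otherwise independent Poisson coordinates. The rates and weights below are arbitrary illustrative choices.

```python
import numpy as np

# Sketch of class (2): a two-component mixture of independent bivariate
# Poissons. Conditional on the latent label z the coordinates are
# independent, but marginalizing over z makes them positively correlated.
rng = np.random.default_rng(4)
n = 200_000
rates = np.array([[1.0, 1.0], [8.0, 8.0]])   # per-component rate vectors
z = rng.choice(2, size=n, p=[0.5, 0.5])      # latent component labels
x = rng.poisson(rates[z])                    # counts, shape (n, 2)

corr = np.corrcoef(x[:, 0], x[:, 1])[0, 1]
print(round(corr, 2))
```

For this setup the population correlation is Cov/Var = 12.25/16.75 ≈ 0.73, entirely attributable to the shared latent component.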
Giacomino, Agnese; Abollino, Ornella; Malandrino, Mery; Mentasti, Edoardo
2011-03-04
Single and sequential extraction procedures are used for studying element mobility and availability in solid matrices, like soils, sediments, sludge, and airborne particulate matter. In the first part of this review we reported an overview on these procedures and described the applications of chemometric uni- and bivariate techniques and of multivariate pattern recognition techniques based on variable reduction to the experimental results obtained. The second part of the review deals with the use of chemometrics not only for the visualization and interpretation of data, but also for the investigation of the effects of experimental conditions on the response, the optimization of their values and the calculation of element fractionation. We will describe the principles of the multivariate chemometric techniques considered, the aims for which they were applied and the key findings obtained. The following topics will be critically addressed: pattern recognition by cluster analysis (CA), linear discriminant analysis (LDA) and other less common techniques; modelling by multiple linear regression (MLR); investigation of spatial distribution of variables by geostatistics; calculation of fractionation patterns by a mixture resolution method (Chemometric Identification of Substrates and Element Distributions, CISED); optimization and characterization of extraction procedures by experimental design; other multivariate techniques less commonly applied. Copyright © 2010 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Tomza, Paweł; Wrzeszcz, Władysław; Mazurek, Sylwester; Szostak, Roman; Czarnecki, Mirosław Antoni
2018-05-01
Here we report an ATR-IR spectroscopic study on the separation at a molecular level (microheterogeneity) and the degree of deviation of H2O/CH3OH and H2O/CD3OH mixtures from the ideal mixture. Of particular interest is the effect of isotopic substitution in the methyl group on molecular structure and interactions in both mixtures. To obtain comprehensive information from the multivariate data, we applied the excess molar absorptivity spectra together with two-dimensional correlation analysis (2DCOS) and chemometric methods. In addition, the experimental results were compared with the structures of various model clusters obtained from theoretical (DFT) calculations. Our results evidence the presence of separation at a molecular level and deviation from the ideal mixture for both mixtures. The experimental and theoretical results show that the maximum of these deviations appears at the equimolar mixture. Both mixtures consist of three kinds of species: homoclusters of water and methanol, and mixed clusters (heteroclusters). The heteroclusters exist in the whole range of mole fractions, with the maximum close to the equimolar mixture; at this composition, nearly 55-60% of the molecules are involved in heteroclusters. In contrast, the homoclusters of water occur only in a limited range of mole fractions (XME < 0.85-0.9). Upon mixing, the molecules of methanol form weaker hydrogen bonds than in the pure alcohol, whereas the molecules of water in the mixture are involved in stronger hydrogen bonding than those in bulk water. All these results indicate that both mixtures have a similar degree of deviation from the ideal mixture.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurapov, Denis; Reiss, Jennifer; Trinh, David H.
2007-07-15
Alumina thin films were deposited onto tempered hot working steel substrates from an AlCl3-O2-Ar-H2 gas mixture by plasma-assisted chemical vapor deposition. The normalized ion flux was varied during deposition through changes in precursor content while keeping the cathode voltage and the total pressure constant. As the precursor content in the total gas mixture was increased from 0.8% to 5.8%, the deposition rate increased 12-fold, while the normalized ion flux decreased by approximately 90%. The constitution, morphology, impurity incorporation, and the elastic properties of the alumina thin films were found to depend on the normalized ion flux. These changes in structure, composition, and properties induced by the normalized ion flux may be understood by considering mechanisms related to surface and bulk diffusion.
NASA Astrophysics Data System (ADS)
Belal, F.; Ibrahim, F.; Sheribah, Z. A.; Alaa, H.
2018-06-01
In this paper, novel univariate and multivariate regression methods, along with a model-updating technique, were developed and validated for the simultaneous determination of a quaternary mixture of imatinib (IMB), gemifloxacin (GMI), nalbuphine (NLP) and naproxen (NAP). The univariate method is extended derivative ratio (EDR), which measures every drug in the quaternary mixture by using a ternary mixture of the other three drugs as divisor. Peak amplitudes were measured at 294 nm, 250 nm, 283 nm and 239 nm within linear concentration ranges of 4.0-17.0, 3.0-15.0, 4.0-80.0 and 1.0-6.0 μg mL-1 for IMB, GMI, NLP and NAP, respectively. The multivariate methods adopted are partial least squares (PLS) in original and derivative mode. These models were constructed for simultaneous determination of the studied drugs in the ranges of 4.0-8.0, 3.0-11.0, 10.0-18.0 and 1.0-3.0 μg mL-1 for IMB, GMI, NLP and NAP, respectively, using eighteen mixtures as a calibration set and seven mixtures as a validation set. The root mean square errors of prediction (RMSEP) were 0.09 and 0.06 for IMB, 0.14 and 0.13 for GMI, 0.07 and 0.02 for NLP, and 0.64 and 0.27 for NAP by PLS in original and derivative mode, respectively. Both models were successfully applied to the analysis of IMB, GMI, NLP and NAP in their dosage forms. Updated PLS in derivative mode and EDR were applied for determination of the studied drugs in spiked human urine. The obtained results were statistically compared with those obtained by the reported methods, leading to the conclusion that there is no significant difference regarding accuracy and precision.
Rasouli, Zolaikha; Ghavami, Raouf
2016-08-05
Vanillin (VA), vanillic acid (VAI) and syringaldehyde (SIA) are important food additives as flavor enhancers. The current study is, for the first time, devoted to the application of partial least squares (PLS-1), partial robust M-regression (PRM) and feed-forward neural networks (FFNNs) as linear and nonlinear chemometric methods for the simultaneous detection of binary and ternary mixtures of VA, VAI and SIA, using data extracted directly from UV spectra with overlapped peaks of the individual analytes. Under the optimum experimental conditions, a linear calibration was obtained for each compound in the concentration range of 0.61-20.99 [LOD=0.12], 0.67-23.19 [LOD=0.13] and 0.73-25.12 [LOD=0.15] μg mL(-1) for VA, VAI and SIA, respectively. Four calibration sets of standard samples were designed by combining full and fractional factorial designs with seven and three levels for each factor for the binary and ternary mixtures, respectively. The results of this study reveal that PLS-1 and PRM are similar in their ability to predict each binary mixture. The resolution of the ternary mixture was accomplished by FFNNs. Multivariate curve resolution-alternating least squares (MCR-ALS) was applied to describe the spectra from the acid-base titration systems of each individual compound, i.e., to resolve the complex overlapping spectra and to interpret the extracted spectral and concentration profiles of any pure chemical species identified. Evolving factor analysis (EFA) and singular value decomposition (SVD) were used to determine the number of chemical species, and the corresponding dissociation constants were subsequently derived. Finally, FFNNs were used to detect the active compounds in real and spiked water samples. Copyright © 2016 Elsevier B.V. All rights reserved.
Illek, Beate; Lei, Dachuan; Fischer, Horst; Gruenert, Dieter C
2010-01-01
While the Cl(-) efflux assays are relatively straightforward, their ability to assess the efficacy of phenotypic correction in cystic fibrosis (CF) tissue or cells may be limited. Accurate assessment of therapeutic efficacy, i.e., correlating wild type CF transmembrane conductance regulator (CFTR) levels with phenotypic correction in tissue or individual cells, requires a sensitive assay. Radioactive chloride ((36)Cl) efflux was compared to Ussing chamber analysis for measuring cAMP-dependent Cl(-) transport in mixtures of human normal (16HBE14o-) and cystic fibrosis (CF) (CFTE29o- or CFBE41o-, respectively) airway epithelial cells. Cell mixtures with decreasing amounts of 16HBE14o- cells were evaluated. Efflux and Ussing chamber studies on mixed populations of normal and CF airway epithelial cells showed that, as the number of CF cells within the population was progressively increased, the cAMP-dependent Cl(-) decreased. The (36)Cl efflux assay was effective for measuring Cl(-) transport when ≥ 25% of the cells were normal. If < 25% of the cells were phenotypically wild-type (wt), the (36)Cl efflux assay was no longer reliable. Polarized CFBE41o- cells, also homozygous for the ΔF508 mutation, were used in the Ussing chamber studies. Ussing analysis detected cAMP-dependent Cl(-) currents in mixtures with ≥1% wild-type cells indicating that Ussing analysis is more sensitive than (36)Cl efflux analysis for detection of functional CFTR. Assessment of CFTR function by Ussing analysis is more sensitive than (36)Cl efflux analysis. Ussing analysis indicates that cell mixtures containing 10% 16HBE14o- cells showed 40-50% of normal cAMP-dependent Cl(-) transport that drops off exponentially between 10-1% wild-type cells. Copyright © 2010 S. Karger AG, Basel.
Differential models of twin correlations in skew for body-mass index (BMI).
Tsang, Siny; Duncan, Glen E; Dinescu, Diana; Turkheimer, Eric
2018-01-01
Body Mass Index (BMI), like most human phenotypes, is substantially heritable. However, BMI is not normally distributed; the skew appears to be structural, and increases as a function of age. Moreover, twin correlations for BMI commonly violate the assumptions of the most common variety of the classical twin model, with the MZ twin correlation greater than twice the DZ correlation. This study aimed to decompose twin correlations for BMI using more general skew-t distributions. Same sex MZ and DZ twin pairs (N = 7,086) from the community-based Washington State Twin Registry were included. We used latent profile analysis (LPA) to decompose twin correlations for BMI into multiple mixture distributions. LPA was performed using the default normal mixture distribution and the skew-t mixture distribution. Similar analyses were performed for height as a comparison. Our analyses are then replicated in an independent dataset. A two-class solution under the skew-t mixture distribution fits the BMI distribution for both genders. The first class consists of a relatively normally distributed, highly heritable BMI with a mean in the normal range. The second class is a positively skewed BMI in the overweight and obese range, with lower twin correlations. In contrast, height is normally distributed, highly heritable, and is well-fit by a single latent class. Results in the replication dataset were highly similar. Our findings suggest that two distinct processes underlie the skew of the BMI distribution. The contrast between height and weight is in accord with subjective psychological experience: both are under obvious genetic influence, but BMI is also subject to behavioral control, whereas height is not.
Fourier transform infrared spectroscopy for Kona coffee authentication.
Wang, Jun; Jun, Soojin; Bittenbender, H C; Gautz, Loren; Li, Qing X
2009-06-01
Kona coffee, the variety "Kona typica" grown in the north and south districts of Kona Island, carries a unique stamp of the Big Island region of Hawaii, U.S.A. The excellent quality of Kona coffee makes it among the best coffee products in the world. Fourier transform infrared (FTIR) spectroscopy integrated with an attenuated total reflectance (ATR) accessory and multivariate analysis was used for qualitative and quantitative analysis of ground and brewed Kona coffee and blends made with Kona coffee. The calibration set of Kona coffee consisted of 10 different blends of a Kona-grown original coffee mixture from 14 different farms in Hawaii and a non-Kona-grown original coffee mixture from 3 different sampling sites in Hawaii. Derivative transformations (1st and 2nd), mathematical enhancements such as mean centering and variance scaling, and multivariate regressions by partial least squares (PLS) and principal components regression (PCR) were implemented to develop and enhance the calibration model. The calibration model was successfully validated using 9 synthetic blend sets of a 100% Kona coffee mixture and its adulterant, a 100% non-Kona coffee mixture. There were distinct peak variations of ground and brewed coffee blends in the spectral "fingerprint" region between 800 and 1900 cm(-1). The PLS-2nd derivative calibration model based on brewed Kona coffee with mean-centering data processing showed the highest degree of accuracy, with the lowest standard error of calibration value of 0.81 and the highest R(2) value of 0.999. The model was further validated by quantitative analysis of commercial Kona coffee blends. Results demonstrate that FTIR can be a rapid alternative method to authenticate Kona coffee that needs only quick and simple sample preparation.
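The mean-centering and derivative transformations used above are simple spectral preprocessing steps; a minimal sketch with toy spectra (values illustrative only, not FTIR data):

```python
import numpy as np

def mean_center(spectra):
    """Subtract the mean spectrum of the calibration set from every spectrum."""
    return spectra - spectra.mean(axis=0)

def second_derivative(spectra):
    """Numerical 2nd derivative along the wavenumber axis (simple finite difference)."""
    return np.diff(spectra, n=2, axis=1)

# Toy spectra: 4 samples x 6 wavenumber channels; each row is a quadratic
# in channel index, so its 2nd derivative is constant
X = np.array([[1.0, 2.0, 4.0, 7.0, 11.0, 16.0],
              [2.0, 3.0, 5.0, 8.0, 12.0, 17.0],
              [0.0, 1.0, 3.0, 6.0, 10.0, 15.0],
              [1.0, 2.0, 4.0, 7.0, 11.0, 16.0]])

Xc = mean_center(X)
Xd = second_derivative(X)
print(Xc.mean(axis=0))  # ~0 at every channel after centering
print(Xd)               # constant 1.0 everywhere for these quadratic rows
```

In practice a smoothed derivative (e.g. Savitzky-Golay) is used on real FTIR spectra; the plain finite difference above only shows the idea.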
Mixture-based gatekeeping procedures in adaptive clinical trials.
Kordzakhia, George; Dmitrienko, Alex; Ishida, Eiji
2018-01-01
Clinical trials with data-driven decision rules often pursue multiple clinical objectives such as the evaluation of several endpoints or several doses of an experimental treatment. These complex analysis strategies give rise to "multivariate" multiplicity problems with several components or sources of multiplicity. A general framework for defining gatekeeping procedures in clinical trials with adaptive multistage designs is proposed in this paper. The mixture method is applied to build a gatekeeping procedure at each stage and inferences at each decision point (interim or final analysis) are performed using the combination function approach. An advantage of utilizing the mixture method is that it enables powerful gatekeeping procedures applicable to a broad class of settings with complex logical relationships among the hypotheses of interest. Further, the combination function approach supports flexible data-driven decisions such as a decision to increase the sample size or remove a treatment arm. The paper concludes with a clinical trial example that illustrates the methodology by applying it to develop an adaptive two-stage design with a mixture-based gatekeeping procedure.
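The combination function approach for two-stage designs can be sketched with the common weighted inverse-normal combination of stage-wise p-values; the weights and p-values below are hypothetical, and the gatekeeping layer is omitted:

```python
from math import sqrt
from statistics import NormalDist

def inverse_normal_combination(p1, p2, w1=0.5, w2=0.5):
    """Combine stage-1 and stage-2 p-values with pre-specified weights
    (w1 + w2 = 1) via the weighted inverse-normal combination function."""
    nd = NormalDist()
    z = sqrt(w1) * nd.inv_cdf(1 - p1) + sqrt(w2) * nd.inv_cdf(1 - p2)
    return 1 - nd.cdf(z)

# Two moderately small stage-wise p-values combine into stronger evidence
print(round(inverse_normal_combination(0.05, 0.05), 4))  # ~0.01
```

Because the weights are fixed before the interim analysis, the combined test keeps its level even when the stage-2 sample size is changed adaptively, which is the property the trial example relies on.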
Elkhoudary, Mahmoud M; Abdel Salam, Randa A; Hadad, Ghada M
2014-09-15
Metronidazole (MNZ) is a widely used antibacterial and amoebicide drug. Therefore, it is important to develop a rapid and specific analytical method for the determination of MNZ in mixture with Spiramycin (SPY), Diloxanide (DIX) and Clioquinol (CLQ) in pharmaceutical preparations. This work describes six simple, sensitive and reliable multivariate calibration methods, namely linear and nonlinear artificial neural networks preceded by genetic algorithm (GA-ANN) and principal component analysis (PCA-ANN), as well as partial least squares (PLS) either alone or preceded by genetic algorithm (GA-PLS), for UV spectrophotometric determination of MNZ, SPY, DIX and CLQ in pharmaceutical preparations with no interference from pharmaceutical additives. The results illustrate the problem of nonlinearity and how models like ANN can handle it. The analytical performance of these methods was statistically validated with respect to linearity, accuracy, precision and specificity. The developed methods demonstrate the ability of the aforementioned multivariate calibration models to handle and resolve UV spectra of the four-component mixtures using an easy and widely used UV spectrophotometer. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Metwally, Fadia H.
2008-02-01
The quantitative predictive abilities of the new and simple bivariate spectrophotometric method are compared with the results obtained by the use of multivariate calibration methods [classical least squares (CLS), principal component regression (PCR) and partial least squares (PLS)], using the information contained in the absorption spectra of the appropriate solutions. Mixtures of the two drugs Nifuroxazide (NIF) and Drotaverine hydrochloride (DRO) were resolved by application of the bivariate method. The different chemometric approaches were also applied with prior optimization of the calibration matrix, as they allow the simultaneous inclusion of many spectral wavelengths. The results found by application of the bivariate, CLS, PCR and PLS methods for the simultaneous determination of mixtures of both components containing 2-12 μg ml-1 of NIF and 2-8 μg ml-1 of DRO are reported. Both approaches were satisfactorily applied to the simultaneous determination of NIF and DRO in pure form and in a pharmaceutical formulation. The results were in accordance with those given by the EVA Pharma reference spectrophotometric method.
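The CLS model is a direct matrix form of Beer's law (absorbance = concentrations × pure spectra); a minimal sketch in which the pure spectra, concentrations and noise level are all invented:

```python
import numpy as np

# Classical least squares (CLS): A = C @ K, with K the pure-component spectra.
# Toy pure spectra for two drugs at 5 wavelengths (illustrative values only)
K = np.array([[0.10, 0.40, 0.80, 0.40, 0.10],   # component 1
              [0.60, 0.30, 0.10, 0.30, 0.60]])  # component 2

rng = np.random.default_rng(1)
C_cal = rng.uniform(2.0, 12.0, size=(8, 2))      # calibration concentrations
A_cal = C_cal @ K + rng.normal(0, 1e-3, (8, 5))  # measured calibration spectra

# Step 1: estimate the pure spectra from the calibration set
K_hat = np.linalg.lstsq(C_cal, A_cal, rcond=None)[0]

# Step 2: predict the concentrations of an unknown mixture from its spectrum
c_true = np.array([5.0, 3.0])
a_unknown = c_true @ K
c_pred = np.linalg.lstsq(K_hat.T, a_unknown, rcond=None)[0]
print(c_pred)  # close to [5.0, 3.0]
```

PCR and PLS differ from CLS mainly in replacing the raw wavelength variables with a few latent factors, which makes them more robust when spectra overlap heavily.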
NASA Astrophysics Data System (ADS)
Fayez, Yasmin Mohammed; Tawakkol, Shereen Mostafa; Fahmy, Nesma Mahmoud; Lotfy, Hayam Mahmoud; Shehata, Mostafa Abdel-Aty
2018-04-01
Three methods of analysis that require computational procedures in the Matlab® software are presented. The first is the univariate mean centering method, which eliminates the interfering signal of one component at a selected wavelength, leaving the measured amplitude to represent the component of interest only. The other two, the multivariate methods PLS and PCR, depend on a large number of variables that allow extraction of the maximum amount of information required to determine the component of interest in the presence of the other. Accurate and precise results are obtained from the three methods for determining clotrimazole in the linearity ranges 1-12 μg/mL and 75-550 μg/mL with dexamethasone acetate at 2-20 μg/mL in synthetic mixtures and a pharmaceutical formulation, using two different spectral regions, 205-240 nm and 233-278 nm. The results obtained are compared statistically to each other and to the official methods.
Dyes assay for measuring physicochemical parameters.
Moczko, Ewa; Meglinski, Igor V; Bessant, Conrad; Piletsky, Sergey A
2009-03-15
A combination of selective fluorescent dyes has been developed for simultaneous quantitative measurement of several physicochemical parameters. The operating principle of the assay is similar to electronic nose and tongue systems, which combine nonspecific or semispecific elements for the determination of diverse analytes with chemometric techniques for multivariate data analysis. The analytical capability of the proposed mixture is engendered by changes in fluorescence signal in response to changes in environment such as pH, temperature, ionic strength, and presence of oxygen. The signal is detected by a three-dimensional spectrofluorimeter, and the acquired data are processed using an artificial neural network (ANN) for multivariate calibration. The fluorescence spectrum of a solution of the selected dyes allows discrete reading of the emission maxima of all dyes composing the mixture. The variations in peak intensities caused by environmental changes provide distinctive fluorescence patterns which can be handled in the same way as the signals collected from nose/tongue electrochemical or piezoelectric devices. This optical system opens possibilities for rapid, inexpensive, real-time detection of a multitude of physicochemical parameters and analytes in complex samples.
NASA Astrophysics Data System (ADS)
Elkhoudary, Mahmoud M.; Abdel Salam, Randa A.; Hadad, Ghada M.
2014-09-01
Metronidazole (MNZ) is a widely used antibacterial and amoebicide drug. Therefore, it is important to develop a rapid and specific analytical method for the determination of MNZ in mixture with Spiramycin (SPY), Diloxanide (DIX) and Clioquinol (CLQ) in pharmaceutical preparations. This work describes six simple, sensitive and reliable multivariate calibration methods, namely linear and nonlinear artificial neural networks preceded by genetic algorithm (GA-ANN) and principal component analysis (PCA-ANN), as well as partial least squares (PLS) either alone or preceded by genetic algorithm (GA-PLS), for UV spectrophotometric determination of MNZ, SPY, DIX and CLQ in pharmaceutical preparations with no interference from pharmaceutical additives. The results illustrate the problem of nonlinearity and how models like ANN can handle it. The analytical performance of these methods was statistically validated with respect to linearity, accuracy, precision and specificity. The developed methods demonstrate the ability of the aforementioned multivariate calibration models to handle and resolve UV spectra of the four-component mixtures using an easy and widely used UV spectrophotometer.
On an interface of the online system for a stochastic analysis of the varied information flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorshenin, Andrey K.; MIREA, MGUPI; Kuzmin, Victor Yu.
The article describes a possible approach to the construction of an interface for an online asynchronous system that allows researchers to analyse varied information flows. The implemented stochastic methods are based on mixture models and the method of moving separation of mixtures. The general ideas of the system's functionality are demonstrated with an example involving the moments of a finite normal mixture.
Jiménez-Carvelo, Ana M; González-Casado, Antonio; Cuadros-Rodríguez, Luis
2017-03-01
A new analytical method for the quantification of olive oil and palm oil in blends with other vegetable edible oils (canola, safflower, corn, peanut, seeds, grapeseed, linseed, sesame and soybean), using normal-phase liquid chromatography and chemometric tools, was developed. The procedure for obtaining the chromatographic fingerprint of the methyl-transesterified fraction of each blend is described. The multivariate quantification methods used were Partial Least Squares Regression (PLS-R) and Support Vector Regression (SVR). The quantification results were evaluated by several parameters such as the Root Mean Square Error of Validation (RMSEV), Mean Absolute Error of Validation (MAEV) and Median Absolute Error of Validation (MdAEV). Notably, with the newly proposed analytical method the chromatographic analysis takes only eight minutes; the results obtained showed the potential of this method and allowed quantification of mixtures of olive oil and palm oil with other vegetable oils. Copyright © 2016 Elsevier B.V. All rights reserved.
McLachlan, G J; Bean, R W; Jones, L Ben-Tovim
2006-07-01
An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework, using an empirical Bayes approach. Current methods of implementing this approach either have some limitations due to the minimal assumptions made or with more specific assumptions are computationally intensive. By converting to a z-score the value of the test statistic used to test the significance of each gene, we propose a simple two-component normal mixture that models adequately the distribution of this score. The usefulness of our approach is demonstrated on three real datasets.
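The two-component normal mixture for z-scores reduces to a one-line Bayes computation for the posterior null probability; the mixture parameters below are illustrative values, not estimates from the paper's datasets:

```python
import numpy as np

def null_posterior(z, pi0, mu1, sigma1):
    """Posterior probability that a gene is null under a two-component
    normal mixture for z-scores: null ~ N(0,1), non-null ~ N(mu1, sigma1^2)."""
    phi0 = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
    phi1 = np.exp(-0.5 * ((z - mu1) / sigma1)**2) / (sigma1 * np.sqrt(2 * np.pi))
    return pi0 * phi0 / (pi0 * phi0 + (1 - pi0) * phi1)

# Illustrative parameters: 90% of genes null, non-null z-scores centred at 3
z = np.array([0.0, 1.0, 3.0, 5.0])
print(null_posterior(z, pi0=0.9, mu1=3.0, sigma1=1.0))  # high near 0, low far out
```

In the empirical Bayes approach the parameters pi0, mu1 and sigma1 are themselves estimated from the full collection of z-scores (e.g. by EM) before this posterior is evaluated per gene.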
Stamate, Mirela Cristina; Todor, Nicolae; Cosgarea, Marcel
2015-01-01
The clinical utility of otoacoustic emissions as a noninvasive objective test of cochlear function has long been studied. Both transient otoacoustic emissions and distortion products can be used to identify hearing loss, but to what extent they can be used as predictors for hearing loss is still debated. Most studies agree that multivariate analyses have better test performances than univariate analyses. The aim of the study was to determine the performance of transient otoacoustic emissions and distortion products in identifying normal and impaired hearing, using the pure-tone audiogram as a gold standard procedure and different multivariate statistical approaches. The study included 105 adult subjects with normal hearing and hearing loss who underwent the same test battery: pure-tone audiometry, tympanometry and otoacoustic emission tests. We chose logistic regression as the multivariate statistical technique. Three logistic regression models were developed to characterize the relations between different risk factors (age, sex, tinnitus, demographic features, cochlear status defined by otoacoustic emissions) and hearing status defined by pure-tone audiometry. The multivariate analyses allow the calculation of the logistic score, which is a combination of the inputs, weighted by coefficients calculated within the analyses. The accuracy of each model was assessed using receiver operating characteristic curve analysis. We used the logistic scores to generate receiver operating curves and to estimate the areas under the curves in order to compare the different multivariate analyses. We compared the performance of each otoacoustic emission (transient, distortion product) using three different multivariate analyses for each ear, when multi-frequency gold standards were used. We demonstrated that all multivariate analyses provided high values of the area under the curve, proving the performance of the otoacoustic emissions.
Each otoacoustic emission test presented high values of the area under the curve, suggesting that implementing a multivariate approach to evaluate the performance of each otoacoustic emission test would increase accuracy in identifying normal and impaired ears. We encountered the highest area under the curve for the combined multivariate analysis, suggesting that both otoacoustic emission tests should be used in assessing hearing status. Our multivariate analyses revealed that age is a constant predictor of auditory status for both ears, but the presence of tinnitus was the most important predictor of hearing level, only for the left ear. Age presented similar coefficients, but tinnitus coefficients, by their high value, produced the highest variations of the logistic scores, only for the left ear group, thus increasing the risk of hearing loss. We did not find gender differences between ears for any otoacoustic emission test, but studies still debate this question as the results are contradictory. Neither gender nor environmental origin had any predictive value for hearing status, according to the results of our study. Like any other audiological test, using otoacoustic emissions to identify hearing loss is not without error. Even when applying multivariate analysis, perfect test performance is never achieved. Although most studies demonstrated the benefit of using multivariate analysis, it has not been incorporated into clinical decisions, perhaps because of the idiosyncratic nature of multivariate solutions or because of the lack of validation studies.
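The area under the curve used throughout can be computed directly from logistic scores via the Mann-Whitney statistic, without tracing the ROC curve itself; the scores below are hypothetical:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen impaired ear scores higher
    than a randomly chosen normal ear (ties count half)."""
    sp = np.asarray(scores_pos, float)[:, None]
    sn = np.asarray(scores_neg, float)[None, :]
    return float(np.mean(sp > sn) + 0.5 * np.mean(sp == sn))

# Hypothetical logistic scores for impaired vs normal ears
impaired = [0.9, 0.8, 0.7, 0.4]
normal_  = [0.5, 0.3, 0.2, 0.1]
print(auc(impaired, normal_))  # 0.9375
```

An AUC of 0.5 means the score carries no discriminating information, which is why "high values of the area under the curve" is the criterion the study uses to compare models.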
STAMATE, MIRELA CRISTINA; TODOR, NICOLAE; COSGAREA, MARCEL
2015-01-01
Background and aim: The clinical utility of otoacoustic emissions as a noninvasive objective test of cochlear function has long been studied. Both transient otoacoustic emissions and distortion products can be used to identify hearing loss, but to what extent they can be used as predictors for hearing loss is still debated. Most studies agree that multivariate analyses have better test performances than univariate analyses. The aim of the study was to determine the performance of transient otoacoustic emissions and distortion products in identifying normal and impaired hearing, using the pure-tone audiogram as a gold standard procedure and different multivariate statistical approaches. Methods: The study included 105 adult subjects with normal hearing and hearing loss who underwent the same test battery: pure-tone audiometry, tympanometry and otoacoustic emission tests. We chose logistic regression as the multivariate statistical technique. Three logistic regression models were developed to characterize the relations between different risk factors (age, sex, tinnitus, demographic features, cochlear status defined by otoacoustic emissions) and hearing status defined by pure-tone audiometry. The multivariate analyses allow the calculation of the logistic score, which is a combination of the inputs, weighted by coefficients calculated within the analyses. The accuracy of each model was assessed using receiver operating characteristic curve analysis. We used the logistic scores to generate receiver operating curves and to estimate the areas under the curves in order to compare the different multivariate analyses. Results: We compared the performance of each otoacoustic emission (transient, distortion product) using three different multivariate analyses for each ear, when multi-frequency gold standards were used. We demonstrated that all multivariate analyses provided high values of the area under the curve, proving the performance of the otoacoustic emissions.
Each otoacoustic emission test presented high values of the area under the curve, suggesting that implementing a multivariate approach to evaluate the performance of each otoacoustic emission test would increase accuracy in identifying normal and impaired ears. We encountered the highest area under the curve for the combined multivariate analysis, suggesting that both otoacoustic emission tests should be used in assessing hearing status. Our multivariate analyses revealed that age is a constant predictor of auditory status for both ears, but the presence of tinnitus was the most important predictor of hearing level, only for the left ear. Age presented similar coefficients, but tinnitus coefficients, by their high value, produced the highest variations of the logistic scores, only for the left ear group, thus increasing the risk of hearing loss. We did not find gender differences between ears for any otoacoustic emission test, but studies still debate this question as the results are contradictory. Neither gender nor environmental origin had any predictive value for hearing status, according to the results of our study. Conclusion: Like any other audiological test, using otoacoustic emissions to identify hearing loss is not without error. Even when applying multivariate analysis, perfect test performance is never achieved. Although most studies demonstrated the benefit of using multivariate analysis, it has not been incorporated into clinical decisions, perhaps because of the idiosyncratic nature of multivariate solutions or because of the lack of validation studies. PMID:26733749
[Use of the Six Sigma methodology for the preparation of parenteral nutrition mixtures].
Silgado Bernal, M F; Basto Benítez, I; Ramírez García, G
2014-04-01
To apply the tools of the Six Sigma methodology to the statistical control of the preparation of parenteral nutrition mixtures at the critical control point of specific density. Between August 2010 and September 2013, specific density analysis was performed on 100% of the samples, and the data were divided into two groups, adults and neonates. The percentage of acceptance, the trend graphs, and the sigma level were determined. A normality analysis was carried out using the Shapiro-Wilk test, and the total percentage of mixtures within the specification limits was calculated. The specific density data between August 2010 and September 2013 comply with the normality test (W = 0.94) and show improvement in sigma level through time, reaching 6/6 in adults and 3.8/6 in neonates. 100% of the mixtures comply with the specification limits for adults and neonates, always within the control limits during the process. The improvement plans, together with the Six Sigma methodology, allow control of the process and guarantee agreement between the medical prescription and the content of the mixture. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
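One common way to compute a sigma level like the 6/6 and 3.8/6 figures above starts from the observed defect rate and applies the conventional 1.5-sigma long-term shift; this is a sketch of that convention, not necessarily the paper's exact computation:

```python
from statistics import NormalDist

def sigma_level(defects, opportunities, shift=1.5):
    """Six Sigma process level from an observed defect rate, using the
    conventional 1.5-sigma long-term shift."""
    yield_frac = 1 - defects / opportunities
    return NormalDist().inv_cdf(yield_frac) + shift

# 3.4 defects per million opportunities is the classic 6-sigma benchmark
print(round(sigma_level(3.4, 1_000_000), 2))  # ~6.0
```

Under this convention, a process with half its output out of specification sits at sigma level 1.5, and the level rises as the defect rate falls.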
A Review of Multivariate Distributions for Count Data Derived from the Poisson Distribution
Inouye, David; Yang, Eunho; Allen, Genevera; Ravikumar, Pradeep
2017-01-01
The Poisson distribution has been widely studied and used for modeling univariate count-valued data. Multivariate generalizations of the Poisson distribution that permit dependencies, however, have been far less popular. Yet, real-world high-dimensional count-valued data found in word counts, genomics, and crime statistics, for example, exhibit rich dependencies, and motivate the need for multivariate distributions that can appropriately model this data. We review multivariate distributions derived from the univariate Poisson, categorizing these models into three main classes: 1) where the marginal distributions are Poisson, 2) where the joint distribution is a mixture of independent multivariate Poisson distributions, and 3) where the node-conditional distributions are derived from the Poisson. We discuss the development of multiple instances of these classes and compare the models in terms of interpretability and theory. Then, we empirically compare multiple models from each class on three real-world datasets that have varying data characteristics from different domains, namely traffic accident data, biological next generation sequencing data, and text data. These empirical experiments develop intuition about the comparative advantages and disadvantages of each class of multivariate distribution that was derived from the Poisson. Finally, we suggest new research directions as explored in the subsequent discussion section. PMID:28983398
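The second class in the review (a mixture of independent multivariate Poissons) can be sketched in a few lines, showing how a shared latent component induces dependence between count dimensions; the weights and rate vectors are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Mixture of independent multivariate Poissons: pick a component, then draw
# each count independently from that component's rate vector
weights = np.array([0.5, 0.5])
rates = np.array([[1.0, 1.0, 1.0],     # low-activity component
                  [10.0, 10.0, 10.0]]) # high-activity component

n = 20_000
comp = rng.choice(2, size=n, p=weights)  # latent component per observation
X = rng.poisson(rates[comp])             # counts, conditionally independent

# Counts are positively correlated across dimensions even though they are
# independent given the component, because the component is shared
corr = np.corrcoef(X[:, 0], X[:, 1])[0, 1]
print(corr)  # clearly positive
```

This illustrates the review's point: the mixture construction permits only non-negative dependence of this latent-cluster type, unlike the node-conditional (Poisson graphical model) class.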
NASA Astrophysics Data System (ADS)
Ytsma, Cai R.; Dyar, M. Darby
2018-01-01
Hydrogen (H) is a critical element to measure on the surface of Mars because its presence in mineral structures is indicative of past hydrous conditions. The Curiosity rover uses the laser-induced breakdown spectrometer (LIBS) on the ChemCam instrument to analyze rocks for their H emission signal at 656.6 nm, from which H can be quantified. Previous LIBS calibrations for H used small data sets measured on standards and/or manufactured mixtures of hydrous minerals and rocks and applied univariate regression to spectra normalized in a variety of ways. However, matrix effects common to LIBS make these calibrations of limited usefulness when applied to the broad range of compositions on the Martian surface. In this study, 198 naturally-occurring hydrous geological samples covering a broad range of bulk compositions with directly-measured H content are used to create more robust prediction models for measuring H in LIBS data acquired under Mars conditions. Both univariate and multivariate prediction models, including partial least squares (PLS) and the least absolute shrinkage and selection operator (Lasso), are compared using several different methods for normalization of H peak intensities. Data from the ChemLIBS Mars-analog spectrometer at Mount Holyoke College are compared against spectra from the same samples acquired using a ChemCam-like instrument at Los Alamos National Laboratory and the ChemCam instrument on Mars. Results show that all current normalization and data preprocessing variations for quantifying H result in models with statistically indistinguishable prediction errors (accuracies) ca. ± 1.5 weight percent (wt%) H2O, limiting the applications of LIBS in these implementations for geological studies. 
This error is too large to allow distinctions among the most common hydrous phases (basalts, amphiboles, micas) to be made, though some clays (e.g., chlorites with ≈ 12 wt% H2O, smectites with 15-20 wt% H2O) and hydrated phases (e.g., gypsum with ≈ 20 wt% H2O) may be differentiated from lower-H phases within the known errors. Analyses of the H emission peak in Curiosity calibration targets and rock and soil targets on the Martian surface suggest that shot-to-shot variations of the ChemCam laser on Mars lead to variations in intensity that are comparable to those represented by the breadth of H standards tested in this study.
Hot spots of multivariate extreme anomalies in Earth observations
NASA Astrophysics Data System (ADS)
Flach, M.; Sippel, S.; Bodesheim, P.; Brenning, A.; Denzler, J.; Gans, F.; Guanche, Y.; Reichstein, M.; Rodner, E.; Mahecha, M. D.
2016-12-01
Anomalies in Earth observations might indicate data quality issues, extremes, or a change in the underlying processes of a highly multivariate system. Considering the multivariate constellation of variables for extreme detection therefore yields crucial additional information over conventional univariate approaches. We highlight areas in which multivariate extreme anomalies are more likely to occur, i.e. hot spots of extremes in global atmospheric Earth observations that impact the biosphere. In addition, we present the year of the most unusual multivariate extreme between 2001 and 2013 and show that these coincide with well-known high-impact extremes. Technically speaking, we account for multivariate extremes by using three algorithms adapted from computer science applications: an ensemble of the k-nearest-neighbours mean distance, kernel density estimation, and an approach based on recurrences. However, the impact of atmospheric extremes on the biosphere largely depends on what is considered to be normal, i.e. the shape of the mean seasonal cycle and its inter-annual variability. We identify regions with similar mean seasonality by means of dimensionality reduction in order to estimate, in each region, both the 'normal' variance and robust thresholds for detecting the extremes. In addition, we account for challenges like heteroscedasticity in northern latitudes. Apart from hot spot areas, anomalies that can only be detected by a multivariate approach, and not by a simple univariate one, are of particular interest. Such an anomalous constellation of atmospheric variables matters when it impacts the biosphere. The multivariate constellation of such an anomalous part of a time series is shown in one case study, indicating that multivariate anomaly detection can provide novel insights into Earth observations.
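Of the three detectors named above, the k-nearest-neighbours mean distance is the most compact to sketch; the data and the single injected anomaly below are synthetic, and the brute-force pairwise distances stand in for the ensemble the study uses:

```python
import numpy as np

def knn_mean_distance(X, k=5):
    """Anomaly score: mean Euclidean distance to the k nearest neighbours
    (larger = more anomalous)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    np.fill_diagonal(d2, np.inf)                          # exclude self-distance
    nearest = np.sort(np.sqrt(d2), axis=1)[:, :k]
    return nearest.mean(axis=1)

rng = np.random.default_rng(7)
normal_obs = rng.normal(0, 1, size=(200, 3))  # bulk of multivariate observations
outlier = np.array([[8.0, 8.0, 8.0]])         # one extreme multivariate anomaly
X = np.vstack([normal_obs, outlier])

scores = knn_mean_distance(X)
print(int(np.argmax(scores)))  # 200: the injected anomaly has the largest score
```

The key multivariate property is visible here: the anomaly could sit inside the univariate range of each coordinate separately and still be flagged, because the score depends on the joint constellation of variables.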
Multivariate methods to visualise colour-space and colour discrimination data.
Hastings, Gareth D; Rubin, Alan
2015-01-01
Despite most modern colour spaces treating colour as three-dimensional (3-D), colour data is usually not visualised in 3-D (and two-dimensional (2-D) projection-plane segments and multiple 2-D perspective views are used instead). The objectives of this article are firstly, to introduce a truly 3-D percept of colour space using stereo-pairs, secondly to view colour discrimination data using that platform, and thirdly to apply formal statistics and multivariate methods to analyse the data in 3-D. This is the first demonstration of the software that generated stereo-pairs of RGB colour space, as well as of a new computerised procedure that investigated colour discrimination by measuring colour just noticeable differences (JND). An initial pilot study and thorough investigation of instrument repeatability were performed. Thereafter, to demonstrate the capabilities of the software, five colour-normal and one colour-deficient subject were examined using the JND procedure and multivariate methods of data analysis. Scatter plots of responses were meaningfully examined in 3-D and were useful in evaluating multivariate normality as well as identifying outliers. The extent and direction of the difference between each JND response and the stimulus colour point was calculated and appreciated in 3-D. Ellipsoidal surfaces of constant probability density (distribution ellipsoids) were fitted to response data; the volumes of these ellipsoids appeared useful in differentiating the colour-deficient subject from the colour-normals. Hypothesis tests of variances and covariances showed many statistically significant differences between the results of the colour-deficient subject and those of the colour-normals, while far fewer differences were found when comparing within colour-normals. 
The 3-D visualisation of colour data using stereo-pairs, as well as the statistics and multivariate methods of analysis employed, were found to be unique and useful tools in the representation and study of colour. Many additional studies using these methods along with the JND and other procedures have been identified and will be reported in future publications. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.
USDA-ARS?s Scientific Manuscript database
Ultra-High performance liquid chromatography (UHPLC) with single wavelength (215 nm) detection was used to obtain chromatographic profiles of authentic skim milk powder (SMP) and synthetic mixtures of SMP with variable amounts of soy (SPI), pea (PPI), brown rice (BRP), and hydrolyzed wheat protein (...
Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.
ERIC Educational Resources Information Center
Wang, Yuh-Yin Wu; Schafer, William D.
This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…
NASA Technical Reports Server (NTRS)
Smith, O. E.
1976-01-01
Techniques are presented for deriving several statistical wind models from the properties of the multivariate normal probability distribution function. If the winds are assumed to be bivariate normally distributed, then (1) the wind components and conditional wind components are univariate normally distributed, (2) the wind speed is Rayleigh distributed, (3) the conditional distribution of wind speed given a wind direction is Rayleigh distributed, and (4) the frequency of wind direction can be derived. All of these distributions are derived from the five sample parameters of wind for the bivariate normal distribution. By further assuming that the winds at two altitudes are quadravariate normally distributed, the vector wind shear is bivariate normally distributed and the modulus of the vector wind shear is Rayleigh distributed. The conditional probability of wind component shears given a wind component is normally distributed. Examples of these and other properties of the multivariate normal probability distribution function as applied to Cape Kennedy, Florida, and Vandenberg AFB, California, wind data samples are given. A technique to develop a synthetic vector wind profile model of interest to aerospace vehicle applications is presented.
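One property quoted above is easy to check numerically: if the wind components are independent zero-mean normals with a common standard deviation, the wind speed is Rayleigh distributed. The sketch below is illustrative only; the sample size and sigma are invented, not taken from the Cape Kennedy or Vandenberg data.

```python
import numpy as np

# If components (u, v) are independent zero-mean normals with common sigma,
# the speed sqrt(u^2 + v^2) is Rayleigh(sigma), whose mean is sigma*sqrt(pi/2).
rng = np.random.default_rng(0)
sigma = 5.0
u = rng.normal(0.0, sigma, 200_000)
v = rng.normal(0.0, sigma, 200_000)
speed = np.hypot(u, v)

empirical_mean = speed.mean()
rayleigh_mean = sigma * np.sqrt(np.pi / 2.0)  # theoretical Rayleigh mean
```

With a large sample the empirical mean speed sits close to the theoretical Rayleigh mean, consistent with property (2) of the abstract.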
Investigation into the performance of different models for predicting stutter.
Bright, Jo-Anne; Curran, James M; Buckleton, John S
2013-07-01
In this paper we have examined five possible models for the behaviour of the stutter ratio, SR. These were two log-normal models, two gamma models, and a two-component normal mixture model. The two-component normal mixture model allowed different behaviours of the variance: at each locus, SR was described by two distributions with the same mean but different variances, one for the majority of the observations and a second for the less well-behaved ones. We apply each model to a set of known single source Identifiler™, NGM SElect™ and PowerPlex(®) 21 DNA profiles to show the applicability of our findings to different data sets. SR values determined from the single source profiles were compared to the calculated SR after application of the models. Model performance was tested by calculating the log-likelihoods and comparing the difference in Akaike information criterion (AIC). The two-component normal mixture model systematically outperformed all others, despite its increased number of parameters. This model, as well as performing well statistically, has intuitive appeal for forensic biologists and could be implemented in an expert system with a continuous method for DNA interpretation. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
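The winning model class can be sketched with a small EM fit: a two-component normal mixture with a shared mean and two variances, compared against a single normal via AIC. The data, starting values, and iteration count below are invented for illustration, not the paper's profile data.

```python
import numpy as np
from scipy.stats import norm

# Two-component normal mixture with shared mean mu and variances s1^2, s2^2
# (tight component for well-behaved ratios, wide one for the rest).
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.08, 0.005, 900),   # well-behaved majority
                    rng.normal(0.08, 0.02, 100)])   # noisier minority

w, mu, s1, s2 = 0.5, x.mean(), x.std(), 2.0 * x.std()
for _ in range(200):
    p1 = w * norm.pdf(x, mu, s1)
    p2 = (1.0 - w) * norm.pdf(x, mu, s2)
    r = p1 / (p1 + p2)                              # E-step responsibilities
    w = r.mean()                                    # M-step updates
    prec = r / s1**2 + (1.0 - r) / s2**2
    mu = np.sum(prec * x) / np.sum(prec)            # precision-weighted mean
    s1 = np.sqrt(np.sum(r * (x - mu) ** 2) / np.sum(r))
    s2 = np.sqrt(np.sum((1.0 - r) * (x - mu) ** 2) / np.sum(1.0 - r))

loglik_mix = np.sum(np.log(w * norm.pdf(x, mu, s1) +
                           (1.0 - w) * norm.pdf(x, mu, s2)))
aic_mix = 2 * 4 - 2 * loglik_mix                    # parameters: w, mu, s1, s2
aic_one = 2 * 2 - 2 * np.sum(norm.logpdf(x, x.mean(), x.std()))
```

On such heteroscedastic data the mixture's AIC beats the single normal despite the extra parameters, mirroring the paper's finding.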
Multivariate classification of infrared spectra of cell and tissue samples
Haaland, David M.; Jones, Howland D. T.; Thomas, Edward V.
1997-01-01
Multivariate classification techniques are applied to spectra from cell and tissue samples irradiated with infrared radiation to determine whether the samples are normal or abnormal (cancerous). Mid- and near-infrared radiation can be used for in vivo and in vitro classifications using different wavelengths.
Su, Liyun; Zhao, Yanyong; Yan, Tianshun; Li, Fenglan
2012-01-01
Multivariate local polynomial fitting is applied to the multivariate linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function; then the coefficients of the regression model are obtained by the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Owing to the non-parametric technique of local polynomial estimation, it is unnecessary to know the form of the heteroscedastic function, so the estimation precision can be improved when the heteroscedastic function is unknown. Furthermore, we verify that the regression coefficients are asymptotically normal based on numerical simulations and normal Q-Q plots of residuals. Finally, the simulation results and the local polynomial estimation of real data indicate that our approach is effective in finite-sample situations.
Goicoechea, H C; Olivieri, A C
1999-08-01
The use of multivariate spectrophotometric calibration is presented for the simultaneous determination of the active components of tablets used in the treatment of pulmonary tuberculosis. The resolution of ternary mixtures of rifampicin, isoniazid and pyrazinamide has been accomplished by using partial least squares (PLS-1) regression analysis. Although the components show an important degree of spectral overlap, they have been simultaneously determined with high accuracy and precision, rapidly and with no need of nonaqueous solvents for dissolving the samples. No interference has been observed from the tablet excipients. A comparison is presented with the related multivariate method of classical least squares (CLS) analysis, which is shown to yield less reliable results due to the severe spectral overlap among the studied compounds. This is highlighted in the case of isoniazid, due to the small absorbances measured for this component.
ERIC Educational Resources Information Center
Shieh, Gwowen
2006-01-01
This paper considers the problem of analysis of correlation coefficients from a multivariate normal population. A unified theorem is derived for the regression model with normally distributed explanatory variables and the general results are employed to provide useful expressions for the distributions of simple, multiple, and partial-multiple…
NASA Technical Reports Server (NTRS)
Shahshahani, Behzad M.; Landgrebe, David A.
1992-01-01
The effect of additional unlabeled samples in improving the supervised learning process is studied in this paper. Three learning processes, supervised, unsupervised, and combined supervised-unsupervised, are compared by studying the asymptotic behavior of the estimates obtained under each process. Upper and lower bounds on the asymptotic covariance matrices are derived. It is shown that under a normal mixture density assumption for the probability density function of the feature space, combined supervised-unsupervised learning is always superior to supervised learning in achieving better estimates. Experimental results are provided to verify the theoretical concepts.
Chen, Yong; Luo, Sheng; Chu, Haitao; Wei, Peng
2013-05-01
Multivariate meta-analysis is useful in combining evidence from independent studies which involve several comparisons among groups based on a single outcome. For binary outcomes, the commonly used statistical models for multivariate meta-analysis are multivariate generalized linear mixed effects models, which assume that risks, after some transformation, follow a multivariate normal distribution with possible correlations. In this article, we consider an alternative model for multivariate meta-analysis in which the risks are modeled by the multivariate beta distribution proposed by Sarmanov (1966). This model has several attractive features compared to the conventional multivariate generalized linear mixed effects models, including a simple likelihood function, no need to specify a link function, and closed-form expressions of the distribution functions for study-specific risk differences. We investigate the finite sample performance of this model by simulation studies and illustrate its use with an application to multivariate meta-analysis of adverse events of tricyclic antidepressant treatment in clinical trials.
NASA Astrophysics Data System (ADS)
Feng, Shangyuan; Lin, Juqiang; Huang, Zufang; Chen, Guannan; Chen, Weisheng; Wang, Yue; Chen, Rong; Zeng, Haishan
2013-01-01
The capability of using silver nanoparticle based near-infrared surface enhanced Raman scattering (SERS) spectroscopy combined with principal component analysis (PCA) and linear discriminate analysis (LDA) to differentiate esophageal cancer tissue from normal tissue was presented. Significant differences in Raman intensities of prominent SERS bands were observed between normal and cancer tissues. PCA-LDA multivariate analysis of the measured tissue SERS spectra achieved diagnostic sensitivity of 90.9% and specificity of 97.8%. This exploratory study demonstrated great potential for developing label-free tissue SERS analysis into a clinical tool for esophageal cancer detection.
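The PCA-LDA approach described above can be sketched schematically. The "spectra" below are synthetic Gaussian bands with noise, not SERS measurements, and the band positions and heights are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Build two classes of synthetic "spectra" that differ in band intensities,
# reduce dimension with PCA, then separate the classes with LDA.
rng = np.random.default_rng(4)
wavenumber = np.linspace(400, 1800, 300)

def band(center, height, width=15.0):
    return height * np.exp(-0.5 * ((wavenumber - center) / width) ** 2)

normal_base = band(1004, 1.0) + band(1445, 0.6)
cancer_base = band(1004, 0.7) + band(1335, 0.4) + band(1445, 0.9)
X = np.vstack([base + 0.05 * rng.standard_normal(wavenumber.size)
               for base in [normal_base] * 40 + [cancer_base] * 40])
y = np.array([0] * 40 + [1] * 40)

scores = PCA(n_components=5).fit_transform(X)       # dimension reduction
lda = LinearDiscriminantAnalysis().fit(scores, y)   # linear discriminant
accuracy = lda.score(scores, y)                     # training accuracy
```

With well-separated band differences the pipeline classifies essentially all training spectra correctly; real tissue spectra are far noisier, hence the cross-validated sensitivity and specificity reported in the abstract.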
VanWagner, Lisa B; Montag, Samantha; Zhao, Lihui; Allen, Norrina B; Lloyd-Jones, Donald M; Das, Arighno; Skaro, Anton I; Hohmann, Samuel; Friedewald, John J; Levitsky, Josh
2018-03-20
In the general population, even mild renal disease is associated with increased cardiovascular (CV) complications. Whether this is true in liver transplant recipients (LTR) is unknown. This was a retrospective cohort study of 671 LTR (2002-2012) from a large urban tertiary care center and 37,322 LTR using Vizient hospitalization data linked to the United Network for Organ Sharing. The MDRD4 equation estimated GFR (eGFR). Outcomes were 1-year CV complications (death/hospitalization from myocardial infarction, heart failure, atrial fibrillation, cardiac arrest, pulmonary embolism or stroke) and mortality. Latent mixture modeling identified trajectories in eGFR in the first LT year in the 671 patients. Mean (SD) eGFR was 72.1 (45.7) mL/min/1.73 m². Six distinct eGFR trajectories were identified in the local cohort (n=671): qualitatively Normal-Slow Decrease (4% of cohort), Normal-Rapid Decrease (4%), Mild-Stable (18%), Mild-Slow Decrease (35%), Moderate-Stable (30%), and Severe-Stable (9%). In multivariable analyses adjusted for confounders and baseline eGFR, the greatest odds of 1-year CV complications were in the Normal-Rapid Decrease group (OR, 95% CI: 10.6, 3.0-36.9). Among the national cohort, each 5-unit lower eGFR at LT was associated with a 2% and 5% higher hazard of all-cause and CV mortality, respectively (p<0.0001), independent of multiple confounders. Even mild renal disease at the time of LT is a risk factor for posttransplant all-cause and CV mortality. More rapid declines in eGFR soon after LT correlate with risk of adverse CV outcomes, highlighting the need to study whether early renal preservation interventions also reduce CV complications.
Strelka: accurate somatic small-variant calling from sequenced tumor-normal sample pairs.
Saunders, Christopher T; Wong, Wendy S W; Swamy, Sajani; Becq, Jennifer; Murray, Lisa J; Cheetham, R Keira
2012-07-15
Whole genome and exome sequencing of matched tumor-normal sample pairs is becoming routine in cancer research. The consequent increased demand for somatic variant analysis of paired samples requires methods specialized to model this problem so as to sensitively call variants at any practical level of tumor impurity. We describe Strelka, a method for somatic SNV and small indel detection from sequencing data of matched tumor-normal samples. The method uses a novel Bayesian approach which represents continuous allele frequencies for both tumor and normal samples, while leveraging the expected genotype structure of the normal. This is achieved by representing the normal sample as a mixture of germline variation with noise, and representing the tumor sample as a mixture of the normal sample with somatic variation. A natural consequence of the model structure is that sensitivity can be maintained at high tumor impurity without requiring purity estimates. We demonstrate that the method has superior accuracy and sensitivity on impure samples compared with approaches based on either diploid genotype likelihoods or general allele-frequency tests. The Strelka workflow source code is available at ftp://strelka@ftp.illumina.com/. csaunders@illumina.com
DOT National Transportation Integrated Search
2001-07-01
This work pertains to preparation of concrete drying shrinkage data for proposed concrete mixtures during normal concrete : trial batch verification. Selected concrete mixtures will include PennDOT Classes AAA and AA and will also include the use of ...
Survival and growth of trees and shrubs on different lignite minesoils in Louisiana
James D. Haywood; Allan E. Tiarks; James P. Barnett
1993-01-01
In 1980, an experimental opencast lignite mine was developed to compare redistributed A horizon with three minesoil mixtures as growth media for woody plants. The three minesoil mixtures contained different amounts and types of overburden materials, and normal reclamation practices were followed. Loblolly pine (Pinus taeda, L.), sawtooth oak (
Computational Aspects of N-Mixture Models
Dennis, Emily B; Morgan, Byron JT; Ridout, Martin S
2015-01-01
The N-mixture model is widely used to estimate the abundance of a population in the presence of unknown detection probability from only a set of counts subject to spatial and temporal replication (Royle, 2004, Biometrics 60, 105–115). We explain and exploit the equivalence of N-mixture and multivariate Poisson and negative-binomial models, which provides powerful new approaches for fitting these models. We show that particularly when detection probability and the number of sampling occasions are small, infinite estimates of abundance can arise. We propose a sample covariance as a diagnostic for this event, and demonstrate its good performance in the Poisson case. Infinite estimates may be missed in practice, due to numerical optimization procedures terminating at arbitrarily large values. It is shown that the use of a bound, K, for an infinite summation in the N-mixture likelihood can result in underestimation of abundance, so that default values of K in computer packages should be avoided. Instead we propose a simple automatic way to choose K. The methods are illustrated by analysis of data on Hermann's tortoise Testudo hermanni. PMID:25314629
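The truncation issue flagged above can be sketched directly from the N-mixture likelihood (Royle, 2004): each site's likelihood sums a Poisson prior over latent abundance N against binomial detection terms, and the infinite sum is cut at a bound K. The counts below are simulated (50 sites, 3 visits, lambda = 20, detection p = 0.3 treated as known for simplicity), and the grid search is purely illustrative.

```python
import numpy as np
from scipy.stats import binom, poisson

# N-mixture log-likelihood with the infinite sum over abundance cut at K.
def nmix_loglik(counts, lam, p, K):
    ll = 0.0
    for y in counts:                                   # one site at a time
        N = np.arange(y.max(), K + 1)                  # candidate abundances
        per_visit = binom.pmf(y[:, None], N[None, :], p)
        ll += np.log(np.sum(poisson.pmf(N, lam) * per_visit.prod(axis=0)))
    return ll

rng = np.random.default_rng(2)
true_N = rng.poisson(20, size=50)                      # 50 sites, lambda = 20
counts = rng.binomial(true_N[:, None], 0.3, size=(50, 3))

lams = np.linspace(5, 60, 111)

def lam_hat(K):
    return lams[np.argmax([nmix_loglik(counts, lam, 0.3, K) for lam in lams])]

# A bound K that is too small truncates the sum and biases abundance low,
# which is the underestimation the paper warns about.
est_small_K, est_large_K = lam_hat(25), lam_hat(200)
```

Running this, the estimate under the small bound does not exceed the estimate under the generous bound, consistent with the paper's warning against default K values.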
NASA Astrophysics Data System (ADS)
Quatela, Alessia; Gilmore, Adam M.; Steege Gall, Karen E.; Sandros, Marinella; Csatorday, Karoly; Siemiarczuk, Alex; Yang, Boqian (Ben); Camenen, Loïc
2018-04-01
We investigate the new simultaneous absorbance-transmission and fluorescence excitation-emission matrix method for rapid and effective characterization of the varying components from a mixture. The absorbance-transmission and fluorescence excitation-emission matrix method uniquely facilitates correction of fluorescence inner-filter effects to yield quantitative fluorescence spectral information that is largely independent of component concentration. This is significant because it allows one to effectively monitor quantitative component changes using multivariate methods and to generate and evaluate spectral libraries. We present the use of this novel instrument in different fields: i.e. tracking changes in complex mixtures including natural water, wine as well as monitoring stability and aggregation of hormones for biotherapeutics.
ERIC Educational Resources Information Center
Lix, Lisa M.; Algina, James; Keselman, H. J.
2003-01-01
The approximate degrees of freedom Welch-James (WJ) and Brown-Forsythe (BF) procedures for testing within-subjects effects in multivariate groups by trials repeated measures designs were investigated under departures from covariance homogeneity and normality. Empirical Type I error and power rates were obtained for least-squares estimators and…
Torres-Lapasió, J R; Pous-Torres, S; Ortiz-Bolsico, C; García-Alvarez-Coque, M C
2015-01-16
The optimisation of the resolution in high-performance liquid chromatography is traditionally performed attending only to the time information. However, even in the optimal conditions, some peak pairs may remain unresolved. Such incomplete resolution can be still accomplished by deconvolution, which can be carried out with more guarantees of success by including spectral information. In this work, two-way chromatographic objective functions (COFs) that incorporate both time and spectral information were tested, based on the peak purity (analyte peak fraction free of overlapping) and the multivariate selectivity (figure of merit derived from the net analyte signal) concepts. These COFs are sensitive to situations where the components that coelute in a mixture show some spectral differences. Therefore, they are useful to find out experimental conditions where the spectrochromatograms can be recovered by deconvolution. Two-way multivariate selectivity yielded the best performance and was applied to the separation using diode-array detection of a mixture of 25 phenolic compounds, which remained unresolved in the chromatographic order using linear and multi-linear gradients of acetonitrile-water. Peak deconvolution was carried out using the combination of orthogonal projection approach and alternating least squares. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Moustafa, Azza A.; Hegazy, Maha A.; Mohamed, Dalia; Ali, Omnia
2016-02-01
A novel approach for the resolution and quantitation of a severely overlapped quaternary mixture of carbinoxamine maleate (CAR), pholcodine (PHL), ephedrine hydrochloride (EPH) and sunset yellow (SUN) in syrup was demonstrated utilizing different spectrophotometric assisted multivariate calibration methods. The applied methods used different processing and pre-processing algorithms. The proposed methods were partial least squares (PLS), concentration residuals augmented classical least squares (CRACLS), and a novel method, continuous wavelet transforms coupled with partial least squares (CWT-PLS). These methods were applied to a training set in the concentration ranges of 40-100 μg/mL, 40-160 μg/mL, 100-500 μg/mL and 8-24 μg/mL for the four components, respectively. The utilized methods required no preliminary separation step or chemical pretreatment. The validity of the methods was evaluated by an external validation set. The selectivity of the developed methods was demonstrated by analyzing the drugs in their combined pharmaceutical formulation without any interference from additives. The obtained results were statistically compared with the official and reported methods, where no significant difference was observed regarding both accuracy and precision.
Leung Tang, Pik; Alqassim, Mohammad; Nic Daéid, Niamh; Berlouis, Leonard; Seelenbinder, John
2016-05-01
Concrete is by far the world's most common construction material. Modern concrete is a mixture of industrial pozzolanic cement formulations and aggregate fillers. The former acts as the glue or binder in the final inorganic composite; however, when exposed to a fire the degree of concrete damage is often difficult to evaluate nondestructively. Fourier transform infrared (FT-IR) spectroscopy, through techniques such as transmission, attenuated total reflectance, and diffuse reflectance, has rarely been used to evaluate thermally damaged concrete. In this paper, we report on a study assessing the thermal damage of concrete via the use of a nondestructive handheld FT-IR with a diffuse reflectance sample interface. In situ measurements can be made on actual damaged areas, without the need for sample preparation. Separate multivariate models were developed to determine the equivalent maximal temperature endured for three common industrial concrete formulations. The concrete mixtures were successfully modeled, displaying high predictive power as well as good specificity. This has potential uses in forensic investigation and remediation services, particularly for fires in buildings. © The Author(s) 2016.
Gonzales, Gustavo F; Gonzales-Castañeda, Cynthia; Gasco, Manuel
2013-09-01
We investigated the effect of two extracts from Peruvian plants, given alone or in a mixture, on sperm count and glycemia in streptozotocin-diabetic mice. Normal or diabetic mice were divided into groups receiving vehicle, black maca (Lepidium meyenii), yacon (Smallanthus sonchifolius) or three mixtures of extracts of black maca/yacon (90/10, 50/50 and 10/90%). Normal or diabetic mice were treated for 7 d with each extract, mixture or vehicle. Glycemia, daily sperm production (DSP), epididymal and vas deferens sperm counts in mice, and polyphenol content and antioxidant activity in each extract were assessed. Black maca (BM), yacon and the mixture of extracts reduced glucose levels in diabetic mice. Non-diabetic mice treated with BM and yacon showed higher DSP than those treated with vehicle (p < 0.05). Diabetic mice treated with BM, yacon and the maca/yacon mixture showed increased DSP and sperm counts in the vas deferens and epididymis with respect to non-diabetic and diabetic mice treated with vehicle (p < 0.05). Yacon has 3.05 times higher polyphenol content than maca, and this was associated with higher antioxidant activity. The combination of the two extracts improved glycemic levels and male reproductive function in diabetic mice. Streptozotocin increased liver weight 1.43-fold, an effect that was reversed by the assessed plant extracts. In summary, streptozotocin-induced diabetes resulted in reduced sperm counts and liver damage. These effects could be reduced with BM, yacon and the BM+yacon mixture.
Applications of modern statistical methods to analysis of data in physical science
NASA Astrophysics Data System (ADS)
Wicker, James Eric
Modern methods of statistical and computational analysis offer solutions to dilemmas confronting researchers in physical science. Although the ideas behind modern statistical and computational analysis methods were originally introduced in the 1970's, most scientists still rely on methods written during the early era of computing. These researchers, who analyze increasingly voluminous and multivariate data sets, need modern analysis methods to extract the best results from their studies. The first section of this work showcases applications of modern linear regression. Since the 1960's, many researchers in spectroscopy have used classical stepwise regression techniques to derive molecular constants. However, problems with thresholds of entry and exit for model variables plague this analysis method. Other criticisms of this kind of stepwise procedure include its inefficient searching method, the order in which variables enter or leave the model and problems with overfitting data. We implement an information scoring technique that overcomes the assumptions inherent in the stepwise regression process to calculate molecular model parameters. We believe that this kind of information based model evaluation can be applied to more general analysis situations in physical science. The second section proposes new methods of multivariate cluster analysis. The K-means algorithm and the EM algorithm, introduced in the 1960's and 1970's respectively, formed the basis of multivariate cluster analysis methodology for many years. However, several shortcomings of these methods include strong dependence on initial seed values and inaccurate results when the data seriously depart from hypersphericity. We propose new cluster analysis methods based on genetic algorithms that overcome the strong dependence on initial seed values.
In addition, we propose a generalization of the Genetic K-means algorithm which can accurately identify clusters with complex hyperellipsoidal covariance structures. We then use this new algorithm in a genetic algorithm based Expectation-Maximization process that can accurately calculate parameters describing complex clusters in a mixture model routine. Using the accuracy of this GEM algorithm, we assign information scores to cluster calculations in order to best identify the number of mixture components in a multivariate data set. We will showcase how these algorithms can be used to process multivariate data from astronomical observations.
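The classical algorithm this section builds on can be shown in a few lines. Below is bare-bones Lloyd's K-means; the genetic-algorithm seeding and the GEM extension described in the text are not reproduced, and the data and K = 2 are illustrative.

```python
import numpy as np

# Lloyd's K-means: alternate assignment and centroid-update steps.
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0.0, 0.5, (100, 2)),      # cluster around (0, 0)
               rng.normal(5.0, 0.5, (100, 2))])     # cluster around (5, 5)

centers = np.array([X[0], X[-1]])                   # crude seeding
for _ in range(50):
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)                   # assignment step
    new_centers = np.array([X[labels == k].mean(axis=0) for k in range(2)])
    if np.allclose(new_centers, centers):           # converged
        break
    centers = new_centers
```

The algorithm's sensitivity to the seeding line is exactly the weakness the genetic-algorithm variants in this work are designed to overcome.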
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Benthem, Mark Hilary; Mowry, Curtis Dale; Kotula, Paul Gabriel
Thermal decomposition of poly dimethyl siloxane compounds, Sylgard® 184 and 186, was examined using thermal desorption coupled gas chromatography-mass spectrometry (TD/GC-MS) and multivariate analysis. This work describes a method of producing multiway data using a stepped thermal desorption. The technique involves sequentially heating a sample of the material of interest with subsequent analysis in a commercial GC/MS system. The decomposition chromatograms were analyzed using multivariate analysis tools including principal component analysis (PCA), factor rotation employing the varimax criterion, and multivariate curve resolution. The results of the analysis show seven components related to offgassing of various fractions of siloxanes that vary as a function of temperature. Thermal desorption coupled with gas chromatography-mass spectrometry (TD/GC-MS) is a powerful analytical technique for analyzing chemical mixtures. It has great potential in numerous analytic areas including materials analysis, sports medicine, the detection of designer drugs, and biological research for metabolomics. Data analysis is complicated, far from automated, and can result in high false positive or false negative rates. We have demonstrated a step-wise TD/GC-MS technique that removes more volatile compounds from a sample before extracting the less volatile compounds. This creates an additional dimension of separation before the GC column, while simultaneously generating three-way data. Sandia's proven multivariate analysis methods, when applied to these data, have several advantages over current commercial options. It also has demonstrated potential for success in finding and enabling identification of trace compounds.
Several challenges remain, however, including understanding the sources of noise in the data, outlier detection, improving the data pretreatment and analysis methods, developing a software tool for ease of use by the chemist, and demonstrating our belief that this multivariate analysis will enable superior differentiation capabilities. In addition, noise and system artifacts challenge the analysis of GC-MS data collected on lower cost equipment, ubiquitous in commercial laboratories. This research has the potential to affect many areas of analytical chemistry including materials analysis, medical testing, and environmental surveillance. It could also provide a method to measure adsorption parameters for chemical interactions on various surfaces by measuring desorption as a function of temperature for mixtures. We have presented results of a novel method for examining offgas products of a common PDMS material. Our method involves utilizing a stepped TD/GC-MS data acquisition scheme that may be almost totally automated, coupled with multivariate analysis schemes. This method of data generation and analysis can be applied to a number of materials aging and thermal degradation studies.
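The multivariate idea behind the curve-resolution step can be illustrated with a toy rank estimate: a data matrix built from a small number of pure-component profiles has that many dominant singular values. The elution and spectral profiles below are invented, not GC-MS data, and the 1% threshold is an arbitrary illustrative cut.

```python
import numpy as np

# Rank-2 "chromatogram x spectrum" matrix plus small noise: SVD recovers
# the number of underlying components from the dominant singular values.
rng = np.random.default_rng(6)
t = np.linspace(0.0, 1.0, 120)
elution = np.stack([np.exp(-0.5 * ((t - c) / 0.05) ** 2)
                    for c in (0.3, 0.6)])            # two elution profiles
spectra = rng.random((2, 80))                        # two component "spectra"
X = elution.T @ spectra + 1e-3 * rng.standard_normal((120, 80))

s = np.linalg.svd(X, compute_uv=False)
n_significant = int(np.sum(s > 0.01 * s[0]))         # crude significance cut
```

For this clean synthetic matrix exactly two singular values survive the cut; in real TD/GC-MS data, noise and drift blur this boundary, which is one of the challenges noted above.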
Borazan, Hale; Sahin, Osman; Kececioglu, Ahmet; Uluer, M Selcuk; Et, Tayfun; Otelcioglu, Seref
2012-01-01
The pain on propofol injection is considered to be a common and difficult to eliminate problem in children. In this study, we aimed to compare the efficacy of pretreatment with tramadol 1 mg.kg(-1) and a propofol-lidocaine 20 mg mixture for prevention of propofol-induced pain in children. One hundred and twenty ASA I-II patients undergoing orthopedic and otolaryngological surgery were included in this study and were divided into three groups with random table numbers. Group C (n=39) received normal saline placebo and Group T (n=40) received 1 mg.kg(-1) tramadol 60 sec before propofol (180 mg 1% propofol with 2 ml normal saline), whereas Group L (n=40) received normal saline placebo before the propofol-lidocaine mixture (180 mg 1% propofol with 2 ml 1% lidocaine). One patient in Group C was withdrawn from the study because of difficulty in inserting an iv cannula; thus, one hundred and nineteen patients were analyzed. After the calculated dose of propofol was given, a blinded observer assessed the pain with a four-point behavioral scale. There were no significant differences in patient characteristics and intraoperative variables among the groups (p>0.05), except for intraoperative fentanyl consumption and analgesic requirement one hr after surgery (p<0.05). Both tramadol 1 mg.kg(-1) and the lidocaine 20 mg mixture significantly reduced propofol pain when compared with the control group. Moderate and severe pain were higher in the control group (p<0.05). The incidence of overall pain was 79.4% in the control group, 35% in the tramadol group, and 25% in the lidocaine group (p<0.001). Pretreatment with tramadol 60 sec before propofol injection and the propofol-lidocaine mixture both significantly reduced propofol injection pain when compared to placebo in children.
ERIC Educational Resources Information Center
Zu, Jiyun; Yuan, Ke-Hai
2012-01-01
In the nonequivalent groups with anchor test (NEAT) design, the standard error of linear observed-score equating is commonly estimated by an estimator derived assuming multivariate normality. However, real data are seldom normally distributed, causing this normal estimator to be inconsistent. A general estimator, which does not rely on the…
Falcaro, Milena; Pickles, Andrew
2007-02-10
We focus on the analysis of multivariate survival times with highly structured interdependency and subject to interval censoring. Such data are common in developmental genetics and genetic epidemiology. We propose a flexible mixed probit model that deals naturally with complex but uninformative censoring. The recorded ages of onset are treated as possibly censored ordinal outcomes with the interval censoring mechanism seen as arising from a coarsened measurement of a continuous variable observed as falling between subject-specific thresholds. This bypasses the requirement for the failure times to be observed as falling into non-overlapping intervals. The assumption of a normal age-of-onset distribution of the standard probit model is relaxed by embedding within it a multivariate Box-Cox transformation whose parameters are jointly estimated with the other parameters of the model. Complex decompositions of the underlying multivariate normal covariance matrix of the transformed ages of onset become possible. The new methodology is here applied to a multivariate study of the ages of first use of tobacco and first consumption of alcohol without parental permission in twins. The proposed model allows estimation of the genetic and environmental effects that are shared by both of these risk behaviours as well as those that are specific. 2006 John Wiley & Sons, Ltd.
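The Box-Cox step the model embeds is easy to illustrate on its own: the transformation pulls a right-skewed, positive variable (like an age of onset) toward normality. The lognormal sample below is synthetic, not the twin data analysed in the paper.

```python
import numpy as np
from scipy import stats

# Box-Cox transformation of a skewed positive variable; scipy jointly
# estimates the transformation parameter lambda by maximum likelihood.
rng = np.random.default_rng(3)
ages = rng.lognormal(mean=2.5, sigma=0.5, size=5000)   # skewed, positive

transformed, lam_hat = stats.boxcox(ages)              # ML estimate of lambda
skew_before = stats.skew(ages)
skew_after = stats.skew(transformed)
```

After the transformation the sample skewness drops essentially to zero, which is why embedding Box-Cox inside the probit model relaxes its normal age-of-onset assumption.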
NASA Technical Reports Server (NTRS)
Miller, C. G., III; Wilder, S. E.
1976-01-01
Equilibrium thermodynamic and flow properties are presented in tabulated and graphical form for moving, standing, and reflected normal shock waves into hydrogen-helium mixtures representative of postulated outer planet atmospheres. These results are presented in four volumes and the volumetric compositions of the mixtures are 0.95H2-0.05He in Volume 1, 0.90H2-0.10He in Volume 2, 0.85H2-0.15He in Volume 3, and 0.75H2-0.25He in Volume 4. Properties include pressure, temperature, density, enthalpy, speed of sound, entropy, molecular-weight ratio, isentropic exponent, velocity, and species mole fractions. Incident (moving) shock velocities are varied from 4 to 70 km/sec for a range of initial pressure of 5 N/sq m to 100 kN/sq m. Results are applicable to shock-tube flows and for determining flow conditions behind the normal portion of the bow shock about a blunt body at high velocities in postulated outer planet atmospheres. The document is a revised version of the original edition of NASA SP-3085 published in 1974.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepel, Gregory F.; Pasquini, Benedetta; Cooley, Scott K.
In recent years, multivariate optimization has played an increasing role in analytical method development. ICH guidelines recommend using statistical design of experiments to identify the design space, in which multivariate combinations of composition variables and process variables have been demonstrated to provide quality results. Considering a microemulsion electrokinetic chromatography (MEEKC) method, the performance of the electrophoretic run depends on the proportions of mixture components (MCs) of the microemulsion and on the values of process variables (PVs). In the present work, for the first time in the literature, a mixture-process variable (MPV) approach was applied to optimize a MEEKC method for the analysis of coenzyme Q10 (Q10), ascorbic acid (AA), and folic acid (FA) contained in nutraceuticals. The MCs (buffer, surfactant-cosurfactant, oil) and the PVs (voltage, buffer concentration, buffer pH) were simultaneously changed according to an MPV experimental design. A 62-run MPV design was generated using the I-optimality criterion, assuming a 46-term MPV model allowing for special-cubic blending of the MCs, quadratic effects of the PVs, and some MC-PV interactions. The obtained data were used to develop MPV models that express the performance of an electrophoretic run (measured as peak efficiencies of Q10, AA, and FA) in terms of the MCs and PVs. Contour and perturbation plots were drawn for each of the responses. Finally, the MPV models and criteria for the peak efficiencies were used to develop the design space and an optimal subregion (i.e., the settings of the MCs and PVs that satisfy the respective criteria), as well as a unique optimal combination of MCs and PVs.
Dayan, Michael; Hurtado Rúa, Sandra M.; Monohan, Elizabeth; Fujimoto, Kyoko; Pandya, Sneha; LoCastro, Eve M.; Vartanian, Tim; Nguyen, Thanh D.; Raj, Ashish; Gauthier, Susan A.
2017-01-01
A novel lesion-mask free method based on a gamma mixture model was applied to myelin water fraction (MWF) maps to estimate the association between cortical thickness and myelin content, and how it differs between relapsing-remitting (RRMS) and secondary-progressive multiple sclerosis (SPMS) groups (135 and 23 patients, respectively). It was compared to an approach based on lesion masks. The gamma mixture distribution of whole brain, white matter (WM) MWF was characterized with three variables: the mode (most frequent value) m1 of the gamma component shown to relate to lesion, the mode m2 of the component shown to be associated with normal appearing (NA) WM, and the mixing ratio (λ) between the two distributions. The lesion-mask approach relied on the mean MWF within lesion and within NAWM. A multivariate regression analysis was carried out to find the best predictors of cortical thickness for each group and for each approach. The gamma-mixture method was shown to outperform the lesion-mask approach in terms of adjusted R2, both for the RRMS and SPMS groups. The predictors of the final gamma-mixture models were found to be m1 (β = 1.56, p < 0.005), λ (β = −0.30, p < 0.0005) and age (β = −0.0031, p < 0.005) for the RRMS group (adjusted R2 = 0.16), and m2 (β = 4.72, p < 0.0005) for the SPMS group (adjusted R2 = 0.45). Further, a DICE coefficient analysis demonstrated that the lesion mask had more overlap to an ROI associated with m1, than to an ROI associated with m2 (p < 0.00001), and vice versa for the NAWM mask (p < 0.00001). These results suggest that during the relapsing phase, focal WM damage is associated with cortical thinning, yet in SPMS patients, global WM deterioration has a much stronger influence on secondary degeneration. 
Through these findings, we demonstrate the potential contribution of myelin loss on neuronal degeneration at different disease stages and the usefulness of our statistical reduction technique which is not affected by the typical bias associated with approaches based on lesion masks. PMID:28603479
ERIC Educational Resources Information Center
Grasman, Raoul P. P. P.; Huizenga, Hilde M.; Geurts, Hilde M.
2010-01-01
Crawford and Howell (1998) have pointed out that the common practice of z-score inference on cognitive disability is inappropriate if a patient's performance on a task is compared with relatively few typical control individuals. Appropriate univariate and multivariate statistical tests have been proposed for these studies, but these are only valid…
ERIC Educational Resources Information Center
Gibbons, Robert D.; And Others
In the process of developing a conditionally-dependent item response theory (IRT) model, the problem arose of modeling an underlying multivariate normal (MVN) response process with general correlation among the items. Without the assumption of conditional independence, for which the underlying MVN cdf takes on comparatively simple forms and can be…
Use of collateral information to improve LANDSAT classification accuracies
NASA Technical Reports Server (NTRS)
Strahler, A. H. (Principal Investigator)
1981-01-01
Methods to improve LANDSAT classification accuracies were investigated including: (1) the use of prior probabilities in maximum likelihood classification as a methodology to integrate discrete collateral data with continuously measured image density variables; (2) the use of the logit classifier as an alternative to multivariate normal classification that permits mixing both continuous and categorical variables in a single model and fits empirical distributions of observations more closely than the multivariate normal density function; and (3) the use of collateral data in a geographic information system as exercised to model a desired output information layer as a function of input layers of raster format collateral and image data base layers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rupšys, P.
A system of stochastic differential equations (SDE) with mixed-effects parameters and a multivariate normal copula density function was used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used and computational guidelines are given. After fitting the conditional probability density functions to outside-bark diameter at breast height and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared to regression tree height equations. The results are implemented in the symbolic computational language MAPLE.
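The bivariate normal (Gaussian) copula at the heart of this model can be illustrated by sampling: draw from a correlated bivariate normal and map each margin through the normal CDF, yielding uniform margins with the copula's dependence structure. A minimal sketch with a hypothetical correlation of 0.7, not the paper's fitted diameter-height model:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def gaussian_copula_sample(rho, n):
    """Draw n pairs of correlated uniforms from a bivariate Gaussian copula.

    rho is the correlation of the latent bivariate normal; the returned
    margins are uniform on (0, 1) while the dependence is that of the
    normal copula.
    """
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n)
    return norm.cdf(z)  # probability-integral transform to uniform margins

u = gaussian_copula_sample(0.7, 50_000)
print(u.mean(axis=0))          # both margins near 0.5
print(np.corrcoef(u.T)[0, 1])  # positive, somewhat below 0.7
```

In a full copula model these uniforms would be mapped through the fitted marginal quantile functions (here, of diameter and height) to produce dependent draws on the original scales.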
Characterization and Separation of Cancer Cells with a Wicking Fiber Device.
Tabbaa, Suzanne M; Sharp, Julia L; Burg, Karen J L
2017-12-01
Current cancer diagnostic methods lack the ability to quickly, simply, efficiently, and inexpensively screen cancer cells from a mixed population of cancer and normal cells. Methods based on biomarkers are unreliable due to complexity of cancer cells, plasticity of markers, and lack of common tumorigenic markers. Diagnostics are time intensive, require multiple tests, and provide limited information. In this study, we developed a novel wicking fiber device that separates cancer and normal cell types. To the best of our knowledge, no previous work has used vertical wicking of cells through fibers to identify and isolate cancer cells. The device separated mouse mammary tumor cells from a cellular mixture containing normal mouse mammary cells. Further investigation showed the device separated and isolated human cancer cells from a heterogeneous mixture of normal and cancerous human cells. We report a simple, inexpensive, and rapid technique that has potential to identify and isolate cancer cells from large volumes of liquid samples that can be translated to on-site clinic diagnosis.
Internal combustion engine controls for reduced exhaust contaminants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthews, D.R. Jr.
1974-06-04
An electrochemical control system for achieving optimum efficiency in the catalytic conversion of hydrocarbon and carbon monoxide emissions from internal combustion engines is described. The system automatically maintains catalyst temperature at a point for maximum pollutant conversion by adjusting ignition timing and fuel/air ratio during warm-up and subsequent operation. Ignition timing is retarded during engine warm-up to bring the catalytic converter to an efficient operating temperature within a minimum period of time. After the converter reaches a predetermined minimum temperature, the spark is advanced to within its normal operating range. A needle-valve adjustment during warm-up is employed to enrich the fuel/air mixture by approximately 10 percent. Following warm-up and attainment of a predetermined catalyst temperature, the needle valve is moved automatically to its normal position (e.g., a fuel/air ratio of 16:1). Although the normal lean mixture causes increased amounts of nitrogen oxide emissions, present NO/sub x/ converters appear capable of handling the increased emissions under normal operating conditions.
Simulation techniques for estimating error in the classification of normal patterns
NASA Technical Reports Server (NTRS)
Whitsitt, S. J.; Landgrebe, D. A.
1974-01-01
Methods of efficiently generating and classifying samples with specified multivariate normal distributions were discussed. Conservative confidence tables for sample sizes are given for selective sampling. Simulation results are compared with classified training data. Techniques for comparing error and separability measure for two normal patterns are investigated and used to display the relationship between the error and the Chernoff bound.
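Generating samples with a specified multivariate normal distribution, as the study requires, is standardly done through the Cholesky factor of the covariance matrix. A minimal sketch of that standard construction (not the paper's specific generation or classification procedures):

```python
import numpy as np

rng = np.random.default_rng(42)

def mvn_samples(mean, cov, n):
    """Generate n samples from N(mean, cov) via the Cholesky factor.

    If cov = L @ L.T and z ~ N(0, I), then mean + L @ z has exactly the
    specified multivariate normal distribution.
    """
    L = np.linalg.cholesky(np.asarray(cov, dtype=float))
    z = rng.standard_normal((n, len(mean)))
    return np.asarray(mean) + z @ L.T

mean = [1.0, -2.0]
cov = [[2.0, 0.6], [0.6, 1.0]]
x = mvn_samples(mean, cov, 100_000)
print(x.mean(axis=0))  # close to [1, -2]
print(np.cov(x.T))     # close to cov
```

Classification error for two such patterns could then be estimated empirically by counting misassignments of the generated samples, which is the quantity the Chernoff bound approximates analytically.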
Mun, Eun-Young; von Eye, Alexander; Bates, Marsha E.; Vaschillo, Evgeny G.
2010-01-01
Model-based cluster analysis is a new clustering procedure to investigate population heterogeneity utilizing finite mixture multivariate normal densities. It is an inferentially based, statistically principled procedure that uses the Bayesian Information Criterion (BIC) to compare multiple, possibly non-nested models and identify the optimum number of clusters. The current study clustered 36 young men and women based on their baseline heart rate (HR) and HR variability (HRV), chronic alcohol use, and reasons for drinking. Two cluster groups were identified and labeled High Alcohol Risk and Normative groups. Compared to the Normative group, individuals in the High Alcohol Risk group had higher levels of alcohol use and more strongly endorsed disinhibition and suppression reasons for use. The High Alcohol Risk group showed significant HRV changes in response to positive and negative emotional and appetitive picture cues, compared to neutral cues. In contrast, the Normative group showed a significant HRV change only to negative cues. Findings suggest that individuals with autonomic self-regulatory difficulties may be more susceptible to heavy alcohol use and may use alcohol for emotional regulation. PMID:18331138
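The BIC-based selection of the number of clusters described above can be sketched with scikit-learn's `GaussianMixture`. The HR/HRV data are not available here, so a synthetic two-cluster stand-in is used; this illustrates the selection mechanism, not the study's actual analysis:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# synthetic stand-in for a small two-feature dataset:
# two well-separated bivariate normal clusters
x = np.vstack([
    rng.normal([0.0, 0.0], 0.5, size=(200, 2)),
    rng.normal([4.0, 4.0], 0.5, size=(200, 2)),
])

# fit finite normal mixtures with 1..5 components and keep the lowest BIC
fits = [GaussianMixture(n_components=k, random_state=0).fit(x)
        for k in range(1, 6)]
best = min(fits, key=lambda m: m.bic(x))
print(best.n_components)  # 2 for this well-separated example
```

Because BIC penalizes extra parameters by a multiple of log(n), it trades fit against complexity, which is what makes comparing non-nested mixture models principled.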
Das, Kiranmoy; Daniels, Michael J.
2014-01-01
Summary Estimation of the covariance structure for irregular sparse longitudinal data has been studied by many authors in recent years but typically using fully parametric specifications. In addition, when data are collected from several groups over time, it is known that assuming the same or completely different covariance matrices over groups can lead to loss of efficiency and/or bias. Nonparametric approaches have been proposed for estimating the covariance matrix for regular univariate longitudinal data by sharing information across the groups under study. For the irregular case, with longitudinal measurements that are bivariate or multivariate, modeling becomes more difficult. In this article, to model bivariate sparse longitudinal data from several groups, we propose a flexible covariance structure via a novel matrix stick-breaking process for the residual covariance structure and a Dirichlet process mixture of normals for the random effects. Simulation studies are performed to investigate the effectiveness of the proposed approach over more traditional approaches. We also analyze a subset of Framingham Heart Study data to examine how the blood pressure trajectories and covariance structures differ for the patients from different BMI groups (high, medium and low) at baseline. PMID:24400941
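The Dirichlet-process machinery the paper builds on can be illustrated with the basic stick-breaking construction of mixture weights; the matrix stick-breaking process in the paper generalizes this to a matrix of weights. A minimal truncated sketch with a hypothetical concentration parameter:

```python
import numpy as np

rng = np.random.default_rng(5)

def stick_breaking(alpha, k):
    """First k stick-breaking weights of a Dirichlet process (truncated).

    v_j ~ Beta(1, alpha); w_j = v_j * prod_{i<j} (1 - v_i): each weight
    takes a Beta fraction of the stick left by its predecessors.
    """
    v = rng.beta(1.0, alpha, size=k)
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return w

w = stick_breaking(alpha=2.0, k=50)  # alpha chosen for illustration
print(w[:5])
print(w.sum())  # approaches 1 as the truncation level k grows
```

Smaller `alpha` concentrates mass on the first few components; larger `alpha` spreads it over many, which is how the process adapts the effective number of mixture components to the data.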
Chakraborty, Somsubhra; Weindorf, David C; Li, Bin; Ali, Md Nasim; Majumdar, K; Ray, D P
2014-07-01
This pilot study compared penalized spline regression (PSR) and random forest (RF) regression using visible and near-infrared diffuse reflectance spectroscopy (VisNIR DRS) derived spectra of 164 petroleum contaminated soils after two different spectral pretreatments [first derivative (FD) and standard normal variate (SNV) followed by detrending] for rapid quantification of soil petroleum contamination. Additionally, a new analytical approach was proposed for the recovery of the pure spectral and concentration profiles of n-hexane present in the unresolved mixture of petroleum contaminated soils using multivariate curve resolution alternating least squares (MCR-ALS). The PSR model using FD spectra (r(2) = 0.87, RMSE = 0.580 log10 mg kg(-1), and residual prediction deviation = 2.78) outperformed all other models tested. Quantitative results obtained by MCR-ALS for n-hexane in presence of interferences (r(2) = 0.65 and RMSE 0.261 log10 mg kg(-1)) were comparable to those obtained using the FD (PSR) model. Furthermore, MCR-ALS was able to recover pure spectra of n-hexane. Copyright © 2014 Elsevier Ltd. All rights reserved.
van Mierlo, Pieter; Lie, Octavian; Staljanssens, Willeke; Coito, Ana; Vulliémoz, Serge
2018-04-26
We investigated the influence of processing steps in the estimation of multivariate directed functional connectivity during seizures recorded with intracranial EEG (iEEG) on seizure-onset zone (SOZ) localization. We studied the effect of (i) the number of nodes, (ii) time-series normalization, (iii) the choice of multivariate time-varying connectivity measure: Adaptive Directed Transfer Function (ADTF) or Adaptive Partial Directed Coherence (APDC) and (iv) graph theory measure: outdegree or shortest path length. First, simulations were performed to quantify the influence of the various processing steps on the accuracy to localize the SOZ. Afterwards, the SOZ was estimated from a 113-electrode iEEG seizure recording and compared with the resection that rendered the patient seizure-free. The simulations revealed that ADTF is preferred over APDC to localize the SOZ from ictal iEEG recordings. Normalizing the time series before analysis resulted in an increase of 25-35% in correctly localized SOZ, while adding more nodes to the connectivity analysis led to a moderate decrease of 10% when comparing 128 with 32 input nodes. The real-seizure connectivity estimates localized the SOZ inside the resection area using the ADTF coupled to outdegree or shortest path length. Our study showed that normalizing the time series is an important pre-processing step, while adding nodes to the analysis only marginally affected the SOZ localization. The study shows that directed multivariate Granger-based connectivity analysis is feasible with many input nodes (> 100) and that normalization of the time series before connectivity analysis is preferred.
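The normalization step the study found important is ordinary per-channel standardization of the time series before connectivity estimation. A minimal sketch on hypothetical multichannel data (the iEEG recordings and connectivity code are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(9)

# hypothetical multichannel time series, shape (channels, samples),
# with per-channel offsets and scales that would bias connectivity fits
x = rng.normal(5.0, 3.0, size=(8, 1000))

# per-channel z-score normalization: zero mean, unit variance per channel
z = (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)
print(z.mean(axis=1))  # all ~0
print(z.std(axis=1))   # all 1
```

Standardizing each channel keeps high-amplitude electrodes from dominating the multivariate autoregressive fit that ADTF/APDC estimates are built on.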
Maximum likelihood estimation of finite mixture model for economic data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn statisticians' attention, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Thus, maximum likelihood estimation is used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market price and rubber price for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
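Maximum likelihood estimation of a two-component normal mixture is typically carried out with the EM algorithm. A minimal sketch on synthetic data, illustrating the model class the paper fits rather than its economic dataset:

```python
import numpy as np

rng = np.random.default_rng(7)
# synthetic data from a two-component univariate normal mixture
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 700)])

def em_two_normals(x, iters=200):
    """EM for a two-component univariate normal mixture (a minimal sketch)."""
    w = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])  # crude but workable initialization
    sd = np.array([1.0, 1.0])
    for _ in range(iters):
        # E-step: posterior responsibilities under current parameters
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) \
                 / (sd * np.sqrt(2.0 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted MLE updates
        n_k = r.sum(axis=0)
        w = n_k / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n_k
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
    return w, mu, sd

w, mu, sd = em_two_normals(x)
print(np.round(mu, 1))  # near [-2, 3]
print(np.round(w, 2))   # near [0.3, 0.7]
```

Each EM iteration increases the log-likelihood, which is what yields the consistency properties the abstract appeals to as the sample size grows.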
Bayesian Regularization for Normal Mixture Estimation and Model-Based Clustering
2005-08-04
…describe a four-band magnetic resonance image (MRI), consisting of 23,712 pixels, of a brain with a tumor. Because of the size of the dataset, it is not…
New views of granular mass flows
Iverson, R.M.; Vallance, J.W.
2001-01-01
Concentrated grain-fluid mixtures in rock avalanches, debris flows, and pyroclastic flows do not behave as simple materials with fixed rheologies. Instead, rheology evolves as mixture agitation, grain concentration, and fluid-pressure change during flow initiation, transit, and deposition. Throughout a flow, however, normal forces on planes parallel to the free upper surface approximately balance the weight of the superincumbent mixture, and the Coulomb friction rule describes bulk intergranular shear stresses on such planes. Pore-fluid pressure can temporarily or locally enhance mixture mobility by reducing Coulomb friction and transferring shear stress to the fluid phase. Initial conditions, boundary conditions, and grain comminution and sorting can influence pore-fluid pressures and cause variations in flow dynamics and deposits.
Hong, Chuan; Chen, Yong; Ning, Yang; Wang, Shuang; Wu, Hao; Carroll, Raymond J
2017-01-01
Motivated by analyses of DNA methylation data, we propose a semiparametric mixture model, namely the generalized exponential tilt mixture model, to account for heterogeneity between differentially methylated and non-differentially methylated subjects in the cancer group, and capture the differences in higher order moments (e.g. mean and variance) between subjects in cancer and normal groups. A pairwise pseudolikelihood is constructed to eliminate the unknown nuisance function. To circumvent boundary and non-identifiability problems as in parametric mixture models, we modify the pseudolikelihood by adding a penalty function. In addition, the test with simple asymptotic distribution has computational advantages compared with permutation-based test for high-dimensional genetic or epigenetic data. We propose a pseudolikelihood based expectation-maximization test, and show the proposed test follows a simple chi-squared limiting distribution. Simulation studies show that the proposed test controls Type I errors well and has better power compared to several current tests. In particular, the proposed test outperforms the commonly used tests under all simulation settings considered, especially when there are variance differences between two groups. The proposed test is applied to a real data set to identify differentially methylated sites between ovarian cancer subjects and normal subjects.
O'Sullivan, Finbarr; Muzi, Mark; Mankoff, David A; Eary, Janet F; Spence, Alexander M; Krohn, Kenneth A
2014-06-01
Most radiotracers used in dynamic positron emission tomography (PET) scanning act in a linear time-invariant fashion so that the measured time-course data are a convolution between the time course of the tracer in the arterial supply and the local tissue impulse response, known as the tissue residue function. In statistical terms the residue is a life table for the transit time of injected radiotracer atoms. The residue provides a description of the tracer kinetic information measurable by a dynamic PET scan. Decomposition of the residue function allows separation of rapid vascular kinetics from slower blood-tissue exchanges and tissue retention. For voxel-level analysis, we propose that residues be modeled by mixtures of nonparametrically derived basis residues obtained by segmentation of the full data volume. Spatial and temporal aspects of diagnostics associated with voxel-level model fitting are emphasized. Illustrative examples, some involving cancer imaging studies, are presented. Data from cerebral PET scanning with 18F-fluorodeoxyglucose (FDG) and 15O-water (H2O) in normal subjects are used to evaluate the approach. Cross-validation is used to make regional comparisons between residues estimated using adaptive mixture models and more conventional compartmental modeling techniques. Simulation studies are used to theoretically examine mean square error performance and to explore the benefit of voxel-level analysis when the primary interest is a statistical summary of regional kinetics. The work highlights the contribution that multivariate analysis tools and life-table concepts can make in the recovery of local metabolic information from dynamic PET studies, particularly ones in which the assumptions of compartmental-like models, with residues that are sums of exponentials, might not be certain.
Use of antimüllerian hormone to predict the menopausal transition in HIV-infected women
Scherzer, Rebecca; Greenblatt, Ruth M.; Merhi, Zaher O.; Kassaye, Seble; Lambert-Messerlian, Geralyn; Maki, Pauline M.; Murphy, Kerry; Karim, Roksana; Bacchetti, Peter
2016-01-01
BACKGROUND HIV infection has been associated with early menopausal onset, which may have adverse long-term health consequences. Antimüllerian hormone, a biomarker of ovarian reserve and gonadal aging, is reduced in HIV-infected women. OBJECTIVE We sought to assess the relationship of antimüllerian hormone to age of menopause onset in HIV-infected women. STUDY DESIGN We used antimüllerian hormone levels measured in plasma in 2461 HIV-infected participants from the Women’s Interagency HIV Study to model the age at final menstrual period. Multivariable normal mixture models for censored data were used to identify factors associated with age at final menstrual period. RESULTS Higher antimüllerian hormone at age 40 years was associated with later age at final menstrual period, even after multivariable adjustment for smoking, CD4 cell count, plasma HIV RNA, hepatitis C infection, and history of clinical AIDS. Each doubling of antimüllerian hormone was associated with a 1.5-year increase in the age at final menstrual period. Median age at final menstrual period ranged from 45 years for those in the 10th percentile of antimüllerian hormone to 52 years for those in the 90th percentile. Other factors independently associated with earlier age at final menstrual period included smoking, hepatitis C infection, higher HIV RNA levels, and history of clinical AIDS. CONCLUSION Antimüllerian hormone is highly predictive of age at final menstrual period in HIV-infected women. Measuring antimüllerian hormone in HIV-infected women may enable clinicians to predict risk of early menopause, and potentially implement individualized treatment plans to prevent menopause-related comorbidities and to aid in interpretation of symptoms. PMID:27473002
A new multivariate zero-adjusted Poisson model with applications to biomedicine.
Liu, Yin; Tian, Guo-Liang; Tang, Man-Lai; Yuen, Kam Chuen
2018-05-25
Although advances have recently been made in modeling multivariate count data, existing models have several limitations: (i) the multivariate Poisson log-normal model (Aitchison and Ho, ) cannot be used to fit multivariate count data with excess zero-vectors; (ii) the multivariate zero-inflated Poisson (ZIP) distribution (Li et al., 1999) cannot be used to model zero-truncated/deflated count data and is difficult to apply in high-dimensional cases; (iii) the Type I multivariate zero-adjusted Poisson (ZAP) distribution (Tian et al., 2017) can only model multivariate count data with a special correlation structure in which the component correlations are all positive or all negative. In this paper, we first introduce a new multivariate ZAP distribution, based on a multivariate Poisson distribution, which allows a more flexible dependency structure between components: some of the correlation coefficients can be positive while others are negative. We then develop its important distributional properties, and provide efficient statistical inference methods for the multivariate ZAP model with or without covariates. Two real data examples in biomedicine are used to illustrate the proposed methods. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Breitholtz, Magnus; Nyholm, Jenny Rattfelt; Karlsson, Jenny; Andersson, Patrik L
2008-07-01
In aquatic ecosystems organisms are exposed to mixtures of pollutants. Still, risk assessment focuses almost exclusively on effect characterization of individual substances. The main objective of the current study was therefore to study mixture toxicity of a common group of industrial substances, i.e., brominated flame-retardants (BFRs), in the harpacticoid copepod Nitocra spinipes. Initially, 10 BFRs with high hydrophobicity but otherwise varying chemical characteristics were selected based on multivariate chemical characterization and tested individually for effects on mortality and development using a partial life cycle test (six days) where silica gel is used as a carrier of the hydrophobic substances. Based on these findings, six of the 10 BFRs were mixed in a series of NOEC proportions (which were set to 0.008, 0.04, 0.2, 1, and five times the NOEC concentrations for each individual BFR), loaded on silica gel and tested in a full life cycle test (26 days). Significantly increased mortality was observed in N. spinipes after six and 26 days exposure at a NOEC proportion that equals the NOEC LDR value (x1) for each BFR in the mixture (p=0.0015 and p=0.0105, respectively). At the NOECx5 proportion all animals were dead. None of the other NOEC proportions caused significant negative responses related to development and reproduction. This shows that low concentrations of individual substances can cause toxicity if exposed in mixtures, which highlights the need to consider mixture toxicity to a greater extent in regulatory work.
ERIC Educational Resources Information Center
Seco, Guillermo Vallejo; Izquierdo, Marcelino Cuesta; Garcia, M. Paula Fernandez; Diez, F. Javier Herrero
2006-01-01
The authors compare the operating characteristics of the bootstrap-F approach, a direct extension of the work of Berkovits, Hancock, and Nevitt, with Huynh's improved general approximation (IGA) and the Brown-Forsythe (BF) multivariate approach in a mixed repeated measures design when normality and multisample sphericity assumptions do not hold.…
Identification of offal adulteration in beef by laser induced breakdown spectroscopy (LIBS).
Velioglu, Hasan Murat; Sezer, Banu; Bilge, Gonca; Baytur, Süleyman Efe; Boyaci, Ismail Hakki
2018-04-01
Minced meat is the major ingredient in sausages, beef burgers, and similar products; and thus it is the main product subjected to adulteration with meat offal. Determination of this kind of meat adulteration is crucial due to religious, economic and ethical concerns. The aim of the present study is to discriminate beef meat and offal samples by using laser induced breakdown spectroscopy (LIBS). To this end, LIBS and multivariate data analysis were used to discriminate pure beef and offal samples qualitatively and to determine offal mixture adulteration quantitatively. In this analysis, meat samples were frozen and LIBS analyses were performed. The results indicate that by using principal component analysis (PCA), discrimination of pure offal and offal-mixture-adulterated beef samples can be achieved successfully. In addition, the adulteration ratio can be determined using the partial least squares (PLS) analysis method with a coefficient of determination (R2) of 0.947 and a limit of detection (LOD) of 3.8% for offal-mixture-adulterated beef samples. Copyright © 2017 Elsevier Ltd. All rights reserved.
Kuchlyan, Jagannath; Banik, Debasis; Roy, Arpita; Kundu, Niloy; Sarkar, Nilmoni
2014-12-04
In this article we have investigated intermolecular excited-state proton transfer (ESPT) of firefly's chromophore D-luciferin in DMSO-water binary mixtures using steady-state and time-resolved fluorescence spectroscopy. The unusual behavior of DMSO-water binary mixture as reported by Bagchi et al. (J. Phys. Chem. B 2010, 114, 12875-12882) was also found using D-luciferin as intermolecular ESPT probe. The binary mixture has given evidence of its anomalous nature at low mole fractions of DMSO (below XD = 0.4) in our systematic investigation. Upon excitation of neutral D-luciferin molecule, dual fluorescence emissions (protonated and deprotonated form) are observed in DMSO-water binary mixture. A clear isoemissive point in the time-resolved area normalized emission spectra further indicates two emissive species in the excited state of D-luciferin in DMSO-water binary mixture. DMSO-water binary mixtures of different compositions are fascinating hydrogen bonding systems. Therefore, we have observed unusual changes in the fluorescence emission intensity, fluorescence quantum yield, and fluorescence lifetime of more hydrogen bonding sensitive anionic form of D-luciferin in low DMSO content of DMSO-water binary mixture.
Potyrailo, Radislav A
2016-10-12
Modern gas monitoring scenarios for medical diagnostics, environmental surveillance, industrial safety, and other applications demand new sensing capabilities. This Review provides analysis of development of new generation of gas sensors based on the multivariable response principles. Design criteria of these individual sensors involve a sensing material with multiresponse mechanisms to different gases and a multivariable transducer with independent outputs to recognize these different gas responses. These new sensors quantify individual components in mixtures, reject interferences, and offer more stable response over sensor arrays. Such performance is attractive when selectivity advantages of classic gas chromatography, ion mobility, and mass spectrometry instruments are canceled by requirements for no consumables, low power, low cost, and unobtrusive form factors for Internet of Things, Industrial Internet, and other applications. This Review is concluded with a perspective for future needs in fundamental and applied aspects of gas sensing and with the 2025 roadmap for ubiquitous gas monitoring.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1975-01-01
A general iterative procedure is given for determining consistent maximum likelihood estimates of the parameters of mixtures of normal distributions; the procedure yields a local maximum of the log-likelihood function. Newton's method, a method of scoring, and modifications of these procedures are also discussed.
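The iterative procedure described above is, in modern terms, the Expectation-Maximization (EM) algorithm for normal mixtures. The following is a minimal illustrative sketch (not the authors' code) of a two-component univariate version, with synthetic data invented for the example:

```python
import math, random

def em_normal_mixture(data, iters=200):
    """EM for a two-component univariate normal mixture (illustrative sketch)."""
    # Crude initialization: extreme points as means, overall variance for both.
    mu1, mu2 = min(data), max(data)
    mean = sum(data) / len(data)
    var1 = var2 = max(1e-6, sum((x - mean) ** 2 for x in data) / len(data))
    pi = 0.5  # mixing proportion of component 1
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each point.
        resp = []
        for x in data:
            p1 = pi * math.exp(-(x - mu1) ** 2 / (2 * var1)) / math.sqrt(2 * math.pi * var1)
            p2 = (1 - pi) * math.exp(-(x - mu2) ** 2 / (2 * var2)) / math.sqrt(2 * math.pi * var2)
            resp.append(p1 / (p1 + p2))
        # M-step: update weight, means, and variances from responsibilities.
        n1 = sum(resp)
        n2 = len(data) - n1
        pi = n1 / len(data)
        mu1 = sum(r * x for r, x in zip(resp, data)) / n1
        mu2 = sum((1 - r) * x for r, x in zip(resp, data)) / n2
        var1 = max(1e-6, sum(r * (x - mu1) ** 2 for r, x in zip(resp, data)) / n1)
        var2 = max(1e-6, sum((1 - r) * (x - mu2) ** 2 for r, x in zip(resp, data)) / n2)
    return pi, (mu1, var1), (mu2, var2)

random.seed(0)
data = [random.gauss(0, 1) for _ in range(300)] + [random.gauss(5, 1) for _ in range(300)]
pi, c1, c2 = em_normal_mixture(data)
```

Each iteration provably does not decrease the log-likelihood, so the procedure climbs to a local maximum, consistent with the local-maximum property noted in the abstract.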
Mixture quantification using PLS in plastic scintillation measurements.
Bagán, H; Tarancón, A; Rauret, G; García, J F
2011-06-01
This article reports the capability of plastic scintillation (PS) combined with multivariate calibration (partial least squares; PLS) to detect and quantify alpha and beta emitters in mixtures. While several attempts with this purpose in mind have been made using liquid scintillation (LS), none had used PS, which has the great advantage of not producing mixed waste after the measurements are performed. Following this objective, ternary mixtures of alpha and beta emitters ((241)Am, (137)Cs and (90)Sr/(90)Y) have been quantified. Procedure optimisation evaluated the use of the net spectra or the sample spectra, the inclusion of spectra obtained at different values of the pulse shape analysis parameter, and the application of the PLS1 or PLS2 algorithms. The conclusions show that PS+PLS2 applied to the sample spectra, without any pulse shape discrimination, allows quantification of the activities with relative errors below 10% in most cases. This procedure not only allows quantification of mixtures but also reduces measurement time (no blanks are required), and it does not require detectors that include the pulse shape analysis parameter. Copyright © 2011 Elsevier Ltd. All rights reserved.
Mohamed, Ekram H; Lotfy, Hayam M; Hegazy, Maha A; Mowaka, Shereen
2017-05-25
Analysis of complex mixtures containing three or more components represents a challenge for analysts. New smart spectrophotometric methods with few limitations have recently evolved. This work presents a study of different novel and smart spectrophotometric techniques for the resolution of severely overlapping spectra, utilizing isosbestic points present in different absorption spectra, normalized spectra as a divisor, and dual wavelengths. A quaternary mixture of drotaverine (DRO), caffeine (CAF), paracetamol (PCT) and para-aminophenol (PAP) was taken as an example for application of the proposed techniques without any separation steps. The adopted techniques consist of successive and progressive steps manipulating zero-order, ratio, and/or derivative spectra. They include eight novel and simple methods, namely direct spectrophotometry after applying derivative transformation (DT) via multiplying by a decoding spectrum, spectrum subtraction (SS), advanced absorbance subtraction (AAS), advanced amplitude modulation (AAM), simultaneous derivative ratio (S1DD), advanced ratio difference (ARD), induced ratio difference (IRD) and finally double divisor-ratio difference-dual wavelength (DD-RD-DW). The proposed methods were assessed by analyzing synthetic mixtures of the studied drugs. They were also successfully applied to commercial pharmaceutical formulations without interference from other dosage-form additives. The methods were validated according to the ICH guidelines; accuracy, precision and repeatability were found to be within the acceptable limits. The proposed procedures are accurate, simple, reproducible and economic. They are also sensitive and selective, and could be used for routine analysis of most binary, ternary and quaternary mixtures, and even more complex ones.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mishra, Tapan; Das, B. P.; Pai, Ramesh V.
We present a scenario where a supersolid is induced in one of the components of a mixture of two species of bosonic atoms where there are no long-range interactions. We study a system of a normal and hard-core boson mixture with only the former possessing long-range interactions. We consider three cases: the first where the total density is commensurate and the other two where it is incommensurate to the lattice. By suitable choices of the densities of normal and hard-core bosons and the interaction strengths between them, we predict that the charge density wave and the supersolid orders can be induced in the hard-core species as a result of the competing interatomic interactions.
Louys, Julien; Meloro, Carlo; Elton, Sarah; Ditchfield, Peter; Bishop, Laura C
2015-01-01
We test the performance of two models that use mammalian communities to reconstruct multivariate palaeoenvironments. While both models exploit the correlation between mammal communities (defined in terms of functional groups) and arboreal heterogeneity, the first uses a multiple multivariate regression of community structure and arboreal heterogeneity, while the second uses a linear regression of the principal components of each ecospace. The success of these methods means the palaeoenvironment of a particular locality can be reconstructed in terms of the proportions of heavy, moderate, light, and absent tree canopy cover. The linear regression is less biased, and more precisely and accurately reconstructs heavy tree canopy cover than the multiple multivariate model. However, the multiple multivariate model performs better than the linear regression for all other canopy cover categories. Both models consistently perform better than randomly generated reconstructions. We apply both models to the palaeocommunity of the Upper Laetolil Beds, Tanzania. Our reconstructions indicate that there was very little heavy tree cover at this site (likely less than 10%), with the palaeo-landscape instead comprising a mixture of light and absent tree cover. These reconstructions help resolve the previous conflicting palaeoecological reconstructions made for this site. Copyright © 2014 Elsevier Ltd. All rights reserved.
Field applications of stand-off sensing using visible/NIR multivariate optical computing
NASA Astrophysics Data System (ADS)
Eastwood, DeLyle; Soyemi, Olusola O.; Karunamuni, Jeevanandra; Zhang, Lixia; Li, Hongli; Myrick, Michael L.
2001-02-01
A novel multivariate visible/NIR optical computing approach applicable to standoff sensing will be demonstrated with porphyrin mixtures as examples. The ultimate goal is to develop environmental or counter-terrorism sensors for chemicals such as organophosphorus (OP) pesticides or chemical warfare simulants in the near infrared spectral region. The mathematical operation that characterizes prediction of properties via regression from optical spectra is a calculation of inner products between the spectrum and the pre-determined regression vector. The result is scaled appropriately and offset to correspond to the basis from which the regression vector is derived. The process involves collecting spectroscopic data and synthesizing a multivariate vector using a pattern recognition method. Then, an interference coating is designed that reproduces the pattern of the multivariate vector in its transmission or reflection spectrum, and appropriate interference filters are fabricated. High and low refractive index materials such as Nb2O5 and SiO2 are excellent choices for the visible and near infrared regions. The proof of concept has now been established for this system in the visible and will later be extended to chemicals such as OP compounds in the near and mid-infrared.
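The prediction step described above, an inner product of the measured spectrum with a regression vector that is then scaled and offset, can be sketched in a few lines; the channel values below are toy numbers invented for illustration, not data from the study:

```python
def moc_predict(spectrum, regression_vector, scale, offset):
    """Predict an analyte property as a scaled, offset inner product of a
    measured spectrum with a pre-determined regression vector; this is the
    computation an interference filter performs optically in
    multivariate optical computing."""
    inner = sum(s * b for s, b in zip(spectrum, regression_vector))
    return scale * inner + offset

# Toy three-channel example (all numbers hypothetical).
spectrum = [0.2, 0.5, 0.3]
regression_vector = [1.0, -2.0, 0.5]
prediction = moc_predict(spectrum, regression_vector, scale=10.0, offset=1.0)
```

In the hardware realization, the regression vector is encoded in the transmission spectrum of the interference coating, so the dot product costs no computation at all.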
NASA Technical Reports Server (NTRS)
Miller, C. G., III; Wilder, S. E.
1974-01-01
Equilibrium thermodynamic and flow properties are presented in tabulated and graphical form for moving, standing, and reflected normal shock waves into helium-hydrogen mixtures representative of proposed outer planet atmospheres. The volumetric compositions of these mixtures are 0.35He-0.65H2, 0.20He-0.80H2, and 0.05He-0.95H2. Properties include pressure, temperature, density, enthalpy, speed of sound, entropy, molecular-weight ratio, isentropic exponent, velocity, and species mole fractions. Incident (moving) shock velocities are varied from 4 to 70 km/sec for a range of initial pressure of 5 N/sq m to 100 kN/sq m. The present results are applicable to shock-tube flows and to free-flight conditions for a blunt body at high velocities. A working chart illustrating idealized shock-tube performance with a 0.20He-0.80H2 test gas and heated helium driver gas is also presented.
Gharred, Tahar; Jebali, Jamel; Belgacem, Mariem; Mannai, Rabeb; Achour, Sami
2016-09-01
Combined pollution by trace metals and pharmaceuticals has become one of the most important problems in marine coastal areas because of its excessive toxicity to the organisms living there. This study aimed to assess the individual and mixture toxicity of Cu, Cd, and oxytetracycline (OTC), which frequently occur in contaminated marine areas, on the embryo-larval development of the sea urchin Paracentrotus lividus. Individual contamination of the spermatozoids for 1 h with increasing concentrations of Cd, Cu, and OTC decreased the fertility rate and increased larval anomalies in the order Cu > Cd > OTC. Moreover, the normal larva frequency and the length of spicules were more sensitive endpoints than the fertilization rate and normal gastrula frequency. The mixture toxicity, assessed by multiple experimental designs, showed clearly that concentrations of Cd, Cu, and OTC above 338 μg/L, 0.56 μg/L, and 0.83 mg/L, respectively, cause significant larval malformations.
Ponce-Robles, L; Oller, I; Agüera, A; Trinidad-Lozano, M J; Yuste, F J; Malato, S; Perez-Estrada, L A
2018-08-15
Cork boiling wastewater is a very complex mixture of naturally occurring compounds leached and partially oxidized during the boiling cycles. The effluent generated is recalcitrant and could cause a significant environmental impact. Moreover, if this untreated industrial wastewater enters a municipal wastewater treatment plant it could hamper or reduce the efficiency of most activated sludge degradation processes. Despite the efforts to treat cork boiling wastewater for reuse, it is still not well known how safe these compounds (original compounds and oxidation by-products) are. The purpose of this work was to apply an HPLC-high resolution mass spectrometry method and subsequent non-target screening using a multivariate analysis method (PCA) to explore relationships between samples (treatments) and spectral features (masses or compounds) that could indicate changes in formation, degradation or polarity during coagulation/flocculation (C/F) and photo-Fenton (PhF) treatment. Although most of the signal intensities were reduced along the treatment line, 16 and 4 new peaks were detected after the C/F and PhF processes, respectively. This non-target approach proved to be an effective strategy to explore, classify and detect transformation products during the treatment of an unknown complex mixture. Copyright © 2018 Elsevier B.V. All rights reserved.
Dönmez, Ozlem Aksu; Aşçi, Bürge; Bozdoğan, Abdürrezzak; Sungur, Sidika
2011-02-15
A simple and rapid analytical procedure was proposed for the determination of chromatographic peaks by means of partial least squares (PLS) multivariate calibration of high-performance liquid chromatography with diode array detection (HPLC-DAD). The method is exemplified with the analysis of quaternary mixtures of potassium guaiacolsulfonate (PG), guaifenesin (GU), diphenhydramine HCI (DP) and carbetapentane citrate (CP) in syrup preparations. In this method, the peak area does not need to be measured directly and predictions are more accurate. Though the chromatographic and spectral peaks of the analytes were heavily overlapped and interferents coeluted with the compounds studied, good recoveries of the analytes could be obtained with HPLC-DAD coupled with PLS calibration. The method was tested by analyzing synthetic mixtures of PG, GU, DP and CP. A classical HPLC method was used for comparison. The proposed methods were applied to syrup samples containing the four drugs and the obtained results were statistically compared with each other. Finally, the main advantages of the HPLC-PLS method over the classical HPLC method are the use of a simple mobile phase, shorter analysis time, and no need for an internal standard or gradient elution. Copyright © 2010 Elsevier B.V. All rights reserved.
Wojcik, Pawel Jerzy; Pereira, Luís; Martins, Rodrigo; Fortunato, Elvira
2014-01-13
An efficient mathematical strategy in the field of solution processed electrochromic (EC) films is outlined as a combination of experimental work, modeling, and information extraction from massive computational data via statistical software. Design of Experiment (DOE) was used for statistical multivariate analysis and prediction of mixtures through a multiple regression model, as well as the optimization of a five-component sol-gel precursor subjected to complex constraints. This approach significantly reduces the number of experiments to be realized, from 162 in the full factorial (L=3) and 72 in the extreme vertices (D=2) approach down to only 30 runs, while still maintaining a high accuracy of the analysis. By carrying out a finite number of experiments, the empirical modeling in this study shows reasonably good prediction ability in terms of the overall EC performance. An optimized ink formulation was employed in a prototype of a passive EC matrix fabricated in order to test and trial this optically active material system together with a solid-state electrolyte for the prospective application in EC displays. Coupling of DOE with chromogenic material formulation shows the potential to maximize the capabilities of these systems and ensures increased productivity in many potential solution-processed electrochemical applications.
ACTIVE SUPPRESSION OF IMMUNOGLOBULIN ALLOTYPE SYNTHESIS
Herzenberg, Leonore A.; Chan, Eva L.; Ravitch, Myrnice M.; Riblet, Roy J.; Herzenberg, Leonard A.
1973-01-01
Thymus-derived cells (T cells) that actively suppress production of IgG2a immunoglobulins carrying the Ig-1b allotype have been found in adult (SJL x BALB/c)F1 mice exposed to anti-Ig-1b early in life. The suppression is specific for Ig-1b. The allelic product, Ig-1a, is unaffected. Spleen, lymph node, bone marrow, or thymus cells from suppressed mice suppress production of Ig-1b by syngeneic spleen cells from normal F1 mice. When a mixture of suppressed and normal cells is transferred into lethally irradiated BALB/c mice, there is a short burst of Ig-1b production after which Ig-1b levels in the recipient fall rapidly below detectability. Pretreatment of the cells from the suppressed mice with antiserum specific for T cells (anti-Thy-1b) plus complement before mixture destroys the suppressing activity. Similar results with suppressor cells were obtained in vitro using Mishell-Dutton cultures. Mixture of spleen cells from suppressed animals with sheep erythrocyte (SRBC)-primed syngeneic normal spleen before culture suppresses Ig-1b plaque-forming cell (PFC) formation while leaving Ig-1a PFC unaffected. Treatment of the suppressed spleen with anti-Thy-1b before transfer removes the suppressing activity. PMID:4541122
Multivariate multiscale entropy of financial markets
NASA Astrophysics Data System (ADS)
Lu, Yunfan; Wang, Jun
2017-11-01
In the current process of quantifying the dynamical properties of complex phenomena in the financial market system, multivariate financial time series are of wide concern. In this work, considering the shortcomings and limitations of univariate multiscale entropy in analyzing multivariate time series, the multivariate multiscale sample entropy (MMSE), which can evaluate the complexity in multiple data channels over different timescales, is applied to quantify the complexity of financial markets. Its effectiveness and advantages have been demonstrated in numerical simulations with two well-known synthetic noise signals. For the first time, the complexity of four generated trivariate return series for each stock trading hour in China stock markets is quantified thanks to the interdisciplinary application of this method. We find that the complexity of the trivariate return series in each hour shows a significant decreasing trend as the stock trading time progresses. Further, the shuffled multivariate return series and the absolute multivariate return series are also analyzed. As another new attempt, the complexity of global stock markets (Asia, Europe and America) is quantified by analyzing the multivariate returns from them. Finally, we utilize the multivariate multiscale entropy to assess the relative complexity of normalized multivariate return volatility series with different degrees.
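For orientation, here is a minimal sketch of the univariate building blocks behind multiscale entropy: coarse-graining plus sample entropy. The full multivariate MMSE extends the templates across data channels and is not reproduced here; this generic sketch (not the authors' implementation) only shows the core quantities:

```python
import math, random

def sample_entropy(series, m=2, r=0.2):
    """SampEn(m, r) = -ln(A/B), where B counts pairs of length-m templates
    within Chebyshev distance r and A counts the same for length m+1.
    Lower values indicate a more regular (less complex) series. The
    multivariate version extends the templates across channels."""
    n = len(series)
    def matches(mm):
        c = 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm):
                if max(abs(series[i + k] - series[j + k]) for k in range(mm)) < r:
                    c += 1
        return c
    B, A = matches(m), matches(m + 1)
    return -math.log(A / B) if A > 0 and B > 0 else float("inf")

def coarse_grain(series, scale):
    """Non-overlapping window averages used to build the multiscale profile."""
    return [sum(series[i:i + scale]) / scale
            for i in range(0, len(series) - scale + 1, scale)]

random.seed(3)
regular = [0.0, 1.0] * 100                     # perfectly predictable signal
noisy = [random.random() for _ in range(200)]  # unpredictable signal
se_regular, se_noisy = sample_entropy(regular), sample_entropy(noisy)
```

A regular signal yields a much lower entropy than noise, which is the contrast the method exploits when comparing return series across trading hours and timescales.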
Mujica Ascencio, Saul; Choe, ChunSik; Meinke, Martina C; Müller, Rainer H; Maksimov, George V; Wigger-Alberti, Walter; Lademann, Juergen; Darvin, Maxim E
2016-07-01
Propylene glycol is one of the known substances added to cosmetic formulations as a penetration enhancer. Recently, nanocrystals have also been employed to increase the skin penetration of active components. Caffeine is a component with many applications, and its penetration into the epidermis is controversially discussed in the literature. In the present study, the penetration ability of two components, caffeine nanocrystals and propylene glycol, applied topically on porcine ear skin in the form of a gel, was investigated ex vivo using two confocal Raman microscopes operated at different excitation wavelengths (785 nm and 633 nm). Several depth profiles were acquired in the fingerprint region, and different spectral ranges, i.e., 526-600 cm(-1) and 810-880 cm(-1), were chosen for independent analysis of caffeine and propylene glycol penetration into the skin, respectively. Multivariate statistical methods such as principal component analysis (PCA) and linear discriminant analysis (LDA) combined with Student's t-test were employed to calculate the maximum penetration depths of each substance (caffeine and propylene glycol). The results show that propylene glycol penetrates significantly deeper than caffeine (20.7-22.0 μm versus 12.3-13.0 μm) without any penetration enhancement effect on caffeine. The results confirm that different substances, even if applied onto the skin as a mixture, can penetrate differently. The penetration depths of caffeine and propylene glycol obtained using the two different confocal Raman microscopes are comparable, showing that both types of microscopes are well suited for such investigations and that multivariate statistical PCA-LDA methods combined with Student's t-test are very useful for analyzing the penetration of different substances into the skin. Copyright © 2016 Elsevier B.V. All rights reserved.
Karabatsos, George
2017-02-01
Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. 
This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected functionals and values of covariates. The software is illustrated through the BNP regression analysis of real data.
Mixture modelling for cluster analysis.
McLachlan, G J; Chang, S U
2004-10-01
Cluster analysis via a finite mixture model approach is considered. With this approach to clustering, the data can be partitioned into a specified number of clusters g by first fitting a mixture model with g components. An outright clustering of the data is then obtained by assigning an observation to the component to which it has the highest estimated posterior probability of belonging; that is, the ith cluster consists of those observations assigned to the ith component (i = 1,..., g). The focus is on the use of mixtures of normal components for the cluster analysis of data that can be regarded as being continuous. But attention is also given to the case of mixed data, where the observations consist of both continuous and discrete variables.
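The procedure just described, fitting a g-component normal mixture and then assigning each observation to the component with the highest estimated posterior probability, can be sketched with scikit-learn; the two well-separated synthetic clusters below are invented for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Two synthetic clusters of continuous bivariate observations.
X = np.vstack([rng.normal([0.0, 0.0], 0.5, (100, 2)),
               rng.normal([4.0, 4.0], 0.5, (100, 2))])

# Step 1: fit a mixture model with g = 2 normal components.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

# Step 2: outright clustering via the maximum estimated posterior
# probability of component membership for each observation.
posterior = gmm.predict_proba(X)
labels = posterior.argmax(axis=1)
```

The `argmax` over posteriors is exactly what `gmm.predict(X)` computes internally, so the ith cluster is the set of observations assigned to the ith component.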
Li, Xue; Zhao, Shuying; Zhang, Shuxiang; Kim, Dong Ha; Knoll, Wolfgang
2007-06-19
Inorganic compound HAuCl4, which can form a complex with pyridine, is introduced into a poly(styrene-block-2-vinylpyridine) (PS-b-P2VP) block copolymer/poly(methyl methacrylate) (PMMA) homopolymer mixture. The orientation of the cylindrical microdomains formed by the P2VP block, PMMA, and HAuCl4 normal to the substrate surface can be generated via cooperative self-assembly of the mixture. Selective removal of the homopolymer can lead to porous nanostructures containing metal components in P2VP domains, which have a novel photoluminescence property.
Flame Speeds and Energy Considerations for Explosions in a Spherical Bomb
NASA Technical Reports Server (NTRS)
Fiock, Ernest F; Marvin, Charles F , Jr; Caldwell, Frank R; Roeder, Carl H
1940-01-01
Simultaneous measurements were made of the speed of flame and the rise in pressure during explosions of mixtures of carbon monoxide, normal heptane, iso-octane, and benzene in a 10-inch spherical bomb with central ignition. From these records, fundamental properties of the explosive mixtures, which are independent of the apparatus, were computed. The transformation velocity, or speed at which flame advances into and transforms the explosive mixture, increases with both the temperature and the pressure of the unburned gas. The rise in pressure was correlated with the mass of charge inflamed to show the course of the energy developed.
NASA Technical Reports Server (NTRS)
Crutcher, H. L.; Falls, L. W.
1976-01-01
Sets of experimentally determined or routinely observed data provide information about the past, present and, hopefully, future sets of similarly produced data. An infinite set of statistical models exists which may be used to describe the data sets. The normal distribution is one model. If it serves at all, it serves well. If a data set, or a transformation of the set, representative of a larger population can be described by the normal distribution, then valid statistical inferences can be drawn. There are several tests which may be applied to a data set to determine whether the univariate normal model adequately describes the set. The chi-square test based on Pearson's work in the late nineteenth and early twentieth centuries is often used. Like all tests, it has some weaknesses which are discussed in elementary texts. Extension of the chi-square test to the multivariate normal model is provided. Tables and graphs permit easier application of the test in the higher dimensions. Several examples, using recorded data, illustrate the procedures. Tests of maximum absolute differences, mean sum of squares of residuals, runs and changes of sign are included in these tests. Dimensions one through five with selected sample sizes 11 to 101 are used to illustrate the statistical tests developed.
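A minimal sketch of the univariate chi-square normality test discussed above, using equal-probability bins and mean/variance estimated from the sample, is given below. This is a generic construction for illustration, not the authors' tables or procedures; the bin count and sample sizes are arbitrary choices:

```python
import math, random

def chi_square_normality(data, bins=10):
    """Pearson chi-square goodness-of-fit test of data against a normal
    model whose mean and variance are estimated from the sample. Uses
    equal-probability bins; returns the statistic and its degrees of
    freedom (bins - 1 - 2 estimated parameters)."""
    n = len(data)
    mu = sum(data) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in data) / (n - 1))
    norm_cdf = lambda x: 0.5 * (1 + math.erf((x - mu) / (sd * math.sqrt(2))))
    def inv_cdf(p):
        # Invert the fitted normal CDF by bisection over +/- 10 sd.
        lo, hi = mu - 10 * sd, mu + 10 * sd
        for _ in range(80):
            mid = (lo + hi) / 2
            if norm_cdf(mid) < p:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    edges = [inv_cdf(i / bins) for i in range(1, bins)]
    counts = [0] * bins
    for x in data:
        counts[sum(x > e for e in edges)] += 1
    expected = n / bins  # equal-probability bins: same expected count each
    stat = sum((c - expected) ** 2 / expected for c in counts)
    return stat, bins - 3

random.seed(2)
stat, df = chi_square_normality([random.gauss(10, 2) for _ in range(500)])
# For df = 7, the 95% critical value of the chi-square distribution is about 14.07.
```

Normally distributed data should typically produce a statistic well below the critical value, while clearly non-normal data (e.g. uniform) should exceed it.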
NASA Astrophysics Data System (ADS)
Astuti, Ani Budi; Iriawan, Nur; Irhamah, Kuswanto, Heri
2017-12-01
Bayesian mixture modeling requires, as one of its stages, identification of the most appropriate number of mixture components, so that the resulting mixture model fits the data through a data-driven concept. Reversible Jump Markov Chain Monte Carlo (RJMCMC) is a combination of the reversible jump (RJ) concept and the Markov Chain Monte Carlo (MCMC) concept, used by some researchers to solve the problem of identifying the number of mixture components when it is not known with certainty. In its application, RJMCMC uses the birth/death and split-merge concepts with six types of moves: w updating, θ updating, z updating, hyperparameter β updating, split-merge of components, and birth/death of empty components. The development of the RJMCMC algorithm needs to be done according to the observed case. The purpose of this study is to assess the performance of the developed RJMCMC algorithm in identifying the number of mixture components, when that number is not known with certainty, in Bayesian mixture modeling of microarray data from Indonesia. The results of this study show that the developed RJMCMC algorithm is able to properly identify the number of mixture components in the Bayesian normal mixture model for the Indonesian microarray data, where the number of components in the mixture is not known in advance.
Borazan, Hale; Sahin, Osman; Kececioglu, Ahmet; Uluer, M.Selcuk; Et, Tayfun; Otelcioglu, Seref
2012-01-01
Background: Pain on propofol injection is considered a common and difficult-to-eliminate problem in children. In this study, we aimed to compare the efficacy of pretreatment with tramadol 1 mg.kg-1 and a propofol-lidocaine 20 mg mixture for prevention of propofol-induced pain in children. Methods: One hundred and twenty ASA I-II patients undergoing orthopedic and otolaryngological surgery were included in this study and were divided into three groups using random number tables. Group C (n=39) received normal saline placebo and Group T (n=40) received 1 mg.kg-1 tramadol 60 sec before propofol (180 mg 1% propofol with 2 ml normal saline), whereas Group L (n=40) received normal saline placebo before the propofol-lidocaine mixture (180 mg 1% propofol with 2 ml 1% lidocaine). One patient in Group C was withdrawn from the study because of difficulty in inserting an iv cannula; thus, one hundred and nineteen patients were analyzed. After the calculated dose of propofol was given, a blinded observer assessed the pain with a four-point behavioral scale. Results: There were no significant differences in patient characteristics or intraoperative variables (p>0.05) among the groups, except for intraoperative fentanyl consumption and analgesic requirement one hr after surgery (p<0.05). Both tramadol 1 mg.kg-1 and the lidocaine 20 mg mixture significantly reduced propofol pain when compared with the control group. Moderate and severe pain were more frequent in the control group (p<0.05). The incidence of overall pain was 79.4% in the control group, 35% in the tramadol group and 25% in the lidocaine group (p<0.001). Conclusions: Pretreatment with tramadol 60 sec before propofol injection and the propofol-lidocaine mixture both significantly reduced propofol injection pain when compared to placebo in children. PMID:22927775
Toxicity pathways have been defined as normal cellular pathways that, when sufficiently perturbed as a consequence of chemical exposure, lead to an adverse outcome. If an exposure alters one or more normal biological pathways to an extent that leads to an adverse toxicity outcome...
In vitro screening for population variability in toxicity of pesticide-containing mixtures
Abdo, Nour; Wetmore, Barbara A.; Chappell, Grace A.; Shea, Damian; Wright, Fred A.; Rusyna, Ivan
2016-01-01
Population-based human in vitro models offer exceptional opportunities for evaluating the potential hazard and mode of action of chemicals, as well as variability in responses to toxic insults among individuals. This study was designed to test the hypothesis that comparative population genomics with efficient in vitro experimental design can be used for evaluation of the potential for hazard, mode of action, and the extent of population variability in responses to chemical mixtures. We selected 146 lymphoblast cell lines from 4 ancestrally and geographically diverse human populations based on the availability of genome sequence and basal RNA-seq data. Cells were exposed to two pesticide mixtures – an environmental surface water sample comprised primarily of organochlorine pesticides and a laboratory-prepared mixture of 36 currently used pesticides – in concentration response and evaluated for cytotoxicity. On average, the two mixtures exhibited a similar range of in vitro cytotoxicity and showed considerable inter-individual variability across screened cell lines. However, when in vitro-to-in vivo extrapolation (IVIVE) coupled with reverse dosimetry was employed to convert the in vitro cytotoxic concentrations to oral equivalent doses and compared to the upper bound of predicted human exposure, we found that a nominally more cytotoxic chlorinated pesticide mixture is expected to have greater margin of safety (more than 5 orders of magnitude) as compared to the current use pesticide mixture (less than 2 orders of magnitude) due primarily to differences in exposure predictions. Multivariate genome-wide association mapping revealed an association between the toxicity of the current use pesticide mixture and a polymorphism in rs1947825 in C17orf54.
We conclude that a combination of in vitro human population-based cytotoxicity screening followed by dosimetric adjustment and comparative population genomics analyses enables quantitative evaluation of human health hazard from complex environmental mixtures. Additionally, such an approach yields testable hypotheses regarding potential toxicity mechanisms. PMID:26386728
Robust nonlinear system identification: Bayesian mixture of experts using the t-distribution
NASA Astrophysics Data System (ADS)
Baldacchino, Tara; Worden, Keith; Rowson, Jennifer
2017-02-01
A novel variational Bayesian mixture of experts model for robust regression of bifurcating and piecewise continuous processes is introduced. The mixture of experts model is a powerful model which probabilistically splits the input space, allowing different models to operate in the separate regions. However, current methods have no fail-safe against outliers. In this paper, a robust mixture of experts model is proposed which consists of Student-t mixture models at the gates and Student-t distributed experts, trained via Bayesian inference. The Student-t distribution has heavier tails than the Gaussian distribution, and so it is more robust to outliers, noise and non-normality in the data. Using both simulated data and real data obtained from the Z24 bridge, this robust mixture of experts performs better than its Gaussian counterpart when outliers are present. In particular, it provides robustness to outliers in two forms: unbiased parameter regression models, and robustness to overfitting/complex models.
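The robustness argument rests on the Student-t density decaying polynomially rather than exponentially in its tails. The toy comparison below (not from the paper) makes the tail difference concrete using only the standard density formulas:

```python
import math

def normal_pdf(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def student_t_pdf(x, nu):
    """Standard Student-t density with nu degrees of freedom."""
    c = math.gamma((nu + 1) / 2) / (math.sqrt(nu * math.pi) * math.gamma(nu / 2))
    return c * (1 + x * x / nu) ** (-(nu + 1) / 2)

# At |x| = 4 (a typical outlier), the t(3) density is dozens of times larger
# than the normal density, so an outlier carries far less influence in a
# t-based likelihood than in a Gaussian one.
ratio = student_t_pdf(4.0, 3) / normal_pdf(4.0)
```

Because the t-likelihood assigns outliers non-negligible probability, the fitted gates and experts need not distort their parameters to accommodate them, which is the mechanism behind the unbiased regression estimates reported in the abstract.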
Hilscher, Moira; Enders, Felicity B; Carey, Elizabeth J; Lindor, Keith D; Tabibian, James H
2016-01-01
Recent studies suggest that serum alkaline phosphatase may represent a prognostic biomarker in patients with primary sclerosing cholangitis. However, this association remains poorly understood. Therefore, the aim of this study was to investigate the prognostic significance and clinical correlates of alkaline phosphatase normalization in primary sclerosing cholangitis. This was a retrospective cohort study of patients with a new diagnosis of primary sclerosing cholangitis made at an academic medical center. The primary endpoint was time to hepatobiliary neoplasia, liver transplantation, or liver-related death. Secondary endpoints included occurrence of and time to alkaline phosphatase normalization. Patients who did and did not achieve normalization were compared with respect to clinical characteristics and endpoint-free survival, and the association between normalization and the primary endpoint was assessed with univariate and multivariate Cox proportional-hazards analyses. Eighty-six patients were included in the study, with a total of 755 patient-years of follow-up. Thirty-eight patients (44%) experienced alkaline phosphatase normalization within 12 months of diagnosis. Alkaline phosphatase normalization was associated with longer primary endpoint-free survival (p = 0.0032) and decreased risk of requiring liver transplantation (p = 0.033). Persistent normalization was associated with even fewer adverse endpoints as well as longer survival. In multivariate analyses, alkaline phosphatase normalization (adjusted hazard ratio 0.21, p = 0.012) and baseline bilirubin (adjusted hazard ratio 4.87, p = 0.029) were the only significant predictors of primary endpoint-free survival. Alkaline phosphatase normalization, particularly if persistent, represents a robust biomarker of improved long-term survival and decreased risk of requiring liver transplantation in patients with primary sclerosing cholangitis.
Comparison of Two Procedures for Analyzing Small Sets of Repeated Measures Data
ERIC Educational Resources Information Center
Vallejo, Guillermo; Livacic-Rojas, Pablo
2005-01-01
This article compares two methods for analyzing small sets of repeated measures data under normal and non-normal heteroscedastic conditions: a mixed model approach with the Kenward-Roger correction and a multivariate extension of the modified Brown-Forsythe (BF) test. These procedures differ in their assumptions about the covariance structure of…
Effect of the addition of rocuronium to 2% lignocaine in peribulbar block for cataract surgery.
Patil, Vishalakshi; Farooqy, Allauddin; Chaluvadi, Balaraju Thayappa; Rajashekhar, Vinayak; Malshetty, Ashwini
2017-01-01
Peribulbar anesthesia is associated with delayed orbital akinesia compared with retrobulbar anesthesia. We designed this study to test the hypothesis that rocuronium added to a mixture of local anesthetics (LAs) could improve the speed of onset of akinesia in peribulbar block (PB). This study examined the effects of adding rocuronium 5 mg to 2% lignocaine with adrenaline on orbital and eyelid akinesia in patients undergoing cataract surgery. In a prospective, randomized, double-blind study, 100 patients were equally randomized to receive a mixture of 0.5 ml normal saline, 6 ml lidocaine 2% with adrenaline and hyaluronidase 50 IU/ml (Group I), or a mixture of rocuronium 0.5 ml (5 mg), 6 ml lidocaine 2% with adrenaline and hyaluronidase 50 IU/ml (Group II). Orbital akinesia was assessed on a 0-8 score (0 = no movement, 8 = normal) at 2-min intervals for 10 min. Time to adequate anesthesia was also recorded. Results are presented as mean ± standard deviation. The rocuronium group demonstrated significantly better akinesia scores than the control group at 2-min intervals post-PB. No significant complications were recorded. Rocuronium added to a mixture of LAs improved the quality of akinesia in PB and reduced the need for supplementary injections. The addition of rocuronium 5 mg to a mixture of lidocaine 2% with adrenaline and hyaluronidase 50 IU/ml shortened the onset time of peribulbar anesthesia in patients undergoing cataract surgery without causing adverse effects.
Divya, O; Mishra, Ashok K
2007-05-29
Quantitative determination of the kerosene fraction present in diesel has been carried out based on excitation-emission matrix fluorescence (EEMF) along with parallel factor analysis (PARAFAC) and N-way partial least squares regression (N-PLS). EEMF is a simple, sensitive and nondestructive method suitable for the analysis of multifluorophoric mixtures. Calibration models consisting of varying compositions of diesel and kerosene were constructed and validated using the leave-one-out cross-validation method. The accuracy of the models was evaluated through the root mean square error of prediction (RMSEP) for the PARAFAC, N-PLS and unfold-PLS methods. N-PLS was found to be a better method than PARAFAC and unfold-PLS because of its low RMSEP values.
Negative Binomial Process Count and Mixture Modeling.
Zhou, Mingyuan; Carin, Lawrence
2015-02-01
The seemingly disjoint problems of count and mixture modeling are united under the negative binomial (NB) process. A gamma process is employed to model the rate measure of a Poisson process, whose normalization provides a random probability measure for mixture modeling and whose marginalization leads to an NB process for count modeling. A draw from the NB process consists of a Poisson distributed finite number of distinct atoms, each of which is associated with a logarithmic distributed number of data samples. We reveal relationships between various count- and mixture-modeling distributions and construct a Poisson-logarithmic bivariate distribution that connects the NB and Chinese restaurant table distributions. Fundamental properties of the models are developed, and we derive efficient Bayesian inference. It is shown that with augmentation and normalization, the NB process and gamma-NB process can be reduced to the Dirichlet process and hierarchical Dirichlet process, respectively. These relationships highlight theoretical, structural, and computational advantages of the NB process. A variety of NB processes, including the beta-geometric, beta-NB, marked-beta-NB, marked-gamma-NB and zero-inflated-NB processes, with distinct sharing mechanisms, are also constructed. These models are applied to topic modeling, with connections made to existing algorithms under Poisson factor analysis. Example results show the importance of inferring both the NB dispersion and probability parameters.
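The gamma-Poisson construction underlying the NB process can be checked directly: drawing a Poisson rate from a gamma distribution and then a count from that Poisson yields negative binomial counts marginally. A minimal sketch; the (r, p) parameterization below, with mean rp/(1-p), is an assumption of this illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
r, p = 4.0, 0.3          # NB dispersion r and probability p (assumed parameterization)
n = 200_000

# Gamma-Poisson construction: lambda ~ Gamma(r, scale=p/(1-p)), k | lambda ~ Poisson(lambda)
lam = rng.gamma(shape=r, scale=p / (1 - p), size=n)
counts = rng.poisson(lam)

# NB(r, p) in this parameterization has mean r*p/(1-p) and variance r*p/(1-p)**2
mean_nb = r * p / (1 - p)
var_nb = r * p / (1 - p) ** 2
print(counts.mean(), mean_nb, counts.var(), var_nb)
```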
Weibull mixture regression for marginal inference in zero-heavy continuous outcomes.
Gebregziabher, Mulugeta; Voronca, Delia; Teklehaimanot, Abeba; Santa Ana, Elizabeth J
2017-06-01
Continuous outcomes with a preponderance of zero values are ubiquitous in data that arise from biomedical studies, for example studies of addictive disorders. This is known to lead to violation of standard assumptions in parametric inference and enhances the risk of misleading conclusions unless managed properly. Two-part models are commonly used to deal with this problem. However, standard two-part models have limitations with respect to obtaining parameter estimates that have a marginal interpretation of covariate effects, which is important in many biomedical applications. Recently, marginalized two-part models have been proposed, but their development is limited to log-normal and log-skew-normal distributions. Thus, in this paper, we propose a finite mixture approach, with Weibull mixture regression as a special case, to deal with the problem. We use an extensive simulation study to assess the performance of the proposed model in finite samples and to make comparisons with other families of models via statistical information and mean squared error criteria. We demonstrate its application on real data from a randomized controlled trial of addictive disorders. Our results show that a two-component Weibull mixture model is preferred for modeling zero-heavy continuous data when the non-zero part is simulated from a Weibull or a similar distribution such as the Gamma or truncated Gaussian.
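A two-part fit of the kind discussed above can be sketched in a few lines: a Bernoulli model for the structural zeros plus a Weibull fit to the positive part, combined into a marginal mean. A sketch under assumed simulation parameters, using SciPy's Weibull MLE; this is not the paper's marginalized estimator, only the basic two-part decomposition:

```python
import numpy as np
from math import gamma
from scipy import stats

rng = np.random.default_rng(4)
n = 50_000
pi_zero = 0.4                       # assumed probability of a structural zero
shape, scale = 1.5, 2.0             # assumed Weibull parameters for the positive part
y = rng.weibull(shape, n) * scale
y[rng.random(n) < pi_zero] = 0.0    # zero-heavy continuous outcome

# Two-part fit: empirical zero proportion, Weibull MLE on the positive part.
p0_hat = np.mean(y == 0)
c_hat, _, s_hat = stats.weibull_min.fit(y[y > 0], floc=0)

# The marginal (overall) mean combines both parts: (1 - p0) * E[Weibull].
marg_mean = (1 - p0_hat) * s_hat * gamma(1 + 1 / c_hat)
print(p0_hat, c_hat, s_hat, marg_mean, y.mean())
```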
Liu, Huang; Pan, Yong; Liu, Bei; Sun, Changyu; Guo, Ping; Gao, Xueteng; Yang, Lanying; Ma, Qinglan; Chen, Guangjin
2016-01-01
Separation of low-boiling gas mixtures is a widespread concern in the process industries. At present, their separation relies heavily on energy-intensive cryogenic processes. Here, we report a pseudo-absorption process for separating low-boiling gas mixtures near normal temperature. In this process, absorption, membrane permeation, and adsorption are integrated by suspending a suitable porous ZIF material in a suitable solvent and forming a selectively permeable liquid membrane around the ZIF particles. Green solvents such as water and glycol were used to form the ZIF-8 slurry and tune the permeability of the liquid membrane surrounding the ZIF-8 particles. We found that glycol molecules form a tighter membrane while water molecules form a looser membrane because of the hydrophobicity of ZIF-8. When mixed solvents composed of glycol and water are used, the permeability of the liquid membrane becomes tunable. It is shown that ZIF-8/water slurry always manifests remarkably higher separation selectivity than solid ZIF-8, and that it can be tuned to further enhance the capture of light hydrocarbons by adding a suitable quantity of glycol to the water. Because of its lower viscosity and higher sorption/desorption rate, the tunable ZIF-8/water-glycol slurry could readily be used as a liquid absorbent to separate different kinds of low-boiling gas mixtures in a multistage separation process in one traditional absorption tower, especially for the capture of light hydrocarbons. PMID:26892255
Razus, D; Brinzea, V; Mitu, M; Movileanu, C; Oancea, D
2011-06-15
The maximum rates of pressure rise during closed vessel explosions of propane-air mixtures are reported for systems with various initial concentrations, pressures and temperatures ([C3H8] = 2.50-6.20 vol.%, p0 = 0.3-1.3 bar, T0 = 298-423 K). Experiments were performed in a spherical vessel (Φ = 10 cm) with central ignition. The deflagration (severity) index KG, calculated from experimental values of the maximum rate of pressure rise, is examined against the adiabatic deflagration index, KG,ad, computed from normal burning velocities and peak explosion pressures. At constant temperature and fuel/oxygen ratio, both the maximum rates of pressure rise and the deflagration indices are linear functions of total initial pressure, as reported for other fuel-air mixtures. At constant initial pressure and composition, the maximum rates of pressure rise and deflagration indices are slightly influenced by the initial temperature; some influence of the initial temperature on maximum rates of pressure rise is observed only for propane-air mixtures far from stoichiometric composition. The differentiated temperature influence on the normal burning velocities and the peak explosion pressures might explain this behaviour. Copyright © 2011 Elsevier B.V. All rights reserved.
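The deflagration index is obtained from the maximum rate of pressure rise via the cubic law, KG = (dp/dt)max * V^(1/3). A minimal sketch with a synthetic sigmoid pressure trace; the trace and its parameters are illustrative, not the paper's data:

```python
import numpy as np

def deflagration_index(t, p, volume_m3):
    """Cubic law: K_G = (dp/dt)_max * V**(1/3), scaling closed-vessel
    explosion severity to vessel volume. t in s, p in bar, V in m^3."""
    dpdt_max = np.max(np.gradient(p, t))          # bar/s
    return dpdt_max * volume_m3 ** (1.0 / 3.0)    # bar·m/s

# Illustrative trace for a 10 cm diameter sphere: sigmoid rise from 1 to 8 bar.
V = (4.0 / 3.0) * np.pi * 0.05 ** 3               # m^3
t = np.linspace(0.0, 0.1, 1001)                   # s
p = 1.0 + 7.0 / (1.0 + np.exp(-(t - 0.05) / 0.005))

kg = deflagration_index(t, p, V)
print(f"K_G = {kg:.1f} bar·m/s")
```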
Drunk driving detection based on classification of multivariate time series.
Li, Zhenlong; Jin, Xue; Zhao, Xiaohua
2015-09-01
This paper addresses the problem of detecting drunk driving based on classification of multivariate time series. First, driving performance measures were collected from a test in a driving simulator located in the Traffic Research Center, Beijing University of Technology. Lateral position and steering angle were used to detect drunk driving. Second, multivariate time series analysis was performed to extract the features. A piecewise linear representation was used to represent the multivariate time series, and a bottom-up algorithm was then employed to segment them. The slope and time interval of each segment were extracted as the features for classification. Third, a support vector machine classifier was used to classify the driver's state into two classes (normal or drunk) according to the extracted features. The proposed approach achieved an accuracy of 80.0%. Drunk driving detection based on the analysis of multivariate time series is thus feasible and effective. Copyright © 2015 Elsevier Ltd and National Safety Council. All rights reserved.
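The bottom-up segmentation step can be sketched as follows: start from fine-grained segments and repeatedly merge the adjacent pair whose merged least-squares line is cheapest, stopping once every remaining merge would exceed an error budget. A minimal sketch; the merge-cost definition and threshold are assumptions of this illustration, not the paper's settings:

```python
import numpy as np

def fit_cost(x, y):
    """Sum of squared residuals of a least-squares line through (x, y)."""
    coeffs = np.polyfit(x, y, 1)
    return float(np.sum((np.polyval(coeffs, x) - y) ** 2))

def bottom_up_segments(y, max_error):
    """Bottom-up piecewise linear representation: begin with 2-point segments,
    repeatedly merge the adjacent pair with the cheapest merged linear fit."""
    x = np.arange(len(y), dtype=float)
    segs = [(i, min(i + 1, len(y) - 1)) for i in range(0, len(y) - 1, 2)]
    while len(segs) > 1:
        costs = [fit_cost(x[s:e + 1], y[s:e + 1])
                 for (s, _), (_, e) in zip(segs[:-1], segs[1:])]
        i = int(np.argmin(costs))
        if costs[i] > max_error:
            break
        segs[i:i + 2] = [(segs[i][0], segs[i + 1][1])]
    return segs

# Two-slope signal: cheap merges exhaust within each linear piece, then stop.
y = np.concatenate([np.linspace(0, 1, 50), np.linspace(1, -2, 50)])
segments = bottom_up_segments(y, max_error=0.01)
print(segments)
```

Each recovered segment would then yield a (slope, duration) feature pair for the SVM stage.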
NASA Astrophysics Data System (ADS)
Lotfy, Hayam Mahmoud; Omran, Yasmin Rostom
2018-07-01
A novel, simple, rapid, accurate, and economical spectrophotometric method, namely absorptivity centering (a-Centering), has been developed and validated for the simultaneous determination of mixtures with partially and completely overlapping spectra in different matrices, using either a normalized or a factorized spectrum with built-in spectrophotometer software and without the need for a specially purchased program. Mixture I (Mix I), composed of Simvastatin (SM) and Ezetimibe (EZ), has partially overlapping spectra and is formulated as tablets, while mixture II (Mix II), formed by Chloramphenicol (CPL) and Prednisolone acetate (PA), has completely overlapping spectra and is formulated as eye drops. These procedures do not require any separation steps. Resolution of the spectrally overlapping binary mixtures has been achieved by recovering the zero-order (D0) spectrum of each drug, after which absorbance was recorded at their maxima of 238, 233.5, 273 and 242.5 nm for SM, EZ, CPL and PA, respectively. Calibration graphs were established with good correlation coefficients. The method shows significant advantages such as simplicity and minimal data manipulation, besides maximum reproducibility and robustness. Moreover, it was validated according to ICH guidelines. Selectivity was tested using laboratory-prepared mixtures. Accuracy, precision and repeatability were found to be within the acceptable limits. The proposed method is suitable for the assay of these drugs in their combined formulations without any interference from excipients. The obtained results were statistically compared with those of the reported and official methods by applying the t-test and F-test at the 95% confidence level, concluding that there is no significant difference with regard to accuracy and precision. Generally, this method could be used successfully for routine quality control testing.
Karr, Justin E; Garcia-Barrera, Mauricio A; Holdnack, James A; Iverson, Grant L
2018-01-01
Multivariate base rates allow for the simultaneous statistical interpretation of multiple test scores, quantifying the normal frequency of low scores on a test battery. This study provides multivariate base rates for the Delis-Kaplan Executive Function System (D-KEFS). The D-KEFS consists of 9 tests with 16 Total Achievement scores (i.e. primary indicators of executive function ability). Stratified by education and intelligence, multivariate base rates were derived for the full D-KEFS and an abbreviated four-test battery (i.e. Trail Making, Color-Word Interference, Verbal Fluency, and Tower Test) using the adult portion of the normative sample (ages 16-89). Multivariate base rates are provided for the full and four-test D-KEFS batteries, calculated using five low score cutoffs (i.e. ≤25th, 16th, 9th, 5th, and 2nd percentiles). Low scores occurred commonly among the D-KEFS normative sample, with 82.6 and 71.8% of participants obtaining at least one score ≤16th percentile for the full and four-test batteries, respectively. Intelligence and education were inversely related to low score frequency. The base rates provided herein allow clinicians to interpret multiple D-KEFS scores simultaneously for the full D-KEFS and an abbreviated battery of commonly administered tests. The use of these base rates will support clinicians when differentiating between normal variations in cognitive performance and true executive function deficits.
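Why low scores are common on a large battery can be illustrated by Monte Carlo: under a multivariate normal model with correlated tests, the chance of at least one score at or below the 16th percentile is far higher than the per-test 16%. A sketch assuming a compound-symmetric correlation; the value 0.4 is an assumption of this illustration, not a D-KEFS estimate:

```python
import numpy as np

rng = np.random.default_rng(2)
k = 16                 # number of scores in the battery
rho = 0.4              # assumed uniform inter-test correlation
n = 100_000

# Multivariate normal scores with a compound-symmetric correlation matrix.
cov = np.full((k, k), rho) + (1 - rho) * np.eye(k)
z = rng.multivariate_normal(np.zeros(k), cov, size=n)

cutoff = -1.0          # z = -1.0 is approximately the 16th percentile
at_least_one_low = np.mean((z <= cutoff).any(axis=1))
print(f"P(at least one score <= 16th percentile) ~ {at_least_one_low:.2f}")
```

Higher inter-test correlation lowers this base rate; independence would push it above 0.9 for 16 tests, which is why empirically derived base rates matter.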
Stewart analysis of apparently normal acid-base state in the critically ill.
Moviat, Miriam; van den Boogaard, Mark; Intven, Femke; van der Voort, Peter; van der Hoeven, Hans; Pickkers, Peter
2013-12-01
This study aimed to describe Stewart parameters in critically ill patients with an apparently normal acid-base state and to determine the incidence of mixed metabolic acid-base disorders in these patients. We conducted a prospective, observational multicenter study of 312 consecutive Dutch intensive care unit patients with normal pH (7.35 ≤ pH ≤ 7.45) on days 3 to 5. Apparent (SIDa) and effective strong ion difference (SIDe) and strong ion gap (SIG) were calculated from 3 consecutive arterial blood samples. Multivariate linear regression analysis was performed to analyze factors potentially associated with levels of SIDa and SIG. A total of 137 patients (44%) were identified with an apparently normal acid-base state (normal pH and -2 < base excess < 2 and 35 < PaCO2 < 45 mm Hg). In this group, SIDa values were 36.6 ± 3.6 mEq/L, resulting from hyperchloremia (109 ± 4.6 mEq/L, sodium-chloride difference 30.0 ± 3.6 mEq/L); SIDe values were 33.5 ± 2.3 mEq/L, resulting from hypoalbuminemia (24.0 ± 6.2 g/L); and SIG values were 3.1 ± 3.1 mEq/L. During admission, base excess increased secondary to a decrease in SIG levels and, subsequently, an increase in SIDa levels. Levels of SIDa were associated with positive cation load, chloride load, and admission SIDa (multivariate r² = 0.40, P < .001). Levels of SIG were associated with kidney function, sepsis, and SIG levels at intensive care unit admission (multivariate r² = 0.28, P < .001). Intensive care unit patients with an apparently normal acid-base state have an underlying mixed metabolic acid-base disorder characterized by acidifying effects of a low SIDa (caused by hyperchloremia) and high SIG combined with the alkalinizing effect of hypoalbuminemia. © 2013.
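The Stewart parameters reported above follow from simple charge-balance arithmetic. A sketch using Figge-style approximations for the albumin and phosphate charge; the coefficients are the commonly quoted ones and should be treated as an assumption of this illustration, and the input values are merely chosen near the cohort means:

```python
def stewart(na, k, ca, mg, cl, lactate, hco3, albumin_g_l, phosphate_mmol_l, ph):
    """Stewart acid-base parameters. Electrolyte inputs in mEq/L unless noted.
    Albumin/phosphate charge coefficients are Figge-style approximations
    (an assumption of this sketch, not values from the study)."""
    sida = na + k + ca + mg - cl - lactate                  # apparent SID
    a_charge = albumin_g_l * (0.123 * ph - 0.631)           # albumin anionic charge
    p_charge = phosphate_mmol_l * (0.309 * ph - 0.469)      # phosphate anionic charge
    side = hco3 + a_charge + p_charge                       # effective SID
    return sida, side, sida - side                          # SIDa, SIDe, SIG

# Hyperchloremia plus hypoalbuminemia, as in the cohort described above.
sida, side, sig = stewart(na=139, k=4.0, ca=2.4, mg=1.4, cl=109, lactate=1.0,
                          hco3=24.0, albumin_g_l=24.0, phosphate_mmol_l=1.0, ph=7.40)
print(f"SIDa={sida:.1f}  SIDe={side:.1f}  SIG={sig:.1f}")
```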
Robust tests for multivariate factorial designs under heteroscedasticity.
Vallejo, Guillermo; Ato, Manuel
2012-06-01
The question of how to analyze several multivariate normal mean vectors when normality and covariance homogeneity assumptions are violated is considered in this article. For the two-way MANOVA layout, we address this problem adapting results presented by Brunner, Dette, and Munk (BDM; 1997) and Vallejo and Ato (modified Brown-Forsythe [MBF]; 2006) in the context of univariate factorial and split-plot designs and a multivariate version of the linear model (MLM) to accommodate heterogeneous data. Furthermore, we compare these procedures with the Welch-James (WJ) approximate degrees of freedom multivariate statistics based on ordinary least squares via Monte Carlo simulation. Our numerical studies show that of the methods evaluated, only the modified versions of the BDM and MBF procedures were robust to violations of underlying assumptions. The MLM approach was only occasionally liberal, and then by only a small amount, whereas the WJ procedure was often liberal if the interactive effects were involved in the design, particularly when the number of dependent variables increased and total sample size was small. On the other hand, it was also found that the MLM procedure was uniformly more powerful than its most direct competitors. The overall success rate was 22.4% for the BDM, 36.3% for the MBF, and 45.0% for the MLM.
Multivariate analysis of cytokine profiles in pregnancy complications.
Azizieh, Fawaz; Dingle, Kamaludin; Raghupathy, Raj; Johnson, Kjell; VanderPlas, Jacob; Ansari, Ali
2018-03-01
The immunoregulation that tolerates the semiallogeneic fetus during pregnancy includes a harmonious dynamic balance between anti- and pro-inflammatory cytokines. Several earlier studies reported significantly different levels and/or ratios of several cytokines in complicated pregnancy as compared to normal pregnancy. However, as cytokines operate in networks with potentially complex interactions, it is also instructive to compare groups on multi-cytokine data sets using multivariate analysis. Such analysis further examines how great the differences are, and which cytokines differ more than others. Various multivariate statistical tools, such as the Cramer test, classification and regression trees, partial least squares regression figures, the 2-dimensional Kolmogorov-Smirnov test, principal component analysis and the gap statistic, were used to compare cytokine data of normal vs anomalous groups across different pregnancy complications. Multivariate analysis assisted in examining whether the groups differed, how strongly they differed, and in what ways they differed, and further reported evidence for subgroups within one group (pregnancy-induced hypertension), possibly indicating multiple causes for the complication. This work contributes to a better understanding of cytokine interactions and may have important implications for targeting cytokine balance modulation or the design of future medications or interventions that best direct management or prevention from an immunological approach. © 2018 The Authors. American Journal of Reproductive Immunology Published by John Wiley & Sons Ltd.
Ping, Lifang; Huang, Lihong; Cardinali, Barbara; Profumo, Aldo; Gorkun, Oleg V.; Lord, Susan T.
2011-01-01
Fibrin polymerization occurs in two steps: the assembly of fibrin monomers into protofibrils and the lateral aggregation of protofibrils into fibers. Here we describe a novel fibrinogen that apparently impairs only lateral aggregation. This variant is a hybrid, where the human αC region has been replaced with the homologous chicken region. Several experiments indicate this hybrid human-chicken (HC) fibrinogen has an overall structure similar to normal. Thrombin-catalyzed fibrinopeptide release from HC fibrinogen was normal. Plasmin digests of HC fibrinogen produced fragments that were similar to normal D and E; further, as with normal fibrinogen, the knob 'A' peptide, GPRP, reversed the plasmin cleavage associated with addition of EDTA. Dynamic light scattering and turbidity studies with HC fibrinogen showed polymerization was not normal. Whereas early small increases in hydrodynamic radius and absorbance paralleled the increases seen during the assembly of normal protofibrils, HC fibrinogen showed no dramatic increase in scattering as observed with normal lateral aggregation. To determine whether HC and normal fibrinogen could form a copolymer, we examined mixtures of these. Polymerization of normal fibrinogen was markedly changed by HC fibrinogen, as expected for mixed polymers. When the mixture contained 0.45 μM normal and 0.15 μM HC fibrinogen, the initiation of lateral aggregation was delayed and the final fiber size was reduced relative to normal fibrinogen at 0.45 μM. Considered altogether, our data suggest that HC fibrin monomers can assemble into protofibrils or protofibril-like structures but these either cannot assemble into fibers or assemble into very thin fibers. PMID:21932842
NASA Astrophysics Data System (ADS)
Hasan, Mohd Rosli Mohd; Hamzah, Meor Othman; Yee, Teh Sek
2017-10-01
Experimental work was conducted to evaluate the properties of asphalt binders and mixtures produced using a relatively new silane additive, named ZycoTherm. In this study, 0.1 wt% ZycoTherm was blended with asphalt binder to enable production of asphalt mixture at lower than normal temperatures, as well as to improve mix workability and compactability. The performance of the asphalt mixtures against pavement distresses in a tropical climate was also investigated. The properties of control asphalt binders (60/70 and 80/100 penetration grade) and asphalt binders incorporating 0.1% ZycoTherm were reported based on penetration, softening point, rotational viscosity, complex modulus and phase angle. Subsequently, to compare the performance of the asphalt mixture incorporating ZycoTherm with the control asphalt mixture, cylindrical samples were prepared at recommended temperatures and air voids depending on the binder types and test requirements. The samples were tested for indirect tensile strength (ITS), resilient modulus, dynamic creep, Hamburg wheel tracking and moisture-induced damage. From the compaction data obtained using the Servopak gyratory compactor, specimens prepared using ZycoTherm exhibited higher workability and compactability compared to the conventional mixture. From the mixture performance test results, mixtures prepared with ZycoTherm showed comparable if not better performance than the control sample in terms of resistance to moisture damage, permanent deformation and cracking.
Wright, Aidan G C; Hallquist, Michael N
2014-01-01
Studying personality and its pathology as it changes, develops, or remains stable over time offers exciting insight into the nature of individual differences. Researchers interested in examining personal characteristics over time have a number of time-honored analytic approaches at their disposal. In recent years there have also been considerable advances in person-oriented analytic approaches, particularly longitudinal mixture models. In this methodological primer we focus on mixture modeling approaches to the study of normative and individual change in the form of growth mixture models and ipsative change in the form of latent transition analysis. We describe the conceptual underpinnings of each of these models, outline approaches for their implementation, and provide accessible examples for researchers studying personality and its assessment.
Dudásová, Dorota; Rune Flåten, Geir; Sjöblom, Johan; Øye, Gisle
2009-09-15
The transmission profiles of one- to three-component particle suspension mixtures were analyzed by multivariate methods such as principal component analysis (PCA) and partial least-squares regression (PLS). The particles mimic the solids present in oil-field-produced water: kaolin and silica represent solids of reservoir origin, whereas FeS is a product of bacterial metabolic activity and Fe3O4 a corrosion product (e.g., from pipelines). All particles were coated with crude oil surface-active components to imitate particles in real systems. The effects of different variables (concentration, temperature, and coating) on suspension stability were studied with a Turbiscan Lab Expert instrument. The transmission profiles over 75 min represent the overall water quality, while the transmission during the first 15.5 min gives information about suspension behavior during a representative hold time in the separator. The behavior of the mixed particle suspensions was compared to that of the single particle suspensions, and models describing the systems were built. The findings are summarized as follows: silica seems to dominate the mixture properties in the binary suspensions toward enhanced separation. For 75 min, temperature and concentration are the most significant variables, while for 15.5 min, concentration is the only significant variable. Models for prediction of transmission spectra from run parameters as well as particle type from transmission profiles (inverse calibration) give a reasonable description of the relationships. In ternary particle mixtures, silica is not dominant, and for 75 min the significant variables for the mixture (temperature and coating) are more similar to those for single kaolin and FeS/Fe3O4 suspensions. On the other hand, for 15.5 min the coating is the most significant variable, similar to silica (at 15.5 min). The model for prediction of transmission spectra from run parameters gives good estimates of the transmission profiles.
Although the model for prediction of particle type from transmission parameters is able to predict some particles, further improvement is required before all particles are consistently correctly classified. Cross-validation was done for both models and estimation errors are reported.
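The PCA step on transmission profiles can be sketched with plain SVD: rows are profiles, columns are time points, and the leading component captures the dominant mode of variation. A sketch on synthetic profiles; the profile shape and noise level are assumptions of this illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
# 30 synthetic transmission profiles (rows) over 60 time points (columns):
# each rises toward a sample-specific plateau, plus measurement noise.
t = np.linspace(0, 75, 60)
profiles = np.array([50 + s * (1 - np.exp(-t / 20)) + rng.normal(0, 0.5, t.size)
                     for s in rng.uniform(5, 30, 30)])

# PCA via SVD of the mean-centered data matrix.
X = profiles - profiles.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scores = U * S                          # per-sample scores
explained = S ** 2 / np.sum(S ** 2)     # variance share per component
print(f"PC1 explains {explained[0]:.1%} of the variance")
```

Because the simulated profiles differ by a single scaling factor, one component dominates; real mixtures would spread variance over more components.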
Roopwani, Rahul; Buckner, Ira S
2011-10-14
Principal component analysis (PCA) was applied to pharmaceutical powder compaction. A solid fraction parameter (SFc/d) and a mechanical work parameter (Wc/d) representing irreversible compression behavior were determined as functions of applied load. Multivariate analysis of the compression data was carried out using PCA. The first principal component (PC1) showed loadings for the solid fraction and work values that agreed with changes in the relative significance of plastic deformation to consolidation at different pressures. The PC1 scores showed the same rank order as the relative plasticity ranking derived from the literature for common pharmaceutical materials. The utility of PC1 in understanding deformation was extended to binary mixtures using a subset of the original materials. Combinations of brittle and plastic materials were characterized using the PCA method. The relationships between PC1 scores and the weight fractions of the mixtures were typically linear, showing ideal mixing in their deformation behaviors. The mixture consisting of two plastic materials was the only combination to show a consistent positive deviation from ideality. The application of PCA to solid fraction and mechanical work data appears to be an effective means of predicting deformation behavior during compaction of simple powder mixtures. Copyright © 2011 Elsevier B.V. All rights reserved.
Niazi, Ali; Zolgharnein, Javad; Afiuni-Zadeh, Somaie
2007-11-01
Ternary mixtures of thiamin, riboflavin and pyridoxal have been simultaneously determined in synthetic and real samples by application of spectrophotometry and least-squares support vector machines (LS-SVM). The calibration graphs were linear in the ranges of 1.0-20.0, 1.0-10.0 and 1.0-20.0 µg/mL, with detection limits of 0.6, 0.5 and 0.7 µg/mL for thiamin, riboflavin and pyridoxal, respectively. The experimental calibration matrix was designed with 21 mixtures of these chemicals, with concentrations varied within the calibration ranges of the vitamins. The simultaneous determination of these vitamin mixtures by spectrophotometric methods is a difficult problem due to spectral interferences. Partial least squares (PLS) modeling and least-squares support vector machines were used for the multivariate calibration of the spectrophotometric data. An excellent model was built using LS-SVM, with low prediction errors and superior performance relative to PLS. The root mean square errors of prediction (RMSEP) for thiamin, riboflavin and pyridoxal were 0.6926, 0.3755 and 0.4322 with PLS, and 0.0421, 0.0318 and 0.0457 with LS-SVM, respectively. The proposed method was satisfactorily applied to the rapid simultaneous determination of thiamin, riboflavin and pyridoxal in commercial pharmaceutical preparations and human plasma samples.
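The RMSEP criterion used above to compare PLS and LS-SVM is simply the root mean square of the prediction errors. A minimal sketch; the prediction values below are illustrative, not the paper's:

```python
import numpy as np

def rmsep(y_true, y_pred):
    """Root mean square error of prediction."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Hypothetical predictions for one analyte (µg/mL) from two calibration models.
y_true = [2.0, 5.0, 10.0, 15.0, 20.0]
pls_pred = [2.9, 4.2, 10.8, 14.1, 20.9]      # larger errors
svm_pred = [2.05, 4.97, 10.02, 15.04, 19.96]  # smaller errors

print(rmsep(y_true, pls_pred), rmsep(y_true, svm_pred))
```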
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1978-01-01
This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1976-01-01
The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
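The iteration studied in the two abstracts above can be sketched for a two-component univariate normal mixture with known variances and mixing proportions: taking the step size omega = 1 recovers the classical successive-approximations (EM-type) update of the component means, and other values of omega in (0, 2) relax or over-relax it. Data and starting values below are invented.

```python
import math

def phi(x, mu, sigma):
    """Univariate normal density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def relaxed_em_means(data, mu, pi, sigma, omega=1.0, iters=50):
    """Iterate the mean updates of a two-component normal mixture.

    omega is the step size: omega = 1 is the plain fixed-point (EM) step,
    while 0 < omega < 2 is the range considered for local convergence.
    """
    mu = list(mu)
    for _ in range(iters):
        new_mu = []
        for k in range(2):
            num = den = 0.0
            for x in data:
                p0 = pi[0] * phi(x, mu[0], sigma[0])
                p1 = pi[1] * phi(x, mu[1], sigma[1])
                w = (p0 if k == 0 else p1) / (p0 + p1)  # responsibility of comp. k
                num += w * x
                den += w
            fixed_point = num / den                      # plain EM mean update
            new_mu.append(mu[k] + omega * (fixed_point - mu[k]))
        mu = new_mu
    return mu

# Well-separated synthetic data: two clusters near 0 and 5.
data = [-0.2, 0.1, 0.0, 4.9, 5.2, 5.1]
mu_hat = relaxed_em_means(data, mu=(-1.0, 6.0), pi=(0.5, 0.5), sigma=(1.0, 1.0))
```

With this separation the responsibilities are nearly 0/1, so the fitted means land close to the two cluster averages.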
NASA Astrophysics Data System (ADS)
Miftahurrohmah, Brina; Iriawan, Nur; Fithriasari, Kartika
2017-06-01
Stocks are financial instruments traded in the capital market that carry a high level of risk. This risk shows up as uncertainty in the returns that investors must accept in the future: the higher the risk to be faced, the higher the return to be gained. Measurements therefore need to be made of this risk. Value at Risk (VaR), the most popular risk measurement method, is frequently inappropriate when the pattern of returns is not uni-modal Normal. The calculation of risk using the VaR method with the Normal Mixture Autoregressive (MNAR) approach has been considered previously. This paper proposes a VaR method coupled with the Mixture Laplace Autoregressive (MLAR) model, implemented for analysing the returns of the three largest-capitalization Islamic stocks in JII, namely PT. Astra International Tbk (ASII), PT. Telekomunikasi Indonesia Tbk (TLMK), and PT. Unilever Indonesia Tbk (UNVR). Parameter estimation is performed by employing a Bayesian Markov Chain Monte Carlo (MCMC) approach.
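The paper's MLAR model is not reproduced here; as a loose illustration of the underlying idea (reading VaR off a non-normal mixture return distribution rather than a single normal), the sketch below samples a two-component Laplace mixture and takes the empirical lower quantile. All component weights and parameters are made up.

```python
import math
import random

def sample_laplace(mu, b):
    """Inverse-CDF sampling from a Laplace(mu, b) distribution."""
    u = random.random() - 0.5
    return mu - b * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def mixture_var(n, alpha, comps):
    """Monte-Carlo Value at Risk at confidence alpha for a mixture of
    Laplace components given as (weight, mu, b) triples."""
    draws = []
    for _ in range(n):
        r, acc = random.random(), 0.0
        for w, mu, b in comps:
            acc += w
            if r <= acc:
                break                      # falls through to the last component
        draws.append(sample_laplace(mu, b))
    draws.sort()
    return -draws[int(n * (1.0 - alpha))]  # loss at the (1 - alpha) quantile

random.seed(42)
# Hypothetical two-regime daily-return model: calm regime + heavy-loss regime.
comps = [(0.9, 0.001, 0.01), (0.1, -0.01, 0.03)]
var95 = mixture_var(100_000, 0.95, comps)
```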
GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models
Mukherjee, Chiranjit; Rodriguez, Abel
2016-01-01
Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogenous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing-times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful.
NASA Astrophysics Data System (ADS)
Hadad, Ghada M.; El-Gindy, Alaa; Mahmoud, Waleed M. M.
2008-08-01
High-performance liquid chromatography (HPLC) and multivariate spectrophotometric methods are described for the simultaneous determination of ambroxol hydrochloride (AM) and doxycycline (DX) in combined pharmaceutical capsules. The chromatographic separation was achieved on a reversed-phase C18 analytical column with a mobile phase consisting of a mixture of 20 mM potassium dihydrogen phosphate, pH 6-acetonitrile in ratio of (1:1, v/v) and UV detection at 245 nm. Also, the resolution has been accomplished by using numerical spectrophotometric methods such as classical least squares (CLS), principal component regression (PCR) and partial least squares (PLS-1) applied to the UV spectra of the mixture, and a graphical spectrophotometric method, the first derivative of the ratio spectra (1DD) method. Analytical figures of merit (FOM), such as sensitivity, selectivity, analytical sensitivity, limit of quantitation and limit of detection, were determined for the CLS, PLS-1 and PCR methods. The proposed methods were validated and successfully applied for the analysis of pharmaceutical formulation and laboratory-prepared mixtures containing the two-component combination.
Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang
2010-07-01
We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided.
Rijal, Omar M; Abdullah, Norli A; Isa, Zakiah M; Noor, Norliza M; Tawfiq, Omar F
2013-01-01
The knowledge of teeth positions on the maxillary arch is useful in the rehabilitation of the edentulous patient. A combination of angular (θ) and linear (l) variables representing the positions of four teeth was initially proposed as the shape descriptor of the maxillary dental arch. Three categories of shape were established, each having a multivariate normal distribution. It may be argued that 4 selected teeth on the standardized digital images of the dental casts could be considered insufficient with respect to representing shape. However, increasing the number of points would create problems with dimensions, and proof of existence of the multivariate normal distribution is extremely difficult. This study investigates the ability of Fourier descriptors (FD) using all maxillary teeth to find alternative shape models. Eight FD terms were sufficient to represent 21 points on the arch. Using these 8 FD terms as an alternative shape descriptor, three categories of shape were verified, each category having a complex normal distribution.
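One common way to compute Fourier descriptors of a landmark contour, as referenced above, is to treat the (x, y) points as complex numbers and take the leading terms of their discrete Fourier transform; the paper's exact normalization is not specified here, and the contour below is a hypothetical arch-like curve, not dental data.

```python
import cmath
import math

def fourier_descriptors(points, n_terms):
    """First n_terms DFT coefficients of a contour given as (x, y) landmarks."""
    z = [complex(x, y) for x, y in points]
    N = len(z)
    return [
        sum(z[t] * cmath.exp(-2j * math.pi * k * t / N) for t in range(N)) / N
        for k in range(n_terms)
    ]

# Hypothetical arch-like contour: 21 points along a semicircular curve.
pts = [(math.cos(math.pi * t / 20), math.sin(math.pi * t / 20)) for t in range(21)]
fd = fourier_descriptors(pts, 8)   # 8 terms, as in the abstract
```

The zeroth coefficient is simply the centroid of the landmarks; the higher terms capture progressively finer shape detail.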
Some Integrated Squared Error Procedures for Multivariate Normal Data,
1986-01-01
a linear regression or experimental design model). Our procedures have also been used widely on non-linear models, but we do not address non-linear models here. Keywords: goodness of fit, outliers, influence functions, experimental design, cluster analysis, robustness. The procedures extend to structured data such as multivariate experimental designs. Several illustrations are provided.
A Note on Asymptotic Joint Distribution of the Eigenvalues of a Noncentral Multivariate F Matrix.
1984-11-01
Krishnaiah (1982). Now, let us consider samples drawn from the k multivariate normal populations, and let (X_1t, ..., X_pt) denote the mean vector of the t-th population. Cited works include Sankhya, 4, 381-396, and Krishnaiah, P. R. (1982), Selection of variables in discriminant analysis, in Handbook of Statistics, Volume 2 (P. R. Krishnaiah, editor), 805-820, North-Holland Publishing Company.
NASA Astrophysics Data System (ADS)
Harris, C. D.; Profeta, Luisa T. M.; Akpovo, Codjo A.; Johnson, Lewis; Stowe, Ashley C.
2017-05-01
A calibration model was created to illustrate the detection capabilities of laser ablation molecular isotopic spectroscopy (LAMIS) discrimination in isotopic analysis. The sample set contained boric acid pellets that varied in isotopic concentrations of 10B and 11B. Each sample set was interrogated with a Q-switched Nd:YAG ablation laser operating at 532 nm. A minimum of four band heads of the β system B²Σ → X²Σ transitions were identified and verified with previous literature on BO molecular emission lines. Isotopic shifts were observed in the spectra for each transition and used as the predictors in the calibration model. The spectra along with their respective 10/11B isotopic ratios were analyzed using Partial Least Squares Regression (PLSR). A novel IUPAC approach for determining a multivariate Limit of Detection (LOD) interval was used to predict the detection of the desired isotopic ratios. The predicted multivariate LOD is dependent on the variation of the instrumental signal and other composites in the calibration model space.
González, A; Norambuena-Contreras, J; Storey, L; Schlangen, E
2018-05-15
The concept of self-healing asphalt mixtures by bitumen temperature increase has been used by researchers to create an asphalt mixture with crack-healing properties by microwave or induction heating. Metals, normally steel wool fibers (SWF), are added to asphalt mixtures prepared with virgin materials to absorb and conduct thermal energy. Metal shavings, a waste material from the metal industry, could be used to replace SWF. In addition, reclaimed asphalt pavement (RAP) could be added to these mixtures to make a more sustainable road material. This research aimed to evaluate the effect of adding metal shavings and RAP on the properties of asphalt mixtures with crack-healing capabilities by microwave heating. The research indicates that metal shavings have an irregular shape with widths larger than typical SWF used with asphalt self-healing purposes. The general effect of adding metal shavings was an improvement in the crack-healing of asphalt mixtures, while adding RAP to mixtures with metal shavings reduced the healing. The average surface temperature of the asphalt samples after microwave heating was higher than temperatures obtained by induction heating, indicating that shavings are more efficient when mixtures are heated by microwave radiation. CT scan analysis showed that shavings uniformly distribute in the mixture, and the addition of metal shavings increases the air voids. Overall, it is concluded that asphalt mixtures with RAP and waste metal shavings have the potential of being crack-healed by microwave heating.
Hydrogen isotope separation utilizing bulk getters
Knize, R.J.; Cecchi, J.L.
1991-08-20
Tritium and deuterium are separated from a gaseous mixture thereof, derived from a nuclear fusion reactor or some other source, by providing a casing with a bulk getter therein for absorbing the gaseous mixture to produce an initial loading of the getter, partially desorbing the getter to produce a desorbed mixture which is tritium-enriched, pumping the desorbed mixture into a separate container, the remaining gaseous loading in the getter being deuterium-enriched, desorbing the getter to a substantially greater extent to produce a deuterium-enriched gaseous mixture, and removing the deuterium-enriched mixture into another container. The bulk getter may comprise a zirconium-aluminum alloy, or a zirconium-vanadium-iron alloy. The partial desorption may reduce the loading by approximately fifty percent. The basic procedure may be extended to produce a multistage isotope separator, including at least one additional bulk getter into which the tritium-enriched mixture is absorbed. The second getter is then partially desorbed to produce a desorbed mixture which is further tritium-enriched. The last-mentioned mixture is then removed from the container for the second getter, which is then desorbed to a substantially greater extent to produce a desorbed mixture which is deuterium-enriched. The last-mentioned mixture is then removed so that the cycle can be continued and repeated. The method of isotope separation is also applicable to other hydrogen isotopes, in that the method can be employed for separating either deuterium or tritium from normal hydrogen. 4 figures.
Method for removing sulfur oxide from waste gases and recovering elemental sulfur
Moore, Raymond H.
1977-01-01
A continuous catalytic fused salt extraction process is described for removing sulfur oxides from gaseous streams. The gaseous stream is contacted with a molten potassium sulfate salt mixture having a dissolved catalyst to oxidize sulfur dioxide to sulfur trioxide and molten potassium normal sulfate to solvate the sulfur trioxide to remove the sulfur trioxide from the gaseous stream. A portion of the sulfur trioxide loaded salt mixture is then dissociated to produce sulfur trioxide gas and thereby regenerate potassium normal sulfate. The evolved sulfur trioxide is reacted with hydrogen sulfide as in a Claus reactor to produce elemental sulfur. The process may be advantageously used to clean waste stack gas from industrial plants, such as copper smelters, where a supply of hydrogen sulfide is readily available.
Off-line real-time FTIR analysis of a process step in imipenem production
NASA Astrophysics Data System (ADS)
Boaz, Jhansi R.; Thomas, Scott M.; Meyerhoffer, Steven M.; Staskiewicz, Steven J.; Lynch, Joseph E.; Egan, Richard S.; Ellison, Dean K.
1992-08-01
We have developed an FT-IR method, using a Spectra-Tech Monit-IR 400 system, to monitor off-line the completion of a reaction in real-time. The reaction is moisture-sensitive and analysis by more conventional methods (normal-phase HPLC) is difficult to reproduce. The FT-IR method is based on the shift of a diazo band when a conjugated beta-diketone is transformed into a silyl enol ether during the reaction. The reaction mixture is examined directly by IR and does not require sample workup. Data acquisition time is less than one minute. The method has been validated for specificity, precision and accuracy. The results obtained by the FT-IR method for known mixtures and in-process samples compare favorably with those from a normal-phase HPLC method.
Quantitative analysis of NMR spectra with chemometrics
NASA Astrophysics Data System (ADS)
Winning, H.; Larsen, F. H.; Bro, R.; Engelsen, S. B.
2008-01-01
The number of applications of chemometrics to series of NMR spectra is rapidly increasing due to an emerging interest in quantitative NMR spectroscopy, e.g. in the pharmaceutical and food industries. This paper gives an analysis of the advantages and limitations of applying the two most common chemometric procedures, Principal Component Analysis (PCA) and Multivariate Curve Resolution (MCR), to a designed set of 231 1H 400 MHz spectra of simple alcohol mixtures (propanol, butanol and pentanol). The study clearly demonstrates that the major advantage of chemometrics is the visualisation of larger data structures, which adds a new exploratory dimension to NMR research. While robustness and powerful data visualisation and exploration are the main qualities of the PCA method, the study demonstrates that the bilinear MCR method is an even more powerful method for resolving pure component NMR spectra from mixtures when certain conditions are met.
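The bilinear model behind MCR is D = C S (mixture spectra = concentrations times pure spectra). Full MCR-ALS alternates two least-squares half-steps, re-estimating C and S in turn; the sketch below shows just the concentration half-step for two components, solved via 2x2 normal equations, with the pure spectra assumed known. All "spectra" are invented toy vectors, not NMR data.

```python
def cls_concentrations(mix, s1, s2):
    """One MCR-ALS half-step for two components: least-squares projection
    of a single mixture spectrum onto two known pure-component spectra."""
    a11 = sum(x * x for x in s1)
    a22 = sum(y * y for y in s2)
    a12 = sum(x * y for x, y in zip(s1, s2))
    b1 = sum(m * x for m, x in zip(mix, s1))
    b2 = sum(m * y for m, y in zip(mix, s2))
    det = a11 * a22 - a12 * a12          # nonzero when spectra are independent
    c1 = (a22 * b1 - a12 * b2) / det
    c2 = (a11 * b2 - a12 * b1) / det
    return c1, c2

# Hypothetical pure spectra on a 4-point "chemical shift" grid,
# and a noiseless 2:1 mixture of them.
s1 = [1.0, 0.5, 0.0, 0.2]
s2 = [0.1, 0.4, 1.0, 0.3]
mix = [2.0 * a + 1.0 * b for a, b in zip(s1, s2)]
c1, c2 = cls_concentrations(mix, s1, s2)
```

On noiseless data the recovered concentrations are exact; the symmetric half-step (re-estimating spectra from concentrations) has the same least-squares form with the roles of C and S swapped.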
Wavelength and energy dependent absorption of unconventional fuel mixtures
NASA Astrophysics Data System (ADS)
Khan, N.; Saleem, Z.; Mirza, A. A.
2005-11-01
Economic considerations of laser induced ignition over the normal electrical ignition of direct injected Compressed Natural Gas (CNG) engines have motivated the automobile industry to go for extensive research on basic characteristics of leaner unconventional fuel mixtures to evaluate the practical possibility of switching over to the emerging technologies. This paper briefly reviews the ongoing research activities on minimum ignition energy and power requirements of natural gas fuels and reports results of the present laser air/CNG mixture absorption coefficient study. This study was arranged to determine the thermo-optical characteristics of high air/fuel ratio mixtures using laser techniques. We measured the absorption coefficient using four lasers of multiple wavelengths over a wide range of temperatures and pressures. The absorption coefficient of the mixture was found to vary significantly with changes in mixture temperature and probe laser wavelength. The absorption coefficients of air/CNG mixtures were measured using a 20 W CW/pulsed CO2 laser at 10.6 μm, a pulsed Nd:YAG laser at 1.06 μm and 532 nm (2nd harmonic), and a 4 mW CW HeNe laser at 645 nm and 580 nm, for temperatures varying from 290 to 1000 K, using an optical transmission loss technique.
NASA Astrophysics Data System (ADS)
Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.
2013-09-01
Four simple, accurate and specific methods were developed and validated for the simultaneous estimation of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in commercial tablets. The derivative spectrophotometric methods include the Derivative Ratio Zero Crossing (DRZC) and Double Divisor Ratio Spectra-Derivative Spectrophotometry (DDRS-DS) methods, while the multivariate calibrations used are Principal Component Regression (PCR) and Partial Least Squares (PLS). The proposed methods were applied successfully in the determination of the drugs in laboratory-prepared mixtures and in commercial pharmaceutical preparations. The validity of the proposed methods was assessed using the standard addition technique. The linearity of the proposed methods is investigated in the range of 2-32, 4-44 and 2-20 μg/mL for AML, VAL and HCT, respectively.
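The ratio-spectra derivative idea behind methods like DRZC can be shown in a few lines: dividing a binary-mixture spectrum by one pure-component spectrum turns that component's contribution into an additive constant, which a first derivative removes. The absorbance vectors below are invented toy spectra, not the paper's data.

```python
def ratio_first_derivative(mixture, divisor):
    """First (finite-difference) derivative of the ratio spectrum."""
    ratio = [m / d for m, d in zip(mixture, divisor)]
    return [ratio[i + 1] - ratio[i] for i in range(len(ratio) - 1)]

# Hypothetical absorbance spectra of two pure components on a common grid.
a = [0.10, 0.30, 0.60, 0.30, 0.10]   # pure component A
b = [0.50, 0.40, 0.30, 0.20, 0.10]   # pure component B (the divisor)
mix = [2.0 * ai + 3.0 * bi for ai, bi in zip(a, b)]  # 2:3 binary mixture

# mix / b = 2*(a/b) + 3, so the derivative of the mixture ratio equals
# the derivative of A's ratio alone: B's contribution cancels exactly.
d_mix = ratio_first_derivative(mix, b)
d_a_only = ratio_first_derivative([2.0 * ai for ai in a], b)
```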
Universal portfolios generated by weakly stationary processes
NASA Astrophysics Data System (ADS)
Tan, Choon Peng; Pang, Sook Theng
2014-12-01
Recently, a universal portfolio generated by a set of independent Brownian motions where a finite number of past stock prices are weighted by the moments of the multivariate normal distribution is introduced and studied. The multivariate normal moments as polynomials in time consequently lead to a constant rebalanced portfolio depending on the drift coefficients of the Brownian motions. For a weakly stationary process, a different type of universal portfolio is proposed where the weights on the stock prices depend only on the time differences of the stock prices. An empirical study is conducted on the returns achieved by the universal portfolios generated by the Ornstein-Uhlenbeck process on selected stock-price data sets. Promising results are demonstrated for increasing the wealth of the investor by using the weakly-stationary-process-generated universal portfolios.
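A constant rebalanced portfolio, the building block mentioned above, keeps a fixed weight vector b and multiplies its wealth each period by the weighted sum of the stocks' price relatives. A minimal sketch with hypothetical two-stock data (the generating-process machinery of the paper is not reproduced):

```python
def crp_wealth(price_relatives, b):
    """Final wealth of a constant rebalanced portfolio with weights b,
    starting from wealth 1. price_relatives[t][i] is the ratio of stock
    i's closing price on day t to its price on day t-1."""
    wealth = 1.0
    for x in price_relatives:
        wealth *= sum(bi * xi for bi, xi in zip(b, x))  # daily growth factor
    return wealth

# Hypothetical price relatives for two stocks over four trading days.
days = [(1.02, 0.99), (0.97, 1.03), (1.01, 1.00), (1.00, 1.02)]
w = crp_wealth(days, (0.5, 0.5))
```

With b = (1, 0) or (0, 1) this reduces to buy-and-hold of a single stock, which is a convenient sanity check.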
49 CFR 173.313 - UN Portable Tank Table for Liquefied Compressed Gases.
Code of Federal Regulations, 2011 CFR
2011-10-01
(Table excerpt: UN portable tank requirements listing, for each UN number, allowed tank instructions and maximum filling densities; entries visible include 1012 Butylene, 1017 Chlorine, 1041 Ethylene oxide and carbon dioxide mixture with more than 9 percent ethylene oxide, and 1079 Sulphur dioxide.)
49 CFR 173.313 - UN Portable Tank Table for Liquefied Compressed Gases.
Code of Federal Regulations, 2010 CFR
2010-10-01
(Table excerpt: UN portable tank requirements listing, for each UN number, allowed tank instructions and maximum filling densities; entries visible include 1012 Butylene, 1017 Chlorine, 1041 Ethylene oxide and carbon dioxide mixture with more than 9 percent ethylene oxide, and 1079 Sulphur dioxide.)
Sahin Ersoy, Gulcin; Altun Ensari, Tugba; Vatansever, Dogan; Emirdar, Volkan; Cevik, Ozge
2017-02-01
To determine the levels of WISP1 and betatrophin in normal-weight and obese women with polycystic ovary syndrome (PCOS) and to assess their relationship with anti-Müllerian hormone (AMH) levels, atherogenic profile and metabolic parameters. Methods: In this prospective cross-sectional study, the study group was composed of 49 normal-weight and 34 obese women with PCOS diagnosed based on the Rotterdam criteria; the control group comprised 36 normal-weight and 26 obese age-matched non-hyperandrogenemic women with regular menstrual cycles. Serum WISP1, betatrophin, homeostasis model assessment of insulin resistance (HOMA-IR) and AMH levels were evaluated. Univariate and multivariate analyses were performed between betatrophin and WISP1 levels and AMH levels, metabolic and atherogenic parameters. Serum WISP1 and betatrophin values were higher in the PCOS group than in the control group. Moreover, serum WISP1 and betatrophin levels were higher in the obese PCOS subgroup than in the normal-weight and obese control subgroups. Multivariate analyses revealed that body mass index, HOMA-IR and AMH independently and positively predicted WISP1 levels. Variability in serum betatrophin levels was explained by homocysteine, HOMA-IR and androstenedione levels. WISP1 and betatrophin may play a key role in the pathogenesis of PCOS.
NASA Astrophysics Data System (ADS)
Chen, Long; Wang, Yue; Liu, Nenrong; Lin, Duo; Weng, Cuncheng; Zhang, Jixue; Zhu, Lihuan; Chen, Weisheng; Chen, Rong; Feng, Shangyuan
2013-06-01
The diagnostic capability of using tissue intrinsic micro-Raman signals to obtain biochemical information from human esophageal tissue is presented in this paper. Near-infrared micro-Raman spectroscopy combined with multivariate analysis was applied for discrimination of esophageal cancer tissue from normal tissue samples. Micro-Raman spectroscopy measurements were performed on 54 esophageal cancer tissues and 55 normal tissues in the 400-1750 cm-1 range. The mean Raman spectra showed significant differences between the two groups. Tentative assignments of the Raman bands in the measured tissue spectra suggested some changes in protein structure, a decrease in the relative amount of lactose, and increases in the percentages of tryptophan, collagen and phenylalanine content in esophageal cancer tissue as compared to those of a normal subject. The diagnostic algorithms based on principal component analysis (PCA) and linear discriminant analysis (LDA) achieved a diagnostic sensitivity of 87.0% and specificity of 70.9% for separating cancer from normal esophageal tissue samples. The result demonstrated that near-infrared micro-Raman spectroscopy combined with PCA-LDA analysis could be an effective and sensitive tool for identification of esophageal cancer.
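The sensitivity and specificity figures quoted above come from a confusion-matrix calculation over the classified samples; a minimal sketch, with a made-up set of labels rather than the study's classification results:

```python
def sens_spec(labels, preds, positive="cancer"):
    """Sensitivity = TP/(TP+FN) on the positive class;
    specificity = TN/(TN+FP) on the negative class."""
    tp = sum(1 for l, p in zip(labels, preds) if l == positive and p == positive)
    fn = sum(1 for l, p in zip(labels, preds) if l == positive and p != positive)
    tn = sum(1 for l, p in zip(labels, preds) if l != positive and p != positive)
    fp = sum(1 for l, p in zip(labels, preds) if l != positive and p == positive)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical true labels and PCA-LDA predictions for eight samples.
labels = ["cancer"] * 4 + ["normal"] * 4
preds = ["cancer", "cancer", "cancer", "normal",
         "normal", "normal", "cancer", "normal"]
se, sp = sens_spec(labels, preds)   # 3/4 correct in each class
```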
Odourant dominance in olfactory mixture processing: what makes a strong odourant?
Schubert, Marco; Sandoz, Jean-Christophe; Galizia, Giovanni; Giurfa, Martin
2015-01-01
The question of how animals process stimulus mixtures remains controversial as opposing views propose that mixtures are processed analytically, as the sum of their elements, or holistically, as unique entities different from their elements. Overshadowing is a widespread phenomenon that can help decide between these alternatives. In overshadowing, an individual trained with a binary mixture learns one element better at the expense of the other. Although element salience (learning success) has been suggested as a main explanation for overshadowing, the mechanisms underlying this phenomenon remain unclear. We studied olfactory overshadowing in honeybees to uncover the mechanisms underlying olfactory-mixture processing. We provide, to our knowledge, the most comprehensive dataset on overshadowing to date based on 90 experimental groups involving more than 2700 bees trained either with six odourants or with their resulting 15 binary mixtures. We found that bees process olfactory mixtures analytically and that salience alone cannot predict overshadowing. After normalizing learning success, we found that an unexpected feature, the generalization profile of an odourant, was determinant for overshadowing. Odourants that induced less generalization enhanced their distinctiveness and became dominant in the mixture. Our study thus uncovers features that determine odourant dominance within olfactory mixtures and allows the referring of this phenomenon to differences in neural activity both at the receptor and the central level in the insect nervous system.
Jović, Ozren; Smolić, Tomislav; Primožič, Ines; Hrenar, Tomica
2016-04-19
The aim of this study was to investigate the feasibility of FTIR-ATR spectroscopy coupled with the multivariate numerical methodology for qualitative and quantitative analysis of binary and ternary edible oil mixtures. Four pure oils (extra virgin olive oil, high oleic sunflower oil, rapeseed oil, and sunflower oil), as well as their 54 binary and 108 ternary mixtures, were analyzed using FTIR-ATR spectroscopy in combination with principal component and discriminant analysis, partial least-squares, and principal component regression. It was found that the composition of all 166 samples can be excellently represented using only the first three principal components describing 98.29% of total variance in the selected spectral range (3035-2989, 1170-1140, 1120-1100, 1093-1047, and 930-890 cm-1). Factor scores in 3D space spanned by these three principal components form a tetrahedral-like arrangement: pure oils being at the vertices, binary mixtures at the edges, and ternary mixtures on the faces of a tetrahedron. To confirm the validity of results, we applied several cross-validation methods. Quantitative analysis was performed by minimization of root-mean-square error of cross-validation values regarding the spectral range, derivative order, and choice of method (partial least-squares or principal component regression), which resulted in excellent predictions for test sets (R2 > 0.99 in all cases). Additionally, experimentally more demanding gas chromatography analysis of fatty acid content was carried out for all specimens, confirming the results obtained by FTIR-ATR coupled with principal component analysis. However, FTIR-ATR provided a considerably better model for prediction of mixture composition than gas chromatography, especially for high oleic sunflower oil.
Domain wall suppression in trapped mixtures of Bose-Einstein condensates
NASA Astrophysics Data System (ADS)
Pepe, Francesco V.; Facchi, Paolo; Florio, Giuseppe; Pascazio, Saverio
2012-08-01
The ground-state energy of a binary mixture of Bose-Einstein condensates can be estimated for large atomic samples by making use of suitably regularized Thomas-Fermi density profiles. By exploiting a variational method on the trial densities the energy can be computed by explicitly taking into account the normalization condition. This yields analytical results and provides the basis for further improvement of the approximation. As a case study, we consider a binary mixture of 87Rb atoms in two different hyperfine states in a double-well potential and discuss the energy crossing between density profiles with different numbers of domain walls, as the number of particles and the interspecies interaction vary.
NASA Astrophysics Data System (ADS)
Kharga, D.; Inotani, D.; Hanai, R.; Ohashi, Y.
2017-06-01
We theoretically investigate the normal-state properties of a Bose-Fermi mixture with a strong attractive interaction between Fermi and Bose atoms. We extend the ordinary T-matrix approximation (TMA) with respect to Bose-Fermi pairing fluctuations, to include the Hugenholtz-Pines relation for all Bose Green's functions appearing in TMA self-energy diagrams. This extension is shown to be essentially important to correctly describe the physical properties of the Bose-Fermi mixture, especially near the Bose-Einstein condensation instability. Using this improved TMA, we clarify how the formation of composite fermions affects Bose and Fermi single-particle excitation spectra, over the entire interaction strength.
Two-component mixture model: Application to palm oil and exchange rate
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad
2014-12-01
Palm oil is a seed crop widely used in food and non-food products such as cookies, vegetable oil, cosmetics, and household products. Palm oil is grown mainly in Malaysia and Indonesia. However, demand for palm oil has grown rapidly over the years, and this growth has driven illegal logging of trees and destruction of natural habitat. Hence, the present paper investigates the relationship between the exchange rate and the palm oil price in Malaysia by using maximum likelihood estimation via the Newton-Raphson algorithm to fit a two-component mixture model. In addition, this paper proposes a mixture of normal distributions to accommodate the asymmetric and platykurtic characteristics of the time series data.
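A two-component normal mixture of the kind fitted above can be sketched with a few EM iterations (the paper maximizes the likelihood via Newton-Raphson; EM is shown here as the more common fitting route, on synthetic data with assumed component parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
# toy data: 70% from a narrow normal, 30% from a wider, shifted normal
x = np.concatenate([rng.normal(0.0, 1.0, 700), rng.normal(3.0, 2.0, 300)])

# initial guesses for weights, means, standard deviations
w, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior responsibility of each component for each point
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted updates of the parameters
    n_k = r.sum(axis=0)
    w = n_k / len(x)
    mu = (r * x[:, None]).sum(axis=0) / n_k
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)

print("weights:", w.round(2), "means:", mu.round(2), "sds:", sd.round(2))
```

With these well-separated components the fitted means land near the true values 0 and 3; heavily overlapping components, as in real exchange-rate data, converge more slowly and less sharply.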
Nerpin, Elisabet; Risérus, Ulf; Ingelsson, Erik; Sundström, Johan; Jobs, Magnus; Larsson, Anders; Basu, Samar; Ärnlöv, Johan
2008-01-01
OBJECTIVE—To investigate the association between insulin sensitivity and glomerular filtration rate (GFR) in the community, with prespecified subgroup analyses in normoglycemic individuals with normal GFR. RESEARCH DESIGN AND METHODS—We investigated the cross-sectional association between insulin sensitivity (M/I, assessed using euglycemic clamp) and cystatin C–based GFR in a community-based cohort of elderly men (Uppsala Longitudinal Study of Adult Men [ULSAM], n = 1,070). We also investigated whether insulin sensitivity predicted the incidence of renal dysfunction at a follow-up examination after 7 years. RESULTS—Insulin sensitivity was directly related to GFR (multivariable-adjusted regression coefficient for 1-unit higher M/I 1.19 [95% CI 0.69–1.68]; P < 0.001) after adjusting for age, glucometabolic variables (fasting plasma glucose, fasting plasma insulin, and 2-h glucose after an oral glucose tolerance test), cardiovascular risk factors (hypertension, dyslipidemia, and smoking), and lifestyle factors (BMI, physical activity, and consumption of tea, coffee, and alcohol). The positive multivariable-adjusted association between insulin sensitivity and GFR also remained statistically significant in participants with normal fasting plasma glucose, normal glucose tolerance, and normal GFR (n = 443; P < 0.02). In longitudinal analyses, higher insulin sensitivity at baseline was associated with lower risk of impaired renal function (GFR <50 ml/min per 1.73 m2) during follow-up independently of glucometabolic variables (multivariable-adjusted odds ratio for 1-unit higher of M/I 0.58 [95% CI 0.40–0.84]; P < 0.004). CONCLUSIONS—Our data suggest that impaired insulin sensitivity may be involved in the development of renal dysfunction at an early stage, before the onset of diabetes or prediabetic glucose elevations. Further studies are needed in order to establish causality. PMID:18509205
Prabitha, Vasumathi Gopala; Suchetha, Sambasivan; Jayanthi, Jayaraj Lalitha; Baiju, Kamalasanan Vijayakumary; Rema, Prabhakaran; Anuraj, Koyippurath; Mathews, Anita; Sebastian, Paul; Subhash, Narayanan
2016-01-01
Diffuse reflectance (DR) spectroscopy is a non-invasive, real-time, and cost-effective tool for early detection of malignant changes in squamous epithelial tissues. The present study aims to evaluate the diagnostic power of diffuse reflectance spectroscopy for non-invasive discrimination of cervical lesions in vivo. A clinical trial was carried out on 48 sites in 34 patients by recording DR spectra using a point-monitoring device with white light illumination. The acquired data were analyzed and classified using multivariate statistical analysis based on principal component analysis (PCA) and linear discriminant analysis (LDA). Diagnostic accuracies were validated using random number generators. The receiver operating characteristic (ROC) curves were plotted for evaluating the discriminating power of the proposed statistical technique. An algorithm was developed and used to classify non-diseased (normal) from diseased sites (abnormal) with a sensitivity of 72 % and specificity of 87 %. While low-grade squamous intraepithelial lesion (LSIL) could be discriminated from normal with a sensitivity of 56 % and specificity of 80 %, and high-grade squamous intraepithelial lesion (HSIL) from normal with a sensitivity of 89 % and specificity of 97 %, LSIL could be discriminated from HSIL with 100 % sensitivity and specificity. The areas under the ROC curves were 0.993 (95 % confidence interval (CI) 0.0 to 1) and 1 (95 % CI 1) for the discrimination of HSIL from normal and HSIL from LSIL, respectively. The results of the study show that DR spectroscopy could be used along with multivariate analytical techniques as a non-invasive technique to monitor cervical disease status in real time.
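The reported sensitivity, specificity, and ROC areas are all derived from classifier scores; a minimal sketch of that bookkeeping (on synthetic discriminant scores, not the study's data) is:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
scores_normal = rng.normal(0.0, 1.0, 40)   # hypothetical scores, non-diseased sites
scores_lesion = rng.normal(2.5, 1.0, 40)   # hypothetical scores, diseased sites
y = np.r_[np.zeros(40), np.ones(40)]
s = np.r_[scores_normal, scores_lesion]

auc = roc_auc_score(y, s)                  # area under the ROC curve
fpr, tpr, thr = roc_curve(y, s)
# choose the operating point maximizing Youden's J = sensitivity + specificity - 1
k = (tpr - fpr).argmax()
print(f"AUC={auc:.3f} sensitivity={tpr[k]:.2f} specificity={1 - fpr[k]:.2f}")
```

An AUC of 1, as reported for HSIL versus LSIL, corresponds to score distributions that do not overlap at all, so a threshold exists with 100% sensitivity and specificity.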
Fink, Stephen P.; Yamauchi, Mai; Nishihara, Reiko; Jung, Seungyoun; Kuchiba, Aya; Wu, Kana; Cho, Eunyoung; Giovannucci, Edward; Fuchs, Charles S.; Ogino, Shuji; Markowitz, Sanford D.; Chan, Andrew T.
2014-01-01
Aspirin use reduces the risk of colorectal neoplasia, at least in part, through inhibition of prostaglandin-endoperoxide synthase 2 (PTGS2, cyclooxygenase 2)-related pathways. Hydroxyprostaglandin dehydrogenase 15-(NAD) (15-PGDH, HPGD) is downregulated in colorectal cancers and functions as a metabolic antagonist of PTGS2. We hypothesized that the effect of aspirin may be antagonized by low 15-PGDH expression in the normal colon. In the Nurses’ Health Study and the Health Professionals Follow-up Study, we collected data on aspirin use and other risk factors every two years and followed up participants for diagnoses of colorectal cancer. Duplication-method Cox proportional, multivariable-adjusted, cause-specific hazards regression for competing risks data was used to compute hazard ratios (HRs) for incident colorectal cancer according to 15-PGDH mRNA expression level measured in normal mucosa from colorectal cancer resections. Among 127,865 participants, we documented 270 colorectal cancer cases that developed during 3,166,880 person-years of follow-up and from which we could assess 15-PGDH expression. Compared with nonuse, regular aspirin use was associated with lower risk of colorectal cancer that developed within a background of colonic mucosa with high 15-PGDH expression (multivariable HR=0.49; 95% CI, 0.34–0.71), but not with low 15-PGDH expression (multivariable HR=0.90; 95% CI, 0.63–1.27) (P for heterogeneity=0.018). Regular aspirin use was associated with lower incidence of colorectal cancers arising in association with high 15-PGDH expression, but not with low 15-PGDH expression in normal colon mucosa. This suggests that 15-PGDH expression level in normal colon mucosa may serve as a biomarker which may predict stronger benefit from aspirin chemoprevention. PMID:24760190
Meehan, Cheryl L.; Hogan, Jennifer N.; Morfeld, Kari A.; Carlstead, Kathy
2016-01-01
As part of a multi-institutional study of zoo elephant welfare, we evaluated female elephants managed by zoos accredited by the Association of Zoos and Aquariums and applied epidemiological methods to determine what factors in the zoo environment are associated with reproductive problems, including ovarian acyclicity and hyperprolactinemia. Bi-weekly blood samples were collected from 95 African (Loxodonta africana) and 75 Asian (Elephas maximus) (8–55 years of age) elephants over a 12-month period for analysis of serum progestogens and prolactin. Females were categorized as normal cycling (regular 13- to 17-week cycles), irregular cycling (cycles longer or shorter than normal) or acyclic (baseline progestogens, <0.1 ng/ml throughout), and having Low/Normal (<14 or 18 ng/ml) or High (≥14 or 18 ng/ml) prolactin for Asian and African elephants, respectively. Rates of normal cycling, acyclicity and irregular cycling were 73.2, 22.5 and 4.2% for Asian, and 48.4, 37.9 and 13.7% for African elephants, respectively, all of which differed between species (P < 0.05). For African elephants, univariate assessment found that social isolation decreased and higher enrichment diversity increased the chance a female would cycle normally. The strongest multi-variable models included Age (positive) and Enrichment Diversity (negative) as important factors of acyclicity among African elephants. The Asian elephant data set was not robust enough to support multi-variable analyses of cyclicity status. Additionally, only 3% of Asian elephants were found to be hyperprolactinemic as compared to 28% of Africans, so predictive analyses of prolactin status were conducted on African elephants only. The strongest multi-variable model included Age (positive), Enrichment Diversity (negative), Alternate Feeding Methods (negative) and Social Group Contact (positive) as predictors of hyperprolactinemia. 
In summary, the incidence of ovarian cycle problems and hyperprolactinemia predominantly affects African elephants, and increases in social stability and feeding and enrichment diversity may have positive influences on hormone status. PMID:27416141
Optical assay for biotechnology and clinical diagnosis.
Moczko, Ewa; Cauchi, Michael; Turner, Claire; Meglinski, Igor; Piletsky, Sergey
2011-08-01
In this paper, we present an optical diagnostic assay consisting of a mixture of environment-sensitive fluorescent dyes combined with multivariate data analysis for quantitative and qualitative examination of biological and clinical samples. The performance of the assay is based on the analysis of the spectra of the selected fluorescent dyes, with an operational principle similar to that of electronic nose and electronic tongue systems. This approach has been successfully applied to the monitoring of growing cell cultures and the identification of gastrointestinal diseases in humans.
Multivariate analysis for scanning tunneling spectroscopy data
NASA Astrophysics Data System (ADS)
Yamanishi, Junsuke; Iwase, Shigeru; Ishida, Nobuyuki; Fujita, Daisuke
2018-01-01
We applied principal component analysis (PCA) to two-dimensional tunneling spectroscopy (2DTS) data obtained on a Si(111)-(7 × 7) surface to explore the effectiveness of multivariate analysis for interpreting 2DTS data. We demonstrated that several components that originated mainly from specific atoms at the Si(111)-(7 × 7) surface can be extracted by PCA. Furthermore, we showed that hidden components in the tunneling spectra can be decomposed (peak separation), which is difficult to achieve with normal 2DTS analysis without the support of theoretical calculations. Our analysis showed that multivariate analysis can be an additional powerful way to analyze 2DTS data and extract hidden information from a large amount of spectroscopic data.
1988-05-01
Emitted organics included in all models: CO (carbon monoxide), C:C (ethene), HCHO (formaldehyde), CCHO (acetaldehyde), RCHO (propionaldehyde and other) … of species in the mixture, and for proper use of this program, these files should be "normalized," i.e., the number of carbons in the mixture should … scenario in memory. Valid parmtypes are SCEN, PHYS, CHEM, VP, NSP, OUTP, SCHEDS. LIST ALLCOMP lists all available composition filenames. LIST ALLSCE …
ERIC Educational Resources Information Center
Solan, Harold A.
1987-01-01
This study involving 38 normally achieving fourth and fifth grade children confirmed previous studies indicating that both spatial-simultaneous (in which perceived stimuli are totally available at one point in time) and verbal-successive (information is presented in serial order) cognitive processing are important in normal learning. (DB)
Is the ML Chi-Square Ever Robust to Nonnormality? A Cautionary Note with Missing Data
ERIC Educational Resources Information Center
Savalei, Victoria
2008-01-01
Normal theory maximum likelihood (ML) is by far the most popular estimation and testing method used in structural equation modeling (SEM), and it is the default in most SEM programs. Even though this approach assumes multivariate normality of the data, its use can be justified on the grounds that it is fairly robust to the violations of the…
Mixture model normalization for non-targeted gas chromatography/mass spectrometry metabolomics data.
Reisetter, Anna C; Muehlbauer, Michael J; Bain, James R; Nodzenski, Michael; Stevens, Robert D; Ilkayeva, Olga; Metzger, Boyd E; Newgard, Christopher B; Lowe, William L; Scholtens, Denise M
2017-02-02
Metabolomics offers a unique integrative perspective for health research, reflecting genetic and environmental contributions to disease-related phenotypes. Identifying robust associations in population-based or large-scale clinical studies demands large numbers of subjects and therefore sample batching for gas-chromatography/mass spectrometry (GC/MS) non-targeted assays. When run over weeks or months, technical noise due to batch and run-order threatens data interpretability. Application of existing normalization methods to metabolomics is challenged by unsatisfied modeling assumptions and, notably, failure to address batch-specific truncation of low abundance compounds. To curtail technical noise and make GC/MS metabolomics data amenable to analyses describing biologically relevant variability, we propose mixture model normalization (mixnorm) that accommodates truncated data and estimates per-metabolite batch and run-order effects using quality control samples. Mixnorm outperforms other approaches across many metrics, including improved correlation of non-targeted and targeted measurements and superior performance when metabolite detectability varies according to batch. For some metrics, particularly when truncation is less frequent for a metabolite, mean centering and median scaling demonstrate comparable performance to mixnorm. When quality control samples are systematically included in batches, mixnorm is uniquely suited to normalizing non-targeted GC/MS metabolomics data due to explicit accommodation of batch effects, run order and varying thresholds of detectability. Especially in large-scale studies, normalization is crucial for drawing accurate conclusions from non-targeted GC/MS metabolomics data.
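The simpler baselines the paper compares mixnorm against can be sketched directly: per-metabolite, per-batch median scaling estimated from quality control (QC) samples on log abundances. This is only the baseline idea; mixnorm itself additionally models batch-specific truncation of low-abundance compounds, which this sketch does not reproduce, and the batch sizes and effect magnitudes below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_batches, qc_per_batch = 4, 5
batch_shift = rng.normal(0, 0.5, n_batches)          # technical batch effects
# log-abundances of one metabolite in QC samples; true biological level = 10
qc = 10 + batch_shift[:, None] + rng.normal(0, 0.1, (n_batches, qc_per_batch))

qc_medians = np.median(qc, axis=1)                   # per-batch QC median
correction = qc_medians - np.median(qc_medians)      # shift relative to overall level
normalized = qc - correction[:, None]                # remove the batch shift

print("batch-to-batch spread before:", round(float(np.ptp(qc_medians)), 3),
      "after:", round(float(np.ptp(np.median(normalized, axis=1))), 3))
```

After correction the per-batch medians coincide by construction; the remaining within-batch spread is what a downstream biological analysis sees.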
Metillo, Ephrime B; Ritz, David A
2003-02-01
Three mysid species showed differences in chemosensory feeding as judged from stereotyped food capturing responses to dissolved mixtures of feeding stimulant (either betaine-HCl or glycine) and suppressant (ammonium). The strongest responses were to 50:50 mixtures of both betaine-ammonium and glycine-ammonium solutions. In general, the response curve to the different mixtures tested was bell-shaped. Anisomysis mixta australis only showed the normal curve in response to the glycine-ammonium mixture. The platykurtic curve for Tenagomysis tasmaniae suggests a less optimal response to the betaine-HCl-ammonium solution. Paramesopodopsis rufa reacted more strongly to the betaine-ammonium than to the glycine-ammonium solutions, and more individuals of this species responded to both solutions than the other two species. It is suggested that these contrasting chemosensitivities of the three coexisting mysid species serve as a means of partitioning the feeding niche.
Microwave Determination of Water Mole Fraction in Humid Gas Mixtures
NASA Astrophysics Data System (ADS)
Cuccaro, R.; Gavioso, R. M.; Benedetto, G.; Madonna Ripa, D.; Fernicola, V.; Guianvarc'h, C.
2012-09-01
A small volume (65 cm3) gold-plated quasi-spherical microwave resonator has been used to measure the water vapor mole fraction x_w of H2O/N2 and H2O/air mixtures. This experimental technique exploits the high precision achievable in the determination of the cavity microwave resonance frequencies and is particularly sensitive to the presence of small concentrations of water vapor as a result of the high polarizability of this substance. The mixtures were prepared using the INRIM standard humidity generator for frost-point temperatures T_fp in the range between 241 K and 270 K and a commercial two-pressure humidity generator operated at a dew-point temperature between 272 K and 291 K. The experimental measurements compare favorably with the calculated molar fractions of the mixture supplied by the humidity generators, showing a normalized error lower than 0.8.
NASA Technical Reports Server (NTRS)
Palmer, Grant; Prabhu, Dinesh; Brandis, Aaron; McIntyre, Timothy J.
2011-01-01
Thermochemical relaxation behind a normal shock in Mars and Titan gas mixtures is simulated using a CFD solver, DPLR, for a hemisphere of 1 m radius; the thermochemical relaxation along the stagnation streamline is considered equivalent to the flow behind a normal shock. Flow simulations are performed for a Titan gas mixture (98% N2, 2% CH4 by volume) for shock speeds of 5.7 and 7.6 km/s and pressures ranging from 20 to 1000 Pa, and a Mars gas mixture (96% CO2, and 4% N2 by volume) for a shock speed of 8.6 km/s and freestream pressure of 13 Pa. For each case, the temperatures and number densities of chemical species obtained from the CFD flow predictions are used as an input to a line-by-line radiation code, NEQAIR. The NEQAIR code is then used to compute the spatial distribution of volumetric radiance starting from the shock front to the point where thermochemical equilibrium is nominally established. Computations of volumetric spectral radiance assume Boltzmann distributions over radiatively linked electronic states of atoms and molecules. The results of these simulations are compared against experimental data acquired in the X2 facility at the University of Queensland, Australia. The experimental measurements were taken over a spectral range of 310-450 nm where the dominant contributor to radiation is the CN violet band system. In almost all cases, the present approach of computing the spatial variation of post-shock volumetric radiance by applying NEQAIR along a stagnation line computed using a high-fidelity flow solver with good spatial resolution of the relaxation zone is shown to replicate trends in measured relaxation of radiance for both Mars and Titan gas mixtures.
de Oliveira, Rodrigo Rocha; de Lima, Kássio Michell Gomes; Tauler, Romà; de Juan, Anna
2014-07-01
This study describes two applications of a variant of the multivariate curve resolution alternating least squares (MCR-ALS) method with a correlation constraint. The first application describes the use of MCR-ALS for the determination of biodiesel concentrations in biodiesel blends using near infrared (NIR) spectroscopic data. In the second application, the proposed method allowed the determination of the synthetic antioxidant N,N'-Di-sec-butyl-p-phenylenediamine (PDA) present in biodiesel mixtures from different vegetable sources using UV-visible spectroscopy. A well-established multivariate regression algorithm, partial least squares (PLS), was used for comparison of the quantification performance of the models developed in both applications. The correlation constraint has been adapted to handle the presence of batch-to-batch matrix effects due to ageing, which might occur when different groups of samples are used to build a calibration model, as in the first application. Different data set configurations and diverse modes of application of the correlation constraint are explored, and guidelines are given for different types of analytical problems. For the correction of matrix effects among biodiesel samples, MCR-ALS outperformed PLS, reducing the relative error of prediction (RE) from 9.82% to 4.85% in the first application. For the determination of a minor compound with overlapped, weak spectroscopic signals, MCR-ALS gave a higher relative error (RE=3.16%) for prediction of PDA than PLS (RE=1.99%), but with the advantage of recovering the pure spectral profiles of analytes and interferences. The obtained results show the potential of the MCR-ALS method with correlation constraint to be adapted to diverse data set configurations and analytical problems related to the determination of biodiesel mixtures and added compounds therein.
Drug Stability Analysis by Raman Spectroscopy
Shende, Chetan; Smith, Wayne; Brouillette, Carl; Farquharson, Stuart
2014-01-01
Pharmaceutical drugs are available to astronauts to help them overcome the deleterious effects of weightlessness, sickness and injuries. Unfortunately, recent studies have shown that some of the drugs currently used may degrade more rapidly in space, losing their potency before their expiration dates. To complicate matters, the degradation products of some drugs can be toxic. Here, we present a preliminary investigation of the ability of Raman spectroscopy to quantify mixtures of four drugs; acetaminophen, azithromycin, epinephrine, and lidocaine, with their primary degradation products. The Raman spectra for the mixtures were replicated by adding the pure spectra of the drug and its degradant to determine the relative percent contributions using classical least squares. This multivariate approach allowed determining concentrations in ~10 min with a limit of detection of ~4% of the degradant. These results suggest that a Raman analyzer could be used to assess drug potency, nondestructively, at the time of use to ensure crewmember safety. PMID:25533308
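The classical least squares step described above models the measured mixture spectrum as a weighted sum of the pure drug and degradant spectra and solves for the weights. A minimal sketch on synthetic spectra (the Gaussian bands and the 15% degradation level are illustrative assumptions, not measured Raman data):

```python
import numpy as np

shift = np.linspace(200, 1800, 400)                  # Raman shift axis, cm-1
band = lambda center, width=25: np.exp(-((shift - center) / width) ** 2)

pure_drug = band(650) + 0.6 * band(1250)             # toy pure-drug spectrum
degradant = band(800) + 0.4 * band(1400)             # toy degradant spectrum
true_frac = 0.85                                     # assume 15% degraded
mixture = (true_frac * pure_drug + (1 - true_frac) * degradant
           + 0.01 * np.random.default_rng(4).standard_normal(shift.size))

# classical least squares: mixture ≈ A @ coef, columns of A are pure spectra
A = np.column_stack([pure_drug, degradant])
coef, *_ = np.linalg.lstsq(A, mixture, rcond=None)
frac = coef / coef.sum()                             # relative contributions
print(f"estimated drug fraction: {frac[0]:.3f}")
```

The ~4% limit of detection quoted above corresponds to the smallest degradant weight distinguishable from the spectral noise floor in this kind of fit.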
Baldovin-Stella stochastic volatility process and Wiener process mixtures
NASA Astrophysics Data System (ADS)
Peirano, P. P.; Challet, D.
2012-08-01
Starting from inhomogeneous time scaling and linear decorrelation between successive price returns, Baldovin and Stella recently proposed a powerful and consistent way to build a model describing the time evolution of a financial index. We first make it fully explicit by using Student distributions instead of power law-truncated Lévy distributions and show that the analytic tractability of the model extends to the larger class of symmetric generalized hyperbolic distributions and provide a full computation of their multivariate characteristic functions; more generally, we show that the stochastic processes arising in this framework are representable as mixtures of Wiener processes. The basic Baldovin and Stella model, while mimicking well volatility relaxation phenomena such as the Omori law, fails to reproduce other stylized facts such as the leverage effect or some time reversal asymmetries. We discuss how to modify the dynamics of this process in order to reproduce real data more accurately.
Katsarov, Plamen; Gergov, Georgi; Alin, Aylin; Pilicheva, Bissera; Al-Degs, Yahya; Simeonov, Vasil; Kassarova, Margarita
2018-03-01
The prediction power of partial least squares (PLS) and multivariate curve resolution-alternating least squares (MCR-ALS) methods have been studied for simultaneous quantitative analysis of the binary drug combination - doxylamine succinate and pyridoxine hydrochloride. Analysis of first-order UV overlapped spectra was performed using different PLS models - classical PLS1 and PLS2 as well as partial robust M-regression (PRM). These linear models were compared to MCR-ALS with equality and correlation constraints (MCR-ALS-CC). All techniques operated within the full spectral region and extracted maximum information for the drugs analysed. The developed chemometric methods were validated on external sample sets and were applied to the analyses of pharmaceutical formulations. The obtained statistical parameters were satisfactory for calibration and validation sets. All developed methods can be successfully applied for simultaneous spectrophotometric determination of doxylamine and pyridoxine both in laboratory-prepared mixtures and commercial dosage forms.
Best, Virginia; Mason, Christine R.; Swaminathan, Jayaganesh; Roverud, Elin; Kidd, Gerald
2017-01-01
In many situations, listeners with sensorineural hearing loss demonstrate reduced spatial release from masking compared to listeners with normal hearing. This deficit is particularly evident in the “symmetric masker” paradigm in which competing talkers are located to either side of a central target talker. However, there is some evidence that reduced target audibility (rather than a spatial deficit per se) under conditions of spatial separation may contribute to the observed deficit. In this study a simple “glimpsing” model (applied separately to each ear) was used to isolate the target information that is potentially available in binaural speech mixtures. Intelligibility of these glimpsed stimuli was then measured directly. Differences between normally hearing and hearing-impaired listeners observed in the natural binaural condition persisted for the glimpsed condition, despite the fact that the task no longer required segregation or spatial processing. This result is consistent with the idea that the performance of listeners with hearing loss in the spatialized mixture was limited by their ability to identify the target speech based on sparse glimpses, possibly as a result of some of those glimpses being inaudible. PMID:28147587
[Effect of different nutritional support on pancreatic secretion in acute pancreatitis].
Achkasov, E E; Pugaev, A V; Nabiyeva, Zh G; Kalachev, S V
The aim was to develop and justify optimal nutritional support in the early phase of acute pancreatitis (AP). 140 AP patients were enrolled and divided into groups by nutritional support: group I (n=70) - early enteral tube feeding (ETF) with balanced mixtures, group II (n=30) - early ETF with an oligopeptide mixture, group III (n=40) - total parenteral nutrition (TPN). Subgroups were defined by medication: A - Octreotide, B - Quamatel, C - Octreotide + Quamatel. Pancreatic secretion was evaluated from the clinical course, instrumental methods, and APUD-system hormone levels (secretin, cholecystokinin, somatostatin, vasoactive intestinal peptide). ETF was followed by pancreatic enlargement despite ongoing therapy, while TPN led to gradual reduction of pancreatic size to normal values. The α-amylase level progressively decreased in all groups; however, in patients who underwent ETF (groups I and II), mean values of the enzyme were significantly higher than with TPN (group III). Secretin, cholecystokinin and vasoactive intestinal peptide were increased in most cases, while the level of somatostatin was below normal in all groups. Enteral tube feeding (with balanced or oligopeptide mixtures) stimulates pancreatic secretion compared with TPN, but this negative effect is eliminated by antisecretory therapy. Dual medication (Octreotide + Quamatel) is preferable to monotherapy (Octreotide or Quamatel).
Bayesian Local Contamination Models for Multivariate Outliers
Page, Garritt L.; Dunson, David B.
2013-01-01
In studies where data are generated from multiple locations or sources it is common for there to exist observations that are quite unlike the majority. Motivated by the application of establishing a reference value in an inter-laboratory setting when outlying labs are present, we propose a local contamination model that is able to accommodate unusual multivariate realizations in a flexible way. The proposed method models the process level of a hierarchical model using a mixture with a parametric component and a possibly nonparametric contamination. Much of the flexibility in the methodology is achieved by allowing varying random subsets of the elements in the lab-specific mean vectors to be allocated to the contamination component. Computational methods are developed and the methodology is compared to three other possible approaches using a simulation study. We apply the proposed method to a NIST/NOAA sponsored inter-laboratory study which motivated the methodological development. PMID:24363465
Darwish, Hany W; Hassan, Said A; Salem, Maissa Y; El-Zeany, Badr A
2013-09-01
Four simple, accurate and specific methods were developed and validated for the simultaneous estimation of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in commercial tablets. The derivative spectrophotometric methods include Derivative Ratio Zero Crossing (DRZC) and Double Divisor Ratio Spectra-Derivative Spectrophotometry (DDRS-DS), while the multivariate calibrations used are Principal Component Regression (PCR) and Partial Least Squares (PLS). The proposed methods were applied successfully to the determination of the drugs in laboratory-prepared mixtures and in commercial pharmaceutical preparations. The validity of the proposed methods was assessed using the standard addition technique. The linearity of the proposed methods was investigated in the ranges of 2-32, 4-44 and 2-20 μg/mL for AML, VAL and HCT, respectively. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Singh, Veena D.; Daharwal, Sanjay J.
2017-01-01
Three multivariate calibration spectrophotometric methods were developed for the simultaneous estimation of Paracetamol (PARA), Enalapril maleate (ENM) and Hydrochlorothiazide (HCTZ) in tablet dosage form, namely multi-linear regression calibration (MLRC), trilinear regression calibration (TLRC) and classical least squares (CLS). The selectivity of the proposed methods was studied by analyzing laboratory-prepared ternary mixtures, and the methods were successfully applied to the combined dosage form. The proposed methods were validated as per ICH guidelines, and good accuracy, precision and specificity were confirmed within the concentration ranges of 5-35 μg/mL, 5-40 μg/mL and 5-40 μg/mL for PARA, HCTZ and ENM, respectively. The results were statistically compared with a reported HPLC method. Thus, the proposed methods can be effectively used for the routine quality control analysis of these drugs in commercial tablet dosage form.
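The classical least squares (CLS) step named above can be sketched numerically: CLS assumes absorbances mix linearly (Beer-Lambert), so a mixture spectrum is a concentration-weighted sum of pure-component spectra. The spectra and concentrations below are invented for illustration, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(7)

# CLS assumes Beer-Lambert additivity: mixture absorbance a = c @ K, where the
# rows of K are pure-component spectra. All numbers here are invented.
K = rng.random((3, 40))                   # 3 analytes x 40 wavelengths
c_true = np.array([10.0, 25.0, 15.0])     # hypothetical concentrations, ug/mL
a = c_true @ K + rng.normal(scale=1e-3, size=40)  # mixture spectrum + noise

# Recover the ternary mixture's concentrations by least squares
c_hat, *_ = np.linalg.lstsq(K.T, a, rcond=None)
```

With low noise and distinct pure spectra, the recovered concentrations closely match the true ones; collinear pure-component spectra would degrade this.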
Liu, Ya-Juan; André, Silvère; Saint Cristau, Lydia; Lagresle, Sylvain; Hannas, Zahia; Calvosa, Éric; Devos, Olivier; Duponchel, Ludovic
2017-02-01
Multivariate statistical process control (MSPC) is increasingly popular as a response to the challenge posed by large multivariate datasets from analytical instruments such as Raman spectroscopy, used for monitoring complex cell cultures in the biopharmaceutical industry. However, Raman spectroscopy for in-line monitoring often produces unsynchronized data sets, resulting in time-varying batches. Unsynchronized data sets are common in cell culture monitoring because spectroscopic measurements are generally recorded in an alternating fashion, with more than one optical probe connected in parallel to the same spectrometer. Synchronized batches are a prerequisite for the application of multivariate analyses such as multi-way principal component analysis (MPCA) in MSPC monitoring. Correlation optimized warping (COW) is a popular alignment method with satisfactory performance, but it had not previously been applied to synchronize the acquisition times of spectroscopic datasets in an MSPC application. In this paper we propose, for the first time, using COW to synchronize batches of varying duration analyzed with Raman spectroscopy. In a second step, we developed MPCA models at different time intervals based on the normal operating condition (NOC) batches synchronized by COW. New batches are finally projected onto the corresponding MPCA model. We monitored the evolution of the batches using two multivariate control charts based on Hotelling's T2 and Q. As the results illustrate, the MSPC model was able to identify abnormal operating conditions, including contaminated batches, which is of prime importance in cell culture monitoring. We showed that Raman-based MSPC monitoring can be used to diagnose batches deviating from the normal condition with higher efficacy than traditional diagnosis, which would save time and money in the biopharmaceutical industry. Copyright © 2016 Elsevier B.V. All rights reserved.
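The Hotelling's T2 and Q statistics behind such control charts can be sketched as follows. The data, shapes, and retained component count are hypothetical, and ordinary PCA stands in for the paper's multi-way MPCA:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "normal operating condition" data: 50 batches x 200 spectral variables.
X = rng.normal(size=(50, 200))

mu = X.mean(axis=0)
Xc = X - mu
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3                                   # retained principal components
P = Vt[:k].T                            # loadings, shape (200, k)
lam = s[:k] ** 2 / (X.shape[0] - 1)     # variance captured by each component

def t2_and_q(x):
    """Hotelling's T^2 and Q (squared prediction error) for one new spectrum."""
    xc = x - mu
    t = xc @ P                          # scores in the PC subspace
    T2 = np.sum(t ** 2 / lam)           # Mahalanobis distance within the model
    resid = xc - t @ P.T                # part of x the model does not explain
    Q = resid @ resid
    return T2, Q

# A new batch observation would be flagged if T2 or Q exceeds a control limit.
T2, Q = t2_and_q(rng.normal(size=200))
```

In practice the control limits for T2 and Q are set from the NOC data (e.g. F- or chi-squared-based approximations), which this sketch omits.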
Liu, Fei; Ye, Lanhan; Peng, Jiyu; Song, Kunlin; Shen, Tingting; Zhang, Chu; He, Yong
2018-02-27
Fast detection of heavy metals is very important for ensuring the quality and safety of crops. Laser-induced breakdown spectroscopy (LIBS), coupled with uni- and multivariate analysis, was applied for quantitative analysis of copper in three kinds of rice (Jiangsu rice, regular rice, and Simiao rice). For univariate analysis, three pre-processing methods were applied to reduce fluctuations: background normalization, the internal standard method, and the standard normal variate (SNV). Linear regression models showed a strong correlation between spectral intensity and Cu content, with an R2 above 0.97. The limit of detection (LOD) was around 5 ppm, lower than the tolerance limit for copper in foods. For multivariate analysis, partial least squares regression (PLSR) showed its advantage in extracting effective information for prediction, and its sensitivity reached 1.95 ppm, while support vector machine regression (SVMR) performed better in both calibration and prediction sets, with Rc2 and Rp2 reaching 0.9979 and 0.9879, respectively. This study showed that LIBS can be considered a constructive tool for the quantification of copper contamination in rice. PMID:29495445
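The univariate side of such a calibration, including a common 3-sigma limit-of-detection estimate, might look like the sketch below; all intensities and blank readings are invented for illustration and are not the study's measurements:

```python
import numpy as np

# Hypothetical calibration: Cu concentration (ppm) vs. pre-processed intensity
conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 80.0])
intensity = np.array([0.02, 0.11, 0.21, 0.41, 0.79, 1.62])

slope, intercept = np.polyfit(conc, intensity, 1)

# Coefficient of determination of the univariate linear model
pred = slope * conc + intercept
ss_res = np.sum((intensity - pred) ** 2)
ss_tot = np.sum((intensity - intensity.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

# A common 3-sigma LOD estimate: sd of blank replicates over the slope
blank_sd = np.std([0.018, 0.022, 0.020, 0.019, 0.021], ddof=1)
lod = 3 * blank_sd / slope
```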
Concrete pavement mixture design and analysis (MDA) : factors influencing drying shrinkage.
DOT National Transportation Integrated Search
2014-10-01
This literature review focuses on factors influencing drying shrinkage of concrete. Although the factors are normally interrelated, they can be categorized into three groups: paste quantity, paste quality, and other factors.
NASA Astrophysics Data System (ADS)
Yan, Wang-Ji; Ren, Wei-Xin
2016-12-01
Recent advances in signal processing and structural dynamics have spurred the adoption of transmissibility functions in academia and industry alike. Due to the inherent randomness of measurement and the variability of environmental conditions, uncertainty affects their application. This study focuses on statistical inference for raw scalar transmissibility functions modeled as complex ratio random variables. The goal is achieved through companion papers. This paper (Part I) is dedicated to the formal mathematical proofs. New theorems on the multivariate circularly-symmetric complex normal ratio distribution are proved on the basis of the principle of probabilistic transformation of continuous random vectors. Closed-form distributional formulas for multivariate ratios of correlated circularly-symmetric complex normal random variables are derived analytically. Several properties are then deduced as corollaries and lemmas to the new theorems. Monte Carlo simulation (MCS) is used to verify the accuracy of some representative cases. This work lays the mathematical groundwork for probabilistic models of raw scalar transmissibility functions, which are expounded in detail in Part II of this study.
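A Monte Carlo check of the kind the paper uses can be sketched as follows. The covariance structure chosen here (correlation induced through a shared component) is illustrative only, not one of the paper's cases; the checks exploit two symmetry properties of the ratio of exchangeable circularly-symmetric variables:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

def ccn(size):
    """Standard circularly-symmetric complex normal samples (unit variance)."""
    return (rng.normal(size=size) + 1j * rng.normal(size=size)) / np.sqrt(2)

# Build a correlated pair (x, y) through a shared component z.
z = ccn(n)
x = z + 0.5 * ccn(n)
y = z + 0.5 * ccn(n)

r = x / y                  # the complex ratio random variable
phase = np.angle(r)

# Circular symmetry makes the ratio's phase distribution symmetric about 0,
# and exchangeability of x and y puts the median of |r| at 1.
```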
Kang, Yanlei; Shao, Zhanying; Wang, Qiang; Hu, Xiurong; Yu, Dongdong
2018-05-26
Entecavir is used for the treatment of chronic hepatitis B through inhibition of hepatitis B virus. The anhydrous form of entecavir (ENT-A) often appears as a polymorphic impurity during the manufacturing of entecavir monohydrate (ENT-H), e.g. in granulation, drying and compression. Since different crystal forms may affect drug bioavailability and therapeutic effect, it is vital to control the ENT-A content of the drug product. This work aimed to develop methods for assessing the ENT-A weight percentage in ENT-H. Powder X-ray diffractometry (PXRD) and Raman spectrometric methods were applied. Binary mixtures with different ratios of pure ENT-H and pure ENT-A were scanned by PXRD and Raman to obtain spectra. Peak heights and peak areas versus weight percentage were then used to construct calibration curves. The best linear regression results for the PXRD and Raman methods were R2 = 0.9923 and R2 = 0.9953, respectively, over the weight ratio range of 2.1-20.2% w/w of ENT-A in binary mixtures. The limit of detection (LOD) of ENT-A was 0.38% and the limit of quantitation (LOQ) was 1.15% for the PXRD method; LOD and LOQ for the Raman method were 0.48% and 1.16%. The results showed that both the PXRD and Raman methods were precise and accurate and could be used to measure ENT-A content in the selected weight percentage range. A partial least squares (PLS) algorithm with four data pre-processing methods, including multiplicative scatter correction (MSC), standard normal variate (SNV), and first and second derivatives, was applied and evaluated using prediction errors. The best PLS performance was R2 = 0.9958, with RMSEC of 0.44% and RMSEP of 0.65%. Multivariate analysis of the Raman spectra gave results similarly good to those of univariate analysis, and would be advantageous when peaks in the spectra overlap. In summary, the proposed PXRD and Raman methods can be used for the quality control of ENT-H, with Raman the more promising method in industrial practice owing to its slightly better precision and accuracy and its time savings. Copyright © 2018 Elsevier B.V. All rights reserved.
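The standard normal variate (SNV) pre-processing mentioned above is simple to state: each spectrum is centered and scaled by its own standard deviation, which removes multiplicative scatter effects. A minimal sketch with invented spectra:

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: center and scale each spectrum (row) by its
    own mean and standard deviation, removing gain/offset scatter effects."""
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, ddof=1, keepdims=True)
    return (spectra - mean) / std

# Toy data: the same underlying signal under two different gain/offset effects
base = np.sin(np.linspace(0.0, 3.0, 50))
raw = np.vstack([1.2 * base + 0.3, 0.8 * base - 0.1])
corrected = snv(raw)      # both rows collapse onto the same corrected spectrum
```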
Ashraf-Khorassani, M; Yan, Q; Akin, A; Riley, F; Aurigemma, C; Taylor, L T
2015-10-30
Method development for normal-phase flash liquid chromatography traditionally employs preliminary screening by thin layer chromatography (TLC) with conventional solvents on bare silica. We report an extension to green flash chromatography via correlation of TLC migration results, obtained with conventional polar/nonpolar liquid mixtures, with packed-column supercritical fluid chromatography (SFC) retention times obtained by gradient elution on bare silica with a suite of carbon dioxide mobile-phase modifiers. The feasibility of TLC/SFC correlation is described individually for eight ternary mixtures, for a total of 24 neutral analytes. The criterion for TLC/SFC correlation was that SFC/UV/MS retention (tR) increases among the three resolved mixture components while TLC migration (Rf) decreases among the same components. Successful correlation of TLC to SFC was observed for most of the polar organic solvents tested, with the best results obtained via SFC on bare silica with methanol as the CO2 modifier and TLC on bare silica with a methanol/dichloromethane mixture. Copyright © 2015 Elsevier B.V. All rights reserved.
Feder, Paul I; Ma, Zhenxu J; Bull, Richard J; Teuschler, Linda K; Rice, Glenn
2009-01-01
In chemical mixtures risk assessment, the use of dose-response data developed for one mixture to estimate risk posed by a second mixture depends on whether the two mixtures are sufficiently similar. While evaluations of similarity may be made using qualitative judgments, this article uses nonparametric statistical methods based on the "bootstrap" resampling technique to address the question of similarity among mixtures of chemical disinfectant by-products (DBP) in drinking water. The bootstrap resampling technique is a general-purpose, computer-intensive approach to statistical inference that substitutes empirical sampling for theoretically based parametric mathematical modeling. Nonparametric, bootstrap-based inference involves fewer assumptions than parametric normal theory based inference. The bootstrap procedure is appropriate, at least in an asymptotic sense, whether or not the parametric, distributional assumptions hold, even approximately. The statistical analysis procedures in this article are initially illustrated with data from 5 water treatment plants (Schenck et al., 2009), and then extended using data developed from a study of 35 drinking-water utilities (U.S. EPA/AMWA, 1989), which permits inclusion of a greater number of water constituents and increased structure in the statistical models.
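The bootstrap idea described above, resampling with replacement and recomputing the statistic of interest, can be sketched with hypothetical concentration data (not the study's DBP measurements):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical skewed concentration measurements (not the study's DBP data)
sample = rng.lognormal(mean=1.0, sigma=0.5, size=60)

# Nonparametric bootstrap: resample with replacement, recompute the statistic
B = 2000
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(B)
])

# Percentile confidence interval for the mean; no normality assumption needed
lo, hi = np.percentile(boot_means, [2.5, 97.5])
```

The same resampling scheme works for any statistic (medians, correlations, similarity measures between mixtures), which is what makes it attractive when parametric distributional assumptions are doubtful.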
A nonlinear isobologram model with Box-Cox transformation to both sides for chemical mixtures.
Chen, D G; Pounds, J G
1998-12-01
The linear logistic isobologram is a commonly used and powerful graphical and statistical tool for analyzing the combined effects of simple chemical mixtures. In this paper a nonlinear isobologram model is proposed to analyze the joint action of chemical mixtures for quantitative dose-response relationships. This nonlinear isobologram model incorporates two additional parameters, Ymin and Ymax, to facilitate analysis of response data that are not constrained between 0 and 1, where Ymin and Ymax represent the minimal and maximal observed toxic responses. This nonlinear isobologram model for binary mixtures can be expressed as [formula: see text] In addition, a Box-Cox transformation to both sides is introduced to improve the goodness of fit and to provide a more robust model for achieving homogeneity and normality of the residuals. Finally, a confidence band is proposed for selected isobols, e.g., the median effective dose, to facilitate graphical and statistical analysis of the isobologram. The versatility of this approach is demonstrated using published data describing the toxicity of binary mixtures of citrinin and ochratoxin as well as new experimental data from our laboratory for mixtures of mercury and cadmium. PMID:9860894
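The Box-Cox transformation at the heart of the "both sides" approach has a simple closed form. A minimal sketch, with illustrative response values and an arbitrary lambda (the paper's lambda would be estimated from the data):

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox power transformation: (y**lam - 1)/lam, or log(y) when lam = 0."""
    y = np.asarray(y, dtype=float)
    if lam == 0:
        return np.log(y)
    return (y ** lam - 1.0) / lam

# "Both sides" means the same transformation is applied to the observed
# responses and to the model predictions before residuals are computed.
# These response values and lambda = 0.5 are purely illustrative.
y_obs = np.array([0.5, 1.0, 2.0, 4.0])
y_fit = np.array([0.6, 0.9, 2.2, 3.8])
residuals = box_cox(y_obs, 0.5) - box_cox(y_fit, 0.5)
```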
Antihyperlipidemic Effect of a Polyherbal Mixture in Streptozotocin-Induced Diabetic Rats
Shafiee-Nick, Reza; Rakhshandeh, Hassan; Borji, Abasalt
2013-01-01
The effects of a polyherbal mixture containing Allium sativum, Cinnamomum zeylanicum, Citrullus colocynthis, Juglans regia, Nigella sativa, Olea europaea, Punica granatum, Salvia officinalis, Teucrium polium, Trigonella foenum, Urtica dioica, and Vaccinium arctostaphylos were tested on biochemical parameters in diabetic rats. The animals were randomized into three groups: (1) normal control, (2) diabetic control, and (3) diabetic rats which received diet containing 15% (w/w) of this mixture for 4 weeks. Diabetes was induced by intraperitoneal injection of streptozotocin (55 mg/kg). At the end of experiment, the mixture had no significant effect on serum hepatic enzymes, aspartate aminotransferase, and alanine aminotransferase activities. However, the level of fasting blood glucose, water intake, and urine output in treated group was lower than that in diabetic control rats (P < 0.01). Also, the levels of triglyceride and total cholesterol in polyherbal mixture treated rats were significantly lower than those in diabetic control group (P < 0.05). Our results demonstrated that this polyherbal mixture has beneficial effects on blood glucose and lipid profile and it has the potential to be used as a dietary supplement for the management of diabetes. PMID:24383002
High/variable mixture ratio O2/H2 engine
NASA Technical Reports Server (NTRS)
Adams, A.; Parsley, R. C.
1988-01-01
Vehicle/engine analysis studies have identified the High/Dual Mixture Ratio O2/H2 Engine cycle as a leading candidate for an advanced Single Stage to Orbit (SSTO) propulsion system. This cycle is designed to allow operation at a higher than normal O/F ratio of 12 during liftoff and then transition to a more optimum O/F ratio of 6 at altitude. While operation at high mixture ratios lowers specific impulse, the resultant high propellant bulk density and high power density combine to minimize the influence of atmospheric drag and low altitude gravitational forces. Transition to a lower mixture ratio at altitude then provides improved specific impulse relative to a single mixture ratio engine that must select a mixture ratio that is balanced for both low and high altitude operation. This combination of increased altitude specific impulse and high propellant bulk density more than offsets the compromised low altitude performance and results in an overall mission benefit. Two areas of technical concern relative to the execution of this dual mixture ratio cycle concept are addressed. First, actions required to transition from high to low mixture ratio are examined, including an assessment of the main chamber environment as the main chamber mixture ratio passes through stoichiometric. Secondly, two approaches to meet a requirement for high turbine power at high mixture ratio condition are examined. One approach uses high turbine temperature to produce the power and requires cooled turbines. The other approach incorporates an oxidizer-rich preburner to increase turbine work capability via increased turbine mass flow.
Cain, Meghan K; Zhang, Zhiyong; Yuan, Ke-Hai
2017-10-01
Nonnormality of univariate data has been extensively examined previously (Blanca et al., Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 9(2), 78-84, 2013; Micceri, Psychological Bulletin, 105(1), 156, 1989). However, less is known about the potential nonnormality of multivariate data, although multivariate analysis is commonly used in psychological and educational research. Using univariate and multivariate skewness and kurtosis as measures of nonnormality, this study examined 1,567 univariate distributions and 254 multivariate distributions collected from authors of articles published in Psychological Science and the American Education Research Journal. We found that 74% of univariate distributions and 68% of multivariate distributions deviated from normality. In a simulation study using typical values of skewness and kurtosis that we collected, we found that the resulting Type I error rates were 17% in a t-test and 30% in a factor analysis under some conditions. Hence, we argue that it is time to routinely report skewness and kurtosis along with other summary statistics such as means and variances. To facilitate future reporting of skewness and kurtosis, we provide a tutorial on how to compute univariate and multivariate skewness and kurtosis in SAS, SPSS, R, and a newly developed Web application.
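Univariate skewness and kurtosis, and Mardia's multivariate skewness statistic, can be computed in a few lines. This sketch uses simulated data; note that `np.cov` divides by n-1, whereas Mardia's original statistic uses the maximum-likelihood covariance (divide by n), a negligible difference at this sample size:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Univariate skewness and excess kurtosis of a clearly right-skewed variable
x = rng.exponential(size=5000)
skew = stats.skew(x)        # ~2 for an exponential distribution
kurt = stats.kurtosis(x)    # excess kurtosis: 0 for a normal, ~6 here

# Mardia's multivariate skewness b_{1,p} for simulated 3-variate normal data
X = rng.normal(size=(500, 3))
Xc = X - X.mean(axis=0)
S_inv = np.linalg.inv(np.cov(X, rowvar=False))
G = Xc @ S_inv @ Xc.T       # g_ij = (x_i - xbar)' S^{-1} (x_j - xbar)
b1p = np.mean(G ** 3)       # near 0 for multivariate normal data
```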
Crépet, Amélie; Albert, Isabelle; Dervin, Catherine; Carlin, Frédéric
2007-01-01
A normal distribution and a mixture of two normal distributions, in a Bayesian approach using prevalence and concentration data, were used to establish the distribution of contamination of the food-borne pathogenic bacterium Listeria monocytogenes in unprocessed and minimally processed fresh vegetables. A total of 165 prevalence studies, including 15 studies with concentration data, were taken from the scientific literature and from technical reports and used for statistical analysis. The predicted mean of the normal distribution of the logarithms of viable L. monocytogenes per gram of fresh vegetables was −2.63 log viable L. monocytogenes organisms/g, and its standard deviation was 1.48 log viable L. monocytogenes organisms/g. These values were determined by treating one sample as contaminated in prevalence studies in which all samples were in fact negative; this deliberate overestimation is necessary to complete the calculations. With the mixture model, the predicted mean of the distribution of the logarithm of viable L. monocytogenes per gram of fresh vegetables was −3.38 log viable L. monocytogenes organisms/g and its standard deviation was 1.46 log viable L. monocytogenes organisms/g. The probabilities of fresh unprocessed and minimally processed vegetables being contaminated at concentrations higher than 1, 2, and 3 log viable L. monocytogenes organisms/g were 1.44, 0.63, and 0.17%, respectively. Introducing a sensitivity rate of 80 or 95% in the mixture model had a small effect on the estimation of contamination. In contrast, introducing a low sensitivity rate (40%) resulted in marked differences, especially at high percentiles. There was a significantly lower estimation of contamination in the papers and reports of 2000 to 2005 than in those of 1988 to 1999, and a lower estimation of contamination for leafy salads than for sprouts and other vegetables. The value of the mixture model for estimating microbial contamination is discussed.
PMID:17098926
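The tail-probability calculation behind such percentages can be sketched for the single-normal fit using the reported mean and standard deviation. The paper's quoted percentages come from its full Bayesian mixture fit, so the values here will not match them exactly:

```python
from scipy import stats

# Normal fit reported in the abstract for log10 viable counts per gram
mu, sigma = -2.63, 1.48

# P(contamination > 1, 2, 3 log organisms/g) under this single-normal model
p_exceed = [1 - stats.norm.cdf(t, loc=mu, scale=sigma) for t in (1, 2, 3)]
```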
Esteve-Turrillas, F A; Armenta, S; Garrigues, S; Pastor, A; de la Guardia, M
2007-03-21
A simple and fast method has been developed for the determination of benzene, toluene and the mixture of ethylbenzene and xylene isomers (BTEX) in soils. Samples were introduced into 10 mL standard glass vials of a headspace (HS) autosampler together with 150 microL of 2,6,10,14-tetramethylpentadecane, heated at 90 degrees C for 10 min, and introduced into the mass spectrometer via a transfer line heated at 250 degrees C as the interface. The volatile fraction of the samples was directly introduced into the source of the mass spectrometer, which was scanned from m/z 75 to 110. A partial least squares (PLS) multivariate calibration approach based on a classical 3(3) calibration design was built with mixtures of benzene, toluene and o-xylene in 2,6,10,14-tetramethylpentadecane for BTEX determination. Results obtained for BTEX analysis by HS-MS in different types of soil samples were comparable to those obtained by the reference HS-GC-MS procedure. Thus, the developed procedure allows fast identification and prediction of the BTEX present in samples without prior chromatographic separation.
Godoy-Caballero, María del Pilar; Culzoni, María Julia; Galeano-Díaz, Teresa; Acedo-Valenzuela, María Isabel
2013-02-06
This paper presents the development of a non-aqueous capillary electrophoresis method with UV detection, combined with multivariate curve resolution-alternating least squares (MCR-ALS), to carry out the resolution and quantitation of a mixture of six phenolic acids in virgin olive oil samples. p-Coumaric, caffeic, ferulic, 3,4-dihydroxyphenylacetic, vanillic and 4-hydroxyphenylacetic acids were the analytes under study. All of them present different absorption spectra and time profiles that overlap with the olive oil matrix interferences and with each other. The modeling strategy involves building a single MCR-ALS model composed of matrices augmented in the temporal mode, in which spectra remain invariant while time profiles may change from sample to sample. MCR-ALS was thus used to cope with the coeluting interferences, exploiting the second-order advantage inherent in this algorithm, which in addition can handle data sets deviating from trilinearity, like the data analyzed here. The method was applied first to resolve standard mixtures of the analytes randomly prepared in 1-propanol and then to real virgin olive oil samples, yielding recovery values near 100% in all cases. The importance and novelty of this methodology lie in the combination of non-aqueous capillary electrophoresis second-order data with the MCR-ALS algorithm, which allows the resolution of these compounds while simplifying the prior sample pretreatment stages. Copyright © 2012 Elsevier B.V. All rights reserved.
Bayesian multivariate Poisson abundance models for T-cell receptor data.
Greene, Joshua; Birtwistle, Marc R; Ignatowicz, Leszek; Rempala, Grzegorz A
2013-06-07
A major feature of an adaptive immune system is its ability to generate B- and T-cell clones capable of recognizing and neutralizing specific antigens. These clones recognize antigens with the help of surface molecules, called antigen receptors, acquired individually during the clonal development process. To ensure a response to a broad range of antigens, the number of different receptor molecules is extremely large, resulting in a huge clonal diversity of both B- and T-cell receptor populations and making their experimental comparison statistically challenging. To facilitate such comparisons, we propose a flexible parametric model of multivariate count data and illustrate its use in a simultaneous analysis of multiple antigen receptor populations derived from mammalian T-cells. The model relies on a representation of the observed receptor counts as a multivariate Poisson abundance mixture (mPAM). A Bayesian parameter fitting procedure is proposed, based on the complete posterior likelihood rather than the conditional one typically used in similar settings. The new procedure is shown to be considerably more efficient than its conditional counterpart (as measured by the Fisher information) in the regions of mPAM parameter space relevant to modeling T-cell data. Copyright © 2013 Elsevier Ltd. All rights reserved.
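One way to see how an abundance-mixture construction induces dependence across samples is the following generative sketch. The gamma mixing distribution and all parameter values are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Each clone draws an abundance parameter from a mixing distribution, then
# counts in each of p samples are conditionally independent Poisson draws.
n_clones, p = 1000, 3
lam = rng.gamma(shape=0.5, scale=20.0, size=n_clones)   # clone abundances
counts = rng.poisson(lam[:, None], size=(n_clones, p))  # counts per sample

# Sharing one abundance across columns induces positive correlation between
# samples, even though the Poisson draws are conditionally independent.
corr = np.corrcoef(counts, rowvar=False)
```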
Ma, Chunhui; Dastmalchi, Keyvan; Flores, Gema; Wu, Shi-Biao; Pedraza-Peñalosa, Paola; Long, Chunlin; Kennelly, Edward J
2013-04-10
There are many neotropical blueberries, and recent studies have shown that some have even stronger antioxidant activity than the well-known edible North American blueberries. Antioxidant marker compounds were predicted by applying multivariate statistics to data from LC-TOF-MS analysis and antioxidant assays of 3 North American blueberry species (Vaccinium corymbosum, Vaccinium angustifolium, and a defined mixture of Vaccinium virgatum with V. corymbosum) and 12 neotropical blueberry species (Anthopterus wardii, Cavendishia grandifolia, Cavendishia isernii, Ceratostema silvicola, Disterigma rimbachii, Macleania coccoloboides, Macleania cordifolia, Macleania rupestris, Satyria boliviana, Sphyrospermum buxifolium, Sphyrospermum cordifolium, and Sphyrospermum ellipticum). Fourteen antioxidant markers were detected, and 12 of these, including 7 anthocyanins, 3 flavonols, 1 hydroxycinnamic acid, and 1 iridoid glycoside, were identified. This application of multivariate analysis to bioactivity and mass data can be used for identification of pharmacologically active natural products and may help to determine which neotropical blueberry species will be prioritized for agricultural development. Also, the compositional differences between North American and neotropical blueberries were determined by chemometric analysis, and 44 marker compounds including 16 anthocyanins, 15 flavonoids, 7 hydroxycinnamic acid derivatives, 5 triterpene glycosides, and 1 iridoid glycoside were identified.
Maggio, Rubén M; Damiani, Patricia C; Olivieri, Alejandro C
2011-01-30
Liquid chromatographic-diode array detection data recorded for aqueous mixtures of 11 pesticides show the combined presence of strongly coeluting peaks, distortions in the time dimension between experimental runs, and, in certain test samples, potential interferents not modeled in the calibration phase. Due to the complexity of these phenomena, the data were processed by a second-order multivariate algorithm based on multivariate curve resolution-alternating least squares, which allows one to successfully model both the spectral and retention-time behavior of all sample constituents. This led to the accurate quantitation of all analytes in a set of validation samples: aldicarb sulfoxide, oxamyl, aldicarb sulfone, methomyl, 3-hydroxy-carbofuran, aldicarb, propoxur, carbofuran, carbaryl, 1-naphthol and methiocarb. Limits of detection in the range 0.1-2 μg mL(-1) were obtained. Additionally, the second-order advantage was achieved for several analytes in samples containing uncalibrated interferences. The limits of detection for all analytes were decreased by solid-phase pre-concentration to values comparable to those officially recommended, i.e., on the order of 5 ng mL(-1). Copyright © 2010 Elsevier B.V. All rights reserved.
Speciation of adsorbates on surface of solids by infrared spectroscopy and chemometrics.
Vilmin, Franck; Bazin, Philippe; Thibault-Starzyk, Frédéric; Travert, Arnaud
2015-09-03
Speciation, i.e. identification and quantification, of surface species on heterogeneous surfaces by infrared spectroscopy is important in many fields but remains a challenging task when facing strongly overlapped spectra of multiple adspecies. Here, we propose a new methodology combining state-of-the-art instrumental developments for quantitative infrared spectroscopy of adspecies with chemometrics tools, mainly a novel data processing algorithm called SORB-MCR (SOft modeling by Recursive Based-Multivariate Curve Resolution) and multivariate calibration. After formal transposition of the general linear mixture model to adsorption spectral data, the main issues, i.e. the validity of the Beer-Lambert law and rank deficiency problems, are discussed theoretically. The methodology is then illustrated through two case studies, each characterized by a specific type of rank deficiency: (i) speciation of physisorbed water species on a hydrated silica surface, and (ii) speciation (chemisorption and physisorption) of a silane probe molecule on a dehydrated silica surface. In both cases, we demonstrate the relevance of this approach, which leads to a thorough surface speciation based on comprehensive and fully interpretable multivariate quantitative models. Limitations and drawbacks of the methodology are also underlined. Copyright © 2015 Elsevier B.V. All rights reserved.
Rivero, Javier; Henríquez-Hernández, Luis Alberto; Luzardo, Octavio P; Pestano, José; Zumbado, Manuel; Boada, Luis D; Valerón, Pilar F
2016-03-30
Organochlorine pesticides (OCs) have been associated with breast cancer development and progression, but the mechanisms underlying this phenomenon are not well known. In this work, we evaluated the effects exerted on normal human mammary epithelial cells (HMEC) by the OC mixtures most frequently detected in healthy women (H-mixture) and in women diagnosed with breast cancer (BC-mixture), as identified in a previous case-control study conducted in Spain. Cytotoxicity and the gene expression profiles of human kinases (n=68) and non-kinases (n=26) were tested at concentrations similar to those described in the serum of those cases and controls. Although both mixtures caused a down-regulation of genes involved in the ATP binding process, our results clearly indicate that the two mixtures may exert very different effects on the gene expression profile of HMEC. Thus, while the BC-mixture up-regulated the expression of oncogenes associated with breast cancer (GFRA1 and BHLHB8), the H-mixture down-regulated the expression of tumor suppressor genes (EPHA4 and EPHB2). Our results indicate that the composition of the OC mixture could play a role in the initiation processes of breast cancer. In addition, the present results suggest that subtle changes in the composition and levels of pollutants involved in environmentally relevant mixtures might induce very different biological effects, which explains, at least partially, why some mixtures seem to be more carcinogenic than others. Nonetheless, our findings confirm that environmentally relevant pollutants may modulate the expression of genes closely related to carcinogenic processes in the breast, reinforcing the role exerted by the environment in the regulation of genes involved in breast carcinogenesis. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Kalayeh, H. M.; Landgrebe, D. A.
1983-01-01
A criterion which measures the quality of the estimate of the covariance matrix of a multivariate normal distribution is developed. Based on this criterion, the necessary number of training samples is predicted. Experimental results which are used as a guide for determining the number of training samples are included. Previously announced in STAR as N82-28109
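The phenomenon underlying such a criterion, that the accuracy of the sample covariance of a multivariate normal improves with the number of training samples, can be illustrated numerically. This sketch is not the paper's criterion; the dimension, the particular covariance, and the Frobenius-norm error measure are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
A = rng.standard_normal((d, d))
Sigma = A @ A.T + d * np.eye(d)          # a fixed "true" covariance, well conditioned

def cov_error(n, reps=20):
    """Average Frobenius error of the sample covariance from n training samples."""
    errs = []
    for _ in range(reps):
        X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
        errs.append(np.linalg.norm(np.cov(X, rowvar=False) - Sigma, "fro"))
    return float(np.mean(errs))

errors = {n: cov_error(n) for n in (20, 200, 2000)}
print({n: round(e, 2) for n, e in errors.items()})
```

The error shrinks roughly as 1/√n, so a target accuracy translates into a predicted minimum number of training samples.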
Subset Selection Procedures: A Review and an Assessment
1984-02-01
distance function (Alam and Rizvi, 1966; Gupta, 1966; Gupta and Studden, 1970), generalized variance (Gnanadesikan and Gupta, 1970), and multiple... Gnanadesikan (1966) considered a location-type procedure based on sample component means. Except in the case of the bivariate normal, only a lower bound of the... Frischtak, 1973; Gnanadesikan, 1966) for ranking multivariate normal populations, but the results in these cases are very limited in scope or are asymptotic
Mwanza, Jean-Claude; Warren, Joshua L; Hochberg, Jessica T; Budenz, Donald L; Chang, Robert T; Ramulu, Pradeep Y
2015-01-01
To determine the ability of frequency doubling technology (FDT) and scanning laser polarimetry with variable corneal compensation (GDx-VCC) to detect glaucoma when used individually and in combination. One hundred ten normal and 114 glaucomatous subjects were tested with the FDT C-20-5 screening protocol and the GDx-VCC. The discriminating ability was tested for each device individually and for both devices combined using GDx-NFI, GDx-TSNIT, the number of missed points on FDT, and normal or abnormal FDT. Measures of discrimination included sensitivity, specificity, area under the curve (AUC), Akaike's information criterion (AIC), and prediction confidence interval lengths. For detecting glaucoma regardless of severity, the multivariable model resulting from the combination of GDx-TSNIT, the number of abnormal points on FDT (NAP-FDT), and the interaction GDx-TSNIT×NAP-FDT (AIC: 88.28, AUC: 0.959, sensitivity: 94.6%, specificity: 89.5%) outperformed the best single-variable model provided by GDx-NFI (AIC: 120.88, AUC: 0.914, sensitivity: 87.8%, specificity: 84.2%). The multivariable model combining GDx-TSNIT, NAP-FDT, and the interaction GDx-TSNIT×NAP-FDT consistently provided better discriminating ability for detecting early, moderate, and severe glaucoma than the best single-variable models. The multivariable model including GDx-TSNIT, NAP-FDT, and the interaction GDx-TSNIT×NAP-FDT provides the best glaucoma prediction compared with all other multivariable and univariable models. Combining the FDT C-20-5 screening protocol and GDx-VCC improves glaucoma detection compared with using GDx or FDT alone.
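A multivariable logistic model with an interaction term of the kind described (marker₁ + marker₂ + marker₁×marker₂, scored by AUC) can be sketched on synthetic data. The variable names, scales, coefficients, and data below are invented for illustration and are not the study's fitted values.

```python
import numpy as np

# Synthetic stand-ins for the two markers; scales and coefficients are assumptions.
rng = np.random.default_rng(2)
n = 400
tsnit = rng.normal(50, 8, n)             # hypothetical "GDx-TSNIT"-like marker
nap = rng.poisson(3, n).astype(float)    # hypothetical abnormal-FDT-point count
logit_true = -9.0 + 0.12 * tsnit + 0.5 * nap + 0.02 * tsnit * nap
y = (rng.random(n) < 1 / (1 + np.exp(-logit_true))).astype(int)

# Standardize, then fit a logistic model with an interaction term by Newton-Raphson.
z1 = (tsnit - tsnit.mean()) / tsnit.std()
z2 = (nap - nap.mean()) / nap.std()
X = np.column_stack([np.ones(n), z1, z2, z1 * z2])
beta = np.zeros(4)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))

score = X @ beta
auc = (score[y == 1][:, None] > score[y == 0][None, :]).mean()  # Mann-Whitney AUC
print(round(auc, 3))
```

The AUC computed this way is the probability that a randomly chosen case scores above a randomly chosen control, which is the quantity the study's AUC values report.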
Bladder cancer diagnosis during cystoscopy using Raman spectroscopy
NASA Astrophysics Data System (ADS)
Grimbergen, M. C. M.; van Swol, C. F. P.; Draga, R. O. P.; van Diest, P.; Verdaasdonk, R. M.; Stone, N.; Bosch, J. H. L. R.
2009-02-01
Raman spectroscopy is an optical technique that can be used to obtain specific molecular information from biological tissues. It has been used successfully to differentiate normal and pre-malignant tissue in many organs. The goal of this study is to determine whether it is possible to distinguish normal tissue from bladder cancer using this system. The endoscopic Raman system consists of a 6 Fr endoscopic probe connected to a 785 nm diode laser and a spectral recording system. A total of 107 tissue samples were obtained from 54 patients with known bladder cancer during transurethral tumor resection. Immediately after surgical removal the samples were placed under the Raman probe and spectra were collected and stored for further analysis. The collected spectra were analyzed using multivariate statistical methods. In total 2949 Raman spectra were recorded ex vivo from cold cup biopsy samples with a 2-second integration time. A multivariate algorithm allowed differentiation of normal and malignant tissue with a sensitivity and specificity of 78.5% and 78.9%, respectively. The results show the possibility of discerning normal from malignant bladder tissue by means of Raman spectroscopy using a small fiber-based system. Despite the low number of samples, the results indicate that it might be possible to use this technique to grade identified bladder wall lesions during endoscopy.
SMURC: High-Dimension Small-Sample Multivariate Regression With Covariance Estimation.
Bayar, Belhassen; Bouaynaya, Nidhal; Shterenberg, Roman
2017-03-01
We consider a high-dimension low sample-size multivariate regression problem that accounts for correlation of the response variables. The system is underdetermined as there are more parameters than samples. We show that the maximum likelihood approach with covariance estimation is senseless because the likelihood diverges. We subsequently propose a normalization of the likelihood function that guarantees convergence. We call this method small-sample multivariate regression with covariance (SMURC) estimation. We derive an optimization problem and its convex approximation to compute SMURC. Simulation results show that the proposed algorithm outperforms the regularized likelihood estimator with known covariance matrix and the sparse conditional Gaussian graphical model. We also apply SMURC to the inference of the wing-muscle gene network of the Drosophila melanogaster (fruit fly).
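The core difficulty, an underdetermined likelihood when parameters outnumber samples, and the effect of replacing it with a penalized objective can be illustrated as follows. This is not the SMURC algorithm itself; the ridge penalty merely stands in for the paper's likelihood normalization, and all dimensions are assumptions.

```python
import numpy as np

# High-dimension, low-sample-size regression: p parameters, n < p samples.
rng = np.random.default_rng(3)
n, p = 10, 40
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.0, 1.5]                 # a few active coefficients
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# The normal equations are singular (rank at most n < p), so the
# unregularized maximum likelihood estimate is not unique.
rank = np.linalg.matrix_rank(X.T @ X)

# A penalized objective restores a unique, finite solution.
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
print(rank, p, bool(np.isfinite(beta_ridge).all()))
```

The rank check makes the degeneracy concrete: with 10 samples and 40 parameters, infinitely many coefficient vectors fit the data exactly, which is why the raw likelihood diverges and some form of normalization or penalty is required.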
The Hb A variant (beta73 Asp-->Leu) disrupts Hb S polymerization by a novel mechanism.
Adachi, Kazuhiko; Ding, Min; Surrey, Saul; Rotter, Maria; Aprelev, Alexey; Zakharov, Mikhail; Weng, Weijun; Ferrone, Frank A
2006-09-22
Polymerization of a 1:1 mixture of hemoglobin S (Hb S) and the artificial mutant HbAbeta73Leu produces a dramatic morphological change in the polymer domains in 1.0 M phosphate buffer that are a characteristic feature of polymer formation. Instead of feathery domains with quasi 2-fold symmetry that characterize polymerization of Hb S and all previously known mixtures such as Hb A/S and Hb F/S mixtures, these domains are compact structures of quasi-spherical symmetry. Solubility of Hb S/Abeta73Leu mixtures was similar to that of Hb S/F mixtures. Kinetics of polymerization indicated that homogeneous nucleation rates of Hb S/Abeta73Leu mixtures were the same as those of Hb S/F mixtures, while exponential polymer growth (B) of Hb S/Abeta73Leu mixtures was about three times slower than that of Hb S/F mixtures. Differential interference contrast (DIC) image analysis also showed that fibers in the mixture appear to elongate between three and five times more slowly than in equivalent Hb S/F mixtures, by direct measurements of exponential growth of mass of polymer in a domain. We propose that these results for Hb S/Abeta73Leu mixtures arise from a non-productive binding of the hybrid species of this mixture to the end of the growing polymer. This "cap" prohibits growth of polymers, but by nature is temporary, so that the net effect is a lowered growth rate of polymers. Such a cap is consistent with known features of the structure of the Hb S polymer. Domains would be more spherulitic because slower growth provides more opportunity for fiber bending to spread domains from their initial 2-fold symmetry. Moreover, since monomer depletion proceeds more slowly in this mixture, more homogeneous nucleation events occur, and the resulting gel has a far more granular character than normally seen in mixtures of non-polymerizing hemoglobins with Hb S. This mixture is likely to be less stiff than polymerized mixtures of other hybrids such as Hb S with Hb F, potentially providing a novel approach to therapy.
Madureira, Tânia Vieira; Cruzeiro, Catarina; Rocha, Maria João; Rocha, Eduardo
2011-09-01
Fish embryos are a particularly vulnerable stage of development, so they represent optimal targets for screening toxicological effects of waterborne xenobiotics. Herein, the toxicity potential of two mixtures of pharmaceuticals was evaluated using a zebrafish embryo test. One of the mixtures corresponds to an environmentally realistic scenario, and both contain carbamazepine, fenofibric acid, propranolol, trimethoprim and sulfamethoxazole. The results evidenced morphological alterations, such as spinal deformities and yolk-sac oedemas. Moreover, heart rates decreased after both mixture exposures, e.g., at 48 hpf, highest mixture versus blank control (47.8±4.9 and 55.8±3.7 beats/30 s, respectively). The tail lengths also diminished significantly, from 3208±145 μm in the blank control to 3130±126 μm in the highest mixture. The toxicological effects were concentration dependent. Mortality, hatching rate and the number of spontaneous movements were not affected. However, the low levels of pharmaceuticals did interfere with the normal development of zebrafish, which indicates risks for wild organisms. Copyright © 2011 Elsevier B.V. All rights reserved.
Dorazio, R.M.; Royle, J. Andrew
2003-01-01
We develop a parameterization of the beta-binomial mixture that provides sensible inferences about the size of a closed population when probabilities of capture or detection vary among individuals. Three classes of mixture models (beta-binomial, logistic-normal, and latent-class) are fitted to recaptures of snowshoe hares for estimating abundance and to counts of bird species for estimating species richness. In both sets of data, rates of detection appear to vary more among individuals (animals or species) than among sampling occasions or locations. The estimates of population size and species richness are sensitive to model-specific assumptions about the latent distribution of individual rates of detection. We demonstrate using simulation experiments that conventional diagnostics for assessing model adequacy, such as deviance, cannot be relied on for selecting classes of mixture models that produce valid inferences about population size. Prior knowledge about sources of individual heterogeneity in detection rates, if available, should be used to help select among classes of mixture models that are to be used for inference.
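The beta-binomial building block of such mixtures has a closed-form pmf, obtained by integrating the binomial likelihood over a Beta-distributed detection probability. A minimal sketch with assumed parameter values (T occasions, Beta(a, b) heterogeneity):

```python
import math

def log_beta(a, b):
    """log of the Beta function, via log-gamma for numerical stability."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def beta_binom_pmf(x, T, a, b):
    """P(X = x) for X ~ BetaBinomial(T, a, b): binomial coefficient times
    a ratio of Beta functions."""
    return math.exp(
        math.lgamma(T + 1) - math.lgamma(x + 1) - math.lgamma(T - x + 1)
        + log_beta(a + x, b + T - x) - log_beta(a, b)
    )

# Detection counts over T = 6 occasions with Beta(2, 5) individual detectability.
pmf = [beta_binom_pmf(x, 6, 2.0, 5.0) for x in range(7)]
print([round(p, 4) for p in pmf])
```

The pmf sums to one and has mean T·a/(a+b); the extra spread relative to a plain binomial is what captures individual heterogeneity in capture or detection rates.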
The aeromedical significance of sickle-cell trait : a review.
DOT National Transportation Integrated Search
1976-01-01
This report presents some of the technical background necessary for understanding the aeromedical importance of sickle-cell disease and the sickle-trait carrier, whose erythrocytes contain mixtures of hemoglobin S and normal hemoglobin A. This carrier...
Engström, Wilhelm; Darbre, Philippa; Eriksson, Staffan; Gulliver, Linda; Hultman, Tove; Karamouzis, Michalis V.; Klaunig, James E.; Mehta, Rekha; Moorwood, Kim; Sanderson, Thomas; Sone, Hideko; Vadgama, Pankaj; Wagemaker, Gerard; Ward, Andrew; Singh, Neetu; Al-Mulla, Fahd; Al-Temaimi, Rabeah; Amedei, Amedeo; Colacci, Anna Maria; Vaccari, Monica; Mondello, Chiara; Scovassi, A. Ivana; Raju, Jayadev; Hamid, Roslida A.; Memeo, Lorenzo; Forte, Stefano; Roy, Rabindra; Woodrick, Jordan; Salem, Hosni K.; Ryan, Elizabeth; Brown, Dustin G.; Bisson, William H.
2015-01-01
The aim of this work is to review current knowledge relating the established cancer hallmark, sustained cell proliferation, to the existence of chemicals present as low-dose mixtures in the environment. Normal cell proliferation is under tight control, i.e. cells respond to a signal to proliferate, and although most cells continue to proliferate into adult life, the multiplication ceases once the stimulatory signal disappears or if the cells are exposed to growth inhibitory signals. Under such circumstances, normal cells remain quiescent until they are stimulated to resume further proliferation. In contrast, tumour cells are unable to halt proliferation, either when subjected to growth inhibitory signals or in the absence of growth stimulatory signals. Environmental chemicals with carcinogenic potential may cause sustained cell proliferation by interfering with some cell proliferation control mechanisms, committing cells to an indefinite proliferative span. PMID:26106143
Catalytic distillation process
Smith, Jr., Lawrence A.
1982-01-01
A method for conducting chemical reactions and fractionation of the reaction mixture comprising feeding reactants to a distillation column reactor into a feed zone and concurrently contacting the reactants with a fixed bed catalytic packing to concurrently carry out the reaction and fractionate the reaction mixture. For example, a method for preparing methyl tertiary butyl ether in high purity from a mixed feed stream of isobutene and normal butene comprising feeding the mixed feed stream to a distillation column reactor into a feed zone at the lower end of a distillation reaction zone, and methanol into the upper end of said distillation reaction zone, which is packed with a properly supported cationic ion exchange resin, contacting the C.sub.4 feed and methanol with the catalytic distillation packing to react methanol and isobutene, and concurrently fractionating the ether from the column below the catalytic zone and removing normal butene overhead above the catalytic zone.
Catalytic distillation process
Smith, L.A. Jr.
1982-06-22
A method is described for conducting chemical reactions and fractionation of the reaction mixture comprising feeding reactants to a distillation column reactor into a feed zone and concurrently contacting the reactants with a fixed bed catalytic packing to concurrently carry out the reaction and fractionate the reaction mixture. For example, a method for preparing methyl tertiary butyl ether in high purity from a mixed feed stream of isobutene and normal butene comprising feeding the mixed feed stream to a distillation column reactor into a feed zone at the lower end of a distillation reaction zone, and methanol into the upper end of said distillation reaction zone, which is packed with a properly supported cationic ion exchange resin, contacting the C[sub 4] feed and methanol with the catalytic distillation packing to react methanol and isobutene, and concurrently fractionating the ether from the column below the catalytic zone and removing normal butene overhead above the catalytic zone.
Accumulation risk assessment for the flooding hazard
NASA Astrophysics Data System (ADS)
Roth, Giorgio; Ghizzoni, Tatiana; Rudari, Roberto
2010-05-01
One of the main consequences of demographic and economic development and of the globalization of markets and trade is the accumulation of risks. In most cases, the cumulus of risks arises from the geographic concentration of a number of vulnerable elements in a single place. For natural events, risk cumulus can be associated not only with an event's intensity but also with its extension. In this case, the magnitude can be such that large areas, possibly including many regions or even large portions of different countries, are struck by single catastrophic events. Among natural risks, the impact of the flooding hazard cannot be overstated. To cope with it, a variety of mitigation actions can be put in place: from the improvement of monitoring and alert systems to the development of hydraulic structures, through land use restrictions, civil protection, and financial and insurance plans. All of these options have social and economic impacts, either positive or negative, whose proper estimation should rely on the assumption of appropriate present and future flood risk scenarios. It is therefore necessary to identify proper statistical methodologies able to describe the multivariate aspects of the involved physical processes and their spatial dependence. In hydrology and meteorology, but also in finance and insurance practice, it was recognized early on that classical statistical distributions (e.g., the normal and gamma families) are of restricted use for modeling multivariate spatial data. Recent research efforts have therefore been directed towards developing statistical models capable of describing the forms of asymmetry manifest in data sets. This holds, in particular, for the quite frequent case of phenomena whose empirical outcome behaves in a non-normal fashion but still maintains some broad similarity with the multivariate normal distribution. Fruitful approaches use flexible models that include the normal distribution as a special or limiting case (e.g., the skew-normal or skew-t distributions). The present contribution attempts to provide a better estimate of the joint probability distribution able to describe flood events in a multi-site, multi-basin fashion. This goal is pursued through the multivariate skew-t distribution, which allows the joint probability distribution to be defined analytically. Performances of the skew-t distribution are discussed with reference to the Tanaro River in Northwestern Italy. To enhance the characteristics of the correlation structure, both nested and non-nested gauging stations are selected, with significantly different contributing areas.
Normal uniform mixture differential gene expression detection for cDNA microarrays
Dean, Nema; Raftery, Adrian E
2005-01-01
Background One of the primary tasks in analysing gene expression data is finding genes that are differentially expressed in different samples. Multiple testing issues due to the thousands of tests run make some of the more popular methods for doing this problematic. Results We propose a simple method, Normal Uniform Differential Gene Expression (NUDGE) detection for finding differentially expressed genes in cDNA microarrays. The method uses a simple univariate normal-uniform mixture model, in combination with new normalization methods for spread as well as mean that extend the lowess normalization of Dudoit, Yang, Callow and Speed (2002) [1]. It takes account of multiple testing, and gives probabilities of differential expression as part of its output. It can be applied to either single-slide or replicated experiments, and it is very fast. Three datasets are analyzed using NUDGE, and the results are compared to those given by other popular methods: unadjusted and Bonferroni-adjusted t tests, Significance Analysis of Microarrays (SAM), and Empirical Bayes for microarrays (EBarrays) with both Gamma-Gamma and Lognormal-Normal models. Conclusion The method gives a high probability of differential expression to genes known/suspected a priori to be differentially expressed and a low probability to the others. In terms of known false positives and false negatives, the method outperforms all multiple-replicate methods except for the Gamma-Gamma EBarrays method to which it offers comparable results with the added advantages of greater simplicity, speed, fewer assumptions and applicability to the single replicate case. An R package called nudge to implement the methods in this paper will be made available soon at . PMID:16011807
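The core NUDGE idea, a univariate normal-uniform mixture for log-ratios, can be sketched with a small EM fit: null genes modeled as normal, differentially expressed genes as uniform over the observed range. The simulated data, initial values, and iteration count below are assumptions for illustration, not the paper's analysis.

```python
import numpy as np

# Simulated log-ratios: 900 null genes (normal) and 100 DE genes (uniform).
rng = np.random.default_rng(4)
null = rng.normal(0.0, 0.5, 900)
de = rng.uniform(-4, 4, 100)
x = np.concatenate([null, de])

lo, hi = x.min(), x.max()
unif = 1.0 / (hi - lo)                     # fixed uniform density over observed range
pi, mu, sig = 0.5, 0.0, 1.0                # initial mixing weight and null parameters
for _ in range(200):                       # EM iterations
    norm_pdf = np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    # E-step: posterior probability that each gene is differentially expressed
    post = pi * unif / (pi * unif + (1 - pi) * norm_pdf)
    # M-step: update mixing weight and the null component's mean and spread
    pi = post.mean()
    w = 1 - post
    mu = (w * x).sum() / w.sum()
    sig = np.sqrt((w * (x - mu) ** 2).sum() / w.sum())

print(round(pi, 3), round(sig, 3))
```

The per-gene posterior `post` is exactly the "probability of differential expression" such a model reports as output; thresholding it replaces per-gene p-value adjustment.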
Braking System Integration in Dual Mode Systems
DOT National Transportation Integrated Search
1974-05-01
An optimal braking system for Dual Mode is a complex product of a vast number of multivariate, interdependent parameters that encompass on-guideway and off-guideway operation as well as normal and emergency braking. : Details of, and interrelations amo...
Optimal False Discovery Rate Control for Dependent Data
Xie, Jichun; Cai, T. Tony; Maris, John; Li, Hongzhe
2013-01-01
This paper considers the problem of optimal false discovery rate control when the test statistics are dependent. An optimal joint oracle procedure, which minimizes the false non-discovery rate subject to a constraint on the false discovery rate is developed. A data-driven marginal plug-in procedure is then proposed to approximate the optimal joint procedure for multivariate normal data. It is shown that the marginal procedure is asymptotically optimal for multivariate normal data with a short-range dependent covariance structure. Numerical results show that the marginal procedure controls false discovery rate and leads to a smaller false non-discovery rate than several commonly used p-value based false discovery rate controlling methods. The procedure is illustrated by an application to a genome-wide association study of neuroblastoma and it identifies a few more genetic variants that are potentially associated with neuroblastoma than several p-value-based false discovery rate controlling procedures. PMID:23378870
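For context, the p-value-based procedures that the proposed marginal method is compared against include step-up rules such as Benjamini-Hochberg, which controls FDR at level q for independent (or positively dependent) test statistics. A minimal sketch; the p-values are made up:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up rule: return a boolean mask of rejections."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    thresh = q * np.arange(1, m + 1) / m           # i-th smallest compared to q*i/m
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True                        # reject the k smallest p-values
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.9]
print(benjamini_hochberg(pvals, q=0.05).sum())      # prints 2
```

Because the rule looks only at marginal p-values, it ignores the dependence structure; exploiting that structure is precisely where the joint oracle procedure of the paper gains power.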
Bathke, Arne C.; Friedrich, Sarah; Pauly, Markus; Konietschke, Frank; Staffen, Wolfgang; Strobl, Nicolas; Höller, Yvonne
2018-01-01
To date, there is a lack of satisfactory inferential techniques for the analysis of multivariate data in factorial designs, when only minimal assumptions on the data can be made. Presently available methods are limited to very particular study designs or assume either multivariate normality or equal covariance matrices across groups, or they do not allow for an assessment of the interaction effects across within-subjects and between-subjects variables. We propose and methodologically validate a parametric bootstrap approach that does not suffer from any of the above limitations, and thus provides a rather general and comprehensive methodological route to inference for multivariate and repeated measures data. As an example application, we consider data from two different Alzheimer’s disease (AD) examination modalities that may be used for precise and early diagnosis, namely, single-photon emission computed tomography (SPECT) and electroencephalogram (EEG). These data violate the assumptions of classical multivariate methods, and indeed classical methods would not have yielded the same conclusions with regard to some of the factors involved. PMID:29565679
1976-01-01
[Recoverable figure captions from garbled record:] Comparison of the normalized experimental and predicted temporal behavior of the laser output pulse for a 20% CO and 80% N2 mixture; comparison of the normalized experimental and predicted temporal behavior of the laser output pulse for a 20% CO and 80% Ar mixture; predictions of the temporal variation of small... (plot data, including CO pressures of 700, 350, 200 and 100 Torr, are unrecoverable plot residue).
Step-wise supercritical extraction of carbonaceous residua
Warzinski, Robert P.
1987-01-01
A method of fractionating a mixture containing high boiling carbonaceous material and normally solid mineral matter includes processing with a plurality of different supercritical solvents. The mixture is treated with a first solvent of high critical temperature and solvent capacity to extract a large fraction as solute. The solute is released as liquid from solvent and successively treated with other supercritical solvents of different critical values to extract fractions of differing properties. Fractionation can be supplemented by solute reflux over a temperature gradient, pressure let down in steps and extractions at varying temperature and pressure values.
Infurna, Frank J; Grimm, Kevin J
2017-12-15
Growth mixture modeling (GMM) combines latent growth curve and mixture modeling approaches and is typically used to identify discrete trajectories following major life stressors (MLS). However, GMM is often applied to data that do not meet the statistical assumptions of the model (e.g., within-class normality), and researchers often do not test additional model constraints (e.g., homogeneity of variance across classes), which can lead to incorrect conclusions regarding the number and nature of the trajectories. We evaluate how these methodological assumptions influence trajectory size and identification in the study of resilience to MLS. We use data on changes in subjective well-being and depressive symptoms following spousal loss from the HILDA and HRS. Findings differ drastically when constraining the variances to be homogeneous versus heterogeneous across trajectories, with overextraction being more common when constraining the variances to be homogeneous. When the data are non-normally distributed, assuming normality increases the number of latent classes extracted. Our findings show that the assumptions typically underlying GMM are not tenable, influencing trajectory size and identification and, most importantly, misinforming conceptual models of resilience. The discussion focuses on how GMM can be leveraged to effectively examine trajectories of adaptation following MLS, and on avenues for future research. © The Author 2017. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Molecular Simulation of the Vapor-Liquid Phase Behavior of Lennard-Jones Mixtures in Porous Solids
2006-09-01
...sur la Catalyse, Centre National de la Recherche Scientifique, Groupe de Chimie Théorique, 2 Avenue Albert Einstein, 69626 Villeurbanne Cedex, France, and Groupe de Chimie Théorique, École Normale Supérieure de Lyon, 46 Allée d'Italie, 69364 Lyon Cedex 07, France. We present vapor...
Viscosity Difference Measurements for Normal and Para Liquid Hydrogen Mixtures
NASA Technical Reports Server (NTRS)
Webeler, R.; Bedard, F.
1961-01-01
The absence of experimental data in the literature concerning a viscosity difference for normal and equilibrium liquid hydrogen may be attributed to the limited reproducibility of "oscillating disk" measurements in a liquid-hydrogen environment. Indeed, there is disagreement over the viscosity values for equilibrium liquid hydrogen even without proton spin considerations. Measurements presented here represent the first application of the piezoelectric alpha quartz torsional oscillator technique to liquid-hydrogen viscosity measurements.
Yan, Luchun; Liu, Jiemin; Jiang, Shen; Wu, Chuandong; Gao, Kewei
2017-07-13
The olfactory evaluation function (e.g., odor intensity rating) of the e-nose remains one of the most challenging issues in research on odor pollution monitoring. Odor is normally produced by a set of stimuli, and odor interactions among constituents significantly influence the mixture's odor intensity. This study investigated the odor interaction principle in odor mixtures of aldehydes and of esters. A modified vector model (MVM) was then proposed, and it successfully demonstrated the similarity of the odor interaction pattern among odorants of the same type. Based on this regular interaction pattern, and unlike the determined empirical models of conventional approaches that fit only a specific odor mixture, the MVM distinctly simplified the odor intensity prediction of odor mixtures. Furthermore, the MVM provided a way of directly converting constituents' chemical concentrations to their mixture's odor intensity. By combining the MVM with the usual data-processing algorithms of the e-nose, a new e-nose system was established for odor intensity rating. Compared with instrumental analysis and human assessors, it performed accurately in both quantitative analysis (Pearson correlation coefficient 0.999 for individual aldehydes (n = 12), 0.996 for their binary mixtures (n = 36), and 0.990 for their ternary mixtures (n = 60)) and odor intensity assessment (0.980 for individual aldehydes (n = 15), 0.973 for their binary mixtures (n = 24), and 0.888 for their ternary mixtures (n = 25)). The observed regular interaction pattern is thus considered an important foundation for accelerating extensive application of olfactory evaluation in odor pollution monitoring.
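The classical vector model that the proposed MVM modifies predicts a binary mixture's intensity as a vector sum of the component intensities with an interaction angle α; the intensities and cos α values below are assumed for illustration.

```python
import math

def vector_model(i1, i2, cos_alpha):
    """Classical vector model for a binary odor mixture: the mixture's
    intensity is the magnitude of the vector sum of the two components,
    with cos_alpha encoding the strength of the odor interaction."""
    return math.sqrt(i1**2 + i2**2 + 2 * i1 * i2 * cos_alpha)

print(round(vector_model(3.0, 4.0, 1.0), 3))   # full additivity: 7.0
print(round(vector_model(3.0, 4.0, 0.0), 3))   # orthogonal components: 5.0
```

At cos α = 1 the components add fully, while smaller values yield the hypo-additivity commonly observed in odor mixtures; a "modified" model adjusts how this interaction term is parameterized.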
21 CFR 520.903b - Febantel suspension.
Code of Federal Regulations, 2013 CFR
2013-04-01
...) Limitations. Administer by stomach tube or drench, or by mixing well into a portion of the normal grain ration...; administer mixture by stomach tube at rate of 18 milliliters per 100 pounds of body weight. [45 FR 8587, Feb...
21 CFR 520.903b - Febantel suspension.
Code of Federal Regulations, 2014 CFR
2014-04-01
...) Limitations. Administer by stomach tube or drench, or by mixing well into a portion of the normal grain ration...; administer mixture by stomach tube at rate of 18 milliliters per 100 pounds of body weight. [45 FR 8587, Feb...
21 CFR 520.903b - Febantel suspension.
Code of Federal Regulations, 2011 CFR
2011-04-01
...) Limitations. Administer by stomach tube or drench, or by mixing well into a portion of the normal grain ration...; administer mixture by stomach tube at rate of 18 milliliters per 100 pounds of body weight. [45 FR 8587, Feb...
21 CFR 520.903b - Febantel suspension.
Code of Federal Regulations, 2010 CFR
2010-04-01
...) Limitations. Administer by stomach tube or drench, or by mixing well into a portion of the normal grain ration...; administer mixture by stomach tube at rate of 18 milliliters per 100 pounds of body weight. [45 FR 8587, Feb...
Arm structure in normal spiral galaxies, 1: Multivariate data for 492 galaxies
NASA Technical Reports Server (NTRS)
Magri, Christopher
1994-01-01
Multivariate data have been collected as part of an effort to develop a new classification system for spiral galaxies, one which is not necessarily based on subjective morphological properties. A sample of 492 moderately bright northern Sa and Sc spirals was chosen for future statistical analysis. New observations were made at 20 and 21 cm; the latter data are described in detail here. Infrared Astronomical Satellite (IRAS) fluxes were obtained from archival data. Finally, new estimates of arm pattern randomness and of local environmental harshness were compiled for most sample objects.
Huizenga, Hilde M; Crone, Eveline A; Jansen, Brenda J
2007-11-01
In the standard Iowa Gambling Task (IGT), participants have to choose repeatedly from four options. Each option is characterized by a constant gain, and by the frequency and amount of a probabilistic loss. Crone and van der Molen (2004) reported that school-aged children and even adolescents show marked deficits in IGT performance. In this study, we have re-analyzed the data with a multivariate normal mixture analysis to show that these developmental changes can be explained by a shift from unidimensional to multidimensional proportional reasoning (Siegler, 1981; Jansen & van der Maas, 2002). More specifically, the results show a gradual shift with increasing age from (a) guessing with a slight tendency to consider frequency of loss, to (b) focusing on frequency of loss, to (c) considering both frequency and amount of probabilistic loss. In the latter case, participants only considered options with low-frequency loss and then chose the option with the lowest amount of loss. Performance improved in a reversed task, in which punishment was placed up front and gain was delivered unexpectedly. In this reversed task, young children guess, already with a slight tendency to consider both the frequency and amount of gain; this strategy becomes more pronounced with age. We argue that these findings have important implications for the interpretation of IGT performance, as well as for methods to analyze this performance.
A mixed model framework for teratology studies.
Braeken, Johan; Tuerlinckx, Francis
2009-10-01
A mixed model framework is presented to model the characteristic multivariate binary anomaly data as provided in some teratology studies. The key features of the model are the incorporation of covariate effects, a flexible random effects distribution by means of a finite mixture, and the application of copula functions to better account for the relation structure of the anomalies. The framework is motivated by data of the Boston Anticonvulsant Teratogenesis study and offers an integrated approach to investigate substantive questions, concerning general and anomaly-specific exposure effects of covariates, interrelations between anomalies, and objective diagnostic measurement.
MODELING SNAKE MICROHABITAT FROM RADIOTELEMETRY STUDIES USING POLYTOMOUS LOGISTIC REGRESSION
Multivariate analysis of snake microhabitat has historically used techniques that were derived under assumptions of normality and common covariance structure (e.g., discriminant function analysis, MANOVA). In this study, polytomous logistic regression (PLR), which does not require ...
Estimating Mixture of Gaussian Processes by Kernel Smoothing
Huang, Mian; Li, Runze; Wang, Hansheng; Yao, Weixin
2014-01-01
When the functional data are not homogeneous, e.g., there exist multiple classes of functional curves in the dataset, traditional estimation methods may fail. In this paper, we propose a new estimation procedure for the Mixture of Gaussian Processes, to incorporate both functional and inhomogeneous properties of the data. Our method can be viewed as a natural extension of high-dimensional normal mixtures. However, the key difference is that smoothed structures are imposed for both the mean and covariance functions. The model is shown to be identifiable, and can be estimated efficiently by a combination of the ideas from EM algorithm, kernel regression, and functional principal component analysis. Our methodology is empirically justified by Monte Carlo simulations and illustrated by an analysis of a supermarket dataset. PMID:24976675
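One ingredient of the estimation procedure described above is kernel smoothing of a mean function. The sketch below shows only that ingredient, a Nadaraya-Watson kernel regression on a single synthetic noisy curve; it is not the full mixture-of-Gaussian-processes estimator, and the curve, noise level, and bandwidth are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy observations of one functional curve on an irregular grid.
t = np.sort(rng.uniform(0, 1, 300))
y = np.sin(2 * np.pi * t) + rng.normal(0, 0.3, t.size)

def nw_smooth(t_obs, y_obs, t_new, h):
    """Nadaraya-Watson kernel regression with a Gaussian kernel."""
    w = np.exp(-0.5 * ((t_new[:, None] - t_obs[None, :]) / h) ** 2)
    return (w * y_obs).sum(axis=1) / w.sum(axis=1)

grid = np.linspace(0.05, 0.95, 19)
fit = nw_smooth(t, y, grid, h=0.05)
err = np.abs(fit - np.sin(2 * np.pi * grid)).max()
print(round(err, 3))
```

In the mixture setting, this smoother would be applied inside each EM iteration, weighting observations by the responsibilities of each latent cluster.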
Magnetic Reconnection and Modification of the Hall Physics Due to Cold Ions at the Magnetopause
NASA Technical Reports Server (NTRS)
Andre, M.; Li, W.; Toledo-Redondo, S.; Khotyaintsev, Yu. V.; Vaivads, A.; Graham, D. B.; Norgren, C.; Burch, J.; Lindqvist, P.-A.; Marklund, G.;
2016-01-01
Observations by the four Magnetospheric Multiscale spacecraft are used to investigate the Hall physics of a magnetopause magnetic reconnection separatrix layer. Inside this layer of currents and strong normal electric fields, cold (eV) ions of ionospheric origin can remain frozen-in together with the electrons. The cold ions reduce the Hall current. Using a generalized Ohms law, the electric field is balanced by the sum of the terms corresponding to the Hall current, the v x B drifting cold ions, and the divergence of the electron pressure tensor. A mixture of hot and cold ions is common at the subsolar magnetopause. A mixture of length scales caused by a mixture of ion temperatures has significant effects on the Hall physics of magnetic reconnection.
Hamel, Sandra; Yoccoz, Nigel G; Gaillard, Jean-Michel
2017-05-01
Mixed models are now well-established methods in ecology and evolution because they allow accounting for and quantifying within- and between-individual variation. However, the required normal distribution of the random effects can often be violated by the presence of clusters among subjects, which leads to multi-modal distributions. In such cases, using what is known as mixture regression models might offer a more appropriate approach. These models are widely used in psychology, sociology, and medicine to describe the diversity of trajectories occurring within a population over time (e.g. psychological development, growth). In ecology and evolution, however, these models are seldom used even though understanding changes in individual trajectories is an active area of research in life-history studies. Our aim is to demonstrate the value of using mixture models to describe variation in individual life-history tactics within a population, and hence to promote the use of these models by ecologists and evolutionary ecologists. We first ran a set of simulations to determine whether and when a mixture model allows teasing apart latent clustering, and to contrast the precision and accuracy of estimates obtained from mixture models versus mixed models under a wide range of ecological contexts. We then used empirical data from long-term studies of large mammals to illustrate the potential of using mixture models for assessing within-population variation in life-history tactics. Mixture models performed well in most cases, except for variables following a Bernoulli distribution and when sample size was small. The four selection criteria we evaluated [Akaike information criterion (AIC), Bayesian information criterion (BIC), and two bootstrap methods] performed similarly well, selecting the right number of clusters in most ecological situations. 
We then showed that the normality of random effects implicitly assumed by evolutionary ecologists when using mixed models was often violated in life-history data. Mixed models were quite robust to this violation in the sense that fixed effects were unbiased at the population level. However, fixed effects at the cluster level and random effects were better estimated using mixture models. Our empirical analyses demonstrated that using mixture models facilitates the identification of the diversity of growth and reproductive tactics occurring within a population. Therefore, using this modelling framework allows testing for the presence of clusters and, when clusters occur, provides reliable estimates of fixed and random effects for each cluster of the population. In the presence or expectation of clusters, using mixture models offers a suitable extension of mixed models, particularly when evolutionary ecologists aim at identifying how ecological and evolutionary processes change within a population. Mixture regression models therefore provide a valuable addition to the statistical toolbox of evolutionary ecologists. As these models are complex and have their own limitations, we provide recommendations to guide future users. © 2016 Cambridge Philosophical Society.
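The information-criterion side of cluster-number selection discussed above reduces to simple arithmetic once each candidate model has been fitted. The log-likelihoods and parameter counts below are hypothetical numbers chosen to illustrate how AIC and BIC can disagree, with BIC penalizing complexity more heavily.

```python
import numpy as np

# Hypothetical maximized log-likelihoods for mixtures with k = 1..4
# clusters, each with p_k free parameters, fitted to n observations.
n = 500
loglik = np.array([-1450.0, -1320.0, -1310.0, -1306.0])
params = np.array([2, 5, 8, 11])

aic = -2 * loglik + 2 * params            # AIC = -2 logL + 2p
bic = -2 * loglik + params * np.log(n)    # BIC = -2 logL + p log n

best_aic = int(np.argmin(aic)) + 1
best_bic = int(np.argmin(bic)) + 1
print(best_aic, best_bic)
```

Here AIC keeps the marginal improvement of a fourth cluster while BIC stops at three, the usual pattern when extra components buy only a small log-likelihood gain.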
Hyperspectral target detection using heavy-tailed distributions
NASA Astrophysics Data System (ADS)
Willis, Chris J.
2009-09-01
One promising approach to target detection in hyperspectral imagery exploits a statistical mixture model to represent scene content at a pixel level. The process then goes on to look for pixels which are rare, when judged against the model, and marks them as anomalies. It is assumed that military targets will themselves be rare and therefore likely to be detected amongst these anomalies. For the typical assumption of multivariate Gaussianity for the mixture components, the presence of the anomalous pixels within the training data will have a deleterious effect on the quality of the model. In particular, the derivation process itself is adversely affected by the attempt to accommodate the anomalies within the mixture components. This will bias the statistics of at least some of the components away from their true values and towards the anomalies. In many cases this will result in a reduction in the detection performance and an increased false alarm rate. This paper considers the use of heavy-tailed statistical distributions within the mixture model. Such distributions are better able to account for anomalies in the training data within the tails of their distributions, and the balance of the pixels within their central masses. This means that an improved model of the majority of the pixels in the scene may be produced, ultimately leading to a better anomaly detection result. The anomaly detection techniques are examined using both synthetic data and hyperspectral imagery with injected anomalous pixels. A range of results is presented for the baseline Gaussian mixture model and for models accommodating heavy-tailed distributions, for different parameterizations of the algorithms. These include scene understanding results, anomalous pixel maps at given significance levels and Receiver Operating Characteristic curves.
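The advantage of heavy-tailed components described above can be made concrete by comparing log-densities. The sketch below (synthetic points, hand-implemented densities, an assumed 3 degrees of freedom) shows that a multivariate t penalizes a distant anomaly far less than a Gaussian with the same location and scatter, which is why a single anomaly distorts fitted t components less.

```python
import numpy as np
from scipy.special import gammaln

def mvn_logpdf(x, mu, cov):
    """Log-density of a multivariate normal."""
    d = len(mu)
    diff = x - mu
    m = diff @ np.linalg.solve(cov, diff)
    return -0.5 * (d * np.log(2 * np.pi) + np.linalg.slogdet(cov)[1] + m)

def mvt_logpdf(x, mu, cov, nu):
    """Log-density of a multivariate Student t with nu degrees of freedom."""
    d = len(mu)
    diff = x - mu
    m = diff @ np.linalg.solve(cov, diff)
    return (gammaln((nu + d) / 2) - gammaln(nu / 2)
            - 0.5 * (d * np.log(nu * np.pi) + np.linalg.slogdet(cov)[1])
            - (nu + d) / 2 * np.log1p(m / nu))

mu = np.zeros(2)
cov = np.eye(2)
inlier = np.array([0.5, 0.5])
anomaly = np.array([6.0, 6.0])

# Log-density drop from a typical pixel to the anomalous one:
gap_normal = mvn_logpdf(inlier, mu, cov) - mvn_logpdf(anomaly, mu, cov)
gap_t = mvt_logpdf(inlier, mu, cov, nu=3) - mvt_logpdf(anomaly, mu, cov, nu=3)
print(gap_normal > gap_t)
```

The Gaussian's quadratic tail makes the anomaly astronomically unlikely, pulling fitted components toward it; the t's logarithmic tail absorbs it in the tail mass instead.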
Prediction of U-Mo dispersion nuclear fuels with Al-Si alloy using artificial neural network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Susmikanti, Mike, E-mail: mike@batan.go.id; Sulistyo, Jos, E-mail: soj@batan.go.id
2014-09-30
Dispersion nuclear fuels, consisting of U-Mo particles dispersed in an Al-Si matrix, are being developed as fuel for research reactors. The equilibrium relationship for a mixture component can be expressed in the phase diagram, and it is important to analyze whether a mixture component is in the equilibrium phase or another phase. The purpose of this research was to build a model of the phase diagram, so that it can be determined whether the mixture component is in the stable or melting condition. An artificial neural network (ANN) is a modeling tool for processes involving multivariable non-linear relationships. The objective of the present work was to develop code based on artificial neural network models of the equilibrium relationship of U-Mo in an Al-Si matrix. This model can be used to predict the type of resulting mixture, and whether a point lies in the equilibrium phase or in another phase region. The equilibrium model data for prediction and modeling were generated from experimental data. An artificial neural network with the resilient backpropagation method was chosen to predict the dispersion of the nuclear fuel U-Mo in the Al-Si matrix. The code was built with functions in MATLAB; for simulations using the ANN, the Levenberg-Marquardt method was also used for optimization. The resulting artificial neural network is able to predict whether a point is in the equilibrium phase or in another phase region.
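The paper trained its networks in MATLAB with resilient backpropagation and Levenberg-Marquardt; as a rough stand-in, the sketch below trains a tiny one-hidden-layer network by plain gradient descent on invented data. The inputs (a composition fraction and a temperature-like variable) and the linear phase boundary are assumptions for illustration only, not the experimental phase diagram.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical training set: (composition, temperature) -> in the
# equilibrium phase (1) or not (0), with an assumed linear boundary.
X = rng.uniform(0, 1, size=(400, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(float)

# One-hidden-layer MLP, full-batch gradient descent on cross-entropy.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

lr = 0.5
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2).ravel()
    g = (p - y)[:, None] / len(X)          # d(loss)/d(logit)
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = g @ W2.T * (1 - h ** 2)           # backprop through tanh
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

p = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
acc = ((p > 0.5) == (y > 0.5)).mean()
print(round(acc, 2))
```

Resilient backpropagation and Levenberg-Marquardt would converge in far fewer iterations; the gradient-descent loop is used here only to keep the sketch dependency-free.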
Monteagudo, J M; Durán, A; Aguirre, M; San Martín, I
2011-01-15
The mineralization of solutions containing a mixture of three phenolic compounds, gallic, p-coumaric and protocatechuic acids, in a ferrioxalate-induced solar photo-Fenton process was investigated. The reactions were carried out in a pilot plant consisting of a compound parabolic collector (CPC) solar reactor. An optimization study was performed combining a multivariate experimental design and neural networks that included the following variables: pH, temperature, solar power, air flow and initial concentrations of H(2)O(2), Fe(II) and oxalic acid. Under optimal conditions, total elimination of the original compounds and 94% TOC removal of the mixture were achieved in 5 and 194 min, respectively. pH and initial concentrations of H(2)O(2) and Fe(II) were the most significant factors affecting the mixture mineralization. The molar ratio between consumed hydrogen peroxide and removed TOC was always between 1 and 3. A detailed analysis of the reaction was presented. The values of the pseudo-first-order mineralization kinetic rate constant, k(TOC), increased as initial Fe(II) and H(2)O(2) concentrations and temperature increased. The optimum pH value also slightly increased with greater Fe(II) and hydrogen peroxide concentrations but decreased when temperature increased. OH and O(2)(-) radicals were the main oxidative intermediate species in the process, although singlet oxygen ((1)O(2)) also played a role in the mineralization reaction. Copyright © 2010 Elsevier B.V. All rights reserved.
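A pseudo-first-order rate constant such as k(TOC) is typically obtained from a log-linear fit of the decay data. The sketch below fits k from hypothetical TOC measurements generated with an assumed true constant; the time points, initial TOC, and noise level are all invented for illustration.

```python
import numpy as np

# Hypothetical TOC decay following pseudo-first-order kinetics,
# TOC(t) = TOC0 * exp(-k t), with an assumed k = 0.015 min^-1.
t = np.array([0.0, 20, 40, 80, 120, 160, 194])
toc0, k_true = 100.0, 0.015
rng = np.random.default_rng(2)
toc = toc0 * np.exp(-k_true * t) * rng.normal(1.0, 0.01, t.size)

# ln(TOC0 / TOC) = k t, so k is the slope of a through-origin
# least-squares fit of the log-transformed data.
y = np.log(toc0 / toc)
k_fit = (t @ y) / (t @ t)
print(round(k_fit, 4))
```

In practice the linearity of ln(TOC0/TOC) versus t is itself the check that the pseudo-first-order model is adequate before comparing k values across conditions.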
Wu, Yuping; Bi, Yanfeng; Bingga, Gali; Li, Xiaowei; Zhang, Suxia; Li, Jiancheng; Li, Hui; Ding, Shuangyang; Xia, Xi
2015-06-26
The illegal use of β2-agonists in livestock production was previously detected by efficient methods based on mass spectrometry to control the residues of these drugs. Nevertheless, such methods still remain a challenging task for authorities who monitor these residues because the use of "cocktails" composed of mixtures of low amounts of several substances as well as the synthesis of new compounds of unknown structure prevent efficient prevention of illegal use of growth-promoting agents. Here, we outlined a metabolomics-based strategy for detecting the use of "cocktails" composed of mixtures of low amounts of three β2-agonists via urine profiling. Urine profiles of controls and swine treated with mixture of low amounts of three substances (clenbuterol, salbutamol, and ractopamine) were analyzed with ultra-high performance liquid chromatography coupled with quadrupole time-of-flight mass spectrometry. The metabolic differences between controls and β2-agonists-treated groups were compared using multivariate data analysis. Fourteen metabolites were identified related with the β2-agonists treatment, while two co-biomarkers, 2-indolecarboxylic acid and fluorometholone acetate, either in single or "cocktails" of low-dose mixture of clenbuterol, salbutamol, and ractopamine, could be considered as diagnostic markers for the detection of illegal use of β2-agonists. The results of depletion study demonstrated that it is practical to use the markers for monitoring of β2-agonists. Copyright © 2015 Elsevier B.V. All rights reserved.
Gonzato, Carlo; Semsarilar, Mona; Jones, Elizabeth R; Li, Feng; Krooshof, Gerard J P; Wyman, Paul; Mykhaylyk, Oleksandr O; Tuinier, Remco; Armes, Steven P
2014-08-06
Block copolymer self-assembly is normally conducted via post-polymerization processing at high dilution. In the case of block copolymer vesicles (or "polymersomes"), this approach normally leads to relatively broad size distributions, which is problematic for many potential applications. Herein we report the rational synthesis of low-polydispersity diblock copolymer vesicles in concentrated solution via polymerization-induced self-assembly using reversible addition-fragmentation chain transfer (RAFT) polymerization of benzyl methacrylate. Our strategy utilizes a binary mixture of a relatively long and a relatively short poly(methacrylic acid) stabilizer block, which become preferentially expressed at the outer and inner poly(benzyl methacrylate) membrane surface, respectively. Dynamic light scattering was utilized to construct phase diagrams to identify suitable conditions for the synthesis of relatively small, low-polydispersity vesicles. Small-angle X-ray scattering (SAXS) was used to verify that this binary mixture approach produced vesicles with significantly narrower size distributions compared to conventional vesicles prepared using a single (short) stabilizer block. Calculations performed using self-consistent mean field theory (SCMFT) account for the preferred self-assembled structures of the block copolymer binary mixtures and are in reasonable agreement with experiment. Finally, both SAXS and SCMFT indicate a significant degree of solvent plasticization for the membrane-forming poly(benzyl methacrylate) chains.
Brodeur, Julie Céline; Malpel, Solène; Anglesio, Ana Belén; Cristos, Diego; D'Andrea, María Florencia; Poliserpi, María Belén
2016-07-01
Although pesticide contamination of surface waters normally occurs in the form of mixtures, the toxicity and interactions displayed by such mixtures have been little characterized until now. The present study examined the interactions prevailing in equitoxic and non-equitoxic binary mixtures of formulations of glyphosate (Glifoglex(®)) and cypermethrin (Glextrin(®)) to the tenspotted livebearer (Cnesterodon decemmaculatus), a widely distributed South American fish. The following 96 h-LC50s were obtained when pesticide formulations were tested individually: Glifoglex(®) 41.4 and 53 mg ae glyphosate/L; Glextrin(®) 1.89 and 2.60 μg cypermethrin/L. Equitoxic and non-equitoxic mixtures were significantly antagonistic in all combinations tested. The magnitude of the antagonism (factor by which toxicity differed from concentration addition) varied between 1.37 and 3.09 times in the different non-equitoxic mixtures tested. Antagonism was due to a strong inhibition of cypermethrin toxicity by the glyphosate formulation, the toxicity of the cypermethrin-based pesticide being almost completely overridden by the glyphosate formulation. Results obtained in the current study with fish are radically opposite to those previously observed in tadpoles, where synergy occurred when Glifoglex(®) and Glextrin(®) were present in mixtures (Brodeur et al., 2014). Copyright © 2016 Elsevier Ltd. All rights reserved.
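The concentration-addition reference model against which the antagonism above is measured is simple toxic-unit arithmetic. The sketch below uses one pair of LC50s from the abstract; the "observed" mixture LC50 (1.5 times the predicted concentrations) is a hypothetical number inserted only to show how the antagonism factor is computed.

```python
# Individual 96 h-LC50s reported in the abstract (one of the two pairs):
lc50_gly = 41.4   # mg ae glyphosate/L (Glifoglex formulation)
lc50_cyp = 1.89   # ug cypermethrin/L (Glextrin formulation)

# Concentration addition: a mixture is predicted to kill 50% when its
# summed toxic units reach 1, i.e. c_gly/LC50_gly + c_cyp/LC50_cyp = 1.
def toxic_units(c_gly, c_cyp):
    return c_gly / lc50_gly + c_cyp / lc50_cyp

# Equitoxic design: each component supplies 0.5 toxic units.
c_gly, c_cyp = 0.5 * lc50_gly, 0.5 * lc50_cyp
assert abs(toxic_units(c_gly, c_cyp) - 1.0) < 1e-12

# Hypothetical observed mixture LC50 at 1.5x the predicted concentrations:
# the antagonism factor is the toxic-unit sum actually needed for 50% effect.
antagonism_factor = toxic_units(1.5 * c_gly, 1.5 * c_cyp)
print(antagonism_factor)
```

A factor above 1 (the study reports 1.37 to 3.09) means more of the mixture is needed than concentration addition predicts, i.e. antagonism; a factor below 1 would indicate synergy.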
NASA Astrophysics Data System (ADS)
Limantara, A. D.; Widodo, A.; Winarto, S.; Krisnawati, L. D.; Mudjanarko, S. W.
2018-04-01
The use of natural (river) gravel in concrete mixtures is rarely encountered nowadays, given the demand for higher-strength concrete. Moreover, High-Performance Concrete now relies on coarse aggregate consisting mostly of crushed stone, although the fine aggregate is still natural sand. Is it possible that a concrete mixture using natural gravel as coarse aggregate is capable of producing concrete with compressive strength equivalent to a mixture using crushed stone? To obtain information on this, a series of tests was performed on concrete mixes with coarse aggregate from the Kalitelu Crusher, Gondang, Tulungagung and with natural stone (river gravel) from the Brantas River, Ngujang, Tulungagung, in the Materials Testing Laboratory of the Tugu Dam Construction Project, Kab. Trenggalek. The compressive strength tests gave 19.47 MPa for the mixture with natural gravel, while the mixture with crushed stone gave 21.12 MPa.
Effect of water on self-assembled tubules in β-sitosterol + γ-oryzanol-based organogels
NASA Astrophysics Data System (ADS)
den Adel, Ruud; Heussen, Patricia C. M.; Bot, Arjen
2010-10-01
Mixtures of β-sitosterol and γ-oryzanol form a network in triglyceride oil that may serve as an alternative to the network of small crystallites of triglycerides occurring in regular oil structuring. The present x-ray diffraction study investigates the relation between the crystal forms of the individual compounds and the mixture in oil, water and emulsion. β-Sitosterol and γ-oryzanol form normal crystals in oil, in water, or in emulsions. The crystals are sensitive to the presence of water. The mixture of β-sitosterol + γ-oryzanol forms crystals in water and emulsions that can be traced back to the crystals of the pure compounds. Only in oil, a completely different structure emerges in the mixture of β-sitosterol + γ-oryzanol, which bears no relation to the structures that are formed by both individual compounds, and which can be identified as a self-assembled tubule (diameter 7.2±0.1 nm, wall thickness 0.8±0.2 nm).
Concurrent generation of multivariate mixed data with variables of dissimilar types.
Amatya, Anup; Demirtas, Hakan
2016-01-01
Data sets originating from a wide range of research studies are composed of multiple variables that are correlated and of dissimilar types, primarily of count, binary/ordinal and continuous attributes. The present paper builds on the previous works on multivariate data generation and develops a framework for generating multivariate mixed data with a pre-specified correlation matrix. The generated data consist of components that are marginally count, binary, ordinal and continuous, where the count and continuous variables follow the generalized Poisson and normal distributions, respectively. The use of the generalized Poisson distribution provides a flexible mechanism which allows under- and over-dispersed count variables generally encountered in practice. A step-by-step algorithm is provided and its performance is evaluated using simulated and real-data scenarios.
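The core trick in this family of generators is a NORTA-style step: draw correlated normals, then push margins through inverse CDFs. The sketch below does this for one continuous and one count margin; it uses an ordinary Poisson margin rather than the generalized Poisson of the paper, and the target correlation and Poisson mean are assumptions.

```python
import numpy as np
from scipy.stats import norm, poisson

rng = np.random.default_rng(3)
n = 20000

# Step 1: correlated bivariate standard normal (target correlation 0.6).
rho = 0.6
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)

# Step 2: transform margins. Keep the first coordinate normal; map the
# second through its CDF and the Poisson(4) inverse CDF (NORTA step).
x_cont = z[:, 0]
x_count = poisson.ppf(norm.cdf(z[:, 1]), mu=4)

r = np.corrcoef(x_cont, x_count)[0, 1]
print(round(r, 2))
```

The realized Pearson correlation is slightly attenuated relative to the normal-scale correlation; the paper's framework pre-adjusts the intermediate correlation matrix so the final data hit the specified correlations.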
40 CFR 80.1100 - How is the statutory default requirement for 2006 implemented?
Code of Federal Regulations, 2010 CFR
2010-07-01
... the quantity of fossil fuel present in a fuel mixture used to operate a motor vehicle, and which: (A... more of the fossil fuel normally used in the production of ethanol. (3) Waste derived ethanol means...
40 CFR 80.1100 - How is the statutory default requirement for 2006 implemented?
Code of Federal Regulations, 2014 CFR
2014-07-01
... the quantity of fossil fuel present in a fuel mixture used to operate a motor vehicle, and which: (A... more of the fossil fuel normally used in the production of ethanol. (3) Waste derived ethanol means...
40 CFR 80.1100 - How is the statutory default requirement for 2006 implemented?
Code of Federal Regulations, 2013 CFR
2013-07-01
... the quantity of fossil fuel present in a fuel mixture used to operate a motor vehicle, and which: (A... more of the fossil fuel normally used in the production of ethanol. (3) Waste derived ethanol means...
Matos, Larissa A.; Bandyopadhyay, Dipankar; Castro, Luis M.; Lachos, Victor H.
2015-01-01
In biomedical studies on HIV RNA dynamics, viral loads generate repeated measures that are often subjected to upper and lower detection limits, and hence these responses are either left- or right-censored. Linear and non-linear mixed-effects censored (LMEC/NLMEC) models are routinely used to analyse these longitudinal data, with normality assumptions for the random effects and residual errors. However, the derived inference may not be robust when these underlying normality assumptions are questionable, especially in the presence of outliers and thick tails. Motivated by this, Matos et al. (2013b) recently proposed an exact EM-type algorithm for LMEC/NLMEC models using a multivariate Student's t distribution, with closed-form expressions at the E-step. In this paper, we develop influence diagnostics for LMEC/NLMEC models using the multivariate Student's t density, based on the conditional expectation of the complete data log-likelihood. This partially eliminates the complexity associated with the approach of Cook (1977, 1986) for censored mixed-effects models. The new methodology is illustrated via an application to a longitudinal HIV dataset. In addition, a simulation study explores the accuracy of the proposed measures in detecting possible influential observations for heavy-tailed censored data under different perturbation and censoring schemes. PMID:26190871
Jiang, Xuejun; Guo, Xu; Zhang, Ning; Wang, Bo
2018-01-01
This article presents and investigates performance of a series of robust multivariate nonparametric tests for detection of location shift between two multivariate samples in randomized controlled trials. The tests are built upon robust estimators of distribution locations (medians, Hodges-Lehmann estimators, and an extended U statistic) with both unscaled and scaled versions. The nonparametric tests are robust to outliers and do not assume that the two samples are drawn from multivariate normal distributions. Bootstrap and permutation approaches are introduced for determining the p-values of the proposed test statistics. Simulation studies are conducted and numerical results are reported to examine performance of the proposed statistical tests. The numerical results demonstrate that the robust multivariate nonparametric tests constructed from the Hodges-Lehmann estimators are more efficient than those based on medians and the extended U statistic. The permutation approach can provide a more stringent control of Type I error and is generally more powerful than the bootstrap procedure. The proposed robust nonparametric tests are applied to detect multivariate distributional difference between the intervention and control groups in the Thai Healthy Choices study and examine the intervention effect of a four-session motivational interviewing-based intervention developed in the study to reduce risk behaviors among youth living with HIV. PMID:29672555
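The permutation approach favored above is easy to sketch. The code below tests for a multivariate location shift using the norm of the difference of component-wise medians as an unscaled robust statistic; the data, shift size, and sample sizes are synthetic, and the statistic is a simpler robust estimator than the Hodges-Lehmann version the article recommends.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two 3-variate samples with a location shift in every coordinate.
x = rng.normal(0.0, 1.0, size=(40, 3))
y = rng.normal(0.8, 1.0, size=(40, 3))

def stat(a, b):
    # Unscaled statistic: Euclidean norm of the difference of
    # component-wise medians (a simple robust location contrast).
    return np.linalg.norm(np.median(a, axis=0) - np.median(b, axis=0))

obs = stat(x, y)
pooled = np.vstack([x, y])
count = 0
n_perm = 999
for _ in range(n_perm):
    # Permutation: reshuffle group labels under the null of no shift.
    idx = rng.permutation(len(pooled))
    if stat(pooled[idx[:40]], pooled[idx[40:]]) >= obs:
        count += 1
p_value = (count + 1) / (n_perm + 1)
print(p_value)
```

Because the permutation distribution is built from the data themselves, no multivariate normality is assumed, which is the point of the article's robust tests.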
Multivariate frequency domain analysis of protein dynamics
NASA Astrophysics Data System (ADS)
Matsunaga, Yasuhiro; Fuchigami, Sotaro; Kidera, Akinori
2009-03-01
Multivariate frequency domain analysis (MFDA) is proposed to characterize the collective vibrational dynamics of a protein obtained by a molecular dynamics (MD) simulation. MFDA performs principal component analysis (PCA) for a bandpass filtered multivariate time series using the multitaper method of spectral estimation. By applying MFDA to MD trajectories of bovine pancreatic trypsin inhibitor, we determined the collective vibrational modes in the frequency domain, which were identified by their vibrational frequencies and eigenvectors. At near zero temperature, the vibrational modes determined by MFDA agreed well with those calculated by normal mode analysis. At 300 K, the vibrational modes exhibited characteristic features that were considerably different from the principal modes of the static distribution given by the standard PCA. The influences of aqueous environments were discussed based on two different sets of vibrational modes, one derived from a MD simulation in water and the other from a simulation in vacuum. Using the varimax rotation, an algorithm of the multivariate statistical analysis, the representative orthogonal set of eigenmodes was determined at each vibrational frequency.
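The bandpass-then-PCA idea can be sketched on a toy trajectory. The code below uses a crude FFT mask in place of the multitaper spectral estimate, and a synthetic 4-coordinate series in which two coordinates share a coherent oscillation; the frequencies, amplitudes, and dimensions are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "trajectory": 4 coordinates, 2048 time steps. Coordinates
# 0 and 1 share a coherent 64-step oscillation; all carry white noise.
n, d = 2048, 4
t = np.arange(n)
X = rng.normal(0, 1.0, size=(n, d))
mode = np.sin(2 * np.pi * t / 64)
X[:, 0] += 3 * mode
X[:, 1] += 3 * mode

# Band-pass filter around the target frequency via FFT masking
# (a crude stand-in for the multitaper estimate used by MFDA).
F = np.fft.rfft(X, axis=0)
freqs = np.fft.rfftfreq(n, d=1.0)
mask = (freqs > 1 / 80) & (freqs < 1 / 50)
Xf = np.fft.irfft(F * mask[:, None], n=n, axis=0)

# PCA of the filtered series: the leading eigenvector is the mode shape
# at that frequency band.
C = np.cov(Xf.T)
w, V = np.linalg.eigh(C)
mode_vec = np.abs(V[:, -1])
print(np.round(mode_vec, 2))
```

The leading eigenvector loads almost entirely on the two oscillating coordinates, which is the frequency-resolved analogue of a collective vibrational mode.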
Realized Volatility Analysis in A Spin Model of Financial Markets
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
We calculate the realized volatility of returns in the spin model of financial markets and examine the returns standardized by the realized volatility. We find that moments of the standardized returns agree with the theoretical values of standard normal variables. This is the first evidence that the return distributions of the spin financial markets are consistent with a finite-variance mixture of normal distributions, as is also observed empirically in real financial markets.
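The mixture-of-normals logic above can be reproduced with a toy simulation, not the spin model itself: daily volatility switches between two assumed regimes, raw daily returns are therefore fat-tailed, and dividing each daily return by its realized volatility (root of summed squared intraday returns) restores near-Gaussian moments. Regime values and the number of intraday steps are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def excess_kurtosis(x):
    x = x - x.mean()
    return (x ** 4).mean() / (x ** 2).mean() ** 2 - 3.0

# Simulate 4000 "days"; each day draws its own volatility regime
# (a mixture of normals across days) and 100 intraday returns.
days, m = 4000, 100
sigma = rng.choice([0.5, 2.0], size=days)
intraday = rng.normal(0.0, 1.0, size=(days, m)) * (sigma[:, None] / np.sqrt(m))

daily = intraday.sum(axis=1)                 # daily returns
rv = np.sqrt((intraday ** 2).sum(axis=1))    # realized volatility
standardized = daily / rv

print(round(excess_kurtosis(daily), 2), round(excess_kurtosis(standardized), 2))
```

The raw daily returns show large positive excess kurtosis from the volatility mixture, while the standardized returns have moments close to those of a standard normal, mirroring the test applied to the spin-model returns.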
Mwanza, Jean-Claude; Warren, Joshua L.; Hochberg, Jessica T.; Budenz, Donald L.; Chang, Robert T.; Ramulu, Pradeep Y.
2014-01-01
Purpose To determine the ability of frequency doubling technology (FDT) and scanning laser polarimetry with variable corneal compensation (GDx-VCC) to detect glaucoma when used individually and in combination. Methods One hundred and ten normal and 114 glaucomatous subjects were tested with the FDT C-20-5 screening protocol and the GDx-VCC. The discriminating ability was tested for each device individually and for both devices combined using GDx-NFI, GDx-TSNIT, number of missed points of FDT, and normal or abnormal FDT. Measures of discrimination included sensitivity, specificity, area under the curve (AUC), Akaike’s information criterion (AIC), and prediction confidence interval lengths (PIL). Results For detecting glaucoma regardless of severity, the multivariable model resulting from the combination of GDx-TSNIT, number of abnormal points on FDT (NAP-FDT), and the interaction GDx-TSNIT * NAP-FDT (AIC: 88.28, AUC: 0.959, sensitivity: 94.6%, specificity: 89.5%) outperformed the best single variable model provided by GDx-NFI (AIC: 120.88, AUC: 0.914, sensitivity: 87.8%, specificity: 84.2%). The multivariable model combining GDx-TSNIT, NAP-FDT, and the interaction GDx-TSNIT * NAP-FDT consistently provided better discriminating abilities for detecting early, moderate and severe glaucoma than the best single variable models. Conclusions The multivariable model including GDx-TSNIT, NAP-FDT, and the interaction GDx-TSNIT * NAP-FDT provides the best glaucoma prediction compared to all other multivariable and univariable models. Combining the FDT C-20-5 screening protocol and GDx-VCC improves glaucoma detection compared to using GDx or FDT alone. PMID:24777046
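The AUC comparison driving these conclusions can be sketched with the Mann-Whitney formulation of the AUC. The scores below are synthetic normal draws with assumed separations, standing in for a single-marker score and a better-separated combined score; they are not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(7)

def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    s = np.concatenate([scores_pos, scores_neg])
    ranks = s.argsort().argsort() + 1.0     # ranks (continuous scores, no ties)
    n_pos, n_neg = len(scores_pos), len(scores_neg)
    u = ranks[:n_pos].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

# Hypothetical diagnostic scores: a single marker vs. a two-marker
# combination with better case/control separation (114 cases, 110 controls,
# matching the abstract's sample sizes).
glaucoma_single = rng.normal(1.0, 1.0, 114)
normal_single = rng.normal(0.0, 1.0, 110)
glaucoma_combo = rng.normal(1.8, 1.0, 114)
normal_combo = rng.normal(0.0, 1.0, 110)

a_single = auc(glaucoma_single, normal_single)
a_combo = auc(glaucoma_combo, normal_combo)
print(round(a_single, 2), round(a_combo, 2))
```

The larger separation of the combined score yields the higher AUC, the same pattern the study reports for the GDx plus FDT model (0.959) versus GDx-NFI alone (0.914).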
Generating Multivariate Ordinal Data via Entropy Principles.
Lee, Yen; Kaplan, David
2018-03-01
When conducting robustness research where the focus of attention is on the impact of non-normality, the marginal skewness and kurtosis are often used to set the degree of non-normality. Monte Carlo methods are commonly applied to conduct this type of research by simulating data from distributions with skewness and kurtosis constrained to pre-specified values. Although several procedures have been proposed to simulate data from distributions with these constraints, no corresponding procedures have been applied for discrete distributions. In this paper, we present two procedures based on the principles of maximum entropy and minimum cross-entropy to estimate the multivariate observed ordinal distributions with constraints on skewness and kurtosis. For these procedures, the correlation matrix of the observed variables is not specified but depends on the relationships between the latent response variables. With the estimated distributions, researchers can study robustness not only focusing on the levels of non-normality but also on the variations in the distribution shapes. A simulation study demonstrates that these procedures yield excellent agreement between specified parameters and those of estimated distributions. A robustness study concerning the effect of distribution shape in the context of confirmatory factor analysis shows that shape can affect the robust [Formula: see text] and robust fit indices, especially when the sample size is small, the data are severely non-normal, and the fitted model is complex.
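A stripped-down version of the maximum-entropy step can be solved in closed form up to one scalar. With only a mean constraint (the paper additionally constrains skewness and kurtosis), the maximum-entropy distribution over ordinal categories is exponential-family, p_i proportional to exp(lambda * i), and lambda is found by root-finding; the categories and target mean below are assumptions.

```python
import numpy as np
from scipy.optimize import brentq

# Maximum-entropy distribution on ordinal categories 1..5 subject to a
# fixed mean: the solution has the form p_i ∝ exp(lambda * i).
cats = np.arange(1, 6)
target_mean = 3.8

def mean_for(lam):
    w = np.exp(lam * cats)
    return (cats * w).sum() / w.sum()

lam = brentq(lambda l: mean_for(l) - target_mean, -10, 10)
p = np.exp(lam * cats)
p /= p.sum()
entropy = -(p * np.log(p)).sum()

# Any other feasible distribution with the same mean has lower entropy,
# e.g. all mass on categories 3 and 5 (0.6 * 3 + 0.4 * 5 = 3.8).
q = np.array([0.0, 0.0, 0.6, 0.0, 0.4])
entropy_q = -(q[q > 0] * np.log(q[q > 0])).sum()
print(round(entropy, 3), round(entropy_q, 3))
```

Adding skewness and kurtosis constraints replaces the single lambda with a vector of multipliers on powers of i, which is the numerical optimization the paper's procedures carry out.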
Are mammal olfactory signals hiding right under our noses?
NASA Astrophysics Data System (ADS)
Apps, Peter James
2013-06-01
Chemical communication via olfactory semiochemicals plays a central role in the social behaviour and reproduction of mammals, but even after four decades of research, only a few mammal semiochemicals have been chemically characterized. Expectations that mammal chemical signals are coded by quantitative relationships among multiple components have persisted since the earliest studies of mammal semiochemistry, and continue to direct research strategies. Nonetheless, the chemistry of mammal excretions and secretions and the characteristics of those semiochemicals that have been identified show that mammal semiochemicals are as likely to be single compounds as to be mixtures, and are as likely to be coded by the presence and absence of chemical compounds as by their quantities. There is very scant support for the view that mammal semiochemicals code signals as specific ratios between components, and no evidence that they depend on a Gestalt or a chemical image. Of 31 semiochemicals whose chemical composition is known, 15 have a single component and 16 are coded by presence/absence, one may depend on a ratio between two compounds and none of them are chemical images. The expectation that mammal chemical signals have multiple components underpins the use of multivariate statistical analyses of chromatographic data, but the ways in which multivariate statistics are commonly used to search for active mixtures leads to single messenger compounds and signals that are sent by the presence and absence of compounds being overlooked. Research on mammal semiochemicals needs to accommodate the possibility that simple qualitative differences are no less likely than complex quantitative differences to encode chemical signals.
2016-04-07
Multivariate UV-spectrophotometric methods and Quality by Design (QbD) HPLC are described for concurrent estimation of avanafil (AV) and dapoxetine (DP) in the binary mixture and in the dosage form. Chemometric methods have been developed, including classical least-squares, principal component regression, partial least-squares, and multiway partial least-squares. Analytical figures of merit, such as sensitivity, selectivity, analytical sensitivity, LOD, and LOQ were determined. QbD consists of three steps, starting with the screening approach to determine the critical process parameter and response variables. This is followed by understanding of factors and levels, and lastly the application of a Box-Behnken design containing four critical factors that affect the method. From an Ishikawa diagram and a risk assessment tool, four main factors were selected for optimization. Design optimization, statistical calculation, and final-condition optimization of all the reactions were carried out. Twenty-five experiments were done, and a quadratic model was used for all response variables. Desirability plot, surface plot, design space, and three-dimensional plots were calculated. In the optimized condition, HPLC separation was achieved on Phenomenex Gemini C18 column (250 × 4.6 mm, 5 μm) using acetonitrile-buffer (ammonium acetate buffer at pH 3.7 with acetic acid) as a mobile phase at flow rate of 0.7 mL/min. Quantification was done at 239 nm, and temperature was set at 20°C. The developed methods were validated and successfully applied for simultaneous determination of AV and DP in the dosage form.
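Of the chemometric methods listed, classical least squares is the simplest to sketch: the mixture spectrum is modeled as a linear combination of pure-component spectra, and the concentrations are the least-squares coefficients. The Gaussian band shapes, wavelength grid, and concentrations below are invented stand-ins, not the actual AV/DP spectra.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical pure-component UV spectra of AV and DP on a 100-point
# wavelength grid (illustrative Gaussian bands only).
wl = np.linspace(200, 300, 100)
s_av = np.exp(-0.5 * ((wl - 239) / 10) ** 2)
s_dp = np.exp(-0.5 * ((wl - 260) / 12) ** 2)

# Classical least squares: mixture = S @ c + noise, where the columns
# of S are the pure spectra and c holds the concentrations.
S = np.column_stack([s_av, s_dp])
c_true = np.array([0.7, 0.3])
mix = S @ c_true + rng.normal(0, 0.005, wl.size)

c_hat, *_ = np.linalg.lstsq(S, mix, rcond=None)
print(np.round(c_hat, 2))
```

PCR and PLS replace the known pure spectra with latent factors estimated from calibration mixtures, which is what makes them usable when pure-component spectra overlap heavily or are unavailable.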
Evaluation of Asphalt Mixture Low-Temperature Performance in Bending Beam Creep Test.
Pszczola, Marek; Jaczewski, Mariusz; Rys, Dawid; Jaskula, Piotr; Szydlowski, Cezary
2018-01-10
Low-temperature cracking is one of the most common road pavement distress types in Poland. While bitumen performance can be evaluated in detail using bending beam rheometer (BBR) or dynamic shear rheometer (DSR) tests, none of the normalized test methods gives a comprehensive representation of the low-temperature performance of asphalt mixtures. This article presents the Bending Beam Creep test performed at temperatures from -20 °C to +10 °C in order to evaluate the low-temperature performance of asphalt mixtures. Both validation of the method and its utilization for the assessment of eight types of wearing courses commonly used in Poland are described. The performed test indicated that the source of bitumen and its production process (and not necessarily only bitumen penetration) had a significant impact on the low-temperature performance of the asphalt mixtures, comparable to the impact of binder modification (neat, polymer-modified, highly modified) and the aggregate skeleton used in the mixture (Stone Mastic Asphalt (SMA) vs. Asphalt Concrete (AC)). The obtained Bending Beam Creep test results were compared with the BBR bitumen test. Regression analysis confirmed that performing solely bitumen tests is insufficient for a comprehensive low-temperature performance analysis.
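The BBR test mentioned above reports a flexural creep stiffness from three-point beam bending, S(t) = P·L³ / (4·b·h³·δ(t)). A sketch of that standard relation follows; the load and geometry below are the usual BBR beam values (0.98 N, 102 × 12.7 × 6.35 mm), not the article's own mixture-beam geometry, and the deflection value is an illustrative placeholder:

```python
def bbr_creep_stiffness(load_N, span_mm, width_mm, thickness_mm, deflection_mm):
    """Flexural creep stiffness S(t) of a simply supported beam in three-point
    bending, S(t) = P*L^3 / (4*b*h^3*d(t)); with N and mm inputs the result
    is in MPa."""
    return (load_N * span_mm ** 3) / (4.0 * width_mm * thickness_mm ** 3 * deflection_mm)

# Standard BBR beam: 0.98 N load, 102 mm span, 12.7 mm wide, 6.35 mm thick.
# A 0.5 mm mid-span deflection (hypothetical) gives a stiffness of ~160 MPa.
S = bbr_creep_stiffness(0.98, 102.0, 12.7, 6.35, 0.5)
```

As the beam creeps and δ(t) grows with time, S(t) falls, which is why low-temperature grading looks at both the stiffness and its slope (the m-value) at a fixed loading time.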
Evaluation of Asphalt Mixture Low-Temperature Performance in Bending Beam Creep Test
Rys, Dawid; Jaskula, Piotr; Szydlowski, Cezary
2018-01-01
Low-temperature cracking is one of the most common road pavement distress types in Poland. While bitumen performance can be evaluated in detail using bending beam rheometer (BBR) or dynamic shear rheometer (DSR) tests, none of the normalized test methods gives a comprehensive representation of the low-temperature performance of asphalt mixtures. This article presents the Bending Beam Creep test performed at temperatures from −20 °C to +10 °C in order to evaluate the low-temperature performance of asphalt mixtures. Both validation of the method and its utilization for the assessment of eight types of wearing courses commonly used in Poland are described. The performed test indicated that the source of bitumen and its production process (and not necessarily only bitumen penetration) had a significant impact on the low-temperature performance of the asphalt mixtures, comparable to the impact of binder modification (neat, polymer-modified, highly modified) and the aggregate skeleton used in the mixture (Stone Mastic Asphalt (SMA) vs. Asphalt Concrete (AC)). The obtained Bending Beam Creep test results were compared with the BBR bitumen test. Regression analysis confirmed that performing solely bitumen tests is insufficient for a comprehensive low-temperature performance analysis. PMID:29320443
Code of Federal Regulations, 2014 CFR
2014-07-01
... enough electrical or thermal energy to ignite a flammable mixture of the most easily ignitable composition. Intrinsically safe means incapable of releasing enough electrical or thermal energy under normal... portable cables may be connected to a source of electrical energy, and which contains a short-circuit...
COLLAPSE OF A FISH POPULATION FOLLOWING EXPOSURE TO A SYNTHETIC ESTROGEN
Municipal wastewaters are a complex mixture containing estrogens and estrogen mimics that are known to affect the reproductive health of wild fishes. Male fishes downstream of some wastewater outfalls produce vitellogenin (VTG) (a protein normally synthesized by females during oo...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Xin -Yu; Bhagatwala, Ankit; Chen, Jacqueline H.
The modeling of mixing by molecular diffusion is a central aspect of transported probability density function (tPDF) methods. In this paper, the newly proposed shadow position mixing model (SPMM) is examined, using a DNS database for a temporally evolving di-methyl ether slot jet flame. Two methods that invoke different levels of approximation are proposed to extract the shadow displacement (equivalent to shadow position) from the DNS database. An approach for a priori analysis of the mixing-model performance is developed. The shadow displacement is highly correlated with both mixture fraction and velocity, and the peak correlation coefficient of the shadow displacement and mixture fraction is higher than that of the shadow displacement and velocity. This suggests that the composition-space localness is reasonably well enforced by the model, with appropriate choices of model constants. The conditional diffusion of mixture fraction and major species from DNS and from SPMM are then compared, using mixing rates that are derived by matching the mixture fraction scalar dissipation rates. Good qualitative agreement is found for the prediction of the locations of zero and maximum/minimum conditional diffusion for mixture fraction and individual species. Similar comparisons are performed for DNS and the IECM (interaction by exchange with the conditional mean) model. The agreement between SPMM and DNS is better than that between IECM and DNS, in terms of conditional diffusion iso-contour similarities and global normalized residual levels. It is found that a suitable value for the model constant c that controls the mixing frequency can be derived using the local normalized scalar variance, and that the model constant a controls the localness of the model.
A higher-Reynolds-number test case is anticipated to be more appropriate to evaluate the mixing models, and stand-alone transported PDF simulations are required to more fully enforce localness and to assess model performance.
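The SPMM and IECM models compared above both refine the classical IEM model, in which every notional particle relaxes toward the unconditional ensemble mean. A minimal explicit IEM step, useful as a baseline for intuition (the 0.5·c_φ·ω factor follows the usual IEM convention; values below are illustrative):

```python
def iem_mixing_step(phi, omega, c_phi, dt):
    """One explicit IEM (interaction-by-exchange-with-the-mean) step:
    each particle's composition relaxes toward the ensemble mean,
    d(phi)/dt = -0.5 * c_phi * omega * (phi - <phi>)."""
    mean = sum(phi) / len(phi)
    decay = 0.5 * c_phi * omega * dt
    return [p - decay * (p - mean) for p in phi]

# Four particles with distinct compositions; omega is a mixing frequency.
phi = [0.0, 1.0, 2.0, 3.0]
mixed = iem_mixing_step(phi, omega=10.0, c_phi=2.0, dt=0.01)
```

The step conserves the mean while contracting the scatter about it, so scalar variance decays at rate c_φ·ω. The conditional models (IECM, SPMM) replace the global mean with a conditioned one, which is exactly the "localness" property the abstract assesses.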
An a priori DNS study of the shadow-position mixing model
Zhao, Xin -Yu; Bhagatwala, Ankit; Chen, Jacqueline H.; ...
2016-01-15
The modeling of mixing by molecular diffusion is a central aspect of transported probability density function (tPDF) methods. In this paper, the newly proposed shadow position mixing model (SPMM) is examined, using a DNS database for a temporally evolving di-methyl ether slot jet flame. Two methods that invoke different levels of approximation are proposed to extract the shadow displacement (equivalent to shadow position) from the DNS database. An approach for a priori analysis of the mixing-model performance is developed. The shadow displacement is highly correlated with both mixture fraction and velocity, and the peak correlation coefficient of the shadow displacement and mixture fraction is higher than that of the shadow displacement and velocity. This suggests that the composition-space localness is reasonably well enforced by the model, with appropriate choices of model constants. The conditional diffusion of mixture fraction and major species from DNS and from SPMM are then compared, using mixing rates that are derived by matching the mixture fraction scalar dissipation rates. Good qualitative agreement is found for the prediction of the locations of zero and maximum/minimum conditional diffusion for mixture fraction and individual species. Similar comparisons are performed for DNS and the IECM (interaction by exchange with the conditional mean) model. The agreement between SPMM and DNS is better than that between IECM and DNS, in terms of conditional diffusion iso-contour similarities and global normalized residual levels. It is found that a suitable value for the model constant c that controls the mixing frequency can be derived using the local normalized scalar variance, and that the model constant a controls the localness of the model.
A higher-Reynolds-number test case is anticipated to be more appropriate to evaluate the mixing models, and stand-alone transported PDF simulations are required to more fully enforce localness and to assess model performance.
Preparation and evaluation of posaconazole-loaded enteric microparticles in rats.
Yang, Min; Dong, Zhonghua; Zhang, Yongchun; Zhang, Fang; Wang, Yongjie; Zhao, Zhongxi
2017-04-01
Posaconazole (POS) is an antifungal compound with low oral bioavailability. The aim of this study was to prepare POS enteric microparticles to enhance its oral bioavailability. POS enteric microparticles were prepared with hypromellose acetate succinate (HPMCAS) via the spray drying method. The solvent mixtures of acetone and ethanol used in the preparation of the microparticles were optimized to produce the ideal POS enteric microparticles. Multivariate data analysis using principal component analysis (PCA) was used to find the relationship among the HPMCAS molecular characteristics, particle properties and drug release kinetics of the spray-dried microparticles. The optimal spray solvent mixtures were critical to produce POS microparticles with the defined polymer entanglement index, drug surface enrichment, particle size and drug loading. The HPMCAS molecular characteristics affected the microscopic connectivity and diffusivity of the polymer matrix, eventually influenced the drug release behavior, and enhanced the bioavailability of POS. These studies suggested that the selection of suitable solvent mixtures of acetone and ethanol in the spray drying of the microparticles was quite important to produce entangled polymer structures with the preferred polymer molecular properties of polymer coiling, overlap concentration and entanglement index. Additional studies on particle size and surface drug enrichment eventually produced HPMCAS-based enteric microparticles that enhance the oral bioavailability of POS.
Rodríguez, N; Ortiz, M C; Sarabia, L; Gredilla, E
2010-04-15
To prevent possible fraud and give more protection to companies and consumers, it is necessary to verify that the types of milk used in the elaboration of dairy products correspond to those declared on the label. It is therefore of great interest to have efficient, quick, and cheap methods of analysis to identify them. In the present work, the multivariate data are the protein chromatographic profiles of cheese and milk extracts, obtained by high-performance liquid chromatography with diode-array detection (HPLC-DAD). These data correspond to pure samples of bovine, ovine and caprine milk, and also to binary and ternary mixtures. The structure of the data is studied through principal component analysis (PCA), whereas the percentage of each kind of milk has been determined by a partial least squares (PLS) calibration model. In cheese elaborated with mixtures of milk, the procedure employed allows one to detect 3.92, 2.81 and 1.47% of ovine, caprine and bovine milk, respectively, when the probability of false non-compliance is fixed at 0.05. These percentages reach 7.72, 5.52 and 2.89%, respectively, when both the probability of false non-compliance and false compliance are fixed at 0.05. (c) 2009 Elsevier B.V. All rights reserved.
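PCA of the kind used above to explore the chromatographic profiles reduces, at its core, to extracting leading eigenvectors of the mean-centred covariance. A dependency-free sketch of the first principal component via power iteration (illustrative only, not the authors' pipeline; the two-column toy data are hypothetical):

```python
import math

def first_principal_component(rows, iters=200):
    """First principal component of a data matrix (list of rows) by power
    iteration on the mean-centred covariance direction X^T X v, avoiding any
    linear-algebra dependencies."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    X = [[r[j] - means[j] for j in range(d)] for r in rows]
    v = [1.0] * d
    for _ in range(iters):
        # scores = X v, then w = X^T scores = X^T X v (covariance times v, up to scale)
        scores = [sum(x[j] * v[j] for j in range(d)) for x in X]
        w = [sum(s * x[j] for s, x in zip(scores, X)) for j in range(d)]
        norm = math.sqrt(sum(c * c for c in w))
        v = [c / norm for c in w]
    return v

# Toy data where the second variable is exactly twice the first:
rows = [[t, 2.0 * t] for t in range(-5, 6)]
v = first_principal_component(rows)  # loading direction proportional to (1, 2)
```

Subsequent components follow by deflating X against each found direction; in practice an SVD routine does all of this at once.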
NASA Astrophysics Data System (ADS)
Huynh, Trong-Phuoc; Hwang, Chao-Lung; Yang, Shu-Ti
2017-12-01
This experimental study evaluated the performance of normal ordinary Portland cement (OPC) concrete and high-performance concrete (HPC) that were designed by the conventional (ACI) method and the densified mixture design algorithm (DMDA) method, respectively. Engineering properties and durability performance of both the OPC and HPC samples were studied using tests of workability, compressive strength, water absorption, ultrasonic pulse velocity, and electrical surface resistivity. Test results show that the HPC exhibited good fresh properties and showed better performance in terms of strength and durability compared to the OPC.
[Phospholipids under combined ozone-oxygen administration].
Müller-Tyl, E; Hernuss, P; Salzer, H; Reisinger, L; Washüttl, J; Wurst, F
1975-01-01
The parenteral application of an oxygen-ozone gas mixture gives good results in the treatment of various diseases. Ozone seems to influence fat metabolism, so it was of interest to analyse this influence, particularly on phospholipids. Forty women with gynaecological cancer received 10 ml of an oxygen-ozone gas mixture with a content of 450 gamma ozone into the cubital vein. Venous blood was drawn before and 10 minutes after application, and the levels of lecithin, lysolecithin, cephalin and sphingomyelin were determined by the method of Randerath. A decrease of all four substances was evident, although all values remained within the normal range.
Hydrogen-Helium Shock Radiation Tests for Saturn Entry Probes
NASA Technical Reports Server (NTRS)
Cruden, Brett A.
2016-01-01
This paper describes the measurement of shock layer radiation in Hydrogen/Helium mixtures representative of those encountered by probes entering the Saturn atmosphere. Normal shock waves are measured in Hydrogen-Helium mixtures (89:11% by volume) at freestream pressures between 13-66 Pa (0.1-0.5 Torr) and velocities from 20-30 km/s. Radiance is quantified from the Vacuum Ultraviolet through Near Infrared. An induction zone of several centimeters is observed in which electron density and radiance remain well below equilibrium. Radiance is observed in front of the shock layer, the characteristics of which match the expected diffusion length of Hydrogen.
NASA Technical Reports Server (NTRS)
Lennington, R. K.; Malek, H.
1978-01-01
A clustering method, CLASSY, was developed, which alternates maximum likelihood iteration with a procedure for splitting, combining, and eliminating the resulting statistics. The method maximizes the fit of a mixture of normal distributions to the observed first through fourth central moments of the data and produces an estimate of the proportions, means, and covariances in this mixture. The mathematical model which is the basis for CLASSY and the actual operation of the algorithm are described. Data comparing the performances of CLASSY and ISOCLS on simulated and actual LACIE data are presented.
Carlesi, Serena; Ricci, Marilena; Cucci, Costanza; La Nasa, Jacopo; Lofrumento, Cristiana; Picollo, Marcello; Becucci, Maurizio
2015-07-01
This work explores the application of chemometric techniques to the analysis of lipidic paint binders (i.e., drying oils) by means of Raman and near-infrared spectroscopy. These binders have been widely used by artists throughout history, both individually and in mixtures. We prepared various model samples of the pure binders (linseed, poppy seed, and walnut oils) obtained from different manufacturers. These model samples were left to dry and then characterized by Raman and reflectance near-infrared spectroscopy. Multivariate analysis was performed by applying principal component analysis (PCA) on the first derivative of the corresponding Raman spectra (1800-750 cm(-1)), near-infrared spectra (6000-3900 cm(-1)), and their combination to test whether spectral differences could enable samples to be distinguished on the basis of their composition. The vibrational bands we found most useful to discriminate between the different products we studied are the fundamental ν(C=C) stretching and methylenic stretching and bending combination bands. The results of the multivariate analysis demonstrated the potential of chemometric approaches for characterizing and identifying drying oils, and also for gaining a deeper insight into the aging process. Comparison with high-performance liquid chromatography data was conducted to check the PCA results.
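First-derivative pre-processing of the kind applied to the Raman and NIR spectra above is, in its simplest form, a central difference over the sampled signal (production work typically uses a Savitzky-Golay filter instead). A minimal sketch with a hypothetical toy spectrum:

```python
def first_derivative(y, dx=1.0):
    """Central-difference first derivative of an evenly sampled spectrum.
    A constant (additive) baseline offset cancels out, which is one reason
    derivative spectra are popular pre-processing before PCA."""
    return [(y[i + 1] - y[i - 1]) / (2.0 * dx) for i in range(1, len(y) - 1)]

# A baseline-shifted copy of a spectrum has the same derivative as the original.
spectrum = [0.0, 1.0, 4.0, 9.0, 16.0]
shifted = [v + 5.0 for v in spectrum]
```

Note the derivative is two points shorter than the input; the endpoints have no centred neighbour.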
Liguori, Lucia; Bjørsvik, Hans-René
2012-12-01
The development of a multivariate study for the quantitative analysis of six different polybrominated diphenyl ethers (PBDEs) in tissue of Atlantic Salmo salar L. is reported. An extraction, isolation, and purification process based on an accelerated solvent extraction system was designed, investigated, and optimized by means of statistical experimental design and multivariate data analysis and regression. An accompanying gas chromatography-mass spectrometry analytical method was developed for the identification and quantification of the analytes, BDE 28, BDE 47, BDE 99, BDE 100, BDE 153, and BDE 154. These PBDEs have been used in commercial blends that served as flame retardants for a variety of materials, including electronic devices, synthetic polymers and textiles. The present study revealed that an extracting solvent mixture composed of hexane and CH₂Cl₂ (10:90) provided excellent recoveries of all six PBDEs studied herein. A somewhat lower polarity in the extracting solvent, hexane and CH₂Cl₂ (40:60), decreased the analyte %-recoveries, which still remained acceptable and satisfactory. The study demonstrates the necessity of performing a detailed investigation of the extraction and purification process in order to achieve quantitative isolation of the analytes from the specific matrix. Copyright © 2012 Elsevier B.V. All rights reserved.
Modification of Gaussian mixture models for data classification in high energy physics
NASA Astrophysics Data System (ADS)
Štěpánek, Michal; Franc, Jiří; Kůs, Václav
2015-01-01
In high energy physics, we deal with the demanding task of signal separation from background. The Model Based Clustering method involves the estimation of distribution mixture parameters via the Expectation-Maximization algorithm in the training phase and application of Bayes' rule in the testing phase. Modifications of the algorithm such as weighting, missing data processing, and overtraining avoidance will be discussed. Due to the strong dependence of the algorithm on initialization, genetic optimization techniques such as mutation, elitism, parasitism, and the rank selection of individuals will be mentioned. Data pre-processing plays a significant role for the subsequent combination of final discriminants in order to improve signal separation efficiency. Moreover, the results of the top quark separation from the Tevatron collider will be compared with those of standard multivariate techniques in high energy physics. Results from this study have been used in the measurement of the inclusive top pair production cross section employing the full DØ Tevatron Run II data set (9.7 fb-1).
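The EM-then-Bayes workflow described above can be sketched in a deliberately minimal form: a one-dimensional, two-component normal mixture fitted by EM (training), then classification by the posterior from Bayes' rule (testing). The real analysis is multivariate and adds the listed modifications; the data here are synthetic:

```python
import math
import random

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def em_gmm_1d(data, iters=100):
    """EM for a two-component 1-D normal mixture (training phase)."""
    mu = [min(data), max(data)]          # crude but deterministic initialization
    sd = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[k] * normal_pdf(x, mu[k], sd[k]) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, and standard deviations
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            sd[k] = max(1e-6, math.sqrt(
                sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk))
    return w, mu, sd

def classify(x, w, mu, sd):
    """Testing phase: Bayes' rule assigns x to the higher-posterior component."""
    post = [w[k] * normal_pdf(x, mu[k], sd[k]) for k in (0, 1)]
    return 0 if post[0] >= post[1] else 1

random.seed(1)
data = ([random.gauss(0.0, 1.0) for _ in range(200)]
        + [random.gauss(6.0, 1.0) for _ in range(200)])
w, mu, sd = em_gmm_1d(data)
```

With well-separated synthetic components at 0 and 6, the fitted means land close to the truth and the two test points 0 and 6 are assigned to different components.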
Riahi, Siavash; Hadiloo, Farshad; Milani, Seyed Mohammad R; Davarkhah, Nazila; Ganjali, Mohammad R; Norouzi, Parviz; Seyfi, Payam
2011-05-01
The predictive accuracy of different chemometric methods was compared when applied to ordinary UV spectra and first-order derivative spectra. Principal component regression (PCR) and partial least squares with one dependent variable (PLS1) and two dependent variables (PLS2) were applied to spectral data of a pharmaceutical formula containing pseudoephedrine (PDP) and guaifenesin (GFN). The ability of derivative spectra to resolve the overlapping spectra of chlorpheniramine maleate was evaluated when multivariate methods are adopted for the analysis of two-component mixtures without using any chemical pretreatment. The chemometric models were tested on an external validation dataset and finally applied to the analysis of pharmaceuticals. Significant advantages were found in the analysis of the real samples when the calibration models from derivative spectra were used. It should also be mentioned that the proposed method is simple and rapid, requires no preliminary separation steps, and can be used easily for the analysis of these compounds, especially in quality control laboratories. Copyright © 2011 John Wiley & Sons, Ltd.
Wang, Jun; Kliks, Michael M; Jun, Soojin; Jackson, Mel; Li, Qing X
2010-03-01
Quantitative analysis of glucose, fructose, sucrose, and maltose in honey samples of different geographic origins around the world using Fourier transform infrared (FTIR) spectroscopy and chemometrics, such as partial least squares (PLS) and principal component regression, was studied. The calibration series consisted of 45 standard mixtures, which were made up of glucose, fructose, sucrose, and maltose. There were distinct peak variations of all sugar mixtures in the spectral "fingerprint" region between 1500 and 800 cm(-1). The calibration model was successfully validated using 7 synthetic blend sets of sugars. The PLS 2nd-derivative model showed the highest degree of prediction accuracy, with a highest R(2) value of 0.999. Along with canonical variate analysis, the calibration model, further validated by high-performance liquid chromatography measurements for commercial honey samples, demonstrates that FTIR can qualitatively and quantitatively determine the presence of glucose, fructose, sucrose, and maltose in multiple regional honey samples.
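Calibration schemes like the one above model a mixture spectrum as a concentration-weighted sum of component contributions. The simplest member of that family, classical least squares (CLS), can be sketched for two components by solving the 2×2 normal equations directly; the "spectra" below are hypothetical toy vectors, not FTIR data:

```python
def cls_unmix(spectrum, pure1, pure2):
    """Classical least squares for a two-component mixture: find (c1, c2)
    minimizing ||spectrum - c1*pure1 - c2*pure2||^2 via the 2x2 normal
    equations (assumes the pure spectra are not collinear)."""
    a11 = sum(x * x for x in pure1)
    a12 = sum(x * y for x, y in zip(pure1, pure2))
    a22 = sum(y * y for y in pure2)
    b1 = sum(s * x for s, x in zip(spectrum, pure1))
    b2 = sum(s * y for s, y in zip(spectrum, pure2))
    det = a11 * a22 - a12 * a12
    return ((b1 * a22 - b2 * a12) / det, (b2 * a11 - b1 * a12) / det)

# Toy pure-component "spectra" (illustrative numbers only):
pure_a = [1.0, 0.0, 2.0, 1.0]
pure_b = [0.0, 1.0, 1.0, 2.0]
mix = [0.3 * a + 0.7 * b for a, b in zip(pure_a, pure_b)]
c_a, c_b = cls_unmix(mix, pure_a, pure_b)  # recovers 0.3 and 0.7
```

CLS needs every absorbing component's pure spectrum; PCR and PLS, as used in the abstract, drop that requirement by regressing on latent spectral factors instead.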
TØ, Bechshøft; Sonne, C; Dietz, R; Born, EW; Muir, DCG; Letcher, RJ; Novak, MA; Henchey, E; Meyer, JS; Jenssen, BM; Villanger, GD
2012-01-01
The multivariate relationship between hair cortisol, whole blood thyroid hormones, and the complex mixtures of organohalogen contaminant (OHC) levels measured in subcutaneous adipose of 23 East Greenland polar bears (eight males and 15 females, all sampled between the years 1999 and 2001) was analyzed using projection to latent structure (PLS) regression modeling. In the resulting PLS model, most important variables with a negative influence on cortisol levels were particularly BDE-99, but also CB-180, -201, BDE-153, and CB-170/190. The most important variables with a positive influence on cortisol were CB-66/95, α-HCH, TT3, as well as heptachlor epoxide, dieldrin, BDE-47, p,p′-DDD. Although statistical modeling does not necessarily fully explain biological cause-effect relationships, relationships indicate that (1) the hypothalamic-pituitary-adrenal (HPA) axis in East Greenland polar bears is likely to be affected by OHC-contaminants and (2) the association between OHCs and cortisol may be linked with the hypothalamus-pituitary-thyroid (HPT) axis. PMID:22575327
Distribution of the Determinant of the Sample Correlation Matrix: Monte Carlo Type One Error Rates.
ERIC Educational Resources Information Center
Reddon, John R.; And Others
1985-01-01
Computer sampling from a multivariate normal spherical population was used to evaluate the type one error rates for a test of sphericity based on the distribution of the determinant of the sample correlation matrix. (Author/LMO)
The Multivariate Largest Lyapunov Exponent as an Age-Related Metric of Quiet Standing Balance
Liu, Kun; Wang, Hongrui; Xiao, Jinzhuang
2015-01-01
The largest Lyapunov exponent has been researched as a metric of the balance ability during human quiet standing. However, the sensitivity and accuracy of this measurement method are not good enough for clinical use. The present research proposes a metric of the human body's standing balance ability based on the multivariate largest Lyapunov exponent (MLLE), which can quantify human standing balance. The dynamic multivariate time series of ankle, knee, and hip were measured by multiple electrical goniometers. Thirty-six normal people of different ages participated in the test. With the acquired data, the multivariate largest Lyapunov exponent was calculated. Finally, the results of the proposed approach were analysed and compared with the traditional method, for which the largest Lyapunov exponent and power spectral density from the centre of pressure were also calculated. The following conclusions can be obtained. The multivariate largest Lyapunov exponent has a higher degree of differentiation in distinguishing balance under eyes-closed conditions. The MLLE value reflects the overall coordination between multisegment movements. Individuals of different ages can be distinguished by their MLLE values. The standing stability of humans decreases with age. PMID:26064182
Hernandez, Silvia R; Kergaravat, Silvina V; Pividori, Maria Isabel
2013-03-15
An approach based on the electrochemical detection of the horseradish peroxidase enzymatic reaction by means of square wave voltammetry was developed for the determination of phenolic compounds in environmental samples. First, a systematic optimization procedure of three factors involved in the enzymatic reaction was carried out using response surface methodology through a central composite design. Second, the enzymatic electrochemical detection coupled with a multivariate calibration method based on the partial least-squares technique was optimized for the determination of a mixture of five phenolic compounds, i.e. phenol, p-aminophenol, p-chlorophenol, hydroquinone and pyrocatechol. The calibration and validation sets were built and assessed. In the calibration model, the LODs for phenolic compounds ranged from 0.6 to 1.4 × 10(-6) mol L(-1). Recoveries for prediction samples were higher than 85%. These compounds were analyzed simultaneously in spiked samples and in water samples collected close to tanneries and landfills. Published by Elsevier B.V.
Extracting chemical information from high-resolution Kβ X-ray emission spectroscopy
NASA Astrophysics Data System (ADS)
Limandri, S.; Robledo, J.; Tirao, G.
2018-06-01
High-resolution X-ray emission spectroscopy allows studying the chemical environment of a wide variety of materials. Chemical information can be obtained by fitting the X-ray spectra and observing the behavior of some spectral features. Spectral changes can also be quantified by means of statistical parameters calculated by considering the spectrum as a probability distribution. Another possibility is to perform statistical multivariate analysis, such as principal component analysis. In this work, the performance of these procedures for extracting chemical information from X-ray emission spectra of mixtures of Mn2+ and Mn4+ oxides is studied. A detailed analysis of the parameters obtained, as well as the associated uncertainties, is presented. The methodologies are also applied to Mn oxidation state characterization of the double perovskite oxides Ba1+xLa1-xMnSbO6 (with 0 ≤ x ≤ 0.7). The results show that statistical parameters and multivariate analysis are the most suitable for the analysis of this kind of spectra.
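Treating a spectrum as a probability distribution, as described above, yields simple statistical descriptors of line shape. A sketch of the first three moments (the energy grid and intensities below are illustrative, not Kβ data):

```python
def spectral_moments(energies, intensities):
    """Treat a spectrum as a probability distribution over energy and return
    (centroid, variance, skewness) of the normalized intensity profile."""
    total = sum(intensities)
    p = [i / total for i in intensities]
    mean = sum(e * pi for e, pi in zip(energies, p))
    var = sum((e - mean) ** 2 * pi for e, pi in zip(energies, p))
    skew = sum((e - mean) ** 3 * pi for e, pi in zip(energies, p)) / var ** 1.5
    return mean, var, skew

# A symmetric toy line: centroid at the peak, zero skewness.
centroid, var, skew = spectral_moments([0, 1, 2, 3, 4], [1, 2, 3, 2, 1])
```

Shifts of the centroid and changes in skewness between oxidation states are exactly the kind of spectral features such parameters are meant to track.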
Kandelbauer, A; Kessler, W; Kessler, R W
2008-03-01
The laccase-catalysed transformation of indigo carmine (IC) with and without a redox active mediator was studied using online UV-visible spectroscopy. Deconvolution of the mixture spectra obtained during the reaction was performed on a model-free basis using multivariate curve resolution (MCR). Thereby, the time courses of educts, products, and reaction intermediates involved in the transformation were reconstructed without prior mechanistic assumptions. Furthermore, the spectral signature of a reactive intermediate which could not have been detected by a classical hard-modelling approach was extracted from the chemometric analysis. The findings suggest that the combined use of UV-visible spectroscopy and MCR may lead to unexpectedly deep mechanistic evidence otherwise buried in the experimental data. Thus, although rather an unspecific method, UV-visible spectroscopy can prove useful in the monitoring of chemical reactions when combined with MCR. This offers a wide range of chemists a cheap and readily available, highly sensitive tool for chemical reaction online monitoring.
Taheri, Mohammadreza; Moazeni-Pourasil, Roudabeh Sadat; Sheikh-Olia-Lavasani, Majid; Karami, Ahmad; Ghassempour, Alireza
2016-03-01
Chromatographic method development for preparative targets is a time-consuming and subjective process. This can be particularly problematic because of the use of valuable samples for isolation and the large consumption of solvents at preparative scale. These processes could be improved by using statistical computations to save time, solvent and experimental effort. Thus, aided by ESI-MS, DryLab software was first applied to gain an overview of the most effective parameters in the separation of synthesized celecoxib and its co-eluted compounds; design-of-experiments software that relies on multivariate modeling as a chemometric approach was then used to predict the optimized touching-band overloading conditions by objective functions, according to the relationship between selectivity and stationary phase properties. The loadability of the method was investigated on the analytical and semi-preparative scales, and the performance of this chemometric approach was confirmed by peak shapes as well as the recovery and purity of the products. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tvermoes, Brooke E., E-mail: brooke.tvermoes@cardn
The objective of this preliminary study was to evaluate the threshold for immune stimulation in mice following local exposure to metal particles and ions representative of normal-functioning cobalt-chromium (CoCr) metal-on-metal (MoM) hip implants. The popliteal lymph node assay (PLNA) was used in this study to assess immune responses in BALB/c mice following treatment with chromium-oxide (Cr{sub 2}O{sub 3}) particles, metal salts (CoCl{sub 2}, CrCl{sub 3} and NiCl{sub 2}), or Cr{sub 2}O{sub 3} particles together with metal salts using single-dose exposures representing approximately 10 days (0.000114 mg), 19 years (0.0800 mg), and 40 years (0.171 mg) of normal implant wear. The immune response elicited following treatment with Cr{sub 2}O{sub 3} particles together with metal salts was also assessed at four additional doses equivalent to approximately 1.5 months (0.0005 mg), 0.6 years (0.0025 mg), 2.3 years (0.01 mg), and 9.3 years (0.04 mg) of normal implant wear. Mice were injected subcutaneously (50 μL) into the right hind foot with the test article, or with the relevant vehicle control. The proliferative response of the draining lymph node cells (LNC) was measured four days after treatment, and stimulation indices (SI) were derived relative to vehicle controls. The PLNA was negative (SI < 3) for all Cr{sub 2}O{sub 3} particle doses, and was also negative at the lowest dose of the metal salt mixture, and the lowest four doses of the Cr{sub 2}O{sub 3} particles with metal salt mixture. The PLNA was positive (SI > 3) at the highest two doses of the metal salt mixture and the highest three doses of the Cr{sub 2}O{sub 3} particles with the metal salt mixture. The provisional NOAEL and LOAEL values identified in this study for immune activation correspond to Co and Cr concentrations in the synovial fluid approximately 500 and 2000 times higher than those reported for normal-functioning MoM hip implants, respectively.
Overall, these results indicate that normal wear conditions are unlikely to result in immune stimulation in individuals not previously sensitized to metals. - Highlights: • Immune responses in mice were assessed following treatment with Cr2O3 particles with metal salts. • The PLNA was negative (SI < 3) for all Cr2O3 particle doses. • A LOAEL for immune activation was identified at 0.04 mg of metal particles with metal salts. • A NOAEL for immune activation was identified at 0.01 mg of metal particles with metal salts.
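The NOAEL/LOAEL bookkeeping implied by the highlights above can be expressed compactly. The SI > 3 positivity threshold and the dose levels below follow the abstract, but the SI values themselves are invented placeholders for illustration, not measured data:

```python
def noael_loael(dose_si, threshold=3.0):
    """Given (dose, stimulation index) pairs, return the provisional
    (NOAEL, LOAEL): the highest dose below the lowest positive dose whose
    SI stays at or under the threshold, and the lowest dose whose SI
    exceeds it."""
    pairs = sorted(dose_si)
    positives = [d for d, si in pairs if si > threshold]
    loael = positives[0] if positives else None
    below = [d for d, si in pairs
             if si <= threshold and (loael is None or d < loael)]
    noael = max(below) if below else None
    return noael, loael

# Doses (mg) patterned on the study design; SI values are hypothetical.
n, l = noael_loael([(0.0005, 1.1), (0.0025, 1.3), (0.01, 2.0),
                    (0.04, 4.5), (0.171, 8.0)])
```

With this illustrative dose-response the function returns 0.01 mg and 0.04 mg, mirroring the provisional NOAEL and LOAEL the abstract identifies.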
Yang, R.
1993-08-01
Toxicity studies were performed with pesticide and fertilizer mixtures representative of groundwater contamination found in California and Iowa. The California mixture was composed of aldicarb, atrazine, 1,2-dibromo-3-chloropropane, 1,2- dichloropropane, ethylene dibromide, simazine, and ammonium nitrate. The Iowa mixture contained alachlor, atrazine, cyanazine, metolachlor, metribuzin, and ammonium nitrate. The mixtures were administered in drinking water (with 512 ppm propylene glycol) to F344/N rats and B6C3F1 mice of each sex at concentrations ranging from 0.1x to 100x, where 1x represented the median concentrations of the individual chemicals found in studies of groundwater contamination from normal agricultural activities. This report focuses primarily on 26-week toxicity studies describing histopathology, clinical pathology, neurobehavior/neuropathology, and reproductive system effects. The genetic toxicity of the mixtures was assessed by determining the frequency of micronuclei in peripheral blood of mice and evaluating micronuclei and sister chromatid exchanges in splenocytes from female mice and male rats. Additional studies with these mixtures that are briefly reviewed in this report include teratology studies with Sprague-Dawley rats and continuous breeding studies with CD-1 Swiss mice. In 26-week drinking water studies of the California and the Iowa mixtures, all rats (10 per sex and group) survived to the end of the studies, and there were no significant effects on body weight gains. Water consumption was not affected by the pesticide/fertilizer contaminants, and there were no clinical signs of toxicity or neurobehavioral effects as measured by a functional observational battery, motor activity evaluations, thermal sensitivity evaluations, and startle response. 
There were no clear adverse effects noted in clinical pathology (including serum cholinesterase activity), organ weight, reproductive system, or histopathologic evaluations, although absolute and relative liver weights were marginally increased with increasing exposure concentration in both male and female rats consuming the Iowa mixture. In 26-week drinking water studies in mice, one male receiving the California mixture at 100x died during the study, and one control female and one female in the 100x group in the Iowa mixture study also died early. It could not be determined if the death of either of the mice in the 100x groups was related to consumption of the pesticide/fertilizer mixtures. Water consumption and body weight gains were not affected in these studies, and no signs of toxicity were noted in clinical observations or in neurobehavioral assessments. No clear adverse effects were noted in clinical pathology, reproductive system, organ weight, or histopathologic evaluations of exposed mice. The pesticide/fertilizer mixtures, when tested over a concentration range similar to that used in the 26-week studies, were found to have no effects in teratology studies or in a continuous breeding assay examining reproductive and developmental toxicity. The California and Iowa pesticide mixtures were tested for induction of micronuclei in peripheral blood erythrocytes of female mice. Results of tests with the California mixture were negative. Significant increases in micronucleated normochromatic erythrocytes were seen at the two highest concentrations (10x and 100x) of the Iowa mixture, but the increases were within the normal range of micronuclei in historical control animals. Splenocytes of male rats and female mice exposed to these mixtures were examined for micronucleus and sister chromatid exchange frequencies.
Sister chromatid exchange frequencies were marginally increased in rats and mice receiving the California mixture, but neither species exhibited increased frequencies of micronucleated splenocytes. None of these changes were considered to have biological importance. In summary, studies of potential toxicity associated with the consumption of mixtures of pesticides and a fertilizer representative of groundwater contamination in agricultural areas of Iowa and California failed to demonstrate any significant adverse effects in rats or mice receiving the mixtures in drinking water at concentrations as high as 100 times the median concentrations of the individual chemicals determined by groundwater surveys. NOTE: These studies were supported in part by funds from the Comprehensive Environmental Response, Compensation, and Liability Act trust fund (Superfund) by an interagency agreement with the Agency for Toxic Substances and Disease Registry, U.S. Public Health Service.
Healing effect of sea buckthorn, olive oil, and their mixture on full-thickness burn wounds.
Edraki, Mitra; Akbarzadeh, Armin; Hosseinzadeh, Massood; Tanideh, Nader; Salehi, Alireza; Koohi-Hosseinabadi, Omid
2014-07-01
The purpose of this study is to evaluate the healing effect of silver sulfadiazine (SSD), sea buckthorn, olive oil, and a 5% sea buckthorn and olive oil mixture on full-thickness burn wounds with respect to both gross and histopathologic features. Full-thickness burns were induced on 60 rats; the rats were then divided into 5 groups and treated with sea buckthorn, olive oil, a 5% sea buckthorn/olive oil mixture, SSD, and normal saline (control). They were observed for 28 days, and the wounds' healing process was evaluated. Wound contraction occurred faster in the sea buckthorn, olive oil, and sea buckthorn/olive oil mixture groups compared with the SSD and control groups. The volume of the exudates was controlled more effectively in wounds treated with the sea buckthorn/olive oil mixture. Purulent exudates were observed in the control group, but the other groups did not show infection. The group treated with the sea buckthorn/olive oil mixture revealed more developed re-epithelialization with a continuous basement membrane and mature granulation tissue, whereas the SSD-treated group showed ulceration, necrosis, and immature granulation. The results show that sea buckthorn and olive oil individually are proper dressings for burn wounds and that they also show a synergistic effect when used together. A sea buckthorn and olive oil mixture could be considered an alternative dressing for full-thickness burns because of its improved wound healing characteristics and antibacterial property.
A Cyber-Attack Detection Model Based on Multivariate Analyses
NASA Astrophysics Data System (ADS)
Sakai, Yuto; Rinsaka, Koichiro; Dohi, Tadashi
In the present paper, we propose a novel cyber-attack detection model that applies two multivariate-analysis methods to the audit data observed on a host machine. The statistical techniques used here are the well-known Hayashi's quantification method IV and the cluster analysis method. We quantify the observed qualitative audit event sequences via quantification method IV and group similar audit event sequences together based on cluster analysis. It is shown in simulation experiments that our model can improve the cyber-attack detection accuracy in some realistic cases where both normal and attack activities are intermingled.
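The pipeline described, quantify qualitative audit events into a numeric embedding and then cluster similar sessions, can be sketched roughly as follows. This is an illustrative stand-in only: the event categories, counts, and the truncated-SVD embedding (used here in place of Hayashi's quantification method IV, which is closely related to such spectral scorings) are all assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import TruncatedSVD

rng = np.random.default_rng(6)

# Toy stand-in: sessions described by counts of 6 qualitative audit event
# types. "Attack" sessions overuse two otherwise-rare events (all invented).
normal = rng.poisson([5, 4, 3, 2, 0.1, 0.1], size=(40, 6))
attack = rng.poisson([1, 1, 1, 1, 6, 5], size=(10, 6))
X = np.vstack([normal, attack]).astype(float)

# Quantify the qualitative events via a low-rank spectral embedding
# (standing in for quantification method IV), then group similar sessions.
emb = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb)

# Treat the smaller cluster as the suspected attack sessions.
attack_cluster = int(np.argmin(np.bincount(labels)))
flagged = np.where(labels == attack_cluster)[0]
```

With well-separated event profiles like these, the smaller cluster collects exactly the sessions whose event mix departs from normal activity.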
NASA Astrophysics Data System (ADS)
Relan, Rishi; Tiels, Koen; Marconato, Anna; Dreesen, Philippe; Schoukens, Johan
2018-05-01
Many real-world systems exhibit a quasi-linear or weakly nonlinear behavior during normal operation, and a hard saturation effect for high peaks of the input signal. In this paper, a methodology to identify a parsimonious discrete-time nonlinear state space model (NLSS) for a nonlinear dynamical system from relatively short data records is proposed. The capability of the NLSS model structure is demonstrated by introducing two different initialisation schemes, one of them using multivariate polynomials. In addition, a method using first-order information of the multivariate polynomials and tensor decomposition is employed to obtain a parsimonious decoupled representation of the set of multivariate real polynomials estimated during the identification of the NLSS model. Finally, the experimental verification of the model structure is done on the cascaded water tanks benchmark identification problem.
NASA Astrophysics Data System (ADS)
Dong, Yijun
Research on measuring the risk of bond portfolios and on bond portfolio optimization was previously relatively rare, because the risk factors of bond portfolios were not very volatile. However, this condition has changed recently. The 2008 financial crisis brought high volatility to the risk factors and the related bond securities, even for highly rated U.S. treasury bonds. Moreover, the risk factors of bond portfolios show properties of fat-tailedness and asymmetry like the risk factors of equity portfolios. Therefore, we need to use advanced techniques to measure and manage the risk of bond portfolios. In our paper, we first apply an autoregressive moving average generalized autoregressive conditional heteroscedasticity (ARMA-GARCH) model with multivariate normal tempered stable (MNTS) distribution innovations to predict risk factors of U.S. treasury bonds and statistically demonstrate that the MNTS distribution has the ability to capture the properties of risk factors based on goodness-of-fit tests. Then, based on empirical evidence, we find that the VaR and AVaR estimated by assuming the normal tempered stable distribution are more realistic and reliable than those estimated by assuming the normal distribution, especially for the financial crisis period. Finally, we use mean-risk portfolio optimization to minimize portfolios' potential risks. The empirical study indicates that the optimized bond portfolios have better risk-adjusted performances than the benchmark portfolios for some periods. Moreover, the optimized bond portfolios obtained by assuming the normal tempered stable distribution have improved performances in comparison to the optimized bond portfolios obtained by assuming the normal distribution.
DOT National Transportation Integrated Search
2012-11-01
When a bridge engineer encounters a design or analysis problem concerning a bridge substructure, that structure will commonly have a mixture of member types, some slender, and some squat. Slender members are generally governed by flexure, and normal ...
Variable Screening for Cluster Analysis.
ERIC Educational Resources Information Center
Donoghue, John R.
Inclusion of irrelevant variables in a cluster analysis adversely affects subgroup recovery. This paper examines using moment-based statistics to screen variables; only variables that pass the screening are then used in clustering. Normal mixtures are analytically shown often to possess negative kurtosis. Two related measures, "m" and…
Engström, Wilhelm; Darbre, Philippa; Eriksson, Staffan; Gulliver, Linda; Hultman, Tove; Karamouzis, Michalis V; Klaunig, James E; Mehta, Rekha; Moorwood, Kim; Sanderson, Thomas; Sone, Hideko; Vadgama, Pankaj; Wagemaker, Gerard; Ward, Andrew; Singh, Neetu; Al-Mulla, Fahd; Al-Temaimi, Rabeah; Amedei, Amedeo; Colacci, Anna Maria; Vaccari, Monica; Mondello, Chiara; Scovassi, A Ivana; Raju, Jayadev; Hamid, Roslida A; Memeo, Lorenzo; Forte, Stefano; Roy, Rabindra; Woodrick, Jordan; Salem, Hosni K; Ryan, Elizabeth P; Brown, Dustin G; Bisson, William H
2015-06-01
The aim of this work is to review current knowledge relating the established cancer hallmark, sustained cell proliferation, to the existence of chemicals present as low dose mixtures in the environment. Normal cell proliferation is under tight control, i.e. cells respond to a signal to proliferate, and although most cells continue to proliferate into adult life, the multiplication ceases once the stimulatory signal disappears or if the cells are exposed to growth inhibitory signals. Under such circumstances, normal cells remain quiescent until they are stimulated to resume further proliferation. In contrast, tumour cells are unable to halt proliferation, either when subjected to growth inhibitory signals or in the absence of growth stimulatory signals. Environmental chemicals with carcinogenic potential may cause sustained cell proliferation by interfering with some cell proliferation control mechanisms committing cells to an indefinite proliferative span. © The Author 2015. Published by Oxford University Press. All rights reserved.
Chia, Jean-San; Du, Jia-Ling; Wu, Ming-Shiou; Hsu, Wei-Bin; Chiang, Chun-Pin; Sun, Andy; Lu, John Jenn-Yenn; Wang, Won-Bo
2013-05-01
Previous studies have shown that soybean fermentation products can act as cancer chemoprevention or therapeutic agents. In this study, the anticancer activities of a fermentation product of soybean, black bean, and green bean mixture (BN999) were investigated. We found that BN999 inhibited the growth of human breast cancer AU565 cells and prostate adenocarcinoma PC-3 cells but not that of normal human cells. BN999 induced apoptosis in various human cancer cells but not in normal human cells. BN999 treatment of AU565 cancer cells resulted in activation of calpain and caspase-8, -9, and -3, suggesting that BN999 induces apoptosis via receptor-, mitochondria-, and endoplasmic reticulum-mediated pathways. Finally, we showed that BN999 inhibited the growth of mouse CT-26 colon cancer xenografts in syngenic BALB/c mice without causing obvious side effects. Together, these data suggest that BN999 has potential to be used as a cancer chemoprevention or therapeutic agent.
Rett syndrome: stimulation of endogenous biogenic amines.
Pelligra, R; Norton, R D; Wilkinson, R; Leon, H A; Matson, W R
1992-06-01
Transient hypercapnic hyperoxemia was induced in two Rett syndrome children by the administration of a gaseous mixture of 80% O2 and 20% CO2. Time course studies of neurotransmitters and their metabolites showed an immediate and marked increase in central biogenic amine turnover following inhalation of the gas mixture. The increased turnover of biogenic amines was associated with improved clinical changes. This suggests a coupled relationship and provides further support for an etiological role of neurotransmitter dysfunction in Rett syndrome. In a complementary study, elevation of pulmonary CO2 by application of a simple rebreathing device resulted in improvement of abnormal blood gases and elimination of the Cheyne-Stokes-like respiratory pattern of the Rett syndrome. Near normalization of the EEG occurred when a normal respiratory pattern was imposed by means of a respirator. Taken together, these results lead to the preliminary conclusion that cerebral hypoxemia secondary to abnormal respiratory function may contribute to diminished production of biogenic amines in Rett syndrome.
Rett syndrome - Stimulation of endogenous biogenic amines
NASA Technical Reports Server (NTRS)
Pelligra, R.; Norton, R. D.; Wilkinson, R.; Leon, H. A.; Matson, W. R.
1992-01-01
Transient hypercapnic hyperoxemia was induced in two Rett syndrome children by the administration of a gaseous mixture of 80 percent O2 and 20 percent CO2. Time course studies of neurotransmitters and their metabolites showed an immediate and marked increase in central biogenic amine turnover following inhalation of the gas mixture. The increased turnover of biogenic amines was associated with improved clinical changes. This suggests a coupled relationship and provides further support for an etiological role of neurotransmitter dysfunction in Rett syndrome. In a complementary study, elevation of pulmonary CO2 by application of a simple rebreathing device resulted in improvement of abnormal blood gases and elimination of the Cheyne-Stokes-like respiratory pattern of the Rett syndrome. Near normalization of the EEG occurred when a normal respiratory pattern was imposed by means of a respirator. Taken together, these results lead to the preliminary conclusion that cerebral hypoxemia secondary to abnormal respiratory function may contribute to diminished production of biogenic amines in Rett syndrome.
Scoring in genetically modified organism proficiency tests based on log-transformed results.
Thompson, Michael; Ellison, Stephen L R; Owen, Linda; Mathieson, Kenneth; Powell, Joanne; Key, Pauline; Wood, Roger; Damant, Andrew P
2006-01-01
The study considers data from 2 UK-based proficiency schemes and includes data from a total of 29 rounds and 43 test materials over a period of 3 years. The results from the 2 schemes are similar and reinforce each other. The amplification process used in quantitative polymerase chain reaction determinations predicts a mixture of normal, binomial, and lognormal distributions dominated by the latter 2. As predicted, the study results consistently follow a positively skewed distribution. Log-transformation prior to calculating z-scores is effective in establishing near-symmetric distributions that are sufficiently close to normal to justify interpretation on the basis of the normal distribution.
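The effect described, log-transforming positively skewed proficiency results before z-scoring, can be illustrated with a small sketch; the lognormal distribution and its parameters are arbitrary assumptions, not the schemes' data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Positively skewed "proficiency" results, simulated as lognormal
# (distribution and parameters are illustrative assumptions).
results = rng.lognormal(mean=0.0, sigma=0.5, size=200)

# Log-transform first, then compute z-scores on the transformed scale.
logged = np.log(results)
z = (logged - logged.mean()) / logged.std(ddof=1)

# Sample skewness on the log scale is near zero, so interpretation of
# the z-scores against the normal distribution is justified.
skew = float(np.mean(((logged - logged.mean()) / logged.std()) ** 3))
```

Computing z-scores directly on the raw, skewed scale would instead concentrate scores below the mean and stretch the upper tail, distorting any normal-based interpretation.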
Polynomial compensation, inversion, and approximation of discrete time linear systems
NASA Technical Reports Server (NTRS)
Baram, Yoram
1987-01-01
The least-squares transformation of a discrete-time multivariable linear system into a desired one by convolving the first with a polynomial system yields optimal polynomial solutions to the problems of system compensation, inversion, and approximation. The polynomial coefficients are obtained from the solution to a so-called normal linear matrix equation, whose coefficients are shown to be the weighting patterns of certain linear systems. These, in turn, can be used in the recursive solution of the normal equation.
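As a hedged one-dimensional sketch of the idea (the abstract treats the multivariable case), the convolution of a system with a compensating FIR polynomial is a linear map, so the optimal coefficients follow from the least-squares (normal-equation) solution. The impulse response, polynomial length, and target (pure inversion) below are invented for illustration.

```python
import numpy as np

# Impulse response of a simple stable SISO system (a stand-in for the
# multivariable systems treated in the abstract).
g = np.array([1.0, 0.5, 0.25, 0.125])

m = 8                  # length of the compensating FIR polynomial
n = len(g) + m - 1     # length of the convolved response

# Convolution matrix: (G @ c)[k] = sum_j g[k - j] * c[j]
G = np.zeros((n, m))
for j in range(m):
    G[j:j + len(g), j] = g

# Desired response: a unit impulse, i.e. approximate system inversion.
d = np.zeros(n)
d[0] = 1.0

# Least-squares solution of the normal equation (G^T G) c = G^T d.
c, *_ = np.linalg.lstsq(G, d, rcond=None)
residual = float(np.linalg.norm(G @ c - d))
```

Choosing a delayed impulse for `d` instead yields a compensator, and replacing `d` by another system's response yields least-squares approximation, the two other problems the abstract covers with the same equation.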
Characterizations of linear sufficient statistics
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Reoner, R.; Decell, H. P., Jr.
1977-01-01
Conditions are given under which a surjective bounded linear operator T from a Banach space X to a Banach space Y is a sufficient statistic for a dominated family of probability measures defined on the Borel sets of X. These results were applied to characterize linear sufficient statistics for families of the exponential type, including as special cases the Wishart and multivariate normal distributions. The latter result was used to establish precisely which procedures for sampling from a normal population have the property that the sample mean is a sufficient statistic.
Varas, Lautaro R; Pontes, F C; Santos, A C F; Coutinho, L H; de Souza, G G B
2015-09-15
The ion-ion-coincidence mass spectroscopy technique brings useful information about the fragmentation dynamics of doubly and multiply charged ionic species. We advocate the use of a matrix-parameter methodology in order to represent and interpret the entire ion-ion spectra associated with the ionic dissociation of doubly charged molecules. This method makes it possible, among other things, to infer fragmentation processes and to extract information about overlapped ion-ion coincidences. This important piece of information is difficult to obtain from other previously described methodologies. A Wiley-McLaren time-of-flight mass spectrometer was used to discriminate the positively charged fragment ions resulting from the sample ionization by a pulsed 800 eV electron beam. We exemplify the application of this methodology by analyzing the fragmentation and ionic dissociation of the dimethyl disulfide (DMDS) molecule as induced by fast electrons. The doubly charged dissociation was analyzed using the Multivariate Normal Distribution. The ion-ion spectrum of the DMDS molecule was obtained at an incident electron energy of 800 eV and was matrix represented using the Multivariate Distribution theory. The proposed methodology allows us to distinguish information among [CHnSHn]+/[CH3]+ (n = 1-3) fragment ions in the ion-ion coincidence spectra using ion-ion coincidence data. Using the momenta balance methodology for the inferred parameters, a secondary decay mechanism is proposed for the [CHS]+ ion formation. As an additional check on the methodology, previously published data on the SiF4 molecule was re-analyzed with the present methodology and the results were shown to be statistically equivalent. The use of a Multivariate Normal Distribution allows for the representation of the whole ion-ion mass spectrum of doubly or multiply ionized molecules as a combination of parameters and the extraction of information among overlapped data.
We have successfully applied this methodology to the analysis of the fragmentation of the DMDS molecule. Copyright © 2015 John Wiley & Sons, Ltd.
On the cause of the non-Gaussian distribution of residuals in geomagnetism
NASA Astrophysics Data System (ADS)
Hulot, G.; Khokhlov, A.
2017-12-01
To describe errors in the data, Gaussian distributions naturally come to mind. In many practical instances, indeed, Gaussian distributions are appropriate. In the broad field of geomagnetism, however, it has repeatedly been noted that residuals between data and models often display much sharper distributions, sometimes better described by a Laplace distribution. In the present study, we make the case that such non-Gaussian behaviors are very likely the result of what is known as mixture of distributions in the statistical literature. Mixtures arise as soon as the data do not follow a common distribution or are not properly normalized, the resulting global distribution being a mix of the various distributions followed by subsets of the data, or even individual datum. We provide examples of the way such mixtures can lead to distributions that are much sharper than Gaussian distributions and discuss the reasons why such mixtures are likely the cause of the non-Gaussian distributions observed in geomagnetism. We also show that when properly selecting sub-datasets based on geophysical criteria, statistical mixture can sometimes be avoided and much more Gaussian behaviors recovered. We conclude with some general recommendations and point out that although statistical mixture always tends to sharpen the resulting distribution, it does not necessarily lead to a Laplacian distribution. This needs to be taken into account when dealing with such non-Gaussian distributions.
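A minimal simulation shows the mechanism the abstract describes: mixing two zero-mean Gaussians with unequal scales yields a distribution with a sharper peak and fatter tails (positive excess kurtosis) than any single Gaussian. The scales and sample sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two zero-mean Gaussian subsets with very different scales, mimicking
# residuals from data sources with unequal (or unnormalized) errors.
a = rng.normal(0.0, 1.0, size=50_000)
b = rng.normal(0.0, 5.0, size=50_000)
mixed = np.concatenate([a, b])

def excess_kurtosis(x):
    """Sample excess kurtosis: 0 for a Gaussian, > 0 for sharper peaks."""
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 4) - 3.0)

k_single = excess_kurtosis(a)      # near 0 for a pure Gaussian
k_mixed = excess_kurtosis(mixed)   # clearly positive for the mixture
```

For this 50/50 mix the theoretical excess kurtosis is about 2.6, illustrating how a mixture sharpens the distribution without necessarily producing an exact Laplace shape, just as the abstract cautions.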
NASA Astrophysics Data System (ADS)
Chen, Po-Hsiung; Shimada, Rintaro; Yabumoto, Sohshi; Okajima, Hajime; Ando, Masahiro; Chang, Chiou-Tzu; Lee, Li-Tzu; Wong, Yong-Kie; Chiou, Arthur; Hamaguchi, Hiro-O.
2016-01-01
We have developed an automatic and objective method for detecting human oral squamous cell carcinoma (OSCC) tissues with Raman microspectroscopy. We measure 196 independent Raman spectra from 196 different points of one oral tissue sample and globally analyze these spectra using a Multivariate Curve Resolution (MCR) analysis. Discrimination of OSCC tissues is automatically and objectively made by spectral matching comparison of the MCR decomposed Raman spectra and the standard Raman spectrum of keratin, a well-established molecular marker of OSCC. We use a total of 24 tissue samples, 10 OSCC and 10 normal tissues from the same 10 patients, 3 OSCC and 1 normal tissues from different patients. Following the newly developed protocol presented here, we have been able to detect OSCC tissues with 77 to 92% sensitivity (depending on how to define positivity) and 100% specificity. The present approach lends itself to a reliable clinical diagnosis of OSCC substantiated by the “molecular fingerprint” of keratin.
Lim, Jongguk; Kim, Giyoung; Mo, Changyeun; Oh, Kyoungmin; Yoo, Hyeonchae; Ham, Hyeonheui; Kim, Moon S.
2017-01-01
The purpose of this study is to use near-infrared reflectance (NIR) spectroscopy equipment to nondestructively and rapidly discriminate Fusarium-infected hulled barley. Both normal hulled barley and Fusarium-infected hulled barley were scanned by using a NIR spectrometer with a wavelength range of 1175 to 2170 nm. Multiple mathematical pretreatments were applied to the reflectance spectra obtained for Fusarium discrimination and the multivariate analysis method of partial least squares discriminant analysis (PLS-DA) was used for discriminant prediction. The PLS-DA prediction model developed by applying the second-order derivative pretreatment to the reflectance spectra obtained from the side of hulled barley without crease achieved 100% accuracy in discriminating the normal hulled barley and the Fusarium-infected hulled barley. These results demonstrated the feasibility of rapid discrimination of the Fusarium-infected hulled barley by combining multivariate analysis with the NIR spectroscopic technique, which is utilized as a nondestructive detection method. PMID:28974012
Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A
2012-03-15
To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.
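A sketch of the recommended LASSO approach on synthetic NTCP-like data, using scikit-learn's L1-penalized logistic regression with a cross-validated penalty. The dataset shape, effect sizes, and the use of scikit-learn are assumptions for illustration, not the study's code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

rng = np.random.default_rng(2)

# Synthetic NTCP-like data: 200 patients, 20 candidate predictors, of
# which only the first three drive complication risk (all invented).
X = rng.normal(size=(200, 20))
logit = 1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.6 * X[:, 2] - 0.5
y = (rng.uniform(size=200) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# L1-penalized (LASSO-type) logistic regression, with the penalty
# strength chosen by 5-fold cross-validation over 10 candidate values.
model = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=10, cv=5)
model.fit(X, y)

n_selected = int(np.sum(model.coef_ != 0))  # predictors retained
```

As in the abstract, the L1 penalty shrinks uninformative coefficients to exactly zero, giving an interpretable model like stepwise selection while the cross-validated penalty guards against overfitting.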
NASA Astrophysics Data System (ADS)
Huang, Shaohua; Wang, Lan; Chen, Weisheng; Feng, Shangyuan; Lin, Juqiang; Huang, Zufang; Chen, Guannan; Li, Buhong; Chen, Rong
2014-11-01
Non-invasive esophagus cancer detection based on urine surface-enhanced Raman spectroscopy (SERS) analysis was presented. Urine SERS spectra were measured on esophagus cancer patients (n = 56) and healthy volunteers (n = 36) for control analysis. Tentative assignments of the urine SERS spectra indicated some interesting esophagus cancer-specific biomolecular changes, including a decrease in the relative content of urea and an increase in the percentage of uric acid in the urine of esophagus cancer patients compared to that of healthy subjects. Principal component analysis (PCA) combined with linear discriminant analysis (LDA) was employed to analyze and differentiate the SERS spectra between normal and esophagus cancer urine. The diagnostic algorithms utilizing a multivariate analysis method achieved a diagnostic sensitivity of 89.3% and specificity of 83.3% for separating esophagus cancer samples from normal urine samples. These results from the exploratory work suggested that silver nanoparticle-based urine SERS analysis coupled with PCA-LDA multivariate analysis has potential for non-invasive detection of esophagus cancer.
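The PCA-LDA pipeline can be sketched on synthetic stand-in "spectra". The channel count, band shape, and class separation below are invented and far cleaner than real SERS data; the point is only the structure of the classifier.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)

# Synthetic stand-in spectra (100 channels): the "cancer" class carries an
# extra band (e.g. mimicking the uric-acid increase); amplitudes invented.
chan = np.arange(100)
band = np.exp(-0.5 * ((chan - 50) / 5.0) ** 2)
normal = rng.normal(size=(36, 100))
cancer = rng.normal(size=(56, 100)) + 3.0 * band

X = np.vstack([normal, cancer])
y = np.array([0] * 36 + [1] * 56)

# PCA for dimensionality reduction, then LDA for class discrimination.
clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
pred = clf.fit(X, y).predict(X)

sensitivity = float(np.mean(pred[y == 1] == 1))  # cancer correctly flagged
specificity = float(np.mean(pred[y == 0] == 0))  # normals correctly cleared
```

In practice the sensitivity/specificity figures quoted in the abstract would come from held-out or cross-validated predictions, not the in-sample predictions used in this toy sketch.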
Mixture distributions of wind speed in the UAE
NASA Astrophysics Data System (ADS)
Shin, J.; Ouarda, T.; Lee, T. S.
2013-12-01
Wind speed probability distribution is commonly used to estimate potential wind energy. The 2-parameter Weibull distribution has been most widely used to characterize the distribution of wind speed. However, it is unable to properly model wind speed regimes when the wind speed distribution presents bimodal and kurtotic shapes. Several studies have concluded that the Weibull distribution should not be used for frequency analysis of wind speed without investigation of the wind speed distribution. Due to these mixture distributional characteristics of wind speed data, the application of mixture distributions should be further investigated in the frequency analysis of wind speed. A number of studies have investigated the potential wind energy in different parts of the Arabian Peninsula. Mixture distributional characteristics of wind speed were detected in some of these studies. Nevertheless, mixture distributions have not been employed for wind speed modeling in the Arabian Peninsula. In order to improve our understanding of wind energy potential in the Arabian Peninsula, mixture distributions should be tested for the frequency analysis of wind speed. The aim of the current study is to assess the suitability of mixture distributions for the frequency analysis of wind speed in the UAE. Hourly mean wind speed data at 10-m height from 7 stations were used in the current study. The Weibull and Kappa distributions were employed as representatives of the conventional non-mixture distributions. Ten mixture distributions were constructed by mixing four probability distributions: Normal, Gamma, Weibull, and Extreme Value type-one (EV-1). Three parameter estimation methods, the Expectation-Maximization algorithm, the Least Squares method, and the Meta-Heuristic Maximum Likelihood (MHML) method, were employed to estimate the parameters of the mixture distributions.
In order to compare the goodness-of-fit of tested distributions and parameter estimation methods for sample wind data, the adjusted coefficient of determination, Bayesian Information Criterion (BIC) and Chi-squared statistics were computed. Results indicate that MHML presents the best performance of parameter estimation for the used mixture distributions. In most of the employed 7 stations, mixture distributions give the best fit. When the wind speed regime shows mixture distributional characteristics, most of these regimes present the kurtotic statistical characteristic. Particularly, applications of mixture distributions for these stations show a significant improvement in explaining the whole wind speed regime. In addition, the Weibull-Weibull mixture distribution presents the best fit for the wind speed data in the UAE.
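A hedged sketch of fitting the Weibull-Weibull mixture singled out by the study, here by direct maximum likelihood with a general-purpose optimizer (a simple alternative to the EM and MHML estimators the study actually compares); the synthetic wind regimes are invented.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(4)

# Synthetic bimodal "wind speed" sample: two Weibull regimes (invented).
v = np.concatenate([
    stats.weibull_min.rvs(2.0, scale=3.0, size=600, random_state=rng),
    stats.weibull_min.rvs(3.5, scale=9.0, size=400, random_state=rng),
])

def neg_log_lik(theta):
    w = 1.0 / (1.0 + np.exp(-theta[0]))   # mixing weight mapped into (0, 1)
    k1, s1, k2, s2 = np.exp(theta[1:])    # shapes and scales, kept positive
    pdf = (w * stats.weibull_min.pdf(v, k1, scale=s1)
           + (1.0 - w) * stats.weibull_min.pdf(v, k2, scale=s2))
    return -np.sum(np.log(pdf + 1e-300))

# Direct maximum-likelihood fit of the two-component Weibull mixture.
res = optimize.minimize(
    neg_log_lik,
    x0=[0.0, np.log(2.0), np.log(3.0), np.log(3.0), np.log(8.0)],
    method="Nelder-Mead", options={"maxiter": 5000})
w_hat = 1.0 / (1.0 + np.exp(-res.x[0]))

# For comparison: a single Weibull fitted to the same bimodal data.
k0, loc0, s0 = stats.weibull_min.fit(v, floc=0)
nll_single = -np.sum(stats.weibull_min.logpdf(v, k0, loc0, s0))
```

On bimodal data like this the mixture's negative log-likelihood (`res.fun`) falls well below `nll_single`; this likelihood gap (penalized for the extra parameters) is exactly what the BIC comparison in the study quantifies.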
Mixture EMOS model for calibrating ensemble forecasts of wind speed.
Baran, S; Lerch, S
2016-03-01
Ensemble model output statistics (EMOS) is a statistical tool for post-processing forecast ensembles of weather variables obtained from multiple runs of numerical weather prediction models in order to produce calibrated predictive probability density functions. The EMOS predictive probability density function is given by a parametric distribution with parameters depending on the ensemble forecasts. We propose an EMOS model for calibrating wind speed forecasts based on weighted mixtures of truncated normal (TN) and log-normal (LN) distributions, where model parameters and component weights are estimated by optimizing the values of proper scoring rules over a rolling training period. The new model is tested on wind speed forecasts of the 50-member European Centre for Medium-Range Weather Forecasts ensemble, the 11-member Aire Limitée Adaptation dynamique Développement International-Hungary Ensemble Prediction System ensemble of the Hungarian Meteorological Service, and the eight-member University of Washington mesoscale ensemble, and its predictive performance is compared with that of various benchmark EMOS models based on single parametric families and combinations thereof. The results indicate improved calibration of probabilistic forecasts and accuracy of point forecasts in comparison with the raw ensemble and climatological forecasts. The mixture EMOS model significantly outperforms the TN and LN EMOS methods; moreover, it provides better calibrated forecasts than the TN-LN combination model and offers increased flexibility while avoiding covariate selection problems. © 2016 The Authors. Environmetrics published by John Wiley & Sons Ltd.
Long, Wu-Jian; Khayat, Kamal Henri; Lemieux, Guillaume; Hwang, Soo-Duck; Xing, Feng
2014-01-01
With the extensive use of self-consolidating concrete (SCC) worldwide, it is important to ensure that such concrete can secure uniform in-situ mechanical properties that are similar to those obtained with properly consolidated concrete of conventional fluidity. Ensuring proper stability of SCC is essential to enhance the uniformity of in-situ mechanical properties, including bond to embedded reinforcement, which is critical for structural engineers considering the specification of SCC for prestressed applications. In this investigation, six wall elements measuring 1540 mm × 2150 mm × 200 mm were cast using five SCC mixtures and one reference high-performance concrete (HPC) of normal consistency to evaluate the uniformity of bond strength between prestressing strands and concrete as well as the distribution of compressive strength obtained from cores along wall elements. The evaluated SCC mixtures used for casting wall elements were proportioned to achieve a slump flow consistency of 680 ± 15 mm, a minimum caisson filling capacity of 80%, and a visual stability index of 0.5 to 1. Given the spreads in viscosity and static stability of the SCC mixtures, the five wall elements exhibited different levels of homogeneity in in-situ compressive strength and pull-out bond strength. Test results also indicate that despite the high fluidity of SCC, stable concrete can lead to more homogeneous in-situ properties than HPC of normal consistency subjected to mechanical vibration. PMID:28788223
Long, Wu-Jian; Khayat, Kamal Henri; Lemieux, Guillaume; Hwang, Soo-Duck; Xing, Feng
2014-10-10
With the extensive use of self-consolidating concrete (SCC) worldwide, it is important to ensure that such concrete can secure uniform in-situ mechanical properties that are similar to those obtained with properly consolidated concrete of conventional fluidity. Ensuring proper stability of SCC is essential to enhance the uniformity of in-situ mechanical properties, including bond to embedded reinforcement, which is critical for structural engineers considering the specification of SCC for prestressed applications. In this investigation, six wall elements measuring 1540 mm × 2150 mm × 200 mm were cast using five SCC mixtures and one reference high-performance concrete (HPC) of normal consistency to evaluate the uniformity of bond strength between prestressing strands and concrete as well as the distribution of compressive strength obtained from cores along wall elements. The evaluated SCC mixtures used for casting wall elements were proportioned to achieve a slump flow consistency of 680 ± 15 mm, a minimum caisson filling capacity of 80%, and a visual stability index of 0.5 to 1. Given the spreads in viscosity and static stability of the SCC mixtures, the five wall elements exhibited different levels of homogeneity in in-situ compressive strength and pull-out bond strength. Test results also indicate that despite the high fluidity of SCC, stable concrete can lead to more homogeneous in-situ properties than HPC of normal consistency subjected to mechanical vibration.
Various chemicals in the environment can disrupt normal endocrine function, including steroid hormone synthesis, causing deleterious effects. Because these compounds can act at different levels of the hypothalamus-pituitary-gonadal (HPG) axis, their effects can lead to a mixture...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tripathi, Markandey M.; Krishnan, Sundar R.; Srinivasan, Kalyan K.
Chemiluminescence emissions from OH*, CH*, C2, and CO2 formed within the reaction zone of premixed flames depend upon the fuel-air equivalence ratio in the burning mixture. In the present paper, a new partial least squares regression (PLS-R) based multivariate sensing methodology is investigated and compared with an OH*/CH* intensity-ratio-based calibration model for sensing equivalence ratio in atmospheric methane-air premixed flames. Five replications of spectral data at nine different equivalence ratios ranging from 0.73 to 1.48 were used in the calibration of both models. During model development, the PLS-R model was initially validated with the calibration data set using the leave-one-out cross-validation technique. Since the PLS-R model used the entire raw spectral intensities, it did not need the nonlinear background subtraction of CO2 emission that is required for typical OH*/CH* intensity-ratio calibrations. An unbiased spectral data set (not used in the PLS-R model development), covering 28 different equivalence ratio conditions ranging from 0.71 to 1.67, was used to predict equivalence ratios using the PLS-R and the intensity-ratio calibration models. It was found that the equivalence ratios predicted with the PLS-R based multivariate calibration model matched the experimentally measured equivalence ratios within 7%, whereas the OH*/CH* intensity-ratio calibration grossly underpredicted equivalence ratios in comparison to measured equivalence ratios, especially under rich conditions (equivalence ratio > 1.2). The practical implications of the chemiluminescence-based multivariate equivalence ratio sensing methodology are also discussed.
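As a rough illustration of the multivariate calibration idea behind this abstract, the following is a minimal NIPALS PLS1 sketch in NumPy. The synthetic low-rank data stand in for real chemiluminescence spectra; the paper's actual preprocessing, replication scheme, and validation are not reproduced.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Fit a PLS1 regression (NIPALS) of a scalar response y on rows of X.

    Minimal sketch: in the flame application, X would hold measured
    chemiluminescence spectra and y the equivalence ratios."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xr, yr = X - x_mean, y - y_mean
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xr.T @ yr                     # weight vector for this component
        w /= np.linalg.norm(w)
        t = Xr @ w                        # scores
        tt = t @ t
        p = Xr.T @ t / tt                 # X loadings
        q = (yr @ t) / tt                 # y loading
        Xr = Xr - np.outer(t, p)          # deflate X and y
        yr = yr - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P = np.array(W).T, np.array(P).T
    # regression coefficients back in the original (centered) X space
    beta = W @ np.linalg.solve(P.T @ W, np.array(Q))
    return beta, x_mean, y_mean

def pls1_predict(X, beta, x_mean, y_mean):
    return (X - x_mean) @ beta + y_mean
```

Keeping only a few latent components is what makes PLS-R robust to the collinear spectral backgrounds that complicate single intensity-ratio calibrations.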
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiaoyan Tang; Min Shao; Yuanhang Zhang
1996-12-31
Ambient aerosol is one of the most important pollutants in China. This paper presents the aerosol sources of the Beijing area revealed by a combination of multivariate analysis models and 14C tracer measurements on accelerator mass spectrometry (AMS). The results indicated that the mass concentration of particulate matter (<100 μm) did not increase rapidly compared with economic development in Beijing city. The multivariate analysis showed that the predominant source was soil dust, which contributed more than 50% to atmospheric particles. However, it would be risky to conclude that aerosol pollution from anthropogenic sources was less important in Beijing city based on the above phenomenon. Due to the lack of reliable tracers, it was very hard to distinguish coal burning from the soil source; thus, it was suspected that the soil source above might be a mixture of soil dust and coal burning. The 14C measurements showed that the carbonaceous species of aerosol had quite different emission sources. For carbonaceous aerosols in Beijing, the contribution from fossil fuel to ambient particles was nearly 2/3; as man-made activities (coal burning, etc.) increased, the fossil part would contribute more to atmospheric carbonaceous particles. For example, in downtown Beijing during space-heating seasons, fossil fuel contributed more than 95% of carbonaceous particles, which would be potentially harmful to the population. By using multivariate analysis together with 14C data, two important aerosol sources in Beijing (soil dust and coal combustion) were more reliably distinguished, which is critically important for the assessment of the aerosol problem in China.
Jåstad, Eirik O; Torheim, Turid; Villeneuve, Kathleen M; Kvaal, Knut; Hole, Eli O; Sagstuen, Einar; Malinen, Eirik; Futsaether, Cecilia M
2017-09-28
The amino acid l-α-alanine is the most commonly used material for solid-state electron paramagnetic resonance (EPR) dosimetry, due to the formation of highly stable radicals upon irradiation, with yields proportional to the radiation dose. Two major alanine radical components designated R1 and R2 have previously been uniquely characterized from EPR and electron-nuclear double resonance (ENDOR) studies as well as from quantum chemical calculations. There is also convincing experimental evidence of a third minor radical component R3, and a tentative radical structure has been suggested, even though no well-defined spectral signature has been observed experimentally. In the present study, temperature dependent EPR spectra of X-ray irradiated polycrystalline alanine were analyzed using five multivariate methods in further attempts to understand the composite nature of the alanine dosimeter EPR spectrum. Principal component analysis (PCA), maximum likelihood common factor analysis (MLCFA), independent component analysis (ICA), self-modeling mixture analysis (SMA), and multivariate curve resolution (MCR) were used to extract pure radical spectra and their fractional contributions from the experimental EPR spectra. All methods yielded spectral estimates resembling the established R1 spectrum. Furthermore, SMA and MCR consistently predicted both the established R2 spectrum and the shape of the R3 spectrum. The predicted shape of the R3 spectrum corresponded well with the proposed tentative spectrum derived from spectrum simulations. Thus, results from two independent multivariate data analysis techniques strongly support the previous evidence that three radicals are indeed present in irradiated alanine samples.
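Of the five multivariate methods mentioned, multivariate curve resolution (MCR) lends itself to a compact sketch. Below is a bare-bones alternating-least-squares version in NumPy, with synthetic Gaussian "spectra" standing in for the EPR data; real MCR-ALS analyses add closure/normalization constraints and informed initial estimates.

```python
import numpy as np

def mcr_als(D, n_components, n_iter=300, seed=0):
    """Multivariate curve resolution by alternating least squares (MCR-ALS).

    Factorizes a data matrix D (samples x spectral channels) into
    non-negative contributions C and pure-component spectra S, D ~ C @ S.
    A bare-bones sketch, not a validated chemometrics implementation."""
    rng = np.random.default_rng(seed)
    S = rng.random((n_components, D.shape[1]))        # random non-negative init
    for _ in range(n_iter):
        C = np.clip(D @ np.linalg.pinv(S), 0.0, None)  # solve for C, keep >= 0
        S = np.clip(np.linalg.pinv(C) @ D, 0.0, None)  # solve for S, keep >= 0
    return C, S
```

On easy synthetic mixtures of well-separated component spectra this recovers a near-exact non-negative factorization; the fractional contributions in C are the analogue of the radical fractions reported in the abstract.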
Nearest neighbors by neighborhood counting.
Wang, Hui
2006-06-01
Finding nearest neighbors is a general idea that underlies many artificial intelligence tasks, including machine learning, data mining, natural language understanding, and information retrieval. This idea is explicitly used in the k-nearest neighbors algorithm (kNN), a popular classification method. In this paper, this idea is adopted in the development of a general methodology, neighborhood counting, for devising similarity functions. We turn our focus from neighbors to neighborhoods, regions in the data space covering the data point in question. To measure the similarity between two data points, we consider all neighborhoods that cover both data points, and we propose to use the number of such neighborhoods as a measure of similarity. Neighborhoods can be defined for different types of data in different ways. Here, we consider one definition of neighborhood for multivariate data and derive a formula for the resulting similarity, called the neighborhood counting measure or NCM. NCM was tested experimentally in the framework of kNN. Experiments show that NCM is generally comparable to VDM and its variants, the state-of-the-art distance functions for multivariate data, and, at the same time, is consistently better for relatively large k values. Additionally, NCM consistently outperforms HEOM (a mixture of Euclidean and Hamming distances), the "standard" and most widely used distance function for multivariate data. NCM has a computational complexity of the same order as the standard Euclidean distance function, is task independent, and works for numerical and categorical data in a conceptually uniform way. The neighborhood counting methodology is thus shown experimentally to be sound for multivariate data. We hope it will work for other types of data as well.
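For context, the HEOM baseline mentioned above can be sketched compactly. The snippet below is a minimal illustrative kNN classifier using HEOM on mixed numeric/categorical records (attribute ranges are supplied by the caller); it is the comparison baseline, not the paper's NCM.

```python
import math
from collections import Counter

def heom(a, b, ranges):
    """Heterogeneous Euclidean-Overlap Metric between two records.

    Numeric attributes contribute |x - y| / range; categorical attributes
    (range given as None) contribute 0 on a match and 1 on a mismatch."""
    total = 0.0
    for x, y, r in zip(a, b, ranges):
        if r is None:                     # categorical attribute
            d = 0.0 if x == y else 1.0
        else:                             # numeric attribute
            d = abs(x - y) / r
        total += d * d
    return math.sqrt(total)

def knn_predict(train, labels, query, ranges, k=3):
    """Classify `query` by majority vote among its k HEOM-nearest neighbors."""
    order = sorted(range(len(train)), key=lambda i: heom(train[i], query, ranges))
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]
```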
Internal structure of shock waves in disparate mass mixtures
NASA Technical Reports Server (NTRS)
Chung, Chan-Hong; De Witt, Kenneth J.; Jeng, Duen-Ren; Penko, Paul F.
1992-01-01
The detailed flow structure of a normal shock wave for a gas mixture is investigated using the direct-simulation Monte Carlo method. A variable diameter hard-sphere (VDHS) model is employed to investigate the effect of different viscosity temperature exponents (VTE) for each species in a gas mixture. Special attention is paid to the irregular behavior in the density profiles which was previously observed in a helium-xenon experiment. It is shown that the VTE can have substantial effects in the prediction of the structure of shock waves. The variable hard-sphere model of Bird shows good agreement, but with some limitations, with the experimental data if a common VTE is chosen properly for each case. The VDHS model shows better agreement with the experimental data without adjusting the VTE. The irregular behavior of the light-gas component in shock waves of disparate mass mixtures is observed not only in the density profile, but also in the parallel temperature profile. The strength of the shock wave, the type of molecular interactions, and the mole fraction of heavy species have substantial effects on the existence and structure of the irregularities.
NASA Technical Reports Server (NTRS)
Sunderland, P. B.; Urban, D. L.; Stocker, D. P.; Chao, B.-H.; Axelbaum, Richard L.; Salzman, Jack (Technical Monitor)
2001-01-01
Limiting conditions for soot-particle inception were studied in microgravity spherical diffusion flames burning ethylene at atmospheric pressure. Nitrogen was supplied in the fuel and/or oxidizer to obtain the broadest range of stoichiometric mixture fraction. Both normal flames (oxygen in ambience) and inverted flames (fuel in ambience) were considered. Microgravity was obtained in the NASA Glenn 2.2-second drop tower. The flames were observed with a color video camera and sooting conditions were defined as conditions for which yellow emission was present throughout the duration of the drop. Sooting limit results were successfully correlated in terms of adiabatic flame temperature and stoichiometric mixture fraction. Soot free conditions were favored by increased stoichiometric mixture fractions. No statistically significant effect of convection direction on sooting limits was observed. The relationship between adiabatic flame temperature and stoichiometric mixture fraction at the sooting limits was found to be in qualitative agreement with a simple theory based on the assumption that soot inception can occur only where temperature and local C/O ratio exceed threshold values (circa 1250 K and 1, respectively).
NASA Astrophysics Data System (ADS)
Ashraf, P. Muhamed; Anuradha, R.
2018-02-01
BIS 2062-grade carbon steel is extensively used for fishing boat construction. The steel is highly susceptible to corrosion on the hull and welding joints under the marine environment. Here, we demonstrate the application of a novel multifunctional nano-metal-oxide mixture of iron, titanium, and cerium as a marine coating to prevent corrosion. The electrochemical performance of nano-metal-oxide mixture coatings, applied over boat-building steel, was evaluated in 3.5% NaCl medium. The nano-mixture surface coatings showed efficient corrosion resistance, with an increased polarization resistance of 6043 Ω cm2 and a low corrosion current density of 3.53 × 10-6 A cm-2. The electrochemical impedance spectral data exhibited improvement in the polarization resistance of the outermost surface and internal layers. The coating recovered more quickly to its normal state when subjected to an induced stress. The nano-material in the coating behaves as a semiconductor, which enhanced electronic activity over the surface of the steel.
Ju, Daeyoung; Young, Thomas M.; Ginn, Timothy R.
2012-01-01
An innovative method is proposed for approximating the set of radial diffusion equations governing mass exchange between an aqueous bulk phase and an intra-particle phase for a hetero-disperse mixture of particles, such as occur in suspension in surface water, in riverine/estuarine sediment beds, in soils, and in aquifer materials. For this purpose, the temporal variation of concentration at several uniformly distributed points within a normalized representative particle with spherical, cylindrical, or planar shape is fitted with a two-domain linear reversible mass exchange model. The approximation is then superposed in order to generalize the model to a hetero-disperse mixture of particles. The method can significantly reduce the computational effort needed in solving the intra-particle mass exchange of a hetero-disperse mixture of particles, and the error due to the approximation is shown to be relatively small. The method is applied to describe batch desorption experiments of 1,2-dichlorobenzene from four different soils with known particle size distributions, and it produced good agreement with experimental data. PMID:18304692
Dinov, Ivo D.; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas
2014-01-01
Data analysis requires subtle probability reasoning to answer questions like What is the chance of event A occurring, given that event B was observed? This generic question arises in discussions of many intriguing scientific questions such as What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height? and What is the probability of (monetary) inflation exceeding 4% and housing price index below 110? To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students’ understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference. PMID:25419016
Dinov, Ivo D; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas
2013-01-01
Data analysis requires subtle probability reasoning to answer questions like What is the chance of event A occurring, given that event B was observed? This generic question arises in discussions of many intriguing scientific questions such as What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height? and What is the probability of (monetary) inflation exceeding 4% and housing price index below 110? To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students' understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference.
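The adolescent height/weight example can be made concrete with the standard conditional-distribution result for the bivariate normal: Y given X = x is normal with mean my + rho*(sy/sx)*(x - mx) and standard deviation sy*sqrt(1 - rho^2). The parameter values below are hypothetical, chosen only for illustration; they are not from the paper's dataset.

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def conditional_interval_prob(lo, hi, x, mx, my, sx, sy, rho):
    """P(lo <= Y <= hi | X = x) for a bivariate normal pair (X, Y)."""
    mean = my + rho * (sy / sx) * (x - mx)     # conditional mean of Y given X=x
    sd = sy * math.sqrt(1.0 - rho ** 2)        # conditional standard deviation
    return normal_cdf((hi - mean) / sd) - normal_cdf((lo - mean) / sd)

# Hypothetical parameters: mean height 160 cm (sd 8), mean weight 127 lb
# (sd 15), correlation 0.5; probability of weighing 120-140 lb at average height.
p = conditional_interval_prob(120, 140, 160, 160, 127, 8, 15, 0.5)
```

At average height the conditional mean equals the marginal mean weight, but the conditional spread shrinks by the factor sqrt(1 - rho^2), which is exactly the point such interactive demonstrations aim to convey.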
Infinite von Mises-Fisher Mixture Modeling of Whole Brain fMRI Data.
Røge, Rasmus E; Madsen, Kristoffer H; Schmidt, Mikkel N; Mørup, Morten
2017-10-01
Cluster analysis of functional magnetic resonance imaging (fMRI) data is often performed using gaussian mixture models, but when the time series are standardized such that the data reside on a hypersphere, this modeling assumption is questionable. The consequences of ignoring the underlying spherical manifold are rarely analyzed, in part due to the computational challenges imposed by directional statistics. In this letter, we discuss a Bayesian von Mises-Fisher (vMF) mixture model for data on the unit hypersphere and present an efficient inference procedure based on collapsed Markov chain Monte Carlo sampling. Comparing the vMF and gaussian mixture models on synthetic data, we demonstrate that the vMF model has a slight advantage inferring the true underlying clustering when compared to gaussian-based models on data generated from both a mixture of vMFs and a mixture of gaussians subsequently normalized. Thus, when performing model selection, the two models are not in agreement. Analyzing multisubject whole brain resting-state fMRI data from healthy adult subjects, we find that the vMF mixture model is considerably more reliable than the gaussian mixture model when comparing solutions across models trained on different groups of subjects, and again we find that the two models disagree on the optimal number of components. The analysis indicates that the fMRI data support more than a thousand clusters, and we confirm this is not a result of overfitting by demonstrating better prediction on data from held-out subjects. Our results highlight the utility of using directional statistics to model standardized fMRI data and demonstrate that whole brain segmentation of fMRI data requires a very large number of functional units in order to adequately account for the discernible statistical patterns in the data.
The effects of binary UV filter mixtures on the midge Chironomus riparius.
Ozáez, Irene; Morcillo, Gloria; Martínez-Guitarte, José-Luis
2016-06-15
Organic ultraviolet (UV) filters are used in a wide variety of products, including cosmetics, to prevent damage from UV light in tissues and industrial materials. Their extensive use has raised concerns about potential adverse effects on human health and on aquatic ecosystems that accumulate these pollutants. To increase sun radiation protection, UV filters are commonly used in mixtures. Here, we studied the toxicity of binary mixtures of 4-methylbenzylidene camphor (4MBC), octyl-methoxycinnamate (OMC), and benzophenone-3 (BP-3) by evaluating the larval mortality of Chironomus riparius. Molecular endpoints were also analyzed, including alterations in the expression levels of a gene related to the endocrine system (EcR, ecdysone receptor) and a gene related to the stress response (hsp70, heat shock protein 70). The results showed that the mortality caused by binary mixtures was similar to that observed for each compound alone; however, some differences in LC50 were observed between groups. Gene expression analysis showed that EcR mRNA levels increased in the presence of 0.1 mg/L 4MBC but returned to normal levels after exposure to mixtures of 4MBC with 0.1, 1, and 10 mg/L of BP-3 or OMC. In contrast, hsp70 mRNA levels increased after exposure to the tested combinations of 4MBC with BP-3 or OMC. These data suggest that 4MBC, BP-3, and OMC may have antagonistic effects on EcR gene transcription and a synergistic effect on hsp70 gene activation. This is the first experimental study to show the complex patterned effects of UV filter mixtures on invertebrates. The data suggest that the interactions within these chemical mixtures are complex and show diverse effects on various endpoints. Copyright © 2016 Elsevier B.V. All rights reserved.
Odegård, J; Jensen, J; Madsen, P; Gianola, D; Klemetsdal, G; Heringstad, B
2003-11-01
The distribution of somatic cell scores could be regarded as a mixture of at least two components depending on a cow's udder health status. A heteroscedastic two-component Bayesian normal mixture model with random effects was developed and implemented via Gibbs sampling. The model was evaluated using datasets consisting of simulated somatic cell score records. Somatic cell score was simulated as a mixture representing two alternative udder health statuses ("healthy" or "diseased"). Animals were assigned randomly to the two components according to the probability of group membership (Pm). Random effects (additive genetic and permanent environment), when included, had identical distributions across mixture components. Posterior probabilities of putative mastitis were estimated for all observations, and model adequacy was evaluated using measures of sensitivity, specificity, and posterior probability of misclassification. Fitting different residual variances in the two mixture components caused some bias in estimation of parameters. When the components were difficult to disentangle, so were their residual variances, causing bias in estimation of Pm and of location parameters of the two underlying distributions. When all variance components were identical across mixture components, the mixture model analyses returned parameter estimates essentially without bias and with a high degree of precision. Including random effects in the model increased the probability of correct classification substantially. No sizable differences in probability of correct classification were found between models in which a single cow effect (ignoring relationships) was fitted and models where this effect was split into genetic and permanent environmental components, utilizing relationship information. When genetic and permanent environmental effects were fitted, the between-replicate variance of estimates of posterior means was smaller because the model accounted for random genetic drift.
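The classification step in such a two-component mixture reduces to Bayes' rule on the component densities. The sketch below illustrates only that step, with made-up parameter values; the paper's model additionally fits random genetic and permanent-environment effects by Gibbs sampling, which this omits.

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution with mean mu and sd sigma."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior_diseased(x, p_m, mu_h, sd_h, mu_d, sd_d):
    """Posterior probability that observation x belongs to the 'diseased'
    component, given prior membership probability p_m (Bayes' rule)."""
    f_h = (1.0 - p_m) * normal_pdf(x, mu_h, sd_h)   # healthy contribution
    f_d = p_m * normal_pdf(x, mu_d, sd_d)           # diseased contribution
    return f_d / (f_h + f_d)
```

Thresholding this posterior gives the putative-mastitis classification whose sensitivity, specificity, and misclassification probability the paper evaluates.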
Boberg, Julie; Johansson, Hanna K L; Hadrup, Niels; Dreisig, Karin; Berthelsen, Line; Almstrup, Kristian; Vinggaard, Anne Marie; Hass, Ulla
2015-02-01
Elevated levels of endogenous or exogenous estrogens during fetal life can induce permanent disturbances in prostate growth and predispose to precancerous lesions. Recent studies have indicated that early anti-androgen exposure may also affect prostate cancer risk. We examined the influence of perinatal exposure to mixtures of anti-androgenic and estrogenic chemicals on prostate development. Wistar rats were exposed from gestation day 7 to postnatal day 22 to a mixture of 8 anti-androgenic compounds (AAMix), a mixture of four estrogenic compounds (EMix), paracetamol, or a mixture of all 13 compounds (TotalMix), in mixture ratios reflecting human exposure levels. Ventral prostate weights were reduced by the TotalMix and AAMix in pre-pubertal rats. Histological changes in the prostate appeared with increasing age and indicated a shift from the normal age-dependent epithelial atrophy towards hyperplasia. These lesions showed similarities to pre-cancerous lesions in humans. Increased proliferation was observed already in pre-puberty, and it was hypothesized that this could be associated with reduced ERβ signaling, but no clear conclusions could be made from gene expression studies on ERβ-related pathways. The influences of the estrogenic chemicals and paracetamol on prostate morphology were minor, but in young adulthood the estrogen mixture reduced ventral prostate mRNA levels of Igf1 and paracetamol reduced the mRNA level of Pbpc3. Mixtures of endocrine disrupters relevant to human exposure were found to elicit persistent effects on the rat prostate following perinatal exposure, suggesting that human perinatal exposure to environmental chemicals may increase the risk of prostate cancer later in life. © 2014 Wiley Periodicals, Inc.
Novel microfluidic device for the continuous separation of cancer cells using dielectrophoresis.
Alazzam, Anas; Mathew, Bobby; Alhammadi, Falah
2017-03-01
We describe the design, microfabrication, and testing of a microfluidic device for the separation of cancer cells based on dielectrophoresis. Cancer cells, specifically green fluorescent protein-labeled MDA-MB-231, are successfully separated from a heterogeneous mixture of the same and normal blood cells. MDA-MB-231 cancer cells are separated with an accuracy that enables precise detection and counting of circulating tumor cells present among normal blood cells. The separation is performed using a set of planar interdigitated transducer electrodes that are deposited on the surface of a glass wafer and slightly protrude into the separation microchannel at one side. The device includes two parts, namely, a glass wafer and polydimethylsiloxane element. The device is fabricated using standard microfabrication techniques. All experiments are conducted with low conductivity sucrose-dextrose isotonic medium. The variation in response between MDA-MB-231 cancer cells and normal cells to a certain band of alternating-current frequencies is used for continuous separation of cells. The fabrication of the microfluidic device, preparation of cells and medium, and flow conditions are detailed. The proposed microdevice can be used to detect and separate malignant cells from heterogeneous mixture of cells for the purpose of early screening for cancer. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
The Manhattan Frame Model-Manhattan World Inference in the Space of Surface Normals.
Straub, Julian; Freifeld, Oren; Rosman, Guy; Leonard, John J; Fisher, John W
2018-01-01
Objects and structures within man-made environments typically exhibit a high degree of organization in the form of orthogonal and parallel planes. Traditional approaches utilize these regularities via the restrictive, and rather local, Manhattan World (MW) assumption which posits that every plane is perpendicular to one of the axes of a single coordinate system. The aforementioned regularities are especially evident in the surface normal distribution of a scene where they manifest as orthogonally-coupled clusters. This motivates the introduction of the Manhattan-Frame (MF) model which captures the notion of an MW in the surface normals space, the unit sphere, and two probabilistic MF models over this space. First, for a single MF we propose novel real-time MAP inference algorithms, evaluate their performance and their use in drift-free rotation estimation. Second, to capture the complexity of real-world scenes at a global scale, we extend the MF model to a probabilistic mixture of Manhattan Frames (MMF). For MMF inference we propose a simple MAP inference algorithm and an adaptive Markov-Chain Monte-Carlo sampling algorithm with Metropolis-Hastings split/merge moves that let us infer the unknown number of mixture components. We demonstrate the versatility of the MMF model and inference algorithm across several scales of man-made environments.
NASA Astrophysics Data System (ADS)
Yehia, Ali M.; Abd El-Rahman, Mohamed K.
2015-03-01
Normalized spectra have great power in resolving the spectral overlap of the challenging Orphenadrine (ORP) and Paracetamol (PAR) binary mixture. Four smart techniques utilizing normalized spectra were used in this work, namely, amplitude modulation (AM), simultaneous area ratio subtraction (SARS), simultaneous derivative spectrophotometry (S1DD), and the ratio H-point standard addition method (RHPSAM). In AM, the peak amplitude at 221.6 nm of the division spectra was measured for both ORP and PAR determination, while in SARS, the concentration of ORP was determined using the area under the curve from 215 nm to 222 nm of the regenerated ORP zero-order absorption spectra. In S1DD, the concentration of ORP was determined using the peak amplitude at 224 nm of the first derivative ratio spectra. The PAR concentration was determined directly at 288 nm in the division spectra obtained during the manipulation steps of the previous three methods. The last method, RHPSAM, is a dual-wavelength method in which two calibrations were plotted at 216 nm and 226 nm. The RH point is the intersection of the two calibration lines, and the ORP and PAR concentrations were directly determined from the coordinates of the RH point. The proposed methods were applied successfully to the determination of ORP and PAR in their dosage form.
Functional Relationships and Regression Analysis.
ERIC Educational Resources Information Center
Preece, Peter F. W.
1978-01-01
Using a degenerate multivariate normal model for the distribution of organismic variables, the form of least-squares regression analysis required to estimate a linear functional relationship between variables is derived. It is suggested that the two conventional regression lines may be considered to describe functional, not merely statistical,…
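The contrast between the two conventional regression lines and a functional-relationship estimate can be shown numerically. The geometric-mean (reduced major axis) slope computed below is one common such estimate; it is not necessarily the estimator derived in the paper, whose degenerate-multivariate-normal derivation is not reproduced here.

```python
import math

def slopes(xs, ys):
    """Slopes of the two conventional regression lines and the geometric-mean
    (reduced major axis) slope, a common functional-relationship estimate."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b_yx = sxy / sxx                                   # OLS slope of y on x
    b_xy = syy / sxy                                   # x-on-y line, in y-on-x form
    b_gm = math.copysign(math.sqrt(syy / sxx), sxy)    # geometric-mean slope
    return b_yx, b_gm, b_xy
```

For positively correlated data the geometric-mean slope always lies between the two OLS slopes (it is their geometric mean), and all three coincide only when the points fall exactly on a line.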
Estimating the Classification Efficiency of a Test Battery.
ERIC Educational Resources Information Center
De Corte, Wilfried
2000-01-01
Shows how a theorem proven by H. Brogden (1951, 1959) can be used to estimate the allocation average (a predictor based classification of a test battery) assuming that the predictor intercorrelations and validities are known and that the predictor variables have a joint multivariate normal distribution. (SLD)
Sliding-surface-liquefaction of sand-dry ice mixture and submarine landslides
NASA Astrophysics Data System (ADS)
Fukuoka, H.; Tsukui, A.
2010-12-01
In the historic records of off-shore mega-earthquakes along the subduction zone offshore Japan, there are many eyewitness accounts of large-scale burning of flammable gas, possibly ejected from the sea floor. This gas is thought to have been dissociated methane hydrate (MH), which has been found in soundings of IODP and other oceanographic projects. Given the vast distribution of the BSR in the continental margins, many papers have pointed out that gasification of these hydrates could have triggered gigantic submarine landslides. Global warming, large earthquakes, or magma intrusion may trigger extremely deep gigantic landslides in continental margins that could cause catastrophic tsunamis. However, recent triaxial compression tests on artificially prepared sand-MH-mixture samples revealed that they have slightly higher strength than sand alone, and that MH's endothermic characteristics may resist accelerating shear and large-displacement landslides as well. Meanwhile, stress-controlled undrained ring-shear apparatuses have been developed by Sassa and Fukuoka at the Disaster Prevention Research Institute, Kyoto University, to reproduce subaerial landslides induced by earthquakes and rainfall. Using these apparatuses, they found a localized liquefaction phenomenon along the deep saturated potential sliding surface, caused by excess pore pressure generation during grain-crushing-induced bulk volume change. This phenomenon was named "sliding surface liquefaction." Similar sudden large pore pressure generation was observed in a pore-pressure-control test simulating rain-induced landslides. In this paper, the authors examined the shear behavior of a dry sand-dry ice mixture under constant normal stress and shear-speed-controlled tests using the latest ring shear apparatus. The sample was a mixture of silica sand and dry-ice pellets (frozen carbon dioxide).
Such mixtures are often used for studying the mechanics of methane hydrates in laboratories because no explosion-protection facility is required. In order to prevent rapid gasification, the specimen was prepared without water. The applied total normal stress was 200 kPa, and the initial normal stress was maintained at about 70 kPa by slightly opening the drainage valve to vent pressurized CO2 gas. When the sample was sheared at 30 cm/s, the stress path immediately reached the failure line at a friction angle of about 37 degrees. However, excess pore air pressure increased soon after, and the stress path moved toward the origin along the failure line. This means that rapid shearing generates frictional heat, which quickly accelerates the gasification of the dry ice. On the other hand, crushing of the pellets may increase the total surface area of the dry ice and thereby accelerate gasification to some extent. The authors are currently examining the velocity-weakening characteristics of the samples, and upcoming results will give more detail on the mechanism. Nevertheless, this sliding-surface liquefaction in the mixture supports the possibility of similar accelerating displacement in sand-MH mixtures, or at boundaries between MH and sand layers, induced by strong ground motion under the sea floor.
NASA Astrophysics Data System (ADS)
Baidillah, Marlin R.; Takei, Masahiro
2017-06-01
A nonlinear normalization model, termed the exponential model, has been developed for electrical capacitance tomography (ECT) with external electrodes under gap-permittivity conditions. The exponential normalization is proposed based on the inherently nonlinear relationship between the mixture permittivity and the measured capacitance caused by the gap permittivity of the inner wall. The parameters of the exponential equation are derived from an exponential curve fitted to simulation data, and a scaling function is added to adjust for the experimental system conditions. The exponential normalization model was applied to two-dimensional low- and high-contrast dielectric distribution phantoms in simulation and experimental studies, and was compared with other normalization models, i.e. the Parallel, Series, Maxwell and Böttcher models. Based on the comparison of image reconstruction results, the exponential model reliably predicts the nonlinear normalization of the measured capacitance for both low- and high-contrast dielectric distributions.
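The contrast between a linear and a nonlinear normalization can be illustrated with a minimal sketch. The exponential parameterization below (the function names, the `alpha` fitting parameter, and the calibration capacitances) is an assumed illustrative form, not the paper's actual model:

```python
import numpy as np

def linear_normalization(c, c_low, c_high):
    """Conventional (parallel-model style) linear normalization of capacitance."""
    return (c - c_low) / (c_high - c_low)

def exponential_normalization(c, c_low, c_high, alpha=2.0):
    """Hypothetical exponential normalization: inverts an assumed relation
        c = c_low + (c_high - c_low) * (exp(alpha*g) - 1) / (exp(alpha) - 1),
    where g in [0, 1] is the normalized permittivity and alpha would be
    obtained by fitting a curve to simulation data."""
    lin = (c - c_low) / (c_high - c_low)
    return np.log1p(lin * (np.exp(alpha) - 1.0)) / alpha

# Both models map the calibration endpoints (empty/full pipe) to 0 and 1,
# but disagree in between, where the gap permittivity makes the true
# capacitance-permittivity relation nonlinear.
c_low, c_high = 1.0e-12, 5.0e-12          # illustrative endpoint capacitances (F)
c = np.linspace(c_low, c_high, 5)
g_lin = linear_normalization(c, c_low, c_high)
g_exp = exponential_normalization(c, c_low, c_high)
```

The two curves agree only at the calibration endpoints; the difference between them is what a linear normalization model cannot capture.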
NASA Astrophysics Data System (ADS)
Wu, Liang; Malijevský, Alexandr; Avendaño, Carlos; Müller, Erich A.; Jackson, George
2018-04-01
A molecular simulation study of binary mixtures of hard spherocylinders (HSCs) and hard spheres (HSs) confined between two structureless hard walls is presented. The principal aim of the work is to understand the effect of the presence of hard spheres on the entropically driven surface nematization of hard rod-like particles at surfaces. The mixtures are studied using a constant normal-pressure Monte Carlo algorithm. The surface adsorption at different compositions is examined in detail. At moderate hard-sphere concentrations, preferential adsorption of the spheres at the wall is found. However, at moderate to high pressure (density), we observe a crossover in the adsorption behavior with nematic layers of the rods forming at the walls leading to local demixing of the system. The presence of the spherical particles is seen to destabilize the surface nematization of the rods, and the degree of demixing increases on increasing the hard-sphere concentration.
Modeling of active transmembrane transport in a mixture theory framework.
Ateshian, Gerard A; Morrison, Barclay; Hung, Clark T
2010-05-01
This study formulates governing equations for active transport across semi-permeable membranes within the framework of the theory of mixtures. In mixture theory, which models the interactions of any number of fluid and solid constituents, a supply term appears in the conservation of linear momentum to describe momentum exchanges among the constituents. In past applications, this momentum supply was used to model frictional interactions only, thereby describing passive transport processes. In this study, it is shown that active transport processes, which impart momentum to solutes or solvent, may also be incorporated in this term. By projecting the equation of conservation of linear momentum along the normal to the membrane, a jump condition is formulated for the mechano-electrochemical potential of fluid constituents which is generally applicable to nonequilibrium processes involving active transport. The resulting relations are simple and easy to use, and address an important need in the membrane transport literature.
NASA Astrophysics Data System (ADS)
Lee, Wen-Chuan; Wu, Jong-Wuu; Tsou, Hsin-Hui; Lei, Chia-Ling
2012-10-01
This article considers the number of defective units in an arrival order to be a binomial random variable. We derive a modified mixture inventory model with backorders and lost sales, in which the order quantity and lead time are decision variables. We also assume that the backorder rate depends on the length of lead time through the amount of shortages, and treat the backorder rate as a control variable. In addition, we assume that the lead-time demand follows a mixture of normal distributions; we then relax the assumption about the form of the mixture distribution function and apply the minimax distribution-free procedure to solve the problem. Furthermore, we develop an algorithmic procedure to obtain the optimal ordering strategy for each case. Finally, three numerical examples illustrate the results.
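The mixture-of-normals lead-time demand assumption can be made concrete with a short sketch: the expected shortage at a reorder point is a weighted sum of the standard normal loss function over the mixture components. All numerical values below are hypothetical, and the paper's full model (backorder rate, lost sales, minimax procedure) is not reproduced:

```python
import math

def normal_loss(k):
    """Standard normal loss function L(k) = phi(k) - k * (1 - Phi(k))."""
    phi = math.exp(-0.5 * k * k) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(k / math.sqrt(2.0)))
    return phi - k * (1.0 - Phi)

def expected_shortage_mixture(r, weights, mus, sigmas):
    """Expected shortage E[(X - r)^+] when lead-time demand X follows a
    finite mixture of normal components (weights, means, std devs)."""
    return sum(w * s * normal_loss((r - m) / s)
               for w, m, s in zip(weights, mus, sigmas))

# Illustrative two-component mixture of lead-time demand.
weights, mus, sigmas = [0.6, 0.4], [100.0, 140.0], [10.0, 20.0]
shortage = expected_shortage_mixture(130.0, weights, mus, sigmas)
```

Raising the reorder point `r` lowers the expected shortage, which is the trade-off the order-quantity/lead-time optimization balances against holding and ordering costs.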
Wu, Liang; Malijevský, Alexandr; Avendaño, Carlos; Müller, Erich A; Jackson, George
2018-04-28
A molecular simulation study of binary mixtures of hard spherocylinders (HSCs) and hard spheres (HSs) confined between two structureless hard walls is presented. The principal aim of the work is to understand the effect of the presence of hard spheres on the entropically driven surface nematization of hard rod-like particles at surfaces. The mixtures are studied using a constant normal-pressure Monte Carlo algorithm. The surface adsorption at different compositions is examined in detail. At moderate hard-sphere concentrations, preferential adsorption of the spheres at the wall is found. However, at moderate to high pressure (density), we observe a crossover in the adsorption behavior with nematic layers of the rods forming at the walls leading to local demixing of the system. The presence of the spherical particles is seen to destabilize the surface nematization of the rods, and the degree of demixing increases on increasing the hard-sphere concentration.
Lee, Byeong-Ju; Kim, Hye-Youn; Lim, Sa Rang; Huang, Linfang; Choi, Hyung-Kyoon
2017-01-01
Panax ginseng C.A. Meyer is a herb used for medicinal purposes, and its discrimination according to cultivation age has been an important and practical issue. This study employed Fourier-transform infrared (FT-IR) spectroscopy with multivariate statistical analysis to obtain a prediction model for discriminating cultivation ages (5 and 6 years) and three different parts (rhizome, tap root, and lateral root) of P. ginseng. The optimal partial-least-squares regression (PLSR) models for discriminating ginseng samples were determined by selecting normalization methods, number of partial-least-squares (PLS) components, and variable influence on projection (VIP) cutoff values. The best prediction model for discriminating 5- and 6-year-old ginseng was developed using tap root, vector normalization applied after the second differentiation, one PLS component, and a VIP cutoff of 1.0 (based on the lowest root-mean-square error of prediction value). In addition, for discriminating among the three parts of P. ginseng, optimized PLSR models were established using data sets obtained from vector normalization, two PLS components, and VIP cutoff values of 1.5 (for 5-year-old ginseng) and 1.3 (for 6-year-old ginseng). To our knowledge, this is the first study to provide a novel strategy for rapidly discriminating the cultivation ages and parts of P. ginseng using FT-IR by selected normalization methods, number of PLS components, and VIP cutoff values.
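The VIP-based variable selection step described above can be sketched with a minimal NIPALS PLS1 implementation. Everything below (the synthetic data, the component count, and the helper names) is illustrative; the study used FT-IR spectra, not random data:

```python
import numpy as np

def pls1_nipals(X, y, n_components):
    """Minimal NIPALS PLS1 (single response): returns weights W, scores T,
    and y-loadings q for each latent component."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    n, p = Xc.shape
    W, T, q = np.zeros((p, n_components)), np.zeros((n, n_components)), np.zeros(n_components)
    for a in range(n_components):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)
        t = Xc @ w
        pload = Xc.T @ t / (t @ t)
        q[a] = yc @ t / (t @ t)
        Xc = Xc - np.outer(t, pload)      # deflate X
        yc = yc - q[a] * t                # deflate y
        W[:, a], T[:, a] = w, t
    return W, T, q

def vip_scores(W, T, q):
    """Variable influence on projection:
    VIP_j = sqrt(p * sum_a ss_a * (w_ja / ||w_a||)^2 / sum_a ss_a),
    where ss_a = q_a^2 * t_a' t_a is the y-variance explained by component a.
    Variables with VIP above a cutoff (e.g. 1.0) are retained."""
    p, _ = W.shape
    ss = q ** 2 * np.einsum('ia,ia->a', T, T)
    Wn = W / np.linalg.norm(W, axis=0)
    return np.sqrt(p * (Wn ** 2 @ ss) / ss.sum())

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=60)  # only vars 0, 1 matter
W, T, q = pls1_nipals(X, y, n_components=2)
vip = vip_scores(W, T, q)
```

With this construction the two informative variables receive VIP scores above 1.0 and the noise variables fall below it, which is the rationale for the VIP cutoffs (1.0-1.5) tuned in the study.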
Lim, Sa Rang; Huang, Linfang
2017-01-01
Panax ginseng C.A. Meyer is a herb used for medicinal purposes, and its discrimination according to cultivation age has been an important and practical issue. This study employed Fourier-transform infrared (FT-IR) spectroscopy with multivariate statistical analysis to obtain a prediction model for discriminating cultivation ages (5 and 6 years) and three different parts (rhizome, tap root, and lateral root) of P. ginseng. The optimal partial-least-squares regression (PLSR) models for discriminating ginseng samples were determined by selecting normalization methods, number of partial-least-squares (PLS) components, and variable influence on projection (VIP) cutoff values. The best prediction model for discriminating 5- and 6-year-old ginseng was developed using tap root, vector normalization applied after the second differentiation, one PLS component, and a VIP cutoff of 1.0 (based on the lowest root-mean-square error of prediction value). In addition, for discriminating among the three parts of P. ginseng, optimized PLSR models were established using data sets obtained from vector normalization, two PLS components, and VIP cutoff values of 1.5 (for 5-year-old ginseng) and 1.3 (for 6-year-old ginseng). To our knowledge, this is the first study to provide a novel strategy for rapidly discriminating the cultivation ages and parts of P. ginseng using FT-IR by selected normalization methods, number of PLS components, and VIP cutoff values. PMID:29049369
Yan, Binjun; Fang, Zhonghua; Shen, Lijuan; Qu, Haibin
2015-01-01
The batch-to-batch quality consistency of herbal drugs has always been an important issue. Our aim was to propose a methodology for batch-to-batch quality control based on HPLC-MS fingerprints and a process knowledgebase, taking the extraction process of Compound E-jiao Oral Liquid as a case study. After the HPLC-MS fingerprint analysis method was established, fingerprints of extract solutions produced under normal and abnormal operating conditions were obtained. Multivariate statistical models were built for fault detection, and a discriminant analysis model was built using the probabilistic discriminant partial-least-squares method for fault diagnosis. Based on multivariate statistical analysis, process knowledge was acquired and the cause-effect relationship between process deviations and quality defects was revealed. Quality defects were detected successfully by multivariate statistical control charts, and the types of process deviations were diagnosed correctly by discriminant analysis. This work demonstrates the benefits of combining HPLC-MS fingerprints, process knowledge and multivariate analysis for the quality control of herbal drugs. Copyright © 2015 John Wiley & Sons, Ltd.
The association between body mass index and severe biliary infections: a multivariate analysis.
Stewart, Lygia; Griffiss, J McLeod; Jarvis, Gary A; Way, Lawrence W
2012-11-01
Obesity has been associated with worse infectious disease outcomes. It is a risk factor for cholesterol gallstones, but little is known about associations between body mass index (BMI) and biliary infections, which we studied here. A total of 427 patients with gallstones were studied. Gallstones, bile, and blood (as applicable) were cultured. Illness severity was classified as follows: none (no infection or inflammation), systemic inflammatory response syndrome (fever, leukocytosis), severe (abscess, cholangitis, empyema), or multi-organ dysfunction syndrome (bacteremia, hypotension, organ failure). Associations between BMI and biliary bacteria, bacteremia, gallstone type, and illness severity were examined using bivariate and multivariate analysis. On both analyses, BMI inversely correlated with pigment stones, biliary bacteria, bacteremia, and increased illness severity, and obesity correlated with less severe biliary infections; multivariate analysis showed an independent correlation between lower BMI and illness severity. Most patients with severe biliary infections had a normal BMI, suggesting that obesity may be protective in biliary infections. Published by Elsevier Inc.
Does tip-of-the-tongue for proper names discriminate amnestic mild cognitive impairment?
Juncos-Rabadán, Onésimo; Facal, David; Lojo-Seoane, Cristina; Pereiro, Arturo X
2013-04-01
Difficulty in retrieving people's names is very common in the early stages of Alzheimer's disease and mild cognitive impairment. Such difficulty is often observed as the tip-of-the-tongue (TOT) phenomenon. The main aim of this study was to explore whether a famous-people naming task that elicits the TOT state can be used to discriminate between amnestic mild cognitive impairment (aMCI) patients and normal controls. Eighty-four patients with aMCI and 106 normal controls aged over 50 years performed a task involving naming 50 famous people shown in pictures. Univariate and multivariate regression analyses were used to study the relationships between aMCI and semantic and phonological measures in the TOT paradigm. Univariate regression analyses revealed that all TOT measures significantly predicted aMCI. Multivariate analysis of all these measures correctly classified 70% of controls (specificity) and 71.6% of aMCI patients (sensitivity), with an area under the ROC curve (AUC) of 0.74, although only the phonological measure remained significant. This classification performance was similar to that obtained with the semantic verbal fluency test. TOTs for proper names may thus effectively discriminate aMCI patients from normal controls through measures that capture one of the affected naming processes, namely phonological access.
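The classification metrics reported above (sensitivity, specificity, AUC) can be computed in a few lines. This is a generic sketch on toy inputs, not the study's data or model:

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN) on patients; specificity = TN/(TN+FP) on controls."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

def auc_mann_whitney(y_true, scores):
    """AUC equals P(score_patient > score_control), the Mann-Whitney U
    statistic divided by n_patients * n_controls (ties count as 0.5)."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy example: 1 = aMCI patient, 0 = control.
sens, spec = sensitivity_specificity([1, 1, 1, 0, 0], [1, 0, 1, 0, 1])
auc = auc_mann_whitney([1, 1, 0, 0], [0.9, 0.8, 0.2, 0.1])
```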
NASA Technical Reports Server (NTRS)
Colver, Gerald M.; Goroshin, Samuel; Lee, John H. S.
2001-01-01
A cooperative study is being carried out between Iowa State University and McGill University. The new study concerns wall and particle quenching effects in particle-gas mixtures. The primary objective is to measure and interpret flame quenching distances, flammability limits, and burning velocities in particulate suspensions. A secondary objective is to measure particle slip velocities and particle velocity distribution as these influence flame propagation. Two suspension techniques will be utilized and compared: (1) electric particle suspension/EPS; and (2) flow dispersion. Microgravity tests will permit testing of larger particles and higher and more uniform dust concentrations than is possible in normal gravity.
Single-particle spectral functions in the normal phase of a strongly attractive Bose-Fermi mixture
NASA Astrophysics Data System (ADS)
Fratini, E.; Pieri, P.
2013-07-01
We calculate the single-particle spectral functions and quasiparticle dispersions for a Bose-Fermi mixture when the boson-fermion attraction is sufficiently strong to suppress completely the condensation of bosons at zero temperature. Within a T-matrix diagrammatic approach, we vary the boson-fermion attraction from the critical value where the boson condensate first disappears to the strongly attractive (molecular) regime and study the effect of both mass and density imbalance on the spectral weights and dispersions. An interesting spectrum of particle-hole excitations mixing two different Fermi surfaces is found. These unconventional excitations could be produced and explored experimentally with radio-frequency spectroscopy.
Fuentes, María S; Briceño, Gabriela E; Saez, Juliana M; Benimeli, Claudia S; Diez, María C; Amoroso, María J
2013-01-01
Pesticides are normally used to control specific pests and to increase the productivity in crops; as a result, soils are contaminated with mixtures of pesticides. In this work, the ability of Streptomyces strains (either as pure or mixed cultures) to remove pentachlorophenol and chlorpyrifos was studied. The antagonism among the strains and their tolerance to the toxic mixture was evaluated. Results revealed that the strains did not have any antagonistic effects and showed tolerance against the pesticides mixture. In fact, the growth of mixed cultures was significantly higher than in pure cultures. Moreover, a pure culture (Streptomyces sp. A5) and a quadruple culture had the highest pentachlorophenol removal percentages (10.6% and 10.1%, resp.), while Streptomyces sp. M7 presented the best chlorpyrifos removal (99.2%). Mixed culture of all Streptomyces spp. when assayed either as free or immobilized cells showed chlorpyrifos removal percentages of 40.17% and 71.05%, respectively, and for pentachlorophenol 5.24% and 14.72%, respectively, suggesting better removal of both pesticides by using immobilized cells. These results reveal that environments contaminated with mixtures of xenobiotics could be successfully cleaned up by using either free or immobilized cultures of Streptomyces, through in situ or ex situ remediation techniques.
Veeramachaneni, D N R; Palmer, J S; Amann, R P; Pau, K-Y F
2007-01-01
Rabbit does (7-9 per group) were treated daily per os from gestation day 15 through post-natal week 4 to provide, per kg body weight, 25 micromol (low) or 250 micromol (high) p,p'-DDT or a mixture of DDT and vinclozolin (12.5 and 125 micromol each). Developmental as well as post-pubertal reproductive sequelae of the male progeny were studied. Testicular descent in some pups was impaired by DDT. Neither serum LH nor testosterone was affected. FSH was lower in mixture- but not in DDT-exposed rabbits. Lack of sexual interest, penile erection and ejaculation were observed in some mixture-exposed rabbits. Sperm counts were unaffected, but morphologically normal spermatozoa were fewer; nuclear and acrosomal morphogenesis was disrupted. Atypical germ cells resembling carcinoma in situ were found. Also considering data for vinclozolin [Veeramachaneni DNR, Palmer JS, Amann RP, Kane CM, Higuchi TT, Pau K-YF. Disruption of sexual function, FSH secretion, and spermiogenesis in rabbits following developmental exposure to vinclozolin, a fungicide. Reproduction 2006;131:805-16], we concluded that DDT causes cryptorchidism and germ cell atypia, vinclozolin permanently disrupts FSH secretion and sexual function, and the mixture causes the full spectrum of dysgenesis.
Fuentes, María S.; Briceño, Gabriela E.; Saez, Juliana M.; Benimeli, Claudia S.; Diez, María C.; Amoroso, María J.
2013-01-01
Pesticides are normally used to control specific pests and to increase the productivity in crops; as a result, soils are contaminated with mixtures of pesticides. In this work, the ability of Streptomyces strains (either as pure or mixed cultures) to remove pentachlorophenol and chlorpyrifos was studied. The antagonism among the strains and their tolerance to the toxic mixture was evaluated. Results revealed that the strains did not have any antagonistic effects and showed tolerance against the pesticides mixture. In fact, the growth of mixed cultures was significantly higher than in pure cultures. Moreover, a pure culture (Streptomyces sp. A5) and a quadruple culture had the highest pentachlorophenol removal percentages (10.6% and 10.1%, resp.), while Streptomyces sp. M7 presented the best chlorpyrifos removal (99.2%). Mixed culture of all Streptomyces spp. when assayed either as free or immobilized cells showed chlorpyrifos removal percentages of 40.17% and 71.05%, respectively, and for pentachlorophenol 5.24% and 14.72%, respectively, suggesting better removal of both pesticides by using immobilized cells. These results reveal that environments contaminated with mixtures of xenobiotics could be successfully cleaned up by using either free or immobilized cultures of Streptomyces, through in situ or ex situ remediation techniques. PMID:23865051
Edinçliler, Ayşe; Baykal, Gökhan; Saygili, Altug
2010-06-01
Use of processed used tires in embankment construction is becoming an accepted way of beneficially recycling scrap tires, given shortages of natural mineral resources and increasing waste disposal costs. Using these tires in construction requires an awareness of the properties and limitations associated with their use. The main objective of this paper is to assess the effect of different processing techniques on the mechanical properties of used tire-sand mixtures, with a view to improving the engineering properties of the available soil. In the first part, a literature study on the mechanical properties of processed used tires such as tire shreds, tire chips, tire buffings and their mixtures with sand is summarized. In the second part, large-scale direct shear tests are performed to evaluate the shear strength of tire crumb-sand mixtures, for which information is not readily available in the literature. The test results with tire crumb were compared with those for the other processed used tire-sand mixtures. Sand-used tire mixtures have higher shear strength than sand alone, and the shear strength parameters depend on the processing conditions of the used tires. Three factors are found to significantly affect the mechanical properties: normal stress, processing technique, and used tire content. Copyright 2009. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Fini, Jean-Baptiste; Mughal, Bilal B.; Le Mével, Sébastien; Leemans, Michelle; Lettmann, Mélodie; Spirhanzlova, Petra; Affaticati, Pierre; Jenett, Arnim; Demeneix, Barbara A.
2017-03-01
Thyroid hormones are essential for normal brain development in vertebrates. In humans, abnormal maternal thyroid hormone levels during early pregnancy are associated with decreased offspring IQ and modified brain structure. As numerous environmental chemicals disrupt thyroid hormone signalling, we questioned whether exposure to ubiquitous chemicals affects thyroid hormone responses during early neurogenesis. We established a mixture of 15 common chemicals at concentrations reported in human amniotic fluid. An in vivo larval reporter (GFP) assay served to determine integrated thyroid hormone transcriptional responses. Dose-dependent effects of short-term (72 h) exposure to single chemicals and to the mixture were found. qPCR on dissected brains showed significant changes in thyroid hormone-related genes including receptors, deiodinases and neural differentiation markers. Further, exposure to the mixture also modified neural proliferation as well as neuron and oligodendrocyte size. Finally, exposed tadpoles showed behavioural responses with dose-dependent reductions in mobility. In conclusion, exposure to a mixture of ubiquitous chemicals at concentrations found in human amniotic fluid affects thyroid hormone-dependent transcription, gene expression, brain development and behaviour in early embryogenesis. As thyroid hormone signalling is strongly conserved across vertebrates, these results suggest that ubiquitous chemical mixtures could exert adverse effects on foetal human brain development.
Examining the effect of initialization strategies on the performance of Gaussian mixture modeling.
Shireman, Emilie; Steinley, Douglas; Brusco, Michael J
2017-02-01
Mixture modeling is a popular technique for identifying unobserved subpopulations (e.g., components) within a data set, with Gaussian (normal) mixture modeling being the form most widely used. Generally, the parameters of these Gaussian mixtures cannot be estimated in closed form, so estimates are typically obtained via an iterative process. The most common estimation procedure is maximum likelihood via the expectation-maximization (EM) algorithm. Like many approaches for identifying subpopulations, finite mixture modeling can suffer from locally optimal solutions, and the final parameter estimates are dependent on the initial starting values of the EM algorithm. Initial values have been shown to significantly impact the quality of the solution, and researchers have proposed several approaches for selecting the set of starting values. Five techniques for obtaining starting values that are implemented in popular software packages are compared. Their performances are assessed in terms of the following four measures: (1) the ability to find the best observed solution, (2) settling on a solution that classifies observations correctly, (3) the number of local solutions found by each technique, and (4) the speed at which the start values are obtained. On the basis of these results, a set of recommendations is provided to the user.
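The multiple-random-starts strategy discussed above can be sketched for a univariate two-component mixture. The initialization rule used here (means drawn from the data, pooled variance, equal weights) is only one of the many schemes the study compares, chosen for illustration:

```python
import numpy as np

def em_gmm_1d(x, k, rng, n_iter=200):
    """One EM run for a k-component univariate Gaussian mixture, from a
    random start (means drawn from the data, pooled variance, equal weights)."""
    n = x.size
    mu = rng.choice(x, size=k, replace=False)
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities from current component densities
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances
        nk = resp.sum(axis=0)
        w, mu = nk / n, (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
    dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return np.log(dens.sum(axis=1)).sum(), w, mu, var

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 300), rng.normal(3, 1, 300)])
# Guard against locally optimal solutions: several random starts,
# keep the run with the highest final log-likelihood.
runs = [em_gmm_1d(x, 2, rng) for _ in range(10)]
loglik, w_best, mu_best, var_best = max(runs, key=lambda r: r[0])
```

Comparing the log-likelihoods across `runs` makes the paper's point concrete: different starts can terminate at different local solutions, and only the best-observed one should be retained.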
Fontes, Cristiano Hora; Budman, Hector
2017-11-01
A clustering problem involving multivariate time series (MTS) requires the selection of similarity metrics. This paper shows the limitations of the PCA similarity factor (SPCA) as a single metric in nonlinear problems where there are differences in magnitude of the same process variables due to expected changes in operation conditions. A novel method for clustering MTS based on a combination between SPCA and the average-based Euclidean distance (AED) within a fuzzy clustering approach is proposed. Case studies involving either simulated or real industrial data collected from a large scale gas turbine are used to illustrate that the hybrid approach enhances the ability to recognize normal and fault operating patterns. This paper also proposes an oversampling procedure to create synthetic multivariate time series that can be useful in commonly occurring situations involving unbalanced data sets. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
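A rough sketch of the two metrics and one possible way to combine them follows. The `hybrid_dissimilarity` weighting is an assumption for illustration only; the paper embeds the combination inside a fuzzy clustering objective, which is not reproduced here:

```python
import numpy as np

def pca_similarity(X1, X2, k=2):
    """S_PCA: mean squared cosine between the top-k principal directions of
    two multivariate time series (rows = time, columns = variables)."""
    def top_pcs(X):
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Vt[:k].T                      # (p, k) orthonormal loadings
    L1, L2 = top_pcs(X1), top_pcs(X2)
    return np.linalg.norm(L1.T @ L2, 'fro') ** 2 / k

def avg_euclidean_distance(X1, X2):
    """AED: distance between per-variable mean vectors, capturing the
    magnitude differences that S_PCA is blind to."""
    return np.linalg.norm(X1.mean(axis=0) - X2.mean(axis=0))

def hybrid_dissimilarity(X1, X2, k=2, w=0.5):
    """Illustrative blend of shape (1 - S_PCA) and normalized magnitude (AED)."""
    aed = avg_euclidean_distance(X1, X2)
    return w * (1.0 - pca_similarity(X1, X2, k)) + (1 - w) * aed / (1.0 + aed)

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 3))
Y = X + 5.0   # identical correlation structure, shifted operating point
```

The example exposes the limitation the paper describes: `Y` has exactly the same principal directions as `X` (so S_PCA = 1), yet operates at a different magnitude, which only the AED term detects.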
Latent Partially Ordered Classification Models and Normal Mixtures
ERIC Educational Resources Information Center
Tatsuoka, Curtis; Varadi, Ferenc; Jaeger, Judith
2013-01-01
Latent partially ordered sets (posets) can be employed in modeling cognitive functioning, such as in the analysis of neuropsychological (NP) and educational test data. Posets are cognitively diagnostic in the sense that classification states in these models are associated with detailed profiles of cognitive functioning. These profiles allow for…
The discovery of chlorination and chloramination by-products other than the regulated trihalomethanes and haloacetic acids has created a need for short-term in vitro assays to address toxicities that might be associated with human exposure. Approximately 600 disinfection by-produ...
Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models
NASA Astrophysics Data System (ADS)
Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana
2014-05-01
Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support for catchment management decisions. As the questions asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests become increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the < 63 µm fraction of the five source soils, i.e. assuming no fluvial sorting of the mixture. The geochemistry of all source and mixture samples (5 source soils and 12 mixed soils) was analysed using X-ray fluorescence (XRF). Tracer properties were selected from 18 elements for which mass concentrations were found to be significantly different between sources. Sets of fingerprint properties that discriminate target sources were selected using a range of independent statistical approaches (e.g. Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or correlation matrix). Summary results from the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the known initial mixing proportions.
Given the experimental nature of the work and the dry mixing of materials, geochemically conservative behavior was assumed for all elements, even those that might be disregarded in aquatic systems (e.g. P). In general, the best fits between actual and modeled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K) derived using DFA coupled with a multivariate stepwise algorithm, with errors between real and estimated values that did not exceed 6.7% and goodness-of-fit (GOF) values above 94.5%. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials, assuming that a degree of fluvial sorting of the resulting mixture took place. Most particle size correction procedures assume grain size effects are consistent across sources and tracer properties, which is not always the case. Consequently, the < 40 µm fraction of selected soil mixtures was analysed to simulate the effect of selective fluvial transport of finer particles, and the results were compared to those for the source materials. Preliminary findings from this experiment demonstrate the sensitivity of the numerical mixing model outputs to different particle size distributions of source material and the variable impact of fluvial sorting on the end-member signatures used in mixing models. The results suggest that particle size correction procedures require careful scrutiny in the context of variable source characteristics.
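The core unmixing computation can be sketched as a constrained least-squares problem on tracer concentrations. The sketch below enforces sum-to-one via a heavily weighted extra equation and omits the non-negativity constraints and particle-size/organic-matter corrections a full fingerprinting model would include; the synthetic source geochemistry is illustrative:

```python
import numpy as np

def unmix(sources, mixture):
    """Estimate source proportions p such that p @ sources ~= mixture,
    with a sum-to-one constraint appended as a heavily weighted row."""
    A = np.vstack([sources.T, 1e3 * np.ones(sources.shape[0])])   # (tracers+1, sources)
    b = np.concatenate([mixture, [1e3]])
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Synthetic check in the spirit of the laboratory experiment: build a
# mixture from known proportions of source geochemistry, then recover them.
rng = np.random.default_rng(42)
sources = rng.uniform(1, 100, size=(4, 9))     # 4 sources x 9 tracer concentrations
true_p = np.array([0.4, 0.3, 0.2, 0.1])
mixture = true_p @ sources
est_p = unmix(sources, mixture)
```

With noiseless, geochemically distinct sources the proportions are recovered exactly; the experiments above probe how tracer selection and particle-size effects degrade this ideal recovery.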
Müller, E; Giehl, A; Schwarzmann, G; Sandhoff, K; Blume, A
1996-09-01
Fourier transform infrared (FTIR) attenuated total reflection (ATR) spectroscopy was used to elucidate the hydration behavior and molecular order of phospholipid/ganglioside bilayers. We examined dry and hydrated films of the gangliosides GM1, deacetyl-GM1, lyso-GM1, deacetyl-lyso-GM1, and GM3 and oriented mixed films of these gangliosides with 1,2-dimyristoyl-sn-glycero-3-phosphorylcholine (DMPC) using polarized light. Analysis of the amide I frequencies reveals that the amide groups are involved in intermolecular interactions via hydrogen bonds of varying strengths. The tilt angle of the acyl chains of the lipids in mixed films was determined as a function of ganglioside structure. Deacetylation of the sialic acid in the headgroup has a stronger influence on the tilt angle than removal of the ganglioside fatty acid. The phase behavior was examined by FTIR ATR spectroscopy and by differential scanning calorimetry (DSC) measurements on lipid suspensions. At the same molar concentration, lyso-gangliosides have less effect on the transition temperature than their double-chain analogs. Distinct differences in the amide band shapes were observed between mixtures with lyso-gangliosides and normal double-chain gangliosides. As determined from the dichroic ratio R(ATR), the orientation of the COO- group in all DMPC/ganglioside mixtures was found to be relatively fixed with respect to the membrane normal. In 4:1 mixtures of DMPC with GM1 and deacetyl-GM1, the binding of Ca2+ leads to a slight decrease in chain tilt in the gel phase, probably caused by dehydration of the membrane-water interface. In mixtures of DMPC with GM3 and deacetyl-lyso-GM1, a slight increase in chain tilt is observed. The chain tilt in DMPC/lyso-GM1 mixtures is unchanged. Analysis of the COO- band reveals that Ca2+ does not bind to the carboxylate group of the sialic acid of GM1 and deacetyl-GM1, the mixtures in which a decrease in chain tilt was observed.
Binding to the sialic acid was only observed for mixtures of DMPC with GM3, lyso-GM1, and deacetyl-lyso-GM1. Ca2+ evidently accumulates at the bilayer-water interface and leads to partial dehydration of the headgroup region in the gel as well as in the liquid-crystalline phase, as can be concluded from the changes in the amide I band shapes. With the exception of DMPC/deacetyl-GM1, the effects on the ester C=O bands are small. The addition of Ca2+ has minor effects on the phase behavior, with the exception of the DMPC/GM1 mixture.
Plurihormonal cells of normal anterior pituitary: Facts and conclusions
Mitrofanova, Lubov B.; Konovalov, Petr V.; Krylova, Julia S.; Polyakova, Victoria O.; Kvetnoy, Igor M.
2017-01-01
Introduction: Plurihormonality of pituitary adenomas is the ability of adenoma cells to produce more than one hormone. After immunohistochemical analysis became a routine part of morphological studies, a great number of adenomas proved to be multihormonal in actual practice. We hypothesize that the same cells of a normal pituitary gland release several hormones simultaneously. Objective: To analyse possible co-expression of hormones by cells of the normal anterior pituitary of adult humans in autopsy material. Materials and methods: We studied 10 pituitary glands from 4 women and 6 men with cardiovascular and oncological diseases. Double-staining immunohistochemistry using 11 hormone combinations was performed in all cases. These combinations were: prolactin/thyroid-stimulating hormone (TSH), prolactin/luteinizing hormone (LH), prolactin/follicle-stimulating hormone (FSH), prolactin/adrenocorticotropic hormone (ACTH), growth hormone (GH)/TSH, GH/LH, GH/FSH, GH/ACTH, TSH/LH, TSH/FSH, and TSH/ACTH. Laser confocal scanning microscopy with mixtures of primary antibodies was performed in 2 cases; the mixtures were ACTH/prolactin, FSH/prolactin, TSH/prolactin, ACTH/GH, and FSH/GH. Results: The same cells of the normal adenohypophysis can co-express prolactin with ACTH, TSH, FSH and LH; GH with ACTH, TSH, FSH and LH; and TSH with ACTH, FSH and LH. Comparison of the average co-expression coefficients of prolactin, GH and TSH with other hormones showed that the TSH co-expression coefficient was significantly the lowest (9.5±6.9%, 9.6±7.8% and 1.0±1.3%, respectively). Conclusion: Plurihormonality of the normal adenohypophysis is a real phenomenon. Identification of different hormones in pituitary adenomas may enable new ways to improve both diagnosis and targeted treatment. PMID:28418929
Arase, Shuntaro; Horie, Kanta; Kato, Takashi; Noda, Akira; Mito, Yasuhiro; Takahashi, Masatoshi; Yanagisawa, Toshinobu
2016-10-21
The multivariate curve resolution-alternating least squares (MCR-ALS) method was investigated for its potential to accelerate pharmaceutical research and development. The fast and efficient separation of complex mixtures consisting of multiple components, including impurities as well as major drug substances, remains a challenging application for liquid chromatography in the field of pharmaceutical analysis. In this paper we propose an integrated analysis algorithm that operates on a data matrix generated by HPLC coupled with a photo-diode array detector (HPLC-PDA). The algorithm implements the developed multivariate curve resolution method using an expectation-maximization (EM) algorithm with a bidirectional exponentially modified Gaussian (BEMG) model function as a constraint for the chromatograms and the numerous PDA spectra aligned with the time axis. The algorithm produced less than ±1.0% error between true and separated peak-area values at a resolution (Rs) of 0.6, using simulation data for a three-component mixture with elution order a/b/c and peak-apex spectral similarities of (a/b)=0.8410, (b/c)=0.9123 and (a/c)=0.9809. This software concept provides fast and robust separation analysis even when method development fails to achieve complete separation of the target peaks. Additionally, this approach is potentially applicable to peak deconvolution, allowing quantitative analysis of co-eluted compounds having exactly the same molecular weight. This is complementary to the use of LC-MS, which quantifies co-eluted compounds using selected ions to differentiate the proportion of response attributable to each compound. Copyright © 2016 Elsevier B.V. All rights reserved.
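The core idea of model-based peak deconvolution can be sketched without the paper's BEMG model: fit a parametric sum of peak shapes to an overlapped chromatogram and read off the component areas. The sketch below uses plain Gaussian peaks and nonlinear least squares (not the authors' EM/BEMG algorithm); all peak parameters and the noise level are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, area, center, width):
    # Gaussian peak parameterized directly by its integrated area
    return area / (width * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((t - center) / width) ** 2)

def two_peaks(t, a1, c1, w1, a2, c2, w2):
    return gaussian(t, a1, c1, w1) + gaussian(t, a2, c2, w2)

# Simulated chromatogram: two peaks 1.2 min apart, sigma = 0.5,
# i.e. chromatographic resolution Rs = 2*1.2/(4*0.5 + 4*0.5) = 0.6
t = np.linspace(0.0, 10.0, 500)
true = dict(a1=1.0, c1=4.4, w1=0.5, a2=0.7, c2=5.6, w2=0.5)
rng = np.random.default_rng(0)
y = two_peaks(t, **true) + rng.normal(0.0, 0.002, t.size)

# Deconvolve by fitting the two-peak model to the overlapped signal
popt, _ = curve_fit(two_peaks, t, y, p0=[0.8, 4.2, 0.4, 0.8, 5.8, 0.4])
area_error_1 = abs(popt[0] - true["a1"]) / true["a1"] * 100  # % error, peak 1 area
area_error_2 = abs(popt[3] - true["a2"]) / true["a2"] * 100  # % error, peak 2 area
```

Even at Rs = 0.6, where the peaks visibly merge, the fitted areas recover the true values closely; the paper's method additionally exploits the PDA spectral dimension, which this univariate sketch omits.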
Organic solvent exposure and hearing loss in a cohort of aluminium workers.
Rabinowitz, P M; Galusha, D; Slade, M D; Dixon-Ernst, C; O'Neill, A; Fiellin, M; Cullen, M R
2008-04-01
Organic solvent exposure has been shown to cause hearing loss in animals and humans. Less is known about the risk of hearing loss due to solvent exposures typically found in US industry. The authors performed a retrospective cohort study to examine the relationship between solvent exposure and hearing loss in US aluminium industry workers. A cohort of 1319 workers aged 35 years or less at inception was followed for 5 years. Linkage of employment, industrial hygiene and audiometric surveillance records allowed for estimation of noise and solvent exposures and hearing loss rates over the study period. Study subjects were classified as "solvent exposed" or not, on the basis of industrial hygiene records linked with individual job histories. High frequency hearing loss was modelled as both a continuous and a dichotomous outcome. Typical solvent exposures involved mixtures of xylene, toluene and/or methyl ethyl ketone (MEK). Recorded solvent exposure levels varied widely both within and between jobs. In a multivariate logistic model, risk factors for high frequency hearing loss included age (OR = 1.06, p = 0.004), hunting or shooting (OR = 1.35, p = 0.049), noisy hobbies (OR = 1.74, p = 0.01), baseline hearing level (OR = 1.04, p<0.001) and solvent exposure (OR = 1.87, p = 0.004). A multivariate linear regression analysis similarly found significant associations between high frequency hearing loss and age (p<0.001), hunting or shooting (p<0.001), noisy hobbies (p = 0.03), solvent exposure (p<0.001) and baseline hearing (p = 0.03). These results suggest that occupational exposure to organic solvent mixtures is a risk factor for high frequency hearing loss, although the data do not allow conclusions about dose-response relationships. Industries with solvent-exposed workers should include such workers in hearing conservation programs.
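The odds ratios reported above come from a multivariate logistic model; the relationship OR = exp(beta) for a binary covariate can be illustrated with a small simulation. The sketch below fits a logistic regression by Newton-Raphson on simulated data (not the study's cohort); the covariates, coefficients and sample size are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
age = rng.normal(45.0, 8.0, n)
solvent = rng.binomial(1, 0.3, n)

# Simulated truth: log-odds of high-frequency hearing loss,
# with a built-in solvent odds ratio of about 1.9
logit = -4.0 + 0.06 * age + np.log(1.9) * solvent
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Newton-Raphson (IRLS) for the logistic MLE
X = np.column_stack([np.ones(n), age, solvent])
beta = np.zeros(3)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (y - p)                      # score vector
    H = X.T @ (X * (p * (1 - p))[:, None])    # observed information
    beta += np.linalg.solve(H, grad)

or_solvent = np.exp(beta[2])  # estimated odds ratio for solvent exposure
```

The estimated `or_solvent` lands near the built-in value of 1.9, mirroring how the study's OR = 1.87 for solvent exposure is the exponentiated logistic coefficient.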
Annotating novel genes by integrating synthetic lethals and genomic information
Schöner, Daniel; Kalisch, Markus; Leisner, Christian; Meier, Lukas; Sohrmann, Marc; Faty, Mahamadou; Barral, Yves; Peter, Matthias; Gruissem, Wilhelm; Bühlmann, Peter
2008-01-01
Background: Large-scale screening for synthetic lethality serves as a common tool in yeast genetics to systematically search for genes that play a role in specific biological processes. Often the amount of data resulting from a single large-scale screen far exceeds the capacity for experimental characterization of every identified target. Thus, there is a need for computational tools that select promising candidate genes in order to reduce the number of follow-up experiments to a manageable size. Results: We analyze synthetic lethality data for arp1 and jnm1, two spindle migration genes, in order to identify novel members of this process. To this end, we use an unsupervised statistical method that integrates additional information from biological data sources, such as gene expression, phenotypic profiling, RNA degradation and sequence similarity. Unlike existing methods, which require large amounts of synthetic lethal data, our method relies merely on synthetic lethality information from two single screens. Using a multivariate Gaussian mixture model, we determine the best subset of features that assigns the target genes to two groups. The approach identifies a small group of genes as candidates involved in spindle migration. Experimental testing confirms the majority of our candidates, and we present she1 (YBL031W) as a novel gene involved in spindle migration. We also applied the statistical methodology to TOR2 signaling as a second example. Conclusion: We demonstrate the general use of multivariate Gaussian mixture modeling for selecting candidate genes for experimental characterization from synthetic lethality data sets. For the given example, integration of different data sources contributes to the identification of genetic interaction partners of arp1 and jnm1 that play a role in the same biological process. PMID:18194531
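The two-group assignment step described above can be sketched with an off-the-shelf multivariate Gaussian mixture fit. The example below clusters synthetic two-dimensional feature profiles into two components with scikit-learn; the feature values and group structure are invented stand-ins, and the paper's feature-subset selection is not shown.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic feature profiles for two hypothetical gene groups
# (e.g. candidate spindle-migration genes vs background)
group_a = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 1.0]], 100)
group_b = rng.multivariate_normal([4.0, 4.0], [[1.0, -0.2], [-0.2, 1.0]], 100)
X = np.vstack([group_a, group_b])

# Fit a 2-component multivariate Gaussian mixture by EM and assign groups
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(X)
labels = gmm.predict(X)

# With well-separated means, each true group maps to one mixture component
purity = max(labels[:100].mean(), 1.0 - labels[:100].mean())
```

Genes falling in the smaller, well-separated component would then be the candidates forwarded to experimental testing.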
Considerations in cross-validation type density smoothing with a look at some data
NASA Technical Reports Server (NTRS)
Schuster, E. F.
1982-01-01
Experience gained in applying nonparametric maximum likelihood techniques of density estimation to judge the comparative quality of various estimators is reported. Two univariate data sets of one hundred samples (one Cauchy, one normal) are considered, as well as studies in the multivariate case.
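One common cross-validation-type criterion for density smoothing is leave-one-out likelihood cross-validation: choose the kernel bandwidth that maximizes the log-likelihood of each point under the density estimated from the others. The sketch below applies it to a univariate normal sample; the sample size, bandwidth grid and Gaussian kernel are illustrative choices, not the report's setup.

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(0.0, 1.0, 200)  # univariate normal sample

def loo_log_likelihood(x, h):
    # Leave-one-out log-likelihood of a Gaussian kernel density estimate
    n = x.size
    d = x[:, None] - x[None, :]
    K = np.exp(-0.5 * (d / h) ** 2) / (h * np.sqrt(2 * np.pi))
    np.fill_diagonal(K, 0.0)          # exclude each point's own kernel
    fhat = K.sum(axis=1) / (n - 1)    # density at x_i from the other points
    return np.log(fhat).sum()

bandwidths = np.linspace(0.05, 1.5, 60)
scores = [loo_log_likelihood(x, h) for h in bandwidths]
h_best = bandwidths[int(np.argmax(scores))]
```

For a normal sample the selected `h_best` lands in a moderate range, near what reference rules such as Silverman's would suggest; for heavy-tailed data like the Cauchy sample mentioned above, likelihood cross-validation is known to behave less stably, which is part of what such comparative studies examine.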
Fitting and Testing Conditional Multinormal Partial Credit Models
ERIC Educational Resources Information Center
Hessen, David J.
2012-01-01
A multinormal partial credit model for factor analysis of polytomously scored items with ordered response categories is derived using an extension of the Dutch Identity (Holland in "Psychometrika" 55:5-18, 1990). In the model, latent variables are assumed to have a multivariate normal distribution conditional on unweighted sums of item…
Simultaneous Inference Procedures for Means.
ERIC Educational Resources Information Center
Krishnaiah, P. R.
Some aspects of simultaneous tests for means are reviewed. Specifically, the comparison of univariate or multivariate normal populations based on the values of the means or mean vectors when the variances or covariance matrices are equal is discussed. Tukey's and Dunnett's tests for multiple comparisons of means, Scheffe's method of examining…
Disfluency in Spasmodic Dysphonia: A Multivariate Analysis.
ERIC Educational Resources Information Center
Cannito, Michael P.; Burch, Annette Renee; Watts, Christopher; Rappold, Patrick W.; Hood, Stephen B.; Sherrard, Kyla
1997-01-01
This study examined visual analog scaling judgments of disfluency by normal listeners in response to oral reading by 20 adults with spasmodic dysphonia (SD) and nondysphonic controls. Findings suggest that although disfluency is not a defining feature of SD, it does contribute significantly to the overall clinical impression of severity of the…
Lu, Tsui-Shan; Longnecker, Matthew P.; Zhou, Haibo
2016-01-01
Outcome-dependent sampling (ODS) is a cost-effective sampling scheme in which one observes the exposure with a probability that depends on the outcome. Well-known examples are the case-control design for binary responses, the case-cohort design for failure-time data, and the general ODS design for a continuous response. While substantial work has been done for the univariate-response case, statistical inference and design for ODS with multivariate responses remain under-developed. Motivated by the need in biological studies to take advantage of the available responses for subjects in a cluster, we propose a multivariate outcome-dependent sampling (Multivariate-ODS) design based on a general selection of the continuous responses within a cluster. The proposed inference procedure for the Multivariate-ODS design is semiparametric: all the underlying distributions of covariates are modeled nonparametrically using empirical likelihood methods. We show that the proposed estimator is consistent and establish its asymptotic normality. Simulation studies show that the proposed estimator is more efficient than the estimator obtained using only the simple-random-sample portion of the Multivariate-ODS, or the estimator from a simple random sample of the same size. The Multivariate-ODS design, together with the proposed estimator, provides an approach to further improve study efficiency for a given fixed study budget. We illustrate the proposed design and estimator with an analysis of the association of PCB exposure with hearing loss in children born into the Collaborative Perinatal Study. PMID:27966260
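The sampling idea itself is simple to illustrate: the (cheap) response is observed for everyone, while the (expensive) exposure is measured for a simple random sample plus a supplement drawn from the informative tails of the response distribution. The sketch below shows only this selection step on simulated data; the sample sizes, cutoffs and linear model are invented, and the paper's semiparametric empirical-likelihood estimator is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
x = rng.normal(size=n)               # exposure (expensive to measure)
y = 0.5 * x + rng.normal(size=n)     # continuous response (already observed)

# ODS selection: a simple random sample, supplemented by extra draws
# taken from the lower and upper tails of the response distribution
srs = rng.choice(n, 300, replace=False)
lo, hi = np.quantile(y, [0.1, 0.9])
tails = np.flatnonzero((y < lo) | (y > hi))
supplement = rng.choice(tails, 200, replace=False)
measured = np.union1d(srs, supplement)   # subjects whose exposure gets measured

# The measured subsample deliberately over-represents extreme responses
tail_fraction = np.isin(measured, tails).mean()
```

Under simple random sampling only about 20% of measured subjects would fall in the tails; the ODS supplement pushes that fraction far higher, which is where the design's efficiency gain for a fixed measurement budget comes from.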
A short note on the maximal point-biserial correlation under non-normality.
Cheng, Ying; Liu, Haiyan
2016-11-01
The aim of this paper is to derive the maximal point-biserial correlation under non-normality. Several widely used non-normal distributions are considered, namely the uniform distribution, t-distribution, exponential distribution, and a mixture of two normal distributions. Results show that the maximal point-biserial correlation, depending on the non-normal continuous variable underlying the binary manifest variable, may not be a function of p (the probability that the dichotomous variable takes the value 1), can be symmetric or non-symmetric around p = .5, and may still lie in the range from -1.0 to 1.0. Therefore researchers should exercise caution when they interpret their sample point-biserial correlation coefficients based on popular beliefs that the maximal point-biserial correlation is always smaller than 1, and that the size of the correlation is always further restricted as p deviates from .5. © 2016 The British Psychological Society.
Andre, C; Farcet, J P; Oudhriri, N; Gourdin, M F; Bouguet, J; Reyes, F
1983-01-01
The lymphocyte colony forming capacity of peripheral blood mononuclear cells from normal controls and from two patients with chronic OKT8+ lymphocytic leukaemia was determined in agar culture under PHA stimulation. The number and size of the colonies in patients were reduced compared to normal. The lymphocytic phenotype of colony cells was studied with monoclonal antibodies in colonies harvested from agar culture and in colonies expanded in liquid culture in the presence of TCGF. This study was performed in individual colonies and in pooled colonies. Colonies from normal controls contained a mixture of the OKT4+ and OKT8+ lymphocyte subsets. In contrast, colonies from the two patients contained essentially OKT4+ lymphocytes. The data indicate that, in the patients, progenitors of the OKT8+ subset are unresponsive to normal proliferative and/or differentiative stimuli under the present culture conditions. PMID:6606509
Precision wood particle feedstocks
Dooley, James H; Lanning, David N
2013-07-30
Wood particles having fibers aligned in a grain, wherein: the wood particles are characterized by a length dimension (L) aligned substantially parallel to the grain, a width dimension (W) normal to L and aligned cross grain, and a height dimension (H) normal to W and L; the L×H dimensions define two side surfaces characterized by substantially intact longitudinally arrayed fibers; the W×H dimensions define two cross-grain end surfaces characterized individually as aligned either normal to the grain or oblique to the grain; the L×W dimensions define two substantially parallel top and bottom surfaces; and, a majority of the W×H surfaces in the mixture of wood particles have end checking.
NASA Technical Reports Server (NTRS)
Peters, C. (Principal Investigator)
1980-01-01
A general theorem is given which establishes the existence and uniqueness of a consistent solution of the likelihood equations, given a sequence of independent random vectors whose distributions are not identical but share the same parameter set. In addition, it is shown that the consistent solution is an MLE and that it is asymptotically normal and efficient. Two applications are discussed: one in which independent observations of a normal random vector have missing components, and another in which the parameters of a mixture from an exponential family are estimated using independent homogeneous sample blocks of different sizes.