Sample records for multivariate normal approximation

  1. Analyzing Multivariate Repeated Measures Designs: A Comparison of Two Approximate Degrees of Freedom Procedures

    ERIC Educational Resources Information Center

    Lix, Lisa M.; Algina, James; Keselman, H. J.

    2003-01-01

    The approximate degrees of freedom Welch-James (WJ) and Brown-Forsythe (BF) procedures for testing within-subjects effects in multivariate groups by trials repeated measures designs were investigated under departures from covariance homogeneity and normality. Empirical Type I error and power rates were obtained for least-squares estimators and…

  2. Approximating Multivariate Normal Orthant Probabilities. ONR Technical Report. [Biometric Lab Report No. 90-1].

    ERIC Educational Resources Information Center

    Gibbons, Robert D.; And Others

    The probability integral of the multivariate normal distribution (ND) has received considerable attention since W. F. Sheppard's (1900) and K. Pearson's (1901) seminal work on the bivariate ND. This paper evaluates the formula that represents the "n x n" correlation matrix of the χ_i and the standardized multivariate…
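
Records 2 and 12 both concern orthant probabilities of the multivariate normal. As a hedged illustration (a plain Monte Carlo estimate, not the Clark algorithm or the formula these reports evaluate), the bivariate case can be checked against Sheppard's arcsine formula:

```python
import numpy as np

def orthant_probability(corr, n_samples=200_000, seed=0):
    """Monte Carlo estimate of P(X_1 > 0, ..., X_d > 0) for a
    zero-mean multivariate normal with correlation matrix `corr`."""
    rng = np.random.default_rng(seed)
    d = corr.shape[0]
    x = rng.multivariate_normal(np.zeros(d), corr, size=n_samples)
    return np.mean(np.all(x > 0, axis=1))

# Check against Sheppard's (1900) closed form for the bivariate case:
# P(X1 > 0, X2 > 0) = 1/4 + arcsin(rho) / (2 * pi)
rho = 0.5
est = orthant_probability(np.array([[1.0, rho], [rho, 1.0]]))
exact = 0.25 + np.arcsin(rho) / (2 * np.pi)
```

For rho = 0.5 the exact value is 1/4 + 1/12 = 1/3; the Monte Carlo estimate agrees to roughly three decimal places at this sample size.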

  3. A Comparison of the Bootstrap-F, Improved General Approximation, and Brown-Forsythe Multivariate Approaches in a Mixed Repeated Measures Design

    ERIC Educational Resources Information Center

    Seco, Guillermo Vallejo; Izquierdo, Marcelino Cuesta; Garcia, M. Paula Fernandez; Diez, F. Javier Herrero

    2006-01-01

    The authors compare the operating characteristics of the bootstrap-F approach, a direct extension of the work of Berkovits, Hancock, and Nevitt, with Huynh's improved general approximation (IGA) and the Brown-Forsythe (BF) multivariate approach in a mixed repeated measures design when normality and multisample sphericity assumptions do not hold.…

  4. Multidimensional stochastic approximation using locally contractive functions

    NASA Technical Reports Server (NTRS)

    Lawton, W. M.

    1975-01-01

    A Robbins-Monro type multidimensional stochastic approximation algorithm which converges in mean square and with probability one to the fixed point of a locally contractive regression function is developed. The algorithm is applied to obtain maximum likelihood estimates of the parameters for a mixture of multivariate normal distributions.
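
The Robbins-Monro iteration underlying record 4 can be sketched in a few lines. This is a generic one-dimensional toy example (the step-size schedule and regression function are illustrative assumptions, not Lawton's multidimensional algorithm):

```python
import numpy as np

def robbins_monro(noisy_m, theta0, n_iter=5000, a=1.0, seed=0):
    """Robbins-Monro root finding: theta_{k+1} = theta_k - a/(k+1) * Y_k,
    where Y_k is a noisy observation of M(theta_k). Under the classic
    step-size conditions (steps sum to infinity, squared steps are
    summable) the iterates converge to the root of M."""
    rng = np.random.default_rng(seed)
    theta = theta0
    for k in range(n_iter):
        theta -= (a / (k + 1)) * noisy_m(theta, rng)
    return theta

# Toy regression function M(theta) = theta - 2, observed with noise;
# the root is theta* = 2.
root = robbins_monro(lambda t, rng: (t - 2.0) + rng.normal(0.0, 0.5), theta0=0.0)
```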

  5. Polynomial compensation, inversion, and approximation of discrete time linear systems

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1987-01-01

    The least-squares transformation of a discrete-time multivariable linear system into a desired one by convolving the first with a polynomial system yields optimal polynomial solutions to the problems of system compensation, inversion, and approximation. The polynomial coefficients are obtained from the solution to a so-called normal linear matrix equation, whose coefficients are shown to be the weighting patterns of certain linear systems. These, in turn, can be used in the recursive solution of the normal equation.

  6. Exact and Approximate Statistical Inference for Nonlinear Regression and the Estimating Equation Approach.

    PubMed

    Demidenko, Eugene

    2017-09-01

    The exact density distribution of the nonlinear least squares estimator in the one-parameter regression model is derived in closed form and expressed through the cumulative distribution function of the standard normal variable. Several proposals to generalize this result are discussed. The exact density is extended to the estimating equation (EE) approach and the nonlinear regression with an arbitrary number of linear parameters and one intrinsically nonlinear parameter. For a very special nonlinear regression model, the derived density coincides with the distribution of the ratio of two normally distributed random variables previously obtained by Fieller (1932), unlike other approximations previously suggested by other authors. Approximations to the density of the EE estimators are discussed in the multivariate case. Numerical complications associated with the nonlinear least squares are illustrated, such as nonexistence and/or multiple solutions, as major factors contributing to poor density approximation. The nonlinear Markov-Gauss theorem is formulated based on the near exact EE density approximation.

  7. Problems with Multivariate Normality: Can the Multivariate Bootstrap Help?

    ERIC Educational Resources Information Center

    Thompson, Bruce

    Multivariate normality is required for some statistical tests. This paper explores the implications of violating the assumption of multivariate normality and illustrates a graphical procedure for evaluating multivariate normality. The logic for using the multivariate bootstrap is presented. The multivariate bootstrap can be used when distribution…
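
The multivariate (case-resampling) bootstrap discussed in record 7 resamples whole observation vectors, so dependence among variables is preserved. A minimal sketch, where the data and the statistic (a correlation coefficient) are illustrative assumptions:

```python
import numpy as np

def multivariate_bootstrap(data, statistic, n_boot=2000, seed=0):
    """Case-resampling bootstrap: draw whole rows of the data matrix
    with replacement, preserving dependence among the columns, and
    return bootstrap replicates of `statistic`."""
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    reps = np.empty(n_boot)
    for b in range(n_boot):
        reps[b] = statistic(data[rng.integers(0, n, size=n)])
    return reps

# Percentile confidence interval for a correlation coefficient
rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.6 * x + rng.normal(size=200)
data = np.column_stack([x, y])
corr = lambda d: np.corrcoef(d[:, 0], d[:, 1])[0, 1]
reps = multivariate_bootstrap(data, corr)
ci = np.percentile(reps, [2.5, 97.5])
```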

  8. SMURC: High-Dimension Small-Sample Multivariate Regression With Covariance Estimation.

    PubMed

    Bayar, Belhassen; Bouaynaya, Nidhal; Shterenberg, Roman

    2017-03-01

    We consider a high-dimension low sample-size multivariate regression problem that accounts for correlation of the response variables. The system is underdetermined as there are more parameters than samples. We show that the maximum likelihood approach with covariance estimation is senseless because the likelihood diverges. We subsequently propose a normalization of the likelihood function that guarantees convergence. We call this method small-sample multivariate regression with covariance (SMURC) estimation. We derive an optimization problem and its convex approximation to compute SMURC. Simulation results show that the proposed algorithm outperforms the regularized likelihood estimator with known covariance matrix and the sparse conditional Gaussian graphical model. We also apply SMURC to the inference of the wing-muscle gene network of the Drosophila melanogaster (fruit fly).

  9. Simultaneous calibration of ensemble river flow predictions over an entire range of lead times

    NASA Astrophysics Data System (ADS)

    Hemri, S.; Fundel, F.; Zappa, M.

    2013-10-01

    Probabilistic estimates of future water levels and river discharge are usually simulated with hydrologic models using ensemble weather forecasts as main inputs. As hydrologic models are imperfect and the meteorological ensembles tend to be biased and underdispersed, the ensemble forecasts for river runoff typically are biased and underdispersed, too. Thus, in order to achieve both reliable and sharp predictions, statistical postprocessing is required. In this work, Bayesian model averaging (BMA) is applied to statistically postprocess raw ensemble runoff forecasts for a catchment in Switzerland, at lead times ranging from 1 to 240 h. The raw forecasts have been obtained using deterministic and ensemble forcing meteorological models with different forecast lead time ranges. First, BMA is applied based on mixtures of univariate normal distributions, subject to the assumption of independence between distinct lead times. Then, the independence assumption is relaxed in order to estimate multivariate runoff forecasts over the entire range of lead times simultaneously, based on a BMA version that uses multivariate normal distributions. Since river runoff is a highly skewed variable, Box-Cox transformations are applied in order to achieve approximate normality. Both univariate and multivariate BMA approaches are able to generate well-calibrated probabilistic forecasts that are considerably sharper than climatological forecasts. Additionally, multivariate BMA provides a promising approach for incorporating temporal dependencies into the postprocessed forecasts. Its major advantage over univariate BMA is an increase in reliability when the forecast system is changing due to model availability.
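
The Box-Cox step Hemri et al. use to make skewed runoff approximately normal can be illustrated with simulated data. The log-normal "runoff" series below is an assumption for the example; `scipy.stats.boxcox` estimates the transformation exponent by maximum likelihood:

```python
import numpy as np
from scipy import stats

# Simulated skewed "runoff" series (log-normal, strictly positive)
rng = np.random.default_rng(0)
runoff = rng.lognormal(mean=2.0, sigma=0.8, size=1000)

# Box-Cox: y = (x**lam - 1) / lam, with y = log(x) in the limit lam -> 0;
# scipy chooses lam by maximum likelihood to bring the data close to normal
transformed, lam = stats.boxcox(runoff)

skew_before = stats.skew(runoff)
skew_after = stats.skew(transformed)
```

For log-normal data the fitted exponent lands near zero (an approximate log transform), and the strong positive skew is essentially removed.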

  10. Optimal False Discovery Rate Control for Dependent Data

    PubMed Central

    Xie, Jichun; Cai, T. Tony; Maris, John; Li, Hongzhe

    2013-01-01

    This paper considers the problem of optimal false discovery rate control when the test statistics are dependent. An optimal joint oracle procedure, which minimizes the false non-discovery rate subject to a constraint on the false discovery rate is developed. A data-driven marginal plug-in procedure is then proposed to approximate the optimal joint procedure for multivariate normal data. It is shown that the marginal procedure is asymptotically optimal for multivariate normal data with a short-range dependent covariance structure. Numerical results show that the marginal procedure controls false discovery rate and leads to a smaller false non-discovery rate than several commonly used p-value based false discovery rate controlling methods. The procedure is illustrated by an application to a genome-wide association study of neuroblastoma and it identifies a few more genetic variants that are potentially associated with neuroblastoma than several p-value-based false discovery rate controlling procedures. PMID:23378870
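
For context, the p-value-based procedures the paper benchmarks against include the Benjamini-Hochberg step-up rule, which is short enough to sketch directly (the p-values below are made up for illustration; this is not the paper's oracle or marginal plug-in procedure):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: reject the hypotheses with
    the k smallest p-values, where k is the largest index such that
    p_(k) <= k * alpha / m. Returns a boolean rejection mask aligned
    with the input order."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order]
    below = ranked <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest rank passing the threshold
        reject[order[: k + 1]] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.9]
mask = benjamini_hochberg(pvals, alpha=0.05)
```

Here only the two smallest p-values survive the step-up comparison, since 0.039 already exceeds its threshold 3 * 0.05 / 8.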

  11. Robust tests for multivariate factorial designs under heteroscedasticity.

    PubMed

    Vallejo, Guillermo; Ato, Manuel

    2012-06-01

    The question of how to analyze several multivariate normal mean vectors when normality and covariance homogeneity assumptions are violated is considered in this article. For the two-way MANOVA layout, we address this problem adapting results presented by Brunner, Dette, and Munk (BDM; 1997) and Vallejo and Ato (modified Brown-Forsythe [MBF]; 2006) in the context of univariate factorial and split-plot designs and a multivariate version of the linear model (MLM) to accommodate heterogeneous data. Furthermore, we compare these procedures with the Welch-James (WJ) approximate degrees of freedom multivariate statistics based on ordinary least squares via Monte Carlo simulation. Our numerical studies show that of the methods evaluated, only the modified versions of the BDM and MBF procedures were robust to violations of underlying assumptions. The MLM approach was only occasionally liberal, and then by only a small amount, whereas the WJ procedure was often liberal if the interactive effects were involved in the design, particularly when the number of dependent variables increased and total sample size was small. On the other hand, it was also found that the MLM procedure was uniformly more powerful than its most direct competitors. The overall success rate was 22.4% for the BDM, 36.3% for the MBF, and 45.0% for the MLM.

  12. Approximating Multivariate Normal Orthant Probabilities Using the Clark Algorithm.

    DTIC Science & Technology

    1987-07-15


  13. Technical Reports Prepared Under Contract N00014-76-C-0475.

    DTIC Science & Technology

    1987-05-29

    The abstract is a flattened excerpt from the contract's report list; the recoverable entries are:
    264  Approximations to Densities in Geometric Probability (H. Solomon, M. A. Stephens, 10/27/78)
    265  Sequential ...
    ...Certain Multivariate Normal Probabilities (S. Iyengar, 8/12/82)
    323  EDF Statistics for Testing for the Gamma Distribution with... (M. A. Stephens, 8/13/82)
    360  Random Sequential Coding by Hamming Distance (Yoshiaki Itoh, Herbert Solomon, 07-11-85)
    361  Transforming Censored Samples and Testing Fit

  14. Multivariate Models for Normal and Binary Responses in Intervention Studies

    ERIC Educational Resources Information Center

    Pituch, Keenan A.; Whittaker, Tiffany A.; Chang, Wanchen

    2016-01-01

    Use of multivariate analysis (e.g., multivariate analysis of variance) is common when normally distributed outcomes are collected in intervention research. However, when mixed responses--a set of normal and binary outcomes--are collected, standard multivariate analyses are no longer suitable. While mixed responses are often obtained in…

  15. The Effect of the Multivariate Box-Cox Transformation on the Power of MANOVA.

    ERIC Educational Resources Information Center

    Kirisci, Levent; Hsu, Tse-Chi

    Most of the multivariate statistical techniques rely on the assumption of multivariate normality. The effects of non-normality on multivariate tests are assumed to be negligible when variance-covariance matrices and sample sizes are equal. Therefore, in practice, investigators do not usually attempt to remove non-normality. In this simulation…

  16. An Alternative Method for Computing Mean and Covariance Matrix of Some Multivariate Distributions

    ERIC Educational Resources Information Center

    Radhakrishnan, R.; Choudhury, Askar

    2009-01-01

    Computing the mean and covariance matrix of some multivariate distributions, in particular, multivariate normal distribution and Wishart distribution are considered in this article. It involves a matrix transformation of the normal random vector into a random vector whose components are independent normal random variables, and then integrating…

  17. An Analysis of Polynomial Chaos Approximations for Modeling Single-Fluid-Phase Flow in Porous Medium Systems

    PubMed Central

    Rupert, C.P.; Miller, C.T.

    2008-01-01

    We examine a variety of polynomial-chaos-motivated approximations to a stochastic form of a steady state groundwater flow model. We consider approaches for truncating the infinite dimensional problem and producing decoupled systems. We discuss conditions under which such decoupling is possible and show that to generalize the known decoupling by numerical cubature, it would be necessary to find new multivariate cubature rules. Finally, we use the acceleration of Monte Carlo to compare the quality of polynomial models obtained for all approaches and find that in general the methods considered are more efficient than Monte Carlo for the relatively small domains considered in this work. A curse of dimensionality in the series expansion of the log-normal stochastic random field used to represent hydraulic conductivity provides a significant impediment to efficient approximations for large domains for all methods considered in this work, other than the Monte Carlo method. PMID:18836519

  18. Comparison of Multidimensional Item Response Models: Multivariate Normal Ability Distributions versus Multivariate Polytomous Ability Distributions. Research Report. ETS RR-08-45

    ERIC Educational Resources Information Center

    Haberman, Shelby J.; von Davier, Matthias; Lee, Yi-Hsuan

    2008-01-01

    Multidimensional item response models can be based on multivariate normal ability distributions or on multivariate polytomous ability distributions. For the case of simple structure in which each item corresponds to a unique dimension of the ability vector, some applications of the two-parameter logistic model to empirical data are employed to…

  19. Multivariate stochastic simulation with subjective multivariate normal distributions

    Treesearch

    P. J. Ince; J. Buongiorno

    1991-01-01

    In many applications of Monte Carlo simulation in forestry or forest products, it may be known that some variables are correlated. However, for simplicity, in most simulations it has been assumed that random variables are independently distributed. This report describes an alternative Monte Carlo simulation technique for subjectively assessed multivariate normal...
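
The correlated-sampling idea behind Ince and Buongiorno's report is commonly implemented via a Cholesky factor of the covariance matrix. A minimal sketch (the mean vector and covariance below are illustrative, not values from the report):

```python
import numpy as np

def correlated_normals(mean, cov, n, seed=0):
    """Draw correlated multivariate normal samples by Cholesky
    factorization: if Z ~ N(0, I) and L @ L.T == cov, then
    mean + Z @ L.T ~ N(mean, cov)."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(cov)
    z = rng.standard_normal((n, len(mean)))
    return np.asarray(mean) + z @ L.T

mean = [10.0, 5.0]
cov = np.array([[4.0, 2.4],    # off-diagonal 2.4 gives correlation 0.6
                [2.4, 4.0]])
samples = correlated_normals(mean, cov, n=50_000)
sample_corr = np.corrcoef(samples.T)[0, 1]
```

The empirical correlation of the simulated draws recovers the 0.6 implied by the covariance matrix, which is exactly the dependence that independent sampling would discard.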

  20. Using empirical Bayes predictors from generalized linear mixed models to test and visualize associations among longitudinal outcomes.

    PubMed

    Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O

    2018-01-01

    Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and, hence, plottable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from an MGLMM provide a good approximation and visual representation of these latent association analyses using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by an MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases.
Thus if computable, scatterplots of the conditionally independent empirical Bayes predictors from a MGLMM are always preferable to scatterplots of empirical Bayes predictors generated by separate models, unless the true association between outcomes is zero.

  1. Multiple imputation for handling missing outcome data when estimating the relative risk.

    PubMed

    Sullivan, Thomas R; Lee, Katherine J; Ryan, Philip; Salter, Amy B

    2017-09-06

    Multiple imputation is a popular approach to handling missing data in medical research, yet little is known about its applicability for estimating the relative risk. Standard methods for imputing incomplete binary outcomes involve logistic regression or an assumption of multivariate normality, whereas relative risks are typically estimated using log binomial models. It is unclear whether misspecification of the imputation model in this setting could lead to biased parameter estimates. Using simulated data, we evaluated the performance of multiple imputation for handling missing data prior to estimating adjusted relative risks from a correctly specified multivariable log binomial model. We considered an arbitrary pattern of missing data in both outcome and exposure variables, with missing data induced under missing at random mechanisms. Focusing on standard model-based methods of multiple imputation, missing data were imputed using multivariate normal imputation or fully conditional specification with a logistic imputation model for the outcome. Multivariate normal imputation performed poorly in the simulation study, consistently producing estimates of the relative risk that were biased towards the null. Despite outperforming multivariate normal imputation, fully conditional specification also produced somewhat biased estimates, with greater bias observed for higher outcome prevalences and larger relative risks. Deleting imputed outcomes from analysis datasets did not improve the performance of fully conditional specification. Both multivariate normal imputation and fully conditional specification produced biased estimates of the relative risk, presumably since both use a misspecified imputation model. Based on simulation results, we recommend researchers use fully conditional specification rather than multivariate normal imputation and retain imputed outcomes in the analysis when estimating relative risks. 
However, fully conditional specification is not without its shortcomings, so further research is needed to identify optimal approaches for relative risk estimation within the multiple imputation framework.

  2. Approximations to the distribution of a test statistic in covariance structure analysis: A comprehensive study.

    PubMed

    Wu, Hao

    2018-05-01

    In structural equation modelling (SEM), a robust adjustment to the test statistic or to its reference distribution is needed when its null distribution deviates from a χ² distribution, which usually arises when data do not follow a multivariate normal distribution. Unfortunately, existing studies on this issue typically focus on only a few methods and neglect the majority of alternative methods in statistics. Existing simulation studies typically consider only non-normal distributions of data that either satisfy asymptotic robustness or lead to an asymptotic scaled χ² distribution. In this work we conduct a comprehensive study that involves both typical methods in SEM and less well-known methods from the statistics literature. We also propose the use of several novel non-normal data distributions that are qualitatively different from the non-normal distributions widely used in existing studies. We found that several under-studied methods give the best performance under specific conditions, but the Satorra-Bentler method remains the most viable method for most situations. © 2017 The British Psychological Society.

  3. Tissue-Negative Transient Ischemic Attack: Is There a Role for Perfusion MRI?

    PubMed

    Grams, Raymond W; Kidwell, Chelsea S; Doshi, Amish H; Drake, Kendra; Becker, Jennifer; Coull, Bruce M; Nael, Kambiz

    2016-07-01

    Approximately 60% of patients with a clinical transient ischemic attack (TIA) do not have DWI evidence of cerebral ischemia. The purpose of this study was to assess the added diagnostic value of perfusion MRI in the evaluation of patients with TIA who have normal DWI findings. The inclusion criteria for this retrospective study were clinical presentation of TIA at admission with a discharge diagnosis of TIA confirmed by a stroke neurologist, MRI including both DWI and perfusion-weighted imaging within 48 hours of symptom onset, and no DWI lesion. Cerebral blood flow (CBF) and time to maximum of the residue function (Tmax) maps were evaluated independently by two observers. Multivariate analysis was used to assess perfusion findings; clinical variables; age, blood pressure, clinical symptoms, diabetes (ABCD2) score; duration of TIA; and time between MRI and onset and resolution of symptoms. Fifty-two patients (33 women, 19 men; age range, 20-95 years) met the inclusion criteria. A regional perfusion abnormality was identified on either Tmax or CBF maps of 12 of 52 (23%) patients. Seven (58%) of the patients with perfusion abnormalities had hypoperfused lesions best detected on Tmax maps; the other five had hyperperfusion best detected on CBF maps. In 11 of 12 (92%) patients with abnormal perfusion MRI findings, the regional perfusion deficit correlated with the initial neurologic deficits. Multivariable analysis revealed no significant difference in demographics, ABCD2 scores, or presentation characteristics between patients with and those without perfusion abnormalities. Perfusion MRI that includes Tmax and CBF parametric maps adds diagnostic value by depicting regions with delayed perfusion or postischemic hyperperfusion in approximately one-fourth of TIA patients who have normal DWI findings.

  4. Two-sample tests and one-way MANOVA for multivariate biomarker data with nondetects.

    PubMed

    Thulin, M

    2016-09-10

    Testing whether the mean vector of a multivariate set of biomarkers differs between several populations is an increasingly common problem in medical research. Biomarker data is often left censored because some measurements fall below the laboratory's detection limit. We investigate how such censoring affects multivariate two-sample and one-way multivariate analysis of variance tests. Type I error rates, power and robustness to increasing censoring are studied, under both normality and non-normality. Parametric tests are found to perform better than non-parametric alternatives, indicating that the current recommendations for analysis of censored multivariate data may have to be revised. Copyright © 2016 John Wiley & Sons, Ltd.

  5. An improved algorithm for the determination of the system parameters of a visual binary by least squares

    NASA Astrophysics Data System (ADS)

    Xu, Yu-Lin

    The problem of computing the orbit of a visual binary from a set of observed positions is reconsidered. It is a least squares adjustment problem if the observational errors follow a bias-free multivariate Gaussian distribution and the covariance matrix of the observations is assumed to be known. The condition equations are constructed to satisfy both the conic section equation and the area theorem, and are nonlinear in both the observations and the adjustment parameters. The traditional least squares algorithm, which employs condition equations that are solved with respect to the uncorrelated observations and are either linear in the adjustment parameters or linearized by developing them in Taylor series to first-order approximation, is inadequate for our orbit problem. D. C. Brown proposed an algorithm solving a more general least squares adjustment problem in which the scalar residual function, however, is still constructed by first-order approximation. Not long ago, a completely general solution was published by W. H. Jefferys, who proposed a rigorous adjustment algorithm for models in which the observations appear nonlinearly in the condition equations and may be correlated, and in which construction of the normal equations and the residual function involves no approximation. This method was successfully applied to our problem. The normal equations were first solved by Newton's scheme. Practical examples show that this converges quickly if the observational errors are sufficiently small and the initial approximate solution is sufficiently accurate, and that it fails otherwise. Newton's method was therefore modified, by combining it with the method of steepest descent and other algorithms, to yield a definitive solution in cases where the normal approach fails. Practical examples show that the modified Newton scheme always leads to a final solution.
The weighting of observations, the orthogonal parameters and the efficiency of a set of adjustment parameters are also considered. The definition of efficiency is revised.

  6. Normalization methods in time series of platelet function assays

    PubMed Central

    Van Poucke, Sven; Zhang, Zhongheng; Roest, Mark; Vukicevic, Milan; Beran, Maud; Lauwereins, Bart; Zheng, Ming-Hua; Henskens, Yvonne; Lancé, Marcus; Marcus, Abraham

    2016-01-01

    Platelet function can be quantitatively assessed by specific assays such as light-transmission aggregometry, multiple-electrode aggregometry measuring the response to adenosine diphosphate (ADP), arachidonic acid, collagen, and thrombin-receptor activating peptide and viscoelastic tests such as rotational thromboelastometry (ROTEM). The task of extracting meaningful statistical and clinical information from high-dimensional data spaces in temporal multivariate clinical data represented in multivariate time series is complex. Building insightful visualizations for multivariate time series demands adequate usage of normalization techniques. In this article, various methods for data normalization (z-transformation, range transformation, proportion transformation, and interquartile range) are presented and visualized, and the approach best suited to platelet function data series is discussed. Normalization was calculated per assay (test) for all time points and per time point for all tests. Interquartile range, range transformation, and z-transformation demonstrated the correlation as calculated by the Spearman correlation test when normalized per assay (test) for all time points. When normalizing per time point for all tests, no correlation could be extracted from the charts, as was the case when using all data as one dataset for normalization. PMID:27428217
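
Three of the normalizations the article compares are simple enough to write down directly. A minimal sketch (the sample values are made up, and the proportion transformation is omitted since its exact definition is not given in the excerpt):

```python
import numpy as np

def z_transform(x):
    """Standardize to zero mean and unit (sample) variance."""
    return (x - x.mean()) / x.std(ddof=1)

def range_transform(x):
    """Rescale linearly onto [0, 1]."""
    return (x - x.min()) / (x.max() - x.min())

def iqr_transform(x):
    """Center on the median and scale by the interquartile range;
    more robust to outliers than the z-transformation."""
    q1, q3 = np.percentile(x, [25, 75])
    return (x - np.median(x)) / (q3 - q1)

x = np.array([12.0, 15.0, 14.0, 10.0, 18.0, 55.0])   # one outlying value
z = z_transform(x)
r = range_transform(x)
q = iqr_transform(x)
```

All three are monotone transformations, which is why rank-based measures such as the Spearman correlation survive them.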

  7. Regional magnetic resonance imaging measures for multivariate analysis in Alzheimer's disease and mild cognitive impairment.

    PubMed

    Westman, Eric; Aguilar, Carlos; Muehlboeck, J-Sebastian; Simmons, Andrew

    2013-01-01

    Automated structural magnetic resonance imaging (MRI) processing pipelines are gaining popularity for Alzheimer's disease (AD) research. They generate regional volumes, cortical thickness measures and other measures, which can be used as input for multivariate analysis. It is not clear which combination of measures and normalization approach are most useful for AD classification and to predict mild cognitive impairment (MCI) conversion. The current study includes MRI scans from 699 subjects [AD, MCI and controls (CTL)] from the Alzheimer's Disease Neuroimaging Initiative (ADNI). The Freesurfer pipeline was used to generate regional volume, cortical thickness, gray matter volume, surface area, mean curvature, Gaussian curvature, folding index and curvature index measures. 259 variables were used for orthogonal partial least squares to latent structures (OPLS) multivariate analysis. Normalization approaches were explored and the optimal combination of measures determined. Results indicate that cortical thickness measures should not be normalized, while volumes should probably be normalized by intracranial volume (ICV). Combining regional cortical thickness measures (not normalized) with cortical and subcortical volumes (normalized with ICV) using OPLS gave a prediction accuracy of 91.5% when distinguishing AD versus CTL. This model prospectively predicted future decline from MCI to AD with 75.9% of converters correctly classified. Normalization strategy did not have a significant effect on the accuracies of multivariate models containing multiple MRI measures for this large dataset. The appropriate choice of input for multivariate analysis in AD and MCI is of great importance. The results support the use of un-normalized cortical thickness measures and volumes normalized by ICV.

  8. NONPARAMETRIC MANOVA APPROACHES FOR NON-NORMAL MULTIVARIATE OUTCOMES WITH MISSING VALUES

    PubMed Central

    He, Fanyin; Mazumdar, Sati; Tang, Gong; Bhatia, Triptish; Anderson, Stewart J.; Dew, Mary Amanda; Krafty, Robert; Nimgaonkar, Vishwajit; Deshpande, Smita; Hall, Martica; Reynolds, Charles F.

    2017-01-01

    Between-group comparisons often entail many correlated response variables. The multivariate linear model, with its assumption of multivariate normality, is the accepted standard tool for these tests. When this assumption is violated, the nonparametric multivariate Kruskal-Wallis (MKW) test is frequently used. However, this test requires complete cases with no missing values in response variables. Deletion of cases with missing values likely leads to inefficient statistical inference. Here we extend the MKW test to retain information from partially-observed cases. Results of simulated studies and analysis of real data show that the proposed method provides adequate coverage and superior power to complete-case analyses. PMID:29416225

  9. Empirical performance of the multivariate normal universal portfolio

    NASA Astrophysics Data System (ADS)

    Tan, Choon Peng; Pang, Sook Theng

    2013-09-01

    Universal portfolios generated by the multivariate normal distribution are studied with emphasis on the case where variables are dependent, namely, the covariance matrix is not diagonal. The moving-order multivariate normal universal portfolio requires very long implementation time and large computer memory in its implementation. With the objective of reducing memory and implementation time, the finite-order universal portfolio is introduced. Some stock-price data sets are selected from the local stock exchange and the finite-order universal portfolio is run on the data sets, for small finite order. Empirically, it is shown that the portfolio can outperform the moving-order Dirichlet universal portfolio of Cover and Ordentlich [2] for certain parameters in the selected data sets.

  10. Control-group feature normalization for multivariate pattern analysis of structural MRI data using the support vector machine.

    PubMed

    Linn, Kristin A; Gaonkar, Bilwaj; Satterthwaite, Theodore D; Doshi, Jimit; Davatzikos, Christos; Shinohara, Russell T

    2016-05-15

Normalization of feature vector values is a common practice in machine learning. Generally, each feature is either scaled to the unit hypercube or standardized to zero mean and unit variance. Classification decisions based on support vector machines (SVMs) or other methods are sensitive to the specific normalization applied to the features. In the context of multivariate pattern analysis using neuroimaging data, standardization effectively up- and down-weights features based on their individual variability. Because the standard approach uses the entire data set to guide the normalization, it utilizes the total variability of these features. This total variation is inevitably dependent on the amount of marginal separation between groups. Thus, such a normalization may attenuate the separability of the data in high-dimensional space. In this work we propose an alternative approach that uses an estimate of the control-group standard deviation to normalize features before training. We study our proposed approach in the context of group classification using structural MRI data. We show that control-based normalization leads to better reproducibility of estimated multivariate disease patterns and improves classifier performance in many cases. Copyright © 2016 Elsevier Inc. All rights reserved.
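A minimal numpy sketch of the control-based scaling step (the SVM training itself is omitted; `control_label` and the synthetic data are illustrative assumptions, not from the paper):

```python
import numpy as np

def control_normalize(X, y, control_label=0):
    """Center and scale every feature using the mean/SD estimated
    from the control group only, then apply to all subjects."""
    ctrl = X[y == control_label]
    mu = ctrl.mean(axis=0)
    sd = ctrl.std(axis=0, ddof=1)
    return (X - mu) / sd

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (40, 5)),     # controls
               rng.normal(0.8, 1.0, (40, 5))])    # patients (shifted means)
y = np.repeat([0, 1], 40)
Xn = control_normalize(X, y)
```

Unlike whole-sample standardization, the patient group's separation from controls is not absorbed into the scale estimate, so group differences survive normalization.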

  11. The classification of secondary colorectal liver cancer in human biopsy samples using angular dispersive x-ray diffraction and multivariate analysis

    NASA Astrophysics Data System (ADS)

    Theodorakou, Chrysoula; Farquharson, Michael J.

    2009-08-01

The motivation behind this study is to assess whether angular dispersive x-ray diffraction (ADXRD) data, processed using multivariate analysis techniques, can be used for classifying secondary colorectal liver cancer tissue and normal surrounding liver tissue in human liver biopsy samples. The ADXRD profiles from a total of 60 samples of normal liver tissue and colorectal liver metastases were measured using a synchrotron radiation source. The data for 56 samples were analysed using nonlinear peak-fitting software. Four peaks were fitted to all of the ADXRD profiles, and the amplitude, the area, and the amplitude and area ratios for three of the four peaks were calculated and used for the statistical and multivariate analysis. The statistical analysis showed significant differences in all the peak-fitting parameters and ratios between the normal and diseased tissue groups. The technique of soft independent modelling of class analogy (SIMCA) was used to classify normal liver tissue and colorectal liver metastases, resulting in 67% of the normal tissue samples and 60% of the secondary colorectal liver tissue samples being classified correctly. This study has shown that the ADXRD data of normal and secondary colorectal liver cancer are statistically different and that x-ray diffraction data analysed using multivariate analysis have the potential to be used as a method of tissue classification.

  12. Multivariate moment closure techniques for stochastic kinetic models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lakatos, Eszter, E-mail: e.lakatos13@imperial.ac.uk; Ale, Angelique; Kirk, Paul D. W.

    2015-09-07

Stochastic effects dominate many chemical and biochemical processes. Their analysis, however, can be computationally prohibitively expensive, and a range of approximation schemes have been proposed to lighten the computational burden. These, notably the increasingly popular linear noise approximation and the more general moment expansion methods, perform well for many dynamical regimes, especially linear systems. At higher levels of nonlinearity, an interplay between the nonlinearities and the stochastic dynamics arises, which is much harder to capture correctly by such approximations to the true stochastic process. Moment-closure approaches promise to address this problem by capturing higher-order terms of the temporally evolving probability distribution. Here, we develop a set of multivariate moment closures that allows us to describe the stochastic dynamics of nonlinear systems. Multivariate closure captures the way that correlations between different molecular species, induced by the reaction dynamics, interact with stochastic effects. We use multivariate Gaussian, gamma, and lognormal closures and illustrate their use in the context of two models that have proved challenging to previous attempts at approximating stochastic dynamics: oscillations in p53 and Hes1. In addition, we consider a larger system, Erk-mediated mitogen-activated protein kinase signalling, where conventional stochastic simulation approaches incur unacceptably high computational costs.
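As a concrete instance of the closures the abstract describes, the multivariate Gaussian closure sets all third central moments to zero, which expresses each third-order moment through the means \(\mu_i\) and covariances \(\Sigma_{ij}\) alone (standard result; the notation is assumed here, not taken from the paper):

```latex
% Gaussian closure: E[(x_i-\mu_i)(x_j-\mu_j)(x_k-\mu_k)] = 0 rearranges to
E[x_i x_j x_k]
  = \mu_i \Sigma_{jk} + \mu_j \Sigma_{ik} + \mu_k \Sigma_{ij}
  + \mu_i \mu_j \mu_k .
```

Any third-order moment appearing in the moment equations can then be replaced by this expression, closing the system at second order; the gamma and lognormal closures play the same role with different moment relations.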

  13. Statistical analysis of multivariate atmospheric variables. [cloud cover

    NASA Technical Reports Server (NTRS)

    Tubbs, J. D.

    1979-01-01

    Topics covered include: (1) estimation in discrete multivariate distributions; (2) a procedure to predict cloud cover frequencies in the bivariate case; (3) a program to compute conditional bivariate normal parameters; (4) the transformation of nonnormal multivariate to near-normal; (5) test of fit for the extreme value distribution based upon the generalized minimum chi-square; (6) test of fit for continuous distributions based upon the generalized minimum chi-square; (7) effect of correlated observations on confidence sets based upon chi-square statistics; and (8) generation of random variates from specified distributions.

  14. Multivariate test power approximations for balanced linear mixed models in studies with missing data.

    PubMed

    Ringham, Brandy M; Kreidler, Sarah M; Muller, Keith E; Glueck, Deborah H

    2016-07-30

Multilevel and longitudinal studies are frequently subject to missing data. For example, biomarker studies for oral cancer may involve multiple assays for each participant. Assays may fail, resulting in missing data values that can be assumed to be missing completely at random. Catellier and Muller proposed a data analytic technique to account for data missing at random in multilevel and longitudinal studies. They suggested modifying the degrees of freedom for both the Hotelling-Lawley trace F statistic and its null case reference distribution. We propose parallel adjustments to approximate power for this multivariate test in studies with missing data. The power approximations use a modified non-central F statistic, which is a function of (i) the expected number of complete cases, (ii) the expected number of non-missing pairs of responses, or (iii) the trimmed sample size, which is the planned sample size reduced by the anticipated proportion of missing data. The accuracy of the method is assessed by comparing the theoretical results to the Monte Carlo simulated power for the Catellier and Muller multivariate test. Over all experimental conditions, the closest approximation to the empirical power of the Catellier and Muller multivariate test is obtained by adjusting power calculations with the expected number of complete cases. The utility of the method is demonstrated with a multivariate power analysis for a hypothetical oral cancer biomarkers study. We describe how to implement the method using standard, commercially available software products and give example code. Copyright © 2015 John Wiley & Sons, Ltd.

  15. On Some Multiple Decision Problems

    DTIC Science & Technology

    1976-08-01

parameter space. Some recent results in the area of subset selection formulation are Gnanadesikan and Gupta [28], Gupta and Studden [43], Gupta and...York, pp. 363-376. [27] Gnanadesikan, M. (1966). Some Selection and Ranking Procedures for Multivariate Normal Populations. Ph.D. Thesis. Dept. of...Statist., Purdue Univ., West Lafayette, Indiana 47907. [28] Gnanadesikan, M. and Gupta, S. S. (1970). Selection procedures for multivariate normal

  16. Using monolingual neuropsychological test norms with bilingual Hispanic americans: application of an individual comparison standard.

    PubMed

    Gasquoine, Philip Gerard; Gonzalez, Cassandra Dayanira

    2012-05-01

Conventional neuropsychological norms developed for monolinguals likely overestimate normal performance in bilinguals on language but not visual-perceptual format tests. This was studied by comparing neuropsychological false-positive rates using the 50th percentile of conventional norms and individual comparison standards (Picture Vocabulary or Matrix Reasoning scores) as estimates of preexisting neuropsychological skill level against the number expected from the normal distribution for a consecutive sample of 56 neurologically intact, bilingual Hispanic Americans. Participants were tested in separate sessions in Spanish and English, in counterbalanced order, on La Bateria Neuropsicologica and the original English-language tests on which this battery was based. For language format measures, repeated-measures multivariate analysis of variance showed that individual estimates of preexisting skill level in English generated the mean number of false positives closest to that expected from the normal distribution, whereas the 50th percentile of conventional English-language norms did the same for visual-perceptual format measures. When using conventional Spanish or English monolingual norms for language format neuropsychological measures with bilingual Hispanic Americans, individual estimates of preexisting skill level are recommended over the 50th percentile.

  17. A Robust Bayesian Approach for Structural Equation Models with Missing Data

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Xia, Ye-Mao

    2008-01-01

    In this paper, normal/independent distributions, including but not limited to the multivariate t distribution, the multivariate contaminated distribution, and the multivariate slash distribution, are used to develop a robust Bayesian approach for analyzing structural equation models with complete or missing data. In the context of a nonlinear…

  18. Comparative Robustness of Recent Methods for Analyzing Multivariate Repeated Measures Designs

    ERIC Educational Resources Information Center

    Seco, Guillermo Vallejo; Gras, Jaime Arnau; Garcia, Manuel Ato

    2007-01-01

    This study evaluated the robustness of two recent methods for analyzing multivariate repeated measures when the assumptions of covariance homogeneity and multivariate normality are violated. Specifically, the authors' work compares the performance of the modified Brown-Forsythe (MBF) procedure and the mixed-model procedure adjusted by the…

  19. DENBRAN: A basic program for a significance test for multivariate normality of clusters from branching patterns in dendrograms

    NASA Astrophysics Data System (ADS)

    Sneath, P. H. A.

A BASIC program is presented for significance tests to determine whether a dendrogram is derived from clustering of points that belong to a single multivariate normal distribution. The significance tests are based on statistics of the Kolmogorov-Smirnov type, obtained by comparing the observed cumulative graph of branch levels with a graph for the hypothesis of multivariate normality. The program also permits testing whether the dendrogram could be from a cluster of lower dimensionality due to character correlations. The program makes provision for three similarity coefficients: (1) Euclidean distances, (2) squared Euclidean distances, and (3) Simple Matching Coefficients; and for five cluster methods: (1) WPGMA, (2) UPGMA, (3) Single Linkage (or Minimum Spanning Trees), (4) Complete Linkage, and (5) Ward's Increase in Sums of Squares. The program is entitled DENBRAN.

  20. Bayesian inference for multivariate meta-analysis Box-Cox transformation models for individual patient data with applications to evaluation of cholesterol lowering drugs

    PubMed Central

    Kim, Sungduk; Chen, Ming-Hui; Ibrahim, Joseph G.; Shah, Arvind K.; Lin, Jianxin

    2013-01-01

    In this paper, we propose a class of Box-Cox transformation regression models with multidimensional random effects for analyzing multivariate responses for individual patient data (IPD) in meta-analysis. Our modeling formulation uses a multivariate normal response meta-analysis model with multivariate random effects, in which each response is allowed to have its own Box-Cox transformation. Prior distributions are specified for the Box-Cox transformation parameters as well as the regression coefficients in this complex model, and the Deviance Information Criterion (DIC) is used to select the best transformation model. Since the model is quite complex, a novel Monte Carlo Markov chain (MCMC) sampling scheme is developed to sample from the joint posterior of the parameters. This model is motivated by a very rich dataset comprising 26 clinical trials involving cholesterol lowering drugs where the goal is to jointly model the three dimensional response consisting of Low Density Lipoprotein Cholesterol (LDL-C), High Density Lipoprotein Cholesterol (HDL-C), and Triglycerides (TG) (LDL-C, HDL-C, TG). Since the joint distribution of (LDL-C, HDL-C, TG) is not multivariate normal and in fact quite skewed, a Box-Cox transformation is needed to achieve normality. In the clinical literature, these three variables are usually analyzed univariately: however, a multivariate approach would be more appropriate since these variables are correlated with each other. A detailed analysis of these data is carried out using the proposed methodology. PMID:23580436

  1. Bayesian inference for multivariate meta-analysis Box-Cox transformation models for individual patient data with applications to evaluation of cholesterol-lowering drugs.

    PubMed

    Kim, Sungduk; Chen, Ming-Hui; Ibrahim, Joseph G; Shah, Arvind K; Lin, Jianxin

    2013-10-15

    In this paper, we propose a class of Box-Cox transformation regression models with multidimensional random effects for analyzing multivariate responses for individual patient data in meta-analysis. Our modeling formulation uses a multivariate normal response meta-analysis model with multivariate random effects, in which each response is allowed to have its own Box-Cox transformation. Prior distributions are specified for the Box-Cox transformation parameters as well as the regression coefficients in this complex model, and the deviance information criterion is used to select the best transformation model. Because the model is quite complex, we develop a novel Monte Carlo Markov chain sampling scheme to sample from the joint posterior of the parameters. This model is motivated by a very rich dataset comprising 26 clinical trials involving cholesterol-lowering drugs where the goal is to jointly model the three-dimensional response consisting of low density lipoprotein cholesterol (LDL-C), high density lipoprotein cholesterol (HDL-C), and triglycerides (TG) (LDL-C, HDL-C, TG). Because the joint distribution of (LDL-C, HDL-C, TG) is not multivariate normal and in fact quite skewed, a Box-Cox transformation is needed to achieve normality. In the clinical literature, these three variables are usually analyzed univariately; however, a multivariate approach would be more appropriate because these variables are correlated with each other. We carry out a detailed analysis of these data by using the proposed methodology. Copyright © 2013 John Wiley & Sons, Ltd.
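A brief scipy illustration of the Box-Cox step on a single skewed, TG-like variable (synthetic lognormal data, purely illustrative; the paper estimates the transformation parameters jointly within its Bayesian model rather than by the univariate MLE used here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tg = rng.lognormal(mean=5.0, sigma=0.5, size=500)   # right-skewed, TG-like values

tg_bc, lam = stats.boxcox(tg)   # lambda chosen by maximum likelihood
skew_before = stats.skew(tg)
skew_after = stats.skew(tg_bc)
```

For lognormal data the fitted lambda comes out near 0, i.e. essentially a log transform, and the skewness of the transformed variable is close to zero.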

  2. On measures of association among genetic variables

    PubMed Central

    Gianola, Daniel; Manfredi, Eduardo; Simianer, Henner

    2012-01-01

    Summary Systems involving many variables are important in population and quantitative genetics, for example, in multi-trait prediction of breeding values and in exploration of multi-locus associations. We studied departures of the joint distribution of sets of genetic variables from independence. New measures of association based on notions of statistical distance between distributions are presented. These are more general than correlations, which are pairwise measures, and lack a clear interpretation beyond the bivariate normal distribution. Our measures are based on logarithmic (Kullback-Leibler) and on relative ‘distances’ between distributions. Indexes of association are developed and illustrated for quantitative genetics settings in which the joint distribution of the variables is either multivariate normal or multivariate-t, and we show how the indexes can be used to study linkage disequilibrium in a two-locus system with multiple alleles and present applications to systems of correlated beta distributions. Two multivariate beta and multivariate beta-binomial processes are examined, and new distributions are introduced: the GMS-Sarmanov multivariate beta and its beta-binomial counterpart. PMID:22742500
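For the multivariate normal case mentioned in the abstract, the Kullback-Leibler divergence has a closed form; a minimal numpy sketch (the notation is assumed, not taken from the paper):

```python
import numpy as np

def kl_mvn(mu0, S0, mu1, S1):
    """Closed-form KL divergence KL( N(mu0,S0) || N(mu1,S1) )."""
    mu0, mu1 = np.asarray(mu0, float), np.asarray(mu1, float)
    k = mu0.size
    S1inv = np.linalg.inv(S1)
    d = mu1 - mu0
    return 0.5 * (np.trace(S1inv @ S0) + d @ S1inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))
```

Unlike a pairwise correlation, this single number summarizes the discrepancy between whole joint distributions, which is the sense in which such distance-based indexes generalize correlation.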

  3. Deterministic annealing for density estimation by multivariate normal mixtures

    NASA Astrophysics Data System (ADS)

    Kloppenburg, Martin; Tavan, Paul

    1997-03-01

    An approach to maximum-likelihood density estimation by mixtures of multivariate normal distributions for large high-dimensional data sets is presented. Conventionally that problem is tackled by notoriously unstable expectation-maximization (EM) algorithms. We remove these instabilities by the introduction of soft constraints, enabling deterministic annealing. Our developments are motivated by the proof that algorithmically stable fuzzy clustering methods that are derived from statistical physics analogs are special cases of EM procedures.

  4. Atrial Electrogram Fractionation Distribution before and after Pulmonary Vein Isolation in Human Persistent Atrial Fibrillation-A Retrospective Multivariate Statistical Analysis.

    PubMed

    Almeida, Tiago P; Chu, Gavin S; Li, Xin; Dastagir, Nawshin; Tuan, Jiun H; Stafford, Peter J; Schlindwein, Fernando S; Ng, G André

    2017-01-01

Purpose: Complex fractionated atrial electrogram (CFAE)-guided ablation after pulmonary vein isolation (PVI) has been used for persistent atrial fibrillation (persAF) therapy. This strategy has shown suboptimal outcomes due to, among other factors, undetected changes in the atrial tissue following PVI. In the present work, we investigate CFAE distribution before and after PVI in patients with persAF using a multivariate statistical model. Methods: 207 pairs of atrial electrograms (AEGs) were collected before and after PVI, respectively, from corresponding LA regions in 18 persAF patients. Twelve attributes were measured from the AEGs before and after PVI. Statistical models based on multivariate analysis of variance (MANOVA) and linear discriminant analysis (LDA) were used to characterize the atrial regions and AEGs. Results: PVI significantly reduced CFAEs in the LA (70 vs. 40%; P < 0.0001). Four types of LA regions were identified, based on the AEG characteristics: (i) fractionated before PVI that remained fractionated after PVI (31% of the collected points); (ii) fractionated that converted to normal (39%); (iii) normal prior to PVI that became fractionated (9%); and (iv) normal that remained normal (21%). Individually, the attributes failed to distinguish these LA regions, but multivariate statistical models were effective in their discrimination (P < 0.0001). Conclusion: Our results reveal that some LA regions are resistant to PVI, while others are affected by it. Although traditional methods were unable to identify these different regions, the proposed multivariate statistical model discriminated LA regions resistant to PVI from those affected by it without prior ablation information.

  5. Spatial patterns of brain atrophy in MCI patients, identified via high-dimensional pattern classification, predict subsequent cognitive decline

    PubMed Central

    Fan, Yong; Batmanghelich, Nematollah; Clark, Chris M.; Davatzikos, Christos

    2010-01-01

Spatial patterns of brain atrophy in mild cognitive impairment (MCI) and Alzheimer’s disease (AD) were measured via methods of computational neuroanatomy. These patterns were spatially complex and involved many brain regions. In addition to the hippocampus and the medial temporal lobe gray matter, a number of other regions displayed significant atrophy, including orbitofrontal and medial-prefrontal grey matter, cingulate (mainly posterior), insula, uncus, and temporal lobe white matter. Approximately 2/3 of the MCI group presented patterns of atrophy that overlapped with AD, whereas the remaining 1/3 overlapped with cognitively normal individuals, thereby indicating that some, but not all, MCI patients in this cohort had significant and extensive brain atrophy. Importantly, the group with AD-like patterns presented a much higher rate of MMSE decline in follow-up visits; conversely, pattern classification provided relatively high classification accuracy (87%) for the individuals that presented relatively higher MMSE decline within a year from baseline. High-dimensional pattern classification, a nonlinear multivariate analysis, provided measures of structural abnormality that can potentially be useful for individual patient classification, as well as for predicting progression and examining multivariate relationships in group analyses. PMID:18053747

  6. Asymptotic Distribution of the Likelihood Ratio Test Statistic for Sphericity of Complex Multivariate Normal Distribution.

    DTIC Science & Technology

    1981-08-01

RATIO TEST STATISTIC FOR SPHERICITY OF COMPLEX MULTIVARIATE NORMAL DISTRIBUTION* C. Fang P. R. Krishnaiah B. N. Nagarsenker** August 1981 Technical...and their applications in time series, the reader is referred to Krishnaiah (1976). Motivated by the applications in the area of inference on multiple...for practical purposes. Here, we note that Krishnaiah, Lee and Chang (1976) approximated the null distribution of certain power of the likeli

  7. Multivariate meta-analysis: a robust approach based on the theory of U-statistic.

    PubMed

    Ma, Yan; Mazumdar, Madhu

    2011-10-30

Meta-analysis is the methodology for combining findings from similar research studies asking the same question. When the question of interest involves multiple outcomes, multivariate meta-analysis is used to synthesize the outcomes simultaneously while taking into account the correlation between them. Likelihood-based approaches, in particular the restricted maximum likelihood (REML) method, are commonly utilized in this context. REML assumes a multivariate normal distribution for the random-effects model. This assumption is difficult to verify, especially for meta-analyses with a small number of component studies. The use of REML also requires iterative estimation of the parameters, which demands moderately high computation time, especially when the dimension of the outcomes is large. A multivariate method of moments (MMM) is available and has been shown to perform as well as REML. However, there is a lack of information on the performance of these two methods when the true data distribution is far from normality. In this paper, we propose a new nonparametric and non-iterative method for multivariate meta-analysis based on the theory of U-statistics and compare the properties of these three procedures under both normal and skewed data through simulation studies. It is shown that the effect of a non-normal data distribution on the REML estimates is marginal and that the estimates from the MMM and U-statistic-based approaches are very similar. Therefore, we conclude that for performing multivariate meta-analysis, the U-statistic estimation procedure is a viable alternative to REML and MMM. All three methods are easy to implement, as illustrated by their application to data from two published meta-analyses in the fields of hip fracture and periodontal disease. We discuss ideas for future research based on U-statistics for testing the significance of between-study heterogeneity and for extending the work to the meta-regression setting. Copyright © 2011 John Wiley & Sons, Ltd.

  8. Bayesian Estimation of Multivariate Latent Regression Models: Gauss versus Laplace

    ERIC Educational Resources Information Center

    Culpepper, Steven Andrew; Park, Trevor

    2017-01-01

    A latent multivariate regression model is developed that employs a generalized asymmetric Laplace (GAL) prior distribution for regression coefficients. The model is designed for high-dimensional applications where an approximate sparsity condition is satisfied, such that many regression coefficients are near zero after accounting for all the model…

  9. Opium addiction as an independent risk factor for coronary microvascular dysfunction: A case-control study of 250 consecutive patients with slow-flow angina.

    PubMed

    Esmaeili Nadimi, Ali; Pour Amiri, Farah; Sheikh Fathollahi, Mahmood; Hassanshahi, Gholamhossien; Ahmadi, Zahra; Sayadi, Ahmad Reza

    2016-09-15

Approximately 20% to 30% of patients who undergo coronary angiography for assessment of typical cardiac chest pain display microvascular coronary dysfunction (MCD). This study aimed to determine potential relationships between baseline clinical characteristics and likelihood of MCD diagnosis in a large group of patients with stable angina symptoms, a positive exercise test and angiographically normal epicardial coronary arteries. This cross-sectional study included 250 Iranian patients with documented evidence of cardiac ischemia on exercise testing, a class I or II indication for coronary angiography, and either: (1) angiographically normal coronary arteries and a diagnosis of MCD with slow-flow phenomenon, or (2) a normal angiogram and no evidence of MCD. All patients completed a questionnaire designed to capture key data including clinical demographics, past medical history, and social factors. Data were evaluated using single and multivariable logistic regression models to identify potential individual patient factors that might help to predict a diagnosis of MCD. 125 (11.2% of total) patients were subsequently diagnosed with MCD, and 125 consecutive control subjects were selected for comparison. The mean age was similar between the two groups (52.38 vs. 53.26 years, p=ns), but there was a higher proportion of men in the study group compared to control (42.4 vs. 27.2%, p=0.012). No significant relationships were observed between traditional cardiovascular risk factors (diabetes, hypertension, and dyslipidemia) or body mass index (BMI) and likelihood of MCD diagnosis. However, opium addiction was found to be an independent predictor of MCD in single and multivariable logistic regression models (OR=3.575, 95%CI: 1.418-9.016; p=0.0069). We observed a significant relationship between opium addiction and microvascular angina. This novel finding provides a potential mechanistic insight into the pathogenesis of MCD with slow-flow phenomenon. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. Probability distributions for multimeric systems.

    PubMed

    Albert, Jaroslav; Rooman, Marianne

    2016-01-01

We propose a fast and accurate method of obtaining the equilibrium mono-modal joint probability distributions for multimeric systems. The method requires only two assumptions: the copy number of all species of molecule may be treated as continuous; and the probability density functions (pdf) are well approximated by multivariate skew normal distributions (MSND). Starting from the master equation, we convert the problem into a set of equations for the statistical moments, which are then expressed in terms of the parameters intrinsic to the MSND. Using an optimization package in Mathematica, we minimize a Euclidean distance function comprising the sum of squared differences between the left- and right-hand sides of these equations. Comparison of results obtained via our method with those rendered by the Gillespie algorithm demonstrates our method to be highly accurate as well as efficient.

  11. Principal component analysis-based pattern analysis of dose-volume histograms and influence on rectal toxicity.

    PubMed

    Söhn, Matthias; Alber, Markus; Yan, Di

    2007-09-01

The variability of dose-volume histogram (DVH) shapes in a patient population can be quantified using principal component analysis (PCA). We applied this to rectal DVHs of prostate cancer patients and investigated the correlation of the PCA parameters with late bleeding. PCA was applied to the rectal wall DVHs of 262 patients, who had been treated with a four-field box, conformal adaptive radiotherapy technique. The correlated changes in the DVH pattern were revealed as "eigenmodes," which were ordered by their importance to represent data set variability. Each DVH is uniquely characterized by its principal components (PCs). The correlation of the first three PCs and chronic rectal bleeding of Grade 2 or greater was investigated with uni- and multivariate logistic regression analyses. Rectal wall DVHs in four-field conformal RT can primarily be represented by the first two or three PCs, which describe approximately 94% or 96% of the DVH shape variability, respectively. The first eigenmode models the total irradiated rectal volume; thus, PC1 correlates to the mean dose. Mode 2 describes the interpatient differences of the relative rectal volume in the two- or four-field overlap region. Mode 3 reveals correlations of volumes with intermediate doses (approximately 40-45 Gy) and volumes with doses >70 Gy; thus, PC3 is associated with the maximal dose. According to univariate logistic regression analysis, only PC2 correlated significantly with toxicity. However, multivariate logistic regression analysis with the first two or three PCs revealed an increased probability of bleeding for DVHs with more than one large PC. PCA can reveal the correlation structure of DVHs for a patient population as imposed by the treatment technique and provide information about its relationship to toxicity. It proves useful for augmenting normal tissue complication probability modeling approaches.
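A minimal numpy sketch of the PCA step on a DVH matrix (rows are patients, columns are dose bins; the synthetic monotone curves below are illustrative, not patient data):

```python
import numpy as np

rng = np.random.default_rng(2)
dose = np.linspace(0.0, 80.0, 81)                      # dose bins in Gy
scale = rng.uniform(20.0, 60.0, size=50)               # one shape parameter per patient
dvh = 100.0 * np.exp(-dose[None, :] / scale[:, None])  # 50 synthetic cumulative DVHs (%)

centered = dvh - dvh.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)        # variance fraction per eigenmode
pcs = centered @ Vt[:3].T              # each patient's first three PC scores
```

The rows of Vt are the eigenmodes; because these smooth curves vary along essentially one degree of freedom, the first few modes capture nearly all of the shape variability, mirroring the 94-96% figure reported for the rectal DVHs.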

  12. Comparative multivariate analyses of transient otoacoustic emissions and distorsion products in normal and impaired hearing.

    PubMed

    Stamate, Mirela Cristina; Todor, Nicolae; Cosgarea, Marcel

    2015-01-01

The clinical utility of otoacoustic emissions as a noninvasive objective test of cochlear function has been long studied. Both transient otoacoustic emissions and distorsion products can be used to identify hearing loss, but to what extent they can be used as predictors for hearing loss is still debated. Most studies agree that multivariate analyses have better test performances than univariate analyses. The aim of the study was to determine the performance of transient otoacoustic emissions and distorsion products in identifying normal and impaired hearing, using the pure tone audiogram as a gold standard procedure and different multivariate statistical approaches. The study included 105 adult subjects with normal hearing and hearing loss who underwent the same test battery: pure-tone audiometry, tympanometry, otoacoustic emission tests. We chose logistic regression as a multivariate statistical technique. Three logistic regression models were developed to characterize the relations between different risk factors (age, sex, tinnitus, demographic features, cochlear status defined by otoacoustic emissions) and hearing status defined by pure-tone audiometry. The multivariate analyses allow the calculation of the logistic score, which is a combination of the inputs, weighted by coefficients, calculated within the analyses. The accuracy of each model was assessed using receiver operating characteristic curve analysis. We used the logistic score to generate receiver operating characteristic curves and to estimate the areas under the curves in order to compare different multivariate analyses. We compared the performance of each otoacoustic emission (transient, distorsion product) using three different multivariate analyses for each ear, when multi-frequency gold standards were used. We demonstrated that all multivariate analyses provided high values of the area under the curve proving the performance of the otoacoustic emissions.
Each otoacoustic emission test presented high values of area under the curve, suggesting that implementing a multivariate approach to evaluate the performances of each otoacoustic emission test would serve to increase the accuracy in identifying the normal and impaired ears. We encountered the highest area under the curve value for the combined multivariate analysis suggesting that both otoacoustic emission tests should be used in assessing hearing status. Our multivariate analyses revealed that age is a constant predictor factor of the auditory status for both ears, but the presence of tinnitus was the most important predictor for the hearing level, only for the left ear. Age presented similar coefficients, but tinnitus coefficients, by their high value, produced the highest variations of the logistic scores, only for the left ear group, thus increasing the risk of hearing loss. We did not find gender differences between ears for any otoacoustic emission tests, but studies still debate this question as the results are contradictory. Neither gender, nor environment origin had any predictive value for the hearing status, according to the results of our study. Like any other audiological test, using otoacoustic emissions to identify hearing loss is not without error. Even when applying multivariate analysis, perfect test performance is never achieved. Although most studies demonstrated the benefit of using the multivariate analysis, it has not been incorporated into clinical decisions maybe because of the idiosyncratic nature of multivariate solutions or because of the lack of the validation studies.
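The logistic-score and ROC workflow described above can be sketched in a few lines. This is an illustrative stand-in, not the study's fitted model: the coefficients and the (age, tinnitus) feature values are invented, and the AUC is computed with the Mann-Whitney rank method.

```python
import math

def logistic_score(x, coefs, intercept):
    """Linear predictor passed through the logistic function."""
    z = intercept + sum(c * xi for c, xi in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-z))

def auc(pos_scores, neg_scores):
    """Area under the ROC curve as P(pos > neg); ties count one half."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical model: features are (age, tinnitus present), with made-up
# coefficients, not the study's fitted values.
coefs, intercept = [0.05, 1.2], -3.0
impaired = [(60, 1), (55, 1), (70, 0), (65, 1)]        # illustrative ears
normal_hearing = [(25, 0), (30, 0), (40, 0), (35, 1)]

pos = [logistic_score(x, coefs, intercept) for x in impaired]
neg = [logistic_score(x, coefs, intercept) for x in normal_hearing]
area = auc(pos, neg)
```

Comparing such areas across models (transient, distortion product, combined) is exactly the comparison the abstract describes.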

  13. Comparative multivariate analyses of transient otoacoustic emissions and distorsion products in normal and impaired hearing

    PubMed Central

    Stamate, Mirela Cristina; Todor, Nicolae; Cosgarea, Marcel

    2015-01-01

    Background and aim: The clinical utility of otoacoustic emissions as a noninvasive objective test of cochlear function has long been studied. Both transient otoacoustic emissions and distortion products can be used to identify hearing loss, but the extent to which they can serve as predictors of hearing loss is still debated. Most studies agree that multivariate analyses have better test performance than univariate analyses. The aim of the study was to determine the performance of transient otoacoustic emissions and distortion products in distinguishing normal from impaired hearing, using the pure-tone audiogram as the gold-standard procedure and different multivariate statistical approaches. Methods: The study included 105 adult subjects with normal hearing or hearing loss who underwent the same test battery: pure-tone audiometry, tympanometry, and otoacoustic emission tests. We chose logistic regression as the multivariate statistical technique. Three logistic regression models were developed to characterize the relations between different risk factors (age, sex, tinnitus, demographic features, cochlear status defined by otoacoustic emissions) and hearing status defined by pure-tone audiometry. The multivariate analyses allow the calculation of a logistic score, a combination of the inputs weighted by coefficients estimated within the analysis. The accuracy of each model was assessed using receiver operating characteristic (ROC) curve analysis. We used the logistic score to generate ROC curves and to estimate the areas under the curves in order to compare the different multivariate analyses. Results: We compared the performance of each otoacoustic emission test (transient, distortion product) using three different multivariate analyses for each ear, when multi-frequency gold standards were used. We demonstrated that all multivariate analyses provided high values of the area under the curve, confirming the performance of the otoacoustic emissions.
Each otoacoustic emission test presented high values of the area under the curve, suggesting that implementing a multivariate approach to evaluate the performance of each otoacoustic emission test would increase accuracy in identifying normal and impaired ears. We encountered the highest area-under-the-curve value for the combined multivariate analysis, suggesting that both otoacoustic emission tests should be used in assessing hearing status. Our multivariate analyses revealed that age is a constant predictor of auditory status for both ears, whereas the presence of tinnitus was the most important predictor of hearing level for the left ear only. Age presented similar coefficients across models, but the high tinnitus coefficients produced the largest variations of the logistic scores in the left-ear group, thus increasing the estimated risk of hearing loss. We did not find gender differences between ears for any otoacoustic emission test, though studies still debate this question as the results are contradictory. Neither gender nor environmental origin had any predictive value for hearing status, according to the results of our study. Conclusion: Like any other audiological test, using otoacoustic emissions to identify hearing loss is not without error. Even when applying multivariate analysis, perfect test performance is never achieved. Although most studies demonstrated the benefit of using multivariate analysis, it has not been incorporated into clinical decisions, perhaps because of the idiosyncratic nature of multivariate solutions or the lack of validation studies. PMID:26733749

  14. Interpretability of Multivariate Brain Maps in Linear Brain Decoding: Definition, and Heuristic Quantification in Multivariate Analysis of MEG Time-Locked Effects.

    PubMed

    Kia, Seyed Mostafa; Vega Pons, Sandro; Weisz, Nathan; Passerini, Andrea

    2016-01-01

    Brain decoding is a popular multivariate approach for hypothesis testing in neuroimaging. Linear classifiers are widely employed in the brain decoding paradigm to discriminate among experimental conditions. Then, the derived linear weights are visualized in the form of multivariate brain maps to further study spatio-temporal patterns of underlying neural activities. It is well known that the brain maps derived from weights of linear classifiers are hard to interpret because of high correlations between predictors, low signal to noise ratios, and the high dimensionality of neuroimaging data. Therefore, improving the interpretability of brain decoding approaches is of primary interest in many neuroimaging studies. Despite extensive studies of this type, at present, there is no formal definition for interpretability of multivariate brain maps. As a consequence, there is no quantitative measure for evaluating the interpretability of different brain decoding methods. In this paper, first, we present a theoretical definition of interpretability in brain decoding; we show that the interpretability of multivariate brain maps can be decomposed into their reproducibility and representativeness. Second, as an application of the proposed definition, we exemplify a heuristic for approximating the interpretability in multivariate analysis of evoked magnetoencephalography (MEG) responses. Third, we propose to combine the approximated interpretability and the generalization performance of the brain decoding into a new multi-objective criterion for model selection. Our results, for the simulated and real MEG data, show that optimizing the hyper-parameters of the regularized linear classifier based on the proposed criterion results in more informative multivariate brain maps. 
More importantly, the presented definition provides the theoretical background for quantitative evaluation of interpretability, and hence, facilitates the development of more effective brain decoding algorithms in the future.

  15. Interpretability of Multivariate Brain Maps in Linear Brain Decoding: Definition, and Heuristic Quantification in Multivariate Analysis of MEG Time-Locked Effects

    PubMed Central

    Kia, Seyed Mostafa; Vega Pons, Sandro; Weisz, Nathan; Passerini, Andrea

    2017-01-01

    Brain decoding is a popular multivariate approach for hypothesis testing in neuroimaging. Linear classifiers are widely employed in the brain decoding paradigm to discriminate among experimental conditions. Then, the derived linear weights are visualized in the form of multivariate brain maps to further study spatio-temporal patterns of underlying neural activities. It is well known that the brain maps derived from weights of linear classifiers are hard to interpret because of high correlations between predictors, low signal to noise ratios, and the high dimensionality of neuroimaging data. Therefore, improving the interpretability of brain decoding approaches is of primary interest in many neuroimaging studies. Despite extensive studies of this type, at present, there is no formal definition for interpretability of multivariate brain maps. As a consequence, there is no quantitative measure for evaluating the interpretability of different brain decoding methods. In this paper, first, we present a theoretical definition of interpretability in brain decoding; we show that the interpretability of multivariate brain maps can be decomposed into their reproducibility and representativeness. Second, as an application of the proposed definition, we exemplify a heuristic for approximating the interpretability in multivariate analysis of evoked magnetoencephalography (MEG) responses. Third, we propose to combine the approximated interpretability and the generalization performance of the brain decoding into a new multi-objective criterion for model selection. Our results, for the simulated and real MEG data, show that optimizing the hyper-parameters of the regularized linear classifier based on the proposed criterion results in more informative multivariate brain maps. 
More importantly, the presented definition provides the theoretical background for quantitative evaluation of interpretability, and hence, facilitates the development of more effective brain decoding algorithms in the future. PMID:28167896

  16. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution

    PubMed Central

    Lo, Kenneth

    2011-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components. PMID:22125375
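The Box-Cox transformation at the heart of the proposed mixture model is simple to state: lambda = 0 reduces to the logarithm and lambda = 1 is just a shift. A minimal sketch (the sample values are illustrative):

```python
import math

def box_cox(y, lam):
    """Box-Cox transform of a positive observation y."""
    if lam == 0:
        return math.log(y)           # the lambda -> 0 limit
    return (y ** lam - 1.0) / lam    # lambda = 1 is a shift by -1

# A right-skewed positive sample; lambda = 0 (log) pulls in the long tail,
# which is how the mixture model introduces skewness handling.
sample = [0.5, 1.0, 1.5, 2.0, 10.0]
transformed = [box_cox(y, 0) for y in sample]
```

In the paper's model, lambda is estimated per component alongside the t-distribution parameters within the EM algorithm.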

  17. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution.

    PubMed

    Lo, Kenneth; Gottardo, Raphael

    2012-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components.

  18. Distribution pattern of urine albumin creatinine ratio and the prevalence of high-normal levels in untreated asymptomatic non-diabetic hypertensive patients.

    PubMed

    Ohmaru, Natsuki; Nakatsu, Takaaki; Izumi, Reishi; Mashima, Keiichi; Toki, Misako; Kobayashi, Asako; Ogawa, Hiroko; Hirohata, Satoshi; Ikeda, Satoru; Kusachi, Shozo

    2011-01-01

    Even high-normal albuminuria is reportedly associated with cardiovascular events. We determined the urine albumin creatinine ratio (UACR) in spot urine samples and analyzed the UACR distribution and the prevalence of high-normal levels. The UACR was determined using immunoturbidimetry in 332 untreated asymptomatic non-diabetic Japanese patients with hypertension and in 69 control subjects. Microalbuminuria and macroalbuminuria were defined as a UACR ≥30 and <300 µg/mg·creatinine and a UACR ≥300 µg/mg·creatinine, respectively. The distribution patterns showed a highly skewed distribution for the lower levels, and a common logarithmic transformation produced a close fit to a Gaussian distribution, with median, 25th and 75th percentile values of 22.6, 13.5 and 48.2 µg/mg·creatinine, respectively. When a high-normal UACR was set at >20 to <30 µg/mg·creatinine, 19.9% (66/332) of the hypertensive patients exhibited a high-normal UACR. Microalbuminuria and macroalbuminuria were observed in 36.1% (120/332) and 2.1% (7/332) of the patients, respectively. UACR was significantly correlated with the systolic and diastolic blood pressures and the pulse pressure. A stepwise multivariate analysis revealed that these pressures, as well as age, were independent factors that increased UACR. The UACR distribution exhibited a highly skewed pattern, with approximately 60% of untreated non-diabetic hypertensive patients exhibiting a high-normal or higher UACR. Both hypertension and age are independent risk factors that increase the UACR. The present study indicated that a considerable percentage of patients require anti-hypertensive drugs with antiproteinuric effects at the start of treatment.
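The common-log transformation and the albuminuria cut-offs described above can be sketched as follows; the UACR values here are invented for illustration, not patient data:

```python
import math
import statistics

# Illustrative spot-urine UACR values (µg/mg creatinine), not patient data.
uacr = [8, 12, 13.5, 18, 22.6, 30, 48.2, 75, 120, 310]

# Common-log transform, as used for the Gaussian fit in the study.
log_uacr = [math.log10(v) for v in uacr]
mu = statistics.mean(log_uacr)
sigma = statistics.stdev(log_uacr)
geometric_mean = 10 ** mu   # back-transformed mean of the logs estimates the median

# The study's cut-offs: microalbuminuria >=30 and <300, macroalbuminuria >=300.
micro = [v for v in uacr if 30 <= v < 300]
macro = [v for v in uacr if v >= 300]
```

Fitting a normal distribution to `log_uacr` rather than `uacr` is exactly the "log-Gaussian" treatment of the skewed distribution the abstract describes.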

  19. Hot spots of multivariate extreme anomalies in Earth observations

    NASA Astrophysics Data System (ADS)

    Flach, M.; Sippel, S.; Bodesheim, P.; Brenning, A.; Denzler, J.; Gans, F.; Guanche, Y.; Reichstein, M.; Rodner, E.; Mahecha, M. D.

    2016-12-01

    Anomalies in Earth observations might indicate data quality issues, extremes, or a change of the underlying processes within a highly multivariate system. Considering the multivariate constellation of variables for extreme detection thus yields crucial additional information over conventional univariate approaches. We highlight areas in which multivariate extreme anomalies are more likely to occur, i.e. hot spots of extremes in global atmospheric Earth observations that impact the biosphere. In addition, we present the year of the most unusual multivariate extreme between 2001 and 2013 and show that these coincide with well-known high-impact extremes. Technically speaking, we account for multivariate extremes by using three algorithms adapted from computer science applications: an ensemble of the k-nearest-neighbours mean distance, kernel density estimation, and a recurrence-based approach. However, the impact of atmospheric extremes on the biosphere might largely depend on what is considered to be normal, i.e. the shape of the mean seasonal cycle and its inter-annual variability. We identify regions with similar mean seasonality by means of dimensionality reduction in order to estimate, in each region, both the 'normal' variance and robust thresholds for detecting the extremes. In addition, we account for challenges like heteroscedasticity in northern latitudes. Apart from hot-spot areas, those anomalies in the atmospheric time series are of particular interest which can only be detected by a multivariate approach and not by a simple univariate one. Such an anomalous constellation of atmospheric variables is of interest if it impacts the biosphere. The multivariate constellation of such an anomalous part of a time series is shown in one case study, indicating that multivariate anomaly detection can provide novel insights into Earth observations.
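One of the three detectors named above, the k-nearest-neighbours mean distance, scores each multivariate observation by the average Euclidean distance to its k closest neighbours (higher = more anomalous). A minimal sketch with illustrative data and k:

```python
import math

def knn_mean_distance(points, k):
    """Anomaly score per point: mean Euclidean distance to its k nearest
    neighbours among the other points (higher = more anomalous)."""
    scores = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        scores.append(sum(dists[:k]) / k)
    return scores

# A tight cluster of ordinary observations plus one multivariate outlier.
obs = [(0.0, 0.1), (0.1, 0.0), (0.0, 0.0), (0.1, 0.1), (5.0, 5.0)]
scores = knn_mean_distance(obs, k=2)
```

The outlier receives by far the largest score, which is the behaviour an ensemble of such detectors exploits for extreme detection.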

  20. Multivariate methods to visualise colour-space and colour discrimination data.

    PubMed

    Hastings, Gareth D; Rubin, Alan

    2015-01-01

    Despite most modern colour spaces treating colour as three-dimensional (3-D), colour data is usually not visualised in 3-D (and two-dimensional (2-D) projection-plane segments and multiple 2-D perspective views are used instead). The objectives of this article are firstly, to introduce a truly 3-D percept of colour space using stereo-pairs, secondly to view colour discrimination data using that platform, and thirdly to apply formal statistics and multivariate methods to analyse the data in 3-D. This is the first demonstration of the software that generated stereo-pairs of RGB colour space, as well as of a new computerised procedure that investigated colour discrimination by measuring colour just noticeable differences (JND). An initial pilot study and thorough investigation of instrument repeatability were performed. Thereafter, to demonstrate the capabilities of the software, five colour-normal and one colour-deficient subject were examined using the JND procedure and multivariate methods of data analysis. Scatter plots of responses were meaningfully examined in 3-D and were useful in evaluating multivariate normality as well as identifying outliers. The extent and direction of the difference between each JND response and the stimulus colour point was calculated and appreciated in 3-D. Ellipsoidal surfaces of constant probability density (distribution ellipsoids) were fitted to response data; the volumes of these ellipsoids appeared useful in differentiating the colour-deficient subject from the colour-normals. Hypothesis tests of variances and covariances showed many statistically significant differences between the results of the colour-deficient subject and those of the colour-normals, while far fewer differences were found when comparing within colour-normals. 
The 3-D visualisation of colour data using stereo-pairs, as well as the statistics and multivariate methods of analysis employed, were found to be unique and useful tools in the representation and study of colour. Many additional studies using these methods along with the JND and other procedures have been identified and will be reported in future publications. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.

  1. Vector wind and vector wind shear models 0 to 27 km altitude for Cape Kennedy, Florida, and Vandenberg AFB, California

    NASA Technical Reports Server (NTRS)

    Smith, O. E.

    1976-01-01

    Techniques are presented for deriving several statistical wind models from the properties of the multivariate normal probability function. Assuming the winds are bivariate normally distributed, then (1) the wind components and conditional wind components are univariate normally distributed, (2) the wind speed is Rayleigh distributed, (3) the conditional distribution of wind speed given a wind direction is Rayleigh distributed, and (4) the frequency of wind direction can be derived. All of these distributions are derived from the five sample parameters of the bivariate normal distribution of wind. By further assuming that the winds at two altitudes are quadrivariate normally distributed, the vector wind shear is bivariate normally distributed and the modulus of the vector wind shear is Rayleigh distributed. The conditional probability of wind component shears given a wind component is normally distributed. Examples of these and other properties of the multivariate normal probability distribution function, as applied to wind data samples from Cape Kennedy, Florida, and Vandenberg AFB, California, are given. A technique to develop a synthetic vector wind profile model of interest to aerospace vehicle applications is presented.
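Property (2) above can be checked by simulation in the zero-mean, equal-variance case: the speed formed from two independent normal wind components is Rayleigh distributed, with median sigma * sqrt(2 ln 2). The sigma value below is illustrative:

```python
import math
import random

random.seed(0)
sigma = 5.0   # common component standard deviation (m/s), illustrative

# Wind speed = modulus of two independent zero-mean normal components.
speeds = sorted(
    math.hypot(random.gauss(0.0, sigma), random.gauss(0.0, sigma))
    for _ in range(20000)
)
empirical_median = speeds[len(speeds) // 2]
rayleigh_median = sigma * math.sqrt(2.0 * math.log(2.0))
```

With nonzero mean winds the speed distribution generalizes to a Rice distribution, which is why the abstract's Rayleigh statements rest on the stated bivariate-normal assumptions.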

  2. Multivariate classification of infrared spectra of cell and tissue samples

    DOEpatents

    Haaland, David M.; Jones, Howland D. T.; Thomas, Edward V.

    1997-01-01

    Multivariate classification techniques are applied to spectra from cell and tissue samples irradiated with infrared radiation to determine whether the samples are normal or abnormal (cancerous). Mid- and near-infrared radiation can be used for in vivo and in vitro classifications using different wavelengths.

  3. Local polynomial estimation of heteroscedasticity in a multivariate linear regression model and its applications in economics.

    PubMed

    Su, Liyun; Zhao, Yanyong; Yan, Tianshun; Li, Fenglan

    2012-01-01

    Multivariate local polynomial fitting is applied to the multivariate linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function; then the coefficients of the regression model are obtained using the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Owing to the non-parametric technique of local polynomial estimation, it is unnecessary to know the form of the heteroscedastic function, so we can improve the estimation precision when the heteroscedastic function is unknown. Furthermore, we verify that the regression coefficients are asymptotically normal based on numerical simulations and normal Q-Q plots of residuals. Finally, the simulation results and the local polynomial estimation of real data indicate that our approach is effective in finite-sample situations.
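The two-stage idea can be sketched for a single regressor, with a crude moving-average variance estimate standing in for the paper's local-polynomial fit (simulated data, illustrative bandwidth):

```python
import random

random.seed(1)
n = 200
x = [i / n for i in range(n)]
# True slope 2, with error variance growing in x (heteroscedastic).
y = [2.0 * xi + random.gauss(0.0, 0.1 + xi) for xi in x]

def wls_slope(x, y, w=None):
    """(Weighted) least-squares slope through the weighted means."""
    w = w if w is not None else [1.0] * len(x)
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    num = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    den = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    return num / den

# Stage 1: ordinary fit, then smooth the squared residuals to estimate
# the variance function (moving average standing in for local polynomials).
b1 = wls_slope(x, y)
res2 = [(yi - b1 * xi) ** 2 for xi, yi in zip(x, y)]
half = 10
var_hat = []
for i in range(n):
    window = res2[max(0, i - half): i + half + 1]
    var_hat.append(sum(window) / len(window))

# Stage 2: refit with weights 1 / estimated variance (generalized LS).
b2 = wls_slope(x, y, w=[1.0 / max(v, 1e-6) for v in var_hat])
```

Note that no parametric form for the variance function is assumed, which is the point the abstract emphasizes.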

  4. Exact Interval Estimation, Power Calculation, and Sample Size Determination in Normal Correlation Analysis

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2006-01-01

    This paper considers the problem of analysis of correlation coefficients from a multivariate normal population. A unified theorem is derived for the regression model with normally distributed explanatory variables and the general results are employed to provide useful expressions for the distributions of simple, multiple, and partial-multiple…

  5. Eigenvalue and eigenvector sensitivity and approximate analysis for repeated eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Hou, Gene J. W.; Kenny, Sean P.

    1991-01-01

    A set of computationally efficient equations for eigenvalue and eigenvector sensitivity analysis is derived, and a method for approximate eigenvalue and eigenvector analysis in the presence of repeated eigenvalues is presented. The method developed for approximate analysis involves a reparameterization of the multivariable structural eigenvalue problem in terms of a single positive-valued parameter. The resulting equations yield first-order approximations of changes in both the eigenvalues and eigenvectors associated with the repeated eigenvalue problem. Examples are given to demonstrate the application of such equations for sensitivity and approximate analysis.

  6. Clustering of the human skeletal muscle fibers using linear programming and angular Hilbertian metrics.

    PubMed

    Neji, Radhouène; Besbes, Ahmed; Komodakis, Nikos; Deux, Jean-François; Maatouk, Mezri; Rahmouni, Alain; Bassez, Guillaume; Fleury, Gilles; Paragios, Nikos

    2009-01-01

    In this paper, we present a manifold clustering method for the classification of fibers obtained from diffusion tensor images (DTI) of the human skeletal muscle. Using a linear programming formulation of prototype-based clustering, we propose a novel fiber classification algorithm over manifolds that circumvents the need to embed the data in low-dimensional spaces and determines the number of clusters automatically. Furthermore, we propose the use of angular Hilbertian metrics between multivariate normal distributions to define a family of distances between tensors that we generalize to fibers. These metrics are used to approximate the geodesic distances over the fiber manifold. We also discuss the case where only geodesic distances to a reduced set of landmark fibers are available. The experimental validation of the method is done using a manually annotated significant dataset of DTI of the calf muscle for healthy and diseased subjects.

  7. Padé approximant for normal stress differences in large-amplitude oscillatory shear flow

    NASA Astrophysics Data System (ADS)

    Poungthong, P.; Saengow, C.; Giacomin, A. J.; Kolitawong, C.; Merger, D.; Wilhelm, M.

    2018-04-01

    Analytical solutions for the normal stress differences in large-amplitude oscillatory shear flow (LAOS), for continuum or molecular models, normally take the inexact form of the first few terms of a series expansion in the shear rate amplitude. Here, we improve the accuracy of these truncated expansions by replacing them with rational functions called Padé approximants. The recent advent of exact solutions in LAOS presents an opportunity to identify accurate and useful Padé approximants. For this identification, we replace the truncated expansion for the corotational Jeffreys fluid with its Padé approximants for the normal stress differences. We uncover the most accurate and useful approximant, the [3,4] approximant, and then test its accuracy against the exact solution [C. Saengow and A. J. Giacomin, "Normal stress differences from Oldroyd 8-constant framework: Exact analytical solution for large-amplitude oscillatory shear flow," Phys. Fluids 29, 121601 (2017)]. We use Ewoldt grids to show the stunning accuracy of our [3,4] approximant in LAOS. We quantify this accuracy with an objective function and then map it onto the Pipkin space. Our two applications illustrate how to use our new approximant reliably. For this, we use the Spriggs relations to generalize our best approximant to multimode, and then, we compare with measurements on molten high-density polyethylene and on dissolved polyisobutylene in isobutylene oligomer.
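The general idea of replacing a truncated expansion with a Padé approximant can be illustrated on a toy series (this is not the paper's [3,4] approximant for the corotational Jeffreys fluid): for f(x) = 1/(1+x), whose series is 1 - x + x^2 - ..., the [1/1] approximant built from the same three coefficients recovers f exactly, while the truncated series fails outside its radius of convergence:

```python
def pade_1_1(c0, c1, c2):
    """[1/1] Padé approximant (a0 + a1*x)/(1 + b1*x) whose Taylor
    expansion matches c0 + c1*x + c2*x**2."""
    b1 = -c2 / c1            # kills the x**2 mismatch
    a0, a1 = c0, c1 + b1 * c0
    return lambda x: (a0 + a1 * x) / (1.0 + b1 * x)

# Series coefficients of f(x) = 1/(1+x): 1, -1, 1.
approx = pade_1_1(1.0, -1.0, 1.0)
truncated = lambda x: 1.0 - x + x * x

x0 = 2.0                     # well outside the series' radius of convergence
exact = 1.0 / (1.0 + x0)
pade_error = abs(approx(x0) - exact)
series_error = abs(truncated(x0) - exact)
```

Rational approximants can capture pole-like behaviour that no polynomial truncation can, which is why the paper's [3,4] approximant extends the useful range of the shear-rate-amplitude expansion.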

  8. Bayesian inference on risk differences: an application to multivariate meta-analysis of adverse events in clinical trials.

    PubMed

    Chen, Yong; Luo, Sheng; Chu, Haitao; Wei, Peng

    2013-05-01

    Multivariate meta-analysis is useful in combining evidence from independent studies which involve several comparisons among groups based on a single outcome. For binary outcomes, the commonly used statistical models for multivariate meta-analysis are multivariate generalized linear mixed effects models, which assume that the risks, after some transformation, follow a multivariate normal distribution with possible correlations. In this article, we consider an alternative model for multivariate meta-analysis in which the risks are modeled by the multivariate beta distribution proposed by Sarmanov (1966). This model has several attractive features compared to the conventional multivariate generalized linear mixed effects models, including a simple likelihood function, no need to specify a link function, and a closed-form expression of the distribution functions for study-specific risk differences. We investigate the finite-sample performance of this model by simulation studies and illustrate its use with an application to multivariate meta-analysis of adverse events of tricyclic antidepressant treatment in clinical trials.

  9. Esophageal cancer detection based on tissue surface-enhanced Raman spectroscopy and multivariate analysis

    NASA Astrophysics Data System (ADS)

    Feng, Shangyuan; Lin, Juqiang; Huang, Zufang; Chen, Guannan; Chen, Weisheng; Wang, Yue; Chen, Rong; Zeng, Haishan

    2013-01-01

    The capability of using silver nanoparticle based near-infrared surface enhanced Raman scattering (SERS) spectroscopy combined with principal component analysis (PCA) and linear discriminate analysis (LDA) to differentiate esophageal cancer tissue from normal tissue was presented. Significant differences in Raman intensities of prominent SERS bands were observed between normal and cancer tissues. PCA-LDA multivariate analysis of the measured tissue SERS spectra achieved diagnostic sensitivity of 90.9% and specificity of 97.8%. This exploratory study demonstrated great potential for developing label-free tissue SERS analysis into a clinical tool for esophageal cancer detection.
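The discrimination step can be sketched with a two-class Fisher linear discriminant on low-dimensional features standing in for PCA-reduced SERS spectra. The numbers are synthetic, and a diagonal-scatter simplification replaces the paper's full PCA-LDA pipeline:

```python
def fisher_direction(class_a, class_b):
    """Two-class Fisher discriminant direction, using a diagonal
    pooled-scatter approximation to keep the sketch dependency-free."""
    def mean(rows):
        return [sum(col) / len(col) for col in zip(*rows)]
    ma, mb = mean(class_a), mean(class_b)
    w = []
    for d in range(len(ma)):
        sa = sum((r[d] - ma[d]) ** 2 for r in class_a)
        sb = sum((r[d] - mb[d]) ** 2 for r in class_b)
        pooled = (sa + sb) / (len(class_a) + len(class_b) - 2)
        w.append((mb[d] - ma[d]) / pooled)   # direction scaled by 1/variance
    return w

# Synthetic 2-D features standing in for PCA scores of tissue spectra.
normal_tissue = [(1.0, 2.0), (1.2, 1.8), (0.9, 2.2)]
cancer_tissue = [(2.0, 3.0), (2.2, 2.8), (1.9, 3.2)]
w = fisher_direction(normal_tissue, cancer_tissue)
score = lambda r: sum(wi * xi for wi, xi in zip(w, r))
```

Thresholding `score` separates the two classes; sensitivity and specificity such as the 90.9%/97.8% reported above are then read off the resulting confusion table.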

  10. Pleiotropy Analysis of Quantitative Traits at Gene Level by Multivariate Functional Linear Models

    PubMed Central

    Wang, Yifan; Liu, Aiyi; Mills, James L.; Boehnke, Michael; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Xiong, Momiao; Wu, Colin O.; Fan, Ruzong

    2015-01-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai–Bartlett trace, Hotelling–Lawley trace, and Wilks’s Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. PMID:25809955
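One of the three statistics named above, the Pillai-Bartlett trace V = tr(H (H + E)^{-1}), can be sketched for 2 x 2 hypothesis (H) and error (E) cross-product matrices, with the 2 x 2 inverse written out. The matrices are illustrative, not computed from the cohort data:

```python
def inv2(m):
    """Inverse of a 2 x 2 matrix."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul2(p, q):
    """Product of two 2 x 2 matrices."""
    return [[sum(p[i][k] * q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def pillai_trace(H, E):
    """Pillai-Bartlett trace V = tr(H (H + E)^{-1}) for 2 x 2 matrices."""
    S = [[H[i][j] + E[i][j] for j in range(2)] for i in range(2)]
    M = matmul2(H, inv2(S))
    return M[0][0] + M[1][1]

# Illustrative cross-product matrices; here E = 2H, so V = 2/3 exactly.
H = [[4.0, 1.0], [1.0, 3.0]]
E = [[8.0, 2.0], [2.0, 6.0]]
V = pillai_trace(H, E)
```

The approximate F tests referenced in the abstract map V (and the Hotelling-Lawley trace and Wilks's Lambda) to an F statistic with degrees of freedom determined by the numbers of traits, variants, and subjects.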

  11. Pleiotropy analysis of quantitative traits at gene level by multivariate functional linear models.

    PubMed

    Wang, Yifan; Liu, Aiyi; Mills, James L; Boehnke, Michael; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao; Wu, Colin O; Fan, Ruzong

    2015-05-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai-Bartlett trace, Hotelling-Lawley trace, and Wilks's Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. © 2015 WILEY PERIODICALS, INC.
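
    The three approximate F-tests named in this record follow standard MANOVA constructions. As a minimal illustration (not the authors' functional-linear-model implementation; the function name is ours), the Pillai-Bartlett trace and its approximate F for a one-way layout can be sketched as:

```python
import numpy as np

def pillai_approx_F(Y, groups):
    """Pillai-Bartlett trace V and its approximate F for a one-way MANOVA.
    Y is (N, p); groups is a length-N label array."""
    Y = np.asarray(Y, float)
    groups = np.asarray(groups)
    N, p = Y.shape
    labels = np.unique(groups)
    grand = Y.mean(axis=0)
    H = np.zeros((p, p))   # hypothesis (between-group) SSCP matrix
    E = np.zeros((p, p))   # error (within-group) SSCP matrix
    for lab in labels:
        Yg = Y[groups == lab]
        d = (Yg.mean(axis=0) - grand)[:, None]
        H += len(Yg) * (d @ d.T)
        R = Yg - Yg.mean(axis=0)
        E += R.T @ R
    V = np.trace(H @ np.linalg.inv(H + E))   # Pillai-Bartlett trace, 0 <= V <= s
    q = len(labels) - 1                      # hypothesis degrees of freedom
    v = N - len(labels)                      # error degrees of freedom
    s = min(p, q)
    m = (abs(p - q) - 1) / 2.0
    n = (v - p - 1) / 2.0
    F = (2*n + s + 1) / (2*m + s + 1) * V / (s - V)
    return V, F, s * (2*m + s + 1), s * (2*n + s + 1)
```

    The Hotelling-Lawley trace (tr HE⁻¹) and Wilks's Lambda (det E / det(H+E)) admit analogous approximate-F formulas.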

  12. Standard Error of Linear Observed-Score Equating for the NEAT Design with Nonnormally Distributed Data

    ERIC Educational Resources Information Center

    Zu, Jiyun; Yuan, Ke-Hai

    2012-01-01

    In the nonequivalent groups with anchor test (NEAT) design, the standard error of linear observed-score equating is commonly estimated by an estimator derived assuming multivariate normality. However, real data are seldom normally distributed, causing this normal estimator to be inconsistent. A general estimator, which does not rely on the…

  13. A flexible model for multivariate interval-censored survival times with complex correlation structure.

    PubMed

    Falcaro, Milena; Pickles, Andrew

    2007-02-10

    We focus on the analysis of multivariate survival times with highly structured interdependency and subject to interval censoring. Such data are common in developmental genetics and genetic epidemiology. We propose a flexible mixed probit model that deals naturally with complex but uninformative censoring. The recorded ages of onset are treated as possibly censored ordinal outcomes with the interval censoring mechanism seen as arising from a coarsened measurement of a continuous variable observed as falling between subject-specific thresholds. This bypasses the requirement for the failure times to be observed as falling into non-overlapping intervals. The assumption of a normal age-of-onset distribution of the standard probit model is relaxed by embedding within it a multivariate Box-Cox transformation whose parameters are jointly estimated with the other parameters of the model. Complex decompositions of the underlying multivariate normal covariance matrix of the transformed ages of onset become possible. The new methodology is here applied to a multivariate study of the ages of first use of tobacco and first consumption of alcohol without parental permission in twins. The proposed model allows estimation of the genetic and environmental effects that are shared by both of these risk behaviours as well as those that are specific. 2006 John Wiley & Sons, Ltd.

  14. Topics in Multivariate Approximation Theory.

    DTIC Science & Technology

    1982-05-01

    once that a continuous function f can be approximated from S := span(N_β)_{β∈B} to within ω(f, |t|), with |t| := sup_β diam supp N_β. The simple approximation scheme uses a quasi-interpolant Q. Then, as in Lebesgue's inequality, we could conclude that f − Qf = (f − p) − Q(f − p) for all p ∈ S, and therefore ‖f − Qf‖ ≤ (1 + ‖Q‖) dist(f, S).

  15. Departure from Normality in Multivariate Normative Comparison: The Cramer Alternative for Hotelling's "T[squared]"

    ERIC Educational Resources Information Center

    Grasman, Raoul P. P. P.; Huizenga, Hilde M.; Geurts, Hilde M.

    2010-01-01

    Crawford and Howell (1998) have pointed out that the common practice of z-score inference on cognitive disability is inappropriate if a patient's performance on a task is compared with relatively few typical control individuals. Appropriate univariate and multivariate statistical tests have been proposed for these studies, but these are only valid…

  16. Multivariate Generalizations of Student's t-Distribution. ONR Technical Report. [Biometric Lab Report No. 90-3.

    ERIC Educational Resources Information Center

    Gibbons, Robert D.; And Others

    In the process of developing a conditionally-dependent item response theory (IRT) model, the problem arose of modeling an underlying multivariate normal (MVN) response process with general correlation among the items. Without the assumption of conditional independence, for which the underlying MVN cdf takes on comparatively simple forms and can be…

  17. Use of collateral information to improve LANDSAT classification accuracies

    NASA Technical Reports Server (NTRS)

    Strahler, A. H. (Principal Investigator)

    1981-01-01

    Methods to improve LANDSAT classification accuracies were investigated, including: (1) the use of prior probabilities in maximum likelihood classification as a methodology to integrate discrete collateral data with continuously measured image density variables; (2) the use of the logit classifier as an alternative to multivariate normal classification, which permits mixing both continuous and categorical variables in a single model and fits empirical distributions of observations more closely than the multivariate normal density function; and (3) the use of collateral data in a geographic information system, exercised to model a desired output information layer as a function of input layers of raster-format collateral and image data.

  18. The use of copulas to practical estimation of multivariate stochastic differential equation mixed effects models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rupšys, P.

    A system of stochastic differential equations (SDEs) with mixed-effects parameters and a multivariate normal copula density function was used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used, and computational guidelines are given. After fitting the conditional probability density functions to outside-bark diameter at breast height and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects SDE tree height model calculated during this research were compared to regression tree height equations. The results are implemented in the symbolic computational language MAPLE.
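
    For reference, the density of the bivariate normal (Gaussian) copula used in such models has a closed form; a minimal sketch using only the standard library (the function name is ours, not from the paper):

```python
import math
from statistics import NormalDist

def gaussian_copula_pdf(u, v, rho):
    """Density of the bivariate Gaussian (normal) copula at (u, v) in (0,1)^2,
    with correlation parameter rho in (-1, 1)."""
    nd = NormalDist()
    x, y = nd.inv_cdf(u), nd.inv_cdf(v)          # map to standard-normal scale
    r2 = 1.0 - rho * rho
    expo = -(rho * rho * (x * x + y * y) - 2.0 * rho * x * y) / (2.0 * r2)
    return math.exp(expo) / math.sqrt(r2)
```

    At rho = 0 the density is identically 1, recovering independence.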

  19. Lifestyle and health status in a sample of Swedish women four years after pregnancy: a comparison of women with a history of normal pregnancy and women with a history of gestational diabetes mellitus.

    PubMed

    Persson, Margareta; Winkvist, Anna; Mogren, Ingrid

    2015-03-13

    Despite the recommendations to continue the regime of healthy food and physical activity (PA) postpartum for women with previous gestational diabetes mellitus (GDM), the scientific evidence reveals that these recommendations may not be complied with. This study compared lifestyle and health status in women whose pregnancy was complicated by GDM with women who had a normal pregnancy and delivery. The inclusion criteria were women with GDM (ICD-10: O24.4A and O24.4B) and women with uncomplicated pregnancy and delivery in 2005 (ICD-10: O80.0). A random sample of women fulfilling the criteria (n = 882) was identified from the Swedish Medical Birth Register. A questionnaire was sent by mail to eligible women approximately four years after the pregnancy. A total of 444 women (50.8%) agreed to participate, 111 diagnosed with GDM in their pregnancy and 333 with normal pregnancy/delivery. Women with previous GDM were significantly older and reported higher body weight and less PA before the index pregnancy. No major differences between the groups were noticed regarding lifestyle at the follow-up. Overall, few participants fulfilled the national recommendations for PA and diet. At the follow-up, 19 participants had developed diabetes, all with previous GDM. Women with previous GDM reported significantly poorer self-rated health (SRH), a higher level of sick leave, and more frequent use of medication on a regular basis. However, a history of GDM or having overt diabetes mellitus showed no association with poorer SRH in the multivariate analysis. Irregular eating habits, no regular PA, overweight/obesity, and regular use of medication were associated with poorer SRH in all participants. Suboptimal levels of PA and of fruit and vegetable consumption were found both in women with a history of GDM and in women with normal pregnancy approximately four years after the index pregnancy. Women with previous GDM seem to increase their PA after childbirth, but they still perform their PA at lower intensity than women with a history of normal pregnancy. Having GDM at the index pregnancy or being diagnosed with overt diabetes mellitus at follow-up was not associated with poorer SRH four years after delivery.

  20. Simulation techniques for estimating error in the classification of normal patterns

    NASA Technical Reports Server (NTRS)

    Whitsitt, S. J.; Landgrebe, D. A.

    1974-01-01

    Methods of efficiently generating and classifying samples with specified multivariate normal distributions were discussed. Conservative confidence tables for sample sizes are given for selective sampling. Simulation results are compared with classified training data. Techniques for comparing error and separability measure for two normal patterns are investigated and used to display the relationship between the error and the Chernoff bound.
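
    The Chernoff bound mentioned in this record, evaluated at s = 1/2 (the Bhattacharyya bound), has a closed form for two multivariate normal class densities; a minimal sketch with equal priors by default (the helper name is ours):

```python
import numpy as np

def bhattacharyya_bound(mu1, S1, mu2, S2, p1=0.5):
    """Chernoff bound at s = 1/2 (Bhattacharyya bound) on the Bayes error
    for two multivariate normal class densities N(mu1, S1), N(mu2, S2)."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    S1, S2 = np.asarray(S1, float), np.asarray(S2, float)
    S = 0.5 * (S1 + S2)                      # averaged covariance
    d = mu2 - mu1
    B = 0.125 * d @ np.linalg.solve(S, d) \
        + 0.5 * np.log(np.linalg.det(S)
                       / np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))
    return np.sqrt(p1 * (1 - p1)) * np.exp(-B)   # error <= sqrt(p1 p2) e^{-B}
```

    For identical class densities the bound equals the prior-limited maximum of 0.5, and it shrinks as the means separate.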

  1. Gaussianization for fast and accurate inference from cosmological data

    NASA Astrophysics Data System (ADS)

    Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.

    2016-06-01

    We present a method to transform multivariate unimodal non-Gaussian posterior probability densities into approximately Gaussian ones via non-linear mappings, such as Box-Cox transformations and generalizations thereof. This permits an analytical reconstruction of the posterior from a point sample, like a Markov chain, and simplifies the subsequent joint analysis with other experiments. This way, a multivariate posterior density can be reported efficiently, by compressing the information contained in Markov Chain Monte Carlo samples. Further, the model evidence integral (i.e. the marginal likelihood) can be computed analytically. This method is analogous to the search for normal parameters in the cosmic microwave background, but is more general. The search for the optimally Gaussianizing transformation is performed computationally through a maximum-likelihood formalism; its quality can be judged by how well the credible regions of the posterior are reproduced. We demonstrate that our method outperforms kernel density estimates in this objective. Further, we select marginal posterior samples from Planck data with several distinct strongly non-Gaussian features, and verify the reproduction of the marginal contours. To demonstrate evidence computation, we Gaussianize the joint distribution of data from weak lensing and baryon acoustic oscillations, for different cosmological models, and find a preference for flat ΛCDM (Λ cold dark matter). Comparing to values computed with the Savage-Dickey density ratio and Population Monte Carlo, we find good agreement of our method within the spread of the other two.
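
    The maximum-likelihood search for a Gaussianizing Box-Cox parameter can be sketched in the one-dimensional case as a simple grid search over the profile Gaussian log-likelihood (an illustrative toy, not the authors' multivariate code; function names are ours):

```python
import numpy as np

def boxcox(x, lam):
    """Box-Cox transform; the lam -> 0 limit is log(x)."""
    return np.log(x) if abs(lam) < 1e-12 else (x**lam - 1.0) / lam

def boxcox_mle(x, grid=None):
    """Grid-search the Box-Cox profile (Gaussian) log-likelihood over lambda.
    llf(lam) = -(n/2) log var(z) + (lam - 1) sum(log x), z = boxcox(x, lam)."""
    x = np.asarray(x, float)
    if grid is None:
        grid = np.linspace(-2.0, 2.0, 401)
    n, logs = x.size, np.log(x).sum()
    def llf(lam):
        z = boxcox(x, lam)
        return -0.5 * n * np.log(z.var()) + (lam - 1.0) * logs
    return float(max(grid, key=llf))
```

    For log-normally distributed data the estimated lambda should land near 0, i.e. the log transform.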

  2. Influence of Time-Series Normalization, Number of Nodes, Connectivity and Graph Measure Selection on Seizure-Onset Zone Localization from Intracranial EEG.

    PubMed

    van Mierlo, Pieter; Lie, Octavian; Staljanssens, Willeke; Coito, Ana; Vulliémoz, Serge

    2018-04-26

    We investigated the influence of processing steps in the estimation of multivariate directed functional connectivity during seizures recorded with intracranial EEG (iEEG) on seizure-onset zone (SOZ) localization. We studied the effect of (i) the number of nodes, (ii) time-series normalization, (iii) the choice of multivariate time-varying connectivity measure: Adaptive Directed Transfer Function (ADTF) or Adaptive Partial Directed Coherence (APDC) and (iv) graph theory measure: outdegree or shortest path length. First, simulations were performed to quantify the influence of the various processing steps on the accuracy to localize the SOZ. Afterwards, the SOZ was estimated from a 113-electrodes iEEG seizure recording and compared with the resection that rendered the patient seizure-free. The simulations revealed that ADTF is preferred over APDC to localize the SOZ from ictal iEEG recordings. Normalizing the time series before analysis resulted in an increase of 25-35% of correctly localized SOZ, while adding more nodes to the connectivity analysis led to a moderate decrease of 10%, when comparing 128 with 32 input nodes. The real-seizure connectivity estimates localized the SOZ inside the resection area using the ADTF coupled to outdegree or shortest path length. Our study showed that normalizing the time-series is an important pre-processing step, while adding nodes to the analysis did only marginally affect the SOZ localization. The study shows that directed multivariate Granger-based connectivity analysis is feasible with many input nodes (> 100) and that normalization of the time-series before connectivity analysis is preferred.

  3. Approximating Multilinear Monomial Coefficients and Maximum Multilinear Monomials in Multivariate Polynomials

    NASA Astrophysics Data System (ADS)

    Chen, Zhixiang; Fu, Bin

    This paper is our third step towards developing a theory of testing monomials in multivariate polynomials and concentrates on two problems: (1) how to compute the coefficients of multilinear monomials; and (2) how to find a maximum multilinear monomial when the input is a ΠΣΠ polynomial. We first prove that the first problem is #P-hard and then devise an O*(3^n s(n)) upper bound for this problem for any polynomial represented by an arithmetic circuit of size s(n). Later, this upper bound is improved to O*(2^n) for ΠΣΠ polynomials. We then design fully polynomial-time randomized approximation schemes for this problem for ΠΣ polynomials. On the negative side, we prove that, even for ΠΣΠ polynomials with terms of degree ≤ 2, the first problem cannot be approximated at all for any approximation factor ≥ 1, nor "weakly approximated" in a much relaxed setting, unless P = NP. For the second problem, we first give a polynomial-time λ-approximation algorithm for ΠΣΠ polynomials with terms of degree no more than a constant λ ≥ 2. On the inapproximability side, we give an n^((1-ε)/2) lower bound, for any ε > 0, on the approximation factor for ΠΣΠ polynomials. When the degrees of the terms in these polynomials are constrained to ≤ 2, we prove a 1.0476 lower bound, assuming P ≠ NP, and a higher 1.0604 lower bound, assuming the Unique Games Conjecture.

  4. A new multivariate zero-adjusted Poisson model with applications to biomedicine.

    PubMed

    Liu, Yin; Tian, Guo-Liang; Tang, Man-Lai; Yuen, Kam Chuen

    2018-05-25

    Recently, although advances were made in modeling multivariate count data, existing models still have several limitations: (i) the multivariate Poisson log-normal model (Aitchison and Ho, ) cannot be used to fit multivariate count data with excess zero-vectors; (ii) the multivariate zero-inflated Poisson (ZIP) distribution (Li et al., 1999) cannot be used to model zero-truncated/deflated count data and is difficult to apply to high-dimensional cases; (iii) the Type I multivariate zero-adjusted Poisson (ZAP) distribution (Tian et al., 2017) can only model multivariate count data with a special correlation structure in which the correlations between components are all positive or all negative. In this paper, we first introduce a new multivariate ZAP distribution, based on a multivariate Poisson distribution, which allows the correlations between components to follow a more flexible dependency structure; that is, some of the correlation coefficients can be positive while others are negative. We then develop its important distributional properties and provide efficient statistical inference methods for the multivariate ZAP model with or without covariates. Two real data examples in biomedicine are used to illustrate the proposed methods. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Novel method for hit-position reconstruction using voltage signals in plastic scintillators and its application to Positron Emission Tomography

    NASA Astrophysics Data System (ADS)

    Raczyński, L.; Moskal, P.; Kowalski, P.; Wiślicki, W.; Bednarski, T.; Białas, P.; Czerwiński, E.; Kapłon, Ł.; Kochanowski, A.; Korcyl, G.; Kowal, J.; Kozik, T.; Krzemień, W.; Kubicz, E.; Molenda, M.; Moskal, I.; Niedźwiecki, Sz.; Pałka, M.; Pawlik-Niedźwiecka, M.; Rudy, Z.; Salabura, P.; Sharma, N. G.; Silarski, M.; Słomski, A.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zieliński, M.; Zoń, N.

    2014-11-01

    Currently, inorganic scintillator detectors are used in all commercial Time of Flight Positron Emission Tomography (TOF-PET) devices. The J-PET collaboration investigates the possibility of constructing a PET scanner from plastic scintillators, which would allow single-bed imaging of the whole human body. This paper describes a novel method of hit-position reconstruction based on sampled signals, and an example of an application of the method for a single module with a 30 cm long plastic strip, read out on both ends by Hamamatsu R4998 photomultipliers. The sampling scheme to generate a vector with samples of a PET event waveform with respect to four user-defined amplitudes is introduced. The experimental setup provides irradiation of a chosen position in the plastic scintillator strip with annihilation gamma quanta of energy 511 keV. A statistical test for a multivariate normal (MVN) distribution of the measured vectors at a given position is developed, and it is shown that signals sampled at four thresholds in the voltage domain are approximately normally distributed variables. With the presented method of analyzing vectors built from waveform samples acquired at four thresholds, we obtain a spatial resolution of about 1 cm and a timing resolution of about 80 ps (σ).

  6. Associations of Pre-transplant Prescription Narcotic Use with Clinical Complications after Kidney Transplantation

    PubMed Central

    Lentine, Krista L.; Lam, Ngan N.; Xiao, Huiling; Tuttle-Newhall, Janet E.; Axelrod, David; Brennan, Daniel C.; Dharnidharka, Vikas R.; Yuan, Hui; Nazzal, Mustafa; Zheng, Jie; Schnitzler, Mark A.

    2015-01-01

    Background: Associations of narcotic use before kidney transplantation with post-transplant clinical outcomes are not well described. Methods: We examined integrated national transplant registry, pharmacy records, and Medicare billing claims to follow 16,322 kidney transplant recipients, of whom 28.3% filled a narcotic prescription in the year before transplantation. Opioid analgesic fills were normalized to morphine equivalents (ME) and expressed as mg/kg exposures (approximate quartiles: 0.1–1.7, 1.8–5.4, 5.5–23.7, and ≥23.8 mg/kg, respectively). Post-transplant cardiovascular, respiratory, neurological, accidents, substance abuse, and non-compliance events were identified using diagnosis codes on Medicare billing claims. Adjusted associations of ME level with post-transplant complications were quantified by multivariate Cox regression. Results: The incidence of complications at 3 years post-transplant among those with the highest pre-transplant ME exposure compared to no use included: ventricular arrhythmias, 1.1% vs. 0.2% (p<0.001); cardiac arrest, 4.7% vs. 2.7% (p<0.05); hypotension, 14% vs. 8% (p<0.0001); hypercapnia, 1.6% vs. 0.9% (p<0.05); mental status changes, 5.3% vs. 2.7% (p<0.001); drug abuse/dependence, 7.0% vs. 1.7% (p<0.0001); alcohol abuse, 1.8% vs. 0.6% (p=0.0001); accidents, 0.9% vs. 0.3% (p<0.05); and non-compliance, 3.5% vs. 2.3% (p<0.05). In multivariate analyses, transplant recipients with the highest level of pre-transplant narcotic use had approximately 2-to-4-times the risks of post-transplant ventricular arrhythmias, mental status changes, drug abuse, alcohol abuse, and accidents compared with non-users, and 35% to 45% higher risks of cardiac arrest and hypotension. Conclusion: Although associations may reflect underlying conditions or behaviors, high-level prescription narcotic use before kidney transplantation predicts increased risk of clinical complications after transplantation. PMID:25832723

  7. Multivariate multiscale entropy of financial markets

    NASA Astrophysics Data System (ADS)

    Lu, Yunfan; Wang, Jun

    2017-11-01

    In the current process of quantifying the dynamical properties of complex phenomena in the financial market system, multivariate financial time series are of wide concern. In this work, considering the shortcomings and limitations of univariate multiscale entropy in analyzing multivariate time series, the multivariate multiscale sample entropy (MMSE), which can evaluate the complexity in multiple data channels over different timescales, is applied to quantify the complexity of financial markets. Its effectiveness and advantages are demonstrated through numerical simulations with two well-known synthetic noise signals. For the first time, the complexity of four generated trivariate return series for each stock trading hour in China's stock markets is quantified through the interdisciplinary application of this method. We find that the complexity of the trivariate return series in each hour shows a significant decreasing trend as stock trading time progresses. Further, the shuffled multivariate return series and the absolute multivariate return series are also analyzed. As another new attempt, the complexity of global stock markets (Asia, Europe and America) is quantified by analyzing the multivariate returns from them. Finally, we utilize the multivariate multiscale entropy to assess the relative complexity of normalized multivariate return volatility series with different degrees.
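
    The coarse-graining step underlying any multiscale entropy, multivariate included, is simple to state: average consecutive, non-overlapping windows in each channel. A minimal sketch (the function name is ours):

```python
import numpy as np

def coarse_grain(X, scale):
    """Coarse-grain a (T, C) multivariate series at the given scale by
    averaging non-overlapping windows of length `scale` in every channel."""
    X = np.asarray(X, float)
    T = (X.shape[0] // scale) * scale        # drop the incomplete tail window
    return X[:T].reshape(-1, scale, X.shape[1]).mean(axis=1)
```

    The sample-entropy statistic is then computed on each coarse-grained series, one value per scale.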

  8. Multivariate normality

    NASA Technical Reports Server (NTRS)

    Crutcher, H. L.; Falls, L. W.

    1976-01-01

    Sets of experimentally determined or routinely observed data provide information about the past, present and, hopefully, future sets of similarly produced data. An infinite set of statistical models exists which may be used to describe the data sets. The normal distribution is one model. If it serves at all, it serves well. If a data set, or a transformation of the set, representative of a larger population can be described by the normal distribution, then valid statistical inferences can be drawn. There are several tests which may be applied to a data set to determine whether the univariate normal model adequately describes the set. The chi-square test based on Pearson's work in the late nineteenth and early twentieth centuries is often used. Like all tests, it has some weaknesses which are discussed in elementary texts. Extension of the chi-square test to the multivariate normal model is provided. Tables and graphs permit easier application of the test in the higher dimensions. Several examples, using recorded data, illustrate the procedures. Tests of maximum absolute differences, mean sum of squares of residuals, runs and changes of sign are included in these tests. Dimensions one through five with selected sample sizes 11 to 101 are used to illustrate the statistical tests developed.
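
    A chi-square test of multivariate normality in the spirit of this record can be sketched by binning squared Mahalanobis distances, which are approximately chi-square with p degrees of freedom under the MVN model. Here the Wilson-Hilferty approximation stands in for exact chi-square quantiles, and the helper names are ours:

```python
import math
import numpy as np
from statistics import NormalDist

def wh_chi2_ppf(q, df):
    """Wilson-Hilferty approximation to the chi-square quantile function."""
    z = NormalDist().inv_cdf(q)
    c = 2.0 / (9.0 * df)
    return df * (1.0 - c + z * math.sqrt(c)) ** 3

def chi2_mvn_stat(X, k=10):
    """Pearson chi-square statistic over k equiprobable bins of the squared
    Mahalanobis distances; large values cast doubt on multivariate normality."""
    X = np.asarray(X, float)
    n, p = X.shape
    D = X - X.mean(axis=0)
    Sinv = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum('ij,jk,ik->i', D, Sinv, D)          # squared Mahalanobis
    edges = [0.0] + [wh_chi2_ppf(i / k, p) for i in range(1, k)] + [np.inf]
    counts = np.histogram(d2, bins=edges)[0]
    expected = n / k
    return float(((counts - expected) ** 2 / expected).sum())
```

    The statistic is referred to a chi-square distribution with roughly k − 1 degrees of freedom (fewer when mean and covariance are estimated from the same data, as here).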

  9.  Alkaline phosphatase normalization is a biomarker of improved survival in primary sclerosing cholangitis.

    PubMed

    Hilscher, Moira; Enders, Felicity B; Carey, Elizabeth J; Lindor, Keith D; Tabibian, James H

    2016-01-01

     Introduction. Recent studies suggest that serum alkaline phosphatase may represent a prognostic biomarker in patients with primary sclerosing cholangitis. However, this association remains poorly understood. Therefore, the aim of this study was to investigate the prognostic significance and clinical correlates of alkaline phosphatase normalization in primary sclerosing cholangitis. This was a retrospective cohort study of patients with a new diagnosis of primary sclerosing cholangitis made at an academic medical center. The primary endpoint was time to hepatobiliaryneoplasia, liver transplantation, or liver-related death. Secondary endpoints included occurrence of and time to alkaline phosphatase normalization. Patients who did and did not achieve normalization were compared with respect to clinical characteristics and endpoint-free survival, and the association between normalization and the primary endpoint was assessed with univariate and multivariate Cox proportional-hazards analyses. Eighty six patients were included in the study, with a total of 755 patient-years of follow-up. Thirty-eight patients (44%) experienced alkaline phosphatase normalization within 12 months of diagnosis. Alkaline phosphatase normalization was associated with longer primary endpoint-free survival (p = 0.0032) and decreased risk of requiring liver transplantation (p = 0.033). Persistent normalization was associated with even fewer adverse endpoints as well as longer survival. In multivariate analyses, alkaline phosphatase normalization (adjusted hazard ratio 0.21, p = 0.012) and baseline bilirubin (adjusted hazard ratio 4.87, p = 0.029) were the only significant predictors of primary endpoint-free survival. Alkaline phosphatase normalization, particularly if persistent, represents a robust biomarker of improved long-term survival and decreased risk of requiring liver transplantation in patients with primary sclerosing cholangitis.

  10. Comparison of Two Procedures for Analyzing Small Sets of Repeated Measures Data

    ERIC Educational Resources Information Center

    Vallejo, Guillermo; Livacic-Rojas, Pablo

    2005-01-01

    This article compares two methods for analyzing small sets of repeated measures data under normal and non-normal heteroscedastic conditions: a mixed model approach with the Kenward-Roger correction and a multivariate extension of the modified Brown-Forsythe (BF) test. These procedures differ in their assumptions about the covariance structure of…

  11. Drunk driving detection based on classification of multivariate time series.

    PubMed

    Li, Zhenlong; Jin, Xue; Zhao, Xiaohua

    2015-09-01

    This paper addresses the problem of detecting drunk driving based on classification of multivariate time series. First, driving performance measures were collected from a test in a driving simulator located in the Traffic Research Center, Beijing University of Technology. Lateral position and steering angle were used to detect drunk driving. Second, multivariate time series analysis was performed to extract the features. A piecewise linear representation was used to represent multivariate time series. A bottom-up algorithm was then employed to separate multivariate time series. The slope and time interval of each segment were extracted as the features for classification. Third, a support vector machine classifier was used to classify driver's state into two classes (normal or drunk) according to the extracted features. The proposed approach achieved an accuracy of 80.0%. Drunk driving detection based on the analysis of multivariate time series is feasible and effective. The approach has implications for drunk driving detection. Copyright © 2015 Elsevier Ltd and National Safety Council. All rights reserved.
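
    The bottom-up piecewise linear representation used above for feature extraction can be sketched as follows (a generic version of the algorithm, not the authors' code; the slope and time interval of each returned segment would then feed the SVM):

```python
import numpy as np

def bottom_up_segments(y, max_error=1.0):
    """Bottom-up piecewise linear representation of a 1-D series.
    Start from tiny segments; repeatedly merge the adjacent pair with the
    cheapest least-squares line fit until every merge exceeds max_error.
    Returns contiguous (start, end) index pairs."""
    y = np.asarray(y, float)
    def fit_cost(i, j):                     # SSE of a least-squares line on y[i:j+1]
        t = np.arange(i, j + 1)
        if len(t) < 3:
            return 0.0
        coef = np.polyfit(t, y[i:j + 1], 1)
        return float(((np.polyval(coef, t) - y[i:j + 1]) ** 2).sum())
    segs = [[i, min(i + 1, len(y) - 1)] for i in range(0, len(y) - 1, 2)]
    while len(segs) > 1:
        costs = [fit_cost(segs[i][0], segs[i + 1][1]) for i in range(len(segs) - 1)]
        i = int(np.argmin(costs))
        if costs[i] > max_error:
            break
        segs[i][1] = segs[i + 1][1]
        del segs[i + 1]
    return [(a, b) for a, b in segs]
```

    On a series made of two perfect linear pieces, the merging stops exactly at the breakpoint.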

  12. Advanced clinical interpretation of the Delis-Kaplan Executive Function System: multivariate base rates of low scores.

    PubMed

    Karr, Justin E; Garcia-Barrera, Mauricio A; Holdnack, James A; Iverson, Grant L

    2018-01-01

    Multivariate base rates allow for the simultaneous statistical interpretation of multiple test scores, quantifying the normal frequency of low scores on a test battery. This study provides multivariate base rates for the Delis-Kaplan Executive Function System (D-KEFS). The D-KEFS consists of 9 tests with 16 Total Achievement scores (i.e. primary indicators of executive function ability). Stratified by education and intelligence, multivariate base rates were derived for the full D-KEFS and an abbreviated four-test battery (i.e. Trail Making, Color-Word Interference, Verbal Fluency, and Tower Test) using the adult portion of the normative sample (ages 16-89). Multivariate base rates are provided for the full and four-test D-KEFS batteries, calculated using five low score cutoffs (i.e. ≤25th, 16th, 9th, 5th, and 2nd percentiles). Low scores occurred commonly among the D-KEFS normative sample, with 82.6 and 71.8% of participants obtaining at least one score ≤16th percentile for the full and four-test batteries, respectively. Intelligence and education were inversely related to low score frequency. The base rates provided herein allow clinicians to interpret multiple D-KEFS scores simultaneously for the full D-KEFS and an abbreviated battery of commonly administered tests. The use of these base rates will support clinicians when differentiating between normal variations in cognitive performance and true executive function deficits.
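
    Under an assumed multivariate normal model with a common inter-test correlation, such base rates can be approximated by Monte-Carlo simulation (an illustrative sketch; the correlation value and the function name are our assumptions, and the D-KEFS base rates themselves come from the normative sample, not from this model):

```python
import numpy as np
from statistics import NormalDist

def base_rate_low_scores(n_tests=16, rho=0.5, cutoff_pct=16, sims=100_000, seed=1):
    """Monte-Carlo probability of obtaining >= 1 score at or below a
    percentile cutoff across n_tests equicorrelated standard-normal scores."""
    rng = np.random.default_rng(seed)
    cov = np.full((n_tests, n_tests), rho) + (1.0 - rho) * np.eye(n_tests)
    Z = rng.multivariate_normal(np.zeros(n_tests), cov, size=sims)
    z_cut = NormalDist().inv_cdf(cutoff_pct / 100.0)   # e.g. 16th percentile
    return float((Z.min(axis=1) <= z_cut).mean())
```

    The rate grows with the number of tests and shrinks as the correlation rises, which is why a single low score among many is usually unremarkable.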

  13. Stewart analysis of apparently normal acid-base state in the critically ill.

    PubMed

    Moviat, Miriam; van den Boogaard, Mark; Intven, Femke; van der Voort, Peter; van der Hoeven, Hans; Pickkers, Peter

    2013-12-01

    This study aimed to describe Stewart parameters in critically ill patients with an apparently normal acid-base state and to determine the incidence of mixed metabolic acid-base disorders in these patients. We conducted a prospective, observational multicenter study of 312 consecutive Dutch intensive care unit patients with normal pH (7.35 ≤ pH ≤ 7.45) on days 3 to 5. Apparent (SIDa) and effective strong ion difference (SIDe) and strong ion gap (SIG) were calculated from 3 consecutive arterial blood samples. Multivariate linear regression analysis was performed to analyze factors potentially associated with levels of SIDa and SIG. A total of 137 patients (44%) were identified with an apparently normal acid-base state (normal pH and -2 < base excess < 2 and 35 < PaCO2 < 45 mm Hg). In this group, SIDa values were 36.6 ± 3.6 mEq/L, resulting from hyperchloremia (109 ± 4.6 mEq/L, sodium-chloride difference 30.0 ± 3.6 mEq/L); SIDe values were 33.5 ± 2.3 mEq/L, resulting from hypoalbuminemia (24.0 ± 6.2 g/L); and SIG values were 3.1 ± 3.1 mEq/L. During admission, base excess increased secondary to a decrease in SIG levels and, subsequently, an increase in SIDa levels. Levels of SIDa were associated with positive cation load, chloride load, and admission SIDa (multivariate r² = 0.40, P < .001). Levels of SIG were associated with kidney function, sepsis, and SIG levels at intensive care unit admission (multivariate r² = 0.28, P < .001). Intensive care unit patients with an apparently normal acid-base state have an underlying mixed metabolic acid-base disorder characterized by the acidifying effects of a low SIDa (caused by hyperchloremia) and a high SIG, combined with the alkalinizing effect of hypoalbuminemia. © 2013.
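
    The Stewart quantities in this record follow from simple charge-balance arithmetic; a sketch using Figge-style approximations for the albumin and phosphate charges (the coefficients and function name are our assumptions, not from the paper; inputs in mmol/L except albumin in g/L, outputs in mEq/L, with ionized calcium and magnesium doubled for valence):

```python
def stewart_params(na, k, ca, mg, cl, lactate, hco3, albumin_g_l, phosphate, ph):
    """Apparent SID, effective SID, and strong ion gap (all in mEq/L)."""
    sida = na + k + 2.0 * ca + 2.0 * mg - cl - lactate   # apparent SID
    alb_charge = albumin_g_l * (0.123 * ph - 0.631)      # albumin charge (Figge)
    phos_charge = phosphate * (0.309 * ph - 0.469)       # phosphate charge (Figge)
    side = hco3 + alb_charge + phos_charge               # effective SID
    return sida, side, sida - side                       # SIG = SIDa - SIDe
```

    With roughly normal inputs the sketch reproduces the pattern described in the abstract: hyperchloremia lowers SIDa, hypoalbuminemia lowers SIDe, and SIG absorbs unmeasured anions.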

  14. Multivariate analysis of cytokine profiles in pregnancy complications.

    PubMed

    Azizieh, Fawaz; Dingle, Kamaludin; Raghupathy, Raj; Johnson, Kjell; VanderPlas, Jacob; Ansari, Ali

    2018-03-01

    The immunoregulation that tolerates the semiallogeneic fetus during pregnancy involves a harmonious dynamic balance between anti- and pro-inflammatory cytokines. Several earlier studies reported significantly different levels and/or ratios of several cytokines in complicated pregnancy as compared to normal pregnancy. However, because cytokines operate in networks with potentially complex interactions, it is also informative to compare groups on multi-cytokine data sets using multivariate analysis. Such analysis can further examine how large the differences are and which cytokines differ more than others. Various multivariate statistical tools, such as the Cramér test, classification and regression trees, partial least squares regression, the 2-dimensional Kolmogorov-Smirnov test, principal component analysis, and the gap statistic, were used to compare cytokine data of normal vs anomalous groups across different pregnancy complications. Multivariate analysis helped examine whether the groups were different, how strongly they differed, and in what ways they differed, and it further reported evidence for subgroups in 1 group (pregnancy-induced hypertension), possibly indicating multiple causes for the complication. This work contributes to a better understanding of cytokine interactions and may have important implications for targeting cytokine balance modulation or for the design of future medications or interventions that best direct management or prevention from an immunological approach. © 2018 The Authors. American Journal of Reproductive Immunology Published by John Wiley & Sons Ltd.
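    One of the tools named above, the Cramér two-sample test, can be sketched as a permutation test on synthetic "cytokine" data. This is a minimal illustration under assumed data, not the authors' analysis: the group sizes, the 0.7-unit shift, and the number of permutations are all assumptions.

```python
# Cramér-type two-sample test (Baringhaus-Franz form) via permutation,
# comparing two multivariate samples of simulated cytokine levels.
import numpy as np

rng = np.random.default_rng(3)

def cramer_stat(x, y):
    # Statistic built from mean inter-point Euclidean distances
    def mean_dist(a, b):
        return np.mean(np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1))
    m, n = len(x), len(y)
    return m * n / (m + n) * (
        2 * mean_dist(x, y) - mean_dist(x, x) - mean_dist(y, y))

normal_grp = rng.normal(0.0, 1.0, (30, 5))    # 5 cytokines, "normal" group
complicated = rng.normal(0.7, 1.0, (30, 5))   # shifted levels, "complication"

obs = cramer_stat(normal_grp, complicated)
pooled = np.vstack([normal_grp, complicated])
perm = []
for _ in range(500):
    idx = rng.permutation(len(pooled))
    perm.append(cramer_stat(pooled[idx[:30]], pooled[idx[30:]]))
p_value = np.mean(np.array(perm) >= obs)
print(f"Cramér statistic {obs:.2f}, permutation p ~ {p_value:.3f}")
```

The permutation null distribution makes the test distribution-free, which is why such tests suit cytokine data with no guaranteed multivariate normality.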

  15. An Integrable Approximation for the Fermi Pasta Ulam Lattice

    NASA Astrophysics Data System (ADS)

    Rink, Bob

    This contribution presents a review of results obtained from computations of approximate equations of motion for the Fermi-Pasta-Ulam lattice. These approximate equations are obtained as a finite-dimensional Birkhoff normal form. It turns out that in many cases, the Birkhoff normal form is suitable for application of the KAM theorem. In particular, this proves Nishida's 1971 conjecture stating that almost all low-energetic motions of the anharmonic Fermi-Pasta-Ulam lattice with fixed endpoints are quasi-periodic. The proof is based on the formal Birkhoff normal form computations of Nishida, the KAM theorem and discrete symmetry considerations.

  16. Alternatives for using multivariate regression to adjust prospective payment rates

    PubMed Central

    Sheingold, Steven H.

    1990-01-01

    Multivariate regression analysis has been used in structuring three of the adjustments to Medicare's prospective payment rates. Because the indirect-teaching adjustment, the disproportionate-share adjustment, and the adjustment for large cities are responsible for distributing approximately $3 billion in payments each year, the specification of regression models for these adjustments is of critical importance. In this article, the application of regression for adjusting Medicare's prospective rates is discussed, and the implications that differing specifications could have for these adjustments are demonstrated. PMID:10113271

  17. Multivariate spline methods in surface fitting

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr. (Principal Investigator); Schumaker, L. L.

    1984-01-01

    The use of spline functions in the development of classification algorithms is examined. In particular, a method is formulated for producing spline approximations to bivariate density functions, where the density function is described by a histogram of measurements. The resulting approximations are then incorporated into a Bayesian classification procedure for which the Bayes decision regions and the probability of misclassification are readily computed. Some preliminary numerical results are presented to illustrate the method.
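    The idea can be sketched in one dimension. The snippet below is a simplification under stated assumptions: it uses a 1-D rather than bivariate density, and SciPy's interpolating spline in place of the paper's spline construction. It smooths class histograms into density estimates and then applies the Bayes decision rule.

```python
# Spline-smoothed histogram densities feeding a Bayes classifier (1-D sketch).
import numpy as np
from scipy.interpolate import make_interp_spline

rng = np.random.default_rng(1)
x0 = rng.normal(0.0, 1.0, 2000)   # class-0 measurements
x1 = rng.normal(2.0, 1.0, 2000)   # class-1 measurements
edges = np.linspace(-4, 6, 41)
mids = 0.5 * (edges[:-1] + edges[1:])

def spline_density(sample):
    # Histogram the sample, then fit a cubic spline through the bin heights
    hist, _ = np.histogram(sample, bins=edges, density=True)
    return make_interp_spline(mids, hist, k=3)

f0, f1 = spline_density(x0), spline_density(x1)

def classify(x, prior0=0.5):
    # Bayes rule: pick the class with the larger prior * estimated density
    # (clip at 0 because spline wiggles can dip slightly negative)
    return 0 if prior0 * max(f0(x), 0) >= (1 - prior0) * max(f1(x), 0) else 1

print(classify(-0.5), classify(2.5))
```

The Bayes decision boundary falls where the two weighted spline densities cross, which mirrors how the paper's procedure computes decision regions from the fitted densities.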

  18. Food intake does not differ between obese women who are metabolically healthy or abnormal.

    PubMed

    Kimokoti, Ruth W; Judd, Suzanne E; Shikany, James M; Newby, P K

    2014-12-01

    Metabolically healthy obesity may confer lower risk of adverse health outcomes compared with abnormal obesity. Diet and race are postulated to influence the phenotype, but their roles and their interrelations on healthy obesity are unclear. We evaluated food intakes of metabolically healthy obese women in comparison to intakes of their metabolically healthy normal-weight and metabolically abnormal obese counterparts. This was a cross-sectional study in 6964 women of the REasons for Geographic And Racial Differences in Stroke (REGARDS) study. Participants were aged 45-98 y with a body mass index (BMI; kg/m(2)) ≥18.5 and free of cardiovascular diseases, diabetes, and cancer. Food intake was collected by using a food-frequency questionnaire. BMI phenotypes were defined by using metabolic syndrome (MetS) and homeostasis model assessment of insulin resistance (HOMA-IR) criteria. Mean differences in food intakes among BMI phenotypes were compared by using ANCOVA. Approximately one-half of obese women (white: 45%; black: 55%) as defined by MetS criteria and approximately one-quarter of obese women (white: 28%; black: 24%) defined on the basis of HOMA-IR values were metabolically healthy. In age-adjusted analyses, healthy obesity and normal weight as defined by both criteria were associated with lower intakes of sugar-sweetened beverages compared with abnormal obesity among both white and black women (P < 0.05). HOMA-IR-defined healthy obesity and normal weight were also associated with higher fruit and low-fat dairy intakes compared with abnormal obesity in white women (P < 0.05). Results were attenuated and became nonsignificant in multivariable-adjusted models that additionally adjusted for BMI, marital status, residential region, education, annual income, alcohol intake, multivitamin use, cigarette smoking status, physical activity, television viewing, high-sensitivity C-reactive protein, menopausal status, hormone therapy, and food intakes. 
Healthy obesity was not associated with a healthier diet. Prospective studies on relations of dietary patterns, which may be a better indicator of usual diet, with the phenotype would be beneficial. © 2014 American Society for Nutrition.

  19. Food Intake Does Not Differ between Obese Women Who Are Metabolically Healthy or Abnormal1234

    PubMed Central

    Kimokoti, Ruth W; Judd, Suzanne E; Shikany, James M; Newby, PK

    2014-01-01

    Background: Metabolically healthy obesity may confer lower risk of adverse health outcomes compared with abnormal obesity. Diet and race are postulated to influence the phenotype, but their roles and their interrelations on healthy obesity are unclear. Objective: We evaluated food intakes of metabolically healthy obese women in comparison to intakes of their metabolically healthy normal-weight and metabolically abnormal obese counterparts. Methods: This was a cross-sectional study in 6964 women of the REasons for Geographic And Racial Differences in Stroke (REGARDS) study. Participants were aged 45–98 y with a body mass index (BMI; kg/m2) ≥18.5 and free of cardiovascular diseases, diabetes, and cancer. Food intake was collected by using a food-frequency questionnaire. BMI phenotypes were defined by using metabolic syndrome (MetS) and homeostasis model assessment of insulin resistance (HOMA-IR) criteria. Mean differences in food intakes among BMI phenotypes were compared by using ANCOVA. Results: Approximately one-half of obese women (white: 45%; black: 55%) as defined by MetS criteria and approximately one-quarter of obese women (white: 28%; black: 24%) defined on the basis of HOMA-IR values were metabolically healthy. In age-adjusted analyses, healthy obesity and normal weight as defined by both criteria were associated with lower intakes of sugar-sweetened beverages compared with abnormal obesity among both white and black women (P < 0.05). HOMA-IR–defined healthy obesity and normal weight were also associated with higher fruit and low-fat dairy intakes compared with abnormal obesity in white women (P < 0.05). 
Results were attenuated and became nonsignificant in multivariable-adjusted models that additionally adjusted for BMI, marital status, residential region, education, annual income, alcohol intake, multivitamin use, cigarette smoking status, physical activity, television viewing, high-sensitivity C-reactive protein, menopausal status, hormone therapy, and food intakes. Conclusions: Healthy obesity was not associated with a healthier diet. Prospective studies on relations of dietary patterns, which may be a better indicator of usual diet, with the phenotype would be beneficial. PMID:25411036

  20. Estimation and model selection of semiparametric multivariate survival functions under general censorship.

    PubMed

    Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang

    2010-07-01

    We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided.

  1. Estimation and model selection of semiparametric multivariate survival functions under general censorship

    PubMed Central

    Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang

    2013-01-01

    We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided. PMID:24790286

  2. Shape model of the maxillary dental arch using Fourier descriptors with an application in the rehabilitation for edentulous patient.

    PubMed

    Rijal, Omar M; Abdullah, Norli A; Isa, Zakiah M; Noor, Norliza M; Tawfiq, Omar F

    2013-01-01

    The knowledge of tooth positions on the maxillary arch is useful in the rehabilitation of the edentulous patient. A combination of angular (θ) and linear (l) variables representing the positions of four teeth was initially proposed as the shape descriptor of the maxillary dental arch. Three categories of shape were established, each having a multivariate normal distribution. It may be argued that 4 selected teeth on the standardized digital images of the dental casts are insufficient to represent shape. However, increasing the number of points would create dimensionality problems, and proving the existence of the multivariate normal distribution would become extremely difficult. This study investigates the ability of Fourier descriptors (FD), using all maxillary teeth, to find alternative shape models. Eight FD terms were sufficient to represent 21 points on the arch. Using these 8 FD terms as an alternative shape descriptor, three categories of shape were verified, each category having a complex normal distribution.
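    A simple way to see how a few Fourier terms can summarize 21 arch points is to treat the landmarks as a complex sequence and keep the leading FFT coefficients. This sketch is an assumption-laden stand-in (a plain complex-FFT descriptor on a synthetic half-ellipse, not the authors' exact FD construction or their cast data).

```python
# Fourier descriptors of an arch-like contour: keep the first 8 FFT terms.
import numpy as np

# Hypothetical landmarks: a half-ellipse roughly resembling a maxillary arch
t = np.linspace(0, np.pi, 21)
pts = 30 * np.cos(t) + 1j * 40 * np.sin(t)   # x + iy, arbitrary units

coeffs = np.fft.fft(pts) / len(pts)
descriptor = coeffs[:8]                       # truncated 8-term shape model

# Reconstruction from only the 8 retained terms vs. the full transform
kept = np.zeros_like(coeffs)
kept[:8] = descriptor
recon8 = np.fft.ifft(kept * len(pts))
err8 = np.max(np.abs(recon8 - pts))
full = np.fft.ifft(coeffs * len(pts))         # all 21 terms: exact recovery
print(f"8-term error: {err8:.2f}; full-transform error: "
      f"{np.max(np.abs(full - pts)):.1e}")
```

The full 21-term transform recovers the landmarks exactly; truncating to 8 terms trades some reconstruction error for a low-dimensional descriptor on which a (complex) normal model is tractable.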

  3. Some Integrated Squared Error Procedures for Multivariate Normal Data,

    DTIC Science & Technology

    1986-01-01

    ...a linear regression or experimental design model. Our procedures have also been used widely on non-linear models, but we do not address non-linear models here. Keywords: goodness of fit, outliers, influence functions, experimental design, cluster analysis, robustness. The procedures apply to structured data such as multivariate experimental designs. Several illustrations are provided.

  4. A Note on Asymptotic Joint Distribution of the Eigenvalues of a Noncentral Multivariate F Matrix.

    DTIC Science & Technology

    1984-11-01

    ...Krishnaiah (1982). Now, let us consider samples drawn from the k multivariate normal populations, and let (X1t, ..., Xpt) denote the mean vector of the t-th sample... References include: Sankhya, 4, 381-396; Krishnaiah, P. R. (1982), Selection of variables in discriminant analysis, in Handbook of Statistics, Volume 2 (P. R. Krishnaiah, ed.), 805-820, North-Holland Publishing Company.

  5. Hypogammaglobulinemia in newly diagnosed chronic lymphocytic leukemia: Natural history, clinical correlates, and outcomes.

    PubMed

    Parikh, Sameer A; Leis, Jose F; Chaffee, Kari G; Call, Timothy G; Hanson, Curtis A; Ding, Wei; Chanan-Khan, Asher A; Bowen, Deborah; Conte, Michael; Schwager, Susan; Slager, Susan L; Van Dyke, Daniel L; Jelinek, Diane F; Kay, Neil E; Shanafelt, Tait D

    2015-09-01

    Although hypogammaglobulinemia is a well-recognized complication in patients with chronic lymphocytic leukemia (CLL), its prevalence at the time of CLL diagnosis and its association with novel prognostic markers and clinical outcome are not well understood. All patients at the Mayo Clinic between January 1999 and July 2013 who had newly diagnosed CLL and had a baseline assessment of serum immunoglobulin G (IgG) were included. The relation between hypogammaglobulinemia at diagnosis and the outcomes time to first treatment (TFT) and overall survival (OS) was evaluated. Of 1485 patients who met the eligibility criteria, 382 (26%) had hypogammaglobulinemia (median IgG, 624 mg/dL), whereas the remaining 1103 patients (74%) had normal serum IgG levels (median IgG, 1040 mg/dL). Patients who had hypogammaglobulinemia at diagnosis were more likely to have advanced Rai stage (III-IV; P = .001) and higher expression of CD49d (P < .001) compared with patients who had normal IgG levels. Although the median TFT for patients who had hypogammaglobulinemia was shorter compared with that for patients who had normal IgG levels (3.8 years vs 7.4 years; P < .001), on multivariable analysis, there was no difference in OS between these 2 groups (12.8 years vs 11.3 years, respectively; P = .73). Of 1103 patients who had CLL with normal IgG levels at diagnosis and who did not receive CLL therapy, the risk of acquired hypogammaglobulinemia was 11% at 5 years and 23% at 10 years. Hypogammaglobulinemia is present in 25% of patients with newly diagnosed CLL. Approximately 25% of patients who have CLL with normal IgG levels at diagnosis will subsequently develop hypogammaglobulinemia on long-term follow-up. The presence of hypogammaglobulinemia does not appear to impact overall survival. © 2015 American Cancer Society.

  6. Fine Mapping Causal Variants with an Approximate Bayesian Method Using Marginal Test Statistics

    PubMed Central

    Chen, Wenan; Larrabee, Beth R.; Ovsyannikova, Inna G.; Kennedy, Richard B.; Haralambieva, Iana H.; Poland, Gregory A.; Schaid, Daniel J.

    2015-01-01

    Two recently developed fine-mapping methods, CAVIAR and PAINTOR, demonstrate better performance over other fine-mapping methods. They also have the advantage of using only the marginal test statistics and the correlation among SNPs. Both methods leverage the fact that the marginal test statistics asymptotically follow a multivariate normal distribution and are likelihood based. However, their relationship with Bayesian fine mapping, such as BIMBAM, is not clear. In this study, we first show that CAVIAR and BIMBAM are actually approximately equivalent to each other. This leads to a fine-mapping method using marginal test statistics in the Bayesian framework, which we call CAVIAR Bayes factor (CAVIARBF). Another advantage of the Bayesian framework is that it can answer both association and fine-mapping questions. We also used simulations to compare CAVIARBF with other methods under different numbers of causal variants. The results showed that both CAVIARBF and BIMBAM have better performance than PAINTOR and other methods. Compared to BIMBAM, CAVIARBF has the advantage of using only marginal test statistics and takes about one-quarter to one-fifth of the running time. We applied different methods on two independent cohorts of the same phenotype. Results showed that CAVIARBF, BIMBAM, and PAINTOR selected the same top 3 SNPs; however, CAVIARBF and BIMBAM had better consistency in selecting the top 10 ranked SNPs between the two cohorts. Software is available at https://bitbucket.org/Wenan/caviarbf. PMID:25948564
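    The core modeling step these methods share can be sketched directly: marginal z-statistics are treated as multivariate normal with the LD (correlation) matrix as covariance, and a causal configuration inflates that covariance through an assumed prior effect variance. This is a simplified illustration of the idea, not the published CAVIARBF implementation; the LD matrix, z-scores, and `prior_var` below are invented toy values.

```python
# Simplified CAVIAR-style Bayes factor from marginal test statistics.
import numpy as np
from scipy.stats import multivariate_normal

def log_bf(z, ld, causal_idx, prior_var=5.0):
    """log Bayes factor for 'SNPs in causal_idx are causal' vs. the null.
    Under the alternative, causal effects inflate the covariance of z:
    z ~ N(0, LD + LD @ diag(w) @ LD), with w nonzero at the causal SNPs."""
    m = len(z)
    w = np.zeros(m)
    w[list(causal_idx)] = prior_var
    cov_alt = ld + ld @ np.diag(w) @ ld
    null = multivariate_normal.logpdf(z, mean=np.zeros(m), cov=ld)
    alt = multivariate_normal.logpdf(z, mean=np.zeros(m), cov=cov_alt)
    return alt - null

# Toy example: 3 SNPs, SNP 0 strongly associated, modest LD
ld = np.array([[1.0, 0.3, 0.1],
               [0.3, 1.0, 0.2],
               [0.1, 0.2, 1.0]])
z = np.array([5.0, 1.8, 0.4])
print(log_bf(z, ld, [0]), log_bf(z, ld, [2]))
```

Ranking configurations by this Bayes factor, using only summary statistics and LD, is what lets such methods avoid individual-level genotype data.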

  7. Changes in the pharmacokinetics of digoxin in polyuria in streptozotocin-induced diabetic mice and lithium carbonate-treated mice.

    PubMed

    Ikarashi, Nobutomo; Kagami, Mai; Kobayashi, Yasushi; Ishii, Makoto; Toda, Takahiro; Ochiai, Wataru; Sugiyama, Kiyoshi

    2011-06-01

    In humans, digoxin is mainly eliminated through the kidneys unchanged, and renal clearance represents approximately 70% of the total clearance. In this study, we used the mouse models to examine digoxin pharmacokinetics in polyuria induced by diabetes mellitus and lithium carbonate (Li(2)CO(3)) administration, including mechanistic evaluation of the contribution of glomerular filtration, tubular secretion, and tubular reabsorption. After digoxin administration to streptozotocin (STZ)-induced diabetic mice, digoxin CL/F increased to approximately 2.2 times that in normal mice. After treatment with Li(2)CO(3) (0.2%) for 10 days, the CL/F increased approximately 1.1 times for normal mice and 1.6 times for STZ mice. Creatinine clearance (CLcr) and the renal mRNA expression levels of mdr1a did not differ significantly between the normal, STZ, and Li(2)CO(3)-treated mice. The urine volume of STZ mice was approximately 26 mL/day, 22 times that of normal mice. The urine volume of Li(2)CO(3)-treated mice increased approximately 7.3 times for normal mice and 2.3 times for STZ mice. These results suggest that the therapeutic effect of digoxin may be significantly reduced in the presence of polyuria either induced by diabetes mellitus or manifested as an adverse effect of Li(2)CO(3) in diabetic patients, along with increased urine volume.

  8. Universal portfolios generated by weakly stationary processes

    NASA Astrophysics Data System (ADS)

    Tan, Choon Peng; Pang, Sook Theng

    2014-12-01

    Recently, a universal portfolio generated by a set of independent Brownian motions, in which a finite number of past stock prices are weighted by the moments of the multivariate normal distribution, was introduced and studied. The multivariate normal moments, as polynomials in time, consequently lead to a constant rebalanced portfolio depending on the drift coefficients of the Brownian motions. For a weakly stationary process, a different type of universal portfolio is proposed in which the weights on the stock prices depend only on the time differences of the stock prices. An empirical study is conducted on the returns achieved by the universal portfolios generated by the Ornstein-Uhlenbeck process on selected stock-price data sets. Promising results are demonstrated for increasing the wealth of the investor by using the weakly-stationary-process-generated universal portfolios.

  9. Comparison of methods for assessing photoprotection against ultraviolet A in vivo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaidbey, K.; Gange, R.W.

    Photoprotection against ultraviolet A (UVA) by three sunscreens was evaluated in humans, with erythema and pigmentation used as end points in normal skin and in skin sensitized with 8-methoxypsoralen and anthracene. The test sunscreens were Parsol 1789 (2%), Eusolex 8020 (2%), and oxybenzone (3%). UVA was obtained from two filtered xenon-arc sources. UVA protection factors were found to be significantly higher in sensitized skin compared with normal skin. Both Parsol and Eusolex provided better and comparable photoprotection (approximately 3.0) than oxybenzone (approximately 2.0) in sensitized skin, regardless of whether 8-methoxypsoralen or anthracene was used. In normal unsensitized skin, Parsol 1789 and Eusolex 8020 were also comparable and provided slightly better photoprotection (approximately 1.8) than oxybenzone (approximately 1.4) when pigmentation was used as an end point. The three sunscreens, however, were similar in providing photoprotection against UVA-induced erythema. Protection factors obtained in artificially sensitized skin are probably not relevant to normal skin. It is concluded that pigmentation, either immediate or delayed, is a reproducible and useful end point for the routine assessment of photoprotection of normal skin against UVA.

  10. A trust region approach with multivariate Padé model for optimal circuit design

    NASA Astrophysics Data System (ADS)

    Abdel-Malek, Hany L.; Ebid, Shaimaa E. K.; Mohamed, Ahmed S. A.

    2017-11-01

    Since the optimization process requires a significant number of consecutive function evaluations, it is recommended to replace the function by an easily evaluated approximation model during the optimization process. The model suggested in this article is based on a multivariate Padé approximation. This model is constructed using data points of ?, where ? is the number of parameters. The model is updated over a sequence of trust regions. This model avoids the slow convergence of linear models of ? and has features of quadratic models that need interpolation data points of ?. The proposed approach is tested by applying it to several benchmark problems. Yield optimization using such a direct method is applied to some practical circuit examples. Minimax solution leads to a suitable initial point to carry out the yield optimization process. The yield is optimized by the proposed derivative-free method for active and passive filter examples.

  11. A joint modeling and estimation method for multivariate longitudinal data with mixed types of responses to analyze physical activity data generated by accelerometers.

    PubMed

    Li, Haocheng; Zhang, Yukun; Carroll, Raymond J; Keadle, Sarah Kozey; Sampson, Joshua N; Matthews, Charles E

    2017-11-10

    A mixed effect model is proposed to jointly analyze multivariate longitudinal data with continuous, proportion, count, and binary responses. The association of the variables is modeled through the correlation of random effects. We use a quasi-likelihood type approximation for nonlinear variables and transform the proposed model into a multivariate linear mixed model framework for estimation and inference. Via an extension to the EM approach, an efficient algorithm is developed to fit the model. The method is applied to physical activity data, which uses a wearable accelerometer device to measure daily movement and energy expenditure information. Our approach is also evaluated by a simulation study. Copyright © 2017 John Wiley & Sons, Ltd.

  12. Novel adipokines WISP1 and betatrophin in PCOS: relationship to AMH levels, atherogenic and metabolic profile.

    PubMed

    Sahin Ersoy, Gulcin; Altun Ensari, Tugba; Vatansever, Dogan; Emirdar, Volkan; Cevik, Ozge

    2017-02-01

    To determine the levels of WISP1 and betatrophin in normal-weight and obese women with polycystic ovary syndrome (PCOS) and to assess their relationship with anti-Müllerian hormone (AMH) levels, atherogenic profile, and metabolic parameters. Methods: In this prospective cross-sectional study, the study group comprised 49 normal-weight and 34 obese women with PCOS diagnosed based on the Rotterdam criteria; the control group comprised 36 normal-weight and 26 obese age-matched non-hyperandrogenemic women with regular menstrual cycles. Serum WISP1, betatrophin, homeostasis model assessment of insulin resistance (HOMA-IR), and AMH levels were evaluated. Univariate and multivariate analyses were performed between betatrophin and WISP1 levels and AMH levels, metabolic, and atherogenic parameters. Serum WISP1 and betatrophin values were higher in the PCOS group than in the control group. Moreover, serum WISP1 and betatrophin levels were higher in the obese PCOS subgroup than in the normal-weight and obese control subgroups. Multivariate analyses revealed that body mass index, HOMA-IR, and AMH independently and positively predicted WISP1 levels. Serum betatrophin level variability was explained by homocysteine, HOMA-IR, and androstenedione levels. WISP1 and betatrophin may play a key role in the pathogenesis of PCOS.

  13. Near-infrared confocal micro-Raman spectroscopy combined with PCA-LDA multivariate analysis for detection of esophageal cancer

    NASA Astrophysics Data System (ADS)

    Chen, Long; Wang, Yue; Liu, Nenrong; Lin, Duo; Weng, Cuncheng; Zhang, Jixue; Zhu, Lihuan; Chen, Weisheng; Chen, Rong; Feng, Shangyuan

    2013-06-01

    The diagnostic capability of using tissue intrinsic micro-Raman signals to obtain biochemical information from human esophageal tissue is presented in this paper. Near-infrared micro-Raman spectroscopy combined with multivariate analysis was applied for discrimination of esophageal cancer tissue from normal tissue samples. Micro-Raman spectroscopy measurements were performed on 54 esophageal cancer tissues and 55 normal tissues in the 400-1750 cm-1 range. The mean Raman spectra showed significant differences between the two groups. Tentative assignments of the Raman bands in the measured tissue spectra suggested some changes in protein structure, a decrease in the relative amount of lactose, and increases in the percentages of tryptophan, collagen and phenylalanine content in esophageal cancer tissue as compared to those of a normal subject. The diagnostic algorithms based on principal component analysis (PCA) and linear discriminate analysis (LDA) achieved a diagnostic sensitivity of 87.0% and specificity of 70.9% for separating cancer from normal esophageal tissue samples. The result demonstrated that near-infrared micro-Raman spectroscopy combined with PCA-LDA analysis could be an effective and sensitive tool for identification of esophageal cancer.
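    The PCA-LDA pipeline described above can be sketched on synthetic "spectra". Everything here is an assumption for illustration (a NumPy-only PCA via SVD plus Fisher LDA, synthetic band intensities, and an arbitrary class difference in the first 20 bands), not the authors' software or data.

```python
# PCA for dimensionality reduction, then Fisher LDA, on simulated spectra.
import numpy as np

rng = np.random.default_rng(2)
n_bands = 200                                  # stand-in for the Raman range
X_normal = rng.normal(0.0, 1.0, (55, n_bands))
X_cancer = rng.normal(0.0, 1.0, (54, n_bands))
X_cancer[:, :20] += 1.5                        # simulated band differences
X = np.vstack([X_normal, X_cancer])
y = np.array([0] * 55 + [1] * 54)

idx = rng.permutation(len(X))                  # train/test split
train, test = idx[:80], idx[80:]

# PCA: project onto the top 10 principal components of the training data
mu = X[train].mean(axis=0)
_, _, vt = np.linalg.svd(X[train] - mu, full_matrices=False)
Z = (X - mu) @ vt[:10].T

# Fisher LDA in the reduced 10-D space
m0 = Z[train][y[train] == 0].mean(axis=0)
m1 = Z[train][y[train] == 1].mean(axis=0)
sw = np.cov(Z[train][y[train] == 0].T) + np.cov(Z[train][y[train] == 1].T)
w = np.linalg.solve(sw, m1 - m0)               # discriminant direction
threshold = w @ (m0 + m1) / 2
pred = (Z[test] @ w > threshold).astype(int)
acc = (pred == y[test]).mean()
print(f"held-out accuracy: {acc:.2f}")
```

PCA first compresses the high-dimensional spectra so that LDA's within-class scatter matrix stays well conditioned, which is the usual rationale for chaining the two methods on spectral data.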

  14. Logistic Approximation to the Normal: The KL Rationale

    ERIC Educational Resources Information Center

    Savalei, Victoria

    2006-01-01

    A rationale is proposed for approximating the normal distribution with a logistic distribution using a scaling constant based on minimizing the Kullback-Leibler (KL) information, that is, the expected amount of information available in a sample to distinguish between two competing distributions using a likelihood ratio (LR) test, assuming one of…
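    The KL rationale can be reproduced numerically: pick the scaling constant k so that the logistic density with slope k minimizes the Kullback-Leibler divergence from the standard normal. This is a sketch of the idea, not the paper's derivation; the integration limits and search bounds are assumptions, and the resulting constant is computed rather than quoted.

```python
# Find the logistic scaling constant that minimizes KL(normal || logistic_k),
# where f_k(x) = k * exp(-k*x) / (1 + exp(-k*x))**2 is the scaled logistic pdf.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def kl_normal_logistic(k):
    def integrand(x):
        log_fk = np.log(k) - k * x - 2 * np.log1p(np.exp(-k * x))
        return norm.pdf(x) * (norm.logpdf(x) - log_fk)
    # Integrate phi(x) * log(phi(x) / f_k(x)) over an effectively full range
    return quad(integrand, -10, 10)[0]

res = minimize_scalar(kl_normal_logistic, bounds=(1.0, 2.5), method="bounded")
print(f"KL-optimal scaling constant: {res.x:.3f}")
```

For comparison, variance matching gives k = pi/sqrt(3) (about 1.814) and the classical minimax-error constant is about 1.702; the KL-optimal value lands in the same neighborhood.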

  15. Relationship between body mass index and depressive symptoms: the "fat and jolly" hypothesis for the middle-aged and elderly in China.

    PubMed

    Zhang, Lin; Liu, Kun; Li, Hong; Li, Dan; Chen, Zhuo; Zhang, Li-Li; Guo, Lei-Lei

    2016-11-29

    Obesity has been identified as a worldwide epidemic. In China, the highest prevalence of obesity is observed in adults aged ≥45 years old. This study aimed to describe the association between BMI and depressive symptoms among a large representative sample of middle-aged and elderly in China. A longitudinal sample of the middle-aged and elderly (6,224 males and 6,883 females) who were interviewed in the 2011 China Health and Retirement Longitudinal Study was used. A multivariate logistic regression analysis was used to examine the effects of socio-demographic characteristics, lifestyle, activity status, health status, physical exercise and body weight on depressive symptoms. Approximately 6.94% of the males were underweight, 25.48% were overweight and 8.16% were obese. A higher prevalence of obesity was found among women, with 6.89% being underweight, 31.98% overweight and 14.28% obese. The underweight subjects were more likely to be depressed (odds ratio; OR = 1.30 and 1.19) compared with the normal weight people, respectively, whereas overweight and obese men and women were less likely to be depressed (overweight: OR = 0.76 and 0.80; obesity: OR = 0.64 and 0.65, respectively) than people of normal weight. Our data are consistent with the "fat and jolly" hypothesis being valid in both middle-aged and elderly men and women.

  16. Molecular Subgroup of Primary Prostate Cancer Presenting with Metastatic Biology.

    PubMed

    Walker, Steven M; Knight, Laura A; McCavigan, Andrena M; Logan, Gemma E; Berge, Viktor; Sherif, Amir; Pandha, Hardev; Warren, Anne Y; Davidson, Catherine; Uprichard, Adam; Blayney, Jaine K; Price, Bethanie; Jellema, Gera L; Steele, Christopher J; Svindland, Aud; McDade, Simon S; Eden, Christopher G; Foster, Chris; Mills, Ian G; Neal, David E; Mason, Malcolm D; Kay, Elaine W; Waugh, David J; Harkin, D Paul; Watson, R William; Clarke, Noel W; Kennedy, Richard D

    2017-10-01

    Approximately 4-25% of patients with early prostate cancer develop disease recurrence following radical prostatectomy. To identify a molecular subgroup of prostate cancers with metastatic potential at presentation resulting in a high risk of recurrence following radical prostatectomy. Unsupervised hierarchical clustering was performed using gene expression data from 70 primary resections, 31 metastatic lymph nodes, and 25 normal prostate samples. Independent assay validation was performed using 322 radical prostatectomy samples from four sites with a mean follow-up of 50.3 months. Molecular subgroups were identified using unsupervised hierarchical clustering. A partial least squares approach was used to generate a gene expression assay. Relationships with outcome (time to biochemical and metastatic recurrence) were analysed using multivariable Cox regression and log-rank analysis. A molecular subgroup of primary prostate cancer with biology similar to metastatic disease was identified. A 70-transcript signature (metastatic assay) was developed and independently validated in the radical prostatectomy samples. Metastatic assay positive patients had increased risk of biochemical recurrence (multivariable hazard ratio [HR] 1.62 [1.13-2.33]; p=0.0092) and metastatic recurrence (multivariable HR=3.20 [1.76-5.80]; p=0.0001). A combined model with Cancer of the Prostate Risk Assessment post surgical (CAPRA-S) identified patients at an increased risk of biochemical and metastatic recurrence superior to either model alone (HR=2.67 [1.90-3.75]; p<0.0001 and HR=7.53 [4.13-13.73]; p<0.0001, respectively). The retrospective nature of the study is acknowledged as a potential limitation. The metastatic assay may identify a molecular subgroup of primary prostate cancers with metastatic potential. The metastatic assay may improve the ability to detect patients at risk of metastatic recurrence following radical prostatectomy. 
The impact of adjuvant therapies should be assessed in this higher-risk population. Copyright © 2017 European Association of Urology. Published by Elsevier B.V. All rights reserved.

  17. An operator calculus for surface and volume modeling

    NASA Technical Reports Server (NTRS)

    Gordon, W. J.

    1984-01-01

    The mathematical techniques which form the foundation for most of the surface and volume modeling techniques used in practice are briefly described. An outline of what may be termed an operator calculus for the approximation and interpolation of functions of more than one independent variable is presented. By considering the linear operators associated with bivariate and multivariate interpolation/approximation schemes, it is shown how they can be compounded by operator multiplication and Boolean addition to obtain a distributive lattice of approximation operators. It is then demonstrated via specific examples how this operator calculus leads to practical techniques for sculptured surface and volume modeling.
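The Boolean-sum construction at the heart of this operator calculus can be illustrated with a bilinearly blended Coons patch, the classic example of compounding two univariate interpolation operators as P1 ⊕ P2 = P1 + P2 − P1P2. The sketch below (a minimal numpy illustration; the test function F is hypothetical, not from the paper) shows that the Boolean sum interpolates F on the entire boundary of the unit square:

```python
import numpy as np

def coons(F, u, v):
    """Boolean sum (P1 (+) P2)F = P1 F + P2 F - P1 P2 F of two linear
    interpolation operators: P1 blends in u, P2 blends in v, and P1P2
    is the tensor-product bilinear interpolant of the corner values."""
    p1 = (1 - u) * F(0, v) + u * F(1, v)          # P1: interpolate the u-edges
    p2 = (1 - v) * F(u, 0) + v * F(u, 1)          # P2: interpolate the v-edges
    p12 = ((1 - u) * (1 - v) * F(0, 0) + u * (1 - v) * F(1, 0)
           + (1 - u) * v * F(0, 1) + u * v * F(1, 1))
    return p1 + p2 - p12

F = lambda u, v: np.sin(u) + v**2   # illustrative test function

# Exact on the boundary of the unit square, approximate in the interior.
print(coons(F, 0.0, 0.3), F(0.0, 0.3))
print(coons(F, 0.5, 1.0), F(0.5, 1.0))
```

The tensor product P1P2 alone only matches the four corners; the Boolean sum reproduces all four boundary curves, which is why it underlies sculptured surface modeling.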

  18. Insulin Sensitivity Measured With Euglycemic Clamp Is Independently Associated With Glomerular Filtration Rate in a Community-Based Cohort

    PubMed Central

    Nerpin, Elisabet; Risérus, Ulf; Ingelsson, Erik; Sundström, Johan; Jobs, Magnus; Larsson, Anders; Basu, Samar; Ärnlöv, Johan

    2008-01-01

    OBJECTIVE—To investigate the association between insulin sensitivity and glomerular filtration rate (GFR) in the community, with prespecified subgroup analyses in normoglycemic individuals with normal GFR. RESEARCH DESIGN AND METHODS—We investigated the cross-sectional association between insulin sensitivity (M/I, assessed using euglycemic clamp) and cystatin C–based GFR in a community-based cohort of elderly men (Uppsala Longitudinal Study of Adult Men [ULSAM], n = 1,070). We also investigated whether insulin sensitivity predicted the incidence of renal dysfunction at a follow-up examination after 7 years. RESULTS—Insulin sensitivity was directly related to GFR (multivariable-adjusted regression coefficient for 1-unit higher M/I 1.19 [95% CI 0.69–1.68]; P < 0.001) after adjusting for age, glucometabolic variables (fasting plasma glucose, fasting plasma insulin, and 2-h glucose after an oral glucose tolerance test), cardiovascular risk factors (hypertension, dyslipidemia, and smoking), and lifestyle factors (BMI, physical activity, and consumption of tea, coffee, and alcohol). The positive multivariable-adjusted association between insulin sensitivity and GFR also remained statistically significant in participants with normal fasting plasma glucose, normal glucose tolerance, and normal GFR (n = 443; P < 0.02). In longitudinal analyses, higher insulin sensitivity at baseline was associated with lower risk of impaired renal function (GFR <50 ml/min per 1.73 m2) during follow-up independently of glucometabolic variables (multivariable-adjusted odds ratio for 1-unit higher of M/I 0.58 [95% CI 0.40–0.84]; P < 0.004). CONCLUSIONS—Our data suggest that impaired insulin sensitivity may be involved in the development of renal dysfunction at an early stage, before the onset of diabetes or prediabetic glucose elevations. Further studies are needed in order to establish causality. PMID:18509205

  19. Detection of cervical lesions by multivariate analysis of diffuse reflectance spectra: a clinical study.

    PubMed

    Prabitha, Vasumathi Gopala; Suchetha, Sambasivan; Jayanthi, Jayaraj Lalitha; Baiju, Kamalasanan Vijayakumary; Rema, Prabhakaran; Anuraj, Koyippurath; Mathews, Anita; Sebastian, Paul; Subhash, Narayanan

    2016-01-01

    Diffuse reflectance (DR) spectroscopy is a non-invasive, real-time, and cost-effective tool for early detection of malignant changes in squamous epithelial tissues. The present study aims to evaluate the diagnostic power of diffuse reflectance spectroscopy for non-invasive discrimination of cervical lesions in vivo. A clinical trial was carried out on 48 sites in 34 patients by recording DR spectra using a point-monitoring device with white light illumination. The acquired data were analyzed and classified using multivariate statistical analysis based on principal component analysis (PCA) and linear discriminant analysis (LDA). Diagnostic accuracies were validated using random number generators. The receiver operating characteristic (ROC) curves were plotted for evaluating the discriminating power of the proposed statistical technique. An algorithm was developed and used to classify non-diseased (normal) from diseased sites (abnormal) with a sensitivity of 72 % and specificity of 87 %. While low-grade squamous intraepithelial lesion (LSIL) could be discriminated from normal with a sensitivity of 56 % and specificity of 80 %, and high-grade squamous intraepithelial lesion (HSIL) from normal with a sensitivity of 89 % and specificity of 97 %, LSIL could be discriminated from HSIL with 100 % sensitivity and specificity. The areas under the ROC curves were 0.993 (95 % confidence interval (CI) 0.0 to 1) and 1 (95 % CI 1) for the discrimination of HSIL from normal and HSIL from LSIL, respectively. The results of the study show that DR spectroscopy could be used along with multivariate analytical techniques as a non-invasive technique to monitor cervical disease status in real time.
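The PCA-plus-LDA pipeline described above can be sketched in a few lines of numpy. This is a minimal illustration on synthetic two-class "spectra" (all data, peak positions, and noise levels below are hypothetical stand-ins, not the clinical DR measurements of the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for DR spectra: 40 "normal" and 40 "abnormal"
# samples over 100 wavelength channels, differing by a peak shift.
wl = np.linspace(0, 1, 100)
normal = np.exp(-((wl - 0.4) ** 2) / 0.02) + 0.05 * rng.standard_normal((40, 100))
abnormal = np.exp(-((wl - 0.5) ** 2) / 0.02) + 0.05 * rng.standard_normal((40, 100))
X = np.vstack([normal, abnormal])
y = np.array([0] * 40 + [1] * 40)

# PCA via SVD of the mean-centered matrix; keep 3 components.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T

# Fisher LDA in the reduced space: w proportional to Sw^{-1}(mu1 - mu0).
mu0, mu1 = scores[y == 0].mean(axis=0), scores[y == 1].mean(axis=0)
Sw = np.cov(scores[y == 0].T) + np.cov(scores[y == 1].T)
w = np.linalg.solve(Sw, mu1 - mu0)
proj = scores @ w
threshold = 0.5 * (proj[y == 0].mean() + proj[y == 1].mean())
pred = (proj > threshold).astype(int)

sensitivity = np.mean(pred[y == 1] == 1)
specificity = np.mean(pred[y == 0] == 0)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```

In practice the threshold would be swept to trace out the ROC curve, and the model validated on held-out sites as in the clinical study.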

  20. Aspirin and the Risk of Colorectal Cancer in Relation to the Expression of 15-Hydroxyprostaglandin Dehydrogenase (15-PGDH, HPGD)

    PubMed Central

    Fink, Stephen P.; Yamauchi, Mai; Nishihara, Reiko; Jung, Seungyoun; Kuchiba, Aya; Wu, Kana; Cho, Eunyoung; Giovannucci, Edward; Fuchs, Charles S.; Ogino, Shuji; Markowitz, Sanford D.; Chan, Andrew T.

    2014-01-01

    Aspirin use reduces the risk of colorectal neoplasia, at least in part, through inhibition of prostaglandin-endoperoxide synthase 2 (PTGS2, cyclooxygenase 2)-related pathways. Hydroxyprostaglandin dehydrogenase 15-(NAD) (15-PGDH, HPGD) is downregulated in colorectal cancers and functions as a metabolic antagonist of PTGS2. We hypothesized that the effect of aspirin may be antagonized by low 15-PGDH expression in the normal colon. In the Nurses’ Health Study and the Health Professionals Follow-up Study, we collected data on aspirin use and other risk factors every two years and followed up participants for diagnoses of colorectal cancer. Duplication-method Cox proportional, multivariable-adjusted, cause-specific hazards regression for competing risks data was used to compute hazard ratios (HRs) for incident colorectal cancer according to 15-PGDH mRNA expression level measured in normal mucosa from colorectal cancer resections. Among 127,865 participants, we documented 270 colorectal cancer cases that developed during 3,166,880 person-years of follow-up and from which we could assess 15-PGDH expression. Compared with nonuse, regular aspirin use was associated with lower risk of colorectal cancer that developed within a background of colonic mucosa with high 15-PGDH expression (multivariable HR=0.49; 95% CI, 0.34–0.71), but not with low 15-PGDH expression (multivariable HR=0.90; 95% CI, 0.63–1.27) (P for heterogeneity=0.018). Regular aspirin use was associated with lower incidence of colorectal cancers arising in association with high 15-PGDH expression, but not with low 15-PGDH expression in normal colon mucosa. This suggests that 15-PGDH expression level in normal colon mucosa may serve as a biomarker which may predict stronger benefit from aspirin chemoprevention. PMID:24760190

  1. A Polyhedral Outer-approximation, Dynamic-discretization optimization solver, 1.x

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, Russell; Nagarajan, Harsha; Sundar, Kaarthik

    2017-09-25

    In this software, we implement an adaptive, multivariate partitioning algorithm for solving mixed-integer nonlinear programs (MINLP) to global optimality. The algorithm combines ideas that exploit the structure of convex relaxations to MINLPs and bound tightening procedures

  2. Reproductive Health Assessment of Female Elephants in North American Zoos and Association of Husbandry Practices with Reproductive Dysfunction in African Elephants (Loxodonta africana)

    PubMed Central

    Meehan, Cheryl L.; Hogan, Jennifer N.; Morfeld, Kari A.; Carlstead, Kathy

    2016-01-01

    As part of a multi-institutional study of zoo elephant welfare, we evaluated female elephants managed by zoos accredited by the Association of Zoos and Aquariums and applied epidemiological methods to determine what factors in the zoo environment are associated with reproductive problems, including ovarian acyclicity and hyperprolactinemia. Bi-weekly blood samples were collected from 95 African (Loxodonta africana) and 75 Asian (Elephas maximus) (8–55 years of age) elephants over a 12-month period for analysis of serum progestogens and prolactin. Females were categorized as normal cycling (regular 13- to 17-week cycles), irregular cycling (cycles longer or shorter than normal) or acyclic (baseline progestogens, <0.1 ng/ml throughout), and having Low/Normal (<14 or 18 ng/ml) or High (≥14 or 18 ng/ml) prolactin for Asian and African elephants, respectively. Rates of normal cycling, acyclicity and irregular cycling were 73.2, 22.5 and 4.2% for Asian, and 48.4, 37.9 and 13.7% for African elephants, respectively, all of which differed between species (P < 0.05). For African elephants, univariate assessment found that social isolation decreased and higher enrichment diversity increased the chance a female would cycle normally. The strongest multi-variable models included Age (positive) and Enrichment Diversity (negative) as important factors of acyclicity among African elephants. The Asian elephant data set was not robust enough to support multi-variable analyses of cyclicity status. Additionally, only 3% of Asian elephants were found to be hyperprolactinemic as compared to 28% of Africans, so predictive analyses of prolactin status were conducted on African elephants only. The strongest multi-variable model included Age (positive), Enrichment Diversity (negative), Alternate Feeding Methods (negative) and Social Group Contact (positive) as predictors of hyperprolactinemia. 
In summary, the incidence of ovarian cycle problems and hyperprolactinemia predominantly affects African elephants, and increases in social stability and feeding and enrichment diversity may have positive influences on hormone status. PMID:27416141

  3. Reproductive Health Assessment of Female Elephants in North American Zoos and Association of Husbandry Practices with Reproductive Dysfunction in African Elephants (Loxodonta africana).

    PubMed

    Brown, Janine L; Paris, Stephen; Prado-Oviedo, Natalia A; Meehan, Cheryl L; Hogan, Jennifer N; Morfeld, Kari A; Carlstead, Kathy

    2016-01-01

    As part of a multi-institutional study of zoo elephant welfare, we evaluated female elephants managed by zoos accredited by the Association of Zoos and Aquariums and applied epidemiological methods to determine what factors in the zoo environment are associated with reproductive problems, including ovarian acyclicity and hyperprolactinemia. Bi-weekly blood samples were collected from 95 African (Loxodonta africana) and 75 Asian (Elephas maximus) (8-55 years of age) elephants over a 12-month period for analysis of serum progestogens and prolactin. Females were categorized as normal cycling (regular 13- to 17-week cycles), irregular cycling (cycles longer or shorter than normal) or acyclic (baseline progestogens, <0.1 ng/ml throughout), and having Low/Normal (<14 or 18 ng/ml) or High (≥14 or 18 ng/ml) prolactin for Asian and African elephants, respectively. Rates of normal cycling, acyclicity and irregular cycling were 73.2, 22.5 and 4.2% for Asian, and 48.4, 37.9 and 13.7% for African elephants, respectively, all of which differed between species (P < 0.05). For African elephants, univariate assessment found that social isolation decreased and higher enrichment diversity increased the chance a female would cycle normally. The strongest multi-variable models included Age (positive) and Enrichment Diversity (negative) as important factors of acyclicity among African elephants. The Asian elephant data set was not robust enough to support multi-variable analyses of cyclicity status. Additionally, only 3% of Asian elephants were found to be hyperprolactinemic as compared to 28% of Africans, so predictive analyses of prolactin status were conducted on African elephants only. The strongest multi-variable model included Age (positive), Enrichment Diversity (negative), Alternate Feeding Methods (negative) and Social Group Contact (positive) as predictors of hyperprolactinemia. 
In summary, the incidence of ovarian cycle problems and hyperprolactinemia predominantly affects African elephants, and increases in social stability and feeding and enrichment diversity may have positive influences on hormone status.

  4. Multivariate analysis for scanning tunneling spectroscopy data

    NASA Astrophysics Data System (ADS)

    Yamanishi, Junsuke; Iwase, Shigeru; Ishida, Nobuyuki; Fujita, Daisuke

    2018-01-01

    We applied principal component analysis (PCA) to two-dimensional tunneling spectroscopy (2DTS) data obtained on a Si(111)-(7 × 7) surface to explore the effectiveness of multivariate analysis for interpreting 2DTS data. We demonstrated that several components that originated mainly from specific atoms at the Si(111)-(7 × 7) surface can be extracted by PCA. Furthermore, we showed that hidden components in the tunneling spectra can be decomposed (peak separation), which is difficult to achieve with normal 2DTS analysis without the support of theoretical calculations. Our analysis showed that multivariate analysis can be an additional powerful way to analyze 2DTS data and extract hidden information from a large amount of spectroscopic data.

  5. Bayesian alternative to the ISO-GUM's use of the Welch-Satterthwaite formula

    NASA Astrophysics Data System (ADS)

    Kacker, Raghu N.

    2006-02-01

    In certain disciplines, uncertainty is traditionally expressed as an interval about an estimate for the value of the measurand. Development of such uncertainty intervals with a stated coverage probability based on the International Organization for Standardization (ISO) Guide to the Expression of Uncertainty in Measurement (GUM) requires a description of the probability distribution for the value of the measurand. The ISO-GUM propagates the estimates and their associated standard uncertainties for various input quantities through a linear approximation of the measurement equation to determine an estimate and its associated standard uncertainty for the value of the measurand. This procedure does not yield a probability distribution for the value of the measurand. The ISO-GUM suggests that under certain conditions motivated by the central limit theorem the distribution for the value of the measurand may be approximated by a scaled-and-shifted t-distribution with effective degrees of freedom obtained from the Welch-Satterthwaite (W-S) formula. The approximate t-distribution may then be used to develop an uncertainty interval with a stated coverage probability for the value of the measurand. We propose an approximate normal distribution based on a Bayesian uncertainty as an alternative to the t-distribution based on the W-S formula. A benefit of the approximate normal distribution based on a Bayesian uncertainty is that it greatly simplifies the expression of uncertainty by eliminating altogether the need for calculating effective degrees of freedom from the W-S formula. In the special case where the measurand is the difference between two means, each evaluated from statistical analyses of independent normally distributed measurements with unknown and possibly unequal variances, the probability distribution for the value of the measurand is known to be a Behrens-Fisher distribution. 
We compare the performance of the approximate normal distribution based on a Bayesian uncertainty and the approximate t-distribution based on the W-S formula with respect to the Behrens-Fisher distribution. The approximate normal distribution is simpler and better in this case. A thorough investigation of the relative performance of the two approximate distributions would require comparison for a range of measurement equations by numerical methods.
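The Welch-Satterthwaite formula that the proposed Bayesian alternative would eliminate is itself straightforward to compute. A minimal sketch (assuming unit sensitivity coefficients; the numeric inputs are illustrative, not from the paper):

```python
import math

def welch_satterthwaite(uncertainties, dofs):
    """Effective degrees of freedom from the Welch-Satterthwaite formula:
    nu_eff = u_c**4 / sum_i(u_i**4 / nu_i), with u_c**2 = sum_i u_i**2.
    Assumes all sensitivity coefficients c_i = 1."""
    uc2 = sum(u**2 for u in uncertainties)
    return uc2**2 / sum(u**4 / nu for u, nu in zip(uncertainties, dofs))

# Two input quantities with standard uncertainties 0.3 and 0.4,
# evaluated from n1 = 10 and n2 = 5 observations (nu_i = n_i - 1).
nu_eff = welch_satterthwaite([0.3, 0.4], [9, 4])
print(round(nu_eff, 2))  # ~8.56
```

The resulting nu_eff parameterizes the scaled-and-shifted t-distribution of the ISO-GUM approach; the Bayesian normal approximation discussed in the abstract sidesteps this calculation entirely.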

  6. Approximate analysis for repeated eigenvalue problems with applications to controls-structure integrated design

    NASA Technical Reports Server (NTRS)

    Kenny, Sean P.; Hou, Gene J. W.

    1994-01-01

    A method for eigenvalue and eigenvector approximate analysis for the case of repeated eigenvalues with distinct first derivatives is presented. The approximate analysis method developed involves a reparameterization of the multivariable structural eigenvalue problem in terms of a single positive-valued parameter. The resulting equations yield first-order approximations to changes in the eigenvalues and the eigenvectors associated with the repeated eigenvalue problem. This work also presents a numerical technique that facilitates the definition of an eigenvector derivative for the case of repeated eigenvalues with repeated eigenvalue derivatives (of all orders). Examples are given which demonstrate the application of such equations for sensitivity and approximate analysis. Emphasis is placed on the application of sensitivity analysis to large-scale structural and controls-structures optimization problems.
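For context, the first-order eigenvalue sensitivity formula for a *simple* (non-repeated) eigenvalue of a symmetric matrix, which the paper's repeated-eigenvalue treatment generalizes, is d(lambda_i)/dp = v_i^T (dA/dp) v_i. A minimal numpy check against finite differences (the matrix A(p) is an illustrative example, not from the paper):

```python
import numpy as np

# Symmetric matrix family A(p) and its derivative dA/dp.
A = lambda p: np.array([[2.0 + p, 1.0], [1.0, 3.0]])
dA = np.array([[1.0, 0.0], [0.0, 0.0]])

p0, h = 0.0, 1e-6
w, V = np.linalg.eigh(A(p0))  # eigenvalues ascending, orthonormal columns

# First-order sensitivities: d(lambda_i)/dp = v_i^T (dA/dp) v_i,
# valid only when the eigenvalues are distinct (as they are here).
pred = np.array([V[:, i] @ dA @ V[:, i] for i in range(2)])

# Verify against a finite difference of the eigenvalues.
w_h, _ = np.linalg.eigh(A(p0 + h))
fd = (w_h - w) / h
print(pred, fd)
```

When eigenvalues are repeated, the eigenvectors are not unique and this formula breaks down, which is precisely the case the reparameterization in the paper addresses.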

  7. Confidence bounds for normal and lognormal distribution coefficients of variation

    Treesearch

    Steve Verrill

    2003-01-01

    This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...
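One common large-sample approximation in this vein is the delta-method interval for the coefficient of variation of normal data (a generic sketch; this is not necessarily one of the specific approximate methods compared in the paper, and the data below are illustrative):

```python
import math
import statistics

def cv_confidence_interval(data, z=1.96):
    """Approximate 95% CI for the coefficient of variation of normal data
    via the delta method: Var(cv_hat) ~= cv**2 * (1/(2n) + cv**2/n).
    The 'exact' approach instead inverts the noncentral t distribution."""
    n = len(data)
    cv = statistics.stdev(data) / statistics.mean(data)
    se = cv * math.sqrt(1 / (2 * n) + cv**2 / n)
    return cv - z * se, cv + z * se

lo, hi = cv_confidence_interval([9.8, 10.1, 10.4, 9.6, 10.2, 9.9, 10.0, 10.3])
print(f"CV in [{lo:.4f}, {hi:.4f}]")
```

Consistent with the paper's finding, such approximations degrade for large CVs and small n, where the sampling distribution of the sample CV is markedly skewed.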

  8. Normal versus Noncentral Chi-Square Asymptotics of Misspecified Models

    ERIC Educational Resources Information Center

    Chun, So Yeon; Shapiro, Alexander

    2009-01-01

    The noncentral chi-square approximation of the distribution of the likelihood ratio (LR) test statistic is a critical part of the methodology in structural equation modeling. Recently, it was argued by some authors that in certain situations normal distributions may give a better approximation of the distribution of the LR test statistic. The main…

  9. A Comparison of the Influences of Verbal-Successive and Spatial-Simultaneous Factors on Achieving Readers in Fourth and Fifth Grade: A Multivariate Correlational Study.

    ERIC Educational Resources Information Center

    Solan, Harold A.

    1987-01-01

    This study involving 38 normally achieving fourth and fifth grade children confirmed previous studies indicating that both spatial-simultaneous (in which perceived stimuli are totally available at one point in time) and verbal-successive (information is presented in serial order) cognitive processing are important in normal learning. (DB)

  10. Is the ML Chi-Square Ever Robust to Nonnormality? A Cautionary Note with Missing Data

    ERIC Educational Resources Information Center

    Savalei, Victoria

    2008-01-01

    Normal theory maximum likelihood (ML) is by far the most popular estimation and testing method used in structural equation modeling (SEM), and it is the default in most SEM programs. Even though this approach assumes multivariate normality of the data, its use can be justified on the grounds that it is fairly robust to the violations of the…

  11. Variability in monthly serum bicarbonate measures in hemodialysis patients: a cohort study.

    PubMed

    Patel, Ravi; Paredes, William; Hall, Charles B; Nader, Mark A; Sapkota, Deepak; Folkert, Vaughn W; Abramowitz, Matthew K

    2015-12-21

    Some nephrologists have advocated an individualized approach to the prescription of bicarbonate hemodialysis. However, the utility of monthly serum bicarbonate levels for guiding and evaluating such treatment decisions has not been evaluated. We sought to define the variability of these measurements and to determine factors that are associated with month-to-month variability in pre-dialysis serum bicarbonate. We examined the monthly variability in serum bicarbonate measurements among 181 hemodialysis patients admitted to a free-standing dialysis unit in the Bronx, NY from 1/1/2008-6/30/2012. All patients were treated with a uniform bicarbonate dialysis prescription (bicarbonate 35 mEq/L, acetate 8 mEq/L). Pre-dialysis serum bicarbonate values were obtained from monthly laboratory reports. Month-to-month variability was defined using a rolling measurement for each time point. Only 34 % of high serum bicarbonate values (>26 mEq/L) remained high in the subsequent month, whereas 60 % converted to normal (22-26 mEq/L). Of all low values (<22 mEq/L), 41 % were normal the following month, while 58 % remained low. Using the mean 3-month bicarbonate, only 29 % of high values remained high in the next 3-month period. In multivariable-adjusted longitudinal models, both low and high serum bicarbonate values were associated with greater variability than were normal values (β = 0.12 (95 % CI 0.09-0.15) and 0.24 (0.18 to 0.29) respectively). Variability decreased with time, and was significantly associated with age, phosphate binder use, serum creatinine, potassium, and normalized protein catabolic rate. Monthly pre-dialysis serum bicarbonate levels are highly variable. Even if a clinician takes no action, approximately 50 % of bicarbonate values outside a normal range of 22-26 mEq/L will return to normal in the subsequent month. 
The decision to change the bicarbonate dialysis prescription should not be based on a single bicarbonate value, and even a 3-month mean may be insufficient.

  12. Discordance between net analyte signal theory and practical multivariate calibration.

    PubMed

    Brown, Christopher D

    2004-08-01

    Lorber's concept of net analyte signal is reviewed in the context of classical and inverse least-squares approaches to multivariate calibration. It is shown that, in the presence of device measurement error, the classical and inverse calibration procedures have radically different theoretical prediction objectives, and the assertion that the popular inverse least-squares procedures (including partial least squares, principal components regression) approximate Lorber's net analyte signal vector in the limit is disproved. Exact theoretical expressions for the prediction error bias, variance, and mean-squared error are given under general measurement error conditions, which reinforce the very discrepant behavior between these two predictive approaches, and Lorber's net analyte signal theory. Implications for multivariate figures of merit and numerous recently proposed preprocessing treatments involving orthogonal projections are also discussed.

  13. Multivariate flood risk assessment: reinsurance perspective

    NASA Astrophysics Data System (ADS)

    Ghizzoni, Tatiana; Ellenrieder, Tobias

    2013-04-01

    For insurance and re-insurance purposes, knowledge of the spatial characteristics of fluvial flooding is fundamental. The probability of simultaneous flooding at different locations during one event, and the associated severity and losses, have to be estimated in order to assess premiums and for accumulation control (Probable Maximum Loss calculations). Therefore, a statistical model able to describe the multivariate joint distribution of flood events at multiple locations is necessary. In this context, copulas can be viewed as alternative tools for multivariate simulation, as they formalize the dependence structures of random vectors. An application of copula functions for flood scenario generation is presented for Australia (Queensland, New South Wales and Victoria), where 100,000 possible flood scenarios covering approximately 15,000 years were simulated.
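The copula-based scenario generation described above can be sketched with a Gaussian copula: draw correlated normals, map them to uniforms through the normal CDF, then through each site's marginal quantile function. Everything numeric below (the correlation matrix, the Gumbel marginal parameters) is an illustrative assumption, not the calibrated Australian model:

```python
import math
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical correlation of annual flood peaks at three gauges.
corr = np.array([[1.0, 0.6, 0.3],
                 [0.6, 1.0, 0.5],
                 [0.3, 0.5, 1.0]])

# Gaussian copula: correlated normals -> uniforms -> Gumbel marginals.
L = np.linalg.cholesky(corr)
z = rng.standard_normal((100_000, 3)) @ L.T
norm_cdf = np.vectorize(lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0))))
u = norm_cdf(z)
loc, scale = 100.0, 30.0  # illustrative Gumbel parameters
peaks = loc - scale * np.log(-np.log(u))  # Gumbel quantile function

# Chance that all three sites exceed their 10-year level in one event;
# under independence this would be 0.1**3 = 0.001.
q10 = loc - scale * np.log(-np.log(0.9))
p_joint = np.mean((peaks > q10).all(axis=1))
print(f"joint 10-year exceedance probability ~ {p_joint:.4f}")
```

The positive dependence inflates the joint exceedance probability well above the independence value, which is exactly the accumulation effect reinsurers must capture.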

  14. Multivariate statistical process control (MSPC) using Raman spectroscopy for in-line culture cell monitoring considering time-varying batches synchronized with correlation optimized warping (COW).

    PubMed

    Liu, Ya-Juan; André, Silvère; Saint Cristau, Lydia; Lagresle, Sylvain; Hannas, Zahia; Calvosa, Éric; Devos, Olivier; Duponchel, Ludovic

    2017-02-01

    Multivariate statistical process control (MSPC) is increasingly popular as a response to the challenge posed by the large multivariate datasets that analytical instruments such as Raman spectroscopy generate when monitoring complex cell cultures in the biopharmaceutical industry. However, Raman spectroscopy for in-line monitoring often produces unsynchronized data sets, resulting in time-varying batches. Moreover, unsynchronized data sets are common in cell culture monitoring because spectroscopic measurements are generally recorded in an alternating fashion, with more than one optical probe connected in parallel to the same spectrometer. Synchronized batches are a prerequisite for the application of multivariate analyses such as multi-way principal component analysis (MPCA) for MSPC monitoring. Correlation optimized warping (COW) is a popular method for data alignment with satisfactory performance; however, it had never before been applied to synchronize the acquisition times of spectroscopic datasets in an MSPC application. In this paper we propose, for the first time, to use COW to synchronize batches of varying duration analyzed with Raman spectroscopy. In a second step, we developed MPCA models at different time intervals based on the normal operating condition (NOC) batches synchronized by COW. New batches are finally projected onto the corresponding MPCA model. We monitored the evolution of the batches using two multivariate control charts based on Hotelling's T2 and Q. As the results illustrate, the MSPC model was able to identify abnormal operating conditions, including contaminated batches, which is of prime importance in cell culture monitoring. We showed that Raman-based MSPC monitoring can be used to diagnose batches deviating from the normal condition with higher efficacy than traditional diagnosis, which would save time and money in the biopharmaceutical industry. Copyright © 2016 Elsevier B.V. All rights reserved.
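The Hotelling's T2 and Q statistics named in the abstract can be sketched with a toy PCA model. This is a minimal numpy illustration on synthetic data (the fault is a simple offset; none of this reproduces the paper's Raman batch model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic normal-operating-condition (NOC) data: 200 batches x 50 variables.
noc = rng.standard_normal((200, 50)) @ rng.standard_normal((50, 50)) * 0.1
mean = noc.mean(axis=0)

# PCA model of the NOC data via SVD; keep k components.
Xc = noc - mean
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3
P = Vt[:k].T                          # loadings (50 x k)
lam = (s[:k] ** 2) / (len(noc) - 1)   # variances of the k score directions

def t2_q(x):
    """Hotelling's T2 (variation inside the PCA model) and Q
    (squared residual outside the model) for one observation."""
    t = (x - mean) @ P
    t2 = np.sum(t**2 / lam)
    resid = (x - mean) - P @ t
    return t2, resid @ resid

t2_noc, q_noc = t2_q(noc[0])          # an in-control batch
t2_bad, q_bad = t2_q(noc[0] + 5.0)    # simulated faulty batch: offset on all variables
print(t2_noc, q_noc, t2_bad, q_bad)
```

A fault that leaves the PCA subspace, like the offset here, shows up dramatically in Q even when T2 stays moderate, which is why MSPC charts monitor both statistics.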

  15. Discrete factor approximations in simultaneous equation models: estimating the impact of a dummy endogenous variable on a continuous outcome.

    PubMed

    Mroz, T A

    1999-10-01

    This paper contains a Monte Carlo evaluation of estimators used to control for endogeneity of dummy explanatory variables in continuous outcome regression models. When the true model has bivariate normal disturbances, estimators using discrete factor approximations compare favorably to efficient estimators in terms of precision and bias; these approximation estimators dominate all the other estimators examined when the disturbances are non-normal. The experiments also indicate that one should liberally add points of support to the discrete factor distribution. The paper concludes with an application of the discrete factor approximation to the estimation of the impact of marriage on wages.

  16. Fast Detection of Copper Content in Rice by Laser-Induced Breakdown Spectroscopy with Uni- and Multivariate Analysis.

    PubMed

    Liu, Fei; Ye, Lanhan; Peng, Jiyu; Song, Kunlin; Shen, Tingting; Zhang, Chu; He, Yong

    2018-02-27

    Fast detection of heavy metals is very important for ensuring the quality and safety of crops. Laser-induced breakdown spectroscopy (LIBS), coupled with uni- and multivariate analysis, was applied for quantitative analysis of copper in three kinds of rice (Jiangsu rice, regular rice, and Simiao rice). For univariate analysis, three pre-processing methods were applied to reduce fluctuations, including background normalization, the internal standard method, and the standard normal variate (SNV). Linear regression models showed a strong correlation between spectral intensity and Cu content, with an R2 more than 0.97. The limit of detection (LOD) was around 5 ppm, lower than the tolerance limit of copper in foods. For multivariate analysis, partial least squares regression (PLSR) showed its advantage in extracting effective information for prediction, and its sensitivity reached 1.95 ppm, while support vector machine regression (SVMR) performed better in both calibration and prediction sets, where Rc2 and Rp2 reached 0.9979 and 0.9879, respectively. This study showed that LIBS could be considered as a constructive tool for the quantification of copper contamination in rice.
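Of the three pre-processing corrections named above, the standard normal variate (SNV) is the simplest to state: each spectrum is centered and scaled by its own mean and standard deviation, removing multiplicative scatter and offset differences between samples. A minimal sketch (the two toy "spectra" are illustrative):

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: row-wise centering and scaling of each
    spectrum, a common correction for multiplicative scatter effects."""
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, ddof=1, keepdims=True)
    return (spectra - mean) / std

# Two spectra with the same shape but a 10x intensity difference
# collapse to identical rows after SNV.
raw = np.array([[1.0, 2.0, 3.0, 4.0],
                [10.0, 20.0, 30.0, 40.0]])
corrected = snv(raw)
print(corrected)
```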

  17. Fast Detection of Copper Content in Rice by Laser-Induced Breakdown Spectroscopy with Uni- and Multivariate Analysis

    PubMed Central

    Ye, Lanhan; Song, Kunlin; Shen, Tingting

    2018-01-01

    Fast detection of heavy metals is very important for ensuring the quality and safety of crops. Laser-induced breakdown spectroscopy (LIBS), coupled with uni- and multivariate analysis, was applied for quantitative analysis of copper in three kinds of rice (Jiangsu rice, regular rice, and Simiao rice). For univariate analysis, three pre-processing methods were applied to reduce fluctuations, including background normalization, the internal standard method, and the standard normal variate (SNV). Linear regression models showed a strong correlation between spectral intensity and Cu content, with an R2 more than 0.97. The limit of detection (LOD) was around 5 ppm, lower than the tolerance limit of copper in foods. For multivariate analysis, partial least squares regression (PLSR) showed its advantage in extracting effective information for prediction, and its sensitivity reached 1.95 ppm, while support vector machine regression (SVMR) performed better in both calibration and prediction sets, where Rc2 and Rp2 reached 0.9979 and 0.9879, respectively. This study showed that LIBS could be considered as a constructive tool for the quantification of copper contamination in rice. PMID:29495445

  18. Circularly-symmetric complex normal ratio distribution for scalar transmissibility functions. Part I: Fundamentals

    NASA Astrophysics Data System (ADS)

    Yan, Wang-Ji; Ren, Wei-Xin

    2016-12-01

    Recent advances in signal processing and structural dynamics have spurred the adoption of transmissibility functions in academia and industry alike. Due to the inherent randomness of measurement and variability of environmental conditions, uncertainty impacts its applications. This study is focused on statistical inference for raw scalar transmissibility functions modeled as complex ratio random variables. The goal is achieved through companion papers. This paper (Part I) is dedicated to dealing with a formal mathematical proof. New theorems on multivariate circularly-symmetric complex normal ratio distribution are proved on the basis of principle of probabilistic transformation of continuous random vectors. The closed-form distributional formulas for multivariate ratios of correlated circularly-symmetric complex normal random variables are analytically derived. Afterwards, several properties are deduced as corollaries and lemmas to the new theorems. Monte Carlo simulation (MCS) is utilized to verify the accuracy of some representative cases. This work lays the mathematical groundwork to find probabilistic models for raw scalar transmissibility functions, which are to be expounded in detail in Part II of this study.

  19. 75 FR 2129 - Moriah Hydro Corporation; Notice of Preliminary Permit Application Accepted for Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-14

    ...-high, containing approximately 100 pump/turbine/generator units having a total installed capacity of...,400 acre-feet at normal water surface elevation of +1,000 feet Project Datum; (2) a lower reservoir..., with a surface area of about 50 acres and volume of approximately 2,400 acre-feet at normal water...

  20. Predictive 5-Year Survivorship Model of Cystic Fibrosis

    PubMed Central

    Liou, Theodore G.; Adler, Frederick R.; FitzSimmons, Stacey C.; Cahill, Barbara C.; Hibbs, Jonathan R.; Marshall, Bruce C.

    2007-01-01

    The objective of this study was to create a 5-year survivorship model to identify key clinical features of cystic fibrosis. Such a model could help researchers and clinicians to evaluate therapies, improve the design of prospective studies, monitor practice patterns, counsel individual patients, and determine the best candidates for lung transplantation. The authors used information from the Cystic Fibrosis Foundation Patient Registry (CFFPR), which has collected longitudinal data on approximately 90% of cystic fibrosis patients diagnosed in the United States since 1986. They developed multivariate logistic regression models by using data on 5,820 patients randomly selected from 11,630 in the CFFPR in 1993. Models were tested for goodness of fit and were validated for the remaining 5,810 patients for 1993. The validated 5-year survivorship model included age, forced expiratory volume in 1 second as a percentage of predicted normal, gender, weight-for-age z score, pancreatic sufficiency, diabetes mellitus, Staphylococcus aureus infection, Burkholderia cepacia infection, and annual number of acute pulmonary exacerbations. The model provides insights into the complex nature of cystic fibrosis and supplies a rigorous tool for clinical practice and research. PMID:11207152

  1. Can biomechanical variables predict improvement in crouch gait?

    PubMed Central

    Hicks, Jennifer L.; Delp, Scott L.; Schwartz, Michael H.

    2011-01-01

    Many patients respond positively to treatments for crouch gait, yet surgical outcomes are inconsistent and unpredictable. In this study, we developed a multivariable regression model to determine if biomechanical variables and other subject characteristics measured during a physical exam and gait analysis can predict which subjects with crouch gait will demonstrate improved knee kinematics on a follow-up gait analysis. We formulated the model and tested its performance by retrospectively analyzing 353 limbs of subjects who walked with crouch gait. The regression model was able to predict which subjects would demonstrate ‘improved’ and ‘unimproved’ knee kinematics with over 70% accuracy, and was able to explain approximately 49% of the variance in subjects’ change in knee flexion between gait analyses. We found that improvement in stance phase knee flexion was positively associated with three variables that were drawn from knowledge about the biomechanical contributors to crouch gait: i) adequate hamstrings lengths and velocities, possibly achieved via hamstrings lengthening surgery, ii) normal tibial torsion, possibly achieved via tibial derotation osteotomy, and iii) sufficient muscle strength. PMID:21616666

  2. Chronic pruritus: evaluation of patient needs and treatment goals with a special regard to differences according to pruritus classification and sex.

    PubMed

    Steinke, S; Bruland, P; Blome, C; Osada, N; Dugas, M; Fritz, F; Augustin, M; Ständer, S

    2017-02-01

    Chronic pruritus (CP) is present in approximately one-third of all dermatological patients. Diagnostics and treatment are challenging and impair patients' quality of life. To analyse therapeutic needs in terms of the importance of treatment goals in a large sample of patients with CP. Routine data of 2747 patients with CP were analysed with descriptive methods and significance tests (univariate and multivariate variance analyses). The importance of 27 need items was measured using the Patient Needs Questionnaire of the Patient Benefit Index. The most important needs were to find a clear diagnosis and treatment, to no longer experience itching and to have confidence in the therapy, which were quite or very important to > 90% of the patients. The least important goals concerned a normal working or sex life. Nine needs related mostly to disease and psychological symptoms, and some social needs differed in importance between sexes (P ≤ 0·05). Patients with pruritus on inflamed skin or with chronic scratch lesions judged more than half of all needs as more important than did patients with pruritus on noninflamed skin (P ≤ 0·05). In the multivariate model, age, pruritus intensity and quality of life had a significant effect on the importance of therapeutic needs besides sex and pruritus classification. Patients with CP present high levels of various therapeutic needs with differences by sex and clinical phenotype. The most important needs can be addressed through medical activities such as appropriate itch medication and a trustful doctor-patient relationship. © 2016 British Association of Dermatologists.

  3. Univariate and multivariate skewness and kurtosis for measuring nonnormality: Prevalence, influence and estimation.

    PubMed

    Cain, Meghan K; Zhang, Zhiyong; Yuan, Ke-Hai

    2017-10-01

    Nonnormality of univariate data has been extensively examined previously (Blanca et al., Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 9(2), 78-84, 2013; Micceri, Psychological Bulletin, 105(1), 156, 1989). However, less is known about the potential nonnormality of multivariate data, although multivariate analysis is commonly used in psychological and educational research. Using univariate and multivariate skewness and kurtosis as measures of nonnormality, this study examined 1,567 univariate distributions and 254 multivariate distributions collected from authors of articles published in Psychological Science and the American Education Research Journal. We found that 74% of univariate distributions and 68% of multivariate distributions deviated from normal distributions. In a simulation study using typical values of skewness and kurtosis that we collected, we found that the resulting Type I error rates were 17% in a t-test and 30% in a factor analysis under some conditions. Hence, we argue that it is time to routinely report skewness and kurtosis along with other summary statistics such as means and variances. To facilitate future reporting of skewness and kurtosis, we provide a tutorial on how to compute univariate and multivariate skewness and kurtosis with SAS, SPSS, R, and a newly developed Web application.
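
    Univariate moments and Mardia's multivariate kurtosis, a standard multivariate nonnormality measure of the kind discussed above, take only a few lines of NumPy; a sketch (generic formulas, not the authors' Web application):

```python
import numpy as np

def univariate_skew_kurt(x):
    """Sample skewness and excess kurtosis of a univariate sample."""
    z = (np.asarray(x, float) - np.mean(x)) / np.std(x)
    return (z**3).mean(), (z**4).mean() - 3.0

def mardia_kurtosis(X):
    """Mardia's multivariate kurtosis b_{2,p}: the mean fourth power of the
    Mahalanobis distance; approximately p*(p+2) under multivariate normality."""
    X = np.asarray(X, float)
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / len(X)
    d2 = np.einsum('ij,jk,ik->i', Xc, np.linalg.inv(S), Xc)  # squared Mahalanobis distances
    return (d2**2).mean()

rng = np.random.default_rng(1)
b2 = mardia_kurtosis(rng.standard_normal((5000, 3)))  # close to 3*(3+2) = 15
```

    Comparing b_{2,p} against p(p+2) is one quick way to flag the kind of multivariate nonnormality the study found to be prevalent.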

  4. The Natural History of Subclinical Hyperthyroidism in Graves' Disease: The Rule of Thirds.

    PubMed

    Zhyzhneuskaya, Sviatlana; Addison, Caroline; Tsatlidis, Vasileios; Weaver, Jolanta U; Razvi, Salman

    2016-06-01

    There is little information regarding the natural history of subclinical hyperthyroidism (SH) due to Graves' disease (GD). A prospective analysis was conducted of patients with SH due to GD between 2007 and 2013 with at least 12 months of follow-up. SH was diagnosed if serum thyrotropin (TSH) was below the laboratory reference range (0.4-4.0 mIU/L) and when thyroid hormones were normal. GD was confirmed by either a raised TSH receptor antibody (TRAb) level or uniform uptake on Technetium scan. Forty-four patients (89% female, 16% current smokers, and 5% with active Graves' orbitopathy) were diagnosed with SH due to GD. Over the follow-up period (median 32 months), approximately one third (34%) of the cohort progressed to overt hyperthyroidism, one third (34%) normalized their thyroid function, slightly less than one third (30%) remained in the SH state, while one person became hypothyroid. Multivariate regression analysis showed that older age and positive antithyroid peroxidase (TPO) antibody status had a positive association with risk of progression to overt hyperthyroidism, with hazard ratios of 1.06 ([confidence interval (CI) 1.02-1.10], p < 0.01) per year and 10.15 ([CI 1.83-56.23], p < 0.01), respectively, independent of other risk factors, including smoking, TRAb levels at diagnosis, and sex. A third each of patients with SH due to GD progress, normalize, or remain in the SH state. Older people and those with positive anti-TPO antibodies have a higher risk of progression of the disease. These novel data need to be verified and confirmed in larger cohorts and over longer periods of follow-up.

  5. Predicting the required number of training samples. [for remotely sensed image data based on covariance matrix estimate quality criterion of normal distribution

    NASA Technical Reports Server (NTRS)

    Kalayeh, H. M.; Landgrebe, D. A.

    1983-01-01

    A criterion which measures the quality of the estimate of the covariance matrix of a multivariate normal distribution is developed. Based on this criterion, the necessary number of training samples is predicted. Experimental results which are used as a guide for determining the number of training samples are included. Previously announced in STAR as N82-28109
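
    The underlying effect, the covariance estimate improving predictably as training samples are added, can be illustrated with a simple Monte Carlo sketch (a generic Frobenius-norm error measure, not necessarily the criterion developed in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 5
true_cov = np.eye(p)

def cov_error(n, reps=200):
    """Mean Frobenius-norm error of the sample covariance at sample size n."""
    errs = []
    for _ in range(reps):
        X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)
        errs.append(np.linalg.norm(np.cov(X, rowvar=False) - true_cov))
    return float(np.mean(errs))

err_small, err_large = cov_error(20), cov_error(500)  # error shrinks roughly like 1/sqrt(n)
```

    Plotting such an error curve against n is one way to decide when additional training samples stop paying off.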

  6. Subset Selection Procedures: A Review and an Assessment

    DTIC Science & Technology

    1984-02-01

    distance function (Alam and Rizvi, 1966; Gupta, 1966; Gupta and Studden, 1970), generalized variance ( Gnanadesikan and Gupta, 1970), and multiple... Gnanadesikan (1966) considered a location type procedure based on sample component means. Except in the case of bivariate normal, only a lower bound of the...Frischtak, 1973; Gnanadesikan , 1966) for ranking multivariate normal populations but the results in these cases are very limited in scope or are asymptotic

  7. Combining Frequency Doubling Technology Perimetry and Scanning Laser Polarimetry for Glaucoma Detection.

    PubMed

    Mwanza, Jean-Claude; Warren, Joshua L; Hochberg, Jessica T; Budenz, Donald L; Chang, Robert T; Ramulu, Pradeep Y

    2015-01-01

    To determine the ability of frequency doubling technology (FDT) and scanning laser polarimetry with variable corneal compensation (GDx-VCC) to detect glaucoma when used individually and in combination. One hundred ten normal and 114 glaucomatous subjects were tested with FDT C-20-5 screening protocol and the GDx-VCC. The discriminating ability was tested for each device individually and for both devices combined using GDx-NFI, GDx-TSNIT, number of missed points of FDT, and normal or abnormal FDT. Measures of discrimination included sensitivity, specificity, area under the curve (AUC), Akaike's information criterion (AIC), and prediction confidence interval lengths. For detecting glaucoma regardless of severity, the multivariable model resulting from the combination of GDx-TSNIT, number of abnormal points on FDT (NAP-FDT), and the interaction GDx-TSNIT×NAP-FDT (AIC: 88.28, AUC: 0.959, sensitivity: 94.6%, specificity: 89.5%) outperformed the best single-variable model provided by GDx-NFI (AIC: 120.88, AUC: 0.914, sensitivity: 87.8%, specificity: 84.2%). The multivariable model combining GDx-TSNIT, NAP-FDT, and interaction GDx-TSNIT×NAP-FDT consistently provided better discriminating abilities for detecting early, moderate, and severe glaucoma than the best single-variable models. The multivariable model including GDx-TSNIT, NAP-FDT, and the interaction GDx-TSNIT×NAP-FDT provides the best glaucoma prediction compared with all other multivariable and univariable models. Combining the FDT C-20-5 screening protocol and GDx-VCC improves glaucoma detection compared with using GDx or FDT alone.

  8. Bladder cancer diagnosis during cystoscopy using Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Grimbergen, M. C. M.; van Swol, C. F. P.; Draga, R. O. P.; van Diest, P.; Verdaasdonk, R. M.; Stone, N.; Bosch, J. H. L. R.

    2009-02-01

    Raman spectroscopy is an optical technique that can be used to obtain specific molecular information of biological tissues. It has been used successfully to differentiate normal and pre-malignant tissue in many organs. The goal of this study is to determine whether it is possible to distinguish normal tissue from bladder cancer using this system. The endoscopic Raman system consists of a 6 Fr endoscopic probe connected to a 785 nm diode laser and a spectral recording system. A total of 107 tissue samples were obtained from 54 patients with known bladder cancer during transurethral tumor resection. Immediately after surgical removal the samples were placed under the Raman probe and spectra were collected and stored for further analysis. The collected spectra were analyzed using multivariate statistical methods. In total 2949 Raman spectra were recorded ex vivo from cold cup biopsy samples with an integration time of 2 seconds. A multivariate algorithm allowed differentiation of normal and malignant tissue with a sensitivity and specificity of 78.5% and 78.9%, respectively. The results show the possibility of discerning normal from malignant bladder tissue by means of Raman spectroscopy using a small fiber based system. Despite the low number of samples, the results indicate that it might be possible to use this technique to grade identified bladder wall lesions during endoscopy.

  9. Accumulation risk assessment for the flooding hazard

    NASA Astrophysics Data System (ADS)

    Roth, Giorgio; Ghizzoni, Tatiana; Rudari, Roberto

    2010-05-01

    One of the main consequences of demographic and economic development and of the globalization of markets and trade is the accumulation of risks. In most cases, the cumulus of risks intuitively arises from the geographic concentration of a number of vulnerable elements in a single place. For natural events, risks can also accumulate, in addition to intensity, through an event's extension. In this case, the magnitude can be such that large areas, which may include many regions or even large portions of different countries, are struck by single catastrophic events. Among natural risks, the impact of the flooding hazard cannot be overstated. To cope with it, a variety of mitigation actions can be put in place: from the improvement of monitoring and alert systems to the development of hydraulic structures, through land-use restrictions, civil protection, and financial and insurance plans. All of these options have social and economic impacts, either positive or negative, whose proper estimation should rely on the assumption of appropriate present and future flood-risk scenarios. It is therefore necessary to identify suitable statistical methodologies, able to describe the multivariate aspects of the involved physical processes and their spatial dependence. In hydrology and meteorology, but also in finance and insurance practice, it was recognized early on that classical statistical distributions (e.g., the normal and gamma families) are of restricted use for modeling multivariate spatial data. Recent research efforts have therefore been directed towards developing statistical models capable of describing the forms of asymmetry manifest in data sets. This holds in particular for the quite frequent case of phenomena whose empirical outcome behaves in a non-normal fashion but still maintains some broad similarity with the multivariate normal distribution. Fruitful approaches rely on flexible models that include the normal distribution as a special or limiting case (e.g., the skew-normal or skew-t distributions). The present contribution attempts to provide a better estimate of the joint probability distribution describing flood events in a multi-site, multi-basin fashion. This goal is pursued through the multivariate skew-t distribution, which allows the joint probability distribution to be defined analytically. The performance of the skew-t distribution is discussed with reference to the Tanaro River in northwestern Italy. To highlight the characteristics of the correlation structure, both nested and non-nested gauging stations are selected, with significantly different contributing areas.
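
    Sampling from the skew-normal family mentioned above is straightforward via Azzalini's stochastic representation; a univariate sketch (the multivariate skew-t used in the study is analogous but requires heavier machinery):

```python
import numpy as np

rng = np.random.default_rng(2)

def skew_normal(n, alpha):
    """Azzalini skew-normal samples via the stochastic representation
    Z = delta*|U0| + sqrt(1 - delta**2)*U1, with delta = alpha/sqrt(1 + alpha**2)."""
    delta = alpha / np.sqrt(1.0 + alpha**2)
    u0, u1 = rng.standard_normal(n), rng.standard_normal(n)
    return delta * np.abs(u0) + np.sqrt(1.0 - delta**2) * u1

z0 = skew_normal(100_000, 0.0)  # alpha = 0 recovers the standard normal
z5 = skew_normal(100_000, 5.0)  # strongly right-skewed
```

    Setting alpha = 0 recovers the normal distribution exactly, which is the "special or limiting case" property the abstract highlights.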

  10. Multivariate Protein Signatures of Pre-Clinical Alzheimer's Disease in the Alzheimer's Disease Neuroimaging Initiative (ADNI) Plasma Proteome Dataset

    PubMed Central

    Johnstone, Daniel; Milward, Elizabeth A.; Berretta, Regina; Moscato, Pablo

    2012-01-01

    Background Recent Alzheimer's disease (AD) research has focused on finding biomarkers to identify disease at the pre-clinical stage of mild cognitive impairment (MCI), allowing treatment to be initiated before irreversible damage occurs. Many studies have examined brain imaging or cerebrospinal fluid but there is also growing interest in blood biomarkers. The Alzheimer's Disease Neuroimaging Initiative (ADNI) has generated data on 190 plasma analytes in 566 individuals with MCI, AD or normal cognition. We conducted independent analyses of this dataset to identify plasma protein signatures predicting pre-clinical AD. Methods and Findings We focused on identifying signatures that discriminate cognitively normal controls (n = 54) from individuals with MCI who subsequently progress to AD (n = 163). Based on p value, apolipoprotein E (APOE) showed the strongest difference between these groups (p = 2.3×10−13). We applied a multivariate approach based on combinatorial optimization ((α,β)-k Feature Set Selection), which retains information about individual participants and maintains the context of interrelationships between different analytes, to identify the optimal set of analytes (signature) to discriminate these two groups. We identified 11-analyte signatures achieving values of sensitivity and specificity between 65% and 86% for both MCI and AD groups, depending on whether APOE was included and other factors. Classification accuracy was improved by considering “meta-features,” representing the difference in relative abundance of two analytes, with an 8-meta-feature signature consistently achieving sensitivity and specificity both over 85%. Generating signatures based on longitudinal rather than cross-sectional data further improved classification accuracy, returning sensitivities and specificities of approximately 90%. 
Conclusions Applying these novel analysis approaches to the powerful and well-characterized ADNI dataset has identified sets of plasma biomarkers for pre-clinical AD. While studies of independent test sets are required to validate the signatures, these analyses provide a starting point for developing a cost-effective and minimally invasive test capable of diagnosing AD in its pre-clinical stages. PMID:22485168

  11. Fine Mapping Causal Variants with an Approximate Bayesian Method Using Marginal Test Statistics.

    PubMed

    Chen, Wenan; Larrabee, Beth R; Ovsyannikova, Inna G; Kennedy, Richard B; Haralambieva, Iana H; Poland, Gregory A; Schaid, Daniel J

    2015-07-01

    Two recently developed fine-mapping methods, CAVIAR and PAINTOR, demonstrate better performance over other fine-mapping methods. They also have the advantage of using only the marginal test statistics and the correlation among SNPs. Both methods leverage the fact that the marginal test statistics asymptotically follow a multivariate normal distribution and are likelihood based. However, their relationship with Bayesian fine mapping, such as BIMBAM, is not clear. In this study, we first show that CAVIAR and BIMBAM are actually approximately equivalent to each other. This leads to a fine-mapping method using marginal test statistics in the Bayesian framework, which we call CAVIAR Bayes factor (CAVIARBF). Another advantage of the Bayesian framework is that it can answer both association and fine-mapping questions. We also used simulations to compare CAVIARBF with other methods under different numbers of causal variants. The results showed that both CAVIARBF and BIMBAM have better performance than PAINTOR and other methods. Compared to BIMBAM, CAVIARBF has the advantage of using only marginal test statistics and takes about one-quarter to one-fifth of the running time. We applied different methods on two independent cohorts of the same phenotype. Results showed that CAVIARBF, BIMBAM, and PAINTOR selected the same top 3 SNPs; however, CAVIARBF and BIMBAM had better consistency in selecting the top 10 ranked SNPs between the two cohorts. Software is available at https://bitbucket.org/Wenan/caviarbf. Copyright © 2015 by the Genetics Society of America.
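
    The link between marginal test statistics and Bayes factors can be made concrete with a Wakefield-style approximate Bayes factor for a single variant (a simplified single-SNP sketch under an assumed N(0, w) effect prior, not the CAVIARBF implementation):

```python
import numpy as np

def log_bf(z, v, w):
    """Log Bayes factor (H1 vs H0) for one variant given its marginal z-score.
    v is the sampling variance of the effect estimate and the effect prior is
    N(0, w); both values here are assumptions for illustration."""
    r = w / (v + w)
    return 0.5 * np.log(1.0 - r) + 0.5 * z**2 * r

bf_null = np.exp(log_bf(z=0.5, v=1.0, w=0.5))  # weak marginal evidence
bf_hit = np.exp(log_bf(z=6.0, v=1.0, w=0.5))   # strong association
```

    Multi-SNP versions replace the scalar z by a vector of marginal statistics with a multivariate normal likelihood built from the SNP correlation matrix, which is the setting the abstract describes.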

  12. Braking System Integration in Dual Mode Systems

    DOT National Transportation Integrated Search

    1974-05-01

    An optimal braking system for Dual Mode is a complex product of a vast number of multivariate, interdependent parameters that encompass on-guideway and off-guideway operation as well as normal and emergency braking. Details of, and interrelations amo...

  13. Testing Mean Differences among Groups: Multivariate and Repeated Measures Analysis with Minimal Assumptions

    PubMed Central

    Bathke, Arne C.; Friedrich, Sarah; Pauly, Markus; Konietschke, Frank; Staffen, Wolfgang; Strobl, Nicolas; Höller, Yvonne

    2018-01-01

    ABSTRACT To date, there is a lack of satisfactory inferential techniques for the analysis of multivariate data in factorial designs, when only minimal assumptions on the data can be made. Presently available methods are limited to very particular study designs or assume either multivariate normality or equal covariance matrices across groups, or they do not allow for an assessment of the interaction effects across within-subjects and between-subjects variables. We propose and methodologically validate a parametric bootstrap approach that does not suffer from any of the above limitations, and thus provides a rather general and comprehensive methodological route to inference for multivariate and repeated measures data. As an example application, we consider data from two different Alzheimer’s disease (AD) examination modalities that may be used for precise and early diagnosis, namely, single-photon emission computed tomography (SPECT) and electroencephalogram (EEG). These data violate the assumptions of classical multivariate methods, and indeed classical methods would not have yielded the same conclusions with regard to some of the factors involved. PMID:29565679

  14. Predicting major element mineral/melt equilibria - A statistical approach

    NASA Technical Reports Server (NTRS)

    Hostetler, C. J.; Drake, M. J.

    1980-01-01

    Empirical equations have been developed for calculating the mole fractions of NaO0.5, MgO, AlO1.5, SiO2, KO0.5, CaO, TiO2, and FeO in a solid phase of initially unknown identity given only the composition of the coexisting silicate melt. The approach involves a linear multivariate regression analysis in which solid composition is expressed as a Taylor series expansion of the liquid compositions. An internally consistent precision of approximately 0.94 is obtained; that is, the nature of the liquidus phase in the input data set can be correctly predicted for approximately 94% of the entries. The composition of the liquidus phase may be calculated to better than 5 mol % absolute. An important feature of this 'generalized solid' model is its reversibility; that is, the dependent and independent variables in the linear multivariate regression may be inverted to permit prediction of the composition of a silicate liquid produced by equilibrium partial melting of a polymineralic source assemblage.
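
    The 'generalized solid' idea, a linear multivariate regression from melt composition to solid composition, can be sketched with synthetic data (all dimensions, names, and coefficients here are illustrative, not the paper's calibration):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in: n melt compositions (k oxide mole fractions) mapped
# linearly to solid compositions plus noise; values are illustrative only.
n, k = 200, 8
melt = rng.random((n, k))
true_B = 0.3 * rng.standard_normal((k, k))
solid = melt @ true_B + 0.01 * rng.standard_normal((n, k))

# Fit the multivariate linear model solid ~ melt by least squares. Because the
# fitted map is linear, the roles of dependent and independent variables can be
# swapped and refit, which is the reversibility property the abstract notes.
B_hat, *_ = np.linalg.lstsq(melt, solid, rcond=None)
pred = melt @ B_hat
```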

  15. Multi-variate joint PDF for non-Gaussianities: exact formulation and generic approximations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verde, Licia; Jimenez, Raul; Alvarez-Gaume, Luis

    2013-06-01

    We provide an exact expression for the multi-variate joint probability distribution function of non-Gaussian fields primordially arising from local transformations of a Gaussian field. This kind of non-Gaussianity is generated in many models of inflation. We apply our expression to the non-Gaussianity estimation from Cosmic Microwave Background maps and the halo mass function where we obtain analytical expressions. We also provide analytic approximations and their range of validity. For the Cosmic Microwave Background we give a fast way to compute the PDF which is valid up to more than 7σ for f_NL values (both true and sampled) not ruled out by current observations, which consists of expressing the PDF as a combination of bispectrum and trispectrum of the temperature maps. The resulting expression is valid for any kind of non-Gaussianity and is not limited to the local type. The above results may serve as the basis for a fully Bayesian analysis of the non-Gaussianity parameter.

  16. A power analysis for multivariate tests of temporal trend in species composition.

    PubMed

    Irvine, Kathryn M; Dinger, Eric C; Sarr, Daniel

    2011-10-01

    Long-term monitoring programs emphasize power analysis as a tool to determine the sampling effort necessary to effectively document ecologically significant changes in ecosystems. Programs that monitor entire multispecies assemblages require a method for determining the power of multivariate statistical models to detect trend. We provide a method to simulate presence-absence species assemblage data that are consistent with increasing or decreasing directional change in species composition within multiple sites. This step is the foundation for using Monte Carlo methods to approximate the power of any multivariate method for detecting temporal trends. We focus on comparing the power of the Mantel test, permutational multivariate analysis of variance, and constrained analysis of principal coordinates. We find that the power of the various methods we investigate is sensitive to the number of species in the community, univariate species patterns, and the number of sites sampled over time. For increasing directional change scenarios, constrained analysis of principal coordinates was as or more powerful than permutational multivariate analysis of variance, while the Mantel test was the least powerful. However, in our investigation of decreasing directional change, the Mantel test was typically as or more powerful than the other models.
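
    The simulation foundation described, presence-absence assemblages with directional change plus Monte Carlo power estimation, can be sketched as follows (a deliberately crude richness-trend statistic stands in for the multivariate tests compared in the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_assemblage(n_years, n_species, trend):
    """Presence-absence matrix (years x species) whose occupancy probabilities
    drift linearly over time -- a directional change in composition."""
    t = np.linspace(0.0, 1.0, n_years)[:, None]
    base = rng.uniform(0.2, 0.8, size=n_species)
    p = np.clip(base + trend * t, 0.01, 0.99)
    return (rng.random((n_years, n_species)) < p).astype(int)

def power_estimate(trend, n_sims=300, n_years=20, n_species=30):
    """Monte Carlo power of a crude trend test: t-statistic of the
    correlation between year and species richness."""
    hits = 0
    for _ in range(n_sims):
        Y = simulate_assemblage(n_years, n_species, trend)
        r = np.corrcoef(np.arange(n_years), Y.sum(axis=1))[0, 1]
        t_stat = abs(r) * np.sqrt(n_years - 2) / np.sqrt(1.0 - r**2)
        hits += t_stat > 2.1  # approx. two-sided 5% critical value, df = 18
    return hits / n_sims

power_hi = power_estimate(trend=0.6)  # strong directional change
```

    Swapping the richness statistic for a Mantel test or PERMANOVA applied to the simulated matrices yields exactly the kind of power comparison the abstract reports.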

  17. Arm structure in normal spiral galaxies, 1: Multivariate data for 492 galaxies

    NASA Technical Reports Server (NTRS)

    Magri, Christopher

    1994-01-01

    Multivariate data have been collected as part of an effort to develop a new classification system for spiral galaxies, one which is not necessarily based on subjective morphological properties. A sample of 492 moderately bright northern Sa and Sc spirals was chosen for future statistical analysis. New observations were made at 20 and 21 cm; the latter data are described in detail here. Infrared Astronomy Satellite (IRAS) fluxes were obtained from archival data. Finally, new estimates of arm pattern randomness and of local environmental harshness were compiled for most sample objects.

  18. Normalization and Implementation of Three Gravitational Acceleration Models

    NASA Technical Reports Server (NTRS)

    Eckman, Randy A.; Brown, Aaron J.; Adamo, Daniel R.; Gottlieb, Robert G.

    2016-01-01

    Unlike the uniform density spherical shell approximations of Newton, the consequence of spaceflight in the real universe is that gravitational fields are sensitive to the asphericity of their generating central bodies. The gravitational potential of an aspherical central body is typically resolved using spherical harmonic approximations. However, attempting to directly calculate the spherical harmonic approximations results in at least two singularities that must be removed to generalize the method and solve for any possible orbit, including polar orbits. Samuel Pines, Bill Lear, and Robert Gottlieb developed three unique algorithms to eliminate these singularities. This paper documents the methodical normalization of two of the three known formulations for singularity-free gravitational acceleration (namely, the Lear and Gottlieb algorithms) and formulates a general method for defining normalization parameters used to generate normalized Legendre polynomials and Associated Legendre Functions (ALFs) for any algorithm. A treatment of the conventional formulation of the gravitational potential and acceleration is also provided, in addition to a brief overview of the philosophical differences between the three known singularity-free algorithms.
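
    The motivation for normalization is easy to demonstrate numerically: unnormalized associated Legendre functions grow factorially with degree and order, while 4π (geodesy) normalized values remain of order one. A sketch of the standard recursions and normalization factors (a generic formulation, not the Pines, Lear, or Gottlieb algorithms):

```python
import math
import numpy as np

def alf(nmax, x):
    """Unnormalized associated Legendre functions P[n, m](x) by the standard
    recursions (Condon-Shortley phase omitted, as is usual in geodesy)."""
    P = np.zeros((nmax + 1, nmax + 1))
    P[0, 0] = 1.0
    s = math.sqrt(1.0 - x * x)
    for m in range(1, nmax + 1):
        P[m, m] = (2 * m - 1) * s * P[m - 1, m - 1]          # sectoral seed
    for m in range(nmax):
        P[m + 1, m] = (2 * m + 1) * x * P[m, m]              # first off-diagonal
    for m in range(nmax + 1):
        for n in range(m + 2, nmax + 1):                     # upward in degree
            P[n, m] = ((2 * n - 1) * x * P[n - 1, m] - (n + m - 1) * P[n - 2, m]) / (n - m)
    return P

def norm_factor(n, m):
    """4-pi (geodesy) normalization factor for degree n, order m."""
    k = 2.0 if m > 0 else 1.0
    return math.sqrt(k * (2 * n + 1) * math.factorial(n - m) / math.factorial(n + m))

nmax, x = 30, 0.5
P = alf(nmax, x)
Pbar = np.array([[P[n, m] * norm_factor(n, m) if m <= n else 0.0
                  for m in range(nmax + 1)] for n in range(nmax + 1)])
```

    Even at degree 30 the unnormalized values already exceed 1e10 while the normalized values stay of order one, which is why flight-quality implementations recur directly on normalized quantities instead of applying the factors afterwards.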

  19. Analysis of the dynamic response of a supersonic inlet to flow-field perturbations upstream of the normal shock

    NASA Technical Reports Server (NTRS)

    Cole, G. L.; Willoh, R. G.

    1975-01-01

    A linearized mathematical analysis is presented for determining the response of normal shock position and subsonic duct pressures to flow-field perturbations upstream of the normal shock in mixed-compression supersonic inlets. The inlet duct cross-sectional area variation is approximated by constant-area sections; this approximation results in one-dimensional wave equations. A movable normal shock separates the supersonic and subsonic flow regions, and a choked exit is assumed for the inlet exit condition. The analysis leads to a closed-form matrix solution for the shock position and pressure transfer functions. Analytical frequency response results are compared with experimental data and a method of characteristics solution.

  20. Prevalence, Determinants, and Clinical Significance of Masked Hypertension in a Population-Based Sample of African Americans: The Jackson Heart Study.

    PubMed

    Diaz, Keith M; Veerabhadrappa, Praveen; Brown, Michael D; Whited, Matthew C; Dubbert, Patricia M; Hickson, DeMarc A

    2015-07-01

    The disproportionate rates of cardiovascular disease in African Americans may, in part, be due to suboptimal assessment of blood pressure (BP) with clinic BP measurements alone. To date, however, the prevalence of masked hypertension in African Americans has not been fully delineated. The purpose of this study was to evaluate masked hypertension prevalence in a large population-based sample of African Americans and examine its determinants and association with indices of target organ damage (TOD). Clinic and 24-hour ambulatory BP monitoring were conducted in 972 African Americans enrolled in the Jackson Heart Study. Common carotid artery intima-media thickness, left ventricular mass index, and the urinary albumin:creatinine excretion ratio were evaluated as indices of TOD. Masked hypertension prevalence was 25.9% in the overall sample and 34.4% in participants with normal clinic BP. All indices of TOD were significantly higher in masked hypertensives compared to sustained normotensives and were similar between masked hypertensives and sustained hypertensives. Male gender, smoking, diabetes, and antihypertensive medication use were independent determinants of masked hypertension in multivariate analyses. In this population-based cohort of African Americans, approximately one-third of participants with presumably normal clinic BP had masked hypertension when BP was assessed in their daily environment. Masked hypertension was accompanied by a greater degree of TOD in this cohort. © American Journal of Hypertension, Ltd 2014. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  1. Body mass index and injury risk among US children 9-15 years old in motor vehicle crashes.

    PubMed

    Pollack, K M; Xie, D; Arbogast, K B; Durbin, D R

    2008-12-01

    To determine the relationship between body mass index (BMI) and injury risk among US children in motor vehicle crashes. Cross-sectional study using data from the Partners for Child Passenger Safety study, a child-focused crash surveillance system. A probability sample of children, 9-15 years of age, involved in crashes in parent-operated vehicles between 1 December 2000 and 31 December 2006. The odds ratio of Abbreviated Injury Scale (AIS) 2+ injuries (overall and body region specific) by BMI category: underweight, normal, overweight, and obese. The study sample included 3232 children in 2873 vehicles, representing a population estimate of 54 616 children in 49 037 vehicles. Approximately 15% (n = 502) sustained an AIS 2+ injury to any body region; 34% of the children were overweight or obese. There was no overall increase in injury risk by BMI; however, body region differences were found. In multivariate logistic regression, compared with normal weight children, the odds of sustaining an AIS 2+ injury to the extremities for overweight and obese children was 2.64 (95% CI 1.64 to 4.77) and 2.54 (95% CI 1.15 to 5.59), respectively. Although overweight and obese children are not at increased overall risk of injury, they are at increased risk of injury to the lower and upper extremities. This increased risk may be due to a combination of physiology, biomechanical forces, and vehicle design.

  2. On Recruiting: A Multivariate Analysis of Marine Corps Recruiters and the Market

    DTIC Science & Technology

    Three recommendations result from this study. The quantitative recommendation developed in this thesis is to add approximately three missioned...canvassing recruiters per Recruiting Station, or 144 total, where the marginal cost of the 1,400 potentially gained contracts is the most economical manpower

  3. MODELING SNAKE MICROHABITAT FROM RADIOTELEMETRY STUDIES USING POLYTOMOUS LOGISTIC REGRESSION

    EPA Science Inventory

    Multivariate analysis of snake microhabitat has historically used techniques that were derived under assumptions of normality and common covariance structure (e.g., discriminant function analysis, MANOVA). In this study, polytomous logistic regression (PLR), which does not require ...

  4. Beneficial effects of voluntary wheel running on the properties of dystrophic mouse muscle.

    PubMed

    Hayes, A; Williams, D A

    1996-02-01

    Effects of voluntary exercise on the isometric contractile, fatigue, and histochemical properties of hindlimb dystrophic (mdx and 129ReJ dy/dy) skeletal muscles were investigated. Mice were allowed free access to a voluntary running wheel at 4 wk of age for a duration of 16 (mdx) or 5 (dy/dy) wk. Running performance of mdx mice (approximately 4 km/day at 1.6 km/h) was inferior to that of normal mice (approximately 6.5 km/day at 2.1 km/h). However, exercise improved the force output (approximately 15%) and the fatigue resistance of both C57BL/10 and mdx soleus muscles. These changes coincided with increased proportions of smaller type I fibers and decreased proportions of larger type IIa fibers in the mdx soleus. The extensor digitorum longus of mdx, but not of normal, mice also exhibited improved resistance to fatigue and conversion towards oxidative fiber types. The dy/dy animals were capable of exercising, yet ran significantly less than normal animals (approximately 0.5 km/day). Despite this, running increased the force output of the plantaris muscle (approximately 50%). Taken together, the results showed that exercise can have beneficial effects on dystrophic skeletal muscles.

  5. Concurrent generation of multivariate mixed data with variables of dissimilar types.

    PubMed

    Amatya, Anup; Demirtas, Hakan

    2016-01-01

    Data sets originating from a wide range of research studies are composed of multiple variables that are correlated and of dissimilar types, primarily count, binary/ordinal, and continuous attributes. The present paper builds on previous work on multivariate data generation and develops a framework for generating multivariate mixed data with a pre-specified correlation matrix. The generated data consist of components that are marginally count, binary, ordinal and continuous, where the count and continuous variables follow the generalized Poisson and normal distributions, respectively. The use of the generalized Poisson distribution provides a flexible mechanism which allows under- and over-dispersed count variables generally encountered in practice. A step-by-step algorithm is provided and its performance is evaluated using simulated and real-data scenarios.
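    The generation scheme can be sketched with a NORTA-style construction (correlated normals transformed margin by margin). This is a simplified illustration, not the authors' algorithm: the paper uses the generalized Poisson distribution, which SciPy does not provide, so an ordinary Poisson margin stands in, and the latent correlation matrix below is an arbitrary assumption.

```python
# NORTA-style sketch: draw correlated standard normals, then transform
# each margin to the target type -- continuous (left as normal), binary
# (thresholding), and count (inverse CDF of a Poisson; plain Poisson
# stands in for the paper's generalized Poisson).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Latent correlation matrix (assumed, for illustration only).
R = np.array([[1.0, 0.5, 0.3],
              [0.5, 1.0, 0.4],
              [0.3, 0.4, 1.0]])

n = 10_000
z = rng.multivariate_normal(np.zeros(3), R, size=n)  # correlated normals
u = stats.norm.cdf(z)                                # uniform margins

x_cont = z[:, 0]                                     # continuous margin
x_bin = (u[:, 1] > 0.7).astype(int)                  # binary, P(X=1) = 0.3
x_count = stats.poisson.ppf(u[:, 2], mu=4).astype(int)  # Poisson(4) counts

data = np.column_stack([x_cont, x_bin, x_count])
print(np.corrcoef(data, rowvar=False).round(2))
```

    Note that the output correlations are attenuated relative to the latent matrix; solving for the latent matrix that yields a pre-specified output correlation is precisely the harder step the paper addresses.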

  6. Hybrid least squares multivariate spectral analysis methods

    DOEpatents

    Haaland, David M.

    2004-03-23

    A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following prediction or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The hybrid method herein means a combination of an initial calibration step with subsequent analysis by an inverse multivariate analysis method. A spectral shape herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The shape can be continuous, discontinuous, or even discrete points illustrative of the particular effect.
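    The hybrid idea can be illustrated with a toy classical least squares (CLS) workflow. This sketch is not the patented implementation; the spectral shapes, noise levels, and drift interferent below are invented for illustration.

```python
# Illustrative sketch: CLS calibration A = C @ K, then prediction in
# which an extra, non-calibrated spectral shape is appended to K so the
# interferent no longer biases the concentration estimates.
import numpy as np

rng = np.random.default_rng(0)
wavelengths = 200
x = np.arange(wavelengths)
k1 = np.exp(-0.5 * ((x - 80) / 10.0) ** 2)    # pure spectrum, component 1
k2 = np.exp(-0.5 * ((x - 120) / 15.0) ** 2)   # pure spectrum, component 2
drift = np.linspace(0.0, 1.0, wavelengths)    # interferent shape

# Calibration: known concentrations, spectra WITHOUT the interferent.
C = rng.uniform(0.1, 1.0, size=(10, 2))
A = C @ np.vstack([k1, k2]) + 0.001 * rng.standard_normal((10, wavelengths))
K = np.linalg.lstsq(C, A, rcond=None)[0]      # estimated pure spectra

# Prediction sample contaminated by the drift shape.
c_true = np.array([0.4, 0.7])
a = c_true @ np.vstack([k1, k2]) + 0.3 * drift

cls_only = np.linalg.lstsq(K.T, a, rcond=None)[0]
K_hybrid = np.vstack([K, drift])              # append the known shape
hybrid = np.linalg.lstsq(K_hybrid.T, a, rcond=None)[0][:2]

print("CLS only :", cls_only.round(3))
print("hybrid   :", hybrid.round(3), "(true:", c_true, ")")
```

    The hybrid estimate recovers the true concentrations because the drift variance is explicitly modeled at prediction time, while the plain CLS estimate absorbs the drift into the component coefficients.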

  7. Hybrid least squares multivariate spectral analysis methods

    DOEpatents

    Haaland, David M.

    2002-01-01

    A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following estimation or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The "hybrid" method herein means a combination of an initial classical least squares analysis calibration step with subsequent analysis by an inverse multivariate analysis method. A "spectral shape" herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The "shape" can be continuous, discontinuous, or even discrete points illustrative of the particular effect.

  8. Coupled-mode theory and Fano resonances in guided-mode resonant gratings: the conical diffraction mounting.

    PubMed

    Bykov, Dmitry A; Doskolovich, Leonid L; Soifer, Victor A

    2017-01-23

    We study resonances of guided-mode resonant gratings in conical mounting. By developing 2D time-dependent coupled-mode theory we obtain simple approximations of the transmission and reflection coefficients. Being functions of the incident light's frequency and in-plane wave vector components, the obtained approximations can be considered as multi-variable generalizations of the Fano line shape. We show that the approximations are in good agreement with the rigorously calculated transmission and reflection spectra. We use the developed theory to investigate angular tolerances of the considered structures and to obtain mode excitation conditions. In particular, we obtain the cross-polarization mode excitation conditions in the case of conical mounting.

  9. Influence assessment in censored mixed-effects models using the multivariate Student’s-t distribution

    PubMed Central

    Matos, Larissa A.; Bandyopadhyay, Dipankar; Castro, Luis M.; Lachos, Victor H.

    2015-01-01

    In biomedical studies on HIV RNA dynamics, viral loads generate repeated measures that are often subjected to upper and lower detection limits, and hence these responses are either left- or right-censored. Linear and non-linear mixed-effects censored (LMEC/NLMEC) models are routinely used to analyse these longitudinal data, with normality assumptions for the random effects and residual errors. However, the derived inference may not be robust when these underlying normality assumptions are questionable, especially in the presence of outliers and thick tails. Motivated by this, Matos et al. (2013b) recently proposed an exact EM-type algorithm for LMEC/NLMEC models using a multivariate Student’s-t distribution, with closed-form expressions at the E-step. In this paper, we develop influence diagnostics for LMEC/NLMEC models using the multivariate Student’s-t density, based on the conditional expectation of the complete data log-likelihood. This partially eliminates the complexity associated with the approach of Cook (1977, 1986) for censored mixed-effects models. The new methodology is illustrated via an application to a longitudinal HIV dataset. In addition, a simulation study explores the accuracy of the proposed measures in detecting possible influential observations for heavy-tailed censored data under different perturbation and censoring schemes. PMID:26190871

  10. Robust multivariate nonparametric tests for detection of two-sample location shift in clinical trials

    PubMed Central

    Jiang, Xuejun; Guo, Xu; Zhang, Ning; Wang, Bo

    2018-01-01

    This article presents and investigates the performance of a series of robust multivariate nonparametric tests for detection of location shift between two multivariate samples in randomized controlled trials. The tests are built upon robust estimators of distribution locations (medians, Hodges-Lehmann estimators, and an extended U statistic) with both unscaled and scaled versions. The nonparametric tests are robust to outliers and do not assume that the two samples are drawn from multivariate normal distributions. Bootstrap and permutation approaches are introduced for determining the p-values of the proposed test statistics. Simulation studies are conducted and numerical results are reported to examine the performance of the proposed statistical tests. The numerical results demonstrate that the robust multivariate nonparametric tests constructed from the Hodges-Lehmann estimators are more efficient than those based on medians and the extended U statistic. The permutation approach can provide a more stringent control of Type I error and is generally more powerful than the bootstrap procedure. The proposed robust nonparametric tests are applied to detect multivariate distributional difference between the intervention and control groups in the Thai Healthy Choices study and examine the intervention effect of a four-session motivational interviewing-based intervention developed in the study to reduce risk behaviors among youth living with HIV. PMID:29672555
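    The permutation approach can be sketched as follows. This is a deliberately simplified stand-in, not the paper's procedure: the coordinate-wise median replaces the Hodges-Lehmann estimators, and no scaling is applied.

```python
# Permutation test for a multivariate two-sample location shift, using
# the Euclidean norm of the difference of coordinate-wise medians as a
# robust test statistic (a simplified stand-in for the paper's tests).
import numpy as np

def location_shift_pvalue(x, y, n_perm=2000, seed=0):
    """Permutation p-value for a location shift between samples
    x (n1 x p) and y (n2 x p)."""
    rng = np.random.default_rng(seed)
    pooled = np.vstack([x, y])
    n1 = len(x)
    observed = np.linalg.norm(np.median(x, axis=0) - np.median(y, axis=0))
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        xs, ys = pooled[perm[:n1]], pooled[perm[n1:]]
        count += np.linalg.norm(np.median(xs, axis=0)
                                - np.median(ys, axis=0)) >= observed
    return (count + 1) / (n_perm + 1)   # add-one correction

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=(60, 3))
y = rng.normal(0.8, 1.0, size=(60, 3))  # shifted in every coordinate
print(location_shift_pvalue(x, y))       # small p-value: shift detected
```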

  11. Multivariate frequency domain analysis of protein dynamics

    NASA Astrophysics Data System (ADS)

    Matsunaga, Yasuhiro; Fuchigami, Sotaro; Kidera, Akinori

    2009-03-01

    Multivariate frequency domain analysis (MFDA) is proposed to characterize the collective vibrational dynamics of a protein obtained by a molecular dynamics (MD) simulation. MFDA performs principal component analysis (PCA) for a bandpass filtered multivariate time series using the multitaper method of spectral estimation. By applying MFDA to MD trajectories of bovine pancreatic trypsin inhibitor, we determined the collective vibrational modes in the frequency domain, which were identified by their vibrational frequencies and eigenvectors. At near zero temperature, the vibrational modes determined by MFDA agreed well with those calculated by normal mode analysis. At 300 K, the vibrational modes exhibited characteristic features that were considerably different from the principal modes of the static distribution given by the standard PCA. The influences of aqueous environments were discussed based on two different sets of vibrational modes, one derived from a MD simulation in water and the other from a simulation in vacuum. Using the varimax rotation, an algorithm from multivariate statistical analysis, the representative orthogonal set of eigenmodes was determined at each vibrational frequency.

  12. Multivariate Models of Men's and Women's Partner Aggression

    ERIC Educational Resources Information Center

    O'Leary, K. Daniel; Smith Slep, Amy M.; O'Leary, Susan G.

    2007-01-01

    This exploratory study was designed to address how multiple factors drawn from varying focal models and ecological levels of influence might operate relative to each other to predict partner aggression, using data from 453 representatively sampled couples. The resulting cross-validated models predicted approximately 50% of the variance in men's…

  13. Combining Frequency Doubling Technology Perimetry and Scanning Laser Polarimetry for Glaucoma Detection

    PubMed Central

    Mwanza, Jean-Claude; Warren, Joshua L.; Hochberg, Jessica T.; Budenz, Donald L.; Chang, Robert T.; Ramulu, Pradeep Y.

    2014-01-01

    Purpose To determine the ability of frequency doubling technology (FDT) and scanning laser polarimetry with variable corneal compensation (GDx-VCC) to detect glaucoma when used individually and in combination. Methods One hundred and ten normal and 114 glaucomatous subjects were tested with the FDT C-20-5 screening protocol and the GDx-VCC. The discriminating ability was tested for each device individually and for both devices combined using GDx-NFI, GDx-TSNIT, number of missed points of FDT, and normal or abnormal FDT. Measures of discrimination included sensitivity, specificity, area under the curve (AUC), Akaike’s information criterion (AIC), and prediction confidence interval lengths (PIL). Results For detecting glaucoma regardless of severity, the multivariable model resulting from the combination of GDx-TSNIT, number of abnormal points on FDT (NAP-FDT), and the interaction GDx-TSNIT * NAP-FDT (AIC: 88.28, AUC: 0.959, sensitivity: 94.6%, specificity: 89.5%) outperformed the best single variable model provided by GDx-NFI (AIC: 120.88, AUC: 0.914, sensitivity: 87.8%, specificity: 84.2%). The multivariable model combining GDx-TSNIT, NAP-FDT, and the interaction GDx-TSNIT * NAP-FDT consistently provided better discriminating abilities for detecting early, moderate, and severe glaucoma than the best single variable models. Conclusions The multivariable model including GDx-TSNIT, NAP-FDT, and the interaction GDx-TSNIT * NAP-FDT provides the best glaucoma prediction compared to all other multivariable and univariable models. Combining the FDT C-20-5 screening protocol and GDx-VCC improves glaucoma detection compared to using GDx or FDT alone. PMID:24777046

  14. Generating Multivariate Ordinal Data via Entropy Principles.

    PubMed

    Lee, Yen; Kaplan, David

    2018-03-01

    When conducting robustness research where the focus of attention is on the impact of non-normality, the marginal skewness and kurtosis are often used to set the degree of non-normality. Monte Carlo methods are commonly applied to conduct this type of research by simulating data from distributions with skewness and kurtosis constrained to pre-specified values. Although several procedures have been proposed to simulate data from distributions with these constraints, no corresponding procedures have been applied for discrete distributions. In this paper, we present two procedures based on the principles of maximum entropy and minimum cross-entropy to estimate the multivariate observed ordinal distributions with constraints on skewness and kurtosis. For these procedures, the correlation matrix of the observed variables is not specified but depends on the relationships between the latent response variables. With the estimated distributions, researchers can study robustness not only focusing on the levels of non-normality but also on the variations in the distribution shapes. A simulation study demonstrates that these procedures yield excellent agreement between specified parameters and those of estimated distributions. A robustness study concerning the effect of distribution shape in the context of confirmatory factor analysis shows that shape can affect the robust [Formula: see text] and robust fit indices, especially when the sample size is small, the data are severely non-normal, and the fitted model is complex.

  15. Nonlinear Schroedinger Approximations for Partial Differential Equations with Quadratic and Quasilinear Terms

    NASA Astrophysics Data System (ADS)

    Cummings, Patrick

    We consider the approximation of solutions of two complicated, physical systems via the nonlinear Schrödinger equation (NLS). In particular, we discuss the evolution of wave packets and long waves in two physical models. Due to the complicated nature of the equations governing many physical systems and the in-depth knowledge we have for solutions of the nonlinear Schrödinger equation, it is advantageous to use approximation results of this kind to model these physical systems. The approximations are simple enough that we can use them to understand the qualitative and quantitative behavior of the solutions, and by justifying them we can show that the behavior of the approximation captures the behavior of solutions to the original equation, at least for long, but finite time. We first consider a model of the water wave equations which can be approximated by wave packets using the NLS equation. We discuss a new proof that both simplifies and strengthens previous justification results of Schneider and Wayne. Rather than using analytic norms, as was done by Schneider and Wayne, we construct a modified energy functional so that the approximation holds for the full interval of existence of the approximate NLS solution as opposed to a subinterval (as is seen in the analytic case). Furthermore, the proof avoids problems associated with inverting the normal form transform by working with a modified energy functional motivated by Craig and Hunter et al. We then consider the Klein-Gordon-Zakharov system and prove a long wave approximation result. In this case there is a non-trivial resonance that cannot be eliminated via a normal form transform. By combining the normal form transform for small Fourier modes and using analytic norms elsewhere, we can get a justification result on the order 1 over epsilon squared time scale.

  16. Multivariable control altitude demonstration on the F100 turbofan engine

    NASA Technical Reports Server (NTRS)

    Lehtinen, B.; Dehoff, R. L.; Hackney, R. D.

    1979-01-01

    The F100 Multivariable Control Synthesis (MVCS) program was aimed at demonstrating the benefits of LQR synthesis theory in the design of a multivariable engine control system for operation throughout the flight envelope. The advantages of such procedures include: (1) enhanced performance from cross-coupled controls, (2) maximum use of engine variable geometry, and (3) a systematic design procedure that can be applied efficiently to new engine systems. The control system designed, under the MVCS program, for the Pratt & Whitney F100 turbofan engine is described. Basic components of the control include: (1) a reference value generator for deriving a desired equilibrium state and an approximate control vector, (2) a transition model to produce compatible reference point trajectories during gross transients, (3) gain schedules for producing feedback terms appropriate to the flight condition, and (4) integral switching logic to produce acceptable steady-state performance without engine operating limit exceedance.

  17. Iterative procedures for space shuttle main engine performance models

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael

    1989-01-01

    Performance models of the Space Shuttle Main Engine (SSME) contain iterative strategies for determining approximate solutions to nonlinear equations reflecting fundamental mass, energy, and pressure balances within engine flow systems. Both univariate and multivariate Newton-Raphson algorithms are employed in the current version of the engine Test Information Program (TIP). Computational efficiency and reliability of these procedures are examined. A modified trust region form of the multivariate Newton-Raphson method is implemented and shown to be superior for off-nominal engine performance predictions. A heuristic form of Broyden's Rank One method is also tested and favorable results based on this algorithm are presented.

  18. Robust Approximations to the Non-Null Distribution of the Product Moment Correlation Coefficient I: The Phi Coefficient.

    ERIC Educational Resources Information Center

    Edwards, Lynne K.; Meyers, Sarah A.

    Correlation coefficients are frequently reported in educational and psychological research. The robustness properties and optimality among practical approximations when phi does not equal 0 with moderate sample sizes are not well documented. Three major approximations and their variations are examined: (1) a normal approximation of Fisher's Z,…

  19. Incidence and prognostic value of serotonin secretion in pancreatic neuroendocrine tumours.

    PubMed

    Zandee, Wouter T; van Adrichem, Roxanne C; Kamp, Kimberly; Feelders, Richard A; van Velthuysen, Marie-Louise F; de Herder, Wouter W

    2017-08-01

    Serotonin secretion occurs in approximately 1%-4% of patients with a pancreatic neuroendocrine tumour (PNET), but the incidence is not well defined. The aim of this study was to determine the incidence of serotonin secretion with and without carcinoid syndrome and the prognostic value for overall survival (OS). Data were collected from 255 patients with a PNET in whom 24-hour urinary 5-hydroxyindoleacetic acid (5-HIAA) excretion was assessed. Patients were diagnosed with serotonin secretion if 24-hour urinary 5-HIAA excretion was more than 3× the upper limit of normal (ULN) of 50 μmol/24 hours during follow-up. The effect of serotonin secretion on OS was estimated with uni- and multivariate analyses using a Cox regression. Two (0.8%) patients were diagnosed with carcinoid syndrome, and another 20 (7.8%) had a serotonin-secreting PNET without symptoms. These patients mostly had ENETS stage IV disease with high chromogranin A (CgA). Serotonin secretion was a negative prognostic factor in univariate analysis (HR 2.2, 95% CI: 1.27-3.81), but in multivariate analysis, only CgA >10× ULN (HR: 1.81, 95% CI: 1.10-2.98) and neuron-specific enolase (NSE) >ULN (HR: 3.51, 95% CI: 2.26-5.46) were predictors for OS. Immunohistochemical staining for serotonin was positive in 28.6% of serotonin-secreting PNETs (one with carcinoid syndrome) and negative in all controls. Carcinoid syndrome is rare in patients with a PNET, but serotonin secretion occurs often. This is a negative prognostic factor for OS, but after correction for CgA and NSE, it is no longer a predictor and probably only a "not-so innocent bystander" in patients with high tumour burden. © 2017 John Wiley & Sons Ltd.

  20. Distribution of the Determinant of the Sample Correlation Matrix: Monte Carlo Type One Error Rates.

    ERIC Educational Resources Information Center

    Reddon, John R.; And Others

    1985-01-01

    Computer sampling from a multivariate normal spherical population was used to evaluate the type one error rates for a test of sphericity based on the distribution of the determinant of the sample correlation matrix. (Author/LMO)
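    The Monte Carlo setup can be sketched along these lines, assuming Bartlett's chi-square approximation supplies the critical values (the study may well have examined other reference distributions):

```python
# Monte Carlo sketch: sample from a spherical (identity-correlation)
# multivariate normal and estimate the empirical Type I error rate of
# Bartlett's sphericity test, which is based on the determinant of the
# sample correlation matrix.
import numpy as np
from scipy import stats

def bartlett_sphericity_pvalue(x):
    n, p = x.shape
    R = np.corrcoef(x, rowvar=False)
    stat = -((n - 1) - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return stats.chi2.sf(stat, df)

rng = np.random.default_rng(0)
n, p, reps, alpha = 50, 4, 2000, 0.05
rejections = sum(
    bartlett_sphericity_pvalue(rng.standard_normal((n, p))) < alpha
    for _ in range(reps)
)
print(f"empirical Type I error: {rejections / reps:.3f}")  # near the nominal 0.05
```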

  1. The Multivariate Largest Lyapunov Exponent as an Age-Related Metric of Quiet Standing Balance

    PubMed Central

    Liu, Kun; Wang, Hongrui; Xiao, Jinzhuang

    2015-01-01

    The largest Lyapunov exponent has been researched as a metric of balance ability during human quiet standing. However, the sensitivity and accuracy of this measurement method are not good enough for clinical use. The present research proposes a metric of the human body's standing balance ability based on the multivariate largest Lyapunov exponent (MLLE), which can quantify human standing balance. The dynamic multivariate time series of the ankle, knee, and hip were measured by multiple electrical goniometers. Thirty-six normal people of different ages participated in the test. With the acquired data, the MLLE was calculated. Finally, the results of the proposed approach were analysed and compared with the traditional method, for which the largest Lyapunov exponent and power spectral density from the centre of pressure were also calculated. The following conclusions can be drawn: the MLLE differentiates balance ability more sharply under eyes-closed conditions; the MLLE value reflects the overall coordination between multisegment movements; and individuals of different ages can be distinguished by their MLLE values. Human standing stability declines with increasing age. PMID:26064182

  2. Sensitivity analysis and approximation methods for general eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Murthy, D. V.; Haftka, R. T.

    1986-01-01

    Optimization of dynamic systems involving complex non-Hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-Hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on the trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of the appropriate approximation technique as a function of the matrix size, number of design variables, number of eigenvalues of interest and the number of design points at which approximation is sought.

  3. S-Wave Normal Mode Propagation in Aluminum Cylinders

    USGS Publications Warehouse

    Lee, Myung W.; Waite, William F.

    2010-01-01

    Large amplitude waveform features have been identified in pulse-transmission shear-wave measurements through cylinders that are long relative to the acoustic wavelength. The arrival times and amplitudes of these features do not follow the predicted behavior of well-known bar waves, but instead they appear to propagate with group velocities that increase as the waveform feature's dominant frequency increases. To identify these anomalous features, the wave equation is solved in a cylindrical coordinate system using an infinitely long cylinder with a free surface boundary condition. The solution indicates that large amplitude normal-mode propagations exist. Using the high-frequency approximation of the Bessel function, an approximate dispersion relation is derived. The predicted amplitude and group velocities using the approximate dispersion relation qualitatively agree with measured values at high frequencies, but the exact dispersion relation should be used to analyze normal modes for full ranges of frequency of interest, particularly at lower frequencies.

  4. A Cyber-Attack Detection Model Based on Multivariate Analyses

    NASA Astrophysics Data System (ADS)

    Sakai, Yuto; Rinsaka, Koichiro; Dohi, Tadashi

    In the present paper, we propose a novel cyber-attack detection model that applies two multivariate-analysis methods to the audit data observed on a host machine. The statistical techniques used here are the well-known Hayashi's quantification method IV and cluster analysis. We quantify the observed qualitative audit event sequence via quantification method IV, and group similar audit event sequences based on cluster analysis. It is shown in simulation experiments that our model can improve cyber-attack detection accuracy in some realistic cases where both normal and attack activities are intermingled.

  5. A New Closed Form Approximation for BER for Optical Wireless Systems in Weak Atmospheric Turbulence

    NASA Astrophysics Data System (ADS)

    Kaushik, Rahul; Khandelwal, Vineet; Jain, R. C.

    2018-04-01

    The weak atmospheric turbulence condition in optical wireless communication (OWC) is captured by the log-normal distribution. The analytical evaluation of the average bit error rate (BER) of an OWC system under weak turbulence is intractable, as it involves statistically averaging the Gaussian Q-function over the log-normal distribution. In this paper, a simple closed-form approximation for the BER of an OWC system under weak turbulence is given. Computation of the BER for various modulation schemes is carried out using the proposed expression. The results obtained using the proposed expression compare favorably with those obtained using the Gauss-Hermite quadrature approximation and Monte Carlo simulations.
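    The quadrature benchmark that closed-form approximations of this kind are compared against can be sketched as follows, under an assumed channel model (on-off keying, unit-mean log-normal irradiance); the paper's exact system parameters are not reproduced here:

```python
# Average BER E[Q(snr * h)] over a log-normal fade h with E[h] = 1,
# i.e. ln h ~ N(-sigma^2/2, sigma^2), evaluated by Gauss-Hermite
# quadrature and cross-checked against Monte Carlo averaging.
import numpy as np
from scipy.special import erfc

def qfunc(x):
    """Gaussian Q-function via the complementary error function."""
    return 0.5 * erfc(x / np.sqrt(2.0))

def avg_ber_gauss_hermite(snr, sigma, order=30):
    nodes, weights = np.polynomial.hermite.hermgauss(order)
    mu = -sigma**2 / 2.0                      # forces E[h] = 1
    h = np.exp(mu + np.sqrt(2.0) * sigma * nodes)
    return np.sum(weights * qfunc(snr * h)) / np.sqrt(np.pi)

def avg_ber_monte_carlo(snr, sigma, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    h = rng.lognormal(mean=-sigma**2 / 2.0, sigma=sigma, size=n)
    return qfunc(snr * h).mean()

snr, sigma = 3.0, 0.3                         # weak-turbulence regime
print(avg_ber_gauss_hermite(snr, sigma))
print(avg_ber_monte_carlo(snr, sigma))        # the two should agree closely
```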

  6. Normalization of Gravitational Acceleration Models

    NASA Technical Reports Server (NTRS)

    Eckman, Randy A.; Brown, Aaron J.; Adamo, Daniel R.

    2011-01-01

    Unlike the uniform density spherical shell approximations of Newton, the consequence of spaceflight in the real universe is that gravitational fields are sensitive to the nonsphericity of their generating central bodies. The gravitational potential of a nonspherical central body is typically resolved using spherical harmonic approximations. However, attempting to directly calculate the spherical harmonic approximations results in at least two singularities which must be removed in order to generalize the method and solve for any possible orbit, including polar orbits. Three unique algorithms have been developed to eliminate these singularities by Samuel Pines [1], Bill Lear [2], and Robert Gottlieb [3]. This paper documents the methodical normalization of two of the three known formulations for singularity-free gravitational acceleration (namely, the Lear [2] and Gottlieb [3] algorithms) and formulates a general method for defining normalization parameters used to generate normalized Legendre polynomials and associated Legendre functions (ALFs) for any algorithm. A treatment of the conventional formulation of the gravitational potential and acceleration is also provided, in addition to a brief overview of the philosophical differences between the three known singularity-free algorithms.

  7. Data driven discrete-time parsimonious identification of a nonlinear state-space model for a weakly nonlinear system with short data record

    NASA Astrophysics Data System (ADS)

    Relan, Rishi; Tiels, Koen; Marconato, Anna; Dreesen, Philippe; Schoukens, Johan

    2018-05-01

    Many real world systems exhibit a quasi linear or weakly nonlinear behavior during normal operation, and a hard saturation effect for high peaks of the input signal. In this paper, a methodology to identify a parsimonious discrete-time nonlinear state space model (NLSS) for a nonlinear dynamical system with a relatively short data record is proposed. The capability of the NLSS model structure is demonstrated by introducing two different initialisation schemes, one of them using multivariate polynomials. In addition, a method using first-order information of the multivariate polynomials and tensor decomposition is employed to obtain the parsimonious decoupled representation of the set of multivariate real polynomials estimated during the identification of the NLSS model. Finally, the experimental verification of the model structure is done on the cascaded water-benchmark identification problem.

  8. Measuring Treasury Bond Portfolio Risk and Portfolio Optimization with a Non-Gaussian Multivariate Model

    NASA Astrophysics Data System (ADS)

    Dong, Yijun

    Research on measuring the risk of a bond portfolio and on bond portfolio optimization was previously relatively rare, because the risk factors of bond portfolios are not very volatile. However, this condition has changed recently. The 2008 financial crisis brought high volatility to the risk factors and the related bond securities, even highly rated U.S. Treasury bonds. Moreover, the risk factors of bond portfolios show fat tails and asymmetry, like the risk factors of equity portfolios. Therefore, we need to use advanced techniques to measure and manage the risk of bond portfolios. In our paper, we first apply an autoregressive moving average generalized autoregressive conditional heteroscedasticity (ARMA-GARCH) model with multivariate normal tempered stable (MNTS) distribution innovations to predict the risk factors of U.S. Treasury bonds, and statistically demonstrate that the MNTS distribution has the ability to capture the properties of the risk factors based on goodness-of-fit tests. Then, based on empirical evidence, we find that the VaR and AVaR estimated by assuming the normal tempered stable distribution are more realistic and reliable than those estimated by assuming the normal distribution, especially for the financial crisis period. Finally, we use mean-risk portfolio optimization to minimize portfolios' potential risks. The empirical study indicates that the optimized bond portfolios have better risk-adjusted performances than the benchmark portfolios for some periods. Moreover, the optimized bond portfolios obtained by assuming the normal tempered stable distribution have improved performances in comparison to those obtained by assuming the normal distribution.

  9. Characterizations of linear sufficient statistics

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Reoner, R.; Decell, H. P., Jr.

    1977-01-01

    Conditions are given under which a surjective bounded linear operator T from a Banach space X to a Banach space Y is a sufficient statistic for a dominated family of probability measures defined on the Borel sets of X. These results are applied to characterize linear sufficient statistics for families of the exponential type, including the Wishart and multivariate normal distributions as special cases. The latter result is used to establish precisely which procedures for sampling from a normal population have the property that the sample mean is a sufficient statistic.

  10. Application of a multivariate normal distribution methodology to the dissociation of doubly ionized molecules: The DMDS (CH3-SS-CH3) case.

    PubMed

    Varas, Lautaro R; Pontes, F C; Santos, A C F; Coutinho, L H; de Souza, G G B

    2015-09-15

    The ion-ion-coincidence mass spectroscopy technique provides useful information about the fragmentation dynamics of doubly and multiply charged ionic species. We advocate the use of a matrix-parameter methodology to represent and interpret entire ion-ion spectra associated with the ionic dissociation of doubly charged molecules. This method makes it possible, among other things, to infer fragmentation processes and to extract information about overlapped ion-ion coincidences, information that is difficult to obtain with previously described methodologies. A Wiley-McLaren time-of-flight mass spectrometer was used to discriminate the positively charged fragment ions resulting from ionization of the sample by a pulsed 800 eV electron beam. We exemplify the application of this methodology by analyzing the fragmentation and ionic dissociation of the dimethyl disulfide (DMDS) molecule as induced by fast electrons. The doubly charged dissociation was analyzed using the multivariate normal distribution: the ion-ion spectrum of the DMDS molecule, obtained at an incident electron energy of 800 eV, was represented in matrix form using multivariate normal distribution theory. The proposed methodology allows us to distinguish the [CHnSHn]+/[CH3]+ (n = 1-3) fragment ions in the ion-ion coincidence spectra using ion-ion coincidence data. Using the momenta-balance methodology for the inferred parameters, a secondary decay mechanism is proposed for the formation of the [CHS]+ ion. As an additional check on the methodology, previously published data on the SiF4 molecule were re-analyzed with the present methodology, and the results were shown to be statistically equivalent. The use of a multivariate normal distribution allows the whole ion-ion mass spectrum of doubly or multiply ionized molecules to be represented as a combination of parameters and permits the extraction of information from overlapped data. We have successfully applied this methodology to the analysis of the fragmentation of the DMDS molecule. Copyright © 2015 John Wiley & Sons, Ltd.
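    As a toy illustration of the matrix-parameter idea (not the authors' implementation; the flight times, widths and slope below are invented), a single ion-ion coincidence island can be summarized by the parameters of a bivariate normal, whose maximum-likelihood estimates are simply the sample mean vector and covariance matrix; the covariance slope encodes the momentum correlation between the two fragments.

```python
import numpy as np

rng = np.random.default_rng(1)

# One simulated coincidence "island": correlated flight times of two fragments.
# All values (means, widths, slope) are invented for illustration.
n = 5000
t1_mean, t2_mean, sigma, slope = 5.0, 8.0, 0.05, -1.0
p_shared = rng.normal(0.0, sigma, n)               # shared momentum kick
t1 = t1_mean + p_shared + rng.normal(0.0, 0.01, n)
t2 = t2_mean + slope * p_shared + rng.normal(0.0, 0.01, n)

# ML estimates of the bivariate normal parameters are the sample statistics.
X = np.column_stack([t1, t2])
mu = X.mean(axis=0)
cov = np.cov(X, rowvar=False)
est_slope = cov[0, 1] / cov[0, 0]                  # island slope (t2 vs t1)
```

    The recovered mean vector locates the island in the time-of-flight plane, and the estimated slope is close to the momentum-balance value of -1 (attenuated slightly by the uncorrelated detector noise).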

  11. Abnormal lignin in a loblolly pine mutant.

    PubMed

    Ralph, J; MacKay, J J; Hatfield, R D; O'Malley, D M; Whetten, R W; Sederoff, R R

    1997-07-11

    Novel lignin is formed in a mutant loblolly pine (Pinus taeda L.) severely depleted in cinnamyl alcohol dehydrogenase (E.C. 1.1.1.195), which converts coniferaldehyde to coniferyl alcohol, the primary lignin precursor in pines. Dihydroconiferyl alcohol, a monomer not normally associated with the lignin biosynthetic pathway, is the major component of the mutant's lignin, accounting for approximately 30 percent of the units (versus approximately 3 percent in normal pine). The level of aldehydes, including new 2-methoxybenzaldehydes, is also increased. The mutant pines grew normally, indicating that, even within a species, extensive variation in lignin composition need not disrupt the essential functions of lignin.

  12. Down-regulated PAR-2 is associated in part with interrupted melanosome transfer in pigmented basal cell epithelioma.

    PubMed

    Sakuraba, Kazuko; Hayashi, Nobukazu; Kawashima, Makoto; Imokawa, Genji

    2004-08-01

    In pigmented basal cell epithelioma (BCE), there appears to be an abnormal transfer of melanized melanosomes from proliferating melanocytes to basaloid tumor cells. In this study, the interruption of that melanosome transfer was studied with special respect to the altered function of a phagocytic receptor, protease-activated receptor (PAR)-2, in the basaloid tumor cells. We used electron microscopy to clarify the disrupted transfer at the ultrastructural level and then performed immunohistochemistry and reverse transcription-polymerase chain reaction (RT-PCR) to examine the regulation of PAR-2 expressed on basaloid tumor cells. Electron microscopic analysis revealed that basaloid tumor cells of pigmented BCE have a significantly lower population of melanosomes (approximately 16.4%) than do normal keratinocytes located in the perilesional normal epidermis (approximately 91.0%). In contrast, in pigmented seborrheic keratosis (SK), a similarly pigmented epidermal tumor, the distribution of melanin granules does not differ between the lesional (approximately 93.9%) and the perilesional normal epidermis (approximately 92.2%), indicating that interrupted melanosome transfer occurs in BCE but not in all pigmented epithelial tumors. RT-PCR analysis demonstrated that the expression of PAR-2 mRNA transcripts in basaloid cells is significantly decreased in pigmented BCE compared with the perilesional normal epidermis. In contrast, in pigmented SK, where melanosome transfer to basaloid tumor cells is not interrupted, the expression of PAR-2 mRNA transcripts is comparable between the basaloid tumor cells and the perilesional normal epidermis. Immunohistochemistry demonstrated that basaloid cells in pigmented BCE show less immunostaining for PAR-2 than do keratinocytes in the perilesional normal epidermis, whereas in pigmented SK there is no difference in immunostaining for PAR-2 between the basaloid tumor and the perilesional normal epidermis.
These findings suggest that the decreased expression of PAR-2 in the basaloid cells is associated in part with the observed interruption of melanosome transfer in pigmented BCE.

  13. On the Power of Multivariate Latent Growth Curve Models to Detect Correlated Change

    ERIC Educational Resources Information Center

    Hertzog, Christopher; Lindenberger, Ulman; Ghisletta, Paolo; Oertzen, Timo von

    2006-01-01

    We evaluated the statistical power of single-indicator latent growth curve models (LGCMs) to detect correlated change between two variables (covariance of slopes) as a function of sample size, number of longitudinal measurement occasions, and reliability (measurement error variance). Power approximations following the method of Satorra and Saris…

  14. Automatic and objective oral cancer diagnosis by Raman spectroscopic detection of keratin with multivariate curve resolution analysis

    NASA Astrophysics Data System (ADS)

    Chen, Po-Hsiung; Shimada, Rintaro; Yabumoto, Sohshi; Okajima, Hajime; Ando, Masahiro; Chang, Chiou-Tzu; Lee, Li-Tzu; Wong, Yong-Kie; Chiou, Arthur; Hamaguchi, Hiro-O.

    2016-01-01

    We have developed an automatic and objective method for detecting human oral squamous cell carcinoma (OSCC) tissues with Raman microspectroscopy. We measure 196 independent Raman spectra from 196 different points of one oral tissue sample and globally analyze these spectra using Multivariate Curve Resolution (MCR) analysis. Discrimination of OSCC tissues is made automatically and objectively by spectral matching comparison between the MCR-decomposed Raman spectra and the standard Raman spectrum of keratin, a well-established molecular marker of OSCC. We use a total of 24 tissue samples: 10 OSCC and 10 normal tissues from the same 10 patients, plus 3 OSCC and 1 normal tissue from different patients. Following the newly developed protocol presented here, we have been able to detect OSCC tissues with 77 to 92% sensitivity (depending on how positivity is defined) and 100% specificity. The present approach lends itself to a reliable clinical diagnosis of OSCC substantiated by the “molecular fingerprint” of keratin.

  15. Classification of Fusarium-Infected Korean Hulled Barley Using Near-Infrared Reflectance Spectroscopy and Partial Least Squares Discriminant Analysis

    PubMed Central

    Lim, Jongguk; Kim, Giyoung; Mo, Changyeun; Oh, Kyoungmin; Yoo, Hyeonchae; Ham, Hyeonheui; Kim, Moon S.

    2017-01-01

    The purpose of this study is to use near-infrared reflectance (NIR) spectroscopy equipment to nondestructively and rapidly discriminate Fusarium-infected hulled barley. Both normal hulled barley and Fusarium-infected hulled barley were scanned by using a NIR spectrometer with a wavelength range of 1175 to 2170 nm. Multiple mathematical pretreatments were applied to the reflectance spectra obtained for Fusarium discrimination and the multivariate analysis method of partial least squares discriminant analysis (PLS-DA) was used for discriminant prediction. The PLS-DA prediction model developed by applying the second-order derivative pretreatment to the reflectance spectra obtained from the side of hulled barley without crease achieved 100% accuracy in discriminating the normal hulled barley and the Fusarium-infected hulled barley. These results demonstrated the feasibility of rapid discrimination of the Fusarium-infected hulled barley by combining multivariate analysis with the NIR spectroscopic technique, which is utilized as a nondestructive detection method. PMID:28974012

  16. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models.

    PubMed

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A

    2012-03-15

    To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.
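    LASSO's interpretability comes from its soft-thresholding update, which sets small coefficients exactly to zero. A minimal numpy-only sketch on synthetic data (not the NTCP study's clinical variables) fits a LASSO by cyclic coordinate descent over standardized predictors:

```python
import numpy as np

rng = np.random.default_rng(6)

def lasso_cd(X, y, lam, n_iter=300):
    """LASSO via cyclic coordinate descent; assumes the columns of X are standardized."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            resid = y - X @ beta + X[:, j] * beta[j]       # partial residual w.r.t. j
            z = X[:, j] @ resid / n
            beta[j] = np.sign(z) * max(abs(z) - lam, 0.0)  # soft threshold
    return beta

# synthetic data: 10 candidate predictors, only 2 truly predictive
n, p = 300, 10
X = rng.normal(size=(n, p))
X = (X - X.mean(axis=0)) / X.std(axis=0)
beta_true = np.zeros(p)
beta_true[0], beta_true[3] = 1.5, -2.0
y = X @ beta_true + rng.normal(0.0, 0.5, n)

beta = lasso_cd(X, y, lam=0.1)   # irrelevant coefficients end up exactly zero
```

    The L1 penalty biases the nonzero coefficients slightly toward zero (by roughly lam here) while zeroing out the noise predictors, giving the sparse, interpretable model the abstract describes.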

  17. Potential of non-invasive esophagus cancer detection based on urine surface-enhanced Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Huang, Shaohua; Wang, Lan; Chen, Weisheng; Feng, Shangyuan; Lin, Juqiang; Huang, Zufang; Chen, Guannan; Li, Buhong; Chen, Rong

    2014-11-01

    Non-invasive esophagus cancer detection based on urine surface-enhanced Raman spectroscopy (SERS) analysis is presented. Urine SERS spectra were measured for esophagus cancer patients (n = 56) and healthy volunteers (n = 36) as controls. Tentative assignments of the urine SERS spectra indicated some interesting esophagus cancer-specific biomolecular changes, including a decrease in the relative content of urea and an increase in the percentage of uric acid in the urine of esophagus cancer patients compared to that of healthy subjects. Principal component analysis (PCA) combined with linear discriminant analysis (LDA) was employed to analyze and differentiate the SERS spectra of normal and esophagus cancer urine. The diagnostic algorithms utilizing this multivariate analysis method achieved a diagnostic sensitivity of 89.3% and specificity of 83.3% for separating esophagus cancer samples from normal urine samples. These results from this explorative work suggest that silver nanoparticle-based urine SERS analysis coupled with PCA-LDA multivariate analysis has potential for non-invasive detection of esophagus cancer.

  18. Parametric vs. non-parametric statistics of low resolution electromagnetic tomography (LORETA).

    PubMed

    Thatcher, R W; North, D; Biver, C

    2005-01-01

    This study compared the relative statistical sensitivity of non-parametric and parametric statistics of 3-dimensional current sources as estimated by the EEG inverse solution Low Resolution Electromagnetic Tomography (LORETA). One would expect approximately 5% false positives (classification of a normal subject as abnormal) at the P < .025 level of probability (two-tailed test) and approximately 1% false positives at the P < .005 level. EEG digital samples (2-second intervals sampled at 128 Hz, 1 to 2 minutes, eyes closed) from 43 normal adult subjects were imported into the Key Institute's LORETA program. We then used the Key Institute's cross-spectrum and LORETA output files (*.lor) as the 2,394 gray matter pixel representation of 3-dimensional currents at different frequencies. The mean and standard deviation *.lor files were computed for each of the 2,394 gray matter pixels across the 43 subjects. Tests of Gaussianity and different transforms were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of parametric vs. non-parametric statistics was compared using a "leave-one-out" cross-validation method in which individual normal subjects were withdrawn and then statistically classified as being either normal or abnormal based on the remaining subjects. Log10 transforms approximated a Gaussian distribution with 95% to 99% accuracy. Parametric Z score tests at P < .05 cross-validation demonstrated an average misclassification rate of approximately 4.25%, with a range over the 2,394 gray matter pixels of 0.11% to 27.66%. At P < .01, parametric Z score cross-validation false positives averaged 0.26% and ranged from 0% to 6.65%. The non-parametric Key Institute's t-max statistic at P < .05 had an average misclassification error rate of 7.64% and ranged from 0.04% to 43.37% false positives. The non-parametric t-max at P < .01 had an average misclassification rate of 6.67% and ranged from 0% to 41.34% false positives over the 2,394 gray matter pixels for any cross-validated normal subject. In conclusion, an adequate approximation to a Gaussian distribution and high cross-validation accuracy can be achieved with the Key Institute's LORETA programs by using a log10 transform and parametric statistics, and parametric normative comparisons had lower false-positive rates than the non-parametric tests.

  19. Infrared fiber optic probe evaluation of degenerative cartilage correlates to histological grading.

    PubMed

    Hanifi, Arash; Bi, Xiaohong; Yang, Xu; Kavukcuoglu, Beril; Lin, Ping Chang; DiCarlo, Edward; Spencer, Richard G; Bostrom, Mathias P G; Pleshko, Nancy

    2012-12-01

    Osteoarthritis (OA), a degenerative cartilage disease, results in alterations of the chemical and structural properties of tissue. Arthroscopic evaluation of full-depth tissue composition is limited and would require tissue harvesting, which is inappropriate in daily routine. Fourier transform infrared (FT-IR) spectroscopy is a modality based on molecular vibrations of matrix components that can be used in conjunction with fiber optics to acquire quantitative compositional data from the cartilage matrix. The objective of this comparative laboratory study was to develop a model based on infrared spectra of articular cartilage to predict the histological Mankin score as an indicator of tissue quality. Infrared fiber optic probe (IFOP) spectra were collected from nearly normal and more degraded regions of tibial plateau articular cartilage harvested during knee arthroplasty (N = 61). Each region was graded using a modified Mankin score. A multivariate partial least squares algorithm using second-derivative spectra was developed to predict the histological modified Mankin score. The partial least squares model derived from IFOP spectra predicted the modified Mankin score with a prediction error of approximately 1.4: approximately 72% of the Mankin-scored tissues were predicted correctly and 96% within 1 grade of their true score. These data demonstrate that IFOP spectral parameters correlate with histological tissue grade and can be used to provide information on tissue composition. Infrared fiber optic probe studies have significant potential for the evaluation of cartilage tissue quality without the need for tissue harvest. Combined with arthroscopy, IFOP analysis could facilitate the definition of tissue margins in debridement procedures.

  20. Testing the mutual information expansion of entropy with multivariate Gaussian distributions.

    PubMed

    Goethe, Martin; Fita, Ignacio; Rubi, J Miguel

    2017-12-14

    The mutual information expansion (MIE) represents an approximation of the configurational entropy in terms of low-dimensional integrals. It is frequently employed to compute entropies from simulation data of large systems, such as macromolecules, for which brute-force evaluation of the full configurational integral is intractable. Here, we test the validity of MIE for systems consisting of more than m = 100 degrees of freedom (dofs). The dofs are distributed according to multivariate Gaussian distributions which were generated from protein structures using a variant of the anisotropic network model. For the Gaussian distributions, we have semi-analytical access to the configurational entropy as well as to all contributions of MIE. This allows us to accurately assess the validity of MIE for different situations. We find that MIE diverges for systems containing long-range correlations which means that the error of consecutive MIE approximations grows with the truncation order n for all tractable n ≪ m. This fact implies severe limitations on the applicability of MIE, which are discussed in the article. For systems with correlations that decay exponentially with distance, MIE represents an asymptotic expansion of entropy, where the first successive MIE approximations approach the exact entropy, while MIE also diverges for larger orders. In this case, MIE serves as a useful entropy expansion when truncated up to a specific truncation order which depends on the correlation length of the system.
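    For multivariate Gaussians the exact configurational entropy is available in closed form, S = ½ ln[(2πe)^m det Σ], which is what makes them a clean test bed for MIE. The sketch below uses an arbitrary random covariance (not the article's protein-derived ones) to compare the exact entropy with the first-order MIE (sum of marginal entropies) and the second-order MIE; the first order always overestimates, by exactly the total correlation −½ ln det of the correlation matrix.

```python
import numpy as np

rng = np.random.default_rng(3)

def gaussian_entropy(cov):
    """Differential entropy of N(0, cov): 0.5 * ln((2*pi*e)^m * det cov)."""
    m = cov.shape[0]
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (m * np.log(2.0 * np.pi * np.e) + logdet)

m = 6
A = rng.normal(size=(m, m))
cov = A @ A.T + m * np.eye(m)         # random positive-definite covariance

exact = gaussian_entropy(cov)

# first-order MIE: sum of one-dimensional marginal entropies
s1 = [gaussian_entropy(cov[np.ix_([i], [i])]) for i in range(m)]
mie1 = sum(s1)

# second-order MIE: subtract every pairwise mutual information
mie2 = mie1
for i in range(m):
    for j in range(i + 1, m):
        s_ij = gaussian_entropy(cov[np.ix_([i, j], [i, j])])
        mie2 -= s1[i] + s1[j] - s_ij  # mutual information I_ij >= 0
```

    Since each pairwise mutual information is nonnegative, the second-order estimate is never larger than the first-order one; whether further orders improve or diverge is exactly the question the abstract addresses.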

  1. Prevalence and Determinants of Suboptimal Vitamin D Levels in a Multiethnic Asian Population.

    PubMed

    Man, Ryan Eyn Kidd; Li, Ling-Jun; Cheng, Ching-Yu; Wong, Tien Yin; Lamoureux, Ecosse; Sabanayagam, Charumathi

    2017-03-22

    This population-based cross-sectional study examined the prevalence and risk factors of suboptimal vitamin D levels (assessed using circulating 25-hydroxycholecalciferol (25(OH)D)) in a multi-ethnic sample of Asian adults. Plasma 25(OH)D concentrations of 1139 Chinese, Malay and Indian adults (40-80 years) were stratified into normal (≥30 ng/mL) and suboptimal (including insufficiency and deficiency, <30 ng/mL) based on the 2011 Endocrine Society Clinical Practice Guidelines. Logistic regression models were used to assess the associations of demographic, lifestyle and clinical risk factors with the outcome. Of the 1139 participants, 25(OH)D concentration was suboptimal in 76.1%. In multivariable models, age ≤65 years (compared to age >65 years), Malay and Indian ethnicities (compared to Chinese ethnicity), and higher body mass index, HbA1c, education and income levels were associated with suboptimal 25(OH)D concentration (p < 0.05). In a population-based sample of Asian adults, approximately 75% had suboptimal 25(OH)D concentration. Targeted interventions and stricter reinforcement of existing guidelines for vitamin D supplementation are needed for groups at risk of vitamin D insufficiency/deficiency.

  2. Gaussian-based routines to impute categorical variables in health surveys.

    PubMed

    Yucel, Recai M; He, Yulei; Zaslavsky, Alan M

    2011-12-20

    The multivariate normal (MVN) distribution is arguably the most popular parametric model used in imputation and is available in most software packages (e.g., SAS PROC MI, R package norm). When it is applied to categorical variables as an approximation, practitioners often either apply simple rounding techniques for ordinal variables or create a distinct 'missing' category and/or exclude the nominal variable from the imputation phase. All of these practices can potentially lead to biased and/or uninterpretable inferences. In this work, we develop a new rounding methodology, calibrated to preserve observed distributions, for multiply imputing missing categorical covariates. The major attraction of this method is its flexibility to use any 'working' imputation software, particularly those based on the MVN, allowing practitioners to obtain usable imputations with small biases. A simulation study demonstrates the clear advantage of the proposed method in rounding ordinal variables and, in some scenarios, its plausibility in imputing nominal variables. We illustrate our methods on the widely used National Survey of Children with Special Health Care Needs, where incomplete values on race posed a threat to inferences about disparities. Copyright © 2011 John Wiley & Sons, Ltd.
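    The calibrated-rounding idea can be sketched for a single binary variable (a stylized illustration with known parameters, not the authors' algorithm): impute the latent normal variable from its conditional distribution given the observed covariate, then choose the rounding cutoff so that the imputed category proportions match those observed in the complete cases.

```python
import numpy as np

rng = np.random.default_rng(4)

# Latent-normal setup: x is a fully observed continuous covariate; the binary
# variable z is generated by thresholding a latent normal z* correlated with x.
n, rho = 2000, 0.7
x = rng.normal(size=n)
z_lat = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)
z = (z_lat > 0).astype(int)

miss = rng.random(n) < 0.3                       # 30% of z missing at random

# Step 1: impute the latent variable from its conditional normal given x
# (population parameters assumed known for this illustration).
z_lat_imp = rho * x[miss] + np.sqrt(1 - rho**2) * rng.normal(size=miss.sum())

# Step 2: calibrated rounding -- pick the cutoff so that the imputed category
# proportions match the proportions observed in the complete cases.
p_obs = z[~miss].mean()
cutoff = np.quantile(z_lat_imp, 1 - p_obs)
z_imp = (z_lat_imp > cutoff).astype(int)
```

    Naive rounding at a fixed threshold can distort the marginal distribution of the imputed category; calibrating the cutoff to the observed proportions avoids that bias, which is the core of the method described above.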

  3. Prevalence and Determinants of Suboptimal Vitamin D Levels in a Multiethnic Asian Population

    PubMed Central

    Man, Ryan Eyn Kidd; Li, Ling-Jun; Cheng, Ching-Yu; Wong, Tien Yin; Lamoureux, Ecosse; Sabanayagam, Charumathi

    2017-01-01

    This population-based cross-sectional study examined the prevalence and risk factors of suboptimal vitamin D levels (assessed using circulating 25-hydroxycholecalciferol (25(OH)D)) in a multi-ethnic sample of Asian adults. Plasma 25(OH)D concentration of 1139 Chinese, Malay and Indians (40–80 years) were stratified into normal (≥30 ng/mL), and suboptimal (including insufficiency and deficiency, <30 ng/mL) based on the 2011 Endocrine Society Clinical Practice Guidelines. Logistic regression models were used to assess the associations of demographic, lifestyle and clinical risk factors with the outcome. Of the 1139 participants, 25(OH)D concentration was suboptimal in 76.1%. In multivariable models, age ≤65 years (compared to age >65 years), Malay and Indian ethnicities (compared to Chinese ethnicity), and higher body mass index, HbA1c, education and income levels were associated with suboptimal 25(OH)D concentration (p < 0.05). In a population-based sample of Asian adults, approximately 75% had suboptimal 25(OH)D concentration. Targeted interventions and stricter reinforcements of existing guidelines for vitamin D supplementation are needed for groups at risk of vitamin D insufficiency/deficiency. PMID:28327512

  4. Fast and Accurate Multivariate Gaussian Modeling of Protein Families: Predicting Residue Contacts and Protein-Interaction Partners

    PubMed Central

    Feinauer, Christoph; Procaccini, Andrea; Zecchina, Riccardo; Weigt, Martin; Pagnani, Andrea

    2014-01-01

    In the course of evolution, proteins show a remarkable conservation of their three-dimensional structure and their biological function, leading to strong evolutionary constraints on the sequence variability between homologous proteins. Our method aims at extracting such constraints from rapidly accumulating sequence data, and thereby at inferring protein structure and function from sequence information alone. Recently, global statistical inference methods (e.g. direct-coupling analysis, sparse inverse covariance estimation) have achieved a breakthrough towards this aim, and their predictions have been successfully implemented into tertiary and quaternary protein structure prediction methods. However, due to the discrete nature of the underlying variable (amino-acids), exact inference requires exponential time in the protein length, and efficient approximations are needed for practical applicability. Here we propose a very efficient multivariate Gaussian modeling approach as a variant of direct-coupling analysis: the discrete amino-acid variables are replaced by continuous Gaussian random variables. The resulting statistical inference problem is efficiently and exactly solvable. We show that the quality of inference is comparable or superior to the one achieved by mean-field approximations to inference with discrete variables, as done by direct-coupling analysis. This is true for (i) the prediction of residue-residue contacts in proteins, and (ii) the identification of protein-protein interaction partner in bacterial signal transduction. An implementation of our multivariate Gaussian approach is available at the website http://areeweb.polito.it/ricerca/cmp/code. PMID:24663061
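    The essence of the Gaussian variant of direct-coupling analysis is that, for multivariate normal variables, direct couplings are the off-diagonal entries of the inverse covariance (precision) matrix, so contacts can be scored directly from the inverse of the sample covariance. The sketch below uses synthetic continuous data, not amino-acid sequences, and the "contact" pairs are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Ground truth: a sparse precision (inverse covariance) matrix whose nonzero
# off-diagonal entries play the role of direct couplings ("contacts").
m = 30
contacts = [(2, 17), (5, 9), (11, 28)]
prec = np.eye(m)
for i, j in contacts:
    prec[i, j] = prec[j, i] = 0.4

cov_true = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(m), cov_true, size=20_000)

# Gaussian coupling scores: magnitude of the inverse sample covariance.
scores = np.abs(np.linalg.inv(np.cov(X, rowvar=False)))
np.fill_diagonal(scores, 0.0)

order = np.argsort(scores, axis=None)[::-1]
rows, cols = np.unravel_index(order, scores.shape)
top = []
for i, j in zip(rows, cols):
    if i < j:                          # each unordered pair counted once
        top.append((int(i), int(j)))
    if len(top) == 3:
        break
```

    With enough samples, the top-scoring pairs recover the planted couplings while pairs that are only indirectly correlated (through chains of couplings) score low, which is the same disentangling of direct from indirect effects that makes direct-coupling analysis work for contact prediction.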

  5. Simultaneous quantification of the boar-taint compounds skatole and androstenone by surface-enhanced Raman scattering (SERS) and multivariate data analysis.

    PubMed

    Sørensen, Klavs M; Westley, Chloe; Goodacre, Royston; Engelsen, Søren Balling

    2015-10-01

    This study investigates the feasibility of using surface-enhanced Raman scattering (SERS) for the quantification of absolute levels of the boar-taint compounds skatole and androstenone in porcine fat. By investigating different types of nanoparticles, pH values and aggregating agents, an optimized environment that promotes SERS of the analytes was developed and tested with different multivariate spectral pre-processing techniques, combined with variable selection on a series of analytical standards. The resulting method exhibited prediction errors (root mean square error of cross-validation, RMSECV) of 2.4 × 10⁻⁶ M for skatole and 1.2 × 10⁻⁷ M for androstenone, with limits of detection corresponding to approximately 2.1 × 10⁻¹¹ M for skatole and approximately 1.8 × 10⁻¹⁰ M for androstenone. The method was subsequently tested on porcine fat extract, leading to prediction errors (RMSECV) of 0.17 μg/g for skatole and 1.5 μg/g for androstenone. This optimized SERS method, combined with multivariate analysis, shows great potential for development into an on-line application, which would be the first of its kind, and opens up possibilities for simultaneous detection of other meat-quality metabolites or pathogen markers. Graphical abstract: Artistic rendering of a laser-illuminated gold colloid sphere with skatole and androstenone adsorbed on the surface.

  6. Technology-enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution

    PubMed Central

    Dinov, Ivo D.; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2014-01-01

    Data analysis requires subtle probability reasoning to answer questions like "What is the chance of event A occurring, given that event B was observed?" This generic question arises in discussions of many intriguing scientific questions, such as "What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height?" and "What is the probability of (monetary) inflation exceeding 4% and the housing price index being below 110?" To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students' understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of probability course (e.g., majors, minors or service probability course; rigorous measure-theoretic, applied or statistics course), student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of the bivariate normal distribution. A real adolescent height and weight dataset is used to demonstrate classroom utilization of the new web-application to address problems of parameter estimation, and univariate and multivariate inference. PMID:25419016

  7. Technology-enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution.

    PubMed

    Dinov, Ivo D; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2013-01-01

    Data analysis requires subtle probability reasoning to answer questions like "What is the chance of event A occurring, given that event B was observed?" This generic question arises in discussions of many intriguing scientific questions, such as "What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height?" and "What is the probability of (monetary) inflation exceeding 4% and the housing price index being below 110?" To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students' understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of probability course (e.g., majors, minors or service probability course; rigorous measure-theoretic, applied or statistics course), student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of the bivariate normal distribution. A real adolescent height and weight dataset is used to demonstrate classroom utilization of the new web-application to address problems of parameter estimation, and univariate and multivariate inference.
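    The adolescent height-weight question in the abstract reduces to a one-dimensional conditional normal computation: for a bivariate normal (H, W), W given H = h is normal with mean μ_W + ρ(σ_W/σ_H)(h − μ_H) and standard deviation σ_W·sqrt(1 − ρ²). The stdlib-only sketch below uses illustrative parameter values, not estimates from the authors' dataset.

```python
from math import erf, sqrt

def norm_cdf(x, mu=0.0, sigma=1.0):
    """Normal CDF via the error function."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

# Illustrative (assumed) population parameters: height H in inches,
# weight W in pounds, correlation rho.
mu_H, sigma_H = 65.0, 3.0
mu_W, sigma_W = 130.0, 20.0
rho = 0.6

h = mu_H                                   # condition on "average height"
cond_mu = mu_W + rho * (sigma_W / sigma_H) * (h - mu_H)
cond_sd = sigma_W * sqrt(1.0 - rho**2)

# P(120 <= W <= 140 | H = average height)
p = norm_cdf(140.0, cond_mu, cond_sd) - norm_cdf(120.0, cond_mu, cond_sd)
```

    Conditioning shrinks the spread from σ_W to σ_W·sqrt(1 − ρ²), so the conditional interval probability is larger than the corresponding marginal one, which is the kind of effect the web-application visualizes interactively.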

  8. A Method for Approximating the Bivariate Normal Correlation Coefficient.

    ERIC Educational Resources Information Center

    Kirk, David B.

    Improvements of Gaussian quadrature in conjunction with the Newton-Raphson iteration technique (TM 000 789) are discussed as effective methods for calculating the bivariate normal correlation coefficient. (CK)

  9. Luttinger theorem and imbalanced Fermi systems

    NASA Astrophysics Data System (ADS)

    Pieri, Pierbiagio; Strinati, Giancarlo Calvanese

    2017-04-01

    The proof of the Luttinger theorem, originally given for a normal Fermi liquid with equal spin populations formally described by the exact many-body theory at zero temperature, is here extended to an approximate theory given in terms of a "conserving" approximation, also with spin-imbalanced populations. The need for this extended proof, whose underlying assumptions are spelled out in detail, stems from the recent interest in superfluid trapped Fermi atoms with attractive inter-particle interaction, for which the difference between the two spin populations can be made large enough that superfluidity is destroyed and the system remains normal even at zero temperature. In this context, we demonstrate the validity of the Luttinger theorem separately for the two spin populations for any "Φ-derivable" approximation, and illustrate it in particular for the self-consistent t-matrix approximation.

  10. Functional Relationships and Regression Analysis.

    ERIC Educational Resources Information Center

    Preece, Peter F. W.

    1978-01-01

    Using a degenerate multivariate normal model for the distribution of organismic variables, the form of least-squares regression analysis required to estimate a linear functional relationship between variables is derived. It is suggested that the two conventional regression lines may be considered to describe functional, not merely statistical,…

  11. Estimating the Classification Efficiency of a Test Battery.

    ERIC Educational Resources Information Center

    De Corte, Wilfried

    2000-01-01

    Shows how a theorem proven by H. Brogden (1951, 1959) can be used to estimate the allocation average (a predictor based classification of a test battery) assuming that the predictor intercorrelations and validities are known and that the predictor variables have a joint multivariate normal distribution. (SLD)

  12. Quasi-biennial (QBO), annual (AO), and semi-annual oscillation (SAO) in stratospheric SCIAMACHY O3, NO2, and BrO limb data using a multivariate least squares approach

    NASA Astrophysics Data System (ADS)

    Dikty, Sebastian; von Savigny, Christian; Sinnhuber, Bjoern-Martin; Rozanov, Alexej; Weber, Mark; Burrows, John P.

    We use SCIAMACHY (SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY) ozone, nitrogen dioxide and bromine oxide profiles (20-50 km altitude, 2003-2008) to quantify the amplitudes of QBO, AO, and SAO signals with the help of a simple multivariate regression model. The analysis is carried out with SCIAMACHY data covering all latitudes with the exception of polar nights, when measurements are not available. The overall global yield is approximately 10,000 profiles per month, which are binned into 10° steps with one zonal mean profile being calculated per day and per latitude bin.

  13. A methodology for designing robust multivariable nonlinear control systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Grunberg, D. B.

    1986-01-01

    A new methodology is described for the design of nonlinear dynamic controllers for nonlinear multivariable systems providing guarantees of closed-loop stability, performance, and robustness. The methodology is an extension of the Linear-Quadratic-Gaussian with Loop-Transfer-Recovery (LQG/LTR) methodology for linear systems, thus hinging upon the idea of constructing an approximate inverse operator for the plant. A major feature of the methodology is a unification of both the state-space and input-output formulations. In addition, new results on stability theory, nonlinear state estimation, and optimal nonlinear regulator theory are presented, including the guaranteed global properties of the extended Kalman filter and optimal nonlinear regulators.

  14. Discrimination and prediction of cultivation age and parts of Panax ginseng by Fourier-transform infrared spectroscopy combined with multivariate statistical analysis.

    PubMed

    Lee, Byeong-Ju; Kim, Hye-Youn; Lim, Sa Rang; Huang, Linfang; Choi, Hyung-Kyoon

    2017-01-01

    Panax ginseng C.A. Meyer is a herb used for medicinal purposes, and its discrimination according to cultivation age has been an important and practical issue. This study employed Fourier-transform infrared (FT-IR) spectroscopy with multivariate statistical analysis to obtain a prediction model for discriminating cultivation ages (5 and 6 years) and three different parts (rhizome, tap root, and lateral root) of P. ginseng. The optimal partial-least-squares regression (PLSR) models for discriminating ginseng samples were determined by selecting normalization methods, number of partial-least-squares (PLS) components, and variable influence on projection (VIP) cutoff values. The best prediction model for discriminating 5- and 6-year-old ginseng was developed using tap root, vector normalization applied after the second differentiation, one PLS component, and a VIP cutoff of 1.0 (based on the lowest root-mean-square error of prediction value). In addition, for discriminating among the three parts of P. ginseng, optimized PLSR models were established using data sets obtained from vector normalization, two PLS components, and VIP cutoff values of 1.5 (for 5-year-old ginseng) and 1.3 (for 6-year-old ginseng). To our knowledge, this is the first study to provide a novel strategy for rapidly discriminating the cultivation ages and parts of P. ginseng using FT-IR by selected normalization methods, number of PLS components, and VIP cutoff values.

  16. Root Cause Analysis of Quality Defects Using HPLC-MS Fingerprint Knowledgebase for Batch-to-batch Quality Control of Herbal Drugs.

    PubMed

    Yan, Binjun; Fang, Zhonghua; Shen, Lijuan; Qu, Haibin

    2015-01-01

    The batch-to-batch quality consistency of herbal drugs has always been an important issue. This study proposes a methodology for batch-to-batch quality control based on HPLC-MS fingerprints and a process knowledgebase. The extraction process of Compound E-jiao Oral Liquid was taken as a case study. After establishing the HPLC-MS fingerprint analysis method, the fingerprints of the extract solutions produced under normal and abnormal operation conditions were obtained. Multivariate statistical models were built for fault detection, and a discriminant analysis model was built using the probabilistic discriminant partial-least-squares method for fault diagnosis. Based on multivariate statistical analysis, process knowledge was acquired and the cause-effect relationship between process deviations and quality defects was revealed. The quality defects were detected successfully by multivariate statistical control charts and the types of process deviations were diagnosed correctly by discriminant analysis. This work has demonstrated the benefits of combining HPLC-MS fingerprints, process knowledge and multivariate analysis for the quality control of herbal drugs. Copyright © 2015 John Wiley & Sons, Ltd.

  17. The association between body mass index and severe biliary infections: a multivariate analysis.

    PubMed

    Stewart, Lygia; Griffiss, J McLeod; Jarvis, Gary A; Way, Lawrence W

    2012-11-01

    Obesity has been associated with worse infectious disease outcomes. It is a risk factor for cholesterol gallstones, but little is known about associations between body mass index (BMI) and biliary infections. We studied this using factors associated with biliary infections. A total of 427 patients with gallstones were studied. Gallstones, bile, and blood (as applicable) were cultured. Illness severity was classified as follows: none (no infection or inflammation), systemic inflammatory response syndrome (fever, leukocytosis), severe (abscess, cholangitis, empyema), or multi-organ dysfunction syndrome (bacteremia, hypotension, organ failure). Associations between BMI and biliary bacteria, bacteremia, gallstone type, and illness severity were examined using bivariate and multivariate analysis. BMI inversely correlated with pigment stones, biliary bacteria, bacteremia, and increased illness severity on bivariate and multivariate analysis. Obesity correlated with less severe biliary infections. BMI inversely correlated with pigment stones and biliary bacteria; multivariate analysis showed an independent correlation between lower BMI and illness severity. Most patients with severe biliary infections had a normal BMI, suggesting that obesity may be protective in biliary infections. This study examined the correlation between BMI and biliary infection severity. Published by Elsevier Inc.

  18. Does tip-of-the-tongue for proper names discriminate amnestic mild cognitive impairment?

    PubMed

    Juncos-Rabadán, Onésimo; Facal, David; Lojo-Seoane, Cristina; Pereiro, Arturo X

    2013-04-01

    Difficulty in retrieving people's names is very common in the early stages of Alzheimer's disease and mild cognitive impairment. Such difficulty is often observed as the tip-of-the-tongue (TOT) phenomenon. The main aim of this study was to explore whether a famous people's naming task that elicited the TOT state can be used to discriminate between amnestic mild cognitive impairment (aMCI) patients and normal controls. Eighty-four patients with aMCI and 106 normal controls aged over 50 years performed a task involving naming 50 famous people shown in pictures. Univariate and multivariate regression analyses were used to study the relationships between aMCI and semantic and phonological measures in the TOT paradigm. Univariate regression analyses revealed that all TOT measures significantly predicted aMCI. Multivariate analysis of all these measures correctly classified 70% of controls (specificity) and 71.6% of aMCI patients (sensitivity), with an AUC (area under curve ROC) value of 0.74, but only the phonological measure remained significant. This classification value was similar to that obtained with the Semantic verbal fluency test. TOTs for proper names may effectively discriminate aMCI patients from normal controls through measures that represent one of the naming processes affected, that is, phonological access.

  19. Strategies for Optimal Control Design of Normal Acceleration Command Following on the F-16

    DTIC Science & Technology

    1992-12-01

    Padé approximation. This approximation has a pole at -40, and introduces a nonminimum phase zero at +40. In deriving the equation for normal acceleration...input signal. The mean not being exactly zero will surface in some simulation plots, but does not alter the point of showing general trends. Also...closer to reality, I will know that my goal has been accomplished. My honest belief is that general mixed H2/H∞ optimization is the methodology of

  20. A hybrid clustering approach for multivariate time series - A case study applied to failure analysis in a gas turbine.

    PubMed

    Fontes, Cristiano Hora; Budman, Hector

    2017-11-01

    A clustering problem involving multivariate time series (MTS) requires the selection of similarity metrics. This paper shows the limitations of the PCA similarity factor (SPCA) as a single metric in nonlinear problems where there are differences in magnitude of the same process variables due to expected changes in operation conditions. A novel method for clustering MTS based on a combination between SPCA and the average-based Euclidean distance (AED) within a fuzzy clustering approach is proposed. Case studies involving either simulated or real industrial data collected from a large scale gas turbine are used to illustrate that the hybrid approach enhances the ability to recognize normal and fault operating patterns. This paper also proposes an oversampling procedure to create synthetic multivariate time series that can be useful in commonly occurring situations involving unbalanced data sets. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  1. Arrowheaded enhanced multivariance products representation for matrices (AEMPRM): Specifically focusing on infinite matrices and converting arrowheadedness to tridiagonality

    NASA Astrophysics Data System (ADS)

    Özdemir, Gizem; Demiralp, Metin

    2015-12-01

    In this work, the Enhanced Multivariance Products Representation (EMPR) approach, an extension by Demiralp and his group of Sobol's High Dimensional Model Representation (HDMR), is used as the basic tool. Its discrete forms have also been developed and used in practice by Demiralp and his group, in addition to some other authors, for the decomposition of arrays such as vectors, matrices, or multiway arrays. This work specifically focuses on the decomposition of infinite matrices involving denumerably infinitely many rows and columns. To this end the target matrix is first decomposed into a sum of certain outer products, and then each outer product is treated by Tridiagonal Matrix Enhanced Multivariance Products Representation (TMEMPR), which was developed by Demiralp and his group. The result is a three-matrix-factor product whose kernel (the middle factor) is an arrowheaded matrix, while the pre and post factors are invertible matrices composed of the support vectors of TMEMPR. This new method is called the Arrowheaded Enhanced Multivariance Products Representation for Matrices. The general purpose is the approximation of denumerably infinite matrices with the new method.

  2. Considerations in cross-validation type density smoothing with a look at some data

    NASA Technical Reports Server (NTRS)

    Schuster, E. F.

    1982-01-01

    Experience gained in applying nonparametric maximum likelihood techniques of density estimation to judge the comparative quality of various estimators is reported. Two univariate data sets of one hundred samples each (one Cauchy, one natural normal) are considered, as well as studies in the multivariate case.

  3. Fitting and Testing Conditional Multinormal Partial Credit Models

    ERIC Educational Resources Information Center

    Hessen, David J.

    2012-01-01

    A multinormal partial credit model for factor analysis of polytomously scored items with ordered response categories is derived using an extension of the Dutch Identity (Holland in "Psychometrika" 55:5-18, 1990). In the model, latent variables are assumed to have a multivariate normal distribution conditional on unweighted sums of item…

  4. Simultaneous Inference Procedures for Means.

    ERIC Educational Resources Information Center

    Krishnaiah, P. R.

    Some aspects of simultaneous tests for means are reviewed. Specifically, the comparison of univariate or multivariate normal populations based on the values of the means or mean vectors when the variances or covariance matrices are equal is discussed. Tukey's and Dunnett's tests for multiple comparisons of means, Scheffe's method of examining…

  5. Disfluency in Spasmodic Dysphonia: A Multivariate Analysis.

    ERIC Educational Resources Information Center

    Cannito, Michael P.; Burch, Annette Renee; Watts, Christopher; Rappold, Patrick W.; Hood, Stephen B.; Sherrard, Kyla

    1997-01-01

    This study examined visual analog scaling judgments of disfluency by normal listeners in response to oral reading by 20 adults with spasmodic dysphonia (SD) and nondysphonic controls. Findings suggest that although dysfluency is not a defining feature of SD, it does contribute significantly to the overall clinical impression of severity of the…

  6. Statistical inferences for data from studies conducted with an aggregated multivariate outcome-dependent sample design

    PubMed Central

    Lu, Tsui-Shan; Longnecker, Matthew P.; Zhou, Haibo

    2016-01-01

    An outcome-dependent sampling (ODS) scheme is a cost-effective sampling scheme in which one observes the exposure with a probability that depends on the outcome. Well-known such designs are the case-control design for binary responses, the case-cohort design for failure time data, and the general ODS design for a continuous response. While substantial work has been done for the univariate response case, statistical inference and design for ODS with multivariate cases remain under-developed. Motivated by the need in biological studies to take advantage of the available responses for subjects in a cluster, we propose a multivariate outcome-dependent sampling (Multivariate-ODS) design that is based on a general selection of the continuous responses within a cluster. The proposed inference procedure for the Multivariate-ODS design is semiparametric, with all the underlying distributions of covariates modeled nonparametrically using empirical likelihood methods. We show that the proposed estimator is consistent and establish its asymptotic normality. Simulation studies show that the proposed estimator is more efficient than the estimator obtained using only the simple-random-sample portion of the Multivariate-ODS or the estimator from a simple random sample with the same sample size. The Multivariate-ODS design, together with the proposed estimator, provides an approach to further improve study efficiency for a given fixed study budget. We illustrate the proposed design and estimator with an analysis of the association of PCB exposure with hearing loss in children in the Collaborative Perinatal Study. PMID:27966260

  7. Caspase-3 activity, response to chemotherapy and clinical outcome in patients with colon cancer.

    PubMed

    de Oca, Javier; Azuara, Daniel; Sanchez-Santos, Raquel; Navarro, Matilde; Capella, Gabriel; Moreno, Victor; Sola, Anna; Hotter, Georgina; Biondo, Sebastiano; Osorio, Alfonso; Martí-Ragué, Joan; Rafecas, Antoni

    2008-01-01

    The prognostic value of the degree of apoptosis in colorectal cancer is controversial. This study evaluates the putative clinical usefulness of measuring caspase-3 activity as a prognostic factor in colonic cancer patients receiving 5-fluorouracil adjuvant chemotherapy. We evaluated caspase-3-like protease activity in tumours and in normal colon tissue. Specimens were studied from 54 patients. These patients had either stage III cancer (Dukes stage C) or high-risk stage II cancer (Dukes stage B2 with invasion of adjacent organs, lymphatic or vascular infiltration or carcinoembryonic antigen [CEA] >5). Median follow-up was 73 months. Univariate analysis was first performed to explore the relation of different variables (age, sex, preoperative CEA, tumour size, Dukes stage, vascular invasion, lymphatic invasion, caspase-3 activity in tumour and caspase-3 activity in normal mucosa) as prognostic factors of tumour recurrence after chemotherapy treatment. Subsequently, a multivariate Cox regression model was performed. Median values of caspase-3 activity in tumours were more than twice those in normal mucosa (88.1 vs 40.6 U, p=0.001), showing a statistically significant correlation (r=0.34). Significant prognostic factors of recurrence in multivariate analysis were: male sex (odds ratio, OR=3.53 [1.13-10.90], p=0.02), age (OR=1.09 [1.01-1.18], p=0.03), Dukes stage (OR=1.93 [1.01-3.70]), caspase-3 activity in normal mucosa (OR=1.02 [1.01-1.04], p=0.017) and caspase-3 activity in tumour (OR=1.02 [1.01-1.03], p=0.013). Low caspase-3 activity in the normal mucosa and tumour are independent prognostic factors of tumour recurrence in patients receiving adjuvant 5-fluorouracil-based treatment in colon cancer, correlating with poor disease-free survival and higher recurrence rate.

  8. Multivariate Copula Analysis Toolbox (MvCAT): Describing dependence and underlying uncertainty using a Bayesian framework

    NASA Astrophysics Data System (ADS)

    Sadegh, Mojtaba; Ragno, Elisa; AghaKouchak, Amir

    2017-06-01

    We present a newly developed Multivariate Copula Analysis Toolbox (MvCAT) which includes a wide range of copula families with different levels of complexity. MvCAT employs a Bayesian framework with a residual-based Gaussian likelihood function for inferring copula parameters and estimating the underlying uncertainties. The contribution of this paper is threefold: (a) providing a Bayesian framework to approximate the predictive uncertainties of fitted copulas, (b) introducing a hybrid-evolution Markov Chain Monte Carlo (MCMC) approach designed for numerical estimation of the posterior distribution of copula parameters, and (c) enabling the community to explore a wide range of copulas and evaluate them relative to the fitting uncertainties. We show that the commonly used local optimization methods for copula parameter estimation often get trapped in local minima. The proposed method, however, addresses this limitation and improves describing the dependence structure. MvCAT also enables evaluation of uncertainties relative to the length of record, which is fundamental to a wide range of applications such as multivariate frequency analysis.

  9. Linear, multivariable robust control with a mu perspective

    NASA Technical Reports Server (NTRS)

    Packard, Andy; Doyle, John; Balas, Gary

    1993-01-01

    The structured singular value is a linear algebra tool developed to study a particular class of matrix perturbation problems arising in robust feedback control of multivariable systems. These perturbations are called linear fractional, and are a natural way to model many types of uncertainty in linear systems, including state-space parameter uncertainty, multiplicative and additive unmodeled dynamics uncertainty, and coprime factor and gap metric uncertainty. The structured singular value theory provides a natural extension of classical SISO robustness measures and concepts to MIMO systems. The structured singular value analysis, coupled with approximate synthesis methods, make it possible to study the tradeoff between performance and uncertainty that occurs in all feedback systems. In MIMO systems, the complexity of the spatial interactions in the loop gains make it difficult to heuristically quantify the tradeoffs that must occur. This paper examines the role played by the structured singular value (and its computable bounds) in answering these questions, as well as its role in the general robust, multivariable control analysis and design problem.

  10. Elemental analysis of tissue pellets for the differentiation of epidermal lesion and normal skin by laser-induced breakdown spectroscopy

    PubMed Central

    Moon, Youngmin; Han, Jung Hyun; Shin, Sungho; Kim, Yong-Chul; Jeong, Sungho

    2016-01-01

    By laser induced breakdown spectroscopy (LIBS) analysis of epidermal lesion and dermis tissue pellets of hairless mouse, it is shown that Ca intensity in the epidermal lesion is higher than that in dermis, whereas Na and K intensities have an opposite tendency. It is demonstrated that epidermal lesion and normal dermis can be differentiated with high selectivity either by univariate or multivariate analysis of LIBS spectra with an intensity ratio difference by factor of 8 or classification accuracy over 0.995, respectively. PMID:27231610

  11. Box-Cox transformation of firm size data in statistical analysis

    NASA Astrophysics Data System (ADS)

    Chen, Ting Ting; Takaishi, Tetsuya

    2014-03-01

    Firm size data usually do not show the normality that is often assumed in statistical analyses such as regression analysis. In this study we focus on two firm size measures: the number of employees and sales. Both deviate considerably from a normal distribution. To improve the normality of these data we transform them by the Box-Cox transformation with appropriate parameters. The Box-Cox transformation parameters are determined so that the transformed data best show the kurtosis of a normal distribution. It is found that the two firm size measures transformed by the Box-Cox transformation show strong linearity. This indicates that the number of employees and sales have similar properties as firm size indicators. The Box-Cox parameters obtained for the firm size data are found to be very close to zero, in which case the Box-Cox transformation is approximately a log-transformation. This suggests that the firm size data we used are approximately log-normally distributed.
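    The procedure described above is simple to sketch. Below, a grid search picks the Box-Cox parameter whose transformed data has kurtosis closest to 3 (the kurtosis of a normal distribution), mirroring the abstract's criterion; the grid and the synthetic log-normal sample are illustrative assumptions, not the authors' data or exact procedure:

```python
import math
import random

def box_cox(x, lam):
    # Box-Cox transform; the lam -> 0 limit is the natural logarithm.
    return math.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def kurtosis(data):
    # Sample kurtosis (a normal distribution has kurtosis 3).
    n = len(data)
    m = sum(data) / n
    var = sum((v - m) ** 2 for v in data) / n
    return sum((v - m) ** 4 for v in data) / (n * var ** 2)

def best_lambda(data, grid):
    # Choose the lambda whose transformed data is closest to normal kurtosis.
    return min(grid, key=lambda lam: abs(kurtosis([box_cox(v, lam) for v in data]) - 3.0))

random.seed(1)
sample = [math.exp(random.gauss(0.0, 1.0)) for _ in range(5000)]  # toy log-normal "firm sizes"
lam_hat = best_lambda(sample, [i / 10 for i in range(-10, 11)])
print(lam_hat)  # a value near 0 indicates an approximate log-transformation
```

For genuinely log-normal data the selected parameter should sit near zero, which is exactly the empirical finding the abstract reports for its firm size data.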

  12. High-Frequency Normal Mode Propagation in Aluminum Cylinders

    USGS Publications Warehouse

    Lee, Myung W.; Waite, William F.

    2009-01-01

    Acoustic measurements made using compressional-wave (P-wave) and shear-wave (S-wave) transducers in aluminum cylinders reveal waveform features with high amplitudes and with velocities that depend on the feature's dominant frequency. In a given waveform, high-frequency features generally arrive earlier than low-frequency features, typical for normal mode propagation. To analyze these waveforms, the elastic equation is solved in a cylindrical coordinate system for the high-frequency case in which the acoustic wavelength is small compared to the cylinder geometry, and the surrounding medium is air. Dispersive P- and S-wave normal mode propagations are predicted to exist, but owing to complex interference patterns inside a cylinder, the phase and group velocities are not smooth functions of frequency. To assess the normal mode group velocities and relative amplitudes, approximate dispersion relations are derived using Bessel functions. The utility of the normal mode theory and approximations from a theoretical and experimental standpoint are demonstrated by showing how the sequence of P- and S-wave normal mode arrivals can vary between samples of different size, and how fundamental normal modes can be mistaken for the faster, but significantly smaller amplitude, P- and S-body waves from which P- and S-wave speeds are calculated.

  13. Reparametrization-based estimation of genetic parameters in multi-trait animal model using Integrated Nested Laplace Approximation.

    PubMed

    Mathew, Boby; Holand, Anna Marie; Koistinen, Petri; Léon, Jens; Sillanpää, Mikko J

    2016-02-01

    A novel reparametrization-based INLA approach as a fast alternative to MCMC for the Bayesian estimation of genetic parameters in multivariate animal model is presented. Multi-trait genetic parameter estimation is a relevant topic in animal and plant breeding programs because multi-trait analysis can take into account the genetic correlation between different traits and that significantly improves the accuracy of the genetic parameter estimates. Generally, multi-trait analysis is computationally demanding and requires initial estimates of genetic and residual correlations among the traits, while those are difficult to obtain. In this study, we illustrate how to reparametrize covariance matrices of a multivariate animal model/animal models using modified Cholesky decompositions. This reparametrization-based approach is used in the Integrated Nested Laplace Approximation (INLA) methodology to estimate genetic parameters of multivariate animal model. Immediate benefits are: (1) to avoid difficulties of finding good starting values for analysis which can be a problem, for example in Restricted Maximum Likelihood (REML); (2) Bayesian estimation of (co)variance components using INLA is faster to execute than using Markov Chain Monte Carlo (MCMC) especially when realized relationship matrices are dense. The slight drawback is that priors for covariance matrices are assigned for elements of the Cholesky factor but not directly to the covariance matrix elements as in MCMC. Additionally, we illustrate the concordance of the INLA results with the traditional methods like MCMC and REML approaches. We also present results obtained from simulated data sets with replicates and field data in rice.

  14. Multivariate Markov chain modeling for stock markets

    NASA Astrophysics Data System (ADS)

    Maskawa, Jun-ichi

    2003-06-01

    We study a multivariate Markov chain model as a stochastic model of the price changes of portfolios in the framework of the mean field approximation. The time series of price changes are coded into the sequences of up and down spins according to their signs. We start with the discussion for small portfolios consisting of two stock issues. The generalization of our model to arbitrary size of portfolio is constructed by a recurrence relation. The resultant form of the joint probability of the stationary state coincides with Gibbs measure assigned to each configuration of spin glass model. Through the analysis of actual portfolios, it has been shown that the synchronization of the direction of the price changes is well described by the model.
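    The coding step of such a model, mapping price changes to up/down spins and measuring how often two issues move together, can be sketched as follows; the 80% common-move generator is a made-up toy, not the paper's mean-field model:

```python
import random

def to_spins(prices):
    # Code price changes as up (+1) / down (-1) spins; unchanged prices are skipped.
    spins = []
    for prev, cur in zip(prices, prices[1:]):
        if cur != prev:
            spins.append(1 if cur > prev else -1)
    return spins

def sign_agreement(spins_a, spins_b):
    # Fraction of steps on which two issues move in the same direction --
    # a crude proxy for the synchronization the model describes.
    n = min(len(spins_a), len(spins_b))
    return sum(1 for a, b in zip(spins_a, spins_b) if a == b) / n

# Two hypothetical correlated issues: each follows a common +/-1 move 80% of
# the time, so the expected sign agreement is 0.8^2 + 0.2^2 = 0.68.
random.seed(7)
common = [random.choice([-1, 1]) for _ in range(1000)]
prices_a, prices_b = [100.0], [100.0]
for s in common:
    prices_a.append(prices_a[-1] + (s if random.random() < 0.8 else -s))
    prices_b.append(prices_b[-1] + (s if random.random() < 0.8 else -s))
spins_a, spins_b = to_spins(prices_a), to_spins(prices_b)
print(round(sign_agreement(spins_a, spins_b), 2))
```

The paper goes much further (a mean-field joint probability of Gibbs form over the spin configurations); this sketch only shows the sign-coding of price series that the model starts from.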

  15. PI-RADS version 2: Preoperative role in the detection of normal-sized pelvic lymph node metastasis in prostate cancer.

    PubMed

    Park, Sung Yoon; Shin, Su-Jin; Jung, Dae Chul; Cho, Nam Hoon; Choi, Young Deuk; Rha, Koon Ho; Hong, Sung Joon; Oh, Young Taik

    2017-06-01

    To analyze whether Prostate Imaging Reporting and Data System (PI-RADSv2) scores are associated with a risk of normal-sized pelvic lymph node metastasis (PLNM) in prostate cancer (PCa). A consecutive series of 221 patients who underwent magnetic resonance imaging and radical prostatectomy with pelvic lymph node dissection (PLND) for PCa were retrospectively analyzed with the approval of our institutional review board. No patients had enlarged (≥0.8 cm in short-axis diameter) lymph nodes. Clinical parameters [prostate-specific antigen (PSA), greatest percentage of biopsy core, and percentage of positive cores], and PI-RADSv2 score from two independent readers were analyzed with multivariate logistic regression and receiver operating-characteristic curve for PLNM. Diagnostic performance of PI-RADSv2 and the Briganti nomogram was compared. Weighted kappa was investigated for PI-RADSv2 scoring. Normal-sized PLNM was found in 9.5% (21/221) of patients. In multivariate analysis, PI-RADSv2 (reader 1, p=0.009; reader 2, p=0.026) and PSA (reader 1, p=0.008; reader 2, p=0.037) were predictive of normal-sized PLNM. The threshold for PI-RADSv2 was a score of 5, where PI-RADSv2 was associated with high sensitivity (reader 1, 95.2% [20/21]; reader 2, 90.5% [19/21]) and negative predictive value (reader 1, 99.2% [124/125]; reader 2, 98.6% [136/138]). However, diagnostic performance of PI-RADSv2 (AUC=0.786-0.788) was significantly lower than that of the Briganti nomogram (AUC=0.890) for normal-sized PLNM (p<0.05). The inter-reader agreement was excellent for whether PI-RADSv2 was 5 or not (weighted kappa=0.804). PI-RADSv2 scores may be associated with the risk of normal-sized PLNM in PCa. Copyright © 2017. Published by Elsevier B.V.

  16. Drought forecasting in Luanhe River basin involving climatic indices

    NASA Astrophysics Data System (ADS)

    Ren, Weinan; Wang, Yixuan; Li, Jianzhu; Feng, Ping; Smith, Ronald J.

    2017-11-01

Drought is regarded as one of the most severe natural disasters globally. This is especially the case in Tianjin City, Northern China, where drought can affect economic development and people's livelihoods. Drought forecasting, the basis of drought management, is an important mitigation strategy. In this paper, we develop a probabilistic forecasting model that forecasts transition probabilities from a current Standardized Precipitation Index (SPI) value to a future SPI class, based on the conditional distribution of the multivariate normal distribution so that two large-scale climatic indices can be incorporated simultaneously, and apply the forecasting model to 26 rain gauges in the Luanhe River basin in North China. The establishment of the model and the derivation of the SPI rest on the hypothesis that aggregated monthly precipitation is normally distributed. Pearson correlation and Shapiro-Wilk normality tests are used to select an appropriate SPI time scale and large-scale climatic indices. Findings indicated that longer-term aggregated monthly precipitation was, in general, more likely to be normally distributed, and that forecasting models should be applied to each gauge individually rather than to the whole basin. Taking Liying Gauge as an example, we illustrate the impact of the SPI time scale and lead time on transition probabilities. The controlling climatic indices for each gauge are then selected by the Pearson correlation test, and the multivariate normality of the SPI, the corresponding climatic indices for the current month, and the SPI 1, 2, and 3 months later is demonstrated using the Shapiro-Wilk normality test. Subsequently, we illustrate the impact of large-scale oceanic-atmospheric circulation patterns on transition probabilities. Finally, we use a score method to evaluate and compare the performance of the three forecasting models and compare them with two traditional models that forecast transition probabilities from a current to a future SPI class. The results show that the three proposed models outperform the two traditional models and that involving large-scale climatic indices can improve forecasting accuracy.
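The conditional-distribution step at the heart of this forecasting approach can be sketched in a few lines. The lag correlation, current SPI value, and class cutoffs below are illustrative assumptions, not values from the study:

```python
# Sketch: given a bivariate normal model of (current SPI, future SPI), the
# conditional distribution of the future SPI is again normal, and transition
# probabilities to SPI classes are differences of normal CDFs.
import numpy as np
from scipy.stats import norm

rho = 0.6          # hypothetical correlation between SPI now and SPI k months ahead
spi_now = -1.2     # hypothetical observed current SPI value

# Standard conditional-normal result for standardized margins:
cond_mean = rho * spi_now
cond_sd = np.sqrt(1.0 - rho**2)

# Common SPI drought-class cutoffs (illustrative):
classes = {
    "severe+":     (-np.inf, -1.5),
    "moderate":    (-1.5, -1.0),
    "mild":        (-1.0, 0.0),
    "non-drought": (0.0, np.inf),
}
trans_prob = {
    name: norm.cdf(hi, cond_mean, cond_sd) - norm.cdf(lo, cond_mean, cond_sd)
    for name, (lo, hi) in classes.items()
}
# The class probabilities form a full transition row and sum to one.
```

Conditioning additionally on two climatic indices, as in the paper, replaces the bivariate model with a higher-dimensional multivariate normal but leaves the CDF-difference step unchanged.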

  17. Normal and compound poisson approximations for pattern occurrences in NGS reads.

    PubMed

    Zhai, Zhiyuan; Reinert, Gesine; Song, Kai; Waterman, Michael S; Luan, Yihui; Sun, Fengzhu

    2012-06-01

    Next generation sequencing (NGS) technologies are now widely used in many biological studies. In NGS, sequence reads are randomly sampled from the genome sequence of interest. Most computational approaches for NGS data first map the reads to the genome and then analyze the data based on the mapped reads. Since many organisms have unknown genome sequences and many reads cannot be uniquely mapped to the genomes even if the genome sequences are known, alternative analytical methods are needed for the study of NGS data. Here we suggest using word patterns to analyze NGS data. Word pattern counting (the study of the probabilistic distribution of the number of occurrences of word patterns in one or multiple long sequences) has played an important role in molecular sequence analysis. However, no studies are available on the distribution of the number of occurrences of word patterns in NGS reads. In this article, we build probabilistic models for the background sequence and the sampling process of the sequence reads from the genome. Based on the models, we provide normal and compound Poisson approximations for the number of occurrences of word patterns from the sequence reads, with bounds on the approximation error. The main challenge is to consider the randomness in generating the long background sequence, as well as in the sampling of the reads using NGS. We show the accuracy of these approximations under a variety of conditions for different patterns with various characteristics. Under realistic assumptions, the compound Poisson approximation seems to outperform the normal approximation in most situations. These approximate distributions can be used to evaluate the statistical significance of the occurrence of patterns from NGS data. The theory and the computational algorithm for calculating the approximate distributions are then used to analyze ChIP-Seq data using transcription factor GABP. 
Software is available online (www-rcf.usc.edu/∼fsun/Programs/NGS_motif_power/NGS_motif_power.html). In addition, Supplementary Material can be found online (www.liebertonline.com/cmb).
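A toy Monte Carlo check of the mean pattern count in random reads (the pattern, read length, and read count are arbitrary choices; the authors' approximation-error bounds are not reproduced here):

```python
# Count overlapping occurrences of a short word in reads sampled from an
# i.i.d. uniform background, and compare the empirical mean to the
# closed-form expectation under that background model.
import random

random.seed(0)
PATTERN = "ACG"
READ_LEN, N_READS, N_TRIALS = 100, 20, 500

def count_in_reads():
    # Total (overlapping) occurrences of PATTERN across one set of random reads.
    total = 0
    for _ in range(N_READS):
        read = "".join(random.choices("ACGT", k=READ_LEN))
        total += sum(read.startswith(PATTERN, i)
                     for i in range(READ_LEN - len(PATTERN) + 1))
    return total

counts = [count_in_reads() for _ in range(N_TRIALS)]
mean = sum(counts) / N_TRIALS
var = sum((c - mean) ** 2 for c in counts) / (N_TRIALS - 1)

# Each of the READ_LEN - 2 positions per read matches with probability (1/4)^3.
expected = N_READS * (READ_LEN - len(PATTERN) + 1) * 0.25 ** len(PATTERN)
```

For self-overlapping patterns, occurrences arrive in clumps, which is why the compound Poisson approximation studied in the article can beat the plain normal one.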

  18. Finding Groups Using Model-Based Cluster Analysis: Heterogeneous Emotional Self-Regulatory Processes and Heavy Alcohol Use Risk

    ERIC Educational Resources Information Center

    Mun, Eun Young; von Eye, Alexander; Bates, Marsha E.; Vaschillo, Evgeny G.

    2008-01-01

Model-based cluster analysis is a clustering procedure for investigating population heterogeneity using finite mixtures of multivariate normal densities. It is an inferentially based, statistically principled procedure that allows comparison of nonnested models, using the Bayesian information criterion to compare multiple models and identify the…
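A minimal numeric sketch of BIC-based comparison of finite mixtures of multivariate normal densities. The data are simulated, and a crude hard split stands in for a full EM fit, so this illustrates only the model-comparison step, not the clustering procedure itself:

```python
# Compare the BIC of a one-component multivariate normal fit against a crude
# two-component fit on clearly bimodal data; lower BIC is better.
import numpy as np
from scipy.stats import multivariate_normal as mvn

rng = np.random.default_rng(0)
d, n = 2, 400
X = np.vstack([rng.normal(-3, 1, (n // 2, d)), rng.normal(3, 1, (n // 2, d))])

def bic_one(X):
    mu, cov = X.mean(0), np.cov(X.T)
    ll = mvn.logpdf(X, mu, cov).sum()
    p = d + d * (d + 1) // 2                  # mean + covariance parameters
    return -2 * ll + p * np.log(len(X))

def bic_two(X):
    # A hard split on the first coordinate stands in for EM here (assumption).
    left, right = X[X[:, 0] < 0], X[X[:, 0] >= 0]
    ll = 0.0
    for part in (left, right):
        w = len(part) / len(X)
        ll += (np.log(w) + mvn.logpdf(part, part.mean(0), np.cov(part.T))).sum()
    p = 2 * (d + d * (d + 1) // 2) + 1        # two components + mixing weight
    return -2 * ll + p * np.log(len(X))

two_preferred = bic_two(X) < bic_one(X)       # True on well-separated clusters
```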

  19. Hierarchical Multinomial Processing Tree Models: A Latent-Trait Approach

    ERIC Educational Resources Information Center

    Klauer, Karl Christoph

    2010-01-01

    Multinomial processing tree models are widely used in many areas of psychology. A hierarchical extension of the model class is proposed, using a multivariate normal distribution of person-level parameters with the mean and covariance matrix to be estimated from the data. The hierarchical model allows one to take variability between persons into…

  20. Measurement of Physiologic Glucose Levels Using Raman Spectroscopy in a Rabbit Aqueous Humor Model

    NASA Technical Reports Server (NTRS)

    Lambert, J.; Storrie-Lombardi, M.; Borchert, M.

    1998-01-01

We have elicited a reliable glucose signature in mammalian physiological ranges using near-infrared Raman laser excitation at 785 nm and multivariate analysis. In a recent series of experiments we measured glucose levels in an artificial aqueous humor in the range from 0.5 to 13X normal values.

  1. Fine-Tuning Cross-Battery Assessment Procedures: After Follow-Up Testing, Use All Valid Scores, Cohesive or Not

    ERIC Educational Resources Information Center

    Schneider, W. Joel; Roman, Zachary

    2018-01-01

    We used data simulations to test whether composites consisting of cohesive subtest scores are more accurate than composites consisting of divergent subtest scores. We demonstrate that when multivariate normality holds, divergent and cohesive scores are equally accurate. Furthermore, excluding divergent scores results in biased estimates of…

  2. Estimation of Latent Group Effects: Psychometric Technical Report No. 2.

    ERIC Educational Resources Information Center

    Mislevy, Robert J.

Conventional methods of multivariate normal analysis do not apply when the variables of interest are not observed directly, but must be inferred from fallible or incomplete data. For example, responses to mental test items may depend upon latent aptitude variables, which are modeled in turn as functions of demographic effects in the population. A…

  3. Robust Optimum Invariant Tests for Random MANOVA Models.

    DTIC Science & Technology

    1986-10-01

are assumed to be independent normal with zero mean and dispersions σ² and σ₁² respectively. Roy and Gnanadesikan (1959) considered the problem of... Part II: The multivariate case. Ann. Math. Statist. 31, 939-968. [7] Roy, S.N. and Gnanadesikan, R. (1959). Some contributions to ANOVA in one or more

  4. Buried landmine detection using multivariate normal clustering

    NASA Astrophysics Data System (ADS)

    Duston, Brian M.

    2001-10-01

    A Bayesian classification algorithm is presented for discriminating buried land mines from buried and surface clutter in Ground Penetrating Radar (GPR) signals. This algorithm is based on multivariate normal (MVN) clustering, where feature vectors are used to identify populations (clusters) of mines and clutter objects. The features are extracted from two-dimensional images created from ground penetrating radar scans. MVN clustering is used to determine the number of clusters in the data and to create probability density models for target and clutter populations, producing the MVN clustering classifier (MVNCC). The Bayesian Information Criteria (BIC) is used to evaluate each model to determine the number of clusters in the data. An extension of the MVNCC allows the model to adapt to local clutter distributions by treating each of the MVN cluster components as a Poisson process and adaptively estimating the intensity parameters. The algorithm is developed using data collected by the Mine Hunter/Killer Close-In Detector (MH/K CID) at prepared mine lanes. The Mine Hunter/Killer is a prototype mine detecting and neutralizing vehicle developed for the U.S. Army to clear roads of anti-tank mines.
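The classification step of such an MVN-based Bayesian classifier can be sketched as follows. The two-dimensional feature clusters and equal priors are hypothetical stand-ins for the GPR-derived features, and a single cluster per class replaces the multi-cluster model:

```python
# Bayes classification with one multivariate normal density per class:
# assign a feature vector to the class maximizing prior times MVN density.
import numpy as np
from scipy.stats import multivariate_normal as mvn

rng = np.random.default_rng(1)
# Hypothetical 2-D feature vectors for the two populations.
mines = rng.multivariate_normal([2, 2], [[0.5, 0.0], [0.0, 0.5]], 300)
clutter = rng.multivariate_normal([-1, -1], [[1.0, 0.3], [0.3, 1.0]], 300)

# Fit one MVN per class (mean, covariance, prior); assumed equal priors.
models = {
    "mine": (mines.mean(0), np.cov(mines.T), 0.5),
    "clutter": (clutter.mean(0), np.cov(clutter.T), 0.5),
}

def classify(x):
    # Bayes rule: argmax over classes of log prior + MVN log-density.
    return max(models, key=lambda c: np.log(models[c][2])
               + mvn.logpdf(x, models[c][0], models[c][1]))

label_a = classify([2.1, 1.8])    # falls in the "mine" cluster
label_b = classify([-1.2, -0.9])  # falls in the "clutter" cluster
```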

  5. Evaluating effects of methylphenidate on brain activity in cocaine addiction: a machine-learning approach

    NASA Astrophysics Data System (ADS)

    Rish, Irina; Bashivan, Pouya; Cecchi, Guillermo A.; Goldstein, Rita Z.

    2016-03-01

The objective of this study is to investigate effects of methylphenidate on brain activity in individuals with cocaine use disorder (CUD) using functional MRI (fMRI). Methylphenidate hydrochloride (MPH) is an indirect dopamine agonist commonly used for treating attention deficit/hyperactivity disorders; it was also shown to have some positive effects in CUD subjects, such as improved stop signal reaction times associated with better control/inhibition [1], as well as normalized task-related brain activity [2] and resting-state functional connectivity in specific areas [3]. While prior fMRI studies of MPH in CUD have focused on mass-univariate statistical hypothesis testing, this paper evaluates multivariate, whole-brain effects of MPH as captured by the generalization (prediction) accuracy of different classification techniques applied to features extracted from resting-state functional networks (e.g., node degrees). Our multivariate predictive results based on the resting-state data from [3] suggest that MPH tends to normalize network properties such as voxel degrees in CUD subjects, thus providing additional evidence for potential benefits of MPH in treating cocaine addiction.

  6. Surrogacy assessment using principal stratification when surrogate and outcome measures are multivariate normal.

    PubMed

    Conlon, Anna S C; Taylor, Jeremy M G; Elliott, Michael R

    2014-04-01

In clinical trials, a surrogate outcome variable (S) can be measured before the outcome of interest (T) and may provide early information regarding the treatment (Z) effect on T. Using the principal surrogacy framework introduced by Frangakis and Rubin (2002. Principal stratification in causal inference. Biometrics 58, 21-29), we consider an approach that has a causal interpretation and develop a Bayesian estimation strategy for surrogate validation when the joint distribution of potential surrogate and outcome measures is multivariate normal. From the joint conditional distribution of the potential outcomes of T, given the potential outcomes of S, we propose surrogacy validation measures. As the model is not fully identifiable from the data, we propose some reasonable prior distributions and assumptions that can be placed on weakly identified parameters to aid in estimation. We explore the relationship between our surrogacy measures and those proposed by Prentice (1989. Surrogate endpoints in clinical trials: definition and operational criteria. Statistics in Medicine 8, 431-440). The method is applied to data from a macular degeneration study and an ovarian cancer study.

  7. Surrogacy assessment using principal stratification when surrogate and outcome measures are multivariate normal

    PubMed Central

    Conlon, Anna S. C.; Taylor, Jeremy M. G.; Elliott, Michael R.

    2014-01-01

In clinical trials, a surrogate outcome variable (S) can be measured before the outcome of interest (T) and may provide early information regarding the treatment (Z) effect on T. Using the principal surrogacy framework introduced by Frangakis and Rubin (2002. Principal stratification in causal inference. Biometrics 58, 21–29), we consider an approach that has a causal interpretation and develop a Bayesian estimation strategy for surrogate validation when the joint distribution of potential surrogate and outcome measures is multivariate normal. From the joint conditional distribution of the potential outcomes of T, given the potential outcomes of S, we propose surrogacy validation measures. As the model is not fully identifiable from the data, we propose some reasonable prior distributions and assumptions that can be placed on weakly identified parameters to aid in estimation. We explore the relationship between our surrogacy measures and those proposed by Prentice (1989. Surrogate endpoints in clinical trials: definition and operational criteria. Statistics in Medicine 8, 431–440). The method is applied to data from a macular degeneration study and an ovarian cancer study. PMID:24285772

  8. Estimation of value at risk in currency exchange rate portfolio using asymmetric GJR-GARCH Copula

    NASA Astrophysics Data System (ADS)

    Nurrahmat, Mohamad Husein; Noviyanti, Lienda; Bachrudin, Achmad

    2017-03-01

In this study, we discuss the problem of measuring the risk in a portfolio, based on value at risk (VaR), using an asymmetric GJR-GARCH copula. The approach is based on the consideration that the assumption of normality of returns over time cannot be fulfilled, and that there is non-linear correlation in the dependence structure among the variables, which leads to inaccurate VaR estimates. Moreover, the leverage effect causes an asymmetric effect on the dynamic variance and exposes a weakness of standard GARCH models, which impose a symmetric effect on the conditional variance. Asymmetric GJR-GARCH models are used to filter the margins, while copulas are used to link them together into a multivariate distribution. We then use copulas to construct flexible multivariate distributions with different marginal and dependence structures, so that the portfolio joint distribution does not depend on the assumptions of normality and linear correlation. The VaR obtained by the analysis at the 95% confidence level is 0.005586. This VaR is derived from the best copula model, a Student's t copula with t-distributed margins.
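A simplified simulation sketch of copula-based VaR estimation. The GARCH filtering of the margins is omitted, and the correlation, degrees of freedom, and volatility scale below are assumptions, not the paper's fitted values:

```python
# Sample two-asset returns through a t-copula with t margins, then read off
# the one-day 95% VaR as the (negated) 5% quantile of portfolio returns.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, nu = 100_000, 5                       # simulations, degrees of freedom (assumed)
R = np.array([[1.0, 0.6], [0.6, 1.0]])   # assumed correlation between the two assets
L = np.linalg.cholesky(R)

# Sample a bivariate t-copula: multivariate t draws mapped to uniforms by the t CDF.
z = rng.standard_normal((n, 2)) @ L.T
w = rng.chisquare(nu, size=(n, 1)) / nu
t_samples = z / np.sqrt(w)
u = stats.t.cdf(t_samples, df=nu)

# Transform to assumed t-distributed marginal returns (~1% scale).
returns = 0.01 * stats.t.ppf(u, df=nu)
portfolio = returns.mean(axis=1)         # equally weighted two-asset portfolio

var_95 = -np.quantile(portfolio, 0.05)   # one-day 95% VaR as a positive loss
```

In the paper's full pipeline, the uniforms would come from GJR-GARCH-filtered standardized residuals rather than directly from the copula's own margins.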

  9. The association between a body shape index and cardiovascular risk in overweight and obese children and adolescents.

    PubMed

    Mameli, Chiara; Krakauer, Nir Y; Krakauer, Jesse C; Bosetti, Alessandra; Ferrari, Chiara Matilde; Moiana, Norma; Schneider, Laura; Borsani, Barbara; Genoni, Teresa; Zuccotti, Gianvincenzo

    2018-01-01

A Body Shape Index (ABSI) and normalized hip circumference (Hip Index, HI) have recently been shown to be strong risk factors for mortality and cardiovascular disease in adults. We conducted an observational cross-sectional study to evaluate the relationship between ABSI, HI, cardiometabolic risk factors, and obesity-related comorbidities in overweight and obese children and adolescents aged 2-18 years. We performed multivariate linear and logistic regression analyses with age- and sex-normalized z scores of BMI, ABSI, and HI as predictors to examine the association with cardiometabolic risk markers (systolic and diastolic blood pressure, fasting glucose and insulin, total cholesterol and its components, transaminases, and fat mass percentage detected by bioelectrical impedance analysis) and obesity-related conditions (including hepatic steatosis and metabolic syndrome). We recruited 217 patients (114 males), mean age 11.3 years. Multivariate linear regression showed a significant association of the ABSI z score with 10 out of 15 risk markers expressed as continuous variables, while the BMI z score showed a significant correlation with 9 and HI with only 1. In multivariate logistic regression to predict the occurrence of obesity-related conditions and above-threshold values of risk factors, the BMI z score was significantly correlated with 7 out of 12, ABSI with 5, and HI with 1. Overall, ABSI is an independent anthropometric index that was significantly associated with cardiometabolic risk markers in a pediatric population affected by overweight and obesity.

  10. Body composition status and the risk of migraine: A meta-analysis.

    PubMed

    Gelaye, Bizu; Sacco, Simona; Brown, Wendy J; Nitchie, Haley L; Ornello, Raffaele; Peterlin, B Lee

    2017-05-09

    To evaluate the association between migraine and body composition status as estimated based on body mass index and WHO physical status categories. Systematic electronic database searches were conducted for relevant studies. Two independent reviewers performed data extraction and quality appraisal. Odds ratios (OR) and confidence intervals (CI) were pooled using a random effects model. Significant values, weighted effect sizes, and tests of homogeneity of variance were calculated. A total of 12 studies, encompassing data from 288,981 unique participants, were included. The age- and sex-adjusted pooled risk of migraine in those with obesity was increased by 27% compared with those of normal weight (odds ratio [OR] 1.27; 95% confidence interval [CI] 1.16-1.37, p < 0.001) and remained increased after multivariate adjustments. Although the age- and sex-adjusted pooled migraine risk was increased in overweight individuals (OR 1.08; 95% CI 1.04, 1.12, p < 0.001), significance was lost after multivariate adjustments. The age- and sex-adjusted pooled risk of migraine in underweight individuals was marginally increased by 13% compared with those of normal weight (OR 1.13; 95% CI 1.02, 1.24, p < 0.001) and remained increased after multivariate adjustments. The current body of evidence shows that the risk of migraine is increased in obese and underweight individuals. Studies are needed to confirm whether interventions that modify obesity status decrease the risk of migraine. © 2017 American Academy of Neurology.
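The random-effects pooling used in such meta-analyses can be illustrated with the DerSimonian-Laird estimator. The study-level odds ratios and confidence intervals below are hypothetical, not the meta-analysis data:

```python
# DerSimonian-Laird random-effects pooling of odds ratios on the log scale.
import math

# Hypothetical (OR, CI lower, CI upper) triples for three studies.
studies = [(1.30, 1.10, 1.54), (1.15, 0.95, 1.39), (1.40, 1.12, 1.75)]

# Log-ORs and standard errors recovered from the 95% CI width.
y = [math.log(or_) for or_, _, _ in studies]
se = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for _, lo, hi in studies]
w = [1 / s**2 for s in se]

# Fixed-effect pooled estimate and Cochran's Q heterogeneity statistic.
ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))

# DerSimonian-Laird between-study variance, then random-effects weights.
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(y) - 1)) / c)
w_re = [1 / (s**2 + tau2) for s in se]
pooled_or = math.exp(sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re))
```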

  11. Statistical inferences for data from studies conducted with an aggregated multivariate outcome-dependent sample design.

    PubMed

    Lu, Tsui-Shan; Longnecker, Matthew P; Zhou, Haibo

    2017-03-15

Outcome-dependent sampling (ODS) is a cost-effective sampling scheme in which one observes the exposure with a probability that depends on the outcome. Well-known examples are the case-control design for binary responses, the case-cohort design for failure time data, and the general ODS design for a continuous response. While substantial work has been carried out for the univariate response case, statistical inference and design for ODS with multivariate responses remain under-developed. Motivated by the need in biological studies to take advantage of the available responses for subjects in a cluster, we propose a multivariate outcome-dependent sampling (multivariate-ODS) design that is based on a general selection of the continuous responses within a cluster. The proposed inference procedure for the multivariate-ODS design is semiparametric: all the underlying distributions of covariates are modeled nonparametrically using empirical likelihood methods. We show that the proposed estimator is consistent and derive its asymptotic normality properties. Simulation studies show that the proposed estimator is more efficient than the estimator obtained using only the simple-random-sample portion of the multivariate-ODS or the estimator from a simple random sample with the same sample size. The multivariate-ODS design, together with the proposed estimator, provides an approach to further improve study efficiency for a given fixed study budget. We illustrate the proposed design and estimator with an analysis of the association of polychlorinated biphenyl exposure with hearing loss in children born into the Collaborative Perinatal Study. Copyright © 2016 John Wiley & Sons, Ltd.

  12. General Multivariate Linear Modeling of Surface Shapes Using SurfStat

    PubMed Central

    Chung, Moo K.; Worsley, Keith J.; Nacewicz, Brendon M.; Dalton, Kim M.; Davidson, Richard J.

    2010-01-01

Although there are many imaging studies on traditional ROI-based amygdala volumetry, there are very few studies modeling amygdala shape variations. This paper presents a unified computational and statistical framework for modeling amygdala shape variations in a clinical population. The weighted spherical harmonic representation is used to parameterize, smooth, and normalize amygdala surfaces. The representation is subsequently used as an input for multivariate linear models accounting for nuisance covariates such as age and brain size difference, using the SurfStat package, which completely avoids the complexity of specifying design matrices. The methodology has been applied to quantifying abnormal local amygdala shape variations in 22 high-functioning autistic subjects. PMID:20620211

  13. Convergence of the standard RLS method and UDUT factorisation of covariance matrix for solving the algebraic Riccati equation of the DLQR via heuristic approximate dynamic programming

    NASA Astrophysics Data System (ADS)

    Moraes Rêgo, Patrícia Helena; Viana da Fonseca Neto, João; Ferreira, Ernesto M.

    2015-08-01

The main focus of this article is a proposal to solve, via UDUT factorisation, the convergence and numerical stability problems related to the covariance-matrix ill-conditioning of the recursive least squares (RLS) approach for online approximations of the algebraic Riccati equation (ARE) solution associated with the discrete linear quadratic regulator (DLQR) problem, formulated in the actor-critic reinforcement learning and approximate dynamic programming context. The parameterisations of the Bellman equation, utility function and dynamic system, together with the algebra of the Kronecker product, assemble a framework for the solution of the DLQR problem. The condition number and the positivity parameter of the covariance matrix are associated with statistical metrics for evaluating the approximation performance of the ARE solution via RLS-based estimators. The performance of the RLS approximators is also evaluated in terms of consistency and polarisation when associated with reinforcement learning methods. The methodology contemplates realisations of online designs for DLQR controllers that are evaluated in a multivariable dynamic system model.

  14. 1H NMR Metabolomics Study of Spleen from C57BL/6 Mice Exposed to Gamma Radiation

    PubMed Central

    Xiao, X; Hu, M; Liu, M; Hu, JZ

    2016-01-01

Due to the potential risk of accidental exposure to gamma radiation, it is critical to identify biomarkers of radiation exposure. In the present study, NMR-based metabolomics was combined with multivariate data analysis to evaluate metabolite changes in the C57BL/6 mouse spleen 4 days after whole-body exposure to 3.0 Gy and 7.8 Gy gamma radiation. Principal component analysis (PCA) and orthogonal projection to latent structures analysis (OPLS) were employed for classification and for identification of potential biomarkers associated with gamma irradiation. Two different strategies for NMR spectral data reduction (i.e., spectral binning and spectral deconvolution) were combined with normalization to constant sum and to unit weight, respectively, before multivariate data analysis. The combination of spectral deconvolution and normalization to unit weight was the best way to identify discriminatory metabolites between the irradiation and control groups; normalization to constant sum may yield pseudo-biomarkers. PCA and OPLS results showed that the exposed groups can be well separated from the control group. Leucine, 2-aminobutyrate, valine, lactate, arginine, glutathione, 2-oxoglutarate, creatine, tyrosine, phenylalanine, π-methylhistidine, taurine, myoinositol, glycerol and uracil are significantly elevated, while ADP is significantly decreased. These significantly changed metabolites are associated with multiple metabolic pathways and may be potential biomarkers in the spleen exposed to gamma irradiation. PMID:27019763
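The two normalization strategies compared in this study can be sketched on toy binned spectra. The data, group sizes, and elevated-bin "metabolite signal" below are simulated assumptions:

```python
# Normalize toy "binned NMR spectra" two ways, then project with PCA (via SVD)
# and check that control and exposed groups separate along the first component.
import numpy as np

rng = np.random.default_rng(3)
# 20 samples x 50 bins; the "exposed" group (last 10) gets elevated bins 5..9.
X = rng.gamma(2.0, 1.0, size=(20, 50))
X[10:, 5:10] += 5.0

def normalize(X, mode):
    if mode == "constant_sum":        # each spectrum sums to 1 (total-area norm.)
        return X / X.sum(axis=1, keepdims=True)
    if mode == "unit_weight":         # each spectrum scaled to unit Euclidean norm
        return X / np.linalg.norm(X, axis=1, keepdims=True)
    raise ValueError(mode)

def pca_scores(X, k=2):
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T

scores = pca_scores(normalize(X, "unit_weight"))
ctrl, expo = scores[:10, 0], scores[10:, 0]   # groups separate along PC1
```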

  15. 1H NMR metabolomics study of spleen from C57BL/6 mice exposed to gamma radiation

    DOE PAGES

    Xiao, Xiongjie; Hu, M.; Liu, M.; ...

    2016-01-27

Due to the potential risk of accidental exposure to gamma radiation, it is critical to identify biomarkers of radiation exposure. In the present study, NMR-based metabolomics was combined with multivariate data analysis to evaluate metabolite changes in the C57BL/6 mouse spleen 4 days after whole-body exposure to 3.0 Gy and 7.8 Gy gamma radiation. Principal component analysis (PCA) and orthogonal projection to latent structures analysis (OPLS) were employed for classification and for identification of potential biomarkers associated with gamma irradiation. Two different strategies for NMR spectral data reduction (i.e., spectral binning and spectral deconvolution) were combined with normalization to constant sum and to unit weight, respectively, before multivariate data analysis. The combination of spectral deconvolution and normalization to unit weight was the best way to identify discriminatory metabolites between the irradiation and control groups; normalization to constant sum may yield pseudo-biomarkers. PCA and OPLS results showed that the exposed groups can be well separated from the control group. Leucine, 2-aminobutyrate, valine, lactate, arginine, glutathione, 2-oxoglutarate, creatine, tyrosine, phenylalanine, π-methylhistidine, taurine, myoinositol, glycerol and uracil are significantly elevated, while ADP is significantly decreased. As a result, these significantly changed metabolites are associated with multiple metabolic pathways and may be potential biomarkers in the spleen exposed to gamma irradiation.

  16. Parameter estimation of multivariate multiple regression model using bayesian with non-informative Jeffreys’ prior distribution

    NASA Astrophysics Data System (ADS)

    Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.

    2018-05-01

The Bayesian method can be used to estimate the parameters of a multivariate multiple regression model. It involves two distributions: the prior and the posterior. The posterior distribution is influenced by the choice of prior distribution. Jeffreys' prior is a kind of non-informative prior distribution, used when no information about the parameter is available. The non-informative Jeffreys' prior is combined with the sample information to produce the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of a multivariate regression model using the Bayesian method with the non-informative Jeffreys' prior. The parameter estimates of β and Σ are obtained as the expected values of the corresponding marginal posterior distributions, which are multivariate normal and inverse Wishart, respectively. However, calculating these expected values involves integrals that are difficult to evaluate analytically. Therefore, random samples are generated according to the posterior distribution of each parameter using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
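Under a Jeffreys prior of the form p(β, Σ) ∝ |Σ|^(-(d+1)/2), a Gibbs sampler alternates a matrix-normal draw for β with an inverse-Wishart draw for Σ. A minimal sketch on simulated data (the dimensions, true coefficients, and iteration counts are assumptions, not the paper's setup):

```python
# Gibbs sampling for multivariate regression Y = X B + E under a Jeffreys prior:
# B | Sigma is matrix normal around the OLS fit; Sigma | B is inverse Wishart.
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(7)
n, q, d = 200, 2, 2                       # observations, predictors, responses
X = np.column_stack([np.ones(n), rng.normal(size=n)])
B_true = np.array([[1.0, -0.5], [2.0, 0.8]])
Y = X @ B_true + rng.multivariate_normal([0, 0], [[0.3, 0.1], [0.1, 0.2]], n)

XtX_inv = np.linalg.inv(X.T @ X)
B_hat = XtX_inv @ X.T @ Y                 # OLS estimate; also the conditional mean
Lx = np.linalg.cholesky(XtX_inv)

Sigma = np.cov((Y - X @ B_hat).T)         # initial value
draws = []
for it in range(600):
    # B | Sigma, Y: matrix normal, row cov (X'X)^-1, column cov Sigma.
    Z = rng.standard_normal((q, d))
    B = B_hat + Lx @ Z @ np.linalg.cholesky(Sigma).T
    # Sigma | B, Y: inverse Wishart with df = n, scale = residual SSP matrix.
    E = (Y - X @ B).T @ (Y - X @ B)
    Sigma = invwishart.rvs(df=n, scale=E, random_state=rng)
    if it >= 100:                         # discard burn-in draws
        draws.append(B)

B_post = np.mean(draws, axis=0)           # posterior mean estimate of B
```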

  17. A Dynamic Intrusion Detection System Based on Multivariate Hotelling's T2 Statistics Approach for Network Environments

    PubMed Central

    Avalappampatty Sivasamy, Aneetha; Sundan, Bose

    2015-01-01

    The ever expanding communication requirements in today's world demand extensive and efficient network systems with equally efficient and reliable security features integrated for safe, confident, and secured communication and data transfer. Providing effective security protocols for any network environment, therefore, assumes paramount importance. Attempts are made continuously for designing more efficient and dynamic network intrusion detection models. In this work, an approach based on Hotelling's T2 method, a multivariate statistical analysis technique, has been employed for intrusion detection, especially in network environments. Components such as preprocessing, multivariate statistical analysis, and attack detection have been incorporated in developing the multivariate Hotelling's T2 statistical model and necessary profiles have been generated based on the T-square distance metrics. With a threshold range obtained using the central limit theorem, observed traffic profiles have been classified either as normal or attack types. Performance of the model, as evaluated through validation and testing using KDD Cup'99 dataset, has shown very high detection rates for all classes with low false alarm rates. Accuracy of the model presented in this work, in comparison with the existing models, has been found to be much better. PMID:26357668
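The T-square distance and threshold classification described here can be sketched as follows. The three-feature traffic profile is simulated, not the KDD Cup'99 data, and an empirical quantile stands in for the central-limit-theorem threshold:

```python
# Hotelling's T^2 anomaly scoring: Mahalanobis-type distance to the mean of a
# "normal traffic" profile, with a threshold from the training-score quantile.
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical traffic-feature vectors (e.g. packet counts, durations, rates).
train = rng.multivariate_normal([10, 5, 2], np.diag([4, 1, 0.5]), 500)

mu = train.mean(axis=0)
S_inv = np.linalg.inv(np.cov(train.T))

def t_squared(x):
    diff = np.asarray(x) - mu
    return float(diff @ S_inv @ diff)

# Threshold chosen as the 99th percentile of T^2 scores on normal traffic.
scores = np.array([t_squared(x) for x in train])
threshold = np.quantile(scores, 0.99)

normal_obs = [10.5, 5.2, 1.9]    # close to the normal profile
attack_obs = [30.0, 0.0, 10.0]   # far from it
```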

  18. A Dynamic Intrusion Detection System Based on Multivariate Hotelling's T2 Statistics Approach for Network Environments.

    PubMed

    Sivasamy, Aneetha Avalappampatty; Sundan, Bose

    2015-01-01

    The ever expanding communication requirements in today's world demand extensive and efficient network systems with equally efficient and reliable security features integrated for safe, confident, and secured communication and data transfer. Providing effective security protocols for any network environment, therefore, assumes paramount importance. Attempts are made continuously for designing more efficient and dynamic network intrusion detection models. In this work, an approach based on Hotelling's T(2) method, a multivariate statistical analysis technique, has been employed for intrusion detection, especially in network environments. Components such as preprocessing, multivariate statistical analysis, and attack detection have been incorporated in developing the multivariate Hotelling's T(2) statistical model and necessary profiles have been generated based on the T-square distance metrics. With a threshold range obtained using the central limit theorem, observed traffic profiles have been classified either as normal or attack types. Performance of the model, as evaluated through validation and testing using KDD Cup'99 dataset, has shown very high detection rates for all classes with low false alarm rates. Accuracy of the model presented in this work, in comparison with the existing models, has been found to be much better.

  19. The choice of prior distribution for a covariance matrix in multivariate meta-analysis: a simulation study.

    PubMed

    Hurtado Rúa, Sandra M; Mazumdar, Madhu; Strawderman, Robert L

    2015-12-30

    Bayesian meta-analysis is an increasingly important component of clinical research, with multivariate meta-analysis a promising tool for studies with multiple endpoints. Model assumptions, including the choice of priors, are crucial aspects of multivariate Bayesian meta-analysis (MBMA) models. In a given model, two different prior distributions can lead to different inferences about a particular parameter. A simulation study was performed in which the impact of families of prior distributions for the covariance matrix of a multivariate normal random effects MBMA model was analyzed. Inferences about effect sizes were not particularly sensitive to prior choice, but the related covariance estimates were. A few families of prior distributions with small relative biases, tight mean squared errors, and close to nominal coverage for the effect size estimates were identified. Our results demonstrate the need for sensitivity analysis and suggest some guidelines for choosing prior distributions in this class of problems. The MBMA models proposed here are illustrated in a small meta-analysis example from the periodontal field and a medium meta-analysis from the study of stroke. Copyright © 2015 John Wiley & Sons, Ltd. Copyright © 2015 John Wiley & Sons, Ltd.

  20. Syngeneic Schwann cell transplantation preserves vision in RCS rat without immunosuppression.

    PubMed

    McGill, Trevor J; Lund, Raymond D; Douglas, Robert M; Wang, Shaomei; Lu, Bin; Silver, Byron D; Secretan, Matt R; Arthur, Jennifer N; Prusky, Glen T

    2007-04-01

    To evaluate the efficacy of immunologically compatible Schwann cells transplanted without immunosuppression in the RCS rat retina to preserve vision. Syngeneic (dystrophic RCS) Schwann cells harvested from sciatic nerves were cultured and transplanted into one eye of dystrophic RCS rats at an early stage of retinal degeneration. Allogeneic (Long-Evans) Schwann cells and unoperated eyes served as controls. Vision through transplanted and unoperated eyes was then quantified using two visual behavior tasks, one measuring the spatial frequency and contrast sensitivity thresholds of the optokinetic response (OKR) and the other measuring grating acuity in a perception task. Spatial frequency thresholds measured through syngeneically transplanted eyes maintained near normal spatial frequency sensitivity for approximately 30 weeks, whereas thresholds through control eyes deteriorated to less than 20% of normal over the same period. Contrast sensitivity was preserved through syngeneically transplanted eyes better than through allogeneic and unoperated eyes, at all spatial frequencies. Grating acuity measured through syngeneically transplanted eyes was maintained at approximately 60% of normal, whereas acuity of allogeneically transplanted eyes was significantly lower at approximately 40% of normal. The ability of immunoprivileged Schwann cell transplants to preserve vision in RCS rats indicates that transplantation of syngeneic Schwann cells holds promise as a preventive treatment for retinal degenerative disease.

  1. Multilevel Sequential Monte Carlo Samplers for Normalizing Constants

    DOE PAGES

    Moral, Pierre Del; Jasra, Ajay; Law, Kody J. H.; ...

    2017-08-24

    This article considers the sequential Monte Carlo (SMC) approximation of ratios of normalizing constants associated to posterior distributions which in principle rely on continuum models. Therefore, the Monte Carlo estimation error and the discrete approximation error must be balanced. A multilevel strategy is utilized to substantially reduce the cost to obtain a given error level in the approximation as compared to standard estimators. Two estimators are considered and relative variance bounds are given. The theoretical results are numerically illustrated for two Bayesian inverse problems arising from elliptic partial differential equations (PDEs). The examples involve the inversion of observations of the solution of (i) a 1-dimensional Poisson equation to infer the diffusion coefficient, and (ii) a 2-dimensional Poisson equation to infer the external forcing.

  2. Sample entropy analysis of cervical neoplasia gene-expression signatures

    PubMed Central

    Botting, Shaleen K; Trzeciakowski, Jerome P; Benoit, Michelle F; Salama, Salama A; Diaz-Arrastia, Concepcion R

    2009-01-01

    Background We introduce Approximate Entropy as a mathematical method of analysis for microarray data. Approximate entropy is applied here as a method to classify the complex gene expression patterns resulting from a clinical sample set. Since entropy is a measure of disorder in a system, we believe that by choosing genes which display minimum entropy in normal controls and maximum entropy in the cancerous sample set we will be able to distinguish those genes which display the greatest variability in the cancerous set. Here we describe a method of utilizing Approximate Sample Entropy (ApSE) analysis to identify genes of interest with the highest probability of producing an accurate, predictive classification model from our data set. Results In the development of a diagnostic gene-expression profile for cervical intraepithelial neoplasia (CIN) and squamous cell carcinoma of the cervix, we identified 208 genes which are unchanging in all normal tissue samples, yet exhibit a random pattern indicative of the genetic instability and heterogeneity of malignant cells. This may be measured in terms of the ApSE when compared to normal tissue. We have validated 10 of these genes on 10 normal and 20 cancer and CIN3 samples. We report that the predictive value of the sample entropy calculation for these 10 genes of interest is promising (75% sensitivity, 80% specificity for prediction of cervical cancer over CIN3). Conclusion The success of the Approximate Sample Entropy approach in discerning alterations in complexity from a biological system with such a relatively small sample set, and in extracting biologically relevant genes of interest, holds great promise. PMID:19232110
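    The entropy screen in the abstract above can be sketched in a few lines. Below is a minimal stdlib illustration of the sample entropy statistic (SampEn) that the ApSE approach builds on; the template length m = 2 and tolerance r = 0.2 are conventional defaults rather than the paper's settings, and the two toy series are invented for illustration.

    ```python
    import math
    import random

    def sample_entropy(series, m=2, r=0.2):
        """SampEn(m, r): negative log of the conditional probability that
        two subsequences matching for m points (Chebyshev distance <= r)
        also match for m + 1 points."""
        n = len(series)

        def count_matches(length):
            templates = [series[i:i + length] for i in range(n - length + 1)]
            count = 0
            for i in range(len(templates)):
                for j in range(i + 1, len(templates)):
                    if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                        count += 1
            return count

        b = count_matches(m)
        a = count_matches(m + 1)
        if a == 0 or b == 0:
            return float("inf")  # too few matches to estimate
        return -math.log(a / b)

    # A perfectly regular signal scores near 0; an irregular one scores higher.
    random.seed(1)
    regular = [0.0, 1.0] * 30
    noisy = [random.random() for _ in range(60)]
    ```

    On these toy inputs the alternating series yields a much smaller SampEn than the random one, which is the property the abstract exploits: low entropy across normal controls, high entropy across cancerous samples.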

  3. Approximate Uncertainty Modeling in Risk Analysis with Vine Copulas

    PubMed Central

    Bedford, Tim; Daneshkhah, Alireza

    2015-01-01

    Many applications of risk analysis require us to jointly model multiple uncertain quantities. Bayesian networks and copulas are two common approaches to modeling joint uncertainties with probability distributions. This article focuses on new methodologies for copulas by developing work of Cooke, Bedford, Kurowicka, and others on vines as a way of constructing higher dimensional distributions that do not suffer from some of the restrictions of alternatives such as the multivariate Gaussian copula. The article provides a fundamental approximation result, demonstrating that we can approximate any density as closely as we like using vines. It further operationalizes this result by showing how minimum information copulas can be used to provide parametric classes of copulas that have such good levels of approximation. We extend previous approaches using vines by considering nonconstant conditional dependencies, which are particularly relevant in financial risk modeling. We discuss how such models may be quantified, in terms of expert judgment or by fitting data, and illustrate the approach by modeling two financial data sets. PMID:26332240

  4. The Covariance Adjustment Approaches for Combining Incomparable Cox Regressions Caused by Unbalanced Covariates Adjustment: A Multivariate Meta-Analysis Study.

    PubMed

    Dehesh, Tania; Zare, Najaf; Ayatollahi, Seyyed Mohammad Taghi

    2015-01-01

    The univariate meta-analysis (UM) procedure, a technique that provides a single overall result, has become increasingly popular. Neglecting the existence of other concomitant covariates in the models leads to loss of treatment efficiency. Our aim was to propose four new approximation approaches for the covariance matrix of the coefficients, which is not readily available for the multivariate generalized least squares (MGLS) method as a multivariate meta-analysis approach. We evaluated the efficiency of the four new approaches, including zero correlation (ZC), common correlation (CC), estimated correlation (EC), and multivariate multilevel correlation (MMC), on the estimation bias, mean square error (MSE), and 95% probability coverage of the confidence interval (CI) in the synthesis of Cox proportional hazards model coefficients in a simulation study. Comparing the results of the simulation study on the MSE, bias, and CI of the estimated coefficients indicated that the MMC approach was the most accurate procedure compared to the EC, CC, and ZC procedures. The precision ranking of the four approaches according to all the above settings was MMC ≥ EC ≥ CC ≥ ZC. This study highlights the advantages of MGLS meta-analysis over the UM approach. The results suggested the use of the MMC procedure to overcome the lack of information for having a complete covariance matrix of the coefficients.

  5. Declaratoria del IV Congreso Nacional de Educacion Normal (Declaration of the Fourth National Congress on Normal Education).

    ERIC Educational Resources Information Center

    El Maestro, Mexico, 1970

    1970-01-01

    This document is an English-language abstract (approximately 1,500 words) of a declaration drawn up by the participants of the Fourth Mexican National Congress of Normal Education. The declaration points out the importance of teacher training in the educational system, the fundamental problems presently facing this level of studies and the…

  6. Serum potassium is a predictor of incident diabetes in African Americans with normal aldosterone: the Jackson Heart Study12

    PubMed Central

    Chatterjee, Ranee; Davenport, Clemontina A; Svetkey, Laura P; Batch, Bryan C; Lin, Pao-Hwa; Ramachandran, Vasan S; Fox, Ervin R; Harman, Jane; Yeh, Hsin-Chieh; Selvin, Elizabeth; Correa, Adolfo; Butler, Kenneth; Edelman, David

    2017-01-01

    Background: Low-normal potassium is a risk factor for diabetes and may account for some of the racial disparity in diabetes risk. Aldosterone affects serum potassium and is associated with insulin resistance. Objectives: We sought to confirm the association between potassium and incident diabetes in an African-American cohort, and to determine the effect of aldosterone on this association. Design: We studied participants from the Jackson Heart Study, an African-American adult cohort, who were without diabetes at baseline. With the use of logistic regression, we characterized the associations of serum, dietary, and urinary potassium with incident diabetes. In addition, we evaluated aldosterone as a potential effect modifier of these associations. Results: Of 2157 participants, 398 developed diabetes over 8 y. In a minimally adjusted model, serum potassium was a significant predictor of incident diabetes (OR: 0.83; 95% CI: 0.74, 0.92 per SD increment in serum potassium). In multivariable models, we found a significant interaction between serum potassium and aldosterone (P = 0.046). In stratified multivariable models, in those with normal aldosterone (<9 ng/dL, n = 1163), participants in the highest 2 potassium quartiles had significantly lower odds of incident diabetes than did those in the lowest potassium quartile [OR (95% CI): 0.61 (0.39, 0.97) and 0.54 (0.33, 0.90), respectively]. Among those with high-normal aldosterone (≥9 ng/dL, n = 202), we found no significant association between serum potassium and incident diabetes. In these stratified models, serum aldosterone was not a significant predictor of incident diabetes. We found no statistically significant associations between dietary or urinary potassium and incident diabetes. Conclusions: In this African-American cohort, we found that aldosterone may modify the association between serum potassium and incident diabetes. 
In participants with normal aldosterone, high-normal serum potassium was associated with a lower risk of diabetes than was low-normal serum potassium. Additional studies are warranted to determine whether serum potassium is a modifiable risk factor that could be a target for diabetes prevention. This trial was registered at clinicaltrials.gov as NCT00415415. PMID:27974310

  7. Trojan dynamics well approximated by a new Hamiltonian normal form

    NASA Astrophysics Data System (ADS)

    Páez, Rocío Isabel; Locatelli, Ugo

    2015-10-01

    We revisit a classical perturbative approach to the Hamiltonian related to the motions of Trojan bodies, in the framework of the planar circular restricted three-body problem, by introducing a number of key new ideas in the formulation. In some sense, we adapt the approach of Garfinkel to the context of the normal form theory and its modern techniques. First, we make use of Delaunay variables for a physically accurate representation of the system. Therefore, we introduce a novel manipulation of the variables so as to respect the natural behaviour of the model. We develop a normalization procedure over the fast angle which exploits the fact that singularities in this model are essentially related to the slow angle. Thus, we produce a new normal form, i.e. an integrable approximation to the Hamiltonian. We emphasize some practical examples of the applicability of our normalizing scheme, e.g. the estimation of the stable libration region. Finally, we compare the level curves produced by our normal form with surfaces of section provided by the integration of the non-normalized Hamiltonian, with very good agreement. Further precision tests are also provided. In addition, we give a step-by-step description of the algorithm, allowing for extensions to more complicated models.

  8. Multivariate Bayesian analysis of Gaussian, right censored Gaussian, ordered categorical and binary traits using Gibbs sampling

    PubMed Central

    Korsgaard, Inge Riis; Lund, Mogens Sandø; Sorensen, Daniel; Gianola, Daniel; Madsen, Per; Jensen, Just

    2003-01-01

    A fully Bayesian analysis using Gibbs sampling and data augmentation in a multivariate model of Gaussian, right censored, and grouped Gaussian traits is described. The grouped Gaussian traits are either ordered categorical traits (with more than two categories) or binary traits, where the grouping is determined via thresholds on the underlying Gaussian scale, the liability scale. Allowances are made for unequal models, unknown covariance matrices and missing data. Having outlined the theory, strategies for implementation are reviewed. These include joint sampling of location parameters; efficient sampling from the fully conditional posterior distribution of augmented data, a multivariate truncated normal distribution; and sampling from the conditional inverse Wishart distribution, the fully conditional posterior distribution of the residual covariance matrix. Finally, a simulated dataset was analysed to illustrate the methodology. This paper concentrates on a model where residuals associated with liabilities of the binary traits are assumed to be independent. A Bayesian analysis using Gibbs sampling is outlined for the model where this assumption is relaxed. PMID:12633531
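    One building block of the data-augmentation step described above, drawing a latent liability from a normal distribution restricted by category thresholds, can be sketched with the inverse-CDF method. This is a generic illustration rather than the paper's implementation; the parameters and truncation bounds below are arbitrary.

    ```python
    import random
    from statistics import NormalDist

    def truncated_normal(mu, sigma, low, high, rng=random):
        """One draw from N(mu, sigma^2) restricted to [low, high]: map a
        uniform draw through the inverse CDF of the untruncated normal,
        restricted to the interval [F(low), F(high)]."""
        nd = NormalDist(mu, sigma)
        a, b = nd.cdf(low), nd.cdf(high)
        u = a + (b - a) * rng.random()
        return nd.inv_cdf(u)

    # E.g. liabilities for a binary trait observed as "success", with the
    # threshold at 0 and an upper bound of 4 for illustration:
    random.seed(0)
    draws = [truncated_normal(0.0, 1.0, 0.0, 4.0, random) for _ in range(2000)]
    ```

    Within a Gibbs sweep such a draw would use the trait's fully conditional mean and variance and the thresholds implied by the observed category; here the draws are simply checked against the target interval.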

  9. Diagonal dominance for the multivariable Nyquist array using function minimization

    NASA Technical Reports Server (NTRS)

    Leininger, G. G.

    1977-01-01

    A new technique for the design of multivariable control systems using the multivariable Nyquist array method was developed. A conjugate direction function minimization algorithm is utilized to achieve a diagonal dominant condition over the extended frequency range of the control system. The minimization is performed on the ratio of the moduli of the off-diagonal terms to the moduli of the diagonal terms of either the inverse or direct open loop transfer function matrix. Several new feedback design concepts were also developed, including: (1) dominance control parameters for each control loop; (2) compensator normalization to evaluate open loop conditions for alternative design configurations; and (3) an interaction index to determine the degree and type of system interaction when all feedback loops are closed simultaneously. This new design capability was implemented on an IBM 360/75 in a batch mode but can be easily adapted to an interactive computer facility. The method was applied to the Pratt and Whitney F100 turbofan engine.

  10. [Monitoring method of extraction process for Schisandrae Chinensis Fructus based on near infrared spectroscopy and multivariate statistical process control].

    PubMed

    Xu, Min; Zhang, Lei; Yue, Hong-Shui; Pang, Hong-Wei; Ye, Zheng-Liang; Ding, Li

    2017-10-01

    To establish an on-line monitoring method for the extraction process of Schisandrae Chinensis Fructus, a formula medicinal material of Yiqi Fumai lyophilized injection, near infrared spectroscopy was combined with multivariate data analysis technology. The multivariate statistical process control (MSPC) model was established based on 5 normal production batches, and 2 test batches were monitored by PC score, DModX and Hotelling T2 control charts. The results showed that the MSPC model had a good monitoring ability for the extraction process. Applying the MSPC model to the actual production process could effectively achieve on-line monitoring of the Schisandrae Chinensis Fructus extraction process and reflect changes in material properties in real time. The established process monitoring method could provide a reference for the application of process analytical technology in the process quality control of traditional Chinese medicine injections. Copyright© by the Chinese Pharmaceutical Association.

  11. Prognostic relevance of Centromere protein H expression in esophageal carcinoma.

    PubMed

    Guo, Xian-Zhi; Zhang, Ge; Wang, Jun-Ye; Liu, Wan-Li; Wang, Fang; Dong, Ju-Qin; Xu, Li-Hua; Cao, Jing-Yan; Song, Li-Bing; Zeng, Mu-Sheng

    2008-08-13

    Many kinetochore proteins have been shown to be associated with human cancers. The aim of the present study was to clarify the expression of Centromere protein H (CENP-H), one of the fundamental components of the human active kinetochore, in esophageal carcinoma and its correlation with clinicopathological features. We examined the expression of CENP-H in immortalized esophageal epithelial cells as well as in esophageal carcinoma cells, and in 12 cases of esophageal carcinoma tissues and the paired normal esophageal tissues by RT-PCR and Western blot analysis. In addition, we analyzed CENP-H protein expression in 177 clinicopathologically characterized esophageal carcinoma cases by immunohistochemistry. Statistical analyses were applied to test for prognostic and diagnostic associations. The levels of CENP-H mRNA and protein were higher in the immortalized cells, cancer cell lines and most cancer tissues than in normal control tissues. Immunohistochemistry showed that CENP-H was expressed in 127 of 171 ESCC cases (74.3%) and in 3 of 6 esophageal adenocarcinoma cases (50%). Statistical analysis of ESCC cases showed that there was a significant difference of CENP-H expression in patients categorized according to gender (P = 0.013), stage (P = 0.023) and T classification (P = 0.019). Patients with lower CENP-H expression had longer overall survival time than those with higher CENP-H expression. Multivariate analysis suggested that CENP-H expression was an independent prognostic marker for esophageal carcinoma patients. A prognostic value of CENP-H was also found in the subgroup of T3-T4 and N0 tumor classification. Our results suggest that CENP-H protein is a valuable marker of esophageal carcinoma progression. CENP-H might be used as a valuable prognostic marker for esophageal carcinoma patients.

  12. Prognostic relevance of aberrant DNA methylation in g1 and g2 pancreatic neuroendocrine tumors.

    PubMed

    Stefanoli, Michele; La Rosa, Stefano; Sahnane, Nora; Romualdi, Chiara; Pastorino, Roberta; Marando, Alessandro; Capella, Carlo; Sessa, Fausto; Furlan, Daniela

    2014-01-01

    The occurrence and clinical relevance of DNA hypermethylation and global hypomethylation in pancreatic neuroendocrine tumours (PanNETs) are still unknown. We evaluated the frequency of both epigenetic alterations in PanNETs to assess the relationship between methylation profiles and chromosomal instability, tumour phenotypes and prognosis. In a well-characterized series of 56 sporadic G1 and G2 PanNETs, methylation-sensitive multiple ligation-dependent probe amplification was performed to assess hypermethylation of 33 genes and copy number alterations (CNAs) of 53 chromosomal regions. Long interspersed nucleotide element-1 (LINE-1) hypomethylation was quantified by pyrosequencing. Unsupervised hierarchical clustering allowed us to identify a subset of 22 PanNETs (39%) exhibiting a high frequency of gene-specific methylation and low CNA percentages. This tumour cluster was significantly associated with stage IV (p = 0.04) and with poor prognosis in univariable analysis (p = 0.004). LINE-1 methylation levels in PanNETs were significantly lower than in normal samples (p < 0.01) and were approximately normally distributed. Twelve tumours (21%) were highly hypomethylated, showing variable levels of CNA. Interestingly, only 5 PanNETs (9%) were observed to show simultaneously LINE-1 hypomethylation and a high frequency of gene-specific methylation. LINE-1 hypomethylation was strongly correlated with advanced stage (p = 0.002) and with poor prognosis (p < 0.0001). In the multivariable analysis, low LINE-1 methylation status and methylation clusters were the only independent significant predictors of outcome (p = 0.034 and p = 0.029, respectively). The combination of global DNA hypomethylation and gene hypermethylation analyses may be useful to define distinct subsets of PanNETs. Both alterations are common in PanNETs and could be directly correlated with tumour progression. © 2014 S. Karger AG, Basel.

  13. Adventures in Uncertainty: An Empirical Investigation of the Use of a Taylor's Series Approximation for the Assessment of Sampling Errors in Educational Research.

    ERIC Educational Resources Information Center

    Wilson, Mark

    This study investigates the accuracy of the Woodruff-Causey technique for estimating sampling errors for complex statistics. The technique may be applied when data are collected by using multistage clustered samples. The technique was chosen for study because of its relevance to the correct use of multivariate analyses in educational survey…

  14. Combining markers with and without the limit of detection

    PubMed Central

    Dong, Ting; Liu, Catherine Chunling; Petricoin, Emanuel F.; Tang, Liansheng Larry

    2014-01-01

    In this paper, we consider the combination of markers with and without the limit of detection (LOD). LOD is often encountered when measuring proteomic markers. Because of the limited detecting ability of an equipment or instrument, it is difficult to measure markers at a relatively low level. Suppose that after some monotonic transformation, the marker values approximately follow multivariate normal distributions. We propose to estimate distribution parameters while taking the LOD into account, and then combine markers using the results from the linear discriminant analysis. Our simulation results show that the ROC curve parameter estimates generated from the proposed method are much closer to the truth than simply using the linear discriminant analysis to combine markers without considering the LOD. In addition, we propose a procedure to select and combine a subset of markers when many candidate markers are available. The procedure based on the correlation among markers is different from a common understanding that a subset of the most accurate markers should be selected for the combination. The simulation studies show that the accuracy of a combined marker can be largely impacted by the correlation of marker measurements. Our methods are applied to a protein pathway dataset to combine proteomic biomarkers to distinguish cancer patients from non-cancer patients. PMID:24132938
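    The combination step the abstract refers to, linear discriminant analysis on approximately multivariate normal markers, can be sketched as follows (without the LOD correction, which is the paper's contribution). The data here are synthetic; the means, covariance and sample sizes are illustrative assumptions, not estimates from any study.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic correlated markers: controls centred at the origin,
    # cases shifted (illustrative values only).
    cov = np.array([[1.0, 0.5],
                    [0.5, 1.0]])
    controls = rng.multivariate_normal([0.0, 0.0], cov, size=500)
    cases = rng.multivariate_normal([1.0, 0.8], cov, size=500)

    # Fisher's rule: combine with weights w = pooled_cov^{-1} (mu1 - mu0).
    pooled = 0.5 * (np.cov(controls.T) + np.cov(cases.T))
    w = np.linalg.solve(pooled, cases.mean(axis=0) - controls.mean(axis=0))

    score_controls = controls @ w
    score_cases = cases @ w

    # Empirical AUC of the combined score: P(random case > random control).
    auc = (score_cases[:, None] > score_controls[None, :]).mean()
    ```

    Because the optimal weights depend on the covariance between markers, a highly correlated pair of individually strong markers can combine worse than a less correlated pair, which is the point the simulation study above makes about marker selection.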

  15. Tobacco use in popular movies during the past decade.

    PubMed

    Mekemson, C; Glik, D; Titus, K; Myerson, A; Shaivitz, A; Ang, A; Mitchell, S

    2004-12-01

    The top 50 commercially successful films released per year from 1991 to 2000 were content coded to assess trends in tobacco use over time and attributes of films predictive of higher smoking rates. This observational study used media content analysis methods to generate data about tobacco use depictions in films studied (n = 497). Films are the basic unit of analysis. Once films were coded and preliminary analysis completed, outcome data were transformed to approximate multivariate normality before being analysed with general linear models and longitudinal mixed method regression methods. Tobacco use per minute of film was the main outcome measure used. Predictor variables include attributes of films and actors. Tobacco use was defined as any cigarette, cigar, and chewing tobacco use as well as the display of smoke and cigarette paraphernalia such as ashtrays, brand names, or logos within frames of films reviewed. Smoking rates in the top films fluctuated yearly over the decade with an overall modest downward trend (p < 0.005), with the exception of R rated films where rates went up. The decrease in smoking rates found in films in the past decade is modest given extensive efforts to educate the entertainment industry on this issue over the past decade. Monitoring, education, advocacy, and policy change to bring tobacco depiction rates down further should continue.

  16. Forecasts of non-Gaussian parameter spaces using Box-Cox transformations

    NASA Astrophysics Data System (ADS)

    Joachimi, B.; Taylor, A. N.

    2011-09-01

    Forecasts of statistical constraints on model parameters using the Fisher matrix abound in many fields of astrophysics. The Fisher matrix formalism involves the assumption of Gaussianity in parameter space and hence fails to predict complex features of posterior probability distributions. Combining the standard Fisher matrix with Box-Cox transformations, we propose a novel method that accurately predicts arbitrary posterior shapes. The Box-Cox transformations are applied to parameter space to render it approximately multivariate Gaussian, and the Fisher matrix calculation is performed on the transformed parameters. We demonstrate that, after the Box-Cox parameters have been determined from an initial likelihood evaluation, the method correctly predicts changes in the posterior when varying various parameters of the experimental setup and the data analysis, with marginally higher computational cost than a standard Fisher matrix calculation. We apply the Box-Cox-Fisher formalism to forecast cosmological parameter constraints by future weak gravitational lensing surveys. The characteristic non-linear degeneracy between the matter density parameter and the normalization of matter density fluctuations is reproduced for several cases, and the capability of breaking this degeneracy with weak-lensing three-point statistics is investigated. Possible applications of Box-Cox transformations of posterior distributions are discussed, including the prospects for performing statistical data analysis steps in the transformed Gaussianized parameter space.
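    The Gaussianization step described above, choosing a Box-Cox exponent so the transformed values look normal, can be sketched with a profile-likelihood grid search in one dimension. This is a generic illustration, not the authors' pipeline; the grid range and the log-normal test data are arbitrary choices.

    ```python
    import math
    import random

    def boxcox(x, lam):
        """Box-Cox transform of positive data: (x^lam - 1)/lam, or log x at lam = 0."""
        if abs(lam) < 1e-12:
            return [math.log(v) for v in x]
        return [(v ** lam - 1.0) / lam for v in x]

    def profile_loglik(x, lam):
        """Log-likelihood of lam when the transformed data are modelled as
        normal (up to an additive constant); the (lam - 1) * sum(log x)
        term is the Jacobian of the transformation."""
        n = len(x)
        y = boxcox(x, lam)
        mu = sum(y) / n
        var = sum((v - mu) ** 2 for v in y) / n
        return -0.5 * n * math.log(var) + (lam - 1.0) * sum(math.log(v) for v in x)

    def best_lambda(x):
        grid = [i / 20 for i in range(-40, 41)]  # lam in [-2, 2], step 0.05
        return max(grid, key=lambda lam: profile_loglik(x, lam))

    # Log-normal data should be Gaussianized by lam close to 0 (a log transform).
    random.seed(3)
    data = [math.exp(random.gauss(0.0, 1.0)) for _ in range(400)]
    lam_hat = best_lambda(data)
    ```

    In the multivariate setting of the paper, one such exponent is fitted per parameter before the Fisher matrix is evaluated in the transformed space.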

  17. Assessment of methane emissions from oil and gas production pads using mobile measurements.

    PubMed

    Brantley, Halley L; Thoma, Eben D; Squier, William C; Guven, Birnur B; Lyon, David

    2014-12-16

    A new mobile methane emissions inspection approach, Other Test Method (OTM) 33A, was used to quantify short-term emission rates from 210 oil and gas production pads during eight two-week field studies in Texas, Colorado, and Wyoming from 2010 to 2013. Emission rates were log-normally distributed with geometric means and 95% confidence intervals (CIs) of 0.33 (0.23, 0.48), 0.14 (0.11, 0.19), and 0.59 (0.47, 0.74) g/s in the Barnett, Denver-Julesburg, and Pinedale basins, respectively. This study focused on sites with emission rates above 0.01 g/s and included short-term (i.e., condensate tank flashing) and maintenance-related emissions. The results fell within the upper ranges of the distributions observed in recent onsite direct measurement studies. Considering data across all basins, a multivariate linear regression was used to assess the relationship of methane emissions to well age, gas production, and hydrocarbon liquids (oil or condensate) production. Methane emissions were positively correlated with gas production, but only approximately 10% of the variation in emission rates was explained by variation in production levels. The weak correlation between emission and production rates may indicate that maintenance-related stochastic variables and design of production and control equipment are factors determining emissions.
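    The geometric means and confidence intervals quoted above follow from normal theory on the log scale, since the emission rates are log-normally distributed. A minimal sketch (the sample below is simulated with an assumed geometric mean, not the study's measurements):

    ```python
    import math
    import random
    from statistics import mean, stdev

    def geometric_mean_ci(data, z=1.96):
        """Geometric mean with an approximate 95% CI for positive, roughly
        log-normal data: form the normal-theory CI of the log-values,
        then exponentiate back."""
        logs = [math.log(v) for v in data]
        m, s, n = mean(logs), stdev(logs), len(logs)
        half = z * s / math.sqrt(n)
        return math.exp(m), (math.exp(m - half), math.exp(m + half))

    # Simulated emission rates (g/s) with geometric mean 0.33.
    random.seed(7)
    rates = [math.exp(random.gauss(math.log(0.33), 1.0)) for _ in range(200)]
    gm, (lo, hi) = geometric_mean_ci(rates)
    ```

    Note that the interval is asymmetric around the geometric mean on the raw scale, matching the shape of the intervals reported in the abstract.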

  18. Weight Status and Behavioral Problems among Very Young Children in Chile.

    PubMed

    Kagawa, Rose M C; Fernald, Lia C H; Behrman, Jere R

    2016-01-01

    Our objective was to explore the association between weight status and behavioral problems in children before school age. We examined whether the association between weight status and behavioral problems varied by age and sex. This study used cross-sectional data from a nationally-representative sample of children and their families in Chile (N = 11,207). These children were selected using a cluster-stratified random sampling strategy. Data collection for this study took place in 2012 when the children were 1.5-6 years of age. We used multivariable analyses to examine the association between weight status and behavioral problems (assessed using the Child Behavior Checklist), while controlling for child's sex, indigenous status, birth weight, and months breastfed; primary caregiver's BMI and education level; and household wealth. Approximately 24% of our sample was overweight or obese. Overweight or obese girls showed more behavioral problems than normal weight girls at age 6 (β = 0.270 SD, 95% CI = 0.047, 0.493, P = 0.018). Among boys age 1 to 5 years, overweight/obesity was associated with a small reduction in internalizing behaviors (β = -0.09 SD, 95% CI = -0.163, -0.006, P = 0.034). Our data suggest that the associations between weight status and behavioral problems vary across age and sex.

  19. Feasibility Study on the Use of On-line Multivariate Statistical Process Control for Safeguards Applications in Natural Uranium Conversion Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ladd-Lively, Jennifer L

    2014-01-01

    The objective of this work was to determine the feasibility of using on-line multivariate statistical process control (MSPC) for safeguards applications in natural uranium conversion plants. Multivariate statistical process control is commonly used throughout industry for the detection of faults. For safeguards applications in uranium conversion plants, faults could include the diversion of intermediate products such as uranium dioxide, uranium tetrafluoride, and uranium hexafluoride. This study was limited to a 100 metric ton of uranium (MTU) per year natural uranium conversion plant (NUCP) using the wet solvent extraction method for the purification of uranium ore concentrate. A key component in the multivariate statistical methodology is the Principal Component Analysis (PCA) approach for the analysis of data, development of the base case model, and evaluation of future operations. The PCA approach was implemented through the use of singular value decomposition of the data matrix, where the data matrix represents normal operation of the plant. Component mole balances were used to model each of the process units in the NUCP; however, this approach could be applied to any data set. The monitoring framework developed in this research could be used to determine whether or not a diversion of material has occurred at an NUCP as part of an International Atomic Energy Agency (IAEA) safeguards system. This approach can be used to identify the key monitoring locations, as well as locations where monitoring is unimportant. Detection limits at the key monitoring locations can also be established using this technique. Several faulty scenarios were developed to test the monitoring framework after the base case or normal operating conditions of the PCA model were established. In all of the scenarios, the monitoring framework was able to detect the fault. Overall this study was successful at meeting the stated objective.
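    The PCA-via-SVD monitoring scheme described above can be sketched generically: fit loadings on "normal operation" data, then score new observations with a Hotelling T² statistic. Everything below is synthetic (random stand-in data and an arbitrary choice of three retained components), not the NUCP model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for normal-operation process data: 200 samples of 6
    # correlated variables (the real model used component mole balances).
    X = rng.normal(size=(200, 6)) @ rng.normal(size=(6, 6))

    # Centre/scale, then PCA via SVD of the data matrix.
    mu, sd = X.mean(axis=0), X.std(axis=0)
    Z = (X - mu) / sd
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    k = 3                                  # retained components (arbitrary here)
    P = Vt[:k].T                           # loadings
    comp_var = S[:k] ** 2 / (len(Z) - 1)   # variance of each score

    def hotelling_t2(x):
        """Hotelling T^2 of one observation in the reduced PCA model."""
        t = ((x - mu) / sd) @ P
        return float(np.sum(t ** 2 / comp_var))

    # A gross disturbance along the first loading direction scores far
    # outside the normal-operation range.
    t2_normal = hotelling_t2(X[0])
    t2_fault = hotelling_t2(X[0] + 50 * sd * P[:, 0])
    ```

    A control chart would flag observations whose T² exceeds a limit derived from an F distribution; a DModX-style residual chart additionally watches the variation that the k retained components do not capture.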

  20. MCMC Sampling for a Multilevel Model with Nonindependent Residuals within and between Cluster Units

    ERIC Educational Resources Information Center

    Browne, William; Goldstein, Harvey

    2010-01-01

    In this article, we discuss the effect of removing the independence assumptions between the residuals in two-level random effect models. We first consider removing the independence between the Level 2 residuals and instead assume that the vector of all residuals at the cluster level follows a general multivariate normal distribution. We…

  1. Feature combinations and the divergence criterion

    NASA Technical Reports Server (NTRS)

    Decell, H. P., Jr.; Mayekar, S. M.

    1976-01-01

    Classifying large quantities of multidimensional remotely sensed agricultural data requires efficient and effective classification techniques and the construction of certain transformations of a dimension reducing, information preserving nature. The construction of transformations that minimally degrade information (i.e., class separability) is described. Linear dimension reducing transformations for multivariate normal populations are presented. Information content is measured by divergence.
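    The divergence criterion mentioned above has a closed form for multivariate normal populations. Below is a generic sketch of the symmetric (Jeffreys) divergence used to score class separability; the example means and covariances are arbitrary.

    ```python
    import numpy as np

    def kl_mvn(mu0, cov0, mu1, cov1):
        """KL(N0 || N1) between multivariate normals, closed form:
        0.5 * (tr(S1^-1 S0) + d^T S1^-1 d - k + ln(det S1 / det S0)),
        with d = mu1 - mu0 and k the dimension."""
        mu0, mu1 = np.asarray(mu0, float), np.asarray(mu1, float)
        cov0, cov1 = np.asarray(cov0, float), np.asarray(cov1, float)
        k = len(mu0)
        inv1 = np.linalg.inv(cov1)
        d = mu1 - mu0
        return 0.5 * (np.trace(inv1 @ cov0) + d @ inv1 @ d - k
                      + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

    def divergence(mu0, cov0, mu1, cov1):
        """Symmetric divergence: KL taken in both directions."""
        return kl_mvn(mu0, cov0, mu1, cov1) + kl_mvn(mu1, cov1, mu0, cov0)

    eye = np.eye(2)
    # With equal covariances, the divergence reduces to the squared
    # Mahalanobis distance between the class means (here 1.0).
    j = divergence([0.0, 0.0], eye, [1.0, 0.0], eye)
    ```

    A dimension-reducing linear map B is then sought to keep the divergence computed from the transformed means and covariances (B mu, B Sigma B^T) as large as possible, which is the construction the abstract summarizes.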

  2. Relative Performance of Rescaling and Resampling Approaches to Model Chi Square and Parameter Standard Error Estimation in Structural Equation Modeling.

    ERIC Educational Resources Information Center

    Nevitt, Johnathan; Hancock, Gregory R.

    Though common structural equation modeling (SEM) methods are predicated upon the assumption of multivariate normality, applied researchers often find themselves with data clearly violating this assumption and without sufficient sample size to use distribution-free estimation methods. Fortunately, promising alternatives are being integrated into…

  3. Performance of Modified Test Statistics in Covariance and Correlation Structure Analysis under Conditions of Multivariate Nonnormality.

    ERIC Educational Resources Information Center

    Fouladi, Rachel T.

    2000-01-01

    Provides an overview of standard and modified normal theory and asymptotically distribution-free covariance and correlation structure analysis techniques and details Monte Carlo simulation results on Type I and Type II error control. Demonstrates through the simulation that robustness and nonrobustness of structure analysis techniques vary as a…

  4. Determining the Number of Component Clusters in the Standard Multivariate Normal Mixture Model Using Model-Selection Criteria.

    DTIC Science & Technology

    1983-06-16

    has been advocated by Gnanadesikan and Wilk (1969), and others in the literature. This suggests that, if we use the formal significance test type...American Statistical Asso., 62, 1159-1178. Gnanadesikan, R., and Wilk, M. B. (1969). Data Analytic Methods in Multivariate Statistical Analysis. In

  5. Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient

    ERIC Educational Resources Information Center

    Krishnamoorthy, K.; Xia, Yanping

    2008-01-01

    The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…
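The article's exact tests concern a nonzero null value of the squared multiple correlation; as background, a sketch of the familiar overall F statistic for the simpler null rho^2 = 0 (data, names, and coefficients below are illustrative, not from the article — testing a nonzero null instead requires the distribution of the sample R^2 under rho^2 > 0):

```python
import numpy as np

def r2_f_test(y, X):
    """Overall F statistic for H0: squared multiple correlation = 0,
    F = (R^2/k) / ((1 - R^2)/(n - k - 1)), with (k, n-k-1) df."""
    n, k = X.shape
    Xd = np.column_stack([np.ones(n), X])          # add intercept
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    r2 = 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)
    f = (r2 / k) / ((1.0 - r2) / (n - k - 1))
    return r2, f

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
y = X @ np.array([1.0, 0.5, 0.0]) + rng.standard_normal(200)
r2, f = r2_f_test(y, X)
print(round(r2, 2), round(f, 1))
```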

  6. Squeezing Interval Change From Ordinal Panel Data: Latent Growth Curves With Ordinal Outcomes

    ERIC Educational Resources Information Center

    Mehta, Paras D.; Neale, Michael C.; Flay, Brian R.

    2004-01-01

    A didactic on latent growth curve modeling for ordinal outcomes is presented. The conceptual aspects of modeling growth with ordinal variables and the notion of threshold invariance are illustrated graphically using a hypothetical example. The ordinal growth model is described in terms of 3 nested models: (a) multivariate normality of the…

  7. Undiagnosed Small Fiber Polyneuropathy: Is it a Component of Gulf War Illness?

    DTIC Science & Technology

    2012-07-01

    laboratory. After informed consent, a site (10 cm above the ankle) is anesthetized and one or two 2- or 3-mm diameter skin punches are removed using...of the scope of this study, the biopsy results of the youngsters anchor the lower end of the normal biopsy curve from which the multivariate

  8. Multiple Imputation of Item Scores in Test and Questionnaire Data, and Influence on Psychometric Results

    ERIC Educational Resources Information Center

    van Ginkel, Joost R.; van der Ark, L. Andries; Sijtsma, Klaas

    2007-01-01

    The performance of five simple multiple imputation methods for dealing with missing data was compared. In addition, random imputation and multivariate normal imputation were used as lower and upper benchmarks, respectively. Test data were simulated and item scores were deleted such that they were either missing completely at random, missing at…
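Multivariate normal imputation, used above as the upper benchmark, can be sketched in its simplest form: each missing block is replaced by its conditional expectation under an assumed mean and covariance. This is a deliberate simplification of the EM/data-augmentation machinery actually used for MVN imputation, with all values illustrative:

```python
import numpy as np

def mvn_impute(data, mean, cov):
    """Single-pass imputation under a multivariate normal model:
    each missing block is replaced by its conditional expectation
    given the observed entries of the same row."""
    out = data.copy()
    for row in out:
        m = np.isnan(row)
        if m.any() and not m.all():
            o = ~m
            # E[X_m | X_o] = mu_m + C_mo C_oo^{-1} (x_o - mu_o)
            C_oo = cov[np.ix_(o, o)]
            C_mo = cov[np.ix_(m, o)]
            row[m] = mean[m] + C_mo @ np.linalg.solve(C_oo, row[o] - mean[o])
    return out

mean = np.zeros(2)
cov = np.array([[1.0, 0.8],
                [0.8, 1.0]])
x = np.array([[2.0, np.nan]])
print(mvn_impute(x, mean, cov))  # conditional mean 0.8 * 2.0 = 1.6
```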

  9. SPSS Syntax for Missing Value Imputation in Test and Questionnaire Data

    ERIC Educational Resources Information Center

    van Ginkel, Joost R.; van der Ark, L. Andries

    2005-01-01

    A well-known problem in the analysis of test and questionnaire data is that some item scores may be missing. Advanced methods for the imputation of missing data are available, such as multiple imputation under the multivariate normal model and imputation under the saturated logistic model (Schafer, 1997). Accompanying software was made available…

  10. Trisomy 18 mosaicism in a 15-year-old boy with normal intelligence and short stature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    We report a 15-year-old boy with mosaicism for trisomy 18 and normal intelligence. Approximately 50% of his leukocytes are trisomic. This patient represents the sixth report of an individual with trisomy 18 mosaicism and normal intelligence. Those individuals with trisomy 18 mosaicism and normal intelligence need to be advised of increased risks for offspring with chromosome abnormalities and offered the option of prenatal diagnosis for cytogenetic anomalies. 6 refs.

  11. Wing motion measurement and aerodynamics of hovering true hoverflies.

    PubMed

    Mou, Xiao Lei; Liu, Yan Peng; Sun, Mao

    2011-09-01

    Most hovering insects flap their wings in a horizontal plane (body having a large angle from the horizontal), called `normal hovering'. But some of the best hoverers, e.g. true hoverflies, hover with an inclined stroke plane (body being approximately horizontal). In the present paper, wing and body kinematics of four freely hovering true hoverflies were measured using three-dimensional high-speed video. The measured wing kinematics was used in a Navier-Stokes solver to compute the aerodynamic forces of the insects. The stroke amplitude of the hoverflies was relatively small, ranging from 65 to 85 deg, compared with that of normal hovering. The angle of attack in the downstroke (∼50 deg) was much larger than that in the upstroke (∼20 deg), unlike normal-hovering insects, whose downstroke and upstroke angles of attack are not very different. The major part of the weight-supporting force (approximately 86%) was produced in the downstroke and it was contributed by both the lift and the drag of the wing, unlike the normal-hovering case in which the weight-supporting force is approximately equally contributed by the two half-strokes and the lift principle is mainly used to produce the force. The mass-specific power was 38.59-46.3 and 27.5-35.4 W kg(-1) in the cases of 0 and 100% elastic energy storage, respectively. Comparisons with previously published results of a normal-hovering true hoverfly and with results obtained by artificially making the insects' stroke planes horizontal show that for the true hoverflies, the power requirement for inclined stroke-plane hover is only a little (<10%) larger than that of normal hovering.

  12. Smoothing of the bivariate LOD score for non-normal quantitative traits.

    PubMed

    Buil, Alfonso; Dyer, Thomas D; Almasy, Laura; Blangero, John

    2005-12-30

    Variance component analysis provides an efficient method for performing linkage analysis for quantitative traits. However, type I error of variance components-based likelihood ratio testing may be affected when phenotypic data are non-normally distributed (especially with high values of kurtosis). This results in inflated LOD scores when the normality assumption does not hold. Even though different solutions have been proposed to deal with this problem with univariate phenotypes, little work has been done in the multivariate case. We present an empirical approach to adjust the inflated LOD scores obtained from a bivariate phenotype that violates the assumption of normality. Using the Collaborative Study on the Genetics of Alcoholism data available for the Genetic Analysis Workshop 14, we show how bivariate linkage analysis with leptokurtotic traits gives an inflated type I error. We perform a novel correction that achieves acceptable levels of type I error.

  13. Predictors of obesity in Michigan Operating Engineers.

    PubMed

    Duffy, Sonia A; Cohen, Kathleen A; Choi, Seung Hee; McCullagh, Marjorie C; Noonan, Devon

    2012-06-01

    Blue collar workers are at risk for obesity. Little is known about obesity in Operating Engineers, a group of blue collar workers, who operate heavy earth-moving equipment in road building and construction. Therefore, 498 Operating Engineers in Michigan were recruited to participate in a cross-sectional survey to determine variables related to obesity in this group. Bivariate and multivariate analyses were conducted to determine personal, psychological, and behavioral factors predicting obesity. Approximately 45% of the Operating Engineers screened positive for obesity, and another 40% were overweight. Multivariate analysis revealed that younger age, male sex, higher numbers of self-reported co-morbidities, not smoking, and low physical activity levels were significantly associated with obesity among Operating Engineers. Operating Engineers are significantly at risk for obesity, and workplace interventions are needed to address this problem.

  14. Fully probabilistic control design in an adaptive critic framework.

    PubMed

    Herzallah, Randa; Kárný, Miroslav

    2011-12-01

    An optimal stochastic controller pushes the closed-loop behavior as close as possible to the desired one. The fully probabilistic design (FPD) uses a probabilistic description of the desired closed loop and minimizes the Kullback-Leibler divergence of the closed-loop description to the desired one. Practical exploitation of the fully probabilistic design control theory continues to be hindered by the computational complexities involved in numerically solving the associated stochastic dynamic programming problem; in particular, very hard multivariate integration and an approximate interpolation of the involved multivariate functions. This paper proposes a new fully probabilistic control algorithm that uses the adaptive critic methods to circumvent the need for explicitly evaluating the optimal value function, thereby dramatically reducing computational requirements. This is a main contribution of this paper. Copyright © 2011 Elsevier Ltd. All rights reserved.
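The FPD criterion, the Kullback-Leibler divergence between the closed-loop and the desired distributions, has a closed form when both are Gaussian. A small sketch of that formula (not taken from the paper; values illustrative):

```python
import numpy as np

def kl_gauss(mu0, S0, mu1, S1):
    """KL divergence KL(N0 || N1) between multivariate normals: the
    discrepancy that fully probabilistic design minimizes between the
    closed-loop distribution (N0) and the desired one (N1)."""
    k = len(mu0)
    S1i = np.linalg.inv(S1)
    d = mu1 - mu0
    return 0.5 * (np.trace(S1i @ S0) + d @ S1i @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

# Identical distributions: the divergence is zero.
mu, S = np.zeros(2), np.eye(2)
print(kl_gauss(mu, S, mu, S))  # 0.0
```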

  15. Normal Gravity Fields and Equipotential Ellipsoids of Small Objects in the Solar System: A Closed-form Solution in Ellipsoidal Harmonics up to the Second Degree

    NASA Astrophysics Data System (ADS)

    Hu, Xuanyu

    2017-11-01

    We propose a definition for the normal gravity fields and normal figures of small objects in the solar system, such as asteroids, cometary nuclei, and planetary moons. Their gravity fields are represented as series of ellipsoidal harmonics, ensuring more robust field evaluation in the proximity of an arbitrary, convex shape than using spherical harmonics. The normal gravity field, approximate to the actual field, can be described by a finite series of three terms, that is, degree zero, and the zonal and sectoral harmonics of degree two. The normal gravity is that of an equipotential ellipsoid, defined as the normal ellipsoid of the body. The normal ellipsoid may be distinct from the actual figure. We present a rationale for specifying and a numerical method for determining the parameters of the normal ellipsoid. The definition presented here generalizes the convention of the normal spheroid of a large, hydrostatically equilibrated planet, such as Earth. Modeling the normal gravity and the normal ellipsoid is relevant to studying the formation of the “rubble pile” objects, which may have been accreted, or reorganized after disruption, under self-gravitation. While the proposed methodology applies to convex, approximately ellipsoidal objects, those bi-lobed objects can be treated as contact binaries comprising individual convex subunits. We study an exemplary case of the nearly ellipsoidal Martian moon, Phobos, subject to strong tidal influence in its present orbit around Mars. The results allude to the formation of Phobos via gravitational accretion at some further distance from Mars.

  16. Solving the infeasible trust-region problem using approximations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Renaud, John E.; Perez, Victor M.; Eldred, Michael Scott

    2004-07-01

    The use of optimization in engineering design has fueled the development of algorithms for specific engineering needs. When the simulations are expensive to evaluate or the outputs present some noise, the direct use of nonlinear optimizers is not advisable, since the optimization process will be expensive and may result in premature convergence. The use of approximations for both cases is an alternative investigated by many researchers, including the authors. When approximations are present, model management is required for proper convergence of the algorithm. In nonlinear programming, the use of trust regions for globalization of a local algorithm has been proven effective. The same approach has been used to manage the local move limits in sequential approximate optimization frameworks as in Alexandrov et al., Giunta and Eldred, Perez et al., Rodriguez et al., etc. The experience in the mathematical community has shown that more effective algorithms can be obtained by the specific inclusion of the constraints (SQP-type algorithms) rather than by using a penalty function as in the augmented Lagrangian formulation. When explicit constraints are present, however, the local problem bounded by the trust region may have no feasible solution. In order to remedy this problem the mathematical community has developed different versions of a composite-steps approach, consisting of a normal step to reduce the amount of constraint violation and a tangential step to minimize the objective function while maintaining the level of constraint violation attained at the normal step. Two of the authors have developed a different approach for a sequential approximate optimization framework using homotopy ideas to relax the constraints. This algorithm, called interior-point trust-region sequential approximate optimization (IPTRSAO), presents some similarities to the two normal-tangential-step algorithms. In this paper, a description of the similarities is presented and an expansion of the two-step algorithm is presented for the case of approximations.

  17. Analysis of vector wind change with respect to time for Cape Kennedy, Florida: Wind aloft profile change vs. time, phase 1

    NASA Technical Reports Server (NTRS)

    Adelfang, S. I.

    1977-01-01

    Wind vector change with respect to time at Cape Kennedy, Florida, is examined according to the theory of multivariate normality. The joint distribution of the four variables represented by the components of the wind vector at an initial time and after a specified elapsed time is hypothesized to be quadravariate normal; the fourteen statistics of this distribution, calculated from fifteen years of twice-daily Rawinsonde data, are presented by monthly reference periods from 0 to 27 km. The hypotheses that the wind component change with respect to time is univariate normal, that the joint distribution of wind component changes is bivariate normal, and that the modulus of vector wind change is Rayleigh have been tested by comparison with observed distributions. Statistics of the conditional bivariate normal distributions of vector wind at a future time given the vector wind at an initial time are derived. Wind changes over time periods from one to five hours, calculated from Jimsphere data, are presented.
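The conditional bivariate normal statistics derived in this report follow from the standard conditioning formula for a partitioned multivariate normal. A sketch with an illustrative quadravariate covariance (values not taken from the report):

```python
import numpy as np

def conditional_mvn(mu, cov, idx_given, x_given):
    """Conditional mean and covariance of the remaining components of a
    multivariate normal given observed components: here, the bivariate
    normal of the future wind vector given the initial wind vector."""
    n = len(mu)
    g = np.zeros(n, dtype=bool)
    g[idx_given] = True
    r = ~g
    C_rg = cov[np.ix_(r, g)]
    C_gg_inv = np.linalg.inv(cov[np.ix_(g, g)])
    mu_c = mu[r] + C_rg @ C_gg_inv @ (x_given - mu[g])
    cov_c = cov[np.ix_(r, r)] - C_rg @ C_gg_inv @ cov[np.ix_(g, r)]
    return mu_c, cov_c

# Illustrative 4x4 covariance for (u0, v0, u1, v1): wind components at
# the initial time and after the elapsed time.
mu = np.zeros(4)
cov = np.array([[4.0, 0.0, 2.0, 0.0],
                [0.0, 4.0, 0.0, 2.0],
                [2.0, 0.0, 4.0, 0.0],
                [0.0, 2.0, 0.0, 4.0]])
mu_c, cov_c = conditional_mvn(mu, cov, [0, 1], np.array([10.0, -5.0]))
print(mu_c)   # regression of the future wind on the initial wind
print(cov_c)  # reduced (conditional) covariance
```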

  18. Square Root Graphical Models: Multivariate Generalizations of Univariate Exponential Families that Permit Positive Dependencies

    PubMed Central

    Inouye, David I.; Ravikumar, Pradeep; Dhillon, Inderjit S.

    2016-01-01

    We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York—modeled as an exponential distribution—is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix—a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times. PMID:27563373

  19. SPReM: Sparse Projection Regression Model For High-dimensional Linear Regression *

    PubMed Central

    Sun, Qiang; Zhu, Hongtu; Liu, Yufeng; Ibrahim, Joseph G.

    2014-01-01

    The aim of this paper is to develop a sparse projection regression modeling (SPReM) framework to perform multivariate regression modeling with a large number of responses and a multivariate covariate of interest. We propose two novel heritability ratios to simultaneously perform dimension reduction, response selection, estimation, and testing, while explicitly accounting for correlations among multivariate responses. Our SPReM is devised to specifically address the low statistical power issue of many standard statistical approaches, such as the Hotelling’s T2 test statistic or a mass univariate analysis, for high-dimensional data. We formulate the estimation problem of SPREM as a novel sparse unit rank projection (SURP) problem and propose a fast optimization algorithm for SURP. Furthermore, we extend SURP to the sparse multi-rank projection (SMURP) by adopting a sequential SURP approximation. Theoretically, we have systematically investigated the convergence properties of SURP and the convergence rate of SURP estimates. Our simulation results and real data analysis have shown that SPReM out-performs other state-of-the-art methods. PMID:26527844

  20. Identification of multivariable nonlinear systems in the presence of colored noises using iterative hierarchical least squares algorithm.

    PubMed

    Jafari, Masoumeh; Salimifard, Maryam; Dehghani, Maryam

    2014-07-01

    This paper presents an efficient method for identification of nonlinear Multi-Input Multi-Output (MIMO) systems in the presence of colored noises. The method studies the multivariable nonlinear Hammerstein and Wiener models, in which, the nonlinear memory-less block is approximated based on arbitrary vector-based basis functions. The linear time-invariant (LTI) block is modeled by an autoregressive moving average with exogenous (ARMAX) model which can effectively describe the moving average noises as well as the autoregressive and the exogenous dynamics. According to the multivariable nature of the system, a pseudo-linear-in-the-parameter model is obtained which includes two different kinds of unknown parameters, a vector and a matrix. Therefore, the standard least squares algorithm cannot be applied directly. To overcome this problem, a Hierarchical Least Squares Iterative (HLSI) algorithm is used to simultaneously estimate the vector and the matrix of unknown parameters as well as the noises. The efficiency of the proposed identification approaches is investigated through three nonlinear MIMO case studies. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  1. Identifying Otosclerosis with Aural Acoustical Tests of Absorbance, Group Delay, Acoustic Reflex Threshold, and Otoacoustic Emissions.

    PubMed

    Keefe, Douglas H; Archer, Kelly L; Schmid, Kendra K; Fitzpatrick, Denis F; Feeney, M Patrick; Hunter, Lisa L

    2017-10-01

    Otosclerosis is a progressive middle-ear disease that affects conductive transmission through the middle ear. Ear-canal acoustic tests may be useful in the diagnosis of conductive disorders. This study addressed the degree to which results from a battery of ear-canal tests, which include wideband reflectance, acoustic stapedius muscle reflex threshold (ASRT), and transient evoked otoacoustic emissions (TEOAEs), were effective in quantifying a risk of otosclerosis and in evaluating middle-ear function in ears after surgical intervention for otosclerosis. To evaluate the ability of the test battery to classify ears as normal or otosclerotic, measure the accuracy of reflectance in classifying ears as normal or otosclerotic, and evaluate the similarity of responses in normal ears compared with ears after surgical intervention for otosclerosis. A quasi-experimental cross-sectional study incorporating case control was used. Three groups were studied: one diagnosed with otosclerosis before corrective surgery, a group that received corrective surgery for otosclerosis, and a control group. The test groups included 23 ears (13 right and 10 left) with normal hearing from 16 participants (4 male and 12 female), 12 ears (7 right and 5 left) diagnosed with otosclerosis from 9 participants (3 male and 6 female), and 13 ears (4 right and 9 left) after surgical intervention from 10 participants (2 male and 8 female). Participants received audiometric evaluations and clinical immittance testing. Experimental tests performed included ASRT tests with wideband reference signal (0.25-8 kHz), reflectance tests (0.25-8 kHz), which were parameterized by absorbance and group delay at ambient pressure and at swept tympanometric pressures, and TEOAE tests using chirp stimuli (1-8 kHz). ASRTs were measured in ipsilateral and contralateral conditions using tonal and broadband noise activators. 
    Experimental ASRT tests were based on the difference in wideband-absorbed sound power before and after presenting the activator. Diagnostic accuracy to classify ears as otosclerotic or normal was quantified by the area under the receiver operating characteristic curve (AUC) for univariate and multivariate reflectance tests. The multivariate predictor used a small number of input reflectance variables, each having a large AUC, in a principal components analysis to create independent variables, followed by a logistic regression procedure to classify the test ears. Relative to the results in normal ears, diagnosed otosclerosis ears more frequently showed absent TEOAEs and ASRTs, reduced ambient absorbance at 4 kHz, and a different pattern of tympanometric absorbance and group delay (absorbance increased at 2.8 kHz at the positive-pressure tail and decreased at 0.7-1 kHz at the peak pressure, whereas group delay decreased at the positive- and negative-pressure tails from 0.35-0.7 kHz, and at 2.8-4 kHz at the positive-pressure tail). Using a multivariate predictor with three reflectance variables, tympanometric reflectance (AUC = 0.95) was more accurate than ambient reflectance (AUC = 0.88) in classifying ears as normal or otosclerotic. Reflectance provides a middle-ear test that is sensitive in classifying ears as otosclerotic or normal, which may be useful in clinical applications. American Academy of Audiology

  2. Multivariate approximation methods and applications to geophysics and geodesy

    NASA Technical Reports Server (NTRS)

    Munteanu, M. J.

    1979-01-01

    This is the first report in a planned series treating a class of approximation methods for functions of one and several variables and ways of applying them to geophysics and geodesy. The report is divided into three parts and is devoted to the presentation of the mathematical theory and formulas. Various optimal ways of representing functions of one and several variables, and the associated error when information about the function (such as satellite data of different kinds) is available, are discussed. The framework chosen is Hilbert spaces. Experiments were performed on satellite altimeter data and on satellite-to-satellite tracking data.

  3. Computational Modeling of Proteins based on Cellular Automata: A Method of HP Folding Approximation.

    PubMed

    Madain, Alia; Abu Dalhoum, Abdel Latif; Sleit, Azzam

    2018-06-01

    The design of a protein folding approximation algorithm is not straightforward even when a simplified model is used. The folding problem is a combinatorial problem, where approximation and heuristic algorithms are usually used to find near optimal folds of proteins primary structures. Approximation algorithms provide guarantees on the distance to the optimal solution. The folding approximation approach proposed here depends on two-dimensional cellular automata to fold proteins presented in a well-studied simplified model called the hydrophobic-hydrophilic model. Cellular automata are discrete computational models that rely on local rules to produce some overall global behavior. One-third and one-fourth approximation algorithms choose a subset of the hydrophobic amino acids to form H-H contacts. Those algorithms start with finding a point to fold the protein sequence into two sides, where one side ignores H's at even positions and the other side ignores H's at odd positions. In addition, blocks or groups of amino acids fold the same way according to a predefined normal form. We intend to improve approximation algorithms by considering all hydrophobic amino acids and folding based on the local neighborhood instead of using normal forms. The CA does not assume a fixed folding point. The proposed approach guarantees a one-half approximation minus the H-H endpoints. This guaranteed lower bound applies to short sequences only. This is proved, as the core and the folds of the protein will have two identical sides for all short sequences.

  4. Evaluation of biomolecular distributions in rat brain tissues by means of ToF-SIMS using a continuous beam of Ar clusters.

    PubMed

    Nakano, Shusuke; Yokoyama, Yuta; Aoyagi, Satoka; Himi, Naoyuki; Fletcher, John S; Lockyer, Nicholas P; Henderson, Alex; Vickerman, John C

    2016-06-08

    Time-of-flight secondary ion mass spectrometry (ToF-SIMS) provides detailed chemical structure information and high spatial resolution images. Therefore, ToF-SIMS is useful for studying biological phenomena such as ischemia. In this study, in order to evaluate cerebral microinfarction, the distribution of biomolecules generated by ischemia was measured with ToF-SIMS. ToF-SIMS data sets were analyzed by means of multivariate analysis for interpreting complex samples containing unknown information and to obtain biomolecular mapping indicated by fragment ions from the target biomolecules. Using conventional ToF-SIMS (primary ion source: Bi cluster ion), it is difficult to detect secondary ions beyond approximately 1000 u. Moreover, the intensity of secondary ions related to biomolecules is not always high enough for imaging because of low concentration even if the masses are lower than 1000 u. However, for the observation of biomolecular distributions in tissues, it is important to detect low amounts of biological molecules from a particular area of tissue. Rat brain tissue samples were measured with ToF-SIMS (J105, Ionoptika, Ltd., Chandlers Ford, UK), using a continuous beam of Ar clusters as a primary ion source. ToF-SIMS with Ar clusters efficiently detects secondary ions related to biomolecules and larger molecules. Molecules detected by ToF-SIMS were examined by analyzing ToF-SIMS data using multivariate analysis. Microspheres (45 μm diameter) were injected into the rat unilateral internal carotid artery (MS rat) to cause cerebral microinfarction. The rat brain was sliced and then measured with ToF-SIMS. The brain samples of a normal rat and the MS rat were examined to find specific secondary ions related to important biomolecules, and then the difference between them was investigated. Finally, specific secondary ions were found around vessels incorporating microspheres in the MS rat. 
The results suggest that important biomolecules related to cerebral microinfarction can be detected by ToF-SIMS.

  5. Prevalence of kidney stones in the United States.

    PubMed

    Scales, Charles D; Smith, Alexandria C; Hanley, Janet M; Saigal, Christopher S

    2012-07-01

    The last nationally representative assessment of kidney stone prevalence in the United States occurred in 1994. After a 13-yr hiatus, the National Health and Nutrition Examination Survey (NHANES) reinitiated data collection regarding kidney stone history. Describe the current prevalence of stone disease in the United States, and identify factors associated with a history of kidney stones. A cross-sectional analysis of responses to the 2007-2010 NHANES (n=12 110). Self-reported history of kidney stones. Percent prevalence was calculated and multivariable models were used to identify factors associated with a history of kidney stones. The prevalence of kidney stones was 8.8% (95% confidence interval [CI], 8.1-9.5). Among men, the prevalence of stones was 10.6% (95% CI, 9.4-11.9), compared with 7.1% (95% CI, 6.4-7.8) among women. Kidney stones were more common among obese than normal-weight individuals (11.2% [95% CI, 10.0-12.3] compared with 6.1% [95% CI, 4.8-7.4], respectively; p<0.001). Black, non-Hispanic and Hispanic individuals were less likely to report a history of stone disease than were white, non-Hispanic individuals (black, non-Hispanic: odds ratio [OR]: 0.37 [95% CI, 0.28-0.49], p<0.001; Hispanic: OR: 0.60 [95% CI, 0.49-0.73], p<0.001). Obesity and diabetes were strongly associated with a history of kidney stones in multivariable models. The cross-sectional survey design limits causal inference regarding potential risk factors for kidney stones. Kidney stones affect approximately 1 in 11 people in the United States. These data represent a marked increase in stone disease compared with the NHANES III cohort, particularly in black, non-Hispanic and Hispanic individuals. Diet and lifestyle factors likely play an important role in the changing epidemiology of kidney stones. Published by Elsevier B.V.

  6. Do Intermediate Radiation Doses Contribute to Late Rectal Toxicity? An Analysis of Data From Radiation Therapy Oncology Group Protocol 94-06

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tucker, Susan L., E-mail: sltucker@mdanderson.org; Dong, Lei; Michalski, Jeff M.

    2012-10-01

    Purpose: To investigate whether the volumes of rectum exposed to intermediate doses, from 30 to 50 Gy, contribute to the risk of Grade ≥2 late rectal toxicity among patients with prostate cancer receiving radiotherapy. Methods and Materials: Data from 1009 patients treated on Radiation Therapy Oncology Group protocol 94-06 were analyzed using three approaches. First, the contribution of intermediate doses to a previously published fit of the Lyman-Kutcher-Burman (LKB) normal tissue complication probability (NTCP) model was determined. Next, the extent to which intermediate doses provide additional risk information, after taking the LKB model into account, was investigated. Third, the proportion of rectum receiving doses higher than a threshold, VDose, was computed for doses ranging from 5 to 85 Gy, and a multivariate Cox proportional hazards model was used to determine which of these parameters were significantly associated with time to Grade ≥2 late rectal toxicity. Results: Doses <60 Gy had no detectable impact on the fit of the LKB model, as expected on the basis of the small estimate of the volume parameter (n = 0.077). Furthermore, there was no detectable difference in late rectal toxicity among cohorts with similar risk estimates from the LKB model but with different volumes of rectum exposed to intermediate doses. The multivariate Cox proportional hazards model selected V75 as the only value of VDose significantly associated with late rectal toxicity. Conclusions: There is no evidence from these data that intermediate doses influence the risk of Grade ≥2 late rectal toxicity. Instead, the critical doses for this endpoint seem to be ≥75 Gy. It is hypothesized that cases of Grade ≥2 late rectal toxicity occurring among patients with V75 less than approximately 12% may be due to a 'background' level of risk, likely due mainly to biological factors.

  7. Combined effect of alcohol consumption and lifestyle behaviors on risk of type 2 diabetes.

    PubMed

    Joosten, Michel M; Grobbee, Diederick E; van der A, Daphne L; Verschuren, W M Monique; Hendriks, Henk F J; Beulens, Joline W J

    2010-06-01

    It has been suggested that the inverse association between alcohol and type 2 diabetes could be explained by moderate drinkers' healthier lifestyles. We studied whether moderate alcohol consumption is associated with a lower risk of type 2 diabetes in adults with combined low-risk lifestyle behaviors. We prospectively examined 35,625 adults of the Dutch European Prospective Investigation into Cancer and Nutrition (EPIC-NL) cohort aged 20-70 y, who were free of diabetes, cardiovascular disease, and cancer at baseline (1993-1997). In addition to moderate alcohol consumption (women: 5.0-14.9 g/d; men: 5.0-29.9 g/d), we defined low-risk categories of 4 lifestyle behaviors: optimal weight [body mass index (in kg/m(2)) <25], physically active (> or =30 min of physical activity/d), current nonsmoker, and a healthy diet [upper 2 quintiles of the Dietary Approaches to Stop Hypertension (DASH) diet]. During a median of 10.3 y, we identified 796 incident cases of type 2 diabetes. Compared with teetotalers, hazard ratios of moderate alcohol consumers for risk of type 2 diabetes in low-risk lifestyle strata after multivariable adjustments were 0.35 (95% CI: 0.17, 0.72) when of a normal weight, 0.65 (95% CI: 0.46, 0.91) when physically active, 0.54 (95% CI: 0.41, 0.71) when nonsmoking, and 0.57 (95% CI: 0.39, 0.84) when consuming a healthy diet. When > or =3 low-risk lifestyle behaviors were combined, the hazard ratio for incidence of type 2 diabetes in moderate alcohol consumers after multivariable adjustments was 0.56 (95% CI: 0.32, 1.00). In subjects already at lower risk of type 2 diabetes on the basis of multiple low-risk lifestyle behaviors, moderate alcohol consumption was associated with an approximately 40% lower risk compared with abstention.

  8. Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances

    NASA Astrophysics Data System (ADS)

    Stähler, Simon C.; Sigloch, Karin

    2016-11-01

    Seismic source inversion, a central task in seismology, is concerned with the estimation of earthquake source parameters and their uncertainties. Estimating uncertainties is particularly challenging because source inversion is a non-linear problem. In a companion paper, Stähler and Sigloch (2014) developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements, a problem we address here. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, the more common choice, which measure misfit sample by sample via distances between individual time samples. From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on signal decorrelation D = 1 - CC of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. By identifying and quantifying this likelihood function, we make D and thus waveform cross-correlation measurements usable for fully probabilistic sampling strategies, in source inversion and related applications such as seismic tomography.

  9. The risk of passive regurgitation during general anaesthesia in a population of referred dogs in the UK.

    PubMed

    Lamata, Cecilia; Loughton, Verity; Jones, Monie; Alibhai, Hatim; Armitage-Chan, Elizabeth; Walsh, Karen; Brodbelt, David

    2012-05-01

    To evaluate the risk of passive regurgitation during anaesthesia, and to identify major factors associated with this in dogs attending the Queen Mother Hospital for Animals (QMHA), the Royal Veterinary College. A case-control study nested within the cohort of dogs undergoing anaesthesia with inhalation agents. All dogs undergoing general anaesthesia at the referral hospital between October 2006 and September 2008 (4271 cases). All dogs anaesthetized at the QMHA during the study period were included. Regurgitating cases were defined as dogs for which reflux material was observed at the external nares or in the mouth, either during anaesthesia or before return to normal consciousness immediately after general anaesthesia. The risk of regurgitation was estimated and risk factors for regurgitation were evaluated with multivariable logistic regression (p < 0.05). The overall risk of regurgitation was 0.96% (41 cases out of 4271 anaesthetics, 95% confidence interval [95% CI] 0.67-1.25%). Exclusion of animals where pre-existing disease was considered a contributing factor to regurgitation (n = 14) resulted in a risk of passive regurgitation of 0.63% (27 cases of 4257 anaesthetics, 95% CI 0.40-0.87%). In the multivariable logistic regression model, procedure and patient weight were significantly associated with regurgitation. Dogs undergoing orthopaedic surgery were 26.7 times more likely to regurgitate compared to dogs undergoing only diagnostic procedures. Dogs weighing more than 40 kg were approximately five times more likely to regurgitate than those weighing <20 kg. This study highlights the rare but important occurrence of perioperative regurgitation and identifies that dogs undergoing orthopaedic procedures, and those weighing more than 40 kg, are particularly at risk. Further work is required to evaluate the reasons for these observations. © 2012 The Authors. Veterinary Anaesthesia and Analgesia. © 2012 Association of Veterinary Anaesthetists and the American College of Veterinary Anesthesiologists.

  10. Laparoscopic liver surgery: towards a day-case management.

    PubMed

    Tranchart, Hadrien; Fuks, David; Lainas, Panagiotis; Gaillard, Martin; Dagher, Ibrahim; Gayet, Brice

    2017-12-01

    Ambulatory surgery (AS) is a contemporary subject of interest. The feasibility and safety of AS for solid abdominal organs are still dubious. In the present study, we aimed at defining potential surgical criteria for AS by analyzing a large database of patients who underwent laparoscopic liver surgery (LLS) in two French expert centers. This study was performed using prospectively filled databases including patients that underwent pure LLS between 1998 and 2015. Patients whose perioperative medical characteristics (ASA score <3, no associated extra-hepatic procedure, surgical duration ≤180 min, blood loss ≤300 mL, no intraoperative anesthesiological or surgical complication, no postoperative drainage) were potentially adapted for ambulatory LLS were included in the analysis. In order to determine the risk factors for postoperative complications, multivariate analysis was carried out. During the study period, pure LLS was performed in 994 patients. After preoperative and intraoperative characteristics screening, 174 (17.5%) patients were considered for the final analysis. Lesions (benign (46%) and liver metastases (43%)) were predominantly single with a mean size of 37 ± 32 mm in an underlying normal or steatotic liver parenchyma (94.8%). The vast majority of LLS performed were single procedures including wedge resections and liver cyst unroofing or left lateral sectionectomies (74%). The global morbidity rate was 14% and six patients presented a major complication (Dindo-Clavien ≥III). The mean length of stay was 5 ± 4 days. Multivariate analysis showed that major hepatectomy [OR 29.04 (2.26-37.19); P = 0.01] and resection of tumors localized in central segments [OR 41.24 (1.08-156.47); P = 0.04] were independent predictors of postoperative morbidity. In experienced teams, approximately 7% of highly selected patients requiring laparoscopic hepatic surgery (wedge resection, liver cyst unroofing, or left lateral sectionectomy) could benefit from ambulatory surgery management.

  11. Nonspecific ST-T changes associated with unsatisfactory blood pressure control among adults with hypertension in China: Evidence from the CSPTT study.

    PubMed

    Bao, Huihui; Cai, Huaxiu; Zhao, Yan; Huang, Xiao; Fan, Fangfang; Zhang, Chunyan; Li, Juxiang; Chen, Jing; Hong, Kui; Li, Ping; Wu, Yanqing; Wu, Qinhua; Wang, Binyan; Xu, Xiping; Li, Yigang; Huo, Yong; Cheng, Xiaoshu

    2017-03-01

    Nonspecific ST-segment and T-wave (ST-T) changes represent one of the most prevalent electrocardiographic abnormalities in hypertensive patients. However, a limited number of studies have investigated the association between nonspecific ST-T changes and unsatisfactory blood pressure (BP) control in adults with hypertension. The study population comprised 15,038 hypertensive patients, who were selected from 20,702 participants in the China Stroke Primary Prevention Trial. The subjects were examined with an electrocardiogram test at the initial visit in order to monitor baseline heart activity. According to the results of the electrocardiogram (defined by Minnesota coding), the subjects were divided into 2 groups: ST-T abnormal and ST-T normal. Unsatisfactory BP control was defined as systolic BP ≥140 mm Hg or diastolic BP ≥90 mm Hg following antihypertensive treatment during the 4.5-year follow-up period. Multivariate analysis was used to analyze the association between nonspecific ST-T abnormalities and unsatisfactory BP control. Nonspecific ST-T changes were common in hypertensive adults (approximately 8.5% in the study), and more prevalent in women (10.3%) and diabetic patients (13.9%). The unsatisfactory BP control rate was high in the total population (47.0%), notably in the ST-T abnormal group (55.5%). The nonspecific ST-T abnormal group exhibited a significantly greater rate of unsatisfactory BP control (odds ratio [OR] 1.20, 95% confidence interval [CI] [1.06, 1.36], P = 0.005), independent of traditional risk factors, as demonstrated by multivariate regression analysis. Notable differences were further observed in male subjects (OR 1.51, 95% CI [1.17, 1.94], P = 0.002) and in patients with comorbid diabetes (OR 1.47, 95% CI [1.04, 2.07], P = 0.029). Greater rates of unsatisfactory BP control in hypertensive patients with electrocardiographic nonspecific ST-T abnormalities were observed, notably in the subcategories of the male subjects and the diabetic patients.

  12. Boltzmann-conserving classical dynamics in quantum time-correlation functions: "Matsubara dynamics".

    PubMed

    Hele, Timothy J H; Willatt, Michael J; Muolo, Andrea; Althorpe, Stuart C

    2015-04-07

    We show that a single change in the derivation of the linearized semiclassical-initial value representation (LSC-IVR or "classical Wigner approximation") results in a classical dynamics which conserves the quantum Boltzmann distribution. We rederive the (standard) LSC-IVR approach by writing the (exact) quantum time-correlation function in terms of the normal modes of a free ring-polymer (i.e., a discrete imaginary-time Feynman path), taking the limit that the number of polymer beads N → ∞, such that the lowest normal-mode frequencies take their "Matsubara" values. The change we propose is to truncate the quantum Liouvillian, not explicitly in powers of ħ² at ħ⁰ (which gives back the standard LSC-IVR approximation), but in the normal-mode derivatives corresponding to the lowest Matsubara frequencies. The resulting "Matsubara" dynamics is inherently classical (since all terms O(ħ²) disappear from the Matsubara Liouvillian in the limit N → ∞) and conserves the quantum Boltzmann distribution because the Matsubara Hamiltonian is symmetric with respect to imaginary-time translation. Numerical tests show that the Matsubara approximation to the quantum time-correlation function converges with respect to the number of modes and gives better agreement than LSC-IVR with the exact quantum result. Matsubara dynamics is too computationally expensive to be applied to complex systems, but its further approximation may lead to practical methods.
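
    For orientation, the frequencies involved can be written down explicitly (standard ring-polymer relations consistent with the abstract, not quoted from the paper): the free ring-polymer normal-mode frequencies and their N → ∞ Matsubara limit are

```latex
\[
\omega_k = \frac{2}{\beta_N \hbar}\,\sin\!\left(\frac{|k|\pi}{N}\right),
\qquad \beta_N = \frac{\beta}{N},
\qquad \lim_{N\to\infty} \omega_k = \frac{2\pi |k|}{\beta\hbar},
\qquad k = 0,\ \pm 1,\ \pm 2,\ \dots
\]
```

    so truncating at the lowest Matsubara frequencies retains only the smoothest imaginary-time modes.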

  13. Exercise-induced muscle glucose uptake in mice with graded, muscle-specific GLUT-4 deletion.

    PubMed

    Howlett, Kirsten F; Andrikopoulos, Sofianos; Proietto, Joseph; Hargreaves, Mark

    2013-08-01

    To investigate the importance of the glucose transporter GLUT-4 for muscle glucose uptake during exercise, transgenic mice with skeletal muscle GLUT-4 expression approximately 30-60% of normal (CON) and approximately 5-10% of normal (KO) were generated using the Cre/Lox system and compared with wild-type (WT) mice during approximately 40 min of treadmill running (KO: 37.7 ± 1.3 min; WT: 40 min; CON: 40 min, P = 0.18). In WT and CON animals, exercise resulted in an overall increase in muscle glucose uptake. More specifically, glucose uptake was increased in red gastrocnemius of WT mice and in the soleus and red gastrocnemius of CON mice. In contrast, the exercise-induced increase in muscle glucose uptake in all muscles was completely abolished in KO mice. Muscle glucose uptake increased during exercise in both red and white quadriceps of WT mice, while the small increases in CON mice were not statistically significant. In KO mice, there was no change at all in quadriceps muscle glucose uptake. No differences in muscle glycogen use during exercise were observed between any of the groups. However, there was a significant increase in plasma glucose levels after exercise in KO mice. The results of this study demonstrated that a reduction in skeletal muscle GLUT-4 expression to approximately 10% of normal levels completely abolished the exercise-induced increase in muscle glucose uptake.

  14. Relationship Between Habitual Exercise and Performance on Cardiopulmonary Exercise Testing Differs Between Children With Single and Biventricular Circulations.

    PubMed

    O'Byrne, Michael L; Desai, Sanyukta; Lane, Megan; McBride, Michael; Paridon, Stephen; Goldmuntz, Elizabeth

    2017-03-01

    Increasing habitual exercise has been associated with improved cardiopulmonary exercise testing (CPET) performance, specifically maximal oxygen consumption in children with operatively corrected congenital heart disease. This has not been studied in children following Fontan palliation, a population in whom CPET performance is dramatically diminished. A single-center cross-sectional study with prospective and retrospective data collection was performed that assessed habitual exercise preceding a clinically indicated CPET in children and adolescents with Fontan palliation, transposition of the great arteries following arterial switch operation (TGA), and normal cardiac anatomy without prior operation. Data from contemporaneous clinical reports and imaging studies were collected. The association between percent predicted VO2max and habitual exercise duration adjusted for known covariates was tested. A total of 175 subjects (75 post-Fontan, 20 with TGA, and 80 with normal cardiac anatomy) were enrolled. VO2max was lower in the Fontan group than patients with normal cardiac anatomy (p < 0.0001) or TGA (p < 0.0001). In Fontan subjects, both univariate and multivariate analysis failed to demonstrate a significant association between habitual exercise and VO2max (p = 0.6), in sharp contrast to cardiac normal subjects. In multivariate analysis, increasing age was the only independent risk factor associated with decreasing VO2max in the Fontan group (p = 0.003). Habitual exercise was not associated with VO2max in subjects with a Fontan as compared to biventricular circulation. Further research is necessary to understand why their habitual exercise is ineffective and/or what aspects of the Fontan circulation disrupt this association.

  15. Relationship between habitual exercise and performance on cardio-pulmonary exercise testing differs between children with single and bi-ventricular circulation

    PubMed Central

    O'Byrne, Michael L; Desai, Sanyukta; Lane, Megan; McBride, Michael; Paridon, Stephen; Goldmuntz, Elizabeth

    2016-01-01

    Background Increasing habitual exercise has been associated with improved cardiopulmonary exercise testing (CPET) performance, specifically maximal oxygen consumption in children with operatively corrected congenital heart disease. This has not been studied in children following Fontan palliation, a population in whom CPET performance is dramatically diminished. Methods A single-center cross-sectional study with prospective and retrospective data collection was performed that assessed habitual exercise preceding a clinically indicated CPET in children and adolescents with Fontan palliation, transposition of the great arteries following arterial switch operation (TGA), and normal cardiac anatomy without prior operation. Data from contemporaneous clinical reports and imaging studies were collected. The association between percent predicted VO2max and habitual exercise duration adjusted for known covariates was tested. Results A total of 175 subjects (75 post Fontan, 20 with TGA, and 80 with normal cardiac anatomy) were enrolled. VO2max was lower in the Fontan group than patients with normal cardiac anatomy (p<0.0001) or TGA (p<0.0001). In Fontan subjects, both univariate and multivariate analysis failed to demonstrate a significant association between habitual exercise and VO2max (p=0.6), in sharp contrast to cardiac normal subjects. In multivariate analysis, increasing age was the only independent risk factor associated with decreasing VO2max in the Fontan group (p=0.003). Discussion Habitual exercise was not associated with VO2max in subjects with a Fontan as compared to biventricular circulation. Further research is necessary to understand why their habitual exercise is ineffective and/or what aspects of the Fontan circulation disrupt this association. PMID:27878634

  16. Optimizing Functional Network Representation of Multivariate Time Series

    NASA Astrophysics Data System (ADS)

    Zanin, Massimiliano; Sousa, Pedro; Papo, David; Bajo, Ricardo; García-Prieto, Juan; Pozo, Francisco Del; Menasalvas, Ernestina; Boccaletti, Stefano

    2012-09-01

    By combining complex network theory and data mining techniques, we provide objective criteria for optimization of the functional network representation of generic multivariate time series. In particular, we propose a method for the principled selection of the threshold value for functional network reconstruction from raw data, and for proper identification of the network's indicators that unveil the most discriminative information on the system for classification purposes. We illustrate our method by analysing networks of functional brain activity of healthy subjects, and patients suffering from Mild Cognitive Impairment, an intermediate stage between the expected cognitive decline of normal aging and the more pronounced decline of dementia. We discuss extensions of the scope of the proposed methodology to network engineering purposes, and to other data mining tasks.
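
    The thresholding step at the core of this reconstruction can be sketched as follows (a minimal pure-Python illustration; the helper names `pearson`, `functional_network`, and `link_density` are hypothetical, and the authors' threshold-selection criteria are more principled than the fixed value used here):

```python
from itertools import combinations
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def functional_network(series, threshold):
    """Link channel pairs whose |correlation| meets the threshold."""
    edges = set()
    for i, j in combinations(range(len(series)), 2):
        if abs(pearson(series[i], series[j])) >= threshold:
            edges.add((i, j))
    return edges

def link_density(edges, n):
    """Fraction of possible links retained -- one simple network indicator."""
    return len(edges) / (n * (n - 1) / 2)
```

    With a high threshold only strongly correlated channel pairs survive as links; sweeping the threshold while tracking indicators such as link density is one way to frame the selection problem the paper addresses.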

  17. Optimizing Functional Network Representation of Multivariate Time Series

    PubMed Central

    Zanin, Massimiliano; Sousa, Pedro; Papo, David; Bajo, Ricardo; García-Prieto, Juan; Pozo, Francisco del; Menasalvas, Ernestina; Boccaletti, Stefano

    2012-01-01

    By combining complex network theory and data mining techniques, we provide objective criteria for optimization of the functional network representation of generic multivariate time series. In particular, we propose a method for the principled selection of the threshold value for functional network reconstruction from raw data, and for proper identification of the network's indicators that unveil the most discriminative information on the system for classification purposes. We illustrate our method by analysing networks of functional brain activity of healthy subjects, and patients suffering from Mild Cognitive Impairment, an intermediate stage between the expected cognitive decline of normal aging and the more pronounced decline of dementia. We discuss extensions of the scope of the proposed methodology to network engineering purposes, and to other data mining tasks. PMID:22953051

  18. Longitudinal assessment of treatment effects on pulmonary ventilation using 1H/3He MRI multivariate templates

    NASA Astrophysics Data System (ADS)

    Tustison, Nicholas J.; Contrella, Benjamin; Altes, Talissa A.; Avants, Brian B.; de Lange, Eduard E.; Mugler, John P.

    2013-03-01

    The utility of pulmonary functional imaging techniques, such as hyperpolarized 3He MRI, has encouraged their inclusion in research studies for longitudinal assessment of disease progression and the study of treatment effects. We present methodology for performing voxelwise statistical analysis of ventilation maps derived from hyperpolarized 3He MRI which incorporates multivariate template construction using simultaneous acquisition of 1H and 3He images. Additional processing steps include intensity normalization, bias correction, 4-D longitudinal segmentation, and generation of expected ventilation maps prior to voxelwise regression analysis. Analysis is demonstrated on a cohort of eight individuals with diagnosed cystic fibrosis (CF) undergoing treatment, imaged five times at two-week intervals under a prescribed treatment schedule.

  19. Prevalence and predictors of thyroid functional abnormalities in newly diagnosed AL amyloidosis.

    PubMed

    Muchtar, E; Dean, D S; Dispenzieri, A; Dingli, D; Buadi, F K; Lacy, M Q; Hayman, S R; Kapoor, P; Leung, N; Russell, S; Lust, J A; Lin, Yi; Warsame, R; Gonsalves, W; Kourelis, T V; Go, R S; Chakraborty, R; Zeldenrust, S; Kyle, R A; Rajkumar, S Vincent; Kumar, S K; Gertz, M A

    2017-06-01

    Data on the effect of systemic immunoglobulin light chain amyloidosis (AL amyloidosis) on thyroid function are limited. To assess the prevalence of hypothyroidism in AL amyloidosis patients and determine its predictors. 1142 newly diagnosed AL amyloidosis patients were grouped based on the thyroid-stimulating hormone (TSH) measurement at diagnosis: hypothyroid group (TSH above upper normal reference; >5 mIU L⁻¹; n = 217, 19% of study participants) and euthyroid group (n = 925, 81%). Predictors for hypothyroidism were assessed in a binary multivariate model. Survival between groups was compared using the log-rank test and a multivariate analysis. Patients with hypothyroidism were older, more likely to present with renal and hepatic involvement and had a higher light chain burden compared to patients in the euthyroid group. Higher proteinuria in patients with renal involvement and lower albumin in patients with hepatic involvement were associated with hypothyroidism. In a binary logistic regression model, age ≥65 years, female sex, renal involvement, hepatic involvement, kappa light chain restriction and amiodarone use were independently associated with hypothyroidism. Ninety-three per cent of patients in the hypothyroid group with free thyroxine measurement had normal values, consistent with subclinical hypothyroidism. Patients in the hypothyroid group had a shorter survival compared to patients in the euthyroid group (4-year survival 36% vs 43%; P = 0.008), a difference that was maintained in a multivariate analysis. A significant proportion of patients with AL amyloidosis present with hypothyroidism, predominantly subclinical, which carries a survival disadvantage. Routine assessment of TSH in these patients is warranted. © 2017 The Association for the Publication of the Journal of Internal Medicine.

  20. Using cystoscopy to segment bladder tumors with a multivariate approach in different color spaces.

    PubMed

    Freitas, Nuno R; Vieira, Pedro M; Lima, Estevao; Lima, Carlos S

    2017-07-01

    Nowadays the diagnosis of bladder lesions relies upon cystoscopy examination and depends on the interpreter's experience. State-of-the-art bladder tumor identification methods are based on 3D reconstruction, using CT images (Virtual Cystoscopy) or images where the structures are enhanced with the use of pigmentation, but none uses white-light cystoscopy images. An initial attempt to automatically identify tumoral tissue was already developed by the authors, and this paper develops that idea further. Traditional cystoscopy image processing has huge potential to improve early tumor detection and allows a more effective treatment. This paper describes a multivariate approach to the segmentation of bladder cystoscopy images that will be used to automatically detect tumors and improve the physician's diagnosis. Each region can be assumed to follow a normal distribution with specific parameters, leading to the assumption that the distribution of intensities is a Gaussian Mixture Model (GMM). Regions of high-grade and low-grade tumors usually appear with higher intensity than normal regions. This paper proposes a Maximum a Posteriori (MAP) approach based on pixel intensities read simultaneously in different color channels from the RGB, HSV and CIELab color spaces. The Expectation-Maximization (EM) algorithm is used to estimate the best multivariate GMM parameters. Experimental results show that the proposed method segments bladder tumors into two classes more effectively in RGB, even in cases where the tumor shape is not well defined. Results also show that the elimination of component L from the CIELab color space does not allow definition of the tumor shape.
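
    The EM/MAP pipeline described above can be sketched as follows (a simplified two-component GMM with diagonal covariances over multichannel pixel vectors; the function names and the darkest/brightest initialization heuristic are illustrative assumptions, not the authors' implementation, which fits full multivariate GMMs across color spaces):

```python
import math

def gauss_diag(x, mean, var):
    """Diagonal-covariance multivariate normal density at pixel vector x."""
    p = 1.0
    for xi, mi, vi in zip(x, mean, var):
        p *= math.exp(-(xi - mi) ** 2 / (2.0 * vi)) / math.sqrt(2.0 * math.pi * vi)
    return p

def em_gmm2(pixels, iters=30):
    """EM for a 2-component diagonal GMM over multichannel pixel vectors,
    initialized from the darkest and brightest pixels (a heuristic)."""
    d = len(pixels[0])
    means = [list(min(pixels, key=sum)), list(max(pixels, key=sum))]
    varis = [[1.0] * d for _ in range(2)]
    weights = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each pixel.
        resp = []
        for x in pixels:
            pk = [weights[k] * gauss_diag(x, means[k], varis[k]) for k in range(2)]
            s = sum(pk) or 1e-300
            resp.append([p / s for p in pk])
        # M-step: re-estimate weights, means, and per-channel variances.
        for k in range(2):
            nk = sum(r[k] for r in resp) or 1e-300
            weights[k] = nk / len(pixels)
            means[k] = [sum(r[k] * x[j] for r, x in zip(resp, pixels)) / nk
                        for j in range(d)]
            varis[k] = [max(1e-6, sum(r[k] * (x[j] - means[k][j]) ** 2
                                      for r, x in zip(resp, pixels)) / nk)
                        for j in range(d)]
    return weights, means, varis

def map_label(x, weights, means, varis):
    """MAP classification: pick the component with the larger posterior."""
    pk = [weights[k] * gauss_diag(x, means[k], varis[k]) for k in range(2)]
    return 0 if pk[0] >= pk[1] else 1
```

    Pixels are then labeled with `map_label`; in this simplification the brighter component plays the role of the tumor class.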

  1. Multivariable normal-tissue complication modeling of acute esophageal toxicity in advanced stage non-small cell lung cancer patients treated with intensity-modulated (chemo-)radiotherapy.

    PubMed

    Wijsman, Robin; Dankers, Frank; Troost, Esther G C; Hoffmann, Aswin L; van der Heijden, Erik H F M; de Geus-Oei, Lioe-Fee; Bussink, Johan

    2015-10-01

    The majority of normal-tissue complication probability (NTCP) models for acute esophageal toxicity (AET) in advanced stage non-small cell lung cancer (AS-NSCLC) patients treated with (chemo-)radiotherapy are based on three-dimensional conformal radiotherapy (3D-CRT). Due to distinct dosimetric characteristics of intensity-modulated radiation therapy (IMRT), 3D-CRT based models need revision. We established a multivariable NTCP model for AET in 149 AS-NSCLC patients undergoing IMRT. An established model selection procedure was used to develop an NTCP model for Grade ⩾2 AET (53 patients) including clinical and esophageal dose-volume histogram parameters. The NTCP model predicted an increased risk of Grade ⩾2 AET in case of: concurrent chemoradiotherapy (CCR) [adjusted odds ratio (OR) 14.08, 95% confidence interval (CI) 4.70-42.19; p<0.001], increasing mean esophageal dose [Dmean; OR 1.12 per Gy increase, 95% CI 1.06-1.19; p<0.001], female patients (OR 3.33, 95% CI 1.36-8.17; p=0.008), and ⩾cT3 (OR 2.7, 95% CI 1.12-6.50; p=0.026). The AUC was 0.82 and the model showed good calibration. A multivariable NTCP model including CCR, Dmean, clinical tumor stage and gender predicts Grade ⩾2 AET after IMRT for AS-NSCLC. Prior to clinical introduction, the model needs validation in an independent patient cohort. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
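
    The shape of such a multivariable NTCP model can be sketched as follows (a hedged illustration: the coefficients are the logs of the odds ratios reported above, but the intercept is a placeholder, since the abstract does not report the fitted model itself):

```python
import math

def ntcp_logistic(intercept, coefs, features):
    """Multivariable logistic NTCP: p = 1 / (1 + exp(-(b0 + sum(b_i * x_i)))).
    Each coefficient is the log of the corresponding odds ratio; the
    intercept passed in below is illustrative only."""
    s = intercept + sum(b * x for b, x in zip(coefs, features))
    return 1.0 / (1.0 + math.exp(-s))

# Illustrative coefficients recovered from the reported ORs:
# ln(1.12) per Gy of mean esophageal dose, ln(14.08) for concurrent CRT.
B_DMEAN = math.log(1.12)
B_CCR = math.log(14.08)
```

    Raising Dmean or switching on the CCR indicator can only increase the predicted probability, matching the direction of the reported odds ratios.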

  2. CDC6600 subroutine for normal random variables. [RVNORM (RMU, SIG)]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amos, D.E.

    1977-04-01

    A value y for a uniform variable on (0,1) is generated, and a table of 96 percent points for the (0,1) normal distribution is interpolated for a value of the normal variable x(0,1) on 0.02 ≤ y ≤ 0.98. For the tails, the inverse normal is computed by a rational Chebyshev approximation in an appropriate variable. Then X = xσ + μ gives the X(μ,σ) variable.
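
    A modern stdlib analogue of this routine can be sketched as follows (inverse-transform sampling, with `statistics.NormalDist.inv_cdf` standing in for the table interpolation and rational Chebyshev tail approximation; the function name is illustrative):

```python
import random
import statistics

def rvnorm(mu, sig, rng=random.random):
    """Draw X ~ N(mu, sigma) by inverse-transform sampling, mirroring the
    RVNORM routine: generate a uniform y on (0,1), invert the standard
    normal CDF, then scale and shift with X = x*sigma + mu."""
    y = rng()
    while y == 0.0:  # inv_cdf requires 0 < y < 1
        y = rng()
    x = statistics.NormalDist().inv_cdf(y)  # standard normal quantile
    return x * sig + mu
```

    With a few thousand draws the sample mean and standard deviation recover μ and σ to within a few percent.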

  3. A Noncentral "t" Regression Model for Meta-Analysis

    ERIC Educational Resources Information Center

    Camilli, Gregory; de la Torre, Jimmy; Chiu, Chia-Yi

    2010-01-01

    In this article, three multilevel models for meta-analysis are examined. Hedges and Olkin suggested that effect sizes follow a noncentral "t" distribution and proposed several approximate methods. Raudenbush and Bryk further refined this model; however, this procedure is based on a normal approximation. In the current research literature, this…

  4. Ontogenetic, gravity-dependent development of rat soleus muscle

    NASA Technical Reports Server (NTRS)

    Ohira, Y.; Tanaka, T.; Yoshinaga, T.; Kawano, F.; Nomura, T.; Nonaka, I.; Allen, D. L.; Roy, R. R.; Edgerton, V. R.

    2001-01-01

    We tested the hypothesis that rat soleus muscle fiber growth and changes in myosin phenotype during the postnatal, preweaning period would be largely independent of weight bearing. The hindlimbs of one group of pups were unloaded intermittently from postnatal day 4 to day 21: the pups were isolated from the dam for 5 h during unloading and returned for nursing for 1 h. Control pups were either maintained with the dam as normal or put on an alternating feeding schedule as described above. The enlargement of mass (approximately 3 times), increase in myonuclear number (approximately 1.6 times) and myonuclear domain (approximately 2.6 times), and transformation toward a slow fiber phenotype (from 56 to 70% fibers expressing type I myosin heavy chain) observed in controls were inhibited by hindlimb unloading. These properties were normalized to control levels or higher within 1 mo of reambulation beginning immediately after the unloading period. Therefore, chronic unloading essentially stopped the ontogenetic developmental processes of 1) net increase in DNA available for transcription, 2) increase in amount of cytoplasm sustained by that DNA pool, and 3) normal transition of myosin isoforms that occur in some fibers from birth to weaning. It is concluded that normal ontogenetic development of a postural muscle is highly dependent on the gravitational environment even during the early postnatal period, when full weight-bearing activity is not routine.

  5. Association Between Body Mass Index and Gastroesophageal Reflux Symptoms in Both Normal Weight and Overweight Women

    PubMed Central

    Jacobson, Brian C.; Somers, Samuel C.; Fuchs, Charles S.; Kelly, Ciarán P.; Camargo, Carlos A.

    2009-01-01

    Background Overweight and obese individuals are at increased risk for gastroesophageal reflux disease (GERD). An association between body mass index (BMI) and GERD symptoms among normal weight individuals has not been demonstrated. Methods In 2000, a supplemental questionnaire was used to determine the frequency, severity, and duration of GERD symptoms among randomly-selected participants of the Nurses’ Health Study. After categorizing women by BMI as measured in 1998, we used logistic regression models to study the association between BMI and GERD symptoms. Results Among 10,545 women who completed the questionnaire (86% response rate), 2,310 (22%) reported experiencing symptoms at least once a week (55% of whom described their symptoms as moderate in severity). We observed a dose-dependent relationship between increasing BMI and frequent reflux symptoms (multivariate P for trend <0.001). Compared to women with BMI 20–22.49 kg/m2, the multivariate odds ratios (ORs) were 1.38 (95% CI 1.13–1.67) for BMI 22.5–24.9; 2.20 (95% CI 1.81–2.66) for BMI 25–27.4; 2.43 (95% CI 1.96–3.01) for BMI 27.5–29.9; 2.92 (95% CI 2.35–3.62) for BMI 30–34.9, 2.93 (95% CI 2.24–3.85) for BMI ≥35, and 0.67 (95% CI 0.48–0.93) for BMI <20. Even among women with normal baseline BMI, weight gain between 1984 and 1998 was associated with increased risk of frequent reflux symptoms (OR 2.8 (95% CI 1.63–4.82) for BMI increase >3.5). Conclusion BMI is associated with GERD symptoms in both normal weight and overweight individuals. Our findings suggest that even modest weight gain among normal weight individuals may cause or exacerbate reflux symptoms. PMID:16738270

  6. β2 -microglobulin normalization within 6 months of ibrutinib-based treatment is associated with superior progression-free survival in patients with chronic lymphocytic leukemia.

    PubMed

    Thompson, Philip A; O'Brien, Susan M; Xiao, Lianchun; Wang, Xuemei; Burger, Jan A; Jain, Nitin; Ferrajoli, Alessandra; Estrov, Zeev; Keating, Michael J; Wierda, William G

    2016-02-15

    A high pretreatment β2 -microglobulin (B2M) level is associated with inferior survival outcomes in patients with chronic lymphocytic leukemia. However, to the authors' knowledge, the prognostic and predictive significance of changes in B2M during treatment have not been reported to date. The authors analyzed 83 patients treated with ibrutinib-based regimens (66 with recurrent/refractory disease) and 198 treatment-naive patients who were treated with combined fludarabine, cyclophosphamide, and rituximab (FCR) to characterize changes in B2M and their relationship with clinical outcomes. B2M rapidly decreased during treatment with ibrutinib; on multivariable analysis, patients who received FCR (odds ratio, 0.40; 95% confidence interval [95% CI], 0.18-0.90 [P = .027]) were less likely to have normalized B2M at 6 months than patients treated with ibrutinib. On univariable analysis, normalization of B2M was associated with superior progression-free survival (PFS) from the 6-month landmark in patients treated with ibrutinib-based regimens and FCR. On multivariable analysis, failure to achieve normalized B2M at 6 months of treatment was associated with inferior PFS (hazard ratio, 16.9; 95% CI, 1.3-220.0 [P = .031]) for patients treated with ibrutinib, after adjusting for the effects of baseline B2M, stage of disease, fludarabine-refractory disease, and del(17p). In contrast, in patients treated with FCR, negative minimal residual disease status in the bone marrow was the only variable found to be significantly associated with superior PFS (hazard ratio, 0.28; 95% CI, 0.12-0.67 [P = .004]). Normalization of B2M at 6 months in patients treated with ibrutinib was found to be a useful predictor of subsequent PFS and may assist in clinical decision-making. © 2015 American Cancer Society.

  7. More on approximations of Poisson probabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kao, C

    1980-05-01

    Calculation of Poisson probabilities frequently involves calculating high factorials, which becomes tedious and time-consuming with regular calculators. The usual way to overcome this difficulty has been to find approximations by making use of the table of the standard normal distribution. A new transformation proposed by Kao in 1978 appears to perform better for this purpose than traditional transformations. In the present paper several approximation methods are stated and compared numerically, including an approximation method that utilizes a modified version of Kao's transformation. An approximation based on a power transformation was found to outperform those based on the square-root type transformations as proposed in the literature. The traditional Wilson-Hilferty approximation and Makabe-Morimura approximation are extremely poor compared with this approximation. 4 tables. (RWR)
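
    As an illustration of the kind of comparison discussed above (not the paper's own methods, data, or Kao's transformation itself), the sketch below contrasts the exact Poisson CDF with a direct normal approximation and a classic square-root-transformation approximation; λ and x are arbitrary example values.

```python
import math

def poisson_cdf(x, lam):
    """Exact P(X <= x) for X ~ Poisson(lam), by summing the pmf."""
    return sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(x + 1))

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def normal_approx(x, lam):
    """Direct normal approximation with continuity correction."""
    return phi((x + 0.5 - lam) / math.sqrt(lam))

def sqrt_approx(x, lam):
    """Square-root transformation: sqrt(X) is approximately N(sqrt(lam), 1/4)."""
    return phi(2.0 * (math.sqrt(x + 0.5) - math.sqrt(lam)))

lam, x = 10.0, 12
print(poisson_cdf(x, lam), normal_approx(x, lam), sqrt_approx(x, lam))
```

    Both approximations avoid the large factorials; their accuracy can be checked against the exact sum for any small λ.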

  8. Initial Principal Readiness to Interconnect Positive Behavioral Interventions and Supports and School Mental Health: A Sequential Multivariate Exploratory Analysis

    ERIC Educational Resources Information Center

    Ecker, Andrew Joseph

    2017-01-01

    Approximately 20% of youth in the U.S. are experiencing a mental health challenge, a rate that is said to increase by more than 50% by 2020. Schools are the largest provider of mental health services to youth, yet two of schools' most efficacious evidence-based systems, Positive Behavioral Interventions and Supports (PBIS) and school mental health…

  9. Prospects of second generation artificial intelligence tools in calibration of chemical sensors.

    PubMed

    Braibanti, Antonio; Rao, Rupenaguntla Sambasiva; Ramam, Veluri Anantha; Rao, Gollapalli Nageswara; Rao, Vaddadi Venkata Panakala

    2005-05-01

    Multivariate data-driven calibration models with neural networks (NNs) are developed for binary (Cu++ and Ca++) and quaternary (K+, Ca++, NO3- and Cl-) ion-selective electrode (ISE) data. The response profiles of ISEs with concentrations are non-linear and sub-Nernstian. This task represents function approximation of multi-variate, multi-response, correlated, non-linear data with unknown noise structure, i.e., multi-component calibration/prediction in chemometric parlance. Radial basis function (RBF) and Fuzzy-ARTMAP-NN models implemented in the software packages, TRAJAN and Professional II, are employed for the calibration. The optimum NN models reported are based on residuals in concentration space. Being a data-driven information technology, NN does not require a model, a prior or posterior distribution of the data, or a noise structure. Missing information, spikes or newer trends in different concentration ranges can be modeled through novelty detection. Two simulated data sets generated from mathematical functions are modeled as a function of number of data points and network parameters like number of neurons and nearest neighbors. The success of RBF and Fuzzy-ARTMAP-NNs to develop adequate calibration models for experimental data and function approximation models for more complex simulated data sets ensures AI2 (artificial intelligence, 2nd generation) as a promising technology in quantitation.
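
    A minimal sketch of the RBF idea on a hypothetical 1-D calibration curve (not the TRAJAN/Professional II implementation, and the data values are invented): one Gaussian basis function is centred on each calibration point and the interpolation weights are solved by Gaussian elimination.

```python
import math

def gauss_solve(A, b):
    """Solve A w = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][c] * w[c] for c in range(r + 1, n))) / M[r][r]
    return w

def rbf(r, width=1.0):
    """Gaussian radial basis function."""
    return math.exp(-(r / width) ** 2)

# Hypothetical calibration points: log-concentration vs. electrode response
# (a non-linear, sub-Nernstian-looking curve).
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 0.9, 1.4, 1.6]

# Interpolation system: one basis function centred on each data point.
A = [[rbf(abs(xi - xj)) for xj in xs] for xi in xs]
weights = gauss_solve(A, ys)

def predict(x):
    """Evaluate the fitted RBF model at a new point."""
    return sum(w * rbf(abs(x - c)) for w, c in zip(weights, xs))
```

    Real ISE calibration would use many more points, cross-validated widths, and regularization; this only shows the basis-expansion mechanics.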

  10. Constructing general partial differential equations using polynomial and neural networks.

    PubMed

    Zjavka, Ladislav; Pedrycz, Witold

    2016-01-01

    Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial items together with the parameters with the aim of improving the ability of the polynomial derivative term series to approximate complicated periodic functions, as simple low-order polynomials are not able to fully reproduce complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. The simultaneous integration of many trajectories using nilpotent normal forms

    NASA Technical Reports Server (NTRS)

    Grayson, Matthew A.; Grossman, Robert

    1990-01-01

    Taylor's formula shows how to approximate a certain class of functions by polynomials. The approximations are arbitrarily good in some neighborhood whenever the function is analytic and they are easy to compute. The main goal is to give an efficient algorithm to approximate a neighborhood of the configuration space of a dynamical system by a nilpotent, explicitly integrable dynamical system. The major areas covered include: an approximating map; the generalized Baker-Campbell-Hausdorff formula; the Picard-Taylor method; the main theorem; simultaneous integration of trajectories; and examples.
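
    The opening point, that Taylor polynomials give arbitrarily good and easily computed local approximations, can be illustrated with a short sketch (expansion of exp about 0; the degrees and evaluation point are arbitrary examples, not from the paper):

```python
import math

def taylor_exp(x, n):
    """Degree-n Taylor polynomial of exp about 0: sum of x^k / k!."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

# The approximation error shrinks rapidly as the degree grows,
# for points inside a neighbourhood of the expansion point.
for n in (2, 4, 8):
    print(n, abs(taylor_exp(0.5, n) - math.exp(0.5)))
```

    The same local-approximation idea underlies the nilpotent approximating systems described above: truncate the expansion, integrate the simpler system.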

  12. Standardized measures of lobular involution and subsequent breast cancer risk among women with benign breast disease: a nested case-control study.

    PubMed

    Figueroa, Jonine D; Pfeiffer, Ruth M; Brinton, Louise A; Palakal, Maya M; Degnim, Amy C; Radisky, Derek; Hartmann, Lynn C; Frost, Marlene H; Stallings Mann, Melody L; Papathomas, Daphne; Gierach, Gretchen L; Hewitt, Stephen M; Duggan, Maire A; Visscher, Daniel; Sherman, Mark E

    2016-08-01

    Lesser degrees of terminal duct-lobular unit (TDLU) involution predict higher breast cancer risk; however, standardized measures to quantitate levels of TDLU involution have only recently been developed. We assessed whether three standardized measures of TDLU involution, with high intra/inter pathologist reproducibility in normal breast tissue, predict subsequent breast cancer risk among women in the Mayo benign breast disease (BBD) cohort. We performed a masked evaluation of biopsies from 99 women with BBD who subsequently developed breast cancer (cases) after a median of 16.9 years and 145 age-matched controls. We assessed three metrics inversely related to TDLU involution: TDLU count/mm(2), median TDLU span (microns, which approximates acini content), and median category of acini counts/TDLU (0-10; 11-20; 21-30; 31-50; >50). Associations with subsequent breast cancer risk for quartiles (or categories of acini counts) of each of these measures were assessed with multivariable conditional logistic regression to estimate odds ratios (ORs) and 95 % confidence intervals (CI). In multivariable models, the highest versus lowest quartiles of TDLU counts and TDLU span measures were significantly associated with subsequent breast cancer diagnoses; TDLU counts quartile4 versus quartile1, OR = 2.44, 95 %CI 0.96-6.19, p-trend = 0.02; and TDLU spans, quartile4 versus quartile1, OR = 2.83, 95 %CI = 1.13-7.06, p-trend = 0.03. Significant associations with categorical measures of acini counts/TDLU were also observed: compared to women with a median category of <10 acini/TDLU, women with >25 acini counts/TDLU were at significantly higher risk, OR = 3.40, 95 %CI 1.03-11.17, p-trend = 0.032. Women with TDLU spans and TDLU count measures above the median were at further increased risk, OR = 3.75 (95 %CI 1.40-10.00, p-trend = 0.008), compared with women below the median for both of these metrics. 
Similar results were observed for combinatorial metrics of TDLU acini counts/TDLU, and TDLU count. Standardized quantitative measures of TDLU counts and acini counts approximated by TDLU span measures or visually assessed in categories are independently associated with breast cancer risk. Visual assessment of TDLU numbers and acini content, which are highly reproducible between pathologists, could help identify women at high risk for subsequent breast cancer among the million women diagnosed annually with BBD in the US.

  13. Standardized measures of lobular involution and subsequent breast cancer risk among women with benign breast disease: a nested case–control study

    PubMed Central

    Figueroa, Jonine D.; Pfeiffer, Ruth M.; Brinton, Louise A.; Palakal, Maya M.; Degnim, Amy C.; Radisky, Derek; Hartmann, Lynn C.; Frost, Marlene H.; Mann, Melody L. Stallings; Papathomas, Daphne; Gierach, Gretchen L.; Hewitt, Stephen M.; Duggan, Maire A.; Visscher, Daniel; Sherman, Mark E.

    2016-01-01

    Lesser degrees of terminal duct-lobular unit (TDLU) involution predict higher breast cancer risk; however, standardized measures to quantitate levels of TDLU involution have only recently been developed. We assessed whether three standardized measures of TDLU involution, with high intra/inter pathologist reproducibility in normal breast tissue, predict subsequent breast cancer risk among women in the Mayo benign breast disease (BBD) cohort. We performed a masked evaluation of biopsies from 99 women with BBD who subsequently developed breast cancer (cases) after a median of 16.9 years and 145 age-matched controls. We assessed three metrics inversely related to TDLU involution: TDLU count/mm2, median TDLU span (microns, which approximates acini content), and median category of acini counts/TDLU (0–10; 11–20; 21–30; 31–50; >50). Associations with subsequent breast cancer risk for quartiles (or categories of acini counts) of each of these measures were assessed with multivariable conditional logistic regression to estimate odds ratios (ORs) and 95 % confidence intervals (CI). In multivariable models, the highest versus lowest quartiles of TDLU counts and TDLU span measures were significantly associated with subsequent breast cancer diagnoses; TDLU counts quartile4 versus quartile1, OR = 2.44, 95 %CI 0.96–6.19, p-trend = 0.02; and TDLU spans, quartile4 versus quartile1, OR = 2.83, 95 %CI = 1.13–7.06, p-trend = 0.03. Significant associations with categorical measures of acini counts/TDLU were also observed: compared to women with a median category of <10 acini/TDLU, women with >25 acini counts/TDLU were at significantly higher risk, OR = 3.40, 95 %CI 1.03–11.17, p-trend = 0.032. Women with TDLU spans and TDLU count measures above the median were at further increased risk, OR = 3.75 (95 %CI 1.40–10.00, p-trend = 0.008), compared with women below the median for both of these metrics. 
Similar results were observed for combinatorial metrics of TDLU acini counts/TDLU, and TDLU count. Standardized quantitative measures of TDLU counts and acini counts approximated by TDLU span measures or visually assessed in categories are independently associated with breast cancer risk. Visual assessment of TDLU numbers and acini content, which are highly reproducible between pathologists, could help identify women at high risk for subsequent breast cancer among the million women diagnosed annually with BBD in the US. PMID:27488681

  14. Simulating Univariate and Multivariate Burr Type III and Type XII Distributions through the Method of L-Moments

    ERIC Educational Resources Information Center

    Pant, Mohan Dev

    2011-01-01

    The Burr families (Type III and Type XII) of distributions are traditionally used in the context of statistical modeling and for simulating non-normal distributions with moment-based parameters (e.g., Skew and Kurtosis). In educational and psychological studies, the Burr families of distributions can be used to simulate extremely asymmetrical and…
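
    The abstract concerns L-moment-based parameterization; as a minimal illustration of simulating a Burr Type XII variate (inverse-transform sampling from the closed-form quantile function, not the L-moment procedure itself), with hypothetical shape parameters c and k:

```python
import random

def burr12_quantile(u, c, k):
    """Inverse CDF of Burr Type XII, whose CDF is F(x) = 1 - (1 + x**c) ** (-k)."""
    return ((1.0 - u) ** (-1.0 / k) - 1.0) ** (1.0 / c)

def burr12_sample(n, c, k, seed=0):
    """Inverse-transform sampling: apply F^{-1} to U ~ Uniform(0, 1) draws."""
    rng = random.Random(seed)
    return [burr12_quantile(rng.random(), c, k) for _ in range(n)]

# Example: 1000 draws from a right-skewed Burr XII shape.
sample = burr12_sample(1000, c=2.0, k=3.0)
```

    Varying c and k traces out the wide range of skew and kurtosis combinations that makes the Burr families useful for simulating non-normal data.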

  15. Proceedings of the Third Annual Symposium on Mathematical Pattern Recognition and Image Analysis

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.

    1985-01-01

    Topics addressed include: multivariate spline method; normal mixture analysis applied to remote sensing; image data analysis; classifications in spatially correlated environments; probability density functions; graphical nonparametric methods; subpixel registration analysis; hypothesis integration in image understanding systems; rectification of satellite scanner imagery; spatial variation in remotely sensed images; smooth multidimensional interpolation; and optimal frequency domain textural edge detection filters.

  16. Noncentral Chi-Square versus Normal Distributions in Describing the Likelihood Ratio Statistic: The Univariate Case and Its Multivariate Implication

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai

    2008-01-01

    In the literature of mean and covariance structure analysis, noncentral chi-square distribution is commonly used to describe the behavior of the likelihood ratio (LR) statistic under alternative hypothesis. Due to the inaccessibility of the rather technical literature for the distribution of the LR statistic, it is widely believed that the…

  17. A spline-based regression parameter set for creating customized DARTEL MRI brain templates from infancy to old age.

    PubMed

    Wilke, Marko

    2018-02-01

    This dataset contains the regression parameters derived by analyzing segmented brain MRI images (gray matter and white matter) from a large population of healthy subjects, using a multivariate adaptive regression splines approach. A total of 1919 MRI datasets ranging in age from 1-75 years from four publicly available datasets (NIH, C-MIND, fCONN, and IXI) were segmented using the CAT12 segmentation framework, writing out gray matter and white matter images normalized using an affine-only spatial normalization approach. These images were then subjected to a six-step DARTEL procedure, employing an iterative non-linear registration approach and yielding increasingly crisp intermediate images. The resulting six datasets per tissue class were then analyzed using multivariate adaptive regression splines, using the CerebroMatic toolbox. This approach allows for flexibly modelling smoothly varying trajectories while taking into account demographic (age, gender) as well as technical (field strength, data quality) predictors. The resulting regression parameters described here can be used to generate matched DARTEL or SHOOT templates for a given population under study, from infancy to old age. The dataset and the algorithm used to generate it are publicly available at https://irc.cchmc.org/software/cerebromatic.php.
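
    Multivariate adaptive regression splines build models from hinge basis functions max(0, x - t). The sketch below, with hypothetical knots and coefficients rather than the actual CerebroMatic parameters, shows how such a piecewise-linear trajectory over age would be evaluated:

```python
def hinge(x, knot):
    """MARS-style hinge basis function: max(0, x - knot)."""
    return max(0.0, x - knot)

def spline_model(age, coeffs=(1.0, 0.8, -0.5), knots=(5.0, 18.0)):
    """Hypothetical smoothly varying trajectory: an intercept plus a
    slope change at each knot. Real models also include demographic and
    technical predictors and products of hinge terms."""
    b0, b1, b2 = coeffs
    k1, k2 = knots
    return b0 + b1 * hinge(age, k1) + b2 * hinge(age, k2)
```

    Because each basis term is zero on one side of its knot, adding terms changes the fit locally, which is what lets the trajectories bend flexibly with age.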

  18. A prospective cohort study on radiation-induced hypothyroidism: development of an NTCP model.

    PubMed

    Boomsma, Marjolein J; Bijl, Hendrik P; Christianen, Miranda E M C; Beetz, Ivo; Chouvalova, Olga; Steenbakkers, Roel J H M; van der Laan, Bernard F A M; Wolffenbuttel, Bruce H R; Oosting, Sjoukje F; Schilstra, Cornelis; Langendijk, Johannes A

    2012-11-01

    To establish a multivariate normal tissue complication probability (NTCP) model for radiation-induced hypothyroidism. The thyroid-stimulating hormone (TSH) level of 105 patients treated with (chemo-) radiation therapy for head-and-neck cancer was prospectively measured during a median follow-up of 2.5 years. Hypothyroidism was defined as elevated serum TSH with decreased or normal free thyroxin (T4). A multivariate logistic regression model with bootstrapping was used to determine the most important prognostic variables for radiation-induced hypothyroidism. Thirty-five patients (33%) developed primary hypothyroidism within 2 years after radiation therapy. An NTCP model based on 2 variables, including the mean thyroid gland dose and the thyroid gland volume, was most predictive for radiation-induced hypothyroidism. NTCP values increased with higher mean thyroid gland dose (odds ratio [OR]: 1.064/Gy) and decreased with higher thyroid gland volume (OR: 0.826/cm³). Model performance was good with an area under the curve (AUC) of 0.85. This is the first prospective study resulting in an NTCP model for radiation-induced hypothyroidism. The probability of hypothyroidism rises with increasing dose to the thyroid gland, whereas it reduces with increasing thyroid gland volume. Copyright © 2012 Elsevier Inc. All rights reserved.
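
    A two-variable logistic NTCP model of this general form can be sketched as follows. The slopes are the log odds ratios reported in the abstract (1.064 per Gy, 0.826 per cm³ of thyroid volume); the intercept is not given in the abstract, so the value below is a hypothetical placeholder, not the fitted model.

```python
import math

def ntcp(mean_dose_gy, thyroid_volume_cm3, intercept=-1.0):
    """Two-variable logistic NTCP sketch: probability = 1 / (1 + exp(-logit)).
    Slopes are log odds ratios from the abstract; the intercept is hypothetical."""
    logit = (intercept
             + math.log(1.064) * mean_dose_gy      # risk rises with mean dose
             + math.log(0.826) * thyroid_volume_cm3)  # risk falls with volume
    return 1.0 / (1.0 + math.exp(-logit))

# Illustrative evaluation: higher dose and smaller gland give higher NTCP.
print(ntcp(20.0, 10.0), ntcp(40.0, 10.0), ntcp(40.0, 20.0))
```

    Whatever the true intercept, the sign structure reproduces the paper's qualitative finding: NTCP increases with mean thyroid dose and decreases with thyroid volume.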

  19. A constrained multinomial Probit route choice model in the metro network: Formulation, estimation and application

    PubMed Central

    Zhang, Yongsheng; Wei, Heng; Zheng, Kangning

    2017-01-01

    Because metro network expansion provides more alternative routes, it is attractive to integrate the impacts of the route set and the interdependency among alternative routes on route choice probability into route choice modeling. Therefore, the formulation, estimation and application of a constrained multinomial probit (CMNP) route choice model in the metro network are carried out in this paper. The utility function is formulated as three components: the compensatory component is a function of influencing factors; the non-compensatory component measures the impacts of the route set on utility; and the error component follows a multivariate normal distribution whose covariance is structured into three parts, representing the correlation among routes, the transfer variance of a route, and the unobserved variance, respectively. Because the choice probabilities involve multidimensional integrals of the multivariate normal probability density function, the CMNP model is rewritten in hierarchical Bayes form, and a Metropolis-Hastings (M-H) sampling based Markov chain Monte Carlo (MCMC) approach is constructed to estimate all parameters. Based on Guangzhou Metro data, reliable estimation results are obtained. Furthermore, the proposed CMNP model also shows a good forecasting performance for the route choice probability calculation and a good application performance for transfer flow volume prediction. PMID:28591188
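
    Probit choice probabilities are multivariate normal integrals with no closed form, which is why simulation-based methods are used. A minimal two-route Monte Carlo sketch (hypothetical utilities and error correlation, far simpler than the paper's hierarchical Bayes estimation) is:

```python
import math
import random

def choice_probabilities(v, rho=0.5, n_draws=20000, seed=1):
    """Estimate two-route probit choice probabilities by simulation.
    v: systematic utilities of the two routes; errors are bivariate normal
    with unit variances and correlation rho (all values hypothetical)."""
    rng = random.Random(seed)
    wins = [0, 0]
    for _ in range(n_draws):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        e1 = z1
        e2 = rho * z1 + math.sqrt(1 - rho**2) * z2  # correlated error draw
        u = [v[0] + e1, v[1] + e2]                  # total utilities
        wins[u.index(max(u))] += 1                  # chosen = max utility
    return [w / n_draws for w in wins]

p = choice_probabilities([1.0, 0.5])
```

    For this two-route case the exact answer is Φ((v₁ - v₂)/σ) with σ² = 2(1 - ρ), so the simulation can be checked analytically; with more routes only simulation remains practical.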

  20. Combination of near infrared spectroscopy and chemometrics for authentication of taro flour from wheat and sago flour

    NASA Astrophysics Data System (ADS)

    Rachmawati; Rohaeti, E.; Rafi, M.

    2017-05-01

    Taro flour on the market is usually sold at a higher price than wheat and sago flour. This price difference creates an incentive for the adulteration of taro flour with wheat and sago flour, so methods for identification and authentication are needed. In this study, near infrared (NIR) spectra combined with multivariate analysis were used to identify and authenticate taro flour adulterated with wheat and sago flour. The authentication model for taro flour was developed using mixtures containing 5%, 25%, and 50% of wheat or sago flour. Before the multivariate analysis, signal preprocessing was applied to the NIR spectra, namely normalization and the standard normal variate (SNV) transformation. We used principal component analysis followed by discriminant analysis to build the identification and authentication model for taro flour. About 90.48% of the taro flour mixed with wheat flour and 85% of the taro flour mixed with sago flour were successfully classified into their groups, so the combination of NIR spectra with chemometrics can be used for the identification and authentication of taro flour adulterated with wheat and sago flour.
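
    The standard normal variate transform mentioned above simply centres and scales each spectrum individually, which removes multiplicative scatter and baseline offsets before chemometric modeling. A minimal sketch with hypothetical absorbance values:

```python
import math

def snv(spectrum):
    """Standard normal variate: per-spectrum centring and scaling so each
    spectrum has zero mean and unit standard deviation."""
    n = len(spectrum)
    mean = sum(spectrum) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in spectrum) / (n - 1))
    return [(x - mean) / sd for x in spectrum]

# Hypothetical NIR absorbances at a few wavelengths for one sample.
corrected = snv([0.42, 0.55, 0.61, 0.50, 0.47])
```

    After SNV, spectra from different samples are directly comparable in shape, which is what PCA and discriminant analysis then exploit.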

  1. The impact of maternal body mass index on external cephalic version success.

    PubMed

    Chaudhary, Shahrukh; Contag, Stephen; Yao, Ruofan

    2018-01-21

    The purpose of this study is to determine the association between body mass index (BMI) and the success of external cephalic version (ECV). This is a cross-sectional analysis of singleton live births in the USA from 2010 to 2014 using birth certificate data. Patients were assigned a BMI category according to the standard WHO classification. Comparisons of ECV success between the BMI categories were made using chi-square analysis with normal BMI as the reference group. The Cochran-Armitage test was performed to look for a trend of decreasing ECV success as BMI increased. The odds for successful ECV were estimated using multivariate logistic regression analysis, adjusting for possible confounders. A total of 51,002 patients with documented ECV were available for analysis. There was a decreased success rate for ECV as BMI increased (p < .01). Women with a BMI of 40 kg/m² or greater had a 58.5% ECV success rate; women with a normal BMI had a 65.0% ECV success rate. Multivariate analyses demonstrated a significant decrease in ECV success in women with a BMI of 40 kg/m² or greater (OR 0.621, CI 0.542-0.712). Among women with a BMI of 40 kg/m² or greater and successful ECV, 59.5% delivered vaginally. In contrast, 81.0% of women with normal BMI and successful ECV delivered vaginally. Morbidly obese women have a decreased ECV success rate as BMI increases and decreased vaginal delivery rates after successful ECV.

  2. Discrimination and prediction of the origin of Chinese and Korean soybeans using Fourier transform infrared spectrometry (FT-IR) with multivariate statistical analysis

    PubMed Central

    Lee, Byeong-Ju; Zhou, Yaoyao; Lee, Jae Soung; Shin, Byeung Kon; Seo, Jeong-Ah; Lee, Doyup; Kim, Young-Suk

    2018-01-01

    The ability to determine the origin of soybeans has become an important issue since the inclusion of this information in the labeling of agricultural food products became mandatory in South Korea in 2017. This study was carried out to construct a prediction model for discriminating Chinese and Korean soybeans using Fourier-transform infrared (FT-IR) spectroscopy and multivariate statistical analysis. The optimal prediction models for discriminating soybean samples were obtained by selecting appropriate scaling methods, normalization methods, variable influence on projection (VIP) cutoff values, and wave-number regions. The factors for constructing the optimal partial-least-squares regression (PLSR) prediction model were using second derivatives, vector normalization, unit variance scaling, and the 4000–400 cm–1 region (excluding water vapor and carbon dioxide). The PLSR model for discriminating Chinese and Korean soybean samples had the best predictability when a VIP cutoff value was not applied. When Chinese soybean samples were identified, the PLSR model with the lowest root-mean-square error of prediction was obtained using a VIP cutoff value of 1.5. The optimal PLSR prediction model for discriminating Korean soybean samples was also obtained using a VIP cutoff value of 1.5. This is the first study that has combined FT-IR spectroscopy with normalization methods, VIP cutoff values, and selected wave-number regions for discriminating Chinese and Korean soybeans. PMID:29689113

  3. A multivariate spatial mixture model for areal data: examining regional differences in standardized test scores

    PubMed Central

    Neelon, Brian; Gelfand, Alan E.; Miranda, Marie Lynn

    2013-01-01

    Researchers in the health and social sciences often wish to examine joint spatial patterns for two or more related outcomes. Examples include infant birth weight and gestational length, psychosocial and behavioral indices, and educational test scores from different cognitive domains. We propose a multivariate spatial mixture model for the joint analysis of continuous individual-level outcomes that are referenced to areal units. The responses are modeled as a finite mixture of multivariate normals, which accommodates a wide range of marginal response distributions and allows investigators to examine covariate effects within subpopulations of interest. The model has a hierarchical structure built at the individual level (i.e., individuals are nested within areal units), and thus incorporates both individual- and areal-level predictors as well as spatial random effects for each mixture component. Conditional autoregressive (CAR) priors on the random effects provide spatial smoothing and allow the shape of the multivariate distribution to vary flexibly across geographic regions. We adopt a Bayesian modeling approach and develop an efficient Markov chain Monte Carlo model fitting algorithm that relies primarily on closed-form full conditionals. We use the model to explore geographic patterns in end-of-grade math and reading test scores among school-age children in North Carolina. PMID:26401059

  4. Gaussian Mixture Models of Between-Source Variation for Likelihood Ratio Computation from Multivariate Data

    PubMed Central

    Franco-Pedroso, Javier; Ramos, Daniel; Gonzalez-Rodriguez, Joaquin

    2016-01-01

    In forensic science, trace evidence found at a crime scene and on a suspect has to be evaluated from the measurements performed on it, usually in the form of multivariate data (for example, several chemical compounds or physical characteristics). In order to assess the strength of that evidence, the likelihood ratio framework is being increasingly adopted. Several methods have been derived in order to obtain likelihood ratios directly from univariate or multivariate data by modelling both the variation appearing between observations (or features) coming from the same source (within-source variation) and that appearing between observations coming from different sources (between-source variation). In the widely used multivariate kernel likelihood ratio, the within-source distribution is assumed to be normally distributed and constant among different sources, and the between-source variation is modelled through a kernel density function (KDF). In order to better fit the observed distribution of the between-source variation, this paper presents a different approach in which a Gaussian mixture model (GMM) is used instead of a KDF. As will be shown, this approach provides better-calibrated likelihood ratios as measured by the log-likelihood-ratio cost (Cllr) in experiments performed on freely available forensic datasets involving different types of trace evidence: inks, glass fragments and car paints. PMID:26901680
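
    In one dimension the construction reduces to a ratio of a within-source normal density (similarity to the putative source) to a between-source mixture density (typicality in the population). The sketch below uses hypothetical, fixed parameters rather than densities fitted to forensic data:

```python
import math

def normal_pdf(x, mu, sigma):
    """Univariate normal density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def likelihood_ratio(x, source_mean, within_sd, gmm):
    """LR = p(x | suspect source) / p(x | population).
    gmm: list of (weight, mean, sd) components modelling between-source
    variation. All parameter values here are hypothetical."""
    numerator = normal_pdf(x, source_mean, within_sd)
    denominator = sum(w * normal_pdf(x, m, s) for w, m, s in gmm)
    return numerator / denominator

# Two-component between-source GMM standing in for a fitted population model.
population = [(0.6, 0.0, 2.0), (0.4, 5.0, 1.5)]
lr_close = likelihood_ratio(0.1, source_mean=0.0, within_sd=0.5, gmm=population)
lr_far = likelihood_ratio(4.0, source_mean=0.0, within_sd=0.5, gmm=population)
```

    LR > 1 supports the same-source proposition; LR < 1 supports a different source. Replacing the mixture with a kernel density in the denominator recovers the KDF-based approach the paper compares against.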

  5. A simple prognostic model for overall survival in metastatic renal cell carcinoma.

    PubMed

    Assi, Hazem I; Patenaude, Francois; Toumishey, Ethan; Ross, Laura; Abdelsalam, Mahmoud; Reiman, Tony

    2016-01-01

    The primary purpose of this study was to develop a simpler prognostic model to predict overall survival for patients treated for metastatic renal cell carcinoma (mRCC) by examining variables shown in the literature to be associated with survival. We conducted a retrospective analysis of patients treated for mRCC at two Canadian centres. All patients who started first-line treatment were included in the analysis. A multivariate Cox proportional hazards regression model was constructed using a stepwise procedure. Patients were assigned to risk groups depending on how many of the three risk factors from the final multivariate model they had. There were three risk factors in the final multivariate model: hemoglobin, prior nephrectomy, and time from diagnosis to treatment. Patients in the high-risk group (two or three risk factors) had a median survival of 5.9 months, while those in the intermediate-risk group (one risk factor) had a median survival of 16.2 months, and those in the low-risk group (no risk factors) had a median survival of 50.6 months. In multivariate analysis, shorter survival times were associated with hemoglobin below the lower limit of normal, absence of prior nephrectomy, and initiation of treatment within one year of diagnosis.

  6. A simple prognostic model for overall survival in metastatic renal cell carcinoma

    PubMed Central

    Assi, Hazem I.; Patenaude, Francois; Toumishey, Ethan; Ross, Laura; Abdelsalam, Mahmoud; Reiman, Tony

    2016-01-01

    Introduction: The primary purpose of this study was to develop a simpler prognostic model to predict overall survival for patients treated for metastatic renal cell carcinoma (mRCC) by examining variables shown in the literature to be associated with survival. Methods: We conducted a retrospective analysis of patients treated for mRCC at two Canadian centres. All patients who started first-line treatment were included in the analysis. A multivariate Cox proportional hazards regression model was constructed using a stepwise procedure. Patients were assigned to risk groups depending on how many of the three risk factors from the final multivariate model they had. Results: There were three risk factors in the final multivariate model: hemoglobin, prior nephrectomy, and time from diagnosis to treatment. Patients in the high-risk group (two or three risk factors) had a median survival of 5.9 months, while those in the intermediate-risk group (one risk factor) had a median survival of 16.2 months, and those in the low-risk group (no risk factors) had a median survival of 50.6 months. Conclusions: In multivariate analysis, shorter survival times were associated with hemoglobin below the lower limit of normal, absence of prior nephrectomy, and initiation of treatment within one year of diagnosis. PMID:27217858

  7. The Reasoning Methods and Reasoning Ability in Normal and Mentally Retarded Girls and the Reasoning Ability of Normal and Mentally Retarded Boys and Girls.

    ERIC Educational Resources Information Center

    CAPOBIANCO, RUDOLPH J.; AND OTHERS

    A study was made to establish and analyze the methods of solving inductive reasoning problems by mentally retarded children. The major objectives were--(1) to explore and describe reasoning in mentally retarded children, (2) to compare their methods with those utilized by normal children of approximately the same mental age, (3) to explore the…

  8. Esophageal wall dose-surface maps do not improve the predictive performance of a multivariable NTCP model for acute esophageal toxicity in advanced stage NSCLC patients treated with intensity-modulated (chemo-)radiotherapy.

    PubMed

    Dankers, Frank; Wijsman, Robin; Troost, Esther G C; Monshouwer, René; Bussink, Johan; Hoffmann, Aswin L

    2017-05-07

    In our previous work, a multivariable normal-tissue complication probability (NTCP) model for acute esophageal toxicity (AET) Grade ⩾2 after highly conformal (chemo-)radiotherapy for non-small cell lung cancer (NSCLC) was developed using multivariable logistic regression analysis incorporating clinical parameters and mean esophageal dose (MED). Since the esophagus is a tubular organ, spatial information of the esophageal wall dose distribution may be important in predicting AET. We investigated whether the incorporation of esophageal wall dose-surface data with spatial information improves the predictive power of our established NTCP model. For 149 NSCLC patients treated with highly conformal radiation therapy, esophageal wall dose-surface histograms (DSHs) and polar dose-surface maps (DSMs) were generated. DSMs were used to generate new DSHs and dose-length histograms that incorporate spatial information of the dose-surface distribution. From these histograms, dose parameters were derived, and univariate logistic regression analysis showed that they correlated significantly with AET. Following our previous work, new multivariable NTCP models were developed using the most significant dose histogram parameters based on univariate analysis (19 in total). However, the 19 new models incorporating esophageal wall dose-surface data with spatial information did not show improved predictive performance (area under the curve, AUC range 0.79-0.84) over the established multivariable NTCP model based on conventional dose-volume data (AUC = 0.84). For prediction of AET, based on the proposed multivariable statistical approach, spatial information of the esophageal wall dose distribution is of no added value and it is sufficient to only consider MED as a predictive dosimetric parameter.
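
    Model performance here is summarized by the AUC, which equals the probability that a randomly chosen patient with AET receives a higher predicted score than a randomly chosen patient without it. A minimal rank-based sketch with hypothetical scores:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve computed as the fraction of positive/negative
    pairs in which the positive case scores higher (ties count one half)."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical predicted toxicity probabilities for patients with/without AET.
example_auc = auc([0.8, 0.7, 0.6, 0.55], [0.5, 0.4, 0.6, 0.3])
```

    An AUC of 0.5 means no discrimination and 1.0 perfect ranking, so the reported 0.84 indicates good separation of AET and non-AET patients.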

  9. Esophageal wall dose-surface maps do not improve the predictive performance of a multivariable NTCP model for acute esophageal toxicity in advanced stage NSCLC patients treated with intensity-modulated (chemo-)radiotherapy

    NASA Astrophysics Data System (ADS)

    Dankers, Frank; Wijsman, Robin; Troost, Esther G. C.; Monshouwer, René; Bussink, Johan; Hoffmann, Aswin L.

    2017-05-01

In our previous work, a multivariable normal-tissue complication probability (NTCP) model for acute esophageal toxicity (AET) Grade ≥2 after highly conformal (chemo-)radiotherapy for non-small cell lung cancer (NSCLC) was developed using multivariable logistic regression analysis incorporating clinical parameters and mean esophageal dose (MED). Since the esophagus is a tubular organ, spatial information on the esophageal wall dose distribution may be important in predicting AET. We investigated whether incorporating esophageal wall dose-surface data with spatial information improves the predictive power of our established NTCP model. For 149 NSCLC patients treated with highly conformal radiation therapy, esophageal wall dose-surface histograms (DSHs) and polar dose-surface maps (DSMs) were generated. DSMs were used to generate new DSHs and dose-length histograms that incorporate spatial information of the dose-surface distribution. Dose parameters were derived from these histograms, and univariate logistic regression analysis showed that they correlated significantly with AET. Following our previous work, new multivariable NTCP models were developed using the most significant dose histogram parameters from the univariate analysis (19 in total). However, the 19 new models incorporating esophageal wall dose-surface data with spatial information did not show improved predictive performance (area under the curve, AUC range 0.79-0.84) over the established multivariable NTCP model based on conventional dose-volume data (AUC = 0.84). For prediction of AET, based on the proposed multivariable statistical approach, spatial information on the esophageal wall dose distribution is of no added value and it is sufficient to consider only MED as a predictive dosimetric parameter.

  10. A Comparison of Normal and Elliptical Estimation Methods in Structural Equation Models.

    ERIC Educational Resources Information Center

    Schumacker, Randall E.; Cheevatanarak, Suchittra

    Monte Carlo simulation compared chi-square statistics, parameter estimates, and root mean square error of approximation values using normal and elliptical estimation methods. Three research conditions were imposed on the simulated data: sample size, population contamination percent, and kurtosis. A Bentler-Weeks structural model established the…

  11. A limit to the X-ray luminosity of nearby normal galaxies

    NASA Technical Reports Server (NTRS)

    Worrall, D. M.; Marshall, F. E.; Boldt, E. A.

    1979-01-01

Emission is studied at luminosities lower than those for which individual discrete sources can be studied. It is shown that normal galaxies do not appear to provide the numerous low-luminosity X-ray sources which could make up the 2-60 keV diffuse background. Indeed, upper limits suggest luminosities comparable with, or a little less than, that of the Galaxy. This is consistent with the fact that the average optical luminosity of the sample galaxies within approximately 20 Mpc is slightly lower than that of the Galaxy. An upper limit of approximately 1% of the diffuse background from such sources is derived.

  12. Asymptotic confidence intervals for the Pearson correlation via skewness and kurtosis.

    PubMed

    Bishara, Anthony J; Li, Jiexiang; Nash, Thomas

    2018-02-01

When bivariate normality is violated, the default confidence interval of the Pearson correlation can be inaccurate. Two new methods were developed based on the asymptotic sampling distribution of Fisher's z' under the general case where bivariate normality need not be assumed. In Monte Carlo simulations, the most successful of these methods relied on the Vale and Maurelli (1983, Psychometrika, 48, 465) family to approximate a distribution via the marginal skewness and kurtosis of the sample data. In Simulation 1, this method provided more accurate confidence intervals of the correlation in non-normal data, at least as compared to no adjustment of the Fisher z' interval, or to adjustment via the sample joint moments. In Simulation 2, this approximate distribution method performed favourably relative to common non-parametric bootstrap methods, but its performance was mixed relative to an observed imposed bootstrap and two other robust methods (PM1 and HC4). No method was completely satisfactory. An advantage of the approximate distribution method, though, is that it can be implemented even without access to raw data if sample skewness and kurtosis are reported, making the method particularly useful for meta-analysis. Supporting information includes R code. © 2017 The British Psychological Society.
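The baseline against which the paper's adjusted intervals are compared, the default Fisher z' interval under assumed bivariate normality, is standard and can be written in a few lines; the skewness/kurtosis adjustment itself is the paper's contribution and is not reproduced here.

```python
import math

def pearson_fisher_ci(r, n):
    """Default 95% confidence interval for a Pearson correlation via
    Fisher's z' transform. Assumes bivariate normality; r is the sample
    correlation, n the sample size. Returns (lower, upper) on the r scale."""
    z = math.atanh(r)                 # Fisher's z' transform of r
    se = 1.0 / math.sqrt(n - 3)       # asymptotic SE under bivariate normality
    zcrit = 1.959963984540054         # 97.5th percentile of the standard normal
    lo, hi = z - zcrit * se, z + zcrit * se
    return math.tanh(lo), math.tanh(hi)  # back-transform to the r scale

lo, hi = pearson_fisher_ci(0.5, 100)
print(round(lo, 3), round(hi, 3))  # approx 0.337, 0.634
```

The adjusted methods replace the fixed standard error 1/sqrt(n-3) with one that depends on the marginal skewness and kurtosis, which is why they remain usable in meta-analysis when only those summary moments are reported.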

  13. The NLS-Based Nonlinear Grey Multivariate Model for Forecasting Pollutant Emissions in China.

    PubMed

    Pei, Ling-Ling; Li, Qin; Wang, Zheng-Xin

    2018-03-08

The relationship between pollutant discharge and economic growth has been a major research focus in environmental economics. To accurately estimate the nonlinear change law of China's pollutant discharge with economic growth, this study establishes a transformed nonlinear grey multivariable model, TNGM(1,N), based on the nonlinear least squares (NLS) method. The Gauss-Seidel iterative algorithm was used to solve for the parameters of the TNGM(1,N) model based on the NLS basic principle. This algorithm improves the precision of the model by continuous iteration, constantly approaching the optimal regression coefficients of the nonlinear model. In our empirical analysis, the traditional grey multivariate model GM(1,N) and the NLS-based TNGM(1,N) model were adopted to forecast and analyze the relationship among wastewater discharge per capita (WDPC), per capita emissions of SO₂ and dust, and GDP per capita in China during the period 1996-2015. Results indicated that the NLS algorithm effectively helps the grey multivariable model identify the nonlinear relationship between pollutant discharge and economic growth. The NLS-based TNGM(1,N) model achieves greater precision when forecasting WDPC and per capita SO₂ and dust emissions than the traditional GM(1,N) model; WDPC shows a growing tendency aligned with the growth of GDP, while per capita emissions of SO₂ and dust decline accordingly.

  14. Decorin and biglycan of normal and pathologic human corneas

    NASA Technical Reports Server (NTRS)

    Funderburgh, J. L.; Hevelone, N. D.; Roth, M. R.; Funderburgh, M. L.; Rodrigues, M. R.; Nirankari, V. S.; Conrad, G. W.

    1998-01-01

    PURPOSE: Corneas with scars and certain chronic pathologic conditions contain highly sulfated dermatan sulfate, but little is known of the core proteins that carry these atypical glycosaminoglycans. In this study the proteoglycan proteins attached to dermatan sulfate in normal and pathologic human corneas were examined to identify primary genes involved in the pathobiology of corneal scarring. METHODS: Proteoglycans from human corneas with chronic edema, bullous keratopathy, and keratoconus and from normal corneas were analyzed using sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE), quantitative immunoblotting, and immunohistology with peptide antibodies to decorin and biglycan. RESULTS: Proteoglycans from pathologic corneas exhibit increased size heterogeneity and binding of the cationic dye alcian blue compared with those in normal corneas. Decorin and biglycan extracted from normal and diseased corneas exhibited similar molecular size distribution patterns. In approximately half of the pathologic corneas, the level of biglycan was elevated an average of seven times above normal, and decorin was elevated approximately three times above normal. The increases were associated with highly charged molecular forms of decorin and biglycan, indicating modification of the proteins with dermatan sulfate chains of increased sulfation. Immunostaining of corneal sections showed an abnormal stromal localization of biglycan in pathologic corneas. CONCLUSIONS: The increased dermatan sulfate associated with chronic corneal pathologic conditions results from stromal accumulation of decorin and particularly of biglycan in the affected corneas. These proteins bear dermatan sulfate chains with increased sulfation compared with normal stromal proteoglycans.

  15. Prognosis of chronic lymphocytic leukemia from infrared spectra of lymphocytes

    NASA Astrophysics Data System (ADS)

    Schultz, Christian P.; Liu, Kan-Zhi; Johnston, James B.; Mantsch, Henry H.

    1997-06-01

    Peripheral mononuclear cells obtained from blood of normal individuals and from patients with chronic lymphocytic leukemia (CLL) were investigated by infrared spectroscopy and multivariate statistical analysis. Not only are the spectra of CLL cells different from those of normal cells, but hierarchical clustering also separated the CLL cells into a number of subclusters, based on their different DNA content, a fact which may provide a useful diagnostic tool for staging (progression of the disease) and multiple clone detection. Moreover, there is evidence for a correlation between the increased amount of DNA in the CLL cells and the in-vivo doubling time of the lymphocytes in a given patient.

  16. Pretest probability of a normal echocardiography: validation of a simple and practical algorithm for routine use.

    PubMed

    Hammoudi, Nadjib; Duprey, Matthieu; Régnier, Philippe; Achkar, Marc; Boubrit, Lila; Preud'homme, Gisèle; Healy-Brucker, Aude; Vignalou, Jean-Baptiste; Pousset, Françoise; Komajda, Michel; Isnard, Richard

    2014-02-01

Management of increased referrals for transthoracic echocardiography (TTE) examinations is a challenge. Patients with normal TTE examinations take less time to examine than those with heart abnormalities. A reliable method for assessing the pretest probability of a normal TTE may optimize management of requests. To establish and validate, based on requests for examinations, a simple algorithm for defining the pretest probability of a normal TTE. In a retrospective phase, factors associated with normality were investigated and an algorithm was designed. In a prospective phase, patients were classified in accordance with the algorithm as being at high or low probability of having a normal TTE. In the retrospective phase, 42% of 618 examinations were normal. In multivariable analysis, age and absence of cardiac history were associated with normality. Low pretest probability of a normal TTE was defined by known cardiac history or, in case of doubt about cardiac history, by age > 70 years. In the prospective phase, the prevalences of normality were 72% and 25% in the high (n = 167) and low (n = 241) pretest probability of normality groups, respectively. The mean duration of normal examinations was significantly shorter than that of abnormal examinations (13.8 ± 9.2 min vs 17.6 ± 11.1 min; P = 0.0003). A simple algorithm can classify patients referred for TTE as being at high or low pretest probability of having a normal examination. This algorithm might help to optimize management of requests in routine practice. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
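The algorithm the abstract validates is simple enough to state directly as code; the function below is a sketch of that decision rule (the argument names are illustrative, not from the paper).

```python
def pretest_probability_normal_tte(known_cardiac_history: bool,
                                   history_uncertain: bool = False,
                                   age: int = 0) -> str:
    """Classify a TTE request as having 'high' or 'low' pretest probability
    of a normal examination. Per the rule described in the abstract:
    low probability = known cardiac history, or, when the history is in
    doubt, age > 70 years; otherwise high probability of normality."""
    if known_cardiac_history:
        return "low"
    if history_uncertain and age > 70:
        return "low"
    return "high"

print(pretest_probability_normal_tte(True))                                   # low
print(pretest_probability_normal_tte(False, history_uncertain=True, age=75))  # low
print(pretest_probability_normal_tte(False))                                  # high
```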

  17. Polynomial probability distribution estimation using the method of moments

    PubMed Central

Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper

    2017-01-01

We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. To show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, and Weibull, as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram–Charlier type. It is concluded that this is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular, this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation. PMID:28394949

  18. Polynomial probability distribution estimation using the method of moments.

    PubMed

    Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper

    2017-01-01

We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. To show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, and Weibull, as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular, this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.
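A minimal version of the moment-matching idea can be shown on the unit interval: approximate a PDF by a polynomial whose raw moments (including normalization) match the target's. Restricting to [0, 1] and solving the resulting Hilbert-type linear system are simplifying assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def polynomial_pdf_coeffs(raw_moments):
    """Coefficients a_0..a_N of p(x) = sum_k a_k x^k on [0, 1] such that
    int_0^1 x^j p(x) dx = raw_moments[j] for j = 0..N, with
    raw_moments[0] = 1 enforcing normalization. Since
    int_0^1 x^(j+k) dx = 1/(j+k+1), the constraints form a linear
    (Hilbert-matrix) system in the coefficients."""
    m = np.asarray(raw_moments, dtype=float)
    N = len(m)
    H = 1.0 / (np.arange(N)[:, None] + np.arange(N)[None, :] + 1.0)
    return np.linalg.solve(H, m)

# Sanity check: the uniform distribution on [0, 1] has raw moments
# 1/(j+1), so the recovered polynomial should be p(x) = 1.
coeffs = polynomial_pdf_coeffs([1.0, 1.0 / 2, 1.0 / 3, 1.0 / 4])
print(np.round(coeffs, 6))  # ~[1, 0, 0, 0]
```

The convolution remark in the abstract follows directly: once both densities are polynomials, their convolution is an integral of polynomial expressions with a closed form. Note that Hilbert matrices become ill-conditioned as N grows, so high-degree fits need more care than this sketch shows.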

  19. Mode instability in one-dimensional anharmonic lattices: Variational equation approach

    NASA Astrophysics Data System (ADS)

    Yoshimura, K.

    1999-03-01

The stability of normal mode oscillations has been studied in detail under the single-mode excitation condition for the Fermi-Pasta-Ulam-β lattice. Numerical experiments indicate that the mode stability depends strongly on k/N, where k is the wave number of the initially excited mode and N is the number of degrees of freedom in the system. It has been found that this feature does not change when N increases. We propose an average variational equation (an approximate version of the variational equation) as a theoretical tool to facilitate a linear stability analysis. It is shown that this strong k/N dependence of the mode stability can be explained from the viewpoint of the linear stability of the relevant orbits. We introduce a low-dimensional approximation of the average variational equation, which approximately describes the time evolution of variations in four normal mode amplitudes. The linear stability analysis based on this four-mode approximation demonstrates that the parametric instability mechanism plays a crucial role in the strong k/N dependence of the mode stability.

  20. Patterns of brain structural connectivity differentiate normal weight from overweight subjects

    PubMed Central

    Gupta, Arpana; Mayer, Emeran A.; Sanmiguel, Claudia P.; Van Horn, John D.; Woodworth, Davis; Ellingson, Benjamin M.; Fling, Connor; Love, Aubrey; Tillisch, Kirsten; Labus, Jennifer S.

    2015-01-01

    Background Alterations in the hedonic component of ingestive behaviors have been implicated as a possible risk factor in the pathophysiology of overweight and obese individuals. Neuroimaging evidence from individuals with increasing body mass index suggests structural, functional, and neurochemical alterations in the extended reward network and associated networks. Aim To apply a multivariate pattern analysis to distinguish normal weight and overweight subjects based on gray and white-matter measurements. Methods Structural images (N = 120, overweight N = 63) and diffusion tensor images (DTI) (N = 60, overweight N = 30) were obtained from healthy control subjects. For the total sample the mean age for the overweight group (females = 32, males = 31) was 28.77 years (SD = 9.76) and for the normal weight group (females = 32, males = 25) was 27.13 years (SD = 9.62). Regional segmentation and parcellation of the brain images was performed using Freesurfer. Deterministic tractography was performed to measure the normalized fiber density between regions. A multivariate pattern analysis approach was used to examine whether brain measures can distinguish overweight from normal weight individuals. Results 1. White-matter classification: The classification algorithm, based on 2 signatures with 17 regional connections, achieved 97% accuracy in discriminating overweight individuals from normal weight individuals. For both brain signatures, greater connectivity as indexed by increased fiber density was observed in overweight compared to normal weight between the reward network regions and regions of the executive control, emotional arousal, and somatosensory networks. In contrast, the opposite pattern (decreased fiber density) was found between ventromedial prefrontal cortex and the anterior insula, and between thalamus and executive control network regions. 2. 
Gray-matter classification: The classification algorithm, based on 2 signatures with 42 morphological features, achieved 69% accuracy in discriminating overweight from normal weight. In both brain signatures regions of the reward, salience, executive control and emotional arousal networks were associated with lower morphological values in overweight individuals compared to normal weight individuals, while the opposite pattern was seen for regions of the somatosensory network. Conclusions 1. An increased BMI (i.e., overweight subjects) is associated with distinct changes in gray-matter and fiber density of the brain. 2. Classification algorithms based on white-matter connectivity involving regions of the reward and associated networks can identify specific targets for mechanistic studies and future drug development aimed at abnormal ingestive behavior and in overweight/obesity. PMID:25737959

  1. High-frequency Born synthetic seismograms based on coupled normal modes

    USGS Publications Warehouse

    Pollitz, Fred F.

    2011-01-01

    High-frequency and full waveform synthetic seismograms on a 3-D laterally heterogeneous earth model are simulated using the theory of coupled normal modes. The set of coupled integral equations that describe the 3-D response are simplified into a set of uncoupled integral equations by using the Born approximation to calculate scattered wavefields and the pure-path approximation to modulate the phase of incident and scattered wavefields. This depends upon a decomposition of the aspherical structure into smooth and rough components. The uncoupled integral equations are discretized and solved in the frequency domain, and time domain results are obtained by inverse Fourier transform. Examples show the utility of the normal mode approach to synthesize the seismic wavefields resulting from interaction with a combination of rough and smooth structural heterogeneities. This approach is applied to an ∼4 Hz shallow crustal wave propagation around the site of the San Andreas Fault Observatory at Depth (SAFOD).

  2. Kernels, Degrees of Freedom, and Power Properties of Quadratic Distance Goodness-of-Fit Tests

    PubMed Central

    Lindsay, Bruce G.; Markatou, Marianthi; Ray, Surajit

    2014-01-01

    In this article, we study the power properties of quadratic-distance-based goodness-of-fit tests. First, we introduce the concept of a root kernel and discuss the considerations that enter the selection of this kernel. We derive an easy to use normal approximation to the power of quadratic distance goodness-of-fit tests and base the construction of a noncentrality index, an analogue of the traditional noncentrality parameter, on it. This leads to a method akin to the Neyman-Pearson lemma for constructing optimal kernels for specific alternatives. We then introduce a midpower analysis as a device for choosing optimal degrees of freedom for a family of alternatives of interest. Finally, we introduce a new diffusion kernel, called the Pearson-normal kernel, and study the extent to which the normal approximation to the power of tests based on this kernel is valid. Supplementary materials for this article are available online. PMID:24764609

  3. Prediction of the Main Engine Power of a New Container Ship at the Preliminary Design Stage

    NASA Astrophysics Data System (ADS)

    Cepowski, Tomasz

    2017-06-01

The paper presents mathematical relationships that allow the main engine power of new container ships to be estimated, based on data concerning vessels built in 2005-2015. The presented approximations allow the engine power to be estimated from the length between perpendiculars and the number of containers the ship will carry. The approximations were developed using simple linear regression and multivariate linear regression analysis. The presented relations have practical application for estimating the container ship engine power needed in the preliminary parametric design of the ship. The analysis shows that multiple linear regression predicts the main engine power of a container ship more accurately than simple linear regression.
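The multivariate regression step described above amounts to an ordinary least-squares fit of power on length between perpendiculars and container capacity. The sketch below shows the mechanics; the vessel records are invented placeholders, not the 2005-2015 dataset.

```python
import numpy as np

# Hypothetical container-ship records: length between perpendiculars (m),
# container capacity (TEU), and installed main engine power (kW).
lpp = np.array([180.0, 210.0, 250.0, 290.0, 330.0, 365.0])
teu = np.array([1800.0, 2800.0, 4500.0, 6500.0, 9000.0, 13000.0])
power = np.array([16000.0, 21000.0, 30000.0, 41000.0, 55000.0, 70000.0])

# Multivariate linear model: power ~ b0 + b1*Lpp + b2*TEU, via least squares.
X = np.column_stack([np.ones(len(lpp)), lpp, teu])
coef, *_ = np.linalg.lstsq(X, power, rcond=None)

# Estimate the engine power of a hypothetical new design at the
# preliminary parametric design stage.
pred = float(np.array([1.0, 270.0, 5500.0]) @ coef)
print(round(pred))
```

Simple linear regression would use only one of the two columns; with both predictors the fit can absorb the fact that power per TEU falls as ships grow, which is one plausible reason the multivariate model is reported as more accurate.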

  4. Facet orientation in the thoracolumbar spine: three-dimensional anatomic and biomechanical analysis.

    PubMed

    Masharawi, Youssef; Rothschild, Bruce; Dar, Gali; Peleg, Smadar; Robinson, Dror; Been, Ella; Hershkovitz, Israel

    2004-08-15

Thoracolumbar facet orientations were measured and analyzed. To establish a comprehensive database for facet orientation in the thoracolumbar vertebrae and to determine the normal human condition. Most studies on facet orientation have based their conclusions on two-dimensional measurements, in small samples or isolated vertebrae. The amount of normal asymmetry in facet orientation is poorly addressed. Transverse and longitudinal facet angles were measured directly from 240 human vertebral columns (males/females, blacks/whites). The specimens' osteologic material is part of the Hamann-Todd Osteological Collection housed at the Cleveland Museum of Natural History (Cleveland, OH). A total of 4,080 vertebrae (T1-L5) from the vertebral columns of individuals 20 to 80 years of age were measured, using a Microscribe three-dimensional apparatus (Immersion Co., San Jose, CA). Data were recorded directly on computer software. Statistical analysis included paired t tests and analysis of variance. RESULTS: Facet orientation is independent of gender, age, and ethnic group. Asymmetry in facet orientation is found in the thorax. All thoracolumbar facets are positioned in an oblique plane. In the transverse plane, all facets from T1 to T11 are positioned with an anterior inclination of approximately 25 degrees to 30 degrees from the frontal plane. The facets of T12-L2 are oriented closer to the midsagittal plane of the vertebral body (mean range, 25.89 degrees-33.87 degrees), while the facets of L3-L5 are oriented away from that plane (mean range, 40.40 degrees-56.30 degrees). Facet transverse orientation at the thoracolumbar junction is highly variable (approximately 80% with approximately 101 degrees and approximately 20% with 35 degrees). All facets are oriented more vertically from T1 (approximately 150 degrees) to L5 (approximately 170 degrees). The facet sagittal orientations of the lumbar zygapophyseal joints are not equivalent. CONCLUSIONS: Asymmetry in facet orientation is a normal characteristic in the thorax.

  5. Improving the Performance of Two-Stage Gas Guns By Adding a Diaphragm in the Pump Tube

    NASA Technical Reports Server (NTRS)

    Bogdanoff, D. W.; Miller, Robert J.

    1995-01-01

Herein, we study the technique of improving gun performance by installing a diaphragm in the pump tube of the gun. A CFD study is carried out for the 0.28 in. gun in the Hypervelocity Free Flight Radiation (HFF RAD) range at the NASA Ames Research Center. The normal, full-length pump tube is studied as well as two pump tubes of reduced length (approximately 75% and approximately 33% of the normal length). Significant improvements in performance are calculated to be gained for the reduced-length pump tubes upon the addition of the diaphragm. These improvements take the form of reductions in maximum pressures in the pump tube and at the projectile base of approximately 20% while maintaining the projectile muzzle velocity, or of increases in muzzle velocity of approximately 0.5 km/sec without increasing the maximum pressures in the gun. Also, it is found that both guns with reduced pump tube length (with diaphragms) could match the performance of the gun with the full-length pump tube without a diaphragm, whereas the guns with reduced pump tube lengths without diaphragms could not. A five-shot experimental investigation of the pump tube diaphragm technique is carried out for the gun with a pump tube length of 75% of normal. The CFD predictions of increased muzzle velocity are borne out by the experimental data: modest but useful muzzle velocity increases (2.5-6%) are obtained upon the installation of a diaphragm, compared to a benchmark shot without a diaphragm.

  6. Exercise-induced muscle glucose uptake in mice with graded, muscle-specific GLUT-4 deletion

    PubMed Central

    Howlett, Kirsten F; Andrikopoulos, Sofianos; Proietto, Joseph; Hargreaves, Mark

    2013-01-01

    To investigate the importance of the glucose transporter GLUT-4 for muscle glucose uptake during exercise, transgenic mice with skeletal muscle GLUT-4 expression approximately 30–60% of normal (CON) and approximately 5–10% of normal (KO) were generated using the Cre/Lox system and compared with wild-type (WT) mice during approximately 40 min of treadmill running (KO: 37.7 ± 1.3 min; WT: 40 min; CON: 40 min, P = 0.18). In WT and CON animals, exercise resulted in an overall increase in muscle glucose uptake. More specifically, glucose uptake was increased in red gastrocnemius of WT mice and in the soleus and red gastrocnemius of CON mice. In contrast, the exercise-induced increase in muscle glucose uptake in all muscles was completely abolished in KO mice. Muscle glucose uptake increased during exercise in both red and white quadriceps of WT mice, while the small increases in CON mice were not statistically significant. In KO mice, there was no change at all in quadriceps muscle glucose uptake. No differences in muscle glycogen use during exercise were observed between any of the groups. However, there was a significant increase in plasma glucose levels after exercise in KO mice. The results of this study demonstrated that a reduction in skeletal muscle GLUT-4 expression to approximately 10% of normal levels completely abolished the exercise-induced increase in muscle glucose uptake. PMID:24303141

  7. NIMROD: a program for inference via a normal approximation of the posterior in models with random effects based on ordinary differential equations.

    PubMed

    Prague, Mélanie; Commenges, Daniel; Guedj, Jérémie; Drylewicz, Julia; Thiébaut, Rodolphe

    2013-08-01

Models based on ordinary differential equations (ODE) are widespread tools for describing dynamical systems. In biomedical sciences, data from each subject can be sparse, making it difficult to estimate individual parameters precisely by standard non-linear regression, but information can often be gained from between-subjects variability. This makes mixed-effects models a natural choice for estimating population parameters. Although the maximum likelihood approach is a valuable option, identifiability issues favour Bayesian approaches, which can incorporate prior knowledge in a flexible way. However, the combination of difficulties coming from the ODE system and from the presence of random effects raises a major numerical challenge. Computations can be simplified by making a normal approximation of the posterior to find the maximum of the posterior distribution (MAP). Here we present the NIMROD program (normal approximation inference in models with random effects based on ordinary differential equations) devoted to MAP estimation in ODE models. We describe the specific implemented features, such as convergence criteria and an approximation of the leave-one-out cross-validation to assess the model's quality of fit. First, in pharmacokinetics models, we evaluate the properties of this algorithm and compare it with the FOCE and MCMC algorithms in simulations. Then, we illustrate the use of NIMROD on Amprenavir pharmacokinetics data from the PUZZLE clinical trial in HIV-infected patients. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
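The core numerical trick (approximating the posterior by a normal centred at the MAP, with variance taken from the curvature of the log-posterior there) can be shown in one dimension. The Beta-shaped posterior below is just a convenient test case with a known mode, not NIMROD's ODE setting.

```python
import math

def laplace_approximation(log_post, d2_log_post, theta0, steps=100):
    """Normal (Laplace) approximation of a 1-D posterior: Newton ascent
    to the MAP, then variance = -1 / (second derivative of the
    log-posterior at the MAP)."""
    theta = theta0
    h = 1e-5
    for _ in range(steps):
        # First derivative by central differences; Newton step uses the
        # supplied analytic second derivative.
        grad = (log_post(theta + h) - log_post(theta - h)) / (2 * h)
        theta -= grad / d2_log_post(theta)
    return theta, -1.0 / d2_log_post(theta)

# Test case: a Beta(a, b)-shaped log-posterior on (0, 1), whose MAP is
# known analytically to be (a - 1) / (a + b - 2).
a, b = 5.0, 3.0
lp = lambda t: (a - 1) * math.log(t) + (b - 1) * math.log(1 - t)
d2 = lambda t: -(a - 1) / t**2 - (b - 1) / (1 - t) ** 2
mode, var = laplace_approximation(lp, d2, 0.5)
print(round(mode, 4))  # analytic MAP: (a-1)/(a+b-2) = 0.6667
```

In the mixed-effects ODE setting the same idea applies with a gradient vector and Hessian matrix of the log-posterior, which is where the numerical challenge the abstract mentions comes from.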

  8. Multivariate non-normally distributed random variables in climate research - introduction to the copula approach

    NASA Astrophysics Data System (ADS)

    Schölzel, C.; Friederichs, P.

    2008-10-01

Probability distributions of multivariate random variables are generally more complex than their univariate counterparts, owing to possible nonlinear dependence between the random variables. One approach to this problem is the use of copulas, which have become popular over recent years, especially in fields like econometrics, finance, risk management, and insurance. Since this newly emerging field includes various practices, a controversial discussion, and a vast field of literature, it is difficult to get an overview. The aim of this paper is therefore to provide a brief overview of copulas for application in meteorology and climate research. We examine the advantages and disadvantages compared to alternative approaches such as mixture models, summarize the current problem of goodness-of-fit (GOF) tests for copulas, and discuss the connection with multivariate extremes. An application to station data shows the simplicity and the capabilities, as well as the limitations, of this approach. Observations of daily precipitation and temperature are fitted to a bivariate model and demonstrate that copulas are a valuable complement to the commonly used methods.
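A bivariate Gaussian copula of the kind that could couple two non-normal marginals (say, precipitation and temperature) can be sampled in three steps: draw correlated normals, push them through the normal CDF to get dependent uniforms, then apply inverse marginal CDFs. The marginals and the correlation below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

def std_normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Step 1: correlated standard normals -- the Gaussian copula's dependence.
rho = 0.7
z1 = rng.standard_normal(10000)
z2 = rho * z1 + sqrt(1.0 - rho**2) * rng.standard_normal(10000)

# Step 2: map to uniforms with the normal CDF; u1, u2 carry the dependence
# structure but have uniform marginals.
u1 = np.vectorize(std_normal_cdf)(z1)
u2 = np.vectorize(std_normal_cdf)(z2)

# Step 3: inverse marginal CDFs, e.g. exponential "precipitation" and
# Gumbel "temperature" (both purely illustrative choices).
precip = -np.log(1.0 - u1)                # Exp(1) inverse CDF
temp = 15.0 - 5.0 * np.log(-np.log(u2))   # Gumbel(15, 5) inverse CDF

print(round(float(np.corrcoef(precip, temp)[0, 1]), 2))
```

The point of the construction is exactly the separation the abstract describes: the copula (steps 1-2) models the dependence, while the marginals (step 3) can be chosen freely to fit each variable.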

  9. Multivariate probability distribution for sewer system vulnerability assessment under data-limited conditions.

    PubMed

    Del Giudice, G; Padulano, R; Siciliano, D

    2016-01-01

The lack of geometrical and hydraulic information about sewer networks often excludes the adoption of in-depth modeling tools to obtain prioritization strategies for funds management. The present paper describes a novel statistical procedure for defining a prioritization scheme for preventive maintenance strategies based on a small sample of failure data collected by the Sewer Office of the Municipality of Naples (IT). Novelties include, among others, treating sewer parameters as continuous statistical variables and accounting for their interdependences. After a statistical analysis of maintenance interventions, the most important available factors affecting the process are selected and their mutual correlations identified. Then, after a Box-Cox transformation of the original variables, a methodology is provided for the evaluation of a vulnerability map of the sewer network by adopting a joint multivariate normal distribution with different parameter sets. The goodness-of-fit is finally tested for each distribution by means of a multivariate plotting position. The developed methodology is expected to assist municipal engineers in identifying critical sewers and prioritizing sewer inspections in order to fulfill rehabilitation requirements.
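The statistical core of such a procedure (Box-Cox transform the skewed sewer parameters toward normality, fit a joint multivariate normal, then rank sewers by how their parameter combination sits in that distribution) can be sketched as below. The attribute names, data, and the use of Mahalanobis distance as the ranking score are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(2)

def box_cox(x, lam):
    """One-parameter Box-Cox transform (requires x > 0)."""
    return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

# Hypothetical sewer attributes: age (years), diameter (m), slope (permille).
n = 200
data = np.column_stack([
    rng.gamma(4.0, 10.0, n),       # age (skewed, as raw data often is)
    rng.lognormal(-0.5, 0.4, n),   # diameter
    rng.gamma(2.0, 1.5, n),        # slope
])

# Box-Cox each variable toward normality (illustrative exponents), then
# fit the joint multivariate normal via sample mean and covariance,
# keeping the interdependences between parameters.
lams = [0.3, 0.0, 0.5]
t = np.column_stack([box_cox(data[:, j], lams[j]) for j in range(3)])
mu = t.mean(axis=0)
cov = np.cov(t, rowvar=False)

# Mahalanobis distance as a prioritization score: sewers whose joint
# parameter combination is most atypical come first for inspection.
d = t - mu
maha = np.einsum("ij,jk,ik->i", d, np.linalg.inv(cov), d)
priority = np.argsort(-maha)       # most atypical sewers first
print(priority[:5])
```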

  10. Relationships between coping style and PAI profiles in a community sample.

    PubMed

    Deisinger, J A; Cassisi, J E; Whitaker, S L

    1996-05-01

    Relationships between coping style and psychological functioning were examined in a heterogeneous community sample (N = 168). Psychological functioning was categorized with the Personality Assessment Inventory (PAI; Morey, 1991). Subjects were assigned to PAI configural profile clusters, using T-scores from PAI clinical scales. Three PAI clusters were prominent in this sample: normal, anxious, and eccentric. Multivariate analysis of covariance revealed that these clusters differed significantly in coping style, as measured by the dispositional format of the COPE Inventory (Carver, Scheier, & Weintraub, 1989). Normals coped through avoidance significantly less than anxious or eccentric subjects. Also, normals engaged in seeking social support and venting more than eccentric but less than anxious subjects. Gender differences also were noted, with women more likely to cope by seeking social support and men more likely to cope through hedonistic escapism.

  11. Prediction of mortality rates using a model with stochastic parameters

    NASA Astrophysics Data System (ADS)

    Tan, Chon Sern; Pooi, Ah Hin

    2016-10-01

    Prediction of future mortality rates is crucial to insurance companies because they face longevity risks while providing retirement benefits to a population whose life expectancy is increasing. In the past literature, a time series model based on the multivariate power-normal distribution has been applied to mortality data from the United States for the years 1933 to 2000 to forecast mortality rates for the years 2001 to 2010. In this paper, a more dynamic approach based on the multivariate time series will be proposed in which the model uses stochastic parameters that vary with time. The resulting prediction intervals obtained using the model with stochastic parameters perform better because, apart from covering the observed future mortality rates well, they also tend to have distinctly shorter interval lengths.

  12. Applying Multivariate Discrete Distributions to Genetically Informative Count Data.

    PubMed

    Kirkpatrick, Robert M; Neale, Michael C

    2016-03-01

    We present a novel method of conducting biometric analysis of twin data when the phenotypes are integer-valued counts, which often show an L-shaped distribution. Monte Carlo simulation is used to compare five likelihood-based approaches to modeling: our multivariate discrete method, when its distributional assumptions are correct, when they are incorrect, and three other methods in common use. With data simulated from a skewed discrete distribution, recovery of twin correlations and proportions of additive genetic and common environment variance was generally poor for the Normal, Lognormal and Ordinal models, but good for the two discrete models. Sex-separate applications to substance-use data from twins in the Minnesota Twin Family Study showed superior performance of two discrete models. The new methods are implemented using R and OpenMx and are freely available.

  13. Generating Nonnormal Multivariate Data Using Copulas: Applications to SEM.

    PubMed

    Mair, Patrick; Satorra, Albert; Bentler, Peter M

    2012-07-01

    This article develops a procedure based on copulas to simulate multivariate nonnormal data that satisfy a prespecified variance-covariance matrix. The covariance matrix used can comply with a specific moment structure form (e.g., a factor analysis or a general structural equation model). Thus, the method is particularly useful for Monte Carlo evaluation of structural equation models within the context of nonnormal data. The new procedure for nonnormal data simulation is theoretically described and also implemented in the widely used R environment. The quality of the method is assessed by Monte Carlo simulations. A 1-sample test on the observed covariance matrix based on the copula methodology is proposed. This new test for evaluating the quality of a simulation is defined through a particular structural model specification and is robust against normality violations.

  14. Evaluation of the repeatability of a crude adult indirect Ostertagia ostertagi ELISA and methods of expressing test results.

    PubMed

    Sanchez, J; Dohoo, I R; Markham, F; Leslie, K; Conboy, G

    2002-10-16

    An indirect enzyme-linked immunosorbent assay (ELISA) for the detection of antibodies against Ostertagia ostertagi using a crude adult worm antigen was evaluated using serum and milk samples from adult cows, as well as from bulk tank milk. Within and between plate repeatabilities were determined. In addition, the effects of factors such as antigen batch, freezing, preserving of the samples and somatic cell counts (SCCs) of the samples were evaluated. Raw optical densities (ODs) and normalized values were compared using the concordance correlation coefficient (CCC), the coefficient of variation (CV), Bland-Altman plots (BA). Based on raw OD values, there was a high repeatability within a plate (CCC approximately 0.96 and CV<10%). Repeatability between plates was evaluated following normalization of OD values by four methods. Computing normalized values as (OD-Nt)/(Pst-Nt), gave the most repeatable results, with the CCC being approximately 0.95 and the CV approximately 11%. When the OD values were higher than 1.2 and 0.3 for the positive and the negative controls, respectively, none of the normalization methods evaluated provided highly repeatable results and it was necessary to repeat the test. Two batches of the crude antigen preparation were evaluated for repeatability, and no difference was found (CCC=0.96). The use of preservative (bronopol) did not affect test results, nor did freezing the samples for up to 8 months. A significant positive relationship between ELISA OD for milk samples and SCC score was found. Therefore, the use of composite milk samples, which have less variable SCC than samples taken from each quarter, would be more suitable when the udder health status is unknown. The analytical methods used to evaluate repeatability provided a practical way to select among normalization procedures.
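    The best-performing normalization reported above, (OD-Nt)/(Pst-Nt), can be written as a one-line function. The OD values below are hypothetical, with Nt the negative-control and Pst the positive-control OD on the same plate:

```python
def normalize_od(od, neg_ctrl, pos_ctrl):
    # (OD - Nt) / (Pst - Nt): the plate's negative control maps to 0
    # and its positive control maps to 1, removing between-plate shifts.
    return (od - neg_ctrl) / (pos_ctrl - neg_ctrl)

# Hypothetical plate: negative control 0.15 OD, positive control 1.35 OD.
score = normalize_od(0.75, 0.15, 1.35)  # midway between the controls
```

    Per the abstract, this anchoring is only reliable when the positive and negative controls themselves fall in range (above 1.2 and below 0.3 OD, respectively); otherwise the plate is repeated.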

  15. Hypobaric Control of Ethylene-Induced Leaf Senescence in Intact Plants of Phaseolus vulgaris L. 1

    PubMed Central

    Nilsen, Karl N.; Hodges, Clinton F.

    1983-01-01

    A controlled atmospheric-environment system (CAES) designed to sustain normal or hypobaric ambient growing conditions was developed, described, and evaluated for its effectiveness as a research tool capable of controlling ethylene-induced leaf senescence in intact plants of Phaseolus vulgaris L. Senescence was prematurely induced in primary leaves by treatment with 30 parts per million ethephon. Ethephon-derived endogenous ethylene reached peak levels within 6 hours at 26°C. Total endogenous ethylene levels then temporarily stabilized at approximately 1.75 microliters per liter from 6 to 24 hours. Thereafter, a progressive rise in ethylene resulted from leaf tissue metabolism and release. Throughout the study, the endogenous ethylene content of ethephon-treated leaves was greater than that of nontreated leaves. Subjecting ethephon-treated leaves to atmospheres of 200 millibars, with O2 and CO2 compositions set to approximate normal atmospheric partial pressures, prevented chlorophyll loss. Alternatively, subjecting ethephon-treated plants to 200 millibars of air only partially prevented chlorophyll loss. Hypobaric conditions (200 millibars), with O2 and CO2 at normal atmospheric availability, could be delayed until 48 hours after ethephon treatment and still prevent most leaf senescence. In conclusion, hypobaric conditions established and maintained within the CAES prevented ethylene-induced senescence (chlorosis) in intact plants, provided O2 and CO2 partial pressures were maintained at levels approximating normal ambient availability. An unexpected increase in endogenous ethylene was detected within nontreated control leaves 48 hours subsequent to relocation from winter greenhouse conditions (latitude, 42°00″ N) to the CAES operating at normal ambient pressure. The longer photoperiod and/or higher temperature utilized within the CAES are hypothesized to influence ethylene metabolism directly and growth-promotive processes (e.g. response thresholds) indirectly.
PMID:16662806

  16. Assessing the Effectiveness of a School-Based Dental Clinic on the Oral Health of Children Who Lack Access to Dental Care: A Program Evaluation

    ERIC Educational Resources Information Center

    Carpino, Rachel; Walker, Mary P.; Liu, Ying; Simmer-Beck, Melanie

    2017-01-01

    This program evaluation examines the effectiveness of a school-based dental clinic. A repeated-measures design was used to longitudinally examine secondary data from participants (N = 293). Encounter intensity was developed to normalize data. Multivariate analysis of variance and Kruskal-Wallis test were used to investigate the effect of encounter…

  17. A method of using cluster analysis to study statistical dependence in multivariate data

    NASA Technical Reports Server (NTRS)

    Borucki, W. J.; Card, D. H.; Lyle, G. C.

    1975-01-01

    A technique is presented that uses both cluster analysis and a Monte Carlo significance test of clusters to discover associations between variables in multidimensional data. The method is applied to an example of a noisy function in three-dimensional space, to a sample from a mixture of three bivariate normal distributions, and to the well-known Fisher's Iris data.

  18. Confidence Intervals for True Scores Using the Skew-Normal Distribution

    ERIC Educational Resources Information Center

    Garcia-Perez, Miguel A.

    2010-01-01

    A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…

  19. Early Speech Production of Children with Cleft Palate.

    ERIC Educational Resources Information Center

    Estrem, Theresa; Broen, Patricia A.

    1989-01-01

    The study comparing word-initial target phonemes and phoneme production of five toddlers with cleft palate and five normal toddlers found that the cleft palate children tended to target more words with word-initial nasals, approximants, and vowels and fewer words with word-initial stops, fricatives, and affricates than normal children. (Author/DB)

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bass, H.; Mosmann, T.; Strober, S.

    Purified CD4+ BALB/c spleen T cells obtained 4-6 wk after total lymphoid irradiation (TLI) helped normal syngeneic B cells to produce a vigorous antibody response to TNP keyhole limpet hemocyanin in adoptive cell transfer experiments. However, the same cells failed to transfer delayed-type hypersensitivity to the adoptive hosts as measured by a foot pad swelling assay. In addition, purified CD4+ cells from TLI-treated mice were unable to induce graft vs. host disease in lethally irradiated allogeneic C57BL/Ka recipient mice. In response to mitogen stimulation, unfractionated spleen cells obtained from TLI mice secreted normal levels of IL-4 and IL-5, but markedly reduced levels of IL-2 and IFN-gamma. A total of 229 CD4+ clones from spleen cells of both normal and TLI-treated mice were established, and the cytokine secretion pattern from each clone was analyzed. The results demonstrate that the ratio of Th1- and Th2-like clones in the spleens of normal BALB/c mice is 1:0.6, whereas the ratio in TLI mice is approximately 1:7. These results suggest that Th2-like cells recover rapidly (at approximately 4-6 wk) after TLI treatment and account for the early return of antibody helper activity and secretion of IL-4 and IL-5, but Th1-like cells recover more slowly (in approximately 3 mo) after irradiation, and this accounts for the deficit in cell-mediated immunity and the reduced amount of IL-2 and IFN-gamma secretion.

  1. The effect of normalization of Partial Directed Coherence on the statistical assessment of connectivity patterns: a simulation study.

    PubMed

    Toppi, J; Petti, M; Vecchiato, G; Cincotti, F; Salinari, S; Mattia, D; Babiloni, F; Astolfi, L

    2013-01-01

    Partial Directed Coherence (PDC) is a spectral multivariate estimator of effective connectivity relying on the concept of Granger causality. Although its original definition derived directly from information theory, two modifications were introduced in order to provide better physiological interpretations of the estimated networks: i) normalization of the estimator by rows, and ii) a squared transformation. In the present paper we investigated the effect of PDC normalization on the performance achieved by applying the statistical validation process to the investigated connectivity patterns under different conditions of signal-to-noise ratio (SNR) and amount of data available for the analysis. Results of the statistical analysis revealed an effect of PDC normalization only on the percentages of type I and type II errors obtained using the shuffling procedure for the assessment of connectivity patterns. The PDC formulation had no effect on the performance of the validation process when it was instead executed by means of the asymptotic statistics approach. Moreover, the percentages of both false positives and false negatives committed by the asymptotic statistics approach were always lower than those achieved by the shuffling procedure for each type of normalization.

  2. The approximate number system and domain-general abilities as predictors of math ability in children with normal hearing and hearing loss.

    PubMed

    Bull, Rebecca; Marschark, Marc; Nordmann, Emily; Sapere, Patricia; Skene, Wendy A

    2018-06-01

    Many children with hearing loss (CHL) show a delay in mathematical achievement compared to children with normal hearing (CNH). This study examined whether there are differences in acuity of the approximate number system (ANS) between CHL and CNH, and whether ANS acuity is related to math achievement. Working memory (WM), short-term memory (STM), and inhibition were considered as mediators of any relationship between ANS acuity and math achievement. Seventy-five CHL were compared with 75 age- and gender-matched CNH. ANS acuity, mathematical reasoning, WM, and STM of CHL were significantly poorer compared to CNH. Group differences in math ability were no longer significant when ANS acuity, WM, or STM was controlled. For CNH, WM and STM fully mediated the relationship of ANS acuity to math ability; for CHL, WM and STM only partially mediated this relationship. ANS acuity, WM, and STM are significant contributors to hearing status differences in math achievement, and to individual differences within the group of CHL. Statement of contribution What is already known on this subject? Children with hearing loss often perform poorly on measures of math achievement, although there have been few studies focusing on basic numerical cognition in these children. In typically developing children, the approximate number system predicts math skills concurrently and longitudinally, although there have been some contradictory findings. Recent studies suggest that domain-general skills, such as inhibition, may account for the relationship found between the approximate number system and math achievement. What does this study add? This is the first robust examination of the approximate number system in children with hearing loss, and the findings suggest poorer acuity of the approximate number system in these children compared to hearing children.
The study addresses recent issues regarding the contradictory findings of the relationship of the approximate number system to math ability by examining how this relationship varies across children with normal hearing and hearing loss, and by examining whether this relationship is mediated by domain-general skills (working memory, short-term memory, and inhibition). © 2017 The British Psychological Society.

  3. Anal sphincter lacerations and upright delivery postures--a risk analysis from a randomized controlled trial.

    PubMed

    Altman, Daniel; Ragnar, Inga; Ekström, Asa; Tydén, Tanja; Olsson, Sven-Eric

    2007-02-01

    To evaluate obstetric sphincter lacerations after a kneeling or sitting position at second stage of labor in a multivariate risk analysis model. Two hundred and seventy-one primiparous women with normal pregnancies and spontaneous labor were randomized, 138 to a kneeling position and 133 to a sitting position. Medical data were retrieved from delivery charts and partograms. Risk factors were tested in a multivariate logistic regression model in a stepwise manner. The trial was completed by 106 subjects in the kneeling group and 112 subjects in the sitting group. There were no significant differences with regard to duration of second stage of labor or pre-trial maternal characteristics between the two groups. Obstetrical sphincter tears did not differ significantly between the two groups but an intact perineum was more common in the kneeling group (p<0.03) and episiotomy (mediolateral) was more common in the sitting group (p<0.05). Three grade IV sphincter lacerations occurred in the sitting group compared to none in the kneeling group (NS). Multivariate risk analysis indicated that prolonged duration of second stage of labor and episiotomy were associated with an increased risk of third- or fourth-degree sphincter tears (p<0.01 and p<0.05, respectively). Delivery posture, maternal age, fetal weight, use of oxytocin, and use of epidural analgesia did not increase the risk of obstetrical anal sphincter lacerations in the two upright postures. Obstetrical anal sphincter lacerations did not differ significantly between a kneeling or sitting upright delivery posture. Episiotomy was more common after a sitting delivery posture, which may be associated with an increased risk of anal sphincter lacerations. Upright delivery postures may be encouraged in healthy women with normal, full-term pregnancy.

  4. Asymptotics of bivariate generating functions with algebraic singularities

    NASA Astrophysics Data System (ADS)

    Greenwood, Torin

    Flajolet and Odlyzko (1990) derived asymptotic formulae for the coefficients of a class of univariate generating functions with algebraic singularities. Gao and Richmond (1992) and Hwang (1996, 1998) extended these results to classes of multivariate generating functions, in both cases by reducing to the univariate case. Pemantle and Wilson (2013) outlined new multivariate analytic techniques and used them to analyze the coefficients of rational generating functions. After overviewing these methods, we use them to find asymptotic formulae for the coefficients of a broad class of bivariate generating functions with algebraic singularities. Beginning with the Cauchy integral formula, we explicitly deform the contour of integration so that it hugs a set of critical points. The asymptotic contribution to the integral comes from analyzing the integrand near these points, leading to explicit asymptotic formulae. Next, we use this formula to analyze an example from current research. In the following chapter, we apply multivariate analytic techniques to quantum walks. Bressler and Pemantle (2007) found a (d + 1)-dimensional rational generating function whose coefficients described the amplitude of a particle at a position in the integer lattice after n steps. Here, the minimal critical points form a curve on the (d + 1)-dimensional unit torus. We find asymptotic formulae for the amplitude of a particle in a given position, normalized by the number of steps n, as n approaches infinity. Each critical point contributes to the asymptotics for a specific normalized position. Using Groebner bases in Maple, we compute the explicit locations of peak amplitudes. In a scaling window of size on the order of the square root of n near the peaks, each amplitude is asymptotic to an Airy function.

  5. Multivariate Normal Tissue Complication Probability Modeling of Heart Valve Dysfunction in Hodgkin Lymphoma Survivors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cella, Laura, E-mail: laura.cella@cnr.it; Department of Advanced Biomedical Sciences, Federico II University School of Medicine, Naples; Liuzzi, Raffaele

    Purpose: To establish a multivariate normal tissue complication probability (NTCP) model for radiation-induced asymptomatic heart valvular defects (RVD). Methods and Materials: Fifty-six patients treated with sequential chemoradiation therapy for Hodgkin lymphoma (HL) were retrospectively reviewed for RVD events. Clinical information along with whole heart, cardiac chambers, and lung dose distribution parameters was collected, and the correlations to RVD were analyzed by means of Spearman's rank correlation coefficient (Rs). For the selection of the model order and parameters for NTCP modeling, a multivariate logistic regression method using resampling techniques (bootstrapping) was applied. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). Results: When we analyzed the whole heart, a 3-variable NTCP model including the maximum dose, whole heart volume, and lung volume was shown to be the optimal predictive model for RVD (Rs = 0.573, P<.001, AUC = 0.83). When we analyzed the cardiac chambers individually, for the left atrium and for the left ventricle, an NTCP model based on 3 variables including the percentage volume exceeding 30 Gy (V30), cardiac chamber volume, and lung volume was selected as the most predictive model (Rs = 0.539, P<.001, AUC = 0.83; and Rs = 0.557, P<.001, AUC = 0.82, respectively). The NTCP values increase as heart maximum dose or cardiac chambers V30 increase. They also increase with larger volumes of the heart or cardiac chambers and decrease when lung volume is larger. Conclusions: We propose logistic NTCP models for RVD considering not only heart irradiation dose but also the combined effects of lung and heart volumes. Our study establishes the statistical evidence of the indirect effect of lung size on radio-induced heart toxicity.

  6. Expression of Vascular Notch Ligand Delta-Like 4 and Inflammatory Markers in Breast Cancer

    PubMed Central

    Jubb, Adrian M.; Soilleux, Elizabeth J.; Turley, Helen; Steers, Graham; Parker, Andrew; Low, Irene; Blades, Jennifer; Li, Ji-Liang; Allen, Paul; Leek, Russell; Noguera-Troise, Irene; Gatter, Kevin C.; Thurston, Gavin; Harris, Adrian L.

    2010-01-01

    Delta-like ligand 4 (Dll4) is a Notch ligand that is predominantly expressed in the endothelium. Evidence from xenografts suggests that inhibiting Dll4 may overcome resistance to antivascular endothelial growth factor therapy. The aims of this study were to characterize the expression of Dll4 in breast cancer and assess whether it is associated with inflammatory markers and prognosis. We examined 296 breast adenocarcinomas and 38 ductal carcinoma in situ tissues that were represented in tissue microarrays. Additional whole sections representing 10 breast adenocarcinomas, 10 normal breast tissues, and 16 angiosarcomas were included. Immunohistochemistry was then performed by using validated antibodies against Dll4, CD68, CD14, Dendritic Cell-Specific Intercellular adhesion molecule-3-Grabbing Non-integrin (DC-SIGN), CD123, neutrophil elastase, CD31, and carbonic anhydrase 9. Dll4 was selectively expressed by intratumoral endothelial cells in 73% to 100% of breast adenocarcinomas, 18% of in situ ductal carcinomas, and all lactating breast cases, but not normal nonlactating breast. High intensity of endothelial Dll4 expression was a statistically significant adverse prognostic factor in univariate (P = 0.002 and P = 0.01) and multivariate analyses (P = 0.03 and P = 0.04) of overall survival and relapse-free survival, respectively. Among the inflammatory markers, only CD68 and DC-SIGN were significant prognostic factors in univariate (but not multivariate) analyses of overall survival (P = 0.01 and 0.002, respectively). In summary, Dll4 was expressed by endothelium associated with breast cancer cells. In these retrospective subset analyses, endothelial Dll4 expression was a statistically significant multivariate prognostic factor. PMID:20167860

  7. Radiative heat transfer in strongly forward scattering media using the discrete ordinates method

    NASA Astrophysics Data System (ADS)

    Granate, Pedro; Coelho, Pedro J.; Roger, Maxime

    2016-03-01

    The discrete ordinates method (DOM) is widely used to solve the radiative transfer equation, often yielding satisfactory results. However, in the presence of strongly forward scattering media, this method does not generally conserve the scattering energy and the phase function asymmetry factor. Because of this, the normalization of the phase function has been proposed to guarantee that the scattering energy and the asymmetry factor are conserved. Various authors have used different normalization techniques. Three of these are compared in the present work, along with two other methods, one based on the finite volume method (FVM) and another one based on the spherical harmonics discrete ordinates method (SHDOM). In addition, the approximation of the Henyey-Greenstein phase function by a different one is investigated as an alternative to the phase function normalization. The approximate phase function is given by the sum of a Dirac delta function, which accounts for the forward scattering peak, and a smoother scaled phase function. In this study, these techniques are applied to three scalar radiative transfer test cases, namely a three-dimensional cubic domain with a purely scattering medium, an axisymmetric cylindrical enclosure containing an emitting-absorbing-scattering medium, and a three-dimensional transient problem with collimated irradiation. The present results show that accurate predictions are achieved for strongly forward scattering media when the phase function is normalized in such a way that both the scattered energy and the phase function asymmetry factor are conserved. The normalization of the phase function may be avoided using the FVM or the SHDOM to evaluate the in-scattering term of the radiative transfer equation. Both methods yield results whose accuracy is similar to that obtained using the DOM along with normalization of the phase function. 
Very satisfactory predictions were also achieved using the delta-M phase function, while the delta-Eddington phase function and the transport approximation may perform poorly.
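    The two conserved quantities discussed above, the scattered energy and the phase function asymmetry factor, can be checked numerically for the Henyey-Greenstein phase function. This is a generic sketch; the asymmetry factor g = 0.9 and the midpoint-rule quadrature are illustrative, not the paper's discrete ordinates setup.

```python
import math

def hg_phase(mu, g):
    # Henyey-Greenstein phase function, normalized so its integral
    # over mu = cos(theta) in [-1, 1] equals 1.
    return (1.0 - g * g) / (2.0 * (1.0 + g * g - 2.0 * g * mu) ** 1.5)

def moments(g, n=200000):
    # Midpoint-rule estimates of the scattered energy (zeroth moment)
    # and the asymmetry factor (first moment).
    h = 2.0 / n
    energy = asym = 0.0
    for i in range(n):
        mu = -1.0 + (i + 0.5) * h
        p = hg_phase(mu, g)
        energy += p * h
        asym += mu * p * h
    return energy, asym

energy, asym = moments(0.9)  # strongly forward scattering
```

    A coarse angular quadrature that fails these two checks (energy != 1 or recovered asymmetry != g) is precisely the situation in which the normalization techniques compared in the abstract are applied.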

  8. Multivariable passive RFID vapor sensors: roll-to-roll fabrication on a flexible substrate.

    PubMed

    Potyrailo, Radislav A; Burns, Andrew; Surman, Cheryl; Lee, D J; McGinniss, Edward

    2012-06-21

    We demonstrate roll-to-roll (R2R) fabrication of highly selective, battery-free radio frequency identification (RFID) sensors on a flexible polyethylene terephthalate (PET) polymeric substrate. Selectivity of our developed RFID sensors is provided by measurements of their resonance impedance spectra, followed by the multivariate analysis of spectral features, and correlation of these spectral features to the concentrations of vapors of interest. The multivariate analysis of spectral features also provides the ability for the rejection of ambient interferences. As a demonstration of our R2R fabrication process, we employed polyetherurethane (PEUT) as a "classic" sensing material, extruded this sensing material as 25, 75, and 125-μm thick films, and thermally laminated the films onto RFID inlays, rapidly producing approximately 5000 vapor sensors. We further tested these RFID vapor sensors for their response selectivity toward several model vapors such as toluene, acetone, and ethanol as well as water vapor as an abundant interferent. Our RFID sensing concept features 16-bit resolution provided by the sensor reader, granting a highly desired independence from costly proprietary RFID memory chips with a low-resolution analog input. Future steps are being planned for field-testing of these sensors in numerous conditions.

  9. An alternative derivation of the stationary distribution of the multivariate neutral Wright-Fisher model for low mutation rates with a view to mutation rate estimation from site frequency data.

    PubMed

    Schrempf, Dominik; Hobolth, Asger

    2017-04-01

    Recently, Burden and Tang (2016) provided an analytical expression for the stationary distribution of the multivariate neutral Wright-Fisher model with low mutation rates. In this paper we present a simple, alternative derivation that illustrates the approximation. Our proof is based on the discrete multivariate boundary mutation model which has three key ingredients. First, the decoupled Moran model is used to describe genetic drift. Second, low mutation rates are assumed by limiting mutations to monomorphic states. Third, the mutation rate matrix is separated into a time-reversible part and a flux part, as suggested by Burden and Tang (2016). An application of our result to data from several great apes reveals that the assumption of stationarity may be inadequate or that other evolutionary forces like selection or biased gene conversion are acting. Furthermore we find that the model with a reversible mutation rate matrix provides a reasonably good fit to the data compared to the one with a non-reversible mutation rate matrix. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.

  10. Approximation of a Brittle Fracture Energy with a Constraint of Non-interpenetration

    NASA Astrophysics Data System (ADS)

    Chambolle, Antonin; Conti, Sergio; Francfort, Gilles A.

    2018-06-01

    Linear fracture mechanics (or at least the initiation part of that theory) can be framed in a variational context as a minimization problem over an SBD type space. The corresponding functional can in turn be approximated in the sense of {Γ}-convergence by a sequence of functionals involving a phase field as well as the displacement field. We show that a similar approximation persists if additionally imposing a non-interpenetration constraint in the minimization, namely that only nonnegative normal jumps should be permissible.

  11. The application of the piecewise linear approximation to the spectral neighborhood of soil line for the analysis of the quality of normalization of remote sensing materials

    NASA Astrophysics Data System (ADS)

    Kulyanitsa, A. L.; Rukhovich, A. D.; Rukhovich, D. D.; Koroleva, P. V.; Rukhovich, D. I.; Simakova, M. S.

    2017-04-01

    The concept of the soil line can be used to describe the temporal distribution of spectral characteristics of the bare soil surface. In this case, the soil line can be referred to as the multi-temporal soil line, or simply the temporal soil line (TSL). In order to create the TSL for 8000 regular lattice points over the territory of three regions of Tula oblast, we used 34 Landsat images obtained in the period from 1985 to 2014, after a preliminary transformation. As Landsat images are matrices of spectral brightness values, this transformation is a normalization of the matrices. There are several methods of normalization that move, rotate, and scale the spectral plane. In our study, we applied the method of piecewise linear approximation to the spectral neighborhood of the soil line in order to assess the quality of normalization mathematically. This approach allowed us to rank the normalization methods by quality as follows: classic normalization > successive application of the turn and shift > successive application of the atmospheric correction and shift > atmospheric correction > shift > turn > raw data. The normalized data allowed us to create maps of the distribution of the a and b coefficients of the TSL. The map of the b coefficient is characterized by a high correlation with the ground-truth data obtained from 1899 soil pits described during the soil surveys performed by the local institute for land management (GIPROZEM).

  12. Connection between the regular approximation and the normalized elimination of the small component in relativistic quantum theory

    NASA Astrophysics Data System (ADS)

    Filatov, Michael; Cremer, Dieter

    2005-02-01

    The regular approximation to the normalized elimination of the small component (NESC) in the modified Dirac equation has been developed and presented in matrix form. The matrix form of the infinite-order regular approximation (IORA) expressions, obtained in [Filatov and Cremer, J. Chem. Phys. 118, 6741 (2003)] using the resolution of the identity, is the exact matrix representation and corresponds to the zeroth-order regular approximation to NESC (NESC-ZORA). Because IORA (=NESC-ZORA) is a variationally stable method, it was used as a suitable starting point for the development of the second-order regular approximation to NESC (NESC-SORA). As shown for hydrogenlike ions, NESC-SORA energies are closer to the exact Dirac energies than the energies from the fifth-order Douglas-Kroll approximation, which is much more computationally demanding than NESC-SORA. For the application of IORA (=NESC-ZORA) and NESC-SORA to many-electron systems, the number of the two-electron integrals that need to be evaluated (identical to the number of the two-electron integrals of a full Dirac-Hartree-Fock calculation) was drastically reduced by using the resolution of the identity technique. An approximation was derived, which requires only the two-electron integrals of a nonrelativistic calculation. The accuracy of this approach was demonstrated for heliumlike ions. The total energy based on the approximate integrals deviates from the energy calculated with the exact integrals by less than 5×10^-9 hartree. NESC-ZORA and NESC-SORA can easily be implemented in any nonrelativistic quantum chemical program. Their application is comparable in cost with that of nonrelativistic methods. The methods can be run with density functional theory and any wave function method. NESC-SORA has the advantage that it does not imply a picture change.

  13. Experimental variability and data pre-processing as factors affecting the discrimination power of some chemometric approaches (PCA, CA and a new algorithm based on linear regression) applied to (+/-)ESI/MS and RPLC/UV data: Application on green tea extracts.

    PubMed

    Iorgulescu, E; Voicu, V A; Sârbu, C; Tache, F; Albu, F; Medvedovici, A

    2016-08-01

    The influence of the experimental variability (instrumental repeatability, instrumental intermediate precision and sample preparation variability) and data pre-processing (normalization, peak alignment, background subtraction) on the discrimination power of multivariate data analysis methods (Principal Component Analysis -PCA- and Cluster Analysis -CA-) as well as a new algorithm based on linear regression was studied. Data used in the study were obtained through positive or negative ion monitoring electrospray mass spectrometry (+/-ESI/MS) and reversed phase liquid chromatography/UV spectrometric detection (RPLC/UV) applied to green tea extracts. Extractions in ethanol and heated water infusion were used as sample preparation procedures. The multivariate methods were directly applied to mass spectra and chromatograms, involving strictly a holistic comparison of shapes, without assignment of any structural identity to compounds. An alternative data interpretation based on linear regression analysis mutually applied to data series is also discussed. Slopes, intercepts and correlation coefficients produced by the linear regression analysis applied on pairs of very large experimental data series successfully retain information resulting from high frequency instrumental acquisition rates, obviously better defining the profiles being compared. Consequently, each type of sample or comparison between samples produces in the Cartesian space an ellipsoidal volume defined by the normal variation intervals of the slope, intercept and correlation coefficient. Distances between volumes graphically illustrate (dis)similarities between compared data. The instrumental intermediate precision had the major effect on the discrimination power of the multivariate data analysis methods.
Mass spectra produced through ionization from the liquid state under atmospheric pressure conditions of bulk complex mixtures resulting from extracted materials of natural origins provided an excellent data basis for multivariate analysis methods, equivalent to data resulting from chromatographic separations. The alternative evaluation of very large data series based on linear regression analysis produced information equivalent to results obtained through application of PCA and CA. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Tobacco use in popular movies during the past decade

    PubMed Central

    Mekemson, C; Glik, D; Titus, K; Myerson, A; Shaivitz, A; Ang, A; Mitchell, S

    2004-01-01

    Objective: The top 50 commercially successful films released per year from 1991 to 2000 were content coded to assess trends in tobacco use over time and attributes of films predictive of higher smoking rates. Design: This observational study used media content analysis methods to generate data about tobacco use depictions in films studied (n = 497). Films are the basic unit of analysis. Once films were coded and preliminary analysis completed, outcome data were transformed to approximate multivariate normality before being analysed with general linear models and longitudinal mixed method regression methods. Main outcome measures: Tobacco use per minute of film was the main outcome measure used. Predictor variables include attributes of films and actors. Tobacco use was defined as any cigarette, cigar, and chewing tobacco use as well as the display of smoke and cigarette paraphernalia such as ashtrays, brand names, or logos within frames of films reviewed. Results: Smoking rates in the top films fluctuated yearly over the decade with an overall modest downward trend (p < 0.005), with the exception of R rated films where rates went up. Conclusions: The decrease in smoking rates found in films in the past decade is modest given extensive efforts to educate the entertainment industry on this issue over the past decade. Monitoring, education, advocacy, and policy change to bring tobacco depiction rates down further should continue. PMID:15564625
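
    The normalizing transformation step mentioned above (reshaping skewed outcome data toward approximate normality before fitting general linear models) can be sketched with a log transform on synthetic right-skewed rate data; the data and the choice of transform are assumptions for illustration, not the study's actual variables:

```python
import numpy as np

def skewness(x):
    """Sample skewness (third standardized moment); near 0 for
    approximately normal data."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 3)

rng = np.random.default_rng(0)
# Right-skewed "events per minute of film" style rates (synthetic)
rates = rng.lognormal(mean=0.0, sigma=0.8, size=500)

raw_skew = skewness(rates)
log_skew = skewness(np.log(rates))
# The log transform pulls the long right tail in toward symmetry,
# so |skewness| drops substantially
```

    In practice the transformed variable, not the raw rate, is what enters the general linear or mixed model.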

  15. Combining Mixture Components for Clustering*

    PubMed Central

    Baudry, Jean-Patrick; Raftery, Adrian E.; Celeux, Gilles; Lo, Kenneth; Gottardo, Raphaël

    2010-01-01

    Model-based clustering consists of fitting a mixture model to data and identifying each cluster with one of its components. Multivariate normal distributions are typically used. The number of clusters is usually determined from the data, often using BIC. In practice, however, individual clusters can be poorly fitted by Gaussian distributions, and in that case model-based clustering tends to represent one non-Gaussian cluster by a mixture of two or more Gaussian distributions. If the number of mixture components is interpreted as the number of clusters, this can lead to overestimation of the number of clusters. This is because BIC selects the number of mixture components needed to provide a good approximation to the density, rather than the number of clusters as such. We propose first selecting the total number of Gaussian mixture components, K, using BIC and then combining them hierarchically according to an entropy criterion. This yields a unique soft clustering for each number of clusters less than or equal to K. These clusterings can be compared on substantive grounds, and we also describe an automatic way of selecting the number of clusters via a piecewise linear regression fit to the rescaled entropy plot. We illustrate the method with simulated data and a flow cytometry dataset. Supplemental Materials are available on the journal Web site and described at the end of the paper. PMID:20953302
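
    The hierarchical entropy-based combination step can be sketched directly on a matrix of posterior membership probabilities from an already-fitted mixture. This is a toy illustration of the criterion (greedily merge the pair of components whose union yields the lowest total soft-assignment entropy), not the authors' implementation:

```python
import numpy as np

def merge_components(post):
    """Given an (n, K) matrix of posterior membership probabilities,
    merge the pair of components whose combination minimizes the total
    soft-assignment entropy. Returns the (n, K-1) merged matrix."""
    n, K = post.shape

    def entropy(p):
        q = np.clip(p, 1e-12, 1.0)
        return -np.sum(q * np.log(q))

    best, best_pair = None, None
    for i in range(K):
        for j in range(i + 1, K):
            merged = np.delete(post, j, axis=1)
            merged[:, i] = post[:, i] + post[:, j]
            h = entropy(merged)
            if best is None or h < best:
                best, best_pair = h, (i, j)
    i, j = best_pair
    merged = np.delete(post, j, axis=1)
    merged[:, i] = post[:, i] + post[:, j]
    return merged

# Three fitted components: component 0 is crisp, components 1 and 2
# split one non-Gaussian cluster between them
post = np.vstack([np.tile([1.0, 0.0, 0.0], (5, 1)),
                  np.tile([0.0, 0.5, 0.5], (5, 1))])
merged = merge_components(post)  # components 1 and 2 get combined
```

    Repeating the merge K-1 times yields the full hierarchy of soft clusterings described above, one for each number of clusters ≤ K.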

  16. Impacts of emerging contaminants on surrounding aquatic environment from a youth festival.

    PubMed

    Jiang, Jheng-Jie; Lee, Chon-Lin; Fang, Meng-Der; Tu, Bo-Wen; Liang, Yu-Jen

    2015-01-20

    The youth festival referred to here as Spring Scream, a large-scale pop music festival, is notorious for problems of drug abuse and addiction. The origin, temporal magnitudes, potential risks and mass inputs of emerging contaminants (ECs) were investigated. Thirty targeted ECs were analyzed by solid-phase extraction and liquid chromatography coupled to tandem mass spectrometry (SPE-LC-MS/MS). The sampling strategy was designed to characterize EC behavior in different stages (before and after the youth festival), based on multivariate data analysis to explore the contributions of contaminants from the normal condition to the youth festival. Wastewater influents and effluents were collected during the youth festival (approximately 600 000 pop music fans and youth participated). Surrounding river waters were also sampled to illustrate the touristic impacts during peak season and off-season. Seasonal variations were observed, with the highest concentrations in April (Spring Scream) and the lowest in October (off-season). Acetaminophen, diclofenac, codeine, ampicillin, tetracycline, erythromycin-H2O, and gemfibrozil have significant pollution risk quotients (RQs > 1), indicating ecotoxicological concerns. Principal component analysis (PCA) and weekly patterns provide a perspective in assessing the touristic impacts and address the dramatic changes in visitor population and drug consumption. The highest mass loads discharged into the aquatic ecosystem corresponded to illicit drugs/controlled substances such as ketamine and MDMA, indicating the high consumption of ecstasy during Spring Scream.

  17. Univariate and multivariate methods for chemical mapping of cervical cancer cells

    NASA Astrophysics Data System (ADS)

    Duraipandian, Shiyamala; Zheng, Wei; Huang, Zhiwei

    2012-01-01

    Visualization of cells and subcellular organelles is currently carried out using available microscopy methods such as cryoelectron microscopy and fluorescence microscopy. These methods require external labeling using fluorescent dyes and extensive sample preparation to access the subcellular structures. However, Raman micro-spectroscopy provides a non-invasive, label-free method for imaging cells with chemical specificity at sub-micrometer spatial resolution. The scope of this paper is to image the biochemical/molecular distributions in cells associated with cancerous changes. Raman map data sets were acquired from the human cervical carcinoma cell line (HeLa) after fixation under 785 nm excitation wavelength. Individual spectra were recorded by raster-scanning the laser beam over the sample with a 1 μm step size and 10 s exposure time. Images revealing nucleic acids, lipids and proteins (phenylalanine, amide I) were reconstructed using univariate methods. In the near future, the small pixel-to-pixel variations will also be imaged using different multivariate methods (PCA, clustering (HCA, K-means, FCM)) to determine the main cellular constituents. The hyper-spectral image of the cell was reconstructed utilizing the spectral contrast at different pixels of the cell (due to the variation in the biochemical distribution) without using fluorescent dyes. Normal cervical squamous cells will also be imaged in order to differentiate normal and cancerous cells of the cervix using the biochemical changes in different grades of cancer. Based on the information obtained from the pseudo-color maps, constructed from the hyper-spectral cubes, the primary cellular constituents of normal and cervical cancer cells were identified.

  18. The natural mathematics of behavior analysis.

    PubMed

    Li, Don; Hautus, Michael J; Elliffe, Douglas

    2018-04-19

    Models that generate event records have very general scope regarding the dimensions of the target behavior that we measure. From a set of predicted event records, we can generate predictions for any dependent variable that we could compute from the event records of our subjects. In this sense, models that generate event records permit us a freely multivariate analysis. To explore this proposition, we conducted a multivariate examination of Catania's Operant Reserve on single VI schedules in transition using a Markov Chain Monte Carlo scheme for Approximate Bayesian Computation. Although we found systematic deviations between our implementation of Catania's Operant Reserve and our observed data (e.g., mismatches in the shape of the interresponse time distributions), the general approach that we have demonstrated represents an avenue for modelling behavior that transcends the typical constraints of algebraic models. © 2018 Society for the Experimental Analysis of Behavior.
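
    The core of an ABC rejection scheme (a simpler relative of the MCMC-based ABC used above) can be sketched in a few lines: draw a parameter from the prior, forward-simulate event records, and keep the draw only if a summary of the simulation lands close to the observed summary. The Poisson "response rate" model, prior, and tolerance below are hypothetical stand-ins, not Catania's Operant Reserve:

```python
import numpy as np

rng = np.random.default_rng(42)

# Observed data: response counts per session from some "true" process
true_rate = 4.0
observed = rng.poisson(true_rate, size=50)
obs_stat = observed.mean()  # summary statistic

# ABC rejection loop
eps = 0.2       # tolerance on the summary distance
accepted = []
for _ in range(5000):
    rate = rng.uniform(0.0, 10.0)          # draw from a flat prior
    sim = rng.poisson(rate, size=50)       # forward-simulate records
    if abs(sim.mean() - obs_stat) <= eps:  # keep if close to observed
        accepted.append(rate)

posterior_mean = np.mean(accepted)  # approximate posterior summary
```

    Because the model only needs to *simulate* event records, any dependent variable computable from those records can be checked against data, which is the "freely multivariate" point made above.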

  19. Nonparametric Bayesian Segmentation of a Multivariate Inhomogeneous Space-Time Poisson Process.

    PubMed

    Ding, Mingtao; He, Lihan; Dunson, David; Carin, Lawrence

    2012-12-01

    A nonparametric Bayesian model is proposed for segmenting time-evolving multivariate spatial point process data. An inhomogeneous Poisson process is assumed, with a logistic stick-breaking process (LSBP) used to encourage piecewise-constant spatial Poisson intensities. The LSBP explicitly favors spatially contiguous segments, and infers the number of segments based on the observed data. The temporal dynamics of the segmentation and of the Poisson intensities are modeled with exponential correlation in time, implemented in the form of a first-order autoregressive model for uniformly sampled discrete data, and via a Gaussian process with an exponential kernel for general temporal sampling. We consider and compare two different inference techniques: a Markov chain Monte Carlo sampler, which has relatively high computational complexity; and an approximate and efficient variational Bayesian analysis. The model is demonstrated with a simulated example and a real example of space-time crime events in Cincinnati, Ohio, USA.
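
    The stick-breaking construction underlying the LSBP can be sketched for fixed logits. In the model itself the logits are functions of spatial location (which is what makes segments spatially contiguous and lets the data determine the number of segments); the sketch below shows only how the segment weights are built, with arbitrary example logits:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lsbp_weights(logits):
    """Logistic stick-breaking: segment k claims a sigmoid(logit_k)
    fraction of whatever probability 'stick' remains after segments
    1..k-1. The final entry is the leftover mass, so weights sum to 1."""
    probs = sigmoid(np.asarray(logits, dtype=float))
    remaining = 1.0
    weights = []
    for p in probs:
        weights.append(remaining * p)
        remaining *= (1.0 - p)
    weights.append(remaining)  # residual segment
    return np.array(weights)

w = lsbp_weights([0.5, -1.0, 2.0])  # 3 logits -> 4 segment weights
```
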

  20. A new approach in space-time analysis of multivariate hydrological data: Application to Brazil's Nordeste region rainfall

    NASA Astrophysics Data System (ADS)

    Sicard, Emeline; Sabatier, Robert; Niel, HéLèNe; Cadier, Eric

    2002-12-01

    The objective of this paper is to implement an original method for spatial and multivariate data, combining a method of three-way array analysis (STATIS) with geostatistical tools. The variables of interest are the monthly amounts of rainfall in the Nordeste region of Brazil, recorded from 1937 to 1975. The principle of the technique is the calculation of a linear combination of the initial variables, containing a large part of the initial variability and taking into account the spatial dependencies. It is a promising method that is able to analyze triple variability: spatial, seasonal, and interannual. In our case, the first component obtained discriminates a group of rain gauges, corresponding approximately to the Agreste, from all the others. The monthly variables of July and August strongly influence this separation. Furthermore, an annual study brings out the stability of the spatial structure of components calculated for each year.

  1. Classification of adulterated honeys by multivariate analysis.

    PubMed

    Amiry, Saber; Esmaiili, Mohsen; Alizadeh, Mohammad

    2017-06-01

    In this research, honey samples were adulterated with date syrup (DS) and invert sugar syrup (IS) at three concentrations (7%, 15% and 30%). 102 adulterated samples were prepared in six batches with 17 replications for each batch. For each sample, 32 parameters including color indices and rheological, physical, and chemical parameters were determined. To classify the samples based on type and concentration of adulterant, a multivariate analysis was applied using principal component analysis (PCA) followed by a linear discriminant analysis (LDA). Then, 21 principal components (PCs) were selected in five sets. Approximately two-thirds of samples were identified correctly using color indices (62.75%) or rheological properties (67.65%). A powerful discrimination was obtained using physical properties (97.06%), and the best separations were achieved using two sets of chemical properties (set 1: lactone, diastase activity, sucrose - 100%) (set 2: free acidity, HMF, ash - 95%). Copyright © 2016 Elsevier Ltd. All rights reserved.
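
    A minimal numpy sketch of the PCA-then-LDA pipeline on synthetic two-class data (pure vs. adulterated). The dimensions, class separation, and number of retained PCs are arbitrary assumptions; the paper's 21 PCs, five property sets, and actual measurements are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the honey data: two classes, 32 measured parameters,
# classes separated along the first three parameters
n, d = 60, 32
X0 = rng.normal(0.0, 1.0, size=(n, d))
X1 = rng.normal(0.0, 1.0, size=(n, d))
X1[:, :3] += 2.5
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

# PCA: project onto the leading principal components via SVD
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:5].T  # keep 5 PCs for this sketch

# Fisher LDA on the PC scores: w maximizes between/within separation
m0, m1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
Sw = np.cov(Z[y == 0].T) + np.cov(Z[y == 1].T)
w = np.linalg.solve(Sw, m1 - m0)
scores = Z @ w
threshold = (scores[y == 0].mean() + scores[y == 1].mean()) / 2.0
pred = (scores > threshold).astype(int)
accuracy = (pred == y).mean()
```

    Running LDA on PC scores rather than the raw 32 parameters keeps the within-class scatter matrix well conditioned, which is the usual motivation for the PCA-LDA combination.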

  2. The Raman spectrum character of skin tumor induced by UVB

    NASA Astrophysics Data System (ADS)

    Wu, Shulian; Hu, Liangjun; Wang, Yunxia; Li, Yongzeng

    2016-03-01

    In our study, the skin carcinogenesis process induced by UVB was analyzed from the perspective of the tissue spectrum. A home-made Raman spectral system with a millimeter-order excitation laser spot size, combined with multivariate statistical analysis, was used to monitor skin changes caused by UVB irradiation, and its discrimination performance was evaluated. Raman scattering signals of SCC and normal skin were acquired, and spectral differences in the Raman spectra were revealed. Linear discriminant analysis (LDA) based on principal component analysis (PCA) was employed to generate diagnostic algorithms for the classification of SCC and normal skin. The results indicated that Raman spectroscopy combined with PCA-LDA demonstrated good potential for improving the diagnosis of skin cancers.

  3. LAM-1 and FAT Genes Control Development of the Leaf Blade in Nicotiana sylvestris.

    PubMed Central

    McHale, NA

    1993-01-01

    Leaf primordia of the lam-1 mutant of Nicotiana sylvestris grow normally in length but remain bladeless throughout development. The blade initiation site is established at the normal time and position in lam-1 primordia. Anticlinal divisions proceed normally in the outer L1 and L2 layers, but the inner L3 cells fail to establish the periclinal divisions that normally generate the middle mesophyll core. The lam-1 mutation also blocks formation of blade mesophyll from distal L2 cells. This suggests that LAM-1 controls a common step in initiation of blade tissue from the L2 and L3 lineage of the primordium. Another recessive mutation (fat) was isolated in N. sylvestris that induces abnormal periclinal divisions in the mesophyll during blade initiation and expansion. This generates a blade approximately twice its normal thickness by doubling the number of mesophyll cell layers from four to approximately eight. Presumably, the fat mutation defines a negative regulator involved in repression of periclinal divisions in the blade. The lam-1 fat double mutant shows radial proliferation of mesophyll cells at the blade initiation site. This produces a highly disorganized, club-shaped blade that appears to represent an additive effect of the lam-1 and fat mutations on blade founder cells. PMID:12271096

  4. Extracting Spurious Latent Classes in Growth Mixture Modeling with Nonnormal Errors

    ERIC Educational Resources Information Center

    Guerra-Peña, Kiero; Steinley, Douglas

    2016-01-01

    Growth mixture modeling is generally used for two purposes: (1) to identify mixtures of normal subgroups and (2) to approximate oddly shaped distributions by a mixture of normal components. Often in applied research this methodology is applied to both of these situations indistinctly: using the same fit statistics and likelihood ratio tests. This…

  5. Reducing Bias and Error in the Correlation Coefficient Due to Nonnormality

    ERIC Educational Resources Information Center

    Bishara, Anthony J.; Hittner, James B.

    2015-01-01

    It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared…

  6. Gaussian closure technique applied to the hysteretic Bouc model with non-zero mean white noise excitation

    NASA Astrophysics Data System (ADS)

    Waubke, Holger; Kasess, Christian H.

    2016-11-01

    Devices that emit structure-borne sound are commonly decoupled by elastic components to shield the environment from acoustical noise and vibrations. The elastic elements often have a hysteretic behavior that is typically neglected. In order to take hysteretic behavior into account, Bouc developed a differential equation for such materials, especially joints made of rubber or equipped with dampers. In this work, the Bouc model is solved by means of the Gaussian closure technique based on the Kolmogorov equation. Kolmogorov developed a method to derive probability density functions for arbitrary explicit first-order vector differential equations under white noise excitation, using a partial differential equation of a multivariate conditional probability distribution. Up to now, no analytical solution of the Kolmogorov equation in conjunction with the Bouc model exists; therefore, a wide range of approximate solutions, especially statistical linearization, has been developed. Using the Gaussian closure technique, an approximation to the Kolmogorov equation that assumes a multivariate Gaussian distribution, an analytic solution is derived in this paper for the Bouc model. For the stationary case the two methods yield equivalent results; however, in contrast to statistical linearization, the presented solution allows the transient behavior to be calculated explicitly. Further, the stationary case leads to an implicit set of equations that can be solved iteratively, with a small number of iterations and without instabilities, for specific parameter sets.

  7. The NLS-Based Nonlinear Grey Multivariate Model for Forecasting Pollutant Emissions in China

    PubMed Central

    Pei, Ling-Ling; Li, Qin

    2018-01-01

    The relationship between pollutant discharge and economic growth has been a major research focus in environmental economics. To accurately estimate the nonlinear change law of China’s pollutant discharge with economic growth, this study establishes a transformed nonlinear grey multivariable (TNGM (1, N)) model based on the nonlinear least square (NLS) method. The Gauss–Seidel iterative algorithm was used to solve the parameters of the TNGM (1, N) model based on the NLS basic principle. This algorithm improves the precision of the model by continuous iteration and constantly approximating the optimal regression coefficient of the nonlinear model. In our empirical analysis, the traditional grey multivariate model GM (1, N) and the NLS-based TNGM (1, N) models were respectively adopted to forecast and analyze the relationship among wastewater discharge per capita (WDPC), and per capita emissions of SO2 and dust, alongside GDP per capita in China during the period 1996–2015. Results indicated that the NLS algorithm is able to effectively help the grey multivariable model identify the nonlinear relationship between pollutant discharge and economic growth. The results show that the NLS-based TNGM (1, N) model presents greater precision when forecasting WDPC, SO2 emissions and dust emissions per capita, compared to the traditional GM (1, N) model; WDPC indicates a growing tendency aligned with the growth of GDP, while the per capita emissions of SO2 and dust reduce accordingly. PMID:29517985

  8. Nontargeted, Rapid Screening of Extra Virgin Olive Oil Products for Authenticity Using Near-Infrared Spectroscopy in Combination with Conformity Index and Multivariate Statistical Analyses.

    PubMed

    Karunathilaka, Sanjeewa R; Kia, Ali-Reza Fardin; Srigley, Cynthia; Chung, Jin Kyu; Mossoba, Magdi M

    2016-10-01

    A rapid tool for evaluating authenticity was developed and applied to the screening of extra virgin olive oil (EVOO) retail products by using Fourier-transform near infrared (FT-NIR) spectroscopy in combination with univariate and multivariate data analysis methods. Using disposable glass tubes, spectra for 62 reference EVOO, 10 edible oil adulterants, 20 blends consisting of EVOO spiked with adulterants, 88 retail EVOO products and other test samples were rapidly measured in the transmission mode without any sample preparation. The univariate conformity index (CI) and the multivariate supervised soft independent modeling of class analogy (SIMCA) classification tool were used to analyze the various olive oil products which were tested for authenticity against a library of reference EVOO. Better discrimination between the authentic EVOO and some commercial EVOO products was observed with SIMCA than with CI analysis. Approximately 61% of all EVOO commercial products were flagged by SIMCA analysis, suggesting that further analysis be performed to identify quality issues and/or potential adulterants. Due to its simplicity and speed, FT-NIR spectroscopy in combination with multivariate data analysis can be used as a complementary tool to conventional official methods of analysis to rapidly flag EVOO products that may not belong to the class of authentic EVOO. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
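
    One common form of the conformity index checks, wavelength by wavelength, how far a test spectrum sits from the reference-library mean in units of k standard deviations; a sample is flagged when any wavelength exceeds the band. The definition with k = 3 and the numbers below are assumptions for illustration, not the paper's exact implementation:

```python
import numpy as np

def conformity_index(spectrum, lib_mean, lib_std, k=3.0):
    """Per-wavelength conformity index against a reference library:
    distance from the library mean in units of k library standard
    deviations. max CI <= 1 means the spectrum stays inside the
    k-sigma band everywhere and is flagged as conforming."""
    return np.abs(spectrum - lib_mean) / (k * lib_std)

# Hypothetical 3-wavelength library statistics and test spectra
lib_mean = np.array([0.50, 0.80, 0.30])
lib_std = np.array([0.02, 0.05, 0.01])
authentic = np.array([0.51, 0.78, 0.31])
adulterated = np.array([0.60, 0.95, 0.36])
```

    Because CI treats each wavelength independently, it can miss correlated deviations that a multivariate model such as SIMCA catches, consistent with the better discrimination reported above for SIMCA.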

  9. Population level determinants of acute mountain sickness among young men: a retrospective study.

    PubMed

    Li, Xiaoxiao; Tao, Fasheng; Pei, Tao; You, Haiyan; Liu, Yan; Gao, Yuqi

    2011-09-28

    Many visitors, including military troops, who enter highland regions from low altitude areas may suffer from acute mountain sickness (AMS), which negatively impacts workable man-hours and increases healthcare costs. The aim of this study was to evaluate the population level risk factors and build a multivariate model, which might be applicable to reduce the effects of AMS on Chinese young men traveling to this region. Chinese highland military medical records were used to obtain data of young men (n = 3727) who entered the Tibet plateau between 2006 and 2009. The relationship between AMS and travel profile, demographic characteristics, and health behaviors was evaluated by logistic regression. Univariate logistic models estimated the crude odds ratio. The variables that showed significance in the univariate model were included in a multivariate model to derive adjusted odds ratios and build the final model. Data corresponding to odd and even years (2 subsets) were analyzed separately and used in a simple cross-validation. Univariate analysis indicated that travel profile, prophylactic use, ethnicity, and province of birth were all associated with AMS in both subsets. In multivariate analysis, young men who traveled from lower altitude (600-800 m vs. 1300-1500 m, adjusted odds ratio (AOR) = 1.32-1.44) to higher altitudes (4100-4300 m vs. 2900-3100 m, AOR = 3.94-4.12; 3600-3700 m vs. 2900-3100 m, AOR = 2.71-2.74) by air or rapid land transport for emergency mission deployment (emergency land deployment vs. normal land deployment, AOR = 2.08-2.11; normal air deployment vs. normal land deployment, AOR = 2.00-2.20; emergency air deployment vs. normal land deployment, AOR = 2.40-3.34) during the cold season (cold vs. warm, AOR = 1.25-1.28) are at great risk for developing AMS. Non-Tibetan male soldiers (Tibetan vs. Han, AOR = 0.03-0.08), born and raised in lower provinces (eastern vs. 
northwestern, AOR = 1.32-1.39), and deployed without prophylaxis (prophylactic drug vs. none, AOR = 0.75-0.76), also represented a population at significantly increased risk for AMS. The predictive model was built; the area under the receiver operating characteristic curve was 0.703. Before a group of young men first enters a high altitude area, it is important that a health service plan be made with reference to the group's travel profile and with respect to the young men's ethnicity and province of birth. Low-cost Chinese traditional prophylactic drugs might have some effect on decreasing the risk of AMS, although this needs further verification.
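
    The crude (unadjusted) odds ratios produced by the univariate step can be computed directly from a 2×2 exposure-by-outcome table, with a confidence interval from the usual log-OR normal approximation. The counts below are hypothetical, not the study's data:

```python
import math

def crude_odds_ratio(exposed_cases, exposed_controls,
                     unexposed_cases, unexposed_controls):
    """Crude odds ratio from a 2x2 table, plus a 95% CI based on the
    normal approximation to the log odds ratio."""
    a, b = exposed_cases, exposed_controls
    c, d = unexposed_cases, unexposed_controls
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical counts: AMS cases vs. non-cases, cold vs. warm season
or_, (lo, hi) = crude_odds_ratio(120, 880, 95, 905)
```

    The multivariate model then adjusts such crude ratios for the other significant covariates, yielding the AOR ranges reported above.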

  10. Associations between body condition, rumen fill, diarrhoea and lameness and ruminal acidosis in Australian dairy herds.

    PubMed

    Bramley, E; Costa, N D; Fulkerson, W J; Lean, I J

    2013-11-01

    To investigate associations between ruminal acidosis and body condition score (BCS), prevalence of poor rumen fill, diarrhoea and lameness in dairy cows in New South Wales and Victoria, Australia. This was a cross-sectional study conducted in 100 dairy herds in five regions of Australia. Feeding practices, diets and management practices of herds were assessed. Lactating cows within herds were sampled for rumen biochemistry (n = 8 per herd) and scored for body condition, rumen fill and locomotion (n = 15 per herd). The consistency of faecal pats (n = 20 per herd) from the lactating herd was also scored. A perineal faecal staining score was given to each herd. Herds were classified as subclinically acidotic (ACID), suboptimal (SO) and non-acidotic (Normal) when ≥3/8 cows per herd were allocated to previously defined categories based on rumen biochemical measures. Multivariate logistic regression models were used to examine associations between the prevalence of conditions within a herd and explanatory variables. Median BCS and perineal staining score were not associated with herd category (p > 0.05). In the multivariate models, herds with a high prevalence of low rumen fill scores (≤2/5) were more likely to be categorised Normal than SO with an associated increased risk of 69% (p = 0.05). Herds that had a greater prevalence of lame cows (locomotion scores ≥3/5) had a 103% higher risk of being categorised as ACID than SO (p = 0.034). In a multivariate logistic regression model, with herd modelled as a random effect, an increase of 1% of pasture in the diet was associated with a 5.5% increase in risk of high faecal scores (≥4/5) indicating diarrhoea (p = 0.001). This study confirmed that herd categories based on rumen function are associated with biological outcomes consistent with acidosis. Herds that had a higher risk of lameness also had a much higher risk of being categorised ACID than SO.
Herds with a high prevalence of low rumen scores were more likely to be categorised Normal than SO. The findings indicate that differences in rumen metabolism identified for herd categories ACID, SO and Normal were associated with differences in disease risk and physiology. The study also identified an association between pasture feeding and higher faecal scores. This study suggests that there is a challenge for farmers seeking to increase milk production of cows on pasture to maintain the health of cattle.

  11. An Approximation Solution to Refinery Crude Oil Scheduling Problem with Demand Uncertainty Using Joint Constrained Programming

    PubMed Central

    Duan, Qianqian; Yang, Genke; Xu, Guanglin; Pan, Changchun

    2014-01-01

    This paper is devoted to developing an approximation method for scheduling refinery crude oil operations by taking into consideration the demand uncertainty. In the stochastic model the demand uncertainty is modeled as random variables which follow a joint multivariate distribution with a specific correlation structure. Compared to deterministic models in existing works, the stochastic model can be more practical for optimizing crude oil operations. Using joint chance constraints, the demand uncertainty is treated by specifying a proximity level on the satisfaction of product demands. However, the joint chance constraints usually hold strong nonlinearity and, consequently, it is still hard to handle them directly. In this paper, an approximation method combining a relax-and-tight technique is used to approximately transform the joint chance constraints to a series of parameterized linear constraints, so that the complicated problem can be attacked iteratively. The basic idea behind this approach is to approximate, as much as possible, the nonlinear constraints by many easily handled linear constraints, which leads to a good balance between problem complexity and tractability. Case studies are conducted to demonstrate the proposed methods. Results show that the operation cost can be reduced effectively compared with the case without considering the demand correlation. PMID:24757433
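
    A hedged sketch of the simplest sampling-based view of a joint chance constraint (not the paper's relax-and-tight linearization): draw correlated demand scenarios, then grow a production plan until it covers the joint event "all demands satisfied" with the required probability. The demand model and all numbers are assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Correlated demands for two products: a joint multivariate normal model
mean = np.array([100.0, 80.0])
cov = np.array([[25.0, 12.0],
                [12.0, 16.0]])
scenarios = rng.multivariate_normal(mean, cov, size=10000)

# Joint chance constraint: P(plan covers BOTH demands) >= alpha,
# approximated on the sampled scenarios
alpha = 0.95

def coverage(plan):
    """Fraction of scenarios in which the plan meets every demand."""
    return np.mean(np.all(scenarios <= plan, axis=1))

# A naive plan built from the marginal 95% quantiles under-covers the
# JOINT event, since both demands must be met simultaneously
marginal_plan = np.quantile(scenarios, alpha, axis=0)

# Scale the plan up until joint coverage reaches alpha
plan = marginal_plan.copy()
while coverage(plan) < alpha:
    plan *= 1.001
```

    The gap between the marginal plan and the joint-feasible plan is exactly what exploiting the demand correlation structure, as in the paper, can shrink.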

  12. An approximation solution to refinery crude oil scheduling problem with demand uncertainty using joint constrained programming.

    PubMed

    Duan, Qianqian; Yang, Genke; Xu, Guanglin; Pan, Changchun

    2014-01-01

    This paper develops an approximation method for scheduling refinery crude oil operations that takes demand uncertainty into consideration. In the stochastic model the demand uncertainty is modeled as random variables which follow a joint multivariate distribution with a specific correlation structure. Compared to the deterministic models in existing works, the stochastic model can be more practical for optimizing crude oil operations. Using joint chance constraints, the demand uncertainty is treated by specifying a proximity level on the satisfaction of product demands. However, joint chance constraints are usually strongly nonlinear and consequently hard to handle directly. In this paper, an approximation method combining a relax-and-tight technique is used to approximately transform the joint chance constraints into a series of parameterized linear constraints, so that the complicated problem can be attacked iteratively. The basic idea behind this approach is to approximate, as far as possible, nonlinear constraints by many easily handled linear constraints, leading to a good balance between problem complexity and tractability. Case studies are conducted to demonstrate the proposed methods. Results show that the operation cost can be reduced effectively compared with the case in which the demand correlation is not considered.
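
The paper's relax-and-tight scheme is not spelled out in the abstract; as a simpler illustration of how a probabilistic demand requirement becomes a linear constraint, the classical deterministic equivalent for a single chance constraint with a normally distributed demand can be sketched (function name and interface are illustrative, not from the paper):

```python
from statistics import NormalDist

def chance_constraint_rhs(mu_d, sigma_d, eps):
    """Deterministic equivalent of the single chance constraint
    P(production >= demand) >= 1 - eps with demand ~ N(mu_d, sigma_d**2):
    the linear constraint  production >= mu_d + z_{1-eps} * sigma_d."""
    z = NormalDist().inv_cdf(1.0 - eps)  # standard normal quantile
    return mu_d + z * sigma_d
```

Tightening eps raises the required production linearly in sigma_d; the joint version treated in the paper couples several such constraints through the demand correlation structure.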

  13. Identification and Development of Leaders in the Navy Medical Department

    DTIC Science & Technology

    1990-07-20

    multivariate normal population) and the Kaiser-Meyer-Olkin measure of sampling adequacy (Norusis, 1988b). To allow further analysis of the factors...the Leadership Attribute domain and one from the Leadership Development domain). All factoring procedures passed the Kaiser-Meyer-Olkin Measure of...1.00000 DEVELI .47065 1.00000 MENTOR1 .45338 .69967 1.00000 Kaiser-Meyer-Olkin Measure of Sampling Adequacy = .69463 Bartlett Test of Sphericity

  14. Model-Based Clustering and Data Transformations for Gene Expression Data

    DTIC Science & Technology

    2001-04-30

    transformation parameters, e.g. Andrews, Gnanadesikan, and Warner (1973). Aitchison tests: Aitchison (1986) tested three aspects of the data for...N in the Box-Cox transformation in Equation (5) is estimated by maximum likelihood using the observations (Andrews, Gnanadesikan, and Warner 1973)...Compositional Data. Chapman and Hall. Andrews, D. F., R. Gnanadesikan, and J. L. Warner (1973). Methods for assessing multivariate normality. In P. R

  15. Catalog of Air Force Weather Technical Documents, 1941-2006

    DTIC Science & Technology

    2006-05-19

    radiosondes in current use in USA. Elementary discussion of statistical terms and concepts used for expressing accuracy or error is discussed. AWS TR 105...Techniques, Appendix B: Vorticity—An Elementary Discussion of the Concept, August 1956, 27pp. Formerly AWSM 105–50/1A. Provides the necessary back...steps involved in ordinary multiple linear regression. Conditional probability is calculated using transnormalized variables in the multivariate normal

  16. A Bayesian joint probability modeling approach for seasonal forecasting of streamflows at multiple sites

    NASA Astrophysics Data System (ADS)

    Wang, Q. J.; Robertson, D. E.; Chiew, F. H. S.

    2009-05-01

    Seasonal forecasting of streamflows can be highly valuable for water resources management. In this paper, a Bayesian joint probability (BJP) modeling approach for seasonal forecasting of streamflows at multiple sites is presented. A Box-Cox transformed multivariate normal distribution is proposed to model the joint distribution of future streamflows and their predictors such as antecedent streamflows and El Niño-Southern Oscillation indices and other climate indicators. Bayesian inference of model parameters and uncertainties is implemented using Markov chain Monte Carlo sampling, leading to joint probabilistic forecasts of streamflows at multiple sites. The model provides a parametric structure for quantifying relationships between variables, including intersite correlations. The Box-Cox transformed multivariate normal distribution has considerable flexibility for modeling a wide range of predictors and predictands. The Bayesian inference formulated allows the use of data that contain nonconcurrent and missing records. The model flexibility and data-handling ability means that the BJP modeling approach is potentially of wide practical application. The paper also presents a number of statistical measures and graphical methods for verification of probabilistic forecasts of continuous variables. Results for streamflows at three river gauges in the Murrumbidgee River catchment in southeast Australia show that the BJP modeling approach has good forecast quality and that the fitted model is consistent with observed data.
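
The core modeling device described here (a Box-Cox transform to a multivariate normal and back) can be sketched minimally, with made-up lognormal "streamflows" at two sites and a fixed transform parameter rather than one inferred by MCMC as in the BJP approach:

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox transform: (y**lam - 1)/lam for lam != 0, log(y) for lam == 0."""
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

def box_cox_inv(z, lam):
    """Inverse Box-Cox transform, mapping normal-space values back to flows."""
    z = np.asarray(z, dtype=float)
    return np.exp(z) if lam == 0 else (lam * z + 1.0) ** (1.0 / lam)

# Fit a bivariate normal to transformed flows at two sites, then sample
rng = np.random.default_rng(0)
flows = rng.lognormal(mean=[3.0, 3.5], sigma=0.4, size=(200, 2))  # synthetic data
lam = 0.2                                     # fixed for illustration
z = box_cox(flows, lam)
mu, cov = z.mean(axis=0), np.cov(z.T)         # intersite correlation lives in cov
sim = box_cox_inv(rng.multivariate_normal(mu, cov, size=1000), lam)
```

The simulated draws preserve the intersite correlation through the covariance of the transformed variables, which is the role the multivariate normal plays in the BJP model.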

  17. A Prospective Cohort Study on Radiation-induced Hypothyroidism: Development of an NTCP Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boomsma, Marjolein J.; Bijl, Hendrik P.; Christianen, Miranda E.M.C.

    Purpose: To establish a multivariate normal tissue complication probability (NTCP) model for radiation-induced hypothyroidism. Methods and Materials: The thyroid-stimulating hormone (TSH) level of 105 patients treated with (chemo-) radiation therapy for head-and-neck cancer was prospectively measured during a median follow-up of 2.5 years. Hypothyroidism was defined as elevated serum TSH with decreased or normal free thyroxin (T4). A multivariate logistic regression model with bootstrapping was used to determine the most important prognostic variables for radiation-induced hypothyroidism. Results: Thirty-five patients (33%) developed primary hypothyroidism within 2 years after radiation therapy. An NTCP model based on 2 variables, including the mean thyroid gland dose and the thyroid gland volume, was most predictive for radiation-induced hypothyroidism. NTCP values increased with higher mean thyroid gland dose (odds ratio [OR]: 1.064/Gy) and decreased with higher thyroid gland volume (OR: 0.826/cm³). Model performance was good with an area under the curve (AUC) of 0.85. Conclusions: This is the first prospective study resulting in an NTCP model for radiation-induced hypothyroidism. The probability of hypothyroidism rises with increasing dose to the thyroid gland, whereas it reduces with increasing thyroid gland volume.
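
The record reports the two slopes of the logistic NTCP model as odds ratios but not the intercept; a sketch with a placeholder intercept b0 (hypothetical, chosen only for illustration) shows how such a model maps mean dose and gland volume to a complication probability:

```python
import math

def ntcp_hypothyroidism(mean_dose_gy, volume_cm3, b0=-1.0):
    """Two-variable logistic NTCP model. The slopes reproduce the reported
    odds ratios (1.064 per Gy of mean thyroid dose, 0.826 per cm^3 of
    thyroid volume); the intercept b0 is a placeholder, as the abstract
    does not report it."""
    s = b0 + math.log(1.064) * mean_dose_gy + math.log(0.826) * volume_cm3
    return 1.0 / (1.0 + math.exp(-s))
```

Whatever the intercept, the model reproduces the qualitative conclusion: NTCP rises with mean dose and falls with thyroid volume.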

  18. Downregulation of SASH1 correlates with poor prognosis in cervical cancer.

    PubMed

    Xie, J; Zhang, W; Zhang, J; Lv, Q-Y; Luan, Y-F

    2017-10-01

    The aim of this study was to analyze the association of SASH1 expression with clinicopathological features and prognosis in patients with cervical cancer. The expression of SASH1 mRNA and protein in cervical cancer tissues and matched normal cervical tissues was detected by real-time PCR and immunohistochemistry. Based on the above findings, the association between SASH1 expression and clinicopathological features was analyzed. Overall survival was evaluated using the Kaplan-Meier method. The variables were used in univariate and multivariate analysis by the Cox proportional hazards model. The results demonstrated that both SASH1 mRNA and protein were downregulated in cervical cancer tissues compared with matched normal tissues (both p < 0.05). Also, decreased SASH1 expression in cervical cancer was found to be significantly associated with high FIGO stage (p = 0.001), lymph node metastasis (p = 0.003) and differentiation (p = 0.018). Furthermore, Kaplan-Meier analysis demonstrated that a low SASH1 expression level was associated with poorer overall survival (p < 0.01). Univariate and multivariate analyses indicated that SASH1 status was an independent prognostic factor for patients with cervical cancer. These findings suggest that SASH1 can be useful as a new prognostic marker and therapeutic target in cervical cancer patients.

  19. Cloze, Discourse, and Approximations to English.

    ERIC Educational Resources Information Center

    Oller, John W., Jr.

    Five orders of approximation to normal English prose were constructed; 5th, 10th, 25th, 50th, and 100th plus. Five cloze tests were then constructed by inserting blanks for deleted words in 5 word segments (5th order), 10 word segments (10th), 25 word segments (25th), 50 word segments (50th), and 100 word segments of five different passages of…

  20. Inelastic neutron scattering spectrum of cyclotrimethylenetrinitramine: a comparison with solid-state electronic structure calculations.

    PubMed

    Ciezak, Jennifer A; Trevino, S F

    2006-04-20

    Solid-state geometry optimizations and corresponding normal-mode analysis of the widely used energetic material cyclotrimethylenetrinitramine (RDX) were performed using density functional theory with both the generalized gradient approximation (BLYP and BP functionals) and the local density approximation (PWC and VWN functionals). The structural results were found to be in good agreement with experimental neutron diffraction data and previously reported calculations based on the isolated-molecule approximation. The vibrational inelastic neutron scattering (INS) spectrum of polycrystalline RDX was measured and compared with simulated INS constructed from the solid-state calculations. The vibrational frequencies calculated from the solid-state methods had average deviations of 10 cm(-1) or less, whereas previously published frequencies based on an isolated-molecule approximation had deviations of 65 cm(-1) or less, illustrating the importance of including crystalline forces. On the basis of the calculations and analysis, it was possible to assign the normal modes and symmetries, which agree well with previous assignments. Four possible "doorway modes" were found in the energy range defined by the lattice modes, which were all found to contain fundamental contributions from rotation of the nitro groups.

  1. Effect of the artificial sweetener, sucralose, on gastric emptying and incretin hormone release in healthy subjects.

    PubMed

    Ma, Jing; Bellon, Max; Wishart, Judith M; Young, Richard; Blackshaw, L Ashley; Jones, Karen L; Horowitz, Michael; Rayner, Christopher K

    2009-04-01

    The incretin hormones, glucagon-like peptide-1 (GLP-1) and glucose-dependent insulinotropic polypeptide (GIP), play an important role in glucose homeostasis in both health and diabetes. In mice, sucralose, an artificial sweetener, stimulates GLP-1 release via sweet taste receptors on enteroendocrine cells. We studied blood glucose, plasma levels of insulin, GLP-1, and GIP, and gastric emptying (by a breath test) in 7 healthy humans after intragastric infusions of 1) 50 g sucrose in water to a total volume of 500 ml (approximately 290 mosmol/l), 2) 80 mg sucralose in 500 ml normal saline (approximately 300 mosmol/l, 0.4 mM sucralose), 3) 800 mg sucralose in 500 ml normal saline (approximately 300 mosmol/l, 4 mM sucralose), and 4) 500 ml normal saline (approximately 300 mosmol/l), all labeled with 150 mg 13C-acetate. Blood glucose increased only in response to sucrose (P<0.05). GLP-1, GIP, and insulin also increased after sucrose (P=0.0001) but not after either load of sucralose or saline. Gastric emptying of sucrose was slower than that of saline (t50: 87.4+/-4.1 min vs. 74.7+/-3.2 min, P<0.005), whereas there were no differences in t50 between sucralose 0.4 mM (73.7+/-3.1 min) or 4 mM (76.7+/-3.1 min) and saline. We conclude that sucralose, delivered by intragastric infusion, does not stimulate insulin, GLP-1, or GIP release or slow gastric emptying in healthy humans.

  2. Approximate Single-Diode Photovoltaic Model for Efficient I-V Characteristics Estimation

    PubMed Central

    Ting, T. O.; Zhang, Nan; Guan, Sheng-Uei; Wong, Prudence W. H.

    2013-01-01

    Precise photovoltaic (PV) behavior models are normally described by nonlinear analytical equations. To solve such equations, it is necessary to use iterative procedures. Aiming to make the computation easier, this paper proposes an approximate single-diode PV model that enables high-speed predictions for the electrical characteristics of commercial PV modules. Based on the experimental data, statistical analysis is conducted to validate the approximate model. Simulation results show that the calculated current-voltage (I-V) characteristics fit the measured data with high accuracy. Furthermore, compared with the existing modeling methods, the proposed model reduces the simulation time by approximately 30% in this work. PMID:24298205
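
The paper's specific approximation is not given in the abstract; for context, the standard implicit single-diode equation whose iterative solution the paper seeks to avoid can be sketched with a plain fixed-point loop and illustrative (assumed) module parameters:

```python
import math

def single_diode_current(v, i_ph=8.0, i_0=1e-9, n=1.3, r_s=0.2, r_sh=300.0,
                         v_t=0.02585, n_cells=60, iters=100):
    """Terminal current at voltage v from the implicit single-diode equation
    I = Iph - I0*(exp((V + I*Rs)/(n*Vt*Ncells)) - 1) - (V + I*Rs)/Rsh,
    solved by fixed-point iteration. Parameter values are illustrative."""
    nvt = n * v_t * n_cells
    i = i_ph  # initial guess: the photocurrent
    for _ in range(iters):
        i = i_ph - i_0 * (math.exp((v + i * r_s) / nvt) - 1.0) - (v + i * r_s) / r_sh
    return i
```

Sweeping v from 0 toward open circuit traces the I-V curve; the cost of such per-point iteration is what motivates explicit approximate models like the one proposed.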

  3. Distribution of curvature of 3D nonrotational surfaces approximating the corneal topography

    NASA Astrophysics Data System (ADS)

    Kasprzak, Henryk T.

    1998-10-01

    The first part of the paper presents the analytical curves used to approximate the corneal profile. Next, some definitions of 3D surface curvature, such as main normal sections, main radii of curvature and their orientations, are given. Four nonrotational 3D surfaces approximating the corneal topography are proposed: an ellipsoid, a surface based on the hyperbolic cosine function, a sphero-cylindrical surface, and a toroidal surface. The 3D surface and the contour plots of the main radii of curvature and their orientations are shown for each of the four nonrotational approximations of the cornea. The results of the calculations are discussed from the point of view of videokeratometric images.

  4. Low-dimensional approximation searching strategy for transfer entropy from non-uniform embedding

    PubMed Central

    2018-01-01

    Transfer entropy from non-uniform embedding is a popular tool for the inference of causal relationships among dynamical subsystems. In this study we present an approach that makes use of low-dimensional conditional mutual information quantities to decompose the original high-dimensional conditional mutual information in the searching procedure of non-uniform embedding for significant variables at different lags. We perform a series of simulation experiments to assess the sensitivity and specificity of our proposed method to demonstrate its advantage compared to previous algorithms. The results provide concrete evidence that low-dimensional approximations can help to improve the statistical accuracy of transfer entropy in multivariate causality analysis and yield a better performance over other methods. The proposed method is especially efficient as the data length grows. PMID:29547669

  5. Method to analyze remotely sensed spectral data

    DOEpatents

    Stork, Christopher L [Albuquerque, NM; Van Benthem, Mark H [Middletown, DE

    2009-02-17

    A fast and rigorous multivariate curve resolution (MCR) algorithm is applied to remotely sensed spectral data. The algorithm is applicable in the solar-reflective spectral region, comprising the visible to the shortwave infrared (ranging from approximately 0.4 to 2.5 µm), midwave infrared, and thermal emission spectral region, comprising the thermal infrared (ranging from approximately 8 to 15 µm). For example, employing minimal a priori knowledge, notably non-negativity constraints on the extracted endmember profiles and a constant abundance constraint for the atmospheric upwelling component, MCR can be used to successfully compensate thermal infrared hyperspectral images for atmospheric upwelling and, thereby, transmittance effects. Further, MCR can accurately estimate the relative spectral absorption coefficients and thermal contrast distribution of a gas plume component near the minimum detectable quantity.
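
The record describes MCR with non-negativity constraints; one common way to implement this is alternating least squares with clipping. A minimal sketch on synthetic mixture data follows (a crude projection step, not a rigorous NNLS solver and not the patented algorithm itself):

```python
import numpy as np

def mcr_als(d, n_components, iters=200, seed=0):
    """Alternating least squares MCR for D ~ C @ S.T with non-negativity
    enforced by clipping negative entries after each least-squares update."""
    rng = np.random.default_rng(seed)
    s = rng.random((d.shape[1], n_components))          # initial spectra
    for _ in range(iters):
        c = np.clip(d @ s @ np.linalg.pinv(s.T @ s), 0.0, None)   # abundances
        s = np.clip(d.T @ c @ np.linalg.pinv(c.T @ c), 0.0, None) # spectra
    return c, s

# Synthetic mixture: 30 pixels, 10 spectral bands, 2 endmembers
rng = np.random.default_rng(1)
d = rng.random((30, 2)) @ rng.random((10, 2)).T
c_hat, s_hat = mcr_als(d, 2)
rel_err = np.linalg.norm(d - c_hat @ s_hat.T) / np.linalg.norm(d)
```

On exactly bilinear non-negative data the factorization recovers the mixture up to scaling and permutation, which is the usual MCR ambiguity.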

  6. High translational energy release in H2 (D2) associative desorption from H (D) chemisorbed on C(0001).

    PubMed

    Baouche, S; Gamborg, G; Petrunin, V V; Luntz, A C; Baurichter, A; Hornekaer, L

    2006-08-28

    Highly energetic translational energy distributions are reported for hydrogen and deuterium molecules desorbing associatively from the atomic chemisorption states on highly oriented pyrolytic graphite (HOPG). Laser assisted associative desorption is used to measure the time of flight of molecules desorbing from a hydrogen (deuterium) saturated HOPG surface produced by atomic exposure from a thermal atom source at around 2100 K. The translational energy distributions normal to the surface are very broad, from approximately 0.5 to approximately 3 eV, with a peak at approximately 1.3 eV. The highest translational energy measured is close to the theoretically predicted barrier height. The angular distribution of the desorbing molecules is sharply peaked along the surface normal and is consistent with thermal broadening contributing to energy release parallel to the surface. All results are in qualitative agreement with recent density functional theory calculations suggesting a lowest energy para-type dimer recombination path.

  7. A Comparison Study of Multivariate Fixed Models and Gene Association with Multiple Traits (GAMuT) for Next-Generation Sequencing

    PubMed Central

    Chiu, Chi-yang; Jung, Jeesun; Wang, Yifan; Weeks, Daniel E.; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Amos, Christopher I.; Mills, James L.; Boehnke, Michael; Xiong, Momiao; Fan, Ruzong

    2016-01-01

    In this paper, extensive simulations are performed to compare two statistical methods to analyze multiple correlated quantitative phenotypes: (1) approximate F-distributed tests of multivariate functional linear models (MFLM) and additive models of multivariate analysis of variance (MANOVA), and (2) Gene Association with Multiple Traits (GAMuT) for association testing of high-dimensional genotype data. It is shown that approximate F-distributed tests of MFLM and MANOVA have higher power and are more appropriate for major gene association analysis (i.e., scenarios in which some genetic variants have relatively large effects on the phenotypes); GAMuT has higher power and is more appropriate for analyzing polygenic effects (i.e., effects from a large number of genetic variants each of which contributes a small amount to the phenotypes). MFLM and MANOVA are very flexible and can be used to perform association analysis for: (i) rare variants, (ii) common variants, and (iii) a combination of rare and common variants. Although GAMuT was designed to analyze rare variants, it can be applied to analyze a combination of rare and common variants and it performs well when (1) the number of genetic variants is large and (2) each variant contributes a small amount to the phenotypes (i.e., polygenes). MFLM and MANOVA are fixed effect models which perform well for major gene association analysis. GAMuT can be viewed as an extension of sequence kernel association tests (SKAT). Both GAMuT and SKAT are more appropriate for analyzing polygenic effects and they perform well not only in the rare variant case, but also in the case of a combination of rare and common variants. Data analyses of European cohorts and the Trinity Students Study are presented to compare the performance of the two methods. PMID:27917525

  8. Signal to noise ratio of energy selective x-ray photon counting systems with pileup.

    PubMed

    Alvarez, Robert E

    2014-11-01

    To derive fundamental limits on the effect of pulse pileup and quantum noise in photon counting detectors on the signal to noise ratio (SNR) and noise variance of energy selective x-ray imaging systems. An idealized model of the response of counting detectors to pulse pileup is used. The model assumes a nonparalyzable response and delta function pulse shape. The model is used to derive analytical formulas for the noise and energy spectrum of the recorded photons with pulse pileup. These formulas are first verified with a Monte Carlo simulation. They are then used with a method introduced in a previous paper [R. E. Alvarez, "Near optimal energy selective x-ray imaging system performance with simple detectors," Med. Phys. 37, 822-841 (2010)] to compare the signal to noise ratio with pileup to the ideal SNR with perfect energy resolution. Detectors studied include photon counting detectors with pulse height analysis (PHA), detectors that simultaneously measure the number of photons and the integrated energy (NQ detector), and conventional energy integrating and photon counting detectors. The increase in the A-vector variance with dead time is also computed and compared to the Monte Carlo results. A formula for the covariance of the NQ detector is developed. The validity of the constant covariance approximation to the Cramér-Rao lower bound (CRLB) for larger counts is tested. The SNR becomes smaller than the conventional energy integrating detector (Q) SNR for 0.52, 0.65, and 0.78 expected number of photons per dead time for counting (N), two, and four bin PHA detectors, respectively. The NQ detector SNR is always larger than the N and Q SNR but only marginally so for larger dead times. Its noise variance increases by a factor of approximately 3 and 5 for the A1 and A2 components as the dead time parameter increases from 0 to 0.8 photons per dead time. With four bin PHA data, the increase in variance is approximately 2 and 4 times. 
The constant covariance approximation to the CRLB is valid for larger counts such as those used in medical imaging. The SNR decreases rapidly as dead time increases. This decrease places stringent limits on allowable dead times with the high count rates required for medical imaging systems. The probability distribution of the idealized data with pileup is shown to be accurately described as a multivariate normal for expected counts greater than those typically utilized in medical imaging systems. The constant covariance approximation to the CRLB is also shown to be valid in this case. A new formula for the covariance of the NQ detector with pileup is derived and validated.
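
The idealized nonparalyzable dead-time model underlying this derivation has a standard closed form for the recorded count rate, which can be sketched along with its inverse (the well-known relation, not the paper's full spectral derivation):

```python
def recorded_rate(true_rate, dead_time):
    """Nonparalyzable dead-time model: each recorded event blocks the
    detector for dead_time seconds, so the recorded rate m = n/(1 + n*tau)
    saturates at 1/dead_time as the true rate n grows."""
    return true_rate / (1.0 + true_rate * dead_time)

def true_rate_from_recorded(recorded, dead_time):
    """Invert the nonparalyzable model: n = m/(1 - m*tau)."""
    return recorded / (1.0 - recorded * dead_time)
```

The saturation of the recorded rate is why the abstract's dead-time parameter, the expected number of photons per dead time, controls the SNR loss.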

  9. Signal to noise ratio of energy selective x-ray photon counting systems with pileup

    PubMed Central

    Alvarez, Robert E.

    2014-01-01

    Purpose: To derive fundamental limits on the effect of pulse pileup and quantum noise in photon counting detectors on the signal to noise ratio (SNR) and noise variance of energy selective x-ray imaging systems. Methods: An idealized model of the response of counting detectors to pulse pileup is used. The model assumes a nonparalyzable response and delta function pulse shape. The model is used to derive analytical formulas for the noise and energy spectrum of the recorded photons with pulse pileup. These formulas are first verified with a Monte Carlo simulation. They are then used with a method introduced in a previous paper [R. E. Alvarez, “Near optimal energy selective x-ray imaging system performance with simple detectors,” Med. Phys. 37, 822–841 (2010)] to compare the signal to noise ratio with pileup to the ideal SNR with perfect energy resolution. Detectors studied include photon counting detectors with pulse height analysis (PHA), detectors that simultaneously measure the number of photons and the integrated energy (NQ detector), and conventional energy integrating and photon counting detectors. The increase in the A-vector variance with dead time is also computed and compared to the Monte Carlo results. A formula for the covariance of the NQ detector is developed. The validity of the constant covariance approximation to the Cramér–Rao lower bound (CRLB) for larger counts is tested. Results: The SNR becomes smaller than the conventional energy integrating detector (Q) SNR for 0.52, 0.65, and 0.78 expected number of photons per dead time for counting (N), two, and four bin PHA detectors, respectively. The NQ detector SNR is always larger than the N and Q SNR but only marginally so for larger dead times. Its noise variance increases by a factor of approximately 3 and 5 for the A1 and A2 components as the dead time parameter increases from 0 to 0.8 photons per dead time. With four bin PHA data, the increase in variance is approximately 2 and 4 times. 
The constant covariance approximation to the CRLB is valid for larger counts such as those used in medical imaging. Conclusions: The SNR decreases rapidly as dead time increases. This decrease places stringent limits on allowable dead times with the high count rates required for medical imaging systems. The probability distribution of the idealized data with pileup is shown to be accurately described as a multivariate normal for expected counts greater than those typically utilized in medical imaging systems. The constant covariance approximation to the CRLB is also shown to be valid in this case. A new formula for the covariance of the NQ detector with pileup is derived and validated. PMID:25370642

  10. Low Body Mass Index, Serum Creatinine, and Cause of Death in Patients Undergoing Percutaneous Coronary Intervention.

    PubMed

    Goel, Kashish; Gulati, Rajiv; Reeder, Guy S; Lennon, Ryan J; Lewis, Bradley R; Behfar, Atta; Sandhu, Gurpreet S; Rihal, Charanjit S; Singh, Mandeep

    2016-10-31

    Low body mass index (BMI) and serum creatinine are surrogate markers of frailty and sarcopenia. Their relationship with cause-specific mortality in elderly patients undergoing percutaneous coronary intervention is not well studied. We determined long-term cardiovascular and noncardiovascular mortality in 9394 consecutive patients aged ≥65 years who underwent percutaneous coronary intervention from 2000 to 2011. BMI and serum creatinine were divided into 4 categories. During a median follow-up of 4.2 years (interquartile range 1.8-7.3 years), 3243 patients (33.4%) died. In the multivariable model, compared with patients with normal BMI, patients with low BMI had significantly increased all-cause mortality (hazard ratio [HR] 1.4, 95% CI 1.1-1.7), which was related to both cardiovascular causes (HR 1.4, 95% CI 1.0-1.8) and noncardiovascular causes (HR 1.4, 95% CI 1.06-1.9). Compared with normal BMI, significant reduction was noted in patients who were overweight and obese in terms of cardiovascular mortality (overweight: HR 0.77, 95% CI 0.67-0.88; obese: HR 0.80, 95% CI 0.70-0.93) and noncardiovascular mortality (overweight: HR 0.85, 95% CI 0.74-0.97; obese: HR 0.82, 95% CI 0.72-0.95). In a multivariable model, in patients with normal BMI, low creatinine (≤0.70 mg/dL) was significantly associated with increased all-cause mortality (HR 1.8, 95% CI 1.3-2.5) and cardiovascular mortality (HR 2.3, 95% CI 1.4-3.8) compared with patients with normal creatinine (0.71-1.0 mg/dL); however, this was not observed in other BMI categories. We identified a new subgroup of patients with low serum creatinine and normal BMI that was associated with increased all-cause mortality and cardiovascular mortality in elderly patients undergoing percutaneous coronary intervention. Low BMI was associated with increased cardiovascular and noncardiovascular mortality. 
Nutritional support, resistance training, and weight-gain strategies may have potential roles for these patients undergoing percutaneous coronary intervention. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.

  11. Anharmonic vibrations around a triaxial nuclear deformation “frozen” to γ = 30°

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buganu, Petrica, E-mail: buganu@theory.nipne.ro; Budaca, Radu

    2015-12-07

    The Davydov-Chaban Hamiltonian with a sextic oscillator potential for the variable β and γ fixed to 30° is exactly solved for the ground and β bands and approximately for the γ band. The model is called Z(4)-Sextic in connection with the already established Z(4) solution. The energy spectra, normalized to the energy of the first excited state, and several B(E2) transition probabilities, normalized to the B(E2) transition from the first excited state to the ground state, depend on a single parameter α. By varying α within a sufficiently large interval, a shape phase transition from an approximately spherical shape to a deformed one is evidenced.

  12. Physical models for the normal YORP and diurnal Yarkovsky effects

    NASA Astrophysics Data System (ADS)

    Golubov, O.; Kravets, Y.; Krugly, Yu. N.; Scheeres, D. J.

    2016-06-01

    We propose an analytic model for the normal Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) and diurnal Yarkovsky effects experienced by a convex asteroid. Both the YORP torque and the Yarkovsky force are expressed as integrals of a universal function over the surface of an asteroid. Although in general this function can only be calculated numerically from the solution of the heat conductivity equation, approximate solutions can be obtained in quadratures for important limiting cases. We consider three such simplified models: Rubincam's approximation (zero heat conductivity), low thermal inertia limit (including the next order correction and thus valid for small heat conductivity), and high thermal inertia limit (valid for large heat conductivity). All three simplified models are compared with the exact solution.

  13. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
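
The successive-approximations procedure this record analyzes reduces, at step-size 1, to what is now known as the EM algorithm. A minimal sketch for a two-component univariate normal mixture, with an illustrative initialization from the data range (not the paper's notation), is:

```python
import math
import random

def em_two_normals(x, iters=60):
    """EM iteration (the step-size-1 case of the generalized
    steepest-ascent procedure) for a two-component normal mixture."""
    mu = [min(x), max(x)]          # crude initialization from the data range
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for xi in x:
            p = [w[k] / math.sqrt(2.0 * math.pi * var[k])
                 * math.exp(-(xi - mu[k]) ** 2 / (2.0 * var[k])) for k in range(2)]
            total = p[0] + p[1]
            resp.append([p[0] / total, p[1] / total])
        # M-step: responsibility-weighted sample moments
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(x)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            var[k] = sum(r[k] * (xi - mu[k]) ** 2 for r, xi in zip(resp, x)) / nk
    return w, mu, var

# Well-separated synthetic mixture
random.seed(0)
data = ([random.gauss(0.0, 1.0) for _ in range(300)]
        + [random.gauss(6.0, 1.0) for _ in range(300)])
w, mu, var = em_two_normals(data)
```

As the record notes, local convergence of the generalized procedure holds for step-sizes between 0 and 2; this sketch is the familiar step-size-1 special case.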

  14. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  15. High-frequency Born synthetic seismograms based on coupled normal modes

    USGS Publications Warehouse

    Pollitz, F.

    2011-01-01

High-frequency and full waveform synthetic seismograms on a 3-D laterally heterogeneous earth model are simulated using the theory of coupled normal modes. The set of coupled integral equations that describe the 3-D response is simplified into a set of uncoupled integral equations by using the Born approximation to calculate scattered wavefields and the pure-path approximation to modulate the phase of incident and scattered wavefields. This depends upon a decomposition of the aspherical structure into smooth and rough components. The uncoupled integral equations are discretized and solved in the frequency domain, and time domain results are obtained by inverse Fourier transform. Examples show the utility of the normal mode approach in synthesizing the seismic wavefields resulting from interaction with a combination of rough and smooth structural heterogeneities. This approach is applied to ~4 Hz shallow crustal wave propagation around the site of the San Andreas Fault Observatory at Depth (SAFOD). © The Author, Geophysical Journal International © 2011 RAS.

  16. Tolrestat kinetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hicks, D.R.; Kraml, M.; Cayen, M.N.

The kinetics of tolrestat, a potent inhibitor of aldose reductase, were examined. Serum concentrations of tolrestat and of total ¹⁴C were measured after dosing normal subjects and subjects with diabetes with ¹⁴C-labeled tolrestat. In normal subjects, tolrestat was rapidly absorbed and disappearance from serum was biphasic. Distribution and elimination half-lives were approximately 2 and 10 to 12 hr, respectively, after single and multiple doses. Unchanged tolrestat accounted for the major portion of ¹⁴C in serum. Radioactivity was rapidly and completely excreted in urine and feces in an approximate ratio of 2:1. Findings were much the same in subjects with diabetes. In normal subjects, the kinetics of oral tolrestat were independent of dose in the 10 to 800 mg range. Repetitive dosing did not result in unexpected accumulation. Tolrestat was more than 99% bound to serum protein; it did not compete with warfarin for binding sites but was displaced to some extent by high concentrations of tolbutamide or salicylate.

  17. Method for characterization of a spherically bent crystal for Kα X-ray imaging of laser plasmas using a focusing monochromator geometry

    DOEpatents

    Kugland, Nathan; Doeppner, Tilo; Glenzer, Siegfried; Constantin, Carmen; Niemann, Chris; Neumayer, Paul

    2015-04-07

    A method is provided for characterizing spectrometric properties (e.g., peak reflectivity, reflection curve width, and Bragg angle offset) of the Kα emission line reflected narrowly off angle of the direct reflection of a bent crystal, in particular of a spherically bent quartz 200 crystal, by analyzing the off-angle x-ray emission from a stronger emission line reflected at angles far from normal incidence. The bent quartz crystal can therefore accurately image argon Kα x-rays at near-normal incidence (Bragg angle of approximately 81 degrees). The method is useful for in-situ calibration of instruments employing the crystal as a grating by first operating the crystal as a high throughput focusing monochromator on the Rowland circle at angles far from normal incidence (Bragg angle approximately 68 degrees) to make a reflection curve with the He-like x-rays, such as the He-α emission line observed from a laser-excited plasma.

  18. Near infrared spectroscopy combined with multivariate analysis for monitoring the ethanol precipitation process of fraction I + II + III supernatant in human albumin separation

    NASA Astrophysics Data System (ADS)

    Li, Can; Wang, Fei; Zang, Lixuan; Zang, Hengchang; Alcalà, Manel; Nie, Lei; Wang, Mingyu; Li, Lian

    2017-03-01

    Nowadays, as a powerful process analytical tool, near infrared spectroscopy (NIRS) has been widely applied in process monitoring. In the present work, NIRS combined with multivariate analysis was used to monitor the ethanol precipitation process of fraction I + II + III (FI + II + III) supernatant in human albumin (HA) separation, to achieve simultaneous qualitative and quantitative monitoring and assure the product's quality. First, a qualitative model was established using principal component analysis (PCA) with 6 of 8 normal batches of samples, and evaluated with the remaining 2 normal batches and 3 abnormal batches. The results showed that the first principal component (PC1) score chart could be successfully used for fault detection and diagnosis. Then, two quantitative models were built with 6 of 8 normal batches to determine the content of total protein (TP) and HA separately using a partial least squares regression (PLS-R) strategy, and the models were validated with the 2 remaining normal batches. The determination coefficient of validation (Rp2), root mean square error of cross validation (RMSECV), root mean square error of prediction (RMSEP), and ratio of performance deviation (RPD) were 0.975, 0.501 g/L, 0.465 g/L, and 5.57 for TP, and 0.969, 0.530 g/L, 0.341 g/L, and 5.47 for HA, respectively. The results showed that the established models could give a rapid and accurate measurement of the content of TP and HA, indicating that NIRS is an effective tool for simultaneous qualitative and quantitative monitoring of the ethanol precipitation process of FI + II + III supernatant. This research has significant reference value for assuring the quality and improving the recovery ratio of HA at industrial scale using NIRS.
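The PC1 score chart idea can be illustrated with a toy sketch. This is a generic PCA fault-detection example on synthetic "spectra", not the authors' NIRS data; the ±3σ control limit on calibration scores is an assumed convention.

```python
# Toy PC1 score chart: fit PCA on normal-batch spectra, then flag new spectra
# whose PC1 score leaves +/- 3 sigma control limits set on calibration scores.
import numpy as np

rng = np.random.default_rng(2)
wl = np.linspace(0, 3, 50)
base = np.sin(wl)                              # stand-in baseline "spectrum"
comp = np.cos(wl)                              # systematic variation direction
t = rng.normal(0, 1, (40, 1))                  # batch-to-batch variation
normal = base + 0.1 * t * comp + rng.normal(0, 0.02, (40, 50))
abnormal = base + 0.8 * comp + rng.normal(0, 0.02, (5, 50))  # drifted batch

mu = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mu, full_matrices=False)
pc1 = Vt[0]                                    # first PCA loading vector

scores_cal = (normal - mu) @ pc1               # PC1 scores of normal batches
limit = 3 * scores_cal.std()                   # +/- 3 sigma control limit
faults = np.abs((abnormal - mu) @ pc1) > limit # fault detection on new batch
```

The drifted spectra project far outside the calibration score band, which is the mechanism behind using a PC1 score chart for fault detection.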

  19. Vibration-based structural health monitoring using adaptive statistical method under varying environmental condition

    NASA Astrophysics Data System (ADS)

    Jin, Seung-Seop; Jung, Hyung-Jo

    2014-03-01

    It is well known that the dynamic properties of a structure, such as natural frequencies, depend not only on damage but also on environmental conditions (e.g., temperature). The variation in dynamic characteristics of a structure due to environmental conditions may mask damage of the structure. Without taking the change of environmental conditions into account, false-positive or false-negative damage diagnoses may occur, making structural health monitoring unreliable. To address this problem, many researchers have constructed regression models that relate structural responses to environmental factors. The key to success of this approach is the formulation between the input and output variables of the regression model so as to take the environmental variations into account. However, it is quite challenging to determine proper environmental variables and measurement locations in advance that fully represent the relationship between the structural responses and the environmental variations. One alternative (i.e., novelty detection) is to remove the variations caused by environmental factors from the structural responses by using multivariate statistical analysis (e.g., principal component analysis (PCA), factor analysis, etc.). The success of this method depends strongly on the accuracy of the description of the normal condition. Generally, there is no prior information on the normal condition during data acquisition, so the normal condition is determined subjectively, with human intervention. The proposed method is a novel adaptive multivariate statistical analysis for monitoring structural damage under environmental change. One advantage of this method is the ability of generative learning to capture the intrinsic characteristics of the normal condition. The proposed method is tested on numerically simulated data for a range of measurement noise under environmental variation. A comparative study with conventional methods (i.e., a fixed reference scheme) demonstrates the superior performance of the proposed method for structural damage detection.

  20. Near infrared spectroscopy combined with multivariate analysis for monitoring the ethanol precipitation process of fraction I+II+III supernatant in human albumin separation.

    PubMed

    Li, Can; Wang, Fei; Zang, Lixuan; Zang, Hengchang; Alcalà, Manel; Nie, Lei; Wang, Mingyu; Li, Lian

    2017-03-15

    Nowadays, as a powerful process analytical tool, near infrared spectroscopy (NIRS) has been widely applied in process monitoring. In the present work, NIRS combined with multivariate analysis was used to monitor the ethanol precipitation process of fraction I+II+III (FI+II+III) supernatant in human albumin (HA) separation, to achieve simultaneous qualitative and quantitative monitoring and assure the product's quality. First, a qualitative model was established using principal component analysis (PCA) with 6 of 8 normal batches of samples, and evaluated with the remaining 2 normal batches and 3 abnormal batches. The results showed that the first principal component (PC1) score chart could be successfully used for fault detection and diagnosis. Then, two quantitative models were built with 6 of 8 normal batches to determine the content of total protein (TP) and HA separately using a partial least squares regression (PLS-R) strategy, and the models were validated with the 2 remaining normal batches. The determination coefficient of validation (Rp2), root mean square error of cross validation (RMSECV), root mean square error of prediction (RMSEP), and ratio of performance deviation (RPD) were 0.975, 0.501 g/L, 0.465 g/L, and 5.57 for TP, and 0.969, 0.530 g/L, 0.341 g/L, and 5.47 for HA, respectively. The results showed that the established models could give a rapid and accurate measurement of the content of TP and HA, indicating that NIRS is an effective tool for simultaneous qualitative and quantitative monitoring of the ethanol precipitation process of FI+II+III supernatant. This research has significant reference value for assuring the quality and improving the recovery ratio of HA at industrial scale using NIRS. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Recent work on material interface reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosso, S.J.; Swartz, B.K.

    1997-12-31

    For the last 15 years, many Eulerian codes have relied on a series of piecewise linear interface reconstruction algorithms developed by David Youngs. In a typical Youngs' method, the material interfaces were reconstructed based upon nearby cell values of volume fractions of each material. The interfaces were locally represented by linear segments in two dimensions and by pieces of planes in three dimensions. The first step in such reconstruction was to locally approximate an interface normal. In Youngs' 3D method, a local gradient of a cell-volume-fraction function was estimated and taken to be the local interface normal. A linear interface was moved perpendicular to the now known normal until the mass behind it matched the material volume fraction for the cell in question. But for distorted or nonorthogonal meshes, the gradient normal estimate didn't accurately match that of linear material interfaces. Moreover, curved material interfaces were also poorly represented. The authors present some recent work on the computation of more accurate interface normals, without necessarily increasing stencil size. Their estimate of the normal is made using an iterative process that, given mass fractions for nearby cells of known but arbitrary variable density, converges in 3 or 4 passes in practice (and quadratically, like Newton's method, in principle). The method reproduces a linear interface in both orthogonal and nonorthogonal meshes. The local linear approximation is generally 2nd-order accurate, with a 1st-order accurate normal for curved interfaces, in both two- and three-dimensional polyhedral meshes. Recent work demonstrating the interface reconstruction for curved surfaces will be discussed.

  2. astroABC : An Approximate Bayesian Computation Sequential Monte Carlo sampler for cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Jennings, E.; Madigan, M.

    2017-04-01

    Given the complexity of modern cosmological parameter inference, where we are faced with non-Gaussian data and noise, correlated systematics, and multi-probe correlated datasets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in cases where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of processes across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features of this new sampler include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimation using scikit-learn's KDTree; modules for specifying an optimal covariance matrix for a component-wise or multivariate normal perturbation kernel and a weighted covariance metric; restart files output frequently so an interrupted sampling run can be resumed at any iteration; output and restart files backed up at every iteration; user-defined distance metric and simulation methods; a module for specifying heterogeneous parameter priors including non-standard prior PDFs; a module for specifying a constant, linear, log, or exponential tolerance level; and well-documented examples and sample scripts.
This code is hosted online at https://github.com/EliseJ/astroABC.
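The likelihood-free idea behind such samplers can be sketched in miniature. This is not astroABC's API, just a stripped-down ABC-SMC loop with a normal perturbation kernel on a toy mean-estimation problem; a full SMC sampler also carries importance weights and adaptive tolerances, omitted here.

```python
# Minimal ABC-SMC sketch: accept parameter draws whose simulated summary
# statistic falls within a shrinking tolerance eps of the observed one,
# perturbing survivors with a normal kernel between tolerance levels.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, 200)           # "observed" data, true mean = 2
obs = data.mean()                          # summary statistic

def simulate(theta):
    """Forward model: same summary statistic computed on simulated data."""
    return rng.normal(theta, 1.0, 200).mean()

n = 300
particles = rng.uniform(-5, 5, n)          # initial draws from a flat prior
for eps in [1.0, 0.5, 0.2, 0.1]:           # decreasing tolerance schedule
    accepted = []
    while len(accepted) < n:
        theta = rng.choice(particles) + rng.normal(0.0, 0.5)  # perturbation kernel
        if abs(simulate(theta) - obs) < eps:
            accepted.append(theta)
    particles = np.array(accepted)

posterior_mean = particles.mean()          # concentrates near the true mean
```

At no point is a likelihood evaluated: acceptance is decided purely by a distance between simulated and observed summaries, which is what makes the approach usable when the likelihood is intractable.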

  3. Normal Approximations to the Distributions of the Wilcoxon Statistics: Accurate to What "N"? Graphical Insights

    ERIC Educational Resources Information Center

    Bellera, Carine A.; Julien, Marilyse; Hanley, James A.

    2010-01-01

    The Wilcoxon statistics are usually taught as nonparametric alternatives for the 1- and 2-sample Student-"t" statistics in situations where the data appear to arise from non-normal distributions, or where sample sizes are so small that we cannot check whether they do. In the past, critical values, based on exact tail areas, were…
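For reference, the large-sample normal approximation to the rank-sum statistic W (no ties) uses E[W] = n1(n1+n2+1)/2 and Var[W] = n1·n2(n1+n2+1)/12. A minimal sketch, with sample values chosen only for illustration:

```python
# Normal approximation to the Wilcoxon rank-sum statistic (assumes no ties).
import numpy as np

def rank_sum_z(x, y):
    n1, n2 = len(x), len(y)
    combined = np.concatenate([x, y])
    ranks = np.argsort(np.argsort(combined)) + 1   # ranks 1..n1+n2
    W = ranks[:n1].sum()                           # rank sum of sample x
    mean = n1 * (n1 + n2 + 1) / 2.0
    var = n1 * n2 * (n1 + n2 + 1) / 12.0
    return (W - mean) / np.sqrt(var)

# Completely separated samples: W is maximal, z is strongly positive
z = rank_sum_z([6.0, 7.0, 8.0, 9.0, 10.0], [1.0, 2.0, 3.0, 4.0, 5.0])
```

The article's question is precisely how small n1 and n2 can be before referring this z to the standard normal becomes inaccurate relative to exact tail areas.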

  4. 77 FR 58369 - York Haven Power Company, LLC; Notice of Application Tendered for Filing With the Commission and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-20

    ... end of the headrace where it runs diagonally across the main channel of the river approximately 4,970... not used under normal run-of-river operation. The normal water surface elevation of the project...-3 are vertical-shaft, fixed-blade, Kaplan turbines; unit 4 is a vertical-shaft, manually adjustable...

  5. 78 FR 26345 - York Haven Power Company, LLC; Notice of Application Accepted for Filing, Soliciting Motions To...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-06

    ... it runs diagonally across the main channel of the river approximately 4,970 feet to the west shore of... normal run-of-river operation. The normal water surface elevation of the project impoundment is 276.5... appurtenant equipment. The hydraulic equipment for units 1-3 are vertical-shaft, fixed-blade, Kaplan turbines...

  6. Category-Contingent Face Adaptation for Novel Colour Categories: Contingent Effects Are Seen Only after Social or Meaningful Labelling

    ERIC Educational Resources Information Center

    Little, Anthony C.; DeBruine, Lisa M.; Jones, Benedict C.

    2011-01-01

    A face appears normal when it approximates the average of a population. Consequently, exposure to faces biases perceptions of subsequently viewed faces, such that faces similar to those recently seen are perceived as more normal. Simultaneously inducing such aftereffects in opposite directions for two groups of faces indicates somewhat discrete…

  7. 75 FR 78985 - County of DuPage; Notice of Preliminary Permit Application Accepted for Filing and Soliciting...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-17

    ... (upper reservoir) having a total storage capacity of 8,145 acre- feet at a normal maximum operating... reservoir) 250 feet below the bottom of the upper reservoir having a total/usable storage capacity of 7,465 acre-feet at normal maximum operation elevation of 210 feet msl; (5) a powerhouse with approximate...

  8. Multivariate reference technique for quantitative analysis of fiber-optic tissue Raman spectroscopy.

    PubMed

    Bergholt, Mads Sylvest; Duraipandian, Shiyamala; Zheng, Wei; Huang, Zhiwei

    2013-12-03

    We report a novel method that uses multivariate reference signals (the fused silica and sapphire Raman signals generated in a ball-lens fiber-optic Raman probe) for quantitative analysis of in vivo tissue Raman measurements in real time. Partial least-squares (PLS) regression modeling is applied to extract the characteristic internal reference Raman signals (e.g., the shoulder of the prominent fused silica boson peak (~130 cm(-1)); distinct sapphire ball-lens peaks (380, 417, 646, and 751 cm(-1))) from the ball-lens fiber-optic Raman probe for quantitative analysis of fiber-optic Raman spectroscopy. To evaluate the analytical value of this multivariate reference technique, a rapid Raman spectroscopy system coupled with a ball-lens fiber-optic Raman probe is used for in vivo oral tissue Raman measurements (n = 25 subjects) under 785 nm laser excitation powers ranging from 5 to 65 mW. An accurate linear relationship (R(2) = 0.981) with a root-mean-square error of cross validation (RMSECV) of 2.5 mW is obtained for predicting laser excitation power changes based on leave-one-subject-out cross-validation, superior to the normal univariate reference method (RMSE = 6.2 mW). A root-mean-square error of prediction (RMSEP) of 2.4 mW (R(2) = 0.985) is also achieved for real-time laser power prediction when the multivariate method is applied independently to five new subjects (n = 166 spectra). We further apply the multivariate reference technique to quantitative analysis of gelatin tissue phantoms, giving an RMSEP of ~2.0% (R(2) = 0.998) independent of laser excitation power variations. This work demonstrates that the multivariate reference technique can be used to monitor and correct variations of laser excitation power and fiber coupling efficiency in situ, standardizing the tissue Raman intensity to realize quantitative analysis of tissue Raman measurements in vivo, which is particularly appealing in challenging Raman endoscopic applications.
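As an illustration of the PLS regression machinery underlying such calibration, here is a generic NIPALS PLS1 sketch on synthetic data. It is not the authors' Raman pipeline; the data dimensions, coefficients, and number of components are arbitrary.

```python
# Minimal NIPALS PLS1 regression sketch (single response y).
import numpy as np

def pls1(X, y, n_comp):
    xm, ym = X.mean(axis=0), y.mean()
    Xr, yr = X - xm, y - ym
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xr.T @ yr
        w = w / np.linalg.norm(w)          # weight vector
        t = Xr @ w                         # latent-variable scores
        p = Xr.T @ t / (t @ t)             # X loadings
        qk = yr @ t / (t @ t)              # y loading
        Xr = Xr - np.outer(t, p)           # deflate X and y
        yr = yr - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)    # regression coefficients
    return B, xm, ym

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 8))
y = X @ np.array([1.0, -2.0, 0.5, 0, 0, 0, 0, 0]) + 0.01 * rng.normal(size=100)
B, xm, ym = pls1(X, y, n_comp=4)
pred = (X - xm) @ B + ym                   # fitted values
```

A few latent components suffice here because the response depends on a low-dimensional subspace of X, which is the usual rationale for PLS over ordinary least squares in spectroscopic calibration.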

  9. Multivariate Formation Pressure Prediction with Seismic-derived Petrophysical Properties from Prestack AVO inversion and Poststack Seismic Motion Inversion

    NASA Astrophysics Data System (ADS)

    Yu, H.; Gu, H.

    2017-12-01

    A novel multivariate seismic formation pressure prediction methodology is presented, which incorporates high-resolution seismic velocity data from prestack AVO inversion and petrophysical data (porosity and shale volume) derived from poststack seismic motion inversion. In contrast to traditional seismic formation pressure prediction methods, the proposed methodology is based on a multivariate pressure prediction model and uses a trace-by-trace multivariate regression analysis on seismic-derived petrophysical properties to calibrate model parameters, yielding accurate predictions with higher resolution in both vertical and lateral directions. With prestack time migration velocity as the initial velocity model, an AVO inversion was first applied to the prestack dataset to obtain high-resolution, higher-frequency seismic velocity to serve as the velocity input for pressure prediction, along with the density dataset used to calculate accurate overburden pressure (OBP). Seismic Motion Inversion (SMI) is an inversion technique based on Markov Chain Monte Carlo simulation; both structural variability and similarity of seismic waveform are used to incorporate well log data and characterize the variability of the property to be estimated. In this research, porosity and shale volume were first interpreted on well logs and then combined with poststack seismic data using SMI to build porosity and shale volume datasets for seismic pressure prediction. A multivariate effective stress model is used to convert the velocity, porosity, and shale volume datasets to effective stress. After a thorough study of the regional stratigraphic and sedimentary characteristics, a regional normally compacted interval model is built, and the coefficients of the multivariate prediction model are determined in a trace-by-trace multivariate regression analysis on the petrophysical data. The coefficients are used to convert the velocity, porosity, and shale volume datasets to effective stress and then to calculate formation pressure with OBP. Application of the proposed methodology to a research area in the East China Sea shows that the method can bridge the gap between seismic and well log pressure prediction, giving predicted pressures close to pressure measurements from well testing.

  10. Effect of altered sensory conditions on multivariate descriptors of human postural sway

    NASA Technical Reports Server (NTRS)

    Kuo, A. D.; Speers, R. A.; Peterka, R. J.; Horak, F. B.; Peterson, B. W. (Principal Investigator)

    1998-01-01

    Multivariate descriptors of sway were used to test whether altered sensory conditions result not only in changes in amount of sway but also in postural coordination. Eigenvalues and directions of eigenvectors of the covariance of shank and hip angles were used as a set of multivariate descriptors. These quantities were measured in 14 healthy adult subjects performing the Sensory Organization Test, which disrupts visual and somatosensory information used for spatial orientation. Multivariate analysis of variance and discriminant analysis showed that resulting sway changes were at least bivariate in character, with visual and somatosensory conditions producing distinct changes in postural coordination. The most significant changes were found when somatosensory information was disrupted by sway-referencing of the support surface (P = 3.2 x 10(-10)). The resulting covariance measurements showed that subjects not only swayed more but also used increased hip motion analogous to the hip strategy. Disruption of vision, by either closing the eyes or sway-referencing the visual surround, also resulted in altered sway (P = 1.7 x 10(-10)), with proportionately more motion of the center of mass than with platform sway-referencing. As shown by discriminant analysis, an optimal univariate measure could explain at most 90% of the behavior due to altered sensory conditions. The remaining 10%, while smaller, are highly significant changes in posture control that depend on sensory conditions. The results imply that normal postural coordination of the trunk and legs requires both somatosensory and visual information and that each sensory modality makes a unique contribution to posture control. Descending postural commands are multivariate in nature, and the motion at each joint is affected uniquely by input from multiple sensors.
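The descriptor set used here, eigenvalues and eigenvector directions of the joint-angle covariance, can be computed as follows. This is a sketch on synthetic angles, with an anti-phase hip/shank relationship chosen to mimic a "hip strategy"; the amplitudes are arbitrary.

```python
# Eigen-decomposition of the shank/hip angle covariance: eigenvalues measure
# the amount of sway along each principal direction, eigenvectors describe
# the coordination pattern (relative loading of hip vs. shank motion).
import numpy as np

rng = np.random.default_rng(3)
shank = rng.normal(0.0, 1.0, 2000)
hip = -1.5 * shank + rng.normal(0.0, 0.5, 2000)   # anti-phase "hip strategy"
angles = np.column_stack([shank, hip])

cov = np.cov(angles, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
dominant = eigvecs[:, -1]                         # direction of most sway
total_sway = eigvals.sum()                        # overall amount of sway
ratio = abs(dominant[1] / dominant[0])            # hip loading vs. shank loading
```

A condition that increases hip motion shows up as a larger hip component in the dominant eigenvector, not merely as a larger total variance, which is why the descriptors are genuinely multivariate.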

  11. Joint genome-wide prediction in several populations accounting for randomness of genotypes: A hierarchical Bayes approach. II: Multivariate spike and slab priors for marker effects and derivation of approximate Bayes and fractional Bayes factors for the complete family of models.

    PubMed

    Martínez, Carlos Alberto; Khare, Kshitij; Banerjee, Arunava; Elzo, Mauricio A

    2017-03-21

    This study corresponds to the second part of a companion paper devoted to the development of Bayesian multiple regression models accounting for randomness of genotypes in across-population genome-wide prediction. This family of models considers heterogeneous and correlated marker effects and allelic frequencies across populations, and has the ability to consider records from non-genotyped individuals and individuals with missing genotypes in any subset of loci without the need for previous imputation, taking into account uncertainty about imputed genotypes. This paper extends this family of models by considering multivariate spike and slab conditional priors for marker allele substitution effects, and contains derivations of approximate Bayes factors and fractional Bayes factors to compare models from part I, and those developed here, with their null versions. These null versions correspond to simpler models ignoring heterogeneity of populations, but still accounting for randomness of genotypes. For each marker locus, the spike component of the prior corresponded to a point mass at 0 in R^S, where S is the number of populations, and the slab component was an S-variate Gaussian distribution; independent conditional priors were assumed. For the Gaussian components, covariance matrices were assumed to be either the same for all markers or different for each marker. For null models, the priors were simply univariate versions of these finite mixture distributions. Approximate algebraic expressions for Bayes factors and fractional Bayes factors were found using the Laplace approximation. Using the simulated datasets described in part I, these models were implemented and compared with models derived in part I using measures of predictive performance based on squared Pearson correlations, Deviance Information Criterion, Bayes factors, and fractional Bayes factors.
The extensions presented here enlarge our family of genome-wide prediction models making it more flexible in the sense that it now offers more modeling options. Copyright © 2017 Elsevier Ltd. All rights reserved.
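The prior described here can be sketched as a toy draw. The inclusion probability and slab covariance below are arbitrary illustrative choices, not values from the paper.

```python
# Multivariate spike-and-slab prior over marker effects: with probability
# 1 - pi the effect vector is exactly 0 in R^S (spike), otherwise it is an
# S-variate Gaussian draw (slab), independently for each marker locus.
import numpy as np

rng = np.random.default_rng(4)
S = 3                                       # number of populations
pi = 0.1                                    # prior inclusion probability
Sigma = 0.05 * (0.5 * np.eye(S) + 0.5)      # slab covariance (exchangeable)

def draw_effects(n_markers):
    included = rng.random(n_markers) < pi   # spike vs. slab indicator per marker
    effects = np.zeros((n_markers, S))
    effects[included] = rng.multivariate_normal(np.zeros(S), Sigma, included.sum())
    return effects, included

effects, included = draw_effects(5000)
```

The correlated slab is what lets an included marker have related, but not identical, allele substitution effects across the S populations; a null model collapses this to a univariate mixture.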

  12. Transonic aerodynamic characteristics of the 10-percent-thick NASA supercritical airfoil 31

    NASA Technical Reports Server (NTRS)

    Harris, C. D.

    1975-01-01

    Refinements in a 10 percent thick supercritical airfoil (airfoil 31) have produced significant improvements in the drag characteristics compared with those for an earlier supercritical airfoil (airfoil 12) designed for the same normal force coefficient of 0.7. Drag creep was practically eliminated at normal force coefficients between about 0.4 and 0.7 and was greatly reduced at other normal force coefficients. Substantial reductions in the drag levels preceding drag divergence were also achieved at all normal force coefficients. The Mach numbers at which drag diverges were delayed for airfoil 31 at normal force coefficients up to about 0.6 (by approximately 0.01 and 0.02 at normal force coefficients of 0.4 and 0.6, respectively) but drag divergence occurred at slightly lower Mach numbers at higher normal force coefficients.

  13. Understanding characteristics in multivariate traffic flow time series from complex network structure

    NASA Astrophysics Data System (ADS)

    Yan, Ying; Zhang, Shen; Tang, Jinjun; Wang, Xiaofei

    2017-07-01

    Discovering dynamic characteristics in traffic flow is a significant step in designing effective traffic managing and controlling strategies for relieving traffic congestion in urban cities. A new method based on complex network theory is proposed to study multivariate traffic flow time series. The data were collected from loop detectors on a freeway over one year. In order to construct a complex network from the original traffic flow, a weighted Frobenius norm is adopted to estimate similarity between multivariate time series, and Principal Component Analysis is implemented to determine the weights. We discuss how to select the optimal critical threshold for networks at different hours in terms of the cumulative probability distribution of degree. Furthermore, two statistical properties of the networks, normalized network structure entropy and cumulative probability of degree, are utilized to explore hourly variation in traffic flow. The results demonstrate that these two statistical quantities exhibit patterns similar to traffic flow parameters, with morning and evening peak hours. Accordingly, we detect three traffic states, trough, peak, and transitional hours, according to the correlation between the two aforementioned properties. The resulting state classification represents hourly fluctuation in traffic flow, as confirmed by analyzing annual average hourly values of traffic volume, occupancy, and speed in the corresponding hours.
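The normalized network structure entropy mentioned above can be computed from the degree distribution. A sketch, using a common degree-based definition (an assumption, as the paper's exact formula is not given here), with a star and a complete graph as heterogeneity extremes:

```python
# Normalized network structure entropy: Shannon entropy of the degree-based
# node "importance" distribution p_i = k_i / sum(k), normalized by log(n),
# its maximum. Homogeneous networks score 1; hub-dominated networks score low.
import numpy as np

def structure_entropy(adj):
    deg = adj.sum(axis=1)
    p = deg / deg.sum()
    H = -(p[p > 0] * np.log(p[p > 0])).sum()
    return H / np.log(len(deg))

n = 6
star = np.zeros((n, n))
star[0, 1:] = star[1:, 0] = 1.0             # one hub, five leaves
complete = np.ones((n, n)) - np.eye(n)      # fully connected

e_star, e_complete = structure_entropy(star), structure_entropy(complete)
```

Tracking this quantity hour by hour is what exposes the peak/trough structure: congested and free-flow hours produce networks of different degree heterogeneity.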

  14. Interpreting support vector machine models for multivariate group wise analysis in neuroimaging

    PubMed Central

    Gaonkar, Bilwaj; Shinohara, Russell T; Davatzikos, Christos

    2015-01-01

    Machine learning based classification algorithms like support vector machines (SVMs) have shown great promise for turning high-dimensional neuroimaging data into clinically useful decision criteria. However, tracing imaging-based patterns that contribute significantly to classifier decisions remains an open problem. This is an issue of critical importance in imaging studies seeking to determine which anatomical or physiological imaging features contribute to the classifier's decision, thereby allowing users to critically evaluate the findings of such machine learning methods and to understand disease mechanisms. The majority of published work addresses the question of statistical inference for support vector classification using permutation tests based on SVM weight vectors. Such permutation testing ignores the SVM margin, which is critical in SVM theory. In this work we emphasize the use of a statistic that explicitly accounts for the SVM margin and show that the null distributions associated with this statistic are asymptotically normal. Further, our experiments show that this statistic is far less conservative than weight-based permutation tests, yet specific enough to tease out multivariate patterns in the data. Thus, we can better understand the multivariate patterns that the SVM uses for neuroimaging-based classification. PMID:26210913

  15. An improvement of drought monitoring through the use of a multivariate magnitude index

    NASA Astrophysics Data System (ADS)

    Real-Rangel, R. A.; Alcocer-Yamanaka, V. H.; Pedrozo-Acuña, A.; Breña-Naranjo, J. A.; Ocón-Gutiérrez, A. R.

    2017-12-01

    In drought monitoring it is widely acknowledged that the severity of an event is determined in relation to monthly values of univariate indices of one or more hydrological variables. Normally, these indices are estimated using temporal windows of 1 to 12 months or more to aggregate the effects of deficits in the variable of interest. However, the use of these temporal windows may lead to a misperception of both the intensity of a drought event and the timing of its occurrence. In this context, this work presents the implementation of a trivariate drought magnitude index covering key hydrological variables (e.g., precipitation, soil moisture, and runoff), using the framework of the Multivariate Standardized Drought Index (MSDI). Despite the popularity and simplicity of the concept of drought magnitude for standardized drought indices, its implementation in drought monitoring and early warning systems has not been reported. This approach has been tested for operational purposes in the recently launched Multivariate Drought Monitor of Mexico (MOSEMM), and the results show that the inclusion of a magnitude index facilitates drought detection and thus improves decision making for emergency managers.
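The MSDI construction underlying this work can be sketched in its bivariate form. This is a hedged illustration on synthetic data: the empirical joint probability is estimated with a Gringorten-type plotting position (an assumed choice), then mapped through the inverse standard-normal CDF.

```python
# MSDI-style index sketch: the empirical joint probability P(X <= x_t, Y <= y_t)
# of two drought variables is mapped through the inverse standard-normal CDF,
# so jointly dry months receive strongly negative standardized values.
import numpy as np
from statistics import NormalDist

def msdi(x, y):
    n = len(x)
    counts = np.array([np.sum((x <= x[t]) & (y <= y[t])) for t in range(n)])
    p = (counts - 0.44) / (n + 0.12)            # Gringorten plotting position
    return np.array([NormalDist().inv_cdf(v) for v in p])

rng = np.random.default_rng(6)
precip = rng.normal(0.0, 1.0, 200)              # precipitation anomaly proxy
soil = precip + 0.1 * rng.normal(0.0, 1.0, 200) # correlated soil moisture proxy
index = msdi(precip, soil)                      # standardized joint index
```

Because the index responds only when both variables are jointly low, it can flag an emerging drought earlier than either univariate standardized index alone, which is the motivation for the multivariate framework.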

  16. Self-Critical, and Robust, Procedures for the Analysis of Multivariate Normal Data.

    DTIC Science & Technology

    1982-06-01

    Influence Functions. The influence function is the most important tool of qualitative robustness since many other robustness characteristics of an estimator...may be derived from it. The influence function characterizes the (asymptotic) response of an estimator to an additional observation as a function of...the influence function be bounded. It is also advantageous, in our opinion, if the influence functions are re-descending to zero. The influence function for

  17. Performance analysis of MIMO wireless optical communication system with Q-ary PPM over correlated log-normal fading channel

    NASA Astrophysics Data System (ADS)

    Wang, Huiqin; Wang, Xue; Lynette, Kibe; Cao, Minghua

    2018-06-01

    The performance of multiple-input multiple-output wireless optical communication systems that adopt Q-ary pulse position modulation over a spatially correlated log-normal fading channel is analyzed in terms of un-coded bit error rate and ergodic channel capacity. The analysis is based on Wilkinson's method, which approximates the distribution of a sum of correlated log-normal random variables by a single log-normal random variable. The analytical and simulation results corroborate that increasing the correlation coefficients among sub-channels degrades system performance. Moreover, receiver diversity is more effective at resisting the channel fading caused by spatial correlation.
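
    Wilkinson's method, as used above, matches the first two moments of S = Σ_i exp(Z_i) with Z ~ N(μ, Σ) to a single log-normal exp(N(μ_L, σ_L²)), using the closed-form moments E[X_i] = exp(μ_i + Σ_ii/2) and E[X_i X_j] = exp(μ_i + μ_j + (Σ_ii + Σ_jj)/2 + Σ_ij). A numpy sketch:

```python
import numpy as np

def wilkinson_lognormal_sum(mu, Sigma):
    """Wilkinson moment matching: approximate S = sum_i exp(Z_i),
    Z ~ N(mu, Sigma), by a log-normal exp(N(mu_L, s_L^2)) with the
    same first and second moments."""
    mu = np.asarray(mu, float)
    Sigma = np.asarray(Sigma, float)
    d = np.diag(Sigma)
    m1 = np.sum(np.exp(mu + d / 2))                 # E[S]
    # E[X_i X_j] = exp(mu_i + mu_j + (s_ii + s_jj)/2 + s_ij)
    M = np.exp(mu[:, None] + mu[None, :]
               + (d[:, None] + d[None, :]) / 2 + Sigma)
    m2 = np.sum(M)                                  # E[S^2]
    s2 = np.log(m2 / m1**2)
    mu_L = np.log(m1) - s2 / 2
    return mu_L, np.sqrt(s2)
```

    The approximating parameters reproduce the exact mean and second moment of the sum by construction; it is the tail accuracy that degrades as the correlations grow.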

  18. Aerodynamic characteristics of an improved 10-percent-thick NASA supercritical airfoil. [Langley 8 foot transonic tunnel tests

    NASA Technical Reports Server (NTRS)

    Harris, C. D.

    1974-01-01

    Refinements in a 10-percent-thick supercritical airfoil produced improvements in the overall drag characteristics at normal-force coefficients from about 0.30 to 0.65 compared with earlier supercritical airfoils, which were developed for a normal-force coefficient of 0.7. The drag divergence Mach number of the improved supercritical airfoil (airfoil 26a) varied from approximately 0.82 at a normal-force coefficient of 0.30 to 0.78 at a normal-force coefficient of 0.80, with no drag creep evident. Integrated section force and moment data, surface pressure distributions, and typical wake survey profiles are presented.

  19. The Kepler Light Curves of V1504 Cygni and V344 Lyrae: A Study of the Outburst Properties

    NASA Technical Reports Server (NTRS)

    Cannizzo, John K.; Smale, Alan P.; Still, Martin D.; Wood, Matt A.; Howell, Steve B.

    2011-01-01

    We examine the Kepler light curves of V1504 Cyg and V344 Lyr, encompassing approximately 460 d at 1 min cadence. During this span each system exhibited approximately 40 outbursts, including four superoutbursts. We find that, in both systems, the normal outbursts lying between two superoutbursts increase in duration by a factor of approximately 1.2 to 1.7, and then reset to a small value after the following superoutburst. In V344 Lyr the quiescent intervals between normal outbursts tend to increase to a local maximum about halfway through the supercycle (the interval from one superoutburst to the next) and then decrease back to a small value by the time of the next superoutburst, whereas for V1504 Cyg the quiescent intervals are relatively constant during the supercycle. Both of these trends are inconsistent with Osaki's thermal-tidal model, which robustly predicts a secular increase in the quiescent intervals between normal outbursts during a supercycle. Also, most of the normal outbursts have an asymmetric, fast-rise/slower-decline shape, which would be consistent with outbursts triggered at large radii. The exponential rate of decay of the plateau phase of the superoutbursts is 8 d mag(sup -1) for V1504 Cyg and 12 d mag(sup -1) for V344 Lyr. This time scale gives a direct measure of the viscous time scale in the outer accretion disk, given the expectation that the entire disk is in the hot, viscous state during superoutburst. The resulting constraint on the Shakura-Sunyaev parameter, alpha(sub hot) approximately equal to 0.1, is consistent with the value inferred from fast dwarf nova decays. By looking at the slow decay rate for superoutbursts, which occur in systems below the period gap, in combination with the slow decay rate in one long outburst above the period gap (in U Gem), we infer a steep dependence of the decay rate on orbital period for long outbursts. We argue that this relation implies a steep dependence of alpha(sub cold) on orbital period, which may be consistent with recent findings of Patterson, and is consistent with tidal torquing being the dominant angular momentum transport mechanism in quiescent disks in interacting binary systems.

  20. Approximation methods in gravitational-radiation theory

    NASA Technical Reports Server (NTRS)

    Will, C. M.

    1986-01-01

    The observation of gravitational-radiation damping in the binary pulsar PSR 1913+16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. Recent developments are summarized in two areas in which approximations are important: (a) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (b) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.

  1. Stability Investigation of a Blunted Cone and a Blunted Ogive with a Flared Cylinder Afterbody at Mach Numbers from 0.30 to 2.85

    NASA Technical Reports Server (NTRS)

    Coltrane, Lucille C.

    1959-01-01

    A cone with a blunt nose tip and a 10.7 deg cone half-angle and an ogive with a blunt nose tip and a 20 deg flared cylinder afterbody have been tested in free flight over a Mach number range of 0.30 to 2.85 and a Reynolds number range of 1 x 10(exp 6) to 23 x 10(exp 6). Time histories, cross plots of force and moment coefficients, and plots of the longitudinal-force coefficient, rolling velocity, aerodynamic center, normal-force-curve slope, and dynamic stability are presented. With the center-of-gravity location at about 50 percent of the model length, the models were both statically and dynamically stable throughout the Mach number range. For the cone, the average aerodynamic center moved slightly forward with decreasing speeds and the normal-force-curve slope was fairly constant throughout the speed range. For the ogive, the average aerodynamic center remained practically constant and the normal-force-curve slope remained practically constant to a Mach number of approximately 1.6, where a rising trend is noted. The maximum drag coefficient for the cone, with reference to the base area, was approximately 0.6, and for the ogive, with reference to the area of the cylindrical portion, was approximately 2.1.

  2. Metabolic phenotype and risk of colorectal cancer in normal-weight postmenopausal women

    PubMed Central

    Liang, Xiaoyun; Margolis, Karen L.; Hendryx, Michael; Rohan, Thomas; Groessl, Erik J.; Thomson, Cynthia A.; Kroenke, Candyce H.; Simon, Michael; Lane, Dorothy; Stefanick, Marcia; Luo, Juhua

    2016-01-01

    Background The prevalence of metabolically unhealthy phenotype in normal-weight adults is 30%, and few studies have explored the association between metabolic phenotype and colorectal cancer incidence in normal-weight individuals. Our aim was to compare the risk of colorectal cancer in normal-weight postmenopausal women who were characterized by either the metabolically healthy phenotype or the metabolically unhealthy phenotype. Methods A large prospective cohort, the Women’s Health Initiative (WHI), was used. The analytical sample included 5,068 postmenopausal women with BMI 18.5–<25 kg/m2. Metabolic phenotype was defined using the Adult Treatment Panel-III (ATP-III) definition, excluding waist circumference; therefore, women with one or none of the four components (elevated triglycerides, low HDL-C, elevated blood pressure, and elevated fasting glucose) were classified as metabolically healthy. Multivariable Cox proportional hazards regression was used to estimate adjusted hazard ratios for the association between metabolic phenotype and risk of colorectal cancer. Results Among normal-weight women, those who were metabolically unhealthy had higher risks of colorectal cancer (HR: 1.49, 95% CI: 1.02–2.18) compared to those who were metabolically healthy. Conclusions A metabolically unhealthy phenotype was associated with higher risk of colorectal cancer among normal-weight women. Impact Normal-weight women should still be evaluated for metabolic health and appropriate steps taken to reduce their risk of colorectal cancer. PMID:28148595

  3. Feasibility of novel four degrees of freedom capacitive force sensor for skin interface force

    PubMed Central

    2012-01-01

    Background The objective of our study was to develop a novel capacitive force sensor that enables simultaneous measurement, at a single point, of yaw torque around the pressure axis, normal force, and shear forces, for the purpose of elucidating pressure ulcer pathogenesis and establishing criteria for the selection of cushions and mattresses. Methods Two newly developed sensors (approximately 10 mm×10 mm×5 mm (10) and 20 mm×20 mm×5 mm (20)) were constructed from silicone gel and four upper and four lower electrodes. The upper and lower electrodes formed sixteen combinations that functioned as parallel-plate capacitors. The full-scale (FS) ranges of force/torque were defined as 0 to 1.5 N, −0.5 to 0.5 N, and −1.5 to 1.5 N mm (10) and 0 to 8.7 N, −2.9 to 2.9 N, and −16.8 to 16.8 N mm (20) in normal force, shear forces, and yaw torque, respectively. The capacitances of the sixteen capacitors were measured by an LCR meter (AC 1 V, 100 kHz) when displacements corresponding to the four degrees of freedom (DOF) of force within the FS ranges were applied to the sensor. The measurement was repeated three times in each displacement condition (10 only). Force/torque were calculated from the corrected capacitance and were evaluated by comparison to theoretical values and to the standard normal force measured by a universal tester. Results In measurements of capacitance, the coefficient of variation was 3.23% (10). The maximum FS errors of estimated force/torque were less than or equal to 10.1% (10) and 16.4% (20), respectively. The standard normal forces were approximately 1.5 N (10) and 9.4 N (20) when pressure displacements were 3 mm (10) and 2 mm (20), respectively. The estimated normal forces were approximately 1.5 N (10) and 8.6 N (20) in the same condition. Conclusions In this study, we developed a new four-DOF force sensor for measurement of the force/torque that occur between the skin and a mattress.
In measurement of capacitance, the repeatability was good and it was confirmed that the sensor had characteristics that enabled the correction by linear approximation for adjustment of gain and offset. In estimation of forces/torque, we considered accuracy to be within an acceptable range. PMID:23186069
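
    The sensing principle described above, electrode pairs acting as parallel-plate capacitors whose capacitance shifts as the gel deforms, follows C = ε₀ε_r A/d. A sketch with illustrative numbers; the electrode area, gap, and gel permittivity below are assumptions, not the paper's geometry.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def parallel_plate_capacitance(area_m2, gap_m, eps_r):
    """Ideal parallel-plate capacitance C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / gap_m

# Illustrative (assumed) geometry: 4 mm x 4 mm electrode overlap,
# 1 mm silicone-gel gap with relative permittivity ~2.7.
A, d, eps_r = 4e-3 * 4e-3, 1e-3, 2.7
c0 = parallel_plate_capacitance(A, d, eps_r)          # rest capacitance

# Normal force compresses the gel: smaller gap, higher capacitance.
c_normal = parallel_plate_capacitance(A, d - 0.2e-3, eps_r)
# Shear slides the electrodes: smaller overlap area, lower capacitance.
c_shear = parallel_plate_capacitance(0.8 * A, d, eps_r)
assert c_normal > c0 > c_shear
```

    With four upper and four lower electrodes, the sixteen pairwise capacitances change in distinct patterns under normal, shear, and yaw displacements, which is what allows a linear correction of gain and offset to recover all four DOF.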

  4. Modeling absolute differences in life expectancy with a censored skew-normal regression approach

    PubMed Central

    Clough-Gorr, Kerri; Zwahlen, Marcel

    2015-01-01

    Parameter estimates from commonly used multivariable parametric survival regression models do not directly quantify differences in years of life expectancy. Gaussian linear regression models give results in terms of absolute mean differences but are not appropriate for modeling life expectancy, because in many situations time to death has a negatively skewed distribution. A regression approach using a skew-normal distribution would be an alternative to parametric survival models for modeling life expectancy, because parameter estimates can be interpreted in terms of survival time differences while allowing for skewness of the distribution. In this paper we show how to use skew-normal regression so that censored and left-truncated observations are accounted for. With this we model differences in life expectancy using data from the Swiss National Cohort Study and from official life expectancy estimates, and we compare the results with those derived from commonly used survival regression models. We conclude that a censored skew-normal survival regression approach for left-truncated observations can be used to model differences in life expectancy across covariates of interest. PMID:26339544
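
    A censored skew-normal fit of the kind described can be sketched by maximum likelihood: observed deaths contribute the skew-normal density and right-censored times its survival function. The sketch below uses `scipy.stats.skewnorm` for a location/scale/shape model without covariates; the paper's regression additionally handles covariates and left truncation.

```python
import numpy as np
from scipy import stats, optimize

def fit_censored_skewnormal(t, event):
    """ML fit of a skew-normal (shape a, location, scale) to survival
    times with right-censoring: events contribute logpdf, censored
    observations contribute logsf (the log survival function)."""
    t = np.asarray(t, float)
    event = np.asarray(event, bool)

    def negloglik(theta):
        a, loc, log_scale = theta
        scale = np.exp(log_scale)          # keep scale positive
        ll = stats.skewnorm.logpdf(t[event], a, loc, scale).sum()
        ll += stats.skewnorm.logsf(t[~event], a, loc, scale).sum()
        return -ll

    res = optimize.minimize(negloglik,
                            x0=[0.0, t.mean(), np.log(t.std())],
                            method="Nelder-Mead")
    a, loc, log_scale = res.x
    return a, loc, np.exp(log_scale)
```

    Age at death is typically left-skewed, so a fitted shape a < 0 is the expected outcome; differences in the fitted means across covariate groups are then directly interpretable as life-expectancy differences.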

  5. Discrimination of premalignant lesions and cancer tissues from normal gastric tissues using Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Luo, Shuwen; Chen, Changshui; Mao, Hua; Jin, Shaoqin

    2013-06-01

    The feasibility of early detection of gastric cancer using near-infrared (NIR) Raman spectroscopy (RS) is evaluated by distinguishing premalignant lesions (adenomatous polyp, n=27) and cancer tissues (adenocarcinoma, n=33) from normal gastric tissues (n=45). Significant differences in Raman spectra are observed among the normal, adenomatous polyp, and adenocarcinoma gastric tissues at 936, 1003, 1032, 1174, 1208, 1323, 1335, 1450, and 1655 cm-1. Diverse statistical methods, including principal component analysis (PCA), linear discriminant analysis (LDA), and naive Bayesian classifier (NBC) techniques, are employed to develop effective diagnostic algorithms for classifying the Raman spectra of different types of ex vivo gastric tissues. Compared with the PCA-LDA algorithms, the PCA-NBC technique together with a leave-one-out cross-validation method provides better discrimination of normal, adenomatous polyp, and adenocarcinoma gastric tissues, yielding superior sensitivities of 96.3%, 96.9%, and 96.9%, and specificities of 93%, 100%, and 95.2%, respectively. Therefore, NIR RS combined with multivariate statistical algorithms has the potential for early diagnosis of gastric premalignant lesions and cancer tissues at the molecular level.
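
    The PCA-NBC and PCA-LDA pipelines with leave-one-out cross-validation can be sketched with scikit-learn; the component count is an assumption and the data in the test are synthetic stand-ins, not the study's Raman spectra.

```python
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

def pca_nbc_loo(X, y, n_components=5):
    """PCA feature reduction followed by a naive Bayes classifier,
    scored with leave-one-out cross-validation."""
    model = make_pipeline(PCA(n_components=n_components), GaussianNB())
    return cross_val_score(model, X, y, cv=LeaveOneOut()).mean()

def pca_lda_loo(X, y, n_components=5):
    """The PCA-LDA baseline under the same protocol."""
    model = make_pipeline(PCA(n_components=n_components),
                          LinearDiscriminantAnalysis())
    return cross_val_score(model, X, y, cv=LeaveOneOut()).mean()
```

    Fitting the PCA inside the cross-validation loop, as the pipeline does, avoids leaking the held-out spectrum into the feature reduction step.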

  6. Oxidative Stress, Antioxidant Status and Neurodevelopmental Outcome in Neonates Born to Pre-eclamptic Mothers.

    PubMed

    Bharadwaj, Shruthi K; Vishnu Bhat, B; Vickneswaran, V; Adhisivam, B; Bobby, Zachariah; Habeebullah, S

    2018-05-01

    To measure oxidative stress and antioxidant status in pre-eclamptic mother-newborn dyads and correlate them with neurodevelopmental outcome at one year of corrected age. This cohort study, conducted in a tertiary care teaching hospital in south India, included 71 pre-eclamptic and 72 normal mother-newborn dyads. Biochemical parameters including total antioxidant status (TAS), protein carbonyls, and malondialdehyde (MDA) levels were measured in both maternal and cord blood. Infants in both groups were followed up to one year of corrected age, and neurodevelopmental assessment was done using the Developmental Assessment Scale for Indian Infants (DASII). Correlation and multivariate regression analyses were done to evaluate the oxidative stress markers in relation to neurodevelopmental outcome. All oxidative stress markers were higher in maternal and cord blood of the pre-eclampsia group compared to the normal group. Maternal total antioxidant status (M-TAS) was lower in the pre-eclampsia group than in the normal group. More neonates in the pre-eclampsia group were preterm or had intrauterine growth restriction (IUGR), and they had a higher incidence of morbidities such as respiratory distress syndrome (RDS) and early onset sepsis (EOS). Infants in the pre-eclampsia group had lower motor age, motor score, and motor developmental quotient (MoDQ). On multivariate logistic regression analyses, lower M-TAS levels were strongly associated with poor neuro-motor outcomes at 1 y of corrected age. Maternal TAS with a cut-off value of 0.965 mmol/L had a sensitivity of 77.8% and a specificity of 55.3% in predicting MoDQ <70 at one year corrected age in infants born to pre-eclamptic mothers. Oxidative stress is increased in pre-eclamptic mother-newborn dyads. Low maternal TAS levels are associated with poor neuro-motor outcomes. Maternal TAS in pre-eclampsia is useful in predicting poor motor development at one year corrected age.

  7. Paternal Metabolic and Cardiovascular Risk Factors for Fetal Growth Restriction

    PubMed Central

    Hillman, Sara; Peebles, Donald M.; Williams, David J.

    2013-01-01

    OBJECTIVE Fathers of low–birth weight offspring are more likely to have type 2 diabetes and cardiovascular disease in later life. We investigated whether paternal insulin resistance and cardiovascular risk factors were evident at the time that fetal growth–restricted offspring were born. RESEARCH DESIGN AND METHODS We carried out a case-control study of men who fathered pregnancies affected by fetal growth restriction in the absence of recognized fetal disease (n = 42), compared with men who fathered normal–birth weight offspring (n = 77). All mothers were healthy, nonsmoking, and similar in age, BMI, ethnicity, and parity. Within 4 weeks of offspring birth, all fathers had measures of insulin resistance (HOMA index), blood pressure, waist circumference, endothelial function (flow-mediated dilatation), lipid profile, weight, and smoking habit. Comparison was made using multivariable logistic regression analysis. RESULTS Fathers of fetal growth–restricted offspring [mean (SD) 1.8th (2.2) customized birth centile] were more likely to have insulin resistance, hypertension, central adiposity, and endothelial dysfunction and to smoke cigarettes compared with fathers of normally grown offspring. After multivariable analysis, paternal insulin resistance and smoking remained different between the groups. Compared with fathers of normally grown offspring, men who fathered pregnancies affected by fetal growth restriction had an OR of 7.68 (95% CI 2.63–22.40; P < 0.0001) of having a 1-unit higher log HOMA-IR value and of 3.39 (1.26–9.16; P = 0.016) of being a smoker. CONCLUSIONS Men who recently fathered growth-restricted offspring have preclinical evidence of the insulin resistance syndrome and are more likely to smoke than fathers of normally grown offspring. Paternal lifestyle may influence heritable factors important for fetal growth. PMID:23315598

  8. Is NAA reduction in normal contralateral cerebral tissue in stroke patients dependent on underlying risk factors?

    PubMed

    Walker, P M; Ben Salem, D; Giroud, M; Brunotte, F

    2006-05-01

    This retrospective study investigated the dependence of N-acetyl aspartate (NAA) ratios on risk factors for cerebral vasculopathy such as sex, age, hypertension, diabetes mellitus, carotid stenosis, and dyslipidaemia, which may have affected brain vessels and induced metabolic brain abnormalities prior to stroke. We hypothesise that in stroke patients metabolic alterations in the apparently normal contralateral brain depend on the presence or absence of such risk factors. Fifty-nine patients (31 male, 28 female; 58.8+/-16.1 years old) with cortical middle cerebral artery (MCA) territory infarction were included. Long-echo-time chemical shift imaging spectroscopy was carried out on a Siemens 1.5 T Magnetom Vision scanner using a multi-voxel PRESS technique. Metabolite ratios (NAA/choline, NAA/creatine, lactate/choline, etc.) were studied using uni- and multivariate analyses with respect to common risk factors. The influence of age, stroke lesion size, and time since stroke was studied using a linear regression approach. Age, sex, and hypertension all appeared to individually influence metabolite ratios, although only hypertension remained significant after multivariate analysis. In both basal ganglia and periventricular white matter regions of the apparently normal contralateral brain, the NAA/choline ratio was significantly lower in hypertensive (1.37+/-0.16 and 1.50+/-0.19, respectively) than in normotensive patients (1.72+/-0.19 and 1.85+/-0.15, respectively). Regarding MCA infarction, contralateral tissue remote from the lesion behaves abnormally in the presence of hypertension, the NAA ratios in hypertensive patients being significantly lower. These data suggest that hypertension may compromise the use of contralateral tissue data as a reference for comparison with ischaemic tissue.

  9. Prediction of vascular abnormalities on CT angiography in patients with acute headache.

    PubMed

    Alons, Imanda M E; Goudsmit, Ben F J; Jellema, Korné; van Walderveen, Marianne A A; Wermer, Marieke J H; Algra, Ale

    2018-05-09

    Patients with acute headache increasingly undergo CT angiography (CTA) to evaluate underlying vascular causes. The aim of this study is to determine clinical and non-contrast CT (NCCT) criteria to select patients who might benefit from CTA. We retrospectively included patients with acute headache who presented to the emergency department of an academic medical center or a large regional teaching hospital and underwent NCCT and CTA. We identified factors that increased the probability of finding a vascular abnormality on CTA, performed multivariable regression analyses, and determined discrimination with the c-statistic. A total of 384 patients underwent NCCT and CTA due to acute headache. NCCT was abnormal in 194 patients. Among these, we found abnormalities on CTA in 116 cases, of which 99 were aneurysms. In the remaining 190 with normal NCCT we found abnormalities in 12 cases: four unruptured aneurysms, three cerebral venous thromboses, two reversible cerebral vasoconstriction syndromes, two cervical arterial dissections, and one cerebellar infarction. In multivariable analysis, abnormal NCCT, lowered consciousness, and presentation within 6 hr of headache onset were independently associated with abnormal CTA. The c-statistic of abnormal NCCT alone was 0.80 (95% CI: 0.75-0.80); that of the model also including the other two variables was 0.84 (95% CI: 0.80-0.88). If NCCT was normal, no other factors could help identify patients at risk for abnormalities. In patients with acute headache, abnormal NCCT is the strongest predictor of a vascular abnormality on CTA. If NCCT is normal, no other predictors increase the probability of finding an abnormality on CTA and the diagnostic yield is low. © 2018 The Authors. Brain and Behavior published by Wiley Periodicals, Inc.
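
    The c-statistic reported above is the ROC area: the probability that a randomly chosen patient with the outcome receives a higher predicted risk than a randomly chosen patient without it, with ties counted half. A small pairwise-comparison sketch (fine for moderate sample sizes; rank-based formulas scale better):

```python
import numpy as np

def c_statistic(y_true, risk_score):
    """C-statistic (ROC AUC) from all positive-negative pairs:
    fraction of pairs where the positive case scores higher,
    counting exact ties as one half."""
    y = np.asarray(y_true, bool)
    s = np.asarray(risk_score, float)
    pos, neg = s[y], s[~y]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return float((greater + 0.5 * ties) / (len(pos) * len(neg)))
```

    A value of 0.80 for abnormal NCCT alone, rising to 0.84 with the two clinical variables added, corresponds to the modest discrimination gain the abstract describes.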

  10. Doctor shopping by overweight and obese patients is associated with increased healthcare utilization.

    PubMed

    Gudzune, Kimberly A; Bleich, Sara N; Richards, Thomas M; Weiner, Jonathan P; Hodges, Krista; Clark, Jeanne M

    2013-07-01

    Negative interactions with healthcare providers may lead patients to switch physicians or "doctor shop." We hypothesized that overweight and obese patients would be more likely to doctor shop and, as a result, have increased rates of emergency department (ED) visits and hospitalizations compared to normal-weight nonshoppers. We combined claims data from a health plan in one state with information from beneficiaries' health risk assessments. The primary outcome was "doctor shopping," which we defined as having outpatient claims with ≥5 different primary care physicians (PCPs) during a 24-month period. The independent variable was standard NIH weight categories by BMI. We performed multivariate logistic regression to evaluate the association between weight categories and doctor shopping. We conducted multivariate zero-inflated negative binomial regression to evaluate the association of weight-doctor shopping categories with counts of ED visits and hospitalizations. Of the 20,726 beneficiaries, the mean BMI was 26.3 kg m(-2) (SD 5.1), the mean age was 44.4 years (SD 11.1), and 53% were female. As compared to normal-weight beneficiaries, overweight beneficiaries had 23% greater adjusted odds of doctor shopping (OR 1.23, 95%CI 1.04-1.46) and obese beneficiaries had 52% greater adjusted odds of doctor shopping (OR 1.52, 95%CI 1.26-1.82). As compared to normal-weight non-shoppers, overweight and obese shoppers had higher rates of ED visits (IRR 1.85, 95%CI 1.37-2.45; IRR 1.83, 95%CI 1.34-2.50, respectively), which persisted in within-weight-group comparisons (overweight IRR 1.50, 95%CI 1.10-2.03; obese IRR 1.54, 95%CI 1.12-2.11). Frequently changing PCPs may impair continuity of care and result in increased healthcare utilization. Copyright © 2012 The Obesity Society.

  11. Malnutrition: a marker for increased complications, mortality, and length of stay after total shoulder arthroplasty.

    PubMed

    Garcia, Grant H; Fu, Michael C; Dines, David M; Craig, Edward V; Gulotta, Lawrence V

    2016-02-01

    Malnutrition is an established risk factor for postoperative complications. The purpose of this investigation was to determine the overall prevalence of malnutrition in total shoulder arthroplasty (TSA) patients, the differences in prevalence across obesity subgroups, and the overall complication risk of malnourished patients compared with normal patients. The American College of Surgeons National Surgical Quality Improvement Program database was queried for TSA cases from 2005 to 2013 for this retrospective cohort study. Malnutrition was defined as preoperative albumin concentration of <3.5 g/dL. Rates of postoperative complications were compared between normal and malnourished patients. We identified 4,655 TSA cases, with preoperative albumin measurements available for 1681 patients (36.1%). Propensity score adjustment successfully reduced selection bias, with adjusted P values of >.05 for demographics, body mass index, and modified Charlson Comorbidity Index. Of the cohort with albumin measurements, 7.6% of patients were malnourished according to our criteria. Bivariate analysis showed malnourished patients had higher rates of pulmonary complications, anemia requiring transfusion, extended length of stay (LOS), and death (all P < .05). Propensity-adjusted multivariable logistic regression demonstrated that malnutrition was significantly associated (all P < .05) with postoperative transfusion (odds ratio, 2.49), extended LOS (odds ratio, 1.69), and death (odds ratio, 18.09). The overall prevalence of malnutrition was 7.6%. Malnourished patients were at a significantly increased risk for blood transfusion, longer hospital LOS, and death within 30 days of surgery. Multivariable analysis showed TSA patients with preoperative albumin levels of <3.5 g/dL are at much higher risk for morbidity and death after surgery than patients with albumin levels within normal reference ranges. Copyright © 2016 Journal of Shoulder and Elbow Surgery Board of Trustees. 
Published by Elsevier Inc. All rights reserved.
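
    The abstract above states that propensity score adjustment reduced selection bias but does not detail the method. One common form, inverse-probability weighting with a logistic propensity model, can be sketched as follows; the function name and setup are illustrative, not the study's implementation, and here "treatment" would stand for malnutrition status with demographics, BMI, and comorbidity index as covariates.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_risk_difference(X, treated, outcome):
    """Inverse-probability-weighted outcome-rate difference: model
    P(treated | X), then weight treated cases by 1/propensity and
    untreated cases by 1/(1 - propensity) so both groups resemble
    the full cohort's covariate mix."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t = np.asarray(treated, bool)
    y = np.asarray(outcome, float)
    rate_t = np.average(y[t], weights=1.0 / ps[t])
    rate_c = np.average(y[~t], weights=1.0 / (1.0 - ps[~t]))
    return rate_t - rate_c
```

    The weights here are unstabilized and untrimmed; applied analyses like the one above typically add stabilization or trimming to control extreme propensities.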

  12. Higher Ratio of Serum Alpha-Fetoprotein Could Predict Outcomes in Patients with Hepatitis B Virus-Associated Hepatocellular Carcinoma and Normal Alanine Aminotransferase

    PubMed Central

    Park, Joong-Won

    2016-01-01

    Background The role of serum alpha-fetoprotein (AFP) levels in the surveillance and diagnosis of hepatocellular carcinoma (HCC) is controversial. The aim of this study was to investigate the value of serially measured serum AFP levels in HCC progression or recurrence after initial treatment. Methods A total of 722 consecutive patients newly diagnosed with HCC and treated at the National Cancer Center, Korea, between January 2004 and December 2009 were enrolled. The AFP ratios between 4–8 weeks post-treatment and the time of HCC progression or recurrence were obtained. Multivariate logistic regression analysis was performed to correlate the post-treatment AFP ratios with the presence of HCC progression or recurrence. Results The etiology of HCC was related to chronic hepatitis B virus (HBV) infection in 562 patients (77.8%), chronic hepatitis C virus (HCV) infection in 74 (10.2%), and non-viral causes in 86 (11.9%). There was a significant decrease in serum AFP levels from baseline to 4 to 8 weeks after treatment (median AFP, 319.6 ng/mL vs. 49.6 ng/mL; p < 0.001). Multivariate analysis showed that an AFP ratio > 1.0 was independently associated with HCC progression or recurrence. Among the different causes of HCC analyzed, this association was significant only for HCC related to chronic hepatitis B (p < 0.001) and non-viral causes (p < 0.05), and it was limited to patients who had normal alanine aminotransferase (ALT) levels. Conclusion Serial measurement of serum AFP ratios could be helpful in detecting progression or recurrence in treated patients with HBV-HCC and normal ALT. PMID:27304617

  13. Methylation of tissue factor pathway inhibitor 2 as a prognostic biomarker for hepatocellular carcinoma after hepatectomy.

    PubMed

    Sun, Feng-Kai; Sun, Qi; Fan, Yu-Chen; Gao, Shuai; Zhao, Jing; Li, Feng; Jia, Yi-Bin; Liu, Chuan; Wang, Li-Yuan; Li, Xin-You; Ji, Xiang-Fen; Wang, Kai

    2016-02-01

    Methylation of the tissue factor pathway inhibitor 2 (TFPI2) gene has been detected in hepatocellular carcinoma (HCC). However, the clinicopathological significance and prognostic value of TFPI2 methylation in HCC remain largely unknown. This study aimed to investigate the prognostic value of TFPI2 methylation in HCC after hepatectomy. The methylation status of the TFPI2 gene was examined in 178 surgical specimens of HCC and 20 normal liver samples using methylation-specific polymerase chain reaction. Methylation of the TFPI2 gene was detected in 44.9% (80 of 178) of primary HCC samples, 10.7% (19 of 178) of the corresponding non-tumorous liver samples, and 5.0% (1 of 20) of the normal liver samples. The mRNA concentrations of TFPI2 in primary HCC tissues were significantly lower than those in the corresponding non-tumorous liver tissues and in normal liver tissues. TFPI2 methylation was significantly associated with higher TNM stage. Patients with TFPI2 methylation demonstrated a significantly poorer prognosis than those without it for both overall survival and disease-free survival (P < 0.001 for both). Multivariate analyses confirmed that TFPI2 methylation was an independent prognostic factor for both overall survival (P = 0.002) and disease-free survival (P < 0.001) in HCC after hepatectomy. Moreover, TFPI2 methylation was found to be the only independent predictor of early tumor recurrence of HCC after resection on multivariate analysis (P = 0.002). Methylation of TFPI2 predicts a high risk of advanced tumor stage, early tumor recurrence, and poor prognosis, and it could be a potential prognostic biomarker in patients with HCC after hepatectomy. © 2015 Journal of Gastroenterology and Hepatology Foundation and John Wiley & Sons Australia, Ltd.

  14. Pretransplantation Cystatin C, but not Creatinine, Predicts 30-day Cardiovascular Events and Mortality in Liver Transplant Recipients With Normal Serum Creatinine Levels.

    PubMed

    Kwon, H-M; Moon, Y-J; Jung, K-W; Jun, I-G; Song, J-G; Hwang, G-S

    2018-05-01

    The connection between renal dysfunction and cardiovascular dysfunction has been consistently shown. In patients with liver cirrhosis, renal dysfunction correlates tightly with prognosis after liver transplantation (LT); therefore, precise renal assessment is mandatory. Cystatin C, a sensitive biomarker for assessing renal function, has shown superiority in detecting mild renal dysfunction compared to the classical biomarker creatinine. In this study, we aimed to compare cystatin C and creatinine in predicting 30-day major cardiovascular events (MACE) and all-cause mortality in LT recipients with normal serum creatinine levels. Between May 2010 and October 2015, 1181 LT recipients (mean Model for End-stage Liver Disease score 12.1) with pretransplantation creatinine levels ≤1.4 mg/dL were divided into tertiles according to each renal biomarker. The 30-day MACE was a composite of troponin I >0.2 ng/mL, arrhythmia, congestive heart failure, death, and cerebrovascular events. The highest tertile of cystatin C (≥0.95 mg/L) was associated with a higher risk of a 30-day MACE (odds ratio: 1.62; 95% confidence interval: 1.07 to 2.48) and a higher risk of death (hazard ratio: 1.96; 95% confidence interval: 1.04 to 3.67) than the lowest tertile (<0.74 mg/L) after multivariate adjustment. However, the highest tertile of creatinine showed neither an increased MACE rate nor a worse survival rate compared with the lowest tertile (both insignificant after multivariate adjustment). Pretransplantation cystatin C is superior to creatinine in predicting MACE and all-cause mortality in LT recipients with normal creatinine, and it may assist further risk stratification that cannot be achieved with creatinine alone. Copyright © 2018 Elsevier Inc. All rights reserved.

  15. Health-related quality-of-life parameters as independent prognostic factors in advanced or metastatic bladder cancer.

    PubMed

    Roychowdhury, D F; Hayden, A; Liepa, A M

    2003-02-15

    This retrospective analysis examined prognostic significance of health-related quality-of-life (HRQoL) parameters combined with baseline clinical factors on outcomes (overall survival, time to progressive disease, and time to treatment failure) in bladder cancer. Outcome and HRQoL (European Organization for Research and Treatment of Cancer Quality of Life Questionnaire C30) data were collected prospectively in a phase III study assessing gemcitabine and cisplatin versus methotrexate, vinblastine, doxorubicin, and cisplatin in locally advanced or metastatic bladder cancer. Prespecified baseline clinical factors (performance status, tumor-node-metastasis staging, visceral metastases [VM], alkaline phosphatase [AP] level, number of metastatic sites, prior radiotherapy, disease measurability, sex, time from diagnosis, and sites of disease) and selected HRQoL parameters (global QoL; all functional scales; symptoms: pain, fatigue, insomnia, dyspnea, anorexia) were evaluated using Cox's proportional hazards model. Factors with individual prognostic value (P <.05) on outcomes in univariate models were assessed for joint prognostic value in a multivariate model. A final model was developed using a backward selection strategy. Patients with baseline HRQoL were included (364 of 405, 90%). The final model predicted longer survival with low/normal AP levels, no VM, high physical functioning, low role functioning, and no anorexia. Positive prognostic factors for time to progressive disease were good performance status, low/normal AP levels, no VM, and minimal fatigue; for time to treatment failure, they were low/normal AP levels, minimal fatigue, and no anorexia. Global QoL was a significant predictor of outcome in univariate analyses but was not retained in the multivariate model. HRQoL parameters are independent prognostic factors for outcome in advanced bladder cancer; their prognostic importance needs further evaluation.

  16. [Predictive value of pre-treatment hypoalbuminemia in prognosis of resected colorectal cancer].

    PubMed

    Borda, Fernando; Borda, Ana; Jiménez, Javier; Zozaya, José Manuel; Prieto, Carlos; Gómez, Marta; Urman, Jesús; Ibáñez, Berta

    2014-05-01

Albuminemia is part of the antitumoral systemic inflammatory response. We therefore analyzed its possible value in establishing the preoperative prognosis of colorectal carcinoma (CRC). We conducted a retrospective, observational study of a series of consecutive patients who underwent CRC resection. Univariate and multivariate analyses of survival curves were performed in patients with and without pre-treatment hypoalbuminemia (<3.5 g/dl), both in the overall group of patients and in the subgroup of those with pTNM stage II tumors. In addition, we compared the 5-year tumor-related mortality in patients with and without hypoalbuminemia. A total of 207 patients were reviewed (median follow-up: 81 months). In the overall multivariate analysis, survival curves were better in patients with normal albumin levels than in those with hypoalbuminemia (HR=2.82; 95% CI=[1.54-5.19]; P=.001). This better prognostic value of normal albumin levels was also significant in pTNM stage II tumors (HR=3.76; 95% CI=[1.40-10.08]; P=.009). The 5-year mortality index was lower in patients with normal albumin levels: overall series, 18.8% vs 42.9% (OR=3.24; 95% CI=[1.48-7.12]; P=.001); pTNM stage II, 13.3% vs 44.4% (OR=5.2; 95% CI=[1.36-20.34]; P=.004). Pre-treatment hypoalbuminemia (<3.5 g/dl) was independently related to shorter survival after tumor resection, both in the overall series of patients and in pTNM stage II CRC. If these results are confirmed, hypoalbuminemia would be a simple and significant marker of poor prognosis, available at initial diagnosis. Copyright © 2013 Elsevier España, S.L. and AEEH y AEG. All rights reserved.

  17. Coping with matrix effects in headspace solid phase microextraction gas chromatography using multivariate calibration strategies.

    PubMed

    Ferreira, Vicente; Herrero, Paula; Zapata, Julián; Escudero, Ana

    2015-08-14

SPME is extremely sensitive to experimental parameters affecting liquid-gas and gas-solid distribution coefficients. Our aims were to measure the weights of these factors and to design a multivariate strategy, based on the addition of a pool of internal standards, to minimize matrix effects. Synthetic but real-like wines containing selected analytes and variable amounts of ethanol, non-volatile constituents, and major volatile compounds were prepared following a factorial design. The ANOVA study revealed that, even with strong matrix dilution, matrix effects are important and additive, with non-significant interaction effects, and that the presence of major volatile constituents is the dominant factor. A single internal standard provided a robust calibration for 15 of the 47 analytes. Two different multivariate calibration strategies based on Partial Least Squares Regression were then run to build calibration functions, based on 13 different internal standards, able to cope with matrix effects. The first is based on the calculation of Multivariate Internal Standards (MIS), linear combinations of the normalized signals of the 13 internal standards, which provide the expected area of a given unit of analyte present in each sample. The second strategy is a direct calibration relating concentration to the 13 relative areas measured in each sample for each analyte. Overall, 47 different compounds can be reliably quantified in a single fully automated method with overall uncertainties better than 15%. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Comparative evaluation of spectroscopic models using different multivariate statistical tools in a multicancer scenario

    NASA Astrophysics Data System (ADS)

    Ghanate, A. D.; Kothiwale, S.; Singh, S. P.; Bertrand, Dominique; Krishna, C. Murali

    2011-02-01

Cancer is now recognized as one of the major causes of morbidity and mortality. Histopathological diagnosis, the gold standard, is shown to be subjective, time consuming, prone to interobserver disagreement, and often fails to predict prognosis. Optical spectroscopic methods are being contemplated as adjuncts or alternatives to conventional cancer diagnostics. The most important aspect of these approaches is their objectivity, and multivariate statistical tools play a major role in realizing it. However, rigorous evaluation of the robustness of spectral models is a prerequisite. The utility of Raman spectroscopy in the diagnosis of cancers has been well established. Until now, the specificity and applicability of spectral models have been evaluated only for specific cancer types. In this study, we have evaluated the utility of spectroscopic models representing normal and malignant tissues of the breast, cervix, colon, larynx, and oral cavity in a broader perspective, using different multivariate tests. The limit test, which was used in our earlier study, gave high sensitivity but suffered from poor specificity. The performance of other methods, such as factorial discriminant analysis and partial least squares discriminant analysis, is on par with that of more complex nonlinear methods such as decision trees, but they provide very little information about the classification model. This comparative study thus demonstrates not just the efficacy of Raman spectroscopic models but also the applicability and limitations of different multivariate tools for discrimination under complex conditions such as the multicancer scenario.

  19. Accelerated Fermentation of Brewer's Wort by Saccharomyces carlsbergensis

    PubMed Central

    Porter, Sookie C.

    1975-01-01

    A rapid procedure for wort fermentation with Saccharomyces carlsbergensis at 12 C is described. Fermentation time was reduced from 7 to 4 days with normal inoculum by shaking. Increasing the inoculation to 5 to 10 times normal and shaking resulted in complete fermentation in 3 days. Maximum yeast population was reached rapidly with the large inocula, but fermentation proceeded at approximately the same rate when inoculations in excess of four times the normal were used. Similar results were obtained with both small-scale (100 ml) and microbrew (2.4 liters) fermentations. PMID:16350046

  20. Child Behavior Research. A Survey of British Research Into Child Psychiatric Disorder and Normal Social Development. A Report to the MRC Child Psychiatry Sub-Committee.

    ERIC Educational Resources Information Center

    Shaffer, D., Comp.

    Approximately 250 abstracts of currently active (1975-1976) British research into child psychiatric disorder and normal social development are presented. It is explained that the information was gathered from a 1974 survey of research and education organizations, child psychiatrists at medical schools, and the heads of academic departments of…

  1. The Applicability of Nonlinear Systems Dynamics Chaos Measures to Cardiovascular Physiology Variables

    NASA Technical Reports Server (NTRS)

    Hooker, John C.

    1991-01-01

    Three measures of nonlinear chaos (fractal dimension, Approximate Entropy (ApEn), and Lyapunov exponents) were studied as potential measures of cardiovascular condition. It is suggested that these measures have potential in the assessment of cardiovascular condition in environments of normal cardiovascular stress (normal gravity on the Earth surface), cardiovascular deconditioning (microgravity of space), and increased cardiovascular stress (lower body negative pressure (LBNP) treatments).

  2. 76 FR 69720 - Don Pedro Hydro, LLC; Moccasin Pumped Storage, LLC; Notice of Competing Preliminary Permit...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-09

    ... storage capacity of 25,000 acre-feet and a surface area of 241 acres at maximum normal water surface... penstocks; (4) a powerhouse with four 250 MW pump/turbines having an installed capacity of approximately... capacity of 25,000 acre-feet and a surface area of 240 acres at maximum normal water surface elevation of 1...

  3. [Near infrared spectroscopy based process trajectory technology and its application in monitoring and controlling of traditional Chinese medicine manufacturing process].

    PubMed

    Li, Wen-Long; Qu, Hai-Bin

    2016-10-01

In this paper, the principle of NIRS (near infrared spectroscopy)-based process trajectory technology is introduced. The main steps of the technique include: ① in-line collection of process spectra across the different process stages; ② unfolding of the 3-D process spectra; ③ determination of the process trajectories and their normal limits; and ④ monitoring of new batches with the established MSPC (multivariate statistical process control) models. Applications of the technology to chemical and biological medicines are reviewed briefly. Through a comprehensive introduction of our feasibility research on monitoring traditional Chinese medicine manufacturing processes using NIRS-based multivariate process trajectories, several important practical problems in urgent need of solutions are identified, and the application prospects of NIRS-based process trajectory technology are discussed. Copyright© by the Chinese Pharmaceutical Association.

  4. Multivariate normal maximum likelihood with both ordinal and continuous variables, and data missing at random.

    PubMed

    Pritikin, Joshua N; Brick, Timothy R; Neale, Michael C

    2018-04-01

    A novel method for the maximum likelihood estimation of structural equation models (SEM) with both ordinal and continuous indicators is introduced using a flexible multivariate probit model for the ordinal indicators. A full information approach ensures unbiased estimates for data missing at random. Exceeding the capability of prior methods, up to 13 ordinal variables can be included before integration time increases beyond 1 s per row. The method relies on the axiom of conditional probability to split apart the distribution of continuous and ordinal variables. Due to the symmetry of the axiom, two similar methods are available. A simulation study provides evidence that the two similar approaches offer equal accuracy. A further simulation is used to develop a heuristic to automatically select the most computationally efficient approach. Joint ordinal continuous SEM is implemented in OpenMx, free and open-source software.
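
    The "axiom of conditional probability" split described in this record can be illustrated for a single bivariate case. The sketch below is a hedged, self-contained toy (the parameter values and thresholds are invented; this is not the OpenMx implementation): the joint likelihood of one continuous and one ordinal indicator is factored as the continuous marginal density times the conditional probability of the ordinal category.

    ```python
    # Hedged sketch: joint likelihood of one continuous and one ordinal indicator
    # under a bivariate normal latent model, split via conditional probability.
    # All parameter values below are illustrative assumptions.
    import numpy as np
    from scipy.stats import norm

    def joint_loglik(y, k, mu, sigma, rho, thresholds):
        """log P(Y = y, Z = category k), where Z discretizes a latent normal."""
        tau = np.concatenate(([-np.inf], thresholds, [np.inf]))
        # marginal density of the continuous indicator
        lp_y = norm.logpdf(y, loc=mu[0], scale=sigma[0])
        # conditional law of the latent ordinal variable given Y = y
        mu_c = mu[1] + rho * sigma[1] / sigma[0] * (y - mu[0])
        sd_c = sigma[1] * np.sqrt(1.0 - rho ** 2)
        p_z = norm.cdf(tau[k + 1], mu_c, sd_c) - norm.cdf(tau[k], mu_c, sd_c)
        return lp_y + np.log(p_z)

    # three ordinal categories defined by two thresholds
    print(joint_loglik(0.5, 1, mu=[0, 0], sigma=[1, 1], rho=0.6,
                       thresholds=[-0.5, 0.5]))
    ```

    Summing the joint probability over all ordinal categories recovers the continuous marginal density, which is a convenient internal consistency check for this factorization.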

  5. Discrimination of inflammatory bowel disease using Raman spectroscopy and linear discriminant analysis methods

    NASA Astrophysics Data System (ADS)

    Ding, Hao; Cao, Ming; DuPont, Andrew W.; Scott, Larry D.; Guha, Sushovan; Singhal, Shashideep; Younes, Mamoun; Pence, Isaac; Herline, Alan; Schwartz, David; Xu, Hua; Mahadevan-Jansen, Anita; Bi, Xiaohong

    2016-03-01

    Inflammatory bowel disease (IBD) is an idiopathic disease that is typically characterized by chronic inflammation of the gastrointestinal tract. Recently much effort has been devoted to the development of novel diagnostic tools that can assist physicians for fast, accurate, and automated diagnosis of the disease. Previous research based on Raman spectroscopy has shown promising results in differentiating IBD patients from normal screening cases. In the current study, we examined IBD patients in vivo through a colonoscope-coupled Raman system. Optical diagnosis for IBD discrimination was conducted based on full-range spectra using multivariate statistical methods. Further, we incorporated several feature selection methods in machine learning into the classification model. The diagnostic performance for disease differentiation was significantly improved after feature selection. Our results showed that improved IBD diagnosis can be achieved using Raman spectroscopy in combination with multivariate analysis and feature selection.

  6. Method for enhanced accuracy in predicting peptides using liquid separations or chromatography

    DOEpatents

    Kangas, Lars J.; Auberry, Kenneth J.; Anderson, Gordon A.; Smith, Richard D.

    2006-11-14

    A method for predicting the elution time of a peptide in chromatographic and electrophoretic separations by first providing a data set of known elution times of known peptides, then creating a plurality of vectors, each vector having a plurality of dimensions, and each dimension representing the elution time of amino acids present in each of these known peptides from the data set. The elution time of any protein is then be predicted by first creating a vector by assigning dimensional values for the elution time of amino acids of at least one hypothetical peptide and then calculating a predicted elution time for the vector by performing a multivariate regression of the dimensional values of the hypothetical peptide using the dimensional values of the known peptides. Preferably, the multivariate regression is accomplished by the use of an artificial neural network and the elution times are first normalized using a transfer function.
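
    The idea above, representing each peptide as a vector of amino-acid contributions and fitting elution times by multivariate regression, can be sketched with synthetic data. This is a hedged illustration only (the data, coefficients, and the use of ordinary least squares in place of the artificial neural network are assumptions, not the patented method):

    ```python
    # Hedged sketch (synthetic data; not the patented method): predict peptide
    # elution time from a 20-dimensional amino-acid composition vector by
    # multivariate linear regression instead of the neural network.
    import numpy as np

    AA = "ACDEFGHIKLMNPQRSTVWY"

    def composition(peptide):
        """Count vector of amino-acid occurrences, one dimension per residue type."""
        return np.array([peptide.count(a) for a in AA], dtype=float)

    rng = np.random.default_rng(0)
    true_coef = rng.normal(1.0, 0.3, size=20)     # per-residue contribution (invented)
    peptides = ["".join(rng.choice(list(AA), size=rng.integers(6, 15)))
                for _ in range(200)]
    X = np.array([composition(p) for p in peptides])
    t = X @ true_coef + rng.normal(0, 0.05, size=len(peptides))  # "known" times

    coef, *_ = np.linalg.lstsq(X, t, rcond=None)  # fit on the known peptides

    def predict(peptide):
        """Predicted elution time for a new (hypothetical) peptide."""
        return composition(peptide) @ coef

    print(predict("ACDEFGHIK"))
    ```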

  7. Fitting Nonlinear Ordinary Differential Equation Models with Random Effects and Unknown Initial Conditions Using the Stochastic Approximation Expectation-Maximization (SAEM) Algorithm.

    PubMed

    Chow, Sy-Miin; Lu, Zhaohua; Sherwood, Andrew; Zhu, Hongtu

    2016-03-01

    The past decade has evidenced the increased prevalence of irregularly spaced longitudinal data in social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation-maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed.
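
    The benchmark model named above, the Van der Pol oscillator, can be simulated at irregularly spaced observation times in a few lines. This is a hedged sketch (the damping parameter mu = 1.0, the initial condition, and the sampling scheme are assumptions, not values from the paper):

    ```python
    # Hedged sketch: the Van der Pol oscillator used as the benchmark model,
    # simulated at irregularly spaced observation times (mu = 1.0 and the
    # sampling scheme are assumptions, not values from the paper).
    import numpy as np
    from scipy.integrate import solve_ivp

    def van_der_pol(t, state, mu):
        x, v = state
        return [v, mu * (1.0 - x ** 2) * v - x]

    rng = np.random.default_rng(3)
    t_obs = np.sort(rng.uniform(0.0, 20.0, size=40))  # irregularly spaced times
    sol = solve_ivp(van_der_pol, (0.0, 20.0), [2.0, 0.0],
                    args=(1.0,), t_eval=t_obs, rtol=1e-8)
    print(sol.y[0][:5])                               # position at first 5 times
    ```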

  8. Genetic algorithm based input selection for a neural network function approximator with applications to SSME health monitoring

    NASA Technical Reports Server (NTRS)

    Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.

    1991-01-01

A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space-searching capabilities of genetic algorithms, they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.

  9. FITTING NONLINEAR ORDINARY DIFFERENTIAL EQUATION MODELS WITH RANDOM EFFECTS AND UNKNOWN INITIAL CONDITIONS USING THE STOCHASTIC APPROXIMATION EXPECTATION–MAXIMIZATION (SAEM) ALGORITHM

    PubMed Central

    Chow, Sy- Miin; Lu, Zhaohua; Zhu, Hongtu; Sherwood, Andrew

    2014-01-01

    The past decade has evidenced the increased prevalence of irregularly spaced longitudinal data in social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation–maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed. PMID:25416456

  10. Sexual satisfaction and sexual health among university students in the United States.

    PubMed

    Higgins, Jenny A; Mullinax, Margo; Trussell, James; Davidson, J Kenneth; Moore, Nelwyn B

    2011-09-01

    Despite the World Health Organization's definition of sexual health as a state of well-being, virtually no public health research has examined sexual well-being outcomes, including sexual satisfaction. Emerging evidence suggests that sexual well-being indicators are associated with more classic measures of healthy sexual behaviors. We surveyed 2168 university students in the United States and asked them to rate their physiological and psychological satisfaction with their current sexual lives. Many respondents reported that they were either satisfied (approximately half) or very satisfied (approximately one third). In multivariate analyses, significant (P < .05) correlates of both physiological and psychological satisfaction included sexual guilt, sexual self-comfort, self-esteem (especially among men), relationship status, and sexual frequency. To enhance sexual well-being, public health practitioners should work to improve sexual self-comfort, alleviate sexual guilt, and promote longer term relationships.

  11. Telomere erosion varies during in vitro aging of normal human fibroblasts from young and adult donors.

    PubMed

    Figueroa, R; Lindenmaier, H; Hergenhahn, M; Nielsen, K V; Boukamp, P

    2000-06-01

    The life span of normal fibroblasts in vitro (Hayflick limit) depends on donor age, and telomere shortening has been proposed as a potential mechanism. By quantitative fluorescence in situ hybridization and Southern blot analysis, we show progressive telomere loss to about 5 kb mean telomere restriction fragment length in fibroblasts from two adult donors within 40 population doublings, whereas in fibroblasts from two infant donors, telomere erosion is reduced, leaving a mean telomere restriction fragment length of approximately 7 kb at senescence (after approximately 60 population doublings). Aging of fibroblasts from both infant and adult donors was not accompanied by chromosomal abnormalities but was correlated with increased telomere repeat-binding factor 2 expression at both the protein and transcriptional level.

  12. Few-cycle pulse generation in an x-ray free-electron laser.

    PubMed

    Dunning, D J; McNeil, B W J; Thompson, N R

    2013-03-08

A method is proposed to generate trains of few-cycle x-ray pulses from a free-electron laser (FEL) amplifier via a compact "afterburner" extension consisting of several few-period undulator sections separated by electron chicane delays. Simulations show that in the hard x-ray regime (wavelength ~0.1 nm; photon energy ~10 keV), and with peak powers approaching normal FEL saturation (GW) levels, root-mean-square pulse durations of 700 zs may be obtained. This is approximately two orders of magnitude shorter than is possible for normal FEL amplifier operation. The spectrum is discretely multichromatic, with a bandwidth envelope increased by approximately two orders of magnitude over unseeded FEL amplifier operation. Such a source would significantly enhance research opportunity in atomic dynamics and push capability toward nuclear dynamics.

  13. The Anomalous Change in the QBO in 2015-2016

    NASA Technical Reports Server (NTRS)

    Newman, P. A.; Coy, L.; Lait, L. R.

    2016-01-01

The quasi-biennial oscillation (QBO) is a tropical lower-stratospheric, downward-propagating zonal wind variation with an average period of approximately 28 months. The QBO has been continuously documented since 1953. Here we describe the evolution of the QBO during the Northern Hemisphere winter of 2015-16 using radiosonde observations and meteorological reanalyses. Normally, the QBO would show a steady downward propagation of the westerly phase. In 2015-16, there was an anomalous upward displacement of this westerly phase from approximately 30 hPa to 15 hPa. These westerlies impinged on, or “cut off,” the normal downward propagation of the easterly phase. In addition, easterly winds developed at 40 hPa. Comparisons to tropical wind statistics for the 1953-present record demonstrate that this 2015-16 QBO disruption is unprecedented.

  14. Mild clinical involvement in two males with a large FMR1 premutation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Hagerman, R.; O'Connor, R.; Staley, L.

    1994-09-01

Both male and female individuals who carry the FMR1 premutation are considered to be clinically unaffected and have been reported to have normal transcription of their FMR1 gene and normal FMR1 protein (FMRP) production. We have evaluated two males who are mildly affected clinically with features of fragile X syndrome and demonstrate a large premutation on DNA studies. The first patient is a 2-year, 8-month-old boy who demonstrated the fragile X chromosome in 3% of his lymphocytes on cytogenetic testing. His physical features include mildly prominent ears and hyperextensible finger joints. He has language delays along with behavioral problems including tantrums and attention deficit. Developmental testing revealed a mental scale of 116 on the Bayley Scales of Infant Development, which is in the normal range. DNA testing demonstrated a premutation with 161 CGG repeats. This premutation was methylated in a small percentage of his cells (<2%). These findings were observed in both blood leukocytes and buccal cells. Protein studies of transformed lymphocytes from this boy showed approximately 50 to 70% of the normal level of FMRP. The second patient is a 14-year-old male who was cytogenetically negative for fragile X expression. His physical exam demonstrates a long face, a high palate, and macroorchidism (testicular volume of approximately 35 ml). His overall full-scale IQ on the WISC-III is 73. He has language deficits and visual-spatial perceptual deficits which have caused significant learning problems in school. Behaviorally he has problems with shyness and social anxiety, although he does not have attention deficit hyperactivity disorder. DNA testing revealed an FMR1 mutation of approximately 210 CGG repeats that is methylated in 4.7% of his cells.

  15. Cooling and Trapping of Neutral Atoms

    DTIC Science & Technology

    2009-04-30

Schrodinger equation in which the absence of the rotating wave approximation accounts for the two frequencies [18]. This result can be described in...depict this energy conservation process is the Jaynes-Cummings view, where the light field can be described as a number state. Then it becomes clear...of the problem under consideration. Find a suitable approximation for the normal modes; the simpler, the better. Decide how to model the light

  16. Improving the chi-squared approximation for bivariate normal tolerance regions

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.

    1993-01-01

    Let X be a two-dimensional random variable distributed according to N2(mu,Sigma) and let bar-X and S be the respective sample mean and covariance matrix calculated from N observations of X. Given a containment probability beta and a level of confidence gamma, we seek a number c, depending only on N, beta, and gamma such that the ellipsoid R = (x: (x - bar-X)'S(exp -1) (x - bar-X) less than or = c) is a tolerance region of content beta and level gamma; i.e., R has probability gamma of containing at least 100 beta percent of the distribution of X. Various approximations for c exist in the literature, but one of the simplest to compute -- a multiple of the ratio of certain chi-squared percentage points -- is badly biased for small N. For the bivariate normal case, most of the bias can be removed by simple adjustment using a factor A which depends on beta and gamma. This paper provides values of A for various beta and gamma so that the simple approximation for c can be made viable for any reasonable sample size. The methodology provides an illustrative example of how a combination of Monte-Carlo simulation and simple regression modelling can be used to improve an existing approximation.
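
    The methodology described above, a simple ratio-of-chi-squared starting value whose achieved confidence level is then checked by Monte Carlo simulation, can be sketched as follows. This is a hedged illustration: the exact ratio form and all constants below are assumptions, not taken from the paper.

    ```python
    # Hedged sketch: Monte Carlo check of a candidate bivariate normal tolerance
    # factor c. The chi-squared-ratio starting value below is one simple form of
    # the kind of approximation the abstract calls biased; its exact form here
    # is an assumption, not taken from the paper.
    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(1)
    N, p, beta, gamma = 20, 2, 0.90, 0.95

    # simple ratio-of-chi-squared-percentage-points starting value
    c0 = (N - 1) * chi2.ppf(beta, p) / chi2.ppf(1 - gamma, N - 1)

    def coverage(c, reps=2000, inner=4000):
        """Fraction of samples whose ellipsoid holds >= beta of N2(0, I)."""
        Z = rng.standard_normal((inner, p))  # one large sample from the true law
        hits = 0
        for _ in range(reps):
            X = rng.standard_normal((N, p))
            xbar, S = X.mean(axis=0), np.cov(X, rowvar=False)
            Sinv = np.linalg.inv(S)
            d = np.einsum('ij,jk,ik->i', Z - xbar, Sinv, Z - xbar)
            hits += (d <= c).mean() >= beta  # ellipsoid content >= beta?
        return hits / reps

    print(c0, coverage(c0))  # achieved level, to be compared with gamma
    ```

    Comparing the simulated achieved level against the nominal gamma for a grid of N, beta, and gamma values is exactly the kind of simulation-plus-regression adjustment the abstract describes.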

  17. Approximate symmetries of Hamiltonians

    NASA Astrophysics Data System (ADS)

    Chubb, Christopher T.; Flammia, Steven T.

    2017-08-01

We explore the relationship between approximate symmetries of a gapped Hamiltonian and the structure of its ground space. We start by considering approximate symmetry operators, defined as unitary operators whose commutators with the Hamiltonian have norms that are sufficiently small. We show that approximate symmetry operators can be restricted to the ground space while approximately preserving certain mutual commutation relations. We generalize the Stone-von Neumann theorem to matrices that approximately satisfy the canonical (Heisenberg-Weyl-type) commutation relations and use this to show that approximate symmetry operators can certify the degeneracy of the ground space even though they only approximately form a group. Importantly, the notions of "approximate" and "small" are all independent of the dimension of the ambient Hilbert space and depend only on the degeneracy in the ground space. Our analysis additionally holds for any gapped band of sufficiently small width in the excited spectrum of the Hamiltonian, and we discuss applications of these ideas to topological quantum phases of matter and topological quantum error correcting codes. Finally, in our analysis, we also provide an exponential improvement upon bounds concerning the existence of shared approximate eigenvectors of approximately commuting operators under an added normality constraint, which may be of independent interest.

  18. Survey of Cyber Moving Target Techniques

    DTIC Science & Technology

    2013-09-25

Description: Details: The authors propose a very simple form of multivariant execution with two replicas, where one replica runs with the stack growing upwards and the other runs with the stack growing down. Normally any single architecture only supports the stack growing in one direction, but the...

  19. An Application of Discriminant Analysis to the Selection of Software Cost Estimating Models.

    DTIC Science & Technology

    1984-09-01

the PRICE S User's Manual (29:111-25) was used with a slight modification. Based on the experience and advice of Captain Joe Dean, Electronic System...this study, and EXP is the expansion factor listed in the PRICE S User's Manual. Another important factor needing explanation is development cost...coefficients and a unique constant. According to the SPSS manual (26:445), "Under the assumption of a multivariate normal distribution, the

  20. Multivariate classification of the infrared spectra of cell and tissue samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haaland, D.M.; Jones, H.D.; Thomas, E.V.

    1997-03-01

Infrared microspectroscopy of biopsied canine lymph cells and tissue was performed to investigate the possibility of using IR spectra coupled with multivariate classification methods to classify the samples as normal, hyperplastic, or neoplastic (malignant). IR spectra were obtained in transmission mode through BaF2 windows and in reflection mode from samples prepared on gold-coated microscope slides. Cytology and histopathology samples were prepared by a variety of methods to identify the optimal methods of sample preparation. Cytospinning procedures that yielded a monolayer of cells on the BaF2 windows produced a limited set of IR transmission spectra. These transmission spectra were converted to absorbance and formed the basis for a classification rule that yielded 100% correct classification in a cross-validated context. Classifications of normal, hyperplastic, and neoplastic cell sample spectra were achieved by using both partial least-squares (PLS) and principal component regression (PCR) classification methods. Linear discriminant analysis applied to principal components obtained from the spectral data yielded a small number of misclassifications. PLS weight loading vectors yield valuable qualitative insight into the molecular changes that are responsible for the success of the infrared classification. These successful classification results show promise for assisting pathologists in the diagnosis of cell types and offer future potential for in vivo IR detection of some types of cancer. Copyright © 1997 Society for Applied Spectroscopy.
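
    The PCR classification idea used in this record can be sketched on synthetic "spectra". This is a hedged, numpy-only illustration (the class names, peak positions, and noise level are invented, not the study's data): project centered spectra onto the leading principal components, then regress one-hot class indicators on the scores.

    ```python
    # Hedged sketch (synthetic "spectra", numpy only): principal component
    # regression (PCR) classification of three invented tissue classes; the
    # peak positions and noise level are assumptions, not the study's data.
    import numpy as np

    rng = np.random.default_rng(2)
    n_wn, per_class = 200, 20
    labels = ["normal", "hyperplastic", "neoplastic"]

    X, y = [], []
    for k, _label in enumerate(labels):
        peak = np.exp(-((np.arange(n_wn) - 80 - 15 * k) ** 2) / 300.0)
        for _ in range(per_class):
            X.append(peak + rng.normal(0, 0.05, n_wn))  # fake absorbance spectrum
            y.append(k)
    X, y = np.array(X), np.array(y)

    # PCR: project centered spectra onto the leading principal components,
    # then regress one-hot class indicators on the scores
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:5].T                   # scores on the first 5 PCs
    B, *_ = np.linalg.lstsq(scores, np.eye(3)[y], rcond=None)

    pred = (scores @ B).argmax(axis=1)       # class with the largest fitted value
    print("training accuracy:", (pred == y).mean())
    ```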

Top