Sample records for classical regression analysis

  1. Classical Statistics and Statistical Learning in Imaging Neuroscience

    PubMed Central

    Bzdok, Danilo

    2017-01-01

    Brain-imaging research has predominantly generated insight by means of classical statistics, including regression-type analyses and null-hypothesis testing using the t-test and ANOVA. In recent years, statistical learning methods have enjoyed increasing popularity, especially for applications to rich and complex data, including cross-validated out-of-sample prediction using pattern classification and sparsity-inducing regression. This concept paper discusses the implications of inferential justifications and algorithmic methodologies in common data analysis scenarios in neuroimaging. It retraces how classical statistics and statistical learning originated from different historical contexts, build on different theoretical foundations, make different assumptions, and evaluate different outcome metrics to permit differently nuanced conclusions. The present considerations should help reduce current confusion between model-driven classical hypothesis testing and data-driven learning algorithms for investigating the brain with imaging techniques. PMID:29056896

  2. Quantifying the statistical importance of utilizing regression over classic energy intensity calculations for tracking efficiency improvements in industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nimbalkar, Sachin U.; Wenning, Thomas J.; Guo, Wei

    In the United States, manufacturing facilities accounted for about 32% of total domestic energy consumption in 2014. Robust energy tracking methodologies are critical to understanding energy performance in manufacturing facilities. Due to its simplicity and intuitiveness, the classic energy intensity method (i.e., the ratio of total energy use to total production) is the most widely adopted. However, the classic energy intensity method does not take into account the variation of other relevant parameters (e.g., product type, feedstock type, weather). Furthermore, the energy intensity method assumes that a facility's base energy consumption (energy use at zero production) is zero, which rarely holds true. Therefore, it is commonly recommended to utilize regression models rather than the energy intensity approach for tracking improvements at the facility level. Unfortunately, many energy managers have difficulties understanding why regression models are statistically better than the classic energy intensity method. While anecdotes and qualitative information may convince some, many have major reservations about the accuracy of regression models and whether it is worth the time and effort to gather data and build quality regression models. This paper explains why regression models are theoretically and quantitatively more accurate for tracking energy performance improvements. Based on the analysis of data from 114 manufacturing plants over 12 years, this paper presents quantitative results on the importance of utilizing regression models over the energy intensity methodology. This paper also documents scenarios where regression models do not have significant relevance over the energy intensity method.
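
    The statistical point above hinges on the intensity ratio's implicit zero intercept. The following is a minimal sketch of the contrast on synthetic plant data (all numbers invented), using Python with statsmodels:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        production = rng.uniform(50, 150, size=48)     # 48 months of production (units)
        base_load = 200.0                              # energy used even at zero production
        energy = base_load + 3.0 * production + rng.normal(0, 15, size=48)

        # Classic energy intensity assumes energy is proportional to production
        # (zero intercept), so it is biased whenever the base load is nonzero.
        intensity = energy.sum() / production.sum()

        # The regression model estimates base load and marginal energy separately.
        fit = sm.OLS(energy, sm.add_constant(production)).fit()
        print(f"intensity: {intensity:.2f} per unit")
        print(f"regression: base={fit.params[0]:.1f}, marginal={fit.params[1]:.2f} per unit")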

  3. The Prediction Properties of Inverse and Reverse Regression for the Simple Linear Calibration Problem

    NASA Technical Reports Server (NTRS)

    Parker, Peter A.; Vining, G. Geoffrey; Wilson, Sara R.; Szarka, John L., III; Johnson, Nels G.

    2010-01-01

    The calibration of measurement systems is a fundamental but under-studied problem within industrial statistics. The origins of this problem go back to basic chemical analysis based on NIST standards. In today's world these issues extend to mechanical, electrical, and materials engineering. Often, these new scenarios do not provide "gold standards" such as the standard weights provided by NIST. This paper considers the classic "forward regression followed by inverse regression" approach. In this approach the initial experiment treats the "standards" as the regressor and the observed values as the response to calibrate the instrument. The analyst then must invert the resulting regression model in order to use the instrument to make actual measurements in practice. This paper compares this classical approach to "reverse regression," which treats the standards as the response and the observed measurements as the regressor in the calibration experiment. Such an approach is intuitively appealing because it avoids the need for the inverse regression. However, it also violates some of the basic regression assumptions.
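
    A compact sketch of the two calibration strategies on made-up instrument data, with numpy's polyfit standing in for the regression fits:

        import numpy as np

        rng = np.random.default_rng(1)
        standards = np.linspace(1.0, 10.0, 10)                       # known reference values
        readings = 2.0 + 0.5 * standards + rng.normal(0, 0.05, 10)   # instrument output

        # Classical approach: regress readings on standards, then invert the fit.
        b1, b0 = np.polyfit(standards, readings, 1)
        new_reading = 4.2
        x_inverse = (new_reading - b0) / b1

        # Reverse regression: regress standards on readings and predict directly,
        # at the cost of treating the error-free standards as the response.
        c1, c0 = np.polyfit(readings, standards, 1)
        x_reverse = c0 + c1 * new_reading

        print(x_inverse, x_reverse)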

  4. Linear regression analysis and its application to multivariate chromatographic calibration for the quantitative analysis of two-component mixtures.

    PubMed

    Dinç, Erdal; Ozdemir, Abdil

    2005-01-01

    A multivariate chromatographic calibration technique was developed for the quantitative analysis of binary mixtures of enalapril maleate (EA) and hydrochlorothiazide (HCT) in tablets in the presence of losartan potassium (LST). The mathematical algorithm of the multivariate chromatographic calibration technique is based on linear regression equations constructed from the relationship between concentration and peak area at a five-wavelength set. The algorithm of this calibration model, which has a simple mathematical form, is briefly described. The approach is a powerful mathematical tool for optimum chromatographic multivariate calibration and for eliminating fluctuations arising from instrumental and experimental conditions. The multivariate chromatographic calibration involves reducing the multivariate linear regression functions to a univariate data set. The validation of the model was carried out by analyzing various synthetic binary mixtures and by using the standard addition technique. The developed calibration technique was applied to the analysis of real pharmaceutical tablets containing EA and HCT. The obtained results were compared with those obtained by a classical HPLC method. It was observed that the proposed multivariate chromatographic calibration gives better results than the classical HPLC method.

  5. Semi-automatic assessment of skin capillary density: proof of principle and validation.

    PubMed

    Gronenschild, E H B M; Muris, D M J; Schram, M T; Karaca, U; Stehouwer, C D A; Houben, A J H M

    2013-11-01

    Skin capillary density and recruitment have been proven to be relevant measures of microvascular function. Unfortunately, the assessment of skin capillary density from movie files is very time-consuming, since this is done manually. This impedes the use of this technique in large-scale studies. We aimed to develop a (semi-) automated assessment of skin capillary density. CapiAna (Capillary Analysis) is a newly developed semi-automatic image analysis application. The technique involves four steps: 1) movement correction, 2) selection of the frame range and positioning of the region of interest (ROI), 3) automatic detection of capillaries, and 4) manual correction of detected capillaries. To gain insight into the performance of the technique, skin capillary density was measured in twenty participants (ten women; mean age 56.2 [42-72] years). To investigate the agreement between CapiAna and the classic manual counting procedure, we used weighted Deming regression and Bland-Altman analyses. In addition, intra- and inter-observer coefficients of variation (CVs), and differences in analysis time were assessed. We found a good agreement between CapiAna and the classic manual method, with a Pearson's correlation coefficient (r) of 0.95 (P<0.001) and a Deming regression coefficient of 1.01 (95%CI: 0.91; 1.10). In addition, we found no significant differences between the two methods, with an intercept of the Deming regression of 1.75 (-6.04; 9.54), while the Bland-Altman analysis showed a mean difference (bias) of 2.0 (-13.5; 18.4) capillaries/mm². The intra- and inter-observer CVs of CapiAna were 2.5% and 5.6% respectively, while for the classic manual counting procedure these were 3.2% and 7.2%, respectively. Finally, the analysis time for CapiAna ranged between 25 and 35 min versus 80 and 95 min for the manual counting procedure. We have developed a semi-automatic image analysis application (CapiAna) for the assessment of skin capillary density, which agrees well with the classic manual counting procedure, is time-saving, and has better reproducibility than the classic manual counting procedure. As a result, the use of skin capillaroscopy is feasible in large-scale studies, which importantly extends the possibilities to perform microcirculation research in humans.
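
    Both agreement analyses used above are straightforward to reproduce. The sketch below assumes equal error variances in the Deming fit (a simplification of the weighted variant the study used) and invents the capillary counts:

        import numpy as np

        def deming(x, y, lam=1.0):
            """Deming regression; lam is the ratio of error variances (1 = orthogonal)."""
            sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
            sxy = np.cov(x, y, ddof=1)[0, 1]
            slope = ((syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2
                      + 4 * lam * sxy ** 2)) / (2 * sxy))
            return y.mean() - slope * x.mean(), slope      # intercept, slope

        def bland_altman(x, y):
            """Mean bias and 95% limits of agreement between two methods."""
            d = y - x
            return d.mean(), d.mean() - 1.96 * d.std(ddof=1), d.mean() + 1.96 * d.std(ddof=1)

        manual = np.array([40.0, 55.0, 62.0, 48.0, 71.0, 53.0])   # capillaries/mm², made up
        capiana = manual + np.random.default_rng(2).normal(2.0, 3.0, 6)
        print(deming(manual, capiana), bland_altman(manual, capiana))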

  6. Buying a Better Air Force

    DTIC Science & Technology

    2006-03-01

    identify if an explanatory variable may have been omitted due to model misspecification (Ramsey, 1979). The RESET test resulted in failure to... Prob > F 0.0094. This model was also regressed using Huber-White estimators. Again, the Ramsey RESET test was done to ensure relevant... Aircraft. Annapolis, MD: Naval Institute Press, 2004. Ramsey, J. B. "Tests for Specification Errors in Classical Least-Squares Regression Analysis

  7. A classical regression framework for mediation analysis: fitting one model to estimate mediation effects.

    PubMed

    Saunders, Christina T; Blume, Jeffrey D

    2017-10-26

    Mediation analysis explores the degree to which an exposure's effect on an outcome is diverted through a mediating variable. We describe a classical regression framework for conducting mediation analyses in which estimates of causal mediation effects and their variance are obtained from the fit of a single regression model. The vector of changes in exposure pathway coefficients, which we named the essential mediation components (EMCs), is used to estimate standard causal mediation effects. Because these effects are often simple functions of the EMCs, an analytical expression for their model-based variance follows directly. Given this formula, it is instructive to revisit the performance of routinely used variance approximations (e.g., delta method and resampling methods). Requiring the fit of only one model reduces the computation time required for complex mediation analyses and permits the use of a rich suite of regression tools that are not easily implemented on a system of three equations, as would be required in the Baron-Kenny framework. Using data from the BRAIN-ICU study, we provide examples to illustrate the advantages of this framework and compare it with the existing approaches. © The Author 2017. Published by Oxford University Press.
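
    For contrast with the single-model framework described above, this sketch computes mediation effects the traditional way, from separate regression fits on simulated data (all coefficients arbitrary):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(3)
        n = 500
        x = rng.normal(size=n)                        # exposure
        m = 0.5 * x + rng.normal(size=n)              # mediator
        y = 0.3 * x + 0.7 * m + rng.normal(size=n)    # outcome
        df = pd.DataFrame({"x": x, "m": m, "y": y})

        # Baron-Kenny-style product of coefficients, needing two model fits;
        # the EMC framework recovers the same quantities from a single fit.
        a = smf.ols("m ~ x", df).fit().params["x"]    # exposure -> mediator path
        outcome_fit = smf.ols("y ~ x + m", df).fit()
        direct = outcome_fit.params["x"]              # direct effect
        indirect = a * outcome_fit.params["m"]        # mediated effect
        print(direct, indirect)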

  8. Classical Testing in Functional Linear Models.

    PubMed

    Kong, Dehan; Staicu, Ana-Maria; Maity, Arnab

    2016-01-01

    We extend four tests common in classical regression - Wald, score, likelihood ratio and F tests - to functional linear regression, for testing the null hypothesis that there is no association between a scalar response and a functional covariate. Using functional principal component analysis, we re-express the functional linear model as a standard linear model, where the effect of the functional covariate can be approximated by a finite linear combination of the functional principal component scores. In this setting, we consider application of the four traditional tests. The proposed testing procedures are investigated theoretically for densely observed functional covariates when the number of principal components diverges. Using the theoretical distribution of the tests under the alternative hypothesis, we develop a procedure for sample size calculation in the context of functional linear regression. The four tests are further compared numerically for both densely and sparsely observed noisy functional data in simulation experiments and using two real data applications.
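
    A rough sketch of the testing recipe on simulated dense curves: approximate the functional covariate by its leading principal component scores, then apply a standard F test in the resulting linear model (truncation level chosen arbitrarily):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(4)
        n, t = 200, 50
        curves = rng.normal(size=(n, t)).cumsum(axis=1)   # densely observed covariate curves
        beta = np.sin(np.linspace(0, np.pi, t)) / t       # true coefficient function
        y = curves @ beta + rng.normal(0, 0.5, n)

        # Functional PCA via SVD of the centered curves; keep k = 4 score columns.
        centered = curves - curves.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        scores = centered @ vt[:4].T

        # F test of "no association": all score coefficients jointly zero.
        fit = sm.OLS(y, sm.add_constant(scores)).fit()
        print(fit.f_pvalue)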

  9. Classical Testing in Functional Linear Models

    PubMed Central

    Kong, Dehan; Staicu, Ana-Maria; Maity, Arnab

    2016-01-01

    We extend four tests common in classical regression - Wald, score, likelihood ratio and F tests - to functional linear regression, for testing the null hypothesis that there is no association between a scalar response and a functional covariate. Using functional principal component analysis, we re-express the functional linear model as a standard linear model, where the effect of the functional covariate can be approximated by a finite linear combination of the functional principal component scores. In this setting, we consider application of the four traditional tests. The proposed testing procedures are investigated theoretically for densely observed functional covariates when the number of principal components diverges. Using the theoretical distribution of the tests under the alternative hypothesis, we develop a procedure for sample size calculation in the context of functional linear regression. The four tests are further compared numerically for both densely and sparsely observed noisy functional data in simulation experiments and using two real data applications. PMID:28955155

  10. How to interpret the results of medical time series data analysis: Classical statistical approaches versus dynamic Bayesian network modeling.

    PubMed

    Onisko, Agnieszka; Druzdzel, Marek J; Austin, R Marshall

    2016-01-01

    Classical statistics is a well-established approach in the analysis of medical data. While the medical community seems to be familiar with the concept of a statistical analysis and its interpretation, the Bayesian approach, argued by many of its proponents to be superior to the classical frequentist approach, is still not well-recognized in the analysis of medical data. The goal of this study is to encourage data analysts to use the Bayesian approach, such as modeling with graphical probabilistic networks, as an insightful alternative to classical statistical analysis of medical data. This paper offers a comparison of two approaches to analysis of medical time series data: (1) classical statistical approaches, such as the Kaplan-Meier estimator and the Cox proportional hazards regression model, and (2) dynamic Bayesian network modeling. Our comparison is based on time series cervical cancer screening data collected at Magee-Womens Hospital, University of Pittsburgh Medical Center over 10 years. The main outcomes of our comparison are cervical cancer risk assessments produced by the three approaches (the Kaplan-Meier estimator, the Cox model, and the dynamic Bayesian network). However, our analysis also discusses several aspects of the comparison, such as modeling assumptions, model building, dealing with incomplete data, individualized risk assessment, results interpretation, and model validation. Our study shows that the Bayesian approach is (1) much more flexible in terms of modeling effort, and (2) offers an individualized risk assessment, which is more cumbersome for classical statistical approaches.
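
    For reference, the classical side of the comparison takes only a few lines in Python, assuming the lifelines package; the follow-up data and column names below are invented:

        import pandas as pd
        from lifelines import KaplanMeierFitter, CoxPHFitter

        # Toy follow-up data: years observed, event indicator, one screening covariate.
        df = pd.DataFrame({
            "years": [5, 8, 12, 3, 9, 14, 7, 11, 2, 10],
            "event": [1, 0, 1, 1, 0, 0, 1, 0, 1, 0],
            "abnormal_test": [1, 0, 1, 1, 0, 0, 1, 1, 1, 0],
        })

        kmf = KaplanMeierFitter().fit(df["years"], event_observed=df["event"])
        print(kmf.survival_function_.tail())

        cph = CoxPHFitter().fit(df, duration_col="years", event_col="event")
        print(cph.summary[["coef", "p"]])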

  11. Partial Least-Squares Regression and Spectral Residual Augmented Classical Least-Squares Chemometric Models for Stability-Indicating Analysis of Agomelatine and Its Degradation Products: A Comparative Study.

    PubMed

    Naguib, Ibrahim A; Abdelrahman, Maha M; El Ghobashy, Mohamed R; Ali, Nesma A

    2016-01-01

    Two accurate, sensitive, and selective stability-indicating methods were developed and validated for the simultaneous quantitative determination of agomelatine (AGM) and its forced degradation products (Deg I and Deg II), whether in pure form or in pharmaceutical formulations. Partial least-squares regression (PLSR) and spectral residual augmented classical least-squares (SRACLS) are the two chemometric models subjected to a comparative study based on UV spectral data in the range 215-350 nm. For proper analysis, a three-factor, four-level experimental design was established, resulting in a training set of 16 mixtures containing different ratios of the interfering species. An independent test set of eight mixtures was used to validate the prediction ability of the suggested models. The results indicate the ability of the multivariate calibration models to analyze AGM, Deg I, and Deg II with high selectivity and accuracy. The analysis results for the pharmaceutical formulations were statistically compared to those of a reference HPLC method, with no significant differences observed regarding accuracy and precision. The SRACLS model gives results comparable to the PLSR model; however, it retains the qualitative spectral information of the classical least-squares algorithm for the analyzed components.

  12. Quantile regression for the statistical analysis of immunological data with many non-detects.

    PubMed

    Eilers, Paul H C; Röder, Esther; Savelkoul, Huub F J; van Wijk, Roy Gerth

    2012-07-07

    Immunological parameters are hard to measure. A well-known problem is the occurrence of values below the detection limit, the non-detects. Non-detects are a nuisance, because classical statistical analyses, like ANOVA and regression, cannot be applied. The more advanced statistical techniques currently available for the analysis of datasets with non-detects can only be used if a small percentage of the data are non-detects. Quantile regression, a generalization of percentiles to regression models, models the median or higher percentiles and tolerates very high numbers of non-detects. We present a non-technical introduction and illustrate it with an application to real data from a clinical trial. We show that by using quantile regression, groups can be compared and that meaningful linear trends can be computed, even if more than half of the data consists of non-detects. Quantile regression is a valuable addition to the statistical methods that can be used for the analysis of immunological datasets with non-detects.
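
    A sketch of the idea using statsmodels' quantile regression on simulated data in which values below an invented detection limit are censored:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(5)
        n = 300
        group = rng.integers(0, 2, n)
        true = np.exp(rng.normal(1.0 + 0.5 * group, 1.0))   # underlying concentrations
        lod = 2.0                                           # hypothetical detection limit
        df = pd.DataFrame({"y": np.where(true < lod, lod, true), "group": group})

        # The 75th-percentile fit is untouched by the censoring as long as
        # the modelled quantile stays above the detection limit in each group.
        fit = smf.quantreg("y ~ group", df).fit(q=0.75)
        print(fit.params, fit.pvalues)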

  13. Assessing the Liquidity of Firms: Robust Neural Network Regression as an Alternative to the Current Ratio

    NASA Astrophysics Data System (ADS)

    de Andrés, Javier; Landajo, Manuel; Lorca, Pedro; Labra, Jose; Ordóñez, Patricia

    Artificial neural networks have proven to be useful tools for solving financial analysis problems such as financial distress prediction and audit risk assessment. In this paper we focus on the performance of robust (least absolute deviation-based) neural networks on measuring liquidity of firms. The problem of learning the bivariate relationship between the components (namely, current liabilities and current assets) of the so-called current ratio is analyzed, and the predictive performance of several modelling paradigms (namely, linear and log-linear regressions, classical ratios and neural networks) is compared. An empirical analysis is conducted on a representative database from the Spanish economy. Results indicate that classical ratio models are largely inadequate as a realistic description of the studied relationship, especially when used for predictive purposes. In a number of cases, especially when the analyzed firms are microenterprises, the linear specification is improved by considering the flexible non-linear structures provided by neural networks.

  14. Do classic blood biomarkers of JSLE identify active lupus nephritis? Evidence from the UK JSLE Cohort Study.

    PubMed

    Smith, E M D; Jorgensen, A L; Beresford, M W

    2017-10-01

    Background: Lupus nephritis (LN) affects up to 80% of juvenile-onset systemic lupus erythematosus (JSLE) patients. The value of commonly available biomarkers, such as anti-dsDNA antibodies, complement (C3/C4), ESR and full blood count parameters, in the identification of active LN remains uncertain. Methods: Participants from the UK JSLE Cohort Study, aged <16 years at diagnosis, were categorized as having active or inactive LN according to the renal domain of the British Isles Lupus Assessment Group score. Classic biomarkers (anti-dsDNA, C3, C4, ESR, CRP, haemoglobin, total white cells, neutrophils, lymphocytes, platelets and immunoglobulins) were assessed for their ability to identify active LN using binary logistic regression modeling, with the stepAIC function applied to select a final model. Receiver-operating curve analysis was used to assess diagnostic accuracy. Results: A total of 370 patients were recruited; 191 (52%) had active LN and 179 (48%) had inactive LN. Binary logistic regression modeling demonstrated a combination of ESR, C3, white cell count, neutrophils, lymphocytes and IgG to be best for the identification of active LN (area under the curve 0.724). Conclusions: At best, combining common classic blood biomarkers of lupus activity using multivariate analysis provides a 'fair' ability to identify active LN. Urine biomarkers were not included in these analyses. These results add to the concern that classic blood biomarkers are limited in monitoring discrete JSLE manifestations such as LN.
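
    The modeling step reduces to a multivariable logistic fit scored by ROC AUC. A minimal sketch with simulated stand-ins for the six retained biomarkers:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(6)
        n = 370
        X = rng.normal(size=(n, 6))   # stand-ins for ESR, C3, WCC, neutrophils, lymphocytes, IgG
        logit = X[:, 0] - 0.5 * X[:, 1]
        y = rng.binomial(1, 1 / (1 + np.exp(-logit)))          # active vs inactive LN

        model = LogisticRegression().fit(X, y)
        print(roc_auc_score(y, model.predict_proba(X)[:, 1]))  # the paper reports AUC 0.724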

  15. Regression Model Optimization for the Analysis of Experimental Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2009-01-01

    A candidate math model search algorithm was developed at Ames Research Center that determines a recommended math model for the multivariate regression analysis of experimental data. The search algorithm is applicable to classical regression analysis problems as well as wind tunnel strain gage balance calibration analysis applications. The algorithm compares the predictive capability of different regression models using the standard deviation of the PRESS residuals of the responses as a search metric. This search metric is minimized during the search. Singular value decomposition is used during the search to reject math models that lead to a singular solution of the regression analysis problem. Two threshold dependent constraints are also applied. The first constraint rejects math models with insignificant terms. The second constraint rejects math models with near-linear dependencies between terms. The math term hierarchy rule may also be applied as an optional constraint during or after the candidate math model search. The final term selection of the recommended math model depends on the regressor and response values of the data set, the user's function class combination choice, the user's constraint selections, and the result of the search metric minimization. A frequently used regression analysis example from the literature is used to illustrate the application of the search algorithm to experimental data.
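
    The search metric is cheap to compute: PRESS residuals fall out of a single fit via the hat-matrix identity e_i/(1 - h_ii), with no leave-one-out refits. A sketch (the helper name and interface are mine):

        import numpy as np

        def press_sd(X, y):
            """Standard deviation of PRESS residuals, e_i / (1 - h_ii), from one fit."""
            X1 = np.column_stack([np.ones(len(y)), X])          # add intercept column
            beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
            resid = y - X1 @ beta
            h = np.einsum("ij,ji->i", X1, np.linalg.pinv(X1))   # leverages h_ii
            return np.std(resid / (1 - h), ddof=1)

        # Candidate models with fewer or more terms would be ranked by this value.
        rng = np.random.default_rng(7)
        X = rng.normal(size=(60, 3))
        y = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(0, 0.2, 60)
        print(press_sd(X[:, :2], y), press_sd(X, y))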

  16. A Regression Framework for Effect Size Assessments in Longitudinal Modeling of Group Differences

    PubMed Central

    Feingold, Alan

    2013-01-01

    The use of growth modeling analysis (GMA)--particularly multilevel analysis and latent growth modeling--to test the significance of intervention effects has increased exponentially in prevention science, clinical psychology, and psychiatry over the past 15 years. Model-based effect sizes for differences in means between two independent groups in GMA can be expressed in the same metric (Cohen’s d) commonly used in classical analysis and meta-analysis. This article first reviews conceptual issues regarding calculation of d for findings from GMA and then introduces an integrative framework for effect size assessments that subsumes GMA. The new approach uses the structure of the linear regression model, from which effect sizes for findings from diverse cross-sectional and longitudinal analyses can be calculated with familiar statistics, such as the regression coefficient, the standard deviation of the dependent measure, and study duration. PMID:23956615

  17. Nonlinear multivariate and time series analysis by neural network methods

    NASA Astrophysics Data System (ADS)

    Hsieh, William W.

    2004-03-01

    Methods in multivariate statistical analysis are essential for working with large amounts of geophysical data, data from observational arrays, from satellites, or from numerical model output. In classical multivariate statistical analysis, there is a hierarchy of methods, starting with linear regression at the base, followed by principal component analysis (PCA) and finally canonical correlation analysis (CCA). A multivariate time series method, the singular spectrum analysis (SSA), has been a fruitful extension of the PCA technique. The common drawback of these classical methods is that only linear structures can be correctly extracted from the data. Since the late 1980s, neural network methods have become popular for performing nonlinear regression and classification. More recently, neural network methods have been extended to perform nonlinear PCA (NLPCA), nonlinear CCA (NLCCA), and nonlinear SSA (NLSSA). This paper presents a unified view of the NLPCA, NLCCA, and NLSSA techniques and their applications to various data sets of the atmosphere and the ocean (especially for the El Niño-Southern Oscillation and the stratospheric quasi-biennial oscillation). These data sets reveal that the linear methods are often too simplistic to describe real-world systems, with a tendency to scatter a single oscillatory phenomenon into numerous unphysical modes or higher harmonics, which can be largely alleviated in the new nonlinear paradigm.

  18. Estimating Causal Effects in Mediation Analysis Using Propensity Scores

    ERIC Educational Resources Information Center

    Coffman, Donna L.

    2011-01-01

    Mediation is usually assessed by a regression-based or structural equation modeling (SEM) approach that we refer to as the classical approach. This approach relies on the assumption that there are no confounders that influence both the mediator, "M", and the outcome, "Y". This assumption holds if individuals are randomly…

  19. Functional mixture regression.

    PubMed

    Yao, Fang; Fu, Yuejiao; Lee, Thomas C M

    2011-04-01

    In functional linear models (FLMs), the relationship between the scalar response and the functional predictor process is often assumed to be identical for all subjects. Motivated by both practical and methodological considerations, we relax this assumption and propose a new class of functional regression models that allow the regression structure to vary for different groups of subjects. By projecting the predictor process onto its eigenspace, the new functional regression model is simplified to a framework that is similar to classical mixture regression models. This leads to the proposed approach, named functional mixture regression (FMR). The estimation of FMR can be readily carried out using existing software implemented for functional principal component analysis and mixture regression. The practical necessity and performance of FMR are illustrated through applications to a longevity analysis of female medflies and a human growth study. Theoretical investigations concerning the consistent estimation and prediction properties of FMR along with simulation experiments illustrating its empirical properties are presented in the supplementary material available at Biostatistics online. Corresponding results demonstrate that the proposed approach could potentially achieve substantial gains over traditional FLMs.

  20. Revisiting the Relationship between Marketing Education and Marketing Career Success

    ERIC Educational Resources Information Center

    Bacon, Donald R.

    2017-01-01

    In a replication of a classic article by Hunt, Chonko, and Wood, regression analysis was conducted using data from a sample of 864 marketing professionals. In contrast to Hunt, Chonko, and Wood, an undergraduate degree in marketing was positively related to income in marketing jobs, but surprisingly, respondents with some nonmarketing majors…

  1. Aggregating the response in time series regression models, applied to weather-related cardiovascular mortality

    NASA Astrophysics Data System (ADS)

    Masselot, Pierre; Chebana, Fateh; Bélanger, Diane; St-Hilaire, André; Abdous, Belkacem; Gosselin, Pierre; Ouarda, Taha B. M. J.

    2018-07-01

    In environmental epidemiology studies, health response data (e.g. hospitalization or mortality) are often noisy because of hospital organization and other social factors. The noise in the data can hide the true signal related to the exposure. The signal can be unveiled by performing a temporal aggregation on health data and then using it as the response in regression analysis. From aggregated series, a general methodology is introduced to account for the particularities of an aggregated response in a regression setting. This methodology can be used with usually applied regression models in weather-related health studies, such as generalized additive models (GAM) and distributed lag nonlinear models (DLNM). In particular, the residuals are modelled using an autoregressive-moving average (ARMA) model to account for the temporal dependence. The proposed methodology is illustrated by modelling the influence of temperature on cardiovascular mortality in Canada. A comparison with classical DLNMs is provided and several aggregation methods are compared. Results show that there is an increase in the fit quality when the response is aggregated, and that the estimated relationship focuses more on the outcome over several days than the classical DLNM. More precisely, among various investigated aggregation schemes, it was found that an aggregation with an asymmetric Epanechnikov kernel is more suited for studying the temperature-mortality relationship.
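
    A sketch of the two ingredients, kernel aggregation of the response followed by regression with ARMA errors, on simulated daily data; a uniform 7-day kernel stands in for the asymmetric Epanechnikov kernel the paper recommends:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(8)
        n = 730
        temp = 10 + 12 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 2, n)
        deaths = 30 + 0.05 * (temp - 18) ** 2 + rng.normal(0, 4, n)   # noisy response

        # Step 1: temporal aggregation of the response (uniform 7-day kernel here).
        agg = np.convolve(deaths, np.ones(7) / 7, mode="valid")
        exog = sm.add_constant(temp[: len(agg)])

        # Step 2: regression with ARMA(1,1) errors to absorb the serial dependence
        # the aggregation induces in the residuals.
        fit = sm.tsa.SARIMAX(agg, exog=exog, order=(1, 0, 1)).fit(disp=False)
        print(fit.params)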

  2. Numerical scoring for the Classic BILAG index.

    PubMed

    Cresswell, Lynne; Yee, Chee-Seng; Farewell, Vernon; Rahman, Anisur; Teh, Lee-Suan; Griffiths, Bridget; Bruce, Ian N; Ahmad, Yasmeen; Prabu, Athiveeraramapandian; Akil, Mohammed; McHugh, Neil; Toescu, Veronica; D'Cruz, David; Khamashta, Munther A; Maddison, Peter; Isenberg, David A; Gordon, Caroline

    2009-12-01

    To develop an additive numerical scoring scheme for the Classic BILAG index. SLE patients were recruited into this multi-centre cross-sectional study. At every assessment, data were collected on disease activity and therapy. Logistic regression was used to model an increase in therapy, as an indicator of active disease, by the Classic BILAG score in eight systems. As both indicate inactivity, scores of D and E were set to 0 and used as the baseline in the fitted model. The coefficients from the fitted model were used to determine the numerical values for Grades A, B and C. Different scoring schemes were then compared using receiver operating characteristic (ROC) curves. Validation analysis was performed using assessments from a single centre. There were 1510 assessments from 369 SLE patients. The currently used coding scheme (A = 9, B = 3, C = 1 and D/E = 0) did not fit the data well. The regression model suggested three possible numerical scoring schemes: (i) A = 11, B = 6, C = 1 and D/E = 0; (ii) A = 12, B = 6, C = 1 and D/E = 0; and (iii) A = 11, B = 7, C = 1 and D/E = 0. These schemes produced comparable ROC curves. Based on this, A = 12, B = 6, C = 1 and D/E = 0 seemed a reasonable and practical choice. The validation analysis suggested that although the A = 12, B = 6, C = 1 and D/E = 0 coding is still reasonable, a scheme with slightly less weighting for B, such as A = 12, B = 5, C = 1 and D/E = 0, may be more appropriate. A reasonable additive numerical scoring scheme based on treatment decision for the Classic BILAG index is A = 12, B = 5, C = 1, D = 0 and E = 0.

  3. Numerical scoring for the Classic BILAG index

    PubMed Central

    Cresswell, Lynne; Yee, Chee-Seng; Farewell, Vernon; Rahman, Anisur; Teh, Lee-Suan; Griffiths, Bridget; Bruce, Ian N.; Ahmad, Yasmeen; Prabu, Athiveeraramapandian; Akil, Mohammed; McHugh, Neil; Toescu, Veronica; D’Cruz, David; Khamashta, Munther A.; Maddison, Peter; Isenberg, David A.

    2009-01-01

    Objective. To develop an additive numerical scoring scheme for the Classic BILAG index. Methods. SLE patients were recruited into this multi-centre cross-sectional study. At every assessment, data were collected on disease activity and therapy. Logistic regression was used to model an increase in therapy, as an indicator of active disease, by the Classic BILAG score in eight systems. As both indicate inactivity, scores of D and E were set to 0 and used as the baseline in the fitted model. The coefficients from the fitted model were used to determine the numerical values for Grades A, B and C. Different scoring schemes were then compared using receiver operating characteristic (ROC) curves. Validation analysis was performed using assessments from a single centre. Results. There were 1510 assessments from 369 SLE patients. The currently used coding scheme (A = 9, B = 3, C = 1 and D/E = 0) did not fit the data well. The regression model suggested three possible numerical scoring schemes: (i) A = 11, B = 6, C = 1 and D/E = 0; (ii) A = 12, B = 6, C = 1 and D/E = 0; and (iii) A = 11, B = 7, C = 1 and D/E = 0. These schemes produced comparable ROC curves. Based on this, A = 12, B = 6, C = 1 and D/E = 0 seemed a reasonable and practical choice. The validation analysis suggested that although the A = 12, B = 6, C = 1 and D/E = 0 coding is still reasonable, a scheme with slightly less weighting for B, such as A = 12, B = 5, C = 1 and D/E = 0, may be more appropriate. Conclusions. A reasonable additive numerical scoring scheme based on treatment decision for the Classic BILAG index is A = 12, B = 5, C = 1, D = 0 and E = 0. PMID:19779027

  4. Association between Stereotactic Radiotherapy and Death from Brain Metastases of Epithelial Ovarian Cancer: a Gliwice Data Re-Analysis with Penalization

    PubMed

    Tukiendorf, Andrzej; Mansournia, Mohammad Ali; Wydmański, Jerzy; Wolny-Rokicka, Edyta

    2017-04-01

    Background: Clinical datasets for epithelial ovarian cancer brain metastatic patients are usually small in size. When adequate case numbers are lacking, resulting estimates of regression coefficients may demonstrate bias. One of the direct approaches to reduce such sparse-data bias is based on penalized estimation. Methods: A re-analysis of formerly reported hazard ratios in diagnosed patients was performed using penalized Cox regression with a popular SAS package, providing additional software code for the statistical computational procedure. Results: It was found that the penalized approach can readily diminish sparse-data artefacts and radically reduce the magnitude of estimated regression coefficients. Conclusions: It was confirmed that classical statistical approaches may exaggerate regression estimates or distort study interpretations and conclusions. The results support the thesis that penalization via weak informative priors and data augmentation are the safest approaches to shrink sparse-data artefacts frequently occurring in epidemiological research.
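
    The re-analysis itself was done in SAS; an analogous penalized Cox fit is a one-liner in Python's lifelines, sketched here on invented sparse data:

        import pandas as pd
        from lifelines import CoxPHFitter

        # Invented sparse data: few events relative to the covariates of interest.
        df = pd.DataFrame({
            "months": [2, 5, 3, 8, 4, 10, 6, 7, 9, 12],
            "death": [1, 1, 0, 0, 1, 0, 1, 0, 1, 0],
            "srt": [1, 0, 1, 0, 1, 0, 0, 1, 1, 0],     # stereotactic radiotherapy
            "age": [55, 62, 48, 70, 66, 59, 51, 63, 58, 67],
        })

        # The penalizer shrinks coefficients toward zero, acting like a weak
        # informative prior and taming sparse-data inflation of hazard ratios.
        cph = CoxPHFitter(penalizer=0.5).fit(df, duration_col="months", event_col="death")
        print(cph.hazard_ratios_)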

  5. Analysis of Sting Balance Calibration Data Using Optimized Regression Models

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Bader, Jon B.

    2010-01-01

    Calibration data of a wind tunnel sting balance was processed using a candidate math model search algorithm that recommends an optimized regression model for the data analysis. During the calibration the normal force and the moment at the balance moment center were selected as independent calibration variables. The sting balance itself had two moment gages. Therefore, after analyzing the connection between calibration loads and gage outputs, it was decided to choose the difference and the sum of the gage outputs as the two responses that best describe the behavior of the balance. The math model search algorithm was applied to these two responses. An optimized regression model was obtained for each response. Classical strain gage balance load transformations and the equations of the deflection of a cantilever beam under load are used to show that the search algorithm's two optimized regression models are supported by a theoretical analysis of the relationship between the applied calibration loads and the measured gage outputs. The analysis of the sting balance calibration data set is a rare example of a situation when terms of a regression model of a balance can directly be derived from first principles of physics. In addition, it is interesting to note that the search algorithm recommended the correct regression model term combinations using only a set of statistical quality metrics that were applied to the experimental data during the algorithm's term selection process.

  6. Integrated analysis of DNA-methylation and gene expression using high-dimensional penalized regression: a cohort study on bone mineral density in postmenopausal women.

    PubMed

    Lien, Tonje G; Borgan, Ørnulf; Reppe, Sjur; Gautvik, Kaare; Glad, Ingrid Kristine

    2018-03-07

    Using high-dimensional penalized regression we studied genome-wide DNA-methylation in bone biopsies of 80 postmenopausal women in relation to their bone mineral density (BMD). The women showed BMD varying from severely osteoporotic to normal. Global gene expression data from the same individuals was available, and since DNA-methylation often affects gene expression, the overall aim of this paper was to include both of these omics data sets into an integrated analysis. The classical penalized regression uses one penalty, but we incorporated individual penalties for each of the DNA-methylation sites. These individual penalties were guided by the strength of association between DNA-methylations and gene transcript levels. DNA-methylations that were highly associated to one or more transcripts got lower penalties and were therefore favored compared to DNA-methylations showing less association to expression. Because of the complex pathways and interactions among genes, we investigated both the association between DNA-methylations and their corresponding cis gene, as well as the association between DNA-methylations and trans-located genes. Two integrating penalized methods were used: first, an adaptive group-regularized ridge regression, and secondly, variable selection was performed through a modified version of the weighted lasso. When information from gene expressions was integrated, predictive performance was considerably improved, in terms of predictive mean square error, compared to classical penalized regression without data integration. We found a 14.7% improvement in the ridge regression case and a 17% improvement for the lasso case. Our version of the weighted lasso with data integration found a list of 22 interesting methylation sites. Several corresponded to genes that are known to be important in bone formation. Using BMD as response and these 22 methylation sites as covariates, least-squares regression analyses resulted in R² = 0.726, comparable to an average R² = 0.438 for 10,000 randomly selected groups of DNA-methylations with group size 22. Two recent types of penalized regression methods were adapted to integrate DNA-methylation and their association to gene expression in the analysis of bone mineral density. In both cases predictions clearly benefit from including the additional information on gene expressions.
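
    Individual penalty weights can be grafted onto an off-the-shelf lasso by rescaling columns, since penalizing w_j|beta_j| is equivalent to a plain lasso on X_j/w_j. A sketch with arbitrary weights (the paper derives its weights from methylation-expression association strength):

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(9)
        n, p = 80, 500
        X = rng.normal(size=(n, p))                       # methylation values
        y = X[:, :5] @ np.array([1.0, -0.8, 0.6, 0.5, -0.4]) + rng.normal(0, 1, n)

        # Individual penalties: smaller weight for sites strongly associated with
        # expression. The weights here are made up for illustration.
        penalty_weights = rng.uniform(0.5, 2.0, p)

        # Weighted lasso via rescaled columns, mapping coefficients back afterwards.
        Xw = X / penalty_weights
        fit = Lasso(alpha=0.1).fit(Xw, y)
        beta = fit.coef_ / penalty_weights
        print(np.flatnonzero(beta))                       # selected methylation sites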

  7. Aggregating the response in time series regression models, applied to weather-related cardiovascular mortality.

    PubMed

    Masselot, Pierre; Chebana, Fateh; Bélanger, Diane; St-Hilaire, André; Abdous, Belkacem; Gosselin, Pierre; Ouarda, Taha B M J

    2018-07-01

    In environmental epidemiology studies, health response data (e.g. hospitalization or mortality) are often noisy because of hospital organization and other social factors. The noise in the data can hide the true signal related to the exposure. The signal can be unveiled by performing a temporal aggregation on health data and then using it as the response in regression analysis. From aggregated series, a general methodology is introduced to account for the particularities of an aggregated response in a regression setting. This methodology can be used with usually applied regression models in weather-related health studies, such as generalized additive models (GAM) and distributed lag nonlinear models (DLNM). In particular, the residuals are modelled using an autoregressive-moving average (ARMA) model to account for the temporal dependence. The proposed methodology is illustrated by modelling the influence of temperature on cardiovascular mortality in Canada. A comparison with classical DLNMs is provided and several aggregation methods are compared. Results show that there is an increase in the fit quality when the response is aggregated, and that the estimated relationship focuses more on the outcome over several days than the classical DLNM. More precisely, among various investigated aggregation schemes, it was found that an aggregation with an asymmetric Epanechnikov kernel is more suited for studying the temperature-mortality relationship. Copyright © 2018. Published by Elsevier B.V.

  8. OPLS statistical model versus linear regression to assess sonographic predictors of stroke prognosis.

    PubMed

    Vajargah, Kianoush Fathi; Sadeghi-Bazargani, Homayoun; Mehdizadeh-Esfanjani, Robab; Savadi-Oskouei, Daryoush; Farhoudi, Mehdi

    2012-01-01

    The objective of the present study was to assess the comparable applicability of orthogonal projections to latent structures (OPLS) statistical modeling versus traditional linear regression in order to investigate the role of transcranial Doppler (TCD) sonography in predicting ischemic stroke prognosis. The study was conducted on 116 ischemic stroke patients admitted to a specialty neurology ward. The Unified Neurological Stroke Scale was used once for clinical evaluation in the first week of admission and again six months later. All data were first analyzed using simple linear regression and then considered for multivariate analysis using PLS/OPLS models through the SIMCA P+12 statistical software package. The linear regression analysis results used for the identification of TCD predictors of stroke prognosis were confirmed through the OPLS modeling technique. Moreover, in comparison to linear regression, the OPLS model appeared to have higher sensitivity in detecting the predictors of ischemic stroke prognosis and detected several more predictors. Applying the OPLS model made it possible to use both single TCD measures/indicators and arbitrarily dichotomized measures of TCD single vessel involvement as well as the overall TCD result. In conclusion, the authors recommend PLS/OPLS methods as complementary rather than alternative to the available classical regression models such as linear regression.

  9. Glomerular structural-functional relationship models of diabetic nephropathy are robust in type 1 diabetic patients.

    PubMed

    Mauer, Michael; Caramori, Maria Luiza; Fioretto, Paola; Najafian, Behzad

    2015-06-01

    Studies of structural-functional relationships have improved understanding of the natural history of diabetic nephropathy (DN). However, in order to consider structural end points for clinical trials, the robustness of the resultant models needs to be verified. This study examined whether structural-functional relationship models derived from a large cohort of type 1 diabetic (T1D) patients with a wide range of renal function are robust. The predictability of models derived from multiple regression analysis and piecewise linear regression analysis was also compared. T1D patients (n = 161) with research renal biopsies were divided into two equal groups matched for albumin excretion rate (AER). Models to explain AER and glomerular filtration rate (GFR) by classical DN lesions in one group (T1D-model, or T1D-M) were applied to the other group (T1D-test, or T1D-T) and regression analyses were performed. T1D-M-derived models explained 70 and 63% of AER variance and 32 and 21% of GFR variance in T1D-M and T1D-T, respectively, supporting the substantial robustness of the models. Piecewise linear regression analyses substantially improved predictability of the models with 83% of AER variance and 66% of GFR variance explained by classical DN glomerular lesions alone. These studies demonstrate that DN structural-functional relationship models are robust, and if appropriate models are used, glomerular lesions alone explain a major proportion of AER and GFR variance in T1D patients. © The Author 2014. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.

  10. Public and Private School Distinction, Regional Development Differences, and Other Factors Influencing the Success of Primary School Students in Turkey

    ERIC Educational Resources Information Center

    Sulku, Seher Nur; Abdioglu, Zehra

    2015-01-01

    This study investigates the factors influencing the success of students in primary schools in Turkey. TIMSS 2011 data for Turkey, measuring the success of eighth-grade students in the field of mathematics, were used in an econometric analysis, performed using classical linear regression models. Two hundred thirty-nine schools participated in the…

  11. Wilson's Disease: a challenge of diagnosis. The 5-year experience of a tertiary centre.

    PubMed

    Gheorghe, Liana; Popescu, Irinel; Iacob, Speranta; Gheorghe, Cristian; Vaidan, Roxana; Constantinescu, Alexandra; Iacob, Razvan; Becheanu, Gabriel; Angelescu, Corina; Diculescu, Mircea

    2004-09-01

    Because molecular diagnosis is considered impractical and no pathognomonic features have been described, diagnosis of Wilson's disease (WD) using clinical and biochemical findings is still challenging. We analysed predictive factors for the diagnosis in 55 patients with WD diagnosed in our centre between 1st January 1999 and 1st April 2004. All patients presented predominant liver disease classified as: 1) asymptomatic, found incidentally, 2) chronic hepatitis or cirrhosis, or 3) fulminant hepatic failure. Diagnosis was considered classic (two out of the three following criteria: 1) serum ceruloplasmin < 20 mg/dl, 2) the presence of Kayser-Fleischer rings and/or 3) hepatic copper > 250 µg/g dry weight liver tissue) or non-classic (clinical manifestations plus laboratory parameters suggesting impaired copper metabolism). The association between the predictive factors and non-classic diagnosis was assessed based on the level of statistical significance (p value < 0.05) associated with the chi-squared test in contingency tables. Multivariate analysis was performed by logistic regression using SPSS 10. There were 31 males (56.3%) and 24 females (43.7%) with a mean age at diagnosis of 20.92 +/- 9.97 years (4-52 years); 51 patients (92.7%) were younger than 40 years. Asymptomatic WD was diagnosed in 14 patients (25.4%), chronic liver disease due to WD in 29 patients (52.8%) and fulminant hepatic failure in 12 patients (21.8%). The classic diagnosis was made in 32 patients (58.18%). In the univariate analysis the non-classic diagnosis was associated with: age > 18 years (p = 0.03), increased copper excretion (p < 0.0001), Coombs-negative hemolysis (p = 0.03), and absence of neurological manifestations (p < 0.0001). Multivariate analysis identified age over 18 years, increased urinary copper, and isolated hepatic involvement as independent predictors. In clinical practice, WD should be considered also in patients who do not fulfil the classic criteria. Independent factors associated with non-classic diagnosis were age over 18 years, increased cupruresis and isolated liver disease.

  12. Quantile regression applied to spectral distance decay

    USGS Publications Warehouse

    Rocchini, D.; Cade, B.S.

    2008-01-01

    Remotely sensed imagery has long been recognized as a powerful support for characterizing and estimating biodiversity. Spectral distance among sites has proven to be a powerful approach for detecting species composition variability. Regression analysis of species similarity versus spectral distance allows us to quantitatively estimate the amount of turnover in species composition with respect to spectral and ecological variability. In classical regression analysis, the residual sum of squares is minimized for the mean of the dependent variable distribution. However, many ecological data sets are characterized by a high number of zeroes that add noise to the regression model. Quantile regressions can be used to evaluate trend in the upper quantiles rather than a mean trend across the whole distribution of the dependent variable. In this letter, we used ordinary least squares (OLS) and quantile regressions to estimate the decay of species similarity versus spectral distance. The achieved decay rates were statistically nonzero (p < 0.01), considering both OLS and quantile regressions. Nonetheless, the OLS regression estimate of the mean decay rate was only half the decay rate indicated by the upper quantiles. Moreover, the intercept value, representing the similarity reached when the spectral distance approaches zero, was very low compared with the intercepts of the upper quantiles, which detected high species similarity when habitats are more similar. In this letter, we demonstrated the power of using quantile regressions applied to spectral distance decay to reveal species diversity patterns otherwise lost or underestimated by OLS regression. © 2008 IEEE.

  13. Spectral distance decay: Assessing species beta-diversity by quantile regression

    USGS Publications Warehouse

    Rocchini, D.; Nagendra, H.; Ghate, R.; Cade, B.S.

    2009-01-01

    Remotely sensed data represents key information for characterizing and estimating biodiversity. Spectral distance among sites has proven to be a powerful approach for detecting species composition variability. Regression analysis of species similarity versus spectral distance may allow us to quantitatively estimate how beta-diversity in species changes with respect to spectral and ecological variability. In classical regression analysis, the residual sum of squares is minimized for the mean of the dependent variable distribution. However, many ecological datasets are characterized by a high number of zeroes that can add noise to the regression model. Quantile regression can be used to evaluate trend in the upper quantiles rather than a mean trend across the whole distribution of the dependent variable. In this paper, we used ordinary least squares (OLS) and quantile regression to estimate the decay of species similarity versus spectral distance. The achieved decay rates were statistically nonzero (p < 0.05) considering both OLS and quantile regression. Nonetheless, the OLS regression estimate of mean decay rate was only half the decay rate indicated by the upper quantiles. Moreover, the intercept value, representing the similarity reached when spectral distance approaches zero, was very low compared with the intercepts of upper quantiles, which detected high species similarity when habitats are more similar. In this paper we demonstrated the power of using quantile regressions applied to spectral distance decay in order to reveal species diversity patterns otherwise lost or underestimated by ordinary least squares regression. © 2009 American Society for Photogrammetry and Remote Sensing.

  14. Evaluation of logistic regression models and effect of covariates for case-control study in RNA-Seq analysis.

    PubMed

    Choi, Seung Hoan; Labadorf, Adam T; Myers, Richard H; Lunetta, Kathryn L; Dupuis, Josée; DeStefano, Anita L

    2017-02-06

    Next generation sequencing provides a count of RNA molecules in the form of short reads, yielding discrete, often highly non-normally distributed gene expression measurements. Although Negative Binomial (NB) regression has been generally accepted in the analysis of RNA sequencing (RNA-Seq) data, its appropriateness has not been exhaustively evaluated. We explore logistic regression as an alternative method for RNA-Seq studies designed to compare cases and controls, where disease status is modeled as a function of RNA-Seq reads using simulated and Huntington disease data. We evaluate the effect of adjusting for covariates that have an unknown relationship with gene expression. Finally, we incorporate the data adaptive method in order to compare false positive rates. When the sample size is small or the expression levels of a gene are highly dispersed, the NB regression shows inflated Type-I error rates but the Classical logistic and Bayes logistic (BL) regressions are conservative. Firth's logistic (FL) regression performs well or is slightly conservative. Large sample size and low dispersion generally make Type-I error rates of all methods close to nominal alpha levels of 0.05 and 0.01. However, Type-I error rates are controlled after applying the data adaptive method. The NB, BL, and FL regressions gain increased power with large sample size, large log2 fold-change, and low dispersion. The FL regression has comparable power to NB regression. We conclude that implementing the data adaptive method appropriately controls Type-I error rates in RNA-Seq analysis. Firth's logistic regression provides a concise statistical inference process and reduces spurious associations from inaccurately estimated dispersion parameters in the negative binomial framework.
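
    The two modeling directions compared in the paper look roughly like this in statsmodels, for a single simulated gene (dispersion and effect sizes invented):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(10)
        n = 200
        status = rng.integers(0, 2, n)                            # case/control
        counts = rng.negative_binomial(5, 0.3, n) + 10 * status   # RNA-Seq reads for one gene

        # NB regression: reads as outcome, disease status as predictor.
        nb = sm.GLM(counts, sm.add_constant(status),
                    family=sm.families.NegativeBinomial()).fit()

        # Logistic regression: disease status as outcome, (log) reads as predictor.
        logit = sm.GLM(status, sm.add_constant(np.log1p(counts)),
                       family=sm.families.Binomial()).fit()
        print(nb.pvalues[1], logit.pvalues[1])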

  15. Robust Methods for Moderation Analysis with a Two-Level Regression Model.

    PubMed

    Yang, Miao; Yuan, Ke-Hai

    2016-01-01

    Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
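
    The Huber-type variant can be sketched with statsmodels' M-estimation on simulated heavy-tailed data; the moderation effect is carried by the interaction term:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(11)
        n = 300
        x = rng.normal(size=n)                    # predictor
        z = rng.normal(size=n)                    # moderator
        y = 0.5 * x + 0.3 * z + 0.4 * x * z + rng.standard_t(df=3, size=n)

        X = sm.add_constant(np.column_stack([x, z, x * z]))

        # Huber-type M-estimation downweights outlying residuals; the interaction
        # coefficient (last entry) estimates the moderation effect.
        fit = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
        print(fit.params[-1], fit.bse[-1])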

  16. Modeling Governance KB with CATPCA to Overcome Multicollinearity in the Logistic Regression

    NASA Astrophysics Data System (ADS)

    Khikmah, L.; Wijayanto, H.; Syafitri, U. D.

    2017-04-01

    A problem often encountered in logistic regression modeling is multicollinearity. Multicollinearity between explanatory variables results in biased parameter estimates and also in classification errors. Stepwise regression is generally used to overcome multicollinearity in regression. Another method, which involves all variables in the prediction, is Principal Component Analysis (PCA); however, classical PCA is only suitable for numeric data. When the data are categorical, one method to solve the problem is Categorical Principal Component Analysis (CATPCA). The data used in this research were part of the Demographic and Population Survey Indonesia (IDHS) 2012. This research focuses on the characteristics of women using contraceptive methods. Classification results were evaluated using Area Under Curve (AUC) values; the higher the AUC value, the better. Based on AUC values, the classification of contraceptive method use by the stepwise method (58.66%) is better than by the logistic regression model (57.39%) and CATPCA (57.39%). Evaluation of the logistic regression results using sensitivity shows the opposite, with the CATPCA method (99.79%) better than the logistic regression method (92.43%) and stepwise (92.05%). Because this study focuses on classification of the major class (using a contraceptive method), the selected model is CATPCA, as it raises the accuracy for the major class.

  17. Risk factors for classical hysterotomy by gestational age.

    PubMed

    Osmundson, Sarah S; Garabedian, Matthew J; Lyell, Deirdre J

    2013-10-01

    To examine the likelihood of classical hysterotomy across preterm gestational ages and to identify factors that increase its occurrence. This is a secondary analysis of a prospective observational cohort collected by the Maternal-Fetal Medicine Network of all women with singleton gestations who underwent a cesarean delivery with a known hysterotomy. Comparisons were made based on gestational age. Factors thought to influence hysterotomy type were studied, including maternal age, body mass index, parity, birth weight, small for gestational age (SGA) status, fetal presentation, labor preceding delivery, and emergent delivery. Approximately 36,000 women were eligible for analysis, of whom 34,454 (95.7%) underwent low transverse hysterotomy and 1,562 (4.3%) underwent classical hysterotomy. The median gestational age of women undergoing a classical hysterotomy was 32 weeks and the incidence peaked between 24 0/7 weeks and 25 6/7 weeks (53.2%), declining with each additional week of gestation thereafter (P for trend <.001). In multivariable regression, the likelihood of classical hysterotomy was increased with SGA (n=258; odds ratio [OR] 2.71; confidence interval [CI] 1.78-4.13), birth weight 1,000 g or less (n=467; OR 1.51; CI 1.03-2.24), and noncephalic presentation (n=783; OR 2.03; CI 1.52-2.72). The likelihood of classical hysterotomy was decreased between 23 0/7 and 27 6/7 weeks of gestation and after 32 weeks of gestation when labor preceded delivery, and increased between 28 0/7 and 31 6/7 weeks of gestation and after 32 weeks of gestation by multiparity and previous cesarean delivery. Emergent delivery did not predict classical hysterotomy. Fifty percent of women at 23-26 weeks of gestation who undergo cesarean delivery have a classical hysterotomy, and the risk declines steadily thereafter. This likelihood is increased by fetal factors, especially SGA and noncephalic presentation. Level of evidence: II.

  18. Clinical evaluation of a novel population-based regression analysis for detecting glaucomatous visual field progression.

    PubMed

    Kovalska, M P; Bürki, E; Schoetzau, A; Orguel, S F; Orguel, S; Grieshaber, M C

    2011-04-01

    The distinction of real progression from test variability in visual field (VF) series may be based on clinical judgment, on trend analysis based on follow-up of test parameters over time, or on identification of a significant change related to the mean of baseline exams (event analysis). The aim of this study was to compare a new population-based method (Octopus field analysis, OFA) with classic regression analyses and clinical judgment for detecting glaucomatous VF changes. 240 VF series of 240 patients with at least 9 consecutive examinations available were included in this study. They were independently classified by two experienced investigators. The results of this classification served as the reference for comparison for the following statistical tests: (a) t-test global, (b) r-test global, (c) regression analysis of 10 VF clusters and (d) point-wise linear regression analysis. 32.5 % of the VF series were classified as progressive by the investigators. The sensitivity and specificity were 89.7 % and 92.0 % for the r-test, and 73.1 % and 93.8 % for the t-test, respectively. In the point-wise linear regression analysis, the specificity was comparable (89.5 % versus 92 %), but the sensitivity was clearly lower than in the r-test (22.4 % versus 89.7 %) at a significance level of p = 0.01. A regression analysis for the 10 VF clusters showed a markedly higher sensitivity for the r-test (37.7 %) than for the t-test (14.1 %) at a similar specificity (88.3 % versus 93.8 %) for a significant trend (p = 0.005). With regard to the cluster distribution, the paracentral clusters and the superior nasal hemifield progressed most frequently. The population-based regression analysis seems to be superior to the trend analysis in detecting VF progression in glaucoma, and may eliminate the drawbacks of the event analysis. Further, it may assist the clinician in the evaluation of VF series and may allow better visualization of the correlation between function and structure owing to VF clusters. © Georg Thieme Verlag KG Stuttgart · New York.
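
    For contrast with the population-based OFA method, a toy version of the point-wise linear regression criterion mentioned above might look as follows; the assumed grid of 59 test points, the simulated decline, and the p < 0.01 cutoff are illustrative assumptions, not the study's protocol:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      n_exams, n_locations = 9, 59               # assumed perimetry grid
      t = np.arange(n_exams)                     # exam index
      sens = 30 + rng.normal(scale=1.5, size=(n_exams, n_locations))
      sens[:, :5] -= 0.8 * t[:, None]            # impose a true decline at 5 points

      # Regress each location's sensitivity on time; flag negative,
      # significant slopes as progressing.
      progressing = []
      for j in range(n_locations):
          fit = stats.linregress(t, sens[:, j])
          if fit.slope < 0 and fit.pvalue < 0.01:
              progressing.append(j)
      print("locations flagged as progressing:", progressing)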

  19. Comparing nouns and verbs in a lexical task.

    PubMed

    Cordier, Françoise; Croizet, Jean-Claude; Rigalleau, François

    2013-02-01

    We analyzed the differential processing of nouns and verbs in a lexical decision task. Moderate and high-frequency nouns and verbs were compared. The characteristics of our material were specified at the formal level (number of letters and syllables, number of homographs, orthographic neighbors, frequency and age of acquisition), and at the semantic level (imagery, number and strength of associations, number of meanings, context dependency). A regression analysis indicated a classical frequency effect and a word-type effect, with longer latencies for verbs than for nouns. The regression analysis did not permit the conclusion that semantic effects were involved (particularly imageability). Nevertheless, the semantic opposition between nouns as prototypical representations of objects and verbs as prototypical representations of actions was not tested in this experiment and remains a good candidate explanation of the response time discrepancies between verbs and nouns.

  20. New robust statistical procedures for the polytomous logistic regression models.

    PubMed

    Castilla, Elena; Ghosh, Abhik; Martin, Nirian; Pardo, Leandro

    2018-05-17

    This article derives a new family of estimators, namely the minimum density power divergence estimators, as a robust generalization of the maximum likelihood estimator for the polytomous logistic regression model. Based on these estimators, a family of Wald-type test statistics for linear hypotheses is introduced. Robustness properties of both the proposed estimators and the test statistics are theoretically studied through the classical influence function analysis. Appropriate real life examples are presented to justify the requirement of suitable robust statistical procedures in place of the likelihood based inference for the polytomous logistic regression model. The validity of the theoretical results established in the article is further confirmed empirically through suitable simulation studies. Finally, an approach for the data-driven selection of the robustness tuning parameter is proposed with empirical justifications. © 2018, The International Biometric Society.

  1. Least Squares Procedures.

    ERIC Educational Resources Information Center

    Hester, Yvette

    Least squares methods are sophisticated mathematical curve fitting procedures used in all classical parametric methods. The linear least squares approximation is most often associated with finding the "line of best fit" or the regression line. Since all statistical analyses are correlational and all classical parametric methods are least…
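
    A minimal illustration of the "line of best fit" in the least squares sense: solving the normal equations through a design matrix and cross-checking against numpy's built-in fit (data values are made up):

      import numpy as np

      x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
      y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

      A = np.column_stack([x, np.ones_like(x)])          # design matrix [x, 1]
      (slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)
      print(slope, intercept)                            # the regression line
      print(np.polyfit(x, y, 1))                         # same line, built-in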

  2. Modified locally weighted--partial least squares regression improving clinical predictions from infrared spectra of human serum samples.

    PubMed

    Perez-Guaita, David; Kuligowski, Julia; Quintás, Guillermo; Garrigues, Salvador; Guardia, Miguel de la

    2013-03-30

    Locally weighted partial least squares regression (LW-PLSR) has been applied to the determination of four clinical parameters in human serum samples (total protein, triglyceride, glucose and urea contents) by Fourier transform infrared (FTIR) spectroscopy. Classical LW-PLSR models were constructed using different spectral regions. For the selection of parameters in LW-PLSR modeling, a multi-parametric study was carried out employing the minimum root-mean-square error of cross-validation (RMSECV) as the objective function. In order to overcome the effect of strong matrix interferences on the predictive accuracy of LW-PLSR models, this work focuses on sample selection. Accordingly, a novel strategy for the development of local models is proposed. It is based on the use of (i) principal component analysis (PCA), performed on an analyte-specific spectral region, to identify the most similar sample spectra, and (ii) partial least squares regression (PLSR), constructed using the whole spectrum. Results found using this strategy were compared to those provided by PLSR using the same spectral intervals as for LW-PLSR. Prediction errors found by both classical and modified LW-PLSR improved on those obtained by PLSR. Hence, both proposed approaches were useful for the determination of analytes present in a complex matrix, as in the case of human serum samples. Copyright © 2013 Elsevier B.V. All rights reserved.
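
    A hedged sketch of the locally weighted PLS step in the spirit of LW-PLSR: for a query spectrum, select the k most similar calibration spectra in PCA score space, then fit a local PLS model on that subset only. The function name, k=30, and the component counts are illustrative choices, not the authors' settings:

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.decomposition import PCA

      def lw_pls_predict(X_cal, y_cal, x_query, k=30, n_pca=5, n_pls=3):
          # Find the k calibration spectra nearest the query in PCA score space.
          pca = PCA(n_components=n_pca).fit(X_cal)
          scores = pca.transform(X_cal)
          q = pca.transform(x_query.reshape(1, -1))
          nearest = np.argsort(np.linalg.norm(scores - q, axis=1))[:k]
          # Fit a local PLS model on that neighborhood only.
          local = PLSRegression(n_components=n_pls).fit(X_cal[nearest],
                                                        y_cal[nearest])
          return float(local.predict(x_query.reshape(1, -1)).ravel()[0])

      rng = np.random.default_rng(2)
      X = rng.normal(size=(200, 100))            # stand-in for FTIR spectra
      y = X[:, 10] + 0.5 * X[:, 40] + rng.normal(scale=0.1, size=200)
      print(lw_pls_predict(X, y, X[0]))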

  3. Breaking the solid ground of common sense: undoing "structure" with Michael Balint.

    PubMed

    Bonomi, Carlo

    2003-09-01

    Balint's great merit was to question what, in the classical perspective, was assumed as a prerequisite for analysis and thus located beyond analysis: the maturity of the ego. A fundamental premise of his work was Ferenczi's distrust for the structural model, which praised the maturity of the ego and its verbal, social, and adaptive abilities. Ferenczi's view of ego maturation as a trauma derivative was strikingly different from the theories of all other psychoanalytic schools and seems to be responsible for Balint's understanding of regression as a sort of inverted process that enables the undoing of the sheltering structures of the mature mind. Balint's understanding of the relation between mature ego and regression diverged not only from the ego psychologists, who emphasized the idea of therapeutic alliance, but also from most of the authors who embraced the object-relational view, like Klein (who considered regression a manifestation of the patient's craving for oral gratification), Fairbairn (who gave up the notion of regression), and Guntrip (who viewed regression as a schizoid phenomenon related to ego weakness). According to Balint, the clinical appearance of a regression would "depend also on the way the regression is recognized, is accepted, and is responded to by the analyst." In this respect, his position was close to Winnicott's reformulation of the therapeutic action. Yet, the work of Balint reflects the persuasion that the progressive fluidification of the solid structure could be enabled only by the analyst's capacity for becoming himself or herself [unsolid].

  4. Systematic review of statistical approaches to quantify, or correct for, measurement error in a continuous exposure in nutritional epidemiology.

    PubMed

    Bennett, Derrick A; Landry, Denise; Little, Julian; Minelli, Cosetta

    2017-09-19

    Several statistical approaches have been proposed to assess and correct for exposure measurement error. We aimed to provide a critical overview of the most common approaches used in nutritional epidemiology. MEDLINE, EMBASE, BIOSIS and CINAHL were searched for reports published in English up to May 2016 in order to ascertain studies that described methods aimed to quantify and/or correct for measurement error for a continuous exposure in nutritional epidemiology using a calibration study. We identified 126 studies, 43 of which described statistical methods and 83 that applied any of these methods to a real dataset. The statistical approaches in the eligible studies were grouped into: a) approaches to quantify the relationship between different dietary assessment instruments and "true intake", which were mostly based on correlation analysis and the method of triads; b) approaches to adjust point and interval estimates of diet-disease associations for measurement error, mostly based on regression calibration analysis and its extensions. Two approaches (multiple imputation and moment reconstruction) were identified that can deal with differential measurement error. For regression calibration, the most common approach to correct for measurement error used in nutritional epidemiology, it is crucial to ensure that its assumptions and requirements are fully met. Analyses that investigate the impact of departures from the classical measurement error model on regression calibration estimates can be helpful to researchers in interpreting their findings. With regard to the possible use of alternative methods when regression calibration is not appropriate, the choice of method should depend on the measurement error model assumed, the availability of suitable calibration study data and the potential for bias due to violation of the classical measurement error model assumptions. On the basis of this review, we provide some practical advice for the use of methods to assess and adjust for measurement error in nutritional epidemiology.
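
    A minimal sketch of regression calibration, the most common correction approach identified by the review, under the classical measurement error model W = X + U: a calibration substudy regresses the reference measure on the error-prone one, and the calibrated values E[X|W] then replace W in the disease model (all data are simulated):

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      n = 1000
      x_true = rng.normal(size=n)                    # true exposure
      w = x_true + rng.normal(scale=0.8, size=n)     # error-prone instrument
      y = 0.5 * x_true + rng.normal(size=n)          # outcome

      # Calibration substudy (first 200 subjects): learn E[X | W].
      calib = sm.OLS(x_true[:200], sm.add_constant(w[:200])).fit()
      x_hat = calib.predict(sm.add_constant(w))      # calibrated exposure

      naive = sm.OLS(y, sm.add_constant(w)).fit()            # attenuated slope
      corrected = sm.OLS(y, sm.add_constant(x_hat)).fit()    # near the true 0.5
      print("naive:", naive.params[1], "corrected:", corrected.params[1])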

  5. Application of logistic regression to case-control association studies involving two causative loci.

    PubMed

    North, Bernard V; Curtis, David; Sham, Pak C

    2005-01-01

    Models in which two susceptibility loci jointly influence the risk of developing disease can be explored using logistic regression analysis. Comparison of likelihoods of models incorporating different sets of disease model parameters allows inferences to be drawn regarding the nature of the joint effect of the loci. We have simulated case-control samples generated assuming different two-locus models and then analysed them using logistic regression. We show that this method is practicable and that, for the models we have used, it can be expected to allow useful inferences to be drawn from sample sizes consisting of hundreds of subjects. Interactions between loci can be explored, but interactive effects do not exactly correspond with classical definitions of epistasis. We have particularly examined the issue of the extent to which it is helpful to utilise information from a previously identified locus when investigating a second, unknown locus. We show that for some models conditional analysis can have substantially greater power while for others unconditional analysis can be more powerful. Hence we conclude that in general both conditional and unconditional analyses should be performed when searching for additional loci.
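
    A hedged sketch of the two-locus strategy: fit logistic models with and without a locus-by-locus interaction term and compare their likelihoods with a likelihood ratio test. Genotypes are coded as allele counts and all effect sizes are invented:

      import numpy as np
      from scipy import stats
      import statsmodels.api as sm

      rng = np.random.default_rng(4)
      n = 800
      g1 = rng.integers(0, 3, size=n).astype(float)   # allele count, locus 1
      g2 = rng.integers(0, 3, size=n).astype(float)   # allele count, locus 2
      logit = -1 + 0.4 * g1 + 0.3 * g2 + 0.5 * g1 * g2
      y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

      # Main-effects model vs. model with the interaction term.
      m0 = sm.Logit(y, sm.add_constant(np.column_stack([g1, g2]))).fit(disp=False)
      m1 = sm.Logit(y, sm.add_constant(np.column_stack([g1, g2,
                                                        g1 * g2]))).fit(disp=False)
      lrt = 2 * (m1.llf - m0.llf)                     # 1 df for the interaction
      print("LRT p-value:", stats.chi2.sf(lrt, df=1))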

  6. Detection of epistatic effects with logic regression and a classical linear regression model.

    PubMed

    Malina, Magdalena; Ickstadt, Katja; Schwender, Holger; Posch, Martin; Bogdan, Małgorzata

    2014-02-01

    To locate multiple interacting quantitative trait loci (QTL) influencing a trait of interest within experimental populations, methods such as Cockerham's model are usually applied. Within this framework, interactions are understood as the part of the joint effect of several genes that cannot be explained as the sum of their additive effects. However, if a change in the phenotype (such as disease) is caused by Boolean combinations of genotypes of several QTLs, the Cockerham approach is often not capable of identifying them properly. To detect such interactions more efficiently, we propose a logic regression framework. Even though a larger number of models has to be considered with the logic regression approach (requiring more stringent multiple testing correction), the efficient representation of higher-order logic interactions in logic regression models leads to a significant increase of power to detect such interactions compared to the Cockerham approach. The increase in power is demonstrated analytically for a simple two-way interaction model and illustrated in more complex settings with a simulation study and a real data analysis.
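
    A toy version of the logic-regression idea, assuming binary SNP indicators: instead of additive codings, Boolean combinations of SNPs are scored against a quantitative trait and the best-fitting expression is kept. Only AND/OR pairs are searched here; real logic regression explores much richer logic trees:

      import itertools
      import numpy as np

      rng = np.random.default_rng(5)
      n, p = 500, 6
      snps = rng.integers(0, 2, size=(n, p)).astype(bool)
      trait = 2.0 * (snps[:, 1] & snps[:, 4]) + rng.normal(size=n)  # AND effect

      def fit_r2(feature, y):
          # R^2 of a one-predictor linear regression on the Boolean feature.
          f = feature.astype(float)
          resid = y - np.polyval(np.polyfit(f, y, 1), f)
          return 1.0 - resid.var() / y.var()

      def term_feature(i, j, op):
          return snps[:, i] & snps[:, j] if op == "and" else snps[:, i] | snps[:, j]

      terms = [(i, j, op) for i, j in itertools.combinations(range(p), 2)
               for op in ("and", "or")]
      best = max(terms, key=lambda t: fit_r2(term_feature(*t), trait))
      print("best Boolean term:", best)     # expect (1, 4, 'and')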

  7. Comparison of a Classical and Quantum Based Restricted Boltzmann Machine (RBM) for Application to Non-linear Multivariate Regression.

    NASA Astrophysics Data System (ADS)

    Dorband, J. E.; Tilak, N.; Radov, A.

    2016-12-01

    In this paper, a classical computer implementation of RBM is compared to a quantum annealing based RBM running on a D-Wave 2X (an adiabatic quantum computer). The codes for both are essentially identical. Only a flag is set to change the activation function from a classically computed logistic function to the D-Wave. To obtain greater understanding of the behavior of the D-Wave, a study of the stochastic properties of a virtual qubit (a 12 qubit chain) and a cell of qubits (an 8 qubit cell) was performed. We will present the results of comparing the D-Wave implementation with a theoretically errorless adiabatic quantum computer. The main purpose of this study is to develop a generic RBM regression tool in order to infer CO2 fluxes from the NASA satellite OCO-2 observed CO2 concentrations and predicted atmospheric states using regression models. The carbon fluxes will then be assimilated into a land surface model to predict the Net Ecosystem Exchange at globally distributed regional sites.

  8. Equations for hydraulic conductivity estimation from particle size distribution: A dimensional analysis

    NASA Astrophysics Data System (ADS)

    Wang, Ji-Peng; François, Bertrand; Lambert, Pierre

    2017-09-01

    Estimating hydraulic conductivity from particle size distribution (PSD) is an important issue for various engineering problems. Classical models such as the Hazen, Beyer, and Kozeny-Carman models usually regard the grain diameter at 10% passing (d10) as an effective grain size, and the effects of particle size uniformity (in the Beyer model) or porosity (in the Kozeny-Carman model) are sometimes embedded. This technical note applies dimensional analysis (Buckingham's Π theorem) to analyze the relationship between hydraulic conductivity and the particle size distribution. The porosity is regarded as a dependent variable on the grain size distribution in unconsolidated conditions. The analysis indicates that the coefficient of grain size uniformity and a dimensionless group representing the gravity effect, which is proportional to the mean grain volume, are the two main determinative parameters for estimating hydraulic conductivity. Regression analysis is then carried out on a database comprising 431 samples collected from different depositional environments, and new equations are developed for hydraulic conductivity estimation. The new equation, validated on specimens beyond the database, shows improved prediction compared with the classic models.
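
    A hedged sketch of the regression step: once dimensional analysis has reduced the problem to a few dimensionless groups, a power law (linear in log-log space) links hydraulic conductivity to mean grain size and the coefficient of uniformity. The exponents below are fitted to fabricated data, not the paper's 431-sample database:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(6)
      n = 200
      d_mean = rng.uniform(0.05, 2.0, n)             # mean grain diameter, mm
      cu = rng.uniform(1.5, 15.0, n)                 # coefficient of uniformity
      K = 1e-2 * d_mean**2 * cu**-0.6 * rng.lognormal(sigma=0.2, size=n)

      # Log-log regression recovers the assumed power-law exponents.
      X = sm.add_constant(np.column_stack([np.log(d_mean), np.log(cu)]))
      fit = sm.OLS(np.log(K), X).fit()
      print(fit.params)    # ~[log(1e-2), 2.0, -0.6]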

  9. Playing-Related Musculoskeletal Disorders Among Classical Piano Students at Tertiary Institutions in Malaysia: Proportion and Associated Risk Factors.

    PubMed

    Ling, Chia-Ying; Loo, Fung-Chiat; Hamedon, Titi R

    2018-06-01

    Musicians are prone to performance injuries due to the nature of musical practice, and classical pianists are among the groups at high risk for playing-related musculoskeletal disorders (PRMDs). With the growing number of classical pianists in Malaysia, this study aimed to investigate the proportion of PRMDs occurring among classical piano students in tertiary institutions in Malaysia. Associations between gender, practice habits, diet, sports involvement, and PRMD were investigated. A survey was conducted among classical piano students (n=192) at tertiary institutions of Kuala Lumpur and Selangor. Results showed that 35.8% (n=68) of students reported having a PRMD. The shoulder was the most commonly affected body site, followed by the arm, finger, and wrist. Pain, fatigue, and stiffness were the most cited symptoms by those who suffered from a PRMD. Chi-square analysis showed a significant relationship between the occurrence of PRMD and practice hours (p=0.031), the habit of taking breaks during practice (p=0.045), physical cool-down exercises (p=0.037), and special diet (p=0.007). Multivariate logistic regression analyses confirmed the independent correlation between PRMDs and the lack of taking a break during practice, physical cool-down exercises, and special diet. Because PRMDs are reported at various severity levels, this study should increase awareness of PRMD among classical piano students and encourage injury prevention in musicians in the future to ensure long-lasting music careers.

  10. Three-dimensional texture analysis of contrast enhanced CT images for treatment response assessment in Hodgkin lymphoma: Comparison with F-18-FDG PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knogler, Thomas; El-Rabadi, Karem; Weber, Michael

    2014-12-15

    Purpose: To determine the diagnostic performance of three-dimensional (3D) texture analysis (TA) of contrast-enhanced computed tomography (CE-CT) images for treatment response assessment in patients with Hodgkin lymphoma (HL), compared with F-18-fludeoxyglucose (FDG) positron emission tomography/CT. Methods: 3D TA of 48 lymph nodes in 29 patients was performed on venous-phase CE-CT images before and after chemotherapy. All lymph nodes showed pathologically elevated FDG uptake at baseline. A stepwise logistic regression with forward selection was performed to identify classic CT parameters and texture features (TF) that enable the separation of complete response (CR) and persistent disease. Results: The TF fraction of image in runs, calculated for the 45° direction, was able to correctly identify CR with an accuracy of 75%, a sensitivity of 79.3%, and a specificity of 68.4%. Classical CT features achieved an accuracy of 75%, a sensitivity of 86.2%, and a specificity of 57.9%, whereas the combination of TF and CT imaging achieved an accuracy of 83.3%, a sensitivity of 86.2%, and a specificity of 78.9%. Conclusions: 3D TA of CE-CT images is potentially useful to identify nodal residual disease in HL, with a performance comparable to that of classical CT parameters. Best results are achieved when TA and classical CT features are combined.

  11. Treatment of singularities in a middle-crack tension specimen

    NASA Technical Reports Server (NTRS)

    Shivakumar, K. N.; Raju, I. S.

    1990-01-01

    A three-dimensional finite-element analysis of a middle-crack tension specimen subjected to mode I loading was performed to study the stress singularity along the crack front. The specimen was modeled using 20-node isoparametric elements with collapsed nonsingular elements at the crack front. The displacements and stresses from the analysis were used to estimate the power of singularities, by a log-log regression analysis, along the crack front. Analyses showed that finite-sized cracked bodies have two singular stress fields. Because of two singular stress fields near the free surface and the classical square root singularity elsewhere, the strain energy release rate appears to be an appropriate parameter all along the crack front.

  12. Bayesian logistic regression in detection of gene-steroid interaction for cancer at PDLIM5 locus.

    PubMed

    Wang, Ke-Sheng; Owusu, Daniel; Pan, Yue; Xie, Changchun

    2016-06-01

    The PDZ and LIM domain 5 (PDLIM5) gene may play a role in cancer, bipolar disorder, major depression, alcohol dependence and schizophrenia; however, little is known about the interaction effect of steroid use and the PDLIM5 gene on cancer. This study examined 47 single-nucleotide polymorphisms (SNPs) within the PDLIM5 gene in the Marshfield sample with 716 cancer patients (any diagnosed cancer, excluding minor skin cancer) and 2848 noncancer controls. A multiple logistic regression model in the PLINK software was used to examine the association of each SNP with cancer. Bayesian logistic regression in PROC GENMOD in SAS statistical software, ver. 9.4, was used to detect gene-steroid interactions influencing cancer. Single-marker analysis using PLINK identified 12 SNPs associated with cancer (P < 0.05); in particular, SNP rs6532496 revealed the strongest association with cancer (P = 6.84 × 10⁻³), while the next best signal was rs951613 (P = 7.46 × 10⁻³). Classic logistic regression in PROC GENMOD showed that both rs6532496 and rs951613 revealed strong gene-steroid interaction effects (OR=2.18, 95% CI=1.31-3.63 with P = 2.9 × 10⁻³ for rs6532496 and OR=2.07, 95% CI=1.24-3.45 with P = 5.43 × 10⁻³ for rs951613, respectively). Results from Bayesian logistic regression showed stronger interaction effects (OR=2.26, 95% CI=1.2-3.38 for rs6532496 and OR=2.14, 95% CI=1.14-3.2 for rs951613, respectively). All 12 SNPs associated with cancer revealed significant gene-steroid interaction effects (P < 0.05), whereas 13 SNPs showed gene-steroid interaction effects without a main effect on cancer. SNP rs4634230 revealed the strongest gene-steroid interaction effect (OR=2.49, 95% CI=1.5-4.13 with P = 4.0 × 10⁻⁴ based on classic logistic regression, and OR=2.59, 95% CI=1.4-3.97 from Bayesian logistic regression, respectively). This study provides evidence of common genetic variants within the PDLIM5 gene, and of interactions between PDLIM5 gene polymorphisms and steroid use, influencing cancer.

  13. Structured functional additive regression in reproducing kernel Hilbert spaces.

    PubMed

    Zhu, Hongxiao; Yao, Fang; Zhang, Hao Helen

    2014-06-01

    Functional additive models (FAMs) provide a flexible yet simple framework for regressions involving functional predictors. The utilization of data-driven basis in an additive rather than linear structure naturally extends the classical functional linear model. However, the critical issue of selecting nonlinear additive components has been less studied. In this work, we propose a new regularization framework for the structure estimation in the context of Reproducing Kernel Hilbert Spaces. The proposed approach takes advantage of the functional principal components which greatly facilitates the implementation and the theoretical analysis. The selection and estimation are achieved by penalized least squares using a penalty which encourages the sparse structure of the additive components. Theoretical properties such as the rate of convergence are investigated. The empirical performance is demonstrated through simulation studies and a real data application.

  14. Selective Weighted Least Squares Method for Fourier Transform Infrared Quantitative Analysis.

    PubMed

    Wang, Xin; Li, Yan; Wei, Haoyun; Chen, Xia

    2017-06-01

    Classical least squares (CLS) regression is a popular multivariate statistical method used frequently for quantitative analysis using Fourier transform infrared (FT-IR) spectrometry. Classical least squares provides the best linear unbiased estimator for uncorrelated residual errors with zero mean and equal variance. However, the noise in FT-IR spectra, which accounts for a large portion of the residual errors, is heteroscedastic. Thus, if this zero-mean noise dominates the residual errors, the weighted least squares (WLS) regression method described in this paper is a better estimator than CLS. However, if bias errors, such as the residual baseline error, are significant, WLS may perform worse than CLS. In this paper, we compare the effects of noise and bias error on the use of CLS and WLS in quantitative analysis. Results indicated that for wavenumbers with low absorbance, the bias error significantly affected the error, such that the performance of CLS is better than that of WLS. However, for wavenumbers with high absorbance, the noise significantly affected the error, and WLS proves to be better than CLS. Thus, we propose a selective weighted least squares (SWLS) regression that processes data at different wavenumbers using either CLS or WLS based on a selection criterion, i.e., whether the absorbance is lower or higher than a threshold. The effects of various factors on the optimal threshold value (OTV) for SWLS have been studied through numerical simulations. These studies reported that: (1) the concentration and the analyte type had minimal effect on OTV; and (2) the major factor that influences OTV is the ratio between the bias error and the standard deviation of the noise. The last part of this paper is dedicated to the quantitative analysis of methane gas spectra and methane/toluene gas mixture spectra, as measured using FT-IR spectrometry and analyzed with CLS, WLS, and SWLS. The standard error of prediction (SEP), bias of prediction (bias), and residual sum of squares of the errors (RSS) from the three quantitative analyses were compared. In the methane gas analysis, SWLS yielded the lowest SEP and RSS among the three methods. In the methane/toluene mixture analysis, a modification of SWLS is presented to tackle the bias error from other components. SWLS without the modification presents the lowest SEP in all cases, but not the lowest bias and RSS. The modification of SWLS reduced the bias and yielded a lower RSS than CLS, especially for small components.
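
    A small sketch contrasting CLS and WLS single-component estimation under heteroscedastic noise of the kind described above; the paper's SWLS additionally switches between the two estimators per wavenumber via an absorbance threshold, which is not reproduced exactly here:

      import numpy as np

      rng = np.random.default_rng(7)
      wn = 400
      s = np.abs(np.sin(np.linspace(0, 6, wn)))      # pure-component spectrum
      c_true = 0.7
      noise_sd = 0.002 + 0.02 * s                    # noise grows with absorbance
      a = c_true * s + rng.normal(size=wn) * noise_sd

      c_cls = s @ a / (s @ s)                        # unweighted least squares
      w = 1.0 / noise_sd**2
      c_wls = (w * s) @ a / ((w * s) @ s)            # inverse-variance weighting
      print("CLS:", c_cls, "WLS:", c_wls, "true:", c_true)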

  15. Fast Quantitative Analysis Of Museum Objects Using Laser-Induced Breakdown Spectroscopy And Multiple Regression Algorithms

    NASA Astrophysics Data System (ADS)

    Lorenzetti, G.; Foresta, A.; Palleschi, V.; Legnaioli, S.

    2009-09-01

    The recent development of mobile instrumentation, specifically devoted to in situ analysis and study of museum objects, allows the acquisition of many LIBS spectra in a very short time. However, such a large amount of data calls for new analytical approaches which would guarantee a prompt analysis of the results obtained. In this communication, we present and discuss the advantages of statistical analytical methods, such as Partial Least Squares Multiple Regression algorithms, vs. the classical calibration curve approach. PLS algorithms allow one to obtain, in real time, information on the composition of the objects under study; this feature of the method, compared to the traditional off-line analysis of the data, is extremely useful for the optimization of the measurement times and the number of points associated with the analysis. In fact, the real-time availability of the compositional information makes it possible to concentrate attention on the most 'interesting' parts of the object, without over-sampling the zones which would not provide useful information for the scholars or the conservators. Some examples of the applications of this method will be presented, including studies recently performed by the researchers of the Applied Laser Spectroscopy Laboratory on museum bronze objects.

  16. Assessment of Uncertainties Related to Seismic Hazard Using Fuzzy Analysis

    NASA Astrophysics Data System (ADS)

    Jorjiashvili, N.; Yokoi, T.; Javakhishvili, Z.

    2013-05-01

    Seismic hazard analysis has become a very important issue in the last few decades. Recently, new technologies and improved data availability have helped many scientists understand where and why earthquakes happen, the physics of earthquakes, etc., and they have begun to understand the role of uncertainty in seismic hazard analysis. However, a significant problem remains: how to handle the existing uncertainty. The same lack of information makes it difficult to quantify uncertainty accurately. Attenuation curves are usually obtained statistically, by regression analysis. Statistical and probabilistic analyses show overlapping results for the site coefficients. This overlapping takes place not only at the border between two neighboring classes, but also among more than three classes. Although the analysis starts from classifying sites using geological terms, these site coefficients are not classified at all. In the present study, this problem is addressed using fuzzy set theory. Using membership functions, the ambiguities at the border between neighboring classes can be avoided. Fuzzy set theory is applied to southern California in the conventional way. In this study, the standard deviations that show the variation within each site class, obtained by fuzzy set theory and by the classical approach, are compared. The results show that when data for hazard assessment are insufficient, site classification based on fuzzy set theory yields smaller standard deviations than the classical approach, which is direct evidence of reduced uncertainty.

  17. Fast function-on-scalar regression with penalized basis expansions.

    PubMed

    Reiss, Philip T; Huang, Lei; Mennes, Maarten

    2010-01-01

    Regression models for functional responses and scalar predictors are often fitted by means of basis functions, with quadratic roughness penalties applied to avoid overfitting. The fitting approach described by Ramsay and Silverman in the 1990s amounts to a penalized ordinary least squares (P-OLS) estimator of the coefficient functions. We recast this estimator as a generalized ridge regression estimator, and present a penalized generalized least squares (P-GLS) alternative. We describe algorithms by which both estimators can be implemented, with automatic selection of optimal smoothing parameters, in a more computationally efficient manner than has heretofore been available. We discuss pointwise confidence intervals for the coefficient functions, simultaneous inference by permutation tests, and model selection, including a novel notion of pointwise model selection. P-OLS and P-GLS are compared in a simulation study. Our methods are illustrated with an analysis of age effects in a functional magnetic resonance imaging data set, as well as a reanalysis of a now-classic Canadian weather data set. An R package implementing the methods is publicly available.

  18. Mixed effect Poisson log-linear models for clinical and epidemiological sleep hypnogram data

    PubMed Central

    Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian; Punjabi, Naresh M.

    2013-01-01

    Bayesian Poisson log-linear multilevel models scalable to epidemiological studies are proposed to investigate population variability in sleep state transition rates. Hierarchical random effects are used to account for pairings of subjects and repeated measures within those subjects, as comparing diseased to non-diseased subjects while minimizing bias is of importance. Essentially, non-parametric piecewise constant hazards are estimated and smoothed, allowing for time-varying covariates and segment of the night comparisons. The Bayesian Poisson regression is justified through a re-derivation of a classical algebraic likelihood equivalence of Poisson regression with a log(time) offset and survival regression assuming exponentially distributed survival times. Such re-derivation allows synthesis of two methods currently used to analyze sleep transition phenomena: stratified multi-state proportional hazards models and log-linear models with GEE for transition counts. An example data set from the Sleep Heart Health Study is analyzed. Supplementary material includes the analyzed data set as well as the code for a reproducible analysis. PMID:22241689
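
    The algebraic equivalence invoked above can be checked numerically: an intercept-only Poisson GLM on event counts with a log(time-at-risk) offset recovers exactly the rate estimate of an exponential survival model. The transition data below are simulated, not from the Sleep Heart Health Study:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(8)
      n = 300
      time_at_risk = rng.uniform(1, 10, size=n)      # e.g. minutes in a sleep state
      rate = 0.4
      events = rng.poisson(rate * time_at_risk)      # observed transitions

      # Poisson regression with a log(time) offset.
      glm = sm.GLM(events, np.ones((n, 1)),
                   family=sm.families.Poisson(),
                   offset=np.log(time_at_risk)).fit()
      print("Poisson-GLM rate:", np.exp(glm.params[0]))              # ~0.4
      print("exponential MLE :", events.sum() / time_at_risk.sum())  # identical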

  19. An augmented classical least squares method for quantitative Raman spectral analysis against component information loss.

    PubMed

    Zhou, Yan; Cao, Hui

    2013-01-01

    We propose an augmented classical least squares (ACLS) calibration method for quantitative Raman spectral analysis against component information loss. The Raman spectral signals with low analyte concentration correlations were selected and used as the substitutes for unknown quantitative component information during the CLS calibration procedure. The number of selected signals was determined by using the leave-one-out root-mean-square error of cross-validation (RMSECV) curve. An ACLS model was built based on the augmented concentration matrix and the reference spectral signal matrix. The proposed method was compared with partial least squares (PLS) and principal component regression (PCR) using one example: a data set recorded from an experiment of analyte concentration determination using Raman spectroscopy. A 2-fold cross-validation with Venetian blinds strategy was exploited to evaluate the predictive power of the proposed method. One-way analysis of variance (ANOVA) was used to assess the predictive power difference between the proposed method and existing methods. Results indicated that the proposed method is effective at increasing the robust predictive power of the traditional CLS model against component information loss, and its predictive power is comparable to that of PLS or PCR.
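
    A hedged sketch of the ACLS idea on toy spectra: plain CLS with an unmodeled interferent yields biased concentration predictions, while augmenting the concentration matrix with a surrogate column (here, the leading left-singular vector of the CLS residuals rather than the authors' RMSECV-guided signal selection) restores an essentially unbiased prediction:

      import numpy as np

      rng = np.random.default_rng(9)
      wn, n = 200, 40
      grid = np.arange(wn)
      k_analyte = np.exp(-0.5 * ((grid - 60) / 8.0) ** 2)    # analyte band
      k_hidden = np.exp(-0.5 * ((grid - 140) / 12.0) ** 2)   # "lost" component
      c_a, c_h = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
      A = np.outer(c_a, k_analyte) + np.outer(c_h, k_hidden)
      A += rng.normal(scale=1e-3, size=A.shape)

      C = c_a[:, None]                                       # only analyte known
      K_cls, *_ = np.linalg.lstsq(C, A, rcond=None)          # classical LS
      u = np.linalg.svd(A - C @ K_cls, full_matrices=False)[0]
      C_aug = np.column_stack([c_a, u[:, 0]])                # augmented matrix
      K_acls, *_ = np.linalg.lstsq(C_aug, A, rcond=None)

      a_new = 0.3 * k_analyte + 0.9 * k_hidden               # new mixture spectrum
      c_cls = a_new @ K_cls[0] / (K_cls[0] @ K_cls[0])       # biased by interferent
      c_acls = np.linalg.lstsq(K_acls.T, a_new, rcond=None)[0][0]
      print("true 0.3 | CLS:", round(c_cls, 3), "| ACLS:", round(c_acls, 3))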

  20. The comparison of robust partial least squares regression with robust principal component regression on a real

    NASA Astrophysics Data System (ADS)

    Polat, Esra; Gunay, Suleyman

    2013-10-01

    One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes the overestimation of the regression parameters and increases the variance of these parameters. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed. The SIMPLS algorithm is the leading PLSR algorithm because of its speed and efficiency, and because its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR (RPLSR) method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables; then the dependent variables are regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to show the usage of the RPCR and RSIMPLS methods on an econometric data set, and hence to compare the two methods on an inflation model of Turkey. The considered methods are compared in terms of predictive ability and goodness of fit by using a robust Root Mean Squared Error of Cross-validation (R-RMSECV), a robust R² value, and the Robust Component Selection (RCS) statistic.

  1. A Critical Examination of Figure of Merit (FOM). Assessing the Goodness-of-Fit in Gamma/X-ray Peak Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Croft, S.; Favalli, Andrea; Weaver, Brian Phillip

    2015-10-06

    In this paper we develop and investigate several criteria for assessing how well a proposed spectral form fits observed spectra. We consider the classical improved figure of merit (FOM) along with several modifications, as well as criteria motivated by Poisson regression from the statistical literature. We also develop a new FOM that is based on the statistical idea of the bootstrap. A spectral simulator has been developed to assess the performance of these different criteria under multiple data configurations.

  2. New insights into faster computation of uncertainties

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Atreyee

    2012-11-01

    Heavy computation power, lengthy simulations, and an exhaustive number of model runs—often these seem like the only statistical tools that scientists have at their disposal when computing uncertainties associated with predictions, particularly in cases of environmental processes such as groundwater movement. However, calculation of uncertainties need not be as lengthy, a new study shows. Comparing two approaches—the classical Bayesian “credible interval” and a less commonly used regression-based “confidence interval” method—Lu et al. show that for many practical purposes both methods provide similar estimates of uncertainties. The advantage of the regression method is that it demands 10-1000 model runs, whereas the classical Bayesian approach requires 10,000 to millions of model runs.

  3. A new surrogate modeling technique combining Kriging and polynomial chaos expansions – Application to uncertainty analysis in computational dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kersaudy, Pierric

    2015-04-01

    In numerical dosimetry, the recent advances in high performance computing led to a strong reduction of the required computational time to assess the specific absorption rate (SAR) characterizing the human exposure to electromagnetic waves. However, this procedure remains time-consuming and a single simulation can request several hours. As a consequence, the influence of uncertain input parameters on the SAR cannot be analyzed using crude Monte Carlo simulation. The solution presented here to perform such an analysis is surrogate modeling. This paper proposes a novel approach to build such a surrogate model from a design of experiments. Considering a sparse representation of the polynomial chaos expansions using least-angle regression as a selection algorithm to retain the most influential polynomials, this paper proposes to use the selected polynomials as regression functions for the universal Kriging model. The leave-one-out cross validation is used to select the optimal number of polynomials in the deterministic part of the Kriging model. The proposed approach, called LARS-Kriging-PC modeling, is applied to three benchmark examples and then to a full-scale metamodeling problem involving the exposure of a numerical fetus model to a femtocell device. The performances of the LARS-Kriging-PC are compared to an ordinary Kriging model and to a classical sparse polynomial chaos expansion. The LARS-Kriging-PC appears to have better performances than the two other approaches. A significant accuracy improvement is observed compared to the ordinary Kriging or to the sparse polynomial chaos depending on the studied case. This approach seems to be an optimal solution between the two other classical approaches. A global sensitivity analysis is finally performed on the LARS-Kriging-PC model of the fetus exposure problem.

  4. Random Forest as a Predictive Analytics Alternative to Regression in Institutional Research

    ERIC Educational Resources Information Center

    He, Lingjun; Levine, Richard A.; Fan, Juanjuan; Beemer, Joshua; Stronach, Jeanne

    2018-01-01

    In institutional research, modern data mining approaches are seldom considered to address predictive analytics problems. The goal of this paper is to highlight the advantages of tree-based machine learning algorithms over classic (logistic) regression methods for data-informed decision making in higher education problems, and stress the success of…
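
    A minimal sketch of the comparison the paper advocates: a random forest versus a logistic regression on synthetic student-outcome data with a nonlinear interaction, scored by cross-validated accuracy (features and effect structure are invented):

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(10)
      n = 1000
      gpa = rng.uniform(0, 4, n)
      credits = rng.integers(0, 30, n).astype(float)
      # Nonlinear, interaction-heavy retention signal favouring tree ensembles:
      p_retain = 1 / (1 + np.exp(-(-3 + 1.2 * gpa + 0.08 * credits * (gpa > 2))))
      retained = (rng.random(n) < p_retain).astype(int)
      X = np.column_stack([gpa, credits])

      for model in (LogisticRegression(max_iter=1000),
                    RandomForestClassifier(n_estimators=200, random_state=0)):
          acc = cross_val_score(model, X, retained, cv=5).mean()
          print(type(model).__name__, round(acc, 3))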

  5. Regression Methods for Categorical Dependent Variables: Effects on a Model of Student College Choice

    ERIC Educational Resources Information Center

    Rapp, Kelly E.

    2012-01-01

    The use of categorical dependent variables with the classical linear regression model (CLRM) violates many of the model's assumptions and may result in biased estimates (Long, 1997; O'Connell, Goldstein, Rogers, & Peng, 2008). Many dependent variables of interest to educational researchers (e.g., professorial rank, educational attainment) are…

  6. Testing a single regression coefficient in high dimensional linear models

    PubMed Central

    Zhong, Ping-Shou; Li, Runze; Wang, Hansheng; Tsai, Chih-Ling

    2017-01-01

    In linear regression models with high dimensional data, the classical z-test (or t-test) for testing the significance of each single regression coefficient is no longer applicable. This is mainly because the number of covariates exceeds the sample size. In this paper, we propose a simple and novel alternative by introducing the Correlated Predictors Screening (CPS) method to control for predictors that are highly correlated with the target covariate. Accordingly, the classical ordinary least squares approach can be employed to estimate the regression coefficient associated with the target covariate. In addition, we demonstrate that the resulting estimator is consistent and asymptotically normal even if the random errors are heteroscedastic. This enables us to apply the z-test to assess the significance of each covariate. Based on the p-value obtained from testing the significance of each covariate, we further conduct multiple hypothesis testing by controlling the false discovery rate at the nominal level. Then, we show that the multiple hypothesis testing achieves consistent model selection. Simulation studies and empirical examples are presented to illustrate the finite sample performance and the usefulness of the proposed method, respectively. PMID:28663668

  7. Testing a single regression coefficient in high dimensional linear models.

    PubMed

    Lan, Wei; Zhong, Ping-Shou; Li, Runze; Wang, Hansheng; Tsai, Chih-Ling

    2016-11-01

    In linear regression models with high dimensional data, the classical z-test (or t-test) for testing the significance of each single regression coefficient is no longer applicable. This is mainly because the number of covariates exceeds the sample size. In this paper, we propose a simple and novel alternative by introducing the Correlated Predictors Screening (CPS) method to control for predictors that are highly correlated with the target covariate. Accordingly, the classical ordinary least squares approach can be employed to estimate the regression coefficient associated with the target covariate. In addition, we demonstrate that the resulting estimator is consistent and asymptotically normal even if the random errors are heteroscedastic. This enables us to apply the z-test to assess the significance of each covariate. Based on the p-value obtained from testing the significance of each covariate, we further conduct multiple hypothesis testing by controlling the false discovery rate at the nominal level. Then, we show that the multiple hypothesis testing achieves consistent model selection. Simulation studies and empirical examples are presented to illustrate the finite sample performance and the usefulness of the proposed method, respectively.
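
    A hedged sketch of the CPS idea: to test one target coefficient when the number of covariates exceeds the sample size, control only for the k predictors most correlated with the target covariate, then run ordinary least squares and read off the z-statistic. The screening size k=10 is arbitrary here; the paper gives a principled choice:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(11)
      n, p, k = 120, 500, 10
      X = rng.normal(size=(n, p))
      X[:, 1] = 0.6 * X[:, 0] + 0.8 * rng.normal(size=n)   # correlated with target
      y = 0.8 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(size=n)

      target = X[:, 0]
      corr = np.abs([np.corrcoef(target, X[:, j])[0, 1] for j in range(1, p)])
      controls = 1 + np.argsort(corr)[::-1][:k]            # CPS screening step
      design = sm.add_constant(np.column_stack([target, X[:, controls]]))
      fit = sm.OLS(y, design).fit()
      print("beta_hat:", fit.params[1], "z:", fit.tvalues[1])   # near the true 0.8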

  8. A Gibbs sampler for Bayesian analysis of site-occupancy data

    USGS Publications Warehouse

    Dorazio, Robert M.; Rodriguez, Daniel Taylor

    2012-01-01

    1. A Bayesian analysis of site-occupancy data containing covariates of species occurrence and species detection probabilities is usually completed using Markov chain Monte Carlo methods in conjunction with software programs that can implement those methods for any statistical model, not just site-occupancy models. Although these software programs are quite flexible, considerable experience is often required to specify a model and to initialize the Markov chain so that summaries of the posterior distribution can be estimated efficiently and accurately. 2. As an alternative to these programs, we develop a Gibbs sampler for Bayesian analysis of site-occupancy data that include covariates of species occurrence and species detection probabilities. This Gibbs sampler is based on a class of site-occupancy models in which probabilities of species occurrence and detection are specified as probit-regression functions of site- and survey-specific covariate measurements. 3. To illustrate the Gibbs sampler, we analyse site-occupancy data of the blue hawker, Aeshna cyanea (Odonata, Aeshnidae), a common dragonfly species in Switzerland. Our analysis includes a comparison of results based on Bayesian and classical (non-Bayesian) methods of inference. We also provide code (based on the R software program) for conducting Bayesian and classical analyses of site-occupancy data.

  9. Structured functional additive regression in reproducing kernel Hilbert spaces

    PubMed Central

    Zhu, Hongxiao; Yao, Fang; Zhang, Hao Helen

    2013-01-01

    Summary Functional additive models (FAMs) provide a flexible yet simple framework for regressions involving functional predictors. The utilization of data-driven basis in an additive rather than linear structure naturally extends the classical functional linear model. However, the critical issue of selecting nonlinear additive components has been less studied. In this work, we propose a new regularization framework for the structure estimation in the context of Reproducing Kernel Hilbert Spaces. The proposed approach takes advantage of the functional principal components which greatly facilitates the implementation and the theoretical analysis. The selection and estimation are achieved by penalized least squares using a penalty which encourages the sparse structure of the additive components. Theoretical properties such as the rate of convergence are investigated. The empirical performance is demonstrated through simulation studies and a real data application. PMID:25013362

  10. Use of partial least squares regression for the multivariate calibration of hazardous air pollutants in open-path FT-IR spectrometry

    NASA Astrophysics Data System (ADS)

    Hart, Brian K.; Griffiths, Peter R.

    1998-06-01

    Partial least squares (PLS) regression has been evaluated as a robust calibration technique for over 100 hazardous air pollutants (HAPs) measured by open path Fourier transform infrared (OP/FT-IR) spectrometry. PLS has the advantage over the currently recommended calibration method, classical least squares (CLS), in that it can use the whole usable spectrum (700-1300 cm-1, 2000-2150 cm-1, and 2400-3000 cm-1) and detect several analytes simultaneously. Up to one hundred HAPs synthetically added to OP/FT-IR backgrounds have been simultaneously calibrated and detected using PLS. PLS also has the advantage of requiring less preprocessing of spectra than CLS calibration schemes, allowing PLS to provide user-independent, real-time analysis of OP/FT-IR spectra.

  11. EMD-regression for modelling multi-scale relationships, and application to weather-related cardiovascular mortality

    NASA Astrophysics Data System (ADS)

    Masselot, Pierre; Chebana, Fateh; Bélanger, Diane; St-Hilaire, André; Abdous, Belkacem; Gosselin, Pierre; Ouarda, Taha B. M. J.

    2018-01-01

    In a number of environmental studies, relationships between natural processes are often assessed through regression analyses using time series data. Such data are often multi-scale and non-stationary, leading to poor accuracy of the resulting regression models and therefore to results of moderate reliability. To deal with this issue, the present paper introduces the EMD-regression methodology, which consists in applying the empirical mode decomposition (EMD) algorithm to the data series and then using the resulting components in regression models. The proposed methodology presents a number of advantages. First, it accounts for the non-stationarity of the data series. Second, the approach acts as a scan of the relationship between a response variable and the predictors at different time scales, providing new insights about this relationship. To illustrate the proposed methodology, it is applied to study the relationship between weather and cardiovascular mortality in Montreal, Canada. The results provide new knowledge concerning the studied relationship. For instance, they show that humidity can cause excess mortality at the monthly time scale, a scale not visible in classical models. A comparison is also conducted with state-of-the-art methods, namely generalized additive models and distributed lag models, both widely used in weather-related health studies. The comparison shows that EMD-regression achieves better prediction performance and provides more details than classical models concerning the relationship.
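
    A hedged sketch of EMD-regression, assuming the third-party PyEMD package for the decomposition step (an assumption; any EMD implementation would do): each predictor series is split into intrinsic mode functions (IMFs), which then enter an ordinary regression so that coefficients can differ across time scales. Series and effect sizes are toy constructions:

      import numpy as np
      import statsmodels.api as sm
      from PyEMD import EMD          # assumption: PyEMD (pip install EMD-signal)

      rng = np.random.default_rng(12)
      t = np.linspace(0, 20, 1000)
      temperature = (np.sin(2 * np.pi * t) + 0.3 * np.sin(0.2 * np.pi * t)
                     + rng.normal(scale=0.1, size=t.size))
      mortality = 0.8 * np.sin(0.2 * np.pi * t) + rng.normal(scale=0.2, size=t.size)

      # Decompose the predictor into IMFs (rows: fast -> slow components),
      # then regress the response on all IMFs jointly.
      imfs = EMD().emd(temperature)
      fit = sm.OLS(mortality, sm.add_constant(imfs.T)).fit()
      print(fit.params)              # the slow-scale IMF should carry the signal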

  12. Comparison of classical statistical methods and artificial neural network in traffic noise prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nedic, Vladimir, E-mail: vnedic@kg.ac.rs; Despotovic, Danijela, E-mail: ddespotovic@kg.ac.rs; Cvetanovic, Slobodan, E-mail: slobodan.cvetanovic@eknfak.ni.ac.rs

    2014-11-15

    Traffic is the main source of noise in urban environments and significantly affects human mental and physical health and labor productivity. Therefore it is very important to model the noise produced by various vehicles. Techniques for traffic noise prediction are mainly based on regression analysis, which generally is not good enough to describe the trends of noise. In this paper the application of artificial neural networks (ANNs) for the prediction of traffic noise is presented. As input variables of the neural network, the structure of the traffic flow and the average speed of the traffic flow are chosen. The output variable of the network is the equivalent noise level in the given time period, Leq. Based on these parameters, the network is modeled, trained and tested through a comparative analysis of the calculated values and measured levels of traffic noise, using an originally developed user-friendly software package. It is shown that artificial neural networks can be a useful tool for the prediction of noise with sufficient accuracy. In addition, the measured values were also used to calculate the equivalent noise level by means of classical methods, and a comparative analysis is given. The results clearly show that the ANN approach is superior to any other statistical method for traffic noise level prediction. - Highlights: • We proposed an ANN model for prediction of traffic noise. • We developed an originally designed user-friendly software package. • The results are compared with classical statistical methods. • The ANN model shows much better predictive capabilities.
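
    An illustrative sketch of the setup described above: predicting the equivalent noise level Leq from traffic-flow structure and average speed with a small neural network, compared against plain linear regression. The noise-generation formula is invented for the demonstration and is not a road-traffic emission model:

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(13)
      n = 2000
      flow = rng.uniform(100, 3000, n)            # vehicles per hour
      heavy = rng.uniform(0.0, 0.3, n)            # share of heavy vehicles
      speed = rng.uniform(20, 90, n)              # average speed, km/h
      leq = (10 * np.log10(flow) + 8 * heavy + 0.05 * speed
             + rng.normal(scale=0.5, size=n))     # invented Leq, dB(A)

      X = np.column_stack([flow, heavy, speed])
      X_tr, X_te, y_tr, y_te = train_test_split(X, leq, random_state=0)
      linear = LinearRegression().fit(X_tr, y_tr)
      ann = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16, 16),
                                       max_iter=2000,
                                       random_state=0)).fit(X_tr, y_tr)
      print("linear R^2:", linear.score(X_te, y_te))
      print("ANN R^2   :", ann.score(X_te, y_te))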

  13. Safety analysis of urban arterials at the meso level.

    PubMed

    Li, Jia; Wang, Xuesong

    2017-11-01

    Urban arterials form the main structure of street networks. They typically have multiple lanes, high traffic volume, and high crash frequency. Classical crash prediction models investigate the relationship between arterial characteristics and traffic safety by treating road segments and intersections as isolated units. This micro-level analysis does not work when examining urban arterial crashes because signal spacing is typically short for urban arterials, and there are interactions between intersections and road segments that classical models do not accommodate. Signal spacing also has safety effects on both intersections and road segments that classical models cannot fully account for because they allocate crashes separately to intersections and road segments. In addition, classical models do not consider the impact on arterial safety of the immediately surrounding street network pattern. This study proposes a new modeling methodology that will offer an integrated treatment of intersections and road segments by combining signalized intersections and their adjacent road segments into a single unit based on road geometric design characteristics and operational conditions. These are called meso-level units because they offer an analytical approach between micro and macro. The safety effects of signal spacing and street network pattern were estimated for this study based on 118 meso-level units obtained from 21 urban arterials in Shanghai, and were examined using CAR (conditional autoregressive) models that corrected for spatial correlation among the units within individual arterials. Results showed shorter arterial signal spacing was associated with higher total and PDO (property damage only) crashes, while arterials with a greater number of parallel roads were associated with lower total, PDO, and injury crashes. The findings from this study can be used in the traffic safety planning, design, and management of urban arterials. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Application of third molar development and eruption models in estimating dental age in Malay sub-adults.

    PubMed

    Mohd Yusof, Mohd Yusmiaidil Putera; Cauwels, Rita; Deschepper, Ellen; Martens, Luc

    2015-08-01

    Third molar development (TMD) has been widely utilized as a radiographic method for dental age estimation. Using the same radiograph of the same individual, third molar eruption (TME) information can be incorporated into the TMD regression model. This study aims to evaluate the performance of dental age estimation with the individual-method models and the combined model (TMD and TME), based on classic multiple linear regression and principal component regression. A sample of 705 digital panoramic radiographs of Malay sub-adults aged between 14.1 and 23.8 years was collected. The techniques described by Gleiser and Hunt (modified by Kohler) and by Olze were employed to stage TMD and TME, respectively. The data were divided to develop three respective models based on the two regression approaches, multiple linear regression and principal component analysis. The trained models were then validated on the test sample, and the accuracy of age prediction was compared between the models. The coefficient of determination (R²) and the root mean square error (RMSE) were calculated. In both genders, the adjusted R² increased in the linear regressions of the combined model compared with the individual models. An overall decrease in RMSE was detected in the combined model compared with TMD (0.03-0.06) and TME (0.2-0.8). In principal component regression, the combined model exhibited a low adjusted R² and a high RMSE, except in males. Dental age is therefore better predicted using the combined model in multiple linear regression. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  15. Sperm Retrieval in Patients with Klinefelter Syndrome: A Skewed Regression Model Analysis.

    PubMed

    Chehrazi, Mohammad; Rahimiforoushani, Abbas; Sabbaghian, Marjan; Nourijelyani, Keramat; Sadighi Gilani, Mohammad Ali; Hoseini, Mostafa; Vesali, Samira; Yaseri, Mehdi; Alizadeh, Ahad; Mohammad, Kazem; Samani, Reza Omani

    2017-01-01

    The most common chromosomal abnormality associated with non-obstructive azoospermia (NOA) is Klinefelter syndrome (KS), which occurs in 1-1.72 out of 500-1000 male infants. The probability of retrieving sperm, as the outcome, could be asymmetrically distributed between patients with and without KS; therefore, logistic regression analysis is not a well-qualified test for this type of data. This study was designed to evaluate a skewed regression model analysis for data collected from microsurgical testicular sperm extraction (micro-TESE) among azoospermic patients with and without non-mosaic KS. This cohort study compared the micro-TESE outcome between 134 men with classic KS and 537 men with NOA and normal karyotype who were referred to Royan Institute between 2009 and 2011. In addition to our main outcome, which was sperm retrieval, we also used logistic and skewed regression analyses to compare the following demographic and hormonal factors between the two groups: age and levels of follicle stimulating hormone (FSH), luteinizing hormone (LH), and testosterone. A comparison of micro-TESE between the KS and control groups showed a success rate of 28.4% (38/134) for the KS group and 22.2% (119/537) for the control group. In the KS group, a significant difference (P<0.001) existed between testosterone levels for the successful sperm retrieval group (3.4 ± 0.48 mg/mL) compared to the unsuccessful sperm retrieval group (2.33 ± 0.23 mg/mL). The quasi Akaike information criterion (QAIC) indicated a goodness of fit of 74 for the skewed model, which was lower than that of logistic regression (QAIC=85). According to the results, skewed regression is more efficient in estimating sperm retrieval success when the data from patients with KS are analyzed. This finding should be investigated by conducting additional studies with different data structures.

  16. surrosurv: An R package for the evaluation of failure time surrogate endpoints in individual patient data meta-analyses of randomized clinical trials.

    PubMed

    Rotolo, Federico; Paoletti, Xavier; Michiels, Stefan

    2018-03-01

    Surrogate endpoints are attractive for use in clinical trials instead of well-established endpoints because of practical convenience. To validate a surrogate endpoint, two important measures can be estimated in a meta-analytic context when individual patient data are available: the R²(indiv) or the Kendall's τ at the individual level, and the R²(trial) at the trial level. We aimed to provide an R implementation of classical and well-established, as well as more recent, statistical methods for surrogacy assessment with failure time endpoints. We also intended to incorporate utilities for model checking and visualization, and the data-generating methods described in the literature to date. In the case of failure time endpoints, the classical approach is based on two steps. First, a Kendall's τ is estimated as the measure of individual-level surrogacy using a copula model. Then, the R²(trial) is computed via a linear regression of the estimated treatment effects; at this second step, the estimation uncertainty can be accounted for via a measurement-error model or via weights. In addition to the classical approach, we recently developed an approach based on bivariate auxiliary Poisson models, with individual random effects to measure the Kendall's τ and treatment-by-trial interactions to measure the R²(trial). The most common data simulation models described in the literature are based on copula models, mixed proportional hazard models, and mixtures of half-normal and exponential random variables. The R package surrosurv implements the classical two-step method with Clayton, Plackett, and Hougaard copulas. It also optionally allows adjusting the second-step linear regression for measurement error. The mixed Poisson approach is implemented with different reduced models in addition to the full model. We present the package functions for estimating the surrogacy models, for checking their convergence, for performing leave-one-trial-out cross-validation, and for plotting the results. We illustrate their use in practice on individual patient data from a meta-analysis of 4069 patients with advanced gastric cancer from 20 trials of chemotherapy. The surrosurv package provides an R implementation of classical and recent statistical methods for surrogacy assessment of failure time endpoints. Flexible simulation functions are available to generate data according to the methods described in the literature. Copyright © 2017 Elsevier B.V. All rights reserved.
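
    The classical second step described above reduces to a weighted linear regression and can be sketched in base R without the package interface (all names hypothetical; in surrosurv the uncertainty of the estimated effects can instead be handled by a measurement-error model):

        # One row per trial: estimated treatment effects on the surrogate and true endpoints
        fit <- lm(beta_true ~ beta_surr, data = trials, weights = n_patients)
        summary(fit)$r.squared   # R²(trial), the trial-level surrogacy measure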

  17. Estimation of Logistic Regression Models in Small Samples. A Simulation Study Using a Weakly Informative Default Prior Distribution

    ERIC Educational Resources Information Center

    Gordovil-Merino, Amalia; Guardia-Olmos, Joan; Pero-Cebollero, Maribel

    2012-01-01

    In this paper, we used simulations to compare the performance of classical and Bayesian estimations in logistic regression models using small samples. In the performed simulations, conditions were varied, including the type of relationship between independent and dependent variable values (i.e., unrelated and related values), the type of variable…
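
    A minimal sketch of such a comparison, assuming the weakly informative default prior of Gelman et al. (independent Cauchy priors with scale 2.5) as implemented by bayesglm() in the arm package; the data frame is hypothetical:

        library(arm)
        fit_ml    <- glm(y ~ x, family = binomial, data = small_sample)  # classical ML
        fit_bayes <- bayesglm(y ~ x, family = binomial, data = small_sample,
                              prior.scale = 2.5, prior.df = 1)           # Cauchy(0, 2.5) prior
        cbind(ml = coef(fit_ml), bayes = coef(fit_bayes))  # shrinkage is visible in small n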

  18. Development and validation of multivariate calibration methods for simultaneous estimation of Paracetamol, Enalapril maleate and hydrochlorothiazide in pharmaceutical dosage form

    NASA Astrophysics Data System (ADS)

    Singh, Veena D.; Daharwal, Sanjay J.

    2017-01-01

    Three multivariate calibration spectrophotometric methods were developed for the simultaneous estimation of Paracetamol (PARA), Enalapril maleate (ENM) and Hydrochlorothiazide (HCTZ) in tablet dosage form: multi-linear regression calibration (MLRC), the trilinear regression calibration (TLRC) method and the classical least squares (CLS) method. The selectivity of the proposed methods was studied by analyzing laboratory-prepared ternary mixtures, and the methods were successfully applied to the combined dosage form. The proposed methods were validated as per ICH guidelines, and good accuracy, precision and specificity were confirmed within the concentration ranges of 5-35 μg mL⁻¹, 5-40 μg mL⁻¹ and 5-40 μg mL⁻¹ for PARA, HCTZ and ENM, respectively. The results were statistically compared with a reported HPLC method. Thus, the proposed methods can be effectively used for routine quality control analysis of these drugs in commercial tablet dosage form.
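
    The CLS step has a compact closed form: with A the absorbance matrix (mixtures × wavelengths) and C the known concentration matrix (mixtures × 3 analytes), the pure-component spectra K and the concentrations of unknowns follow by least squares. A hedged R sketch with illustrative matrix names:

        K     <- solve(t(C) %*% C, t(C) %*% A)         # estimated pure-component spectra
        C_hat <- A_new %*% t(K) %*% solve(K %*% t(K))  # predicted PARA/ENM/HCTZ concentrations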

  19. Oxygen Costs of the Incremental Shuttle Walk Test in Cardiac Rehabilitation Participants: An Historical and Contemporary Analysis.

    PubMed

    Buckley, John P; Cardoso, Fernando M F; Birkett, Stefan T; Sandercock, Gavin R H

    2016-12-01

    The incremental shuttle walk test (ISWT) is a standardised assessment for cardiac rehabilitation. Three studies have reported the oxygen costs (VO2)/metabolic equivalents (METs) of the ISWT. In spite of classic representations from these studies graphically showing curvilinear VO2 responses to incremented walking speeds, linear regression techniques (also used by the American College of Sports Medicine [ACSM]) have been used to estimate VO2. The two main aims of this study were to (i) resolve currently reported discrepancies in the ISWT VO2-walking speed relationship, and (ii) derive an appropriate VO2 versus walking speed regression equation. VO2 was measured continuously during an ISWT in 32 coronary heart disease [cardiac] rehabilitation (CHD-CR) participants and 30 age-matched controls. Both CHD-CR and control group VO2 responses were curvilinear in nature. For CHD-CR, VO2 = 4.4e^(0.23 × walking speed [km/h]). The integrated area under the curve (iAUC) VO2 across nine ISWT stages was greater in the CHD-CR group versus the control group (p < 0.001): CHD-CR = 423 (±86) ml·kg⁻¹·min⁻¹·km·h⁻¹; control = 316 (±52) ml·kg⁻¹·min⁻¹·km·h⁻¹. CHD-CR group VO2 was up to 30% greater than control values at higher ISWT stages. The curvilinear nature of VO2 responses during the ISWT concurs with classic studies reported over 100 years ago. VO2 estimates for walking using linear regression models (including the ACSM's) clearly underestimate values in healthy and CHD-CR participants, and this study provides a resolution to this when the ISWT is used for CHD-CR populations.
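
    A hedged sketch of fitting the reported curvilinear relationship by nonlinear least squares rather than a straight line, reusing the published coefficients as starting values (the data frame is hypothetical):

        fit_exp <- nls(vo2 ~ a * exp(b * speed), data = iswt,
                       start = list(a = 4.4, b = 0.23))
        fit_lin <- lm(vo2 ~ speed, data = iswt)
        AIC(fit_exp, fit_lin)   # the exponential model should fit markedly better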

  20. Radiation-Induced Liver Injury in Three-Dimensional Conformal Radiation Therapy (3D-CRT) for Postoperative or Locoregional Recurrent Gastric Cancer: Risk Factors and Dose Limitations.

    PubMed

    Li, Guichao; Wang, Jiazhou; Hu, Weigang; Zhang, Zhen

    2015-01-01

    This study examined the status of radiation-induced liver injury in adjuvant or palliative gastric cancer radiation therapy (RT), identified risk factors for radiation-induced liver injury in gastric cancer RT, analysed the dose-volume effects of liver injury, and developed a liver dose limitation reference for gastric cancer RT. Data for 56 post-operative gastric cancer patients and 6 locoregional recurrent gastric cancer patients treated with three-dimensional conformal radiation therapy (3D-CRT) or intensity-modulated radiation therapy (IMRT) from Sep 2007 to Sep 2009 were analysed. Forty patients (65%) were administered concurrent chemotherapy. Pre- and post-radiation chemotherapy were given to 61 patients and 43 patients, respectively. The radiation dose was 45-50.4 Gy in 25-28 fractions. Clinical parameters, including gender, age, hepatitis B virus status, concurrent chemotherapy, and the total number of chemotherapy cycles, were included in the analysis. Univariate analyses using a non-parametric rank test (Mann-Whitney) and logistic regression, and a multivariate analysis using logistic regression, were completed. We also analysed the correlation between RT and the changes in serum chemistry parameters [including total bilirubin (TB), direct bilirubin (D-TB), alkaline phosphatase (ALP), alanine aminotransferase (ALT), aspartate aminotransferase (AST) and serum albumin (ALB)] after RT. The Child-Pugh grade progressed from grade A to grade B after radiotherapy in 10 patients. A total of 16 cases of classic radiation-induced liver disease (RILD) were observed, and 2 patients had both Child-Pugh grade progression and classic RILD. No cases of non-classic radiation liver injury occurred in the study population. Among the tested clinical parameters, the total number of chemotherapy cycles correlated with liver function injury. V35 and ALP levels were significant predictive factors for radiation-induced liver injury. In 3D-CRT for gastric cancer patients, radiation-induced liver injury may occur and affect the overall treatment plan. The total number of chemotherapy cycles correlated with liver function injury, and V35 and ALP are significant predictive factors for radiation-induced liver injury. Our dose limitation reference for liver protection is feasible.

  1. British isles lupus assessment group 2004 index is valid for assessment of disease activity in systemic lupus erythematosus

    PubMed Central

    Yee, Chee-Seng; Farewell, Vernon; Isenberg, David A; Rahman, Anisur; Teh, Lee-Suan; Griffiths, Bridget; Bruce, Ian N; Ahmad, Yasmeen; Prabu, Athiveeraramapandian; Akil, Mohammed; McHugh, Neil; D'Cruz, David; Khamashta, Munther A; Maddison, Peter; Gordon, Caroline

    2007-01-01

    Objective To determine the construct and criterion validity of the British Isles Lupus Assessment Group 2004 (BILAG-2004) index for assessing disease activity in systemic lupus erythematosus (SLE). Methods Patients with SLE were recruited into a multicenter cross-sectional study. Data on SLE disease activity (scores on the BILAG-2004 index, Classic BILAG index, and Systemic Lupus Erythematosus Disease Activity Index 2000 [SLEDAI-2K]), investigations, and therapy were collected. Overall BILAG-2004 and overall Classic BILAG scores were determined by the highest score achieved in any of the individual systems in the respective index. Erythrocyte sedimentation rates (ESRs), C3 levels, C4 levels, anti–double-stranded DNA (anti-dsDNA) levels, and SLEDAI-2K scores were used in the analysis of construct validity, and increase in therapy was used as the criterion for active disease in the analysis of criterion validity. Statistical analyses were performed using ordinal logistic regression for construct validity and logistic regression for criterion validity. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated. Results Of the 369 patients with SLE, 92.7% were women, 59.9% were white, 18.4% were Afro-Caribbean and 18.4% were South Asian. Their mean ± SD age was 41.6 ± 13.2 years and mean disease duration was 8.8 ± 7.7 years. More than 1 assessment was obtained on 88.6% of the patients, and a total of 1,510 assessments were obtained. Increasing overall scores on the BILAG-2004 index were associated with increasing ESRs, decreasing C3 levels, decreasing C4 levels, elevated anti-dsDNA levels, and increasing SLEDAI-2K scores (all P < 0.01). Increase in therapy was observed more frequently in patients with overall BILAG-2004 scores reflecting higher disease activity. Scores indicating active disease (overall BILAG-2004 scores of A and B) were significantly associated with increase in therapy (odds ratio [OR] 19.3, P < 0.01). The BILAG-2004 and Classic BILAG indices had comparable sensitivity, specificity, PPV, and NPV. Conclusion These findings show that the BILAG-2004 index has construct and criterion validity. PMID:18050213
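
    The ordinal logistic regression used for construct validity can be sketched with MASS::polr, treating the overall BILAG-2004 grade as an ordered factor (E = lowest activity, A = highest); the data frame and variable names are hypothetical:

        library(MASS)
        sle$grade <- factor(sle$grade, levels = c("E", "D", "C", "B", "A"), ordered = TRUE)
        fit <- polr(grade ~ esr + c3 + c4 + anti_dsdna + sledai2k,
                    data = sle, method = "logistic")
        exp(coef(fit))   # odds of a higher activity grade per unit change in each marker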

  2. Pembrolizumab versus the standard of care for relapsed and refractory classical Hodgkin's lymphoma progressing after brentuximab vedotin: an indirect treatment comparison.

    PubMed

    Keeping, Sam; Wu, Elise; Chan, Keith; Mojebi, Ali; Ferrante, Shannon Allen; Balakumaran, Arun

    2018-05-15

    There is significant unmet need among patients with relapsed and refractory classical Hodgkin's lymphoma (RRcHL) who have failed multiple lines of therapy, including brentuximab vedotin (BV). Pembrolizumab, an immune checkpoint inhibitor, is one possible treatment solution for this population. The objective of this study was to compare progression-free survival (PFS) with standard of care (SOC) versus pembrolizumab in previously BV treated RRcHL patients. A systematic literature review identified one observational study (Cheah et al., 2016) of SOC that was suitable for comparison with KEYNOTE-087, the principal trial of pembrolizumab in this population. Both naïve and population-adjusted (using outcomes regression) pairwise indirect comparisons were conducted. The primary analysis included all patients who had failed BV, with a secondary analysis conducted including only those known to have failed BV that was part of definitive treatment. In the primary analysis, SOC was inferior to pembrolizumab in both the unadjusted comparison (HR 5.00 [95% confidence interval (CI) 3.56-7.01]) and the adjusted comparison (HR 6.35 [95% CI 4.04-9.98]). These HRs increased to 5.16 (95% CI 3.61-7.38) and 6.56 (95% CI 4.01-10.72), respectively, in the secondary analysis. Pembrolizumab offers a significant improvement in PFS compared to SOC in this population.

  3. Exploring the socio-emotional factors associated with subjective well-being in the unemployed

    PubMed Central

    Extremera, Natalio; Nieto-Flores, M. Pilar

    2016-01-01

    In this study, we examined the relations of dimensions of Perceived Emotional Intelligence (PEI) and classic constructs, such as social support, with depression, stress, and subjective well-being indicators (life satisfaction and happiness). The study also sought to determine whether PEI dimensions accounted for a significant portion of the variance beyond that of classic constructs in the study of depression, stress, and well-being outcomes in a sample of 442 unemployed subjects. Results indicated that social support and all PEI dimensions were significantly and negatively related to depression and stress, and significantly and positively associated with life satisfaction and happiness. Additionally, regression analyses indicated that PEI, and specifically use of emotions and regulation of emotions, explained a significant amount of the variance in all outcomes after controlling for socio-demographics and social support dimensions. Finally, theoretical and practical implications of these constructs and their relation to psychological adjustment and well-being in unemployed people are discussed. PMID:27761319

  4. Support vector methods for survival analysis: a comparison between ranking and regression approaches.

    PubMed

    Van Belle, Vanya; Pelckmans, Kristiaan; Van Huffel, Sabine; Suykens, Johan A K

    2011-10-01

    To compare and evaluate ranking, regression and combined machine learning approaches for the analysis of survival data. The literature describes two approaches based on support vector machines to deal with censored observations. In the first approach the key idea is to rephrase the task as a ranking problem via the concordance index, a problem which can be solved efficiently in a context of structural risk minimization and convex optimization techniques. In the second approach, one uses a regression approach, dealing with censoring by means of inequality constraints. The goal of this paper is then twofold: (i) introducing a new model combining the ranking and regression strategies, which retains the link with existing survival models such as the proportional hazards model via transformation models; and (ii) comparing the three techniques on 6 clinical and 3 high-dimensional datasets and discussing the relevance of these techniques over classical approaches for survival data. We compare SVM-based survival models based on ranking constraints, on regression constraints, and on both ranking and regression constraints. The performance of the models is compared by means of three different measures: (i) the concordance index, measuring the model's discriminating ability; (ii) the logrank test statistic, indicating whether patients with a prognostic index lower than the median prognostic index have significantly different survival than patients with a prognostic index higher than the median; and (iii) the hazard ratio after normalization to restrict the prognostic index between 0 and 1. Our results indicate significantly better performance for models including regression constraints over models based only on ranking constraints. This work gives empirical evidence that SVM-based models using regression constraints perform significantly better than SVM-based models based on ranking constraints. Our experiments show comparable performance for methods including only regression constraints or both regression and ranking constraints on clinical data. On high-dimensional data, the former model performs better. However, this approach does not have a theoretical link with standard statistical models for survival data. This link can be made by means of transformation models when ranking constraints are included. Copyright © 2011 Elsevier B.V. All rights reserved.
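
    The first performance measure, the concordance index, has a direct implementation: among usable pairs (those where the earlier time is an observed event), count how often the prognostic index ranks the earlier failure as higher risk. A hedged base R sketch:

        cindex <- function(time, status, risk) {
          num <- den <- 0
          n <- length(time)
          for (i in seq_len(n)) for (j in seq_len(n)) {
            if (time[i] < time[j] && status[i] == 1) {  # comparable (usable) pair
              den <- den + 1
              if (risk[i] > risk[j]) num <- num + 1          # higher risk fails earlier
              else if (risk[i] == risk[j]) num <- num + 0.5  # ties count half
            }
          }
          num / den
        }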

  5. Geodesic least squares regression on information manifolds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verdoolaege, Geert, E-mail: geert.verdoolaege@ugent.be

    We present a novel regression method targeted at situations with significant uncertainty on both the dependent and independent variables or with non-Gaussian distribution models. Unlike the classic regression model, the conditional distribution of the response variable suggested by the data need not be the same as the modeled distribution. Instead, they are matched by minimizing the Rao geodesic distance between them. This yields a more flexible regression method that is less constrained by the assumptions imposed through the regression model. As an example, we demonstrate the improved resistance of our method against some flawed model assumptions, and we apply this to scaling laws in magnetic confinement fusion.

  6. Sonographically guided intrasheath percutaneous release of the first annular pulley for trigger digits, part 2: randomized comparative study of the economic impact of 3 surgical models.

    PubMed

    Rojo-Manaute, Jose Manuel; Capa-Grasa, Alberto; Del Cerro-Gutiérrez, Miguel; Martínez, Manuel Villanueva; Chana-Rodríguez, Francisco; Martín, Javier Vaquero

    2012-03-01

    Trigger digit surgery can be performed by an open approach using classic open surgery, by a wide-awake approach, or by sonographically guided first annular pulley release in day surgery and office-based ambulatory settings. Our goal was to perform a turnover and economic analysis of 3 surgical models. Two studies were conducted. The first was a turnover analysis of 57 patients allocated 4:4:1 into the surgical models: sonographically guided office-based, classic open day surgery, and wide-awake office-based. Regression analysis of the turnover time was monitored for assessing stability (R² < .26). Second, on the basis of turnover times and hospital tariff revenues, we calculated the total costs, income to cost ratio, opportunity cost, true cost, true net income (primary variable), break-even points for sonographically guided fixed costs, and a 1-way analysis for identifying thresholds among alternatives. Thirteen sonographically guided office-based patients were withdrawn because of a learning curve influence. The wide-awake (n = 6) and classic (n = 26) models were compared to the last 25% of the sonographically guided group (n = 12), which showed significantly shorter mean turnover times, income to cost ratios 2.52 and 10.9 times larger, and true costs 75.48 and 20.92 times lower, respectively. A true net income break-even point occurred after 19.78 sonographically guided office-based procedures. Sensitivity analysis showed a threshold between wide-awake and last-25% sonographically guided true costs if the last-25% sonographically guided turnover times reached 65.23 and 27.81 minutes, respectively. However, this trial comparing surgical models was underpowered and is inconclusive on turnover times; the sonographically guided office-based approach nevertheless showed shorter turnover times and better economic results, with a quick recoup of the costs of sonographically assisted surgery.

  7. Searching new signals for production traits through gene-based association analysis in three Italian cattle breeds.

    PubMed

    Capomaccio, Stefano; Milanesi, Marco; Bomba, Lorenzo; Cappelli, Katia; Nicolazzi, Ezequiel L; Williams, John L; Ajmone-Marsan, Paolo; Stefanon, Bruno

    2015-08-01

    Genome-wide association studies (GWAS) have been widely applied to disentangle the genetic basis of complex traits. In cattle breeds, classical GWAS approaches with medium-density marker panels are far from conclusive, especially for complex traits. This is due to the intrinsic limitations of GWAS and the assumptions that are made in stepping from association signals to functional variations. Here, we applied a gene-based strategy to prioritize genotype-phenotype associations found for milk production and quality traits with classical approaches in three Italian dairy cattle breeds with different sample sizes (Italian Brown n = 745; Italian Holstein n = 2058; Italian Simmental n = 477). Although classical regression on single markers revealed only a single genome-wide significant genotype-phenotype association (in Italian Holstein), the gene-based approach identified specific genes in each breed that are associated with milk physiology and mammary gland development. As no standard method has yet been established to step from variation to functional units (i.e., genes), the strategy proposed here may contribute to revealing new genes that play significant roles in complex traits, such as those investigated here, by amplifying low association signals through a gene-centric approach. © 2015 Stichting International Foundation for Animal Genetics.

  8. Regression Rates Following the Treatment of Aggressive Posterior Retinopathy of Prematurity with Bevacizumab Versus Laser: 8-Year Retrospective Analysis

    PubMed Central

    Nicoară, Simona D.; Ştefănuţ, Anne C.; Nascutzy, Constanta; Zaharie, Gabriela C.; Toader, Laura E.; Drugan, Tudor C.

    2016-01-01

    Background Retinopathy is a serious complication of prematurity and a leading cause of childhood blindness. The aggressive posterior form of retinopathy of prematurity (APROP) has a worse anatomical and functional outcome following laser therapy, as compared with the classic form of the disease. The main outcome measures are the APROP regression rate, structural outcomes, and complications associated with intravitreal bevacizumab (IVB) versus laser photocoagulation in APROP. Material/Methods This is a retrospective case series that includes infants with APROP who received either IVB or laser photocoagulation and had a follow-up of at least 60 weeks (for the laser photocoagulation group) or 80 weeks (for the IVB group). In the first group, laser photocoagulation of the retina was carried out, and in the second group, 1 bevacizumab injection was administered intravitreally. The following parameters were analyzed in each group: sex, gestational age, birth weight, postnatal age and postmenstrual age at treatment, APROP regression, sequelae, and complications. Statistical analysis was performed using Microsoft Excel and IBM SPSS (version 23.0). Results The laser photocoagulation group consisted of 6 premature infants (12 eyes) and the IVB group consisted of 17 premature infants (34 eyes). Within the laser photocoagulation group, the evolution was favorable in 9 eyes (75%) and unfavorable in 3 eyes (25%). Within the IVB group, APROP regressed in 29 eyes (85.29%) and failed to regress in 5 eyes (14.71%). These differences are statistically significant, as shown by the McNemar test (P<0.001). Conclusions The IVB group had a significantly better outcome than the laser photocoagulation group for APROP in our series. PMID:27062023

  9. Biodiversity patterns along ecological gradients: unifying β-diversity indices.

    PubMed

    Szava-Kovats, Robert C; Pärtel, Meelis

    2014-01-01

    Ecologists have developed an abundance of conceptions and mathematical expressions to define β-diversity, the link between local (α) and regional-scale (γ) richness, in order to characterize patterns of biodiversity along ecological (i.e., spatial and environmental) gradients. These patterns are often realized by regression of β-diversity indices against one or more ecological gradients. This practice, however, is subject to two shortcomings that can undermine the validity of the biodiversity patterns. First, many β-diversity indices are constrained to range between fixed lower and upper limits. As such, regression analysis of β-diversity indices against ecological gradients can result in regression curves that extend beyond these mathematical constraints, thus creating an interpretational dilemma. Second, despite being a function of the same measured α- and γ-diversity, the resultant biodiversity pattern depends on the choice of β-diversity index. We propose a simple logistic transformation that rids beta-diversity indices of their mathematical constraints, thus eliminating the possibility of an uninterpretable regression curve. Moreover, this transformation results in identical biodiversity patterns for three commonly used classical beta-diversity indices. As a result, this transformation eliminates the difficulties of both shortcomings, while allowing the researcher to use whichever beta-diversity index deemed most appropriate. We believe this method can help unify the study of biodiversity patterns along ecological gradients.
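
    A hedged sketch of the kind of transformation proposed (the paper's exact formulation may differ in detail): any index b constrained to [lo, hi] can be freed with a log-odds transform before regression against a gradient, so the fitted curve can never leave the index's admissible range once back-transformed:

        beta_free <- function(b, lo = 0, hi = 1) qlogis((b - lo) / (hi - lo))
        fit <- lm(beta_free(beta_obs) ~ elevation)  # regression on the unconstrained scale
        plogis(predict(fit))                        # back-transform; rescale for other ranges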

  10. Quantum machine learning: a classical perspective

    NASA Astrophysics Data System (ADS)

    Ciliberto, Carlo; Herbster, Mark; Ialongo, Alessandro Davide; Pontil, Massimiliano; Rocchetto, Andrea; Severini, Simone; Wossnig, Leonard

    2018-01-01

    Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning (ML) techniques to impressive results in regression, classification, data generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical ML algorithms. Here we review the literature in quantum ML and discuss perspectives for a mixed readership of classical ML and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in ML are identified as promising directions for the field. Practical questions, such as how to upload classical data into quantum form, will also be addressed.

  11. Observed ground-motion variabilities and implication for source properties

    NASA Astrophysics Data System (ADS)

    Cotton, F.; Bora, S. S.; Bindi, D.; Specht, S.; Drouet, S.; Derras, B.; Pina-Valdes, J.

    2016-12-01

    One of the key challenges of seismology is to be able to calibrate and analyse the physical factors that control earthquake and ground-motion variabilities. Within the framework of empirical ground-motion prediction equation (GMPE) development, ground-motion residuals (differences between recorded ground motions and the values predicted by a GMPE) are computed. The exponential growth of seismological near-field records and modern regression algorithms allow these residuals to be decomposed into between-event and within-event components. The between-event term quantifies all residual effects of the source (e.g. stress drops) that are not accounted for by the magnitude term, the only source parameter of the model. Between-event residuals provide a new and rather robust way to analyse the physical factors that control earthquake source properties and the associated variabilities. We will first show the correlation between classical stress drops and between-event residuals. We will also explain why between-event residuals may be a more robust way (compared to classical stress-drop analysis) to analyse earthquake source properties. We will then calibrate between-event variabilities using recent high-quality global accelerometric datasets (NGA-West 2, RESORCE) and datasets from recent earthquake sequences (Aquila, Iquique, Kumamoto). The obtained between-event variabilities will be used to evaluate the variability of earthquake stress drops, but also the variability of source properties that cannot be explained by classical Brune stress-drop variations. We will finally use the between-event residual analysis to discuss regional variations of source properties, differences between aftershocks and mainshocks, and potential magnitude dependencies of source characteristics.

  12. Portfolio Analysis for Vector Calculus

    ERIC Educational Resources Information Center

    Kaplan, Samuel R.

    2015-01-01

    Classic stock portfolio analysis provides an applied context for Lagrange multipliers that undergraduate students appreciate. Although modern methods of portfolio analysis are beyond the scope of vector calculus, classic methods reinforce the utility of this material. This paper discusses how to introduce classic stock portfolio analysis in a…

  13. Effect of the Modified Glasgow Coma Scale Score Criteria for Mild Traumatic Brain Injury on Mortality Prediction: Comparing Classic and Modified Glasgow Coma Scale Score Model Scores of 13

    PubMed Central

    Mena, Jorge Humberto; Sanchez, Alvaro Ignacio; Rubiano, Andres M.; Peitzman, Andrew B.; Sperry, Jason L.; Gutierrez, Maria Isabel; Puyana, Juan Carlos

    2011-01-01

    Objective The Glasgow Coma Scale (GCS) classifies traumatic brain injuries (TBI) as mild (14–15), moderate (9–13), or severe (3–8). The ATLS modified this classification so that a GCS score of 13 is categorized as mild TBI. We investigated the effect of this modification on mortality prediction, comparing patients with a GCS of 13 classified as moderate TBI (classic model) to patients with a GCS of 13 classified as mild TBI (modified model). Methods We selected adult TBI patients from the Pennsylvania Trauma Outcome Study database (PTOS). Logistic regressions adjusting for age, sex, cause, severity, trauma center level, comorbidities, and isolated TBI were performed. A second evaluation included the time trend of mortality. A third evaluation also included hypothermia, hypotension, mechanical ventilation, screening for drugs, and severity of TBI. Discrimination of the models was evaluated using the area under the receiver operating characteristic curve (AUC). Calibration was evaluated using the Hosmer-Lemeshow goodness-of-fit (GOF) test. Results In the first evaluation, the AUCs were 0.922 (95% CI, 0.917–0.926) and 0.908 (95% CI, 0.903–0.912) for the classic and modified models, respectively. Both models showed poor calibration (p<0.001). In the third evaluation, the AUCs were 0.946 (95% CI, 0.943–0.949) and 0.938 (95% CI, 0.934–0.940) for the classic and modified models, respectively, with improvements in calibration (p=0.30 and p=0.02 for the classic and modified models, respectively). Conclusion The lack of overlap between the ROC curves of the two models reveals a statistically significant difference in their ability to predict mortality. The classic model demonstrated better GOF than the modified model. A GCS of 13 classified as moderate TBI in a multivariate logistic regression model performed better than a GCS of 13 classified as mild. PMID:22071923
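
    A hedged sketch of the discrimination comparison with the pROC package, assuming fitted probabilities from the two logistic models (objects and variable names hypothetical):

        library(pROC)
        roc_classic  <- roc(tbi$died, predict(fit_classic,  type = "response"))
        roc_modified <- roc(tbi$died, predict(fit_modified, type = "response"))
        auc(roc_classic); auc(roc_modified)
        roc.test(roc_classic, roc_modified)   # DeLong test for paired ROC curves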

  14. Penalized nonparametric scalar-on-function regression via principal coordinates

    PubMed Central

    Reiss, Philip T.; Miller, David L.; Wu, Pei-Shien; Hua, Wen-Yu

    2016-01-01

    A number of classical approaches to nonparametric regression have recently been extended to the case of functional predictors. This paper introduces a new method of this type, which extends intermediate-rank penalized smoothing to scalar-on-function regression. In the proposed method, which we call principal coordinate ridge regression, one regresses the response on leading principal coordinates defined by a relevant distance among the functional predictors, while applying a ridge penalty. Our publicly available implementation, based on generalized additive modeling software, allows for fast optimal tuning parameter selection and for extensions to multiple functional predictors, exponential family-valued responses, and mixed-effects models. In an application to signature verification data, principal coordinate ridge regression, with dynamic time warping distance used to define the principal coordinates, is shown to outperform a functional generalized linear model. PMID:29217963
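
    A rough approximation of the idea using base R and MASS (the published implementation builds on generalized additive modelling software and tunes the penalty optimally; all names here are illustrative):

        library(MASS)
        D   <- dist(X)              # stand-in for, e.g., a dynamic time warping distance
        pc  <- cmdscale(D, k = 10)  # leading principal coordinates of the predictors
        fit <- lm.ridge(y ~ pc, lambda = seq(0, 20, by = 0.5))  # ridge-penalized regression
        select(fit)                 # suggested tuning parameter (HKB, L-W, smallest GCV)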

  15. Accuracy of specific BIVA for the assessment of body composition in the United States population.

    PubMed

    Buffa, Roberto; Saragat, Bruno; Cabras, Stefano; Rinaldi, Andrea C; Marini, Elisabetta

    2013-01-01

    Bioelectrical impedance vector analysis (BIVA) is a technique for the assessment of hydration and nutritional status, used in clinical practice. Specific BIVA is an analytical variant, recently proposed for the Italian elderly population, that adjusts bioelectrical values for body geometry. The aim was to evaluate the accuracy of specific BIVA in the adult U.S. population, compared with the 'classic' BIVA procedure, using DXA as the reference technique, in order to obtain an interpretative model of body composition. A cross-sectional sample of 1590 adult individuals (836 men and 754 women, 21-49 years old) derived from the NHANES 2003-2004 was considered. Classic and specific BIVA were applied. The sensitivity and specificity in recognizing individuals below the 5th and above the 95th percentiles of percent fat (FMDXA%) and the extracellular/intracellular water (ECW/ICW) ratio were evaluated by receiver operating characteristic (ROC) curves. Classic and specific BIVA results were compared by a probit multiple regression. Specific BIVA was significantly more accurate than classic BIVA in evaluating FMDXA% (ROC areas: 0.84-0.92 and 0.49-0.61, respectively; p = 0.002). The evaluation of ECW/ICW was accurate (ROC areas between 0.83 and 0.96) and similarly performed by the two procedures (p = 0.829). The accuracy of specific BIVA was similar in the two sexes (p = 0.144) and for FMDXA% and ECW/ICW (p = 0.869). Specific BIVA proved to be an accurate technique. The tolerance ellipses of specific BIVA can be used for evaluating FM% and ECW/ICW in the U.S. adult population.

  16. Analysis of spreadable cheese by Raman spectroscopy and chemometric tools.

    PubMed

    Oliveira, Kamila de Sá; Callegaro, Layce de Souza; Stephani, Rodrigo; Almeida, Mariana Ramos; de Oliveira, Luiz Fernando Cappa

    2016-03-01

    In this work, FT-Raman spectroscopy was explored to evaluate spreadable cheese samples. A partial least squares discriminant analysis was employed to identify the spreadable cheese samples containing starch. To build the models, two types of samples were used: commercial samples and samples manufactured in local industries. The method of supervised classification PLS-DA was employed to classify the samples as adulterated or without starch. Multivariate regression was performed using the partial least squares method to quantify the starch in the spreadable cheese. The limit of detection obtained for the model was 0.34% (w/w) and the limit of quantification was 1.14% (w/w). The reliability of the models was evaluated by determining the confidence interval, which was calculated using the bootstrap re-sampling technique. The results show that the classification models can be used to complement classical analysis and as screening methods. Copyright © 2015 Elsevier Ltd. All rights reserved.
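
    The quantification step can be sketched with the pls package, assuming the Raman spectra are stored as a matrix column of the data frame (all names hypothetical):

        library(pls)
        fit <- plsr(starch ~ spectra, ncomp = 10, data = cheese, validation = "CV")
        RMSEP(fit)                                   # choose ncomp by cross-validation
        predict(fit, newdata = unknowns, ncomp = 5)  # predicted starch (% w/w)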

  17. TWSVR: Regression via Twin Support Vector Machine.

    PubMed

    Khemchandani, Reshma; Goyal, Keshav; Chandra, Suresh

    2016-02-01

    Taking motivation from the Twin Support Vector Machine (TWSVM) formulation, Peng (2010) attempted to propose Twin Support Vector Regression (TSVR), where the regressor is obtained by solving a pair of quadratic programming problems (QPPs). In this paper we argue that the TSVR formulation is not in the true spirit of TWSVM. Further, taking motivation from Bi and Bennett (2003), we propose an alternative formulation for Twin Support Vector Regression (TWSVR) which is in the true spirit of TWSVM. We show that our proposed TWSVR can be derived from TWSVM for an appropriately constructed classification problem. To check the efficacy of our proposed TWSVR, we compare its performance with TSVR and classical Support Vector Regression (SVR) on various regression datasets. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Semi-parametric regression model for survival data: graphical visualization with R

    PubMed Central

    2016-01-01

    The Cox proportional hazards model is a semi-parametric model that leaves its baseline hazard function unspecified. The rationale for using the Cox proportional hazards model is that (I) specifying a parametric form for the underlying hazard function is stringent and often unrealistic, and (II) researchers are typically interested only in how the hazard changes with covariates (the relative hazard). The Cox regression model can be easily fit with the coxph() function in the survival package. A stratified Cox model may be used for a covariate that violates the proportional hazards assumption. The relative importance of covariates in the population can be examined with the rankhazard package in R. Hazard ratio curves for continuous covariates can be visualized using the smoothHR package. Such a curve helps to better understand the effect that each continuous covariate has on the outcome. The population attributable fraction is a classic quantity in epidemiology for evaluating the impact of a risk factor on the occurrence of an event in the population. In survival analysis, the adjusted/unadjusted attributable fraction can be plotted against survival time to obtain the attributable fraction function. PMID:28090517
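
    A minimal sketch with the survival package's built-in lung data, including the stratified variant mentioned above:

        library(survival)
        fit <- coxph(Surv(time, status) ~ age + ph.ecog, data = lung)
        summary(fit)   # hazard ratios with confidence intervals
        cox.zph(fit)   # check the proportional hazards assumption
        # Stratified Cox model: a separate baseline hazard for each sex
        fit_s <- coxph(Surv(time, status) ~ age + strata(sex), data = lung)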

  2. Considerations for analysis of time-to-event outcomes measured with error: Bias and correction with SIMEX.

    PubMed

    Oh, Eric J; Shepherd, Bryan E; Lumley, Thomas; Shaw, Pamela A

    2018-04-15

    For time-to-event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact or addressing errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression-free survival or time to AIDS progression) can be difficult to assess or reliant on self-report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log-linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic. Copyright © 2017 John Wiley & Sons, Ltd.
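
    A hedged sketch of the SIMEX idea for error in log event times, illustrating the principle rather than the authors' exact estimator: re-add increasing multiples of the (assumed known) error variance, track the naive log hazard ratio, and extrapolate back to zero total error:

        library(survival)
        simex_loghr <- function(logt, status, x, sigma2,
                                lambdas = c(0, 0.5, 1, 1.5, 2), B = 200) {
          est <- sapply(lambdas, function(lam) {
            mean(replicate(B, {  # average over B noise replications per lambda
              t_star <- exp(logt + rnorm(length(logt), 0, sqrt(lam * sigma2)))
              coef(coxph(Surv(t_star, status) ~ x))
            }))
          })
          quad <- lm(est ~ lambdas + I(lambdas^2))           # quadratic extrapolant
          predict(quad, newdata = data.frame(lambdas = -1))  # total error variance -> 0
        }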

  3. Prognostic scores in oesophageal or gastric variceal bleeding.

    PubMed

    Ohmann, C; Stöltzing, H; Wins, L; Busch, E; Thon, K

    1990-05-01

    Numerous scoring systems have been developed for the prediction of outcome of variceal bleeding; however, only a few have been evaluated adequately. The object of this study was to improve the classical Child-Pugh score (CPS) and to test other scores from the literature. Patients (n = 82) with endoscopically confirmed variceal bleeding and long-term sclerotherapy were included in the study. Linear logistic regression (LR) was applied to different sets of prognostic variables with regard to 30-day mortality. In addition, scores from the literature were evaluated on the data set. Performance was measured by the accuracy and receiver-operating characteristic curves. The application of LR to all five CPS variables (accuracy, 80%) was superior to the classical CPS (70%). LR with selection from the CPS variables or from other sets of variables resulted in no improvement. Compared with CPS only three scores from the literature, mainly based on subsets of the CPS variables, showed an improved accuracy. It is concluded that CPS is still a good scoring system; however, it can be improved by statistical analysis using the same variables.

  4. The Use of Alternative Regression Methods in Social Sciences and the Comparison of Least Squares and M Estimation Methods in Terms of the Determination of Coefficient

    ERIC Educational Resources Information Center

    Coskuntuncel, Orkun

    2013-01-01

    The purpose of this study is two-fold; the first aim being to show the effect of outliers on the widely used least squares regression estimator in social sciences. The second aim is to compare the classical method of least squares with the robust M-estimator using the "determination of coefficient" (R²). For this purpose,…
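
    A hedged sketch of the contrast on synthetic data with a single gross outlier, using Huber's M-estimator from MASS:

        library(MASS)
        set.seed(1)
        x <- 1:20
        y <- 2 * x + rnorm(20)
        y[20] <- 100                    # one gross outlier
        coef(lm(y ~ x))                 # least squares slope is pulled away from 2
        coef(rlm(y ~ x, method = "M"))  # the Huber M-estimate stays close to 2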

  5. Model selection for logistic regression models

    NASA Astrophysics Data System (ADS)

    Duller, Christine

    2012-09-01

    Model selection for logistic regression models decides which of some given potential regressors have an effect and hence should be included in the final model. The second interesting question is whether a certain factor is heterogeneous among some subsets, i.e. whether the model should include a random intercept or not. In this paper these questions are answered with classical as well as Bayesian methods. The applications show some results of recent research projects in medicine and business administration.

  6. Predicting seasonal influenza transmission using functional regression models with temporal dependence.

    PubMed

    Oviedo de la Fuente, Manuel; Febrero-Bande, Manuel; Muñoz, María Pilar; Domínguez, Àngela

    2018-01-01

    This paper proposes a novel approach that uses meteorological information to predict the incidence of influenza in Galicia (Spain). It extends Generalized Least Squares (GLS) methods in the multivariate framework to functional regression models with dependent errors. These kinds of models are useful when the recent history of influenza incidence is not readily available (for instance, owing to delays in communication with health informants) and the prediction must be constructed by correcting for the temporal dependence of the residuals and using more accessible variables. A simulation study shows that the GLS estimators render better estimates of the parameters associated with the regression model than the classical models do. They obtain extremely good results from the predictive point of view and are competitive with the classical time series approach for the incidence of influenza. An iterative version of the GLS estimator (called iGLS) is also proposed that can help to model complicated dependence structures. For constructing the model, the distance correlation measure was employed to select relevant information for predicting the influenza rate, mixing multivariate and functional variables. These kinds of models are extremely useful to health managers in allocating resources in advance to manage influenza epidemics.
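
    The GLS idea can be sketched with nlme by comparing independent errors against an AR(1) error structure (the paper works with functional covariates; the scalar covariate and all names here are hypothetical):

        library(nlme)
        fit_ind <- gls(flu_rate ~ temperature, data = galicia)
        fit_ar1 <- gls(flu_rate ~ temperature, data = galicia,
                       correlation = corAR1(form = ~ week))
        anova(fit_ind, fit_ar1)   # does modelling the temporal dependence help?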

  7. Application of principal component regression and artificial neural network in FT-NIR soluble solids content determination of intact pear fruit

    NASA Astrophysics Data System (ADS)

    Ying, Yibin; Liu, Yande; Fu, Xiaping; Lu, Huishan

    2005-11-01

    Artificial neural networks (ANNs) have been used successfully in applications such as pattern recognition, image processing, automation and control. However, the majority of today's ANN applications use the back-propagation feed-forward ANN (BP-ANN). In this paper, back-propagation artificial neural networks (BP-ANN) were applied to model the soluble solids content (SSC) of intact pears from their Fourier transform near infrared (FT-NIR) spectra. One hundred and sixty-four pear samples were used to build the calibration models and evaluate the models' predictive ability. The results are compared to classical calibration approaches, i.e. principal component regression (PCR), partial least squares (PLS) and non-linear PLS (NPLS). The effects of the optimal choice of training parameters on the prediction model were also investigated. BP-ANN combined with principal component regression (PCR) always performed better than the classical PCR, PLS and Weight-PLS methods from the point of view of predictive ability. Based on the results, it can be concluded that FT-NIR spectroscopy and BP-ANN models can be properly employed for rapid and nondestructive determination of fruit internal quality.
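
    A hedged sketch of the PCR/BP-ANN combination: compress the spectra with PCA, then feed the leading scores to a small single-hidden-layer back-propagation network (all names hypothetical):

        library(nnet)
        pca <- prcomp(spectra, center = TRUE, scale. = TRUE)
        z   <- pca$x[, 1:8]   # leading PC scores as network inputs
        net <- nnet(z, ssc, size = 5, linout = TRUE, decay = 0.01, maxit = 500)
        pred <- predict(net, predict(pca, newdata = new_spectra)[, 1:8])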

  8. Factors associated with parasite dominance in fishes from Brazil.

    PubMed

    Amarante, Cristina Fernandes do; Tassinari, Wagner de Souza; Luque, Jose Luis; Pereira, Maria Julia Salim

    2016-06-14

    The present study used regression models to evaluate the existence of factors that may influence numerical parasite dominance, with an epidemiological approach. A database including 3,746 fish specimens and their respective parasites was used to evaluate the relationship between parasite dominance and biotic characteristics inherent to the studied hosts and parasite taxa. Multivariate classical and mixed-effects linear regression models were fitted. The calculations were performed using R software (95% CI). In the fitting of the classical multiple linear regression model, freshwater and planktivorous fish species and body length, as well as the species of the taxa Trematoda, Monogenea, and Hirudinea, were associated with parasite dominance. However, the fitting of the mixed-effects model showed that the body length of the host and the species of the taxa Nematoda, Trematoda, Monogenea, Hirudinea, and Crustacea were significantly associated with parasite dominance. Studies that consider specific biological aspects of the hosts and parasites should expand the knowledge regarding factors that influence the numerical dominance of fish parasites in Brazil. The use of a mixed model shows, once again, the importance of choosing a model suited to the characteristics of the data in order to obtain consistent results.

  9. Analysis of the impact of immigration on labour market using spatial models

    NASA Astrophysics Data System (ADS)

    Polonyankina, Tatiana

    2017-07-01

    This paper investigates the impact of immigration on employment and unemployment in a host country. The question to answer is: how does employment/unemployment in the host country change after an increase in the number of immigrants? The analysis takes into account only legal immigrants during a recession period. The model combines classical regression of cross-sectional data with spatial econometric models in which cross-sectional dependencies are captured by a spatial matrix. The intention is to use spatial models to analyse the sensitivity of the employment/unemployment rate to changes in the share of immigrants in a region. The panel data used are based on the Labour Force Survey and on available macro data from Eurostat for 3 European countries (Germany, Austria and the Czech Republic), grouped into cells by NUTS regions over a recession period.

  10. The minimal residual QR-factorization algorithm for reliably solving subset regression problems

    NASA Technical Reports Server (NTRS)

    Verhaegen, M. H.

    1987-01-01

    A new algorithm for solving subset regression problems is described, called the minimal residual QR factorization algorithm (MRQR). This scheme performs a QR factorization with a new column pivoting strategy. Basically, this strategy is based on the change in the residual of the least squares problem. Furthermore, it is demonstrated that this basic scheme can be extended in a numerically efficient way to combine the advantages of existing numerical procedures, such as the singular value decomposition, with those of more classical statistical procedures, such as stepwise regression. This extension is presented as an advisory expert system that guides the user in solving the subset regression problem. The advantages of the new procedure are highlighted by a numerical example.
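
    For orientation, base R already exposes a column-pivoted Householder QR; MRQR differs in pivoting on the change in the least squares residual rather than on column norms. A hedged sketch:

        qrX <- qr(X, LAPACK = TRUE)  # LAPACK column-pivoted QR factorization
        qrX$pivot                    # column order chosen by norm-based pivoting
        qr.coef(qrX, y)              # least squares coefficients from the factorization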

  11. Estimating and testing interactions when explanatory variables are subject to non-classical measurement error.

    PubMed

    Murad, Havi; Kipnis, Victor; Freedman, Laurence S

    2016-10-01

    Assessing interactions in linear regression models when covariates have measurement error (ME) is complex. We previously described regression calibration (RC) methods that yield consistent estimators and standard errors for interaction coefficients of normally distributed covariates having classical ME. Here we extend normal based RC (NBRC) and linear RC (LRC) methods to a non-classical ME model, and describe more efficient versions that combine estimates from the main study and internal sub-study. We apply these methods to data from the Observing Protein and Energy Nutrition (OPEN) study. Using simulations we show that (i) for normally distributed covariates efficient NBRC and LRC were nearly unbiased and performed well with sub-study size ≥200; (ii) efficient NBRC had lower MSE than efficient LRC; (iii) the naïve test for a single interaction had type I error probability close to the nominal significance level, whereas efficient NBRC and LRC were slightly anti-conservative but more powerful; (iv) for markedly non-normal covariates, efficient LRC yielded less biased estimators with smaller variance than efficient NBRC. Our simulations suggest that it is preferable to use: (i) efficient NBRC for estimating and testing interaction effects of normally distributed covariates and (ii) efficient LRC for estimating and testing interactions for markedly non-normal covariates. © The Author(s) 2013.

  12. Burnout does not help predict depression among French school teachers.

    PubMed

    Bianchi, Renzo; Schonfeld, Irvin Sam; Laurent, Eric

    2015-11-01

    Burnout has been viewed as a phase in the development of depression. However, supportive research is scarce. We examined whether burnout predicted depression among French school teachers. We conducted a 2-wave, 21-month study involving 627 teachers (73% female) working in French primary and secondary schools. Burnout was assessed with the Maslach Burnout Inventory and depression with the 9-item depression module of the Patient Health Questionnaire (PHQ-9). The PHQ-9 grades depressive symptom severity and provides a provisional diagnosis of major depression. Depression was treated both as a continuous and categorical variable using linear and logistic regression analyses. We controlled for gender, age, and length of employment. Controlling for baseline depressive symptoms, linear regression analysis showed that burnout symptoms at time 1 (T1) did not predict depressive symptoms at time 2 (T2). Baseline depressive symptoms accounted for about 88% of the association between T1 burnout and T2 depressive symptoms. Only baseline depressive symptoms predicted depressive symptoms at follow-up. Similarly, logistic regression analysis revealed that burnout symptoms at T1 did not predict incident cases of major depression at T2 when depressive symptoms at T1 were included in the predictive model. Only baseline depressive symptoms predicted cases of major depression at follow-up. This study does not support the view that burnout is a phase in the development of depression. Assessing burnout symptoms in addition to "classical" depressive symptoms may not always improve our ability to predict future depression.

  13. Outcome predictors in the management of intramedullary classic ependymoma: An integrative survival analysis.

    PubMed

    Wang, Yinqing; Cai, Ranze; Wang, Rui; Wang, Chunhua; Chen, Chunmei

    2018-06-01

    This is a retrospective study. The aim of this study was to illustrate the survival outcomes of patients with classic ependymoma (CE) and identify potential prognostic factors. CE is the most common category of spinal ependymoma, but few published studies have discussed predictors of survival outcome. A Boolean search of the PubMed, Embase, and OVID databases was conducted by 2 investigators independently. The subjects were intramedullary grade II ependymomas according to the 2007 WHO classification. Univariate Kaplan-Meier analysis and log-rank tests were performed to identify variables associated with progression-free survival (PFS) or overall survival (OS). Multivariate Cox regression was performed to assess hazard ratios (HRs) with 95% confidence intervals (95% CIs). Statistical analysis was performed with SPSS version 23.0 (IBM Corp.), with statistical significance defined as P < .05. A total of 35 studies were identified, including 169 cases of CE. The mean follow-up time across cases was 64.2 ± 51.5 months. Univariate analysis showed that patients who had undergone total resection (TR) had better PFS and OS than those with subtotal resection (STR) or biopsy (P = .002 and P = .004, respectively). In univariate and multivariate analyses (P = .000 and P = .07, respectively), histological type was an independent prognostic factor for PFS of CE [papillary type: HR 0.002, 95% CI (0.000-0.073), P = .001; tanycytic type: HR 0.010, 95% CI (0.000-0.218), P = .003]. This was the first integrative analysis of CE to elucidate the correlation between various factors and prognostic outcomes. Definitive histological typing and safe TR are the foundation of CE management. Level of evidence: 4.

  14. Bounded-Influence Inference in Regression.

    DTIC Science & Technology

    1984-02-01

    ...be viewed as a generalization of the classical F-test. By means of the influence function, their robustness properties are investigated and optimally...robust tests that maximize the asymptotic power within each class, under the side condition of a bounded influence function, are constructed. Finally, an...

  15. Analysis of Classical Time-Trial Performance and Technique-Specific Physiological Determinants in Elite Female Cross-Country Skiers.

    PubMed

    Sandbakk, Øyvind; Losnegard, Thomas; Skattebo, Øyvind; Hegge, Ann M; Tønnessen, Espen; Kocbach, Jan

    2016-01-01

    The present study investigated the contribution of performance on uphill, flat, and downhill sections to overall performance in an international 10-km classical time-trial in elite female cross-country skiers, as well as the relationships between performance on snow and laboratory-measured physiological variables in the double poling (DP) and diagonal (DIA) techniques. Ten elite female cross-country skiers were continuously measured by a global positioning system device during an international 10-km cross-country skiing time-trial in the classical technique. One month prior to the race, all skiers performed a 5-min submaximal and a 3-min self-paced performance test while roller skiing on a treadmill, both in the DP and DIA techniques. The time spent on uphill (r = 0.98) and flat (r = 0.91) sections of the race correlated most strongly with the overall 10-km performance (both p < 0.05). Approximately 56% of the racing time was spent uphill, and stepwise multiple regression revealed that uphill time explained 95.5% of the variance in overall performance (p < 0.001). Distance covered during the 3-min roller-skiing test and body-mass-normalized peak oxygen uptake (VO2peak) in both techniques showed the strongest correlations with overall time-trial performance (r = 0.66-0.78), with DP capacity tending to have the greatest impact on flat terrain and DIA capacity on uphill terrain (all p < 0.05). Our present findings reveal that the time spent uphill most strongly determines classical time-trial performance, and that the major portion of the performance differences among elite female cross-country skiers can be explained by variations in technique-specific aerobic power.

  16. Classical and Bayesian Seismic Yield Estimation: The 1998 Indian and Pakistani Tests

    NASA Astrophysics Data System (ADS)

    Shumway, R. H.

    2001-10-01

    The nuclear tests in May, 1998, in India and Pakistan have stimulated a renewed interest in yield estimation, based on limited data from uncalibrated test sites. We study here the problem of estimating yields using classical and Bayesian methods developed by Shumway (1992), utilizing calibration data from the Semipalatinsk test site and measured magnitudes for the 1998 Indian and Pakistani tests given by Murphy (1998). Calibration is done using multivariate classical or Bayesian linear regression, depending on the availability of measured magnitude-yield data and prior information. Confidence intervals for the classical approach are derived applying an extension of Fieller's method suggested by Brown (1982). In the case where prior information is available, the posterior predictive magnitude densities are inverted to give posterior intervals for yield. Intervals obtained using the joint distribution of magnitudes are comparable to the single-magnitude estimates produced by Murphy (1998) and reinforce the conclusion that the announced yields of the Indian and Pakistani tests were too high.

  17. Reversed inverse regression for the univariate linear calibration and its statistical properties derived using a new methodology

    NASA Astrophysics Data System (ADS)

    Kang, Pilsang; Koo, Changhoi; Roh, Hokyu

    2017-11-01

    Since simple linear regression theory was established at the beginning of the 1900s, it has been used in a variety of fields. Unfortunately, it cannot be used directly for calibration. In practical calibrations, the observed measurements (the inputs) are subject to errors, and hence they vary, thus violating the assumption that the inputs are fixed. Therefore, in the case of calibration, the regression line fitted using the method of least squares is not consistent with the statistical properties of simple linear regression as already established based on this assumption. To resolve this problem, "classical regression" and "inverse regression" have been proposed. However, they do not completely resolve the problem. As a fundamental solution, we introduce "reversed inverse regression" along with a new methodology for deriving its statistical properties. In this study, the statistical properties of this regression are derived using the "error propagation rule" and the "method of simultaneous error equations" and are compared with those of the existing regression approaches. The accuracy of the statistical properties thus derived is investigated in a simulation study. We conclude that the newly proposed regression and methodology constitute the complete regression approach for univariate linear calibrations.
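
    For context, a minimal sketch of the two standard approaches the paper contrasts, on synthetic data (all values illustrative): classical calibration inverts the fit of the response on the standard, while inverse calibration regresses the standard on the response directly.

      import numpy as np

      rng = np.random.default_rng(1)
      x = np.linspace(0, 10, 20)                        # known standard values
      y = 2.0 + 0.5 * x + rng.normal(0, 0.05, x.size)   # measured responses

      # Classical calibration: regress y on x, then invert the fitted line.
      b, a = np.polyfit(x, y, 1)                        # slope, intercept
      y0 = 4.1                                          # a new measured response
      x_classical = (y0 - a) / b

      # Inverse calibration: regress x on y and predict directly.
      d, c = np.polyfit(y, x, 1)
      x_inverse = c + d * y0
      print(x_classical, x_inverse)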

  18. Analytic Methods for Adjusting Subjective Rating Schemes.

    ERIC Educational Resources Information Center

    Cooper, Richard V. L.; Nelson, Gary R.

    Statistical and econometric techniques of correcting for supervisor bias in models of individual performance appraisal were developed, using a variant of the classical linear regression model. Location bias occurs when individual performance is systematically overestimated or underestimated, while scale bias results when raters either exaggerate…

  1. Bayesian adjustment for measurement error in continuous exposures in an individually matched case-control study.

    PubMed

    Espino-Hernandez, Gabriela; Gustafson, Paul; Burstyn, Igor

    2011-05-14

    In epidemiological studies, explanatory variables are frequently subject to measurement error. The aim of this paper is to develop a Bayesian method to correct for measurement error in multiple continuous exposures in individually matched case-control studies. This is a topic that has not been widely investigated. The new method is illustrated using data from an individually matched case-control study of the association between thyroid hormone levels during pregnancy and exposure to perfluorinated acids. The objective of the motivating study was to examine the risk of maternal hypothyroxinemia due to exposure to three perfluorinated acids measured on a continuous scale. Results from the proposed method are compared with those obtained from a naive analysis. Using a Bayesian approach, the developed method considers a classical measurement error model for the exposures, as well as the conditional logistic regression likelihood as the disease model, together with a random-effect exposure model. Proper and diffuse prior distributions are assigned, and results from a quality control experiment are used to estimate the measurement error variability of the perfluorinated acids. As a result, posterior distributions and 95% credible intervals of the odds ratios are computed. A sensitivity analysis of the method's performance under different assumed levels of measurement error variability was performed for this application. The proposed Bayesian method to correct for measurement error is feasible and can be implemented using statistical software. For the study on perfluorinated acids, a comparison of the inferences which are corrected for measurement error to those which ignore it indicates that little adjustment is manifested for the level of measurement error actually exhibited in the exposures. Nevertheless, a sensitivity analysis shows that more substantial adjustments arise if larger measurement errors are assumed. In individually matched case-control studies, the use of the conditional logistic regression likelihood as a disease model in the presence of measurement error in multiple continuous exposures can be justified by having a random-effect exposure model. The proposed method can be successfully implemented in WinBUGS to correct individually matched case-control studies for several mismeasured continuous exposures under a classical measurement error model.

  2. Construction of the Second Quito Astrolabe Catalogue

    NASA Astrophysics Data System (ADS)

    Kolesnik, Y. B.

    1994-03-01

    A method for astrolabe catalogue construction is presented. It is based on classical concepts, but the model of conditional equations for the group reduction is modified, with additional parameters introduced in the stepwise regressions. The chain adjustment is neglected, and the advantages of this approach are discussed. The method has been applied to the data obtained with the astrolabe of the Quito Astronomical Observatory from 1964 to 1983. Various characteristics of the catalogue produced with this method are compared with those obtained with the rigorous classical method. Some improvement in both systematic and random errors is outlined.

  3. Tectonically controlled sedimentation: impact on sediment supply and basin evolution of the Kashafrud Formation (Middle Jurassic, Kopeh-Dagh Basin, northeast Iran)

    NASA Astrophysics Data System (ADS)

    Sardar Abadi, Mehrdad; Da Silva, Anne-Christine; Amini, Abdolhossein; Aliabadi, Ali Akbar; Boulvain, Frédéric; Sardar Abadi, Mohammad Hossein

    2014-11-01

    The Kashafrud Formation was deposited in the extensional Kopeh-Dagh Basin during the Late Bajocian to Bathonian (Middle Jurassic) and is potentially the most important siliciclastic unit of NE Iran for petroleum geology. This extensional setting allowed the accumulation of about 1,700 m of siliciclastic sediments during a limited period of time (Upper Bajocian-Bathonian). Here, we present a detailed facies analysis combined with magnetic susceptibility (MS) results, focusing on the exceptional record of the Pol-e-Gazi section in the southeastern part of the basin. MS is classically interpreted as related to the amount of detrital input, which is in turn influenced by sea-level changes, climate changes, and tectonic activity. Facies analysis reveals that the studied rocks were deposited in shallow marine, slope to pro-delta settings. A major transgressive-regressive cycle is recorded in this formation, including fluvial-dominated delta to turbiditic pro-delta settings (transgressive phase), followed by siliciclastic to mixed siliciclastic and carbonate shoreface rocks (regressive phase). During the transgressive phase, hyperpycnal currents fed the basin. These currents are interpreted as related to important tectonic variations, in relation to significant uplift of the hinterland during opening of the basin. This tectonic activity was responsible for stronger erosion, providing a higher amount of siliciclastic input into the basin and leading to a high MS signal. During the regressive phase, tectonic activity strongly decreased. Furthermore, the depositional setting changed to a wave- to tide-dominated, mixed carbonate-siliciclastic setting. In the absence of strong tectonic variations, bulk MS was controlled by other factors such as sea-level and climatic changes. Fluctuations in carbonate production, possibly related to sea-level variations, influenced the MS of the siliciclastic/carbonate cycles. Carbonate intervals are characterized by a strong decrease in MS values, indicating a gradual reduction of detrital influx. Therefore, the intensity of tectonic movement is thought to be the dominant factor controlling sediment supply, changes in accommodation space, and modes of deposition throughout the Middle Jurassic sedimentary succession in the Pol-e-Gazi section, and possibly in the Kopeh-Dagh Basin in general.

  4. [Risk factor analysis of the patients with solitary pulmonary nodules and establishment of a prediction model for the probability of malignancy].

    PubMed

    Wang, X; Xu, Y H; Du, Z Y; Qian, Y J; Xu, Z H; Chen, R; Shi, M H

    2018-02-23

    Objective: This study aims to analyze the relationships among the clinical features, radiologic characteristics, and pathological diagnoses of patients with solitary pulmonary nodules (SPNs), and to establish a prediction model for the probability of malignancy. Methods: Clinical data of 372 patients with solitary pulmonary nodules who underwent surgical resection with a definite postoperative pathological diagnosis were retrospectively analyzed. For these cases, we collected clinical and radiologic features including gender, age, smoking history, history of tumor, family history of cancer, location of the lesion, ground-glass opacity, maximum diameter, calcification, vessel convergence sign, vacuole sign, pleural indentation, spiculation, and lobulation. The cases were divided into a modeling group (268 cases) and a validation group (104 cases). A new prediction model was established by logistic regression analysis of the data from the modeling group. The data of the validation group were then used to validate the efficiency of the new model, which was compared with three classical models (Mayo model, VA model, and LiYun model). With the probability values calculated from each model for the validation group, SPSS 22.0 was used to draw receiver operating characteristic curves to assess the predictive value of the new model. Results: 112 benign SPNs and 156 malignant SPNs were included in the modeling group. Multivariable logistic regression analysis showed that gender, age, history of tumor, ground-glass opacity, maximum diameter, and spiculation were independent predictors of malignancy in patients with SPN (P < 0.05). We calculated a prediction model for the probability of malignancy as follows: p = e^x/(1 + e^x), where x = -4.8029 - 0.743 × gender + 0.057 × age + 1.306 × history of tumor + 1.305 × ground-glass opacity + 0.051 × maximum diameter + 1.043 × spiculation. When the validation-group data were applied to the four prediction models, the area under the curve of our model was 0.742, greater than that of the other models (Mayo 0.696, VA 0.634, LiYun 0.681), although the differences between any two of the four models were not significant (P > 0.05). Conclusions: Age, gender, history of tumor, ground-glass opacity, maximum diameter, and spiculation are independent predictors of malignancy in patients with solitary pulmonary nodules. This logistic regression prediction model is not inferior to the classical models in estimating the malignancy risk of SPNs.
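
    The published model can be evaluated directly; in this sketch the binary codings (e.g. presence of a feature = 1) and the example inputs are assumptions, as the abstract does not state them.

      import math

      def malignancy_probability(gender, age, tumor_history, ggo, diameter_mm, spiculation):
          # Coefficients as reported in the abstract; binary codings are assumed.
          x = (-4.8029 - 0.743 * gender + 0.057 * age + 1.306 * tumor_history
               + 1.305 * ggo + 0.051 * diameter_mm + 1.043 * spiculation)
          return math.exp(x) / (1.0 + math.exp(x))

      # Hypothetical patient: gender code 0, 65 years, no tumor history,
      # GGO present, 18 mm nodule, spiculation present.
      print(malignancy_probability(0, 65, 0, 1, 18, 1))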

  5. Applications of modern statistical methods to analysis of data in physical science

    NASA Astrophysics Data System (ADS)

    Wicker, James Eric

    Modern methods of statistical and computational analysis offer solutions to dilemmas confronting researchers in physical science. Although the ideas behind modern statistical and computational analysis methods were originally introduced in the 1970s, most scientists still rely on methods written during the early era of computing. These researchers, who analyze increasingly voluminous and multivariate data sets, need modern analysis methods to extract the best results from their studies. The first section of this work showcases applications of modern linear regression. Since the 1960s, many researchers in spectroscopy have used classical stepwise regression techniques to derive molecular constants. However, problems with thresholds of entry and exit for model variables plague this analysis method. Other criticisms of this kind of stepwise procedure include its inefficient searching method, the order in which variables enter or leave the model, and problems with overfitting data. We implement an information scoring technique that overcomes the assumptions inherent in the stepwise regression process to calculate molecular model parameters. We believe that this kind of information-based model evaluation can be applied to more general analysis situations in physical science. The second section proposes new methods of multivariate cluster analysis. The K-means algorithm and the EM algorithm, introduced in the 1960s and 1970s respectively, formed the basis of multivariate cluster analysis methodology for many years. However, several shortcomings of these methods include strong dependence on initial seed values and inaccurate results when the data seriously depart from hypersphericity. We propose new cluster analysis methods based on genetic algorithms that overcome the strong dependence on initial seed values. In addition, we propose a generalization of the genetic K-means algorithm which can accurately identify clusters with complex hyperellipsoidal covariance structures. We then use this new algorithm in a genetic-algorithm-based Expectation-Maximization process that can accurately calculate parameters describing complex clusters in a mixture model routine. Using the accuracy of this GEM algorithm, we assign information scores to cluster calculations in order to best identify the number of mixture components in a multivariate data set. We showcase how these algorithms can be used to process multivariate data from astronomical observations.
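
    A minimal sketch of the kind of information-based model evaluation advocated here, scoring every candidate subset with a Gaussian AIC instead of relying on stepwise entry/exit thresholds (synthetic data; the dissertation's actual scoring criterion may differ):

      import itertools
      import numpy as np

      def gaussian_aic(X, y):
          # AIC for a least squares fit, up to an additive constant.
          coef, *_ = np.linalg.lstsq(X, y, rcond=None)
          rss = float(np.sum((y - X @ coef) ** 2))
          return len(y) * np.log(rss / len(y)) + 2 * X.shape[1]

      rng = np.random.default_rng(2)
      X = rng.normal(size=(100, 5))
      y = 1.0 * X[:, 0] - 2.0 * X[:, 3] + rng.normal(size=100)

      # Score every candidate subset instead of stepping in and out of one model.
      best = min(((gaussian_aic(X[:, list(s)], y), s)
                  for k in range(1, 6)
                  for s in itertools.combinations(range(5), k)),
                 key=lambda t: t[0])
      print(best)                     # lowest-AIC subset: (0, 3) expected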

  6. Continuous-variable quantum Gaussian process regression and quantum singular value decomposition of nonsparse low-rank matrices

    NASA Astrophysics Data System (ADS)

    Das, Siddhartha; Siopsis, George; Weedbrook, Christian

    2018-02-01

    With the significant advancement in quantum computation during the past couple of decades, the exploration of machine-learning subroutines using quantum strategies has become increasingly popular. Gaussian process regression is a widely used technique in supervised classical machine learning. Here we introduce an algorithm for Gaussian process regression using continuous-variable quantum systems that can be realized with technology based on photonic quantum computers, under certain assumptions regarding the distribution of data and the availability of efficient quantum access. Our algorithm shows that by using a continuous-variable quantum computer a dramatic speedup in computing Gaussian process regression can be achieved, i.e., the possibility of exponentially reducing the time to compute. Furthermore, our results also include a continuous-variable quantum-assisted singular value decomposition method for nonsparse low-rank matrices, which forms an important subroutine in our Gaussian process regression algorithm.
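
    For reference, the classical computation that the quantum algorithm aims to accelerate is dominated by an O(n^3) kernel-matrix solve; a minimal classical sketch on synthetic 1-D data (kernel and noise values illustrative):

      import numpy as np

      def rbf(a, b, ell=1.0):
          # Squared-exponential kernel on 1-D inputs.
          return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

      rng = np.random.default_rng(3)
      X = np.sort(rng.uniform(-3, 3, 25))          # training inputs
      y = np.sin(X) + 0.1 * rng.normal(size=25)    # noisy targets
      Xs = np.linspace(-3, 3, 100)                 # test inputs

      K = rbf(X, X) + 1e-2 * np.eye(25)            # kernel matrix plus noise term
      alpha = np.linalg.solve(K, y)                # the O(n^3) solve targeted for speedup
      mean = rbf(Xs, X) @ alpha                    # posterior predictive mean
      var = 1.0 - np.einsum('ij,ji->i', rbf(Xs, X), np.linalg.solve(K, rbf(X, Xs)))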

  7. Eigenvector Spatial Filtering Regression Modeling of Ground PM2.5 Concentrations Using Remotely Sensed Data.

    PubMed

    Zhang, Jingyi; Li, Bin; Chen, Yumin; Chen, Meijie; Fang, Tao; Liu, Yongfeng

    2018-06-11

    This paper proposes a regression model using the Eigenvector Spatial Filtering (ESF) method to estimate ground PM2.5 concentrations. Covariates are derived from remotely sensed data, including aerosol optical depth, normalized difference vegetation index, surface temperature, air pressure, relative humidity, height of the planetary boundary layer, and a digital elevation model. In addition, cultural variables such as factory densities and road densities are also used in the model. With the Yangtze River Delta region as the study area, we constructed ESF-based Regression (ESFR) models at different time scales, using data for the period between December 2015 and November 2016. We found that the ESFR models effectively filtered spatial autocorrelation in the OLS residuals and resulted in increases in the goodness-of-fit metrics, as well as reductions in residual standard errors and cross-validation errors, compared to the classic OLS models. The annual ESFR model explained 70% of the variability in PM2.5 concentrations, 16.7% more than the non-spatial OLS model. With the ESFR models, we performed detailed analyses of the spatial and temporal distributions of PM2.5 concentrations in the study area. The model predictions are lower than ground observations but match the general trend. The experiment shows that ESFR provides a promising approach to PM2.5 analysis and prediction.
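
    A minimal sketch of the ESF idea under simplifying assumptions (binary distance-band weights, synthetic covariates standing in for the remote-sensing variables): eigenvectors of the doubly centered spatial weights matrix are appended to the OLS design to absorb spatial autocorrelation.

      import numpy as np

      rng = np.random.default_rng(4)
      n = 60
      coords = rng.uniform(0, 1, size=(n, 2))                  # site locations
      d = np.linalg.norm(coords[:, None] - coords[None], axis=-1)
      W = ((d > 0) & (d < 0.2)).astype(float)                  # distance-band weights

      M = np.eye(n) - np.ones((n, n)) / n                      # centering projector
      vals, vecs = np.linalg.eigh(M @ W @ M)
      E = vecs[:, np.argsort(vals)[::-1][:5]]                  # top Moran eigenvectors

      X = rng.normal(size=(n, 2))                              # stand-in covariates
      y = X @ [1.0, -0.5] + 2 * E[:, 0] + 0.1 * rng.normal(size=n)
      design = np.column_stack([np.ones(n), X, E])             # OLS + spatial filters
      beta, *_ = np.linalg.lstsq(design, y, rcond=None)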

  8. Re'class'ification of 'quant'ified classical simulated annealing

    NASA Astrophysics Data System (ADS)

    Tanaka, Toshiyuki

    2009-12-01

    We discuss a classical reinterpretation of the quantum-mechanics-based analysis of classical Markov chains with detailed balance, a reinterpretation based on the quantum-classical correspondence. The classical reinterpretation is then used to demonstrate that it successfully reproduces a sufficient condition on the cooling schedule in classical simulated annealing, which has the inverse-logarithmic scaling.

  9. Element enrichment factor calculation using grain-size distribution and functional data regression.

    PubMed

    Sierra, C; Ordóñez, C; Saavedra, A; Gallego, J R

    2015-01-01

    In environmental geochemistry studies it is common practice to normalize element concentrations in order to remove the effect of grain size. Linear regression with respect to a particular grain size or conservative element is a widely used method of normalization. In this paper, the utility of functional linear regression, in which the grain-size curve is the independent variable and the concentration of pollutant the dependent variable, is analyzed and applied to detrital sediment. After implementing functional linear regression and classical linear regression models to normalize and calculate enrichment factors, we concluded that the former regression technique has some advantages over the latter. First, functional linear regression directly considers the grain-size distribution of the samples as the explanatory variable. Second, as the regression coefficients are not constant values but functions depending on the grain size, it is easier to comprehend the relationship between grain size and pollutant concentration. Third, regularization can be introduced into the model in order to establish equilibrium between reliability of the data and smoothness of the solutions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Advanced spectrophotometric chemometric methods for resolving the binary mixture of doxylamine succinate and pyridoxine hydrochloride.

    PubMed

    Katsarov, Plamen; Gergov, Georgi; Alin, Aylin; Pilicheva, Bissera; Al-Degs, Yahya; Simeonov, Vasil; Kassarova, Margarita

    2018-03-01

    The predictive power of the partial least squares (PLS) and multivariate curve resolution-alternating least squares (MCR-ALS) methods has been studied for simultaneous quantitative analysis of the binary drug combination doxylamine succinate and pyridoxine hydrochloride. Analysis of first-order UV overlapped spectra was performed using different PLS models - classical PLS1 and PLS2 as well as partial robust M-regression (PRM). These linear models were compared to MCR-ALS with equality and correlation constraints (MCR-ALS-CC). All techniques operated within the full spectral region and extracted maximum information for the drugs analysed. The developed chemometric methods were validated on external sample sets and were applied to the analyses of pharmaceutical formulations. The obtained statistical parameters were satisfactory for both calibration and validation sets. All developed methods can be successfully applied for simultaneous spectrophotometric determination of doxylamine and pyridoxine, both in laboratory-prepared mixtures and commercial dosage forms.
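
    A toy version of the PLS calibration step, assuming scikit-learn and synthetic Gaussian "spectra" in place of the measured UV data (band positions, noise level, and component count are illustrative):

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(5)
      wl = np.linspace(220, 320, 101)                     # wavelength grid, nm
      s1 = np.exp(-0.5 * ((wl - 260) / 12) ** 2)          # pure spectrum, drug 1
      s2 = np.exp(-0.5 * ((wl - 270) / 15) ** 2)          # pure spectrum, drug 2 (overlapped)

      C = rng.uniform(0.1, 1.0, size=(30, 2))             # calibration concentrations
      A = C @ np.vstack([s1, s2]) + 0.002 * rng.normal(size=(30, 101))

      pls = PLSRegression(n_components=4).fit(A, C)       # PLS2: both analytes at once
      print(pls.predict(A[:3]))                           # predicted concentrations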

  11. Simultaneous determination of rifampicin, isoniazid and pyrazinamide in tablet preparations by multivariate spectrophotometric calibration.

    PubMed

    Goicoechea, H C; Olivieri, A C

    1999-08-01

    The use of multivariate spectrophotometric calibration is presented for the simultaneous determination of the active components of tablets used in the treatment of pulmonary tuberculosis. The resolution of ternary mixtures of rifampicin, isoniazid, and pyrazinamide has been accomplished by using partial least squares (PLS-1) regression analysis. Although the components show an important degree of spectral overlap, they have been simultaneously determined with high accuracy and precision, rapidly, and with no need for nonaqueous solvents for dissolving the samples. No interference has been observed from the tablet excipients. A comparison is presented with the related multivariate method of classical least squares (CLS) analysis, which is shown to yield less reliable results due to the severe spectral overlap among the studied compounds. This is highlighted in the case of isoniazid, due to the small absorbances measured for this component.

  12. Application of seemingly unrelated regression in medical data with intermittently observed time-dependent covariates.

    PubMed

    Keshavarzi, Sareh; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Pakfetrat, Maryam

    2012-01-01

    BACKGROUND. In many studies with longitudinal data, time-dependent covariates can only be measured intermittently (not at all observation times), and this presents difficulties for standard statistical analyses. This situation is common in medical studies, and methods that deal with this challenge would be useful. METHODS. In this study, we fitted seemingly unrelated regression (SUR)-based models, with respect to each observation time, in longitudinal data with intermittently observed time-dependent covariates, and compared these models with mixed-effect regression models (MRMs) under three classic imputation procedures. Simulation studies were performed to compare the sample-size properties of the estimated coefficients for different modeling choices. RESULTS. In general, the proposed models showed good performance in the presence of intermittently observed time-dependent covariates. However, when we considered only the observed values of the covariate without any imputation, the resulting biases were greater. The performance of the proposed SUR-based models was nearly similar to that of MRM with classic imputation methods, with approximately equal amounts of bias and MSE. CONCLUSION. The simulation study suggests that the SUR-based models work as efficiently as MRM in the case of intermittently observed time-dependent covariates. Thus, they can be used as an alternative to MRM.
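
    A bare-bones sketch of the SUR estimator itself (not the paper's imputation pipeline): equation-by-equation OLS residuals estimate the cross-equation error covariance, which then feeds a feasible GLS fit of the stacked system.

      import numpy as np

      rng = np.random.default_rng(6)
      n = 200
      x1, x2 = rng.normal(size=(n, 2)), rng.normal(size=(n, 2))
      e = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=n)
      y1 = x1 @ [1.0, -1.0] + e[:, 0]                     # equation 1
      y2 = x2 @ [0.5, 2.0] + e[:, 1]                      # equation 2, correlated errors

      X = [np.column_stack([np.ones(n), x1]), np.column_stack([np.ones(n), x2])]
      Y = [y1, y2]

      # Step 1: equation-by-equation OLS residuals estimate the error covariance.
      res = [y - x @ np.linalg.lstsq(x, y, rcond=None)[0] for x, y in zip(X, Y)]
      S = np.cov(np.vstack(res))

      # Step 2: feasible GLS on the stacked system (the SUR estimator).
      XX = np.block([[X[0], np.zeros_like(X[1])], [np.zeros_like(X[0]), X[1]]])
      yy = np.concatenate(Y)
      Oinv = np.kron(np.linalg.inv(S), np.eye(n))         # inverse of Sigma kron I
      beta = np.linalg.solve(XX.T @ Oinv @ XX, XX.T @ Oinv @ yy)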

  13. An Update on Statistical Boosting in Biomedicine.

    PubMed

    Mayr, Andreas; Hofner, Benjamin; Waldmann, Elisabeth; Hepp, Tobias; Meyer, Sebastian; Gefeller, Olaf

    2017-01-01

    Statistical boosting algorithms have triggered a lot of research during the last decade. They combine a powerful machine learning approach with classical statistical modelling, offering various practical advantages like automated variable selection and implicit regularization of effect estimates. They are extremely flexible, as the underlying base-learners (regression functions defining the type of effect for the explanatory variables) can be combined with any kind of loss function (target function to be optimized, defining the type of regression setting). In this review article, we highlight the most recent methodological developments on statistical boosting regarding variable selection, functional regression, and advanced time-to-event modelling. Additionally, we provide a short overview on relevant applications of statistical boosting in biomedicine.
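
    A minimal sketch of componentwise L2 boosting, the canonical statistical boosting setup this review builds on: each iteration fits every candidate base-learner (here, simple linear effects) to the current residuals, updates only the best one, and the small step length provides the implicit regularization mentioned above (synthetic data; iteration count and step length illustrative).

      import numpy as np

      rng = np.random.default_rng(7)
      X = rng.normal(size=(150, 10))
      y = 2.0 * X[:, 1] - 1.5 * X[:, 6] + rng.normal(size=150)

      beta, nu, r = np.zeros(10), 0.1, y.copy()     # coefficients, step length, residuals
      for _ in range(300):                          # mstop boosting iterations
          scores = (X.T @ r) ** 2 / np.sum(X ** 2, axis=0)   # RSS reduction per learner
          j = int(np.argmax(scores))                # best-fitting base-learner
          b = (X[:, j] @ r) / (X[:, j] @ X[:, j])   # its least squares coefficient
          beta[j] += nu * b                         # shrunken update
          r -= nu * b * X[:, j]
      print(np.round(beta, 2))                      # mostly zeros: implicit variable selection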

  14. Integrating Milk Metabolite Profile Information for the Prediction of Traditional Milk Traits Based on SNP Information for Holstein Cows

    PubMed Central

    Melzer, Nina; Wittenburg, Dörte; Repsilber, Dirk

    2013-01-01

    In this study the benefit of metabolome-level analysis for the prediction of the genetic value of three traditional milk traits was investigated. Our proposed approach consists of three steps: First, milk metabolite profiles are used to predict three traditional milk traits of 1,305 Holstein cows. Two regression methods, both enabling variable selection, are applied to identify important milk metabolites in this step. Second, the prediction of these important milk metabolites from single nucleotide polymorphisms (SNPs) enables the detection of SNPs with significant genetic effects. Finally, these SNPs are used to predict milk traits. The observed precision of predicted genetic values was compared to the results observed for the classical genotype-phenotype prediction using all SNPs or a reduced SNP subset (reduced classical approach). To enable a comparison between SNP subsets, a special invariable evaluation design was implemented. SNPs close to or within known quantitative trait loci (QTL) were determined. This enabled us to determine whether the detected important SNP subsets were enriched in these regions. The results show that our approach can lead to genetic value prediction while requiring less than 1% of the total number of (40,317) SNPs. Moreover, significantly more important SNPs in known QTL regions were detected using our approach than with the reduced classical approach. In conclusion, our approach allows a deeper insight into the associations between the different levels of the genotype-phenotype map (genotype-metabolome, metabolome-phenotype, genotype-phenotype). PMID:23990900

  15. The application of neural network model to the simulation nitrous oxide emission in the hydro-fluctuation belt of Three Gorges Reservoir

    NASA Astrophysics Data System (ADS)

    Song, Lanlan

    2017-04-01

    Nitrous oxide is a much more potent greenhouse gas than carbon dioxide. However, the estimation of N2O flux is usually clouded with uncertainty, mainly due to high spatial and temporal variations. This also hampers the development of general mechanistic models for N2O emission, as most previously developed models were empirical or exhibited low predictability with numerous assumptions. In this study, we tested General Regression Neural Networks (GRNN) as an alternative to classic empirical models for simulating N2O emission in the riparian zones of reservoirs. GRNN and nonlinear regression (NLR) were applied to estimate the N2O flux from one year of observations in riparian zones of the Three Gorges Reservoir. NLR resulted in lower prediction power and higher residuals compared to GRNN. Although the NLR model estimated similar average values of N2O, it could not capture the fluctuation patterns accurately. In contrast, the GRNN model achieved fairly high predictability, with an R2 of 0.59 for model validation, 0.77 for model calibration (training), and a low root mean square error (RMSE), indicating a high capacity to simulate the dynamics of N2O flux. According to a sensitivity analysis of the GRNN, nonlinear relationships between input variables and N2O flux were well explained. Our results suggest that the GRNN developed in this study has a greater performance in simulating variations in N2O flux than nonlinear regressions.
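
    A GRNN is essentially a kernel-weighted average of the training targets (Specht's formulation); a minimal sketch with synthetic inputs standing in for the study's environmental drivers (bandwidth value illustrative):

      import numpy as np

      def grnn_predict(Xtr, ytr, Xte, sigma=0.3):
          # Kernel-weighted average of training targets (Specht's GRNN).
          d2 = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
          w = np.exp(-d2 / (2.0 * sigma ** 2))
          return (w @ ytr) / w.sum(axis=1)

      rng = np.random.default_rng(8)
      X = rng.uniform(size=(100, 3))                # stand-ins for environmental drivers
      y = np.sin(3 * X[:, 0]) + X[:, 1] + 0.1 * rng.normal(size=100)
      print(grnn_predict(X, y, X[:5]))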

  16. Direct Breakthrough Curve Prediction From Statistics of Heterogeneous Conductivity Fields

    NASA Astrophysics Data System (ADS)

    Hansen, Scott K.; Haslauer, Claus P.; Cirpka, Olaf A.; Vesselinov, Velimir V.

    2018-01-01

    This paper presents a methodology to predict the shape of solute breakthrough curves in heterogeneous aquifers at early times and/or under high degrees of heterogeneity, both cases in which the classical macrodispersion theory may not be applicable. The methodology relies on the observation that breakthrough curves in heterogeneous media are generally well described by lognormal distributions, and mean breakthrough times can be predicted analytically. The log-variance of solute arrival is thus sufficient to completely specify the breakthrough curves, and this is calibrated as a function of aquifer heterogeneity and dimensionless distance from a source plane by means of Monte Carlo analysis and statistical regression. Using the ensemble of simulated groundwater flow and solute transport realizations employed to calibrate the predictive regression, reliability estimates for the prediction are also developed. Additional theoretical contributions include heuristics for the time until an effective macrodispersion coefficient becomes applicable, and also an expression for its magnitude that applies in highly heterogeneous systems. It is seen that the results here represent a way to derive continuous time random walk transition distributions from physical considerations rather than from empirical field calibration.
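
    Given the analytically predicted mean arrival time and the regression-calibrated log-variance, the breakthrough curve follows directly as a lognormal density; a small sketch with illustrative parameter values:

      import numpy as np

      def breakthrough(t, t_mean, s2):
          # Lognormal arrival-time density with mean t_mean and log-variance s2.
          mu = np.log(t_mean) - s2 / 2.0            # ensures E[t] = t_mean
          return np.exp(-(np.log(t) - mu) ** 2 / (2.0 * s2)) / (t * np.sqrt(2.0 * np.pi * s2))

      t = np.linspace(0.01, 10, 500)
      c = breakthrough(t, t_mean=2.0, s2=0.4)       # illustrative parameter values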

  17. A survival tree method for the analysis of discrete event times in clinical and epidemiological studies.

    PubMed

    Schmid, Matthias; Küchenhoff, Helmut; Hoerauf, Achim; Tutz, Gerhard

    2016-02-28

    Survival trees are a popular alternative to parametric survival modeling when there are interactions between the predictor variables or when the aim is to stratify patients into prognostic subgroups. A limitation of classical survival tree methodology is that most algorithms for tree construction are designed for continuous outcome variables. Hence, classical methods might not be appropriate if failure time data are measured on a discrete time scale (as is often the case in longitudinal studies where data are collected, e.g., quarterly or yearly). To address this issue, we develop a method for discrete survival tree construction. The proposed technique is based on the result that the likelihood of a discrete survival model is equivalent to the likelihood of a regression model for binary outcome data. Hence, we modify tree construction methods for binary outcomes such that they result in optimized partitions for the estimation of discrete hazard functions. By applying the proposed method to data from a randomized trial in patients with filarial lymphedema, we demonstrate how discrete survival trees can be used to identify clinically relevant patient groups with similar survival behavior. Copyright © 2015 John Wiley & Sons, Ltd.
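
    The equivalence the method rests on can be made concrete: expanding each subject into one binary record per discrete period at risk lets any binary-outcome learner (here a scikit-learn tree, standing in for the authors' tailored construction) estimate the discrete hazard.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(9)
      n = 300
      x = rng.normal(size=n)                        # a single covariate, for illustration
      time = rng.integers(1, 6, size=n)             # discrete event/censoring time
      event = rng.integers(0, 2, size=n)            # 1 = event, 0 = censored

      # Person-period expansion: one binary row per subject per period at risk.
      rows, labels = [], []
      for xi, ti, ei in zip(x, time, event):
          for t in range(1, ti + 1):
              rows.append([xi, t])
              labels.append(1 if (t == ti and ei == 1) else 0)

      tree = DecisionTreeClassifier(max_depth=3).fit(rows, labels)
      hazard = tree.predict_proba([[0.5, 2]])[0, 1]   # estimated hazard at period 2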

  18. The complexity of classical music networks

    NASA Astrophysics Data System (ADS)

    Rolla, Vitor; Kestenberg, Juliano; Velho, Luiz

    2018-02-01

    Previous works suggest that musical networks often present the scale-free and the small-world properties. From a musician's perspective, the most important aspect missing in those studies was harmony. In addition to that, the previous works made use of outdated statistical methods. Traditionally, least-squares linear regression is utilised to fit a power law to a given data set. However, according to Clauset et al. such a traditional method can produce inaccurate estimates for the power law exponent. In this paper, we present an analysis of musical networks which considers the existence of chords (an essential element of harmony). Here we show that only 52.5% of music in our database presents the scale-free property, while 62.5% of those pieces present the small-world property. Previous works argue that music is highly scale-free; consequently, it sounds appealing and coherent. In contrast, our results show that not all pieces of music present the scale-free and the small-world properties. In summary, this research is focused on the relationship between musical notes (Do, Re, Mi, Fa, Sol, La, Si, and their sharps) and accompaniment in classical music compositions. More information about this research project is available at https://eden.dei.uc.pt/~vitorgr/MS.html.
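
    The statistical point can be reproduced in a few lines: the Clauset et al.-style maximum-likelihood estimate of the power-law exponent versus the criticized least-squares fit to the log-log histogram (synthetic sample; xmin assumed known):

      import numpy as np

      rng = np.random.default_rng(10)
      xmin, alpha = 1.0, 2.5
      x = xmin * (1 - rng.uniform(size=2000)) ** (-1 / (alpha - 1))   # power-law sample

      # Maximum-likelihood estimate (continuous case, xmin known):
      alpha_mle = 1 + len(x) / np.sum(np.log(x / xmin))

      # The criticized approach: least squares on the log-log histogram.
      hist, edges = np.histogram(x, bins=np.logspace(0, 3, 30), density=True)
      mids = np.sqrt(edges[1:] * edges[:-1])
      mask = hist > 0
      slope, _ = np.polyfit(np.log(mids[mask]), np.log(hist[mask]), 1)
      print(alpha_mle, -slope)        # the MLE is typically far less biased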

  19. Clinical severity and quality of life in children and adolescents with Rett syndrome

    PubMed Central

    Lane, J.B.; Lee, H.-S.; Smith, L.W.; Cheng, P.; Glaze, D.G.; Neul, J.L.; Motil, K.J.; Barrish, J.O.; Skinner, S.A.; Annese, F.; McNair, L.; Graham, J.; Khwaja, O.; Barnes, K.; Krischer, J.P.

    2011-01-01

    Objective: The clinical features and genetics of Rett syndrome (RTT) have been well studied, but examination of quality of life (QOL) is limited. This study describes the impact of clinical severity on QOL among female children and adolescents with classic RTT. Methods: Cross-sectional and longitudinal analyses were conducted on data collected from an NIH-sponsored RTT natural history study. More than 200 participants from 5 to 18 years of age with classic RTT finished their 2-year follow-up at the time of analysis. Regression models after adjustment for their MECP2 mutation type and age at enrollment were used to examine the association between clinical status and QOL. Results: Severe clinical impairment was highly associated with poor physical QOL, but worse motor function and earlier age at onset of RTT stereotypies were associated with better psychosocial QOL; conversely, better motor function was associated with poorer psychosocial QOL. Conclusions: Standard psychosocial QOL assessment for children and adolescents with RTT differs significantly with regard to their motor function severity. As clinical trials in RTT emerge, the Child Health Questionnaire 50 may represent one of the important outcome measures. PMID:22013176

  20. How Relevant Are GFAP Autoantibodies in Autism and Tourette Syndrome?

    ERIC Educational Resources Information Center

    Kirkman, Nikki J.; Libbey, Jane E.; Sweeten, Thayne L.; Coon, Hilary H.; Miller, Judith N.; Stevenson, Edward K.; Lainhart, Janet E.; McMahon, William M.; Fujinami, Robert S.

    2008-01-01

    Controversy exists over the role of autoantibodies to central nervous system antigens in autism and Tourette Syndrome. We investigated plasma autoantibody titers to glial fibrillary acidic protein (GFAP) in children with classic onset (33) and regressive onset (26) autism, controls (25, healthy age- and gender-matched) and individuals with…

  1. Predictive value of magnetic resonance for identifying neurovascular compressions in trigeminal neuralgia.

    PubMed

    Ruiz-Juretschke, F; Guzmán-de-Villoria, J G; García-Leal, R; Sañudo, J R

    2017-05-23

    Microvascular decompression (MVD) is accepted as the only aetiological surgical treatment for refractory classic trigeminal neuralgia (TN). There is therefore increasing interest in establishing the diagnostic and prognostic value of identifying neurovascular compressions (NVC) using preoperative high-resolution three-dimensional magnetic resonance imaging (MRI) in patients with classic TN who are candidates for surgery. This observational study includes a series of 74 consecutive patients with classic TN treated with MVD. All patients underwent preoperative three-dimensional high-resolution MRI with DRIVE sequences to diagnose the presence of NVC, as well as the degree, cause, and location of compressions. MRI results were analysed by doctors blinded to surgical findings and subsequently compared to those findings. After a minimum follow-up time of six months, we assessed the surgical outcome and graded it on the Barrow Neurological Institute pain intensity score (BNI score). The prognostic value of the preoperative MRI was estimated using binary logistic regression. Preoperative DRIVE MRI sequences showed a sensitivity of 95% and a specificity of 87%, with a 98% positive predictive value and a 70% negative predictive value. Moreover, Cohen's kappa (CK) indicated a good level of agreement between radiological and surgical findings regarding the presence of NVC (CK 0.75), the type of compression (CK 0.74), and the site of compression (CK 0.72), with only moderate agreement as to the degree of compression (CK 0.48). After a mean follow-up of 29 months (range 6-100 months), 81% of the patients reported pain control with or without medication (BNI scores I-III). Patients with an excellent surgical outcome, i.e. without pain and off medication (BNI score I), made up 66% of the total at the end of follow-up. Univariate analysis using binary logistic regression showed that a diagnosis of NVC on the preoperative MRI was a favorable prognostic factor that significantly increased the odds of obtaining an excellent outcome (OR 0.17, 95% CI 0.04-0.72; P=.02) or an acceptable outcome (OR 0.16, 95% CI 0.04-0.68; P=.01) after MVD. DRIVE MRI shows high sensitivity and specificity for diagnosing NVC in patients with refractory classic TN who are candidates for MVD. The finding of NVC on preoperative MRI is a good prognostic factor for long-term pain relief with MVD. Copyright © 2017 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.

  2. Normative data for uterine size according to age and gravidity and possible role of the classical golden ratio.

    PubMed

    Verguts, J; Ameye, L; Bourne, T; Timmerman, D

    2013-12-01

    To document normal measurements (length, width, anteroposterior (AP) diameter) and proportions of the non-pregnant uterus according to age and gravidity. We hypothesized that uterine proportions conform to the classical 'golden ratio' (1.618). This was a retrospective study of ultrasonographic measurements of the length, width and AP diameter of non-pregnant uteri recorded in our database between 1 January 2000 and 31 July 2012. All patients for whom abnormal findings were reported were excluded and only the first set of measurements for each patient was retained for analysis. Loess (local regression) analysis was performed using age and gravidity as explanatory variables. Measurements of 5466 non-pregnant uteri were retrieved for analysis. The mean length was found to increase to 72 mm at the age of 40 and decrease to 42 mm at the age of 80 years. Gravidity was associated with greater uterine length, width and AP diameter. Mean length/width ratio was found to be 1.857 at birth, decreasing to 1.452 at the age of 91 years. At the age of 21 years, the mean ratio was found to be 1.618, i.e. equal to the golden ratio. Increasing gravidity was associated with lower mean length/width ratio. Uterine size in non-pregnant women varies in relation to age and gravidity. Mean length/width ratio conformed to the golden ratio at the age of 21, coinciding with peak fertility. Copyright © 2013 ISUOG. Published by John Wiley & Sons Ltd.

  3. Bivariate least squares linear regression: Towards a unified analytic formalism. I. Functional models

    NASA Astrophysics Data System (ADS)

    Caimmi, R.

    2011-08-01

    Concerning bivariate least squares linear regression, the classical approach pursued for functional models in earlier attempts (York, 1966, 1969) is reviewed using a new formalism in terms of deviation (matrix) traces which, for unweighted data, reduce to usual quantities leaving aside an unessential (but dimensional) multiplicative factor. Within the framework of classical error models, the dependent variable relates to the independent variable according to the usual additive model. The classes of linear models considered are regression lines in the general case of correlated errors in X and in Y for weighted data, and in the opposite limiting situations of (i) uncorrelated errors in X and in Y, and (ii) completely correlated errors in X and in Y. The special case of (C) generalized orthogonal regression is considered in detail together with well-known subcases, namely: (Y) errors in X negligible (ideally null) with respect to errors in Y; (X) errors in Y negligible (ideally null) with respect to errors in X; (O) genuine orthogonal regression; (R) reduced major-axis regression. In the limit of unweighted data, the results determined for functional models are compared with their counterparts related to extreme structural models, i.e. the instrumental scatter is negligible (ideally null) with respect to the intrinsic scatter (Isobe et al., 1990; Feigelson and Babu, 1992). While regression line slope and intercept estimators for functional and structural models necessarily coincide, the contrary holds for related variance estimators even if the residuals obey a Gaussian distribution, with the exception of Y models. An example of an astronomical application is considered, concerning the [O/H]-[Fe/H] empirical relations deduced from five samples related to different stars and/or different methods of oxygen abundance determination. For selected samples and assigned methods, different regression models yield consistent results within the errors (±σ) for both heteroscedastic and homoscedastic data. Conversely, samples related to different methods produce discrepant results, due to the presence of (still undetected) systematic errors, which implies no definitive statement can be made at present. A comparison is also made between different expressions of regression line slope and intercept variance estimators, where fractional discrepancies are found to be not exceeding a few percent, which grows up to about 20% in the presence of large dispersion data. An extension of the formalism to structural models is left to a forthcoming paper.
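
    A minimal sketch of case (C), generalized orthogonal (Deming) regression, for the functional model with errors in both variables; delta is the assumed ratio of error variances, and delta = 1 recovers genuine orthogonal regression (synthetic data):

      import numpy as np

      def deming(x, y, delta=1.0):
          # delta = var(errY)/var(errX); delta = 1 gives orthogonal regression.
          sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
          sxy = np.cov(x, y, ddof=1)[0, 1]
          b = ((syy - delta * sxx
                + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2))
               / (2 * sxy))
          return b, np.mean(y) - b * np.mean(x)     # slope, intercept

      rng = np.random.default_rng(11)
      t = rng.uniform(0, 10, 100)                   # true (latent) values
      x = t + rng.normal(0, 0.5, 100)               # errors in X ...
      y = 2.0 + 0.8 * t + rng.normal(0, 0.5, 100)   # ... and in Y
      print(deming(x, y))                           # slope near the true 0.8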

  4. Real-time Supervised Detection of Pink Areas in Dermoscopic Images of Melanoma: Importance of Color Shades, Texture and Location

    PubMed Central

    Kaur, Ravneet; Albano, Peter P.; Cole, Justin G.; Hagerty, Jason; LeAnder, Robert W.; Moss, Randy H.; Stoecker, William V.

    2015-01-01

    Background/Purpose Early detection of malignant melanoma is an important public health challenge. In the USA, dermatologists are seeing more melanomas at an early stage, before classic melanoma features have become apparent. Pink color is a feature of these early melanomas. If rapid and accurate automatic detection of pink color in these melanomas could be accomplished, there could be significant public health benefits. Methods Detection of three shades of pink (light pink, dark pink, and orange pink) was accomplished using color analysis techniques in five color planes (red, green, blue, hue and saturation). Color shade analysis was performed using a logistic regression model trained with an image set of 60 dermoscopic images of melanoma that contained pink areas. Detected pink shade areas were further analyzed with regard to the location within the lesion, average color parameters over the detected areas, and histogram texture features. Results Logistic regression analysis of a separate set of 128 melanomas and 128 benign images resulted in up to 87.9% accuracy in discriminating melanoma from benign lesions measured using area under the receiver operating characteristic curve. The accuracy in this model decreased when parameters for individual shades, texture, or shade location within the lesion were omitted. Conclusion Texture, color, and lesion location analysis applied to multiple shades of pink can assist in melanoma detection. When any of these three details: color location, shade analysis, or texture analysis were omitted from the model, accuracy in separating melanoma from benign lesions was lowered. Separation of colors into shades and further details that enhance the characterization of these color shades are needed for optimal discrimination of melanoma from benign lesions. PMID:25809473

  5. A regularization corrected score method for nonlinear regression models with covariate error.

    PubMed

    Zucker, David M; Gorfine, Malka; Li, Yi; Tadesse, Mahlet G; Spiegelman, Donna

    2013-03-01

    Many regression analyses involve explanatory variables that are measured with error, and failing to account for this error is well known to lead to biased point and interval estimates of the regression coefficients. We present here a new general method for adjusting for covariate error. Our method consists of an approximate version of the Stefanski-Nakamura corrected score approach, using the method of regularization to obtain an approximate solution of the relevant integral equation. We develop the theory in the setting of classical likelihood models; this setting covers, for example, linear regression, nonlinear regression, logistic regression, and Poisson regression. The method is extremely general in terms of the types of measurement error models covered, and is a functional method in the sense of not involving assumptions on the distribution of the true covariate. We discuss the theoretical properties of the method and present simulation results in the logistic regression setting (univariate and multivariate). For illustration, we apply the method to data from the Harvard Nurses' Health Study concerning the relationship between physical activity and breast cancer mortality in the period following a diagnosis of breast cancer. Copyright © 2013, The International Biometric Society.
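
    The bias being corrected here is easy to exhibit (this simulates the problem, not the authors' regularized corrected-score solution): classical measurement error attenuates the logistic regression slope.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(12)
      n = 20000
      x = rng.normal(size=n)                        # true covariate
      w = x + rng.normal(0, 0.8, size=n)            # classical measurement error
      y = rng.binomial(1, 1 / (1 + np.exp(-x)))     # true logistic model, slope 1

      for cov, name in [(x, "true x"), (w, "noisy w")]:
          fit = LogisticRegression(C=1e6).fit(cov.reshape(-1, 1), y)
          print(name, fit.coef_[0, 0])              # slope attenuated under w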

  6. Generalized classical and quantum signal theories

    NASA Astrophysics Data System (ADS)

    Rundblad, E.; Labunets, V.; Novak, P.

    2005-05-01

    In this paper we develop two topics and show their inter- and cross-relation. The first centers on general notions of the generalized classical signal theory on finite Abelian hypergroups. The second concerns the generalized quantum hyperharmonic analysis of quantum signals (Hermitean operators associated with classical signals). We study classical and quantum generalized convolution hypergroup algebras of classical and quantum signals.

  7. Introducing Chemometrics to the Analytical Curriculum: Combining Theory and Lab Experience

    ERIC Educational Resources Information Center

    Gilbert, Michael K.; Luttrell, Robert D.; Stout, David; Vogt, Frank

    2008-01-01

    Beer's law is an ideal technique that works only in certain situations. A method for dealing with more complex conditions needs to be integrated into the analytical chemistry curriculum. For that reason, the capabilities and limitations of two common chemometric algorithms, classical least squares (CLS) and principal component regression (PCR),…

  9. Comparing and Contrasting Neural Net Solutions to Classical Statistical Solutions.

    ERIC Educational Resources Information Center

    Van Nelson, C.; Neff, Kathryn J.

    Data from two studies in which subjects were classified as successful or unsuccessful were analyzed using neural net technology after being analyzed with a linear regression function. Data were obtained from admission records of 201 students admitted to undergraduate and 285 students admitted to graduate programs. Data included grade point…

  10. An algebraic method for constructing stable and consistent autoregressive filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, University Park, PA 16802; Hong, Hoon, E-mail: hong@ncsu.edu

    2015-02-15

    In this paper, we introduce an algebraic method to construct stable and consistent univariate autoregressive (AR) models of low order for filtering and predicting nonlinear turbulent signals with memory depth. By stable, we refer to the classical stability condition for the AR model. By consistent, we refer to the classical consistency constraints of Adams–Bashforth methods of order two. One attractive feature of this algebraic method is that the model parameters can be obtained without directly knowing any training data set, as opposed to many standard, regression-based parameterization methods; it takes only long-time average statistics as inputs. The proposed method provides a discretization time step interval which guarantees the existence of a stable and consistent AR model and simultaneously produces the parameters for the AR models. In our numerical examples with two chaotic time series with different characteristics of decaying time scales, we find that the proposed AR models produce significantly more accurate short-term predictive skill and comparable filtering skill relative to the linear regression-based AR models. These encouraging results are robust across wide ranges of discretization times, observation times, and observation noise variances. Finally, we also find that the proposed model produces an improved short-time prediction relative to the linear regression-based AR models in forecasting a data set that characterizes the variability of the Madden–Julian Oscillation, a dominant tropical atmospheric wave pattern.
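
    The classical stability condition referred to above can be checked directly: an AR(p) model is stable when the eigenvalues of its companion matrix lie inside the unit circle. A minimal Python sketch with arbitrary example coefficients (not the paper's algebraically derived ones):

        import numpy as np

        def is_stable(ar_coeffs):
            """ar_coeffs = (a1, ..., ap) for x_t = a1*x_{t-1} + ... + ap*x_{t-p} + e_t."""
            p = len(ar_coeffs)
            companion = np.zeros((p, p))
            companion[0, :] = ar_coeffs           # first row holds the AR coefficients
            companion[1:, :-1] = np.eye(p - 1)    # subdiagonal shifts the state
            return bool(np.all(np.abs(np.linalg.eigvals(companion)) < 1))

        print(is_stable([0.5, 0.3]))   # True:  eigenvalues inside the unit circle
        print(is_stable([1.2, 0.1]))   # False: explosive AR(2)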

  11. Enhancement of partial robust M-regression (PRM) performance using Bisquare weight function

    NASA Astrophysics Data System (ADS)

    Mohamad, Mazni; Ramli, Norazan Mohamed; Ghani@Mamat, Nor Azura Md; Ahmad, Sanizah

    2014-09-01

    Partial Least Squares (PLS) regression is a popular regression technique for handling multicollinearity in low- and high-dimensional data, which fits a linear relationship between sets of explanatory and response variables. Several robust PLS methods have been proposed to accommodate the classical PLS algorithms, which are easily affected by the presence of outliers. The most recent of these is called partial robust M-regression (PRM). Unfortunately, the monotone weighting function used in the PRM algorithm fails to assign appropriate weights to large outliers according to their severity. Thus, in this paper, a modified partial robust M-regression is introduced to enhance the performance of the original PRM. A re-descending weight function, known as the Bisquare weight function, is recommended to replace the fair function in the PRM. A simulation study is done to assess the performance of the modified PRM, and its efficiency is also tested in both contaminated and uncontaminated simulated data under various percentages of outliers, sample sizes and numbers of predictors.
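
    The contrast between the two weight functions is easy to see numerically. A minimal Python sketch follows; the tuning constants are commonly quoted defaults, an assumption rather than values from the abstract:

        import numpy as np

        def fair_weight(r, c=1.4):                 # monotone: never reaches zero
            return 1.0 / (1.0 + np.abs(r) / c)

        def bisquare_weight(r, c=4.685):           # re-descending (Tukey biweight)
            w = (1 - (r / c) ** 2) ** 2
            return np.where(np.abs(r) <= c, w, 0.0)

        residuals = np.array([0.0, 1.0, 3.0, 6.0, 20.0])   # standardized residuals
        print("fair    :", np.round(fair_weight(residuals), 3))
        print("bisquare:", np.round(bisquare_weight(residuals), 3))
        # A residual of 20 still receives weight ~0.065 under the fair function,
        # but exactly 0 under Bisquare: the severity-dependent down-weighting
        # that the modified PRM exploits.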

  12. Classical and sequential limit analysis revisited

    NASA Astrophysics Data System (ADS)

    Leblond, Jean-Baptiste; Kondo, Djimédo; Morin, Léo; Remmal, Almahdi

    2018-04-01

    Classical limit analysis applies to ideal plastic materials, and within a linearized geometrical framework implying small displacements and strains. Sequential limit analysis was proposed as a heuristic extension to materials exhibiting strain hardening, and within a fully general geometrical framework involving large displacements and strains. The purpose of this paper is to study and clearly state the precise conditions permitting such an extension. This is done by comparing the evolution equations of the full elastic-plastic problem, the equations of classical limit analysis, and those of sequential limit analysis. The main conclusion is that, whereas classical limit analysis applies to materials exhibiting elasticity - in the absence of hardening and within a linearized geometrical framework -, sequential limit analysis, to be applicable, strictly prohibits the presence of elasticity - although it tolerates strain hardening and large displacements and strains. For a given mechanical situation, the relevance of sequential limit analysis therefore essentially depends upon the importance of the elastic-plastic coupling in the specific case considered.

  13. Music performance anxiety in young musicians: comparison of playing classical or popular music.

    PubMed

    Nusseck, Manfred; Zander, Mark; Spahn, Claudia

    2015-03-01

    Music performance anxiety (MPA) is an issue frequently experienced by musicians. It occurs not only in experienced musicians but also in children and adolescents. Furthermore, most research on MPA has been done with musicians who specialized in classical music. This study investigated the development of MPA across the ages in young musicians, focusing on the classical and popular genres. In a cross-sectional survey, 239 students at German music schools, aged between 7 and 20 yrs, were asked about their perceived MPA and musical background. The data were analyzed according to musical genre and age. Multiple regression analyses were performed to investigate the influences of musical experiences on MPA. The analyses yielded high levels of MPA for classical musicians between 7 and 16 yrs, which were reduced in older students; for popular musicians, low MPA was seen in the younger (7-11 yrs) and high MPA in the older (16+ yrs) musicians. MPA was influenced by gender and the number of performances in the classical music group, and only by gender and age in the popular music group. The results showed clearly different trends in the development of MPA between musical genres that should be taken into account for educational aspects of musical training.

  14. Nonparametric instrumental regression with non-convex constraints

    NASA Astrophysics Data System (ADS)

    Grasmair, M.; Scherzer, O.; Vanhems, A.

    2013-03-01

    This paper considers the nonparametric regression model with an additive error that is dependent on the explanatory variables. As is common in empirical studies in epidemiology and economics, it also supposes that valid instrumental variables are observed. A classical example in microeconomics considers the consumer demand function as a function of the price of goods and the income, both variables often considered as endogenous. In this framework, the economic theory also imposes shape restrictions on the demand function, such as integrability conditions. Motivated by this illustration in microeconomics, we study an estimator of a nonparametric constrained regression function using instrumental variables by means of Tikhonov regularization. We derive rates of convergence for the regularized model both in a deterministic and stochastic setting under the assumption that the true regression function satisfies a projected source condition including, because of the non-convexity of the imposed constraints, an additional smallness condition.

  15. A dynamic multi-level optimal design method with embedded finite-element modeling for power transformers

    NASA Astrophysics Data System (ADS)

    Zhang, Yunpeng; Ho, Siu-lau; Fu, Weinong

    2018-05-01

    This paper proposes a dynamic multi-level optimal design method for power transformer design optimization (TDO) problems. A response surface generated by second-order polynomial regression analysis is updated dynamically by adding more design points, which are selected by Shifted Hammersley Method (SHM) and calculated by finite-element method (FEM). The updating stops when the accuracy requirement is satisfied, and optimized solutions of the preliminary design are derived simultaneously. The optimal design level is modulated through changing the level of error tolerance. Based on the response surface of the preliminary design, a refined optimal design is added using multi-objective genetic algorithm (MOGA). The effectiveness of the proposed optimal design method is validated through a classic three-phase power TDO problem.
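
    A Python sketch of the response-surface core of this loop: fit a second-order polynomial surface to a few evaluated design points, add points, and stop when the surface meets an error tolerance. The cheap analytic objective below stands in for the finite-element solve, random points stand in for SHM selection, and the MOGA refinement stage is omitted.

        import numpy as np
        from sklearn.preprocessing import PolynomialFeatures
        from sklearn.linear_model import LinearRegression

        def fem_stand_in(x):                      # pretend this is an FEM evaluation
            return (x[:, 0] - 0.3) ** 2 + 2 * (x[:, 1] + 0.2) ** 2

        rng = np.random.default_rng(12)
        X = rng.uniform(-1, 1, (10, 2))           # initial design points
        poly = PolynomialFeatures(degree=2)       # second-order response surface
        for _ in range(5):                        # dynamic updating loop
            model = LinearRegression().fit(poly.fit_transform(X), fem_stand_in(X))
            X_new = rng.uniform(-1, 1, (5, 2))    # stand-in for SHM-selected points
            err = np.abs(model.predict(poly.transform(X_new)) - fem_stand_in(X_new)).max()
            if err < 1e-3:                        # error tolerance sets the design level
                break
            X = np.vstack([X, X_new])             # add the newly calculated points
        print("design points used:", len(X), "max surface error:", err)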

  16. Dynamic Network-Based Epistasis Analysis: Boolean Examples

    PubMed Central

    Azpeitia, Eugenio; Benítez, Mariana; Padilla-Longoria, Pablo; Espinosa-Soto, Carlos; Alvarez-Buylla, Elena R.

    2011-01-01

    In this article we focus on how the hierarchical and single-path assumptions of epistasis analysis can bias the inference of gene regulatory networks. Here we emphasize the critical importance of dynamic analyses, and specifically illustrate the use of Boolean network models. Epistasis in a broad sense refers to gene interactions; however, as originally proposed by Bateson, epistasis is defined as the blocking of a particular allelic effect due to the effect of another allele at a different locus (herein, classical epistasis). Classical epistasis analysis has proven powerful and useful, allowing researchers to infer and assign directionality to gene interactions. As larger data sets are becoming available, the analysis of classical epistasis is being complemented with computer science tools and systems biology approaches. We show that when the hierarchical and single-path assumptions are not met in classical epistasis analysis, the access to relevant information and the correct inference of gene interaction topologies is hindered, and it becomes necessary to consider the temporal dynamics of gene interactions. The use of dynamical networks can overcome these limitations. We particularly focus on the use of Boolean networks that, like classical epistasis analysis, rely on logical formalisms, and hence can complement classical epistasis analysis and relax its assumptions. We develop a couple of theoretical examples and analyze them from a dynamic Boolean network model perspective. Boolean networks could help to guide additional experiments and discern among alternative regulatory schemes that would be impossible or difficult to infer without the elimination of these assumptions from the classical epistasis analysis. We also use examples from the literature to show how a Boolean network-based approach has resolved ambiguities and guided epistasis analysis. Our article complements previous accounts, not only by focusing on the implications of the hierarchical and single-path assumptions, but also by demonstrating the importance of considering temporal dynamics, and specifically introducing the usefulness of Boolean network models and also reviewing some key properties of network approaches. PMID:22645556
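
    A toy Python version of the kind of dynamic analysis advocated here: a synchronous Boolean network for a two-gene circuit, iterated to its attractor under each knockout, reproducing the logic of a classical epistasis experiment. The circuit and its rules are invented for illustration and are not taken from the article.

        def step(state, knockout=()):
            a, b = state
            new_a = 0 if "A" in knockout else 1          # A is constitutively on
            new_b = 0 if "B" in knockout else (1 - a)    # A represses B
            return (new_a, new_b)

        def attractor(knockout=(), state=(0, 0), steps=10):
            for _ in range(steps):                       # iterate to the fixed point
                state = step(state, knockout)
            return state

        for ko in [(), ("A",), ("B",), ("A", "B")]:
            print(ko or "wild type", "->", attractor(ko))
        # The double mutant shows the B-knockout phenotype (B off), so B is
        # epistatic to A, consistent with A acting upstream as a repressor of B.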

  17. Impact of Uncertainties in Exposure Assessment on Thyroid Cancer Risk among Persons in Belarus Exposed as Children or Adolescents Due to the Chernobyl Accident.

    PubMed

    Little, Mark P; Kwon, Deukwoo; Zablotska, Lydia B; Brenner, Alina V; Cahoon, Elizabeth K; Rozhko, Alexander V; Polyanskaya, Olga N; Minenko, Victor F; Golovanov, Ivan; Bouville, André; Drozdovitch, Vladimir

    2015-01-01

    The excess incidence of thyroid cancer in Ukraine and Belarus observed a few years after the Chernobyl accident is considered to be largely the result of 131I released from the reactor. Although the Belarus thyroid cancer prevalence data have been previously analyzed, no account was taken of dose measurement error. We examined dose-response patterns in a thyroid screening prevalence cohort of 11,732 persons aged under 18 at the time of the accident, diagnosed during 1996-2004, who had direct thyroid 131I activity measurements and were resident in the most radioactively contaminated regions of Belarus. Three methods of dose-error correction (regression calibration, Monte Carlo maximum likelihood, Bayesian Markov Chain Monte Carlo) were applied. There was a statistically significant (p<0.001) increasing dose-response for prevalent thyroid cancer, irrespective of the regression-adjustment method used. Without adjustment for dose errors the excess odds ratio was 1.51 Gy⁻¹ (95% CI 0.53, 3.86), which was reduced by 13% when regression-calibration adjustment was used, to 1.31 Gy⁻¹ (95% CI 0.47, 3.31). A Monte Carlo maximum likelihood method yielded an excess odds ratio of 1.48 Gy⁻¹ (95% CI 0.53, 3.87), about 2% lower than the unadjusted analysis. The Bayesian method yielded a maximum posterior excess odds ratio of 1.16 Gy⁻¹ (95% BCI 0.20, 4.32), 23% lower than the unadjusted analysis. There were borderline significant (p = 0.053-0.078) indications of downward curvature in the dose response, depending on the adjustment method used. There were also borderline significant (p = 0.102) modifying effects of gender on the radiation dose trend, but no significant modifying effects of age at the time of the accident or age at screening on the dose response (p>0.2). In summary, the relatively small contribution of unshared classical dose error in the current study results in comparatively modest effects on the regression parameters.

  18. Contextual determinants of neonatal mortality using two analysis methods, Rio Grande do Sul, Brazil.

    PubMed

    Zanini, Roselaine Ruviaro; Moraes, Anaelena Bragança de; Giugliani, Elsa Regina Justo; Riboldi, João

    2011-02-01

    To analyze neonatal mortality determinants using multilevel logistic regression and classic hierarchical models. This cohort study included 138,407 live births with birth certificates and 1,134 neonatal deaths recorded in 2003 in the state of Rio Grande do Sul, Southern Brazil. The Information System on Live Births and mortality records were linked to gather information on individual-level exposures. Sociodemographic data and information on the pregnancy, childbirth care and characteristics of the children at birth were collected. The associated factors were estimated and compared using traditional and multilevel logistic regression analysis. The neonatal mortality rate was 8.19 deaths per 1,000 live births. Low birth weight, 1- and 5-minute Apgar scores below eight, congenital malformation, pre-term birth and previous fetal loss were associated with neonatal death in the traditional model. Elective cesarean section had a protective effect. Previous fetal loss did not remain significant in the multilevel model, but the inclusion of a contextual variable (poverty rate) showed that 15% of the variation in neonatal mortality can be explained by varying poverty rates across the microregions. The use of multilevel models showed a small effect of contextual determinants on the neonatal mortality rate: a positive association was found with the poverty rate in the general model, and with the proportion of households with water supply among preterm newborns.

  19. Multivariate Analysis As a Support for Diagnostic Flowcharts in Allergic Bronchopulmonary Aspergillosis: A Proof-of-Concept Study.

    PubMed

    Vitte, Joana; Ranque, Stéphane; Carsin, Ania; Gomez, Carine; Romain, Thomas; Cassagne, Carole; Gouitaa, Marion; Baravalle-Einaudi, Mélisande; Bel, Nathalie Stremler-Le; Reynaud-Gaubert, Martine; Dubus, Jean-Christophe; Mège, Jean-Louis; Gaudart, Jean

    2017-01-01

    Molecular-based allergy diagnosis yields multiple biomarker datasets. The classical diagnostic score for allergic bronchopulmonary aspergillosis (ABPA), a severe disease usually occurring in asthmatic patients and people with cystic fibrosis, comprises succinct immunological criteria formulated in 1977: total IgE, anti-Aspergillus fumigatus (Af) IgE, anti-Af "precipitins," and anti-Af IgG. Progress achieved over the last four decades led to multiple IgE and IgG(4) Af biomarkers available with quantitative, standardized, molecular-level reports. These newly available biomarkers have not been included in the current diagnostic criteria, either individually or in algorithms, despite persistent underdiagnosis of ABPA. Large numbers of individual biomarkers may hinder their use in clinical practice. Conversely, multivariate analysis using new tools may bring about a better chance of fewer diagnostic mistakes. We report here a proof-of-concept work consisting of a three-step multivariate analysis of Af IgE, IgG, and IgG4 biomarkers through a combination of principal component analysis, hierarchical ascendant classification, and classification and regression tree multivariate analysis. The resulting diagnostic algorithms might show the way for novel criteria and improved diagnostic efficiency in Af-sensitized patients at risk for ABPA.
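
    A Python sketch of the three-step pipeline (PCA, hierarchical ascendant classification, then a classification tree), applied to a simulated biomarker matrix; the 13 columns stand in for the Af IgE/IgG/IgG4 biomarkers and none of the data are the study's.

        import numpy as np
        from sklearn.decomposition import PCA
        from scipy.cluster.hierarchy import linkage, fcluster
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(3)
        X = np.log(rng.lognormal(mean=0, sigma=1, size=(100, 13)))  # biomarker levels

        # Step 1: compress the correlated biomarkers into principal components.
        scores = PCA(n_components=3).fit_transform(X)

        # Step 2: hierarchical ascendant classification (Ward linkage) on the scores.
        groups = fcluster(linkage(scores, method="ward"), t=2, criterion="maxclust")

        # Step 3: a shallow CART tree re-expresses the clusters as biomarker
        # thresholds, i.e. candidate rules one could read off as a flowchart.
        tree = DecisionTreeClassifier(max_depth=2).fit(X, groups)
        print("tree training accuracy:", tree.score(X, groups))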

  20. Associations between self-harm and distinct types of impulsivity

    PubMed Central

    Chamberlain, Samuel R; Leppink, Eric W.; Redden, Sarah A.; Grant, Jon E.

    2017-01-01

    Objective: Self-harm is common and is of considerable public health concern. There is an ongoing debate regarding how self-harm should be classified. The aim of this study was to characterize associations between self-harm and impulsivity, including from the perspective of formal mental disorders and neuropsychological functioning. Method: A total of 333 adults (mean [SD] age 22.6 [3.6] years, 61% male) were recruited from the general community and undertook detailed clinical and cognitive assessments. History of self-harm was quantified using the Self-Harm Inventory (SHI), which asks about 22 self-harm behaviors (classic self-harm behaviors as well as broader types of behavior that may be relevant, such as engaging in emotionally abusive relationships). Principal components analysis was used to identify latent dimensions of self-harming behaviors. Relationships between self-harm dimensions and other measures were characterized using ordinary least squares regression. Results: Principal components analysis yielded a three-factor solution, corresponding to self-injurious self-harm (e.g. cutting, overdoses, burning), interpersonal self-harm (e.g. engaging in emotionally or sexually abusive relationships), and reckless self-harm (e.g. losing one's job deliberately, driving recklessly, abusing alcohol). Regression modelling showed that all three dimensions of self-harm were associated with lower quality of life. The classic (self-injurious) and interpersonal self-harm dimensions were associated with impulse control disorders (ICDs), whereas reckless self-harm was associated with other mainstream mental disorders besides ICDs. Only interpersonal self-harm was significantly associated with other impulsive measures (less risk adjustment on the Cambridge Gambling Task). Conclusions: This study suggests the existence of three distinct subtypes or 'latent factors' of self-harm; all three appear clinically important in that they are linked with worse quality of life. Clinicians should screen for impulse control disorders in people presenting with self-harm, especially when it is self-injurious or involves interpersonal harm. Our findings militate against self-harm being broadly associated with impulsive personality and cognitive measures, at least in people recruited from a non-clinical/non-treatment setting. If future nosological revisions and treatment trials focus on self-injurious self-harm alone, they may overlook other aspects of self-harm that are also functionally impairing. PMID:28135642

  1. Parameterization of phosphine ligands demonstrates enhancement of nickel catalysis via remote steric effects.

    PubMed

    Wu, Kevin; Doyle, Abigail G

    2017-08-01

    The field of Ni-catalysed cross-coupling has seen rapid recent growth because of the low cost of Ni, its earth abundance, and its ability to promote unique cross-coupling reactions. Whereas advances in the related field of Pd-catalysed cross-coupling have been driven by ligand design, the development of ligands specifically for Ni has received minimal attention. Here, we disclose a class of phosphines that enable the Ni-catalysed Csp3 Suzuki coupling of acetals with boronic acids to generate benzylic ethers, a reaction that failed with known ligands for Ni and designer phosphines for Pd. Using parameters to quantify phosphine steric and electronic properties together with regression statistical analysis, we identify a model for ligand success. The study suggests that effective phosphines feature remote steric hindrance, a concept that could guide future ligand design tailored to Ni. Our analysis also reveals that two classic descriptors for ligand steric environment (cone angle and % buried volume) are not equivalent, despite their treatment in the literature.

  2. Parameterization of phosphine ligands demonstrates enhancement of nickel catalysis via remote steric effects

    NASA Astrophysics Data System (ADS)

    Wu, Kevin; Doyle, Abigail G.

    2017-08-01

    The field of Ni-catalysed cross-coupling has seen rapid recent growth because of the low cost of Ni, its earth abundance, and its ability to promote unique cross-coupling reactions. Whereas advances in the related field of Pd-catalysed cross-coupling have been driven by ligand design, the development of ligands specifically for Ni has received minimal attention. Here, we disclose a class of phosphines that enable the Ni-catalysed Csp3 Suzuki coupling of acetals with boronic acids to generate benzylic ethers, a reaction that failed with known ligands for Ni and designer phosphines for Pd. Using parameters to quantify phosphine steric and electronic properties together with regression statistical analysis, we identify a model for ligand success. The study suggests that effective phosphines feature remote steric hindrance, a concept that could guide future ligand design tailored to Ni. Our analysis also reveals that two classic descriptors for ligand steric environment—cone angle and % buried volume—are not equivalent, despite their treatment in the literature.

  3. Expression of Fas, FasL, caspase-8 and other factors of the extrinsic apoptotic pathway during the onset of interdigital tissue elimination.

    PubMed

    Svandova, E Budisova; Vesela, B; Lesot, H; Poliard, A; Matalova, E

    2017-04-01

    Elimination of the interdigital web is considered to be the classical model for assessing apoptosis. So far, most of the molecules described in the process have been connected to the intrinsic (mitochondrial) pathway. The extrinsic (receptor mediated) apoptotic pathway has been rather neglected, although it is important in development, immunomodulation and cancer therapy. This work aimed to investigate factors of the extrinsic apoptotic machinery during interdigital regression with a focus on three crucial initiators: Fas, Fas ligand and caspase-8. Immunofluorescent analysis of mouse forelimb histological sections revealed abundant expression of these molecules prior to digit separation. Subsequent PCR Array analyses indicated the expression of several markers engaged in the extrinsic pathway. Between embryonic days 11 and 13, statistically significant increases in the expression of Fas and caspase-8 were observed, along with other molecules involved in the extrinsic apoptotic pathway such as Dapk1, Traf3, Tnsf12, Tnfrsf1A and Ripk1. These results demonstrate for the first time the presence of extrinsic apoptotic components in mouse limb development and indicate novel candidates in the molecular network accompanying the regression of interdigital tissue during digitalisation.

  4. A non-linear data mining parameter selection algorithm for continuous variables

    PubMed Central

    Razavi, Marianne; Brady, Sean

    2017-01-01

    In this article, we propose a new data mining algorithm with which one can both capture the non-linearity in data and find the best subset model. To produce an enhanced subset of the original variables, a preferred selection method should have the potential of adding a supplementary level of regression analysis that captures complex relationships in the data via mathematical transformation of the predictors and exploration of synergistic effects of combined variables. The method we present here has the potential to produce an optimal subset of variables, rendering the overall process of model selection more efficient. The algorithm introduces interpretable parameters by transforming the original inputs and also provides a faithful fit to the data. The core objective of this paper is to introduce a new estimation technique for the classical least squares regression framework. This new automatic variable transformation and model selection method can offer an optimal and stable model that minimizes the mean square error and variability, combining all-possible-subsets selection methodology with the inclusion of variable transformations and interactions. Moreover, this method controls multicollinearity, leading to an optimal set of explanatory variables. PMID:29131829

  5. Analysis of obsidian from Moho Cay, Belize: new evidence on Classic Maya trade routes.

    PubMed

    Healy, P F; McKillop, H I; Walsh, B

    1984-07-27

    Trace element analysis of obsidian artifacts from Moho Cay, Belize, reveals that the obsidian derives primarily from the El Chayal outcrop in highland Guatemala and not from the Ixtepeque source. This is contrary to the widely accepted obsidian trade route model for Classic Maya civilization and suggests that Classic Maya obsidian trade was a more complex economic phenomenon than has been recognized.

  6. Relative velocity change measurement based on seismic noise analysis in exploration geophysics

    NASA Astrophysics Data System (ADS)

    Corciulo, M.; Roux, P.; Campillo, M.; Dubuq, D.

    2011-12-01

    Passive monitoring techniques based on noise cross-correlation analysis are still debated in exploration geophysics, even though recent studies have shown impressive performance in seismology at larger scales. Tracking the time evolution of complex geological structures with noise data involves localizing the noise sources and measuring relative velocity variations. Monitoring relative velocity variations only requires measuring the phase shifts of seismic noise cross-correlation functions computed for successive time recordings. Existing algorithms, such as the Stretching and Doublet methods, classically demand substantial computation time, making them impractical when continuous datasets are acquired on dense arrays. We present here an innovative technique for passive monitoring based on measuring the instantaneous phase of noise-correlated signals. The Instantaneous Phase Variation (IPV) technique aims to combine the advantages of the Stretching and Doublet methods while providing a faster measurement of the relative velocity change. IPV takes advantage of the Hilbert transform to compute, in the time domain, the phase difference between two noise correlation functions. The relative velocity variation is measured through the slope of the linear regression of the phase-difference curve as a function of correlation time. The large number of noise correlation functions classically available at exploration scale on dense arrays allows for a statistical analysis that further improves the precision of the velocity-change estimate. Numerical tests first compare IPV performance with the Stretching and Doublet techniques in terms of accuracy, robustness and computation time. Experimental results are then presented using a seismic noise dataset with five days of continuous recording on 397 geophones spread over a ~1 km² area.
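
    A Python sketch of the IPV idea on a synthetic narrowband correlation function: compute the instantaneous phase difference between the original and a stretched copy with the Hilbert transform, then read the relative velocity change off the slope of a linear regression of that phase difference against correlation time. The test signal and the 0.5% velocity change are assumptions for illustration.

        import numpy as np
        from scipy.signal import hilbert

        fs, f0, eps = 1000.0, 25.0, 0.005        # sampling rate, center freq, dv/v
        t = np.arange(0, 4.0, 1 / fs)
        envelope = np.exp(-((t - 2.0) ** 2))     # smooth coda-like envelope
        h1 = envelope * np.cos(2 * np.pi * f0 * t)
        h2 = envelope * np.cos(2 * np.pi * f0 * (1 - eps) * t)  # stretched by dv/v

        # Instantaneous phase difference between the two analytic signals.
        dphi = np.unwrap(np.angle(hilbert(h2) * np.conj(hilbert(h1))))

        # dphi(t) ~ -2*pi*f0*eps*t, so the regression slope recovers eps.
        mask = envelope > 0.1                    # fit where the amplitude is usable
        slope = np.polyfit(t[mask], dphi[mask], 1)[0]
        print("recovered dv/v:", -slope / (2 * np.pi * f0))   # ~0.005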

  7. Classical methods and modern analysis for studying fungal diversity

    Treesearch

    John Paul Schmit

    2005-01-01

    In this chapter, we examine the use of classical methods to study fungal diversity. Classical methods rely on the direct observation of fungi, rather than sampling fungal DNA. We summarize a wide variety of classical methods, including direct sampling of fungal fruiting bodies, incubation of substrata in moist chambers, culturing of endophytes, and particle plating. We...

  8. Classical Methods and Modern Analysis for Studying Fungal Diversity

    Treesearch

    J. P. Schmit; D. J. Lodge

    2005-01-01

    In this chapter, we examine the use of classical methods to study fungal diversity. Classical methods rely on the direct observation of fungi, rather than sampling fungal DNA. We summarize a wide variety of classical methods, including direct sampling of fungal fruiting bodies, incubation of substrata in moist chambers, culturing of endophytes, and particle plating. We...

  9. Atypical depressive symptoms and obesity in a national sample of older adults with major depressive disorder.

    PubMed

    Chou, Kee-Lee; Yu, Kar-Ming

    2013-06-01

    The objectives of this study are to present findings on the rates of obesity associated with classic, atypical, and undifferentiated depression, compared with those without depression, in a nationally representative sample of United States older adults. The authors used data from the 2001 to 2002 National Epidemiologic Survey of Alcohol and Related Conditions (NESARC), which included 10,557 adults 60 years of age and older. Chi-square tests were used to compare the classic, atypical, and undifferentiated depression groups, as well as the nondepressed controls, on sociodemographic characteristics. Then, logistic regressions adjusting for sociodemographic characteristics were used to evaluate the rate of current obesity (defined as Body Mass Index (BMI) > 30) across the three depressive groups (classic, atypical, and undifferentiated depression) and the nondepressed controls. Lifetime, current, and past depression were examined. Significant differences were found between atypical and classic depression in sex, age, marital status, race, and personal income. After adjusting for sex, age, marital status, race, and personal income, the rate of obesity was significantly greater for respondents with atypical depression than for respondents with classic or undifferentiated depression, or without depression. The same results were found for lifetime, current, and past depression. Our findings suggest that the heterogeneity of depression should be considered when examining the effect of depression on obesity in old age. Prevention measures should be designed and delivered to older adults with atypical depression. © 2013 Wiley Periodicals, Inc.

  10. Morphological Idiosyncracies in Classical Arabic: Evidence Favoring Lexical Representations over Rules.

    ERIC Educational Resources Information Center

    Miller, Ann M.

    A lexical representational analysis of Classical Arabic is proposed that captures a generalization that McCarthy's (1979, 1981) autosegmental analysis misses, namely that idiosyncratic characteristics of the derivational binyanim in Arabic are lexical, not morphological. This analysis captures that generalization by treating all the idiosyncracies…

  11. Ultrasound-assisted extraction of bioactive compounds from lemon balm and peppermint leaves

    NASA Astrophysics Data System (ADS)

    Šic Žlabur, Jana; Voća, Sandra; Dobričević, Nadica; Pliestić, Stjepan; Galić, Ante; Boričević, Ana; Borić, Nataša

    2016-01-01

    The aim of this study was to investigate the influence of conventional and ultrasound-assisted extraction (frequency, time, temperature) on the content of bioactive compounds as well as on the antioxidant activity of aqueous extracts from fresh lemon balm and peppermint leaves. Total phenols, flavonoids, non-flavonoids, total chlorophylls, total carotenoids, and radical scavenging capacity were determined. Moreover, the relationship between bioactive compounds and antioxidant capacity was studied by linear regression. A significant increase in all studied bioactive compounds during ultrasonic extraction for 5 to 20 min was found. With the classical extraction method, the highest amounts of total phenols, flavonoids, and antioxidant activity were determined, and the maximum amounts of total chlorophylls and carotenoids were determined during 20 min ultrasonic extraction. The correlation analysis revealed a strong, positive relationship between antioxidant activity and total phenolic compounds.

  12. Socioeconomic Distinction, Cultural Tastes, and Cigarette Smoking*

    PubMed Central

    Pampel, Fred C.

    2011-01-01

    Objectives The inverse relationship between socioeconomic status (SES) and smoking is typically seen in terms of the greater economic and social resources of advantaged groups, but it may also relate to cultural resources. This study aims to test theories of symbolic distinction by examining relationships between smoking and ostensibly unrelated cultural preferences. Methods Using the 1993 General Social Survey, ordinal logistic regression models, and a three-category dependent variable (never, former, and current smoker), the analysis estimates relationships of musical likes and dislikes with smoking while controlling for SES and social strain. Results Preferences for classical music are associated with lower smoking, while preferences for bluegrass, jazz, and heavy metal music are associated with higher smoking. Conclusions The results suggest that SES groups may use smoking, like other cultural tastes, to distinguish their lifestyles from those of others. PMID:21874073

  13. VARSEDIG: an algorithm for morphometric characters selection and statistical validation in morphological taxonomy.

    PubMed

    Guisande, Cástor; Vari, Richard P; Heine, Jürgen; García-Roselló, Emilio; González-Dacosta, Jacinto; Perez-Schofield, Baltasar J García; González-Vilas, Luis; Pelayo-Villamil, Patricia

    2016-09-12

    We present and discuss VARSEDIG, an algorithm which identifies the morphometric features that significantly discriminate two taxa and validates the morphological distinctness between them via a Monte-Carlo test. VARSEDIG is freely available as a function of the RWizard application PlotsR (http://www.ipez.es/RWizard) and as an R package on CRAN. The variables selected by VARSEDIG with the overlap method were very similar to those selected by logistic regression and discriminant analysis, but VARSEDIG overcomes some shortcomings of these methods. VARSEDIG is therefore a good alternative to current classical classification methods for identifying morphometric features that significantly discriminate a taxon and for validating its morphological distinctness from other taxa. As a demonstration of the potential of VARSEDIG for this purpose, we analyze morphological discrimination among some species of the Neotropical freshwater family Characidae.
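
    The Monte-Carlo validation step can be pictured as a label-randomization test. A minimal Python sketch under simplifying assumptions: a crude mean-difference statistic instead of VARSEDIG's overlap measure, and invented measurements for a single morphometric variable.

        import numpy as np

        rng = np.random.default_rng(4)
        taxon_a = rng.normal(10.0, 1.0, 40)     # e.g. head length, taxon A
        taxon_b = rng.normal(12.0, 1.0, 40)     # the same variable, taxon B

        def separation(x, y):
            """Absolute difference of group means: a crude discrimination statistic."""
            return abs(x.mean() - y.mean())

        observed = separation(taxon_a, taxon_b)
        pooled = np.concatenate([taxon_a, taxon_b])
        null = []
        for _ in range(9999):                   # Monte-Carlo randomization of labels
            rng.shuffle(pooled)
            null.append(separation(pooled[:40], pooled[40:]))
        p_value = (1 + np.sum(np.array(null) >= observed)) / (1 + len(null))
        print(f"observed separation {observed:.2f}, Monte-Carlo p = {p_value:.4f}")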

  14. Partial Least Squares with Structured Output for Modelling the Metabolomics Data Obtained from Complex Experimental Designs: A Study into the Y-Block Coding.

    PubMed

    Xu, Yun; Muhamadali, Howbeer; Sayqal, Ali; Dixon, Neil; Goodacre, Royston

    2016-10-28

    Partial least squares (PLS) is one of the most commonly used supervised modelling approaches for analysing multivariate metabolomics data. PLS is typically employed as either a regression model (PLS-R) or a classification model (PLS-DA). However, in metabolomics studies it is common to investigate multiple, potentially interacting, factors simultaneously following a specific experimental design. Such data often cannot be considered as a "pure" regression or a classification problem. Nevertheless, these data have often still been treated as a regression or classification problem and this could lead to ambiguous results. In this study, we investigated the feasibility of designing a hybrid target matrix Y that better reflects the experimental design than simple regression or binary class membership coding commonly used in PLS modelling. The new design of Y coding was based on the same principle used by structural modelling in machine learning techniques. Two real metabolomics datasets were used as examples to illustrate how the new Y coding can improve the interpretability of the PLS model compared to classic regression/classification coding.
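
    A minimal Python sketch of the Y-coding idea: encode the design factors as columns of a hybrid target matrix and fit a single PLS2 model, rather than forcing a pure regression or a binary class-membership coding. The two-factor design, feature counts and data are invented, and scikit-learn's PLSRegression stands in for the chemometrics toolchain.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(5)
        n, p = 60, 200                        # samples x metabolite features
        treatment = rng.integers(0, 2, n)     # factor 1: control / treated
        dose = rng.choice([0.0, 0.5, 1.0], n) # factor 2: graded dosage
        X = rng.normal(size=(n, p))
        X[:, 0] += 2 * treatment              # plant a signal for each factor
        X[:, 1] += 2 * dose

        # Hybrid target: a binary membership column plus a quantitative column.
        Y = np.column_stack([treatment.astype(float), dose])

        pls = PLSRegression(n_components=3).fit(X, Y)
        print("R^2 of the joint PLS2 fit:", pls.score(X, Y))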

  15. Bispectral analysis during deep sedation of pediatric oral surgery patients.

    PubMed

    Overly, Frank L; Wright, Robert O; Connor, Francis A; Jay, Gregory D; Linakis, James G

    2005-02-01

    Bispectral (BIS) analysis uses electroencephalogram information from a forehead electrode to calculate an index score (0 to 100; 0 = coma; 90 to 100 = awake). This index score correlates with the level of alertness in anesthetized patients. Classically, sedation has been monitored with clinical sedation scales such as the Observer's Assessment of Alertness/Sedation Scale (OAA/S), the Modified Ramsey Scale, or a Visual Analog Scale (VAS). Our objective was to determine the correlation between clinical sedation scales and the BIS index in pediatric patients undergoing sedation in an outpatient oral surgery setting. This was a prospective cohort study of patients aged 2 to 17 years undergoing sedation in an outpatient oral surgery office. Sedation was performed in the customary manner with the addition of BIS monitoring. Three clinical sedation scores (OAA/S: 5 to 1; 5 = awake, 1 = unresponsive; Modified Ramsey: 1 to 6; 1-2 = awake, 6 = unresponsive; VAS: 0 to 10; 0 = awake, 10 = unresponsive) were assigned every 5 minutes by an investigator blinded to the BIS index. Data were analyzed using a repeated measures linear regression model. Sixteen subjects undergoing oral surgery, aged 4.5 to 17 years, were enrolled (mean age 12.6 +/- 4.3 years [standard deviation]). Patients received methohexital in addition to 1 or more of the following: nitrous oxide, fentanyl, or midazolam. The longitudinal regression analysis showed a highly significant association between the sedation scales and the BIS index. The BIS monitor may be a useful adjunct for monitoring pediatric patients receiving sedation in the outpatient setting.

  16. [Predicting very early rebleeding after acute variceal bleeding based in classification and regression tree analysis (CRTA).].

    PubMed

    Altamirano, J; Augustin, S; Muntaner, L; Zapata, L; González-Angulo, A; Martínez, B; Flores-Arroyo, A; Camargo, L; Genescá, J

    2010-01-01

    Variceal bleeding (VB) is the main cause of death among cirrhotic patients. About 30-50% of early rebleeding is encountered a few days after the acute episode of VB. It is necessary to stratify patients at high risk of very early rebleeding (VER) for more aggressive therapies; however, there are few, and incompletely understood, prognostic models for this purpose. The aims were to determine the risk factors associated with VER after an acute VB, and to assess and compare a novel prognostic model generated by classification and regression tree analysis (CART) with classically used models (MELD and Child-Pugh [CP]). Sixty consecutive cirrhotic patients with acute variceal bleeding were studied. CART analysis, MELD and Child-Pugh scores were performed at admission. Receiver operating characteristic (ROC) curves were constructed to evaluate the predictive performance of the models. The very early rebleeding rate was 13%. Variables associated with VER were: serum albumin (p = 0.027), creatinine (p = 0.021) and transfused blood units in the first 24 hrs (p = 0.05). The areas under the ROC curve for MELD, Child-Pugh and CART were 0.46, 0.50 and 0.82, respectively. The cutoff values identified by CART for the significant variables were: 1) albumin, 2.85 mg/dL; 2) packed red cells, 2 units; and 3) creatinine, 1.65 mg/dL. Serum albumin, creatinine and the number of transfused blood units were associated with VER. A simple CART algorithm combining these variables allows an accurate predictive assessment of VER after acute variceal bleeding. Key words: cirrhosis, variceal bleeding, esophageal varices, prognosis, portal hypertension.
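
    A Python sketch of a CART model in the spirit of this abstract: a shallow decision tree over serum albumin, creatinine and transfused blood units predicting very early rebleeding. The 60-patient data set is simulated around the reported cutoffs and is not the study's data.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier, export_text

        rng = np.random.default_rng(6)
        n = 60
        albumin = rng.normal(3.0, 0.5, n)
        creatinine = rng.normal(1.2, 0.5, n)
        units = rng.poisson(2, n)                       # packed red cells, first 24 h
        risk = (albumin < 2.85) & ((creatinine > 1.65) | (units > 2))
        rebleed = rng.binomial(1, np.where(risk, 0.7, 0.05))

        X = np.column_stack([albumin, creatinine, units])
        tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, rebleed)
        print(export_text(tree, feature_names=["albumin", "creatinine", "units"]))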

  17. Model-based Bayesian inference for ROC data analysis

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Bae, K. Ty

    2013-03-01

    This paper presents a study of model-based Bayesian inference applied to Receiver Operating Characteristic (ROC) data. The model is a simple version of the general non-linear regression model. Unlike the Dorfman model, it uses a probit link function with a zero-one covariate variable to express the binormal distributions in a single formula; the model also includes a scale parameter. Bayesian inference is implemented by the Markov Chain Monte Carlo (MCMC) method, carried out by Bayesian inference Using Gibbs Sampling (BUGS). In contrast to classical statistical theory, the Bayesian approach considers model parameters as random variables characterized by prior distributions. With the substantial number of simulated samples generated by the sampling algorithm, the posterior distributions of the parameters, as well as the parameters themselves, can be accurately estimated. MCMC-based BUGS adopts the Adaptive Rejection Sampling (ARS) protocol, which requires that the probability density function (pdf) from which samples are drawn be log concave with respect to the targeted parameters. Our study corrects a common misconception and proves that the pdf of this regression model is log concave with respect to its scale parameter. Therefore, ARS's requirement is satisfied, and a Gaussian prior, which is conjugate and possesses many analytic and computational advantages, is assigned to the scale parameter. A cohort of 20 simulated data sets and 20 simulations from each data set are used in our study. Output analysis and convergence diagnostics for the MCMC method are assessed with the CODA package. Models and methods using a continuous Gaussian prior and a discrete categorical prior are compared. Intensive simulations and performance measures are given to illustrate our practice in the framework of model-based Bayesian inference using the MCMC method.

  18. Interleukin-6 Level among Shift and Night Workers in Japan: Cross-Sectional Analysis of the J-HOPE Study.

    PubMed

    Amano, Hoichi; Fukuda, Yoshiharu; Yokoo, Takashi; Yamaoka, Kazue

    2018-03-27

    Shift workers have a high risk of cardiovascular disease (CVD). Systemic inflammation has been associated with the risk of CVD onset, in addition to the classical risk factors. However, the association between work schedule and inflammatory cytokine levels remains unclear. The purpose of this study was to examine the association between work schedule and interleukin-6 (IL-6)/high-sensitivity C-reactive protein (hs-CRP) levels among Japanese workers. The present cross-sectional study was a part of the Japanese Study of Health, Occupation and Psychosocial Factors Related Equity (J-HOPE). A total of 5259 workers with measured inflammatory cytokine levels were analyzed in this study. One-way analysis of variance was used to test log-transformed IL-6/hs-CRP differences by work schedule. Multiple regression analysis was used to examine the differences adjusted for other possible CVD risk factors. There were 3660 participants with a regular work schedule; the remaining schedules were shift work without night work for 181 participants, shift work with night work for 1276 participants, and night work only for 142 participants. The unadjusted model showed that only the night-work-only group was significantly related to high levels of IL-6 compared with regular workers. Even in the multiple regression analysis, the higher level of IL-6 among night-only workers remained significant (β=0.058, P=0.01). By contrast, hs-CRP was not. The present study revealed that night-only shift work is significantly associated with high levels of IL-6 in Japanese workers. These observations help us understand the mechanism for the association between work schedule and CVD onset.

  19. Relations of Transtheoretical Model Stage, Self-Efficacy, and Voluntary Physical Activity in African American Preadolescents

    ERIC Educational Resources Information Center

    Annesi, James J.; Faigenbaum, Avery D.; Westcott, Wayne L.

    2010-01-01

    The transtheoretical model (TTM; Prochaska, DiClemente, & Norcross, 1992) suggests that, at any point, an individual is in one of five stages-of-change related to adopting a behavior. People sequentially advance in stage but may also maintain or even regress, based on personal and environmental factors (Nigg, 2005). A classic study published in…

  20. Artificial Neural Network approach to develop unique Classification and Raga identification tools for Pattern Recognition in Carnatic Music

    NASA Astrophysics Data System (ADS)

    Srimani, P. K.; Parimala, Y. G.

    2011-12-01

    A unique approach has been developed to study patterns in ragas of Carnatic classical music based on artificial neural networks. Ragas in Carnatic music, which found their roots in the Vedic period, have grown on a scientific foundation over thousands of years. However, owing to their vastness and complexity, it has always been a challenge for scientists and musicologists to give an all-encompassing perspective, both qualitatively and quantitatively. Cognition, comprehension and perception of ragas in Indian classical music have always been subjects of intensive research; they remain highly intriguing, and many of their facets are as yet unravelled. This paper is an attempt to view the melakartha ragas from a cognitive perspective using an artificial neural network based approach, which has given rise to very interesting results. The 72 ragas of the melakartha system were defined through the combination of frequencies occurring in each of them. The data sets were trained using several neural networks. 100% accurate pattern recognition and classification was obtained using linear regression, TLRN, MLP and RBF networks. The performance of the different network topologies, obtained by varying network parameters, was compared. Linear regression was found to be the best performing network.

  1. [A comparative biology and dynamic interpretation of necrophilia].

    PubMed

    Krizek, G O; Lidinger, H G

    1993-04-01

    The authors discuss some aspects of necrophilia based on classic Freudian instinct theories. Parallels are drawn between different levels of the development of life on Earth and the basic antagonistic drives (Eros and Thanatos) in an attempt to explain this rare and unusual paraphilia. The authors note that in unicellular organisms, e.g. amoebas and Schizomycetae, there often does not exist what could be called "the death of an individual": when two new "individual organisms" are created by splitting, no death as such occurs. Here the supposed antagonism of these basic drives, Eros and Thanatos, does not actually manifest itself in the conventional sense. Necrophilia could be interpreted as a regressive desire to return to a phylogenetically older stage of life development, where no individual dies and life continues without interruption. (At the level of these unicellular organisms, we should speak of a "Dividuum" instead of using the classic term "Individuum".) There is some analogy to this regressive desire at a higher level of biological development in human society: a magical conviction about the possibility of reviving a dead person, common in preliterate cultures.

  2. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield better calibrated forecasts. Theoretically, both scoring rules, when used as optimization scores, should be able to locate a similar and unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients; the log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
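
    A Python sketch of the comparison for a toy Gaussian regression y ~ N(a + b*x, sigma^2): the same coefficients fitted by maximum likelihood and by minimizing the mean CRPS, which has a closed form for a Gaussian predictive distribution. With the correct distributional assumption the two estimates should nearly coincide; the data are synthetic.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        rng = np.random.default_rng(7)
        x = rng.uniform(-2, 2, 500)
        y = 1.0 + 2.0 * x + rng.normal(0, 0.8, 500)

        def unpack(theta):
            a, b, log_s = theta
            return a + b * x, np.exp(log_s)

        def neg_log_lik(theta):
            mu, s = unpack(theta)
            return -norm.logpdf(y, mu, s).sum()

        def mean_crps(theta):                 # closed-form Gaussian CRPS
            mu, s = unpack(theta)
            z = (y - mu) / s
            return (s * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z)
                         - 1 / np.sqrt(np.pi))).mean()

        for name, obj in [("max likelihood", neg_log_lik), ("min CRPS", mean_crps)]:
            a, b, log_s = minimize(obj, x0=np.zeros(3)).x
            print(f"{name:14s} a={a:.3f} b={b:.3f} sigma={np.exp(log_s):.3f}")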

  3. Order Selection for General Expression of Nonlinear Autoregressive Model Based on Multivariate Stepwise Regression

    NASA Astrophysics Data System (ADS)

    Shi, Jinfei; Zhu, Songqing; Chen, Ruwen

    2017-12-01

    An order selection method based on multivariate stepwise regression is proposed for the General Expression of Nonlinear Autoregressive (GNAR) model; it converts the model order problem into variable selection for a multiple linear regression equation. The partial autocorrelation function is adopted to define the linear terms in the GNAR model. The result is set as the initial model, and the nonlinear terms are then introduced gradually. Statistics are chosen to assess the improvements contributed by both the newly introduced and the originally existing variables to the model characteristics, and these statistics are used to determine which model variables to retain or eliminate. The optimal model is thus obtained through measurement of the data-fitting effect or significance testing. Simulation results and experiments on classic time-series data show that the proposed method is simple, reliable and applicable to practical engineering.
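
    A Python sketch of the stepwise idea: candidate lagged and nonlinear terms are added one at a time, keeping a term only while its addition is statistically significant. The simulated AR(2) series and the candidate set are illustrative, not the paper's GNAR formulation.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(10)
        x = np.zeros(500)
        for t in range(2, 500):               # x_t = 0.5*x_{t-1} - 0.3*x_{t-2} + e_t
            x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + rng.normal(0, 0.1)

        y = x[4:]
        candidates = {f"lag{k}": x[4 - k:-k] for k in range(1, 5)}
        candidates["lag1^2"] = candidates["lag1"] ** 2   # a nonlinear candidate term

        selected, pool = [], dict(candidates)
        while pool:
            # p-value of each candidate when added to the current model
            trials = {}
            for name, col in pool.items():
                cols = [candidates[s] for s in selected] + [col]
                fit = sm.OLS(y, sm.add_constant(np.column_stack(cols))).fit()
                trials[name] = fit.pvalues[-1]
            best = min(trials, key=trials.get)
            if trials[best] > 0.05:           # stop: no significant improvement left
                break
            selected.append(best)
            del pool[best]
        print("selected terms:", selected)    # expect lag1 and lag2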

  4. Generalized Onsager's reciprocal relations for the master and Fokker-Planck equations

    NASA Astrophysics Data System (ADS)

    Peng, Liangrong; Zhu, Yi; Hong, Liu

    2018-06-01

    Onsager's reciprocal relation plays a fundamental role in nonequilibrium thermodynamics. Unfortunately, its classical version is valid only within a narrow region near equilibrium, owing to the linear regression hypothesis, which greatly restricts its usage. In this paper, based on the conservation-dissipation formalism, a generalized version of Onsager's relations for master equations and Fokker-Planck equations is derived. Nonlinear constitutive relations with nonsymmetric and positively stable operators, which become symmetric under the detailed balance condition, constitute the key features of this new generalization. Similar conclusions also hold for many other classical models in physics and chemistry, which in turn makes the current study a benchmark for the application of generalized Onsager's relations in nonequilibrium thermodynamics.

  5. Investigating light curve modulation via kernel smoothing. II. New additional modes in single-mode OGLE classical Cepheids

    NASA Astrophysics Data System (ADS)

    Süveges, Maria; Anderson, Richard I.

    2018-04-01

    Detailed knowledge of the variability of classical Cepheids, in particular their modulations and mode composition, provides crucial insight into stellar structure and pulsation. However, tiny modulations of the dominant radial-mode pulsation were recently found to be very frequent, possibly ubiquitous in Cepheids, which makes secondary modes difficult to detect and analyse, since these modulations can easily mask the potentially weak secondary modes. The aim of this study is to re-investigate the secondary mode content in the sample of OGLE-III and -IV single-mode classical Cepheids using kernel regression with adaptive kernel width for pre-whitening, instead of using a constant-parameter model. This leads to a more precise removal of the modulated dominant pulsation, and enables a more complete survey of secondary modes with frequencies outside a narrow range around the primary. Our analysis reveals that significant secondary modes occur more frequently among first overtone Cepheids than previously thought. The mode composition appears significantly different in the Large and Small Magellanic Clouds, suggesting a possible dependence on chemical composition. In addition to the formerly identified non-radial mode at P2 ≈ 0.6…0.65P1 (0.62-mode), and a cluster of modes with near-primary frequency, we find two more candidate non-radial modes. One is a numerous group of secondary modes with P2 ≈ 1.25P1, which may represent the fundamental of the 0.62-mode, supposed to be the first harmonic of an l ∈ {7, 8, 9} non-radial mode. The other new mode is at P2 ≈ 1.46P1, possibly analogous to a similar, rare mode recently discovered among first overtone RR Lyrae stars.
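
    The pre-whitening step can be pictured with a simple kernel regression on the phase-folded light curve: smooth out the dominant pulsation, subtract it, and search the residuals for secondary modes. A minimal Python sketch with a fixed bandwidth (the adaptive kernel width that is central to the cited method is omitted for brevity) and a synthetic light curve:

        import numpy as np

        def kernel_smooth(phase_grid, phase, mag, h=0.05):
            """Gaussian-kernel (Nadaraya-Watson) regression on a periodic phase."""
            d = phase_grid[:, None] - phase[None, :]
            d = (d + 0.5) % 1.0 - 0.5                 # wrap distances to [-0.5, 0.5)
            w = np.exp(-0.5 * (d / h) ** 2)
            return (w * mag).sum(axis=1) / w.sum(axis=1)

        rng = np.random.default_rng(9)
        t = np.sort(rng.uniform(0, 200, 1500))        # irregular sampling epochs
        P1 = 3.0                                      # dominant period, days
        mag = (np.sin(2 * np.pi * t / P1)             # dominant radial mode
               + 0.05 * np.sin(2 * np.pi * t / (0.62 * P1))  # weak 0.62-mode
               + rng.normal(0, 0.02, t.size))

        phase = (t / P1) % 1.0
        model = kernel_smooth(phase, phase, mag)      # smoothed mean light curve
        residuals = mag - model                       # search these for secondary modes
        print("residual rms:", residuals.std())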

  6. Efficient occupancy model-fitting for extensive citizen-science data.

    PubMed

    Dennis, Emily B; Morgan, Byron J T; Freeman, Stephen N; Ridout, Martin S; Brereton, Tom M; Fox, Richard; Powney, Gary D; Roy, David B

    2017-01-01

    Appropriate large-scale citizen-science data present important new opportunities for biodiversity modelling, due in part to the wide spatial coverage of information. Recently proposed occupancy modelling approaches naturally incorporate random effects in order to account for annual variation in the composition of sites surveyed. In turn this leads to Bayesian analysis and model fitting, which are typically extremely time consuming. Motivated by presence-only records of occurrence from the UK Butterflies for the New Millennium data base, we present an alternative approach, in which site variation is described in a standard way through logistic regression on relevant environmental covariates. This allows efficient occupancy model-fitting using classical inference, which is easily achieved using standard computers. This is especially important when models need to be fitted each year, typically for many different species, as with British butterflies for example. Using both real and simulated data we demonstrate that the two approaches, with and without random effects, can result in similar conclusions regarding trends. There are many advantages to classical model-fitting, including the ability to compare a range of alternative models, identify appropriate covariates and assess model fit, using standard tools of maximum likelihood. In addition, modelling in terms of covariates provides opportunities for understanding the ecological processes that are in operation. We show that there is even greater potential; the classical approach allows us to construct regional indices simply, which indicate how changes in occupancy typically vary over a species' range. In addition we are also able to construct dynamic occupancy maps, which provide a novel, modern tool for examining temporal changes in species distribution. These new developments may be applied to a wide range of taxa, and are valuable at a time of climate change. They also have the potential to motivate citizen scientists.
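
    The classical model-fitting advocated here can be illustrated with the simplest single-season occupancy model, fitted by maximum likelihood in a few lines of Python. Covariate effects via logistic regression (the paper's treatment of site variation) are omitted for brevity, and the data are simulated.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import expit, comb

        rng = np.random.default_rng(11)
        S, K, psi_true, p_true = 400, 4, 0.6, 0.3     # sites, visits, true parameters
        occupied = rng.binomial(1, psi_true, S)
        y = occupied * rng.binomial(K, p_true, S)     # detections: zero if unoccupied

        def neg_log_lik(theta):
            psi, p = expit(theta)                     # keep both parameters in (0, 1)
            site_lik = np.where(
                y > 0,                                # detected at least once
                psi * comb(K, y) * p ** y * (1 - p) ** (K - y),
                psi * (1 - p) ** K + (1 - psi),       # never detected: two explanations
            )
            return -np.log(site_lik).sum()

        res = minimize(neg_log_lik, x0=np.zeros(2))
        print("psi_hat, p_hat:", expit(res.x))        # should be near 0.6 and 0.3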

  7. Efficient occupancy model-fitting for extensive citizen-science data

    PubMed Central

    Morgan, Byron J. T.; Freeman, Stephen N.; Ridout, Martin S.; Brereton, Tom M.; Fox, Richard; Powney, Gary D.; Roy, David B.

    2017-01-01

    Appropriate large-scale citizen-science data present important new opportunities for biodiversity modelling, due in part to the wide spatial coverage of information. Recently proposed occupancy modelling approaches naturally incorporate random effects in order to account for annual variation in the composition of sites surveyed. In turn this leads to Bayesian analysis and model fitting, which are typically extremely time consuming. Motivated by presence-only records of occurrence from the UK Butterflies for the New Millennium data base, we present an alternative approach, in which site variation is described in a standard way through logistic regression on relevant environmental covariates. This allows efficient occupancy model-fitting using classical inference, which is easily achieved using standard computers. This is especially important when models need to be fitted each year, typically for many different species, as with British butterflies for example. Using both real and simulated data we demonstrate that the two approaches, with and without random effects, can result in similar conclusions regarding trends. There are many advantages to classical model-fitting, including the ability to compare a range of alternative models, identify appropriate covariates and assess model fit, using standard tools of maximum likelihood. In addition, modelling in terms of covariates provides opportunities for understanding the ecological processes that are in operation. We show that there is even greater potential; the classical approach allows us to construct regional indices simply, which indicate how changes in occupancy typically vary over a species’ range. In addition we are also able to construct dynamic occupancy maps, which provide a novel, modern tool for examining temporal changes in species distribution. These new developments may be applied to a wide range of taxa, and are valuable at a time of climate change. They also have the potential to motivate citizen scientists. PMID:28328937

  8. Notes on power of normality tests of error terms in regression models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Střelec, Luboš

    2015-03-10

    Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most of the inferential procedures are based on the assumption of normality, i.e. the disturbance vector is assumed to be normally distributed. Failure to assess non-normality of the error terms may lead to incorrect results of usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow exact inferences; this explains the necessity and importance of robust tests of normality. Therefore, the aim of this contribution is to discuss normality testing of error terms in regression models. In this contribution, we introduce the general RT class of robust tests for normality, and present and discuss the trade-off between power and robustness of selected classical and robust normality tests of error terms in regression models.
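
    To make the setting concrete, here is a small illustration (not from the paper) of applying classical normality tests to estimated regression residuals; the heavy-tailed error distribution is chosen so the tests have something to detect.

        import numpy as np
        import statsmodels.api as sm
        from scipy import stats

        rng = np.random.default_rng(0)
        x = rng.uniform(0, 10, 200)
        y = 2.0 + 0.5 * x + rng.standard_t(df=3, size=200)  # non-normal errors

        resid = sm.OLS(y, sm.add_constant(x)).fit().resid

        # Note the tests see estimated residuals, not the true error terms.
        print("Shapiro-Wilk:", stats.shapiro(resid))
        print("Jarque-Bera:", stats.jarque_bera(resid))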

  9. Modal analysis of a classical guitar

    NASA Astrophysics Data System (ADS)

    Cohen, David; Rossing, Thomas D.

    2002-11-01

    Using holographic interferometry, we have determined the modes of vibration of a classical guitar (built by the first author) having an asymmetrically-braced top plate and a cross-braced back of unique design. The vibrational modes and acoustical properties are compared with those of other classical guitars.

  10. Use of FTA® classic cards for epigenetic analysis of sperm DNA.

    PubMed

    Serra, Olga; Frazzi, Raffaele; Perotti, Alessio; Barusi, Lorenzo; Buschini, Annamaria

    2018-02-01

    FTA® technologies provide the most reliable method for DNA extraction. Although FTA technologies have been widely used for genetic analysis, there is no literature on their use for epigenetic analysis yet. We present for the first time, a simple method for quantitative methylation assessment based on sperm cells stored on Whatman FTA classic cards. Specifically, elution of seminal DNA from FTA classic cards was successfully tested with an elution buffer and an incubation step in a thermocycler. The eluted DNA was bisulfite converted, amplified by PCR, and a region of interest was pyrosequenced.

  11. Quasi-Static Analysis of Round LaRC THUNDER Actuators

    NASA Technical Reports Server (NTRS)

    Campbell, Joel F.

    2007-01-01

    An analytic approach is developed to predict the shape and displacement with voltage in the quasi-static limit of round LaRC Thunder Actuators. The problem is treated with classical lamination theory and Von Karman non-linear analysis. In the case of classical lamination theory exact analytic solutions are found. It is shown that classical lamination theory is insufficient to describe the physical situation for large actuators but is sufficient for very small actuators. Numerical results are presented for the non-linear analysis and compared with experimental measurements. Snap-through behavior, bifurcation, and stability are presented and discussed.

  12. Quasi-Static Analysis of LaRC THUNDER Actuators

    NASA Technical Reports Server (NTRS)

    Campbell, Joel F.

    2007-01-01

    An analytic approach is developed to predict the shape and displacement with voltage in the quasi-static limit of LaRC Thunder Actuators. The problem is treated with classical lamination theory and Von Karman non-linear analysis. In the case of classical lamination theory exact analytic solutions are found. It is shown that classical lamination theory is insufficient to describe the physical situation for large actuators but is sufficient for very small actuators. Numerical results are presented for the non-linear analysis and compared with experimental measurements. Snap-through behavior, bifurcation, and stability are presented and discussed.

  13. Unconditional analyses can increase efficiency in assessing gene-environment interaction of the case-combined-control design.

    PubMed

    Goldstein, Alisa M; Dondon, Marie-Gabrielle; Andrieu, Nadine

    2006-08-01

    A design combining both related and unrelated controls, named the case-combined-control design, was recently proposed to increase the power for detecting gene-environment (GxE) interaction. Under a conditional analytic approach, the case-combined-control design appeared to be more efficient and feasible than a classical case-control study for detecting interaction involving rare events. We now propose an unconditional analytic strategy to further increase the power for detecting GxE interactions. This strategy allows the estimation of the GxE interaction and the exposure (E) main effect under certain assumptions (e.g. no correlation in E between siblings and the same exposure frequency in both control groups). Only the genetic (G) main effect cannot be estimated, because its estimate is biased. Using simulations, we show that unconditional logistic regression analysis is often more efficient than conditional analysis for detecting GxE interaction, particularly for a rare gene and strong effects. The unconditional analysis is also at least as efficient as the conditional analysis when the gene is common and the main and joint effects of E and G are small. Under the required assumptions, the unconditional analysis retains more information than the conditional analysis, for which only discordant case-control pairs are informative, leading to more precise estimates of the odds ratios.
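
    A hedged sketch of the unconditional analysis described above (simulated data, invented effect sizes): the G:E product term in an ordinary logistic regression estimates the multiplicative interaction.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(2)
        n = 2000
        g = rng.binomial(1, 0.1, n)  # rare gene
        e = rng.binomial(1, 0.3, n)  # exposure
        logit = -2.0 + 0.1 * g + 0.2 * e + 1.0 * g * e
        case = rng.binomial(1, 1 / (1 + np.exp(-logit)))
        df = pd.DataFrame({"case": case, "G": g, "E": e})

        fit = smf.logit("case ~ G + E + G:E", data=df).fit(disp=0)
        print(np.exp(fit.params))  # odds ratios; G:E is the GxE interaction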

  14. Random Survival Forest in practice: a method for modelling complex metabolomics data in time to event analysis.

    PubMed

    Dietrich, Stefan; Floegel, Anna; Troll, Martina; Kühn, Tilman; Rathmann, Wolfgang; Peters, Anette; Sookthai, Disorn; von Bergen, Martin; Kaaks, Rudolf; Adamski, Jerzy; Prehn, Cornelia; Boeing, Heiner; Schulze, Matthias B; Illig, Thomas; Pischon, Tobias; Knüppel, Sven; Wang-Sattler, Rui; Drogan, Dagmar

    2016-10-01

    The application of metabolomics in prospective cohort studies is statistically challenging. Given the importance of appropriate statistical methods for selection of disease-associated metabolites in highly correlated complex data, we combined random survival forest (RSF) with an automated backward elimination procedure that addresses such issues. Our RSF approach was illustrated with data from the European Prospective Investigation into Cancer and Nutrition (EPIC)-Potsdam study, with concentrations of 127 serum metabolites as exposure variables and time to development of type 2 diabetes mellitus (T2D) as outcome variable. An analysis of this data set using Cox regression with a stepwise selection method was published recently. The methodological comparison (RSF versus Cox regression) was replicated in two independent cohorts. Finally, the R code for implementing the metabolite selection procedure in the RSF syntax is provided. The application of the RSF approach in EPIC-Potsdam resulted in the identification of 16 incident T2D-associated metabolites, which slightly improved prediction of T2D when used in addition to traditional T2D risk factors and also when used together with classical biomarkers. The identified metabolites partly agreed with previous findings using Cox regression, though RSF selected a higher number of highly correlated metabolites. The RSF method appeared to be a promising approach for identification of disease-associated variables in complex data with time to event as outcome. The demonstrated RSF approach provides findings comparable to those of the generally used Cox regression, but also addresses the problem of multicollinearity and is suitable for high-dimensional data. © The Author 2016; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
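
    The paper supplies R code; purely as an illustration of the same idea, here is a Python sketch using the scikit-survival package (an assumption of ours, not the authors' implementation). The backward-elimination loop over variable importances is omitted.

        import numpy as np
        from sksurv.ensemble import RandomSurvivalForest
        from sksurv.util import Surv

        rng = np.random.default_rng(3)
        X = rng.normal(size=(300, 20))  # stand-in metabolite matrix
        time = rng.exponential(10, 300) * np.exp(-0.3 * X[:, 0])
        event = rng.binomial(1, 0.7, 300).astype(bool)
        y = Surv.from_arrays(event=event, time=time)

        rsf = RandomSurvivalForest(n_estimators=200, min_samples_leaf=10,
                                   random_state=0).fit(X, y)
        print("C-index:", rsf.score(X, y))  # Harrell's concordance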

  15. Psychosocial work factors, major depressive and generalised anxiety disorders: results from the French national SIP study.

    PubMed

    Murcia, Marie; Chastang, Jean-François; Niedhammer, Isabelle

    2013-04-25

    Anxiety and depression are prevalent mental disorders in working populations. The risk factors for these disorders are not yet well understood. Developing knowledge on occupational risk factors for mental disorders appears crucial. This study investigates the association between various classical and emergent psychosocial work factors and major depressive and generalised anxiety disorders in the French working population. The study was based on a national random sample of 3765 men and 3944 women of the French working population (SIP 2006 survey). Major Depressive Disorder (MDD) and Generalised Anxiety Disorder (GAD) were measured using a standardised diagnostic interview (MINI). Occupational factors included psychosocial work factors as well as biomechanical, physical, and chemical exposures. Adjustment variables included age, occupation, marital status, social support, and life events. Multivariate analysis was performed using logistic regression. Low decision latitude, overcommitment, and emotional demands were found to be risk factors for both MDD and GAD in both genders. Other risk factors were observed: high psychological demands, low reward, ethical conflict, and job insecurity, but differences were found according to gender and outcome. Significant interaction terms were observed, suggesting that low decision latitude, high psychological demands, and job insecurity had stronger effects on mental disorders for men than for women. Given the cross-sectional study design, no causal conclusion could be drawn. This study showed significant associations between classical and emergent psychosocial work factors and MDD-GAD. Preventive actions targeting various psychosocial work factors, including emergent factors, may help to reduce mental disorders at the workplace. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. Are classic predictors of voltage valid in cardiac amyloidosis? A contemporary analysis of electrocardiographic findings.

    PubMed

    Sperry, Brett W; Vranian, Michael N; Hachamovitch, Rory; Joshi, Hariom; McCarthy, Meghann; Ikram, Asad; Hanna, Mazen

    2016-07-01

    Low voltage electrocardiography (ECG) coupled with increased ventricular wall thickness is the hallmark of cardiac amyloidosis. However, patient characteristics influencing voltage in the general population, including bundle branch block, have not been evaluated in amyloid heart disease. A retrospective analysis was performed of patients with newly diagnosed cardiac amyloidosis from 2002 to 2014. ECG voltage was calculated using limb (sum of QRS complex in leads I, II and III) and precordial (Sokolow: S in V1 plus R in V5-V6) criteria. The associations between voltage and clinical variables were tested using multivariable linear regression. A Cox model assessed the association of voltage with mortality. In 389 subjects (transthyretin ATTR 186, light chain AL 203), 30% had conduction delay (QRS >120ms). In those with narrow QRS, 68% met the low limb criterion, 72% the low Sokolow criterion and 57% both criteria, with lower voltages found in AL vs ATTR. LV mass index, as well as other factors that typically affect voltage in the general population (age, sex, race, hypertension, BSA, and smoking), was not associated with voltage in this cardiac amyloidosis cohort. Patients with LBBB and IVCD had similar voltages when compared to those with narrow QRS. Voltage was significantly associated with mortality (p<0.001 for both criteria) after multivariable adjustment. Classic predictors of ECG voltage in the general population are not valid in cardiac amyloidosis. In this cohort, the prevalence estimates of ventricular conduction delay and low voltage are higher than previously reported. Voltage predicts mortality after multivariable adjustment. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  17. Children with classic congenital adrenal hyperplasia have elevated serum leptin concentrations and insulin resistance: potential clinical implications.

    PubMed

    Charmandari, Evangelia; Weise, Martina; Bornstein, Stefan R; Eisenhofer, Graeme; Keil, Margaret F; Chrousos, George P; Merke, Deborah P

    2002-05-01

    Leptin is secreted by the white adipose tissue and modulates energy homeostasis. Nutritional, neural, neuroendocrine, paracrine, and autocrine factors, including the sympathetic nervous system and the adrenal medulla, have been implicated in the regulation of leptin secretion. Classic congenital adrenal hyperplasia (CAH) is characterized by a defect in cortisol and aldosterone secretion, impaired development and function of the adrenal medulla, and adrenal hyperandrogenism. To examine leptin secretion in patients with classic CAH in relation to their adrenomedullary function and insulin and androgen secretion, we studied 18 children with classic CAH (12 boys and 6 girls; age range 2-12 yr) and 28 normal children (16 boys and 12 girls; age range 5-12 yr) matched for body mass index (BMI). Serum leptin concentrations were significantly higher in patients with CAH than in control subjects (8.1 +/- 2.0 vs. 2.5 +/- 0.6 ng/ml, P = 0.01), and this difference persisted when leptin values were corrected for BMI. When compared with their normal counterparts, children with CAH had significantly lower plasma epinephrine (7.1 +/- 1.3 vs. 50.0 +/- 4.2, P < 0.001) and free metanephrine concentrations (18.4 +/- 2.4 vs. 46.5 +/- 4.0, P < 0.001) and higher fasting serum insulin (10.6 +/- 1.4 vs. 3.2 +/- 0.2 microU/ml, P < 0.001) and testosterone (23.7 +/- 5.3 vs. 4.6 +/- 0.5 ng/dl, P = 0.003) concentrations. Insulin resistance determined by the homeostasis model assessment method was significantly greater in children with classic CAH than in normal children (2.2 +/- 0.3 vs. 0.7 +/- 0.04, P < 0.001). Leptin concentrations were significantly and negatively correlated with epinephrine (r = -0.50, P = 0.001) and free metanephrine (r = -0.48, P = 0.002) concentrations. Stepwise multiple linear regression analysis indicated that serum leptin concentrations were best predicted by BMI in both patients and controls. Gender predicted serum leptin concentrations in controls but not in patients with classic CAH. No association was found between the dose of hydrocortisone and serum leptin (r = -0.17, P = 0.5) or insulin (r = 0.24, P = 0.3) concentrations in children with CAH. Our findings indicate that children with classic CAH have elevated fasting serum leptin and insulin concentrations, and insulin resistance. These most likely reflect differences in long-term adrenomedullary hypofunction and glucocorticoid therapy. Elevated leptin and insulin concentrations in patients with CAH may further enhance adrenal and ovarian androgen production, decrease the therapeutic efficacy of glucocorticoids, and contribute to later development of polycystic ovary syndrome and/or the metabolic syndrome and their complications.
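
    For reference, the homeostasis model assessment index quoted above is a simple arithmetic formula; a one-line implementation (standard formula, example numbers invented):

        def homa_ir(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
            """HOMA-IR = glucose [mg/dL] x insulin [uU/mL] / 405
            (equivalently glucose [mmol/L] x insulin [uU/mL] / 22.5)."""
            return glucose_mg_dl * insulin_uU_ml / 405.0

        print(round(homa_ir(85, 10.6), 2))  # -> 2.22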

  18. An A Priori Multiobjective Optimization Model of a Search and Rescue Network

    DTIC Science & Technology

    1992-03-01

    Classical sensitivity analysis and tolerance analysis were used to analyze the frequency assignments generated by the different weight... function for excess coverage of a frequency. Sensitivity analysis is used to investigate the robustness of the frequency assignments produced by the... interest. The linear program solution is used to produce classical sensitivity analysis for the weight ranges.

  19. Methods for estimation of radiation risk in epidemiological studies accounting for classical and Berkson errors in doses.

    PubMed

    Kukush, Alexander; Shklyar, Sergiy; Masiuk, Sergii; Likhtarov, Illya; Kovgan, Lina; Carroll, Raymond J; Bouville, Andre

    2011-02-16

    With a binary response Y, the dose-response model under consideration is logistic in flavor, with pr(Y=1 | D) = R/(1+R), R = λ0 + EAR·D, where λ0 is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^mes = f_i Q_i^mes / M_i^mes. Here, Q_i^mes is the measured content of radioiodine in the thyroid gland of person i at time t_mes, M_i^mes is the estimate of the thyroid mass, and f_i is the normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^Q and V_i^M, so that Q_i^mes = Q_i^tr V_i^Q (a classical measurement error model) and M_i^tr = M_i^mes V_i^M (a Berkson measurement error model). Here, Q_i^tr is the true content of radioactivity in the thyroid gland, and M_i^tr is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^mes, M_i^mes) and is ignored in the analysis. By means of Parametric Full Maximum Likelihood and Regression Calibration (under the assumption that the data set of true doses has a lognormal distribution), Nonparametric Full Maximum Likelihood, Nonparametric Regression Calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ0 and EAR. A simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were taken from earlier epidemiological studies, and the binary response was then simulated according to the dose-response model.
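
    A small simulation sketch of the error structure just described (parameter values invented, normalizing multiplier f taken as 1): classical error multiplies the measured activity, Berkson error multiplies the measured mass, and a naive fit would use the mismeasured dose.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 10_000
        lam0, ear = 0.5, 1.2  # illustrative baseline rate and excess risk

        Q_true = rng.lognormal(0, 0.5, n)
        M_meas = rng.lognormal(0, 0.3, n)
        Q_meas = Q_true * rng.lognormal(0, 0.4, n)  # classical: measured = true x error
        M_true = M_meas * rng.lognormal(0, 0.4, n)  # Berkson: true = measured x error
        D_true = Q_true / M_true
        D_meas = Q_meas / M_meas  # what a naive analysis would use

        R = lam0 + ear * D_true
        y = rng.binomial(1, R / (1 + R))  # pr(Y=1 | D) = R/(1+R)
        # Corrections such as regression calibration or SIMEX act on D_meas.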

  20. Importance of vital signs to the early diagnosis and severity of sepsis: association between vital signs and sequential organ failure assessment score in patients with sepsis.

    PubMed

    Kenzaka, Tsuneaki; Okayama, Masanobu; Kuroki, Shigehiro; Fukui, Miho; Yahata, Shinsuke; Hayashi, Hiroki; Kitao, Akihito; Sugiyama, Daisuke; Kajii, Eiji; Hashimoto, Masayoshi

    2012-01-01

    While much attention is given to the fifth vital sign, the utility of the 4 classic vital signs (blood pressure, respiratory rate, body temperature, and heart rate) has been neglected. The aim of this study was to assess a possible association between vital signs and the Sequential Organ Failure Assessment (SOFA) score in patients with sepsis. We performed a prospective, observational study of 206 patients with sepsis. Blood pressure, respiratory rate, body temperature, and heart rate were measured on arrival at the hospital. The SOFA score was also determined on the day of admission. Bivariate correlation analysis showed that all of the vital signs were correlated with the SOFA score. Multiple regression analysis indicated that decreased values of systolic blood pressure (multivariate regression coefficient [Coef] = -0.030, 95% confidence interval [CI] = -0.046 to -0.013) and diastolic blood pressure (Coef = -0.045, 95% CI = -0.070 to -0.019), increased respiratory rate (Coef = 0.176, 95% CI = 0.112 to 0.240), and increased shock index (Coef = 4.232, 95% CI = 2.401 to 6.062) significantly influenced the SOFA score. Increased respiratory rate and shock index were significantly correlated with disease severity in patients with sepsis. Evaluation of these signs may therefore improve early identification of severely ill patients at triage, allowing more aggressive and timely interventions to improve the prognosis of these patients.
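
    As a concrete illustration (synthetic numbers, not the study data): the shock index is heart rate divided by systolic blood pressure, and the reported analysis is an ordinary multiple linear regression of the SOFA score on the vital signs.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(5)
        n = 206
        df = pd.DataFrame({
            "sbp": rng.normal(110, 25, n), "dbp": rng.normal(65, 15, n),
            "rr": rng.normal(24, 6, n), "temp": rng.normal(38.0, 1.0, n),
            "hr": rng.normal(105, 20, n),
        })
        df["shock_index"] = df["hr"] / df["sbp"]  # heart rate / systolic BP
        df["sofa"] = 6 - 0.03 * df["sbp"] + 0.18 * df["rr"] + rng.normal(0, 2, n)

        print(smf.ols("sofa ~ sbp + dbp + rr + temp + hr", data=df).fit().params)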

  1. On Insensitivity of the Chi-Square Model Test to Nonlinear Misspecification in Structural Equation Models

    ERIC Educational Resources Information Center

    Mooijaart, Ab; Satorra, Albert

    2009-01-01

    In this paper, we show that for some structural equation models (SEM), the classical chi-square goodness-of-fit test is unable to detect the presence of nonlinear terms in the model. As an example, we consider a regression model with latent variables and interaction terms. Not only does the model test have zero power against that type of…

  2. On One Possible Generalization of the Regression Theorem

    NASA Astrophysics Data System (ADS)

    Bogolubov, N. N.; Soldatov, A. V.

    2018-03-01

    A general approach to the derivation of formally exact closed time-local or time-nonlocal evolution equations for non-equilibrium multi-time correlation functions of observables of an open quantum system, interacting simultaneously with external time-dependent classical fields and a dissipative environment, is discussed. The approach allows for the subsequent treatment of these equations within a perturbative scheme, assuming that the system-environment interaction is weak.

  3. Dynamics of Markets

    NASA Astrophysics Data System (ADS)

    McCauley, Joseph L.

    2009-09-01

    Preface; 1. Econophysics: why and what; 2. Neo-classical economic theory; 3. Probability and stochastic processes; 4. Introduction to financial economics; 5. Introduction to portfolio selection theory; 6. Scaling, pair correlations, and conditional densities; 7. Statistical ensembles: deducing dynamics from time series; 8. Martingale option pricing; 9. FX market globalization: evolution of the dollar to worldwide reserve currency; 10. Macroeconomics and econometrics: regression models vs. empirically based modeling; 11. Complexity; Index.

  4. The Weight of Euro Coins: Its Distribution Might Not Be as Normal as You Would Expect

    ERIC Educational Resources Information Center

    Shkedy, Ziv; Aerts, Marc; Callaert, Herman

    2006-01-01

    Classical regression models, ANOVA models and linear mixed models are just three examples (out of many) in which the normal distribution of the response is an essential assumption of the model. In this paper we use a dataset of 2000 euro coins containing information (up to the milligram) about the weight of each coin, to illustrate that the…

  5. Rasch-family models are more valuable than score-based approaches for analysing longitudinal patient-reported outcomes with missing data.

    PubMed

    de Bock, Élodie; Hardouin, Jean-Benoit; Blanchin, Myriam; Le Neel, Tanguy; Kubis, Gildas; Bonnaud-Antignac, Angélique; Dantan, Étienne; Sébille, Véronique

    2016-10-01

    The objective was to compare classical test theory and Rasch-family models derived from item response theory for the analysis of longitudinal patient-reported outcomes data with possibly informative intermittent missing items. A simulation study was performed in order to assess and compare the performance of classical test theory and the Rasch model in terms of bias, control of the type I error and power of the test of a time effect. The type I error was controlled for both classical test theory and the Rasch model, whether data were complete or some items were missing. Both methods were unbiased and displayed similar power with complete data. When items were missing, the Rasch model remained unbiased and displayed higher power than classical test theory. The Rasch model thus performed better than the classical test theory approach for analysing longitudinal patient-reported outcomes with possibly informative intermittent missing items, mainly in terms of power. This study highlights the interest of Rasch-based models in clinical research and epidemiology for the analysis of incomplete patient-reported outcomes data. © The Author(s) 2013.
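
    For readers unfamiliar with the model, the Rasch probability is a one-parameter logistic in the difference between person ability and item difficulty; a short simulation with invented parameters:

        import numpy as np

        rng = np.random.default_rng(6)
        theta = rng.normal(0, 1, 500)   # person abilities
        b = np.linspace(-1.5, 1.5, 10)  # item difficulties

        # Rasch model: P(X_ij = 1) = exp(theta_i - b_j) / (1 + exp(theta_i - b_j))
        p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
        X = rng.binomial(1, p)

        # Under the Rasch model the raw sum score is sufficient for theta,
        # which is what lets responses be modelled even when some items are missing.
        print(X.sum(axis=1)[:10])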

  6. Evaluation of the mathematical and economic basis for conversion processes in the LEAP energy-economy model

    NASA Astrophysics Data System (ADS)

    Oblow, E. M.

    1982-10-01

    An evaluation was made of the mathematical and economic basis for conversion processes in the Long-term Energy Analysis Program (LEAP) energy economy model. Conversion processes are the main modeling subunit in LEAP used to represent energy conversion industries and are supposedly based on the classical economic theory of the firm. Questions about uniqueness and existence of LEAP solutions and their relation to classical equilibrium economic theory prompted the study. An analysis of classical theory and LEAP model equations was made to determine their exact relationship. The conclusions drawn from this analysis were that LEAP theory is not consistent with the classical theory of the firm. Specifically, the capacity factor formalism used by LEAP does not support a classical interpretation in terms of a technological production function for energy conversion processes. The economic implications of this inconsistency are suboptimal process operation and short term negative profits in years where plant operation should be terminated. A new capacity factor formalism, which retains the behavioral features of the original model, is proposed to resolve these discrepancies.

  7. Zonal wavefront reconstruction in quadrilateral geometry for phase measuring deflectometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Lei; Xue, Junpeng; Gao, Bo

    2017-06-14

    There are wide applications for zonal reconstruction methods in slope-based metrology due to their good capability of reconstructing the local details of a surface profile. It has been noticed in the literature that large reconstruction errors occur when zonal reconstruction methods designed for rectangular geometry are used to process slopes in a quadrilateral geometry, which is the more general geometry in phase measuring deflectometry. In this paper, we present a new idea for zonal methods in quadrilateral geometry. Instead of employing the intermediate slopes to set up height-slope equations, we consider the height increment as a more general connector to establish the height-slope relations for least-squares regression. The classical zonal methods and interpolation-assisted zonal methods are compared with our proposal. Results of both simulation and experiment demonstrate the effectiveness of the proposed idea. In implementation, the modification of the classical zonal methods is addressed. The new methods preserve many good aspects of the classical ones, such as the ability to handle a large incomplete slope dataset in an arbitrary aperture, and a low computational complexity comparable with the classical zonal method. Moreover, the accuracy of the new methods is much higher when integrating slopes in quadrilateral geometry.
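
    The height-increment idea can be sketched in a few lines (a toy rectangular grid rather than the paper's quadrilateral geometry): each measured slope contributes one least-squares equation linking two neighbouring heights.

        import numpy as np
        from scipy.sparse import lil_matrix
        from scipy.sparse.linalg import lsqr

        ny, nx, d = 20, 20, 1.0
        rng = np.random.default_rng(7)
        yy, xx = np.mgrid[0:ny, 0:nx]
        z = np.sin(xx / 5.0) + 0.5 * np.cos(yy / 4.0)  # ground-truth surface
        sx = np.diff(z, axis=1) / d + rng.normal(0, 1e-3, (ny, nx - 1))
        sy = np.diff(z, axis=0) / d + rng.normal(0, 1e-3, (ny - 1, nx))

        idx = np.arange(ny * nx).reshape(ny, nx)
        A = lil_matrix((sx.size + sy.size + 1, ny * nx))
        b = np.empty(A.shape[0])
        k = 0
        for j in range(ny):      # height increments in x
            for i in range(nx - 1):
                A[k, idx[j, i + 1]] = 1; A[k, idx[j, i]] = -1; b[k] = sx[j, i] * d; k += 1
        for j in range(ny - 1):  # height increments in y
            for i in range(nx):
                A[k, idx[j + 1, i]] = 1; A[k, idx[j, i]] = -1; b[k] = sy[j, i] * d; k += 1
        A[k, 0] = 1; b[k] = 0.0  # pin the unconstrained piston term
        z_hat = lsqr(A.tocsr(), b)[0].reshape(ny, nx)
        print(np.abs((z_hat - z) - (z_hat - z).mean()).max())  # small residual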

  8. Analysis of Flexible Bars and Frames with Large Displacements of Nodes By Finite Element Method in the Form of Classical Mixed Method

    NASA Astrophysics Data System (ADS)

    Ignatyev, A. V.; Ignatyev, V. A.; Onischenko, E. V.

    2017-11-01

    This article continues the authors' work on the development of algorithms that implement the finite element method in the form of a classical mixed method for the analysis of geometrically nonlinear bar systems [1-3]. The paper describes an improved algorithm for forming the system of nonlinear governing equations for flexible plane frames and bars with large displacements of nodes, based on the finite element method in classical mixed form and the use of a step-by-step loading procedure. An example of the analysis is given.

  9. Semi-classical analysis and pseudo-spectra

    NASA Astrophysics Data System (ADS)

    Davies, E. B.

    We prove an approximate spectral theorem for non-self-adjoint operators and investigate its applications to second-order differential operators in the semi-classical limit. This leads to the construction of a twisted FBI transform. We also investigate the connections between pseudo-spectra and boundary conditions in the semi-classical limit.

  10. The Statistical Interpretation of Classical Thermodynamic Heating and Expansion Processes

    ERIC Educational Resources Information Center

    Cartier, Stephen F.

    2011-01-01

    A statistical model has been developed and applied to interpret thermodynamic processes typically presented from the macroscopic, classical perspective. Through this model, students learn and apply the concepts of statistical mechanics, quantum mechanics, and classical thermodynamics in the analysis of the (i) constant volume heating, (ii)…

  11. Turning Points in the Development of Classical Musicians

    ERIC Educational Resources Information Center

    Gabor, Elena

    2011-01-01

    This qualitative study investigated the vocational socialization turning points in families of classical musicians. I sampled and interviewed 20 parent-child dyads, for a total of 46 interviews. Data analysis revealed that classical musicians' experiences were marked by 11 turning points that affected their identification with the occupation:…

  12. Autism in Early Childhood: An Unusual Developmental Course—Three Case Reports

    PubMed Central

    Cohen-Ophir, Michal; Castel-Deutsh, Tsophia; Tirosh, Emanuel

    2012-01-01

    Autistic spectrum disorder (ASD) is typically characterized by either an emerging and gradual course or developmental regression in early childhood. This versatile clinical course has been progressively acknowledged in recent years. Children with developmental disorders in general are referred to the Child Development Center for multidisciplinary assessment, investigation, treatment and follow-up. We report three infants with an initial diagnosis of developmental delay, recovery of normal development following intervention in a multidisciplinary center, and subsequent regression into classic autism following their discharge from the program. An extensive medical workup was noncontributory. This unusual presentation, to our knowledge not reported previously, should be recognized by professionals involved in child development and psychiatry. PMID:22937419

  13. Healthy life expectancy in Hong Kong Special Administrative Region of China.

    PubMed Central

    Law, C. K.; Yip, P. S. F.

    2003-01-01

    Sullivan's method and a regression model were used to calculate healthy life expectancy (HALE) for men and women in Hong Kong Special Administrative Region (Hong Kong SAR) of China. These methods need estimates of the prevalence and information on disability distributions of 109 diseases and HALE for 191 countries by age, sex and region of the world from the WHO's health assessment of 2000. The population of Hong Kong SAR has one of the highest healthy life expectancies in the world. Sullivan's method gives higher estimates than the classic linear regression method. Although Sullivan's method accurately calculates the influence of disease prevalence within small areas and regions, the regression method can approximate HALE for all economies for which information on life expectancy is available. This paper identifies some problems of the two methods and discusses the accuracy of estimates of HALE that rely on data from the WHO assessment. PMID:12640475
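
    Sullivan's method itself is a one-line weighting of the life table: person-years lived at each age are multiplied by the fraction of time lived free of disability, then cumulated as in ordinary life expectancy. A toy example with invented numbers:

        import numpy as np

        Lx = np.array([495_000, 492_000, 485_000, 460_000, 400_000, 250_000])  # person-years per age band
        l0 = 100_000                                                           # radix (survivors at age 0)
        prev = np.array([0.02, 0.03, 0.05, 0.10, 0.20, 0.40])                  # disability prevalence

        print("LE  :", Lx.sum() / l0)
        print("HALE:", (Lx * (1 - prev)).sum() / l0)  # Sullivan's method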

  14. Comparison of four extraction/methylation analytical methods to measure fatty acid composition by gas chromatography in meat.

    PubMed

    Juárez, M; Polvillo, O; Contò, M; Ficco, A; Ballico, S; Failla, S

    2008-05-09

    Four different extraction-derivatization methods commonly used for fatty acid analysis in meat (the in situ or one-step method, the saponification method, the classic method, and a combination of classic extraction with saponification derivatization) were tested. The in situ method had low recovery and low variation. The saponification method showed the best balance between recovery, precision, repeatability and reproducibility. The classic method had high recovery and acceptable variation values, except for the polyunsaturated fatty acids, which showed higher variation than with the former methods. The combination of extraction and methylation steps had high recovery values, but its precision, repeatability and reproducibility were not acceptable. Therefore, the saponification method would be most convenient for polyunsaturated fatty acid analysis, whereas the in situ method would be an alternative for fast analysis. The classic method, however, would be the method of choice for the determination of the different lipid classes.

  15. SU-E-I-98: PET/CT's Most-Cited 50 Articles since 2000: A Bibliometric Analysis of Classics.

    PubMed

    Sayed, G

    2012-06-01

    Despite its relatively recent introduction to clinical practice, PET/CT has gained wide acceptance both in diagnostic and therapeutic applications. Scientific publication in PET/CT has also experienced significant development since its introduction. Bibliometric analyses allow an understanding of how this publication trend has developed at an aggregated level. Citation analysis is one of the most widely used bibliometric tools of scientometrics. Analysis of classics, defined as articles with 100 or more citations, is common in the biomedical sciences as it reflects an article's influence in its professional and scientific community. Our objective was to identify the 50 most frequently cited classic articles in PET/CT in the past 10 years. The 50 most-cited PET/CT articles were identified by searching ISI's Web of Knowledge and Pubmed databases for all related publications from 2000 through 2010. Articles were evaluated for several characteristics such as author(s), institution, country of origin, publication year, type, and number of citations. An unadjusted categorical analysis was performed to compare all articles published in the search period. The search yielded a cumulative total of 22,554 entries for the publication period, of which 15,943 were original research articles. The 50 most-cited articles were identified from the latter sum and selected out of 73 classics. The number of citations for the top 50 classics ranged from 114 to 700. PET/CT classics appeared in three general and 12 core journals. The majority of the classics concerned oncologic applications of PET/CT (62%); 6% related to diagnostic topics, while the rest focused on physics and instrumentation (24%) and other basic sciences (16%). Despite its relatively short history, PET/CT accumulated 73 classic articles in a decade. Such information is of importance to researchers and those who wish to study the scientific development of the field. © 2012 American Association of Physicists in Medicine.

  16. Eigensystem analysis of classical relaxation techniques with applications to multigrid analysis

    NASA Technical Reports Server (NTRS)

    Lomax, Harvard; Maksymiuk, Catherine

    1987-01-01

    Classical relaxation techniques are related to numerical methods for solution of ordinary differential equations. Eigensystems for Point-Jacobi, Gauss-Seidel, and SOR methods are presented. Solution techniques such as eigenvector annihilation, eigensystem mixing, and multigrid methods are examined with regard to the eigenstructure.

  17. Study on a pattern classification method of soil quality based on simplified learning sample dataset

    USGS Publications Warehouse

    Zhang, Jiahua; Liu, S.; Hu, Y.; Tian, Y.

    2011-01-01

    Based on the massive soil information involved in current soil quality grade evaluation, this paper constructs an intelligent classification approach for soil quality grade, based on classical sampling techniques and a disordered multiclass Logistic regression model. As a case study, the learning sample capacity was determined under a given confidence level and estimation accuracy, and a c-means algorithm was used to automatically extract a simplified learning sample dataset from the cultivated soil quality grade evaluation database for the study area, Longchuan county in Guangdong province. A disordered Logistic classifier model was then built, and the computational steps of intelligent soil quality grade classification are given. The results indicate that soil quality grade can be effectively learned and predicted from the extracted simplified dataset using this method, changing the traditional approach to soil quality grade evaluation. © 2011 IEEE.
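
    A rough sketch of the two-step scheme above (synthetic data; scikit-learn's k-means standing in for the c-means step, and its multinomial logistic regression for the disordered Logistic model):

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(8)
        X = rng.normal(size=(5000, 6))  # soil indicator variables (synthetic)
        grade = np.digitize(X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 5000), [-1, 0, 1])

        # Step 1: compress the database to a simplified learning sample.
        km = KMeans(n_clusters=200, n_init=10, random_state=0).fit(X)
        centre_grade = np.array([np.bincount(grade[km.labels_ == c]).argmax()
                                 for c in range(200)])

        # Step 2: multinomial logistic regression on the reduced sample.
        clf = LogisticRegression(max_iter=1000).fit(km.cluster_centers_, centre_grade)
        print("accuracy on the full database:", clf.score(X, grade))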

  18. A chemometric approach to the characterisation of historical mortars

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rampazzi, L.; Pozzi, A.; Sansonetti, A.

    2006-06-15

    The compositional knowledge of historical mortars is of great concern in provenance and dating investigations and in conservation work, since the nature of the raw materials suggests the most compatible conservation products. The classic characterisation usually goes through various analytical determinations, while conservation laboratories call for simple and quick analyses able to reveal the nature of mortars, usually in terms of the binder fraction. A chemometric approach to the matter is undertaken here. Specimens of mortars were prepared with calcitic and dolomitic binders and analysed by atomic spectroscopy. Principal Components Analysis (PCA) was used to investigate the features of specimens and samples. A Partial Least Squares (PLS1) regression was performed in order to predict the binder/aggregate ratio. The model was applied to historical mortars from the churches of St. Lorenzo (Milan) and St. Abbondio (Como). The agreement between the predictive model and the real samples is discussed.

  19. Emergent biomarker derived from next-generation sequencing to identify pain patients requiring uncommonly high opioid doses

    PubMed Central

    Kringel, D; Ultsch, A; Zimmermann, M; Jansen, J-P; Ilias, W; Freynhagen, R; Griessinger, N; Kopf, A; Stein, C; Doehring, A; Resch, E; Lötsch, J

    2017-01-01

    Next-generation sequencing (NGS) provides unrestricted access to the genome, but it produces ‘big data’ exceeding in amount and complexity the classical analytical approaches. We introduce a bioinformatics-based classifying biomarker that uses emergent properties in genetics to separate pain patients requiring extremely high opioid doses from controls. Following precisely calculated selection of the 34 most informative markers in the OPRM1, OPRK1, OPRD1 and SIGMAR1 genes, pattern of genotypes belonging to either patient group could be derived using a k-nearest neighbor (kNN) classifier that provided a diagnostic accuracy of 80.6±4%. This outperformed alternative classifiers such as reportedly functional opioid receptor gene variants or complex biomarkers obtained via multiple regression or decision tree analysis. The accumulation of several genetic variants with only minor functional influences may result in a qualitative consequence affecting complex phenotypes, pointing at emergent properties in genetics. PMID:27139154
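
    The classifier at the core of the biomarker is plain k-nearest neighbours over genotype vectors; a minimal stand-in (random data, invented 0/1/2 coding) using scikit-learn:

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(9)
        G = rng.integers(0, 3, size=(200, 34))  # 34 markers coded 0/1/2
        label = rng.binomial(1, 0.5, 200)       # high-dose patient vs control

        knn = KNeighborsClassifier(n_neighbors=5, metric="hamming")
        print(cross_val_score(knn, G, label, cv=5).mean())  # ~0.5 on random data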

  20. Emergent biomarker derived from next-generation sequencing to identify pain patients requiring uncommonly high opioid doses.

    PubMed

    Kringel, D; Ultsch, A; Zimmermann, M; Jansen, J-P; Ilias, W; Freynhagen, R; Griessinger, N; Kopf, A; Stein, C; Doehring, A; Resch, E; Lötsch, J

    2017-10-01

    Next-generation sequencing (NGS) provides unrestricted access to the genome, but it produces 'big data' exceeding in amount and complexity the classical analytical approaches. We introduce a bioinformatics-based classifying biomarker that uses emergent properties in genetics to separate pain patients requiring extremely high opioid doses from controls. Following precisely calculated selection of the 34 most informative markers in the OPRM1, OPRK1, OPRD1 and SIGMAR1 genes, pattern of genotypes belonging to either patient group could be derived using a k-nearest neighbor (kNN) classifier that provided a diagnostic accuracy of 80.6±4%. This outperformed alternative classifiers such as reportedly functional opioid receptor gene variants or complex biomarkers obtained via multiple regression or decision tree analysis. The accumulation of several genetic variants with only minor functional influences may result in a qualitative consequence affecting complex phenotypes, pointing at emergent properties in genetics.

  1. Edge-Preserving Image Smoothing Constraint in Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) of Hyperspectral Data.

    PubMed

    Hugelier, Siewert; Vitale, Raffaele; Ruckebusch, Cyril

    2018-03-01

    This article explores smoothing with edge-preserving properties as a spatial constraint for the resolution of hyperspectral images with multivariate curve resolution-alternating least squares (MCR-ALS). For each constrained component image (distribution map), irrelevant spatial details and noise are smoothed by applying an L1- or L0-norm penalized least squares regression, in this way preserving large changes in intensity between adjacent pixels. The feasibility of the constraint is demonstrated on three different case studies, in which the objects under investigation are spatially clearly defined but have significant spectral overlap. This spectral overlap is detrimental to obtaining a good resolution, so additional spatial information should be provided. The final results show that the spatial constraint enables better image (map) abstraction, artifact removal, and better interpretation of the results obtained, compared to a classical MCR-ALS analysis of hyperspectral images.
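
    The penalty idea can be illustrated in one dimension (a sketch, not the authors' implementation): an L1-type difference penalty smooths small fluctuations while letting large jumps survive, here approximated by iteratively reweighted least squares.

        import numpy as np

        def edge_preserving_smooth(y, lam=5.0, n_iter=30, eps=1e-6):
            """min ||y - x||^2 + lam * sum|x_{i+1} - x_i|, solved via IRLS."""
            n = len(y)
            D = np.diff(np.eye(n), axis=0)  # first-difference operator
            x = y.copy()
            for _ in range(n_iter):
                w = 1.0 / (np.abs(D @ x) + eps)  # weights approximating the L1 norm
                x = np.linalg.solve(np.eye(n) + lam * D.T @ (w[:, None] * D), y)
            return x

        t = np.linspace(0, 1, 200)
        step = (t > 0.5).astype(float)  # a sharp "edge"
        noisy = step + np.random.default_rng(10).normal(0, 0.1, 200)
        print(np.abs(edge_preserving_smooth(noisy) - step).mean())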

  2. Primordial helium abundance determination using sulphur as metallicity tracer

    NASA Astrophysics Data System (ADS)

    Fernández, Vital; Terlevich, Elena; Díaz, Angeles I.; Terlevich, Roberto; Rosales-Ortega, F. F.

    2018-05-01

    The primordial helium abundance YP is calculated using sulphur as the metallicity tracer in the classical methodology (with YP obtained as the extrapolation of Y to zero metals). The calculated value, YP,S = 0.244 ± 0.006, is in good agreement with the estimate from the Planck experiment, as well as with determinations in the literature using oxygen as the metallicity tracer. The chemical analysis includes the subtraction of the nebular continuum and of the stellar continuum computed from simple stellar population synthesis grids. The S2+ content is measured from the near-infrared [SIII]λλ9069Å, 9532Å lines, while an ICF(S3+) is proposed based on the Ar3+/Ar2+ fraction. Finally, we apply a multivariable linear regression using simultaneously the oxygen, nitrogen and sulphur abundances for the same sample to determine the primordial helium abundance, resulting in YP,O,N,S = 0.245 ± 0.007.
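
    The "extrapolation to zero metals" step is an ordinary linear regression whose intercept is the primordial abundance; a schematic with invented data points:

        import numpy as np

        S_H = np.array([2.1, 3.4, 4.8, 6.2, 7.9, 9.5]) * 1e-6    # sulphur abundance S/H
        Y = np.array([0.246, 0.247, 0.249, 0.250, 0.252, 0.253])  # helium mass fraction

        slope, intercept = np.polyfit(S_H, Y, 1)
        print("YP ~", round(intercept, 3))  # intercept at S/H = 0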

  3. Analytical solution of Luedeking-Piret equation for a batch fermentation obeying Monod growth kinetics.

    PubMed

    Garnier, Alain; Gaillet, Bruno

    2015-12-01

    Few mathematical models of fermentation allow analytical solutions of batch process dynamics. The most widely used is the combination of logistic microbial growth kinetics with the Luedeking-Piret bioproduct synthesis relation. However, the logistic equation is principally based on formalistic similarities and only fits a limited range of fermentation types. In this article, we have developed an analytical solution for the combination of Monod growth kinetics with the Luedeking-Piret relation, which can be identified by linear regression and used to simulate batch fermentation evolution. Two classical examples are used to show the quality of fit and the simplicity of the proposed method. A solution for the combination of the Haldane substrate-limited growth model with the Luedeking-Piret relation is also provided. These models could prove useful for the analysis of fermentation data in industry as well as academia. © 2015 Wiley Periodicals, Inc.
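
    The identification step mentioned above is linear because the Luedeking-Piret relation dP/dt = alpha dX/dt + beta X is linear in (alpha, beta); a sketch with synthetic growth data:

        import numpy as np

        t = np.linspace(0, 10, 50)
        X = 5 / (1 + 9 * np.exp(-0.8 * t))  # biomass curve (illustrative)
        alpha, beta = 0.4, 0.05             # "true" Luedeking-Piret parameters
        dXdt = np.gradient(X, t)
        dPdt = alpha * dXdt + beta * X + np.random.default_rng(11).normal(0, 0.01, t.size)

        # Linear regression of dP/dt on (dX/dt, X) recovers alpha and beta.
        est = np.linalg.lstsq(np.column_stack([dXdt, X]), dPdt, rcond=None)[0]
        print(est)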

  4. TLE uncertainty estimation using robust weighted differencing

    NASA Astrophysics Data System (ADS)

    Geul, Jacco; Mooij, Erwin; Noomen, Ron

    2017-05-01

    Accurate knowledge of satellite orbit errors is essential for many types of analyses. Unfortunately, for two-line elements (TLEs) this is not available. This paper presents a weighted differencing method using robust least-squares regression for estimating many important error characteristics. The method is applied to both classic and enhanced TLEs, compared to previous implementations, and validated using Global Positioning System (GPS) solutions for the GOCE satellite in Low-Earth Orbit (LEO), prior to its re-entry. The method is found to be more accurate than previous TLE differencing efforts in estimating initial uncertainty, as well as error growth. The method also proves more reliable and requires no data filtering (such as outlier removal). Sensitivity analysis shows a strong relationship between argument of latitude and covariance (standard deviations and correlations), which the method is able to approximate. Overall, the method proves accurate, computationally fast, and robust, and is applicable to any object in the satellite catalogue (SATCAT).
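
    The robustness ingredient is standard M-estimation; a minimal sketch (synthetic differenced-orbit errors) using Huber-weighted least squares, which down-weights outliers so no manual filtering of the series is needed:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(12)
        t = np.linspace(0, 5, 100)                     # days since epoch
        err = 0.2 + 0.5 * t + rng.normal(0, 0.1, 100)  # TLE position differences (km)
        err[::17] += 5.0                               # occasional gross outliers

        rlm = sm.RLM(err, sm.add_constant(t), M=sm.robust.norms.HuberT()).fit()
        print(rlm.params)  # ~[0.2, 0.5]: initial uncertainty and error growth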

  5. Discordance between net analyte signal theory and practical multivariate calibration.

    PubMed

    Brown, Christopher D

    2004-08-01

    Lorber's concept of net analyte signal is reviewed in the context of classical and inverse least-squares approaches to multivariate calibration. It is shown that, in the presence of device measurement error, the classical and inverse calibration procedures have radically different theoretical prediction objectives, and the assertion that the popular inverse least-squares procedures (including partial least squares, principal components regression) approximate Lorber's net analyte signal vector in the limit is disproved. Exact theoretical expressions for the prediction error bias, variance, and mean-squared error are given under general measurement error conditions, which reinforce the very discrepant behavior between these two predictive approaches, and Lorber's net analyte signal theory. Implications for multivariate figures of merit and numerous recently proposed preprocessing treatments involving orthogonal projections are also discussed.
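
    The classical/inverse distinction is easy to state in code (a schematic with random "spectra"): classical least squares models the spectrum from concentrations and then inverts, while inverse least squares regresses concentration directly on the spectrum.

        import numpy as np

        rng = np.random.default_rng(13)
        K = rng.uniform(0, 1, (50, 3))   # pure-component spectra (3 analytes, 50 channels)
        C = rng.uniform(0, 1, (100, 3))  # concentrations
        S = C @ K.T + rng.normal(0, 0.01, (100, 50))  # measured spectra

        C_cls = S @ K @ np.linalg.inv(K.T @ K)    # classical: model S = C K^T, invert
        B = np.linalg.lstsq(S, C, rcond=None)[0]  # inverse: regress C on S
        print(np.abs(C_cls - C).mean(), np.abs(S @ B - C).mean())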

  6. Psychosocial work factors and sleep problems: findings from the French national SIP survey.

    PubMed

    Chazelle, Emilie; Chastang, Jean-François; Niedhammer, Isabelle

    2016-04-01

    This study aimed at exploring the cross-sectional and prospective associations between psychosocial work factors and sleep problems. The study population consisted of a national representative sample of the French working population (SIP survey). The sample sizes were 7506 and 3555 for the cross-sectional and prospective analyses. Sleep problems were defined by either sleep disturbances or insufficient sleep duration at least several times a week. Psychosocial work factors included classical (job strain model factors) and emergent factors (recognition, insecurity, role/ethical conflict, emotional demands, work-life imbalance, etc.). Occupational factors related to working time/hours and physical work environment were also included as well as covariates related to factors outside work. Statistical analyses were performed using weighted Poisson regression analysis. In the cross-sectional analyses, psychological demands, low social support, low recognition, emotional demands, perception of danger, work-life imbalance and night work were found to be associated with sleep problems. In the prospective analyses, psychological demands and night work were predictive of sleep problems. Using a less conservative method, more factors were found to be associated with sleep problems. Dose-response associations were observed, showing that the more frequent the exposure to these factors, the higher the risk of sleep problems. No effect of repeated exposure was found on sleep problems. Classical and emergent psychosocial work factors were associated with sleep problems. More prospective studies and prevention policies may be needed.

  7. The Singing Rod (in the Modern Age)

    ERIC Educational Resources Information Center

    Lasby, B.; O'Meara, J. M.; Williams, M.

    2014-01-01

    This is a classic classroom demonstration of resonance, nodes, anti-nodes, and standing waves that has been described elsewhere. The modern age twist that we are advocating is the coupling of this classic demo with free (or relatively inexpensive) sound analysis software, thereby allowing for quantitative analysis of resonance while experimenting…

  8. Data from: Retrospective analysis of a classical biological control programme

    USDA-ARS?s Scientific Manuscript database

    This database contains the raw data for the publication entitled Naranjo, S.E. 2018. Retrospective analysis of a classical biological control programme. Journal of Applied Ecology https://doi.org/10.1111/1365-2664.13163. Specific data include field-based, partial life table data for immature stage...

  9. Laban Movement Analysis Approach to Classical Ballet Pedagogy

    ERIC Educational Resources Information Center

    Whittier, Cadence

    2006-01-01

    As a Certified Laban Movement Analyst and a classically trained ballet dancer, I consistently weave the Laban Movement Analysis/Bartenieff Fundamentals (LMA/BF) theories and philosophies into the ballet class. This integration assists in: (1) Identifying the qualitative movement elements both in the art of ballet and in the students' dancing…

  10. Efficient inference for genetic association studies with multiple outcomes.

    PubMed

    Ruffieux, Helene; Davison, Anthony C; Hager, Jorg; Irincheeva, Irina

    2017-10-01

    Combined inference for heterogeneous high-dimensional data is critical in modern biology, where clinical and various kinds of molecular data may be available from a single study. Classical genetic association studies regress a single clinical outcome on many genetic variants one by one, but there is an increasing demand for joint analysis of many molecular outcomes and genetic variants in order to unravel functional interactions. Unfortunately, most existing approaches to joint modeling are either too simplistic to be powerful or are impracticable for computational reasons. Inspired by Richardson and others (2010, Bayesian Statistics 9), we consider a sparse multivariate regression model that allows simultaneous selection of predictors and associated responses. As Markov chain Monte Carlo (MCMC) inference on such models can be prohibitively slow when the number of genetic variants exceeds a few thousand, we propose a variational inference approach which produces posterior information very close to that of MCMC inference, at a much reduced computational cost. Extensive numerical experiments show that our approach outperforms popular variable selection methods and tailored Bayesian procedures, dealing within hours with problems involving hundreds of thousands of genetic variants and tens to hundreds of clinical or molecular outcomes. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  11. Joint nonparametric correction estimator for excess relative risk regression in survival analysis with exposure measurement error

    PubMed Central

    Wang, Ching-Yun; Cullings, Harry; Song, Xiao; Kopecky, Kenneth J.

    2017-01-01

    Observational epidemiological studies often confront the problem of estimating exposure-disease relationships when the exposure is not measured exactly. In this paper, we investigate exposure measurement error in excess relative risk regression, which is a widely used model in radiation exposure effect research. In the study cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies a generalized version of the classical additive measurement error model, but it may or may not have repeated measurements. In addition, an instrumental variable is available for individuals in a subset of the whole cohort. We develop a nonparametric correction (NPC) estimator using data from the subcohort, and further propose a joint nonparametric correction (JNPC) estimator using all observed data to adjust for exposure measurement error. An optimal linear combination estimator of JNPC and NPC is further developed. The proposed estimators are nonparametric and consistent without imposing a covariate or error distribution, and are robust to heteroscedastic errors. Finite sample performance is examined via a simulation study. We apply the developed methods to data from the Radiation Effects Research Foundation, in which chromosome aberration is used to adjust for the effects of radiation dose measurement error on the estimation of radiation dose responses. PMID:29354018

  12. FABP4 and Cardiovascular Events in Peripheral Arterial Disease.

    PubMed

    Höbaus, Clemens; Herz, Carsten Thilo; Pesau, Gerfried; Wrba, Thomas; Koppensteiner, Renate; Schernthaner, Gerit-Holger

    2018-05-01

    Fatty acid-binding protein 4 (FABP4) is a possible biomarker of atherosclerosis. We evaluated FABP4 levels, for the first time, in patients with peripheral artery disease (PAD) and the possible association between baseline FABP4 levels and cardiovascular events over time. Patients (n = 327; mean age 69 ± 10 years) with stable PAD were enrolled in this study. Serum FABP4 was measured by bead-based multiplex assay. Cardiovascular events were analyzed by FABP4 tertiles using Kaplan-Meier and Cox regression analyses after 5 years. Serum FABP4 levels showed a significant association with the classical 3-point major adverse cardiovascular event (MACE) end point (including death, nonlethal myocardial infarction, or nonfatal stroke) in patients with PAD (P = .038). A standard deviation increase of FABP4 resulted in a hazard ratio (HR) of 1.33 (95% confidence interval [95% CI]: 1.03-1.71) for MACE. This association increased (HR: 1.47, 95% CI: 1.03-1.71) after multivariable adjustment (P = .020). Additionally, in multivariable linear regression analysis, FABP4 was linked to estimated glomerular filtration rate (P < .001), gender (P = .005), fasting triglycerides (P = .048), and body mass index (P < .001). Circulating FABP4 may be a useful additional biomarker to evaluate patients with stable PAD at risk of major cardiovascular complications.
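
    A sketch of the survival analysis described above (synthetic data; the lifelines package is our assumption, not the authors' software), with FABP4 standardised so the hazard ratio is per standard deviation:

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(14)
        n = 327
        fabp4 = rng.lognormal(3, 0.5, n)
        z = (fabp4 - fabp4.mean()) / fabp4.std()  # per-SD scaling
        df = pd.DataFrame({
            "fabp4_sd": z,
            "time": rng.exponential(5, n) * np.exp(-0.3 * z),  # years to MACE
            "event": rng.binomial(1, 0.4, n),
        })

        cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
        print(np.exp(cph.params_))  # hazard ratio per SD of FABP4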

  13. Climate regime shifts in paleoclimate time series from the Yucatán Peninsula: from the Preclassic to Classic period

    NASA Astrophysics Data System (ADS)

    Polanco Martínez, Josue M.; Medina-Elizalde, Martin; Burns, Stephen J.; Jiang, Xiuyang; Shen, Chuan-Chou

    2015-04-01

    It has been widely accepted by the paleoclimate and archaeology communities that extreme climate events (especially droughts) and past climate change played an important role in the cultural changes that occurred in at least some parts of the Maya Lowlands, from the Preclassic (2000 BC to 250 AD) to Post-Classic (1000 to 1521 AD) periods [1, 2]. In particular, a large number of studies suggest that the decline of the Maya civilization in the Terminal Classic Period was greatly influenced by prolonged severe drought events that probably triggered significant societal disruptions [1, 3, 4, 5]. Going further on these issues, the aim of this work is to detect climate regime shifts in several paleoclimate time series from the Yucatán Peninsula (México) that have been used as rainfall proxies [3, 5, 6, 7]. In order to extract information from the paleoclimate data studied, we have used a change point method [8] as implemented in the R package strucchange, as well as the RAMPFIT method [9]. The preliminary results show, for all the records analysed, a prominent regime shift between 400 and 200 BCE (from a noticeable increase to a remarkable fall in precipitation), which is strongest in the recently obtained stalagmite (Itzamna) δ18O precipitation record [7].

    References:
    [1] Gunn, J. D., Matheny, R. T., Folan, W. J., 2002. Climate-change studies in the Maya area. Ancient Mesoamerica 13(01), 79-84.
    [2] Yaeger, J., Hodell, D. A., 2008. The collapse of Maya civilization: assessing the interaction of culture, climate, and environment. In: El Niño, Catastrophism, and Culture Change in Ancient America, 197-251.
    [3] Hodell, D. A., Curtis, J. H., Brenner, M., 1995. Possible role of climate in the collapse of Classic Maya civilization. Nature 375(6530), 391-394.
    [4] Aimers, J., Hodell, D., 2011. Societal collapse: Drought and the Maya. Nature 479(7371), 44-45.
    [5] Medina-Elizalde, M., Rohling, E. J., 2012. Collapse of Classic Maya civilization related to modest reduction in precipitation. Science 335(6071), 956-959.
    [6] Medina-Elizalde, M., Burns, S. J., Lea, D. W., Asmerom, Y., von Gunten, L., Polyak, V., Vuille, M., Karmalkar, A., 2010. High resolution stalagmite climate record from the Yucatán Peninsula spanning the Maya terminal classic period. Earth and Planetary Science Letters 298(1), 255-262.
    [7] Medina-Elizalde, M., Burns, S. J., Jiang, X., Shen, C.-C., Lases-Hernandez, F., Polanco-Martinez, J. M. High-resolution stalagmite record from the Yucatan Peninsula spanning the Preclassic period; work in progress, to be submitted to Global and Planetary Change (by invitation).
    [8] Zeileis, A., Leisch, F., Hornik, K., Kleiber, C., 2002. strucchange: An R Package for Testing for Structural Change in Linear Regression Models. Journal of Statistical Software 7(2), 1-38.
    [9] Mudelsee, M., 2000. Ramp function regression: a tool for quantifying climate transitions. Computers & Geosciences 26(3), 293-307.

  14. Friedrich Nietzsche in Basel: An Apology for Classical Studies

    ERIC Educational Resources Information Center

    Santini, Carlotta

    2018-01-01

    Alongside his work as a professor of Greek Language and Literature at the University of Basel, Friedrich Nietzsche reflected on the value of classical studies in contemporary nineteenth-century society, starting with a self-analysis of his own classical training and position as a philologist and teacher. Contrary to his well-known aversion to…

  15. Fourier Analysis in Introductory Physics

    ERIC Educational Resources Information Center

    Huggins, Elisha

    2007-01-01

    In an after-dinner talk at the fall 2005 meeting of the New England chapter of the AAPT, Professor Robert Arns drew an analogy between classical physics and Classic Coke. To generations of physics teachers and textbook writers, classical physics was the real thing. Modern physics, which in introductory textbooks "appears in one or more extra…

  16. Multimodal Image Analysis in Alzheimer’s Disease via Statistical Modelling of Non-local Intensity Correlations

    NASA Astrophysics Data System (ADS)

    Lorenzi, Marco; Simpson, Ivor J.; Mendelson, Alex F.; Vos, Sjoerd B.; Cardoso, M. Jorge; Modat, Marc; Schott, Jonathan M.; Ourselin, Sebastien

    2016-04-01

    The joint analysis of brain atrophy measured with magnetic resonance imaging (MRI) and hypometabolism measured with positron emission tomography with fluorodeoxyglucose (FDG-PET) is of primary importance in developing models of pathological changes in Alzheimer’s disease (AD). Most of the current multimodal analyses in AD assume a local (spatially overlapping) relationship between MR and FDG-PET intensities. However, it is well known that atrophy and hypometabolism are prominent in different anatomical areas. The aim of this work is to describe the relationship between atrophy and hypometabolism by means of a data-driven statistical model of non-overlapping intensity correlations. For this purpose, FDG-PET and MRI signals are jointly analyzed through a computationally tractable formulation of partial least squares regression (PLSR). The PLSR model is estimated and validated on a large clinical cohort of 1049 individuals from the ADNI dataset. Results show that the proposed non-local analysis outperforms classical local approaches in terms of predictive accuracy while providing a plausible description of disease dynamics: early AD is characterised by non-overlapping temporal atrophy and temporo-parietal hypometabolism, while the later disease stages show overlapping brain atrophy and hypometabolism spread in temporal, parietal and cortical areas.
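
    The PLSR step described above can be sketched with scikit-learn. The arrays below are random placeholders standing in for subject-by-feature MRI and FDG-PET matrices; they are not ADNI data, and five components is an arbitrary choice, not the authors' setting.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 500))                      # stand-in for MRI features
      W = rng.normal(size=(5, 300))
      Y = X[:, :5] @ W + rng.normal(scale=0.5, size=(200, 300))  # stand-in for FDG-PET

      X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
      pls = PLSRegression(n_components=5).fit(X_tr, Y_tr)
      print("out-of-sample R^2:", pls.score(X_te, Y_te))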

  17. Mediation analysis when a continuous mediator is measured with error and the outcome follows a generalized linear model

    PubMed Central

    Valeri, Linda; Lin, Xihong; VanderWeele, Tyler J.

    2014-01-01

    Mediation analysis is a popular approach to examine the extent to which the effect of an exposure on an outcome is through an intermediate variable (mediator) and the extent to which the effect is direct. When the mediator is mis-measured, the validity of mediation analysis can be severely undermined. In this paper we first study the bias of classical, non-differential measurement error on a continuous mediator in the estimation of direct and indirect causal effects in generalized linear models when the outcome is either continuous or discrete and exposure-mediator interaction may be present. Our theoretical results, as well as a numerical study, demonstrate that in the presence of non-linearities the bias of naive estimators for direct and indirect effects that ignore measurement error can take unintuitive directions. We then develop methods to correct for measurement error. Three correction approaches using method of moments, regression calibration and SIMEX are compared. We apply the proposed method to the Massachusetts General Hospital lung cancer study to evaluate the effect of genetic variants mediated through smoking on lung cancer risk. PMID:25220625
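
    The regression-calibration idea for classical measurement error can be shown in a small simulation: the naive slope is attenuated by the reliability ratio var(X)/(var(X)+var(U)), and dividing by that ratio recovers the true slope when the error variance is known. This sketch covers only the simplest linear case, not the generalized-linear-model setting of the paper.

      import numpy as np

      rng = np.random.default_rng(2)
      n, beta, var_u = 5000, 1.5, 0.5
      x = rng.normal(size=n)                                   # true covariate
      w = x + rng.normal(scale=np.sqrt(var_u), size=n)         # error-prone measurement
      y = beta * x + rng.normal(size=n)

      beta_naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)      # attenuated slope
      reliability = (np.var(w, ddof=1) - var_u) / np.var(w, ddof=1)
      beta_corrected = beta_naive / reliability                # regression calibration

      print(f"naive {beta_naive:.3f}, corrected {beta_corrected:.3f}, true {beta}")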

  18. Use of segmented constrained layer damping treatment for improved helicopter aeromechanical stability

    NASA Astrophysics Data System (ADS)

    Liu, Qiang; Chattopadhyay, Aditi; Gu, Haozhong; Liu, Qiang; Chattopadhyay, Aditi; Zhou, Xu

    2000-08-01

    The use of a special type of smart material, known as segmented constrained layer (SCL) damping, is investigated for improved rotor aeromechanical stability. The rotor blade load-carrying member is modeled using a composite box beam with arbitrary wall thickness. The SCLs are bonded to the upper and lower surfaces of the box beam to provide passive damping. A finite-element model based on a hybrid displacement theory is used to accurately capture the transverse shear effects in the composite primary structure and the viscoelastic and the piezoelectric layers within the SCL. Detailed numerical studies are presented to assess the influence of the number of actuators and their locations for improved aeromechanical stability. Ground and air resonance analysis models are implemented in the rotor blade built around the composite box beam with segmented SCLs. A classic ground resonance model and an air resonance model are used in the rotor-body coupled stability analysis. The Pitt dynamic inflow model is used in the air resonance analysis under hover condition. Results indicate that the surface bonded SCLs significantly increase rotor lead-lag regressive modal damping in the coupled rotor-body system.

  19. Multicomponent blood lipid analysis by means of near infrared spectroscopy, in geese.

    PubMed

    Bazar, George; Eles, Viktoria; Kovacs, Zoltan; Romvari, Robert; Szabo, Andras

    2016-08-01

    This study provides accurate near infrared (NIR) spectroscopic models of some laboratory-determined clinicochemical parameters (i.e. total lipid (5.57±1.95 g/l), triglyceride (2.59±1.36 mmol/l), total cholesterol (3.81±0.68 mmol/l), high density lipoprotein (HDL) cholesterol (2.45±0.58 mmol/l)) of blood serum samples of fattened geese. To increase the performance of the multivariate chemometrics, samples deviating significantly from the regression models, implying laboratory error, were excluded from the final calibration datasets. Reference data of excluded samples having outlier spectra in principal component analysis (PCA) were not marked as false. Samples deviating from the regression models but having non-outlier spectra in PCA were identified as having false reference constituent values. Based on the NIR selection methods, 5% of the reference measurement data were rated as doubtful. The achieved models reached R² of 0.864, 0.966, 0.850, 0.793, and RMSE of 0.639 g/l, 0.232 mmol/l, 0.210 mmol/l, 0.241 mmol/l for total lipid, triglyceride, total cholesterol and HDL cholesterol, respectively, during independent validation. Classical analytical techniques focus on single constituents and often require chemicals, time-consuming measurements, and experienced technicians. The NIR technique provides a quick, cost-effective, non-hazardous alternative method for analysis of several constituents based on a single spectrum of each sample, and it also offers the possibility of examining the laboratory reference data critically. Evaluation of reference data to identify and exclude falsely analyzed samples can provide warning feedback to the reference laboratory, especially in the case of analyses where laboratory methods are not perfectly suited to the subjected material and there is an increased chance of laboratory error. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Analytic Methods for Adjusting Subjective Rating Schemes

    DTIC Science & Technology

    1976-06-01

    individual performance. The approach developed here is a variant of the classical linear regression model. Specifically, it is proposed that...values of y and X. Moreover, this difference is generally independent of sample size, so that LS estimates are different from ML estimates at...observations. However, as T...the limit (4.10) is satisfied, and EKV and ML estimates are equivalent. A practical problem in applying

  1. Challenges of Electronic Medical Surveillance Systems

    DTIC Science & Technology

    2004-06-01

    More sophisticated approaches, such as regression models and classical autoregressive integrated moving average (ARIMA) models that make estimates based on...with those predicted by a mathematical model. The primary benefit of ARIMA models is their ability to correct for local trends in the data so that...works well, for example, during a particularly severe flu season, where prolonged periods of high visit rates are adjusted to by the ARIMA model, thus

  2. MOTIVATION INTERNALIZATION AND SIMPLEX STRUCTURE IN SELF-DETERMINATION THEORY.

    PubMed

    Ünlü, Ali; Dettweiler, Ulrich

    2015-12-01

    Self-determination theory, as proposed by Deci and Ryan, postulated different types of motivation regulation. As to the introjected and identified regulation of extrinsic motivation, their internalizations were described as "somewhat external" and "somewhat internal" and remained undetermined in the theory. This paper introduces a constrained regression analysis that allows these vaguely expressed motivations to be estimated in an "optimal" manner in any given empirical context. The approach was further generalized and applied for simplex structure analysis in self-determination theory. The technique was exemplified with an empirical study comparing science teaching in a classical school class versus an expeditionary outdoor program. Based on a sample of 84 German pupils (43 girls, 41 boys, 10 to 12 years old), data were collected using the German version of the Academic Self-Regulation Questionnaire. The science-teaching format was seen not to influence the pupils' internalization of identified regulation. The internalization of introjected regulation differed and shifted more toward the external pole in the outdoor teaching format. The quantification approach supported the simplex structure of self-determination theory, whereas correlations may disconfirm the simplex structure.
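
    The constrained-regression idea, estimating a weight restricted to lie between the external and internal poles, can be illustrated generically with bounded least squares. The design matrix, the true weights, and the (0, 1) bounds below are illustrative assumptions, not the authors' actual estimator.

      import numpy as np
      from scipy.optimize import lsq_linear

      rng = np.random.default_rng(3)
      A = rng.normal(size=(84, 4))       # e.g., scores on four regulation subscales
      b = A @ np.array([0.1, 0.4, 0.7, 1.0]) + rng.normal(scale=0.2, size=84)

      # Constrain each weight to lie between the external (0) and internal (1) poles.
      res = lsq_linear(A, b, bounds=(0.0, 1.0))
      print("bounded estimates:", np.round(res.x, 3))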

  3. Electrostatic and structural similarity of classical and non-classical lactam compounds

    NASA Astrophysics Data System (ADS)

    Coll, Miguel; Frau, Juan; Vilanova, Bartolomé; Donoso, Josefa; Muñoz, Francisco

    2001-09-01

    Various electrostatic and structural parameters for a series of classical and non-classical β-lactams were determined and compared in order to ascertain whether some specific β-lactams possess antibacterial or β-lactamase inhibitory properties. The electrostatic parameters obtained, based on the Distributed Multipole Analysis (DMA) of high-quality wavefunctions for the studied structures, suggest that some non-classical β-lactams effectively inhibit the action of β-lactamases. As shown in this work, such electrostatic parameters provide much more reliable information about the antibacterial and inhibitory properties of β-lactams than do structural parameters.

  4. Individual factors associated with L- and H-type Bovine Spongiform Encephalopathy in France

    PubMed Central

    2012-01-01

    Background Cattle with L-type (L-BSE) and H-type (H-BSE) atypical Bovine Spongiform Encephalopathy (BSE) were identified in 2003 in Italy and France, respectively, before being identified in other countries worldwide. As of December 2011, around 60 atypical BSE cases had been reported in 13 countries, with over one third in France. While the epidemiology of classical BSE (C-BSE) has been widely described, atypical BSEs are still poorly documented, but appear to differ from C-BSE. We analysed the epidemiological characteristics of the 12 cases of L-BSE and 11 cases of H-BSE detected in France from January 2001 to late 2009 and looked for individual risk factors. As L-BSE cases did not appear to be homogeneously distributed throughout the country, two complementary methods were used: spatial analysis and regression modelling. L-BSE and H-BSE were studied separately as both the biochemical properties of their pathological prion protein and their features in animal models differ. Results The median age at detection for L-BSE and H-BSE cases was 12.4 (range 8.4-18.7) and 12.5 (8.3-18.2) years respectively, with no significant difference between the two distributions. However, this median age differed significantly from that of classical BSE (7.0 (range 3.5-15.4) years). A significant geographical cluster was detected for L-BSE. Among animals over eight years of age, we showed that the risk of being detected as an L-BSE case increased with age at death. This was not the case for H-BSE. Conclusion To the best of our knowledge this is the first study to describe the epidemiology of the two types of atypical BSE. The geographical cluster detected for L-BSE could be partly due to the age structure of the background-tested bovine population. Our regression analyses, which adjusted for the effects of age and birth cohort, showed an age effect for L-BSE, and the descriptive analysis showed a particular age structure in the area where the cluster was detected. No birth cohort effect was evident. The relatively small number of cases of atypical BSE and the few individual data available for the tested population limited our analysis to the investigation of age and cohort effects only. We conclude that it is essential to maintain BSE surveillance to further elucidate our findings. PMID:22647660

  5. A Review of Classical Methods of Item Analysis.

    ERIC Educational Resources Information Center

    French, Christine L.

    Item analysis is a very important consideration in the test development process. It is a statistical procedure to analyze test items that combines methods used to evaluate the important characteristics of test items, such as difficulty, discrimination, and distractibility of the items in a test. This paper reviews some of the classical methods for…

  6. Soprano and source: A laryngographic analysis

    NASA Astrophysics Data System (ADS)

    Bateman, Laura Anne

    2005-04-01

    Popular music in the 21st century uses a particular singing quality for female voice that is quite different from the trained classical singing quality. Classical quality has been the subject of a vast body of research, whereas research that deals with non-classical qualities is limited. In order to learn more about these issues, the author chose to do research on singing qualities using a variety of standard voice quality tests. This paper looks at voice qualities found in various different styles of singing: Classical, Belt, Legit, R&B, Jazz, Country, and Pop. The data was elicited from a professional soprano and the voice qualities reflect industry standards. The data set for this paper is limited to samples using the vowel [i]. Laryngographic (LGG) data was generated simultaneously with the audio samples. This paper will focus on the results of the LGG analysis; however, an audio analysis was also performed using Spectrogram, LPC, and FFT. Data from the LGG is used to calculate the contact quotient, speed quotient, and ascending slope. The LGG waveform is also visually assessed. The LGG analysis gives insights into the source vibration for the different singing styles.

  7. Cocaine Dependence Treatment Data: Methods for Measurement Error Problems With Predictors Derived From Stationary Stochastic Processes

    PubMed Central

    Guan, Yongtao; Li, Yehua; Sinha, Rajita

    2011-01-01

    In a cocaine dependence treatment study, we use linear and nonlinear regression models to model posttreatment cocaine craving scores and first cocaine relapse time. A subset of the covariates are summary statistics derived from baseline daily cocaine use trajectories, such as baseline cocaine use frequency and average daily use amount. These summary statistics are subject to estimation error and can therefore cause biased estimators for the regression coefficients. Unlike classical measurement error problems, the error we encounter here is heteroscedastic with an unknown distribution, and there are no replicates for the error-prone variables or instrumental variables. We propose two robust methods to correct for the bias: a computationally efficient method-of-moments-based method for linear regression models and a subsampling extrapolation method that is generally applicable to both linear and nonlinear regression models. Simulations and an application to the cocaine dependence treatment data are used to illustrate the efficacy of the proposed methods. Asymptotic theory and variance estimation for the proposed subsampling extrapolation method and some additional simulation results are described in the online supplementary material. PMID:21984854

  8. Analysis and Synthesis of Adaptive Neural Elements and Assemblies

    DTIC Science & Technology

    1992-12-14

    network, a learning rule (activity-dependent neuromodulation), which has been proposed as a cellular mechanism for classical conditioning, was demonstrated to support many...

  9. Association between borderline dysnatremia and mortality insight into a new data mining approach.

    PubMed

    Girardeau, Yannick; Jannot, Anne-Sophie; Chatellier, Gilles; Saint-Jean, Olivier

    2017-11-22

    Even small variations of serum sodium concentration may be associated with mortality. Our objective was to confirm the impact of borderline dysnatremia on in-hospital mortality for patients admitted to hospital, using real-life care data from our electronic health record (EHR) and a phenome-wide association analysis (PheWAS). Retrospective observational study based on data from patients admitted to the Hôpital Européen Georges-Pompidou between 01/01/2008 and 30/06/2014, including 45,834 patients with serum sodium determinations on admission. We analyzed the association between dysnatremia and in-hospital mortality, using a multivariate logistic regression model to adjust for classical potential confounders. We performed a PheWAS to identify new potential confounders. Hyponatremia and hypernatremia were recorded for 12.0% and 1.0% of hospital stays, respectively. Adjusted odds ratios (ORa) for severe, moderate and borderline hyponatremia were 3.44 (95% CI, 2.41-4.86), 2.48 (95% CI, 1.96-3.13) and 1.98 (95% CI, 1.73-2.28), respectively. ORa for severe, moderate and borderline hypernatremia were 4.07 (95% CI, 2.92-5.62), 4.42 (95% CI, 2.04-9.20) and 3.72 (95% CI, 1.53-8.45), respectively. Borderline hyponatremia (ORa = 1.57; 95% CI, 1.35-1.81) and borderline hypernatremia (ORa = 3.47; 95% CI, 2.43-4.90) were still associated with in-hospital mortality after adjustment for classical and new confounding factors identified through the PheWAS analysis. Borderline dysnatremia on admission is independently associated with a higher risk of in-hospital mortality. By using medical data automatically collected in the EHR and a new data mining approach, we identified new potential confounding factors that were highly associated with both mortality and dysnatremia.

  10. H-classic: a new method to identify classic articles in Implant Dentistry, Periodontics, and Oral Surgery.

    PubMed

    De la Flor-Martínez, Maria; Galindo-Moreno, Pablo; Sánchez-Fernández, Elena; Piattelli, Adriano; Cobo, Manuel Jesus; Herrera-Viedma, Enrique

    2016-10-01

    The study of classic papers permits analysis of the past, present, and future of a specific area of knowledge. This type of analysis is becoming more frequent and more sophisticated. Our objective was to use the H-classics method, based on the h-index, to analyze classic papers in Implant Dentistry, Periodontics, and Oral Surgery (ID, P, and OS). First, an electronic search of documents related to ID, P, and OS was conducted in journals indexed in Journal Citation Reports (JCR) 2014 within the category 'Dentistry, Oral Surgery & Medicine'. Second, Web of Knowledge databases were searched using MeSH terms related to ID, P, and OS. Finally, the H-classics method was applied to select the classic articles in these disciplines, collecting data on associated research areas, document type, country, institutions, and authors. Of 267,611 documents related to ID, P, and OS retrieved from JCR journals (2014), 248 were selected as H-classics. They were published in 35 journals between 1953 and 2009, most frequently in the Journal of Clinical Periodontology (18.95%), the Journal of Periodontology (18.54%), International Journal of Oral and Maxillofacial Implants (9.27%), and Clinical Oral Implants Research (6.04%). These classic articles derived from the USA in 49.59% of cases and from Europe in 47.58%, while the most frequent host institution was the University of Gothenburg (17.74%) and the most frequent authors were J. Lindhe (10.48%) and S. Socransky (8.06%). The H-classics approach offers an objective method to identify core knowledge in clinical disciplines such as ID, P, and OS. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
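
    The H-classics selection rule lends itself to a compact sketch: compute the h-index of the discipline's citation distribution and keep every paper cited at least h times. The citation counts below are made up for illustration.

      def h_index(citations):
          """Largest h such that at least h papers have >= h citations."""
          ranked = sorted(citations, reverse=True)
          return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

      citations = [310, 250, 198, 160, 95, 40, 12, 5, 1, 0]
      h = h_index(citations)
      h_classics = [c for c in citations if c >= h]
      print(f"h-index = {h}; {len(h_classics)} H-classic papers")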

  11. Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Cooley, R.L.; Christensen, S.

    2006-01-01

    Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation Kβ*, where K is an interpolation matrix and β* is a stochastic vector of parameters. Vector β* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that the model function f(Kβ*) written in terms of the approximate inputs is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) - f(Kβ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate β* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(Kβ*) are small, then most of the biases are small and the correction factors are reduced in magnitude. Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large, to test the robustness of the methodology. Numerical results conform with the theoretical analysis. © 2005 Elsevier Ltd. All rights reserved.
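
    In the linear case, folding the second-moment matrix of the model errors into the weight matrix reduces to generalized least squares with the weight matrix set to the inverse of the combined error covariance. A compact numpy sketch under that simplifying assumption (the covariance structure below is invented for illustration):

      import numpy as np

      rng = np.random.default_rng(4)
      n = 50
      X = np.column_stack([np.ones(n), rng.normal(size=n)])
      beta_true = np.array([2.0, -1.0])

      # Combined covariance: iid measurement error plus correlated model error.
      idx = np.arange(n)
      cov = 0.2 * np.eye(n) + 0.3 * np.exp(-np.abs(idx[:, None] - idx[None, :]) / 5.0)
      y = X @ beta_true + rng.multivariate_normal(np.zeros(n), cov)

      W = np.linalg.inv(cov)                        # weight matrix = inverse covariance
      beta_gls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
      print("GLS estimate:", np.round(beta_gls, 3))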

  12. Identification by random forest method of HLA class I amino acid substitutions associated with lower survival at day 100 in unrelated donor hematopoietic cell transplantation.

    PubMed

    Marino, S R; Lin, S; Maiers, M; Haagenson, M; Spellman, S; Klein, J P; Binkowski, T A; Lee, S J; van Besien, K

    2012-02-01

    The identification of important amino acid substitutions associated with low survival in hematopoietic cell transplantation (HCT) is hampered by the large number of observed substitutions compared with the small number of patients available for analysis. Random forest analysis is designed to address these limitations. We studied 2107 HCT recipients with good or intermediate risk hematological malignancies to identify HLA class I amino acid substitutions associated with reduced survival at day 100 post transplant. Random forest analysis and traditional univariate and multivariate analyses were used. Random forest analysis identified amino acid substitutions in 33 positions that were associated with reduced 100 day survival, including HLA-A 9, 43, 62, 63, 76, 77, 95, 97, 114, 116, 152, 156, 166 and 167; HLA-B 97, 109, 116 and 156; and HLA-C 6, 9, 11, 14, 21, 66, 77, 80, 95, 97, 99, 116, 156, 163 and 173. In all, 13 had been previously reported by other investigators using classical biostatistical approaches. Using the same data set, traditional multivariate logistic regression identified only five amino acid substitutions associated with lower day 100 survival. Random forest analysis is a novel statistical methodology for analysis of HLA mismatching and outcome studies, capable of identifying important amino acid substitutions missed by other methods.
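
    Screening many candidate substitutions with a random forest and ranking them by importance can be sketched with scikit-learn. The binary feature matrix (one column per substitution) and the day-100 outcome below are simulated, not registry data; two columns are made informative so the ranking has something to find.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(5)
      X = rng.integers(0, 2, size=(2000, 60))          # presence/absence of substitutions
      logit = -2.0 + 1.2 * X[:, 9] + 0.8 * X[:, 33]    # two truly informative positions
      y = rng.random(2000) < 1.0 / (1.0 + np.exp(-logit))  # death by day 100 (binary)

      rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
      top = np.argsort(rf.feature_importances_)[::-1][:5]
      print("top-ranked positions:", top)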

  13. Moderate Deviation Analysis for Classical Communication over Quantum Channels

    NASA Astrophysics Data System (ADS)

    Chubb, Christopher T.; Tan, Vincent Y. F.; Tomamichel, Marco

    2017-11-01

    We analyse families of codes for classical data transmission over quantum channels that have both a vanishing probability of error and a code rate approaching capacity as the code length increases. To characterise the fundamental tradeoff between decoding error, code rate and code length for such codes we introduce a quantum generalisation of the moderate deviation analysis proposed by Altuğ and Wagner as well as Polyanskiy and Verdú. We derive such a tradeoff for classical-quantum (as well as image-additive) channels in terms of the channel capacity and the channel dispersion, giving further evidence that the latter quantity characterises the necessary backoff from capacity when transmitting finite blocks of classical data. To derive these results we also study asymmetric binary quantum hypothesis testing in the moderate deviations regime. Due to the central importance of the latter task, we expect that our techniques will find further applications in the analysis of other quantum information processing tasks.

  14. Classical analysis of quantum phase transitions in a bilayer model.

    PubMed

    Figueiredo, Mariane Camargos; Cotta, Tathiana Moreira; Pellegrino, Giancarlo Queiroz

    2010-01-01

    In this Brief Report we extend the classical analysis performed on the schematic model proposed in [T. Moreira, G. Q. Pellegrino, J. G. Peixoto de Faria, M. C. Nemes, F. Camargo, and A. F. R. Toledo Piza, Phys. Rev. E 77, 051102 (2008)] concerning quantum phase transitions in a bilayer system. We show that appropriate integrations along the classical periodic orbits reproduce with excellent agreement both the quantum spectrum and the expected mean value for the number of excitons in the system, quantities which are directly related to the observed boson-fermion quantum phase transition.

  15. PBIS May Not Qualify as Classical Applied Behavior Analysis. So What?

    PubMed

    Critchfield, Thomas S

    2015-05-01

    Some disagreement exists over whether Positive Behavior Interventions and Supports (PBIS) embodies the features of Applied Behavior Analysis (ABA) as described in a classic 1968 paper by Baer, Wolf, and Risley. When it comes to disseminating interventions at a societal level, a more compelling issue is whether ABA should become more like PBIS.

  16. Classical Item Analysis Using Latent Variable Modeling: A Note on a Direct Evaluation Procedure

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2011-01-01

    A directly applicable latent variable modeling procedure for classical item analysis is outlined. The method allows one to point and interval estimate item difficulty, item correlations, and item-total correlations for composites consisting of categorical items. The approach is readily employed in empirical research and as a by-product permits…
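
    The classical item statistics in question have one-line definitions for binary items: difficulty is the proportion answering correctly, and the (corrected) item-total correlation is computed against the total score with the item removed. A small numpy sketch on simulated responses; the data are random placeholders.

      import numpy as np

      rng = np.random.default_rng(6)
      # 300 simulated examinees answering 8 binary items of varying easiness.
      responses = (rng.random((300, 8)) < rng.uniform(0.3, 0.9, 8)).astype(int)

      difficulty = responses.mean(axis=0)              # proportion answering correctly
      total = responses.sum(axis=1)
      item_total = [
          np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]  # corrected r
          for j in range(responses.shape[1])
      ]
      print("difficulty:", np.round(difficulty, 2))
      print("item-total r:", np.round(item_total, 2))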

  17. Legitimate Techniques for Improving the R-Square and Related Statistics of a Multiple Regression Model

    DTIC Science & Technology

    1981-01-01

    explanatory variable has been omitted. Ramsey (1974) has developed a rather interesting test for detecting specification errors using estimates of the...Peter. (1979) A Guide to Econometrics, Cambridge, MA: The MIT Press. Ramsey, J.B. (1974), "Classical Model Selection Through Specification Error Tests," in P. Zarembka, Ed., Frontiers in Econometrics, New York: Academic Press. Theil, Henri. (1971), Principles of Econometrics, New York: John Wiley

  18. Analysis of MHC class I genes across horse MHC haplotypes

    PubMed Central

    Tallmadge, Rebecca L.; Campbell, Julie A.; Miller, Donald C.; Antczak, Douglas F.

    2010-01-01

    The genomic sequences of 15 horse Major Histocompatibility Complex (MHC) class I genes and a collection of MHC class I homozygous horses of five different haplotypes were used to investigate the genomic structure and polymorphism of the equine MHC. A combination of conserved and locus-specific primers was used to amplify horse MHC class I genes with classical and non-classical characteristics. Multiple clones from each haplotype identified three to five classical sequences per homozygous animal, and two to three non-classical sequences. Phylogenetic analysis was applied to these sequences and groups were identified which appear to be allelic series, but some sequences were left ungrouped. Sequences determined from MHC class I heterozygous horses and previously described MHC class I sequences were then added, representing a total of ten horse MHC haplotypes. These results were consistent with those obtained from the MHC homozygous horses alone, and 30 classical sequences were assigned to four previously confirmed loci and three new provisional loci. The non-classical genes had few alleles and the classical genes had higher levels of allelic polymorphism. Alleles for two classical loci with the expected pattern of polymorphism were found in the majority of haplotypes tested, but alleles at two other commonly detected loci had more variation outside of the hypervariable region than within. Our data indicate that the equine Major Histocompatibility Complex is characterized by variation in the complement of class I genes expressed in different haplotypes in addition to the expected allelic polymorphism within loci. PMID:20099063

  19. Circulating CD34-Positive Cells Are Associated with Handgrip Strength in Japanese Older Men: The Nagasaki Islands Study.

    PubMed

    Yamanashi, H; Shimizu, Y; Koyamatsu, J; Nagayoshi, M; Kadota, K; Tamai, M; Maeda, T

    2017-01-01

    Handgrip strength is a simple measurement of overall muscular strength and is used to detect sarcopenia. It also predicts adverse events in later life. Many mechanisms of sarcopenia development have been reported. Hypertension impairs endothelial function, which might lead to deterioration of skeletal muscle if vascular angiogenesis is not maintained. This study investigated muscle strength and circulating CD34-positive cells as a marker of vascular angiogenesis. Cross-sectional study. 262 male Japanese community dwellers aged 60 to 69 years. The participants' handgrip strength, medical history, and blood samples were taken. We stratified the participants by hypertensive status to investigate the association between handgrip strength and circulating CD34-positive cells according to hypertensive status. Pearson correlation and linear regression analyses were used. In the Pearson correlation analysis, handgrip strength and the logarithm of circulating CD34-positive cells were significantly associated in hypertensive participants (r=0.22, p=0.021), but not in non-hypertensive participants (r=-0.01, p=0.943). This relationship was only significant in hypertensive participants (β=1.94, p=0.021) in the simple linear regression analysis, and it remained significant after adjusting for classic cardiovascular risk factors (β=1.92, p=0.020). The relationship was not significant in non-hypertensive participants (β=-0.09, p=0.903). We found a positive association between handgrip strength and circulating CD34-positive cells in hypertensive men. Vascular maintenance attributed to circulating CD34-positive cells is thought to be a background mechanism of this association after hypertension-induced vascular injury in skeletal muscle.

  20. Principal component regression analysis with SPSS.

    PubMed

    Liu, R X; Kuang, J; Gong, Q; Hou, X L

    2003-06-01

    The paper introduces the indices used in multicollinearity diagnosis, the basic principle of principal component regression, and the method for determining the 'best' equation. The paper uses an example to describe how to carry out principal component regression analysis with SPSS 10.0, including the full calculation process of the principal component regression and the operation of the linear regression, factor analysis, descriptives, compute variable, and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity. A simplified, faster, and accurate statistical analysis is achieved through principal component regression with SPSS.
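
    The same analysis can be reproduced outside SPSS: standardize the predictors, extract principal components, and regress the response on the leading components. A scikit-learn sketch on synthetic collinear data follows; the component count is chosen by hand here, not by the formal criteria the paper discusses.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(7)
      z = rng.normal(size=(100, 2))
      # Three predictors, two of them nearly collinear.
      X = np.column_stack([z[:, 0], z[:, 0] + 0.01 * rng.normal(size=100), z[:, 1]])
      y = X @ np.array([1.0, 1.0, -0.5]) + rng.normal(scale=0.1, size=100)

      Xs = StandardScaler().fit_transform(X)
      scores = PCA(n_components=2).fit_transform(Xs)   # drop the near-null component
      model = LinearRegression().fit(scores, y)
      print("R^2 on retained components:", model.score(scores, y))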

  1. Non-classical and potential symmetry analysis of Richard's equation for moisture flow in soil

    NASA Astrophysics Data System (ADS)

    Wiltshire, Ron; El-Kafri, Manal

    2004-01-01

    This paper focuses upon the derivation of the non-classical symmetries of Bluman and Cole as they apply to Richard's equation for water flow in an unsaturated uniform soil. It is shown that the determining equations for the non-classical case lead to four highly non-linear equations which have been solved in five particular cases. In each case the corresponding similarity ansatz has been derived and Richard's equation is reduced to an ordinary differential equation. Explicit solutions are produced when possible. Richard's equation is also expressed as a potential system and in reviewing the classical Lie solutions a new symmetry is derived together with its similarity ansatz. Determining equations are then produced for the potential system using the non-classical algorithm. This results in an under-determined set of equations and an example symmetry that reveals a missing classical case is presented. An example of a classical and a non-classical symmetry reduction applied to the infiltration of moisture in soil is presented. The condition for surface invariance is used to demonstrate the equivalence of a classical Lie and a potential symmetry.

  2. Cinematic Landscapes of Teaching: Lessons from a Narrative of Classic Film

    ERIC Educational Resources Information Center

    Cary, Lisa J.; Reifel, Stuart

    2005-01-01

    The purpose of this inquiry was to utilize the concept of "landscapes of teaching" in the analysis of a classic film about a venerated teacher, "Goodbye, Mr. Chips" (1939). First, the aim of the analysis is to provide insights into teacher development and to discuss the sacred and mystical dimensions of teaching (Craig, 1995). Second, the analysis…

  3. The Journey from Classical to Quantum Thinking: An Analysis of Student Understanding through the Lens of Atomic Spectra

    ERIC Educational Resources Information Center

    Rao, Sandhya Kolla

    2012-01-01

    This dissertation aims to explore how students think about atomic absorption and emission of light in the area of introductory quantum chemistry. In particular, the impact of classical ideas of electron position and energy on student understanding of spectra is studied. The analysis was undertaken to discover how student learning can be…

  4. Novel and successful free comments method for sensory characterization of chocolate ice cream: A comparative study between pivot profile and comment analysis.

    PubMed

    Fonseca, Fernando G A; Esmerino, Erick A; Filho, Elson R Tavares; Ferraz, Juliana P; da Cruz, Adriano G; Bolini, Helena M A

    2016-05-01

    Rapid sensory profiling methods have gained space in the sensory evaluation field. Techniques using direct analysis of the terms generated by consumers are considered easy to perform, without specific training requirements, thus improving knowledge about consumer perceptions of various products. This study aimed to determine the sensory profile of different commercial samples of chocolate ice cream, labeled as conventional and light or diet, using the "comment analysis" and "pivot profile" methods, based on consumers' perceptions. In the comment analysis task, consumers responded to 2 separate open questions describing the sensory attributes they liked or disliked in each sample. In the pivot profile method, samples were served in pairs (consisting of a coded sample and the pivot), and consumers indicated the higher and lower intensity attributes in the target sample compared with the pivot. We observed that both methods were able to characterize the different chocolate ice cream samples using consumer perception, with highly correlated results and configurational similarity (RV coefficient = 0.917) between them. However, it is worth emphasizing that comment analysis is performed intuitively by consumers, whereas the pivot profile method showed high analytical and discriminative power even when using consumers, proving to be a promising technique for routine application when classical descriptive methods cannot be used. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  5. The Pendulum as a Vehicle for Transitioning from Classical to Quantum Physics: History, Quantum Concepts, and Educational Challenges

    ERIC Educational Resources Information Center

    Barnes, Marianne B.; Garner, James; Reid, David

    2004-01-01

    In this article we use the pendulum as the vehicle for discussing the transition from classical to quantum physics. Since student knowledge of the classical pendulum can be generalized to all harmonic oscillators, we propose that a quantum analysis of the pendulum can lead students into the unanticipated consequences of quantum phenomena at the…

  6. Rett syndrome diagnostic criteria: lessons from the Natural History Study.

    PubMed

    Percy, Alan K; Neul, Jeffrey L; Glaze, Daniel G; Motil, Kathleen J; Skinner, Steven A; Khwaja, Omar; Lee, Hye-Seung; Lane, Jane B; Barrish, Judy O; Annese, Fran; McNair, Lauren; Graham, Joy; Barnes, Katherine

    2010-12-01

    Analysis of 819 participants enrolled in the Rett syndrome (RTT) Natural History Study validates recently revised diagnostic criteria. 765 females fulfilled 2002 consensus criteria for classic (653/85.4%) or variant (112/14.6%) RTT. All participants classified as classic RTT fulfilled each revised main criterion; supportive criteria were not uniformly present. All variant RTT participants met at least 3 of 6 main criteria in the 2002, 2 of 4 main criteria in the current format, and 5 of 11 supportive criteria in both. This analysis underscores the critical role of main criteria for classic RTT; variant RTT requires both main and supportive criteria.

  7. Western classical music development: a statistical analysis of composers similarity, differentiation and evolution.

    PubMed

    Georges, Patrick

    2017-01-01

    This paper proposes a statistical analysis that captures similarities and differences between classical music composers, with the eventual aim of understanding why particular composers 'sound' different even if their 'lineages' (influence networks) are similar, or why they 'sound' alike if their 'lineages' are different. In order to do this we use statistical methods and measures of association or similarity (based on the presence or absence of traits such as specific 'ecological' characteristics and personal musical influences) that have been developed in biosystematics, scientometrics, and bibliographic coupling. This paper also represents a first step towards a more ambitious goal of developing an evolutionary model of Western classical music.

  8. Role of classic signs as diagnostic predictors for enteric fever among returned travellers: Relative bradycardia and eosinopenia.

    PubMed

    Matono, Takashi; Kutsuna, Satoshi; Kato, Yasuyuki; Katanami, Yuichi; Yamamoto, Kei; Takeshita, Nozomi; Hayakawa, Kayoko; Kanagawa, Shuzo; Kaku, Mitsuo; Ohmagari, Norio

    2017-01-01

    The lack of characteristic clinical findings and accurate diagnostic tools has made the diagnosis of enteric fever difficult. We evaluated the classic signs of relative bradycardia and eosinopenia as diagnostic predictors for enteric fever among travellers who had returned from the tropics or subtropics. This matched case-control study used data from 2006 to 2015 for culture-proven enteric fever patients as cases. Febrile patients (>38.3°C) with non-enteric fever, who had returned from the tropics or subtropics, were matched to the cases in a 1:3 ratio by age (±3 years), sex, and year of diagnosis as controls. Cunha's criteria were used for relative bradycardia. Absolute eosinopenia was defined as an eosinophilic count of 0/μL. Data from 160 patients (40 cases and 120 controls) were analysed. Cases predominantly returned from South Asia (70% versus 18%, p <0.001). Relative bradycardia (88% versus 51%, p <0.001) and absolute eosinopenia (63% versus 38%, p = 0.008) were more frequent in cases than controls. In multivariate logistic regression analysis, return from South Asia (aOR: 21.6; 95% CI: 7.17-64.9) and relative bradycardia (aOR: 11.7; 95% CI: 3.21-42.5) were independent predictors for a diagnosis of enteric fever. The positive likelihood ratio was 4.00 (95% CI: 2.58-6.20) for return from South Asia, 1.72 (95% CI: 1.39-2.13) for relative bradycardia, and 1.63 (95% CI: 1.17-2.27) for absolute eosinopenia. The negative predictive values of the three variables were notably high (83-92%); however, positive predictive values were 35-57%. The classic signs of relative bradycardia and eosinopenia were not specific for enteric fever; however both met the criteria for being diagnostic predictors for enteric fever. Among febrile returned travellers, relative bradycardia and eosinopenia should be re-evaluated for predicting a diagnosis of enteric fever in non-endemic areas prior to obtaining blood cultures.
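
    The likelihood ratios quoted above follow directly from a two-by-two table: LR+ = sensitivity / (1 - specificity). In the sketch below, the counts for relative bradycardia are reconstructed from the percentages reported in the abstract (roughly 35/40 cases, 61/120 controls), so small rounding differences are expected.

      def diagnostics(tp, fp, fn, tn):
          """Sensitivity, specificity, LR+, PPV and NPV from 2x2 counts."""
          sens = tp / (tp + fn)
          spec = tn / (tn + fp)
          return {
              "sensitivity": round(sens, 3),
              "specificity": round(spec, 3),
              "LR+": round(sens / (1 - spec), 2),
              "PPV": round(tp / (tp + fp), 2),
              "NPV": round(tn / (tn + fn), 2),
          }

      # Relative bradycardia: ~88% of 40 cases, ~51% of 120 controls.
      print(diagnostics(tp=35, fp=61, fn=5, tn=59))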

  9. Regression Analysis by Example. 5th Edition

    ERIC Educational Resources Information Center

    Chatterjee, Samprit; Hadi, Ali S.

    2012-01-01

    Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. "Regression Analysis by Example, Fifth Edition" has been expanded and thoroughly…

  10. Niels Bohr as philosopher of experiment: Does decoherence theory challenge Bohr's doctrine of classical concepts?

    NASA Astrophysics Data System (ADS)

    Camilleri, Kristian; Schlosshauer, Maximilian

    2015-02-01

    Niels Bohr's doctrine of the primacy of "classical concepts" is arguably his most criticized and misunderstood view. We present a new, careful historical analysis that makes clear that Bohr's doctrine was primarily an epistemological thesis, derived from his understanding of the functional role of experiment. A hitherto largely overlooked disagreement between Bohr and Heisenberg about the movability of the "cut" between measuring apparatus and observed quantum system supports the view that, for Bohr, such a cut did not originate in dynamical (ontological) considerations, but rather in functional (epistemological) considerations. As such, both the motivation and the target of Bohr's doctrine of classical concepts are of a fundamentally different nature than what is understood as the dynamical problem of the quantum-to-classical transition. Our analysis suggests that, contrary to claims often found in the literature, Bohr's doctrine is not, and cannot be, at odds with proposed solutions to the dynamical problem of the quantum-classical transition that were pursued by several of Bohr's followers and culminated in the development of decoherence theory.

  11. Circular Regression in a Dual-Phase Lock-In Amplifier for Coherent Detection of Weak Signal

    PubMed Central

    Wang, Gaoxuan; Reboul, Serge; Fertein, Eric

    2017-01-01

    Lock-in amplification (LIA) is an effective approach for recovery of a weak signal buried in noise. Determination of the input signal amplitude in a classical dual-phase LIA is based on incoherent detection, which leads to a biased estimate at low signal-to-noise ratio. This article presents, for the first time to our knowledge, a new LIA architecture involving phase estimation with a linear-circular regression for coherent detection. The proposed phase-delay estimate between the input signal and a reference is defined as the maximum likelihood of a set of observations distributed according to a von Mises distribution. In our implementation this maximum is obtained with a Newton-Raphson algorithm. We show that the proposed LIA architecture provides an unbiased estimate of the input signal amplitude. Theoretical simulations with synthetic data demonstrate that the classical LIA estimates are biased for SNR of the input signal lower than −20 dB, while the proposed LIA is able to accurately recover the weak signal amplitude. The novel approach is applied to an optical sensor for accurate measurement of NO2 concentrations at the sub-ppbv level in the atmosphere. Side-by-side intercomparison measurements with a commercial LIA (SR830, Stanford Research Systems, Sunnyvale, CA, USA) demonstrate that the proposed LIA has identical performance in terms of measurement accuracy and precision but with a simplified hardware architecture. PMID:29135951

  12. Meta-analysis of diagnostic test data: a bivariate Bayesian modeling approach.

    PubMed

    Verde, Pablo E

    2010-12-30

    In recent decades, the amount of published results on clinical diagnostic tests has expanded very rapidly. The counterpart to this development has been the formal evaluation and synthesis of diagnostic results. However, published results present substantial heterogeneity, and they can be regarded as so far removed from the classical domain of meta-analysis that they provide a rather severe test of classical statistical methods. Recently, bivariate random effects meta-analytic methods, which model the pairs of sensitivities and specificities, have been presented from the classical point of view. In this work a bivariate Bayesian modeling approach is presented. This approach substantially extends the scope of classical bivariate methods by allowing the structural distribution of the random effects to depend on multiple sources of variability. Meta-analysis is summarized by the predictive posterior distributions for sensitivity and specificity. This new approach also allows substantial model checking, model diagnostics and model selection to be performed. Statistical computations are implemented in public domain statistical software (WinBUGS and R) and illustrated with real data examples. Copyright © 2010 John Wiley & Sons, Ltd.

  13. Differences in Kaposi sarcoma-associated herpesvirus-specific and –nonspecific immune responses in classic Kaposi sarcoma cases and matched controls in Sicily

    PubMed Central

    Amodio, Emanuele; Goedert, James J.; Barozzi, Patrizia; Riva, Giovanni; Firenze, Alberto; Bonura, Filippa; Viviano, Enza; Romano, Nino; Luppi, Mario

    2011-01-01

    SUMMARY Kaposi sarcoma (KS) may develop because of incompetent immune responses, both nonspecifically and specifically against the KS-associated herpes virus (KSHV). Peripheral blood mononuclear cells from 15 classic (non-AIDS) KS cases, 13 KSHV seropositives (without KS), and 15 KSHV-seronegative controls were tested for interferon-γ T-cell (Elispot) responses to KSHV-LANA, KSHV-K8.1, and CMV/EBV peptide pools. The forearm and thigh of each participant was also tested for delayed-type hypersensitivity (DTH) against common recall antigens. Groups were compared with the Fisher exact test and multinomial logistic regression to calculate odds ratios (OR) and 95% confidence intervals (CI). KSHV Elispot response was detected in 10 (67%) classic KS cases, 11 (85%) KSHV seropositives (without KS), and 2 (13%) seronegative controls. All 4 cases with KSHV-LANA responses had current KS lesions, whereas 5 of 6 cases with KSHV-K8.1 responses had no lesions (P=0.048). No case responded to both LANA and K8.1. Compared to seronegative controls, risk for classic KS was inversely related to DTH in the thigh (OR 0.71, 95% CI 0.55–0.94, P=0.01), directly associated with DTH in the forearm (OR 1.35, 95% CI 1.02–1.80, P=0.04), and tended to be increased 5-fold per KSHV Elispot response (OR 5.13, 95% CI 0.86–30.77, P=0.07). Compared to KSHV seropositives (without KS), risk for classic KS was reduced 5-fold (OR 0.20, CI 0.03–0.77, P=0.04) per KSHV response. CMV/EBV Elispot responses were irrelevant. Deficiency of both KSHV-specific and -nonspecific immunity is associated with classic KS. This may clarify why Kaposi sarcoma responds to immune reconstitution. PMID:21740480

  14. Quantum annealing versus classical machine learning applied to a simplified computational biology problem

    PubMed Central

    Li, Richard Y.; Di Felice, Rosa; Rohs, Remo; Lidar, Daniel A.

    2018-01-01

    Transcription factors regulate gene expression, but how these proteins recognize and specifically bind to their DNA targets is still debated. Machine learning models are effective means to reveal interaction mechanisms. Here we studied the ability of a quantum machine learning approach to predict binding specificity. Using simplified datasets of a small number of DNA sequences derived from actual binding affinity experiments, we trained a commercially available quantum annealer to classify and rank transcription factor binding. The results were compared to state-of-the-art classical approaches for the same simplified datasets, including simulated annealing, simulated quantum annealing, multiple linear regression, LASSO, and extreme gradient boosting. Despite technological limitations, we find a slight advantage in classification performance and nearly equal ranking performance using the quantum annealer for these fairly small training data sets. Thus, we propose that quantum annealing might be an effective method to implement machine learning for certain computational biology problems. PMID:29652405

  15. Hybrid propulsion technology program

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Technology was identified that will enable application of hybrid propulsion to manned and unmanned space launch vehicles. Two design concepts are proposed. The first is a hybrid propulsion system using the classical method of regression (classical hybrid) resulting from the flow of oxidizer across a fuel grain surface. The second system uses a self-sustaining gas generator (gas generator hybrid) to produce a fuel-rich exhaust that is mixed with oxidizer in a separate combustor. Both systems offer cost and reliability improvements over the existing solid rocket booster and proposed liquid boosters. The designs were evaluated using life cycle cost and reliability. The program consisted of: (1) identification and evaluation of candidate oxidizers and fuels; (2) preliminary evaluation of booster design concepts; (3) preparation of a detailed point design including life cycle cost and reliability analyses; (4) identification of those hybrid-specific technologies needing improvement; and (5) preparation of a technology acquisition plan and large scale demonstration plan.

  16. The swan-song phenomenon: last-works effects for 172 classical composers.

    PubMed

    Simonton, D K

    1989-03-01

    Creative individuals approaching their final years of life may undergo a transformation in outlook that is reflected in their last works. This hypothesized effect was quantitatively assessed for an extensive sample of 1,919 works by 172 classical composers. The works were independently gauged on seven aesthetic attributes (melodic originality, melodic variation, repertoire popularity, aesthetic significance, listener accessibility, performance duration, and thematic size), and potential last-works effects were operationally defined two separate ways (linearly and exponentially). Statistical controls were introduced for both longitudinal changes (linear, quadratic, and cubic age functions) and individual differences (eminence and lifetime productivity). Hierarchical regression analyses indicated that composers' swan songs tend to score lower in melodic originality and performance duration but higher in repertoire popularity and aesthetic significance. These last-works effects survive control for total compositional output, eminence, and most significantly, the composer's age when the last works were created.

  17. Reading the World's Classics Critically: A Keyword-Based Approach to Literary Analysis in Foreign Language Studies

    ERIC Educational Resources Information Center

    García, Nuria Alonso; Caplan, Alison

    2014-01-01

    While there are a number of important critical pedagogies being proposed in the field of foreign language study, more attention should be given to providing concrete examples of how to apply these ideas in the classroom. This article offers a new approach to the textual analysis of literary classics through the keyword-based methodology originally…

  18. An Analysis of Cross Racial Identity Scale Scores Using Classical Test Theory and Rasch Item Response Models

    ERIC Educational Resources Information Center

    Sussman, Joshua; Beaujean, A. Alexander; Worrell, Frank C.; Watson, Stevie

    2013-01-01

    Item response models (IRMs) were used to analyze Cross Racial Identity Scale (CRIS) scores. Rasch analysis scores were compared with classical test theory (CTT) scores. The partial credit model demonstrated a high goodness of fit and correlations between Rasch and CTT scores ranged from 0.91 to 0.99. CRIS scores are supported by both methods.…

  19. Performance Analysis and Optimization of the Winnow Secret Key Reconciliation Protocol

    DTIC Science & Technology

    2011-06-01

    use in a quantum key system can be defined in two ways: the number of messages passed between Alice and Bob, or the...classical and quantum environment. Post-quantum cryptography, which is generally used to describe classical quantum-resilient protocols, includes...composed of a one-way quantum channel and a two-way classical channel. Owing to the physics of the channel, the quantum channel is subject to

  20. Heterogeneity in the Strehler-Mildvan general theory of mortality and aging.

    PubMed

    Zheng, Hui; Yang, Yang; Land, Kenneth C

    2011-02-01

    This study examines and further develops the classic Strehler-Mildvan (SM) general theory of mortality and aging. Three predictions from the SM theory are tested by examining the age dependence of mortality patterns for 42 countries (including developed and developing countries) over the period 1955-2003. By applying finite mixture regression models, principal component analysis, and random-effects panel regression models, we find that (1) the negative correlation between the initial adulthood mortality rate and the rate of increase in mortality with age derived in the SM theory exists but is not constant; (2) within the SM framework, the implied age of expected zero vitality (expected maximum survival age) also is variable over time; (3) longevity trajectories are not homogeneous among the countries; (4) Central American and Southeast Asian countries have higher expected age of zero vitality than other countries in spite of relatively disadvantageous national ecological systems; (5) within the group of Central American and Southeast Asian countries, a more disadvantageous national ecological system is associated with a higher expected age of zero vitality; and (6) larger agricultural and food productivities, higher labor participation rates, higher percentages of population living in urban areas, and larger GDP per capita and GDP per unit of energy use are important beneficial national ecological system factors that can promote survival. These findings indicate that the SM theory needs to be generalized to incorporate heterogeneity among human populations.
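
    The Gompertz form at the heart of the SM framework, m(t) = m0 exp(Bt), becomes linear after taking logs, so the initial adulthood mortality ln m0 and the rate of aging B are estimable by ordinary least squares; the SM correlation is then the cross-population regression of ln m0 on B. A minimal sketch of the first step on synthetic mortality rates (the parameter values are invented):

      import numpy as np

      rng = np.random.default_rng(8)
      ages = np.arange(30, 90, 5)
      m0_true, B_true = 1e-4, 0.09
      rates = m0_true * np.exp(B_true * ages) * np.exp(rng.normal(0, 0.05, ages.size))

      # Gompertz fit: ln m(t) = ln m0 + B * t, by ordinary least squares.
      B_hat, ln_m0_hat = np.polyfit(ages, np.log(rates), 1)
      print(f"m0 = {np.exp(ln_m0_hat):.2e}, B = {B_hat:.3f} per year")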

  1. Chorea in Late-Infantile Neuronal Ceroid Lipofuscinosis: An Atypical Presentation.

    PubMed

    Saini, Arushi Gahlot; Sankhyan, Naveen; Singhi, Pratibha

    2016-07-01

    Classic late-infantile neuronal ceroid lipofuscinosis is characterized by progressive intellectual and motor deterioration, seizures, vision loss, and early death. Prominent chorea is an atypical feature and is rarely described in children. We describe a four-year-old girl with seizures followed by a year-long progressive cognitive decline and a three-month history of intermittent chorea leading to rapid motor deterioration. The onset of illness was marked by generalized tonic-clonic seizures and myoclonic jerks. There was gradual regression of cognitive milestones, with increasing forgetfulness and impaired quality and content of speech. Nine months later, she developed chorea. These movements were associated with clumsiness, incoordination, and progressive loss of motor milestones. She was unable to perform manual tasks or maintain antigravity posture, resulting in unsteadiness and frequent falls. The movements were aggravated by action or excitement and were absent in sleep. Magnetic resonance imaging depicted diffuse cerebral and cerebellar atrophy. Sequencing analysis of the TPP1 gene showed a novel, homozygous, splice-site mutation, c.89+1G>A, which resulted in nil enzyme activity and a severe phenotype with onset of disease symptoms at the early age of three years. The presence of chorea in late-infantile neuronal ceroid lipofuscinosis is atypical but does not exclude the diagnosis, especially in children with psychomotor regression, seizures, and diffuse brain atrophy. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Extreme value analysis in biometrics.

    PubMed

    Hüsler, Jürg

    2009-04-01

    We review some approaches to extreme value analysis in the context of biometrical applications. Classical extreme value analysis is based on iid random variables. Two different general methods are applied, which are discussed together with biometrical examples. Different estimation, testing, and goodness-of-fit procedures for applications are discussed. Furthermore, some non-classical situations are considered, where the data are possibly dependent, where non-stationary behavior is observed in the data, or where the observations are not univariate. A few open problems are also stated.

  3. Advances in data processing for open-path Fourier transform infrared spectrometry of greenhouse gases.

    PubMed

    Shao, Limin; Griffiths, Peter R; Leytem, April B

    2010-10-01

    The automated quantification of three greenhouse gases, ammonia, methane, and nitrous oxide, in the vicinity of a large dairy farm by open-path Fourier transform infrared (OP/FT-IR) spectrometry at intervals of 5 min is demonstrated. Spectral pretreatment, including the automated detection and correction of the effect of a moving object interrupting the infrared beam and the automated correction for nonlinear detector response, is applied to the measured interferograms. Two ways of obtaining quantitative data from OP/FT-IR data are described. The first, which is installed in a recently acquired commercial OP/FT-IR spectrometer, is based on classical least-squares (CLS) regression, and the second is based on partial least-squares (PLS) regression. It is shown that CLS regression only gives accurate results if the absorption features of the analytes are located in very short spectral intervals where lines due to atmospheric water vapor are absent or very weak; of the three analytes examined, only ammonia fell into this category. PLS regression, on the other hand, allowed what appeared to be accurate results to be obtained for all three analytes.
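The two approaches differ in what they require: CLS solves the explicit mixture model (spectrum = K·c) by least squares given reference spectra, while PLS is calibrated from training pairs. A minimal sketch with synthetic spectra (the matrix K and all values are made up for illustration):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
wavenumbers = 400
# K: reference absorptivity spectra of 3 analytes (columns), synthetic here.
K = np.abs(rng.normal(size=(wavenumbers, 3)))
c_true = np.array([0.8, 0.3, 1.2])                  # true concentrations
a = K @ c_true + rng.normal(0, 0.01, wavenumbers)   # measured mixture spectrum

# CLS: a = K c + e, solved by ordinary least squares for c.
c_cls, *_ = np.linalg.lstsq(K, a, rcond=None)

# PLS: calibrated on many (spectrum, concentration) training pairs instead.
C_train = rng.uniform(0, 2, size=(60, 3))
A_train = C_train @ K.T + rng.normal(0, 0.01, (60, wavenumbers))
pls = PLSRegression(n_components=3).fit(A_train, C_train)
c_pls = pls.predict(a.reshape(1, -1)).ravel()
print(c_cls, c_pls)
```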

  4. SU-D-BRB-05: Quantum Learning for Knowledge-Based Response-Adaptive Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    El Naqa, I; Ten, R

    Purpose: There is tremendous excitement in radiotherapy about applying data-driven methods to develop personalized clinical decisions for real-time response-based adaptation. However, classical statistical learning methods lack efficiency and the ability to predict outcomes under conditions of uncertainty and incomplete information. Therefore, we are investigating physics-inspired machine learning approaches that utilize quantum principles to develop a robust framework for dynamically adapting treatments to individual patients' characteristics and optimizing outcomes. Methods: We studied 88 liver SBRT patients, 35 on non-adaptive and 53 on adaptive protocols. Adaptation was based on liver function, using a split course of 3+2 fractions with a month break. The radiotherapy environment was modeled as a Markov decision process (MDP) of baseline and one-month-into-treatment states. The patient environment was modeled by a 5-variable state represented by the patient's clinical and dosimetric covariates. For comparison of classical and quantum learning methods, decision-making to adapt at one month was considered. The MDP objective was defined by the complication-free tumor control, P+ = TCP × (1 − NTCP). A simple regression model represented the state-action mapping. A single bit in the classical MDP and a qubit of two superimposed states in the quantum MDP represented the decision actions. Classical decision selection was done using reinforcement Q-learning, and quantum searching was performed using Grover's algorithm, which applies a uniform superposition over possible states and yields a quadratic speed-up. Results: The classical/quantum MDPs suggested adaptation (probability amplitude ≥0.5) 79% of the time for split courses and 100% for continuous courses. However, the classical MDP had an average adaptation probability of 0.5±0.22, while the quantum algorithm reached 0.76±0.28. In cases where adaptation failed, the classical MDP yielded an average amplitude of 0.31±0.26, while the quantum approach averaged a more optimistic 0.57±0.4, but with high phase fluctuations. Conclusion: Our results demonstrate that quantum machine learning approaches provide a feasible and promising framework for real-time and sequential clinical decision-making in adaptive radiotherapy.
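Grover's amplitude amplification is easy to demonstrate classically at toy scale. The sketch below simulates a search over four basis states, standing in for candidate adaptation decisions; it only illustrates the algorithm the abstract names, not the study's implementation:

```python
import numpy as np

# Toy Grover search over N = 4 basis states; the 'marked' state plays the
# role of the preferred adaptation decision. Not the study's code.
N, marked = 4, 2
psi = np.full(N, 1 / np.sqrt(N))          # uniform superposition

oracle = np.eye(N)
oracle[marked, marked] = -1               # flip the phase of the marked state
diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)   # inversion about the mean

# Optimal number of iterations ~ floor(pi/4 * sqrt(N)) = 1 for N = 4.
for _ in range(int(np.floor(np.pi / 4 * np.sqrt(N)))):
    psi = diffusion @ (oracle @ psi)

print(np.abs(psi) ** 2)   # probability concentrates on the marked action
```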

  5. Classical analogous of quantum cosmological perfect fluid models

    NASA Astrophysics Data System (ADS)

    Batista, A. B.; Fabris, J. C.; Gonçalves, S. V. B.; Tossa, J.

    2001-05-01

    Quantization in the minisuperspace of a gravity system coupled to a perfect fluid leads to a solvable model which implies singularity-free solutions through the construction of a superposition of the wavefunctions. We show that such models are equivalent to a classical system where, besides the perfect fluid, a repulsive fluid with an equation of state p_Q = ρ_Q is present. This leads us to speculate on the true nature of this quantization procedure. A perturbative analysis of the classical system reveals the condition for the stability of the classical system in terms of the existence of an anti-gravity phase.

  6. Fourier Analysis in Introductory Physics

    NASA Astrophysics Data System (ADS)

    Huggins, Elisha

    2007-01-01

    In an after-dinner talk at the fall 2005 meeting of the New England chapter of the AAPT, Professor Robert Arns drew an analogy between classical physics and Classic Coke. To generations of physics teachers and textbook writers, classical physics was the real thing. Modern physics, which in introductory textbooks "appears in one or more extra chapters at the end of the book, … is a divertimento that we might get to if time permits." Modern physics is more like vanilla or lime Coke, probably a fad, while "Classic Coke is part of your life; you do not have to think about it twice."

  7. Quantum to classical transition in the Hořava-Lifshitz quantum cosmology

    NASA Astrophysics Data System (ADS)

    Bernardini, A. E.; Leal, P.; Bertolami, O.

    2018-02-01

    A quasi-Gaussian quantum superposition of Hořava-Lifshitz (HL) stationary states is built in order to describe the transition of the quantum cosmological problem to the related classical dynamics. The obtained HL phase-space superposed Wigner function and its associated Wigner currents describe the conditions for the matching between classical and quantum phase-space trajectories. The matching quantum superposition parameter is associated with the total energy of the classical trajectory which, at the same time, drives the engendered Wigner function to the classical stationary regime. Through the analysis of the Wigner flows, the quantum fluctuations that distort the classical regime can be quantified as a measure of (non)classicality. Finally, the modifications to the Wigner currents due to the inclusion of perturbative potentials are computed in the HL quantum cosmological context. In particular, the inclusion of a cosmological constant provides complementary information that allows for connecting the age of the Universe with the overall stiff matter density profile.

  8. Effects of Classical Background Music on Stress, Anxiety, and Knowledge of Filipino Baccalaureate Nursing Students.

    PubMed

    Evangelista, Kevin; Macabasag, Romeo Luis A; Capili, Brylle; Castro, Timothy; Danque, Marilee; Evangelista, Hanzel; Rivero, Jenica Ana; Gonong, Michell Katrina; Diño, Michael Joseph; Cajayon, Sharon

    2017-10-28

    Previous work on the use of background music suggests conflicting results in various psychological, behavioral, and educational measures. This quasi-experiment examined the effect of integrating classical background music during a lecture on stress, anxiety, and knowledge. A total of 42 nursing students participated in this study. We utilized an independent-samples t-test and multivariate analysis of variance to examine the effect of classical background music. Our findings suggest that the presence or absence of classical background music does not affect stress, anxiety, or knowledge scores (Λ = 0.999, F(3, 78) = 0.029, p = 0.993). We provide literature to explain the non-significant result. Although classical music failed to establish a significant influence on the dependent variables, classical background music during lecture hours can be considered a non-threatening stimulus. We recommend follow-up studies regarding the role of classical background music in regulating attention control of nursing students during lecture hours.

  9. The classic EDCs, phthalate esters and organochlorines, in relation to abnormal sperm quality: a systematic review with meta-analysis.

    PubMed

    Wang, Chao; Yang, Lu; Wang, Shu; Zhang, Zhan; Yu, Yongquan; Wang, Meilin; Cromie, Meghan; Gao, Weimin; Wang, Shou-Lin

    2016-01-25

    The association between endocrine disrupting chemicals (EDCs) and human sperm quality is controversial due to the inconsistent literature findings, therefore, a systematic review with meta-analysis was performed. Through the literature search and selection based on inclusion criteria, a total of 9 studies (7 cross-sectional, 1 case-control, and 1 pilot study) were analyzed for classic EDCs (5 studies for phthalate esters and 4 studies for organochlorines). Funnel plots revealed a symmetrical distribution with no evidence of publication bias (Begg's test: intercept = 0.40; p = 0.692). The summary odds ratios (OR) of human sperm quality associated with the classic EDCs was 1.67 (95% CI: 1.31-2.02). After stratification by specific chemical class, consistent increases in the risk of abnormal sperm quality were found in phthalate ester group (OR = 1.52; 95% CI: 1.09-1.95) and organochlorine group (OR = 1.98; 95% CI: 1.34-2.62). Additionally, identification of official data, and a comprehensive review of the mechanisms were performed, and better elucidated the increased risk of these classic EDCs on abnormal sperm quality. The present systematic review and meta-analysis helps to identify the impact of classic EDCs on human sperm quality. However, it still highlights the need for additional epidemiological studies in a larger variety of geographic locations.
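The summary odds ratio reported here is the standard inverse-variance pooling of per-study log-ORs. A minimal fixed-effect sketch on hypothetical numbers (not the review's data):

```python
import numpy as np

# Hypothetical per-study odds ratios and 95% CIs (illustrative only).
or_hat = np.array([1.4, 1.9, 1.5, 2.1])
ci_lo  = np.array([0.9, 1.1, 1.0, 1.2])
ci_hi  = np.array([2.2, 3.3, 2.3, 3.7])

# Inverse-variance (fixed-effect) pooling on the log-OR scale.
log_or = np.log(or_hat)
se = (np.log(ci_hi) - np.log(ci_lo)) / (2 * 1.96)   # SE from CI width
w = 1 / se**2
pooled = np.sum(w * log_or) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
print(f"pooled OR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96 * pooled_se):.2f}-"
      f"{np.exp(pooled + 1.96 * pooled_se):.2f})")
```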

  10. The classic EDCs, phthalate esters and organochlorines, in relation to abnormal sperm quality: a systematic review with meta-analysis

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Yang, Lu; Wang, Shu; Zhang, Zhan; Yu, Yongquan; Wang, Meilin; Cromie, Meghan; Gao, Weimin; Wang, Shou-Lin

    2016-01-01

    The association between endocrine disrupting chemicals (EDCs) and human sperm quality is controversial due to the inconsistent literature findings, therefore, a systematic review with meta-analysis was performed. Through the literature search and selection based on inclusion criteria, a total of 9 studies (7 cross-sectional, 1 case-control, and 1 pilot study) were analyzed for classic EDCs (5 studies for phthalate esters and 4 studies for organochlorines). Funnel plots revealed a symmetrical distribution with no evidence of publication bias (Begg's test: intercept = 0.40; p = 0.692). The summary odds ratios (OR) of human sperm quality associated with the classic EDCs was 1.67 (95% CI: 1.31-2.02). After stratification by specific chemical class, consistent increases in the risk of abnormal sperm quality were found in the phthalate ester group (OR = 1.52; 95% CI: 1.09-1.95) and the organochlorine group (OR = 1.98; 95% CI: 1.34-2.62). Additionally, identification of official data, and a comprehensive review of the mechanisms were performed, and better elucidated the increased risk of these classic EDCs on abnormal sperm quality. The present systematic review and meta-analysis helps to identify the impact of classic EDCs on human sperm quality. However, it still highlights the need for additional epidemiological studies in a larger variety of geographic locations.

  11. [The mutation analysis of PAH gene and prenatal diagnosis in classical phenylketonuria family].

    PubMed

    Yan, Yousheng; Hao, Shengju; Yao, Fengxia; Sun, Qingmei; Zheng, Lei; Zhang, Qinghua; Zhang, Chuan; Yang, Tao; Huang, Shangzhi

    2014-12-01

    To characterize the mutation spectrum of the phenylalanine hydroxylase (PAH) gene and perform prenatal diagnosis for families with classical phenylketonuria. By stratified sequencing, mutations were detected in the exons and flanking introns of the PAH gene in 44 families with classical phenylketonuria. 47 fetuses were diagnosed by sequencing combined with linkage analysis of three common short tandem repeats (STRs) (PAH-STR, PAH-26 and PAH-32) in the PAH gene. Thirty-one types of mutations were identified. A total of 84 mutations were identified in 88 alleles (95.45%), of which the most common were R243Q (21.59%), EX6-96A>G (6.82%), IVS4-1G>A (5.86%) and IVS7+2T>A (5.86%). Most mutations were found in exons 3, 5, 6, 7, 11 and 12. The polymorphism information content (PIC) of the three STR markers was 0.71 (PAH-STR), 0.48 (PAH-26) and 0.40 (PAH-32), respectively. Prenatal diagnosis was performed successfully with the combined method in 47 fetuses of the 44 classical phenylketonuria families. Among them, 11 (23.4%) were diagnosed as affected, 24 (51.1%) as carriers, and 12 (25.5%) as unaffected. Prenatal diagnosis can be achieved efficiently and accurately by stratified sequencing of the PAH gene and linkage analysis of STRs for classical phenylketonuria families.
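The PIC values quoted for the STR markers come from the standard Botstein et al. (1980) formula; a small sketch with hypothetical allele frequencies:

```python
import numpy as np
from itertools import combinations

def pic(allele_freqs):
    """Polymorphism information content (Botstein et al., 1980):
    PIC = 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2."""
    p = np.asarray(allele_freqs, dtype=float)
    assert np.isclose(p.sum(), 1.0), "frequencies must sum to 1"
    het = 1.0 - np.sum(p**2)
    corr = sum(2 * (pi**2) * (pj**2) for pi, pj in combinations(p, 2))
    return het - corr

# Hypothetical allele frequencies for a 4-allele STR marker (illustrative).
print(round(pic([0.4, 0.3, 0.2, 0.1]), 2))
```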

  12. Modern modelling techniques are data hungry: a simulation study for predicting dichotomous endpoints.

    PubMed

    van der Ploeg, Tjeerd; Austin, Peter C; Steyerberg, Ewout W

    2014-12-22

    Modern modelling techniques may potentially provide more accurate predictions of binary outcomes than classical techniques. We aimed to study the predictive performance of different modelling techniques in relation to the effective sample size ("data hungriness"). We performed simulation studies based on three clinical cohorts: 1282 patients with head and neck cancer (with 46.9% 5-year survival), 1731 patients with traumatic brain injury (22.3% 6-month mortality) and 3181 patients with minor head injury (7.6% with CT scan abnormalities). We compared three relatively modern modelling techniques: support vector machines (SVM), neural nets (NN), and random forests (RF), and two classical techniques: logistic regression (LR) and classification and regression trees (CART). We created three large artificial databases with 20-fold, 10-fold and 6-fold replication of subjects, where we generated dichotomous outcomes according to different underlying models. We applied each modelling technique to increasingly larger development parts (100 repetitions). The area under the ROC curve (AUC) indicated the performance of each model in the development part and in an independent validation part. Data hungriness was defined by plateauing of the AUC and small optimism (difference between the mean apparent AUC and the mean validated AUC <0.01). We found that a stable AUC was reached by LR at approximately 20 to 50 events per variable, followed by CART, SVM, NN and RF models. Optimism decreased with increasing sample sizes and the same ranking of techniques. The RF, SVM and NN models showed instability and high optimism even with >200 events per variable. Modern modelling techniques such as SVM, NN and RF may need over 10 times as many events per variable as classical modelling techniques such as LR to achieve a stable AUC and a small optimism. This implies that such modern techniques should only be used in medical prediction problems if very large data sets are available.
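The "optimism" studied here (apparent minus validated AUC) is simple to reproduce on synthetic data. A minimal sketch with scikit-learn, not the authors' clinical cohorts or simulation design:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# One synthetic population; the first part is held out for validation.
X, y = make_classification(n_samples=20000, n_features=10, flip_y=0.2,
                           random_state=0)
X_val, y_val, X_dev, y_dev = X[:5000], y[:5000], X[5000:], y[5000:]

for n in (200, 1000, 10000):          # growing development samples
    Xd, yd = X_dev[:n], y_dev[:n]
    for model in (LogisticRegression(max_iter=1000),
                  RandomForestClassifier(n_estimators=200, random_state=0)):
        model.fit(Xd, yd)
        apparent = roc_auc_score(yd, model.predict_proba(Xd)[:, 1])
        validated = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
        print(n, type(model).__name__,
              f"optimism = {apparent - validated:.3f}")
```

At small n, the flexible model typically shows far larger optimism than logistic regression, which is the pattern the abstract reports.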

  13. Using geostatistical methods to estimate snow water equivalence distribution in a mountain watershed

    USGS Publications Warehouse

    Balk, B.; Elder, K.; Baron, Jill S.

    1998-01-01

    Knowledge of the spatial distribution of snow water equivalence (SWE) is necessary to adequately forecast the volume and timing of snowmelt runoff. In April 1997, peak accumulation snow depth and density measurements were independently taken in the Loch Vale watershed (6.6 km²), Rocky Mountain National Park, Colorado. Geostatistics and classical statistics were used to estimate SWE distribution across the watershed. Snow depths were spatially distributed across the watershed through kriging interpolation methods, which provide unbiased estimates that have minimum variances. Snow densities were spatially modeled through regression analysis. Combining the modeled depth and density with snow-covered area (SCA) produced an estimate of the spatial distribution of SWE. The kriged estimates of snow depth explained 37-68% of the observed variance in the measured depths. Steep slopes, variably strong winds, and complex energy balance in the watershed contribute to a large degree of heterogeneity in snow depth.
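Gaussian-process regression with a stationary kernel is the machine-learning formulation of the kriging used here, so a kriging-style interpolation can be sketched with scikit-learn; the coordinates and depths below are synthetic, not the Loch Vale survey:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
# Hypothetical survey: (x, y) coordinates in km and snow depths in cm.
coords = rng.uniform(0, 5, size=(60, 2))
depth = (150 + 40 * np.sin(coords[:, 0]) + 25 * coords[:, 1]
         + rng.normal(0, 5, 60))

# GP regression with an RBF kernel plus a nugget (noise) term; this is
# the statistical twin of kriging with a Gaussian variogram model.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)
                              + WhiteKernel(noise_level=1.0),
                              normalize_y=True).fit(coords, depth)

grid = np.array([[1.0, 1.0], [2.5, 4.0]])       # unsampled locations
pred, sd = gp.predict(grid, return_std=True)    # estimate + kriging s.d.
print(pred, sd)
```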

  14. graphkernels: R and Python packages for graph comparison

    PubMed Central

    Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten

    2018-01-01

    Summary: Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. Availability and implementation: The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. Contact: mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch. Supplementary information: Supplementary data are available online at Bioinformatics. PMID:29028902
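Independently of the package's own API, the simplest baseline it ships, the label-histogram kernel, fits in a few lines of plain numpy; the toy graphs below are hypothetical:

```python
import numpy as np

def label_histogram_kernel(labels_g1, labels_g2, n_labels):
    """Baseline graph kernel: inner product of node-label histograms."""
    h1 = np.bincount(labels_g1, minlength=n_labels)
    h2 = np.bincount(labels_g2, minlength=n_labels)
    return float(h1 @ h2)

# Two toy graphs given only by their integer node labels. Edges are
# ignored by this baseline; richer kernels such as Weisfeiler-Lehman
# iteratively refine labels using the neighbourhood structure.
g1 = np.array([0, 0, 1, 2])
g2 = np.array([0, 1, 1, 2, 2])
print(label_histogram_kernel(g1, g2, n_labels=3))   # 2*1 + 1*2 + 1*2 = 6
```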

  15. Association Between Sluggish Cognitive Tempo Symptoms and Attentional Network and Working Memory in Primary Schoolchildren.

    PubMed

    Camprodon-Rosanas, E; Ribas-Fitó, N; Batlle, S; Persavento, C; Alvarez-Pedrerol, M; Sunyer, J; Forns, J

    2017-04-01

    Few consistent data are available in relation to the cognitive and neuropsychological processes involved in sluggish cognitive tempo (SCT) symptoms. The objective of this study was to determine the association of working memory and attentional networks with SCT symptoms in primary schoolchildren. The participants were schoolchildren aged 7 to 10 years (n = 183) from primary schools in Catalonia (Spain). All the participants completed a working memory task (n-back) and an attentional network task (ANT). Their parents completed an SCT-Child Behavior Checklist self-report and a questionnaire concerning sociodemographic variables. Teachers of the participants provided information on ADHD symptoms and learning determinants. SCT symptoms were correlated with lower scores in both the n-back and ANT. In multivariate regression analysis, SCT symptoms were associated with slower hit reaction times from the ANT. Our results suggest that SCT symptoms are associated with a neuropsychological profile that is different from the classical ADHD profile and characterized by slower reaction times.

  16. The integrated Michaelis-Menten rate equation: déjà vu or vu jàdé?

    PubMed

    Goličnik, Marko

    2013-08-01

    A recent article by Johnson and Goody (Biochemistry, 2011;50:8264-8269) described the almost-100-year-old paper of Michaelis and Menten. Johnson and Goody translated this classic article and presented a historical perspective on the beginnings of enzyme-reaction data analysis, including a pioneering global fit of the integrated rate equation in its implicit form to experimental time-course data. They reanalyzed these data, although only numerical techniques were used to solve the model equations. However, there is also a still little-known algebraic rate-integration equation in closed form that enables direct fitting of the data. Therefore, in this commentary, I briefly present the integral solution of the Michaelis-Menten rate equation, which has been largely overlooked for three decades. This solution is expressed in terms of the Lambert W function, and I demonstrate here its use for global nonlinear regression curve fitting, as carried out with the original time-course dataset of Michaelis and Menten.
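The closed-form solution in question (often credited to Schnell and Mendoza) is S(t) = Km · W((S0/Km) · exp((S0 − Vmax·t)/Km)), which scipy evaluates directly; a minimal sketch fitting synthetic data with hypothetical Km and Vmax values:

```python
import numpy as np
from scipy.special import lambertw
from scipy.optimize import curve_fit

def substrate(t, S0, Km, Vmax):
    """Closed-form integrated Michaelis-Menten (Lambert W form):
    S(t) = Km * W((S0/Km) * exp((S0 - Vmax*t)/Km))."""
    z = (S0 / Km) * np.exp((S0 - Vmax * t) / Km)
    return Km * lambertw(z).real

# Direct nonlinear regression of a time course on the explicit solution,
# here on synthetic data (illustrative parameter values only).
t = np.linspace(0, 60, 30)
S_obs = substrate(t, S0=10.0, Km=4.0, Vmax=0.5) \
        + np.random.default_rng(4).normal(0, 0.05, t.size)
(Km_hat, Vmax_hat), _ = curve_fit(
    lambda t, Km, Vmax: substrate(t, 10.0, Km, Vmax),  # S0 assumed known
    t, S_obs, p0=(1.0, 1.0))
print(Km_hat, Vmax_hat)
```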

  17. graphkernels: R and Python packages for graph comparison.

    PubMed

    Sugiyama, Mahito; Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten

    2018-02-01

    Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. Contact: mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch. Supplementary data are available online at Bioinformatics. © The Author(s) 2017. Published by Oxford University Press.

  18. Model Averaging for Predicting the Exposure to Aflatoxin B1 Using DNA Methylation in White Blood Cells of Infants

    NASA Astrophysics Data System (ADS)

    Rahardiantoro, S.; Sartono, B.; Kurnia, A.

    2017-03-01

    In recent years, DNA methylation has received special attention as a way to reveal the patterns of many human diseases, and huge amounts of data are the norm in this setting. Researchers are interested in making predictions from these data, especially using regression analysis, where the classical approach fails. Model averaging by Ando and Li [1] is an alternative approach to this problem. This research applied model averaging to obtain the best prediction from high-dimensional data. As a case study, following Vargas et al [3], we used data on exposure to aflatoxin B1 (AFB1) and DNA methylation in white blood cells of infants in The Gambia. The best ensemble model was selected based on the minimum MAPE, MAE, and MSE of the predictions. The result is an ensemble model obtained by model averaging in which each candidate model has 15 predictors.
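Not Ando and Li's estimator itself, but the core idea, averaging predictions over many modest candidate regressions rather than fitting one unstable high-dimensional model, can be sketched as follows (synthetic stand-in for methylation features; the 15-predictor subset size echoes the abstract):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
# High-dimensional toy data: 200 features, only the first 5 informative.
X = rng.normal(size=(120, 200))
y = X[:, :5] @ np.array([2.0, -1.0, 1.5, 0.5, -2.0]) + rng.normal(0, 1, 120)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Candidate models: linear regressions on random 15-predictor subsets;
# the ensemble prediction is the (equal-weight) average across them.
preds = []
for _ in range(50):
    cols = rng.choice(X.shape[1], size=15, replace=False)
    m = LinearRegression().fit(X_tr[:, cols], y_tr)
    preds.append(m.predict(X_te[:, cols]))
ensemble = np.mean(preds, axis=0)
print("MSE:", np.mean((ensemble - y_te) ** 2))
```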

  19. Theoretical Studies in Chemical Kinetics - Annual Report, 1970.

    DOE R&D Accomplishments Database

    Karplus, Martin

    1970-10-01

    The research performed includes (a) Alkali-Halide, Alkali-Halide (MX, M′X′) Exchange Reactions; (b) Inversion Problem; (c) Quantum Mechanics of Scattering Processes; (d) Transition State Analysis of Classical Trajectories; (e) Differential Cross Sections from Classical Trajectories; and (f) Other Studies.

  20. Statistics in biomedical laboratory and clinical science: applications, issues and pitfalls.

    PubMed

    Ludbrook, John

    2008-01-01

    This review is directed at biomedical scientists who want to gain a better understanding of statistics: what tests to use, when, and why. In my view, even during the planning stage of a study it is very important to seek the advice of a qualified biostatistician. When designing and analyzing a study, it is important to construct and test global hypotheses, rather than to make multiple tests on the data. If the latter cannot be avoided, it is essential to control the risk of making false-positive inferences by applying multiple comparison procedures. For comparing two means or two proportions, it is best to use exact permutation tests rather than the better-known, classical, ones. For comparing many means, analysis of variance, often of a complex type, is the most powerful approach. The correlation coefficient should never be used to compare the performances of two methods of measurement, or two measures, because it does not detect bias. Instead, the Altman-Bland method of differences or least-products linear regression analysis should be preferred. Finally, the educational value to investigators of interaction with a biostatistician, before, during and after a study, cannot be overemphasized. (c) 2007 S. Karger AG, Basel.
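The Altman-Bland method of differences recommended here amounts to a bias estimate plus limits of agreement; a minimal sketch on hypothetical paired readings:

```python
import numpy as np

# Altman-Bland method of differences for two measurement methods
# (hypothetical paired readings; illustration only).
a = np.array([102., 98., 110., 95., 105., 120., 99., 101.])
b = np.array([100., 97., 113., 92., 108., 118., 97., 103.])

diff = a - b
bias = diff.mean()                         # systematic bias between methods
loa = 1.96 * diff.std(ddof=1)              # 95% limits of agreement
print(f"bias = {bias:.2f}, limits of agreement = "
      f"[{bias - loa:.2f}, {bias + loa:.2f}]")
# A correlation coefficient on (a, b) could be high even with a large
# bias, which is exactly the pitfall the review warns against.
```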

  1. Weibull Modulus Estimated by the Non-linear Least Squares Method: A Solution to Deviation Occurring in Traditional Weibull Estimation

    NASA Astrophysics Data System (ADS)

    Li, T.; Griffiths, W. D.; Chen, J.

    2017-11-01

    The Maximum Likelihood method and the Linear Least Squares (LLS) method have been widely used to estimate Weibull parameters for reliability of brittle and metal materials. In the last 30 years, many researchers focused on the bias of Weibull modulus estimation, and some improvements have been achieved, especially in the case of the LLS method. However, there is a shortcoming in these methods for a specific type of data, where the lower tail deviates dramatically from the well-known linear fit in a classic LLS Weibull analysis. This deviation can be commonly found in the measured properties of materials, and previous applications of the LLS method on this kind of dataset present an unreliable linear regression. This deviation was previously thought to be due to physical flaws (i.e., defects) contained in materials. However, this paper demonstrates that this deviation can also be caused by the linear transformation of the Weibull function, occurring in the traditional LLS method. Accordingly, it may not be appropriate to carry out a Weibull analysis according to the linearized Weibull function, and the Non-linear Least Squares method (Non-LS) is instead recommended for the Weibull modulus estimation of casting properties.
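The contrast between the two estimators is easy to reproduce: fit the Weibull CDF F(x) = 1 − exp(−(x/x0)^m) directly by nonlinear least squares, or linearize it as ln(−ln(1−F)) = m·ln x − m·ln x0 and fit a straight line. A sketch on hypothetical fracture strengths with median-rank plotting positions:

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_cdf(x, m, x0):
    """Two-parameter Weibull: F(x) = 1 - exp(-(x/x0)**m)."""
    return 1.0 - np.exp(-(x / x0) ** m)

# Hypothetical fracture strengths (MPa), sorted; median-rank probabilities
# via Bernard's approximation.
x = np.sort(np.array([212., 231., 245., 255., 262., 270., 278., 290., 305.]))
n = x.size
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)

# Non-linear least squares directly on the Weibull function (Non-LS) ...
(m_nls, x0_nls), _ = curve_fit(weibull_cdf, x, F, p0=(5.0, 300.0))

# ... versus the traditional linearized (LLS) fit, which weights the
# lower tail very differently after the double-log transformation.
m_lls, c = np.polyfit(np.log(x), np.log(-np.log(1.0 - F)), 1)
print(f"Non-LS m = {m_nls:.2f}; LLS m = {m_lls:.2f}")
```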

  2. Common acute childhood infections and appendicitis: a historical study of statistical association in 27 English public boarding schools, 1930-1934.

    PubMed

    Smallman-Raynor, M R; Cliff, A D; Ord, J K

    2010-08-01

    Although the involvement of common childhood infections in the aetiology of acute appendicitis has long been conjectured, supporting evidence is largely restricted to a disparate set of clinical case reports. A systematic population-based analysis of the implied comorbid associations is lacking in the literature. Drawing on a classic epidemiological dataset, assembled by the School Epidemics Committee of the United Kingdom's Medical Research Council (MRC) in the 1930s, this paper presents a historical analysis of the association between termly outbreaks of each of six common childhood infections (chickenpox, measles, mumps, rubella, scarlet fever and whooping cough) and operated cases of acute appendicitis in 27 English public boarding schools. When controlled for the potential confounding effects of school, year and season, multivariate negative binomial regression revealed a positive association between the level of appendicitis activity and the recorded rate of mumps (beta=0.15, 95% CI 0.07-0.24, P<0.001). Non-significant associations were identified between appendicitis and the other sample infectious diseases. Subject to data caveats, our findings suggest that further studies are required to determine whether the comorbid association between mumps and appendicitis is causal.
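A negative binomial regression of the kind used here can be sketched with statsmodels; the term-level data below are synthetic stand-ins for the MRC boarding-school dataset, and the variable names are hypothetical:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
# Hypothetical school-term data: appendicitis counts vs mumps attack rate,
# with term (season) dummies as a crude stand-in for the paper's controls.
n = 300
mumps_rate = rng.gamma(2.0, 1.0, n)
season = rng.integers(0, 3, n)                      # 3 school terms
lam = np.exp(-1.0 + 0.15 * mumps_rate + 0.1 * (season == 2))
y = rng.negative_binomial(5, 5 / (5 + lam))         # overdispersed counts

X = sm.add_constant(np.column_stack([mumps_rate,
                                     (season == 1).astype(float),
                                     (season == 2).astype(float)]))
fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.2)).fit()
print(fit.params[1])    # log-rate-ratio for mumps, cf. beta = 0.15 above
```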

  3. Intravitreal dobesilate in the treatment of choroidal neovascularisation associated with age-related macular degeneration: report of two cases

    PubMed Central

    Cuevas, Pedro; Outeiriño, Luis; Azanza, Carlos; Giménez-Gallego, Guillermo

    2012-01-01

    This case report presents the effectiveness of intravitreal administration of dobesilate, a synthetic fibroblast growth factor inhibitor, in two patients showing neovascular age-related macular degeneration of the classic and of the occult choroidal neovascularisation types, respectively. Our study demonstrates that the treatment induces the regression of both forms of this pathology, as assessed by spectral optical coherence tomography. Improvement of the lesions was accompanied by improvement in visual acuity. PMID:22948997

  4. Spontaneous involution of keratoacanthoma, iconographic documentation and similarity with volcanoes of nature.

    PubMed

    Enei Gahona, Maria Leonor; Machado Filho, Carlos d' Aparecida Santos

    2012-01-01

    Through iconography, we show a case of keratoacanthoma (KA) on the nasal dorsum at two different stages of evolution (maturation and regression) and its similarity with images of the Mount St. Helens volcano and the Orcus Patera crater. Using these illustrations, we highlight why the crateriform aspect of this tumor is included in its classic clinical description. Moreover, we photographically documented the self-involuting tendency of KA, an aspect that is seldom documented in the literature.

  5. On using summary statistics from an external calibration sample to correct for covariate measurement error.

    PubMed

    Guo, Ying; Little, Roderick J; McConnell, Daniel S

    2012-01-01

    Covariate measurement error is common in epidemiologic studies. Current methods for correcting measurement error with information from external calibration samples are insufficient to provide valid adjusted inferences. We consider the problem of estimating the regression of an outcome Y on covariates X and Z, where Y and Z are observed, X is unobserved, but a variable W that measures X with error is observed. Information about measurement error is provided in an external calibration sample where data on X and W (but not Y and Z) are recorded. We describe a method that uses summary statistics from the calibration sample to create multiple imputations of the missing values of X in the regression sample, so that the regression coefficients of Y on X and Z and associated standard errors can be estimated using simple multiple imputation combining rules, yielding valid statistical inferences under the assumption of a multivariate normal distribution. The proposed method is shown by simulation to provide better inferences than existing methods, namely the naive method, classical calibration, and regression calibration, particularly for correction for bias and achieving nominal confidence levels. We also illustrate our method with an example using linear regression to examine the relation between serum reproductive hormone concentrations and bone mineral density loss in midlife women in the Michigan Bone Health and Metabolism Study. Existing methods fail to adjust appropriately for bias due to measurement error in the regression setting, particularly when measurement error is substantial. The proposed method corrects this deficiency.

  6. The use of segmented regression in analysing interrupted time series studies: an example in pre-hospital ambulance care.

    PubMed

    Taljaard, Monica; McKenzie, Joanne E; Ramsay, Craig R; Grimshaw, Jeremy M

    2014-06-19

    An interrupted time series design is a powerful quasi-experimental approach for evaluating effects of interventions introduced at a specific point in time. To utilize the strength of this design, a modification to standard regression analysis, such as segmented regression, is required. In segmented regression analysis, the change in intercept and/or slope from pre- to post-intervention is estimated and used to test causal hypotheses about the intervention. We illustrate segmented regression using data from a previously published study that evaluated the effectiveness of a collaborative intervention to improve quality in pre-hospital ambulance care for acute myocardial infarction (AMI) and stroke. In the original analysis, a standard regression model was used with time as a continuous variable. We contrast the results from this standard regression analysis with those from segmented regression analysis. We discuss the limitations of the former and advantages of the latter, as well as the challenges of using segmented regression in analysing complex quality improvement interventions. Based on the estimated change in intercept and slope from pre- to post-intervention using segmented regression, we found insufficient evidence of a statistically significant effect on quality of care for stroke, although potential clinically important effects for AMI cannot be ruled out. Segmented regression analysis is the recommended approach for analysing data from an interrupted time series study. Several modifications to the basic segmented regression analysis approach are available to deal with challenges arising in the evaluation of complex quality improvement interventions.
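The segmented model described here has three key terms: the baseline trend, a step for the change in level, and a ramp for the change in slope at the intervention. A minimal sketch on synthetic monthly quality scores (intervention at month 24; all values illustrative):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
t = np.arange(48)
post = (t >= 24).astype(float)          # step: change in intercept (level)
t_since = np.where(t >= 24, t - 24, 0)  # ramp: change in slope
y = 60 + 0.2 * t + 5 * post + 0.4 * t_since + rng.normal(0, 2, t.size)

X = sm.add_constant(np.column_stack([t, post, t_since]))
fit = sm.OLS(y, X).fit()
# Coefficients: baseline slope, change in level, change in slope; the
# latter two carry the causal hypotheses about the intervention.
print(fit.params[1:], fit.pvalues[2:])
```

In a full interrupted time series analysis one would also model autocorrelation (e.g., with Newey-West errors), which this sketch omits.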

  7. Structural Time Series Model for El Niño Prediction

    NASA Astrophysics Data System (ADS)

    Petrova, Desislava; Koopman, Siem Jan; Ballester, Joan; Rodo, Xavier

    2015-04-01

    ENSO is a dominant feature of climate variability on inter-annual time scales, destabilizing weather patterns throughout the globe and having far-reaching socio-economic consequences. It not only leads to extensive rainfall and flooding in some regions of the world and anomalous droughts in others, ruining local agriculture, but also substantially affects marine ecosystems and the sustained exploitation of marine resources in particular coastal zones, especially along the Pacific South American coast. As a result, forecasting of ENSO, and especially of the warm phase of the oscillation (El Niño/EN), has long been a subject of intense research and improvement. The present study explores a novel method for the prediction of the Niño 3.4 index. The advantageous statistical modeling approach of structural time series analysis has not previously been applied to this problem. We have therefore developed such a model using a state-space approach for the unobserved components of the time series. Its distinguishing feature is that observations consist of various components - level, seasonality, cycle, disturbance, and regression variables incorporated as explanatory covariates - aimed at capturing the various modes of variability of the N3.4 time series. These components are modeled separately and then combined in a single model for analysis and forecasting. Customary statistical ENSO prediction models essentially use SST, SLP and wind stress in the equatorial Pacific. We introduce new regression variables: subsurface ocean temperature in the western equatorial Pacific, motivated by recent (Ramesh and Murtugudde, 2012) and classical research (Jin, 1997; Wyrtki, 1985) showing that subsurface processes and heat accumulation there are fundamental for the initiation of an El Niño event; and a southern Pacific temperature-difference tracer, the Rossbell dipole, leading EN by about nine months (Ballester, 2011).
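One way to fit a structural (unobserved-components) time series model of this kind in Python is statsmodels' state-space UnobservedComponents class. The sketch below is purely illustrative: the series stands in for the Niño 3.4 index and the "subsurface" covariate is made up, not the authors' data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 240
# Made-up covariate standing in for western Pacific subsurface temperature.
subsurface = np.sin(2 * np.pi * np.arange(n) / 48) + rng.normal(0, .2, n)
y = (0.5 * np.roll(subsurface, 9)           # covariate leads by ~9 months
     + np.cumsum(rng.normal(0, 0.05, n))    # slowly drifting level
     + rng.normal(0, 0.3, n))               # irregular disturbance

# Structural model: stochastic level + stochastic cycle + regression term,
# estimated jointly by Kalman-filter maximum likelihood.
mod = sm.tsa.UnobservedComponents(y, level='local level', cycle=True,
                                  stochastic_cycle=True,
                                  exog=np.roll(subsurface, 9)[:, None])
res = mod.fit(disp=False)
print(res.params)       # (toy alignment via np.roll; edges wrap around)
```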

  8. Robotic assisted versus pure laparoscopic surgery of the adrenal glands: a case-control study comparing surgical techniques.

    PubMed

    Morelli, Luca; Tartaglia, Dario; Bronzoni, Jessica; Palmeri, Matteo; Guadagni, Simone; Di Franco, Gregorio; Gennai, Andrea; Bianchini, Matteo; Bastiani, Luca; Moglia, Andrea; Ferrari, Vincenzo; Fommei, Enza; Pietrabissa, Andrea; Di Candio, Giulio; Mosca, Franco

    2016-11-01

    The role of the da Vinci Robotic System® in adrenal gland surgery is not yet well defined. The goal of this study was to compare robotic-assisted surgery with pure laparoscopic surgery in a single center. One hundred and sixteen patients underwent minimally invasive adrenalectomies in our department between June 1994 and December 2014, 41 of whom were treated with a robotic-assisted approach (robotic adrenalectomy, RA). Patients who underwent RA were matched according to BMI, age, gender, and nodule dimensions, and compared with 41 patients who had undergone laparoscopic adrenalectomies (LA). Statistical analysis was performed using the Student's t test for independent samples, and the relationship between operative time and other covariates was evaluated with a multivariable linear regression model. P < 0.05 was considered significant. Mean operative time was significantly shorter in the RA group than in the LA group. The subgroup analysis showed a shorter mean operative time in the RA group in patients with nodules ≥6 cm, BMI ≥ 30 kg/m², and previous abdominal surgery (p < 0.05). Results from the multiple regression model confirmed a shorter mean operative time with RA for nodules ≥6 cm (p = 0.010). Conversion rate and postoperative complications were 2.4% and 4.8% in the LA group and 0% and 4.8% in the RA group. In our experience, RA shows potential benefits compared to classic LA, in particular for patients with nodules ≥6 cm, BMI ≥ 30 kg/m², and previous abdominal surgery.

  9. Standardized Regression Coefficients as Indices of Effect Sizes in Meta-Analysis

    ERIC Educational Resources Information Center

    Kim, Rae Seon

    2011-01-01

    When conducting a meta-analysis, it is common to find many collected studies that report regression analyses, because multiple regression analysis is widely used in many fields. Meta-analysis uses effect sizes drawn from individual studies as a means of synthesizing a collection of results. However, indices of effect size from regression analyses…

  10. Stoichiometric network analysis and associated dimensionless kinetic equations. Application to a model of the Bray-Liebhafsky reaction.

    PubMed

    Schmitz, Guy; Kolar-Anić, Ljiljana Z; Anić, Slobodan R; Cupić, Zeljko D

    2008-12-25

    The stoichiometric network analysis (SNA) introduced by B. L. Clarke is applied to a simplified model of the complex oscillating Bray-Liebhafsky reaction under batch conditions, which had not been examined by this method before. This powerful method for the analysis of steady-state stability is also used to transform the classical differential equations into dimensionless equations. This transformation is easy and leads to a form of the equations combining the advantages of classical dimensionless equations with the advantages of the SNA. The dimensionless parameters used have orders of magnitude given by the experimental information about concentrations and currents. This greatly simplifies the study of the slow manifold and shows which parameters are essential for controlling its shape and consequently have an important influence on the trajectories. The effectiveness of these equations is illustrated with two examples: the study of the bifurcation points and a simple sensitivity analysis, different from the classical one and based more on the chemistry of the studied system.

  11. Finite element solution of torsion and other 2-D Poisson equations

    NASA Technical Reports Server (NTRS)

    Everstine, G. C.

    1982-01-01

    The NASTRAN structural analysis computer program may be used, without modification, to solve two dimensional Poisson equations such as arise in the classical Saint Venant torsion problem. The nonhomogeneous term (the right-hand side) in the Poisson equation can be handled conveniently by specifying a gravitational load in a "structural" analysis. The use of an analogy between the equations of elasticity and those of classical mathematical physics is summarized in detail.
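The analogy is that the torsion stress function satisfies a Poisson equation, ∇²φ = −2Gθ with φ = 0 on the boundary, so the right-hand side enters like a body load. A minimal finite-difference analogue (not NASTRAN input; unit square, arbitrary units):

```python
import numpy as np

# Saint-Venant torsion of a square cross-section via the Prandtl stress
# function: solve laplacian(phi) = -2*G*theta with phi = 0 on the boundary.
G_theta = 1.0                      # G * (twist per unit length), arbitrary
n, h = 41, 1.0 / 40                # grid points and spacing on unit square
phi = np.zeros((n, n))             # boundary rows/columns stay at zero

for _ in range(5000):              # Jacobi iteration on interior points
    phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1]
                              + phi[1:-1, :-2] + phi[1:-1, 2:]
                              + 2.0 * G_theta * h**2)

# Torque = 2 * integral of phi over the section; for a unit square the
# classical torsion-constant result is about 0.1406 * G*theta.
print(2 * phi.sum() * h**2)
```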

  12. Virtual reality exposure therapy in anxiety disorders: a quantitative meta-analysis.

    PubMed

    Opriş, David; Pintea, Sebastian; García-Palacios, Azucena; Botella, Cristina; Szamosközi, Ştefan; David, Daniel

    2012-02-01

    Virtual reality exposure therapy (VRET) is a promising intervention for the treatment of anxiety disorders. The main objective of this meta-analysis is to compare the efficacy of VRET, used in a behavioral or cognitive-behavioral framework, with that of the classical evidence-based treatments in anxiety disorders. A comprehensive search of the literature identified 23 studies (n = 608) that were included in the final analysis. The results show that in the case of anxiety disorders, (1) VRET does far better than the waitlist control; (2) the post-treatment results show similar efficacy between the behavioral and the cognitive-behavioral interventions incorporating a virtual reality exposure component and the classical evidence-based interventions with no virtual reality exposure component; (3) VRET has a powerful real-life impact, similar to that of the classical evidence-based treatments; (4) VRET has good stability of results over time, similar to that of the classical evidence-based treatments; (5) there is a dose-response relationship for VRET; and (6) there is no difference in the dropout rate between virtual reality exposure and in vivo exposure. Implications are discussed. © 2011 Wiley Periodicals, Inc.

  13. 'Four Seasons' in an animal rescue centre; classical music reduces environmental stress in kennelled dogs.

    PubMed

    Bowman, A; Scottish Spca; Dowell, F J; Evans, N P

    2015-05-01

    On admission to rescue and rehoming centres, dogs are faced with a variety of short- and long-term stressors, including novelty, spatial/social restriction and increased noise levels. Animate and inanimate environmental enrichment techniques have been employed within the kennel environment in an attempt to minimise the stress experienced by dogs. Previous studies have shown the potential physiological and psychological benefits of auditory stimulation, particularly classical music, within the kennel environment. This study determined the physiological/psychological changes that occur when kennelled dogs are exposed to long-term (7 days) auditory stimulation in the form of classical music, through assessment of effects on heart rate variability (HRV), salivary cortisol and behaviour. The study utilised a cross-over design in which two groups were exposed to two consecutive 7-day treatments: silence (control) and classical music (test). Group A was studied under silent conditions followed by 7 days of test conditions, during which a fixed classical music playlist was played from 10:00-16:30 h. Group B received the treatments in the reverse order. Results showed that auditory stimulation induced changes in HRV and behavioural data indicative of reduced stress levels in dogs in both groups (salivary cortisol data did not show any consistent patterns of change throughout the study). Specifically, there was a significant increase in HRV parameters such as μRR, STDRR, RMSSD, pNN50, RRTI, SD1 and SD2 and a significant decrease in μHR and LF/HF from the first day of silence (S1) to the first day of music (M1). Similarly, examination of behavioural data showed that dogs in both groups spent significantly more time sitting/lying and silent and less time standing and barking during auditory stimulation. General Regression Analysis (GRA) of the change in HRV parameters from S1 to M1 revealed that male dogs responded better to auditory stimulation than females. Interestingly, HRV and behavioural data collected on the seventh day of music (M2) were similar to those collected on S1, suggesting that the calming effects of music are lost within the 7 days of exposure. A small '9-day' study was conducted in an attempt to determine the time-scale over which dogs become habituated to classical music, and examination of the results suggests that this occurs as soon as the second day of exposure. The results of this study show the potential of auditory stimulation as a highly effective environmental enrichment technique for kennelled dogs. However, the results also indicate the requirement for further investigation into the way in which auditory stimulation should be incorporated within the daily kennel management regime in order to harness the full physiological and psychological benefits of music. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Spatial clusters of suicide in the municipality of São Paulo 1996-2005: an ecological study.

    PubMed

    Bando, Daniel H; Moreira, Rafael S; Pereira, Julio C R; Barrozo, Ligia V

    2012-08-23

    In a classical study, Durkheim mapped suicide rates, wealth, and low family density and realized that they clustered in northern France. Assessing other variables, such as religious society, he constructed a framework for the analysis of suicide which still allows international comparisons using the same basic methodology. The present study aims to identify possible significant clusters of suicide in the city of São Paulo and then verify their statistical associations with socio-economic and cultural characteristics. A spatial scan statistical test was performed to analyze the geographical pattern of suicide deaths of residents in the city of São Paulo by Administrative District, from 1996 to 2005. Relative risks and clusters of high and/or low rates, accounting for gender and age as covariates, were calculated using spatial scan statistics to identify geographical patterns. Logistic regression was used to estimate associations with socioeconomic variables, considering the spatial cluster of high suicide rates as the response variable. Drawing from Durkheim's original work, current World Health Organization (WHO) reports and recent reviews, the following independent variables were considered: marital status, income, education, religion, and migration. The mean suicide rate was 4.1/100,000 inhabitant-years. Against this baseline, two clusters were identified: the first, of increased risk (RR=1.66), comprising 18 districts in the central region; the second, of decreased risk (RR=0.78), including 14 districts in the southern region. The downtown area toward the southwestern region of the city displayed the highest risk for suicide, and though the overall risk may be considered low, the rate climbs to an intermediate level in this region. One logistic regression analysis contrasted the risk cluster (18 districts) against the remaining 78 districts, testing the effects of socioeconomic-cultural variables. The following categories of proportion of persons within the clusters were identified as risk factors: singles (OR=2.36), migrants (OR=1.50), Catholics (OR=1.37) and higher income (OR=1.06). In a second logistic model, likewise conceived, the following categories of proportion of persons were identified as protective factors: married (OR=0.49) and Evangelical (OR=0.60). This risk/protection profile is in accordance with the interpretation that, as a social phenomenon, suicide is related to social isolation. Thus, the classical framework put forward by Durkheim seems still to hold, even though its categorical expression requires re-interpretation.

  15. Using Dominance Analysis to Determine Predictor Importance in Logistic Regression

    ERIC Educational Resources Information Center

    Azen, Razia; Traxel, Nicole

    2009-01-01

    This article proposes an extension of dominance analysis that allows researchers to determine the relative importance of predictors in logistic regression models. Criteria for choosing logistic regression R[superscript 2] analogues were determined and measures were selected that can be used to perform dominance analysis in logistic regression. A…
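The core of dominance analysis is averaging the gain in fit a predictor adds across subsets of the other predictors. A minimal sketch with a McFadden R² analogue for logistic regression, averaging uniformly over subsets for brevity (a full general-dominance calculation averages within each subset size first); all data are synthetic:

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(9)
X = rng.normal(size=(500, 3))
p = 1 / (1 + np.exp(-(1.2 * X[:, 0] + 0.6 * X[:, 1])))
y = (rng.random(500) < p).astype(int)

def mcfadden_r2(cols):
    """McFadden R2 analogue for a logistic model on the given columns."""
    null = log_loss(y, np.full(500, y.mean()))
    if not cols:
        return 0.0
    m = LogisticRegression(max_iter=1000).fit(X[:, cols], y)
    return 1 - log_loss(y, m.predict_proba(X[:, cols])[:, 1]) / null

# Dominance: average added R2 of predictor j over subsets of the others.
for j in range(3):
    others = [k for k in range(3) if k != j]
    gains = [mcfadden_r2(list(s) + [j]) - mcfadden_r2(list(s))
             for r in range(len(others) + 1)
             for s in combinations(others, r)]
    print(f"x{j}: general dominance = {np.mean(gains):.3f}")
```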

  16. High-Speed Imaging Analysis of Register Transitions in Classically and Jazz-Trained Male Voices.

    PubMed

    Dippold, Sebastian; Voigt, Daniel; Richter, Bernhard; Echternach, Matthias

    2015-01-01

    Little data are available concerning register functions in different styles of singing such as classically or jazz-trained voices. Differences between registers seem to be much more audible in jazz singing than classical singing, and so we hypothesized that classically trained singers exhibit a smoother register transition, stemming from more regular vocal fold oscillation patterns. High-speed digital imaging (HSDI) was used for 19 male singers (10 jazz-trained singers, 9 classically trained) who performed a glissando from modal to falsetto register across the register transition. Vocal fold oscillation patterns were analyzed in terms of different parameters of regularity such as relative average perturbation (RAP), correlation dimension (D2) and shimmer. HSDI observations showed more regular vocal fold oscillation patterns during the register transition for the classically trained singers. Additionally, the RAP and D2 values were generally lower and more consistent for the classically trained singers compared to the jazz singers. However, intergroup comparisons showed no statistically significant differences. Some of our results may support the hypothesis that classically trained singers exhibit a smoother register transition from modal to falsetto register. © 2015 S. Karger AG, Basel.

  17. Coupling GIS spatial analysis and Ensemble Niche Modelling to investigate climate change-related threats to the Sicilian pond turtle Emys trinacris, an endangered species from the Mediterranean.

    PubMed

    Iannella, Mattia; Cerasoli, Francesco; D'Alessandro, Paola; Console, Giulia; Biondi, Maurizio

    2018-01-01

    The pond turtle Emys trinacris is an endangered endemic species of Sicily showing a fragmented distribution throughout the main island. In this study, we applied "Ensemble Niche Modelling", combining more classical statistical techniques, such as Generalized Linear Models and Multivariate Adaptive Regression Splines, with machine-learning approaches, such as Boosted Regression Trees and Maxent, to model the potential distribution of the species under current and future climatic conditions. Moreover, a "gap analysis" performed on both the species' presence sites and the predictions from the Ensemble Models is proposed to integrate the outputs from these models, in order to assess the conservation status of this threatened species in the context of biodiversity management. For this aim, four "Representative Concentration Pathways", corresponding to different greenhouse gas emission trajectories, were considered to project the obtained models to both 2050 and 2070. Areas lost, gained or remaining stable for the target species in the projected models were calculated. E. trinacris' potential distribution was found to be significantly dependent on precipitation-linked variables, mainly precipitation of the wettest and coldest quarters. Future negative effects on the conservation of this species, owing to more unstable precipitation patterns and extreme meteorological events, emerged from our analyses. Further, more than half of the sites currently inhabited by E. trinacris lie outside the Protected Areas network, highlighting inadequate management of the species by the authorities responsible for its protection. Our results therefore suggest that in the near future the Sicilian pond turtle will need the utmost attention from the scientific community to avoid the imminent risk of extinction. Finally, the gap analysis performed in a GIS environment proved to be a very informative post-modeling technique, potentially applicable to the management of species at risk and to Protected Areas planning in many contexts.

  18. Data Analysis Techniques for Physical Scientists

    NASA Astrophysics Data System (ADS)

    Pruneau, Claude A.

    2017-10-01

    Preface; How to read this book; 1. The scientific method; Part I. Foundation in Probability and Statistics: 2. Probability; 3. Probability models; 4. Classical inference I: estimators; 5. Classical inference II: optimization; 6. Classical inference III: confidence intervals and statistical tests; 7. Bayesian inference; Part II. Measurement Techniques: 8. Basic measurements; 9. Event reconstruction; 10. Correlation functions; 11. The multiple facets of correlation functions; 12. Data correction methods; Part III. Simulation Techniques: 13. Monte Carlo methods; 14. Collision and detector modeling; List of references; Index.

  19. STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION.

    PubMed

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2014-06-01

    Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue still remains that it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap in over a decade, we provide a unified theory to show explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression.

  20. Identifying risk sources of air contamination by polycyclic aromatic hydrocarbons.

    PubMed

    Huzlik, Jiri; Bozek, Frantisek; Pawelczyk, Adam; Licbinsky, Roman; Naplavova, Magdalena; Pondelicek, Michael

    2017-09-01

    This article is directed to determining concentrations of polycyclic aromatic hydrocarbons (PAHs) sorbed to solid particles in the air. Pollution sources were identified on the basis of the ratio of benzo[ghi]perylene (BghiPe) to benzo[a]pyrene (BaP). Because important information is lost by determining the simple ratio of concentrations, least squares linear regression (classic ordinary least squares regression), reduced major axis, orthogonal regression, and Kendall-Theil robust diagnostics were utilized for identification. Statistical evaluation using all the aforementioned methods demonstrated different ratios of the monitored PAHs in the intervals examined during warmer and colder periods. Analogous outputs were provided by comparing gradients of the emission factors acquired from the measured concentrations of BghiPe and BaP in motor vehicle exhaust gases. Based on these outputs, it was plausible to state that the influence of burning organic fuels in heating stoves is prevalent in colder periods, whereas in warmer periods transport was the exclusive source, because no other sources of PAH emissions were found in the examined locations. Copyright © 2017 Elsevier Ltd. All rights reserved.
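Reduced major axis (RMA) regression differs from OLS only in its slope formula, slope = sign(r) · sd(y)/sd(x), which treats measurement error in both variables symmetrically. A sketch on hypothetical paired concentrations:

```python
import numpy as np

def rma(x, y):
    """Reduced major axis regression: slope = sign(r) * sd(y)/sd(x),
    appropriate when both variables carry measurement error."""
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
    return slope, y.mean() - slope * x.mean()

# Hypothetical paired concentrations (ng/m3) of BaP and BghiPe.
rng = np.random.default_rng(10)
bap = rng.lognormal(0.0, 0.4, 40)
bghipe = 1.8 * bap * rng.lognormal(0.0, 0.15, 40)

slope_rma, _ = rma(bap, bghipe)
slope_ols = np.polyfit(bap, bghipe, 1)[0]
print(f"RMA slope = {slope_rma:.2f}, OLS slope = {slope_ols:.2f}")
# OLS attenuates the slope when x is noisy; RMA treats x and y
# symmetrically, which is why the paper compares several estimators.
```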

  1. STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION

    PubMed Central

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2014-01-01

    Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions, and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue remains: it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this theoretical gap, which has stood open for over a decade, we provide a unified theory that shows explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges; namely, it produces the same estimator in the next iteration. The general theory is demonstrated using four classical sparse estimation problems: sparse linear regression, sparse logistic regression, sparse precision matrix estimation, and sparse quantile regression. PMID:25598560

  2. Retrospective Analysis of a Classical Biological Control Programme

    USDA-ARS?s Scientific Manuscript database

    1. Classical biological control has been a key technology in the management of invasive arthropod pests globally for over 120 years, yet rigorous quantitative evaluations of programme success or failure are rare. Here, I used life table and matrix model analyses, and life table response experiments ...

  3. A Computer-Aided Instruction Program for Teaching the TOPS20-MM Facility on the DDN (Defense Data Network)

    DTIC Science & Technology

    1988-06-01

    Keywords: Computer Assisted Instruction; Artificial Intelligence. ...while he/she tries to perform given tasks. Means-ends analysis, a classic technique for solving search problems in Artificial Intelligence, has been used...

  4. Calories and portion sizes in recipes throughout 100 years: an overlooked factor in the development of overweight and obesity?

    PubMed

    Eidner, Maj Bloch; Lund, Anne-Sofie Qvistgaard; Harboe, Bodil Schroll; Clemmensen, Inge Haunstrup

    2013-12-01

    Large portion sizes have been associated with large energy intake, which can contribute to the development of overweight and obesity. Portion sizes of non-home-cooked food have increased in the past 20 years; however, less is known about portion sizes of home-cooked food. The aim of the study was to assess whether portion sizes, measured in calories, in Danish cookbook recipes have changed over the past 100 years. Portion size measured in calories was determined by content analysis of 21 classic Danish recipes in 13 editions of the famous Danish cookbook "Food" from 1909 to 2009. The calorie content of the recipes was determined using standard nutritional software, and the changes in calories were examined by simple linear regression analyses. The mean portion size in calories increased significantly, by 21% (β = 0.63; p < 0.01), over the past 100 years in the analyzed recipes. The mean portion size in calories from a composed homemade meal increased by 77% (β = 2.88; p < 0.01). The mean portion size in calories from meat increased by 27% (β = 0.85; p = 0.03), from starchy products by 148% (β = 1.28; p < 0.01), from vegetables by 37% (β = 0.21; p = 0.13) and from sauce by 47% (β = 0.56; p = 0.02) throughout the years. Portion sizes measured in calories in classical Danish recipes have increased significantly in the past 100 years and can be an important factor in increased energy intake and the risk of developing overweight and obesity.
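
    The trend analysis amounts to a simple linear regression of portion-size calories on publication year. The sketch below shows the mechanics on invented numbers; the edition years and calorie values are placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical portion sizes (kcal) for one recipe across cookbook editions;
# the actual values come from the 13 editions analyzed in the study.
year = np.array([1909, 1921, 1934, 1947, 1960, 1973, 1986, 1999, 2009])
kcal = np.array([520, 530, 525, 545, 560, 575, 590, 610, 630])

res = stats.linregress(year, kcal)
print(f"beta = {res.slope:.2f} kcal/year, p = {res.pvalue:.3g}")
print(f"implied change over 100 years: {100 * res.slope / kcal[0]:.0%}")
```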

  5. Tactile and bone-conduction auditory brain computer interface for vision and hearing impaired users.

    PubMed

    Rutkowski, Tomasz M; Mori, Hiromu

    2015-04-15

    The paper presents a report on a recently developed BCI alternative for users suffering from impaired vision (lack of focus or eye movements) or from the so-called "ear-blocking syndrome" (limited hearing). We report on our recent studies of the extent to which vibrotactile stimuli delivered to the head of a user can serve as a platform for a brain computer interface (BCI) paradigm. In the proposed tactile and bone-conduction auditory BCI, multiple novel head positions are used to evoke combined somatosensory and auditory (via the bone conduction effect) P300 brain responses, in order to define a multimodal tactile and bone-conduction auditory brain computer interface (tbcaBCI). To further remove EEG interferences and to improve P300 response classification, the synchrosqueezing transform (SST) is applied. SST outperforms classical time-frequency analysis methods for non-linear and non-stationary signals such as EEG. The proposed method is also computationally more efficient compared to empirical mode decomposition. The SST filtering allows for online EEG preprocessing, which is essential in the case of BCI. Experimental results with healthy BCI-naive users performing online tbcaBCI validate the paradigm, while the feasibility of the concept is illuminated through information transfer rate case studies. We present a comparison of the proposed SST-based preprocessing method, combined with a logistic regression (LR) classifier, with classical preprocessing and LDA-based classification BCI techniques. The proposed tbcaBCI paradigm, together with data-driven preprocessing methods, is a step forward in robust BCI applications research. Copyright © 2014 Elsevier B.V. All rights reserved.
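
    The classification stage pairs preprocessed EEG features with a logistic regression (LR) classifier. A minimal sketch follows, using scikit-learn and randomly generated stand-ins for SST-filtered epoch features; the feature construction and class proportions are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Hypothetical feature matrix: one row per averaged EEG epoch, columns are
# SST-filtered amplitudes at selected latencies; labels mark target (P300) epochs.
X_target = rng.normal(1.0, 1.0, size=(60, 8))      # assumed P300-present epochs
X_nontarget = rng.normal(0.0, 1.0, size=(300, 8))  # assumed P300-absent epochs
X = np.vstack([X_target, X_nontarget])
y = np.r_[np.ones(60), np.zeros(300)]

clf = LogisticRegression(max_iter=1000, class_weight="balanced")
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f}")
```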

  6. Autoimmune and Atopic Disorders and Risk of Classical Hodgkin Lymphoma

    PubMed Central

    Hollander, Peter; Rostgaard, Klaus; Smedby, Karin E.; Chang, Ellen T.; Amini, Rose-Marie; de Nully Brown, Peter; Glimelius, Bengt; Adami, Hans-Olov; Melbye, Mads; Glimelius, Ingrid; Hjalgrim, Henrik

    2015-01-01

    Results from previous investigations have shown associations between the risk of Hodgkin lymphoma (HL) and a history of autoimmune and atopic diseases, but it remains unknown whether these associations apply to all types of HL or only to specific subtypes. We investigated immune diseases and the risk of classical HL in a population-based case-control study that included 585 patients and 3,187 controls recruited from October 1999 through August 2002. We collected information on immune diseases through telephone interviews and performed serological analyses of specific immunoglobulin E reactivity. Tumor Epstein-Barr virus (EBV) status was determined for 498 patients. Odds ratios with 95% confidence intervals were calculated using logistic regression analysis. Rheumatoid arthritis was associated with a higher risk of HL (odds ratio (OR) = 2.63; 95% confidence interval (CI): 1.47, 4.70), especially EBV-positive HL (OR = 3.18; 95% CI: 1.23, 8.17), and with mixed-cellularity HL (OR = 4.25; 95% CI: 1.66, 10.90). HL risk was higher when we used proxies of severe rheumatoid arthritis, such as ever having received daily rheumatoid arthritis medication (OR = 3.98; 95% CI: 2.08, 7.62), rheumatoid arthritis duration of 6–20 years (OR = 3.80; 95% CI: 1.72, 8.41), or ever having been hospitalized for rheumatoid arthritis (OR = 7.36; 95% CI: 2.95, 18.38). Atopic diseases were not associated with the risk of HL. EBV replication induced by chronic inflammation in patients with autoimmune diseases might explain the higher risk of EBV-positive HL. PMID:26346543
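
    Odds ratios with 95% confidence intervals of the kind reported above drop out of a fitted logistic regression by exponentiating the coefficient and its confidence bounds. A hedged sketch with statsmodels on simulated exposure/outcome data; the assumed true OR of about 2.6 only mirrors the rheumatoid arthritis estimate for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Hypothetical case-control-style data: exposure = history of rheumatoid
# arthritis, outcome = classical HL case (1) vs control (0).
n = 1000
exposure = rng.binomial(1, 0.05, n)
logit_p = -1.5 + np.log(2.6) * exposure          # assumed true OR of ~2.6
outcome = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(exposure)
fit = sm.Logit(outcome, X).fit(disp=False)
or_est = np.exp(fit.params[1])
ci = np.exp(fit.conf_int()[1])                   # 95% CI for the exposure coefficient
print(f"OR = {or_est:.2f}, 95% CI: {ci[0]:.2f}-{ci[1]:.2f}")
```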

  7. Clinical factors associated with classical symptoms of aortic valve stenosis.

    PubMed

    Nishizaki, Yuji; Daimon, Masao; Miyazaki, Sakiko; Suzuki, Hiromasa; Kawata, Takayuki; Miyauchi, Katsumi; Chiang, Shuo-Ju; Makinae, Haruka; Shinozaki, Tomohiro; Daida, Hiroyuki

    2013-05-01

    The recognition of clinical symptoms is critical to a therapeutic strategy for aortic valve stenosis (AS). It was hypothesized that AS symptoms might have multiple causes; hence, a study was conducted to investigate the factors that separately influence the classic symptoms of dyspnea, angina and syncope in AS. The medical records of 170 consecutive patients with AS (≥ moderate grade) were reviewed. A multivariate logistic regression analysis was used to evaluate the hemodynamic and clinical factors that separately influence the development of three clinical symptoms: dyspnea (defined as NYHA class ≥ 2), angina, and syncope. The most common symptom was dyspnea (47.1%), followed by angina (12.4%) and syncope (4.7%). The factors associated with dyspnea were a higher e' ratio (p = 0.04) and peak aortic valve velocity (p = 0.01). Only the severity of AS was associated with syncope. The presence of hypertension was associated with angina (p = 0.04). Moreover, coronary angiography was performed in 59 patients before aortic valve replacement and revealed coronary stenosis (> 50% diameter stenosis) in 11/16 patients (69%) who had angina. The presence of coronary stenosis was significantly associated with angina (p = 0.02). The development of dyspnea, angina or syncope was influenced by different factors in AS. Dyspnea and syncope were mainly associated with AS severity, and diastolic dysfunction also influenced dyspnea. In contrast, angina was mainly related to the presence of coronary stenosis rather than to AS severity. These factors should be considered when selecting a therapeutic strategy for AS patients in the modern era.

  8. Classical Limit and Quantum Logic

    NASA Astrophysics Data System (ADS)

    Losada, Marcelo; Fortin, Sebastian; Holik, Federico

    2018-02-01

    The analysis of the classical limit of quantum mechanics usually focuses on the state of the system. The general idea is to explain the disappearance of the interference terms of quantum states appealing to the decoherence process induced by the environment. However, in these approaches it is not explained how the structure of quantum properties becomes classical. In this paper, we consider the classical limit from a different perspective. We consider the set of properties of a quantum system and we study the quantum-to-classical transition of its logical structure. The aim is to open the door to a new study based on dynamical logics, that is, logics that change over time. In particular, we appeal to the notion of hybrid logics to describe semiclassical systems. Moreover, we consider systems with many characteristic decoherence times, whose sublattices of properties become distributive at different times.

  9. Playing-related disabling musculoskeletal disorders in young and adult classical piano students.

    PubMed

    Bruno, S; Lorusso, A; L'Abbate, N

    2008-07-01

    To determine the prevalence of instrument-related musculoskeletal problems in classical piano students and investigate piano-specific risk factors, a specially developed four-part questionnaire was administered to classical piano students of two Apulian conservatories in southern Italy. A cross-sectional design was used. Prevalences of playing-related musculoskeletal disorders (MSDs) were calculated and cases were compared with non-cases. A total of 195 out of the 224 piano students responded (87%). Among the 195 responders, 75 (38.4%) were considered affected according to the pre-established criteria. Disabling MSDs showed similar prevalence rates for the neck (29.3%), thoracic spine (21.3%) and upper limbs (from 20.0 to 30.4%) in the affected group. Univariate analyses showed statistical differences in mean age, number of hours per week spent playing, more than 60 min of continuous playing without breaks, lack of sport practice and acceptance of the "No pain, no gain" criterion in students with music-related pain compared with unaffected pianists. A statistical correlation was found only between upper-limb disorders and hand size. No correlation with the model of piano played was found in the affected group. Multivariate analyses performed by logistic regression confirmed the independent correlation of the risk factors age, lack of sport practice and acceptance of the "No pain, no gain" criterion. Our study showed MSDs to be a common problem among classical piano students. At variance with several reported studies, older students appeared to be more frequently affected by disabling MSDs, and no difference in the prevalence rate of the disorders was found for females.

  10. A new model-free index of dynamic cerebral blood flow autoregulation.

    PubMed

    Chacón, Max; Jara, José Luis; Panerai, Ronney B

    2014-01-01

    The classic dynamic autoregulatory index (ARI), proposed by Aaslid and Tiecks, is one of the most widely used methods to assess the efficiency of dynamic cerebral autoregulation. Although this index is often used in clinical research and is also included in some commercial equipment, it exhibits considerable intra-subject variability, and has the tendency to produce false positive results in clinical applications. An alternative index of dynamic cerebral autoregulation is proposed, which overcomes most of the limitations of the classic method and also has the advantage of being model-free. This new index uses two parameters that are obtained directly from the response signal of the cerebral blood flow velocity to a transient decrease in arterial blood pressure provoked by the sudden release of bilateral thigh cuffs, and a third parameter measuring the difference in slope of this response and the change in arterial blood pressure achieved. With the values of these parameters, a corresponding classic autoregulatory index value could be calculated by using a linear regression model built from theoretical curves generated with the Aaslid-Tiecks model. In 16 healthy subjects who underwent repeated thigh-cuff manoeuvres, the model-free approach exhibited significantly lower intra-subject variability, as measured by the unbiased coefficient of variation, than the classic autoregulatory index (p = 0.032) and the Rate of Return (p<0.001), another measure of cerebral autoregulation used for this type of systemic pressure stimulus, from 39.23%±41.91% and 55.31%±31.27%, respectively, to 15.98%±7.75%.

  11. A New Model-Free Index of Dynamic Cerebral Blood Flow Autoregulation

    PubMed Central

    Chacón, Max; Jara, José Luis; Panerai, Ronney B.

    2014-01-01

    The classic dynamic autoregulatory index (ARI), proposed by Aaslid and Tiecks, is one of the most widely used methods to assess the efficiency of dynamic cerebral autoregulation. Although this index is often used in clinical research and is also included in some commercial equipment, it exhibits considerable intra-subject variability, and has the tendency to produce false positive results in clinical applications. An alternative index of dynamic cerebral autoregulation is proposed, which overcomes most of the limitations of the classic method and also has the advantage of being model-free. This new index uses two parameters that are obtained directly from the response signal of the cerebral blood flow velocity to a transient decrease in arterial blood pressure provoked by the sudden release of bilateral thigh cuffs, and a third parameter measuring the difference in slope of this response and the change in arterial blood pressure achieved. With the values of these parameters, a corresponding classic autoregulatory index value could be calculated by using a linear regression model built from theoretical curves generated with the Aaslid-Tiecks model. In 16 healthy subjects who underwent repeated thigh-cuff manoeuvres, the model-free approach exhibited significantly lower intra-subject variability, as measured by the unbiased coefficient of variation, than the classic autoregulatory index (p = 0.032) and the Rate of Return (p<0.001), another measure of cerebral autoregulation used for this type of systemic pressure stimulus, from 39.23%±41.91% and 55.31%±31.27%, respectively, to 15.98%±7.75%. PMID:25313519

  12. Fundamental theories of waves and particles formulated without classical mass

    NASA Astrophysics Data System (ADS)

    Fry, J. L.; Musielak, Z. E.

    2010-12-01

    Quantum and classical mechanics are two conceptually and mathematically different theories of physics, and yet they use the same concept of classical mass that was originally introduced by Newton in his formulation of the laws of dynamics. In this paper, the physical consequences of using the classical mass in both theories are explored, and a novel approach is presented that allows formulating fundamental (Galilean invariant) theories of waves and particles without formally introducing the classical mass. In this new formulation, the theories depend only on one common parameter called 'wave mass', which is deduced from experiments for selected elementary particles and for the classical mass of one kilogram. It is shown that quantum theory with the wave mass is independent of the Planck constant and that higher accuracy in calculations can be attained with such a theory. Natural units in connection with the presented approach are also discussed, and justification beyond dimensional analysis is given for the particular choice of such units.

  13. Classical confinement and outward convection of impurity ions in the MST RFP

    NASA Astrophysics Data System (ADS)

    Kumar, S. T. A.; Den Hartog, D. J.; Mirnov, V. V.; Caspary, K. J.; Magee, R. M.; Brower, D. L.; Chapman, B. E.; Craig, D.; Ding, W. X.; Eilerman, S.; Fiksel, G.; Lin, L.; Nornberg, M.; Parke, E.; Reusch, J. A.; Sarff, J. S.

    2012-05-01

    Impurity ion dynamics measured with simultaneously high spatial and temporal resolution reveal classical ion transport in the reversed-field pinch. The boron, carbon, oxygen, and aluminum impurity ion density profiles are obtained in the Madison Symmetric Torus [R. N. Dexter et al., Fusion Technol. 19, 131 (1991)] using a fast, active charge-exchange-recombination-spectroscopy diagnostic. Measurements are made during improved-confinement plasmas obtained using inductive control of tearing instability to mitigate stochastic transport. At the onset of the transition to improved confinement, the impurity ion density profile becomes hollow, with a slow decay in the core region concurrent with an increase in the outer region, implying an outward convection of impurities. Impurity transport from Coulomb collisions in the reversed-field pinch is classical for all collisionality regimes, and analysis shows that the observed hollow profile and outward convection can be explained by the classical temperature screening mechanism. The profile agrees well with classical expectations. Experiments performed with impurity pellet injection provide further evidence for classical impurity ion confinement.

  14. Convert a low-cost sensor to a colorimeter using an improved regression method

    NASA Astrophysics Data System (ADS)

    Wu, Yifeng

    2008-01-01

    Closed loop color calibration is a process to maintain consistent color reproduction for color printers. To perform closed loop color calibration, a pre-designed color target is printed and automatically measured by a color measuring instrument. A low-cost sensor has been embedded in the printer to perform the color measurement, and a series of sensor calibration and color conversion methods have been developed. The purpose is to obtain accurate colorimetric measurements from the data measured by the low-cost sensor. To achieve high-accuracy colorimetric measurement, we need to carefully calibrate the sensor and minimize all possible errors during the color conversion. After comparing several classical color conversion methods, a regression-based color conversion method was selected. Regression is a powerful method for estimating color conversion functions, but the main difficulty in using it is finding an appropriate function to describe the relationship between the input and the output data. In this paper, we propose to use 1D pre-linearization tables to improve the linearity between the input sensor measurement data and the output colorimetric data. Using this method, we can increase the accuracy of the regression, and thereby improve the accuracy of the color conversion.
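
    The proposed pipeline is: build per-channel 1D pre-linearization tables, then regress the linearized sensor values onto colorimetric coordinates. The sketch below illustrates that two-step structure on synthetic patches; the power-law sensor response, the made-up conversion matrix, and the affine regression are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical training data: raw sensor readings for 64 printed patches and
# their reference XYZ values from a colorimeter; all values are made up.
true_rgb = rng.uniform(0.05, 0.95, size=(64, 3))
raw = true_rgb ** 2.2                              # assumed nonlinear sensor response
M_true = np.array([[0.41, 0.21, 0.02],
                   [0.36, 0.72, 0.12],
                   [0.18, 0.07, 0.95]])
ref_xyz = true_rgb @ M_true + rng.normal(0, 0.002, (64, 3))

# Step 1: per-channel 1D pre-linearization. Here a power law stands in for
# lookup tables that would be built from gray-ramp measurements in practice.
linearized = raw ** (1 / 2.2)

# Step 2: regression from linearized sensor values to colorimetric XYZ.
A = np.hstack([linearized, np.ones((64, 1))])      # affine regression design
M_fit, *_ = np.linalg.lstsq(A, ref_xyz, rcond=None)
print("max |XYZ error|:", np.abs(A @ M_fit - ref_xyz).max())
```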

  15. Specificity vs. Generalizability: Emergence of Especial Skills in Classical Archery

    PubMed Central

    Czyż, Stanisław H.; Moss, Sarah J.

    2016-01-01

    There is evidence that the recall schema becomes more refined after constant practice. It is also believed that massive amounts of constant practice eventually lead to the emergence of especial skills, i.e., skills that have an advantage in performance over other actions from within the same class of actions. This advantage in performance was noticed when one-criterion practice, e.g., basketball free throws, was compared to non-practiced variations of the skill. However, there is no evidence on whether multi-criterion massive amounts of practice would give an advantage to the trained variations of the skill over non-trained ones, i.e., whether such practice would eventually lead to the development of (multi-)especial skills. The purpose of this study was to determine whether a massive amount of practice involving four criterion variations of the skill gives those variations an advantage in performance over the rest of the class of actions. In two experiments, we analyzed data from female (n = 8) and male classical archers (n = 10), who were required to shoot 30 shots from four accustomed distances, i.e., males at 30, 50, 70, and 90 m and females at 30, 50, 60, and 70 m. The shooting accuracy for the untrained distances (16 distances in men and 14 in women) was used to compile a regression line of shooting accuracy on distance. Regression-determined (expected) values were then compared to the shooting accuracy at the trained distances. The data revealed no significant differences between actual and expected results at trained distances, except for the 70 m shooting distance in men. The F-test for lack of fit showed that the regression computed for trained and non-trained shooting distances was linear. It can be concluded that especial skills emerge only after very specific practice, i.e., constant practice limited to only one variation of the skill. PMID:27547196
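
    The key statistical device here is the F-test for lack of fit, which splits residual variation around the fitted line into pure (replicate) error and lack of fit. A minimal sketch on invented archery-style data; the distances, replicate counts, and scores are placeholders, not the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical accuracy scores with several replicate shots per distance.
distances = np.repeat(np.arange(10, 95, 5), 5)     # 5 replicate scores per distance
accuracy = 10.5 - 0.08 * distances + rng.normal(0, 0.3, distances.size)

# Ordinary linear fit of accuracy on distance.
slope, intercept = np.polyfit(distances, accuracy, 1)

# Classical lack-of-fit F-test: compare lack of fit to pure (replicate) error.
levels = np.unique(distances)
pure_error = sum(((accuracy[distances == d] - accuracy[distances == d].mean()) ** 2).sum()
                 for d in levels)
lack_of_fit = sum((distances == d).sum()
                  * (accuracy[distances == d].mean() - (intercept + slope * d)) ** 2
                  for d in levels)
df_lof, df_pe = len(levels) - 2, distances.size - len(levels)
F = (lack_of_fit / df_lof) / (pure_error / df_pe)
p = stats.f.sf(F, df_lof, df_pe)
print(f"F({df_lof}, {df_pe}) = {F:.2f}, p = {p:.3f}")   # large p -> linearity not rejected
```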

  16. Specificity vs. Generalizability: Emergence of Especial Skills in Classical Archery.

    PubMed

    Czyż, Stanisław H; Moss, Sarah J

    2016-01-01

    There is evidence that the recall schema becomes more refined after constant practice. It is also believed that massive amounts of constant practice eventually lead to the emergence of especial skills, i.e., skills that have an advantage in performance over other actions from within the same class of actions. This advantage in performance was noticed when one-criterion practice, e.g., basketball free throws, was compared to non-practiced variations of the skill. However, there is no evidence on whether multi-criterion massive amounts of practice would give an advantage to the trained variations of the skill over non-trained ones, i.e., whether such practice would eventually lead to the development of (multi-)especial skills. The purpose of this study was to determine whether a massive amount of practice involving four criterion variations of the skill gives those variations an advantage in performance over the rest of the class of actions. In two experiments, we analyzed data from female (n = 8) and male classical archers (n = 10), who were required to shoot 30 shots from four accustomed distances, i.e., males at 30, 50, 70, and 90 m and females at 30, 50, 60, and 70 m. The shooting accuracy for the untrained distances (16 distances in men and 14 in women) was used to compile a regression line of shooting accuracy on distance. Regression-determined (expected) values were then compared to the shooting accuracy at the trained distances. The data revealed no significant differences between actual and expected results at trained distances, except for the 70 m shooting distance in men. The F-test for lack of fit showed that the regression computed for trained and non-trained shooting distances was linear. It can be concluded that especial skills emerge only after very specific practice, i.e., constant practice limited to only one variation of the skill.

  17. Extensions and applications of ensemble-of-trees methods in machine learning

    NASA Astrophysics Data System (ADS)

    Bleich, Justin

    Ensemble-of-trees algorithms have emerged at the forefront of machine learning due to their ability to generate high forecasting accuracy for a wide array of regression and classification problems. Classic ensemble methodologies such as random forests (RF) and stochastic gradient boosting (SGB) rely on algorithmic procedures to generate fits to data. In contrast, more recent ensemble techniques such as Bayesian Additive Regression Trees (BART) and Dynamic Trees (DT) focus on an underlying Bayesian probability model to generate the fits. These new probability model-based approaches show much promise versus their algorithmic counterparts, but also offer substantial room for improvement. The first part of this thesis focuses on methodological advances for ensemble-of-trees techniques, with an emphasis on the more recent Bayesian approaches. In particular, we focus on extensions of BART in four distinct ways. First, we develop a more robust implementation of BART for both research and application. We then develop a principled approach to variable selection for BART, as well as the ability to naturally incorporate prior information on important covariates into the algorithm. Next, we propose a method for handling missing data that relies on the recursive structure of decision trees and does not require imputation. Last, we relax the assumption of homoskedasticity in the BART model to allow for parametric modeling of heteroskedasticity. The second part of this thesis returns to the classic algorithmic approaches in the context of classification problems with asymmetric costs of forecasting errors. First, we consider the performance of RF and SGB more broadly and demonstrate their superiority to logistic regression for applications in criminology with asymmetric costs. Next, we use RF to forecast unplanned hospital readmissions upon patient discharge with asymmetric costs taken into account. Finally, we explore the construction of stable decision trees for forecasts of violence during probation hearings in court systems.
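
    For the asymmetric-cost classification setting discussed in the second part, one simple device (used here purely as an illustration, not necessarily the author's) is to encode the cost ratio through class weights in a random forest:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a recidivism-style dataset; class 1 is the rare,
# costly-to-miss outcome.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Asymmetric costs: a missed positive is taken to be 5x worse than a false
# alarm, encoded here through class weights to skew the forest's votes.
rf = RandomForestClassifier(n_estimators=300, class_weight={0: 1, 1: 5},
                            random_state=0)
rf.fit(X_tr, y_tr)
print(confusion_matrix(y_te, rf.predict(X_te)))
```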

  18. Reimer through Confucian Lenses: Resonances with Classical Chinese Aesthetics

    ERIC Educational Resources Information Center

    Tan, Leonard

    2015-01-01

    In this paper, I compare all three editions of Bennett Reimer's "A Philosophy of Music Education" with early Chinese philosophy, in particular, classical Chinese aesthetics. I structure my analysis around a quartet of interrelated themes: aesthetic education, education of feeling, aesthetic experience, and ethics and aesthetics. This…

  19. The Meaning of General Education: The Emergence of a Curriculum Paradigm.

    ERIC Educational Resources Information Center

    Miller, Gary E.

    An historical and conceptual analysis of general education in the United States is presented, comprising the following chapters: (1) transformation and the search for meaning (including a discussion of the concept of general education); (2) the classical curriculum confronts democracy (democratic pressures on the classical curriculum; the Yale…

  20. In-Service Training of Teachers as Behavior Modifiers: Review and Analysis.

    ERIC Educational Resources Information Center

    Eachus, Herbert Todd

    The basic principles of operant and classical conditioning are presented, and their applications for the in-service training of teachers are discussed. Certain classroom behaviors are analyzed and applied to the classic stimulus-response paradigm. Activities are generically classified as positive or negative reinforcers and these reinforcers, in…

  1. Regression: The Apple Does Not Fall Far From the Tree.

    PubMed

    Vetter, Thomas R; Schober, Patrick

    2018-05-15

    Researchers and clinicians are frequently interested in either: (1) assessing whether there is a relationship or association between 2 or more variables and quantifying this association; or (2) determining whether 1 or more variables can predict another variable. The strength of such an association is mainly described by the correlation. However, regression analysis and regression models can be used not only to identify whether there is a significant relationship or association between variables but also to generate estimations of such a predictive relationship between variables. This basic statistical tutorial discusses the fundamental concepts and techniques related to the most common types of regression analysis and modeling, including simple linear regression, multiple regression, logistic regression, ordinal regression, and Poisson regression, as well as the common yet often underrecognized phenomenon of regression toward the mean. The various types of regression analysis are powerful statistical techniques, which when appropriately applied, can allow for the valid interpretation of complex, multifactorial data. Regression analysis and models can assess whether there is a relationship or association between 2 or more observed variables and estimate the strength of this association, as well as determine whether 1 or more variables can predict another variable. Regression is thus being applied more commonly in anesthesia, perioperative, critical care, and pain research. However, it is crucial to note that regression can identify plausible risk factors; it does not prove causation (a definitive cause and effect relationship). The results of a regression analysis instead identify independent (predictor) variable(s) associated with the dependent (outcome) variable. As with other statistical methods, applying regression requires that certain assumptions be met, which can be tested with specific diagnostics.
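
    Regression toward the mean, the often underrecognized phenomenon noted above, is easy to demonstrate by simulation: select subjects on an extreme first measurement and their second measurement drifts back toward the population mean. A minimal sketch, with all numbers invented:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two noisy measurements of the same underlying trait; subjects selected for
# extreme first scores look less extreme on retest.
true_score = rng.normal(100, 10, size=10_000)
test1 = true_score + rng.normal(0, 10, size=10_000)
test2 = true_score + rng.normal(0, 10, size=10_000)

extreme = test1 > 120                    # select on an extreme first measurement
print(f"mean test1 (selected subjects): {test1[extreme].mean():.1f}")
print(f"mean test2 (same subjects):     {test2[extreme].mean():.1f}")  # closer to 100
```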

  2. Serum metabolites and risk of myocardial infarction and ischemic stroke: a targeted metabolomic approach in two German prospective cohorts.

    PubMed

    Floegel, Anna; Kühn, Tilman; Sookthai, Disorn; Johnson, Theron; Prehn, Cornelia; Rolle-Kampczyk, Ulrike; Otto, Wolfgang; Weikert, Cornelia; Illig, Thomas; von Bergen, Martin; Adamski, Jerzy; Boeing, Heiner; Kaaks, Rudolf; Pischon, Tobias

    2018-01-01

    Metabolomic approaches in prospective cohorts may offer a unique snapshot into early metabolic perturbations that are associated with a higher risk of cardiovascular diseases (CVD) in healthy people. We investigated the association of 105 serum metabolites, including acylcarnitines, amino acids, phospholipids and hexose, with risk of myocardial infarction (MI) and ischemic stroke in the European Prospective Investigation into Cancer and Nutrition (EPIC)-Potsdam (27,548 adults) and Heidelberg (25,540 adults) cohorts. Using case-cohort designs, we measured metabolites among individuals who were free of CVD and diabetes at blood draw but developed MI (n = 204 and n = 228) or stroke (n = 147 and n = 121) during follow-up (mean, 7.8 and 7.3 years) and among randomly drawn subcohorts (n = 2214 and n = 770). We used Cox regression analysis and combined results using meta-analysis. Independent of classical CVD risk factors, ten metabolites were associated with risk of MI in both cohorts, including sphingomyelins, diacyl-phosphatidylcholines and acyl-alkyl-phosphatidylcholines with pooled relative risks in the range of 1.21-1.40 per one standard deviation increase in metabolite concentrations. The metabolites showed positive correlations with total- and LDL-cholesterol (r ranged from 0.13 to 0.57). When additionally adjusting for total-, LDL- and HDL-cholesterol, triglycerides and C-reactive protein, acyl-alkyl-phosphatidylcholine C36:3 and diacyl-phosphatidylcholines C38:3 and C40:4 remained associated with risk of MI. When added to classical CVD risk models these metabolites further improved CVD prediction (c-statistics increased from 0.8365 to 0.8384 in EPIC-Potsdam and from 0.8344 to 0.8378 in EPIC-Heidelberg). None of the metabolites was consistently associated with stroke risk. Alterations in sphingomyelin and phosphatidylcholine metabolism, and particularly metabolites of the arachidonic acid pathway are independently associated with risk of MI in healthy adults.
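
    Associations like these come from Cox proportional hazards regression of event times on metabolite concentrations, adjusted for classical risk factors. A hedged sketch with the lifelines package on simulated data; the covariates, effect sizes, and censoring scheme are assumptions, not the EPIC data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(6)

# Hypothetical cohort: one standardized metabolite plus a classical risk
# factor, follow-up time in years, and an MI event indicator (all simulated).
n = 500
df = pd.DataFrame({
    "metabolite": rng.normal(0, 1, n),   # standardized serum concentration
    "ldl": rng.normal(0, 1, n),          # stand-in classical risk factor
})
hazard = 0.05 * np.exp(0.3 * df["metabolite"] + 0.2 * df["ldl"])
time = rng.exponential(1.0 / hazard)     # exponential event times
df["time"] = np.minimum(time, 8.0)       # administrative censoring at 8 years
df["event"] = (time < 8.0).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()                      # hazard ratios = exp(coef) per 1-SD increase
```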

  3. From appearance to essence: 10 years review of atypical amniotic fluid embolism.

    PubMed

    Shen, Fangrong; Wang, Lu; Yang, Weiwen; Chen, Youguo

    2016-02-01

    Amniotic fluid embolism (AFE) is an unpredictable and unpreventable complication of maternity. The presentation may range from relatively subtle clinical events to sudden maternal cardiac arrest. However, the non-classical form of AFE (atypical AFE) very commonly goes undiagnosed. The aim of this study was to examine population-based regional data from Suzhou, China; based on the analysis of all available case reports, we put forward an outline of atypical AFE and investigated whether any variation identified could be ascribed to methodology. In a retrospective study from January 2004 to December 2013, 53 cases were identified from the database of the Center for Disease Control (CDC) in the city of Suzhou. We investigated the presentations of atypical AFE and the maternal characteristics and potential factors underlying AFE. Multiple regression analysis was used to calculate adjusted odds ratios (ORs) and 95% confidence intervals (CIs). The incidence of AFE was 6.91 per 100,000 deliveries (53/766,895). Seventeen deaths occurred, a mortality rate of 32%. Atypical AFE may be an earlier stage or mild form of AFE; with timely remedy, no deaths occurred among atypical cases in this study. Atypical AFE presents as obstetric hemorrhage and/or postpartum pulmonary and renal dysfunction. Hyperfibrinolysis and coagulopathy may be the early laboratory findings of atypical AFE. Atypical and classical AFE shared the same risk factors, such as advanced maternal age, placental abnormalities, operative deliveries, eclampsia, cervical lacerations, and induction of labor. Staying alert to premonitory symptoms of AFE is critical to turning it into a remediable disease. Patient complaints such as breathlessness, chest pain, feeling cold, distress, panic, a feeling of nausea, and vomiting should elicit close attention. The management of a suspected episode of amniotic fluid embolism is generally considered to be supportive. Hysterectomy must be performed if there is further progression of symptoms. Due to advances in acute care, mortality has decreased in recent years, highlighting the importance of early detection and treatment.

  4. A comparison of cone-beam computed tomography and direct measurement in the examination of the mandibular canal and adjacent structures.

    PubMed

    Kim, Thomas S; Caruso, Joseph M; Christensen, Heidi; Torabinejad, Mahmoud

    2010-07-01

    The purpose of this investigation was to assess the ability of cone-beam computed tomography (CBCT) scanning to measure distances from the apices of selected posterior teeth to the mandibular canal. Measurements were taken from the apices of all posterior teeth that were superior to the mandibular canal. A pilot study was performed to determine the scanning parameters that produced the most diagnostic image and the best dissection technique. Twelve human hemimandibles with posterior teeth were scanned at 0.20 voxels on an I-CAT Classic CBCT device (Imaging Sciences International, Hatfield, PA), and the scans were exported in Digital Imaging and Communications in Medicine (DICOM) format. The scans were examined in InVivo Dental software (Anatomage, San Jose, CA), and measurements were taken from the apex of each root along its long axis to the upper portion of the mandibular canal. The specimens were dissected under a dental operating microscope, and analogous direct measurements were taken with a Boley gauge. All measurements were taken in triplicate at least 1 week apart by one individual (TSK). The results were averaged and the data separated into matching pairs for statistical analysis. There was no statistical difference (alpha = 0.05) between the methods of measurement according to the Wilcoxon matched pairs test (p = 0.676). For the anatomic measurements, the intra-rater correlation coefficient (ICC) was 0.980, and for the CBCT it was 0.949, indicating that both methods were highly reproducible. Both measurement methods were highly predictive of and highly correlated with each other according to regression and correlation analysis, respectively. Based on the results of this study, the I-CAT Classic can be used to measure distances from the apices of the posterior teeth to the mandibular canal as accurately as direct anatomic dissection. Copyright 2010 American Association of Endodontists. All rights reserved.
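
    The two statistical checks reported, a Wilcoxon matched-pairs test for systematic differences and a correlation between methods, are straightforward to reproduce. A sketch on invented paired measurements; the distances below are placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical paired apex-to-canal distances (mm) for the same roots,
# measured on CBCT and by direct anatomic dissection.
cbct = np.array([4.1, 5.3, 2.8, 6.0, 3.5, 4.7, 5.9, 3.2])
direct = np.array([4.0, 5.5, 2.9, 5.8, 3.6, 4.6, 6.1, 3.1])

stat, p = stats.wilcoxon(cbct, direct)   # matched-pairs test for systematic bias
r, p_r = stats.pearsonr(cbct, direct)    # agreement between the two methods
print(f"Wilcoxon p = {p:.3f}, Pearson r = {r:.3f}")
```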

  5. Contact stresses in gear teeth: A new method of analysis

    NASA Technical Reports Server (NTRS)

    Somprakit, Paisan; Huston, Ronald L.; Oswald, Fred B.

    1991-01-01

    A new, innovative procedure called point load superposition is presented for determining the contact stresses in mating gear teeth. It is believed that this procedure will greatly extend both the range of applicability and the accuracy of gear contact stress analysis. Point load superposition is based upon fundamental solutions from the theory of elasticity. It is an iterative numerical procedure which has distinct advantages over the classical Hertz method, the finite element method, and existing applications of the boundary element method. Specifically, friction and sliding effects, which are either excluded from or difficult to study with the classical methods, are routinely handled with the new procedure. Presented here are the basic theory and the algorithms. Several examples are given. Results are consistent with those of the classical theories. Applications to spur gears are discussed.

  6. Novel cystathionine β-synthase gene mutations in a Filipino patient with classic homocystinuria.

    PubMed

    Silao, Catherine Lynn T; Fabella, Terence Diane F; Rama, Kahlil Izza D; Estrada, Sylvia C

    2015-10-01

    Classic homocystinuria due to cystathionine β-synthase (CBS) deficiency is an autosomal recessive disorder of sulfur metabolism. Clinical manifestations include mental retardation, dislocation of the optic lens (ectopia lentis), skeletal abnormalities and a tendency to thromboembolic episodes. We present the first mutational analysis of CBS in a Filipino patient with classic homocystinuria. Genomic DNA was extracted from peripheral blood collected from a diagnosed Filipino patient with classic homocystinuria. The entire coding region of CBS (17 exons) was amplified using polymerase chain reaction and bidirectionally sequenced using standard protocols. The patient was found to be compound heterozygous for two novel mutations, g.13995G>A [c.982G>A; p.D328K] and g.15860-15868dupGCAGGAGCT [c.1083-1091dupGCAGGAGCT; p. Q362-L364dupQEL]. Four known single-nucleotide polymorphisms (rs234706, rs1801181, rs706208 and rs706209) were also detected in the present patient's CBS. The patient was heterozygous for all the identified alleles. This is the first mutational analysis of CBS done in a Filipino patient with classic homocystinuria who presented with a novel duplication mutation and a novel missense mutation. Homocystinuria due to CBS deficiency is a heterogeneous disorder at the molecular level. © 2015 Japan Pediatric Society.

  7. Classical metaphyseal lesions thought to be pathognomonic of child abuse are often artifacts or indicative of metabolic bone disease.

    PubMed

    Miller, Marvin; Mirkin, L David

    2018-06-01

    The objective of the present study was to review the histopathology in the original articles by authors Kleinman and Marks that described the specificity of the classical metaphyseal lesion for child abuse, and to determine whether there were any oversights in the authors' analysis. We reviewed the histopathology of the original studies that equated the classical metaphyseal lesion with child abuse. We compared this with the histopathology of metaphyseal fractures caused by known accidental, severe trauma in children, and reviewed the histopathology of artifacts that can sometimes be produced in bone histology preparations. Acute classical metaphyseal lesions showed no hemorrhage, and the chronic classical metaphyseal lesions showed islands of cartilage proliferation at the metaphyses and growth plate, findings consistent with rickets and other metabolic bone disorders. Some of the acute metaphyseal lesions were consistent with artifacts. We believe the original studies that equate the classical metaphyseal lesion with child abuse are flawed. The most compelling observation that challenges the histopathology of the classical metaphyseal lesion as being a fracture is the absence of hemorrhage in the acute classical metaphyseal lesion. We hypothesize that some of the classical metaphyseal lesions were artifacts or represented metabolic bone disorders that were not considered, and that these two non-traumatic explanations may have been the basis of the abnormal bone findings. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. An overview of techniques for linking high-dimensional molecular data to time-to-event endpoints by risk prediction models.

    PubMed

    Binder, Harald; Porzelius, Christine; Schumacher, Martin

    2011-03-01

    Analysis of molecular data promises identification of biomarkers for improving prognostic models, thus potentially enabling better patient management. For identifying such biomarkers, risk prediction models can be employed that link high-dimensional molecular covariate data to a clinical endpoint. In low-dimensional settings, a multitude of statistical techniques already exists for building such models, e.g. allowing for variable selection or for quantifying the added value of a new biomarker. We provide an overview of techniques for regularized estimation that transfer this toward high-dimensional settings, with a focus on models for time-to-event endpoints. Techniques for incorporating specific covariate structure are discussed, as well as techniques for dealing with more complex endpoints. Employing gene expression data from patients with diffuse large B-cell lymphoma, some typical modeling issues from low-dimensional settings are illustrated in a high-dimensional application. First, the performance of classical stepwise regression is compared to stage-wise regression, as implemented by a component-wise likelihood-based boosting approach. A second issue arises when the response is artificially transformed into a binary variable. The effects of the resulting loss of efficiency and potential bias in a high-dimensional setting are illustrated, and a link to competing risks models is provided. Finally, we discuss conditions for adequately quantifying the added value of high-dimensional gene expression measurements, both at the stage of model fitting and when performing evaluation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
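
    Stage-wise (component-wise) boosting, contrasted above with classical stepwise regression, updates one covariate at a time by a small shrunken step. The sketch below shows the idea for a simple Gaussian response; the paper's setting is likelihood-based boosting for Cox models, so this is an analogy, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(9)

# Component-wise L2 boosting: at each step, update only the single covariate
# that best reduces the residual sum of squares, by a small shrunken amount.
n, p = 200, 50
X = rng.normal(size=(n, p))
X /= np.linalg.norm(X, axis=0)           # unit-norm columns simplify the updates
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]         # sparse true signal
y = X @ beta_true + rng.normal(0, 0.5, n)

beta = np.zeros(p)
nu = 0.1                                 # step size (shrinkage)
residual = y.copy()
for _ in range(300):
    scores = X.T @ residual              # per-component least-squares coefficients
    j = np.argmax(np.abs(scores))        # best single component this round
    beta[j] += nu * scores[j]
    residual -= nu * scores[j] * X[:, j]

print("selected components:", np.flatnonzero(np.abs(beta) > 0.1))
```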

  9. Estimates of the atmospheric parameters of M-type stars: a machine-learning perspective

    NASA Astrophysics Data System (ADS)

    Sarro, L. M.; Ordieres-Meré, J.; Bello-García, A.; González-Marcos, A.; Solano, E.

    2018-05-01

    Estimating the atmospheric parameters of M-type stars has been a difficult task due to the lack of simple diagnostics in the stellar spectra. We aim at uncovering good sets of predictive features of stellar atmospheric parameters (Teff, log (g), [M/H]) in spectra of M-type stars. We define two types of potential features (equivalent widths and integrated flux ratios) able to explain the atmospheric physical parameters. We search the space of feature sets using a genetic algorithm that evaluates solutions by their prediction performance in the framework of the BT-Settl library of stellar spectra. Thereafter, we construct eight regression models using different machine-learning techniques and compare their performances with those obtained using the classical χ2 approach and independent component analysis (ICA) coefficients. Finally, we validate the various alternatives using two sets of real spectra from the NASA Infrared Telescope Facility (IRTF) and Dwarf Archives collections. We find that the cross-validation errors are poor measures of the performance of regression models in the context of physical parameter prediction in M-type stars. For R ∼ 2000 spectra with signal-to-noise ratios typical of the IRTF and Dwarf Archives, feature selection with genetic algorithms or alternative techniques produces only marginal advantages with respect to representation spaces that are unconstrained in wavelength (full spectrum or ICA). We make available the atmospheric parameters for the two collections of observed spectra as online material.

  10. The effect of alcohol, tobacco and caffeine consumption and vegetarian diet on gallstone prevalence.

    PubMed

    Walcher, Thomas; Haenle, Mark Martin; Mason, Richard Andrew; Koenig, Wolfgang; Imhof, Armin; Kratzer, Wolfgang

    2010-11-01

    To investigate the effects of alcohol, tobacco and caffeine consumption and of a vegetarian diet on gallstone prevalence in an urban population sample, a total of 2417 individuals underwent ultrasound examination and completed a standardized questionnaire as part of the EMIL study. Statistical analysis of the data considered the known risk factors of age, female sex, BMI and positive family history, and potential confounders such as alcohol, caffeine and tobacco consumption and vegetarian diet, using multiple logistic regression with variable selection. The prevalence of gallstones in the population sample was 8% (171 out of 2147). Findings of the study confirmed the classic risk factors of age, female sex, obesity and positive family history. After variable selection of potential risk factors in a logistic regression adjusted for age, female sex, BMI and positive family history, tobacco consumption [odds ratio (OR) 1.09, 95% confidence interval (CI): 0.76-1.56, P=0.64], caffeine consumption (OR: 0.77, 95% CI: 0.42-1.42, P=0.40) and vegetarian diet (OR: 1.14, 95% CI: 0.39-3.35, P=0.81) had no effect on gallstone prevalence. A protective effect against the development of gallstones was shown for alcohol consumption (OR: 0.67, 95% CI: 0.46-0.99, P=0.04). In summary, tobacco and caffeine consumption as well as vegetarian diet exerted no measurable effect on the prevalence of gallstones, whereas a protective effect was found for alcohol consumption.

  11. Differences in Kaposi sarcoma-associated herpesvirus-specific and herpesvirus-non-specific immune responses in classic Kaposi sarcoma cases and matched controls in Sicily.

    PubMed

    Amodio, Emanuele; Goedert, James J; Barozzi, Patrizia; Riva, Giovanni; Firenze, Alberto; Bonura, Filippa; Viviano, Enza; Romano, Nino; Luppi, Mario

    2011-10-01

    Kaposi sarcoma (KS) might develop because of incompetent immune responses, both non-specifically and specifically against the KS-associated herpesvirus (KSHV). Peripheral blood mononuclear cells from 15 classic (non-AIDS) KS cases, 13 KSHV seropositives (without KS) and 15 KSHV-seronegative controls were tested for interferon-γ T-cell (enzyme-linked immunospot [Elispot]) responses to KSHV-latency-associated nuclear antigen (LANA), KSHV-K8.1 and CMV/Epstein-Barr virus (EBV) peptide pools. The forearm and thigh of each participant was also tested for delayed-type hypersensitivity (DTH) against common recall antigens. Groups were compared with Fisher exact test and multinomial logistic regression to calculate odds ratios (OR) and 95% confidence intervals (CI). A KSHV Elispot response was detected in 10 (67%) classic KS cases, 11 (85%) KSHV seropositives (without KS) and two (13%) seronegative controls. All four cases with KSHV-LANA responses had current KS lesions, whereas five of six cases with KSHV-K8.1 responses had no lesions (P = 0.048). No case responded to both LANA and K8.1. Compared with the seronegative controls, the risk for classic KS was inversely related to DTH in the thigh (OR 0.71, 95% CI 0.55-0.94, P = 0.01), directly associated with DTH in the forearm (OR 1.35, 95% CI 1.02-1.80, P = 0.04) and tended to be increased fivefold per KSHV Elispot response (OR 5.13, 95% CI 0.86-30.77, P = 0.07). Compared with KSHV seropositives (without KS), the risk for classic KS was reduced fivefold (OR 0.20, CI 0.03-0.77, P = 0.04) per KSHV response. The CMV/EBV Elispot responses were irrelevant. Deficiency of both KSHV-specific and KSHV-non-specific immunity is associated with classic KS. This might clarify why Kaposi sarcoma responds to immune reconstitution. © 2011 Japanese Cancer Association and this article is a US Government work and is in the public domain in the USA.

  12. Applied Multiple Linear Regression: A General Research Strategy

    ERIC Educational Resources Information Center

    Smith, Brandon B.

    1969-01-01

    Illustrates some of the basic concepts and procedures for using regression analysis in experimental design, analysis of variance, analysis of covariance, and curvilinear regression. Applications to evaluation of instruction and vocational education programs are illustrated. (GR)

  13. Spatial and temporal epidemiological analysis in the Big Data era.

    PubMed

    Pfeiffer, Dirk U; Stevens, Kim B

    2015-11-01

    Concurrent with global economic development in the last 50 years, the opportunities for the spread of existing diseases and emergence of new infectious pathogens, have increased substantially. The activities associated with the enormously intensified global connectivity have resulted in large amounts of data being generated, which in turn provides opportunities for generating knowledge that will allow more effective management of animal and human health risks. This so-called Big Data has, more recently, been accompanied by the Internet of Things which highlights the increasing presence of a wide range of sensors, interconnected via the Internet. Analysis of this data needs to exploit its complexity, accommodate variation in data quality and should take advantage of its spatial and temporal dimensions, where available. Apart from the development of hardware technologies and networking/communication infrastructure, it is necessary to develop appropriate data management tools that make this data accessible for analysis. This includes relational databases, geographical information systems and most recently, cloud-based data storage such as Hadoop distributed file systems. While the development in analytical methodologies has not quite caught up with the data deluge, important advances have been made in a number of areas, including spatial and temporal data analysis where the spectrum of analytical methods ranges from visualisation and exploratory analysis, to modelling. While there used to be a primary focus on statistical science in terms of methodological development for data analysis, the newly emerged discipline of data science is a reflection of the challenges presented by the need to integrate diverse data sources and exploit them using novel data- and knowledge-driven modelling methods while simultaneously recognising the value of quantitative as well as qualitative analytical approaches. Machine learning regression methods, which are more robust and can handle large datasets faster than classical regression approaches, are now also used to analyse spatial and spatio-temporal data. Multi-criteria decision analysis methods have gained greater acceptance, due in part, to the need to increasingly combine data from diverse sources including published scientific information and expert opinion in an attempt to fill important knowledge gaps. The opportunities for more effective prevention, detection and control of animal health threats arising from these developments are immense, but not without risks given the different types, and much higher frequency, of biases associated with these data. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Quantum annealing versus classical machine learning applied to a simplified computational biology problem

    NASA Astrophysics Data System (ADS)

    Li, Richard Y.; Di Felice, Rosa; Rohs, Remo; Lidar, Daniel A.

    2018-03-01

    Transcription factors regulate gene expression, but how these proteins recognize and specifically bind to their DNA targets is still debated. Machine learning models are effective means to reveal interaction mechanisms. Here we studied the ability of a quantum machine learning approach to classify and rank binding affinities. Using simplified data sets of a small number of DNA sequences derived from actual binding affinity experiments, we trained a commercially available quantum annealer to classify and rank transcription factor binding. The results were compared to state-of-the-art classical approaches for the same simplified data sets, including simulated annealing, simulated quantum annealing, multiple linear regression, LASSO, and extreme gradient boosting. Despite technological limitations, we find a slight advantage in classification performance and nearly equal ranking performance using the quantum annealer for these fairly small training data sets. Thus, we propose that quantum annealing might be an effective method to implement machine learning for certain computational biology problems.
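
    The classical baselines named above are easy to assemble. A hedged sketch comparing multiple linear regression, LASSO, and gradient boosting on a synthetic sequence-scoring task follows; scikit-learn's GradientBoostingRegressor stands in for extreme gradient boosting, and the one-hot features and sparse effects are invented.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)

# Hypothetical stand-in for the paper's task: binary sequence features of
# short DNA sequences vs. a continuous binding score to be ranked.
X = rng.integers(0, 2, size=(200, 32)).astype(float)
w = rng.normal(0, 1, 32) * (rng.random(32) < 0.25)   # sparse true effects
y = X @ w + rng.normal(0, 0.5, 200)

for name, model in [("linear regression", LinearRegression()),
                    ("LASSO", Lasso(alpha=0.05)),
                    ("gradient boosting", GradientBoostingRegressor(random_state=0))]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name:>18}: CV R^2 = {r2:.2f}")
```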

  15. Emotional connotations of words related to authority and community.

    PubMed

    Schauenburg, Gesche; Ambrasat, Jens; Schröder, Tobias; von Scheve, Christian; Conrad, Markus

    2015-09-01

    We present a database of 858 German words from the semantic fields of authority and community, which represent core dimensions of human sociality. The words were selected on the basis of co-occurrence profiles of representative keywords for these semantic fields. All words were rated along five dimensions, each measured by a bipolar semantic-differential scale: besides the classic dimensions of affective meaning (valence, arousal, and potency), we collected ratings of authority and community with newly developed scales. The results from cluster, correlational, and multiple regression analyses on the rating data suggest a robust negativity bias for authority valuation among German raters recruited via university mailing lists, whereas community ratings appear to be rather unrelated to the well-established affective dimensions. Furthermore, our data involve a strong overall negative correlation (rather than the classical U-shaped distribution) between valence and arousal for socially relevant concepts. Our database provides a valuable resource for research questions at the intersection of cognitive neuroscience and social psychology. It can be downloaded as supplemental materials with this article.

  16. Line mixing effects in isotropic Raman spectra of pure N{sub 2}: A classical trajectory study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ivanov, Sergey V., E-mail: serg.vict.ivanov@gmail.com; Boulet, Christian; Buzykin, Oleg G.

    2014-11-14

    Line mixing effects in the Q branch of pure N{sub 2} isotropic Raman scattering are studied at room temperature using a classical trajectory method. It is the first study using an extended modified version of Gordon's classical theory of impact broadening and shift of rovibrational lines. The whole relaxation matrix is calculated using an exact 3D classical trajectory method for binary collisions of rigid N{sub 2} molecules employing the most up-to-date intermolecular potential energy surface (PES). A simple symmetrizing procedure is employed to improve off-diagonal cross-sections so that they obey exactly the principle of detailed balance. The adequacy of the results is confirmed by the sum rule. The comparison is made with available experimental data as well as with benchmark fully quantum close coupling [F. Thibault, C. Boulet, and Q. Ma, J. Chem. Phys. 140, 044303 (2014)] and refined semi-classical Robert-Bonamy [C. Boulet, Q. Ma, and F. Thibault, J. Chem. Phys. 140, 084310 (2014)] results. All calculations (classical, quantum, and semi-classical) were made using the same PES. The agreement between classical and quantum relaxation matrices is excellent, opening the way to the analysis of more complex molecular systems.

  17. Application of ply level analysis to flexural wave propagation

    NASA Astrophysics Data System (ADS)

    Valisetty, R. R.; Rehfield, L. W.

    1988-10-01

    A brief survey is presented of the shear deformation theories of laminated plates. It indicates that there are certain non-classical influences that affect bending-related behavior in the same way as do the transverse shear stresses. They include bending- and stretching-related section warping and the concomitant non-classical surface parallel stress contributions and the transverse normal stress. A bending theory gives significantly improved performance if these non-classical effects are incorporated. The heterogeneous shear deformations that are characteristic of laminates with highly dissimilar materials, however, require that attention be paid to the modeling of local rotations. In this paper, it is shown that a ply level analysis can be used to model such disparate shear deformations. Here, equilibrium of each layer is analyzed separately. Earlier applications of this analysis include free-edge laminate stresses. It is now extended to the study of flexural wave propagation in laminates. A recently developed homogeneous plate theory is used as a ply level model. Due consideration is given to the non-classical influences, and no shear correction factors are introduced extraneously in this theory. The results for the lowest flexural mode of travelling planar harmonic waves indicate that this approach is competitive and yields better results for certain laminates.

  18. External Tank Liquid Hydrogen (LH2) Prepress Regression Analysis Independent Review Technical Consultation Report

    NASA Technical Reports Server (NTRS)

    Parsons, Vickie S.

    2009-01-01

    The request to conduct an independent review of regression models, developed for determining the expected Launch Commit Criteria (LCC) External Tank (ET)-04 cycle count for the Space Shuttle ET tanking process, was submitted to the NASA Engineering and Safety Center (NESC) on September 20, 2005. The NESC team performed an independent review of the regression models documented in Prepress Regression Analysis, Tom Clark and Angela Krenn, 10/27/05. This consultation consisted of a peer review by statistical experts of the proposed regression models provided in the Prepress Regression Analysis. This document is the consultation's final report.

  19. Predicting the Trends of Social Events on Chinese Social Media.

    PubMed

    Zhou, Yang; Zhang, Lei; Liu, Xiaoqian; Zhang, Zhen; Bai, Shuotian; Zhu, Tingshao

    2017-09-01

    Growing interest in social events on social media has accompanied the rapid development of the Internet. Social events that occur in the "real" world can spread rapidly on social media (e.g., Sina Weibo), which may trigger severe consequences and thus require the government's timely attention and responses. This article proposes to predict the trends of social events on Sina Weibo, currently the most popular social medium in China. Based on theories from social psychology and the communication sciences, we extract a comprehensive set of effective features related to the trends of social events on Chinese social media, and we construct trend prediction models using three classical regression algorithms. We found that lasso regression performed best, with a precision of 0.78 and a recall of 0.88. The results of our experiments demonstrate the effectiveness of the proposed approach.
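
    A minimal sketch of the evaluation pattern described above: a regression model's continuous output is thresholded into "trending / not trending" and scored with precision and recall. The features, labels, and threshold below are invented placeholders; the study's actual features come from social psychology and communication theory.

```python
# Threshold a lasso regression's output and score it as a classifier.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))        # e.g. comment counts, repost rates...
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0

model = Lasso(alpha=0.05).fit(X[:150], y[:150].astype(float))
pred = model.predict(X[150:]) > 0.5   # threshold the regression output

print("precision:", precision_score(y[150:], pred))
print("recall:   ", recall_score(y[150:], pred))
```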

  20. Bayesian Regression with Network Prior: Optimal Bayesian Filtering Perspective

    PubMed Central

    Qian, Xiaoning; Dougherty, Edward R.

    2017-01-01

    The recently introduced intrinsically Bayesian robust filter (IBRF) provides fully optimal filtering relative to a prior distribution over an uncertainty class of joint random process models, whereas formerly the theory was limited to model-constrained Bayesian robust filters, for which optimization was limited to the filters that are optimal for models in the uncertainty class. This paper extends the IBRF theory to the situation where there are both a prior on the uncertainty class and sample data. The result is optimal Bayesian filtering (OBF), where optimality is relative to the posterior distribution derived from the prior and the data. The IBRF theories for effective characteristics and canonical expansions extend to the OBF setting. A salient focus of the present work is to demonstrate the advantages of Bayesian regression within the OBF setting over the classical Bayesian approach in the context of linear Gaussian models. PMID:28824268

  1. Simultaneous high-speed schlieren and OH chemiluminescence imaging in a hybrid rocket combustor at elevated pressures

    NASA Astrophysics Data System (ADS)

    Miller, Victor; Jens, Elizabeth T.; Mechentel, Flora S.; Cantwell, Brian J.; Stanford Propulsion; Space Exploration Group Team

    2014-11-01

    In this work, we present observations of the overall features and dynamics of flow and combustion in a slab-type hybrid rocket combustor. Tests were conducted in the recently upgraded Stanford Combustion Visualization Facility, a hybrid rocket combustor test platform capable of generating constant mass-flux flows of oxygen. High-speed (3 kHz) schlieren and OH chemiluminescence imaging were used to visualize the flow. We present imaging results for the combustion of two different fuel grains, a classic low-regression-rate polymethyl methacrylate (PMMA) and a high-regression-rate paraffin; all tests were conducted in gaseous oxygen. Each fuel grain was tested at multiple free-stream pressures at a constant oxidizer mass flux (40 kg/m2s). The resulting image sequences suggest that aspects of the dynamics and scaling of the system depend strongly on both pressure and type of fuel.

  2. Characterizing quantum channels with non-separable states of classical light

    NASA Astrophysics Data System (ADS)

    Ndagano, Bienvenu; Perez-Garcia, Benjamin; Roux, Filippus S.; McLaren, Melanie; Rosales-Guzman, Carmelo; Zhang, Yingwen; Mouane, Othmane; Hernandez-Aranda, Raul I.; Konrad, Thomas; Forbes, Andrew

    2017-04-01

    High-dimensional entanglement with spatial modes of light promises increased security and information capacity over quantum channels. Unfortunately, entanglement decays due to perturbations, corrupting quantum links that cannot be repaired without performing quantum tomography on the channel. Paradoxically, the channel tomography itself is not possible without a working link. Here we overcome this problem with a robust approach to characterize quantum channels by means of classical light. Using free-space communication in a turbulent atmosphere as an example, we show that the state evolution of classically entangled degrees of freedom is equivalent to that of quantum entangled photons, thus providing new physical insights into the notion of classical entanglement. The analysis of quantum channels by means of classical light in real time unravels stochastic dynamics in terms of pure state trajectories, and thus enables precise quantum error correction in short- and long-haul optical communication, in both free space and fibre.

  3. A framework for evaluating mixture analysis algorithms

    NASA Astrophysics Data System (ADS)

    Dasaratha, Sridhar; Vignesh, T. S.; Shanmukh, Sarat; Yarra, Malathi; Botonjic-Sehic, Edita; Grassi, James; Boudries, Hacene; Freeman, Ivan; Lee, Young K.; Sutherland, Scott

    2010-04-01

    In recent years, several sensing devices capable of identifying unknown chemical and biological substances have been commercialized. The success of these devices in analyzing real-world samples depends on the ability of the on-board identification algorithm to de-convolve spectra of substances that are mixtures. To develop effective de-convolution algorithms, it is critical to characterize the relationship between the spectral features of a substance and its probability of detection within a mixture, as these features may be similar to or overlap with those of other substances in the mixture and in the library. While it has been recognized that these aspects pose challenges to mixture analysis, a systematic effort to quantify spectral characteristics and their impact is generally lacking. In this paper, we propose metrics that can be used to quantify these spectral features. Some of these metrics, such as a modification of the variance inflation factor, are derived from classical statistical measures used in regression diagnostics. We demonstrate that these metrics can be correlated with the accuracy of a substance's identification in a mixture. We also develop a framework for characterizing mixture analysis algorithms using these metrics. Experimental results are then provided to show the application of this framework to the evaluation of various algorithms, including one that has been developed for a commercial device. The illustration is based on synthetic mixtures created from pure-component Raman spectra measured on a portable device.
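
    One of the classical diagnostics the proposed metrics build on is the variance inflation factor, VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing library spectrum j on all the others. A minimal sketch follows; the `vif` helper and the random stand-ins for pure-component spectra are invented, not the paper's data.

```python
# Classic VIF computed over a small spectral library (synthetic data).
import numpy as np
from sklearn.linear_model import LinearRegression

def vif(library):
    """VIF of each library spectrum against the rest (columns = spectra)."""
    out = []
    for j in range(library.shape[1]):
        others = np.delete(library, j, axis=1)
        r2 = LinearRegression().fit(others, library[:, j]) \
                               .score(others, library[:, j])
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(1)
lib = rng.random((500, 5))            # 500 wavenumbers, 5 pure components
lib[:, 4] = 0.9 * lib[:, 0] + 0.1 * rng.random(500)  # near-duplicate spectrum
print(vif(lib))                       # the overlapping pair shows large VIFs
```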

  4. Analysis of cancer-related fatigue based on smart bracelet devices.

    PubMed

    Shen, Hong; Hou, Honglun; Tian, Wei; Wu, MingHui; Chen, Tianzhou; Zhong, Xian

    2016-01-01

    Fatigue is the most common symptom associated with cancer and its treatment, and it profoundly affects all aspects of quality of life for cancer patients. It is therefore important to measure and manage cancer-related fatigue. Usually, cancer-related fatigue scores, which estimate the degree of fatigue, are self-reported by cancer patients using standardized assessment tools, but most of the classical methods for measuring fatigue are subjective and inconvenient. In this study, we sought to establish a new method to assess cancer-related fatigue objectively and accurately by using a smart bracelet. All patients with metastatic pancreatic cancer wore a smart bracelet that recorded physical activity, including step count and sleep time, before and after chemotherapy. Meanwhile, their psychological state was assessed with questionnaires yielding cancer-related fatigue scores. Step counts recorded by the smart bracelet, reflecting physical performance, dramatically decreased in the initial days of chemotherapy and recovered over the next few days. Statistical analysis showed a strong and significant correlation between self-reported cancer-related fatigue and physical performance (P = 0.000, r = -0.929). Sleep time was also significantly correlated with fatigue (P = 0.000, r = 0.723). Multiple regression analysis showed that physical performance and sleep time are significant predictors of fatigue. Measuring activity with smart bracelet devices may therefore be an appropriate method for quantitative and objective measurement of cancer-related fatigue.
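
    The two analyses reported above can be sketched as follows. All numbers are invented, chosen only to mimic the direction of the reported correlations (negative for step count, positive for sleep time).

```python
# Pearson correlations plus a multiple regression (statsmodels OLS).
import numpy as np
from scipy import stats
import statsmodels.api as sm

fatigue = np.array([8, 7, 6, 4, 3, 5, 7, 2, 6, 3], dtype=float)
steps = np.array([1200, 1500, 2500, 5200, 6800, 3900,
                  1800, 8000, 2600, 6100], dtype=float)
sleep = np.array([9.5, 9.0, 8.0, 6.5, 6.0, 7.5, 9.0, 5.5, 8.0, 6.0])

r, p = stats.pearsonr(fatigue, steps)
print(f"fatigue vs steps: r = {r:.3f}, p = {p:.4f}")  # strongly negative

# Multiple regression: fatigue ~ steps + sleep.
X = sm.add_constant(np.column_stack([steps, sleep]))
print(sm.OLS(fatigue, X).fit().summary())
```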

  5. Stringent or nonstringent complete remission and prognosis in acute myeloid leukemia: a Danish population-based study

    PubMed Central

    Øvlisen, Andreas K.; Oest, Anders; Bendtsen, Mette D.; Bæch, John; Johansen, Preben; Lynggaard, Line S.; Mølle, Ingolf; Mortensen, Thomas B.; Weber, Duruta; Ertner, Gideon; Schöllkopf, Claudia; Thomassen, Jesper Q.; Nielsen, Ove Juul; Østgård, Lene Sofie Granfeldt; Bøgsted, Martin; Dybkær, Karen; Johnsen, Hans E.

    2018-01-01

    Stringent complete remission (sCR) of acute myeloid leukemia is defined as normal hematopoiesis after therapy. Less stringent CR (non-sCR) was introduced for responses with insufficient blood platelet, neutrophil, or erythrocyte recovery. These latter characteristics were defined retrospectively as postremission transfusion dependency and were suggested to be of prognostic value. In the present report, we evaluated the prognostic impact of achieving sCR and non-sCR in the Danish National Acute Leukaemia Registry, including 769 patients registered with classical CR (ie, <5% blasts in the postinduction bone marrow analysis). Individual patients were classified as having sCR (n = 360; 46.8%) or non-sCR (n = 409; 53.2%) based on data from our national laboratory and transfusion databases. Survival analysis revealed that patients achieving sCR had superior overall survival (hazard ratio [HR], 1.34; 95% confidence interval [CI], 1.10-1.64) as well as relapse-free survival (HR, 1.25; 95% CI, 1.03-1.51) compared with those with non-sCR after adjusting for covariates. Cox regression analysis regarding the impact of the stringent criteria for blood cell recovery identified these as significant and independent variables. In conclusion, this real-life register study supports the international criteria for response evaluation on prognosis and, most importantly, documents each of the 3 lineage recovery criteria as contributing independently. PMID:29523528
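
    A minimal sketch of the Cox proportional hazards regression used for this kind of adjusted survival comparison, here with the Python lifelines package; the tiny data frame is an invented stand-in for the registry data.

```python
# Cox proportional hazards with lifelines (invented toy data).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "time":  [12, 34, 5, 48, 22, 60, 8, 40],    # months of follow-up
    "event": [1, 0, 1, 0, 1, 0, 1, 0],          # 1 = died, 0 = censored
    "sCR":   [0, 1, 0, 1, 0, 1, 0, 1],          # stringent CR achieved
    "age":   [67, 54, 71, 49, 63, 45, 70, 58],  # an adjustment covariate
})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cph.print_summary()   # hazard ratios with 95% confidence intervals
```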

  6. How Settings Change People: Applying Behavior Setting Theory to Consumer-Run Organizations

    ERIC Educational Resources Information Center

    Brown, Louis D.; Shepherd, Matthew D.; Wituk, Scott A.; Meissen, Greg

    2007-01-01

    Self-help initiatives stand as a classic context for organizational studies in community psychology. Behavior setting theory stands as a classic conception of organizations and the environment. This study explores both, applying behavior setting theory to consumer-run organizations (CROs). Analysis of multiple data sets from all CROs in Kansas…

  7. Uniting the Spheres: Modern Feminist Theory and Classic Texts in AP English

    ERIC Educational Resources Information Center

    Drew, Simao J. A.; Bosnic, Brenda G.

    2008-01-01

    High school teachers Simao J. A. Drew and Brenda G. Bosnic help familiarize students with gender role analysis and feminist theory. Students examine classic literature and contemporary texts, considering characters' historical, literary, and social contexts while expanding their understanding of how patterns of identity and gender norms exist and…

  8. Modelling Systems of Classical/Quantum Identical Particles by Focusing on Algorithms

    ERIC Educational Resources Information Center

    Guastella, Ivan; Fazio, Claudio; Sperandeo-Mineo, Rosa Maria

    2012-01-01

    A procedure modelling ideal classical and quantum gases is discussed. The proposed approach is mainly based on the idea that modelling and algorithm analysis can provide a deeper understanding of particularly complex physical systems. Appropriate representations and physical models able to mimic possible pseudo-mechanisms of functioning and having…

  9. A phylogenetic Kalman filter for ancestral trait reconstruction using molecular data.

    PubMed

    Lartillot, Nicolas

    2014-02-15

    Correlation between life history or ecological traits and genomic features such as nucleotide or amino acid composition can be used for reconstructing the evolutionary history of the traits of interest along phylogenies. Thus far, however, such ancestral reconstructions have been done using simple linear regression approaches that do not account for phylogenetic inertia. These reconstructions could instead be seen as a genuine comparative regression problem, such as formalized by classical generalized least-square comparative methods, in which the trait of interest and the molecular predictor are represented as correlated Brownian characters coevolving along the phylogeny. Here, a Bayesian sampler is introduced, representing an alternative and more efficient algorithmic solution to this comparative regression problem, compared with currently existing generalized least-square approaches. Technically, ancestral trait reconstruction based on a molecular predictor is shown to be formally equivalent to a phylogenetic Kalman filter problem, for which backward and forward recursions are developed and implemented in the context of a Markov chain Monte Carlo sampler. The comparative regression method results in more accurate reconstructions and a more faithful representation of uncertainty, compared with simple linear regression. Application to the reconstruction of the evolution of optimal growth temperature in Archaea, using GC composition in ribosomal RNA stems and amino acid composition of a sample of protein-coding genes, confirms previous findings, in particular, pointing to a hyperthermophilic ancestor for the kingdom. The program is freely available at www.phylobayes.org.
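
    For orientation, the sketch below shows the generic univariate Kalman forward recursion (predict, gain, update) that such a filter builds on; the paper's version runs backward and forward recursions over a phylogenetic tree with correlated Brownian characters, which this linear-chain toy does not attempt. All parameters are invented.

```python
# Schematic univariate Kalman forward recursion (random-walk state).
import numpy as np

def kalman_forward(obs, q=0.1, r=0.5, x0=0.0, p0=1.0):
    """Filtered means for a Brownian state observed with noise."""
    x, p, means = x0, p0, []
    for y in obs:
        p = p + q                      # predict: Brownian drift in the state
        k = p / (p + r)                # Kalman gain
        x = x + k * (y - x)            # update with the new observation
        p = (1.0 - k) * p
        means.append(x)
    return np.array(means)

rng = np.random.default_rng(2)
truth = np.cumsum(rng.normal(scale=0.3, size=50))
print(kalman_forward(truth + rng.normal(scale=0.7, size=50))[-5:])
```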

  10. Symmetry-Based Techniques for Qualitative Understanding of Rovibrational Effects in Spherical-Top Molecular Spectra and Dynamics

    NASA Astrophysics Data System (ADS)

    Mitchell, Justin Chadwick

    2011-12-01

    Using light to probe the structure of matter is as natural as opening our eyes. Modern physics and chemistry have turned this art into a rich science, measuring the delicate interactions possible at the molecular level. Perhaps the most commonly used tool in computational spectroscopy is matrix diagonalization. While this is invaluable for calculating everything from molecular structure and energy levels to dipole moments and dynamics, the process of numerical diagonalization is an opaque one. This work applies symmetry and semi-classical techniques to elucidate numerical spectral analysis for high-symmetry molecules. Semi-classical techniques, such as potential energy surfaces, have long been used to help understand molecular vibronic and rovibronic spectra and dynamics. This investigation focuses on newer semi-classical techniques that apply rotational energy surfaces (RES) to rotational energy level clustering effects in high-symmetry molecules. Such clusters exist in rigid rotor molecules as well as deformable spherical tops. This study begins by using the simplicity of rigid symmetric top molecules to clarify the classical-quantum correspondence of RES semi-classical analysis and then extends it to a more precise and complete theory of modern high-resolution spectra. RES analysis is extended to molecules having more complex and higher-rank tensorial rotational and rovibrational Hamiltonians than were possible to understand before. Such molecules are shown to produce an extraordinary range of rotational level clusters, corresponding to a panoply of symmetries ranging from C4v to C2 and C1 (no symmetry) with a corresponding range of new angular momentum localization and J-tunneling effects. Using RES topography analysis and the commutation duality relations between symmetry group operators in the lab frame and those in the body frame, it is shown how to better describe and catalog complex splittings found in rotational level clusters. Symmetry character analysis is generalized to give analytic eigensolutions. An appendix provides vibrational analogies. For the first time, interactions between molecular vibrations (polyads) are described semi-classically by multiple RES. This is done for the ν3/2ν4 dyad of CF4. The nine-surface RES topology of the U(9) dyad agrees with both computational and experimental work. A connection between this and a simpler U(2) example is detailed in an appendix.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bunakov, V. E., E-mail: bunakov@VB13190.spb.edu

    A critical analysis of the present-day concept of chaos in quantum systems as nothing but a “quantum signature” of chaos in classical mechanics is given. In contrast to the existing semi-intuitive guesses, a definition of classical and quantum chaos is proposed on the basis of the Liouville–Arnold theorem: a quantum chaotic system featuring N degrees of freedom should have M < N independent first integrals of motion (good quantum numbers) specified by the symmetry of the Hamiltonian of the system. Quantitative measures of quantum chaos that, in the classical limit, go over to the Lyapunov exponent and the classical stability parameter are proposed. The proposed criteria of quantum chaos are applied to solving standard problems of modern dynamical chaos theory.

  12. Decoherence can relax cosmic acceleration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Markkanen, Tommi

    In this work we investigate the semi-classical backreaction for a quantised conformal scalar field and classical vacuum energy. In contrast to the usual approximation of a closed system, our analysis includes an environmental sector such that a quantum-to-classical transition can take place. We show that when the system decoheres into a mixed state with particle number as the classical observable, de Sitter space is destabilized, which is observable as a gradually decreasing Hubble rate. In particular we show that at late times this mechanism can drive the curvature of the Universe to zero and has an interpretation as the decay of the vacuum energy, demonstrating that quantum effects can be relevant for the fate of the Universe.

  13. Probing students’ conceptions at the classical-quantum interface

    NASA Astrophysics Data System (ADS)

    Chhabra, Mahima; Das, Ritwick

    2018-03-01

    Quantum mechanics (QM) is one of the core subject areas in the undergraduate physics curriculum, and many of the advanced-level physics courses involve direct or indirect application of the concepts and ideas taught in QM. On the other hand, proper understanding of QM interpretations requires an optimum level of understanding of fundamental concepts in classical physics, such as energy, momentum, and force, and their role in determining the motion of a particle. This study is an attempt to explore a group of undergraduate students’ mental models regarding fundamental concepts in classical physics, which are the stepping stones for understanding and visualising QM. The data and analysis presented here elucidate the challenges students face in understanding the classical ideas and how these affect their understanding of QM.

  14. Working conditions and psychotropic drug use: cross-sectional and prospective results from the French national SIP study.

    PubMed

    Lassalle, Marion; Chastang, Jean-François; Niedhammer, Isabelle

    2015-04-01

    Prospective studies exploring the associations between a large range of occupational factors and psychotropic drug use among national samples of workers are rare. This study investigates the cross-sectional and prospective associations between occupational factors, including a large set of psychosocial work factors, and psychotropic drug use in the national French working population. The study sample comprised 7542 workers for the cross-sectional analysis and 4213 workers followed up over a 4-year period for the prospective analysis. Psychotropic drug use was measured within the last 12 months and defined by the use of antidepressants, anxiolytics or hypnotics. Three groups of occupational factors were explored: classical and emergent psychosocial work factors, working time/hours, and physical work exposures. Weighted Poisson regression analyses were performed to adjust for covariates. In the cross-sectional analysis, psychological demands, low social support and hiding emotions were associated with psychotropic drug use, as were job insecurity for men and night work for women. In the prospective analysis, hiding emotions and physical exposure were predictive of psychotropic drug use. Dose-response associations were observed for the frequency/intensity of exposure and for repeated exposure to occupational factors. This study underlines the role of psychosocial work factors, including emergent factors, in psychotropic drug use. Comprehensive prevention policies oriented toward psychosocial work factors may help reduce this use. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Maintenance Operations in Mission Oriented Protective Posture Level IV (MOPPIV)

    DTIC Science & Technology

    1987-10-01

    [Table-of-contents fragment recovered from the scanned report: data analysis techniques including multiple linear regression; an example of regression analysis and regression results for all tasks; task grouping for analysis; maintenance tasks such as repairing a FADAC printed circuit board and removing/replacing the H60A3 power pack.]

  16. Creep-Rupture Data Analysis - Engineering Application of Regression Techniques. Ph.D. Thesis - North Carolina State Univ.

    NASA Technical Reports Server (NTRS)

    Rummler, D. R.

    1976-01-01

    Results are presented from investigations applying regression techniques to the development of a methodology for creep-rupture data analysis. Regression analysis techniques are applied to the explicit description of the creep behavior of materials for Space Shuttle thermal protection systems. A regression analysis technique is compared with five parametric methods for analyzing three simulated and twenty real data sets, and a computer program for the evaluation of creep-rupture data is presented.

  17. Resting-state functional magnetic resonance imaging: the impact of regression analysis.

    PubMed

    Yeh, Chia-Jung; Tseng, Yu-Sheng; Lin, Yi-Ru; Tsai, Shang-Yueh; Huang, Teng-Yi

    2015-01-01

    To investigate the impact of regression methods on resting-state functional magnetic resonance imaging (rsfMRI). During rsfMRI preprocessing, regression analysis is considered effective for reducing the interference of physiological noise on the signal time course. However, it is unclear whether the regression method benefits rsfMRI analysis. Twenty volunteers (10 men and 10 women; aged 23.4 ± 1.5 years) participated in the experiments. We used node analysis and functional connectivity mapping to assess the brain default mode network by using five combinations of regression methods. The results show that regressing the global mean plays a major role in the preprocessing steps. When a global regression method is applied, the values of functional connectivity are significantly lower (P ≤ .01) than those calculated without a global regression. This step increases inter-subject variation and produces anticorrelated brain areas. rsfMRI data processed using regression should be interpreted carefully. The significance of the anticorrelated brain areas produced by global signal removal is unclear. Copyright © 2014 by the American Society of Neuroimaging.
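
    A minimal sketch of the global-regression preprocessing step under discussion: the global mean time course is regressed out of every voxel by ordinary least squares, and connectivity is then computed on the residuals. The data shapes are arbitrary stand-ins, not real rsfMRI volumes.

```python
# Global signal regression via ordinary least squares (synthetic data).
import numpy as np

rng = np.random.default_rng(3)
ts = rng.normal(size=(240, 1000))           # 240 time points x 1000 voxels

g = ts.mean(axis=1, keepdims=True)          # global mean time course
design = np.hstack([np.ones_like(g), g])    # intercept + global signal
beta, *_ = np.linalg.lstsq(design, ts, rcond=None)
cleaned = ts - design @ beta                # residuals after regression

# Connectivity values computed from `cleaned` will be lower on average
# than from `ts`, which is the effect reported in the study.
print(np.corrcoef(cleaned[:, 0], cleaned[:, 1])[0, 1])
```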

  18. Axion as a cold dark matter candidate: analysis to third order perturbation for classical axion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noh, Hyerim; Hwang, Jai-chan; Park, Chan-Gyung, E-mail: hr@kasi.re.kr, E-mail: jchan@knu.ac.kr, E-mail: park.chan.gyung@gmail.com

    2015-12-01

    We investigate aspects of the axion as a coherently oscillating massive classical scalar field by analyzing third-order perturbations in Einstein's gravity in the axion-comoving gauge. The axion fluid has a characteristic pressure term leading to an axion Jeans scale which is cosmologically negligible for a canonical axion mass. Our classically derived axion pressure term in Einstein's gravity is identical to the one derived in the non-relativistic quantum mechanical context in the literature. We present the general relativistic continuity and Euler equations for an axion fluid valid up to third-order perturbation. The equations for the axion are exactly the same as those of a zero-pressure fluid in Einstein's gravity except for an axion pressure term in the Euler equation. Our analysis includes the cosmological constant.

  19. Role of classic signs as diagnostic predictors for enteric fever among returned travellers: Relative bradycardia and eosinopenia

    PubMed Central

    Matono, Takashi; Kutsuna, Satoshi; Kato, Yasuyuki; Katanami, Yuichi; Yamamoto, Kei; Takeshita, Nozomi; Hayakawa, Kayoko; Kanagawa, Shuzo; Kaku, Mitsuo; Ohmagari, Norio

    2017-01-01

    Background The lack of characteristic clinical findings and accurate diagnostic tools has made the diagnosis of enteric fever difficult. We evaluated the classic signs of relative bradycardia and eosinopenia as diagnostic predictors for enteric fever among travellers who had returned from the tropics or subtropics. Methods This matched case-control study used data from 2006 to 2015 for culture-proven enteric fever patients as cases. Febrile patients (>38.3°C) with non-enteric fever, who had returned from the tropics or subtropics, were matched to the cases in a 1:3 ratio by age (±3 years), sex, and year of diagnosis as controls. Cunha’s criteria were used for relative bradycardia. Absolute eosinopenia was defined as an eosinophilic count of 0/μL. Results Data from 160 patients (40 cases and 120 controls) were analysed. Cases predominantly returned from South Asia (70% versus 18%, p <0.001). Relative bradycardia (88% versus 51%, p <0.001) and absolute eosinopenia (63% versus 38%, p = 0.008) were more frequent in cases than controls. In multivariate logistic regression analysis, return from South Asia (aOR: 21.6; 95% CI: 7.17–64.9) and relative bradycardia (aOR: 11.7; 95% CI: 3.21–42.5) were independent predictors for a diagnosis of enteric fever. The positive likelihood ratio was 4.00 (95% CI: 2.58–6.20) for return from South Asia, 1.72 (95% CI: 1.39–2.13) for relative bradycardia, and 1.63 (95% CI: 1.17–2.27) for absolute eosinopenia. The negative predictive values of the three variables were notably high (83–92%); however, positive predictive values were only 35–57%. Conclusions The classic signs of relative bradycardia and eosinopenia were not specific for enteric fever; however, both met the criteria for being diagnostic predictors for enteric fever. Among febrile returned travellers, relative bradycardia and eosinopenia should be re-evaluated for predicting a diagnosis of enteric fever in non-endemic areas prior to obtaining blood cultures. PMID:28644847
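
    The likelihood ratios quoted above follow directly from the case and control frequencies, since LR+ = sensitivity / (1 - specificity). A quick check against the relative-bradycardia figures (sign present in 88% of cases and 51% of controls; the small difference from the reported 1.72 comes from rounding the published percentages):

```python
# Likelihood ratios from the reported case/control frequencies.
sens = 0.88            # relative bradycardia present in cases
spec = 1.0 - 0.51      # sign absent in controls
lr_pos = sens / (1.0 - spec)
lr_neg = (1.0 - sens) / spec
print(f"LR+ = {lr_pos:.2f}")   # ~1.73, matching the reported 1.72
print(f"LR- = {lr_neg:.2f}")
```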

  20. Standards for Standardized Logistic Regression Coefficients

    ERIC Educational Resources Information Center

    Menard, Scott

    2011-01-01

    Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…
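
    One widely used construction multiplies each raw logit coefficient by its predictor's standard deviation, making coefficients comparable across differently scaled predictors. A minimal sketch with invented data follows; Menard's fully standardized variant additionally rescales by properties of the predicted logit, which is omitted here.

```python
# Partially standardized logistic regression coefficients: b * SD(x).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 3)) * np.array([1.0, 10.0, 0.1])  # mixed scales
y = (X[:, 0] + 0.1 * X[:, 1] + 5.0 * X[:, 2] > 0).astype(int)

fit = LogisticRegression(C=1e6).fit(X, y)   # large C ~ unpenalized
b_std = fit.coef_[0] * X.std(axis=0)        # standardize by predictor SDs
print(b_std)  # now comparable across predictors despite scale differences
```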

  1. Laparoscopic anterior versus endoscopic posterior approach for adrenalectomy: a shift to a new golden standard?

    PubMed

    Vrielink, O M; Wevers, K P; Kist, J W; Borel Rinkes, I H M; Hemmer, P H J; Vriens, M R; de Vries, J; Kruijff, S

    2017-08-01

    There has been an increased utilization of the posterior retroperitoneal approach (PRA) for adrenalectomy alongside the "classic" laparoscopic transabdominal technique (LTA). The aim of this study was to compare both procedures based on outcome variables at various ranges of tumor size. A retrospective analysis was performed on 204 laparoscopic transabdominal (UMC Groningen) and 57 retroperitoneal (UMC Utrecht) adrenalectomies between 1998 and 2013. We applied univariate and multivariate regression analyses. Mann-Whitney and chi-squared tests were used to compare outcome variables between the two approaches. Both mean operation time and median blood loss were significantly lower in the PRA group, with 102.1 (SD 33.5) vs. 173.3 (SD 59.1) minutes (p < 0.001) and 0 (0-200) vs. 50 (0-1000) milliliters (p < 0.001), respectively. The shorter operation time in PRA was independent of tumor size. Complication rates were higher in the LTA group (19.1%) compared to PRA (8.8%). There was no significant difference in recovery time between the two approaches. Application of the PRA decreases operation time, blood loss, and complication rates compared to LTA. This might encourage institutions that use the LTA to start using PRA in patients with adrenal tumors, independent of tumor size.

  2. Mutual information and phase dependencies: measures of reduced nonlinear cardiorespiratory interactions after myocardial infarction.

    PubMed

    Hoyer, Dirk; Leder, Uwe; Hoyer, Heike; Pompe, Bernd; Sommer, Michael; Zwiener, Ulrich

    2002-01-01

    Heart rate variability (HRV) is related to several mechanisms of complex autonomic functioning, such as respiratory heart rate modulation and phase dependencies between heart beat cycles and breathing cycles. The underlying processes are basically nonlinear. In order to understand and quantitatively assess those physiological interactions, an adequate coupling analysis is necessary. We hypothesized that nonlinear measures of HRV and cardiorespiratory interdependencies are superior to the standard HRV measures in classifying patients after acute myocardial infarction. We introduced mutual information measures, which provide access to nonlinear interdependencies, as a counterpart to the classically linear correlation analysis. The nonlinear statistical autodependencies of HRV were quantified by auto mutual information, and the respiratory heart rate modulation by cardiorespiratory cross mutual information. The phase interdependencies between heart beat cycles and breathing cycles were assessed based on histograms of the frequency ratios of the instantaneous heart beat and respiratory cycles. Furthermore, the relative duration of phase-synchronized intervals was acquired. We investigated 39 patients after acute myocardial infarction versus 24 controls. The discrimination of these groups was improved by cardiorespiratory cross mutual information measures and phase interdependency measures in comparison to the linear standard HRV measures. This result was statistically confirmed by means of logistic regression models of particular variable subsets and their receiver operating characteristics.
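
    A minimal sketch of the histogram (plug-in) estimator behind such mutual information measures, applied to synthetic coupled heart-beat and respiration series. The surrogate signals and bin count are invented; the study's estimators are more refined.

```python
# Histogram estimate of mutual information between two coupled signals.
import numpy as np

def mutual_information(x, y, bins=16):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                       # joint distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)   # marginals
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

rng = np.random.default_rng(5)
resp = np.sin(np.linspace(0, 40, 2000)) + 0.2 * rng.normal(size=2000)
rr = 0.8 + 0.1 * resp + 0.05 * rng.normal(size=2000)  # coupled RR intervals
print(f"MI ~ {mutual_information(rr, resp):.3f} nats")
```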

  3. Assessment and comparison of culturally based explanations for mental disorder among Singaporean Chinese youth.

    PubMed

    Mathews, Mathew

    2011-01-01

    Culture is important to how populations understand the cause of mental disorder, a variable that has implications for treatment-seeking behaviour. Asian populations underutilize professional mental health treatment partly because of their endorsement of supernatural causation models to explain mental disorders, beliefs that stem from their religious backgrounds. This study sought to understand the dimensions of explanatory models used by three groups of Singaporean Chinese youth (n = 842): Christians, Chinese religionists, and those with no religion. It examined their responses to an instrument that combined explanations from psychological and organic perspectives on mental disorder with approaches from Asian and Western religious traditions. Factor analysis revealed five factors. Two were psychological, corresponding to the humanistic and cognitive-behavioural perspectives respectively. Another two, which were supernatural in nature, dealt with karmaic beliefs popular among Asian religionists and more classical religious explanations common in monotheistic religions. The remaining factor was deemed a physiological model, although it incorporated an item that made it consistent with an Asian organic model. While groups differed in their endorsement of supernatural explanations, psychological perspectives had the strongest endorsement among this population. Regression analysis showed that individuals who endorsed supernatural explanations more strongly tended to have no exposure to psychology courses and heightened religiosity.

  4. Linear regression analysis: part 14 of a series on evaluation of scientific publications.

    PubMed

    Schneider, Astrid; Hommel, Gerhard; Blettner, Maria

    2010-11-01

    Regression analysis is an important statistical method for the analysis of medical data. It enables the identification and characterization of relationships among multiple factors. It also enables the identification of prognostically relevant risk factors and the calculation of risk scores for individual prognostication. This article is based on selected textbooks of statistics, a selective review of the literature, and our own experience. After a brief introduction of the uni- and multivariable regression models, illustrative examples are given to explain what the important considerations are before a regression analysis is performed, and how the results should be interpreted. The reader should then be able to judge whether the method has been used correctly and interpret the results appropriately. The performance and interpretation of linear regression analysis are subject to a variety of pitfalls, which are discussed here in detail. The reader is made aware of common errors of interpretation through practical examples. Both the opportunities for applying linear regression analysis and its limitations are presented.

  5. An improved multiple linear regression and data analysis computer program package

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1972-01-01

    NEWRAP, an improved version of a previous multiple linear regression program called RAPIER, CREDUC, and CRSPLT, allows for a complete regression analysis including cross plots of the independent and dependent variables, correlation coefficients, regression coefficients, analysis of variance tables, t-statistics and their probability levels, rejection of independent variables, plots of residuals against the independent and dependent variables, and a canonical reduction of quadratic response functions useful in optimum seeking experimentation. A major improvement over RAPIER is that all regression calculations are done in double precision arithmetic.

  6. [A SAS macro program for batch processing of univariate Cox regression analysis for large databases].

    PubMed

    Yang, Rendong; Xiong, Jie; Peng, Yangqin; Peng, Xiaoning; Zeng, Xiaomin

    2015-02-01

    To implement batch processing of univariate Cox regression analysis for large databases with a SAS macro program. We wrote a SAS macro program in SAS 9.2 that can filter and integrate results and export P values to Excel. The program was used for screening survival-correlated RNA molecules in ovarian cancer. The SAS macro program completed the batch processing of univariate Cox regression analyses and the selection and export of the results. The SAS macro program has potential applications in reducing the workload of statistical analysis and providing a basis for batch processing of univariate Cox regression analysis.
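
    For readers without SAS, the same batch pattern can be sketched in Python with the lifelines package: loop a univariate Cox model over each RNA variable, collect the P values, and export them. This is an analog of the workflow, not the authors' macro; the column names, data frame, and output path are placeholders.

```python
# Batch univariate Cox regressions, one per RNA variable (sketch).
import pandas as pd
from lifelines import CoxPHFitter

def batch_univariate_cox(df, rna_cols, out_path="cox_pvalues.xlsx"):
    """Fit one univariate Cox model per RNA column and export P values."""
    rows = []
    for col in rna_cols:
        cph = CoxPHFitter().fit(df[["time", "event", col]],
                                duration_col="time", event_col="event")
        rows.append({"variable": col, "p": cph.summary.loc[col, "p"]})
    # to_excel needs an engine such as openpyxl; to_csv also works.
    pd.DataFrame(rows).sort_values("p").to_excel(out_path, index=False)
```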

  7. Association of BRAFV600E Mutation and MicroRNA Expression with Central Lymph Node Metastases in Papillary Thyroid Cancer: A Prospective Study from Four Endocrine Surgery Centers

    PubMed Central

    Aragon Han, Patricia; Kim, Hyun-seok; Cho, Soonweng; Fazeli, Roghayeh; Najafian, Alireza; Khawaja, Hunain; McAlexander, Melissa; Dy, Benzon; Sorensen, Meredith; Aronova, Anna; Sebo, Thomas J.; Giordano, Thomas J.; Fahey, Thomas J.; Thompson, Geoffrey B.; Gauger, Paul G.; Somervell, Helina; Bishop, Justin A.; Eshleman, James R.; Schneider, Eric B.; Witwer, Kenneth W.; Umbricht, Christopher B.

    2016-01-01

    Background: Studies have demonstrated an association of the BRAFV600E mutation and microRNA (miR) expression with aggressive clinicopathologic features in papillary thyroid cancer (PTC). Analysis of BRAFV600E mutations with miR expression data may improve perioperative decision making for patients with PTC, specifically in identifying patients harboring central lymph node metastases (CLNM). Methods: Between January 2012 and June 2013, 237 consecutive patients underwent total thyroidectomy and prophylactic central lymph node dissection (CLND) at four endocrine surgery centers. All tumors were tested for the presence of the BRAFV600E mutation and miR-21, miR-146b-3p, miR-146b-5p, miR-204, miR-221, miR-222, and miR-375 expression. Bivariate and multivariable analyses were performed to examine associations between molecular markers and aggressive clinicopathologic features of PTC. Results: Multivariable logistic regression analysis of all clinicopathologic features found miR-146b-3p and miR-146b-5p to be independent predictors of CLNM, while the presence of BRAFV600E almost reached significance. Multivariable logistic regression analysis limited to only predictors available preoperatively (molecular markers, age, sex, and tumor size) found miR-146b-3p, miR-146b-5p, miR-222, and BRAFV600E mutation to predict CLNM independently. While BRAFV600E was found to be associated with CLNM (48% mutated in node-positive cases vs. 28% mutated in node-negative cases), its positive and negative predictive values (48% and 72%, respectively) limit its clinical utility as a stand-alone marker. In the subgroup analysis focusing on only classical variant of PTC cases (CVPTC), undergoing prophylactic lymph node dissection, multivariable logistic regression analysis found only miR-146b-5p and miR-222 to be independent predictors of CLNM, while BRAFV600E was not significantly associated with CLNM. Conclusion: In the patients undergoing prophylactic CLNDs, miR-146b-3p, miR-146b-5p, and miR-222 were found to be predictive of CLNM preoperatively. However, there was significant overlap in expression of these miRs in the two outcome groups. The BRAFV600E mutation, while being a marker of CLNM when considering only preoperative variables among all histological subtypes, is likely not a useful stand-alone marker clinically because the difference between node-positive and node-negative cases was small. Furthermore, it lost significance when examining only CVPTC. Overall, our results speak to the concept and interpretation of statistical significance versus actual applicability of molecular markers, raising questions about their clinical usefulness as individual prognostic markers. PMID:26950846

  8. Exact Analysis of Squared Cross-Validity Coefficient in Predictive Regression Models

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2009-01-01

    In regression analysis, the notion of population validity is of theoretical interest for describing the usefulness of the underlying regression model, whereas the presumably more important concept of population cross-validity represents the predictive effectiveness for the regression equation in future research. It appears that the inference…

  9. Selective principal component regression analysis of fluorescence hyperspectral image to assess aflatoxin contamination in corn

    USDA-ARS?s Scientific Manuscript database

    Selective principal component regression analysis (SPCR) uses a subset of the original image bands for principal component transformation and regression. For optimal band selection before the transformation, this paper used genetic algorithms (GA). In this case, the GA process used the regression co...
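
    A minimal sketch of the selective PCR idea: restrict the spectra to a subset of bands, transform with PCA, and regress on the component scores. The band subset is hard-coded here purely for illustration (in the manuscript it is chosen by the genetic algorithm), and all data are synthetic.

```python
# Principal component regression on a selected subset of image bands.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
spectra = rng.random((120, 60))          # 120 pixels x 60 fluorescence bands
aflatoxin = (spectra[:, 10] * 2 + spectra[:, 25]
             + rng.normal(scale=0.1, size=120))

selected = [5, 10, 11, 24, 25, 26, 40]   # stand-in for the GA's chosen bands
spcr = make_pipeline(PCA(n_components=3), LinearRegression())
spcr.fit(spectra[:, selected], aflatoxin)
print("R^2:", spcr.score(spectra[:, selected], aflatoxin))
```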

  10. The Impact of Protein Structure and Sequence Similarity on the Accuracy of Machine-Learning Scoring Functions for Binding Affinity Prediction

    PubMed Central

    Peng, Jiangjun; Leung, Yee; Leung, Kwong-Sak; Wong, Man-Hon; Lu, Gang; Ballester, Pedro J.

    2018-01-01

    It has recently been claimed that the outstanding performance of machine-learning scoring functions (SFs) is exclusively due to the presence of training complexes with highly similar proteins to those in the test set. Here, we revisit this question using 24 similarity-based training sets, a widely used test set, and four SFs. Three of these SFs employ machine learning instead of the classical linear regression approach of the fourth SF (X-Score which has the best test set performance out of 16 classical SFs). We have found that random forest (RF)-based RF-Score-v3 outperforms X-Score even when 68% of the most similar proteins are removed from the training set. In addition, unlike X-Score, RF-Score-v3 is able to keep learning with an increasing training set size, becoming substantially more predictive than X-Score when the full 1105 complexes are used for training. These results show that machine-learning SFs owe a substantial part of their performance to training on complexes with dissimilar proteins to those in the test set, against what has been previously concluded using the same data. Given that a growing amount of structural and interaction data will be available from academic and industrial sources, this performance gap between machine-learning SFs and classical SFs is expected to enlarge in the future. PMID:29538331
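
    The contrast at the heart of the study can be sketched with generic stand-ins: a linear model (the family X-Score belongs to) against a random forest on the same descriptors. The features and the nonlinear synthetic signal are invented, not PDBbind data; with real structural data the forest's advantage grows with training set size, as the abstract reports.

```python
# Linear versus random forest regression on shared descriptors (sketch).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
X_train, X_test = rng.random((800, 36)), rng.random((200, 36))
y_train = X_train[:, 0] * X_train[:, 1] + X_train[:, 2]  # nonlinear signal
y_test = X_test[:, 0] * X_test[:, 1] + X_test[:, 2]

for name, model in [("linear", LinearRegression()),
                    ("random forest",
                     RandomForestRegressor(n_estimators=200, random_state=0))]:
    model.fit(X_train, y_train)
    print(name, "test R^2:", round(model.score(X_test, y_test), 3))
```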

  11. The Impact of Protein Structure and Sequence Similarity on the Accuracy of Machine-Learning Scoring Functions for Binding Affinity Prediction.

    PubMed

    Li, Hongjian; Peng, Jiangjun; Leung, Yee; Leung, Kwong-Sak; Wong, Man-Hon; Lu, Gang; Ballester, Pedro J

    2018-03-14

    It has recently been claimed that the outstanding performance of machine-learning scoring functions (SFs) is exclusively due to the presence of training complexes with highly similar proteins to those in the test set. Here, we revisit this question using 24 similarity-based training sets, a widely used test set, and four SFs. Three of these SFs employ machine learning instead of the classical linear regression approach of the fourth SF (X-Score which has the best test set performance out of 16 classical SFs). We have found that random forest (RF)-based RF-Score-v3 outperforms X-Score even when 68% of the most similar proteins are removed from the training set. In addition, unlike X-Score, RF-Score-v3 is able to keep learning with an increasing training set size, becoming substantially more predictive than X-Score when the full 1105 complexes are used for training. These results show that machine-learning SFs owe a substantial part of their performance to training on complexes with dissimilar proteins to those in the test set, against what has been previously concluded using the same data. Given that a growing amount of structural and interaction data will be available from academic and industrial sources, this performance gap between machine-learning SFs and classical SFs is expected to enlarge in the future.

  12. Kinetic and Mechanistic Studies of the Deuterium Exchange in Classical Keto-Enol Tautomeric Equilibrium Reactions

    ERIC Educational Resources Information Center

    Nichols, Michael A.; Waner, Mark J.

    2010-01-01

    An extension of the classic keto-enol tautomerization of beta-dicarbonyl compounds into a kinetic analysis of deuterium exchange is presented. It is shown that acetylacetone and ethyl acetoacetate undergo nearly complete deuterium exchange of the alpha-methylene carbon when dissolved in methanol-d[subscript 4]. The extent of deuteration may be…

  13. Classical, Generalizability, and Multifaceted Rasch Detection of Interrater Variability in Large, Sparse Data Sets.

    ERIC Educational Resources Information Center

    MacMillan, Peter D.

    2000-01-01

    Compared classical test theory (CTT), generalizability theory (GT), and multifaceted Rasch model (MFRM) approaches to detecting and correcting for rater variability using responses of 4,930 high school students graded by 3 raters on 9 scales. The MFRM approach identified far more raters as different than did the CTT analysis. GT and Rasch…

  14. Alteration of a second putative fusion peptide of structural glycoprotein E2 of Classical Swine Fever Virus alters virus replication and virulence in swine

    USDA-ARS?s Scientific Manuscript database

    E2, the major envelope glycoprotein of Classical Swine Fever Virus (CSFV), is involved in several critical virus functions including cell attachment, host range susceptibility, and virulence in natural hosts. Functional structural analysis of E2 based on Wimley-White interfacial hydrophobicity dis...

  15. A Comparative Analysis of the Integration of Faith and Learning between ACSI and ACCS Accredited Schools

    ERIC Educational Resources Information Center

    Peterson, Daniel Carl

    2012-01-01

    The purpose of this descriptive quantitative study was to analyze and compare the integration of faith and learning occurring in Christian schools accredited by the Association of Christian Schools International (ACSI) and classical Christian schools accredited by the Association of Classical and Christian Schools (ACCS). ACSI represents the…

  16. An Introduction to Differentials Based on Hyperreal Numbers and Infinite Microscopes

    ERIC Educational Resources Information Center

    Henry, Valerie

    2010-01-01

    In this article, we propose to introduce the differential of a function through a non-classical way, lying on hyperreals and infinite microscopes. This approach is based on the developments of nonstandard analysis, wants to be more intuitive than the classical one and tries to emphasize the functional and geometric aspects of the differential. In…

  17. A Systematic Comparison between Classical Optimal Scaling and the Two-Parameter IRT Model

    ERIC Educational Resources Information Center

    Warrens, Matthijs J.; de Gruijter, Dato N. M.; Heiser, Willem J.

    2007-01-01

    In this article, the relationship between two alternative methods for the analysis of multivariate categorical data is systematically explored. It is shown that the person score of the first dimension of classical optimal scaling correlates strongly with the latent variable for the two-parameter item response theory (IRT) model. Next, under the…

  18. Dialogues in Performance: A Team-Taught Course on the Afterlife in the Classical and Italian Traditions

    ERIC Educational Resources Information Center

    Gosetti-Murrayjohn, Angela; Schneider, Federico

    2009-01-01

    This article provides a reflection on a team-teaching experience in which performative dialogues between co-instructors and among students provided a pedagogical framework within which comparative analysis of textual traditions within the classical tradition could be optimized. Performative dialogues thus provided a model for and enactment of…

  19. Development of quantitative structure-activity relationships and its application in rational drug design.

    PubMed

    Yang, Guang-Fu; Huang, Xiaoqin

    2006-01-01

    Over forty years have elapsed since Hansch and Fujita published their pioneering work on quantitative structure-activity relationships (QSAR). Following the introduction of Comparative Molecular Field Analysis (CoMFA) by Cramer in 1988, other three-dimensional QSAR methods have been developed. Currently, the combination of classical QSAR with other computational techniques at the three-dimensional level is of greatest interest and is generally used in the process of modern drug discovery and design. During the last several decades, a number of different methodologies incorporating a range of molecular descriptors and different statistical regression methods have been proposed and successfully applied in the development of new drugs; thus, the QSAR method has proven indispensable not only for the reliable prediction of specific properties of new compounds, but also for helping to elucidate the possible molecular mechanisms of receptor-ligand interactions. Here, we review the recent developments in QSAR and their applications in rational drug design, focusing on the reasonable selection of novel molecular descriptors and the construction of predictive QSAR models with the help of advanced computational techniques.

  20. GDF15(MIC1) H6D Polymorphism Does Not Influence Cardiovascular Disease in a Latin American Population with Rheumatoid Arthritis

    PubMed Central

    Amaya-Amaya, Jenny; Rojas-Villarraga, Adriana; Molano-Gonzalez, Nicolas; Montoya-Sánchez, Laura; Nath, Swapan K.; Anaya, Juan-Manuel

    2015-01-01

    Objective. Rheumatoid arthritis (RA) is the most common autoimmune arthropathy worldwide. The increased prevalence of cardiovascular disease (CVD) in RA is not fully explained by classic risk factors. The aim of this study was to determine the influence of the rs1058587 SNP within the GDF15(MIC1) gene on the risk of CVD in a Colombian RA population. Methods. This was a cross-sectional analytical study in which 310 consecutive Colombian patients with RA and 228 age- and sex-matched controls were included and assessed for variables associated with CVD. A mixed clustering methodology based on multivariate descriptive methods (principal component analysis and multiple correspondence analysis) and a classification and regression tree (CART) predictive model were applied. Results. Of the 310 patients, 87.4% were women and CVD was reported in 69.5%. Significant differences concerning the GDF15 polymorphism were not observed between patients and controls. Mean arterial pressure, current smoking, and some clusters were significantly associated with CVD. Conclusion. GDF15 (rs1058587) does not influence the development of CVD in the population studied. PMID:26090487

  1. HPLC and chemometrics-assisted UV-spectroscopy methods for the simultaneous determination of ambroxol and doxycycline in capsule

    NASA Astrophysics Data System (ADS)

    Hadad, Ghada M.; El-Gindy, Alaa; Mahmoud, Waleed M. M.

    2008-08-01

    High-performance liquid chromatography (HPLC) and multivariate spectrophotometric methods are described for the simultaneous determination of ambroxol hydrochloride (AM) and doxycycline (DX) in combined pharmaceutical capsules. The chromatographic separation was achieved on a reversed-phase C18 analytical column with a mobile phase consisting of a mixture of 20 mM potassium dihydrogen phosphate (pH 6) and acetonitrile in a ratio of 1:1 (v/v), with UV detection at 245 nm. The resolution has also been accomplished using numerical spectrophotometric methods, such as classical least squares (CLS), principal component regression (PCR) and partial least squares (PLS-1), applied to the UV spectra of the mixture, and a graphical spectrophotometric method, the first derivative of the ratio spectra (1DD). Analytical figures of merit (FOM), such as sensitivity, selectivity, analytical sensitivity, limit of quantitation and limit of detection, were determined for the CLS, PLS-1 and PCR methods. The proposed methods were validated and successfully applied for the analysis of pharmaceutical formulations and laboratory-prepared mixtures containing the two-component combination.
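
    A minimal sketch of one of the multivariate calibration methods named above (PLS-1, predicting a single analyte from full spectra). The Gaussian pure-component bands and concentration ranges are simulated stand-ins, not real AM/DX measurements.

```python
# PLS-1 calibration for one analyte in a two-component mixture (sketch).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

wl = np.linspace(220, 320, 101)                    # wavelength grid, nm
band = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)
pure_am, pure_dx = band(245, 12), band(270, 18)    # invented pure spectra

rng = np.random.default_rng(8)
conc = rng.uniform(2, 12, size=(30, 2))            # training concentrations
spectra = (conc @ np.vstack([pure_am, pure_dx])
           + rng.normal(scale=0.01, size=(30, 101)))

# PLS-1: one response (the AM concentration) regressed on all wavelengths.
pls = PLSRegression(n_components=2).fit(spectra, conc[:, 0])
print(pls.predict(spectra[:3]).ravel(), conc[:3, 0])
```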

  2. Prediction of Soil Deformation in Tunnelling Using Artificial Neural Networks.

    PubMed

    Lai, Jinxing; Qiu, Junling; Feng, Zhihua; Chen, Jianxun; Fan, Haobo

    2016-01-01

    In the past few decades, as a new tool for the analysis of tough geotechnical problems, artificial neural networks (ANNs) have been successfully applied to address a number of engineering problems, including deformation due to tunnelling in various types of rock mass. Unlike classical regression methods, in which a certain form for the approximation function must be presumed, ANNs do not require complex constitutive models. Additionally, the ANN prediction system has proven to be one of the most effective ways to predict rock mass deformation. Furthermore, it can be envisaged that ANNs will become more feasible for the dynamic prediction of displacements in tunnelling in the future, especially if ANN models are combined with other research methods. In this paper, we summarize the state of the art and future research challenges of ANNs for tunnel deformation prediction, and we also present application cases and improvements of ANN models. The presented ANN models can serve as a benchmark for effective prediction of tunnel deformation, with their characteristics of nonlinearity, high parallelism, fault tolerance, learning, and generalization capability.
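
    A minimal sketch of the kind of ANN regression surveyed here: a small multilayer perceptron that maps geotechnical inputs to a settlement value without presuming a functional form. The feature names, data, and network size are invented placeholders.

```python
# MLP regression for a settlement-style target (synthetic data).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)
# Hypothetical inputs: cover depth, advance rate, support pressure, etc.
X = rng.uniform(size=(300, 4))
settle = 5 * X[:, 0] / (0.5 + X[:, 1]) + rng.normal(scale=0.2, size=300)

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 8),
                                 max_iter=3000, random_state=0))
ann.fit(X[:250], settle[:250])
print("held-out R^2:", round(ann.score(X[250:], settle[250:]), 3))
```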

  3. Simultaneous determination of Nifuroxazide and Drotaverine hydrochloride in pharmaceutical preparations by bivariate and multivariate spectral analysis

    NASA Astrophysics Data System (ADS)

    Metwally, Fadia H.

    2008-02-01

    The quantitative predictive abilities of a new and simple bivariate spectrophotometric method are compared with the results obtained by the use of multivariate calibration methods [classical least squares (CLS), principal component regression (PCR) and partial least squares (PLS)], using the information contained in the absorption spectra of the appropriate solutions. Mixtures of the two drugs Nifuroxazide (NIF) and Drotaverine hydrochloride (DRO) were resolved by application of the bivariate method. The different chemometric approaches were also applied, with prior optimization of the calibration matrix, as they allow the simultaneous inclusion of many spectral wavelengths. The results found by application of the bivariate, CLS, PCR and PLS methods for the simultaneous determination of mixtures of both components containing 2-12 μg ml-1 of NIF and 2-8 μg ml-1 of DRO are reported. Both approaches were satisfactorily applied to the simultaneous determination of NIF and DRO in pure form and in a pharmaceutical formulation. The results were in accordance with those given by the EVA Pharma reference spectrophotometric method.

  4. HPLC and chemometrics-assisted UV-spectroscopy methods for the simultaneous determination of ambroxol and doxycycline in capsule.

    PubMed

    Hadad, Ghada M; El-Gindy, Alaa; Mahmoud, Waleed M M

    2008-08-01

    High-performance liquid chromatography (HPLC) and multivariate spectrophotometric methods are described for the simultaneous determination of ambroxol hydrochloride (AM) and doxycycline (DX) in combined pharmaceutical capsules. The chromatographic separation was achieved on a reversed-phase C18 analytical column with a mobile phase consisting of 20 mM potassium dihydrogen phosphate (pH 6) and acetonitrile in a ratio of 1:1 (v/v), with UV detection at 245 nm. The resolution has also been accomplished using numerical spectrophotometric methods, such as classical least squares (CLS), principal component regression (PCR) and partial least squares (PLS-1), applied to the UV spectra of the mixture, and a graphical spectrophotometric method, the first derivative of the ratio spectra (1DD) method. Analytical figures of merit (FOM), such as sensitivity, selectivity, analytical sensitivity, limit of quantitation and limit of detection, were determined for the CLS, PLS-1 and PCR methods. The proposed methods were validated and successfully applied to the analysis of a pharmaceutical formulation and laboratory-prepared mixtures containing the two-component combination.
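
    For the CLS step specifically, a bare-bones sketch: absorbance is modeled as A = CK, the pure-component matrix K is estimated from calibration standards, and an unknown mixture is then resolved by least squares. All spectra and concentrations below are invented.

        # Classical least squares (CLS) calibration sketch.
        import numpy as np

        rng = np.random.default_rng(12)
        wl = np.linspace(230, 330, 81)
        K = np.vstack([np.exp(-0.5 * ((wl - 270) / 10) ** 2),   # pure spectrum 1
                       np.exp(-0.5 * ((wl - 295) / 14) ** 2)])  # pure spectrum 2
        C_std = rng.uniform(1, 10, (15, 2))                     # calibration standards
        A_std = C_std @ K + rng.normal(0, 0.001, (15, 81))

        K_hat, *_ = np.linalg.lstsq(C_std, A_std, rcond=None)   # estimate pure spectra
        A_new = np.array([[4.0, 7.0]]) @ K                      # "unknown" mixture
        c_hat, *_ = np.linalg.lstsq(K_hat.T, A_new.T, rcond=None)
        print("recovered concentrations:", c_hat.ravel().round(2))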

  5. Prediction of Soil Deformation in Tunnelling Using Artificial Neural Networks

    PubMed Central

    Lai, Jinxing

    2016-01-01

    In the past few decades, as a new tool for the analysis of tough geotechnical problems, artificial neural networks (ANNs) have been successfully applied to a number of engineering problems, including deformation due to tunnelling in various types of rock mass. Unlike classical regression methods, in which a certain form for the approximation function must be presumed, ANNs do not require complex constitutive models. Moreover, the ANN prediction system has proved to be one of the most effective ways to predict rock mass deformation, and it can be envisaged that ANNs will become even more feasible for the dynamic prediction of displacements in tunnelling, especially when ANN models are combined with other research methods. In this paper, we summarize the state of the art and the future research challenges of ANNs for tunnel deformation prediction, and present application cases as well as improvements to ANN models. The presented ANN models can serve as a benchmark for effective prediction of tunnel deformation, characterized by nonlinearity, high parallelism, fault tolerance, and learning and generalization capability. PMID:26819587

  6. The effects of diseases, drugs, and chemicals on the creativity and productivity of famous sculptors, classic painters, classic music composers, and authors.

    PubMed

    Wolf, Paul L

    2005-11-01

    Many myths, theories, and speculations exist as to the exact etiology of the diseases, drugs, and chemicals that affected the creativity and productivity of famous sculptors, classic painters, classic music composers, and authors. To emphasize the importance of a modern clinical chemistry laboratory and hematology coagulation laboratory in interpreting the basis for the creativity and productivity of various artists. This investigation analyzed the lives of famous artists, including classical sculptor Benvenuto Cellini; classical sculptor and painter Michelangelo Buonarroti; classic painters Ivar Arosenius, Edvard Munch, and Vincent Van Gogh; classic music composer Louis Hector Berlioz; and English essayist Thomas De Quincey. The analysis includes their illnesses, their famous artistic works, and the modern clinical chemistry, toxicology, and hematology coagulation tests that would have been important in the diagnosis and treatment of their diseases. The associations between illness and art may be close and many because of both the actual physical limitations of the artists and their mental adaptation to disease. Although they were ill, many continued to be productive. If modern clinical chemistry, toxicology, and hematology coagulation laboratories had existed during the lifetimes of these various well-known individuals, clinical laboratories might have unraveled the mysteries of their afflictions. The illnesses these people endured probably could have been ascertained and perhaps treated. Diseases, drugs, and chemicals may have influenced their creativity and productivity.

  7. Time-Dependent Moment Tensors of the First Four Source Physics Experiments (SPE) Explosions

    NASA Astrophysics Data System (ADS)

    Yang, X.

    2015-12-01

    We use mainly vertical-component geophone data within 2 km of the epicenter to invert for time-dependent moment tensors of the first four SPE explosions: SPE-1, SPE-2, SPE-3 and SPE-4Prime. We employ a one-dimensional (1D) velocity model, developed from P- and Rg-wave travel times, for Green's function calculations. The attenuation structure of the model is developed from P- and Rg-wave amplitudes. We select data for the inversion based on the criterion that they show travel times and amplitude behavior consistent with those predicted by the 1D model. Due to the limited azimuthal coverage of the sources and the mostly vertical-component-only nature of the dataset, only the long-period, diagonal components of the moment tensors are well constrained. Nevertheless, the moment tensors, particularly their isotropic components, provide reasonable estimates of the long-period source amplitudes as well as estimates of corner frequencies, albeit with larger uncertainties. The estimated corner frequencies are nonetheless consistent with estimates from ratios of seismogram spectra from different explosions. These long-period source amplitudes and corner frequencies cannot be fit by classical P-wave explosion source models. The results motivate the development of new P-wave source models suitable for these chemical explosions. To that end, we fit the inverted moment-tensor spectra by modifying the classical explosion model using regressions of estimated source parameters. Although the number of data points used in the regression is small, the approach suggests a path for developing the new model as more data are collected.

  8. Confidence in Altman-Bland plots: a critical review of the method of differences.

    PubMed

    Ludbrook, John

    2010-02-01

    1. Altman and Bland argue that the virtue of plotting differences against averages in method-comparison studies is that 95% confidence limits for the differences can be constructed. These allow authors and readers to judge whether one method of measurement could be substituted for another. 2. The technique is often misused. So I have set out, by statistical argument and worked examples, to advise pharmacologists and physiologists how best to construct these limits. 3. First, construct a scattergram of differences on averages, then calculate the line of best fit for the linear regression of differences on averages. If the slope of the regression is shown to differ from zero, there is proportional bias. 4. If there is no proportional bias and if the scatter of differences is uniform (homoscedasticity), construct 'classical' 95% confidence limits. 5. If there is proportional bias yet homoscedasticity, construct hyperbolic 95% confidence limits (prediction interval) around the line of best fit. 6. If there is proportional bias and the scatter of values for differences increases progressively as the average values increase (heteroscedasticity), log-transform the raw values from the two methods and replot differences against averages. If this eliminates proportional bias and heteroscedasticity, construct 'classical' 95% confidence limits. Otherwise, construct horizontal V-shaped 95% confidence limits around the line of best fit of differences on averages or around the weighted least products line of best fit to the original data. 7. In designing a method-comparison study, consult a qualified biostatistician, obey the rules of randomization and make replicate observations.
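
    The workflow in points 3-4 can be sketched in a few lines: regress differences on averages to test for proportional bias and, if the slope does not differ from zero, construct the classical 95% limits of agreement. The data below are synthetic stand-ins for two measurement methods.

        # Bland-Altman sketch: proportional-bias check, then classical limits.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        a = rng.normal(50, 10, 60)           # method A measurements (synthetic)
        b = a + rng.normal(1.5, 3, 60)       # method B with a fixed bias

        diff, avg = b - a, (a + b) / 2
        slope, intercept, r, p, se = stats.linregress(avg, diff)
        print(f"proportional-bias slope p-value: {p:.3f}")

        if p >= 0.05:  # no proportional bias: classical 95% limits of agreement
            half_width = 1.96 * diff.std(ddof=1)
            print(f"95% limits: [{diff.mean() - half_width:.2f}, "
                  f"{diff.mean() + half_width:.2f}]")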

  9. Bayesian Exploratory Factor Analysis

    PubMed Central

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.; Piatek, Rémi

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates from a high dimensional set of psychological measurements. PMID:25431517

  10. Vibration of middle ear with shape memory prosthesis - Experimental and numerical study

    NASA Astrophysics Data System (ADS)

    Rafal, Rusinek; Szymanski, Marcin; Lajmert, Pawel

    2018-01-01

    The paper presents experimental investigations of ossicular chain vibrations, using a laser Doppler vibrometer (LDV), for the intact middle ear and for an ear reconstructed with the newly designed shape-memory prosthesis. Vibrations of the round window are measured with the laser Doppler vibrometer and studied classically by transfer-function analysis. Moreover, the recurrence plot technique and the Hilbert vibration decomposition method are used to extend the classical analysis. The new methods reveal additional vibration components and provide more information about middle ear behaviour.

  11. Symbolic-numeric interface: A review

    NASA Technical Reports Server (NTRS)

    Ng, E. W.

    1980-01-01

    A survey of the use of a combination of symbolic and numerical calculations is presented. Symbolic calculations primarily refer to the computer processing of procedures from classical algebra, analysis, and calculus. Numerical calculations refer to both numerical mathematics research and scientific computation. This survey is intended to point out a large number of problem areas where a cooperation of symbolic and numerical methods is likely to bear many fruits. These areas include such classical operations as differentiation and integration, such diverse activities as function approximations and qualitative analysis, and such contemporary topics as finite element calculations and computation complexity. It is contended that other less obvious topics such as the fast Fourier transform, linear algebra, nonlinear analysis and error analysis would also benefit from a synergistic approach.

  12. Development of a User Interface for a Regression Analysis Software Tool

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred; Volden, Thomas R.

    2010-01-01

    An easy-to-use user interface was implemented in a highly automated regression analysis tool. The user interface was developed from the start to run on computers that use the Windows, Macintosh, Linux, or UNIX operating system. Many user interface features were specifically designed such that a novice or inexperienced user can apply the regression analysis tool with confidence. Therefore, the user interface's design minimizes interactive input from the user. In addition, reasonable default combinations are assigned to those analysis settings that influence the outcome of the regression analysis. These default combinations will lead to a successful regression analysis result for most experimental data sets. The user interface comes in two versions. The text user interface version is used for the ongoing development of the regression analysis tool. The official release of the regression analysis tool, on the other hand, has a graphical user interface that is more efficient to use. This graphical user interface displays all input file names, output file names, and analysis settings for a specific software application mode on a single screen, which makes it easier to generate reliable analysis results and to perform input parameter studies. An object-oriented approach was used for the development of the graphical user interface. This choice keeps future software maintenance costs to a reasonable limit. Examples of both the text user interface and the graphical user interface are discussed in order to illustrate the user interface's overall design approach.

  13. Regression Analysis and the Sociological Imagination

    ERIC Educational Resources Information Center

    De Maio, Fernando

    2014-01-01

    Regression analysis is an important aspect of most introductory statistics courses in sociology but is often presented in contexts divorced from the central concerns that bring students into the discipline. Consequently, we present five lesson ideas that emerge from a regression analysis of income inequality and mortality in the USA and Canada.

  14. On the thermal efficiency of power cycles in finite time thermodynamics

    NASA Astrophysics Data System (ADS)

    Momeni, Farhang; Morad, Mohammad Reza; Mahmoudi, Ashkan

    2016-09-01

    The Carnot, Diesel, Otto, and Brayton power cycles are reconsidered endoreversibly in finite time thermodynamics (FTT). In particular, the thermal efficiency of these standard power cycles is compared to the well-known results in classical thermodynamics. The present analysis, based on FTT modelling, shows that a reduction in both the maximum and minimum temperatures of the cycle causes the thermal efficiency to increase. This is antithetical to the trend found in the classical references. Under the assumption of endoreversibility, the relation between the efficiencies also changes, to η_Carnot > η_Brayton > η_Diesel > η_Otto, which is again very different from the corresponding classical results. The present results contribute to a better understanding of the important role of irreversibility in heat engines in classical thermodynamics.
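
    For orientation, the classical endoreversible benchmark (the Curzon-Ahlborn efficiency at maximum power) can be compared numerically with the Carnot limit; the reservoir temperatures below are arbitrary choices, not values from the paper.

        # Carnot vs. endoreversible (Curzon-Ahlborn) efficiency at maximum power.
        import math

        T_hot, T_cold = 900.0, 300.0            # reservoir temperatures, K (assumed)
        eta_carnot = 1 - T_cold / T_hot         # classical reversible limit
        eta_ca = 1 - math.sqrt(T_cold / T_hot)  # finite-time endoreversible result

        print(f"Carnot: {eta_carnot:.3f}, Curzon-Ahlborn: {eta_ca:.3f}")
        # Lowering T_hot lowers the Carnot limit; the FTT analysis above examines
        # regimes where the endoreversible trend differs from the classical one.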

  15. Classical and quantum localization and delocalization in the Fermi accelerator, kicked rotor and two-sided kicked rotor models

    NASA Astrophysics Data System (ADS)

    Zaslavsky, M.

    1996-06-01

    The phenomena of dynamical localization, both classical and quantum, are studied in the Fermi accelerator model. The model consists of two vertical oscillating walls and a ball bouncing between them. The classical localization boundary is calculated in the case of "sinusoidal velocity transfer" [A. J. Lichtenberg and M. A. Lieberman, Regular and Stochastic Motion (Springer-Verlag, Berlin, 1983)] on the basis of the analysis of resonances. In the case of the "sawtooth" wall velocity we show that the quantum localization is determined by the analytical properties of the canonical transformations to the action and angle coordinates of the unperturbed Hamiltonian, while the existence of the classical localization is determined by the number of continuous derivatives of the distance between the walls with respect to time.

  16. Quantum computation in the analysis of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Gomez, Richard B.; Ghoshal, Debabrata; Jayanna, Anil

    2004-08-01

    Recent research on the topic of quantum computation provides us with quantum algorithms offering higher efficiency and speedup compared to their classical counterparts. In this paper, it is our intent to provide the results of our investigation of several applications of such quantum algorithms - especially Grover's search algorithm - in the analysis of hyperspectral data. We found many parallels with Grover's method in existing data processing work that makes use of classical spectral matching algorithms. Our efforts also included the study of several methods dealing with hyperspectral image analysis in which classical computation methods involving large data sets could be replaced with quantum computation methods. The crux of the problem in computation involving a hyperspectral image data cube is to convert the large amount of data in high-dimensional space into real information. Currently, using the classical model, several time-consuming methods and steps are necessary to analyze these data, including animation, the minimum noise fraction transform, the pixel purity index algorithm, N-dimensional scatter plots, and identification of endmember spectra. If a quantum model of computation involving hyperspectral image data can be developed and formalized, it is highly likely that information retrieval from hyperspectral image data cubes would be a much easier process and the final information content would be much more meaningful and timely. In this case, dimensionality would not be a curse, but a blessing.

  17. Quantum versus classical dynamics in the optical centrifuge

    NASA Astrophysics Data System (ADS)

    Armon, Tsafrir; Friedland, Lazar

    2017-09-01

    The interplay between classical and quantum-mechanical evolution in the optical centrifuge (OC) is discussed. The analysis is based on the quantum-mechanical formalism starting from either the ground state or a thermal ensemble. Two resonant mechanisms are identified, i.e., the classical autoresonance and the quantum-mechanical ladder climbing, yielding different dynamics and rotational excitation efficiencies. The rotating-wave approximation is used to analyze the two resonant regimes in the associated dimensionless two-parameter space and calculate the OC excitation efficiency. The results show good agreement between numerical simulations and theory and are relevant to existing experimental setups.

  18. Tug-of-war between classical and multicenter bonds in H-(Be)n-H species

    NASA Astrophysics Data System (ADS)

    Lundell, Katie A.; Boldyrev, Alexander I.

    2018-05-01

    Quantum chemical calculations were performed for beryllium homocatenated compounds [H-(Be)n-H]. Global minimum structures were found using machine searches (Coalescence Kick method) with density functional theory. Chemical bonding analysis was performed with the Adaptive Natural Density Partitioning method. It was found that H-(Be)2-H and H-(Be)3-H clusters are linear with classical two-center two-electron bonds, while for n > 3, three-dimensional structures are more stable with multicenter bonding. Thus, at n = 4, multicenter bonding wins the tug-of-war vs. the classical bonding.

  19. Can the "musical stroop" task replace the classical stroop task? Commentary on “The musical Stroop effect: Opening a new avenue to research on automatisms” by l. Grégoire, P. Perruchet, and B. Poulin-Charronnat (Experimental Psychology, 2013, vol. 60, pp. 269–278).

    PubMed

    Zakay, Dan

    2014-01-01

    The musical Stroop task is analyzed and compared to the classical Stroop task. The analysis indicates that the two tasks differ in the following significant characteristics: ecological validity, the interrelations between the two perceptual dimensions involved, the nature of the automatic process and the existence of a potential Garner interference. It is concluded that the musical task has no advantage over the classical task.

  20. A New Approach for Mobile Advertising Click-Through Rate Estimation Based on Deep Belief Nets.

    PubMed

    Chen, Jie-Hao; Zhao, Zi-Qian; Shi, Ji-Yun; Zhao, Chong

    2017-01-01

    In recent years, with the rapid development of the mobile Internet and its business applications, mobile advertising Click-Through Rate (CTR) estimation has become a hot research direction in the field of computational advertising; it is used to achieve accurate advertisement delivery for the best benefits in the three-sided game between media, advertisers, and audiences. Current research on CTR estimation mainly uses machine learning methods and models, such as linear models or recommendation algorithms. However, most of these methods are insufficient to extract the data features and cannot reflect the nonlinear relationships between different features. In order to solve these problems, we propose a new model based on Deep Belief Nets to predict the CTR of mobile advertising, which combines the powerful data representation and feature extraction capability of Deep Belief Nets with the simplicity of traditional Logistic Regression models. Based on a training dataset containing information on over 40 million mobile advertisements over a period of 10 days, our experiments show that our new model improves estimation accuracy over the classic Logistic Regression (LR) model by 5.57% and over the Support Vector Regression (SVR) model by 5.80%.
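
    A sketch of the logistic-regression baseline that the DBN model is benchmarked against, on synthetic one-hot advertising features; the feature layout and click-generating model are invented for illustration.

        # Classic LR baseline for CTR estimation on synthetic one-hot features.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(3)
        X = rng.integers(0, 2, size=(5000, 20)).astype(float)  # one-hot ad features
        w = rng.normal(0, 1, 20)
        y = (rng.random(5000) < 1 / (1 + np.exp(-(X @ w - 2)))).astype(int)

        lr = LogisticRegression(max_iter=1000).fit(X[:4000], y[:4000])
        print("AUC:", round(roc_auc_score(y[4000:],
                                          lr.predict_proba(X[4000:])[:, 1]), 3))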

  1. A New Approach for Mobile Advertising Click-Through Rate Estimation Based on Deep Belief Nets

    PubMed Central

    Zhao, Zi-Qian; Shi, Ji-Yun; Zhao, Chong

    2017-01-01

    In recent years, with the rapid development of the mobile Internet and its business applications, mobile advertising Click-Through Rate (CTR) estimation has become a hot research direction in the field of computational advertising; it is used to achieve accurate advertisement delivery for the best benefits in the three-sided game between media, advertisers, and audiences. Current research on CTR estimation mainly uses machine learning methods and models, such as linear models or recommendation algorithms. However, most of these methods are insufficient to extract the data features and cannot reflect the nonlinear relationships between different features. In order to solve these problems, we propose a new model based on Deep Belief Nets to predict the CTR of mobile advertising, which combines the powerful data representation and feature extraction capability of Deep Belief Nets with the simplicity of traditional Logistic Regression models. Based on a training dataset containing information on over 40 million mobile advertisements over a period of 10 days, our experiments show that our new model improves estimation accuracy over the classic Logistic Regression (LR) model by 5.57% and over the Support Vector Regression (SVR) model by 5.80%. PMID:29209363

  2. Regularized matrix regression

    PubMed Central

    Zhou, Hua; Li, Lexin

    2014-01-01

    Summary: Modern technologies are producing a wealth of data with complex structures. For instance, in two-dimensional digital imaging, flow cytometry and electroencephalography, matrix-type covariates frequently arise when measurements are obtained for each combination of two underlying variables. To address scientific questions arising from those data, new regression methods that take matrices as covariates are needed, and sparsity or other forms of regularization are crucial owing to the ultrahigh dimensionality and complex structure of the matrix data. The popular lasso and related regularization methods hinge on the sparsity of the true signal in terms of the number of its non-zero coefficients. However, for the matrix data, the true signal is often of, or can be well approximated by, a low rank structure. As such, the sparsity is frequently in the form of low rank of the matrix parameters, which may seriously violate the assumption of the classical lasso. We propose a class of regularized matrix regression methods based on spectral regularization. A highly efficient and scalable estimation algorithm is developed, and a degrees-of-freedom formula is derived to facilitate model selection along the regularization path. Superior performance of the method proposed is demonstrated on both synthetic and real examples. PMID:24648830
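
    The key computational primitive behind this kind of spectral regularization is singular-value soft-thresholding, the proximal operator of the nuclear norm; a minimal sketch (not the authors' full estimation algorithm) follows.

        # Singular-value soft-thresholding: prox of the nuclear norm,
        # used to induce low-rank matrix coefficients.
        import numpy as np

        def svt(B, tau):
            """Soft-threshold the singular values of B by tau."""
            U, s, Vt = np.linalg.svd(B, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

        B = np.random.default_rng(4).normal(size=(8, 6))
        low_rank = svt(B, tau=1.5)
        print("rank before/after:", np.linalg.matrix_rank(B),
              np.linalg.matrix_rank(low_rank))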

  3. Quantum algorithm for linear regression

    NASA Astrophysics Data System (ADS)

    Wang, Guoming

    2017-07-01

    We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Unlike previous algorithms, which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in classical form. So by running it once, one completely determines the fitted model and can then use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model and can handle data sets with nonsparse design matrices. It runs in time poly(log2(N), d, κ, 1/ε), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ε is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary; thus, our algorithm cannot be significantly improved. Furthermore, we give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding the fit, and can be used to check whether the given data set qualifies for linear regression in the first place.
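
    For reference, the classical computation whose output the quantum algorithm reproduces: ordinary least squares, together with the design-matrix condition number κ that enters the stated runtime. Data below are synthetic.

        # Classical least-squares counterpart with its condition number.
        import numpy as np

        rng = np.random.default_rng(5)
        N, d = 200, 3
        X = rng.normal(size=(N, d))
        beta_true = np.array([1.0, -2.0, 0.5])
        y = X @ beta_true + rng.normal(0, 0.1, N)

        beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        kappa = np.linalg.cond(X)   # condition number in the runtime bound
        print("beta_hat:", beta_hat.round(3), " kappa:", round(kappa, 2))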

  4. A Machine Learning Framework for Plan Payment Risk Adjustment.

    PubMed

    Rose, Sherri

    2016-12-01

    To introduce cross-validation and a nonparametric machine learning framework for plan payment risk adjustment and then assess whether they have the potential to improve risk adjustment. 2011-2012 Truven MarketScan database. We compare the performance of multiple statistical approaches within a broad machine learning framework for estimation of risk adjustment formulas. Total annual expenditure was predicted using age, sex, geography, inpatient diagnoses, and hierarchical condition category variables. The methods included regression, penalized regression, decision trees, neural networks, and an ensemble super learner, all in concert with screening algorithms that reduce the set of variables considered. The performance of these methods was compared based on cross-validated R². Our results indicate that a simplified risk adjustment formula selected via this nonparametric framework maintains much of the efficiency of a traditional larger formula. The ensemble approach also outperformed classical regression and all other algorithms studied. The implementation of cross-validated machine learning techniques provides novel insight into risk adjustment estimation, possibly allowing for a simplified formula, thereby reducing incentives for increased coding intensity as well as the ability of insurers to "game" the system with aggressive diagnostic upcoding. © Health Research and Educational Trust.
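
    A toy version of the comparison described above: several regression algorithms scored by cross-validated R² on synthetic expenditure-style data (the study's claims data are proprietary, so the generating model here is invented).

        # Cross-validated comparison of candidate risk-adjustment estimators.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.linear_model import LinearRegression, LassoCV
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(6)
        X = rng.normal(size=(500, 10))          # age, sex, condition indicators...
        y = 2 * X[:, 0] + X[:, 1] ** 2 + rng.normal(0, 1, 500)

        for name, est in [("OLS", LinearRegression()),
                          ("lasso", LassoCV()),
                          ("tree", DecisionTreeRegressor(max_depth=4))]:
            r2 = cross_val_score(est, X, y, cv=5, scoring="r2").mean()
            print(f"{name}: cross-validated R^2 = {r2:.3f}")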

  5. Predicting the occurrence of wildfires with binary structured additive regression models.

    PubMed

    Ríos-Pena, Laura; Kneib, Thomas; Cadarso-Suárez, Carmen; Marey-Pérez, Manuel

    2017-02-01

    Wildfires are one of the main environmental problems facing societies today, and in the case of Galicia (north-west Spain), they are the main cause of forest destruction. This paper used binary structured additive regression (STAR) for modelling the occurrence of wildfires in Galicia. Binary STAR models are a recent contribution to the classical logistic regression and binary generalized additive models. Their main advantage lies in their flexibility for modelling non-linear effects, while simultaneously incorporating spatial and temporal variables directly, thereby making it possible to reveal possible relationships among the variables considered. The results showed that the occurrence of wildfires depends on many covariates which display variable behaviour across space and time, and which largely determine the likelihood of ignition of a fire. The joint possibility of working on spatial scales with a resolution of 1 × 1 km cells and mapping predictions in a colour range makes STAR models a useful tool for plotting and predicting wildfire occurrence. Lastly, it will facilitate the development of fire behaviour models, which can be invaluable when it comes to drawing up fire-prevention and firefighting plans. Copyright © 2016 Elsevier Ltd. All rights reserved.
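
    A rough analogue of a binary STAR fit, assuming statsmodels and patsy are available: a logistic GLM with spline terms captures non-linear covariate effects on occurrence probability. The covariates and their effects are invented, and true STAR models additionally handle structured spatial and temporal terms.

        # Logistic regression with spline (non-linear) covariate effects.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(7)
        df = pd.DataFrame({"temp": rng.uniform(5, 40, 800),
                           "humidity": rng.uniform(10, 90, 800)})
        logit = 0.15 * (df.temp - 20) - 0.05 * (df.humidity - 50)
        df["fire"] = (rng.random(800) < 1 / (1 + np.exp(-logit))).astype(int)

        m = smf.glm("fire ~ bs(temp, df=4) + bs(humidity, df=4)", data=df,
                    family=sm.families.Binomial()).fit()
        print(m.params)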

  6. The Modification of Diet in Renal Disease 4-calculated glomerular filtration rate is a better prognostic factor of cardiovascular events than classical cardiovascular risk factors in patients with peripheral arterial disease.

    PubMed

    Romero, Jose-María; Bover, Jordi; Fite, Joan; Bellmunt, Sergi; Dilmé, Jaime-Félix; Camacho, Mercedes; Vila, Luis; Escudero, Jose-Román

    2012-11-01

    Risk prediction is important in medical management, especially to optimize patient management before surgical intervention. No quantitative risk scores or predictors are available for patients with peripheral arterial disease (PAD). Surgical risk and prognosis are usually based on anesthetic scores or clinical evaluation. We suggest that renal function is a better predictor of risk than other cardiovascular parameters. This study used the four-variable Modification of Diet in Renal Disease (MDRD-4)-calculated glomerular filtration rate (GFR) to compare classical cardiovascular risk factors with prognosis and cardiovascular events of hospitalized PAD patients. The study evaluated 204 patients who were admitted for vascular intervention and diagnosed with grade IIb, III, or IV PAD or with carotid or renal stenosis. Those with carotid or renal stenosis were excluded, leaving 188 patients who were randomized from 2004 to 2005 and monitored until 2010. We performed a life-table analysis with a 6-year follow-up period and one final checkpoint. The following risk factors were evaluated: age, sex, ischemic heart disease, ictus (as a manifestation of cerebrovascular disease related to systemic arterial disease), diabetes, arterial hypertension, dyslipidemia, smoking, chronic obstructive pulmonary disease, type of vascular intervention, and urea and creatinine plasma levels. The GFR was calculated using the MDRD-4 equation. Death, major cardiovascular events, and reintervention for arterial disease were recorded during the follow-up. Patients (73% men) were a mean age of 71.38 ± 11.43 (standard deviation) years. PAD grade IIb was diagnosed in 41 (20%) and grade III-IV in 147 (72%). Forty-two minor amputations (20.6%), 21 major amputations (10.3%), and 102 revascularizations (50%) were performed. A major cardiovascular event occurred in 60 patients (29.4%), and 71 (34.8%) died. Multivariate logistic regression analysis showed that the MDRD-4 GFR, age, and male sex were independent variables related to death and that the MDRD-4 GFR and chronic obstructive pulmonary disease were related to major cardiovascular events. A statistically significant relationship was also found between serum creatinine levels and reintervention rates. The MDRD-4 GFR was a better predictor of risk of death or infarction than classical cardiovascular risk factors in patients with PAD. This suggests that its routine use in the initial evaluation in patients with PAD is beneficial. Copyright © 2012 Society for Vascular Surgery. Published by Mosby, Inc. All rights reserved.
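
    For reference, the four-variable MDRD equation in its commonly cited IDMS-traceable form (a sketch for illustration; the coefficients have been recalibrated over time and may differ slightly from those applied in the study).

        # MDRD-4 estimated GFR, IDMS-traceable coefficients (assumed form).
        def mdrd4_gfr(creatinine_mg_dl, age_years, female, black):
            """Estimated GFR in mL/min/1.73 m^2."""
            gfr = 175.0 * creatinine_mg_dl ** -1.154 * age_years ** -0.203
            if female:
                gfr *= 0.742
            if black:
                gfr *= 1.212
            return gfr

        # Example: 71-year-old man with serum creatinine 1.4 mg/dL.
        print(round(mdrd4_gfr(1.4, 71, female=False, black=False), 1))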

  7. Modeling daily soil temperature over diverse climate conditions in Iran—a comparison of multiple linear regression and support vector regression techniques

    NASA Astrophysics Data System (ADS)

    Delbari, Masoomeh; Sharifazari, Salman; Mohammadi, Ehsan

    2018-02-01

    The knowledge of soil temperature at different depths is important for the agricultural industry and for understanding climate change. The aim of this study is to evaluate the performance of a support vector regression (SVR)-based model in estimating daily soil temperature at 10, 30 and 100 cm depth under different climate conditions across Iran. The results obtained were compared to those from a more classical multiple linear regression (MLR) model. The correlation sensitivity for the input combinations and the periodicity effect were also investigated. Climatic data used as inputs to the models were minimum and maximum air temperature, solar radiation, relative humidity, dew point, and atmospheric pressure (reduced to sea level), collected from five synoptic stations (Kerman, Ahvaz, Tabriz, Saghez, and Rasht) located, respectively, in hyper-arid, arid, semi-arid, Mediterranean, and hyper-humid climate conditions. According to the results, the performance of both the MLR and SVR models was quite good at the surface layer, i.e., 10-cm depth. However, SVR performed better than MLR in estimating soil temperature at deeper layers, especially 100 cm depth. Moreover, both models performed better in humid climate conditions than in arid and hyper-arid areas. Further, adding a periodicity component into the modeling process considerably improved the models' performance, especially in the case of SVR.
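
    A compact sketch of the MLR-versus-SVR comparison on synthetic data with a mild nonlinearity, the situation where SVR would be expected to pull ahead; the features and generating model are invented.

        # MLR vs. SVR on synthetic soil-temperature-style data.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.svm import SVR
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(8)
        X = rng.normal(size=(400, 4))       # air temp, radiation, humidity, ...
        y = 5 + 3 * X[:, 0] + np.sin(2 * X[:, 1]) + rng.normal(0, 0.3, 400)

        mlr = LinearRegression().fit(X[:300], y[:300])
        svr = make_pipeline(StandardScaler(), SVR(C=10)).fit(X[:300], y[:300])
        print("MLR R^2:", round(mlr.score(X[300:], y[300:]), 3))
        print("SVR R^2:", round(svr.score(X[300:], y[300:]), 3))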

  8. Automation of Classical QEEG Trending Methods for Early Detection of Delayed Cerebral Ischemia: More Work to Do.

    PubMed

    Wickering, Ellis; Gaspard, Nicolas; Zafar, Sahar; Moura, Valdery J; Biswal, Siddharth; Bechek, Sophia; OʼConnor, Kathryn; Rosenthal, Eric S; Westover, M Brandon

    2016-06-01

    The purpose of this study is to evaluate automated implementations of continuous EEG monitoring-based detection of delayed cerebral ischemia based on methods used in classical retrospective studies. We studied 95 patients with either Fisher 3 or Hunt Hess 4 to 5 aneurysmal subarachnoid hemorrhage who were admitted to the Neurosciences ICU and underwent continuous EEG monitoring. We implemented several variations of two classical algorithms for automated detection of delayed cerebral ischemia based on decreases in alpha-delta ratio and relative alpha variability. Of 95 patients, 43 (45%) developed delayed cerebral ischemia. Our automated implementation of the classical alpha-delta ratio-based trending method resulted in a sensitivity and specificity (Se,Sp) of (80,27)%, compared with the values of (100,76)% reported in the classic study using similar methods in a nonautomated fashion. Our automated implementation of the classical relative alpha variability-based trending method yielded (Se,Sp) values of (65,43)%, compared with (100,46)% reported in the classic study using nonautomated analysis. Our findings suggest that improved methods to detect decreases in alpha-delta ratio and relative alpha variability are needed before an automated EEG-based early delayed cerebral ischemia detection system is ready for clinical use.
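
    The quantity being trended can be computed in a few lines: band power from a Welch periodogram, with the conventional alpha (8-13 Hz) and delta (1-4 Hz) bands. The signal below is random noise standing in for an EEG epoch, and the sampling rate is an assumption.

        # Alpha-delta ratio from a Welch power spectral density estimate.
        import numpy as np
        from scipy.signal import welch

        fs = 256                                   # sampling rate, Hz (assumed)
        rng = np.random.default_rng(9)
        eeg = rng.normal(size=30 * fs)             # stand-in for one EEG epoch

        f, psd = welch(eeg, fs=fs, nperseg=4 * fs)
        alpha = psd[(f >= 8) & (f < 13)].sum()
        delta = psd[(f >= 1) & (f < 4)].sum()
        print("alpha-delta ratio:", round(alpha / delta, 3))
        # A sustained drop in this ratio relative to baseline is the classical
        # trending criterion for delayed cerebral ischemia.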

  9. The course of awake breathing disturbances across the lifespan in Rett syndrome.

    PubMed

    Tarquinio, Daniel C; Hou, Wei; Neul, Jeffrey L; Berkmen, Gamze Kilic; Drummond, Jana; Aronoff, Elizabeth; Harris, Jennifer; Lane, Jane B; Kaufmann, Walter E; Motil, Kathleen J; Glaze, Daniel G; Skinner, Steven A; Percy, Alan K

    2018-04-12

    Rett syndrome (RTT), an X-linked dominant neurodevelopmental disorder caused by mutations in MECP2, is associated with a peculiar breathing disturbance, exclusively during wakefulness, that is distressing and can even prompt emergency resuscitation. Through the RTT Natural History Study, we characterized the cross-sectional and longitudinal features of awake breathing abnormalities in RTT and identified associated clinical features. Participants were recruited from 2006 to 2015, and cumulative lifetime prevalence of breathing dysfunction was determined using the Kaplan-Meier estimator. Risk factors were assessed using logistic regression. Of 1205 participants, 1185 had sufficient data for analysis, including 922 females with classic RTT, 778 of whom were followed longitudinally for up to 9.0 years, for a total of 3944 person-years. Participants with classic or atypical severe RTT were more likely to have breathing dysfunction (nearly 100% over the lifespan) compared to those with atypical mild RTT (60-70%). Remission was common, lasting 1 year on average, with 15% ending the study in terminal remission. Factors associated with higher odds of severe breathing dysfunction included poor gross and fine motor function, frequency of stereotypical hand movements, seizure frequency, prolonged corrected QT interval on EKG, and two quality of life metrics: caregiver concern about physical health and contracting illness. Factors associated with lower prevalence of severe breathing dysfunction included higher body mass index and head circumference Z-scores, advanced age, and severe scoliosis or contractures. Awake breathing dysfunction is common in RTT, more so than seizures, and is associated with function, quality of life and risk for cardiac dysrhythmia. Copyright © 2018 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
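
    For the survival-analysis step, a minimal Kaplan-Meier estimator (distinct event times assumed); the times and event indicators below are invented, not study data.

        # Bare-bones Kaplan-Meier estimator for lifetime prevalence curves.
        import numpy as np

        def kaplan_meier(time, event):
            """Return (event time, survival probability) pairs."""
            order = np.argsort(time)
            time, event = time[order], event[order]
            at_risk, surv, out = len(time), 1.0, []
            for t, e in zip(time, event):
                if e:                    # event observed at time t
                    surv *= (at_risk - 1) / at_risk
                    out.append((t, surv))
                at_risk -= 1             # event or censoring leaves the risk set
            return out

        t = np.array([2.0, 3.5, 4.0, 5.0, 7.5, 9.0])   # ages (years, invented)
        e = np.array([1, 0, 1, 1, 0, 1])               # 1 = event, 0 = censored
        print(kaplan_meier(t, e))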

  10. On the analysis of Canadian Holstein dairy cow lactation curves using standard growth functions.

    PubMed

    López, S; France, J; Odongo, N E; McBride, R A; Kebreab, E; AlZahal, O; McBride, B W; Dijkstra, J

    2015-04-01

    Six classical growth functions (monomolecular, Schumacher, Gompertz, logistic, Richards, and Morgan) were fitted to individual and average (by parity) cumulative milk production curves of Canadian Holstein dairy cows. The data analyzed consisted of approximately 91,000 daily milk yield records corresponding to 122 first, 99 second, and 92 third parity individual lactation curves. The functions were fitted using nonlinear regression procedures, and their performance was assessed using goodness-of-fit statistics (coefficient of determination, residual mean squares, Akaike information criterion, and the correlation and concordance coefficients between observed and adjusted milk yields at several days in milk). Overall, all the growth functions evaluated showed an acceptable fit to the cumulative milk production curves, with the Richards equation ranking first (smallest Akaike information criterion) followed by the Morgan equation. Differences among the functions in their goodness-of-fit were enlarged when fitted to average curves by parity, where the sigmoidal functions with a variable point of inflection (Richards and Morgan) outperformed the other 4 equations. All the functions provided satisfactory predictions of milk yield (calculated from the first derivative of the functions) at different lactation stages, from early to late lactation. The Richards and Morgan equations provided the most accurate estimates of peak yield and total milk production per 305-d lactation, whereas the least accurate estimates were obtained with the logistic equation. In conclusion, classical growth functions (especially sigmoidal functions with a variable point of inflection) proved to be feasible alternatives to fit cumulative milk production curves of dairy cows, resulting in suitable statistical performance and accurate estimates of lactation traits. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
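
    A sketch of the model-fitting step for one of the six functions (Gompertz), using nonlinear least squares on a synthetic cumulative-yield curve; parameter values and noise level are invented.

        # Fitting a Gompertz growth function to a cumulative milk-yield curve.
        import numpy as np
        from scipy.optimize import curve_fit

        def gompertz(t, a, b, c):
            """Cumulative yield: a * exp(-b * exp(-c * t))."""
            return a * np.exp(-b * np.exp(-c * t))

        days = np.linspace(1, 305, 50)
        yield_kg = (gompertz(days, 9000, 4, 0.02)
                    + np.random.default_rng(10).normal(0, 50, 50))

        popt, _ = curve_fit(gompertz, days, yield_kg, p0=[8000, 3, 0.01])
        print("estimated asymptotic yield (kg):", round(popt[0]))
        # Daily milk yield is the first derivative of the fitted function.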

  11. Expectations and effectiveness of medical treatment and classical homeopathic treatment for patients with hypersensitivity illnesses--one year prospective study.

    PubMed

    Launsø, Laila; Henningsen, Inge; Rieper, Jonas; Brender, Henriette; Sandø, Finn; Hvenegaard, Anne

    2007-10-01

    To describe and compare characteristics of adult patients who received treatment for hypersensitivity illnesses by general practitioners (GPs) and classical homeopaths (CHs) over a period of 1 year and examine the statistical predictors of self-reported treatment outcomes. We conducted a survey on 151 Danish adult patients with hypersensitivity illnesses, who chose treatment from one of 13 GPs or one of 10 CHs who participated in the project. The treatments were given as individual packages in the naturalistic clinical setting. Patients completed questionnaires at start of treatment, after 6 months and a year after start of treatment. Response rates for the first, second and third questionnaire were respectively 68%, 98%, 95% for the GP patients and 82%, 98%, 94% for the CH patients. Patients seeking CH treatment in this study are significantly different in gender and education from patients seeking GP treatment. We did not find significant differences in terms of occupational training, occupation, sickness absence due to hypersensitivity illnesses, diseases other than hypersensitivity illnesses, symptoms severity due to hypersensitivity illnesses before treatment and expectation of the ability of the treatment to alleviate symptoms. Eighty-eight percent of GP and 21% of CH patients were continuing treatment after 1 year. Regression analysis showed that the only significant independent variables to explain the probability of obtaining very positive effect or cure for GPs and CHs were that the patients were in 'maintenance treatment', and had high expectation before treatment of the ability of the treatment to relieve their symptoms. In this study self-reported very positive effect of GP treatment and very positive effect and cure of CH treatment are associated with the patients' high expectation of the treatment and continuation of maintenance treatment.

  12. Sex determination from the femur in Portuguese populations with classical and machine-learning classifiers.

    PubMed

    Curate, F; Umbelino, C; Perinha, A; Nogueira, C; Silva, A M; Cunha, E

    2017-11-01

    The assessment of sex is of paramount importance in the establishment of the biological profile of a skeletal individual. Femoral relevance for sex estimation is indisputable, particularly when other exceedingly dimorphic skeletal regions are missing. As such, this study intended to generate population-specific osteometric models for the estimation of sex with the femur and to compare the accuracy of the models obtained through classical and machine-learning classifiers. A set of 15 standard femoral measurements was acquired in a training sample (100 females; 100 males) from the Coimbra Identified Skeletal Collection (University of Coimbra, Portugal) and models for sex classification were produced with logistic regression (LR), linear discriminant analysis (LDA), support vector machines (SVM), and reduced error pruning trees (REPTree). Under cross-validation, univariable sectioning points generated with REPTree correctly estimated sex in 60.0-87.5% of cases (systematic error ranging from 0.0 to 37.0%), while multivariable models correctly classified sex in 84.0-92.5% of cases (bias from 0.0 to 7.0%). All models were assessed in a holdout sample (24 females; 34 males) from the 21st Century Identified Skeletal Collection (University of Coimbra, Portugal), with an allocation accuracy ranging from 56.9 to 86.2% (bias from 4.4 to 67.0%) in the univariable models, and from 84.5 to 89.7% (bias from 3.7 to 23.3%) in the multivariable models. This study makes available a detailed description of sexual dimorphism in femoral linear dimensions in two Portuguese identified skeletal samples, emphasizing the relevance of the femur for the estimation of sex in skeletal remains in diverse conditions of completeness and preservation. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
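
    A toy version of the classifier comparison on synthetic "femoral measurements" (three variables rather than the study's fifteen, with invented means and sex offsets), scored by cross-validated accuracy.

        # LR, LDA and SVM compared on synthetic osteometric data.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(11)
        n = 200
        sex = rng.integers(0, 2, n)                     # 0 = female, 1 = male
        X = rng.normal(430, 20, (n, 3)) + sex[:, None] * [25, 18, 12]

        for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                          ("LDA", LinearDiscriminantAnalysis()),
                          ("SVM", SVC())]:
            acc = cross_val_score(clf, X, sex, cv=5).mean()
            print(f"{name}: {acc:.1%} cross-validated accuracy")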

  13. Is CHA2DS2-VASc appropriate for hyperthyroid patients with atrial fibrillation? Implications of adding a transesophageal echocardiography evaluation.

    PubMed

    de Souza, Marcus Vinicius Leitão; de Fátima Dos Santos Teixeira, Patricia; Vaisman, Mario; Xavier, Sergio Salles

    2017-02-01

    Anticoagulation remains a controversial issue among hyperthyroid patients with atrial fibrillation (AF). We aimed to evaluate the prevalence of the thrombogenic milieu (TM), detected using transesophageal echocardiography (TEE), among patients with AF related to hyperthyroidism, and to correlate these findings with the clinical embolic risk classification (CHA₂DS₂-VASc). The CHA₂DS₂-VASc score, thyroid hormonal status, time since hyperthyroidism diagnosis, transthoracic echocardiography (TTE) and TEE were assessed in 47 consecutive patients aged between 18 and 65 years with AF related to hyperthyroidism. The following TEE parameters defined TM: dense spontaneous echo contrast, thrombi, or left atrial appendage (LAA) blood flow velocities <0.20 m/s. Non-classic TM was defined as non-dense SEC plus LAA flow velocity 0.20-0.40 m/s. Pulmonary hypertension was present in 39/47 (81.4%) and TM in 22/47 (46.8%) patients. Despite a low CHA₂DS₂-VASc score of 0/1, 10 of 19 (52.6%) patients had a TM, whereas 16 of 28 (57.1%) patients with a score ≥2 had none. The probability of having a TM did not correlate with CHA₂DS₂-VASc scores. On binary regression analysis, hyperthyroidism diagnosed more than 12 months previously was independently associated with non-classic TM (p = 0.031). Among patients younger than 65 years of age with AF related to hyperthyroidism, pulmonary hypertension and TM on TEE were highly prevalent. There was no association between CHA₂DS₂-VASc and TEE markers of TM. Thyroid status, especially a longer duration of hyperthyroidism, might influence thrombogenic abnormalities. TEE adds useful information that may change antithrombotic therapy if otherwise guided solely by clinical risk classification. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
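
    For reference, the CHA₂DS₂-VASc score as commonly defined; the study's point is precisely that this clinical score may not track TEE-detected thrombogenic milieu in hyperthyroid AF.

        # CHA2DS2-VASc score (common definition; illustrative sketch).
        def cha2ds2_vasc(chf, hypertension, age, diabetes, stroke_tia,
                         vascular_disease, female):
            score = chf + hypertension + diabetes + vascular_disease + female
            score += 2 * stroke_tia          # prior stroke/TIA counts double
            if age >= 75:
                score += 2
            elif age >= 65:
                score += 1
            return score

        # Example: 58-year-old woman with hypertension, no other risk factors.
        print(cha2ds2_vasc(0, 1, 58, 0, 0, 0, 1))   # -> 2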

  14. A case of a Tunisian Rett patient with a novel double-mutation of the MECP2 gene

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fendri-Kriaa, Nourhene, E-mail: nourhene.fendri@gmail.com; Hsairi, Ines; Kifagi, Chamseddine

    2011-06-03

    Highlights: • Sequencing of the MECP2 gene, modeling and comparison of the two variants were performed in a Tunisian classical Rett patient. • A double-mutation, a new and de novo mutation c.535C > T and the common one c.763C > T of the MECP2 gene, was identified. • The P179S transition may change local electrostatic properties, which may affect the function and stability of the protein MeCP2. Abstract: Rett syndrome is an X-linked dominant disorder caused frequently by mutations in the methyl-CpG-binding protein 2 gene (MECP2). Rett patients present an apparently normal psychomotor development during the first 6-18 months of life. Thereafter, they show a short period of developmental stagnation followed by a rapid regression in language and motor development. The aim of this study was to perform a mutational analysis of the MECP2 gene in a classical Rett patient by sequencing the corresponding gene and modeling the found variants. The results showed the presence of a double-mutation: a new and de novo mutation c.535C > T (p.P179S) and the common c.763C > T (p.R255X) transition of the MECP2 gene. The p.P179S mutation was located in a conserved amino acid in the CRIR domain (corepressor interacting region). Modeling results showed that the P179S transition could change local electrostatic properties, by adding a negative charge due to the serine hydroxyl group, in this region of MeCP2, which may affect the function and stability of the protein. The p.R255X mutation is located in the TRD-NLS domain (transcription repression domain-nuclear localization signal) of the MeCP2 protein.

  15. Autoimmune and Atopic Disorders and Risk of Classical Hodgkin Lymphoma.

    PubMed

    Hollander, Peter; Rostgaard, Klaus; Smedby, Karin E; Chang, Ellen T; Amini, Rose-Marie; de Nully Brown, Peter; Glimelius, Bengt; Adami, Hans-Olov; Melbye, Mads; Glimelius, Ingrid; Hjalgrim, Henrik

    2015-10-01

    Results from previous investigations have shown associations between the risk of Hodgkin lymphoma (HL) and a history of autoimmune and atopic diseases, but it remains unknown whether these associations apply to all types of HL or only to specific subtypes. We investigated immune diseases and the risk of classical HL in a population-based case-control study that included 585 patients and 3,187 controls recruited from October 1999 through August 2002. We collected information on immune diseases through telephone interviews and performed serological analyses of specific immunoglobulin E reactivity. Tumor Epstein-Barr virus (EBV) status was determined for 498 patients. Odds ratios with 95% confidence intervals were calculated using logistic regression analysis. Rheumatoid arthritis was associated with a higher risk of HL (odds ratio (OR) = 2.63; 95% confidence interval (CI): 1.47, 4.70), especially EBV-positive HL (OR = 3.18; 95% CI: 1.23, 8.17), and with mixed-cellularity HL (OR = 4.25; 95% CI: 1.66, 10.90). HL risk was higher when we used proxies of severe rheumatoid arthritis, such as ever having received daily rheumatoid arthritis medication (OR = 3.98; 95% CI: 2.08, 7.62), rheumatoid arthritis duration of 6-20 years (OR = 3.80; 95% CI: 1.72, 8.41), or ever having been hospitalized for rheumatoid arthritis (OR = 7.36; 95% CI: 2.95, 18.38). Atopic diseases were not associated with the risk of HL. EBV replication induced by chronic inflammation in patients with autoimmune diseases might explain the higher risk of EBV-positive HL. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  16. Genetic Variation in Complement Component 2 of the Classical Complement Pathway is Associated with Increased Mortality and Infection: A Study of 627 Trauma Patients

    PubMed Central

    Morris, John A.; Francois, Cedric; Olson, Paul K.; Cotton, Bryan A.; Summar, Marshall; Jenkins, Judith M.; Norris, Patrick R.; Moore, Jason H.; Williams, Anna E.; McNew, Brent S.; Canter, Jeffrey A.

    2009-01-01

    Trauma is a disease of inflammation. Complement Component 2 (C2) is a protease involved in activation of complement through the classical pathway and has been implicated in a variety of chronic inflammatory diseases. We hypothesized that genetic variation in C2 (E318D) identifies a high-risk subgroup of trauma patients reflecting increased mortality and infection (ventilator-associated pneumonia: VAP). Consequently, genetic variation in C2 may stratify patient risk and illuminate underlying mechanisms for therapeutic intervention. Methods: DNA samples from 702 trauma patients were genotyped for C2 E318D and linked with covariates (age: mean 42.8 years, gender: 74% male, ethnicity: 80% Caucasian, mechanism: 84% blunt, ISS: mean 25.0, admission lactate: mean 3.13 mEq/L) and outcomes: mortality 9.9% and VAP 18.5%. VAP was defined by quantitative bronchoalveolar lavage (>10⁴). Multivariate regression determined the relationship of genotype and covariates to risk of death and VAP. However, patients with ISS ≥ 45 were excluded from the multivariate analysis, as magnitude of injury overwhelms genetics and covariates in determining outcome. Results: 52 patients (8.3%) had the high-risk heterozygous genotype, associated with a significant increase in mortality and VAP. Conclusion: In 702 trauma patients, 8.3% had a high-risk genetic variation in C2 associated with increased mortality (OR = 2.65) and infection (OR = 2.00). This variation: 1) identifies a previously unknown high-risk group for infection and mortality; 2) can be determined on admission; 3) may provide an opportunity for early therapeutic intervention; and 4) requires validation in a distinct cohort of patients. PMID:19430225

  17. Multivariate Regression Analysis and Slaughter Livestock,

    DTIC Science & Technology

    (*AGRICULTURE, *ECONOMICS), (*MEAT, PRODUCTION), MULTIVARIATE ANALYSIS, REGRESSION ANALYSIS, ANIMALS, WEIGHT, COSTS, PREDICTIONS, STABILITY, MATHEMATICAL MODELS, STORAGE, BEEF, PORK, FOOD, STATISTICAL DATA, ACCURACY

  18. Comprehensive analysis of MHC class II genes in teleost fish genomes reveals dispensability of the peptide-loading DM system in a large part of vertebrates

    PubMed Central

    2013-01-01

    Background: Classical major histocompatibility complex (MHC) class II molecules play an essential role in presenting peptide antigens to CD4+ T lymphocytes in the acquired immune system. The non-classical class II DM molecule, HLA-DM in the case of humans, possesses critical function in assisting the classical MHC class II molecules for proper peptide loading and is highly conserved in tetrapod species. Although the absence of DM-like genes in teleost fish has been speculated based on the results of homology searches, it has not been definitively clear whether the DM system is truly specific for tetrapods or not. To obtain a clear answer, we comprehensively searched class II genes in representative teleost fish genomes and analyzed those genes regarding the critical functional features required for the DM system. Results: We discovered a novel ancient class II group (DE) in teleost fish and classified teleost fish class II genes into three major groups (DA, DB and DE). Based on several criteria, we investigated the classical/non-classical nature of various class II genes and showed that only one of three groups (DA) exhibits classical-type characteristics. Analyses of predicted class II molecules revealed that the critical tryptophan residue required for a classical class II molecule in the DM system could be found only in some non-classical but not in classical-type class II molecules of teleost fish. Conclusions: Teleost fish, a major group of vertebrates, do not possess the DM system for the classical class II peptide-loading and this sophisticated system has specially evolved in the tetrapod lineage. PMID:24279922

  19. Structural aspects of the solvation shell of lysine and acetylated lysine: A Car-Parrinello and classical molecular dynamics investigation

    NASA Astrophysics Data System (ADS)

    Carnevale, V.; Raugei, S.

    2009-12-01

    Lysine acetylation is a post-translational modification, which modulates the affinity of protein-protein and/or protein-DNA complexes. Its crucial role as a switch in signaling pathways highlights the relevance of charged chemical groups in determining the interactions between water and biomolecules. A great effort has been recently devoted to assess the reliability of classical molecular dynamics simulations in describing the solvation properties of charged moieties. In the spirit of these investigations, we performed classical and Car-Parrinello molecular dynamics simulations on lysine and acetylated-lysine in aqueous solution. A comparative analysis between the two computational schemes is presented with a focus on the first solvation shell of the charged groups. An accurate structural analysis unveils subtle, yet statistically significant, differences which are discussed in connection to the significant electronic density charge transfer occurring between the solute and the surrounding water molecules.

  20. Quantum Transmission Conditions for Diffusive Transport in Graphene with Steep Potentials

    NASA Astrophysics Data System (ADS)

    Barletti, Luigi; Negulescu, Claudia

    2018-05-01

    We present a formal derivation of a drift-diffusion model for stationary electron transport in graphene, in presence of sharp potential profiles, such as barriers and steps. Assuming the electric potential to have steep variations within a strip of vanishing width on a macroscopic scale, such strip is viewed as a quantum interface that couples the classical regions at its left and right sides. In the two classical regions, where the potential is assumed to be smooth, electron and hole transport is described in terms of semiclassical kinetic equations. The diffusive limit of the kinetic model is derived by means of a Hilbert expansion and a boundary layer analysis, and consists of drift-diffusion equations in the classical regions, coupled by quantum diffusive transmission conditions through the interface. The boundary layer analysis leads to the discussion of a four-fold Milne (half-space, half-range) transport problem.

  1. A comparison of classical histology to anatomy revealed by hard x-rays

    NASA Astrophysics Data System (ADS)

    Richter, Claus-Peter; Tan, Xiaodong; Young, Hunter; Stock, Stuart; Robinson, Alan; Byskosh, Orest; Zheng, Jing; Soriano, Carmen; Xiao, Xianghui; Whitlon, Donna

    2016-10-01

    Many diseases trigger morphological changes in affected tissue. Today, classical histology is still the "gold standard" used to study and describe those changes. Classical histology, however, is time consuming and requires chemical tissue manipulations that can result in significant tissue distortions. It is sometimes difficult to separate tissue-processing artifacts from changes caused by the disease process. We show that synchrotron X-ray phase-contrast micro-computed tomography (micro-CT) can be used to examine non-embedded, hydrated tissue at a resolution comparable to that obtained with classical histology. The data analysis from stacks of reconstructed micro-CT images is more flexible and faster than when using the classical, physically embedded sections that are by necessity fixed in a particular orientation. We show that in a three-dimensional (3D) structure with meticulous structural details such as the cochlea and the kidney, micro-CT is more flexible, faster and more convenient for morphological studies and disease diagnoses.

  2. Classical analogs for Rabi-oscillations, Ramsey-fringes, and spin-echo in Josephson junctions

    NASA Astrophysics Data System (ADS)

    Marchese, J. E.; Cirillo, M.; Grønbech-Jensen, N.

    2007-08-01

We investigate the results of recently published experiments on the quantum behavior of Josephson circuits in terms of classical modeling based on the resistively and capacitively shunted junction (RCSJ) model. Our analysis shows evidence for a close analogy between the nonlinear behavior of a pulsed microwave-driven Josephson junction at low temperature and low dissipation and the experimental observations reported for the Josephson circuits. Specifically, we demonstrate that Rabi-oscillations, Ramsey-fringes, and spin-echo observations are not phenomena with a unique quantum interpretation. In fact, they are natural consequences of transients to phase-locking in classical nonlinear dynamics, and can be observed in a purely classical model of a Josephson junction when the experimental protocols for applying microwaves and for detection are followed. We therefore conclude that classical nonlinear dynamics can contribute to the understanding of relevant experimental observations of the Josephson response to various microwave perturbations at very low temperature and low dissipation.

  3. The effect of preference for three different types of music on magnitude estimation-scaling behavior in young adults.

    PubMed

    Fucci, D; Petrosino, L; Banks, M; Zaums, K; Wilcox, C

    1996-08-01

The purpose of the present study was to assess the effect of preference for three different types of music on magnitude estimation scaling behavior in young adults. Three groups of college students were tested: 10 who liked rock music, 10 who liked big band music, and 10 who liked classical music. Subjects were instructed to assign numerical values to a random series of nine suprathreshold intensity levels of 10-sec samples of rock music, big band music, and classical music. Analysis indicated that subjects who liked rock music scaled that stimulus differently from those subjects who liked big band and classical music. Subjects who liked big band music scaled that stimulus differently from those subjects who liked rock music and classical music. All subjects scaled classical music similarly regardless of their musical preferences. Results are discussed in reference to the literature concerned with personality and preference, as well as spectrographic analyses of the three different types of music used in this study.

  4. Development of advanced methods for analysis of experimental data in diffusion

    NASA Astrophysics Data System (ADS)

    Jaques, Alonso V.

There are numerous experimental configurations and data analysis techniques for the characterization of diffusion phenomena. However, the mathematical methods for estimating diffusivities traditionally do not take into account the effects of experimental errors in the data, and often require smooth, noiseless data sets to perform the necessary analysis steps. The current methods used for data smoothing require strong assumptions which can introduce numerical "artifacts" into the data, affecting confidence in the estimated parameters. The Boltzmann-Matano method is used extensively in the determination of concentration-dependent diffusivities, D(C), in alloys. In the course of analyzing experimental data, numerical integrations and differentiations of the concentration profile are performed. These methods require smoothing of the data prior to analysis. We present here an approach to the Boltzmann-Matano method that is based on a regularization method for estimating the differentiation operation on the data, i.e., for estimating the concentration gradient term, which is important in the analysis process for determining the diffusivity. This approach, therefore, has the potential to be less subjective, and in numerical simulations it shows increased accuracy in the estimated diffusion coefficients. We also present a regression approach to estimate linear multicomponent diffusion coefficients that eliminates the need to pre-treat or pre-condition the concentration profile. This approach fits the data to a functional form of the mathematical expression for the concentration profile, and allows us to determine the diffusivity matrix directly from the fitted parameters. The equation for the analytical solution is reformulated in order to reduce the size of the problem and accelerate convergence. The objective function for the regression can incorporate point estimates of the error in the concentration, improving the statistical confidence in the estimated diffusivity matrix. Case studies are presented to demonstrate the reliability and the stability of the method. To the best of our knowledge there is no published analysis of the effects of experimental errors on the reliability of the estimates for the diffusivities. For the case of linear multicomponent diffusion, we analyze the effects of the instrument's analytical spot size, positioning uncertainty, and concentration uncertainty on the resulting values of the diffusivities. These effects are studied using the Monte Carlo method on simulated experimental data. Several useful scaling relationships were identified which allow more rigorous and quantitative estimates of the errors in the measured data, and are valuable for experimental design. Finally, to analyze anomalous diffusion processes, where traditional diffusional transport equations do not hold, we explore the use of fractional calculus to represent these processes analytically. We apply the fractional calculus approach to anomalous diffusion through a finite plane sheet with one face held at a fixed concentration, the other held at zero, and the initial concentration within the sheet equal to zero. This problem is related to cases in nature where diffusion is enhanced relative to the classical process, so that the governing equation is not necessarily of second order; rather, differentiation is of fractional order alpha, where 1 ≤ alpha < 2. For alpha = 2, the presented solutions reduce to the classical second-order diffusion solution for the conditions studied. The solution obtained allows the analysis of permeation experiments. Frequently, hydrogen diffusion is analyzed using electrochemical permeation methods and the traditional, Fickian-based theory. Experimental evidence shows that the latter analytical approach is not always appropriate, because reported data show qualitative (and quantitative) deviations from its theoretical scaling predictions. Preliminary analysis of data shows better agreement with fractional diffusion analysis than with traditional square-root scaling. Although there is a large amount of work on the estimation of the diffusivity from experimental data, reported studies typically present only the analytical description for the diffusivity, without considering scatter in the data. However, because these studies do not consider effects produced by the measuring instrument, their direct applicability is limited. We propose alternatives to address these effects, and evaluate their influence on the final resulting diffusivity values.
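
    The regularized-differentiation idea above can be illustrated with a short sketch. This is not the thesis's actual scheme (its details are not given here) but a generic Tikhonov-regularized derivative estimate, assuming evenly spaced samples: instead of differencing smoothed data, the derivative is sought directly as the minimizer of a penalized least-squares problem.

```python
import numpy as np

def regularized_derivative(y, dx, lam=1e-3):
    """Estimate dy/dx from noisy, evenly spaced samples y.

    Finds u minimizing ||A u - (y - y[0])||^2 + lam * ||D u||^2, where
    A is a cumulative (trapezoidal) integration operator, so that u
    integrates back to the data, and D is a first-difference penalty
    that enforces smoothness of the estimated derivative.
    """
    n = len(y)
    A = np.tril(np.ones((n, n))) * dx       # integrate u up to each point
    A[:, 0] -= 0.5 * dx                     # trapezoidal end corrections
    A -= 0.5 * dx * np.eye(n)
    D = (np.eye(n - 1, n, 1) - np.eye(n - 1, n)) / dx
    lhs = np.vstack([A, np.sqrt(lam) * D])  # stacked least-squares form
    rhs = np.concatenate([y - y[0], np.zeros(n - 1)])
    u, *_ = np.linalg.lstsq(lhs, rhs, rcond=None)
    return u
```

    The penalty weight lam trades fidelity against smoothness and would in practice be chosen by, for example, cross-validation.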

  5. RNA isolation from bloodstains collected on FTA cards - application in clinical and forensic genetics.

    PubMed

    Skonieczna, Katarzyna; Styczyński, Jan; Krenska, Anna; Wysocki, Mariusz; Jakubowska, Aneta; Grzybowski, Tomasz

    2016-01-01

Aim of the study: In recent years, RNA analysis has been increasingly used in clinical and forensic genetics. Nevertheless, a major limitation of RNA-based applications is the very low stability of RNA in biological material, due to RNase activity. This highlights the need for improving the methods of RNA collection and storage. Technological approaches such as FTA Classic Cards (Whatman) could provide a solution for the problem of RNA degradation. However, different methods of RNA isolation from FTA cards could have diverse effects on RNA quantity and quality. The purpose of this research was to analyze the utility of three different methods of RNA isolation from peripheral blood collected on FTA Classic Cards (Whatman). The study also aimed at assessing RNA stability in bloodstains deposited on FTA cards. Material and methods: The study was performed on peripheral bloodstains collected from 59 individuals on FTA Classic Cards (Whatman). RNA was isolated with the High Pure RNA Isolation Kit (Roche Diagnostics), Universal RNA/miRNA Purification (EURx) and TRIzol Reagent (Life Technologies). RNA was subjected to quantitative analysis followed by reverse transcription and Real-Time PCR. Results: The study has shown that FTA Classic Cards (Whatman) are useful tools for storing bloodstains at room temperature for RNA analysis. Moreover, the method of RNA extraction employing TRIzol Reagent (Life Technologies) provides the highest efficiency and reproducibility for samples stored for no more than 2 years. Conclusions: The FTA cards are suitable for collecting and storing bloodstains for RNA analysis in clinical and forensic genetics.

  6. Objective Dysphonia Quantification in Vocal Fold Paralysis: Comparing Nonlinear with Classical Measures

    PubMed Central

    Little, Max A.; Costello, Declan A. E.; Harries, Meredydd L.

    2010-01-01

Clinical acoustic voice-recording analysis is usually performed using classical perturbation measures, including jitter, shimmer, and noise-to-harmonic ratios (NHRs). However, restrictive mathematical limitations of these measures prevent analysis for severely dysphonic voices. Previous studies of alternative nonlinear random measures addressed wide varieties of vocal pathologies. Here, we analyze a single vocal pathology cohort, testing the performance of these alternative measures alongside classical measures. We present voice analysis pre- and postoperatively in 17 patients with unilateral vocal fold paralysis (UVFP). The patients underwent standard medialization thyroplasty surgery, and the voices were analyzed using jitter, shimmer, NHR, nonlinear recurrence period density entropy (RPDE), detrended fluctuation analysis (DFA), and correlation dimension. In addition, we similarly analyzed 11 healthy controls. Systematizing the preanalysis editing of the recordings, we found that the novel measures were more stable and, hence, more reliable than the classical measures on healthy controls. RPDE and jitter are sensitive to improvements pre- to postoperation. Shimmer, NHR, and DFA showed no significant change (P > 0.05). All measures detect statistically significant and clinically important differences between controls and patients, both treated and untreated (P < 0.001, area under curve [AUC] > 0.7). Pre- to postoperation grade, roughness, breathiness, asthenia, and strain (GRBAS) ratings show statistically significant and clinically important improvement in overall dysphonia grade (G) (AUC = 0.946, P < 0.001). Recalculating AUCs from other study data, we compare these results in terms of clinical importance. We conclude that, when preanalysis editing is systematized, nonlinear random measures may be useful for monitoring UVFP-treatment effectiveness, and there may be applications to other forms of dysphonia. PMID:19900790

  7. Regression Analysis: Legal Applications in Institutional Research

    ERIC Educational Resources Information Center

    Frizell, Julie A.; Shippen, Benjamin S., Jr.; Luna, Andrew L.

    2008-01-01

    This article reviews multiple regression analysis, describes how its results should be interpreted, and instructs institutional researchers on how to conduct such analyses using an example focused on faculty pay equity between men and women. The use of multiple regression analysis will be presented as a method with which to compare salaries of…

  8. RAWS II: A MULTIPLE REGRESSION ANALYSIS PROGRAM,

    DTIC Science & Technology

This memorandum gives instructions for the use and operation of a revised version of RAWS, a multiple regression analysis program. The program...of preprocessed data, the directed retention of variables, listing of the matrix of the normal equations and its inverse, and the bypassing of the regression analysis to provide the input variable statistics only. (Author)

  9. Vibrio cholerae Classical Biotype Is Converted to the Viable Non-Culturable State when Cultured with the El Tor Biotype

    PubMed Central

    Pradhan, Subhra; Mallick, Sanjaya K.; Chowdhury, Rukhsana

    2013-01-01

    A unique event in bacterial epidemiology was the emergence of the El Tor biotype of Vibrio cholerae O1 and the subsequent rapid displacement of the existing classical biotype as the predominant cause of epidemic cholera. We demonstrate that when the El Tor and classical biotypes were cocultured in standard laboratory medium a precipitous decline in colony forming units (CFU) of the classical biotype occurred in a contact dependent manner. Several lines of evidence including DNA release, microscopy and flow cytometric analysis indicated that the drastic reduction in CFU of the classical biotype in cocultures was not accompanied by lysis, although when the classical biotype was grown individually in monocultures, lysis of the cells occurred concomitant with decrease in CFU starting from late stationary phase. Furthermore, uptake of a membrane potential sensitive dye and protection of genomic DNA from extracellular DNase strongly suggested that the classical biotype cells in cocultures retained viability in spite of loss of culturability. These results suggest that coculturing the classical biotype with the El Tor biotype protects the former from lysis allowing the cells to remain viable in spite of the loss of culturability. The stationary phase sigma factor RpoS may have a role in the loss of culturability of the classical biotype in cocultures. Although competitive exclusion of closely related strains has been reported for several bacterial species, conversion of the target bacterial population to the viable non-culturable state has not been demonstrated previously and may have important implications in the evolution of bacterial strains. PMID:23326443

  10. Identification of natural frequencies and modal damping ratios of aerospace structures from response data

    NASA Technical Reports Server (NTRS)

    Michalopoulos, C. D.

    1976-01-01

An analysis of one- and multi-degree-of-freedom systems with classical damping is presented. The definition and minimization of error functions for each system are discussed. Systems with classical and nonclassical normal modes are studied, and results for first-order perturbation are given. An alternative method of matching power spectral densities is provided, and numerical results are reviewed.

  11. Bildung and Subject Didactics: Exploring a Classical Concept for Building New Insights

    ERIC Educational Resources Information Center

    Schneuwly, Bernard; Vollmer, Helmut Johannes

    2018-01-01

In the beginning of the 19th century, Humboldt defined Bildung as both process and product of the developing person. In this contribution we discuss how this classical concept may be used for defining subject didactics. We use two complementary approaches to address this question: a historical analysis, and the construction of a theoretical model. 1)…

  12. Comparative spectral analysis of veterinary powder product by continuous wavelet and derivative transforms

    NASA Astrophysics Data System (ADS)

    Dinç, Erdal; Kanbur, Murat; Baleanu, Dumitru

    2007-10-01

Comparative simultaneous determination of chlortetracycline and benzocaine in a commercial veterinary powder product was carried out by continuous wavelet transform (CWT) and classical derivative transform (or classical derivative spectrophotometry). In this quantitative spectral analysis, the two proposed analytical methods do not require any chemical separation process. In the first step, several wavelet families were tested to find an optimal CWT for the overlapping signal processing of the analyzed compounds. Subsequently, we observed that the coiflets (COIF-CWT) method with dilation parameter a = 400 gives suitable results for this analytical application. For comparison, the classical derivative spectrophotometry (CDS) approach was also applied to the simultaneous quantitative resolution of the same analytical problem. Calibration functions were obtained by measuring the transform amplitudes corresponding to zero-crossing points for both the CWT and CDS methods. The utility of these two analytical approaches was verified by analyzing various synthetic mixtures consisting of chlortetracycline and benzocaine, and they were applied to real samples consisting of a veterinary powder formulation. The experimental results obtained from the COIF-CWT approach were statistically compared with those obtained by classical derivative spectrophotometry, and successful results were reported.
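
    The derivative step of CDS can be illustrated with Savitzky-Golay differentiation, one common way to compute derivative spectra (not necessarily the authors' exact procedure); the spectrum below is a toy absorbance band and the filter settings are arbitrary.

```python
import numpy as np
from scipy.signal import savgol_filter

# hypothetical absorbance spectrum on an evenly spaced wavelength grid
wavelengths = np.linspace(220.0, 400.0, 901)             # nm, 0.2 nm step
spectrum = np.exp(-((wavelengths - 280.0) / 15.0) ** 2)  # toy band

# first-derivative spectrum via smoothing differentiation; amplitudes at
# zero-crossing points of such curves are used to build calibration lines
d1 = savgol_filter(spectrum, window_length=21, polyorder=3,
                   deriv=1, delta=wavelengths[1] - wavelengths[0])
```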

  13. Relating renormalizability of D-dimensional higher-order electromagnetic and gravitational models to the classical potential at the origin

    NASA Astrophysics Data System (ADS)

    Accioly, Antonio; Correia, Gilson; de Brito, Gustavo P.; de Almeida, José; Herdy, Wallace

    2017-03-01

Simple prescriptions for computing the D-dimensional classical potential of electromagnetic and gravitational models, based on the functional generator, are constructed. These recipes are then employed to probe the premise that renormalizable higher-order systems have a finite classical potential at the origin. It is also shown that the converse of this conjecture is not true: if a higher-order model is renormalizable, it is necessarily endowed with a finite classical potential at the origin, but the reverse of this statement is untrue. The systems used to check the conjecture were D-dimensional fourth-order Lee-Wick electrodynamics, and the D-dimensional fourth- and sixth-order gravity models. Special attention is devoted to New Massive Gravity (NMG), since it was the analysis of this model that inspired our surmise. In particular, we made use of our premise to resolve trivially the issue of the renormalizability of NMG, which was initially considered to be renormalizable but was shown some years later to be non-renormalizable. We remark that our analysis is restricted to local models in which the propagator has simple and real poles.

  14. [Comparison of application of Cochran-Armitage trend test and linear regression analysis for rate trend analysis in epidemiology study].

    PubMed

    Wang, D Z; Wang, C; Shen, C F; Zhang, Y; Zhang, H; Song, G D; Xue, X D; Xu, Z L; Zhang, S; Jiang, G H

    2017-05-10

We described the time trend in acute myocardial infarction (AMI) incidence in Tianjin from 1999 to 2013 using the Cochran-Armitage trend (CAT) test and linear regression analysis, and compared the results. Based on the actual population, the CAT test had much stronger statistical power than linear regression analysis for both the overall incidence trend and the age-specific incidence trends (Cochran-Armitage trend P value
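
    The CAT statistic itself is simple to compute directly. The sketch below is a generic implementation of the standard Cochran-Armitage trend test for proportions across ordered groups (scores default to 0, 1, 2, ...); the inputs are placeholders, not the Tianjin AMI figures.

```python
import numpy as np
from scipy.stats import norm

def cochran_armitage(successes, totals, scores=None):
    """Two-sided Cochran-Armitage test for a linear trend in
    proportions across ordered groups (e.g. calendar years)."""
    r = np.asarray(successes, float)   # cases per group
    n = np.asarray(totals, float)      # population per group
    s = np.arange(len(n), dtype=float) if scores is None else \
        np.asarray(scores, float)
    p = r.sum() / n.sum()              # pooled proportion
    t = np.sum(s * (r - n * p))        # trend statistic
    var = p * (1 - p) * (np.sum(n * s ** 2) - np.sum(n * s) ** 2 / n.sum())
    z = t / np.sqrt(var)
    return z, 2 * norm.sf(abs(z))      # z-score and two-sided P value

# hypothetical yearly case counts and populations
z, p_value = cochran_armitage([120, 135, 160, 171],
                              [1e5, 1e5, 1.1e5, 1.1e5])
```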

  15. A primer for biomedical scientists on how to execute model II linear regression analysis.

    PubMed

    Ludbrook, John

    2012-04-01

    1. There are two very different ways of executing linear regression analysis. One is Model I, when the x-values are fixed by the experimenter. The other is Model II, in which the x-values are free to vary and are subject to error. 2. I have received numerous complaints from biomedical scientists that they have great difficulty in executing Model II linear regression analysis. This may explain the results of a Google Scholar search, which showed that the authors of articles in journals of physiology, pharmacology and biochemistry rarely use Model II regression analysis. 3. I repeat my previous arguments in favour of using least products linear regression analysis for Model II regressions. I review three methods for executing ordinary least products (OLP) and weighted least products (WLP) regression analysis: (i) scientific calculator and/or computer spreadsheet; (ii) specific purpose computer programs; and (iii) general purpose computer programs. 4. Using a scientific calculator and/or computer spreadsheet, it is easy to obtain correct values for OLP slope and intercept, but the corresponding 95% confidence intervals (CI) are inaccurate. 5. Using specific purpose computer programs, the freeware computer program smatr gives the correct OLP regression coefficients and obtains 95% CI by bootstrapping. In addition, smatr can be used to compare the slopes of OLP lines. 6. When using general purpose computer programs, I recommend the commercial programs systat and Statistica for those who regularly undertake linear regression analysis and I give step-by-step instructions in the Supplementary Information as to how to use loss functions. © 2011 The Author. Clinical and Experimental Pharmacology and Physiology. © 2011 Blackwell Publishing Asia Pty Ltd.
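
    The OLP point estimates discussed above are easy to obtain even with a calculator: the slope is the sign of the correlation coefficient times the ratio of the standard deviations. A minimal sketch (point estimates only; as noted in the abstract, accurate 95% CIs require a dedicated tool such as smatr's bootstrap):

```python
import numpy as np

def olp_regression(x, y):
    """Ordinary least products (geometric mean) regression for
    Model II data, where both x and y are subject to error."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]                      # sign of association
    slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept
```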

  16. Breathing and Singing: Objective Characterization of Breathing Patterns in Classical Singers

    PubMed Central

    Salomoni, Sauro; van den Hoorn, Wolbert; Hodges, Paul

    2016-01-01

Singing involves respiratory kinematics (i.e. movements of the rib cage and abdomen) distinct from those of quiet breathing, because of different demands on the respiratory system. Professional classical singers often advocate the advantages of active control of the abdomen for singing performance. This is presumed to prevent shortening of the diaphragm, elevate the rib cage, and thus promote efficient generation of subglottal pressure during phonation. However, few studies have investigated these patterns quantitatively, and inter-subject variability has hindered the identification of stereotypical patterns of respiratory kinematics. Here, seven professional classical singers and four untrained individuals were assessed during quiet breathing, and when singing both a standard song and a piece of choice. Several parameters were extracted from respiratory kinematics and airflow, and principal component analysis was used to identify typical patterns of respiratory kinematics. No group differences were observed during quiet breathing. During singing, both groups adapted to rhythmical constraints with decreased time of inspiration and increased peak airflow. In contrast to untrained individuals, classical singers used a greater percentage of abdominal contribution to lung volume during singing and greater asynchrony between movements of the rib cage and abdomen. Classical singers substantially altered the coordination of rib cage and abdomen during singing from that used for quiet breathing. Despite variations between participants, principal component analysis revealed consistent pre-phonatory inward movements of the abdominal wall during singing. This contrasted with untrained individuals, who demonstrated synchronous respiratory movements during all tasks. The inward abdominal movements observed in classical singers elevate intra-abdominal pressure and may increase the length and the pressure-generating capacity of rib cage expiratory muscles, for potential improvements in voice quality. PMID:27159498

  17. Breathing and Singing: Objective Characterization of Breathing Patterns in Classical Singers.

    PubMed

    Salomoni, Sauro; van den Hoorn, Wolbert; Hodges, Paul

    2016-01-01

Singing involves respiratory kinematics (i.e. movements of the rib cage and abdomen) distinct from those of quiet breathing, because of different demands on the respiratory system. Professional classical singers often advocate the advantages of active control of the abdomen for singing performance. This is presumed to prevent shortening of the diaphragm, elevate the rib cage, and thus promote efficient generation of subglottal pressure during phonation. However, few studies have investigated these patterns quantitatively, and inter-subject variability has hindered the identification of stereotypical patterns of respiratory kinematics. Here, seven professional classical singers and four untrained individuals were assessed during quiet breathing, and when singing both a standard song and a piece of choice. Several parameters were extracted from respiratory kinematics and airflow, and principal component analysis was used to identify typical patterns of respiratory kinematics. No group differences were observed during quiet breathing. During singing, both groups adapted to rhythmical constraints with decreased time of inspiration and increased peak airflow. In contrast to untrained individuals, classical singers used a greater percentage of abdominal contribution to lung volume during singing and greater asynchrony between movements of the rib cage and abdomen. Classical singers substantially altered the coordination of rib cage and abdomen during singing from that used for quiet breathing. Despite variations between participants, principal component analysis revealed consistent pre-phonatory inward movements of the abdominal wall during singing. This contrasted with untrained individuals, who demonstrated synchronous respiratory movements during all tasks. The inward abdominal movements observed in classical singers elevate intra-abdominal pressure and may increase the length and the pressure-generating capacity of rib cage expiratory muscles, for potential improvements in voice quality.

  18. Mangiferin inhibits macrophage classical activation via downregulating interferon regulatory factor 5 expression.

    PubMed

    Wei, Zhiquan; Yan, Li; Chen, Yixin; Bao, Chuanhong; Deng, Jing; Deng, Jiagang

    2016-08-01

Mangiferin is a natural polyphenol and the predominant effective component of Mangifera indica Linn. leaves. For hundreds of years, Mangifera indica Linn. leaf has been used as an ingredient in numerous traditional Chinese medicine preparations for the treatment of bronchitis. However, the pharmacological mechanism of mangiferin in the treatment of bronchitis remains to be elucidated. Macrophage classical activation plays an important role in the process of bronchial airway inflammation, and interferon regulatory factor 5 (IRF5) has been identified as a key regulatory factor for macrophage classical activation. The present study used the THP‑1 human monocyte cell line to investigate whether mangiferin inhibits macrophage classical activation by suppressing IRF5 expression in vitro. THP‑1 cells were differentiated to macrophages by phorbol 12‑myristate 13‑acetate. Macrophages were polarized to M1 macrophages following stimulation with lipopolysaccharide (LPS)/interferon‑γ (IFN‑γ). Flow cytometric analysis was conducted to detect the M1 macrophages. Reverse transcription‑quantitative polymerase chain reaction was used to investigate cellular IRF5 gene expression. Levels of proinflammatory cytokines and IRF5 were assessed following cell culture and cellular homogenization using enzyme‑linked immunosorbent assay. IRF5 protein and nuclei co‑localization was examined in macrophages with laser scanning confocal microscope immunofluorescence analysis. The results of the present study demonstrated that mangiferin significantly inhibits LPS/IFN‑γ stimulation‑induced classical activation of macrophages in vitro and markedly decreases proinflammatory cytokine release. In addition, cellular IRF5 expression was markedly downregulated. These results suggest that the inhibitory effect of mangiferin on classical activation of macrophages may be exerted via downregulation of cellular IRF5 expression levels.

  19. Mangiferin inhibits macrophage classical activation via downregulating interferon regulatory factor 5 expression

    PubMed Central

    Wei, Zhiquan; Yan, Li; Chen, Yixin; Bao, Chuanhong; Deng, Jing; Deng, Jiagang

    2016-01-01

Mangiferin is a natural polyphenol and the predominant effective component of Mangifera indica Linn. leaves. For hundreds of years, Mangifera indica Linn. leaf has been used as an ingredient in numerous traditional Chinese medicine preparations for the treatment of bronchitis. However, the pharmacological mechanism of mangiferin in the treatment of bronchitis remains to be elucidated. Macrophage classical activation plays an important role in the process of bronchial airway inflammation, and interferon regulatory factor 5 (IRF5) has been identified as a key regulatory factor for macrophage classical activation. The present study used the THP-1 human monocyte cell line to investigate whether mangiferin inhibits macrophage classical activation by suppressing IRF5 expression in vitro. THP-1 cells were differentiated to macrophages by phorbol 12-myristate 13-acetate. Macrophages were polarized to M1 macrophages following stimulation with lipopolysaccharide (LPS)/interferon-γ (IFN-γ). Flow cytometric analysis was conducted to detect the M1 macrophages. Reverse transcription-quantitative polymerase chain reaction was used to investigate cellular IRF5 gene expression. Levels of proinflammatory cytokines and IRF5 were assessed following cell culture and cellular homogenization using enzyme-linked immunosorbent assay. IRF5 protein and nuclei co-localization was examined in macrophages with laser scanning confocal microscope immunofluorescence analysis. The results of the present study demonstrated that mangiferin significantly inhibits LPS/IFN-γ stimulation-induced classical activation of macrophages in vitro and markedly decreases proinflammatory cytokine release. In addition, cellular IRF5 expression was markedly downregulated. These results suggest that the inhibitory effect of mangiferin on classical activation of macrophages may be exerted via downregulation of cellular IRF5 expression levels. PMID:27277156

  20. Water quality parameter measurement using spectral signatures

    NASA Technical Reports Server (NTRS)

    White, P. E.

    1973-01-01

    Regression analysis is applied to the problem of measuring water quality parameters from remote sensing spectral signature data. The equations necessary to perform regression analysis are presented and methods of testing the strength and reliability of a regression are described. An efficient algorithm for selecting an optimal subset of the independent variables available for a regression is also presented.
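
    The report's own selection algorithm is not reproduced here, but the idea of choosing an optimal subset of independent variables can be sketched generically: enumerate candidate subsets and keep the one with the best penalized fit, here scored by adjusted R^2 (feasible only when the number of variables is small).

```python
import numpy as np
from itertools import combinations

def best_subset(X, y, max_vars=3):
    """Exhaustive subset selection for linear regression, scored by
    adjusted R^2; returns (best score, column indices of best subset)."""
    n, p = X.shape
    best = (-np.inf, ())
    for k in range(1, max_vars + 1):
        for cols in combinations(range(p), k):
            A = np.column_stack([np.ones(n), X[:, cols]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ beta
            tss = np.sum((y - y.mean()) ** 2)
            r2 = 1.0 - resid @ resid / tss
            adj = 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)
            if adj > best[0]:
                best = (adj, cols)
    return best
```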

  1. Improvement of Quench Factor Analysis in Phase and Hardness Prediction of a Quenched Steel

    NASA Astrophysics Data System (ADS)

    Kianezhad, M.; Sajjadi, S. A.

    2013-05-01

The accurate prediction of the properties produced in alloys by heat treatment has been considered by many researchers. The advantages of such predictions are the reduction of test trials and material consumption, as well as time and energy savings. One of the most important methods to predict hardness in quenched steel parts is Quench Factor Analysis (QFA). Classical QFA is based on the Johnson-Mehl-Avrami-Kolmogorov (JMAK) equation. In this study, a modified form of QFA based on the work by Rometsch et al. is compared with classical QFA, and both are applied to the prediction of the hardness of steels. For this purpose, samples of CK60 steel were used as raw material. They were austenitized at 1103 K (830 °C). After quenching in different environments, they were cut and their hardness was determined. In addition, the hardness values of the samples were fitted using the classical and modified equations for quench factor analysis, and the results were compared. The results showed a significant improvement in the fitted hardness values and proved the higher efficiency of the new method.

  2. Experimental use of a laser as one of the methods of physiotherapeutical treatment in lumbosacral rachialgia

    NASA Astrophysics Data System (ADS)

    Jagielski, Jerzy

    1995-03-01

The author presents initial comparative investigations of treatment with a laser beam, diadynamic currents, and the combined application of diadynamic currents and classic massage. The investigations were performed in three groups comprising 78 patients. The obtained results indicate high therapeutic effectiveness of laser biostimulation, better than that of the other treatments, and a larger scope of indications for treatment with that method. On the modified Laitinen scale, the obtained improvement ranged from two points (corresponding to strong pain) down to 0.3 points (almost complete regression of the pain).

  3. Multiplication factor versus regression analysis in stature estimation from hand and foot dimensions.

    PubMed

    Krishan, Kewal; Kanchan, Tanuj; Sharma, Abhilasha

    2012-05-01

Estimation of stature is an important parameter in identification of human remains in forensic examinations. The present study aimed to compare the reliability and accuracy of stature estimation, and to demonstrate the variability between estimated and actual stature, using the multiplication factor and regression analysis methods. The study is based on a sample of 246 subjects (123 males and 123 females) from North India aged between 17 and 20 years. Four anthropometric measurements taken on the left side of each subject were included in the study: hand length, hand breadth, foot length and foot breadth. Stature was measured using standard anthropometric techniques. Multiplication factors were calculated and linear regression models were derived for estimation of stature from hand and foot dimensions. The derived multiplication factors and regression formulae were applied to the hand and foot measurements in the study sample. The stature estimated from the multiplication factors and from regression analysis was compared with the actual stature to find the error in estimated stature. The results indicate that the range of error in estimation of stature from the regression analysis method is less than that of the multiplication factor method, thus confirming that regression analysis is better than multiplication factor analysis in stature estimation. Copyright © 2012 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
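
    The two estimators compared in the study can be sketched side by side. The arrays here are hypothetical calibration data (foot length chosen as the example dimension): the multiplication factor is the mean stature-to-measurement ratio, while the regression estimate comes from an ordinary least-squares fit.

```python
import numpy as np
from scipy.stats import linregress

def estimate_stature(stature, foot_length, new_foot):
    """Estimate stature for a new foot-length value by the
    multiplication factor method and by linear regression."""
    stature = np.asarray(stature, float)
    foot_length = np.asarray(foot_length, float)
    mf = np.mean(stature / foot_length)           # mean multiplication factor
    est_mf = mf * new_foot
    fit = linregress(foot_length, stature)        # least-squares line
    est_reg = fit.intercept + fit.slope * new_foot
    return est_mf, est_reg
```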

  4. Handling nonnormality and variance heterogeneity for quantitative sublethal toxicity tests.

    PubMed

    Ritz, Christian; Van der Vliet, Leana

    2009-09-01

The advantages of using regression-based techniques to derive endpoints from environmental toxicity data are clear, and slowly, this superior analytical technique is gaining acceptance. As use of regression-based analysis becomes more widespread, some of the associated nuances and potential problems come into sharper focus. Looking at data sets that cover a broad spectrum of standard test species, we noticed that some model fits to data failed to meet two key assumptions (variance homogeneity and normality) that are necessary for correct statistical analysis via regression-based techniques. Failure to meet these assumptions often is caused by reduced variance at the concentrations showing severe adverse effects. Although commonly used with linear regression analysis, transforming only the response variable is not appropriate when fitting data using nonlinear regression techniques. Through analysis of sample data sets, including Lemna minor, Eisenia andrei (terrestrial earthworm), and algae, we show that both the so-called Box-Cox transformation and use of the Poisson distribution can help to correct variance heterogeneity and nonnormality and so allow nonlinear regression analysis to be implemented. Both the Box-Cox transformation and the Poisson distribution can be readily implemented into existing protocols for statistical analysis. By correcting for nonnormality and variance heterogeneity, these two statistical tools can be used to encourage the transition to regression-based analysis and the depreciation of less-desirable and less-flexible analytical techniques, such as linear interpolation.
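
    The Box-Cox step is available off the shelf; the sketch below uses hypothetical positive response values and lets the routine choose lambda by maximum likelihood. In a transform-both-sides nonlinear fit, the same transformation would also be applied to the model predictions, which is why transforming the response alone is insufficient.

```python
import numpy as np
from scipy.stats import boxcox

# hypothetical positive responses from a sublethal toxicity test,
# with variance shrinking at the high-effect end
y = np.array([52.1, 48.7, 45.2, 30.9, 12.3, 11.8, 4.1, 3.9, 3.8])

y_transformed, lam = boxcox(y)   # lambda chosen by maximum likelihood
print(f"estimated lambda = {lam:.2f}")
```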

  5. Quantum-mechanical machinery for rational decision-making in classical guessing game

    NASA Astrophysics Data System (ADS)

    Bang, Jeongho; Ryu, Junghee; Pawłowski, Marcin; Ham, Byoung S.; Lee, Jinhyoung

    2016-02-01

In quantum game theory, one of the most intriguing and important questions is, “Is it possible to get quantum advantages without any modification of the classical game?” The answer so far has largely been negative: it has usually been thought that a change of the classical game setting is unavoidable for obtaining quantum advantages. However, we give an affirmative answer here, focusing on the decision-making process (which we call ‘reasoning’) used to generate the best strategy, which may occur internally, e.g., in the player’s brain. To show this, we consider a classical guessing game. We then define a one-player reasoning problem in the context of decision-making theory, where the machinery processes are designed to simulate classical and quantum reasoning. In this setting, we present a scenario where a rational player is able to make better use of his/her weak preferences due to quantum reasoning, without any altering or resetting of the classically defined game. We also argue in further analysis that quantum reasoning may make the player fail, and even make the situation worse, due to inappropriate preferences.

  6. Quantum-mechanical machinery for rational decision-making in classical guessing game

    PubMed Central

    Bang, Jeongho; Ryu, Junghee; Pawłowski, Marcin; Ham, Byoung S.; Lee, Jinhyoung

    2016-01-01

In quantum game theory, one of the most intriguing and important questions is, “Is it possible to get quantum advantages without any modification of the classical game?” The answer so far has largely been negative: it has usually been thought that a change of the classical game setting is unavoidable for obtaining quantum advantages. However, we give an affirmative answer here, focusing on the decision-making process (which we call ‘reasoning’) used to generate the best strategy, which may occur internally, e.g., in the player’s brain. To show this, we consider a classical guessing game. We then define a one-player reasoning problem in the context of decision-making theory, where the machinery processes are designed to simulate classical and quantum reasoning. In this setting, we present a scenario where a rational player is able to make better use of his/her weak preferences due to quantum reasoning, without any altering or resetting of the classically defined game. We also argue in further analysis that quantum reasoning may make the player fail, and even make the situation worse, due to inappropriate preferences. PMID:26875685

  7. Quantum-mechanical machinery for rational decision-making in classical guessing game.

    PubMed

    Bang, Jeongho; Ryu, Junghee; Pawłowski, Marcin; Ham, Byoung S; Lee, Jinhyoung

    2016-02-15

In quantum game theory, one of the most intriguing and important questions is, "Is it possible to get quantum advantages without any modification of the classical game?" The answer so far has largely been negative: it has usually been thought that a change of the classical game setting is unavoidable for obtaining quantum advantages. However, we give an affirmative answer here, focusing on the decision-making process (which we call 'reasoning') used to generate the best strategy, which may occur internally, e.g., in the player's brain. To show this, we consider a classical guessing game. We then define a one-player reasoning problem in the context of decision-making theory, where the machinery processes are designed to simulate classical and quantum reasoning. In this setting, we present a scenario where a rational player is able to make better use of his/her weak preferences due to quantum reasoning, without any altering or resetting of the classically defined game. We also argue in further analysis that quantum reasoning may make the player fail, and even make the situation worse, due to inappropriate preferences.

  8. Investigation of the marked and long-standing spatial inhomogeneity of the Hungarian suicide rate: a spatial regression approach.

    PubMed

    Balint, Lajos; Dome, Peter; Daroczi, Gergely; Gonda, Xenia; Rihmer, Zoltan

    2014-02-01

In the last century Hungary had astonishingly high suicide rates, characterized by marked regional within-country inequalities, a spatial pattern which has been quite stable over time. Our aim was to explain this phenomenon at the level of micro-regions (n=175) in the period between 2005 and 2011. Our dependent variable was the age- and gender-standardized mortality ratio (SMR) for suicide, while the explanatory variables were factors supposed to influence suicide risk, such as measures of religious and political integration, travel-time accessibility of psychiatric services, alcohol consumption, unemployment and disability pensionery. When applying the ordinary least squares regression model, the residuals were found to be spatially autocorrelated, which indicates violation of the assumption of independent error terms and, accordingly, the need for a spatial autoregressive (SAR) model to handle this problem. According to our calculations the SARlag model addressed the problem of spatial autocorrelation better than the SARerr model, and its substantive interpretation is more convenient. SMR was significantly associated with the "political integration" variable in a negative manner, and with the "lack of religious integration" and "disability pensionery" variables in a positive manner. Associations were not significant for the remaining explanatory variables. Several important psychiatric variables were not available at the level of micro-regions, and we conducted our analysis on aggregate data. Our results may draw attention to the relevance and abiding validity of the classic Durkheimian suicide risk factors, such as lack of social integration, for the spatial pattern of Hungarian suicides. © 2013 Published by Elsevier B.V.
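
    For reference, the two competing specifications, written in the conventional notation of spatial econometrics (a generic form, not quoted from the paper), are:

```latex
\text{SAR lag:}\qquad   y = \rho W y + X\beta + \varepsilon ,
\qquad\qquad
\text{SAR error:}\qquad y = X\beta + u, \quad u = \lambda W u + \varepsilon ,
```

    where W is the spatial weights matrix. In the lag model the spatially weighted outcomes of neighbouring micro-regions enter directly as a regressor, which matches the substantive reading preferred by the authors; in the error model only the disturbances are spatially correlated.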

  9. LOGISTIC NETWORK REGRESSION FOR SCALABLE ANALYSIS OF NETWORKS WITH JOINT EDGE/VERTEX DYNAMICS

    PubMed Central

    Almquist, Zack W.; Butts, Carter T.

    2015-01-01

    Change in group size and composition has long been an important area of research in the social sciences. Similarly, interest in interaction dynamics has a long history in sociology and social psychology. However, the effects of endogenous group change on interaction dynamics are a surprisingly understudied area. One way to explore these relationships is through social network models. Network dynamics may be viewed as a process of change in the edge structure of a network, in the vertex set on which edges are defined, or in both simultaneously. Although early studies of such processes were primarily descriptive, recent work on this topic has increasingly turned to formal statistical models. Although showing great promise, many of these modern dynamic models are computationally intensive and scale very poorly in the size of the network under study and/or the number of time points considered. Likewise, currently used models focus on edge dynamics, with little support for endogenously changing vertex sets. Here, the authors show how an existing approach based on logistic network regression can be extended to serve as a highly scalable framework for modeling large networks with dynamic vertex sets. The authors place this approach within a general dynamic exponential family (exponential-family random graph modeling) context, clarifying the assumptions underlying the framework (and providing a clear path for extensions), and they show how model assessment methods for cross-sectional networks can be extended to the dynamic case. Finally, the authors illustrate this approach on a classic data set involving interactions among windsurfers on a California beach. PMID:26120218
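
    The edge-level logistic formulation can be sketched compactly. The code below is a minimal illustration on a fixed vertex set with two hand-picked dyadic covariates (lagged edge state and lagged reciprocity); the authors' framework is considerably more general, notably in supporting endogenously changing vertex sets.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_lagged_edge_model(networks):
    """Logistic network regression sketch: model edge (i, j) at time t
    as a logistic function of the lagged edge and its lagged reciprocal,
    pooling all directed dyads and time steps into one design matrix."""
    X, y = [], []
    for prev, curr in zip(networks, networks[1:]):
        n = prev.shape[0]
        for i in range(n):
            for j in range(n):
                if i != j:
                    X.append([prev[i, j], prev[j, i]])
                    y.append(curr[i, j])
    return LogisticRegression().fit(np.array(X), np.array(y))

# networks: a list of T binary adjacency matrices on a fixed vertex set
```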

  10. LOGISTIC NETWORK REGRESSION FOR SCALABLE ANALYSIS OF NETWORKS WITH JOINT EDGE/VERTEX DYNAMICS.

    PubMed

    Almquist, Zack W; Butts, Carter T

    2014-08-01

    Change in group size and composition has long been an important area of research in the social sciences. Similarly, interest in interaction dynamics has a long history in sociology and social psychology. However, the effects of endogenous group change on interaction dynamics are a surprisingly understudied area. One way to explore these relationships is through social network models. Network dynamics may be viewed as a process of change in the edge structure of a network, in the vertex set on which edges are defined, or in both simultaneously. Although early studies of such processes were primarily descriptive, recent work on this topic has increasingly turned to formal statistical models. Although showing great promise, many of these modern dynamic models are computationally intensive and scale very poorly in the size of the network under study and/or the number of time points considered. Likewise, currently used models focus on edge dynamics, with little support for endogenously changing vertex sets. Here, the authors show how an existing approach based on logistic network regression can be extended to serve as a highly scalable framework for modeling large networks with dynamic vertex sets. The authors place this approach within a general dynamic exponential family (exponential-family random graph modeling) context, clarifying the assumptions underlying the framework (and providing a clear path for extensions), and they show how model assessment methods for cross-sectional networks can be extended to the dynamic case. Finally, the authors illustrate this approach on a classic data set involving interactions among windsurfers on a California beach.

  11. Predicting musically induced emotions from physiological inputs: linear and neural network models.

    PubMed

    Russo, Frank A; Vempala, Naresh N; Sandstrom, Gillian M

    2013-01-01

Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer the emotion induced in the listener? The current study explores this question by attempting to predict judgments of "felt" emotion from physiological responses alone, using linear and neural network models. We measured five channels of peripheral physiology from 20 participants: heart rate (HR), respiration, galvanic skin response, and activity in the corrugator supercilii and zygomaticus major facial muscles. Using the valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear relationship between the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a non-linear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. The performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The non-linear model derived from the neural network was more accurate than the linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the non-linear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.
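
    A minimal version of the modelling pipeline might look as follows. All arrays are random placeholders standing in for the extracted physiological features and mean VA ratings, and the single hidden layer is an assumption of this sketch; the abstract does not specify the authors' network architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(12, 5))          # 12 excerpts x 5 physiological features
Y = rng.uniform(-1, 1, size=(12, 2))  # mean [valence, arousal] per excerpt

train, test = slice(0, 8), slice(8, 12)   # train on 8 excerpts, test on 4
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(X[train], Y[train])
print(model.predict(X[test]))             # predicted [valence, arousal]
```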

  12. Measurement error in time-series analysis: a simulation study comparing modelled and monitored data.

    PubMed

    Butland, Barbara K; Armstrong, Ben; Atkinson, Richard W; Wilkinson, Paul; Heal, Mathew R; Doherty, Ruth M; Vieno, Massimo

    2013-11-13

Assessing health effects from background exposure to air pollution is often hampered by the sparseness of pollution monitoring networks. However, regional atmospheric chemistry-transport models (CTMs) can provide pollution data with national coverage at fine geographical and temporal resolution. We used statistical simulation to compare the impact on epidemiological time-series analysis of additive measurement error in sparse monitor data as opposed to geographically and temporally complete model data. Statistical simulations were based on a theoretical area of 4 regions, each consisting of twenty-five 5 km × 5 km grid-squares. In the context of a 3-year Poisson regression time-series analysis of the association between mortality and a single pollutant, we compared the error impact of using daily grid-specific model data as opposed to daily regional average monitor data. We investigated how this comparison was affected if we changed the number of grids per region containing a monitor. To inform simulations, estimates (e.g. of pollutant means) were obtained from observed monitor data for 2003-2006 for national network sites across the UK and corresponding model data that were generated by the EMEP-WRF CTM. Average within-site correlations between observed monitor and model data were 0.73 and 0.76 for rural and urban daily maximum 8-hour ozone respectively, and 0.67 and 0.61 for rural and urban ln(daily 1-hour maximum NO2). When regional averages were based on 5 or 10 monitors per region, health effect estimates exhibited little bias. However, with only 1 monitor per region, the regression coefficient in our time-series analysis was attenuated by an estimated 6% for urban background ozone, 13% for rural ozone, 29% for urban background ln(NO2) and 38% for rural ln(NO2). For grid-specific model data the corresponding figures were 19%, 22%, 54% and 44% respectively, i.e. similar for rural ln(NO2) but more marked for urban ln(NO2). Even if correlations between model and monitor data appear reasonably strong, additive classical measurement error in model data may lead to appreciable bias in health effect estimates. As process-based air pollution models become more widely used in epidemiological time-series analysis, assessments of error impact that include statistical simulation may be useful.
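
    The attenuation mechanism the authors quantify is easy to reproduce: additive classical error in the exposure biases the Poisson regression coefficient towards zero by roughly the reliability ratio var(x) / (var(x) + var(error)). The parameter values below are arbitrary, chosen only to make the attenuation visible.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1095                                   # three years of daily data
x_true = rng.normal(50.0, 10.0, n)         # true daily pollutant level
beta = 0.004                               # log relative risk per unit
y = rng.poisson(np.exp(np.log(20.0) + beta * x_true))   # daily death counts

for sigma in (0.0, 5.0, 10.0):             # additive classical error in x
    x_obs = x_true + rng.normal(0.0, sigma, n)
    fit = sm.GLM(y, sm.add_constant(x_obs),
                 family=sm.families.Poisson()).fit()
    print(f"error sd {sigma:4.1f}: beta_hat = {fit.params[1]:.4f}")
```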

  13. Finite-block-length analysis in classical and quantum information theory.

    PubMed

    Hayashi, Masahito

    2017-01-01

    Coding technology is used in several information processing tasks. In particular, when noise during transmission disturbs communications, coding technology is employed to protect the information. However, there are two types of coding technology: coding in classical information theory and coding in quantum information theory. Although the physical media used to transmit information ultimately obey quantum mechanics, we need to choose the type of coding depending on the kind of information device, classical or quantum, that is being used. In both branches of information theory, there are many elegant theoretical results under the ideal assumption that an infinitely large system is available. In a realistic situation, we need to account for finite size effects. The present paper reviews finite size effects in classical and quantum information theory with respect to various topics, including applied aspects.

  14. Finite-block-length analysis in classical and quantum information theory

    PubMed Central

    HAYASHI, Masahito

    2017-01-01

    Coding technology is used in several information processing tasks. In particular, when noise during transmission disturbs communications, coding technology is employed to protect the information. However, there are two types of coding technology: coding in classical information theory and coding in quantum information theory. Although the physical media used to transmit information ultimately obey quantum mechanics, we need to choose the type of coding depending on the kind of information device, classical or quantum, that is being used. In both branches of information theory, there are many elegant theoretical results under the ideal assumption that an infinitely large system is available. In a realistic situation, we need to account for finite size effects. The present paper reviews finite size effects in classical and quantum information theory with respect to various topics, including applied aspects. PMID:28302962

  15. Multiscale Modeling of Fracture in an SiO2 Nanorod

    NASA Astrophysics Data System (ADS)

    Mallik, Aditi

    2005-11-01

The fracture of a 108-particle SiO2 nanorod under uniaxial strain is described using NDDO quantum mechanics. The stress-strain curve to failure is calculated as a function of strain rate to show a domain that is independent of strain rate. A pair potential for use in classical MD is constructed such that the elastic portion of the quantum curve is reproduced. However, it is shown that the classical analysis does not describe accurately the large-strain behavior and failure. Finally, a composite rod is constructed with a small subsystem described by quantum mechanics and the remainder described by classical MD [1]. The stress-strain curves for the classical, quantum, and composite rods are compared and contrasted. [1] "Multiscale Modeling of Materials -- Concepts and Illustration", A. Mallik, K. Runge, J. Dufty, and H-P Cheng, cond-mat 0507558.

  16. Integral approximations to classical diffusion and smoothed particle hydrodynamics

    DOE PAGES

    Du, Qiang; Lehoucq, R. B.; Tartakovsky, A. M.

    2014-12-31

The contribution of the paper is the approximation of a classical diffusion operator by an integral equation with a volume constraint. A particular focus is on classical diffusion problems associated with Neumann boundary conditions. By exploiting this approximation, we can also approximate other quantities such as the flux out of a domain. Our analysis of the model equation on the continuum level is closely related to recent work on nonlocal diffusion and peridynamic mechanics. In particular, we elucidate the role of a volumetric constraint as an approximation to a classical Neumann boundary condition in the presence of a physical boundary. The volume-constrained integral equation then provides the basis for accurate and robust discretization methods. As a result, an immediate application is to the understanding and improvement of the Smoothed Particle Hydrodynamics (SPH) method.
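
    In the notation common to this nonlocal-diffusion literature (a generic form, not quoted from the paper), the integral operator approximating the classical diffusion operator is

```latex
\mathcal{L}_{\delta} u(x) \;=\; \int_{B_{\delta}(x)} \big( u(y) - u(x) \big)\, \gamma(x, y)\, \mathrm{d}y ,
```

    where gamma is a nonnegative kernel supported on a ball of horizon delta; the classical Neumann boundary condition is then replaced by a constraint imposed on a volumetric layer of finite thickness surrounding the domain.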

  17. Using Robust Standard Errors to Combine Multiple Regression Estimates with Meta-Analysis

    ERIC Educational Resources Information Center

    Williams, Ryan T.

    2012-01-01

    Combining multiple regression estimates with meta-analysis has continued to be a difficult task. A variety of methods have been proposed and used to combine multiple regression slope estimates with meta-analysis, however, most of these methods have serious methodological and practical limitations. The purpose of this study was to explore the use…

  18. A Quality Assessment Tool for Non-Specialist Users of Regression Analysis

    ERIC Educational Resources Information Center

    Argyrous, George

    2015-01-01

    This paper illustrates the use of a quality assessment tool for regression analysis. It is designed for non-specialist "consumers" of evidence, such as policy makers. The tool provides a series of questions such consumers of evidence can ask to interrogate regression analysis, and is illustrated with reference to a recent study published…

  19. A retrospective analysis to identify the factors affecting infection in patients undergoing chemotherapy.

    PubMed

    Park, Ji Hyun; Kim, Hyeon-Young; Lee, Hanna; Yun, Eun Kyoung

    2015-12-01

This study compares the performance of logistic regression and decision tree analysis for assessing the risk factors for infection in cancer patients undergoing chemotherapy. The subjects were 732 cancer patients who were receiving chemotherapy at K university hospital in Seoul, Korea. The data were collected between March 2011 and February 2013 and were processed for descriptive analysis, logistic regression and decision tree analysis using the IBM SPSS Statistics 19 and Modeler 15.1 programs. The most common risk factors for infection in cancer patients receiving chemotherapy were identified as alkylating agents, vinca alkaloids and underlying diabetes mellitus. The logistic regression model achieved a sensitivity of 66.7% and a specificity of 88.9%; the decision tree analysis achieved a sensitivity of 55.0% and a specificity of 89.0%. In overall classification accuracy, the logistic regression reached 88.0% and the decision tree analysis 87.2%. The logistic regression analysis thus showed higher sensitivity and classification accuracy. Therefore, logistic regression analysis is concluded to be the more effective and useful method for establishing an infection prediction model for patients undergoing chemotherapy. Copyright © 2015 Elsevier Ltd. All rights reserved.
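
    The comparison generalizes beyond SPSS; the sketch below reproduces the same sensitivity/specificity/accuracy reporting with scikit-learn on synthetic data (the feature set and class balance are hypothetical).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=732, n_features=10,
                           weights=[0.7], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = [("logistic regression", LogisticRegression(max_iter=1000)),
          ("decision tree", DecisionTreeClassifier(max_depth=4,
                                                   random_state=0))]
for name, clf in models:
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    sens = recall_score(y_te, pred)                # sensitivity
    spec = recall_score(y_te, pred, pos_label=0)   # specificity
    acc = accuracy_score(y_te, pred)
    print(f"{name}: sensitivity={sens:.3f} "
          f"specificity={spec:.3f} accuracy={acc:.3f}")
```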

  20. Visual grading characteristics and ordinal regression analysis during optimisation of CT head examinations.

    PubMed

    Zarb, Francis; McEntee, Mark F; Rainford, Louise

    2015-06-01

    To evaluate visual grading characteristics (VGC) and ordinal regression analysis during head CT optimisation as a potential alternative to visual grading assessment (VGA), traditionally employed to score anatomical visualisation. Patient images (n = 66) were obtained using current and optimised imaging protocols from two CT suites: a 16-slice scanner at the national Maltese centre for trauma and a 64-slice scanner in a private centre. Local resident radiologists (n = 6) performed VGA followed by VGC and ordinal regression analysis. VGC alone indicated that optimised protocols had image quality similar to that of current protocols. Ordinal logistic regression analysis provided an in-depth, criterion-by-criterion evaluation, allowing selective implementation of the protocols. The local radiology review panel supported the implementation of optimised protocols for brain CT examinations (including trauma) in one centre, achieving radiation dose reductions ranging from 24% to 36%. In the second centre a 29% reduction in radiation dose was achieved for follow-up cases. The combined use of VGC and ordinal logistic regression analysis led to clinical decisions being taken on the implementation of the optimised protocols. This improved method of image quality analysis provided the evidence to support imaging protocol optimisation, resulting in significant radiation dose savings. • There is a need for scientifically based image quality evaluation during CT optimisation. • VGC and ordinal regression analysis in combination led to better-informed clinical decisions. • VGC and ordinal regression analysis led to dose reductions without compromising diagnostic efficacy.
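
    A hedged sketch of an ordinal (proportional-odds) logistic model applied to visual-grading scores, using synthetic data and the statsmodels OrderedModel class; the protocol coding and score cut-points are illustrative, not the study's:

        import numpy as np
        import pandas as pd
        from statsmodels.miscmodels.ordinal_model import OrderedModel

        # Synthetic visual-grading scores (1-5) for two protocols; the study's
        # data are not public, so this only illustrates the model form.
        rng = np.random.default_rng(0)
        n = 200
        protocol = rng.integers(0, 2, n)           # 0 = current, 1 = optimised
        latent = 0.2 * protocol + rng.logistic(size=n)
        score = pd.cut(latent, [-np.inf, -1, 0, 1, 2, np.inf],
                       labels=[1, 2, 3, 4, 5]).astype(int)

        # Proportional-odds (ordinal logistic) model of score on protocol
        fit = OrderedModel(score, protocol[:, None], distr="logit").fit(
            method="bfgs", disp=False)
        print(fit.params)   # first entry ~ log odds ratio for the protocol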

  1. REGRESSION ANALYSIS OF SEA-SURFACE-TEMPERATURE PATTERNS FOR THE NORTH PACIFIC OCEAN.

    DTIC Science & Technology

    SEA WATER, *SURFACE TEMPERATURE, *OCEANOGRAPHIC DATA, PACIFIC OCEAN, REGRESSION ANALYSIS, STATISTICAL ANALYSIS, UNDERWATER EQUIPMENT, DETECTION, UNDERWATER COMMUNICATIONS, DISTRIBUTION, THERMAL PROPERTIES, COMPUTERS.

  2. The “shape” and “meaning” of the roof arts in Chinese classical architecture

    NASA Astrophysics Data System (ADS)

    Li, Xianda; liu, Yu

    2017-04-01

    This paper takes the “roof” in Chinese classical architecture as its research object, approaching it from the perspective of design aesthetics. Through rational and perceptual analysis of roof art, it reveals that the roof form carries a double artistic character: “beauty of shape” and “beauty of idea”. The paper offers a comprehensive analysis of three aspects: the rational method of roof construction, the emotional feeling the roof evokes, and the implied meaning of beauty in roof construction.

  3. Some comments on Hurst exponent and the long memory processes on capital markets

    NASA Astrophysics Data System (ADS)

    Sánchez Granero, M. A.; Trinidad Segovia, J. E.; García Pérez, J.

    2008-09-01

    The analysis of long memory processes in capital markets has been a recurring topic in finance, since the existence of market memory would imply rejection of the efficient market hypothesis. The study of these processes in finance is carried out through the Hurst exponent, and the most classical estimation method is R/S analysis. In this paper we discuss the efficiency of this methodology, as well as some of its more important modifications, for detecting long memory. We also propose the application of a classical geometrical method with slight modifications, and we compare both approaches.
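
    A minimal R/S sketch (the textbook version, not the authors' modified method): estimate the Hurst exponent as the slope of log(R/S) against log(window size).

        import numpy as np

        def hurst_rs(series, window_sizes):
            """Classic R/S estimate: the slope of log(R/S) on log(n) is H."""
            log_n, log_rs = [], []
            for n in window_sizes:
                rs_vals = []
                for start in range(0, len(series) - n + 1, n):
                    w = series[start:start + n]
                    z = np.cumsum(w - w.mean())   # cumulative deviations
                    r = z.max() - z.min()         # range
                    s = w.std(ddof=1)             # standard deviation
                    if s > 0:
                        rs_vals.append(r / s)
                log_n.append(np.log(n))
                log_rs.append(np.log(np.mean(rs_vals)))
            return np.polyfit(log_n, log_rs, 1)[0]

        rng = np.random.default_rng(1)
        returns = rng.standard_normal(4096)   # i.i.d. noise: expect H near 0.5
        print(hurst_rs(returns, [16, 32, 64, 128, 256, 512]))

    For i.i.d. returns the estimate should fall near H = 0.5; persistent long-memory series push it toward 1.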

  4. Effectiveness of plasma lyso-Gb3 as a biomarker for selecting high-risk patients with Fabry disease from multispecialty clinics for genetic analysis.

    PubMed

    Maruyama, Hiroki; Miyata, Kaori; Mikame, Mariko; Taguchi, Atsumi; Guili, Chu; Shimura, Masaru; Murayama, Kei; Inoue, Takeshi; Yamamoto, Saori; Sugimura, Koichiro; Tamita, Koichi; Kawasaki, Toshihiro; Kajihara, Jun; Onishi, Akifumi; Sugiyama, Hitoshi; Sakai, Teiko; Murata, Ichijiro; Oda, Takamasa; Toyoda, Shigeru; Hanawa, Kenichiro; Fujimura, Takeo; Ura, Shigehisa; Matsumura, Mimiko; Takano, Hideki; Yamashita, Satoshi; Matsukura, Gaku; Tazawa, Ryushi; Shiga, Tsuyoshi; Ebato, Mio; Satoh, Hiroshi; Ishii, Satoshi

    2018-03-15

    Purpose: Plasma globotriaosylsphingosine (lyso-Gb3) is a promising secondary screening biomarker for Fabry disease. Here, we examined its applicability as a primary screening biomarker for classic and late-onset Fabry disease in males and females. Methods: Between 1 July 2014 and 31 December 2015, we screened 2,360 patients (1,324 males) referred from 169 Japanese specialty clinics (cardiology, nephrology, neurology, and pediatrics), based on clinical symptoms suggestive of Fabry disease. We used the plasma lyso-Gb3 concentration for the primary screen, and α-galactosidase A (α-Gal A) activity together with analysis of the α-Gal A gene (GLA) for the secondary screen. Results: Of 8 males with elevated lyso-Gb3 levels (≥2.0 ng ml⁻¹) and low α-Gal A activity (≤4.0 nmol h⁻¹ ml⁻¹), 7 presented a GLA mutation (2 classic and 5 late-onset). Of 15 females with elevated lyso-Gb3, 7 displayed low α-Gal A activity (5 with GLA mutations; 4 classic and 1 late-onset) and 8 exhibited normal α-Gal A activity (1 with a classic GLA mutation and 3 with genetic variants of uncertain significance). Conclusion: Plasma lyso-Gb3 is a potential primary screening biomarker for classic and late-onset Fabry disease probands. Genet Med advance online publication, 15 March 2018; doi:10.1038/gim.2018.31.

  5. Evidence-based Frameworks for Teaching and Learning in Classical Singing Training: A Systematic Review.

    PubMed

    Crocco, Laura; Madill, Catherine J; McCabe, Patricia

    2017-01-01

    This study systematically reviews evidence-based frameworks for teaching and learning in classical singing training. A systematic literature search of 15 electronic databases was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Eligibility criteria included type of publication, participant characteristics, intervention, and report of outcomes. Quality rating scales were applied to support assessment of the included literature, and data analysis was conducted using meta-aggregation. Nine papers met the inclusion criteria; no complete evidence-based teaching and learning framework was found. Thematic content analysis showed that studies either (1) identified teaching practices in one-to-one lessons, (2) identified student learning strategies in one-to-one lessons or personal practice sessions, or (3) implemented a tool to enhance one specific area of teaching and learning in lessons. The included studies showed that research in music education is not always specific to musical genre or instrumental group, with only four of the nine studies restricted to teachers and students of classical voice. Overall methodological quality ratings were low. Research in classical singing training has not yet produced an evidence-based framework; introductory information on teaching and learning practices has been provided, and tools have been suggested for evaluating the teaching-learning process. High-quality methodological research designs are needed.

  6. Burning mechanism and regression rate of RX-35-AU and RX-35-AV as a function of HMX particle size measured by the hybrid closed bomb-strand burner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tao, W.C.; Costantino, M.S.; Ornellas, D.L.

    1990-04-01

    In this study, the average surface regression rate of two HMX-based cast explosives, RX-35-AU and RX-35-AV, is measured to pressures above 750 MPa using a hybrid closed bomb-strand burner. The hybrid design allows the simultaneous measurement of pressure and regression rate over a large range of pressures in each experiment. Nitroglycerin/triacetin (75/25) and polyethylene glycol (PEG) are used as the energetic plasticizer and polymeric binder, respectively, in both formulations. The HMX solids loading in each formulation is 50 wt %, consisting of a narrow particle size distribution of 6-8 µm for RX-35-AU and 150-177 µm for RX-35-AV. Of special interest are the regression rate and burning mechanism as a function of the initial particle size distribution and the mechanical properties of the cast explosives. In general, the regression rate for the larger particle size formulation, RX-35-AV, is two to three times faster than that for RX-35-AU. Up to 750 MPa and independent of the initial confinement pressure, RX-35-AU exhibits a planar burning mechanism with the regression rate obeying the classical aP^n formalism. For RX-35-AV, however, the burning behavior is erratic for samples ignited at 200 MPa confinement pressure; at confinement pressures above 400 MPa, the regression exhibits more of a planar burning mechanism. The unstable combustion behavior of RX-35-AV at lower confinement pressures is related to several mechanisms: (1) an abrupt increase in surface area due to particle fracture and subsequent translation and rotation, resulting in debonding and creating porosity; (2) "thixotropic" separation of the binder and nitramine, causing significantly greater fracture damage to the nitramine during the loading cycle; and (3) microscopic damage to the nitramine crystals that increases their intrinsic burning rate. 12 refs., 8 figs., 2 tabs.
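
    A minimal sketch of recovering the constants in the classical aP^n law by log-log linear regression (synthetic data, not the report's measurements):

        import numpy as np

        # Synthetic pressure/regression-rate data following r = a * P**n,
        # with a and n recovered by linear regression of log r on log P.
        rng = np.random.default_rng(2)
        P = np.linspace(100.0, 750.0, 30)                 # pressure, MPa
        a_true, n_true = 0.12, 0.85
        r = a_true * P**n_true * np.exp(0.02 * rng.standard_normal(P.size))

        n_hat, log_a_hat = np.polyfit(np.log(P), np.log(r), 1)
        print(f"n ~ {n_hat:.3f}, a ~ {np.exp(log_a_hat):.3f}")  # near 0.85, 0.12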

  7. The process and utility of classification and regression tree methodology in nursing research

    PubMed Central

    Kuhn, Lisa; Page, Karen; Ward, John; Worrall-Carter, Linda

    2014-01-01

    Aim: This paper presents a discussion of classification and regression tree analysis and its utility in nursing research. Background: Classification and regression tree analysis is an exploratory research method used to illustrate associations between variables not suited to traditional regression analysis. Complex interactions are demonstrated between covariates and variables of interest in inverted tree diagrams. Design: Discussion paper. Data sources: English language literature was sourced from eBooks, Medline Complete and CINAHL Plus databases, Google and Google Scholar, hard copy research texts and retrieved reference lists for terms including "classification and regression tree*" and derivatives and "recursive partitioning" from 1984-2013. Discussion: Classification and regression tree analysis is an important method used to identify previously unknown patterns amongst data. Whilst there are several reasons to embrace this method as a means of exploratory quantitative research, issues regarding quality of data as well as the usefulness and validity of the findings should be considered. Implications for Nursing Research: Classification and regression tree analysis is a valuable tool to guide nurses to reduce gaps in the application of evidence to practice. With the ever-expanding availability of data, it is important that nurses understand the utility and limitations of the research method. Conclusion: Classification and regression tree analysis is an easily interpreted method for modelling interactions between health-related variables that would otherwise remain obscured. Knowledge is presented graphically, providing insightful understanding of complex and hierarchical relationships in an accessible and useful way to nursing and other health professions. PMID:24237048
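
    A minimal CART sketch on a public dataset (not nursing data; the diabetes progression data ship with scikit-learn): a shallow regression tree whose printed splits mirror the inverted tree diagrams described above.

        from sklearn.datasets import load_diabetes
        from sklearn.tree import DecisionTreeRegressor, export_text

        # A shallow regression tree; the printed rules correspond to the
        # inverted tree diagrams discussed in the record above.
        X, y = load_diabetes(return_X_y=True, as_frame=True)
        tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=20,
                                     random_state=0).fit(X, y)
        print(export_text(tree, feature_names=list(X.columns)))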

  8. The process and utility of classification and regression tree methodology in nursing research.

    PubMed

    Kuhn, Lisa; Page, Karen; Ward, John; Worrall-Carter, Linda

    2014-06-01

    This paper presents a discussion of classification and regression tree analysis and its utility in nursing research. Classification and regression tree analysis is an exploratory research method used to illustrate associations between variables not suited to traditional regression analysis. Complex interactions are demonstrated between covariates and variables of interest in inverted tree diagrams. Discussion paper. English language literature was sourced from eBooks, Medline Complete and CINAHL Plus databases, Google and Google Scholar, hard copy research texts and retrieved reference lists for terms including classification and regression tree* and derivatives and recursive partitioning from 1984-2013. Classification and regression tree analysis is an important method used to identify previously unknown patterns amongst data. Whilst there are several reasons to embrace this method as a means of exploratory quantitative research, issues regarding quality of data as well as the usefulness and validity of the findings should be considered. Classification and regression tree analysis is a valuable tool to guide nurses to reduce gaps in the application of evidence to practice. With the ever-expanding availability of data, it is important that nurses understand the utility and limitations of the research method. Classification and regression tree analysis is an easily interpreted method for modelling interactions between health-related variables that would otherwise remain obscured. Knowledge is presented graphically, providing insightful understanding of complex and hierarchical relationships in an accessible and useful way to nursing and other health professions.

  9. Advantages of the net benefit regression framework for economic evaluations of interventions in the workplace: a case study of the cost-effectiveness of a collaborative mental health care program for people receiving short-term disability benefits for psychiatric disorders.

    PubMed

    Hoch, Jeffrey S; Dewa, Carolyn S

    2014-04-01

    Economic evaluations commonly accompany trials of new treatments or interventions; however, regression methods and their corresponding advantages for the analysis of cost-effectiveness data are not well known. To illustrate regression-based economic evaluation, we present a case study investigating the cost-effectiveness of a collaborative mental health care program for people receiving short-term disability benefits for psychiatric disorders. We implement net benefit regression to illustrate its strengths and limitations. Net benefit regression offers a simple option for cost-effectiveness analyses of person-level data. By placing economic evaluation in a regression framework, regression-based techniques can facilitate the analysis and provide simple solutions to commonly encountered challenges. Economic evaluations of person-level data (eg, from a clinical trial) should use net benefit regression to facilitate analysis and enhance results.
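
    A hedged sketch of the net-benefit-regression form this literature describes: compute person-level net benefit nb_i = lambda * effect_i - cost_i at a chosen willingness-to-pay lambda, then regress it on a treatment indicator; the treatment coefficient estimates the incremental net benefit. The data and lambda below are synthetic.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n, wtp = 400, 50_000.0                    # willingness-to-pay per QALY
        treat = rng.integers(0, 2, n)
        effect = 0.70 + 0.05 * treat + 0.10 * rng.standard_normal(n)  # QALYs
        cost = 12_000 + 1_500 * treat + 2_000 * rng.standard_normal(n)
        nb = wtp * effect - cost                  # person-level net benefit

        # OLS of net benefit on treatment; the slope is the incremental
        # net benefit at this willingness-to-pay (robust standard errors).
        res = sm.OLS(nb, sm.add_constant(treat)).fit(cov_type="HC1")
        print(res.params[1], res.conf_int()[1])   # estimate and 95% CI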

  10. Harmonic oscillators and resonance series generated by a periodic unstable classical orbit

    NASA Technical Reports Server (NTRS)

    Kazansky, A. K.; Ostrovsky, Valentin N.

    1995-01-01

    The presence of an unstable periodic classical orbit allows one to introduce the decay time as a purely classical quantity: the inverse of the Lyapunov exponent that characterizes the orbit's instability. The uncertainty relation then gives the corresponding resonance width, which is proportional to the Planck constant. A more elaborate analysis is based on the parabolic equation method, in which the problem is effectively reduced to a multidimensional harmonic oscillator with time-dependent frequency. The resonances form series in the complex energy plane that are equidistant in the direction perpendicular to the real axis. Applications of the general approach to various problems in atomic physics are briefly discussed.
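
    In compact form (a hedged reconstruction; the notation is ours, not the authors'): an orbit with Lyapunov exponent λ has classical decay time τ = 1/λ, and the uncertainty relation Γτ ~ ħ then gives widths proportional to ħ,

        % hedged reconstruction of the resonance series described above
        \tau = \frac{1}{\lambda}, \qquad
        \Gamma_n \sim \hbar\lambda\left(n + \tfrac{1}{2}\right), \qquad
        E_n \approx E_r - \tfrac{i}{2}\,\Gamma_n, \quad n = 0, 1, 2, \dots

    so successive resonances sit at equal spacings of order ħλ along the imaginary energy axis, consistent with the equidistant series described above.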

  11. Thermodynamic integration from classical to quantum mechanics.

    PubMed

    Habershon, Scott; Manolopoulos, David E

    2011-12-14

    We present a new method for calculating quantum mechanical corrections to classical free energies, based on thermodynamic integration from classical to quantum mechanics. In contrast to previous methods, our method is numerically stable even in the presence of strong quantum delocalization. We first illustrate the approach, and its relationship to a well-established method, with an analysis of a one-dimensional harmonic oscillator. We then show that our method can be used to calculate the quantum mechanical contributions to the free energies of ice and water for a flexible water model, a problem for which the established method is unstable.
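
    A hedged numeric check for the one-dimensional harmonic oscillator mentioned above, using the standard closed-form free energies rather than the paper's integration path:

        import numpy as np

        # Standard closed-form free energies for a 1D harmonic oscillator
        # (textbook results, not the paper's thermodynamic-integration scheme).
        beta, hbar, omega = 1.0, 1.0, 3.0
        u = beta * hbar * omega / 2.0
        F_cl = np.log(beta * hbar * omega) / beta   # classical oscillator
        F_q = np.log(2.0 * np.sinh(u)) / beta       # quantum oscillator
        # Quantum correction, directly and via the closed form (1/b) ln(sinh u / u)
        print(F_q - F_cl, np.log(np.sinh(u) / u) / beta)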

  12. Efficiency and formalism of quantum games

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C.F.; Johnson, Neil F.

    We show that quantum games are more efficient than classical games and provide a saturated upper bound for this efficiency. We also demonstrate that the set of finite classical games is a strict subset of the set of finite quantum games. Our analysis is based on a rigorous formulation of quantum games, from which quantum versions of the minimax theorem and the Nash equilibrium theorem can be deduced.

  13. A Classical Test Theory Analysis of the Light and Spectroscopy Concept Inventory National Study Data Set

    ERIC Educational Resources Information Center

    Schlingman, Wayne M.; Prather, Edward E.; Wallace, Colin S.; Brissenden, Gina; Rudolph, Alexander L.

    2012-01-01

    This paper is the first in a series of investigations into the data from the recent national study using the Light and Spectroscopy Concept Inventory (LSCI). In this paper, we use classical test theory to form a framework of results that will be used to evaluate individual item difficulties, item discriminations, and the overall reliability of the…
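
    A minimal classical-test-theory sketch on synthetic dichotomous responses (not the LSCI data set): item difficulty as proportion correct, discrimination as the item-rest correlation, and reliability as Cronbach's alpha.

        import numpy as np

        # Synthetic 0/1 responses from a simple logistic item model.
        rng = np.random.default_rng(4)
        ability = rng.standard_normal(500)
        item_loc = np.linspace(-1.5, 1.5, 26)               # 26 synthetic items
        p = 1.0 / (1.0 + np.exp(-(ability[:, None] - item_loc)))
        X = (rng.random(p.shape) < p).astype(float)

        difficulty = X.mean(axis=0)                         # proportion correct
        rest = X.sum(axis=1, keepdims=True) - X             # rest score per item
        discrimination = [np.corrcoef(X[:, j], rest[:, j])[0, 1]
                          for j in range(X.shape[1])]
        k = X.shape[1]
        alpha = k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum()
                               / X.sum(axis=1).var(ddof=1)) # Cronbach's alpha
        print(difficulty.round(2), np.round(discrimination, 2), round(alpha, 3))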

  14. Conveying the Complex: Updating U.S. Joint Systems Analysis Doctrine with Complexity Theory

    DTIC Science & Technology

    2013-12-10

  15. CADDIS Volume 4. Data Analysis: Basic Analyses

    EPA Pesticide Factsheets

    Describes the use of statistical tests to determine whether an observation lies outside the normal range of expected values, with details on classification and regression trees (CART), regression analysis, quantile regression analysis, CART in causal analysis, and simplifying or pruning the resulting trees.
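
    A minimal quantile-regression sketch of the kind this volume describes for stressor-response data (synthetic data; variable names are ours):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Synthetic stressor-response data with skewed noise, so different
        # quantiles of the response are modelled separately.
        rng = np.random.default_rng(5)
        stressor = rng.uniform(0, 10, 300)
        response = 20 - 1.2 * stressor + rng.gamma(2.0, 2.0, 300)
        df = pd.DataFrame({"stressor": stressor, "response": response})

        for q in (0.1, 0.5, 0.9):
            fit = smf.quantreg("response ~ stressor", df).fit(q=q)
            print(q, fit.params["stressor"])   # slope of the q-th quantile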

  16. Novel point estimation from a semiparametric ratio estimator (SPRE): long-term health outcomes from short-term linear data, with application to weight loss in obesity.

    PubMed

    Weissman-Miller, Deborah

    2013-11-02

    Point estimation is particularly important in predicting weight loss in individuals or small groups. In this analysis, a new health response function, based on a model of human response over time, is used to estimate long-term health outcomes from a change point in short-term linear regression. This estimation capability is addressed for small groups and single-subject designs in pilot studies for clinical trials and in medical and therapeutic clinical practice. The estimations are based on a change point given by parameters derived from short-term participant data in ordinary least squares (OLS) regression. The development of the change point in initial OLS data and the point estimations are given in a new semiparametric ratio estimator (SPRE) model. The new response function is taken as a ratio of two-parameter Weibull distributions times a prior outcome value, which steps estimated outcomes forward in time, where the shape and scale parameters are estimated at the change point. The Weibull distributions used in this ratio are derived from a Kelvin model in mechanics, taken here to represent human beings. A distinct feature of the SPRE model in this article is that initial treatment response for a small group or a single subject is reflected in long-term response to treatment. The model is applied to weight loss in obesity in a secondary analysis of data from a classic weight loss study, selected due to the dramatic increase in obesity in the United States over the past 20 years. A very small relative error between estimated and test data is shown for obesity treatment with the weight-loss medication phentermine or placebo. An application of SPRE in clinical medicine or occupational therapy is to estimate long-term weight loss for a single subject or a small group near the beginning of treatment.
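
    A loose sketch of the SPRE stepping idea as we read the abstract (not the author's code; the Weibull-ratio form and all parameter values here are illustrative):

        import numpy as np

        def weibull_pdf(t, k, lam):
            return (k / lam) * (t / lam) ** (k - 1) * np.exp(-((t / lam) ** k))

        def spre_forecast(y0, t0, horizon, k, lam):
            """Step the prior outcome forward by a ratio of Weibull terms
            whose shape k and scale lam come from the change point."""
            y, t, out = y0, t0, []
            for t_next in range(t0 + 1, t0 + horizon + 1):
                y = y * weibull_pdf(t_next, k, lam) / weibull_pdf(t, k, lam)
                out.append(y)
                t = t_next
            return np.array(out)

        # Illustrative weights (kg) after a change point at week 4
        print(spre_forecast(92.0, 4, 8, k=1.2, lam=30.0).round(2))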

  17. Completely automated open-path FT-IR spectrometry.

    PubMed

    Griffiths, Peter R; Shao, Limin; Leytem, April B

    2009-01-01

    Atmospheric analysis by open-path Fourier-transform infrared (OP/FT-IR) spectrometry has been possible for over two decades but has not been widely used because of the limitations of the software of commercial instruments. In this paper, we describe the current state of the art of the hardware and software that constitute a contemporary OP/FT-IR spectrometer. We then describe advances made in our laboratory that have enabled many of the limitations of this type of instrument to be overcome. These include not having to acquire a single-beam background spectrum that compensates for absorption features in the spectra of atmospheric water vapor and carbon dioxide. Instead, an easily measured "short path-length" background spectrum is used for calculation of each absorbance spectrum that is measured over a long path-length. To accomplish this goal, the algorithm used to calculate the concentrations of trace atmospheric molecules was changed from classical least-squares regression (CLS) to partial least-squares regression (PLS). For calibration, OP/FT-IR spectra are measured in pristine air over a wide variety of path-lengths, temperatures, and humidities, ratioed against a short-path background, and converted to absorbance; the reference spectrum of each analyte is then multiplied by randomly selected coefficients and added to these background spectra. Automatic baseline correction for small molecules with resolved rotational fine structure, such as ammonia and methane, is effected using wavelet transforms. A novel method of correcting for the effect of the nonlinear response of mercury cadmium telluride detectors is also incorporated. Finally, target factor analysis may be used to detect the onset of a given pollutant when its concentration exceeds a certain threshold. In this way, the concentrations of atmospheric species have been obtained from OP/FT-IR spectra measured at intervals of 1 min over a period of many hours with no operator intervention.
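
    A minimal PLS calibration sketch in the spirit of the procedure described above (synthetic spectra, not OP/FT-IR measurements; scikit-learn stands in for the authors' software):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        # Synthetic calibration set: random-walk backgrounds plus an analyte
        # band scaled by randomly selected concentrations, as described above.
        rng = np.random.default_rng(6)
        n_points, n_train = 400, 120
        analyte = np.exp(-0.5 * ((np.arange(n_points) - 180) / 6.0) ** 2)
        conc = rng.uniform(0, 10, n_train)
        background = 0.01 * rng.standard_normal((n_train, n_points)).cumsum(axis=1)
        spectra = background + conc[:, None] * analyte

        pls = PLSRegression(n_components=5).fit(spectra, conc)
        test = 0.01 * rng.standard_normal(n_points).cumsum() + 4.2 * analyte
        print(pls.predict(test[None, :]))   # should be close to 4.2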

  18. Age-dependent risk factors for malnutrition in traumatology and orthopedic patients.

    PubMed

    Lambert, Christine; Nüssler, Andreas; Biesalski, Hans Konrad; Freude, Thomas; Bahrs, Christian; Ochs, Gunnar; Flesch, Ingo; Stöckle, Ulrich; Ihle, Christoph

    2017-05-01

    The aim of this study was to investigate the prevalence of risk of malnutrition (RoM) in an orthopedic and traumatology patient cohort with a broad range of ages. In addition to the classical indicators for risk assessment (low body mass index, weight loss, and comorbidity), this study aimed to analyze the effects of lifestyle factors (eating pattern, smoking, physical activity) on RoM. The prospective cohort study included 1053 patients in a level 1 trauma center in Germany. RoM was assessed by Nutritional Risk Screening (NRS) 2002 and, for the elderly, additionally by the Mini Nutritional Assessment (MNA). Age-dependent risk factors identified in univariate statistical analysis were used for multivariate logistic regression models. The prevalence of patients at RoM (NRS ≥3) was 22%. In the three age categories (<50 y, 50-69 y, and ≥70 y), loss of appetite, weight loss, number of comorbidities, drugs and gastrointestinal symptoms significantly increased RoM in univariate statistical analysis. In patients aged ≥70 y, several disease- and lifestyle-related factors (not living at home, less frequent consumption of vegetables and whole meal bread, low physical activity, and smoking) were associated with RoM. A multivariate logistic regression model for the total study population identified weight loss (odds ratio [OR], 6.09; 95% confidence interval [CI], 4.14-8.83), loss of appetite (OR, 3.81; 95% CI, 2.52-5.78), age-specific low BMI (OR, 1.87; 95% CI, 1.18-2.97), number of drugs taken (OR, 1.19; 95% CI, 1.12-1.26), age (OR, 1.03; 95% CI, 1.02-1.04), and days per week with vegetable consumption (OR, 0.938; 95% CI, 0.89-0.99) as risk factors. Malnutrition in trauma and orthopedic patients is not only a problem related to age; lifestyle-related factors also contribute significantly to malnutrition in geriatric patients.
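
    A minimal sketch of fitting a multivariate logistic model and reporting exponentiated coefficients as odds ratios with confidence intervals, as in the results above (synthetic data; the variables are ours):

        import numpy as np
        import statsmodels.api as sm

        # Synthetic risk factors for illustration only (not the study data).
        rng = np.random.default_rng(7)
        n = 1000
        weight_loss = rng.integers(0, 2, n)
        n_drugs = rng.poisson(4, n)
        logit = -3.0 + 1.8 * weight_loss + 0.17 * n_drugs
        y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

        X = sm.add_constant(np.column_stack([weight_loss, n_drugs]))
        fit = sm.Logit(y, X).fit(disp=False)
        print(np.exp(fit.params[1:]))       # odds ratios (skip the intercept)
        print(np.exp(fit.conf_int()[1:]))   # 95% CIs on the odds-ratio scale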

  19. Quantum-classical correspondence for the inverted oscillator

    NASA Astrophysics Data System (ADS)

    Maamache, Mustapha; Ryeol Choi, Jeong

    2017-11-01

    While quantum-classical correspondence for a system is a very fundamental problem in modern physics, the understanding of its mechanism is often elusive, so both the methods used and the results of detailed theoretical analyses remain subjects of active debate. In this study, the differences and similarities between quantum and classical behavior for an inverted oscillator have been analyzed based on the description of a complete generalized Airy-function-type quantum wave solution. The inverted oscillator model plays an important role in several branches of cosmology and particle physics. The quantum wave packet of the system is composed of many sub-packets that are localized at different positions with regular intervals between them. Illustrations of the probability density show that, although the quantum trajectory of the wave propagation is somewhat different from the corresponding classical one, the difference becomes relatively small when the classical excitation is sufficiently high. We have confirmed that a quantum wave packet moving along a positive or negative direction accelerates over time like a classical wave. From these main interpretations and others in the text, we conclude that our theory exquisitely illustrates quantum-classical correspondence for the system, which is a crucial concept in quantum mechanics. Supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2016R1D1A1A09919503)

  20. Analysis of the spatial distribution of dengue cases in the city of Rio de Janeiro, 2011 and 2012

    PubMed Central

    Carvalho, Silvia; Magalhães, Mônica de Avelar Figueiredo Mafra; Medronho, Roberto de Andrade

    2017-01-01

    OBJECTIVE: To analyze the spatial distribution of classical dengue and severe dengue cases in the city of Rio de Janeiro. METHODS: Exploratory study considering cases of classical dengue and severe dengue with laboratory confirmation of the infection in the city of Rio de Janeiro during 2011/2012. Cases notified in the Notifiable Diseases Information System in 2011 and 2012 were georeferenced using the "street" and "number" fields, with the automatic process of the ArcGIS 10 Geocoding tool. The spatial analysis was done through the kernel density estimator. RESULTS: Kernel density pointed out hotspots for classical dengue that did not coincide geographically with those for severe dengue and were in or near favelas. The kernel ratio did not show a notable change in the spatial distribution pattern observed in the kernel density analysis. The georeferencing process lost 41% of classical dengue records and 17% of severe dengue records because of the address field in the notification form. CONCLUSIONS: The hotspots near the favelas suggest that the social vulnerability of these localities can be an influencing factor in the occurrence of the disease, since the supply of and access to essential goods and services is deficient for this population. To reduce this vulnerability, interventions must be related to macroeconomic policies. PMID:28832752
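
    A minimal kernel-density sketch of the hotspot mapping described above (synthetic case coordinates, not the Rio de Janeiro registry; scipy stands in for ArcGIS):

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(8)
        cluster = rng.normal([2.0, 3.0], 0.2, size=(150, 2))   # a "hotspot"
        scatter = rng.uniform(0, 6, size=(150, 2))             # background cases
        xy = np.vstack([cluster, scatter]).T                   # shape (2, n)

        kde = gaussian_kde(xy)                                 # kernel density
        gx, gy = np.mgrid[0:6:100j, 0:6:100j]
        dens = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
        peak = np.unravel_index(dens.argmax(), dens.shape)
        print(gx[peak], gy[peak])                              # near (2, 3)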
