Sample records for simple regression analysis

  1. Building Regression Models: The Importance of Graphics.

    ERIC Educational Resources Information Center

    Dunn, Richard

    1989-01-01

    Points out reasons for using graphical methods to teach simple and multiple regression analysis. Argues that a graphically oriented approach has considerable pedagogic advantages in the exposition of simple and multiple regression. Shows that graphical methods may play a central role in the process of building regression models. (Author/LS)

  2. Regression Analysis by Example. 5th Edition

    ERIC Educational Resources Information Center

    Chatterjee, Samprit; Hadi, Ali S.

    2012-01-01

    Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. "Regression Analysis by Example, Fifth Edition" has been expanded and thoroughly…

  3. Advanced statistics: linear regression, part I: simple linear regression.

    PubMed

    Marill, Keith A

    2004-01-01

    Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
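
    As an illustration of the least-squares mechanics reviewed above (not drawn from the article), a minimal sketch in Python; the x/y values are hypothetical:

      # Simple linear regression by ordinary least squares (illustrative data).
      import numpy as np
      from scipy import stats

      # Hypothetical predictor and outcome values.
      x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
      y = np.array([2.1, 2.9, 3.6, 4.8, 5.1, 6.3])

      # linregress returns the slope, intercept, correlation, p-value, and slope SE.
      fit = stats.linregress(x, y)
      print(f"slope = {fit.slope:.3f}, intercept = {fit.intercept:.3f}")
      print(f"r-squared = {fit.rvalue**2:.3f}, p = {fit.pvalue:.4f}")

      # Fitted regression line.
      y_hat = fit.intercept + fit.slope * x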

  4. A simple bias correction in linear regression for quantitative trait association under two-tail extreme selection.

    PubMed

    Kwan, Johnny S H; Kung, Annie W C; Sham, Pak C

    2011-09-01

    Selective genotyping can increase power in quantitative trait association. One example of selective genotyping is two-tail extreme selection, but simple linear regression analysis gives a biased genetic effect estimate. Here, we present a simple correction for the bias.

  5. Teaching the Concept of Breakdown Point in Simple Linear Regression.

    ERIC Educational Resources Information Center

    Chan, Wai-Sum

    2001-01-01

    Most introductory textbooks on simple linear regression analysis mention the fact that extreme data points have a great influence on ordinary least-squares regression estimation; however, not many textbooks provide a rigorous mathematical explanation of this phenomenon. Suggests a way to fill this gap by teaching students the concept of breakdown…

  6. Isolating the Effects of Training Using Simple Regression Analysis: An Example of the Procedure.

    ERIC Educational Resources Information Center

    Waugh, C. Keith

    This paper provides a case example of simple regression analysis, a forecasting procedure used to isolate the effects of training from an identified extraneous variable. This case example focuses on results of a three-day sales training program to improve bank loan officers' knowledge, skill-level, and attitude regarding solicitation and sale of…

  7. Advantages of the net benefit regression framework for economic evaluations of interventions in the workplace: a case study of the cost-effectiveness of a collaborative mental health care program for people receiving short-term disability benefits for psychiatric disorders.

    PubMed

    Hoch, Jeffrey S; Dewa, Carolyn S

    2014-04-01

    Economic evaluations commonly accompany trials of new treatments or interventions; however, regression methods and their corresponding advantages for the analysis of cost-effectiveness data are not well known. To illustrate regression-based economic evaluation, we present a case study investigating the cost-effectiveness of a collaborative mental health care program for people receiving short-term disability benefits for psychiatric disorders. We implement net benefit regression to illustrate its strengths and limitations. Net benefit regression offers a simple option for cost-effectiveness analyses of person-level data. By placing economic evaluation in a regression framework, regression-based techniques can facilitate the analysis and provide simple solutions to commonly encountered challenges. Economic evaluations of person-level data (eg, from a clinical trial) should use net benefit regression to facilitate analysis and enhance results.
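
    A minimal sketch of the net benefit regression idea summarized above, on simulated person-level data; the willingness-to-pay value, the cost and effect figures, and the variable names are hypothetical, not taken from the case study:

      # Net benefit regression on simulated person-level trial data (illustrative only).
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 200
      treat = rng.integers(0, 2, n)                          # 1 = intervention, 0 = usual care
      effect = 0.5 + 0.2 * treat + rng.normal(0, 0.3, n)     # e.g., QALYs
      cost = 4000 + 1500 * treat + rng.normal(0, 800, n)     # e.g., dollars

      wtp = 20000                      # hypothetical willingness-to-pay per unit of effect
      nb = wtp * effect - cost         # person-level net benefit

      # Regress net benefit on the treatment indicator; the treatment coefficient
      # estimates the incremental net benefit of the intervention.
      X = sm.add_constant(treat.astype(float))
      fit = sm.OLS(nb, X).fit()
      print(fit.params)                # [intercept, incremental net benefit]
      print(fit.conf_int())            # confidence intervals for both estimates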

  8. Cervical Vertebral Body's Volume as a New Parameter for Predicting the Skeletal Maturation Stages.

    PubMed

    Choi, Youn-Kyung; Kim, Jinmi; Yamaguchi, Tetsutaro; Maki, Koutaro; Ko, Ching-Chang; Kim, Yong-Il

    2016-01-01

    This study aimed to determine the correlation between the volumetric parameters, derived from cone beam computed tomography images of the second, third, and fourth cervical vertebrae, and the skeletal maturation stages, and to propose a new formula for predicting skeletal maturation by using regression analysis. We obtained estimates of skeletal maturation levels from hand-wrist radiographs and volume parameters derived from the second, third, and fourth cervical vertebral bodies of 102 Japanese patients (54 women and 48 men, 5-18 years of age). We performed Pearson's correlation coefficient analysis and simple regression analysis. All volume parameters derived from the second, third, and fourth cervical vertebrae exhibited statistically significant correlations (P < 0.05). The simple regression model with the greatest R-square indicated the fourth-cervical-vertebra volume as an independent variable with a variance inflation factor less than ten. The explanatory power was 81.76%. Volumetric parameters of cervical vertebrae obtained using cone beam computed tomography are useful in regression models. The derived regression model has potential for clinical application, as it enables a simple and quantitative analysis to evaluate skeletal maturation level.

  9. Cervical Vertebral Body's Volume as a New Parameter for Predicting the Skeletal Maturation Stages

    PubMed Central

    Choi, Youn-Kyung; Kim, Jinmi; Maki, Koutaro; Ko, Ching-Chang

    2016-01-01

    This study aimed to determine the correlation between the volumetric parameters, derived from cone beam computed tomography images of the second, third, and fourth cervical vertebrae, and the skeletal maturation stages, and to propose a new formula for predicting skeletal maturation by using regression analysis. We obtained estimates of skeletal maturation levels from hand-wrist radiographs and volume parameters derived from the second, third, and fourth cervical vertebral bodies of 102 Japanese patients (54 women and 48 men, 5–18 years of age). We performed Pearson's correlation coefficient analysis and simple regression analysis. All volume parameters derived from the second, third, and fourth cervical vertebrae exhibited statistically significant correlations (P < 0.05). The simple regression model with the greatest R-square indicated the fourth-cervical-vertebra volume as an independent variable with a variance inflation factor less than ten. The explanatory power was 81.76%. Volumetric parameters of cervical vertebrae obtained using cone beam computed tomography are useful in regression models. The derived regression model has potential for clinical application, as it enables a simple and quantitative analysis to evaluate skeletal maturation level. PMID:27340668

  10. A simple linear regression method for quantitative trait loci linkage analysis with censored observations.

    PubMed

    Anderson, Carl A; McRae, Allan F; Visscher, Peter M

    2006-07-01

    Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.

  11. The Variance Normalization Method of Ridge Regression Analysis.

    ERIC Educational Resources Information Center

    Bulcock, J. W.; And Others

    The testing of contemporary sociological theory often calls for the application of structural-equation models to data which are inherently collinear. It is shown that simple ridge regression, which is commonly used for controlling the instability of ordinary least squares regression estimates in ill-conditioned data sets, is not a legitimate…

  12. Correlation and simple linear regression.

    PubMed

    Zou, Kelly H; Tuncali, Kemal; Silverman, Stuart G

    2003-06-01

    In this tutorial article, the concepts of correlation and regression are reviewed and demonstrated. The authors review and compare two correlation coefficients, the Pearson correlation coefficient and the Spearman rho, for measuring linear and nonlinear relationships between two continuous variables. In the case of measuring the linear relationship between a predictor and an outcome variable, simple linear regression analysis is conducted. These statistical concepts are illustrated by using a data set from published literature to assess a computed tomography-guided interventional technique. These statistical methods are important for exploring the relationships between variables and can be applied to many radiologic studies.
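
    A short sketch of the comparison described above: Pearson and Spearman coefficients for the same pair of variables, followed by a simple linear regression; the data are simulated, not the published data set:

      # Pearson vs. Spearman correlation, then a simple linear regression (simulated data).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      x = rng.uniform(0, 10, 50)
      y = 2.0 * x + rng.normal(0, 2.0, 50)               # roughly linear relationship

      pearson_r, pearson_p = stats.pearsonr(x, y)        # linear association
      spearman_rho, spearman_p = stats.spearmanr(x, y)   # monotonic (rank-based) association
      print(f"Pearson r = {pearson_r:.3f}, Spearman rho = {spearman_rho:.3f}")

      # Simple linear regression of the outcome on the predictor.
      fit = stats.linregress(x, y)
      print(f"y = {fit.intercept:.2f} + {fit.slope:.2f} * x")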

  13. A population-based study on the association between rheumatoid arthritis and voice problems.

    PubMed

    Hah, J Hun; An, Soo-Youn; Sim, Songyong; Kim, So Young; Oh, Dong Jun; Park, Bumjung; Kim, Sung-Gyun; Choi, Hyo Geun

    2016-07-01

    The objective of this study was to investigate whether rheumatoid arthritis increases the frequency of organic laryngeal lesions and the subjective voice complaint rate in those with no organic laryngeal lesion. We performed a cross-sectional study using the data from 19,368 participants (418 rheumatoid arthritis patients and 18,950 controls) of the 2008-2011 Korea National Health and Nutrition Examination Survey. The associations between rheumatoid arthritis and organic laryngeal lesions/subjective voice complaints were analyzed using simple/multiple logistic regression analysis with complex sampling, adjusting for confounding factors, including age, sex, smoking status, stress level, and body mass index, which could provoke voice problems. Vocal nodules, vocal polyps, and vocal palsy were not associated with rheumatoid arthritis in a multiple regression analysis, and only laryngitis showed a positive association (adjusted odds ratio, 1.59; 95% confidence interval, 1.01-2.52; P = 0.047). Rheumatoid arthritis was associated with subjective voice discomfort in a simple regression analysis, but not in a multiple regression analysis. Participants with rheumatoid arthritis were older, more often female, and had higher stress levels than those without rheumatoid arthritis. These factors were associated with subjective voice complaints in both simple and multiple regression analyses. Rheumatoid arthritis was not associated with organic laryngeal diseases except laryngitis. Rheumatoid arthritis did not increase the odds ratio for subjective voice complaints. Voice problems in participants with rheumatoid arthritis originated from the characteristics of the rheumatoid arthritis group (higher mean age, female predominance, and higher stress levels) rather than from rheumatoid arthritis itself.

  14. Computational Tools for Probing Interactions in Multiple Linear Regression, Multilevel Modeling, and Latent Curve Analysis

    ERIC Educational Resources Information Center

    Preacher, Kristopher J.; Curran, Patrick J.; Bauer, Daniel J.

    2006-01-01

    Simple slopes, regions of significance, and confidence bands are commonly used to evaluate interactions in multiple linear regression (MLR) models, and the use of these techniques has recently been extended to multilevel or hierarchical linear modeling (HLM) and latent curve analysis (LCA). However, conducting these tests and plotting the…

  15. Simple estimation procedures for regression analysis of interval-censored failure time data under the proportional hazards model.

    PubMed

    Sun, Jianguo; Feng, Yanqin; Zhao, Hui

    2015-01-01

    Interval-censored failure time data occur in many fields including epidemiological and medical studies as well as financial and sociological studies, and many authors have investigated their analysis (Sun, The statistical analysis of interval-censored failure time data, 2006; Zhang, Stat Modeling 9:321-343, 2009). In particular, a number of procedures have been developed for regression analysis of interval-censored data arising from the proportional hazards model (Finkelstein, Biometrics 42:845-854, 1986; Huang, Ann Stat 24:540-568, 1996; Pan, Biometrics 56:199-203, 2000). For most of these procedures, however, one drawback is that they involve estimation of both regression parameters and baseline cumulative hazard function. In this paper, we propose two simple estimation approaches that do not need estimation of the baseline cumulative hazard function. The asymptotic properties of the resulting estimates are given, and an extensive simulation study is conducted and indicates that they work well for practical situations.

  16. Prediction of the Main Engine Power of a New Container Ship at the Preliminary Design Stage

    NASA Astrophysics Data System (ADS)

    Cepowski, Tomasz

    2017-06-01

    The paper presents mathematical relationships that allow us to estimate the main engine power of new container ships, based on data concerning vessels built in 2005-2015. The presented approximations allow us to estimate the engine power based on the length between perpendiculars and the number of containers the ship will carry. The approximations were developed using simple linear regression and multiple linear regression analysis. The presented relations have practical application for estimating the container ship engine power needed in preliminary parametric design of the ship. It follows from the above that the use of multiple linear regression to predict the main engine power of a container ship yields more accurate estimates than simple linear regression.
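
    A hedged sketch of the comparison the abstract reports, using synthetic stand-in data rather than the 2005-2015 vessel database; the variable names (length_bp, teu, power) and all coefficients are hypothetical:

      # One-predictor vs. two-predictor linear regression (synthetic stand-in data).
      import numpy as np

      rng = np.random.default_rng(2)
      n = 120
      length_bp = rng.uniform(150, 370, n)                        # length between perpendiculars [m]
      teu = 90 * length_bp - 10000 + rng.normal(0, 1500, n)       # container capacity
      power = 55 * length_bp + 0.8 * teu + rng.normal(0, 900, n)  # main engine power [kW]

      def lstsq_fit(X, y):
          X1 = np.column_stack([np.ones(len(y)), X])    # add an intercept column
          beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
          resid = y - X1 @ beta
          return beta, np.sqrt(np.mean(resid ** 2))     # coefficients and RMSE

      _, rmse_simple = lstsq_fit(length_bp.reshape(-1, 1), power)
      _, rmse_multi = lstsq_fit(np.column_stack([length_bp, teu]), power)
      print(f"RMSE, simple regression:   {rmse_simple:.1f}")
      print(f"RMSE, multiple regression: {rmse_multi:.1f}")   # typically lower, as the paper reports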

  17. Bootstrap Methods: A Very Leisurely Look.

    ERIC Educational Resources Information Center

    Hinkle, Dennis E.; Winstead, Wayland H.

    The Bootstrap method, a computer-intensive statistical method of estimation, is illustrated using a simple and efficient Statistical Analysis System (SAS) routine. The utility of the method for generating unknown parameters, including standard errors for simple statistics, regression coefficients, discriminant function coefficients, and factor…
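
    The article illustrates the method with a SAS routine; the sketch below shows the same case-resampling bootstrap for a regression slope in Python (the data are simulated, not the article's):

      # Bootstrap standard error of a simple regression slope (illustrative data).
      import numpy as np

      rng = np.random.default_rng(3)
      n = 60
      x = rng.normal(0, 1, n)
      y = 1.5 * x + rng.normal(0, 1, n)

      def slope(x, y):
          # OLS slope of a simple regression: cov(x, y) / var(x).
          return np.cov(x, y, bias=True)[0, 1] / np.var(x)

      boot_slopes = []
      for _ in range(2000):
          idx = rng.integers(0, n, n)        # resample cases with replacement
          boot_slopes.append(slope(x[idx], y[idx]))

      print(f"slope = {slope(x, y):.3f}")
      print(f"bootstrap SE = {np.std(boot_slopes, ddof=1):.3f}")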

  18. Prediction of hearing outcomes by multiple regression analysis in patients with idiopathic sudden sensorineural hearing loss.

    PubMed

    Suzuki, Hideaki; Tabata, Takahisa; Koizumi, Hiroki; Hohchi, Nobusuke; Takeuchi, Shoko; Kitamura, Takuro; Fujino, Yoshihisa; Ohbuchi, Toyoaki

    2014-12-01

    This study aimed to create a multiple regression model for predicting hearing outcomes of idiopathic sudden sensorineural hearing loss (ISSNHL). The participants were 205 consecutive patients (205 ears) with ISSNHL (hearing level ≥ 40 dB, interval between onset and treatment ≤ 30 days). They received systemic steroid administration combined with intratympanic steroid injection. Data were examined by simple and multiple regression analyses. Three hearing indices (percentage hearing improvement, hearing gain, and posttreatment hearing level [HLpost]) and 7 prognostic factors (age, days from onset to treatment, initial hearing level, initial hearing level at low frequencies, initial hearing level at high frequencies, presence of vertigo, and contralateral hearing level) were included in the multiple regression analysis as dependent and explanatory variables, respectively. In the simple regression analysis, the percentage hearing improvement, hearing gain, and HLpost showed significant correlation with 2, 5, and 6 of the 7 prognostic factors, respectively. The multiple correlation coefficients were 0.396, 0.503, and 0.714 for the percentage hearing improvement, hearing gain, and HLpost, respectively. Predicted values of HLpost calculated by the multiple regression equation were reliable with 70% probability with a 40-dB-width prediction interval. Prediction of HLpost by the multiple regression model may be useful to estimate the hearing prognosis of ISSNHL. © The Author(s) 2014.

  19. No evidence of reaction time slowing in autism spectrum disorder.

    PubMed

    Ferraro, F Richard

    2016-01-01

    A total of 32 studies comprising 238 simple reaction time and choice reaction time conditions were examined in individuals with autism spectrum disorder (n = 964) and controls (n = 1032). A Brinley plot/multiple regression analysis was performed on mean reaction times, regressing autism spectrum disorder performance onto the control performance as a way to examine any generalized simple reaction time/choice reaction time slowing exhibited by the autism spectrum disorder group. The resulting regression equation was Y (autism spectrum disorder) = 0.99 × (control) + 87.93, which accounted for 92.3% of the variance. These results suggest that there is little if any simple reaction time/choice reaction time slowing in this sample of individuals with autism spectrum disorder, in comparison with controls. While many cognitive and information processing domains are compromised in autism spectrum disorder, it appears that simple reaction time/choice reaction time remain relatively unaffected in autism spectrum disorder. © The Author(s) 2014.

  20. A novel simple QSAR model for the prediction of anti-HIV activity using multiple linear regression analysis.

    PubMed

    Afantitis, Antreas; Melagraki, Georgia; Sarimveis, Haralambos; Koutentis, Panayiotis A; Markopoulos, John; Igglessi-Markopoulou, Olga

    2006-08-01

    A quantitative structure-activity relationship was obtained by applying Multiple Linear Regression Analysis to a series of 80 1-[2-hydroxyethoxy-methyl]-6-(phenylthio) thymine (HEPT) derivatives with significant anti-HIV activity. For the selection of the best among 37 different descriptors, the Elimination Selection Stepwise Regression Method (ES-SWR) was utilized. The resulting QSAR model (R²(CV) = 0.8160; S(PRESS) = 0.5680) proved to be very accurate in both the training and prediction stages.

  1. A Practical Model for Forecasting New Freshman Enrollment during the Application Period.

    ERIC Educational Resources Information Center

    Paulsen, Michael B.

    1989-01-01

    A simple and effective model for forecasting freshman enrollment during the application period is presented step by step. The model requires minimal and readily available information, uses a simple linear regression analysis on a personal computer, and provides updated monthly forecasts. (MSE)

  2. Bias due to two-stage residual-outcome regression analysis in genetic association studies.

    PubMed

    Demissie, Serkalem; Cupples, L Adrienne

    2011-11-01

    Association studies of risk factors and complex diseases require careful assessment of potential confounding factors. Two-stage regression analysis, sometimes referred to as residual- or adjusted-outcome analysis, has been increasingly used in association studies of single nucleotide polymorphisms (SNPs) and quantitative traits. In this analysis, first, a residual-outcome is calculated from a regression of the outcome variable on covariates, and then the relationship between the adjusted-outcome and the SNP is evaluated by a simple linear regression of the adjusted-outcome on the SNP. In this article, we examine the performance of this two-stage analysis as compared with multiple linear regression (MLR) analysis. Our findings show that when a SNP and a covariate are correlated, the two-stage approach results in a biased genotypic effect estimate and a loss of power. Bias is always toward the null and increases with the squared correlation between the SNP and the covariate (r²). For example, for r² = 0, 0.1, and 0.5, two-stage analysis results in 0, 10, and 50% attenuation of the SNP effect, respectively. As expected, MLR was always unbiased. Since individual SNPs often show little or no correlation with covariates, a two-stage analysis is expected to perform as well as MLR in many genetic studies; however, it produces considerably different results from MLR and may lead to incorrect conclusions when independent variables are highly correlated. While a useful alternative to MLR when the SNP and covariates are essentially uncorrelated, the two-stage approach has serious limitations. Its use as a simple substitute for MLR should be avoided. © 2011 Wiley Periodicals, Inc.
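
    A simulation sketch of the attenuation described above (the effect sizes and the SNP-covariate correlation are illustrative, not the article's):

      # Two-stage residual-outcome analysis vs. multiple linear regression (simulated data).
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(4)
      n = 20000
      covariate = rng.normal(0, 1, n)
      snp = 0.7 * covariate + rng.normal(0, np.sqrt(1 - 0.49), n)   # r-squared with covariate ~ 0.49
      y = 0.5 * snp + 1.0 * covariate + rng.normal(0, 1, n)         # true SNP effect = 0.5

      # Stage 1: adjust the outcome for the covariate; Stage 2: regress the residuals on the SNP.
      stage1 = sm.OLS(y, sm.add_constant(covariate)).fit()
      two_stage = sm.OLS(stage1.resid, sm.add_constant(snp)).fit().params[1]

      # Multiple linear regression with the SNP and the covariate together.
      mlr = sm.OLS(y, sm.add_constant(np.column_stack([snp, covariate]))).fit().params[1]

      print(f"two-stage estimate: {two_stage:.3f}  (attenuated toward 0 by about r-squared)")
      print(f"MLR estimate:       {mlr:.3f}  (close to the true 0.5)")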

  3. Parental education predicts change in intelligence quotient after childhood epilepsy surgery.

    PubMed

    Meekes, Joost; van Schooneveld, Monique M J; Braams, Olga B; Jennekens-Schinkel, Aag; van Rijen, Peter C; Hendriks, Marc P H; Braun, Kees P J; van Nieuwenhuizen, Onno

    2015-04-01

    To determine whether change in the intelligence quotient (IQ) of children who undergo epilepsy surgery is associated with the educational level of their parents. Retrospective analysis of data obtained from a cohort of children who underwent epilepsy surgery between January 1996 and September 2010. We performed simple and multiple regression analyses to identify predictors associated with IQ change after surgery. In addition to parental education, six variables previously demonstrated to be associated with IQ change after surgery were included as predictors: age at surgery, duration of epilepsy, etiology, presurgical IQ, reduction of antiepileptic drugs, and seizure freedom. We used delta IQ (IQ 2 years after surgery minus IQ shortly before surgery) as the primary outcome variable, but also performed analyses with pre- and postsurgical IQ as outcome variables to support our findings. To validate the results we performed simple regression analysis with parental education as the predictor in specific subgroups. The sample for regression analysis included 118 children (60 male; median age at surgery 9.73 years). Parental education was significantly associated with delta IQ in simple regression analysis (p = 0.004), and also contributed significantly to postsurgical IQ in multiple regression analysis (p = 0.008). Additional analyses demonstrated that parental education made a unique contribution to prediction of delta IQ, that is, it could not be replaced by the illness-related variables. Subgroup analyses confirmed the association of parental education with IQ change after surgery for most groups. Children whose parents had higher education demonstrate on average a greater increase in IQ after surgery and a higher postsurgical (but not presurgical) IQ than children whose parents completed at most lower secondary education. Parental education, and perhaps other environmental variables, should be considered in the prognosis of cognitive function after childhood epilepsy surgery. Wiley Periodicals, Inc. © 2015 International League Against Epilepsy.

  4. Guidelines and Procedures for Computing Time-Series Suspended-Sediment Concentrations and Loads from In-Stream Turbidity-Sensor and Streamflow Data

    USGS Publications Warehouse

    Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.

    2009-01-01

    In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.

  5. No Evidence of Reaction Time Slowing in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Ferraro, F. Richard

    2016-01-01

    A total of 32 studies comprising 238 simple reaction time and choice reaction time conditions were examined in individuals with autism spectrum disorder (n = 964) and controls (n = 1032). A Brinley plot/multiple regression analysis was performed on mean reaction times, regressing autism spectrum disorder performance onto the control performance as…

  6. An overview of longitudinal data analysis methods for neurological research.

    PubMed

    Locascio, Joseph J; Atri, Alireza

    2011-01-01

    The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models.

  7. Determination of water pH using absorption-based optical sensors: evaluation of different calculation methods

    NASA Astrophysics Data System (ADS)

    Wang, Hongliang; Liu, Baohua; Ding, Zhongjun; Wang, Xiangxin

    2017-02-01

    Absorption-based optical sensors have been developed for the determination of water pH. In this paper, based on the preparation of a transparent sol-gel thin film with a phenol red (PR) indicator, several calculation methods, including simple linear regression analysis, quadratic regression analysis and dual-wavelength absorbance ratio analysis, were used to calculate water pH. Results of MSSRR show that dual-wavelength absorbance ratio analysis can improve the calculation accuracy of water pH in long-term measurement.

  8. Analytics For Distracted Driver Behavior Modeling in Dilemma Zone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Jan-Mou; Malikopoulos, Andreas; Thakur, Gautam

    2014-01-01

    In this paper, we present the results obtained and insights gained through the analysis of TRB contest data. We used exploratory analysis, regression, and clustering models for gaining insights into driver behavior in a dilemma zone while driving under distraction. While simple exploratory analysis showed the distinguishing driver behavior patterns among different population groups in the dilemma zone, regression analysis showed statistically significant relationships between groups of variables. In addition to analyzing the contest data, we have also looked into the possible impact of distracted driving on fuel economy.

  9. Estimating regression coefficients from clustered samples: Sampling errors and optimum sample allocation

    NASA Technical Reports Server (NTRS)

    Kalton, G.

    1983-01-01

    A number of surveys were conducted to study the relationship between the level of aircraft or traffic noise exposure experienced by people living in a particular area and their annoyance with it. These surveys generally employ a clustered sample design, which affects the precision of the survey estimates. Regression analysis of annoyance on noise measures and other variables is often an important component of the survey analysis. Formulae are presented for estimating the standard errors of regression coefficients and ratios of regression coefficients that are applicable with a two- or three-stage clustered sample design. Using a simple cost function, the optimum allocation of the sample across the stages of the sample design is also determined for the estimation of a regression coefficient.

  10. An Overview of Longitudinal Data Analysis Methods for Neurological Research

    PubMed Central

    Locascio, Joseph J.; Atri, Alireza

    2011-01-01

    The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models. PMID:22203825

  11. Detection of outliers in the response and explanatory variables of the simple circular regression model

    NASA Astrophysics Data System (ADS)

    Mahmood, Ehab A.; Rana, Sohel; Hussin, Abdul Ghapor; Midi, Habshah

    2016-06-01

    The circular regression model may contain one or more data points that appear peculiar or inconsistent with the main part of the model. This may occur due to recording errors, sudden short events, sampling under abnormal conditions, etc. The existence of these data points ("outliers") in the data set causes many problems in the research results and conclusions. Therefore, we should identify them before applying statistical analysis. In this article, we aim to propose a statistic to identify outliers in both the response and explanatory variables of the simple circular regression model. Our proposed statistic is the robust circular distance RCDxy, and it is justified by three robust measures: the proportion of detected outliers, and the masking and swamping rates.

  12. Effect of Ankle Range of Motion (ROM) and Lower-Extremity Muscle Strength on Static Balance Control Ability in Young Adults: A Regression Analysis

    PubMed Central

    Kim, Seong-Gil

    2018-01-01

    Background The purpose of this study was to investigate the effect of ankle ROM and lower-extremity muscle strength on static balance control ability in young adults. Material/Methods This study was conducted with 65 young adults, but 10 young adults dropped out during the measurement, so 55 young adults (male: 19, female: 36) completed the study. Postural sway (length and velocity) was measured with eyes open and closed, and ankle ROM (AROM and PROM of dorsiflexion and plantarflexion) and lower-extremity muscle strength (flexor and extensor of hip, knee, and ankle joint) were measured. Pearson correlation coefficient was used to examine the correlation between variables and static balance ability. Simple linear regression analysis and multiple linear regression analysis were used to examine the effect of variables on static balance ability. Results In correlation analysis, plantarflexion ROM (AROM and PROM) and lower-extremity muscle strength (except hip extensor) were significantly correlated with postural sway (p<0.05). In simple correlation analysis, all variables that passed the correlation analysis procedure had significant influence (p<0.05). In multiple linear regression analysis, plantar flexion PROM with eyes open significantly influenced sway length (B=0.681) and sway velocity (B=0.011). Conclusions Lower-extremity muscle strength and ankle plantarflexion ROM influenced static balance control ability, with ankle plantarflexion PROM showing the greatest influence. Therefore, both contractile structures and non-contractile structures should be of interest when considering static balance control ability improvement. PMID:29760375

  13. Effect of Ankle Range of Motion (ROM) and Lower-Extremity Muscle Strength on Static Balance Control Ability in Young Adults: A Regression Analysis.

    PubMed

    Kim, Seong-Gil; Kim, Wan-Soo

    2018-05-15

    BACKGROUND The purpose of this study was to investigate the effect of ankle ROM and lower-extremity muscle strength on static balance control ability in young adults. MATERIAL AND METHODS This study was conducted with 65 young adults, but 10 young adults dropped out during the measurement, so 55 young adults (male: 19, female: 36) completed the study. Postural sway (length and velocity) was measured with eyes open and closed, and ankle ROM (AROM and PROM of dorsiflexion and plantarflexion) and lower-extremity muscle strength (flexor and extensor of hip, knee, and ankle joint) were measured. Pearson correlation coefficient was used to examine the correlation between variables and static balance ability. Simple linear regression analysis and multiple linear regression analysis were used to examine the effect of variables on static balance ability. RESULTS In correlation analysis, plantarflexion ROM (AROM and PROM) and lower-extremity muscle strength (except hip extensor) were significantly correlated with postural sway (p<0.05). In simple correlation analysis, all variables that passed the correlation analysis procedure had significant influence (p<0.05). In multiple linear regression analysis, plantar flexion PROM with eyes open significantly influenced sway length (B=0.681) and sway velocity (B=0.011). CONCLUSIONS Lower-extremity muscle strength and ankle plantarflexion ROM influenced static balance control ability, with ankle plantarflexion PROM showing the greatest influence. Therefore, both contractile structures and non-contractile structures should be of interest when considering static balance control ability improvement.

  14. Noninvasive spectral imaging of skin chromophores based on multiple regression analysis aided by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Nishidate, Izumi; Wiswadarma, Aditya; Hase, Yota; Tanaka, Noriyuki; Maeda, Takaaki; Niizeki, Kyuichi; Aizu, Yoshihisa

    2011-08-01

    In order to visualize melanin and blood concentrations and oxygen saturation in human skin tissue, a simple imaging technique based on multispectral diffuse reflectance images acquired at six wavelengths (500, 520, 540, 560, 580, and 600 nm) was developed. The technique utilizes multiple regression analysis aided by Monte Carlo simulation for diffuse reflectance spectra. Using the absorbance spectrum as a response variable and the extinction coefficients of melanin, oxygenated hemoglobin, and deoxygenated hemoglobin as predictor variables, multiple regression analysis provides regression coefficients. Concentrations of melanin and total blood are then determined from the regression coefficients using conversion vectors that are deduced numerically in advance, while oxygen saturation is obtained directly from the regression coefficients. Experiments with a tissue-like agar gel phantom validated the method. In vivo experiments with the skin of the human hand during upper limb occlusion and of the inner forearm exposed to UV irradiation demonstrated the ability of the method to evaluate physiological reactions of human skin tissue.

  15. Learning investment indicators through data extension

    NASA Astrophysics Data System (ADS)

    Dvořák, Marek

    2017-07-01

    Stock prices in the form of time series were analysed using univariate and multivariate statistical methods. After simple data preprocessing in the form of logarithmic differences, we augmented this univariate time series to a multivariate representation. The method uses sliding windows to calculate several dozen new variables from simple statistical tools, such as first and second moments, as well as more complicated statistics, such as autoregression coefficients and residual analysis, followed by an optional quadratic transformation used for further data extension. These were used as explanatory variables in a regularized (LASSO) logistic regression that estimated a Buy-Sell Index (BSI) from real stock market data.
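
    A compressed, hypothetical sketch of this pipeline: log-differences, a few sliding-window statistics, and an L1-penalized logistic regression; the simulated prices, the chosen window statistics, and the buy/sell label are stand-ins for the paper's richer feature set:

      # Sliding-window features from log-returns, then L1-regularized logistic regression (simulated).
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(5)
      prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1000)))    # simulated price series
      log_ret = np.diff(np.log(prices))                              # logarithmic differences

      window = 20
      features, labels = [], []
      for t in range(window, len(log_ret)):
          w = log_ret[t - window:t]
          # Simple window statistics as explanatory variables (moments, extremes, last value).
          features.append([w.mean(), w.std(), w.min(), w.max(), w[-1]])
          labels.append(int(log_ret[t] > 0))      # hypothetical buy/sell indicator

      X, y = np.array(features), np.array(labels)
      clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
      print("coefficients:", clf.coef_)            # the L1 penalty shrinks some exactly to zero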

  16. A step-by-step guide to non-linear regression analysis of experimental data using a Microsoft Excel spreadsheet.

    PubMed

    Brown, A M

    2001-06-01

    The objective of this present study was to introduce a simple, easily understood method for carrying out non-linear regression analysis based on user input functions. While it is relatively straightforward to fit data with simple functions such as linear or logarithmic functions, fitting data with more complicated non-linear functions is more difficult. Commercial specialist programmes are available that will carry out this analysis, but these programmes are expensive and are not intuitive to learn. An alternative method described here is to use the SOLVER function of the ubiquitous spreadsheet programme Microsoft Excel, which employs an iterative least squares fitting routine to produce the optimal goodness of fit between data and function. The intent of this paper is to lead the reader through an easily understood step-by-step guide to implementing this method, which can be applied to any function in the form y=f(x), and is well suited to fast, reliable analysis of data in all fields of biology.
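
    The article's method is built on Excel's SOLVER; as a rough analogue only, the same iterative least-squares fit of a user-defined function y = f(x) can be sketched with scipy.optimize.curve_fit (the Michaelis-Menten function and the data below are hypothetical):

      # Iterative least-squares fit of a user-defined non-linear function (SOLVER analogue).
      import numpy as np
      from scipy.optimize import curve_fit

      def michaelis_menten(x, vmax, km):
          # Example user-defined function; any y = f(x) form can be substituted.
          return vmax * x / (km + x)

      # Hypothetical substrate concentrations and measured rates.
      x = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
      y = np.array([0.9, 1.6, 2.4, 3.1, 3.6, 3.8])

      popt, pcov = curve_fit(michaelis_menten, x, y, p0=[4.0, 2.0])
      perr = np.sqrt(np.diag(pcov))          # approximate standard errors of the parameters
      print(f"Vmax = {popt[0]:.2f} +/- {perr[0]:.2f}, Km = {popt[1]:.2f} +/- {perr[1]:.2f}")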

  17. Forecasting Air Force Logistics Command Second Destination Transportation: An Application of Multiple Regression Analysis and Neural Networks

    DTIC Science & Technology

    1990-09-01

    without the help from the DSXR staff. William Lyons, Charles Ramsey, and Martin Meeks went above and beyond to help complete this research. Special...develop a valid forecasting model that is significantly more accurate than the one presently used by DSXR and suggested the development and testing of a...method, Strom tested DSXR's iterative linear regression forecasting technique by examining P1 in the simple regression equation to determine whether

  18. The Digital Shoreline Analysis System (DSAS) Version 4.0 - An ArcGIS extension for calculating shoreline change

    USGS Publications Warehouse

    Thieler, E. Robert; Himmelstoss, Emily A.; Zichichi, Jessica L.; Ergul, Ayhan

    2009-01-01

    The Digital Shoreline Analysis System (DSAS) version 4.0 is a software extension to ESRI ArcGIS v.9.2 and above that enables a user to calculate shoreline rate-of-change statistics from multiple historic shoreline positions. A user-friendly interface of simple buttons and menus guides the user through the major steps of shoreline change analysis. Components of the extension and user guide include (1) instruction on the proper way to define a reference baseline for measurements, (2) automated and manual generation of measurement transects and metadata based on user-specified parameters, and (3) output of calculated rates of shoreline change and other statistical information. DSAS computes shoreline rates of change using four different methods: (1) endpoint rate, (2) simple linear regression, (3) weighted linear regression, and (4) least median of squares. The standard error, correlation coefficient, and confidence interval are also computed for the simple and weighted linear-regression methods. The results of all rate calculations are output to a table that can be linked to the transect file by a common attribute field. DSAS is intended to facilitate the shoreline change-calculation process and to provide rate-of-change information and the statistical data necessary to establish the reliability of the calculated results. The software is also suitable for any generic application that calculates positional change over time, such as assessing rates of change of glacier limits in sequential aerial photos, river edge boundaries, land-cover changes, and so on.

  19. Advanced Statistics for Exotic Animal Practitioners.

    PubMed

    Hodsoll, John; Hellier, Jennifer M; Ryan, Elizabeth G

    2017-09-01

    Correlation and regression assess the association between 2 or more variables. This article reviews the core knowledge needed to understand these analyses, moving from visual analysis in scatter plots through correlation, simple and multiple linear regression, and logistic regression. Correlation estimates the strength and direction of a relationship between 2 variables. Regression can be considered more general and quantifies the numerical relationships between an outcome and 1 or multiple variables in terms of a best-fit line, allowing predictions to be made. Each technique is discussed with examples and the statistical assumptions underlying their correct application. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Understanding logistic regression analysis.

    PubMed

    Sperandei, Sandro

    2014-01-01

    Logistic regression is used to obtain odds ratios in the presence of more than one explanatory variable. The procedure is quite similar to multiple linear regression, with the exception that the response variable is binomial. The result is the impact of each variable on the odds ratio of the observed event of interest. The main advantage is to avoid confounding effects by analyzing the association of all variables together. In this article, we explain the logistic regression procedure using examples to make it as simple as possible. After definition of the technique, the basic interpretation of the results is highlighted and then some special issues are discussed.
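
    A minimal sketch of the procedure described above on simulated data: fit a logistic regression with two explanatory variables and exponentiate the coefficients to read them as odds ratios (the variables and effect sizes are hypothetical):

      # Odds ratios from a logistic regression with two explanatory variables (simulated data).
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(6)
      n = 500
      exposure = rng.integers(0, 2, n)                   # binary exposure
      age = rng.normal(50, 10, n)                        # continuous covariate
      logit = -3.0 + 0.8 * exposure + 0.03 * age         # true log-odds
      y = rng.binomial(1, 1 / (1 + np.exp(-logit)))      # binomial response

      X = sm.add_constant(np.column_stack([exposure, age]).astype(float))
      fit = sm.Logit(y, X).fit(disp=0)
      print("odds ratios (exposure, age):", np.exp(fit.params[1:]))   # exponentiated coefficients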

  1. Morse Code, Scrabble, and the Alphabet

    ERIC Educational Resources Information Center

    Richardson, Mary; Gabrosek, John; Reischman, Diann; Curtiss, Phyliss

    2004-01-01

    In this paper we describe an interactive activity that illustrates simple linear regression. Students collect data and analyze it using simple linear regression techniques taught in an introductory applied statistics course. The activity is extended to illustrate checks for regression assumptions and regression diagnostics taught in an…

  2. Regression: The Apple Does Not Fall Far From the Tree.

    PubMed

    Vetter, Thomas R; Schober, Patrick

    2018-05-15

    Researchers and clinicians are frequently interested in either: (1) assessing whether there is a relationship or association between 2 or more variables and quantifying this association; or (2) determining whether 1 or more variables can predict another variable. The strength of such an association is mainly described by the correlation. However, regression analysis and regression models can be used not only to identify whether there is a significant relationship or association between variables but also to generate estimations of such a predictive relationship between variables. This basic statistical tutorial discusses the fundamental concepts and techniques related to the most common types of regression analysis and modeling, including simple linear regression, multiple regression, logistic regression, ordinal regression, and Poisson regression, as well as the common yet often underrecognized phenomenon of regression toward the mean. The various types of regression analysis are powerful statistical techniques, which when appropriately applied, can allow for the valid interpretation of complex, multifactorial data. Regression analysis and models can assess whether there is a relationship or association between 2 or more observed variables and estimate the strength of this association, as well as determine whether 1 or more variables can predict another variable. Regression is thus being applied more commonly in anesthesia, perioperative, critical care, and pain research. However, it is crucial to note that regression can identify plausible risk factors; it does not prove causation (a definitive cause and effect relationship). The results of a regression analysis instead identify independent (predictor) variable(s) associated with the dependent (outcome) variable. As with other statistical methods, applying regression requires that certain assumptions be met, which can be tested with specific diagnostics.

  3. Chicken barn climate and hazardous volatile compounds control using simple linear regression and PID

    NASA Astrophysics Data System (ADS)

    Abdullah, A. H.; Bakar, M. A. A.; Shukor, S. A. A.; Saad, F. S. A.; Kamis, M. S.; Mustafa, M. H.; Khalid, N. S.

    2016-07-01

    The hazardous volatile compounds from chicken manure in a chicken barn are a potential health threat to farm animals and workers. Ammonia (NH3) and hydrogen sulphide (H2S) produced in the barn are influenced by climate changes. An electronic nose (e-nose) is used to sample the barn's air, temperature, and humidity data. Simple linear regression is used to identify the correlations between temperature and humidity, humidity and ammonia, and ammonia and hydrogen sulphide. MATLAB Simulink software was used to analyse the sampled data with a PID controller. Results show that a PID controller tuned with the Ziegler-Nichols technique can improve climate control in the chicken barn.

  4. Survival analysis: Part I — analysis of time-to-event

    PubMed Central

    2018-01-01

    Length of time is a variable often encountered during data analysis. Survival analysis provides simple, intuitive results concerning time-to-event for events of interest, which are not confined to death. This review introduces methods of analyzing time-to-event. The Kaplan-Meier survival analysis, log-rank test, and Cox proportional hazards regression modeling method are described with examples of hypothetical data. PMID:29768911

  5. Linear regression metamodeling as a tool to summarize and present simulation model results.

    PubMed

    Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M

    2013-10-01

    Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
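
    A toy sketch of the metamodeling step: simulate a probabilistic sensitivity analysis of a made-up decision model, standardize the inputs, and regress the outcome on them, so the intercept approximates the base-case outcome and the coefficients rank parameter influence; the model, parameters, and distributions are hypothetical:

      # Linear regression metamodel of probabilistic sensitivity analysis output (toy model).
      import numpy as np

      rng = np.random.default_rng(7)
      n = 10000
      p_cure = rng.beta(20, 30, n)           # hypothetical model inputs
      cost_drug = rng.gamma(40, 50, n)
      utility = rng.beta(60, 20, n)

      # Toy decision model: net benefit of treatment per patient.
      outcome = 50000 * p_cure * utility - cost_drug

      # Standardize the inputs, then regress the simulated outcomes on them.
      X = np.column_stack([p_cure, cost_drug, utility])
      Z = (X - X.mean(axis=0)) / X.std(axis=0)
      Z1 = np.column_stack([np.ones(n), Z])
      coef, *_ = np.linalg.lstsq(Z1, outcome, rcond=None)

      print(f"metamodel intercept (approx. expected outcome): {coef[0]:.0f}")
      print("standardized coefficients (relative parameter influence):", np.round(coef[1:], 1))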

  6. Advantage of multiple spot urine collections for estimating daily sodium excretion: comparison with two 24-h urine collections as reference.

    PubMed

    Uechi, Ken; Asakura, Keiko; Ri, Yui; Masayasu, Shizuko; Sasaki, Satoshi

    2016-02-01

    Several estimation methods for 24-h sodium excretion using spot urine samples have been reported, but accurate estimation at the individual level remains difficult. We aimed to clarify the most accurate method of estimating 24-h sodium excretion with different numbers of available spot urine samples. A total of 370 participants from throughout Japan collected multiple 24-h urine and spot urine samples independently. Participants were allocated randomly into a development and a validation dataset. Two estimation methods were established in the development dataset using the two 24-h sodium excretion samples as reference: the 'simple mean method' estimated excretion by multiplying the sodium-creatinine ratio by predicted 24-h creatinine excretion, whereas the 'regression method' employed linear regression analysis. The accuracy of the two methods was examined by comparing the estimated means and concordance correlation coefficients (CCC) in the validation dataset. Mean sodium excretion by the simple mean method with three spot urine samples was closest to that by 24-h collection (difference: -1.62 mmol/day). CCC with the simple mean method increased with an increased number of spot urine samples, at 0.20, 0.31, and 0.42 using one, two, and three samples, respectively. This method with three spot urine samples yielded a higher CCC than the regression method (0.40). When only one spot urine sample was available for each study participant, CCC was higher with the regression method (0.36). The simple mean method with three spot urine samples yielded the most accurate estimates of sodium excretion. When only one spot urine sample was available, the regression method was preferable.

  7. Application of stepwise multiple regression techniques to inversion of Nimbus 'IRIS' observations.

    NASA Technical Reports Server (NTRS)

    Ohring, G.

    1972-01-01

    Exploratory studies with Nimbus-3 infrared interferometer-spectrometer (IRIS) data indicate that, in addition to temperature, such meteorological parameters as geopotential heights of pressure surfaces, tropopause pressure, and tropopause temperature can be inferred from the observed spectra with the use of simple regression equations. The technique of screening the IRIS spectral data by means of stepwise regression to obtain the best radiation predictors of meteorological parameters is validated. The simplicity of application of the technique and the simplicity of the derived linear regression equations - which contain only a few terms - suggest usefulness for this approach. Based upon the results obtained, suggestions are made for further development and exploitation of the stepwise regression analysis technique.

  8. Linking brain-wide multivoxel activation patterns to behaviour: Examples from language and math.

    PubMed

    Raizada, Rajeev D S; Tsao, Feng-Ming; Liu, Huei-Mei; Holloway, Ian D; Ansari, Daniel; Kuhl, Patricia K

    2010-05-15

    A key goal of cognitive neuroscience is to find simple and direct connections between brain and behaviour. However, fMRI analysis typically involves choices between many possible options, with each choice potentially biasing any brain-behaviour correlations that emerge. Standard methods of fMRI analysis assess each voxel individually, but then face the problem of selection bias when combining those voxels into a region-of-interest, or ROI. Multivariate pattern-based fMRI analysis methods use classifiers to analyse multiple voxels together, but can also introduce selection bias via data-reduction steps as feature selection of voxels, pre-selecting activated regions, or principal components analysis. We show here that strong brain-behaviour links can be revealed without any voxel selection or data reduction, using just plain linear regression as a classifier applied to the whole brain at once, i.e. treating each entire brain volume as a single multi-voxel pattern. The brain-behaviour correlations emerged despite the fact that the classifier was not provided with any information at all about subjects' behaviour, but instead was given only the neural data and its condition-labels. Surprisingly, more powerful classifiers such as a linear SVM and regularised logistic regression produce very similar results. We discuss some possible reasons why the very simple brain-wide linear regression model is able to find correlations with behaviour that are as strong as those obtained on the one hand from a specific ROI and on the other hand from more complex classifiers. In a manner which is unencumbered by arbitrary choices, our approach offers a method for investigating connections between brain and behaviour which is simple, rigorous and direct. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  9. Linking brain-wide multivoxel activation patterns to behaviour: Examples from language and math

    PubMed Central

    Raizada, Rajeev D.S.; Tsao, Feng-Ming; Liu, Huei-Mei; Holloway, Ian D.; Ansari, Daniel; Kuhl, Patricia K.

    2010-01-01

    A key goal of cognitive neuroscience is to find simple and direct connections between brain and behaviour. However, fMRI analysis typically involves choices between many possible options, with each choice potentially biasing any brain–behaviour correlations that emerge. Standard methods of fMRI analysis assess each voxel individually, but then face the problem of selection bias when combining those voxels into a region-of-interest, or ROI. Multivariate pattern-based fMRI analysis methods use classifiers to analyse multiple voxels together, but can also introduce selection bias via data-reduction steps as feature selection of voxels, pre-selecting activated regions, or principal components analysis. We show here that strong brain–behaviour links can be revealed without any voxel selection or data reduction, using just plain linear regression as a classifier applied to the whole brain at once, i.e. treating each entire brain volume as a single multi-voxel pattern. The brain–behaviour correlations emerged despite the fact that the classifier was not provided with any information at all about subjects' behaviour, but instead was given only the neural data and its condition-labels. Surprisingly, more powerful classifiers such as a linear SVM and regularised logistic regression produce very similar results. We discuss some possible reasons why the very simple brain-wide linear regression model is able to find correlations with behaviour that are as strong as those obtained on the one hand from a specific ROI and on the other hand from more complex classifiers. In a manner which is unencumbered by arbitrary choices, our approach offers a method for investigating connections between brain and behaviour which is simple, rigorous and direct. PMID:20132896

  10. Exact Interval Estimation, Power Calculation, and Sample Size Determination in Normal Correlation Analysis

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2006-01-01

    This paper considers the problem of analysis of correlation coefficients from a multivariate normal population. A unified theorem is derived for the regression model with normally distributed explanatory variables and the general results are employed to provide useful expressions for the distributions of simple, multiple, and partial-multiple…

  11. Advanced statistics: linear regression, part II: multiple linear regression.

    PubMed

    Marill, Keith A

    2004-01-01

    The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.

  12. Obscure phenomena in statistical analysis of quantitative structure-activity relationships. Part 1: Multicollinearity of physicochemical descriptors.

    PubMed

    Mager, P P; Rothe, H

    1990-10-01

    Multicollinearity of physicochemical descriptors leads to serious consequences in quantitative structure-activity relationship (QSAR) analysis, such as incorrect estimators and test statistics of regression coefficients of the ordinary least-squares (OLS) model applied usually to QSARs. Beside the diagnosis of the known simple collinearity, principal component regression analysis (PCRA) also allows the diagnosis of various types of multicollinearity. Only if the absolute values of PCRA estimators are order statistics that decrease monotonically, the effects of multicollinearity can be circumvented. Otherwise, obscure phenomena may be observed, such as good data recognition but low predictive model power of a QSAR model.

  13. Model Estimation Using Ridge Regression with the Variance Normalization Criterion. Interim Report No. 2. The Education and Inequality in Canada Project.

    ERIC Educational Resources Information Center

    Lee, Wan-Fung; Bulcock, Jeffrey Wilson

    The purposes of this study are: (1) to demonstrate the superiority of simple ridge regression over ordinary least squares regression through theoretical argument and empirical example; (2) to modify ridge regression through use of the variance normalization criterion; and (3) to demonstrate the superiority of simple ridge regression based on the…

  14. The measurement of linear frequency drift in oscillators

    NASA Astrophysics Data System (ADS)

    Barnes, J. A.

    1985-04-01

    A linear drift in frequency is an important element in most stochastic models of oscillator performance. Quartz crystal oscillators often have drifts in excess of a part in ten to the tenth power per day. Even commercial cesium beam devices often show drifts of a few parts in ten to the thirteenth per year. There are many ways to estimate the drift rates from data samples (e.g., regress the phase on a quadratic; regress the frequency on a linear; compute the simple mean of the first difference of frequency; use Kalman filters with a drift term as one element in the state vector; and others). Although most of these estimators are unbiased, they vary in efficiency (i.e., confidence intervals). Further, the estimation of confidence intervals using the standard analysis of variance (typically associated with the specific estimating technique) can give amazingly optimistic results. The source of these problems is not an error in, say, the regressions techniques, but rather the problems arise from correlations within the residuals. That is, the oscillator model is often not consistent with constraints on the analysis technique or, in other words, some specific analysis techniques are often inappropriate for the task at hand. The appropriateness of a specific analysis technique is critically dependent on the oscillator model and can often be checked with a simple whiteness test on the residuals.

  15. A Simple and Specific Stability- Indicating RP-HPLC Method for Routine Assay of Adefovir Dipivoxil in Bulk and Tablet Dosage Form.

    PubMed

    Darsazan, Bahar; Shafaati, Alireza; Mortazavi, Seyed Alireza; Zarghi, Afshin

    2017-01-01

    A simple and reliable stability-indicating RP-HPLC method was developed and validated for analysis of adefovir dipivoxil (ADV). The chromatographic separation was performed on a C18 column using a mixture of acetonitrile-citrate buffer (10 mM at pH 5.2) 36:64 (%v/v) as mobile phase, at a flow rate of 1.5 mL/min. Detection was carried out at 260 nm and a sharp peak was obtained for ADV at a retention time of 5.8 ± 0.01 min. No interferences were observed from its stress degradation products. The method was validated according to the international guidelines. Linear regression analysis of data for the calibration plot showed a linear relationship between peak area and concentration over the range of 0.5-16 μg/mL; the regression coefficient was 0.9999 and the linear regression equation was y = 24844x - 2941.3. The detection (LOD) and quantification (LOQ) limits were 0.12 and 0.35 μg/mL, respectively. The results proved the method was fast (analysis time less than 7 min), precise, reproducible, and accurate for analysis of ADV over a wide range of concentration. The proposed specific method was used for routine quantification of ADV in pharmaceutical bulk and a tablet dosage form.
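
    As a worked illustration of this kind of external-standard calibration, the sketch below refits a straight line to simulated peak areas generated from the reported calibration equation and concentration range, and derives detection and quantification limits via the common ICH convention (LOD = 3.3*sigma/slope, LOQ = 10*sigma/slope); that convention and the noise level are assumptions, not details taken from the paper.

        # HPLC calibration sketch with simulated data (not the paper's raw measurements).
        import numpy as np

        conc = np.array([0.5, 1, 2, 4, 8, 16])                      # micrograms/mL, reported range
        area = 24844 * conc - 2941.3                                # reported calibration line
        area += np.random.default_rng(1).normal(0, 300, conc.size)  # assumed instrument noise

        slope, intercept = np.polyfit(conc, area, 1)
        pred = slope * conc + intercept
        r2 = 1 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)

        sigma = np.sqrt(np.sum((area - pred) ** 2) / (conc.size - 2))  # residual SD of the fit
        lod, loq = 3.3 * sigma / slope, 10 * sigma / slope             # ICH-style limits (assumed)
        print(f"slope={slope:.0f}, intercept={intercept:.1f}, R^2={r2:.4f}")
        print(f"LOD ~ {lod:.2f} ug/mL, LOQ ~ {loq:.2f} ug/mL")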

  16. The comparison of robust partial least squares regression with robust principal component regression on a real

    NASA Astrophysics Data System (ADS)

    Polat, Esra; Gunay, Suleyman

    2013-10-01

    One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes overestimation of the regression parameters and inflates their variances. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed instead. The SIMPLS algorithm is the leading PLSR algorithm because of its speed and efficiency, and because its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR (RPLSR) method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables; the dependent variables are then regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to demonstrate the use of the RPCR and RSIMPLS methods on an econometric data set by comparing the two methods on an inflation model for Turkey. The considered methods are compared in terms of predictive ability and goodness of fit by using a robust Root Mean Squared Error of Cross-validation (R-RMSECV), a robust R2 value and the Robust Component Selection (RCS) statistic.

  17. Correlation and simple linear regression.

    PubMed

    Eberly, Lynn E

    2007-01-01

    This chapter highlights important steps in using correlation and simple linear regression to address scientific questions about the association of two continuous variables with each other. These steps include estimation and inference, assessing model fit, the connection between regression and ANOVA, and study design. Examples in microbiology are used throughout. This chapter provides a framework that is helpful in understanding more complex statistical techniques, such as multiple linear regression, linear mixed effects models, logistic regression, and proportional hazards regression.

  18. Surgery for left ventricular aneurysm: early and late survival after simple linear repair and endoventricular patch plasty.

    PubMed

    Lundblad, Runar; Abdelnoor, Michel; Svennevig, Jan Ludvig

    2004-09-01

    Simple linear resection and endoventricular patch plasty are alternative techniques to repair postinfarction left ventricular aneurysm. The aim of the study was to compare these 2 methods with regard to early mortality and long-term survival. We retrospectively reviewed 159 patients undergoing operations between 1989 and 2003. The epidemiologic design was of an exposed (simple linear repair, n = 74) versus nonexposed (endoventricular patch plasty, n = 85) cohort with 2 endpoints: early mortality and long-term survival. The crude effect of aneurysm repair technique versus endpoint was estimated by odds ratio, rate ratio, or relative risk and their 95% confidence intervals. Stratification analysis by using the Mantel-Haenszel method was done to quantify confounders and pinpoint effect modifiers. Adjustment for multiconfounders was performed by using logistic regression and Cox regression analysis. Survival curves were analyzed with the Breslow test and the log-rank test. Early mortality was 8.2% for all patients, 13.5% after linear repair and 3.5% after endoventricular patch plasty. When adjusted for multiconfounders, the risk of early mortality was significantly higher after simple linear repair than after endoventricular patch plasty (odds ratio, 4.4; 95% confidence interval, 1.1-17.8). Mean follow-up was 5.8 +/- 3.8 years (range, 0-14.0 years). Overall 5-year cumulative survival was 78%, 70.1% after linear repair and 91.4% after endoventricular patch plasty. The risk of total mortality was significantly higher after linear repair than after endoventricular patch plasty when controlled for multiconfounders (relative risk, 4.5; 95% confidence interval, 2.0-9.7). Linear repair dominated early in the series and patch plasty dominated later, giving a possible learning-curve bias in favor of patch plasty that could not be adjusted for in the regression analysis. Postinfarction left ventricular aneurysm can be repaired with satisfactory early and late results. Surgical risk was lower and long-term survival was higher after endoventricular patch plasty than simple linear repair. Differences in outcome should be interpreted with care because of the retrospective study design and the chronology of the 2 repair methods.

  19. Optimal Reflectance, Transmittance, and Absorptance Wavebands and Band Ratios for the Estimation of Leaf Chlorophyll Concentration

    NASA Technical Reports Server (NTRS)

    Carter, Gregory A.; Spiering, Bruce A.

    2000-01-01

    The present study utilized regression analysis to identify (1) wavebands and band ratios within the 400-850 nm range that could be used to estimate total chlorophyll concentration with minimal error, and (2) the simple regression models that were most effective in estimating chlorophyll concentration. Spectra and chlorophyll concentrations were measured for two broadleaved species, a broadleaved vine, a needle-leaved conifer, and a representative of the grass family. Overall, reflectance, transmittance, and absorptance corresponded most precisely with chlorophyll concentration at wavelengths near 700 nm, although regressions were strong as well in the 550-625 nm range.

  20. Practical Session: Simple Linear Regression

    NASA Astrophysics Data System (ADS)

    Clausel, M.; Grégoire, G.

    2014-12-01

    Two exercises are proposed to illustrate simple linear regression. The first one is based on the famous Galton data set on heredity. We use the lm R command and obtain coefficient estimates, the residual standard error, R2, residuals, etc. In the second example, devoted to data related to the vapor tension of mercury, we fit a simple linear regression, predict values, and anticipate multiple linear regression. This practical session is an excerpt from practical exercises proposed by A. Dalalyan at ENPC (see Exercises 1 and 2 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_4.pdf).
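
    For readers working outside R, a rough Python equivalent of the exercise's lm(child ~ parent) call is sketched below; the heights are simulated stand-ins for Galton's data (the original set is not reproduced here), and statsmodels is used in place of R's lm.

        # Simple linear regression: coefficients, standard errors, R^2 and residuals.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(42)
        parent = rng.normal(68.3, 1.8, 900)                   # mid-parent height, inches (simulated)
        child = 24 + 0.65 * parent + rng.normal(0, 2.2, 900)  # child height (simulated)

        fit = sm.OLS(child, sm.add_constant(parent)).fit()
        print(fit.params)      # intercept and slope estimates
        print(fit.bse)         # their standard errors
        print(fit.rsquared)    # R^2
        resid = fit.resid      # residuals, as inspected in the practical session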

  1. Neurophysiological correlates of depressive symptoms in young adults: A quantitative EEG study.

    PubMed

    Lee, Poh Foong; Kan, Donica Pei Xin; Croarkin, Paul; Phang, Cheng Kar; Doruk, Deniz

    2018-01-01

    There is an unmet need for practical and reliable biomarkers for mood disorders in young adults. Identifying the brain activity associated with the early signs of depressive disorders could have important diagnostic and therapeutic implications. In this study we sought to investigate the EEG characteristics in young adults with newly identified depressive symptoms. Based on the initial screening, a total of 100 participants (n = 50 euthymic, n = 50 depressive) underwent 32-channel EEG acquisition. Simple logistic regression and the C-statistic were used to explore whether EEG power could be used to discriminate between the groups. The strongest EEG predictors of mood were then assessed using multivariate logistic regression models. Simple logistic regression analysis with subsequent C-statistics revealed that only high-alpha and beta power originating from the left central cortex (C3) have a reliable discriminative value (ROC curve >0.7 (70%)) for differentiating the depressive group from the euthymic group. Multivariate regression analysis showed that the single most significant predictor of group (depressive vs. euthymic) is the high-alpha power over C3 (p = 0.03). The present findings suggest that EEG is a useful tool in the identification of neurophysiological correlates of depressive symptoms in young adults with no previous psychiatric history. Our results could guide future studies investigating the early neurophysiological changes and surrogate outcomes in depression. Copyright © 2017 Elsevier Ltd. All rights reserved.
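
    The single-predictor screening step described above can be reproduced in outline as follows; the EEG power values and group sizes are simulated placeholders, and scikit-learn's ROC AUC stands in for the study's C-statistic.

        # Simple logistic regression of group on one EEG power value, with its C-statistic.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(7)
        group = np.repeat([0, 1], 50)                    # 0 = euthymic, 1 = depressive
        alpha_c3 = rng.normal(10 + 2 * group, 3)         # hypothetical high-alpha power at C3

        model = LogisticRegression().fit(alpha_c3.reshape(-1, 1), group)
        prob = model.predict_proba(alpha_c3.reshape(-1, 1))[:, 1]
        print("C-statistic:", roc_auc_score(group, prob))  # >0.7 would mirror the reported cut-off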

  2. An analysis of input errors in precipitation-runoff models using regression with errors in the independent variables

    USGS Publications Warehouse

    Troutman, Brent M.

    1982-01-01

    Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas illustrates the problems of model input errors.

  3. Estimation of standard liver volume in Chinese adult living donors.

    PubMed

    Fu-Gui, L; Lu-Nan, Y; Bo, L; Yong, Z; Tian-Fu, W; Ming-Qing, X; Wen-Tao, W; Zhe-Yu, C

    2009-12-01

    To determine a formula predicting the standard liver volume based on body surface area (BSA) or body weight in Chinese adults. A total of 115 consecutive living donors underwent right hemi-hepatectomy, with right-lobe grafts not including the middle hepatic vein. No organs were used from prisoners, and no subjects were prisoners. Donor anthropometric data including age, gender, body weight, and body height were recorded prospectively. The weights and volumes of the right lobe liver grafts were measured at the back table. Liver weights and volumes were calculated from the right lobe graft weight and volume obtained at the back table, divided by the proportion of the right lobe on computed tomography. By simple linear regression analysis and stepwise multiple linear regression analysis, we correlated the calculated liver volume with body height, body weight, or body surface area. The subjects had a mean age of 35.97 +/- 9.6 years, and a female-to-male ratio of 60:55. The mean volume of the right lobe was 727.47 +/- 136.17 mL, occupying 55.59% +/- 6.70% of the whole liver by computed tomography. The volume of the right lobe was 581.73 +/- 96.137 mL, and the estimated liver volume was 1053.08 +/- 167.56 mL. Females of the same body weight showed a slightly lower liver weight. By simple linear regression analysis and stepwise multiple linear regression analysis, a formula was derived based on body weight. All formulae except the Hong Kong formula overestimated liver volume compared to this formula. The formula of standard liver volume, SLV (mL) = 11.508 x body weight (kg) + 334.024, may be applied to estimate liver volumes in Chinese adults.

  4. Quantifying prosthetic gait deviation using simple outcome measures

    PubMed Central

    Kark, Lauren; Odell, Ross; McIntosh, Andrew S; Simmons, Anne

    2016-01-01

    AIM: To develop a subset of simple outcome measures to quantify prosthetic gait deviation without needing three-dimensional gait analysis (3DGA). METHODS: Eight unilateral, transfemoral amputees and 12 unilateral, transtibial amputees were recruited. Twenty-eight able-bodied controls were recruited. All participants underwent 3DGA, the timed-up-and-go test and the six-minute walk test (6MWT). The lower-limb amputees also completed the Prosthesis Evaluation Questionnaire. Results from 3DGA were summarised using the gait deviation index (GDI), which was subsequently regressed, using stepwise regression, against the other measures. RESULTS: Step-length (SL), self-selected walking speed (SSWS) and the distance walked during the 6MWT (6MWD) were significantly correlated with GDI. The 6MWD was the strongest single predictor of the GDI, followed by SL and SSWS. The predictive ability of the regression equations was improved following inclusion of self-report data related to mobility and prosthetic utility. CONCLUSION: This study offers a practicable alternative to quantifying kinematic deviation without the need to conduct complete 3DGA. PMID:27335814

  5. Simple models for estimating local removals of timber in the northeast

    Treesearch

    David N. Larsen; David A. Gansner

    1975-01-01

    Provides a practical method of estimating subregional removals of timber and demonstrates its application to a typical problem. Stepwise multiple regression analysis is used to develop equations for estimating removals of softwood, hardwood, and all timber from selected characteristics of socioeconomic structure.

  6. Statistical Tutorial | Center for Cancer Research

    Cancer.gov

    Recent advances in cancer biology have resulted in the need for increased statistical analysis of research data. This Statistical Tutorial (ST) is designed as a follow-up to Statistical Analysis of Research Data (SARD), held in April 2018. The tutorial will apply the general principles of statistical analysis of research data, including descriptive statistics, z- and t-tests of means and mean differences, simple and multiple linear regression, ANOVA tests, and the Chi-squared distribution.

  7. A sampling study on rock properties affecting drilling rate index (DRI)

    NASA Astrophysics Data System (ADS)

    Yenice, Hayati; Özdoğan, Mehmet V.; Özfırat, M. Kemal

    2018-05-01

    The drilling rate index (DRI), developed in Norway, is a very useful index for determining the drillability of rocks and even for performance prediction of hard-rock TBMs, but it requires special laboratory test equipment. Drillability is one of the most important subjects in rock excavation, and determining a drillability index from the physical and mechanical properties of rocks is very important for practicing engineers in underground excavation, drilling operations in open-pit mining, underground mining and natural stone production. That is why many researchers have carried out studies concerned with drillability, examining the correlations between the drilling rate index (DRI) and penetration rate, the influence of geological properties on drillability prediction in tunneling, and the correlations between rock properties and drillability. In this study, the relationships between the drilling rate index (DRI) and some physico-mechanical properties (density, Shore hardness, uniaxial compressive strength (UCS, σc), and indirect tensile strength (ITS, σt)) of three different rock groups, including magmatic, sedimentary and metamorphic rocks, were evaluated using both simple and multiple regression analysis. This study reveals the effects of rock properties on DRI according to different types of rocks. In simple regression, quite high correlations were found between DRI and uniaxial compressive strength (UCS) and also between DRI and indirect tensile strength (ITS) values. Multiple regression analyses revealed even higher correlations when compared to simple regression. In particular, UCS, ITS, Shore hardness (SH) and the interactions between them were found to be very effective on DRI values.

  8. Solar energy distribution over Egypt using cloudiness from Meteosat photos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosalam Shaltout, M.A.; Hassen, A.H.

    1990-01-01

    In Egypt, there are 10 ground stations for measuring the global solar radiation, and five stations for measuring the diffuse solar radiation. Every day at noon, the Meteorological Authority in Cairo receives three photographs of cloudiness over Egypt from the Meteosat satellite, one in the visible, and two in the infra-red bands (10.5-12.5 µm) and (5.7-7.1 µm). The monthly average cloudiness for 24 sites over Egypt is measured and calculated from Meteosat observations during the period 1985-1986. Correlation analysis between the cloudiness observed by Meteosat and global solar radiation measured from the ground stations is carried out. It is found that the correlation coefficients are about 0.90 for the simple linear regression, and increase for the second and third degree regressions. Also, the correlation coefficients for the cloudiness with the diffuse solar radiation are about 0.80 for the simple linear regression, and increase for the second and third degree regression. Models and empirical relations for estimating the global and diffuse solar radiation from Meteosat cloudiness data over Egypt are deduced and tested. Seasonal maps for the global and diffuse radiation over Egypt are produced.
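
    The regression comparison reported above can be mimicked with a few lines of code; the cloudiness and radiation values below are simulated, not the Meteosat or ground-station data.

        # First-, second- and third-degree regressions of global radiation on cloudiness.
        import numpy as np

        rng = np.random.default_rng(3)
        cloud = rng.uniform(0, 1, 24)                          # monthly mean cloudiness, 24 sites
        radiation = 28 - 14 * cloud - 5 * cloud**2 + rng.normal(0, 1.2, 24)  # MJ/m^2/day, illustrative

        for degree in (1, 2, 3):
            coefs = np.polyfit(cloud, radiation, degree)
            pred = np.polyval(coefs, cloud)
            r = np.corrcoef(radiation, pred)[0, 1]             # correlation of fit with data
            print(f"degree {degree}: r = {r:.3f}")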

  9. Using Simple Linear Regression to Assess the Success of the Montreal Protocol in Reducing Atmospheric Chlorofluorocarbons

    ERIC Educational Resources Information Center

    Nelson, Dean

    2009-01-01

    Following the Guidelines for Assessment and Instruction in Statistics Education (GAISE) recommendation to use real data, an example is presented in which simple linear regression is used to evaluate the effect of the Montreal Protocol on atmospheric concentration of chlorofluorocarbons. This simple set of data, obtained from a public archive, can…

  10. An analysis of ratings: A guide to RMRATE

    Treesearch

    Thomas C. Brown; Terry C. Daniel; Herbert W. Schroeder; Glen E. Brink

    1990-01-01

    This report describes RMRATE, a computer program for analyzing rating judgments. RMRATE scales ratings using several scaling procedures, and compares the resulting scale values. The scaling procedures include the median and simple mean, standardized values, scale values based on Thurstone's Law of Categorical Judgment, and regression-based values. RMRATE also...

  11. SEMIPARAMETRIC QUANTILE REGRESSION WITH HIGH-DIMENSIONAL COVARIATES

    PubMed Central

    Zhu, Liping; Huang, Mian; Li, Runze

    2012-01-01

    This paper is concerned with quantile regression for a semiparametric regression model, in which both the conditional mean and conditional variance function of the response given the covariates admit a single-index structure. This semiparametric regression model enables us to reduce the dimension of the covariates and simultaneously retains the flexibility of nonparametric regression. Under mild conditions, we show that the simple linear quantile regression offers a consistent estimate of the index parameter vector. This is a surprising and interesting result because the single-index model is possibly misspecified under the linear quantile regression. With a root-n consistent estimate of the index vector, one may employ a local polynomial regression technique to estimate the conditional quantile function. This procedure is computationally efficient, which is very appealing in high-dimensional data analysis. We show that the resulting estimator of the quantile function performs asymptotically as efficiently as if the true value of the index vector were known. The methodologies are demonstrated through comprehensive simulation studies and an application to a real dataset. PMID:24501536

  12. Introduction, comparison, and validation of Meta‐Essentials: A free and simple tool for meta‐analysis

    PubMed Central

    van Rhee, Henk; Hak, Tony

    2017-01-01

    We present a new tool for meta‐analysis, Meta‐Essentials, which is free of charge and easy to use. In this paper, we introduce the tool and compare its features to other tools for meta‐analysis. We also provide detailed information on the validation of the tool. Although free of charge and simple, Meta‐Essentials automatically calculates effect sizes from a wide range of statistics and can be used for a wide range of meta‐analysis applications, including subgroup analysis, moderator analysis, and publication bias analyses. The confidence interval of the overall effect is automatically based on the Knapp‐Hartung adjustment of the DerSimonian‐Laird estimator. However, more advanced meta‐analysis methods such as meta‐analytical structural equation modelling and meta‐regression with multiple covariates are not available. In summary, Meta‐Essentials may prove a valuable resource for meta‐analysts, including researchers, teachers, and students. PMID:28801932

  13. Length bias correction in gene ontology enrichment analysis using logistic regression.

    PubMed

    Mi, Gu; Di, Yanming; Emerson, Sarah; Cumbie, Jason S; Chang, Jeff H

    2012-01-01

    When assessing differential gene expression from RNA sequencing data, commonly used statistical tests tend to have greater power to detect differential expression of genes encoding longer transcripts. This phenomenon, called "length bias", will influence subsequent analyses such as Gene Ontology enrichment analysis. In the presence of length bias, Gene Ontology categories that include longer genes are more likely to be identified as enriched. These categories, however, are not necessarily biologically more relevant. We show that one can effectively adjust for length bias in Gene Ontology analysis by including transcript length as a covariate in a logistic regression model. The logistic regression model makes the statistical issue underlying length bias more transparent: transcript length becomes a confounding factor when it correlates with both the Gene Ontology membership and the significance of the differential expression test. The inclusion of the transcript length as a covariate allows one to investigate the direct correlation between the Gene Ontology membership and the significance of testing differential expression, conditional on the transcript length. We present both real and simulated data examples to show that the logistic regression approach is simple, effective, and flexible.
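
    The adjustment amounts to one extra covariate in a logistic regression of the significance call on category membership; a sketch under simulated data (gene counts, effect sizes and the variable names are all placeholders) is given below using statsmodels.

        # Length-bias-adjusted Gene Ontology enrichment via logistic regression.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(11)
        n_genes = 2000
        log_len = rng.normal(7.5, 0.8, n_genes)          # log transcript length
        in_go = rng.binomial(1, 0.1, n_genes)            # membership in one GO category
        # longer genes are more likely to be called significant (the length bias)
        p_sig = 1 / (1 + np.exp(-(-3 + 1.4 * (log_len - 7.5))))
        significant = rng.binomial(1, p_sig)

        X = sm.add_constant(np.column_stack([in_go, log_len]))
        fit = sm.Logit(significant, X).fit(disp=0)
        print(fit.params)   # the in_go coefficient is the length-adjusted enrichment effect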

  14. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.

  15. On the calibration process of film dosimetry: OLS inverse regression versus WLS inverse prediction.

    PubMed

    Crop, F; Van Rompaye, B; Paelinck, L; Vakaet, L; Thierens, H; De Wagter, C

    2008-07-21

    The purpose of this study was twofold: to put forward a statistically correct model for film calibration and to optimize this process. A reliable calibration is needed in order to perform accurate reference dosimetry with radiographic (Gafchromic) film. Sometimes, an ordinary least squares simple linear (in the parameters) regression is applied to the dose-optical-density (OD) curve with the dose as a function of OD (inverse regression), or sometimes OD as a function of dose (inverse prediction). The application of a simple linear regression fit is an invalid method because the heteroscedasticity of the data is not taken into account. This could lead to erroneous results originating from the calibration process itself and thus to a lower accuracy. In this work, we compare the ordinary least squares (OLS) inverse regression method with the correct weighted least squares (WLS) inverse prediction method to create calibration curves. We found that the OLS inverse regression method could lead to a prediction bias of up to 7.3 cGy at 300 cGy and total prediction errors of 3% or more for Gafchromic EBT film. Application of the WLS inverse prediction method resulted in a maximum prediction bias of 1.4 cGy and total prediction errors below 2% in a 0-400 cGy range. We developed a Monte-Carlo-based process to optimize calibrations, depending on the needs of the experiment. This type of thorough analysis can lead to a higher accuracy for film dosimetry.
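
    The contrast between the two calibration strategies can be sketched as follows; the film response curve, the variance model and all numbers are assumptions made for illustration, not the study's measurements.

        # OLS inverse regression (dose on OD) vs WLS inverse prediction (OD on dose, then inverted).
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(5)
        dose = np.repeat(np.arange(0, 401, 50), 5).astype(float)          # cGy
        sd = 0.005 + 0.00004 * dose                                       # heteroscedastic OD noise
        od = 0.2 + 0.002 * dose + rng.normal(0, sd)

        ols = sm.OLS(dose, sm.add_constant(od)).fit()                     # (a) dose as a function of OD
        wls = sm.WLS(od, sm.add_constant(dose), weights=1 / sd**2).fit()  # (b) OD as a function of dose

        new_od = 0.8
        dose_a = ols.params[0] + ols.params[1] * new_od                   # direct OLS prediction
        dose_b = (new_od - wls.params[0]) / wls.params[1]                 # inverted WLS calibration
        print(f"OLS inverse regression: {dose_a:.1f} cGy, WLS inverse prediction: {dose_b:.1f} cGy")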

  16. Is math anxiety in the secondary classroom limiting physics mastery? A study of math anxiety and physics performance

    NASA Astrophysics Data System (ADS)

    Mercer, Gary J.

    This quantitative study examined the relationship between secondary students with math anxiety and physics performance in an inquiry-based constructivist classroom. The Revised Math Anxiety Rating Scale was used to evaluate math anxiety levels. The results were then compared to the performance on a physics standardized final examination. A simple correlation was performed, followed by a multivariate regression analysis to examine effects based on gender and prior math background. The correlation showed statistical significance between math anxiety and physics performance. The regression analysis showed statistical significance for math anxiety, physics performance, and prior math background, but did not show statistical significance for math anxiety, physics performance, and gender.

  17. Regression-based model of skin diffuse reflectance for skin color analysis

    NASA Astrophysics Data System (ADS)

    Tsumura, Norimichi; Kawazoe, Daisuke; Nakaguchi, Toshiya; Ojima, Nobutoshi; Miyake, Yoichi

    2008-11-01

    A simple regression-based model of skin diffuse reflectance is developed based on reflectance samples calculated by Monte Carlo simulation of light transport in a two-layered skin model. The reflectance model covers spectral reflectance values in the visible range for Japanese women. The modified Lambert-Beer law holds in the proposed model with a modified mean free path length in non-linear density space. The averaged RMS and maximum errors of the proposed model were 1.1 and 3.1%, respectively, in the above range.

  18. [The trial of business data analysis at the Department of Radiology by constructing the auto-regressive integrated moving-average (ARIMA) model].

    PubMed

    Tani, Yuji; Ogasawara, Katsuhiko

    2012-01-01

    This study aimed to contribute to the management of a healthcare organization by providing management information through time-series analysis of business data accumulated in the hospital information system, which had not been utilized thus far. In this study, we examined the performance of a prediction method using the auto-regressive integrated moving-average (ARIMA) model, based on business data obtained at the Radiology Department. We built the model using the number of radiological examinations over the previous 9 years, predicted the number of radiological examinations in the final year, and then compared the actual values with the forecast values. The prediction method proved simple and cost-effective, as it used free software. In addition, we were able to build a simple model by removing trend components from the data during pre-processing. The difference between predicted and actual values was 10%; however, it was more important to understand the chronological change than the individual time-series values. Furthermore, our method was highly versatile and adaptable to general time-series data. Therefore, different healthcare organizations can use our method for the analysis and forecasting of their business data.
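
    The workflow described above (fit on the earlier years, forecast the final year, compare) can be sketched in a few lines; the monthly examination counts below are simulated, and the ARIMA order is an arbitrary illustrative choice.

        # Hold-out evaluation of an ARIMA forecast of monthly examination counts.
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(9)
        months = pd.date_range("2003-01", periods=120, freq="MS")         # 10 years, monthly
        trend = np.linspace(800, 1100, 120)
        season = 60 * np.sin(2 * np.pi * np.arange(120) / 12)
        counts = pd.Series(trend + season + rng.normal(0, 40, 120), index=months)

        train, test = counts.iloc[:-12], counts.iloc[-12:]
        fit = ARIMA(train, order=(1, 1, 1), seasonal_order=(1, 0, 0, 12)).fit()
        forecast = fit.forecast(steps=12)

        mape = np.mean(np.abs((test.values - forecast.values) / test.values)) * 100
        print(f"mean absolute percentage error over the held-out year: {mape:.1f}%")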

  19. The Prediction Properties of Inverse and Reverse Regression for the Simple Linear Calibration Problem

    NASA Technical Reports Server (NTRS)

    Parker, Peter A.; Geoffrey, Vining G.; Wilson, Sara R.; Szarka, John L., III; Johnson, Nels G.

    2010-01-01

    The calibration of measurement systems is a fundamental but under-studied problem within industrial statistics. The origins of this problem go back to basic chemical analysis based on NIST standards. In today's world these issues extend to mechanical, electrical, and materials engineering. Often, these new scenarios do not provide "gold standards" such as the standard weights provided by NIST. This paper considers the classic "forward regression followed by inverse regression" approach. In this approach the initial experiment treats the "standards" as the regressor and the observed values as the response to calibrate the instrument. The analyst then must invert the resulting regression model in order to use the instrument to make actual measurements in practice. This paper compares this classical approach to "reverse regression," which treats the standards as the response and the observed measurements as the regressor in the calibration experiment. Such an approach is intuitively appealing because it avoids the need for the inverse regression. However, it also violates some of the basic regression assumptions.
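
    For a single unknown x_0 measured as y_0, the two strategies can be written out explicitly (a generic textbook formulation, not notation taken from the paper):

        classical (forward fit, then inversion):
            y = \beta_0 + \beta_1 x + \varepsilon, \qquad \hat{x}_0 = \frac{y_0 - \hat{\beta}_0}{\hat{\beta}_1}

        reverse regression (direct fit):
            x = \gamma_0 + \gamma_1 y + \delta, \qquad \hat{x}_0 = \hat{\gamma}_0 + \hat{\gamma}_1 y_0

    In the reverse fit the random error term is attached to the standards x, which in a calibration experiment are essentially fixed and known, whereas the real measurement error lies in y; this is presumably the violation of basic regression assumptions that the abstract refers to.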

  20. Mediation Effects of Internet Addiction on Shame and Social Networking

    ERIC Educational Resources Information Center

    Dogan, Ugur; Kaya, Sinem

    2016-01-01

    A survey of 488 college students was conducted in Turkey to investigate the relationship between social network usage, shame and Internet addiction. It was hypothesized that a relationship between shame and social network usage was mediated by Internet addiction. First of all, according to simple regression analysis, it was found that shame…

  1. Genetic diversity studies and identification of SSR markers associated with Fusarium wilt (Fusarium udum) resistance in cultivated pigeonpea (Cajanus cajan).

    PubMed

    Singh, A K; Rai, V P; Chand, R; Singh, R P; Singh, M N

    2013-01-01

    Genetic diversity analysis and identification of simple sequence repeat (SSR) markers correlated with Fusarium wilt resistance were performed in a set of 36 elite cultivated pigeonpea genotypes differing in levels of resistance to Fusarium wilt. Twenty-four polymorphic SSR markers were screened across these genotypes and amplified a total of 59 alleles, with a high average polymorphic information content value of 0.52. Cluster analysis, done by UPGMA and PCA, grouped the 36 pigeonpea genotypes into two main clusters according to their Fusarium wilt reaction. Based on the Kruskal-Wallis ANOVA and simple regression analysis, six SSR markers were found to be significantly associated with Fusarium wilt resistance. The phenotypic variation explained by these markers ranged from 23.7 to 56.4%. The present study helps establish the feasibility of using prescreened SSR markers in genetic diversity analysis and their potential association with disease resistance.

  2. Predicting acute pain after cesarean delivery using three simple questions.

    PubMed

    Pan, Peter H; Tonidandel, Ashley M; Aschenbrenner, Carol A; Houle, Timothy T; Harris, Lynne C; Eisenach, James C

    2013-05-01

    Interindividual variability in postoperative pain presents a clinical challenge. Preoperative quantitative sensory testing is useful but time consuming in predicting postoperative pain intensity. The current study was conducted to develop and validate a predictive model of acute postcesarean pain using a simple three-item preoperative questionnaire. A total of 200 women scheduled for elective cesarean delivery under subarachnoid anesthesia were enrolled (192 subjects analyzed). Patients were asked to rate the intensity of loudness of audio tones, their level of anxiety and anticipated pain, and analgesic need from surgery. Postoperatively, patients reported the intensity of evoked pain. Regression analysis was performed to generate a predictive model for pain from these measures. A validation cohort of 151 women was enrolled to test the reliability of the model (131 subjects analyzed). Responses from each of the three preoperative questions correlated moderately with 24-h evoked pain intensity (r = 0.24-0.33, P < 0.001). Audio tone rating added uniquely, but minimally, to the model and was not included in the predictive model. The multiple regression analysis yielded a statistically significant model (R = 0.20, P < 0.001), whereas the validation cohort reliably showed a very similar regression line (R = 0.18). In predicting the upper 20th percentile of evoked pain scores, the optimal cut point was 46.9 (z = 0.24) such that sensitivity of 0.68 and specificity of 0.67 were as balanced as possible. This simple three-item questionnaire is useful to help predict postcesarean evoked pain intensity, and could be applied to further research and clinical application to tailor analgesic therapy to those who need it most.

  3. New diagnostic index for sarcopenia in patients with cardiovascular diseases

    PubMed Central

    Kai, Hisashi; Shibata, Rei; Niiyama, Hiroshi; Nishiyama, Yasuhiro; Murohara, Toyoaki; Yoshida, Noriko; Katoh, Atsushi; Ikeda, Hisao

    2017-01-01

    Background: Sarcopenia is an aging and disease-related syndrome characterized by progressive and generalized loss of skeletal muscle mass and strength, with the risk of frailty and poor quality of life. Sarcopenia is diagnosed by a decrease in skeletal muscle index (SMI) and reduction of either handgrip strength or gait speed. However, measurement of SMI is difficult for general physicians because it requires special equipment for bioelectrical impedance assay or dual-energy X-ray absorptiometry. The purpose of this study was, therefore, to explore a novel, simple diagnostic method of sarcopenia evaluation in patients with cardiovascular diseases (CVD). Methods: We retrospectively investigated 132 inpatients with CVD (age: 72±12 years, age range: 27–93 years, males: 61%). Binomial logistic regression and correlation analyses were used to assess the associations of sarcopenia with simple physical data and biomarkers, including muscle-related inflammation markers and nutritional markers. Results: Sarcopenia was present in 29.5% of the study population. Serum concentrations of adiponectin and sialic acid were significantly higher in sarcopenic than non-sarcopenic CVD patients. Stepwise multivariate binomial logistic regression analysis revealed that adiponectin, sialic acid, sex, age, and body mass index were independent factors for sarcopenia detection. The sarcopenia index, obtained from the diagnostic regression formula for sarcopenia detection including the five independent factors, indicated a high accuracy in ROC curve analysis (sensitivity 94.9%, specificity 69.9%), and the cutoff value for sarcopenia detection was -1.6134. The sarcopenia index had a significant correlation with the conventional diagnostic parameters of sarcopenia. Conclusions: Our new sarcopenia index using simple parameters would be useful for diagnosing sarcopenia in CVD patients. PMID:28542531

  4. Meteorological adjustment of yearly mean values for air pollutant concentration comparison

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.; Neustadter, H. E.

    1976-01-01

    Using multiple linear regression analysis, models which estimate mean concentrations of Total Suspended Particulate (TSP), sulfur dioxide, and nitrogen dioxide as a function of several meteorologic variables, two rough economic indicators, and a simple trend in time are studied. Meteorologic data were obtained and do not include inversion heights. The goodness of fit of the estimated models is partially reflected by the squared coefficient of multiple correlation which indicates that, at the various sampling stations, the models accounted for about 23 to 47 percent of the total variance of the observed TSP concentrations. If the resulting model equations are used in place of simple overall means of the observed concentrations, there is about a 20 percent improvement in either: (1) predicting mean concentrations for specified meteorological conditions; or (2) adjusting successive yearly averages to allow for comparisons devoid of meteorological effects. An application to source identification is presented using regression coefficients of wind velocity predictor variables.

  5. Simple method for quick estimation of aquifer hydrogeological parameters

    NASA Astrophysics Data System (ADS)

    Ma, C.; Li, Y. Y.

    2017-08-01

    Development of simple and accurate methods to determine aquifer hydrogeological parameters is of importance for groundwater resources assessment and management. To address the problem of estimating aquifer parameters from data of an unsteady pumping test, a fitting function for the Theis well function was proposed using a fitting optimization method, and a simple linear regression equation was then established. The aquifer parameters could be obtained by solving for the coefficients of the regression equation. The application of the proposed method was illustrated using two published data sets. Error statistics and analysis of the pumping drawdown showed that the method proposed in this paper yielded quick and accurate estimates of the aquifer parameters. The proposed method could reliably identify the aquifer parameters from drawdowns observed at long distances as well as from early drawdowns. It is hoped that the method proposed in this paper will be helpful for practicing hydrogeologists and hydrologists.
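
    The general recipe (turn the drawdown-versus-time relation into a straight line, fit it by least squares, and read the aquifer parameters off the coefficients) can be illustrated with the classical Cooper-Jacob semilog approximation of the Theis solution; note that this is a stand-in for, not a reproduction of, the particular fitting function proposed in the paper, and all data below are synthetic.

        # Transmissivity and storativity from a straight-line fit of drawdown on log10(time).
        import numpy as np

        Q, r = 0.01, 30.0                        # pumping rate (m^3/s), observation distance (m)
        T_true, S_true = 5e-3, 2e-4              # "true" transmissivity (m^2/s) and storativity
        t = np.logspace(2, 5, 30)                # times (s), late enough for the approximation
        s = (2.303 * Q / (4 * np.pi * T_true)) * np.log10(2.25 * T_true * t / (r**2 * S_true))
        s += np.random.default_rng(2).normal(0, 0.003, t.size)   # measurement noise

        b, a = np.polyfit(np.log10(t), s, 1)     # drawdown is linear in log10(t): s = a + b*log10(t)
        T_est = 2.303 * Q / (4 * np.pi * b)                  # from the slope
        S_est = 2.25 * T_est * 10 ** (-a / b) / r**2         # from the zero-drawdown intercept time
        print(f"T ~ {T_est:.2e} m^2/s, S ~ {S_est:.2e}")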

  6. Logistic regression for risk factor modelling in stuttering research.

    PubMed

    Reed, Phil; Wu, Yaqionq

    2013-06-01

    To outline the uses of logistic regression and other statistical methods for risk factor analysis in the context of research on stuttering. The principles underlying the application of a logistic regression are illustrated, and the types of questions to which such a technique has been applied in the stuttering field are outlined. The assumptions and limitations of the technique are discussed with respect to existing stuttering research, and with respect to formulating appropriate research strategies to accommodate these considerations. Finally, some alternatives to the approach are briefly discussed. The way the statistical procedures are employed is demonstrated with some hypothetical data. Research into several practical issues concerning stuttering could benefit if risk factor modelling were used. Important examples are early diagnosis, prognosis (whether a child will recover or persist) and assessment of treatment outcome. After reading this article you will: (a) Summarize the situations in which logistic regression can be applied to a range of issues about stuttering; (b) Follow the steps in performing a logistic regression analysis; (c) Describe the assumptions of the logistic regression technique and the precautions that need to be checked when it is employed; (d) Be able to summarize its advantages over other techniques like estimation of group differences and simple regression. Copyright © 2012 Elsevier Inc. All rights reserved.

  7. OPLS statistical model versus linear regression to assess sonographic predictors of stroke prognosis.

    PubMed

    Vajargah, Kianoush Fathi; Sadeghi-Bazargani, Homayoun; Mehdizadeh-Esfanjani, Robab; Savadi-Oskouei, Daryoush; Farhoudi, Mehdi

    2012-01-01

    The objective of the present study was to assess the comparable applicability of the orthogonal projections to latent structures (OPLS) statistical model vs traditional linear regression in order to investigate the role of transcranial Doppler (TCD) sonography in predicting ischemic stroke prognosis. The study was conducted on 116 ischemic stroke patients admitted to a specialty neurology ward. The Unified Neurological Stroke Scale was used once for clinical evaluation on the first week of admission and again six months later. All data was primarily analyzed using simple linear regression and later considered for multivariate analysis using PLS/OPLS models through the SIMCA P+12 statistical software package. The linear regression analysis results used for the identification of TCD predictors of stroke prognosis were confirmed through the OPLS modeling technique. Moreover, in comparison to linear regression, the OPLS model appeared to have higher sensitivity in detecting the predictors of ischemic stroke prognosis and detected several more predictors. Applying the OPLS model made it possible to use both single TCD measures/indicators and arbitrarily dichotomized measures of TCD single vessel involvement as well as the overall TCD result. In conclusion, the authors recommend PLS/OPLS methods as complementary rather than alternative to the available classical regression models such as linear regression.

  8. Reliability Analysis of the Gradual Degradation of Semiconductor Devices.

    DTIC Science & Technology

    1983-07-20

    under the heading of linear models or linear statistical models [3, 4]. We have not used this material in this report. Assuming catastrophic failure when...assuming a catastrophic model. In this treatment we first modify our system loss formula and then proceed to the actual analysis. II. ANALYSIS OF... Failure-time data of the form (1, T1), (2, T2), ..., (n, Tn) are easily analyzed by simple linear regression. Since we have assumed a log normal/Arrhenius activation

  9. "Singing in the Tube"--audiovisual assay of plant oil repellent activity against mosquitoes (Culex pipiens).

    PubMed

    Adams, Temitope F; Wongchai, Chatchawal; Chaidee, Anchalee; Pfeiffer, Wolfgang

    2016-01-01

    Plant essential oils have been suggested as a promising alternative to the established mosquito repellent DEET (N,N-diethyl-meta-toluamide). Searching for an assay with generally available equipment, we designed a new audiovisual assay of repellent activity against mosquitoes "Singing in the Tube," testing single mosquitoes in Drosophila cultivation tubes. Statistics with regression analysis should compensate for limitations of simple hardware. The assay was established with female Culex pipiens mosquitoes in 60 experiments, 120-h audio recording, and 2580 estimations of the distance between mosquito sitting position and the chemical. Correlations between parameters of sitting position, flight activity pattern, and flight tone spectrum were analyzed. Regression analysis of psycho-acoustic data of audio files (dB[A]) used a squared and modified sine function determining wing beat frequency WBF ± SD (357 ± 47 Hz). Application of logistic regression defined the repelling velocity constant. The repelling velocity constant showed a decreasing order of efficiency of plant essential oils: rosemary (Rosmarinus officinalis), eucalyptus (Eucalyptus globulus), lavender (Lavandula angustifolia), citronella (Cymbopogon nardus), tea tree (Melaleuca alternifolia), clove (Syzygium aromaticum), lemon (Citrus limon), patchouli (Pogostemon cablin), DEET, cedar wood (Cedrus atlantica). In conclusion, we suggest (1) disease vector control (e.g., impregnation of bed nets) by eight plant essential oils with repelling velocity superior to DEET, (2) simple mosquito repellency testing in Drosophila cultivation tubes, (3) automated approaches and room surveillance by generally available audio equipment (dB[A]: ISO standard 226), and (4) quantification of repellent activity by parameters of the audiovisual assay defined by correlation and regression analyses.

  10. Linear regression analysis and its application to multivariate chromatographic calibration for the quantitative analysis of two-component mixtures.

    PubMed

    Dinç, Erdal; Ozdemir, Abdil

    2005-01-01

    A multivariate chromatographic calibration technique was developed for the quantitative analysis of binary mixtures of enalapril maleate (EA) and hydrochlorothiazide (HCT) in tablets in the presence of losartan potassium (LST). The mathematical algorithm of the multivariate chromatographic calibration technique is based on the use of linear regression equations constructed from the relationship between concentration and peak area at a set of five wavelengths. The algorithm of this calibration model, which has a simple mathematical content, is briefly described. This approach is a powerful mathematical tool for optimum chromatographic multivariate calibration and for the elimination of fluctuations coming from instrumental and experimental conditions. The multivariate chromatographic calibration involves reduction of the multivariate linear regression functions to a univariate data set. The validation of the model was carried out by analyzing various synthetic binary mixtures and by using the standard addition technique. The developed calibration technique was applied to the analysis of real pharmaceutical tablets containing EA and HCT. The obtained results were compared with those obtained by a classical HPLC method. It was observed that the proposed multivariate chromatographic calibration gives better results than the classical HPLC method.

  11. Estimating linear temporal trends from aggregated environmental monitoring data

    USGS Publications Warehouse

    Erickson, Richard A.; Gray, Brian R.; Eager, Eric A.

    2017-01-01

    Trend estimates are often used as part of environmental monitoring programs. These trends inform managers (e.g., are desired species increasing or undesired species decreasing?). Data collected from environmental monitoring programs is often aggregated (i.e., averaged), which confounds sampling and process variation. State-space models allow sampling variation and process variations to be separated. We used simulated time-series to compare linear trend estimations from three state-space models, a simple linear regression model, and an auto-regressive model. We also compared the performance of these five models to estimate trends from a long term monitoring program. We specifically estimated trends for two species of fish and four species of aquatic vegetation from the Upper Mississippi River system. We found that the simple linear regression had the best performance of all the given models because it was best able to recover parameters and had consistent numerical convergence. Conversely, the simple linear regression did the worst job estimating populations in a given year. The state-space models did not estimate trends well, but estimated population sizes best when the models converged. We found that a simple linear regression performed better than more complex autoregression and state-space models when used to analyze aggregated environmental monitoring data.
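
    The approach the study found to perform best amounts to an ordinary least-squares trend with its standard error fitted to the aggregated series; a sketch on a simulated site-averaged index (not the Upper Mississippi data) is shown below.

        # Simple linear regression trend estimate for an aggregated annual index.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(4)
        years = np.arange(1998, 2016)
        index = 2.0 - 0.03 * (years - years[0]) + rng.normal(0, 0.15, years.size)  # simulated

        fit = sm.OLS(index, sm.add_constant(years - years[0])).fit()
        slope, se = fit.params[1], fit.bse[1]
        print(f"estimated trend: {slope:.3f} per year (SE {se:.3f})")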

  12. Early Literacy Skills and English Language Learners: An Analysis of Students in a Title I School

    ERIC Educational Resources Information Center

    Ostayan, Jennifer R.

    2016-01-01

    This article examined student literacy assessments in light of students' levels of English language proficiency. The study supported the hypotheses that a student's level of language proficiency positively predicted their DIBELS Composite score at the beginning, middle, and end of kindergarten by utilizing a simple linear regression. An ANOVA…

  13. Techniques of data analysis and presentation for planners of the metropolitan environment

    Treesearch

    Joelee Normand

    1977-01-01

    Relationships between the characteristics of the physical environment of a metropolitan area and the activities of its human inhabitants can be used to predict probable future dynamic trends, both demographic and environmental. Using simple linear regression, we were able to highlight several dynamic features of the metropolitan area of Tulsa, Oklahoma. Computer movies...

  14. Helping Students Assess the Relative Importance of Different Intermolecular Interactions

    ERIC Educational Resources Information Center

    Jasien, Paul G.

    2008-01-01

    A semi-quantitative model has been developed to estimate the relative effects of dispersion, dipole-dipole interactions, and H-bonding on the normal boiling points ("T[subscript b]") for a subset of simple organic systems. The model is based upon a statistical analysis using multiple linear regression on a series of straight-chain organic…

  15. Analyzing industrial energy use through ordinary least squares regression models

    NASA Astrophysics Data System (ADS)

    Golden, Allyson Katherine

    Extensive research has been performed using regression analysis and calibrated simulations to create baseline energy consumption models for residential buildings and commercial institutions. However, few attempts have been made to discuss the applicability of these methodologies to establish baseline energy consumption models for industrial manufacturing facilities. In the few studies of industrial facilities, the presented linear change-point and degree-day regression analyses illustrate ideal cases. It follows that there is a need in the established literature to discuss the methodologies and to determine their applicability for establishing baseline energy consumption models of industrial manufacturing facilities. The thesis determines the effectiveness of simple inverse linear statistical regression models when establishing baseline energy consumption models for industrial manufacturing facilities. Ordinary least squares change-point and degree-day regression methods are used to create baseline energy consumption models for nine different case studies of industrial manufacturing facilities located in the southeastern United States. The influence of ambient dry-bulb temperature and production on total facility energy consumption is observed. The energy consumption behavior of industrial manufacturing facilities is only sometimes sufficiently explained by temperature, production, or a combination of the two variables. This thesis also provides methods for generating baseline energy models that are straightforward and accessible to anyone in the industrial manufacturing community. The methods outlined in this thesis may be easily replicated by anyone that possesses basic spreadsheet software and general knowledge of the relationship between energy consumption and weather, production, or other influential variables. With the help of simple inverse linear regression models, industrial manufacturing facilities may better understand their energy consumption and production behavior, and identify opportunities for energy and cost savings. This thesis study also utilizes change-point and degree-day baseline energy models to disaggregate facility annual energy consumption into separate industrial end-user categories. The baseline energy model provides a suitable and economical alternative to sub-metering individual manufacturing equipment. One case study describes the conjoined use of baseline energy models and facility information gathered during a one-day onsite visit to perform an end-point energy analysis of an injection molding facility conducted by the Alabama Industrial Assessment Center. Applying baseline regression model results to the end-point energy analysis allowed the AIAC to better approximate the annual energy consumption of the facility's HVAC system.
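
    A three-parameter cooling change-point baseline of the kind discussed above can be fitted with a grid search over the balance-point temperature plus ordinary least squares for the remaining parameters; the monthly temperatures and energy figures below are simulated, not taken from the case studies.

        # Three-parameter cooling change-point baseline: flat below the balance point,
        # linear in (temperature - balance point) above it.
        import numpy as np

        rng = np.random.default_rng(8)
        temp = rng.uniform(35, 95, 36)                            # monthly mean dry-bulb temperature (F)
        energy = 400 + 12 * np.maximum(temp - 65, 0) + rng.normal(0, 25, 36)  # monthly energy use

        best = None
        for balance in np.arange(45, 76, 0.5):                    # candidate change points
            x = np.maximum(temp - balance, 0)
            slope, base = np.polyfit(x, energy, 1)
            sse = np.sum((energy - (base + slope * x)) ** 2)
            if best is None or sse < best[0]:
                best = (sse, balance, base, slope)

        sse, balance, base, slope = best
        print(f"balance point ~{balance:.1f} F, baseload ~{base:.0f}, cooling slope ~{slope:.1f}")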

  16. Introduction, comparison, and validation of Meta-Essentials: A free and simple tool for meta-analysis.

    PubMed

    Suurmond, Robert; van Rhee, Henk; Hak, Tony

    2017-12-01

    We present a new tool for meta-analysis, Meta-Essentials, which is free of charge and easy to use. In this paper, we introduce the tool and compare its features to other tools for meta-analysis. We also provide detailed information on the validation of the tool. Although free of charge and simple, Meta-Essentials automatically calculates effect sizes from a wide range of statistics and can be used for a wide range of meta-analysis applications, including subgroup analysis, moderator analysis, and publication bias analyses. The confidence interval of the overall effect is automatically based on the Knapp-Hartung adjustment of the DerSimonian-Laird estimator. However, more advanced meta-analysis methods such as meta-analytical structural equation modelling and meta-regression with multiple covariates are not available. In summary, Meta-Essentials may prove a valuable resource for meta-analysts, including researchers, teachers, and students. © 2017 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.

  17. National Trends of Simple Prostatectomy for Benign Prostatic Hyperplasia With an Analysis of Risk Factors for Adverse Perioperative Outcomes.

    PubMed

    Pariser, Joseph J; Pearce, Shane M; Patel, Sanjay G; Bales, Gregory T

    2015-10-01

    To examine the national trends of simple prostatectomy (SP) for benign prostatic hyperplasia (BPH) focusing on perioperative outcomes and risk factors for complications. The National Inpatient Sample (2002-2012) was utilized to identify patients with BPH undergoing SP. Analysis included demographics, hospital details, associated procedures, and operative approach (open, robotic, or laparoscopic). Outcomes included complications, length of stay, charges, and mortality. Multivariate logistic regression was used to determine the risk factors for perioperative complications. Linear regression was used to assess the trends in the national annual utilization of SP. The study population included 35,171 patients. Median length of stay was 4 days (interquartile range 3-6). Cystolithotomy was performed concurrently in 6041 patients (17%). The overall complication rate was 28%, with bleeding occurring most commonly. In total, 148 (0.4%) patients experienced in-hospital mortality. On multivariate analysis, older age, black race, and overall comorbidity were associated with greater risk of complications, while the use of a minimally invasive approach and concurrent cystolithotomy were associated with a decreased risk. Over the study period, the national use of simple prostatectomy decreased, on average, by 145 cases per year (P = .002). By 2012, 135/2580 procedures (5%) were performed using a minimally invasive approach. The nationwide utilization of SP for BPH has decreased. Bleeding complications are common, but perioperative mortality is low. Patients who are older, of black race, or who have multiple comorbidities are at higher risk of complications. Minimally invasive approaches, which are becoming increasingly utilized, may reduce perioperative morbidity. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Tensile properties of cooked meat sausages and their correlation with texture profile analysis (TPA) parameters and physico-chemical characteristics.

    PubMed

    Herrero, A M; de la Hoz, L; Ordóñez, J A; Herranz, B; Romero de Ávila, M D; Cambero, M I

    2008-11-01

    The possibilities of using breaking strength (BS) and energy to fracture (EF) for monitoring textural properties of some cooked meat sausages (chopped, mortadella and galantines) were studied. Texture profile analysis (TPA), folding test and physico-chemical measurements were also performed. Principal component analysis enabled these meat products to be grouped into three textural profiles which showed significant (p<0.05) differences mainly for BS, hardness, adhesiveness and cohesiveness. Multivariate analysis indicated that BS, EF and TPA parameters were correlated (p<0.05) for every individual meat product (chopped, mortadella and galantines) and all products together. On the basis of these results, TPA parameters could be used for constructing regression models to predict BS. The resulting regression model for all cooked meat products was BS = -0.160 + 6.600 × cohesiveness - 1.255 × adhesiveness + 0.048 × hardness - 506.31 × springiness (R² = 0.745, p < 0.00005). Simple linear regression analysis showed significant coefficients of determination between BS and folding test grade (FG) (R² = 0.586, p < 0.0001) and between EF and FG (R² = 0.564, p < 0.0001).
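
    For illustration, the reported all-products equation can be applied directly; the TPA values passed in below are hypothetical, and variable scales follow the original study:

```python
# Applies the regression equation reported above to hypothetical TPA measurements;
# variable scales and units are those of the original study.
def predict_breaking_strength(cohesiveness, adhesiveness, hardness, springiness):
    return (-0.160
            + 6.600 * cohesiveness
            - 1.255 * adhesiveness
            + 0.048 * hardness
            - 506.31 * springiness)

# Hypothetical example values, for illustration only.
print(predict_breaking_strength(cohesiveness=0.45, adhesiveness=0.05,
                                hardness=35.0, springiness=0.004))
```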

  19. Robust mislabel logistic regression without modeling mislabel probabilities.

    PubMed

    Hung, Hung; Jou, Zhi-Yu; Huang, Su-Yun

    2018-03-01

    Logistic regression is among the most widely used statistical methods for linear discriminant analysis. In many applications, we only observe possibly mislabeled responses. Fitting a conventional logistic regression can then lead to biased estimation. One common resolution is to fit a mislabel logistic regression model, which takes mislabeled responses into consideration. Another common method is to adopt robust M-estimation by down-weighting suspected instances. In this work, we propose a new robust mislabel logistic regression based on γ-divergence. Our proposal possesses two advantageous features: (1) It does not need to model the mislabel probabilities. (2) The minimum γ-divergence estimation leads to a weighted estimating equation without the need to include any bias correction term, that is, it is automatically bias-corrected. These features make the proposed γ-logistic regression more robust in model fitting and more intuitive for model interpretation through a simple weighting scheme. Our method is also easy to implement, and two types of algorithms are included. Simulation studies and the Pima data application are presented to demonstrate the performance of γ-logistic regression. © 2017, The International Biometric Society.
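
    The following is only a loose illustration of the down-weighting idea behind such robust fits, not the authors' γ-divergence estimator: an iteratively reweighted logistic regression in which each observation is weighted by its fitted probability of the observed label raised to a power γ, so poorly fitted (possibly mislabeled) points are down-weighted. The value of γ and the toy data are hypothetical:

```python
# Illustrative only (not the authors' gamma-divergence estimator): iteratively
# reweighted logistic regression where each observation is weighted by its fitted
# probability of the observed label raised to gamma, so poorly fitted (possibly
# mislabeled) points are down-weighted. gamma and the toy data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 2))
p_true = 1 / (1 + np.exp(-(1.5 * X[:, 0] - 1.0 * X[:, 1])))
y = rng.binomial(1, p_true)
flip = rng.random(n) < 0.05            # 5% mislabeled responses
y[flip] = 1 - y[flip]

gamma = 0.5
weights = np.ones(n)
model = LogisticRegression()
for _ in range(20):                    # simple fixed-point iteration
    model.fit(X, y, sample_weight=weights)
    prob = model.predict_proba(X)      # columns follow model.classes_ = [0, 1]
    fitted = prob[np.arange(n), y]     # probability of the observed label
    weights = fitted ** gamma

print("down-weighted fit coefficients:", model.coef_.ravel())
```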

  20. CUSUM-Logistic Regression analysis for the rapid detection of errors in clinical laboratory test results.

    PubMed

    Sampson, Maureen L; Gounden, Verena; van Deventer, Hendrik E; Remaley, Alan T

    2016-02-01

    The main drawback of the periodic analysis of quality control (QC) material is that test performance is not monitored in time periods between QC analyses, potentially leading to the reporting of faulty test results. The objective of this study was to develop a patient based QC procedure for the more timely detection of test errors. Results from a Chem-14 panel measured on the Beckman LX20 analyzer were used to develop the model. Each test result was predicted from the other 13 members of the panel by multiple regression, which resulted in correlation coefficients between the predicted and measured result of >0.7 for 8 of the 14 tests. A logistic regression model, which utilized the measured test result, the predicted test result, the day of the week and time of day, was then developed for predicting test errors. The output of the logistic regression was tallied by a daily CUSUM approach and used to predict test errors, with a fixed specificity of 90%. The mean average run length (ARL) before error detection by CUSUM-Logistic Regression (CSLR) was 20 with a mean sensitivity of 97%, which was considerably shorter than the mean ARL of 53 (sensitivity 87.5%) for a simple prediction model that only used the measured result for error detection. A CUSUM-Logistic Regression analysis of patient laboratory data can be an effective approach for the rapid and sensitive detection of clinical laboratory errors. Published by Elsevier Inc.
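
    A rough sketch of the pipeline's shape, on synthetic data with injected errors and omitting the day-of-week and time-of-day terms used in the study, might look like this: predict one analyte from the others by multiple regression, score each result with a logistic model of error probability, and tally the scores with a one-sided CUSUM:

```python
# Sketch of the pipeline shape only, on synthetic data with injected errors
# (the day-of-week and time-of-day terms used in the study are omitted):
# (1) predict one analyte from the other panel members by multiple regression,
# (2) score each result with a logistic model of error probability,
# (3) tally the scores with a one-sided CUSUM to flag error runs.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(1)
n, k = 2000, 13
panel = rng.normal(size=(n, k))                      # the 13 other panel analytes
target = panel @ rng.normal(size=k) * 0.3 + rng.normal(scale=0.5, size=n)

is_error = rng.random(n) < 0.03                      # injected analytic errors
measured = target + np.where(is_error, rng.normal(loc=3.0, size=n), 0.0)

predicted = LinearRegression().fit(panel, measured).predict(panel)
features = np.column_stack([measured, predicted])
score = LogisticRegression().fit(features, is_error).predict_proba(features)[:, 1]

# One-sided CUSUM of (score - reference); reference and threshold are arbitrary here.
ref, h, cusum, alarms = 0.05, 1.0, 0.0, []
for i, s in enumerate(score):
    cusum = max(0.0, cusum + (s - ref))
    if cusum > h:
        alarms.append(i)
        cusum = 0.0
print(f"{len(alarms)} CUSUM alarms over {n} results")
```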

  1. Information-Decay Pursuit of Dynamic Parameters in Student Models

    DTIC Science & Technology

    1994-04-01

    …Commercially available computer programs for structuring and using Bayesian inference include ERGO (Noetic Systems, Inc., 1991)…

  2. Personal Best Time, Percent Body Fat, and Training Are Differently Associated with Race Time for Male and Female Ironman Triathletes

    ERIC Educational Resources Information Center

    Knechtle, Beat; Wirth, Andrea; Baumann, Barbara; Knechtle, Patrizia; Rosemann, Thomas

    2010-01-01

    We studied male and female nonprofessional Ironman triathletes to determine whether percent body fat, training, and/or previous race experience were associated with race performance. We used simple linear regression analysis, with total race time as the dependent variable, to investigate the relationship among athletes' percent body fat, average…

  3. A Comparison between Multiple Regression Models and CUN-BAE Equation to Predict Body Fat in Adults

    PubMed Central

    Fuster-Parra, Pilar; Bennasar-Veny, Miquel; Tauler, Pedro; Yañez, Aina; López-González, Angel A.; Aguiló, Antoni

    2015-01-01

    Background: Because the accurate measure of body fat (BF) is difficult, several prediction equations have been proposed. The aim of this study was to compare different multiple regression models to predict BF, including the recently reported CUN-BAE equation. Methods: Multiple regression models using body mass index (BMI) and body adiposity index (BAI) as predictors of BF will be compared. These models will also be compared with the CUN-BAE equation. For all the analyses, a sample including all the participants and another including only the overweight and obese subjects will be considered. The BF reference measure was made using Bioelectrical Impedance Analysis. Results: The simplest models, including only BMI or BAI as independent variables, showed that BAI is a better predictor of BF. However, adding the variable sex to both models made BMI a better predictor than the BAI. For both the whole group of participants and the group of overweight and obese participants, simple models (BMI, age and sex as variables) yielded correlations with BF similar to those obtained with the more complex CUN-BAE (ρ = 0.87 vs. ρ = 0.86 for the whole sample and ρ = 0.88 vs. ρ = 0.89 for overweight and obese subjects, the second value in each pair being the one for CUN-BAE). Conclusions: There are models simpler than the CUN-BAE equation that fit BF as well as CUN-BAE does. Therefore, it could be considered that CUN-BAE overfits. Using a simple linear regression model, the BAI, as the only variable, predicts BF better than BMI. However, when the sex variable is introduced, BMI becomes the indicator of choice to predict BF. PMID:25821960

  4. A comparison between multiple regression models and CUN-BAE equation to predict body fat in adults.

    PubMed

    Fuster-Parra, Pilar; Bennasar-Veny, Miquel; Tauler, Pedro; Yañez, Aina; López-González, Angel A; Aguiló, Antoni

    2015-01-01

    Because the accurate measure of body fat (BF) is difficult, several prediction equations have been proposed. The aim of this study was to compare different multiple regression models to predict BF, including the recently reported CUN-BAE equation. Multiple regression models using body mass index (BMI) and body adiposity index (BAI) as predictors of BF will be compared. These models will also be compared with the CUN-BAE equation. For all the analyses, a sample including all the participants and another including only the overweight and obese subjects will be considered. The BF reference measure was made using Bioelectrical Impedance Analysis. The simplest models, including only BMI or BAI as independent variables, showed that BAI is a better predictor of BF. However, adding the variable sex to both models made BMI a better predictor than the BAI. For both the whole group of participants and the group of overweight and obese participants, simple models (BMI, age and sex as variables) yielded correlations with BF similar to those obtained with the more complex CUN-BAE (ρ = 0.87 vs. ρ = 0.86 for the whole sample and ρ = 0.88 vs. ρ = 0.89 for overweight and obese subjects, the second value in each pair being the one for CUN-BAE). There are models simpler than the CUN-BAE equation that fit BF as well as CUN-BAE does. Therefore, it could be considered that CUN-BAE overfits. Using a simple linear regression model, the BAI, as the only variable, predicts BF better than BMI. However, when the sex variable is introduced, BMI becomes the indicator of choice to predict BF.

  5. Determination of cellulose I crystallinity by FT-Raman spectroscopy

    Treesearch

    Umesh P. Agarwal; Richard S. Reiner; Sally A. Ralph

    2009-01-01

    Two new methods based on FT-Raman spectroscopy, one simple, based on band intensity ratio, and the other, using a partial least-squares (PLS) regression model, are proposed to determine cellulose I crystallinity. In the simple method, crystallinity in semicrystalline cellulose I samples was determined based on univariate regression that was first developed using the...

  6. Comparing lagged linear correlation, lagged regression, Granger causality, and vector autoregression for uncovering associations in EHR data.

    PubMed

    Levine, Matthew E; Albers, David J; Hripcsak, George

    2016-01-01

    Time series analysis methods have been shown to reveal clinical and biological associations in data collected in the electronic health record. We wish to develop reliable high-throughput methods for identifying adverse drug effects that are easy to implement and produce readily interpretable results. To move toward this goal, we used univariate and multivariate lagged regression models to investigate associations between twenty pairs of drug orders and laboratory measurements. Multivariate lagged regression models exhibited higher sensitivity and specificity than univariate lagged regression in the 20 examples, and incorporating autoregressive terms for labs and drugs produced more robust signals in cases of known associations among the 20 example pairings. Moreover, including inpatient admission terms in the model attenuated the signals for some cases of unlikely associations, demonstrating how multivariate lagged regression models' explicit handling of context-based variables can provide a simple way to probe for health-care processes that confound analyses of EHR data.
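
    A minimal sketch of the contrast, assuming hypothetical stand-ins for a drug-exposure indicator and a lab series: a univariate lagged regression of the lab on the lagged drug versus a multivariate version that adds an autoregressive lab term:

```python
# Hypothetical stand-ins for a daily drug-exposure indicator and a lab value:
# a univariate lagged regression (lab_t on drug_{t-1}) versus a multivariate
# version that adds an autoregressive lab term.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
T = 400
drug = (rng.random(T) < 0.2).astype(float)
lab = np.zeros(T)
for t in range(1, T):
    lab[t] = 0.6 * lab[t - 1] + 0.8 * drug[t - 1] + rng.normal(scale=0.5)

y = lab[2:]
drug_lag1, lab_lag1 = drug[1:-1], lab[1:-1]

uni = sm.OLS(y, sm.add_constant(drug_lag1)).fit()
multi = sm.OLS(y, sm.add_constant(np.column_stack([drug_lag1, lab_lag1]))).fit()

print("univariate lagged drug effect:   %.2f" % uni.params[1])
print("multivariate lagged drug effect: %.2f (AR term %.2f)" % (multi.params[1], multi.params[2]))
```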

  7. Sample size determination for logistic regression on a logit-normal distribution.

    PubMed

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R² for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.

  8. On the impact of relatedness on SNP association analysis.

    PubMed

    Gross, Arnd; Tönjes, Anke; Scholz, Markus

    2017-12-06

    When testing for SNP (single nucleotide polymorphism) associations in related individuals, observations are not independent. Simple linear regression assuming independent normally distributed residuals results in an increased type I error, and the power of the test is also affected in a more complicated manner. Inflation of type I error is often successfully corrected by genomic control. However, this reduces the power of the test when relatedness is of concern. In the present paper, we derive explicit formulae to investigate how heritability and strength of relatedness contribute to variance inflation of the effect estimate of the linear model. Further, we study the consequences of variance inflation on hypothesis testing and compare the results with those of genomic control correction. We apply the developed theory to the publicly available HapMap trio data (N=129), the Sorbs (a self-contained population with N=977 characterised by a cryptic relatedness structure) and synthetic family studies with different sample sizes (ranging from N=129 to N=999) and different degrees of relatedness. We derive explicit and easy-to-apply approximation formulae to estimate the impact of relatedness on the variance of the effect estimate of the linear regression model. Variance inflation increases with increasing heritability. Relatedness structure also impacts the degree of variance inflation, as shown for the example family structures. Variance inflation is smallest for HapMap trios, followed by a synthetic family study corresponding to the trio data but with a larger sample size than HapMap. The next strongest inflation is observed for the Sorbs, and finally, for a synthetic family study with a more extreme relatedness structure but with a sample size similar to that of the Sorbs. Type I error increases rapidly with increasing inflation. However, for smaller significance levels, power increases with increasing inflation while the opposite holds for larger significance levels. When genomic control is applied, type I error is preserved while power decreases rapidly with increasing variance inflation. Stronger relatedness as well as higher heritability result in increased variance of the effect estimate of simple linear regression analysis. While type I error rates are generally inflated, the behaviour of power is more complex, since power can be increased or reduced depending on relatedness and the heritability of the phenotype. Genomic control cannot be recommended to deal with inflation due to relatedness. Although it preserves type I error, the loss in power can be considerable. We provide a simple formula for estimating variance inflation given the relatedness structure and the heritability of a trait of interest. As a rule of thumb, variance inflation below 1.05 does not require correction and simple linear regression analysis is still appropriate.

  9. Bayesian generalized least squares regression with application to log Pearson type 3 regional skew estimation

    NASA Astrophysics Data System (ADS)

    Reis, D. S.; Stedinger, J. R.; Martins, E. S.

    2005-10-01

    This paper develops a Bayesian approach to analysis of a generalized least squares (GLS) regression model for regional analyses of hydrologic data. The new approach allows computation of the posterior distributions of the parameters and the model error variance using a quasi-analytic approach. Two regional skew estimation studies illustrate the value of the Bayesian GLS approach for regional statistical analysis of a shape parameter and demonstrate that regional skew models can be relatively precise with effective record lengths in excess of 60 years. With Bayesian GLS the marginal posterior distribution of the model error variance and the corresponding mean and variance of the parameters can be computed directly, thereby providing a simple but important extension of the regional GLS regression procedures popularized by Tasker and Stedinger (1989), which is sensitive to the likely values of the model error variance when it is small relative to the sampling error in the at-site estimator.

  10. Goodness-of-fit tests and model diagnostics for negative binomial regression of RNA sequencing data.

    PubMed

    Mi, Gu; Di, Yanming; Schafer, Daniel W

    2015-01-01

    This work is about assessing model adequacy for negative binomial (NB) regression, particularly (1) assessing the adequacy of the NB assumption, and (2) assessing the appropriateness of models for NB dispersion parameters. Tools for the first are appropriate for NB regression generally; those for the second are primarily intended for RNA sequencing (RNA-Seq) data analysis. The typically small number of biological samples and large number of genes in RNA-Seq analysis motivate us to address the trade-offs between robustness and statistical power using NB regression models. One widely-used power-saving strategy, for example, is to assume some commonalities of NB dispersion parameters across genes via simple models relating them to mean expression rates, and many such models have been proposed. As RNA-Seq analysis is becoming ever more popular, it is appropriate to make more thorough investigations into power and robustness of the resulting methods, and into practical tools for model assessment. In this article, we propose simulation-based statistical tests and diagnostic graphics to address model adequacy. We provide simulated and real data examples to illustrate that our proposed methods are effective for detecting the misspecification of the NB mean-variance relationship as well as judging the adequacy of fit of several NB dispersion models.
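
    The specific tests proposed in the article are not reproduced here; the sketch below shows the general simulation-based idea on hypothetical data, comparing an observed negative binomial deviance with deviances from datasets simulated under the fitted model (with the dispersion treated as known for simplicity):

```python
# Sketch of the general simulation-based idea (not the article's specific test
# statistics): fit a negative binomial GLM, then compare its deviance with the
# deviances of datasets simulated from the fitted model. Data are hypothetical
# and the dispersion is treated as known for simplicity.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=n)
X = sm.add_constant(x)
mu_true = np.exp(1.0 + 0.5 * x)
alpha = 0.4                     # NB dispersion: Var = mu + alpha * mu^2
nb_n = 1.0 / alpha              # numpy parametrization: n = 1/alpha, p = n / (n + mu)
y = rng.negative_binomial(nb_n, nb_n / (nb_n + mu_true))

family = sm.families.NegativeBinomial(alpha=alpha)
fit = sm.GLM(y, X, family=family).fit()

def deviance_of(y_sim):
    return sm.GLM(y_sim, X, family=family).fit().deviance

sim_dev = [deviance_of(rng.negative_binomial(nb_n, nb_n / (nb_n + fit.mu)))
           for _ in range(200)]
p_value = np.mean(np.array(sim_dev) >= fit.deviance)
print(f"observed deviance {fit.deviance:.1f}, parametric bootstrap p-value {p_value:.2f}")
```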

  11. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    USGS Publications Warehouse

    Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby

    2017-01-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.

  12. Wastewater-Based Epidemiology of Stimulant Drugs: Functional Data Analysis Compared to Traditional Statistical Methods.

    PubMed

    Salvatore, Stefania; Bramness, Jørgen Gustav; Reid, Malcolm J; Thomas, Kevin Victor; Harman, Christopher; Røislien, Jo

    2015-01-01

    Wastewater-based epidemiology (WBE) is a new methodology for estimating the drug load in a population. Simple summary statistics and specification tests have typically been used to analyze WBE data, comparing differences between weekday and weekend loads. Such standard statistical methods may, however, overlook important nuanced information in the data. In this study, we apply functional data analysis (FDA) to WBE data and compare the results to those obtained from more traditional summary measures. We analysed temporal WBE data from 42 European cities, using sewage samples collected daily for one week in March 2013. For each city, the main temporal features of two selected drugs were extracted using functional principal component (FPC) analysis, along with simpler measures such as the area under the curve (AUC). The individual cities' scores on each of the temporal FPCs were then used as outcome variables in multiple linear regression analysis with various city and country characteristics as predictors. The results were compared to those of functional analysis of variance (FANOVA). The first three FPCs explained more than 99% of the temporal variation. The first component (FPC1) represented the level of the drug load, while the second and third temporal components represented the level and the timing of a weekend peak. AUC was highly correlated with FPC1, but other temporal characteristics were not captured by the simple summary measures. FANOVA was less flexible than the FPCA-based regression, even though it showed concordant results. Geographical location was the main predictor for the general level of the drug load. FDA of WBE data extracts more detailed information about drug load patterns during the week that are not identified by more traditional statistical methods. Results also suggest that regression based on FPC results is a valuable addition to FANOVA for estimating associations between temporal patterns and covariate information.

  13. Practical Session: Multiple Linear Regression

    NASA Astrophysics Data System (ADS)

    Clausel, M.; Grégoire, G.

    2014-12-01

    Three exercises are proposed to illustrate simple and multiple linear regression. The first investigates the influence of several factors on atmospheric pollution; it was proposed by D. Chessel and A.B. Dufour in Lyon 1 (see Sect. 6 of http://pbil.univ-lyon1.fr/R/pdf/tdr33.pdf) and is based on data from 20 U.S. cities. Exercise 2 is an introduction to model selection, whereas Exercise 3 provides a first example of analysis of variance. Exercises 2 and 3 were proposed by A. Dalalyan at ENPC (see Exercises 2 and 3 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_5.pdf).
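
    A minimal Python analogue of the first exercise, with a hypothetical stand-in for the 20-city pollution dataset, would be:

```python
# Minimal Python analogue of Exercise 1: regress an air-pollution measure on
# several city-level factors and inspect the fitted summary. The data frame is
# hypothetical, standing in for the 20-city dataset used in the exercise.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
cities = pd.DataFrame({
    "temperature": rng.normal(12, 4, 20),
    "industry": rng.normal(450, 120, 20),
    "population": rng.normal(600, 150, 20),
})
cities["so2"] = (30 - 1.2 * cities["temperature"] + 0.05 * cities["industry"]
                 + rng.normal(scale=5, size=20))

model = smf.ols("so2 ~ temperature + industry + population", data=cities).fit()
print(model.summary())
```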

  14. Cellulose I crystallinity determination using FT-Raman spectroscopy : univariate and multivariate methods

    Treesearch

    Umesh P. Agarwal; Richard S. Reiner; Sally A. Ralph

    2010-01-01

    Two new methods based on FT–Raman spectroscopy, one simple, based on band intensity ratio, and the other using a partial least squares (PLS) regression model, are proposed to determine cellulose I crystallinity. In the simple method, crystallinity in cellulose I samples was determined based on univariate regression that was first developed using the Raman band...

  15. Determinants of outcomes in patients with simple gastroschisis.

    PubMed

    Youssef, Fouad; Laberge, Jean-Martin; Puligandla, Pramod; Emil, Sherif

    2017-05-01

    We analyzed the determinants of outcomes in simple gastroschisis (GS) not complicated by intestinal atresia, perforation, or necrosis. All simple GS patients enrolled in a national prospective registry from 2005 to 2013 were studied. Patients below the median for total parenteral nutrition (TPN) duration (26 days) and hospital stay (34 days) were compared to those above. Univariate and multivariate logistic and linear regression analyses were employed using maternal, patient, postnatal, and treatment variables. Of 700 patients with simple GS, representing 76.8% of all GS patients, 690 (98.6%) survived. TPN was used in 352 (51.6%) and 330 (48.4%) patients for ≤26 and >26 days, respectively. Hospital stay for 356 (51.9%) and 330 (48.1%) infants was ≤34 and >34 days, respectively. Univariate analysis revealed significant differences in several patient, treatment, and postnatal factors. On multivariate analysis, prenatal sonographic bowel dilation, older age at closure, necrotizing enterocolitis, longer mechanical ventilation, and central-line associated blood stream infection (CLABSI) were independently associated with longer TPN duration and hospital stay, with CLABSI being the strongest predictor. Prenatal bowel dilation is associated with increased morbidity in simple GS. CLABSI is the strongest predictor of outcomes. Bowel matting is not an independent risk factor. Level of evidence: 2c. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Knowledge and Ability Factors Underlying Simple Learning by Accretion

    DTIC Science & Technology

    1989-10-01

    …The levels of processing framework of Craik and Lockhart (1972) suggested that a permanent memory trace could be established without conscious effort so long as the… …elaborative processing in associative learning, verbal analogy solution should be a significant predictor of paired-associate learning…

  17. An Investigation of Multivariate Adaptive Regression Splines for Modeling and Analysis of Univariate and Semi-Multivariate Time Series Systems

    DTIC Science & Technology

    1991-09-01

    …However, there is no guarantee that this would work; for instance, if the data were generated by an ARCH model (Tong, 1990, pp. 116-117) then a simple…

  18. Using Time-Series Regression to Predict Academic Library Circulations.

    ERIC Educational Resources Information Center

    Brooks, Terrence A.

    1984-01-01

    Four methods were used to forecast monthly circulation totals in 15 midwestern academic libraries: dummy time-series regression, lagged time-series regression, simple average (straight-line forecasting), monthly average (naive forecasting). In tests of forecasting accuracy, dummy regression method and monthly mean method exhibited smallest average…
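
    A sketch of the dummy (monthly-indicator) regression compared with the naive monthly-average forecast, on a hypothetical circulation series:

```python
# Dummy (monthly-indicator) time-series regression with a linear trend, compared
# with a naive monthly-average forecast. The circulation series is hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
seasonal = np.tile([1.2, 1.1, 1.0, 0.9, 0.7, 0.5, 0.4, 0.5, 1.1, 1.3, 1.2, 1.0], 5)
df = pd.DataFrame({
    "t": np.arange(60),
    "month": [str(m) for m in pd.period_range("2018-01", periods=60, freq="M").month],
    "circ": 1000 + 2.0 * np.arange(60) + 300 * seasonal + rng.normal(scale=40, size=60),
})

dummy_model = smf.ols("circ ~ t + C(month)", data=df).fit()
next_month = pd.DataFrame({"t": [60], "month": ["1"]})
print("dummy-regression forecast:", float(dummy_model.predict(next_month).iloc[0]))
print("monthly-average forecast: ", df.loc[df.month == "1", "circ"].mean())
```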

  19. Application of Fourier transform near-infrared spectroscopy to optimization of green tea steaming process conditions.

    PubMed

    Ono, Daiki; Bamba, Takeshi; Oku, Yuichi; Yonetani, Tsutomu; Fukusaki, Eiichiro

    2011-09-01

    In this study, we constructed prediction models by metabolic fingerprinting of fresh green tea leaves using Fourier transform near-infrared (FT-NIR) spectroscopy and partial least squares (PLS) regression analysis to objectively optimize the steaming process conditions in green tea manufacture. The steaming process is the most important step for manufacturing high quality green tea products. However, the parameter setting of the steamer is currently determined subjectively by the manufacturer. Therefore, a simple and robust system that can be used to objectively set the steaming process parameters is necessary. We focused on FT-NIR spectroscopy because of its simple operation, quick measurement, and low running costs. After removal of noise in the spectral data by principal component analysis (PCA), PLS regression analysis was performed using spectral information as independent variables, and the steaming parameters set by experienced manufacturers as dependent variables. The prediction models were successfully constructed with satisfactory accuracy. Moreover, the results of the demonstration experiment suggested that the green tea steaming process parameters could be predicted on a larger manufacturing scale. This technique will contribute to improvement of the quality and productivity of green tea because it can objectively optimize the complicated green tea steaming process and will be suitable for practical use in green tea manufacture. Copyright © 2011 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.

  20. A consistent framework for Horton regression statistics that leads to a modified Hack's law

    USGS Publications Warehouse

    Furey, P.R.; Troutman, B.M.

    2008-01-01

    A statistical framework is introduced that resolves important problems with the interpretation and use of traditional Horton regression statistics. The framework is based on a univariate regression model that leads to an alternative expression for Horton ratio, connects Horton regression statistics to distributional simple scaling, and improves the accuracy in estimating Horton plot parameters. The model is used to examine data for drainage area A and mainstream length L from two groups of basins located in different physiographic settings. Results show that confidence intervals for the Horton plot regression statistics are quite wide. Nonetheless, an analysis of covariance shows that regression intercepts, but not regression slopes, can be used to distinguish between basin groups. The univariate model is generalized to include n > 1 dependent variables. For the case where the dependent variables represent ln A and ln L, the generalized model performs somewhat better at distinguishing between basin groups than two separate univariate models. The generalized model leads to a modification of Hack's law where L depends on both A and Strahler order ω. Data show that ω plays a statistically significant role in the modified Hack's law expression. © 2008 Elsevier B.V.

  1. A classical regression framework for mediation analysis: fitting one model to estimate mediation effects.

    PubMed

    Saunders, Christina T; Blume, Jeffrey D

    2017-10-26

    Mediation analysis explores the degree to which an exposure's effect on an outcome is diverted through a mediating variable. We describe a classical regression framework for conducting mediation analyses in which estimates of causal mediation effects and their variance are obtained from the fit of a single regression model. The vector of changes in exposure pathway coefficients, which we named the essential mediation components (EMCs), is used to estimate standard causal mediation effects. Because these effects are often simple functions of the EMCs, an analytical expression for their model-based variance follows directly. Given this formula, it is instructive to revisit the performance of routinely used variance approximations (e.g., delta method and resampling methods). Requiring the fit of only one model reduces the computation time required for complex mediation analyses and permits the use of a rich suite of regression tools that are not easily implemented on a system of three equations, as would be required in the Baron-Kenny framework. Using data from the BRAIN-ICU study, we provide examples to illustrate the advantages of this framework and compare it with the existing approaches. © The Author 2017. Published by Oxford University Press.
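
    The single-model EMC formulation itself is not reproduced here; for comparison, the sketch below shows the familiar two-regression, product-of-coefficients decomposition that it streamlines, on hypothetical data:

```python
# For comparison only: the familiar two-regression, product-of-coefficients
# mediation decomposition (the paper's point is that a single fitted model can
# replace this). Exposure, mediator, and outcome are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 1000
exposure = rng.binomial(1, 0.5, n).astype(float)
mediator = 0.7 * exposure + rng.normal(size=n)
outcome = 0.5 * exposure + 0.9 * mediator + rng.normal(size=n)

# a-path: exposure -> mediator
a = sm.OLS(mediator, sm.add_constant(exposure)).fit().params[1]
# b-path and direct effect: outcome ~ exposure + mediator
out_fit = sm.OLS(outcome, sm.add_constant(np.column_stack([exposure, mediator]))).fit()
direct, b = out_fit.params[1], out_fit.params[2]

indirect = a * b
total = direct + indirect
print(f"indirect {indirect:.2f}, direct {direct:.2f}, proportion mediated {indirect / total:.2f}")
```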

  2. Assessment of the spatial scaling behaviour of floods in the United Kingdom

    NASA Astrophysics Data System (ADS)

    Formetta, Giuseppe; Stewart, Elizabeth; Bell, Victoria

    2017-04-01

    Floods are among the most dangerous natural hazards, causing loss of life and significant damage to private and public property. Regional flood-frequency analysis (FFA) methods are essential tools to assess the flood hazard and plan interventions for its mitigation. FFA methods are often based on the well-known index flood method that assumes the invariance of the coefficient of variation of floods with drainage area. This assumption is equivalent to the simple scaling or self-similarity assumption for peak floods, i.e. their spatial structure remains similar to itself, in a particular and relatively simple way, over a range of scales. Spatial scaling of floods has been evaluated at national scale for different countries such as Canada, the USA, and Australia. To our knowledge, such a study has not been conducted for the United Kingdom, even though the standard FFA method there is based on the index flood assumption. In this work we present an integrated approach to assess the spatial scaling behaviour of floods in the United Kingdom using three different methods: product moments (PM), probability weighted moments (PWM), and quantile analysis (QA). We analyse both instantaneous and daily annual observed maximum floods and perform our analysis both across the entire country and in its sub-climatic regions as defined in the Flood Studies Report (NERC, 1975). To evaluate the relationship between the k-th moments or quantiles and the drainage area we used both regression with area alone and multiple regression considering other explanatory variables to account for the geomorphology, amount of rainfall, and soil type of the catchments. The latter multiple regression approach was only recently demonstrated to be more robust than the traditional regression with area alone, which can lead to biased estimates of scaling exponents and misinterpretation of spatial scaling behaviour. We tested our framework on almost 600 rural catchments in the UK, considered both as a single region and split into 11 sub-regions with 50 catchments per region on average. Preliminary results from the three different spatial scaling methods are generally in agreement and indicate that: i) only some of the peak flow variability is explained by area alone (approximately 50% for the entire country and ranging between 40% and 70% for the sub-regions); ii) this percentage increases to 90% for the entire country and ranges between 80% and 95% for the sub-regions when the multiple regression is used; iii) the simple scaling hypothesis holds in all sub-regions with the exception of weak multi-scaling found in regions 2 (North) and 5 and 6 (South East). We hypothesize that these deviations can be explained by heterogeneity in large scale precipitation and by the influence of the soil type (predominantly chalk) on the flood formation process in regions 5 and 6.

  3. Bankfull characteristics of Ohio streams and their relation to peak streamflows

    USGS Publications Warehouse

    Sherwood, James M.; Huitger, Carrie A.

    2005-01-01

    Regional curves, simple-regression equations, and multiple-regression equations were developed to estimate bankfull width, bankfull mean depth, bankfull cross-sectional area, and bankfull discharge of rural, unregulated streams in Ohio. The methods are based on geomorphic, basin, and flood-frequency data collected at 50 study sites on unregulated natural alluvial streams in Ohio, of which 40 sites are near streamflow-gaging stations. The regional curves and simple-regression equations relate the bankfull characteristics to drainage area. The multiple-regression equations relate the bankfull characteristics to drainage area, main-channel slope, main-channel elevation index, median bed-material particle size, bankfull cross-sectional area, and local-channel slope. Average standard errors of prediction for bankfull width equations range from 20.6 to 24.8 percent; for bankfull mean depth, 18.8 to 20.6 percent; for bankfull cross-sectional area, 25.4 to 30.6 percent; and for bankfull discharge, 27.0 to 78.7 percent. The simple-regression (drainage-area only) equations have the highest average standard errors of prediction. The multiple-regression equations in which the explanatory variables included drainage area, main-channel slope, main-channel elevation index, median bed-material particle size, bankfull cross-sectional area, and local-channel slope have the lowest average standard errors of prediction. Field surveys were done at each of the 50 study sites to collect the geomorphic data. Bankfull indicators were identified and evaluated, cross-section and longitudinal profiles were surveyed, and bed- and bank-material were sampled. Field data were analyzed to determine various geomorphic characteristics such as bankfull width, bankfull mean depth, bankfull cross-sectional area, bankfull discharge, streambed slope, and bed- and bank-material particle-size distribution. The various geomorphic characteristics were analyzed by means of a combination of graphical and statistical techniques. The logarithms of the annual peak discharges for the 40 gaged study sites were fit by a Pearson Type III frequency distribution to develop flood-peak discharges associated with recurrence intervals of 2, 5, 10, 25, 50, and 100 years. The peak-frequency data were related to geomorphic, basin, and climatic variables by multiple-regression analysis. Simple-regression equations were developed to estimate 2-, 5-, 10-, 25-, 50-, and 100-year flood-peak discharges of rural, unregulated streams in Ohio from bankfull channel cross-sectional area. The average standard errors of prediction are 31.6, 32.6, 35.9, 41.5, 46.2, and 51.2 percent, respectively. The study and methods developed are intended to improve understanding of the relations between geomorphic, basin, and flood characteristics of streams in Ohio and to aid in the design of hydraulic structures, such as culverts and bridges, where stability of the stream and structure is an important element of the design criteria. The study was done in cooperation with the Ohio Department of Transportation and the U.S. Department of Transportation, Federal Highway Administration.
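
    The report's fitted equations are not reproduced here; the sketch below only illustrates the general form of a drainage-area-only relation, a power law fit by ordinary least squares on log-transformed hypothetical data:

```python
# General form of a drainage-area-only relation (not the report's coefficients):
# a power law, bankfull width = a * area^b, fit by OLS on log-transformed values.
# The site data are hypothetical.
import numpy as np

rng = np.random.default_rng(7)
area_km2 = 10 ** rng.uniform(0.5, 3.0, 50)                                   # drainage area
width_m = 3.0 * area_km2 ** 0.45 * np.exp(rng.normal(scale=0.2, size=50))    # bankfull width

X = np.column_stack([np.ones(50), np.log10(area_km2)])
coef = np.linalg.lstsq(X, np.log10(width_m), rcond=None)[0]
a, b = 10 ** coef[0], coef[1]
print(f"fitted relation: bankfull width ~ {a:.2f} * area^{b:.2f}")
```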

  4. Reversed inverse regression for the univariate linear calibration and its statistical properties derived using a new methodology

    NASA Astrophysics Data System (ADS)

    Kang, Pilsang; Koo, Changhoi; Roh, Hokyu

    2017-11-01

    Since simple linear regression theory was established at the beginning of the 1900s, it has been used in a variety of fields. Unfortunately, it cannot be used directly for calibration. In practical calibrations, the observed measurements (the inputs) are subject to errors, and hence they vary, thus violating the assumption that the inputs are fixed. Therefore, in the case of calibration, the regression line fitted using the method of least squares is not consistent with the statistical properties of simple linear regression as already established based on this assumption. To resolve this problem, "classical regression" and "inverse regression" have been proposed. However, they do not completely resolve the problem. As a fundamental solution, we introduce "reversed inverse regression" along with a new methodology for deriving its statistical properties. In this study, the statistical properties of this regression are derived using the "error propagation rule" and the "method of simultaneous error equations" and are compared with those of the existing regression approaches. The accuracy of the statistical properties thus derived is investigated in a simulation study. We conclude that the newly proposed regression and methodology constitute the complete regression approach for univariate linear calibrations.
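
    The proposed reversed inverse regression is not reproduced here; the sketch below contrasts the two textbook estimators the paper discusses, classical calibration (regress the response on the standard, then invert) and inverse calibration (regress the standard on the response), on hypothetical calibration data:

```python
# The two textbook estimators discussed in the paper, on hypothetical data:
# classical calibration regresses the response on the standard and inverts the
# fit; inverse calibration regresses the standard directly on the response.
# The proposed reversed inverse regression itself is not reproduced here.
import numpy as np

rng = np.random.default_rng(8)
standard = np.linspace(1, 10, 20)                    # known reference values
response = 2.0 + 1.5 * standard + rng.normal(scale=0.3, size=20)

b1, b0 = np.polyfit(standard, response, 1)           # classical: y = b0 + b1 * x
c1, c0 = np.polyfit(response, standard, 1)           # inverse:   x = c0 + c1 * y

y_new = 11.0                                         # a newly observed response
print("classical estimate:", (y_new - b0) / b1)
print("inverse estimate:  ", c0 + c1 * y_new)
```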

  5. Comparison of methods for the analysis of relatively simple mediation models.

    PubMed

    Rijnhart, Judith J M; Twisk, Jos W R; Chinapaw, Mai J M; de Boer, Michiel R; Heymans, Martijn W

    2017-09-01

    Statistical mediation analysis is an often used method in trials, to unravel the pathways underlying the effect of an intervention on a particular outcome variable. Throughout the years, several methods have been proposed, such as ordinary least square (OLS) regression, structural equation modeling (SEM), and the potential outcomes framework. Most applied researchers do not know that these methods are mathematically equivalent when applied to mediation models with a continuous mediator and outcome variable. Therefore, the aim of this paper was to demonstrate the similarities between OLS regression, SEM, and the potential outcomes framework in three mediation models: 1) a crude model, 2) a confounder-adjusted model, and 3) a model with an interaction term for exposure-mediator interaction. Secondary data analysis of a randomized controlled trial that included 546 schoolchildren. In our data example, the mediator and outcome variable were both continuous. We compared the estimates of the total, direct and indirect effects, proportion mediated, and 95% confidence intervals (CIs) for the indirect effect across OLS regression, SEM, and the potential outcomes framework. OLS regression, SEM, and the potential outcomes framework yielded the same effect estimates in the crude mediation model, the confounder-adjusted mediation model, and the mediation model with an interaction term for exposure-mediator interaction. Since OLS regression, SEM, and the potential outcomes framework yield the same results in three mediation models with a continuous mediator and outcome variable, researchers can continue using the method that is most convenient to them.

  6. Differential gene expression detection and sample classification using penalized linear regression models.

    PubMed

    Wu, Baolin

    2006-02-15

    Differential gene expression detection and sample classification using microarray data have received much research interest recently. Owing to the large number of genes p and small number of samples n (p > n), microarray data analysis poses big challenges for statistical analysis. An obvious problem owing to the 'large p small n' is over-fitting. Just by chance, we are likely to find some non-differentially expressed genes that can classify the samples very well. The idea of shrinkage is to regularize the model parameters to reduce the effects of noise and produce reliable inferences. Shrinkage has been successfully applied in microarray data analysis. The SAM statistics proposed by Tusher et al. and the 'nearest shrunken centroid' proposed by Tibshirani et al. are ad hoc shrinkage methods. Both methods are simple, intuitive and prove to be useful in empirical studies. Recently Wu proposed the penalized t/F-statistics with shrinkage by formally using L1-penalized linear regression models for two-class microarray data, showing good performance. In this paper we systematically discuss the use of penalized regression models for analyzing microarray data. We generalize the two-class penalized t/F-statistics proposed by Wu to multi-class microarray data. We formally derive the ad hoc shrunken centroid used by Tibshirani et al. using the L1-penalized regression models. We also show that the penalized linear regression models provide a rigorous and unified statistical framework for sample classification and differential gene expression detection.

  7. A simple approach to quantitative analysis using three-dimensional spectra based on selected Zernike moments.

    PubMed

    Zhai, Hong Lin; Zhai, Yue Yuan; Li, Pei Zhen; Tian, Yue Li

    2013-01-21

    A very simple approach to quantitative analysis is proposed based on the technology of digital image processing using three-dimensional (3D) spectra obtained by high-performance liquid chromatography coupled with a diode array detector (HPLC-DAD). As region-based shape features of a grayscale image, Zernike moments, with their inherent invariance properties, were employed to establish the linear quantitative models. This approach was applied to the quantitative analysis of three compounds in mixed samples using 3D HPLC-DAD spectra, and three linear models were obtained, respectively. The correlation coefficients (R²) for training and test sets were more than 0.999, and the statistical parameters and strict validation supported the reliability of the established models. The analytical results suggest that the Zernike moments selected by stepwise regression can be used in the quantitative analysis of target compounds. Our study provides a new idea for quantitative analysis using 3D spectra, which can be extended to the analysis of other 3D spectra obtained by different methods or instruments.

  8. Linear Regression with a Randomly Censored Covariate: Application to an Alzheimer's Study.

    PubMed

    Atem, Folefac D; Qian, Jing; Maye, Jacqueline E; Johnson, Keith A; Betensky, Rebecca A

    2017-01-01

    The association between maternal age of onset of dementia and amyloid deposition (measured by in vivo positron emission tomography (PET) imaging) in cognitively normal older offspring is of interest. In a regression model for amyloid, special methods are required due to the random right censoring of the covariate of maternal age of onset of dementia. Prior literature has proposed methods to address the problem of censoring due to assay limit of detection, but not random censoring. We propose imputation methods and a survival regression method that do not require parametric assumptions about the distribution of the censored covariate. Existing imputation methods address missing covariates, but not right censored covariates. In simulation studies, we compare these methods to the simple, but inefficient complete case analysis, and to thresholding approaches. We apply the methods to the Alzheimer's study.

  9. Correlation of concentration of modified cassava flour for banana fritter flour using simple linear regression

    NASA Astrophysics Data System (ADS)

    Herminiati, A.; Rahman, T.; Turmala, E.; Fitriany, C. G.

    2017-12-01

    The purpose of this study was to determine the correlation between different concentrations of modified cassava flour processed for banana fritter flour. The research method consisted of two stages: (1) determining the different types of flour: cassava flour, modified cassava flour-A (using the lactic acid bacteria method), and modified cassava flour-B (using the autoclaving-cooling cycle method), which were then subjected to organoleptic testing and physicochemical analysis; and (2) determining the correlation of the concentration of modified cassava flour for banana fritter flour, for which simple linear regression was used. The factors were different concentrations of modified cassava flour-B: (y1) 40%, (y2) 50%, and (y3) 60%. The responses in the study included physical analysis (whiteness of flour, water holding capacity-WHC, oil holding capacity-OHC), chemical analysis (moisture content, ash content, crude fiber content, starch content), and organoleptic properties (color, aroma, taste, texture). The results showed that the type of flour selected from the organoleptic test was modified cassava flour-B. Analysis of the modified cassava flour-B gave whiteness of flour 60.42%; WHC 41.17%; OHC 21.15%; moisture content 4.4%; ash content 1.75%; crude fiber content 1.86%; starch content 67.31%. The different concentrations of modified cassava flour-B were correlated with the whiteness of flour, WHC, OHC, moisture content, ash content, crude fiber content, and starch content. The different concentrations of modified cassava flour-B did not affect the color, aroma, taste, and texture.
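
    A minimal sketch of the simple linear regression step, using the study's three concentration levels but hypothetical response values:

```python
# Simple linear regression of one response on modified cassava flour-B
# concentration, using the study's three levels (40%, 50%, 60%); the replicate
# response values below are hypothetical.
from scipy import stats

concentration = [40, 40, 40, 50, 50, 50, 60, 60, 60]          # percent
moisture = [4.6, 4.5, 4.7, 4.4, 4.3, 4.4, 4.1, 4.2, 4.0]      # hypothetical responses

fit = stats.linregress(concentration, moisture)
print(f"moisture = {fit.intercept:.2f} + {fit.slope:.3f} * concentration, r = {fit.rvalue:.2f}")
```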

  10. Interpretation of commonly used statistical regression models.

    PubMed

    Kasza, Jessica; Wolfe, Rory

    2014-01-01

    A review of some regression models commonly used in respiratory health applications is provided in this article. Simple linear regression, multiple linear regression, logistic regression and ordinal logistic regression are considered. The focus of this article is on the interpretation of the regression coefficients of each model, which are illustrated through the application of these models to a respiratory health research study. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.
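
    As a small illustration of the kind of interpretation the article covers, exponentiating a logistic regression coefficient gives an odds ratio per unit increase in the predictor; the data below are hypothetical:

```python
# Exponentiating a logistic regression coefficient gives an odds ratio per unit
# increase in the predictor. The smoking-exposure data here are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 800
pack_years = rng.gamma(shape=2.0, scale=10.0, size=n)
p = 1 / (1 + np.exp(-(-3.0 + 0.05 * pack_years)))
outcome = rng.binomial(1, p)

fit = sm.Logit(outcome, sm.add_constant(pack_years)).fit(disp=False)
beta = fit.params[1]
print(f"odds ratio per pack-year:     {np.exp(beta):.3f}")
print(f"odds ratio per 10 pack-years: {np.exp(10 * beta):.2f}")
```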

  11. Association between Organizational Commitment and Personality Traits of Faculty Members of Ahvaz Jundishapur University of Medical Sciences.

    PubMed

    Khiavi, Farzad Faraji; Dashti, Rezvan; Mokhtari, Saeedeh

    2016-03-01

    Individual characteristics are important factors influencing organizational commitment. Also, committed human resources can lead organizations to performance improvement as well as personal and organizational achievements. This research aimed to determine the association between organizational commitment and personality traits among faculty members of Ahvaz Jundishapur University of Medical Sciences. The research population of this cross-sectional study was the faculty members of Ahvaz Jundishapur University of Medical Sciences (Ahvaz, Iran). The sample size was determined to be 83. Data collection instruments were the Allen and Meyer questionnaire for organizational commitment and the NEO questionnaire for personality traits. The data were analyzed through Pearson's product-moment correlation, the independent samples t-test, ANOVA, and simple linear regression analysis (SLR) in SPSS. Continuance commitment showed a significant positive association with neuroticism, extroversion, agreeableness, and conscientiousness. Normative commitment showed a significant positive association with conscientiousness and a negative association with extroversion (p = 0.001). Openness had a positive association with affective commitment. Among the five personality traits, openness and agreeableness had the greatest effect on organizational commitment, as indicated by simple linear regression analysis. Faculty members' personality traits showed a significant association with their organizational commitment. Determining appropriate personality criteria for faculty members may lead to employing committed personnel to accomplish the University's objectives and tasks.

  12. The Association of Free Testosterone Levels in Men and Lifestyle Factors and Chronic Disease Status: A North Texas Healthy Heart Study.

    PubMed

    Cardarelli, Roberto; Singh, Meharvan; Meyer, Jason; Balyakina, Elizabeth; Perez, Oscar; King, Michael

    2014-07-01

    Hypogonadism is highly prevalent in men older than 45 years and is associated with an increased risk of chronic diseases, including obesity, metabolic syndrome, diabetes, and cardiovascular disease. The objective of this study was to determine whether lifestyle factors such as smoking, diet, and exercise are associated with reduced testosterone levels. In this cross-sectional study, 147 men older than 44 years were recruited from a collaborative network of primary care clinics in the Dallas/Fort Worth, Texas, metropolitan area. Free testosterone levels were measured in plasma samples via an enzyme-linked immunosorbent assay-based method, and analyzed by simple and multiple linear regression in relationship to age, race/ethnicity, smoking, diet, exercise, obesity, diabetes, hypertension, and dyslipidemia. The participants had a mean free testosterone level of 3.1 ng/mL (standard deviation [SD] = 1.5) and mean age of 56.8 years (SD = 7.9). In simple regression analysis, free testosterone levels were associated with increased age (β = -0.04; P = .02), diet (β = -0.49; P = .05), diabetes (β = -0.9; P = .003), and hypertension (β = -0.55; P = .03) but not with race/ethnicity, smoking, exercise, obesity, or dyslipidemia. In multiple regression analysis, free testosterone values were significantly associated only with age (β = -0.05; P = .01) and diet (β = -0.72; P = .01). This study implicates diet, in addition to advanced age as a possible risk factor in the development of reduced testosterone levels. © The Author(s) 2014.

  13. Using the PLUM procedure of SPSS to fit unequal variance and generalized signal detection models.

    PubMed

    DeCarlo, Lawrence T

    2003-02-01

    The recent addition of a procedure in SPSS for the analysis of ordinal regression models offers a simple means for researchers to fit the unequal variance normal signal detection model and other extended signal detection models. The present article shows how to implement the analysis and how to interpret the SPSS output. Examples of fitting the unequal variance normal model and other generalized signal detection models are given. The approach offers a convenient means for applying signal detection theory to a variety of research.

  14. Family practitioners' diagnostic decision-making processes regarding patients with respiratory tract infections: an observational study.

    PubMed

    Fischer, Thomas; Fischer, Susanne; Himmel, Wolfgang; Kochen, Michael M; Hummers-Pradier, Eva

    2008-01-01

    The influence of patient characteristics on family practitioners' (FPs') diagnostic decision making has mainly been investigated using indirect methods such as vignettes or questionnaires. Direct observation, borrowed from social and cultural anthropology, may be an alternative method for describing FPs' real-life behavior and may help in gaining insight into how FPs diagnose respiratory tract infections, which are frequent in primary care. To clarify FPs' diagnostic processes when treating patients suffering from symptoms of respiratory tract infection. This direct observation study was performed in 30 family practices using a checklist for patient complaints, history taking, physical examination, and diagnoses. The influence of patients' symptoms and complaints on the FPs' physical examination and diagnosis was calculated by logistic regression analyses. Dummy variables based on combinations of symptoms and complaints were constructed and tested against saturated (full) and backward regression models. In total, 273 patients (median age 37 years, 51% women) were included. The median number of symptoms described was 4 per patient, and most information was provided at the patients' own initiative. Multiple logistic regression analysis showed a strong association between patients' complaints and the physical examination. Frequent diagnoses were upper respiratory tract infection (URTI)/common cold (43%), bronchitis (26%), sinusitis (12%), and tonsillitis (11%). There were no significant statistical differences between "simple heuristic" models and saturated regression models in the diagnoses of bronchitis, sinusitis, and tonsillitis, indicating that simple heuristics are probably used by the FPs, whereas "URTI/common cold" was better explained by the full model. FPs tended to make their diagnosis based on a few patient symptoms and a limited physical examination. Simple heuristic models were almost as powerful in explaining most diagnoses as saturated models. Direct observation allowed for the study of decision making under real conditions, yielding both quantitative data and "qualitative" information about the FPs' performance. It is important for investigators to be aware of the specific disadvantages of the method (e.g., a possible observer effect).

  15. Cost-effectiveness analysis of the diarrhea alleviation through zinc and oral rehydration therapy (DAZT) program in rural Gujarat India: an application of the net-benefit regression framework.

    PubMed

    Shillcutt, Samuel D; LeFevre, Amnesty E; Fischer-Walker, Christa L; Taneja, Sunita; Black, Robert E; Mazumder, Sarmila

    2017-01-01

    This study evaluates the cost-effectiveness of the DAZT program for scaling up treatment of acute child diarrhea in Gujarat, India, using a net-benefit regression framework. Costs were calculated from societal and caregivers' perspectives and effectiveness was assessed in terms of coverage of zinc and of both zinc and Oral Rehydration Salt (ORS). Regression models were tested as simple linear regression, with a specified set of covariates, and with a specified set of covariates plus interaction terms; linear regression with endogenous treatment effects was used as the reference case. The DAZT program was cost-effective with over 95% certainty above $5.50 and $7.50 per appropriately treated child in the unadjusted and adjusted models respectively, with specifications including interaction terms being cost-effective with 85-97% certainty. Findings from this study should be combined with other evidence when considering decisions to scale up programs such as the DAZT program to promote the use of ORS and zinc to treat child diarrhea.
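
    In the net-benefit regression framework, cost and effect are collapsed into a single outcome, NB = λ·E − C, at a chosen willingness-to-pay λ, and that outcome is regressed on a treatment indicator; the treatment coefficient is the incremental net benefit. A rough sketch on assumed data only, without the covariates and endogenous-treatment specification used in the study:

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(2)
      n = 800
      df = pd.DataFrame({"treat": rng.integers(0, 2, n)})
      df["effect"] = rng.binomial(1, 0.4 + 0.2 * df["treat"])   # e.g. appropriately treated child
      df["cost"] = rng.normal(6, 2, n) + 1.0 * df["treat"]      # societal cost, arbitrary units

      for lam in [2, 5.5, 7.5, 10]:                             # willingness-to-pay grid
          df["nb"] = lam * df["effect"] - df["cost"]
          fit = smf.ols("nb ~ treat", data=df).fit()
          inb = fit.params["treat"]
          p_two = fit.pvalues["treat"]
          # One-sided probability that the incremental net benefit is positive.
          prob_ce = 1 - p_two / 2 if inb > 0 else p_two / 2
          print(f"lambda={lam}: INB={inb:.2f}, P(cost-effective)~{prob_ce:.2f}")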

  16. Detection of epistatic effects with logic regression and a classical linear regression model.

    PubMed

    Malina, Magdalena; Ickstadt, Katja; Schwender, Holger; Posch, Martin; Bogdan, Małgorzata

    2014-02-01

    To locate multiple interacting quantitative trait loci (QTL) influencing a trait of interest within experimental populations, methods such as Cockerham's model are usually applied. Within this framework, interactions are understood as the part of the joint effect of several genes that cannot be explained as the sum of their additive effects. However, if a change in the phenotype (such as disease) is caused by Boolean combinations of genotypes of several QTL, Cockerham's approach is often not capable of identifying them properly. To detect such interactions more efficiently, we propose a logic regression framework. Even though a larger number of models has to be considered with the logic regression approach (requiring more stringent multiple testing correction), the efficient representation of higher-order logic interactions in logic regression models leads to a significant increase in power to detect such interactions compared with Cockerham's approach. The increase in power is demonstrated analytically for a simple two-way interaction model and illustrated in more complex settings with a simulation study and a real data analysis.

  17. An index of effluent aquatic toxicity designed by partial least squares regression, using acute and chronic tests and expert judgements.

    PubMed

    Vindimian, Éric; Garric, Jeanne; Flammarion, Patrick; Thybaud, Éric; Babut, Marc

    1999-10-01

    The evaluation of the ecotoxicity of effluents requires a battery of biological tests on several species. In order to derive a summary parameter from such a battery, a single endpoint was calculated for all the tests: the EC10, obtained by nonlinear regression, with bootstrap evaluation of the confidence intervals. Principal component analysis was used to characterize and visualize the correlation between the tests. The table of the toxicity of the effluents was then submitted to a panel of experts, who classified the effluents according to the test results. Partial least squares (PLS) regression was used to fit the average value of the experts' judgements to the toxicity data, using a simple equation. Furthermore, PLS regression on partial data sets and other considerations resulted in an optimum battery, with two chronic tests and one acute test. The index is intended to be used for the classification of effluents based on their toxicity to aquatic species. Copyright © 1999 SETAC.
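
    A hedged sketch of the kind of PLS step described above: a battery of per-test toxicity endpoints (for example, transformed EC10 values) is mapped onto an average expert score with a two-component PLS regression, and the fitted model then serves as the summary index. The data and test names are invented for illustration only.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(3)
      n_effluents, n_tests = 40, 6
      X = rng.normal(size=(n_effluents, n_tests))        # e.g. -log10(EC10) for each bioassay
      expert_score = X[:, :3].mean(axis=1) + 0.1 * rng.normal(size=n_effluents)  # panel judgement

      pls = PLSRegression(n_components=2)
      pls.fit(X, expert_score)
      print("coefficients:", pls.coef_.ravel())          # weight of each test in the index
      index = pls.predict(X).ravel()                     # the summary toxicity index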

  18. An index of effluent aquatic toxicity designed by partial least squares regression, using acute and chronic tests and expert judgments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vindimian, E.; Garric, J.; Flammarion, P.

    1999-10-01

    The evaluation of the ecotoxicity of effluents requires a battery of biological tests on several species. In order to derive a summary parameter from such a battery, a single endpoint was calculated for all the tests: the EC10, obtained by nonlinear regression, with bootstrap evaluation of the confidence intervals. Principal component analysis was used to characterize and visualize the correlation between the tests. The table of the toxicity of the effluents was then submitted to a panel of experts, who classified the effluents according to the test results. Partial least squares (PLS) regression was used to fit the average value of the experts' judgments to the toxicity data, using a simple equation. Furthermore, PLS regression on partial data sets and other considerations resulted in an optimum battery, with two chronic tests and one acute test. The index is intended to be used for the classification of effluents based on their toxicity to aquatic species.

  19. Logistic Regression and Path Analysis Method to Analyze Factors influencing Students’ Achievement

    NASA Astrophysics Data System (ADS)

    Noeryanti, N.; Suryowati, K.; Setyawan, Y.; Aulia, R. R.

    2018-04-01

    Students' academic achievement cannot be separated from the influence of two factors, namely internal and external factors. The internal factors of the student consist of intelligence (X1), health (X2), interest (X3), and motivation (X4). The external factors consist of family environment (X5), school environment (X6), and society environment (X7). The objects of this research are eighth grade students of the school year 2016/2017 at SMPN 1 Jiwan Madiun, sampled by using simple random sampling. Primary data are obtained by distributing questionnaires. The method used in this study is binary logistic regression analysis, which aims to identify the internal and external factors that affect students' achievement and the direction of their effects. Path analysis was used to determine the factors that influence students' achievement directly, indirectly or totally. Based on the results of binary logistic regression, the variables that affect students' achievement are interest and motivation. Based on the results of the path analysis, the factors that have a direct impact on students' achievement are students' interest (59%) and students' motivation (27%), while the factors that have indirect influences on students' achievement are family environment (97%) and school environment (37%).

  20. Identification of molecular markers associated with mite resistance in coconut (Cocos nucifera L.).

    PubMed

    Shalini, K V; Manjunatha, S; Lebrun, P; Berger, A; Baudouin, L; Pirany, N; Ranganath, R M; Prasad, D Theertha

    2007-01-01

    Coconut mite (Aceria guerreronis 'Keifer') has become a major threat to Indian coconut (Cocos nucifera L.) cultivators and the processing industry. Chemical and biological control measures have proved to be costly, ineffective, and ecologically undesirable. Planting mite-resistant coconut cultivars is the most effective method of preventing yield loss and should form a major component of any integrated pest management strategy. Coconut genotypes, and mite-resistant and -susceptible accessions, were collected from different parts of South India. Thirty-two simple sequence repeat (SSR) and 7 RAPD primers were used for molecular analyses. In single-marker analysis, 9 SSR and 4 RAPD markers associated with mite resistance were identified. In stepwise multiple regression analysis of SSRs, a combination of 6 markers showed 100% association with mite infestation. Stepwise multiple regression analysis for RAPD data revealed that a combination of 3 markers accounted for 83.86% of mite resistance in the selected materials. Combined stepwise multiple regression analysis of RAPD and SSR data showed that a combination of 5 markers explained 100% of the association with mite resistance in coconut. Markers associated with mite resistance are important in coconut breeding programs and will facilitate the selection of mite-resistant plants at an early stage as well as mother plants for breeding programs.

  1. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens

    We report that accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibrations methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “submodel” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. Lastly, the sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
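
    A schematic of the blending idea under simplified assumptions: separate regression models are trained on restricted composition ranges, and their predictions are combined using weights derived from a first-pass, full-range prediction. This is a sketch of the logic only, not the ChemCam calibration; the ranges, weights, and data are invented.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(4)
      X = rng.normal(size=(300, 50))                     # stand-in for LIBS spectra
      y = 50 + 10 * X[:, 0] + 5 * X[:, 1] ** 2           # stand-in for wt.% composition

      full = PLSRegression(n_components=5).fit(X, y)     # full-range model
      low = PLSRegression(n_components=5).fit(X[y < 50], y[y < 50])
      high = PLSRegression(n_components=5).fit(X[y >= 50], y[y >= 50])

      def blended_predict(Xnew, lo=40.0, hi=60.0):
          """Blend low/high sub-models using the full-range prediction as the reference."""
          ref = full.predict(Xnew).ravel()
          w_high = np.clip((ref - lo) / (hi - lo), 0, 1)  # linear ramp between sub-models
          return (1 - w_high) * low.predict(Xnew).ravel() + w_high * high.predict(Xnew).ravel()

      print(blended_predict(X[:5]), y[:5])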

  2. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    DOE PAGES

    Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens; ...

    2016-12-15

    We report that accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibrations methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “submodel” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. Lastly, the sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.

  3. Mental chronometry with simple linear regression.

    PubMed

    Chen, J Y

    1997-10-01

    Typically, mental chronometry is performed by means of introducing an independent variable postulated to affect selectively some stage of a presumed multistage process. However, the effect could be a global one that spreads proportionally over all stages of the process. Currently, there is no method to test this possibility although simple linear regression might serve the purpose. In the present study, the regression approach was tested with tasks (memory scanning and mental rotation) that involved a selective effect and with a task (word superiority effect) that involved a global effect, by the dominant theories. The results indicate (1) the manipulation of the size of a memory set or of angular disparity affects the intercept of the regression function that relates the times for memory scanning with different set sizes or for mental rotation with different angular disparities and (2) the manipulation of context affects the slope of the regression function that relates the times for detecting a target character under word and nonword conditions. These ratify the regression approach as a useful method for doing mental chronometry.
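
    The regression logic above can be illustrated with simulated per-subject means: regressing condition-B times on condition-A times, an additive (selective-stage) effect shifts the intercept while leaving the slope near 1, whereas a proportional (global) effect changes the slope. A toy sketch, not the article's data:

      import numpy as np
      from scipy.stats import linregress

      rng = np.random.default_rng(5)
      rt_a = rng.normal(600, 80, 30)                       # baseline condition, ms

      additive = rt_a + 120 + rng.normal(0, 15, 30)        # selective stage: +120 ms
      proportional = 1.3 * rt_a + rng.normal(0, 15, 30)    # global effect: times scaled by 1.3

      for label, rt_b in [("additive", additive), ("proportional", proportional)]:
          fit = linregress(rt_a, rt_b)
          print(f"{label}: slope={fit.slope:.2f}, intercept={fit.intercept:.0f} ms")
      # additive -> slope ~1, intercept ~120; proportional -> slope ~1.3, intercept ~0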

  4. Velocity structure in long period variable star atmospheres

    NASA Technical Reports Server (NTRS)

    Pilachowski, C.; Wallerstein, G.; Willson, L. A.

    1980-01-01

    A regression analysis of the dependence of absorption line velocities on wavelength, line strength, excitation potential, and ionization potential is presented. The method determines the region of formation of the absorption lines for a given data and wavelength region. It is concluded that the scatter which is frequently found in velocity measurements of absorption lines in long period variables is probably the result of a shock of moderate amplitude located in or near the reversing layer and that the frequently observed correlation of velocity with excitation and ionization are a result of the velocity gradients produced by this shock in the atmosphere. A simple interpretation of the signs of the coefficients of the regression analysis is presented in terms of preshock, post shock, or across the shock, together with criteria for evaluating the validity of the fit. The amplitude of the reversing layer shock is estimated from an analysis of a series of plates for four long period variable stars along with the most probable stellar velocity for these stars.

  5. Strengths use as a secret of happiness: Another dimension of visually impaired individuals' psychological state.

    PubMed

    Matsuguma, Shinichiro; Kawashima, Motoko; Negishi, Kazuno; Sano, Fumiya; Mimura, Masaru; Tsubota, Kazuo

    2018-01-01

    It is well recognized that visual impairments (VI) worsen individuals' mental condition. However, little is known about the positive aspects including subjective happiness, positive emotions, and strengths. Therefore, the purpose of this study was to investigate the positive aspects of persons with VI including their subjective happiness, positive emotions, and strengths use. Positive aspects of persons with VI were measured using the Subjective Happiness Scale (SHS), the Scale of Positive and Negative Experience-Balance (SPANE-B), and the Strengths Use Scale (SUS). A cross-sectional analysis was utilized to examine personal information in a Tokyo sample (N = 44). We used a simple regression analysis and found significant relationships between the SHS or SPANE-B and SUS; on the contrary, VI-related variables were not correlated with them. A multiple regression analysis confirmed that SUS was a significant factor associated with both the SHS and SPANE-B. Strengths use might be a possible protective factor from the negative effects of VI.

  6. Statistical approaches to account for missing values in accelerometer data: Applications to modeling physical activity.

    PubMed

    Yue Xu, Selene; Nelson, Sandahl; Kerr, Jacqueline; Godbole, Suneeta; Patterson, Ruth; Merchant, Gina; Abramson, Ian; Staudenmayer, John; Natarajan, Loki

    2018-04-01

    Physical inactivity is a recognized risk factor for many chronic diseases. Accelerometers are increasingly used as an objective means to measure daily physical activity. One challenge in using these devices is missing data due to device nonwear. We used a well-characterized cohort of 333 overweight postmenopausal breast cancer survivors to examine missing data patterns of accelerometer outputs over the day. Based on these observed missingness patterns, we created pseudo-simulated datasets with realistic missing data patterns. We developed statistical methods to design imputation and variance weighting algorithms to account for missing data effects when fitting regression models. Bias and precision of each method were evaluated and compared. Our results indicated that not accounting for missing data in the analysis yielded unstable estimates in the regression analysis. Incorporating variance weights and/or subject-level imputation improved precision by >50%, compared to ignoring missing data. We recommend that these simple, easy-to-implement statistical tools be used to improve analysis of accelerometer data.

  7. Evaluating and Improving the SAMA (Segmentation Analysis and Market Assessment) Recruiting Model

    DTIC Science & Technology

    2015-06-01

    The analysis examines the relationship between the calculated SAMA potential and the actual 2014 performance. The scatterplot in Figure 8 shows a strong linear relationship between the SAMA calculated potential and the contracting achievement for 2014, with an R-squared value of 0.871 from a simple linear regression.

  8. Effective Surfactants Blend Concentration Determination for O/W Emulsion Stabilization by Two Nonionic Surfactants by Simple Linear Regression.

    PubMed

    Hassan, A K

    2015-01-01

    In this work, O/W emulsion sets were prepared using different concentrations of two nonionic surfactants. The two surfactants, Tween 80 (HLB=15.0) and Span 80 (HLB=4.3), were used in a fixed proportion of 0.55:0.45, respectively. The HLB value of the surfactant blend was fixed at 10.185. Surfactant blend concentrations ranged from 3% up to 19%. For each O/W emulsion set the conductivity was measured at room temperature (25±2°), 40, 50, 60, 70 and 80°. Applying simple linear regression by the least squares method to the temperature-conductivity data determines the effective surfactant blend concentration required to prepare the most stable O/W emulsion. These results were confirmed by physical stability centrifugation testing and phase inversion temperature range measurements. The results indicated that the relation representing the most stable O/W emulsion has the strongest direct linear relationship between temperature and conductivity, and this relationship is linear up to 80°. This work shows that the most stable O/W emulsion is identified by finding the maximum R² value when simple linear regression by least squares is applied to the temperature-conductivity data up to 80°; in addition, the true maximum slope is given by the equation with the maximum R² value. Because conditions change in a more complex formulation, the method for determining the effective surfactant blend concentration was verified by applying it to a more complex formulation, a 2% O/W miconazole nitrate cream, and the results indicate its reproducibility.
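
    A sketch of the selection rule described above: fit conductivity against temperature by least squares for each surfactant blend concentration and keep the concentration whose fit has the largest R². The temperatures follow the abstract; the conductivity numbers are invented placeholders.

      from scipy.stats import linregress

      temps = [25, 40, 50, 60, 70, 80]                     # °C
      # conductivity readings per blend concentration (illustrative numbers only)
      conductivity = {
          "3%":  [1.1, 1.4, 1.9, 2.1, 2.8, 3.0],
          "11%": [1.0, 1.6, 2.2, 2.8, 3.4, 4.0],
          "19%": [1.2, 1.5, 1.7, 2.6, 2.9, 3.8],
      }

      fits = {c: linregress(temps, k) for c, k in conductivity.items()}
      best = max(fits, key=lambda c: fits[c].rvalue ** 2)
      print(best, round(fits[best].rvalue ** 2, 3), round(fits[best].slope, 3))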

  9. THE DISTRIBUTION OF COOK’S D STATISTIC

    PubMed Central

    Muller, Keith E.; Mok, Mario Chen

    2013-01-01

    Cook (1977) proposed a diagnostic to quantify the impact of deleting an observation on the estimated regression coefficients of a General Linear Univariate Model (GLUM). Simulations of models with Gaussian response and predictors demonstrate that his suggestion of comparing the diagnostic to the median of the F for overall regression captures an erratically varying proportion of the values. We describe the exact distribution of Cook’s statistic for a GLUM with Gaussian predictors and response. We also present computational forms, simple approximations, and asymptotic results. A simulation supports the accuracy of the results. The methods allow accurate evaluation of a single value or the maximum value from a regression analysis. The approximations work well for a single value, but less well for the maximum. In contrast, the cut-point suggested by Cook provides widely varying tail probabilities. As with all diagnostics, the data analyst must use scientific judgment in deciding how to treat highlighted observations. PMID:24363487
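
    Cook's distance for each observation of an ordinary least-squares fit, together with the cut-off Cook suggested (the median of the F distribution for the overall regression), can be computed directly with statsmodels; a minimal sketch on simulated Gaussian data:

      import numpy as np
      import statsmodels.api as sm
      from scipy.stats import f

      rng = np.random.default_rng(6)
      X = sm.add_constant(rng.normal(size=(100, 3)))
      y = X @ np.array([1.0, 0.5, -0.3, 0.2]) + rng.normal(size=100)

      res = sm.OLS(y, X).fit()
      cooks_d, _ = res.get_influence().cooks_distance      # one value per observation

      p, n = X.shape[1], X.shape[0]
      cutoff = f.median(p, n - p)                          # Cook's suggested comparison point
      print((cooks_d > cutoff).sum(), "observations flagged out of", n)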

  10. Correlation Equation of Fault Size, Moment Magnitude, and Height of Tsunami Case Study: Historical Tsunami Database in Sulawesi

    NASA Astrophysics Data System (ADS)

    Julius, Musa, Admiral; Pribadi, Sugeng; Muzli, Muzli

    2018-03-01

    Sulawesi, one of the biggest islands in Indonesia, is located at the convergence of two macro-plates, Eurasia and the Pacific. The NOAA and Novosibirsk Tsunami Laboratory databases record more than 20 tsunamis in Sulawesi since 1820. Based on these data, the correlation between tsunami and earthquake parameters needs to be determined to verify past events. The magnitudes, fault sizes and tsunami heights used in this study were sourced from the NOAA and Novosibirsk Tsunami databases, supplemented by the Pacific Tsunami Warning Center (PTWC) catalog. This study aims to find the correlation between moment magnitude, fault size and tsunami height by simple regression. The steps of this research are data collection, processing, and regression analysis. The results show that moment magnitude, fault size and tsunami height are strongly correlated. This analysis is sufficient to confirm the accuracy of the historical tsunami records for Sulawesi in the NOAA, Novosibirsk Tsunami Laboratory and PTWC databases.

  11. Bias in logistic regression due to imperfect diagnostic test results and practical correction approaches.

    PubMed

    Valle, Denis; Lima, Joanna M Tucker; Millar, Justin; Amratia, Punam; Haque, Ubydul

    2015-11-04

    Logistic regression is a statistical model widely used in cross-sectional and cohort studies to identify and quantify the effects of potential disease risk factors. However, the impact of imperfect tests on adjusted odds ratios (and thus on the identification of risk factors) is under-appreciated. The purpose of this article is to draw attention to the problem associated with modelling imperfect diagnostic tests, and propose simple Bayesian models to adequately address this issue. A systematic literature review was conducted to determine the proportion of malaria studies that appropriately accounted for false-negatives/false-positives in a logistic regression setting. Inference from the standard logistic regression was also compared with that from three proposed Bayesian models using simulations and malaria data from the western Brazilian Amazon. A systematic literature review suggests that malaria epidemiologists are largely unaware of the problem of using logistic regression to model imperfect diagnostic test results. Simulation results reveal that statistical inference can be substantially improved when using the proposed Bayesian models versus the standard logistic regression. Finally, analysis of original malaria data with one of the proposed Bayesian models reveals that microscopy sensitivity is strongly influenced by how long people have lived in the study region, and an important risk factor (i.e., participation in forest extractivism) is identified that would have been missed by standard logistic regression. Given the numerous diagnostic methods employed by malaria researchers and the ubiquitous use of logistic regression to model the results of these diagnostic tests, this paper provides critical guidelines to improve data analysis practice in the presence of misclassification error. Easy-to-use code that can be readily adapted to WinBUGS is provided, enabling straightforward implementation of the proposed Bayesian models.
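
    The article supplies WinBUGS code; purely as an illustration of the idea (the observed test result reflects the true status filtered through imperfect sensitivity and specificity), a rough PyMC analogue is sketched below. The priors, variable names, and simulated data are assumptions, not the authors' models.

      import numpy as np
      import pymc as pm

      rng = np.random.default_rng(7)
      n = 500
      x = rng.normal(size=n)                               # e.g. years lived in the region
      true_status = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.8 * x))))
      # observed result corrupted by sensitivity 0.8 and specificity 0.95
      y_obs = np.where(true_status == 1, rng.binomial(1, 0.8, n), rng.binomial(1, 0.05, n))

      with pm.Model():
          b0 = pm.Normal("b0", 0, 2)
          b1 = pm.Normal("b1", 0, 2)
          sens = pm.Beta("sens", 8, 2)                     # informative priors on test accuracy
          spec = pm.Beta("spec", 18, 2)
          p_true = pm.math.invlogit(b0 + b1 * x)
          p_obs = sens * p_true + (1 - spec) * (1 - p_true)
          pm.Bernoulli("y", p=p_obs, observed=y_obs)
          idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

      print(idata.posterior["b1"].mean().item())           # compare with the naive logistic estimate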

  12. Suppression Situations in Multiple Linear Regression

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2006-01-01

    This article proposes alternative expressions for the two most prevailing definitions of suppression without resorting to the standardized regression modeling. The formulation provides a simple basis for the examination of their relationship. For the two-predictor regression, the author demonstrates that the previous results in the literature are…

  13. Least Squares Moving-Window Spectral Analysis.

    PubMed

    Lee, Young Jong

    2017-08-01

    Least squares regression is proposed as a moving-windows method for analysis of a series of spectra acquired as a function of external perturbation. The least squares moving-window (LSMW) method can be considered an extended form of the Savitzky-Golay differentiation for nonuniform perturbation spacing. LSMW is characterized in terms of moving-window size, perturbation spacing type, and intensity noise. Simulation results from LSMW are compared with results from other numerical differentiation methods, such as single-interval differentiation, autocorrelation moving-window, and perturbation correlation moving-window methods. It is demonstrated that this simple LSMW method can be useful for quantitative analysis of nonuniformly spaced spectral data with high frequency noise.
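
    The LSMW idea reduces to sliding a window along the (possibly non-uniformly spaced) perturbation axis and taking the least-squares slope within each window as the local derivative of every spectral channel. A generic sketch, not the author's implementation:

      import numpy as np

      def lsmw_slopes(perturbation, spectra, window=5):
          """Least-squares moving-window first derivative.

          perturbation : (n,) possibly non-uniform perturbation values
          spectra      : (n, n_channels) spectra measured at each perturbation
          Returns the local slope of every channel at each window centre.
          """
          half = window // 2
          slopes = []
          for i in range(half, len(perturbation) - half):
              x = perturbation[i - half:i + half + 1]
              Y = spectra[i - half:i + half + 1]
              X = np.column_stack([np.ones_like(x), x])
              beta, *_ = np.linalg.lstsq(X, Y, rcond=None)   # intercept and slope per channel
              slopes.append(beta[1])
          return np.array(slopes)

      t = np.sort(np.random.default_rng(8).uniform(0, 10, 40))      # non-uniform spacing
      spec = np.sin(t)[:, None] + 0.01 * np.random.default_rng(9).normal(size=(40, 3))
      print(lsmw_slopes(t, spec).shape)                             # (36, 3) for window=5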

  14. Non-ignorable missingness in logistic regression.

    PubMed

    Wang, Joanna J J; Bartlett, Mark; Ryan, Louise

    2017-08-30

    Nonresponses and missing data are common in observational studies. Ignoring or inadequately handling missing data may lead to biased parameter estimation, incorrect standard errors and, as a consequence, incorrect statistical inference and conclusions. We present a strategy for modelling non-ignorable missingness where the probability of nonresponse depends on the outcome. Using a simple case of logistic regression, we quantify the bias in regression estimates and show the observed likelihood is non-identifiable under non-ignorable missing data mechanism. We then adopt a selection model factorisation of the joint distribution as the basis for a sensitivity analysis to study changes in estimated parameters and the robustness of study conclusions against different assumptions. A Bayesian framework for model estimation is used as it provides a flexible approach for incorporating different missing data assumptions and conducting sensitivity analysis. Using simulated data, we explore the performance of the Bayesian selection model in correcting for bias in a logistic regression. We then implement our strategy using survey data from the 45 and Up Study to investigate factors associated with worsening health from the baseline to follow-up survey. Our findings have practical implications for the use of the 45 and Up Study data to answer important research questions relating to health and quality-of-life. Copyright © 2017 John Wiley & Sons, Ltd.

  15. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing

    PubMed Central

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-01-01

    Aims A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R2), using R2 as the primary metric of assay agreement. However, the use of R2 alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. Methods We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing assays (NGS). NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Results Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. Conclusions The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. PMID:28747393
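
    A compact sketch of the three analyses on paired quantitative outputs from two assays (simulated values standing in for, say, variant allele fractions): ordinary least-squares regression of one assay on the other, Deming regression via its closed form assuming equal error variances, and Bland-Altman bias with 95% limits of agreement. Illustrative only; the article's data are NGS validation sets.

      import numpy as np
      from scipy.stats import linregress

      rng = np.random.default_rng(10)
      truth = rng.uniform(0.05, 0.95, 60)
      assay_ref = truth + rng.normal(0, 0.02, 60)
      assay_new = 0.02 + 1.05 * truth + rng.normal(0, 0.02, 60)   # constant + proportional error

      # Simple (ordinary least squares) regression of new on reference.
      ols = linregress(assay_ref, assay_new)

      # Deming regression, assuming equal error variance in both assays (lambda = 1).
      sxx, syy = np.var(assay_ref, ddof=1), np.var(assay_new, ddof=1)
      sxy = np.cov(assay_ref, assay_new, ddof=1)[0, 1]
      dem_slope = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
      dem_intercept = assay_new.mean() - dem_slope * assay_ref.mean()

      # Bland-Altman: mean bias and 95% limits of agreement.
      diff = assay_new - assay_ref
      bias, loa = diff.mean(), 1.96 * diff.std(ddof=1)

      print(f"OLS slope {ols.slope:.3f}, Deming slope {dem_slope:.3f}, "
            f"bias {bias:.3f} (+/- {loa:.3f})")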

  16. Using Dual Regression to Investigate Network Shape and Amplitude in Functional Connectivity Analyses

    PubMed Central

    Nickerson, Lisa D.; Smith, Stephen M.; Öngür, Döst; Beckmann, Christian F.

    2017-01-01

    Independent Component Analysis (ICA) is one of the most popular techniques for the analysis of resting state FMRI data because it has several advantageous properties when compared with other techniques. Most notably, in contrast to a conventional seed-based correlation analysis, it is model-free and multivariate, thus switching the focus from evaluating the functional connectivity of single brain regions identified a priori to evaluating brain connectivity in terms of all brain resting state networks (RSNs) that simultaneously engage in oscillatory activity. Furthermore, typical seed-based analysis characterizes RSNs in terms of spatially distributed patterns of correlation (typically by means of simple Pearson's coefficients) and thereby confounds together amplitude information of oscillatory activity and noise. ICA and other regression techniques, on the other hand, retain magnitude information and therefore can be sensitive to both changes in the spatially distributed nature of correlations (differences in the spatial pattern or “shape”) as well as the amplitude of the network activity. Furthermore, motion can mimic amplitude effects so it is crucial to use a technique that retains such information to ensure that connectivity differences are accurately localized. In this work, we investigate the dual regression approach that is frequently applied with group ICA to assess group differences in resting state functional connectivity of brain networks. We show how ignoring amplitude effects and how excessive motion corrupts connectivity maps and results in spurious connectivity differences. We also show how to implement the dual regression to retain amplitude information and how to use dual regression outputs to identify potential motion effects. Two key findings are that using a technique that retains magnitude information, e.g., dual regression, and using strict motion criteria are crucial for controlling both network amplitude and motion-related amplitude effects, respectively, in resting state connectivity analyses. We illustrate these concepts using realistic simulated resting state FMRI data and in vivo data acquired in healthy subjects and patients with bipolar disorder and schizophrenia. PMID:28348512
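
    Dual regression itself is two least-squares steps per subject: (1) spatial regression of the group maps against the subject's 4D data to obtain subject-specific time courses, and (2) temporal regression of those time courses against the same data to obtain subject-specific maps. Variance-normalising the stage-1 time courses before stage 2 is what moves network amplitude into the maps, which is the point stressed above. A toy sketch with random arrays standing in for FMRI data:

      import numpy as np

      rng = np.random.default_rng(11)
      n_time, n_voxels, n_comp = 200, 5000, 10
      group_maps = rng.normal(size=(n_comp, n_voxels))     # from group ICA
      data = rng.normal(size=(n_time, n_voxels))           # one subject's time x voxel matrix

      # Stage 1: spatial regression -> subject time courses (time x components).
      tcs, *_ = np.linalg.lstsq(group_maps.T, data.T, rcond=None)
      tcs = tcs.T

      # Variance-normalised copy of the time courses.
      tcs_norm = tcs / tcs.std(axis=0, ddof=1)

      # Stage 2 with raw time courses: maps reflect network "shape" only.
      maps_shape, *_ = np.linalg.lstsq(tcs, data, rcond=None)
      # Stage 2 with normalised time courses: amplitude ends up in the maps as well.
      maps_shape_and_amp, *_ = np.linalg.lstsq(tcs_norm, data, rcond=None)

      print(maps_shape.shape, maps_shape_and_amp.shape)    # (10, 5000) each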

  17. Design of a global soil moisture initialization procedure for the simple biosphere model

    NASA Technical Reports Server (NTRS)

    Liston, G. E.; Sud, Y. C.; Walker, G. K.

    1993-01-01

    Global soil moisture and land-surface evapotranspiration fields are computed using an analysis scheme based on the Simple Biosphere (SiB) soil-vegetation-atmosphere interaction model. The scheme is driven with observed precipitation and potential evapotranspiration, where the potential evapotranspiration is computed following the surface air temperature-potential evapotranspiration regression of Thornthwaite (1948). The observed surface air temperature is corrected to reflect potential (zero soil moisture stress) conditions by letting the ratio of actual transpiration to potential transpiration be a function of normalized difference vegetation index (NDVI). Soil moisture, evapotranspiration, and runoff data are generated on a daily basis for a 10-year period, January 1979 through December 1988, using observed precipitation gridded at a 4 deg by 5 deg resolution.

  18. A Permutation Approach for Selecting the Penalty Parameter in Penalized Model Selection

    PubMed Central

    Sabourin, Jeremy A; Valdar, William; Nobel, Andrew B

    2015-01-01

    Summary We describe a simple, computationally efficient, permutation-based procedure for selecting the penalty parameter in LASSO penalized regression. The procedure, permutation selection, is intended for applications where variable selection is the primary focus, and can be applied in a variety of structural settings, including that of generalized linear models. We briefly discuss connections between permutation selection and existing theory for the LASSO. In addition, we present a simulation study and an analysis of real biomedical data sets in which permutation selection is compared with selection based on the following: cross-validation (CV), the Bayesian information criterion (BIC), Scaled Sparse Linear Regression, and a selection method based on recently developed testing procedures for the LASSO. PMID:26243050
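
    One simple variant of the permutation idea: for every permutation of the response, record the smallest penalty that sets all LASSO coefficients to zero, then take an upper quantile of those values as the penalty for the real response. The sketch below uses scikit-learn's 1/(2n) least-squares scaling and illustrates the logic only; it is not the exact procedure of the article.

      import numpy as np
      from sklearn.linear_model import Lasso
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(12)
      n, p = 200, 50
      X = StandardScaler().fit_transform(rng.normal(size=(n, p)))
      beta = np.zeros(p); beta[:3] = [1.0, -0.8, 0.6]
      y = X @ beta + rng.normal(size=n)
      y = y - y.mean()

      # Smallest alpha that zeroes all coefficients is max|X'y| / n under sklearn's scaling.
      def alpha_max(X, y):
          return np.max(np.abs(X.T @ y)) / len(y)

      perm_alphas = [alpha_max(X, rng.permutation(y)) for _ in range(100)]
      alpha_perm = np.quantile(perm_alphas, 0.95)          # penalty that silences noise-only fits

      fit = Lasso(alpha=alpha_perm).fit(X, y)
      print("selected variables:", np.flatnonzero(fit.coef_))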

  19. Is Rest Really Rest? Resting State Functional Connectivity during Rest and Motor Task Paradigms.

    PubMed

    Jurkiewicz, Michael T; Crawley, Adrian P; Mikulis, David J

    2018-04-18

    Numerous studies have identified the default mode network (DMN) within the brain of healthy individuals, which has been attributed to the ongoing mental activity of the brain during the wakeful resting-state. While engaged during specific resting-state fMRI paradigms, it remains unclear as to whether traditional block-design simple movement fMRI experiments significantly influence the default mode network or other areas. Using blood-oxygen level dependent (BOLD) fMRI we characterized the pattern of functional connectivity in healthy subjects during a resting-state paradigm and compared this to the same resting-state analysis performed on motor task data residual time courses after regressing out the task paradigm. Using seed-voxel analysis to define the DMN, the executive control network (ECN), and sensorimotor, auditory and visual networks, the resting-state analysis of the residual time courses demonstrated reduced functional connectivity in the motor network and reduced connectivity between the insula and the ECN compared to the standard resting-state datasets. Overall, performance of simple self-directed motor tasks does little to change the resting-state functional connectivity across the brain, especially in non-motor areas. This would suggest that previously acquired fMRI studies incorporating simple block-design motor tasks could be mined retrospectively for assessment of the resting-state connectivity.

  20. Simple linear and multivariate regression models.

    PubMed

    Rodríguez del Águila, M M; Benítez-Parejo, N

    2011-01-01

    In biomedical research it is common to find problems in which we wish to relate a response variable to one or more variables capable of describing the behaviour of the former variable by means of mathematical models. Regression techniques are used to this effect, in which an equation is determined relating the two variables. While such equations can have different forms, linear equations are the most widely used form and are easy to interpret. The present article describes simple and multiple linear regression models, how they are calculated, and how their applicability assumptions are checked. Illustrative examples are provided, based on the use of the freely accessible R program. Copyright © 2011 SEICAP. Published by Elsevier Espana. All rights reserved.

  1. Visual Analysis of North Atlantic Hurricane Trends Using Parallel Coordinates and Statistical Techniques

    DTIC Science & Technology

    2008-07-07

    analyzing multivariate data sets. The system was developed using the Java Development Kit (JDK) version 1.5, and it yields interactive performance on a... script and captures output from MATLAB’s “regress” and “stepwisefit” utilities that perform simple and stepwise regression, respectively.

  2. Analysis of precision and accuracy in a simple model of machine learning

    NASA Astrophysics Data System (ADS)

    Lee, Julian

    2017-12-01

    Machine learning is a procedure where a model for the world is constructed from a training set of examples. It is important that the model should capture relevant features of the training set, and at the same time make correct prediction for examples not included in the training set. I consider the polynomial regression, the simplest method of learning, and analyze the accuracy and precision for different levels of the model complexity.
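
    The accuracy-precision trade-off described above can be reproduced in a few lines by fitting polynomials of increasing degree to a small training set and comparing training error with error on held-out points; a generic sketch:

      import numpy as np

      rng = np.random.default_rng(13)
      x_train, x_test = rng.uniform(-1, 1, 20), rng.uniform(-1, 1, 200)
      f = lambda x: np.sin(2 * np.pi * x)
      y_train = f(x_train) + 0.2 * rng.normal(size=x_train.size)
      y_test = f(x_test) + 0.2 * rng.normal(size=x_test.size)

      for degree in (1, 3, 9, 15):
          coefs = np.polyfit(x_train, y_train, degree)          # least-squares polynomial fit
          train_err = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
          test_err = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
          print(f"degree {degree:2d}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
      # Training error keeps falling with degree; test error turns back up once the
      # model is too complex for the 20 training points.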

  3. On identifying relationships between the flood scaling exponent and basin attributes.

    PubMed

    Medhi, Hemanta; Tripathi, Shivam

    2015-07-01

    Floods are known to exhibit self-similarity and follow scaling laws that form the basis of regional flood frequency analysis. However, the relationship between basin attributes and the scaling behavior of floods is still not fully understood. Identifying these relationships is essential for drawing connections between hydrological processes in a basin and the flood response of the basin. The existing studies mostly rely on simulation models to draw these connections. This paper proposes a new methodology that draws connections between basin attributes and the flood scaling exponents by using observed data. In the proposed methodology, region-of-influence approach is used to delineate homogeneous regions for each gaging station. Ordinary least squares regression is then applied to estimate flood scaling exponents for each homogeneous region, and finally stepwise regression is used to identify basin attributes that affect flood scaling exponents. The effectiveness of the proposed methodology is tested by applying it to data from river basins in the United States. The results suggest that flood scaling exponent is small for regions having (i) large abstractions from precipitation in the form of large soil moisture storages and high evapotranspiration losses, and (ii) large fractions of overland flow compared to base flow, i.e., regions having fast-responding basins. Analysis of simple scaling and multiscaling of floods showed evidence of simple scaling for regions in which the snowfall dominates the total precipitation.

  4. Association between Organizational Commitment and Personality Traits of Faculty Members of Ahvaz Jundishapur University of Medical Sciences

    PubMed Central

    Khiavi, Farzad Faraji; Dashti, Rezvan; Mokhtari, Saeedeh

    2016-01-01

    Introduction Individual characteristics are important factors influencing organizational commitment. Also, committed human resources can lead organizations to performance improvement as well as personal and organizational achievements. This research aimed to determine the association between organizational commitment and personality traits among faculty members of Ahvaz Jundishapur University of Medical Sciences. Methods The research population of this cross-sectional study was the faculty members of Ahvaz Jundishapur University of Medical Sciences (Ahvaz, Iran). The sample size was determined to be 83. Data collection instruments were the Allen and Meyer questionnaire for organizational commitment and the NEO inventory for personality traits. The data were analyzed through Pearson's product-moment correlation and the independent samples t-test, ANOVA, and simple linear regression analysis (SLR) by SPSS. Results Continuance commitment showed a significant positive association with neuroticism, extroversion, agreeableness, and conscientiousness. Normative commitment showed a significant positive association with conscientiousness and a negative association with extroversion (p = 0.001). Openness had a positive association with affective commitment. Openness and agreeableness, among the five personality traits, had the greatest effect on organizational commitment, as indicated by simple linear regression analysis. Conclusion Faculty members' personality traits showed a significant association with their organizational commitment. Determining appropriate characteristic criteria for faculty members may lead to employing committed personnel to accomplish the University's objectives and tasks. PMID:27123222

  5. Relationship between acute kidney injury and serum procalcitonin (PCT) concentration in critically ill patients with influenza infection.

    PubMed

    Rodríguez, A; Reyes, L F; Monclou, J; Suberviola, B; Bodí, M; Sirgo, G; Solé-Violán, J; Guardiola, J; Barahona, D; Díaz, E; Martín-Loeches, I; Restrepo, M I

    2018-02-09

    Serum procalcitonin (PCT) concentration could be increased in patients with renal dysfunction in the absence of bacterial infection. To determine the interactions among serum renal biomarkers of acute kidney injury (AKI) and serum PCT concentration, in patients admitted to the intensive care unit (ICU) due to lung influenza infection. Secondary analysis of a prospective multicentre observational study. 148 Spanish ICUs. ICU patients admitted with influenza infection without bacterial co-infection. Clinical, laboratory and hemodynamic variables were recorded. AKI was classified as AKI I or II based on creatinine (Cr) concentrations (≥1.60-2.50mg/dL and Cr≥2.51-3.99mg/dL, respectively). Patients with chronic renal disease, receiving renal replacement treatment or with Cr>4mg/dL were excluded. Spearman's correlation, simple and multiple linear regression analysis were performed. None. Out of 663 patients included in the study, 52 (8.2%) and 10 (1.6%) developed AKI I and II, respectively. Patients with AKI were significantly older, had more comorbid conditions and were more severely ill. PCT concentrations were higher in patients with AKI (2.62 [0.60-10.0]ng/mL vs. 0.40 [0.13-1.20]ng/mL, p=0.002). Weak correlations between Cr/PCT (rho=0.18) and Urea (U)/PCT (rho=0.19) were identified. Simple linear regression showed a weak relationship between Cr/U and PCT concentrations (Cr R²=0.03 and U R²=0.018). Similar results were observed during multiple linear regression analysis (Cr R²=0.046 and U R²=0.013). Although PCT concentrations were slightly higher in patients with AKI, high PCT concentrations are not explained by AKI and could be a warning sign of a potential bacterial infection. Copyright © 2018 Elsevier España, S.L.U. y SEMICYUC. All rights reserved.

  6. Prevalence of vitamin D deficiency and associated factors in women and newborns in the immediate postpartum period

    PubMed Central

    do Prado, Mara Rúbia Maciel Cardoso; Oliveira, Fabiana de Cássia Carvalho; Assis, Karine Franklin; Ribeiro, Sarah Aparecida Vieira; do Prado, Pedro Paulo; Sant'Ana, Luciana Ferreira da Rocha; Priore, Silvia Eloiza; Franceschini, Sylvia do Carmo Castro

    2015-01-01

    Abstract Objective: To assess the prevalence of vitamin D deficiency and its associated factors in women and their newborns in the postpartum period. Methods: This cross-sectional study evaluated vitamin D deficiency/insufficiency in 226 women and their newborns in Viçosa (Minas Gerais, BR) between December 2011 and November 2012. Cord blood and venous maternal blood were collected to evaluate the following biochemical parameters: vitamin D, alkaline phosphatase, calcium, phosphorus and parathyroid hormone. Poisson regression analysis, with a confidence interval of 95%, was applied to assess vitamin D deficiency and its associated factors. Multiple linear regression analysis was performed to identify factors associated with 25(OH)D deficiency in the newborns and women from the study. The criteria for variable inclusion in the multiple linear regression model was the association with the dependent variable in the simple linear regression analysis, considering p<0.20. Significance level was α <5%. Results: From 226 women included, 200 (88.5%) were 20-44 years old; the median age was 28 years. Deficient/insufficient levels of vitamin D were found in 192 (85%) women and in 182 (80.5%) neonates. The maternal 25(OH)D and alkaline phosphatase levels were independently associated with vitamin D deficiency in infants. Conclusions: This study identified a high prevalence of vitamin D deficiency and insufficiency in women and newborns and the association between maternal nutritional status of vitamin D and their infants' vitamin D status. PMID:26100593

  7. Skeletal height estimation from regression analysis of sternal lengths in a Northwest Indian population of Chandigarh region: a postmortem study.

    PubMed

    Singh, Jagmahender; Pathak, R K; Chavali, Krishnadutt H

    2011-03-20

    Skeletal height estimation from regression analysis of eight sternal lengths in the subjects of Chandigarh zone of Northwest India is the topic of discussion in this study. Analysis of eight sternal lengths (length of manubrium, length of mesosternum, combined length of manubrium and mesosternum, total sternal length and first four intercostal lengths of mesosternum) measured from 252 male and 91 female sternums obtained at postmortems revealed that mean cadaver stature and sternal lengths were more in North Indians and males than the South Indians and females. Except intercostal lengths, all the sternal lengths were positively correlated with stature of the deceased in both sexes (P < 0.001). The multiple regression analysis of sternal lengths was found more useful than the linear regression for stature estimation. Using multivariate regression analysis, the combined length of manubrium and mesosternum in both sexes and the length of manubrium along with 2nd and 3rd intercostal lengths of mesosternum in males were selected as best estimators of stature. Nonetheless, the stature of males can be predicted with SEE of 6.66 (R(2) = 0.16, r = 0.318) from a combination of MBL+BL_3+LM+BL_2, and in females from MBL only, it can be estimated with SEE of 6.65 (R(2) = 0.10, r = 0.318), whereas from the multiple regression analysis of pooled data, stature can be estimated with SEE of 6.97 (R(2) = 0.387, r = 575) from the combination of MBL+LM+BL_2+TSL+BL_3. The R(2) and F-ratio were found to be statistically significant for almost all the variables in both the sexes, except 4th intercostal length in males and 2nd to 4th intercostal lengths in females. The 'major' sternal lengths were more useful than the 'minor' ones for stature estimation. The universal regression analysis used by Kanchan et al. [39], when applied to sternal lengths, gave satisfactory estimates of stature for males only, but female stature was comparatively better estimated from simple linear regressions. However, they are not proposed for subjects of known sex, as they underestimate male and overestimate female stature. Intercostal lengths were found to be poor estimators of stature (P < 0.05), and sternal lengths also exhibit weaker correlation coefficients and higher standard errors of estimate. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  8. A Primer on Logistic Regression.

    ERIC Educational Resources Information Center

    Woldbeck, Tanya

    This paper introduces logistic regression as a viable alternative when the researcher is faced with variables that are not continuous. If one is to use simple regression, the dependent variable must be measured on a continuous scale. In the behavioral sciences, it may not always be appropriate or possible to have a measured dependent variable on a…

  9. Testing hypotheses for differences between linear regression lines

    Treesearch

    Stanley J. Zarnoch

    2009-01-01

    Five hypotheses are identified for testing differences between simple linear regression lines. The distinctions between these hypotheses are based on a priori assumptions and illustrated with full and reduced models. The contrast approach is presented as an easy and complete method for testing for overall differences between the regressions and for making pairwise...
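
    The full-versus-reduced-model idea mentioned above amounts to comparing nested least-squares fits with an F test: the group term tests for different intercepts and the interaction term for different slopes. A minimal sketch with two invented groups:

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf
      from statsmodels.stats.anova import anova_lm

      rng = np.random.default_rng(14)
      n = 120
      df = pd.DataFrame({"x": rng.uniform(0, 10, n),
                         "group": rng.integers(0, 2, n)})
      df["y"] = 2 + 1.5 * df["x"] + 0.4 * df["group"] * df["x"] + rng.normal(0, 1, n)

      reduced = smf.ols("y ~ x", data=df).fit()                     # one common line
      full = smf.ols("y ~ x * C(group)", data=df).fit()             # separate intercepts and slopes

      print(anova_lm(reduced, full))                                # overall test of any difference
      print(full.pvalues[["C(group)[T.1]", "x:C(group)[T.1]"]])     # intercept and slope contrasts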

  10. Improving Lidar-based Aboveground Biomass Estimation with Site Productivity for Central Hardwood Forests, USA

    NASA Astrophysics Data System (ADS)

    Shao, G.; Gallion, J.; Fei, S.

    2016-12-01

    Sound forest aboveground biomass estimation is required to monitor diverse forest ecosystems and their impacts on the changing climate. Lidar-based regression models have provided promising biomass estimates in most forest ecosystems. However, considerable uncertainties in biomass estimation have been reported in temperate hardwood and hardwood-dominated mixed forests. Varied site productivity in temperate hardwood forests largely diversifies height and diameter growth rates, which significantly reduces the correlation between tree height and diameter at breast height (DBH) in mature and complex forests. It is, therefore, difficult to use height-based lidar metrics to predict DBH-based field-measured biomass through a simple regression model that ignores the variation in site productivity. In this study, we established a multi-dimension nonlinear regression model incorporating lidar metrics and site productivity classes derived from soil features. In the regression model, lidar metrics provided horizontal and vertical structural information and productivity classes differentiated good and poor forest sites. The selection and combination of lidar metrics were discussed. Multiple regression models were employed and compared. Uncertainty analysis was applied to the best-fit model. The effects of site productivity on the lidar-based biomass model were addressed.

  11. Simple and multiple linear regression: sample size considerations.

    PubMed

    Hanley, James A

    2016-11-01

    The suggested "two subjects per variable" (2SPV) rule of thumb in the Austin and Steyerberg article is a chance to bring out some long-established and quite intuitive sample size considerations for both simple and multiple linear regression. This article distinguishes two of the major uses of regression models that imply very different sample size considerations, neither served well by the 2SPV rule. The first is etiological research, which contrasts mean Y levels at differing "exposure" (X) values and thus tends to focus on a single regression coefficient, possibly adjusted for confounders. The second research genre guides clinical practice. It addresses Y levels for individuals with different covariate patterns or "profiles." It focuses on the profile-specific (mean) Y levels themselves, estimating them via linear compounds of regression coefficients and covariates. By drawing on long-established closed-form variance formulae that lie beneath the standard errors in multiple regression, and by rearranging them for heuristic purposes, one arrives at quite intuitive sample size considerations for both research genres. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. A Semiparametric Change-Point Regression Model for Longitudinal Observations.

    PubMed

    Xing, Haipeng; Ying, Zhiliang

    2012-12-01

    Many longitudinal studies involve relating an outcome process to a set of possibly time-varying covariates, giving rise to the usual regression models for longitudinal data. When the purpose of the study is to investigate the covariate effects when experimental environment undergoes abrupt changes or to locate the periods with different levels of covariate effects, a simple and easy-to-interpret approach is to introduce change-points in regression coefficients. In this connection, we propose a semiparametric change-point regression model, in which the error process (stochastic component) is nonparametric and the baseline mean function (functional part) is completely unspecified, the observation times are allowed to be subject-specific, and the number, locations and magnitudes of change-points are unknown and need to be estimated. We further develop an estimation procedure which combines the recent advance in semiparametric analysis based on counting process argument and multiple change-points inference, and discuss its large sample properties, including consistency and asymptotic normality, under suitable regularity conditions. Simulation results show that the proposed methods work well under a variety of scenarios. An application to a real data set is also given.

  13. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing.

    PubMed

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-02-01

    A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R 2 ), using R 2 as the primary metric of assay agreement. However, the use of R 2 alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing assays (NGS). NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  14. Factors associated with active commuting to work among women.

    PubMed

    Bopp, Melissa; Child, Stephanie; Campbell, Matthew

    2014-01-01

    Active commuting (AC), the act of walking or biking to work, has notable health benefits though rates of AC remain low among women. This study used a social-ecological framework to examine the factors associated with AC among women. A convenience sample of employed, working women (n = 709) completed an online survey about their mode of travel to work. Individual, interpersonal, institutional, community, and environmental influences were assessed. Basic descriptive statistics and frequencies described the sample. Simple logistic regression models examined associations with the independent variables with AC participation and multiple logistic regression analysis determined the relative influence of social ecological factors on AC participation. The sample was primarily middle-aged (44.09±11.38 years) and non-Hispanic White (92%). Univariate analyses revealed several individual, interpersonal, institutional, community and environmental factors significantly associated with AC. The multivariable logistic regression analysis results indicated that significant factors associated with AC included number of children, income, perceived behavioral control, coworker AC, coworker AC normative beliefs, employer and community supports for AC, and traffic. The results of this study contribute to the limited body of knowledge on AC participation for women and may help to inform gender-tailored interventions to enhance AC behavior and improve health.

  15. Analysis of hospital cost outcome of DHA-rich fish-oil supplementation in pregnancy: Evidence from a randomized controlled trial.

    PubMed

    Ahmed, Sharmina; Makrides, Maria; Sim, Nicholas; McPhee, Andy; Quinlivan, Julie; Gibson, Robert; Umberger, Wendy

    2015-12-01

    Recent research has emphasized the nutritional benefits of omega-3 long chain polyunsaturated fatty acids (LCPUFAs) during pregnancy. Based on a double-blind randomised controlled trial named "DHA to Optimize Mother and Infant Outcome" (DOMInO), we examined how omega-3 DHA supplementation during pregnancy may affect pregnancy related in-patient hospital costs. We conducted an econometric analysis based on ordinary least squares and quantile regressions with bootstrapped standard errors. Using these approaches, we also examined whether smoking, drinking, maternal age and BMI could influence the effect of DHA supplementation during pregnancy on hospital costs. Our regressions showed that in-patient hospital costs could decrease by AUD92 (P<0.05) on average per singleton pregnancy when DHA supplements were consumed during pregnancy. Our regression results also showed that the cost savings to the Australian public hospital system could be between AUD15 and AUD51 million per year. Given that a simple intervention like DHA-rich fish-oil supplementation could generate savings to the public, it may be worthwhile from a policy perspective to encourage DHA supplementation among pregnant women. Copyright © 2015 Elsevier Ltd. All rights reserved.
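    As a rough illustration of the estimation strategy (an ordinary least squares fit alongside a quantile regression fit), the sketch below uses fabricated cost data rather than the DOMInO trial records; the variable names and effect sizes are assumptions:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Fabricated illustration: hospital cost (AUD) vs. DHA supplementation indicator
    rng = np.random.default_rng(8)
    n = 600
    dha = rng.integers(0, 2, n)
    cost = 5000 - 90 * dha + rng.gamma(shape=2.0, scale=800, size=n)   # right-skewed costs
    df = pd.DataFrame({"cost": cost, "dha": dha})

    ols_fit = smf.ols("cost ~ dha", data=df).fit()              # effect on the mean cost
    q75_fit = smf.quantreg("cost ~ dha", data=df).fit(q=0.75)   # effect at the 75th percentile
    print(ols_fit.params["dha"], q75_fit.params["dha"])
    ```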

  16. Development of Super-Ensemble techniques for ocean analyses: the Mediterranean Sea case

    NASA Astrophysics Data System (ADS)

    Pistoia, Jenny; Pinardi, Nadia; Oddo, Paolo; Collins, Matthew; Korres, Gerasimos; Drillet, Yann

    2017-04-01

    Short-term ocean analyses for Sea Surface Temperature (SST) in the Mediterranean Sea can be improved by a statistical post-processing technique, called super-ensemble. This technique consists of a multiple linear regression algorithm applied to a Multi-Physics Multi-Model Super-Ensemble (MMSE) dataset, a collection of different operational forecasting analyses together with ad-hoc simulations produced by modifying selected numerical model parameterizations. A new linear regression algorithm based on Empirical Orthogonal Function filtering techniques is capable of preventing overfitting problems, although the best performance is achieved when we add correlation to the super-ensemble structure using a simple spatial filter applied after the linear regression. Our outcomes show that super-ensemble performance depends on the selection of an unbiased operator and the length of the learning period, but the quality of the generating MMSE dataset has the largest impact on the MMSE analysis Root Mean Square Error (RMSE) evaluated with respect to observed satellite SST. Lower RMSE analysis estimates result from the following choices: a 15-day training period, an overconfident MMSE dataset (a subset with the higher-quality ensemble members), and the least squares algorithm filtered a posteriori.

  17. Predicting the aquatic toxicity mode of action using logistic regression and linear discriminant analysis.

    PubMed

    Ren, Y Y; Zhou, L C; Yang, L; Liu, P Y; Zhao, B W; Liu, H X

    2016-09-01

    The paper highlights the use of the logistic regression (LR) method in the construction of acceptable statistically significant, robust and predictive models for the classification of chemicals according to their aquatic toxic modes of action. The essentials of a reliable model were all considered carefully. The model predictors were selected by stepwise forward discriminant analysis (LDA) from a combined pool of experimental data and chemical structure-based descriptors calculated by the CODESSA and DRAGON software packages. Model predictive ability was validated both internally and externally. The applicability domain was checked by the leverage approach to verify prediction reliability. The obtained models are simple and easy to interpret. In general, LR performs much better than LDA and seems to be more attractive for the prediction of the more toxic compounds, i.e. compounds that exhibit excess toxicity versus non-polar narcotic compounds and more reactive compounds versus less reactive compounds. In addition, model fit and regression diagnostics were assessed through the influence plot, which reflects the hat-values, studentized residuals, and Cook's distance statistics of each sample. Overdispersion was also checked for the LR model. The relationships between the descriptors and the aquatic toxic behaviour of compounds are also discussed.
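    A minimal scikit-learn skeleton of the logistic-regression-versus-LDA comparison is shown below; the random matrix stands in for the CODESSA/DRAGON descriptors and is not the study's data:

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(7)
    X = rng.normal(size=(200, 6))                            # stand-in molecular descriptors
    y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.5, 200)) > 0.5  # stand-in toxicity class

    # Compare the two classifiers with 5-fold cross-validated accuracy (internal validation)
    for clf in (LogisticRegression(max_iter=1000), LinearDiscriminantAnalysis()):
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(type(clf).__name__, round(acc, 3))
    ```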

  18. Quantification of endocrine disruptors and pesticides in water by gas chromatography-tandem mass spectrometry. Method validation using weighted linear regression schemes.

    PubMed

    Mansilha, C; Melo, A; Rebelo, H; Ferreira, I M P L V O; Pinho, O; Domingues, V; Pinho, C; Gameiro, P

    2010-10-22

    A multi-residue methodology based on a solid phase extraction followed by gas chromatography-tandem mass spectrometry was developed for trace analysis of 32 compounds in water matrices, including estrogens and several pesticides from different chemical families, some of them with endocrine disrupting properties. Matrix standard calibration solutions were prepared by adding known amounts of the analytes to a residue-free sample to compensate for the matrix-induced chromatographic response enhancement observed for certain pesticides. Validation was done mainly according to the International Conference on Harmonisation recommendations, as well as some European and American validation guidelines with specifications for pesticides analysis and/or GC-MS methodology. As the assumption of homoscedasticity was not met for the analytical data, a weighted least squares linear regression procedure was applied as a simple and effective way to counteract the greater influence of the higher concentrations on the fitted regression line, improving accuracy at the lower end of the calibration curve. The method was considered validated for 31 compounds after consistent evaluation of the key analytical parameters: specificity, linearity, limits of detection and quantification, range, precision, accuracy, extraction efficiency, stability and robustness. Copyright © 2010 Elsevier B.V. All rights reserved.
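    The weighting idea can be illustrated with statsmodels; the calibration points below are invented, and the 1/x² weights are one common choice for counteracting heteroscedasticity rather than the authors' exact scheme:

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Hypothetical calibration data: concentration (x) and detector response (y)
    x = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
    y = np.array([0.9, 2.1, 4.3, 10.8, 21.5, 44.0, 108.0])

    X = sm.add_constant(x)
    ols_fit = sm.OLS(y, X).fit()                        # unweighted calibration line
    wls_fit = sm.WLS(y, X, weights=1.0 / x**2).fit()    # down-weights the high concentrations

    print("OLS intercept/slope:", ols_fit.params)
    print("WLS intercept/slope:", wls_fit.params)
    ```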

  19. Survival analysis in hematologic malignancies: recommendations for clinicians

    PubMed Central

    Delgado, Julio; Pereira, Arturo; Villamor, Neus; López-Guillermo, Armando; Rozman, Ciril

    2014-01-01

    The widespread availability of statistical packages has undoubtedly helped hematologists worldwide in the analysis of their data, but has also led to the inappropriate use of statistical methods. In this article, we review some basic concepts of survival analysis and also make recommendations about how and when to perform each particular test using SPSS, Stata and R. In particular, we describe a simple way of defining cut-off points for continuous variables and the appropriate and inappropriate uses of the Kaplan-Meier method and Cox proportional hazards regression models. We also provide practical advice on how to check the proportional hazards assumption and briefly review the role of relative survival and multiple imputation. PMID:25176982
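    The article gives its recommendations for SPSS, Stata and R; an equivalent minimal sketch in Python with the lifelines package (purely illustrative follow-up data, not a recommendation from the authors) would be:

    ```python
    import pandas as pd
    from lifelines import KaplanMeierFitter, CoxPHFitter

    # Illustrative follow-up data: time in months, event indicator, one covariate
    df = pd.DataFrame({
        "time":  [6, 12, 18, 24, 30, 36, 42, 48, 54, 60],
        "event": [1,  0,  1,  1,  0,  1,  0,  1,  1,  0],
        "age":   [55, 62, 48, 71, 66, 59, 53, 68, 45, 73],
    })

    kmf = KaplanMeierFitter()
    kmf.fit(df["time"], event_observed=df["event"])       # Kaplan-Meier survival estimate
    print(kmf.median_survival_time_)

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")   # Cox proportional hazards model
    cph.check_assumptions(df)                             # Schoenfeld-residual-based PH check
    print(cph.summary)
    ```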

  20. Assessment of contrast-enhanced ultrasonography of the hepatic vein for detection of hemodynamic changes associated with experimentally induced portal hypertension in dogs.

    PubMed

    Morishita, Keitaro; Hiramoto, Akira; Michishita, Asuka; Takagi, Satoshi; Hoshino, Yuki; Itami, Takaharu; Lim, Sue Yee; Osuga, Tatsuyuki; Nakamura, Sayuri; Ochiai, Kenji; Nakamura, Kensuke; Ohta, Hiroshi; Yamasaki, Masahiro; Takiguchi, Mitsuyoshi

    2017-04-01

    OBJECTIVE To assess the use of contrast-enhanced ultrasonography (CEUS) of the hepatic vein for the detection of hemodynamic changes associated with experimentally induced portal hypertension in dogs. ANIMALS 6 healthy Beagles. PROCEDURES A prospective study was conducted. A catheter was surgically placed in the portal vein of each dog. Hypertension was induced by intraportal injection of microspheres (10 to 15 mg/kg) at 5-day intervals via the catheter. Microsphere injections were continued until multiple acquired portosystemic shunts were created. Portal vein pressure (PVP) was measured through the catheter. Contrast-enhanced ultrasonography was performed before and after establishment of hypertension. Time-intensity curves were generated from the region of interest in the hepatic vein. Perfusion variables measured for statistical analysis were hepatic vein arrival time, time to peak, time to peak phase (TTPP), and washout ratio. The correlation between CEUS variables and PVP was assessed by use of simple regression analysis. RESULTS Time to peak and TTPP were significantly less after induction of portal hypertension. Simple regression analysis revealed a significant negative correlation between TTPP and PVP. CONCLUSIONS AND CLINICAL RELEVANCE CEUS was useful for detecting hemodynamic changes associated with experimentally induced portal hypertension in dogs, which was characterized by a rapid increase in the intensity of the hepatic vein. Furthermore, TTPP, a time-dependent variable, provided useful complementary information for predicting portal hypertension. IMPACT FOR HUMAN MEDICINE Because the method described here induced presinusoidal portal hypertension, these results can be applied to idiopathic portal hypertension in humans.

  1. Power and sample size for multivariate logistic modeling of unmatched case-control studies.

    PubMed

    Gail, Mitchell H; Haneuse, Sebastien

    2017-01-01

    Sample size calculations are needed to design and assess the feasibility of case-control studies. Although such calculations are readily available for simple case-control designs and univariate analyses, there is limited theory and software for multivariate unconditional logistic analysis of case-control data. Here we outline the theory needed to detect scalar exposure effects or scalar interactions while controlling for other covariates in logistic regression. Both analytical and simulation methods are presented, together with links to the corresponding software.

  2. Improved Design Formulae for Buckling of Orthotropic Plates under Combined Loading

    NASA Technical Reports Server (NTRS)

    Weaver, Paul M.; Nemeth, Michael P.

    2008-01-01

    Simple, accurate buckling interaction formulae, suitable for design studies, are presented for long orthotropic plates with either simply supported or clamped longitudinal edges under combined loading. The loads include 1) combined uniaxial compression (or tension) and shear, 2) combined pure inplane bending and shear, and 3) combined uniaxial compression (or tension) and pure inplane bending. The interaction formulae are the results of detailed regression analysis of buckling data obtained from a very accurate Rayleigh-Ritz method.

  3. Deriving the Regression Equation without Using Calculus

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.; Gordon, Florence S.

    2004-01-01

    Probably the one "new" mathematical topic that is most responsible for modernizing courses in college algebra and precalculus over the last few years is the idea of fitting a function to a set of data in the sense of a least squares fit. Whether it be simple linear regression or nonlinear regression, this topic opens the door to applying the…

  4. Trends in Mortality After Primary Cytoreductive Surgery for Ovarian Cancer: A Systematic Review and Metaregression of Randomized Clinical Trials and Observational Studies.

    PubMed

    Di Donato, Violante; Kontopantelis, Evangelos; Aletti, Giovanni; Casorelli, Assunta; Piacenti, Ilaria; Bogani, Giorgio; Lecce, Francesca; Benedetti Panici, Pierluigi

    2017-06-01

    Primary cytoreductive surgery (PDS) followed by platinum-based chemotherapy is the cornerstone of treatment and the absence of residual tumor after PDS is universally considered the most important prognostic factor. The aim of the present analysis was to evaluate trend and predictors of 30-day mortality in patients undergoing primary cytoreduction for ovarian cancer. Literature was searched for records reporting 30-day mortality after PDS. All cohorts were rated for quality. Simple and multiple Poisson regression models were used to quantify the association between 30-day mortality and the following: overall or severe complications, proportion of patients with stage IV disease, median age, year of publication, and weighted surgical complexity index. Using the multiple regression model, we calculated the risk of perioperative mortality at different levels for statistically significant covariates of interest. Simple regression identified median age and proportion of patients with stage IV disease as statistically significant predictors of 30-day mortality. When included in the multiple Poisson regression model, both remained statistically significant, with an incidence rate ratio of 1.087 for median age and 1.017 for stage IV disease. Disease stage was a strong predictor, with the risk estimated to increase from 2.8% (95% confidence interval 2.02-3.66) for stage III to 16.1% (95% confidence interval 6.18-25.93) for stage IV, for a cohort with a median age of 65 years. Metaregression demonstrated that increased age and advanced clinical stage were independently associated with an increased risk of mortality, and the combined effects of both factors greatly increased the risk.

  5. Shape information from glucose curves: Functional data analysis compared with traditional summary measures

    PubMed Central

    2013-01-01

    Background Plasma glucose levels are important measures in medical care and research, and are often obtained from oral glucose tolerance tests (OGTT) with repeated measurements over 2–3 hours. It is common practice to use simple summary measures of OGTT curves. However, different OGTT curves can yield similar summary measures, and information of physiological or clinical interest may be lost. Our main aim was to extract information inherent in the shape of OGTT glucose curves, compare it with the information from simple summary measures, and explore the clinical usefulness of such information. Methods OGTTs with five glucose measurements over two hours were recorded for 974 healthy pregnant women in their first trimester. For each woman, the five measurements were transformed into smooth OGTT glucose curves by functional data analysis (FDA), a collection of statistical methods developed specifically to analyse curve data. The essential modes of temporal variation between OGTT glucose curves were extracted by functional principal component analysis. The resultant functional principal component (FPC) scores were compared with commonly used simple summary measures: fasting and two-hour (2-h) values, area under the curve (AUC) and simple shape index (2-h minus 90-min values, or 90-min minus 60-min values). Clinical usefulness of FDA was explored by regression analyses of glucose tolerance later in pregnancy. Results Over 99% of the variation between individually fitted curves was expressed in the first three FPCs, interpreted physiologically as “general level” (FPC1), “time to peak” (FPC2) and “oscillations” (FPC3). FPC1 scores correlated strongly with AUC (r=0.999), but less with the other simple summary measures (−0.42≤r≤0.79). FPC2 scores gave shape information not captured by simple summary measures (−0.12≤r≤0.40). FPC2 scores, but not FPC1 nor the simple summary measures, discriminated between women who did and did not develop gestational diabetes later in pregnancy. Conclusions FDA of OGTT glucose curves in early pregnancy extracted shape information that was not identified by commonly used simple summary measures. This information discriminated between women with and without gestational diabetes later in pregnancy. PMID:23327294

  6. Shape information from glucose curves: functional data analysis compared with traditional summary measures.

    PubMed

    Frøslie, Kathrine Frey; Røislien, Jo; Qvigstad, Elisabeth; Godang, Kristin; Bollerslev, Jens; Voldner, Nanna; Henriksen, Tore; Veierød, Marit B

    2013-01-17

    Plasma glucose levels are important measures in medical care and research, and are often obtained from oral glucose tolerance tests (OGTT) with repeated measurements over 2-3 hours. It is common practice to use simple summary measures of OGTT curves. However, different OGTT curves can yield similar summary measures, and information of physiological or clinical interest may be lost. Our main aim was to extract information inherent in the shape of OGTT glucose curves, compare it with the information from simple summary measures, and explore the clinical usefulness of such information. OGTTs with five glucose measurements over two hours were recorded for 974 healthy pregnant women in their first trimester. For each woman, the five measurements were transformed into smooth OGTT glucose curves by functional data analysis (FDA), a collection of statistical methods developed specifically to analyse curve data. The essential modes of temporal variation between OGTT glucose curves were extracted by functional principal component analysis. The resultant functional principal component (FPC) scores were compared with commonly used simple summary measures: fasting and two-hour (2-h) values, area under the curve (AUC) and simple shape index (2-h minus 90-min values, or 90-min minus 60-min values). Clinical usefulness of FDA was explored by regression analyses of glucose tolerance later in pregnancy. Over 99% of the variation between individually fitted curves was expressed in the first three FPCs, interpreted physiologically as "general level" (FPC1), "time to peak" (FPC2) and "oscillations" (FPC3). FPC1 scores correlated strongly with AUC (r=0.999), but less with the other simple summary measures (-0.42≤r≤0.79). FPC2 scores gave shape information not captured by simple summary measures (-0.12≤r≤0.40). FPC2 scores, but not FPC1 nor the simple summary measures, discriminated between women who did and did not develop gestational diabetes later in pregnancy. FDA of OGTT glucose curves in early pregnancy extracted shape information that was not identified by commonly used simple summary measures. This information discriminated between women with and without gestational diabetes later in pregnancy.

  7. The impact of mental state disorder and personality on social functioning in patients engaged in community mental health care.

    PubMed

    Newton-Howes, Giles

    2014-02-01

    The aim of this study was to assess the degree to which mental state disorder and personality disorder impact on social functioning in patients engaged in secondary mental health care in New Zealand. Patients were interviewed using peer-reviewed instruments able to provide an indication of severity to assess their social functioning, personality status and diagnosis. Univariate correlations and linear regression were used to identify the association between social functioning, mental state disorder and personality. Using simple correlations, all diagnostic categories were associated with declines in social functioning. In the regression analysis, depression and personality dysfunction accounted for 48% of the variance in social functioning. For patients engaged in secondary care, depression and personality dysfunction are significantly associated with poorer social functioning.

  8. Structured functional additive regression in reproducing kernel Hilbert spaces.

    PubMed

    Zhu, Hongxiao; Yao, Fang; Zhang, Hao Helen

    2014-06-01

    Functional additive models (FAMs) provide a flexible yet simple framework for regressions involving functional predictors. The utilization of data-driven basis in an additive rather than linear structure naturally extends the classical functional linear model. However, the critical issue of selecting nonlinear additive components has been less studied. In this work, we propose a new regularization framework for the structure estimation in the context of Reproducing Kernel Hilbert Spaces. The proposed approach takes advantage of the functional principal components which greatly facilitates the implementation and the theoretical analysis. The selection and estimation are achieved by penalized least squares using a penalty which encourages the sparse structure of the additive components. Theoretical properties such as the rate of convergence are investigated. The empirical performance is demonstrated through simulation studies and a real data application.

  9. Estimation of streamflow for selected sites on the Carson and Truckee rivers in California and Nevada, 1944-80

    USGS Publications Warehouse

    Blodgett, J.C.; Oltmann, R.N.; Poeschel, K.R.

    1984-01-01

    Daily mean and monthly discharges were estimated for 10 sites on the Carson and Truckee Rivers for periods of incomplete records and for tributary sites affected by reservoir regulation. On the basis of the hydrologic characteristics, stream-flow data for a water year were grouped by month or season for subsequent regression analysis. In most cases, simple linear regressions adequately defined a relation of streamflow between gaging stations, but in some instances a nonlinear relation for several months of the water year was derived. Statistical data are presented to indicate the reliability of the estimated streamflow data. Records of discharges including historical and estimated data for the gaging stations for the water years 1944-80 are presented. (USGS)

  10. Developing a short form of the simple Rathus assertiveness schedule using a sample of adults with sickle cell disease.

    PubMed

    Jenerette, Coretta; Dixon, Jane

    2010-10-01

    Ethnic and cultural norms influence an individual's assertiveness. In health care, assertiveness may play an important role in health outcomes, especially for predominantly minority populations, such as adults with sickle cell disease. Therefore, it is important to develop measures to accurately assess assertiveness. It is also important to reduce response burden of lengthy instruments while retaining instrument reliability and validity. The purpose of this article is to describe development of a shorter version of the Simple Rathus Assertiveness Schedule (SRAS). Data from a cross-sectional descriptive study of adults with sickle cell disease were used to construct a short form of the SRAS, guided by stepwise regression analysis. The 19-item Simple Rathus Assertiveness Scale-Short Form (SRAS-SF) had acceptable reliability (α = .81) and construct validity and was highly correlated with the SRAS (r = .98, p = .01). The SRAS-SF reduces response burden, while maintaining reliability and validity.

  11. Exploring simple, transparent, interpretable and predictive QSAR models for classification and quantitative prediction of rat toxicity of ionic liquids using OECD recommended guidelines.

    PubMed

    Das, Rudra Narayan; Roy, Kunal; Popelier, Paul L A

    2015-11-01

    The present study explores the chemical attributes of diverse ionic liquids responsible for their cytotoxicity in a rat leukemia cell line (IPC-81) by developing predictive classification as well as regression-based mathematical models. Simple and interpretable descriptors derived from a two-dimensional representation of the chemical structures along with quantum topological molecular similarity indices have been used for model development, employing unambiguous modeling strategies that strictly obey the guidelines of the Organization for Economic Co-operation and Development (OECD) for quantitative structure-activity relationship (QSAR) analysis. The structure-toxicity relationships that emerged from both classification and regression-based models were in accordance with the findings of some previous studies. The models suggested that the cytotoxicity of ionic liquids is dependent on the cationic surfactant action, long alkyl side chains, cationic lipophilicity as well as aromaticity, the presence of a dialkylamino substituent at the 4-position of the pyridinium nucleus and a bulky anionic moiety. The models have been transparently presented in the form of equations, thus allowing their easy transferability in accordance with the OECD guidelines. The models have also been subjected to rigorous validation tests proving their predictive potential and can hence be used for designing novel and "greener" ionic liquids. The major strength of the present study lies in the use of a diverse and large dataset, use of simple reproducible descriptors and compliance with the OECD norms. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Estimating magnitude and frequency of peak discharges for rural, unregulated, streams in West Virginia

    USGS Publications Warehouse

    Wiley, J.B.; Atkins, John T.; Tasker, Gary D.

    2000-01-01

    Multiple and simple least-squares regression models for the log10-transformed 100-year discharge with independent variables describing the basin characteristics (log10-transformed and untransformed) for 267 streamflow-gaging stations were evaluated, and the regression residuals were plotted as areal distributions that defined three regions of the State, designated East, North, and South. Exploratory data analysis procedures identified 31 gaging stations at which discharges are different than would be expected for West Virginia. Regional equations for the 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year peak discharges were determined by generalized least-squares regression using data from 236 gaging stations. Log10-transformed drainage area was the most significant independent variable for all regions. Equations developed in this study are applicable only to rural, unregulated, streams within the boundaries of West Virginia. The accuracy of estimating equations is quantified by measuring the average prediction error (from 27.7 to 44.7 percent) and equivalent years of record (from 1.6 to 20.0 years).

  13. [Association of mineral and bone disorder with increasing PWV in CKD 1-5 patients].

    PubMed

    Shiota, Jun; Watanabe, Mitsuhiro

    2007-01-01

    The association between pulse wave velocity (PWV) and chronic kidney disease mineral and bone disorder (CKD-MBD) was investigated in CKD 1-5 patients without dialysis. Pulse pressure (PP), PWV, serum Cr, non-HDL-cholesterol, Alb, Ca, Pi, calcitriol, intact-PTH and BAP were measured in sixty patients not receiving a phosphate binder or vitamin D. Using the relationship between age and baPWV in healthy subjects, we determined delta baPWV (measured baPWV - calculated baPWV) as an index for the effect of CKD-related factors. delta baPWV was significantly higher in diabetic patients (p < 0.00001). Simple regression analysis revealed that delta baPWV was positively correlated with PP (p < 0.05) and Log(intact-PTH) (p < 0.01), but negatively correlated with Log(estimated GFR) and Log(calcitriol) (p < 0.01). Multiple regression analysis revealed that delta baPWV was significantly associated with PP and calcitriol, or PP and intact-PTH. These results suggest a relationship between PWV and CKD-MBD.

  14. Comparative analysis of used car price evaluation models

    NASA Astrophysics Data System (ADS)

    Chen, Chuancan; Hao, Lulu; Xu, Cong

    2017-05-01

    An accurate used car price evaluation is a catalyst for the healthy development of the used car market. Data mining has been applied to predict used car prices in several articles. However, little has been studied on how different algorithms compare in used car price estimation. This paper collects more than 100,000 used car dealing records throughout China to conduct a thorough empirical comparison of two algorithms: linear regression and random forest. These two algorithms are used to predict used car price in three different models: a model for a certain car make, a model for a certain car series and a universal model. Results show that random forest has a stable but not ideal effect in the price evaluation model for a certain car make, but it shows a great advantage in the universal model compared with linear regression. This indicates that random forest is an optimal algorithm when handling complex models with a large number of variables and samples, yet it shows no obvious advantage when coping with simple models with fewer variables.
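    A skeleton of such a comparison in scikit-learn is sketched below; the features (age, mileage, engine size) and the price-generating formula are made up for illustration and are not the paper's dataset:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 1000
    # Hypothetical features: age (years), mileage (10k km), engine size (L)
    X = np.column_stack([rng.uniform(0, 10, n), rng.uniform(0, 20, n), rng.uniform(1, 4, n)])
    price = (200000 * np.exp(-0.15 * X[:, 0]) - 2000 * X[:, 1]
             + 15000 * X[:, 2] + rng.normal(0, 5000, n))      # fabricated price rule + noise

    X_tr, X_te, y_tr, y_te = train_test_split(X, price, random_state=0)
    for model in (LinearRegression(), RandomForestRegressor(n_estimators=200, random_state=0)):
        model.fit(X_tr, y_tr)
        print(type(model).__name__, mean_absolute_error(y_te, model.predict(X_te)))
    ```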

  15. An asymptotic analysis of the laminar-turbulent transition of yield stress fluids in pipes

    NASA Astrophysics Data System (ADS)

    Myers, Tim G.; Mitchell, Sarah L.; Slatter, Paul

    2017-02-01

    The work in this paper concerns the axisymmetric pipe flow of a Herschel-Bulkley fluid, with the aim of determining a relation between the critical velocity (defining the transition between laminar and turbulent flow) and the pipe diameter in terms of the Reynolds number Re3. The asymptotic behaviour for large and small pipes is examined and simple expressions for the leading order terms are presented. Results are then compared with experimental data. A nonlinear regression analysis shows that for the tested fluids the transition occurs at similar values to the Newtonian case, namely in the range 2100 < Re3 < 2500.

  16. [Predicting the probability of development and progression of primary open angle glaucoma by regression modeling].

    PubMed

    Likhvantseva, V G; Sokolov, V A; Levanova, O N; Kovelenova, I V

    2018-01-01

    Prediction of the clinical course of primary open-angle glaucoma (POAG) is one of the main directions in solving the problem of vision loss prevention and stabilization of the pathological process. Simple statistical methods of correlation analysis show the extent of each risk factor's impact, but do not indicate the total impact of these factors in personalized combinations. The relationships between the risk factors are subject to correlation and regression analysis. The regression equation represents the dependence of the mathematical expectation of the resulting sign on the combination of factor signs. The aim was to develop a technique for predicting the probability of development and progression of primary open-angle glaucoma based on a personalized combination of risk factors by linear multivariate regression analysis. The study included 66 patients (23 female and 43 male; 132 eyes) with newly diagnosed primary open-angle glaucoma. The control group consisted of 14 patients (8 male and 6 female). Standard ophthalmic examination was supplemented with biochemical study of lacrimal fluid. Concentration of matrix metalloproteinase MMP-2 and MMP-9 in tear fluid in both eyes was determined using the 'sandwich' enzyme-linked immunosorbent assay (ELISA) method. The study resulted in the development of regression equations and step-by-step multivariate logistic models that can help calculate the risk of development and progression of POAG. Those models are based on expert evaluation of clinical and instrumental indicators of hydrodynamic disturbances (coefficient of outflow ease - C, volume of intraocular fluid secretion - F, fluctuation of intraocular pressure), as well as personalized morphometric parameters of the retina (central retinal thickness in the macular area) and concentration of MMP-2 and MMP-9 in the tear film. The newly developed regression equations are highly informative and can be a reliable tool for studying the influence vector and assessing the pathogenic potential of the independent risk factors in specific personalized combinations.

  17. [Prevalence of vitamin D deficiency and associated factors in women and newborns in the immediate postpartum period].

    PubMed

    do Prado, Mara Rúbia Maciel Cardoso; Oliveira, Fabiana de Cássia Carvalho; Assis, Karine Franklin; Ribeiro, Sarah Aparecida Vieira; do Prado Junior, Pedro Paulo; Sant'Ana, Luciana Ferreira da Rocha; Priore, Silvia Eloiza; Franceschini, Sylvia do Carmo Castro

    2015-01-01

    To assess the prevalence of vitamin D deficiency and its associated factors in women and their newborns in the postpartum period. This cross-sectional study evaluated vitamin D deficiency/insufficiency in 226 women and their newborns in Viçosa (Minas Gerais, BR) between December 2011 and November 2012. Cord blood and venous maternal blood were collected to evaluate the following biochemical parameters: vitamin D, alkaline phosphatase, calcium, phosphorus and parathyroid hormone. Poisson regression analysis, with a confidence interval of 95%, was applied to assess vitamin D deficiency and its associated factors. Multiple linear regression analysis was performed to identify factors associated with 25(OH)D deficiency in the newborns and women from the study. The criterion for variable inclusion in the multiple linear regression model was the association with the dependent variable in the simple linear regression analysis, considering p<0.20. The significance level was α<5%. Of the 226 women included, 200 (88.5%) were 20 to 44 years old; the median age was 28 years. Deficient/insufficient levels of vitamin D were found in 192 (85%) women and in 182 (80.5%) neonates. The maternal 25(OH)D and alkaline phosphatase levels were independently associated with vitamin D deficiency in infants. This study identified a high prevalence of vitamin D deficiency and insufficiency in women and newborns and an association between the maternal nutritional status of vitamin D and their infants' vitamin D status. Copyright © 2015 Sociedade de Pediatria de São Paulo. Publicado por Elsevier Editora Ltda. All rights reserved.

  18. [Phonological characteristics and rehabilitation training of abnormal velar in children with functional articulation disorders].

    PubMed

    Lina, Xu; Feng, Li; Yanyun, Zhang; Nan, Gao; Mingfang, Hu

    2016-12-01

    To explore the phonological characteristics and rehabilitation training of abnormal velar articulation in patients with functional articulation disorders (FAD). Eighty-seven patients with FAD were observed for the phonological characteristics of velars. Seventy-two patients with abnormal velar articulation received speech training. Correlation and simple linear regression analyses were carried out on abnormal velar articulation and age. The articulation disorder of /g/ mainly showed replacement by /d/, /b/ or omission. /k/ mainly showed replacement by /d/, /t/, /g/, /p/, /b/. /h/ mainly showed replacement by /g/, /f/, /p/, /b/ or omission. The common erroneous articulation forms of /g/, /k/, /h/ were fronting of the tongue and replacement by bilabial consonants. When velars combined with vowels containing /a/ and /e/, the main error was fronting of the tongue. When velars combined with vowels containing /u/, the errors tended to be replacement by bilabial consonants. After 3 to 10 sessions of speech training, the number of erroneous words decreased from 40.28±6.08 before training to 6.24±2.61; the difference was statistically significant (Z=-7.379, P=0.000). The number of erroneous words was negatively correlated with age (r=-0.691, P=0.000). The result of simple linear regression analysis showed that the coefficient of determination was 0.472. The articulation disorder of velars mainly shows replacement and varies with the vowels. The targeted rehabilitation training hereby established is significantly effective. Age plays an important role in the outcome of velar articulation.

  19. Testing of a simplified LED based vis/NIR system for rapid ripeness evaluation of white grape (Vitis vinifera L.) for Franciacorta wine.

    PubMed

    Giovenzana, Valentina; Civelli, Raffaele; Beghi, Roberto; Oberti, Roberto; Guidetti, Riccardo

    2015-11-01

    The aim of this work was to test a simplified optical prototype for rapid estimation of the ripening parameters of white grape for Franciacorta wine directly in the field. Spectral acquisition based on reflectance at four wavelengths (630, 690, 750 and 850 nm) was proposed. The integration of a simple processing algorithm in the microcontroller software would allow real-time values of spectral reflectance to be visualized. Non-destructive analyses were carried out on 95 grape bunches for a total of 475 berries. Samplings were performed weekly during the last ripening stages. Optical measurements were carried out both using the simplified system and a portable commercial vis/NIR spectrophotometer, as a reference instrument for performance comparison. Chemometric analyses were performed in order to extract the maximum useful information from the optical data. Principal component analysis (PCA) was performed for a preliminary evaluation of the data. Correlations between the optical data matrix and ripening parameters (total soluble solids content, SSC; titratable acidity, TA) were carried out using partial least squares (PLS) regression for spectra and using multiple linear regression (MLR) for data from the simplified device. Classification analyses were also performed with the aim of discriminating ripe from unripe samples. PCA, MLR and classification analyses show the effectiveness of the simplified system in separating samples among different sampling dates and in discriminating ripe from unripe samples. Finally, simple equations for SSC and TA prediction were calculated. Copyright © 2015 Elsevier B.V. All rights reserved.
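    For reference, a PLS calibration of soluble solids content against spectra can be sketched with scikit-learn; the spectra and SSC values below are synthetic stand-ins, not the study's measurements:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_samples, n_wavelengths = 95, 200
    spectra = rng.normal(size=(n_samples, n_wavelengths))    # stand-in reflectance spectra
    ssc = spectra[:, 50] * 2.0 + spectra[:, 120] + rng.normal(0, 0.1, n_samples)  # stand-in SSC

    pls = PLSRegression(n_components=5)
    scores = cross_val_score(pls, spectra, ssc, cv=5)        # default score is cross-validated R²
    print(scores.mean())
    ```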

  20. Chair-side detection of Prevotella Intermedia in mature dental plaque by its fluorescence.

    PubMed

    Nomura, Yoshiaki; Takeuchi, Hiroaki; Okamoto, Masaaki; Sogabe, Kaoru; Okada, Ayako; Hanada, Nobuhiro

    2017-06-01

    Prevotella intermedia/nigrescens is one of the well-known pathogens causing periodontal diseases, and its red fluorescence excited by visible blue light, caused by protoporphyrin IX in the bacterial cells, could be useful for chair-side detection. The aim of this study was to evaluate levels of periodontal pathogens, especially P. intermedia, in clinical samples of red fluorescent dental plaque. Thirty-two supragingival plaque samples from six individuals were measured for fluorescence at a 640 nm wavelength excited at 409 nm. Periodontopathic bacteria were counted by the Invader PLUS PCR assay. Correlations between the fluorescence intensity and bacterial counts were analyzed by Pearson's correlation coefficient and simple and multiple regression analysis. Positive and negative predictive values of the fluorescence intensities for the presence or absence of P. intermedia in supragingival plaque were calculated. When relative fluorescence units (RFU) were logarithmically transformed, statistically significant linear relations between RFU and bacterial counts were obtained for P. intermedia, Porphyromonas gingivalis and Tannerella forsythia. In the multiple regression analysis, only P. intermedia had a statistically significant correlation with fluorescence intensities. All of the fluorescent dental plaque samples contained P. intermedia. In contrast, 28% of non-fluorescent plaques contained P. intermedia. Checking for fluorescent dental plaque in the oral cavity could be a simple chair-side screen for mature dental plaque before examining for periodontal pathogens, especially P. intermedia, by the PCR method. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Use of admission serum lactate and sodium levels to predict mortality in necrotizing soft-tissue infections.

    PubMed

    Yaghoubian, Arezou; de Virgilio, Christian; Dauphine, Christine; Lewis, Roger J; Lin, Matthew

    2007-09-01

    Simple admission laboratory values can be used to classify patients with necrotizing soft-tissue infection (NSTI) into high and low mortality risk groups. Chart review. Public teaching hospital. All patients with NSTI from 1997 through 2006. Variables analyzed included medical history, admission vital signs, laboratory values, and microbiologic findings. Data analyses included univariate and classification and regression tree analyses. Mortality. One hundred twenty-four patients were identified with NSTI. The overall mortality rate was 21 of 124 (17%). On univariate analysis, factors associated with mortality included a history of cancer (P = .03), intravenous drug abuse (P < .001), low systolic blood pressure on admission (P = .03), base deficit (P = .009), and elevated white blood cell count (P = .06). On exploratory classification and regression tree analysis, admission serum lactate and sodium levels were predictors of mortality, with a sensitivity of 100%, specificity of 28%, positive predictive value of 23%, and negative predictive value of 100%. A serum lactate level greater than or equal to 54.1 mg/dL (6 mmol/L) alone was associated with a 32% mortality, whereas a serum sodium level greater than or equal to 135 mEq/L combined with a lactate level less than 54.1 mg/dL was associated with a mortality of 0%. Mortality for NSTIs remains high. A simple model, using admission serum lactate and serum sodium levels, may help identify patients at greatest risk for death.
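    The kind of two-variable classification tree the authors describe (lactate and sodium splitting mortality risk) can be reproduced in outline with scikit-learn; the patient values below are fabricated to loosely mimic the reported pattern and are not the chart-review data:

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(2)
    n = 124
    lactate = rng.uniform(10, 90, n)      # mg/dL
    sodium = rng.uniform(125, 145, n)     # mEq/L
    # Fabricated outcome: higher mortality only when admission lactate is high
    died = (lactate >= 54.1) & (rng.uniform(size=n) < 0.32)

    X = np.column_stack([lactate, sodium])
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, died)
    print(export_text(tree, feature_names=["lactate", "sodium"]))
    ```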

  2. Temporal trends in sperm count: a systematic review and meta-regression analysis.

    PubMed

    Levine, Hagai; Jørgensen, Niels; Martino-Andrade, Anderson; Mendiola, Jaime; Weksler-Derri, Dan; Mindlis, Irina; Pinotti, Rachel; Swan, Shanna H

    2017-11-01

    Reported declines in sperm counts remain controversial today and recent trends are unknown. A definitive meta-analysis is critical given the predictive value of sperm count for fertility, morbidity and mortality. To provide a systematic review and meta-regression analysis of recent trends in sperm counts as measured by sperm concentration (SC) and total sperm count (TSC), and their modification by fertility and geographic group. PubMed/MEDLINE and EMBASE were searched for English language studies of human SC published in 1981-2013. Following a predefined protocol 7518 abstracts were screened and 2510 full articles reporting primary data on SC were reviewed. A total of 244 estimates of SC and TSC from 185 studies of 42 935 men who provided semen samples in 1973-2011 were extracted for meta-regression analysis, as well as information on years of sample collection and covariates [fertility group ('Unselected by fertility' versus 'Fertile'), geographic group ('Western', including North America, Europe Australia and New Zealand versus 'Other', including South America, Asia and Africa), age, ejaculation abstinence time, semen collection method, method of measuring SC and semen volume, exclusion criteria and indicators of completeness of covariate data]. The slopes of SC and TSC were estimated as functions of sample collection year using both simple linear regression and weighted meta-regression models and the latter were adjusted for pre-determined covariates and modification by fertility and geographic group. Assumptions were examined using multiple sensitivity analyses and nonlinear models. SC declined significantly between 1973 and 2011 (slope in unadjusted simple regression models -0.70 million/ml/year; 95% CI: -0.72 to -0.69; P < 0.001; slope in adjusted meta-regression models = -0.64; -1.06 to -0.22; P = 0.003). The slopes in the meta-regression model were modified by fertility (P for interaction = 0.064) and geographic group (P for interaction = 0.027). There was a significant decline in SC between 1973 and 2011 among Unselected Western (-1.38; -2.02 to -0.74; P < 0.001) and among Fertile Western (-0.68; -1.31 to -0.05; P = 0.033), while no significant trends were seen among Unselected Other and Fertile Other. Among Unselected Western studies, the mean SC declined, on average, 1.4% per year with an overall decline of 52.4% between 1973 and 2011. Trends for TSC and SC were similar, with a steep decline among Unselected Western (-5.33 million/year, -7.56 to -3.11; P < 0.001), corresponding to an average decline in mean TSC of 1.6% per year and overall decline of 59.3%. Results changed minimally in multiple sensitivity analyses, and there was no statistical support for the use of a nonlinear model. In a model restricted to data post-1995, the slope both for SC and TSC among Unselected Western was similar to that for the entire period (-2.06 million/ml, -3.38 to -0.74; P = 0.004 and -8.12 million, -13.73 to -2.51, P = 0.006, respectively). This comprehensive meta-regression analysis reports a significant decline in sperm counts (as measured by SC and TSC) between 1973 and 2011, driven by a 50-60% decline among men unselected by fertility from North America, Europe, Australia and New Zealand. Because of the significant public health implications of these results, research on the causes of this continuing decline is urgently needed. © The Author 2017. Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology. All rights reserved. 

  3. Representation of limb kinematics in Purkinje cell simple spike discharge is conserved across multiple tasks

    PubMed Central

    Hewitt, Angela L.; Popa, Laurentiu S.; Pasalar, Siavash; Hendrix, Claudia M.

    2011-01-01

    Encoding of movement kinematics in Purkinje cell simple spike discharge has important implications for hypotheses of cerebellar cortical function. Several outstanding questions remain regarding representation of these kinematic signals. It is uncertain whether kinematic encoding occurs in unpredictable, feedback-dependent tasks or kinematic signals are conserved across tasks. Additionally, there is a need to understand the signals encoded in the instantaneous discharge of single cells without averaging across trials or time. To address these questions, this study recorded Purkinje cell firing in monkeys trained to perform a manual random tracking task in addition to circular tracking and center-out reach. Random tracking provides for extensive coverage of kinematic workspaces. Direction and speed errors are significantly greater during random than circular tracking. Cross-correlation analyses comparing hand and target velocity profiles show that hand velocity lags target velocity during random tracking. Correlations between simple spike firing from 120 Purkinje cells and hand position, velocity, and speed were evaluated with linear regression models including a time constant, τ, as a measure of the firing lead/lag relative to the kinematic parameters. Across the population, velocity accounts for the majority of simple spike firing variability (63 ± 30% of the adjusted R², R²adj), followed by position (28 ± 24% of R²adj) and speed (11 ± 19% of R²adj). Simple spike firing often leads hand kinematics. Comparison of regression models based on averaged vs. nonaveraged firing and kinematics reveals lower R²adj values for nonaveraged data; however, regression coefficients and τ values are highly similar. Finally, for most cells, model coefficients generated from random tracking accurately estimate simple spike firing in either circular tracking or center-out reach. These findings imply that the cerebellum controls movement kinematics, consistent with a forward internal model that predicts upcoming limb kinematics. PMID:21795616

  4. Missing data imputation: focusing on single imputation.

    PubMed

    Zhang, Zhongheng

    2016-01-01

    Complete case analysis is widely used for handling missing data, and it is the default method in many statistical packages. However, this method may introduce bias and some useful information will be omitted from analysis. Therefore, many imputation methods have been developed to fill this gap. The present article focuses on single imputation. Imputations with mean, median and mode are simple but, like complete case analysis, can introduce bias in the mean and deviation. Furthermore, they ignore relationships with other variables. Regression imputation can preserve the relationship between missing values and other variables. Many sophisticated methods exist to handle missing values in longitudinal data. This article focuses primarily on how to implement R code to perform single imputation, while avoiding complex mathematical calculations.
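    The article implements these ideas in R; an equivalent Python sketch of mean imputation and regression imputation on a small hypothetical data frame might look like this:

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    df = pd.DataFrame({
        "age": [23, 31, 45, 52, 38, 27, 60],
        "bmi": [21.0, np.nan, 27.5, 30.1, np.nan, 22.3, 29.0],
    })

    # Mean imputation: simple, but shrinks the variance of bmi
    df["bmi_mean_imp"] = df["bmi"].fillna(df["bmi"].mean())

    # Regression imputation: predict missing bmi from age, preserving their relationship
    obs = df["bmi"].notna()
    reg = LinearRegression().fit(df.loc[obs, ["age"]], df.loc[obs, "bmi"])
    df["bmi_reg_imp"] = df["bmi"]
    df.loc[~obs, "bmi_reg_imp"] = reg.predict(df.loc[~obs, ["age"]])
    print(df)
    ```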

  5. Statistics for nuclear engineers and scientists. Part 1. Basic statistical inference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beggs, W.J.

    1981-02-01

    This report is intended for the use of engineers and scientists working in the nuclear industry, especially at the Bettis Atomic Power Laboratory. It serves as the basis for several Bettis in-house statistics courses. The objectives of the report are to introduce the reader to the language and concepts of statistics and to provide a basic set of techniques to apply to problems of the collection and analysis of data. Part 1 covers subjects of basic inference. The subjects include: descriptive statistics; probability; simple inference for normally distributed populations, and for non-normal populations as well; comparison of two populations; the analysis of variance; quality control procedures; and linear regression analysis.

  6. Missing data imputation: focusing on single imputation

    PubMed Central

    2016-01-01

    Complete case analysis is widely used for handling missing data, and it is the default method in many statistical packages. However, this method may introduce bias and some useful information will be omitted from analysis. Therefore, many imputation methods have been developed to fill this gap. The present article focuses on single imputation. Imputations with mean, median and mode are simple but, like complete case analysis, can introduce bias in the mean and deviation. Furthermore, they ignore relationships with other variables. Regression imputation can preserve the relationship between missing values and other variables. Many sophisticated methods exist to handle missing values in longitudinal data. This article focuses primarily on how to implement R code to perform single imputation, while avoiding complex mathematical calculations. PMID:26855945

  7. The extended Lennard-Jones potential energy function: A simpler model for direct-potential-fit analysis

    NASA Astrophysics Data System (ADS)

    Hajigeorgiou, Photos G.

    2016-12-01

    An analytical model for the diatomic potential energy function that was recently tested as a universal function (Hajigeorgiou, 2010) has been further modified and tested as a suitable model for direct-potential-fit analysis. Applications are presented for the ground electronic states of three diatomic molecules: oxygen, carbon monoxide, and hydrogen fluoride. The adjustable parameters of the extended Lennard-Jones potential model are determined through nonlinear regression by fits to calculated rovibrational energy term values or experimental spectroscopic line positions. The model is shown to lead to reliable, compact and simple representations for the potential energy functions of these systems and could therefore be classified as a suitable and attractive model for direct-potential-fit analysis.
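    As a generic illustration of direct potential fitting (using a plain 12-6 Lennard-Jones well rather than the author's extended Lennard-Jones form, and synthetic data points), a nonlinear fit with scipy could look like:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def lennard_jones(r, de, re):
        """Plain 12-6 Lennard-Jones well: minimum of depth -de at r = re (stand-in model)."""
        return de * ((re / r) ** 12 - 2 * (re / r) ** 6)

    # Synthetic "observed" potential points with a little noise
    rng = np.random.default_rng(3)
    r = np.linspace(1.0, 3.0, 40)
    v = lennard_jones(r, 4.5, 1.2) + rng.normal(0, 0.01, r.size)

    params, cov = curve_fit(lennard_jones, r, v, p0=[4.0, 1.0])
    print(params)  # recovered well depth and equilibrium distance
    ```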

  8. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    PubMed

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
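    One way to operationalise that recipe for logistic regression is sketched below (my reading of the rule, not the authors' code); the slope, covariate SD, overall event probability and sample size are placeholder values:

    ```python
    import numpy as np
    from scipy.optimize import brentq
    from scipy.special import expit
    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    # Placeholder design inputs: log odds ratio per unit of x, SD of x,
    # overall event probability, and total sample size
    beta, sigma_x, p_overall, n_total = 0.4, 1.5, 0.3, 400

    delta = 2 * beta * sigma_x   # logit difference in the equivalent two-sample problem

    # Pick the two group probabilities so their logits differ by delta while the
    # average event probability (expected number of events) stays at p_overall
    center = brentq(lambda c: (expit(c - delta / 2) + expit(c + delta / 2)) / 2 - p_overall,
                    -10, 10)
    p1, p2 = expit(center - delta / 2), expit(center + delta / 2)

    effect = proportion_effectsize(p1, p2)   # Cohen's h for comparing two proportions
    power = NormalIndPower().power(effect_size=effect, nobs1=n_total / 2, alpha=0.05, ratio=1.0)
    print(round(p1, 3), round(p2, 3), round(power, 3))
    ```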

  9. A Simple Introduction to Moving Least Squares and Local Regression Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garimella, Rao Veerabhadra

    In this brief note, a highly simplified introduction to estimating functions over a set of particles is presented. The note starts from Global Least Squares fitting, going on to Moving Least Squares estimation (MLS) and, finally, Local Regression Estimation (LRE).
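    A bare-bones version of locally weighted (Gaussian-kernel) linear regression over scattered samples, in the spirit of MLS/LRE, is sketched below; the kernel width and test function are arbitrary choices, not taken from the note:

    ```python
    import numpy as np

    def local_linear_estimate(x_query, x_pts, f_pts, h=0.3):
        """Weighted least-squares fit of a local line around x_query (Gaussian weights)."""
        w = np.exp(-((x_pts - x_query) / h) ** 2)        # kernel weights
        A = np.column_stack([np.ones_like(x_pts), x_pts - x_query])
        WA = A * w[:, None]
        # Solve the weighted normal equations A^T W A c = A^T W f
        coef = np.linalg.solve(A.T @ WA, A.T @ (w * f_pts))
        return coef[0]                                    # local intercept = estimate at x_query

    # Scattered "particle" samples of a smooth function
    x_pts = np.sort(np.random.default_rng(4).uniform(0, 2 * np.pi, 50))
    f_pts = np.sin(x_pts)
    print(local_linear_estimate(np.pi / 2, x_pts, f_pts))  # should be close to 1
    ```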

  10. Improving Prediction Accuracy for WSN Data Reduction by Applying Multivariate Spatio-Temporal Correlation

    PubMed Central

    Carvalho, Carlos; Gomes, Danielo G.; Agoulmine, Nazim; de Souza, José Neuman

    2011-01-01

    This paper proposes a method based on multivariate spatial and temporal correlation to improve prediction accuracy in data reduction for Wireless Sensor Networks (WSN). Prediction of data not sent to the sink node is a technique used to save energy in WSNs by reducing the amount of data traffic. However, it may not be very accurate. Simulations were made involving simple linear regression and multiple linear regression functions to assess the performance of the proposed method. The results show a higher correlation between gathered inputs when compared to time, which is an independent variable widely used for prediction and forecasting. Prediction accuracy is lower when simple linear regression is used, whereas multiple linear regression is the most accurate one. In addition to that, our proposal outperforms some current solutions by about 50% in humidity prediction and 21% in light prediction. To the best of our knowledge, we believe that we are probably the first to address prediction based on multivariate correlation for WSN data reduction. PMID:22346626

  11. Finite-sample and asymptotic sign-based tests for parameters of non-linear quantile regression with Markov noise

    NASA Astrophysics Data System (ADS)

    Sirenko, M. A.; Tarasenko, P. F.; Pushkarev, M. I.

    2017-01-01

    One of the most noticeable features of sign-based statistical procedures is an opportunity to build an exact test for simple hypothesis testing of parameters in a regression model. In this article, we expanded a sign-based approach to the nonlinear case with dependent noise. The examined model is a multi-quantile regression, which makes it possible to test hypotheses not only about regression parameters, but about noise parameters as well.

  12. Models for forecasting hospital bed requirements in the acute sector.

    PubMed Central

    Farmer, R D; Emami, J

    1990-01-01

    STUDY OBJECTIVE--The aim was to evaluate the current approach to forecasting hospital bed requirements. DESIGN--The study was a time series and regression analysis. The time series for mean duration of stay for general surgery in the age group 15-44 years (1969-1982) was used in the evaluation of different methods of forecasting future values of mean duration of stay and its subsequent use in the formation of hospital bed requirements. RESULTS--It has been suggested that the simple trend fitting approach suffers from model specification error and imposes unjustified restrictions on the data. Time series approach (Box-Jenkins method) was shown to be a more appropriate way of modelling the data. CONCLUSION--The simple trend fitting approach is inferior to the time series approach in modelling hospital bed requirements. PMID:2277253
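    The contrast the authors draw can be illustrated with statsmodels by fitting a straight trend line and a Box-Jenkins ARIMA model to the same series; the annual series below is synthetic, not the 1969-1982 data, and the ARIMA(1, 1, 0) order is just one plausible identification:

    ```python
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.tsa.arima.model import ARIMA

    # Synthetic annual mean-duration-of-stay series (days), drifting down with autocorrelated noise
    rng = np.random.default_rng(5)
    t = np.arange(14)
    stay = 10.0 - 0.3 * t + np.cumsum(rng.normal(0, 0.2, t.size))

    # Simple trend fitting: straight line in time
    trend_fit = sm.OLS(stay, sm.add_constant(t)).fit()
    print("trend forecast 3 years ahead:", trend_fit.predict(np.array([[1.0, 16.0]])))

    # Box-Jenkins alternative: ARIMA(1, 1, 0)
    arima_fit = ARIMA(stay, order=(1, 1, 0)).fit()
    print("ARIMA forecast 3 years ahead:", arima_fit.forecast(steps=3))
    ```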

  13. Simple and Multivariate Relationships Between Spiritual Intelligence with General Health and Happiness.

    PubMed

    Amirian, Mohammad-Elyas; Fazilat-Pour, Masoud

    2016-08-01

    The present study examined simple and multivariate relationships of spiritual intelligence with general health and happiness. The employed method was descriptive and correlational. King's Spiritual Quotient scales, the GHQ-28 and the Oxford Happiness Inventory were completed by a sample of 384 students, selected using stratified random sampling from the students of Shahid Bahonar University of Kerman. Data were subjected to descriptive and inferential statistics, including correlations and multivariate regressions. Bivariate correlations supported a positive and significant predictive value of spiritual intelligence for general health and happiness. Further analysis showed that, among the spiritual intelligence subscales, existential critical thinking predicted general health and happiness inversely. In addition, happiness was positively predicted by generation of personal meaning and transcendental awareness. The findings are discussed in line with previous studies and the relevant theoretical background.

  14. Participation and Performance Trends in Triple Iron Ultra-triathlon – a Cross-sectional and Longitudinal Data Analysis

    PubMed Central

    Rüst, Christoph Alexander; Knechtle, Beat; Knechtle, Patrizia; Rosemann, Thomas; Lepers, Romuald

    2012-01-01

    Purpose The aims of the present study were to investigate (i) the changes in participation and performance and (ii) the gender difference in Triple Iron ultra-triathlon (11.4 km swimming, 540 km cycling and 126.6 km running) across years from 1988 to 2011. Methods For the cross-sectional data analysis, the association between overall race times and split times was investigated using simple linear regression analyses and analysis of variance. For the longitudinal data analysis, the changes in race times for the five men and women with the highest number of participations were analysed using simple linear regression analyses. Results During the studied period, the numbers of finishers were 824 (71.4%) for men and 80 (78.4%) for women. Participation increased for men (r²=0.27, P<0.01) while it remained stable for women (8%). Total race times were 2,146 ± 127.3 min for men and 2,615 ± 327.2 min for women (P<0.001). Total race time decreased for men (r²=0.17; P=0.043), while it increased for women (r²=0.49; P=0.001) across years. The gender difference in overall race time for winners increased from 10% in 1992 to 42% in 2011 (r²=0.63; P<0.001). The longitudinal analysis of the five women and five men with the highest number of participations showed that performance decreased in one female (r²=0.45; P=0.01). The four other women as well as all five men showed no change in overall race times across years. Conclusions Participation increased and performance improved for male Triple Iron ultra-triathletes, while participation remained unchanged and performance decreased for females between 1988 and 2011. The reasons for the increasing gap between female and male Triple Iron ultra-triathletes need further investigation. PMID:23012633

  15. [Correlation between percentage of body fat and simple anthropometric parameters in children aged 6-9 years in Guangzhou].

    PubMed

    Yan, H C; Hao, Y T; Guo, Y F; Wei, Y H; Zhang, J H; Huang, G P; Mao, L M; Zhang, Z Q

    2017-11-10

    Objective: To evaluate the accuracy of simple anthropometric parameters in diagnosing obesity in children in Guangzhou. Methods: A cross-sectional study, including 465 children aged 6-9 years, was carried out in Guangzhou. Their body height and weight, waist circumference (WC) and hip circumference were measured according to standard procedures. Body mass index (BMI), waist-to-hip ratio (WHR) and waist-to-height ratio (WHtR) were calculated. Body fat percentage (BF%) was determined by dual-energy X-ray absorptiometry. Multiple regression analysis was applied to evaluate the correlations between the physical indicators and BF%, with adjustment for age. Obesity was defined by BF%. Receiver operating characteristic (ROC) curve analyses were performed to assess the diagnostic accuracy of the indicators for childhood obesity. Areas under the ROC curves (AUCs) were calculated and the best cut-off point, maximizing 'sensitivity + specificity - 1', was determined. Results: BMI showed the strongest association with BF% in the multiple regression analysis. For each standard deviation increase in BMI, BF% increased by 5.3% (t=23.1, P<0.01) in boys and 4.6% (t=17.5, P<0.01) in girls. The ROC curve analysis indicated that BMI exhibited the largest AUC in both boys (AUC=0.908) and girls (AUC=0.895). The sensitivity was 80.8% in boys and 81.8% in girls, and the specificity was 88.2% in boys and 87.1% in girls. The AUCs for WHtR and WC were both less than 0.8 in boys and girls. WHR had the smallest AUCs (<0.8) in both boys and girls. Conclusion: BMI appeared to be a good predictor of BF% in children aged 6-9 years in Guangzhou.
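
    For readers unfamiliar with the cut-off rule used above, the sketch below shows how a cut-point maximizing 'sensitivity + specificity - 1' (Youden's J) is read off a ROC curve. The data are simulated stand-ins, not the Guangzhou sample.

    ```python
    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    # Simulated data: obesity status (by BF%) and a BMI-like indicator
    rng = np.random.default_rng(2)
    obese = rng.integers(0, 2, 500)
    bmi = np.where(obese == 1, rng.normal(22, 2.5, 500), rng.normal(17, 2.0, 500))

    fpr, tpr, thresholds = roc_curve(obese, bmi)
    auc = roc_auc_score(obese, bmi)

    # Best cut-off maximizes Youden's J = sensitivity + specificity - 1 = tpr - fpr
    j = tpr - fpr
    best = np.argmax(j)
    print(f"AUC={auc:.3f}, cut-off={thresholds[best]:.1f}, "
          f"sensitivity={tpr[best]:.2f}, specificity={1 - fpr[best]:.2f}")
    ```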

  16. Satisfaction of active duty soldiers with family dental care.

    PubMed

    Chisick, M C

    1997-02-01

    In the fall of 1992, a random, worldwide sample of 6,442 married and single parent soldiers completed a self-administered survey on satisfaction with 22 attributes of family dental care. Simple descriptive statistics for each attribute were derived, as was a composite overall satisfaction score using factor analysis. Composite scores were regressed on demographics, annual dental utilization, and access barriers to identify those factors having an impact on a soldier's overall satisfaction with family dental care. Separate regression models were constructed for single parents, childless couples, and couples with children. Results show below-average satisfaction with nearly all attributes of family dental care, with access attributes having the lowest average satisfaction scores. Factors influencing satisfaction with family dental care varied by family type with one exception: dependent dental utilization within the past year contributed positively to satisfaction across all family types.

  17. Study of relationship between clinical factors and velopharyngeal closure in cleft palate patients

    PubMed Central

    Chen, Qi; Zheng, Qian; Shi, Bing; Yin, Heng; Meng, Tian; Zheng, Guang-ning

    2011-01-01

    BACKGROUND: This study was carried out to analyze the relationship between clinical factors and velopharyngeal closure (VPC) in cleft palate patients. METHODS: The chi-square test was used to compare postoperative velopharyngeal closure rates. A logistic regression model was used to analyze independent variables associated with velopharyngeal closure. RESULTS: Differences in the postoperative VPC rate across cleft types, operative ages and surgical techniques were significant (P=0.000). The logistic regression analysis suggested that when the operative age was beyond the deciduous dentition stage, the cleft palate was complete, or the patient had undergone a simple palatoplasty without levator veli palatini retropositioning, the velopharyngeal insufficiency rate after primary palatal repair was higher. CONCLUSIONS: Cleft type, operative age and surgical technique were contributing factors influencing the VPC rate after primary palatal repair in cleft palate patients. PMID:22279464

  18. Structured functional additive regression in reproducing kernel Hilbert spaces

    PubMed Central

    Zhu, Hongxiao; Yao, Fang; Zhang, Hao Helen

    2013-01-01

    Summary Functional additive models (FAMs) provide a flexible yet simple framework for regressions involving functional predictors. The use of a data-driven basis in an additive rather than linear structure naturally extends the classical functional linear model. However, the critical issue of selecting the nonlinear additive components has been less studied. In this work, we propose a new regularization framework for structure estimation in the context of reproducing kernel Hilbert spaces. The proposed approach takes advantage of functional principal components, which greatly facilitates implementation and theoretical analysis. Selection and estimation are achieved by penalized least squares, using a penalty which encourages a sparse structure of the additive components. Theoretical properties such as the rate of convergence are investigated. The empirical performance is demonstrated through simulation studies and a real data application. PMID:25013362

  19. Evaluation of the CEAS model for barley yields in North Dakota and Minnesota

    NASA Technical Reports Server (NTRS)

    Barnett, T. L. (Principal Investigator)

    1981-01-01

    The CEAS yield model is based upon multiple regression analysis at the CRD and state levels. For the historical time series, yield is regressed on a set of variables derived from monthly mean temperature and monthly precipitation. Technological trend is represented by piecewise linear and/or quadratic functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test (1970-79) demonstrated that biases are small and that performance, as indicated by the root mean square errors, is acceptable for the intended application; however, model response for individual years, particularly unusual years, is not very reliable and shows some large errors. The model is objective, adequate, timely, simple and not costly. It considers scientific knowledge on a broad scale but not in detail, and does not provide a good current measure of modeled yield reliability.

  20. The Effect of Education on Old Age Cognitive Abilities: Evidence from a Regression Discontinuity Design*

    PubMed Central

    Banks, James; Mazzonna, Fabrizio

    2011-01-01

    In this paper we exploit the 1947 change to the minimum school-leaving age in England from 14 to 15, to evaluate the causal effect of a year of education on cognitive abilities at older ages. We use a regression discontinuity design analysis and find a large and significant effect of the reform on males’ memory and executive functioning at older ages, using simple cognitive tests from the English Longitudinal Survey on Ageing (ELSA) as our outcome measures. This result is particularly remarkable since the reform had a powerful and immediate effect on about half the population of 14-year-olds. We investigate and discuss the potential channels by which this reform may have had its effects, as well as carrying out a full set of sensitivity analyses and robustness checks. PMID:22611283
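
    As a hedged, minimal illustration of a sharp regression discontinuity design of the kind described above, the sketch below regresses an outcome on a cut-off-defined treatment dummy, the centred running variable and their interaction. The variable names and simulated data are placeholders, not the ELSA data or the authors' specification.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    cohort = rng.uniform(-10, 10, 2000)          # birth cohort relative to the reform cut-off
    treated = (cohort >= 0).astype(float)        # exposed to the higher school-leaving age
    memory = 20 + 0.1 * cohort + 1.5 * treated + rng.normal(0, 2, 2000)  # cognitive score

    # RDD specification: treatment dummy, centred running variable, and their
    # interaction (allows different slopes on either side of the cut-off)
    X = sm.add_constant(np.column_stack([treated, cohort, treated * cohort]))
    fit = sm.OLS(memory, X).fit()
    print("estimated jump at the cut-off:", round(fit.params[1], 2))  # close to 1.5
    ```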

  1. A simple algorithm for the identification of clinical COPD phenotypes.

    PubMed

    Burgel, Pierre-Régis; Paillasseur, Jean-Louis; Janssens, Wim; Piquet, Jacques; Ter Riet, Gerben; Garcia-Aymerich, Judith; Cosio, Borja; Bakke, Per; Puhan, Milo A; Langhammer, Arnulf; Alfageme, Inmaculada; Almagro, Pere; Ancochea, Julio; Celli, Bartolome R; Casanova, Ciro; de-Torres, Juan P; Decramer, Marc; Echazarreta, Andrés; Esteban, Cristobal; Gomez Punter, Rosa Mar; Han, MeiLan K; Johannessen, Ane; Kaiser, Bernhard; Lamprecht, Bernd; Lange, Peter; Leivseth, Linda; Marin, Jose M; Martin, Francis; Martinez-Camblor, Pablo; Miravitlles, Marc; Oga, Toru; Sofia Ramírez, Ana; Sin, Don D; Sobradillo, Patricia; Soler-Cataluña, Juan J; Turner, Alice M; Verdu Rivera, Francisco Javier; Soriano, Joan B; Roche, Nicolas

    2017-11-01

    This study aimed to identify simple rules for allocating chronic obstructive pulmonary disease (COPD) patients to clinical phenotypes identified by cluster analyses. Data from 2409 COPD patients of French/Belgian COPD cohorts were analysed using cluster analysis, resulting in the identification of subgroups, for which clinical relevance was determined by comparing 3-year all-cause mortality. Classification and regression trees (CARTs) were used to develop an algorithm for allocating patients to these subgroups. This algorithm was tested in 3651 patients from the COPD Cohorts Collaborative International Assessment (3CIA) initiative. Cluster analysis identified five subgroups of COPD patients with different clinical characteristics (especially regarding severity of respiratory disease and the presence of cardiovascular comorbidities and diabetes). The CART-based algorithm indicated that the variables relevant for patient grouping differed markedly between patients with isolated respiratory disease (FEV1, dyspnoea grade) and those with multi-morbidity (dyspnoea grade, age, FEV1 and body mass index). Application of this algorithm to the 3CIA cohorts confirmed that it identified subgroups of patients with different clinical characteristics, mortality rates (median, from 4% to 27%) and age at death (median, from 68 to 76 years). A simple algorithm, integrating respiratory characteristics and comorbidities, allowed the identification of clinically relevant COPD phenotypes. Copyright ©ERS 2017.
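
    The sketch below illustrates, in a hedged way, how a classification and regression tree can turn subgroup labels into simple allocation rules. The variable names follow the abstract (FEV1, dyspnoea grade, age, BMI), but the data, labels and tree settings are invented for illustration and are not the authors' algorithm.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(4)
    n = 600
    X = np.column_stack([
        rng.normal(55, 20, n),     # FEV1 (% predicted)
        rng.integers(0, 5, n),     # dyspnoea grade
        rng.normal(65, 10, n),     # age
        rng.normal(26, 5, n),      # body mass index
    ])
    # Hypothetical phenotype labels standing in for the cluster-analysis subgroups
    phenotype = (X[:, 0] < 50).astype(int) + 2 * (X[:, 3] > 30).astype(int)

    cart = DecisionTreeClassifier(max_depth=3, min_samples_leaf=30).fit(X, phenotype)
    print(export_text(cart, feature_names=["FEV1", "dyspnoea", "age", "BMI"]))
    ```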

  2. Simple Food Group Diversity Indicators Predict Micronutrient Adequacy of Women’s Diets in 5 Diverse, Resource-Poor Settings

    PubMed Central

    Arimond, Mary; Wiesmann, Doris; Becquey, Elodie; Carriquiry, Alicia; Daniels, Melissa C.; Deitchler, Megan; Fanou-Fogny, Nadia; Joseph, Maria L.; Kennedy, Gina; Martin-Prevel, Yves; Torheim, Liv Elin

    2010-01-01

    Women of reproductive age living in resource-poor settings are at high risk of inadequate micronutrient intakes when diets lack diversity and are dominated by staple foods. Yet comparative information on diet quality is scarce and quantitative data on nutrient intakes is expensive and difficult to gather. We assessed the potential of simple indicators of dietary diversity, such as could be generated from large household surveys, to serve as proxy indicators of micronutrient adequacy for population-level assessment. We used 5 existing data sets (from Burkina Faso, Mali, Mozambique, Bangladesh, and the Philippines) with repeat 24-h recalls to construct 8 candidate food group diversity indicators (FGI) and to calculate the mean probability of adequacy (MPA) for 11 micronutrients. FGI varied in food group disaggregation and in minimum consumption required for a food group to count. There were large gaps between intakes and requirements across a range of micronutrients in each site. All 8 FGI were correlated with MPA in all sites; regression analysis confirmed that associations remained when controlling for energy intake. Assessment of dichotomous indicators through receiver-operating characteristic analysis showed moderate predictive strength for the best choice indicators, which varied by site. Simple FGI hold promise as proxy indicators of micronutrient adequacy. PMID:20881077

  3. [Determination of Bloodstain Age by UV Visible Integrating Sphere Reflection Spectrum].

    PubMed

    Yan, L Q; Gao, Y

    2016-10-01

    To establish a method for the rapid identification of bloodstain age. Under laboratory conditions (20 ℃, 25 ℃ and 30 ℃), an integrating sphere ISR-240A was used as a reflection accessory on a UV-2450 UV-Vis spectrophotometer, with a standard white board of BaSO₄ as reference, and the reflection spectra of bloodstains from human ear venous blood were measured at regular intervals. The reflectances R₅₄₁ and R₅₇₇ at the specific wavelengths were collected and the value of R₅₄₁/R₅₇₇ was calculated. Linear fitting and regression analysis were done with SPSS 17.0. The regression analysis showed that the R² values relating bloodstain age to the UV-visible reflectance ratio at these wavelengths were larger than 0.8 within 8 hours under the stated conditions. The regression equation was established. Bloodstain age showed a significant correlation with the value of R₅₄₁/R₅₇₇. The method is simple, rapid, nondestructive and reliable, and can be used to estimate bloodstain age within 8 hours under laboratory conditions. Copyright© by the Editorial Department of Journal of Forensic Medicine

  4. Estimating soil temperature using neighboring station data via multi-nonlinear regression and artificial neural network models.

    PubMed

    Bilgili, Mehmet; Sahin, Besir; Sangun, Levent

    2013-01-01

    The aim of this study is to estimate the soil temperatures of a target station using only the soil temperatures of neighboring stations without any consideration of the other variables or parameters related to soil properties. For this aim, the soil temperatures were measured at depths of 5, 10, 20, 50, and 100 cm below the earth surface at eight measuring stations in Turkey. Firstly, the multiple nonlinear regression analysis was performed with the "Enter" method to determine the relationship between the values of target station and neighboring stations. Then, the stepwise regression analysis was applied to determine the best independent variables. Finally, an artificial neural network (ANN) model was developed to estimate the soil temperature of a target station. According to the derived results for the training data set, the mean absolute percentage error and correlation coefficient ranged from 1.45% to 3.11% and from 0.9979 to 0.9986, respectively, while corresponding ranges of 1.685-3.65% and 0.9988-0.9991, respectively, were obtained based on the testing data set. The obtained results show that the developed ANN model provides a simple and accurate prediction to determine the soil temperature. In addition, the missing data at the target station could be determined within a high degree of accuracy.

  5. Ridge Regression for Interactive Models.

    ERIC Educational Resources Information Center

    Tate, Richard L.

    1988-01-01

    An exploratory study of the value of ridge regression for interactive models is reported. Assuming that the linear terms in a simple interactive model are centered to eliminate non-essential multicollinearity, a variety of common models, representing both ordinal and disordinal interactions, are shown to have "orientations" that are…
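
    As a hedged sketch of the centring idea referred to above, the snippet below fits an interactive model with ridge regression after centring the linear terms, which removes the non-essential multicollinearity between the main effects and their product. The penalty value and simulated data are illustrative assumptions only.

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(5)
    x1 = rng.normal(size=300)
    x2 = rng.normal(size=300)
    y = 1.0 + 0.5 * x1 + 0.3 * x2 + 0.8 * x1 * x2 + rng.normal(0, 1, 300)

    # Centre the linear terms before forming the product term, then apply ridge
    x1c, x2c = x1 - x1.mean(), x2 - x2.mean()
    X = np.column_stack([x1c, x2c, x1c * x2c])
    print(Ridge(alpha=1.0).fit(X, y).coef_.round(2))
    ```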

  6. Additivity of nonlinear biomass equations

    Treesearch

    Bernard R. Parresol

    2001-01-01

    Two procedures that guarantee the property of additivity among the components of tree biomass and total tree biomass utilizing nonlinear functions are developed. Procedure 1 is a simple combination approach, and procedure 2 is based on nonlinear joint-generalized regression (nonlinear seemingly unrelated regressions) with parameter restrictions. Statistical theory is...

  7. Simplified large African carnivore density estimators from track indices.

    PubMed

    Winterbach, Christiaan W; Ferreira, Sam M; Funston, Paul J; Somers, Michael J

    2016-01-01

    The range, population size and trend of large carnivores are important parameters for assessing their status globally and planning conservation strategies. Linear models can be used to assess population size and trends of large carnivores from track-based surveys on suitable substrates. A conventional linear model with an intercept need not pass through zero, but it may fit the data better than a linear model through the origin. We assess whether a linear regression through the origin is more appropriate than a linear regression with intercept to model large African carnivore densities and track indices. We did simple linear regression with intercept and simple linear regression through the origin, and used the confidence interval for β in the linear model y = αx + β, the Standard Error of Estimate, the Mean Square Residual and the Akaike Information Criterion to evaluate the models. The Lion on Clay and Low Density on Sand models with intercept were not significant (P > 0.05). The other four models with intercept and the six models through the origin were all significant (P < 0.05). The models using linear regression with intercept all included zero in the confidence interval for β, and the null hypothesis that β = 0 could not be rejected. All models showed that the linear model through the origin provided a better fit than the linear model with intercept, as indicated by the Standard Error of Estimate and Mean Square Residuals. The Akaike Information Criterion showed that the linear models through the origin were better and that none of the linear models with intercept had substantial support. Our results show that linear regression through the origin is justified over the more typical linear regression with intercept for all models we tested. A general model can be used to estimate large carnivore densities from track densities across species and study areas. The formula observed track density = 3.26 × carnivore density can be used to estimate densities of large African carnivores using track counts on sandy substrates in areas where carnivore densities are 0.27 carnivores/100 km² or higher. To improve the current models, we need independent data to validate them and data to test for a non-linear relationship between track indices and true density at low densities.
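
    A minimal, hedged sketch of the model comparison described above is given below: it fits a line with an intercept and a line through the origin to simulated track-index data and compares them with a Gaussian AIC. The simulated densities and the AIC formula used here are illustrative choices, not the published survey data.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    density = rng.uniform(0.3, 3.0, 40)                  # carnivores / 100 km^2
    tracks = 3.26 * density + rng.normal(0, 0.4, 40)     # observed track density

    def fit_and_aic(X, y):
        """Least-squares fit plus Gaussian AIC = n*log(RSS/n) + 2k."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        n, k = len(y), X.shape[1]
        return beta, n * np.log(rss / n) + 2 * k

    with_icpt, aic_icpt = fit_and_aic(np.column_stack([np.ones_like(density), density]), tracks)
    thru_orig, aic_orig = fit_and_aic(density.reshape(-1, 1), tracks)
    print("with intercept:", with_icpt.round(2), "AIC:", round(aic_icpt, 1))
    print("through origin:", thru_orig.round(2), "AIC:", round(aic_orig, 1))
    ```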

  8. Resting electrocardiogram and stress myocardial perfusion imaging in the determination of left ventricular systolic function: an assessment enhancing the performance of gated SPET.

    PubMed

    Moralidis, Efstratios; Spyridonidis, Tryfon; Arsos, Georgios; Skeberis, Vassilios; Anagnostopoulos, Constantinos; Gavrielidis, Stavros

    2010-01-01

    This study aimed to determine systolic dysfunction and estimate resting left ventricular ejection fraction (LVEF) from information collected during routine evaluation of patients with suspected or known coronary heart disease. This approach was then compared to gated single photon emission tomography (SPET). Patients having undergone stress ²⁰¹Tl myocardial perfusion imaging followed by equilibrium radionuclide angiography (ERNA) were separated into derivation (n=954) and validation (n=309) groups. Logistic regression analysis was used to develop scoring systems, containing clinical, electrocardiographic (ECG) and scintigraphic data, for the discrimination of an ERNA-LVEF<0.50. Linear regression analysis provided equations predicting ERNA-LVEF from those scores. In 373 patients LVEF was also assessed with ²⁰¹Tl gated SPET. Our results showed that an ECG-Scintigraphic scoring system was the best simple predictor of an ERNA-LVEF<0.50 in comparison to other models including ECG, clinical and scintigraphic variables in both the derivation and validation subpopulations. A simple linear equation was derived also for the assessment of resting LVEF from the ECG-Scintigraphic model. Equilibrium radionuclide angiography-LVEF had a good correlation with the ECG-Scintigraphic model LVEF (r=0.716, P=0.000), ²⁰¹Tl gated SPET LVEF (r=0.711, P=0.000) and the average LVEF from those assessments (r=0.796, P=0.000). The Bland-Altman statistic (mean ± 2 SD) provided values of 0.001 ± 0.176, 0.071 ± 0.196 and 0.040 ± 0.152, respectively. The average LVEF was a better discriminator of systolic dysfunction than gated SPET-LVEF in receiver operating characteristic (ROC) analysis and identified more patients (89%) with a

  9. Maintenance energy requirements in miniature colony dogs.

    PubMed

    Serisier, S; Weber, M; Feugier, A; Fardet, M-O; Garnier, F; Biourge, V; German, A J

    2013-05-01

    There are numerous reports of maintenance energy requirements (MER) in dogs, but little information is available about the energy requirements of miniature dog breeds. In this prospective, observational cohort study, we aimed to determine MER in dogs from a number of miniature breeds and to determine which factors were associated with it. Forty-two dogs participated in the study. MER was calculated by determining daily energy intake (EI) during a period of 196 days (28-359 days) when body weight did not change significantly (e.g. ±2% in 12 weeks). Estimated median MER was 473 kJ/kg^0.75/day (285-766 kJ/kg^0.75/day), that is, a median of 113 kcal/kg^0.75/day (68-183 kcal/kg^0.75/day). In the obese dogs that lost weight, median MER after weight loss was completed was 360 kJ/kg^0.75/day (285-515 kJ/kg^0.75/day), that is, 86 kcal/kg^0.75/day (68-123 kcal/kg^0.75/day). Simple linear regression analysis suggested that three breeds (Chihuahua, p = 0.002; Yorkshire terrier, p = 0.039; dachshund, p = 0.035) had an effect on MER. In addition to breed, simple linear regression revealed that neuter status (p = 0.079) and having previously been overweight (p = 0.002) were also of significance. However, with multiple linear regression analysis, only previous overweight status (MER less in dogs previously overweight, p = 0.008) and breed (MER greater in Yorkshire terriers [p = 0.029] and less in Chihuahuas [p = 0.089]) remained in the final model. This study is the first to estimate MER in dogs of miniature breeds. Although further information from pet dogs is now needed, the current work will be useful for setting energy and nutrient requirements in such dogs in the future. Journal of Animal Physiology and Animal Nutrition © 2013 Blackwell Verlag GmbH.

  10. Simple to complex modeling of breathing volume using a motion sensor.

    PubMed

    John, Dinesh; Staudenmayer, John; Freedson, Patty

    2013-06-01

    To compare simple and complex modeling techniques for estimating categories of low, medium, and high ventilation (VE) from ActiGraph™ activity counts. Vertical axis ActiGraph™ GT1M activity counts, oxygen consumption and VE were measured during treadmill walking and running, sports, household chores and labor-intensive employment activities. Categories of low (<19.3 l/min), medium (19.3 to 35.4 l/min) and high (>35.4 l/min) VE were derived from activity intensity classifications (light <2.9 METs, moderate 3.0 to 5.9 METs and vigorous >6.0 METs). We examined the accuracy of two simple techniques (multiple regression and activity count cut-point analyses) and one complex modeling technique (random forest) in predicting VE from activity counts. Prediction accuracy of the complex random forest technique was marginally better than that of the simple multiple regression method. Both techniques accurately predicted VE categories almost 80% of the time. The multiple regression and random forest techniques were more accurate (85 to 88%) in predicting medium VE. Both techniques predicted high VE (70 to 73%) with greater accuracy than low VE (57 to 60%). ActiGraph™ cut-points for low, medium and high VE were <1381, 1381 to 3660 and >3660 cpm. There were minor differences in prediction accuracy between the multiple regression and the random forest technique. This study provides methods to objectively estimate VE categories using activity monitors that can easily be deployed in the field. Objective estimates of VE should provide a better understanding of the dose-response relationship between internal exposure to pollutants and disease. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. Logits and Tigers and Bears, Oh My! A Brief Look at the Simple Math of Logistic Regression and How It Can Improve Dissemination of Results

    ERIC Educational Resources Information Center

    Osborne, Jason W.

    2012-01-01

    Logistic regression is slowly gaining acceptance in the social sciences, and fills an important niche in the researcher's toolkit: being able to predict important outcomes that are not continuous in nature. While OLS regression is a valuable tool, it cannot routinely be used to predict outcomes that are binary or categorical in nature. These…
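
    As a hedged illustration of the "simple math" the title refers to, the snippet below converts a logistic-regression linear predictor (logit) into an odds ratio and a predicted probability. The coefficients are invented for illustration and do not come from any study in this list.

    ```python
    import math

    def logit_to_probability(intercept, coef, x):
        """Convert a logistic-regression linear predictor (logit) into a
        predicted probability: p = exp(b0 + b1*x) / (1 + exp(b0 + b1*x))."""
        log_odds = intercept + coef * x
        odds = math.exp(log_odds)
        return odds / (1 + odds)

    # Illustrative coefficients only
    b0, b1 = -1.2, 0.8
    print("odds ratio per unit increase:", round(math.exp(b1), 2))
    print("predicted probability at x=2:", round(logit_to_probability(b0, b1, 2), 2))
    ```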

  12. A Method for Assessing the Quality of Model-Based Estimates of Ground Temperature and Atmospheric Moisture Using Satellite Data

    NASA Technical Reports Server (NTRS)

    Wu, Man Li C.; Schubert, Siegfried; Lin, Ching I.; Stajner, Ivanka; Einaudi, Franco (Technical Monitor)

    2000-01-01

    A method is developed for validating model-based estimates of atmospheric moisture and ground temperature using satellite data. The approach relates errors in estimates of clear-sky longwave fluxes at the top of the Earth-atmosphere system to errors in geophysical parameters. The fluxes include clear-sky outgoing longwave radiation (CLR) and radiative flux in the window region between 8 and 12 microns (RadWn). The approach capitalizes on the availability of satellite estimates of CLR and RadWn and other auxiliary satellite data, and multiple global four-dimensional data assimilation (4-DDA) products. The basic methodology employs off-line forward radiative transfer calculations to generate synthetic clear-sky longwave fluxes from two different 4-DDA data sets. Simple linear regression is used to relate the clear-sky longwave flux discrepancies to discrepancies in ground temperature (ΔTg) and broad-layer integrated atmospheric precipitable water (Δpw). The slopes of the regression lines define sensitivity parameters which can be exploited to help interpret mismatches between satellite observations and model-based estimates of clear-sky longwave fluxes. For illustration we analyze the discrepancies in the clear-sky longwave fluxes between an early implementation of the Goddard Earth Observing System Data Assimilation System (GEOS2) and a recent operational version of the European Centre for Medium-Range Weather Forecasts data assimilation system. The analysis of the synthetic clear-sky flux data shows that simple linear regression employing ΔTg and broad-layer Δpw provides a good approximation to the full radiative transfer calculations, typically explaining more than 90% of the 6-hourly variance in the flux differences. These simple regression relations can be inverted to "retrieve" the errors in the geophysical parameters. Uncertainties (normalized by standard deviation) in the monthly mean retrieved parameters range from 7% for ΔTg to approximately 20% for the lower tropospheric moisture between 500 hPa and the surface. The regression relationships developed from the synthetic flux data, together with CLR and RadWn observed with the Clouds and Earth Radiant Energy System instrument, are used to assess the quality of the GEOS2 Tg and pw. Results showed that the GEOS2 Tg is too cold over land, and pw in upper layers is too high over the tropical oceans and too low in the lower atmosphere.

  13. Indiana chronic disease management program risk stratification analysis.

    PubMed

    Li, Jingjin; Holmes, Ann M; Rosenman, Marc B; Katz, Barry P; Downs, Stephen M; Murray, Michael D; Ackermann, Ronald T; Inui, Thomas S

    2005-10-01

    The objective of this study was to compare the ability of risk stratification models derived from administrative data to classify groups of patients for enrollment in a tailored chronic disease management program. This study included 19,548 Medicaid patients with chronic heart failure or diabetes in the Indiana Medicaid data warehouse during 2001 and 2002. To predict costs (total claims paid) in FY 2002, we considered candidate predictor variables available in FY 2001, including patient characteristics, the number and type of prescription medications, laboratory tests, pharmacy charges, and utilization of primary, specialty, inpatient, emergency department, nursing home, and home health care. We built prospective models to identify patients with different levels of expenditure. Model fit was assessed using R statistics, whereas discrimination was assessed using the weighted kappa statistic, predictive ratios, and the area under the receiver operating characteristic curve. We found that a simple least-squares regression model, in which logged total charges in FY 2002 were regressed on the log of total charges in FY 2001, the number of prescriptions filled in FY 2001, and the FY 2001 eligibility category, performed as well as more complex models. This simple 3-parameter model had an R of 0.30 and, in terms of classification efficiency, had a sensitivity of 0.57, a specificity of 0.90, an area under the receiver operator curve of 0.80, and a weighted kappa statistic of 0.51. This simple model based on readily available administrative data stratified Medicaid members according to predicted future utilization as well as more complicated models did.

  14. Robust Bayesian linear regression with application to an analysis of the CODATA values for the Planck constant

    NASA Astrophysics Data System (ADS)

    Wübbeler, Gerd; Bodnar, Olha; Elster, Clemens

    2018-02-01

    Weighted least-squares estimation is commonly applied in metrology to fit models to measurements that are accompanied with quoted uncertainties. The weights are chosen in dependence on the quoted uncertainties. However, when data and model are inconsistent in view of the quoted uncertainties, this procedure does not yield adequate results. When it can be assumed that all uncertainties ought to be rescaled by a common factor, weighted least-squares estimation may still be used, provided that a simple correction of the uncertainty obtained for the estimated model is applied. We show that these uncertainties and credible intervals are robust, as they do not rely on the assumption of a Gaussian distribution of the data. Hence, common software for weighted least-squares estimation may still safely be employed in such a case, followed by a simple modification of the uncertainties obtained by that software. We also provide means of checking the assumptions of such an approach. The Bayesian regression procedure is applied to analyze the CODATA values for the Planck constant published over the past decades in terms of three different models: a constant model, a straight line model and a spline model. Our results indicate that the CODATA values may not have yet stabilized.
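
    The sketch below is a hedged illustration of the idea described above: a weighted least-squares estimate of a constant from values with quoted uncertainties, followed by a simple rescaling of the resulting uncertainty by the square root of the reduced chi-square (often called the Birge ratio) when data and model are inconsistent. The numbers are invented, not the CODATA values, and this is not the authors' code.

    ```python
    import numpy as np

    # Illustrative measured values and quoted standard uncertainties (not CODATA data)
    y = np.array([6.62606957, 6.62607015, 6.62607004, 6.62606891])
    u = np.array([0.00000029, 0.00000013, 0.00000012, 0.00000020])

    w = 1.0 / u**2
    est = np.sum(w * y) / np.sum(w)                 # weighted least-squares (constant model)
    u_est = np.sqrt(1.0 / np.sum(w))                # uncertainty assuming consistent data

    # Simple correction when data and model are inconsistent: rescale by the
    # square root of the reduced chi-square (Birge ratio), if it exceeds one
    chi2_red = np.sum(w * (y - est) ** 2) / (len(y) - 1)
    u_corrected = u_est * np.sqrt(max(chi2_red, 1.0))

    print(f"estimate={est:.8f}, u={u_est:.8f}, corrected u={u_corrected:.8f}")
    ```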

  15. Predicting the need for muscle flap salvage after open groin vascular procedures: a clinical assessment tool.

    PubMed

    Fischer, John P; Nelson, Jonas A; Shang, Eric K; Wink, Jason D; Wingate, Nicholas A; Woo, Edward Y; Jackson, Benjamin M; Kovach, Stephen J; Kanchwala, Suhail

    2014-12-01

    Groin wound complications after open vascular surgery procedures are common, morbid, and costly. The purpose of this study was to generate a simple, validated, clinically usable risk assessment tool for predicting groin wound morbidity after infra-inguinal vascular surgery. A retrospective review of consecutive patients undergoing groin cutdowns for femoral access between 2005 and 2011 was performed. Patients necessitating salvage flaps were compared with those who did not, and a stepwise logistic regression was performed and validated using a bootstrap technique. Utilising this analysis, a simplified risk score was developed to predict the risk of developing a wound which would necessitate salvage. A total of 925 patients were included in the study. The salvage flap rate was 11.2% (n = 104). Predictors determined by logistic regression included prior groin surgery (OR = 4.0, p < 0.001), prosthetic graft (OR = 2.7, p < 0.001), coronary artery disease (OR = 1.8, p = 0.019), peripheral arterial disease (OR = 5.0, p < 0.001), and obesity (OR = 1.7, p = 0.039). Based upon the respective logistic coefficients, a simplified scoring system was developed to enable preoperative risk stratification regarding the likelihood of a significant complication which would require a salvage muscle flap. The c-statistic for the regression demonstrated excellent discrimination at 0.89. This study presents a simple, internally validated risk assessment tool that accurately predicts wound morbidity requiring flap salvage in open groin vascular surgery patients. The preoperatively high-risk patient can be identified and selectively targeted as a candidate for a prophylactic muscle flap.

  16. A simple method for processing data with least square method

    NASA Astrophysics Data System (ADS)

    Wang, Chunyan; Qi, Liqun; Chen, Yongxiang; Pang, Guangning

    2017-08-01

    The least squares method is widely used in data processing and error estimation. It has become an essential technique for parameter estimation, data processing, regression analysis and experimental data fitting, and a standard tool for statistical inference. In measurement data analysis, complicated relationships are usually fitted according to the least squares principle, i.e., matrix methods are used to obtain the final estimate and to improve its accuracy. In this paper, a new method for solving the least squares problem is presented; it is based on algebraic computation and is relatively straightforward and easy to understand. The practicability of this method is illustrated with a concrete example.
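
    For reference, the snippet below shows the standard matrix route to ordinary least squares via the normal equations, which is the textbook approach this paper contrasts with its algebraic shortcut. The authors' own method is not reproduced here, and the data are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    x = np.linspace(0, 10, 30)
    y = 2.0 + 1.5 * x + rng.normal(0, 0.5, 30)

    # Design matrix for a straight line, then solve (X^T X) b = X^T y
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    print("intercept, slope:", beta.round(3))
    ```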

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, J.; Moon, T.J.; Howell, J.R.

    This paper presents an analysis of the heat transfer occurring during an in-situ curing process for which infrared energy is provided on the surface of the polymer composite during winding. The material system is Hercules prepreg AS4/3501-6. Thermoset composites have an exothermic chemical reaction during the curing process. An Eulerian thermochemical model is developed for the heat transfer analysis of helical winding. The model incorporates heat generation due to the chemical reaction. Several assumptions are made leading to a two-dimensional thermochemical model. For simplicity, 360° heating around the mandrel is considered. In order to generate the appropriate process windows, the developed heat transfer model is combined with a simple winding time model. The process windows allow for a proper selection of process variables such as infrared energy input and winding velocity to give a desired end-product state. Steady-state temperatures are found for each combination of the process variables. A regression analysis is carried out to relate the process variables to the resulting steady-state temperatures. Using regression equations, process windows for a wide range of cylinder diameters are found. A general procedure to find process windows for Hercules AS4/3501-6 prepreg tape is coded in a FORTRAN program.

  18. Simple prediction method of lumbar lordosis for planning of lumbar corrective surgery: radiological analysis in a Korean population.

    PubMed

    Lee, Chong Suh; Chung, Sung Soo; Park, Se Jun; Kim, Dong Min; Shin, Seong Kee

    2014-01-01

    This study aimed at deriving a lordosis predictive equation using the pelvic incidence and establishing a simple method of predicting lumbar lordosis for planning lumbar corrective surgery in Asians. Eighty-six asymptomatic volunteers were enrolled in the study. The maximal lumbar lordosis (MLL), lower lumbar lordosis (LLL), pelvic incidence (PI), and sacral slope (SS) were measured. The correlations between the parameters were analyzed using Pearson correlation analysis. Predictive equations of lumbar lordosis were derived through simple regression analysis of the parameters, along with simple predictive values of lumbar lordosis using PI. The PI strongly correlated with the SS (r = 0.78), and a strong correlation was found between the SS and LLL (r = 0.89), and between the SS and MLL (r = 0.83). Based on these correlations, the predictive equations of lumbar lordosis were found: SS = 0.80 + 0.74 PI (r = 0.78, R² = 0.61), LLL = 5.20 + 0.87 SS (r = 0.89, R² = 0.80), MLL = 17.41 + 0.96 SS (r = 0.83, R² = 0.68). When PI was between 30° and 35°, 40° and 50°, and 55° and 60°, the equations predicted that MLL would be PI + 10°, PI + 5° and PI, and that LLL would be PI - 5°, PI - 10° and PI - 15°, respectively. This simple calculation method can provide a more appropriate and simpler prediction of lumbar lordosis for Asian populations. The prediction of lumbar lordosis should be used as a reference for surgeons planning to restore lumbar lordosis in lumbar corrective surgery.
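
    Because the abstract reports the fitted equations explicitly, they can be chained in a small helper for illustration. This is only a sketch of applying the published regression equations, not clinical software, and the example input is arbitrary.

    ```python
    def predict_lumbar_lordosis(pelvic_incidence_deg):
        """Chain the regression equations reported in the abstract:
        PI -> SS, then SS -> LLL and MLL. All angles are in degrees."""
        ss = 0.80 + 0.74 * pelvic_incidence_deg
        lll = 5.20 + 0.87 * ss
        mll = 17.41 + 0.96 * ss
        return {"SS": round(ss, 1), "LLL": round(lll, 1), "MLL": round(mll, 1)}

    # Example: a pelvic incidence of 50 degrees; MLL comes out near PI + 5,
    # consistent with the simplified rule quoted in the abstract
    print(predict_lumbar_lordosis(50))
    ```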

  19. The impact of energy, agriculture, macroeconomic and human-induced indicators on environmental pollution: evidence from Ghana.

    PubMed

    Asumadu-Sarkodie, Samuel; Owusu, Phebe Asantewaa

    2017-03-01

    In this study, the impact of energy, agriculture, macroeconomic and human-induced indicators on environmental pollution from 1971 to 2011 is investigated using the statistically inspired modification of partial least squares (SIMPLS) regression model. There was evidence of a linear relationship between energy, agriculture, macroeconomic and human-induced indicators and carbon dioxide emissions. Evidence from the SIMPLS regression shows that a 1% increase in crop production index will reduce carbon dioxide emissions by 0.71%. Economic growth increased by 1% will reduce carbon dioxide emissions by 0.46%, which means that an increase in Ghana's economic growth may lead to a reduction in environmental pollution. The increase in electricity production from hydroelectric sources by 1% will reduce carbon dioxide emissions by 0.30%; thus, increasing renewable energy sources in Ghana's energy portfolio will help mitigate carbon dioxide emissions. Increasing enteric emissions by 1% will increase carbon dioxide emissions by 4.22%, and a 1% increase in the nitrogen content of manure management will increase carbon dioxide emissions by 6.69%. The SIMPLS regression forecasting exhibited a 5% MAPE from the prediction of carbon dioxide emissions.
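
    For a rough sense of partial least squares regression on collinear indicators, the sketch below uses scikit-learn's PLSRegression, which implements a NIPALS-type algorithm rather than the SIMPLS variant named above. The columns are random placeholders standing in for the energy, agriculture, macroeconomic and human-induced indicators, so neither the data nor the coefficients correspond to the Ghana study.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(8)
    # Placeholder predictor matrix: 41 "years" by 5 indicator-like columns
    X = rng.normal(size=(41, 5))
    co2 = X @ np.array([-0.7, -0.5, -0.3, 4.2, 6.7]) + rng.normal(0, 0.5, 41)

    pls = PLSRegression(n_components=2).fit(X, co2)
    print("explained R^2:", round(pls.score(X, co2), 2))
    print("regression coefficients:", pls.coef_.ravel().round(2))
    ```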

  20. Role of anthropometric data in the prediction of 4-stranded hamstring graft size in anterior cruciate ligament reconstruction.

    PubMed

    Ho, Sean Wei Loong; Tan, Teong Jin Lester; Lee, Keng Thiam

    2016-03-01

    To evaluate whether pre-operative anthropometric data can predict the optimal diameter and length of hamstring tendon autograft for anterior cruciate ligament (ACL) reconstruction. This was a cohort study involving 169 patients who underwent single-bundle ACL reconstruction (single surgeon) with 4-stranded MM Gracilis and MM Semi-Tendinosus autografts. Height, weight, body mass index (BMI), gender, race, age and smoking status were recorded pre-operatively. Intra-operatively, the diameter and functional length of the 4-stranded autograft were recorded. Multiple regression analysis was used to determine the relationship between the anthropometric measurements and the length and diameter of the implanted autografts. The strongest correlates of 4-stranded hamstring autograft diameter were height and weight. This correlation was stronger in females than males. BMI had a moderate correlation with graft diameter in females. Females had significantly smaller grafts, in both diameter and length, than males. Linear regression models did not show any significant correlation between hamstring autograft length and height or weight (p>0.05). Simple regression analysis demonstrated that height and weight can be used to predict hamstring graft diameter. The following regression equation was obtained for females: Graft diameter = 0.012 + 0.034*Height + 0.026*Weight (R² = 0.358, p = 0.004). The following regression equation was obtained for males: Graft diameter = 5.130 + 0.012*Height + 0.007*Weight (R² = 0.086, p = 0.002). Pre-operative anthropometric data have a positive correlation with the diameter of 4-stranded hamstring autografts but no significant correlation with the length. These data can be utilised to predict the autograft diameter and may be useful for pre-operative planning and patient counselling for graft selection.

  1. An integrated Gaussian process regression for prediction of remaining useful life of slow speed bearings based on acoustic emission

    NASA Astrophysics Data System (ADS)

    Aye, S. A.; Heyns, P. S.

    2017-02-01

    This paper proposes an optimal Gaussian process regression (GPR) for the prediction of remaining useful life (RUL) of slow speed bearings based on a novel degradation assessment index obtained from the acoustic emission signal. The optimal GPR is obtained from an integration or combination of existing simple mean and covariance functions in order to capture the observed trend of the bearing degradation as well as the irregularities in the data. The resulting integrated GPR model provides an excellent fit to the data and improves over the simple GPR models that are based on simple mean and covariance functions. In addition, it achieves a low percentage error in predicting the remaining useful life of slow speed bearings. These findings are robust under varying operating conditions such as loading and speed, and can be applied to nonlinear and nonstationary machine response signals, which is useful for effective preventive machine maintenance.
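
    In the spirit of combining simple covariance functions, the hedged sketch below fits a Gaussian process whose kernel sums a linear-trend term, a smooth local-variation term and a noise term, then predicts ahead with uncertainty. The kernel choices, the degradation index and the data are illustrative assumptions, not the authors' integrated model.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel, DotProduct

    rng = np.random.default_rng(9)
    t = np.linspace(0, 10, 60).reshape(-1, 1)                  # operating time
    degradation = 0.3 * t.ravel() + 0.4 * np.sin(t.ravel()) + rng.normal(0, 0.1, 60)

    # Combined covariance: linear trend + smooth local variation + observation noise
    kernel = DotProduct() + RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, degradation)

    t_future = np.array([[11.0], [12.0]])
    mean, std = gpr.predict(t_future, return_std=True)
    print(np.round(mean, 2), np.round(std, 2))
    ```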

  2. Assessing the effect of a partly unobserved, exogenous, binary time-dependent covariate on survival probabilities using generalised pseudo-values.

    PubMed

    Pötschger, Ulrike; Heinzl, Harald; Valsecchi, Maria Grazia; Mittlböck, Martina

    2018-01-19

    Investigating the impact of a time-dependent intervention on the probability of long-term survival is statistically challenging. A typical example is stem-cell transplantation performed after successful donor identification from registered donors. Here, a simple analysis based on the exogenous donor availability status according to registered donors would allow the estimation and comparison of survival probabilities. As donor search is usually ceased after a patient's event, donor availability status is incompletely observed, so that this simple comparison is not possible and the waiting time to donor identification needs to be addressed in the analysis to avoid bias. It is methodologically unclear how to directly address cumulative long-term treatment effects without relying on proportional hazards while avoiding waiting time bias. The pseudo-value regression technique is able to handle the first two issues; a novel generalisation of this technique also avoids waiting time bias. Inverse-probability-of-censoring weighting is used to account for the partly unobserved exogenous covariate donor availability. Simulation studies demonstrate unbiasedness and satisfactory coverage probabilities of the new method. A real data example demonstrates that study results based on generalised pseudo-values have a clear medical interpretation which supports the clinical decision-making process. The proposed generalisation of the pseudo-value regression technique enables the comparison of survival probabilities between two independent groups where group membership becomes known over time and remains partly unknown. Hence, cumulative long-term treatment effects are directly addressed without relying on proportional hazards while avoiding waiting time bias.

  3. Circulating CD34-Positive Cells Are Associated with Handgrip Strength in Japanese Older Men: The Nagasaki Islands Study.

    PubMed

    Yamanashi, H; Shimizu, Y; Koyamatsu, J; Nagayoshi, M; Kadota, K; Tamai, M; Maeda, T

    2017-01-01

    Handgrip strength is a simple measurement of overall muscular strength and is used to detect sarcopenia. It also predicts adverse events in later life. Many mechanisms of sarcopenia development have been reported. A hypertensive status impairs endothelial function, which might deteriorate skeletal muscle if vascular angiogenesis is not maintained. This study investigated muscle strength and circulating CD34-positive cells as a marker of vascular angiogenesis. Cross-sectional study. 262 male Japanese community dwellers aged 60 to 69 years. The participants' handgrip strength, medical history, and blood samples were taken. We stratified the participants by hypertensive status to investigate the association between handgrip strength and circulating CD34-positive cells according to hypertensive status. Pearson correlation and linear regression analyses were used. In the Pearson correlation analysis, handgrip strength and the logarithm of circulating CD34-positive cells were significantly associated in hypertensive participants (r=0.22, p=0.021), but not in non-hypertensive participants (r=-0.01, p=0.943). This relationship was only significant in hypertensive participants (β=1.94, p=0.021) in the simple linear regression analysis, and it remained significant after adjusting for classic cardiovascular risk factors (β=1.92, p=0.020). The relationship was not significant in non-hypertensive participants (β=-0.09, p=0.903). We found a positive association between handgrip strength and circulating CD34-positive cells in hypertensive men. Vascular maintenance attributed to circulating CD34-positive cells is thought to be a background mechanism of this association after hypertension-induced vascular injury in skeletal muscle.

  4. The influence of social capital towards the quality of community tourism services in Lake Toba Parapat North Sumatera

    NASA Astrophysics Data System (ADS)

    Revida, Erika; Yanti Siahaan, Asima; Purba, Sukarman

    2018-03-01

    The objective of the research was to analyze the influence of social capital on the quality of community tourism services in Lake Toba Parapat, North Sumatera. The method used was a combination of quantitative and qualitative research. The sample was drawn from the community in the area around Lake Toba Parapat, North Sumatera, and comprised 150 heads of families, selected by Simple Random Sampling. Data collection techniques included documentary studies, questionnaires, interviews and observations, while the data were analysed using Product Moment correlation and Simple Linear Regression analysis. The results showed a positive and significant influence of Social Capital on the Quality of Community Tourism Services in Lake Toba Parapat, North Sumatera. This research recommends enhancing Social Capital (trust, norms and networks) and the quality of community tourism services (Tangibles, Reliability, Responsiveness, Assurance and Empathy) through continuous communication, information and education from families, formal and informal institutions, community leaders, religious figures and all communities in Lake Toba Parapat, North Sumatera.

  5. Tuning Sensory Properties of Triazole-Conjugated Spiropyrans: Metal-Ion Selectivity and Paper-Based Colorimetric Detection of Cyanide

    PubMed Central

    Lee, Juhyen; Choi, Eun Jung; Kim, Inwon; Lee, Minhe; Satheeshkumar, Chinnadurai; Song, Changsik

    2017-01-01

    Tuning the sensing properties of spiropyrans (SPs), which are one of the photochromic molecules useful for colorimetric sensing, is important for efficient analysis, but their synthetic modification is not always simple. Herein, we introduce an alkyne-functionalized SP, the modification of which would be easily achieved via Cu-catalyzed azide-alkyne cycloaddition (“click reaction”). The alkyne-SP was conjugated with a bis(triethylene glycol)-benzyl group (EG-BtSP) or a simple benzyl group (BtSP), forming a triazole linkage from the click reaction. The effects of auxiliary groups to SP were tested on metal-ion sensing and cyanide detection. We found that EG-BtSP was more Ca2+-sensitive than BtSP in acetonitrile, which were thoroughly examined by a continuous variation method (Job plot) and UV-VIS titrations, followed by non-linear regression analysis. Although both SPs showed similar, selective responses to cyanide in a water/acetonitrile co-solvent, only EG-BtSP showed a dramatic color change when fabricated on paper, highlighting the important contributions of the auxiliary groups. PMID:28783127

  6. Serum resistin is associated with the severity of microangiopathies in type 2 diabetes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osawa, Haruhiko; Ochi, Masaaki; Kato, Kenichi

    2007-04-06

    Resistin, secreted from adipocytes, causes insulin resistance and diabetes in rodents. To determine the relation between serum resistin and diabetic microangiopathies in humans, we analyzed 238 Japanese T2DM subjects. Mean serum resistin was higher in subjects with either advanced retinopathy (preproliferative or proliferative) (P = 0.0130), advanced nephropathy (stage III or IV) (P = 0.0151), or neuropathy (P = 0.0013). Simple regression analysis showed that serum resistin was positively correlated with retinopathy stage (P = 0.0212), nephropathy stage (P = 0.0052), and neuropathy (P = 0.0013). Multiple regression analysis adjusted for age, gender, and BMI revealed that serum resistin was correlated with retinopathy stage (P = 0.0144), nephropathy stage (P = 0.0111), and neuropathy (P = 0.0053). Serum resistin was positively correlated with the number of advanced microangiopathies, independent of age, gender, BMI, and either the duration of T2DM (P = 0.0318) or serum creatinine (P = 0.0092). Therefore, serum resistin was positively correlated with the severity of microangiopathies in T2DM.

  7. Nucleated red blood cells in growth-restricted fetuses: associations with short-term neonatal outcome.

    PubMed

    Minior, V K; Bernstein, P S; Divon, M Y

    2000-01-01

    To determine the utility of the neonatal nucleated red blood cell (NRBC) count as an independent predictor of short-term perinatal outcome in growth-restricted fetuses. Hospital charts of neonates with a discharge diagnosis indicating a birth weight <10th percentile were reviewed for perinatal outcome. We studied all eligible neonates who had a complete blood count on the first day of life. After multiple gestations, anomalous fetuses and diabetic pregnancies were excluded; 73 neonates comprised the study group. Statistical analysis included ANOVA, simple and stepwise regression. Elevated NRBC counts were significantly associated with cesarean section for non-reassuring fetal status, neonatal intensive care unit admission and duration of neonatal intensive care unit stay, respiratory distress and intubation, thrombocytopenia, hyperbilirubinemia, intraventricular hemorrhage and neonatal death. Stepwise regression analysis including gestational age at birth, birth weight and NRBC count demonstrated that in growth-restricted fetuses, NRBC count was the strongest predictor of neonatal intraventricular hemorrhage, neonatal respiratory distress and neonatal death. An elevated NRBC count independently predicts adverse perinatal outcome in growth-restricted fetuses. Copyright 2000 S. Karger AG, Basel.

  8. Relationship between plasma uridine and urinary urea excretion.

    PubMed

    Ka, Tuneyoshi; Inokuchi, Taku; Tamada, Daisuke; Suda, Michio; Tsutsumi, Zenta; Okuda, Chihiro; Yamamoto, Asako; Takahashi, Sumio; Moriwaki, Yuji; Yamamoto, Tetsuya

    2010-03-01

    To investigate whether the concentration of uridine in plasma is related to the urinary excretion of urea, 45 healthy male subjects with normouricemia and normal blood pressure were studied after providing informed consent. Immediately after collection of 24-hour urine, blood samples were drawn after an overnight fast except for water. The contents of ingested foods during the 24-hour urine collection period were described by the subjects and analyzed by a dietician. Simple regression analysis showed that plasma uridine was correlated with the urinary excretions of urea (R = 0.41, P < .01), uric acid (R = 0.36, P < .05), and uridine (R = 0.30, P < .05), as well as uric acid clearance (R = 0.35, P < .05) and purine intake (R = 0.30, P < .05). In contrast, multiple regression analysis showed a positive relationship only between plasma uridine and urinary excretion of urea. These results suggest that an increase in de novo pyrimidine synthesis leads to an increased concentration of uridine in plasma via nitrogen catabolism in healthy subjects with normouricemia and normal blood pressure. (c) 2010 Elsevier Inc. All rights reserved.

  9. Estimating the price elasticity of beer: meta-analysis of data with heterogeneity, dependence, and publication bias.

    PubMed

    Nelson, Jon P

    2014-01-01

    Precise estimates of price elasticities are important for alcohol tax policy. Using meta-analysis, this paper corrects average beer elasticities for heterogeneity, dependence, and publication selection bias. A sample of 191 estimates is obtained from 114 primary studies. Simple and weighted means are reported. Dependence is addressed by restricting the number of estimates per study, using author-restricted samples, and including author-specific variables. Publication bias is addressed using a funnel graph, trim-and-fill, and Egger's intercept model. Heterogeneity and selection bias are examined jointly in meta-regressions containing moderator variables for econometric methodology, primary data, and precision of estimates. Results for fixed- and random-effects regressions are reported. Country-specific effects and sample time periods are unimportant, but several methodology variables help explain the dispersion of estimates. In models that correct for selection bias and heterogeneity, the average beer price elasticity is about -0.20, which is less elastic by 50% compared to values commonly used in alcohol tax policy simulations. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Eutrophication risk assessment in coastal embayments using simple statistical models.

    PubMed

    Arhonditsis, G; Eleftheriadou, M; Karydis, M; Tsirtsis, G

    2003-09-01

    A statistical methodology is proposed for assessing the risk of eutrophication in marine coastal embayments. The procedure followed was the development of regression models relating the levels of chlorophyll a (Chl) to the concentration of the limiting nutrient--usually nitrogen--and the renewal rate of the systems. The method was applied in the Gulf of Gera, Island of Lesvos, Aegean Sea, and a surrogate for renewal rate was created using the Canberra metric as a measure of the resemblance between the Gulf and the oligotrophic waters of the open sea in terms of their physical, chemical and biological properties. The Chl-total dissolved nitrogen-renewal rate regression model was the most significant, accounting for 60% of the variation observed in Chl. Predicted distributions of Chl for various combinations of the independent variables, based on Bayesian analysis of the models, enabled comparison of the outcomes of specific scenarios of interest as well as further analysis of the system dynamics. The present statistical approach can be used as a methodological tool for testing the resilience of coastal ecosystems under alternative managerial schemes and levels of exogenous nutrient loading.

  11. [Health for All-Italia: an indicator system on health].

    PubMed

    Burgio, Alessandra; Crialesi, Roberta; Loghi, Marzia

    2003-01-01

    The Health for All - Italia information system collects health data from several sources. It is intended to be a cornerstone for building an overview of health in Italy. Health is analyzed at different levels, ranging from health services and health needs to lifestyles and the demographic, social, economic and environmental context. The software bundled with the database allows users to turn statistical data into graphs and tables and to carry out simple statistical analyses. It is therefore possible to view the indicators' time series, make simple projections and compare the various indicators over the years for each territorial unit. This is done by means of tables, graphs (histograms, line graphs, frequencies, linear regression with calculation of correlation coefficients, etc.) and maps. These charts can be exported to other programs (e.g. Word, Excel, PowerPoint) or printed directly in color or black and white.

  12. Outcomes of an intervention to improve hospital antibiotic prescribing: interrupted time series with segmented regression analysis.

    PubMed

    Ansari, Faranak; Gray, Kirsteen; Nathwani, Dilip; Phillips, Gabby; Ogston, Simon; Ramsay, Craig; Davey, Peter

    2003-11-01

    To evaluate an intervention to reduce inappropriate use of key antibiotics with interrupted time series analysis. The intervention is a policy for appropriate use of Alert Antibiotics (carbapenems, glycopeptides, amphotericin, ciprofloxacin, linezolid, piperacillin-tazobactam and third-generation cephalosporins) implemented through concurrent, patient-specific feedback by clinical pharmacists. Statistical significance and effect size were calculated by segmented regression analysis of interrupted time series of drug use and cost for 2 years before and after the intervention started. Use of Alert Antibiotics increased before the intervention started but decreased steadily for 2 years thereafter. The changes in slope of the time series were 0.27 defined daily doses/100 bed-days per month (95% CI 0.19-0.34) and £1908 per month (95% CI £1238-£2578). The cost of development, dissemination and implementation of the intervention (£20133) was well below the most conservative estimate of the reduction in cost (£133296), which is the lower 95% CI of effect size assuming that cost would not have continued to increase without the intervention. However, if use had continued to increase, the difference between predicted and actual cost of Alert Antibiotics was £572448 (95% CI £435696-£709176) over the 24 months after the intervention started. Segmented regression analysis of pharmacy stock data is a simple, practical and robust method for measuring the impact of interventions to change prescribing. The Alert Antibiotic Monitoring intervention was associated with significant decreases in total use and cost in the 2 years after the programme was implemented. In our hospital, the value of the data far exceeded the cost of processing and analysis.
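
    The segmented regression idea can be sketched as an ordinary least-squares model with a pre-intervention trend, a level-change indicator, and a post-intervention slope change. The monthly series below is simulated, not the hospital's Alert Antibiotic data.

```python
# Minimal sketch of segmented regression for an interrupted time series,
# using simulated monthly antibiotic-use data (not the study's data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
months = np.arange(48)                       # 24 months before, 24 after
post = (months >= 24).astype(int)            # intervention indicator
months_after = np.where(post == 1, months - 24, 0)

# Simulated use: rising trend before, falling trend after the intervention.
use = 50 + 0.4 * months - 0.7 * months_after + rng.normal(0, 2, 48)

df = pd.DataFrame({"use": use, "month": months,
                   "post": post, "month_after": months_after})

# month       : pre-intervention slope
# post        : immediate level change at the intervention
# month_after : change in slope after the intervention
fit = smf.ols("use ~ month + post + month_after", data=df).fit()
print(fit.summary())
```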

  13. Quantitation & Case-Study-Driven Inquiry to Enhance Yeast Fermentation Studies

    ERIC Educational Resources Information Center

    Grammer, Robert T.

    2012-01-01

    We propose a procedure for the assay of fermentation in yeast in microcentrifuge tubes that is simple and rapid, permitting assay replicates, descriptive statistics, and the preparation of line graphs that indicate reproducibility. Using regression and simple derivatives to determine initial velocities, we suggest methods to compare the effects of…

  14. Validation of the Simple View of Reading in Hebrew--A Semitic Language

    ERIC Educational Resources Information Center

    Joshi, R. Malatesha; Ji, Xuejun Ryan; Breznitz, Zvia; Amiel, Meirav; Yulia, Astri

    2015-01-01

    The Simple View of Reading (SVR) in Hebrew was tested by administering decoding, listening comprehension, and reading comprehension measures to 1,002 students from Grades 2 to 10 in the northern part of Israel. Results from hierarchical regression analyses supported the SVR in Hebrew with decoding and listening comprehension measures explaining…

  15. Discrimination of honeys using colorimetric sensor arrays, sensory analysis and gas chromatography techniques.

    PubMed

    Tahir, Haroon Elrasheid; Xiaobo, Zou; Xiaowei, Huang; Jiyong, Shi; Mariod, Abdalbasit Adam

    2016-09-01

    Aroma profiles of six honey varieties of different botanical origins were investigated using colorimetric sensor array, gas chromatography-mass spectrometry (GC-MS) and descriptive sensory analysis. Fifty-eight aroma compounds were identified, including 2 norisoprenoids, 5 hydrocarbons, 4 terpenes, 6 phenols, 7 ketones, 9 acids, 12 aldehydes and 13 alcohols. Twenty abundant or active compounds were chosen as key compounds to characterize honey aroma. Discrimination of the honeys was subsequently implemented using multivariate analysis, including hierarchical clustering analysis (HCA) and principal component analysis (PCA). Honeys of the same botanical origin were grouped together in the PCA score plot and HCA dendrogram. SPME-GC/MS and colorimetric sensor array were able to discriminate the honeys effectively with the advantages of being rapid, simple and low-cost. Moreover, partial least squares regression (PLSR) was applied to indicate the relationship between sensory descriptors and aroma compounds. Copyright © 2016 Elsevier Ltd. All rights reserved.
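
    A minimal PLSR sketch of the kind of compound-versus-descriptor modelling described above is shown below; the compound matrix and sensory scores are random placeholders rather than the honey measurements.

```python
# Minimal PLS regression sketch relating aroma-compound peak areas to a
# sensory score; the arrays are random placeholders, not the honey data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 20))      # 30 honey samples x 20 key aroma compounds
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=30)  # sensory descriptor

pls = PLSRegression(n_components=3)
print(cross_val_score(pls, X, y, cv=5, scoring="r2"))  # cross-validated fit

pls.fit(X, y)
print(pls.coef_.ravel()[:5])       # regression coefficients of the first compounds
```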

  16. Personality traits associated with intrinsic academic motivation in medical students.

    PubMed

    Tanaka, Masaaki; Mizuno, Kei; Fukuda, Sanae; Tajima, Seiki; Watanabe, Yasuyoshi

    2009-04-01

    Motivation is one of the most important psychological concepts in education and is related to academic outcomes in medical students. In this study, the relationships between personality traits and intrinsic academic motivation were examined in medical students. The study group consisted of 119 Year 2 medical students at Osaka City University Graduate School of Medicine. They completed questionnaires dealing with intrinsic academic motivation (the Intrinsic Motivation Scale toward Learning) and personality (the Temperament and Character Inventory [TCI]). On simple regression analyses, the TCI dimensions of persistence, self-directedness, co-operativeness and self-transcendence were positively associated with intrinsic academic motivation. On multiple regression analysis adjusted for age and gender, the TCI dimensions of persistence, self-directedness and self-transcendence were positively associated with intrinsic academic motivation. The temperament dimension of persistence and the character dimensions of self-directedness and self-transcendence are associated with intrinsic academic motivation in medical students.

  17. New 1,6-heptadienes with pyrimidine bases attached: Syntheses and spectroscopic analyses

    NASA Astrophysics Data System (ADS)

    Hammud, Hassan H.; Ghannoum, Amer M.; Fares, Fares A.; Abramian, Lara K.; Bouhadir, Kamal H.

    2008-06-01

    A simple, high yielding synthesis leading to the functionalization of some pyrimidine bases with a 1,6-heptadienyl moiety spaced from the N-1 position by a methylene group is described. A key step in this synthesis involves a Mitsunobu reaction by coupling 3N-benzoyluracil and 3N-benzoylthymine to 2-allyl-pent-4-en-1-ol followed by alkaline hydrolysis of the 3N-benzoyl protecting groups. This protocol should eventually lend itself to the synthesis of a host of N-alkylated nucleoside analogs. The absorption and emission properties of these pyrimidine derivatives (3-6) were studied in solvents of different physical properties. Computerized analysis and multiple regression techniques were applied to calculate the regression and correlation coefficients based on the equation that relates peak position λmax to the solvent parameters that depend on the H-bonding ability, refractive index, and dielectric constant of solvents.

  18. Bayesian function-on-function regression for multilevel functional data.

    PubMed

    Meyer, Mark J; Coull, Brent A; Versace, Francesco; Cinciripini, Paul; Morris, Jeffrey S

    2015-09-01

    Medical and public health research increasingly involves the collection of complex and high dimensional data. In particular, functional data-where the unit of observation is a curve or set of curves that are finely sampled over a grid-is frequently obtained. Moreover, researchers often sample multiple curves per person resulting in repeated functional measures. A common question is how to analyze the relationship between two functional variables. We propose a general function-on-function regression model for repeatedly sampled functional data on a fine grid, presenting a simple model as well as a more extensive mixed model framework, and introducing various functional Bayesian inferential procedures that account for multiple testing. We examine these models via simulation and a data analysis with data from a study that used event-related potentials to examine how the brain processes various types of images. © 2015, The International Biometric Society.

  19. Hyperopic photorefractive keratectomy and central islands

    NASA Astrophysics Data System (ADS)

    Gobbi, Pier Giorgio; Carones, Francesco; Morico, Alessandro; Vigo, Luca; Brancato, Rosario

    1998-06-01

    We have evaluated the refractive evolution in patients treated with hyperopic PRK to assess the extent of the initial overcorrection and the time constant of regression. To this end, the time history of the refractive error (i.e. the difference between achieved and intended refractive correction) has been fitted by means of an exponential statistical model, giving information characterizing the surgical procedure with a direct clinical meaning. Both hyperopic and myopic PRK procedures have been analyzed by this method. The analysis of the fitting model parameters shows that hyperopic PRK patients exhibit a markedly higher initial overcorrection than myopic ones, and a regression time constant which is much longer. A common mechanism is proposed to be responsible for the refractive outcomes in hyperopic treatments and in myopic patients exhibiting significant central islands. The interpretation is in terms of superhydration of the central cornea, and is based on a simple physical model evaluating the amount of centripetal compression in the apical cornea.
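
    A sketch of fitting such an exponential regression model to refractive error over time is shown below; the follow-up times and dioptre values are illustrative, and the parameterization (overcorrection, time constant, plateau) is an assumption rather than the authors' exact model.

```python
# Sketch of fitting an exponential regression model to refractive error over
# time, of the form plateau + overcorrection * exp(-t/tau); data are illustrative.
import numpy as np
from scipy.optimize import curve_fit

months = np.array([0.25, 1, 3, 6, 9, 12, 18, 24], dtype=float)
error_d = np.array([2.1, 1.8, 1.3, 0.9, 0.7, 0.55, 0.4, 0.35])  # dioptres

def model(t, a0, tau, plateau):
    """Initial overcorrection a0 decaying with time constant tau."""
    return plateau + a0 * np.exp(-t / tau)

(a0, tau, plateau), _ = curve_fit(model, months, error_d, p0=(2.0, 6.0, 0.3))
print(f"overcorrection={a0:.2f} D, time constant={tau:.1f} months")
```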

  20. A surrogate model for thermal characteristics of stratospheric airship

    NASA Astrophysics Data System (ADS)

    Zhao, Da; Liu, Dongxu; Zhu, Ming

    2018-06-01

    A simple and accurate surrogate model is needed to reduce the complexity of thermal analysis for a stratospheric airship. In this paper, a surrogate model based on Least Squares Support Vector Regression (LSSVR) is proposed. The Gravitational Search Algorithm (GSA) is used to optimize the hyperparameters. A novel framework consisting of a preprocessing classifier and two regression models is designed to train the surrogate model. Various temperature datasets of the airship envelope and the internal gas are obtained by a three-dimensional transient model for thermal characteristics. Using these thermal datasets, two-factor and multi-factor surrogate models are trained and several comparison simulations are conducted. Results illustrate that the surrogate models based on LSSVR-GSA have good fitting and generalization abilities. The preprocessing classification strategy proposed in this paper plays a significant role in improving the accuracy of the surrogate model.
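
    The sketch below illustrates an RBF-kernel surrogate in the spirit of LS-SVR. Kernel ridge regression is used as a closely related stand-in, and hyperparameters are tuned by grid search rather than the paper's Gravitational Search Algorithm; the training data are synthetic.

```python
# Sketch of an RBF-kernel surrogate in the spirit of LS-SVR (kernel ridge
# regression is closely related). Hyperparameters are tuned by grid search
# here instead of GSA, and the "thermal" data are synthetic.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(2)
X = rng.uniform(size=(200, 3))          # e.g. solar elevation, wind, altitude
y = 270 + 30 * X[:, 0] - 10 * X[:, 1] + 5 * np.sin(6 * X[:, 2]) \
    + rng.normal(scale=1.0, size=200)   # synthetic envelope temperature (K)

grid = GridSearchCV(
    KernelRidge(kernel="rbf"),
    {"alpha": [1e-3, 1e-2, 1e-1], "gamma": [0.1, 1.0, 10.0]},
    cv=5, scoring="neg_root_mean_squared_error",
)
grid.fit(X, y)
print(grid.best_params_, -grid.best_score_)   # tuned hyperparameters and RMSE
```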

  1. Regression analysis of sparse asynchronous longitudinal data.

    PubMed

    Cao, Hongyuan; Zeng, Donglin; Fine, Jason P

    2015-09-01

    We consider estimation of regression models for sparse asynchronous longitudinal observations, where time-dependent responses and covariates are observed intermittently within subjects. Unlike with synchronous data, where the response and covariates are observed at the same time point, with asynchronous data, the observation times are mismatched. Simple kernel-weighted estimating equations are proposed for generalized linear models with either time invariant or time-dependent coefficients under smoothness assumptions for the covariate processes which are similar to those for synchronous data. For models with either time invariant or time-dependent coefficients, the estimators are consistent and asymptotically normal but converge at slower rates than those achieved with synchronous data. Simulation studies evidence that the methods perform well with realistic sample sizes and may be superior to a naive application of methods for synchronous data based on an ad hoc last value carried forward approach. The practical utility of the methods is illustrated on data from a study on human immunodeficiency virus.

  2. Cloud-Free Satellite Image Mosaics with Regression Trees and Histogram Matching.

    Treesearch

    E.H. Helmer; B. Ruefenacht

    2005-01-01

    Cloud-free optical satellite imagery simplifies remote sensing, but land-cover phenology limits existing solutions to persistent cloudiness to compositing temporally resolute, spatially coarser imagery. Here, a new strategy for developing cloud-free imagery at finer resolution permits simple automatic change detection. The strategy uses regression trees to predict...
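
    The regression-tree idea can be sketched as predicting a pixel's value on a cloudy date from the bands of a clear reference scene and then filling cloud-masked pixels with the predictions; the pixel arrays below are synthetic.

```python
# Sketch of the regression-tree idea: predict a band value for a cloudy-date
# scene from the bands of a clear reference scene. Pixel arrays are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
reference = rng.uniform(0, 1, size=(5000, 4))         # 4 bands, clear date
target = 0.6 * reference[:, 2] + 0.2 * reference[:, 3] \
         + rng.normal(scale=0.02, size=5000)          # same band on cloudy date

clear_mask = rng.uniform(size=5000) > 0.3             # pixels not under cloud
tree = DecisionTreeRegressor(max_depth=8, min_samples_leaf=20)
tree.fit(reference[clear_mask], target[clear_mask])

# Fill cloud-masked pixels with tree predictions from the reference scene.
filled = target.copy()
filled[~clear_mask] = tree.predict(reference[~clear_mask])
```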

  3. A Diagrammatic Exposition of Regression and Instrumental Variables for the Beginning Student

    ERIC Educational Resources Information Center

    Foster, Gigi

    2009-01-01

    Some beginning students of statistics and econometrics have difficulty with traditional algebraic approaches to explaining regression and related techniques. For these students, a simple and intuitive diagrammatic introduction as advocated by Kennedy (2008) may prove a useful framework to support further study. The author presents a series of…

  4. A Simple and Convenient Method of Multiple Linear Regression to Calculate Iodine Molecular Constants

    ERIC Educational Resources Information Center

    Cooper, Paul D.

    2010-01-01

    A new procedure using a student-friendly least-squares multiple linear-regression technique utilizing a function within Microsoft Excel is described that enables students to calculate molecular constants from the vibronic spectrum of iodine. This method is advantageous pedagogically as it calculates molecular constants for ground and excited…

  5. Fitting program for linear regressions according to Mahon (1996)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trappitsch, Reto G.

    2018-01-09

    This program takes the users' input data and fits a linear regression to it using the prescription presented by Mahon (1996). Compared to the commonly used York fit, this method has the correct prescription for measurement error propagation. This software should facilitate the proper fitting of measurements with a simple interface.
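
    The sketch below fits a straight line with uncertainties in both variables using orthogonal distance regression; it illustrates the general errors-in-both-variables idea rather than Mahon's (1996) specific error-propagation formulae, and the data points are invented.

```python
# Sketch of a straight-line fit with errors in both variables using
# orthogonal distance regression (not Mahon's exact prescription).
import numpy as np
from scipy import odr

x = np.array([1.0, 2.1, 2.9, 4.2, 5.1])
y = np.array([2.2, 4.1, 5.8, 8.4, 10.1])
sx = np.full_like(x, 0.1)            # 1-sigma uncertainties in x
sy = np.full_like(y, 0.3)            # 1-sigma uncertainties in y

model = odr.Model(lambda beta, x: beta[0] * x + beta[1])   # slope, intercept
data = odr.RealData(x, y, sx=sx, sy=sy)
fit = odr.ODR(data, model, beta0=[2.0, 0.0]).run()
print(fit.beta, fit.sd_beta)         # estimates and their standard errors
```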

  6. Revisiting the Scale-Invariant, Two-Dimensional Linear Regression Method

    ERIC Educational Resources Information Center

    Patzer, A. Beate C.; Bauer, Hans; Chang, Christian; Bolte, Jan; Sülzle, Detlev

    2018-01-01

    The scale-invariant way to analyze two-dimensional experimental and theoretical data with statistical errors in both the independent and dependent variables is revisited by using what we call the triangular linear regression method. This is compared to the standard least-squares fit approach by applying it to typical simple sets of example data…

  7. Assistive Technologies for Second-Year Statistics Students Who Are Blind

    ERIC Educational Resources Information Center

    Erhardt, Robert J.; Shuman, Michael P.

    2015-01-01

    At Wake Forest University, a student who is blind enrolled in a second course in statistics. The course covered simple and multiple regression, model diagnostics, model selection, data visualization, and elementary logistic regression. These topics required that the student both interpret and produce three sets of materials: mathematical writing,…

  8. Refractive Status at Birth: Its Relation to Newborn Physical Parameters at Birth and Gestational Age

    PubMed Central

    Varghese, Raji Mathew; Sreenivas, Vishnubhatla; Puliyel, Jacob Mammen; Varughese, Sara

    2009-01-01

    Background Refractive status at birth is related to gestational age. Preterm babies have myopia which decreases as gestational age increases and term babies are known to be hypermetropic. This study looked at the correlation of refractive status with birth weight in term and preterm babies, and with physical indicators of intra-uterine growth such as the head circumference and length of the baby at birth. Methods All babies delivered at St. Stephens Hospital and admitted in the nursery were eligible for the study. Refraction was performed within the first week of life. 0.8% tropicamide with 0.5% phenylephrine was used to achieve cycloplegia and paralysis of accommodation. 599 newborn babies participated in the study. Data pertaining to the right eye is utilized for all the analyses except that for anisometropia where the two eyes were compared. Growth parameters were measured soon after birth. Simple linear regression analysis was performed to see the association of refractive status, (mean spherical equivalent (MSE), astigmatism and anisometropia) with each of the study variables, namely gestation, length, weight and head circumference. Subsequently, multiple linear regression was carried out to identify the independent predictors for each of the outcome parameters. Results Simple linear regression showed a significant relation between all 4 study variables and refractive error but in multiple regression only gestational age and weight were related to refractive error. The partial correlation of weight with MSE adjusted for gestation was 0.28 and that of gestation with MSE adjusted for weight was 0.10. Birth weight had a higher correlation to MSE than gestational age. Conclusion This is the first study to look at refractive error against all these growth parameters, in preterm and term babies at birth. It would appear from this study that birth weight rather than gestation should be used as criteria for screening for refractive error, especially in developing countries where the incidence of intrauterine malnutrition is higher. PMID:19214228

  9. Representation of limb kinematics in Purkinje cell simple spike discharge is conserved across multiple tasks.

    PubMed

    Hewitt, Angela L; Popa, Laurentiu S; Pasalar, Siavash; Hendrix, Claudia M; Ebner, Timothy J

    2011-11-01

    Encoding of movement kinematics in Purkinje cell simple spike discharge has important implications for hypotheses of cerebellar cortical function. Several outstanding questions remain regarding representation of these kinematic signals. It is uncertain whether kinematic encoding occurs in unpredictable, feedback-dependent tasks or kinematic signals are conserved across tasks. Additionally, there is a need to understand the signals encoded in the instantaneous discharge of single cells without averaging across trials or time. To address these questions, this study recorded Purkinje cell firing in monkeys trained to perform a manual random tracking task in addition to circular tracking and center-out reach. Random tracking provides for extensive coverage of kinematic workspaces. Direction and speed errors are significantly greater during random than circular tracking. Cross-correlation analyses comparing hand and target velocity profiles show that hand velocity lags target velocity during random tracking. Correlations between simple spike firing from 120 Purkinje cells and hand position, velocity, and speed were evaluated with linear regression models including a time constant, τ, as a measure of the firing lead/lag relative to the kinematic parameters. Across the population, velocity accounts for the majority of simple spike firing variability (63 ± 30% of adjusted R²), followed by position (28 ± 24% of adjusted R²) and speed (11 ± 19% of adjusted R²). Simple spike firing often leads hand kinematics. Comparison of regression models based on averaged vs. nonaveraged firing and kinematics reveals lower adjusted R² values for nonaveraged data; however, regression coefficients and τ values are highly similar. Finally, for most cells, model coefficients generated from random tracking accurately estimate simple spike firing in either circular tracking or center-out reach. These findings imply that the cerebellum controls movement kinematics, consistent with a forward internal model that predicts upcoming limb kinematics.

  10. Techniques for estimating flood-peak discharges of rural, unregulated streams in Ohio

    USGS Publications Warehouse

    Koltun, G.F.

    2003-01-01

    Regional equations for estimating 2-, 5-, 10-, 25-, 50-, 100-, and 500-year flood-peak discharges at ungaged sites on rural, unregulated streams in Ohio were developed by means of ordinary and generalized least-squares (GLS) regression techniques. One-variable, simple equations and three-variable, full-model equations were developed on the basis of selected basin characteristics and flood-frequency estimates determined for 305 streamflow-gaging stations in Ohio and adjacent states. The average standard errors of prediction ranged from about 39 to 49 percent for the simple equations, and from about 34 to 41 percent for the full-model equations. Flood-frequency estimates determined by means of log-Pearson Type III analyses are reported along with weighted flood-frequency estimates, computed as a function of the log-Pearson Type III estimates and the regression estimates. Values of explanatory variables used in the regression models were determined from digital spatial data sets by means of a geographic information system (GIS), with the exception of drainage area, which was determined by digitizing the area within basin boundaries manually delineated on topographic maps. Use of GIS-based explanatory variables represents a major departure in methodology from that described in previous reports on estimating flood-frequency characteristics of Ohio streams. Examples are presented illustrating application of the regression equations to ungaged sites on ungaged and gaged streams. A method is provided to adjust regression estimates for ungaged sites by use of weighted and regression estimates for a gaged site on the same stream. A region-of-influence method, which employs a computer program to estimate flood-frequency characteristics for ungaged sites based on data from gaged sites with similar characteristics, was also tested and compared to the GLS full-model equations. For all recurrence intervals, the GLS full-model equations had superior prediction accuracy relative to the simple equations and therefore are recommended for use.

  11. Regression dilution bias: tools for correction methods and sample size calculation.

    PubMed

    Berglund, Lars

    2012-08-01

    Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
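
    A minimal sketch of the correction described above divides the observed (attenuated) slope by a reliability ratio estimated from duplicate measurements in a reliability study; all numbers are simulated for illustration.

```python
# Sketch of correcting a slope for regression dilution: the observed slope is
# divided by the reliability ratio estimated from duplicate measurements of
# the risk factor. All numbers are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Main study: true risk factor x_true, noisy measurement x_obs, outcome y.
x_true = rng.normal(10, 2, 300)
x_obs = x_true + rng.normal(0, 1.5, 300)          # measurement error
y = 0.8 * x_true + rng.normal(0, 1, 300)          # continuous outcome
naive_slope = stats.linregress(x_obs, y).slope    # attenuated toward zero

# Reliability study: duplicate measurements on a subsample.
rep1 = x_true[:80] + rng.normal(0, 1.5, 80)
rep2 = x_true[:80] + rng.normal(0, 1.5, 80)
within_var = np.var(rep1 - rep2, ddof=1) / 2.0    # within-subject variance
total_var = np.var(x_obs, ddof=1)
reliability = 1.0 - within_var / total_var        # attenuation factor (lambda)

print(naive_slope, naive_slope / reliability)     # corrected slope, roughly 0.8
```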

  12. Modelling Nitrogen Oxides in Los Angeles Using a Hybrid Dispersion/Land Use Regression Model

    NASA Astrophysics Data System (ADS)

    Wilton, Darren C.

    The goal of this dissertation is to develop models capable of predicting long term annual average NOx concentrations in urban areas. Predictions from simple meteorological dispersion models and seasonal proxies for NO2 oxidation were included as covariates in a land use regression (LUR) model for NOx in Los Angeles, CA. The NO x measurements were obtained from a comprehensive measurement campaign that is part of the Multi-Ethnic Study of Atherosclerosis Air Pollution Study (MESA Air). Simple land use regression models were initially developed using a suite of GIS-derived land use variables developed from various buffer sizes (R²=0.15). Caline3, a simple steady-state Gaussian line source model, was initially incorporated into the land-use regression framework. The addition of this spatio-temporally varying Caline3 covariate improved the simple LUR model predictions. The extent of improvement was much more pronounced for models based solely on the summer measurements (simple LUR: R²=0.45; Caline3/LUR: R²=0.70), than it was for models based on all seasons (R²=0.20). We then used a Lagrangian dispersion model to convert static land use covariates for population density, commercial/industrial area into spatially and temporally varying covariates. The inclusion of these covariates resulted in significant improvement in model prediction (R²=0.57). In addition to the dispersion model covariates described above, a two-week average value of daily peak-hour ozone was included as a surrogate of the oxidation of NO2 during the different sampling periods. This additional covariate further improved overall model performance for all models. The best model by 10-fold cross validation (R²=0.73) contained the Caline3 prediction, a static covariate for length of A3 roads within 50 meters, the Calpuff-adjusted covariates derived from both population density and industrial/commercial land area, and the ozone covariate. This model was tested against annual average NOx concentrations from an independent data set from the EPA's Air Quality System (AQS) and MESA Air fixed site monitors, and performed very well (R²=0.82).

  13. Reference Models for Structural Technology Assessment and Weight Estimation

    NASA Technical Reports Server (NTRS)

    Cerro, Jeff; Martinovic, Zoran; Eldred, Lloyd

    2005-01-01

    Previously the Exploration Concepts Branch of NASA Langley Research Center has developed techniques for automating the preliminary design level of launch vehicle airframe structural analysis for purposes of enhancing historical regression based mass estimating relationships. This past work was useful and greatly reduced design time, however its application area was very narrow in terms of being able to handle a large variety in structural and vehicle general arrangement alternatives. Implementation of the analysis approach presented herein also incorporates some newly developed computer programs. Loft is a program developed to create analysis meshes and simultaneously define structural element design regions. A simple component defining ASCII file is read by Loft to begin the design process. HSLoad is a Visual Basic implementation of the HyperSizer Application Programming Interface, which automates the structural element design process. Details of these two programs and their use are explained in this paper. A feature which falls naturally out of the above analysis paradigm is the concept of "reference models". The flexibility of the FEA based JAVA processing procedures and associated process control classes coupled with the general utility of Loft and HSLoad make it possible to create generic program template files for analysis of components ranging from something as simple as a stiffened flat panel, to curved panels, fuselage and cryogenic tank components, flight control surfaces, wings, through full air and space vehicle general arrangements.

  14. Biodiversity patterns along ecological gradients: unifying β-diversity indices.

    PubMed

    Szava-Kovats, Robert C; Pärtel, Meelis

    2014-01-01

    Ecologists have developed an abundance of conceptions and mathematical expressions to define β-diversity, the link between local (α) and regional-scale (γ) richness, in order to characterize patterns of biodiversity along ecological (i.e., spatial and environmental) gradients. These patterns are often realized by regression of β-diversity indices against one or more ecological gradients. This practice, however, is subject to two shortcomings that can undermine the validity of the biodiversity patterns. First, many β-diversity indices are constrained to range between fixed lower and upper limits. As such, regression analysis of β-diversity indices against ecological gradients can result in regression curves that extend beyond these mathematical constraints, thus creating an interpretational dilemma. Second, despite being a function of the same measured α- and γ-diversity, the resultant biodiversity pattern depends on the choice of β-diversity index. We propose a simple logistic transformation that rids beta-diversity indices of their mathematical constraints, thus eliminating the possibility of an uninterpretable regression curve. Moreover, this transformation results in identical biodiversity patterns for three commonly used classical beta-diversity indices. As a result, this transformation eliminates the difficulties of both shortcomings, while allowing the researcher to use whichever beta-diversity index deemed most appropriate. We believe this method can help unify the study of biodiversity patterns along ecological gradients.

  16. True phosphorus digestibility and the endogenous phosphorus outputs associated with brown rice for weanling pigs measured by the simple linear regression analysis technique.

    PubMed

    Yang, H; Li, A K; Yin, Y L; Li, T J; Wang, Z R; Wu, G; Huang, R L; Kong, X F; Yang, C B; Kang, P; Deng, J; Wang, S X; Tan, B E; Hu, Q; Xing, F F; Wu, X; He, Q H; Yao, K; Liu, Z J; Tang, Z R; Yin, F G; Deng, Z Y; Xie, M Y; Fan, M Z

    2007-03-01

    The objectives of this study were to determine true phosphorus (P) digestibility, degradability of phytate-P complex and the endogenous P outputs associated with brown rice feeding in weanling pigs by using the simple linear regression analysis technique. Six barrows with an average initial body weight of 12.5 kg were fitted with a T-cannula and fed six diets according to a 6 × 6 Latin-square design. Six maize starch-based diets, containing six levels of P at 0.80, 1.36, 1.93, 2.49, 3.04, and 3.61 g/kg dry-matter (DM) intake (DMI), were formulated with brown rice. Each experimental period lasted 10 days. After a 7-day adaptation, all faecal samples were collected on days 8 and 9. Ileal digesta samples were collected for a total of 24 h on day 10. The apparent ileal and faecal P digestibility values of brown rice were affected (P < 0.01) by the P contents in the assay diets. The apparent ileal and faecal P digestibility values increased from -48.0 to 36.7% and from -35.6 to 40.0%, respectively, as P content increased from 0.80 to 3.61 g/kg DMI. Linear relationships (P < 0.05), expressed as g/kg DMI, between the apparent ileal and faecal digestible P and dietary levels of P, suggested that true P digestibility and the endogenous P outputs associated with brown rice feeding could be determined by using the simple regression analysis technique. There were no differences (P > 0.05) in true P digestibility values (57.7 ± 5.4 v. 58.2 ± 5.9%), phytate P degradability (76.4 ± 6.7 v. 79.0 ± 4.4%) and the endogenous P outputs (0.812 ± 0.096 v. 0.725 ± 0.083 g/kg DMI) between the ileal and the faecal levels. The endogenous faecal P output represented 14 and 25% of the National Research Council (1998) recommended daily total and available P requirements in the weanling pig, respectively. About 58% of the total P in brown rice could be digested and absorbed by the weanling pig. Our results suggest that the large intestine of the weanling pigs does not play a significant role in the digestion of P in brown rice. Diet formulation on the basis of total or apparent P digestibility with brown rice may lead to P overfeeding and excessive P excretion in pigs.
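
    The regression technique itself can be sketched in a few lines: apparently digested P (g/kg DMI) is regressed on dietary P intake, the slope estimates true digestibility and the negative intercept the endogenous output. The values below are illustrative numbers chosen to lie in the range reported, not the study's measurements.

```python
# Sketch of the simple linear regression technique: regress apparently
# digested P (g/kg DMI) on dietary P intake (g/kg DMI); the slope estimates
# true digestibility and the negative intercept the endogenous output.
# The values are illustrative, not the study's measurements.
from scipy import stats

dietary_p = [0.80, 1.36, 1.93, 2.49, 3.04, 3.61]       # intake, g/kg DMI
digested_p = [-0.35, 0.01, 0.33, 0.62, 0.97, 1.28]      # apparently digested

fit = stats.linregress(dietary_p, digested_p)
true_digestibility = fit.slope * 100        # percent
endogenous_output = -fit.intercept          # g/kg DMI
print(round(true_digestibility, 1), round(endogenous_output, 3))
```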

  17. Relationship between body composition and postural control in prepubertal overweight/obese children: A cross-sectional study.

    PubMed

    Villarrasa-Sapiña, Israel; Álvarez-Pitti, Julio; Cabeza-Ruiz, Ruth; Redón, Pau; Lurbe, Empar; García-Massó, Xavier

    2018-02-01

    Excess body weight during childhood causes reduced motor functionality and problems in postural control, a negative influence which has been reported in the literature. Nevertheless, no information regarding the effect of body composition on the postural control of overweight and obese children is available. The objective of this study was therefore to establish these relationships. A cross-sectional design was used to establish relationships between body composition and postural control variables obtained in bipedal eyes-open and eyes-closed conditions in twenty-two children. Centre of pressure signals were analysed in the temporal and frequency domains. Pearson correlations were applied to establish relationships between variables. Principal component analysis was applied to the body composition variables to avoid potential multicollinearity in the regression models. These principal components were used to perform a multiple linear regression analysis, from which regression models were obtained to predict postural control. Height and leg mass were the body composition variables that showed the highest correlation with postural control. Multiple regression models were also obtained and several of these models showed a higher correlation coefficient in predicting postural control than simple correlations. These models revealed that leg and trunk mass were good predictors of postural control. More equations were found in the eyes-open than eyes-closed condition. Body weight and height are negatively correlated with postural control. However, leg and trunk mass are better postural control predictors than arm or body mass. Finally, body composition variables are more useful in predicting postural control when the eyes are open. Copyright © 2017 Elsevier Ltd. All rights reserved.
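
    A minimal sketch of the two-step approach (principal components of the body-composition variables feeding a linear regression on a postural-control measure) is given below, with simulated placeholder data.

```python
# Sketch of the two-step approach: principal components of body-composition
# variables predict a postural-control measure, avoiding multicollinearity.
# Data are simulated placeholders, not the study's measurements.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
body = rng.normal(size=(22, 6))        # e.g. height, leg/trunk/arm mass, ...
sway = body[:, 1] * 0.9 + body[:, 2] * 0.5 + rng.normal(scale=0.5, size=22)

model = make_pipeline(StandardScaler(), PCA(n_components=3), LinearRegression())
model.fit(body, sway)
print(model.score(body, sway))         # R^2 of the principal-component model
```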

  18. Automated Screening of Children With Obstructive Sleep Apnea Using Nocturnal Oximetry: An Alternative to Respiratory Polygraphy in Unattended Settings

    PubMed Central

    Álvarez, Daniel; Alonso-Álvarez, María L.; Gutiérrez-Tobal, Gonzalo C.; Crespo, Andrea; Kheirandish-Gozal, Leila; Hornero, Roberto; Gozal, David; Terán-Santos, Joaquín; Del Campo, Félix

    2017-01-01

    Study Objectives: Nocturnal oximetry has become known as a simple, readily available, and potentially useful diagnostic tool of childhood obstructive sleep apnea (OSA). However, at-home respiratory polygraphy (HRP) remains the preferred alternative to polysomnography (PSG) in unattended settings. The aim of this study was twofold: (1) to design and assess a novel methodology for pediatric OSA screening based on automated analysis of at-home oxyhemoglobin saturation (SpO2), and (2) to compare its diagnostic performance with HRP. Methods: SpO2 recordings were parameterized by means of time, frequency, and conventional oximetric measures. Logistic regression models were optimized using genetic algorithms (GAs) for three cutoffs for OSA: 1, 3, and 5 events/h. The diagnostic performance of logistic regression models, manual obstructive apnea-hypopnea index (OAHI) from HRP, and the conventional oxygen desaturation index ≥ 3% (ODI3) were assessed. Results: For a cutoff of 1 event/h, the optimal logistic regression model significantly outperformed both conventional HRP-derived ODI3 and OAHI: 85.5% accuracy (HRP 74.6%; ODI3 65.9%) and 0.97 area under the receiver operating characteristics curve (AUC) (HRP 0.78; ODI3 0.75) were reached. For a cutoff of 3 events/h, the logistic regression model achieved 83.4% accuracy (HRP 85.0%; ODI3 74.5%) and 0.96 AUC (HRP 0.93; ODI3 0.85) whereas using a cutoff of 5 events/h, oximetry reached 82.8% accuracy (HRP 85.1%; ODI3 76.7) and 0.97 AUC (HRP 0.95; ODI3 0.84). Conclusions: Automated analysis of at-home SpO2 recordings provide accurate detection of children with high pretest probability of OSA. Thus, unsupervised nocturnal oximetry may enable a simple and effective alternative to HRP and PSG in unattended settings. Citation: Álvarez D, Alonso-Álvarez ML, Gutiérrez-Tobal GC, Crespo A, Kheirandish-Gozal L, Hornero R, Gozal D, Terán-Santos J, Del Campo F. Automated screening of children with obstructive sleep apnea using nocturnal oximetry: an alternative to respiratory polygraphy in unattended settings. J Clin Sleep Med. 2017;13(5):693–702. PMID:28356177

  19. The application of dimensional analysis to the problem of solar wind-magnetosphere energy coupling

    NASA Technical Reports Server (NTRS)

    Bargatze, L. F.; Mcpherron, R. L.; Baker, D. N.; Hones, E. W., Jr.

    1984-01-01

    The constraints imposed by dimensional analysis are used to find how the solar wind-magnetosphere energy transfer rate depends upon interplanetary parameters. The analyses assume that only magnetohydrodynamic processes are important in controlling the rate of energy transfer. The study utilizes ISEE-3 solar wind observations, the AE index, and UT from three 10-day intervals during the International Magnetospheric Study. Simple linear regression and histogram techniques are used to find the value of the magnetohydrodynamic coupling exponent, alpha, which is consistent with observations of magnetospheric response. Once alpha is estimated, the form of the solar wind energy transfer rate is obtained by substitution into an equation of the interplanetary variables whose exponents depend upon alpha.

  20. A simple measure of cognitive reserve is relevant for cognitive performance in MS patients.

    PubMed

    Della Corte, Marida; Santangelo, Gabriella; Bisecco, Alvino; Sacco, Rosaria; Siciliano, Mattia; d'Ambrosio, Alessandro; Docimo, Renato; Cuomo, Teresa; Lavorgna, Luigi; Bonavita, Simona; Tedeschi, Gioacchino; Gallo, Antonio

    2018-05-04

    Cognitive reserve (CR) contributes to preserve cognition despite brain damage. This theory has been applied to multiple sclerosis (MS) to explain the partial relationship between cognition and MRI markers of brain pathology. Our aim was to determine the relationship between two measures of CR and cognition in MS. One hundred and forty-seven MS patients were enrolled. Cognition was assessed using the Rao's Brief Repeatable Battery and the Stroop Test. CR was measured as the vocabulary subtest of the WAIS-R score (VOC) and the number of years of formal education (EDU). Regression analysis included raw score data on each neuropsychological (NP) test as dependent variables and demographic/clinical parameters, VOC, and EDU as independent predictors. A binary logistic regression analysis including clinical/CR parameters as covariates and absence/presence of cognitive deficits as dependent variables was performed too. VOC, but not EDU, was strongly correlated with performances at all ten NP tests. EDU was correlated with executive performances. The binary logistic regression showed that only the Expanded Disability Status Scale (EDSS) and VOC were independently correlated with the presence/absence of CD. The lower the VOC and/or the higher the EDSS, the higher the frequency of CD. In conclusion, our study supports the relevance of CR in subtending cognitive performances and the presence of CD in MS patients.

  1. A Simple Microsoft Excel Method to Predict Antibiotic Outbreaks and Underutilization.

    PubMed

    Miglis, Cristina; Rhodes, Nathaniel J; Avedissian, Sean N; Zembower, Teresa R; Postelnick, Michael; Wunderink, Richard G; Sutton, Sarah H; Scheetz, Marc H

    2017-07-01

    Benchmarking strategies are needed to promote the appropriate use of antibiotics. We have adapted a simple regressive method in Microsoft Excel that is easily implementable and creates predictive indices. This method trends consumption over time and can identify periods of over- and underuse at the hospital level. Infect Control Hosp Epidemiol 2017;38:860-862.
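
    The same regressive benchmarking idea can be sketched outside Excel by trending monthly consumption over time and flagging months that fall outside the regression prediction interval; the consumption series below is simulated.

```python
# Sketch of the same idea outside Excel: trend monthly antibiotic use over
# time and flag months outside the regression prediction interval.
# The consumption series is simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
months = np.arange(36)
use = 100 + 0.5 * months + rng.normal(0, 5, 36)
use[30] += 25                                   # a simulated outbreak month

X = sm.add_constant(months)
fit = sm.OLS(use, X).fit()
pred = fit.get_prediction(X).summary_frame(alpha=0.05)

over = use > pred["obs_ci_upper"].to_numpy()    # possible overuse/outbreak
under = use < pred["obs_ci_lower"].to_numpy()   # possible underutilization
print(np.where(over)[0], np.where(under)[0])
```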

  2. Modelling of capital asset pricing by considering the lagged effects

    NASA Astrophysics Data System (ADS)

    Sukono; Hidayat, Y.; Bon, A. Talib bin; Supian, S.

    2017-01-01

    In this paper, the problem of modelling the Capital Asset Pricing Model (CAPM) with lagged effects is discussed. It is assumed that asset returns are influenced by the market return and the return on risk-free assets. To analyse the relationship between asset returns, the market return, and the return on risk-free assets, a CAPM regression equation and a distributed-lag CAPM regression equation are used. Building on the distributed-lag CAPM regression equation, this paper also develops a Koyck-transformed CAPM regression equation. The results show that the Koyck-transformed CAPM regression equation has the advantage of simplicity, as it requires only three parameters, compared with the distributed-lag CAPM regression equation.
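
    A sketch of the Koyck-transformed regression is shown below: the asset return is regressed on the current market return and its own first lag, so only three parameters are estimated. The return series are simulated placeholders, not market data.

```python
# Sketch of a Koyck-transformed CAPM-style regression: the asset return is
# regressed on the current market return and its own first lag, so only three
# parameters are estimated. Return series are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 250
market = rng.normal(0.0004, 0.01, n)                  # excess market returns
asset = np.zeros(n)
for t in range(1, n):
    asset[t] = 0.0001 + 0.9 * market[t] + 0.3 * asset[t - 1] \
               + rng.normal(0, 0.005)

df = pd.DataFrame({"asset": asset, "market": market})
df["asset_lag"] = df["asset"].shift(1)

# Coefficient on asset_lag is the Koyck decay parameter (lambda);
# the market coefficient is the short-run beta.
fit = smf.ols("asset ~ market + asset_lag", data=df.dropna()).fit()
print(fit.params)
```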

  3. Statistical evaluation of stability data: criteria for change-over-time and data variability.

    PubMed

    Bar, Raphael

    2003-01-01

    In a recently issued ICH Q1E guidance on evaluation of stability data of drug substances and products, the need to perform a statistical extrapolation of a shelf-life of a drug product or a retest period for a drug substance is based heavily on whether data exhibit a change-over-time and/or variability. However, this document suggests neither measures nor acceptance criteria of these two parameters. This paper demonstrates a useful application of simple statistical parameters for determining whether sets of stability data from either accelerated or long-term storage programs exhibit a change-over-time and/or variability. These parameters are all derived from a simple linear regression analysis first performed on the stability data. The p-value of the slope of the regression line is taken as a measure for change-over-time, and a value of 0.25 is suggested as a limit to insignificant change of the quantitative stability attributes monitored. The minimal process capability index, Cpk, calculated from the standard deviation of the regression line, is suggested as a measure for variability with a value of 2.5 as a limit for an insignificant variability. The usefulness of the above two parameters, p-value and Cpk, was demonstrated on stability data of a refrigerated drug product and on pooled data of three batches of a drug substance. In both cases, the determined parameters allowed characterization of the data in terms of change-over-time and variability. Consequently, complete evaluation of the stability data could be pursued according to the ICH guidance. It is believed that the application of the above two parameters with their acceptance criteria will allow a more unified evaluation of stability data.
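
    The two parameters can be sketched directly from a simple linear regression of assay value on time: the slope p-value as the change-over-time measure and a Cpk built from the regression's residual standard deviation as the variability measure. The assay values and specification limits below are invented for illustration.

```python
# Sketch of the two parameters: the regression-slope p-value as a measure of
# change-over-time, and a Cpk computed from the regression residual standard
# deviation as a measure of variability. Assay values and spec limits invented.
import numpy as np
from scipy import stats

months = np.array([0, 3, 6, 9, 12, 18, 24], dtype=float)
assay = np.array([99.8, 99.6, 99.9, 99.5, 99.7, 99.4, 99.6])   # % label claim
lsl, usl = 95.0, 105.0                                         # spec limits

fit = stats.linregress(months, assay)
residuals = assay - (fit.intercept + fit.slope * months)
residual_sd = np.sqrt(np.sum(residuals ** 2) / (len(assay) - 2))
mean = assay.mean()
cpk = min(usl - mean, mean - lsl) / (3 * residual_sd)

# p-value > 0.25 -> change-over-time treated as insignificant;
# Cpk > 2.5      -> variability treated as insignificant.
print(round(fit.pvalue, 3), round(cpk, 1))
```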

  4. Testing the Hypothesis of a Homoscedastic Error Term in Simple, Nonparametric Regression

    ERIC Educational Resources Information Center

    Wilcox, Rand R.

    2006-01-01

    Consider the nonparametric regression model Y = m(X) + τ(X)ε, where X and ε are independent random variables, ε has a median of zero and variance σ², τ is some unknown function used to model heteroscedasticity, and m(X) is an unknown function reflecting some conditional measure of location associated…

  5. Simple ultrasound rules to distinguish between benign and malignant adnexal masses before surgery: prospective validation by IOTA group

    PubMed Central

    Ameye, Lieveke; Fischerova, Daniela; Epstein, Elisabeth; Melis, Gian Benedetto; Guerriero, Stefano; Van Holsbeke, Caroline; Savelli, Luca; Fruscio, Robert; Lissoni, Andrea Alberto; Testa, Antonia Carla; Veldman, Joan; Vergote, Ignace; Van Huffel, Sabine; Bourne, Tom; Valentin, Lil

    2010-01-01

    Objectives To prospectively assess the diagnostic performance of simple ultrasound rules to predict benignity/malignancy in an adnexal mass and to test the performance of the risk of malignancy index, two logistic regression models, and subjective assessment of ultrasonic findings by an experienced ultrasound examiner in adnexal masses for which the simple rules yield an inconclusive result. Design Prospective temporal and external validation of simple ultrasound rules to distinguish benign from malignant adnexal masses. The rules comprised five ultrasonic features (including shape, size, solidity, and results of colour Doppler examination) to predict a malignant tumour (M features) and five to predict a benign tumour (B features). If one or more M features were present in the absence of a B feature, the mass was classified as malignant. If one or more B features were present in the absence of an M feature, it was classified as benign. If both M features and B features were present, or if none of the features was present, the simple rules were inconclusive. Setting 19 ultrasound centres in eight countries. Participants 1938 women with an adnexal mass examined with ultrasound by the principal investigator at each centre with a standardised research protocol. Reference standard Histological classification of the excised adnexal mass as benign or malignant. Main outcome measures Diagnostic sensitivity and specificity. Results Of the 1938 patients with an adnexal mass, 1396 (72%) had benign tumours, 373 (19.2%) had primary invasive tumours, 111 (5.7%) had borderline malignant tumours, and 58 (3%) had metastatic tumours in the ovary. The simple rules yielded a conclusive result in 1501 (77%) masses, for which they resulted in a sensitivity of 92% (95% confidence interval 89% to 94%) and a specificity of 96% (94% to 97%). The corresponding sensitivity and specificity of subjective assessment were 91% (88% to 94%) and 96% (94% to 97%). In the 357 masses for which the simple rules yielded an inconclusive result and with available results of CA-125 measurements, the sensitivities were 89% (83% to 93%) for subjective assessment, 50% (42% to 58%) for the risk of malignancy index, 89% (83% to 93%) for logistic regression model 1, and 82% (75% to 87%) for logistic regression model 2; the corresponding specificities were 78% (72% to 83%), 84% (78% to 88%), 44% (38% to 51%), and 48% (42% to 55%). Use of the simple rules as a triage test and subjective assessment for those masses for which the simple rules yielded an inconclusive result gave a sensitivity of 91% (88% to 93%) and a specificity of 93% (91% to 94%), compared with a sensitivity of 90% (88% to 93%) and a specificity of 93% (91% to 94%) when subjective assessment was used in all masses. Conclusions The use of the simple rules has the potential to improve the management of women with adnexal masses. In adnexal masses for which the rules yielded an inconclusive result, subjective assessment of ultrasonic findings by an experienced ultrasound examiner was the most accurate diagnostic test; the risk of malignancy index and the two regression models were not useful. PMID:21156740

  6. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    PubMed

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

    In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log 10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.

  7. REML/BLUP and sequential path analysis in estimating genotypic values and interrelationships among simple maize grain yield-related traits.

    PubMed

    Olivoto, T; Nardino, M; Carvalho, I R; Follmann, D N; Ferrari, M; Szareski, V J; de Pelegrin, A J; de Souza, V Q

    2017-03-22

    Methodologies using restricted maximum likelihood/best linear unbiased prediction (REML/BLUP) in combination with sequential path analysis in maize are still limited in the literature. Therefore, the aims of this study were: i) to use REML/BLUP-based procedures in order to estimate variance components, genetic parameters, and genotypic values of simple maize hybrids, and ii) to fit stepwise regressions considering genotypic values to form a path diagram with multi-order predictors and minimum multicollinearity that explains the relationships of cause and effect among grain yield-related traits. Fifteen commercial simple maize hybrids were evaluated in multi-environment trials in a randomized complete block design with four replications. The environmental variance (78.80%) and genotype-by-environment variance (20.83%) accounted for more than 99% of the phenotypic variance of grain yield, which makes direct selection for this trait by breeders difficult. The sequential path analysis model allowed the selection of traits with high explanatory power and minimum multicollinearity, resulting in models with elevated fit (R² > 0.9 and ε < 0.3). The number of kernels per ear (NKE) and thousand-kernel weight (TKW) are the traits with the largest direct effects on grain yield (r = 0.66 and 0.73, respectively). The high accuracy of selection (0.86 and 0.89) associated with the high heritability of the average (0.732 and 0.794) for NKE and TKW, respectively, indicated good reliability and prospects of success in the indirect selection of hybrids with high-yield potential through these traits. The negative direct effect of NKE on TKW (r = -0.856), however, must be considered. The joint use of mixed models and sequential path analysis is effective in the evaluation of maize-breeding trials.

  8. [Value of the albumin to globulin ratio in predicting severity and prognosis in myasthenia gravis patients].

    PubMed

    Yang, D H; Su, Z Q; Chen, Y; Chen, Z B; Ding, Z N; Weng, Y Y; Li, J; Li, X; Tong, Q L; Han, Y X; Zhang, X

    2016-03-08

    To assess the predictive value of the albumin to globulin ratio (AGR) in evaluation of disease severity and prognosis in myasthenia gravis patients. A total of 135 myasthenia gravis (MG) patients were enrolled between February 2009 and March 2015. The AGR was measured on the first day of hospitalization and ranked from lowest to highest, and the patients were divided into three equal tertiles according to the AGR values: T1 (AGR <1.34), T2 (1.34≤AGR≤1.53) and T3 (AGR>1.53). The Kaplan-Meier curve was used to evaluate the prognostic value of AGR. Cox model analysis was used to evaluate the relevant factors. Multivariate logistic regression analysis was used to find the predictors of myasthenia crisis during hospitalization. The median length of hospital stay was 21 days (15-35.5) in T1, 18 days (14-27.5) in T2, and 16 days (12-22.5) in T3 (P<0.01), and Kaplan-Meier curves showed a significant difference among the three groups. In the univariate model, serum albumin, creatinine, AGR and MGFA clinical classification were related to prognosis of myasthenia gravis. In the multivariate Cox regression analysis, the AGR (P<0.001) and MGFA clinical classification (P<0.001) were independent predictive factors of disease severity and prognosis in myasthenia gravis patients; the corresponding hazard ratios (HR) were 4.655 (95% CI: 2.355-9.202) and 0.596 (95% CI: 0.492-0.723), respectively. Multivariate logistic regression analysis showed the AGR (P<0.001) and MGFA clinical classification were related to myasthenia crisis. The AGR may represent a simple, potentially useful predictive biomarker for evaluating the disease severity and prognosis of patients with myasthenia gravis.

  9. cp-R, an interface to the R programming language for clinical laboratory method comparisons.

    PubMed

    Holmes, Daniel T

    2015-02-01

    Clinical scientists frequently need to compare two different bioanalytical methods as part of assay validation/monitoring. As a matter of necessity, regression methods for quantitative comparison in clinical chemistry, hematology and other clinical laboratory disciplines must allow for error in both the x and y variables. Traditionally the methods popularized by 1) Deming and 2) Passing and Bablok have been recommended. While commercial tools exist, no simple open-source tool is available. The purpose of this work was to develop an entirely open-source, GUI-driven program for bioanalytical method comparisons capable of performing these regression methods and able to produce highly customized graphical output. The GUI is written in Python and PyQt4, with R scripts performing the regression and graphical functions. The program can be run from source code or as a pre-compiled binary executable. The software performs three forms of regression and offers weighting where applicable. Confidence bands of the regression are calculated using bootstrapping for the Deming and Passing-Bablok methods. Users can customize regression plots according to the tools available in R and can produce output in jpg, png, tiff, or bmp format at any desired resolution, or in ps and pdf vector formats. Bland-Altman plots and some regression diagnostic plots are also generated. Correctness of regression parameter estimates was confirmed against existing R packages. The program allows for rapid and highly customizable graphical output capable of conforming to the publication requirements of any clinical chemistry journal. Quick method comparisons can also be performed and the results cut and pasted into spreadsheet or word-processing applications. We present a simple and intuitive open-source tool for quantitative method comparison in a clinical laboratory environment. Copyright © 2014 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
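
    The core Deming calculation that such a tool wraps can be sketched with the standard textbook closed form; this is not cp-R's own code, and the paired measurements below are hypothetical.

```python
# Generic Deming regression (errors in both x and y) for method comparison.
import numpy as np

def deming(x, y, delta=1.0):
    """delta = ratio of y-error variance to x-error variance (1 = orthogonal fit)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sxx = np.sum((x - mx) ** 2)
    syy = np.sum((y - my) ** 2)
    sxy = np.sum((x - mx) * (y - my))
    slope = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    return my - slope * mx, slope        # intercept, slope

method_a = np.array([1.0, 2.1, 3.0, 4.2, 5.1])   # hypothetical paired results
method_b = np.array([1.1, 1.9, 3.2, 4.0, 5.3])
print(deming(method_a, method_b))
```

    Confidence bands would then be obtained by bootstrapping the paired measurements, as the program does for both the Deming and Passing-Bablok fits.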

  10. Spatial interpolation schemes of daily precipitation for hydrologic modeling

    USGS Publications Warehouse

    Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.

    2012-01-01

    Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to specific grid points. We compared and contrasted the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirmed that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before the amount of precipitation is estimated separately on wet days. This process generated the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time-step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by the three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
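
    The two-step idea can be sketched as a logistic model for wet/dry occurrence followed by a separate amount model fitted on wet days only. The predictors and data below are synthetic placeholders, not the study's covariates or gauge network.

```python
# Sketch of a two-step precipitation estimator: occurrence first, amount second.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 2))                       # e.g. elevation, gauge distance
wet = (X[:, 0] + rng.normal(0, 1, n)) > 0         # synthetic wet/dry flag
amount = np.where(wet, np.exp(0.5 * X[:, 1] + rng.normal(0, 0.3, n)), 0.0)

occ_model = LogisticRegression().fit(X, wet)              # step 1: occurrence
amt_model = LinearRegression().fit(X[wet], np.log(amount[wet]))  # step 2: amount

p_wet = occ_model.predict_proba(X)[:, 1]
estimate = np.where(p_wet > 0.5, np.exp(amt_model.predict(X)), 0.0)
print(estimate[:5])
```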

  11. A fuzzy adaptive network approach to parameter estimation in cases where independent variables come from an exponential distribution

    NASA Astrophysics Data System (ADS)

    Dalkilic, Turkan Erbay; Apaydin, Aysen

    2009-11-01

    In a regression analysis, it is assumed that the observations come from a single class in a data cluster and that the simple functional relationship between the dependent and independent variables can be expressed using the general model Y = f(X) + ε. However, a data cluster may consist of a combination of observations that have different distributions derived from different clusters. When a regression model must be estimated for fuzzy inputs derived from different distributions, the model is termed a 'switching regression model'; here li indicates the class number of each independent variable and p is the number of independent variables [J.R. Jang, ANFIS: Adaptive-network-based fuzzy inference system, IEEE Transactions on Systems, Man and Cybernetics 23 (3) (1993) 665-685; M. Michel, Fuzzy clustering and switching regression models using ambiguity and distance rejects, Fuzzy Sets and Systems 122 (2001) 363-399; E.Q. Richard, A new approach to estimating switching regressions, Journal of the American Statistical Association 67 (338) (1972) 306-310]. In this study, adaptive networks were used to construct a model formed by gathering the obtained models. Some methods suggest the class numbers of the independent variables heuristically; alternatively, this study aims to use a suggested validity criterion for fuzzy clustering to define the optimal class number of the independent variables. For the case in which the independent variables have an exponential distribution, an algorithm is suggested for defining the unknown parameters of the switching regression model and for obtaining the estimated values after an optimal membership function suitable for the exponential distribution has been obtained.

  12. Predictive and Feedback Performance Errors are Signaled in the Simple Spike Discharge of Individual Purkinje Cells

    PubMed Central

    Popa, Laurentiu S.; Hewitt, Angela L.; Ebner, Timothy J.

    2012-01-01

    The cerebellum has been implicated in processing motor errors required for online control of movement and motor learning. The dominant view is that Purkinje cell complex spike discharge signals motor errors. This study investigated whether errors are encoded in the simple spike discharge of Purkinje cells in monkeys trained to manually track a pseudo-randomly moving target. Four task error signals were evaluated based on cursor movement relative to target movement. Linear regression analyses based on firing residuals ensured that the modulation with a specific error parameter was independent of the other error parameters and kinematics. The results demonstrate that simple spike firing in lobules IV–VI is significantly correlated with position, distance and directional errors. Independent of the error signals, the same Purkinje cells encode kinematics. The strongest error modulation occurs at feedback timing. However, in 72% of cells at least one of the R² temporal profiles resulting from regressing firing with individual errors exhibits two peak R² values. For these bimodal profiles, the first peak is at a negative τ (lead) and the second at a positive τ (lag), implying that Purkinje cells encode both prediction and feedback about an error. For the majority of the bimodal profiles, the signs of the regression coefficients or preferred directions reverse at the times of the peaks. The sign reversal results in opposing simple spike modulation for the predictive and feedback components. Dual error representations may provide the signals needed to generate sensory prediction errors used to update a forward internal model. PMID:23115173
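
    A lead/lag R² profile of the kind described can be computed by regressing a firing-rate trace on an error signal shifted over a range of τ values. The signals below are synthetic; in the study the regressions were run on firing residuals after removing kinematic effects.

```python
# Sketch of a lead/lag R^2 profile: peaks at negative tau suggest prediction,
# peaks at positive tau suggest feedback. Signals are simulated.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(2)
t = np.arange(2000)
error = np.sin(2 * np.pi * t / 200) + rng.normal(0, 0.3, t.size)
firing = np.roll(error, 25) + 0.5 * np.roll(error, -40) + rng.normal(0, 0.3, t.size)

taus = np.arange(-100, 101, 5)
r2 = []
for tau in taus:
    shifted = np.roll(error, tau)
    res = linregress(shifted[150:-150], firing[150:-150])  # trim wrapped edges
    r2.append(res.rvalue ** 2)
print(taus[int(np.argmax(r2))])     # tau of the dominant R^2 peak
```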

  13. Comprehension of texts by deaf elementary school students: The role of grammatical understanding.

    PubMed

    Barajas, Carmen; González-Cuenca, Antonia M; Carrero, Francisco

    2016-12-01

    The aim of this study was to analyze how the reading process of deaf Spanish elementary school students is affected both by those components that explain reading comprehension according to the Simple View of Reading model: decoding and linguistic comprehension (both lexical and grammatical) and by other variables that are external to the reading process: the type of assistive technology used, the age at which it is implanted or fitted, the participant's socioeconomic status and school stage. Forty-seven students aged between 6 and 13 years participated in the study; all presented with profound or severe prelingual bilateral deafness, and all used digital hearing aids or cochlear implants. Students' text comprehension skills, decoding skills and oral comprehension skills (both lexical and grammatical) were evaluated. Logistic regression analysis indicated that neither the type of assistive technology, age at time of fitting or activation, socioeconomic status, nor school stage could predict the presence or absence of difficulties in text comprehension. Furthermore, logistic regression analysis indicated that neither decoding skills, nor lexical age could predict competency in text comprehension; however, grammatical age could explain 41% of the variance. Probing deeper into the effect of grammatical understanding, logistic regression analysis indicated that a participant's understanding of reversible passive object-verb-subject sentences and reversible predicative subject-verb-object sentences accounted for 38% of the variance in text comprehension. Based on these results, we suggest that it might be beneficial to devise and evaluate interventions that focus specifically on grammatical comprehension. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. [Quantitative analysis of nucleotide mixtures with terahertz time domain spectroscopy].

    PubMed

    Zhang, Zeng-yan; Xiao, Ti-qiao; Zhao, Hong-wei; Yu, Xiao-han; Xi, Zai-jun; Xu, Hong-jie

    2008-09-01

    Adenosine, thymidine, guanosine, cytidine and uridine form the building blocks of ribonucleic acid (RNA) and deoxyribonucleic acid (DNA). Nucleosides and their derivatives all have biological activities; some can be used as medicines directly or as materials to synthesize other medicines. It is therefore meaningful to determine the components and their contents in nucleoside mixtures. In the present paper, the components and contents of mixtures of adenosine, thymidine, guanosine, cytidine and uridine were analyzed. THz absorption spectra of the pure nucleosides were set as standard spectra. The mixtures' absorption spectra were analyzed by linear regression with a non-negative constraint to identify the components and their relative contents. The experimental and analytical results show that terahertz time-domain spectroscopy provides a simple and effective way to obtain the components and their relative percentages in the mixtures, with a relative error of less than 10%. Components that are absent can be excluded reliably by this method, and the error sources were also analyzed. All the experiments and analyses confirm that the method causes no damage or contamination to the sample. This suggests it will be a simple, effective new method for biochemical material analysis, extending the application field of THz-TDS.
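
    The non-negatively constrained regression step can be sketched as follows, with synthetic Gaussian bands standing in for the measured THz spectra of the five pure nucleosides.

```python
# Sketch of spectral unmixing by non-negative least squares: a mixture spectrum
# is modeled as a non-negative combination of pure-component reference spectra.
import numpy as np
from scipy.optimize import nnls

freq = np.linspace(0.2, 2.6, 300)                 # THz axis (arbitrary units)

def band(center, width=0.08):
    return np.exp(-((freq - center) / width) ** 2)

pure = np.column_stack([band(0.8), band(1.2), band(1.7), band(2.0), band(2.3)])
true_frac = np.array([0.4, 0.0, 0.3, 0.3, 0.0])   # one component is absent
mixture = pure @ true_frac + np.random.default_rng(3).normal(0, 0.01, freq.size)

coef, resid = nnls(pure, mixture)
print(coef / coef.sum())      # relative contents; absent components stay near 0
```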

  15. Association of Apical Longitudinal Rotation with Right Ventricular Performance in Patients with Pulmonary Hypertension: Insights into Overestimation of Tricuspid Annular Plane Systolic Excursion.

    PubMed

    Motoji, Yoshiki; Tanaka, Hidekazu; Fukuda, Yuko; Sano, Hiroyuki; Ryo, Keiko; Sawa, Takuma; Miyoshi, Tatsuya; Imanishi, Junichi; Mochizuki, Yasuhide; Tatsumi, Kazuhiro; Matsumoto, Kensuke; Emoto, Noriaki; Hirata, Ken-ichi

    2016-02-01

    Current guidelines recommend the routine use of tricuspid annular plane systolic excursion (TAPSE) as a simple method for estimating right ventricular (RV) function. However, when ventricular apical longitudinal rotation (apical-LR) occurs in pulmonary hypertension (PH) patients, it may result in overestimated TAPSE. We studied 105 patients with PH defined as mean pulmonary artery pressure >25 mmHg at rest measured by right heart cardiac catheterization. TAPSE was defined as the maximum displacement during systole in the RV-focused apical four-chamber view. RV free-wall longitudinal speckle tracking strain (RV-free) was calculated by averaging 3 regional peak systolic strains. The apical-LR was measured at the peak rotation in the apical region including both left and right ventricle. The eccentricity index (EI) was defined as the ratio of the length of 2 perpendicular minor-axis diameters, one of which bisected and was perpendicular to the interventricular septum, and was obtained at end-systole (EI-sys) and end-diastole (EI-dia). Twenty age-, gender-, and left ventricular ejection fraction-matched normal controls were studied for comparison. The apical-LR in PH patients was significantly lower than that in normal controls (-3.4 ± 2.7° vs. -1.3 ± 1.9°, P = 0.001). Simple linear regression analysis showed that gender, TAPSE, EI-sys, and EI-dia/EI-sys were associated with apical-LR, but RV-free was not. Multiple regression analysis demonstrated that gender, EI-dia/EI-sys, and TAPSE were independent determinants of apical-LR. TAPSE may be overestimated in PH patients with clockwise rotation resulting from left ventricular compression. TAPSE should thus be evaluated carefully in PH patients with marked apical rotation. © 2015, Wiley Periodicals, Inc.

  16. National and regional analysis of road accidents in Spain.

    PubMed

    Tolón-Becerra, A; Lastra-Bravo, X; Flores-Parra, I

    2013-01-01

    In Spain, the absolute fatality figures decreased by almost 50 percent between 1998 and 2009. Despite this great effort, road mortality is still of great concern to political authorities. Further progress requires efficient road safety policy based on an optimal set of measures and targets that consider the initial conditions and characteristics in each region. This study attempts to analyze road accidents in Spain and its provinces in time and space during 1998-2009. First, we analyzed the daily, monthly, and nationwide (NUTS 0) development of road accidents, the correlation between logarithmic transformations of road accidents and territorial and socioeconomic variables, causality assessed by simple linear regression of road accidents on territorial and socioeconomic variables, and preliminary periodicity by fast Fourier transform. Then we analyzed the annual trend in accidents in the Spanish provinces (NUTS 3) and found correlations between the logarithmic transformations of the mortality rate, fatalities per fatal accident, and injury accidents per inhabitant and the variables population, population density, gross domestic product (GDP), length of road network, and area. Finally, causality was analyzed by simple linear regression. The most outstanding results were the negative correlation between mortality rate and population density in Spanish provinces, which has increased over time, and the finding that road accidents in Spain have an approximate periodicity of 57 days. The fast Fourier transform analysis of road accident frequency in Spain was useful in identifying the periodic, harmonic components of accidents and casualties. The periodicity observed both for the full period 1998-2009 and by year showed that the highest intensity in road accidents was bimonthly, despite the lower number of accidents and casualties in the amplitude and power spectra and efforts to reduce intensity and concentration during off-season travel (summer and December).
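
    The periodicity check can be illustrated with a fast Fourier transform of a daily count series. The ~57-day component is the paper's finding; the series below is synthetic and only demonstrates the mechanics of reading off a dominant period.

```python
# Sketch: find the dominant period of a daily accident-count series via FFT.
import numpy as np

rng = np.random.default_rng(4)
days = np.arange(4383)                               # ~12 years of daily counts
series = 10 + 2 * np.sin(2 * np.pi * days / 57) + rng.normal(0, 1, days.size)

spectrum = np.abs(np.fft.rfft(series - series.mean()))
freqs = np.fft.rfftfreq(days.size, d=1.0)            # cycles per day
dominant = freqs[np.argmax(spectrum[1:]) + 1]        # skip the zero frequency
print(1.0 / dominant)                                # dominant period in days (~57)
```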

  17. Common and Distinctive Patterns of Cognitive Dysfunction in Children With Benign Epilepsy Syndromes.

    PubMed

    Cheng, Dazhi; Yan, Xiuxian; Gao, Zhijie; Xu, Keming; Zhou, Xinlin; Chen, Qian

    2017-07-01

    Childhood absence epilepsy and benign childhood epilepsy with centrotemporal spikes are the most common forms of benign epilepsy syndromes. Although cognitive dysfunctions occur in children with both childhood absence epilepsy and benign childhood epilepsy with centrotemporal spikes, the similarity between their patterns of underlying cognitive impairments is not well understood. To describe these patterns, we examined multiple cognitive functions in children with childhood absence epilepsy and benign childhood epilepsy with centrotemporal spikes. In this study, 43 children with childhood absence epilepsy, 47 children with benign childhood epilepsy with centrotemporal spikes, and 64 control subjects were recruited; all received a standardized assessment (i.e., computerized test battery) assessing processing speed, spatial skills, calculation, language ability, intelligence, visual attention, and executive function. Groups were compared in these cognitive domains. Simple regression analysis was used to analyze the effects of epilepsy-related clinical variables on cognitive test scores. Compared with control subjects, children with childhood absence epilepsy and benign childhood epilepsy with centrotemporal spikes showed cognitive deficits in intelligence and executive function, but performed normally in language processing. Impairment in visual attention was specific to patients with childhood absence epilepsy, whereas impaired spatial ability was specific to the children with benign childhood epilepsy with centrotemporal spikes. Simple regression analysis showed syndrome-related clinical variables did not affect cognitive functions. This study provides evidence of both common and distinctive cognitive features underlying the relative cognitive difficulties in children with childhood absence epilepsy and benign childhood epilepsy with centrotemporal spikes. Our data suggest that clinicians should pay particular attention to the specific cognitive deficits in children with childhood absence epilepsy and benign childhood epilepsy with centrotemporal spikes, to allow for more discriminative and potentially more effective interventions. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Analysis of reliability of professor recommendation letters based on concordance with self-introduction letter.

    PubMed

    Kim, Sang Hyun

    2013-12-01

    The purpose of this study was to examine the concordance between a checklist's categories of professor recommendation letters and characteristics of the self-introduction letter. Checklists of professor recommendation letters were analyzed and classified into cognitive, social, and affective domains. Simple correlation was performed to determine whether the characteristics of the checklists were concordant with those of the self-introduction letter. The difference in ratings of the checklists by pass or fail grades was analyzed by independent sample t-test. Logistic regression analysis was performed to determine whether a pass or fail grade was influenced by ratings on the checklists. The Cronbach alpha value of the checklists was 0.854. Initiative, as an affective domain, in the professor's recommendation letter was highly ranked among the six checklist categories. Self-directed learning in the self-introduction letter was influenced by a pass or fail grade by logistic regression analysis (p<0.05). Successful applicants received higher ratings than those who failed in every checklist category, particularly in problem-solving ability, communication skills, initiative, and morality (p<0.05). There was a strong correlation between cognitive and affective characteristics in the professor recommendation letters and the sum of all characteristics in the self-introduction letter.

  19. [The mediating role of anger in the relationship between automatic thoughts and physical aggression in adolescents].

    PubMed

    Yavuzer, Yasemin; Karataş, Zeynep

    2013-01-01

    This study aimed to examine the mediating role of anger in the relationship between automatic thoughts and physical aggression in adolescents. The study included 224 adolescents in the 9th grade of 3 different high schools in central Burdur during the 2011-2012 academic year. Participants completed the Aggression Questionnaire and the Automatic Thoughts Scale in their classrooms during counseling sessions. Data were analyzed using simple and multiple linear regression analysis. There were positive correlations between the adolescents' automatic thoughts, physical aggression, and anger. According to the regression analysis, automatic thoughts effectively predicted the level of physical aggression (b = 0.233, P < 0.001) and anger (b = 0.325, P < 0.001). Analysis of the mediating role of anger showed that anger fully mediated the relationship between automatic thoughts and physical aggression (Sobel z = 5.646, P < 0.001). Providing adolescents with anger management skills training is therefore very important for the prevention of physical aggression. Such training programs should include components related to developing an awareness of dysfunctional and anger-triggering automatic thoughts, and how to change them. As the study group included adolescents from Burdur, the findings can only be generalized to groups with similar characteristics.
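
    The mediation test reported above (automatic thoughts → anger → physical aggression) can be sketched with two OLS fits and a Sobel z; the data below are simulated and the coefficients are not the study's estimates.

```python
# Sketch of a simple mediation test: path a (X -> M), path b (M -> Y | X), Sobel z.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 224
thoughts = rng.normal(0, 1, n)
anger = 0.33 * thoughts + rng.normal(0, 1, n)                  # path a
aggression = 0.40 * anger + 0.05 * thoughts + rng.normal(0, 1, n)

m_a = sm.OLS(anger, sm.add_constant(thoughts)).fit()
m_b = sm.OLS(aggression,
             sm.add_constant(np.column_stack([anger, thoughts]))).fit()

a, sa = m_a.params[1], m_a.bse[1]
b, sb = m_b.params[1], m_b.bse[1]     # anger coefficient, controlling for thoughts
sobel_z = (a * b) / np.sqrt(b**2 * sa**2 + a**2 * sb**2)
print(sobel_z)
```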

  20. Factors affecting metacognition of undergraduate nursing students in a blended learning environment.

    PubMed

    Hsu, Li-Ling; Hsieh, Suh-Ing

    2014-06-01

    This paper is a report of a study to examine the influence of demographic, learning involvement and learning performance variables on metacognition of undergraduate nursing students in a blended learning environment. A cross-sectional, correlational survey design was adopted. Ninety-nine students invited to participate in the study were enrolled in a professional nursing ethics course at a public nursing college. The blended learning intervention is basically an assimilation of classroom learning and online learning. Simple linear regression showed significant associations between frequency of online dialogues, the Case Analysis Attitude Scale scores, the Case Analysis Self Evaluation Scale scores, the Blended Learning Satisfaction Scale scores, and Metacognition Scale scores. Multiple linear regression indicated that frequency of online dialogues, the Case Analysis Self Evaluation Scale and the Blended Learning Satisfaction Scale were significant independent predictors of metacognition. Overall, the model accounted for almost half of the variance in metacognition. The blended learning module developed in this study proved successful in the end as a catalyst for the exercising of metacognitive abilities by the sample of nursing students. Learners are able to develop metacognitive ability in comprehension, argumentation, reasoning and various forms of higher order thinking through the blended learning process. © 2013 Wiley Publishing Asia Pty Ltd.

  1. Forecasting outbreaks of the Douglas-fir tussock moth from lower crown cocoon samples.

    Treesearch

    Richard R. Mason; Donald W. Scott; H. Gene Paul

    1993-01-01

    A predictive technique using a simple linear regression was developed to forecast the midcrown density of small tussock moth larvae from estimates of cocoon density in the previous generation. The regression estimator was derived from field samples of cocoons and larvae taken from a wide range of nonoutbreak tussock moth populations. The accuracy of the predictions was...

  2. An empirical model for estimating annual consumption by freshwater fish populations

    USGS Publications Warehouse

    Liao, H.; Pierce, C.L.; Larscheid, J.G.

    2005-01-01

    Population consumption is an important process linking predator populations to their prey resources. Simple tools are needed to enable fisheries managers to estimate population consumption. We assembled 74 individual estimates of annual consumption by freshwater fish populations and their mean annual population size, 41 of which also included estimates of mean annual biomass. The data set included 14 freshwater fish species from 10 different bodies of water. From this data set we developed two simple linear regression models predicting annual population consumption. Log-transformed population size explained 94% of the variation in log-transformed annual population consumption. Log-transformed biomass explained 98% of the variation in log-transformed annual population consumption. We quantified the accuracy of our regressions and three alternative consumption models as the mean percent difference from observed (bioenergetics-derived) estimates in a test data set. Predictions from our population-size regression matched observed consumption estimates poorly (mean percent difference = 222%). Predictions from our biomass regression matched observed consumption reasonably well (mean percent difference = 24%). The biomass regression was superior to an alternative model of similar complexity, and comparable to two alternative models that were more complex and difficult to apply. Our biomass regression model, log10(consumption) = 0.5442 + 0.9962·log10(biomass), will be a useful tool for fishery managers, enabling them to make reasonably accurate annual population consumption predictions from mean annual biomass estimates. © Copyright by the American Fisheries Society 2005.
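
    Applying the reported biomass model is a one-line calculation; the biomass value below is hypothetical, and note that this naive back-transformation from log10 ignores any bias-correction factor.

```python
# Apply the paper's reported biomass regression to a mean annual biomass value
# (in the paper's original units).
import numpy as np

def annual_consumption(biomass):
    """log10(consumption) = 0.5442 + 0.9962 * log10(biomass), back-transformed."""
    return 10 ** (0.5442 + 0.9962 * np.log10(biomass))

print(annual_consumption(1500.0))   # hypothetical mean annual biomass
```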

  3. Simple versus complex degenerative mitral valve disease.

    PubMed

    Javadikasgari, Hoda; Mihaljevic, Tomislav; Suri, Rakesh M; Svensson, Lars G; Navia, Jose L; Wang, Robert Z; Tappuni, Bassman; Lowry, Ashley M; McCurry, Kenneth R; Blackstone, Eugene H; Desai, Milind Y; Mick, Stephanie L; Gillinov, A Marc

    2018-07-01

    At a center where surgeons favor mitral valve (MV) repair for all subsets of leaflet prolapse, we compared results of patients undergoing repair for simple versus complex degenerative MV disease. From January 1985 to January 2016, 6153 patients underwent primary isolated MV repair for degenerative disease, 3101 patients underwent primary isolated MV repair for simple disease (posterior prolapse), and 3052 patients underwent primary isolated MV repair for complex disease (anterior or bileaflet prolapse), based on preoperative echocardiographic images. Logistic regression analysis was used to generate propensity scores for risk-adjusted comparisons (n = 2065 matched pairs). Durability was assessed by longitudinal recurrence of mitral regurgitation and reoperation. Compared with patients with simple disease, those undergoing repair of complex pathology were more likely to be younger and female (both P values < .0001) but with similar symptoms (P = .3). The most common repair technique was ring/band annuloplasty (3055/99% simple vs 3000/98% complex; P = .5), followed by leaflet resection (2802/90% simple vs 2249/74% complex; P < .0001). Among propensity-matched patients, recurrence of severe mitral regurgitation 10 years after repair was 6.2% for simple pathology versus 11% for complex pathology (P = .007), reoperation at 18 years was 6.3% for simple pathology versus 11% for complex pathology, and 20-year survival was 62% for simple pathology versus 61% for complex pathology (P = .6). Early surgical intervention has become more common in patients with degenerative MV disease, regardless of valve prolapse complexity or symptom status. Valve repair was associated with similarly low operative risk and time-related survival but less durability in complex disease. Lifelong annual echocardiographic surveillance after MV repair is recommended, particularly in patients with complex disease. Copyright © 2018 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.

  4. A comparative evaluation of endemic and non-endemic regions of visceral leishmaniasis (Kala-azar) in India with ground survey and space technology.

    PubMed

    Kesari, Shreekant; Bhunia, Gouri Sankar; Kumar, Vijay; Jeyaram, Algarswamy; Ranjan, Alok; Das, Pradeep

    2011-08-01

    In visceral leishmaniasis, phlebotomine vectors are targets for control measures, and understanding the ecosystem of the vectors is a prerequisite for designing these measures. This study endeavours to delineate the suitable locations of Phlebotomus argentipes in relation to environmental characteristics in endemic and non-endemic districts in India. A cross-sectional survey was conducted in 25 villages in each district. Environmental data were obtained from remote sensing images, and vector density was measured using a CDC light trap. Simple linear regression analysis was used to measure the association between climatic parameters and vector density. Using factor analysis, the relationship between land cover classes and P. argentipes density among the villages in both districts was investigated. The results of the regression analysis indicated that indoor temperature and relative humidity are the best predictors of P. argentipes distribution. Factor analysis confirmed breeding preferences of P. argentipes by landscape element: minimum Normalised Difference Vegetation Index, marshy land and orchard/settlement produced high loadings in the endemic region, whereas water bodies and dense forest were preferred in the non-endemic sites. Soil properties in the two districts were also studied and indicated that soil pH and moisture content are higher in endemic sites than in non-endemic sites. The present study should be useful for making critical decisions on vector surveillance and control of the Kala-azar disease vector.

  5. Predictability and Quantification of Complex Groundwater Table Dynamics Driven by Irregular Surface Water Fluctuations

    NASA Astrophysics Data System (ADS)

    Xin, Pei; Wang, Shen S. J.; Shen, Chengji; Zhang, Zeyu; Lu, Chunhui; Li, Ling

    2018-03-01

    Shallow groundwater interacts strongly with surface water across a quarter of global land area, affecting significantly the terrestrial eco-hydrology and biogeochemistry. We examined groundwater behavior subjected to unimodal impulse and irregular surface water fluctuations, combining physical experiments, numerical simulations, and functional data analysis. Both the experiments and numerical simulations demonstrated a damped and delayed response of groundwater table to surface water fluctuations. To quantify this hysteretic shallow groundwater behavior, we developed a regression model with the Gamma distribution functions adopted to account for the dependence of groundwater behavior on antecedent surface water conditions. The regression model fits and predicts well the groundwater table oscillations resulting from propagation of irregular surface water fluctuations in both laboratory and large-scale aquifers. The coefficients of the Gamma distribution function vary spatially, reflecting the hysteresis effect associated with increased amplitude damping and delay as the fluctuation propagates. The regression model, in a relatively simple functional form, has demonstrated its capacity of reproducing high-order nonlinear effects that underpin the surface water and groundwater interactions. The finding has important implications for understanding and predicting shallow groundwater behavior and associated biogeochemical processes, and will contribute broadly to studies of groundwater-dependent ecology and biogeochemistry.
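
    The regression idea can be sketched by convolving the surface-water series with a Gamma-distribution kernel, so that the groundwater response depends on antecedent conditions, and then regressing the observed water table on the convolved signal. The kernel parameters and series below are illustrative, not the paper's fitted values.

```python
# Sketch: Gamma-kernel convolution of surface-water forcing, then linear regression.
import numpy as np
from scipy.stats import gamma
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
t = np.arange(2000)
surface = np.cumsum(rng.normal(0, 0.05, t.size))          # irregular forcing

lags = np.arange(0, 200)
kernel = gamma.pdf(lags, a=3.0, scale=15.0)               # damping and delay
kernel /= kernel.sum()
convolved = np.convolve(surface, kernel, mode="full")[:t.size]

groundwater = 0.8 * convolved + rng.normal(0, 0.02, t.size)   # synthetic "observations"
fit = LinearRegression().fit(convolved.reshape(-1, 1), groundwater)
print(fit.coef_, fit.score(convolved.reshape(-1, 1), groundwater))
```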

  6. Weighing Evidence "Steampunk" Style via the Meta-Analyser.

    PubMed

    Bowden, Jack; Jackson, Chris

    2016-10-01

    The funnel plot is a graphical visualization of summary data estimates from a meta-analysis, and is a useful tool for detecting departures from the standard modeling assumptions. Although perhaps not widely appreciated, a simple extension of the funnel plot can help to facilitate an intuitive interpretation of the mathematics underlying a meta-analysis at a more fundamental level, by equating it to determining the center of mass of a physical system. We used this analogy to explain the concepts of weighing evidence and of biased evidence to a young audience at the Cambridge Science Festival, without recourse to precise definitions or statistical formulas and with a little help from Sherlock Holmes! Following on from the science fair, we have developed an interactive web-application (named the Meta-Analyser) to bring these ideas to a wider audience. We envisage that our application will be a useful tool for researchers when interpreting their data. First, to facilitate a simple understanding of fixed and random effects modeling approaches; second, to assess the importance of outliers; and third, to show the impact of adjusting for small study bias. This final aim is realized by introducing a novel graphical interpretation of the well-known method of Egger regression.
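
    The small-study (Egger) regression mentioned at the end has a simple form: regress each study's standardized effect (estimate/SE) on its precision (1/SE) and test whether the intercept differs from zero. The study estimates below are made up.

```python
# Sketch of Egger regression for funnel-plot asymmetry / small-study bias.
import numpy as np
import statsmodels.api as sm

effects = np.array([0.30, 0.25, 0.40, 0.55, 0.70])   # hypothetical study estimates
se = np.array([0.05, 0.08, 0.12, 0.20, 0.30])        # their standard errors

precision = 1.0 / se
standardized = effects / se
egger = sm.OLS(standardized, sm.add_constant(precision)).fit()
print(egger.params[0], egger.pvalues[0])   # intercept and its p-value
```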

  7. The Role of Habit and Perceived Control on Health Behavior among Pregnant Women.

    PubMed

    Mullan, Barbara; Henderson, Joanna; Kothe, Emily; Allom, Vanessa; Orbell, Sheina; Hamilton, Kyra

    2016-05-01

    Many pregnant women do not adhere to physical activity and dietary recommendations. Research investigating what psychological processes might predict physical activity and healthy eating (fruit and vegetable consumption) during pregnancy is scant. We explored the role of intention, habit, and perceived behavioral control as predictors of physical activity and healthy eating. Pregnant women (N = 195, mean age = 30.17 years, SD = 4.46) completed questionnaires at 2 time points. At Time 1, participants completed measures of intention, habit, and perceived behavioral control. At Time 2, participants reported on their behavior (physical activity and healthy eating) within the intervening week. Regression analysis determined whether Time 1 variables predicted behavior at Time 2. Interaction terms also were tested. Final regression models indicated that only intention and habit explained significant variance in physical activity, whereas habit and the interaction between intention and habit explained significant variance in healthy eating. Simple slopes analysis indicated that the relationship between intention and healthy eating behavior was only significant at high levels of habit. Findings highlight the influence of habit on behavior and suggest that automaticity interventions may be useful in changing health behaviors during pregnancy.

  8. Covariate Measurement Error Correction Methods in Mediation Analysis with Failure Time Data

    PubMed Central

    Zhao, Shanshan

    2014-01-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This paper focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error and error associated with temporal variation. The underlying model with the ‘true’ mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling design. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. PMID:25139469

  9. Covariate measurement error correction methods in mediation analysis with failure time data.

    PubMed

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.

  10. Indirect spectrophotometric determination of arbutin, a whitening agent, through oxidation by periodate and complexation with ferric chloride

    NASA Astrophysics Data System (ADS)

    Barsoom, B. N.; Abdelsamad, A. M. E.; Adib, N. M.

    2006-07-01

    A simple and accurate spectrophotometric method for the determination of arbutin (glycosylated hydroquinone) is described. It is based on the oxidation of arbutin by periodate in the presence of iodate; excess periodate causes liberation of iodine at pH 8.0. The unreacted periodate is determined by measuring the liberated iodine spectrophotometrically in the wavelength range 300-500 nm. A calibration curve was constructed for more accurate results, and the correlation coefficient of the linear regression analysis was -0.9778. The precision of the method was better than 6.17% R.S.D. (n = 3). Regression analysis of the Beer-Lambert plot shows good correlation in the concentration range 25-125 μg/ml. The identification limit was determined to be 25 μg/ml. A detailed study of the reaction conditions was carried out, including the effects of pH, time, temperature and volume of periodate. The validity of the proposed method was tested by analyzing pure and authentic samples containing arbutin; the average percent recovery was 100.86%. An alternative method is also proposed, which involves a complexation reaction between arbutin and ferric chloride solution; the resulting yellowish-green complex was determined spectrophotometrically.

  11. A practical approach for linearity assessment of calibration curves under the International Union of Pure and Applied Chemistry (IUPAC) guidelines for an in-house validation of method of analysis.

    PubMed

    Sanagi, M Marsin; Nasir, Zalilah; Ling, Susie Lu; Hermawan, Dadan; Ibrahim, Wan Aini Wan; Naim, Ahmedy Abu

    2010-01-01

    Linearity assessment as required in method validation has always been subject to different interpretations and definitions by various guidelines and protocols. However, there are very limited applicable implementation procedures that can be followed by a laboratory chemist in assessing linearity. Thus, this work proposes a simple method for linearity assessment in method validation by a regression analysis that covers experimental design, estimation of the parameters, outlier treatment, and evaluation of the assumptions according to the International Union of Pure and Applied Chemistry guidelines. The suitability of this procedure was demonstrated by its application to an in-house validation for the determination of plasticizers in plastic food packaging by GC.
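
    A bare-bones version of such a linearity check, assuming hypothetical calibration levels and responses, fits an ordinary least-squares line, confirms the slope is significant, and inspects the residuals for curvature; the outlier treatment and formal assumption tests described in the guidelines would follow in practice.

```python
# Minimal sketch of a calibration-linearity check with OLS and residual inspection.
import numpy as np
import statsmodels.api as sm

conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])            # hypothetical levels
signal = np.array([0.02, 0.26, 0.51, 1.03, 1.98, 4.05])    # instrument response

fit = sm.OLS(signal, sm.add_constant(conc)).fit()
print(fit.params, fit.rsquared)
print(fit.pvalues[1] < 0.05)        # is the slope significantly non-zero?
print(fit.resid)                    # look for curvature or outliers in residuals
```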

  12. Multivariate methods on the excitation emission matrix fluorescence spectroscopic data of diesel-kerosene mixtures: a comparative study.

    PubMed

    Divya, O; Mishra, Ashok K

    2007-05-29

    Quantitative determination of the kerosene fraction present in diesel has been carried out based on excitation emission matrix fluorescence (EEMF) along with parallel factor analysis (PARAFAC) and N-way partial least squares regression (N-PLS). EEMF is a simple, sensitive and nondestructive method suitable for the analysis of multifluorophoric mixtures. Calibration models consisting of varying compositions of diesel and kerosene were constructed and validated using the leave-one-out cross-validation method. The accuracy of each model was evaluated through the root mean square error of prediction (RMSEP) for the PARAFAC, N-PLS and unfold-PLS methods. N-PLS was found to be a better method than PARAFAC and unfold-PLS because of its lower RMSEP values.

  13. Prediction of Water Quality Parameters Using Statistical Methods: A Case Study in a Specially Protected Area, Ankara, Turkey

    NASA Astrophysics Data System (ADS)

    Alp, E.; Yücel, Ö.; Özcan, Z.

    2014-12-01

    Turkey has been making many legal arrangements for sustainable water management during the harmonization process with the European Union. In order to make cost-effective and efficient decisions, the monitoring network in Turkey has been expanding. However, due to time and budget constraints, the desired number of monitoring campaigns cannot be carried out. Hence, in this study, independent parameters that can be measured easily and quickly are used to estimate water quality parameters in Lakes Mogan and Eymir using linear regression. Nonpoint sources are one of the major pollutant components in the Eymir and Mogan lakes. In this paper, the correlation between easily measurable parameters (DO, temperature, electrical conductivity, pH, precipitation) and dependent variables (TN, TP, COD, Chl-a, TSS, total coliform) is investigated. Simple regression analysis is performed for each season in the Eymir and Mogan lakes with the SPSS statistical program, using the water quality data collected between 2006 and 2012. Regression analysis demonstrated significant linear relationships between measured and simulated concentrations for TN (R² = 0.86), TP (R² = 0.85), TSS (R² = 0.91), Chl-a (R² = 0.94), COD (R² = 0.99), and total coliform (R² = 0.97), which are the best results in each season for the Eymir and Mogan lakes. The overall results of this study show that the water quality of lakes can be predicted from easily measurable parameters, even in ungauged situations. Moreover, the outputs obtained from the regression equations can be used as input for water quality models, such as a phosphorus budget model used to calculate the required reduction in the external phosphorus load to Lake Mogan to meet water quality standards.

  14. Human allometry: adult bodies are more nearly geometrically similar than regression analysis has suggested.

    PubMed

    Burton, Richard F

    2010-01-01

    It is almost a matter of dogma that human body mass in adults tends to vary roughly in proportion to the square of height (stature), as Quetelet stated in 1835. As he realised, perfect isometry or geometric similarity requires that body mass varies with height cubed, so there seems to be a trend for tall adults to be relatively much lighter than short ones. Much evidence regarding component tissues and organs seems to accord with this idea. However, the hypothesis is presented that the proportions of the body are actually very much less size-dependent. Past evidence has mostly been obtained by least-squares regression analysis, but this cannot generally give a true picture of the allometric relationships. This is because there is considerable scatter in the data (leading to a low correlation between mass and height) and because neither variable causally determines the other. The relevant regression equations, though often formulated in logarithmic terms, effectively treat the masses as proportional to (body height)(b). Values of b estimated by regression must usually underestimate the true functional values, doing so especially when mass and height are poorly correlated. It is therefore telling support for the hypothesis that published estimates of b both for the whole body (which range between 1.0 and 2.5) and for its component tissues and organs (which vary even more) correlate with the corresponding correlation coefficients for mass and height. There is no simple statistical technique for establishing the true functional relationships, but Monte Carlo modelling has shown that the results obtained for total body mass are compatible with a true height exponent of three. Other data, on relationships between body mass and the girths of various body parts such as the thigh and chest, are also more consistent with isometry than regression analysis has suggested. This too is demonstrated by modelling. It thus seems that much of anthropometry needs to be re-evaluated. It is not suggested that all organs and tissues scale equally with whole body size.
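
    The attenuation argument can be reproduced with a small Monte Carlo sketch: if height and mass both scatter independently around a latent body size, the fitted log-log slope falls well below the true exponent of 3 even though the generating relationship is exactly isometric. The variance values below are arbitrary illustrations.

```python
# Monte Carlo sketch of regression attenuation in log(mass)-on-log(height) fits.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(7)
size = rng.normal(0.0, 0.04, 5000)               # latent log "body size"
log_h = np.log(1.72) + size + rng.normal(0, 0.025, size.size)       # height with scatter
log_m = np.log(22.0 * 1.72**3) + 3 * size + rng.normal(0, 0.10, size.size)  # isometric mass

res = linregress(log_h, log_m)
print(res.slope, res.rvalue)   # slope well below 3, with a modest mass-height correlation
```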

  15. Forecasting USAF JP-8 Fuel Needs

    DTIC Science & Technology

    2009-03-01

    When we consider long-term forecasts, 5 years in this case, multiple regression outperforms ANN modeling within the specified ... comparison of simple, easy-to-implement methods versus complex ones.

  16. Real time gamma-ray signature identifier

    DOEpatents

    Rowland, Mark [Alamo, CA; Gosnell, Tom B [Moraga, CA; Ham, Cheryl [Livermore, CA; Perkins, Dwight [Livermore, CA; Wong, James [Dublin, CA

    2012-05-15

    A real-time gamma-ray signature/source identification method and system uses principal components analysis (PCA) to transform and substantially reduce one or more comprehensive spectral libraries of nuclear material types and configurations into concise representations (signatures) that represent and index each individual predetermined spectrum in principal component (PC) space. An unknown gamma-ray signature may then be compared against the representative signatures to find a match, or at least to characterize the unknown signature from among all the entries in the library, with a single regression or simple projection into the PC space. This substantially reduces processing time and computing resources and enables real-time characterization and/or identification.
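
    The library-matching idea can be sketched generically: compress a spectral library with PCA, project an unknown spectrum into the same PC space, and take the nearest entry. The spectra below are random placeholders rather than nuclear gamma-ray data.

```python
# Sketch of PCA-based library matching by nearest neighbour in PC space.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
library = rng.random((50, 1024))                    # 50 reference spectra (synthetic)
pca = PCA(n_components=10).fit(library)
lib_pc = pca.transform(library)                     # concise library signatures

unknown = library[17] + rng.normal(0, 0.02, 1024)   # a noisy copy of entry 17
unk_pc = pca.transform(unknown.reshape(1, -1))      # single projection into PC space

match = int(np.argmin(np.linalg.norm(lib_pc - unk_pc, axis=1)))
print(match)                                        # expected: 17
```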

  17. Correlation between pesticide use in agriculture and adverse birth outcomes in Brazil: an ecological study.

    PubMed

    de Siqueira, Marília Teixeira; Braga, Cynthia; Cabral-Filho, José Eulálio; Augusto, Lia Giraldo da Silva; Figueiroa, José Natal; Souza, Ariani Impieri

    2010-06-01

    This ecological study analyzed the association between pesticide use and prematurity, low weight and congenital abnormality at birth, infant death from congenital abnormality, and fetal death in Brazil in 2001. Simple linear regression analysis identified a positive association between pesticide use and prematurity, low birth weight, and congenital abnormality. The associations between pesticide use and low birth weight (p = 0.045), congenital abnormality (p = 0.004), and the infant death rate from congenital abnormality (p = 0.039) remained after adjustment for the proportion of pregnant women with a low number of prenatal care visits.

  18. The Regional Differences of Gpp Estimation by Solar Induced Fluorescence

    NASA Astrophysics Data System (ADS)

    Wang, X.; Lu, S.

    2018-04-01

    Estimating gross primary productivity (GPP) at large spatial scales is important for studying the global carbon cycle and global climate change. In this study, the relationship between solar-induced chlorophyll fluorescence (SIF) and GPP is analysed at different levels of mean annual temperature and total annual precipitation using simple linear regression analysis. The results showed a high correlation between SIF and GPP when the mean annual temperature of an area is in the range of -5 °C to 15 °C and the total annual precipitation is higher than 200 mm. These results can provide a basis for future research on GPP estimation.

  19. Employee resourcing strategies and universities' corporate image: A survey dataset.

    PubMed

    Falola, Hezekiah Olubusayo; Oludayo, Olumuyiwa Akinrole; Olokundun, Maxwell Ayodele; Salau, Odunayo Paul; Ibidunni, Ayodotun Stephen; Igbinoba, Ebe

    2018-06-01

    The data examine the effect of employee resourcing strategies on corporate image. The data were generated from a total of 500 copies of a questionnaire administered to the academic staff of six (6) selected private universities in Southwest Nigeria, of which four hundred and forty-three (443) were retrieved. Stratified and simple random sampling techniques were used to select the respondents for this study. Descriptive statistics and linear regression were used for the presentation of the data, with mean scores used as the statistical tool of analysis. The data presented in this article are made available to facilitate further and more comprehensive investigation of the subject matter.

  20. Statistical analysis of water-level, springflow, and streamflow data for the Edwards Aquifer in south-central Texas

    USGS Publications Warehouse

    Puente, Celso

    1976-01-01

    Water-level, springflow, and streamflow data were used to develop simple and multiple linear-regression equations for use in estimating water levels in wells and the flow of three major springs in the Edwards aquifer in the eastern San Antonio area. The equations provide daily, monthly, and annual estimates that compare very favorably with observed data. Analyses of geologic and hydrologic data indicate that the water discharged by the major springs is supplied primarily by regional underflow from the west and southwest and by local recharge in the infiltration area in northern Bexar, Comal, and Hays Counties.

  1. Optimizing separate phase light hydrocarbon recovery from contaminated unconfined aquifers

    NASA Astrophysics Data System (ADS)

    Cooper, Grant S.; Peralta, Richard C.; Kaluarachchi, Jagath J.

    A modeling approach is presented that optimizes separate phase recovery of light non-aqueous phase liquids (LNAPL) for a single dual-extraction well in a homogeneous, isotropic unconfined aquifer. A simulation/regression/optimization (S/R/O) model is developed to predict, analyze, and optimize the oil recovery process. The approach combines detailed simulation, nonlinear regression, and optimization. The S/R/O model utilizes nonlinear regression equations describing system response to time-varying water pumping and oil skimming. Regression equations are developed for residual oil volume and free oil volume. The S/R/O model determines optimized time-varying (stepwise) pumping rates which minimize residual oil volume and maximize free oil recovery while causing free oil volume to decrease a specified amount. This S/R/O modeling approach implicitly immobilizes the free product plume by reversing the water table gradient while achieving containment. Application to a simple representative problem illustrates the S/R/O model utility for problem analysis and remediation design. When compared with the best steady pumping strategies, the optimal stepwise pumping strategy improves free oil recovery by 11.5% and reduces the amount of residual oil left in the system due to pumping by 15%. The S/R/O model approach offers promise for enhancing the design of free phase LNAPL recovery systems and to help in making cost-effective operation and management decisions for hydrogeologists, engineers, and regulators.

  2. Validity and reliability of dental age estimation of teeth root translucency based on digital luminance determination.

    PubMed

    Ramsthaler, Frank; Kettner, Mattias; Verhoff, Marcel A

    2014-01-01

    In forensic anthropological casework, estimating age-at-death is key to profiling unknown skeletal remains. The aim of this study was to examine the reliability of a new, simple, fast, and inexpensive digital odontological method for age-at-death estimation. The method is based on the original Lamendin method, which is a widely used technique in the repertoire of odontological aging methods in forensic anthropology. We examined 129 single-rooted teeth, employing a digital camera and imaging software to measure the luminance of the teeth's translucent root zone. Variability in luminance detection was evaluated using statistical technical error of measurement analysis. The method revealed stable values largely unrelated to observer experience, whereas the requisite formulas proved to be camera-specific and should therefore be generated for an individual recording setting based on samples of known chronological age. Multiple regression analysis showed a highly significant influence of the coefficients of the variables "arithmetic mean" and "standard deviation" of luminance on the regression formula. For the use of this multivariate equation for age-at-death estimation in casework, a standard error of the estimate of 6.51 years was calculated. Step-by-step reduction of the number of embedded variables to a linear regression analysis employing the best contributor, the "arithmetic mean" of luminance, yielded a regression equation with a standard error of 6.72 years (p < 0.001). The results of this study not only support the premise of root translucency as an age-related phenomenon, but also demonstrate that translucency reflects a number of other influencing factors in addition to age. This new digital measuring technique of the zone of dental root luminance can broaden the array of methods available for estimating chronological age, and furthermore facilitates measurement and age classification due to its low dependence on observer experience.

  3. Changes in Clavicle Length and Maturation in Americans: 1840-1980.

    PubMed

    Langley, Natalie R; Cridlin, Sandra

    2016-01-01

    Secular changes refer to short-term biological changes ostensibly due to environmental factors. Two well-documented secular trends in many populations are earlier age of menarche and increasing stature. This study synthesizes data on maximum clavicle length and fusion of the medial epiphysis in 1840-1980 American birth cohorts to provide a comprehensive assessment of developmental and morphological change in the clavicle. Clavicles from the Hamann-Todd Human Osteological Collection (n = 354), McKern and Stewart Korean War males (n = 341), Forensic Anthropology Data Bank (n = 1,239), and the McCormick Clavicle Collection (n = 1,137) were used in the analysis. Transition analysis was used to evaluate fusion of the medial epiphysis (scored as unfused, fusing, or fused). Several statistical treatments were used to assess fluctuations in maximum clavicle length. First, Durbin-Watson tests were used to evaluate autocorrelation, and a local regression (LOESS) was used to identify visual shifts in the regression slope. Next, piecewise regression was used to fit linear regression models before and after the estimated breakpoints. Multiple starting parameters were tested in the range determined to contain the breakpoint, and the model with the smallest mean squared error was chosen as the best fit. The parameters from the best-fit models were then used to derive the piecewise models, which were compared with the initial simple linear regression models to determine which model provided the best fit for the secular change data. The epiphyseal union data indicate a decline in the age at onset of fusion since the early twentieth century. Fusion commences approximately four years earlier in mid- to late twentieth-century birth cohorts than in late nineteenth- and early twentieth-century birth cohorts. However, fusion is completed at roughly the same age across cohorts. The most significant decline in age at onset of epiphyseal union appears to have occurred since the mid-twentieth century. LOESS plots show a breakpoint in the clavicle length data around the mid-twentieth century in both sexes, and piecewise regression models indicate a significant decrease in clavicle length in the American population after 1940. The piecewise model provides a slightly better fit than the simple linear model. Since the model standard error is not substantially different from the piecewise model, an argument could be made to select the less complex linear model. However, we chose the piecewise model to detect changes in clavicle length that are overfitted with a linear model. The decrease in maximum clavicle length is in line with a documented narrowing of the American skeletal form, as shown by analyses of cranial and facial breadth and bi-iliac breadth of the pelvis. Environmental influences on skeletal form include increases in body mass index, health improvements, improved socioeconomic status, and elimination of infectious diseases. Secular changes in bony dimensions and skeletal maturation stipulate that medical and forensic standards used to deduce information about growth, health, and biological traits must be derived from modern populations.
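
    The breakpoint search described above can be sketched as a grid search: for each candidate break year, fit separate lines before and after, keep the break with the smallest total squared error, and compare it against a single straight-line fit. The cohort data below are simulated, not the clavicle measurements.

```python
# Sketch of a piecewise (two-segment) regression with a grid search for the break.
import numpy as np

rng = np.random.default_rng(9)
year = np.arange(1840, 1981)
length = np.where(year < 1940, 150 + 0.02 * (year - 1840),
                  152 - 0.05 * (year - 1940)) + rng.normal(0, 0.8, year.size)

def sse(x, y):
    coef = np.polyfit(x, y, 1)
    return np.sum((y - np.polyval(coef, x)) ** 2)

breaks = year[10:-10]                              # keep a few points on each side
total = [sse(year[year <= b], length[year <= b])
         + sse(year[year > b], length[year > b]) for b in breaks]
best = breaks[int(np.argmin(total))]
print(best, min(total), sse(year, length))         # breakpoint fit vs single-line fit
```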

  4. Food adulteration analysis without laboratory prepared or determined reference food adulterant values.

    PubMed

    Kalivas, John H; Georgiou, Constantinos A; Moira, Marianna; Tsafaras, Ilias; Petrakis, Eleftherios A; Mousdis, George A

    2014-04-01

    Quantitative analysis of food adulterants is an important health and economic issue that needs to be fast and simple. Spectroscopy has significantly reduced analysis time. However, still needed are preparations of analyte calibration samples matrix-matched to prediction samples, which can be laborious and costly. Reported in this paper is the application of a newly developed pure component Tikhonov regularization (PCTR) process that does not require laboratory-prepared calibration samples or reference-determined adulterant values, and hence is a greener calibration method. The PCTR method requires an analyte pure component spectrum and non-analyte spectra. As a food analysis example, synchronous fluorescence spectra of extra virgin olive oil samples adulterated with sunflower oil are used. Results are shown to be better than those obtained using ridge regression with reference calibration samples. The flexibility of PCTR allows the inclusion of reference samples and is generic for use with other instrumental methods and food products. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. A simple method for the extraction and identification of light density microplastics from soil.

    PubMed

    Zhang, Shaoliang; Yang, Xiaomei; Gertsen, Hennie; Peters, Piet; Salánki, Tamás; Geissen, Violette

    2018-03-01

    This article introduces a simple and cost-saving method developed to extract, distinguish and quantify light density microplastics of polyethylene (PE) and polypropylene (PP) in soil. A floatation method using distilled water was used to extract the light density microplastics from soil samples. Microplastics and impurities were identified using a heating method (3-5 s at 130 °C). The number and size of particles were determined using a camera (Leica DFC 425) connected to a microscope (Leica wild M3C, Type S, simple light, 6.4×). Quantification of the microplastics was conducted using a developed model. Results showed that the floatation method was effective in extracting microplastics from soils, with recovery rates of approximately 90%. After being exposed to heat, the microplastics in the soil samples melted and were transformed into circular transparent particles, while other impurities, such as organic matter and silicates, were not changed by the heat. Regression analysis of microplastic weight and particle volume (a calculation based on ImageJ software analysis) after heating showed the best fit (y = 1.14x + 0.46, R² = 99%, p < 0.001). Recovery rates based on the empirical model method were >80%. Results from field samples collected from North-western China prove that our method of repetitive floatation and heating can be used to extract, distinguish and quantify light density polyethylene microplastics in soils. Microplastic mass can be evaluated using the empirical model. Copyright © 2017 Elsevier B.V. All rights reserved.
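    The empirical model above relates microplastic mass to melted-particle volume from image analysis. A minimal sketch of such a calibration, with made-up volume/weight pairs, follows; the fitted coefficients will not match the paper's y = 1.14x + 0.46 unless the same data are used.

    ```python
    import numpy as np

    # Hypothetical calibration pairs: particle volume from image analysis (x) and weighed mass (y).
    volume = np.array([0.5, 1.0, 2.0, 3.5, 5.0, 7.5, 10.0])
    weight = np.array([1.0, 1.6, 2.8, 4.4, 6.1, 9.0, 11.9])

    slope, intercept = np.polyfit(volume, weight, 1)     # simple linear calibration
    predicted = slope * volume + intercept
    r2 = 1 - np.sum((weight - predicted) ** 2) / np.sum((weight - weight.mean()) ** 2)

    # Recovery rate for a spiked sample: predicted mass relative to the known spiked mass.
    known_mass, measured_volume = 5.0, 3.9
    recovery = (slope * measured_volume + intercept) / known_mass * 100
    print(f"fit: y = {slope:.2f}x + {intercept:.2f}, R^2 = {r2:.3f}, recovery = {recovery:.0f}%")
    ```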

  6. Sense of coherence modifies the effect of overtime work on mental health.

    PubMed

    Ohta, Masanori; Higuchi, Yoshiyuki; Yamato, Hiroshi; Kumashiro, Masaharu; Sugimura, Hisamichi

    2015-01-01

    In the occupational health field, it is important to know how workload influences mental health. Overtime work and job strain appear to affect the mental health status of workers. Sense of coherence (SOC) may mediate the relationship between work stress and mental health. Since SOC represents a personal ability to manage psychological stressors, we hypothesized that a strong SOC would modify the adverse effect of an objective measure of overtime work on mental health. A total of 1,558 Japanese workers employed in an information technology company were asked to complete a 3-item SOC Questionnaire and 28-item General Health Questionnaire (GHQ) to assess mental health status. Workload was assessed by the actual amount of overtime work hours recorded by the company. Multiple regression analysis revealed a main effect of overtime work (β=0.08, p=0.0003) and SOC scores (β=0.41, p <0.0001) on GHQ scores. There was a tendency toward interaction between overtime work and SOC scores (β=0.05, p=0.051). Simple slope analysis supported this association (-1 SD below the mean, simple slope=0.04, SE=0.01, p < 0.0001; +1 SD above the mean, simple slope=0.01, SE=0.01, p=0.188). These results suggest that SOC buffers the mental health impacts of workload as measured by an objective index of overtime work, and should be considered when assessing the effects of workload on mental health.
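    The moderation analysis above enters an overtime × SOC interaction term and then probes simple slopes of overtime on GHQ at SOC values one standard deviation below and above the mean. A generic sketch of that procedure on simulated, standardized data follows; the coefficients used to generate the data are arbitrary illustrative values, not the study's estimates.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 1558
    overtime = rng.normal(0, 1, n)      # standardized overtime hours
    soc = rng.normal(0, 1, n)           # standardized sense of coherence
    # Arbitrary coefficients; the negative interaction makes SOC act as a buffer in this toy data.
    ghq = 0.08 * overtime + 0.41 * soc - 0.03 * overtime * soc + rng.normal(0, 1, n)

    # Regression with interaction term: GHQ ~ overtime + SOC + overtime:SOC.
    X = np.column_stack([np.ones(n), overtime, soc, overtime * soc])
    b, *_ = np.linalg.lstsq(X, ghq, rcond=None)
    b0, b_ot, b_soc, b_int = b

    # Simple slope of overtime at SOC one SD below/above the mean (0 -/+ 1 on the standardized scale).
    for soc_level in (-1.0, +1.0):
        slope = b_ot + b_int * soc_level
        print(f"simple slope of overtime at SOC = {soc_level:+.0f} SD: {slope:.3f}")
    ```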

  7. Parsimonious nonstationary flood frequency analysis

    NASA Astrophysics Data System (ADS)

    Serago, Jake M.; Vogel, Richard M.

    2018-02-01

    There is now widespread awareness of the impact of anthropogenic influences on extreme floods (and droughts) and thus an increasing need for methods to account for such influences when estimating a frequency distribution. We introduce a parsimonious approach to nonstationary flood frequency analysis (NFFA) based on a bivariate regression equation which describes the relationship between annual maximum floods, x, and an exogenous variable which may explain the nonstationary behavior of x. The conditional mean, variance and skewness of both x and y = ln (x) are derived, and combined with numerous common probability distributions including the lognormal, generalized extreme value and log Pearson type III models, resulting in a very simple and general approach to NFFA. Our approach offers several advantages over existing approaches including: parsimony, ease of use, graphical display, prediction intervals, and opportunities for uncertainty analysis. We introduce nonstationary probability plots and document how such plots can be used to assess the improved goodness of fit associated with a NFFA.

  8. Length of Residence and Vehicle Ownership in Relation to Physical Activity Among U.S. Immigrants.

    PubMed

    Terasaki, Dale; Ornelas, India; Saelens, Brian

    2017-04-01

    Physical activity among U.S. immigrants over time is not well understood. Transportation may affect this trajectory. Using a survey of documented immigrants (N = 7240), we performed simple, then multivariable logistic regression to calculate ORs and 95 % CIs between length of residence (LOR) and both light-to-moderate (LPA) and vigorous (VPA) activity. We adjusted for demographic variables, then vehicle ownership to assess changes in ORs. Compared to new arrivals, all four LOR time-intervals were associated with lower odds of LPA and higher odds of VPA in simple analysis. All ORs for LPA remained significant after including demographics, but only one remained significant after adding vehicle ownership. Two ORs for VPA remained significant after including demographics and after adding vehicle ownership. Immigrants lower their light-to-moderate activity the longer they reside in the U.S., partly from substituting driving for walking. Efforts to maintain walking for transportation among immigrants are warranted.

  9. Analysis of the correlative factors for velopharyngeal closure of patients with cleft palate after primary repair.

    PubMed

    Chen, Qi; Li, Yang; Shi, Bing; Yin, Heng; Zheng, Guang-Ning; Zheng, Qian

    2013-12-01

    The objective of this study was to analyze the correlative factors for velopharyngeal closure of patients with cleft palate after primary repair. Ninety-five nonsyndromic patients with cleft palate were enrolled. Two surgical techniques were applied: simple palatoplasty and combined palatoplasty with pharyngoplasty. All patients were assessed 6 months after the operation. The postoperative velopharyngeal closure (VPC) rate was compared by χ² test and the correlative factors were analyzed with a logistic regression model. The postoperative VPC rate of young patients was higher than that of old patients, the rate in the group with incomplete cleft palate was higher than in the group with complete cleft palate, and the rate with combined palatoplasty with pharyngoplasty was higher than with simple palatoplasty. Operative age, cleft type, and surgical technique were the contributing factors for postoperative VPC rate. Operative age, cleft type, and surgical technique were significant factors influencing the postoperative VPC rate of patients with cleft palate. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. A Highly Efficient Design Strategy for Regression with Outcome Pooling

    PubMed Central

    Mitchell, Emily M.; Lyles, Robert H.; Manatunga, Amita K.; Perkins, Neil J.; Schisterman, Enrique F.

    2014-01-01

    The potential for research involving biospecimens can be hindered by the prohibitive cost of performing laboratory assays on individual samples. To mitigate this cost, strategies such as randomly selecting a portion of specimens for analysis or randomly pooling specimens prior to performing laboratory assays may be employed. These techniques, while effective in reducing cost, are often accompanied by a considerable loss of statistical efficiency. We propose a novel pooling strategy based on the k-means clustering algorithm to reduce laboratory costs while maintaining a high level of statistical efficiency when predictor variables are measured on all subjects, but the outcome of interest is assessed in pools. We perform simulations motivated by the BioCycle study to compare this k-means pooling strategy with current pooling and selection techniques under simple and multiple linear regression models. While all of the methods considered produce unbiased estimates and confidence intervals with appropriate coverage, pooling under k-means clustering provides the most precise estimates, closely approximating results from the full data and losing minimal precision as the total number of pools decreases. The benefits of k-means clustering evident in the simulation study are then applied to an analysis of the BioCycle dataset. In conclusion, when the number of lab tests is limited by budget, pooling specimens based on k-means clustering prior to performing lab assays can be an effective way to save money with minimal information loss in a regression setting. PMID:25220822
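    A rough sketch of the pooling idea follows, under the assumption that predictors are measured on everyone while the outcome assay is run only on pools: specimens are clustered on their predictors with k-means, outcomes are averaged within each pool, and a linear regression is fitted to the pool-level data. This illustrates the general strategy only, not the authors' estimator or simulation design.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(3)
    n, n_pools = 300, 30
    x = rng.normal(0, 1, (n, 2))                                      # predictors measured on all subjects
    y = 1.0 + 0.5 * x[:, 0] - 0.3 * x[:, 1] + rng.normal(0, 1, n)     # outcome, assayed only in pools

    # Form pools by clustering subjects on their predictor values.
    labels = KMeans(n_clusters=n_pools, n_init=10, random_state=0).fit_predict(x)

    # Pool-level data: mean predictors and mean outcome per pool (one assay per pool).
    pool_x = np.vstack([x[labels == k].mean(axis=0) for k in range(n_pools)])
    pool_y = np.array([y[labels == k].mean() for k in range(n_pools)])

    # Linear regression on the pooled data.
    X = np.column_stack([np.ones(n_pools), pool_x])
    beta, *_ = np.linalg.lstsq(X, pool_y, rcond=None)
    print("pooled-data estimates (intercept, b1, b2):", beta.round(3))
    ```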

  11. A highly efficient design strategy for regression with outcome pooling.

    PubMed

    Mitchell, Emily M; Lyles, Robert H; Manatunga, Amita K; Perkins, Neil J; Schisterman, Enrique F

    2014-12-10

    The potential for research involving biospecimens can be hindered by the prohibitive cost of performing laboratory assays on individual samples. To mitigate this cost, strategies such as randomly selecting a portion of specimens for analysis or randomly pooling specimens prior to performing laboratory assays may be employed. These techniques, while effective in reducing cost, are often accompanied by a considerable loss of statistical efficiency. We propose a novel pooling strategy based on the k-means clustering algorithm to reduce laboratory costs while maintaining a high level of statistical efficiency when predictor variables are measured on all subjects, but the outcome of interest is assessed in pools. We perform simulations motivated by the BioCycle study to compare this k-means pooling strategy with current pooling and selection techniques under simple and multiple linear regression models. While all of the methods considered produce unbiased estimates and confidence intervals with appropriate coverage, pooling under k-means clustering provides the most precise estimates, closely approximating results from the full data and losing minimal precision as the total number of pools decreases. The benefits of k-means clustering evident in the simulation study are then applied to an analysis of the BioCycle dataset. In conclusion, when the number of lab tests is limited by budget, pooling specimens based on k-means clustering prior to performing lab assays can be an effective way to save money with minimal information loss in a regression setting. Copyright © 2014 John Wiley & Sons, Ltd.

  12. Isovolumic relaxation period as an index of left ventricular relaxation under different afterload conditions--comparison with the time constant of left ventricular pressure decay in the dog.

    PubMed

    Ochi, H; Ikuma, I; Toda, H; Shimada, T; Morioka, S; Moriyama, K

    1989-12-01

    In order to determine whether isovolumic relaxation period (IRP) reflects left ventricular relaxation under different afterload conditions, 17 anesthetized, open chest dogs were studied, and the left ventricular pressure decay time constant (T) was calculated. In 12 dogs, angiotensin II and nitroprusside were administered, with the heart rate constant at 90 beats/min. Multiple linear regression analysis showed that the aortic dicrotic notch pressure (AoDNP) and T were major determinants of IRP, while left ventricular end-diastolic pressure was a minor determinant. Multiple linear regression analysis, correlating T with IRP and AoDNP, did not further improve the correlation coefficient compared with that between T and IRP. We concluded that correction of the IRP by AoDNP is not necessary to predict T from additional multiple linear regression. The effects of ascending aortic constriction or angiotensin II on IRP were examined in five dogs, after pretreatment with propranolol. Aortic constriction caused a significant decrease in IRP and T, while angiotensin II produced a significant increase in IRP and T. IRP was affected by the change of afterload. However, the IRP and T values were always altered in the same direction. These results demonstrate that IRP is substituted for T and it reflects left ventricular relaxation even in different afterload conditions. We conclude that IRP is a simple parameter easily used to evaluate left ventricular relaxation in clinical situations.

  13. Improving Consensus Scoring of Crowdsourced Data Using the Rasch Model: Development and Refinement of a Diagnostic Instrument.

    PubMed

    Brady, Christopher John; Mudie, Lucy Iluka; Wang, Xueyang; Guallar, Eliseo; Friedman, David Steven

    2017-06-20

    Diabetic retinopathy (DR) is a leading cause of vision loss in working age individuals worldwide. While screening is effective and cost effective, it remains underutilized, and novel methods are needed to increase detection of DR. This clinical validation study compared diagnostic gradings of retinal fundus photographs provided by volunteers on the Amazon Mechanical Turk (AMT) crowdsourcing marketplace with expert-provided gold-standard grading and explored whether determination of the consensus of crowdsourced classifications could be improved beyond a simple majority vote (MV) using regression methods. The aim of our study was to determine whether regression methods could be used to improve the consensus grading of data collected by crowdsourcing. A total of 1200 retinal images of individuals with diabetes mellitus from the Messidor public dataset were posted to AMT. Eligible crowdsourcing workers had at least 500 previously approved tasks with an approval rating of 99% across their prior submitted work. A total of 10 workers were recruited to classify each image as normal or abnormal. If half or more workers judged the image to be abnormal, the MV consensus grade was recorded as abnormal. Rasch analysis was then used to calculate worker ability scores in a random 50% training set, which were then used as weights in a regression model in the remaining 50% test set to determine if a more accurate consensus could be devised. Outcomes of interest were the percent correctly classified images, sensitivity, specificity, and area under the receiver operating characteristic (AUROC) for the consensus grade as compared with the expert grading provided with the dataset. Using MV grading, the consensus was correct in 75.5% of images (906/1200), with 75.5% sensitivity, 75.5% specificity, and an AUROC of 0.75 (95% CI 0.73-0.78). A logistic regression model using Rasch-weighted individual scores generated an AUROC of 0.91 (95% CI 0.88-0.93) compared with 0.89 (95% CI 0.86-0.92) for a model using unweighted scores (chi-square P value<.001). Setting a diagnostic cut-point to optimize sensitivity at 90%, 77.5% (465/600) were graded correctly, with 90.3% sensitivity, 68.5% specificity, and an AUROC of 0.79 (95% CI 0.76-0.83). Crowdsourced interpretations of retinal images provide rapid and accurate results as compared with a gold-standard grading. Creating a logistic regression model using Rasch analysis to weight crowdsourced classifications by worker ability improves accuracy of aggregated grades as compared with simple majority vote. ©Christopher John Brady, Lucy Iluka Mudie, Xueyang Wang, Eliseo Guallar, David Steven Friedman. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 20.06.2017.
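    A simplified sketch of the aggregation step: each image receives 10 binary gradings, and a majority-vote consensus is compared with a consensus from a logistic regression on the individual workers' gradings, whose fitted coefficients act as per-worker weights. The Rasch ability estimation itself is not implemented here; worker quality is simply simulated, so this only illustrates the weighting idea, not the study's pipeline.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(4)
    n_images, n_workers = 400, 10
    truth = rng.integers(0, 2, n_images)            # expert gold-standard label (0 normal, 1 abnormal)
    ability = rng.uniform(0.6, 0.95, n_workers)     # simulated per-worker accuracy

    # Each worker grades each image correctly with probability equal to their ability.
    correct = rng.random((n_images, n_workers)) < ability
    votes = np.where(correct, truth[:, None], 1 - truth[:, None])

    # Majority vote: abnormal if half or more of the 10 workers call it abnormal.
    mv = (votes.mean(axis=1) >= 0.5).astype(int)

    # Regression-based consensus: fit on a 50% training split; the fitted coefficients act as
    # per-worker weights (a stand-in for the Rasch-derived ability weights in the study).
    half = n_images // 2
    clf = LogisticRegression().fit(votes[:half], truth[:half])
    consensus = clf.predict(votes[half:])

    print("majority-vote accuracy (test half):", (mv[half:] == truth[half:]).mean().round(3))
    print("weighted-regression accuracy (test half):", (consensus == truth[half:]).mean().round(3))
    ```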

  14. Efficient Determination of Free Energy Landscapes in Multiple Dimensions from Biased Umbrella Sampling Simulations Using Linear Regression.

    PubMed

    Meng, Yilin; Roux, Benoît

    2015-08-11

    The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of state is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimension. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost.

  15. Efficient Determination of Free Energy Landscapes in Multiple Dimensions from Biased Umbrella Sampling Simulations Using Linear Regression

    PubMed Central

    2015-01-01

    The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of state is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimension. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost. PMID:26574437

  16. Regression analysis of sparse asynchronous longitudinal data

    PubMed Central

    Cao, Hongyuan; Zeng, Donglin; Fine, Jason P.

    2015-01-01

    Summary We consider estimation of regression models for sparse asynchronous longitudinal observations, where time-dependent responses and covariates are observed intermittently within subjects. Unlike with synchronous data, where the response and covariates are observed at the same time point, with asynchronous data, the observation times are mismatched. Simple kernel-weighted estimating equations are proposed for generalized linear models with either time invariant or time-dependent coefficients under smoothness assumptions for the covariate processes which are similar to those for synchronous data. For models with either time invariant or time-dependent coefficients, the estimators are consistent and asymptotically normal but converge at slower rates than those achieved with synchronous data. Simulation studies evidence that the methods perform well with realistic sample sizes and may be superior to a naive application of methods for synchronous data based on an ad hoc last value carried forward approach. The practical utility of the methods is illustrated on data from a study on human immunodeficiency virus. PMID:26568699

  17. Associations between Verbal Learning Slope and Neuroimaging Markers across the Cognitive Aging Spectrum.

    PubMed

    Gifford, Katherine A; Phillips, Jeffrey S; Samuels, Lauren R; Lane, Elizabeth M; Bell, Susan P; Liu, Dandan; Hohman, Timothy J; Romano, Raymond R; Fritzsche, Laura R; Lu, Zengqi; Jefferson, Angela L

    2015-07-01

    A symptom of mild cognitive impairment (MCI) and Alzheimer's disease (AD) is a flat learning profile. Learning slope calculation methods vary, and the optimal method for capturing neuroanatomical changes associated with MCI and early AD pathology is unclear. This study cross-sectionally compared four different learning slope measures from the Rey Auditory Verbal Learning Test (simple slope, regression-based slope, two-slope method, peak slope) to structural neuroimaging markers of early AD neurodegeneration (hippocampal volume, cortical thickness in parahippocampal gyrus, precuneus, and lateral prefrontal cortex) across the cognitive aging spectrum [normal control (NC); (n=198; age=76±5), MCI (n=370; age=75±7), and AD (n=171; age=76±7)] in ADNI. Within diagnostic group, general linear models related slope methods individually to neuroimaging variables, adjusting for age, sex, education, and APOE4 status. Among MCI, better learning performance on simple slope, regression-based slope, and late slope (Trial 2-5) from the two-slope method related to larger parahippocampal thickness (all p-values<.01) and hippocampal volume (p<.01). Better regression-based slope (p<.01) and late slope (p<.01) were related to larger ventrolateral prefrontal cortex in MCI. No significant associations emerged between any slope and neuroimaging variables for NC (p-values ≥.05) or AD (p-values ≥.02). Better learning performances related to larger medial temporal lobe (i.e., hippocampal volume, parahippocampal gyrus thickness) and ventrolateral prefrontal cortex in MCI only. Regression-based and late slope were most highly correlated with neuroimaging markers and explained more variance above and beyond other common memory indices, such as total learning. Simple slope may offer an acceptable alternative given its ease of calculation.
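    The four slope definitions compared above can be made concrete on a single set of trial scores from a five-trial list-learning task. The sketch below, using made-up trial scores, computes a simple slope, a regression-based slope, the two-slope method (early Trial 1-2 and late Trial 2-5), and the peak slope; the exact definitions may differ slightly from those used in the study.

    ```python
    import numpy as np

    scores = np.array([5.0, 8.0, 10.0, 11.0, 13.0])   # hypothetical words recalled on Trials 1-5
    trials = np.arange(1, 6)

    # Simple slope: (last trial - first trial) / number of intervening intervals.
    simple_slope = (scores[-1] - scores[0]) / (len(scores) - 1)

    # Regression-based slope: OLS slope of score on trial number.
    regression_slope = np.polyfit(trials, scores, 1)[0]

    # Two-slope method: early slope (Trial 1 to 2) and late slope (Trial 2 to 5).
    early_slope = scores[1] - scores[0]
    late_slope = (scores[4] - scores[1]) / 3

    # Peak slope: largest gain between consecutive trials.
    peak_slope = np.max(np.diff(scores))

    print(simple_slope, regression_slope, early_slope, late_slope, peak_slope)
    ```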

  18. Cognitive-Based Interventions to Improve Mobility: A Systematic Review and Meta-analysis.

    PubMed

    Marusic, Uros; Verghese, Joe; Mahoney, Jeannette R

    2018-06-01

    A strong relation between cognition and mobility has been identified in aging, supporting a role for enhancing mobility through cognitive-based interventions. However, a critical evaluation of the consistency of treatment effects of cognitive-based interventions is currently lacking. The objective of this study was 2-fold: (1) to review the existing literature on cognitive-based interventions aimed at improving mobility in older adults and (2) to assess the clinical effectiveness of cognitive interventions on gait performance. A systematic review of randomized controlled trials (RCT) of cognitive training interventions for improving simple (normal walking) and complex (dual task walking) gait was conducted in February 2018. Older adults without major cognitive, psychiatric, neurologic, and/or sensory impairments were included. Random effect meta-analyses and a subsequent meta-regression were performed to generate overall cognitive intervention effects on single- and dual-task walking conditions. Ten RCTs met inclusion criteria, with a total of 351 participants included in this meta-analysis. Cognitive training interventions revealed a small effect of intervention on complex gait [effect size (ES) = 0.47, 95% confidence interval (CI) 0.13 to 0.81, P = .007, I² = 15.85%], but not simple gait (ES = 0.35, 95% CI -0.01 to 0.71, P = .057, I² = 57.32%). Moreover, a meta-regression analysis revealed that intervention duration, training frequency, total number of sessions, and total minutes spent in intervention were not significant predictors of improvement in dual-task walking speed, though there was a suggestive trend toward a negative association between dual-task walking speed improvements and individual training session duration (P = .067). This meta-analysis provides support for the fact that cognitive training interventions can improve mobility-related outcomes, especially during challenging walking conditions requiring higher-order executive functions. Additional evidence from well-designed large-scale randomized clinical trials is warranted to confirm the observed effects. Copyright © 2018 AMDA – The Society for Post-Acute and Long-Term Care Medicine. All rights reserved.

  19. Simple and rapid analytical method for detection of amino acids in blood using blood spot on filter paper, fast-GC/MS and isotope dilution technique.

    PubMed

    Kawana, Shuichi; Nakagawa, Katsuhiro; Hasegawa, Yuki; Yamaguchi, Seiji

    2010-11-15

    A simple and rapid method for quantitative analysis of amino acids, including valine (Val), leucine (Leu), isoleucine (Ile), methionine (Met) and phenylalanine (Phe), in whole blood has been developed using GC/MS. In this method, whole blood was collected using a filter paper technique, and a 1/8 in. blood spot punch was used for sample preparation. Amino acids were extracted from the sample, and the extracts were purified using cation-exchange resins. The isotope dilution method using ²H₈-Val, ²H₃-Leu, ²H₃-Met and ²H₅-Phe as internal standards was applied. Following propyl chloroformate derivatization, the derivatives were analyzed using fast-GC/MS. The extraction recoveries using these techniques ranged from 69.8% to 87.9%, and analysis time for each sample was approximately 26 min. Calibration curves at concentrations from 0.0 to 1666.7 μmol/l for Val, Leu, Ile and Phe and from 0.0 to 333.3 μmol/l for Met showed good linearity with regression coefficients=1. The method detection limits for Val, Leu, Ile, Met and Phe were 24.2, 16.7, 8.7, 1.5 and 12.9 μmol/l, respectively. This method was applied to blood spot samples obtained from patients with phenylketonuria (PKU), maple syrup urine disease (MSUD), hypermethionine and neonatal intrahepatic cholestasis caused by citrin deficiency (NICCD), and the analysis results showed that the concentrations of amino acids that characterize these diseases were increased. These results indicate that this method provides a simple and rapid procedure for precise determination of amino acids in whole blood. Copyright © 2010 Elsevier B.V. All rights reserved.

  20. Sonographic Measurement of Fetal Ear Length in Turkish Women with a Normal Pregnancy

    PubMed Central

    Özdemir, Mucize Eriç; Uzun, Işıl; Karahasanoğlu, Ayşe; Aygün, Mehmet; Akın, Hale; Yazıcıoğlu, Fehmi

    2014-01-01

    Background: Abnormal fetal ear length is a feature of chromosomal disorders. Fetal ear length measurement is a simple measurement that can be obtained during ultrasonographic examinations. Aims: To develop a nomogram for fetal ear length measurements in our population and investigate the correlation between fetal ear length, gestational age, and other standard fetal biometric measurements. Study Design: Cohort study. Methods: Ear lengths of the fetuses were measured in normal singleton pregnancies. The relationship between gestational age and fetal ear length in millimetres was analysed by simple linear regression. In addition, the correlation of fetal ear length measurements with biparietal diameter, head circumference, abdominal circumference, and femur length was evaluated. Ear length measurements were obtained from fetuses in 389 normal singleton pregnancies ranging between 16 and 28 weeks of gestation. Results: A nomogram was developed by linear regression analysis of the parameters ear length and gestational age: fetal ear length (mm) = (1.348 × gestational age) − 12.265, where gestational age is in weeks. A high correlation was found between fetal ear length and gestational age, and a significant correlation was also found between fetal ear length and the biparietal diameter (r=0.962; p<0.001). Similar correlations were found between fetal ear length and head circumference, and fetal ear length and femur length. Conclusion: The results of this study provide a nomogram for fetal ear length. The study also demonstrates the relationship between ear length and other biometric measurements. PMID:25667783

  1. Behavioral activation: Is it the expectation or achievement, of mastery or pleasure that contributes to improvement in depression?

    PubMed

    Furukawa, Toshi A; Imai, Hissei; Horikoshi, Masaru; Shimodera, Shinji; Hiroe, Takahiro; Funayama, Tadashi; Akechi, Tatsuo

    2018-06-06

    Behavioral activation (BA) is receiving renewed interest as a stand-alone or as a component of cognitive-behavior therapy (CBT) for depression. However, few studies have examined which aspects of BA are most contributory to its efficacy. This is a secondary analysis of a 9-week randomized controlled trial of smartphone CBT for patients with major depression. Depression severity was measured at baseline and at end of treatment by the Patient Health Questionnaire-9. All aspects of behavioral activation tasks that the participants had engaged in, including their expected mastery and pleasure and obtained mastery and pleasure, were recorded in the web server. We examined their contribution to improvement in depression as simple correlations and in stepwise multivariable linear regression. Among the 78 patients who completed at least one behavioral experiment, all aspects of expected or achieved mastery or pleasure correlated with change in depression severity. Discrepancy between the expectation and achievement, representing unexpected gain in mastery or pleasure, was not correlated. In stepwise regression, expected mastery and pleasure, especially the maximum level of the latter, emerged as the strongest contributing factors. The study is observational and cannot deduce cause-effect relationships. It may be the expected and continued sense of pleasure in planning activities that are most meaningful and rewarding to individuals, and not the simple level or amount of obtained pleasure, that contributes to the efficacy of BA. Copyright © 2018. Published by Elsevier B.V.

  2. Effect of walking velocity on ground reaction force variables in the hind limb of clinically normal horses.

    PubMed

    Khumsap, S; Clayton, H M; Lanovaz, J L

    2001-06-01

    To measure the effect of subject velocity on hind limb ground reaction force variables at the walk and to use the data to predict the force variables at different walking velocities in horses. 5 clinically normal horses. Kinematic and force data were collected simultaneously. Each horse was led over a force plate at a range of walking velocities. Stance duration and force data were recorded for the right hind limb. To avoid the effect of horse size on the outcome variables, the 8 force variables were standardized to body mass and height at the shoulders. Velocity was standardized to height at the shoulders and expressed as velocity in dimensionless units (VDU). Stance duration was also expressed in dimensionless units (SDU). Simple regression analysis was performed, using stance duration and force variables as dependent variables and VDU as the independent variable. Fifty-six trials were recorded with velocities ranging from 0.24 to 0.45 VDU (0.90 to 1.72 m/s). Simple regression models between measured variables and VDU were significant (R2 > 0.69) for SDU, first peak of vertical force, dip between the 2 vertical force peaks, vertical impulse, and timing of second peak of vertical force. Subject velocity affects vertical force components only. In the future, differences between the forces measured in lame horses and the expected forces calculated for the same velocity will be studied to determine whether the equations can be used as diagnostic criteria.

  3. Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.

    PubMed

    Chatzis, Sotirios P; Andreou, Andreas S

    2015-11-01

    Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made the Bayesian approaches appear unattractive, and thus underdeveloped in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using the publicly available benchmark data sets.

  4. Explaining the heterogeneous scrapie surveillance figures across Europe: a meta-regression approach.

    PubMed

    Del Rio Vilas, Victor J; Hopp, Petter; Nunes, Telmo; Ru, Giuseppe; Sivam, Kumar; Ortiz-Pelaez, Angel

    2007-06-28

    Two annual surveys, the abattoir and the fallen stock, monitor the presence of scrapie across Europe. A simple comparison between the prevalence estimates in different countries reveals that, in 2003, the abattoir survey appears to detect more scrapie in some countries. This is contrary to evidence suggesting the greater ability of the fallen stock survey to detect the disease. We applied meta-analysis techniques to study this apparent heterogeneity in the behaviour of the surveys across Europe. Furthermore, we conducted a meta-regression analysis to assess the effect of country-specific characteristics on the variability. We have chosen the odds ratios between the two surveys to inform the underlying relationship between them and to allow comparisons between the countries under the meta-regression framework. Baseline risks, those of the slaughtered populations across Europe, and country-specific covariates, available from the European Commission Report, were inputted in the model to explain the heterogeneity. Our results show the presence of significant heterogeneity in the odds ratios between countries and no reduction in the variability after adjustment for the different risks in the baseline populations. Three countries contributed the most to the overall heterogeneity: Germany, Ireland and The Netherlands. The inclusion of country-specific covariates did not, in general, reduce the variability except for one variable: the proportion of the total adult sheep population sampled as fallen stock by each country. A large residual heterogeneity remained in the model indicating the presence of substantial effect variability between countries. The meta-analysis approach was useful to assess the level of heterogeneity in the implementation of the surveys and to explore the reasons for the variation between countries.

  5. Non-destructive evaluation of chlorophyll content in quinoa and amaranth leaves by simple and multiple regression analysis of RGB image components.

    PubMed

    Riccardi, M; Mele, G; Pulvento, C; Lavini, A; d'Andria, R; Jacobsen, S-E

    2014-06-01

    Leaf chlorophyll content provides valuable information about physiological status of plants; it is directly linked to photosynthetic potential and primary production. In vitro assessment by wet chemical extraction is the standard method for leaf chlorophyll determination. This measurement is expensive, laborious, and time consuming. Over the years alternative methods, rapid and non-destructive, have been explored. The aim of this work was to evaluate the applicability of a fast and non-invasive field method for estimation of chlorophyll content in quinoa and amaranth leaves based on RGB components analysis of digital images acquired with a standard SLR camera. Digital images of leaves from different genotypes of quinoa and amaranth were acquired directly in the field. Mean values of each RGB component were evaluated via image analysis software and correlated to leaf chlorophyll provided by standard laboratory procedure. Single and multiple regression models using RGB color components as independent variables have been tested and validated. The performance of the proposed method was compared to that of the widely used non-destructive SPAD method. Sensitivity of the best regression models for different genotypes of quinoa and amaranth was also checked. Color data acquisition of the leaves in the field with a digital camera was quick, more effective, and lower cost than SPAD. The proposed RGB models provided better correlation (highest R²) and prediction (lowest RMSEP) of the true value of foliar chlorophyll content and had a lower amount of noise in the whole range of chlorophyll studied compared with SPAD and other leaf image processing based models when applied to quinoa and amaranth.
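    The core of the method above is extracting mean R, G, and B values from a leaf image and regressing laboratory-measured chlorophyll on them. A minimal sketch with synthetic leaf images and chlorophyll readings follows; the study's actual calibration data and model forms are not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def mean_rgb(image):
        """Mean of each color channel over a leaf image of shape (height, width, 3)."""
        return image.reshape(-1, 3).mean(axis=0)

    # Synthetic "leaf images" and their laboratory chlorophyll values (illustrative only).
    n_leaves = 40
    images = [rng.integers(0, 256, (64, 64, 3)).astype(float) for _ in range(n_leaves)]
    rgb = np.array([mean_rgb(im) for im in images])
    chlorophyll = 50 - 0.1 * rgb[:, 0] + 0.05 * rgb[:, 1] - 0.15 * rgb[:, 2] + rng.normal(0, 1, n_leaves)

    # Multiple regression: chlorophyll ~ R + G + B.
    X = np.column_stack([np.ones(n_leaves), rgb])
    b, *_ = np.linalg.lstsq(X, chlorophyll, rcond=None)
    pred = X @ b
    rmsep = np.sqrt(np.mean((chlorophyll - pred) ** 2))
    print("coefficients (intercept, R, G, B):", b.round(3), "RMSEP:", rmsep.round(2))
    ```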

  6. Nailfold capillaroscopy for prediction of novel future severe organ involvement in systemic sclerosis.

    PubMed

    Smith, Vanessa; Riccieri, Valeria; Pizzorni, Carmen; Decuman, Saskia; Deschepper, Ellen; Bonroy, Carolien; Sulli, Alberto; Piette, Yves; De Keyser, Filip; Cutolo, Maurizio

    2013-12-01

    Assessment of associations of nailfold videocapillaroscopy (NVC) scleroderma (systemic sclerosis; SSc) patterns ("early," "active," and "late") with novel future severe clinical involvement in 2 independent cohorts. Sixty-six consecutive Belgian and 82 Italian patients with SSc underwent NVC at baseline. Images were blindly assessed and classified into normal, early, active, or late NVC pattern. Clinical evaluation was performed for 9 organ systems (general, peripheral vascular, skin, joint, muscle, gastrointestinal tract, lung, heart, and kidney) according to the Medsger disease severity scale (DSS) at baseline and in the future (18-24 months of followup). Severe clinical involvement was defined as category 2 to 4 per organ of the DSS. Logistic regression analysis (continuous NVC predictor variable) was performed. The OR to develop novel future severe organ involvement was stronger according to more severe NVC patterns and similar in both cohorts. In simple logistic regression analysis the OR in the Belgian/Italian cohort was 2.16 (95% CI 1.19-4.47, p = 0.010)/2.33 (95% CI 1.36-4.22, p = 0.002) for the early NVC SSc pattern, 4.68/5.42 for the active pattern, and 10.14/12.63 for the late pattern versus the normal pattern. In multiple logistic regression analysis, adjusting for disease duration, subset, and vasoactive medication, the OR was 2.99 (95% CI 1.31-8.82, p = 0.007)/1.88 (95% CI 1.00-3.71, p = 0.050) for the early NVC SSc pattern, 8.93/3.54 for the active pattern, and 26.69/6.66 for the late pattern versus the normal pattern. Capillaroscopy may be predictive of novel future severe organ involvement in SSc, as attested by 2 independent cohorts.

  7. Plateletpheresis efficiency and mathematical correction of software-derived platelet yield prediction: A linear regression and ROC modeling approach.

    PubMed

    Jaime-Pérez, José Carlos; Jiménez-Castillo, Raúl Alberto; Vázquez-Hernández, Karina Elizabeth; Salazar-Riojas, Rosario; Méndez-Ramírez, Nereida; Gómez-Almaguer, David

    2017-10-01

    Advances in automated cell separators have improved the efficiency of plateletpheresis and the possibility of obtaining double products (DP). We assessed cell processor accuracy of predicted platelet (PLT) yields with the goal of a better prediction of DP collections. This retrospective proof-of-concept study included 302 plateletpheresis procedures performed on a Trima Accel v6.0 at the apheresis unit of a hematology department. Donor variables, software predicted yield and actual PLT yield were statistically evaluated. Software prediction was optimized by linear regression analysis and its optimal cut-off to obtain a DP assessed by receiver operating characteristic curve (ROC) modeling. Three hundred and two plateletpheresis procedures were performed; in 271 (89.7%) occasions, donors were men and in 31 (10.3%) women. Pre-donation PLT count had the best direct correlation with actual PLT yield (r = 0.486, P < .001). Means of software machine-derived values differed significantly from actual PLT yield, 4.72 × 10¹¹ vs. 6.12 × 10¹¹, respectively (P < .001). The following equation was developed to adjust these values: actual PLT yield = 0.221 + (1.254 × theoretical platelet yield). ROC curve model showed an optimal apheresis device software prediction cut-off of 4.65 × 10¹¹ to obtain a DP, with a sensitivity of 82.2%, specificity of 93.3%, and an area under the curve (AUC) of 0.909. Trima Accel v6.0 software consistently underestimated PLT yields. Simple correction derived from linear regression analysis accurately corrected this underestimation and ROC analysis identified a precise cut-off to reliably predict a DP. © 2016 Wiley Periodicals, Inc.
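    Two steps from the abstract lend themselves to a short sketch: applying the reported linear correction to the software-predicted yield, and choosing a double-product cut-off from an ROC curve. The correction coefficients below are the ones quoted in the abstract; the yields and DP labels are simulated, and the Youden's J criterion used to pick the cut-off is one common choice, not necessarily the paper's.

    ```python
    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    rng = np.random.default_rng(6)
    n = 302
    predicted_yield = rng.normal(4.7, 1.0, n)      # software-predicted PLT yield (x10^11), simulated

    # Correction reported in the abstract: actual yield = 0.221 + 1.254 x predicted yield.
    corrected_yield = 0.221 + 1.254 * predicted_yield

    # Simulated double-product (DP) outcome, more likely at higher corrected yields.
    dp = (corrected_yield + rng.normal(0, 0.8, n) > 6.0).astype(int)

    # ROC analysis of the software prediction; cut-off chosen here by Youden's J.
    fpr, tpr, thresholds = roc_curve(dp, predicted_yield)
    best = np.argmax(tpr - fpr)
    print("AUC:", roc_auc_score(dp, predicted_yield).round(3))
    print("cut-off:", thresholds[best].round(2),
          "sensitivity:", tpr[best].round(3), "specificity:", (1 - fpr[best]).round(3))
    ```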

  8. Association between cardiovascular risk factors and carotid intima-media thickness in prepubertal Brazilian children.

    PubMed

    Gazolla, Fernanda Mussi; Neves Bordallo, Maria Alice; Madeira, Isabel Rey; de Miranda Carvalho, Cecilia Noronha; Vieira Monteiro, Alexandra Maria; Pinheiro Rodrigues, Nádia Cristina; Borges, Marcos Antonio; Collett-Solberg, Paulo Ferrez; Muniz, Bruna Moreira; de Oliveira, Cecilia Lacroix; Pinheiro, Suellen Martins; de Queiroz Ribeiro, Rebeca Mathias

    2015-05-01

    Early exposure to cardiovascular risk factors creates a chronic inflammatory state that could damage the endothelium, followed by thickening of the carotid intima-media. To investigate the association of cardiovascular risk factors with thickening of the carotid intima-media in prepubertal children. In this cross-sectional study, carotid intima-media thickness (cIMT) and cardiovascular risk factors were assessed in 129 prepubertal children aged from 5 to 10 years. Association was assessed by simple and multivariate logistic regression analyses. In simple logistic regression analyses, body mass index (BMI) z-score, waist circumference, and systolic blood pressure (SBP) were positively associated with increased left, right, and average cIMT, whereas diastolic blood pressure was positively associated only with increased left and average cIMT (p<0.05). In multivariate logistic regression analyses, increased left cIMT was positively associated with BMI z-score and SBP, and increased average cIMT was positively associated only with SBP (p<0.05). BMI z-score and SBP were the strongest risk factors for increased cIMT.

  9. Intra-individual reaction time variability and all-cause mortality over 17 years: a community-based cohort study.

    PubMed

    Batterham, Philip J; Bunce, David; Mackinnon, Andrew J; Christensen, Helen

    2014-01-01

    Very few studies have examined the association between intra-individual reaction time variability and subsequent mortality. Furthermore, the ability of simple measures of variability to predict mortality has not been compared with more complex measures. In a prospective cohort study, 896 community-based Australian adults aged 70+ were interviewed up to four times from 1990 to 2002, with vital status assessed until June 2007. From this cohort, 770-790 participants were included in Cox proportional hazards regression models of survival. Vital status and time in study were used to conduct survival analyses. The mean reaction time and three measures of intra-individual reaction time variability were calculated separately across 20 trials of simple and choice reaction time tasks. Models were adjusted for a range of demographic, physical health and mental health measures. Greater intra-individual simple reaction time variability, as assessed by the raw standard deviation (raw SD), coefficient of variation (CV) or the intra-individual standard deviation (ISD), was strongly associated with an increased hazard of all-cause mortality in adjusted Cox regression models. The mean reaction time had no significant association with mortality. Intra-individual variability in simple reaction time appears to have a robust association with mortality over 17 years. Health professionals such as neuropsychologists may benefit in their detection of neuropathology by supplementing neuropsychiatric testing with the straightforward process of testing simple reaction time and calculating raw SD or CV.
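    The variability measures above are simple to compute from the 20 trials of a reaction-time task. The sketch below calculates the mean RT, raw standard deviation, and coefficient of variation for one participant on simulated data; the ISD, which in this literature removes systematic trial and practice effects estimated across the whole sample, is only roughly approximated here by detrending within one participant.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    rt = 300 + rng.normal(0, 35, 20)      # 20 simple reaction times (ms) for one participant, simulated

    mean_rt = rt.mean()
    raw_sd = rt.std(ddof=1)               # raw standard deviation across trials
    cv = raw_sd / mean_rt                 # coefficient of variation

    # Rough stand-in for the ISD: SD of residuals after removing a linear trial (practice) trend.
    trend = np.polyval(np.polyfit(np.arange(20), rt, 1), np.arange(20))
    isd_approx = (rt - trend).std(ddof=1)

    print(f"mean RT {mean_rt:.1f} ms, raw SD {raw_sd:.1f}, CV {cv:.3f}, detrended SD {isd_approx:.1f}")
    ```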

  10. A phylogenetic Kalman filter for ancestral trait reconstruction using molecular data.

    PubMed

    Lartillot, Nicolas

    2014-02-15

    Correlation between life history or ecological traits and genomic features such as nucleotide or amino acid composition can be used for reconstructing the evolutionary history of the traits of interest along phylogenies. Thus far, however, such ancestral reconstructions have been done using simple linear regression approaches that do not account for phylogenetic inertia. These reconstructions could instead be seen as a genuine comparative regression problem, such as formalized by classical generalized least-square comparative methods, in which the trait of interest and the molecular predictor are represented as correlated Brownian characters coevolving along the phylogeny. Here, a Bayesian sampler is introduced, representing an alternative and more efficient algorithmic solution to this comparative regression problem, compared with currently existing generalized least-square approaches. Technically, ancestral trait reconstruction based on a molecular predictor is shown to be formally equivalent to a phylogenetic Kalman filter problem, for which backward and forward recursions are developed and implemented in the context of a Markov chain Monte Carlo sampler. The comparative regression method results in more accurate reconstructions and a more faithful representation of uncertainty, compared with simple linear regression. Application to the reconstruction of the evolution of optimal growth temperature in Archaea, using GC composition in ribosomal RNA stems and amino acid composition of a sample of protein-coding genes, confirms previous findings, in particular, pointing to a hyperthermophilic ancestor for the kingdom. The program is freely available at www.phylobayes.org.

  11. A simple and sensitive spectrofluorimetric method for analysis of some nitrofuran drugs in pharmaceutical preparations.

    PubMed

    Belal, Tarek Saied

    2008-09-01

    A simple, rapid, selective and sensitive spectrofluorimetric method was described for the analysis of three nitrofuran drugs, namely, nifuroxazide (NX), nitrofurantoin (NT) and nitrofurazone (NZ). The method involved the alkaline hydrolysis of the studied drugs by warming with 0.1 M sodium hydroxide solution then dilution with distilled water for NX or 2-propanol for NT and NZ. The formed fluorophores were measured at 465 nm (λex 265 nm), 458 nm (λex 245 nm) and 445 nm (λex 245 nm) for NX, NT and NZ, respectively. The reaction pathway was discussed and the structures of the fluorescent products were proposed. The different experimental parameters were studied and optimized. Regression analysis showed good correlation between fluorescence intensity and concentration over the ranges 0.08-1.00, 0.02-0.24 and 0.004-0.050 μg ml⁻¹ for NX, NT and NZ, respectively. The limits of detection of the method were 8.0, 1.9 and 0.3 ng ml⁻¹ for NX, NT and NZ, respectively. The proposed method was validated in terms of accuracy, precision and specificity, and it was successfully applied for the assay of the three nitrofurans in their different dosage forms. No interference was observed from common pharmaceutical adjuvants. The results were favorably compared with those obtained by reference spectrophotometric methods.

  12. Ratio of mean platelet volume to platelet count is a potential surrogate marker predicting liver cirrhosis.

    PubMed

    Iida, Hiroya; Kaibori, Masaki; Matsui, Kosuke; Ishizaki, Morihiko; Kon, Masanori

    2018-01-27

    To provide a simple surrogate marker predictive of liver cirrhosis (LC). Specimens from 302 patients who underwent resection for hepatocellular carcinoma between January 2006 and December 2012 were retrospectively analyzed. Based on pathologic findings, patients were divided into groups based on whether or not they had LC. Parameters associated with hepatic functional reserve were compared in these two groups using the Mann-Whitney U test for univariate analysis. Factors differing significantly in univariate analyses were entered into multivariate logistic regression analysis. There were significant differences between the LC group (n = 100) and non-LC group (n = 202) in prothrombin activity, concentrations of alanine aminotransferase, aspartate aminotransferase, total bilirubin, albumin, cholinesterase, type IV collagen, hyaluronic acid, indocyanine green retention rate at 15 min, maximal removal rate of technetium-99m diethylene triamine penta-acetic acid-galactosyl human serum albumin and ratio of mean platelet volume to platelet count (MPV/PLT). Multivariate analysis showed that prothrombin activity, concentrations of alanine aminotransferase, aspartate aminotransferase, total bilirubin and hyaluronic acid, and MPV/PLT ratio were factors independently predictive of LC. The area under the curve value for MPV/PLT was 0.78, with a 0.8 cutoff value having a sensitivity of 65% and a specificity of 78%. The MPV/PLT ratio, which can be determined simply from the complete blood count, may be a simple surrogate marker predicting LC.

  13. A new technique for spectrophotometric determination of pseudoephedrine and guaifenesin in syrup and synthetic mixture.

    PubMed

    Riahi, Siavash; Hadiloo, Farshad; Milani, Seyed Mohammad R; Davarkhah, Nazila; Ganjali, Mohammad R; Norouzi, Parviz; Seyfi, Payam

    2011-05-01

    The prediction accuracy of different chemometric methods was compared when applied to ordinary UV spectra and first-order derivative spectra. Principal component regression (PCR) and partial least squares with one dependent variable (PLS1) and two dependent variables (PLS2) were applied to spectral data of a pharmaceutical formulation containing pseudoephedrine (PDP) and guaifenesin (GFN). The ability of the derivative treatment to resolve the overlapping spectra, including that of chlorpheniramine maleate, was evaluated when multivariate methods were adopted for the analysis of two-component mixtures without using any chemical pretreatment. The chemometric models were tested on an external validation dataset and finally applied to the analysis of pharmaceuticals. Significant advantages were found in the analysis of the real samples when the calibration models from derivative spectra were used. It should also be mentioned that the proposed method is a simple and rapid way requiring no preliminary separation steps and can be used easily for the analysis of these compounds, especially in quality control laboratories. Copyright © 2011 John Wiley & Sons, Ltd.

  14. Forecasting paratransit services demand : review and recommendations.

    DOT National Transportation Integrated Search

    2013-06-01

    Travel demand forecasting tools for Florida's paratransit services are outdated, utilizing old national trip generation rate generalities and simple linear regression models. In its guidance for the development of mandated Transportation Disadv...

  15. Correlates of willingness to engage in residential gardening: implications for health optimization in ibadan, Nigeria.

    PubMed

    Motunrayo Ibrahim, Fausat

    2013-01-01

    Gardening is a worthwhile adventure which engenders health optimization. Yet, a dearth of evidence that highlights motivations to engage in gardening exists. This study examined willingness to engage in gardening and its correlates, including some socio-psychological, health related and socio-demographic variables. In this cross-sectional survey, 508 copies of a structured questionnaire were randomly self administered among a group of civil servants of Oyo State, Nigeria. Multi-item measures were used to assess variables. Stepwise multiple regression analysis was used to identify predictors of willingness to engage in gardening. Simple percentile analysis shows that 71.1% of respondents do not own a garden. Results of stepwise multiple regression analysis indicate that descriptive norm of gardening is a good predictor, social support for gardening is better, while gardening self-efficacy is the best predictor of willingness to engage in gardening (P< 0.001). Health consciousness, gardening response efficacy, education and age are not predictors of this willingness (P> 0.05). Results of t-test and ANOVA respectively show that gender is not associated with this willingness (P> 0.05), but marital status is (P< 0.05). Socio-psychological characteristics and being married are very relevant in motivations to engage in gardening. The nexus between gardening and health optimization appears to be highly obscured in this population.

  16. Correlates of Willingness to Engage in Residential Gardening: Implications for Health Optimization in Ibadan, Nigeria

    PubMed Central

    Motunrayo Ibrahim, Fausat

    2013-01-01

    Background: Gardening is a worthwhile adventure which engenders health optimization. Yet, a dearth of evidence that highlights motivations to engage in gardening exists. This study examined willingness to engage in gardening and its correlates, including some socio-psychological, health related and socio-demographic variables. Methods: In this cross-sectional survey, 508 copies of a structured questionnaire were randomly self administered among a group of civil servants of Oyo State, Nigeria. Multi-item measures were used to assess variables. Stepwise multiple regression analysis was used to identify predictors of willingness to engage in gardening. Results: Simple percentile analysis shows that 71.1% of respondents do not own a garden. Results of stepwise multiple regression analysis indicate that descriptive norm of gardening is a good predictor, social support for gardening is better, while gardening self-efficacy is the best predictor of willingness to engage in gardening (P< 0.001). Health consciousness, gardening response efficacy, education and age are not predictors of this willingness (P> 0.05). Results of t-test and ANOVA respectively show that gender is not associated with this willingness (P> 0.05), but marital status is (P< 0.05). Conclusion: Socio-psychological characteristics and being married are very relevant in motivations to engage in gardening. The nexus between gardening and health optimization appears to be highly obscured in this population. PMID:24688974

  17. Proposal of a Clinical Decision Tree Algorithm Using Factors Associated with Severe Dengue Infection.

    PubMed

    Tamibmaniam, Jayashamani; Hussin, Narwani; Cheah, Wee Kooi; Ng, Kee Sing; Muninathan, Prema

    2016-01-01

    WHO's new 2009 classification (dengue with or without warning signs, and severe dengue) has necessitated large numbers of hospital admissions of dengue patients, which in turn has imposed a huge economic and physical burden on many hospitals around the globe, particularly in South East Asia and Malaysia, where the disease has seen a rapid surge in numbers in recent years. Lack of a simple tool to differentiate mild from life-threatening infection has led to unnecessary hospitalization of dengue patients. We conducted a single-centre, retrospective study involving serologically confirmed dengue fever patients admitted to a single ward in Hospital Kuala Lumpur, Malaysia. Data were collected for 4 months from February to May 2014. Socio-demography, co-morbidity, days of illness before admission, symptoms, warning signs, vital signs and laboratory results were all recorded. Descriptive statistics were tabulated, and simple and multiple logistic regression analyses were done to determine significant risk factors associated with severe dengue. 657 patients with confirmed dengue were analysed, of which 59 (9.0%) had severe dengue. Overall, the commonest warning signs were vomiting (36.1%) and abdominal pain (32.1%). Previous co-morbidity, vomiting, diarrhoea, pleural effusion, low systolic blood pressure, high haematocrit, low albumin and high urea were found to be significant risk factors for severe dengue using simple logistic regression. However, the significant risk factors for severe dengue with multiple logistic regression were only vomiting, pleural effusion, and low systolic blood pressure. Using those 3 risk factors, we plotted an algorithm for predicting severe dengue. When compared to the classification of severe dengue based on the WHO criteria, the decision tree algorithm had a sensitivity of 0.81, specificity of 0.54, positive predictive value of 0.16 and negative predictive value of 0.96. The decision tree algorithm proposed in this study showed high sensitivity and NPV in predicting patients with severe dengue that may warrant admission. Upon further validation, this tool can be used to help clinicians decide on further management of a patient at first encounter. It will also have a substantial impact on health resources, as low-risk patients can be managed as outpatients, reserving the scarce hospital beds and medical resources for other patients in need.
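    A toy version of applying such a triage rule, treating the three retained predictors (vomiting, pleural effusion, low systolic blood pressure) as a simple flag-if-any-present rule. The any-of-three structure and the blood pressure cut-off are assumptions for illustration; the paper's actual decision tree may branch differently.

    ```python
    from dataclasses import dataclass

    @dataclass
    class DenguePatient:
        vomiting: bool
        pleural_effusion: bool
        systolic_bp: float  # mmHg

    def flag_severe_risk(p: DenguePatient, low_sbp_cutoff: float = 90.0) -> bool:
        """Illustrative triage rule: at risk if any of the three retained predictors is present.
        The cut-off and the any-of-three structure are assumptions, not the paper's exact tree."""
        low_sbp = p.systolic_bp < low_sbp_cutoff
        return p.vomiting or p.pleural_effusion or low_sbp

    patients = [
        DenguePatient(vomiting=True, pleural_effusion=False, systolic_bp=118),
        DenguePatient(vomiting=False, pleural_effusion=False, systolic_bp=112),
        DenguePatient(vomiting=False, pleural_effusion=True, systolic_bp=85),
    ]
    for p in patients:
        print(p, "->", "admit/observe" if flag_severe_risk(p) else "consider outpatient care")
    ```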

  18. Periodontal inflamed surface area as a novel numerical variable describing periodontal conditions

    PubMed Central

    2017-01-01

    Purpose A novel index, the periodontal inflamed surface area (PISA), represents the sum of the periodontal pocket depth of bleeding on probing (BOP)-positive sites. In the present study, we evaluated correlations between PISA and periodontal classifications, and examined PISA as an index integrating the discrete conventional periodontal indexes. Methods This study was a cross-sectional subgroup analysis of data from a prospective cohort study investigating the association between chronic periodontitis and the clinical features of ankylosing spondylitis. Data from 84 patients without systemic diseases (the control group in the previous study) were analyzed in the present study. Results PISA values were positively correlated with conventional periodontal classifications (Spearman correlation coefficient=0.52; P<0.01) and with periodontal indexes, such as BOP and the plaque index (PI) (r=0.94; P<0.01 and r=0.60; P<0.01, respectively; Pearson correlation test). Porphyromonas gingivalis (P. gingivalis) expression and the presence of serum P. gingivalis antibodies were significant factors affecting PISA values in a simple linear regression analysis, together with periodontal classification, PI, bleeding index, and smoking, but not in the multivariate analysis. In the multivariate linear regression analysis, PISA values were positively correlated with the quantity of current smoking, PI, and severity of periodontal disease. Conclusions PISA integrates multiple periodontal indexes, such as probing pocket depth, BOP, and PI into a numerical variable. PISA is advantageous for quantifying periodontal inflammation and plaque accumulation. PMID:29093989

  19. Body composition of collegiate football players: bioelectrical impedance and skinfolds compared to hydrostatic weighing.

    PubMed

    Oppliger, R A; Nielsen, D H; Shetler, A C; Crowley, E T; Albright, J P

    1992-01-01

    The need for simple, valid techniques of body composition assessment among athletes is a growing concern of the physical therapist. This paper reports on several common methods applied to university football players. Body composition analysis was conducted on 28 Division IA football players using three different bioelectrical impedance analysis (BIA) systems, skinfolds (SF), and hydrostatic weighing (HYDRO). Correlations for all methods with HYDRO were high (>.88), but BIA significantly overpredicted body fatness. In contrast, three SF equations showed small differences with HYDRO and reasonable measurement error. Clinicians should exercise caution when using BIA based on the existing manufacturers' equations with athletic populations. Adjustments to BIA regression equations by including modifying or anthropometric variables could enhance the predictive accuracy of these methods with lean, athletic males. J Orthop Sports Phys Ther 1992;15(4):187-192.

  20. Evaluation of the Williams-type model for barley yields in North Dakota and Minnesota

    NASA Technical Reports Server (NTRS)

    Barnett, T. L. (Principal Investigator)

    1981-01-01

    The Williams-type yield model is based on multiple regression analysis of historical time series data at CRD level pooled to regional level (groups of similar CRDs). Basic variables considered in the analysis include USDA yield, monthly mean temperature, monthly precipitation, soil texture and topographic information, and variables derived from these. Technologic trend is represented by piecewise linear and/or quadratic functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test (1970-1979) demonstrate that biases are small and performance based on root mean square error appears to be acceptable for the intended AgRISTARS large area applications. The model is objective, adequate, timely, simple, and not costly. It considers scientific knowledge on a broad scale but not in detail, and does not provide a good current measure of modeled yield reliability.

  1. Determination of stress intensity factors for interface cracks under mixed-mode loading

    NASA Technical Reports Server (NTRS)

    Naik, Rajiv A.; Crews, John H., Jr.

    1992-01-01

    A simple technique was developed using conventional finite element analysis to determine the stress intensity factors, K1 and K2, for interface cracks under mixed-mode loading. The technique involves the calculation of crack tip stresses using non-singular finite elements. These stresses are then combined and used in a linear regression procedure to calculate K1 and K2. The technique was demonstrated for three different bimaterial combinations. For the normal loading case, the K's were within 2.6 percent of an exact solution. The normalized K's under shear loading were shown to be related to the normalized K's under normal loading. Based on these relations, a simple equation was derived for calculating K1 and K2 under mixed-mode loading from knowledge of the K's under normal loading. The equation was verified by computing the K's for a mixed-mode case with equal normal and shear loading; the agreement between the exact and finite element solutions was within 3.7 percent. This study provides a simple procedure to compute the K2/K1 ratio, which has been used to characterize the stress state at the crack tip for various combinations of materials and loadings. Tests conducted over a range of K2/K1 ratios could be used to fully characterize interface fracture toughness.

  2. A simple method for HPLC retention time prediction: linear calibration using two reference substances.

    PubMed

    Sun, Lei; Jin, Hong-Yu; Tian, Run-Tao; Wang, Ming-Juan; Liu, Li-Na; Ye, Liu-Ping; Zuo, Tian-Tian; Ma, Shuang-Cheng

    2017-01-01

    The analysis of related substances in pharmaceutical chemicals and of multiple components in traditional Chinese medicines requires a large number of reference substances to identify chromatographic peaks accurately, but reference substances are costly. Thus, the relative retention (RR) method has been widely adopted in pharmacopoeias and the literature for characterizing the HPLC behaviour of reference substances that are unavailable. The problem is that it is difficult to reproduce the RR on different columns, owing to the error between measured retention time (tR) and predicted tR in some cases. It is therefore useful to develop an alternative, simple method for accurate prediction of tR. In the present study, based on the thermodynamic theory of HPLC, a method named linear calibration using two reference substances (LCTRS) was proposed. The method includes three steps: two-point prediction, validation by multiple-point regression, and sequential matching. The tR of compounds on an HPLC column can be calculated from standard retention times and a linear relationship. The method was validated in two medicines on 30 columns. It was demonstrated that the LCTRS method is simple, but more accurate and more robust on different HPLC columns than the RR method. Hence, quality standards using the LCTRS method are easy to reproduce in different laboratories, with a lower cost of reference substances.
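
    The two-point calibration step can be sketched as fitting a straight line through the retention times of the two reference substances measured on a reference column and on the working column, then applying that line to other compounds. The sketch below assumes this reading of the method; all retention-time values are hypothetical.

    ```python
    import numpy as np

    def two_point_calibration(x_ref, y_ref):
        """Slope and intercept of the line through two reference points."""
        (x1, x2), (y1, y2) = x_ref, y_ref
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        return a, b

    # Hypothetical retention times (min) of two reference substances on a
    # reference column (x_ref) and on the working column (y_ref).
    a, b = two_point_calibration(x_ref=(6.2, 18.5), y_ref=(6.8, 19.9))

    tR_standard = np.array([8.1, 11.4, 15.0])   # tabulated standard retention times
    tR_predicted = a * tR_standard + b          # predicted times on the working column
    print(tR_predicted)
    ```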

  3. Bayesian propensity scores for high-dimensional causal inference: A comparison of drug-eluting to bare-metal coronary stents.

    PubMed

    Spertus, Jacob V; Normand, Sharon-Lise T

    2018-04-23

    High-dimensional data provide many potential confounders that may bolster the plausibility of the ignorability assumption in causal inference problems. Propensity score methods are powerful causal inference tools, which are popular in health care research and are particularly useful for high-dimensional data. Recent interest has surrounded a Bayesian treatment of propensity scores in order to flexibly model the treatment assignment mechanism and summarize posterior quantities while incorporating variance from the treatment model. We discuss methods for Bayesian propensity score analysis of binary treatments, focusing on modern methods for high-dimensional Bayesian regression and the propagation of uncertainty. We introduce a novel and simple estimator for the average treatment effect that capitalizes on conjugacy of the beta and binomial distributions. Through simulations, we show the utility of horseshoe priors and Bayesian additive regression trees paired with our new estimator, while demonstrating the importance of including variance from the treatment regression model. An application to cardiac stent data with almost 500 confounders and 9000 patients illustrates approaches and facilitates comparison with existing alternatives. As measured by a falsifiability endpoint, we improved confounder adjustment compared with past observational research of the same problem. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Effects of alloy composition on cyclic flame hot-corrosion attack of cast nickel-base superalloys at 900 deg C

    NASA Technical Reports Server (NTRS)

    Deadmore, D. L.

    1984-01-01

    The effects of Cr, Al, Ti, Mo, Ta, Nb, and W content on the hot corrosion of nickel-base alloys were investigated. The alloys were tested in a Mach 0.3 flame with 0.5 ppmw sodium at a temperature of 900 C. One nondestructive and three destructive tests were conducted. The best corrosion resistance was achieved when the Cr content was 12 wt %. However, some lower-Cr-content alloys (10 wt %) exhibited reasonable resistance provided that the Al content was 2.5 wt % and the Ti content was adequate. The effect of W, Ta, Mo, and Nb contents on the hot-corrosion resistance varied depending on the Al and Ti contents. Several commercial alloy compositions were also tested and the corrosion attack was measured. Predicted attack was calculated for these alloys from derived regression equations and was in reasonable agreement with that experimentally measured. The regression equations were derived from measurements made on alloys in a one-quarter replicate of a 2^7 statistical design alloy composition experiment. These regression equations represent a simple linear model and are only a very preliminary analysis of the data needed to provide insights into the experimental method.

  5. Bayesian isotonic density regression

    PubMed Central

    Wang, Lianming; Dunson, David B.

    2011-01-01

    Density regression models allow the conditional distribution of the response given predictors to change flexibly over the predictor space. Such models are much more flexible than nonparametric mean regression models with nonparametric residual distributions, and are well supported in many applications. A rich variety of Bayesian methods have been proposed for density regression, but it is not clear whether such priors have full support so that any true data-generating model can be accurately approximated. This article develops a new class of density regression models that incorporate stochastic-ordering constraints, which are natural when a response tends to increase or decrease monotonically with a predictor. Theory is developed showing large support. Methods are developed for hypothesis testing, with posterior computation relying on a simple Gibbs sampler. Frequentist properties are illustrated in a simulation study, and an epidemiology application is considered. PMID:22822259

  6. Postoperative liver volume was accurately predicted by a medical image three dimensional visualization system in hepatectomy for liver cancer.

    PubMed

    Cai, Wei; Fan, Yingfang; Hu, Haoyu; Xiang, Nan; Fang, Chihua; Jia, Fucang

    2017-06-01

    Liver cancer is the second most common cause of cancer death worldwide. Hepatectomy is the most effective and the only potentially curative treatment for patients with a resectable neoplasm. Precise preoperative assessment of the remnant liver volume is essential in preventing postoperative liver failure. The aim of our study is to report our experience of using a medical image three-dimensional (3D) visualization system (MI-3DVS), which was developed by our team, in assisting hepatectomy for patients with liver cancer. Between January 2010 and June 2016, 69 patients with liver cancer who underwent hepatic resection based on the MI-3DVS were enrolled in this study. All patients underwent CT scanning 5 days before surgery and within 5 days after resection. CT images were reconstructed with the MI-3DVS to assist in performing the hepatectomy. Simple linear regression, the intra-class correlation coefficient (ICC) and Bland-Altman analysis were used to evaluate the relationship and agreement between the actual excisional liver volume (AELV) and the predicted excisional liver volume (PELV). Among the 69 patients in this study, 62 (89.85%) were diagnosed with hepatocellular carcinoma by histopathologic examination, and 41 (59.42%) underwent major hepatectomy. The average AELV was 330.13 cm3 and the average PELV was 287.67 cm3. The simple regression equation is AELV = 1.016 × PELV + 30.39 (r = 0.966; p < 0.0003). PELV (ICC = 0.964) achieved an excellent agreement with AELV with statistical significance (p < 0.001). 65 of 69 points lie within the 95% limits of agreement in the Bland-Altman analysis. The MI-3DVS is simple and convenient to use. It is accurate in the assessment of postoperative liver volume and improves safety in liver resection. Copyright © 2017 Elsevier Ltd. All rights reserved.
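
    As a rough illustration of the agreement analysis described above, the sketch below fits a simple linear regression and computes Bland-Altman bias and limits of agreement on simulated volumes (not the study's measurements); NumPy and SciPy are assumed to be available.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    pelv = rng.uniform(100, 800, size=69)               # predicted volumes (cm3), simulated
    aelv = 1.02 * pelv + 30 + rng.normal(0, 40, 69)     # actual volumes, simulated

    slope, intercept, r, p, se = stats.linregress(pelv, aelv)
    print(f"AELV = {slope:.3f} * PELV + {intercept:.1f}  (r = {r:.3f}, p = {p:.2g})")

    diff = aelv - pelv
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)                       # half-width of the 95% limits of agreement
    within = np.mean(np.abs(diff - bias) <= loa)
    print(f"bias = {bias:.1f} cm3, limits of agreement = bias +/- {loa:.1f}, "
          f"{within:.0%} of points within the limits")
    ```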

  7. Cuttability Assessment of Selected Rocks Through Different Brittleness Values

    NASA Astrophysics Data System (ADS)

    Dursun, Arif Emre; Gokay, M. Kemal

    2016-04-01

    Prediction of cuttability is a critical issue for successful execution of tunnel or mining excavation projects. Rock cuttability is also used to determine specific energy, which is defined as the work done by the cutting force to excavate a unit volume of yield. Specific energy is a meaningful inverse measure of cutting efficiency, since it simply states how much energy must be expended to excavate a unit volume of rock. Brittleness is a fundamental rock property and applied in drilling and rock excavation. Brittleness is one of the most crucial rock features for rock excavation. For this reason, determination of relations between cuttability and brittleness will help rock engineers. This study aims to estimate the specific energy from different brittleness values of rocks by means of simple and multiple regression analyses. In this study, rock cutting, rock property, and brittleness index tests were carried out on 24 different rock samples with different strength values, including marble, travertine, and tuff, collected from sites around Konya Province, Turkey. Four previously used brittleness concepts were evaluated in this study, denoted as B1 (ratio of compressive to tensile strength), B2 (ratio of the difference between compressive and tensile strength to the sum of compressive and tensile strength), B3 (area under the stress-strain line in relation to compressive and tensile strength), and B9 = S20, the percentage of fines (<11.2 mm) formed in an impact test for the Norwegian University of Science and Technology (NTNU) model, as well as B9p (B9 as predicted from uniaxial compressive, Brazilian tensile, and point load strengths of rocks using multiple regression analysis). The results suggest that the proposed simple regression-based prediction models including B3, B9, and B9p outperform the other models including B1 and B2 and can be used for more accurate and reliable estimation of specific energy.

  8. Viscoelastic Parameters for Quantifying Liver Fibrosis: Three-Dimensional Multifrequency MR Elastography Study on Thin Liver Rat Slices

    PubMed Central

    Ronot, Maxime; Lambert, Simon A.; Wagner, Mathilde; Garteiser, Philippe; Doblas, Sabrina; Albuquerque, Miguel; Paradis, Valérie; Vilgrain, Valérie; Sinkus, Ralph; Van Beers, Bernard E.

    2014-01-01

    Objective To assess in a high-resolution model of thin liver rat slices which viscoelastic parameter at three-dimensional multifrequency MR elastography has the best diagnostic performance for quantifying liver fibrosis. Materials and Methods The study was approved by the ethics committee for animal care of our institution. Eight normal rats and 42 rats with carbon tetrachloride induced liver fibrosis were used in the study. The rats were sacrificed, their livers were resected and three-dimensional MR elastography of 5±2 mm liver slices was performed at 7T with mechanical frequencies of 500, 600 and 700 Hz. The complex shear, storage and loss moduli, and the coefficient of the frequency power law were calculated. At histopathology, fibrosis and inflammation were assessed with METAVIR score, fibrosis was further quantified with morphometry. The diagnostic value of the viscoelastic parameters for assessing fibrosis severity was evaluated with simple and multiple linear regressions, receiver operating characteristic analysis and Obuchowski measures. Results At simple regression, the shear, storage and loss moduli were associated with the severity of fibrosis. At multiple regression, the storage modulus at 600 Hz was the only parameter associated with fibrosis severity (r = 0.86, p<0.0001). This parameter had an Obuchowski measure of 0.89 ± 0.03. This measure was significantly larger than that of the loss modulus (0.78 ± 0.04, p = 0.028), but not than that of the complex shear modulus (0.88 ± 0.03, p = 0.84). Conclusion Our high resolution, three-dimensional multifrequency MR elastography study of thin liver slices shows that the storage modulus is the viscoelastic parameter that has the best association with the severity of liver fibrosis. However, its diagnostic performance does not differ significantly from that of the complex shear modulus. PMID:24722733

  9. Quasi-biennial (QBO), annual (AO), and semi-annual oscillation (SAO) in stratospheric SCIAMACHY O3, NO2, and BrO limb data using a multivariate least squares approach

    NASA Astrophysics Data System (ADS)

    Dikty, Sebastian; von Savigny, Christian; Sinnhuber, Bjoern-Martin; Rozanov, Alexej; Weber, Mark; Burrows, John P.

    We use SCIAMACHY (SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY) ozone, nitrogen dioxide and bromine oxide profiles (20-50 km altitude, 2003-2008) to quantify the amplitudes of QBO, AO, and SAO signals with the help of a simple multivariate regression model. The analysis is carried out with SCIAMACHY data covering all latitudes with the exception of polar nights, when measurements are not available. The overall global yield is approximately 10,000 profiles per month, which are binned into 10° latitude steps, with one zonal mean profile being calculated per day and per latitude bin.

  10. Dual oxidase 1: A predictive tool for the prognosis of hepatocellular carcinoma patients.

    PubMed

    Chen, Shengsen; Ling, Qingxia; Yu, Kangkang; Huang, Chong; Li, Ning; Zheng, Jianming; Bao, Suxia; Cheng, Qi; Zhu, Mengqi; Chen, Mingquan

    2016-06-01

    Dual oxidase 1 (DUOX1), which is the main source of reactive oxygen species (ROS) production in the airway, can be silenced in human lung cancer and hepatocellular carcinomas. However, the prognostic value of DUOX1 expression in hepatocellular carcinoma patients is still unclear. We investigated the prognostic value of DUOX1 expression in liver cancer patients. DUOX1 mRNA expression was determined in tumor tissues and non-tumor tissues by real‑time PCR. For evaluation of the prognostic value of DUOX1 expression, Kaplan-Meier method and Cox's proportional hazards model (univariate analysis and multivariate analysis) were employed. A simple risk score was devised by using significant variables obtained from the Cox's regression analysis to further predict the HCC patient prognosis. We observed a reduced DUOX1 mRNA level in the cancer tissues in comparison to the non‑cancer tissues. More importantly, Kaplan-Meier analysis showed that patients with high DUOX1 expression had longer disease-free survival and overall survival compared with those with low expression of DUOX1. Cox's regression analysis indicated that DUOX1 expression, age, and intrahepatic metastasis may be significant prognostic factors for disease-free survival and overall survival. Finally, we found that patients with total scores of >2 and >1 were more likely to relapse and succumb to the disease than patients whose total scores were ≤2 and ≤1. In conclusion, DUOX1 expression in liver tumors is a potential prognostic tool for patients. The risk scoring system is useful for predicting the survival of liver cancer patients after tumor resection.

  11. Rapid determination of Swiss cheese composition by Fourier transform infrared/attenuated total reflectance spectroscopy.

    PubMed

    Rodriguez-Saona, L E; Koca, N; Harper, W J; Alvarez, V B

    2006-05-01

    There is a need for rapid and simple techniques that can be used to predict the quality of cheese. The aim of this research was to develop a simple and rapid screening tool for monitoring Swiss cheese composition by using Fourier transform infrared spectroscopy. Twenty Swiss cheese samples from different manufacturers and degrees of maturity were evaluated. Direct measurements of Swiss cheese slices (approximately 0.5 g) were made using a MIRacle 3-reflection diamond attenuated total reflectance (ATR) accessory. Reference methods for moisture (vacuum oven), protein content (Kjeldahl), and fat (Babcock) were used. Calibration models were developed based on a cross-validated (leave-one-out approach) partial least squares regression. The information-rich infrared spectral range for Swiss cheese samples was from 3,000 to 2,800 cm(-1) and 1,800 to 900 cm(-1). The performance statistics for the cross-validated models gave estimates of the standard error of cross-validation of 0.45, 0.25, and 0.21% for moisture, protein, and fat, respectively, and correlation coefficients r > 0.96. Furthermore, the ATR infrared protocol allowed for the classification of cheeses according to manufacturer and aging based on unique spectral information, especially of carbonyl groups, probably due to their distinctive lipid composition. Attenuated total reflectance infrared spectroscopy allowed for the rapid (approximately 3-min analysis time) and accurate analysis of the composition of Swiss cheese. This technique could contribute to the development of simple and rapid protocols for monitoring complex biochemical changes and predicting the final quality of the cheese.
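
    A minimal sketch of such a leave-one-out cross-validated PLS calibration follows, using scikit-learn and simulated spectra in place of the cheese data; the SECV and r statistics are computed in the same spirit as those reported.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(1)
    X = rng.normal(size=(20, 300))                        # 20 samples x 300 wavenumber points (simulated)
    y = X[:, :5].sum(axis=1) + rng.normal(0, 0.2, 20)     # simulated reference values (e.g. % fat)

    pls = PLSRegression(n_components=3)
    y_cv = cross_val_predict(pls, X, y, cv=LeaveOneOut()) # leave-one-out predictions

    secv = np.sqrt(np.mean((y - y_cv.ravel()) ** 2))      # standard error of cross-validation
    r = np.corrcoef(y, y_cv.ravel())[0, 1]
    print(f"SECV = {secv:.2f}, r = {r:.2f}")
    ```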

  12. A simple next-best alternative to seasonal predictions in Europe

    NASA Astrophysics Data System (ADS)

    Buontempo, Carlo; De Felice, Matteo

    2016-04-01

    In order to build a climate-proof society, we need to learn how to best use the climate information we have. Having spent time and resources developing complex numerical models has often blinded us to the real value of this information in the eyes of a decision maker. An effective way to assess this is to compare the quality (and cost) of the forecast with that of a forecast from a prediction system based on simpler assumptions (and thus cheaper to run). Such a practice is common in marketing analysis, where it is often referred to as the next-best alternative. As a way to facilitate such an analysis, climate service providers should always provide a set of skill scores alongside the predictions. These are usually based on climatological means, anomaly persistence or, more recently, multiple linear regression. We here present an equally simple benchmark based on a Markov chain process locally trained at a monthly or seasonal time-scale. We demonstrate that, in spite of its simplicity, the model easily outperforms not only the standard benchmarks but also most of the seasonal prediction systems, at least in Europe. We suggest that a benchmark of this kind could represent a useful next-best alternative for a number of users.
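
    The benchmark described is a locally trained Markov chain; the sketch below shows the core idea on an invented categorical monthly series (e.g. terciles coded 0/1/2), whereas the paper's own training and verification setup is more elaborate.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    states = rng.integers(0, 3, size=240)      # 20 years of monthly tercile categories (invented)

    n_states = 3
    counts = np.zeros((n_states, n_states))
    for s, s_next in zip(states[:-1], states[1:]):
        counts[s, s_next] += 1                 # count observed transitions
    transition = counts / counts.sum(axis=1, keepdims=True)

    current = states[-1]
    forecast = transition[current].argmax()    # most likely category for next month
    print("transition matrix:\n", transition.round(2))
    print("forecast for next month:", forecast)
    ```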

  13. Weighing Evidence “Steampunk” Style via the Meta-Analyser

    PubMed Central

    Bowden, Jack; Jackson, Chris

    2016-01-01

    ABSTRACT The funnel plot is a graphical visualization of summary data estimates from a meta-analysis, and is a useful tool for detecting departures from the standard modeling assumptions. Although perhaps not widely appreciated, a simple extension of the funnel plot can help to facilitate an intuitive interpretation of the mathematics underlying a meta-analysis at a more fundamental level, by equating it to determining the center of mass of a physical system. We used this analogy to explain the concepts of weighing evidence and of biased evidence to a young audience at the Cambridge Science Festival, without recourse to precise definitions or statistical formulas and with a little help from Sherlock Holmes! Following on from the science fair, we have developed an interactive web-application (named the Meta-Analyser) to bring these ideas to a wider audience. We envisage that our application will be a useful tool for researchers when interpreting their data. First, to facilitate a simple understanding of fixed and random effects modeling approaches; second, to assess the importance of outliers; and third, to show the impact of adjusting for small study bias. This final aim is realized by introducing a novel graphical interpretation of the well-known method of Egger regression. PMID:28003684
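
    The centre-of-mass analogy corresponds to the usual inverse-variance weighting of a fixed-effect meta-analysis: each study is a mass proportional to its precision placed at its effect estimate, and the pooled estimate is the balance point. A minimal numeric sketch with invented study results follows.

    ```python
    import numpy as np

    effects = np.array([0.10, 0.35, 0.22, 0.51, 0.05])    # study effect estimates (invented)
    se = np.array([0.15, 0.10, 0.20, 0.25, 0.12])         # their standard errors (invented)

    weights = 1.0 / se**2                                 # heavier "mass" for more precise studies
    pooled = np.sum(weights * effects) / np.sum(weights)  # centre of mass of the system
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    print(f"fixed-effect estimate = {pooled:.3f} (SE {pooled_se:.3f})")
    ```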

  14. Sedimentary sequence evolution in a Foredeep basin: Eastern Venezuela

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bejarano, C.; Funes, D.; Sarzalho, S.

    1996-08-01

    Well log-seismic sequence stratigraphy analysis in the Eastern Venezuela Foreland Basin allows study of the evolution of the sedimentary sequences deposited onto the Cretaceous-Paleocene passive margin. This basin comprises two different foredeep sub-basins: the Guarico sub-basin to the west, which is older, and the Maturin sub-basin to the east, which is younger. A foredeep switching between these two sub-basins is observed at 12.5 m.y. Seismic interpretation and well log sections across the study area show sedimentary sequences with transgressive sands and coastal onlaps to the east-southeast for the Guarico sub-basin, as well as truncations below the switching sequence (12.5 m.y.), while the Maturin sub-basin shows apparent coastal onlaps to the west-northwest, as well as a marine onlap (deeper water) in the west, where it starts to establish. Sequence stratigraphy analysis of these sequences with well logs allowed the study of the evolution of the stratigraphic section from the Paleocene to the middle Miocene (68.0-12.0 m.y.). On the basis of well log patterns, the sequences were divided into regressive-transgressive-regressive sedimentary cycles caused by changes in relative sea level. Facies distributions were analyzed and the sequences were divided into simple sequences or sub-sequences of higher frequency than third-order depositional sequences.

  15. QC operator’s nonneutral posture against musculoskeletal disorder’s (MSDs) risks

    NASA Astrophysics Data System (ADS)

    Kautsar, F.; Gustopo, D.; Achmadi, F.

    2018-04-01

    Musculoskeletal disorders (MSDs) refer to a gamut of inflammatory and degenerative disorders aggravated largely by the performance of work. They are the major cause of pain, disability, absenteeism and reduced productivity among workers worldwide. Although not fatal, MSDs have the potential to develop into serious injuries of the musculoskeletal system if ignored. QC operators work in nonneutral body postures. This cross-sectional study was conducted in order to investigate the correlation between the risk assessment results of QEC and the body posture calculations of Mannequin Pro. Statistical analysis was conducted using SPSS version 16.0. A validity test, a reliability test and regression analysis were conducted to compare the risk assessment output of the applied method and the nonneutral body posture simulation. All of the QEC indicators were classified as valid and reliable. The results of the simple regression analysis are back (0.326 < 4.32), shoulder/arm (8.489 > 4.32), wrist/hand (4.86 > 4.32) and neck (1.298 < 4.32). The results of this study show that there is an influence of the QC operator's nonneutral body posture during work on the risk of musculoskeletal disorders. The potential risk of musculoskeletal disorders is in the shoulder/arm and wrist/hand of the QC operator, whereas the back and neck are not affected.

  16. Preferential reduction of bone mineral density at the femur reflects impairment of physical activity in patients with low-activity rheumatoid arthritis.

    PubMed

    Sugiguchi, Shigeru; Goto, Hitoshi; Inaba, Masaaki; Nishizawa, Yoshiki

    2010-02-01

    Bone mineral density (BMD) and factors influencing BMD in rheumatoid arthritis (RA) under good or moderate control were examined to assess the management of osteoporosis in RA. BMD of the lumbar spine, femur, and distal radius was measured in 105 female patients with well-controlled RA. Laboratory and clinical variables associated with disease activity were measured in the same subjects, and correlations between these variables and BMD were evaluated. The RA patients showed a greater decrease in BMD of the femoral neck than of the lumbar spine. Age, Health Assessment Questionnaire (HAQ) score, and Larsen damage score had negative correlations with BMD of the femoral neck. In multiple regression analysis of the parameters that were associated with femoral neck BMD in simple regression analysis, an increase in HAQ score showed a negative correlation with femoral neck BMD. After initiation of treatment with alendronate (ALN), BMD of the femoral neck increased, and the increase correlated with improvement in HAQ score. A decrease in BMD of the femoral neck is a characteristic of RA. This suggests that muscle tonus has more effect than weight-bearing activity on BMD in patients with RA. BMD of the femoral neck is a useful index for the general evaluation of RA patients.

  17. Social Impact of Stigma Regarding Tuberculosis Hindering Adherence to Treatment: A Cross Sectional Study Involving Tuberculosis Patients in Rajshahi City, Bangladesh.

    PubMed

    Chowdhury, Md Rocky Khan; Rahman, Md Shafiur; Mondal, Md Nazrul Islam; Sayem, Abu; Billah, Baki

    2015-01-01

    Stigma, considered a social disease, is more apparent in developing societies which are driven by various social affairs, and influences adherence to treatment. The aim of the present study was to examine levels of social stigma related to tuberculosis (TB) in sociodemographic context and identify the effects of sociodemographic factors on stigma. The study sample consisted of 372 TB patients. Data were collected using stratified sampling with simple random sampling techniques. T tests, chi-square tests, and binary logistic regression analysis were performed to examine correlations between stigma and sociodemographic variables. Approximately 85.9% of patients had experienced stigma. The most frequent indicator of the stigma experienced by patients involved problems taking part in social programs (79.5%). Mean levels of stigma were significantly higher in women (55.5%), illiterate individuals (60.8%), and villagers (60.8%) relative to those of other groups. Chi-square tests revealed that education, monthly family income, and type of patient (pulmonary and extrapulmonary) were significantly associated with stigma. Binary logistic regression analysis demonstrated that stigma was influenced by sex, education, and type of patient. Stigma is one of the most important barriers to treatment adherence. Therefore, in interventions that aim to reduce stigma, strong collaboration between various institutions is essential.

  18. Exploring visuospatial abilities and their contribution to constructional abilities and nonverbal intelligence.

    PubMed

    Trojano, Luigi; Siciliano, Mattia; Cristinzio, Chiara; Grossi, Dario

    2018-01-01

    The present study aimed at exploring relationships among the visuospatial tasks included in the Battery for Visuospatial Abilities (BVA), and at assessing the relative contribution of different facets of visuospatial processing to tests tapping constructional abilities and nonverbal abstract reasoning. One hundred forty-four healthy subjects with a normal score on the Mini Mental State Examination completed the BVA plus Raven's Coloured Progressive Matrices and the Constructional Apraxia test. We used Principal Axis Factoring and Parallel Analysis to investigate relationships among the BVA visuospatial tasks, and performed regression analyses to assess the visuospatial contribution to constructional abilities and nonverbal abstract reasoning. Principal Axis Factoring and Parallel Analysis revealed two eigenvalues exceeding 1, accounting for about 60% of the variance. A 2-factor model provided the best fit. Factor 1 included sub-tests exploring "complex" visuospatial skills, whereas Factor 2 included two subtests tapping "simple" visuospatial skills. Regression analyses revealed that both Factor 1 and Factor 2 significantly affected performance on Raven's Coloured Progressive Matrices, whereas only Factor 1 affected performance on the Constructional Apraxia test. Our results support the functional segregation proposed by De Renzi, suggest clinical caution in relying on a single test to assess the visuospatial domain, and qualify the visuospatial contribution to drawing and non-verbal intelligence tests.

  19. IQ is an independent predictor of glycated haemoglobin level in young and middle-aged adults with intellectual disability.

    PubMed

    Yano, T; Miki, T; Itoh, T; Ohnishi, H; Asari, M; Chihiro, S; Yamamoto, A; Aotsuka, K; Kawakami, N; Ichikawa, J; Hirota, Y; Miura, T

    2015-01-01

    Here we examined whether intellectual disability is independently associated with hyperglycaemia. We recruited 233 consecutive young and middle-aged adults with intellectual disability. After exclusion of subjects on medication for metabolic diseases or with severe intellectual disability (IQ < 35), 121 subjects were divided by IQ into a group with moderate intellectual disability (35 ≤ IQ ≤ 50), a mild intellectual disability group (51 ≤ IQ ≤ 70) and a borderline group (IQ > 70). HbA1c level was higher in subjects with moderate intellectual disability (42 ± 9 mmol/mol; 6.0 ± 0.8%) than in the borderline (36 ± 4 mmol/mol; 5.5 ± 0.3%) and mild intellectual disability (37 ± 5 mmol/mol; 5.5 ± 0.5%) groups. HbA1c level was correlated with age, BMI, blood pressure, serum triglycerides and IQ in simple linear regression analysis. Multiple regression analysis indicated that IQ, age, BMI and diastolic blood pressure were independent explanatory factors of HbA1c level. An unfavourable effect of intellectual disability on lifestyle and an untoward effect of hyperglycaemia on cognitive function may underlie the association of low IQ with hyperglycaemia. © 2014 The Authors. Diabetic Medicine © 2014 Diabetes UK.

  20. A simplified calculation procedure for mass isotopomer distribution analysis (MIDA) based on multiple linear regression.

    PubMed

    Fernández-Fernández, Mario; Rodríguez-González, Pablo; García Alonso, J Ignacio

    2016-10-01

    We have developed a novel, rapid and easy calculation procedure for Mass Isotopomer Distribution Analysis based on multiple linear regression which allows the simultaneous calculation of the precursor pool enrichment and the fraction of newly synthesized labelled proteins (fractional synthesis) using linear algebra. To test this approach, we used the peptide RGGGLK as a model tryptic peptide containing three subunits of glycine. We selected glycine labelled in two 13C atoms (13C2-glycine) as labelled amino acid to demonstrate that spectral overlap is not a problem in the proposed methodology. The developed methodology was tested first in vitro by changing the precursor pool enrichment from 10 to 40% of 13C2-glycine. Secondly, a simulated in vivo synthesis of proteins was designed by combining the natural abundance RGGGLK peptide and 10 or 20% 13C2-glycine at 1 : 1, 1 : 3 and 3 : 1 ratios. Precursor pool enrichments and fractional synthesis values were calculated with satisfactory precision and accuracy using a simple spreadsheet. This novel approach can provide a relatively rapid and easy means to measure protein turnover based on stable isotope tracers. Copyright © 2016 John Wiley & Sons, Ltd.
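
    A simplified sketch of the regression idea follows, assuming the observed isotopomer distribution is a linear mixture of a natural-abundance pattern and a labelled pattern at a fixed precursor enrichment; the published procedure additionally estimates the enrichment itself, and all distributions below are invented.

    ```python
    import numpy as np

    natural = np.array([0.85, 0.12, 0.02, 0.01, 0.00])    # M+0..M+4, hypothetical natural pattern
    labelled = np.array([0.30, 0.25, 0.30, 0.10, 0.05])   # hypothetical pattern at fixed enrichment

    observed = 0.7 * natural + 0.3 * labelled             # simulated measured distribution

    design = np.column_stack([natural, labelled])
    fractions, *_ = np.linalg.lstsq(design, observed, rcond=None)   # least-squares mixture fractions
    print("fraction pre-existing = %.2f, fractional synthesis = %.2f" % tuple(fractions))
    ```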

  1. Inverse odds ratio-weighted estimation for causal mediation analysis.

    PubMed

    Tchetgen Tchetgen, Eric J

    2013-11-20

    An important scientific goal of studies in the health and social sciences is increasingly to determine to what extent the total effect of a point exposure is mediated by an intermediate variable on the causal pathway between the exposure and the outcome. A causal framework has recently been proposed for mediation analysis, which gives rise to new definitions, formal identification results and novel estimators of direct and indirect effects. In the present paper, the author describes a new inverse odds ratio-weighted approach to estimate so-called natural direct and indirect effects. The approach, which uses as a weight the inverse of an estimate of the odds ratio function relating the exposure and the mediator, is universal in that it can be used to decompose total effects in a number of regression models commonly used in practice. Specifically, the approach may be used for effect decomposition in generalized linear models with a nonlinear link function, and in a number of other commonly used models such as the Cox proportional hazards regression for a survival outcome. The approach is simple and can be implemented in standard software provided a weight can be specified for each observation. An additional advantage of the method is that it easily incorporates multiple mediators of a categorical, discrete or continuous nature. Copyright © 2013 John Wiley & Sons, Ltd.

  2. Estimates of Ground Temperature and Atmospheric Moisture from CERES Observations

    NASA Technical Reports Server (NTRS)

    Wu, Man Li C.; Schubert, Siegfried; Einaudi, Franco (Technical Monitor)

    2000-01-01

    A method is developed to retrieve surface ground temperature (Tg) and atmospheric moisture using clear sky fluxes (CSF) from CERES-TRMM observations. In general, the clear sky outgoing long-wave radiation (CLR) is sensitive to upper-level moisture (q_h) over wet regions and to Tg over dry regions. The clear sky window flux from 800 to 1200 /cm (RadWn) is sensitive to low-level moisture (q_l) and Tg. Combining these two measurements (CLR and RadWn), Tg and q_h can be estimated over land, while q_h and q_l can be estimated over the oceans. The approach capitalizes on the availability of satellite estimates of CLR and RadWn and other auxiliary satellite data. The basic methodology employs off-line forward radiative transfer calculations to generate synthetic CSF data from two different global 4-dimensional data assimilation products. Simple linear regression is used to relate discrepancies in CSF to discrepancies in Tg, q_h and q_l. The slopes of the regression lines define sensitivity parameters that can be exploited to help interpret mismatches between satellite observations and model-based estimates of CSF. For illustration, we analyze the discrepancies in the CSF between an early implementation of the Goddard Earth Observing System Data Assimilation System (GEOS-DAS) and a recent operational version of the European Center for Medium-Range Weather Prediction data assimilation system. In particular, our analysis of synthetic total and window region CSF differences (computed from two different assimilated data sets) shows that simple linear regression employing ΔTg and a broad-layer Δq_l from 500 hPa to the surface and Δq_h from 200 to 500 hPa provides a good approximation to the full radiative transfer calculations, typically explaining more than 90% of the 6-hourly variance in the flux differences. These simple regression relations can be inverted to "retrieve" the errors in the geophysical parameters. Uncertainties (normalized by standard deviation) in the monthly mean retrieved parameters range from 7% for ΔTg to about 20% for Δq_l. Our initial application of the methodology employed an early CERES-TRMM data set (CLR and RadWn) to assess the quality of the GEOS-2 data. The results showed that over the tropical and subtropical oceans GEOS-2 is, in general, too wet in the upper troposphere (mean bias of 0.99 mm) and too dry in the lower troposphere (mean bias of -4.7 mm). We note that these errors, as well as a cold bias in Tg, have largely been corrected in the current version of GEOS-2 with the introduction of a land surface model, a moist turbulence scheme and the assimilation of SSM/I total precipitable water.

  3. Comparison of the Relationship between Women's Empowerment and Fertility between Single-child and Multi-child Families

    PubMed Central

    Saberi, Tahereh; Ehsanpour, Soheila; Mahaki, Behzad; Kohan, Shahnaz

    2018-01-01

    Background: The reduction in fertility and the increase in the number of single-child families in Iran will result in an increased risk of population aging. One of the factors affecting fertility is women's empowerment. This study aimed to evaluate the relationship between women's empowerment and fertility in single-child and multi-child families. Materials and Methods: This case-control study was conducted among 350 women (120 who had only 1 child as the case group and 230 who had 2 or more children as the control group) of 15–49 years of age in Isfahan, Iran, in 2016. For data collection, a 2-part questionnaire was designed. Data were analyzed using the independent t-test, Chi-square test, and logistic regression analysis. Results: The difference between the average women's empowerment scores in the case group (54.08 (9.88)) and the control group (51.47 (8.57)) was significant (p = 0.002). Simple logistic regression analysis showed that education below diploma level, compared to postgraduate education (OR = 0.21, p = 0.001), and being a housewife, compared to being employed (OR = 0.45, p = 0.004), decreased the odds of having only 1 child. Multiple logistic regression results showed that the relationship between women's empowerment and fertility was not significant (p = 0.265). Conclusions: Although women in single-child families were more empowered, this was not the main reason for their preference to have only 1 child. In fact, educated and employed women postpone marriage and childbearing and limit fertility to only 1 child despite their desire. PMID:29628961

  4. Partial F-tests with multiply imputed data in the linear regression framework via coefficient of determination.

    PubMed

    Chaurasia, Ashok; Harel, Ofer

    2015-02-10

    Tests for regression coefficients such as global, local, and partial F-tests are common in applied research. In the framework of multiple imputation, there are several papers addressing tests for regression coefficients. However, for simultaneous hypothesis testing, the existing methods are computationally intensive because they involve calculation with vectors and (inversion of) matrices. In this paper, we propose a simple method based on the scalar entity, coefficient of determination, to perform (global, local, and partial) F-tests with multiply imputed data. The proposed method is evaluated using simulated data and applied to suicide prevention data. Copyright © 2014 John Wiley & Sons, Ltd.
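
    With complete data, the scalar relationship the method builds on is the standard expression of a partial F-test in terms of the coefficients of determination of the full and reduced models; a short sketch with illustrative values follows.

    ```python
    from scipy import stats

    def partial_f_test(r2_full, r2_reduced, n, p_full, q):
        """F statistic and p-value for testing q extra predictors via R-squared."""
        df1 = q
        df2 = n - p_full - 1                  # residual degrees of freedom of the full model
        f = ((r2_full - r2_reduced) / df1) / ((1.0 - r2_full) / df2)
        return f, stats.f.sf(f, df1, df2)

    # Illustrative values: n = 120 observations, full model with 6 predictors, 2 of them tested
    f_stat, p_val = partial_f_test(r2_full=0.62, r2_reduced=0.55, n=120, p_full=6, q=2)
    print(f"F = {f_stat:.2f}, p = {p_val:.4f}")
    ```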

  5. Evaluation of interpolation techniques for the creation of gridded daily precipitation (1 × 1 km2); Cyprus, 1980-2010

    NASA Astrophysics Data System (ADS)

    Camera, Corrado; Bruggeman, Adriana; Hadjinicolaou, Panos; Pashiardis, Stelios; Lange, Manfred A.

    2014-01-01

    High-resolution gridded daily data sets are essential for natural resource management and for analyses of climate change and its effects. This study aims to evaluate the performance of 15 simple or complex interpolation techniques in reproducing daily precipitation at a resolution of 1 km2 over topographically complex areas. Methods are tested considering two different observation densities and different rainfall amounts. We used rainfall data recorded at 74 and 145 observational stations, respectively, spread over the 5760 km2 of the Republic of Cyprus, in the Eastern Mediterranean. Regression analyses utilizing geographical copredictors and neighboring interpolation techniques were evaluated both in isolation and combined. Linear multiple regression (LMR) and geographically weighted regression (GWR) methods, including a step-wise selection of covariables, were tested, as well as inverse distance weighting (IDW), kriging, and 3D thin plate splines (TPS). The relative rank of the different techniques changes with station density and rainfall amount. Our results indicate that TPS performs well for low station density and large-scale events, and also when coupled with regression models; it performs poorly for high station density. The opposite is observed when using IDW. Simple IDW performs best for local events, while a combination of step-wise GWR and IDW proves to be the best method for large-scale events and high station density. This study indicates that the use of step-wise regression with a variable set of geographic parameters can improve the interpolation of large-scale events because it facilitates the representation of local climate dynamics.
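
    A minimal sketch of the simplest technique in the comparison, inverse distance weighting, is shown below with invented station coordinates and daily rainfall amounts.

    ```python
    import numpy as np

    def idw(xy_stations, values, xy_target, power=2.0):
        """Inverse-distance-weighted estimate at a single target location."""
        d = np.sqrt(((xy_stations - xy_target) ** 2).sum(axis=1))
        if np.any(d == 0):                    # target coincides with a station
            return values[d == 0][0]
        w = 1.0 / d ** power
        return np.sum(w * values) / np.sum(w)

    stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # km (invented)
    rain = np.array([5.0, 12.0, 0.0, 8.0])                                     # mm/day (invented)
    print(f"IDW estimate at (4, 6): {idw(stations, rain, np.array([4.0, 6.0])):.1f} mm")
    ```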

  6. Landslide susceptibility mapping using frequency ratio, logistic regression, artificial neural networks and their comparison: A case study from Kat landslides (Tokat—Turkey)

    NASA Astrophysics Data System (ADS)

    Yilmaz, Işık

    2009-06-01

    The purpose of this study is to compare the landslide susceptibility mapping methods of frequency ratio (FR), logistic regression and artificial neural networks (ANN) applied in Kat County (Tokat, Turkey). A digital elevation model (DEM) was first constructed using GIS software. Landslide-related factors such as geology, faults, drainage system, topographical elevation, slope angle, slope aspect, topographic wetness index (TWI) and stream power index (SPI) were used in the landslide susceptibility analyses. Landslide susceptibility maps were produced from the frequency ratio, logistic regression and neural network models, and they were then compared by means of their validations. High accuracies of the susceptibility maps for all three models were obtained from the comparison of the landslide susceptibility maps with the known landslide locations. However, respective area under curve (AUC) values of 0.826, 0.842 and 0.852 for frequency ratio, logistic regression and artificial neural networks showed that the map obtained from the ANN model is more accurate than the other models, although the accuracies of all models can be considered relatively similar. The results obtained in this study also showed that the frequency ratio model can be used as a simple tool in the assessment of landslide susceptibility when a sufficient amount of data is available. The input process, calculations and output process are very simple and readily understood in the frequency ratio model, whereas logistic regression and neural networks require the conversion of data to ASCII or other formats; it is also very hard to process large amounts of data in the statistical package.
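
    The frequency ratio model referred to above reduces to a simple ratio of proportions per factor class; the sketch below uses invented pixel counts for one conditioning factor.

    ```python
    import numpy as np

    class_pixels = np.array([40_000, 25_000, 20_000, 15_000])   # pixels per slope class (invented)
    landslide_pixels = np.array([120, 310, 520, 250])           # landslide pixels per class (invented)

    # FR = (share of landslide pixels in the class) / (share of all pixels in the class);
    # FR > 1 marks classes that are more landslide-prone than average.
    fr = (landslide_pixels / landslide_pixels.sum()) / (class_pixels / class_pixels.sum())
    for i, value in enumerate(fr):
        print(f"class {i}: FR = {value:.2f}")

    # A cell's susceptibility index is then the sum of the FR values of the classes
    # it falls in, across all conditioning factors.
    ```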

  7. Green Building Implementation at Schools in North Sulawesi, Indonesia

    NASA Astrophysics Data System (ADS)

    Harimu, D. A. J.; Tumanduk, M. S. S. S.

    2018-02-01

    This research aims at investigating the implementation of green building at schools in North Sulawesi, Indonesia, and at analysing the relationship between the implementation of the green building concept at school and students' green behaviour. The research is a survey with a quantitative descriptive method. The analysis units were selected purposively, namely schools that had implemented the green building concept: Manado's 3rd Public Vocational High School, Lokon High School at Tomohon, Manado Independent School at North Minahasa, and Tondano's 3rd Public Vocational High School. Data were collected by observation and questionnaire. The green building assessment criteria for the analysis units were taken from Greenship Existing Building ver 1. Four main points were assessed: energy conservation and efficiency; water conservation; indoor health and comfort; and waste management. The analysis technique used in this research is simple regression analysis. The result of the research shows that there is a significant relation between green building implementation at school and students' green behaviour. This result accords with Gestalt psychology theories, in that architecture can change the user's behaviour.

  8. Predicting Diameter at Breast Height from Stump Diameters for Northeastern Tree Species

    Treesearch

    Eric H. Wharton

    1984-01-01

    Presents equations to predict diameter at breast height from stump diameter measurements for 17 northeastern tree species. Simple linear regression was used to develop the equations. Application of the equations is discussed.
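
    A minimal sketch of the fitting step, using simulated stump/DBH pairs for a single species rather than the published measurements, is shown below.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    stump = rng.uniform(15, 60, size=40)                  # stump diameters, cm (simulated)
    dbh = 0.85 * stump - 1.0 + rng.normal(0, 1.5, 40)     # diameters at breast height, cm (simulated)

    slope, intercept = np.polyfit(stump, dbh, deg=1)      # simple linear regression
    print(f"DBH = {slope:.2f} * stump + {intercept:.2f}")
    print(f"predicted DBH for a 35 cm stump: {slope * 35 + intercept:.1f} cm")
    ```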

  9. Measuring the statistical validity of summary meta‐analysis and meta‐regression results for use in clinical practice

    PubMed Central

    Riley, Richard D.

    2017-01-01

    An important question for clinicians appraising a meta‐analysis is: are the findings likely to be valid in their own practice—does the reported effect accurately represent the effect that would occur in their own clinical population? To this end we advance the concept of statistical validity—where the parameter being estimated equals the corresponding parameter for a new independent study. Using a simple (‘leave‐one‐out’) cross‐validation technique, we demonstrate how we may test meta‐analysis estimates for statistical validity using a new validation statistic, Vn, and derive its distribution. We compare this with the usual approach of investigating heterogeneity in meta‐analyses and demonstrate the link between statistical validity and homogeneity. Using a simulation study, the properties of Vn and the Q statistic are compared for univariate random effects meta‐analysis and a tailored meta‐regression model, where information from the setting (included as model covariates) is used to calibrate the summary estimate to the setting of application. Their properties are found to be similar when there are 50 studies or more, but for fewer studies Vn has greater power but a higher type 1 error rate than Q. The power and type 1 error rate of Vn are also shown to depend on the within‐study variance, between‐study variance, study sample size, and the number of studies in the meta‐analysis. Finally, we apply Vn to two published meta‐analyses and conclude that it usefully augments standard methods when deciding upon the likely validity of summary meta‐analysis estimates in clinical practice. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:28620945

  10. Measuring the statistical validity of summary meta-analysis and meta-regression results for use in clinical practice.

    PubMed

    Willis, Brian H; Riley, Richard D

    2017-09-20

    An important question for clinicians appraising a meta-analysis is: are the findings likely to be valid in their own practice-does the reported effect accurately represent the effect that would occur in their own clinical population? To this end we advance the concept of statistical validity-where the parameter being estimated equals the corresponding parameter for a new independent study. Using a simple ('leave-one-out') cross-validation technique, we demonstrate how we may test meta-analysis estimates for statistical validity using a new validation statistic, Vn, and derive its distribution. We compare this with the usual approach of investigating heterogeneity in meta-analyses and demonstrate the link between statistical validity and homogeneity. Using a simulation study, the properties of Vn and the Q statistic are compared for univariate random effects meta-analysis and a tailored meta-regression model, where information from the setting (included as model covariates) is used to calibrate the summary estimate to the setting of application. Their properties are found to be similar when there are 50 studies or more, but for fewer studies Vn has greater power but a higher type 1 error rate than Q. The power and type 1 error rate of Vn are also shown to depend on the within-study variance, between-study variance, study sample size, and the number of studies in the meta-analysis. Finally, we apply Vn to two published meta-analyses and conclude that it usefully augments standard methods when deciding upon the likely validity of summary meta-analysis estimates in clinical practice. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

  11. Potential pitfalls when denoising resting state fMRI data using nuisance regression.

    PubMed

    Bright, Molly G; Tench, Christopher R; Murphy, Kevin

    2017-07-01

    In resting state fMRI, it is necessary to remove signal variance associated with noise sources, leaving cleaned fMRI time-series that more accurately reflect the underlying intrinsic brain fluctuations of interest. This is commonly achieved through nuisance regression, in which the fit is calculated of a noise model of head motion and physiological processes to the fMRI data in a General Linear Model, and the "cleaned" residuals of this fit are used in further analysis. We examine the statistical assumptions and requirements of the General Linear Model, and whether these are met during nuisance regression of resting state fMRI data. Using toy examples and real data we show how pre-whitening, temporal filtering and temporal shifting of regressors impact model fit. Based on our own observations, existing literature, and statistical theory, we make the following recommendations when employing nuisance regression: pre-whitening should be applied to achieve valid statistical inference of the noise model fit parameters; temporal filtering should be incorporated into the noise model to best account for changes in degrees of freedom; temporal shifting of regressors, although merited, should be achieved via optimisation and validation of a single temporal shift. We encourage all readers to make simple, practical changes to their fMRI denoising pipeline, and to regularly assess the appropriateness of the noise model used. By negotiating the potential pitfalls described in this paper, and by clearly reporting the details of nuisance regression in future manuscripts, we hope that the field will achieve more accurate and precise noise models for cleaning the resting state fMRI time-series. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
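
    A minimal sketch of the nuisance-regression step itself follows: a General Linear Model of nuisance regressors (here an intercept, motion parameters and their first derivatives) is fit to each voxel time series and the residuals are kept. The data are simulated, and the pre-whitening and temporal filtering the paper recommends handling explicitly are omitted here.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_vols, n_voxels = 200, 500
    motion = rng.normal(size=(n_vols, 6))                       # 6 motion parameters (simulated)
    motion_deriv = np.vstack([np.zeros((1, 6)), np.diff(motion, axis=0)])
    nuisance = np.column_stack([np.ones(n_vols), motion, motion_deriv])

    signal = rng.normal(size=(n_vols, n_voxels)) + 0.5 * motion[:, :1]   # voxels contaminated by motion

    beta, *_ = np.linalg.lstsq(nuisance, signal, rcond=None)    # GLM fit of the noise model
    cleaned = signal - nuisance @ beta                          # residual ("cleaned") time series
    print(cleaned.shape)
    ```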

  12. Estimation of sex and stature using anthropometry of the upper extremity in an Australian population.

    PubMed

    Howley, Donna; Howley, Peter; Oxenham, Marc F

    2018-06-01

    Stature and a further 8 anthropometric dimensions were recorded from the arms and hands of a sample of 96 staff and students from the Australian National University and The University of Newcastle, Australia. These dimensions were used to create simple and multiple logistic regression models for sex estimation and simple and multiple linear regression equations for stature estimation of a contemporary Australian population. Overall sex classification accuracies using the models created were comparable to similar studies. The stature estimation models achieved standard errors of estimates (SEE) which were comparable to and in many cases lower than those achieved in similar research. Generic, non-sex-specific models achieved similar SEEs and R² values to the sex-specific models indicating stature may be accurately estimated when sex is unknown. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. A simple method for detection of changes in relations between solute concentration and stream discharge

    NASA Astrophysics Data System (ADS)

    Huntington, T. G.; Shanley, J. B.

    2015-12-01

    The relations between constituent concentration and stream discharge (C/Q relations) are fundamental to the estimation of fluxes or loads in biogeochemical studies. C/Q relations are useful for understanding nutrient, trace element, and contaminant behavior in response to storm- and snowmelt-related changes in discharge. The shape and seasonal variation of C/Q relations provide information about the availability, mobilization, and release of solutes to streams. The properties of C/Q relations can also point to flowpaths, antecedent moisture conditions, and solute availability. Changes in C/Q relations over time for certain constituents, like dissolved organic carbon (DOC), may be indicative of changes in supply resulting from changes in climate, vegetation, or land use and land cover. The focus of this presentation is a simple method for detecting change in C/Q relations using the LOADEST regression model. The LOADEST model fits a seasonally variable C/Q relation to discrete water quality data. For a continuously gauged stream or river, a relatively long record of C/Q data can be partitioned into distinct periods and a regression model can be determined for each period. By running each model with the same discharge record and subsequently plotting each flux time series, differences between models can be visualized graphically. Plotting differences between periods (models) illustrates at what times of year the differences are largest. Running each model with a range of discharges for each day of the year provides additional insight into whether the changes in C/Q relations are evident at all levels of discharge or only at specific levels. The DOC record (1991 to 2014) from a research watershed at Sleepers River in Vermont was used in this analysis. The analysis showed that there have been increases in DOC concentration for certain seasons and rates of discharge.
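
    A reduced sketch of the kind of seasonally varying C/Q regression that LOADEST fits is shown below, with log-discharge and a single annual harmonic (the full LOADEST model contains additional terms) and simulated concentrations and discharges. Fitting the same model to two sub-periods and driving both fits with a common discharge record, as described above, then isolates changes in the C/Q relation.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    t = np.arange(0, 6, 1 / 52)                               # decimal years, weekly samples
    q = np.exp(rng.normal(0, 0.8, size=t.size))               # discharge (simulated)
    log_c = 0.4 * np.log(q) + 0.3 * np.sin(2 * np.pi * t) + rng.normal(0, 0.2, t.size)

    X = np.column_stack([np.ones_like(t), np.log(q),
                         np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    beta, *_ = np.linalg.lstsq(X, log_c, rcond=None)          # least-squares fit of ln(C)
    print("fitted coefficients (intercept, ln Q, sin, cos):", beta.round(2))
    ```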

  14. A Powerful Test for Comparing Multiple Regression Functions.

    PubMed

    Maity, Arnab

    2012-09-01

    In this article, we address the important problem of comparing two or more population regression functions. Recently, Pardo-Fernández, Van Keilegom and González-Manteiga (2007) developed test statistics for simple nonparametric regression models, Y_ij = θ_j(Z_ij) + σ_j(Z_ij) ε_ij, based on empirical distributions of the errors in each population j = 1, …, J. In this paper, we propose a test for equality of the θ_j(·) based on the concept of generalized likelihood ratio type statistics. We also generalize our test to other nonparametric regression setups, e.g., nonparametric logistic regression, where the log-likelihood for population j is any general smooth function [Formula: see text]. We describe a resampling procedure to obtain the critical values of the test. In addition, we present a simulation study to evaluate the performance of the proposed test and compare our results to those in Pardo-Fernández et al. (2007).
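
    As a loose, much-simplified analogue of the resampling idea (linear fits and a residual-sum-of-squares statistic instead of the nonparametric estimators and generalized likelihood ratio statistic used in the paper), the following Python sketch tests equality of two regression functions by permuting population labels; all data are synthetic.

        import numpy as np

        rng = np.random.default_rng(2)

        def rss(x, y):
            X = np.column_stack([np.ones_like(x), x])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            return np.sum((y - X @ beta) ** 2)

        def gain(x, y, g):
            # Improvement of separate per-population fits over a single pooled fit.
            return rss(x, y) - (rss(x[g == 0], y[g == 0]) + rss(x[g == 1], y[g == 1]))

        x = rng.uniform(0.0, 1.0, size=100)
        g = rng.integers(0, 2, size=100)
        y = 1.0 + 2.0 * x + 0.5 * g + rng.normal(0.0, 0.3, size=100)

        observed = gain(x, y, g)
        resampled = np.array([gain(x, y, rng.permutation(g)) for _ in range(999)])
        p_value = (1 + np.sum(resampled >= observed)) / (len(resampled) + 1)
        print(f"resampling p-value for equal regression functions: {p_value:.3f}")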

  15. MIS Score: Prediction Model for Minimally Invasive Surgery.

    PubMed

    Hu, Yuanyuan; Cao, Jingwei; Hou, Xianzeng; Liu, Guangcun

    2017-03-01

    Reports suggest that patients with spontaneous intracerebral hemorrhage (ICH) can benefit from minimally invasive surgery, but the inclusion criterion for operation is controversial. This article analyzes factors affecting the 30-day prognoses of patients who have received minimally invasive surgery and proposes a simple grading scale that represents clinical operation effectiveness. The records of 101 patients with spontaneous ICH presenting to Qianfoshan Hospital were reviewed. Factors affecting their 30-day prognosis were identified by logistic regression. A clinical grading scale, the MIS score, was developed by weighting the independent predictors based on these factors. Univariate analysis revealed that the factors that affect 30-day prognosis include Glasgow coma scale score (P < 0.01), age ≥80 years (P < 0.05), blood glucose (P < 0.01), ICH volume (P < 0.01), operation time (P < 0.05), and presence of intraventricular hemorrhage (P < 0.001). Logistic regression revealed that the factors that affect 30-day prognosis include Glasgow coma scale score (P < 0.05), age (P < 0.05), ICH volume (P < 0.01), and presence of intraventricular hemorrhage (P < 0.05). The MIS score was developed accordingly; 39 patients with MIS scores of 0-1 had favorable prognoses, whereas only 9 patients with MIS scores of 2-5 had poor prognoses. The MIS score is a simple grading scale that can be used to select patients who are suited for minimally invasive drainage surgery. When the MIS score is 0-1, minimally invasive surgery is strongly recommended for patients with spontaneous cerebral hemorrhage. The scale merits further prospective studies to fully determine its efficacy. Copyright © 2016 Elsevier Inc. All rights reserved.
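
    The general recipe of turning logistic regression coefficients into a simple clinical grading scale can be sketched as follows; the coefficients, predictor names and weights below are invented for illustration and are not the published MIS score.

        # Hypothetical fitted log-odds coefficients for a poor 30-day prognosis.
        logistic_coefficients = {
            "low_GCS": 1.4,
            "age_80_or_older": 0.9,
            "large_ICH_volume": 1.8,
            "intraventricular_hemorrhage": 1.1,
        }

        # Round each coefficient to the nearest integer to obtain simple weights.
        weights = {k: int(round(v)) for k, v in logistic_coefficients.items()}

        def grading_score(patient):
            """Sum the weights of the predictors present in a patient."""
            return sum(weights[k] for k, present in patient.items() if present)

        example_patient = {
            "low_GCS": True,
            "age_80_or_older": False,
            "large_ICH_volume": True,
            "intraventricular_hemorrhage": False,
        }
        print("hypothetical score:", grading_score(example_patient))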

  16. Obstructive sleep apnea exaggerates cognitive dysfunction in stroke patients.

    PubMed

    Zhang, Yan; Wang, Wanhua; Cai, Sijie; Sheng, Qi; Pan, Shenggui; Shen, Fang; Tang, Qing; Liu, Yang

    2017-05-01

    Obstructive sleep apnea (OSA) is very common in stroke survivors. It potentially worsens the cognitive dysfunction and inhibits their functional recovery. However, whether OSA independently damages the cognitive function in stroke patients is unclear. A simple method for evaluating OSA-induced cognitive impairment is also lacking. Forty-four stroke patients six weeks after onset and 24 non-stroke patients with snoring were recruited for the polysomnographic study of OSA and sleep architecture. Their cognitive status was evaluated with a validated Chinese version of the Cambridge Prospective Memory Test. The relationships between memory deficits and respiratory, sleep, and dementia-related clinical variables were analyzed with correlation and multiple linear regression tests. OSA significantly and independently damaged time- and event-based prospective memory in stroke patients, although it had less power than the stroke itself. The impairment of prospective memory was correlated with increased apnea-hypopnea index, decreased minimal and mean levels of peripheral oxygen saturation, and disrupted sleep continuity (reduced sleep efficiency and increased microarousal index). Further regression analysis identified the minimal level of peripheral oxygen saturation and sleep efficiency as the two most important predictors of the decreased time-based prospective memory in stroke patients. OSA independently contributes to the cognitive dysfunction in stroke patients, potentially through OSA-caused hypoxemia and sleep discontinuity. The prospective memory test is a simple but sensitive method to detect OSA-induced cognitive impairment in stroke patients. Proper therapies of OSA might improve the cognitive function and increase the quality of life of stroke patients. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Multitrait, Random Regression, or Simple Repeatability Model in High-Throughput Phenotyping Data Improve Genomic Prediction for Wheat Grain Yield.

    PubMed

    Sun, Jin; Rutkoski, Jessica E; Poland, Jesse A; Crossa, José; Jannink, Jean-Luc; Sorrells, Mark E

    2017-07-01

    High-throughput phenotyping (HTP) platforms can be used to measure traits that are genetically correlated with wheat (Triticum aestivum L.) grain yield across time. Incorporating such secondary traits in the multivariate pedigree and genomic prediction models would be desirable to improve indirect selection for grain yield. In this study, we evaluated three statistical models, simple repeatability (SR), multitrait (MT), and random regression (RR), for the longitudinal data of secondary traits and compared the impact of the proposed models for secondary traits on their predictive abilities for grain yield. Grain yield and secondary traits, canopy temperature (CT) and normalized difference vegetation index (NDVI), were collected in five diverse environments for 557 wheat lines with available pedigree and genomic information. A two-stage analysis was applied for pedigree and genomic selection (GS). First, secondary traits were fitted by SR, MT, or RR models, separately, within each environment. Then, best linear unbiased predictions (BLUPs) of secondary traits from the above models were used in the multivariate prediction models to compare predictive abilities for grain yield. Predictive ability was substantially improved, by 70% on average, in multivariate pedigree and genomic models when secondary traits were included in both training and test populations. Additionally, (i) predictive abilities varied only slightly among the MT, RR, and SR models in this data set, (ii) results indicated that including BLUPs of secondary traits from the MT model was best in severe drought, and (iii) the RR model was slightly better than the SR and MT models in the drought environment. Copyright © 2017 Crop Science Society of America.

  18. Principal component regression analysis with SPSS.

    PubMed

    Liu, R X; Kuang, J; Gong, Q; Hou, X L

    2003-06-01

    The paper introduces the indices used for multicollinearity diagnosis, the basic principle of principal component regression, and the method for determining the 'best' equation. An example is used to describe how to perform principal component regression analysis with SPSS 10.0, covering the full calculation process of the principal component regression and the associated linear regression, factor analysis, descriptives, compute variable and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity; carrying it out in SPSS simplifies and speeds up the analysis while maintaining statistical accuracy.
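
    Outside SPSS, the same idea can be sketched in a few lines of Python (a generic principal component regression, not the SPSS 10.0 workflow described in the paper): standardize the predictors, extract a few principal components, and regress the outcome on those components to sidestep multicollinearity.

        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Generate predictors that are nearly collinear (low effective rank).
        X, y = make_regression(n_samples=200, n_features=8, effective_rank=3,
                               noise=5.0, random_state=0)

        # Principal component regression: scale -> PCA -> ordinary least squares.
        pcr = make_pipeline(StandardScaler(), PCA(n_components=3), LinearRegression())
        pcr.fit(X, y)
        print("R^2 of the principal component regression:", round(pcr.score(X, y), 3))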

  19. Hidden Connections between Regression Models of Strain-Gage Balance Calibration Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert

    2013-01-01

    Hidden connections between regression models of wind tunnel strain-gage balance calibration data are investigated. These connections become visible whenever balance calibration data is supplied in its design format and both the Iterative and Non-Iterative Method are used to process the data. First, it is shown how the regression coefficients of the fitted balance loads of a force balance can be approximated by using the corresponding regression coefficients of the fitted strain-gage outputs. Then, data from the manual calibration of the Ames MK40 six-component force balance is chosen to illustrate how estimates of the regression coefficients of the fitted balance loads can be obtained from the regression coefficients of the fitted strain-gage outputs. The study illustrates that load predictions obtained by applying the Iterative or the Non-Iterative Method originate from two related regression solutions of the balance calibration data as long as balance loads are given in the design format of the balance, gage outputs behave highly linearly, strict statistical quality metrics are used to assess regression models of the data, and regression model term combinations of the fitted loads and gage outputs can be obtained by a simple variable exchange.

  20. Actual and estimated costs of disposable materials used during surgical procedures.

    PubMed

    Toyabe, Shin-Ichi; Cao, Pengyu; Kurashima, Sachiko; Nakayama, Yukiko; Ishii, Yuko; Hosoyama, Noriko; Akazawa, Kouhei

    2005-07-01

    It is difficult to estimate precisely the costs of disposable materials used during surgical operations. To evaluate these costs, we calculated the actual costs of disposable materials used in 59 operations by accounting for every disposable item used in each operation. The costs of the disposable materials varied significantly from operation to operation (US$ 38-4230 per operation), and the median [25th and 75th percentiles] total disposable material cost per operation was US$ 686 [205 and 993]. Multiple regression analysis with a stepwise regression method showed that costs of disposable materials significantly correlated only with operation time (p<0.001). Based on these results, we propose a simple method for estimating costs of disposable materials by measuring operation time, and we found that the method gives reliable results. Since costs of disposable materials used during surgical operations are considerable, precise estimation of the costs is essential for hospital cost accounting. Our method should be useful for planning hospital administration strategies.

  1. Development of statistical linear regression model for metals from transportation land uses.

    PubMed

    Maniquiz, Marla C; Lee, Soyoung; Lee, Eunju; Kim, Lee-Hyung

    2009-01-01

    Transportation land uses with impervious surfaces, such as highways, parking lots, roads, and bridges, are recognized as highly polluted non-point sources (NPSs) in urban areas. Pollutants from urban transportation accumulate on paved surfaces during dry periods and are washed off during storms. In Korea, the identification and monitoring of NPSs still represent a great challenge. Since 2004, the Ministry of Environment (MOE) has been engaged in several research and monitoring efforts to develop stormwater management policies and treatment systems for future implementation. Data from 131 storm events at eleven sites, collected between May 2004 and September 2008, were analyzed to identify correlation relationships between particulates and metals, and to develop a simple linear regression (SLR) model to estimate event mean concentration (EMC). Results indicate that there was no significant relationship between metal and TSS EMCs. Although the SLR estimation models did not provide useful predictions, they are valuable indicators of the high uncertainty inherent in NPS pollution. Therefore, long-term monitoring employing proper methods and precise statistical analysis of the data should be undertaken to eliminate these uncertainties.

  2. Conditional Monte Carlo randomization tests for regression models.

    PubMed

    Parhat, Parwen; Rosenberger, William F; Diao, Guoqing

    2014-08-15

    We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve the size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
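
    A generic sketch of a Monte Carlo randomization test for a regression-based treatment effect is shown below, assuming a simple permutation of treatment labels as the re-randomization scheme; it is not the authors' conditional procedure, and the data and effect sizes are synthetic.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 80
        covariate = rng.normal(size=n)
        treatment = rng.integers(0, 2, size=n)        # hypothetical allocation
        outcome = 0.5 * covariate + 0.4 * treatment + rng.normal(size=n)

        def treatment_coef(t):
            # Treatment coefficient from a linear regression adjusting for the covariate.
            X = np.column_stack([np.ones(n), covariate, t])
            beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
            return beta[2]

        observed = treatment_coef(treatment)
        # Monte Carlo re-randomization: permute the treatment labels under the null.
        null = np.array([treatment_coef(rng.permutation(treatment)) for _ in range(2000)])
        p_value = (1 + np.sum(np.abs(null) >= abs(observed))) / (len(null) + 1)
        print(f"randomization-test p-value: {p_value:.3f}")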

  3. An Experimental Investigation of Mechanical Properties in Clay Brick Masonry by Partial Replacement of Fine Aggregate with Clay Brick Waste

    NASA Astrophysics Data System (ADS)

    Kumavat, Hemraj Ramdas

    2016-09-01

    The compressive stress-strain behavior and mechanical properties of clay brick masonry and its constituents, clay bricks and mortar, have been studied through several laboratory tests. Using linear regression analysis, an analytical model has been proposed for obtaining the stress-strain curves for masonry that can be used in analysis and design procedures. The model requires only the compressive strengths of bricks and mortar as input data, which can be easily obtained experimentally. The analytical model was developed from the experimental results for Young's modulus and compressive strength. Simple relationships have been identified for obtaining the modulus of elasticity of bricks, mortar, and masonry from their corresponding compressive strengths. It was observed that the proposed analytical model provides a reasonably good prediction of the stress-strain curves when compared with the experimental curves.

  4. Repressive coping among British college women: A potential protective factor against body image concerns, drive for thinness, and bulimia symptoms.

    PubMed

    Mohiyeddini, Changiz

    2017-09-01

    Repressive coping, as a means of preserving a positive self-image, has been widely explored in the context of dealing with self-evaluative cues. The current study extends this research by exploring whether repressive coping is associated with lower levels of body image concerns, drive for thinness, bulimic symptoms, and higher positive rational acceptance. A sample of 229 female college students was recruited in South London. Repressive coping was measured via the interaction between trait anxiety and defensiveness. The results of moderated regression analysis with simple slope analysis show that compared to non-repressors, repressors reported lower levels of body image concerns, drive for thinness, and bulimic symptoms while exhibiting a higher use of positive rational acceptance. These findings, in line with previous evidence, suggest that repressive coping may be adaptive particularly in the context of body image. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Effects of temperature variation on suicide in five U.S. counties, 1991-2001

    NASA Astrophysics Data System (ADS)

    Dixon, P. G.; McDonald, A. N.; Scheitlin, K. N.; Stapleton, J. E.; Allen, J. S.; Carter, W. M.; Holley, M. R.; Inman, D. D.; Roberts, J. B.

    2007-05-01

    Effects of weather variables on suicide are well-documented, but there is still little consistency among the results of most studies. Nevertheless, most studies show a peak in suicides during the spring season, and this is often attributed to increased temperatures. The purpose of this study is to test the relationship between monthly temperature and monthly suicide, independent of months or seasons, for five counties located across the United States. Harmonic analysis shows that four of the five counties display some seasonal components in the suicide data. However, simple linear regression shows no correlation between suicide and temperature, and discriminant analysis shows that monthly departure from mean annual suicide rates is not a useful tool for identifying months with temperatures that are colder or warmer than the annual average. Therefore, it appears that the seasonality of suicides is due to factors other than temperature.

  6. [Homicide mortality, socioeconomic development, and police violence in the city of São Paulo, Brazil].

    PubMed

    Peres, Maria Fernanda Tourinho; Cardia, Nancy; de Mesquita Neto, Paulo; Dos Santos, Patrícia Carla; Adorno, Sérgio

    2008-04-01

    To analyze the association between police violence and homicide mortality rates, taking into consideration the effect of contextual variables. This was an ecological, cross-sectional study that included the 96 census districts in the City of São Paulo. The association between the variables was analyzed using Spearman's rank correlation and simple and multiple regression analysis. Univariate analysis revealed a strong and significant association between homicide mortality coefficients and all the indicators of socioeconomic development and police violence. After controlling for potential confounding factors, the association between police violence and homicide mortality coefficients remained strong and significant. This significance was lost only after controlling for the size of the resident population. The results indicate that police action that violates basic human rights is not the right answer to urban violence. The combination of homicides from interpersonal violence and deaths from police violence results in negative socialization and promotes further violence.

  7. Comprehensive Chemical Fingerprinting of High-Quality Cocoa at Early Stages of Processing: Effectiveness of Combined Untargeted and Targeted Approaches for Classification and Discrimination.

    PubMed

    Magagna, Federico; Guglielmetti, Alessandro; Liberto, Erica; Reichenbach, Stephen E; Allegrucci, Elena; Gobino, Guido; Bicchi, Carlo; Cordero, Chiara

    2017-08-02

    This study investigates the chemical information in the volatile fractions of high-quality cocoa (Theobroma cacao L., Malvaceae) from different origins (Mexico, Ecuador, Venezuela, Colombia, Java, Trinidad, and São Tomé) produced for fine chocolate. This study explores the evolution of the entire pattern of volatiles in relation to cocoa processing (raw, roasted, steamed, and ground beans). Advanced chemical fingerprinting (e.g., combined untargeted and targeted fingerprinting) with comprehensive two-dimensional gas chromatography coupled with mass spectrometry allows advanced pattern recognition for classification, discrimination, and sensory-quality characterization. The entire data set is analyzed for 595 reliable two-dimensional peak regions, including 130 known analytes and 13 potent odorants. Multivariate analysis with unsupervised exploration (principal component analysis) and simple supervised discrimination methods (Fisher ratios and linear regression trees) reveal informative patterns of similarities and differences and identify characteristic compounds related to sample origin and manufacturing step.

  8. Recovery of zinc and manganese from alkaline and zinc-carbon spent batteries

    NASA Astrophysics Data System (ADS)

    De Michelis, I.; Ferella, F.; Karakaya, E.; Beolchini, F.; Vegliò, F.

    This paper concerns the recovery of zinc and manganese from alkaline and zinc-carbon spent batteries. The metals were dissolved by reductive acid leaching with sulphuric acid in the presence of oxalic acid as reductant. Leaching tests were carried out according to a full factorial design; simple regression equations for Mn, Zn and Fe extraction were then determined from the experimental data as a function of pulp density, sulphuric acid concentration, temperature and oxalic acid concentration. The main effects and interactions were investigated by analysis of variance (ANOVA). This analysis identified the best operating conditions for the reductive acid leaching: 70% of the manganese and 100% of the zinc were extracted after 5 h at 80 °C, with 20% pulp density, 1.8 M sulphuric acid concentration and 59.4 g L⁻¹ of oxalic acid. Both manganese and zinc extraction yields higher than 96% were obtained by using two sequential leaching steps.

  9. An evaluation of the range and availability of intensive smoking cessation services in Ireland.

    PubMed

    Currie, L M; Keogan, S; Campbell, P; Gunning, M; Kabir, Z; Clancy, L

    2010-03-01

    A review of smoking cessation (SC) services in Ireland is a necessary step in improving service planning and provision. To assess the range and availability of intensive SC services in Ireland in 2006. A survey of SC service providers in Ireland was conducted. Descriptive analysis and simple linear regression analysis were used. The response rate was 86.3% (63/73). All service providers surveyed employ evidence-based interventions; the most common form of support is individual counselling, with initial sessions averaging 40 min and weekly review sessions 20 min in duration. Reaching the recommended target of treating 5.0% of smokers does not seem feasible given the current distribution of resources, and there appear to be regional differences in resource allocation. While intensive SC services are available in all four Health Service Executive Areas, it would appear that there is little uniformity or consistency countrywide in the scope and structure of these services.

  10. Spectrophotometric determination of 1-(3-dimethylaminopropyl)-3-ethylcarbodiimide hydrochloride by flow injection analysis.

    PubMed

    Seno, Kunihiko; Matumura, Kazuki; Oshima, Mitsuko; Motomizu, Shoji

    2008-04-01

    1-(3-dimethylaminopropyl)-3-ethylcarbodiimide hydrochloride (EDC.HCl) is a very useful agent to form amide bonds (peptide bonds) in an aqueous medium. A simple and fast detection system was developed using the reaction with pyridine and ethylenediamine in acidic aqueous solution and spectrophotometric flow injection analysis. The absorbances were measured at 400 nm and the reaction was accelerated at 40 °C. The calibration graph showed good linearity from 0 to 10% of EDC.HCl solutions: the regression equation was y = 3.15 × 10⁴x (y, peak area; x, % concentration of EDC.HCl). The RSD was under 1.0%. Sample throughput was 15 h⁻¹. This method was applied to monitoring the EDC.HCl concentration that remained after the anhydration of phthalic acid in water, esterification of acetic acid in methanol or dehydration condensation of malonic acid and ethylenediamine in water.
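
    The reported zero-intercept calibration can be reproduced in form (with invented peak areas and concentrations, not the paper's data) by a least-squares fit forced through the origin:

        import numpy as np

        conc_percent = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])                # % EDC.HCl (hypothetical)
        peak_area = np.array([0.0, 6.2e4, 12.7e4, 18.8e4, 25.4e4, 31.3e4])      # hypothetical responses

        # Zero-intercept least squares: slope = sum(x*y) / sum(x*x).
        slope = np.sum(conc_percent * peak_area) / np.sum(conc_percent ** 2)
        print(f"calibration equation: peak_area = {slope:.3g} * concentration")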

  11. The effects of daily weather variables on psychosis admissions to psychiatric hospitals

    NASA Astrophysics Data System (ADS)

    McWilliams, Stephen; Kinsella, Anthony; O'Callaghan, Eadbhard

    2013-07-01

    Several studies have noted seasonal variations in admission rates of patients with psychotic illnesses. However, the changeable daily meteorological patterns within seasons have never been examined in any great depth in the context of admission rates. A handful of small studies have posed interesting questions regarding a potential link between psychiatric admission rates and meteorological variables such as environmental temperature (especially heat waves) and sunshine. In this study, we used simple non-parametric testing and more complex ARIMA and time-series regression analysis to examine whether daily meteorological patterns (wind speed and direction, barometric pressure, rainfall, sunshine, sunlight and temperature) exert an influence on admission rates for psychotic disorders across 12 regions in Ireland. Although there were some weak but interesting trends for temperature, barometric pressure and sunshine, the meteorological patterns ultimately did not exert a clinically significant influence over admissions for psychosis. Further analysis is needed.

  12. The extraction of simple relationships in growth factor-specific multiple-input and multiple-output systems in cell-fate decisions by backward elimination PLS regression.

    PubMed

    Akimoto, Yuki; Yugi, Katsuyuki; Uda, Shinsuke; Kudo, Takamasa; Komori, Yasunori; Kubota, Hiroyuki; Kuroda, Shinya

    2013-01-01

    Cells use common signaling molecules for the selective control of downstream gene expression and cell-fate decisions. The relationship between signaling molecules and downstream gene expression and cellular phenotypes is a multiple-input and multiple-output (MIMO) system and is difficult to understand due to its complexity. For example, it has been reported that, in PC12 cells, different types of growth factors activate MAP kinases (MAPKs) including ERK, JNK, and p38, and CREB, for selective protein expression of immediate early genes (IEGs) such as c-FOS, c-JUN, EGR1, JUNB, and FOSB, leading to cell differentiation, proliferation and cell death; however, how multiple-inputs such as MAPKs and CREB regulate multiple-outputs such as expression of the IEGs and cellular phenotypes remains unclear. To address this issue, we employed a statistical method called partial least squares (PLS) regression, which involves a reduction of the dimensionality of the inputs and outputs into latent variables and a linear regression between these latent variables. We measured 1,200 data points for MAPKs and CREB as the inputs and 1,900 data points for IEGs and cellular phenotypes as the outputs, and we constructed the PLS model from these data. The PLS model highlighted the complexity of the MIMO system and growth factor-specific input-output relationships of cell-fate decisions in PC12 cells. Furthermore, to reduce the complexity, we applied a backward elimination method to the PLS regression, in which 60 input variables were reduced to 5 variables, including the phosphorylation of ERK at 10 min, CREB at 5 min and 60 min, AKT at 5 min and JNK at 30 min. The simple PLS model with only 5 input variables demonstrated a predictive ability comparable to that of the full PLS model. The 5 input variables effectively extracted the growth factor-specific simple relationships within the MIMO system in cell-fate decisions in PC12 cells.
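
    A minimal PLS regression sketch in the spirit of the MIMO analysis above is shown below, using synthetic inputs and outputs rather than the PC12 signalling data; the backward-elimination step is only hinted at by ranking inputs by coefficient magnitude.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(4)
        X = rng.normal(size=(120, 60))                       # e.g. 60 input variables
        W = rng.normal(size=(60, 5))
        Y = X @ W + rng.normal(scale=0.5, size=(120, 5))     # 5 output variables

        pls = PLSRegression(n_components=3).fit(X, Y)
        print("R^2 of the full PLS model:", round(r2_score(Y, pls.predict(X)), 3))

        # Rank inputs by overall coefficient magnitude as a crude starting point
        # for backward elimination of uninformative inputs.
        coef = pls.coef_
        if coef.shape[0] != X.shape[1]:     # handle either coefficient orientation
            coef = coef.T
        importance = np.abs(coef).sum(axis=1)
        print("five most influential inputs:", np.argsort(importance)[::-1][:5])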

  13. Spatial scale analysis in geophysics - Integrating surface and borehole geophysics in groundwater studies

    USGS Publications Warehouse

    Paillet, Frederick L.; Singhroy, V.H.; Hansen, D.T.; Pierce, R.R.; Johnson, A.I.

    2002-01-01

    Integration of geophysical data obtained at various scales can bridge the gap between localized data from boreholes and site-wide data from regional survey profiles. Specific approaches to such analysis include: 1) comparing geophysical measurements in boreholes with the same measurement made from the surface; 2) regressing geophysical data obtained in boreholes with water-sample data from screened intervals; 3) using multiple, physically independent measurements in boreholes to develop multivariate response models for surface geophysical surveys; 4) defining subsurface cell geometry for most effective survey inversion methods; and 5) making geophysical measurements in boreholes to serve as independent verification of geophysical interpretations. Integrated analysis of surface electromagnetic surveys and borehole geophysical logs at a study site in south Florida indicates that salinity of water in the surficial aquifers is controlled by a simple wedge of seawater intrusion along the coast and by a complex pattern of upward brine seepage from deeper aquifers throughout the study area. This interpretation was verified by drilling three additional test boreholes in carefully selected locations.

  14. Image analysis of pubic bone for age estimation in a computed tomography sample.

    PubMed

    López-Alcaraz, Manuel; González, Pedro Manuel Garamendi; Aguilera, Inmaculada Alemán; López, Miguel Botella

    2015-03-01

    Radiology has demonstrated great utility for age estimation, but most of the studies are based on metrical and morphological methods used to construct an identification profile. A simple image analysis-based method is presented, aimed at correlating the bony tissue ultrastructure with several variables obtained from the grey-level histogram (GLH) of computed tomography (CT) sagittal sections of the pubic symphysis surface and the pubic body, and relating them with age. The CT sample consisted of 169 hospital Digital Imaging and Communications in Medicine (DICOM) archives of known sex and age. The calculated multiple regression models showed a maximum R² of 0.533 for females and 0.726 for males, with high intra- and inter-observer agreement. The suggested method is considered useful not only for constructing an identification profile during virtopsy, but also for further studies seeking a quantitative correlate of tissue ultrastructure characteristics, without complex and expensive methods beyond image analysis.

  15. Age estimation standards for a Western Australian population using the coronal pulp cavity index.

    PubMed

    Karkhanis, Shalmira; Mack, Peter; Franklin, Daniel

    2013-09-10

    Age estimation is a vital aspect in creating a biological profile and aids investigators by narrowing down potentially matching identities from the available pool. In addition to routine casework, in the present global political scenario, age estimation in living individuals is required in cases of refugees, asylum seekers, human trafficking and to ascertain the age of criminal responsibility. Thus robust methods that are simple, non-invasive and ethically viable are required. The aim of the present study is, therefore, to test the reliability and applicability of the coronal pulp cavity index method, for the purpose of developing age estimation standards for an adult Western Australian population. A total of 450 orthopantomograms (220 females and 230 males) of Australian individuals were analyzed. Crown and coronal pulp chamber heights were measured in the mandibular left and right premolars, and the first and second molars. These measurements were then used to calculate the tooth coronal index. Data were analyzed using paired-sample t-tests to assess bilateral asymmetry, followed by simple linear and multiple regressions to develop age estimation models. The most accurate simple linear regression model was based on the mandibular right first molar (SEE ±8.271 years). Multiple regression models improved age prediction accuracy considerably; the most accurate model used the bilateral first and second molars (SEE ±6.692 years). This study represents the first investigation of this method in a Western Australian population and our results indicate that the method is suitable for forensic application. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  16. Relative Motion of the WDS 05110+3203 STF 648 System, With a Protocol for Calculating Relative Motion

    NASA Astrophysics Data System (ADS)

    Wiley, E. O.

    2010-07-01

    The relative motion of visual double stars can be investigated using least-squares regression techniques and readily accessible tools such as Microsoft Excel and a calculator. Optical pairs differ from physical pairs under most geometries in both their simple scatter plots and their regression models. A step-by-step protocol for estimating the rectilinear elements of an optical pair is presented. The characteristics of physical pairs using these techniques are discussed.
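
    The least-squares approach described above can be sketched as independent straight-line fits to the rectangular separation components x(t) and y(t); the epochs, position angles and separations below are invented, not measurements of STF 648.

        import numpy as np

        epochs = np.array([1991.2, 1998.6, 2003.1, 2008.4, 2010.5])   # hypothetical
        theta_deg = np.array([41.0, 42.5, 43.4, 44.6, 45.1])          # position angle
        rho_arcsec = np.array([8.90, 9.05, 9.14, 9.27, 9.32])         # separation

        # Convert polar measures to rectangular coordinates (north = y, east = x).
        x = rho_arcsec * np.sin(np.radians(theta_deg))
        y = rho_arcsec * np.cos(np.radians(theta_deg))

        # Independent linear fits in each coordinate give the rectilinear motion.
        slope_x, intercept_x = np.polyfit(epochs, x, 1)
        slope_y, intercept_y = np.polyfit(epochs, y, 1)
        print(f"dx/dt = {slope_x:.4f} arcsec/yr, dy/dt = {slope_y:.4f} arcsec/yr")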

  17. An introduction to g methods.

    PubMed

    Naimi, Ashley I; Cole, Stephen R; Kennedy, Edward H

    2017-04-01

    Robins' generalized methods (g methods) provide consistent estimates of contrasts (e.g. differences, ratios) of potential outcomes under a less restrictive set of identification conditions than do standard regression methods (e.g. linear, logistic, Cox regression). Uptake of g methods by epidemiologists has been hampered by limitations in understanding both conceptual and technical details. We present a simple worked example that illustrates basic concepts, while minimizing technical complications. © The Author 2016; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.

  18. Linear regression models for solvent accessibility prediction in proteins.

    PubMed

    Wagner, Michael; Adamczak, Rafał; Porollo, Aleksey; Meller, Jarosław

    2005-04-01

    The relative solvent accessibility (RSA) of an amino acid residue in a protein structure is a real number that represents the solvent exposed surface area of this residue in relative terms. The problem of predicting the RSA from the primary amino acid sequence can therefore be cast as a regression problem. Nevertheless, RSA prediction has so far typically been cast as a classification problem. Consequently, various machine learning techniques have been used within the classification framework to predict whether a given amino acid exceeds some (arbitrary) RSA threshold and would thus be predicted to be "exposed," as opposed to "buried." We have recently developed novel methods for RSA prediction using nonlinear regression techniques which provide accurate estimates of the real-valued RSA and outperform classification-based approaches with respect to commonly used two-class projections. However, while their performance seems to provide a significant improvement over previously published approaches, these Neural Network (NN) based methods are computationally expensive to train and involve several thousand parameters. In this work, we develop alternative regression models for RSA prediction which are computationally much less expensive, involve orders-of-magnitude fewer parameters, and are still competitive in terms of prediction quality. In particular, we investigate several regression models for RSA prediction using linear L1-support vector regression (SVR) approaches as well as standard linear least squares (LS) regression. Using rigorously derived validation sets of protein structures and extensive cross-validation analysis, we compare the performance of the SVR with that of LS regression and NN-based methods. In particular, we show that the flexibility of the SVR (as encoded by metaparameters such as the error insensitivity and the error penalization terms) can be very beneficial to optimize the prediction accuracy for buried residues. We conclude that the simple and computationally much more efficient linear SVR performs comparably to nonlinear models and thus can be used in order to facilitate further attempts to design more accurate RSA prediction methods, with applications to fold recognition and de novo protein structure prediction methods.
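
    A minimal sketch of the linear epsilon-insensitive (L1-loss) support vector regression idea for real-valued RSA prediction is given below, using random stand-in feature vectors rather than true sequence-derived descriptors and scikit-learn's LinearSVR rather than the authors' implementation.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import LinearSVR

        rng = np.random.default_rng(5)
        X = rng.normal(size=(500, 40))                 # stand-in sequence features
        w = rng.normal(size=40)
        # Synthetic relative solvent accessibility values clipped to [0, 1].
        rsa = np.clip(0.05 * (X @ w) + 0.5 + rng.normal(scale=0.1, size=500), 0.0, 1.0)

        # Linear SVR with epsilon-insensitive loss; C and epsilon control the
        # error penalization and insensitivity mentioned above.
        model = make_pipeline(StandardScaler(),
                              LinearSVR(epsilon=0.1, C=1.0, max_iter=10000))
        model.fit(X, rsa)
        print("training R^2:", round(model.score(X, rsa), 3))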

  19. Moderation analysis with missing data in the predictors.

    PubMed

    Zhang, Qian; Wang, Lijuan

    2017-12-01

    The most widely used statistical model for conducting moderation analysis is the moderated multiple regression (MMR) model. In MMR modeling, missing data could pose a challenge, mainly because the interaction term is a product of two or more variables and thus is a nonlinear function of the involved variables. In this study, we consider a simple MMR model, where the effect of the focal predictor X on the outcome Y is moderated by a moderator U. The primary interest is to find ways of estimating and testing the moderation effect with the existence of missing data in X. We mainly focus on cases when X is missing completely at random (MCAR) and missing at random (MAR). Three methods are compared: (a) normal-distribution-based maximum likelihood estimation (NML); (b) normal-distribution-based multiple imputation (NMI); and (c) Bayesian estimation (BE). Via simulations, we found that NML and NMI could lead to biased estimates of moderation effects under the MAR missingness mechanism. The BE method outperformed NMI and NML for MMR modeling with missing data in the focal predictor, missingness depending on the moderator and/or auxiliary variables, and correctly specified distributions for the focal predictor. In addition, more robust BE methods are needed to address mis-specification of the focal predictor's distribution. An empirical example was used to illustrate the applications of the methods with a simple sensitivity analysis. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
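
    With complete data, the MMR model and its moderation effect reduce to an ordinary regression with a product term, as in the following sketch; the data are synthetic, and the missing-data estimators compared in the paper (NML, NMI, BE) are not shown.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(6)
        n = 300
        X = rng.normal(size=n)          # focal predictor
        U = rng.normal(size=n)          # moderator
        Y = 0.4 * X + 0.3 * U + 0.25 * X * U + rng.normal(size=n)

        # Moderated multiple regression: Y ~ X + U + X*U; the moderation effect
        # is the coefficient on the product term.
        design = sm.add_constant(np.column_stack([X, U, X * U]))
        fit = sm.OLS(Y, design).fit()
        print("estimated moderation effect (X*U):", round(fit.params[3], 3))
        print("p-value:", round(fit.pvalues[3], 4))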

  20. Qualitative Analysis of Dairy and Powder Milk Using Laser-Induced Breakdown Spectroscopy (LIBS).

    PubMed

    Alfarraj, Bader A; Sanghapi, Herve K; Bhatt, Chet R; Yueh, Fang Y; Singh, Jagdish P

    2018-01-01

    The laser-induced breakdown spectroscopy (LIBS) technique was used to compare various types of commercial milk products. LIBS spectra were investigated for the determination of the elemental composition of soy and rice milk powder, dairy milk, and lactose-free dairy milk. The analysis was performed using radiative transitions. Atomic emissions from Ca, K, Na, and Mg lines observed in LIBS spectra of dairy milk were compared. Ca concentrations were calculated to be 2.165 ± 0.203 g/L in 1% fat dairy milk samples and 2.809 ± 0.172 g/L in 2% fat dairy milk samples using the standard addition method (SAM) with LIBS spectra. Univariate and multivariate statistical analysis methods showed that the contents of major mineral elements were higher in lactose-free dairy milk than in dairy milk. The principal component analysis (PCA) method was used to discriminate the four milk samples based on their mineral element concentrations. In addition, protein and fat levels in dairy milk were determined using molecular emissions such as the CN band. We applied partial least squares regression (PLSR) and simple linear regression (SLR) models to predict levels of milk fat in dairy milk samples. The PLSR model was successfully used to predict levels of milk fat in a dairy milk sample with a relative accuracy (RA%) of less than 6.62% using the CN (0,0) band.

  1. Do drug treatment variables predict cognitive performance in multidrug-treated opioid-dependent patients? A regression analysis study

    PubMed Central

    2012-01-01

    Background Cognitive deficits and multiple psychoactive drug regimens are both common in patients treated for opioid-dependence. Therefore, we examined whether the cognitive performance of patients in opioid-substitution treatment (OST) is associated with their drug treatment variables. Methods Opioid-dependent patients (N = 104) who were treated either with buprenorphine or methadone (n = 52 in both groups) were given attention, working memory, verbal, and visual memory tests after they had been in treatment for a minimum of six months. Group-wise results were analysed by analysis of variance. Predictors of cognitive performance were examined by hierarchical regression analysis. Results Buprenorphine-treated patients performed statistically significantly better in a simple reaction time test than methadone-treated ones. No other significant differences between groups in cognitive performance were found. In each OST drug group, approximately 10% of the variance in attention performance could be predicted by drug treatment variables. Use of benzodiazepine medication predicted about 10% of the variance in working memory performance. Treatment with more than one other psychoactive drug (besides the opioid or BZD) and frequent substance abuse during the past month predicted about 20% of the variance in verbal memory performance. Conclusions Although this study does not prove a causal relationship between multiple prescription drug use and poor cognitive functioning, the results are relevant for psychosocial recovery, vocational rehabilitation, and psychological treatment of OST patients. Especially for patients with BZD treatment, other treatment options should be actively sought. PMID:23121989

  2. White Blood Cells, Neutrophils, and Reactive Oxygen Metabolites among Asymptomatic Subjects.

    PubMed

    Kotani, Kazuhiko; Sakane, Naoki

    2012-06-01

    Chronic inflammation and oxidative stress are associated with health and disease status. The objective of the present study was to investigate the association among white blood cell (WBC) counts, neutrophil counts as a WBC subpopulation, and diacron reactive oxygen metabolites (d-ROMs) levels in an asymptomatic population. The clinical data, including general cardiovascular risk variables and high-sensitivity C-reactive protein (hs-CRP), were collected from 100 female subjects (mean age, 62 years) in outpatient clinics. The correlation of the d-ROMs with hs-CRP, WBC, and neutrophil counts was examined. The mean/median levels were: WBC count 5.9 × 10⁹/L, neutrophil count 3.6 × 10⁹/L, hs-CRP 0.06 mg/dL, and d-ROMs 359 CARR U. A simple correlation analysis showed a significant positive correlation of the d-ROMs with the WBC counts, neutrophil counts, or hs-CRP levels. The correlation between d-ROMs and neutrophil counts (β = 0.22, P < 0.05), as well as that between d-ROMs and hs-CRP (β = 0.28, P < 0.01), remained significant and independent in a multiple linear regression analysis adjusted for other variables. The multiple linear regression analysis showed only a trend toward a positive correlation between WBC counts and d-ROMs. Neutrophils may be slightly more involved than the overall WBC count in the oxidative stress status, as assessed by d-ROMs. Further studies are needed to clarify the biologic mechanism(s) of the observed relationship.

  3. Effective role of lady health workers in immunization of children in Pakistan.

    PubMed

    Afzal, Saira; Naeem, Azka; Shahid, Unaiza; Noor Syed, Wajiha; Khan, Urva; Misal Zaidi, Nayyar

    2016-01-01

    To determine the association of the Lady Health Workers' role with immunization of children in Pakistan. Secondary analysis was conducted on data obtained from Pakistan's Demographic and Health Survey. Children who did not receive all doses of vaccines were considered incompletely immunized; those who received all doses were considered completely immunized. The associations between determinants and immunization status were assessed by simple and multivariable binary logistic regression. The mothers and fathers had a mean age of 32.7 (SD+8.6) years and 37.9 (SD +10.1) years, respectively. Age of mother greater than 35 (OR=0.93; 95% CI:0.70-1.25); born in Baluchistan (OR=3.47, 95% CI:2.21-5.49); rural area dwellers (OR=2.04; 95% CI:1.65-2.51); female gender (OR=1.06; 95% CI:0.87-1.29); birth order (of last born child) greater than 7 (OR=2.21, 95% CI:1.60-3.06); delivered at home (OR=2.20, 95% CI:1.76-2.74); long distance to health care facility (OR=2.66, 95% CI:2.16-3.28); and no LHW visit in the last 12 months (OR=1.91, CI:1.48-2.47) were significantly associated with incomplete immunization in bivariate analysis. In the final model of the multinomial regression analysis, the absence of an LHW visit in the last 12 months was the most significant factor when all risk factors were analyzed. This study concludes that an LHW visit in the last 12 months was significantly associated with immunization.

  4. Lesion size affects diagnostic performance of IOTA logistic regression models, IOTA simple rules and risk of malignancy index in discriminating between benign and malignant adnexal masses.

    PubMed

    Di Legge, A; Testa, A C; Ameye, L; Van Calster, B; Lissoni, A A; Leone, F P G; Savelli, L; Franchi, D; Czekierdowski, A; Trio, D; Van Holsbeke, C; Ferrazzi, E; Scambia, G; Timmerman, D; Valentin, L

    2012-09-01

    To estimate the ability to discriminate between benign and malignant adnexal masses of different sizes using subjective assessment, two International Ovarian Tumor Analysis (IOTA) logistic regression models (LR1 and LR2), the IOTA simple rules and the risk of malignancy index (RMI). We used a multicenter IOTA database of 2445 patients with at least one adnexal mass, i.e. the database previously used to prospectively validate the diagnostic performance of LR1 and LR2. The masses were categorized into three subgroups according to their largest diameter: small tumors (diameter < 4 cm; n = 396), medium-sized tumors (diameter 4-9.9 cm; n = 1457) and large tumors (diameter ≥ 10 cm; n = 592). Subjective assessment, LR1 and LR2, IOTA simple rules and the RMI were applied to each of the three groups. Sensitivity, specificity, positive and negative likelihood ratio (LR+, LR-), diagnostic odds ratio (DOR) and area under the receiver-operating characteristics curve (AUC) were used to describe diagnostic performance. A moving window technique was applied to estimate the effect of tumor size as a continuous variable on the AUC. The reference standard was the histological diagnosis of the surgically removed adnexal mass. The frequency of invasive malignancy was 10% in small tumors, 19% in medium-sized tumors and 40% in large tumors; 11% of the large tumors were borderline tumors vs 3% and 4%, respectively, of the small and medium-sized tumors. The type of benign histology also differed among the three subgroups. For all methods, sensitivity with regard to malignancy was lowest in small tumors (56-84% vs 67-93% in medium-sized tumors and 74-95% in large tumors) while specificity was lowest in large tumors (60-87% vs 83-95% in medium-sized tumors and 83-96% in small tumors). The DOR and the AUC value were highest in medium-sized tumors and the AUC was largest in tumors with a largest diameter of 7-11 cm. Tumor size affects the performance of subjective assessment, LR1 and LR2, the IOTA simple rules and the RMI in discriminating correctly between benign and malignant adnexal masses. The likely explanation, at least in part, is the difference in histology among tumors of different size. Copyright © 2012 ISUOG. Published by John Wiley & Sons, Ltd.

  5. Language functions in preterm-born children: a systematic review and meta-analysis.

    PubMed

    van Noort-van der Spek, Inge L; Franken, Marie-Christine J P; Weisglas-Kuperus, Nynke

    2012-04-01

    Preterm-born children (<37 weeks' gestation) have higher rates of language function problems compared with term-born children. It is unknown whether these problems decrease, deteriorate, or remain stable over time. The goal of this research was to determine the developmental course of language functions in preterm-born children from 3 to 12 years of age. Computerized databases Embase, PubMed, Web of Knowledge, and PsycInfo were searched for studies published between January 1995 and March 2011 reporting language functions in preterm-born children. Outcome measures were simple language function assessed by using the Peabody Picture Vocabulary Test and complex language function assessed by using the Clinical Evaluation of Language Fundamentals. Pooled effect sizes (in terms of Cohen's d) and 95% confidence intervals (CI) for simple and complex language functions were calculated by using random-effects models. Meta-regression was conducted with mean difference of effect size as the outcome variable and assessment age as the explanatory variable. Preterm-born children scored significantly lower compared with term-born children on simple (d = -0.45 [95% CI: -0.59 to -0.30]; P < .001) and on complex (d = -0.62 [95% CI: -0.82 to -0.43]; P < .001) language function tests, even in the absence of major disabilities and independent of social economic status. For complex language function (but not for simple language function), group differences between preterm- and term-born children increased significantly from 3 to 12 years of age (slope = -0.05; P = .03). While growing up, preterm-born children have increasing difficulties with complex language function.

  6. Development and Validation of a Simple High Performance Liquid Chromatography/UV Method for Simultaneous Determination of Urinary Uric Acid, Hypoxanthine, and Creatinine in Human Urine.

    PubMed

    Wijemanne, Nimanthi; Soysa, Preethi; Wijesundara, Sulochana; Perera, Hemamali

    2018-01-01

    Uric acid and hypoxanthine are produced in the catabolism of purine. Abnormal urinary levels of these products are associated with many diseases and therefore it is necessary to have a simple and rapid method to detect them. Hence, we report a simple reverse phase high performance liquid chromatography (HPLC/UV) technique, developed and validated for simultaneous analysis of uric acid, hypoxanthine, and creatinine in human urine. Urine was diluted appropriately and eluted on a C-18 column (100 mm × 4.6 mm) with a C-18 precolumn (25 mm × 4.6 mm) in series. Potassium phosphate buffer (20 mM, pH 7.25) at a flow rate of 0.40 mL/min was employed as the solvent and peaks were detected at 235 nm. Tyrosine was used as the internal standard. The experimental conditions offered a good separation of analytes without interference from endogenous substances. The calibration curves were linear for all test compounds, with a regression coefficient r² > 0.99. Uric acid, creatinine, tyrosine, and hypoxanthine were eluted at 5.2, 6.1, 7.2, and 8.3 min, respectively. Intraday and interday variability were less than 4.6% for all the analytes investigated and the recovery ranged from 98 to 102%. The proposed HPLC procedure is a simple, rapid, and low-cost method with high accuracy and minimal use of organic solvents. This method was successfully applied to the determination of creatinine, hypoxanthine, and uric acid in human urine.

  7. Classification of Large-Scale Remote Sensing Images for Automatic Identification of Health Hazards: Smoke Detection Using an Autologistic Regression Classifier.

    PubMed

    Wolters, Mark A; Dean, C B

    2017-01-01

    Remote sensing images from Earth-orbiting satellites are a potentially rich data source for monitoring and cataloguing atmospheric health hazards that cover large geographic regions. A method is proposed for classifying such images into hazard and nonhazard regions using the autologistic regression model, which may be viewed as a spatial extension of logistic regression. The method includes a novel and simple approach to parameter estimation that makes it well suited to handling the large and high-dimensional datasets arising from satellite-borne instruments. The methodology is demonstrated on both simulated images and a real application to the identification of forest fire smoke.
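
    A pseudolikelihood-style sketch of the autologistic idea is given below: the sum of neighbouring labels enters a logistic regression as an extra covariate. The toy image grid and covariate are invented, and this is not the authors' novel estimation approach.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(7)
        h, w = 30, 30
        covariate = rng.normal(size=(h, w))                 # e.g. a spectral band
        labels = (covariate + rng.normal(scale=0.5, size=(h, w)) > 0).astype(int)

        # Four-neighbour sum of labels for each pixel (zero-padded at the borders).
        padded = np.pad(labels, 1)
        neigh_sum = (padded[:-2, 1:-1] + padded[2:, 1:-1]
                     + padded[1:-1, :-2] + padded[1:-1, 2:])

        # Logistic regression with the spatial term as an extra covariate
        # (a pseudolikelihood-style fit of the autologistic form).
        X = np.column_stack([covariate.ravel(), neigh_sum.ravel()])
        clf = LogisticRegression(max_iter=1000).fit(X, labels.ravel())
        print("coefficients (covariate, spatial term):", np.round(clf.coef_[0], 3))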

  8. The Staff Observation Aggression Scale - Revised (SOAS-R) - adjustment and validation for emergency primary health care.

    PubMed

    Morken, Tone; Baste, Valborg; Johnsen, Grethe E; Rypdal, Knut; Palmstierna, Tom; Johansen, Ingrid Hjulstad

    2018-05-08

    Many emergency primary health care workers experience aggressive behaviour from patients or visitors. Simple incident-reporting procedures exist for inpatient psychiatric care, but a similarly simple incident report for other health care settings is lacking. The aim was to adjust a pre-existing form for reporting aggressive incidents in a psychiatric inpatient setting to the emergency primary health care setting. We also wanted to assess the validity of the severity scores in emergency primary health care. The Staff Observation Aggression Scale - Revised (SOAS-R) was adjusted to create a pilot version of the Staff Observation Aggression Scale - Revised Emergency (SOAS-RE). A Visual Analogue Scale (VAS) was added to the form to judge the severity of the incident. Data for validation of the pilot version of the SOAS-RE were collected from ten casualty clinics in Norway during 12 months. Analysis of variance was used to test gender and age differences. Linear regression analysis was performed to evaluate the relative impact that each of the five SOAS-RE columns had on the VAS score. The association between the SOAS-RE severity score and the VAS severity score was calculated by the Pearson correlation coefficient. The SOAS-R was adjusted to emergency primary health care, refined, and named the Staff Observation Aggression Scale - Revised Emergency (SOAS-RE). A total of 350 SOAS-RE forms were collected from the casualty clinics, but due to missing data, 291 forms were included in the analysis. SOAS-RE scores ranged from 1 to 22. The mean total severity score of the SOAS-RE was 10.0 (standard deviation (SD) = 4.1) and the mean VAS score was 45.4 (SD = 26.7). We found a significant correlation of 0.45 between the SOAS-RE total severity scores and the VAS severity ratings. The linear regression analysis showed that, individually, each of the categories describing the incident had a low impact on the VAS score. The SOAS-RE seems to be a useful instrument for research, incident recording and management of incidents in emergency primary care. The moderate correlation between the SOAS-RE severity score and the VAS severity score shows that applying both severity ratings is valuable for the follow-up of workers affected by workplace violence.

  9. Several steps/day indicators predict changes in anthropometric outcomes: HUB City Steps.

    PubMed

    Thomson, Jessica L; Landry, Alicia S; Zoellner, Jamie M; Tudor-Locke, Catrine; Webster, Michael; Connell, Carol; Yadrick, Kathy

    2012-11-15

    Walking for exercise remains the most frequently reported leisure-time activity, likely because it is simple, inexpensive, and easily incorporated into most people's lifestyle. Pedometers are simple, convenient, and economical tools that can be used to quantify step-determined physical activity. Few studies have attempted to define the direct relationship between dynamic changes in pedometer-determined steps/day and changes in anthropometric and clinical outcomes. Hence, the objective of this secondary analysis was to evaluate the utility of several descriptive indicators of pedometer-determined steps/day for predicting changes in anthropometric and clinical outcomes using data from a community-based walking intervention, HUB City Steps, conducted in a southern, African American population. A secondary aim was to evaluate whether treating steps/day data for implausible values affected the ability of these data to predict intervention-induced changes in clinical and anthropometric outcomes. The data used in this secondary analysis were collected in 2010 from 269 participants in a six-month walking intervention targeting a reduction in blood pressure. Throughout the intervention, participants submitted weekly steps/day diaries based on pedometer self-monitoring. Changes (six-month minus baseline) in anthropometric (body mass index, waist circumference, percent body fat [%BF], fat mass) and clinical (blood pressure, lipids, glucose) outcomes were evaluated. Associations between steps/day indicators and changes in anthropometric and clinical outcomes were assessed using bivariate tests and multivariable linear regression analysis which controlled for demographic and baseline covariates. Significant negative bivariate associations were observed between steps/day indicators and the majority of anthropometric and clinical outcome changes (r = -0.3 to -0.2: P < 0.05). After controlling for covariates in the regression analysis, only the relationships between steps/day indicators and changes in anthropometric (not clinical) outcomes remained significant. For example, a 1,000 steps/day increase in intervention mean steps/day resulted in a 0.1% decrease in %BF. Results for the three pedometer datasets (full, truncated, and excluded) were similar and yielded few meaningful differences in interpretation of the findings. Several descriptive indicators of steps/day may be useful for predicting anthropometric outcome changes. Further, manipulating steps/day data to address implausible values has little overall effect on the ability to predict these anthropometric changes.

  10. Development and testing of new candidate psoriatic arthritis screening questionnaires combining optimal questions from existing tools.

    PubMed

    Coates, Laura C; Walsh, Jessica; Haroon, Muhammad; FitzGerald, Oliver; Aslam, Tariq; Al Balushi, Farida; Burden, A D; Burden-Teh, Esther; Caperon, Anna R; Cerio, Rino; Chattopadhyay, Chandrabhusan; Chinoy, Hector; Goodfield, Mark J D; Kay, Lesley; Kelly, Stephen; Kirkham, Bruce W; Lovell, Christopher R; Marzo-Ortega, Helena; McHugh, Neil; Murphy, Ruth; Reynolds, Nick J; Smith, Catherine H; Stewart, Elizabeth J C; Warren, Richard B; Waxman, Robin; Wilson, Hilary E; Helliwell, Philip S

    2014-09-01

    Several questionnaires have been developed to screen for psoriatic arthritis (PsA), but head-to-head studies have found limitations. This study aimed to develop new questionnaires encompassing the most discriminative questions from existing instruments. Data from the CONTEST study, a head-to-head comparison of 3 existing questionnaires, were used to identify items with a Youden index score of ≥0.1. These were combined using 4 approaches: CONTEST (simple additions of questions), CONTESTw (weighting using logistic regression), CONTESTjt (addition of a joint manikin), and CONTESTtree (additional questions identified by classification and regression tree [CART] analysis). These candidate questionnaires were tested in independent data sets. Twelve individual questions with a Youden index score of ≥0.1 were identified, but 4 of these were excluded due to duplication and redundancy. Weighting for 2 of these questions was included in CONTESTw. Receiver operating characteristic (ROC) curve analysis showed that involvement in 6 joint areas on the manikin was predictive of PsA for inclusion in CONTESTjt. CART analysis identified a further 5 questions for inclusion in CONTESTtree. CONTESTtree was not significant on ROC curve analysis and was discarded. The other 3 questionnaires were significant in all data sets, although CONTESTw was slightly inferior to the others in the validation data sets. Potential cut points for referral were also discussed. Of 4 candidate questionnaires combining existing discriminatory items to identify PsA in people with psoriasis, 3 were found to be significant on ROC curve analysis. Testing in independent data sets identified 2 questionnaires (CONTEST and CONTESTjt) that should be pursued for further prospective testing. Copyright © 2014 by the American College of Rheumatology.
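
    A minimal sketch of how a Youden index of the kind used above can be computed, first for a single binary screening item (sensitivity + specificity - 1) and then over an ROC curve for a count-type score; the data are simulated, not from CONTEST.

        import numpy as np
        from sklearn.metrics import roc_curve

        rng = np.random.default_rng(2)
        has_psa = rng.integers(0, 2, 500)            # 1 = PsA, 0 = psoriasis only (hypothetical)
        answer = (rng.random(500) < np.where(has_psa == 1, 0.6, 0.4)).astype(int)

        # For a binary item, Youden's J is simply sensitivity + specificity - 1.
        sens = answer[has_psa == 1].mean()
        spec = 1 - answer[has_psa == 0].mean()
        print("Youden J =", round(sens + spec - 1, 3))

        # For a score (e.g. number of involved joint areas), take the maximum J over the ROC curve.
        score = rng.poisson(np.where(has_psa == 1, 4, 2))
        fpr, tpr, thresholds = roc_curve(has_psa, score)
        j = tpr - fpr
        print("best threshold:", thresholds[j.argmax()], "J =", round(j.max(), 3))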

  11. Identification and detection of simple 3D objects with severely blurred vision.

    PubMed

    Kallie, Christopher S; Legge, Gordon E; Yu, Deyue

    2012-12-05

    Detecting and recognizing three-dimensional (3D) objects is an important component of the visual accessibility of public spaces for people with impaired vision. The present study investigated the impact of environmental factors and object properties on the recognition of objects by subjects who viewed physical objects with severely reduced acuity. The experiment was conducted in an indoor testing space. We examined detection and identification of simple convex objects by normally sighted subjects wearing diffusing goggles that reduced effective acuity to 20/900. We used psychophysical methods to examine the effect on performance of important environmental variables: viewing distance (from 10-24 feet, or 3.05-7.32 m) and illumination (overhead fluorescent and artificial window), and object variables: shape (boxes and cylinders), size (heights from 2-6 feet, or 0.61-1.83 m), and color (gray and white). Object identification was significantly affected by distance, color, height, and shape, as well as interactions between illumination, color, and shape. A stepwise regression analysis showed that 64% of the variability in identification could be explained by object contrast values (58%) and object visual angle (6%). When acuity is severely limited, illumination, distance, color, height, and shape influence the identification and detection of simple 3D objects. These effects can be explained in large part by the impact of these variables on object contrast and visual angle. Basic design principles for improving object visibility are discussed.

  12. Research on the effects of urbanization on small stream flow quantity

    DOT National Transportation Integrated Search

    1978-12-01

    This study is a preliminary investigation into the feasibility of using simple techniques to evaluate the effects of urbanization on flood flows in small streams. A number of regression techniques and computer simulation techniques were evaluated, an...

  13. Linear and evolutionary polynomial regression models to forecast coastal dynamics: Comparison and reliability assessment

    NASA Astrophysics Data System (ADS)

    Bruno, Delia Evelina; Barca, Emanuele; Goncalves, Rodrigo Mikosz; de Araujo Queiroz, Heithor Alexandre; Berardi, Luigi; Passarella, Giuseppe

    2018-01-01

    In this paper, the Evolutionary Polynomial Regression data modelling strategy has been applied to study small-scale, short-term coastal morphodynamics, given its capability to treat a wide database of known information non-linearly. Simple linear and multilinear regression models were also applied, in order to weigh the computational load and the reliability of estimation of the three models against each other. In fact, even though it is easy to imagine that the more complex the model, the better the prediction, sometimes a "slight" worsening of the estimates can be accepted in exchange for the time saved in data organization and computation. The models' outcomes were validated through a detailed statistical error analysis, which revealed a slightly better estimation by the polynomial model with respect to the multilinear model, as expected. On the other hand, even though the data organization was identical for the two models, the multilinear one required a simpler simulation setting and a shorter run time. Finally, the most reliable evolutionary polynomial regression model was used to make some conjectures about how uncertainty grows as the extrapolation time of the estimation is extended. The overlap between the confidence band of the mean of the known coast position and the prediction band of the estimated position can be a good index of the weakness in producing reliable estimations when the extrapolation time increases too much. The proposed models and tests have been applied to a coastal sector located near Torre Colimena in the Apulia region, southern Italy.
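
    A small sketch contrasting a simple linear fit with a low-order polynomial fit on a shoreline-position series, as a stand-in for the model comparison described above; it does not reproduce Evolutionary Polynomial Regression, and the data are synthetic.

        import numpy as np

        rng = np.random.default_rng(3)
        t = np.arange(0, 20, 0.5)                     # monitoring time in years (hypothetical)
        position = 2.0 * t - 0.05 * t**2 + rng.normal(0, 1.0, t.size)   # coast position in metres

        lin = np.polyfit(t, position, 1)              # simple linear model
        quad = np.polyfit(t, position, 2)             # low-order polynomial model

        def rmse(coeffs):
            return np.sqrt(np.mean((np.polyval(coeffs, t) - position) ** 2))

        print("linear RMSE:", round(rmse(lin), 2), " quadratic RMSE:", round(rmse(quad), 2))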

  14. On-line capacity-building program on "analysis of data" for medical educators in the South Asia region: a qualitative exploration of our experience.

    PubMed

    Dongre, A R; Chacko, T V; Banu, S; Bhandary, S; Sahasrabudhe, R A; Philip, S; Deshmukh, P R

    2010-11-01

    In medical education, using the World Wide Web is a new approach for building the capacity of faculty. However, there is little information available on medical education researchers' needs and their collective learning outcomes in such on-line environments. Hence, the present study attempted: 1) to identify needs for capacity-building of fellows in a faculty development program on the topic of data analysis; and 2) to describe, analyze and understand the collective learning outcomes of the fellows during this need-based on-line session. The present research is based on quantitative (on-line survey for needs assessment) and qualitative (contents of e-mails exchanged in listserv discussion) data, which were generated during the October 2009 Mentoring and Learning (M-L) Web discussion on the topic of data analysis. The data sources were shared e-mail responses during the process of planning and executing the M-L Web discussion. Content analysis was undertaken and the categories of discussion were presented as a simple non-hierarchical typology which represents the collective learning of the project fellows. We identified the types of learning needs on the topic 'Analysis of Data' to be addressed for faculty development in the field of education research. This need-based M-L Web discussion could then facilitate collective learning on such topics as 'basic concepts in statistics', tests of significance, Likert scale analysis, bivariate correlation, simple regression analysis, and content analysis of qualitative data. Steps like identifying the learning needs for an on-line M-L Web discussion, addressing the immediate needs of learners and creating a flexible reflective learning environment on the M-L Web facilitated the collective learning of the fellows on the topic of data analysis. Our outcomes can be useful in the design of on-line pedagogical strategies for supporting research in medical education.

  15. Estimating Time to Event From Longitudinal Categorical Data: An Analysis of Multiple Sclerosis Progression.

    PubMed

    Mandel, Micha; Gauthier, Susan A; Guttmann, Charles R G; Weiner, Howard L; Betensky, Rebecca A

    2007-12-01

    The expanded disability status scale (EDSS) is an ordinal score that measures progression in multiple sclerosis (MS). Progression is defined as reaching an EDSS of a certain level (absolute progression) or an increase of one point in EDSS (relative progression). Survival methods for time to progression are not adequate for such data since they do not exploit the EDSS level at the end of follow-up. Instead, we suggest a Markov transitional model applicable for repeated categorical or ordinal data. This approach enables derivation of covariate-specific survival curves, obtained after estimation of the regression coefficients and manipulations of the resulting transition matrix. Large sample theory and resampling methods are employed to derive pointwise confidence intervals, which perform well in simulation. Methods for generating survival curves for time to an EDSS of a certain level, time to an increase in EDSS of at least one point, and time to two consecutive visits with EDSS greater than three are described explicitly. The regression models described are easily implemented using standard software packages. Survival curves are obtained from the regression results using packages that support simple matrix calculation. We present and demonstrate our method on data collected at the Partners MS center in Boston, MA. We apply our approach to progression defined by time to two consecutive visits with EDSS greater than three, and calculate crude (without covariates) and covariate-specific curves.
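
    A toy sketch of the matrix manipulation described above: given an estimated one-visit transition matrix over ordinal disability categories, the probability of not yet having reached a target category by visit t can be read off powers of the matrix once the target category is treated as absorbing. The 4-state matrix below is invented for illustration, not estimated from MS data.

        import numpy as np

        # Hypothetical one-visit transition matrix over categories 0-3 (rows sum to 1).
        P = np.array([
            [0.85, 0.10, 0.04, 0.01],
            [0.05, 0.80, 0.10, 0.05],
            [0.00, 0.05, 0.80, 0.15],
            [0.00, 0.00, 0.00, 1.00],   # category 3 treated as the (absorbing) progression event
        ])

        start = np.array([1.0, 0.0, 0.0, 0.0])        # everyone starts in category 0
        state = start
        surv = []
        for visit in range(1, 11):
            state = state @ P                         # propagate one visit forward
            surv.append(1.0 - state[-1])              # P(not yet reached category 3)
        print(np.round(surv, 3))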

  16. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages.

    PubMed

    Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry

    2013-08-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages, SAS GLIMMIX Laplace and SuperMix Gaussian quadrature, perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.

  17. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages

    PubMed Central

    Kim, Yoonsang; Emery, Sherry

    2013-01-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods’ performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages—SAS GLIMMIX Laplace and SuperMix Gaussian quadrature—perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes. PMID:24288415

  18. Family support, self-esteem, and perceived racial discrimination among Asian American male college students.

    PubMed

    Wei, Meifen; Yeh, Christine Jean; Chao, Ruth Chu-Lien; Carrera, Stephanie; Su, Jenny C

    2013-07-01

    This study was conducted to examine under what situation (i.e., when individuals used more or less family support) and for whom (i.e., those with high or low self-esteem) perceived racial discrimination would or would not have a significant positive association with psychological distress. A total of 95 Asian American male college students completed an online survey. A hierarchical regression analysis indicated a significant 3-way interaction of family support, self-esteem, and perceived racial discrimination in predicting psychological distress after controlling for perceived general stress. A simple effect analysis was used to explore the nature of the interaction. When Asian American male college students used more family support to cope with racial discrimination, the association between perceived racial discrimination and psychological distress was not significant for those with high or low self-esteem. The result from the simple interaction indicated that, when more family support was used, the 2 slopes for high and low self-esteem were not significantly different from each other. Conversely, when they used less family support, the association between perceived racial discrimination and psychological distress was not significant for those with high self-esteem, but was significantly positive for those with low self-esteem. The result from the simple interaction indicated that, when less family support was used, the slopes for high and low self-esteem were significantly different. The result suggested that low use of family support may put these male students with low self-esteem at risk for psychological distress. Limitations, future research directions, and clinical implications were discussed. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  19. Penalized regression procedures for variable selection in the potential outcomes framework

    PubMed Central

    Ghosh, Debashis; Zhu, Yeying; Coffman, Donna L.

    2015-01-01

    A recent topic of much interest in causal inference is model selection. In this article, we describe a framework in which to consider penalized regression approaches to variable selection for causal effects. The framework leads to a simple ‘impute, then select’ class of procedures that is agnostic to the type of imputation algorithm as well as penalized regression used. It also clarifies how model selection involves a multivariate regression model for causal inference problems, and that these methods can be applied for identifying subgroups in which treatment effects are homogeneous. Analogies and links with the literature on machine learning methods, missing data and imputation are drawn. A difference LASSO algorithm is defined, along with its multiple imputation analogues. The procedures are illustrated using a well-known right heart catheterization dataset. PMID:25628185
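
    A compact sketch of the 'impute, then select' idea described above, using scikit-learn rather than the authors' difference-LASSO algorithm: missing covariates are imputed, then a penalized (LASSO) regression performs variable selection. The data and the single-imputation choice are assumptions for illustration only.

        import numpy as np
        from sklearn.impute import SimpleImputer
        from sklearn.linear_model import LassoCV
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(4)
        n, p = 300, 10
        X = rng.normal(size=(n, p))
        y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=n)   # only two true signals
        X[rng.random(X.shape) < 0.1] = np.nan                    # inject missingness

        # Step 1: impute; Step 2: penalized regression with cross-validated penalty.
        model = make_pipeline(SimpleImputer(strategy="mean"),
                              StandardScaler(),
                              LassoCV(cv=5))
        model.fit(X, y)
        print(np.round(model.named_steps["lassocv"].coef_, 2))   # non-zero entries are selected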

  20. A methodology for the design of experiments in computational intelligence with multiple regression models.

    PubMed

    Fernandez-Lozano, Carlos; Gestal, Marcos; Munteanu, Cristian R; Dorado, Julian; Pazos, Alejandro

    2016-01-01

    The design of experiments and the validation of the results achieved with them are vital in any research study. This paper focuses on the use of different Machine Learning approaches for regression tasks in the field of Computational Intelligence and especially on a correct comparison between the different results provided for different methods, as those techniques are complex systems that require further study to be fully understood. A methodology commonly accepted in Computational intelligence is implemented in an R package called RRegrs. This package includes ten simple and complex regression models to carry out predictive modeling using Machine Learning and well-known regression algorithms. The framework for experimental design presented herein is evaluated and validated against RRegrs. Our results are different for three out of five state-of-the-art simple datasets and it can be stated that the selection of the best model according to our proposal is statistically significant and relevant. It is of relevance to use a statistical approach to indicate whether the differences are statistically significant using this kind of algorithms. Furthermore, our results with three real complex datasets report different best models than with the previously published methodology. Our final goal is to provide a complete methodology for the use of different steps in order to compare the results obtained in Computational Intelligence problems, as well as from other fields, such as for bioinformatics, cheminformatics, etc., given that our proposal is open and modifiable.

  1. A methodology for the design of experiments in computational intelligence with multiple regression models

    PubMed Central

    Gestal, Marcos; Munteanu, Cristian R.; Dorado, Julian; Pazos, Alejandro

    2016-01-01

    The design of experiments and the validation of the results achieved with them are vital in any research study. This paper focuses on the use of different Machine Learning approaches for regression tasks in the field of Computational Intelligence and especially on a correct comparison between the different results provided for different methods, as those techniques are complex systems that require further study to be fully understood. A methodology commonly accepted in Computational intelligence is implemented in an R package called RRegrs. This package includes ten simple and complex regression models to carry out predictive modeling using Machine Learning and well-known regression algorithms. The framework for experimental design presented herein is evaluated and validated against RRegrs. Our results are different for three out of five state-of-the-art simple datasets and it can be stated that the selection of the best model according to our proposal is statistically significant and relevant. It is of relevance to use a statistical approach to indicate whether the differences are statistically significant using this kind of algorithms. Furthermore, our results with three real complex datasets report different best models than with the previously published methodology. Our final goal is to provide a complete methodology for the use of different steps in order to compare the results obtained in Computational Intelligence problems, as well as from other fields, such as for bioinformatics, cheminformatics, etc., given that our proposal is open and modifiable. PMID:27920952

  2. ATLS Hypovolemic Shock Classification by Prediction of Blood Loss in Rats Using Regression Models.

    PubMed

    Choi, Soo Beom; Choi, Joon Yul; Park, Jee Soo; Kim, Deok Won

    2016-07-01

    In our previous study, our input data set consisted of 78 rats, the blood loss in percent as a dependent variable, and 11 independent variables (heart rate, systolic blood pressure, diastolic blood pressure, mean arterial pressure, pulse pressure, respiration rate, temperature, perfusion index, lactate concentration, shock index, and a new index (lactate concentration/perfusion)). The machine learning methods for multicategory classification were applied to a rat model of acute hemorrhage to predict the four Advanced Trauma Life Support (ATLS) hypovolemic shock classes for triage in our previous study. However, multicategory classification is much more difficult and complicated than binary classification. We introduce a simple approach for classifying ATLS hypovolemic shock class by predicting blood loss in percent using support vector regression and multivariate linear regression (MLR). We also compared the performance of the classification models using absolute and relative vital signs. The accuracies of the support vector regression and MLR models with relative values for predicting blood loss in percent were 88.5% and 84.6%, respectively. These were better than the best accuracy of 80.8% of the direct multicategory classification using the support vector machine one-versus-one model in our previous study for the same validation data set. Moreover, the simple MLR models with both absolute and relative values could provide the possibility of a future clinical decision support system for ATLS classification. The perfusion index and new index were more appropriate as relative changes than as absolute values.
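
    A minimal sketch (synthetic data, not the rat dataset) of the two-step approach described above: predict percent blood loss with support vector regression or a multivariate linear regression, then bin the prediction into the four ATLS classes using the commonly cited blood-loss cut-offs.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.svm import SVR

        rng = np.random.default_rng(5)
        n = 78
        X = rng.normal(size=(n, 3))                   # e.g. relative HR, lactate, perfusion index
        blood_loss = 20 + 10 * X[:, 0] - 5 * X[:, 2] + rng.normal(0, 3, n)

        def atls_class(loss_pct):
            # Commonly cited ATLS classes I-IV by estimated blood loss: <15, 15-30, 30-40, >40 percent.
            return np.digitize(loss_pct, [15, 30, 40]) + 1

        for name, reg in [("MLR", LinearRegression()), ("SVR", SVR(kernel="rbf", C=10))]:
            pred = reg.fit(X, blood_loss).predict(X)
            acc = np.mean(atls_class(pred) == atls_class(blood_loss))
            print(name, "class agreement:", round(acc, 2))   # in-sample, for illustration only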

  3. Influence of anthropometric parameters on ultrasound measurements of Os calcis.

    PubMed

    Hans, D; Schott, A M; Arlot, M E; Sornay, E; Delmas, P D; Meunier, P J

    1995-01-01

    Few data have been published concerning the influence of height, weight and body mass index (BMI) on broadband ultrasound attenuation (BUA), speed of sound (SOS) and the Lunar "stiffness" index, and always in small population samples. The first aim of the present cross-sectional study was to determine whether anthropometric factors have a significant influence on ultrasound measurements. The second objective was to establish whether these parameters have a real effect or whether their influence is due only to measurement errors. We measured, in 271 healthy French women (mean age 77 +/- 11 years; range 31-97 years), the following parameters: age, height, weight, lean and fat body mass, heel width, foot length, knee height and external malleolus (HEM). Simple linear regression analyses between ultrasound and anthropometric parameters were performed. Age, height, and heel width were significant predictors of SOS; age, height, weight, foot length, heel width, HEM, fat mass and lean mass were significant predictors of BUA; age, height, weight, heel width, HEM, fat mass and lean mass were significant predictors of stiffness. In the multiple regression analysis, once the analysis had been adjusted for age, only heel width was a significant predictor for SOS (p = 0.0007), weight for BUA (p = 0.0001), and weight (p = 0.0001) and heel width (p = 0.004) for the stiffness index. Besides their statistical meaning, the regression coefficients have a more clinically relevant interpretation, which is developed in the text. These results confirm the influence of anthropometric factors on the ultrasonic parameter values, because BUA and SOS were in part dependent on heel width and weight. The influence of the position of the transducer on the calcaneus should be taken into account to optimize the methods of measurement using ultrasound.

  4. Statistical performance of image cytometry for DNA, lipids, cytokeratin, & CD45 in a model system for circulation tumor cell detection.

    PubMed

    Futia, Gregory L; Schlaepfer, Isabel R; Qamar, Lubna; Behbakht, Kian; Gibson, Emily A

    2017-07-01

    Detection of circulating tumor cells (CTCs) in a blood sample is limited by the sensitivity and specificity of the biomarker panel used to identify CTCs over other blood cells. In this work, we present Bayesian theory that shows how test sensitivity and specificity set the rarity of cell that a test can detect. We perform our calculation of sensitivity and specificity on our image cytometry biomarker panel by testing on pure disease-positive (D+) populations (MCF7 cells) and pure disease-negative (D-) populations (leukocytes). In this system, we performed multi-channel confocal fluorescence microscopy to image biomarkers of DNA, lipids, CD45, and Cytokeratin. Using custom software, we segmented our confocal images into regions of interest consisting of individual cells and computed the image metrics of total signal, second spatial moment, spatial frequency second moment, and the product of the spatial-spatial frequency moments. We present our analysis of these 16 features. The best performing of the 16 features produced an average separation of three standard deviations between D+ and D- and an average detectable rarity of ∼1 in 200. We performed multivariable regression and feature selection to combine multiple features for increased performance and showed an average separation of seven standard deviations between the D+ and D- populations, making our average detectable rarity ∼1 in 480. Histograms and receiver operating characteristic (ROC) curves for these features and regressions are presented. We conclude that simple regression analysis holds promise to further improve the separation of rare cells in cytometry applications. © 2017 International Society for Advancement of Cytometry.

  5. Hypoalbuminaemia predicts outcome in adult patients with congenital heart disease

    PubMed Central

    Kempny, Aleksander; Diller, Gerhard-Paul; Alonso-Gonzalez, Rafael; Uebing, Anselm; Rafiq, Isma; Li, Wei; Swan, Lorna; Hooper, James; Donovan, Jackie; Wort, Stephen J; Gatzoulis, Michael A; Dimopoulos, Konstantinos

    2015-01-01

    Background In patients with acquired heart failure, hypoalbuminaemia is associated with an increased risk of death. The prevalence of hypoproteinaemia and hypoalbuminaemia and their relation to outcome in adult patients with congenital heart disease (ACHD) remain, however, unknown. Methods Data on patients with ACHD who underwent blood testing in our centre within the last 14 years were collected. The relation between laboratory, clinical or demographic parameters at baseline and mortality was assessed using Cox proportional hazards regression analysis. Results A total of 2886 patients with ACHD were included. Mean age was 33.3 years (23.6–44.7) and 50.1% of patients were men. Median plasma albumin concentration was 41.0 g/L (38.0–44.0), whereas hypoalbuminaemia (<35 g/L) was present in 13.9% of patients. The prevalence of hypoalbuminaemia was significantly higher in patients with ACHD lesions of great complexity (18.2%) compared with patients with moderate (11.3%) or simple ACHD lesions (12.1%, p<0.001). During a median follow-up of 5.7 years (3.3–9.6), 327 (11.3%) patients died. On univariable Cox regression analysis, hypoalbuminaemia was a strong predictor of outcome (HR 3.37, 95% CI 2.67 to 4.25, p<0.0001). On multivariable Cox regression, after adjusting for age, sodium and creatinine concentration, liver dysfunction, functional class and disease complexity, hypoalbuminaemia remained a significant predictor of death. Conclusions Hypoalbuminaemia is common in patients with ACHD and is associated with a threefold increased risk of death. Hypoalbuminaemia, therefore, should be included in risk-stratification algorithms as it may assist management decisions and timing of interventions in the growing ACHD population. PMID:25736048
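
    A short sketch of a Cox proportional hazards model of the kind used above, fitted with the lifelines package on made-up data (an albumin flag, age, follow-up time and a death indicator); it is illustrative only and does not reproduce the study's model.

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(6)
        n = 500
        df = pd.DataFrame({
            "hypoalbuminaemia": rng.integers(0, 2, n),
            "age": rng.normal(35, 12, n),
        })
        hazard = 0.02 * np.exp(1.2 * df["hypoalbuminaemia"] + 0.02 * (df["age"] - 35))
        df["time"] = rng.exponential(1 / hazard)          # follow-up time in years
        df["death"] = (df["time"] < 10).astype(int)
        df["time"] = df["time"].clip(upper=10)            # administrative censoring at 10 years

        cph = CoxPHFitter()
        cph.fit(df, duration_col="time", event_col="death")
        print(cph.summary[["coef", "exp(coef)", "p"]])    # exp(coef) is the hazard ratio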

  6. A comparison of long-term parallel measurements of sunshine duration obtained with a Campbell-Stokes sunshine recorder and two automated sunshine sensors

    NASA Astrophysics Data System (ADS)

    Baumgartner, D. J.; Pötzi, W.; Freislich, H.; Strutzmann, H.; Veronig, A. M.; Foelsche, U.; Rieder, H. E.

    2017-06-01

    In recent decades, automated sensors for sunshine duration (SD) measurements have been introduced in meteorological networks, thereby replacing traditional instruments, most prominently the Campbell-Stokes (CS) sunshine recorder. Parallel records of automated and traditional SD recording systems are rare. Nevertheless, such records are important to understand the differences/similarities in SD totals obtained with different instruments and how changes in monitoring device type affect the homogeneity of SD records. This study investigates the differences/similarities in parallel SD records obtained with a CS and two automated SD sensors between 2007 and 2016 at the Kanzelhöhe Observatory, Austria. Comparing individual records of daily SD totals, we find differences of both positive and negative sign, with smallest differences between the automated sensors. The larger differences between CS-derived SD totals and those from automated sensors can be attributed (largely) to the higher sensitivity threshold of the CS instrument. Correspondingly, the closest agreement among all sensors is found during summer, the time of year when sensitivity thresholds are least critical. Furthermore, we investigate the performance of various models to create the so-called sensor-type-equivalent (STE) SD records. Our analysis shows that regression models including all available data on daily (or monthly) time scale perform better than simple three- (or four-) point regression models. Despite general good performance, none of the considered regression models (of linear or quadratic form) emerges as the "optimal" model. Although STEs prove useful for relating SD records of individual sensors on daily/monthly time scales, this does not ensure that STE (or joint) records can be used for trend analysis.

  7. Suspicion of respiratory tract infection with multidrug-resistant Enterobacteriaceae: epidemiology and risk factors from a Paediatric Intensive Care Unit.

    PubMed

    Renk, Hanna; Stoll, Lenja; Neunhoeffer, Felix; Hölzl, Florian; Kumpf, Matthias; Hofbeck, Michael; Hartl, Dominik

    2017-02-21

    Multidrug-resistant (MDR) infections are a serious concern for children admitted to the Paediatric Intensive Care Unit (PICU). Tracheal colonization with MDR Enterobacteriaceae predisposes to respiratory infection, but the underlying risk factors are poorly understood. This study aims to determine the incidence of children with suspected infection during mechanical ventilation and to analyse risk factors for the finding of MDR Enterobacteriaceae in tracheal aspirates. A retrospective single-centre analysis of Enterobacteriaceae isolates from the lower respiratory tract of ventilated PICU patients from 2005 to 2014 was performed. Resistance status was determined and clinical records were reviewed for potential risk factors. A classification and regression tree (CRT) to predict risk factors for infection with MDR Enterobacteriaceae was employed. The model was validated by simple and multivariable logistic regression. One hundred sixty-seven Enterobacteriaceae isolates in 123 children were identified. The most frequent isolates were Enterobacter spp., Klebsiella spp. and E. coli. Among these, 116 (69%) isolates were susceptible and 51 (31%) were MDR. In the CRT analysis, antibiotic exposure for ≥ 7 days and the presence of gastrointestinal comorbidity were the most relevant predictors of an MDR isolate. Antibiotic exposure for ≥ 7 days was confirmed as a significant risk factor for infection with MDR Enterobacteriaceae by a multivariable logistic regression model. This study shows that critically ill children with tracheal Enterobacteriaceae infection are at risk of carrying MDR isolates. Prior use of antibiotics for ≥ 7 days significantly increased the risk of finding MDR organisms in ventilated PICU patients with suspected infection. Our results imply that early identification of patients at risk, rapid microbiological diagnostics and tailored antibiotic therapy are essential to improve the management of critically ill children infected with Enterobacteriaceae.
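
    A condensed sketch of the two-stage analysis described above, on simulated data: a classification tree to surface candidate risk factors, followed by a multivariable logistic regression to confirm them. The predictor names and effect sizes are hypothetical.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf
        from sklearn.tree import DecisionTreeClassifier, export_text

        rng = np.random.default_rng(7)
        n = 167
        df = pd.DataFrame({
            "abx_ge7d": rng.integers(0, 2, n),        # antibiotics for >= 7 days
            "gi_comorbidity": rng.integers(0, 2, n),
            "age_years": rng.uniform(0, 16, n),
        })
        logit_p = -1.5 + 1.2 * df["abx_ge7d"] + 0.8 * df["gi_comorbidity"]
        df["mdr"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

        tree = DecisionTreeClassifier(max_depth=2, random_state=0)
        tree.fit(df[["abx_ge7d", "gi_comorbidity", "age_years"]], df["mdr"])
        print(export_text(tree, feature_names=["abx_ge7d", "gi_comorbidity", "age_years"]))

        # Validate the tree's candidate predictors with a multivariable logistic regression.
        fit = smf.logit("mdr ~ abx_ge7d + gi_comorbidity + age_years", data=df).fit(disp=0)
        print(np.exp(fit.params))                     # odds ratios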

  8. Quantification of trace metals in infant formula premixes using laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Cama-Moncunill, Raquel; Casado-Gavalda, Maria P.; Cama-Moncunill, Xavier; Markiewicz-Keszycka, Maria; Dixit, Yash; Cullen, Patrick J.; Sullivan, Carl

    2017-09-01

    Infant formula is a human milk substitute generally based upon fortified cow milk components. In order to mimic the composition of breast milk, trace elements such as copper, iron and zinc are usually added in a single operation using a premix. The correct addition of premixes must be verified to ensure that the target levels in infant formulae are achieved. In this study, a laser-induced breakdown spectroscopy (LIBS) system was assessed as a fast validation tool for trace element premixes. LIBS is a promising emission spectroscopic technique for elemental analysis, which offers real-time analyses, little to no sample preparation and ease of use. LIBS was employed for copper and iron determinations of premix samples ranging approximately from 0 to 120 mg/kg for Cu and from 0 to 1640 mg/kg for Fe. LIBS spectra are affected by several parameters, hindering subsequent quantitative analyses. This work aimed at testing three matrix-matched calibration approaches (simple linear regression, multi-linear regression and partial least squares (PLS) regression) as means for precision and accuracy enhancement of LIBS quantitative analysis. All calibration models were first developed using a training set and then validated with an independent test set. PLS yielded the best results. For instance, the PLS model for copper provided a coefficient of determination (R2) of 0.995 and a root mean square error of prediction (RMSEP) of 14 mg/kg. Furthermore, LIBS was employed to penetrate through the samples by repetitively measuring the same spot. Consequently, LIBS spectra can be obtained as a function of sample layers. This information was used to explore whether measuring deeper into the sample could reduce possible surface-contaminant effects and provide better quantifications.
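
    A minimal sketch of a PLS calibration of the type described above, using scikit-learn on synthetic "spectra"; it only illustrates the train/test split and the RMSEP calculation, not the actual LIBS data handling.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.metrics import mean_squared_error, r2_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(8)
        n_samples, n_wavelengths = 60, 200
        concentration = rng.uniform(0, 120, n_samples)            # mg/kg Cu (hypothetical)
        peak = np.exp(-0.5 * ((np.arange(n_wavelengths) - 80) / 5) ** 2)
        spectra = np.outer(concentration, peak) + rng.normal(0, 2, (n_samples, n_wavelengths))

        X_train, X_test, y_train, y_test = train_test_split(spectra, concentration,
                                                            test_size=0.3, random_state=0)
        pls = PLSRegression(n_components=3).fit(X_train, y_train)
        y_pred = pls.predict(X_test).ravel()
        rmsep = np.sqrt(mean_squared_error(y_test, y_pred))
        print("R2 =", round(r2_score(y_test, y_pred), 3), " RMSEP =", round(rmsep, 1), "mg/kg")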

  9. Biological Parametric Mapping: A Statistical Toolbox for Multi-Modality Brain Image Analysis

    PubMed Central

    Casanova, Ramon; Ryali, Srikanth; Baer, Aaron; Laurienti, Paul J.; Burdette, Jonathan H.; Hayasaka, Satoru; Flowers, Lynn; Wood, Frank; Maldjian, Joseph A.

    2006-01-01

    In recent years multiple brain MR imaging modalities have emerged; however, analysis methodologies have mainly remained modality specific. In addition, when comparing across imaging modalities, most researchers have been forced to rely on simple region-of-interest type analyses, which do not allow the voxel-by-voxel comparisons necessary to answer more sophisticated neuroscience questions. To overcome these limitations, we developed a toolbox for multimodal image analysis called biological parametric mapping (BPM), based on a voxel-wise use of the general linear model. The BPM toolbox incorporates information obtained from other modalities as regressors in a voxel-wise analysis, thereby permitting investigation of more sophisticated hypotheses. The BPM toolbox has been developed in MATLAB with a user friendly interface for performing analyses, including voxel-wise multimodal correlation, ANCOVA, and multiple regression. It has a high degree of integration with the SPM (statistical parametric mapping) software relying on it for visualization and statistical inference. Furthermore, statistical inference for a correlation field, rather than a widely-used T-field, has been implemented in the correlation analysis for more accurate results. An example with in-vivo data is presented demonstrating the potential of the BPM methodology as a tool for multimodal image analysis. PMID:17070709

  10. Computer Simulation of Human Service Program Evaluations.

    ERIC Educational Resources Information Center

    Trochim, William M. K.; Davis, James E.

    1985-01-01

    Describes uses of computer simulations for the context of human service program evaluation. Presents simple mathematical models for most commonly used human service outcome evaluation designs (pretest-posttest randomized experiment, pretest-posttest nonequivalent groups design, and regression-discontinuity design). Translates models into single…

  11. From Interaction to Co-Association —A Fisher r-To-z Transformation-Based Simple Statistic for Real World Genome-Wide Association Study

    PubMed Central

    Yuan, Zhongshang; Liu, Hong; Zhang, Xiaoshuai; Li, Fangyu; Zhao, Jinghua; Zhang, Furen; Xue, Fuzhong

    2013-01-01

    Currently, the genetic variants identified by genome wide association study (GWAS) generally account for only a small proportion of the total heritability of complex disease. One crucial reason is the underutilization of the gene-gene joint effects commonly encountered in GWAS, which include both main effects and co-association. However, gene-gene co-association is often vaguely subsumed under the framework of gene-gene interaction. From the causal graph perspective, we elucidate in detail the concept and rationale of gene-gene co-association as well as its relationship with traditional gene-gene interaction, and propose two Fisher r-to-z transformation-based simple statistics to detect it. Three series of simulations further highlight that gene-gene co-association refers to the extent to which the joint effects of two genes differ from the main effects, owing not only to the traditional interaction under the nearly independent condition but also to the correlation between the two genes. The proposed statistics are more powerful than logistic regression under various situations, are not affected by linkage disequilibrium, and have an acceptable false positive rate as long as a reasonable GWAS data analysis roadmap is strictly followed. Furthermore, an application to gene pathway analysis associated with leprosy confirms in practice that the proposed gene-gene co-association concepts, as well as the corresponding statistics, are strongly in line with reality. PMID:23923021
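
    A small sketch of a basic Fisher r-to-z comparison of two correlation coefficients, the transformation underlying the statistics proposed above (the paper's co-association statistics are more elaborate); the group sizes and correlations are invented.

        import numpy as np
        from scipy.stats import norm

        def fisher_z(r):
            return np.arctanh(r)                 # 0.5 * ln((1 + r) / (1 - r))

        # Correlation between two SNP scores in cases and in controls (hypothetical values).
        r_cases, n_cases = 0.18, 1200
        r_controls, n_controls = 0.05, 1500

        z = (fisher_z(r_cases) - fisher_z(r_controls)) / np.sqrt(
            1 / (n_cases - 3) + 1 / (n_controls - 3))
        p = 2 * norm.sf(abs(z))
        print(f"z = {z:.2f}, p = {p:.3g}")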

  12. The significance of organ prolapse in gastroschisis.

    PubMed

    Koehler, Shannon M; Szabo, Aniko; Loichinger, Matt; Peterson, Erika; Christensen, Melissa; Wagner, Amy J

    2017-12-01

    The aim of this study was to evaluate the incidence and importance of organ prolapse (stomach, bladder, reproductive organs) in gastroschisis. This is a retrospective review of gastroschisis patients from 2000 to 2014 at a single tertiary institution. Statistical analysis was performed using a chi-square test, Student's t test, log-rank test, or Cox regression analysis models. All tests were conducted as two-tailed tests, and p-values <0.05 were considered statistically significant. One hundred seventy-one gastroschisis patients were identified. Sixty-nine (40.6%) had at least one prolapsed organ besides bowel. The most commonly prolapsed organs were stomach (n=45, 26.3%), reproductive organs (n=34, 19.9%), and bladder (n=15, 8.8%). Patients with prolapsed organs were more likely to have simple gastroschisis with significant decreases in the rate of atresia and necrosis/perforation. They progressed to earlier enteral feeds, discontinuation of parenteral nutrition, and discharge. Likewise, these patients were less likely to have complications such as central line infections, sepsis, and short gut syndrome. Gastroschisis is typically described as isolated bowel herniation, but a large portion have prolapse of other organs. Prolapsed organs are associated with simple gastroschisis, and improved outcomes most likely due to a larger fascial defect. This may be useful for prenatal and postnatal counseling of families. Case Control/Retrospective Comparative Study. Level III. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Characterizing error distributions for MISR and MODIS optical depth data

    NASA Astrophysics Data System (ADS)

    Paradise, S.; Braverman, A.; Kahn, R.; Wilson, B.

    2008-12-01

    The Multi-angle Imaging SpectroRadiometer (MISR) and Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's EOS satellites collect massive, long term data records on aerosol amounts and particle properties. MISR and MODIS have different but complementary sampling characteristics. In order to realize maximum scientific benefit from these data, the nature of their error distributions must be quantified and understood so that discrepancies between them can be rectified and their information combined in the most beneficial way. By 'error' we mean all sources of discrepancies between the true value of the quantity of interest and the measured value, including instrument measurement errors, artifacts of retrieval algorithms, and differential spatial and temporal sampling characteristics. Previously, in [Paradise et al., Fall AGU 2007: A12A-05], we presented a unified, global analysis and comparison of MISR and MODIS measurement biases and variances over the lives of the missions. We used AErosol RObotic NETwork (AERONET) data as ground truth and evaluated MISR and MODIS optical depth distributions relative to AERONET using simple linear regression. However, AERONET data are themselves instrumental measurements subject to sources of uncertainty. In this talk, we discuss results from an improved analysis of MISR and MODIS error distributions that uses errors-in-variables regression, accounting for uncertainties in both the dependent and independent variables. We demonstrate on optical depth data, but the method is generally applicable to other aerosol properties as well.
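
    A brief sketch of an errors-in-variables fit using orthogonal distance regression from scipy.odr, as a stand-in for the errors-in-variables regression described above; the simulated optical depths and the assumed uncertainties are invented.

        import numpy as np
        from scipy import odr

        rng = np.random.default_rng(9)
        n = 200
        true_aod = rng.uniform(0.05, 0.8, n)
        aeronet = true_aod + rng.normal(0, 0.02, n)          # ground "truth" with its own error
        satellite = 0.05 + 0.9 * true_aod + rng.normal(0, 0.05, n)

        def linear(beta, x):
            return beta[0] + beta[1] * x

        # Both variables carry measurement error, so both sx and sy are supplied.
        data = odr.RealData(aeronet, satellite, sx=np.full(n, 0.02), sy=np.full(n, 0.05))
        fit = odr.ODR(data, odr.Model(linear), beta0=[0.0, 1.0]).run()
        print("intercept, slope:", np.round(fit.beta, 3))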

  14. Incidence and mortality of lung cancer: global trends and association with socioeconomic status.

    PubMed

    Wong, Martin C S; Lao, Xiang Qian; Ho, Kin-Fai; Goggins, William B; Tse, Shelly L A

    2017-10-30

    We examined the correlation between lung cancer incidence/mortality and country-specific socioeconomic development, and evaluated its most recent global trends. We retrieved age-standardized lung cancer incidence rates from the GLOBOCAN database, and temporal patterns were assessed from global databases. We employed simple linear regression analysis to evaluate their correlations with the Human Development Index (HDI) and Gross Domestic Product (GDP) per capita. The average annual percent changes (AAPC) of the trends were evaluated by joinpoint regression analysis. Country-specific HDI was strongly correlated with age-standardized incidence (r = 0.70) and mortality (r = 0.67), and to a lesser extent GDP (r = 0.24 to 0.55). Among men, 22 and 30 (out of 38 and 36) countries showed declining incidence and mortality trends, respectively; whilst among women, 19 and 16 countries showed increasing incidence and mortality trends, respectively. Among men, the AAPCs ranged from -2.8 to -0.6 (incidence) and -3.6 to -1.1 (mortality) in countries with a declining trend, whereas among women the AAPC range was 0.4 to 8.9 (incidence) and 1 to 4.4 (mortality) in countries with an increasing trend. Among women, Brazil, Spain and Cyprus had the greatest incidence increase, and all countries in Western, Southern and Eastern Europe reported increasing mortality. These findings highlight the need for targeted preventive measures.

  15. [Depressive symptoms among medical intern students in a Brazilian public university].

    PubMed

    Costa, Edméa Fontes de Oliva; Santana, Ygo Santos; Santos, Ana Teresa Rodrigues de Abreu; Martins, Luiz Antonio Nogueira; Melo, Enaldo Vieira de; Andrade, Tarcísio Matos de

    2012-01-01

    To estimate, among Medical School intern students, the prevalence of depressive symptoms and their severity, as well as associated factors. Cross-sectional study conducted in May 2008, with a representative sample of medical intern students (n = 84) from Universidade Federal de Sergipe (UFS). The Beck Depression Inventory (BDI) and a structured questionnaire containing information on sociodemographic variables, the teaching-learning process, and personal aspects were used. The exploratory data analysis was performed by descriptive and inferential statistics. Finally, the analysis of multiple variables by logistic regression and the calculation of simple and adjusted ORs with their respective 95% confidence intervals were performed. The general prevalence was 40.5%, with 1.2% (95% CI: 0.0-6.5) of severe depressive symptoms; 4.8% (95% CI: 1.3-11.7) of moderate depressive symptoms; and 34.5% (95% CI: 24.5-45.7) of mild depressive symptoms. The logistic regression revealed the variables with the greatest impact on the emergence of depressive symptoms: thoughts of dropping out (OR 6.24; p = 0.002); emotional stress (OR 7.43; p = 0.0004); and average academic performance (OR 4.74; p = 0.0001). The high prevalence of depressive symptoms in the study population was associated with variables related to the teaching-learning process and personal aspects, suggesting that immediate preemptive measures regarding Medical School graduation and student care are required.

  16. Relationship between tinnitus and suicidal behaviour in Korean men and women: a cross-sectional study.

    PubMed

    Seo, J H; Kang, J M; Hwang, S H; Han, K D; Joo, Y H

    2016-06-01

    This study investigated the prevalence of suicidal ideation and behaviour in a representative sample of South Koreans with or without tinnitus. A cross-sectional study. Based on data from the 2010 to 2012 Korean National Health and Nutrition Examination Survey (KNHANES). The study included 17 446 Korean individuals. Participants provided demographic, socio-economic and behavioural information, as well as responses to questionnaires assessing the presence and severity of tinnitus, mental health status regarding stress, depression, and suicidal ideation and attempts. In the univariate analysis, the Rao-Scott chi-square test and logistic regression analysis were used to test the association between tinnitus and risk factors. Simple and multiple linear regression analyses were used to examine the association between tinnitus and mental status. A total of 20.9% and 1.2% of participants with tinnitus, and 12.2% and 0.6% of those without, reported suicidal ideation and attempts, respectively (P < 0.0001 and P = 0.001). Participants reporting suicide attempts showed a higher proportion of severe annoying (6.0%) and irritating (11.8%) tinnitus than those with suicidal ideation (1.4% and 10.2%, respectively). Risks for experiencing tinnitus were significantly associated with suicidal ideation and attempts after adjusting for confounding variables. This study has important implications for enhanced screening and evaluation of mental health status and suicidal ideation/behaviour among tinnitus patients. © 2015 John Wiley & Sons Ltd.

  17. A novel practical scoring for early diagnosis of traumatic bowel injury without obvious solid organ injury in hemodynamically stable patients.

    PubMed

    Zarour, Ahmad; El-Menyar, Ayman; Khattabi, Mazen; Tayyem, Raed; Hamed, Osama; Mahmood, Ismail; Abdelrahman, Husham; Chiu, William; Al-Thani, Hassan

    2014-01-01

    To develop a scoring tool based on clinical and radiological findings for early diagnosis and intervention in hemodynamically stable patients with traumatic bowel and mesenteric injury (TBMI) without obvious solid organ injury (SOI). A retrospective analysis was conducted for all traumatic abdominal injury patients in Qatar from 2008 to 2011. Data included demographics and clinical, radiological and operative findings. Multivariate logistic regression was performed to analyze the predictors of the need for therapeutic laparotomy. A total of 105 patients met the inclusion criteria, with a mean age of 33 ± 15 years. Motor vehicle crashes (58%) and falls (21%) were the major mechanisms of injury. Using a receiver operating characteristic curve, a Z-score of >9 was the cutoff point (AUC = 0.98) for a high probability of TBMI requiring surgical intervention. A Z-score of >9 was found to have a sensitivity of 96.7%, specificity of 97.4%, PPV of 93.5% and NPV of 98.7%. Multivariate regression analysis found a Z-score of >9 to be an independent predictor of the need for exploratory laparotomy (OR 7.0; 95% CI: 2.46-19.78; p = 0.001). This novel tool for early diagnosis of TBMI is found to be simple and helpful in selecting stable patients with free intra-abdominal fluid without SOI for exploratory laparotomy. However, further prospective studies are warranted. Copyright © 2014 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.

  18. 100-point scale evaluating job satisfaction and the results of the 12-item General Health Questionnaire in occupational workers.

    PubMed

    Kawada, Tomoyuki; Yamada, Natsuki

    2012-01-01

    Job satisfaction is an important factor in the occupational lives of workers. In this study, the relationship between a one-dimensional scale of job satisfaction and psychological wellbeing was evaluated. A total of 1,742 workers (1,191 men and 551 women) participated. A 100-point scale evaluating job satisfaction (0 [extremely dissatisfied] to 100 [extremely satisfied]) and the 12-item version of the General Health Questionnaire (GHQ-12), evaluating psychological wellbeing, were used. A multiple regression analysis was then used, controlling for gender and age. The change in the GHQ-12 and job satisfaction scores after a two-year interval was also evaluated. The mean age of the subjects was 42.2 years for the men and 36.2 years for the women. The GHQ-12 and job satisfaction scores were significantly correlated in each generation. The partial correlation coefficients between the changes in the two variables, controlling for age, were -0.395 for men and -0.435 for women (p < 0.001). A multiple regression analysis revealed that the 100-point job satisfaction score was associated with the GHQ-12 results (p < 0.001). The adjusted multiple correlation coefficient was 0.275. The 100-point scale, which is a simple and easy tool for evaluating job satisfaction, was significantly associated with psychological wellbeing as judged using the GHQ-12.

  19. Genetic risk factors for ovarian cancer and their role for endometriosis risk.

    PubMed

    Burghaus, Stefanie; Fasching, Peter A; Häberle, Lothar; Rübner, Matthias; Büchner, Kathrin; Blum, Simon; Engel, Anne; Ekici, Arif B; Hartmann, Arndt; Hein, Alexander; Beckmann, Matthias W; Renner, Stefan P

    2017-04-01

    Several genetic variants have been validated as risk factors for ovarian cancer. Endometriosis has also been described as a risk factor for ovarian cancer. Identifying genetic risk factors that are common to the two diseases might help improve our understanding of the molecular pathogenesis potentially linking the two conditions. In a hospital-based case-control analysis, 12 single nucleotide polymorphisms (SNPs), validated by the Ovarian Cancer Association Consortium (OCAC) and the Collaborative Oncological Gene-environment Study (COGS) project, were genotyped using TaqMan® OpenArray™ analysis. The cases consisted of patients with endometriosis, and the controls were healthy individuals without endometriosis. A total of 385 cases and 484 controls were analyzed. Odds ratios and P values were obtained using simple logistic regression models, as well as from multiple logistic regression models with adjustment for clinical predictors. rs11651755 in HNF1B was found to be associated with endometriosis in this case-control study. The OR was 0.66 (95% CI, 0.51 to 0.84) and the P value after correction for multiple testing was 0.01. None of the other genotypes was associated with a risk for endometriosis. As rs11651755 in HNF1B modified both the ovarian cancer risk and also the risk for endometriosis, HNF1B may be causally involved in the pathogenetic pathway leading from endometriosis to ovarian cancer. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Modeling of Micro Deval abrasion loss based on some rock properties

    NASA Astrophysics Data System (ADS)

    Capik, Mehmet; Yilmaz, Ali Osman

    2017-10-01

    Aggregate is one of the most widely used construction materials. The quality of the aggregate is determined using several testing methods. Among these methods, the Micro Deval Abrasion Loss (MDAL) test is commonly used to determine the quality and the abrasion resistance of aggregate. The main objective of this study is to develop models for the prediction of MDAL from rock properties; uniaxial compressive strength, Brazilian tensile strength, point load index, Schmidt rebound hardness, apparent porosity, void ratio, Cerchar abrasivity index and Bohme abrasion loss are examined. Additionally, the MDAL is modeled using simple regression analysis and multiple linear regression analysis based on the rock properties. The study shows that the MDAL decreases with increasing uniaxial compressive strength, Brazilian tensile strength, point load index, Schmidt rebound hardness and Cerchar abrasivity index. It is also concluded that the MDAL increases with increasing apparent porosity, void ratio and Bohme abrasion loss. The modeling results show that the models based on Bohme abrasion loss and L-type Schmidt rebound hardness give the better forecasting performance for the MDAL. Further models, including the uniaxial compressive strength, the apparent porosity and the Cerchar abrasivity index, are developed for the rapid estimation of the MDAL of the rocks. The developed models were verified by statistical tests. Additionally, it can be stated that the proposed models can be used for forecasting aggregate quality.

  1. Simple, validated vaginal birth after cesarean delivery prediction model for use at the time of admission.

    PubMed

    Metz, Torri D; Stoddard, Gregory J; Henry, Erick; Jackson, Marc; Holmgren, Calla; Esplin, Sean

    2013-09-01

    To create a simple tool for predicting the likelihood of successful trial of labor after cesarean delivery (TOLAC) during the pregnancy after a primary cesarean delivery using variables available at the time of admission. Data for all deliveries at 14 regional hospitals over an 8-year period were reviewed. Women with one cesarean delivery and one subsequent delivery were included. Variables associated with successful VBAC were identified using multivariable logistic regression. Points were assigned to these characteristics, with weighting based on the coefficients in the regression model to calculate an integer VBAC score. The VBAC score was correlated with TOLAC success rate and was externally validated in an independent cohort using a logistic regression model. A total of 5,445 women met inclusion criteria. Of those women, 1,170 (21.5%) underwent TOLAC. Of the women who underwent trial of labor, 938 (80%) had a successful VBAC. A VBAC score was generated based on the Bishop score (cervical examination) at the time of admission, with points added for history of vaginal birth, age younger than 35 years, absence of recurrent indication, and body mass index less than 30. Women with a VBAC score less than 10 had a likelihood of TOLAC success less than 50%. Women with a VBAC score more than 16 had a TOLAC success rate more than 85%. The model performed well in an independent cohort with an area under the curve of 0.80 (95% confidence interval 0.76-0.84). Prediction of TOLAC success at the time of admission is highly dependent on the initial cervical examination. This simple VBAC score can be utilized when counseling women considering TOLAC. II.
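
    A simplified sketch of the general construction described above: turn logistic regression coefficients into an integer admission score and check its discrimination with an AUC. The predictors, weights and data are illustrative and are not the published VBAC score.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(10)
        n = 1170
        df = pd.DataFrame({
            "bishop": rng.integers(0, 11, n),
            "prior_vaginal_birth": rng.integers(0, 2, n),
            "age_lt_35": rng.integers(0, 2, n),
            "bmi_lt_30": rng.integers(0, 2, n),
        })
        lp = (-2.0 + 0.35 * df["bishop"] + 1.0 * df["prior_vaginal_birth"]
              + 0.4 * df["age_lt_35"] + 0.5 * df["bmi_lt_30"])
        df["vbac"] = rng.binomial(1, 1 / (1 + np.exp(-lp)))

        fit = smf.logit("vbac ~ bishop + prior_vaginal_birth + age_lt_35 + bmi_lt_30",
                        data=df).fit(disp=0)
        # Assign integer points roughly in proportion to the regression coefficients.
        coefs = fit.params.drop("Intercept")
        weights = (coefs / coefs.abs().min()).round()
        score = df[weights.index].mul(weights).sum(axis=1)
        print(weights.to_dict())
        print("AUC =", round(roc_auc_score(df["vbac"], score), 2))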

  2. The use of segmented regression in analysing interrupted time series studies: an example in pre-hospital ambulance care.

    PubMed

    Taljaard, Monica; McKenzie, Joanne E; Ramsay, Craig R; Grimshaw, Jeremy M

    2014-06-19

    An interrupted time series design is a powerful quasi-experimental approach for evaluating effects of interventions introduced at a specific point in time. To utilize the strength of this design, a modification to standard regression analysis, such as segmented regression, is required. In segmented regression analysis, the change in intercept and/or slope from pre- to post-intervention is estimated and used to test causal hypotheses about the intervention. We illustrate segmented regression using data from a previously published study that evaluated the effectiveness of a collaborative intervention to improve quality in pre-hospital ambulance care for acute myocardial infarction (AMI) and stroke. In the original analysis, a standard regression model was used with time as a continuous variable. We contrast the results from this standard regression analysis with those from segmented regression analysis. We discuss the limitations of the former and advantages of the latter, as well as the challenges of using segmented regression in analysing complex quality improvement interventions. Based on the estimated change in intercept and slope from pre- to post-intervention using segmented regression, we found insufficient evidence of a statistically significant effect on quality of care for stroke, although potential clinically important effects for AMI cannot be ruled out. Segmented regression analysis is the recommended approach for analysing data from an interrupted time series study. Several modifications to the basic segmented regression analysis approach are available to deal with challenges arising in the evaluation of complex quality improvement interventions.
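
    A short sketch of the segmented regression parameterisation described above: an OLS model with a baseline trend, a level-change indicator and a slope-change term at the interruption point. The monthly quality scores are simulated, not the ambulance data.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(11)
        months = np.arange(1, 49)
        intervention_month = 24
        post = (months > intervention_month).astype(int)
        quality = (60 + 0.2 * months                               # baseline level and trend
                   + 5 * post                                      # level change after intervention
                   + 0.5 * post * (months - intervention_month)    # slope change after intervention
                   + rng.normal(0, 2, months.size))

        df = pd.DataFrame({
            "quality": quality,
            "time": months,
            "post": post,
            "time_after": post * (months - intervention_month),
        })
        fit = smf.ols("quality ~ time + post + time_after", data=df).fit()
        print(fit.params.round(2))    # 'post' estimates the level change, 'time_after' the slope change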

  3. Standardized Regression Coefficients as Indices of Effect Sizes in Meta-Analysis

    ERIC Educational Resources Information Center

    Kim, Rae Seon

    2011-01-01

    When conducting a meta-analysis, it is common to find many collected studies that report regression analyses, because multiple regression analysis is widely used in many fields. Meta-analysis uses effect sizes drawn from individual studies as a means of synthesizing a collection of results. However, indices of effect size from regression analyses…

  4. Simple, fast, and low-cost camera-based water content measurement with colorimetric fluorescent indicator

    NASA Astrophysics Data System (ADS)

    Song, Seok-Jeong; Kim, Tae-Il; Kim, Youngmi; Nam, Hyoungsik

    2018-05-01

    Recently, a simple, sensitive, and low-cost fluorescent indicator has been proposed to determine water contents in organic solvents, drugs, and foodstuffs. A change in water content changes the indicator's fluorescence color under ultraviolet (UV) light. Whereas the previous research estimated water content from the spectrum obtained with a bulky and expensive spectrometer, this paper demonstrates a simple and low-cost camera-based water content measurement scheme with the same fluorescent water indicator. Water content is calculated over the range of 0-30% by quadratic polynomial regression models with color information extracted from the captured images of samples. In particular, several color spaces, such as RGB, xyY, L∗a∗b∗, u′v′, HSV, and YCbCr, have been investigated to establish the optimal color-information features over both linear and nonlinear RGB data given by a camera before and after gamma correction. In the end, a second-order polynomial regression model using HSV features in the linear domain achieves the minimum mean square error of 1.06% under a 3-fold cross-validation scheme. Additionally, the resultant water content estimation model is implemented and evaluated on an off-the-shelf Android-based smartphone.
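
    The regression step can be approximated as follows. This is a minimal sketch, assuming three HSV-like color features and simulated water contents rather than the paper's measurements; it fits a second-order polynomial regression and reports the 3-fold cross-validated mean square error with scikit-learn.

```python
# A rough sketch of a 2nd-order polynomial model mapping color features
# (e.g. HSV channels) to water content, evaluated with 3-fold CV.
# Data are simulated placeholders, not the paper's measurements.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_samples = 60
hsv = rng.uniform(0, 1, size=(n_samples, 3))      # simulated HSV features
water = 30 * hsv[:, 0] + 5 * hsv[:, 1] ** 2 + rng.normal(0, 1, n_samples)

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
mse = -cross_val_score(model, hsv, water, cv=3,
                       scoring="neg_mean_squared_error")
print("3-fold mean square error:", mse.mean())
```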

  5. Estimates of Ground Temperature and Atmospheric Moisture from CERES Observations

    NASA Technical Reports Server (NTRS)

    Wu, Man Li C.; Schubert, Siegfried; Einaudi, Franco (Technical Monitor)

    2000-01-01

    A method is developed to retrieve surface ground temperature (T(sub g)) and atmospheric moisture using clear sky fluxes (CSF) from CERES-TRMM observations. In general, the clear sky outgoing longwave radiation (CLR) is sensitive to upper level moisture (q(sub h)) over wet regions and to T(sub g) over dry regions. The clear sky window flux from 800 to 1200 cm⁻¹ (RadWn) is sensitive to low level moisture (q(sub l)) and T(sub g). Combining these two measurements (CLR and RadWn), T(sub g) and q(sub h) can be estimated over land, while q(sub h) and q(sub l) can be estimated over the oceans. The approach capitalizes on the availability of satellite estimates of CLR and RadWn and other auxiliary satellite data. The basic methodology employs off-line forward radiative transfer calculations to generate synthetic CSF data from two different global 4-dimensional data assimilation products. Simple linear regression is used to relate discrepancies in CSF to discrepancies in T(sub g), q(sub h) and q(sub l). The slopes of the regression lines define sensitivity parameters that can be exploited to help interpret mismatches between satellite observations and model-based estimates of CSF. For illustration, we analyze the discrepancies in the CSF between an early implementation of the Goddard Earth Observing System Data Assimilation System (GEOS-DAS) and a recent operational version of the European Centre for Medium-Range Weather Forecasts data assimilation system. In particular, our analysis of synthetic total and window region CSF differences (computed from two different assimilated data sets) shows that simple linear regression employing Delta(T(sub g)), a broad layer Delta(q(sub l)) from 500 hPa to the surface, and Delta(q(sub h)) from 200 to 300 hPa provides a good approximation to the full radiative transfer calculations, typically explaining more than 90% of the 6-hourly variance in the flux differences. These simple regression relations can be inverted to "retrieve" the errors in the geophysical parameters. Uncertainties (normalized by standard deviation) in the monthly mean retrieved parameters range from 7% for Delta(T(sub g)) to about 20% for Delta(q(sub l)). Our initial application of the methodology employed an early CERES-TRMM data set (CLR and RadWn) to assess the quality of the GEOS-2 data. The results showed that over the tropical and subtropical oceans GEOS-2 is, in general, too wet in the upper troposphere (mean bias of 0.99 mm) and too dry in the lower troposphere (mean bias of -4.7 mm). We note that these errors, as well as a cold bias in T(sub g), have largely been corrected in the current version of GEOS-2 with the introduction of a land surface model, a moist turbulence scheme and the assimilation of SSM/I total precipitable water.

  6. Comparative study of joint analysis of microarray gene expression data in survival prediction and risk assessment of breast cancer patients

    PubMed Central

    2016-01-01

    Microarray gene expression data sets are jointly analyzed to increase statistical power. They could either be merged together or analyzed by meta-analysis. For a given ensemble of data sets, it cannot be foreseen which of these paradigms, merging or meta-analysis, works better. In this article, three joint analysis methods, Z-score normalization, ComBat and the inverse normal method (meta-analysis), were selected for survival prognosis and risk assessment of breast cancer patients. The methods were applied to eight microarray gene expression data sets, totaling 1324 patients with two clinical endpoints, overall survival and relapse-free survival. The performance of the joint analysis methods was evaluated using Cox regression for survival analysis, with independent validation used for bias estimation. Overall, Z-score normalization had a better performance than ComBat and meta-analysis. Higher area under the receiver operating characteristic curve and hazard ratio were also obtained when independent validation was used for bias estimation. With a lower time and memory complexity, Z-score normalization is a simple method for joint analysis of microarray gene expression data sets. The derived findings suggest further assessment of this method in future survival prediction and cancer classification applications. PMID:26504096

  7. Adjusted variable plots for Cox's proportional hazards regression model.

    PubMed

    Hall, C B; Zeger, S L; Bandeen-Roche, K J

    1996-01-01

    Adjusted variable plots are useful in linear regression for outlier detection and for qualitative evaluation of the fit of a model. In this paper, we extend adjusted variable plots to Cox's proportional hazards model for possibly censored survival data. We propose three different plots: a risk level adjusted variable (RLAV) plot in which each observation in each risk set appears, a subject level adjusted variable (SLAV) plot in which each subject is represented by one point, and an event level adjusted variable (ELAV) plot in which the entire risk set at each failure event is represented by a single point. The latter two plots are derived from the RLAV by combining multiple points. In each plot, the regression coefficient and standard error from a Cox proportional hazards regression are obtained by a simple linear regression through the origin fitted to the coordinates of the pictured points. The plots are illustrated with a reanalysis of a dataset of 65 patients with multiple myeloma.

  8. Continuous monitoring of fetal scalp temperature in labor: a new technology validated in a fetal lamb model.

    PubMed

    Lavesson, Tony; Amer-Wåhlin, Isis; Hansson, Stefan; Ley, David; Marsál, Karel; Olofsson, Per

    2010-06-01

    To evaluate new technical equipment for continuous recording of human fetal scalp temperature in labor. Experimental animal study. Two temperature sensors were placed subcutaneously and intracranially on the forehead of 10 fetal lambs and connected to a temperature monitoring system. The system records temperatures simultaneously on-line and stores data to be analyzed off-line. Throughout the experiment, the fetus was oxygenated via the umbilical cord circulation. Asphyxia was induced by intermittent cord compression and assessed by pH in jugular vein blood. The intracranial (ICT) and subcutaneous (SCT) temperatures were compared with simple and polynomial regression analyses. The main outcome measures were absolute and delta ICT and SCT changes. ICT and SCT were both successfully recorded in all 10 cases. With increasing acidosis, the temperatures decreased. The correlation coefficient between ICT and SCT had a range of 0.76-0.97 (median 0.88) by simple linear regression and 0.80-0.99 (median 0.89) by second-degree polynomial regression. After an initial system stabilization period of 10 minutes, the delta temperature values (ICT minus SCT) were less than 1.5 degrees C throughout the experiment in all but one case. The fetal forehead SCT mirrored the ICT closely, with the ICT being higher.

  9. Experimental and computational prediction of glass transition temperature of drugs.

    PubMed

    Alzghoul, Ahmad; Alhalaweh, Amjad; Mahlin, Denny; Bergström, Christel A S

    2014-12-22

    Glass transition temperature (Tg) is an important inherent property of an amorphous solid material which is usually determined experimentally. In this study, the relation between Tg and melting temperature (Tm) was evaluated using a data set of 71 structurally diverse druglike compounds. Further, in silico models for prediction of Tg were developed based on calculated molecular descriptors and linear (multilinear regression, partial least-squares, principal component regression) and nonlinear (neural network, support vector regression) modeling techniques. The models based on Tm predicted Tg with an RMSE of 19.5 K for the test set. Among the five computational models developed herein, the support vector regression gave the best result, with an RMSE of 18.7 K for the test set using only four chemical descriptors. Hence, two different models that predict Tg of drug-like molecules with high accuracy were developed. If Tm is available, a simple linear regression can be used to predict Tg. However, the results also suggest that support vector regression and calculated molecular descriptors can predict Tg with equal accuracy, even before compound synthesis.

  10. Modification of Fox protocol for prediction of maximum oxygen uptake in male university students.

    PubMed

    Bandyopadhyay, Amit; Pal, Sangita

    2015-01-01

    Direct estimation of VO₂max involves laborious, exhaustive, hazardous, time-consuming and expensive experimental protocols. Hence, application of various indirect protocols for prediction of VO₂max has become popular, subject to proper population-specific standardisation of the indirect protocol. Application of the Fox (1973) protocol in sedentary male university students of Kolkata, India, led to premature fatigue in their leg muscles that hindered the muscular activity and left them unable to complete the exercise. The present study was aimed at modifying and validating the Fox (1973) protocol with a convenient workload of 110 W (i.e., the modified Fox test or MFT) in the said population. Ninety sedentary male students were recruited by simple random sampling from the University of Calcutta, India, and randomly assigned into a study group (n=60) and a confirmatory group (n=30). VO₂max was directly estimated by Scholander micro-gas analysis after incremental bicycle exercise. Predicted VO₂max (PVO₂max) was computed from the MFT by using the submaximal heart rate (HR(sub)). In the study group, VO₂max (2216.63 ± 316.77 mL.min⁻¹) was significantly different (P < 0.001) from PVO₂max (3131.73 ± 234.32 mL.min⁻¹) measured by using the equation of Fox (1973). Simple and multiple regression equations were computed for prediction of VO₂max from HR(sub) and physical parameters. Application of these norms in the confirmatory group showed no significant difference between VO₂max and PVO₂max, with substantially small limits of agreement and lower values of SEE. The modified regression norms are therefore recommended for use in the MFT for accurate assessment of VO₂max in the studied population.

  11. Effect of lecture instruction on student performance on qualitative questions

    NASA Astrophysics Data System (ADS)

    Heron, Paula R. L.

    2015-06-01

    The impact of lecture instruction on student conceptual understanding in physics has been the subject of research for several decades. Most studies have reported disappointingly small improvements in student performance on conceptual questions despite direct instruction on the relevant topics. These results have spurred a number of attempts to improve learning in physics courses through new curricula and instructional techniques. This paper contributes to the research base through a retrospective analysis of 20 randomly selected qualitative questions on topics in kinematics, dynamics, electrostatics, waves, and physical optics that have been given in introductory calculus-based physics at the University of Washington over a period of 15 years. In some classes, questions were administered after relevant lecture instruction had been completed; in others, it had yet to begin. Simple statistical tests indicate that the average performance of the "after lecture" classes was significantly better than that of the "before lecture" classes for 11 questions, significantly worse for two questions, and indistinguishable for the remaining seven. However, the classes had not been randomly assigned to be tested before or after lecture instruction. Multiple linear regression was therefore conducted with variables (such as class size) that could plausibly lead to systematic differences in performance and thus obscure (or artificially enhance) the effect of lecture instruction. The regression models support the results of the simple tests for all but four questions. In those cases, the effect of lecture instruction was reduced to a nonsignificant level, or increased to a significant, negative level when other variables were considered. Thus the results provide robust evidence that instruction in lecture can increase student ability to give correct answers to conceptual questions but does not necessarily do so; in some cases it can even lead to a decrease.

  12. Identification of the time-point which gives a plasma rabeprazole concentration that adequately reflects the area under the concentration-time curve.

    PubMed

    Niioka, Takenori; Uno, Tsukasa; Yasui-Furukori, Norio; Shimizu, Mikiko; Sugawara, Kazunobu; Tateishi, Tomonori

    2006-10-01

    The purpose of this study was to evaluate whether a simple formula using limited blood samples can predict the area under the plasma rabeprazole concentration-time curve (AUC) during co-administration with CYP inhibitors. A randomized double-blind placebo-controlled crossover study in three phases was conducted at intervals of 2 weeks. Twenty-one healthy Japanese volunteers, including three CYP2C19 genotype groups, took a single oral 20-mg dose of rabeprazole after three 6-day pretreatments, i.e., clarithromycin 800 mg/day, fluvoxamine 50 mg/day, and placebo. Prediction formulas for the AUC were derived from pharmacokinetic data of the 21 subjects in the three phases using multiple linear regression analysis. Ten blood samples were collected over 24 h to calculate the AUC. Plasma concentrations of rabeprazole were measured by an HPLC assay (l.l.q. = 1 ng/ml). The AUC was based on all the data sets (n=63). The linear regression using two points (C3 and C6) could predict AUC(0-infinity) precisely, irrespective of CYP2C19 genotypes and CYP inhibitors (AUC(0-infinity) = 1.39×C3 + 7.17×C6 + 344.14, r² = 0.825, p < 0.001). The present study demonstrated that the AUC of rabeprazole can be estimated by this simple formula using two-point concentrations. The formula can predict the AUC more accurately than CYP2C19 genotype alone, without requiring genotype determination, even though the AUC differs significantly between CYP2C19 genotypes. Therefore, this prediction formula might be useful to evaluate whether CYP2C19 genotype really reflects the curative effect of rabeprazole.
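
    The quoted two-point formula can be applied directly, as in the sketch below. It assumes that C3 and C6 denote the plasma concentrations 3 and 6 h after dosing and that concentrations are in ng/mL with the AUC in ng·h/mL; these details are not spelled out in the record, so check the original paper before any real use.

```python
# Direct implementation of the two-point prediction formula quoted above:
# AUC(0-inf) = 1.39*C3 + 7.17*C6 + 344.14.
# Assumed units: ng/mL for concentrations, ng*h/mL for the AUC.

def predict_auc(c3, c6):
    """Predict AUC(0-inf) from the 3-h and 6-h plasma concentrations."""
    return 1.39 * c3 + 7.17 * c6 + 344.14

# Example with hypothetical concentrations of 400 ng/mL at 3 h and 150 ng/mL at 6 h.
print(predict_auc(400.0, 150.0))
```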

  13. Retrospective lifetime dietary patterns predict cognitive performance in community-dwelling older Australians.

    PubMed

    Hosking, Diane E; Nettelbeck, Ted; Wilson, Carlene; Danthiir, Vanessa

    2014-07-28

    Dietary intake is a modifiable exposure that may have an impact on cognitive outcomes in older age. The long-term aetiology of cognitive decline and dementia, however, suggests that the relevance of dietary intake extends across the lifetime. In the present study, we tested whether retrospective dietary patterns from the life periods of childhood, early adulthood, adulthood and middle age predicted cognitive performance in a cognitively healthy sample of 352 older Australian adults >65 years. Participants completed the Lifetime Diet Questionnaire and a battery of cognitive tests designed to comprehensively assess multiple cognitive domains. In separate regression models, lifetime dietary patterns were the predictors of cognitive factor scores representing ten constructs derived by confirmatory factor analysis of the cognitive test battery. All regression models were progressively adjusted for the potential confounders of current diet, age, sex, years of education, English as native language, smoking history, income level, apoE ɛ4 status, physical activity, other past dietary patterns and health-related variables. In the adjusted models, lifetime dietary patterns predicted cognitive performance in this sample of older adults. In models additionally adjusted for intake from the other life periods and mechanistic health-related variables, dietary patterns from the childhood period alone reached significance. Higher consumption of the 'coffee and high-sugar, high-fat extras' pattern predicted poorer performance on simple/choice reaction time, working memory, retrieval fluency, short-term memory and reasoning. The 'vegetable and non-processed' pattern negatively predicted simple/choice reaction time, and the 'traditional Australian' pattern positively predicted perceptual speed and retrieval fluency. Identifying early-life dietary antecedents of older-age cognitive performance contributes to formulating strategies for delaying or preventing cognitive decline.

  14. Predicting the duration of sickness absence for patients with common mental disorders in occupational health care.

    PubMed

    Nieuwenhuijsen, Karen; Verbeek, Jos H A M; de Boer, Angela G E M; Blonk, Roland W B; van Dijk, Frank J H

    2006-02-01

    This study attempted to determine the factors that best predict the duration of absence from work among employees with common mental disorders. A cohort of 188 employees, of whom 102 were teachers, on sick leave with common mental disorders was followed for 1 year. Only information potentially available to the occupational physician during a first consultation was included in the predictive model. The predictive power of the variables was tested using Cox's regression analysis with a stepwise backward selection procedure. The hazard ratios (HR) from the final model were used to deduce a simple prediction rule. The resulting prognostic scores were then used to predict the probability of not returning to work after 3, 6, and 12 months. The discriminative ability of the prediction rule was tested by calculating the area under the ROC (receiver operating characteristic) curve. The final Cox's regression model produced the following four predictors of a longer time until return to work: age older than 50 years [HR 0.5, 95% confidence interval (95% CI) 0.3-0.8], expectation of an absence duration longer than 3 months (HR 0.5, 95% CI 0.3-0.8), higher educational level (HR 0.5, 95% CI 0.3-0.8), and a diagnosis of depression or anxiety disorder (HR 0.7, 95% CI 0.4-0.9). The resulting prognostic score yielded areas under the curve ranging from 0.68 to 0.73, which represents acceptable discrimination of the rule. A prediction rule based on four simple variables can be used by occupational physicians to identify unfavorable cases and to predict the duration of sickness absence.

  15. Increased risk of kidney damage among Chinese adults with simple renal cyst.

    PubMed

    Kong, Xianglei; Ma, Xiaojing; Zhang, Chengyin; Su, Hong; Gong, Xiaojie; Xu, Dongmei

    2018-05-04

    The presence of simple renal cyst (SRC) has been related to hypertension, early and long-term allograft function, and aortic disease, but its relationship with kidney damage is still controversial. Accordingly, we conducted a large-sample cross-sectional study to explore the association of SRC with indicators of kidney damage among Chinese adults. A total of 42,369 adults (aged 45.8 ± 13.67 years, 70.6% males) who visited the Health Checkup Clinic were consecutively enrolled. SRC was assessed by ultrasonography according to the Bosniak category. Multiple regression models were applied to explore the relationships between SRC and indicators of kidney damage [proteinuria (dipstick urine protein ≥ 1+) and decreased estimated glomerular filtration rate (DeGFR) < 60 ml/min/1.73 m²]. Among all participants in the study, the prevalence of SRC was 10.5%. As a categorical outcome, participants with more than 1 cyst and those with 1 cyst had higher percentages of proteinuria [53 (5.3%) and 93 (2.7%) vs. 596 (1.6%), p < 0.001] and DeGFR [57 (5.7%) and 85 (2.5%) vs. 278 (0.7%), p < 0.001] compared with participants with no cyst. SRC significantly correlated with proteinuria [OR 1.59 (95% CI 1.30-1.95)] and DeGFR [OR 1.97 (95% CI 1.56-2.47)] after adjusting for potential confounders. Furthermore, the results also demonstrated that maximum diameter (per 1 cm increase), bilateral location, and multiple cysts significantly correlated with DeGFR in the multiple logistic regression analysis. The study revealed that SRC significantly correlated with kidney damage, and special attention should be paid to Chinese adults with SRC.

  16. Association of lung function and chronic obstructive pulmonary disease with American Heart Association's Life's Simple 7 cardiovascular health metrics.

    PubMed

    Fan, Wenjun; Lee, Hwa; Lee, Angela; Kieu, Chi; Wong, Nathan D

    2017-10-01

    Chronic obstructive pulmonary disease (COPD) is the third leading cause of death in the U.S. There is a strong association between COPD and cardiovascular (CV) disease; however, the relation between COPD and CV health factors is not well defined. We examined the relation between lung function and CV health factors defined by the American Heart Association's (AHA) Life's Simple 7 (LS7). We studied 6352 adults aged ≥20 from the National Health and Nutrition Examination Survey (NHANES) 2009-2012. Analysis of variance was used to compare mean FEV1% of predicted across levels of each LS7 metric, and population attributable risk was calculated based on COPD prevalence. We also conducted linear regression and logistic regression analyses to determine the association between lung function, COPD and LS7 score. Overall, 19.9% of subjects were defined as having COPD. Subjects in the highest categories of the LS7 metrics had the highest mean values of FEV1% of predicted (p < 0.0001 except for total cholesterol). Current smoking and hypertension had population attributable risks of 21.8% and 21.1% for COPD, respectively. Compared with subjects with 0 ideal health factors, the gender- and ethnicity-adjusted odds ratios (95% CI) for COPD were 0.45 (0.22-0.93) and 0.22 (0.11-0.43) for those with 4 and 5-7 ideal factors, respectively, but adjustment for age attenuated this relation. The LS7 score is associated with lung function as well as the odds of COPD, an association largely explained by age. Studies are needed to show whether promotion of CV health will preserve healthy lung function. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Simple, simultaneous gravimetric determination of calcite and dolomite in calcareous soils

    USDA-ARS?s Scientific Manuscript database

    Literature pertaining to determination of calcite and dolomite is not modern and describes slow methods that require expensive specialized apparatus. The objective of this paper was to describe a new method that requires no specialized equipment. Linear regressions and correlation coefficients for...

  18. Recursive least squares method of regression coefficients estimation as a special case of Kalman filter

    NASA Astrophysics Data System (ADS)

    Borodachev, S. M.

    2016-06-01

    A simple derivation of the recursive least squares (RLS) method equations is given as a special case of Kalman filter estimation of a constant system state under changing observation conditions. A numerical example illustrates the application of RLS to a multicollinearity problem.
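
    A minimal numerical sketch of this correspondence is given below: the regression coefficients play the role of a constant Kalman state, and each new observation updates the estimate and its covariance. The data are simulated and the prior covariance is an arbitrary large value.

```python
# Recursive least squares (RLS) viewed as a Kalman filter with a constant
# state: the regression coefficients are the state, each observation updates
# the estimate and its covariance. Simulated data, numpy only.
import numpy as np

rng = np.random.default_rng(2)
true_beta = np.array([1.5, -0.7])
n, p = 200, 2

beta = np.zeros(p)               # current coefficient estimate
P = np.eye(p) * 1e3              # estimate covariance (large = vague prior)
noise_var = 0.5 ** 2             # assumed observation noise variance

for _ in range(n):
    x = rng.normal(size=p)                    # new regressor row
    y = x @ true_beta + rng.normal(0, 0.5)    # new observation
    # Kalman-style update with a constant state (no process noise).
    k = P @ x / (x @ P @ x + noise_var)       # gain vector
    beta = beta + k * (y - x @ beta)          # correct the estimate
    P = P - np.outer(k, x) @ P                # update the covariance

print(beta)   # should be close to true_beta
```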

  19. A modified approach to estimating sample size for simple logistic regression with one continuous covariate.

    PubMed

    Novikov, I; Fund, N; Freedman, L S

    2010-01-15

    Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
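
    Since the abstract's comparison rests on simulated power, a hedged sketch of such a simulation is shown below. It is not the Hsieh or Schouten formula; it simply draws data sets with a chosen population prevalence and log odds ratio, fits a simple logistic regression to each, and reports the rejection rate of the Wald test on the slope.

```python
# Simulation-based power check for simple logistic regression with one
# normally distributed covariate. Effect size and prevalence are illustrative.
import numpy as np
import statsmodels.api as sm
from scipy.optimize import brentq

def simulated_power(n, beta1, prevalence, n_sim=500, alpha=0.05, seed=0):
    """Fraction of simulated data sets in which the Wald test on the slope
    of a simple logistic regression rejects H0: beta1 = 0."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        x = rng.normal(size=n)
        # Pick the intercept so the mean event probability matches the
        # requested population prevalence for this draw of x.
        f = lambda b0: np.mean(1.0 / (1.0 + np.exp(-(b0 + beta1 * x)))) - prevalence
        b0 = brentq(f, -20.0, 20.0)
        p = 1.0 / (1.0 + np.exp(-(b0 + beta1 * x)))
        y = rng.binomial(1, p)
        fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
        if fit.pvalues[1] < alpha:
            rejections += 1
    return rejections / n_sim

# Example: n = 200 subjects, log odds ratio 0.5 per unit of x, 20% prevalence.
print(simulated_power(n=200, beta1=0.5, prevalence=0.2))
```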

  20. Effects of practice on the Wechsler Adult Intelligence Scale-IV across 3- and 6-month intervals.

    PubMed

    Estevis, Eduardo; Basso, Michael R; Combs, Dennis

    2012-01-01

    A total of 54 participants (age M = 20.9; education M = 14.9; initial Full Scale IQ M = 111.6) were administered the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) at baseline and again either 3 or 6 months later. Scores on the Full Scale IQ, Verbal Comprehension, Working Memory, Perceptual Reasoning, Processing Speed, and General Ability Indices improved approximately 7, 5, 4, 5, 9, and 6 points, respectively, and increases were similar regardless of whether the re-examination occurred over 3- or 6-month intervals. Reliable change indices (RCI) were computed using the simple difference and bivariate regression methods, providing estimated base rates of change across time. The regression method provided more accurate estimates of reliable change than did the simple difference between baseline and follow-up scores. These findings suggest that prior exposure to the WAIS-IV results in significant score increments. These gains reflect practice effects instead of genuine intellectual changes, which may lead to errors in clinical judgment.
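
    The two RCI approaches contrasted above can be sketched as below. The reliability, regression slope, intercept, and standard error of estimate are hypothetical illustrations, not the WAIS-IV norms reported in the study.

```python
# Sketch of two reliable change index (RCI) approaches: a simple-difference
# RCI and a regression-based RCI built from a predicted retest score.
import math

def rci_simple_difference(baseline, retest, sd_baseline, reliability):
    """Simple-difference RCI: the score change divided by its standard error."""
    sem = sd_baseline * math.sqrt(1.0 - reliability)   # standard error of measurement
    se_diff = math.sqrt(2.0) * sem                     # SE of the difference score
    return (retest - baseline) / se_diff

def rci_regression(baseline, retest, slope, intercept, se_estimate):
    """Regression-based RCI: observed retest minus the retest score predicted
    from baseline, divided by the standard error of estimate of that regression."""
    predicted_retest = intercept + slope * baseline
    return (retest - predicted_retest) / se_estimate

# Hypothetical example: Full Scale IQ 110 at baseline, 117 at retest.
print(rci_simple_difference(110, 117, sd_baseline=15, reliability=0.95))
print(rci_regression(110, 117, slope=0.95, intercept=10.0, se_estimate=4.0))
```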

  1. Simple, Efficient Estimators of Treatment Effects in Randomized Trials Using Generalized Linear Models to Leverage Baseline Variables

    PubMed Central

    Rosenblum, Michael; van der Laan, Mark J.

    2010-01-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636

  2. Transmission of linear regression patterns between time series: From relationship in time series to complex networks

    NASA Astrophysics Data System (ADS)

    Gao, Xiangyun; An, Haizhong; Fang, Wei; Huang, Xuan; Li, Huajiao; Zhong, Weiqiong; Ding, Yinghui

    2014-07-01

    The linear regression parameters between two time series can differ under different lengths of the observation period. If we study the whole period with a sliding window of a short period, the change of the linear regression parameters is a process of dynamic transmission over time. We present a simple and efficient computational scheme, a linear regression pattern transmission algorithm, which transforms linear regression patterns into directed and weighted networks. The linear regression patterns (nodes) are defined by the combination of intervals of the linear regression parameters and the results of the significance testing under different sizes of the sliding window. The transmissions between adjacent patterns are defined as edges, and the weights of the edges are the frequencies of the transmissions. The major patterns, the distance, and the medium in the process of the transmission can be captured. The statistical results of weighted out-degree and betweenness centrality are mapped on timelines, which shows the features of the distribution of the results. Many measurements in different areas that involve two related time series variables could take advantage of this algorithm to characterize the dynamic relationships between the time series from a new perspective.

  3. Transmission of linear regression patterns between time series: from relationship in time series to complex networks.

    PubMed

    Gao, Xiangyun; An, Haizhong; Fang, Wei; Huang, Xuan; Li, Huajiao; Zhong, Weiqiong; Ding, Yinghui

    2014-07-01

    The linear regression parameters between two time series can differ under different lengths of the observation period. If we study the whole period with a sliding window of a short period, the change of the linear regression parameters is a process of dynamic transmission over time. We present a simple and efficient computational scheme, a linear regression pattern transmission algorithm, which transforms linear regression patterns into directed and weighted networks. The linear regression patterns (nodes) are defined by the combination of intervals of the linear regression parameters and the results of the significance testing under different sizes of the sliding window. The transmissions between adjacent patterns are defined as edges, and the weights of the edges are the frequencies of the transmissions. The major patterns, the distance, and the medium in the process of the transmission can be captured. The statistical results of weighted out-degree and betweenness centrality are mapped on timelines, which shows the features of the distribution of the results. Many measurements in different areas that involve two related time series variables could take advantage of this algorithm to characterize the dynamic relationships between the time series from a new perspective.
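
    A rough sketch of this pattern-transmission idea is given below: slide a window over two simulated series, label each window by the sign and significance of its regression slope, and count transitions between consecutive labels as weighted directed edges. The window size, thresholds, and two-level pattern definition are simplifications of the scheme described in the abstract.

```python
# Sliding-window regression patterns turned into a directed, weighted
# edge list of pattern transitions. Simulated series, arbitrary thresholds.
import numpy as np
from scipy.stats import linregress
from collections import Counter

rng = np.random.default_rng(3)
n = 500
x = np.cumsum(rng.normal(size=n))
y = 0.6 * x + np.cumsum(rng.normal(scale=0.5, size=n))

window = 50

def pattern(xw, yw):
    """Label a window by slope sign and significance of its linear fit."""
    fit = linregress(xw, yw)
    sign = "pos" if fit.slope >= 0 else "neg"
    sig = "sig" if fit.pvalue < 0.05 else "nonsig"
    return f"{sign}-{sig}"

labels = [pattern(x[i:i + window], y[i:i + window])
          for i in range(n - window + 1)]

# Directed, weighted edges: frequency of each consecutive pattern transition.
edges = Counter(zip(labels[:-1], labels[1:]))
for (src, dst), weight in edges.most_common():
    print(f"{src} -> {dst}: {weight}")
```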

  4. Predicting the risk of malignancy in adnexal masses based on the Simple Rules from the International Ovarian Tumor Analysis group.

    PubMed

    Timmerman, Dirk; Van Calster, Ben; Testa, Antonia; Savelli, Luca; Fischerova, Daniela; Froyman, Wouter; Wynants, Laure; Van Holsbeke, Caroline; Epstein, Elisabeth; Franchi, Dorella; Kaijser, Jeroen; Czekierdowski, Artur; Guerriero, Stefano; Fruscio, Robert; Leone, Francesco P G; Rossi, Alberto; Landolfo, Chiara; Vergote, Ignace; Bourne, Tom; Valentin, Lil

    2016-04-01

    Accurate methods to preoperatively characterize adnexal tumors are pivotal for optimal patient management. A recent meta-analysis concluded that the International Ovarian Tumor Analysis algorithms such as the Simple Rules are the best approaches to preoperatively classify adnexal masses as benign or malignant. We sought to develop and validate a model to predict the risk of malignancy in adnexal masses using the ultrasound features in the Simple Rules. This was an international cross-sectional cohort study involving 22 oncology centers, referral centers for ultrasonography, and general hospitals. We included consecutive patients with an adnexal tumor who underwent a standardized transvaginal ultrasound examination and were selected for surgery. Data on 5020 patients were recorded in 3 phases from 2002 through 2012. The 5 Simple Rules features indicative of a benign tumor (B-features) and the 5 features indicative of malignancy (M-features) are based on the presence of ascites, tumor morphology, and degree of vascularity at ultrasonography. The gold standard was the histopathologic diagnosis of the adnexal mass (pathologist blinded to ultrasound findings). Logistic regression analysis was used to estimate the risk of malignancy based on the 10 ultrasound features and type of center. The diagnostic performance was evaluated by area under the receiver operating characteristic curve, sensitivity, specificity, positive likelihood ratio (LR+), negative likelihood ratio (LR-), positive predictive value (PPV), negative predictive value (NPV), and calibration curves. Data on 4848 patients were analyzed. The malignancy rate was 43% (1402/3263) in oncology centers and 17% (263/1585) in other centers. The area under the receiver operating characteristic curve on validation data was very similar in oncology centers (0.917; 95% confidence interval, 0.901-0.931) and other centers (0.916; 95% confidence interval, 0.873-0.945). Risk estimates showed good calibration. In all, 23% of patients in the validation data set had a very low estimated risk (<1%) and 48% had a high estimated risk (≥30%). For the 1% risk cutoff, sensitivity was 99.7%, specificity 33.7%, LR+ 1.5, LR- 0.010, PPV 44.8%, and NPV 98.9%. For the 30% risk cutoff, sensitivity was 89.0%, specificity 84.7%, LR+ 5.8, LR- 0.13, PPV 75.4%, and NPV 93.9%. Quantification of the risk of malignancy based on the Simple Rules has good diagnostic performance both in oncology centers and other centers. A simple classification based on these risk estimates may form the basis of a clinical management system. Patients with a high risk may benefit from surgery by a gynecological oncologist, while patients with a lower risk may be managed locally. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Quantum State Tomography via Linear Regression Estimation

    PubMed Central

    Qi, Bo; Hou, Zhibo; Li, Li; Dong, Daoyi; Xiang, Guoyong; Guo, Guangcan

    2013-01-01

    A simple yet efficient state reconstruction algorithm of linear regression estimation (LRE) is presented for quantum state tomography. In this method, quantum state reconstruction is converted into a parameter estimation problem of a linear regression model and the least-squares method is employed to estimate the unknown parameters. An asymptotic mean squared error (MSE) upper bound for all possible states to be estimated is given analytically, which depends explicitly upon the involved measurement bases. This analytical MSE upper bound can guide one to choose optimal measurement sets. The computational complexity of LRE is O(d⁴), where d is the dimension of the quantum state. Numerical examples show that LRE is much faster than maximum-likelihood estimation for quantum state tomography. PMID:24336519
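
    As a toy analogue of LRE, the sketch below reconstructs a single-qubit state: the Bloch vector is the unknown parameter, noisy Pauli expectation values form the linear regression model, and ordinary least squares recovers the estimate. The measurement model and noise level are assumptions for illustration and do not reproduce the paper's general d-dimensional construction.

```python
# Toy single-qubit "LRE": regress noisy Pauli expectation values onto the
# Bloch-vector parameterisation of the state via least squares.
import numpy as np

rng = np.random.default_rng(4)

# True state: Bloch vector of a slightly mixed qubit.
r_true = np.array([0.4, -0.2, 0.6])

# Each measurement setting probes a_k . r; here the three Pauli axes,
# each repeated 200 times with additive shot-noise.
settings = np.repeat(np.eye(3), 200, axis=0)            # design matrix A
noisy_means = settings @ r_true + rng.normal(0, 0.05, len(settings))

# Least-squares solution of the linear regression model A r = b.
r_hat, *_ = np.linalg.lstsq(settings, noisy_means, rcond=None)

# Rebuild the density matrix rho = (I + r . sigma) / 2 from the estimate.
rho_hat = 0.5 * (np.eye(2, dtype=complex)
                 + r_hat[0] * np.array([[0, 1], [1, 0]])
                 + r_hat[1] * np.array([[0, -1j], [1j, 0]])
                 + r_hat[2] * np.array([[1, 0], [0, -1]]))
print(np.round(rho_hat, 3))
```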

  6. Using Dominance Analysis to Determine Predictor Importance in Logistic Regression

    ERIC Educational Resources Information Center

    Azen, Razia; Traxel, Nicole

    2009-01-01

    This article proposes an extension of dominance analysis that allows researchers to determine the relative importance of predictors in logistic regression models. Criteria for choosing logistic regression R[superscript 2] analogues were determined and measures were selected that can be used to perform dominance analysis in logistic regression. A…

  7. Multi-scaling allometric analysis for urban and regional development

    NASA Astrophysics Data System (ADS)

    Chen, Yanguang

    2017-01-01

    The concept of allometric growth is based on scaling relations, and it has been applied to urban and regional analysis for a long time. However, most allometric analyses have been devoted to the single proportional relation between two elements of a geographical system. Few studies focus on the allometric scaling of multiple elements. In this paper, a process of multiscaling allometric analysis is developed for studies of the spatio-temporal evolution of complex systems. By means of linear algebra, general system theory, and analogy with the analytical hierarchy process, the concepts of allometric growth can be integrated with ideas from fractal dimension. Thus a new methodology of geo-spatial analysis and the related theoretical models emerge. Based on least squares regression and matrix operations, a simple algorithm is proposed to solve the multiscaling allometric equation. Applying the analytical method of multielement allometry to Chinese cities and regions yields satisfactory results. A conclusion is reached that multiscaling allometric analysis can be employed to make a comprehensive evaluation of the relative levels of urban and regional development and to explain spatial heterogeneity. The notion of multiscaling allometry may enrich the current theory and methodology of spatial analyses of urban and regional evolution.

  8. RRegrs: an R package for computer-aided model selection with multiple regression models.

    PubMed

    Tsiliki, Georgia; Munteanu, Cristian R; Seoane, Jose A; Fernandez-Lozano, Carlos; Sarimveis, Haralambos; Willighagen, Egon L

    2015-01-01

    Predictive regression models can be created with many different modelling approaches. Choices need to be made for data set splitting, cross-validation methods, specific regression parameters and best model criteria, as they all affect the accuracy and efficiency of the produced predictive models, thereby raising model reproducibility and comparison issues. Cheminformatics and bioinformatics make extensive use of predictive modelling and exhibit a need for standardization of these methodologies in order to assist model selection and speed up the process of predictive model development. A tool accessible to all users, irrespective of their statistical knowledge, would be valuable if it tested several simple and complex regression models and validation schemes, produced unified reports, and offered the option to be integrated into more extensive studies. Additionally, such a methodology should be implemented as a free programming package, in order to be continuously adapted and redistributed by others. We propose an integrated framework for creating multiple regression models, called RRegrs. The tool offers the option of ten simple and complex regression methods combined with repeated 10-fold and leave-one-out cross-validation. Methods include Multiple Linear regression, Generalized Linear Model with Stepwise Feature Selection, Partial Least Squares regression, Lasso regression, and Support Vector Machines Recursive Feature Elimination. The new framework is an automated, fully validated procedure which produces standardized reports to quickly oversee the impact of choices in modelling algorithms and assess the model and cross-validation results. The methodology was implemented as an open source R package, available at https://www.github.com/enanomapper/RRegrs, by reusing and extending the caret package. The universality of the new methodology is demonstrated using five standard data sets from different scientific fields. Its efficiency in cheminformatics and QSAR modelling is shown with three use cases: proteomics data for surface-modified gold nanoparticles, nano-metal oxides descriptor data, and molecular descriptors for acute aquatic toxicity data. The results show that for all data sets RRegrs reports models with equal or better performance for both training and test sets than those reported in the original publications. Its good performance as well as its adaptability in terms of parameter optimization could make RRegrs a popular framework to assist the initial exploration of predictive models, and with that, the design of more comprehensive in silico screening applications. Graphical abstract: RRegrs is a computer-aided model selection framework for multiple regression models in R; this is a fully validated procedure with application to QSAR modelling.

  9. The information filter: how dentists use diet diary information to give patients clear and simple advice.

    PubMed

    Arheiam, Arheiam; Brown, Stephen L; Higham, Susan M; Albadri, Sondos; Harris, Rebecca V

    2016-12-01

    Diet diaries are recommended for dentists to monitor children's sugar consumption. Diaries provide multifaceted dietary information, but patients respond better to simpler advice. We explore how dentists integrate information from diet diaries to deliver usable advice to patients. As part of a questionnaire study of general dental practitioners (GDPs) in Northwest England, we asked dentists to specify the advice they would give a hypothetical patient based upon a diet diary case vignette. A sequential mixed-methods approach was used for data analysis: an initial inductive content analysis (ICA) to develop a coding system capturing the complexity of dietary assessment and delivered advice. Using these codes, a quantitative analysis was conducted to examine correspondences between identified dietary problems and advice given. From these correspondences, we inferred how dentists reduced problems to give simple advice. A total of 229 dentists' responses were analysed. ICA on 40 questionnaires identified two distinct approaches to developing diet advice: a summative approach (summarising issues into an all-encompassing message) and a selective approach (selecting a main message). In the quantitative analysis of all responses, raw frequencies indicated that dentists saw more problems than they advised on and provided highly specific advice on a restricted number of problems (e.g. not eating sugars before bedtime, 50.7%, or advice on harmful items, 42.4%, rather than simply reducing the amount of sugar, 9.2%). Binary logistic regression models indicate that dentists provided specific advice that was tailored to the key problems that they identified. Dentists provided specific recommendations to address what they felt were key problems, whilst not intervening to address other problems that they may have felt less pressing. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  10. New liquid chromatography-tandem mass spectrometry method for routine TDM of vancomycin in patients with both normal and impaired renal functions and comparison with results of polarization fluoroimmunoassay in light of varying creatinine concentrations.

    PubMed

    Brozmanová, Hana; Kacířová, Ivana; Uřinovská, Romana; Šištík, Pavel; Grundmann, Milan

    2017-06-01

    A new LC-MS/MS method with simple sample extraction and a relatively short analysis time for routine therapeutic drug monitoring of vancomycin was developed and validated. 50 μL of serum was precipitated using 20 μL of 33% trichloroacetic acid, and 0.5 mol/L NH₄OH was added to increase the pH before analysis. An RP BEH C18 column (1.7 μm, 2.1 × 50 mm) maintained at 30°C and tobramycin as internal standard were used. Mass detection was performed in positive electrospray mode. The results obtained with the LC-MS/MS method were correlated with an FPIA assay (Abbott AxSYM) using a mouse monoclonal antibody. Subjects were divided into three groups according to creatinine levels (53.5 ± 19.1, 150.2 ± 48.4, 471.7 ± 124.7 μmol/L), and Passing-Bablok regression analysis and Bland-Altman analysis were used to compare vancomycin concentrations. The results of subjects with both normal and higher creatinine levels correlated very well, and the linear regression model equations were near ideal (LC-MS_VAN = 0.947 × Abbott_VAN + 0.192 and LC-MS_VAN = 0.973 × Abbott_VAN - 0.411, respectively). Dialyzed patients with the highest creatinine levels showed about 14% greater vancomycin concentrations with the FPIA assay (LC-MS_VAN = 0.866 × Abbott_VAN + 2.127). This overestimation, probably due to the presence of the metabolite CDP, ought not to be of clinical relevance owing to the wide range of recommended vancomycin concentrations. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. The Short-term Prognostic Value of the Triglyceride-to-high-density Lipoprotein Cholesterol Ratio in Acute Ischemic Stroke

    PubMed Central

    Wang, Huan; Lei, Leix; Zhang, Han-Qing; Gu, Zheng-Tian; Xing, Fang-Lan; Yan, Fu-Ling

    2018-01-01

    The triglyceride (TG)-to-high-density lipoprotein cholesterol (HDL-C) ratio (TG/HDL-C) is a simple approach to predicting unfavorable outcomes in cardiovascular disease. The influence of TG/HDL-C on acute ischemic stroke remains elusive. The purpose of this study was to investigate the precise effect of TG/HDL-C on 3-month mortality after acute ischemic stroke (AIS). Patients with AIS were enrolled in the present study from 2011 to 2017. A total of 1459 participants from a single city in China were divided into retrospective training and prospective test cohorts. Medical records were collected periodically to determine the incidence of fatal events. All participants were followed for 3 months. Optimal cutoff values were determined using X-tile software to separate the training cohort patients into higher and lower survival groups based on their lipid levels. A survival analysis was conducted using Kaplan-Meier curves and a Cox proportional hazards regression model. A total of 1459 patients with AIS (median age 68.5 years, 58.5% male) were analyzed. Univariate Cox regression analysis confirmed that TG/HDL-C was a significant prognostic factor for 3-month survival. X-tile identified 0.9 as an optimal cutoff for TG/HDL-C. In the univariate analysis, the prognosis of the TG/HDL-C >0.9 group was markedly superior to that of TG/HDL-C ≤0.9 group (P<0.001). A multivariate Cox regression analysis showed that TG/HDL-C was independently correlated with a reduced risk of mortality (hazard ratio [HR], 0.39; 95% confidence interval [CI], 0.24-0.62; P<0.001). These results were confirmed in the 453 patients in the test cohort. A nomogram was constructed to predict 3-month case-fatality, and the c-indexes of predictive accuracy were 0.684 and 0.670 in the training and test cohorts, respectively (P<0.01). The serum TG/HDL-C ratio may be useful for predicting short-term mortality after AIS. PMID:29896437

  12. The Short-term Prognostic Value of the Triglyceride-to-high-density Lipoprotein Cholesterol Ratio in Acute Ischemic Stroke.

    PubMed

    Deng, Qi-Wen; Li, Shuo; Wang, Huan; Lei, Leix; Zhang, Han-Qing; Gu, Zheng-Tian; Xing, Fang-Lan; Yan, Fu-Ling

    2018-06-01

    The triglyceride (TG)-to-high-density lipoprotein cholesterol (HDL-C) ratio (TG/HDL-C) is a simple approach to predicting unfavorable outcomes in cardiovascular disease. The influence of TG/HDL-C on acute ischemic stroke remains elusive. The purpose of this study was to investigate the precise effect of TG/HDL-C on 3-month mortality after acute ischemic stroke (AIS). Patients with AIS were enrolled in the present study from 2011 to 2017. A total of 1459 participants from a single city in China were divided into retrospective training and prospective test cohorts. Medical records were collected periodically to determine the incidence of fatal events. All participants were followed for 3 months. Optimal cutoff values were determined using X-tile software to separate the training cohort patients into higher and lower survival groups based on their lipid levels. A survival analysis was conducted using Kaplan-Meier curves and a Cox proportional hazards regression model. A total of 1459 patients with AIS (median age 68.5 years, 58.5% male) were analyzed. Univariate Cox regression analysis confirmed that TG/HDL-C was a significant prognostic factor for 3-month survival. X-tile identified 0.9 as an optimal cutoff for TG/HDL-C. In the univariate analysis, the prognosis of the TG/HDL-C >0.9 group was markedly superior to that of TG/HDL-C ≤0.9 group (P<0.001). A multivariate Cox regression analysis showed that TG/HDL-C was independently correlated with a reduced risk of mortality (hazard ratio [HR], 0.39; 95% confidence interval [CI], 0.24-0.62; P<0.001). These results were confirmed in the 453 patients in the test cohort. A nomogram was constructed to predict 3-month case-fatality, and the c-indexes of predictive accuracy were 0.684 and 0.670 in the training and test cohorts, respectively (P<0.01). The serum TG/HDL-C ratio may be useful for predicting short-term mortality after AIS.

  13. Statistical tools for transgene copy number estimation based on real-time PCR.

    PubMed

    Yuan, Joshua S; Burris, Jason; Stewart, Nathan R; Mentewab, Ayalew; Stewart, C Neal

    2007-11-01

    As compared with traditional transgene copy number detection technologies such as Southern blot analysis, real-time PCR provides a fast, inexpensive and high-throughput alternative. However, real-time PCR based transgene copy number estimation tends to be ambiguous and subjective, stemming from the lack of proper statistical analysis and data quality control needed to render a reliable copy number estimate with a prediction value. Despite recent progress in statistical analysis of real-time PCR, few publications have integrated these advancements in real-time PCR based transgene copy number determination. Three experimental designs and four statistical models with integrated data quality control are presented. For the first method, external calibration curves are established for the transgene based on serially diluted templates. The Ct numbers from a control transgenic event and a putative transgenic event are compared to derive the transgene copy number or zygosity estimation. Simple linear regression and two-group t-test procedures were combined to model the data from this design. For the second experimental design, standard curves were generated for both an internal reference gene and the transgene, and the copy number of the transgene was compared with that of the internal reference gene. Multiple regression models and ANOVA models can be employed to analyze the data and perform quality control for this approach. In the third experimental design, transgene copy number is compared with the reference gene without a standard curve, based directly on fluorescence data. Two different multiple regression models were proposed to analyze the data based on two different approaches to amplification efficiency integration. Our results highlight the importance of proper statistical treatment and quality control integration in real-time PCR-based transgene copy number determination. These statistical methods make real-time PCR-based transgene copy number estimation more reliable and precise. Proper confidence intervals are necessary for unambiguous prediction of transgene copy number. The four different statistical methods are compared for their advantages and disadvantages. Moreover, the statistical methods can also be applied to other real-time PCR-based quantification assays, including transfection efficiency analysis and pathogen quantification.
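
    A hedged sketch of the first design is shown below: a standard curve of Ct against log10 template amount is fitted by simple linear regression, and replicate Ct values of a control event and a putative event are compared with a two-group t-test. All dilution levels, Ct values, and the slope-based copy-ratio step are simulated illustrations rather than the authors' data or exact procedure.

```python
# Standard-curve fit plus two-group comparison for real-time PCR copy number
# estimation (first design above). Simulated numbers only.
import numpy as np
from scipy.stats import linregress, ttest_ind

rng = np.random.default_rng(5)

# Serial dilutions for the standard curve (log10 copies vs. observed Ct).
log_copies = np.array([3, 4, 5, 6, 7], dtype=float)
ct_standard = 38.0 - 3.32 * log_copies + rng.normal(0, 0.2, 5)
curve = linregress(log_copies, ct_standard)
print("slope:", curve.slope, "R^2:", curve.rvalue ** 2)

# Technical replicates of Ct for the control event and a putative event.
ct_control = 24.0 + rng.normal(0, 0.15, 4)
ct_putative = 23.0 + rng.normal(0, 0.15, 4)   # ~1 Ct lower, i.e. ~2x copies

# Estimated copy-number ratio from the Ct difference and the curve slope,
# plus a two-group t-test on the replicate Ct values.
delta_ct = ct_control.mean() - ct_putative.mean()
ratio = 10 ** (delta_ct / abs(curve.slope))
t_res = ttest_ind(ct_control, ct_putative)
print("estimated copy ratio:", round(ratio, 2), "p-value:", t_res.pvalue)
```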

  14. Simultaneous kinetic spectrometric determination of three flavonoid antioxidants in fruit with the aid of chemometrics

    NASA Astrophysics Data System (ADS)

    Sun, Ruiling; Wang, Yong; Ni, Yongnian; Kokot, Serge

    2014-03-01

    A simple, inexpensive and sensitive kinetic spectrophotometric method was developed for the simultaneous determination of three anti-carcinogenic flavonoids, catechin, quercetin and naringenin, in fruit samples. A yellow chelate product was produced in the presence of neocuproine and Cu(I), a reduction product of the reaction between the flavonoids and Cu(II); this enabled quantitative measurements with UV-vis spectrophotometry. The overlapping spectra obtained were resolved with chemometric calibration models, and the best performing method was fast independent component analysis combined with principal component regression (fast-ICA/PCR); the limits of detection were 0.075, 0.057 and 0.063 mg L⁻¹ for catechin, quercetin and naringenin, respectively. The novel method was found to significantly outperform the common HPLC procedure.

  15. Detailed emission profiles for on-road vehicles derived from ambient measurements during a windless traffic episode in Baltimore using a multi-model approach

    NASA Astrophysics Data System (ADS)

    Ke, Haohao; Ondov, John M.; Rogge, Wolfgang F.

    2013-12-01

    Composite chemical profiles of motor vehicle emissions were extracted from ambient measurements at a near-road site in Baltimore during a windless traffic episode in November, 2002, using four independent approaches, i.e., simple peak analysis, windless model-based linear regression, PMF, and UNMIX. Although the profiles are in general agreement, the windless-model-based profile treatment more effectively removes interference from non-traffic sources and is deemed to be more accurate for many species. In addition to abundances of routine pollutants (e.g., NOx, CO, PM2.5, EC, OC, sulfate, and nitrate), 11 particle-bound metals and 51 individual traffic-related organic compounds (including n-alkanes, PAHs, oxy-PAHs, hopanes, alkylcyclohexanes, and others) were included in the modeling.

  16. An analysis of the effect of biological and physical parameters of a wetlands grass biome on the spectral modeling of phytomass and primary productivity

    NASA Technical Reports Server (NTRS)

    Butera, M. K.; Frick, A.

    1984-01-01

    Aircraft simulated thematic mapper (TMS) data and field data were acquired in the fall and spring to analyze the relationship between spectral response and biomass for the marsh grass Spartina patens. Regression results indicate that no simple relationship with a high R² exists between TMS spectral response and biomass. However, the results show a consistent relationship between spectral response and the percentage of live vegetation (by weight) and the percentage of interstitial standing surface water (by area) as independent variables. It is suggested that the reflected energy of a pixel represents a mixture of surface constituents. It is recommended that alternative remote sensors be employed to account for the pixel constituents of live and dead vegetation, litter, and standing water.

  17. [Assessment for effect of low level lead-exposure on neurobehavior in workers of printing house].

    PubMed

    Niu, Q; Dai, F; Chen, Y

    1998-11-30

    The WHO Neurobehavioral Core Test Battery (NCTB) was administered to 28 lead-exposed workers (mean age 24.84, SD 2.85) in a printing house and 46 controls (mean age 22.78, SD 1.45), in order to assess whether low-level lead exposure may be related to neurobehavioral dysfunction. The test items were: (1) profile of mood states (POMS), (2) simple reaction time, (3) digit span, (4) Santa Ana manual dexterity, (5) digit symbol, (6) Benton visual retention, and (7) pursuit aiming. For all the NCTB test values, there was no significant difference between the two groups. Multiple stepwise regression analysis shows that exposure duration is related to neurobehavioral scores. Mild lead exposure may affect neurobehavior to some degree, but the effect was not significant.

  18. Computational prediction of the pKas of small peptides through Conceptual DFT descriptors

    NASA Astrophysics Data System (ADS)

    Frau, Juan; Hernández-Haro, Noemí; Glossman-Mitnik, Daniel

    2017-03-01

    The experimental pKa values of a group of simple amines have been plotted against several Conceptual DFT descriptors calculated by means of different density functionals, basis sets and solvation schemes. It was found that the best fits are those that relate the pKa of the amines to the global hardness η through the MN12SX density functional in connection with the Def2TZVP basis set and the SMD solvation model, using water as the solvent. The parameterized equation resulting from the linear regression analysis was then used for the prediction of the pKa of small peptides of interest in the study of diabetes and Alzheimer's disease. The accuracy of the results is relatively good, with a MAD of 0.36 pKa units.

  19. Browning of the landscape of interior Alaska based on 1986-2009 Landsat sensor NDVI

    Treesearch

    Rebecca A. Baird; David Verbyla; Teresa N. Hollingsworth

    2012-01-01

    We used a time series of 1986-2009 Landsat sensor data to compute the Normalized Difference Vegetation Index (NDVI) for 30 m pixels within the Bonanza Creek Experimental Forest of interior Alaska. Based on simple linear regression, we found significant (p

  20. Naive vs. Sophisticated Methods of Forecasting Public Library Circulations.

    ERIC Educational Resources Information Center

    Brooks, Terrence A.

    1984-01-01

    Two sophisticated forecasting techniques (autoregressive integrated moving average [ARIMA] and straight-line regression) and two naive techniques (simple average and monthly average) were used to forecast monthly circulation totals of 34 public libraries. Comparisons of forecasts and actual totals revealed that the ARIMA and monthly average methods had the smallest mean…
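
    A brief sketch of such a comparison is given below, contrasting an ARIMA forecast with a naive monthly-average forecast on a simulated monthly series. The series, the ARIMA order, and the use of mean absolute error are illustrative assumptions, not the study's library data or evaluation metric.

```python
# "Sophisticated" ARIMA forecast vs. naive monthly-average forecast for a
# simulated monthly circulation series, compared by mean absolute error.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(6)
n_years = 6
months = np.arange(12 * n_years)
seasonal = 1000 + 100 * np.sin(2 * np.pi * months / 12)   # yearly cycle
series = seasonal + 2 * months + rng.normal(0, 30, len(months))

train, test = series[:-12], series[-12:]   # hold out the final year

# Naive method: forecast each calendar month with its historical average.
monthly_avg = train.reshape(-1, 12).mean(axis=0)

# Sophisticated method: a simple ARIMA fit on the training series.
arima_fc = ARIMA(train, order=(1, 1, 1)).fit().forecast(steps=12)

mae_naive = np.mean(np.abs(test - monthly_avg))
mae_arima = np.mean(np.abs(test - arima_fc))
print("MAE monthly average:", round(mae_naive, 1))
print("MAE ARIMA(1,1,1):", round(mae_arima, 1))
```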
