Franklin, Jessica M; Eddings, Wesley; Glynn, Robert J; Schneeweiss, Sebastian
2015-10-01
Selection and measurement of confounders is critical for successful adjustment in nonrandomized studies. Although the principles behind confounder selection are now well established, variable selection for confounder adjustment remains a difficult problem in practice, particularly in secondary analyses of databases. We present a simulation study that compares the high-dimensional propensity score algorithm for variable selection with approaches that utilize direct adjustment for all potential confounders via regularized regression, including ridge regression and lasso regression. Simulations were based on 2 previously published pharmacoepidemiologic cohorts and used the plasmode simulation framework to create realistic simulated data sets with thousands of potential confounders. Performance of methods was evaluated with respect to bias and mean squared error of the estimated effects of a binary treatment. Simulation scenarios varied the true underlying outcome model, treatment effect, prevalence of exposure and outcome, and presence of unmeasured confounding. Across scenarios, high-dimensional propensity score approaches generally performed better than regularized regression approaches. However, including the variables selected by lasso regression in a regular propensity score model also performed well and may provide a promising alternative variable selection method.
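Whatever variable-selection strategy is used, the propensity score approaches compared above end in the same estimation step. As a minimal sketch (with invented toy data, not the paper's simulations), an inverse-probability-weighted estimate of a treatment effect from already-estimated propensity scores might look like:

```python
# Sketch: inverse-probability-weighted (IPW) estimate of a mean difference,
# the final step once a propensity score model (however its covariates were
# selected) has produced a score for each subject. All values are invented.

def ipw_ate(y, t, ps):
    """Horvitz-Thompson style IPW estimate of E[Y(1)] - E[Y(0)]."""
    n = len(y)
    treated = sum(yi * ti / e for yi, ti, e in zip(y, t, ps)) / n
    control = sum(yi * (1 - ti) / (1 - e) for yi, ti, e in zip(y, t, ps)) / n
    return treated - control

# Toy data: outcome, binary treatment, estimated propensity score.
y = [3.0, 2.5, 4.0, 1.0, 2.0, 3.5]
t = [1, 1, 1, 0, 0, 0]
ps = [0.6, 0.5, 0.7, 0.4, 0.3, 0.5]

ate = ipw_ate(y, t, ps)
```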
On regression adjustment for the propensity score.
Vansteelandt, S; Daniel, R M
2014-10-15
Propensity scores are widely adopted in observational research because they enable adjustment for high-dimensional confounders without requiring models for their association with the outcome of interest. The results of statistical analyses based on stratification, matching or inverse weighting by the propensity score are therefore less susceptible to model extrapolation than those based solely on outcome regression models. This is attractive because extrapolation in outcome regression models may be alarming, yet difficult to diagnose, when the exposed and unexposed individuals have very different covariate distributions. Standard regression adjustment for the propensity score forms an alternative to the aforementioned propensity score methods, but the benefits of this are less clear because it still involves modelling the outcome in addition to the propensity score. In this article, we develop novel insights into the properties of this adjustment method. We demonstrate that standard tests of the null hypothesis of no exposure effect (based on robust variance estimators), as well as particular standardised effects obtained from such adjusted regression models, are robust against misspecification of the outcome model when a propensity score model is correctly specified; they are thus not vulnerable to the aforementioned problem of extrapolation. We moreover propose efficient estimators for these standardised effects, which retain a useful causal interpretation even when the propensity score model is misspecified, provided the outcome regression model is correctly specified.
Energy adjustment methods applied to alcohol analyses.
Johansen, Ditte; Andersen, Per K; Overvad, Kim; Jensen, Gorm; Schnohr, Peter; Sørensen, Thorkild I A; Grønbaek, Morten
2003-01-01
When alcohol consumption is related to outcome, associations between alcohol type and health outcomes may occur simply because of the ethanol in the beverage type. When one analyzes the consequences of consumption of beer, wine, and spirits, the total alcohol intake must therefore be taken into account. However, owing to the linear dependency between total alcohol intake and the alcohol content of each beverage type, the effects cannot be separated from each other or from the effect of ethanol. In nutritional epidemiology, similar problems regarding intake of macronutrients and total energy intake have been addressed, and four methods have been proposed to solve the problem: energy partition, standard, density, and residual. The aim of this study was to evaluate the usefulness of the energy adjustment methods in alcohol analyses by using coronary heart disease as an example. Data obtained from the Copenhagen City Heart Study were used. The standard and energy partition methods yielded similar results for continuous, and almost similar results for categorical, alcohol variables. The results from the density method differed, but nevertheless were concordant with these. Beer and wine drinkers, in comparison with findings for nondrinkers, had lower risk of coronary heart disease. Except for the case of men drinking beer, the effect seemed to be associated with drinking one drink per week. The standard method estimates the influence of substituting alcohol types at constant total alcohol intake and complements the estimates of adding consumption of a particular alcohol type to the total intake. For most diseases, the effect of ethanol predominates over that of substances in the beverage type, which makes the density method less relevant in alcohol analyses.
Procedures for adjusting regional regression models of urban-runoff quality using local data
Hoos, A.B.; Sisolak, J.K.
1993-01-01
Statistical operations termed model-adjustment procedures (MAPs) can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each MAP is a form of regression analysis in which the local data base is used as a calibration data set. Regression coefficients are determined from the local data base, and the resulting 'adjusted' regression models can then be used to predict storm-runoff quality at unmonitored sites. The response variable in the regression analyses is the observed load or mean concentration of a constituent in storm runoff for a single storm. The set of explanatory variables used in the regression analyses is different for each MAP, but always includes the predicted value of load or mean concentration from a regional regression model. The four MAPs examined in this study were: single-factor regression against the regional model prediction, P (termed MAP-1F-P); regression against P (termed MAP-R-P); regression against P and additional local variables (termed MAP-R-P+nV); and a weighted combination of P and a local-regression prediction (termed MAP-W). The procedures were tested by means of split-sample analysis, using data from three cities included in the Nationwide Urban Runoff Program: Denver, Colorado; Bellevue, Washington; and Knoxville, Tennessee. The MAP that provided the greatest predictive accuracy for the verification data set differed among the three test data bases and among model types (MAP-W for Denver and Knoxville, MAP-1F-P and MAP-R-P for Bellevue load models, and MAP-R-P+nV for Bellevue concentration models) and, in many cases, was not clearly indicated by the values of standard error of estimate for the calibration data set. A scheme to guide MAP selection, based on exploratory data analysis of the calibration data set, is presented and tested. The MAPs were tested for sensitivity to the size of a calibration data set. As expected, predictive accuracy of all MAPs for
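The simplest of the adjustment procedures described above, a single-factor regression of observed local values on the regional-model prediction, can be sketched in a few lines. The numbers below are illustrative only, not data from the study:

```python
# Hedged sketch of the single-factor model-adjustment procedure: regress
# observed local loads on the regional-model predictions through the origin,
# then use the fitted factor to adjust new regional predictions.

def single_factor(observed, predicted):
    """Least-squares slope through the origin: obs ~ factor * pred."""
    num = sum(o * p for o, p in zip(observed, predicted))
    den = sum(p * p for p in predicted)
    return num / den

obs = [12.0, 8.5, 20.0, 5.0]    # observed storm loads at local sites (toy)
pred = [10.0, 7.0, 15.0, 6.0]   # regional regression model predictions (toy)

f = single_factor(obs, pred)
adjusted = [f * p for p in pred]
```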
Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.
Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg
2009-11-01
G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
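One of the simpler calculations in this family, power for a two-sided test of a Pearson correlation, can be approximated with the Fisher z transformation. This is a rough pure-Python sketch, not G*Power's exact routine; the inputs are example values:

```python
# Approximate power to detect a Pearson correlation rho at sample size n,
# via the Fisher z approximation: atanh(r_hat) is roughly normal with mean
# atanh(rho) and standard error 1/sqrt(n - 3).
import math
from statistics import NormalDist

def power_corr(rho, n, alpha=0.05):
    """Approximate power of a two-sided test of H0: correlation = 0."""
    nd = NormalDist()
    z_alt = math.atanh(rho) * math.sqrt(n - 3)  # noncentrality on the z scale
    z_crit = nd.inv_cdf(1 - alpha / 2)
    # Reject when the standardized statistic falls outside +/- z_crit.
    return (1 - nd.cdf(z_crit - z_alt)) + nd.cdf(-z_crit - z_alt)
```

With rho = 0.3 this gives roughly 80% power near n = 84, in line with standard tables.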
Buchner, Florian; Wasem, Jürgen; Schillo, Sonja
2017-01-01
Risk equalization formulas have been refined since their introduction about two decades ago. Because of the complexity and the abundance of possible interactions between the variables used, hardly any interactions are considered. A regression tree is used to systematically search for interactions, a methodologically new approach in risk equalization. Analyses are based on a data set of nearly 2.9 million individuals from a major German social health insurer. A two-step approach is applied: In the first step a regression tree is built on the basis of the learning data set. Terminal nodes characterized by more than one morbidity-group split represent interaction effects of different morbidity groups. In the second step the 'traditional' weighted least squares regression equation is expanded by adding interaction terms for all interactions detected by the tree, and regression coefficients are recalculated. The resulting risk adjustment formula shows an improvement in the adjusted R(2) from 25.43% to 25.81% on the evaluation data set. Predictive ratios are calculated for subgroups affected by the interactions. The R(2) improvement detected is only marginal. According to the sample-level performance measures used, omitting a considerable number of morbidity interactions entails no relevant loss in accuracy. Copyright © 2015 John Wiley & Sons, Ltd.
Assessing Longitudinal Change: Adjustment for Regression to the Mean Effects
ERIC Educational Resources Information Center
Rocconi, Louis M.; Ethington, Corinna A.
2009-01-01
Pascarella (J Coll Stud Dev 47:508-520, 2006) has called for an increase in use of longitudinal data with pretest-posttest design when studying effects on college students. However, such designs that use multiple measures to document change are vulnerable to an important threat to internal validity, regression to the mean. Herein, we discuss a…
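The basic regression-to-the-mean (RTM) correction behind such adjustments is short enough to show directly. This is an illustrative sketch with invented numbers, not the authors' procedure:

```python
# With test-retest reliability r, an extreme pretest score is expected to
# drift toward the group mean by (1 - r) * (mean - x) on retest even with no
# real change, so observed pre-post change can be adjusted by subtracting
# that expected drift.

def expected_rtm_shift(x, group_mean, reliability):
    """Expected pretest-to-posttest change due to RTM alone."""
    return (1 - reliability) * (group_mean - x)

pre, post = 130.0, 124.0   # an extreme pretest score and its retest (toy)
mu, r = 100.0, 0.8         # group mean and reliability of the measure (toy)

rtm = expected_rtm_shift(pre, mu, r)  # expected drop with no true change
adjusted_change = (post - pre) - rtm  # observed change minus the RTM artifact
```

Here the entire observed 6-point decline is attributable to RTM, so the adjusted change is zero.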
Adaptive Modeling: An Approach for Incorporating Nonlinearity in Regression Analyses.
Knafl, George J; Barakat, Lamia P; Hanlon, Alexandra L; Hardie, Thomas; Knafl, Kathleen A; Li, Yimei; Deatrick, Janet A
2017-02-01
Although regression relationships commonly are treated as linear, this often is not the case. An adaptive approach is described for identifying nonlinear relationships based on power transforms of predictor (or independent) variables and for assessing whether or not relationships are distinctly nonlinear. It is also possible to model adaptively both means and variances of continuous outcome (or dependent) variables and to adaptively power transform positive-valued continuous outcomes, along with their predictors. Example analyses are provided of data from parents in a nursing study on emotional-health-related quality of life for childhood brain tumor survivors as a function of the effort to manage the survivors' condition. These analyses demonstrate that relationships, including moderation relationships, can be distinctly nonlinear, that conclusions about means can be affected by accounting for non-constant variances, and that outcome transformation along with predictor transformation can provide distinct improvements and can resolve skewness problems. © 2017 Wiley Periodicals, Inc.
Coercively Adjusted Auto Regression Model for Forecasting in Epilepsy EEG
Kim, Sun-Hee; Faloutsos, Christos; Yang, Hyung-Jeong
2013-01-01
Recently, data with complex characteristics such as epilepsy electroencephalography (EEG) time series has emerged. Epilepsy EEG data has special characteristics including nonlinearity, nonnormality, and nonperiodicity. Therefore, it is important to find a suitable forecasting method that covers these special characteristics. In this paper, we propose a coercively adjusted autoregression (CA-AR) method that forecasts future values from a multivariable epilepsy EEG time series. We use the technique of random coefficients, which forcefully adjusts the coefficients with −1 and 1. The fractal dimension is used to determine the order of the CA-AR model. We applied the CA-AR method, reflecting the special characteristics of the data, to forecast future values of epilepsy EEG data. Experimental results show that, compared to previous methods, the proposed method can forecast faster and more accurately. PMID:23710252
Adjustment of regional regression equations for urban storm-runoff quality using at-site data
Barks, C.S.
1996-01-01
Regional regression equations have been developed to estimate urban storm-runoff loads and mean concentrations using a national data base. Four statistical methods using at-site data to adjust the regional equation predictions were developed to provide better local estimates. The four adjustment procedures are a single-factor adjustment, a regression of the observed data against the predicted values, a regression of the observed values against the predicted values and additional local independent variables, and a weighted combination of a local regression with the regional prediction. Data collected at five representative storm-runoff sites during 22 storms in Little Rock, Arkansas, were used to verify, and, when appropriate, adjust the regional regression equation predictions. Comparison of observed values of storm-runoff loads and mean concentrations to the predicted values from the regional regression equations for nine constituents (chemical oxygen demand, suspended solids, total nitrogen as N, total ammonia plus organic nitrogen as N, total phosphorus as P, dissolved phosphorus as P, total recoverable copper, total recoverable lead, and total recoverable zinc) showed large prediction errors ranging from 63 percent to more than several thousand percent. Prediction errors for 6 of the 18 regional regression equations were less than 100 percent and could be considered reasonable for water-quality prediction equations. The regression adjustment procedure was used to adjust five of the regional equation predictions to improve the predictive accuracy. For seven of the regional equations the observed and the predicted values are not significantly correlated. Thus neither the unadjusted regional equations nor any of the adjustments were appropriate. The mean of the observed values was used as a simple estimator when the regional equation predictions and adjusted predictions were not appropriate.
Multiple regression analyses in the prediction of aerospace instrument costs
NASA Astrophysics Data System (ADS)
Tran, Linh
The aerospace industry has been investing for decades in ways to improve its efficiency in estimating the project life cycle cost (LCC). One of the major focuses in the LCC is the cost prediction of aerospace instruments done during the early conceptual design phase of the project. The accuracy of early cost predictions affects the project scheduling and funding, and it is often the major cause for project cost overruns. The prediction of instruments' cost is based on the statistical analysis of these independent variables: Mass (kg), Power (watts), Instrument Type, Technology Readiness Level (TRL), Destination: earth orbiting or planetary, Data rates (kbps), Number of bands, Number of channels, Design life (months), and Development duration (months). The author proposes a cost prediction approach for aerospace instruments based on these statistical analyses: Clustering Analysis, Principal Components Analysis (PCA), Bootstrap, and multiple regressions (both linear and non-linear). In the proposed approach, the Cost Estimating Relationship (CER) will be developed for the dependent variable Instrument Cost by using a combination of multiple independent variables. "The Full Model" will be developed and executed to estimate the full set of nine variables. The SAS program, Excel, Automatic Cost Estimating Integrate Tool (ACEIT), and Minitab are used to aid the analysis. Through the analysis, the cost drivers will be identified, which will help develop an ultimate cost estimating software tool for instrument cost prediction and optimization of future missions.
Lyles, Robert H; Tang, Li; Superak, Hillary M; King, Caroline C; Celentano, David D; Lo, Yungtai; Sobel, Jack D
2011-07-01
Misclassification of binary outcome variables is a known source of potentially serious bias when estimating adjusted odds ratios. Although researchers have described frequentist and Bayesian methods for dealing with the problem, these methods have seldom fully bridged the gap between statistical research and epidemiologic practice. In particular, there have been few real-world applications of readily grasped and computationally accessible methods that make direct use of internal validation data to adjust for differential outcome misclassification in logistic regression. In this paper, we illustrate likelihood-based methods for this purpose that can be implemented using standard statistical software. Using main study and internal validation data from the HIV Epidemiology Research Study, we demonstrate how misclassification rates can depend on the values of subject-specific covariates, and we illustrate the importance of accounting for this dependence. Simulation studies confirm the effectiveness of the maximum likelihood approach. We emphasize clear exposition of the likelihood function itself, to permit the reader to easily assimilate appended computer code that facilitates sensitivity analyses as well as the efficient handling of main/external and main/internal validation-study data. These methods are readily applicable under random cross-sectional sampling, and we discuss the extent to which the main/internal analysis remains appropriate under outcome-dependent (case-control) sampling.
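The key identity behind such likelihood-based corrections is simple enough to sketch. With sensitivity se and specificity sp, the probability of an observed positive is se·p + (1 − sp)(1 − p), which can be inverted for the true risk p. The values below are hypothetical, and this non-differential single-risk version is far simpler than the covariate-dependent models the paper develops:

```python
# Relationship between true risk p and the probability of an *observed*
# positive under outcome misclassification, its inversion, and the
# corresponding log-likelihood contribution of one observation.
import math

def observed_prob(p, se, sp):
    """P(Y* = 1) as a function of true risk p."""
    return se * p + (1 - sp) * (1 - p)

def corrected_risk(p_obs, se, sp):
    """Invert the identity above (valid when se + sp > 1)."""
    return (p_obs - (1 - sp)) / (se + sp - 1)

def loglik_term(y_star, p, se, sp):
    """Log-likelihood contribution of one observed outcome y_star."""
    q = observed_prob(p, se, sp)
    return math.log(q) if y_star == 1 else math.log(1 - q)

p_true = 0.10
p_obs = observed_prob(p_true, se=0.85, sp=0.95)  # apparent risk is inflated
```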
Hierarchical regression for epidemiologic analyses of multiple exposures
Greenland, S.
1994-11-01
Many epidemiologic investigations are designed to study the effects of multiple exposures. Most of these studies are analyzed either by fitting a risk-regression model with all exposures forced in the model, or by using a preliminary-testing algorithm, such as stepwise regression, to produce a smaller model. Research indicates that hierarchical modeling methods can outperform these conventional approaches. Two hierarchical methods, empirical-Bayes regression and a variant here called "semi-Bayes" regression, are reviewed and compared with full-model maximum likelihood and with model reduction by preliminary testing. The performance of the methods in a problem of predicting neonatal-mortality rates is compared. Based on the literature to date, it is suggested that hierarchical methods should become part of the standard approaches to multiple-exposure studies. 35 refs., 1 fig., 1 tab.
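The semi-Bayes idea reduces, for a single coefficient, to precision-weighted shrinkage toward a prior mean with a prespecified prior variance (empirical-Bayes would instead estimate that variance from the data). A minimal sketch with invented numbers:

```python
# Shrink a maximum-likelihood coefficient estimate toward a prior mean using
# a fixed, prespecified prior variance: the semi-Bayes posterior mean is the
# precision-weighted average of estimate and prior.

def semi_bayes(beta_hat, var_hat, prior_mean=0.0, prior_var=0.5):
    """Posterior mean and variance for one coefficient under a normal prior."""
    w = (1 / var_hat) + (1 / prior_var)
    post_mean = ((beta_hat / var_hat) + (prior_mean / prior_var)) / w
    return post_mean, 1 / w

# A noisy estimate is pulled strongly toward the prior; a precise one barely
# moves. Both inputs are illustrative log-rate-ratio estimates.
noisy = semi_bayes(1.2, var_hat=1.0)
precise = semi_bayes(1.2, var_hat=0.05)
```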
Multiple regression analyses in clinical child and adolescent psychology.
Jaccard, James; Guilamo-Ramos, Vincent; Johansson, Margaret; Bouris, Alida
2006-09-01
A major form of data analysis in clinical child and adolescent psychology is multiple regression. This article reviews issues in the application of such methods in light of the research designs typical of this field. Issues addressed include controlling covariates, evaluation of predictor relevance, comparing predictors, analysis of moderation, analysis of mediation, assumption violations, outliers, limited dependent variables, and directed regression and its relation to structural equation modeling. Analytic guidelines are provided within each domain.
Multiple Regression Analyses in Clinical Child and Adolescent Psychology
ERIC Educational Resources Information Center
Jaccard, James; Guilamo-Ramos, Vincent; Johansson, Margaret; Bouris, Alida
2006-01-01
A major form of data analysis in clinical child and adolescent psychology is multiple regression. This article reviews issues in the application of such methods in light of the research designs typical of this field. Issues addressed include controlling covariates, evaluation of predictor relevance, comparing predictors, analysis of moderation,…
Nonlinear Controls and Regression-Adjusted Estimators for Variance Reduction in Computer Simulation
Ressler, Richard L.
1991-03-01
This dissertation (advisor: Peter A. W. Lewis; approved for public release) develops new techniques for variance reduction in computer simulation. It demonstrates that
Comparison of the Properties of Regression and Categorical Risk-Adjustment Models
Averill, Richard F.; Muldoon, John H.; Hughes, John S.
2016-01-01
Clinical risk-adjustment, the ability to standardize the comparison of individuals with different health needs, is based upon 2 main alternative approaches: regression models and clinical categorical models. In this article, we examine the impact of the differences in the way these models are constructed on end user applications. PMID:26945302
ERIC Educational Resources Information Center
Olejnik, Stephen; Mills, Jamie; Keselman, Harvey
2000-01-01
Evaluated the use of Mallows' C(p) and Wherry's adjusted R squared (R. Wherry, 1931) statistics to select a final model from a pool of model solutions using computer generated data. Neither statistic identified the underlying regression model any better than, and usually less well than, the stepwise selection method, which itself was poor for…
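Both selection criteria can be computed from quantities any least-squares fit reports. The SSE values below are hypothetical, standing in for a nested sequence of candidate models on n = 50 observations:

```python
# Mallows' Cp and Wherry-style adjusted R-squared for candidate subset
# models, given each model's error sum of squares (SSE), the total sum of
# squares (SST), and an estimate of sigma^2 from the full model.

def mallows_cp(sse_p, s2_full, n, p):
    """Mallows' Cp for a p-parameter model; a good model has Cp near p."""
    return sse_p / s2_full - n + 2 * p

def adjusted_r2(sse_p, sst, n, p):
    """Adjusted R-squared for a p-parameter model."""
    return 1 - (sse_p / (n - p)) / (sst / (n - 1))

n, sst = 50, 400.0
s2_full = 120.0 / (n - 5)                 # full model: 5 parameters, SSE 120
candidates = {2: 210.0, 3: 150.0, 4: 126.0, 5: 120.0}  # p -> SSE (toy)

cp = {p: mallows_cp(sse, s2_full, n, p) for p, sse in candidates.items()}
ar2 = {p: adjusted_r2(sse, sst, n, p) for p, sse in candidates.items()}
```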
Adjusting for Cell Type Composition in DNA Methylation Data Using a Regression-Based Approach.
Jones, Meaghan J; Islam, Sumaiya A; Edgar, Rachel D; Kobor, Michael S
2017-01-01
Analysis of DNA methylation in a population context has the potential to uncover novel gene and environment interactions as well as markers of health and disease. In order to find such associations it is important to control for factors which may mask or alter DNA methylation signatures. Since tissue of origin and coinciding cell type composition are major contributors to DNA methylation patterns, and can easily confound important findings, it is vital to adjust DNA methylation data for such differences across individuals. Here we describe the use of a regression method to adjust for cell type composition in DNA methylation data. We specifically discuss what information is required to adjust for cell type composition and then provide detailed instructions on how to perform cell type adjustment on high dimensional DNA methylation data. This method has been applied mainly to Illumina 450K data, but can also be adapted to pyrosequencing or genome-wide bisulfite sequencing data.
An evaluation of bias in propensity score-adjusted non-linear regression models.
Wan, Fei; Mitra, Nandita
2016-04-19
Propensity score methods are commonly used to adjust for observed confounding when estimating the conditional treatment effect in observational studies. One popular method, covariate adjustment of the propensity score in a regression model, has been empirically shown to be biased in non-linear models. However, no compelling underlying theoretical reason has been presented. We propose a new framework to investigate bias and consistency of propensity score-adjusted treatment effects in non-linear models that uses a simple geometric approach to forge a link between the consistency of the propensity score estimator and the collapsibility of non-linear models. Under this framework, we demonstrate that adjustment of the propensity score in an outcome model results in the decomposition of observed covariates into the propensity score and a remainder term. Omission of this remainder term from a non-collapsible regression model leads to biased estimates of the conditional odds ratio and conditional hazard ratio, but not for the conditional rate ratio. We further show, via simulation studies, that the bias in these propensity score-adjusted estimators increases with larger treatment effect size, larger covariate effects, and increasing dissimilarity between the coefficients of the covariates in the treatment model versus the outcome model.
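Non-collapsibility of the odds ratio, which the authors tie to this bias, can be shown with a few lines of arithmetic: hold the odds ratio exactly constant within two covariate strata and the marginal odds ratio, obtained by averaging risks over strata, is still closer to the null, with no confounding involved. The stratum structure and effect sizes below are arbitrary:

```python
# Numeric illustration of odds-ratio non-collapsibility.
import math

def expit(x):
    return 1 / (1 + math.exp(-x))

def or_from_risks(p1, p0):
    return (p1 / (1 - p1)) / (p0 / (1 - p0))

log_or, strata = 1.5, [-1.0, 1.0]            # conditional log-OR, two strata
risk1 = [expit(b + log_or) for b in strata]  # treated risk in each stratum
risk0 = [expit(b) for b in strata]           # untreated risk in each stratum

conditional_or = or_from_risks(risk1[0], risk0[0])  # exp(1.5) in both strata
marginal_or = or_from_risks(sum(risk1) / 2, sum(risk0) / 2)
```

The marginal odds ratio is attenuated toward 1 relative to the (common) conditional odds ratio, purely as a property of the measure.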
Using Quantile and Asymmetric Least Squares Regression for Optimal Risk Adjustment.
Lorenz, Normann
2016-06-13
In this paper, we analyze optimal risk adjustment for direct risk selection (DRS). Integrating insurers' activities for risk selection into a discrete choice model of individuals' health insurance choice shows that DRS has the structure of a contest. For the contest success function (csf) used in most of the contest literature (the Tullock-csf), optimal transfers for a risk adjustment scheme have to be determined by means of a restricted quantile regression, irrespective of whether insurers are primarily engaged in positive DRS (attracting low risks) or negative DRS (repelling high risks). This is at odds with the common practice of determining transfers by means of a least squares regression. However, this common practice can be rationalized for a new csf, but only if positive and negative DRSs are equally important; if they are not, optimal transfers have to be calculated by means of a restricted asymmetric least squares regression. Using data from German and Swiss health insurers, we find considerable differences between the three types of regressions. Optimal transfers therefore critically depend on which csf represents insurers' incentives for DRS and, if it is not the Tullock-csf, whether insurers are primarily engaged in positive or negative DRS. Copyright © 2016 John Wiley & Sons, Ltd.
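The asymmetric least squares criterion the paper relies on can be illustrated in its simplest form: a location estimate (an expectile) that weights positive residuals by tau and negative ones by 1 − tau, with tau = 0.5 recovering the ordinary mean. This sketch uses toy cost values and iteratively reweighted averaging, not the authors' restricted regressions:

```python
# Intercept-only asymmetric least squares: the tau-expectile of a sample,
# computed as the fixed point of reweighted averaging.

def expectile(values, tau, iters=100):
    m = sum(values) / len(values)
    for _ in range(iters):
        w = [tau if v > m else 1 - tau for v in values]
        m = sum(wi * vi for wi, vi in zip(w, values)) / sum(w)
    return m

costs = [1.0, 2.0, 2.5, 3.0, 10.0]   # toy annual cost data
mean_like = expectile(costs, 0.5)    # ordinary least squares location
upper = expectile(costs, 0.9)        # penalizes under-predicting high costs
```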
Abad, Cesar C. C.; Barros, Ronaldo V.; Bertuzzi, Romulo; Gagliardi, João F. L.; Lima-Silva, Adriano E.; Lambert, Mike I.
2016-01-01
The aim of this study was to verify the power of VO2max, peak treadmill running velocity (PTV), and running economy (RE), unadjusted or allometrically adjusted, in predicting 10 km running performance. Eighteen male endurance runners performed: 1) an incremental test to exhaustion to determine VO2max and PTV; 2) a constant submaximal run at 12 km·h−1 on an outdoor track for RE determination; and 3) a 10 km running race. Unadjusted (VO2max, PTV and RE) and adjusted variables (VO2max0.72, PTV0.72 and RE0.60) were investigated through independent multiple regression models to predict 10 km running race time. There were no significant correlations between 10 km running time and either the adjusted or unadjusted VO2max. Significant correlations (p < 0.01) were found between 10 km running time and adjusted and unadjusted RE and PTV, providing models with effect size > 0.84 and power > 0.88. The allometrically adjusted predictive model was composed of PTV0.72 and RE0.60 and explained 83% of the variance in 10 km running time with a standard error of the estimate (SEE) of 1.5 min. The unadjusted model composed of a single PTV accounted for 72% of the variance in 10 km running time (SEE of 1.9 min). Both regression models provided powerful estimates of 10 km running time; however, the unadjusted PTV may provide an uncomplicated estimation. PMID:28149382
Thomas, Laine; Stefanski, Leonard A.; Davidian, Marie
2013-01-01
In clinical studies, covariates are often measured with error due to biological fluctuations, device error and other sources. Summary statistics and regression models that are based on mismeasured data will differ from the corresponding analysis based on the "true" covariate. Statistical analysis can be adjusted for measurement error; however, various methods exhibit a tradeoff between convenience and performance. Moment Adjusted Imputation (MAI) is a method for measurement error in a scalar latent variable that is easy to implement and performs well in a variety of settings. In practice, multiple covariates may be similarly influenced by biological fluctuations, inducing correlated multivariate measurement error. The extension of MAI to the setting of multivariate latent variables involves unique challenges. Alternative strategies are described, including a computationally feasible option that is shown to perform well. PMID:24072947
Algamal, Zakariya Yahya; Lee, Muhammad Hisyam
2015-12-01
Cancer classification and gene selection in high-dimensional data have been popular research topics in genetics and molecular biology. Recently, adaptive regularized logistic regression using the elastic net regularization, which is called the adaptive elastic net, has been successfully applied in high-dimensional cancer classification to tackle both estimating the gene coefficients and performing gene selection simultaneously. The adaptive elastic net originally used elastic net estimates as the initial weight; however, using this weight may not be preferable, for two reasons. First, the elastic net estimator is biased in selecting genes. Second, it does not perform well when the pairwise correlations between variables are not high. Adjusted adaptive regularized logistic regression (AAElastic) is proposed to address these issues and encourage grouping effects simultaneously. The real data results indicate that AAElastic is significantly consistent in selecting genes compared to the other three competitor regularization methods. Additionally, the classification performance of AAElastic is comparable to the adaptive elastic net and better than other regularization methods. Thus, we can conclude that AAElastic is a reliable adaptive regularized logistic regression method in the field of high-dimensional cancer classification.
Zhang, Y J; Xue, F X; Bai, Z P
2017-03-06
The impact of maternal air pollution exposure on offspring health has received much attention. Precise and feasible exposure estimation is particularly important for clarifying exposure-response relationships and reducing heterogeneity among studies. Temporally-adjusted land use regression (LUR) models are exposure assessment methods developed in recent years that have the advantage of having high spatial-temporal resolution. Studies on the health effects of outdoor air pollution exposure during pregnancy have been increasingly carried out using this model. In China, research applying LUR models was done mostly at the model construction stage, and findings from related epidemiological studies were rarely reported. In this paper, the sources of heterogeneity and research progress of meta-analysis research on the associations between air pollution and adverse pregnancy outcomes were analyzed. The methods of the characteristics of temporally-adjusted LUR models were introduced. The current epidemiological studies on adverse pregnancy outcomes that applied this model were systematically summarized. Recommendations for the development and application of LUR models in China are presented. This will encourage the implementation of more valid exposure predictions during pregnancy in large-scale epidemiological studies on the health effects of air pollution in China.
New ventures require accurate risk analyses and adjustments.
Eastaugh, S R
2000-01-01
For new business ventures to succeed, healthcare executives need to conduct robust risk analyses and develop new approaches to balance risk and return. Risk analysis involves examination of objective risks and harder-to-quantify subjective risks. Mathematical principles applied to investment portfolios also can be applied to a portfolio of departments or strategic business units within an organization. The ideal business investment would have a high expected return and a low standard deviation. Nonetheless, both conservative and speculative strategies should be considered in determining an organization's optimal service line and helping the organization manage risk.
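The portfolio arithmetic the article borrows from finance is straightforward to show for two service lines: expected return is the weighted mean, while risk (standard deviation) also depends on the correlation between the lines. The figures below are invented:

```python
# Expected return and risk of a two-line "portfolio" of business units.
import math

def portfolio(w1, r1, r2, s1, s2, rho):
    """Expected return and standard deviation for weights w1 and 1 - w1."""
    w2 = 1 - w1
    exp_ret = w1 * r1 + w2 * r2
    var = (w1 * s1) ** 2 + (w2 * s2) ** 2 + 2 * w1 * w2 * s1 * s2 * rho
    return exp_ret, math.sqrt(var)

# Conservative line: 5% +/- 2%; speculative line: 12% +/- 10%;
# correlation between the two lines: 0.1.
ret, risk = portfolio(0.7, 0.05, 0.12, 0.02, 0.10, 0.1)
```

Because the lines are weakly correlated, the portfolio's risk is below the weighted average of the two standard deviations: the diversification effect.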
Kleinman, Lawrence C; Norton, Edward C
2009-01-01
Objective To develop and validate a general method (called regression risk analysis) to estimate adjusted risk measures from logistic and other nonlinear multiple regression models. We show how to estimate standard errors for these estimates. These measures could supplant various approximations (e.g., adjusted odds ratio [AOR]) that may diverge, especially when outcomes are common. Study Design Regression risk analysis estimates were compared with internal standards as well as with Mantel–Haenszel estimates, Poisson and log-binomial regressions, and a widely used (but flawed) equation to calculate adjusted risk ratios (ARR) from AOR. Data Collection Data sets produced using Monte Carlo simulations. Principal Findings Regression risk analysis accurately estimates ARR and differences directly from multiple regression models, even when confounders are continuous, distributions are skewed, outcomes are common, and effect size is large. It is statistically sound and intuitive, and has properties favoring it over other methods in many cases. Conclusions Regression risk analysis should be the new standard for presenting findings from multiple regression analysis of dichotomous outcomes for cross-sectional, cohort, and population-based case–control studies, particularly when outcomes are common or effect size is large. PMID:18793213
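The core move of regression risk analysis, reporting adjusted risk ratios and differences rather than odds ratios, amounts to marginal standardization: average the fitted model's predicted risks over the sample with exposure set to 1 and then to 0. The sketch below uses invented coefficients rather than a fitted model, and omits the paper's standard-error machinery:

```python
# Adjusted risk ratio (ARR) and risk difference by standardizing predicted
# risks from a logistic model over the observed covariate distribution.
import math

def expit(x):
    return 1 / (1 + math.exp(-x))

b0, b_exp, b_cov = -2.0, 0.9, 0.05   # hypothetical fitted logistic model
covariate = [10, 25, 40, 55, 70]     # e.g., ages of the subjects (toy)

risk1 = [expit(b0 + b_exp + b_cov * c) for c in covariate]  # all exposed
risk0 = [expit(b0 + b_cov * c) for c in covariate]          # all unexposed

arr = (sum(risk1) / len(risk1)) / (sum(risk0) / len(risk0))
ard = sum(r1 - r0 for r1, r0 in zip(risk1, risk0)) / len(risk1)
aor = math.exp(b_exp)   # the conditional odds ratio, for comparison
```

With this common outcome the adjusted odds ratio noticeably exceeds the adjusted risk ratio, the divergence the abstract warns about.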
Huo, Yuankai; Aboud, Katherine; Kang, Hakmook; Cutting, Laurie E; Landman, Bennett A
2016-10-01
Understanding brain volumetry is essential to understand neurodevelopment and disease. Historically, age-related changes have been studied in detail for specific age ranges (e.g., early childhood, teen, young adults, elderly, etc.) or more sparsely sampled for wider considerations of lifetime aging. Recent advancements in data sharing and robust processing have made available considerable quantities of brain images from normal, healthy volunteers. However, existing analysis approaches have had difficulty addressing (1) complex volumetric developments in large cohorts across the lifetime (e.g., beyond cubic age trends), (2) accounting for confound effects, and (3) maintaining an analysis framework consistent with the general linear model (GLM) approach pervasive in neuroscience. To address these challenges, we propose to use covariate-adjusted restricted cubic spline (C-RCS) regression within a multi-site cross-sectional framework. This model allows for flexible consideration of non-linear age-associated patterns while accounting for traditional covariates and interaction effects. As a demonstration of this approach on lifetime brain aging, we derive normative volumetric trajectories and 95% confidence intervals from 5111 healthy patients from 64 sites while accounting for confounding sex, intracranial volume and field strength effects. The volumetric results are shown to be consistent with traditional studies that have explored more limited age ranges using single-site analyses. This work represents the first integration of C-RCS with neuroimaging and the derivation of structural covariance networks (SCNs) from a large study of multi-site, cross-sectional data.
Holsclaw, Tracy; Hallgren, Kevin A; Steyvers, Mark; Smyth, Padhraic; Atkins, David C
2015-12-01
Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased Type I and Type II error rates, yet these statistical issues are not widely known within SUD treatment research, or more generally, within psychotherapy coding research. Using minimally technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the ways in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in online supplemental materials.
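One of the issues the paper raises, conflation of counts with session length, is commonly handled in count regression with a log-exposure offset. Below is a hedged numpy sketch using Poisson regression with an offset as a simpler stand-in for the paper's weighted negative binomial model; all data and coefficients are simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
session_len = rng.uniform(20, 60, n)        # minutes per session (hypothetical)
x = rng.normal(size=n)                      # e.g. a therapist skill score (hypothetical)
rate = np.exp(-2.0 + 0.4 * x)               # true per-minute rate of coded behaviors
y = rng.poisson(rate * session_len)         # observed counts scale with session length

X = np.column_stack([np.ones(n), x])
offset = np.log(session_len)                # log exposure enters with coefficient 1

def fit_poisson(X, y, offset, iters=25):
    """Poisson log-linear regression with an exposure offset (Newton/IRLS steps)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(offset + X @ beta)
        beta += np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))
    return beta

beta = fit_poisson(X, y, offset)
print("intercept and per-minute rate effect:", beta)
```

Modeling the rate per minute, rather than the raw count, keeps session length from masquerading as a treatment-process effect; the negative binomial extension adds an overdispersion parameter on top of this structure.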
Using Dual Regression to Investigate Network Shape and Amplitude in Functional Connectivity Analyses
Nickerson, Lisa D.; Smith, Stephen M.; Öngür, Döst; Beckmann, Christian F.
2017-01-01
Independent Component Analysis (ICA) is one of the most popular techniques for the analysis of resting state FMRI data because it has several advantageous properties when compared with other techniques. Most notably, in contrast to a conventional seed-based correlation analysis, it is model-free and multivariate, thus switching the focus from evaluating the functional connectivity of single brain regions identified a priori to evaluating brain connectivity in terms of all brain resting state networks (RSNs) that simultaneously engage in oscillatory activity. Furthermore, typical seed-based analysis characterizes RSNs in terms of spatially distributed patterns of correlation (typically by means of simple Pearson's coefficients) and thereby confounds together amplitude information of oscillatory activity and noise. ICA and other regression techniques, on the other hand, retain magnitude information and therefore can be sensitive to both changes in the spatially distributed nature of correlations (differences in the spatial pattern or “shape”) as well as the amplitude of the network activity. Furthermore, motion can mimic amplitude effects so it is crucial to use a technique that retains such information to ensure that connectivity differences are accurately localized. In this work, we investigate the dual regression approach that is frequently applied with group ICA to assess group differences in resting state functional connectivity of brain networks. We show how ignoring amplitude effects and how excessive motion corrupts connectivity maps and results in spurious connectivity differences. We also show how to implement the dual regression to retain amplitude information and how to use dual regression outputs to identify potential motion effects. Two key findings are that using a technique that retains magnitude information, e.g., dual regression, and using strict motion criteria are crucial for controlling both network amplitude and motion-related amplitude effects.
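The two regression stages of dual regression can be sketched with ordinary least squares on simulated data. The group maps, timecourses, matrix sizes, and noise level below are invented for illustration; real pipelines operate on 4D images (e.g., via FSL's `dual_regression` script) rather than flat arrays:

```python
import numpy as np

rng = np.random.default_rng(2)
T, V, K = 120, 500, 3                  # timepoints, voxels, networks

group_maps = rng.normal(size=(K, V))   # stage 0: group ICA spatial maps (given)
true_ts = rng.normal(size=(T, K))      # subject's true network timecourses
data = true_ts @ group_maps + 0.5 * rng.normal(size=(T, V))  # subject data, flattened

# Stage 1: regress the group maps into the data -> subject-specific timecourses.
ts, *_ = np.linalg.lstsq(group_maps.T, data.T, rcond=None)
ts = ts.T                              # shape (T, K)

# Stage 2: regress the timecourses into the data -> subject-specific maps.
# Not variance-normalising the timecourses here keeps amplitude information
# in the maps, which is the point the abstract emphasises.
maps, *_ = np.linalg.lstsq(ts, data, rcond=None)
print("recovered subject maps shape:", maps.shape)
```

Group comparisons are then run voxelwise across subjects' stage-2 maps; normalising the stage-1 timecourses before stage 2 would instead discard the amplitude component.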
Berndt, Andrea E; Williams, Priscilla C
2013-01-01
This article reviews the life course perspective and considers various life course hypotheses such as trajectories, transitions, critical periods, sequencing, duration, and cumulative effects. Hierarchical regression and structural equation modeling are suggested as analyses to use in life course research. Secondary analysis was performed on the Early Head Start Research and Evaluation Study, 1996-2010, to illustrate their strengths and challenges. Models investigated the influence of mother and infant characteristics and of parent-child dysfunction at 14 and 24 months on children's cognitive outcomes at 36 months. Findings were interpreted and discussed in the context of life course hypotheses.
Direkvand-Moghadam, Ashraf; Suhrabi, Zainab; Akbari, Malihe
2016-01-01
Background Female sexual dysfunction, which can occur during any stage of a normal sexual activity, is a serious condition for individuals and couples. The present study aimed to determine the prevalence and predictive factors of female sexual dysfunction in women referred to health centers in Ilam, western Iran, in 2014. Methods In the present cross-sectional study, 444 women who attended health centers in Ilam were enrolled from May to September 2014. Participants were selected according to the simple random sampling method. Univariate and multivariate logistic regression analyses were used to predict the risk factors of female sexual dysfunction. Differences with an alpha error of 0.05 were regarded as statistically significant. Results Overall, 75.9% of the study population exhibited sexual dysfunction. Univariate logistic regression analysis demonstrated that there was a significant association between female sexual dysfunction and age, menarche age, gravidity, parity, and education (P<0.05). Multivariate logistic regression analysis indicated that menarche age (odds ratio, 1.26), education level (odds ratio, 1.71), and gravidity (odds ratio, 1.59) were independent predictive variables for female sexual dysfunction. Conclusion The majority of Iranian women suffer from sexual dysfunction. A lack of awareness of Iranian women's sexual pleasure and formal training on sexual function and its influencing factors, such as menarche age, gravidity, and level of education, may lead to a high prevalence of female sexual dysfunction. PMID:27688863
Jen, Min-Hua; Bottle, Alex; Kirkwood, Graham; Johnston, Ron; Aylin, Paul
2011-09-01
We have previously described a system for monitoring a number of healthcare outcomes using case-mix adjustment models. It is desirable to automate the model fitting process in such a system if monitoring covers a large number of outcome measures or subgroup analyses. Our aim was to compare the performance of three different variable selection strategies: "manual", "automated" backward elimination and re-categorisation, and including all variables at once, irrespective of their apparent importance, with automated re-categorisation. Logistic regression models for predicting in-hospital mortality and emergency readmission within 28 days were fitted to an administrative database for 78 diagnosis groups and 126 procedures from 1996 to 2006 for National Health Service hospital trusts in England. The performance of models was assessed with Receiver Operating Characteristic (ROC) c statistics (measuring discrimination) and the Brier score (assessing average predictive accuracy). Overall, discrimination was similar for diagnoses and procedures and consistently better for mortality than for emergency readmission. Brier scores were generally low overall (showing higher accuracy) and were lower for procedures than diagnoses, with a few exceptions for emergency readmission within 28 days. Among the three variable selection strategies, the automated procedure had similar performance to the manual method in almost all cases except low-risk groups with few outcome events. For the rapid generation of multiple case-mix models we suggest applying automated modelling to reduce the time required, in particular when examining different outcomes of large numbers of procedures and diseases in routinely collected administrative health data.
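Both performance measures used in this abstract are simple to compute directly. A small numpy sketch with toy outcomes and predicted probabilities (not the study's data):

```python
import numpy as np

def c_statistic(y, p):
    """ROC c statistic: probability a random event outranks a random non-event."""
    pos, neg = p[y == 1], p[y == 0]
    diff = pos[:, None] - neg[None, :]
    return float((np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size)

def brier(y, p):
    """Brier score: mean squared difference between outcome and predicted risk."""
    return float(np.mean((y - p) ** 2))

y = np.array([0, 0, 1, 0, 1, 1, 0, 1])
p = np.array([0.1, 0.3, 0.6, 0.2, 0.8, 0.4, 0.5, 0.9])
print("c statistic:", c_statistic(y, p))   # discrimination
print("Brier score:", brier(y, p))         # overall predictive accuracy
```

The c statistic only ranks predictions, while the Brier score also rewards calibration, which is why the two can disagree across case-mix models.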
He, Peng; Eriksson, Frank; Scheike, Thomas H.; Zhang, Mei-Jie
2015-01-01
With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk with the assumption that the censoring distribution and the covariates are independent. Covariate-dependent censoring sometimes occurs in medical studies. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustments for covariate-dependent censoring. We consider a covariate-adjusted weight function by fitting the Cox model for the censoring distribution and using the predictive probability for each individual. Our simulation study shows that the covariate-adjusted weight estimator is basically unbiased when the censoring time depends on the covariates, and the covariate-adjusted weight approach works well for the variance estimator as well. We illustrate our methods with bone marrow transplant data from the Center for International Blood and Marrow Transplant Research (CIBMTR). Here cancer relapse and death in complete remission are two competing risks. PMID:27034534
Barks, C.S.
1995-01-01
Storm-runoff water-quality data were used to verify and, when appropriate, adjust regional regression models previously developed to estimate urban storm-runoff loads and mean concentrations in Little Rock, Arkansas. Data collected at 5 representative sites during 22 storms from June 1992 through January 1994 compose the Little Rock data base. Comparison of observed values (O) of storm-runoff loads and mean concentrations to the predicted values (Pu) from the regional regression models for nine constituents (chemical oxygen demand, suspended solids, total nitrogen, total ammonia plus organic nitrogen as nitrogen, total phosphorus, dissolved phosphorus, total recoverable copper, total recoverable lead, and total recoverable zinc) shows large prediction errors ranging from 63 to several thousand percent. Prediction errors for six of the regional regression models are less than 100 percent, and can be considered reasonable for water-quality models. Differences between O and Pu are due to variability in the Little Rock data base and error in the regional models. Where applicable, a model adjustment procedure (termed MAP-R-P) based upon regression of O against Pu was applied to improve predictive accuracy. For 11 of the 18 regional water-quality models, O and Pu are significantly correlated; that is, much of the variation in O is explained by the regional models. Five of these 11 regional models consistently overestimate O; therefore, MAP-R-P can be used to provide a better estimate. For the remaining seven regional models, O and Pu are not significantly correlated, thus neither the unadjusted regional models nor the MAP-R-P is appropriate. A simple estimator, such as the mean of the observed values, may be used if the regression models are not appropriate. Standard error of estimate of the adjusted models ranges from 48 to 130 percent. Calibration results may be biased due to the limited data set sizes in the Little Rock data base. The relatively large values of
Mamouridis, Valeria; Klein, Nadja; Kneib, Thomas; Cadarso Suarez, Carmen; Maynou, Francesc
2017-01-01
We analysed the landings per unit effort (LPUE) from the Barcelona trawl fleet targeting the red shrimp (Aristeus antennatus) using novel Bayesian structured additive distributional regression to gain a better understanding of the dynamics and determinants of variation in LPUE. The data set, covering a time span of 17 years, includes fleet-dependent variables (e.g. the number of trips performed by vessels), temporal variables (inter- and intra-annual variability) and environmental variables (the North Atlantic Oscillation index). Based on structured additive distributional regression, we evaluate (i) the gain in replacing purely linear predictors by additive predictors including nonlinear effects of continuous covariates, (ii) the inclusion of vessel-specific effects based on either fixed or random effects, (iii) different types of distributions for the response, and (iv) the potential gain in not only modelling the location but also the scale/shape parameter of these distributions. Our findings support that flexible model variants are indeed able to improve the fit considerably and that additional insights can be gained. Tools to select within several model specifications and assumptions are discussed in detail as well.
Leushuis, Esther; Wetzels, Alex; van der Steeg, Jan Willem; Steures, Pieternel; Bossuyt, Patrick M.M.; van Trooyen, Netty; Repping, Sjoerd; van der Horst, Frans A.L.; Hompes, Peter G.A.; Mol, Ben Willem J.; van der Veen, Fulco
2016-01-01
Background Standardization of the semen analysis may improve reproducibility. We assessed variability between laboratories in semen analyses and evaluated whether a transformation using Z scores and regression statistics was able to reduce this variability. Materials and Methods We performed a retrospective cohort study. We calculated between-laboratory coefficients of variation (CVB) for sperm concentration and for morphology. Subsequently, we standardized the semen analysis results by calculating laboratory-specific Z scores, and by using regression. We used analysis of variance for four semen parameters to assess systematic differences between laboratories before and after the transformations, both in the circulation samples and in the samples obtained in the prospective cohort study in the Netherlands between January 2002 and February 2004. Results The mean CVB was 7% for sperm concentration (range 3 to 13%) and 32% for sperm morphology (range 18 to 51%). The differences between the laboratories were statistically significant for all semen parameters (all P<0.001). Standardization using Z scores did not reduce the differences in semen analysis results between the laboratories (all P<0.001). Conclusion There exists large between-laboratory variability for sperm morphology and small, but statistically significant, between-laboratory variation for sperm concentration. Standardization using Z scores does not eliminate between-laboratory variability. PMID:26985342
Performance Evaluation of Button Bits in Coal Measure Rocks by Using Multiple Regression Analyses
NASA Astrophysics Data System (ADS)
Su, Okan
2016-02-01
Electro-hydraulic and jumbo drills are commonly used for underground coal mines and tunnel drives for the purpose of blasthole drilling and rock bolt installations. Not only machine parameters but also environmental conditions have significant effects on drilling. This study characterizes the performance of button bits during blasthole drilling in coal measure rocks by using multiple regression analyses. The penetration rate of jumbo and electro-hydraulic drills was measured in the field by employing bits in different diameters and the specific energy of the drilling was calculated at various locations, including highway tunnels and underground roadways of coal mines. Large block samples were collected from each location at which in situ drilling measurements were performed. Then, the effects of rock properties and machine parameters on the drilling performance were examined. Multiple regression models were developed for the prediction of the specific energy of the drilling and the penetration rate. The results revealed that hole area, impact (blow) energy, blows per minute of the piston within the drill, and some rock properties, such as the uniaxial compressive strength (UCS) and the drilling rate index (DRI), influence the drill performance.
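A multiple regression of penetration rate on rock and machine variables, as described above, can be sketched via least squares. The predictors, coefficient values, units, and noise level below are hypothetical placeholders for illustration, not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
ucs = rng.uniform(40, 160, n)            # uniaxial compressive strength, MPa (hypothetical)
dri = rng.uniform(30, 80, n)             # drilling rate index (hypothetical)
blow_energy = rng.uniform(150, 450, n)   # impact energy per blow, J (hypothetical)

# Invented generative model: penetration rate falls with rock strength and
# rises with drillability and impact energy, plus noise.
pen_rate = 2.0 - 0.008 * ucs + 0.01 * dri + 0.002 * blow_energy + rng.normal(0, 0.1, n)

X = np.column_stack([np.ones(n), ucs, dri, blow_energy])
beta, *_ = np.linalg.lstsq(X, pen_rate, rcond=None)
resid = pen_rate - X @ beta
r2 = 1 - np.sum(resid ** 2) / np.sum((pen_rate - pen_rate.mean()) ** 2)
print("coefficients:", beta)
print("R^2:", r2)
```

The signs of the fitted coefficients (negative for UCS, positive for DRI and impact energy) mirror the qualitative relationships the study reports, though the magnitudes here are arbitrary.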
Li, Li; Brumback, Babette A; Weppelmann, Thomas A; Morris, J Glenn; Ali, Afsar
2016-08-15
Motivated by an investigation of the effect of surface water temperature on the presence of Vibrio cholerae in water samples collected from different fixed surface water monitoring sites in Haiti in different months, we investigated methods to adjust for unmeasured confounding due to either of the two crossed factors site and month. In the process, we extended previous methods that adjust for unmeasured confounding due to one nesting factor (such as site, which nests the water samples from different months) to the case of two crossed factors. First, we developed a conditional pseudolikelihood estimator that eliminates fixed effects for the levels of each of the crossed factors from the estimating equation. Using the theory of U-Statistics for independent but non-identically distributed vectors, we show that our estimator is consistent and asymptotically normal, but that its variance depends on the nuisance parameters and thus cannot be easily estimated. Consequently, we apply our estimator in conjunction with a permutation test, and we investigate use of the pigeonhole bootstrap and the jackknife for constructing confidence intervals. We also incorporate our estimator into a diagnostic test for a logistic mixed model with crossed random effects and no unmeasured confounding. For comparison, we investigate between-within models extended to two crossed factors. These generalized linear mixed models include covariate means for each level of each factor in order to adjust for the unmeasured confounding. We conduct simulation studies, and we apply the methods to the Haitian data. Copyright © 2016 John Wiley & Sons, Ltd.
Stratton, Kelly G; Cook, Andrea J; Jackson, Lisa A; Nelson, Jennifer C
2015-03-30
Sequential methods are well established for randomized clinical trials (RCTs), and their use in observational settings has increased with the development of national vaccine and drug safety surveillance systems that monitor large healthcare databases. Observational safety monitoring requires that sequential testing methods be better equipped to incorporate confounder adjustment and accommodate rare adverse events. New methods designed specifically for observational surveillance include a group sequential likelihood ratio test that uses exposure matching and a generalized estimating equations approach that involves regression adjustment. However, little is known about the statistical performance of these methods or how they compare to RCT methods in both observational and rare outcome settings. We conducted a simulation study to determine the type I error, power and time-to-surveillance-end of the group sequential likelihood ratio test, generalized estimating equations and RCT methods that construct group sequential Lan-DeMets boundaries using data from a matched (group sequential Lan-DeMets-matching) or unmatched regression (group sequential Lan-DeMets-regression) setting. We also compared the methods using data from a multisite vaccine safety study. All methods had acceptable type I error, but regression methods were more powerful, faster at detecting true safety signals and less prone to implementation difficulties with rare events than exposure matching methods. Method performance also depended on the distribution of information and extent of confounding by site. Our results suggest that choice of sequential method, especially the confounder control strategy, is critical in rare event observational settings. These findings provide guidance for choosing methods in this context and, in particular, suggest caution when conducting exposure matching.
Wang, Xiaoli; Wu, Shuangsheng; MacIntyre, C. Raina; Zhang, Hongbin; Shi, Weixian; Peng, Xiaomin; Duan, Wei; Yang, Peng; Zhang, Yi; Wang, Quanyi
2015-01-01
Serfling-type periodic regression models have been widely used to identify and analyse epidemics of influenza. In these approaches, the baseline is traditionally determined using cleaned historical non-epidemic data. However, we found that the previous exclusion of epidemic seasons was empirical, since year-to-year variations in the seasonal pattern of activity had been ignored. Therefore, excluding fixed ‘epidemic’ months did not seem reasonable. We made some adjustments in the rule of epidemic-period removal to avoid potentially subjective definition of the start and end of epidemic periods. We fitted the baseline iteratively. Firstly, we established a Serfling regression model based on the actual observations without any removals. After that, instead of manually excluding a predefined ‘epidemic’ period (the traditional method), we excluded observations which exceeded a calculated boundary. We then established the Serfling regression once more using the cleaned data and again excluded observations which exceeded a calculated boundary. We repeated this process until the R2 value stopped increasing. In addition, the definitions of the onset of an influenza epidemic were heterogeneous, which might make it impossible to accurately evaluate the performance of alternative approaches. We therefore used this modified model to detect the peak timing of influenza instead of the onset of epidemics and compared this model with traditional Serfling models using observed weekly case counts of influenza-like illness (ILIs), in terms of sensitivity, specificity and lead time. Better performance was observed. In summary, we provide an adjusted Serfling model which may have improved performance over traditional models in early warning of the arrival of peak timing of influenza. PMID:25756205
Ho Hoang, Khai-Long; Mombaur, Katja
2015-10-15
Dynamic modeling of the human body is an important tool to investigate the fundamentals of the biomechanics of human movement. To model the human body in terms of a multi-body system, it is necessary to know the anthropometric parameters of the body segments. For young healthy subjects, several data sets exist that are widely used in the research community, e.g. the tables provided by de Leva. No such comprehensive anthropometric parameter sets exist for elderly people. It is, however, well known that body proportions change significantly during aging, e.g. due to degenerative effects in the spine, such that parameters for young people cannot be used for realistically simulating the dynamics of elderly people. In this study, regression equations are derived from the inertial parameters, center of mass positions, and body segment lengths provided by de Leva to be adjustable to the changes in proportion of the body parts of male and female humans due to aging. Additional adjustments are made to the reference points of the parameters for the upper body segments as they are chosen in a more practicable way in the context of creating a multi-body model in a chain structure with the pelvis representing the most proximal segment.
ERIC Educational Resources Information Center
Wu, Dane W.
2002-01-01
The year 2000 US presidential election between Al Gore and George Bush has been the most intriguing and controversial one in American history. The state of Florida was the trigger for the controversy, mainly, due to the use of the misleading "butterfly ballot". Using prediction (or confidence) intervals for least squares regression lines…
ERIC Educational Resources Information Center
Li, Spencer D.
2011-01-01
Mediation analysis in child and adolescent development research is possible using large secondary data sets. This article provides an overview of two statistical methods commonly used to test mediated effects in secondary analysis: multiple regression and structural equation modeling (SEM). Two empirical studies are presented to illustrate the…
ERIC Educational Resources Information Center
Tay, Louis; Drasgow, Fritz
2012-01-01
Two Monte Carlo simulation studies investigated the effectiveness of the mean adjusted X[superscript 2]/df statistic proposed by Drasgow and colleagues and, because of problems with the method, a new approach for assessing the goodness of fit of an item response theory model was developed. It has been previously recommended that mean adjusted…
Multiple regression analyses in artificial-grammar learning: the importance of control groups.
Lotz, Anja; Kinder, Annette; Lachnit, Harald
2009-03-01
In artificial-grammar learning, it is crucial to ensure that above-chance performance in the test stage is due to learning in the training stage but not due to judgemental biases. Here we argue that multiple regression analysis can be successfully combined with the use of control groups to assess whether participants were able to transfer knowledge acquired during training when making judgements about test stimuli. We compared the regression weights of judgements in a transfer condition (training and test strings were constructed by the same grammar but with different letters) with those in a control condition. Predictors were identical in both conditions; judgements of control participants were treated as if they were based on knowledge gained in a standard training stage. The results of this experiment, as well as reanalyses of a former study, support the usefulness of our approach.
Analyses of Developmental Rate Isomorphy in Ectotherms: Introducing the Dirichlet Regression
Boukal, David S.; Ditrich, Tomáš; Kutcherov, Dmitry; Sroka, Pavel; Dudová, Pavla; Papáček, Miroslav
2015-01-01
Temperature drives development in insects and other ectotherms because their metabolic rate and growth depends directly on thermal conditions. However, relative durations of successive ontogenetic stages often remain nearly constant across a substantial range of temperatures. This pattern, termed ‘developmental rate isomorphy’ (DRI) in insects, appears to be widespread and reported departures from DRI are generally very small. We show that these conclusions may be due to the caveats hidden in the statistical methods currently used to study DRI. Because the DRI concept is inherently based on proportional data, we propose that Dirichlet regression applied to individual-level data is an appropriate statistical method to critically assess DRI. As a case study we analyze data on five aquatic and four terrestrial insect species. We find that results obtained by Dirichlet regression are consistent with DRI violation in at least eight of the studied species, although standard analysis detects significant departure from DRI in only four of them. Moreover, the departures from DRI detected by Dirichlet regression are consistently much larger than previously reported. The proposed framework can also be used to infer whether observed departures from DRI reflect life history adaptations to size- or stage-dependent effects of varying temperature. Our results indicate that the concept of DRI in insects and other ectotherms should be critically re-evaluated and put in a wider context, including the concept of ‘equiproportional development’ developed for copepods. PMID:26114859
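Dirichlet regression on stage-duration proportions can be sketched by maximizing the Dirichlet log-likelihood with covariate-dependent concentration parameters. The data, log link, and effect sizes below are invented for illustration (a departure from DRI shows up as a nonzero temperature slope for a stage's share):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(5)
n, K = 300, 3                        # individuals, ontogenetic stages
temp = rng.uniform(15, 30, n)        # rearing temperature (hypothetical)

# Invented truth: stage 1's share of development time grows with
# temperature (a DRI violation); stages 2 and 3 shrink to compensate.
alpha_true = np.exp(np.column_stack([0.5 + 0.05 * temp,
                                     np.full(n, 1.5),
                                     np.full(n, 1.5)]))
props = np.array([rng.dirichlet(a) for a in alpha_true])  # stage proportions

X = np.column_stack([np.ones(n), temp - temp.mean()])     # centred covariate

def nll(theta):
    """Negative Dirichlet log-likelihood with alpha_k = exp(X @ b_k)."""
    alpha = np.exp(X @ theta.reshape(2, K))
    return -np.sum(gammaln(alpha.sum(axis=1)) - gammaln(alpha).sum(axis=1)
                   + ((alpha - 1) * np.log(props)).sum(axis=1))

fit = minimize(nll, np.zeros(2 * K), method="BFGS")
B_hat = fit.x.reshape(2, K)
print("temperature slopes per stage:", B_hat[1])
```

Under strict DRI all temperature slopes would be indistinguishable from zero; here the fitted slope for stage 1 recovers the built-in violation.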
Accommodating Linkage Disequilibrium in Genetic-Association Analyses via Ridge Regression
Malo, Nathalie; Libiger, Ondrej; Schork, Nicholas J.
2008-01-01
Large-scale genetic-association studies that take advantage of an extremely dense set of genetic markers have begun to produce very compelling statistical associations between multiple markers exhibiting strong linkage disequilibrium (LD) in a single genomic region and a phenotype of interest. However, the ultimate biological or “functional” significance of these multiple associations has been difficult to discern. In fact, such studies exploit the LD relationships not only among the markers found to be associated with the phenotype but also between those markers and potentially functionally or causally relevant genetic variations that reside near them. Unfortunately, LD, especially strong LD, between variations at neighboring loci can make it difficult to distinguish the functionally relevant variations from nonfunctional variations. Although there are (rare) situations in which it is impossible to determine the independent phenotypic effects of variations in LD, there are strategies for accommodating LD between variations at different loci, and they can be used to tease out their independent effects on a phenotype. These strategies make it possible to differentiate potentially causative from noncausative variations. We describe one such approach involving ridge regression. We showcase the method by using both simulated and real data. Our results suggest that ridge regression and related techniques have the potential to distinguish causative from noncausative variations in association studies. PMID:18252218
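The estimator behind the abstract's approach is classic ridge regression, β = (XᵀX + λI)⁻¹Xᵀy, which stays stable when two marker columns are in strong LD (nearly collinear). A minimal stdlib-only sketch with invented genotype-like data, not the authors' implementation:

```python
def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination (A small, well conditioned)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ridge(X, y, lam):
    """beta = (X'X + lam*I)^-1 X'y  -- the classic ridge estimator."""
    p = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) + (lam if i == j else 0.0)
            for j in range(p)] for i in range(p)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(p)]
    return solve(XtX, Xty)

# Two markers in strong LD (nearly collinear columns); only the first is causal.
X = [[1.0, 1.1], [2.0, 1.9], [3.0, 3.2], [4.0, 3.9]]
y = [2.0, 4.0, 6.0, 8.0]  # y = 2 * marker1 exactly

b_ols = ridge(X, y, 0.0)    # unpenalised least squares
b_ridge = ridge(X, y, 1.0)  # penalised: coefficients shrink toward zero

print(b_ols, b_ridge)
```

Note how the penalty spreads weight across the two correlated markers while shrinking the overall coefficient norm; distinguishing causative from noncausative markers then rests on comparing the stabilized coefficients.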
Gsteiger, S; Bretz, F; Liu, W
2011-07-01
Many applications in biostatistics rely on nonlinear regression models, such as, for example, population pharmacokinetic and pharmacodynamic modeling, or modeling approaches for dose-response characterization and dose selection. Such models are often expressed as nonlinear mixed-effects models, which are implemented in all major statistical software packages. Inference on the model curve can be based on the estimated parameters, from which pointwise confidence intervals for the mean profile at any single point in the covariate region (time, dose, etc.) can be derived. These pointwise confidence intervals, however, should not be used for simultaneous inferences beyond that single covariate value. If assessment over the entire covariate region is required, the joint coverage probability by using the combined pointwise confidence intervals is likely to be less than the nominal coverage probability. In this paper we consider simultaneous confidence bands for the mean profile over the covariate region of interest and propose two large-sample methods for their construction. The first method is based on the Schwarz inequality and an asymptotic χ² distribution. The second method relies on simulating from a multivariate normal distribution. We illustrate the methods with the pharmacokinetics of theophylline. In addition, we report the results of an extensive simulation study to investigate the operating characteristics of the two construction methods. Finally, we present extensions to construct simultaneous confidence bands for the difference of two models and to assess equivalence between two models in biosimilarity applications.
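The second construction method described above can be sketched very simply: draw parameter vectors from the multivariate normal approximation to the estimates, take the maximum standardized deviation of the fitted curve over a covariate grid, and use its 95% quantile as the simultaneous critical constant. The toy linear profile, estimates, and covariance below are invented for illustration (the paper's example is a nonlinear theophylline model).

```python
import random
random.seed(1)

# Toy mean profile f(t) = a + b*t with an assumed estimate and covariance.
a_hat, b_hat = 1.0, 0.5
cov = [[0.04, 0.01],
       [0.01, 0.02]]

# Cholesky factor of the 2x2 covariance, for sampling (a, b) ~ N(theta_hat, cov).
l11 = cov[0][0] ** 0.5
l21 = cov[1][0] / l11
l22 = (cov[1][1] - l21 ** 2) ** 0.5

grid = [0.0, 0.5, 1.0, 1.5, 2.0]

def se_at(t):
    """Pointwise standard error of a_hat + b_hat*t from the covariance."""
    return (cov[0][0] + 2 * t * cov[0][1] + t * t * cov[1][1]) ** 0.5

# Simulate the max standardized deviation over the grid; its 95% quantile
# is the simultaneous critical constant c, typically above the pointwise 1.96.
maxdevs = []
for _ in range(5000):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    da, db = l11 * z1, l21 * z1 + l22 * z2      # draw minus estimate
    maxdevs.append(max(abs(da + db * t) / se_at(t) for t in grid))
maxdevs.sort()
c = maxdevs[int(0.95 * len(maxdevs))]

band = [(a_hat + b_hat * t - c * se_at(t), a_hat + b_hat * t + c * se_at(t))
        for t in grid]
print(round(c, 2))
```

Using c instead of 1.96 widens each interval just enough that the whole curve is covered jointly at the nominal level.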
B Gadžurić, Slobodan; O Podunavac Kuzmanović, Sanja; B Vraneš, Milan; Petrin, Marija; Bugarski, Tatjana; Kovačević, Strahinja Z
2016-01-01
The purpose of this work is to promote and facilitate forensic profiling and chemical analysis of illicit drug samples in order to determine their origin, methods of production and transfer through the country. The article is based on the gas chromatography analysis of heroin samples seized from three different locations in Serbia. A chemometric approach with appropriate statistical tools (multiple linear regression (MLR), hierarchical cluster analysis (HCA) and the Wald-Wolfowitz runs (WWR) test) was applied to chromatographic data of heroin samples in order to correlate and examine the geographic origin of seized heroin samples. The best MLR models were further validated by the leave-one-out technique as well as by the calculation of basic statistical parameters for the established models. To confirm the predictive power of the models, an external set of heroin samples was used. High agreement between experimental and predicted values of the acetyl thebaol and diacetyl morphine peak ratio, obtained in the validation procedure, indicated the good quality of the derived MLR models. The WWR test showed which examined heroin samples come from the same population, and HCA was applied in order to overview the similarities among the studied heroin samples.
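Of the tools named above, hierarchical cluster analysis is easy to sketch. Below is a minimal single-linkage agglomerative clustering on hypothetical 1-D peak-ratio values for two seizure sites; the numbers are invented and the real study would cluster multivariate chromatographic profiles.

```python
def single_linkage(points, k):
    """Agglomerative clustering with single linkage, down to k clusters.

    Distance between clusters is the minimum pairwise absolute difference.
    """
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]  # merge the closest pair
        del clusters[j]
    return [sorted(c) for c in clusters]

# Hypothetical peak ratios from two seizure sites (invented values).
ratios = [0.11, 0.12, 0.10, 0.48, 0.51, 0.50]
print(single_linkage(ratios, 2))  # [[0.1, 0.11, 0.12], [0.48, 0.5, 0.51]]
```

The two recovered clusters correspond to the two sites, which is the kind of similarity overview HCA provides in the profiling workflow.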
Gewandter, Jennifer S; Smith, Shannon M; McKeown, Andrew; Burke, Laurie B; Hertz, Sharon H; Hunsinger, Matthew; Katz, Nathaniel P; Lin, Allison H; McDermott, Michael P; Rappaport, Bob A; Williams, Mark R; Turk, Dennis C; Dworkin, Robert H
2014-03-01
Performing multiple analyses in clinical trials can inflate the probability of a type I error, or the chance of falsely concluding a significant effect of the treatment. Strategies to minimize type I error probability include prespecification of primary analyses and statistical adjustment for multiple comparisons, when applicable. The objective of this study was to assess the quality of primary analysis reporting and frequency of multiplicity adjustment in 3 major pain journals (i.e., European Journal of Pain, Journal of Pain, and PAIN®). A total of 161 randomized controlled trials investigating noninvasive pharmacological treatments or interventional treatments for pain, published between 2006 and 2012, were included. Only 52% of trials identified a primary analysis, and only 10% of trials reported prespecification of that analysis. Among the 33 articles that identified a primary analysis with multiple testing, 15 (45%) adjusted for multiplicity; of those 15, only 2 (13%) reported prespecification of the adjustment methodology. Trials in clinical pain conditions and industry-sponsored trials identified a primary analysis more often than trials in experimental pain models and non-industry-sponsored trials, respectively. The results of this systematic review demonstrate deficiencies in the reporting and possibly the execution of primary analyses in published analgesic trials. These deficiencies can be rectified by changes in, or better enforcement of, journal policies pertaining to requirements for the reporting of analyses of clinical trial data.
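One common multiplicity adjustment of the kind the review looks for is the Holm step-down procedure (one option among several; the review does not prescribe a specific method). A sketch with illustrative p-values:

```python
def holm(pvalues, alpha=0.05):
    """Holm step-down multiplicity adjustment.

    Returns the set of indices rejected at family-wise error rate alpha.
    """
    order = sorted(range(len(pvalues)), key=lambda i: pvalues[i])
    rejected = set()
    m = len(pvalues)
    for rank, i in enumerate(order):
        if pvalues[i] <= alpha / (m - rank):
            rejected.add(i)
        else:
            break  # step-down: stop at the first non-rejection
    return rejected

# Three illustrative endpoint p-values from one hypothetical trial.
print(holm([0.010, 0.040, 0.030]))  # {0}: only the first survives adjustment
```

Without adjustment, all three p-values would be called significant at 0.05; Holm controls the family-wise error rate while being uniformly more powerful than plain Bonferroni.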
Adjusted Analyses in Studies Addressing Therapy and Harm: Users' Guides to the Medical Literature.
Agoritsas, Thomas; Merglen, Arnaud; Shah, Nilay D; O'Donnell, Martin; Guyatt, Gordon H
2017-02-21
Observational studies almost always have bias because prognostic factors are unequally distributed between patients exposed or not exposed to an intervention. The standard approach to dealing with this problem is adjusted or stratified analysis. Its principle is to use measurement of risk factors to create prognostically homogeneous groups and to combine effect estimates across groups. The purpose of this Users' Guide is to introduce readers to fundamental concepts underlying adjustment as a way of dealing with prognostic imbalance and to the basic principles and relative trustworthiness of various adjustment strategies. One alternative to the standard approach is propensity analysis, in which groups are matched according to the likelihood of membership in exposed or unexposed groups. Propensity methods can deal with multiple prognostic factors, even if there are relatively few patients having outcome events. However, propensity methods do not address other limitations of traditional adjustment: investigators may not have measured all relevant prognostic factors (or not accurately), and unknown factors may bias the results. A second approach, instrumental variable analysis, relies on identifying a variable associated with the likelihood of receiving the intervention but not associated with any prognostic factor or with the outcome (other than through the intervention); this could mimic randomization. However, as with assumptions of other adjustment approaches, it is never certain if an instrumental variable analysis eliminates bias. Although all these approaches can reduce the risk of bias in observational studies, none replace the balance of both known and unknown prognostic factors offered by randomization.
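The "standard approach" described above (stratify on a prognostic factor, then combine stratum-specific estimates) can be shown numerically. The counts below are made up so that the prognostic factor drives treatment choice, making the crude risk difference misleading while the stratified estimate is unbiased:

```python
def risk_difference(events_t, n_t, events_c, n_c):
    return events_t / n_t - events_c / n_c

# (stratum) -> (treated events, treated n, control events, control n)
# Made-up counts: high-risk patients are preferentially treated.
strata = {
    "low risk":  (30, 100, 120, 400),
    "high risk": (240, 400, 60, 100),
}

# Crude analysis pools everyone, ignoring the prognostic imbalance.
et = sum(s[0] for s in strata.values()); nt = sum(s[1] for s in strata.values())
ec = sum(s[2] for s in strata.values()); nc = sum(s[3] for s in strata.values())
crude = risk_difference(et, nt, ec, nc)

# Adjusted analysis: estimate within prognostically homogeneous strata,
# then combine with weights proportional to stratum size.
total = sum(s[1] + s[3] for s in strata.values())
adjusted = sum((s[1] + s[3]) / total * risk_difference(*s)
               for s in strata.values())

print(round(crude, 2), round(adjusted, 2))  # 0.18 0.0
```

The apparent crude effect of 0.18 is entirely confounding by risk stratum; within each stratum the treatment does nothing, and the combined adjusted estimate correctly returns 0.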
ERIC Educational Resources Information Center
Tipton, Elizabeth; Pustejovsky, James E.
2015-01-01
Randomized experiments are commonly used to evaluate the effectiveness of educational interventions. The goal of the present investigation is to develop small-sample corrections for multiple contrast hypothesis tests (i.e., F-tests) such as the omnibus test of meta-regression fit or a test for equality of three or more levels of a categorical…
NASA Astrophysics Data System (ADS)
Wang, Tijian; Lam, K. S.; Tsang, C. W.; Kot, S. C.
2004-02-01
This paper investigates the variability and correlation of surface ozone (O3) and carbon monoxide (CO) observed at Cape D’Aguilar in Hong Kong from 1 January 1994 to 31 December 1995. Statistical analysis shows that the average O3 and CO mixing ratios during the two years are 32±17 ppbv and 305±191 ppbv, respectively. The O3/CO ratio ranges from 0.05 to 0.6 ppbv/ppbv with its frequency peaking at 0.15. The raw dataset is divided into six groups using backward trajectory and cluster analyses. For data assigned to the same trajectory type, three groups are further sorted out based on CO and NOx mixing ratios. The correlation coefficients and slopes of O3/CO for the 18 groups are calculated using linear regression analysis. Finally, five kinds of air masses with different chemical features are identified: continental background (CB), marine background (MB), regional polluted continental (RPC), perturbed marine (P*M), and local polluted (LP) air masses. Further studies indicate that O3 and CO in the continental and marine background air masses (CB and MB) are positively correlated because they are well mixed during long-range transport before arriving at the site. The negative correlation between O3 and CO in air mass LP is believed to be associated with heavy anthropogenic influence, which results from the enhancement by local sources as indicated by high CO and NOx and depletion of O3 when mixed with fresh emissions. The positive correlation in the perturbed marine air mass P*M is consistent with low photochemical production of O3. The negative correlation found in the regional polluted continental air mass RPC is different from the observations at Oki Island in Japan due to the more complex O3 chemistry at Cape D’Aguilar.
Agogo, George O
2017-01-01
Measurement error in exposure variables is a serious impediment in epidemiological studies that relate exposures to health outcomes. In nutritional studies, interest could be in the association between long-term dietary intake and disease occurrence. Long-term intake is usually assessed with a food frequency questionnaire (FFQ), which is prone to recall bias. Measurement error in FFQ-reported intakes leads to bias in the parameter estimate that quantifies the association. To adjust for bias in the association, a calibration study is required to obtain unbiased intake measurements using a short-term instrument such as the 24-hour recall (24HR). The 24HR intakes are used as the response in regression calibration to adjust for bias in the association. For foods not consumed daily, 24HR-reported intakes are usually characterized by excess zeroes, right skewness, and heteroscedasticity, posing a serious challenge in regression calibration modeling. We proposed a zero-augmented calibration model to adjust for measurement error in reported intake, while handling excess zeroes, skewness, and heteroscedasticity simultaneously without transforming 24HR intake values. We compared the proposed calibration method with the standard method and with methods that ignore measurement error by estimating long-term intake with 24HR and FFQ-reported intakes. The comparison was done in real and simulated datasets. With the 24HR, the mean increase in mercury level per ounce of fish intake was about 0.4; with the FFQ intake, the increase was about 1.2. With both calibration methods, the mean increase was about 2.0. A similar trend was observed in the simulation study. In conclusion, the proposed calibration method performs at least as well as the standard method.
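The core regression-calibration step (not the authors' zero-augmented extension) can be sketched deterministically: regress the unbiased short-term measure on the FFQ, replace the FFQ by its calibrated prediction, and refit the outcome model. The toy data below are constructed so the FFQ over-reports by a factor of 2 and the true coefficient is 3.

```python
def slope_intercept(x, y):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Toy data: true long-term intake T, an FFQ that over-reports by a factor
# of 2 (systematic error), and an outcome Y = 3*T.  All values invented.
T = [1.0, 2.0, 3.0, 4.0, 5.0]
ffq = [2.0, 4.0, 6.0, 8.0, 10.0]
hr24 = T[:]  # 24HR treated here as an unbiased measure of long-term intake
Y = [3.0, 6.0, 9.0, 12.0, 15.0]

# Naive model: regress Y directly on the error-prone FFQ.
_, b_naive = slope_intercept(ffq, Y)           # biased coefficient

# Calibration model: regress the unbiased 24HR on the FFQ ...
a_cal, b_cal = slope_intercept(ffq, hr24)
calibrated = [a_cal + b_cal * f for f in ffq]  # ... and predict intake.

# Outcome model on calibrated intake recovers the true coefficient.
_, b_adj = slope_intercept(calibrated, Y)
print(b_naive, b_adj)  # 1.5 3.0
```

With purely systematic (scale) error the naive slope is rescaled to 1.5, and calibration restores 3.0; with random error on top, the naive slope would additionally be attenuated, which is the bias the abstract targets.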
Kjelstrom, L.C.
1995-01-01
Previously developed U.S. Geological Survey regional regression models of runoff and 11 chemical constituents were evaluated to assess their suitability for use in urban areas in Boise and Garden City. Data collected in the study area were used to develop adjusted regional models of storm-runoff volumes and mean concentrations and loads of chemical oxygen demand, dissolved and suspended solids, total nitrogen and total ammonia plus organic nitrogen as nitrogen, total and dissolved phosphorus, and total recoverable cadmium, copper, lead, and zinc. Explanatory variables used in these models were drainage area, impervious area, land-use information, and precipitation data. Mean annual runoff volume and loads at the five outfalls were estimated from 904 individual storms during 1976 through 1993. Two methods were used to compute individual storm loads. The first method used adjusted regional models of storm loads and the second used adjusted regional models for mean concentration and runoff volume. For large storms, the first method seemed to produce excessively high loads for some constituents and the second method provided more reliable results for all constituents except suspended solids. The first method provided more reliable results for large storms for suspended solids.
Asquith, William H.; Roussel, Meghan C.
2009-01-01
Annual peak-streamflow frequency estimates are needed for flood-plain management; for objective assessment of flood risk; for cost-effective design of dams, levees, and other flood-control structures; and for design of roads, bridges, and culverts. Annual peak-streamflow frequency represents the peak streamflow for nine recurrence intervals of 2, 5, 10, 25, 50, 100, 200, 250, and 500 years. Common methods for estimation of peak-streamflow frequency for ungaged or unmonitored watersheds are regression equations for each recurrence interval developed for one or more regions; such regional equations are the subject of this report. The method is based on analysis of annual peak-streamflow data from U.S. Geological Survey streamflow-gaging stations (stations). Beginning in 2007, the U.S. Geological Survey, in cooperation with the Texas Department of Transportation and in partnership with Texas Tech University, began a 3-year investigation concerning the development of regional equations to estimate annual peak-streamflow frequency for undeveloped watersheds in Texas. The investigation focuses primarily on 638 stations with 8 or more years of data from undeveloped watersheds and other criteria. The general approach is explicitly limited to the use of L-moment statistics, which are used in conjunction with a technique of multi-linear regression referred to as PRESS minimization. The approach used to develop the regional equations, which was refined during the investigation, is referred to as the 'L-moment-based, PRESS-minimized, residual-adjusted approach'. For the approach, seven unique distributions are fit to the sample L-moments of the data for each of 638 stations, and trimmed means of the seven results of the distributions for each recurrence interval are used to define the station-specific peak-streamflow frequency. As a first iteration of regression, nine weighted-least-squares, PRESS-minimized, multi-linear regression equations are computed using the watershed
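The PRESS statistic minimized in the approach above is the sum of squared leave-one-out prediction errors, computable without refitting via the hat-value shortcut e_i/(1 − h_ii). A sketch for simple linear regression with invented (area, peak-flow)-style pairs:

```python
def press_simple_regression(x, y):
    """PRESS for y = a + b*x using the leave-one-out shortcut e_i/(1 - h_ii)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx
    press = sse = 0.0
    for xi, yi in zip(x, y):
        e = yi - (a + b * xi)               # ordinary residual
        h = 1.0 / n + (xi - mx) ** 2 / sxx  # leverage (hat value)
        press += (e / (1.0 - h)) ** 2       # deleted (leave-one-out) residual
        sse += e ** 2
    return press, sse

# Invented predictor/response pairs, e.g. log drainage area vs log peak flow.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
press, sse = press_simple_regression(x, y)
print(press > sse)  # True: deleted residuals are never smaller in magnitude
```

Choosing regression equations to minimize PRESS, as the report's approach does, rewards out-of-sample predictive accuracy rather than in-sample fit.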
ERIC Educational Resources Information Center
Weinberger, Daniel A.
1997-01-01
Confirmatory factor analyses were used to study whether the structure of Weinberger Adjustment Inventory subscales would be comparable across clinical patient and nonclinical samples of youth, young adults, and adults (six samples, 1,486 subjects). Results suggest little need to use different measures of general adjustment when studying children…
ERIC Educational Resources Information Center
Larzelere, Robert E.; Ferrer, Emilio; Kuhn, Brett R.; Danelia, Ketevan
2010-01-01
This study estimates the causal effects of six corrective actions for children's problem behaviors, comparing four types of longitudinal analyses that correct for pre-existing differences in a cohort of 1,464 4- and 5-year-olds from Canadian National Longitudinal Survey of Children and Youth (NLSCY) data. Analyses of residualized gain scores found…
Fujita, A; Takabatake, H; Tagaki, S; Sohda, T; Sekine, K
1996-03-01
To evaluate the effect of chemotherapy on QOL, the survival period was categorized into 3 intervals: one in the hospital for chemotherapy (TOX), one on an outpatient basis (TWiST, Time without Symptom and Toxicity), and one in the hospital for conservative therapy (REL). Coefficients showing the QOL level were expressed as ut, uw and ur. If uw was 1 and ut and ur were set at less than 1, ut·TOX + uw·TWiST + ur·REL could be a quality-adjusted value relative to TWiST (Q-TWiST). One hundred five patients with stage IV non-small cell lung cancer were included. Sixty-five were given chemotherapy, and the other 40 were not. The observation period was 2 years. Q-TWiST values for age, sex, PS, histology and chemotherapy were calculated. Their quantification was performed employing a regression-tree-type method. Chemotherapy contributed to Q-TWiST when ut approached 1 (i.e., no side effects were assumed). When ut was less than 0.5, PS and sex had an appreciable role.
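The Q-TWiST arithmetic above is a simple weighted sum of the three interval durations. A sketch with hypothetical utility weights and durations (not patient data from the study):

```python
def q_twist(u_t, u_r, tox, twist, rel, u_w=1.0):
    """Quality-adjusted survival: u_t*TOX + u_w*TWiST + u_r*REL (in months).

    u_w is fixed at 1 so the result is expressed relative to TWiST; u_t and
    u_r discount time in hospital for chemotherapy (TOX) and for
    conservative therapy (REL).
    """
    return u_t * tox + u_w * twist + u_r * rel

# Hypothetical patient: 3 months TOX, 10 months TWiST, 2 months REL,
# with both hospital intervals valued at half of symptom-free time.
print(q_twist(u_t=0.5, u_r=0.5, tox=3.0, twist=10.0, rel=2.0))  # 12.5
```

Varying u_t and u_r over [0, 1], as in the abstract's sensitivity analysis, shows when chemotherapy's added TOX time is worth its gain in TWiST.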
Brown, C. Erwin
1993-01-01
Correlation analysis in conjunction with principal-component and multiple-regression analyses was applied to laboratory chemical and petrographic data to assess the usefulness of these techniques in evaluating selected physical and hydraulic properties of carbonate-rock aquifers in central Pennsylvania. Correlation and principal-component analyses were used to establish relations and associations among variables, to determine dimensions of property variation of samples, and to filter the variables containing similar information. Principal-component and correlation analyses showed that porosity is related to other measured variables and that permeability is most related to porosity and grain size. Four principal components are found to be significant in explaining the variance of data. Stepwise multiple-regression analysis was used to see how well the measured variables could predict porosity and (or) permeability for this suite of rocks. The variation in permeability and porosity is not totally predicted by the other variables, but the regression is significant at the 5% significance level. © 1993.
Suvak, Michael K; Walling, Sherry M; Iverson, Katherine M; Taft, Casey T; Resick, Patricia A
2009-12-01
Multilevel modeling is a powerful and flexible framework for analyzing nested data structures (e.g., repeated measures or longitudinal designs). The authors illustrate a series of multilevel regression procedures that can be used to elucidate the nature of the relationship between two variables across time. The goal is to help trauma researchers become more aware of the utility of multilevel modeling as a tool for increasing the field's understanding of posttraumatic adaptation. These procedures are demonstrated by examining the relationship between two posttraumatic symptoms, intrusion and avoidance, across five assessment points in a sample of rape and robbery survivors (n = 286). Results revealed that changes in intrusion were highly correlated with changes in avoidance over the 18-month posttrauma period.
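A full multilevel model estimates fixed and random effects jointly (typically with mixed-effects software), but the idea can be approximated by a simplified two-stage "slopes as outcomes" sketch: fit a within-person slope of avoidance on intrusion for each subject, then summarize the person-specific slopes. All data below are invented.

```python
def ols_slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

# Invented repeated measures: (intrusion, avoidance) scores at 5 assessments
# for three subjects -- level-1 observations nested within level-2 persons.
subjects = {
    "s1": ([10, 8, 6, 5, 3], [12, 10, 7, 6, 4]),
    "s2": ([9, 9, 7, 4, 2],  [11, 10, 8, 5, 3]),
    "s3": ([8, 6, 6, 3, 1],  [9, 8, 7, 4, 2]),
}

# Stage 1: a within-person slope of avoidance on intrusion per subject.
slopes = {sid: ols_slope(intr, avoid)
          for sid, (intr, avoid) in subjects.items()}

# Stage 2: summarize the person-specific slopes (a true multilevel model
# would instead shrink these toward the overall mean via random effects).
mean_slope = sum(slopes.values()) / len(slopes)
print(all(s > 0 for s in slopes.values()), round(mean_slope, 2))
```

Every subject shows a positive intrusion-avoidance coupling, mirroring the abstract's finding that changes in the two symptoms track each other over time.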
2016-01-01
We estimate models of consumer food waste awareness and attitudes using responses from a national survey of U.S. residents. Our models are interpreted through the lens of several theories that describe how pro-social behaviors relate to awareness, attitudes and opinions. Our analysis of patterns among respondents’ food waste attitudes yields a model with three principal components: one that represents perceived practical benefits households may lose if food waste were reduced, one that represents the guilt associated with food waste, and one that represents whether households feel they could be doing more to reduce food waste. We find our respondents express significant agreement that some perceived practical benefits are ascribed to throwing away uneaten food, e.g., nearly 70% of respondents agree that throwing away food after the package date has passed reduces the odds of foodborne illness, while nearly 60% agree that some food waste is necessary to ensure meals taste fresh. We identify that these attitudinal responses significantly load onto a single principal component that may represent a key attitudinal construct useful for policy guidance. Further, multivariate regression analysis reveals a significant positive association between the strength of this component and household income, suggesting that higher income households most strongly agree with statements that link throwing away uneaten food to perceived private benefits. PMID:27441687
de Vet, Emely; Chinapaw, Mai JM; de Boer, Michiel; Seidell, Jacob C; Brug, Johannes
2014-01-01
Background: Playing video games contributes substantially to sedentary behavior in youth. A new generation of video games—active games—seems to be a promising alternative to sedentary games to promote physical activity and reduce sedentary behavior. At this time, little is known about correlates of active and non-active gaming among adolescents. Objective: The objective of this study was to examine potential personal, social, and game-related correlates of both active and non-active gaming in adolescents. Methods: A survey assessing game behavior and potential personal, social, and game-related correlates was conducted among adolescents (12-16 years, N=353) recruited via schools. Multivariable, multilevel logistic regression analyses, adjusted for demographics (age, sex and educational level of adolescents), were conducted to examine personal, social, and game-related correlates of active gaming ≥1 hour per week (h/wk) and non-active gaming >7 h/wk. Results: Active gaming ≥1 h/wk was significantly associated with a more positive attitude toward active gaming (OR 5.3, CI 2.4-11.8; P<.001), a less positive attitude toward non-active games (OR 0.30, CI 0.1-0.6; P=.002), a higher score on habit strength regarding gaming (OR 1.9, CI 1.2-3.2; P=.008), having brothers/sisters (OR 6.7, CI 2.6-17.1; P<.001) and friends (OR 3.4, CI 1.4-8.4; P=.009) who spend more time on active gaming, and a slightly lower score on game engagement (OR 0.95, CI 0.91-0.997; P=.04). Non-active gaming >7 h/wk was significantly associated with a more positive attitude toward non-active gaming (OR 2.6, CI 1.1-6.3; P=.035), a stronger habit regarding gaming (OR 3.0, CI 1.7-5.3; P<.001), having friends who spend more time on non-active gaming (OR 3.3, CI 1.46-7.53; P=.004), and a more positive image of a non-active gamer (OR 2.0, CI 1.07-3.75; P=.03). Conclusions: Various factors were significantly associated with active gaming ≥1 h/wk and non-active gaming >7 h/wk. Active gaming is most
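The ORs and CIs above come from adjusted multilevel logistic regression; the unadjusted building block is the 2×2-table odds ratio with a Woolf (log-scale) confidence interval. A sketch with invented counts, not the study's estimates:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Woolf (log-scale) 95% confidence interval.

    a, b = exposed with/without the outcome; c, d = unexposed with/without.
    """
    oratio = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return oratio, exp(log(oratio) - z * se), exp(log(oratio) + z * se)

# Invented counts: active gamers (>=1 h/wk) cross-tabulated against a
# positive attitude toward active games.
o, lo_, hi_ = odds_ratio_ci(40, 60, 20, 80)
print(round(o, 2), round(lo_, 2), round(hi_, 2))
```

A CI excluding 1 (as here) is what the abstract's significant associations correspond to; the multilevel model additionally adjusts for demographics and school-level clustering.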
Norström, Madelaine; Kristoffersen, Anja Bråthen; Görlach, Franziska Sophie; Nygård, Karin; Hopp, Petter
2015-01-01
In order to facilitate foodborne outbreak investigations there is a need to improve the methods for identifying the food products that should be sampled for laboratory analysis. The aim of this study was to examine the applicability of a likelihood ratio approach, previously developed on simulated data, to real outbreak data. We used human case and food product distribution data from the Norwegian enterohaemorrhagic Escherichia coli outbreak in 2006. The approach was adjusted to include time and space smoothing and to handle missing or misclassified information. The performance of the adjusted likelihood ratio approach on data originating from the HUS outbreak and on control data indicates that the adjusted approach is promising and could be a useful tool to assist and facilitate the investigation of foodborne outbreaks in the future, provided good traceability data are available and implemented in the distribution chain. However, the approach needs to be further validated on other outbreak data, including food products other than meat products, before a more general conclusion about its applicability can be drawn. PMID:26237468
Laubender, Ruediger P; Bender, Ralf
2014-02-28
Recently, Laubender and Bender (Stat. Med. 2010; 29: 851-859) applied the average risk difference (RD) approach to estimate adjusted RD and corresponding number needed to treat measures in the Cox proportional hazards model, calculating standard errors and confidence intervals by using bootstrap techniques. In this paper, we develop asymptotic variance estimates of the adjusted RD measures and corresponding asymptotic confidence intervals within counting process theory and evaluate them in a simulation study. We illustrate the use of the asymptotic confidence intervals by means of data from the Düsseldorf Obesity Mortality Study.
Greene, LaVana; Elzey, Brianda; Franklin, Mariah; Fakayode, Sayo O
2017-03-05
The negative health impact of polycyclic aromatic hydrocarbons (PAHs) and differences in pharmacological activity of enantiomers of chiral molecules in humans highlight the need for analysis of PAHs and their chiral analogue molecules in humans. Herein, the first use of cyclodextrin guest-host inclusion complexation, fluorescence spectrophotometry, and a chemometric approach for PAH (anthracene) and chiral-PAH analogue derivative (1-(9-anthryl)-2,2,2-triflouroethanol (TFE)) analyses is reported. The binding constants (Kb), stoichiometry (n), and thermodynamic properties (Gibbs free energy (ΔG), enthalpy (ΔH), and entropy (ΔS)) of anthracene and enantiomers of TFE-methyl-β-cyclodextrin (Me-β-CD) guest-host complexes were also determined. Chemometric partial-least-square (PLS) regression analysis of emission spectra data of Me-β-CD-guest-host inclusion complexes was used for the determination of anthracene and TFE enantiomer concentrations in Me-β-CD-guest-host inclusion complex samples. The values of calculated Kb and negative ΔG suggest the thermodynamic favorability of anthracene-Me-β-CD and enantiomeric TFE-Me-β-CD inclusion complexation reactions. However, anthracene-Me-β-CD and enantiomer TFE-Me-β-CD inclusion complexations showed notable differences in the binding affinity behaviors and thermodynamic properties. The PLS regression analysis resulted in squared correlation coefficients of 0.997530 or better and a low LOD of 3.81×10⁻⁷ M for anthracene and 3.48×10⁻⁸ M for TFE enantiomers at physiological conditions. Most importantly, PLS regression accurately determined the anthracene and TFE enantiomer concentrations with an average low error of 2.31% for anthracene, 4.44% for R-TFE and 3.60% for S-TFE. The results of the study are highly significant because of the method's high sensitivity and accuracy for analysis of PAH and chiral PAH analogue derivatives without the need of an expensive chiral column, enantiomeric resolution, or use of a polarized
NASA Astrophysics Data System (ADS)
Liberman, Neomi; Ben-David Kolikant, Yifat; Beeri, Catriel
2012-09-01
Due to a program reform in Israel, experienced CS high-school teachers faced the need to master and teach a new programming paradigm. This situation served as an opportunity to explore the relationship between teachers' content knowledge (CK) and their pedagogical content knowledge (PCK). This article focuses on three case studies, with emphasis on one of them. Using observations and interviews, we examine how the teachers we observed taught and what development of their teaching occurred as a result of their teaching experience, if at all. Our findings suggest that this situation creates a new hybrid state of teachers, which we term "regressed experts." These teachers incorporate in their professional practice some elements typical of novices and some typical of experts. We also found that these teachers' experience, although established when teaching a different CK, serves as leverage to improve their knowledge and understanding of aspects of the new content.
Wang, Tien-ni; Wu, Ching-yi; Chen, Chia-ling; Shieh, Jeng-yi; Lu, Lu; Lin, Keh-chung
2013-03-01
Given the growing evidence for the effects of constraint-induced therapy (CIT) in children with cerebral palsy (CP), there is a need for investigating the characteristics of potential participants who may benefit most from this intervention. This study aimed to establish predictive models for the effects of pediatric CIT on motor and functional outcomes. Therapists administered CIT to 49 children (aged 3-11 years) with CP. Sessions were 1-3.5h a day, twice a week, for 3-4 weeks. Parents were asked to document the number of restraint hours outside of the therapy sessions. Domains of treatment outcomes included motor capacity (measured by the Peabody Developmental Motor Scales II), motor performance (measured by the Pediatric Motor Activity Log), and functional independence (measured by the Pediatric Functional Independence Measure). Potential predictors included age, affected side, compliance (measured by time of restraint), and the initial level of motor impairment severity. Tests were administered before, immediately after, and 3 months after the intervention. Logistic regression analyses showed that total amount of restraint time was the only significant predictor for improved motor capacity immediately after CIT. Younger children who restrained the less affected arm for a longer time had a greater chance to achieve clinically significant improvements in motor performance. For outcomes of functional independence in daily life, younger age was associated with clinically meaningful improvement in the self-care domain. Baseline motor abilities were significantly predictive of better improvement in mobility and cognition. Significant predictors varied according to the aspects of motor outcomes after 3 months of follow-up. The potential predictors identified in this study allow clinicians to target those children who may benefit most from CIT.
ERIC Educational Resources Information Center
Ashworth, Kristen E.; Pullen, Paige C.
2015-01-01
The purpose of this study was to compare the results of a regression discontinuity design (RDD) with those of an experimental design of a tiered vocabulary intervention for children at risk for reading disability to determine RDD's feasibility as a research methodology for this type of study. Researchers reanalyzed an archival dataset of a…
Alexeeff, Stacey E; Schwartz, Joel; Kloog, Itai; Chudnovsky, Alexandra; Koutrakis, Petros; Coull, Brent A
2015-01-01
Many epidemiological studies use predicted air pollution exposures as surrogates for true air pollution levels. These predicted exposures contain exposure measurement error, yet simulation studies have typically found negligible bias in resulting health effect estimates. However, previous studies typically assumed a statistical spatial model for air pollution exposure, which may be oversimplified. We address this shortcoming by assuming a realistic, complex exposure surface derived from fine-scale (1 km × 1 km) remote-sensing satellite data. Using simulation, we evaluate the accuracy of epidemiological health effect estimates in linear and logistic regression when using spatial air pollution predictions from kriging and land use regression models. We examined chronic (long-term) and acute (short-term) exposure to air pollution. Results varied substantially across different scenarios. Exposure models with low out-of-sample R² yielded severe biases in the health effect estimates of some models, ranging from 60% upward bias to 70% downward bias. One land use regression exposure model with >0.9 out-of-sample R² yielded upward biases up to 13% for acute health effect estimates. Almost all models drastically underestimated the SEs. Land use regression models performed better in chronic effect simulations. These results can help researchers when interpreting health effect estimates in these types of studies.
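The attenuation mechanism studied above can be illustrated with a small numpy sketch (all numbers are illustrative assumptions, not the study's satellite-derived exposure surface): classical measurement error in a predicted exposure biases a linear health effect estimate toward the null.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
true_beta = 1.0

# "True" air pollution exposure at each subject's location (illustrative)
x = rng.normal(10.0, 2.0, size=n)

# Predicted exposure from a spatial model: classical measurement error
# (low out-of-sample R^2 corresponds to a large error variance)
w = x + rng.normal(0.0, 2.0, size=n)

# Health outcome depends on the true exposure
y = true_beta * x + rng.normal(0.0, 1.0, size=n)

# Naive regression on the predicted exposure attenuates the effect;
# expected attenuation factor here: var(x) / (var(x) + var(error)) = 0.5
beta_naive = np.polyfit(w, y, 1)[0]
print(beta_naive)  # roughly half the true slope, i.e. ~50% downward bias
```

This is the classical-error case; Berkson-type error from smoothed exposure predictions behaves differently, which is part of why the paper's results vary by exposure model.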
Dubovyk, Olena; Menz, Gunter; Conrad, Christopher; Kan, Elena; Machwitz, Miriam; Khamzina, Asia
2013-06-01
Advancing land degradation in the irrigated areas of Central Asia hinders sustainable development of this predominantly agricultural region. To support decisions on mitigating cropland degradation, this study combines linear trend analysis and spatial logistic regression modeling to expose a land degradation trend in the Khorezm region, Uzbekistan, and to analyze its causes. Time series of the 250-m MODIS NDVI, summed over the growing seasons of 2000-2010, were used to derive areas with an apparent negative vegetation trend; this was interpreted as an indicator of land degradation. About one third (161,000 ha) of the region's area experienced negative trends of varying magnitude. The vegetation decline was particularly evident on the low-fertility lands bordering the natural sandy desert, suggesting that these areas should be prioritized in mitigation planning. The results of logistic modeling indicate that the spatial pattern of the observed trend is mainly associated with the level of the groundwater table (odds = 330%), land-use intensity (odds = 103%), low soil quality (odds = 49%), slope (odds = 29%), and salinity of the groundwater (odds = 26%). Areas threatened by land degradation were mapped by fitting the estimated model parameters to available data. The elaborated approach, combining remote sensing and GIS, can form the basis for a common tool for monitoring land degradation trends in the irrigated croplands of Central Asia.
NASA Astrophysics Data System (ADS)
Grégoire, G.
2014-12-01
Logistic regression is originally intended to explain the relationship between the probability of an event and a set of covariables. The model's coefficients can be interpreted via odds and odds ratios, which are presented in the introduction of the chapter. When observations are obtained individually, we speak of binary logistic regression; when they are grouped, the regression is said to be binomial. Our presentation mainly focuses on the binary case. For statistical inference the main tool is maximum likelihood: we present the Wald, Rao (score), and likelihood-ratio results and their use in comparing nested models. The problems we deal with are essentially the same as in multiple linear regression: testing a global effect, testing individual effects, selecting variables to build a model, measuring the fit of the model, and predicting new values. The methods are demonstrated on data sets using R. Finally, we briefly consider the binomial case and the situation where several events are of interest, that is, polytomous (multinomial) logistic regression and the particular case of ordinal logistic regression.
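The odds-ratio interpretation described above can be sketched with a small, self-contained maximum-likelihood fit. The chapter demonstrates the methods in R; the Python sketch below (Newton-Raphson/IRLS on synthetic data) is only an illustration of the same idea.

```python
import numpy as np

def fit_logistic(X, y, n_iter=25):
    """Maximum-likelihood logistic regression via Newton-Raphson (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (y - p)                    # score vector
        hess = X.T @ (X * (p * (1.0 - p))[:, None])  # observed information
        beta += np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(0)
n = 10000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])            # intercept + one covariable
true_beta = np.array([-1.0, 0.7])
p = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = rng.binomial(1, p)

beta_hat = fit_logistic(X, y)
odds_ratio = np.exp(beta_hat[1])  # multiplicative change in odds per unit of x
print(beta_hat, odds_ratio)       # odds ratio should sit near exp(0.7)
```

The coefficient on x recovers the log odds ratio; exponentiating it gives the odds ratio that the chapter's introduction interprets.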
NASA Technical Reports Server (NTRS)
Duda, David P.; Minnis, Patrick
2009-01-01
Previous studies have shown that probabilistic forecasting may be a useful method for predicting persistent contrail formation. A probabilistic forecast of contrail formation over the contiguous United States (CONUS) is created using hourly meteorological analyses from the Advanced Regional Prediction System (ARPS) and the Rapid Update Cycle (RUC), together with GOES water vapor channel measurements and surface and satellite observations of contrails. Two groups of logistic models were created. The first group of models (SURFACE models) is based on surface-based contrail observations supplemented with satellite observations of contrail occurrence. The second group of models (OUTBREAK models) is derived from a selected subgroup of satellite-based observations of widespread persistent contrails. The mean accuracies for both the SURFACE and OUTBREAK models typically exceeded 75 percent when based on the RUC or ARPS analysis data, but decreased when the logistic models were derived from ARPS forecast data.
NASA Technical Reports Server (NTRS)
Duda, David P.; Minnis, Patrick
2009-01-01
Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
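The two skill measures used above are simple functions of the 2x2 forecast/observation contingency table. A minimal sketch (the forecasts and observations below are made up for illustration, not the ARPS-derived data):

```python
import numpy as np

def pc_and_hkd(obs, pred):
    """Percent correct (PC) and Hanssen-Kuipers discriminant (HKD) for
    dichotomous forecasts. HKD = hit rate - false alarm rate."""
    obs, pred = np.asarray(obs, bool), np.asarray(pred, bool)
    a = np.sum(obs & pred)       # hits
    b = np.sum(~obs & pred)      # false alarms
    c = np.sum(obs & ~pred)      # misses
    d = np.sum(~obs & ~pred)     # correct negatives
    pc = (a + d) / (a + b + c + d)
    hkd = a / (a + c) - b / (b + d)
    return pc, hkd

# Convert a probabilistic contrail forecast to yes/no with a critical threshold
probs = np.array([0.9, 0.8, 0.3, 0.2, 0.6, 0.1, 0.7, 0.4])
obs   = np.array([1,   1,   0,   0,   1,   0,   0,   0], dtype=bool)
print(pc_and_hkd(obs, probs >= 0.5))
```

Lowering the threshold to the climatological frequency of occurrence trades false alarms for hits, which is why the paper finds the two measures favor different thresholds.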
NASA Astrophysics Data System (ADS)
Agustí-Panareda, Anna; Massart, Sébastien; Chevallier, Frédéric; Balsamo, Gianpaolo; Boussetta, Souhail; Dutra, Emanuel; Beljaars, Anton
2016-08-01
Forecasting atmospheric CO2 daily at the global scale with good accuracy, as is done for the weather, is a challenging task. However, it is also one of the key areas of development for bridging the gaps between weather, air quality, and climate models. The challenge stems from the fact that atmospheric CO2 is largely controlled by the CO2 fluxes at the surface, which are difficult to constrain with observations. In particular, the biogenic fluxes simulated by land surface models show skill in detecting synoptic and regional-scale disturbances up to sub-seasonal time-scales, but they are subject to large seasonal and annual budget errors at the global scale, usually requiring a posteriori adjustment. This paper presents a scheme to diagnose and mitigate model errors associated with biogenic fluxes within an atmospheric CO2 forecasting system. The scheme is an adaptive scaling procedure referred to as the biogenic flux adjustment scheme (BFAS), and it can be applied automatically in real time throughout the forecast. The BFAS method generally improves the continental budget of CO2 fluxes in the model by combining information from three sources: (1) retrospective fluxes estimated by a global flux inversion system, (2) land-use information, and (3) simulated fluxes from the model. The method is shown to produce enhanced skill in daily 10-day CO2 forecasts without requiring continuous manual intervention. It is therefore particularly suitable for near-real-time CO2 analysis and forecasting systems.
Leacy, Finbarr P; Floyd, Sian; Yates, Tom A; White, Ian R
2017-01-10
Multiple imputation with delta adjustment provides a flexible and transparent means to impute univariate missing data under general missing-not-at-random mechanisms. This facilitates the conduct of analyses assessing sensitivity to the missing-at-random (MAR) assumption. We review the delta-adjustment procedure and demonstrate how it can be used to assess sensitivity to departures from MAR, both when estimating the prevalence of a partially observed outcome and when performing parametric causal mediation analyses with a partially observed mediator. We illustrate the approach using data from 34,446 respondents to a tuberculosis and human immunodeficiency virus (HIV) prevalence survey that was conducted as part of the Zambia-South Africa TB and AIDS Reduction Study (2006-2010). In this study, information on partially observed HIV serological values was supplemented by additional information on self-reported HIV status. We present results from 2 types of sensitivity analysis: The first assumed that the degree of departure from MAR was the same for all individuals with missing HIV serological values; the second assumed that the degree of departure from MAR varied according to an individual's self-reported HIV status. Our analyses demonstrate that multiple imputation offers a principled approach by which to incorporate auxiliary information on self-reported HIV status into analyses based on partially observed HIV serological values.
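A minimal sketch of the delta-adjustment idea on synthetic binary data: impute under MAR, then shift the imputation model's log-odds by delta to express a missing-not-at-random departure. This illustrates the general procedure only; the survey analysis itself used richer imputation models and self-reported status as auxiliary information.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
y = rng.binomial(1, 0.25, size=n).astype(float)  # e.g. a binary serostatus
miss = rng.random(n) < 0.3                       # 30% missing (MCAR here, for simplicity)
y_obs = np.where(miss, np.nan, y)

def mi_delta(y_obs, delta, m=20, rng=rng):
    """Multiple imputation with delta adjustment for a binary outcome:
    fit the imputation model to observed values, shift its log-odds by
    delta (delta = 0 recovers MAR), then pool estimates across m sets."""
    obs = y_obs[~np.isnan(y_obs)]
    p_mar = obs.mean()
    logit = np.log(p_mar / (1 - p_mar)) + delta
    p_mnar = 1 / (1 + np.exp(-logit))
    ests = []
    for _ in range(m):
        y_imp = y_obs.copy()
        k = int(np.isnan(y_imp).sum())
        y_imp[np.isnan(y_imp)] = rng.binomial(1, p_mnar, size=k)
        ests.append(y_imp.mean())                # prevalence from this imputation
    return np.mean(ests)                         # point estimate, pooled by averaging

for delta in (-1.0, 0.0, 1.0):
    print(delta, round(mi_delta(y_obs, delta), 3))
```

Sweeping delta over a plausible range, as in the paper's sensitivity analyses, shows how far the prevalence estimate moves as the MAR assumption is relaxed.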
Pakenham, Kenneth I; Samios, Christina; Sofronoff, Kate
2005-05-01
The present study examined the applicability of the double ABCX model of family adjustment in explaining maternal adjustment to caring for a child diagnosed with Asperger syndrome. Forty-seven mothers completed questionnaires at a university clinic while their children were participating in an anxiety intervention. The children were aged between 10 and 12 years. Results of correlations showed that each of the model components was related to one or more domains of maternal adjustment in the direction predicted, with the exception of problem-focused coping. Hierarchical regression analyses demonstrated that, after controlling for the effects of relevant demographics, stressor severity, pile-up of demands and coping were related to adjustment. Findings indicate the utility of the double ABCX model in guiding research into parental adjustment when caring for a child with Asperger syndrome. Limitations of the study and clinical implications are discussed.
Quatela, Angelica; Callister, Robin; Patterson, Amanda; MacDonald-Wicks, Lesley
2016-01-01
This systematic review investigated the effects of differing energy intakes, macronutrient compositions, and eating patterns of meals consumed after an overnight fast on Diet Induced Thermogenesis (DIT). The initial search identified 2482 records; 26 papers remained once duplicates were removed and inclusion criteria were applied. Studies (n = 27) in the analyses were randomized crossover designs comparing the effects of two or more eating events on DIT. Higher energy intake increased DIT; in a mixed model meta-regression, for every 100 kJ increase in energy intake, DIT increased by 1.1 kJ/h (p < 0.001). Meals with a high protein or carbohydrate content had a higher DIT than high fat, although this effect was not always significant. Meals with medium chain triglycerides had a significantly higher DIT than long chain triglycerides (meta-analysis, p = 0.002). Consuming the same meal as a single bolus eating event compared to multiple small meals or snacks was associated with a significantly higher DIT (meta-analysis, p = 0.02). Unclear or inconsistent findings were found by comparing the consumption of meals quickly or slowly, and palatability was not significantly associated with DIT. These findings indicate that the magnitude of the increase in DIT is influenced by the energy intake, macronutrient composition, and eating pattern of the meal. PMID:27792142
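A fixed-effect meta-regression of the kind underlying the "per 100 kJ" slope can be sketched as inverse-variance weighted least squares. The study-level numbers below are simulated assumptions, not the review's extracted data, and the review itself used a mixed model rather than this fixed-effect simplification.

```python
import numpy as np

rng = np.random.default_rng(11)
k = 30                                          # number of studies (illustrative)
energy = rng.uniform(500, 4000, size=k)         # meal energy intake, kJ
se = rng.uniform(2.0, 8.0, size=k)              # per-study standard errors, kJ/h
dit = 10 + 0.011 * energy + rng.normal(0, se)   # simulated truth: 1.1 kJ/h per 100 kJ

# Inverse-variance weighted least squares (fixed-effect meta-regression)
X = np.column_stack([np.ones(k), energy])
W = np.diag(1.0 / se ** 2)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ dit)

print(beta[1] * 100)  # estimated DIT increase per 100 kJ, near the simulated 1.1
```

Weighting by inverse variance gives precise studies more influence on the slope, which is the core of any meta-regression of dose against effect.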
A regression model analysis of longitudinal dental caries data.
Ringelberg, M L; Tonascia, J A
1976-03-01
Longitudinal data on caries experience were derived from the reexamination and interview of a cohort of 306 subjects with an average follow-up period of 33 years after the baseline examination. Analysis of the data was accomplished by the use of contingency tables utilizing enumeration statistics compared with a multiple regression analysis. The analyses indicated a strong association of caries experience at one point in time with the caries experience of that same person earlier in life. The regression model approach offers adjustment of any given independent variable for the effect of all other independent variables, providing a powerful means of bias reduction. The model is also useful in separating out the specific effect of an independent variable over and above the contribution of other variables. The model used explained 35% of the variability in the DMFS scores recorded. Similar models could be useful adjuncts in the analyses of dental epidemiologic data.
ERIC Educational Resources Information Center
Pedrini, D. T.; Pedrini, Bonnie C.
Regression, another mechanism studied by Sigmund Freud, has had much research, e.g., hypnotic regression, frustration regression, schizophrenic regression, and infra-human-animal regression (often directly related to fixation). Many investigators worked with hypnotic age regression, which has a long history, going back to Russian reflexologists.…
Disability and Coping as Predictors of Psychological Adjustment to Rheumatoid Arthritis.
ERIC Educational Resources Information Center
Revenson, Tracey A.; Felton, Barbara J.
1989-01-01
Examined degree to which self-reported functional disability and coping efforts contributed to psychological adjustment among 45 rheumatoid arthritis patients over six months. Hierarchical multiple regression analyses indicated that increases in disability were related to decreased acceptance of illness and increased negative affect, while coping…
Native American Racial Identity Development and College Adjustment at Two-Year Institutions
ERIC Educational Resources Information Center
Watson, Joshua C.
2009-01-01
In this study, a series of simultaneous multiple regression analyses were conducted to examine the relationship between racial identity development and college adjustment for a sample of 76 Choctaw community college students in the South. Results indicated that 3 of the 4 racial identity statuses (dissonance, immersion-emersion, and…
A Study of Perfectionism, Attachment, and College Student Adjustment: Testing Mediational Models.
ERIC Educational Resources Information Center
Hood, Camille A.; Kubal, Anne E.; Pfaller, Joan; Rice, Kenneth G.
Mediational models predicting college students' adjustment were tested using regression analyses. Contemporary adult attachment theory was employed to explore the cognitive/affective mechanisms by which adult attachment and perfectionism affect various aspects of psychological functioning. Consistent with theoretical expectations, results…
Exploring Mexican American adolescent romantic relationship profiles and adjustment
Moosmann, Danyel A.V.; Roosa, Mark W.
2015-01-01
Although Mexican Americans are the largest ethnic minority group in the nation, knowledge is limited regarding this population's adolescent romantic relationships. This study explored whether 12th grade Mexican Americans’ (N = 218; 54% female) romantic relationship characteristics, cultural values, and gender created unique latent classes and if so, whether they were linked to adjustment. Latent class analyses suggested three profiles including, relatively speaking, higher, satisfactory, and lower quality romantic relationships. Regression analyses indicated these profiles had distinct associations with adjustment. Specifically, adolescents with higher and satisfactory quality romantic relationships reported greater future family expectations, higher self-esteem, and fewer externalizing symptoms than those with lower quality romantic relationships. Similarly, adolescents with higher quality romantic relationships reported greater academic self-efficacy and fewer sexual partners than those with lower quality romantic relationships. Overall, results suggested higher quality romantic relationships were most optimal for adjustment. Future research directions and implications are discussed. PMID:26141198
Rank regression: an alternative regression approach for data with outliers.
Chen, Tian; Tang, Wan; Lu, Ying; Tu, Xin
2014-10-01
Linear regression models are widely used in mental health and related health services research. However, the classic linear regression analysis assumes that the data are normally distributed, an assumption that is not met by the data obtained in many studies. One method of dealing with this problem is to use semi-parametric models, which do not require that the data be normally distributed. But semi-parametric models are quite sensitive to outlying observations, so the generated estimates are unreliable when study data includes outliers. In this situation, some researchers trim the extreme values prior to conducting the analysis, but the ad-hoc rules used for data trimming are based on subjective criteria so different methods of adjustment can yield different results. Rank regression provides a more objective approach to dealing with non-normal data that includes outliers. This paper uses simulated and real data to illustrate this useful regression approach for dealing with outliers and compares it to the results generated using classical regression models and semi-parametric regression models.
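One concrete form of rank regression minimizes Jaeckel's rank-based dispersion of the residuals with Wilcoxon scores. The sketch below assumes that particular estimator on simulated data with outliers; it is an illustration of the approach, not the implementation the authors used.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import rankdata

rng = np.random.default_rng(7)
n = 200
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)     # true slope: 2
y[np.argsort(x)[-10:]] += 30.0       # gross outliers at the largest x values

def jaeckel_dispersion(b):
    """Jaeckel's rank-based dispersion of residuals for slope b
    (Wilcoxon scores); minimizing it gives the rank-regression slope."""
    e = y - b * x
    r = rankdata(e)
    a = np.sqrt(12.0) * (r / (n + 1) - 0.5)
    return np.sum(a * e)

slope_rank = minimize_scalar(jaeckel_dispersion,
                             bounds=(-10.0, 10.0), method="bounded").x
slope_ols = np.polyfit(x, y, 1)[0]

print(slope_rank, slope_ols)  # rank slope stays near 2; OLS is dragged upward
```

Because the scores are bounded, outlying residuals have limited influence on the rank estimate, whereas the squared-error criterion lets them dominate the OLS fit, with no ad-hoc trimming rule needed.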
Bayesian Unimodal Density Regression for Causal Inference
ERIC Educational Resources Information Center
Karabatsos, George; Walker, Stephen G.
2011-01-01
Karabatsos and Walker (2011) introduced a new Bayesian nonparametric (BNP) regression model. Through analyses of real and simulated data, they showed that the BNP regression model outperforms other parametric and nonparametric regression models of common use, in terms of predictive accuracy of the outcome (dependent) variable. The other,…
ERIC Educational Resources Information Center
Walton, Joseph M.; And Others
1978-01-01
Ridge regression is an approach to the problem of large standard errors of regression estimates of intercorrelated regressors. The effect of ridge regression on the estimated squared multiple correlation coefficient is discussed and illustrated. (JKS)
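The stabilizing effect on intercorrelated regressors can be seen in a closed-form sketch (synthetic data; the penalty lambda = 1 is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)  # nearly collinear with x1
X = np.column_stack([x1, x2])
y = x1 + x2 + rng.normal(size=n)          # true coefficients: (1, 1)

def ridge(X, y, lam):
    """Closed-form ridge estimate: (X'X + lam*I)^(-1) X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

beta_ols = ridge(X, y, 0.0)    # OLS: individual coefficients are unstable
beta_ridge = ridge(X, y, 1.0)  # shrinkage pulls the pair toward a stable solution
print(beta_ols, beta_ridge)
```

Under near-collinearity the sum of the two coefficients is well determined for both fits, but the penalty shrinks the poorly determined difference between them, which is exactly the large-standard-error problem ridge regression addresses.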
Na, Hyunjoo; Dancy, Barbara L; Park, Chang
2015-06-01
The study's purpose was to explore whether frequency of cyberbullying victimization, cognitive appraisals, and coping strategies were associated with psychological adjustments among college student cyberbullying victims. A convenience sample of 121 students completed questionnaires. Linear regression analyses found frequency of cyberbullying victimization, cognitive appraisals, and coping strategies respectively explained 30%, 30%, and 27% of the variance in depression, anxiety, and self-esteem. Frequency of cyberbullying victimization and approach and avoidance coping strategies were associated with psychological adjustments, with avoidance coping strategies being associated with all three psychological adjustments. Interventions should focus on teaching cyberbullying victims to not use avoidance coping strategies.
Life adjustment correlates of physical self-concepts.
Sonstroem, R J; Potts, S A
1996-05-01
This research tested relationships between physical self-concepts and contemporary measures of life adjustment. University students (119 females, 126 males) completed the Physical Self-Perception Profile assessing self-concepts of sport competence, physical condition, attractive body, strength, and general physical self-worth. Multiple regression found significant associations (P < 0.05 to P < 0.001) in hypothesized directions between physical self-concepts and positive affect, negative affect, depression, and health complaints in 17 of 20 analyses. Thirteen of these relationships remained significant after Bonferroni correction. Hierarchical multiple regression examined the unique contribution of physical self-perceptions in predicting each adjustment variable after accounting for the effects of global self-esteem and two measures of social desirability. Physical self-concepts significantly improved associations with life adjustment (P < 0.05 to P < 0.05) in three of the eight analyses across gender and approached significance in three others. These data demonstrate that self-perceptions of physical competence in college students are essentially related to life adjustment, independent of the effects of social desirability and global self-esteem. These links are mainly with perceptions of sport competence in males and with perceptions of physical condition, attractive body, and general physical self-worth in both males and females.
Psychosocial adjustment to ALS: a longitudinal study.
Matuz, Tamara; Birbaumer, Niels; Hautzinger, Martin; Kübler, Andrea
2015-01-01
For the current study the Lazarian stress-coping theory and the appendant model of psychosocial adjustment to chronic illness and disabilities (Pakenham, 1999) shaped the foundation for identifying determinants of adjustment to ALS. We aimed to investigate the evolution of psychosocial adjustment to ALS and to determine its long-term predictors. A longitudinal study design with four measurement time points was therefore used to assess patients' quality of life, depression, and stress-coping model related aspects, such as illness characteristics, social support, cognitive appraisals, and coping strategies, during a period of 2 years. Regression analyses revealed that 55% of the variance in severity of depressive symptoms and 47% of the variance in quality of life at T2 was accounted for by all the T1 predictor variables taken together. On the level of individual contributions, protective buffering and appraisal of own coping potential accounted for a significant percentage of the variance in severity of depressive symptoms, whereas problem management coping strategies explained variance in quality of life scores. Illness characteristics at T2 did not explain any variance in either adjustment outcome. Overall, the pattern of the longitudinal results indicated stable depressive symptoms and quality of life indices, reflecting a successful adjustment to the disease across the four measurement time points during a period of about two years. Empirical evidence is provided for the predictive value of social support, cognitive appraisals, and coping strategies, but not illness parameters such as severity and duration, for adaptation to ALS. The current study contributes to a better conceptualization of adjustment, allowing us to provide evidence-based support beyond medical and physical intervention for people with ALS.
Harry, H.H.
1988-03-11
Apparatus and method for the adjustment and alignment of shafts in high power devices. A plurality of adjacent rotatable angled cylinders are positioned between a base and the shaft to be aligned which when rotated introduce an axial offset. The apparatus is electrically conductive and constructed of a structurally rigid material. The angled cylinders allow the shaft such as the center conductor in a pulse line machine to be offset in any desired alignment position within the range of the apparatus. 3 figs.
Harry, Herbert H.
1989-01-01
Apparatus and method for the adjustment and alignment of shafts in high power devices. A plurality of adjacent rotatable angled cylinders is positioned between a base and the shaft to be aligned; when rotated, the cylinders introduce an axial offset. The apparatus is electrically conductive and constructed of a structurally rigid material. The angled cylinders allow the shaft, such as the center conductor in a pulse line machine, to be offset in any desired alignment position within the range of the apparatus.
Hegazy, Maha A; Lotfy, Hayam M; Mowaka, Shereen; Mohamed, Ekram Hany
2016-07-05
Wavelets have been adapted for a vast number of signal-processing applications due to the amount of information that can be extracted from a signal. In this work, a comparative study was conducted on the efficiency of continuous wavelet transform (CWT) as a signal-processing tool in univariate regression and as a pre-processing tool in multivariate analysis using partial least squares (CWT-PLS). These were applied to complex spectral signals of ternary and quaternary mixtures. The CWT-PLS method succeeded in the simultaneous determination of a quaternary mixture of drotaverine (DRO), caffeine (CAF), paracetamol (PAR), and p-aminophenol (PAP, the major impurity of paracetamol). The univariate CWT, in contrast, failed to determine the quaternary mixture components simultaneously; it could determine only PAR and PAP, as well as the ternary mixtures of DRO, CAF, and PAR and of CAF, PAR, and PAP. During the CWT calculations, different wavelet families were tested. The univariate CWT method was validated according to the ICH guidelines. For the development of the CWT-PLS model, a calibration set was prepared by means of an orthogonal experimental design, and its absorption spectra were recorded and processed by CWT. The CWT-PLS model was constructed by regression between the wavelet coefficient and concentration matrices, and validation was performed with both cross-validation and external validation sets. Both methods were successfully applied to the determination of the studied drugs in pharmaceutical formulations.
Survival Data and Regression Models
NASA Astrophysics Data System (ADS)
Grégoire, G.
2014-12-01
We start this chapter by introducing some basic elements for the analysis of censored survival data. Then we focus on right-censored data and develop two types of regression models. The first concerns the so-called accelerated failure time (AFT) models, which are parametric models in which a function of a parameter depends linearly on the covariates. The second is a semiparametric model, in which the covariates enter multiplicatively into the expression of the hazard rate function. The main statistical tool for analysing these regression models is the maximum likelihood methodology; although we recall some essential results of ML theory, we refer to the chapter "Logistic Regression" for a more detailed presentation.
Regressive systemic sclerosis.
Black, C; Dieppe, P; Huskisson, T; Hart, F D
1986-01-01
Systemic sclerosis is a disease which usually progresses or reaches a plateau with persistence of symptoms and signs. Regression is extremely unusual. Four cases of established scleroderma are described in which regression is well documented. The significance of this observation and possible mechanisms of disease regression are discussed. PMID:3718012
Tharrington, Arnold N.
2015-09-09
The NCCS Regression Test Harness is a software package that provides a framework to perform regression and acceptance testing on NCCS High Performance Computers. The package is written in Python and has only the dependency of a Subversion repository to store the regression tests.
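As an illustration of the baseline-comparison idea behind such a harness, here is a minimal sketch with hypothetical names; it is not the NCCS package's actual design or API, and it omits the HPC job submission and Subversion storage the real package uses:

```python
class RegressionHarness:
    """Toy harness: each registered test returns a result that is compared
    against a stored baseline; any mismatch is flagged as a regression."""

    def __init__(self):
        self.tests = {}        # name -> zero-argument callable
        self.baselines = {}    # name -> previously accepted result

    def register(self, name, func, baseline):
        self.tests[name] = func
        self.baselines[name] = baseline

    def run(self):
        return {name: ("pass" if func() == self.baselines[name] else "REGRESSION")
                for name, func in self.tests.items()}

harness = RegressionHarness()
harness.register("sum_kernel", lambda: sum(range(10)), baseline=45)
harness.register("drifted_kernel", lambda: 2 + 2, baseline=5)
print(harness.run())  # {'sum_kernel': 'pass', 'drifted_kernel': 'REGRESSION'}
```

In a real harness the callables would launch jobs and the baselines would live in version control, but the pass/regression comparison is the same.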
Unitary Response Regression Models
ERIC Educational Resources Information Center
Lipovetsky, S.
2007-01-01
The dependent variable in a regular linear regression is a numerical variable, and in a logistic regression it is a binary or categorical variable. In these models the dependent variable has varying values. However, there are problems yielding an identity output of a constant value which can also be modelled in a linear or logistic regression with…
Ehrsam, Eric; Kallini, Joseph R.; Lebas, Damien; Modiano, Philippe; Cotten, Hervé
2016-01-01
Fully regressive melanoma is a phenomenon in which the primary cutaneous melanoma becomes completely replaced by fibrotic components as a result of the host immune response. Although 10 to 35 percent of cases of cutaneous melanoma may partially regress, fully regressive melanoma is very rare; only 47 cases have been reported in the literature to date. All of the cases of fully regressive melanoma reported in the literature were diagnosed in conjunction with metastasis. The authors describe a case of fully regressive melanoma without any metastases at the time of its diagnosis. Characteristic findings on dermoscopy, as well as the absence of melanoma on final biopsy, confirmed the diagnosis. PMID:27672418
Zane, P A; Brindle, S D; Gause, D O; O'Buck, A J; Raghavan, P R; Tripp, S L
1990-09-01
The relationship between the physicochemical characteristics of 27 new drug candidates and their distribution into the melanin-containing structure of the rat eye, the uveal tract, was examined. Tissue distribution data were obtained from whole-body autoradiograms of pigmented Long-Evans rats sacrificed at 5 min and 96 hr after dosing. The physicochemical parameters considered include molecular weight, pKa, degree of ionization, octanol/water partition coefficient (log P(o/w)), drug-melanin binding energy, and acid/base status of the functional groups within the molecule. Multiple linear regression analysis was used to describe the best model correlating physicochemical and/or biological characteristics of these compounds to their initial distribution at 5 min and to the retention of residual radioactivity in ocular melanin at 96 hr post-injection. The early distribution was a function primarily of acid/base status, pKa, binding energy, and log P(o/w), whereas uveal tract retention in rats was a function of volume of distribution (V1), log P(o/w), pKa, and binding energy. Further, there was a relationship between the initial distribution of a compound into the uveal tract and its retention 96 hr later. More specifically, the structures most likely to be distributed and ultimately retained at high concentrations were those containing strongly basic functionalities, such as piperidine or piperazine moieties and other amines. Further, the more lipophilic and, hence, widely distributed the basic compound, the greater the likelihood that it interacts with ocular melanin. In summary, the use of multiple linear regression analysis was useful in distinguishing which physicochemical characteristics of a compound or group of compounds contributed to melanin binding in pigmented rats in vivo.
Regression Analysis: Legal Applications in Institutional Research
ERIC Educational Resources Information Center
Frizell, Julie A.; Shippen, Benjamin S., Jr.; Luna, Andrew L.
2008-01-01
This article reviews multiple regression analysis, describes how its results should be interpreted, and instructs institutional researchers on how to conduct such analyses using an example focused on faculty pay equity between men and women. The use of multiple regression analysis will be presented as a method with which to compare salaries of…
Introduction to the use of regression models in epidemiology.
Bender, Ralf
2009-01-01
Regression modeling is one of the most important statistical techniques used in analytical epidemiology. By means of regression models the effect of one or several explanatory variables (e.g., exposures, subject characteristics, risk factors) on a response variable such as mortality or cancer can be investigated. From multiple regression models, adjusted effect estimates can be obtained that take the effect of potential confounders into account. Regression methods can be applied in all epidemiologic study designs so that they represent a universal tool for data analysis in epidemiology. Different kinds of regression models have been developed in dependence on the measurement scale of the response variable and the study design. The most important methods are linear regression for continuous outcomes, logistic regression for binary outcomes, Cox regression for time-to-event data, and Poisson regression for frequencies and rates. This chapter provides a nontechnical introduction to these regression models with illustrating examples from cancer research.
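The central point above, that adjusted estimates from a multiple regression differ from crude ones when a confounder is present, can be sketched on simulated data (illustrative only; variable names and effect sizes are made up, not taken from the chapter):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
confounder = rng.normal(size=n)
exposure = confounder + 0.5 * rng.normal(size=n)          # confounder influences exposure
outcome = 1.0 * exposure + 2.0 * confounder + 0.5 * rng.normal(size=n)

# Crude effect: simple regression of outcome on exposure alone.
crude = np.polyfit(exposure, outcome, 1)[0]

# Adjusted effect: multiple regression including the confounder.
X = np.column_stack([np.ones(n), exposure, confounder])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
adjusted = beta[1]

print(round(crude, 2), round(adjusted, 2))  # crude is biased upward; adjusted is near the true 1.0
```

The same contrast carries over to logistic, Cox, and Poisson models; only the link between covariates and response changes.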
A regularization corrected score method for nonlinear regression models with covariate error.
Zucker, David M; Gorfine, Malka; Li, Yi; Tadesse, Mahlet G; Spiegelman, Donna
2013-03-01
Many regression analyses involve explanatory variables that are measured with error, and failing to account for this error is well known to lead to biased point and interval estimates of the regression coefficients. We present here a new general method for adjusting for covariate error. Our method consists of an approximate version of the Stefanski-Nakamura corrected score approach, using the method of regularization to obtain an approximate solution of the relevant integral equation. We develop the theory in the setting of classical likelihood models; this setting covers, for example, linear regression, nonlinear regression, logistic regression, and Poisson regression. The method is extremely general in terms of the types of measurement error models covered, and is a functional method in the sense of not involving assumptions on the distribution of the true covariate. We discuss the theoretical properties of the method and present simulation results in the logistic regression setting (univariate and multivariate). For illustration, we apply the method to data from the Harvard Nurses' Health Study concerning the relationship between physical activity and breast cancer mortality in the period following a diagnosis of breast cancer.
Arruda Viani, Gustavo; Stefano, Eduardo Jose; Vendito Soares, Francisco; Afonso, Sergio Luis
2011-07-15
Purpose: To evaluate whether the risk of local recurrence depends on the biologic effective dose (BED) or fractionation dose in patients with resectable rectal cancer undergoing preoperative radiotherapy (RT) compared with surgery alone. Methods and Materials: A meta-analysis of randomized controlled trials (RCTs) was performed. The MEDLINE, Embase, CancerLit, and Cochrane Library databases were systematically searched for evidence. To evaluate the dose-response relationship, we conducted a meta-regression analysis. Four subgroups were created: Group 1, RCTs with a BED >30 Gy₁₀ and a short RT schedule; Group 2, RCTs with BED >30 Gy₁₀ and a long RT schedule; Group 3, RCTs with BED ≤30 Gy₁₀ and a short RT schedule; and Group 4, RCTs with BED ≤30 Gy₁₀ and a long RT schedule. Results: Our review identified 21 RCTs, yielding 9,097 patients. The pooled results from these 21 randomized trials of preoperative RT showed a significant reduction in mortality for groups 1 (p = .004) and 2 (p = .03). For local recurrence, the results were also significant in groups 1 (p = .00001) and 2 (p = .00001). The only subgroup that showed a greater sphincter preservation (SP) rate than surgery was group 2 (p = .03). The dose-response curve was linear (p = .006), and RT decreased the risk of local recurrence by about 1.7% for each Gy₁₀ of BED. Conclusion: Our data have shown that RT with a BED of >30 Gy₁₀ is more efficient in reducing local recurrence and mortality rates than a BED of ≤30 Gy₁₀, independent of the fractionation schedule used. A long RT schedule with a BED of >30 Gy₁₀ should be recommended for sphincter preservation.
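For reference, a BED quoted in Gy₁₀ follows the standard linear-quadratic formula BED = nd(1 + d/(α/β)) with α/β = 10 Gy. A short sketch with two illustrative schedules (invented for illustration, not trial data):

```python
def bed(n_fractions, dose_per_fraction, alpha_beta=10.0):
    """Biologically effective dose, BED = n*d*(1 + d/(alpha/beta))."""
    total_dose = n_fractions * dose_per_fraction
    return total_dose * (1.0 + dose_per_fraction / alpha_beta)

print(bed(5, 5.0))   # short schedule, 5 x 5 Gy: 37.5 Gy10, above the 30 Gy10 cutoff
print(bed(25, 1.8))  # long schedule, 25 x 1.8 Gy: about 53.1 Gy10
```

Both example schedules land in the >30 Gy₁₀ groups that the meta-regression found more effective.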
Gerber, Samuel; Rubel, Oliver; Bremer, Peer -Timo; Pascucci, Valerio; Whitaker, Ross T.
2012-01-19
This paper introduces a novel partition-based regression approach that incorporates topological information. Partition-based regression typically introduces a quality-of-fit-driven decomposition of the domain. The emphasis in this work is on a topologically meaningful segmentation. Thus, the proposed regression approach is based on a segmentation induced by a discrete approximation of the Morse–Smale complex. This yields a segmentation with partitions corresponding to regions of the function with a single minimum and maximum that are often well approximated by a linear model. This approach yields regression models that are amenable to interpretation and have good predictive capacity. Typically, regression estimates are quantified by their geometrical accuracy. For the proposed regression, an important aspect is the quality of the segmentation itself. Thus, this article introduces a new criterion that measures the topological accuracy of the estimate. The topological accuracy provides a complementary measure to the classical geometrical error measures and is very sensitive to overfitting. The Morse–Smale regression is compared to state-of-the-art approaches in terms of geometry and topology and yields comparable or improved fits in many cases. Finally, a detailed study on climate-simulation data demonstrates the application of the Morse–Smale regression. Supplementary Materials are available online and contain an implementation of the proposed approach in the R package msr, an analysis and simulations on the stability of the Morse–Smale complex approximation, and additional tables for the climate-simulation study.
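A drastically simplified one-dimensional caricature of the partition-then-fit idea may help: the actual method derives its segmentation from a discrete Morse-Smale complex in arbitrary dimension (see the R package msr), whereas this sketch merely splits at a single interior maximum and fits one linear model per monotone piece:

```python
import numpy as np

x = np.linspace(0.0, 2.0, 81)
y = 1.0 - np.abs(x - 1.0)            # tent function: one interior maximum

split = x[np.argmax(y)]              # segment boundary at the maximum
pieces = [x <= split, x > split]     # two monotone regions
fitted = np.empty_like(y)
for mask in pieces:
    slope, intercept = np.polyfit(x[mask], y[mask], 1)   # one linear model per piece
    fitted[mask] = slope * x[mask] + intercept

print(float(np.max(np.abs(fitted - y))))   # essentially 0: each monotone piece is exactly linear
```

A single global linear model would fit this function poorly; partitioning at the extremum is what makes the piecewise fits both accurate and interpretable.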
Improved Regression Calibration
ERIC Educational Resources Information Center
Skrondal, Anders; Kuha, Jouni
2012-01-01
The likelihood for generalized linear models with covariate measurement error cannot in general be expressed in closed form, which makes maximum likelihood estimation taxing. A popular alternative is regression calibration which is computationally efficient at the cost of inconsistent estimation. We propose an improved regression calibration…
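The standard regression calibration idea that the article improves upon can be sketched as follows, using the classical error model W = X + U with a known reliability ratio (made-up data; this is the textbook calibration step, not the authors' improved estimator):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)                    # true covariate (unobserved)
w = x + rng.normal(size=n)                # error-prone surrogate, var(U) = 1
y = 2.0 * x + 0.5 * rng.normal(size=n)    # true slope is 2.0

naive = np.polyfit(w, y, 1)[0]            # attenuated toward zero

# Regression calibration: regress on E[x | w] = mean(w) + lam * (w - mean(w)),
# where lam = var(x) / (var(x) + var(U)) is the reliability (0.5 here, assumed known).
lam = 0.5
x_hat = w.mean() + lam * (w - w.mean())
calibrated = np.polyfit(x_hat, y, 1)[0]

print(round(naive, 2), round(calibrated, 2))  # roughly 1.0 versus roughly 2.0
```

In a linear model this correction is exact in expectation; the inconsistency the abstract refers to arises in nonlinear generalized linear models, which is what the improved method targets.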
Time series regression model for infectious disease and weather.
Imai, Chisato; Armstrong, Ben; Chalabi, Zaid; Mangtani, Punam; Hashizume, Masahiro
2015-10-01
Time series regression has been developed and long used to evaluate the short-term associations of air pollution and weather with mortality or morbidity of non-infectious diseases. The application of the regression approaches from this tradition to infectious diseases, however, is less well explored and raises some new issues. We discuss and present potential solutions for five issues often arising in such analyses: changes in immune population, strong autocorrelations, a wide range of plausible lag structures and association patterns, seasonality adjustments, and large overdispersion. The potential approaches are illustrated with datasets of cholera cases and rainfall from Bangladesh and influenza and temperature in Tokyo. Though this article focuses on the application of the traditional time series regression to infectious diseases and weather factors, we also briefly introduce alternative approaches, including mathematical modeling, wavelet analysis, and autoregressive integrated moving average (ARIMA) models. Modifications proposed to standard time series regression practice include using sums of past cases as proxies for the immune population, and using the logarithm of lagged disease counts to control autocorrelation due to true contagion, both of which are motivated from "susceptible-infectious-recovered" (SIR) models. The complexity of lag structures and association patterns can often be informed by biological mechanisms and explored by using distributed lag non-linear models. For overdispersed models, alternative distribution models such as quasi-Poisson and negative binomial should be considered. Time series regression can be used to investigate dependence of infectious diseases on weather, but may need modifying to allow for features specific to this context.
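Two of the proposed modifications, log-transformed lagged counts and a cumulative-case proxy for the immune population, amount to adding derived columns to the design matrix. A sketch with made-up counts (the column names are illustrative, not from the article):

```python
import math

cases    = [3, 5, 9, 14, 11, 7, 4, 6, 10, 15]   # weekly case counts (invented)
rainfall = [0, 2, 8, 12,  5, 1, 0, 3,  9, 11]   # weekly rainfall (invented)

rows = []
for t in range(2, len(cases)):
    rows.append({
        "y": cases[t],                           # response: current count
        "log_lag1": math.log(cases[t - 1] + 1),  # lagged log-count: controls contagion-driven autocorrelation
        "past_cases": sum(cases[:t]),            # cumulative past cases: immune-population proxy
        "rain_lag2": rainfall[t - 2],            # weather effect with a 2-week lag
    })

print(rows[0])  # first usable week: y=9, past_cases=8, rain_lag2=0
```

These rows would then feed a quasi-Poisson or negative binomial regression, with distributed lag non-linear terms replacing the single fixed lag where the biology suggests more complex structure.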
Agar-Wilson, M; Jackson, T
2012-01-01
Although emotion regulation capacities have been linked to adjustment among people with chronic pain, researchers have yet to determine whether these capacities are related to functioning independent of established facets of pain coping. The present study was designed to address this gap. A sample of 128 Australian adults with chronic pain (44 men, 84 women) completed self-report measures of adjustment (quality of life, negative affect, and pain-related disability), pain coping, and features of emotion regulation (emotion appraisal, perceived efficacy in emotion regulation, and emotion utilization). Hierarchical multiple regression analyses indicated that efficacy in emotion regulation was related to quality of life and reduced negative affect even after statistically controlling for effects of other measures of adjustment, pain coping efficacy, and pain coping. Conversely, features of emotion regulation did not improve the prediction model for pain-related disability. Findings suggest emotion regulation capacities may have a unique role in the prediction of specific facets of adjustment among people with chronic pain.
Schmid, Matthias; Wickler, Florian; Maloney, Kelly O.; Mitchell, Richard; Fenske, Nora; Mayr, Andreas
2013-01-01
Regression analysis with a bounded outcome is a common problem in applied statistics. Typical examples include regression models for percentage outcomes and the analysis of ratings that are measured on a bounded scale. In this paper, we consider beta regression, which is a generalization of logit models to situations where the response is continuous on the interval (0,1). Consequently, beta regression is a convenient tool for analyzing percentage responses. The classical approach to fitting a beta regression model is to use maximum likelihood estimation with subsequent AIC-based variable selection. As an alternative to this established, yet unstable, approach, we propose a new estimation technique called boosted beta regression. With boosted beta regression, estimation and variable selection can be carried out simultaneously in a highly efficient way. Additionally, both the mean and the variance of a percentage response can be modeled using flexible nonlinear covariate effects. As a consequence, the new method accounts for common problems such as overdispersion and non-binomial variance structures. PMID:23626706
Predictors of sociocultural adjustment among sojourning Malaysian students in Britain.
Swami, Viren
2009-08-01
The process of cross-cultural migration may be particularly difficult for students travelling overseas for further or higher education, especially where qualitative differences exist between the home and host nations. The present study examined the sociocultural adjustment of sojourning Malaysian students in Britain. Eighty-one Malay and 110 Chinese students enrolled in various courses answered a self-report questionnaire that examined various aspects of sociocultural adjustment. A series of one-way analyses of variance showed that Malay participants experienced poorer sociocultural adjustment in comparison with their Chinese counterparts. They were also less likely than Chinese students to have contact with co-nationals and host nationals, more likely to perceive their actual experience in Britain as worse than they had expected, and more likely to perceive greater cultural distance and greater discrimination. The results of regression analyses showed that, for Malay participants, perceived discrimination accounted for the greatest proportion of variance in sociocultural adjustment (73%), followed by English language proficiency (10%) and contact with host nationals (4%). For Chinese participants, English language proficiency was the strongest predictor of sociocultural adjustment (54%), followed by expectations of life in Britain (18%) and contact with host nationals (3%). By contrast, participants' sex, age, and length of residence failed to emerge as significant predictors for either ethnic group. Possible explanations for this pattern of findings are discussed, including the effects of Islamophobia on Malay-Muslims in Britain, possible socioeconomic differences between Malay and Chinese students, and personality differences between the two ethnic groups. The results are further discussed in relation to practical steps that can be taken to improve the sociocultural adjustment of sojourning students in Britain.
George: Gaussian Process regression
NASA Astrophysics Data System (ADS)
Foreman-Mackey, Daniel
2015-11-01
George is a fast and flexible library, implemented in C++ with Python bindings, for Gaussian Process regression. It is useful for accounting for correlated noise in astronomical datasets, including those for transiting exoplanet discovery and characterization and for stellar population modeling.
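For orientation, the core computation that George accelerates can be written in a few lines of NumPy. This is a generic Gaussian Process regression sketch (squared-exponential kernel, posterior mean), not George's API:

```python
import numpy as np

def sqexp(a, b, scale=1.0):
    """Squared-exponential kernel matrix between point sets a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / scale**2)

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(x)                                   # training targets
noise = 1e-6                                    # tiny observation-noise variance

K = sqexp(x, x) + noise * np.eye(len(x))
alpha = np.linalg.solve(K, y)                   # K^{-1} y

x_test = np.array([1.0, 1.5])
mean = sqexp(x_test, x) @ alpha                 # GP posterior mean at test points
print(np.round(mean, 3))  # reproduces sin(1.0) at the training point; a smooth value in between
```

George's contribution is doing this solve quickly for large, structured covariance matrices; the mathematics is the same.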
Understanding poisson regression.
Hayat, Matthew J; Higgins, Melinda
2014-04-01
Nurse investigators often collect study data in the form of counts. Traditional methods of data analysis have historically approached analysis of count data either as if the count data were continuous and normally distributed or with dichotomization of the counts into the categories of occurred or did not occur. These outdated methods for analyzing count data have been replaced with more appropriate statistical methods that make use of the Poisson probability distribution, which is useful for analyzing count data. The purpose of this article is to provide an overview of the Poisson distribution and its use in Poisson regression. Assumption violations for the standard Poisson regression model are addressed with alternative approaches, including addition of an overdispersion parameter or negative binomial regression. An illustrative example is presented with an application from the ENSPIRE study, and regression modeling of comorbidity data is included for illustrative purposes.
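A sketch of the mechanics: Poisson regression fitted by iteratively reweighted least squares, followed by the Pearson dispersion statistic used to judge whether an overdispersion parameter or a negative binomial model is needed. The counts are made up, and this hand-rolled fit stands in for what a statistics package would do:

```python
import numpy as np

X = np.column_stack([np.ones(6), np.arange(6.0)])     # intercept + single covariate
y = np.array([1.0, 2, 4, 7, 12, 21])                  # made-up counts

beta, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)  # crude start: OLS on log counts
for _ in range(25):                                   # IRLS / Fisher scoring, log link
    mu = np.exp(X @ beta)
    z = X @ beta + (y - mu) / mu                      # working response
    XtW = X.T * mu                                    # Poisson working weights are mu
    beta = np.linalg.solve(XtW @ X, XtW @ z)

mu = np.exp(X @ beta)
# Pearson dispersion: values well above 1 indicate overdispersion, pointing to
# quasi-Poisson or negative binomial alternatives.
dispersion = float(np.sum((y - mu) ** 2 / mu) / (len(y) - X.shape[1]))
print(np.round(beta, 3), round(dispersion, 3))
```

Here the counts were generated to be nearly log-linear, so the dispersion comes out well below 1; real count data often give values far above 1.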
[Understanding logistic regression].
El Sanharawi, M; Naudet, F
2013-10-01
Logistic regression is one of the most common multivariate analysis models used in epidemiology. It allows the measurement of the association between the occurrence of an event (qualitative dependent variable) and factors that may influence it (explanatory variables). The choice of explanatory variables that should be included in the logistic regression model is based on prior knowledge of the disease physiopathology and on the statistical association between the variable and the event, as measured by the odds ratio. The main steps of the procedure, the conditions of application, and the essential tools for its interpretation are discussed concisely. We also discuss the importance of the choice of variables that must be included and retained in the regression model in order to avoid the omission of important confounding factors. Finally, by way of illustration, we provide an example from the literature, which should help the reader test his or her knowledge.
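The odds-ratio interpretation mentioned above falls directly out of the fitted coefficient: exp(β) for a binary exposure is the odds ratio. A minimal Newton-method fit on a made-up eight-subject dataset (illustrative only):

```python
import numpy as np

exposure = np.array([0, 0, 0, 0, 1, 1, 1, 1])
event    = np.array([0, 0, 0, 1, 0, 1, 1, 1])   # 1/4 events unexposed, 3/4 exposed
X = np.column_stack([np.ones(8), exposure])

beta = np.zeros(2)
for _ in range(25):                              # Newton-Raphson for the logistic likelihood
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    grad = X.T @ (event - p)
    hess = X.T @ (X * (p * (1 - p))[:, None])
    beta += np.linalg.solve(hess, grad)

odds_ratio = float(np.exp(beta[1]))
print(round(odds_ratio, 2))  # 9.0 = (3/1) / (1/3), exposed odds over unexposed odds
```

With a single binary exposure the model is saturated, so the fitted odds ratio exactly matches the 2x2-table calculation; adding confounders to X is what makes the regression version worthwhile.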
Li, Lingling; Vollmer, William M; Butler, Melissa G; Wu, Pingsheng; Kharbanda, Elyse O; Wu, Ann Chen
2014-03-01
We compared the impact of 3 confounding adjustment procedures-covariate-adjusted regression, propensity score regression, and high-dimensional propensity score regression-to assess the effects of selected asthma controller medication use (leukotriene antagonists and inhaled corticosteroids) on the following 4 asthma-related adverse outcomes: emergency department visits, hospitalizations, oral corticosteroid use, and the composite outcome of these. We examined a cohort of 24,680 new users who were 4-17 years of age at the incident dispensing from the Population-Based Effectiveness in Asthma and Lung Diseases (PEAL) Network of 5 commercial health plans and TennCare, the Tennessee Medicaid program, during the period January 1, 2004, to December 31, 2010. The 3 methods yielded similar results, indicating that pediatric patients treated with leukotriene antagonists were no more likely than those treated with inhaled corticosteroids to experience adverse outcomes. Children in the TennCare population who had a diagnosis of allergic rhinitis and who then initiated the use of leukotriene antagonists were less likely to experience an asthma-related emergency department visit. A plausible explanation is that our data set is large enough that the 2 advanced propensity score-based analyses do not have advantages over the traditional covariate-adjusted regression approach. We provide important observations on how to correctly apply the methods in observational data analysis and suggest statistical research areas that need more work to guide implementation.
Practical Session: Logistic Regression
NASA Astrophysics Data System (ADS)
Clausel, M.; Grégoire, G.
2014-12-01
An exercise is proposed to illustrate logistic regression. One investigates the different risk factors in the occurrence of coronary heart disease. It has been proposed in Chapter 5 of the book by D.G. Kleinbaum and M. Klein, "Logistic Regression", Statistics for Biology and Health, Springer Science+Business Media, LLC (2010), and also by D. Chessel and A.B. Dufour in Lyon 1 (see Sect. 6 of http://pbil.univ-lyon1.fr/R/pdf/tdr341.pdf). This example is based on data given in the file evans.txt coming from http://www.sph.emory.edu/dkleinb/logreg3.htm#data.
Relationship between Multiple Regression and Selected Multivariable Methods.
ERIC Educational Resources Information Center
Schumacker, Randall E.
The relationship of multiple linear regression to various multivariate statistical techniques is discussed. The importance of the standardized partial regression coefficient (beta weight) in multiple linear regression as it is applied in path, factor, LISREL, and discriminant analyses is emphasized. The multivariate methods discussed in this paper…
Ridge Regression: A Regression Procedure for Analyzing correlated Independent Variables
ERIC Educational Resources Information Center
Rakow, Ernest A.
1978-01-01
Ridge regression is a technique used to ameliorate the problem of highly correlated independent variables in multiple regression analysis. This paper explains the fundamentals of ridge regression and illustrates its use. (JKS)
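The mechanics are a one-line change to ordinary least squares: add k to the diagonal of X'X before solving. A sketch on two nearly collinear predictors (made-up data; k = 1 is an arbitrary illustrative penalty, not a tuned value):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)        # nearly collinear with x1
X = np.column_stack([x1, x2])
y = x1 + x2 + 0.1 * rng.normal(size=n)     # true coefficients (1, 1)

def ridge(X, y, k):
    """Closed-form ridge estimate (X'X + kI)^{-1} X'y."""
    return np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y)

ols    = ridge(X, y, 0.0)   # k = 0 reduces to ordinary least squares
shrunk = ridge(X, y, 1.0)
print(np.round(ols, 1), np.round(shrunk, 2))  # ridge pulls the pair toward the stable (1, 1)
```

Collinearity makes the OLS pair of coefficients individually unstable even though their sum is well determined; the penalty resolves that indeterminacy at the cost of a small bias.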
Modern Regression Discontinuity Analysis
ERIC Educational Resources Information Center
Bloom, Howard S.
2012-01-01
This article provides a detailed discussion of the theory and practice of modern regression discontinuity (RD) analysis for estimating the effects of interventions or treatments. Part 1 briefly chronicles the history of RD analysis and summarizes its past applications. Part 2 explains how in theory an RD analysis can identify an average effect of…
Multiple linear regression analysis
NASA Technical Reports Server (NTRS)
Edwards, T. R.
1980-01-01
Program rapidly selects best-suited set of coefficients. User supplies only vectors of independent and dependent data and specifies confidence level required. Program uses stepwise statistical procedure for relating minimal set of variables to set of observations; final regression contains only most statistically significant coefficients. Program is written in FORTRAN IV for batch execution and has been implemented on NOVA 1200.
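The stepwise idea can be sketched in Python as a greedy forward selection on the residual sum of squares with an ad hoc stopping rule (the original program uses formal significance tests instead, and the data here are invented):

```python
import numpy as np

def rss(X, y):
    """Residual sum of squares of the least-squares fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

def forward_select(candidates, y, tol=0.1):
    """Greedily add the column that most reduces RSS; stop when the
    relative improvement drops below tol."""
    n = len(y)
    chosen, X = [], np.ones((n, 1))                  # start from intercept only
    current = rss(X, y)
    while len(chosen) < candidates.shape[1]:
        score, j = min((rss(np.column_stack([X, candidates[:, j]]), y), j)
                       for j in range(candidates.shape[1]) if j not in chosen)
        if current - score < tol * current:
            break
        chosen.append(j)
        X = np.column_stack([X, candidates[:, j]])
        current = score
    return chosen

rng = np.random.default_rng(3)
Z = rng.normal(size=(100, 5))
y = 2.0 * Z[:, 1] - 1.0 * Z[:, 3] + 0.1 * rng.normal(size=100)
print(sorted(forward_select(Z, y)))  # the informative columns 1 and 3 should dominate
```

The final model keeps only the columns that earned their place, mirroring the program's goal of a minimal set of statistically significant coefficients.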
Explorations in Statistics: Regression
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2011-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This seventh installment of "Explorations in Statistics" explores regression, a technique that estimates the nature of the relationship between two things for which we may only surmise a mechanistic or predictive…
Regression Verification Using Impact Summaries
NASA Technical Reports Server (NTRS)
Backes, John; Person, Suzette J.; Rungta, Neha; Thachuk, Oksana
2013-01-01
versions [19]. These techniques compare two programs with a large degree of syntactic similarity to prove that portions of one program version are equivalent to the other. Regression verification can be used for guaranteeing backward compatibility, and for showing behavioral equivalence in programs with syntactic differences, e.g., when a program is refactored to improve its performance, maintainability, or readability. Existing regression verification techniques leverage similarities between program versions by using abstraction and decomposition techniques to improve scalability of the analysis [10, 12, 19]. The abstractions and decomposition in these techniques, e.g., summaries of unchanged code [12] or semantically equivalent methods [19], compute an over-approximation of the program behaviors. The equivalence checking results of these techniques are sound but not complete: they may characterize programs as not functionally equivalent when, in fact, they are equivalent. In this work we describe a novel approach that leverages the impact of the differences between two programs for scaling regression verification. We partition the program behaviors of each version into (a) behaviors impacted by the changes and (b) behaviors not impacted (unimpacted) by the changes. Only the impacted program behaviors are used during equivalence checking. We then prove that checking equivalence of the impacted program behaviors is equivalent to checking equivalence of all program behaviors for a given depth bound. In this work we use symbolic execution to generate the program behaviors and leverage control- and data-dependence information to facilitate the partitioning of program behaviors. The impacted program behaviors are termed impact summaries. The dependence analyses that facilitate the generation of the impact summaries could, we believe, be used in conjunction with other abstraction- and decomposition-based approaches [10, 12] as a complementary reduction technique.
Monson, Candice M; Macdonald, Alexandra; Vorstenbosch, Valerie; Shnaider, Philippe; Goldstein, Elizabeth S R; Ferrier-Auerbach, Amanda G; Mocciola, Katharine E
2012-10-01
The current study sought to determine if different spheres of social adjustment, social and leisure, family, and work and income improved immediately following a course of cognitive processing therapy (CPT) when compared with those on a waiting list in a sample of 46 U.S. veterans diagnosed with posttraumatic stress disorder (PTSD). We also sought to determine whether changes in different PTSD symptom clusters were associated with changes in these spheres of social adjustment. Overall social adjustment, extended family relationships, and housework completion significantly improved in the CPT versus waiting-list condition, η² = .08 to .11. Hierarchical multiple regression analyses revealed that improvements in total clinician-rated PTSD symptoms were associated with improvements in overall social and housework adjustment. When changes in reexperiencing, avoidance, emotional numbing, and hyperarousal were all in the model accounting for changes in total social adjustment, improvements in emotional numbing symptoms were associated with improvements in overall social, extended family, and housework adjustment (β = .38 to .55). In addition, improvements in avoidance symptoms were associated with improvements in housework adjustment (β = .30), but associated with declines in extended family adjustment (β = -.34). Results suggest that it is important to consider the extent to which PTSD treatments effectively reduce specific types of symptoms, particularly emotional numbing and avoidance, to generally improve social adjustment.
Chen, X; Liu, M; Li, D
2000-09-01
A sample of children, initially 12 years old, in the People's Republic of China participated in this 2-year longitudinal study. Data on parental warmth, control, and indulgence were collected from children's self-reports. Information concerning social, academic, and psychological adjustment was obtained from multiple sources. The results indicated that parenting styles might be a function of child gender and change with age. Regression analyses revealed that parenting styles of fathers and mothers predicted different outcomes. Whereas maternal warmth had significant contributions to the prediction of emotional adjustment, paternal warmth significantly predicted later social and school achievement. It was also found that paternal, but not maternal, indulgence significantly predicted children's adjustment difficulties. The contributions of the parenting variables might be moderated by the child's initial conditions.
Commonality Analysis for the Regression Case.
ERIC Educational Resources Information Center
Murthy, Kavita
Commonality analysis is a procedure for decomposing the coefficient of determination (R superscript 2) in multiple regression analyses into the percent of variance in the dependent variable associated with each independent variable uniquely, and the proportion of explained variance associated with the common effects of predictors in various…
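For the two-predictor case, the decomposition described in this abstract can be sketched numerically: the unique contribution of each predictor is the drop in R² when it is removed, and the common component is whatever remains. The data below are synthetic and the variable names are illustrative, not from the cited work.

```python
import numpy as np

def r_squared(X, y):
    # R^2 of an OLS fit of y on X (an intercept column is added here).
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

# Toy data: y depends on two correlated predictors.
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = 0.5 * x1 + rng.normal(size=200)
y = x1 + x2 + rng.normal(size=200)

r2_full = r_squared(np.column_stack([x1, x2]), y)
r2_x1 = r_squared(x1[:, None], y)
r2_x2 = r_squared(x2[:, None], y)

unique_x1 = r2_full - r2_x2                # variance explained by x1 alone
unique_x2 = r2_full - r2_x1                # variance explained by x2 alone
common = r2_full - unique_x1 - unique_x2   # variance shared by x1 and x2
```

By construction the unique and common components sum back to the full-model R²; with more predictors the same logic runs over all subsets of the predictor set.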
Calculating a Stepwise Ridge Regression.
ERIC Educational Resources Information Center
Morris, John D.
1986-01-01
Although methods for using ordinary least squares regression computer programs to calculate a ridge regression are available, the calculation of a stepwise ridge regression requires a special purpose algorithm and computer program. The correct stepwise ridge regression procedure is given, and a parallel FORTRAN computer program is described.…
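The trick the abstract alludes to, computing a ridge estimate with an ordinary least squares routine, is the standard data-augmentation device: append sqrt(k) times the identity to the design matrix and zeros to the response. A minimal sketch with synthetic data (the penalty value k is arbitrary here):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 50, 4, 2.0                # k is the ridge penalty
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(size=n)

# Closed-form ridge estimate: (X'X + kI)^{-1} X'y
beta_ridge = np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

# Identical estimate from an ordinary least-squares routine, by
# augmenting the data with sqrt(k)*I rows and zero responses.
X_aug = np.vstack([X, np.sqrt(k) * np.eye(p)])
y_aug = np.concatenate([y, np.zeros(p)])
beta_ols, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)
```

The augmented OLS problem minimizes ||y - Xb||² + k||b||², which is exactly the ridge criterion, so any OLS program can produce ridge estimates; a stepwise version, as the article notes, still needs a dedicated algorithm.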
Orthogonal Regression: A Teaching Perspective
ERIC Educational Resources Information Center
Carr, James R.
2012-01-01
A well-known approach to linear least squares regression is that which involves minimizing the sum of squared orthogonal projections of data points onto the best fit line. This form of regression is known as orthogonal regression, and the linear model that it yields is known as the major axis. A similar method, reduced major axis regression, is…
Steganalysis using logistic regression
NASA Astrophysics Data System (ADS)
Lubenko, Ivans; Ker, Andrew D.
2011-02-01
We advocate Logistic Regression (LR) as an alternative to the Support Vector Machine (SVM) classifiers commonly used in steganalysis. LR offers more information than traditional SVM methods: it estimates class probabilities as well as providing a simple classification, and it can be adapted more easily and efficiently for multiclass problems. Like SVM, LR can be kernelised for nonlinear classification, and it shows comparable classification accuracy to SVM methods. This work is a case study comparing the accuracy and speed of SVM and LR classifiers in the detection of LSB Matching and other related spatial-domain image steganography, through the state-of-the-art 686-dimensional SPAM feature set, in three image sets.
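The probability output that distinguishes LR from a plain SVM margin can be shown with a minimal gradient-descent fit; this is a generic sketch on synthetic one-dimensional data, not the 686-dimensional SPAM features used in the paper.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=2000):
    # Plain gradient descent on the average logistic log-loss.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(2)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
# Labels drawn from a known logistic model (intercept 0, slope 3).
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-3.0 * X[:, 1]))).astype(float)

w = fit_logistic(X, y)
# Class probabilities in (0, 1), not just a signed decision value.
probs = 1.0 / (1.0 + np.exp(-X @ w))
```

Thresholding `probs` at 0.5 gives the hard classification; keeping the probabilities is what makes LR convenient for multiclass extensions and for ranking suspect images.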
Kramer, S.
1996-12-31
In many real-world domains the task of machine learning algorithms is to learn a theory for predicting numerical values. In particular, several standard test domains used in Inductive Logic Programming (ILP) are concerned with predicting numerical values from examples and from relational and mostly non-determinate background knowledge. However, so far only one ILP algorithm, a covering algorithm called FORS, can predict numbers and cope with non-determinate background knowledge. In this paper we present Structural Regression Trees (SRT), a new algorithm which can be applied to the above class of problems. SRT integrates the statistical method of regression trees into ILP. It constructs a tree containing a literal (an atomic formula or its negation) or a conjunction of literals in each node, and assigns a numerical value to each leaf. SRT provides more comprehensible results than purely statistical methods, and can be applied to a class of problems most other ILP systems cannot handle. Experiments in several real-world domains demonstrate that the approach is competitive with existing methods, indicating that the advantages are not at the expense of predictive accuracy.
Pisanti, Renato; Lombardo, Caterina; Luszczynska, Aleksandra; Poli, Luca; Bennardi, Linda; Giordanengo, Luca; Berloco, Pasquale Bartolomeo; Violani, Cristiano
2016-11-09
This study examined the relations between appraisal of transplant-related stressors, coping, and adjustment dimensions following kidney transplantation (KT). Two models were tested: (1) the main effects model, proposing that stress appraisal and coping strategies are directly associated with adjustment dimensions; and (2) the moderating model of stress, proposing that each coping strategy interacts with stress appraisal. Importantly, there is a lack of research examining the two models simultaneously among recipients of solid organ transplantation. A total of 174 KT recipients completed the questionnaires. Predictors of post-transplant adjustment included appraisal of transplant-related stressors and coping strategies (task-, emotion-, and avoidance-focused). Adjustment dimensions were psychological distress, worries about the transplant, feelings of guilt, fear of disclosure of transplant, adherence, and responsibility for the functioning of the new organ. The main and moderating effects were tested with regression analyses. Appraisal of transplant-related stressors and emotion-oriented coping were related to all adjustment dimensions, except adherence and responsibility. Task-oriented coping was positively related to responsibility. Avoidance-oriented coping was negatively correlated with adherence. Only 1 out of 18 hypothesized interaction terms was significant, yielding a synergistic interaction between appraisal of transplant-related stressors and emotion-oriented coping on the sense of guilt. The findings have the potential to inform interventions promoting psychosocial adjustment among KT recipients.
NASA Technical Reports Server (NTRS)
Kuhl, Mark R.
1990-01-01
Current navigation requirements depend on a geometric dilution of precision (GDOP) criterion. As long as the GDOP stays below a specific value, navigation requirements are met. The GDOP will exceed the specified value when the measurement geometry becomes too collinear. A new signal processing technique, called Ridge Regression Processing, can reduce the effects of nearly collinear measurement geometry, thereby reducing the inflation of the measurement errors. It is shown that the Ridge signal processor gives a consistently better mean squared error (MSE) in position than the ordinary least-squares (OLS) estimator. The applicability of this technique is currently being investigated to improve the following areas: receiver autonomous integrity monitoring (RAIM), coverage requirements, availability requirements, and precision approaches.
Weine, Stevan Merrill; Ware, Norma; Tugenberg, Toni; Hakizimana, Leonce; Dahnweih, Gonwo; Currie, Madeleine; Wagner, Maureen; Levin, Elise
2013-01-01
Objectives: The purpose of this mixed-method study was to characterize the patterns of psychosocial adjustment among adolescent African refugees in U.S. resettlement. Methods: A purposive sample of 73 recently resettled refugee adolescents from Burundi and Liberia was followed for two years, and qualitative and quantitative data were analyzed using a mixed-methods exploratory design. Results: Protective resources identified were the family and community capacities that can promote youth psychosocial adjustment through: 1) Finances for necessities; 2) English proficiency; 3) Social support networks; 4) Engaged parenting; 5) Family cohesion; 6) Cultural adherence and guidance; 7) Educational support; and 8) Faith and religious involvement. The researchers first inductively identified 19 thriving, 29 managing, and 25 struggling youths based on review of cases. Univariate analyses then indicated significant associations with country of origin, parental education, and parental employment. Multiple regressions indicated that better psychosocial adjustment was associated with Liberians and living with both parents. Logistic regressions showed that thriving was associated with Liberians and higher parental education, managing with more parental education, and struggling with Burundians and living parents. Qualitative analysis identified how these factors were proxy indicators for protective resources in families and communities. Conclusion: These three trajectories of psychosocial adjustment and six domains of protective resources could assist in developing targeted prevention programs and policies for refugee youth. Further rigorous longitudinal mixed-methods studies of adolescent refugees in U.S. resettlement are needed. PMID:24205467
Physical Discipline and Children's Adjustment: Cultural Normativeness as a Moderator
Lansford, Jennifer E.; Chang, Lei; Dodge, Kenneth A.; Malone, Patrick S.; Oburu, Paul; Palmérus, Kerstin; Bacchini, Dario; Pastorelli, Concetta; Bombi, Anna Silvia; Zelli, Arnaldo; Tapanya, Sombat; Chaudhary, Nandita; Deater-Deckard, Kirby; Manke, Beth; Quinn, Naomi
2009-01-01
Interviews were conducted with 336 mother-child dyads (children's ages ranged from 6 to 17 years; mothers' ages ranged from 20 to 59 years) in China, India, Italy, Kenya, the Philippines, and Thailand to examine whether normativeness of physical discipline moderates the link between mothers' use of physical discipline and children's adjustment. Multilevel regression analyses revealed that physical discipline was less strongly associated with adverse child outcomes in conditions of greater perceived normativeness, but physical discipline was also associated with more adverse outcomes regardless of its perceived normativeness. Countries with the lowest use of physical discipline showed the strongest association between mothers' use and children's behavior problems, but in all countries higher use of physical discipline was associated with more aggression and anxiety. PMID:16274437
Ahearn, Elizabeth A.
2010-01-01
Multiple linear regression equations for determining flow-duration statistics were developed to estimate selected flow exceedances, ranging from 25 to 99 percent, for six 'bioperiods' in Connecticut: Salmonid Spawning (November), Overwinter (December-February), Habitat Forming (March-April), Clupeid Spawning (May), Resident Spawning (June), and Rearing and Growth (July-October). Regression equations also were developed to estimate the 25- and 99-percent flow exceedances without reference to a bioperiod. In total, 32 equations were developed. The predictive equations were based on regression analyses relating flow statistics from streamgages to GIS-determined basin and climatic characteristics for the drainage areas of those streamgages. Thirty-nine streamgages (and an additional 6 short-term streamgages and 28 partial-record sites for the non-bioperiod 99-percent exceedance) in Connecticut and adjacent areas of neighboring States were used in the regression analysis. Weighted least squares regression analysis was used to determine the predictive equations; weights were assigned based on record length. The basin characteristics used as explanatory variables in the equations are drainage area, percentage of area with coarse-grained stratified deposits, percentage of area with wetlands, mean monthly precipitation (November), mean seasonal precipitation (December, January, and February), and mean basin elevation. Standard errors of estimate of the 32 equations ranged from 10.7 to 156 percent, with medians of 19.2 and 55.4 percent for the 25- and 99-percent exceedances, respectively. Regression equations to estimate high and median flows (25- to 75-percent exceedances) are better predictors (smaller variability of the residual values around the regression line) than the equations to estimate low flows (greater than 75-percent exceedance). The Habitat Forming (March-April) bioperiod had the smallest standard errors of estimate, ranging from 10.7 to 20.9 percent.
Orbe, Jesus; Ferreira, Eva; Núñez-Antón, Vicente
2003-01-01
In this work we study the effect of several covariates on a censored response variable with unknown probability distribution. A semiparametric model is proposed to consider situations where the functional form of the effect of one or more covariates is unknown, as is the case in the application presented in this work. We provide its estimation procedure and, in addition, a bootstrap technique to make inference on the parameters. A simulation study has been carried out to show the good performance of the proposed estimation process and to analyse the effect of the censorship. Finally, we present the results when the methodology is applied to AIDS diagnosed patients.
Morris, Amanda Sheffield; John, Aesha; Halliburton, Amy L.; Morris, Michael D. S.; Robinson, Lara R.; Myers, Sonya S.; Aucoin, Katherine J.; Keyes, Angela W.; Terranova, Andrew
2013-01-01
This study examined the role of effortful control, behavior problems, and peer relations in the academic adjustment of 74 kindergarten children from primarily low-income families using a short-term longitudinal design. Teachers completed standardized measures of children’s effortful control, internalizing and externalizing problems, school readiness, and academic skills. Children participated in a sociometric interview to assess peer relations. Research Findings: Correlational analyses indicate that children’s effortful control, behavior problems in school, and peer relations are associated with academic adjustment variables at the end of the school year, including school readiness, reading skills, and math skills. Results of regression analyses indicate that household income and children’s effortful control primarily account for variation in children’s academic adjustment. The associations between children’s effortful control and academic adjustment did not vary across sex of the child or ethnicity. Mediational analyses indicate an indirect effect of effortful control on school readiness, through children’s internalizing problems. Practice or Policy: Effortful control emerged as a strong predictor of academic adjustment among kindergarten children from low-income families. Strategies for enhancing effortful control and school readiness among low-income children are discussed. PMID:24163572
Strategies for Detecting Outliers in Regression Analysis: An Introductory Primer.
ERIC Educational Resources Information Center
Evans, Victoria P.
Outliers are extreme data points that have the potential to influence statistical analyses. Outlier identification is important to researchers using regression analysis because outliers can influence the model used to such an extent that they seriously distort the conclusions drawn from the data. The effects of outliers on regression analysis are…
Streamflow forecasting using functional regression
NASA Astrophysics Data System (ADS)
Masselot, Pierre; Dabo-Niang, Sophie; Chebana, Fateh; Ouarda, Taha B. M. J.
2016-07-01
Streamflow, as a natural phenomenon, is continuous in time, and so are the meteorological variables which influence its variability. In practice, it can be of interest to forecast the whole flow curve instead of points (daily or hourly). To this end, this paper introduces functional linear models and adapts them to hydrological forecasting. More precisely, functional linear models are regression models based on curves instead of single values. They allow one to consider the whole process instead of a limited number of time points or features. We apply these models to analyse the flow volume and the whole streamflow curve during a given period by using precipitation curves. The functional model is shown to lead to encouraging results. The potential of functional linear models to detect special features that would have been hard to see otherwise is pointed out. The functional model is also compared to the artificial neural network approach, and the advantages and disadvantages of both models are discussed. Finally, future research directions involving the functional model in hydrology are presented.
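A scalar-on-function regression of the kind the abstract describes can be sketched by discretizing the curves on a grid and penalizing the coefficient function; the precipitation-like curves and the coefficient shape below are synthetic stand-ins, not the paper's data or estimator.

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0, 1, 50)            # common time grid for the curves
beta_true = np.sin(np.pi * t)        # functional coefficient beta(t)

# 80 predictor curves and scalar responses y_i = integral x_i(t) beta(t) dt.
Xc = rng.normal(size=(80, 50))
dt = t[1] - t[0]
y = Xc @ beta_true * dt + 0.01 * rng.normal(size=80)

# Ridge-penalized discretization of the functional linear model:
# the Riemann sum turns the integral into an ordinary regression.
lam = 1e-3
A = Xc * dt
beta_hat = np.linalg.solve(A.T @ A + lam * np.eye(len(t)), A.T @ y)
```

The recovered `beta_hat` tracks the shape of `beta_true`, which is the point of the functional view: the whole weighting curve over time is estimated, not a handful of lagged features.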
Survival analysis and Cox regression.
Benítez-Parejo, N; Rodríguez del Águila, M M; Pérez-Vicente, S
2011-01-01
The data provided by clinical trials are often expressed in terms of survival. The analysis of survival comprises a series of statistical techniques in which the measurements analysed represent the time elapsed between a given exposure and the outcome of a certain event. Despite the name of these techniques, the outcome in question does not necessarily have to be survival or death; it may be healing versus no healing, relief versus pain, complication versus no complication, relapse versus no relapse, etc. The present article describes the analysis of survival from both a descriptive perspective, based on the Kaplan-Meier estimation method, and in terms of bivariate comparisons using the log-rank statistic. Likewise, a description is provided of the Cox regression models for the study of risk factors or covariables associated with the probability of survival. These models are defined in both simple and multiple forms, and a description is provided of how they are calculated and how the postulates for application are checked, accompanied by illustrative examples using the free software application R.
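The Kaplan-Meier product-limit estimator at the heart of the descriptive analysis is short enough to write out directly; this is a generic sketch with made-up follow-up times, not an excerpt from the article's R examples.

```python
import numpy as np

def kaplan_meier(times, events):
    # times: follow-up times; events: 1 = event observed, 0 = censored.
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    surv = 1.0
    curve = []
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)          # subjects still under observation
        d = np.sum((times == t) & (events == 1))
        surv *= 1.0 - d / at_risk             # product-limit update
        curve.append((t, surv))
    return curve

# Five subjects: events at times 2, 3 and 5; censored at 4 and 6.
curve = kaplan_meier([2, 3, 4, 5, 6], [1, 1, 0, 1, 0])
```

Censored subjects leave the risk set without forcing a drop in the curve, which is exactly what distinguishes this estimator from a naive empirical survival fraction.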
Ingoldsby, Erin M; Kohl, Gwynne O; McMahon, Robert J; Lengua, Liliana
2006-10-01
The present study investigated patterns in the development of conduct problems (CP), depressive symptoms, and their co-occurrence, and relations to adjustment problems, over the transition from late childhood to early adolescence. Rates of depressive symptoms and CP during this developmental period vary by gender; yet, few studies involving non-clinical samples have examined co-occurring problems and adjustment outcomes across boys and girls. This study investigates the manifestation and change in CP and depressive symptom patterns in a large, multisite, gender- and ethnically-diverse sample of 431 youth from 5th to 7th grade. Indicators of CP, depressive symptoms, their co-occurrence, and adjustment outcomes were created from multiple reporters and measures. Hypotheses regarding gender differences were tested utilizing both categorical (i.e., elevated symptom groups) and continuous analyses (i.e., regressions predicting symptomatology and adjustment outcomes). Results were partially supportive of the dual failure model (Capaldi, 1991, 1992), with youth with co-occurring problems in 5th grade demonstrating significantly lower academic adjustment and social competence two years later. Both depressive symptoms and CP were risk factors for multiple negative adjustment outcomes. Co-occurring symptomatology and CP demonstrated more stability and was associated with more severe adjustment problems than depressive symptoms over time. Categorical analyses suggested that, in terms of adjustment problems, youth with co-occurring symptomatology were generally no worse off than those with CP-alone, and those with depressive symptoms-alone were similar over time to those showing no symptomatology at all. Few gender differences were noted in the relations among CP, depressive symptoms, and adjustment over time.
Evaluating differential effects using regression interactions and regression mixture models
Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung
2015-01-01
Research increasingly emphasizes understanding differential effects. This paper focuses on understanding regression mixture models, relatively new statistical methods for assessing differential effects, by comparing results to using an interaction term in linear regression. The research questions which each model answers, their formulation, and their assumptions are compared using Monte Carlo simulations and real data analysis. The capabilities of regression mixture models are described, and specific issues to be addressed when conducting regression mixtures are proposed. The paper aims to clarify the role that regression mixtures can take in the estimation of differential effects and to increase awareness of the benefits and potential pitfalls of this approach. Regression mixture models are shown to be a potentially effective exploratory method for finding differential effects when these effects can be defined by a small number of classes of respondents who share a typical relationship between a predictor and an outcome. It is also shown that the comparison between regression mixture models and interactions becomes substantially more complex as the number of classes increases. It is argued that regression interactions are well suited for direct tests of specific hypotheses about differential effects, and regression mixtures provide a useful approach for exploring effect heterogeneity given adequate samples and study design. PMID:26556903
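The interaction-term baseline that regression mixtures are compared against is straightforward to set up: a product term between the predictor and a grouping variable lets the slope differ by group. The data and coefficient values below are synthetic, chosen only to make the differential effect visible.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=n)
g = rng.integers(0, 2, size=n).astype(float)   # binary grouping variable
# The effect of x on y differs by group: slope 1.0 when g=0, 3.0 when g=1.
y = 0.5 + 1.0 * x + 0.3 * g + 2.0 * x * g + 0.1 * rng.normal(size=n)

# Interaction model: y ~ 1 + x + g + x:g, fit by ordinary least squares.
X = np.column_stack([np.ones(n), x, g, x * g])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[3] estimates the differential effect (the slope difference).
```

When the moderator `g` is observed, this direct test is the natural tool; regression mixtures instead infer latent classes with differing slopes, which is why the paper frames them as exploratory.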
Hussey, Michael A; Koch, Gary G; Preisser, John S; Saville, Benjamin R
2016-01-01
Time-to-event or dichotomous outcomes in randomized clinical trials are often analysed using the Cox proportional hazards model or conditional logistic regression, respectively, to obtain covariate-adjusted log hazard (or odds) ratios. Nonparametric Randomization-Based Analysis of Covariance (NPANCOVA) can be applied to unadjusted log hazard (or odds) ratios estimated from a model containing treatment as the only explanatory variable. The resulting adjusted estimates are stratified population-averaged treatment effects; they require only a valid randomization to the two treatment groups and avoid key modeling assumptions (e.g., proportional hazards in the case of a Cox model) for the adjustment variables. The methodology has application in the regulatory environment, where such assumptions cannot be verified a priori. Application of the methodology is illustrated through three examples on real data from two randomized trials.
ERIC Educational Resources Information Center
Van Galen, Jane, Ed.; And Others
1992-01-01
This theme issue of the serial "Educational Foundations" contains four articles devoted to the topic of "Sociopolitical Analyses." In "An Interview with Peter L. McLaren," Mary Leach presented the views of Peter L. McLaren on topics of local and national discourses, values, and the politics of difference. Landon E.…
Error bounds in cascading regressions
Karlinger, M.R.; Troutman, B.M.
1985-01-01
Cascading regressions is a technique for predicting a value of a dependent variable when no paired measurements exist to perform a standard regression analysis. Biases in coefficients of a cascaded-regression line, as well as error variance of points about the line, are functions of the correlation coefficient between the dependent and independent variables. Although this correlation cannot be computed because of the lack of paired data, bounds can be placed on errors through the required properties of the correlation coefficient. The potential mean-squared error of a cascaded-regression prediction can be large, as illustrated through an example using geomorphologic data. © 1985 Plenum Publishing Corporation.
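The basic cascade can be sketched as chaining two slopes through an intermediate variable: with paired (x, z) and (x, y) samples but no usable (y, z) pairs, the z-on-y slope is approximated by the product of the z-on-x and x-on-y slopes. This is a synthetic illustration of the idea, not the paper's geomorphologic example, and it omits the error-bound analysis.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
x = rng.normal(size=n)
y = 2.0 * x + 0.2 * rng.normal(size=n)   # available as (x, y) pairs
z = 3.0 * x + 0.2 * rng.normal(size=n)   # available as (x, z) pairs

def slope(u, v):
    # OLS slope of v regressed on u.
    return np.cov(u, v)[0, 1] / np.var(u, ddof=1)

# Cascade: the z-on-y slope is built from the two available regressions,
# z on x and x on y, without ever using (y, z) pairs jointly.
b_zx = slope(x, z)
b_xy = slope(y, x)
b_zy_cascaded = b_zx * b_xy
```

Here the noise on y and z is independent given x, so the cascade lands close to the direct slope; in general the unknown y-z correlation drives the bias, which is exactly what the article bounds.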
Transfer Learning Based on Logistic Regression
NASA Astrophysics Data System (ADS)
Paul, A.; Rottensteiner, F.; Heipke, C.
2015-08-01
In this paper we address the problem of classification of remote sensing images in the framework of transfer learning with a focus on domain adaptation. The main novel contribution is a method for transductive transfer learning in remote sensing on the basis of logistic regression. Logistic regression is a discriminative probabilistic classifier of low computational complexity, which can deal with multiclass problems. This research area deals with methods that solve problems in which labelled training data sets are assumed to be available only for a source domain, while classification is needed in the target domain with different, yet related characteristics. Classification takes place with a model of weight coefficients for hyperplanes which separate features in the transformed feature space. In terms of logistic regression, our domain adaptation method adjusts the model parameters by iterative labelling of the target test data set. These labelled data features are iteratively added to the current training set which, at the beginning, only contains source features and, simultaneously, a number of source features are deleted from the current training set. Experimental results based on a test series with synthetic and real data constitute a first proof-of-concept of the proposed method.
A tutorial on Bayesian Normal linear regression
NASA Astrophysics Data System (ADS)
Klauenberg, Katy; Wübbeler, Gerd; Mickan, Bodo; Harris, Peter; Elster, Clemens
2015-12-01
Regression is a common task in metrology and often applied to calibrate instruments, evaluate inter-laboratory comparisons or determine fundamental constants, for example. Yet, a regression model cannot be uniquely formulated as a measurement function, and consequently the Guide to the Expression of Uncertainty in Measurement (GUM) and its supplements are not applicable directly. Bayesian inference, however, is well suited to regression tasks, and has the advantage of accounting for additional a priori information, which typically robustifies analyses. Furthermore, it is anticipated that future revisions of the GUM shall also embrace the Bayesian view. Guidance on Bayesian inference for regression tasks is largely lacking in metrology. For linear regression models with Gaussian measurement errors this tutorial gives explicit guidance. Divided into three steps, the tutorial first illustrates how a priori knowledge, which is available from previous experiments, can be translated into prior distributions from a specific class. These prior distributions have the advantage of yielding analytical, closed form results, thus avoiding the need to apply numerical methods such as Markov Chain Monte Carlo. Secondly, formulas for the posterior results are given, explained and illustrated, and software implementations are provided. In the third step, Bayesian tools are used to assess the assumptions behind the suggested approach. These three steps (prior elicitation, posterior calculation, and robustness to prior uncertainty and model adequacy) are critical to Bayesian inference. The general guidance given here for Normal linear regression tasks is accompanied by a simple, but real-world, metrological example. The calibration of a flow device serves as a running example and illustrates the three steps. It is shown that prior knowledge from previous calibrations of the same sonic nozzle enables robust predictions even for extrapolations.
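The "analytical, closed form" posterior the tutorial emphasizes can be illustrated for the simplest conjugate setting: a Gaussian prior on the coefficients with known noise variance. The prior values and data here are invented for illustration; the tutorial's own priors come from previous calibrations of the same device.

```python
import numpy as np

rng = np.random.default_rng(5)
n, sigma2 = 100, 0.25                     # noise variance assumed known
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + np.sqrt(sigma2) * rng.normal(size=n)

# Conjugate Gaussian prior on the coefficients (e.g., from earlier runs).
m0 = np.zeros(2)
S0 = 10.0 * np.eye(2)

# Closed-form posterior; no Markov chain Monte Carlo is needed.
Sn = np.linalg.inv(np.linalg.inv(S0) + X.T @ X / sigma2)
mn = Sn @ (np.linalg.inv(S0) @ m0 + X.T @ y / sigma2)
```

`mn` is the posterior mean of the coefficients and `Sn` their posterior covariance; predictions at new inputs follow by propagating both through the linear model, which is what makes extrapolation statements possible in the flow-device example.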
Scher, Christine D; Ellwanger, Joel
2009-10-01
This study builds upon current understanding of risk and protective factors for post-disaster adjustment by examining relationships between disaster-related cognitions, three empirically supported risk factors for poorer adjustment (i.e., greater disaster impact, female gender, and racial/ethnic minority status), and three common post-disaster outcomes (i.e., depression, anxiety, and somatic complaints). Participants were 200 students exposed to wildfire disaster. Simultaneous hierarchical regression analyses revealed that, during the acute stress period: (1) disaster-related cognitions in interaction with fire impact and minority status, as well as gender, were related to anxiety symptoms, (2) cognitions were related to depression symptoms, and (3) cognitions in interaction with minority status, as well as fire impact, were related to somatic symptoms. No examined variables predicted symptom change.
Silverthorn, N A; Gekoski, W L
1995-03-01
Results of regression analyses on data from 96 first-year undergraduates indicated that social desirability (Jackson and Marlowe-Crowne Social Desirability Scales), particularly scores on the Jackson scale, is strongly related to scores on measures of adjustment (Student Adaptation to College Questionnaire), self-efficacy (Hale-Fibel Generalized Expectation for Success Scale), and independence (Psychological Separation Inventory) from mother, but not from father. In addition, the Jackson and Marlowe-Crowne scales were highly correlated with each other. Independence from parents and self-efficacy each continued to show a relationship with adjustment to university after social desirability effects were removed. Failure to remove the effects of social desirability from the present measures is likely to lead to inflated estimates of their relation to each other or to other measures.
Inflation Adjustments for Defense Acquisition
2014-10-01
Harmon, Bruce R.; Levine, Daniel B.; Horowitz, Stanley A. (Project Leader). Institute for Defense Analyses, 4850 Mark Center Drive, Alexandria, Virginia 22311-1882. IDA Document D-5112, "Inflation Adjustments for Defense Acquisition." The focus of the study is on aircraft procurement. By way of terminology, "cost index," "price index," and "deflator" are used…
Regression models of sprint, vertical jump, and change of direction performance.
Swinton, Paul A; Lloyd, Ray; Keogh, Justin W L; Agouris, Ioannis; Stewart, Arthur D
2014-07-01
It was the aim of the present study to expand on previous correlation analyses that have attempted to identify factors that influence performance of jumping, sprinting, and changing direction. This was achieved by using a regression approach to obtain linear models that combined anthropometric, strength, and other biomechanical variables. Thirty rugby union players participated in the study (age: 24.2 ± 3.9 years; stature: 181.2 ± 6.6 cm; mass: 94.2 ± 11.1 kg). The athletes' ability to sprint, jump, and change direction was assessed using a 30-m sprint, vertical jump, and 505 agility test, respectively. Regression variables were collected during maximum strength tests (1 repetition maximum [1RM] deadlift and squat) and performance of fast velocity resistance exercises (deadlift and jump squat) using submaximum loads (10-70% 1RM). Force, velocity, power, and rate of force development (RFD) values were measured during fast velocity exercises with the greatest values produced across loads selected for further analysis. Anthropometric data, including lengths, widths, and girths were collected using a 3-dimensional body scanner. Potential regression variables were first identified using correlation analyses. Suitable variables were then regressed using a best subsets approach. Three factor models generally provided the most appropriate balance between explained variance and model complexity. Adjusted R² values of 0.86, 0.82, and 0.67 were obtained for sprint, jump, and change of direction performance, respectively. Anthropometric measurements did not feature in any of the top models because of their strong association with body mass. For each performance measure, variance was best explained by relative maximum strength. Improvements in models were then obtained by including velocity and power values for jumping and sprinting performance, and by including RFD values for change of direction performance.
Logistic Regression: Concept and Application
ERIC Educational Resources Information Center
Cokluk, Omay
2010-01-01
The main focus of logistic regression analysis is classification of individuals in different groups. The aim of the present study is to explain basic concepts and processes of binary logistic regression analysis intended to determine the combination of independent variables which best explain the membership in certain groups called dichotomous…
Precision Efficacy Analysis for Regression.
ERIC Educational Resources Information Center
Brooks, Gordon P.
When multiple linear regression is used to develop a prediction model, sample size must be large enough to ensure stable coefficients. If the derivation sample size is inadequate, the model may not predict well for future subjects. The precision efficacy analysis for regression (PEAR) method uses a cross- validity approach to select sample sizes…
Model building in nonproportional hazard regression.
Rodríguez-Girondo, Mar; Kneib, Thomas; Cadarso-Suárez, Carmen; Abu-Assi, Emad
2013-12-30
Recent developments of statistical methods allow for very flexible modeling of covariates affecting survival times via the hazard rate, including also the inspection of possible time-dependent associations. Despite their immediate appeal in terms of flexibility, these models typically introduce additional difficulties when a subset of covariates and the corresponding modeling alternatives have to be chosen, that is, for building the most suitable model for given data. This is particularly true when potentially time-varying associations are allowed. We propose a piecewise exponential representation of the original survival data to link hazard regression with estimation schemes based on the Poisson likelihood, making recent advances in model building for exponential family regression accessible also in the nonproportional hazard regression context. A two-stage stepwise selection approach, an approach based on doubly penalized likelihood, and a componentwise functional gradient descent approach are adapted to the piecewise exponential regression problem. These three techniques were compared via an intensive simulation study. An application to prognosis after discharge for patients who suffered a myocardial infarction supplements the simulation to demonstrate the pros and cons of the approaches in real data analyses.
An Empirical Study of Eight Nonparametric Tests in Hierarchical Regression.
ERIC Educational Resources Information Center
Harwell, Michael; Serlin, Ronald C.
When normality does not hold, nonparametric tests represent an important data-analytic alternative to parametric tests. However, the use of nonparametric tests in educational research has been limited by the absence of easily performed tests for complex experimental designs and analyses, such as factorial designs and multiple regression analyses,…
Remotely Adjustable Hydraulic Pump
NASA Technical Reports Server (NTRS)
Kouns, H. H.; Gardner, L. D.
1987-01-01
Outlet pressure adjusted to match varying loads. Electrohydraulic servo has positioned sleeve in leftmost position, adjusting outlet pressure to maximum value. Sleeve in equilibrium position, with control land covering control port. For lowest pressure setting, sleeve shifted toward right by increased pressure on sleeve shoulder from servovalve. Pump used in aircraft and robots, where hydraulic actuators repeatedly turned on and off, changing pump load frequently and over wide range.
NASA Technical Reports Server (NTRS)
Ashby, George C., Jr.; Robbins, W. Eugene; Horsley, Lewis A.
1991-01-01
Probe readily positionable in core of uniform flow in hypersonic wind tunnel. Formed of pair of mating cylindrical housings: transducer housing and pitot-tube housing. Pitot tube supported by adjustable wedge fairing attached to top of pitot-tube housing with semicircular foot. Probe adjusted both radially and circumferentially. In addition, pressure-sensing transducer cooled internally by water or other cooling fluid passing through annulus of cooling system.
Weighted triangulation adjustment
Anderson, Walter L.
1969-01-01
The variation of coordinates method is employed to perform a weighted least squares adjustment of horizontal survey networks. Geodetic coordinates are required for each fixed and adjustable station. A preliminary inverse geodetic position computation is made for each observed line. Weights associated with each observation equation for direction, azimuth, and distance are applied in the formation of the normal equations in the least squares adjustment. The number of normal equations that may be solved is twice the number of new stations and less than 150. When the normal equations are solved, shifts are produced at adjustable stations. Previously computed correction factors are applied to the shifts and a most probable geodetic position is found for each adjustable station. Final azimuths and distances are computed. These may be written onto magnetic tape for subsequent computation of state plane or grid coordinates. Input consists of punch cards containing project identification, program options, and position and observation information. Results listed include preliminary and final positions, residuals, observation equations, solution of the normal equations showing magnitudes of shifts, and a plot of each adjusted and fixed station. During processing, data sets containing irrecoverable errors are rejected and the type of error is listed. The computer then resumes processing of additional data sets. Other conditions cause warning errors to be issued, and processing continues with the current data set.
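The normal-equations step described above can be illustrated in miniature. The following is a hedged Python sketch, not the program described in the abstract: it solves a two-parameter weighted least-squares problem by forming and solving the normal equations N x = A'Wb directly; the function name and the toy observation equations are illustrative assumptions.

```python
def weighted_least_squares(A, b, w):
    """Solve (A'WA) x = A'Wb for a 2-parameter linearized model,
    where A holds observation-equation rows, b the observed minus
    computed values, and w the per-observation weights."""
    # Form the normal equations N x = t, N = A'WA (symmetric), t = A'Wb.
    n00 = sum(wi * ai[0] * ai[0] for ai, wi in zip(A, w))
    n01 = sum(wi * ai[0] * ai[1] for ai, wi in zip(A, w))
    n11 = sum(wi * ai[1] * ai[1] for ai, wi in zip(A, w))
    t0 = sum(wi * ai[0] * bi for ai, bi, wi in zip(A, b, w))
    t1 = sum(wi * ai[1] * bi for ai, bi, wi in zip(A, b, w))
    # Solve the 2x2 system by Cramer's rule.
    det = n00 * n11 - n01 * n01
    x0 = (n11 * t0 - n01 * t1) / det
    x1 = (n00 * t1 - n01 * t0) / det
    return x0, x1
```

In a real triangulation adjustment the parameters are the coordinate shifts of the new stations and A comes from linearizing the direction, azimuth, and distance observations; the algebra above is the same, only larger.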
Practical Session: Simple Linear Regression
NASA Astrophysics Data System (ADS)
Clausel, M.; Grégoire, G.
2014-12-01
Two exercises are proposed to illustrate simple linear regression. The first one is based on Galton's famous data set on heredity. We use the lm R command and get coefficient estimates, the residual standard error, R², residuals… In the second example, devoted to data related to the vapor tension of mercury, we fit a simple linear regression, predict values, and look ahead to multiple linear regression. This practical session is an excerpt from practical exercises proposed by A. Dalalyan at ENPC (see Exercises 1 and 2 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_4.pdf).
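A minimal pure-Python analogue of the `lm(y ~ x)` fit used in the first exercise can be written from the closed-form formulas (function name and data below are illustrative, not from the session itself):

```python
def simple_lm(x, y):
    """Closed-form simple linear regression: returns (intercept, slope, R^2),
    the quantities a call like lm(y ~ x) reports."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    slope = sxy / sxx                      # least-squares slope
    intercept = ybar - slope * xbar        # line passes through (xbar, ybar)
    ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot             # coefficient of determination
    return intercept, slope, r2
```

For Galton-style parent/child heights the slope below 1 is the original "regression to the mean" effect that gave the method its name.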
Regional regression of flood characteristics employing historical information
Tasker, Gary D.; Stedinger, J.R.
1987-01-01
Streamflow gauging networks provide hydrologic information for use in estimating the parameters of regional regression models. The regional regression models can be used to estimate flood statistics, such as the 100 yr peak, at ungauged sites as functions of drainage basin characteristics. A recent innovation in regional regression is the use of a generalized least squares (GLS) estimator that accounts for unequal station record lengths and sample cross correlation among the flows. However, this technique does not account for historical flood information. A method is proposed here to adjust this generalized least squares estimator to account for possible information about historical floods available at some stations in a region. The historical information is assumed to be in the form of observations of all peaks above a threshold during a long period outside the systematic record period. A Monte Carlo simulation experiment was performed to compare the GLS estimator adjusted for historical floods with the unadjusted GLS estimator and the ordinary least squares estimator. Results indicate that using the GLS estimator adjusted for historical information significantly improves the regression model. ?? 1987.
Multiple Regression and Its Discontents
ERIC Educational Resources Information Center
Snell, Joel C.; Marsh, Mitchell
2012-01-01
Multiple regression is part of a larger statistical strategy originated by Gauss. The authors raise questions about the theory and suggest some changes that would make room for Mandelbrot and Serendipity.
Wrong Signs in Regression Coefficients
NASA Technical Reports Server (NTRS)
McGee, Holly
1999-01-01
When using parametric cost estimation, it is important to note the possibility of the regression coefficients having the wrong sign. A wrong sign is defined as a sign on the regression coefficient opposite to the researcher's intuition and experience. Some possible causes for the wrong sign discussed in this paper are a small range of x's, leverage points, missing variables, multicollinearity, and computational error. Additionally, techniques for determining the cause of the wrong sign are given.
Montgomery, Katherine L; Vaughn, Michael G; Thompson, Sanna J; Howard, Matthew O
2013-11-01
Research on juvenile offenders has largely treated this population as a homogeneous group. However, recent findings suggest that this at-risk population may be considerably more heterogeneous than previously believed. This study compared mixture regression analyses with standard regression techniques in an effort to explain how known factors such as distress, trauma, and personality are associated with drug abuse among juvenile offenders. Researchers recruited 728 juvenile offenders from Missouri juvenile correctional facilities for participation in this study. Researchers investigated past-year substance use in relation to the following variables: demographic characteristics (gender, ethnicity, age, familial use of public assistance), antisocial behavior, and mental illness symptoms (psychopathic traits, psychiatric distress, and prior trauma). Results indicated that standard and mixed regression approaches identified significant variables related to past-year substance use among this population; however, the mixture regression methods provided greater specificity in results. Mixture regression analytic methods may help policy makers and practitioners better understand and intervene with the substance-related subgroups of juvenile offenders.
Recirculating valve lash adjuster
Stoody, R.R.
1987-02-24
This patent describes an internal combustion engine with a valve assembly of the type including overhead valves supported by a cylinder head for opening and closing movements in a substantially vertical direction and a rotatable overhead camshaft thereabove lubricated by engine oil pumped by an engine oil pump. A hydraulic lash adjuster with an internal reservoir therein is solely supplied with run-off lubricating oil from the camshaft which oil is pumped into the internal reservoir of the lash adjuster by self-pumping operation of the lash adjuster produced by lateral forces thereon by the rotative operation of the camshaft comprising: a housing of the lash adjuster including an axially extending bore therethrough with a lower wall means of the housing closing the lower end thereof; a first plunger member being closely slidably received in the bore of the housing and having wall means defining a fluid filled power chamber with the lower wall means of the housing; and a second plunger member of the lash adjuster having a portion being loosely slidably received and extending into the bore of the housing for reciprocation therein. Another portion extends upwardly from the housing to operatively receive alternating side-to-side force inputs from operation of the camshaft.
Age-adjusted mortality and its association to variations in urban conditions in Shanghai.
Takano, Takehito; Fu, Jia; Nakamura, Keiko; Uji, Kazuyuki; Fukuda, Yoshiharu; Watanabe, Masafumi; Nakajima, Hiroshi
2002-09-01
The objective of this study was to explore the association between health and urbanization in a megacity, Shanghai, by calculating the age-adjusted mortality ratio by ward-unit of Shanghai and by examining relationships between mortalities and urban indicators. Crude mortality rates and age-adjusted mortality ratios by ward-unit were calculated. Demographic, residential environment, healthcare, and socioeconomic indicators were formulated for each of the ward-units between 1995 and 1998. Correlation and Poisson regression analyses were performed to examine the association between urban indicators and mortalities. The crude mortality rate by ward-unit in 1997 varied from 6.3 to 9.4 deaths per 1000 population. The age-adjusted mortality ratio in 1997 by ward-unit, with the average mortality of urban China as reference, varied from 57.8 to 113.3 within Shanghai. Age-adjusted mortalities were inversely related with indicators of a larger floor space of dwellings per population; a larger proportion of parks, gardens, and green areas to total land area; a greater number of health professionals per population; and a greater number of employees in retail business per population. Spacious living showed an independent association with a higher standard of community health in Shanghai (P < 0.05). The consequences for health policy and the development of urban infrastructural resources from the viewpoint of the Healthy Cities concept are discussed.
Background stratified Poisson regression analysis of cohort data.
Richardson, David B; Langholz, Bryan
2012-03-01
Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models.
Incremental learning for ν-Support Vector Regression.
Gu, Bin; Sheng, Victor S; Wang, Zhijie; Ho, Derek; Osman, Said; Li, Shuo
2015-07-01
The ν-Support Vector Regression (ν-SVR) is an effective regression learning algorithm, which has the advantage of using a parameter ν on controlling the number of support vectors and adjusting the width of the tube automatically. However, compared to ν-Support Vector Classification (ν-SVC) (Schölkopf et al., 2000), ν-SVR introduces an additional linear term into its objective function. Thus, directly applying the accurate on-line ν-SVC algorithm (AONSVM) to ν-SVR will not generate an effective initial solution. It is the main challenge to design an incremental ν-SVR learning algorithm. To overcome this challenge, we propose a special procedure called initial adjustments in this paper. This procedure adjusts the weights of ν-SVC based on the Karush-Kuhn-Tucker (KKT) conditions to prepare an initial solution for the incremental learning. Combining the initial adjustments with the two steps of AONSVM produces an exact and effective incremental ν-SVR learning algorithm (INSVR). Theoretical analysis has proven the existence of the three key inverse matrices, which are the cornerstones of the three steps of INSVR (including the initial adjustments), respectively. The experiments on benchmark datasets demonstrate that INSVR can avoid the infeasible updating paths as far as possible, and successfully converges to the optimal solution. The results also show that INSVR is faster than batch ν-SVR algorithms with both cold and warm starts.
Baurain, Céline; Nader-Grosbois, Nathalie; Dionne, Carmen
2013-09-01
This study examined the extent to which socio-emotional regulation displayed in three dyadic interactive play contexts (neutral, competitive or cooperative) by 45 children with intellectual disability compared with 45 typically developing children (matched on developmental age, ranging from 3 to 6 years) is linked with the teachers' perceptions of their social adjustment. A Coding Grid of Socio-Emotional Regulation by Sequences (Baurain & Nader-Grosbois, 2011b, 2011c) focusing on Emotional Expression, Social Behavior and Behavior toward Social Rules in children was applied. The Social Adjustment for Children Scale (EASE, Hugues, Soares-Boucaud, Hochman, & Frith, 1997) and the Assessment, Evaluation and Intervention Program System (AEPS, Bricker, 2002) were completed by teachers. Regression analyses emphasized, in children with intellectual disability only, a positive significant link between their Behavior toward Social Rules in interactive contexts and the teachers' perceptions of their social adjustment. Children with intellectual disabilities who listen to and follow instructions, who are patient in waiting for their turn, and who moderate their externalized behavior are perceived by their teachers as socially adapted in their daily social relationships. The between-groups dissimilarity in the relational patterns between abilities in socio-emotional regulation and social adjustment supports the "structural difference hypothesis" with regard to the group with intellectual disability, compared with the typically developing group. Hierarchical cluster cases analyses identified distinct subgroups showing variable structural patterns between the three specific categories of abilities in socio-emotional regulation and their levels of social adjustment perceived by teachers. In both groups, several abilities in socio-emotional regulation and teachers' perceptions of social adjustment vary depending on children's developmental age. Chronological age in children with
Eugster, Patrick; Sennhauser, Michèle; Zweifel, Peter
2010-07-01
When premiums are community-rated, risk adjustment (RA) serves to mitigate competitive insurers' incentive to select favorable risks. However, unless fully prospective, it also undermines their incentives for efficiency. By capping its volume, one may try to counteract this tendency, exposing insurers to some financial risk. This in turn runs counter to the quest to refine the RA formula, which would increase RA volume. Specifically, the adjuster "Hospitalization or living in a nursing home during the previous year" will be added in Switzerland starting in 2012. This paper investigates how to minimize the opportunity cost of capping RA in terms of increased incentives for risk selection.
Identification of high leverage points in binary logistic regression
NASA Astrophysics Data System (ADS)
Fitrianto, Anwar; Wendy, Tham
2016-10-01
Leverage points are observations that are unusual in the x-space in regression diagnostics. Detecting high leverage points is vital because they can mask outliers. In regression, observations made at the extremes of the space of the explanatory variables, far from the bulk of the data, are called leverage points. In this project, a method for identifying high leverage points in logistic regression is demonstrated using a numerical example. We investigate the effects of a high leverage point on the logistic regression model and compare the results of fitting the model with and without the high leverage point. Some graphical analyses based on the results are presented. We found that the presence of a high leverage point (HLP) affects the hii values, estimated probabilities, estimated coefficients, p-values of variables, odds ratios, and the regression equation.
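The hii values mentioned above are the diagonal entries of the hat matrix. In logistic regression the hat matrix also involves the IRLS weights; as a simplified illustration of why extreme x-values receive large leverage, here is the unweighted simple-linear-regression case (a sketch with an illustrative function name, not the project's code):

```python
def hat_values(x):
    """Leverages h_ii for simple linear regression on a single predictor x:
    h_ii = 1/n + (x_i - xbar)^2 / Sxx. They sum to p (= 2 parameters here),
    and points far from xbar get leverage close to 1."""
    n = len(x)
    xbar = sum(x) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    return [1.0 / n + (xi - xbar) ** 2 / sxx for xi in x]
```

A common rule of thumb flags h_ii > 2p/n as high leverage; a single extreme observation can pull its own leverage near 1, which is exactly the masking risk the abstract describes.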
Kang, Dae Ryong; Yadav, Dhananjay; Koh, Sang-Baek; Kim, Jang-Young
2017-01-01
Purpose The ratio of serum leptin to adiponectin (L/A ratio) could be used as a marker for insulin resistance. However, few prospective studies have investigated the impact of L/A ratio on improvement of metabolic components in high-risk individuals with metabolic syndrome. We examined the association between L/A ratio and the regression of metabolic syndrome in a population-based longitudinal study. Materials and Methods A total of 1017 subjects (431 men and 586 women) with metabolic syndrome at baseline (2005–2008) were examined and followed (2008–2011). Baseline serum levels of leptin and adiponectin were analyzed by radioimmunoassay. Area under the receiver operating characteristics curve (AUROC) analyses were used to assess the predictive ability of L/A ratio for the regression of metabolic syndrome. Results During an average of 2.8 years of follow-up, metabolic syndrome disappeared in 142 men (32.9%) and 196 women (33.4%). After multivariable adjustment, the odds ratios (95% confidence interval) for regression of metabolic syndrome in comparisons of the lowest to the highest tertiles of L/A ratio were 1.84 (1.02–3.31) in men and 2.32 (1.37–3.91) in women. In AUROC analyses, L/A ratio had a greater predictive power than serum adiponectin for the regression of metabolic syndrome in both men (p=0.024) and women (p=0.019). Conclusion Low L/A ratio is a predictor for the regression of metabolic syndrome. The L/A ratio could be a useful clinical marker for management of high-risk individuals with metabolic syndrome. PMID:28120564
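The AUROC comparisons in the study can be computed without any curve-plotting via the Mann-Whitney characterization of the area under the ROC curve. A minimal sketch (function name and scores are illustrative, not the study's data):

```python
def auroc(pos, neg):
    """AUROC as the Mann-Whitney probability that a randomly chosen
    positive case scores above a randomly chosen negative case,
    counting ties as one half."""
    wins = 0.0
    for p in pos:
        for q in neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

An AUROC of 0.5 means the marker is uninformative; comparing two markers' AUROCs on the same subjects (as done here for the L/A ratio versus adiponectin) requires a paired test such as DeLong's, since the curves share the same cases.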
Soh, Chang-Heok; Harrington, David P; Zaslavsky, Alan M
2008-03-01
When variable selection with stepwise regression and model fitting are conducted on the same data set, competition for inclusion in the model induces a selection bias in coefficient estimators away from zero. In proportional hazards regression with right-censored data, selection bias inflates the absolute value of parameter estimate of selected parameters, while the omission of other variables may shrink coefficients toward zero. This paper explores the extent of the bias in parameter estimates from stepwise proportional hazards regression and proposes a bootstrap method, similar to those proposed by Miller (Subset Selection in Regression, 2nd edn. Chapman & Hall/CRC, 2002) for linear regression, to correct for selection bias. We also use bootstrap methods to estimate the standard error of the adjusted estimators. Simulation results show that substantial biases could be present in uncorrected stepwise estimators and, for binary covariates, could exceed 250% of the true parameter value. The simulations also show that the conditional mean of the proposed bootstrap bias-corrected parameter estimator, given that a variable is selected, is moved closer to the unconditional mean of the standard partial likelihood estimator in the chosen model, and to the population value of the parameter. We also explore the effect of the adjustment on estimates of log relative risk, given the values of the covariates in a selected model. The proposed method is illustrated with data sets in primary biliary cirrhosis and in multiple myeloma from the Eastern Cooperative Oncology Group.
XRA image segmentation using regression
NASA Astrophysics Data System (ADS)
Jin, Jesse S.
1996-04-01
Segmentation is an important step in image analysis, and thresholding is one of the most important approaches. There are several difficulties in segmentation, such as automatically selecting a threshold, dealing with intensity distortion, and removing noise. We have developed an adaptive segmentation scheme by applying the Central Limit Theorem in regression. A Gaussian regression is used to separate the distribution of background from foreground in a single-peak histogram. The separation helps to automatically determine the threshold. A small 3 by 3 window is applied, and the mode of the local histogram is used to overcome noise. Thresholding is based on local weighting, where regression is used again for parameter estimation. A connectivity test is applied to the final results to remove impulse noise. We have applied the algorithm to X-ray angiogram images to extract brain arteries. The algorithm works well for single-peak distributions where there is no valley in the histogram. The regression provides a method to apply knowledge in clustering. Extending regression to multiple-level segmentation needs further investigation.
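The Gaussian-regression idea can be sketched as follows: regressing log bin counts on a quadratic in intensity fits a Gaussian to the background peak, whose recovered mean and standard deviation then set a threshold. This is a simplified stand-in for the paper's scheme, with illustrative names and a synthetic histogram; it is not the authors' implementation.

```python
import math

def fit_gaussian_threshold(values, counts, k=3.0):
    """Fit a Gaussian to a histogram by least-squares regression of
    log(count) on a quadratic in the (centered) bin value, then return
    mean + k*sigma as a segmentation threshold."""
    pts = [(v, math.log(c)) for v, c in zip(values, counts) if c > 0]
    vbar = sum(v for v, _ in pts) / len(pts)
    xs = [v - vbar for v, _ in pts]   # centering improves conditioning
    ys = [y for _, y in pts]
    rows = [[1.0, x, x * x] for x in xs]
    # Normal equations N beta = t for log f(x) = a + b x + c x^2.
    N = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    t = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    # Solve the 3x3 system by Gaussian elimination with partial pivoting.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(N[r][i]))
        N[i], N[p], t[i], t[p] = N[p], N[i], t[p], t[i]
        for r in range(i + 1, 3):
            f = N[r][i] / N[i][i]
            for c in range(i, 3):
                N[r][c] -= f * N[i][c]
            t[r] -= f * t[i]
    beta = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        beta[i] = (t[i] - sum(N[i][j] * beta[j] for j in range(i + 1, 3))) / N[i][i]
    b, c2 = beta[1], beta[2]          # c2 < 0 for a true peak
    mu = vbar - b / (2.0 * c2)        # undo the centering
    sigma = math.sqrt(-1.0 / (2.0 * c2))
    return mu + k * sigma
```

Pixels above the returned threshold are treated as foreground; the log-quadratic fit is exact when the background really is Gaussian, which is the Central-Limit-Theorem assumption the abstract invokes.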
Psychological Adjustment and Homosexuality.
ERIC Educational Resources Information Center
Gonsiorek, John C.
In this paper, the diverse literature bearing on the topic of homosexuality and psychological adjustment is critically reviewed and synthesized. The first chapter discusses the most crucial methodological issue in this area, the problem of sampling. The kinds of samples used to date are critically examined, and some suggestions for improved…
NASA Technical Reports Server (NTRS)
1986-01-01
Corning Glass Works' Serengeti Driver sunglasses are unique in that their lenses self-adjust and filter light while suppressing glare. They eliminate more than 99% of the ultraviolet rays in sunlight. The frames are based on the NASA Anthropometric Source Book.
Hunter, Steven L.
2002-01-01
An inclinometer utilizing synchronous demodulation for high resolution and electronic offset adjustment provides a wide dynamic range without any moving components. A device encompassing a tiltmeter and accompanying electronic circuitry provides quasi-leveled tilt sensors that detect highly resolved tilt change without signal saturation.
Serghiou, Stylianos; Patel, Chirag J.; Tan, Yan Yu; Koay, Peter; Ioannidis, John P.A.
2016-01-01
Objectives Instead of evaluating one risk factor at a time, we illustrate the utility of “field-wide meta-analyses” in considering all available data on all putative risk factors of a disease simultaneously. Study Design and Setting We identified studies on putative risk factors of pterygium (surfer’s eye) in PubMed, EMBASE, and Web of Science. We mapped which factors were considered, reported, and adjusted for in each study. For each putative risk factor, four meta-analyses were done using univariate only, multivariate only, preferentially univariate, or preferentially multivariate estimates. Results A total of 2052 records were screened to identify 60 eligible studies reporting on 65 putative risk factors. Only 4 of 60 studies reported both multivariate and univariate regression analyses. None of the 32 studies using multivariate analysis adjusted for the same set of risk factors. Effect sizes from different types of regression analyses led to significantly different summary effect sizes (P-value < 0.001). Observed heterogeneity was very high for both multivariate (median I², 76.1%) and univariate (median I², 85.8%) estimates. No single study investigated all 11 risk factors that were statistically significant in at least one of our meta-analyses. Conclusion Field-wide meta-analyses can map availability of risk factors and trends in modeling, adjustments and reporting, as well as the impact of differences in model specification. PMID:26415577
Regressive evolution in Astyanax cavefish.
Jeffery, William R
2009-01-01
A diverse group of animals, including members of most major phyla, have adapted to life in the perpetual darkness of caves. These animals are united by the convergence of two regressive phenotypes, loss of eyes and pigmentation. The mechanisms of regressive evolution are poorly understood. The teleost Astyanax mexicanus is of special significance in studies of regressive evolution in cave animals. This species includes an ancestral surface-dwelling form and many conspecific cave-dwelling forms, some of which have evolved their regressive phenotypes independently. Recent advances in Astyanax development and genetics have provided new information about how eyes and pigment are lost during cavefish evolution; namely, they have revealed some of the molecular and cellular mechanisms involved in trait modification, the number and identity of the underlying genes and mutations, the molecular basis of parallel evolution, and the evolutionary forces driving adaptation to the cave environment.
Standardized Regression Coefficients as Indices of Effect Sizes in Meta-Analysis
ERIC Educational Resources Information Center
Kim, Rae Seon
2011-01-01
When conducting a meta-analysis, it is common to find many collected studies that report regression analyses, because multiple regression analysis is widely used in many fields. Meta-analysis uses effect sizes drawn from individual studies as a means of synthesizing a collection of results. However, indices of effect size from regression analyses…
Cactus: An Introduction to Regression
ERIC Educational Resources Information Center
Hyde, Hartley
2008-01-01
When the author first used "VisiCalc," the author thought it a very useful tool when he had the formulas. But how could he design a spreadsheet if there was no known formula for the quantities he was trying to predict? A few months later, the author relates he learned to use multiple linear regression software and suddenly it all clicked into…
Multiple Regression: A Leisurely Primer.
ERIC Educational Resources Information Center
Daniel, Larry G.; Onwuegbuzie, Anthony J.
Multiple regression is a useful statistical technique when the researcher is considering situations in which variables of interest are theorized to be multiply caused. It may also be useful when the researcher is interested in studying the predictability of phenomena of interest. This paper provides an introduction to…
Weighting Regressions by Propensity Scores
ERIC Educational Resources Information Center
Freedman, David A.; Berk, Richard A.
2008-01-01
Regressions can be weighted by propensity scores in order to reduce bias. However, weighting is likely to increase random error in the estimates, and to bias the estimated standard errors downward, even when selection mechanisms are well understood. Moreover, in some cases, weighting will increase the bias in estimated causal parameters. If…
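As a concrete illustration of both points in this abstract, the bias reduction and the variance cost of weighting, here is a minimal pure-Python sketch of inverse-propensity weighting. The toy cohort, the propensity model e(x) = 0.2 + 0.6x, and the true effect of 2.0 are illustrative assumptions, not taken from the paper.

```python
import random

random.seed(0)

# Toy cohort (illustrative): confounder x raises both the probability of
# treatment and the outcome, so the naive treated-vs-control comparison is biased.
n = 5000
data = []
for _ in range(n):
    x = random.random()
    t = 1 if random.random() < 0.2 + 0.6 * x else 0  # treatment depends on x
    y = 2.0 * t + 3.0 * x + random.gauss(0, 1)       # true effect is 2.0
    data.append((x, t, y))

# Inverse-propensity weighting with the (here known) score e(x) = 0.2 + 0.6x.
# Weights 1/e and 1/(1 - e) reconstruct the full-population covariate
# distribution in each arm, but large weights inflate random error.
num1 = den1 = num0 = den0 = 0.0
for x, t, y in data:
    e = 0.2 + 0.6 * x
    if t:
        num1 += y / e
        den1 += 1 / e
    else:
        num0 += y / (1 - e)
        den0 += 1 / (1 - e)
ipw = num1 / den1 - num0 / den0

treated = [y for _, t, y in data if t]
control = [y for _, t, y in data if not t]
naive = sum(treated) / len(treated) - sum(control) / len(control)
print(round(naive, 2), round(ipw, 2))  # naive is biased upward; ipw should be near 2.0
```

With a known propensity score the weighting removes essentially all of the confounding here; the abstract's caveats concern what happens when the selection model is estimated or misspecified.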
Quantile Regression with Censored Data
ERIC Educational Resources Information Center
Lin, Guixian
2009-01-01
The Cox proportional hazards model and the accelerated failure time model are frequently used in survival data analysis. They are powerful, yet have limitation due to their model assumptions. Quantile regression offers a semiparametric approach to model data with possible heterogeneity. It is particularly powerful for censored responses, where the…
Hierarchical Adaptive Regression Kernels for Regression with Functional Predictors
Woodard, Dawn B.; Crainiceanu, Ciprian; Ruppert, David
2013-01-01
We propose a new method for regression using a parsimonious and scientifically interpretable representation of functional predictors. Our approach is designed for data that exhibit features such as spikes, dips, and plateaus whose frequency, location, size, and shape vary stochastically across subjects. We propose Bayesian inference of the joint functional and exposure models, and give a method for efficient computation. We contrast our approach with existing state-of-the-art methods for regression with functional predictors, and show that our method is more effective and efficient for data that include features occurring at varying locations. We apply our methodology to a large and complex dataset from the Sleep Heart Health Study, to quantify the association between sleep characteristics and health outcomes. Software and technical appendices are provided in online supplemental materials. PMID:24293988
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Schreckenghost, Debra K.
2001-01-01
The Adjustable Autonomy Testbed (AAT) is a simulation-based testbed located in the Intelligent Systems Laboratory in the Automation, Robotics and Simulation Division at NASA Johnson Space Center. The purpose of the testbed is to support evaluation and validation of prototypes of adjustable autonomous agent software for control and fault management for complex systems. The AAT project has developed prototype adjustable autonomous agent software and human interfaces for cooperative fault management. This software builds on current autonomous agent technology by altering the architecture, components and interfaces for effective teamwork between autonomous systems and human experts. Autonomous agents include a planner, flexible executive, low level control and deductive model-based fault isolation. Adjustable autonomy is intended to increase the flexibility and effectiveness of fault management with an autonomous system. The test domain for this work is control of advanced life support systems for habitats for planetary exploration. The CONFIG hybrid discrete event simulation environment provides flexible and dynamically reconfigurable models of the behavior of components and fluids in the life support systems. Both discrete event and continuous (discrete time) simulation are supported, and flows and pressures are computed globally. This provides fast dynamic simulations of interacting hardware systems in closed loops that can be reconfigured during operations scenarios, producing complex cascading effects of operations and failures. Current object-oriented model libraries support modeling of fluid systems, and models have been developed of physico-chemical and biological subsystems for processing advanced life support gases. In FY01, water recovery system models will be developed.
Cutburth, Ronald W.; Silva, Leonard L.
1988-01-01
An improved mounting stage of the type used for the detection of laser beams is disclosed. A stage center block is mounted on each of two opposite sides by a pair of spaced ball bearing tracks which provide stability as well as simplicity. The use of the spaced ball bearing pairs in conjunction with an adjustment screw which also provides support eliminates extraneous stabilization components and permits maximization of the area of the center block laser transmission hole.
10 CFR 436.22 - Adjusted internal rate of return.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Methodology and Procedures for Life Cycle Cost Analyses § 436.22 Adjusted internal rate of return. The adjusted internal rate of return is the overall rate of return on an energy or water conservation measure... yearly net savings in energy or water and non-fuel or non-water operation and maintenance...
Tavernier, Royette; Munroe, Melanie; Willoughby, Teena
2015-01-01
Past research has consistently found that evening-types typically report poorer academic adjustment and higher levels of substance use compared to morning-types. An important development within the morningness-eveningness and psychosocial adjustment literature has been the hypothesis that social jetlag (i.e. the asynchrony between an individual's "biological" and "social" clocks) is one factor that may explain why evening-types are at a greater risk for negative psychosocial adjustment. Yet, only a handful of studies have assessed social jetlag. Furthermore, the few studies that have assessed social jetlag have done so only with concurrent data, and thus have not been able to determine the direction of effects among morningness-eveningness, social jetlag and psychosocial adjustment. To address this important gap in the literature, the present 3-year longitudinal study employed a cross-lagged autoregressive model to specifically examine the predictive role of perceived morningness-eveningness and social jetlag on two important indices of psychosocial adjustment among university students: academic adjustment and substance use. We also assessed whether there would be an indirect effect between perceived morningness-eveningness and psychosocial adjustment through social jetlag. Participants were 942 (71.5% female; M = 19 years, SD = 0.90) undergraduates at a mid-sized university in Southern Ontario, Canada, who completed a survey at three assessments, each one year apart, beginning in first-year university. Measures were demographics (age, gender and parental education), sleep problems, perceived morningness-eveningness, social jetlag, academic adjustment and substance use. As hypothesized, results of path analyses indicated that a greater perceived eveningness preference significantly predicted higher social jetlag, poorer academic adjustment and higher substance use over time. In contrast, we found no support for social jetlag as a predictor of
Combining biomarkers for classification with covariate adjustment.
Kim, Soyoung; Huang, Ying
2017-03-09
Combining multiple markers can improve classification accuracy compared with using a single marker. In practice, covariates associated with markers or disease outcome can affect the performance of a biomarker or biomarker combination in the population. The covariate-adjusted receiver operating characteristic (ROC) curve has been proposed as a tool to tease out the covariate effect in the evaluation of a single marker; this curve characterizes the classification accuracy attributable solely to the marker of interest. However, research on the effect of covariates on the performance of marker combinations and on how to adjust for the covariate effect when combining markers is still lacking. In this article, we examine the effect of covariates on classification performance of linear marker combinations and propose to adjust for covariates in combining markers by maximizing the nonparametric estimate of the area under the covariate-adjusted ROC curve. The proposed method provides a way to estimate the best linear biomarker combination that is robust to risk model assumptions underlying alternative regression-model-based methods. The proposed estimator is shown to be consistent and asymptotically normally distributed. We conduct simulations to evaluate the performance of our estimator in cohort and case/control designs and compare several different weighting strategies during estimation with respect to efficiency. Our estimator is also compared with alternative regression-model-based estimators or estimators that maximize the empirical area under the ROC curve, with respect to bias and efficiency. We apply the proposed method to a biomarker study from a human immunodeficiency virus vaccine trial. Copyright © 2017 John Wiley & Sons, Ltd.
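The core idea of choosing a linear marker combination by maximizing the empirical (Mann-Whitney) AUC can be sketched in a few lines. The two-marker toy data and the coarse grid search below are illustrative stand-ins, not the paper's covariate-adjusted estimator.

```python
import random

random.seed(1)

# Hypothetical two-marker data: cases are shifted upward on both markers.
controls = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200)]
cases = [(random.gauss(1.0, 1), random.gauss(0.7, 1)) for _ in range(200)]

def auc(case_scores, control_scores):
    # Empirical AUC: probability a random case outscores a random control.
    wins = sum((a > b) + 0.5 * (a == b)
               for a in case_scores for b in control_scores)
    return wins / (len(case_scores) * len(control_scores))

# Maximise the empirical AUC of the linear combination m1 + theta * m2
# over a coarse grid (a crude stand-in for the paper's optimisation).
grid = [i / 10 for i in range(-20, 21)]
best_auc, best_theta = max(
    (auc([x + th * y for x, y in cases], [x + th * y for x, y in controls]), th)
    for th in grid)

auc_m1 = auc([x for x, _ in cases], [x for x, _ in controls])
print(round(auc_m1, 3), round(best_auc, 3))
```

Because theta = 0 is in the grid, the best combination never scores below marker 1 alone on the training data; covariate adjustment would replace the raw AUC above with its covariate-adjusted counterpart.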
3D Regression Heat Map Analysis of Population Study Data.
Klemm, Paul; Lawonn, Kai; Glaßer, Sylvia; Niemann, Uli; Hegenscheid, Katrin; Völzke, Henry; Preim, Bernhard
2016-01-01
Epidemiological studies comprise heterogeneous data about a subject group to define disease-specific risk factors. These data contain information (features) about a subject's lifestyle, medical status as well as medical image data. Statistical regression analysis is used to evaluate these features and to identify feature combinations indicating a disease (the target feature). We propose an analysis approach of epidemiological data sets by incorporating all features in an exhaustive regression-based analysis. This approach combines all independent features w.r.t. a target feature. It provides a visualization that reveals insights into the data by highlighting relationships. The 3D Regression Heat Map, a novel 3D visual encoding, acts as an overview of the whole data set. It shows all combinations of two to three independent features with a specific target disease. Slicing through the 3D Regression Heat Map allows for the detailed analysis of the underlying relationships. Expert knowledge about disease-specific hypotheses can be included into the analysis by adjusting the regression model formulas. Furthermore, the influences of features can be assessed using a difference view comparing different calculation results. We applied our 3D Regression Heat Map method to a hepatic steatosis data set to reproduce results from a data mining-driven analysis. A qualitative analysis was conducted on a breast density data set. We were able to derive new hypotheses about relations between breast density and breast lesions with breast cancer. With the 3D Regression Heat Map, we present a visual overview of epidemiological data that allows for the first time an interactive regression-based analysis of large feature sets with respect to a disease.
Interaction Models for Functional Regression
USSET, JOSEPH; STAICU, ANA-MARIA; MAITY, ARNAB
2015-01-01
A functional regression model with a scalar response and multiple functional predictors is proposed that accommodates two-way interactions in addition to their main effects. The proposed estimation procedure models the main effects using penalized regression splines, and the interaction effect by a tensor product basis. Extensions to generalized linear models and data observed on sparse grids or with measurement error are presented. A hypothesis testing procedure for the functional interaction effect is described. The proposed method can be easily implemented through existing software. Numerical studies show that fitting an additive model in the presence of interaction leads to both poor estimation performance and lost prediction power, while fitting an interaction model where there is in fact no interaction leads to negligible losses. The methodology is illustrated on the AneuRisk65 study data. PMID:26744549
Measurements of Glacial Isostatic Adjustment in Greenland
NASA Astrophysics Data System (ADS)
Khan, Shfaqat Abbas; Bamber, Jonathan; Bevis, Michael; Wahr, John; van dam, Tonie; Wouters, Bert; Willis, Michael
2015-04-01
The Greenland GPS network (GNET) was constructed to provide a new means to assess viscoelastic and elastic adjustments driven by past and present-day changes in ice mass. Here we assess existing glacial isostatic adjustment (GIA) models by analysing 1995-present data from 61 continuous GPS receivers located along the edge of the Greenland ice sheet. Since GPS receivers measure both the GIA and elastic signals, we isolate the GIA signal by removing the elastic adjustments of the crust due to present-day mass loss, using high-resolution ice surface elevation change grids derived from satellite and airborne altimetry measurements (ERS1/2, ICESat, ATM, ENVISAT, and CryoSat-2). In general, our observed GIA rates contradict models, suggesting GIA models and hence their ice load history for Greenland are not well constrained.
Potgieter, Cornelis J.; Wei, Rubin; Kipnis, Victor; Freedman, Laurence S.; Carroll, Raymond J.
2016-01-01
Summary: For the classical, homoscedastic measurement error model, moment reconstruction (Freedman et al., 2004, 2008) and moment-adjusted imputation (Thomas et al., 2011) are appealing, computationally simple imputation-like methods for general model fitting. Like classical regression calibration, the idea is to replace the unobserved variable subject to measurement error with a proxy that can be used in a variety of analyses. Moment reconstruction and moment-adjusted imputation differ from regression calibration in that they attempt to match multiple features of the latent variable, and also to match some of the latent variable’s relationships with the response and additional covariates. In this note, we consider a problem where true exposure is generated by a complex, nonlinear random effects modeling process, and develop analogues of moment reconstruction and moment-adjusted imputation for this case. This general model includes classical measurement errors, Berkson measurement errors, mixtures of Berkson and classical errors, and problems that are not measurement error problems, but also cases where the data generating process for true exposure is a complex, nonlinear random effects modeling process. The methods are illustrated using the National Institutes of Health-AARP Diet and Health Study, where the latent variable is a dietary pattern score called the Healthy Eating Index - 2005. We also show how our general model includes methods used in radiation epidemiology as a special case. Simulations are used to illustrate the methods. PMID:27061196
Astronomical Methods for Nonparametric Regression
NASA Astrophysics Data System (ADS)
Steinhardt, Charles L.; Jermyn, Adam
2017-01-01
I will discuss commonly used techniques for nonparametric regression in astronomy. We find that several of them, particularly running averages and running medians, are generically biased, asymmetric between dependent and independent variables, and perform poorly in recovering the underlying function, even when errors are present only in one variable. We then examine less-commonly used techniques such as Multivariate Adaptive Regression Splines and Boosted Trees and find them superior in bias, asymmetry, and variance both theoretically and in practice under a wide range of numerical benchmarks. In this context the chief advantage of the common techniques is runtime, which even for large datasets is now measured in microseconds compared with milliseconds for the more statistically robust techniques. This points to a tradeoff between bias, variance, and computational resources which in recent years has shifted heavily in favor of the more advanced methods, primarily driven by Moore's Law. Along these lines, we also propose a new algorithm which has better overall statistical properties than all techniques examined thus far, at the cost of significantly worse runtime, in addition to providing guidance on choosing the nonparametric regression technique most suitable to any specific problem. We then examine the more general problem of errors in both variables and provide a new algorithm which performs well in most cases and lacks the clear asymmetry of existing non-parametric methods, which fail to account for errors in both variables.
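The claimed bias of running medians is easy to reproduce: even on a noiseless quadratic, the smoother cannot track curvature at an extremum. The grid and window size below are arbitrary illustrative choices.

```python
# A noiseless quadratic sampled on [-1, 1]; no measurement errors at all.
xs = [i / 100 for i in range(-100, 101)]
ys = [x * x for x in xs]

def running_median(vals, half_window):
    # Median over a sliding window, truncated at the array edges.
    out = []
    for i in range(len(vals)):
        window = sorted(vals[max(0, i - half_window):i + half_window + 1])
        m = len(window)
        out.append(window[m // 2] if m % 2 else
                   (window[m // 2 - 1] + window[m // 2]) / 2)
    return out

smooth = running_median(ys, 20)
# True minimum is 0 at x = 0, but the running median reports about 0.01 there,
# the median of x**2 over the window [-0.2, 0.2]: a systematic positive bias.
print(smooth[100])
```

The same calculation with a running average gives a comparable positive offset, which is the generic bias of such smoothers wherever the underlying function is curved.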
NASA Technical Reports Server (NTRS)
Farley, Gary L.
1994-01-01
Local characteristics of fabrics varied to suit special applications. Adjustable reed machinery proposed for use in weaving fabrics in various net shapes, widths, yarn spacings, and yarn angles. Locations of edges of fabric and configuration of warp and filling yarns varied along fabric to obtain specified properties. In machinery, reed wires mounted in groups on sliders, mounted on lengthwise rails in reed frame. Mechanisms incorporated to move sliders lengthwise, parallel to warp yarns, by sliding them along rails; move sliders crosswise by translating reed frame rails perpendicular to warp yarns; and crosswise by spreading reed rails within group. Profile of reed wires in group on each slider changed.
Estimating the exceedance probability of rain rate by logistic regression
NASA Technical Reports Server (NTRS)
Chiu, Long S.; Kedem, Benjamin
1990-01-01
Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.
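A hedged sketch of the core idea, modeling the conditional probability that rain rate exceeds a threshold with logistic regression. The single covariate, the true coefficients (-1, 2), and the plain gradient-ascent fitter are illustrative assumptions, not the partial-likelihood machinery used in the paper.

```python
import math
import random

random.seed(2)

# Hypothetical pixel data: covariate z (e.g. an area-averaged rain rate) and a
# binary indicator y of rain rate exceeding a fixed threshold.
# Assumed true model: logit P(y = 1 | z) = -1 + 2 z.
data = []
for _ in range(1000):
    z = random.random()
    p = 1 / (1 + math.exp(1 - 2 * z))  # logistic(-1 + 2z)
    data.append((z, 1 if random.random() < p else 0))

# Fit the two coefficients by gradient ascent on the Bernoulli log-likelihood.
b0 = b1 = 0.0
for _ in range(1500):
    g0 = g1 = 0.0
    for z, y in data:
        p = 1 / (1 + math.exp(-(b0 + b1 * z)))
        g0 += y - p
        g1 += (y - p) * z
    b0 += g0 / len(data)
    b1 += g1 / len(data)

# Estimated conditional exceedance probability at z = 0.5 (true value: 0.5).
p_half = 1 / (1 + math.exp(-(b0 + b1 * 0.5)))
print(round(b0, 2), round(b1, 2), round(p_half, 2))
```

The fitted curve directly yields the exceedance probability at any covariate value, which is what the fractional-rainy-area estimate needs; the paper's partial-likelihood step addresses the serial dependence that this independent-data sketch ignores.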
The Impact of Financial Sophistication on Adjustable Rate Mortgage Ownership
ERIC Educational Resources Information Center
Smith, Hyrum; Finke, Michael S.; Huston, Sandra J.
2011-01-01
The influence of a financial sophistication scale on adjustable-rate mortgage (ARM) borrowing is explored. Descriptive statistics and regression analysis using recent data from the Survey of Consumer Finances reveal that ARM borrowing is driven by both the least and most financially sophisticated households but for different reasons. Less…
Effects of Relational Authenticity on Adjustment to College
ERIC Educational Resources Information Center
Lenz, A. Stephen; Holman, Rachel L.; Lancaster, Chloe; Gotay, Stephanie G.
2016-01-01
The authors examined the association between relational health and student adjustment to college. Data were collected from 138 undergraduate students completing their 1st semester at a large university in the mid-southern United States. Regression analysis indicated that higher levels of relational authenticity were a predictor of success during…
Interpreting Multiple Linear Regression: A Guidebook of Variable Importance
ERIC Educational Resources Information Center
Nathans, Laura L.; Oswald, Frederick L.; Nimon, Kim
2012-01-01
Multiple regression (MR) analyses are commonly employed in social science fields. It is also common for interpretation of results to typically reflect overreliance on beta weights, often resulting in very limited interpretations of variable importance. It appears that few researchers employ other methods to obtain a fuller understanding of what…
Continuously adjustable Pulfrich spectacles
NASA Astrophysics Data System (ADS)
Jacobs, Ken; Karpf, Ron
2011-03-01
A number of Pulfrich 3-D movies and TV shows have been produced, but the standard implementation has inherent drawbacks. The movie and TV industries have correctly concluded that the standard Pulfrich 3-D implementation is not a useful 3-D technique. Continuously Adjustable Pulfrich Spectacles (CAPS) is a new implementation of the Pulfrich effect that allows any scene containing movement in a standard 2-D movie (and most scenes contain movement) to be optionally viewed in 3-D using inexpensive viewing specs. Recent scientific results in the fields of human perception, optoelectronics, video compression and video format conversion are translated into a new implementation of Pulfrich 3-D. CAPS uses these results to continuously adjust to the movie so that the viewing spectacles always conform to the optical density that optimizes the Pulfrich stereoscopic illusion. CAPS instantly provides 3-D immersion to any moving scene in any 2-D movie. Without the glasses, the movie will appear as a normal 2-D image. CAPS works on any viewing device, and with any distribution medium. CAPS is appropriate for viewing Internet streamed movies in 3-D.
Efforts to adjust for confounding by neighborhood using complex survey data.
Brumback, Babette A; Dailey, Amy B; He, Zhulin; Brumback, Lyndia C; Livingston, Melvin D
2010-08-15
In social epidemiology, one often considers neighborhood or contextual effects on health outcomes, in addition to effects of individual exposures. This paper is concerned with the estimation of an individual exposure effect in the presence of confounding by neighborhood effects, motivated by an analysis of National Health Interview Survey (NHIS) data. In the analysis, we operationalize neighborhood as the secondary sampling unit of the survey, which consists of small groups of neighboring census blocks. Thus the neighborhoods are sampled with unequal probabilities, as are individuals within neighborhoods. We develop and compare several approaches for the analysis of the effect of dichotomized individual-level education on the receipt of adequate mammography screening. In the analysis, neighborhood effects are likely to confound the individual effects, due to such factors as differential availability of health services and differential neighborhood culture. The approaches can be grouped into three broad classes: ordinary logistic regression for survey data, with either no effect or a fixed effect for each cluster; conditional logistic regression extended for survey data; and generalized linear mixed model (GLMM) regression for survey data. Standard use of GLMMs with small clusters fails to adjust for confounding by cluster (e.g. neighborhood); this motivated us to develop an adaptation. We use theory, simulation, and analyses of the NHIS data to compare and contrast all of these methods. One conclusion is that all of the methods perform poorly when the sampling bias is strong; more research and new methods are clearly needed.
An assessment of precipitation adjustment and feedback computation methods
NASA Astrophysics Data System (ADS)
Richardson, T. B.; Samset, B. H.; Andrews, T.; Myhre, G.; Forster, P. M.
2016-10-01
The precipitation adjustment and feedback framework is a useful tool for understanding global and regional precipitation changes. However, there is no definitive method for making the decomposition. In this study we highlight important differences which arise in results due to methodological choices. The responses to five different forcing agents (CO2, CH4, SO4, black carbon, and solar insolation) are analyzed using global climate model simulations. Three decomposition methods are compared: using fixed sea surface temperature experiments (fSST), regressing transient climate change after an abrupt forcing (regression), and separating based on timescale using the first year of coupled simulations (YR1). The YR1 method is found to incorporate significant SST-driven feedbacks into the adjustment and is therefore not suitable for making the decomposition. Globally, the regression and fSST methods produce generally consistent results; however, the regression values are dependent on the number of years analyzed and have considerably larger uncertainties. Regionally, there are substantial differences between methods. The pattern of change calculated using regression reverses sign in many regions as the number of years analyzed increases. This makes it difficult to establish what effects are included in the decomposition. The fSST method provides a more clear-cut separation in terms of what physical drivers are included in each component. The fSST results are less affected by methodological choices and exhibit much less variability. We find that the precipitation adjustment is weakly affected by the choice of SST climatology.
Predictors of Psychological Adjustment Among Homeless and Housed Female Youth
Votta, Elizabeth; Farrell, Susan
2009-01-01
Objective: This cross-sectional study explored differences in the impact of self-reported coping style, self-esteem and perceived support on the psychological adjustment of homeless and housed female youth. Method: Data were obtained from homeless female youth (n = 72; M = 17.5 years) accessing an emergency shelter in a large Canadian urban centre and a comparison group of housed females (n = 102; M = 17.2 years) from local high schools who had never resided in a shelter. Results: Homeless youth reported lower self-worth, increased suicidal behaviour, less perceived parental support and higher levels of depressive symptoms and both internalizing and externalizing behaviour problems than housed youth. Hierarchical regression analyses indicated that disengagement coping was a significant predictor of depressive symptoms and both internalizing and externalizing behaviour problems in homeless and housed youth. Conclusions: Findings reflect the merit of considering coping style, parental support and self-worth in the presentation of depressive symptoms and behaviour problems in homeless and housed female youth. PMID:19495433
Adjusting Performance Measures to Ensure Equitable Plan Comparisons
Zaslavsky, Alan M.; Zaborski, Lawrence B.; Ding, Lin; Shaul, James A.; Cioffi, Matthew J.; Cleary, Paul D.
2001-01-01
When comparing health plans on scores from the Medicare Managed Care Consumer Assessment of Health Plans (MMC-CAHPS®) survey, the results should be adjusted for patient characteristics, not under the control of health plans, that might affect survey results. We developed an adjustment model that uses self-reported measures of health status, age, education, and whether someone helped the respondent with the questionnaire. The associations of health and education with survey responses differed by HCFA administrative region. Consequently, we recommend that the case-mix model include regional interactions. Analyses of the impact of adjustment show that the adjustments were usually small but not negligible. PMID:25372572
Bias associated with using the estimated propensity score as a regression covariate.
Hade, Erinn M; Lu, Bo
2014-01-15
The use of propensity score methods to adjust for selection bias in observational studies has become increasingly popular in public health and medical research. A substantial portion of studies using propensity score adjustment treat the propensity score as a conventional regression predictor. Through a Monte Carlo simulation study, Austin and colleagues investigated the bias associated with treatment effect estimation when the propensity score is used as a covariate in nonlinear regression models, such as logistic regression and Cox proportional hazards models. We show that the bias exists even in a linear regression model when the estimated propensity score is used, and derive the explicit form of the bias. We also conduct an extensive simulation study to compare the performance of such covariate adjustment with propensity score stratification, propensity score matching, the inverse probability of treatment weighted method, and nonparametric functional estimation using splines. The simulation scenarios are designed to reflect real data analysis practice. Instead of specifying a known parametric propensity score model, we generate the data by considering various degrees of overlap of the covariate distributions between treated and control groups. Propensity score matching excels when the treated group is contained within a larger control pool, while the model-based adjustment may have an edge when treated and control groups do not have too much overlap. Overall, adjusting for the propensity score through stratification or matching followed by regression, or using splines, appears to be a good practical strategy.
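To illustrate why stratification on the propensity score can outperform entering the score as a conventional regression covariate, here is a minimal sketch in which the outcome depends on the square of the confounder, so a model linear in the score would be misspecified while quintile stratification still works. The scenario is invented for illustration and is not one of the paper's simulation settings; the true propensity score is used directly rather than estimated.

```python
import random

random.seed(3)

# Illustrative data: outcome depends on x**2, treatment on x.
n = 20000
rows = []
for _ in range(n):
    x = random.random()
    e = 0.1 + 0.8 * x                               # true propensity score
    t = 1 if random.random() < e else 0
    y = 1.0 * t + 4.0 * x * x + random.gauss(0, 1)  # true effect is 1.0
    rows.append((e, t, y))

treated = [y for _, t, y in rows if t]
control = [y for _, t, y in rows if not t]
naive = sum(treated) / len(treated) - sum(control) / len(control)

# Stratify on propensity-score quintiles and average the within-stratum
# treated-minus-control differences.
rows.sort(key=lambda r: r[0])
size, total, used = n // 5, 0.0, 0
for s in range(5):
    stratum = rows[s * size:(s + 1) * size]
    y1 = [y for _, t, y in stratum if t]
    y0 = [y for _, t, y in stratum if not t]
    if y1 and y0:
        total += sum(y1) / len(y1) - sum(y0) / len(y0)
        used += 1
strat = total / used
print(round(naive, 2), round(strat, 2))  # naive is badly biased; strat is near 1.0
```

Within each quintile the treated and control covariate distributions nearly coincide, so stratification removes most of the confounding without any outcome-model assumption; a small residual within-stratum bias remains, which is why the paper recommends following stratification or matching with regression.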
Marston, Louise; Peacock, Janet L; Yu, Keming; Brocklehurst, Peter; Calvert, Sandra A; Greenough, Anne; Marlow, Neil
2009-07-01
Studies of prematurely born infants contain a relatively large percentage of multiple births, so the resulting data have a hierarchical structure with small clusters of size 1, 2 or 3. Ignoring the clustering may lead to incorrect inferences. The aim of this study was to compare statistical methods which can be used to analyse such data: generalised estimating equations, multilevel models, multiple linear regression and logistic regression. Four datasets which differed in total size and in percentage of multiple births (n = 254, multiple 18%; n = 176, multiple 9%; n = 10 098, multiple 3%; n = 1585, multiple 8%) were analysed. With the continuous outcome, two-level models produced similar results in the larger dataset, while generalised least squares multilevel modelling (ML GLS, 'xtreg' in Stata) and maximum likelihood multilevel modelling (ML MLE, 'xtmixed' in Stata) produced divergent estimates using the smaller dataset. For the dichotomous outcome, most methods, except multilevel modelling by Gauss-Hermite quadrature (ML GH, 'xtlogit' in Stata), gave similar odds ratios and 95% confidence intervals within datasets. For the continuous outcome, our results suggest using multilevel modelling. We conclude that generalised least squares multilevel modelling (ML GLS, 'xtreg' in Stata) and maximum likelihood multilevel modelling (ML MLE, 'xtmixed' in Stata) should be used with caution when the dataset is small. Where the outcome is dichotomous and there is a relatively large percentage of non-independent data, it is recommended that these are accounted for in analyses using logistic regression with adjusted standard errors or multilevel modelling. If, however, the dataset has a small percentage of clusters greater than size 1 (e.g. a population dataset of children where there are few multiples) there appears to be less need to adjust for clustering.
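The recommendation to use cluster-adjusted standard errors can be illustrated on the simplest possible estimator, a mean. In the hypothetical cohort below, twin pairs share a random effect, so their outcomes are positively correlated and the naive independent-data standard error is too small; a sandwich-style cluster-robust standard error accounts for this. The cohort sizes and variance components are illustrative assumptions.

```python
import math
import random

random.seed(4)

# Hypothetical cohort: 500 singletons and 250 twin pairs; members of a twin
# pair share a cluster effect u, so their outcomes are correlated.
clusters = []
for _ in range(500):
    u = random.gauss(0, 1)
    clusters.append([u + random.gauss(0, 1)])
for _ in range(250):
    u = random.gauss(0, 1)
    clusters.append([u + random.gauss(0, 1), u + random.gauss(0, 1)])

obs = [y for c in clusters for y in c]
n = len(obs)
mean = sum(obs) / n

# Naive standard error of the mean treats all observations as independent.
naive_se = math.sqrt(sum((y - mean) ** 2 for y in obs) / (n - 1) / n)

# Cluster-robust standard error sums residuals within each cluster before
# squaring, so within-cluster correlation is not ignored.
robust_se = math.sqrt(sum(sum(y - mean for y in c) ** 2 for c in clusters)) / n
print(round(naive_se, 4), round(robust_se, 4))  # robust SE exceeds the naive one
```

The same within-cluster aggregation of score contributions underlies cluster-robust standard errors in logistic regression; when few clusters exceed size 1, the two estimates nearly coincide, matching the abstract's final observation.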
Kokuhu, Takatoshi; Fukushima, Keizo; Ushigome, Hidetaka; Yoshimura, Norio; Sugioka, Nobuyuki
2013-01-01
The optimal use and monitoring of cyclosporine A (CyA) have remained unclear, and the current strategy of CyA treatment requires frequent dose adjustment following an empirical initial dosage adjusted for total body weight (TBW). The primary aim of this study was to evaluate age and anthropometric parameters as predictors for dose adjustment of CyA; the secondary aim was to compare the usefulness of concentration monitoring at predose (C0) and 2 hours postdose (C2). An open-label, non-randomized, retrospective study was performed in 81 renal transplant patients in Japan during 2001-2010. The relationships between the area under the blood concentration-time curve (AUC0-9) of CyA and its C0 or C2 level were assessed with a linear regression analysis model. In addition to age, 7 anthropometric parameters were tested as predictors for AUC0-9 of CyA: TBW, height (HT), body mass index (BMI), body surface area (BSA), ideal body weight (IBW), lean body weight (LBW), and fat free mass (FFM). Correlations between AUC0-9 of CyA and these parameters were also analyzed with a linear regression model. The rank order of the correlation coefficients was C0 > C2 (C0: r=0.6273; C2: r=0.5562). The linear regression analyses between AUC0-9 of CyA and the candidate parameters indicated their potential usefulness in the following rank order: IBW > FFM > HT > BSA > LBW > TBW > BMI > Age. In conclusion, C2 levels after oral administration showed large variation and could carry a high risk of overdosing; C2 monitoring alone was therefore not considered a useful approach after oral dosing of CyA, but should rather be combined with C0 monitoring. The regression analyses between AUC0-9 of CyA and anthropometric parameters indicated that IBW was potentially a superior predictor to TBW, the current empiric standard, for dose adjustment of CyA (IBW: r=0.5181; TBW: r=0.3192); however, this finding seems to lack a pharmacokinetic rationale and thus warrants further basic and clinical
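The study's comparison of C0 and C2 rests on each monitor's correlation with the measured AUC; a Pearson correlation can be computed directly. The values below are fabricated for illustration, not the study's measurements:

```python
# Pearson correlation between full AUC and two single-point monitors.
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up concentrations: predose levels track AUC closely, 2-h levels are noisier.
auc = [4200, 5100, 3900, 6100, 4800, 5600]
c0  = [180, 240, 160, 300, 220, 260]
c2  = [900, 1500, 700, 1400, 1300, 900]

print(pearson_r(auc, c0), pearson_r(auc, c2))
```

With numbers like these, the predose level correlates far more tightly with AUC than the 2-hour level, mirroring the reported ordering C0 > C2.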
Regression analysis of cytopathological data
Whittemore, A.S.; McLarty, J.W.; Fortson, N.; Anderson, K.
1982-12-01
Epithelial cells from the human body are frequently labelled according to one of several ordered levels of abnormality, ranging from normal to malignant. The label of the most abnormal cell in a specimen determines the score for the specimen. This paper presents a model for the regression of specimen scores against continuous and discrete variables, such as host exposure to carcinogens. Application to data and tests for adequacy of model fit are illustrated using sputum specimens obtained from a cohort of former asbestos workers.
The purpose of this report is to provide a reference manual that could be used by investigators for making informed use of logistic regression using two methods (standard logistic regression and MARS). The details for analyses of relationships between a dependent binary response ...
Multiatlas segmentation as nonparametric regression.
Awate, Suyash P; Whitaker, Ross T
2014-09-01
This paper proposes a novel theoretical framework to model and analyze the statistical characteristics of a wide range of segmentation methods that incorporate a database of label maps or atlases; such methods are termed as label fusion or multiatlas segmentation. We model these multiatlas segmentation problems as nonparametric regression problems in the high-dimensional space of image patches. We analyze the nonparametric estimator's convergence behavior that characterizes expected segmentation error as a function of the size of the multiatlas database. We show that this error has an analytic form involving several parameters that are fundamental to the specific segmentation problem (determined by the chosen anatomical structure, imaging modality, registration algorithm, and label-fusion algorithm). We describe how to estimate these parameters and show that several human anatomical structures exhibit the trends modeled analytically. We use these parameter estimates to optimize the regression estimator. We show that the expected error for large database sizes is well predicted by models learned on small databases. Thus, a few expert segmentations can help predict the database sizes required to keep the expected error below a specified tolerance level. Such cost-benefit analysis is crucial for deploying clinical multiatlas segmentation systems.
Hunter, Paul R
2009-12-01
Household water treatment (HWT) is being widely promoted as an appropriate intervention for reducing the burden of waterborne disease in poor communities in developing countries. A recent study has raised concerns about the effectiveness of HWT, in part because of concerns over the lack of blinding and in part because of considerable heterogeneity in the reported effectiveness of randomized controlled trials. This study set out to investigate the causes of this heterogeneity and so identify factors associated with good health gains. Studies identified in an earlier systematic review and meta-analysis were supplemented with more recently published randomized controlled trials. A total of 28 separate studies of randomized controlled trials of HWT with 39 intervention arms were included in the analysis. Heterogeneity was studied using the "metareg" command in Stata. Initial analyses with single candidate predictors were undertaken, and all variables significant at the P < 0.2 level were included in a final regression model. Further analyses were done to estimate the effect of the interventions over time by Monte Carlo modeling using @Risk and the parameter estimates from the final regression model. The overall effect size of all unblinded studies was relative risk = 0.56 (95% confidence intervals 0.51-0.63), but after adjusting for bias due to lack of blinding the effect size was much lower (RR = 0.85, 95% CI = 0.76-0.97). Four main variables were significant predictors of effectiveness of intervention in a multipredictor meta-regression model: log duration of study follow-up (regression coefficient of log effect size = 0.186, standard error (SE) = 0.072), whether or not the study was blinded (coefficient 0.251, SE 0.066) and being conducted in an emergency setting (coefficient -0.351, SE 0.076) were all significant predictors of effect size in the final model. Compared to the ceramic filter, all other interventions were much less effective (Biosand 0.247, 0
New Parents' Psychological Adjustment and Trajectories of Early Parental Involvement.
Jia, Rongfang; Kotila, Letitia E; Schoppe-Sullivan, Sarah J; Kamp Dush, Claire M
2016-02-01
Trajectories of parental involvement time (engagement and child care) across 3, 6, and 9 months postpartum and associations with parents' own and their partners' psychological adjustment (dysphoria, anxiety, and empathic personal distress) were examined using a sample of dual-earner couples experiencing first-time parenthood (N = 182 couples). Using time diary measures that captured intensive parenting moments, hierarchical linear modeling analyses revealed that patterns of associations between psychological adjustment and parental involvement time depended on the parenting domain, aspect of psychological adjustment, and parent gender. Psychological adjustment difficulties tended to bias the 2-parent system toward a gendered pattern of "mother step in" and "father step out," as father involvement tended to decrease, and mother involvement either remained unchanged or increased, in response to their own and their partners' psychological adjustment difficulties. In contrast, few significant effects were found in models using parental involvement to predict psychological adjustment.
Spatial regression analysis on 32 years of total column ozone data
NASA Astrophysics Data System (ADS)
Knibbe, J. S.; van der A, R. J.; de Laat, A. T. J.
2014-08-01
Multiple-regression analyses have been performed on 32 years of total ozone column data that was spatially gridded with a 1 × 1.5° resolution. The total ozone data consist of the MSR (Multi Sensor Reanalysis; 1979-2008) and 2 years of assimilated SCIAMACHY (SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY) ozone data (2009-2010). The two-dimensionality in this data set allows us to perform the regressions locally and investigate spatial patterns of regression coefficients and their explanatory power. Seasonal dependencies of ozone on regressors are included in the analysis. A new physically oriented model is developed to parameterize stratospheric ozone. Ozone variations on nonseasonal timescales are parameterized by explanatory variables describing the solar cycle, stratospheric aerosols, the quasi-biennial oscillation (QBO), El Niño-Southern Oscillation (ENSO) and stratospheric alternative halogens which are parameterized by the effective equivalent stratospheric chlorine (EESC). For several explanatory variables, seasonally adjusted versions of these explanatory variables are constructed to account for the difference in their effect on ozone throughout the year. To account for seasonal variation in ozone, explanatory variables describing the polar vortex, geopotential height, potential vorticity and average day length are included. Results of this regression model are compared to that of a similar analysis based on a more commonly applied statistically oriented model. The physically oriented model provides spatial patterns in the regression results for each explanatory variable. The EESC has a significant depleting effect on ozone at mid- and high latitudes, the solar cycle affects ozone positively mostly in the Southern Hemisphere, stratospheric aerosols affect ozone negatively at high northern latitudes, the effect of QBO is positive and negative in the tropics and mid- to high latitudes, respectively, and ENSO affects ozone negatively
Haus, Frédérique; Boissel, Olivier; Junter, Guy Alain
2003-02-01
A set of 38 mineral base oils was characterized by a number of chemical (i.e., overall chemical composition) and physical parameters used routinely in industry. Their primary biodegradability was evaluated using the CEC L-33-A-93 test. Multiple (stepwise) linear regression (MLR) analyses were performed to describe the relationships between the biodegradability values and the chemical or physical properties of oils. Chemical, physical, and both types of parameters were successively used as independent variables. Using chemical descriptors as variables, a four-variable model equation was obtained that explained only 68.2% (adjusted R-squared statistic=68.2%) of the variability in biodegradability. The fitting was improved by using either the physical or the whole parameters as variables. MLR analyses led to three-descriptor model equations involving kinematic viscosity (as log), Noack volatility (as log) and either the viscosity index (pure physical model) or the paraffinic carbon percentage (mixed chemical-physical model). These two models displayed very similar adjusted R-squared statistics, of approximately 91%. Their predicting ability was verified using 25 additional base oils or oil blends. For 80% of oils on a total of 63, the absolute percentage error on biodegradability predicted by either model was lower than 20%. Kinematic viscosity was by far the most influential parameter in the two models.
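The model comparisons above hinge on the adjusted R-squared statistic, which penalizes each added descriptor; its formula is easy to state. The numbers below are illustrative only, loosely echoing the abstract's 38-oil sample:

```python
# Adjusted R-squared: penalizes raw R-squared for the number of predictors.
def adjusted_r2(r2, n, p):
    """n observations, p predictors (excluding the intercept)."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# A four-variable fit with raw R2 = 0.70 on 38 oils versus a
# three-variable fit with raw R2 = 0.92:
print(adjusted_r2(0.70, 38, 4))
print(adjusted_r2(0.92, 38, 3))
```

The jump from roughly 68% to 91% adjusted R-squared when moving from chemical to physical descriptors survives the penalty partly because the physical models also use fewer variables.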
Goossens, Mariëlle E; Kindermans, Hanne P; Morley, Stephen J; Roelofs, Jeffrey; Verbunt, Jeanine; Vlaeyen, Johan W
2010-08-01
Recurrent pain not only has an impact on disability, but on the long term it may become a threat to one's sense of self. This paper presents a cross-sectional study of patients with work-related upper extremity pain and focuses on: (1) the role of self-discrepancies in this group, (2) the associations between self-discrepancies, pain, emotions and (3) the interaction between self-discrepancies and flexible-goal adjustment. Eighty-nine participants completed standardized self-report measures of pain intensity, pain duration, anxiety, depression and flexible-goal adjustment. A Selves Questionnaire was used to generate self-discrepancies. A series of hierarchical regression analyses showed relationships between actual-ought other, actual-ought self, actual-feared self-discrepancies and depression as well as a significant association between actual-ought other self-discrepancy and anxiety. Furthermore, significant interactions were found between actual-ought other self-discrepancies and flexibility, indicating that less flexible participants with large self-discrepancies score higher on depression. This study showed that self-discrepancies are related to negative emotions and that flexible-goal adjustment served as a moderator in this relationship. The view of self in pain and flexible-goal adjustment should be considered as important variables in the process of chronic pain.
Langberg, Joshua M; Dvorsky, Melissa R; Kipperman, Kristen L; Molitor, Stephen J; Eddy, Laura D
2015-06-01
The primary aim of this study was to evaluate whether alcohol consumption longitudinally predicts the adjustment, overall functioning, and grade point average (GPA) of college students with ADHD and to determine whether self-report of executive functioning (EF) mediates these relationships. Sixty-two college students comprehensively diagnosed with ADHD completed ratings at the beginning and end of the school year. Regression analyses revealed that alcohol consumption rated at the beginning of the year significantly predicted self-report of adjustment and overall impairment at the end of the year, above and beyond ADHD symptoms and baseline levels of adjustment/impairment, but did not predict GPA. Exploratory multiple mediator analyses suggest that alcohol use impacts impairment primarily through EF deficits in self-motivation. EF deficits in the motivation to refrain from pursuing immediately rewarding behaviors in order to work toward long-term goals appear to be particularly important in understanding why college students with ADHD who consume alcohol have a higher likelihood of experiencing significant negative outcomes. The implications of these findings for the prevention of the negative functional outcomes often experienced by college students with ADHD are discussed.
Recognition of caudal regression syndrome.
Boulas, Mari M
2009-04-01
Caudal regression syndrome, also referred to as caudal dysplasia and sacral agenesis syndrome, is a rare congenital malformation characterized by varying degrees of developmental failure early in gestation. It involves the lower extremities, the lumbar and coccygeal vertebrae, and corresponding segments of the spinal cord. This is a rare disorder, and true pathogenesis is unclear. The etiology is thought to be related to maternal diabetes, genetic predisposition, and vascular hypoperfusion, but no true causative factor has been determined. Fetal diagnostic tools allow for early recognition of the syndrome, and careful examination of the newborn is essential to determine the extent of the disorder. Associated organ system dysfunction depends on the severity of the disease. Related defects are structural, and systematic problems including respiratory, cardiac, gastrointestinal, urinary, orthopedic, and neurologic can be present in varying degrees of severity and in different combinations. A multidisciplinary approach to management is crucial. Because the primary pathology is irreversible, treatment is only supportive.
Practical Session: Multiple Linear Regression
NASA Astrophysics Data System (ADS)
Clausel, M.; Grégoire, G.
2014-12-01
Three exercises are proposed to illustrate simple linear regression. The first investigates the influence of several factors on atmospheric pollution; it was proposed by D. Chessel and A.B. Dufour in Lyon 1 (see Sect. 6 of http://pbil.univ-lyon1.fr/R/pdf/tdr33.pdf) and is based on data from 20 U.S. cities. Exercise 2 is an introduction to model selection, whereas Exercise 3 provides a first example of analysis of variance. Exercises 2 and 3 have been proposed by A. Dalalyan at ENPC (see Exercises 2 and 3 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_5.pdf).
Lumbar herniated disc: spontaneous regression
Yüksel, Kasım Zafer
2017-01-01
Background: Low back pain is a frequent condition that results in substantial disability and causes admission of patients to neurosurgery clinics. The aim of this study was to evaluate and present the therapeutic outcomes in lumbar disc hernia (LDH) patients treated by means of a conservative approach, consisting of bed rest and medical therapy. Methods: This retrospective cohort was carried out in the neurosurgery departments of hospitals in Kahramanmaraş city, and 23 patients diagnosed with LDH at the levels of L3−L4, L4−L5 or L5−S1 were enrolled. Results: The average age was 38.4 ± 8.0 and the chief complaint was low back pain and sciatica radiating to one or both lower extremities. Conservative treatment was administered. Neurological examination findings, durations of treatment and intervals until symptomatic recovery were recorded. Laségue tests and neurosensory examination revealed that mild neurological deficits existed in 16 of our patients. Previously, 5 patients had received physiotherapy and 7 patients had been on medical treatment. The numbers of patients with LDH at the levels of L3−L4, L4−L5, and L5−S1 were 1, 13, and 9, respectively. All patients reported that they had benefited from medical treatment and bed rest, and radiologic improvement was observed simultaneously on MRI scans. The average duration until symptomatic recovery and/or regression of LDH symptoms was 13.6 ± 5.4 months (range: 5−22). Conclusions: It should be kept in mind that lumbar disc hernias can regress with medical treatment and rest without surgery, and there should be an awareness that these patients can recover radiologically. This must be taken into account during decision making for surgical intervention in LDH patients devoid of indications for emergent surgery.
This Infographic shows the National Cancer Institute SEER Incidence Trends. The graphs show the Average Annual Percent Change (AAPC) 2002-2011. For Men, Thyroid: 5.3*, Liver & IBD: 3.6*, Melanoma: 2.3*, Kidney: 2.0*, Myeloma: 1.9*, Pancreas: 1.2*, Leukemia: 0.9*, Oral Cavity: 0.5, Non-Hodgkin Lymphoma: 0.3*, Esophagus: -0.1, Brain & ONS: -0.2*, Bladder: -0.6*, All Sites: -1.1*, Stomach: -1.7*, Larynx: -1.9*, Prostate: -2.1*, Lung & Bronchus: -2.4*, and Colon & Rectum: -3.0*. For Women, Thyroid: 5.8*, Liver & IBD: 2.9*, Myeloma: 1.8*, Kidney: 1.6*, Melanoma: 1.5, Corpus & Uterus: 1.3*, Pancreas: 1.1*, Leukemia: 0.6*, Brain & ONS: 0, Non-Hodgkin Lymphoma: -0.1, All Sites: -0.1, Breast: -0.3, Stomach: -0.7*, Oral Cavity: -0.7*, Bladder: -0.9*, Ovary: -0.9*, Lung & Bronchus: -1.0*, Cervix: -2.4*, and Colon & Rectum: -2.7*. * AAPC is significantly different from zero (p<.05). Rates were adjusted for reporting delay in the registry. www.cancer.gov Source: Special section of the Annual Report to the Nation on the Status of Cancer, 1975-2011.
Nonlinear Hydrostatic Adjustment.
NASA Astrophysics Data System (ADS)
Bannon, Peter R.
1996-12-01
The final equilibrium state of Lamb's hydrostatic adjustment problem is found for finite amplitude heating. Lamb's problem consists of the response of a compressible atmosphere to an instantaneous, horizontally homogeneous heating. Results are presented for both isothermal and nonisothermal atmospheres. As in the linear problem, the fluid displacements are confined to the heated layer and to the region aloft, with no displacement of the fluid below the heating. The region above the heating is displaced uniformly upward for heating and downward for cooling. The amplitudes of the displacements are larger for cooling than for warming. Examination of the energetics reveals that the fraction of the heat deposited into the acoustic modes increases linearly with the amplitude of the heating. This fraction is typically small (e.g., 0.06% for a uniform warming of 1 K) and is essentially independent of the lapse rate of the base-state atmosphere. In contrast, a fixed fraction of the available energy generated by the heating goes into the acoustic modes. This fraction (e.g., 12% for a standard tropospheric lapse rate) agrees with the linear result and increases with increasing stability of the base-state atmosphere. The compressible results are compared to solutions using various forms of the soundproof equations. None of the soundproof equations predict the finite amplitude solutions accurately. However, in the small amplitude limit, only the equations for deep convection advanced by Dutton and Fichtl predict the thermodynamic state variables accurately for a nonisothermal base-state atmosphere.
Exercise as a treatment for depression: A meta-analysis adjusting for publication bias.
Schuch, Felipe B; Vancampfort, Davy; Richards, Justin; Rosenbaum, Simon; Ward, Philip B; Stubbs, Brendon
2016-06-01
The effects of exercise on depression have been a source of contentious debate. Meta-analyses have demonstrated a range of effect sizes. Both inclusion criteria and heterogeneity may influence the effect sizes reported. The extent and influence of publication bias is also unknown. Randomized controlled trials (RCTs) were identified from a recent Cochrane review and searches of major electronic databases from 01/2013 to 08/2015. We included RCTs of exercise interventions in people with depression (including those with a diagnosis of major depressive disorder (MDD) or ratings on depressive symptoms), comparing exercise versus control conditions. A random effects meta-analysis calculating the standardized mean difference (SMD, 95% confidence interval; CI), meta-regressions, trim and fill and fail-safe n analyses were conducted. Twenty-five RCTs were included comparing exercise versus control comparison groups, including 9 examining participants with MDD. Overall, exercise had a large and significant effect on depression (SMD adjusted for publication bias = 1.11 (95% CI 0.79-1.43)) with a fail-safe number of 1057. Most adjusted analyses suggested publication bias led to an underestimated SMD. Larger effects were found for interventions in MDD, utilising aerobic exercise, at moderate and vigorous intensities, in a supervised and unsupervised format. In MDD, larger effects were found for moderate intensity, aerobic exercise, and interventions supervised by exercise professionals. Exercise has a large and significant antidepressant effect in people with depression (including MDD). Previous meta-analyses may have underestimated the benefits of exercise due to publication bias. Our data strongly support the claim that exercise is an evidence-based treatment for depression.
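The random-effects pooling behind a summary SMD like the one reported can be sketched with the DerSimonian-Laird estimator. The trial values below are invented for illustration, not the review's data:

```python
# DerSimonian-Laird random-effects pooling of standardized mean differences.
smd = [1.4, 0.6, 1.0, 0.3, 1.8]        # per-trial effect sizes (made up)
var = [0.20, 0.10, 0.15, 0.08, 0.30]   # per-trial sampling variances (made up)

k = len(smd)
w = [1 / v for v in var]                                       # fixed-effect weights
d_fixed = sum(wi * di for wi, di in zip(w, smd)) / sum(w)      # fixed-effect pooled SMD
q = sum(wi * (di - d_fixed) ** 2 for wi, di in zip(w, smd))    # Cochran's Q
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (k - 1)) / c)                             # between-trial variance

w_re = [1 / (v + tau2) for v in var]                           # random-effects weights
pooled = sum(wi * di for wi, di in zip(w_re, smd)) / sum(w_re)
se = (1 / sum(w_re)) ** 0.5

print(pooled, pooled - 1.96 * se, pooled + 1.96 * se)          # pooled SMD and 95% CI
```

Trim-and-fill and fail-safe n, used in the abstract to probe publication bias, operate on top of exactly this kind of pooled estimate.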
Geomorphic analyses from space imagery
NASA Technical Reports Server (NTRS)
Morisawa, M.
1985-01-01
One of the most obvious applications of space imagery to geomorphological analyses is in the study of drainage patterns and channel networks. LANDSAT, high altitude photography and other types of remote sensing imagery are excellent for depicting stream networks on a regional scale because of their broad coverage in a single image. They offer a valuable tool for comparing and analyzing drainage patterns and channel networks all over the world. Three aspects considered in this geomorphological study are: (1) the origin, evolution and rates of development of drainage systems; (2) the topological studies of network and channel arrangements; and (3) the adjustment of streams to tectonic events and geologic structure (i.e., the mode and rate of adjustment).
Model building strategy for logistic regression: purposeful selection.
Zhang, Zhongheng
2016-03-01
Logistic regression is one of the most commonly used models to account for confounders in the medical literature. This article introduces how to perform the purposeful selection model building strategy with R. I stress the use of the likelihood ratio test to see whether deleting a variable will have a significant impact on model fit. A deleted variable should also be checked for whether it is an important adjustment of the remaining covariates. Interactions should be checked to disentangle complex relationships between covariates and their synergistic effect on the response variable. The model should be checked for goodness-of-fit (GOF); in other words, how well the fitted model reflects the real data. The Hosmer-Lemeshow GOF test is the most widely used for logistic regression models.
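A minimal sketch of the likelihood ratio test at the heart of purposeful selection (simulated data and a plain gradient-ascent fitter, not the article's R code): drop a covariate only if twice the log-likelihood loss from deleting it is small relative to the chi-square(1) critical value.

```python
# Hypothetical example: x1 truly affects the outcome, x2 is pure noise.
import math
import random

random.seed(1)

def fit_logistic(X, y, steps=200, lr=0.5):
    """Gradient-ascent logistic regression; returns the maximized log-likelihood."""
    p = len(X[0])
    beta = [0.0] * p
    for _ in range(steps):
        grad = [0.0] * p
        for xi, yi in zip(X, y):
            mu = 1 / (1 + math.exp(-sum(b * v for b, v in zip(beta, xi))))
            for j in range(p):
                grad[j] += (yi - mu) * xi[j]
        beta = [b + lr * g / len(y) for b, g in zip(beta, grad)]
    ll = 0.0
    for xi, yi in zip(X, y):
        mu = 1 / (1 + math.exp(-sum(b * v for b, v in zip(beta, xi))))
        ll += yi * math.log(mu) + (1 - yi) * math.log(1 - mu)
    return ll

X, y = [], []
for _ in range(500):
    x1, x2 = random.gauss(0, 1), random.gauss(0, 1)
    pr = 1 / (1 + math.exp(-(0.5 + 1.5 * x1)))   # only x1 enters the true model
    X.append([1.0, x1, x2])
    y.append(1 if random.random() < pr else 0)

ll_full = fit_logistic(X, y)
lrt_x1 = 2 * (ll_full - fit_logistic([[r[0], r[2]] for r in X], y))
lrt_x2 = 2 * (ll_full - fit_logistic([[r[0], r[1]] for r in X], y))

# Compare each statistic with the chi-square(1) critical value of 3.84.
print("drop x1:", lrt_x1)  # far above 3.84: keep x1
print("drop x2:", lrt_x2)  # usually below 3.84: x2 is a deletion candidate
```

In R the same comparison is one `anova(reduced, full, test = "LRT")` call; the sketch just makes the statistic explicit.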
Genetics Home Reference: caudal regression syndrome
Regressions during reading: The cost depends on the cause.
Eskenazi, Michael A; Folk, Jocelyn R
2016-11-21
The direction and duration of eye movements during reading is predominantly determined by cognitive and linguistic processing, but some low-level oculomotor effects also influence the duration and direction of eye movements. One such effect is inhibition of return (IOR), which results in an increased latency to return attention to a target that has been previously attended (Posner & Cohen, Attention and Performance X: Control of Language Processes, 32, 531-556, 1984). Although this is a low level effect, it has also been found in the complex task of reading (Henderson & Luke, Psychonomic Bulletin & Review, 19(6), 1101-1107, 2012; Rayner, Juhasz, Ashby, & Clifton, Vision Research, 43(9), 1027-1034, 2003). The purpose of the current study was to isolate the potentially different causes of regressive eye movements: to adjust for oculomotor error and to assist with comprehension difficulties. We found that readers demonstrated an IOR effect when regressions were caused by oculomotor error, but not when regressions were caused by comprehension difficulties. The results suggest that IOR is primarily associated with low-level oculomotor control of eye movements, and that regressive eye movements that are controlled by comprehension processes are not subject to IOR effects. The results have implications for understanding the relationship between oculomotor and cognitive control of eye movements and for models of eye movement control.
Semiparametric regression during 2003–2007*
Ruppert, David; Wand, M.P.; Carroll, Raymond J.
2010-01-01
Semiparametric regression is a fusion between parametric regression and nonparametric regression that integrates low-rank penalized splines, mixed model and hierarchical Bayesian methodology – thus allowing more streamlined handling of longitudinal and spatial correlation. We review progress in the field over the five-year period between 2003 and 2007. We find semiparametric regression to be a vibrant field with substantial involvement and activity, continual enhancement and widespread application.
A statistical test for the equality of differently adjusted incidence rate ratios.
Hoffmann, Kurt; Pischon, Tobias; Schulz, Mandy; Schulze, Matthias B; Ray, Jennifer; Boeing, Heiner
2008-03-01
An incidence rate ratio (IRR) is a meaningful effect measure in epidemiology if it is adjusted for all important confounders. For evaluation of the impact of adjustment, adjusted IRRs should be compared with crude IRRs. The aim of this methodological study was to present a statistical approach for testing the equality of adjusted and crude IRRs and to derive a confidence interval for the ratio of the two IRRs. The method can be extended to compare two differently adjusted IRRs and, thus, to evaluate the effect of additional adjustment. The method runs immediately on existing software. To illustrate the application of this approach, the authors studied adjusted IRRs for two risk factors of type 2 diabetes using data from the European Prospective Investigation into Cancer and Nutrition-Potsdam Study from 2005. The statistical method described may be helpful as an additional tool for analyzing epidemiologic cohort data and for interpreting results obtained from Cox regression models with adjustment for different covariates.
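The two quantities whose equality the test addresses can be made concrete (hypothetical person-time counts, not the EPIC-Potsdam data; the article's test statistic itself is not reproduced here):

```python
# Crude IRR pools all person-time; the Mantel-Haenszel IRR adjusts for a
# binary confounder (age group) by pooling stratum-specific rates.
cases_exposed   = {"young": 30, "old": 60}
py_exposed      = {"young": 1000, "old": 500}   # person-years at risk
cases_unexposed = {"young": 20, "old": 40}
py_unexposed    = {"young": 2000, "old": 400}

crude_irr = (sum(cases_exposed.values()) / sum(py_exposed.values())) / \
            (sum(cases_unexposed.values()) / sum(py_unexposed.values()))

# Mantel-Haenszel estimator for person-time data:
#   sum_k a_k * T0k / Tk   over   sum_k c_k * T1k / Tk
num = den = 0.0
for k in cases_exposed:
    tk = py_exposed[k] + py_unexposed[k]
    num += cases_exposed[k] * py_unexposed[k] / tk
    den += cases_unexposed[k] * py_exposed[k] / tk
adjusted_irr = num / den

# The ratio of the two IRRs is the quantity the test puts a CI around.
print(crude_irr, adjusted_irr, crude_irr / adjusted_irr)
```

A ratio far from 1, as in this toy example, signals that adjustment materially changes the effect estimate, which is precisely when the proposed test is informative.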
2011-01-01
Background: Although the validity and safety of antipsychotic polypharmacy remain unclear, it is commonplace in the treatment of schizophrenia. This study aimed to investigate the degree to which antipsychotic polypharmacy contributed to metabolic syndrome in outpatients with schizophrenia, after adjustment for the effects of lifestyle. Methods: A cross-sectional survey was carried out between April 2007 and October 2007 at Yamanashi Prefectural KITA hospital in Japan. 334 patients consented to this cross-sectional study. We measured the components constituting metabolic syndrome, and interviewed the participants about their lifestyle. We classified metabolic syndrome into four groups according to the severity of metabolic disturbance: the metabolic syndrome group; the pre-metabolic syndrome group; the visceral fat obesity group; and the normal group. We used multinomial logistic regression models to assess the association of metabolic syndrome with antipsychotic polypharmacy, adjusting for lifestyle. Results: Seventy-four (22.2%) patients were in the metabolic syndrome group, 61 (18.3%) patients were in the pre-metabolic syndrome group, and 41 (12.3%) patients were in the visceral fat obesity group. Antipsychotic polypharmacy was present in 167 (50.0%) patients. In multinomial logistic regression analyses, antipsychotic polypharmacy was significantly associated with the pre-metabolic syndrome group (adjusted odds ratio [AOR], 2.348; 95% confidence interval [CI], 1.181-4.668), but not with the metabolic syndrome group (AOR, 1.269; 95% CI, 0.679-2.371). Conclusions: These results suggest that antipsychotic polypharmacy, compared with monotherapy, may be independently associated with an increased risk of having pre-metabolic syndrome, even after adjusting for patients' lifestyle characteristics. As metabolic syndrome is associated with an increased risk of cardiovascular mortality, further studies are needed to clarify the validity and safety of antipsychotic polypharmacy.
Survival Regression Modeling Strategies in CVD Prediction
Barkhordari, Mahnaz; Padyab, Mojgan; Sardarinia, Mahsa; Hadaegh, Farzad; Azizi, Fereidoun; Bozorgmanesh, Mohammadreza
2016-01-01
Background A fundamental part of prevention is prediction. Potential predictors are the sine qua non of prediction models. However, whether incorporating novel predictors to prediction models could be directly translated to added predictive value remains an area of dispute. The difference between the predictive power of a predictive model with (enhanced model) and without (baseline model) a certain predictor is generally regarded as an indicator of the predictive value added by that predictor. Indices such as discrimination and calibration have long been used in this regard. Recently, the use of added predictive value has been suggested while comparing the predictive performances of the predictive models with and without novel biomarkers. Objectives User-friendly statistical software capable of implementing novel statistical procedures is conspicuously lacking. This shortcoming has restricted implementation of such novel model assessment methods. We aimed to construct Stata commands to help researchers obtain the aforementioned statistical indices. Materials and Methods We have written Stata commands that are intended to help researchers obtain the following. 1, Nam-D’Agostino X2 goodness of fit test; 2, Cut point-free and cut point-based net reclassification improvement index (NRI), relative absolute integrated discriminatory improvement index (IDI), and survival-based regression analyses. We applied the commands to real data on women participating in the Tehran lipid and glucose study (TLGS) to examine if information relating to a family history of premature cardiovascular disease (CVD), waist circumference, and fasting plasma glucose can improve predictive performance of Framingham’s general CVD risk algorithm. Results The command is adpredsurv for survival models. Conclusions Herein we have described the Stata package “adpredsurv” for calculation of the Nam-D’Agostino X2 goodness of fit test as well as cut point-free and cut point-based NRI, relative
Developmental Regression in Autism Spectrum Disorders
ERIC Educational Resources Information Center
Rogers, Sally J.
2004-01-01
The occurrence of developmental regression in autism is one of the more puzzling features of this disorder. Although several studies have documented the validity of parental reports of regression using home videos, accumulating data suggest that most children who demonstrate regression also demonstrated previous, subtle, developmental differences.…
Standards for Standardized Logistic Regression Coefficients
ERIC Educational Resources Information Center
Menard, Scott
2011-01-01
Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…
Regression Analysis by Example. 5th Edition
ERIC Educational Resources Information Center
Chatterjee, Samprit; Hadi, Ali S.
2012-01-01
Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. "Regression Analysis by Example, Fifth Edition" has been expanded and thoroughly…
Synthesizing Regression Results: A Factored Likelihood Method
ERIC Educational Resources Information Center
Wu, Meng-Jia; Becker, Betsy Jane
2013-01-01
Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported…
Estimating equivalence with quantile regression
Cade, B.S.
2011-01-01
Equivalence testing and corresponding confidence interval estimates are used to provide more enlightened statistical statements about parameter estimates by relating them to intervals of effect sizes deemed to be of scientific or practical importance rather than just to an effect size of zero. Equivalence tests and confidence interval estimates are based on a null hypothesis that a parameter estimate is either outside (inequivalence hypothesis) or inside (equivalence hypothesis) an equivalence region, depending on the question of interest and assignment of risk. The former approach, often referred to as bioequivalence testing, is often used in regulatory settings because it reverses the burden of proof compared to a standard test of significance, following a precautionary principle for environmental protection. Unfortunately, many applications of equivalence testing focus on establishing average equivalence by estimating differences in means of distributions that do not have homogeneous variances. I discuss how to compare equivalence across quantiles of distributions using confidence intervals on quantile regression estimates that detect differences in heterogeneous distributions missed by focusing on means. I used one-tailed confidence intervals based on inequivalence hypotheses in a two-group treatment-control design for estimating bioequivalence of arsenic concentrations in soils at an old ammunition testing site and bioequivalence of vegetation biomass at a reclaimed mining site. Two-tailed confidence intervals based both on inequivalence and equivalence hypotheses were used to examine quantile equivalence for negligible trends over time for a continuous exponential model of amphibian abundance. © 2011 by the Ecological Society of America.
Nonlinear Theory of The Geostrophic Adjustment
NASA Astrophysics Data System (ADS)
Zeitlin, V.
Nonlinear geostrophic adjustment and splitting of the fast and slow dynamical variables are analysed in the framework of multi-layer and continuously stratified primitive equations by means of the multi-scale perturbation theory in the Rossby number applied to localized initial disturbances. Two basic dynamical regimes, the quasi-geostrophic (QG) and the frontal geostrophic (FG), with small and large deviations of the isopycnal surfaces, respectively, are considered and differences in the corresponding adjustment scenarios are displayed. Decoupling of the fast component of the flow is proven up to the third order in Rossby number and long-time corrections to the standard balanced QG and FG models are found. Peculiarities of splitting in the FG regime due to the quasi-inertial oscillations are displayed and Schrödinger-like modulation equations for the envelope of the latter are derived.
Selapa, N W; Nephawe, K A; Maiwashe, A; Norris, D
2012-02-08
The aim of this study was to estimate genetic parameters for body weights of individually fed beef bulls measured at centralized testing stations in South Africa using random regression models. Weekly body weights of Bonsmara bulls (N = 2919) tested between 1999 and 2003 were available for the analyses. The model included a fixed regression of the body weights on fourth-order orthogonal Legendre polynomials of the actual days on test (7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77, and 84) for starting age and contemporary group effects. Random regressions on fourth-order orthogonal Legendre polynomials of the actual days on test were included for additive genetic effects and additional uncorrelated random effects of the weaning-herd-year and the permanent environment of the animal. Residual effects were assumed to be independently distributed with heterogeneous variance for each test day. Variance ratios for additive genetic, permanent environment and weaning-herd-year for weekly body weights at different test days ranged from 0.26 to 0.29, 0.37 to 0.44 and 0.26 to 0.34, respectively. The weaning-herd-year was found to have a significant effect on the variation of body weights of bulls despite a 28-day adjustment period. Genetic correlations amongst body weights at different test days were high, ranging from 0.89 to 1.00. Heritability estimates were comparable to literature estimates based on multivariate models. Therefore, the random regression model could be applied in the genetic evaluation of body weight of individually fed beef bulls in South Africa.
Erdmann, Christine A.; Steiner, Kate C.; Apte, Michael G.
2002-02-01
In previously published analyses of the 41-building 1994-1996 USEPA Building Assessment Survey and Evaluation (BASE) dataset, higher workday time-averaged indoor minus outdoor CO2 concentrations (dCO2) were associated with increased prevalence of certain mucous membrane and lower respiratory sick building syndrome (SBS) symptoms, even at peak dCO2 concentrations below 1,000 ppm. For this paper, similar analyses were performed using the larger 100-building 1994-1998 BASE dataset. Multivariate logistic regression analyses quantified the associations between dCO2 and the SBS symptoms, adjusting for age, sex, smoking status, presence of carpet in workspace, thermal exposure, relative humidity, and a marker for entrained automobile exhaust. Adjusted dCO2 prevalence odds ratios for sore throat and wheeze were 1.17 and 1.20 per 100-ppm increase in dCO2 (p < 0.05), respectively. These new analyses generally support our prior findings. Regional differences in climate, building design, and operation may account for some of the differences observed in analyses of the two datasets.
Wolf, Kathrin; Cyrys, Josef; Harciníková, Tatiana; Gu, Jianwei; Kusch, Thomas; Hampel, Regina; Schneider, Alexandra; Peters, Annette
2017-02-01
Important health relevance has been suggested for ultrafine particles (UFP) and ozone, but studies on long-term effects are scarce, mainly due to the lack of appropriate spatial exposure models. We designed a measurement campaign to develop land use regression (LUR) models to predict the spatial variability focusing on particle number concentration (PNC) as indicator for UFP, ozone and several other air pollutants in the Augsburg region, Southern Germany. Three bi-weekly measurements of PNC, ozone, particulate matter (PM10, PM2.5), soot (PM2.5abs) and nitrogen oxides (NOx, NO2) were performed at 20 sites in 2014/15. Annual average concentrations were calculated and temporally adjusted by measurements from a continuous background station. As geographic predictors we offered several traffic and land use variables, altitude, population and building density. Models were validated using leave-one-out cross-validation. Adjusted model explained variance (R2) was high for PNC and ozone (0.89 and 0.88). Cross-validation adjusted R2 was slightly lower (0.82 and 0.81) but still indicated a very good fit. LUR models for other pollutants performed well with adjusted R2 between 0.68 (PMcoarse) and 0.94 (NO2). Contrary to previous studies, ozone showed a moderate correlation with NO2 (Pearson's r=-0.26). PNC was moderately correlated with ozone and PM2.5, but highly correlated with NOx (r=0.91). For PNC and NOx, LUR models comprised similar predictors and future epidemiological analyses evaluating health effects need to consider these similarities.
Kala, Abhishek K.; Tiwari, Chetan; Mikler, Armin R.
2017-01-01
Background The primary aim of the study reported here was to determine the effectiveness of utilizing local spatial variations in environmental data to uncover the statistical relationships between West Nile Virus (WNV) risk and environmental factors. Because least squares regression methods do not account for spatial autocorrelation and non-stationarity of the type of spatial data analyzed for studies that explore the relationship between WNV and environmental determinants, we hypothesized that a geographically weighted regression model would help us better understand how environmental factors are related to WNV risk patterns without the confounding effects of spatial non-stationarity. Methods We examined commonly mapped environmental factors using both ordinary least squares regression (LSR) and geographically weighted regression (GWR). Both types of models were applied to examine the relationship between WNV-infected dead bird counts and various environmental factors for those locations. The goal was to determine which approach yielded a better predictive model. Results LSR efforts lead to identifying three environmental variables that were statistically significantly related to WNV infected dead birds (adjusted R2 = 0.61): stream density, road density, and land surface temperature. GWR efforts increased the explanatory value of these three environmental variables with better spatial precision (adjusted R2 = 0.71). Conclusions The spatial granularity resulting from the geographically weighted approach provides a better understanding of how environmental spatial heterogeneity is related to WNV risk as implied by WNV infected dead birds, which should allow improved planning of public health management strategies. PMID:28367364
Developmental regression in autism spectrum disorder.
Al Backer, Nouf Backer
2015-01-01
The occurrence of developmental regression in autism spectrum disorder (ASD) is one of the most puzzling phenomena of this disorder. Little is known about the nature and mechanism of developmental regression in ASD. About one-third of young children with ASD lose some skills during the preschool period, usually speech, but sometimes nonverbal communication, social, or play skills are affected as well. Considerable evidence suggests that most children who demonstrate regression also had previous, subtle, developmental differences. It is difficult to predict the prognosis of autistic children with developmental regression. It seems that the earlier development of social, language, and attachment behaviors followed by regression does not predict the later recovery of skills or better developmental outcomes. The underlying mechanisms that lead to regression in autism are unknown. The role of subclinical epilepsy in the developmental regression of children with autism remains unclear.
A Tutorial on Calculating and Interpreting Regression Coefficients in Health Behavior Research
ERIC Educational Resources Information Center
Stellefson, Michael L.; Hanik, Bruce W.; Chaney, Beth H.; Chaney, J. Don
2008-01-01
Regression analyses are frequently employed by health educators who conduct empirical research examining a variety of health behaviors. Within regression, there are a variety of coefficients produced, which are not always easily understood and/or articulated by health education researchers. It is important to not only understand what these…
Quantile regression provides a fuller analysis of speed data.
Hewson, Paul
2008-03-01
Considerable interest already exists in terms of assessing percentiles of speed distributions, for example monitoring the 85th percentile speed is a common feature of the investigation of many road safety interventions. However, unlike the mean, where t-tests and ANOVA can be used to provide evidence of a statistically significant change, inference on these percentiles is much less common. This paper examines the potential role of quantile regression for modelling the 85th percentile, or any other quantile. Given that crash risk may increase disproportionately with increasing relative speed, it may be argued these quantiles are of more interest than the conditional mean. In common with the more usual linear regression, quantile regression admits a simple test as to whether the 85th percentile speed has changed following an intervention in an analogous way to using the t-test to determine if the mean speed has changed by considering the significance of parameters fitted to a design matrix. Having briefly outlined the technique and briefly examined an application with a widely published dataset concerning speed measurements taken around the introduction of signs in Cambridgeshire, this paper will demonstrate the potential for quantile regression modelling by examining recent data from Northamptonshire collected in conjunction with a "community speed watch" programme. Freely available software is used to fit these models and it is hoped that the potential benefits of using quantile regression methods when examining and analysing speed data are demonstrated.
Use of probabilistic weights to enhance linear regression myoelectric control
NASA Astrophysics Data System (ADS)
Smith, Lauren H.; Kuiken, Todd A.; Hargrove, Levi J.
2015-12-01
Objective. Clinically available prostheses for transradial amputees do not allow simultaneous myoelectric control of degrees of freedom (DOFs). Linear regression methods can provide simultaneous myoelectric control, but frequently also result in difficulty with isolating individual DOFs when desired. This study evaluated the potential of using probabilistic estimates of categories of gross prosthesis movement, which are commonly used in classification-based myoelectric control, to enhance linear regression myoelectric control. Approach. Gaussian models were fit to electromyogram (EMG) feature distributions for three movement classes at each DOF (no movement, or movement in either direction) and used to weight the output of linear regression models by the probability that the user intended the movement. Eight able-bodied and two transradial amputee subjects worked in a virtual Fitts’ law task to evaluate differences in controllability between linear regression and probability-weighted regression for an intramuscular EMG-based three-DOF wrist and hand system. Main results. Real-time and offline analyses in able-bodied subjects demonstrated that probability weighting improved performance during single-DOF tasks (p < 0.05) by preventing extraneous movement at additional DOFs. Similar results were seen in experiments with two transradial amputees. Though goodness-of-fit evaluations suggested that the EMG feature distributions showed some deviations from the Gaussian, equal-covariance assumptions used in this experiment, the assumptions were sufficiently met to provide improved performance compared to linear regression control. Significance. Use of probability weights can improve the ability to isolate individual DOFs during linear regression myoelectric control, while maintaining the ability to simultaneously control multiple DOFs.
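The weighting scheme can be sketched in a few lines: fit a Gaussian model per movement class, convert likelihoods to posterior probabilities, and scale the regression output for a DOF by the posterior probability that any movement is intended there. A simplified single-DOF sketch; the class means, priors, and regression weights below are invented, whereas the study fit the Gaussians to EMG feature data under an equal-covariance assumption.

```python
import numpy as np


def gauss_pdf(x, mu, cov):
    """Multivariate normal density (equal covariance across classes, as assumed)."""
    d = x - mu
    norm = np.sqrt((2 * np.pi) ** len(mu) * np.linalg.det(cov))
    return np.exp(-0.5 * d @ np.linalg.inv(cov) @ d) / norm


def weighted_output(features, w, b, class_models, priors):
    """Scale the linear-regression velocity for one DOF by the posterior
    probability that the user intends movement at that DOF.

    class_models: {'rest': (mu, cov), 'pos': (mu, cov), 'neg': (mu, cov)}
    """
    raw = w @ features + b                          # linear regression estimate
    post = {c: priors[c] * gauss_pdf(features, *m) for c, m in class_models.items()}
    total = sum(post.values())
    p_move = (post['pos'] + post['neg']) / total    # posterior prob. of movement
    return p_move * raw


# invented 1-D example: rest centered at 0, movement classes at +/-5
cov = np.eye(1)
models = {'rest': (np.array([0.0]), cov),
          'pos': (np.array([5.0]), cov),
          'neg': (np.array([-5.0]), cov)}
priors = {'rest': 1 / 3, 'pos': 1 / 3, 'neg': 1 / 3}
w, b = np.array([0.5]), 0.0

out_rest = weighted_output(np.array([0.1]), w, b, models, priors)  # near rest
out_move = weighted_output(np.array([5.0]), w, b, models, priors)  # clear movement
print(out_rest, out_move)
```

Near the rest class the weight suppresses the regression output (preventing extraneous movement), while clear movement passes through almost unchanged.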
Spousal Adjustment to Myocardial Infarction.
ERIC Educational Resources Information Center
Ziglar, Elisa J.
This paper reviews the literature on the stresses and coping strategies of spouses of patients with myocardial infarction (MI). It attempts to identify specific problem areas of adjustment for the spouse and to explore the effects of spousal adjustment on patient recovery. Chapter one provides an overview of the importance in examining the…
Childhood peer relationship problems and psychosocial adjustment in late adolescence.
Woodward, L J; Fergusson, D M
1999-02-01
Using prospective longitudinal data from the Christchurch Health and Development Study, this paper examined the relationship between teacher reported peer relationship problems at age 9 and psychosocial adjustment in late adolescence. Results showed that, by age 18, children with high rates of early peer relationship problems were at increased risk of externalizing behavior problems such as criminal offending and substance abuse, but were not at increased risk of anxiety disorder or major depression. Subsequent analyses revealed that these associations were largely explained by the effects of child and family factors associated with both early peer relationship problems and later adjustment. The most influential variable in explaining associations between peer relationship problems and later adjustment was the extent of children's early conduct problems. These results suggest that reported associations between early peer problems and later adjustment are noncausal, and appear to reflect underlying continuities in behavioral adjustment.
ERIC Educational Resources Information Center
Martin, Mary P.; Williams, John D.
1978-01-01
A series of multiple regression analyses was used to investigate a salary equity policy in a statewide system of institutions of higher education. Faculty rank, number of publications, and teaching effectiveness were among the variables examined. (JKS)
Rastegari, Azam; Haghdoost, Ali Akbar; Baneshi, Mohammad Reza
2013-01-01
Background Due to the importance of medical studies, researchers of this field should be familiar with various types of statistical analyses to select the most appropriate method based on the characteristics of their data sets. Classification and regression trees (CARTs) can serve as a complement to regression models. We compared the performance of a logistic regression model and a CART in predicting drug injection among prisoners. Methods Data of 2720 Iranian prisoners was studied to determine the factors influencing drug injection. The collected data was divided into two groups of training and testing. A logistic regression model and a CART were applied on training data. The performance of the two models was then evaluated on testing data. Findings The regression model and the CART had 8 and 4 significant variables, respectively. Overall, heroin use, history of imprisonment, age at first drug use, and marital status were important factors in determining the history of drug injection. Subjects without the history of heroin use or heroin users with short-term imprisonment were at lower risk of drug injection. Among heroin addicts with long-term imprisonment, individuals with higher age at first drug use and married subjects were at lower risk of drug injection. Although the logistic regression model was more sensitive than the CART, the two models had the same levels of specificity and classification accuracy. Conclusion In this study, both sensitivity and specificity were important. While the logistic regression model had better performance, the graphical presentation of the CART simplifies the interpretation of the results. In general, a combination of different analytical methods is recommended to explore the effects of variables. PMID:24494152
Estimating effects of limiting factors with regression quantiles
Cade, B.S.; Terrell, J.W.; Schroeder, R.L.
1999-01-01
In a recent Concepts paper in Ecology, Thomson et al. emphasized that assumptions of conventional correlation and regression analyses fundamentally conflict with the ecological concept of limiting factors, and they called for new statistical procedures to address this problem. The analytical issue is that unmeasured factors may be the active limiting constraint and may induce a pattern of unequal variation in the biological response variable through an interaction with the measured factors. Consequently, changes near the maxima, rather than at the center of response distributions, are better estimates of the effects expected when the observed factor is the active limiting constraint. Regression quantiles provide estimates for linear models fit to any part of a response distribution, including near the upper bounds, and require minimal assumptions about the form of the error distribution. Regression quantiles extend the concept of one-sample quantiles to the linear model by solving an optimization problem of minimizing an asymmetric function of absolute errors. Rank-score tests for regression quantiles provide tests of hypotheses and confidence intervals for parameters in linear models with heteroscedastic errors, conditions likely to occur in models of limiting ecological relations. We used selected regression quantiles (e.g., 5th, 10th, ..., 95th) and confidence intervals to test hypotheses that parameters equal zero for estimated changes in average annual acorn biomass due to forest canopy cover of oak (Quercus spp.) and oak species diversity. Regression quantiles also were used to estimate changes in glacier lily (Erythronium grandiflorum) seedling numbers as a function of lily flower numbers, rockiness, and pocket gopher (Thomomys talpoides fossor) activity, data that motivated the query by Thomson et al. for new statistical procedures. Both example applications showed that effects of limiting factors estimated by changes in some upper regression quantile (e
Bakhtiyari, Mahmood; Mehmandar, Mohammad Reza; Mirbagheri, Babak; Hariri, Gholam Reza; Delpisheh, Ali; Soori, Hamid
2014-01-01
Risk factors of human-related traffic crashes are the most important and preventable challenges for community health due to their noteworthy burden in developing countries in particular. The present study aims to investigate the role of human risk factors of road traffic crashes in Iran. Through a cross-sectional study using the COM 114 data collection forms, the police records of almost 600,000 crashes that occurred in 2010 are investigated. The binary logistic regression and proportional odds regression models are used. The odds ratio for each risk factor is calculated. These models are adjusted for known confounding factors including age, sex and driving time. The traffic crash reports of 537,688 men (90.8%) and 54,480 women (9.2%) are analysed. The mean age is 34.1 ± 14 years. Not maintaining eyes on the road (53.7%) and losing control of the vehicle (21.4%) are the main causes of drivers' deaths in traffic crashes within cities. Not maintaining eyes on the road is also the most frequent human risk factor for road traffic crashes out of cities. Sudden lane excursion (OR = 9.9, 95% CI: 8.2-11.9) and seat belt non-compliance (OR = 8.7, CI: 6.7-10.1), exceeding authorised speed (OR = 17.9, CI: 12.7-25.1) and exceeding safe speed (OR = 9.7, CI: 7.2-13.2) are the most significant human risk factors for traffic crashes in Iran. The high mortality rate of 39 people for every 100,000 population underscores the importance of traffic crashes in Iran. Considering the important role of human risk factors in traffic crashes, concerted efforts are required to control dangerous driving behaviours such as exceeding speed, illegal overtaking and not maintaining eyes on the road.
Predictors of psychiatric disorders in liver transplantation candidates: logistic regression models.
Rocca, Paola; Cocuzza, Elena; Rasetti, Roberta; Rocca, Giuseppe; Zanalda, Enrico; Bogetto, Filippo
2003-07-01
This study has two goals. The first goal is to assess the prevalence of psychiatric disorders in orthotopic liver transplantation (OLT) candidates by means of standardized procedures because there has been little research concerning psychiatric problems of potential OLT candidates using standardized instruments. The second goal focuses on identifying predictors of these psychiatric disorders. One hundred sixty-five elective OLT candidates were assessed by our unit. Psychiatric diagnoses were based on the Structured Clinical Interview for the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition. Patients also were assessed using the Hamilton Depression Rating Scale (HDRS) and the Spielberger Anxiety Index, State and Trait forms (STAI-X1 and STAI-X2). Severity of cirrhosis was assessed by applying Child-Pugh score criteria. Chi-squared and general linear model analysis of variance were used to test the univariate association between patient characteristics and both clinical psychiatric diagnoses and severity of psychiatric diseases. Variables with P less than.10 in univariate analyses were included in multiple regression models. Forty-three percent of patients presented at least one psychiatric diagnosis. Child-Pugh score and previous psychiatric diagnoses were independent significant predictors of depressive disorders. Severity of psychiatric symptoms measured by psychometric scales (HDRS, STAI-X1, and STAI-X2) was associated with Child-Pugh score in the multiple regression model. Our data suggest a high rate of psychiatric disorders, particularly adjustment disorders, in our sample of OLT candidates. Severity of liver disease emerges as the most important variable in predicting severity of psychiatric disorders in these patients.
Kocalevent, Rüya-Daniela; Mierke, Annett; Danzer, Gerhard; Klapp, Burghard F.
2014-01-01
Objective Adjustment disorders are re-conceptualized in the DSM-5 as a stress-related disorder; however, besides the impact of an identifiable stressor, the specification of a stress concept, remains unclear. This study is the first to examine an existing stress-model from the general population, in patients diagnosed with adjustment disorders, using a longitudinal design. Methods The study sample consisted of 108 patients consecutively admitted for adjustment disorders. Associations of stress perception, emotional distress, resources, and mental health were measured at three time points: the outpatients’ presentation, admission for inpatient treatment, and discharge from the hospital. To evaluate a longitudinal stress model of ADs, we examined whether stress at admission predicted mental health at each of the three time points using multiple linear regressions and structural equation modeling. A series of repeated-measures one-way analyses of variance (rANOVAs) was performed to assess change over time. Results Significant within-participant changes from baseline were observed between hospital admission and discharge with regard to mental health, stress perception, and emotional distress (p<0.001). Stress perception explained nearly half of the total variance (44%) of mental health at baseline; the adjusted R2 increased (0.48), taking emotional distress (i.e., depressive symptoms) into account. The best predictor of mental health at discharge was the level of emotional distress (i.e., anxiety level) at baseline (β = −0.23, R2corr = 0.56, p<0.001). With a CFI of 0.86 and an NFI of 0.86, the fit indices did not allow for acceptance of the stress-model (Cmin/df = 15.26; RMSEA = 0.21). Conclusions Stress perception is an important predictor in adjustment disorders, and mental health-related treatment goals are dependent on and significantly impacted by stress perception and emotional distress. PMID:24825165
Process modeling with the regression network.
van der Walt, T; Barnard, E; van Deventer, J
1995-01-01
A new connectionist network topology called the regression network is proposed. The structural and underlying mathematical features of the regression network are investigated. Emphasis is placed on the intricacies of the optimization process for the regression network and some measures to alleviate these difficulties of optimization are proposed and investigated. The ability of the regression network algorithm to perform either nonparametric or parametric optimization, as well as a combination of both, is also highlighted. It is further shown how the regression network can be used to model systems which are poorly understood on the basis of sparse data. A semi-empirical regression network model is developed for a metallurgical processing operation (a hydrocyclone classifier) by building mechanistic knowledge into the connectionist structure of the regression network model. Poorly understood aspects of the process are provided for by use of nonparametric regions within the structure of the semi-empirical connectionist model. The performance of the regression network model is compared to the corresponding generalization performance results obtained by some other nonparametric regression techniques.
Quantile regression applied to spectral distance decay
Rocchini, D.; Cade, B.S.
2008-01-01
Remotely sensed imagery has long been recognized as a powerful support for characterizing and estimating biodiversity. Spectral distance among sites has proven to be a powerful approach for detecting species composition variability. Regression analysis of species similarity versus spectral distance allows us to quantitatively estimate the amount of turnover in species composition with respect to spectral and ecological variability. In classical regression analysis, the residual sum of squares is minimized for the mean of the dependent variable distribution. However, many ecological data sets are characterized by a high number of zeroes that add noise to the regression model. Quantile regressions can be used to evaluate trend in the upper quantiles rather than a mean trend across the whole distribution of the dependent variable. In this letter, we used ordinary least squares (OLS) and quantile regressions to estimate the decay of species similarity versus spectral distance. The achieved decay rates were statistically nonzero (p < 0.01), considering both OLS and quantile regressions. Nonetheless, the OLS regression estimate of the mean decay rate was only half the decay rate indicated by the upper quantiles. Moreover, the intercept value, representing the similarity reached when the spectral distance approaches zero, was very low compared with the intercepts of the upper quantiles, which detected high species similarity when habitats are more similar. In this letter, we demonstrated the power of using quantile regressions applied to spectral distance decay to reveal species diversity patterns otherwise lost or underestimated by OLS regression. © 2008 IEEE.
[From clinical judgment to linear regression model].
Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O
2013-01-01
When we think about mathematical models, such as the linear regression model, we think that these terms are only used by those engaged in research, a notion that is far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful to predict or show the relationship between two or more variables as long as the dependent variable is quantitative and has a normal distribution. Stated another way, regression is used to predict a measure based on the knowledge of at least one other variable. The first objective of linear regression is to determine the slope or inclination of the regression line: Y = a + bX, where "a" is the intercept or regression constant, equivalent to the value of "Y" when "X" equals 0, and "b" (also called the slope) indicates the increase or decrease that occurs when the variable "X" increases or decreases by one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R(2)) indicates the importance of the independent variables in the outcome.
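The quantities defined above (intercept "a", slope "b", and R(2)) have closed-form least-squares solutions that can be computed directly. The sketch below uses invented data and is not from the article.

```python
# A minimal sketch of the fitted line Y = a + bX described above, using the
# closed-form least-squares estimates; the data points are invented.

def linear_regression(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx                      # slope: change in Y per one-unit change in X
    a = my - b * mx                    # intercept: value of Y when X = 0
    ss_tot = sum((y - my) ** 2 for y in ys)
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    r2 = 1 - ss_res / ss_tot           # coefficient of determination
    return a, b, r2

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
a, b, r2 = linear_regression(xs, ys)
print(a, b, r2)  # slope near 2, R(2) close to 1 for this nearly linear data
```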
Geodesic least squares regression on information manifolds
Verdoolaege, Geert
2014-12-05
We present a novel regression method targeted at situations with significant uncertainty on both the dependent and independent variables or with non-Gaussian distribution models. Unlike in the classic regression model, the conditional distribution of the response variable suggested by the data need not be the same as the modeled distribution; instead, the two are matched by minimizing the Rao geodesic distance between them. This yields a more flexible regression method that is less constrained by the assumptions imposed through the regression model. As an example, we demonstrate our method's improved robustness to some flawed model assumptions and apply it to scaling laws in magnetic confinement fusion.
The development of a flyover noise prediction technique using multiple linear regression analysis
NASA Astrophysics Data System (ADS)
Rathgeber, R. K.
1981-04-01
At Cessna Aircraft Company, statistical analyses have been developed to define important trends in flyover noise data. Multiple regression techniques have provided the means to develop flyover noise prediction methods which have resulted in better accuracy than methods used in the past. Regression analyses have been conducted to determine the important relationship between propeller helical tip Mach number and the flyover noise level. Other variables have been included in the regression models either because the added variable contributed to reducing the remaining variation in the model or because the variable appeared to be a strong causal agent of flyover noise.
Adjustable Induction-Heating Coil
NASA Technical Reports Server (NTRS)
Ellis, Rod; Bartolotta, Paul
1990-01-01
Improved design for induction-heating work coil facilitates optimization of heating in different metal specimens. Three segments adjusted independently to obtain desired distribution of temperature. Reduces time needed to achieve required temperature profiles.
Integrating Risk Adjustment and Enrollee Premiums in Health Plan Payment
McGuire, Thomas G.; Glazer, Jacob; Newhouse, Joseph P.; Normand, Sharon-Lise; Shi, Julie; Sinaiko, Anna D.; Zuvekas, Samuel
2013-01-01
In two important health policy contexts – private plans in Medicare and the new state-run “Exchanges” created as part of the Affordable Care Act (ACA) – plan payments come from two sources: risk-adjusted payments from a Regulator and premiums charged to individual enrollees. This paper derives principles for integrating risk-adjusted payments and premium policy in individual health insurance markets based on fitting total plan payments to health plan costs per person as closely as possible. A least squares regression including both health status and variables used in premiums reveals the weights a Regulator should put on risk adjusters when markets determine premiums. We apply the methods to an Exchange-eligible population drawn from the Medical Expenditure Panel Survey (MEPS). PMID:24308878
2014-01-01
Background Risk adjustment is crucial for comparison of outcome in medical care. Knowledge of the external factors that impact measured outcome but that cannot be influenced by the physician is a prerequisite for this adjustment. To date, a universal and reproducible method for identification of the relevant external factors has not been published. The selection of external factors in current quality assurance programmes is mainly based on expert opinion. We propose and demonstrate a methodology for identification of external factors requiring risk adjustment of outcome indicators and we apply it to a cataract surgery register. Methods Defined test criteria to determine the relevance for risk adjustment are “clinical relevance” and “statistical significance”. Clinical relevance of the association is presumed when observed success rates of the indicator in the presence and absence of the external factor exceed a pre-specified range of 10%. Statistical significance of the association between the external factor and outcome indicators is assessed by univariate stratification and multivariate logistic regression adjustment. The cataract surgery register was set up as part of a German multi-centre register trial for out-patient cataract surgery in three high-volume surgical sites. A total of 14,924 patient follow-ups have been documented since 2005. Eight external factors potentially relevant for risk adjustment were related to the outcome indicators “refractive accuracy” and “visual rehabilitation” 2–5 weeks after surgery. Results The clinical relevance criterion confirmed 2 (“refractive accuracy”) and 5 (“visual rehabilitation”) external factors. The significance criterion was verified in two ways. Univariate and multivariate analyses revealed almost identical external factors: 4 were related to “refractive accuracy” and 7 (6) to “visual rehabilitation”. Two (“refractive accuracy”) and 5 (“visual rehabilitation”) factors
Barros, Aluísio JD; Hirakata, Vânia N
2003-01-01
Background Cross-sectional studies with binary outcomes analyzed by logistic regression are frequent in the epidemiological literature. However, the odds ratio can substantially overestimate the prevalence ratio, the measure of choice in these studies. Also, controlling for confounding is not equivalent for the two measures. In this paper we explore alternatives for modeling data of such studies with techniques that directly estimate the prevalence ratio. Methods We compared Cox regression with constant time at risk, Poisson regression and log-binomial regression against the standard Mantel-Haenszel estimators. Models with robust variance estimators in Cox and Poisson regressions and variance corrected by the scale parameter in Poisson regression were also evaluated. Results Three outcomes, from a cross-sectional study carried out in Pelotas, Brazil, with different levels of prevalence were explored: weight-for-age deficit (4%), asthma (31%) and mother in a paid job (52%). Unadjusted Cox/Poisson regression and Poisson regression with scale parameter adjusted by deviance performed worst in terms of interval estimates. Poisson regression with scale parameter adjusted by χ2 showed variable performance depending on the outcome prevalence. Cox/Poisson regression with robust variance, and log-binomial regression performed equally well when the model was correctly specified. Conclusions Cox or Poisson regression with robust variance and log-binomial regression provide correct estimates and are a better alternative for the analysis of cross-sectional studies with binary outcomes than logistic regression, since the prevalence ratio is more interpretable and easier to communicate to non-specialists than the odds ratio. However, precautions are needed to avoid estimation problems in specific situations. PMID:14567763
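The gap between the two measures that motivates the paper can be seen from a single 2x2 table: whenever the outcome is common, the odds ratio produced by logistic regression exceeds the prevalence ratio. The counts below are invented, with prevalences roughly in the range of the asthma example.

```python
# Hedged numeric illustration (invented 2x2 counts, not the Pelotas data):
# for a common outcome, the odds ratio overstates the prevalence ratio.

def measures(a, b, c, d):
    """a/b: outcome yes/no among exposed; c/d: outcome yes/no among unexposed."""
    pr = (a / (a + b)) / (c / (c + d))   # prevalence ratio
    or_ = (a / b) / (c / d)              # odds ratio
    return pr, or_

pr, or_ = measures(40, 60, 25, 75)  # prevalence 40% in exposed vs 25% in unexposed
print(pr, or_)  # PR = 1.6, OR = 2.0: the OR overstates the association
```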
Suppression Situations in Multiple Linear Regression
ERIC Educational Resources Information Center
Shieh, Gwowen
2006-01-01
This article proposes alternative expressions for the two most prevailing definitions of suppression without resorting to standardized regression modeling. The formulation provides a simple basis for the examination of their relationship. For the two-predictor regression, the author demonstrates that the previous results in the literature are…
Principles of Quantile Regression and an Application
ERIC Educational Resources Information Center
Chen, Fang; Chalhoub-Deville, Micheline
2014-01-01
Newer statistical procedures are typically introduced to help address the limitations of those already in practice or to deal with emerging research needs. Quantile regression (QR) is introduced in this paper as a relatively new methodology, which is intended to overcome some of the limitations of least squares mean regression (LMR). QR is more…
Three-Dimensional Modeling in Linear Regression.
ERIC Educational Resources Information Center
Herman, James D.
Linear regression examines the relationship between one or more independent (predictor) variables and a dependent variable. By using a particular formula, regression determines the weights needed to minimize the error term for a given set of predictors. With one predictor variable, the relationship between the predictor and the dependent variable…
A Practical Guide to Regression Discontinuity
ERIC Educational Resources Information Center
Jacob, Robin; Zhu, Pei; Somers, Marie-Andrée; Bloom, Howard
2012-01-01
Regression discontinuity (RD) analysis is a rigorous nonexperimental approach that can be used to estimate program impacts in situations in which candidates are selected for treatment based on whether their value for a numeric rating exceeds a designated threshold or cut-point. Over the last two decades, the regression discontinuity approach has…
Regression Analysis and the Sociological Imagination
ERIC Educational Resources Information Center
De Maio, Fernando
2014-01-01
Regression analysis is an important aspect of most introductory statistics courses in sociology but is often presented in contexts divorced from the central concerns that bring students into the discipline. Consequently, we present five lesson ideas that emerge from a regression analysis of income inequality and mortality in the USA and Canada.
Atherosclerotic plaque regression: fact or fiction?
Shanmugam, Nesan; Román-Rego, Ana; Ong, Peter; Kaski, Juan Carlos
2010-08-01
Coronary artery disease is the major cause of death in the western world. The formation and rapid progression of atheromatous plaques can lead to serious cardiovascular events in patients with atherosclerosis. The better understanding, in recent years, of the mechanisms leading to atheromatous plaque growth and disruption and the availability of powerful HMG CoA-reductase inhibitors (statins) has permitted the consideration of plaque regression as a realistic therapeutic goal. This article reviews the existing evidence underpinning current therapeutic strategies aimed at achieving atherosclerotic plaque regression. In this review we also discuss imaging modalities for the assessment of plaque regression, predictors of regression and whether plaque regression is associated with a survival benefit.
Should metacognition be measured by logistic regression?
Rausch, Manuel; Zehetleitner, Michael
2017-03-01
Are logistic regression slopes suitable to quantify metacognitive sensitivity, i.e., the efficiency with which subjective reports differentiate between correct and incorrect task responses? We analytically show that logistic regression slopes are independent of rating criteria in one specific model of metacognition, which assumes (i) that rating decisions are based on sensory evidence generated independently of the sensory evidence used for primary task responses and (ii) that the distributions of evidence are logistic. Given a hierarchical model of metacognition, logistic regression slopes depend on rating criteria. According to all considered models, regression slopes depend on the primary task criterion. A reanalysis of previous data revealed that massive numbers of trials are required to distinguish between hierarchical and independent models with tolerable accuracy. It is argued that researchers who wish to use logistic regression as a measure of metacognitive sensitivity need to control the primary task criterion and rating criteria.
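As a hedged illustration of the quantity under discussion (not a reproduction of the authors' models), the sketch below fits a two-parameter logistic regression of task accuracy on confidence ratings by Newton-Raphson; the slope b1 is the candidate metacognitive-sensitivity measure. The data, generating model, and all settings are invented.

```python
# Sketch: logistic regression of accuracy on confidence, fitted by Newton-Raphson.
# The generating model and data are invented for illustration.
import math
import random

def fit_logistic(xs, ys, iters=25):
    """Fit P(correct) = 1/(1 + exp(-(b0 + b1*x))) by Newton-Raphson."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(b0 + b1 * x)))
            w = p * (1 - p)                     # observation weight
            g0 += y - p                         # score for intercept
            g1 += (y - p) * x                   # score for slope
            h00 += w; h01 += w * x; h11 += w * x * x
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det       # Newton step: H^-1 * gradient
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

random.seed(0)
xs = [random.randint(1, 4) for _ in range(200)]  # confidence ratings, 1-4
ys = [1 if random.random() < 1 / (1 + math.exp(-(-2 + 1.0 * x))) else 0 for x in xs]
b0, b1 = fit_logistic(xs, ys)
print(b0, b1)  # b1 typically lands near the generating slope of 1.0
```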
Higher order asymptotics for negative binomial regression inferences from RNA-sequencing data.
Di, Yanming; Emerson, Sarah C; Schafer, Daniel W; Kimbrel, Jeffrey A; Chang, Jeff H
2013-03-26
RNA sequencing (RNA-Seq) is the current method of choice for characterizing transcriptomes and quantifying gene expression changes. This next generation sequencing-based method provides unprecedented depth and resolution. The negative binomial (NB) probability distribution has been shown to be a useful model for frequencies of mapped RNA-Seq reads and consequently provides a basis for statistical analysis of gene expression. Negative binomial exact tests are available for two-group comparisons but do not extend to negative binomial regression analysis, which is important for examining gene expression as a function of explanatory variables and for adjusted group comparisons accounting for other factors. We address the adequacy of available large-sample tests for the small sample sizes typically available from RNA-Seq studies and consider a higher-order asymptotic (HOA) adjustment to likelihood ratio tests. We demonstrate that 1) the HOA-adjusted likelihood ratio test is practically indistinguishable from the exact test in situations where the exact test is available, 2) the type I error of the HOA test matches the nominal specification in regression settings we examined via simulation, and 3) the power of the likelihood ratio test does not appear to be affected by the HOA adjustment. This work helps clarify the accuracy of the unadjusted likelihood ratio test and the degree of improvement available with the HOA adjustment. Furthermore, the HOA test may be preferable even when the exact test is available because it does not require ad hoc library size adjustments.
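A minimal sketch of the unadjusted likelihood ratio test discussed above, for a two-group comparison with the NB dispersion treated as known: with a fixed dispersion, the MLE of each group's NB mean is the sample mean, so the statistic has a closed form. The HOA adjustment studied in the paper is a correction on top of this test and is not reproduced here; the counts and dispersion below are invented.

```python
# Sketch of an unadjusted NB likelihood ratio test for a two-group comparison,
# with the dispersion (size) parameter k treated as known. Counts are invented.
import math

def nb_loglik(ys, mu, k):
    """NB2 log-likelihood terms that depend on mu (terms free of mu dropped)."""
    return sum(k * math.log(k / (k + mu)) + y * math.log(mu / (k + mu)) for y in ys)

def lrt_two_group(g1, g2, k):
    m1, m2 = sum(g1) / len(g1), sum(g2) / len(g2)   # group-wise MLEs of the mean
    m0 = sum(g1 + g2) / (len(g1) + len(g2))         # pooled MLE under H0
    stat = 2 * (nb_loglik(g1, m1, k) + nb_loglik(g2, m2, k) - nb_loglik(g1 + g2, m0, k))
    pval = math.erfc(math.sqrt(stat / 2))           # chi-square(1) upper tail
    return stat, pval

counts_a = [3, 7, 5, 12, 4, 6]      # e.g. mapped read counts, condition A
counts_b = [15, 22, 9, 18, 25, 14]  # condition B
stat, pval = lrt_two_group(counts_a, counts_b, k=5.0)
print(stat, pval)
```

The small group sizes here are exactly the regime where the paper shows the unadjusted test can miss its nominal type I error, motivating the HOA correction.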
Almost efficient estimation of relative risk regression
Fitzmaurice, Garrett M.; Lipsitz, Stuart R.; Arriaga, Alex; Sinha, Debajyoti; Greenberg, Caprice; Gawande, Atul A.
2014-01-01
Relative risks (RRs) are often considered the preferred measures of association in prospective studies, especially when the binary outcome of interest is common. In particular, many researchers regard RRs to be more intuitively interpretable than odds ratios. Although RR regression is a special case of generalized linear models, specifically with a log link function for the binomial (or Bernoulli) outcome, the resulting log-binomial regression does not respect the natural parameter constraints. Because log-binomial regression does not ensure that predicted probabilities are mapped to the [0,1] range, maximum likelihood (ML) estimation is often subject to numerical instability that leads to convergence problems. To circumvent these problems, a number of alternative approaches for estimating RR regression parameters have been proposed. One approach that has been widely studied is the use of Poisson regression estimating equations. The estimating equations for Poisson regression yield consistent, albeit inefficient, estimators of the RR regression parameters. We consider the relative efficiency of the Poisson regression estimator and develop an alternative, almost efficient estimator for the RR regression parameters. The proposed method uses near-optimal weights based on a Maclaurin series (Taylor series expanded around zero) approximation to the true Bernoulli or binomial weight function. This yields an almost efficient estimator while avoiding convergence problems. We examine the asymptotic relative efficiency of the proposed estimator for an increase in the number of terms in the series. Using simulations, we demonstrate the potential for convergence problems with standard ML estimation of the log-binomial regression model and illustrate how this is overcome using the proposed estimator. We apply the proposed estimator to a study of predictors of pre-operative use of beta blockers among patients undergoing colorectal surgery after diagnosis of colon cancer. PMID
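The series idea can be stated concretely: for a log-link Bernoulli model the efficient GLM weight is p/(1 - p), while Poisson estimating equations implicitly use the weight p. Expanding 1/(1 - p) as the Maclaurin series 1 + p + p^2 + ... shows the Poisson weight is the zeroth-order truncation, and adding terms approaches the efficient weight. This is a sketch of that identity only, not the paper's full estimating equations.

```python
# Sketch of the weight approximation behind the "almost efficient" estimator:
# p/(1-p) = p*(1 + p + p^2 + ...), so truncating at K = 0 gives the Poisson
# weight p, and more terms approach the efficient Bernoulli weight.

def truncated_weight(p, k):
    """Maclaurin approximation of the efficient weight p/(1-p), K+1 terms."""
    return p * sum(p ** j for j in range(k + 1))

p = 0.3
exact = p / (1 - p)
print([round(truncated_weight(p, k), 5) for k in (0, 1, 3, 7)], round(exact, 5))
# the truncated weights increase toward the exact weight as K grows
```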
Wang, Ming; Flanders, W Dana; Bostick, Roberd M; Long, Qi
2012-12-20
Measurement error is common in epidemiological and biomedical studies. When biomarkers are measured in batches or groups, measurement error is potentially correlated within each batch or group. In regression analysis, most existing methods are not applicable in the presence of batch-specific measurement error in predictors. We propose a robust conditional likelihood approach to account for batch-specific error in predictors when batch effect is additive and the predominant source of error, which requires no assumptions on the distribution of measurement error. Although a regression model with batch as a categorical covariate yields the same parameter estimates as the proposed conditional likelihood approach for linear regression, this result does not hold in general for all generalized linear models, in particular, logistic regression. Our simulation studies show that the conditional likelihood approach achieves better finite sample performance than the regression calibration approach or a naive approach without adjustment for measurement error. In the case of logistic regression, our proposed approach is shown to also outperform the regression approach with batch as a categorical covariate. In addition, we also examine a 'hybrid' approach combining the conditional likelihood method and the regression calibration method, which is shown in simulations to achieve good performance in the presence of both batch-specific and measurement-specific errors. We illustrate our method by using data from a colorectal adenoma study.
NASA Astrophysics Data System (ADS)
Zhang, Ying; Bi, Peng; Hiller, Janet
2008-01-01
This is the first study to identify appropriate regression models for the association between climate variation and salmonellosis transmission. A comparison between different regression models was conducted using surveillance data in Adelaide, South Australia. By using notified salmonellosis cases and climatic variables from the Adelaide metropolitan area over the period 1990-2003, four regression methods were examined: standard Poisson regression, autoregressive adjusted Poisson regression, multiple linear regression, and a seasonal autoregressive integrated moving average (SARIMA) model. Notified salmonellosis cases in 2004 were used to test the forecasting ability of the four models. Parameter estimation, goodness-of-fit and forecasting ability of the four regression models were compared. Temperatures occurring 2 weeks prior to cases were positively associated with cases of salmonellosis. Rainfall was also inversely related to the number of cases. The comparison of the goodness-of-fit and forecasting ability suggests that the SARIMA model is better than the other three regression models. Temperature and rainfall may be used as climatic predictors of salmonellosis cases in regions with climatic characteristics similar to those of Adelaide. The SARIMA model could, thus, be adopted to quantify the relationship between climate variations and salmonellosis transmission.
Enticott, Joanne C; Cheng, I-Hao; Russell, Grant; Szwarc, Josef; Braitberg, George; Peek, Anne; Meadows, Graham
2015-01-01
This study investigated whether people born in refugee source countries are disproportionately represented among those receiving a diagnosis of mental illness within emergency departments (EDs). The setting was the Cities of Greater Dandenong and Casey, the resettlement region for one-twelfth of Australia's refugees. An epidemiological, secondary data analysis compared mental illness diagnoses received in EDs by refugee and non-refugee populations. Data were drawn from the Victorian Emergency Minimum Dataset for the 2008-09 financial year. Univariate and multivariate logistic regression created predictive models for mental illness using five variables: age, sex, refugee background, interpreter use and preferred language. Collinearity, model fit and model stability were examined. Multivariate analysis showed age and sex to be the only significant risk factors for mental illness diagnosis in EDs. 'Refugee status', 'interpreter use' and 'preferred language' were not associated with a mental health diagnosis following risk adjustment for the effects of age and sex. The disappearance of the univariate association after adjustment for age and sex is a salutary lesson for Medicare Locals and other health planners regarding the importance of adjusting analyses of health service data for demographic characteristics.
Spatial regression analysis on 32 years total column ozone data
NASA Astrophysics Data System (ADS)
Knibbe, J. S.; van der A, R. J.; de Laat, A. T. J.
2014-02-01
Multiple-regression analyses have been performed on 32 years of total ozone column data that was spatially gridded with a 1° × 1.5° resolution. The total ozone data consists of the MSR (Multi Sensor Reanalysis; 1979-2008) and two years of assimilated SCIAMACHY ozone data (2009-2010). The two-dimensionality in this data set allows us to perform the regressions locally and investigate spatial patterns of regression coefficients and their explanatory power. Seasonal dependencies of ozone on regressors are included in the analysis. A new physically oriented model is developed to parameterize stratospheric ozone. Ozone variations on non-seasonal timescales are parameterized by explanatory variables describing the solar cycle, stratospheric aerosols, the quasi-biennial oscillation (QBO), El Niño (ENSO) and stratospheric alternative halogens (EESC). For several explanatory variables, seasonally adjusted versions of these explanatory variables are constructed to account for the difference in their effect on ozone throughout the year. To account for seasonal variation in ozone, explanatory variables describing the polar vortex, geopotential height, potential vorticity and average day length are included. Results of this regression model are compared to that of similar analysis based on a more commonly applied statistically oriented model. The physically oriented model provides spatial patterns in the regression results for each explanatory variable. The EESC has a significant depleting effect on ozone at high and mid-latitudes, the solar cycle affects ozone positively mostly at the Southern Hemisphere, stratospheric aerosols affect ozone negatively at high Northern latitudes, the effect of QBO is positive and negative at the tropics and mid to high-latitudes respectively and ENSO affects ozone negatively between 30° N and 30° S, particularly at the Pacific. The contribution of explanatory variables describing seasonal ozone variation is generally large at mid to high
ERIC Educational Resources Information Center
Shafiq, M. Najeeb
2013-01-01
Using quantile regression analyses, this study examines gender gaps in mathematics, science, and reading in Azerbaijan, Indonesia, Jordan, the Kyrgyz Republic, Qatar, Tunisia, and Turkey among 15-year-old students. The analyses show that girls in Azerbaijan achieve as well as boys in mathematics and science and overachieve in reading. In Jordan,…
Adjusting to Chronic Health Conditions.
Helgeson, Vicki S; Zajdel, Melissa
2017-01-03
Research on adjustment to chronic disease is critical in today's world, in which people are living longer lives, but lives are increasingly likely to be characterized by one or more chronic illnesses. Chronic illnesses may deteriorate, enter remission, or fluctuate, but their defining characteristic is that they persist. In this review, we first examine the effects of chronic disease on one's sense of self. Then we review categories of factors that influence how one adjusts to chronic illness, with particular emphasis on the impact of these factors on functional status and psychosocial adjustment. We begin with contextual factors, including demographic variables such as sex and race, as well as illness dimensions such as stigma and illness identity. We then examine a set of dispositional factors that influence chronic illness adjustment, organizing these into resilience and vulnerability factors. Resilience factors include cognitive adaptation indicators, personality variables, and benefit-finding. Vulnerability factors include a pessimistic attributional style, negative gender-related traits, and rumination. We then turn to social environmental variables, including both supportive and unsupportive interactions. Finally, we review chronic illness adjustment within the context of dyadic coping. We conclude by examining potential interactions among these classes of variables and outlining a set of directions for future research.
Vork, Kathleen L.; Broadwin, Rachel L.; Blaisdell, Robert J.
2007-01-01
Objective Studies have identified associations between household secondhand tobacco smoke (SHS) exposure and induction of childhood asthma. However, the true nature and strength of this association remains confounded in many studies, producing inconsistent evidence. To look for sources of potential bias and try to uncover consistent patterns of relative risk estimates (RRs), we conducted a meta-analysis of studies published between 1970 and 2005. Data sources Through an extensive literature search, we identified 38 epidemiologic studies of SHS exposure and the development of childhood asthma (that also controlled for atopy history) from 300 potentially relevant articles. Data synthesis We observed substantial heterogeneity within initial summary RRs of 1.48 [95% confidence interval (CI), 1.32–1.65], 1.25 (1.21–1.30), and 1.21 (1.08–1.36), for ever, current, and incident asthma, respectively. Lack of control for type of atopy history (familial or child) and child’s own smoking status within studies and age category altered summary RRs in separate meta-regressions. After adjusting for these confounding characteristics, consistent patterns of association emerged between SHS exposure and childhood asthma induction. Our summary RR of 1.33 (95% CI, 1.14–1.56) from studies of incident asthma among older children (6–18 years of age) is 1.27 times the estimate from studies of younger children and higher than estimates reported in earlier meta-analyses. Conclusions This new finding indicates that exposure duration may be a more important factor in the induction of asthma than previously understood, and suggests that SHS could be a more fundamental and widespread cause of childhood asthma than some previous meta-analyses have indicated. PMID:17938726
Li, L; Kleinman, K; Gillman, M W
2014-12-01
We implemented six confounding adjustment methods: (1) covariate-adjusted regression, (2) propensity score (PS) regression, (3) PS stratification, (4) PS matching with two calipers, (5) inverse probability weighting and (6) doubly robust estimation to examine the associations between the body mass index (BMI) z-score at 3 years and two separate dichotomous exposure measures: exclusive breastfeeding v. formula only (n=437) and cesarean section v. vaginal delivery (n=1236). Data were drawn from a prospective pre-birth cohort study, Project Viva. The goal is to demonstrate the necessity and usefulness of, and approaches to, multiple confounding adjustment methods for analyzing observational data. Unadjusted (univariate) and covariate-adjusted linear regression associations of breastfeeding with BMI z-score were -0.33 (95% CI -0.53, -0.13) and -0.24 (-0.46, -0.02), respectively. The other approaches resulted in smaller n (204-276) because of poor overlap of covariates, but CIs were of similar width except for inverse probability weighting (75% wider) and PS matching with a wider caliper (76% wider). Point estimates ranged widely, however, from -0.01 to -0.38. For cesarean section, because of better covariate overlap, the covariate-adjusted regression estimate (0.20) was remarkably robust to all adjustment methods, and the widths of the 95% CIs differed less than in the breastfeeding example. Choice of covariate adjustment method can matter. Lack of overlap in covariate structure between exposed and unexposed participants in observational studies can lead to erroneous covariate-adjusted estimates and confidence intervals. We recommend inspecting covariate overlap and using multiple confounding adjustment methods. Similar results bring reassurance. Contradictory results suggest issues with either the data or the analytic method.
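Two of the methods listed above, PS estimation by logistic regression followed by inverse probability weighting, can be sketched end-to-end on synthetic data. This is illustrative only: it does not use Project Viva, and the confounding structure and effect size are invented.

```python
# Hedged sketch: estimate a propensity score by Newton-Raphson logistic
# regression, then compare the naive group difference with the IPW estimate.
# Synthetic data with one confounder; true treatment effect is 0.5.
import math
import random

def fit_logistic(xs, ts, iters=25):
    """Two-parameter logistic fit of treatment on the covariate."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, t in zip(xs, ts):
            p = 1 / (1 + math.exp(-(b0 + b1 * x)))
            w = p * (1 - p)
            g0 += t - p; g1 += (t - p) * x
            h00 += w; h01 += w * x; h11 += w * x * x
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

random.seed(1)
n = 2000
xs = [random.gauss(0, 1) for _ in range(n)]                              # confounder
ts = [1 if random.random() < 1 / (1 + math.exp(-x)) else 0 for x in xs]  # treatment
ys = [0.5 * t + 1.0 * x + random.gauss(0, 1) for t, x in zip(ts, xs)]    # outcome

naive = (sum(y for y, t in zip(ys, ts) if t) / sum(ts)
         - sum(y for y, t in zip(ys, ts) if not t) / (n - sum(ts)))

b0, b1 = fit_logistic(xs, ts)
ps = [1 / (1 + math.exp(-(b0 + b1 * x))) for x in xs]
w = [t / p + (1 - t) / (1 - p) for t, p in zip(ts, ps)]                  # IPW weights
ipw = (sum(wi * y * t for wi, y, t in zip(w, ys, ts)) / sum(wi * t for wi, t in zip(w, ts))
       - sum(wi * y * (1 - t) for wi, y, t in zip(w, ys, ts)) / sum(wi * (1 - t) for wi, t in zip(w, ts)))
print(naive, ipw)  # the naive difference is confounded upward; IPW moves it toward 0.5
```

The same propensity scores could feed the other PS-based methods above (stratification, matching), which differ in how they use the fitted score rather than in how it is estimated.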
An introduction to using Bayesian linear regression with clinical data.
Baldwin, Scott A; Larson, Michael J
2016-12-31
Statistical training in psychology focuses on frequentist methods. Bayesian methods are an alternative to standard frequentist methods. This article provides researchers with an introduction to fundamental ideas in Bayesian modeling. We use data from an electroencephalogram (EEG) and anxiety study to illustrate Bayesian models. Specifically, the models examine the relationship between error-related negativity (ERN), a particular event-related potential, and trait anxiety. Methodological topics covered include: how to set up a regression model in a Bayesian framework, specifying priors, examining convergence of the model, visualizing and interpreting posterior distributions, interval estimates, expected and predicted values, and model comparison tools. We also discuss situations where Bayesian methods can outperform frequentist methods as well as how to specify more complicated regression models. Finally, we conclude with recommendations about reporting guidelines for those using Bayesian methods in their own research. We provide data and R code for replicating our analyses.
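As a minimal, hedged companion to the workflow described (not the article's EEG analysis, which would typically use MCMC), the sketch below computes the closed-form posterior for a single regression slope under a conjugate normal prior with known noise SD, showing how the posterior mean shrinks the least-squares estimate toward the prior mean. All numbers are invented.

```python
# Conjugate Bayesian sketch for one slope in y = b*x + noise with known
# noise SD, so the posterior is available in closed form. Data are invented.

def posterior_slope(xs, ys, prior_mean, prior_sd, noise_sd):
    """Posterior mean and SD of b given a N(prior_mean, prior_sd^2) prior."""
    prec_prior = 1 / prior_sd ** 2
    prec_like = sum(x * x for x in xs) / noise_sd ** 2
    post_var = 1 / (prec_prior + prec_like)   # precisions add
    post_mean = post_var * (prior_mean * prec_prior
                            + sum(x * y for x, y in zip(xs, ys)) / noise_sd ** 2)
    return post_mean, post_var ** 0.5

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.2, 3.8, 6.1, 8.0]
ls_slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
post_mean, post_sd = posterior_slope(xs, ys, prior_mean=0.0, prior_sd=1.0, noise_sd=1.0)
print(ls_slope, post_mean, post_sd)  # posterior mean sits between 0 and the LS slope
```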
A linear regression solution to the spatial autocorrelation problem
NASA Astrophysics Data System (ADS)
Griffith, Daniel A.
The Moran Coefficient spatial autocorrelation index can be decomposed into orthogonal map pattern components. This decomposition relates it directly to standard linear regression, in which corresponding eigenvectors can be used as predictors. This paper reports comparative results between these linear regressions and their auto-Gaussian counterparts for the following georeferenced data sets: Columbus (Ohio) crime, Ottawa-Hull median family income, Toronto population density, southwest Ohio unemployment, Syracuse pediatric lead poisoning, and Glasgow standard mortality rates, and a small remotely sensed image of the High Peak district. This methodology is extended to auto-logistic and auto-Poisson situations, with selected data analyses including percentage of urban population across Puerto Rico, and the frequency of SIDs cases across North Carolina. These data analytic results suggest that this approach to georeferenced data analysis offers considerable promise.
50 CFR 622.281 - Adjustment of management measures.
Code of Federal Regulations, 2014 CFR
2014-10-01
... ATLANTIC Dolphin and Wahoo Fishery Off the Atlantic States § 622.281 Adjustment of management measures. In accordance with the framework procedures of the FMP for the Dolphin and Wahoo Fishery off the Atlantic States... Atlantic dolphin and wahoo. (a) Atlantic dolphin and wahoo. Biomass levels, age-structured analyses,...
50 CFR 622.281 - Adjustment of management measures.
Code of Federal Regulations, 2013 CFR
2013-10-01
... ATLANTIC Dolphin and Wahoo Fishery Off the Atlantic States § 622.281 Adjustment of management measures. In accordance with the framework procedures of the FMP for the Dolphin and Wahoo Fishery off the Atlantic States... Atlantic dolphin and wahoo. (a) Atlantic dolphin and wahoo. Biomass levels, age-structured analyses,...
50 CFR 622.210 - Adjustment of management measures.
Code of Federal Regulations, 2014 CFR
2014-10-01
... ATLANTIC Shrimp Fishery of the South Atlantic Region § 622.210 Adjustment of management measures. In accordance with the framework procedures of the FMP for the Shrimp Fishery of the South Atlantic Region, the... shrimp. (a) Biomass levels, age-structured analyses, BRD certification criteria, BRD specifications,...
50 CFR 622.210 - Adjustment of management measures.
Code of Federal Regulations, 2013 CFR
2013-10-01
... ATLANTIC Shrimp Fishery of the South Atlantic Region § 622.210 Adjustment of management measures. In accordance with the framework procedures of the FMP for the Shrimp Fishery of the South Atlantic Region, the... shrimp. (a) Biomass levels, age-structured analyses, BRD certification criteria, BRD specifications,...
NASA Astrophysics Data System (ADS)
Haddad, Khaled; Rahman, Ataur
2012-04-01
In this article, an approach using Bayesian Generalised Least Squares (BGLS) regression in a region-of-influence (ROI) framework is proposed for regional flood frequency analysis (RFFA) for ungauged catchments. Using the data from 399 catchments in eastern Australia, the BGLS-ROI is constructed to regionalise the flood quantiles (Quantile Regression Technique (QRT)) and the first three moments of the log-Pearson type 3 (LP3) distribution (Parameter Regression Technique (PRT)). This scheme firstly develops a fixed region model to select the best set of predictor variables for use in the subsequent regression analyses using an approach that minimises the model error variance while also satisfying a number of statistical selection criteria. The identified optimal regression equation is then used in the ROI experiment where the ROI is chosen for a site in question as the region that minimises the predictive uncertainty. To evaluate the overall performances of the quantiles estimated by the QRT and PRT, a one-at-a-time cross-validation procedure is applied. Results of the proposed method indicate that both the QRT and PRT in a BGLS-ROI framework lead to more accurate and reliable estimates of flood quantiles and moments of the LP3 distribution when compared to a fixed region approach. Also, the BGLS-ROI can deal reasonably well with the heterogeneity in Australian catchments as evidenced by the regression diagnostics. Based on the evaluation statistics it was found that both BGLS-QRT and PRT-ROI perform similarly well, which suggests that the PRT is a viable alternative to QRT in RFFA. The RFFA methods developed in this paper are based on the database available in eastern Australia. It is expected that availability of a more comprehensive database (in terms of both quality and quantity) will further improve the predictive performance of both the fixed and ROI based RFFA methods presented in this study, which however needs to be investigated in future when such a
Salem, Rany M.; O'Connor, Daniel T.
2010-01-01
Most, if not all, human phenotypes exhibit a temporal, dosage-dependent, or age effect. Despite this fact, it is rare that data are collected over time or in sequence in relevant studies of the determinants of these phenotypes. The costs and organizational sophistication necessary to collect repeated measurements or longitudinal data for a given phenotype are clearly impediments to this, but greater efforts in this area are needed if insights into human phenotypic expression are to be obtained. Appropriate data analysis methods for genetic association studies involving repeated or longitudinal measures are also needed. We consider the use of longitudinal profiles obtained from fitted functions on repeated data collections from a set of individuals whose similarities are contrasted between sets of individuals with different genotypes to test hypotheses about genetic influences on time-dependent phenotype expression. The proposed approach can accommodate uncertainty of the fitted functions, as well as weighting factors across the time points, and is easily extended to a wide variety of complex analysis settings. We showcase the proposed approach with data from a clinical study investigating human blood vessel response to tyramine. We also compare the proposed approach with standard analytic procedures and investigate its robustness and power via simulation studies. The proposed approach is found to be quite flexible and performs either as well or better than traditional statistical methods. PMID:20423962
ERIC Educational Resources Information Center
Gramlich, Stephen Peter
2010-01-01
Open door admissions at community colleges bring returning adults, first timers, low achievers, disabled persons, and immigrants. Passing and retention rates for remedial and non-developmental math courses can be comparatively inadequate (LAVC, 2005; CCPRDC, 2000; SBCC, 2004; Seybert & Soltz, 1992; Waycaster, 2002). Mathematics achievement…
ERIC Educational Resources Information Center
Gilstrap, Donald L.
2013-01-01
In addition to qualitative methods presented in chaos and complexity theories in educational research, this article addresses quantitative methods that may show potential for future research studies. Although much in the social and behavioral sciences literature has focused on computer simulations, this article explores current chaos and…
Wanninkhof, R.
2003-05-21
As part of the global synthesis effort sponsored by the Global Carbon Cycle project of the National Oceanic and Atmospheric Administration (NOAA) and U.S. Department of Energy, a comprehensive comparison was performed of inorganic carbon parameters measured on oceanographic surveys carried out under auspices of the Joint Global Ocean Flux Study and related programs. Many of the cruises were performed as part of the World Hydrographic Program of the World Ocean Circulation Experiment and the NOAA Ocean-Atmosphere Carbon Exchange Study. Total dissolved inorganic carbon (DIC), total alkalinity (TAlk), fugacity of CO₂, and pH data from twenty-three cruises were checked to determine whether there were systematic offsets of these parameters between cruises. The focus was on the DIC and TAlk state variables. Data quality and offsets of DIC and TAlk were determined by using several different techniques. One approach was based on crossover analyses, where the deep-water concentrations of DIC and TAlk were compared for stations on different cruises that were within 100 km of each other. Regional comparisons were also made by using a multiple-parameter linear regression technique in which DIC or TAlk was regressed against hydrographic and nutrient parameters. When offsets of greater than 4 µmol/kg were observed for DIC and/or 6 µmol/kg were observed for TAlk, the data taken on the cruise were closely scrutinized to determine whether the offsets were systematic. Based on these analyses, the DIC data and TAlk data of three cruises were deemed of insufficient quality to be included in the comprehensive basinwide data set. For several of the cruises, small adjustments in TAlk were recommended for consistency with other cruises in the region. After these adjustments were incorporated, the inorganic carbon data from all cruises along with hydrographic, chlorofluorocarbon, and nutrient data were combined as a research quality product for the scientific community.
Dynamic Adjustment of Stimuli in Real Time Functional Magnetic Resonance Imaging
Feng, I. Jung; Jack, Anthony I.; Tatsuoka, Curtis
2015-01-01
The conventional fMRI image analysis approach to associating stimuli to brain activation is performed by carrying out a massive number of parallel univariate regression analyses. fMRI blood-oxygen-level dependent (BOLD) signal, the basis of these analyses, is known for its low signal-to-noise ratio and high spatial and temporal signal correlation. In order to ensure accurate localization of brain activity, stimulus administration in an fMRI session is often lengthy and repetitive. Real-time fMRI BOLD signal analysis is carried out as the signal is observed. This method allows for dynamic, real-time adjustment of stimuli through sequential experimental designs. We have developed a voxel-wise sequential probability ratio test (SPRT) approach for dynamically determining localization, as well as decision rules for stopping stimulus administration. SPRT methods and general linear model (GLM) approaches are combined to identify brain regions that are activated by specific elements of stimuli. Stimulus administration is dynamically stopped when sufficient statistical evidence is collected to determine activation status across regions of interest, following predetermined statistical error thresholds. Simulation experiments and an example based on real fMRI data show that scan volumes can be substantially reduced when compared with pre-determined, fixed designs while achieving similar or better accuracy in detecting activated voxels. Moreover, the proposed approach is also able to accurately detect differentially activated areas, and other comparisons between task-related GLM parameters that can be formulated in a hypothesis-testing framework. Finally, we give a demonstration of SPRT being employed in conjunction with a halving algorithm to dynamically adjust stimuli. PMID:25785856
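The sequential stopping rule above can be sketched with Wald's classic SPRT for a Gaussian mean. This is the generic textbook test, not the paper's voxel-wise GLM version, and the samples below are illustrative numbers, not fMRI BOLD data.

```python
import math

def sprt(samples, mu0=0.0, mu1=1.0, alpha=0.05, beta=0.05):
    """Wald SPRT for H0: mean mu0 vs H1: mean mu1, unit-variance Gaussian data.
    Stops as soon as the cumulative log-likelihood ratio crosses a boundary."""
    lower = math.log(beta / (1 - alpha))
    upper = math.log((1 - beta) / alpha)
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # Gaussian log-likelihood ratio increment for one observation.
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2.0)
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "continue", len(samples)

print(sprt([1.2, 0.9, 1.4, 1.1, 0.8, 1.3]))
```

Because sampling stops as soon as the evidence is decisive, the expected number of observations is typically smaller than for a fixed-size test, which is the mechanism behind the reduced scan volumes reported above.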
Investigating bias in squared regression structure coefficients
Nimon, Kim F.; Zientek, Linda R.; Thompson, Bruce
2015-01-01
The importance of structure coefficients and analogs of regression weights for analysis within the general linear model (GLM) has been well-documented. The purpose of this study was to investigate bias in squared structure coefficients in the context of multiple regression and to determine if a formula that had been shown to correct for bias in squared Pearson correlation coefficients and coefficients of determination could be used to correct for bias in squared regression structure coefficients. Using data from a Monte Carlo simulation, this study found that squared regression structure coefficients corrected with Pratt's formula produced less biased estimates and might be more accurate and stable estimates of population squared regression structure coefficients than estimates with no such corrections. While our findings are in line with prior literature that identified multicollinearity as a predictor of bias in squared regression structure coefficients but not coefficients of determination, the findings from this study are unique in that the level of predictive power, number of predictors, and sample size were also observed to contribute bias in squared regression structure coefficients. PMID:26217273
Regression modeling of ground-water flow
Cooley, R.L.; Naff, R.L.
1985-01-01
Nonlinear multiple regression methods are developed to model and analyze groundwater flow systems. Complete descriptions of regression methodology as applied to groundwater flow models allow scientists and engineers engaged in flow modeling to apply the methods to a wide range of problems. Organization of the text proceeds from an introduction that discusses the general topic of groundwater flow modeling, to a review of basic statistics necessary to properly apply regression techniques, and then to the main topic: exposition and use of linear and nonlinear regression to model groundwater flow. Statistical procedures are given to analyze and use the regression models. A number of exercises and answers are included to exercise the student on nearly all the methods that are presented for modeling and statistical analysis. Three computer programs implement the more complex methods. These three are a general two-dimensional, steady-state regression model for flow in an anisotropic, heterogeneous porous medium, a program to calculate a measure of model nonlinearity with respect to the regression parameters, and a program to analyze model errors in computed dependent variables such as hydraulic head. (USGS)
MCCB warm adjustment testing concept
NASA Astrophysics Data System (ADS)
Erdei, Z.; Horgos, M.; Grib, A.; Preradović, D. M.; Rodic, V.
2016-08-01
This paper presents an experimental investigation into the operating behavior of the thermal protection device of an MCCB (Molded Case Circuit Breaker). One of the main functions of the circuit breaker is to protect the circuits in which it is mounted against possible overloads. The tripping mechanism for the overload protection is based on the movement of a bimetal during a specific time frame. This movement needs to be controlled, and as a solution to control it we chose the warm adjustment concept. This concept is meant to improve process capability control and the final output. The warm adjustment device design creates a unique adjustment of the bimetal position for each individual breaker, determined while the test current flows through a phase that must trip within a certain amount of time. This time is predetermined by calculation for all standard amperage ratings and complies with the IEC 60947 standard requirements.
Relative risk regression analysis of epidemiologic data.
Prentice, R L
1985-11-01
Relative risk regression methods are described. These methods provide a unified approach to a range of data analysis problems in environmental risk assessment and in the study of disease risk factors more generally. Relative risk regression methods are most readily viewed as an outgrowth of Cox's regression and life model. They can also be viewed as a regression generalization of more classical epidemiologic procedures, such as that due to Mantel and Haenszel. In the context of an epidemiologic cohort study, relative risk regression methods extend conventional survival data methods and binary response (e.g., logistic) regression models by taking explicit account of the time to disease occurrence while allowing arbitrary baseline disease rates, general censorship, and time-varying risk factors. This latter feature is particularly relevant to many environmental risk assessment problems wherein one wishes to relate disease rates at a particular point in time to aspects of a preceding risk factor history. Relative risk regression methods also adapt readily to time-matched case-control studies and to certain less standard designs. The uses of relative risk regression methods are illustrated and the state of development of these procedures is discussed. It is argued that asymptotic partial likelihood estimation techniques are now well developed in the important special case in which the disease rates of interest have interpretations as counting process intensity functions. Estimation of relative risks processes corresponding to disease rates falling outside this class has, however, received limited attention. The general area of relative risk regression model criticism has, as yet, not been thoroughly studied, though a number of statistical groups are studying such features as tests of fit, residuals, diagnostics and graphical procedures. Most such studies have been restricted to exponential form relative risks as have simulation studies of relative risk estimation
Weighing Evidence "Steampunk" Style via the Meta-Analyser.
Bowden, Jack; Jackson, Chris
2016-10-01
The funnel plot is a graphical visualization of summary data estimates from a meta-analysis, and is a useful tool for detecting departures from the standard modeling assumptions. Although perhaps not widely appreciated, a simple extension of the funnel plot can help to facilitate an intuitive interpretation of the mathematics underlying a meta-analysis at a more fundamental level, by equating it to determining the center of mass of a physical system. We used this analogy to explain the concepts of weighing evidence and of biased evidence to a young audience at the Cambridge Science Festival, without recourse to precise definitions or statistical formulas and with a little help from Sherlock Holmes! Following on from the science fair, we have developed an interactive web-application (named the Meta-Analyser) to bring these ideas to a wider audience. We envisage that our application will be a useful tool for researchers when interpreting their data. First, to facilitate a simple understanding of fixed and random effects modeling approaches; second, to assess the importance of outliers; and third, to show the impact of adjusting for small study bias. This final aim is realized by introducing a novel graphical interpretation of the well-known method of Egger regression.
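The "center of mass" analogy above corresponds to the fixed-effect pooled estimate: an inverse-variance weighted average of the study estimates, with the weights playing the role of masses. The study estimates and standard errors below are invented for illustration.

```python
def fixed_effect(estimates, std_errs):
    """Fixed-effect meta-analysis: inverse-variance weighted pooled estimate."""
    weights = [1.0 / se ** 2 for se in std_errs]          # precision = "mass"
    pooled = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5               # SE of the pooled estimate
    return pooled, pooled_se

est = [0.30, 0.10, 0.25]   # hypothetical study effect estimates
se = [0.10, 0.20, 0.05]    # their standard errors

print(fixed_effect(est, se))
```

Precise studies (small standard errors) dominate the balance point, which is exactly what the physical-system analogy is meant to convey.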
Comparable-Worth Adjustments: Yes--Comparable-Worth Adjustments: No.
ERIC Educational Resources Information Center
Galloway, Sue; O'Neill, June
1985-01-01
Two essays address the issue of pay equity and present opinions favoring and opposing comparable-worth adjustments. Movement of women out of traditionally female jobs, the limits of "equal pay," fairness of comparable worth and market-based wages, implementation and efficiency of comparable worth system, and alternatives to comparable…
Technology Transfer Automated Retrieval System (TEKTRAN)
In precision agriculture regression has been used widely to quantify the relationship between soil attributes and other environmental variables. However, spatial correlation existing in soil samples usually makes the regression model suboptimal. In this study, a regression-kriging method was attemp...
NASA Astrophysics Data System (ADS)
Darnah
2016-04-01
Poisson regression is used when the response variable is count data following a Poisson distribution. The Poisson distribution assumes equidispersion. In practice, count data are often overdispersed or underdispersed, making Poisson regression inappropriate: it may underestimate the standard errors and overstate the significance of the regression parameters, and consequently give misleading inferences about them. This paper suggests the generalized Poisson regression model for handling overdispersion and underdispersion in the Poisson regression model. Both the Poisson regression model and the generalized Poisson regression model are applied to the number of filariasis cases in East Java. Based on the Poisson regression model, the factors influencing filariasis are the percentage of families who do not practice clean and healthy living and the percentage of families who do not have a healthy house. The Poisson regression model exhibits overdispersion, so we use generalized Poisson regression. The best generalized Poisson regression model shows that the influencing factor is the percentage of families who do not have a healthy house: each additional 1 percentage point of such families adds one filariasis patient.
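The extra dispersion parameter can be seen directly in the probability mass function. The sketch below uses Consul's common parameterization of the generalized Poisson distribution, in which a dispersion parameter `lam` of zero recovers the ordinary Poisson; the specific parameterization used in the paper is not stated in the abstract, so treat this as an illustrative stand-in.

```python
import math

def gen_poisson_pmf(y, theta, lam):
    """Generalized Poisson p.m.f. (Consul's parameterization).
    lam = 0 reduces to Poisson(theta); lam > 0 models overdispersion."""
    return (theta * (theta + y * lam) ** (y - 1)
            * math.exp(-theta - y * lam) / math.factorial(y))

def poisson_pmf(y, theta):
    return theta ** y * math.exp(-theta) / math.factorial(y)

# With lam = 0 the two p.m.f.s coincide; with lam > 0 the tail is heavier.
print(gen_poisson_pmf(3, 2.0, 0.0), poisson_pmf(3, 2.0))
print(gen_poisson_pmf(8, 2.0, 0.2), poisson_pmf(8, 2.0))
```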
Regressive language in severe head injury.
Thomsen, I V; Skinhoj, E
1976-09-01
In a follow-up study of 50 patients with severe head injuries three patients had echolalia. One patient with initially global aphasia had echolalia for some weeks when he started talking. Another patient with severe diffuse brain damage, dementia, and emotional regression had echolalia. The dysfunction was considered a detour performance. In the third patient echolalia and palilalia were details in a total pattern of regression lasting for months. The patient, who had extensive frontal atrophy secondary to a very severe head trauma, presented an extreme state of regression returning to a foetal-body pattern and behaving like a baby.
Regression of altitude-produced cardiac hypertrophy.
NASA Technical Reports Server (NTRS)
Sizemore, D. A.; Mcintyre, T. W.; Van Liere, E. J.; Wilson, M. F.
1973-01-01
The rate of regression of cardiac hypertrophy with time has been determined in adult male albino rats. The hypertrophy was induced by intermittent exposure to simulated high altitude. The percentage hypertrophy was much greater (46%) in the right ventricle than in the left (16%). The regression could be adequately fitted to a single exponential function with a half-time of 6.73 plus or minus 0.71 days (90% CI). There was no significant difference in the rates of regression for the two ventricles.
Regression Discontinuity for Causal Effect Estimation in Epidemiology.
Oldenburg, Catherine E; Moscoe, Ellen; Bärnighausen, Till
Regression discontinuity analyses can generate estimates of the causal effects of an exposure when a continuously measured variable is used to assign the exposure to individuals based on a threshold rule. Individuals just above the threshold are expected to be similar in their distribution of measured and unmeasured baseline covariates to individuals just below the threshold, resulting in exchangeability. At the threshold exchangeability is guaranteed if there is random variation in the continuous assignment variable, e.g., due to random measurement error. Under exchangeability, causal effects can be identified at the threshold. The regression discontinuity intention-to-treat (RD-ITT) effect on an outcome can be estimated as the difference in the outcome between individuals just above (or below) versus just below (or above) the threshold. This effect is analogous to the ITT effect in a randomized controlled trial. Instrumental variable methods can be used to estimate the effect of exposure itself utilizing the threshold as the instrument. We review the recent epidemiologic literature reporting regression discontinuity studies and find that while regression discontinuity designs are beginning to be utilized in a variety of applications in epidemiology, they are still relatively rare, and analytic and reporting practices vary. Regression discontinuity has the potential to greatly contribute to the evidence base in epidemiology, in particular on the real-life and long-term effects and side-effects of medical treatments that are provided based on threshold rules - such as treatments for low birth weight, hypertension or diabetes.
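The RD-ITT effect described above can be sketched as the difference in mean outcome between individuals just above and just below the threshold, within a chosen bandwidth. The assignment scores and outcomes below are hypothetical; real analyses would typically also fit local regressions on each side rather than raw means.

```python
def rd_itt(scores, outcomes, threshold, bandwidth):
    """Naive RD-ITT estimate: mean outcome just above minus just below
    the threshold, restricted to a window of width `bandwidth` on each side."""
    above = [y for s, y in zip(scores, outcomes)
             if threshold <= s < threshold + bandwidth]
    below = [y for s, y in zip(scores, outcomes)
             if threshold - bandwidth <= s < threshold]
    return sum(above) / len(above) - sum(below) / len(below)

scores = [1.2, 1.6, 1.8, 2.1, 2.3, 2.7]     # continuous assignment variable
outcomes = [5.0, 5.1, 5.3, 6.8, 7.0, 7.2]   # outcome of interest

print(rd_itt(scores, outcomes, threshold=2.0, bandwidth=0.5))
```

Near-threshold exchangeability is what licenses interpreting this difference causally, by analogy with the ITT contrast in a randomized trial.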
Chung, Sang M; Lee, David J; Hand, Austin; Young, Philip; Vaidyanathan, Jayabharathi; Sahajwalla, Chandrahas
2015-12-01
The study evaluated whether the renal function decline rate per year with age in adults varies based on two primary statistical analyses: cross-section (CS), using one observation per subject, and longitudinal (LT), using multiple observations per subject over time. A total of 16628 records (3946 subjects; age range 30-92 years) of creatinine clearance and relevant demographic data were used. On average, four samples per subject were collected for up to 2364 days (mean: 793 days). A simple linear regression and random coefficient models were selected for CS and LT analyses, respectively. The renal function decline rates per year were 1.33 and 0.95 ml/min/year for CS and LT analyses, respectively, and were slower when the repeated individual measurements were considered. The study confirms that rates are different based on statistical analyses, and that a statistically robust longitudinal model with a proper sampling design provides reliable individual as well as population estimates of the renal function decline rates per year with age in adults. In conclusion, our findings indicated that one should be cautious in interpreting the renal function decline rate with aging information because its estimation was highly dependent on the statistical analyses. From our analyses, a population longitudinal analysis (e.g. random coefficient model) is recommended if individualization is critical, such as a dose adjustment based on renal function during a chronic therapy.
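The cross-sectional (CS) analysis above is a simple linear regression of creatinine clearance on age, with the slope giving the decline rate per year. A minimal closed-form sketch on invented data (not the study's 16628 records):

```python
def ols_slope(x, y):
    """Least-squares slope of y on x via the closed form Sxy / Sxx."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

age = [30, 40, 50, 60, 70]          # years
crcl = [110, 100, 85, 75, 60]       # creatinine clearance, ml/min

print(ols_slope(age, crcl))         # decline rate per year of age
```

The longitudinal (LT) analysis instead fits a random coefficient model, giving each subject their own intercept and slope; as the abstract notes, pooling within-subject information yielded a slower estimated decline than this cross-sectional slope.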
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-11
... Surface Transportation Board Railroad Cost Recovery Procedures--Productivity Adjustment; Quarterly Rail... Railroads that the Board restate the previously published productivity adjustment for the 2003-2007 averaging period (2007 productivity adjustment) so that it tracks the 2007 productivity adjustment...
A new bivariate negative binomial regression model
NASA Astrophysics Data System (ADS)
Faroughi, Pouya; Ismail, Noriszura
2014-12-01
This paper introduces a new form of bivariate negative binomial (BNB-1) regression which can be fitted to bivariate and correlated count data with covariates. The BNB regression discussed in this study can be fitted to bivariate and overdispersed count data with positive, zero or negative correlations. The joint p.m.f. of the BNB-1 distribution is derived from the product of two negative binomial marginals with a multiplicative factor parameter. Several testing methods were used to check overdispersion and goodness-of-fit of the model. Application of BNB-1 regression is illustrated on a Malaysian motor insurance dataset. The results indicated that BNB-1 regression has a better fit than the bivariate Poisson and BNB-2 models with regard to the Akaike information criterion.
Some Simple Computational Formulas for Multiple Regression
ERIC Educational Resources Information Center
Aiken, Lewis R., Jr.
1974-01-01
Short-cut formulas are presented for direct computation of the beta weights, the standard errors of the beta weights, and the multiple correlation coefficient for multiple regression problems involving three independent variables and one dependent variable. (Author)
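The abstract's short-cut formulas are not reproduced here, but their two-predictor analogue illustrates the idea: standardized beta weights and the multiple correlation can be computed directly from the zero-order correlations. The correlations below are invented for illustration, and the three-predictor case in the paper follows the same pattern with a larger denominator.

```python
def betas_from_correlations(ry1, ry2, r12):
    """Standardized beta weights and R^2 for y regressed on two predictors,
    computed from the correlations r(y,x1), r(y,x2), r(x1,x2) alone."""
    denom = 1.0 - r12 ** 2
    b1 = (ry1 - ry2 * r12) / denom
    b2 = (ry2 - ry1 * r12) / denom
    r_squared = b1 * ry1 + b2 * ry2
    return b1, b2, r_squared

print(betas_from_correlations(0.6, 0.5, 0.3))
```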
An introduction to multilevel regression models.
Austin, P C; Goel, V; van Walraven, C
2001-01-01
Data in health research are frequently structured hierarchically. For example, data may consist of patients nested within physicians, who in turn may be nested in hospitals or geographic regions. Fitting regression models that ignore the hierarchical structure of the data can lead to false inferences being drawn from the data. Implementing a statistical analysis that takes into account the hierarchical structure of the data requires special methodologies. In this paper, we introduce the concept of hierarchically structured data, and present an introduction to hierarchical regression models. We then compare the performance of a traditional regression model with that of a hierarchical regression model on a dataset relating test utilization at the annual health exam with patient and physician characteristics. In comparing the resultant models, we see that false inferences can be drawn by ignoring the structure of the data.
Multiple Instance Regression with Structured Data
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Lane, Terran; Roper, Alex
2008-01-01
This slide presentation reviews the use of multiple instance regression with structured data from multiple and related data sets. It applies the concept to a practical problem, that of estimating crop yield using remote sensed country wide weekly observations.
Bayesian Comparison of Two Regression Lines.
ERIC Educational Resources Information Center
Tsutakawa, Robert K.
1978-01-01
A Bayesian solution is presented for the Johnson-Neyman problem (whether or not the distance between two regression lines is statistically significant over a finite interval of the independent variable). (Author/CTM)
TWSVR: Regression via Twin Support Vector Machine.
Khemchandani, Reshma; Goyal, Keshav; Chandra, Suresh
2016-02-01
Taking motivation from Twin Support Vector Machine (TWSVM) formulation, Peng (2010) attempted to propose Twin Support Vector Regression (TSVR) where the regressor is obtained via solving a pair of quadratic programming problems (QPPs). In this paper we argue that TSVR formulation is not in the true spirit of TWSVM. Further, taking motivation from Bi and Bennett (2003), we propose an alternative approach to find a formulation for Twin Support Vector Regression (TWSVR) which is in the true spirit of TWSVM. We show that our proposed TWSVR can be derived from TWSVM for an appropriately constructed classification problem. To check the efficacy of our proposed TWSVR we compare its performance with TSVR and classical Support Vector Regression (SVR) on various regression datasets.
Greenberg, J A; Rahman, S; Saint-Preux, S; Owen, D R; Boozer, C N
1999-10-01
Previous studies of the relationship between plasma leptin and energy usage have yielded contradictory findings. The present study was therefore conducted to clearly distinguish and measure the energy usage rate and the energy usage rate adjusted for a surrogate of metabolically active tissue mass. We investigated the simultaneous relationships between these two measures of energy usage, leptin, and body fat in 21-month-old adult male Fischer 344 rats on three different long-term dietary regimens: (1) continuous ad libitum feeding (Ad-lib); (2) ad libitum feeding until early adulthood, and then continuous 60% caloric restriction (CR); and (3) ad libitum feeding until early adulthood, then 60% caloric restriction until 16 months, and then ad libitum feeding for 5 months (CR/Ad-lib). Two versions of the daily usage rate were measured: daily dietary caloric intake (DCI), and daily energy expenditure (EE) based on indirect calorimetry. Two versions of the metabolically active tissue mass were also measured: fat-free mass (FFM), and the sum of the weight of the heart, brain, liver, and kidneys. Energy usage rates were adjusted for these measures of metabolically active tissue mass to yield measures of the energy metabolic rate. Correlation, regression, and path analyses showed that both the energy usage rate and adjusted energy usage rate played important independent roles in determining body fat and plasma leptin, but only after multivariate techniques were used to account for the simultaneous interactions between variables. Increases in the energy usage rate were associated with increases in body fat and the adjusted energy usage rate. Increases in the adjusted energy usage rate were associated with decreases in body fat and plasma leptin. These findings suggest that differences in subjects' adjusted energy usage rates could explain some of the apparently contradictory findings concerning the relationship between energy usage and plasma leptin in previously published
Marginal longitudinal semiparametric regression via penalized splines
Kadiri, M. Al; Carroll, R.J.; Wand, M.P.
2010-01-01
We study the marginal longitudinal nonparametric regression problem and some of its semiparametric extensions. We point out that, while several elaborate approaches to efficient estimation have been proposed, a relatively simple and straightforward one, based on penalized splines, has not. After describing our approach, we explain how Gibbs sampling and the BUGS software can be used to achieve quick and effective implementation. Illustrations are provided for nonparametric regression and additive models. PMID:21037941
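A minimal numpy sketch of the penalized-spline idea, using a truncated-line basis with a ridge penalty on the knot coefficients rather than the authors' Gibbs/BUGS implementation; the knot grid, penalty value, and test function below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 200)

# Truncated-line spline basis: [1, x, (x - k)_+ for each interior knot k]
knots = np.linspace(0.05, 0.95, 20)
X = np.column_stack([np.ones_like(x), x] + [np.maximum(x - k, 0) for k in knots])

# Penalized least squares: ridge penalty applied only to the knot coefficients
lam = 1.0
D = np.diag([0.0, 0.0] + [1.0] * len(knots))
beta = np.linalg.solve(X.T @ X + lam * D, X.T @ y)
fit = X @ beta
```

The penalty keeps the many knot coefficients from overfitting while leaving the linear part unpenalized; in a Bayesian treatment the same estimate arises from a normal prior on the knot coefficients, which is what makes the Gibbs-sampling route straightforward.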
Discriminative Elastic-Net Regularized Linear Regression.
Zhang, Zheng; Lai, Zhihui; Xu, Yong; Shao, Ling; Wu, Jian; Xie, Guo-Sen
2017-03-01
In this paper, we aim at learning compact and discriminative linear regression models. Linear regression has been widely used in different problems. However, most of the existing linear regression methods exploit the conventional zero-one matrix as the regression targets, which greatly narrows the flexibility of the regression model. Another major limitation of these methods is that the learned projection matrix fails to precisely project the image features to the target space due to their weak discriminative capability. To this end, we present an elastic-net regularized linear regression (ENLR) framework, and develop two robust linear regression models which possess the following special characteristics. First, our methods exploit two particular strategies to enlarge the margins of different classes by relaxing the strict binary targets into a more feasible variable matrix. Second, a robust elastic-net regularization of singular values is introduced to enhance the compactness and effectiveness of the learned projection matrix. Third, the resulting optimization problem of ENLR has a closed-form solution in each iteration, which can be solved efficiently. Finally, rather than directly exploiting the projection matrix for recognition, our methods employ the transformed features as the new discriminative representations to perform the final image classification. Compared with the traditional linear regression model and some of its variants, our method is much more accurate in image classification. Extensive experiments conducted on publicly available data sets well demonstrate that the proposed framework can outperform the state-of-the-art methods. The MATLAB code for our methods is available at http://www.yongxu.org/lunwen.html.
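ENLR's relaxed targets and singular-value regularization go beyond a snippet, but the elastic-net penalty at its core can be illustrated with a plain proximal-gradient (ISTA) solver on synthetic data; the penalty weights, data, and sparsity pattern here are arbitrary, not the paper's:

```python
import numpy as np

def elastic_net(X, y, alpha=0.1, l1_ratio=0.5, n_iter=2000):
    """ISTA solver for (1/2n)||y - Xb||^2 + alpha*l1_ratio*||b||_1
    + (alpha*(1 - l1_ratio)/2)*||b||^2."""
    n, p = X.shape
    b = np.zeros(p)
    # Step size from a Lipschitz bound on the gradient of the smooth part
    L = np.linalg.norm(X, 2) ** 2 / n + alpha * (1 - l1_ratio)
    thresh = alpha * l1_ratio / L
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n + alpha * (1 - l1_ratio) * b
        z = b - grad / L
        b = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)  # soft-threshold
    return b

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (200, 10))
beta_true = np.zeros(10)
beta_true[0], beta_true[3] = 2.0, -3.0
y = X @ beta_true + rng.normal(0, 0.1, 200)
b = elastic_net(X, y)
```

The l1 part zeroes out irrelevant coefficients while the l2 part stabilizes the correlated ones, which is the combination that makes elastic-net projections both sparse and compact.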
The Geometry of Enhancement in Multiple Regression
ERIC Educational Resources Information Center
Waller, Niels G.
2011-01-01
In linear multiple regression, "enhancement" is said to occur when R^2 = b'r > r'r, where b is a p x 1 vector of standardized regression coefficients and r is a p x 1 vector of correlations between a criterion y and a set of standardized regressors, x. When p = 1 then b ≅ r and…
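The definition can be checked numerically: with two negatively intercorrelated regressors (the correlation values below are chosen purely for illustration), R^2 = b'r exceeds r'r:

```python
import numpy as np

# Correlations of two standardized regressors with the criterion y
r = np.array([0.5, 0.1])
# Negatively intercorrelated regressors (values chosen to produce enhancement)
Rxx = np.array([[1.0, -0.5], [-0.5, 1.0]])

b = np.linalg.solve(Rxx, r)   # standardized coefficients: b = Rxx^-1 r
R2 = b @ r                    # R^2 = b'r ~= 0.413, versus r'r = 0.26
```

Because the regressors partially cancel each other's irrelevant variance, the joint model explains more criterion variance than the sum of the squared individual correlations.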
Adjustable Optical-Fiber Attenuator
NASA Technical Reports Server (NTRS)
Buzzetti, Mike F.
1994-01-01
Adjustable fiber-optic attenuator utilizes bending loss to reduce strength of light transmitted along it. Attenuator functions without introducing measurable back-reflection or insertion loss. Relatively insensitive to vibration and changes in temperature. Potential applications include cable television, telephone networks, other signal-distribution networks, and laboratory instrumentation.
Dyadic Adjustment: An Ecosystemic Examination.
ERIC Educational Resources Information Center
Wilson, Stephan M.; Larson, Jeffry H.; McCulloch, B. Jan; Stone, Katherine L.
1997-01-01
Examines the relationship of background, individual, and family influences on dyadic adjustment, using an ecological perspective. Data from 102 married couples were used. Age at marriage for husbands, emotional health for wives, and number of marriage and family problems as well as family life satisfaction for both were related to dyadic…
Problems of Adjustment to School.
ERIC Educational Resources Information Center
Bartolini, Leandro A.
This paper, one of several written for a comprehensive policy study of early childhood education in Illinois, examines and summarizes the literature on the problems of young children in adjusting to starting school full-time and describes the nature and extent of their difficulties in relation to statewide educational policy. The review of studies…
Economic Pressures and Family Adjustment.
ERIC Educational Resources Information Center
Haccoun, Dorothy Markiewicz; Ledingham, Jane E.
The relationships between economic stress on the family and child and parental adjustment were examined for a sample of 199 girls and boys in grades one, four, and seven. These associations were examined separately for families in which both parents were present and in which mothers only were at home. Economic stress was associated with boys'…
[Iris movement mediates pupillary membrane regression].
Morizane, Yuki
2007-11-01
In the course of mammalian lens development, a transient capillary meshwork called the pupillary membrane (PM) forms. It is located in the pupil area to nourish the anterior surface of the lens, and then regresses to clear the optical path. Although the involvement of the apoptotic process has been reported in PM regression, the initiating factor remains unknown. We initially found that regression of the PM coincided with the development of iris motility, and that iris movement caused cessation and resumption of blood flow within the PM. Therefore, we investigated whether the development of the capacity of the iris to constrict and dilate can function as an essential signal that induces apoptosis in the PM. Continuous inhibition of iris movement with mydriatic agents suppressed apoptosis of the PM and resulted in the persistence of the PM in rats. The distribution of apoptotic cells in the regressing PM was diffuse and showed no apparent localization. These results indicated that iris movement induced regression of the PM by changing the blood flow within it. This study suggests the importance of physiological interactions between tissues (in this case, the iris and the PM) as a signal to advance vascular regression during organ development.
Multiple-Instance Regression with Structured Data
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Lane, Terran; Roper, Alex
2008-01-01
We present a multiple-instance regression algorithm that models internal bag structure to identify the items most relevant to the bag labels. Multiple-instance regression (MIR) operates on a set of bags with real-valued labels, each containing a set of unlabeled items, in which the relevance of each item to its bag label is unknown. The goal is to predict the labels of new bags from their contents. Unlike previous MIR methods, MI-ClusterRegress can operate on bags that are structured in that they contain items drawn from a number of distinct (but unknown) distributions. MI-ClusterRegress simultaneously learns a model of the bag's internal structure, the relevance of each item, and a regression model that accurately predicts labels for new bags. We evaluated this approach on the challenging MIR problem of crop yield prediction from remote sensing data. MI-ClusterRegress provided predictions that were more accurate than those obtained with non-multiple-instance approaches or MIR methods that do not model the bag structure.
[Evaluation of estimation of prevalence ratio using bayesian log-binomial regression model].
Gao, W L; Lin, H; Liu, X N; Ren, X W; Li, J S; Shen, X P; Zhu, S L
2017-03-10
To evaluate the estimation of the prevalence ratio (PR) with a Bayesian log-binomial regression model and its application, we estimated the PR of medical care-seeking prevalence relative to caregivers' recognition of risk signs of diarrhea in their infants by using a Bayesian log-binomial regression model in the OpenBUGS software. The results showed that caregivers' recognition of infants' risk signs of diarrhea was significantly associated with a 13% increase in medical care-seeking. Meanwhile, we compared the point and interval estimates of the PR and the convergence of three models (model 1: not adjusting for covariates; model 2: adjusting for the duration of caregivers' education; model 3: adjusting for the distance between village and township and for child month-age, based on model 2) between the Bayesian log-binomial regression model and the conventional log-binomial regression model. All three Bayesian log-binomial regression models converged, and the estimated PRs were 1.130 (95%CI: 1.005-1.265), 1.128 (95%CI: 1.001-1.264) and 1.132 (95%CI: 1.004-1.267), respectively. Conventional log-binomial regression models 1 and 2 converged, with PRs of 1.130 (95%CI: 1.055-1.206) and 1.126 (95%CI: 1.051-1.203), respectively, but model 3 failed to converge, so the COPY method was used to estimate its PR, which was 1.125 (95%CI: 1.051-1.200). In addition, the point and interval estimates of the PRs from the three Bayesian log-binomial regression models differed slightly from those of the conventional log-binomial regression models, but the two approaches showed good consistency in estimating the PR. Therefore, the Bayesian log-binomial regression model can effectively estimate the PR with less misconvergence and has advantages in application compared with the conventional log-binomial regression model.
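In the simplest unadjusted case with a single binary covariate, the log-binomial MLE has a closed form and the fitted PR equals the ratio of the two observed prevalences. A sketch with invented counts (these are not the study's data; they are chosen only to reproduce a PR near 1.13):

```python
import numpy as np

# Hypothetical 2x2 table: care-seeking among infants whose caregivers
# did / did not recognize risk signs of diarrhea (made-up counts)
sought = np.array([113, 100])
total = np.array([200, 200])

p1, p0 = sought / total
pr = p1 / p0                      # prevalence ratio

# log mu = b0 + b1*x for binary x gives b1 = log(p1) - log(p0),
# so exp(b1) is exactly the PR
b1 = np.log(p1) - np.log(p0)
```

The convergence difficulties the abstract describes arise only once continuous covariates are added, because the log link no longer keeps fitted probabilities below 1 automatically.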
Cefalu, Matthew; Dominici, Francesca
2014-07-01
In environmental epidemiology, we are often faced with 2 challenges. First, an exposure prediction model is needed to estimate the exposure to an agent of interest, ideally at the individual level. Second, when estimating the health effect associated with the exposure, confounding adjustment is needed in the health-effects regression model. The current literature addresses these 2 challenges separately. That is, methods that account for measurement error in the predicted exposure often fail to acknowledge the possibility of confounding, whereas methods designed to control confounding often fail to acknowledge that the exposure has been predicted. In this article, we consider exposure prediction and confounding adjustment in a health-effects regression model simultaneously. Using theoretical arguments and simulation studies, we show that the bias of a health-effect estimate is influenced by the exposure prediction model, the type of confounding adjustment used in the health-effects regression model, and the relationship between these 2. Moreover, we argue that even with a health-effects regression model that properly adjusts for confounding, the use of a predicted exposure can bias the health-effect estimate unless all confounders included in the health-effects regression model are also included in the exposure prediction model. While the results of this article were motivated by studies of environmental contaminants, they apply more broadly to any context where an exposure needs to be predicted.
Favorable Selection, Risk Adjustment, and the Medicare Advantage Program
Morrisey, Michael A; Kilgore, Meredith L; Becker, David J; Smith, Wilson; Delzell, Elizabeth
2013-01-01
Objectives: To examine the effects of changes in payment and risk adjustment on (1) the annual enrollment and switching behavior of Medicare Advantage (MA) beneficiaries, and (2) the relative costliness of MA enrollees and disenrollees. Data: National Medicare claims data from 1999 through 2008, from the 5 percent longitudinal sample of Parts A and B expenditures. Study Design: Retrospective, fixed-effects regression analysis of July enrollment and year-long switching into and out of MA. Similar regression analysis of the costliness of those switching into (out of) MA in the 6 months prior to enrollment (after disenrollment) relative to nonswitchers in the same county over the same period. Findings: Payment generosity and more sophisticated risk adjustment were associated with substantial increases in MA enrollment and decreases in disenrollment. Claims experience of those newly switching into MA was not affected by any of the policy reforms, but disenrollment became increasingly concentrated among high-cost beneficiaries. Conclusions: Enrollment is very sensitive to payment levels. The use of more sophisticated risk adjustment did not alter favorable selection into MA, but it did affect the costliness of disenrollees. PMID:23088500
Defense mechanisms and psychological adjustment in childhood.
Sandstrom, Marlene J; Cramer, Phebe
2003-08-01
The association between maturity of defense use and psychological functioning was assessed in a group of 95 elementary school children. Defense mechanisms were measured using a valid and reliable storytelling task, and psychological adjustment was assessed through a combination of parent and self-report questionnaires. Correlational analyses indicated that children who relied on the developmentally immature defense of denial reported higher levels of self-rated social anxiety and depression and received higher ratings of parent-reported internalizing and externalizing behavior problems. However, children who made use of the developmentally mature defense of identification exhibited higher scores on perceived competence in social, academic, conduct, athletic, and global domains. Significantly, there was no relationship between children's use of denial and their level of perceived competence or between children's use of identification and their degree of maladjustment.
Poisson Regression Analysis of Illness and Injury Surveillance Data
Frome E.L., Watkins J.P., Ellis E.D.
2012-12-12
The Department of Energy (DOE) uses illness and injury surveillance to monitor morbidity and assess the overall health of the work force. Data collected from each participating site include health events and a roster file with demographic information. The source data files are maintained in a relational database and are used to obtain stratified tables of health event counts and person-time at risk that serve as the starting point for Poisson regression analysis. The explanatory variables that define these tables are age, gender, occupational group, and time. Typical response variables of interest are the number of absences due to illness or injury, i.e., the response variable is a count. Poisson regression methods are used to describe the effect of the explanatory variables on the health event rates using a log-linear main effects model. Results of fitting the main effects model are summarized in tabular and graphical form, and an interpretation of the model parameters is provided. An analysis of deviance table is used to evaluate the importance of each of the explanatory variables on the event rate of interest and to determine whether interaction terms should be considered in the analysis. Although Poisson regression methods are widely used in the analysis of count data, there are situations in which over-dispersion occurs. This could be due to lack of fit of the regression model, extra-Poisson variation, or both. A score test statistic and regression diagnostics are used to identify over-dispersion. A quasi-likelihood method of moments procedure is used to evaluate and adjust for extra-Poisson variation when necessary. Two examples are presented using respiratory disease absence rates at two DOE sites to illustrate the methods and interpretation of the results. In the first example the Poisson main effects model is adequate. In the second example the score test indicates considerable over-dispersion, and a more detailed analysis attributes the over-dispersion to extra-Poisson variation.
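The core of this setup, a log-linear Poisson model with a log person-time offset plus a Pearson check for over-dispersion, can be sketched with a hand-rolled IRLS fit. The data below are synthetic (one invented covariate and invented rates, not DOE surveillance data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
z = rng.uniform(-2, 2, n)                    # one standardized covariate (say, age)
X = np.column_stack([np.ones(n), z])
pt = rng.uniform(0.5, 2.0, n)                # person-years at risk
beta_true = np.array([-2.0, 0.3])
y = rng.poisson(pt * np.exp(X @ beta_true))  # absence counts

# IRLS for the log-linear main-effects model with offset log(person-time)
beta = np.zeros(2)
offset = np.log(pt)
for _ in range(25):
    mu = np.exp(X @ beta + offset)
    W = mu                                   # Poisson working weights
    work = X @ beta + (y - mu) / mu          # working response
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * work))

# Pearson dispersion statistic: ~1 under Poisson, >1 signals over-dispersion
mu = np.exp(X @ beta + offset)
phi = np.sum((y - mu) ** 2 / mu) / (n - 2)
```

When phi is clearly above 1, a quasi-likelihood adjustment simply inflates the standard errors by sqrt(phi), which is the spirit of the method-of-moments correction the abstract describes.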
Chang, Chun-Ming; Yin, Wen-Yao; Wei, Chang-Kao; Wu, Chin-Chia; Su, Yu-Chieh; Yu, Chia-Hui; Lee, Ching-Chih
2016-01-01
Background: Identification of patients at risk of death from cancer surgery should aid in preoperative preparation. The purpose of this study is to assess and adjust the age-adjusted Charlson comorbidity index (ACCI) to identify cancer patients with an increased risk of perioperative mortality. Methods: We identified 156,151 patients undergoing surgery for one of the ten common cancers between 2007 and 2011 in the Taiwan National Health Insurance Research Database. Half of the patients were randomly selected, and multivariate logistic regression analysis was used to develop an adjusted-ACCI score for estimating the risk of 90-day mortality from the variables of the original ACCI. The score was validated. The association between the score and perioperative mortality was analyzed. Results: The adjusted-ACCI score yielded better discrimination of mortality after cancer surgery than the original ACCI score, with c-statistics of 0.75 versus 0.71. Age over 80 years, age 70–80 years, and renal disease had the strongest impact on mortality, with hazard ratios of 8.40, 3.63, and 3.09 (P < 0.001), respectively. The overall 90-day mortality rates in the entire cohort were 0.9%, 2.9%, 7.0%, and 13.2% in the four risk groups stratified by the adjusted-ACCI score; the adjusted hazard ratios for 90-day mortality for scores 4–7, 8–11, and ≥ 12 were 2.84, 6.07, and 11.17 (P < 0.001), respectively, compared to scores 0–3. Conclusions: The adjusted-ACCI score helps to identify patients with a higher risk of 90-day mortality after cancer surgery. It might be particularly helpful for the preoperative evaluation of patients over 80 years of age. PMID:26848761
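The c-statistic used to compare the adjusted and original ACCI scores is the concordance probability: the chance that a randomly chosen patient who died carries a higher score than one who survived. It can be computed directly (the scores and outcomes below are toy values, not study data):

```python
import numpy as np

def c_statistic(score, event):
    """Concordance: P(score_case > score_control), ties counting one half."""
    s1 = score[event == 1]            # scores of patients who died
    s0 = score[event == 0]            # scores of survivors
    diff = s1[:, None] - s0[None, :]  # all case-control score differences
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size

# Toy risk scores with 90-day mortality indicators (invented)
score = np.array([3, 5, 7, 9, 4, 8])
event = np.array([0, 1, 0, 1, 0, 1])
c = c_statistic(score, event)
```

A value of 0.5 means the score is no better than chance; the paper's improvement from 0.71 to 0.75 is an increase in exactly this pairwise-ranking probability.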
Molloy, Cynthia A; Morrow, Ardythe L; Meinzen-Derr, Jareen; Dawson, Geraldine; Bernier, Raphael; Dunn, Michelle; Hyman, Susan L; McMahon, William M; Goudie-Nice, Julie; Hepburn, Susan; Minshew, Nancy; Rogers, Sally; Sigman, Marian; Spence, M Anne; Tager-Flusberg, Helen; Volkmar, Fred R; Lord, Catherine
2006-04-01
A multicenter study of 308 children with Autism Spectrum Disorder (ASD) was conducted through the Collaborative Programs of Excellence in Autism (CPEA), sponsored by the National Institute of Child Health and Human Development, to compare the family history of autoimmune disorders in children with ASD with and without a history of regression. A history of regression was determined from the results of the Autism Diagnostic Interview-Revised (ADI-R). Family history of autoimmune disorders was obtained by telephone interview. Regression was significantly associated with a family history of autoimmune disorders (adjusted OR=1.89; 95% CI: 1.17, 3.10). The only specific autoimmune disorder found to be associated with regression was autoimmune thyroid disease (adjusted OR=2.09; 95% CI: 1.28, 3.41).
Time course for tail regression during metamorphosis of the ascidian Ciona intestinalis.
Matsunobu, Shohei; Sasakura, Yasunori
2015-09-01
In most ascidians, the tadpole-like swimming larvae dramatically change their body-plans during metamorphosis and develop into sessile adults. The mechanisms of ascidian metamorphosis have been researched and debated for many years. Until now information on the detailed time course of the initiation and completion of each metamorphic event has not been described. One dramatic and important event in ascidian metamorphosis is tail regression, in which ascidian larvae lose their tails to adjust themselves to sessile life. In the present study, we measured the time associated with tail regression in the ascidian Ciona intestinalis. Larvae are thought to acquire competency for each metamorphic event in certain developmental periods. We show that the timing with which the competence for tail regression is acquired is determined by the time since hatching, and this timing is not affected by the timing of post-hatching events such as adhesion. Because larvae need to adhere to substrates with their papillae to induce tail regression, we measured the duration for which larvae need to remain adhered in order to initiate tail regression and the time needed for the tail to regress. Larvae acquire the ability to adhere to substrates before they acquire tail regression competence. We found that when larvae adhered before they acquired tail regression competence, they were able to remember the experience of adhesion until they acquired the ability to undergo tail regression. The time course of the events associated with tail regression provides a valuable reference, upon which the cellular and molecular mechanisms of ascidian metamorphosis can be elucidated.
Reconstruction of missing daily streamflow data using dynamic regression models
NASA Astrophysics Data System (ADS)
Tencaliec, Patricia; Favre, Anne-Catherine; Prieur, Clémentine; Mathevet, Thibault
2015-12-01
River discharge is one of the most important quantities in hydrology. It provides fundamental records for water resources management and climate change monitoring, and even very short gaps in these data can produce markedly different analysis outputs. Reconstructing the missing portions of incomplete data sets is therefore an important step for the performance of environmental models, engineering, and research applications, and it presents a great challenge. The objective of this paper is to introduce an effective technique for reconstructing missing daily discharge data when one has access only to daily streamflow data. The proposed procedure uses a combination of regression and autoregressive integrated moving average (ARIMA) models, called a dynamic regression model. This model uses the linear relationship between neighboring, correlated stations and then adjusts the residual term by fitting an ARIMA structure. Application of the model to eight daily streamflow series for the Durance river watershed showed that the model yields reliable estimates for the missing data in the time series. Simulation studies were also conducted to evaluate the performance of the procedure.
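A minimal sketch of the two-step dynamic-regression idea, on synthetic series rather than the Durance data, with an AR(1) residual structure standing in for a full ARIMA fit:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 400
# Synthetic neighbor-station series (seasonal signal plus noise)
neighbor = 50 + 10 * np.sin(np.arange(T) / 20) + rng.normal(0, 1, T)
# Target discharge: linear in the neighbor station plus AR(1) residuals
eps = np.zeros(T)
for t in range(1, T):
    eps[t] = 0.7 * eps[t - 1] + rng.normal(0, 1)
target = 5 + 0.8 * neighbor + eps

# Step 1: regress the target on the correlated neighbor station
A = np.column_stack([np.ones(T), neighbor])
coef, *_ = np.linalg.lstsq(A, target, rcond=None)
resid = target - A @ coef

# Step 2: adjust the residual term by fitting an AR(1) structure
phi = (resid[1:] @ resid[:-1]) / (resid[:-1] @ resid[:-1])

# Reconstruct a "missing" value from the neighbor plus the carried-over residual
t = 250
estimate = coef[0] + coef[1] * neighbor[t] + phi * resid[t - 1]
```

Carrying the autocorrelated residual forward is what distinguishes this from plain regression infilling: the reconstruction inherits the local departure of the target station from its neighbors, not just their average relationship.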
Spatial Autocorrelation Approaches to Testing Residuals from Least Squares Regression.
Chen, Yanguang
2016-01-01
In geo-statistics, the Durbin-Watson test is frequently employed to detect the presence of residual serial correlation from least squares regression analyses. However, the Durbin-Watson statistic is only suitable for ordered time or spatial series. If the variables comprise cross-sectional data coming from spatial random sampling, the test will be ineffectual because the value of the Durbin-Watson statistic depends on the sequence of the data points. This paper develops two new statistics for testing serial correlation of residuals from least squares regression based on spatial samples. By analogy with the new form of Moran's index, an autocorrelation coefficient is defined with a standardized residual vector and a normalized spatial weight matrix. Then, by analogy with the Durbin-Watson statistic, two types of new serial correlation indices are constructed. As a case study, the two newly presented statistics are applied to a spatial sample of 29 of China's regions. The results show that the new spatial autocorrelation models can be used to test the serial correlation of residuals from regression analysis. In practice, the new statistics can make up for the deficiencies of the Durbin-Watson test.
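The order-dependence that motivates the paper is easy to demonstrate by computing the Durbin-Watson statistic directly: shuffling an autocorrelated residual series, as happens implicitly under spatial random sampling, pushes the statistic back toward 2 and hides the correlation:

```python
import numpy as np

def durbin_watson(e):
    # DW = sum of squared successive differences over sum of squares;
    # ~2 for uncorrelated residuals, ~2*(1 - rho) under AR(1) correlation
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(3)
white = rng.normal(0, 1, 1000)      # independent residuals
ar = np.zeros(1000)                 # AR(1) residuals with rho = 0.8
for t in range(1, 1000):
    ar[t] = 0.8 * ar[t - 1] + rng.normal(0, 1)

dw_white = durbin_watson(white)     # near 2
dw_ar = durbin_watson(ar)           # near 2*(1 - 0.8) = 0.4
# Reordering the same correlated residuals destroys the signal
dw_shuffled = durbin_watson(ar[rng.permutation(1000)])
```

A spatial-weight-matrix statistic of the Moran type avoids this by measuring correlation between neighbors in space rather than neighbors in storage order.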
12 CFR 19.240 - Inflation adjustments.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 1 2012-01-01 2012-01-01 false Inflation adjustments. 19.240 Section 19.240... PROCEDURE Civil Money Penalty Inflation Adjustments § 19.240 Inflation adjustments. (a) The maximum amount of each civil money penalty within the OCC's jurisdiction is adjusted in accordance with the...
12 CFR 19.240 - Inflation adjustments.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 1 2010-01-01 2010-01-01 false Inflation adjustments. 19.240 Section 19.240... PROCEDURE Civil Money Penalty Inflation Adjustments § 19.240 Inflation adjustments. (a) The maximum amount... Civil Penalties Inflation Adjustment Act of 1990 (28 U.S.C. 2461 note) as follows: ER10NO08.001 (b)...
12 CFR 19.240 - Inflation adjustments.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 1 2011-01-01 2011-01-01 false Inflation adjustments. 19.240 Section 19.240... PROCEDURE Civil Money Penalty Inflation Adjustments § 19.240 Inflation adjustments. (a) The maximum amount... Civil Penalties Inflation Adjustment Act of 1990 (28 U.S.C. 2461 note) as follows: ER10NO08.001 (b)...
Adjusting to University: The Hong Kong Experience
ERIC Educational Resources Information Center
Yau, Hon Keung; Sun, Hongyi; Cheng, Alison Lai Fong
2012-01-01
Students' adjustment to the university environment is an important factor in predicting university outcomes and is crucial to their future achievements. University support to students' transition to university life can be divided into three dimensions: academic adjustment, social adjustment and psychological adjustment. However, these…
Geodesic least squares regression for scaling studies in magnetic confinement fusion
Verdoolaege, Geert
2015-01-13
In regression analyses for deriving scaling laws in various scientific disciplines, standard regression methods have usually been applied, of which ordinary least squares (OLS) is the most popular. However, concerns have been raised with respect to several assumptions underlying OLS in its application to scaling laws. Here we discuss a new regression method that is robust in the presence of significant uncertainty in both the data and the regression model. The method, which we call geodesic least squares regression (GLS), is based on minimization of the Rao geodesic distance on a probabilistic manifold. We demonstrate the superiority of the method using synthetic data, and we present an application to the scaling law for the power threshold for the transition to the high-confinement regime in magnetic confinement fusion devices.
MULTILINEAR TENSOR REGRESSION FOR LONGITUDINAL RELATIONAL DATA.
Hoff, Peter D
2015-09-01
A fundamental aspect of relational data, such as from a social network, is the possibility of dependence among the relations. In particular, the relations between members of one pair of nodes may have an effect on the relations between members of another pair. This article develops a type of regression model to estimate such effects in the context of longitudinal and multivariate relational data, or other data that can be represented in the form of a tensor. The model is based on a general multilinear tensor regression model, a special case of which is a tensor autoregression model in which the tensor of relations at one time point is parsimoniously regressed on relations from previous time points. This is done via a separable, or Kronecker-structured, regression parameter along with a separable covariance model. In the context of an analysis of longitudinal multivariate relational data, it is shown how the multilinear tensor regression model can represent patterns that often appear in relational and network data, such as reciprocity and transitivity.
Hyperglycemia impairs atherosclerosis regression in mice.
Gaudreault, Nathalie; Kumar, Nikit; Olivas, Victor R; Eberlé, Delphine; Stephens, Kyle; Raffai, Robert L
2013-12-01
Diabetic patients are known to be more susceptible to atherosclerosis and its associated cardiovascular complications. However, the effects of hyperglycemia on atherosclerosis regression remain unclear. We hypothesized that hyperglycemia impairs atherosclerosis regression by modulating the biological function of lesional macrophages. HypoE (Apoe(h/h)Mx1-Cre) mice express low levels of apolipoprotein E (apoE) and develop atherosclerosis when fed a high-fat diet. Atherosclerosis regression occurs in these mice upon plasma lipid lowering induced by a change in diet and the restoration of apoE expression. We examined the morphological characteristics of regressed lesions and assessed the biological function of lesional macrophages isolated with laser-capture microdissection in euglycemic and hyperglycemic HypoE mice. Hyperglycemia induced by streptozotocin treatment impaired lesion size reduction (36% versus 14%) and lipid loss (38% versus 26%) after the reversal of hyperlipidemia. However, decreases in lesional macrophage content and remodeling in both groups of mice were similar. Gene expression analysis revealed that hyperglycemia impaired cholesterol transport by modulating ATP-binding cassette A1, ATP-binding cassette G1, scavenger receptor class B family member (CD36), scavenger receptor class B1, and wound healing pathways in lesional macrophages during atherosclerosis regression. Hyperglycemia impairs both reduction in size and loss of lipids from atherosclerotic lesions upon plasma lipid lowering without significantly affecting the remodeling of the vascular wall.
Regression models for estimating coseismic landslide displacement
Jibson, R.W.
2007-01-01
Newmark's sliding-block model is widely used to estimate coseismic slope performance. Early efforts to develop simple regression models to estimate Newmark displacement were based on analysis of the small number of strong-motion records then available. The current availability of a much larger set of strong-motion records dictates that these regression equations be updated. Regression equations were generated using data derived from a collection of 2270 strong-motion records from 30 worldwide earthquakes. The regression equations predict Newmark displacement in terms of (1) critical acceleration ratio, (2) critical acceleration ratio and earthquake magnitude, (3) Arias intensity and critical acceleration, and (4) Arias intensity and critical acceleration ratio. These equations are well constrained and fit the data well (71% < R^2 < 88%), but they have standard deviations of about 0.5 log units, such that the range defined by the mean ± one standard deviation spans about an order of magnitude. These regression models, therefore, are not recommended for use in site-specific design, but rather for regional-scale seismic landslide hazard mapping or for rapid preliminary screening of sites. © 2007 Elsevier B.V. All rights reserved.
Coverage-adjusted entropy estimation.
Vu, Vincent Q; Yu, Bin; Kass, Robert E
2007-09-20
Data on 'neural coding' have frequently been analyzed using information-theoretic measures. These formulations involve the fundamental and generally difficult statistical problem of estimating entropy. We review briefly several methods that have been advanced to estimate entropy and highlight a method, the coverage-adjusted entropy estimator (CAE), due to Chao and Shen, that appeared recently in the environmental statistics literature. This method begins with the elementary Horvitz-Thompson estimator, developed for sampling from a finite population, and adjusts for the potential new species that have not yet been observed in the sample; these become the new patterns or 'words' in a spike train that have not yet been observed. The adjustment is due to I. J. Good, and is called the Good-Turing coverage estimate. We provide a new empirical regularization derivation of the coverage-adjusted probability estimator, which shrinks the maximum likelihood estimate. We prove that the CAE is consistent and first-order optimal, with rate O_P(1/log n), in the class of distributions with finite entropy variance and that, within the class of distributions with finite qth moment of the log-likelihood, the Good-Turing coverage estimate and the total probability of unobserved words converge at rate O_P(1/(log n)^q). We then provide a simulation study of the estimator with standard distributions and examples from neuronal data, where observations are dependent. The results show that, with a minor modification, the CAE performs much better than the MLE and is better than the best upper bound estimator, due to Paninski, when the number of possible words m is unknown or infinite.
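The estimator has a compact closed form. A sketch under the assumption that not every observed word is a singleton (the word counts below are invented):

```python
import numpy as np

def cae(counts):
    """Chao-Shen coverage-adjusted entropy estimate, in nats.
    Assumes not all observed counts are singletons (else coverage is 0)."""
    counts = np.asarray(counts, dtype=float)
    counts = counts[counts > 0]
    n = counts.sum()
    phat = counts / n                # maximum likelihood probabilities
    f1 = np.sum(counts == 1)         # number of words observed exactly once
    C = 1.0 - f1 / n                 # Good-Turing coverage estimate
    pa = C * phat                    # coverage-adjusted (shrunken) probabilities
    # Horvitz-Thompson weighting corrects for words never observed:
    # 1 - (1 - pa)^n is the chance a word of probability pa appears at all
    return -np.sum(pa * np.log(pa) / (1.0 - (1.0 - pa) ** n))
```

With no singletons the coverage is 1 and the estimate is essentially the plug-in MLE; when singletons are present, the shrinkage plus the Horvitz-Thompson denominator lift the estimate above the downward-biased MLE.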
Assessment of Weighted Quantile Sum Regression for Modeling Chemical Mixtures and Cancer Risk
Czarnota, Jenna; Gennings, Chris; Wheeler, David C
2015-01-01
In evaluation of cancer risk related to environmental chemical exposures, the effect of many chemicals on disease is ultimately of interest. However, because of potentially strong correlations among chemicals that occur together, traditional regression methods suffer from collinearity effects, including regression coefficient sign reversal and variance inflation. In addition, penalized regression methods designed to remediate collinearity may have limitations in selecting the truly bad actors among many correlated components. The recently proposed method of weighted quantile sum (WQS) regression attempts to overcome these problems by estimating a body burden index, which identifies important chemicals in a mixture of correlated environmental chemicals. Our focus was on assessing through simulation studies the accuracy of WQS regression in detecting subsets of chemicals associated with health outcomes (binary and continuous) in site-specific analyses and in non-site-specific analyses. We also evaluated the performance of the penalized regression methods of lasso, adaptive lasso, and elastic net in correctly classifying chemicals as bad actors or unrelated to the outcome. We based the simulation study on data from the National Cancer Institute Surveillance Epidemiology and End Results Program (NCI-SEER) case–control study of non-Hodgkin lymphoma (NHL) to achieve realistic exposure situations. Our results showed that WQS regression had good sensitivity and specificity across a variety of conditions considered in this study. The shrinkage methods had a tendency to incorrectly identify a large number of components, especially in the case of strong association with the outcome. PMID:26005323
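A simplified sketch of the WQS index construction described above. Real WQS regression estimates the nonnegative weights (constrained to sum to 1) via bootstrap-constrained optimization; here the weights are supplied directly for illustration:

```python
def quartile_scores(values):
    """Score each observation 0-3 by the quartile of `values` it falls in."""
    ranked = sorted(values)
    n = len(ranked)
    cuts = [ranked[n // 4], ranked[n // 2], ranked[3 * n // 4]]
    return [sum(v >= c for c in cuts) for v in values]

def wqs_index(chemical_columns, weights):
    """Weighted quantile sum: nonnegative weights summing to 1 applied to
    the quartile scores of each chemical, giving one index per subject."""
    assert all(w >= 0 for w in weights) and abs(sum(weights) - 1.0) < 1e-9
    scored = [quartile_scores(col) for col in chemical_columns]
    n_subjects = len(chemical_columns[0])
    return [sum(w * s[i] for w, s in zip(weights, scored))
            for i in range(n_subjects)]

# Two hypothetical chemicals measured on four subjects, equal weights.
print(wqs_index([[1, 2, 3, 4], [4, 3, 2, 1]], [0.5, 0.5]))
```

The resulting index (bounded between 0 and 3 here) would then enter an ordinary regression model as a single body-burden covariate.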
Spontaneous skin regression and predictors of skin regression in Thai scleroderma patients.
Foocharoen, Chingching; Mahakkanukrauh, Ajanee; Suwannaroj, Siraphop; Nanagara, Ratanavadee
2011-09-01
Skin tightness is a major clinical manifestation of systemic sclerosis (SSc). Importantly for both clinicians and patients, spontaneous regression of the fibrosis process has been documented. The purpose of this study is to identify the incidence and related clinical characteristics of spontaneous regression among Thai SSc patients. A historical cohort with 4 years of follow-up was performed among SSc patients over 15 years of age diagnosed with SSc between January 1, 2005 and December 31, 2006 in Khon Kaen, Thailand. The start date was the date of the first symptom and the end date was the date of the skin score ≤2. To estimate the respective probability of regression and to assess the associated factors, the Kaplan-Meier method and Cox regression analysis were used. One hundred seventeen cases of SSc were included with a female to male ratio of 1.5:1. Thirteen patients (11.1%) experienced regression. The incidence rate of spontaneous skin regression was 0.31 per 100 person-months and the average duration of SSc at the time of regression was 35.9±15.6 months (range, 15.7-60 months). The factors that negatively correlated with regression were (a) diffuse cutaneous type, (b) Raynaud's phenomenon, (c) esophageal dysmotility, and (d) colchicine treatment at onset, with respective hazard ratios (HR) of 0.19, 0.19, 0.26, and 0.20. By contrast, the factor that positively correlated with regression was active alveolitis with cyclophosphamide therapy at onset, with an HR of 4.23 (95% CI, 1.23-14.10). After regression analysis, only Raynaud's phenomenon at onset and diffuse cutaneous type had a significantly negative correlation to regression. Spontaneous regression of the skin fibrosis process was not uncommon among Thai SSc patients. The factors predicting a poor cutaneous outcome were Raynaud's phenomenon and the diffuse cutaneous type, while early cyclophosphamide therapy might be related to a better skin outcome.
Analyzing industrial energy use through ordinary least squares regression models
NASA Astrophysics Data System (ADS)
Golden, Allyson Katherine
Extensive research has been performed using regression analysis and calibrated simulations to create baseline energy consumption models for residential buildings and commercial institutions. However, few attempts have been made to discuss the applicability of these methodologies to establish baseline energy consumption models for industrial manufacturing facilities. In the few studies of industrial facilities, the presented linear change-point and degree-day regression analyses illustrate ideal cases. It follows that there is a need in the established literature to discuss the methodologies and to determine their applicability for establishing baseline energy consumption models of industrial manufacturing facilities. The thesis determines the effectiveness of simple inverse linear statistical regression models when establishing baseline energy consumption models for industrial manufacturing facilities. Ordinary least squares change-point and degree-day regression methods are used to create baseline energy consumption models for nine different case studies of industrial manufacturing facilities located in the southeastern United States. The influence of ambient dry-bulb temperature and production on total facility energy consumption is observed. The energy consumption behavior of industrial manufacturing facilities is only sometimes sufficiently explained by temperature, production, or a combination of the two variables. This thesis also provides methods for generating baseline energy models that are straightforward and accessible to anyone in the industrial manufacturing community. The methods outlined in this thesis may be easily replicated by anyone who possesses basic spreadsheet software and general knowledge of the relationship between energy consumption and weather, production, or other influential variables. With the help of simple inverse linear regression models, industrial manufacturing facilities may better understand their energy consumption and
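A change-point regression of the kind mentioned above can be sketched as a grid search over candidate change points, solving a two-parameter OLS fit at each candidate; this is an illustrative sketch, not the thesis's exact procedure:

```python
def fit_changepoint(temps, energy, candidates):
    """Fit E = b0 + b1 * max(0, T - Tcp) by grid search over Tcp,
    solving the two-parameter OLS problem at each candidate change point
    and keeping the fit with the smallest sum of squared errors."""
    best = None
    n = len(temps)
    for tcp in candidates:
        x = [max(0.0, t - tcp) for t in temps]
        sx, sy = sum(x), sum(energy)
        sxx = sum(v * v for v in x)
        sxy = sum(v * e for v, e in zip(x, energy))
        denom = n * sxx - sx * sx
        if denom == 0:            # no observations above the change point
            continue
        b1 = (n * sxy - sx * sy) / denom
        b0 = (sy - b1 * sx) / n
        sse = sum((e - b0 - b1 * xi) ** 2 for e, xi in zip(energy, x))
        if best is None or sse < best[0]:
            best = (sse, tcp, b0, b1)
    return best  # (SSE, change point, baseline load, cooling slope)
```

For synthetic data generated with a change point at 65, baseline load 100, and slope 2, the grid search recovers those values exactly.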
Epidemiology of CKD Regression in Patients under Nephrology Care.
Borrelli, Silvio; Leonardis, Daniela; Minutolo, Roberto; Chiodini, Paolo; De Nicola, Luca; Esposito, Ciro; Mallamaci, Francesca; Zoccali, Carmine; Conte, Giuseppe
2015-01-01
Chronic Kidney Disease (CKD) regression is considered an infrequent renal outcome, limited to early stages, and associated with higher mortality. However, prevalence, prognosis and the clinical correlates of CKD regression remain undefined in the setting of nephrology care. This is a multicenter prospective study in 1418 patients with established CKD (eGFR: 60-15 ml/min/1.73 m²) under nephrology care in 47 outpatient clinics in Italy for at least one year. We defined CKD regressors as patients with a ΔGFR ≥0 ml/min/1.73 m²/year. ΔGFR was estimated as the absolute difference between eGFR measured at baseline and at the follow-up visit after 18-24 months. Outcomes were end-stage renal disease (ESRD) and all-cause mortality. A total of 391 patients (27.6%) were identified as regressors, as they showed an eGFR increase between the baseline visit in the renal clinic and the follow-up visit. In multivariate regression analyses the regressor status was not associated with CKD stage. Low proteinuria was the main factor associated with CKD regression, accounting per se for 48% of the likelihood of this outcome. Lower systolic blood pressure, higher BMI and absence of autosomal polycystic kidney disease (PKD) were additional predictors of CKD regression. In regressors, ESRD risk was 72% lower (HR: 0.28; 95% CI 0.14-0.57; p<0.0001) while mortality risk did not differ from that in non-regressors (HR: 1.16; 95% CI 0.73-1.83; p = 0.540). Spline models showed that the reduction of ESRD risk associated with positive ΔGFR was attenuated in advanced CKD stage. CKD regression occurs in about one-fourth of patients receiving renal care in nephrology units and correlates with low proteinuria, lower blood pressure, and the absence of PKD. This condition portends better renal prognosis, mostly in earlier CKD stages, with no excess risk for mortality.
Sternberg, Maya R.; Schleicher, Rosemary L.; Pfeiffer, Christine M.
2016-01-01
The collection of papers in this journal supplement provides insight into the association of various covariates with concentrations of biochemical indicators of diet and nutrition (biomarkers), beyond age, race and sex using linear regression. We studied 10 specific sociodemographic and lifestyle covariates in combination with 29 biomarkers from NHANES 2003–2006 for persons ≥20 y. The covariates were organized into 2 chunks, sociodemographic (age, sex, race-ethnicity, education, and income) and lifestyle (dietary supplement use, smoking, alcohol consumption, BMI, and physical activity) and fit in hierarchical fashion using each chunk or set of related variables to determine how covariates, jointly, are related to biomarker concentrations. In contrast to many regression modeling applications, all variables were retained in a full regression model regardless of statistical significance to preserve the interpretation of the statistical properties of beta coefficients, P-values and CI, and to keep the interpretation consistent across a set of biomarkers. The variables were pre-selected prior to data analysis and the data analysis plan was designed at the outset to minimize the reporting of false positive findings by limiting the amount of preliminary hypothesis testing. While we generally found that demographic differences seen in biomarkers were over- or under-estimated when ignoring other key covariates, the demographic differences generally remained statistically significant after adjusting for sociodemographic and lifestyle variables. These papers are intended to provide a foundation to researchers to help them generate hypotheses for future studies or data analyses and/or develop predictive regression models using the wealth of NHANES data. PMID:23596165
Parametric modeling of quantile regression coefficient functions.
Frumento, Paolo; Bottai, Matteo
2016-03-01
Estimating the conditional quantiles of outcome variables of interest is frequent in many research areas, and quantile regression is foremost among the utilized methods. The coefficients of a quantile regression model depend on the order of the quantile being estimated. For example, the coefficients for the median are generally different from those of the 10th centile. In this article, we describe an approach to modeling the regression coefficients as parametric functions of the order of the quantile. This approach may have advantages in terms of parsimony and efficiency, and may expand the potential of statistical modeling. Goodness-of-fit measures and testing procedures are discussed, and the results of a simulation study are presented. We apply the method to analyze the data that motivated this work. The described method is implemented in the qrcm R package.
Computing aspects of power for multiple regression.
Dunlap, William P; Xin, Xue; Myers, Leann
2004-11-01
Rules of thumb for power in multiple regression research abound. Most such rules dictate the necessary sample size, but they are based only upon the number of predictor variables, usually ignoring other critical factors necessary to compute power accurately. Other guides to power in multiple regression typically use approximate rather than precise equations for the underlying distribution; entail complex preparatory computations; require interpolation with tabular presentation formats; run only under software such as Mathematica or SAS that may not be immediately available to the user; or are sold to the user as parts of power computation packages. In contrast, the program we offer herein is immediately downloadable at no charge, runs under Windows, is interactive, self-explanatory, flexible enough to fit the user's own regression problems, and is as accurate as single precision computation ordinarily permits.
Uncertainty quantification in DIC with Kriging regression
NASA Astrophysics Data System (ADS)
Wang, Dezhi; DiazDelaO, F. A.; Wang, Weizhuo; Lin, Xiaoshan; Patterson, Eann A.; Mottershead, John E.
2016-03-01
A Kriging regression model is developed as a post-processing technique for the treatment of measurement uncertainty in classical subset-based Digital Image Correlation (DIC). Regression is achieved by regularising the sample-point correlation matrix using a local, subset-based, assessment of the measurement error with assumed statistical normality and based on the Sum of Squared Differences (SSD) criterion. This leads to a Kriging-regression model in the form of a Gaussian process representing uncertainty on the Kriging estimate of the measured displacement field. The method is demonstrated using numerical and experimental examples. Kriging estimates of displacement fields are shown to be in excellent agreement with 'true' values for the numerical cases and in the experimental example uncertainty quantification is carried out using the Gaussian random process that forms part of the Kriging model. The root mean square error (RMSE) on the estimated displacements is produced and standard deviations on local strain estimates are determined.
ERIC Educational Resources Information Center
Yoleri, Sibel
2015-01-01
The relationships among school adjustment, victimisation, and gender were investigated with 284 Turkish children aged between five and six years. Teacher Rating Scale of School Adjustment, The Preschool Behaviour Questionnaire, and Peer Victimisation Scale were used in this study. Analyses indicated that children's behaviour problems and…
ERIC Educational Resources Information Center
Acock, Alan C.; Kiecolt, K. Jill
1989-01-01
In analyses controlling for socioeconomic status (SES), parental divorce during adolescence produced few negative effects on adult adjustment, and father's death during adolescence produced none. However, SES during adolescence and current SES affected nearly all aspects of adult adjustment, as did mother's and own educational attainment. Contains…
Typology of Emotional and Behavioral Adjustment for Low-Income Children: A Child-Centered Approach
ERIC Educational Resources Information Center
Bulotsky-Shearer, Rebecca J.; Fantuzzo, John W.; McDermott, Paul A.
2010-01-01
An empirical typology of classroom emotional and behavioral adjustment was developed for preschool children living in urban poverty. Multistage hierarchical cluster analyses were applied to identify six distinct and reliable subtypes of classroom adjustment, differentiated by high and low levels of behavioral (aggressive, inattentive,…
Jones, Lauren A; Campbell, Jonathan M
2010-01-01
We investigated correlates of language regression for children diagnosed with autism spectrum disorders (ASD). Using archival data, children diagnosed with ASD (N = 114, M age = 41.4 months) were divided into four groups based on language development (i.e., regression, plateau, general delay, no delay) and compared on developmental, adaptive behavior, symptom severity, and behavioral adjustment variables. Few overall differences emerged between groups, including similar non-language developmental history, equal risk for seizure disorder, and comparable behavioral adjustment. Groups did not differ with respect to autism symptomatology as measured by the Autism Diagnostic Observation Schedule and Autism Diagnostic Interview-Revised. Language plateau was associated with better adaptive social skills as measured by the Vineland Adaptive Behavior Scales. Implications and study limitations are discussed.
Zou, G Y; Donner, Allan
2013-12-01
The Poisson regression model using a sandwich variance estimator has become a viable alternative to the logistic regression model for the analysis of prospective studies with independent binary outcomes. The primary advantage of this approach is that it readily provides covariate-adjusted risk ratios and associated standard errors. In this article, the model is extended to studies with correlated binary outcomes as arise in longitudinal or cluster randomization studies. The key step involves a cluster-level grouping strategy for the computation of the middle term in the sandwich estimator. For a single binary exposure variable without covariate adjustment, this approach results in risk ratio estimates and standard errors that are identical to those found in the survey sampling literature. Simulation results suggest that it is reliable for studies with correlated binary data, provided the total number of clusters is at least 50. Data from observational and cluster randomized studies are used to illustrate the methods.
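For the single-binary-exposure case noted above, the robust-Poisson risk ratio and its sandwich standard error reduce to the familiar closed forms from the survey-sampling literature; a sketch with hypothetical event counts:

```python
import math

def risk_ratio_robust(a, n1, b, n0):
    """Risk ratio for exposed (a events among n1) vs unexposed
    (b events among n0), with the robust (sandwich) standard error
    of log RR and a 95% Wald confidence interval.

    For this simple 2x2 case the sandwich SE equals the standard
    delta-method formula: sqrt(1/a - 1/n1 + 1/b - 1/n0)."""
    rr = (a / n1) / (b / n0)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n0)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, (lo, hi)

# Hypothetical counts: 30/100 exposed vs 15/100 unexposed gives RR = 2.0.
rr, ci = risk_ratio_robust(30, 100, 15, 100)
print(rr, ci)
```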
Salience Assignment for Multiple-Instance Regression
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Lane, Terran
2007-01-01
We present a Multiple-Instance Learning (MIL) algorithm for determining the salience of each item in each bag with respect to the bag's real-valued label. We use an alternating-projections constrained optimization approach to simultaneously learn a regression model and estimate all salience values. We evaluate this algorithm on a significant real-world problem, crop yield modeling, and demonstrate that it provides more extensive, intuitive, and stable salience models than Primary-Instance Regression, which selects a single relevant item from each bag.
The Lax-Onsager regression `theorem' revisited
NASA Astrophysics Data System (ADS)
Lax, Melvin
2000-05-01
It is stated by Ford and O'Connell in this festschrift issue and elsewhere that "there is no quantum regression theorem" although Lax "obtained a formula for correlation in a driven quantum system that has come to be called the quantum regression theorem". This produces a puzzle: "How can it be that a non-existent theorem gives correct results?" Clarification will be provided in this paper by a description of the Lax procedure, with a quantitative estimate of the error for a damped harmonic oscillator based on expressions published in the 1960s.
Demonstration of a Fiber Optic Regression Probe
NASA Technical Reports Server (NTRS)
Korman, Valentin; Polzin, Kurt A.
2010-01-01
The capability to provide localized, real-time monitoring of material regression rates in various applications has the potential to provide a new stream of data for development testing of various components and systems, as well as serving as a monitoring tool in flight applications. These applications include, but are not limited to, the regression of a combusting solid fuel surface, the ablation of the throat in a chemical rocket or the heat shield of an aeroshell, and the monitoring of erosion in long-life plasma thrusters. The rate of regression in the first application is very fast, while the second and third are increasingly slower. A recent fundamental sensor development effort has led to a novel regression, erosion, and ablation sensor technology (REAST). The REAST sensor allows for measurement of real-time surface erosion rates at a discrete surface location. The sensor is optical, using two different, co-located fiber-optics to perform the regression measurement. The disparate optical transmission properties of the two fiber-optics makes it possible to measure the regression rate by monitoring the relative light attenuation through the fibers. As the fibers regress along with the parent material in which they are embedded, the relative light intensities through the two fibers changes, providing a measure of the regression rate. The optical nature of the system makes it relatively easy to use in a variety of harsh, high temperature environments, and it is also unaffected by the presence of electric and magnetic fields. In addition, the sensor could be used to perform optical spectroscopy on the light emitted by a process and collected by fibers, giving localized measurements of various properties. The capability to perform an in-situ measurement of material regression rates is useful in addressing a variety of physical issues in various applications. An in-situ measurement allows for real-time data regarding the erosion rates, providing a quick method for
Parametric expressions for the adjusted Hargreaves coefficient in Eastern Spain
NASA Astrophysics Data System (ADS)
Martí, Pau; Zarzo, Manuel; Vanderlinden, Karl; Girona, Joan
2015-10-01
The application of simple empirical equations for estimating reference evapotranspiration (ETo) is the only alternative in many cases to robust approaches with high input requirements, especially at the local scale. In particular, temperature-based approaches present a high potential applicability, among others, because temperature might explain a high amount of ETo variability, and also because it can be measured easily and is one of the most available climatic inputs. One of the most well-known temperature-based approaches, the Hargreaves (HG) equation, requires a preliminary local calibration that is usually performed through an adjustment of the HG coefficient (AHC). Nevertheless, these calibrations are site-specific, and cannot be extrapolated to other locations. So, they become useless in many situations, because they are derived from already available benchmarks based on more robust methods, which will be applied in practice. Therefore, the development of accurate equations for estimating AHC at local scale becomes a relevant task. This paper analyses the performance of calibrated and non-calibrated HG equations at 30 stations in Eastern Spain at daily, weekly, fortnightly and monthly scales. Moreover, multiple linear regression was applied for estimating AHC based on different inputs, and the resulting equations yielded higher performance accuracy than the non-calibrated HG estimates. The approach relying on the ratio mean temperature to temperature range did not provide suitable AHC estimations, and was highly improved by splitting it into two independent predictors. Temperature-based equations were improved by incorporating geographical inputs. Finally, the model relying on temperature and geographic inputs was further improved by incorporating wind speed, even just with simple qualitative information about wind category (e.g. poorly vs. highly windy). The accuracy of the calibrated and non-calibrated HG estimates increased for longer time steps (daily
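The Hargreaves (HG) equation with an adjustable coefficient can be sketched as follows; this is the standard form of the equation, with the default 0.0023 being the value that a locally calibrated AHC replaces:

```python
def hargreaves_eto(t_mean, t_max, t_min, ra, ahc=0.0023):
    """Hargreaves reference evapotranspiration estimate (mm/day).

    ra is extraterrestrial radiation expressed in equivalent mm/day;
    ahc is the Hargreaves coefficient, 0.0023 by default, replaced in
    practice by a locally calibrated value (AHC)."""
    return ahc * ra * (t_mean + 17.8) * (t_max - t_min) ** 0.5

# Hypothetical summer day: mean 25 C, max 32 C, min 18 C, Ra = 15 mm/day.
print(hargreaves_eto(25, 32, 18, 15))
```

Raising or lowering `ahc` rescales the estimate linearly, which is why local calibration reduces to adjusting this single coefficient.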
Partial least squares (PLS) analysis offers a number of advantages over the more traditionally used regression analyses applied in landscape ecology, particularly for determining the associations among multiple constituents of surface water and landscape configuration. Common dat...
Partial least squares (PLS) analysis offers a number of advantages over the more traditionally used regression analyses applied in landscape ecology to study the associations among constituents of surface water and landscapes. Common data problems in ecological studies include: s...
Lambert, Sharon F; Roche, Kathleen M; Saleem, Farzana T; Henry, Jessica S
2015-09-01
Parents' racial socialization messages, including messages focused on awareness, preparation, and strategies for managing racial discrimination, are necessary to help African American youth successfully navigate their daily lives. However, mixed findings regarding the utility of preparation for bias messages for African American youth's mental health adjustment raise questions about the conditions under which these protective racial socialization messages are most beneficial to African American youth. The current study examined the degree to which communication and trust as well as anger and alienation in the mother-adolescent relationship moderated associations between 2 types of preparation for bias messages, cultural alertness to discrimination and cultural coping with antagonism, and adolescent mental health. Participants were 106 African American adolescents (57% female; mean age = 15.41) who reported about their receipt of racial socialization messages, mother-adolescent relationship quality, and depressive symptoms. Hierarchical regression analyses indicated that positive associations between cultural alertness to racial discrimination and youth depressive symptoms were weaker for boys in the context of higher mother-adolescent communication and trust; communication and trust were not similarly protective for girls. For boys, the positive associations between cultural coping with antagonism messages and depressive symptoms were stronger in the context of high anger and alienation in the mother-adolescent relationship. Findings suggest that qualities of the mother-adolescent relationship, in which preparation for bias messages are delivered, are important for understanding the mental health adjustment of African American adolescents.
LaChapelle, D L; Hadjistavropoulos, H D; McCreary, D R; Asmundson, G J
2001-01-01
Coping is a cyclical process in which an individual evaluates stressful events, chooses and implements coping strategies, re-evaluates the outcome of the coping effort and modifies the strategy if necessary. The intent of the present study was to evaluate the extent to which pain-related adjustment (i.e. pain severity, pain interference, negative affect) and perceptions of control are associated with the implementation of particular coping strategies. Participants were 136 patients assessed at an interdisciplinary pain clinic for cervical sprain injuries. As part of a routine assessment, participants completed a questionnaire package regarding background, pain severity, pain interference, negative affect, perceived control and use of particular coping strategies. Results of hierarchical multiple regression analyses revealed that pain interference, after controlling for all other variables, was associated with greater use of less physically demanding strategies (i.e. resting, guarding, asking for assistance, seeking social support and coping self-statements). Negative affect, on the other hand, after controlling for other variables, was associated with reduced use of task persistence. Finally, perceived control, independent of other variables, was associated with greater use of cognitive and social coping strategies (i.e. asking for assistance, seeking social support and coping self-statements). The results of the study shed light on the complex relationship between use of particular coping strategies and situational variables of pain-related adjustment and perceived control. Implications for clinicians who assist patients via implementation or modification of particular coping techniques are discussed.
Rudiger, Jonathan A; Winstead, Barbara A
2013-09-01
Talk about physical appearance and body image is common among young women. We investigated how body talk (negative, positive/self-accepting, and co-ruminative) is related to body image, body-related cognitive distortions, disordered eating, psychological adjustment, and friendship quality via hierarchical regression analyses (controlling for social desirability and body mass index). In a sample of young adult women (N=203), negative body talk was, as predicted, negatively related to body satisfaction and self-esteem and positively related to appearance investment, body-related cognitive distortions, disordered eating, and depression, but not friendship quality. Self-accepting/positive body talk was negatively related to body-related cognitive distortions and positively related to body satisfaction, self-esteem, and friendship quality. Body-related co-rumination demonstrated adjustment trade-offs, being related to body-related cognitive distortions, disordered eating, and higher friendship quality. Results indicated no advantage to negative body talk, both individual and relationship benefits from positive/self-accepting body talk, and mixed outcomes for body-related co-rumination.
Yuen, J; Persson, I; Bergkvist, L; Hoover, R; Schairer, C; Adami, H O
1993-07-01
No change in breast cancer mortality has been reported previously after long-term hormone replacement therapy. A conceivable explanation for the apparent discrepancy between incidence and mortality may be selection bias due to lower prevalence of breast cancer in women who receive replacement hormones, compared with nonexposed women. We used a new approach to correct for bias due to this 'healthy drug-user effect,' by adjusting the external, population-based mortality rates for such cases prevalent during the recruitment period of our cohort. In this cohort of some 23,000 Swedish women, who were prescribed various hormone replacement regimens, breast cancer mortality was analyzed after up to 12 years of follow-up. External analyses revealed overall standardized mortality ratios for breast cancer rising from 0.71 to 0.81, but not significantly different from unity, after adjustment procedures. In multivariate regression models, excluding prevalent cases in the cohort, women prescribed estradiol, conjugated estrogens, or an estrogen-progestin combination were not at a higher risk relative to those given other and weak estrogens, relative risks being 0.81 and 0.68, respectively. On the basis of the present analytical approach, we conclude that breast cancer mortality does not appear to be changed overall or in subgroups, despite increased incidence.
DuBois, David L; Silverthorn, Naida
2004-06-01
We investigated bias in self-perceptions of competence (relative to parent ratings) for family, school, and peer domains as predictors of adjustment problems among 139 young adolescents over a 1-year period using a prospective design. Regressions examined measures of bias at Time 1 (T1) as predictors of ratings of internalizing and externalizing problems at Time 2 (T2), controlling for T1 adjustment ratings. For the family domain, curvilinear trends were found. Follow-up analyses revealed that for this domain both negative bias (self-perceptions less favorable than parent ratings) and positive bias (self-perceptions more favorable than parent ratings) predicted greater internalizing and externalizing problems as rated by youth, parents, and teachers. For the peer domain, higher scores on the measure of bias predicted greater internalizing and externalizing problems as rated by teachers. These findings are consistent with the view that accuracy in self-perceptions of competence can have important implications across multiple domains of development.
Creativity and Regression on the Rorschach.
ERIC Educational Resources Information Center
Lazar, Billie S.
This paper describes the results of a study to further test and replicate previous studies partially supporting Kris's view that creativity is a regression in the service of the ego. For this sample of 42 female art and business college students, it was predicted that (1) highly creative Ss (measured by the Torrance Tests) produce more, and more…
Locating the Extrema of Fungible Regression Weights
ERIC Educational Resources Information Center
Waller, Niels G.; Jones, Jeff A.
2009-01-01
In a multiple regression analysis with three or more predictors, every set of alternate weights belongs to an infinite class of "fungible weights" (Waller, Psychometrika, "in press") that yields identical "SSE" (sum of squared errors) and R² values. When the R² using the alternate weights is a fixed value, fungible…
Assessing risk factors for periodontitis using regression
NASA Astrophysics Data System (ADS)
Lobo Pereira, J. A.; Ferreira, Maria Cristina; Oliveira, Teresa
2013-10-01
Multivariate statistical analysis is indispensable for assessing the associations and interactions between different factors and the risk of periodontitis. Among other techniques, regression analysis is widely used in healthcare to investigate and model the relationship between variables. In our work we study the impact of socio-demographic, medical and behavioral factors on periodontal health. Using linear and logistic regression models, we assess the relevance, as risk factors for periodontitis, of the following independent variables (IVs): Age, Gender, Diabetic Status, Education, Smoking Status and Plaque Index. A multiple linear regression model was built to evaluate the influence of the IVs on mean Attachment Loss (AL), yielding the regression coefficients along with the p-values from the corresponding significance tests. The case (individual) classification adopted in the logistic model was the extent of destruction of periodontal tissues, defined as an Attachment Loss greater than or equal to 4 mm in at least 25% (AL≥4mm/≥25%) of the sites surveyed. The association measures include Odds Ratios together with the corresponding 95% confidence intervals.
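To make the reported association measure concrete, here is a minimal sketch of how an odds ratio and its Wald 95% confidence interval are computed from a 2x2 exposure-by-outcome table (the counts are illustrative, not taken from the study):

```python
import math

# Hypothetical 2x2 table (illustrative counts, not from the study):
# rows = smoker / non-smoker, columns = periodontitis case / non-case
a, b = 30, 70   # smokers: cases, non-cases
c, d = 15, 85   # non-smokers: cases, non-cases

odds_ratio = (a * d) / (b * c)

# Wald 95% confidence interval, built on the log-odds-ratio scale
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
```

In a fitted logistic model, the same quantity is obtained by exponentiating a coefficient and the endpoints of its confidence interval; the 2x2 version above is the unadjusted special case.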
Predicting Social Trust with Binary Logistic Regression
ERIC Educational Resources Information Center
Adwere-Boamah, Joseph; Hufstedler, Shirley
2015-01-01
This study used binary logistic regression to predict social trust with five demographic variables from a national sample of adult individuals who participated in The General Social Survey (GSS) in 2012. The five predictor variables were respondents' highest degree earned, race, sex, general happiness and the importance of personally assisting…
Invariant Ordering of Item-Total Regressions
ERIC Educational Resources Information Center
Tijmstra, Jesper; Hessen, David J.; van der Heijden, Peter G. M.; Sijtsma, Klaas
2011-01-01
A new observable consequence of the property of invariant item ordering is presented, which holds under Mokken's double monotonicity model for dichotomous data. The observable consequence is an invariant ordering of the item-total regressions. Kendall's measure of concordance "W" and a weighted version of this measure are proposed as measures for…
Superquantile Regression: Theory, Algorithms, and Applications
2014-12-01
Keywords: buffered reliability, uncertainty quantification, surrogate estimation, superquantile tracking, dualization of risk. ...a series of numerical examples that show some of the applications of superquantile regression, such as superquantile tracking and surrogate estimation. ...It usually occurs when the explanatory random variable is beyond our direct control, but the dependence between the
A Skew-Normal Mixture Regression Model
ERIC Educational Resources Information Center
Liu, Min; Lin, Tsung-I
2014-01-01
A challenge associated with traditional mixture regression models (MRMs), which rest on the assumption of normally distributed errors, is determining the number of unobserved groups. Specifically, even slight deviations from normality can lead to the detection of spurious classes. The current work aims to (a) examine how sensitive the commonly…
Regression Segmentation for M³ Spinal Images.
Wang, Zhijie; Zhen, Xiantong; Tay, KengYeow; Osman, Said; Romano, Walter; Li, Shuo
2015-08-01
Clinical routine often requires analyzing spinal images of multiple anatomic structures, in multiple anatomic planes, from multiple imaging modalities (M(3)). Unfortunately, existing methods for segmenting spinal images are still limited to one specific structure, in one specific plane, or from one specific modality (S(3)). In this paper, we propose a novel approach, Regression Segmentation, that is for the first time able to segment M(3) spinal images in one single unified framework. This approach innovatively formulates the segmentation task as a boundary regression problem: modeling a highly nonlinear mapping function from substantially diverse M(3) images directly to the desired object boundaries. Leveraging advances in sparse kernel machines, regression segmentation is carried out by a multi-dimensional support vector regressor (MSVR) which operates in an implicit, high-dimensional feature space where M(3) diversity and specificity can be systematically categorized, extracted, and handled. The proposed regression segmentation approach was thoroughly tested on images from 113 clinical subjects, including both disc and vertebral structures, in both sagittal and axial planes, and from both MRI and CT modalities. The overall result reaches a high dice similarity index (DSI) of 0.912 and a low boundary distance (BD) of 0.928 mm. With our unified and expandable framework, an efficient clinical tool for M(3) spinal image segmentation can be readily achieved, and will substantially benefit the diagnosis and treatment of spinal diseases.
Categorical Variables in Multiple Regression: Some Cautions.
ERIC Educational Resources Information Center
O'Grady, Kevin E.; Medoff, Deborah R.
1988-01-01
Limitations of dummy coding and nonsense coding as methods of coding categorical variables for use as predictors in multiple regression analysis are discussed. The combination of these approaches often yields estimates and tests of significance that are not intended by researchers for inclusion in their models. (SLD)
Revisiting Regression in Autism: Heller's "Dementia Infantilis"
ERIC Educational Resources Information Center
Westphal, Alexander; Schelinski, Stefanie; Volkmar, Fred; Pelphrey, Kevin
2013-01-01
Theodor Heller first described a severe regression of adaptive function in normally developing children, something he termed dementia infantilis, over 100 years ago. Dementia infantilis is most closely related to the modern diagnosis, childhood disintegrative disorder. We translate Heller's paper, Über Dementia Infantilis, and discuss…
A Spline Regression Model for Latent Variables
ERIC Educational Resources Information Center
Harring, Jeffrey R.
2014-01-01
Spline (or piecewise) regression models have been used in the past to account for patterns in observed data that exhibit distinct phases. The changepoint or knot marking the shift from one phase to the other, in many applications, is an unknown parameter to be estimated. As an extension of this framework, this research considers modeling the…
Prediction of dynamical systems by symbolic regression
NASA Astrophysics Data System (ADS)
Quade, Markus; Abel, Markus; Shafi, Kamran; Niven, Robert K.; Noack, Bernd R.
2016-07-01
We study the modeling and prediction of dynamical systems based on conventional models derived from measurements. Such algorithms are highly desirable in situations where the underlying dynamics are hard to model from physical principles or simplified models need to be found. We focus on symbolic regression methods as a part of machine learning. These algorithms are capable of learning an analytically tractable model from data, a highly valuable property. Symbolic regression methods can be considered as generalized regression methods. We investigate two particular algorithms, the so-called fast function extraction which is a generalized linear regression algorithm, and genetic programming which is a very general method. Both are able to combine functions in a certain way such that a good model for the prediction of the temporal evolution of a dynamical system can be identified. We illustrate the algorithms by finding a prediction for the evolution of a harmonic oscillator based on measurements, by detecting an arriving front in an excitable system, and as a real-world application, the prediction of solar power production based on energy production observations at a given site together with the weather forecast.
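The "generalized linear regression" flavor of symbolic regression the abstract mentions (the idea behind fast function extraction) can be sketched as ordinary least squares over a library of candidate functions. This toy version uses an assumed harmonic-oscillator signal and a hand-picked library, and omits FFX's pathwise regularization:

```python
import numpy as np

# Synthetic "measurements" of a harmonic oscillator x(t) = 2 sin t + 0.5 cos t
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
x = 2.0 * np.sin(t) + 0.5 * np.cos(t) + rng.normal(0, 0.01, t.size)

# Candidate function library: the model is linear in the coefficients,
# so it can be identified by ordinary least squares.
library = {"sin(t)": np.sin(t), "cos(t)": np.cos(t), "t": t, "t^2": t**2}
A = np.column_stack(list(library.values()))
coef, *_ = np.linalg.lstsq(A, x, rcond=None)

# Keep only terms with non-negligible coefficients (crude pruning step)
model = " + ".join(f"{c:.3f}*{name}" for name, c in zip(library, coef) if abs(c) > 0.05)
print("recovered model:", model)
```

Genetic programming, by contrast, searches over the structure of the expression itself rather than over coefficients of a fixed library, which is why it is the more general (and more expensive) of the two approaches.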
A Constrained Linear Estimator for Multiple Regression
ERIC Educational Resources Information Center
Davis-Stober, Clintin P.; Dana, Jason; Budescu, David V.
2010-01-01
"Improper linear models" (see Dawes, Am. Psychol. 34:571-582, "1979"), such as equal weighting, have garnered interest as alternatives to standard regression models. We analyze the general circumstances under which these models perform well by recasting a class of "improper" linear models as "proper" statistical models with a single predictor. We…
Assumptions of Multiple Regression: Correcting Two Misconceptions
ERIC Educational Resources Information Center
Williams, Matt N.; Gomez Grajales, Carlos Alberto; Kurkiewicz, Dason
2013-01-01
In 2002, an article entitled "Four assumptions of multiple regression that researchers should always test" by Osborne and Waters was published in "PARE." This article has gone on to be viewed more than 275,000 times (as of August 2013), and it is one of the first results displayed in a Google search for "regression…
The Shadow Side of Regressive Groups.
ERIC Educational Resources Information Center
McClure, Bud A.
1994-01-01
Contends that inability of groups to address conflict, encourage dissenting views, and face their negative characteristics can result in destructive behavior toward others that remains largely outside awareness of individual members. Examines regressive group characteristics; behavior of United States during Persian Gulf War is used to highlight…
Moving the Bar: Transformations in Linear Regression.
ERIC Educational Resources Information Center
Miranda, Janet
The assumption that is most important to the hypothesis testing procedure of multiple linear regression is the assumption that the residuals are normally distributed, but this assumption is not always tenable given the realities of some data sets. When normal distribution of the residuals is not met, an alternative method can be initiated. As an…
Complementary Log Regression for Sufficient-Cause Modeling of Epidemiologic Data.
Lin, Jui-Hsiang; Lee, Wen-Chung
2016-12-13
The logistic regression model is the workhorse of epidemiological data analysis. The model helps to clarify the relationship between multiple exposures and a binary outcome. Logistic regression analysis is readily implemented using existing statistical software, and this has contributed to it becoming a routine procedure for epidemiologists. In this paper, the authors focus on a causal model which has recently received much attention from the epidemiologic community, namely, the sufficient-component cause model (causal-pie model). The authors show that the sufficient-component cause model is associated with a particular 'link' function: the complementary log link. In a complementary log regression, the exponentiated coefficient of a main-effect term corresponds to an adjusted 'peril ratio', and the coefficient of a cross-product term can be used directly to test for causal mechanistic interaction (sufficient-cause interaction). The authors provide detailed instructions on how to perform a complementary log regression using existing statistical software and use three datasets to illustrate the methodology. Complementary log regression is the model of choice for sufficient-cause analysis of binary outcomes. Its implementation is as easy as conventional logistic regression.
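As a small numerical illustration of the "peril" scale the abstract refers to (assuming, as in related sufficient-cause work by Lee, that the peril of an outcome is 1/(1 − risk), the reciprocal of the probability of remaining outcome-free; the risks below are invented):

```python
import math

def peril(risk):
    # Assumed definition from the sufficient-cause framework:
    # peril = 1 / (1 - risk), reciprocal of staying outcome-free.
    return 1.0 / (1.0 - risk)

risk_exposed, risk_unexposed = 0.30, 0.10   # illustrative adjusted risks
peril_ratio = peril(risk_exposed) / peril(risk_unexposed)

# Equivalently: exponentiate minus the difference of log(1 - p) values,
# i.e. a contrast on the complementary-log scale.
same = math.exp(-(math.log(1 - risk_exposed) - math.log(1 - risk_unexposed)))
print(f"peril ratio = {peril_ratio:.3f}")
```

The point of the regression formulation is that this ratio comes out of an exponentiated model coefficient after adjusting for covariates, rather than from raw risks as in this sketch.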
Adjustable extender for instrument module
Sevec, J.B.; Stein, A.D.
1975-11-01
A blank extender module used to mount an instrument module in front of its console for repair or test purposes has been equipped with a rotatable mount and means for locking the mount at various angles of rotation for easy accessibility. The rotatable mount includes a horizontal conduit supported by bearings within the blank module. The conduit is spring-biased in a retracted position within the blank module and in this position a small gear mounted on the conduit periphery is locked by a fixed pawl. The conduit and instrument mount can be pulled into an extended position with the gear clearing the pawl to permit rotation and adjustment of the instrument.
Sigurdson, J F; Wallander, J; Sund, A M
2014-10-01
The aim was to examine prospectively associations between bullying involvement at 14-15 years of age and self-reported general health and psychosocial adjustment in young adulthood, at 26-27 years of age. A large representative sample (N=2,464) was recruited and assessed in two counties in Mid-Norway in 1998 (T1) and 1999/2000 (T2) when the respondents had a mean age of 13.7 and 14.9, respectively, leading to classification as being bullied, bully-victim, being aggressive toward others or non-involved. Information about general health and psychosocial adjustment was gathered at a follow-up in 2012 (T4) (N=1,266) with a respondent mean age of 27.2. Logistic regression and ANOVA analyses showed that groups involved in bullying of any type in adolescence had increased risk for lower education as young adults compared to those non-involved. The group aggressive toward others also had a higher risk of being unemployed and receiving any kind of social help. Compared with the non-involved, those being bullied and bully-victims had increased risk of poor general health and high levels of pain. Bully-victims and those aggressive toward others during adolescence subsequently had increased risk of tobacco use and lower job functioning than non-involved. Further, those being bullied and aggressive toward others had increased risk of illegal drug use. Relations to live-in spouse/partner were poorer among those being bullied. Involvement in bullying, either as victim or perpetrator, has significant social costs even 12 years after the bullying experience. Accordingly, it will be important to provide early intervention for those involved in bullying in adolescence.
A general framework for the use of logistic regression models in meta-analysis.
Simmonds, Mark C; Higgins, Julian PT
2016-12-01
Where individual participant data are available for every randomised trial in a meta-analysis of dichotomous event outcomes, "one-stage" random-effects logistic regression models have been proposed as a way to analyse these data. Such models can also be used even when individual participant data are not available and we have only summary contingency table data. One benefit of this one-stage regression model over conventional meta-analysis methods is that it maximises the correct binomial likelihood for the data and so does not require the common assumption that effect estimates are normally distributed. A second benefit of using this model is that it may be applied, with only minor modification, in a range of meta-analytic scenarios, including meta-regression, network meta-analyses and meta-analyses of diagnostic test accuracy. This single model can potentially replace the variety of often complex methods used in these areas. This paper considers, with a range of meta-analysis examples, how random-effects logistic regression models may be used in a number of different types of meta-analyses. This one-stage approach is compared with widely used meta-analysis methods including Bayesian network meta-analysis and the bivariate and hierarchical summary receiver operating characteristic (ROC) models for meta-analyses of diagnostic test accuracy.
Teaching Practices and the Promotion of Achievement and Adjustment in First Grade
ERIC Educational Resources Information Center
Perry, Kathryn E.; Donohue, Kathleen M.; Weinstein, Rhona S.
2007-01-01
The effects of teacher practices in promoting student academic achievement, behavioral adjustment, and feelings of competence were investigated in a prospective study of 257 children in 14 first grade classrooms. Using hierarchical linear modeling and regression techniques, observed teaching practices in the fall were explored as predictors of…
Parenting Styles, Drug Use, and Children's Adjustment in Families of Young Adults.
ERIC Educational Resources Information Center
Kandel, Denise B.
1990-01-01
Examined childrearing practices and child adjustment in longitudinal cohort of young adults for whom detailed drug histories were available. Maternal drug use retained statistically significant unique effect on child control problems when other parental variables were entered simultaneously in multiple regression equation and was one of two…
A gigawatt level repetitive rate adjustable magnetic pulse compressor
NASA Astrophysics Data System (ADS)
Li, Song; Gao, Jing-Ming; Yang, Han-Wu; Qian, Bao-Liang; Li, Ze-Xin
2015-08-01
In this paper, a gigawatt-level, repetitive-rate, adjustable magnetic pulse compressor is investigated both numerically and experimentally. The device has the advantages of high power level, achievable high repetition rate, and long-lifetime reliability. Importantly, dominant parameters including the saturation time, the peak voltage, and even the compression ratio can potentially be adjusted continuously and reliably, which significantly expands the applicable area of the device and of generators based on it. Specifically, a two-stage adjustable magnetic pulse compressor, used for charging the pulse forming network of a high power pulse generator, is designed with different compression ratios of 25 and 18 through an optimized design process. Equivalent circuit analysis shows that the compression ratio can be modified by simply changing the number of turns of the winding. At the same time, increasing the inductance of the grounded inductor will decrease the peak voltage and delay the charging process. Based on these analyses, an adjustable compressor was built and studied experimentally in both single-shot and repetitive-rate modes. Pulses with a peak voltage of 60 kV and an energy per pulse of 360 J were obtained in the experiment. The rise times of the pulses were compressed from 25 μs to 1 μs and from 18 μs to 1 μs, respectively, at a repetition rate of 20 Hz with good repeatability. Experimental results show reasonable agreement with the analyses.
Confidence interval of difference of proportions in logistic regression in presence of covariates.
Reeve, Russell
2016-03-16
Comparison of treatment differences in incidence rates is an important objective of many clinical trials. However, often the proportion is affected by covariates, and the adjustment of the predicted proportion is made using logistic regression. It is desirable to estimate the treatment differences in proportions adjusting for the covariates, similarly to the comparison of adjusted means in analysis of variance. Because of the correlation between the point estimates in the different treatment groups, the standard methods for constructing confidence intervals are inadequate. The problem is more difficult in the binary case, as the comparison is not uniquely defined, and the sampling distribution more difficult to analyze. Four procedures for analyzing the data are presented, which expand upon existing methods and generalize the link function. It is shown that, among the four methods studied, the resampling method based on the exact distribution function yields a coverage rate closest to the nominal.
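As a rough sketch of the resampling idea (here the simple percentile bootstrap on unadjusted group proportions, not the paper's covariate-adjusted, exact-distribution procedure; all counts are illustrative):

```python
import random

random.seed(1)

# Illustrative binary outcomes in two treatment groups (1 = event)
treat   = [1] * 30 + [0] * 70    # observed incidence 0.30
control = [1] * 15 + [0] * 85    # observed incidence 0.15

def prop(xs):
    return sum(xs) / len(xs)

diff = prop(treat) - prop(control)

# Nonparametric bootstrap: resample each group with replacement,
# recompute the difference of proportions, and take percentiles.
boot = []
for _ in range(2000):
    t = [random.choice(treat) for _ in treat]
    c = [random.choice(control) for _ in control]
    boot.append(prop(t) - prop(c))
boot.sort()
ci_low, ci_high = boot[49], boot[1949]   # ~2.5th and 97.5th percentiles
print(f"diff = {diff:.2f}, bootstrap 95% CI ({ci_low:.2f}, {ci_high:.2f})")
```

With covariates, each bootstrap replicate would refit the logistic model and standardize the predicted proportions before differencing, which is where the correlation between group estimates that the abstract highlights enters.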
Embedded Sensors for Measuring Surface Regression
NASA Technical Reports Server (NTRS)
Gramer, Daniel J.; Taagen, Thomas J.; Vermaak, Anton G.
2006-01-01
The development and evaluation of new hybrid and solid rocket motors requires accurate characterization of the propellant surface regression as a function of key operational parameters. These characteristics establish the propellant flow rate and are prime design drivers affecting the propulsion system geometry, size, and overall performance. There is a similar need for the development of advanced ablative materials, and the use of conventional ablatives exposed to new operational environments. The Miniature Surface Regression Sensor (MSRS) was developed to serve these applications. It is designed to be cast or embedded in the material of interest and regresses along with it. During this process, the resistance of the sensor is related to its instantaneous length, allowing the real-time thickness of the host material to be established. The time derivative of this data reveals the instantaneous surface regression rate. The MSRS could also be adapted to perform similar measurements for a variety of other host materials when it is desired to monitor thicknesses and/or regression rate for purposes of safety, operational control, or research. For example, the sensor could be used to monitor the thicknesses of brake linings or racecar tires and indicate when they need to be replaced. At the time of this reporting, over 200 of these sensors have been installed into a variety of host materials. An MSRS can be made in either of two configurations, denoted ladder and continuous (see Figure 1). A ladder MSRS includes two highly electrically conductive legs, across which narrow strips of electrically resistive material are placed at small increments of length. These strips resemble the rungs of a ladder and are electrically equivalent to many tiny resistors connected in parallel. A substrate material provides structural support for the legs and rungs. The instantaneous sensor resistance is read by an external signal conditioner via wires attached to the conductive legs on the
Logistic models--an odd(s) kind of regression.
Jupiter, Daniel C
2013-01-01
The logistic regression model bears some similarity to the multivariable linear regression with which we are familiar. However, the differences are great enough to warrant a discussion of the need for and interpretation of logistic regression.
Individual Parental Adjustment Moderates the Relationship Between Marital and Coparenting Quality
Talbot, Jean A.; McHale, James P.
2010-01-01
Contemporary family research studies have devoted surprisingly little effort to elucidating the interplay between adults’ individual adjustment and the dynamics of their coparental relationship. In this study, we assessed two particularly relevant “trait” variables, parental flexibility and self-control, and traced links between these characteristics and the nature of the coparents’ interactions together with their infants. It was hypothesized that parental flexibility and self-control would not only explain significant variance in coparenting quality, but also act as moderators attenuating anticipated relationships between marital functioning and coparental process. Participants were 50 heterosexual, married couples and their 12-month-old infants. Multiple regression analyses indicated that even after controlling for marital quality, paternal flexibility and maternal self-control continued to make independent contributions to coparenting harmony. As anticipated, paternal flexibility attenuated the association between marital quality and coparenting negativity. Contrary to predictions, maternal flexibility and self-control did not dampen, but actually heightened the extent to which coparenting harmony declined in the face of lower marital quality. PMID:21127730
Adjusting the Contour of Reflector Panels
NASA Technical Reports Server (NTRS)
Palmer, W. B.; Giebler, M. M.
1984-01-01
Postfabrication adjustment of contour of panels for reflector, such as parabolic reflector for radio antennas, possible with simple mechanism consisting of threaded stud, two nuts, and flexure. Contours adjusted manually.
48 CFR 1450.103 - Contract adjustments.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Contract adjustments. 1450.103 Section 1450.103 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR CONTRACT... Contract adjustments....
First Year Adjustment in the Secondary School.
ERIC Educational Resources Information Center
Loosemore, Jean Ann
1978-01-01
This study investigated the relationship between adjustment to secondary school and 17 cognitive and noncognitive variables, including intelligence (verbal and nonverbal reasoning), academic achievement, extraversion-introversion, stable/unstable, social adjustment, endeavor, age, sex, and school form. (CP)
Parametric regression model for survival data: Weibull regression model as an example
2016-01-01
The Weibull regression model is one of the most popular parametric regression models, in that it provides estimates of the baseline hazard function as well as coefficients for covariates. Because of technical difficulties, the Weibull regression model is seldom used in the medical literature compared with the semi-parametric proportional hazards model. To make clinical investigators familiar with the Weibull regression model, this article introduces some basic knowledge about it and then illustrates how to fit the model with the R software. The SurvRegCensCov package is useful for converting estimated coefficients to clinically relevant statistics such as the hazard ratio (HR) and the event time ratio (ETR). Model adequacy can be assessed by inspecting Kaplan-Meier curves stratified by a categorical variable. The eha package provides an alternative method for fitting the Weibull regression model, and its check.dist() function helps to assess the goodness-of-fit of the model. Variable selection is based on the importance of a covariate, which can be tested using the anova() function. Alternatively, backward elimination starting from a full model is an efficient way to develop a model. Visualizing the Weibull regression model after model development is worthwhile, as it provides another way to report your findings. PMID:28149846
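The coefficient conversion described above can be sketched in a few lines, assuming R's survreg parameterization of the Weibull AFT model (log-time coefficient beta and scale parameter sigma); this mirrors what SurvRegCensCov's conversion utilities report, but is a simplified sketch, not that package's code:

```python
import math

def convert_weibull(beta, scale):
    """Convert a survreg-style Weibull AFT coefficient to clinical summaries.

    Assumes R's survreg parameterization (coefficients on the log-time
    scale with scale parameter sigma):
      event time ratio ETR = exp(beta)
      hazard ratio     HR  = exp(-beta / sigma)
    """
    return math.exp(-beta / scale), math.exp(beta)

hr, etr = convert_weibull(beta=0.5, scale=1.0)
print(f"HR = {hr:.3f}, ETR = {etr:.3f}")
```

The two summaries answer different questions: the ETR says event times are stretched by a factor exp(beta), while the HR re-expresses the same coefficient on the proportional-hazards scale.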
NASA Astrophysics Data System (ADS)
Polat, Esra; Gunay, Suleyman
2013-10-01
One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes the overestimation of the regression parameters and an increase in the variance of these parameters. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed. The SIMPLS algorithm is the leading PLSR algorithm because of its speed and efficiency, and because its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR (RPLSR) method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables; the dependent variables are then regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to show the use of the RPCR and RSIMPLS methods on an econometric data set, comparing the two methods on an inflation model of Turkey. The methods are compared in terms of predictive ability and goodness of fit by using a robust Root Mean Squared Error of Cross-validation (R-RMSECV), a robust R² value and the Robust Component Selection (RCS) statistic.
Generalized adjustment by least squares ( GALS).
Elassal, A.A.
1983-01-01
The least-squares principle is universally accepted as the basis for adjustment procedures in the allied fields of geodesy, photogrammetry and surveying. A prototype software package for Generalized Adjustment by Least Squares (GALS) is described. The package is designed to perform all least-squares-related functions in a typical adjustment program. GALS is capable of supporting development of adjustment programs of any size or degree of complexity. -Author
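The core computation in any such adjustment package is the weighted least-squares solution of the normal equations. A minimal sketch, with an invented leveling-style example (two unknown heights, three redundant observations):

```python
import numpy as np

# Toy adjustment: estimate two heights from redundant observations
# l = A x + noise, with weight matrix W (illustrative numbers).
A = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0, -1.0]])          # third observation measures the height difference
l = np.array([10.02, 4.98, 5.06])
W = np.diag([1.0, 1.0, 2.0])         # weights, e.g. inverse observation variances

# Normal equations: x_hat = (A^T W A)^{-1} A^T W l
N = A.T @ W @ A
x_hat = np.linalg.solve(N, A.T @ W @ l)
residuals = l - A @ x_hat
print("adjusted parameters:", x_hat)
```

A generalized package like the one described wraps this kernel with observation-equation builders, variance estimation, and blunder detection for whatever geodetic or photogrammetric model is at hand.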
Use of Multiple Correlation Analysis and Multiple Regression Analysis.
ERIC Educational Resources Information Center
Huberty, Carl J.; Petoskey, Martha D.
1999-01-01
Distinguishes between multiple correlation and multiple regression analysis. Illustrates suggested information reporting methods and reviews the use of regression methods when dealing with problems of missing data. (SK)
49 CFR 393.53 - Automatic brake adjusters and brake adjustment indicators.
Code of Federal Regulations, 2013 CFR
2013-10-01
... brake adjustment indicators. (a) Automatic brake adjusters (hydraulic brake systems). Each commercial motor vehicle manufactured on or after October 20, 1993, and equipped with a hydraulic brake...
Meta-Analyses of the 5-HTTLPR Polymorphisms and Post-Traumatic Stress Disorder
Navarro-Mateu, Fernando; Escámez, Teresa; Koenen, Karestan C.; Alonso, Jordi; Sánchez-Meca, Julio
2013-01-01
Objective: To conduct a meta-analysis of all published genetic association studies of 5-HTTLPR polymorphisms performed in PTSD cases. Methods: Data Sources: Potential studies were identified through PubMed/MEDLINE, EMBASE, Web of Science databases (Web of Knowledge, WoK), PsychINFO, PsychArticles and HuGeNet (Human Genome Epidemiology Network) up until December 2011. Study Selection: Published observational studies reporting genotype or allele frequencies of this genetic factor in PTSD cases and in non-PTSD controls were all considered eligible for inclusion in this systematic review. Data Extraction: Two reviewers selected studies for possible inclusion and extracted data independently following a standardized protocol. Statistical Analysis: A biallelic and a triallelic meta-analysis, including the total S and S' frequencies, the dominant (S+/LL and S'+/L'L') and the recessive model (SS/L+ and S'S'/L'+), was performed with a random-effect model to calculate the pooled OR and its corresponding 95% CI. Forest plots, Cochran's Q-statistic and the I² index were calculated to check for heterogeneity. Subgroup analyses and meta-regression were carried out to analyze potential moderators. Publication bias and quality of reporting were also analyzed. Results: 13 studies met our inclusion criteria, providing a total sample of 1874 patients with PTSD and 7785 controls in the biallelic meta-analyses, and 627 and 3524, respectively, in the triallelic. None of the meta-analyses showed evidence of an association between 5-HTTLPR and PTSD, but several characteristics (exposure to the same principal stressor for PTSD cases and controls, adjustment for potential confounding variables, blind assessment, study design, type of PTSD, ethnic distribution and Total Quality Score) influenced the results in subgroup analyses and meta-regression. There was no evidence of potential publication bias. Conclusions: Current evidence does not support a direct effect of 5-HTTLPR polymorphisms on PTSD.
Lasso adjustments of treatment effect estimates in randomized experiments
Bloniarz, Adam; Liu, Hanzhong; Zhang, Cun-Hui; Sekhon, Jasjeet S.; Yu, Bin
2016-01-01
We provide a principled way for investigators to analyze randomized experiments when the number of covariates is large. Investigators often use linear multivariate regression to analyze randomized experiments instead of simply reporting the difference of means between treatment and control groups. Their aim is to reduce the variance of the estimated treatment effect by adjusting for covariates. If there are a large number of covariates relative to the number of observations, regression may perform poorly because of overfitting. In such cases, the least absolute shrinkage and selection operator (Lasso) may be helpful. We study the resulting Lasso-based treatment effect estimator under the Neyman–Rubin model of randomized experiments. We present theoretical conditions that guarantee that the estimator is more efficient than the simple difference-of-means estimator, and we provide a conservative estimator of the asymptotic variance, which can yield tighter confidence intervals than the difference-of-means estimator. Simulation and data examples show that Lasso-based adjustment can be advantageous even when the number of covariates is less than the number of observations. Specifically, a variant using Lasso for selection and ordinary least squares (OLS) for estimation performs particularly well, and it chooses a smoothing parameter based on combined performance of Lasso and OLS. PMID:27382153
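The Lasso-then-OLS variant described above can be sketched in a few lines. This is a hedged illustration on simulated data, not the authors' implementation: the penalty level, the data-generating model, and the two-step structure (Lasso selects outcome-predictive covariates, then OLS of the outcome on treatment plus the selected covariates) are all assumptions of the sketch.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n, p, tau = 200, 150, 2.0          # many covariates relative to sample size
X = rng.normal(size=(n, p))
T = rng.binomial(1, 0.5, size=n)   # randomized treatment assignment
# Outcome depends sparsely on covariates plus a constant treatment effect.
y = tau * T + 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(size=n)

# Step 1: Lasso selects covariates predictive of the outcome.
sel = np.flatnonzero(Lasso(alpha=0.2).fit(X, y).coef_)

# Step 2: OLS of y on treatment plus the selected covariates;
# the coefficient on T is the adjusted treatment-effect estimate.
Z = np.column_stack([T, X[:, sel]])
tau_hat = LinearRegression().fit(Z, y).coef_[0]

# Unadjusted benchmark: simple difference of means.
tau_dm = y[T == 1].mean() - y[T == 0].mean()
```

Because the selected covariates absorb most of the outcome variance, the adjusted estimate typically has a much smaller standard error than the difference of means.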
A locally adaptive kernel regression method for facies delineation
NASA Astrophysics Data System (ADS)
Fernàndez-Garcia, D.; Barahona-Palomo, M.; Henri, C. V.; Sanchez-Vila, X.
2015-12-01
Facies delineation is defined as the separation of geological units with distinct intrinsic characteristics (grain size, hydraulic conductivity, mineralogical composition). A major challenge in this area stems from the fact that only a few scattered pieces of hydrogeological information are available to delineate geological facies. Several methods to delineate facies are available in the literature, ranging from those based only on existing hard data to those including secondary data or external knowledge about sedimentological patterns. This paper describes a methodology that uses kernel regression methods as an effective tool for facies delineation. The method uses both the spatial locations and the actual sampled values to produce, for each individual hard data point, a locally adaptive steering kernel function, self-adjusting the principal directions of the local anisotropic kernels to the direction of highest local spatial correlation. The method is shown to outperform the nearest neighbor classification method in a number of synthetic aquifers whenever the available number of hard data is small and randomly distributed in space. In the case of exhaustive sampling, the steering kernel regression method converges to the true solution. Simulations run on a suite of synthetic examples are used to explore the selection of kernel parameters in typical field settings. It is shown that, in practice, a rule of thumb can be used to obtain suboptimal results. The performance of the method improves significantly when external information regarding facies proportions is incorporated. Remarkably, the method allows for a reasonable reconstruction of the facies connectivity patterns, shown in terms of breakthrough-curve performance.
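A minimal, isotropic version of the kernel-classification idea can be sketched as follows. The paper's locally adaptive steering kernels, which reorient anisotropic kernels along the direction of highest local correlation, are not reproduced here; the fixed Gaussian kernel, the bandwidth, and the synthetic two-facies field are all assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def kernel_classify(xy_obs, labels, xy_new, h=0.15):
    """Nadaraya-Watson style soft classification: weight each hard-data
    label by a Gaussian kernel of distance, then pick the heaviest class."""
    d2 = ((xy_new[:, None, :] - xy_obs[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * h ** 2))
    classes = np.unique(labels)
    scores = np.stack([w[:, labels == c].sum(1) for c in classes], axis=1)
    return classes[scores.argmax(1)]

# Synthetic two-facies field: facies 1 left of x = 0.5, facies 2 right.
xy = rng.uniform(size=(60, 2))       # scattered hard-data locations
lab = np.where(xy[:, 0] < 0.5, 1, 2)
grid = rng.uniform(size=(500, 2))    # points to delineate
pred = kernel_classify(xy, lab, grid)
truth = np.where(grid[:, 0] < 0.5, 1, 2)
accuracy = (pred == truth).mean()
```

With only 60 scattered hard data, misclassifications concentrate near the facies boundary, which is the regime where the paper reports kernel methods beating nearest-neighbor classification.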
NASA Technical Reports Server (NTRS)
Slivon, L. E.; Hernon-Kenny, L. A.; Katona, V. R.; Dejarme, L. E.
1995-01-01
This report describes analytical methods and results obtained from chemical analysis of 31 charcoal samples in five sets. Each set was obtained from a single scrubber used to filter ambient air on board a Spacelab mission. Analysis of the charcoal samples was conducted by thermal desorption followed by gas chromatography/mass spectrometry (GC/MS). All samples were analyzed using identical methods. The method used for these analyses was able to detect compounds independent of their polarity or volatility. In addition to the charcoal samples, analyses of three Environmental Control and Life Support System (ECLSS) water samples were conducted specifically for trimethylamine.
7 CFR 1744.64 - Budget adjustment.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 11 2014-01-01 2014-01-01 false Budget adjustment. 1744.64 Section 1744.64... Disbursement of Funds § 1744.64 Budget adjustment. (a) If more funds are required than are available in a budget account, the borrower may request RUS's approval of a budget adjustment to use funds from...
7 CFR 1744.64 - Budget adjustment.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 11 2012-01-01 2012-01-01 false Budget adjustment. 1744.64 Section 1744.64... Disbursement of Funds § 1744.64 Budget adjustment. (a) If more funds are required than are available in a budget account, the borrower may request RUS's approval of a budget adjustment to use funds from...
7 CFR 1744.64 - Budget adjustment.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 11 2013-01-01 2013-01-01 false Budget adjustment. 1744.64 Section 1744.64... Disbursement of Funds § 1744.64 Budget adjustment. (a) If more funds are required than are available in a budget account, the borrower may request RUS's approval of a budget adjustment to use funds from...
7 CFR 1744.64 - Budget adjustment.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 11 2011-01-01 2011-01-01 false Budget adjustment. 1744.64 Section 1744.64... Disbursement of Funds § 1744.64 Budget adjustment. (a) If more funds are required than are available in a budget account, the borrower may request RUS's approval of a budget adjustment to use funds from...
7 CFR 1744.64 - Budget adjustment.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 11 2010-01-01 2010-01-01 false Budget adjustment. 1744.64 Section 1744.64... Disbursement of Funds § 1744.64 Budget adjustment. (a) If more funds are required than are available in a budget account, the borrower may request RUS's approval of a budget adjustment to use funds from...
24 CFR 5.611 - Adjusted income.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 1 2011-04-01 2011-04-01 false Adjusted income. 5.611 Section 5... Serving Persons with Disabilities: Family Income and Family Payment; Occupancy Requirements for Section 8 Project-Based Assistance Family Income § 5.611 Adjusted income. Adjusted income means annual income...
24 CFR 5.611 - Adjusted income.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Adjusted income. 5.611 Section 5... Serving Persons with Disabilities: Family Income and Family Payment; Occupancy Requirements for Section 8 Project-Based Assistance Family Income § 5.611 Adjusted income. Adjusted income means annual income...
19 CFR 201.205 - Salary adjustments.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 3 2010-04-01 2010-04-01 false Salary adjustments. 201.205 Section 201.205 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION GENERAL RULES OF GENERAL APPLICATION Debt Collection § 201.205 Salary adjustments. Any negative adjustment to pay arising out of an employee's...
19 CFR 201.205 - Salary adjustments.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 19 Customs Duties 3 2011-04-01 2011-04-01 false Salary adjustments. 201.205 Section 201.205 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION GENERAL RULES OF GENERAL APPLICATION Debt Collection § 201.205 Salary adjustments. Any negative adjustment to pay arising out of an employee's...
12 CFR 313.55 - Salary adjustments.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 4 2011-01-01 2011-01-01 false Salary adjustments. 313.55 Section 313.55 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION PROCEDURE AND RULES OF PRACTICE PROCEDURES FOR CORPORATE DEBT COLLECTION Salary Offset § 313.55 Salary adjustments. Any negative adjustment to pay...
12 CFR 313.55 - Salary adjustments.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 5 2012-01-01 2012-01-01 false Salary adjustments. 313.55 Section 313.55 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION PROCEDURE AND RULES OF PRACTICE PROCEDURES FOR CORPORATE DEBT COLLECTION Salary Offset § 313.55 Salary adjustments. Any negative adjustment to pay...
12 CFR 313.55 - Salary adjustments.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 5 2013-01-01 2013-01-01 false Salary adjustments. 313.55 Section 313.55 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION PROCEDURE AND RULES OF PRACTICE PROCEDURES FOR CORPORATE DEBT COLLECTION Salary Offset § 313.55 Salary adjustments. Any negative adjustment to pay...
12 CFR 313.55 - Salary adjustments.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 5 2014-01-01 2014-01-01 false Salary adjustments. 313.55 Section 313.55 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION PROCEDURE AND RULES OF PRACTICE PROCEDURES FOR CORPORATE DEBT COLLECTION Salary Offset § 313.55 Salary adjustments. Any negative adjustment to pay...
19 CFR 201.205 - Salary adjustments.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 19 Customs Duties 3 2014-04-01 2014-04-01 false Salary adjustments. 201.205 Section 201.205 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION GENERAL RULES OF GENERAL APPLICATION Debt Collection § 201.205 Salary adjustments. Any negative adjustment to pay arising out of an employee's...
19 CFR 201.205 - Salary adjustments.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 19 Customs Duties 3 2012-04-01 2012-04-01 false Salary adjustments. 201.205 Section 201.205 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION GENERAL RULES OF GENERAL APPLICATION Debt Collection § 201.205 Salary adjustments. Any negative adjustment to pay arising out of an employee's...
19 CFR 201.205 - Salary adjustments.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 19 Customs Duties 3 2013-04-01 2013-04-01 false Salary adjustments. 201.205 Section 201.205 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION GENERAL RULES OF GENERAL APPLICATION Debt Collection § 201.205 Salary adjustments. Any negative adjustment to pay arising out of an employee's...
12 CFR 313.55 - Salary adjustments.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Salary adjustments. 313.55 Section 313.55 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION PROCEDURE AND RULES OF PRACTICE PROCEDURES FOR CORPORATE DEBT COLLECTION Salary Offset § 313.55 Salary adjustments. Any negative adjustment to pay...
12 CFR 1780.80 - Inflation adjustments.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 7 2011-01-01 2011-01-01 false Inflation adjustments. 1780.80 Section 1780.80... DEVELOPMENT RULES OF PRACTICE AND PROCEDURE RULES OF PRACTICE AND PROCEDURE Civil Money Penalty Inflation Adjustments § 1780.80 Inflation adjustments. The maximum amount of each civil money penalty within...
12 CFR 1780.80 - Inflation adjustments.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Inflation adjustments. 1780.80 Section 1780.80... DEVELOPMENT RULES OF PRACTICE AND PROCEDURE RULES OF PRACTICE AND PROCEDURE Civil Money Penalty Inflation Adjustments § 1780.80 Inflation adjustments. The maximum amount of each civil money penalty within...
34 CFR 36.2 - Penalty adjustment.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 34 Education 1 2010-07-01 2010-07-01 false Penalty adjustment. 36.2 Section 36.2 Education Office of the Secretary, Department of Education ADJUSTMENT OF CIVIL MONETARY PENALTIES FOR INFLATION § 36.2..., Section 36.2—Civil Monetary Penalty Inflation Adjustments Statute Description New maximum (and minimum,...
34 CFR 36.2 - Penalty adjustment.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 34 Education 1 2011-07-01 2011-07-01 false Penalty adjustment. 36.2 Section 36.2 Education Office of the Secretary, Department of Education ADJUSTMENT OF CIVIL MONETARY PENALTIES FOR INFLATION § 36.2..., Section 36.2—Civil Monetary Penalty Inflation Adjustments Statute Description New maximum (and minimum,...
Model selection for logistic regression models
NASA Astrophysics Data System (ADS)
Duller, Christine
2012-09-01
Model selection for logistic regression models decides which of a set of potential regressors have an effect and hence should be included in the final model. A second question of interest is whether a certain factor is heterogeneous among some subsets, i.e. whether the model should include a random intercept or not. In this paper these questions are answered with classical as well as with Bayesian methods. The applications show some results of recent research projects in medicine and business administration.
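The first question, which regressors to include, can be illustrated with a classical information-criterion comparison. This is a hedged sketch on simulated data, not the paper's procedure: the candidate subsets, the AIC criterion, and the use of a large-C scikit-learn fit to approximate an unpenalized logistic likelihood are assumptions of the sketch (the random-intercept question would need a mixed model and is not shown).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 500
X = rng.normal(size=(n, 3))                # columns 0 and 1 matter; 2 is noise
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

def aic(cols):
    """AIC of a logistic model on the given columns (large C ~ no penalty)."""
    m = LogisticRegression(C=1e6, max_iter=1000).fit(X[:, cols], y)
    p = np.clip(m.predict_proba(X[:, cols])[:, 1], 1e-12, 1 - 1e-12)
    ll = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return 2 * (len(cols) + 1) - 2 * ll    # +1 for the intercept

candidates = [[0], [1], [0, 1], [0, 1, 2]]
best = min(candidates, key=aic)
```

Dropping a truly relevant regressor raises the AIC sharply, while adding a pure noise regressor changes it only slightly, which is what makes the criterion usable for selection.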
Modeling confounding by half-sibling regression
Schölkopf, Bernhard; Hogg, David W.; Wang, Dun; Foreman-Mackey, Daniel; Janzing, Dominik; Simon-Gabriel, Carl-Johann; Peters, Jonas
2016-01-01
We describe a method for removing the effect of confounders to reconstruct a latent quantity of interest. The method, referred to as “half-sibling regression,” is inspired by recent work in causal inference using additive noise models. We provide a theoretical justification, discussing both independent and identically distributed as well as time series data, respectively, and illustrate the potential of the method in a challenging astronomy application. PMID:27382154
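The core of half-sibling regression can be sketched with linear regression: predict the contaminated observation from "sibling" series that share the confounder but are independent of the signal, and keep the residual as the reconstruction. This is a hedged toy version under assumed linear confounding; the signal shape, sibling coefficients, and noise levels are inventions of the sketch.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 400)
signal = 0.5 * np.sin(2 * np.pi * t / 3.0)   # latent quantity of interest
systematics = np.cumsum(rng.normal(scale=0.1, size=t.size))  # shared confounder

# Observation of interest: signal plus confounder plus noise.
y = signal + 2.0 * systematics + rng.normal(scale=0.05, size=t.size)
# "Half siblings": driven by the same confounder but not by the signal.
siblings = np.column_stack([
    a * systematics + rng.normal(scale=0.05, size=t.size)
    for a in (0.7, -1.3, 2.1, 0.4)
])

# Reconstruct: subtract the part of y explained by the siblings.
fit = LinearRegression().fit(siblings, y)
reconstructed = y - fit.predict(siblings)
corr = np.corrcoef(reconstructed, signal)[0, 1]
```

Because the siblings carry the confounder but not the signal, regressing y on them removes the systematics while leaving the signal largely intact in the residual.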
Satellite rainfall retrieval by logistic regression
NASA Technical Reports Server (NTRS)
Chiu, Long S.
1986-01-01
The potential use of logistic regression in rainfall estimation from satellite measurements is investigated. Satellite measurements provide covariate information in terms of radiances from different remote sensors. The logistic regression technique can effectively accommodate many covariates and test their significance in the estimation. The outcome of the logistic model is the probability that the rain rate of a satellite pixel is above a certain threshold. By varying the threshold, a rain-rate histogram can be obtained, from which the mean and the variance can be estimated. A logistic model is developed and applied to rainfall data collected during GATE, using as covariates the fractional rain area and a radiance measurement deduced from a microwave temperature-rain-rate relation. It is demonstrated that the fractional rain area is an important covariate in the model, consistent with the use of the so-called Area Time Integral in estimating total rain volume in other studies. To calibrate the logistic model, simulated rain fields generated by rain-field models with prescribed parameters are needed. A stringent test of the logistic model is its ability to recover the prescribed parameters of simulated rain fields. A rain-field simulation model which preserves the fractional rain area and the lognormality of rain rates as found in GATE is developed. A stochastic regression model of branching and immigration, whose solutions are lognormally distributed in some asymptotic limits, has also been developed.
Variable Selection in Semiparametric Regression Modeling.
Li, Runze; Liang, Hua
2008-01-01
In this paper, we are concerned with how to select significant variables in semiparametric modeling. Variable selection for semiparametric regression models consists of two components: model selection for the nonparametric component and selection of significant variables for the parametric portion. Thus, it is much more challenging than for parametric models such as linear models and generalized linear models, because traditional variable selection procedures, including stepwise regression and best subset selection, require model selection for the nonparametric component for each submodel. This leads to a very heavy computational burden. In this paper, we propose a class of variable selection procedures for semiparametric regression models using nonconcave penalized likelihood. The newly proposed procedures are distinguished from the traditional ones in that they delete insignificant variables and estimate the coefficients of significant variables simultaneously. This allows us to establish the sampling properties of the resulting estimate. We first establish the rate of convergence of the resulting estimate. With proper choices of penalty functions and regularization parameters, we then establish its asymptotic normality, and further demonstrate that the proposed procedures perform as well as an oracle procedure. A semiparametric generalized likelihood ratio test is proposed to select significant variables in the nonparametric component. We investigate the asymptotic behavior of the proposed test and demonstrate that its limiting null distribution follows a chi-squared distribution, which is independent of the nuisance parameters. Extensive Monte Carlo simulation studies are conducted to examine the finite sample performance of the proposed variable selection procedures.
Height and age adjustment for cross sectional studies of lung function in children aged 6-11 years.
Chinn, S; Rona, R J
1992-01-01
BACKGROUND: No standard exists for the adjustment of lung function for height and age in children. Multiple regression should not be used on untransformed data because, for example, forced expiratory volume (FEV1), though normally distributed for height, age, and sex, has increasing standard deviation. A solution to the conflict is proposed. METHODS: Spirometry was performed on representative samples of children aged 6.5 to 11.99 years in primary schools in England. After exclusion of children who did not provide two repeatable blows, 910 white English boys and 722 girls had data on FEV1 and height. Means and standard deviations of FEV1 divided by height were plotted to determine whether logarithmic transformation of FEV1 was appropriate. Multiple regression was used to give predicted FEV1 for height and age on the transformed scale; back transformation gave predicted values in litres. Other lung function measures were analysed, and data on inner city children, children from ethnic minority groups, and Scottish children were described. RESULTS: After logarithmic (ln) transformation of FEV1 the standard deviation was constant. The ratios of actual to predicted values of FEV1 were normally distributed in boys and girls. From the means and standard deviations of these distributions, and the predicted values, centiles and standard deviation scores can be calculated. CONCLUSION: The method described is valid because the assumption of stable variance for multiple regression was satisfied on the log scale and the variation of ratios of actual to predicted values on the original scale was well described by a normal distribution. The adoption of the method will lead to uniformity and greater ease of comparison of research findings. PMID:1440464
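The log-scale regression and back-transformation step can be sketched numerically. This is a hedged illustration on simulated spirometry-like data; the coefficients, the noise level, and the use of log(height) as a regressor are inventions of the sketch, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 800
height = rng.uniform(1.10, 1.55, size=n)     # metres, roughly ages 6-12
age = rng.uniform(6.5, 12.0, size=n)
# Simulate FEV1 with SD proportional to the mean: constant SD on log scale.
log_fev1 = -0.9 + 2.0 * np.log(height) + 0.02 * age + rng.normal(0, 0.12, n)
fev1 = np.exp(log_fev1)

# Multiple regression on the log scale, as the paper proposes.
Z = np.column_stack([np.ones(n), np.log(height), age])
beta, *_ = np.linalg.lstsq(Z, np.log(fev1), rcond=None)
resid = np.log(fev1) - Z @ beta
sd = resid.std(ddof=Z.shape[1])

predicted = np.exp(Z @ beta)       # back-transformed predicted FEV1, litres
z_scores = resid / sd              # standard deviation scores, log scale
```

The residual SD is constant across height on the log scale, so a single value of `sd` yields valid centiles and SD scores for every child.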
ERIC Educational Resources Information Center
Proper, Elizabeth C.; And Others
This segment of the national evaluation study of the Follow Through Planned Variation Model discusses findings of analyses of achievement test data which have been adjusted to take into consideration the preschool experience of children in three Follow Through cohorts. These analyses serve as a supplement to analyses presented in Volume IV-A of…
Quantile regression modeling for Malaysian automobile insurance premium data
NASA Astrophysics Data System (ADS)
Fuzi, Mohd Fadzli Mohd; Ismail, Noriszura; Jemain, Abd Aziz
2015-09-01
Quantile regression is robust to outliers compared with mean regression models. Traditional mean regression models such as the generalized linear model (GLM) are not able to capture the entire distribution of premium data. In this paper we demonstrate how a quantile regression approach can be used to model net premium data to study the effects of changes in the estimates of regression parameters (rating classes) on the magnitude of the response variable (pure premium). We then compare the results of the quantile regression model with a Gamma regression model. The results from quantile regression show that some rating classes increase as the quantile increases and some decrease. Further, we found that the confidence interval of median regression (τ = 0.5) is always smaller than that of Gamma regression for all risk factors.
Spatial quantile regression using INLA with applications to childhood overweight in Malawi.
Mtambo, Owen P L; Masangwi, Salule J; Kazembe, Lawrence N M
2015-04-01
Analyses of childhood overweight have mainly used mean regression. However, using quantile regression is more appropriate, as it provides the flexibility to analyse the determinants of overweight corresponding to quantiles of interest. The main objective of this study was to fit a Bayesian additive quantile regression model with structured spatial effects for childhood overweight in Malawi using the 2010 Malawi DHS data. Inference was fully Bayesian using the R-INLA package. The significant determinants of childhood overweight ranged from socio-demographic factors, such as type of residence, to child and maternal factors, such as child age and maternal BMI. We observed significant positive structured spatial effects on childhood overweight in some districts of Malawi. We recommend that childhood malnutrition policy makers consider timely interventions based on the risk factors identified in this paper, including spatial targeting of interventions.
Wavelet Analyses and Applications
ERIC Educational Resources Information Center
Bordeianu, Cristian C.; Landau, Rubin H.; Paez, Manuel J.
2009-01-01
It is shown how a modern extension of Fourier analysis known as wavelet analysis is applied to signals containing multiscale information. First, a continuous wavelet transform is used to analyse the spectrum of a nonstationary signal (one whose form changes in time). The spectral analysis of such a signal gives the strength of the signal in each…
NASA Technical Reports Server (NTRS)
Taylor, G. R.
1972-01-01
Extensive microbiological analyses that were performed on the Apollo 14 prime and backup crewmembers and ancillary personnel are discussed. The crewmembers were subjected to four separate and quite different environments during the 137-day monitoring period. The relation between each of these environments and observed changes in the microflora of each astronaut are presented.
Bennett, David S; Bendersky, Margaret; Lewis, Michael
2002-09-01
The authors examined 223 children at age 4 years for the effects of prenatal cocaine exposure, exposure to other substances, maternal and environmental risk factors, and neonatal medical problems on IQ, externalizing problems, and internalizing problems. Regression analyses showed that maternal verbal IQ and low environmental risk predicted child IQ. Cocaine exposure negatively predicted children's overall IQ and verbal reasoning scores, but only for boys. Cocaine exposure also predicted poorer short-term memory. Maternal harsh discipline, maternal depressive symptoms, and increased environmental risk predicted externalizing problems. In contrast, only maternal depressive symptoms predicted internalizing problems. These findings indicate that early exposure to substances is largely unrelated to subsequent IQ or adjustment, particularly for girls.
Functional Regression Models for Epistasis Analysis of Multiple Quantitative Traits.
Zhang, Futao; Xie, Dan; Liang, Meimei; Xiong, Momiao
2016-04-01
To date, most genetic analyses of phenotypes have focused on analyzing single traits or analyzing each phenotype independently. However, joint epistasis analysis of multiple complementary traits will increase statistical power and improve our understanding of the complicated genetic structure of complex diseases. Despite their importance in uncovering the genetic structure of complex traits, statistical methods for identifying epistasis in multiple phenotypes remain fundamentally unexplored. To fill this gap, we formulate a test for interaction between two genes in multiple quantitative trait analysis as a multiple functional regression (MFRG), in which the genotype functions (genetic variant profiles) are defined as a function of the genomic position of the genetic variants. We use large-scale simulations to calculate Type I error rates for testing interaction between two genes with multiple phenotypes and to compare the power with multivariate pairwise interaction analysis and single trait interaction analysis by a single variate functional regression model. To further evaluate performance, the MFRG for epistasis analysis is applied to five phenotypes of exome sequence data from the NHLBI's Exome Sequencing Project (ESP) to detect pleiotropic epistasis. A total of 267 pairs of genes that formed a genetic interaction network showed significant evidence of epistasis influencing five traits. The results demonstrate that joint interaction analysis of multiple phenotypes has much higher power to detect interaction than interaction analysis of a single trait and may open a new direction to fully uncovering the genetic structure of multiple phenotypes.
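Stripped of the functional-data machinery, a gene-gene interaction test for a single quantitative trait reduces to the significance of a product term in an ordinary regression. The following hedged sketch uses scalar genotype dosages instead of the paper's genotype functions, and simulated effect sizes that are inventions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 2000
g1 = rng.binomial(2, 0.3, size=n)   # genotype dosages (0/1/2) at two loci
g2 = rng.binomial(2, 0.4, size=n)
# Quantitative trait with main effects and a true epistatic interaction.
y = 0.3 * g1 + 0.2 * g2 + 0.25 * g1 * g2 + rng.normal(size=n)

# OLS with an interaction term; its t-statistic is the epistasis test.
X = np.column_stack([np.ones(n), g1, g2, g1 * g2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
cov = sigma2 * np.linalg.inv(X.T @ X)
t_interaction = beta[3] / np.sqrt(cov[3, 3])
```

The MFRG extends this idea by replacing each scalar dosage with a function over genomic position and by testing the interaction jointly across several traits, which is where the power gain reported above comes from.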
ERIC Educational Resources Information Center
Ozechowski, Timothy J.; Turner, Charles W.; Hops, Hyman
2007-01-01
This article demonstrates the use of mixed-effects logistic regression (MLR) for conducting sequential analyses of binary observational data. MLR is a special case of the mixed-effects logit modeling framework, which may be applied to multicategorical observational data. The MLR approach is motivated in part by G. A. Dagne, G. W. Howe, C. H.…
Multiple linear regression models are often used to predict levels of fecal indicator bacteria (FIB) in recreational swimming waters based on independent variables (IVs) such as meteorologic, hydrodynamic, and water-quality measures. The IVs used for these analyses are traditiona...
2010-01-01
Objectives: To evaluate the use and reporting of adjusted analyses in randomised controlled trials (RCTs) and compare the quality of reporting before and after the revision of the CONSORT Statement in 2001. Design: Comparison of two cross sectional samples of published articles. Data sources: Journal articles indexed on PubMed in December 2000 and December 2006. Study selection: Parallel group RCTs with a full publication carried out in humans and published in English. Main outcome measures: Proportion of articles that reported adjusted analyses; use of adjusted analysis; the reason for adjustment; the method of adjustment; and the reporting of adjusted analysis results in the main text and abstract. Results: In both cohorts, 25% of studies reported adjusted analyses (84/355 in 2000 vs 113/422 in 2006). Compared with articles reporting only unadjusted analyses, articles that reported adjusted analyses were more likely to specify primary outcomes, involve multiple centers, perform stratified randomization, be published in general medical journals, and recruit larger sample sizes. In both years a minority of articles explained why and how covariates were selected for adjustment (20% to 30%). Almost all articles specified the statistical methods used for adjustment (99% in 2000 vs 100% in 2006), but only 5% and 10%, respectively, reported both adjusted and unadjusted results as recommended in the CONSORT guidelines. Conclusion: There was no evidence of change in the reporting of adjusted analysis results five years after the revision of the CONSORT Statement, and only a few articles adhered fully to the CONSORT recommendations. PMID:20482769
Welsh, Janet A; Olson, Jonathan; Perkins, Daniel F; Travis, Wendy J; Ormsby, LaJuana
2015-09-01
This study examined the relations among three different types of naturally occurring social support (from romantic partners, friends and neighbors, and unit leaders) and three indices of service member well-being (self-reports of depressive symptoms, satisfaction with military life, and perceptions of unit readiness) for service members who did and did not report negative experiences associated with military deployment. Data were drawn from the 2011 Community Assessment completed anonymously by more than 63,000 USAF personnel. Regression analyses revealed that higher levels of social support were associated with better outcomes regardless of negative deployment experiences. Evidence of moderation was also noted, with all forms of social support moderating the impact of negative deployment experiences on depressive symptoms, and support from unit leaders moderating the impact of negative deployment experiences on satisfaction with military life. No moderation was found for perceptions of unit readiness. Subgroup analyses revealed slightly different patterns for male and female service members, with support providing fewer moderation effects for women. These findings may have value for military leaders and mental health professionals working to harness the power of naturally occurring relationships to maximize the positive adjustment of service members and their families. Implications for practices related to the reintegration of post-deployment military personnel are discussed.
Synthesizing US Colonial Climate: Available Data and a "Proxy Adjustment" Method
NASA Astrophysics Data System (ADS)
Zalzal, K. S.; Munoz-Hernandez, A.; Arrigo, J. S.
2008-12-01
Climate and its variability are primary drivers of hydrologic systems. A paucity of instrumental data makes reconstructing seventeenth- and eighteenth-century climatic conditions along the Northeast corridor difficult, yet this information is necessary if we are to understand the conditions, changes and interactions society had with hydrosystems during this first period of permanent European settlement. For this period (approx. 1600-1800) there are instrumental records for some regions, such as annual temperature and precipitation data for Philadelphia beginning in 1738; Cambridge, Mass., from 1747-1776; and temperature for New Haven, Conn., from 1780 to 1800. There are also paleorecords, including tree-ring analyses and sediment core examinations of pollen and overwash deposits, and historical accounts of extreme weather events. Our analyses of these data show that correlating even the available data is less than straightforward. To produce a "best track" climate record, we introduce a new method of "paleoadjustment" as a means to characterize climate statistical properties as opposed to a strict reconstruction. Combining the instrumented record with the paleorecord, we estimated two sets of climate forcings to use in colonial hydrology study. The first utilized a recent instrumented record (1817-1917) from Baltimore, Md., statistically adjusted in 20-year windows to match trends in the paleorecords and anecdotal evidence from the Middle Colonies and Chesapeake Bay region. The second was a regression reconstruction for New England using climate indices developed from journal records and the Cambridge, Mass., instrumental record. The two climate reconstructions were used to compute the annual potential water yield over the 200-year period of interest. A comparison of these results allowed us to draw preliminary conclusions regarding the effect of climate on hydrology during the colonial period. We contend that an understanding of historical hydrology will improve
Support Vector Machine algorithm for regression and classification
Yu, Chenggang; Zavaljevski, Nela
2001-08-01
The software is an implementation of the Support Vector Machine (SVM) algorithm that was invented and developed by Vladimir Vapnik and his co-workers at AT&T Bell Laboratories. The specific implementation reported here is an Active Set method for solving a quadratic optimization problem that forms the major part of any SVM program. The implementation is tuned to specific constraints generated in the SVM learning. Thus, it is more efficient than general-purpose quadratic optimization programs. A decomposition method has been implemented in the software that enables processing large data sets. The size of the learning data is virtually unlimited by the capacity of the computer physical memory. The software is flexible and extensible. Two upper bounds are implemented to regulate the SVM learning for classification, which allow users to adjust the false positive and false negative rates. The software can be used either as a standalone, general-purpose SVM regression or classification program, or be embedded into a larger software system.
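The two-upper-bound idea can be illustrated with a minimal sketch (this is not the reported Active Set implementation, and all names and parameter values are illustrative assumptions): a linear SVM trained by subgradient descent on a hinge loss whose penalty differs by class, so the false-positive and false-negative rates can be traded off.

```python
import numpy as np

def svm_train(X, y, c_pos=1.0, c_neg=1.0, lr=0.01, epochs=200):
    """Linear SVM via subgradient descent on the regularized hinge loss,
    with separate penalties (upper bounds) for the two classes so the
    false-positive/false-negative trade-off can be tuned. y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            c = c_pos if yi > 0 else c_neg
            if yi * (xi @ w + b) < 1.0:        # margin violated
                w -= lr * (w - c * yi * xi)    # regularizer + weighted hinge
                b += lr * c * yi
            else:
                w -= lr * w                    # regularizer only
    return w, b

# Toy data: two separable 2-D clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, (20, 2)), rng.normal(2.0, 0.5, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
w, b = svm_train(X, y, c_pos=1.0, c_neg=3.0)   # heavier penalty on the negative class
```

Raising `c_neg` relative to `c_pos` penalizes mistakes on negative examples more, which lowers the false-positive rate at the cost of more false negatives.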
Survival analysis of cervical cancer using stratified Cox regression
NASA Astrophysics Data System (ADS)
Purnami, S. W.; Inayati, K. D.; Sari, N. W. Wulan; Chosuvivatwong, V.; Sriplung, H.
2016-04-01
Cervical cancer is one of the leading causes of death among women worldwide, including in Indonesia. Most cervical cancer patients come to the hospital already at an advanced stage. As a result, treatment of cervical cancer becomes more difficult and can even increase the risk of death. One parameter that can be used to assess the success of treatment is the probability of survival. This study examines the survival of cervical cancer patients at Dr. Soetomo Hospital using stratified Cox regression based on six factors: age, stage, treatment initiation, companion disease, complication, and anemia. A stratified Cox model is used because one independent variable, stage, does not satisfy the proportional hazards assumption. The results of the stratified Cox model show that the complication variable is a significant factor influencing the survival probability of cervical cancer patients. The estimated hazard ratio is 7.35, meaning that a cervical cancer patient with a complication has a risk of dying 7.35 times greater than a patient without a complication. The adjusted survival curves showed that stage IV had the lowest probability of survival.
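As a rough sketch of the stratified approach (not the software used in the study; the function and data names are hypothetical), the stratified Cox partial log-likelihood forms risk sets within each stratum, so the stratifying variable needs no proportional-hazards assumption:

```python
import numpy as np

def stratified_cox_loglik(beta, X, time, event, stratum):
    """Breslow partial log-likelihood for a stratified Cox model:
    risk sets are formed within each stratum, so each stratum keeps
    its own unspecified baseline hazard (here, one stratum per stage)."""
    ll = 0.0
    eta = X @ beta
    for s in np.unique(stratum):
        m = stratum == s
        t, d, e = time[m], event[m], eta[m]
        for i in np.where(d == 1)[0]:
            at_risk = t >= t[i]                # subjects still at risk at t_i
            ll += e[i] - np.log(np.sum(np.exp(e[at_risk])))
    return ll

# Sanity check: with beta = 0 and one stratum, each event contributes
# -log(size of its risk set).
demo_ll = stratified_cox_loglik(np.array([0.0]), np.zeros((3, 1)),
                                np.array([1.0, 2.0, 3.0]),
                                np.array([1, 1, 1]),
                                np.array([0, 0, 0]))
```

The reported hazard ratio follows directly from the fitted coefficient: exp(beta) = 7.35 corresponds to beta of about 1.995 for the complication indicator.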
Mission assurance increased with regression testing
NASA Astrophysics Data System (ADS)
Lang, R.; Spezio, M.
Knowing what to test is an important attribute in any testing campaign, especially when it has to be right or the mission could be in jeopardy. The New Horizons mission, developed and operated by the Johns Hopkins University Applied Physics Laboratory, received a planned major upgrade to its Mission Operations and Control (MOC) ground system architecture. Early in the mission planning it was recognized that the ground system platform would require an upgrade to assure continued support of technology used for spacecraft operations. With the planned update of the six-year-old operational ground architecture from Solaris 8 to Solaris 10, it was critical that the new architecture maintain critical operations and control functions. The New Horizons spacecraft is heading to its historic rendezvous with Pluto in July 2015 and then proceeding into the Kuiper Belt. This paper discusses the Independent Software Acceptance Testing (ISAT) regression test campaign that played a critical role in assuring the continued success of the New Horizons mission. The New Horizons ISAT process was designed to assure that all the requirements were being met for the ground software functions developed to support the mission objectives. The ISAT team developed a test plan with a series of test case designs. The test objectives were to verify that the software developed from the requirements functioned as expected in the operational environment. As the test cases were developed and executed, a regression test suite was identified at the functional level. This regression test suite would serve as a crucial resource in assuring that the operational system continued to function as required with such a large-scale change being introduced. Some of the New Horizons ground software changes required modifications to the most critical functions of the operational software. Of particular concern was that the new MOC architecture (Solaris 10) is Intel-based and little-endian, while the legacy architecture (Solaris 8) was SPA
Mapping geogenic radon potential by regression kriging.
Pásztor, László; Szabó, Katalin Zsuzsanna; Szatmári, Gábor; Laborczi, Annamária; Horváth, Ákos
2016-02-15
Radon ((222)Rn) gas is produced in the radioactive decay chain of uranium ((238)U), an element that is naturally present in soils. Radon is transported mainly by diffusion and convection mechanisms through the soil, depending chiefly on the physical and meteorological parameters of the soil, and can enter and accumulate in buildings. Health risks originating from indoor radon concentration can be attributed to natural factors and are characterized by the geogenic radon potential (GRP). Identification of areas with high health risk requires spatial modeling, that is, mapping of radon risk. In addition to geology and meteorology, physical soil properties play a significant role in the determination of GRP. In order to compile a reliable GRP map for a model area in Central Hungary, spatial auxiliary information representing GRP-forming environmental factors was taken into account to support the spatial inference of the locally measured GRP values. Since the number of measured sites was limited, efficient spatial prediction methodologies were sought to construct a reliable map for a larger area. Regression kriging (RK) was applied for the interpolation, using spatially exhaustive auxiliary data on soil, geology, topography, land use and climate. RK divides the spatial inference into two parts. First, the deterministic component of the target variable is determined by a regression model. The residuals of the multiple linear regression analysis represent the spatially varying but dependent stochastic component, which is interpolated by kriging. The final map is the sum of the two component predictions. Overall accuracy of the map was tested by leave-one-out cross-validation. Furthermore, the spatial reliability of the resultant map is estimated by calculating the 90% prediction interval of the local prediction values. The applicability of the method, as well as that of the map, is discussed briefly.
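The two-part RK decomposition described above can be sketched in a few lines (a minimal illustration, not the authors' implementation; the exponential covariance model and all parameter values are assumptions):

```python
import numpy as np

def regression_kriging(coords, covars, z, coords0, covars0, sill=1.0, rang=2.0):
    """Regression kriging sketch: a multiple-linear-regression trend on
    auxiliary covariates, plus simple kriging of the regression residuals
    under an exponential covariance model C(h) = sill * exp(-h / rang)."""
    # 1) deterministic component: regression on exhaustive auxiliary data
    A = np.column_stack([np.ones(len(z)), covars])
    beta, *_ = np.linalg.lstsq(A, z, rcond=None)
    resid = z - A @ beta
    # 2) stochastic component: krige the residuals
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    C = sill * np.exp(-d / rang) + 1e-9 * np.eye(len(z))
    c0 = sill * np.exp(-np.linalg.norm(coords - coords0, axis=1) / rang)
    krig = c0 @ np.linalg.solve(C, resid)
    # 3) final prediction = trend + interpolated residual
    return np.concatenate([[1.0], covars0]) @ beta + krig

# Tiny demo: predicting back at a measured site reproduces the observation.
coords = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
covars = np.array([[0.], [1.], [2.], [3.]])
z = np.array([1.0, 2.0, 3.0, 5.0])
pred = regression_kriging(coords, covars, z, np.array([0., 0.]), np.array([0.]))
```

Kriging is an exact interpolator (absent a nugget effect), so the prediction at a measured location returns the measured value; the trend component alone would not.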
Monthly streamflow forecasting using Gaussian Process Regression
NASA Astrophysics Data System (ADS)
Sun, Alexander Y.; Wang, Dingbao; Xu, Xianli
2014-04-01
Streamflow forecasting plays a critical role in nearly all aspects of water resources planning and management. In this work, Gaussian Process Regression (GPR), an effective kernel-based machine learning algorithm, is applied to probabilistic streamflow forecasting. GPR is built on the Gaussian process, a stochastic process that generalizes the multivariate Gaussian distribution to infinite-dimensional spaces so that distributions over function values can be defined. The GPR algorithm provides a tractable and flexible hierarchical Bayesian framework for inferring the posterior distribution of streamflows. The prediction skill of the algorithm is tested for one-month-ahead prediction using the MOPEX database, which includes long-term hydrometeorological time series collected from 438 basins across the U.S. from 1948 to 2003. Comparisons with linear regression and artificial neural network models indicate that GPR outperforms both regression methods in most cases. The GPR prediction of MOPEX basins is further examined using the Budyko framework, which helps to reveal the close relationships among water-energy partitions, hydrologic similarity, and predictability. Flow regime modification and the resulting loss of predictability have been a major concern in recent years because of climate change and anthropogenic activities. The persistence of streamflow predictability is thus examined by extending the original MOPEX data records to 2012. Results indicate relatively strong persistence of streamflow predictability in the extended period, although the low-predictability basins tend to show more variations. Because many low-predictability basins are located in regions experiencing fast growth of human activities, the significance of sustainable development and water resources management can be even greater for those regions.
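The GPR posterior described above can be written down compactly for 1-D inputs (a minimal sketch with an RBF kernel and fixed hyperparameters, not the study's model; the synthetic series stands in for flow data):

```python
import numpy as np

def gpr_predict(X, y, Xs, length=1.0, sigma_f=1.0, sigma_n=0.1):
    """Gaussian process regression with an RBF kernel on 1-D inputs:
    returns the posterior mean and variance at the test inputs Xs."""
    def k(a, b):
        return sigma_f**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
    K = k(X, X) + sigma_n**2 * np.eye(len(X))   # noisy training covariance
    Ks = k(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = sigma_f**2 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var

X = np.linspace(0.0, 3.0, 10)
y = np.sin(X)                                   # stand-in for a flow series
mean, var = gpr_predict(X, y, np.array([1.5, 10.0]))
```

The posterior variance is what makes the forecast probabilistic: it is small near observed inputs and reverts to the prior variance far from the data, which is exactly the behaviour a one-month-ahead predictive interval needs.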
Multiple linear regression for isotopic measurements
NASA Astrophysics Data System (ADS)
Garcia Alonso, J. I.
2012-04-01
There are two typical applications of isotopic measurements: the detection of natural variations in isotopic systems and the detection of man-made variations using enriched isotopes as indicators. For both types of measurement, accurate and precise isotope ratio measurements are required. For the so-called non-traditional stable isotopes, multicollector ICP-MS instruments are usually applied. In many cases, chemical separation procedures are required before accurate isotope measurements can be performed. The off-line separation of Rb and Sr or Nd and Sm is the classical procedure employed to eliminate isobaric interferences before multicollector ICP-MS measurement of Sr and Nd isotope ratios. This procedure also allows matrix separation, so that precise and accurate Sr and Nd isotope ratios can be obtained. In our laboratory we have evaluated the separation of Rb-Sr and Nd-Sm isobars by liquid chromatography with on-line multicollector ICP-MS detection. The combination of this chromatographic procedure with multiple linear regression of the raw chromatographic data resulted in Sr and Nd isotope ratios with precisions and accuracies typical of off-line sample preparation procedures. On the other hand, methods for labelling individual organisms (such as a given plant, fish or animal) are required for population studies. We have developed a dual isotope labelling procedure that can be unique for a given individual, can be inherited in living organisms, and is stable. The detection of the isotopic signature is also based on multiple linear regression. The labelling of fish and its detection in otoliths by laser ablation ICP-MS will be discussed using trout and salmon as examples. In conclusion, isotope measurement procedures based on multiple linear regression can be a viable alternative in multicollector ICP-MS measurements.
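The regression step amounts to deconvolving overlapping isotope patterns by least squares. A minimal sketch (the isotope patterns below are made-up numbers, not real Rb/Sr or Nd/Sm abundances):

```python
import numpy as np

# Columns: hypothetical isotope patterns of two interfering elements
# at a shared set of monitored masses (rows). These values are
# illustrative only, not real natural abundances.
P = np.array([[0.8, 0.1],
              [0.2, 0.3],
              [0.0, 0.6]])
true_amounts = np.array([2.0, 5.0])
signal = P @ true_amounts           # simulated mixed chromatographic signal

# Multiple linear regression recovers each element's contribution,
# which removes the isobaric interference without off-line separation.
amounts, *_ = np.linalg.lstsq(P, signal, rcond=None)
```

With more monitored masses than unknown contributors, the system is overdetermined and the least-squares fit also averages down random noise in the raw chromatographic data.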
Atmospheric tether mission analyses
NASA Technical Reports Server (NTRS)
1996-01-01
NASA is considering the use of tethered satellites to explore regions of the atmosphere inaccessible to spacecraft or high altitude research balloons. This report summarizes the Lockheed Martin Astronautics (LMA) effort for the engineering study team assessment of an Orbiter-based atmospheric tether mission. Lockheed Martin responsibilities included design recommendations for the deployer and tether, as well as tether dynamic analyses for the mission. Three tether configurations were studied including single line, multistrand (Hoytether) and tape designs.
Min-Max Bias Robust Regression.
1987-08-01
Min-max bias robust regression, by R. D. Martin, V. J. Yohai, R. H. It is shown that an S-estimate based on a jump-function type ρ solves the min-max bias problem for the class of M-estimates with very general scale. As the contamination fraction approaches .5, the min-max estimator approaches the least median of squared residuals estimator introduced by Rousseeuw [J. Am. Statist. Assoc.].
A method for nonlinear exponential regression analysis
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1971-01-01
A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix was derived and then applied to the nominal estimate to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
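The iteration described above can be sketched for a single-exponential decay model (a simplified illustration of the same Gauss-Newton idea, not the original program; model and starting values are assumptions):

```python
import numpy as np

def fit_exponential(t, y, a, b, iters=25):
    """Nonlinear exponential regression for y = a * exp(-b * t):
    linearize the model in a Taylor series about the current estimates,
    solve the linear least-squares problem for a correction, and repeat."""
    for _ in range(iters):
        f = a * np.exp(-b * t)
        J = np.column_stack([np.exp(-b * t),            # df/da
                             -a * t * np.exp(-b * t)])  # df/db
        delta, *_ = np.linalg.lstsq(J, y - f, rcond=None)
        a, b = a + delta[0], b + delta[1]               # corrected estimates
    return a, b

t = np.linspace(0.0, 4.0, 40)
y = 3.0 * np.exp(-0.7 * t)            # noise-free decay-type data
a_hat, b_hat = fit_exponential(t, y, a=2.0, b=0.5)
```

In practice the starting values matter; the linear curve-fitting step the abstract mentions (fitting log y against t) is a good way to obtain them before the Gauss-Newton iterations begin.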
Semiparametric regression in capture-recapture modeling.
Gimenez, O; Crainiceanu, C; Barbraud, C; Jenouvrier, S; Morgan, B J T
2006-09-01
Capture-recapture models were developed to estimate survival using data arising from marking and monitoring wild animals over time. Variation in survival may be explained by incorporating relevant covariates. We propose nonparametric and semiparametric regression methods for estimating survival in capture-recapture models. A fully Bayesian approach using Markov chain Monte Carlo simulations was employed to estimate the model parameters. The work is illustrated by a study of Snow petrels, in which survival probabilities are expressed as nonlinear functions of a climate covariate, using data from a 40-year study on marked individuals, nesting at Petrels Island, Terre Adélie.
Learning regulatory programs by threshold SVD regression.
Ma, Xin; Xiao, Luo; Wong, Wing Hung
2014-11-04
We formulate a statistical model for the regulation of global gene expression by multiple regulatory programs and propose a thresholding singular value decomposition (T-SVD) regression method for learning such a model from data. Extensive simulations demonstrate that this method offers improved computational speed and higher sensitivity and specificity over competing approaches. The method is used to analyze microRNA (miRNA) and long noncoding RNA (lncRNA) data from The Cancer Genome Atlas (TCGA) consortium. The analysis yields previously unidentified insights into the combinatorial regulation of gene expression by noncoding RNAs, as well as findings that are supported by evidence from the literature.
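The core of the approach, recovering a low-rank coefficient matrix whose singular components act as "programs", can be sketched as follows (a simplified stand-in for the authors' T-SVD estimator, with hard thresholding of OLS coefficients; all data and values are synthetic):

```python
import numpy as np

def tsvd_regression(X, Y, thresh):
    """Sketch of thresholding-SVD regression: fit multivariate
    least-squares coefficients, then hard-threshold their singular
    values so only a few latent 'regulatory programs' survive."""
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return U @ np.diag(np.where(s >= thresh, s, 0.0)) @ Vt

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 6))          # e.g. regulator (miRNA/lncRNA) levels
B_true = np.ones((6, 8))               # one rank-1 "program" drives all genes
Y = X @ B_true + 0.01 * rng.normal(size=(100, 8))
B_hat = tsvd_regression(X, Y, thresh=0.5)
```

Thresholding suppresses the noise-driven singular components, so the recovered coefficient matrix has the same low rank as the underlying regulatory structure.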
Postpartum Regression of a Presumed Cavernous Meningioma
Phang, See Yung; Whitfield, Peter
2016-01-01
Meningiomas are known to be more common in females than males. They are also known in rare cases to grow in size during pregnancy, which can complicate their management. We describe a 31-year-old Caucasian woman who presented with blurring of her vision and diplopia during the third trimester of her pregnancy. Magnetic resonance imaging (MRI) showed a small left cavernous sinus meningioma. The patient was treated conservatively until her uncomplicated delivery. A postpartum MRI scan showed complete regression of the suspected meningioma. Currently the patient is contemplating a further pregnancy. PMID:27066285
An operational GLS model for hydrologic regression
Tasker, Gary D.; Stedinger, J.R.
1989-01-01
Recent Monte Carlo studies have documented the value of generalized least squares (GLS) procedures to estimate empirical relationships between streamflow statistics and physiographic basin characteristics. This paper presents a number of extensions of the GLS method that deal with realities and complexities of regional hydrologic data sets that were not addressed in the simulation studies. These extensions include: (1) a more realistic model of the underlying model errors; (2) smoothed estimates of cross correlation of flows; (3) procedures for including historical flow data; (4) diagnostic statistics describing leverage and influence for GLS regression; and (5) the formulation of a mathematical program for evaluating future gaging activities. © 1989.
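The GLS estimator at the heart of these procedures is a one-liner once the error covariance is specified (a generic sketch, not the operational model with its smoothed cross-correlations; the covariance here is assumed):

```python
import numpy as np

def gls(X, y, Lam):
    """Generalized least squares: beta = (X' Lam^-1 X)^-1 X' Lam^-1 y,
    where Lam is the full error covariance (model error plus sampling
    error, including cross-correlation of concurrent flows)."""
    Li = np.linalg.inv(Lam)
    return np.linalg.solve(X.T @ Li @ X, X.T @ Li @ y)

# With Lam = I, GLS reduces to ordinary least squares.
X = np.column_stack([np.ones(5), np.arange(5.0)])
y = np.array([1.0, 2.0, 3.0, 4.0, 6.0])
beta_gls = gls(X, y, np.eye(5))
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The value of GLS in the regional setting comes entirely from a realistic `Lam`: unequal record lengths inflate the sampling-error diagonal, and cross-correlated flows fill in the off-diagonal terms.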
Inferring gene regression networks with model trees
2010-01-01
Background Novel strategies are required in order to handle the huge amount of data produced by microarray technologies. To infer gene regulatory networks, the first step is to find direct regulatory relationships between genes by building so-called gene co-expression networks. These are typically generated using correlation statistics as pairwise similarity measures. Correlation-based methods are very useful for determining whether two genes have a strong global similarity, but they do not detect local similarities. Results We propose model trees as a method to identify gene interaction networks. While correlation-based methods analyze each pair of genes, in our approach we generate a single regression tree for each gene from the remaining genes. Finally, a graph of all the relationships among output and input genes is built, taking into account whether each pair of genes is statistically significant. For this reason we apply a statistical procedure to control the false discovery rate. The performance of our approach, named REGNET, is experimentally tested on two well-known data sets: the Saccharomyces cerevisiae and E. coli data sets. First, the biological coherence of the results is tested. Second, the E. coli transcriptional network (in the Regulon database) is used as a control to compare the results to those of a correlation-based method. This experiment shows that REGNET performs more accurately at detecting true gene associations than the Pearson and Spearman zeroth- and first-order correlation-based methods. Conclusions REGNET generates gene association networks from gene expression data, and differs from correlation-based methods in that the relationship between one gene and others is calculated simultaneously. Model trees are very useful techniques for estimating the numerical values of the target genes by linear regression functions. They are very often more precise than linear regression models because they can add just different linear regressions to separate
ERIC Educational Resources Information Center
López-López, José Antonio; Botella, Juan; Sánchez-Meca, Julio; Marín-Martínez, Fulgencio
2013-01-01
Since heterogeneity between reliability coefficients is usually found in reliability generalization studies, moderator analyses constitute a crucial step for that meta-analytic approach. In this study, different procedures for conducting mixed-effects meta-regression analyses were compared. Specifically, four transformation methods for the…
Exploration adjustment by ant colonies
2016-01-01
How do animals in groups organize their work? Division of labour, i.e. the process by which individuals within a group choose which tasks to perform, has been extensively studied in social insects. Variability among individuals within a colony seems to underpin both the decision over which tasks to perform and the amount of effort to invest in a task. Studies have focused mainly on discrete tasks, i.e. tasks with a recognizable end. Here, we study the distribution of effort in nest seeking, in the absence of new nest sites. Hence, this task is open-ended and individuals have to decide when to stop searching, even though the task has not been completed. We show that collective search effort declines when colonies inhabit better homes, as a consequence of a reduction in the number of bouts (exploratory events). Furthermore, we show an increase in bout exploration time and a decrease in bout instantaneous speed for colonies inhabiting better homes. The effect of treatment on bout effort is very small; however, we suggest that the organization of work performed within nest searching is achieved both by a process of self-selection of the most hard-working ants and individual effort adjustment. PMID:26909180
Bible, Joe; Beck, James D; Datta, Somnath
2016-06-01
Ignorance of the mechanisms responsible for the availability of information presents an unusual problem for analysts. It is often the case that the availability of information is dependent on the outcome. In the analysis of cluster data we say that a condition of informative cluster size (ICS) exists when inference drawn from the analysis of hypothetical balanced data differs from inference drawn on the observed data. Much work has been done to address the analysis of clustered data with informative cluster size; examples include Inverse Probability Weighting (IPW), Cluster Weighted Generalized Estimating Equations (CWGEE), and Doubly Weighted Generalized Estimating Equations (DWGEE). When cluster size changes with time, i.e., the data set possesses temporally varying cluster sizes (TVCS), these methods may produce biased inference for the underlying marginal distribution of interest. We propose a new marginalization that may be appropriate for addressing clustered longitudinal data with TVCS. The principal motivation for our present work is to analyze the periodontal data collected by Beck et al. (1997, Journal of Periodontal Research 6, 497-505). Longitudinal periodontal data often exhibit both ICS and TVCS, as the number of teeth possessed by participants at the onset of the study is not constant, and teeth as well as individuals may be displaced throughout the study.
In praise of ambidexterity: How a continuum of handedness predicts social adjustment.
Denny, Kevin; Zhang, Wen
2017-03-01
This paper estimates the relationship between handedness and social adjustment in children. In addition to binary measures of hand preference, we also use a continuous measure of relative hand skill. Outcomes at ages 7, 11 and 16 are studied. The data used is the British 1958 Birth. Using a partially linear semi-parametric regression estimator, it is shown that non-right-handedness (as hand preference) is associated with poorer social adjustment but this effect weakens as individuals age into their teens. The continuous measure of hand skill has a non-monotonic effect on social adjustment with poorer social adjustment in the tails of the continuum. The results are consistent with a growing body of evidence which shows that it is the consistency or degree of laterality (rather than direction) that is important for many outcomes.
ERIC Educational Resources Information Center
Giannotti, Flavia; Cortesi, Flavia; Cerquiglini, Antonella; Miraglia, Daniela; Vagnoni, Cristina; Sebastiani, Teresa; Bernabei, Paola
2008-01-01
This study investigated sleep of children with autism and developmental regression and the possible relationship with epilepsy and epileptiform abnormalities. Participants were 104 children with autism (70 non-regressed, 34 regressed) and 162 typically developing children (TD). Results suggested that the regressed group had higher incidence of…
Regression Models For Saffron Yields in Iran
NASA Astrophysics Data System (ADS)
S. H, Sanaeinejad; S. N, Hosseini
Saffron is an important crop in social and economic terms in Khorassan Province (northeast of Iran). In this research we tried to evaluate trends in saffron yield in recent years and to study the relationship between saffron yield and climate change. A regression analysis was used to predict saffron yield based on 20 years of yield data in the Birjand, Ghaen and Ferdows cities. Climatological data for the same periods were provided by the database of the Khorassan Climatology Center. The climatological data included temperature, rainfall, relative humidity and sunshine hours for Model I, and temperature and rainfall for Model II. The results showed that the coefficients of determination for Birjand, Ferdows and Ghaen for Model I were 0.69, 0.50 and 0.81, respectively. The coefficients of determination for the same cities for Model II were 0.53, 0.50 and 0.72, respectively. Multiple regression analysis indicated that among the weather variables, temperature was the key parameter for variation of saffron yield. It was concluded that increasing spring temperature was the main cause of declining saffron yield in recent years across the province. Finally, the yield trend was predicted for the last 5 years using time series analysis.
Supporting Regularized Logistic Regression Privately and Efficiently.
Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei
2016-01-01
As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data on human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has not previously been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency, and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications in various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc.
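For reference, the statistical model being safeguarded is simply L2-regularized logistic regression (this sketch shows the plain, non-private estimator, not the paper's cryptographic protocol; the data are toy values):

```python
import numpy as np

def fit_logistic_l2(X, y, lam=0.1, lr=0.1, iters=2000):
    """L2-regularized logistic regression fitted by plain gradient
    descent on the penalized log-loss; y in {0, 1}, lam is the ridge
    penalty on the weights."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        grad = X.T @ (p - y) / len(y) + lam * w
        w -= lr * grad
    return w

# Toy data: first column is an intercept
X = np.array([[1., 0.], [1., 1.], [1., 2.], [1., 3.]])
y = np.array([0., 0., 1., 1.])
w = fit_logistic_l2(X, y, lam=0.01)
```

In the multi-institution setting, the quantities exchanged during fitting (gradients or summary statistics) are what must be protected, which is where the distributed computing and cryptography come in.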
Quantile Regression Models for Current Status Data.
Ou, Fang-Shu; Zeng, Donglin; Cai, Jianwen
2016-11-01
Current status data arise frequently in demography, epidemiology, and econometrics where the exact failure time cannot be determined but is only known to have occurred before or after a known observation time. We propose a quantile regression model to analyze current status data, because it does not require distributional assumptions and the coefficients can be interpreted as direct regression effects on the distribution of failure time in the original time scale. Our model assumes that the conditional quantile of failure time is a linear function of covariates. We assume conditional independence between the failure time and observation time. An M-estimator is developed for parameter estimation; it is computed using the concave-convex procedure, and its confidence intervals are constructed using a subsampling method. Asymptotic properties of the estimator are derived and proven using modern empirical process theory. The small sample performance of the proposed method is demonstrated via simulation studies. Finally, we apply the proposed method to analyze data from the Mayo Clinic Study of Aging.
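The distribution-free flavour of quantile regression comes from the pinball (check) loss. A minimal sketch of that idea for fully observed responses (much simpler than the paper's M-estimator for current status data, which handles the censoring; data and values here are toy assumptions):

```python
import numpy as np

def quantile_regression(X, y, tau, lr=0.05, iters=5000):
    """Linear quantile regression fitted by subgradient descent on the
    pinball (check) loss; tau is the target quantile, so no
    distributional assumption on the response is needed."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        r = y - X @ w
        g = -X.T @ np.where(r > 0, tau, tau - 1.0) / len(y)
        w -= lr * g
    return w

# Median (tau = 0.5) of an intercept-only model resists the outlier:
X = np.ones((5, 1))
y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
w = quantile_regression(X, y, tau=0.5)
```

Because the fitted quantile depends only on the sign pattern of the residuals, the estimate tracks the sample median rather than the outlier-inflated mean, which is why the coefficients retain their interpretation on the original time scale.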
Double linear regression classification for face recognition
NASA Astrophysics Data System (ADS)
Feng, Qingxiang; Zhu, Qi; Tang, Lin-Lin; Pan, Jeng-Shyang
2015-02-01
A new classifier based on the linear regression classification (LRC) classifier and the simple-fast representation-based classifier (SFR), named the double linear regression classification (DLRC) classifier, is proposed for image recognition in this paper. The traditional LRC classifier uses only the distance between test image vectors and the predicted image vectors of each class subspace for classification, while the SFR classifier uses the test image vectors and the nearest image vectors of the class subspace to classify the test sample. The DLRC classifier, by contrast, computes the predicted image vectors of each class subspace and uses all the predicted vectors to construct a novel robust global space. The DLRC then utilizes this global space to obtain new predicted vectors for each class for classification. A large number of experiments on the AR face database, JAFFE face database, Yale face database, Extended YaleB face database, and PIE face database are used to evaluate the performance of the proposed classifier. The experimental results show that the proposed classifier achieves a better recognition rate than the LRC classifier, the SFR classifier, and several other classifiers.
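The baseline LRC step that DLRC builds on can be sketched directly (this shows plain LRC, not the proposed DLRC global-space construction; the tiny class matrices are synthetic):

```python
import numpy as np

def lrc_classify(class_mats, x):
    """Linear regression classification (LRC): represent x as a linear
    combination of each class's training vectors (columns of V), and
    assign the class whose predicted vector lies closest to x."""
    best, best_err = None, np.inf
    for label, V in class_mats.items():
        beta, *_ = np.linalg.lstsq(V, x, rcond=None)
        err = np.linalg.norm(x - V @ beta)     # distance to class subspace
        if err < best_err:
            best, best_err = label, err
    return best

# Toy example: two classes spanning disjoint coordinate subspaces.
V0 = np.array([[1., 0.], [0., 1.], [0., 0.], [0., 0.]])  # class-0 subspace
V1 = np.array([[0., 0.], [0., 0.], [1., 0.], [0., 1.]])  # class-1 subspace
label = lrc_classify({0: V0, 1: V1}, np.array([1.0, 2.0, 0.0, 0.0]))
```

DLRC's addition is to pool the per-class predicted vectors `V @ beta` into one global space and regress against that, rather than comparing residuals class by class.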
A Gibbs sampler for multivariate linear regression
NASA Astrophysics Data System (ADS)
Mantz, Adam B.
2016-04-01
Kelly described an efficient algorithm, using Gibbs sampling, for performing linear regression in the fairly general case where non-zero measurement errors exist for both the covariates and response variables, where these measurements may be correlated (for the same data point), where the response variable is affected by intrinsic scatter in addition to measurement error, and where the prior distribution of covariates is modelled by a flexible mixture of Gaussians rather than assumed to be uniform. Here, I extend the Kelly algorithm in two ways. First, the procedure is generalized to the case of multiple response variables. Secondly, I describe how to model the prior distribution of covariates using a Dirichlet process, which can be thought of as a Gaussian mixture where the number of mixture components is learned from the data. I present an example of multivariate regression using the extended algorithm, namely fitting scaling relations of the gas mass, temperature, and luminosity of dynamically relaxed galaxy clusters as a function of their mass and redshift. An implementation of the Gibbs sampler in the R language, called LRGS, is provided.
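To fix ideas, here is a much simpler Gibbs sampler for ordinary Bayesian linear regression (no measurement errors on the covariates, intrinsic scatter collapsed into a single noise variance, and no Gaussian-mixture or Dirichlet-process prior, so this is a toy relative of the Kelly/LRGS algorithm; data and priors are assumptions):

```python
import numpy as np

def gibbs_linreg(X, y, n_iter=2000, burn=500, seed=0):
    """Gibbs sampler for Bayesian linear regression: alternately draw
    beta | sigma2 from its conditional normal and sigma2 | beta from a
    scaled inverse chi-square based on the residuals (flat priors)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y
    sigma2, draws = 1.0, []
    for it in range(n_iter):
        # beta | sigma2 ~ Normal(beta_hat, sigma2 * (X'X)^-1)
        beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)
        # sigma2 | beta ~ (residual sum of squares) / chi-square(n)
        r = y - X @ beta
        sigma2 = (r @ r) / rng.chisquare(n)
        if it >= burn:
            draws.append(beta)
    return np.array(draws)

# Simulated data: y = 1 + 2x + noise
rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 100)
X = np.column_stack([np.ones(100), x])
y = 1.0 + 2.0 * x + 0.1 * rng.normal(size=100)
draws = gibbs_linreg(X, y)
post_mean = draws.mean(axis=0)
```

The extension to multiple response variables amounts to replacing the scalar noise variance with a covariance matrix and drawing the coefficient matrix from a matrix-normal conditional; the alternating structure of the sampler is unchanged.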
Scientific Progress or Regress in Sports Physiology?
Böning, Dieter
2016-11-01
In modern societies there is strong belief in scientific progress, but, unfortunately, a parallel partial regress occurs because of often avoidable mistakes. Mistakes are mainly forgetting, erroneous theories, errors in experiments and manuscripts, prejudice, selected publication of "positive" results, and fraud. An example of forgetting is that methods introduced decades ago are used without knowing the underlying theories: Basic articles are no longer read or cited. This omission may cause incorrect interpretation of results. For instance, false use of actual base excess instead of standard base excess for calculation of the number of hydrogen ions leaving the muscles raised the idea that an unknown fixed acid is produced in addition to lactic acid during exercise. An erroneous theory led to the conclusion that lactate is not the anion of a strong acid but a buffer. Mistakes occur after incorrect application of a method, after exclusion of unwelcome values, during evaluation of measurements by false calculations, or during preparation of manuscripts. Co-authors, as well as reviewers, do not always carefully read papers before publication. Peer reviewers might be biased against a hypothesis or an author. A general problem is selected publication of positive results. An example of fraud in sports medicine is the presence of doped subjects in groups of investigated athletes. To reduce regress, it is important that investigators search both original and recent articles on a topic and conscientiously examine the data. All co-authors and reviewers should read the text thoroughly and inspect all tables and figures in a manuscript.
A reconsideration of the concept of regression.
Dowling, A Scott
2004-01-01
Regression has been a useful psychoanalytic concept, linking present mental functioning with past experiences and levels of functioning. The concept originated as an extension of the evolutionary zeitgeist of the day as enunciated by H. Spencer and H. Jackson and applied by Freud to psychological phenomena. The value system implicit in the contrast of evolution/progression vs dissolution/regression has given rise to unfortunate and powerful assumptions of social, cultural, developmental, and individual value as embodied in notions of "higher," "lower," "primitive," "mature," "archaic," and "advanced." The unhelpful results of these assumptions are evident, for example, in attitudes concerning cultural, sexual, and social "correctness," same-sex object choice, and goals of treatment. An alternative, a continuously constructed, continuously emerging mental life, in analogy to the ever-changing, continuous physical body, is suggested. This view retains the fundamentals of psychoanalysis, for example, unconscious mental life, drive, defense, and psychic structure, but stresses a functional, ever-changing, present-oriented understanding of mental life, as contrasted with a static, onion-layered view.
Shape regression for vertebra fracture quantification
NASA Astrophysics Data System (ADS)
Lund, Michael Tillge; de Bruijne, Marleen; Tanko, Laszlo B.; Nielsen, Mads
2005-04-01
Accurate and reliable identification and quantification of vertebral fractures constitute a challenge both in clinical trials and in the diagnosis of osteoporosis. Various efforts have been made to develop reliable, objective, and reproducible methods for assessing vertebral fractures, but at present there is no consensus concerning a universally accepted diagnostic definition of vertebral fractures. In this project we investigate whether it is possible to accurately reconstruct the shape of a normal vertebra using a neighbouring vertebra as prior information. The reconstructed shape can then be used to develop a novel vertebra fracture measure, by comparing the segmented vertebra shape with its reconstructed normal shape. The vertebrae in lateral x-rays of the lumbar spine were manually annotated by a medical expert. With this dataset we built a shape model, with equidistant point distribution between the four corner points. Based on the shape model, a multiple linear regression model of a normal vertebra shape was developed for each dataset using leave-one-out cross-validation. The reconstructed shape was calculated for each dataset using these regression models. The prediction error for the annotated shape averaged 3%.
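The reconstruction step above pairs linear regression with leave-one-out cross-validation: each vertebra's shape is predicted by a model refit on all the other cases. A minimal sketch of that validation loop, using one-dimensional ordinary least squares on made-up data (the study regresses full shape-model coordinates, not a single value):

```python
def ols_fit(xs, ys):
    """Ordinary least squares for the line y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def loocv_errors(xs, ys):
    """Leave-one-out CV: refit on all-but-one point, predict the held-out one."""
    errs = []
    for i in range(len(xs)):
        tx, ty = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        a, b = ols_fit(tx, ty)
        errs.append(abs((a * xs[i] + b) - ys[i]))
    return errs

# Made-up data lying exactly on y = 2x + 1, so held-out predictions are exact.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]
errs = loocv_errors(xs, ys)
mean_err = sum(errs) / len(errs)
```

On real, noisy shape data the held-out errors would of course be nonzero; their average is the kind of prediction-error summary the abstract reports.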
Unification of regression-based methods for the analysis of natural selection.
Morrissey, Michael B; Sakrejda, Krzysztof
2013-07-01
Regression analyses are central to characterization of the form and strength of natural selection in nature. Two common analyses that are currently used to characterize selection are (1) least squares-based approximation of the individual relative fitness surface for the purpose of obtaining quantitatively useful selection gradients, and (2) spline-based estimation of (absolute) fitness functions to obtain flexible inference of the shape of functions by which fitness and phenotype are related. These two sets of methodologies are often implemented in parallel to provide complementary inferences of the form of natural selection. We unify these two analyses, providing a method whereby selection gradients can be obtained for a given observed distribution of phenotype and characterization of a function relating phenotype to fitness. The method allows quantitatively useful selection gradients to be obtained from analyses of selection that adequately model nonnormal distributions of fitness, and provides unification of the two previously separate regression-based fitness analyses. We demonstrate the method by calculating directional and quadratic selection gradients associated with a smooth regression-based generalized additive model of the relationship between neonatal survival and the phenotypic traits of gestation length and birth mass in humans.
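In the Lande-Arnold framework that these analyses build on, the directional selection gradient is the slope from regressing relative fitness (individual fitness divided by mean fitness) on the phenotypic trait. A toy sketch of that basic computation on fabricated data; the paper's contribution, generalizing this to spline/GAM fitness functions and nonnormal fitness distributions, is not reproduced here:

```python
def selection_gradient(phenotype, fitness):
    """Directional selection gradient: OLS slope of relative fitness
    (fitness / mean fitness) regressed on the phenotypic trait."""
    n = len(phenotype)
    wbar = sum(fitness) / n
    rel_w = [w / wbar for w in fitness]  # mean of rel_w is 1 by construction
    mz = sum(phenotype) / n
    szz = sum((z - mz) ** 2 for z in phenotype)
    szw = sum((z - mz) * (w - 1.0) for z, w in zip(phenotype, rel_w))
    return szw / szz

# Fabricated data: fitness rises linearly with the trait value.
z = [1.0, 2.0, 3.0, 4.0]
fit = [1.0, 2.0, 3.0, 4.0]
beta = selection_gradient(z, fit)
```

Because relative fitness has mean 1, the slope directly measures the expected proportional change in fitness per unit of trait; the unified method in the paper recovers such gradients from an arbitrary fitted fitness function and the observed phenotype distribution.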
Measuring Sojourner Adjustment among American students studying abroad
Pedersen, Eric R.; Neighbors, Clayton; Larimer, Mary E.; Lee, Christine M.
2011-01-01
The literature on “Sojourner Adjustment,” a term expanding on the acculturation concept to apply to groups residing temporarily in foreign environments, suggests that engagement, participation, and temporary integration into the host culture may contribute to less psychological and sociocultural difficulty while abroad. The present study was designed to establish a brief multi-component measure of Sojourner Adjustment (the Sojourner Adjustment Measure; SAM) to be used in work with populations residing temporarily in foreign environments (e.g., international students, foreign aid workers). Using exploratory and confirmatory factor analyses on a sample of 248 American study abroad college students, we established a 24-item measure of Sojourner Adjustment composed of four positive factors (social interaction with host nationals, cultural understanding and participation, language development and use, host culture identification) and two negative factors (social interaction with co-nationals, homesickness/feeling out of place). Preliminary convergent validity was examined through correlations with established measures of acculturation. Further research with the SAM is encouraged to explore the relevance of this measure with other groups of sojourners (e.g., foreign aid workers, international businessmen, military personnel) and to determine how SAM factors relate to psychological well-being, health behaviors, and risk behaviors abroad among these diverse groups. PMID:22125351
Good practice guidelines for the use of statistical regression models in economic evaluations.
Kearns, Ben; Ara, Roberta; Wailoo, Allan; Manca, Andrea; Alava, Monica Hernández; Abrams, Keith; Campbell, Mike
2013-08-01
Decision-analytic models (DAMs) used to evaluate the cost effectiveness of interventions are pivotal sources of evidence used in economic evaluations. Parameter estimates used in the DAMs are often based on the results of a regression analysis, but there is little guidance relating to these. This study had two objectives. The first was to identify the frequency of use of regression models in economic evaluations, the parameters they inform, and the amount of information reported to describe and support the analyses. The second objective was to provide guidance to improve practice in this area, based on the review. The review concentrated on a random sample of economic evaluations submitted to the UK National Institute for Health and Clinical Excellence (NICE) as part of its technology appraisal process. Based on these findings, recommendations for good practice were drafted, together with a checklist for critiquing reporting standards in this area. Based on the results of this review, statistical regression models are in widespread use in DAMs used to support economic evaluations, yet reporting of basic information, such as the sample size used and measures of uncertainty, is limited. Recommendations were formed about how reporting standards could be improved to better meet the needs of decision makers. These recommendations are summarised in a checklist, which may be used by both those conducting regression analyses and those critiquing them, to identify what should be reported when using the results of a regression analysis within a DAM.
Barros, Márcio Vinícius Lins de; Arancibia, Ana Elisa Loyola; Costa, Ana Paula; Bueno, Fernando Brito; Martins, Marcela Aparecida Corrêa; Magalhães, Maria Cláudia; Silva, José Luiz Padilha; Bastos, Marcos de
2016-04-01
Deep venous thrombosis (DVT) management includes prediction rule evaluation to define standard pretest DVT probabilities in symptomatic patients. The aim of this study was to evaluate the incremental usefulness of hormonal therapy added to the Wells prediction rules for DVT in women. We studied women undergoing compression ultrasound scanning for suspected DVT. We adjusted the Wells score for DVT, taking into account the β-coefficients of the logistic regression model. Discrimination was evaluated by the receiver operating characteristic (ROC) curve. The adjusted score's calibration was assessed graphically and by the Hosmer-Lemeshow test. Reclassification tables and the net reclassification index were used to compare the adjusted score with the Wells score for DVT. We observed 461 women, including 103 DVT events. The mean age was 56 years (±21 years). The adjusted logistic regression model included hormonal therapy and six Wells prediction rules for DVT. The adjusted score weights ranged from -4 to 4. The Hosmer-Lemeshow test showed a nonsignificant P value (0.69), and the calibration graph showed no differences between the expected and the observed values. The area under the ROC curve was 0.92 [95% confidence interval (CI) 0.90-0.95] for the adjusted model and 0.87 (95% CI 0.84-0.91) for the Wells score for DVT (DeLong test, P value < 0.01). The net reclassification index for the adjusted score was 0.22 (95% CI 0.11-0.33, P value < 0.01). Our results suggest an incremental usefulness of hormonal therapy as an independent DVT prediction rule in women compared with the Wells score for DVT. The adjusted score should be evaluated in different populations before clinical use.
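Discrimination in this study is summarized by the area under the ROC curve. A compact sketch of computing AUC from risk scores and binary outcomes via the rank (Mann-Whitney) formulation, on invented data (the actual study compares AUCs with the DeLong test, which is not shown here):

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the probability that a randomly
    chosen positive case scores higher than a randomly chosen negative case,
    with ties counted as 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for q in neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Invented integer risk scores; higher score should indicate DVT (label 1).
scores = [3, 1, 4, 2, 5, 0]
labels = [1, 0, 1, 0, 1, 0]
auc = roc_auc(scores, labels)
```

An AUC of 0.5 indicates no discrimination and 1.0 perfect ranking; the 0.92 vs 0.87 comparison in the abstract is a difference on this scale.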
Do insurers respond to risk adjustment? A long-term, nationwide analysis from Switzerland.
von Wyl, Viktor; Beck, Konstantin
2016-03-01
Community rating in social health insurance calls for risk adjustment in order to eliminate incentives for risk selection. Swiss risk adjustment is known to be insufficient, and substantial risk selection incentives remain. This study develops five indicators to monitor residual risk selection. Three indicators target activities of conglomerates of insurers (with the same ownership), which steer enrollees into specific carriers based on applicants' risk profiles. As a proxy for their market power, those indicators estimate the amount of premium, health care cost, and risk-adjustment transfer variability that is attributable to conglomerates. Two additional indicators, derived from linear regression, describe the amount of residual cost differences between insurers that are not covered by risk adjustment. All indicators measuring conglomerate-based risk selection activities increased between 1996 and 2009, paralleling the establishment of new conglomerates. At their maxima in 2009, the indicator values imply that 56% of the net risk adjustment volume, 34% of premium variability, and 51% of cost variability in the market were attributable to conglomerates. From 2010 onwards, all indicators decreased, coinciding with a pre-announced risk adjustment reform implemented in 2012. Likewise, the regression-based indicators suggest that the volume and variance of residual cost differences between insurers that are not offset by risk adjustment have decreased markedly since 2009 as a result of the latest reform. Our analysis demonstrates that risk selection, especially by conglomerates, is a real phenomenon in Switzerland. However, insurers seem to have reduced risk selection activities to optimize their losses and gains under the latest risk adjustment reform.
LDEF Satellite Radiation Analyses
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.
1996-01-01
Model calculations and analyses have been carried out to compare with several sets of data (dose, induced radioactivity in various experiment samples and spacecraft components, fission foil measurements, and LET spectra) from passive radiation dosimetry on the Long Duration Exposure Facility (LDEF) satellite, which was recovered after almost six years in space. The calculations and data comparisons are used to estimate the accuracy of current models and methods for predicting the ionizing radiation environment in low earth orbit. The emphasis is on checking the accuracy of trapped proton flux and anisotropy models.
Ergonomic evaluation of the Apple Adjustable Keyboard
Tittiranonda, P.; Burastero, S.; Shih, M.; Rempel, D.
1994-05-01
This study presents an evaluation of the Apple Adjustable Keyboard based on subjective preference and observed joint angles during typing. Thirty-five keyboard users were asked to use the Apple Adjustable Keyboard for 7-14 days and rate its various characteristics. Our findings suggest that the most preferred opening angles range from 11° to 20°. The mean ulnar deviation on the Apple Adjustable Keyboard was 11°, compared to 16° on the standard keyboard. The mean extension decreased from 24° to 16° when using the adjustable keyboard. When asked to subjectively rate the adjustable keyboard in comparison with the standard, the average subject felt that the Apple Adjustable Keyboard was more comfortable and easier to use than the standard flat keyboard.
Efficient Adjustable Reflectivity Smart Window
D. Morgan Tench
2005-12-01
This project addressed the key technical issues for development of an efficient smart window based on reversible electrochemical transfer of silver between a mirror electrode and a localized counter electrode. Effort to provide uniform switching over large areas focused on use of a resistive transparent electrode innerlayer to increase the interelectrode resistance. An effective edge seal was developed in collaboration with adhesive suppliers and an electrochromic device manufacturer. Work to provide a manufacturable counter electrode focused on fabricating a dot matrix electrode without photolithography by electrodeposition of Pt nuclei on inherent active sites on a transparent oxide conductor. An alternative counter electrode based on a conducting polymer and an ionic liquid electrolyte was also investigated. Work in all of these areas was successful. Sputtered large-bandgap oxide innerlayers sandwiched between conductive indium tin oxide (ITO) layers were shown to provide sufficient cross-layer resistance (>300 ohm/cm²) without significantly affecting the electrochemical properties of the ITO overlayer. Two edge seal epoxies, one procured from an epoxy manufacturer and one provided by an electrochromic device manufacturer in finished seals, were shown to be effective barriers against oxygen intrusion up to 80 C. The optimum density of nuclei for the dot matrix counter electrode was attained without use of photolithography by electrodeposition from a commercial alkaline platinum plating bath. Silver loss issues for cells with dot matrix electrodes were successfully addressed by purifying the electrolyte and adjusting the cell cycling parameters. More than 30K cycles were demonstrated for a REM cell (30-cm square) with a dot matrix counter electrode. Larger cells (30-cm square) were successfully fabricated but could not be cycled since the nucleation layers (provided by an outside supplier) were defective, so that mirror deposits could not be produced.
Three-dimensional adjustment of trilateration data
NASA Technical Reports Server (NTRS)
Sung, L.-Y.; Jackson, D. D.
1985-01-01
The three-dimensional locations of the monuments in the USGS Hollister trilateration network were adjusted to fit line length observations observed in 1977, using a Bayesian approach, and incorporating prior elevation estimates as data in the adjustment procedure. No significant discrepancies in the measured line lengths were found, but significant elevation adjustments (up to 1.85 m) were needed to fit the length data.
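The Bayesian adjustment described above, which incorporates prior elevation estimates as data, is equivalent to weighted least squares with the priors appended as pseudo-observations. A one-parameter toy sketch of that idea (estimating a single elevation from one measurement plus one prior; all numbers are invented, not from the Hollister network):

```python
def bayes_ls_estimate(obs, obs_var, prior, prior_var):
    """Combine a measurement and a prior as two weighted 'observations' of the
    same quantity: the weighted least-squares solution is the
    precision-weighted mean, the scalar case of the Bayesian adjustment."""
    w_obs = 1.0 / obs_var   # precision (inverse variance) of the measurement
    w_pri = 1.0 / prior_var  # precision of the prior estimate
    return (w_obs * obs + w_pri * prior) / (w_obs + w_pri)

# Invented numbers: measured elevation 101.0 m (variance 1.0 m^2),
# prior elevation estimate 100.0 m (variance 4.0 m^2).
est = bayes_ls_estimate(101.0, 1.0, 100.0, 4.0)
```

The estimate falls between the prior and the measurement, pulled toward whichever has the smaller variance; the full network adjustment does the same thing jointly over all monument coordinates and line-length observations.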
Scholes-Balog, Kirsty E; Hemphill, Sheryl A; Evans-Whipp, Tracy J; Toumbourou, John W; Patton, George C
2016-02-01
This study aimed to identify distinct developmental trajectories (sub-groups of individuals who showed similar longitudinal patterns) of cannabis use among Australian adolescents, and to examine associations between trajectory group membership and measures of social and behavioural adjustment in young adulthood. Participants (n=852, 53% female) were part of the International Youth Development Study. Latent class growth analysis was used to identify distinct trajectories of cannabis use frequency from average ages 12 to 19, across 6 waves of data. Logistic regression analyses and analyses of covariance were used to examine relationships between trajectory group membership and young adult (average age: 21) adjustment, controlling for a range of covariates. Three trajectories were identified: abstainers (62%), early onset users (11%), and late onset occasional users (27%). The early onset users showed a higher frequency of antisocial behaviour, violence, cannabis use, cannabis-related harms, cigarette use, and alcohol harms, compared to the abstinent group in young adulthood. The late onset occasional users reported a higher frequency of cannabis use, cannabis-related harms, illicit drug use, and alcohol harms, compared to the abstinent group in young adulthood. There were no differences between the trajectory groups on measures of employment, school completion, post-secondary education, income, depression/anxiety, or alcohol use problems. In conclusion, early onset of cannabis use, even at relatively low frequency during adolescence, is associated with poorer adjustment in young adulthood. Prevention and intervention efforts to delay or prevent uptake of cannabis use should be particularly focussed on early adolescence prior to age 12.
Real, Jordi; Forné, Carles; Roso-Llorach, Albert; Martínez-Sánchez, Jose M
2016-05-01
Controlling for confounders is a crucial step in analytical observational studies, and multivariable models are widely used as statistical adjustment techniques. However, validation of the assumptions of multivariable regression models (MRMs) should be made clear in scientific reporting. The objective of this study was to review the quality of statistical reporting of the most commonly used MRMs (logistic, linear, and Cox regression) applied in analytical observational studies published between 2003 and 2014 in journals indexed in MEDLINE. We reviewed a representative sample of articles indexed in MEDLINE (n = 428) with an observational design and use of MRMs (logistic, linear, and Cox regression). We assessed the quality of reporting of: model assumptions and goodness-of-fit, interactions, sensitivity analysis, crude and adjusted effect estimates, and specification of more than 1 adjusted model. The tests of underlying assumptions or goodness-of-fit of the MRMs used were described in 26.2% (95% CI: 22.0-30.3) of the articles, and 18.5% (95% CI: 14.8-22.1) reported the interaction analysis. Reporting of all items assessed was higher in articles published in journals with a higher impact factor. A low percentage of articles indexed in MEDLINE that used multivariable techniques provided information demonstrating rigorous application of the model selected as an adjustment method. Given the importance of these methods to the final results and conclusions of observational studies, greater rigor is required in reporting the use of MRMs in the scientific literature.
Multi-adjustable headband. [for headsets
NASA Technical Reports Server (NTRS)
Toole, Pierce C. (Inventor); Chalson, Howard E. (Inventor); Bussey, Walter S. (Inventor)
1988-01-01
This invention relates to a headband for a headset having separate coarse and fine adjustment features. The adjustments may be to the axial distance between at least one earpiece element and a side support. Such adjustment to the axial distance varies the pressure exerted on the head of the user. The present fine adjustment feature may be used while the headset is being worn, thereby permitting a user to optimize the amount of pressure between the contending criteria of comfort and keeping the headset in place on the user's head.