Science.gov

Sample records for adjusted regression coefficient

  1. Wrong Signs in Regression Coefficients

    NASA Technical Reports Server (NTRS)

    McGee, Holly

    1999-01-01

    When using parametric cost estimation, it is important to note the possibility of the regression coefficients having the wrong sign. A wrong sign is defined as a sign on the regression coefficient opposite to the researcher's intuition and experience. Some possible causes for the wrong sign discussed in this paper are a small range of x's, leverage points, missing variables, multicollinearity, and computational error. Additionally, techniques for determining the cause of the wrong sign are given.
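
    As a quick illustration of one cause listed above, the following pure-Python sketch (with made-up correlations, not from the paper) uses the closed-form standardized coefficients for two predictors to show multicollinearity producing a wrong sign:

```python
# Hypothetical sketch: for two standardized predictors the coefficients have
# the closed form
#   beta1 = (r_y1 - r_y2 * r_12) / (1 - r_12**2)
# where r_y1, r_y2 are predictor-criterion correlations and r_12 is the
# correlation between the two predictors.

def standardized_betas(r_y1, r_y2, r_12):
    """Standardized coefficients for a two-predictor linear model."""
    d = 1.0 - r_12 ** 2
    return (r_y1 - r_y2 * r_12) / d, (r_y2 - r_y1 * r_12) / d

# x1 correlates +0.50 with y, yet its coefficient comes out negative
# because x1 and x2 are nearly collinear (r_12 = 0.95).
b1, b2 = standardized_betas(0.50, 0.55, 0.95)
print(round(b1, 3), round(b2, 3))  # b1 is negative: a "wrong sign"
```

    The intuition-contradicting sign here is purely an artifact of the near-collinearity of the predictors, matching the multicollinearity cause discussed in the abstract.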

  2. Standards for Standardized Logistic Regression Coefficients

    ERIC Educational Resources Information Center

    Menard, Scott

    2011-01-01

    Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…

  3. Interpretation of Standardized Regression Coefficients in Multiple Regression.

    ERIC Educational Resources Information Center

    Thayer, Jerome D.

    The extent to which standardized regression coefficients (beta values) can be used to determine the importance of a variable in an equation was explored. The beta value and the part correlation coefficient--also called the semi-partial correlation coefficient and reported in squared form as the incremental "r squared"--were compared for variables…

  4. Code System to Calculate Correlation & Regression Coefficients.

    1999-11-23

    Version 00 PCC/SRC is designed for use in conjunction with sensitivity analyses of complex computer models. PCC/SRC calculates the partial correlation coefficients (PCC) and the standardized regression coefficients (SRC) from the multivariate input to, and output from, a computer model.
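
    The two quantities PCC/SRC reports can be sketched in a few lines; the formulas below are standard, but the numeric inputs are illustrative and unrelated to the actual code system:

```python
import math

# Hedged sketch (not the PCC/SRC code system itself) of the two quantities
# the package computes, for a single input-output pair with one other input
# held fixed.

def partial_corr(r_xy, r_xz, r_yz):
    """Partial correlation of x and y controlling for z."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

def src(b, sd_x, sd_y):
    """Standardized regression coefficient from a raw slope b."""
    return b * sd_x / sd_y

print(round(partial_corr(0.60, 0.40, 0.50), 3))  # 0.504
print(round(src(2.0, 1.5, 4.0), 3))              # 0.75
```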

  5. Computing confidence intervals for standardized regression coefficients.

    PubMed

    Jones, Jeff A; Waller, Niels G

    2013-12-01

    With fixed predictors, the standard method (Cohen, Cohen, West, & Aiken, 2003, p. 86; Harris, 2001, p. 80; Hays, 1994, p. 709) for computing confidence intervals (CIs) for standardized regression coefficients fails to account for the sampling variability of the criterion standard deviation. With random predictors, this method also fails to account for the sampling variability of the predictor standard deviations. Nevertheless, under some conditions the standard method will produce CIs with accurate coverage rates. To delineate these conditions, we used a Monte Carlo simulation to compute empirical CI coverage rates in samples drawn from 36 populations with a wide range of data characteristics. We also computed the empirical CI coverage rates for 4 alternative methods that have been discussed in the literature: noncentrality interval estimation, the delta method, the percentile bootstrap, and the bias-corrected and accelerated bootstrap. Our results showed that for many data-parameter configurations--for example, sample size, predictor correlations, coefficient of determination (R²), orientation of β with respect to the eigenvectors of the predictor correlation matrix, R_X--the standard method produced coverage rates that were close to their expected values. However, when population R² was large and when β approached the last eigenvector of R_X, then the standard method coverage rates were frequently below the nominal rate (sometimes by a considerable amount). In these conditions, the delta method and the 2 bootstrap procedures were consistently accurate. Results using noncentrality interval estimation were inconsistent. In light of these findings, we recommend that researchers use the delta method to evaluate the sampling variability of standardized regression coefficients.
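
    The percentile bootstrap evaluated above can be sketched for the simple-regression case, where the standardized coefficient equals the Pearson correlation; the data and seed are illustrative:

```python
import math
import random

# Sketch of a percentile-bootstrap CI for the standardized coefficient in
# simple regression (which equals Pearson's r). Data and seed are made up.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def bootstrap_ci(xs, ys, reps=2000, alpha=0.05, seed=1):
    rng = random.Random(seed)
    n = len(xs)
    stats = []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        xb = [xs[i] for i in idx]
        yb = [ys[i] for i in idx]
        if len(set(xb)) > 1 and len(set(yb)) > 1:  # skip degenerate resamples
            stats.append(pearson_r(xb, yb))
    stats.sort()
    m = len(stats)
    return stats[int(m * alpha / 2)], stats[int(m * (1 - alpha / 2)) - 1]

xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [2.1, 2.9, 4.2, 4.8, 6.1, 6.8, 8.3, 8.7]
print(bootstrap_ci(xs, ys))  # (lower, upper) percentile-bootstrap bounds
```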

  6. Representation of exposures in regression analysis and interpretation of regression coefficients: basic concepts and pitfalls.

    PubMed

    Leffondré, Karen; Jager, Kitty J; Boucquemont, Julie; Stel, Vianda S; Heinze, Georg

    2014-10-01

    Regression models are used to quantify the effect of an exposure on an outcome while adjusting for potential confounders. Although the type of regression model is determined by the nature of the outcome variable (e.g. linear regression must be applied for continuous outcome variables), all regression models can handle any kind of exposure variable. However, some fundamentals of how the exposure is represented in a regression model, and some potential pitfalls, have to be kept in mind in order to obtain a meaningful interpretation of results. The objective of this educational paper was to illustrate these fundamentals and pitfalls, using various multiple regression models applied to data from a hypothetical cohort of 3000 patients with chronic kidney disease. In particular, we illustrate how to represent different types of exposure variables (binary, categorical with two or more categories, and continuous), and how to interpret the regression coefficients in linear, logistic and Cox models. We also discuss the linearity assumption in these models, and show how wrongly assuming linearity may produce biased results and how flexible modelling using spline functions may provide better estimates.
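
    The interpretation rules summarized above can be illustrated with hypothetical coefficient values (not taken from the paper's cohort):

```python
import math

# Illustrative sketch of interpreting coefficients from the three model
# families mentioned above; the coefficient values are made up.

# Linear model: b is the change in the mean outcome per unit of exposure.
b_linear = 1.8            # e.g. outcome units per unit of exposure

# Logistic model: exp(b) is an odds ratio.
b_logistic = 0.40
odds_ratio = math.exp(b_logistic)

# Cox model: exp(b) is a hazard ratio.
b_cox = -0.22
hazard_ratio = math.exp(b_cox)

print(round(odds_ratio, 3), round(hazard_ratio, 3))  # 1.492 0.803
```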

  7. On the Occurrence of Standardized Regression Coefficients Greater than One.

    ERIC Educational Resources Information Center

    Deegan, John, Jr.

    1978-01-01

    It is demonstrated here that standardized regression coefficients greater than one can legitimately occur. Furthermore, the relationship between the occurrence of such coefficients and the extent of multicollinearity present among the set of predictor variables in an equation is examined. Comments on the interpretation of these coefficients are…
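
    A hypothetical but valid example (the implied correlation matrix is positive definite) shows how such a coefficient can arise:

```python
# Sketch reproducing the paper's point with made-up but valid correlations:
# the correlation matrix [[1, .6, .9], [.6, 1, .2], [.9, .2, 1]] is positive
# definite, yet the standardized coefficient of x1 exceeds one.

def standardized_betas(r_y1, r_y2, r_12):
    """Closed-form standardized coefficients for two predictors."""
    d = 1.0 - r_12 ** 2
    return (r_y1 - r_y2 * r_12) / d, (r_y2 - r_y1 * r_12) / d

r_y1, r_y2, r_12 = 0.90, 0.20, 0.60
b1, b2 = standardized_betas(r_y1, r_y2, r_12)
print(round(b1, 3), round(b2, 3))  # b1 > 1 is perfectly legitimate here
```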

  8. An assessment of coefficient accuracy in linear regression models with spatially varying coefficients

    NASA Astrophysics Data System (ADS)

    Wheeler, David C.; Calder, Catherine A.

    2007-06-01

    The realization in the statistical and geographical sciences that a relationship between an explanatory variable and a response variable in a linear regression model is not always constant across a study area has led to the development of regression models that allow for spatially varying coefficients. Two competing models of this type are geographically weighted regression (GWR) and Bayesian regression models with spatially varying coefficient processes (SVCP). In the application of these spatially varying coefficient models, marginal inference on the regression coefficient spatial processes is typically of primary interest. In light of this fact, there is a need to assess the validity of such marginal inferences, since these inferences may be misleading in the presence of explanatory variable collinearity. In this paper, we present the results of a simulation study designed to evaluate the sensitivity of the spatially varying coefficients in the competing models to various levels of collinearity. The simulation study results show that the Bayesian regression model produces more accurate inferences on the regression coefficients than does GWR. In addition, the Bayesian regression model is overall fairly robust in terms of marginal coefficient inference to moderate levels of collinearity, and degrades less substantially than GWR with strong collinearity.

  9. Penalized spline estimation for functional coefficient regression models

    PubMed Central

    Cao, Yanrong; Lin, Haiqun; Wu, Tracy Z.

    2011-01-01

    The functional coefficient regression models assume that the regression coefficients vary with some “threshold” variable, providing appreciable flexibility in capturing the underlying dynamics in data and avoiding the so-called “curse of dimensionality” in multivariate nonparametric estimation. We first investigate the estimation, inference, and forecasting for the functional coefficient regression models with dependent observations via penalized splines. The P-spline approach, as a direct ridge regression shrinkage type global smoothing method, is computationally efficient and stable. With established fixed-knot asymptotics, inference is readily available. Exact inference can be obtained for fixed smoothing parameter λ, which is most appealing for finite samples. Our penalized spline approach gives an explicit model expression, which also enables multi-step-ahead forecasting via simulations. Furthermore, we examine different methods of choosing the important smoothing parameter λ: modified multi-fold cross-validation (MCV), generalized cross-validation (GCV), and an extension of empirical bias bandwidth selection (EBBS) to P-splines. In addition, we implement smoothing parameter selection using a mixed-model framework through restricted maximum likelihood (REML) for P-spline functional coefficient regression models with independent observations. The P-spline approach also easily allows different smoothness for different functional coefficients, which is enabled by assigning different penalty parameters λ accordingly. We demonstrate the proposed approach by both simulation examples and a real data application. PMID:21516260

  10. Multicollinearity and correlation among local regression coefficients in geographically weighted regression

    NASA Astrophysics Data System (ADS)

    Wheeler, David; Tiefelsdorf, Michael

    2005-06-01

    Present methodological research on geographically weighted regression (GWR) focuses primarily on extensions of the basic GWR model, while ignoring well-established diagnostic tests commonly used in standard global regression analysis. This paper investigates multicollinearity issues surrounding the local GWR coefficients at a single location and the overall correlation between GWR coefficients associated with two different exogenous variables. Results indicate that the local regression coefficients are potentially collinear even if the underlying exogenous variables in the data generating process are uncorrelated. Based on these findings, applied GWR research should practice caution in substantively interpreting the spatial patterns of local GWR coefficients. An empirical disease-mapping example is used to motivate the GWR multicollinearity problem. Controlled experiments are performed to systematically explore coefficient dependency issues in GWR. These experiments specify global models that use eigenvectors from a spatial link matrix as exogenous variables.

  11. The Importance of Structure Coefficients in Interpreting Regression Research.

    ERIC Educational Resources Information Center

    Heidgerken, Amanda D.

    The paper stresses the importance of consulting beta weights and structure coefficients in the interpretation of regression results. The effects of multicollinearity and suppressors on the interpretation of beta weights are discussed. It is concluded that interpretations based on beta weights only can lead the unwary researcher to…

  12. Modeling maximum daily temperature using a varying coefficient regression model

    NASA Astrophysics Data System (ADS)

    Li, Han; Deng, Xinwei; Kim, Dong-Yun; Smith, Eric P.

    2014-04-01

    Relationships between stream water and air temperatures are often modeled using linear or nonlinear regression methods. Despite a strong relationship between water and air temperatures and a variety of models that are effective for data summarized on a weekly basis, such models did not yield consistently good predictions for summaries such as daily maximum temperature. A good predictive model for daily maximum temperature is required because daily maximum temperature is an important measure for predicting survival of temperature sensitive fish. To appropriately model the strong relationship between water and air temperatures at a daily time step, it is important to incorporate information related to the time of the year into the modeling. In this work, a time-varying coefficient model is used to study the relationship between air temperature and water temperature. The time-varying coefficient model enables dynamic modeling of the relationship, and can be used to understand how the air-water temperature relationship varies over time. The proposed model is applied to 10 streams in Maryland, West Virginia, Virginia, North Carolina, and Georgia using daily maximum temperatures. It provides a better fit and better predictions than those produced by a simple linear regression model or a nonlinear logistic model.

  13. Prediction of longitudinal dispersion coefficient using multivariate adaptive regression splines

    NASA Astrophysics Data System (ADS)

    Haghiabi, Amir Hamzeh

    2016-07-01

    In this paper, multivariate adaptive regression splines (MARS) was developed as a novel soft-computing technique for predicting the longitudinal dispersion coefficient (D_L) in rivers. An experimental dataset related to D_L, collected from the literature, was used for preparing the MARS model. Results of the MARS model were compared with a multi-layer neural network model and empirical formulas. To define the most effective parameters on D_L, the Gamma test was used. Performance of the MARS model was assessed by calculation of standard error indices. Error indices showed that the MARS model has suitable performance and is more accurate compared to the multi-layer neural network model and empirical formulas. Results of the Gamma test and the MARS model showed that flow depth (H) and the ratio of mean velocity to shear velocity (u/u*) were the most effective parameters on D_L.

  14. Bayesian Variable Selection for Multivariate Spatially-Varying Coefficient Regression

    PubMed Central

    Reich, Brian J.; Fuentes, Montserrat; Herring, Amy H.; Evenson, Kelly R.

    2009-01-01

    Summary Physical activity has many well-documented health benefits for cardiovascular fitness and weight control. For pregnant women, the American College of Obstetricians and Gynecologists currently recommends 30 minutes of moderate exercise on most, if not all, days; however, very few pregnant women achieve this level of activity. Traditionally, studies have focused on examining individual or interpersonal factors to identify predictors of physical activity. There is a renewed interest in whether characteristics of the physical environment in which we live and work may also influence physical activity levels. We consider one of the first studies of pregnant women that examines the impact of characteristics of the built environment on physical activity levels. Using a socioecologic framework, we study the associations between physical activity and several factors including personal characteristics, meteorological/air quality variables, and neighborhood characteristics for pregnant women in four counties of North Carolina. We simultaneously analyze six types of physical activity and investigate cross-dependencies between these activity types. Exploratory analysis suggests that the associations are different in different regions. Therefore we use a multivariate regression model with spatially-varying regression coefficients. This model includes a regression parameter for each covariate at each spatial location. For our data with many predictors, some form of dimension reduction is clearly needed. We introduce a Bayesian variable selection procedure to identify subsets of important variables. Our stochastic search algorithm determines the probabilities that each covariate’s effect is null, non-null but constant across space, and spatially-varying. We found that individual level covariates had a greater influence on women’s activity levels than neighborhood environmental characteristics, and some individual level covariates had spatially-varying associations with

  15. Overcoming multicollinearity in multiple regression using correlation coefficient

    NASA Astrophysics Data System (ADS)

    Zainodin, H. J.; Yap, S. J.

    2013-09-01

    Multicollinearity happens when there are high correlations among the independent variables. In this case, it is difficult to distinguish between the contributions of these independent variables to the dependent variable, as they may compete to explain much of the same variance. Moreover, multicollinearity violates an assumption of multiple regression: that there is no collinearity among the possible independent variables. Thus, an alternative approach is introduced to overcome the multicollinearity problem and eventually achieve a well-represented model. This approach removes the multicollinearity-source variables on the basis of the correlation coefficient values in the full correlation matrix. Using the full correlation matrix facilitates the implementation of Excel functions in removing the multicollinearity-source variables. This procedure is found to be easier and time-saving, especially when dealing with a greater number of independent variables in a model and a large number of possible models. Hence, this paper shows, compares and implements the procedure in detail.
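
    A minimal pure-Python sketch of the removal step, with a hypothetical correlation matrix and a 0.95 cutoff standing in for the paper's Excel-based procedure:

```python
# Hypothetical sketch (not the authors' Excel procedure): remove
# multicollinearity-source variables using the full correlation matrix.
# Variable names, correlations, and the 0.95 cutoff are illustrative.

names = ["x1", "x2", "x3"]
# Full correlation matrix among the independent variables.
R = [[1.00, 0.97, 0.30],
     [0.97, 1.00, 0.25],
     [0.30, 0.25, 1.00]]
# Correlations of each independent variable with the dependent variable y.
r_y = {"x1": 0.60, "x2": 0.55, "x3": 0.40}

def drop_collinear(names, R, r_y, cutoff=0.95):
    keep = list(names)
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            a, b = names[i], names[j]
            if abs(R[i][j]) > cutoff and a in keep and b in keep:
                # Remove the member of the pair less correlated with y.
                keep.remove(a if abs(r_y[a]) < abs(r_y[b]) else b)
    return keep

print(drop_collinear(names, R, r_y))  # x2 is removed: r(x1, x2) = 0.97
```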

  16. Estimation of adjusted rate differences using additive negative binomial regression.

    PubMed

    Donoghoe, Mark W; Marschner, Ian C

    2016-08-15

    Rate differences are an important effect measure in biostatistics and provide an alternative perspective to rate ratios. When the data are event counts observed during an exposure period, adjusted rate differences may be estimated using an identity-link Poisson generalised linear model, also known as additive Poisson regression. A problem with this approach is that the assumption of equality of mean and variance rarely holds in real data, which often show overdispersion. An additive negative binomial model is the natural alternative to account for this; however, standard model-fitting methods are often unable to cope with the constrained parameter space arising from the non-negativity restrictions of the additive model. In this paper, we propose a novel solution to this problem using a variant of the expectation-conditional maximisation-either algorithm. Our method provides a reliable way to fit an additive negative binomial regression model and also permits flexible generalisations using semi-parametric regression functions. We illustrate the method using a placebo-controlled clinical trial of fenofibrate treatment in patients with type II diabetes, where the outcome is the number of laser therapy courses administered to treat diabetic retinopathy. An R package is available that implements the proposed method. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27073156

  17. Parametric expressions for the adjusted Hargreaves coefficient in Eastern Spain

    NASA Astrophysics Data System (ADS)

    Martí, Pau; Zarzo, Manuel; Vanderlinden, Karl; Girona, Joan

    2015-10-01

    The application of simple empirical equations for estimating reference evapotranspiration (ETo) is the only alternative in many cases to robust approaches with high input requirements, especially at the local scale. In particular, temperature-based approaches present a high potential applicability, among others, because temperature might explain a high amount of ETo variability, and also because it can be measured easily and is one of the most available climatic inputs. One of the most well-known temperature-based approaches, the Hargreaves (HG) equation, requires a preliminary local calibration that is usually performed through an adjustment of the HG coefficient (AHC). Nevertheless, these calibrations are site-specific, and cannot be extrapolated to other locations. So, they become useless in many situations, because they are derived from already available benchmarks based on more robust methods, which will be applied in practice. Therefore, the development of accurate equations for estimating AHC at local scale becomes a relevant task. This paper analyses the performance of calibrated and non-calibrated HG equations at 30 stations in Eastern Spain at daily, weekly, fortnightly and monthly scales. Moreover, multiple linear regression was applied for estimating AHC based on different inputs, and the resulting equations yielded higher performance accuracy than the non-calibrated HG estimates. The approach relying on the ratio of mean temperature to temperature range did not provide suitable AHC estimations, and was highly improved by splitting it into two independent predictors. Temperature-based equations were improved by incorporating geographical inputs. Finally, the model relying on temperature and geographic inputs was further improved by incorporating wind speed, even just with simple qualitative information about wind category (e.g. poorly vs. highly windy). The accuracy of the calibrated and non-calibrated HG estimates increased for longer time steps (daily
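
    For reference, the Hargreaves equation being calibrated is commonly written as ETo = AHC · Ra · (Tmean + 17.8) · √(Tmax − Tmin); the sketch below uses illustrative inputs, not data from the paper's stations:

```python
import math

# Sketch of the Hargreaves equation in its common form, where AHC is the
# adjusted Hargreaves coefficient (0.0023 in the original equation) and Ra is
# extraterrestrial radiation expressed as equivalent evaporation (mm/day).
# All input values below are illustrative.

def hargreaves_eto(ahc, ra, t_mean, t_max, t_min):
    """Reference evapotranspiration (mm/day) via the Hargreaves equation."""
    return ahc * ra * (t_mean + 17.8) * math.sqrt(t_max - t_min)

eto = hargreaves_eto(ahc=0.0023, ra=15.0, t_mean=20.0, t_max=28.0, t_min=12.0)
print(round(eto, 3))  # 5.216 mm/day
```

    A local calibration in the spirit of the paper would replace the default 0.0023 with an AHC estimated from station data.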

  18. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    ERIC Educational Resources Information Center

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…

  19. The Use of Structure Coefficients in Regression Research.

    ERIC Educational Resources Information Center

    Perry, Lucille N.

    It is recognized that parametric methods (e.g., t-tests, discriminant analysis, and methods based on analysis of variance) are special cases of canonical correlation analysis. In canonical correlation it has been argued that structure coefficients must be computed to correctly interpret results. It follows that structure coefficients may be useful…

  20. Interpretation of Structure Coefficients Can Prevent Erroneous Conclusions about Regression Results.

    ERIC Educational Resources Information Center

    Whitaker, Jean S.

    The increased use of multiple regression analysis in research warrants closer examination of the coefficients produced in these analyses, especially ones which are often ignored, such as structure coefficients. Structure coefficients are bivariate correlation coefficients between a predictor variable and the synthetic variable. When predictor…
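
    Because the synthetic variable ŷ is a linear combination of the predictors, each structure coefficient reduces to the predictor's zero-order correlation with y divided by the multiple correlation R; a sketch with illustrative values:

```python
# Sketch of the structure coefficient described above: the correlation
# between a predictor and the synthetic variable y-hat, which in multiple
# regression equals r(x_j, y) / R. Numeric values are illustrative.

def structure_coefficient(r_xy, R):
    """Structure coefficient of a predictor: zero-order r divided by R."""
    return r_xy / R

R = 0.80                      # multiple correlation of the whole model
r_x1y, r_x2y = 0.72, 0.10     # zero-order correlations with y
print(round(structure_coefficient(r_x1y, R), 3))  # 0.9
print(round(structure_coefficient(r_x2y, R), 3))  # 0.125
```

    A predictor with a small beta weight can still carry a large structure coefficient (or vice versa), which is why the abstracts above argue for consulting both.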

  1. The Importance of Structure Coefficients in Multiple Regression: A Review with Examples from Published Literature.

    ERIC Educational Resources Information Center

    Burdenski, Thomas K., Jr.

    This paper discusses the importance of interpreting both regression coefficients and structure coefficients when analyzing the results of multiple regression analysis, particularly with correlated predictor variables. The concepts of multicollinearity and suppressor effects are introduced, along with examples from the previously published articles…

  2. Interpreting Regression Results: beta Weights and Structure Coefficients are Both Important.

    ERIC Educational Resources Information Center

    Thompson, Bruce

    Various realizations have led to less frequent use of the "OVA" methods (analysis of variance--ANOVA--among others) and to more frequent use of general linear model approaches such as regression. However, too few researchers understand all the various coefficients produced in regression. This paper explains these coefficients and their practical…

  3. Adjusting Aqua MODIS TEB nonlinear calibration coefficients using iterative solution

    NASA Astrophysics Data System (ADS)

    Wu, Aisheng; Wang, Zhipeng; Li, Yonghong; Madhavan, Sriharsha; Wenny, Brian N.; Chen, Na; Xiong, Xiaoxiong

    2014-11-01

    Radiometric calibration is important for continuity and reliability of any optical sensor data. The Moderate Resolution Imaging Spectroradiometer (MODIS) onboard NASA EOS (Earth Observing System) Aqua satellite has been nominally operating since its launch on May 4, 2002. The MODIS thermal emissive bands (TEB) are calibrated using a quadratic calibration algorithm and the dominant gain term is determined every scan by reference to a temperature-controlled blackbody (BB) with known emissivity. On a quarterly basis, a BB warm-up and cool-down (WUCD) process is scheduled to provide measurements to determine the offset and nonlinear coefficients used in the TEB calibration algorithm. For Aqua MODIS, the offset and nonlinear terms are based on the results from prelaunch thermal vacuum tests. However, on-orbit trending results show that they have small but noticeable drifts. To maintain data quality and consistency, an iterative approach is applied to adjust the prelaunch based nonlinear terms, which are currently used to produce Aqua MODIS Collection-6 L1B. This paper provides details on how to use an iterative solution to determine these calibration coefficients based on BB WUCD measurements. Validation is performed using simultaneous nadir overpasses (SNO) of Aqua MODIS and the Infrared Atmospheric Sounding Interferometer (IASI) onboard the Metop-A satellite and near surface temperature measurements at Dome C on the Antarctic Plateau.

  4. On the causal interpretation of race in regressions adjusting for confounding and mediating variables.

    PubMed

    VanderWeele, Tyler J; Robinson, Whitney R

    2014-07-01

    We consider several possible interpretations of the "effect of race" when regressions are run with race as an exposure variable, controlling also for various confounding and mediating variables. When adjustment is made for socioeconomic status early in a person's life, we discuss under what contexts the regression coefficients for race can be interpreted as corresponding to the extent to which a racial inequality would remain if various socioeconomic distributions early in life across racial groups could be equalized. When adjustment is also made for adult socioeconomic status, we note how the overall racial inequality can be decomposed into the portion that would be eliminated by equalizing adult socioeconomic status across racial groups and the portion of the inequality that would remain even if adult socioeconomic status across racial groups were equalized. We also discuss a stronger interpretation of the effect of race (stronger in terms of assumptions) involving the joint effects of race-associated physical phenotype (eg, skin color), parental physical phenotype, genetic background, and cultural context when such variables are thought to be hypothetically manipulable and if adequate control for confounding were possible. We discuss some of the challenges with such an interpretation. Further discussion is given as to how the use of selected populations in examining racial disparities can additionally complicate the interpretation of the effects.

  5. On causal interpretation of race in regressions adjusting for confounding and mediating variables

    PubMed Central

    VanderWeele, Tyler J.; Robinson, Whitney R.

    2014-01-01

    We consider several possible interpretations of the “effect of race” when regressions are run with race as an exposure variable, controlling also for various confounding and mediating variables. When adjustment is made for socioeconomic status early in a person’s life, we discuss under what contexts the regression coefficients for race can be interpreted as corresponding to the extent to which a racial inequality would remain if various socioeconomic distributions early in life across racial groups could be equalized. When adjustment is also made for adult socioeconomic status, we note how the overall racial inequality can be decomposed into the portion that would be eliminated by equalizing adult socioeconomic status across racial groups and the portion of the inequality that would remain even if adult socioeconomic status across racial groups were equalized. We also discuss a stronger interpretation of the “effect of race” (stronger in terms of assumptions) involving the joint effects of race-associated physical phenotype (e.g. skin color), parental physical phenotype, genetic background and cultural context when such variables are thought to be hypothetically manipulable and if adequate control for confounding were possible. We discuss some of the challenges with such an interpretation. Further discussion is given as to how the use of selected populations in examining racial disparities can additionally complicate the interpretation of the effects. PMID:24887159

  6. Assessing Longitudinal Change: Adjustment for Regression to the Mean Effects

    ERIC Educational Resources Information Center

    Rocconi, Louis M.; Ethington, Corinna A.

    2009-01-01

    Pascarella (J Coll Stud Dev 47:508-520, 2006) has called for an increase in use of longitudinal data with pretest-posttest design when studying effects on college students. However, such designs that use multiple measures to document change are vulnerable to an important threat to internal validity, regression to the mean. Herein, we discuss a…

  7. Return period adjustment for runoff coefficients based on analysis in undeveloped Texas watersheds

    USGS Publications Warehouse

    Dhakal, Nirajan; Fang, Xing; Asquith, William H.; Cleveland, Theodore G.; Thompson, David B.

    2013-01-01

    The rational method for peak discharge (Qp) estimation was introduced in the 1880s. The runoff coefficient (C) is a key parameter for the rational method that has an implicit meaning of rate proportionality, and the C has been declared a function of the annual return period by various researchers. Rate-based runoff coefficients as a function of the return period, C(T), were determined for 36 undeveloped watersheds in Texas using peak discharge frequency from previously published regional regression equations and rainfall intensity frequency for return periods T of 2, 5, 10, 25, 50, and 100 years. The C(T) values and return period adjustments C(T)/C(T = 10 years) determined in this study are most applicable to undeveloped watersheds. The return period adjustments determined for the Texas watersheds in this study and those extracted from prior studies of non-Texas data exceed values from well-known literature such as design manuals and textbooks. Most importantly, the return period adjustments exceed values currently recognized in Texas Department of Transportation design guidance when T > 10 years.
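
    The underlying computation is simple; the sketch below applies a hypothetical return-period adjustment to a 10-year runoff coefficient in the US-customary form of the rational method, where Qp (cfs) ≈ C · i (in/hr) · A (acres):

```python
# Sketch of the rational method with a return-period adjustment applied to
# the 10-year runoff coefficient. All numbers are illustrative, not from the
# study. US customary units: i in in/hr, A in acres, Qp in cfs
# (1 acre-in/hr is approximately 1.008 cfs, commonly taken as 1).

def rational_peak(c10, adjustment, intensity, area_acres):
    c_t = c10 * adjustment   # C(T) = C(T = 10 yr) * return-period adjustment
    return c_t * intensity * area_acres

qp = rational_peak(c10=0.30, adjustment=1.25, intensity=2.0, area_acres=40.0)
print(round(qp, 1))  # 30.0 cfs
```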

  8. Regularized logistic regression with adjusted adaptive elastic net for gene selection in high dimensional cancer classification.

    PubMed

    Algamal, Zakariya Yahya; Lee, Muhammad Hisyam

    2015-12-01

    Cancer classification and gene selection in high-dimensional data have been popular research topics in genetics and molecular biology. Recently, adaptive regularized logistic regression using the elastic net regularization, which is called the adaptive elastic net, has been successfully applied in high-dimensional cancer classification to tackle both estimating the gene coefficients and performing gene selection simultaneously. The adaptive elastic net originally used elastic net estimates as the initial weight; however, using this weight may not be preferable for two reasons: first, the elastic net estimator is biased in selecting genes; second, it does not perform well when the pairwise correlations between variables are not high. Adjusted adaptive regularized logistic regression (AAElastic) is proposed to address these issues and encourage grouping effects simultaneously. The real data results indicate that AAElastic is significantly consistent in selecting genes compared to the other three competitor regularization methods. Additionally, the classification performance of AAElastic is comparable to the adaptive elastic net and better than other regularization methods. Thus, we can conclude that AAElastic is a reliable adaptive regularized logistic regression method in the field of high-dimensional cancer classification.
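
    The base estimator the paper builds on, elastic-net-penalized logistic regression, can be sketched with proximal gradient descent; this is a one-feature toy with made-up data, not the AAElastic method itself:

```python
import math

# Hedged pure-Python sketch of elastic-net-regularized logistic regression
# via proximal gradient descent (ISTA): a gradient step on the smooth part
# (logistic loss + ridge) followed by soft-thresholding for the lasso part.
# One weight plus an unpenalized intercept; data and settings are illustrative.

def fit_enet_logistic(xs, ys, lam=0.1, alpha=0.5, lr=0.1, iters=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(iters):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x / n
            gb += (p - y) / n
        gw += lam * (1 - alpha) * w      # ridge part of the penalty
        w -= lr * gw
        b -= lr * gb
        thresh = lr * lam * alpha        # soft-threshold: lasso part
        w = math.copysign(max(abs(w) - thresh, 0.0), w)
    return w, b

xs = [-2, -1, 0, 1, 2, 3]
ys = [0, 0, 0, 1, 1, 1]
w, b = fit_enet_logistic(xs, ys)
print(w > 0)      # True: the positive slope survives moderate shrinkage
w0, _ = fit_enet_logistic(xs, ys, lam=10.0)
print(w0 == 0.0)  # True: a strong penalty zeroes the coefficient exactly
```

    The exact zeroing under a strong penalty is the gene-selection behavior the abstract refers to: L1-penalized coefficients are set to zero rather than merely shrunk.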

  9. A Tutorial on Calculating and Interpreting Regression Coefficients in Health Behavior Research

    ERIC Educational Resources Information Center

    Stellefson, Michael L.; Hanik, Bruce W.; Chaney, Beth H.; Chaney, J. Don

    2008-01-01

    Regression analyses are frequently employed by health educators who conduct empirical research examining a variety of health behaviors. Within regression, there are a variety of coefficients produced, which are not always easily understood and/or articulated by health education researchers. It is important to not only understand what these…

  10. Procedures for adjusting regional regression models of urban-runoff quality using local data

    USGS Publications Warehouse

    Hoos, Anne B.; Lizarraga, Joy S.

    1996-01-01

    Statistical operations termed model-adjustment procedures can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each procedure is a form of regression analysis in which the local data base is used as a calibration data set; the resulting adjusted regression models can then be used to predict storm-runoff quality at unmonitored sites. Statistical tests of the calibration data set guide selection among proposed procedures.
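    One common model-adjustment procedure regresses the locally observed values against the regional-model predictions and then uses the fitted line to correct predictions at unmonitored sites; a minimal pure-Python sketch with hypothetical calibration values:

```python
# Hypothetical calibration data: regional-model predictions (Pu) and
# locally observed values (O) for the same storms.
predicted = [12.0, 30.0, 55.0, 80.0, 140.0]
observed  = [10.0, 22.0, 48.0, 61.0, 115.0]

n = len(predicted)
mean_p = sum(predicted) / n
mean_o = sum(observed) / n

# Ordinary least squares for O = a + b * Pu
b = sum((p - mean_p) * (o - mean_o) for p, o in zip(predicted, observed)) \
    / sum((p - mean_p) ** 2 for p in predicted)
a = mean_o - b * mean_p

def adjusted_prediction(pu):
    """Adjusted regional prediction for an unmonitored site."""
    return a + b * pu

print(f"O = {a:.2f} + {b:.3f} * Pu")
print(round(adjusted_prediction(100.0), 2))
```

    Here the regional model overpredicts, so the fitted slope below 1 pulls new predictions down toward the locally observed magnitudes.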

  11. Comparing Regression Coefficients between Nested Linear Models for Clustered Data with Generalized Estimating Equations

    ERIC Educational Resources Information Center

    Yan, Jun; Aseltine, Robert H., Jr.; Harel, Ofer

    2013-01-01

    Comparing regression coefficients between models when one model is nested within another is of great practical interest when two explanations of a given phenomenon are specified as linear models. The statistical problem is whether the coefficients associated with a given set of covariates change significantly when other covariates are added into…

  12. Use of Structure Coefficients in Published Multiple Regression Articles: Beta Is Not Enough.

    ERIC Educational Resources Information Center

    Courville, Troy; Thompson, Bruce

    2001-01-01

    Reviewed articles published in the "Journal of Applied Psychology" (JAP) to determine how interpretations might have differed if standardized regression coefficients and structure coefficients (or bivariate "r"s of predictors with the criterion) had been interpreted. Summarizes some dramatic misinterpretations or incomplete interpretations.…

  13. Estimation of diffusion coefficients from voltammetric signals by support vector and gaussian process regression

    PubMed Central

    2014-01-01

    Background Support vector regression (SVR) and Gaussian process regression (GPR) were used for the analysis of electroanalytical experimental data to estimate diffusion coefficients. Results For simulated cyclic voltammograms based on the EC, Eqr, and EqrC mechanisms these regression algorithms in combination with nonlinear kernel/covariance functions yielded diffusion coefficients with higher accuracy as compared to the standard approach of calculating diffusion coefficients relying on the Nicholson-Shain equation. The level of accuracy achieved by SVR and GPR is virtually independent of the rate constants governing the respective reaction steps. Further, the reduction of high-dimensional voltammetric signals by manual selection of typical voltammetric peak features decreased the performance of both regression algorithms compared to a reduction by downsampling or principal component analysis. After training on simulated data sets, diffusion coefficients were estimated by the regression algorithms for experimental data comprising voltammetric signals for three organometallic complexes. Conclusions Estimated diffusion coefficients closely matched the values determined by the parameter fitting method, but reduced the required computational time considerably for one of the reaction mechanisms. The automated processing of voltammograms according to the regression algorithms yields better results than the conventional analysis of peak-related data. PMID:24987463

  14. Estimating regression coefficients from clustered samples: Sampling errors and optimum sample allocation

    NASA Technical Reports Server (NTRS)

    Kalton, G.

    1983-01-01

    A number of surveys were conducted to study the relationship between the level of aircraft or traffic noise exposure experienced by people living in a particular area and their annoyance with it. These surveys generally employ a clustered sample design, which affects the precision of the survey estimates. Regression analysis of annoyance on noise measures and other variables is often an important component of the survey analysis. Formulae are presented for estimating the standard errors of regression coefficients and ratios of regression coefficients that are applicable with two- or three-stage clustered sample designs. Using a simple cost function, the formulae also determine the optimum allocation of the sample across the stages of the sample design for the estimation of a regression coefficient.
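    The precision loss from clustering is commonly summarized by a design effect; a minimal sketch using the standard approximation deff = 1 + (b - 1)·ρ, where b is the cluster size and ρ the intraclass correlation (the survey numbers below are hypothetical):

```python
import math

def design_effect(cluster_size, intraclass_corr):
    """Classic design-effect approximation for a clustered sample."""
    return 1.0 + (cluster_size - 1) * intraclass_corr

# Hypothetical survey: 20 respondents per area, intraclass correlation 0.05
se_srs = 0.04  # std. error of a regression coefficient under simple random sampling
deff = design_effect(20, 0.05)
se_clustered = se_srs * math.sqrt(deff)

print(f"deff = {deff:.2f}, clustered SE = {se_clustered:.4f}")
```

    Even a modest intraclass correlation nearly doubles the variance here, which is why the optimum allocation across sampling stages matters.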

  15. The application of Dynamic Linear Bayesian Models in hydrological forecasting: Varying Coefficient Regression and Discount Weighted Regression

    NASA Astrophysics Data System (ADS)

    Ciupak, Maurycy; Ozga-Zielinski, Bogdan; Adamowski, Jan; Quilty, John; Khalil, Bahaa

    2015-11-01

    A novel implementation of Dynamic Linear Bayesian Models (DLBM), using either a Varying Coefficient Regression (VCR) or a Discount Weighted Regression (DWR) algorithm was used in the hydrological modeling of annual hydrographs as well as 1-, 2-, and 3-day lead time stream flow forecasting. Using hydrological data (daily discharge, rainfall, and mean, maximum and minimum air temperatures) from the Upper Narew River watershed in Poland, the forecasting performance of DLBM was compared to that of traditional multiple linear regression (MLR) and more recent artificial neural network (ANN) based models. Model performance was ranked DLBM-DWR > DLBM-VCR > MLR > ANN for both annual hydrograph modeling and 1-, 2-, and 3-day lead forecasting, indicating that the DWR and VCR algorithms, operating in a DLBM framework, represent promising new methods for both annual hydrograph modeling and short-term stream flow forecasting.

  16. Adjustment of regional regression equations for urban storm-runoff quality using at-site data

    USGS Publications Warehouse

    Barks, C.S.

    1996-01-01

    Regional regression equations have been developed to estimate urban storm-runoff loads and mean concentrations using a national data base. Four statistical methods using at-site data to adjust the regional equation predictions were developed to provide better local estimates. The four adjustment procedures are a single-factor adjustment, a regression of the observed data against the predicted values, a regression of the observed values against the predicted values and additional local independent variables, and a weighted combination of a local regression with the regional prediction. Data collected at five representative storm-runoff sites during 22 storms in Little Rock, Arkansas, were used to verify, and, when appropriate, adjust the regional regression equation predictions. Comparison of observed values of storm-runoff loads and mean concentrations to the predicted values from the regional regression equations for nine constituents (chemical oxygen demand, suspended solids, total nitrogen as N, total ammonia plus organic nitrogen as N, total phosphorus as P, dissolved phosphorus as P, total recoverable copper, total recoverable lead, and total recoverable zinc) showed large prediction errors ranging from 63 percent to more than several thousand percent. Prediction errors for 6 of the 18 regional regression equations were less than 100 percent and could be considered reasonable for water-quality prediction equations. The regression adjustment procedure was used to adjust five of the regional equation predictions to improve the predictive accuracy. For seven of the regional equations the observed and the predicted values are not significantly correlated; thus neither the unadjusted regional equations nor any of the adjustments were appropriate. The mean of the observed values was used as a simple estimator when the regional equation predictions and adjusted predictions were not appropriate.

  17. Using Raw VAR Regression Coefficients to Build Networks can be Misleading.

    PubMed

    Bulteel, Kirsten; Tuerlinckx, Francis; Brose, Annette; Ceulemans, Eva

    2016-01-01

    Many questions in the behavioral sciences focus on the causal interplay of a number of variables across time. To reveal the dynamic relations between the variables, their (auto- or cross-) regressive effects across time may be inspected by fitting a lag-one vector autoregressive, or VAR(1), model and visualizing the resulting regression coefficients as the edges of a weighted directed network. Usually, the raw VAR(1) regression coefficients are drawn, but we argue that this may yield misleading network figures and characteristics because of two problems. First, the raw regression coefficients are sensitive to scale and variance differences among the variables and therefore may lack comparability, which is needed if one wants to calculate, for example, centrality measures. Second, they only represent the unique direct effects of the variables, which may give a distorted picture when variables correlate strongly. To deal with these problems, we propose to use other VAR(1)-based measures as edges. Specifically, to solve the comparability issue, the standardized VAR(1) regression coefficients can be displayed. Furthermore, relative importance metrics can be computed to include direct as well as shared and indirect effects into the network.
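    The standardization proposed above rescales each raw VAR(1) coefficient by the ratio of the predictor's to the outcome's standard deviation before it is drawn as a network edge; a small numpy sketch on simulated data (the coefficient matrix is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a bivariate VAR(1) process: y_t = A y_{t-1} + noise
A_true = np.array([[0.5, 0.2],
                   [0.0, 0.4]])
T = 500
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.standard_normal(2)

# Least-squares VAR(1) fit: regress y_t on y_{t-1}
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T  # rows: outcomes, cols: predictors

# Standardized edge weights: b_std[i, j] = b[i, j] * sd(y_j) / sd(y_i)
sd = y.std(axis=0)
A_std = A_hat * sd[np.newaxis, :] / sd[:, np.newaxis]

print(np.round(A_hat, 2))
print(np.round(A_std, 2))
```

    With equal variances the two matrices coincide; when the variables are on different scales, only the standardized matrix supports cross-edge comparisons such as centrality measures.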

  18. Using Raw VAR Regression Coefficients to Build Networks can be Misleading.

    PubMed

    Bulteel, Kirsten; Tuerlinckx, Francis; Brose, Annette; Ceulemans, Eva

    2016-01-01

    PMID:27028486

  19. Material grain size characterization method based on energy attenuation coefficient spectrum and support vector regression.

    PubMed

    Li, Min; Zhou, Tong; Song, Yanan

    2016-07-01

    A grain size characterization method based on energy attenuation coefficient spectrum and support vector regression (SVR) is proposed. First, the spectra of the first and second back-wall echoes are cut into several frequency bands to calculate the energy attenuation coefficient spectrum. Second, the frequency band that is sensitive to grain size variation is determined. Finally, a statistical model between the energy attenuation coefficient in the sensitive frequency band and average grain size is established through SVR. Experimental verification is conducted on austenitic stainless steel. The average relative error of the predicted grain size is 5.65%, which is better than that of conventional methods.
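    The energy attenuation coefficient per frequency band follows from the ratio of first to second back-wall echo energies over the extra round-trip path; a simplified sketch with hypothetical echo energies, ignoring diffraction and interface-loss corrections:

```python
import math

thickness_mm = 25.0  # hypothetical specimen thickness; extra path = 2 * thickness

# Hypothetical band energies of the first and second back-wall echoes
bands_mhz = [2.0, 4.0, 6.0]
e1 = [1.00, 0.80, 0.55]
e2 = [0.60, 0.35, 0.15]

def attenuation_coefficient(energy1, energy2, path_mm):
    """Energy attenuation coefficient in nepers/mm over the extra path."""
    return math.log(energy1 / energy2) / path_mm

spectrum = [attenuation_coefficient(a, b, 2 * thickness_mm)
            for a, b in zip(e1, e2)]
for f, alpha in zip(bands_mhz, spectrum):
    print(f"{f:.0f} MHz: {alpha:.4f} Np/mm")
```

    The coefficients rising with frequency mimic scattering-dominated attenuation; in the paper, the band most sensitive to grain size would then feed the SVR model.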

  20. Bayes and Empirical Bayes Shrinkage Estimation of Regression Coefficients: A Cross-Validation Study.

    ERIC Educational Resources Information Center

    Nebebe, Fassil; Stroud, T. W. F.

    1988-01-01

    Bayesian and empirical Bayes approaches to shrinkage estimation of regression coefficients and uses of these in prediction (i.e., analyzing intelligence test data of children with learning problems) are investigated. The two methods are consistently better at predicting response variables than are either least squares or least absolute deviations.…

  1. A Note on the Relationship between the Number of Indicators and Their Reliability in Detecting Regression Coefficients in Latent Regression Analysis

    ERIC Educational Resources Information Center

    Dolan, Conor V.; Wicherts, Jelte M.; Molenaar, Peter C. M.

    2004-01-01

    We consider the question of how variation in the number and reliability of indicators affects the power to reject the hypothesis that the regression coefficients are zero in latent linear regression analysis. We show that power remains constant as long as the coefficient of determination remains unchanged. Any increase in the number of indicators…

  2. Using Wherry's Adjusted R Squared and Mallow's C (p) for Model Selection from All Possible Regressions.

    ERIC Educational Resources Information Center

    Olejnik, Stephen; Mills, Jamie; Keselman, Harvey

    2000-01-01

    Evaluated the use of Mallow's C(p) and Wherry's adjusted R squared (R. Wherry, 1931) statistics to select a final model from a pool of model solutions using computer generated data. Neither statistic identified the underlying regression model any better than, and usually less well than, the stepwise selection method, which itself was poor for…

  3. Using the Coefficient of Determination "R"[superscript 2] to Test the Significance of Multiple Linear Regression

    ERIC Educational Resources Information Center

    Quinino, Roberto C.; Reis, Edna A.; Bessegato, Lupercio F.

    2013-01-01

    This article proposes the use of the coefficient of determination as a statistic for hypothesis testing in multiple linear regression based on distributions acquired by beta sampling. (Contains 3 figures.)
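    The link between the coefficient of determination and the overall significance test is direct: the usual F statistic for a multiple regression can be computed from R² alone. A minimal sketch with hypothetical values:

```python
def f_from_r2(r2, n, k):
    """Overall F statistic for a multiple linear regression with
    k predictors and n observations, computed from R^2."""
    return (r2 / k) / ((1.0 - r2) / (n - k - 1))

# Hypothetical fit: R^2 = 0.35 with k = 3 predictors and n = 50 observations
f_stat = f_from_r2(0.35, 50, 3)
print(f"F(3, 46) = {f_stat:.2f}")
```

    The statistic would then be compared against the F(k, n-k-1) reference distribution; the article's contribution is testing via the sampling distribution of R² itself.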

  4. Statistical methods for efficient design of community surveys of response to noise: Random coefficients regression models

    NASA Technical Reports Server (NTRS)

    Tomberlin, T. J.

    1985-01-01

    Research studies of residents' responses to noise consist of interviews with samples of individuals who are drawn from a number of different compact study areas. The statistical techniques developed provide a basis for those sample design decisions. These techniques are suitable for a wide range of sample survey applications. A sample may consist of a random sample of residents selected from a sample of compact study areas, or in a more complex design, of a sample of residents selected from a sample of larger areas (e.g., cities). The techniques may be applied to estimates of the effects on annoyance of noise level, numbers of noise events, the time-of-day of the events, ambient noise levels, or other factors. Methods are provided for determining, in advance, how accurately these effects can be estimated for different sample sizes and study designs. Using a simple cost function, they also provide for optimum allocation of the sample across the stages of the design for estimating these effects. These techniques are developed via a regression model in which the regression coefficients are assumed to be random, with components of variance associated with the various stages of a multi-stage sample design.

  5. Derivation of regression coefficients for sea surface temperature retrieval over East Asia

    NASA Astrophysics Data System (ADS)

    Ahn, Myoung-Hwan; Sohn, Eun-Ha; Hwang, Byong-Jun; Chung, Chu-Yong; Wu, Xiangqian

    2006-05-01

    Among the regression-based algorithms for deriving SST from satellite measurements, regionally optimized algorithms normally perform better than the corresponding global algorithm. In this paper, three algorithms are considered for SST retrieval over the East Asia region (15°-55°N, 105°-170°E), including the multi-channel algorithm (MCSST), the quadratic algorithm (QSST), and the Pathfinder algorithm (PFSST). All algorithms are derived and validated using collocated buoy and Geostationary Meteorological Satellite (GMS-5) observations from 1997 to 2001. An important part of the derivation and validation of the algorithms is the quality control procedure for the buoy SST data and an improved cloud screening method for the satellite brightness temperature measurements. The regionally optimized MCSST algorithm shows an overall improvement over the global algorithm, removing the bias of about -0.13°C and reducing the root-mean-square difference (rmsd) from 1.36°C to 1.26°C. The QSST is only slightly better than the MCSST. For both algorithms, a seasonal dependence of the remaining error statistics is still evident. The Pathfinder approach of deriving a season-specific set of coefficients, one for August to October and one for the rest of the year, provides the smallest rmsd overall and is also stable over time.

  6. Estimating the average treatment effects of nutritional label use using subclassification with regression adjustment.

    PubMed

    Lopez, Michael J; Gutman, Roee

    2014-11-28

    Propensity score methods are common for estimating a binary treatment effect when treatment assignment is not randomized. When exposure is measured on an ordinal scale (i.e. low-medium-high), however, propensity score inference requires extensions which have received limited attention. Estimands of possible interest with an ordinal exposure are the average treatment effects between each pair of exposure levels. Using these estimands, it is possible to determine an optimal exposure level. Traditional methods, including dichotomization of the exposure or a series of binary propensity score comparisons across exposure pairs, are generally inadequate for identification of optimal levels. We combine subclassification with regression adjustment to estimate transitive, unbiased average causal effects across an ordered exposure, and apply our method to the 2005-2006 National Health and Nutrition Examination Survey to estimate the effects of nutritional label use on body mass index.
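    For the binary-treatment special case, subclassification combined with within-stratum regression adjustment can be sketched as follows. This is a toy on simulated data, not the authors' ordinal estimator, and the propensity score is taken as known rather than estimated:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
x = rng.standard_normal(n)                        # confounder
p = 1.0 / (1.0 + np.exp(-x))                      # (here, known) propensity score
z = (rng.random(n) < p).astype(int)               # binary treatment
y = 2.0 * z + 1.5 * x + rng.standard_normal(n)    # outcome; true effect = 2

# Subclassify on the propensity score into quintiles
edges = np.quantile(p, [0.2, 0.4, 0.6, 0.8])
stratum = np.digitize(p, edges)

effects, weights = [], []
for s in range(5):
    m = stratum == s
    # within-stratum regression adjustment: y ~ 1 + z + x
    design = np.column_stack([np.ones(m.sum()), z[m], x[m]])
    beta = np.linalg.lstsq(design, y[m], rcond=None)[0]
    effects.append(beta[1])        # coefficient on treatment
    weights.append(m.sum() / n)

ate = float(np.dot(effects, weights))
print(f"estimated ATE = {ate:.2f}")
```

    Stratum-weighted averaging of the within-stratum adjusted effects recovers the average treatment effect; the paper generalizes this so that effects across ordinal exposure levels remain transitive.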

  7. Exact Analysis of Squared Cross-Validity Coefficient in Predictive Regression Models

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2009-01-01

    In regression analysis, the notion of population validity is of theoretical interest for describing the usefulness of the underlying regression model, whereas the presumably more important concept of population cross-validity represents the predictive effectiveness for the regression equation in future research. It appears that the inference…

  8. Standardized Regression Coefficients as Indices of Effect Sizes in Meta-Analysis

    ERIC Educational Resources Information Center

    Kim, Rae Seon

    2011-01-01

    When conducting a meta-analysis, it is common to find many collected studies that report regression analyses, because multiple regression analysis is widely used in many fields. Meta-analysis uses effect sizes drawn from individual studies as a means of synthesizing a collection of results. However, indices of effect size from regression analyses…

  9. Multiplicative random regression model for heterogeneous variance adjustment in genetic evaluation for milk yield in Simmental.

    PubMed

    Lidauer, M H; Emmerling, R; Mäntysaari, E A

    2008-06-01

    A multiplicative random regression (M-RRM) test-day (TD) model was used to analyse daily milk yields from all available parities of German and Austrian Simmental dairy cattle. The method to account for heterogeneous variance (HV) was based on the multiplicative mixed model approach of Meuwissen. The variance model for the heterogeneity parameters included a fixed region × year × month × parity effect and a random herd × test-month effect with a within-herd first-order autocorrelation between test-months. Acceleration of variance model solutions after each multiplicative model cycle enabled fast convergence of adjustment factors and reduced total computing time significantly. Maximum likelihood estimation of within-strata residual variances was enhanced by inclusion of approximated information on the loss in degrees of freedom due to estimation of location parameters. This improved heterogeneity estimates for very small herds. The multiplicative model was compared with a model that assumed homogeneous variance. Re-estimated genetic variances, based on Mendelian sampling deviations, were homogeneous for the M-RRM TD model but heterogeneous for the homogeneous random regression TD model. Accounting for HV had a large effect on cow ranking but a moderate effect on bull ranking.

  10. On the adjusting of the dynamic coefficients of tilting-pad journal bearings

    SciTech Connect

    Santos, I.F.

    1995-07-01

    This paper gives a theoretical and experimental contribution to the problem of active modification of the dynamic coefficients of tilting-pad journal bearings, aiming to increase the damping and stability of rotating systems. The theoretical studies for the calculation of the bearing coefficients are based on fluid dynamics, specifically the Reynolds equation, on the dynamics of multibody systems, and on some concepts of hydraulics. The experiments are carried out by means of a test rig specially designed for this investigation. The four pads of such a bearing are mounted on four flexible hydraulic chambers which are connected to a proportional valve. The chamber pressures are changed by means of the proportional valve, resulting in a displacement of the pads and a modification of the bearing gap. By changing the gap, one can adjust the dynamic coefficients of the bearing. With the help of an experimental procedure for identifying the bearing coefficients, theoretical and experimental results are compared and discussed. The advantages and limitations of such hydrodynamic bearings in their controllable form are evaluated with regard to application in high-speed machines.

  11. Understanding and Interpreting Regression Parameter Estimates in Given Contexts: A Monte Carlo Study of Characteristics of Regression and Structural Coefficients, Effect Size R Squared and Significance Level of Predictors.

    ERIC Educational Resources Information Center

    Jiang, Ying Hong; Smith, Philip L.

    This Monte Carlo study explored relationships among standardized and unstandardized regression coefficients, structure coefficients, multiple R squared, and significance level of predictors for a variety of linear regression scenarios. Ten regression models with three predictors were included, and four conditions were varied that were expected to…

  12. Multiple Regression Methodology in the "Journal of Education for Students Placed at Risk": Effect Sizes and Structure Coefficients.

    ERIC Educational Resources Information Center

    Herring, Jennifer C.

    This study reviewed the statistical practices in published research articles in the Journal of Education for Students Placed at Risk to determine the reporting of effect sizes and structure coefficients. Of the 12 quantitative studies found in the last 3 volumes of the journal, only 3 were identified as using multiple regression analysis. Two of…

  13. Estimating the Coefficient of Cross-validity in Multiple Regression: A Comparison of Analytical and Empirical Methods.

    ERIC Educational Resources Information Center

    Kromrey, Jeffrey D.; Hines, Constance V.

    1996-01-01

    The accuracy of three analytical formulas for shrinkage estimation and four empirical techniques were investigated in a Monte Carlo study of the coefficient of cross-validity in multiple regression. Substantial statistical bias was evident for all techniques except the formula of M. W. Browne (1975) and multicross-validation. (SLD)

  14. A Comparison between the Use of Beta Weights and Structure Coefficients in Interpreting Regression Results

    ERIC Educational Resources Information Center

    Tong, Fuhui

    2006-01-01

    Background: An extensive body of research has favored the use of regression over other parametric analyses that are based on OVA methods. When regression results are noteworthy, researchers tend to explore the magnitude of beta weights for the respective predictors. Purpose: The purpose of this paper is to examine both beta weights and structure…

  15. Anthropometrical data and coefficients of regression related to gender and race.

    PubMed

    Shan, Gongbing; Bohn, Christiane

    2003-07-01

    As a result of migration and globalization, the demand for anthropometrical data of distinct races and genders has grown while availability has remained minimal. Therefore, several sets of estimation equations, which depend on gender, race, body height (BH), and body mass (BM), were established in this study to fulfill this need. The method consisted of: (a) an inexpensive device to scan the body surface, (b) the electronic reconstruction of the body surface, (c) a module to calculate segmental lengths, segmental masses, radii of gyration, and moments of inertia, using the 16-segment model (Zatsiorsky, 1983) and the density data of Dempster (Space Requirements of the Seated Operator, WADC Technical Report, Wright-Patterson Air Force Base, Ohio, 1955, pp. 55-159), and (d) the establishment of regression equations. One hundred young Chinese and Germans, representing the Asian and Caucasian races, were randomly recruited to participate in this study. The results revealed contrasting trunk and limb lengths and relative skull volume (skull volume/body volume) between the two races, as well as the independence of head mass from body height. The regression equations, derived from the differences described above, supply a prompt way to obtain all anthropometrical parameters for different gender and race groups from individual BM and BH alone. Anthropometrical data are related to gender, race, BH, and BM. To obtain the data, one can either undertake various measurements, which can incur considerable cost in addition to being time-consuming, or employ the convenient and economical shortcut, regression. The results of this study reveal that the accuracy of such estimations is high: prediction errors lie under 0.7 standard deviations, which will satisfy most applications.

  16. Detection of melamine in milk powders using near-infrared hyperspectral imaging combined with regression coefficient of partial least square regression model.

    PubMed

    Lim, Jongguk; Kim, Giyoung; Mo, Changyeun; Kim, Moon S; Chao, Kuanglin; Qin, Jianwei; Fu, Xiaping; Baek, Insuck; Cho, Byoung-Kwan

    2016-05-01

    Illegal use of nitrogen-rich melamine (C3H6N6) to boost perceived protein content of food products such as milk, infant formula, frozen yogurt, pet food, biscuits, and coffee drinks has caused serious food safety problems. Conventional methods to detect melamine in foods, such as enzyme-linked immunosorbent assay (ELISA), high-performance liquid chromatography (HPLC), and gas chromatography-mass spectrometry (GC-MS), are sensitive, but they are time-consuming, expensive, and labor-intensive. In this research, near-infrared (NIR) hyperspectral imaging combined with the regression coefficients of a partial least squares regression (PLSR) model was used to detect melamine particles in milk powders easily and quickly. NIR hyperspectral reflectance imaging data in the spectral range of 990-1700 nm were acquired from melamine-milk powder mixture samples prepared at various concentrations ranging from 0.02% to 1%. PLSR models were developed to correlate the spectral data (independent variables) with melamine concentration (dependent variable) in melamine-milk powder mixture samples. PLSR models applying various pretreatment methods were used to reconstruct the two-dimensional PLS images. PLS images were converted to binary images to detect the suspected melamine pixels in milk powder. As the melamine concentration increased, the number of suspected melamine pixels in the binary images also increased. These results suggest that NIR hyperspectral imaging and the PLSR model can be regarded as an effective tool to detect melamine particles in milk powders. PMID:26946026

  17. An empirical study of statistical properties of variance partition coefficients for multi-level logistic regression models

    USGS Publications Warehouse

    Li, J.; Gray, B.R.; Bates, D.M.

    2008-01-01

    Partitioning the variance of a response by design levels is challenging for binomial and other discrete outcomes. Goldstein (2003) proposed four definitions of variance partition coefficients (VPC) under a two-level logistic regression model. In this study, we explicitly derived formulae for the multi-level logistic regression model and subsequently studied the distributional properties of the calculated VPCs. Using simulations and a vegetation dataset, we demonstrated associations between different VPC definitions, the importance of methods for estimating VPCs (by comparing VPCs obtained using Laplace and penalized quasi-likelihood methods), and bivariate dependence between VPCs calculated at different levels. Such an empirical study lends immediate support to wider applications of VPC in scientific data analysis.
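    The most widely used of Goldstein's VPC definitions for a two-level logistic model is the latent-variable formulation, which fixes the level-1 residual variance at π²/3; a minimal sketch with hypothetical cluster-level variances:

```python
import math

def vpc_latent(sigma2_u):
    """Latent-variable VPC for a two-level logistic regression:
    share of total latent variance attributable to the cluster level."""
    return sigma2_u / (sigma2_u + math.pi ** 2 / 3)

# Hypothetical cluster-level (random intercept) variances
for s2 in (0.25, 1.0, 3.0):
    print(f"sigma_u^2 = {s2}: VPC = {vpc_latent(s2):.3f}")
```

    Because the level-1 variance is fixed by the logistic link, the VPC here depends only on the estimated random-intercept variance, which is why the estimation method (Laplace vs. penalized quasi-likelihood) matters.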

  18. Verification and adjustment of regional regression models for urban storm-runoff quality using data collected in Little Rock, Arkansas

    USGS Publications Warehouse

    Barks, C.S.

    1995-01-01

    Storm-runoff water-quality data were used to verify and, when appropriate, adjust regional regression models previously developed to estimate urban storm-runoff loads and mean concentrations in Little Rock, Arkansas. Data collected at 5 representative sites during 22 storms from June 1992 through January 1994 compose the Little Rock data base. Comparison of observed values (O) of storm-runoff loads and mean concentrations to the predicted values (Pu) from the regional regression models for nine constituents (chemical oxygen demand, suspended solids, total nitrogen, total ammonia plus organic nitrogen as nitrogen, total phosphorus, dissolved phosphorus, total recoverable copper, total recoverable lead, and total recoverable zinc) shows large prediction errors ranging from 63 to several thousand percent. Prediction errors for six of the regional regression models are less than 100 percent and can be considered reasonable for water-quality models. Differences between O and Pu are due to variability in the Little Rock data base and error in the regional models. Where applicable, a model adjustment procedure (termed MAP-R-P) based upon regression of O against Pu was applied to improve predictive accuracy. For 11 of the 18 regional water-quality models, O and Pu are significantly correlated; that is, much of the variation in O is explained by the regional models. Five of these 11 regional models consistently overestimate O; therefore, MAP-R-P can be used to provide a better estimate. For the remaining seven regional models, O and Pu are not significantly correlated, thus neither the unadjusted regional models nor the MAP-R-P is appropriate. A simple estimator, such as the mean of the observed values, may be used if the regression models are not appropriate. Standard error of estimate of the adjusted models ranges from 48 to 130 percent. Calibration results may be biased due to the limited data set sizes in the Little Rock data base. The relatively large values of

  19. Adjusting for unmeasured confounding due to either of two crossed factors with a logistic regression model.

    PubMed

    Li, Li; Brumback, Babette A; Weppelmann, Thomas A; Morris, J Glenn; Ali, Afsar

    2016-08-15

    Motivated by an investigation of the effect of surface water temperature on the presence of Vibrio cholerae in water samples collected from different fixed surface water monitoring sites in Haiti in different months, we investigated methods to adjust for unmeasured confounding due to either of the two crossed factors site and month. In the process, we extended previous methods that adjust for unmeasured confounding due to one nesting factor (such as site, which nests the water samples from different months) to the case of two crossed factors. First, we developed a conditional pseudolikelihood estimator that eliminates fixed effects for the levels of each of the crossed factors from the estimating equation. Using the theory of U-Statistics for independent but non-identically distributed vectors, we show that our estimator is consistent and asymptotically normal, but that its variance depends on the nuisance parameters and thus cannot be easily estimated. Consequently, we apply our estimator in conjunction with a permutation test, and we investigate use of the pigeonhole bootstrap and the jackknife for constructing confidence intervals. We also incorporate our estimator into a diagnostic test for a logistic mixed model with crossed random effects and no unmeasured confounding. For comparison, we investigate between-within models extended to two crossed factors. These generalized linear mixed models include covariate means for each level of each factor in order to adjust for the unmeasured confounding. We conduct simulation studies, and we apply the methods to the Haitian data. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26892025

  20. Adjustment of regional regression models of urban-runoff quality using data for Chattanooga, Knoxville, and Nashville, Tennessee

    USGS Publications Warehouse

    Hoos, Anne B.; Patel, Anant R.

    1996-01-01

    Model-adjustment procedures were applied to the combined data bases of storm-runoff quality for Chattanooga, Knoxville, and Nashville, Tennessee, to improve predictive accuracy for storm-runoff quality for urban watersheds in these three cities and throughout Middle and East Tennessee. Data for 45 storms at 15 different sites (five sites in each city) constitute the data base. Comparison of observed values of storm-runoff load and event-mean concentration to the predicted values from the regional regression models for 10 constituents shows prediction errors as large as 806,000 percent. Model-adjustment procedures, which combine the regional model predictions with local data, are applied to improve predictive accuracy. Standard error of estimate after model adjustment ranges from 67 to 322 percent. Calibration results may be biased due to sampling error in the Tennessee data base. The relatively large values of standard error of estimate for some of the constituent models, although representing significant reduction (at least 50 percent) in prediction error compared to estimation with unadjusted regional models, may be unacceptable for some applications. The user may wish to collect additional local data for these constituents and repeat the analysis, or calibrate an independent local regression model.

  1. Simulation study comparing exposure matching with regression adjustment in an observational safety setting with group sequential monitoring.

    PubMed

    Stratton, Kelly G; Cook, Andrea J; Jackson, Lisa A; Nelson, Jennifer C

    2015-03-30

    Sequential methods are well established for randomized clinical trials (RCTs), and their use in observational settings has increased with the development of national vaccine and drug safety surveillance systems that monitor large healthcare databases. Observational safety monitoring requires that sequential testing methods be better equipped to incorporate confounder adjustment and accommodate rare adverse events. New methods designed specifically for observational surveillance include a group sequential likelihood ratio test that uses exposure matching and a generalized estimating equations approach that involves regression adjustment. However, little is known about the statistical performance of these methods or how they compare to RCT methods in both observational and rare outcome settings. We conducted a simulation study to determine the type I error, power, and time-to-surveillance-end of the group sequential likelihood ratio test, generalized estimating equations, and RCT methods that construct group sequential Lan-DeMets boundaries using data from a matched (group sequential Lan-DeMets-matching) or unmatched regression (group sequential Lan-DeMets-regression) setting. We also compared the methods using data from a multisite vaccine safety study. All methods had acceptable type I error, but regression methods were more powerful, faster at detecting true safety signals, and less prone to implementation difficulties with rare events than exposure matching methods. Method performance also depended on the distribution of information and extent of confounding by site. Our results suggest that choice of sequential method, especially the confounder control strategy, is critical in rare event observational settings. These findings provide guidance for choosing methods in this context and, in particular, suggest caution when conducting exposure matching.

  2. Methods for Adjusting U.S. Geological Survey Rural Regression Peak Discharges in an Urban Setting

    USGS Publications Warehouse

    Moglen, Glenn E.; Shivers, Dorianne E.

    2006-01-01

    A study was conducted of 78 U.S. Geological Survey gaged streams that have been subjected to varying degrees of urbanization over the last three decades. Flood-frequency analysis coupled with nonlinear regression techniques was used to generate a set of equations for converting peak discharge estimates determined from rural regression equations to a set of peak discharge estimates that represent known urbanization. Specifically, urban regression equations for the 2-, 5-, 10-, 25-, 50-, 100-, and 500-year return periods were calibrated as a function of the corresponding rural peak discharge and the percentage of impervious area in a watershed. The results of this study indicate that two sets of equations, one set based on imperviousness and one set based on population density, performed well. Both sets of equations are dependent on rural peak discharges, a measure of development (average percentage of imperviousness or average population density), and a measure of homogeneity of development within a watershed. Average imperviousness was readily determined by using geographic information system methods and commonly available land-cover data. Similarly, average population density was easily determined from census data. Thus, a key advantage to the equations developed in this study is that they do not require field measurements of watershed characteristics as did the U.S. Geological Survey urban equations developed in an earlier investigation. During this study, the U.S. Geological Survey PeakFQ program was used as an integral tool in the calibration of all equations. The scarcity of historical land-use data, however, made exclusive use of flow records necessary for the 30-year period from 1970 to 2000. Such relatively short-duration streamflow time series required a nonstandard treatment of the historical data function of the PeakFQ program in comparison to published guidelines. Thus, the approach used during this investigation does not fully comply with the
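    The general shape of such equations, urban peak discharge as a function of the rural peak and a measure of development, can be illustrated with a hypothetical power-law form. The coefficients below are placeholders for illustration, not the calibrated values from the report:

```python
def urban_peak(q_rural: float, imperviousness_pct: float,
               c: float = 1.0, d: float = 0.95, e: float = 1.5) -> float:
    """Illustrative power-law form Qu = c * Qr^d * (1 + IA/100)^e.

    Mirrors the idea of the study (urban peak as a function of the rural
    peak and average imperviousness IA); c, d, e are PLACEHOLDER values,
    not the study's calibrated coefficients.
    """
    return c * q_rural ** d * (1.0 + imperviousness_pct / 100.0) ** e

# More imperviousness -> larger urban peak for the same rural estimate.
print(urban_peak(1000.0, 25.0))
```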

  3. A self-adjusting flow dependent formulation for the classical Smagorinsky model coefficient

    NASA Astrophysics Data System (ADS)

    Ghorbaniasl, G.; Agnihotri, V.; Lacor, C.

    2013-05-01

    In this paper, we propose an efficient formula for estimating the model coefficient of a Smagorinsky model based subgrid scale eddy viscosity. The method allows vanishing eddy viscosity through a vanishing model coefficient in regions where the eddy viscosity should be zero. The advantage of this method is that the coefficient of the subgrid scale model is a function of the flow solution, including the translational and the rotational velocity field contributions. Furthermore, the value of model coefficient is optimized without using the dynamic procedure thereby saving significantly on computational cost. In addition, the method guarantees the model coefficient to be always positive with low fluctuation in space and time. For validation purposes, three test cases are chosen: (i) a fully developed channel flow at Reτ = 180 and 395, (ii) a fully developed flow through a rectangular duct of square cross section at Reτ = 300, and (iii) a smooth subcritical flow past a stationary circular cylinder, at a Reynolds number of Re = 3900, where the wake is fully turbulent but the cylinder boundary layers remain laminar. A main outcome is the good behavior of the proposed model as compared to reference data. We have also applied the proposed method to a CT-based simplified human upper airway model, where the flow is transient.
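    For context, the classical Smagorinsky closure this paper builds on computes the eddy viscosity as nu_t = (Cs * Delta)^2 * |S|, with |S| = sqrt(2 S_ij S_ij). A minimal sketch for a 1-D shear profile, using a fixed literature-style coefficient rather than the paper's self-adjusting one:

```python
import numpy as np

# Sketch of the classical Smagorinsky eddy viscosity for a 1-D shear
# layer u(y). Cs = 0.17 is a common literature value, NOT the
# flow-dependent coefficient proposed in the paper.

def smagorinsky_nu_t(u: np.ndarray, dy: float, Cs: float = 0.17) -> np.ndarray:
    dudy = np.gradient(u, dy)              # resolved velocity gradient
    S12 = 0.5 * dudy                       # only nonzero strain-rate component
    S_mag = np.sqrt(2.0 * (2.0 * S12**2))  # sqrt(2 S_ij S_ij); S12 = S21
    return (Cs * dy) ** 2 * S_mag          # grid spacing used as filter width

y = np.linspace(0.0, 1.0, 101)
u = np.tanh((y - 0.5) / 0.1)               # smooth shear profile
nu_t = smagorinsky_nu_t(u, y[1] - y[0])
```

Note that nu_t is largest where the shear is strongest (the center of the layer) and never negative, the property the paper's coefficient formulation is designed to preserve.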

  4. Adjustment of minimum seismic shear coefficient considering site effects for long-period structures

    NASA Astrophysics Data System (ADS)

    Guan, Minsheng; Du, Hongbiao; Cui, Jie; Zeng, Qingli; Jiang, Haibo

    2016-06-01

    Minimum seismic base shear is a key factor employed in the seismic design of long-period structures and is specified in several major national seismic building codes, viz. ASCE 7-10, NZS 1170.5, and GB 50011-2010. The current Chinese seismic design code GB 50011-2010, however, does not consider the effect of soil type on the minimum seismic shear coefficient, which makes it difficult for long-period structures sited on stiff soil or rock to meet the minimum base shear requirement. This paper aims to modify the current minimum seismic shear coefficient by taking site effects into account. For this purpose, effective peak acceleration (EPA) is used to represent the ordinate of the design response spectrum at the plateau. A large set of earthquake records, for which EPAs are calculated, is examined through statistical analysis considering soil conditions as well as seismic fortification intensities. The study indicates that soil type has a significant effect on the spectral ordinates at the plateau as well as on the minimum seismic shear coefficient. Modification factors for the current minimum seismic shear coefficient are preliminarily suggested for each site class. It is shown that the modified seismic shear coefficients are more suitable for determining the minimum seismic base shear of long-period structures.
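    One common definition of EPA averages the 5%-damped spectral acceleration over the plateau band 0.1-0.5 s and divides by 2.5; whether the paper uses exactly this window is an assumption here. A sketch with a toy spectrum:

```python
import numpy as np

# Hedged sketch: EPA as the mean 5%-damped spectral acceleration over
# the 0.1-0.5 s period band, divided by 2.5. The band and divisor follow
# a common code definition and are an assumption, not taken from the paper.

def effective_peak_acceleration(periods: np.ndarray, Sa: np.ndarray) -> float:
    band = (periods >= 0.1) & (periods <= 0.5)
    return float(np.mean(Sa[band]) / 2.5)

periods = np.linspace(0.02, 4.0, 200)
Sa = np.where((periods >= 0.1) & (periods <= 0.5), 1.0, 0.4)  # toy plateau = 1.0 g
print(effective_peak_acceleration(periods, Sa))  # → 0.4 (plateau value / 2.5)
```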

  5. Electrocrystallization of uranium oxides from molten salt media with adjustable oxygen coefficient value

    SciTech Connect

    Smolenski, V.V.; Bovet, A.L.; Komarov, V.E.

    1993-12-31

    In order to determine the conditions necessary to produce uranium oxides with the preset oxygen coefficient value by electrolyzing salt electrolytes, we have investigated the chemical and electrochemical behavior of oxychloride uranium compounds of various valences in molten alkali chlorides. The electrochemical production conditions for uranium dioxide of the pre-stoichiometric composition were determined.

  6. Adjustments to de Leva-anthropometric regression data for the changes in body proportions in elderly humans.

    PubMed

    Ho Hoang, Khai-Long; Mombaur, Katja

    2015-10-15

    Dynamic modeling of the human body is an important tool to investigate the fundamentals of the biomechanics of human movement. To model the human body in terms of a multi-body system, it is necessary to know the anthropometric parameters of the body segments. For young healthy subjects, several data sets exist that are widely used in the research community, e.g. the tables provided by de Leva. No such comprehensive anthropometric parameter set exists for elderly people. It is, however, well known that body proportions change significantly during aging, e.g. due to degenerative effects in the spine, such that parameters for young people cannot be used for realistically simulating the dynamics of elderly people. In this study, regression equations are derived from the inertial parameters, center of mass positions, and body segment lengths provided by de Leva to be adjustable to the changes in proportion of the body parts of male and female humans due to aging. Additional adjustments are made to the reference points of the parameters for the upper body segments as they are chosen in a more practicable way in the context of creating a multi-body model in a chain structure with the pelvis representing the most proximal segment.
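    The way such anthropometric tables are typically applied, segment mass and center-of-mass location taken as fixed fractions of body mass and segment length, can be sketched as follows. The ratios below are PLACEHOLDERS for illustration, not de Leva's published values or the paper's age-adjusted ones:

```python
# Placeholder ratio table: (mass fraction of body mass,
# COM position as fraction of segment length from the proximal end).
SEGMENT_RATIOS = {
    "trunk": (0.43, 0.45),
    "thigh": (0.14, 0.41),
    "shank": (0.04, 0.44),
}

def segment_parameters(body_mass_kg: float, segment_length_m: float,
                       segment: str) -> tuple[float, float]:
    """Return (segment mass in kg, COM distance from proximal end in m)."""
    m_frac, com_frac = SEGMENT_RATIOS[segment]
    return m_frac * body_mass_kg, com_frac * segment_length_m

mass, com = segment_parameters(75.0, 0.45, "thigh")  # mass = 10.5 kg, com = 0.1845 m
```

An age-adjusted table of the kind derived in the paper would replace these constant fractions with regression functions of age and sex.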

  8. The Use of Alternative Regression Methods in Social Sciences and the Comparison of Least Squares and M Estimation Methods in Terms of the Determination of Coefficient

    ERIC Educational Resources Information Center

    Coskuntuncel, Orkun

    2013-01-01

    The purpose of this study is two-fold; the first aim being to show the effect of outliers on the widely used least squares regression estimator in social sciences. The second aim is to compare the classical method of least squares with the robust M-estimator using the "determination of coefficient" (R²). For this purpose,…
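    The contrast the study draws can be reproduced in miniature: fit the same data with ordinary least squares and with a Huber M-estimator, and observe how a single outlier distorts the former. The IRLS loop below is a textbook implementation, not necessarily the article's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 30)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.3, x.size)
y[-1] += 40.0                       # one gross outlier at the end
X = np.column_stack([np.ones_like(x), x])

beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)  # pulled toward the outlier

def huber_fit(X, y, k=1.345, n_iter=50):
    """Huber M-estimation via iteratively reweighted least squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12          # robust scale (MAD)
        w = np.minimum(1.0, k * s / np.maximum(np.abs(r), 1e-12))
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
    return beta

beta_huber = huber_fit(X, y)       # stays close to the true slope of 2
```

The robust fit downweights the outlier almost to zero, while the OLS slope is biased upward by roughly 0.7 on this example.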

  9. The Normal-Theory and Asymptotic Distribution-Free (ADF) Covariance Matrix of Standardized Regression Coefficients: Theoretical Extensions and Finite Sample Behavior.

    PubMed

    Jones, Jeff A; Waller, Niels G

    2015-06-01

    Yuan and Chan (Psychometrika, 76, 670-690, 2011) recently showed how to compute the covariance matrix of standardized regression coefficients from covariances. In this paper, we describe a method for computing this covariance matrix from correlations. Next, we describe an asymptotic distribution-free (ADF; Browne in British Journal of Mathematical and Statistical Psychology, 37, 62-83, 1984) method for computing the covariance matrix of standardized regression coefficients. We show that the ADF method works well with nonnormal data in moderate-to-large samples using both simulated and real-data examples. R code (R Development Core Team, 2012) is available from the authors or through the Psychometrika online repository for supplementary materials. PMID:24362970
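    The point estimates underlying this covariance matrix, standardized coefficients computed directly from a correlation matrix as beta = Rxx^{-1} rxy, can be sketched with invented correlations:

```python
import numpy as np

# Invented correlations for illustration; the paper's contribution is the
# covariance matrix (standard errors) of these estimates, not shown here.
Rxx = np.array([[1.0, 0.3],
                [0.3, 1.0]])        # predictor intercorrelations
rxy = np.array([0.5, 0.4])          # predictor-criterion correlations

beta = np.linalg.solve(Rxx, rxy)    # standardized regression coefficients
r_squared = rxy @ beta              # model R^2 in the standardized metric
# beta ≈ [0.418, 0.275]
```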

  11. R Squared Shrinkage in Multiple Regression Research: An Empirical Evaluation of Use and Impact of Adjusted Effect Formulae.

    ERIC Educational Resources Information Center

    Thatcher, Greg W.; Henson, Robin K.

    This study examined research in training and development to determine effect size reporting practices. It focused on the reporting of corrected effect sizes in research articles using multiple regression analyses. When possible, the researchers calculated corrected effect sizes and determined whether the associated shrinkage could have impacted researcher…
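    The corrected effect size at issue is typically the Ezekiel/Wherry adjusted R², which can be computed directly:

```python
# Adjusted R^2 (Ezekiel/Wherry formula): shrinks the sample R^2 to
# correct for capitalization on chance with n observations, k predictors.

def adjusted_r_squared(r2: float, n: int, k: int) -> float:
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

print(adjusted_r_squared(0.50, 30, 5))  # → ≈ 0.396, noticeable shrinkage
```

With small samples and many predictors, the gap between R² and adjusted R² is exactly the shrinkage whose reporting this study audits.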

  12. Small-Sample Adjustments for Tests of Moderators and Model Fit in Robust Variance Estimation in Meta-Regression

    ERIC Educational Resources Information Center

    Tipton, Elizabeth; Pustejovsky, James E.

    2015-01-01

    Randomized experiments are commonly used to evaluate the effectiveness of educational interventions. The goal of the present investigation is to develop small-sample corrections for multiple contrast hypothesis tests (i.e., F-tests) such as the omnibus test of meta-regression fit or a test for equality of three or more levels of a categorical…

  13. Comparison of Regression Methods to Compute Atmospheric Pressure and Earth Tidal Coefficients in Water Level Associated with Wenchuan Earthquake of 12 May 2008

    NASA Astrophysics Data System (ADS)

    He, Anhua; Singh, Ramesh P.; Sun, Zhaohua; Ye, Qing; Zhao, Gang

    2016-07-01

    Earth tides, atmospheric pressure, precipitation, and earthquakes all influence water well levels; earthquakes in particular can produce anomalous co-seismic changes in groundwater levels. In this paper, we have used four different models, simple linear regression (SLR), multiple linear regression (MLR), principal component analysis (PCA), and partial least squares (PLS), to compute the atmospheric pressure and earth tidal effects on water level. Furthermore, we have used the Akaike information criterion (AIC) to study the performance of the various models. Based on the lowest AIC and sum-of-squares error values, the best estimate of the effects of atmospheric pressure and earth tide on water level is obtained with the MLR model. However, the MLR model does not address multicollinearity among the inputs; as a result, the atmospheric pressure and earth tidal response coefficients fail to reflect the mechanisms associated with the groundwater level fluctuations. Among the models that resolve the serious multicollinearity of the inputs, the PLS model shows the minimum AIC value, and its atmospheric pressure and earth tidal response coefficients agree closely with the observations. The atmospheric pressure and earth tidal response coefficients are found to be sensitive to the stress-strain state, based on the observed data for the period 1 April-8 June 2008 of the Chuan 03# well. The transient enhancement of porosity of the rock mass around the Chuan 03# well associated with the Wenchuan earthquake (Mw = 7.9, 12 May 2008), which returned to its original pre-seismic level after 13 days, indicates that the co-seismic sharp rise of the water well level could be induced by static stress change rather than by the development of new fractures.
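    The AIC-based comparison of regression models can be sketched for the least-squares case, AIC = n·ln(RSS/n) + 2k, with synthetic pressure/tide data standing in for the observations:

```python
import numpy as np

def aic_ls(y: np.ndarray, y_hat: np.ndarray, k: int) -> float:
    """AIC for a least-squares fit: n*ln(RSS/n) + 2k (k fitted parameters)."""
    n = len(y)
    rss = float(np.sum((y - y_hat) ** 2))
    return n * np.log(rss / n) + 2 * k

# Synthetic stand-ins for the well observations (invented coefficients).
rng = np.random.default_rng(1)
n = 100
pressure = rng.normal(size=n)
tide = rng.normal(size=n)
level = 0.8 * pressure + 0.5 * tide + rng.normal(0.0, 0.2, n)

X1 = np.column_stack([np.ones(n), pressure])        # SLR: pressure only
X2 = np.column_stack([np.ones(n), pressure, tide])  # MLR: pressure + tide
aics = []
for X, k in ((X1, 2), (X2, 3)):
    beta, *_ = np.linalg.lstsq(X, level, rcond=None)
    aics.append(aic_ls(level, X @ beta, k))
# The richer model wins here because the tidal term truly matters.
```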

  14. Improved power control using optimal adjustable coefficients for three-phase photovoltaic inverter under unbalanced grid voltage.

    PubMed

    Wang, Qianggang; Zhou, Niancheng; Lou, Xiaoxuan; Chen, Xu

    2014-01-01

    Unbalanced grid faults will lead to several drawbacks in the output power quality of photovoltaic generation (PV) converters, such as power fluctuation, current amplitude swell, and a large quantity of harmonics. The aim of this paper is to propose a flexible AC current generation method by selecting coefficients to overcome these problems in an optimal way. Three coefficients are introduced to tune the output current reference within the required limits of power quality (the current harmonic distortion, the AC current peak, the power fluctuation, and the DC voltage fluctuation). Through the optimization algorithm, the coefficients can be determined so as to generate the minimum integrated amplitudes of the active and reactive power references under the constraints of the inverter current and DC voltage fluctuation. A dead-beat controller is used to track the optimal current reference over a short period. The method has been verified in PSCAD/EMTDC software. PMID:25243215

  17. Data for and adjusted regional regression models of volume and quality of urban storm-water runoff in Boise and Garden City, Idaho, 1993-94

    USGS Publications Warehouse

    Kjelstrom, L.C.

    1995-01-01

    Previously developed U.S. Geological Survey regional regression models of runoff and 11 chemical constituents were evaluated to assess their suitability for use in urban areas in Boise and Garden City. Data collected in the study area were used to develop adjusted regional models of storm-runoff volumes and mean concentrations and loads of chemical oxygen demand, dissolved and suspended solids, total nitrogen and total ammonia plus organic nitrogen as nitrogen, total and dissolved phosphorus, and total recoverable cadmium, copper, lead, and zinc. Explanatory variables used in these models were drainage area, impervious area, land-use information, and precipitation data. Mean annual runoff volume and loads at the five outfalls were estimated from 904 individual storms during 1976 through 1993. Two methods were used to compute individual storm loads. The first method used adjusted regional models of storm loads and the second used adjusted regional models for mean concentration and runoff volume. For large storms, the first method seemed to produce excessively high loads for some constituents and the second method provided more reliable results for all constituents except suspended solids. The first method provided more reliable results for large storms for suspended solids.

  18. Regression equations for estimation of annual peak-streamflow frequency for undeveloped watersheds in Texas using an L-moment-based, PRESS-minimized, residual-adjusted approach

    USGS Publications Warehouse

    Asquith, William H.; Roussel, Meghan C.

    2009-01-01

    Annual peak-streamflow frequency estimates are needed for flood-plain management; for objective assessment of flood risk; for cost-effective design of dams, levees, and other flood-control structures; and for design of roads, bridges, and culverts. Annual peak-streamflow frequency represents the peak streamflow for nine recurrence intervals of 2, 5, 10, 25, 50, 100, 200, 250, and 500 years. Common methods for estimation of peak-streamflow frequency for ungaged or unmonitored watersheds are regression equations for each recurrence interval developed for one or more regions; such regional equations are the subject of this report. The method is based on analysis of annual peak-streamflow data from U.S. Geological Survey streamflow-gaging stations (stations). Beginning in 2007, the U.S. Geological Survey, in cooperation with the Texas Department of Transportation and in partnership with Texas Tech University, began a 3-year investigation concerning the development of regional equations to estimate annual peak-streamflow frequency for undeveloped watersheds in Texas. The investigation focuses primarily on 638 stations with 8 or more years of data from undeveloped watersheds and other criteria. The general approach is explicitly limited to the use of L-moment statistics, which are used in conjunction with a technique of multi-linear regression referred to as PRESS minimization. The approach used to develop the regional equations, which was refined during the investigation, is referred to as the 'L-moment-based, PRESS-minimized, residual-adjusted approach'. For the approach, seven unique distributions are fit to the sample L-moments of the data for each of 638 stations and trimmed means of the seven results of the distributions for each recurrence interval are used to define the station specific, peak-streamflow frequency. As a first iteration of regression, nine weighted-least-squares, PRESS-minimized, multi-linear regression equations are computed using the watershed
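    The PRESS statistic minimized in this approach is the sum of squared leave-one-out prediction errors, computable in closed form from the hat matrix as PRESS = Σ (e_i / (1 − h_ii))². A sketch with synthetic data:

```python
import numpy as np

def press_statistic(X: np.ndarray, y: np.ndarray) -> float:
    """PRESS via the hat matrix: leave-one-out errors without refitting."""
    H = X @ np.linalg.pinv(X.T @ X) @ X.T    # hat (projection) matrix
    e = y - H @ y                            # ordinary residuals
    return float(np.sum((e / (1.0 - np.diag(H))) ** 2))

# Synthetic watershed-style data (invented, not the Texas stations).
rng = np.random.default_rng(2)
x = rng.uniform(0.0, 1.0, 40)
y = 3.0 * x + rng.normal(0.0, 0.1, 40)
X = np.column_stack([np.ones_like(x), x])
print(press_statistic(X, y))
```

Minimizing PRESS, as the report's weighted-least-squares step does, favors equations that predict well at stations withheld from the fit, not merely those with small in-sample residuals.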

  19. A regional classification scheme for estimating reference water quality in streams using land-use-adjusted spatial regression-tree analysis

    USGS Publications Warehouse

    Robertson, D.M.; Saad, D.A.; Heisey, D.M.

    2006-01-01

    Various approaches are used to subdivide large areas into regions containing streams that have similar reference or background water quality and that respond similarly to different factors. For many applications, such as establishing reference conditions, it is preferable to use physical characteristics that are not affected by human activities to delineate these regions. However, most approaches, such as ecoregion classifications, rely on land use to delineate regions or have difficulties compensating for the effects of land use. Land use not only directly affects water quality, but it is often correlated with the factors used to define the regions. In this article, we describe modifications to SPARTA (spatial regression-tree analysis), a relatively new approach applied to water-quality and environmental characteristic data to delineate zones with similar factors affecting water quality. In this modified approach, land-use-adjusted (residualized) water quality and environmental characteristics are computed for each site. Regression-tree analysis is applied to the residualized data to determine the most statistically important environmental characteristics describing the distribution of a specific water-quality constituent. Geographic information for small basins throughout the study area is then used to subdivide the area into relatively homogeneous environmental water-quality zones. For each zone, commonly used approaches are subsequently used to define its reference water quality and how its water quality responds to changes in land use. SPARTA is used to delineate zones of similar reference concentrations of total phosphorus and suspended sediment throughout the upper Midwestern part of the United States. © 2006 Springer Science+Business Media, Inc.
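    The residualization step, regressing a water-quality variable on land-use covariates and carrying the residuals forward to the regression-tree analysis, can be sketched as follows (variable names and data invented):

```python
import numpy as np

# Invented land-use covariates and a water-quality response; the real
# SPARTA inputs are basin-scale characteristics across the upper Midwest.
rng = np.random.default_rng(3)
n = 200
pct_agriculture = rng.uniform(0.0, 100.0, n)
pct_urban = rng.uniform(0.0, 40.0, n)
total_phosphorus = 0.02 + 0.001 * pct_agriculture + rng.normal(0.0, 0.01, n)

# Regress the constituent on land use and keep the residuals, so the
# downstream regionalization works on a land-use-free signal.
L = np.column_stack([np.ones(n), pct_agriculture, pct_urban])
beta, *_ = np.linalg.lstsq(L, total_phosphorus, rcond=None)
residualized_tp = total_phosphorus - L @ beta   # land-use-adjusted values
```

By construction the residuals are uncorrelated with the land-use covariates, so any structure the regression tree finds in them reflects the physical setting rather than human land use.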

  20. Calculating the Dose of Subcutaneous Immunoglobulin for Primary Immunodeficiency Disease in Patients Switched From Intravenous to Subcutaneous Immunoglobulin Without the Use of a Dose-Adjustment Coefficient

    PubMed Central

    Fadeyi, Michael; Tran, Tin

    2013-01-01

    Primary immunodeficiency disease (PIDD) is an inherited disorder characterized by an inadequate immune system. The most common type of PIDD is antibody deficiency. Patients with this disorder lack the ability to make functional immunoglobulin G (IgG) and require lifelong IgG replacement therapy to prevent serious bacterial infections. The current standard therapy for PIDD is intravenous immunoglobulin (IVIG) infusions, but IVIG might not be appropriate for all patients. For this reason, subcutaneous immunoglobulin (SCIG) has emerged as an alternative to IVIG. A concern for physicians is the precise SCIG dose that should be prescribed, because there are pharmacokinetic differences between IVIG and SCIG. Manufacturers of SCIG 10% and 20% liquid (immune globulin subcutaneous [human]) recommend a dose-adjustment coefficient (DAC). Both strengths are currently approved by the FDA. This DAC is to be used when patients are switched from IVIG to SCIG. In this article, we propose another dosing method that uses a higher ratio of IVIG to SCIG and an incremental adjustment based on clinical status, body weight, and the presence of concurrent diseases. PMID:24391400

  1. Confidence Intervals, Power Calculation, and Sample Size Estimation for the Squared Multiple Correlation Coefficient under the Fixed and Random Regression Models: A Computer Program and Useful Standard Tables.

    ERIC Educational Resources Information Center

    Mendoza, Jorge L.; Stafford, Karen L.

    2001-01-01

    Introduces a computer package written for Mathematica, the purpose of which is to perform a number of difficult iterative functions with respect to the squared multiple correlation coefficient under the fixed and random models. These functions include computation of the confidence interval upper and lower bounds, power calculation, calculation of…

  2. Novel Logistic Regression Model of Chest CT Attenuation Coefficient Distributions for the Automated Detection of Abnormal (Emphysema or ILD) versus Normal Lung

    PubMed Central

    Chan, Kung-Sik; Jiao, Feiran; Mikulski, Marek A.; Gerke, Alicia; Guo, Junfeng; Newell, John D; Hoffman, Eric A.; Thompson, Brad; Lee, Chang Hyun; Fuortes, Laurence J.

    2015-01-01

    Rationale and Objectives We evaluated the role of an automated quantitative computed tomography (CT) scan interpretation algorithm in detecting Interstitial Lung Disease (ILD) and/or emphysema in a sample of elderly subjects with mild lung disease. We hypothesized that the quantification and distributions of CT attenuation values on lung CT, over a subset of the Hounsfield Unit (HU) range [−1000 HU, 0 HU], can differentiate early or mild disease from normal lung. Materials and Methods We compared results of quantitative spiral rapid end-exhalation (functional residual capacity; FRC) and end-inhalation (total lung capacity; TLC) CT scan analyses in 52 subjects with radiographic evidence of mild fibrotic lung disease to 17 normal subjects. Several CT value distributions were explored, including (i) that from the peripheral lung taken at TLC (with peels at 15 or 65 mm), (ii) the ratio of (i) to that from the core of the lung, and (iii) the ratio of (ii) to its FRC counterpart. We developed a fused-lasso logistic regression model that can automatically identify sub-intervals of [−1000 HU, 0 HU] over which a CT value distribution provides optimal discrimination between abnormal and normal scans. Results The fused-lasso logistic regression model based on (ii) with a 15 mm peel identified the relative frequency of CT values over [−1000, −900] HU and over [−450, −200] HU as a means of discriminating abnormal versus normal scans, resulting in a zero out-of-sample false positive rate and a 15% false negative rate that was lowered to 12% by pooling information. Conclusions We demonstrated the potential usefulness of this novel quantitative imaging analysis method in discriminating ILD and/or emphysema from normal lungs. PMID:26776294

  3. Kendall-Theil Robust Line (KTRLine--version 1.0)-A Visual Basic Program for Calculating and Graphing Robust Nonparametric Estimates of Linear-Regression Coefficients Between Two Continuous Variables

    USGS Publications Warehouse

    Granato, Gregory E.

    2006-01-01

    The Kendall-Theil Robust Line software (KTRLine-version 1.0) is a Visual Basic program that may be used with the Microsoft Windows operating system to calculate parameters for robust, nonparametric estimates of linear-regression coefficients between two continuous variables. The KTRLine software was developed by the U.S. Geological Survey, in cooperation with the Federal Highway Administration, for use in stochastic data modeling with local, regional, and national hydrologic data sets to develop planning-level estimates of potential effects of highway runoff on the quality of receiving waters. The Kendall-Theil robust line was selected because this robust nonparametric method is resistant to the effects of outliers and nonnormality in residuals that commonly characterize hydrologic data sets. The slope of the line is calculated as the median of all possible pairwise slopes between points. The intercept is calculated so that the line will run through the median of input data. A single-line model or a multisegment model may be specified. The program was developed to provide regression equations with an error component for stochastic data generation because nonparametric multisegment regression tools are not available with the software that is commonly used to develop regression models. The Kendall-Theil robust line is a median line and, therefore, may underestimate total mass, volume, or loads unless the error component or a bias correction factor is incorporated into the estimate. Regression statistics such as the median error, the median absolute deviation, the prediction error sum of squares, the root mean square error, the confidence interval for the slope, and the bias correction factor for median estimates are calculated by use of nonparametric methods. These statistics, however, may be used to formulate estimates of mass, volume, or total loads. The program is used to read a two- or three-column tab-delimited input file with variable names in the first row and
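
    The slope and intercept rules just described (slope = median of all pairwise slopes; line through the medians of the input data) can be sketched in Python. This is an illustrative single-segment re-implementation, not the KTRLine program itself:

```python
import numpy as np

def kendall_theil_line(x, y):
    """Kendall-Theil robust line: the slope is the median of all possible
    pairwise slopes, and the intercept is chosen so the line runs through
    the medians of the input data."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(n) for j in range(i + 1, n)
              if x[j] != x[i]]
    slope = np.median(slopes)
    intercept = np.median(y) - slope * np.median(x)
    return slope, intercept

# A single gross outlier barely moves the fit:
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = 2.0 * x + 1.0
y[5] = 100.0  # gross outlier; an ordinary least-squares fit would be pulled far off
b, a = kendall_theil_line(x, y)  # b == 2.0, a == 1.0
```

    Because only 5 of the 15 pairwise slopes involve the outlier, the median slope is still exactly 2, illustrating the resistance to outliers the abstract describes.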

  4. Factor Scores, Structure Coefficients, and Communality Coefficients

    ERIC Educational Resources Information Center

    Goodwyn, Fara

    2012-01-01

    This paper presents heuristic explanations of factor scores, structure coefficients, and communality coefficients. Common misconceptions regarding these topics are clarified. In addition, (a) the regression (b) Bartlett, (c) Anderson-Rubin, and (d) Thompson methods for calculating factor scores are reviewed. Syntax necessary to execute all four…

  5. Multiple linear regression analysis

    NASA Technical Reports Server (NTRS)

    Edwards, T. R.

    1980-01-01

    Program rapidly selects best-suited set of coefficients. User supplies only vectors of independent and dependent data and specifies confidence level required. Program uses stepwise statistical procedure for relating minimal set of variables to set of observations; final regression contains only most statistically significant coefficients. Program is written in FORTRAN IV for batch execution and has been implemented on NOVA 1200.
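
    A rough Python analogue of such a stepwise procedure (greedy forward selection on R² gain; the original program's confidence-level test is replaced here by a simple gain threshold, and the FORTRAN IV code is not reproduced):

```python
import numpy as np

def forward_stepwise(X, y, min_r2_gain=0.01):
    """Greedy forward selection: repeatedly add the predictor that most
    increases R^2, stopping when the best gain falls below min_r2_gain.
    A simplified stand-in for significance-based stepwise selection."""
    n, p = X.shape
    selected, remaining = [], list(range(p))

    def r2(cols):
        A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        return 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

    best = 0.0
    while remaining:
        top_r2, top_c = max((r2(selected + [c]), c) for c in remaining)
        if top_r2 - best < min_r2_gain:
            break
        selected.append(top_c)
        remaining.remove(top_c)
        best = top_r2
    return selected, best

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2]       # only predictors 0 and 2 matter
sel, final_r2 = forward_stepwise(X, y)  # sel contains exactly predictors 0 and 2
```

    The final regression retains only the predictors that contribute a meaningful gain, mirroring the "only most statistically significant coefficients" behavior described above.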

  6. Teaching and hospital production: the use of regression estimates.

    PubMed

    Lehner, L A; Burgess, J F

    1995-01-01

    Medicare's Prospective Payment System pays U.S. teaching hospitals for the indirect costs of medical education based on a regression coefficient in a cost function. In regression studies using health care data, it is common for explanatory variables to be measured imperfectly, yet the potential for measurement error is often ignored. In this paper, U.S. Department of Veterans Affairs data is used to examine issues of health care production estimation and the use of regression estimates like the teaching adjustment factor. The findings show that measurement error and persistent multicollinearity confound attempts to have a large degree of confidence in the precise magnitude of parameter estimates.

  7. ``Regressed experts'' as a new state in teachers' professional development: lessons from Computer Science teachers' adjustments to substantial changes in the curriculum

    NASA Astrophysics Data System (ADS)

    Liberman, Neomi; Ben-David Kolikant, Yifat; Beeri, Catriel

    2012-09-01

    Due to a program reform in Israel, experienced CS high-school teachers faced the need to master and teach a new programming paradigm. This situation served as an opportunity to explore the relationship between teachers' content knowledge (CK) and their pedagogical content knowledge (PCK). This article focuses on three case studies, with emphasis on one of them. Using observations and interviews, we examine how the teachers we observed taught, and what development, if any, occurred in their teaching as a result of their teaching experience. Our findings suggest that this situation creates a new hybrid state of teachers, which we term "regressed experts." These teachers incorporate in their professional practice some elements typical of novices and some typical of experts. We also found that these teachers' experience, although established when teaching different CK, serves as leverage to improve their knowledge and understanding of aspects of the new content.

  8. Shrinkage regression-based methods for microarray missing value imputation

    PubMed Central

    2013-01-01

    Background Missing values commonly occur in microarray data, which usually contain more than 5% missing values, with up to 90% of genes affected. Inaccurate missing value estimation reduces the power of downstream microarray data analyses. Many types of methods have been developed to estimate missing values. Among them, the regression-based methods are very popular and have been shown to perform better than the other types of methods on many testing microarray datasets. Results To further improve the performance of the regression-based methods, we propose shrinkage regression-based methods. Our methods take advantage of the correlation structure in the microarray data and select similar genes for the target gene by Pearson correlation coefficients. In addition, our methods incorporate the least squares principle, utilize a shrinkage estimation approach to adjust the coefficients of the regression model, and then use the new coefficients to estimate missing values. Simulation results show that the proposed methods provide more accurate missing value estimation in six testing microarray datasets than the existing regression-based methods do. Conclusions Imputation of missing values is a very important aspect of microarray data analyses because most of the downstream analyses require a complete dataset. Therefore, exploring accurate and efficient methods for estimating missing values has become an essential issue. Since our proposed shrinkage regression-based methods can provide accurate missing value estimation, they are competitive alternatives to the existing regression-based methods. PMID:24565159
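
    The general recipe — pick the genes most Pearson-correlated with the target, fit least squares, shrink the coefficients, predict the missing entries — can be sketched in Python. This is an illustrative simplification with a fixed shrinkage factor, not the paper's estimator:

```python
import numpy as np

def impute_missing(data, k=5, shrink=0.9):
    """Regression-based imputation sketch. For each gene (row) with missing
    values: find the k complete genes most Pearson-correlated with it over
    its observed samples, fit least squares, shrink the slope coefficients
    toward zero, and predict the missing entries."""
    data = data.copy()
    complete = ~np.isnan(data).any(axis=1)
    for g in np.flatnonzero(~complete):
        obs = ~np.isnan(data[g])
        cors = np.array([abs(np.corrcoef(data[g, obs], data[h, obs])[0, 1])
                         if complete[h] else -1.0
                         for h in range(data.shape[0])])
        neighbors = np.argsort(cors)[::-1][:k]
        A = np.column_stack([np.ones(obs.sum()), data[neighbors][:, obs].T])
        coef, *_ = np.linalg.lstsq(A, data[g, obs], rcond=None)
        coef[1:] *= shrink  # fixed shrinkage of the slopes (illustrative only)
        miss = ~obs
        data[g, miss] = coef[0] + data[neighbors][:, miss].T @ coef[1:]
    return data

base = np.sin(np.arange(20.0))
rng = np.random.default_rng(1)
genes = np.vstack([base + 0.3 * rng.normal(size=20) for _ in range(6)])
genes[0, 0] = np.nan
filled = impute_missing(genes, k=3)  # no NaNs remain in `filled`
```

    The paper's contribution is the data-driven choice of the shrinkage adjustment; the constant factor here merely marks where that adjustment would go.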

  9. Background stratified Poisson regression analysis of cohort data

    PubMed Central

    Langholz, Bryan

    2012-01-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as ‘nuisance’ variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this ‘conditional’ regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models. PMID:22193911

  10. Background stratified Poisson regression analysis of cohort data.

    PubMed

    Richardson, David B; Langholz, Bryan

    2012-03-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models. PMID:22193911

  11. Precision Efficacy Analysis for Regression.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.

    When multiple linear regression is used to develop a prediction model, sample size must be large enough to ensure stable coefficients. If the derivation sample size is inadequate, the model may not predict well for future subjects. The precision efficacy analysis for regression (PEAR) method uses a cross- validity approach to select sample sizes…

  12. Autistic Regression

    ERIC Educational Resources Information Center

    Matson, Johnny L.; Kozlowski, Alison M.

    2010-01-01

    Autistic regression is one of the many mysteries in the developmental course of autism and pervasive developmental disorders not otherwise specified (PDD-NOS). Various definitions of this phenomenon have been used, further clouding the study of the topic. Despite this problem, some efforts at establishing prevalence have been made. The purpose of…

  13. Robust Regression.

    PubMed

    Huang, Dong; Cabral, Ricardo; De la Torre, Fernando

    2016-02-01

    Discriminative methods (e.g., kernel regression, SVM) have been extensively used to solve problems such as object recognition, image alignment and pose estimation from images. These methods typically map image features (X) to continuous (e.g., pose) or discrete (e.g., object category) values. A major drawback of existing discriminative methods is that samples are directly projected onto a subspace and hence fail to account for outliers, which are common in realistic training sets due to occlusion, specular reflections or noise. It is important to note that existing discriminative approaches assume the input variables X to be noise free. Thus, discriminative methods experience significant performance degradation when gross outliers are present. Despite its obvious importance, the problem of robust discriminative learning has been relatively unexplored in computer vision. This paper develops the theory of robust regression (RR) and presents an effective convex approach that uses recent advances in rank minimization. The framework applies to a variety of problems in computer vision including robust linear discriminant analysis, regression with missing data, and multi-label classification. Several synthetic and real examples with applications to head pose estimation from images, image and video classification and facial attribute classification with missing data are used to illustrate the benefits of RR. PMID:26761740
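
    The paper's convex rank-minimization machinery is beyond a short snippet, but its core goal — regression coefficients that are not dragged around by gross outliers — can be illustrated with a classical alternative, Huber-weighted iteratively reweighted least squares (this sketch is an assumption of the editor, not the authors' method):

```python
import numpy as np

def huber_irls(x, y, delta=1.345, iters=50):
    """Robust line fit by iteratively reweighted least squares with Huber
    weights: observations with large scaled residuals get weight
    delta/|r/s| < 1, bounding the influence of outliers."""
    A = np.column_stack([np.ones(len(y)), x])
    beta = np.linalg.lstsq(A, y, rcond=None)[0]  # start from ordinary least squares
    for _ in range(iters):
        r = y - A @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745  # robust scale (MAD)
        if s == 0:
            break
        z = np.abs(r / s)
        w = np.where(z <= delta, 1.0, delta / z)
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
    return beta

x = np.arange(10.0)
y = 1.0 + 2.0 * x
y[9] += 50.0  # one gross outlier
intercept, slope = huber_irls(x, y)
```

    Despite the outlier, the fit should stay near intercept 1 and slope 2, whereas plain least squares would be pulled substantially toward the corrupted point.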

  14. Transfer Learning Based on Logistic Regression

    NASA Astrophysics Data System (ADS)

    Paul, A.; Rottensteiner, F.; Heipke, C.

    2015-08-01

    In this paper we address the problem of classification of remote sensing images in the framework of transfer learning with a focus on domain adaptation. The main novel contribution is a method for transductive transfer learning in remote sensing on the basis of logistic regression. Logistic regression is a discriminative probabilistic classifier of low computational complexity, which can deal with multiclass problems. This research area deals with methods that solve problems in which labelled training data sets are assumed to be available only for a source domain, while classification is needed in the target domain with different, yet related characteristics. Classification takes place with a model of weight coefficients for hyperplanes which separate features in the transformed feature space. In terms of logistic regression, our domain adaptation method adjusts the model parameters by iterative labelling of the target test data set. These labelled data features are iteratively added to the current training set which, at the beginning, only contains source features; simultaneously, a number of source features are deleted from the current training set. Experimental results based on a test series with synthetic and real data constitute a first proof-of-concept of the proposed method.

  15. Retro-regression--another important multivariate regression improvement.

    PubMed

    Randić, M

    2001-01-01

    We review the serious problem associated with instabilities of the coefficients of regression equations, referred to as the MRA (multivariate regression analysis) "nightmare of the first kind". This is manifested when in a stepwise regression a descriptor is included or excluded from a regression. The consequence is an unpredictable change of the coefficients of the descriptors that remain in the regression equation. We follow with consideration of an even more serious problem, referred to as the MRA "nightmare of the second kind", arising when optimal descriptors are selected from a large pool of descriptors. This process typically causes at different steps of the stepwise regression a replacement of several previously used descriptors by new ones. We describe a procedure that resolves these difficulties. The approach is illustrated on boiling points of nonanes which are considered (1) by using an ordered connectivity basis; (2) by using an ordering resulting from application of greedy algorithm; and (3) by using an ordering derived from an exhaustive search for optimal descriptors. A novel variant of multiple regression analysis, called retro-regression (RR), is outlined showing how it resolves the ambiguities associated with both "nightmares" of the first and the second kind of MRA. PMID:11410035

  16. Hybrid fuzzy regression with trapezoidal fuzzy data

    NASA Astrophysics Data System (ADS)

    Razzaghnia, T.; Danesh, S.; Maleki, A.

    2011-12-01

    This research deals with a method for hybrid fuzzy least-squares regression. The extension of symmetric triangular fuzzy coefficients to asymmetric trapezoidal fuzzy coefficients is considered as an effective measure for removing unnecessary fuzziness of the linear fuzzy model. First, a trapezoidal fuzzy variable is applied to derive a bivariate regression model. Next, normal equations are formulated to solve the four parts of the hybrid regression coefficients. The model is then extended to multiple regression analysis. Finally, the method is compared with Y.-H. O. Chang's model.

  17. Practical Session: Simple Linear Regression

    NASA Astrophysics Data System (ADS)

    Clausel, M.; Grégoire, G.

    2014-12-01

    Two exercises are proposed to illustrate simple linear regression. The first one is based on the famous Galton data set on heredity. We use the lm R command and get coefficient estimates, the standard error of the error term, R2, residuals… In the second example, devoted to data related to the vapor tension of mercury, we fit a simple linear regression, predict values, and look ahead to multiple linear regression. This practical session is an excerpt from practical exercises proposed by A. Dalalyan at ENPC (see Exercises 1 and 2 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_4.pdf).

  18. [Structural adjustment, cultural adjustment?].

    PubMed

    Dujardin, B; Dujardin, M; Hermans, I

    2003-12-01

    Over the last two decades, multiple studies have been conducted and many articles published about Structural Adjustment Programmes (SAPs). These studies mainly describe the characteristics of SAPs and analyse their economic consequences as well as their effects upon a variety of sectors: health, education, agriculture and environment. However, very few focus on the sociological and cultural effects of SAPs. Following a summary of SAPs' content and characteristics, the paper briefly discusses the historical course of SAPs and the different critiques that have been made. The cultural consequences of SAPs are introduced and described on four different levels: political, community, familial, and individual. These levels are analysed through examples from the literature and individual testimonies from people in the Southern Hemisphere. The paper concludes that SAPs, alongside economic globalisation processes, are responsible for an acute breakdown of social and cultural structures in societies in the South. It should be a priority not only to better understand the situation and its determining factors, but also to intervene and act with strategies that support and reinvest in the social and cultural sectors, which is vital to allow individuals and communities in the South to strengthen their autonomy and identity.

  19. Interquantile Shrinkage in Regression Models

    PubMed Central

    Jiang, Liewen; Wang, Huixia Judy; Bondell, Howard D.

    2012-01-01

    Conventional analysis using quantile regression typically focuses on fitting the regression model at different quantiles separately. However, in situations where the quantile coefficients share some common feature, joint modeling of multiple quantiles to accommodate the commonality often leads to more efficient estimation. One example of common features is that a predictor may have a constant effect over one region of quantile levels but varying effects in other regions. To automatically perform estimation and detection of the interquantile commonality, we develop two penalization methods. When the quantile slope coefficients indeed do not change across quantile levels, the proposed methods will shrink the slopes towards constant and thus improve the estimation efficiency. We establish the oracle properties of the two proposed penalization methods. Through numerical investigations, we demonstrate that the proposed methods lead to estimations with competitive or higher efficiency than the standard quantile regression estimation in finite samples. Supplemental materials for the article are available online. PMID:24363546

  20. Regression: A Bibliography.

    ERIC Educational Resources Information Center

    Pedrini, D. T.; Pedrini, Bonnie C.

    Regression, another mechanism studied by Sigmund Freud, has had much research, e.g., hypnotic regression, frustration regression, schizophrenic regression, and infra-human-animal regression (often directly related to fixation). Many investigators worked with hypnotic age regression, which has a long history, going back to Russian reflexologists.…

  1. Some Simple Computational Formulas for Multiple Regression

    ERIC Educational Resources Information Center

    Aiken, Lewis R., Jr.

    1974-01-01

    Short-cut formulas are presented for direct computation of the beta weights, the standard errors of the beta weights, and the multiple correlation coefficient for multiple regression problems involving three independent variables and one dependent variable. (Author)
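
    Such short-cut formulas are special cases of a general matrix identity: the standardized beta weights solve R_xx · β = r_xy, and the squared multiple correlation is β · r_xy. A minimal Python sketch, with illustrative correlation values that are not taken from the article:

```python
import numpy as np

def standardized_betas(Rxx, rxy):
    """Solve Rxx @ beta = rxy for the standardized beta weights and
    return them together with the squared multiple correlation
    R^2 = beta . rxy."""
    beta = np.linalg.solve(Rxx, rxy)
    return beta, float(beta @ rxy)

# Illustrative correlations among three predictors (Rxx) and between
# each predictor and the criterion (rxy):
Rxx = np.array([[1.0, 0.3, 0.2],
                [0.3, 1.0, 0.4],
                [0.2, 0.4, 1.0]])
rxy = np.array([0.5, 0.4, 0.3])
beta, R2 = standardized_betas(Rxx, rxy)
```

    With three predictors, writing out this solve by Cramer's rule yields closed-form expressions of the kind the article presents.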

  2. Application and Interpretation of Hierarchical Multiple Regression.

    PubMed

    Jeong, Younhee; Jung, Mi Jung

    2016-01-01

    The authors reported the association between motivation and self-management behavior of individuals with chronic low back pain after adjusting for control variables using hierarchical multiple regression. This article describes the details of that hierarchical regression, applying the actual data used in the original article, including how to test assumptions, run the statistical tests, and report the results. PMID:27648796

  3. Cross-Validation, Shrinkage, and Multiple Regression.

    ERIC Educational Resources Information Center

    Hynes, Kevin

    One aspect of multiple regression--the shrinkage of the multiple correlation coefficient on cross-validation is reviewed. The paper consists of four sections. In section one, the distinction between a fixed and a random multiple regression model is made explicit. In section two, the cross-validation paradigm and an explanation for the occurrence…

  4. Incremental Net Effects in Multiple Regression

    ERIC Educational Resources Information Center

    Lipovetsky, Stan; Conklin, Michael

    2005-01-01

    A regular problem in regression analysis is estimating the comparative importance of the predictors in the model. This work considers the 'net effects', or shares of the predictors in the coefficient of the multiple determination, which is a widely used characteristic of the quality of a regression model. Estimation of the net effects can be a…

  5. Laplace regression with censored data.

    PubMed

    Bottai, Matteo; Zhang, Jiajia

    2010-08-01

    We consider a regression model where the error term is assumed to follow a type of asymmetric Laplace distribution. We explore its use in the estimation of conditional quantiles of a continuous outcome variable given a set of covariates in the presence of random censoring. Censoring may depend on covariates. Estimation of the regression coefficients is carried out by maximizing a non-differentiable likelihood function. In the scenarios considered in a simulation study, the Laplace estimator showed correct coverage and shorter computation time than the alternative methods considered, some of which occasionally failed to converge. We illustrate the use of Laplace regression with an application to survival time in patients with small cell lung cancer.

  6. Regression Calibration with Heteroscedastic Error Variance

    PubMed Central

    Spiegelman, Donna; Logan, Roger; Grove, Douglas

    2011-01-01

    The problem of covariate measurement error with heteroscedastic measurement error variance is considered. Standard regression calibration assumes that the measurement error has a homoscedastic measurement error variance. An estimator is proposed to correct regression coefficients for covariate measurement error with heteroscedastic variance. Point and interval estimates are derived. Validation data containing the gold standard must be available. This estimator is a closed-form correction of the uncorrected primary regression coefficients, which may be of logistic or Cox proportional hazards model form, and is closely related to the version of regression calibration developed by Rosner et al. (1990). The primary regression model can include multiple covariates measured without error. The use of these estimators is illustrated in two data sets, one taken from occupational epidemiology (the ACE study) and one taken from nutritional epidemiology (the Nurses’ Health Study). In both cases, although there was evidence of moderate heteroscedasticity, there was little difference in estimation or inference using this new procedure compared to standard regression calibration. It is shown theoretically that unless the relative risk is large or measurement error severe, standard regression calibration approximations will typically be adequate, even with moderate heteroscedasticity in the measurement error model variance. In a detailed simulation study, standard regression calibration performed either as well as or better than the new estimator. When the disease is rare and the errors normally distributed, or when measurement error is moderate, standard regression calibration remains the method of choice. PMID:22848187

  7. [From clinical judgment to linear regression model.

    PubMed

    Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O

    2013-01-01

    When we think about mathematical models, such as the linear regression model, we think that these terms are only used by those engaged in research, a notion that is far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful to predict or show the relationship between two or more variables as long as the dependent variable is quantitative and has a normal distribution. Stated another way, regression is used to predict a measure based on the knowledge of at least one other variable. Linear regression has as its first objective to determine the slope or inclination of the regression line: Y = a + bx, where "a" is the intercept or regression constant, equivalent to the value of "Y" when "X" equals 0, and "b" (also called the slope) indicates the increase or decrease that occurs when the variable "x" increases or decreases by one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R(2)) indicates the importance of the independent variables in the outcome.
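
    The quantities named above — slope b, intercept a, and the coefficient of determination R² — can be computed directly in a few lines (illustrative numbers, not clinical data):

```python
import numpy as np

# Least-squares fit of Y = a + b*x, following the notation of the abstract.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.0, 6.2, 7.9, 10.1])

b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
a = y.mean() - b * x.mean()  # the fitted line passes through (mean x, mean y)

y_hat = a + b * x
r_squared = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
# b ≈ 1.99, a ≈ 0.09; r_squared is above 0.99 for these nearly linear data
```

    Here R² near 1 says the single independent variable x accounts for almost all the variation in y, which is the "importance" interpretation the abstract describes.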

  8. Ecological correlation between arsenic level in well water and age-adjusted mortality from malignant neoplasms

    SciTech Connect

    Chen, C.J.; Wang, C.J.

    1990-09-01

    A significant dose-response relation between ingested arsenic and several cancers has recently been reported in four townships of the endemic area of blackfoot disease, a unique peripheral artery disease related to the chronic arsenic exposure in southwestern Taiwan. This study was carried out to examine ecological correlations between arsenic level of well water and mortality from various malignant neoplasms in 314 precincts and townships of Taiwan. The arsenic content in water of 83,656 wells was determined by a standard mercuric bromide stain method from 1974 to 1976, while mortality rates of 21 malignant neoplasms among residents in study precincts and townships from 1972 to 1983 were standardized to the world population in 1976. A significant association with the arsenic level in well water was observed for cancers of the liver, nasal cavity, lung, skin, bladder and kidney in both males and females as well as for the prostate cancer in males. These associations remained significant after adjusting for indices of urbanization and industrialization through multiple regression analyses. The multivariate-adjusted regression coefficient indicating an increase in age-adjusted mortality per 100,000 person-years for every 0.1 ppm increase in arsenic level of well water was 6.8 and 2.0, 0.7 and 0.4, 5.3 and 5.3, 0.9 and 1.0, 3.9 and 4.2, as well as 1.1 and 1.7, respectively, in males and females for cancers of the liver, nasal cavity, lung, skin, bladder and kidney. The multivariate-adjusted regression coefficient for the prostate cancer was 0.5. These weighted regression coefficients were found to increase or remain unchanged in further analyses in which only 170 southwestern townships were included.

  9. Stepwise multiple regression method of greenhouse gas emission modeling in the energy sector in Poland.

    PubMed

    Kolasa-Wiecek, Alicja

    2015-04-01

    The energy sector in Poland is the source of 81% of greenhouse gas (GHG) emissions. Poland, among other European Union countries, occupies a leading position with regard to coal consumption. The Polish energy sector actively participates in efforts to reduce GHG emissions to the atmosphere, through a gradual decrease of the share of coal in the fuel mix and the development of renewable energy sources. All evidence which completes the knowledge about issues related to GHG emissions is a valuable source of information. The article presents the results of modeling of GHG emissions generated by the energy sector in Poland. For a better understanding of the quantitative relationship between total consumption of primary energy and greenhouse gas emission, a multiple stepwise regression model was applied. The modeling results for CO2 emissions demonstrate a high relationship (0.97) with the hard coal consumption variable. The adjustment coefficient of the model to the actual data is high, equal to 95%. The backward step regression model, in the case of CH4 emission, indicated the presence of hard coal (0.66), peat and fuel wood (0.34), and solid waste fuels, as well as other sources (-0.64), as the most important variables. The adjusted coefficient of determination is suitable, at R2 = 0.90. For N2O emission modeling, the obtained coefficient of determination is low, equal to 43%. A significant variable influencing the amount of N2O emission is the peat and wood fuel consumption.

  10. Unitary Response Regression Models

    ERIC Educational Resources Information Center

    Lipovetsky, S.

    2007-01-01

    The dependent variable in a regular linear regression is a numerical variable, and in a logistic regression it is a binary or categorical variable. In these models the dependent variable has varying values. However, there are problems yielding an identity output of a constant value which can also be modelled in a linear or logistic regression with…

  11. NCCS Regression Test Harness

    SciTech Connect

    Tharrington, Arnold N.

    2015-09-09

    The NCCS Regression Test Harness is a software package that provides a framework to perform regression and acceptance testing on NCCS High Performance Computers. The package is written in Python and has only the dependency of a Subversion repository to store the regression tests.

  12. Relationship between Multiple Regression and Selected Multivariable Methods.

    ERIC Educational Resources Information Center

    Schumacker, Randall E.

    The relationship of multiple linear regression to various multivariate statistical techniques is discussed. The importance of the standardized partial regression coefficient (beta weight) in multiple linear regression as it is applied in path, factor, LISREL, and discriminant analyses is emphasized. The multivariate methods discussed in this paper…

  13. Shaft adjuster

    DOEpatents

    Harry, H.H.

    1988-03-11

    Apparatus and method for the adjustment and alignment of shafts in high power devices. A plurality of adjacent rotatable angled cylinders are positioned between a base and the shaft to be aligned which when rotated introduce an axial offset. The apparatus is electrically conductive and constructed of a structurally rigid material. The angled cylinders allow the shaft such as the center conductor in a pulse line machine to be offset in any desired alignment position within the range of the apparatus. 3 figs.

  14. Shaft adjuster

    DOEpatents

    Harry, Herbert H.

    1989-01-01

    Apparatus and method for the adjustment and alignment of shafts in high power devices. A plurality of adjacent rotatable angled cylinders are positioned between a base and the shaft to be aligned which when rotated introduce an axial offset. The apparatus is electrically conductive and constructed of a structurally rigid material. The angled cylinders allow the shaft such as the center conductor in a pulse line machine to be offset in any desired alignment position within the range of the apparatus.

  15. The Geometry of Enhancement in Multiple Regression

    ERIC Educational Resources Information Center

    Waller, Niels G.

    2011-01-01

    In linear multiple regression, "enhancement" is said to occur when R^2 = b'r > r'r, where b is a p x 1 vector of standardized regression coefficients and r is a p x 1 vector of correlations between a criterion y and a set of standardized regressors, x. When p = 1, b = r and enhancement cannot…
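
    A small numerical illustration of the enhancement condition (a constructed classical-suppressor example, not data from the article): a second regressor uncorrelated with the criterion can still raise R^2 above r'r.

    ```python
    import numpy as np

    Rxx = np.array([[1.0, 0.7],
                    [0.7, 1.0]])      # correlations among regressors x1, x2
    r = np.array([0.5, 0.0])          # correlations of x1, x2 with criterion y

    b = np.linalg.solve(Rxx, r)       # standardized regression coefficients
    R2 = b @ r                        # squared multiple correlation, b'r
    print(R2, r @ r)                  # R^2 = 0.25/0.51 > r'r = 0.25: enhancement
    ```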

  17. Fully Regressive Melanoma

    PubMed Central

    Ehrsam, Eric; Kallini, Joseph R.; Lebas, Damien; Modiano, Philippe; Cotten, Hervé

    2016-01-01

    Fully regressive melanoma is a phenomenon in which the primary cutaneous melanoma becomes completely replaced by fibrotic components as a result of host immune response. Although 10 to 35 percent of cases of cutaneous melanomas may partially regress, fully regressive melanoma is very rare; only 47 cases have been reported in the literature to date. All of the cases of fully regressive melanoma reported in the literature were diagnosed in conjunction with metastasis in the patient. The authors describe a case of fully regressive melanoma without any metastases at the time of its diagnosis. Characteristic findings on dermoscopy, as well as the absence of melanoma on final biopsy, confirmed the diagnosis. PMID:27672418

  18. Risk Adjustment for Medicare Total Knee Arthroplasty Bundled Payments.

    PubMed

    Clement, R Carter; Derman, Peter B; Kheir, Michael M; Soo, Adrianne E; Flynn, David N; Levin, L Scott; Fleisher, Lee

    2016-09-01

    The use of bundled payments is growing because of their potential to align providers and hospitals on the goal of cost reduction. However, such gain sharing could incentivize providers to "cherry-pick" more profitable patients. Risk adjustment can prevent this unintended consequence, yet most bundling programs include minimal adjustment techniques. This study was conducted to determine how bundled payments for total knee arthroplasty (TKA) should be adjusted for risk. The authors collected financial data for all Medicare patients (age ≥65 years) undergoing primary unilateral TKA at an academic center over a period of 2 years (n=941). Multivariate regression was performed to assess the effect of patient factors on the costs of acute inpatient care, including unplanned 30-day readmissions. This analysis mirrors a bundling model used in the Medicare Bundled Payments for Care Improvement initiative. Increased age, American Society of Anesthesiologists (ASA) class, and the presence of a Medicare Major Complications/Comorbid Conditions (MCC) modifier (typically representing major complications) were associated with increased costs (regression coefficients, $57 per year; $729 per ASA class beyond I; and $3122 for patients meeting MCC criteria; P=.003, P=.001, and P<.001, respectively). Differences in costs were not associated with body mass index, sex, or race. If the results are generalizable, Medicare bundled payments for TKA encompassing acute inpatient care should be adjusted upward by the stated amounts for older patients, those with elevated ASA class, and patients meeting MCC criteria. This is likely an underestimate for many bundling models, including the Comprehensive Care for Joint Replacement program, which incorporate varying degrees of postacute care. Failure to adjust for factors that affect costs may create adverse incentives and barriers to care for certain patient populations. [Orthopedics. 2016; 39(5):e911-e916.]. PMID:27359282
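
    The suggested upward adjustment can be sketched as follows (the three coefficients are quoted from the abstract; the base payment figure and the age-65 reference point are hypothetical assumptions added for illustration):

    ```python
    def adjusted_tka_payment(base, age, asa_class, has_mcc):
        """Risk-adjusted TKA bundled payment for acute inpatient care."""
        return (base
                + 57.0 * (age - 65)              # $57 per year of age over 65
                + 729.0 * (asa_class - 1)        # $729 per ASA class beyond I
                + (3122.0 if has_mcc else 0.0))  # $3122 if MCC criteria are met

    # A 75-year-old, ASA class III patient meeting MCC criteria:
    print(adjusted_tka_payment(20000.0, 75, 3, True))  # 20000 + 570 + 1458 + 3122
    ```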

  19. Meteorological adjustment of yearly mean values for air pollutant concentration comparison

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.; Neustadter, H. E.

    1976-01-01

    Using multiple linear regression analysis, models which estimate mean concentrations of Total Suspended Particulate (TSP), sulfur dioxide, and nitrogen dioxide as a function of several meteorologic variables, two rough economic indicators, and a simple trend in time are studied. Meteorologic data were obtained but do not include inversion heights. The goodness of fit of the estimated models is partially reflected by the squared coefficient of multiple correlation, which indicates that, at the various sampling stations, the models accounted for about 23 to 47 percent of the total variance of the observed TSP concentrations. If the resulting model equations are used in place of simple overall means of the observed concentrations, there is about a 20 percent improvement in either (1) predicting mean concentrations for specified meteorological conditions, or (2) adjusting successive yearly averages to allow for comparisons devoid of meteorological effects. An application to source identification is presented using regression coefficients of wind velocity predictor variables.

  20. On Using the Average Intercorrelation Among Predictor Variables and Eigenvector Orientation to Choose a Regression Solution.

    ERIC Educational Resources Information Center

    Mugrage, Beverly; And Others

    Three ridge regression solutions are compared with ordinary least squares regression and with principal components regression using all components. Ridge regression, particularly the Lawless-Wang solution, outperformed ordinary least squares regression and the principal components solution on the criteria of stability of coefficients and closeness…

  1. Commonality Analysis for the Regression Case.

    ERIC Educational Resources Information Center

    Murthy, Kavita

    Commonality analysis is a procedure for decomposing the coefficient of determination (R^2) in multiple regression analyses into the percent of variance in the dependent variable associated with each independent variable uniquely, and the proportion of explained variance associated with the common effects of predictors in various…

  2. Improved Regression Calibration

    ERIC Educational Resources Information Center

    Skrondal, Anders; Kuha, Jouni

    2012-01-01

    The likelihood for generalized linear models with covariate measurement error cannot in general be expressed in closed form, which makes maximum likelihood estimation taxing. A popular alternative is regression calibration which is computationally efficient at the cost of inconsistent estimation. We propose an improved regression calibration…

  3. Prediction in Multiple Regression.

    ERIC Educational Resources Information Center

    Osborne, Jason W.

    2000-01-01

    Presents the concept of prediction via multiple regression (MR) and discusses the assumptions underlying multiple regression analyses. Also discusses shrinkage, cross-validation, and double cross-validation of prediction equations and describes how to calculate confidence intervals around individual predictions. (SLD)

  4. Morse–Smale Regression

    SciTech Connect

    Gerber, Samuel; Rubel, Oliver; Bremer, Peer -Timo; Pascucci, Valerio; Whitaker, Ross T.

    2012-01-19

    This paper introduces a novel partition-based regression approach that incorporates topological information. Partition-based regression typically introduces a quality-of-fit-driven decomposition of the domain. The emphasis in this work is on a topologically meaningful segmentation. Thus, the proposed regression approach is based on a segmentation induced by a discrete approximation of the Morse–Smale complex. This yields a segmentation with partitions corresponding to regions of the function with a single minimum and maximum that are often well approximated by a linear model. This approach yields regression models that are amenable to interpretation and have good predictive capacity. Typically, regression estimates are quantified by their geometrical accuracy. For the proposed regression, an important aspect is the quality of the segmentation itself. Thus, this article introduces a new criterion that measures the topological accuracy of the estimate. The topological accuracy provides a complementary measure to the classical geometrical error measures and is very sensitive to overfitting. The Morse–Smale regression is compared to state-of-the-art approaches in terms of geometry and topology and yields comparable or improved fits in many cases. Finally, a detailed study on climate-simulation data demonstrates the application of the Morse–Smale regression. Supplementary Materials are available online and contain an implementation of the proposed approach in the R package msr, an analysis and simulations on the stability of the Morse–Smale complex approximation, and additional tables for the climate-simulation study.

  5. Generalized skew coefficients for flood-frequency analysis in Minnesota

    USGS Publications Warehouse

    Lorenz, D.L.

    1997-01-01

    This report presents an evaluation of generalized skew coefficients used in flood-frequency analysis. Station skew coefficients were computed for 267 long-term stream-gaging stations in Minnesota and the surrounding states of Iowa, North and South Dakota, and Wisconsin, and the provinces of Manitoba and Ontario, Canada. Generalized skew coefficients were computed from station skew coefficients using a locally weighted regression technique. The resulting regression trend surface was used as the generalized skew coefficient map, except in the North Shore area, and has a mean square error of 0.182.

  6. Regression problems for magnitudes

    NASA Astrophysics Data System (ADS)

    Castellaro, S.; Mulargia, F.; Kagan, Y. Y.

    2006-06-01

    Least-squares linear regression is so popular that it is sometimes applied without checking whether its basic requirements are satisfied. In particular, in studying earthquake phenomena, the conditions (a) that the uncertainty on the independent variable is at least one order of magnitude smaller than the one on the dependent variable, (b) that both data and uncertainties are normally distributed and (c) that the variance of the residuals is constant are at times disregarded. This may easily lead to wrong results. As an alternative to least squares, when the ratio between errors on the independent and the dependent variable can be estimated, orthogonal regression can be applied. We test the performance of orthogonal regression in its general form against Gaussian and non-Gaussian data and error distributions and compare it with standard least-squares regression. General orthogonal regression is found to be superior or equal to standard least squares in all the cases investigated and its use is recommended. We also compare the performance of orthogonal regression versus standard regression when, as often happens in the literature, the ratio between errors on the independent and the dependent variables cannot be estimated and is arbitrarily set to 1. We apply these results to magnitude scale conversion, which is a common problem in seismology, with important implications in seismic hazard evaluation, and analyse it through specific tests. Our analysis concludes that the commonly used standard regression may induce systematic errors in magnitude conversion as high as 0.3-0.4, and, even more importantly, this can introduce apparent catalogue incompleteness, as well as a heavy bias in estimates of the slope of the frequency-magnitude distributions. All this can be avoided by using the general orthogonal regression in magnitude conversions.
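
    General orthogonal regression can be sketched via the standard Deming estimator (a textbook formulation, not code from the article): delta is the assumed ratio of the y-error variance to the x-error variance, and delta = 1 gives the symmetric orthogonal regression used when the ratio cannot be estimated.

    ```python
    import numpy as np

    def deming(x, y, delta=1.0):
        """Deming (general orthogonal) regression slope and intercept."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        sxx, syy = x.var(), y.var()
        sxy = np.mean((x - x.mean()) * (y - y.mean()))
        slope = (syy - delta * sxx
                 + np.sqrt((syy - delta * sxx) ** 2
                           + 4 * delta * sxy ** 2)) / (2 * sxy)
        return slope, y.mean() - slope * x.mean()

    # On noise-free collinear data the estimator recovers the true line.
    x = np.arange(10.0)
    slope, intercept = deming(x, 2.0 * x + 1.0)
    print(slope, intercept)  # 2.0 1.0
    ```

    Unlike ordinary least squares, the fitted line is the same whichever variable is treated as "dependent", which is what makes the method suitable for magnitude-magnitude conversions where both scales carry error.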

  7. Multivariate Regression with Calibration*

    PubMed Central

    Liu, Han; Wang, Lie; Zhao, Tuo

    2014-01-01

    We propose a new method named calibrated multivariate regression (CMR) for fitting high dimensional multivariate regression models. Compared to existing methods, CMR calibrates the regularization for each regression task with respect to its noise level so that it is simultaneously tuning insensitive and achieves an improved finite-sample performance. Computationally, we develop an efficient smoothed proximal gradient algorithm which has a worst-case iteration complexity O(1/ε), where ε is a pre-specified numerical accuracy. Theoretically, we prove that CMR achieves the optimal rate of convergence in parameter estimation. We illustrate the usefulness of CMR by thorough numerical simulations and show that CMR consistently outperforms other high dimensional multivariate regression methods. We also apply CMR on a brain activity prediction problem and find that CMR is as competitive as the handcrafted model created by human experts. PMID:25620861

  8. Metamorphic geodesic regression.

    PubMed

    Hong, Yi; Joshi, Sarang; Sanchez, Mar; Styner, Martin; Niethammer, Marc

    2012-01-01

    We propose a metamorphic geodesic regression approach approximating spatial transformations for image time-series while simultaneously accounting for intensity changes. Such changes occur for example in magnetic resonance imaging (MRI) studies of the developing brain due to myelination. To simplify computations we propose an approximate metamorphic geodesic regression formulation that only requires pairwise computations of image metamorphoses. The approximated solution is an appropriately weighted average of initial momenta. To obtain initial momenta reliably, we develop a shooting method for image metamorphosis.

  9. Radial basis function networks with linear interval regression weights for symbolic interval data.

    PubMed

    Su, Shun-Feng; Chuang, Chen-Chia; Tao, C W; Jeng, Jin-Tsong; Hsiao, Chih-Ching

    2012-02-01

    This paper introduces a new structure of radial basis function networks (RBFNs) that can successfully model symbolic interval-valued data. In the proposed structure, to handle symbolic interval data, the Gaussian functions required in the RBFNs are modified to consider interval distance measure, and the synaptic weights of the RBFNs are replaced by linear interval regression weights. In the linear interval regression weights, the lower and upper bounds of the interval-valued data as well as the center and range of the interval-valued data are considered. In addition, in the proposed approach, two stages of learning mechanisms are proposed. In stage 1, an initial structure (i.e., the number of hidden nodes and the adjustable parameters of radial basis functions) of the proposed structure is obtained by the interval competitive agglomeration clustering algorithm. In stage 2, a gradient-descent kind of learning algorithm is applied to fine-tune the parameters of the radial basis function and the coefficients of the linear interval regression weights. Various experiments are conducted, and the average behavior of the root mean square error and the square of the correlation coefficient in the framework of a Monte Carlo experiment are considered as the performance index. The results clearly show the effectiveness of the proposed structure.

  10. Gas-film coefficients for streams

    USGS Publications Warehouse

    Rathbun, R.E.; Tai, D.Y.

    1983-01-01

    Equations for predicting the gas-film coefficient for the volatilization of organic solutes from streams are developed. The film coefficient is a function of windspeed and water temperature. The dependence of the coefficient on windspeed is determined from published information on the evaporation of water from a canal. The dependence of the coefficient on temperature is determined from laboratory studies on the evaporation of water. Procedures for adjusting the coefficients for different organic solutes are based on the molecular diffusion coefficient and the molecular weight. The molecular weight procedure is easiest to use because of the availability of molecular weights. However, the theoretical basis of the procedure is questionable. The diffusion coefficient procedure is supported by considerable data. Questions, however, remain regarding the exact dependence of the film coefficient on the diffusion coefficient. It is suggested that the diffusion coefficient procedure with a 0.68-power dependence be used when precise estimates of the gas-film coefficient are needed and that the molecular weight procedure be used when only approximate estimates are needed.
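
    The two adjustment procedures can be sketched as follows. The 0.68-power dependence is the one recommended in the abstract; the inverse-square-root molecular-weight scaling below is an assumed Graham's-law-style form, since the abstract does not state its exact shape.

    ```python
    def kg_from_diffusion(kg_water, d_solute, d_water):
        """Preferred procedure: scale the water-vapor gas-film coefficient by
        the ratio of molecular diffusion coefficients to the 0.68 power."""
        return kg_water * (d_solute / d_water) ** 0.68

    def kg_from_mol_weight(kg_water, mw_solute, mw_water=18.015):
        """Approximate procedure: assumed inverse-square-root molecular-weight
        scaling (illustrative form, not stated in the abstract)."""
        return kg_water * (mw_water / mw_solute) ** 0.5
    ```

    For example, a solute diffusing one-third as fast as water vapor would get a gas-film coefficient of about (1/3)^0.68, roughly 0.47 times that of water.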

  11. Latent Regression Analysis.

    PubMed

    Tarpey, Thaddeus; Petkova, Eva

    2010-07-01

    Finite mixture models have come to play a very prominent role in modelling data. The finite mixture model is predicated on the assumption that distinct latent groups exist in the population. The finite mixture model therefore is based on a categorical latent variable that distinguishes the different groups. Often in practice distinct sub-populations do not actually exist. For example, disease severity (e.g. depression) may vary continuously and therefore, a distinction of diseased and not-diseased may not be based on the existence of distinct sub-populations. Thus, what is needed is a generalization of the finite mixture's discrete latent predictor to a continuous latent predictor. We cast the finite mixture model as a regression model with a latent Bernoulli predictor. A latent regression model is proposed by replacing the discrete Bernoulli predictor by a continuous latent predictor with a beta distribution. Motivation for the latent regression model arises from applications where distinct latent classes do not exist, but instead individuals vary according to a continuous latent variable. The shapes of the beta density are very flexible and can approximate the discrete Bernoulli distribution. Examples and a simulation are provided to illustrate the latent regression model. In particular, the latent regression model is used to model placebo effect among drug treated subjects in a depression study. PMID:20625443

  12. Semiparametric Regression Pursuit.

    PubMed

    Huang, Jian; Wei, Fengrong; Ma, Shuangge

    2012-10-01

    The semiparametric partially linear model allows flexible modeling of covariate effects on the response variable in regression. It combines the flexibility of nonparametric regression and the parsimony of linear regression. The most important assumption in existing estimation methods for this model is that it is known a priori which covariates have a linear effect and which do not. However, in applied work, this is rarely known in advance. We consider the problem of estimation in partially linear models without assuming a priori which covariates have linear effects. We propose a semiparametric regression pursuit method for identifying the covariates with a linear effect. Our proposed method is a penalized regression approach using a group minimax concave penalty. Under suitable conditions we show that the proposed approach is model-pursuit consistent, meaning that it can correctly determine which covariates have a linear effect and which do not with high probability. The performance of the proposed method is evaluated using simulation studies, which support our theoretical results. A real data example is used to illustrate the application of the proposed method. PMID:23559831

  13. MLREG, stepwise multiple linear regression program

    SciTech Connect

    Carder, J.H.

    1981-09-01

    This program is written in FORTRAN for an IBM computer and performs multiple linear regressions according to a stepwise procedure. The program transforms and combines old variables into new variables, and prints input and transformed data, sums, raw sums of squares, residual sums of squares, means and standard deviations, correlation coefficients, regression results at each step, ANOVA at each step, and predicted response results at each step. This package contains an EXEC used to execute the program, sample input data and output listing, source listing, documentation, and card decks containing the EXEC, sample input, and FORTRAN source.

  14. [Understanding logistic regression].

    PubMed

    El Sanharawi, M; Naudet, F

    2013-10-01

    Logistic regression is one of the most common multivariate analysis models utilized in epidemiology. It allows the measurement of the association between the occurrence of an event (qualitative dependent variable) and factors susceptible to influence it (explicative variables). The choice of explicative variables that should be included in the logistic regression model is based on prior knowledge of the disease physiopathology and the statistical association between the variable and the event, as measured by the odds ratio. The main steps for the procedure, the conditions of application, and the essential tools for its interpretation are discussed concisely. We also discuss the importance of the choice of variables that must be included and retained in the regression model in order to avoid the omission of important confounding factors. Finally, by way of illustration, we provide an example from the literature, which should help the reader test his or her knowledge.
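
    The odds-ratio interpretation described above can be sketched as follows (a minimal illustration with hypothetical coefficients, not the article's example): the odds ratio for a one-unit increase in an explicative variable is exp(beta), and the fitted logit yields the event probability directly.

    ```python
    import math

    def odds_ratio(beta):
        """Odds ratio for a one-unit increase in the explicative variable."""
        return math.exp(beta)

    def predicted_probability(intercept, beta, x):
        """Event probability from log(p / (1 - p)) = intercept + beta * x."""
        return 1.0 / (1.0 + math.exp(-(intercept + beta * x)))

    print(round(odds_ratio(0.693), 2))  # a coefficient near 0.693 doubles the odds
    ```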

  15. Population-Sample Regression in the Estimation of Population Proportions

    ERIC Educational Resources Information Center

    Weitzman, R. A.

    2006-01-01

    Focusing on a single sample obtained randomly with replacement from a single population, this article examines the regression of population on sample proportions and develops an unbiased estimator of the square of the correlation between them. This estimator turns out to be the regression coefficient. Use of the squared-correlation estimator as a…

  16. A SEMIPARAMETRIC BAYESIAN MODEL FOR CIRCULAR-LINEAR REGRESSION

    EPA Science Inventory

    We present a Bayesian approach to regress a circular variable on a linear predictor. The regression coefficients are assumed to have a nonparametric distribution with a Dirichlet process prior. The semiparametric Bayesian approach gives added flexibility to the model and is usefu...

  17. Continuous water-quality monitoring and regression analysis to estimate constituent concentrations and loads in the Red River of the North at Fargo and Grand Forks, North Dakota, 2003-12

    USGS Publications Warehouse

    Galloway, Joel M.

    2014-01-01

    The Red River of the North (hereafter referred to as “Red River”) Basin is an important hydrologic region where water is a valuable resource for the region’s economy. Continuous water-quality monitors have been operated by the U.S. Geological Survey, in cooperation with the North Dakota Department of Health, Minnesota Pollution Control Agency, City of Fargo, City of Moorhead, City of Grand Forks, and City of East Grand Forks at the Red River at Fargo, North Dakota, from 2003 through 2012 and at Grand Forks, N.Dak., from 2007 through 2012. The purpose of the monitoring was to provide a better understanding of the water-quality dynamics of the Red River and provide a way to track changes in water quality. Regression equations were developed that can be used to estimate concentrations and loads for dissolved solids, sulfate, chloride, nitrate plus nitrite, total phosphorus, and suspended sediment using explanatory variables such as streamflow, specific conductance, and turbidity. Specific conductance was determined to be a significant explanatory variable for estimating dissolved solids concentrations at the Red River at Fargo and Grand Forks. The regression equations provided good relations between dissolved solid concentrations and specific conductance for the Red River at Fargo and at Grand Forks, with adjusted coefficients of determination of 0.99 and 0.98, respectively. Specific conductance, log-transformed streamflow, and a seasonal component were statistically significant explanatory variables for estimating sulfate in the Red River at Fargo and Grand Forks. Regression equations provided good relations between sulfate concentrations and the explanatory variables, with adjusted coefficients of determination of 0.94 and 0.89, respectively. For the Red River at Fargo and Grand Forks, specific conductance, streamflow, and a seasonal component were statistically significant explanatory variables for estimating chloride. For the Red River at Grand Forks, a time

  18. Estimating Tortuosity Coefficients Based on Hydraulic Conductivity.

    PubMed

    Carey, Grant R; McBean, Edward A; Feenstra, Stan

    2016-07-01

    While the tortuosity coefficient is commonly estimated using an expression based on total porosity, this relationship is demonstrated to not be applicable (and thus is often misapplied) over a broad range of soil textures. The fundamental basis for a correlation between the apparent diffusion tortuosity coefficient and hydraulic conductivity is demonstrated, although such a relationship is not typically considered. An empirical regression for estimating the tortuosity coefficient based on hydraulic conductivity for saturated, unconsolidated soil is derived based on results from 14 previously reported diffusion experiments performed with a broad range of soil textures. Analyses of these experimental results confirm that total porosity is a poor predictor for the tortuosity coefficient over a large range of soil textures. The apparent diffusion tortuosity coefficient is more reliably estimated based on hydraulic conductivity. PMID:27315019

  19. Practical Session: Logistic Regression

    NASA Astrophysics Data System (ADS)

    Clausel, M.; Grégoire, G.

    2014-12-01

    An exercise is proposed to illustrate logistic regression. One investigates the different risk factors for the occurrence of coronary heart disease. It has been proposed in Chapter 5 of the book by D.G. Kleinbaum and M. Klein, "Logistic Regression", Statistics for Biology and Health, Springer Science+Business Media, LLC (2010), and also by D. Chessel and A.B. Dufour in Lyon 1 (see Sect. 6 of http://pbil.univ-lyon1.fr/R/pdf/tdr341.pdf). This example is based on data given in the file evans.txt coming from http://www.sph.emory.edu/dkleinb/logreg3.htm#data.

  20. Explorations in Statistics: Regression

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2011-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This seventh installment of "Explorations in Statistics" explores regression, a technique that estimates the nature of the relationship between two things for which we may only surmise a mechanistic or predictive connection.…

  1. Modern Regression Discontinuity Analysis

    ERIC Educational Resources Information Center

    Bloom, Howard S.

    2012-01-01

    This article provides a detailed discussion of the theory and practice of modern regression discontinuity (RD) analysis for estimating the effects of interventions or treatments. Part 1 briefly chronicles the history of RD analysis and summarizes its past applications. Part 2 explains how in theory an RD analysis can identify an average effect of…

  2. Mechanisms of neuroblastoma regression

    PubMed Central

    Brodeur, Garrett M.; Bagatell, Rochelle

    2014-01-01

    Recent genomic and biological studies of neuroblastoma have shed light on the dramatic heterogeneity in the clinical behaviour of this disease, which spans from spontaneous regression or differentiation in some patients, to relentless disease progression in others, despite intensive multimodality therapy. This evidence also suggests several possible mechanisms to explain the phenomena of spontaneous regression in neuroblastomas, including neurotrophin deprivation, humoral or cellular immunity, loss of telomerase activity and alterations in epigenetic regulation. A better understanding of the mechanisms of spontaneous regression might help to identify optimal therapeutic approaches for patients with these tumours. Currently, the most druggable mechanism is the delayed activation of developmentally programmed cell death regulated by the tropomyosin receptor kinase A pathway. Indeed, targeted therapy aimed at inhibiting neurotrophin receptors might be used in lieu of conventional chemotherapy or radiation in infants with biologically favourable tumours that require treatment. Alternative approaches consist of breaking immune tolerance to tumour antigens or activating neurotrophin receptor pathways to induce neuronal differentiation. These approaches are likely to be most effective against biologically favourable tumours, but they might also provide insights into treatment of biologically unfavourable tumours. We describe the different mechanisms of spontaneous neuroblastoma regression and the consequent therapeutic approaches. PMID:25331179

  3. Realization of Ridge Regression in MATLAB

    NASA Astrophysics Data System (ADS)

    Dimitrov, S.; Kovacheva, S.; Prodanova, K.

    2008-10-01

    The least squares estimator (LSE) of the coefficients in the classical linear regression model is unbiased. When the columns of the design matrix are multicollinear, however, the LSE has a very large variance, i.e., the estimator is unstable. A more stable (though biased) estimator can be constructed using the ridge estimator (RE). In this paper the basic methods of obtaining ridge estimators and numerical procedures for their realization in MATLAB are considered. An application to a pharmacokinetics problem is also presented.
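    The closed-form ridge estimator the abstract refers to is easy to sketch in NumPy. The snippet below is a hedged illustration on invented, nearly collinear data (it is not the paper's MATLAB code); it shows the ridge estimate shrinking and stabilizing the coefficients relative to the least squares estimate.

```python
import numpy as np

def ridge_estimator(X, y, k):
    """Ridge estimator: beta = (X'X + k*I)^(-1) X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

# Nearly collinear design: the second column almost duplicates the first.
rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
X = np.column_stack([x1, x1 + 1e-3 * rng.normal(size=100)])
y = X @ np.array([1.0, 1.0]) + 0.1 * rng.normal(size=100)

beta_ols = ridge_estimator(X, y, 0.0)    # LSE: unbiased but unstable here
beta_ridge = ridge_estimator(X, y, 1.0)  # biased, but far smaller variance
```

The ridge solution norm is non-increasing in the ridge parameter k, so the ridge coefficients are never longer than the LSE coefficients; only their combined effect (the sum, here near 2) is well determined under collinearity.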

  4. Calculation of Solar Radiation by Using Regression Methods

    NASA Astrophysics Data System (ADS)

    Kızıltan, Ö.; Şahin, M.

    2016-04-01

    In this study, solar radiation was estimated at 53 locations over Turkey, with varying climatic conditions, using linear, ridge, lasso, smoother, partial least squares, KNN and Gaussian process regression methods. Data from 2002 and 2003 were used to obtain the regression coefficients of the relevant methods. The coefficients were obtained from the input parameters: month, altitude, latitude, longitude and land-surface temperature (LST). The values for LST were obtained from the data of the National Oceanic and Atmospheric Administration Advanced Very High Resolution Radiometer (NOAA-AVHRR) satellite. Solar radiation for 2004 was then calculated using the obtained coefficients, and the results were compared statistically. The most successful method was Gaussian process regression; the least successful was lasso regression. The mean bias error (MBE) of the Gaussian process regression method was 0.274 MJ/m2, and its root mean square error (RMSE) was 2.260 MJ/m2. The correlation coefficient of the method was 0.941. These statistical results are consistent with the literature, and the Gaussian process regression method is recommended for other studies.
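    The error measures used in the comparison are simple to compute. This is a generic sketch with hypothetical daily radiation values, not the study's data:

```python
import numpy as np

def mbe(obs, pred):
    """Mean bias error: average signed deviation (MJ/m^2)."""
    return np.mean(np.asarray(pred) - np.asarray(obs))

def rmse(obs, pred):
    """Root mean square error (MJ/m^2)."""
    return np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2))

# Hypothetical daily solar radiation values (MJ/m^2) for one station.
observed  = [18.2, 20.5, 16.9, 22.1, 19.4]
predicted = [18.0, 21.3, 16.2, 22.9, 19.8]
bias, error = mbe(observed, predicted), rmse(observed, predicted)
```

By Jensen's inequality the RMSE is always at least the absolute MBE, which is why the two are reported together: one measures systematic bias, the other total scatter.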

  5. Ridge Regression Signal Processing

    NASA Technical Reports Server (NTRS)

    Kuhl, Mark R.

    1990-01-01

    The introduction of the Global Positioning System (GPS) into the National Airspace System (NAS) necessitates the development of Receiver Autonomous Integrity Monitoring (RAIM) techniques. In order to guarantee a certain level of integrity, a thorough understanding of modern estimation techniques applied to navigational problems is required. The extended Kalman filter (EKF) is derived and analyzed under poor geometry conditions. It was found that the performance of the EKF is difficult to predict, since the EKF is designed for a Gaussian environment. A novel approach is implemented which incorporates ridge regression to explain the behavior of an EKF in the presence of dynamics under poor geometry conditions. The basic principles of ridge regression theory are presented, followed by the derivation of a linearized recursive ridge estimator. Computer simulations are performed to confirm the underlying theory and to provide a comparative analysis of the EKF and the recursive ridge estimator.

  6. Linear regression models of floor surface parameters on friction between Neolite and quarry tiles.

    PubMed

    Chang, Wen-Ruey; Matz, Simon; Grönqvist, Raoul; Hirvonen, Mikko

    2010-01-01

    For slips and falls, friction is widely used as an indicator of surface slipperiness. Surface parameters, including surface roughness and waviness, were shown to influence friction by correlating individual surface parameters with the measured friction. A collective input from multiple surface parameters as a predictor of friction, however, could provide a broader perspective on the contributions from all the surface parameters evaluated. The objective of this study was to develop regression models between the surface parameters and measured friction. The dynamic friction was measured using three different mixtures of glycerol and water as contaminants. Various surface roughness and waviness parameters were measured using three different cut-off lengths. The regression models indicate that the selected surface parameters can predict the measured friction coefficient reliably in most of the glycerol concentrations and cut-off lengths evaluated. The results of the regression models were, in general, consistent with those obtained from the correlation between individual surface parameters and the measured friction in eight out of nine conditions evaluated in this experiment. A hierarchical regression model was further developed to evaluate the cumulative contributions of the surface parameters in the final iteration by adding these parameters to the regression model one at a time from the easiest to measure to the most difficult to measure and evaluating their impacts on the adjusted R(2) values. For practical purposes, the surface parameter R(a) alone would account for the majority of the measured friction even if it did not reach a statistically significant level in some of the regression models.
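    The hierarchical step of adding surface parameters one at a time and tracking the adjusted R(2) can be sketched as follows. The predictor names and data are invented stand-ins for the roughness and waviness parameters, not the study's measurements:

```python
import numpy as np

def adjusted_r2(X, y):
    """OLS fit with intercept; returns the adjusted R^2."""
    n, p = X.shape
    Xd = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    r2 = 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Hypothetical surface parameters: an Ra-like roughness term (easy to
# measure), a waviness term, and an irrelevant extra parameter.
rng = np.random.default_rng(1)
n = 200
roughness = rng.normal(size=n)
waviness = rng.normal(size=n)
irrelevant = rng.normal(size=n)
friction = 0.8 * roughness + 0.1 * waviness + 0.3 * rng.normal(size=n)

# Add predictors one at a time, easiest first, and track adjusted R^2.
trace = [adjusted_r2(np.column_stack(cols), friction)
         for cols in ([roughness],
                      [roughness, waviness],
                      [roughness, waviness, irrelevant])]
```

Unlike plain R^2, the adjusted version penalizes each added predictor, so an irrelevant parameter cannot inflate it, which mirrors the paper's finding that the roughness parameter alone accounts for most of the measured friction.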

  7. Regional Regression Equations to Estimate Flow-Duration Statistics at Ungaged Stream Sites in Connecticut

    USGS Publications Warehouse

    Ahearn, Elizabeth A.

    2010-01-01

    contrast, the Rearing and Growth (July-October) bioperiod had the largest standard errors, ranging from 30.9 to 156 percent. The adjusted coefficient of determination of the equations ranged from 77.5 to 99.4 percent with medians of 98.5 and 90.6 percent to predict the 25- and 99-percent exceedances, respectively. Descriptive information on the streamgages used in the regression, measured basin and climatic characteristics, and estimated flow-duration statistics are provided in this report. Flow-duration statistics and the 32 regression equations for estimating flow-duration statistics in Connecticut are stored on the U.S. Geological Survey World Wide Web application "StreamStats" (http://water.usgs.gov/osw/streamstats/index.html). The regression equations developed in this report can be used to produce unbiased estimates of select flow exceedances statewide.

  8. Combining Genomic and Genealogical Information in a Reproducing Kernel Hilbert Spaces Regression Model for Genome-Enabled Predictions in Dairy Cattle

    PubMed Central

    Rodríguez-Ramilo, Silvia Teresa; García-Cortés, Luis Alberto; González-Recio, Óscar

    2014-01-01

    Genome-enhanced genotypic evaluations are becoming popular in several livestock species. For this purpose, the combination of the pedigree-based relationship matrix with a genomic similarities matrix between individuals is a common approach. However, the weight placed on each matrix has been so far established with ad hoc procedures, without formal estimation thereof. In addition, when using marker- and pedigree-based relationship matrices together, the resulting combined relationship matrix needs to be adjusted to the same scale in reference to the base population. This study proposes a semi-parametric Bayesian method for combining marker- and pedigree-based information on genome-enabled predictions. A kernel matrix from a reproducing kernel Hilbert spaces regression model was used to combine genomic and genealogical information in a semi-parametric scenario, avoiding inversion and adjustment complications. In addition, the weights on marker- versus pedigree-based information were inferred from a Bayesian model with Markov chain Monte Carlo. The proposed method was assessed involving a large number of SNPs and a large reference population. Five phenotypes, including production and type traits of dairy cattle were evaluated. The reliability of the genome-based predictions was assessed using the correlation, regression coefficient and mean squared error between the predicted and observed values. The results indicated that when a larger weight was given to the pedigree-based relationship matrix the correlation coefficient was lower than in situations where more weight was given to genomic information. Importantly, the posterior means of the inferred weight were near the maximum of 1. The behavior of the regression coefficient and the mean squared error was similar to the performance of the correlation, that is, more weight to the genomic information provided a regression coefficient closer to one and a smaller mean squared error. 
Our results also indicated a greater

  9. Investigating the Performance of Alternate Regression Weights by Studying All Possible Criteria in Regression Models with a Fixed Set of Predictors

    ERIC Educational Resources Information Center

    Waller, Niels; Jones, Jeff

    2011-01-01

    We describe methods for assessing all possible criteria (i.e., dependent variables) and subsets of criteria for regression models with a fixed set of predictors, x (where x is an n x 1 vector of independent variables). Our methods build upon the geometry of regression coefficients (hereafter called regression weights) in n-dimensional space. For a…

  10. Orthogonal Regression: A Teaching Perspective

    ERIC Educational Resources Information Center

    Carr, James R.

    2012-01-01

    A well-known approach to linear least squares regression is that which involves minimizing the sum of squared orthogonal projections of data points onto the best fit line. This form of regression is known as orthogonal regression, and the linear model that it yields is known as the major axis. A similar method, reduced major axis regression, is…

  11. Structural regression trees

    SciTech Connect

    Kramer, S.

    1996-12-31

    In many real-world domains the task of machine learning algorithms is to learn a theory for predicting numerical values. In particular several standard test domains used in Inductive Logic Programming (ILP) are concerned with predicting numerical values from examples and relational and mostly non-determinate background knowledge. However, so far no ILP algorithm except one can predict numbers and cope with nondeterminate background knowledge. (The only exception is a covering algorithm called FORS.) In this paper we present Structural Regression Trees (SRT), a new algorithm which can be applied to the above class of problems. SRT integrates the statistical method of regression trees into ILP. It constructs a tree containing a literal (an atomic formula or its negation) or a conjunction of literals in each node, and assigns a numerical value to each leaf. SRT provides more comprehensible results than purely statistical methods, and can be applied to a class of problems most other ILP systems cannot handle. Experiments in several real-world domains demonstrate that the approach is competitive with existing methods, indicating that the advantages are not at the expense of predictive accuracy.
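    The statistical regression-tree component that SRT builds on reduces, in the simplest numeric case, to searching for the split that minimizes the summed squared error of mean predictions in the two resulting leaves. A minimal sketch with toy data (this illustrates the regression-tree split only, not SRT's ILP machinery):

```python
import numpy as np

def best_split(x, y):
    """Find the threshold on x minimizing the summed squared error of a
    one-split regression tree (mean prediction in each leaf)."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best = (None, np.inf)
    for i in range(1, len(xs)):
        left, right = ys[:i], ys[i:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[1]:
            best = ((xs[i - 1] + xs[i]) / 2, sse)
    return best

# Two clearly separated regimes in y; the split should land between them.
x = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])
y = np.array([5.0, 5.5, 4.5, 20.0, 21.0, 19.0])
threshold, sse = best_split(x, y)
```

A full tree applies this search recursively in each leaf; SRT replaces the numeric threshold test with a literal or conjunction of literals in each node.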

  12. The intraclass correlation coefficient: distribution-free definition and test.

    PubMed

    Commenges, D; Jacqmin, H

    1994-06-01

    A definition of the intraclass correlation coefficient is given on the basis of a general class of random effect models. The conventional intraclass correlation coefficient and the intracluster correlation coefficient for binary data are both particular cases of the generalized coefficient. We derive the score test for the hypothesis of null intraclass correlation in the exponential family. The statistic does not depend on the particular distribution in this family and is related to the pairwise correlation coefficient. The test can be adjusted for explanatory variables.
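    For the conventional intraclass correlation mentioned above, the one-way ANOVA estimator for equal-sized clusters can be sketched as follows; the simulated clusters below are constructed to have a true ICC of 4/(4+1) = 0.8. (This is the classical ANOVA estimator, not the distribution-free definition the paper develops.)

```python
import numpy as np

def icc_oneway(groups):
    """One-way ANOVA estimator of the intraclass correlation,
    assuming equal-sized clusters."""
    data = np.asarray(groups, dtype=float)
    m, k = data.shape                      # clusters, observations per cluster
    grand = data.mean()
    msb = k * ((data.mean(axis=1) - grand) ** 2).sum() / (m - 1)
    msw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (m * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(2)
cluster_effect = rng.normal(scale=2.0, size=50)     # shared within cluster
clusters = [e + rng.normal(scale=1.0, size=4) for e in cluster_effect]
rho = icc_oneway(clusters)   # true ICC = 4 / (4 + 1) = 0.8
```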

  13. CSWS-related autistic regression versus autistic regression without CSWS.

    PubMed

    Tuchman, Roberto

    2009-08-01

    Continuous spike-waves during slow-wave sleep (CSWS) and Landau-Kleffner syndrome (LKS) are two clinical epileptic syndromes that are associated with the electroencephalography (EEG) pattern of electrical status epilepticus during slow wave sleep (ESES). Autistic regression occurs in approximately 30% of children with autism and is associated with an epileptiform EEG in approximately 20%. The behavioral phenotypes of CSWS, LKS, and autistic regression overlap. However, the differences in age of regression, degree and type of regression, and frequency of epilepsy and EEG abnormalities suggest that these are distinct phenotypes. CSWS with autistic regression is rare, as is autistic regression associated with ESES. The pathophysiology and as such the treatment implications for children with CSWS and autistic regression are distinct from those with autistic regression without CSWS.

  14. Advantages of geographically weighted regression for modeling benthic substrate in two Greater Yellowstone Ecosystem streams

    USGS Publications Warehouse

    Sheehan, Kenneth R.; Strager, Michael P.; Welsh, Stuart

    2013-01-01

    Stream habitat assessments are commonplace in fish management, and often involve nonspatial analysis methods for quantifying or predicting habitat, such as ordinary least squares regression (OLS). Spatial relationships, however, often exist among stream habitat variables. For example, water depth, water velocity, and benthic substrate sizes within streams are often spatially correlated and may exhibit spatial nonstationarity or inconsistency in geographic space. Thus, analysis methods should address spatial relationships within habitat datasets. In this study, OLS and a recently developed method, geographically weighted regression (GWR), were used to model benthic substrate from water depth and water velocity data at two stream sites within the Greater Yellowstone Ecosystem. For data collection, each site was represented by a grid of 0.1 m2 cells, where actual values of water depth, water velocity, and benthic substrate class were measured for each cell. Accuracies of regressed substrate class data by OLS and GWR methods were calculated by comparing maps, parameter estimates, and the coefficient of determination (r2). For analysis of data from both sites, Akaike’s Information Criterion corrected for sample size indicated the best approximating model for the data resulted from GWR and not from OLS. Adjusted r2 values also supported GWR as a better approach than OLS for prediction of substrate. This study supports GWR (a spatial analysis approach) over nonspatial OLS methods for prediction of habitat for stream habitat assessments.

  15. Adjusting effect estimates for unmeasured confounding with validation data using propensity score calibration

    PubMed Central

    Stürmer, Til; Schneeweiss, Sebastian; Avorn, Jerry; Glynn, Robert J

    2006-01-01

    Often important confounders are not available in studies. Sensitivity analyses based on the relation of single, but not multiple, unmeasured confounders with an exposure of interest in a separate validation study have been proposed. The authors controlled for measured confounding in the main cohort using propensity scores (PS) and addressed unmeasured confounding by estimating two additional PS in a validation study. The ‘error-prone’ PS exclusively used information available in the main cohort. The ‘gold-standard’ PS additionally included covariates available only in the validation study. Based on these two PS in the validation study, regression calibration was applied to adjust regression coefficients. This propensity score calibration (PSC) adjusts for unmeasured confounding in cohort studies with validation data under certain, usually untestable, assumptions. PSC was used to assess nonsteroidal antiinflammatory drugs (NSAID) and 1-year mortality in a large cohort of elderly. ‘Traditional’ adjustment resulted in a relative risk (RR) in NSAID users of 0.80 (95% confidence interval: 0.77–0.83) compared to an unadjusted RR of 0.68 (0.66–0.71). Application of PSC resulted in a more plausible RR of 1.06 (1.00–1.12). Until validity and limitations of PSC have been assessed in different settings, the method should be seen as a sensitivity analysis. PMID:15987725
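    The regression-calibration step at the heart of PSC can be illustrated in one dimension: a calibration model fitted in the validation data maps the error-prone score toward the gold-standard score, and the main-study regression is rerun on the calibrated values. This is a deliberately simplified sketch with simulated scores and a continuous outcome, not the paper's full propensity score estimation:

```python
import numpy as np

rng = np.random.default_rng(3)
n_main, n_val = 2000, 500

# Validation study observes both the gold-standard score x and the
# error-prone score w; the main study observes only w.
x_val = rng.normal(size=n_val)
w_val = x_val + rng.normal(scale=0.7, size=n_val)

# Calibration model E[x | w], fitted in the validation data.
lam, intercept = np.polyfit(w_val, x_val, 1)   # slope, intercept

# Main study: the outcome depends on the gold-standard score.
x_main = rng.normal(size=n_main)
w_main = x_main + rng.normal(scale=0.7, size=n_main)
y_main = 2.0 * x_main + rng.normal(size=n_main)

naive = np.polyfit(w_main, y_main, 1)[0]                     # attenuated
calibrated = np.polyfit(intercept + lam * w_main, y_main, 1)[0]
```

The naive slope is attenuated toward zero by the measurement error in w; replacing w with the calibrated E[x | w] recovers a coefficient near the true value of 2, under the usual (and, as the abstract notes, usually untestable) calibration assumptions.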

  16. Computing measures of explained variation for logistic regression models.

    PubMed

    Mittlböck, M; Schemper, M

    1999-01-01

    The proportion of explained variation (R2) is frequently used in the general linear model but in logistic regression no standard definition of R2 exists. We present a SAS macro which calculates two R2-measures based on Pearson and on deviance residuals for logistic regression. Also, adjusted versions for both measures are given, which should prevent the inflation of R2 in small samples. PMID:10195643
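    One of the two measures the macro computes, the deviance-based R2, compares the fitted model's deviance with that of the null (intercept-only) model. The sketch below illustrates that measure in Python on simulated data; the SAS macro itself is not reproduced, and the logistic fit is a plain Newton-Raphson implementation:

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Logistic regression via Newton-Raphson (intercept included)."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(Xd.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-Xd @ beta))
        W = p * (1 - p)
        beta += np.linalg.solve((Xd.T * W) @ Xd, Xd.T @ (y - p))
    return beta, Xd

def deviance(y, p):
    eps = 1e-12
    return -2 * np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 2))
p_true = 1 / (1 + np.exp(-(X @ np.array([1.5, -1.0]))))
y = rng.binomial(1, p_true)

beta, Xd = fit_logistic(X, y)
p_hat = 1 / (1 + np.exp(-Xd @ beta))
# Deviance-based R^2: 1 - D(model) / D(null model).
r2_dev = 1 - deviance(y, p_hat) / deviance(y, np.full_like(p_hat, y.mean()))
```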

  17. Assessing risk factors for periodontitis using regression

    NASA Astrophysics Data System (ADS)

    Lobo Pereira, J. A.; Ferreira, Maria Cristina; Oliveira, Teresa

    2013-10-01

    Multivariate statistical analysis is indispensable to assess the associations and interactions between different factors and the risk of periodontitis. Among others, regression analysis is a statistical technique widely used in healthcare to investigate and model the relationship between variables. In our work we study the impact of socio-demographic, medical and behavioral factors on periodontal health. Using linear and logistic regression models, we can assess the relevance, as risk factors for periodontitis disease, of the following independent variables (IVs): Age, Gender, Diabetic Status, Education, Smoking status and Plaque Index. The multiple linear regression model was built to evaluate the influence of the IVs on mean Attachment Loss (AL); the regression coefficients will be obtained, along with the respective p-values from the significance tests. The classification of a case (individual) adopted in the logistic model was the extent of the destruction of periodontal tissues defined by an Attachment Loss greater than or equal to 4 mm in 25% (AL≥4mm/≥25%) of sites surveyed. The association measures include the Odds Ratios together with the corresponding 95% confidence intervals.
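    The odds ratio and its 95% confidence interval follow directly from a logistic coefficient and its standard error: OR = exp(b), CI = exp(b ± 1.96·SE). The numbers below are hypothetical, not estimates from the study:

```python
import math

# Hypothetical logistic coefficient and standard error (e.g. for Smoking status).
beta, se = 0.693, 0.20

odds_ratio = math.exp(beta)
ci = (math.exp(beta - 1.96 * se), math.exp(beta + 1.96 * se))
```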

  18. Misuse of correlation and regression in three medical journals.

    PubMed Central

    Porter, A M

    1999-01-01

    Errors relating to the use of the correlation coefficient and bivariate linear regression are often to be found in medical publications. This paper reports a literature search to define the problems. All the papers and letters published in the British Medical Journal, The Lancet and the New England Journal of Medicine during 1997 were screened for examples. Fifteen categories of errors were identified of which eight were important or common. These included: failure to define clearly the relevant sample number; the display of potentially misleading scatterplots; attachment of unwarranted importance to significance levels; and the omission of confidence intervals for correlation coefficients and around regression lines. PMID:10396255
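    One of the omissions flagged above, confidence intervals for correlation coefficients, is straightforward to remedy with the Fisher z-transformation. A sketch with an illustrative r and sample size (not figures from the surveyed journals):

```python
import math

def corr_ci(r, n, z_crit=1.96):
    """Approximate 95% CI for a correlation coefficient via the
    Fisher z-transformation (requires n > 3)."""
    z = math.atanh(r)
    se = 1 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

lo, hi = corr_ci(0.60, 50)   # r = 0.60 from a sample of 50
```

Note the interval is asymmetric around r, which is exactly the information lost when only a point estimate and p-value are reported.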

  19. Measuring Seebeck Coefficient

    NASA Technical Reports Server (NTRS)

    Snyder, G. Jeffrey (Inventor)

    2015-01-01

    A high temperature Seebeck coefficient measurement apparatus and method with various features to minimize typical sources of errors is described. Common sources of temperature and voltage measurement errors which may impact accurate measurement are identified and reduced. Applying the identified principles, a high temperature Seebeck measurement apparatus and method employing a uniaxial, four-point geometry is described to operate from room temperature up to 1300K. These techniques for non-destructive Seebeck coefficient measurements are simple to operate, and are suitable for bulk samples with a broad range of physical types and shapes.

  20. Wild bootstrap for quantile regression.

    PubMed

    Feng, Xingdong; He, Xuming; Hu, Jianhua

    2011-12-01

    The existing theory of the wild bootstrap has focused on linear estimators. In this note, we broaden its validity by providing a class of weight distributions that is asymptotically valid for quantile regression estimators. As most weight distributions in the literature lead to biased variance estimates for nonlinear estimators of linear regression, we propose a modification of the wild bootstrap that admits a broader class of weight distributions for quantile regression. A simulation study on median regression is carried out to compare various bootstrap methods. With a simple finite-sample correction, the wild bootstrap is shown to account for general forms of heteroscedasticity in a regression model with fixed design points.

  1. ADJUSTABLE DOUBLE PULSE GENERATOR

    DOEpatents

    Gratian, J.W.; Gratian, A.C.

    1961-08-01

    A modulator pulse source having adjustable pulse width and adjustable pulse spacing is described. The generator consists of a cross coupled multivibrator having adjustable time constant circuitry in each leg, an adjustable differentiating circuit in the output of each leg, a mixing and rectifying circuit for combining the differentiated pulses and generating in its output a resultant sequence of negative pulses, and a final amplifying circuit for inverting and square-topping the pulses. (AEC)

  2. Locoregional Control of Non-Small Cell Lung Cancer in Relation to Automated Early Assessment of Tumor Regression on Cone Beam Computed Tomography

    SciTech Connect

    Brink, Carsten; Bernchou, Uffe; Bertelsen, Anders; Hansen, Olfred; Schytte, Tine; Bentzen, Soren M.

    2014-07-15

    Purpose: Large interindividual variations in volume regression of non-small cell lung cancer (NSCLC) are observable on standard cone beam computed tomography (CBCT) during fractionated radiation therapy. Here, a method for automated assessment of tumor volume regression is presented and its potential use in response adapted personalized radiation therapy is evaluated empirically. Methods and Materials: Automated deformable registration with calculation of the Jacobian determinant was applied to serial CBCT scans in a series of 99 patients with NSCLC. Tumor volume at the end of treatment was estimated on the basis of the first one third and two thirds of the scans. The concordance between estimated and actual relative volume at the end of radiation therapy was quantified by Pearson's correlation coefficient. On the basis of the estimated relative volume, the patients were stratified into 2 groups having volume regressions below or above the population median value. Kaplan-Meier plots of locoregional disease-free rate and overall survival in the 2 groups were used to evaluate the predictive value of tumor regression during treatment. A Cox proportional hazards model was used to adjust for other clinical characteristics. Results: Automatic measurement of the tumor regression from standard CBCT images was feasible. Pearson's correlation coefficient between manual and automatic measurement was 0.86 in a sample of 9 patients. Most patients experienced tumor volume regression, and this could be quantified early into the treatment course. Interestingly, patients with pronounced volume regression had worse locoregional tumor control and overall survival. This was significant in patients with non-adenocarcinoma histology. Conclusions: Evaluation of routinely acquired CBCT images during radiation therapy provides biological information on the specific tumor. This could potentially form the basis for personalized response adaptive therapy.
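    The concordance check between manual and automatic volume measurements amounts to a Pearson correlation on paired values. The nine pairs below are invented for illustration and are not the study's measurements (the paper reports r = 0.86 on its own 9-patient sample):

```python
import numpy as np

# Hypothetical relative tumour volumes at end of treatment for 9 patients,
# measured manually and by an automated deformable-registration method.
manual    = np.array([0.62, 0.71, 0.55, 0.80, 0.66, 0.90, 0.47, 0.73, 0.58])
automatic = np.array([0.60, 0.75, 0.50, 0.78, 0.70, 0.88, 0.52, 0.70, 0.61])

r = np.corrcoef(manual, automatic)[0, 1]   # Pearson's correlation coefficient
```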

  3. Effect of Risk Adjustment Method on Comparisons of Health Care Utilization Between Complementary and Alternative Medicine Users and Nonusers

    PubMed Central

    Gerkovich, Mary M.; Cherkin, Daniel C.; Deyo, Richard A.; Sherman, Karen J.; Lafferty, William E.

    2013-01-01

    Abstract Objectives Complementary and alternative medicine (CAM) providers are becoming more integrated into the United States health care system. Because patients self-select CAM use, risk adjustment is needed to make the groups more comparable when analyzing utilization. This study examined how the choice of risk adjustment method affects assessment of CAM use on overall health care utilization. Design and subjects Insurance claims data for 2000–2003 from Washington State, which mandates coverage of CAM providers, were analyzed. Three (3) risk adjustment methods were compared in patients with musculoskeletal conditions: Adjusted Clinical Groups (ACG), Diagnostic Cost Groups (DCG), and the Charlson Index. Relative Value Units (RVUs) were used as a proxy for expenditures. Two (2) sets of median regression models were created: prospective, which used risk adjustments from the previous year to predict RVU in the subsequent year, and concurrent, which used risk adjustment measures to predict RVU in the same year. Results The sample included 92,474 claimants. Prospective models showed little difference in the effect of CAM use on RVU among the three risk adjustment methods, and all models had low predictive power (R2 ≤0.05). In the concurrent models, coefficients were similar in direction and magnitude for all risk adjustment methods, but in some models the predicted effect of CAM use on RVU differed by as much as double between methods. Results of DCG and ACG models were similar and were stronger than Charlson models. Conclusions Choice of risk adjustment method may have a modest effect on the outcome of interest. PMID:23036140

  4. Use and Interpretation of Effect Sizes and Structure Coefficients in Mathematics and Science Education Journals.

    ERIC Educational Resources Information Center

    Lowe, Terry J.

    Reporting and interpretation of effect sizes and structure coefficients in multiple regression results are important for good practice. The purpose of this study was to investigate the use and interpretation of effect sizes (ES) and structure coefficients in multiple regression analyses in two mathematics and science education journals. Published…

  5. Evaluating differential effects using regression interactions and regression mixture models

    PubMed Central

    Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung

    2015-01-01

    Research increasingly emphasizes understanding differential effects. This paper focuses on understanding regression mixture models, a relatively new statistical method for assessing differential effects, by comparing results with those from using an interaction term in linear regression. The research questions which each model answers, their formulation, and their assumptions are compared using Monte Carlo simulations and real data analysis. The capabilities of regression mixture models are described and specific issues to be addressed when conducting regression mixtures are proposed. The paper aims to clarify the role that regression mixtures can take in the estimation of differential effects and increase awareness of the benefits and potential pitfalls of this approach. Regression mixture models are shown to be a potentially effective exploratory method for finding differential effects when these effects can be defined by a small number of classes of respondents who share a typical relationship between a predictor and an outcome. It is also shown that the comparison between regression mixture models and interactions becomes substantially more complex as the number of classes increases. It is argued that regression interactions are well suited for direct tests of specific hypotheses about differential effects and regression mixtures provide a useful approach for exploring effect heterogeneity given adequate samples and study design. PMID:26556903
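    The interaction-term baseline that regression mixtures are compared against can be written as a single design matrix with a product column. A sketch on simulated data where the slope genuinely differs between two known groups:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 400
x = rng.normal(size=n)
z = rng.binomial(1, 0.5, size=n)                  # observed moderator (group)
y = 1.0 * x + 1.5 * x * z + rng.normal(size=n)    # slope differs by group

# Columns: intercept, x, z, x*z.
X = np.column_stack([np.ones(n), x, z, x * z])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[3] estimates the differential effect (slope difference between groups).
```

The interaction approach requires the moderator z to be observed; regression mixtures instead treat group membership as latent, which is what makes them exploratory.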

  6. Risk-adjusted antibiotic consumption in 34 public acute hospitals in Ireland, 2006 to 2014

    PubMed Central

    Oza, Ajay; Donohue, Fionnuala; Johnson, Howard; Cunney, Robert

    2016-01-01

    As antibiotic consumption rates between hospitals can vary depending on the characteristics of the patients treated, risk-adjustment that compensates for the patient-based variation is required to assess the impact of any stewardship measures. The aim of this study was to investigate the usefulness of patient-based administrative data variables for adjusting aggregate hospital antibiotic consumption rates. Data on total inpatient antibiotics and six broad subclasses were sourced from 34 acute hospitals from 2006 to 2014. Aggregate annual patient administration data were divided into explanatory variables, including major diagnostic categories, for each hospital. Multivariable regression models were used to identify factors affecting antibiotic consumption. The coefficient of variation of the root mean squared errors (CV-RMSE) for the total antibiotic usage model was very good (11%); however, the value for two of the models was poor (> 30%). The overall inpatient antibiotic consumption increased from 82.5 defined daily doses (DDD)/100 bed-days used in 2006 to 89.2 DDD/100 bed-days used in 2014; the increase was not significant after risk-adjustment. During the same period, consumption of carbapenems increased significantly, while usage of fluoroquinolones decreased. In conclusion, patient-based administrative data variables are useful for adjusting hospital antibiotic consumption rates, although additional variables should also be employed. PMID:27541730
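    The CV-RMSE used above to judge model fit is the RMSE expressed as a percentage of the mean observed value. A sketch with hypothetical consumption rates (not the study's data):

```python
import numpy as np

def cv_rmse(obs, pred):
    """Coefficient of variation of the RMSE, as a percentage of the mean."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    rmse = np.sqrt(np.mean((obs - pred) ** 2))
    return 100.0 * rmse / obs.mean()

# Hypothetical annual consumption rates (DDD/100 bed-days): observed vs fitted.
observed = [80.0, 90.0, 100.0]
fitted = [82.0, 88.0, 103.0]
score = cv_rmse(observed, fitted)   # lower is better; ~11% was "very good"
```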

  7. Risk-adjusted antibiotic consumption in 34 public acute hospitals in Ireland, 2006 to 2014.

    PubMed

    Oza, Ajay; Donohue, Fionnuala; Johnson, Howard; Cunney, Robert

    2016-08-11

    As antibiotic consumption rates between hospitals can vary depending on the characteristics of the patients treated, risk-adjustment that compensates for the patient-based variation is required to assess the impact of any stewardship measures. The aim of this study was to investigate the usefulness of patient-based administrative data variables for adjusting aggregate hospital antibiotic consumption rates. Data on total inpatient antibiotics and six broad subclasses were sourced from 34 acute hospitals from 2006 to 2014. Aggregate annual patient administration data were divided into explanatory variables, including major diagnostic categories, for each hospital. Multivariable regression models were used to identify factors affecting antibiotic consumption. The coefficient of variation of the root mean squared errors (CV-RMSE) for the total antibiotic usage model was very good (11%); however, the value for two of the models was poor (> 30%). The overall inpatient antibiotic consumption increased from 82.5 defined daily doses (DDD)/100 bed-days used in 2006 to 89.2 DDD/100 bed-days used in 2014; the increase was not significant after risk-adjustment. During the same period, consumption of carbapenems increased significantly, while usage of fluoroquinolones decreased. In conclusion, patient-based administrative data variables are useful for adjusting hospital antibiotic consumption rates, although additional variables should also be employed. PMID:27541730

  8. Household water treatment in developing countries: comparing different intervention types using meta-regression.

    PubMed

    Hunter, Paul R

    2009-12-01

    Household water treatment (HWT) is being widely promoted as an appropriate intervention for reducing the burden of waterborne disease in poor communities in developing countries. A recent study has raised concerns about the effectiveness of HWT, in part because of concerns over the lack of blinding and in part because of considerable heterogeneity in the reported effectiveness of randomized controlled trials. This study set out to attempt to investigate the causes of this heterogeneity and so identify factors associated with good health gains. Studies identified in an earlier systematic review and meta-analysis were supplemented with more recently published randomized controlled trials. A total of 28 separate studies of randomized controlled trials of HWT with 39 intervention arms were included in the analysis. Heterogeneity was studied using the "metareg" command in Stata. Initial analyses with single candidate predictors were undertaken and all variables significant at the P < 0.2 level were included in a final regression model. Further analyses were done to estimate the effect of the interventions over time by Monte Carlo modeling using @Risk and the parameter estimates from the final regression model. The overall effect size of all unblinded studies was relative risk = 0.56 (95% confidence intervals 0.51-0.63), but after adjusting for bias due to lack of blinding the effect size was much lower (RR = 0.85, 95% CI = 0.76-0.97). Four main variables were significant predictors of effectiveness of intervention in a multipredictor meta-regression model: Log duration of study follow-up (regression coefficient of log effect size = 0.186, standard error (SE) = 0.072), whether or not the study was blinded (coefficient 0.251, SE 0.066) and being conducted in an emergency setting (coefficient -0.351, SE 0.076) were all significant predictors of effect size in the final model. 
Compared to the ceramic filter all other interventions were much less effective (Biosand 0.247, 0
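The inverse-variance weighted regression underlying this kind of heterogeneity analysis can be sketched in a few lines. The study-level numbers below are hypothetical, not taken from the review; the moderator is log follow-up duration, as in the final model above.

```python
import math

# Hypothetical study-level inputs (NOT the review's data): log relative
# risk, its standard error, and one moderator (log follow-up duration, weeks).
studies = [
    (math.log(0.45), 0.10, math.log(13)),
    (math.log(0.55), 0.12, math.log(26)),
    (math.log(0.70), 0.09, math.log(52)),
    (math.log(0.85), 0.11, math.log(104)),
]

def wls_metareg(data):
    """Fixed-effect meta-regression: inverse-variance weighted least
    squares of the log effect size on a single moderator."""
    w = [1.0 / se ** 2 for _, se, _ in data]
    x = [m for _, _, m in data]
    y = [lrr for lrr, _, _ in data]
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    slope = sxy / sxx
    return ybar - slope * xbar, slope

intercept, slope = wls_metareg(studies)
print(f"moderator coefficient: {slope:.3f}")  # positive: effect fades with longer follow-up
```

A random-effects version (as "metareg" fits) would add a between-study variance component to the weights; this fixed-effect sketch shows only the core weighted fit.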

  9. Regression Models For Saffron Yields in Iran

    NASA Astrophysics Data System (ADS)

    S. H, Sanaeinejad; S. N, Hosseini

    Saffron is an important crop in social and economic terms in Khorassan Province (Northeast of Iran). In this research we tried to evaluate trends of saffron yield in recent years and to study the relationship between saffron yield and climate change. A regression analysis was used to predict saffron yield based on 20 years of yield data in Birjand, Ghaen and Ferdows cities. Climatological data for the same periods were provided by the database of the Khorassan Climatology Center. Climatological data included temperature, rainfall, relative humidity and sunshine hours for Model I, and temperature and rainfall for Model II. The results showed the coefficients of determination for Birjand, Ferdows and Ghaen for Model I were 0.69, 0.50 and 0.81 respectively. The coefficients of determination for the same cities for Model II were 0.53, 0.50 and 0.72 respectively. Multiple regression analysis indicated that among weather variables, temperature was the key parameter for variation of saffron yield. It was concluded that increasing temperature in spring was the main cause of the decline in saffron yield during recent years across the province. Finally, the yield trend was predicted for the next 5 years using time series analysis.
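The core of such a yield model is ordinary least squares with a coefficient of determination. A minimal sketch, with made-up yearly values (the negative slope mirrors the reported decline of yield with rising temperature; none of these numbers come from the study):

```python
# Hypothetical yearly data: mean spring temperature (deg C) and saffron
# yield (kg/ha). Illustrative values only.
temp = [14.2, 14.8, 15.1, 15.6, 16.0, 16.4, 16.9, 17.3]
yield_ = [5.1, 4.9, 4.8, 4.4, 4.2, 4.1, 3.8, 3.6]

def simple_ols(x, y):
    """Ordinary least squares for one predictor; returns slope,
    intercept, and the coefficient of determination R^2."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    ss_res = sum((yi - intercept - slope * xi) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return slope, intercept, 1.0 - ss_res / ss_tot

slope, intercept, r2 = simple_ols(temp, yield_)
print(f"slope = {slope:.2f} kg/ha per deg C, R^2 = {r2:.2f}")
```

Model I and Model II in the study use several predictors at once; this one-predictor version shows only the mechanics of the fit and of R^2.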

  10. Adjustment of lifetime risks of space radiation-induced cancer by the healthy worker effect and cancer misclassification.

    PubMed

    Peterson, Leif E; Kovyrshina, Tatiana

    2015-12-01

    Background. The healthy worker effect (HWE) is a source of bias in occupational studies of mortality among workers caused by use of comparative disease rates based on public data, which include mortality of unhealthy members of the public who are screened out of the workplace. For the US astronaut corps, the HWE is assumed to be strong due to the rigorous medical selection and surveillance. This investigation focused on the effect of correcting for HWE on projected lifetime risk estimates for radiation-induced cancer mortality and incidence. Methods. We performed radiation-induced cancer risk assessment using Poisson regression of cancer mortality and incidence rates among Hiroshima and Nagasaki atomic bomb survivors. Regression coefficients were used for generating risk coefficients for the excess absolute, transfer, and excess relative models. Excess lifetime risks (ELR) for radiation exposure and baseline lifetime risks (BLR) were adjusted for the HWE using standardized mortality ratios (SMR) for aviators and nuclear workers who were occupationally exposed to ionizing radiation. We also adjusted lifetime risks by cancer mortality misclassification among atomic bomb survivors. Results. For all cancers combined ("Nonleukemia"), the effect of adjusting the all-cause hazard rate by the simulated quantiles of the all-cause SMR resulted in a mean difference (not percent difference) in ELR of 0.65% and mean difference of 4% for mortality BLR, and mean change of 6.2% in BLR for incidence. The effect of adjusting the excess (radiation-induced) cancer rate or baseline cancer hazard rate by simulated quantiles of cancer-specific SMRs resulted in a mean difference of [Formula: see text] in the all-cancer mortality ELR and mean difference of [Formula: see text] in the mortality BLR. For incidence, the effect of adjusting by cancer-specific SMRs resulted in a mean change of [Formula: see text] for the all-cancer BLR. Only cancer mortality risks were adjusted by

  11. Characterization of hydrological responses to rainfall and volumetric coefficients on the event scale in rural catchments of the Iberian Peninsula

    NASA Astrophysics Data System (ADS)

    Taguas, Encarnación; Nadal-Romero, Estela; Ayuso, José L.; Casalí, Javier; Cid, Patricio; Dafonte, Jorge; Duarte, Antonio C.; Giménez, Rafael; Giráldez, Juan V.; Gómez-Macpherson, Helena; Gómez, José A.; González-Hidalgo, J. Carlos; Lucía, Ana; Mateos, Luciano; Rodríguez-Blanco, M. Luz; Schnabel, Susanne; Serrano-Muela, M. Pilar; Lana-Renault, Noemí; Mercedes Taboada-Castro, M.; Taboada-Castro, M. Teresa

    2016-04-01

    Analysis of storm rainfall-runoff data is essential to improve our understanding of catchment hydrology and to validate models supporting hydrological planning. In a context of climate change, statistical and process-based models are helpful to explore different scenarios which might be represented by simple parameters such as the volumetric runoff coefficient. In this work, rainfall-runoff event datasets collected at 17 rural catchments in the Iberian Peninsula were studied. The objectives were: i) to describe hydrological patterns/variability of the rainfall-runoff relation; ii) to explore different methodologies to quantify representative volumetric runoff coefficients. Firstly, the criteria used to define an event were examined in order to standardize the analysis. Linear regression adjustments and statistics of the rainfall-runoff relations were examined to identify possible common patterns. In addition, a principal component analysis was applied to evaluate the variability among catchments based on their physical attributes. Secondly, runoff coefficients at the event temporal scale were calculated following different methods. The median, the mean, Hawkins' graphic method (Hawkins, 1993), reference values for engineering projects of Prevert (TRAGSA, 1994) and the ratio of cumulated runoff to cumulated precipitation of the events that generated runoff (Rcum) were compared. Finally, the relations between the most representative volumetric runoff coefficients and the physical features of the catchments were explored using multiple linear regressions. The mean volumetric runoff coefficient in the studied catchments was 0.18, whereas the median was 0.15, both with variation coefficients greater than 100%. In 6 catchments, rainfall-runoff linear adjustments presented coefficients of determination greater than 0.60 (p < 0.001), while in 5 it was less than 0.40.
The slope of the linear adjustments for agricultural catchments located in areas with the lowest annual precipitation were
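Three of the estimators compared above (event-wise mean, event-wise median, and Rcum) are simple to compute from an event table. A sketch on hypothetical events, not data from any of the 17 catchments:

```python
# Hypothetical rainfall-runoff events for one catchment: (rainfall mm, runoff mm).
events = [(12.0, 1.1), (25.0, 4.0), (8.0, 0.4), (40.0, 9.5), (18.0, 2.2)]

def runoff_coefficients(events):
    """Event-wise mean and median runoff coefficients, plus Rcum:
    total runoff over total rainfall for runoff-generating events."""
    rc = sorted(q / p for p, q in events)
    n = len(rc)
    mean_rc = sum(rc) / n
    median_rc = rc[n // 2] if n % 2 else 0.5 * (rc[n // 2 - 1] + rc[n // 2])
    rcum = sum(q for _, q in events) / sum(p for p, _ in events)
    return mean_rc, median_rc, rcum

mean_rc, median_rc, rcum = runoff_coefficients(events)
print(f"mean={mean_rc:.3f} median={median_rc:.3f} Rcum={rcum:.3f}")
```

Because large events dominate the totals, Rcum and the event-wise mean typically exceed the median when the coefficient distribution is right-skewed, which is one reason the paper compares several estimators.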

  12. Linear regression in astronomy. II

    NASA Technical Reports Server (NTRS)

    Feigelson, Eric D.; Babu, Gutti J.

    1992-01-01

    A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.
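Class (1) above, an unweighted regression line with bootstrap resampling, can be sketched compactly. The data are invented calibration-style pairs; the case-resampling scheme is one of several bootstrap variants the paper discusses:

```python
import random

random.seed(0)

# Hypothetical calibration-style data (illustrative, not astronomical).
x = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
y = [1.1, 1.9, 3.2, 3.9, 5.2, 5.8, 7.1, 8.2]

def ols_slope(xs, ys):
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxy = sum((a - xbar) * (b - ybar) for a, b in zip(xs, ys))
    sxx = sum((a - xbar) ** 2 for a in xs)
    return sxy / sxx

def bootstrap_slopes(xs, ys, n_boot=2000):
    """Case-resampling bootstrap: refit the unweighted line on
    resampled (x, y) pairs to get an empirical slope distribution."""
    n = len(xs)
    slopes = []
    for _ in range(n_boot):
        idx = [random.randrange(n) for _ in range(n)]
        slopes.append(ols_slope([xs[i] for i in idx], [ys[i] for i in idx]))
    return slopes

slopes = bootstrap_slopes(x, y)
mean_slope = sum(slopes) / len(slopes)
sd = (sum((s - mean_slope) ** 2 for s in slopes) / (len(slopes) - 1)) ** 0.5
print(f"slope ~ {mean_slope:.2f} +/- {sd:.2f}")
```

The spread of the resampled slopes gives a nonparametric standard error, useful when the usual normal-theory error formula is suspect.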

  13. Quantile regression for climate data

    NASA Astrophysics Data System (ADS)

    Marasinghe, Dilhani Shalika

    Quantile regression is a developing statistical tool which is used to explain the relationship between response and predictor variables. This thesis describes two climatological applications of quantile regression. Our main goal is to estimate derivatives of a conditional mean and/or conditional quantile function. We introduce a method to handle autocorrelation in the framework of quantile regression and apply it to temperature data. We also examine some properties of tornado data, which are non-normally distributed. Even though quantile regression provides a more comprehensive view, when the residuals satisfy the normality and constant variance assumptions we would prefer least squares regression for the temperature analysis. When the normality and constant variance assumptions fail, quantile regression is the better candidate for estimating the derivative.
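What makes quantile regression robust to non-normality is its loss function: the check ("pinball") loss, minimized here by a coarse grid search for the median (tau = 0.5) on invented data with one gross outlier. Production code would use a linear-programming solver instead of a grid:

```python
# Median regression (tau = 0.5) via the check loss; toy grid search.
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.1, 3.9, 6.2, 8.0, 9.8, 12.1, 14.0, 30.0]  # y ~ 2x plus one gross outlier

def check_loss(b0, b1, tau=0.5):
    loss = 0.0
    for xi, yi in zip(x, y):
        r = yi - (b0 + b1 * xi)
        loss += tau * r if r >= 0 else (tau - 1) * r
    return loss

b0_hat, b1_hat = min(
    ((b0 / 10.0, b1 / 10.0) for b0 in range(-20, 21) for b1 in range(0, 41)),
    key=lambda p: check_loss(*p),
)
print(f"median fit: y = {b0_hat:.1f} + {b1_hat:.1f}x")
```

Unlike least squares, the fitted slope stays near 2 despite the outlier at x = 8; changing tau to 0.9 would instead trace the upper part of the conditional distribution.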

  14. Risk-adjusted monitoring of survival times

    SciTech Connect

    Sego, Landon H.; Reynolds, Marion R.; Woodall, William H.

    2009-02-26

    We consider the monitoring of clinical outcomes, where each patient has a different risk of death prior to undergoing a health care procedure. We propose a risk-adjusted survival time CUSUM chart (RAST CUSUM) for monitoring clinical outcomes where the primary endpoint is a continuous, time-to-event variable that may be right censored. Risk adjustment is accomplished using accelerated failure time regression models. We compare the average run length performance of the RAST CUSUM chart to the risk-adjusted Bernoulli CUSUM chart, using data from cardiac surgeries to motivate the details of the comparison. The comparisons show that the RAST CUSUM chart is more efficient at detecting a sudden decrease in the odds of death than the risk-adjusted Bernoulli CUSUM chart, especially when the fraction of censored observations is not too high. We also discuss the implementation of a prospective monitoring scheme using the RAST CUSUM chart.

  15. Noninvasive spectral imaging of skin chromophores based on multiple regression analysis aided by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Nishidate, Izumi; Wiswadarma, Aditya; Hase, Yota; Tanaka, Noriyuki; Maeda, Takaaki; Niizeki, Kyuichi; Aizu, Yoshihisa

    2011-08-01

    In order to visualize melanin and blood concentrations and oxygen saturation in human skin tissue, a simple imaging technique based on multispectral diffuse reflectance images acquired at six wavelengths (500, 520, 540, 560, 580 and 600 nm) was developed. The technique utilizes multiple regression analysis aided by Monte Carlo simulation for diffuse reflectance spectra. Using the absorbance spectrum as a response variable and the extinction coefficients of melanin, oxygenated hemoglobin, and deoxygenated hemoglobin as predictor variables, multiple regression analysis provides regression coefficients. Concentrations of melanin and total blood are then determined from the regression coefficients using conversion vectors that are deduced numerically in advance, while oxygen saturation is obtained directly from the regression coefficients. Experiments with a tissue-like agar gel phantom validated the method. In vivo experiments with human skin of the hand during upper limb occlusion and of the inner forearm exposed to UV irradiation demonstrated the ability of the method to evaluate physiological reactions of human skin tissue.
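The regression step can be illustrated with two chromophores: fit the absorbance spectrum as a linear mix of oxy- and deoxyhemoglobin extinction spectra, then read oxygen saturation off the two coefficients. All numbers below are made up for illustration (real extinction coefficients are tabulated per wavelength), and melanin is omitted for brevity:

```python
# Hypothetical extinction coefficients at six wavelengths (arbitrary units).
eps_oxy = [1.2, 0.9, 1.1, 0.8, 0.6, 0.5]
eps_deoxy = [0.7, 1.0, 0.8, 1.1, 0.9, 0.6]

# Synthetic absorbance for 70% oxygen saturation, total hemoglobin 1.0.
c_oxy, c_deoxy = 0.7, 0.3
absorbance = [c_oxy * a + c_deoxy * b for a, b in zip(eps_oxy, eps_deoxy)]

def fit_two_chromophores(e1, e2, A):
    """Least squares without intercept: A ~ a1*e1 + a2*e2 (2x2 normal equations)."""
    s11 = sum(v * v for v in e1)
    s22 = sum(v * v for v in e2)
    s12 = sum(u * v for u, v in zip(e1, e2))
    b1 = sum(u * v for u, v in zip(e1, A))
    b2 = sum(u * v for u, v in zip(e2, A))
    det = s11 * s22 - s12 * s12
    return (s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det

a_oxy, a_deoxy = fit_two_chromophores(eps_oxy, eps_deoxy, absorbance)
so2 = a_oxy / (a_oxy + a_deoxy)
print(f"StO2 ~ {so2:.2f}")  # recovers the 0.70 used to build the spectrum
```

The paper's method additionally includes melanin as a third predictor and maps the coefficients to concentrations through Monte Carlo-derived conversion vectors; this sketch shows only the core least-squares unmixing.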

  16. Estimating leaf photosynthetic pigments information by stepwise multiple linear regression analysis and a leaf optical model

    NASA Astrophysics Data System (ADS)

    Liu, Pudong; Shi, Runhe; Wang, Hong; Bai, Kaixu; Gao, Wei

    2014-10-01

    Leaf pigments are key elements for plant photosynthesis and growth. Traditional manual sampling of these pigments is labor-intensive and costly, and has difficulty capturing their temporal and spatial characteristics. The aim of this work is to estimate photosynthetic pigments at large scale by remote sensing. For this purpose, an inverse model was proposed with the aid of stepwise multiple linear regression (SMLR) analysis. Furthermore, a leaf radiative transfer model (i.e. the PROSPECT model) was employed to simulate leaf reflectance at wavelengths from 400 to 780 nm at 1 nm intervals, and these values were treated as data from remote sensing observations. Meanwhile, simulated chlorophyll concentration (Cab), carotenoid concentration (Car) and their ratio (Cab/Car) were taken as targets to build the regression models. In this study, a total of 4000 samples were simulated via PROSPECT with different Cab, Car and leaf mesophyll structures; 70% of these samples were used for training and the remaining 30% for model validation. Reflectance (r) and its mathematical transformations (1/r and log(1/r)) were each employed to build regression models. Results showed fair agreement between pigments and simulated reflectance, with all adjusted coefficients of determination (R2) larger than 0.8 when 6 wavebands were selected to build the SMLR model. The largest values of R2 for Cab, Car and Cab/Car were 0.8845, 0.876 and 0.8765, respectively. Meanwhile, mathematical transformations of reflectance showed little influence on regression accuracy. We concluded that it is feasible to estimate chlorophyll, carotenoids and their ratio with a statistical model based on leaf reflectance data.
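Forward stepwise selection, the greedy core of SMLR, can be sketched end to end: repeatedly add the candidate band that most improves R^2 and stop when the gain is negligible. The data are synthetic stand-ins (5 random "bands", with the target built from bands 0 and 2), not PROSPECT output:

```python
import random

random.seed(1)

# Synthetic stand-in: 5 candidate "wavebands"; the target depends only
# on bands 0 and 2, plus small noise.
X = [[random.random() for _ in range(5)] for _ in range(60)]
y = [2.0 * row[0] - 1.5 * row[2] + 0.05 * random.gauss(0, 1) for row in X]

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    beta = [0.0] * n
    for i in range(n - 1, -1, -1):
        beta[i] = (M[i][n] - sum(M[i][j] * beta[j] for j in range(i + 1, n))) / M[i][i]
    return beta

def r2_of(cols, X, y):
    """R^2 of an OLS fit (with intercept) on the selected columns."""
    n = len(y)
    Z = [[1.0] + [X[i][j] for j in cols] for i in range(n)]
    p = len(Z[0])
    XtX = [[sum(Z[i][r] * Z[i][c] for i in range(n)) for c in range(p)] for r in range(p)]
    Xty = [sum(Z[i][r] * y[i] for i in range(n)) for r in range(p)]
    beta = solve(XtX, Xty)
    yhat = [sum(zi * bi for zi, bi in zip(Z[i], beta)) for i in range(n)]
    ybar = sum(y) / n
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - ybar) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

def forward_select(X, y, min_gain=0.01):
    """Greedily add the column with the largest R^2 gain; stop when small."""
    selected, best_r2 = [], 0.0
    remaining = list(range(len(X[0])))
    while remaining:
        r2, j = max((r2_of(selected + [j], X, y), j) for j in remaining)
        if r2 - best_r2 < min_gain:
            break
        selected.append(j)
        remaining.remove(j)
        best_r2 = r2
    return selected, best_r2

sel, r2 = forward_select(X, y)
print("selected bands:", sorted(sel), "R2:", round(r2, 3))
```

Real SMLR implementations also drop variables that become insignificant after later additions (backward step) and use F-tests rather than a raw R^2 threshold; the greedy forward pass above is the essential idea.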

  17. Evaluating Differential Effects Using Regression Interactions and Regression Mixture Models

    ERIC Educational Resources Information Center

    Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung

    2015-01-01

    Research increasingly emphasizes understanding differential effects. This article focuses on understanding regression mixture models, which are relatively new statistical methods for assessing differential effects by comparing results to using an interactive term in linear regression. The research questions which each model answers, their…

  18. Validation of the Greek maternal adjustment and maternal attitudes scale for assessing early postpartum adjustment.

    PubMed

    Vivilaki, Victoria G; Dafermos, Vassilis; Gevorgian, Liana; Dimopoulou, Athanasia; Patelarou, Evridiki; Bick, Debra; Tsopelas, Nicholas D; Lionis, Christos

    2012-01-01

    The Maternal Adjustment and Maternal Attitudes Scale is a self-administered scale, designed for use in primary care settings to identify postpartum maternal adjustment problems regarding body image, sex, somatic symptoms, and marital relationships. Women were recruited within four weeks of giving birth. Responses to the Maternal Adjustment and Maternal Attitudes Scale were compared for agreement with responses to the Edinburgh Postnatal Depression Scale as a gold standard. Psychometric measurements included: reliability coefficients, explanatory factor analysis, and confirmatory analysis by linear structural relations. A receiver operating characteristic analysis was carried out to evaluate the global functioning of the scale. Of 300 mothers screened, 121 (40.7%) were experiencing difficulties in maternal adjustment and maternal attitudes. Scores on the Maternal Adjustment and Maternal Attitudes Scale correlated well with those on the Edinburgh Postnatal Depression Scale. The internal consistency of the Greek version of the Maternal Adjustment and Maternal Attitudes Scale, tested using Cronbach's alpha coefficient, was 0.859; the Guttman split-half coefficient was 0.820. Findings confirmed the multidimensionality of the Maternal Adjustment and Maternal Attitudes Scale, demonstrating a six-factor structure. The area under the receiver operating characteristic curve was 0.610, and the logistic estimate for the threshold score of 57/58 fitted the model sensitivity at 68% and model specificity at 64.6%. Data confirmed that the Greek version of the Maternal Adjustment and Maternal Attitudes Scale is a reliable and valid screening tool for both clinical practice and research purposes to detect postpartum adjustment difficulties.

  19. Continuous water-quality monitoring and regression analysis to estimate constituent concentrations and loads in the Red River of the North at Fargo and Grand Forks, North Dakota, 2003-12

    USGS Publications Warehouse

    Galloway, Joel M.

    2014-01-01

    The Red River of the North (hereafter referred to as “Red River”) Basin is an important hydrologic region where water is a valuable resource for the region’s economy. Continuous water-quality monitors have been operated by the U.S. Geological Survey, in cooperation with the North Dakota Department of Health, Minnesota Pollution Control Agency, City of Fargo, City of Moorhead, City of Grand Forks, and City of East Grand Forks at the Red River at Fargo, North Dakota, from 2003 through 2012 and at Grand Forks, N.Dak., from 2007 through 2012. The purpose of the monitoring was to provide a better understanding of the water-quality dynamics of the Red River and provide a way to track changes in water quality. Regression equations were developed that can be used to estimate concentrations and loads for dissolved solids, sulfate, chloride, nitrate plus nitrite, total phosphorus, and suspended sediment using explanatory variables such as streamflow, specific conductance, and turbidity. Specific conductance was determined to be a significant explanatory variable for estimating dissolved solids concentrations at the Red River at Fargo and Grand Forks. The regression equations provided good relations between dissolved solid concentrations and specific conductance for the Red River at Fargo and at Grand Forks, with adjusted coefficients of determination of 0.99 and 0.98, respectively. Specific conductance, log-transformed streamflow, and a seasonal component were statistically significant explanatory variables for estimating sulfate in the Red River at Fargo and Grand Forks. Regression equations provided good relations between sulfate concentrations and the explanatory variables, with adjusted coefficients of determination of 0.94 and 0.89, respectively. For the Red River at Fargo and Grand Forks, specific conductance, streamflow, and a seasonal component were statistically significant explanatory variables for estimating chloride. For the Red River at Grand Forks, a time
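The surrogate-regression idea, estimating a constituent from a cheaply monitored variable, comes down to a simple fit plus the adjusted coefficient of determination reported above. A sketch on invented pairs in the spirit of the Fargo dissolved-solids relation (the numbers are not USGS data):

```python
# Hypothetical paired observations: specific conductance (uS/cm) vs.
# dissolved-solids concentration (mg/L). Illustrative values only.
sc = [450, 520, 610, 700, 820, 910, 1050, 1200]
ds = [270, 315, 370, 425, 500, 550, 640, 730]

def ols_adjusted_r2(x, y):
    """Simple OLS plus the adjusted coefficient of determination for
    one predictor: 1 - (1 - R^2)(n - 1)/(n - 2)."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((a - xbar) ** 2 for a in x)
    sxy = sum((a - xbar) * (b - ybar) for a, b in zip(x, y))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    ss_res = sum((b - intercept - slope * a) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - ybar) ** 2 for b in y)
    r2 = 1.0 - ss_res / ss_tot
    return slope, intercept, 1.0 - (1.0 - r2) * (n - 1) / (n - 2)

slope, intercept, adj_r2 = ols_adjusted_r2(sc, ds)
print(f"slope = {slope:.3f} mg/L per uS/cm, adjusted R2 = {adj_r2:.3f}")
```

The report's sulfate and chloride equations add log-streamflow and seasonal terms as further predictors; the one-predictor case above shows the basic machinery and the n-2 adjustment.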

  20. Enhance-Synergism and Suppression Effects in Multiple Regression

    ERIC Educational Resources Information Center

    Lipovetsky, Stan; Conklin, W. Michael

    2004-01-01

    Relations between pairwise correlations and the coefficient of multiple determination in regression analysis are considered. The conditions for the occurrence of enhance-synergism and suppression effects when multiple determination becomes bigger than the total of squared correlations of the dependent variable with the regressors are discussed. It…

  1. Teaching Students Not to Dismiss the Outermost Observations in Regressions

    ERIC Educational Resources Information Center

    Kasprowicz, Tomasz; Musumeci, Jim

    2015-01-01

    One econometric rule of thumb is that greater dispersion in observations of the independent variable improves estimates of regression coefficients and therefore produces better results, i.e., lower standard errors of the estimates. Nevertheless, students often seem to mistrust precisely the observations that contribute the most to this greater…

  2. General Regression and Representation Model for Classification

    PubMed Central

    Qian, Jianjun; Yang, Jian; Xu, Yong

    2014-01-01

    Recently, regularized coding-based classification methods (e.g. SRC and CRC) have shown great potential for pattern classification. However, most existing coding methods assume that the representation residuals are uncorrelated. In real-world applications, this assumption does not hold. In this paper, we take account of the correlations of the representation residuals and develop a general regression and representation model (GRR) for classification. GRR not only has the advantages of CRC, but also makes full use of the prior information (e.g. the correlations between representation residuals and representation coefficients) and the specific information (weight matrix of image pixels) to enhance the classification performance. GRR uses the generalized Tikhonov regularization and K Nearest Neighbors to learn the prior information from the training data. Meanwhile, the specific information is obtained by using an iterative algorithm to update the feature (or image pixel) weights of the test sample. With the proposed model as a platform, we design two classifiers: the basic general regression and representation classifier (B-GRR) and the robust general regression and representation classifier (R-GRR). The experimental results demonstrate the performance advantages of the proposed methods over state-of-the-art algorithms. PMID:25531882
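Tikhonov regularization, the building block GRR generalizes, is ridge regression: beta = (X^T X + lambda * Gamma^T Gamma)^{-1} X^T y. A closed-form 2x2 sketch with Gamma set to the identity and invented, deliberately correlated predictors (GRR itself learns a richer Gamma from training data):

```python
# Two strongly correlated predictors (illustrative data), target y ~ 3*x1.
X = [(1.0, 0.9), (2.0, 2.1), (3.0, 2.8), (4.0, 4.2), (5.0, 4.9)]
y = [3.0, 6.2, 8.8, 12.4, 14.9]
lam = 0.5  # regularization strength (Gamma = identity)

# Normal equations (X^T X + lam*I) beta = X^T y, solved in closed form.
s11 = sum(a * a for a, _ in X) + lam
s22 = sum(b * b for _, b in X) + lam
s12 = sum(a * b for a, b in X)
b1 = sum(a * t for (a, _), t in zip(X, y))
b2 = sum(b * t for (_, b), t in zip(X, y))
det = s11 * s22 - s12 * s12
beta = ((s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det)
print(f"ridge coefficients: {beta[0]:.2f}, {beta[1]:.2f}")
```

Without the lam term the near-collinear columns would make the system ill-conditioned and the coefficients unstable; the regularizer splits the effect roughly evenly across the two correlated predictors.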

  3. Spatial correlation in Bayesian logistic regression with misclassification.

    PubMed

    Bihrmann, Kristine; Toft, Nils; Nielsen, Søren Saxmose; Ersbøll, Annette Kjær

    2014-06-01

    Standard logistic regression assumes that the outcome is measured perfectly. In practice, this is often not the case, which could lead to biased estimates if not accounted for. This study presents Bayesian logistic regression with adjustment for misclassification of the outcome applied to data with spatial correlation. The models assessed include a fixed effects model, an independent random effects model, and models with spatially correlated random effects modelled using conditional autoregressive prior distributions (ICAR and ICAR(ρ)). Performance of these models was evaluated in a simulation study. Parameters were estimated by Markov Chain Monte Carlo methods, using slice sampling to improve convergence. The results demonstrated that adjustment for misclassification must be included to produce unbiased regression estimates. With strong correlation the ICAR model performed best. With weak or moderate correlation the ICAR(ρ) performed best. With unknown spatial correlation the recommended model would be the ICAR(ρ), assuming convergence can be obtained. PMID:24889989

  4. Controlling Type I Error Rates in Assessing DIF for Logistic Regression Method Combined with SIBTEST Regression Correction Procedure and DIF-Free-Then-DIF Strategy

    ERIC Educational Resources Information Center

    Shih, Ching-Lin; Liu, Tien-Hsiang; Wang, Wen-Chung

    2014-01-01

    The simultaneous item bias test (SIBTEST) method regression procedure and the differential item functioning (DIF)-free-then-DIF strategy are applied to the logistic regression (LR) method simultaneously in this study. These procedures are used to adjust the effects of matching true score on observed score and to better control the Type I error…

  5. Can luteal regression be reversed?

    PubMed Central

    Telleria, Carlos M

    2006-01-01

    The corpus luteum is an endocrine gland whose limited lifespan is hormonally programmed. This debate article summarizes findings of our research group that challenge the principle that the end of function of the corpus luteum or luteal regression, once triggered, cannot be reversed. Overturning luteal regression by pharmacological manipulations may be of critical significance in designing strategies to improve fertility efficacy. PMID:17074090

  6. Logistic Regression: Concept and Application

    ERIC Educational Resources Information Center

    Cokluk, Omay

    2010-01-01

    The main focus of logistic regression analysis is classification of individuals in different groups. The aim of the present study is to explain basic concepts and processes of binary logistic regression analysis intended to determine the combination of independent variables which best explain the membership in certain groups called dichotomous…

  7. Wild bootstrap for quantile regression.

    PubMed

    Feng, Xingdong; He, Xuming; Hu, Jianhua

    2011-12-01

    The existing theory of the wild bootstrap has focused on linear estimators. In this note, we broaden its validity by providing a class of weight distributions that is asymptotically valid for quantile regression estimators. As most weight distributions in the literature lead to biased variance estimates for nonlinear estimators of linear regression, we propose a modification of the wild bootstrap that admits a broader class of weight distributions for quantile regression. A simulation study on median regression is carried out to compare various bootstrap methods. With a simple finite-sample correction, the wild bootstrap is shown to account for general forms of heteroscedasticity in a regression model with fixed design points. PMID:23049133
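A toy version of the wild bootstrap for median regression: fit once, then rebuild pseudo-responses by flipping the absolute residuals with random +/-1 weights and refit. The +/-1 two-point law is one simple choice; the paper's contribution is characterizing which weight distributions are valid for quantile regression, and the grid-search fitter below is a stand-in for a proper solver:

```python
import random

random.seed(2)

x = list(range(1, 11))
y = [1.2, 2.1, 2.9, 4.3, 4.8, 6.2, 7.1, 7.8, 9.3, 9.9]  # roughly y = x

def fit_median(xs, ys):
    """Toy median-regression fit: grid search over the L1 (check) loss."""
    def loss(p):
        b0, b1 = p
        return sum(abs(yi - b0 - b1 * xi) for xi, yi in zip(xs, ys))
    return min(((b0 / 20.0, b1 / 20.0)
                for b0 in range(-20, 21) for b1 in range(0, 41)), key=loss)

b0, b1 = fit_median(x, y)
resid = [yi - b0 - b1 * xi for xi, yi in zip(x, y)]

# Wild bootstrap: perturb absolute residuals with random signs, refit.
slopes = []
for _ in range(100):
    ystar = [b0 + b1 * xi + random.choice((-1.0, 1.0)) * abs(r)
             for xi, r in zip(x, resid)]
    slopes.append(fit_median(x, ystar)[1])

boot_mean = sum(slopes) / len(slopes)
print(f"bootstrap mean slope: {boot_mean:.3f}")
```

Because the weights multiply residuals point by point, the scheme preserves heteroscedasticity at fixed design points, which is exactly why the wild bootstrap is preferred over residual resampling in that setting.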

  8. [Regression grading in gastrointestinal tumors].

    PubMed

    Tischoff, I; Tannapfel, A

    2012-02-01

    Preoperative neoadjuvant chemoradiation therapy is a well-established and essential part of the interdisciplinary treatment of gastrointestinal tumors. Neoadjuvant treatment leads to regressive changes in tumors. To evaluate the histological tumor response different scoring systems describing regressive changes are used and known as tumor regression grading. Tumor regression grading is usually based on the presence of residual vital tumor cells in proportion to the total tumor size. Currently, no nationally or internationally accepted grading systems exist. In general, common guidelines should be used in the pathohistological diagnostics of tumors after neoadjuvant therapy. In particularly, the standard tumor grading will be replaced by tumor regression grading. Furthermore, tumors after neoadjuvant treatment are marked with the prefix "y" in the TNM classification. PMID:22293790

  9. Fungible weights in logistic regression.

    PubMed

    Jones, Jeff A; Waller, Niels G

    2016-06-01

    In this article we develop methods for assessing parameter sensitivity in logistic regression models. To set the stage for this work, we first review Waller's (2008) equations for computing fungible weights in linear regression. Next, we describe 2 methods for computing fungible weights in logistic regression. To demonstrate the utility of these methods, we compute fungible logistic regression weights using data from the Centers for Disease Control and Prevention's (2010) Youth Risk Behavior Surveillance Survey, and we illustrate how these alternate weights can be used to evaluate parameter sensitivity. To make our work accessible to the research community, we provide R code (R Core Team, 2015) that will generate both kinds of fungible logistic regression weights.

  10. Regression equations to estimate seasonal flow duration, n-day high-flow frequency, and n-day low-flow frequency at sites in North Dakota using data through water year 2009

    USGS Publications Warehouse

    Williams-Sether, Tara; Gross, Tara A.

    2016-01-01

    Seasonal mean daily flow data from 119 U.S. Geological Survey streamflow-gaging stations in North Dakota; the surrounding states of Montana, Minnesota, and South Dakota; and the Canadian provinces of Manitoba and Saskatchewan with 10 or more years of unregulated flow record were used to develop regression equations for flow duration, n-day high flow and n-day low flow using ordinary least-squares and Tobit regression techniques. Regression equations were developed for seasonal flow durations at the 10th, 25th, 50th, 75th, and 90th percent exceedances; the 1-, 7-, and 30-day seasonal mean high flows for the 10-, 25-, and 50-year recurrence intervals; and the 1-, 7-, and 30-day seasonal mean low flows for the 2-, 5-, and 10-year recurrence intervals. Basin and climatic characteristics determined to be significant explanatory variables in one or more regression equations included drainage area, percentage of basin drainage area that drains to isolated lakes and ponds, ruggedness number, stream length, basin compactness ratio, minimum basin elevation, precipitation, slope ratio, stream slope, and soil permeability. The adjusted coefficient of determination for the n-day high-flow regression equations ranged from 55.87 to 94.53 percent. The Chi2 values for the duration regression equations ranged from 13.49 to 117.94, whereas the Chi2 values for the n-day low-flow regression equations ranged from 4.20 to 49.68.

  12. An improved multiple linear regression and data analysis computer program package

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1972-01-01

    NEWRAP, an improved version of a previous multiple linear regression program called RAPIER, CREDUC, and CRSPLT, allows for a complete regression analysis including cross plots of the independent and dependent variables, correlation coefficients, regression coefficients, analysis of variance tables, t-statistics and their probability levels, rejection of independent variables, plots of residuals against the independent and dependent variables, and a canonical reduction of quadratic response functions useful in optimum seeking experimentation. A major improvement over RAPIER is that all regression calculations are done in double precision arithmetic.

  13. Estimation of octanol/water partition coefficients using LSER parameters

    USGS Publications Warehouse

    Luehrs, Dean C.; Hickey, James P.; Godbole, Kalpana A.; Rogers, Tony N.

    1998-01-01

    The logarithms of octanol/water partition coefficients, logKow, were regressed against the linear solvation energy relationship (LSER) parameters for a training set of 981 diverse organic chemicals. The standard deviation for logKow was 0.49. The regression equation was then used to estimate logKow for a test set of 146 chemicals, which included pesticides and other diverse polyfunctional compounds. Thus the octanol/water partition coefficient may be estimated from LSER parameters without elaborate software, although only moderate accuracy should be expected.

  14. Waste generated in high-rise buildings construction: a quantification model based on statistical multiple regression.

    PubMed

    Parisi Kern, Andrea; Ferreira Dias, Michele; Piva Kulakowski, Marlova; Paulo Gomes, Luciana

    2015-05-01

    Reducing construction waste is becoming a key environmental issue in the construction industry. The quantification of waste generation rates in the construction sector is an invaluable management tool in supporting mitigation actions. However, the quantification of waste can be a difficult process because of the specific characteristics and the wide range of materials used in different construction projects. Large variations are observed in the methods used to predict the amount of waste generated because of the range of variables involved in construction processes and the different contexts in which these methods are employed. This paper proposes a statistical model to determine the amount of waste generated in the construction of high-rise buildings by assessing the influence of design process and production system, often mentioned as the major culprits behind the generation of waste in construction. Multiple regression was used to conduct a case study based on multiple sources of data of eighteen residential buildings. The resulting statistical model related the dependent variable (amount of waste generated) to independent variables associated with the design and the production system used. The best regression model obtained from the sample data resulted in an adjusted R2 value of 0.694, meaning the model explains approximately 69% of the variance in waste generated in similar construction projects. Most independent variables showed a low determination coefficient when assessed in isolation, which emphasizes the importance of assessing their joint influence on the response (dependent) variable.
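The adjusted R2 cited above is the ordinary R2 penalized for model size: adj = 1 - (1 - R2)(n - 1)/(n - p - 1). A one-liner makes the shrinkage concrete; the predictor count of 5 below is a hypothetical illustration, not a figure from the paper:

```python
def adjusted_r2(r2, n, p):
    """Adjusted R^2: penalize R^2 by the number of predictors p."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# With n = 18 buildings and a hypothetical p = 5 predictors, a raw R^2
# of 0.78 shrinks noticeably once the penalty is applied.
print(round(adjusted_r2(0.78, 18, 5), 3))  # -> 0.688
```

With only 18 observations, each extra predictor costs a lot of adjusted R2, which is why small-sample regression models like this one should be judged by the adjusted rather than the raw statistic.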

  15. Semisupervised Clustering by Iterative Partition and Regression with Neuroscience Applications

    PubMed Central

    Qian, Guoqi; Wu, Yuehua; Ferrari, Davide; Qiao, Puxue; Hollande, Frédéric

    2016-01-01

    Regression clustering is a mixed unsupervised and supervised statistical learning and data mining method with a wide range of applications, including artificial intelligence and neuroscience. It performs unsupervised learning when it clusters the data according to their respective unobserved regression hyperplanes. The method also performs supervised learning when it fits regression hyperplanes to the corresponding data clusters. Applying regression clustering in practice requires means of determining the underlying number of clusters in the data, finding the cluster label of each data point, and estimating the regression coefficients of the model. In this paper, we review the estimation and selection issues in regression clustering with regard to least squares and robust statistical methods. We also provide a model selection based technique to determine the number of regression clusters underlying the data. We further develop a computing procedure for regression clustering estimation and selection. Finally, simulation studies are presented for assessing the procedure, together with an analysis of a real data set on RGB cell marking in neuroscience to illustrate and interpret the method. PMID:27212939

  16. Determination of sedimentation coefficients for small peptides.

    PubMed Central

    Schuck, P; MacPhee, C E; Howlett, G J

    1998-01-01

    Direct fitting of sedimentation velocity data with numerical solutions of the Lamm equations has been exploited to obtain sedimentation coefficients for single solutes under conditions where solvent and solution plateaus are either not available or are transient. The calculated evolution was initialized with the first experimental scan and nonlinear regression was employed to obtain best-fit values for the sedimentation and diffusion coefficients. General properties of the Lamm equations as data analysis tools were examined. This method was applied to study a set of small peptides containing amphipathic heptad repeats with the general structure Ac-YS-(AKEAAKE)nGAR-NH2, n = 2, 3, or 4. Sedimentation velocity analysis indicated single sedimenting species with sedimentation coefficients (s(20,w) values) of 0.37, 0.45, and 0.52 S, respectively, in good agreement with sedimentation coefficients predicted by hydrodynamic theory. The described approach can be applied to synthetic boundary and conventional loading experiments, and can be extended to analyze sedimentation data for both large and small macromolecules in order to define shape, heterogeneity, and state of association. PMID:9449347

  17. Impact of urine concentration adjustment method on associations between urine metals and estimated glomerular filtration rates (eGFR) in adolescents

    SciTech Connect

    Weaver, Virginia M.; Vargas, Gonzalo García; Silbergeld, Ellen K.; Rothenberg, Stephen J.; Fadrowski, Jeffrey J.; Rubio-Andrade, Marisela; Parsons, Patrick J.; Steuerwald, Amy J.; and others

    2014-07-15

    Positive associations between urine toxicant levels and measures of glomerular filtration rate (GFR) have been reported recently in a range of populations. The explanation for these associations, in a direction opposite that of traditional nephrotoxicity, is uncertain. Variation in associations by urine concentration adjustment approach has also been observed. Associations of urine cadmium, thallium and uranium in models of serum creatinine- and cystatin-C-based estimated GFR (eGFR) were examined using multiple linear regression in a cross-sectional study of adolescents residing near a lead smelter complex. Urine concentration adjustment approaches compared included urine creatinine, urine osmolality and no adjustment. Median age, blood lead and urine cadmium, thallium and uranium were 13.9 years, 4.0 μg/dL, 0.22, 0.27 and 0.04 g/g creatinine, respectively, in 512 adolescents. Urine cadmium and thallium were positively associated with serum creatinine-based eGFR only when urine creatinine was used to adjust for urine concentration (β coefficient=3.1 mL/min/1.73 m²; 95% confidence interval=1.4, 4.8 per each doubling of urine cadmium). Weaker positive associations, also only with urine creatinine adjustment, were observed between these metals and serum cystatin-C-based eGFR and between urine uranium and serum creatinine-based eGFR. Additional research using non-creatinine-based methods of adjustment for urine concentration is necessary. - Highlights: • Positive associations between urine metals and creatinine-based eGFR are unexpected. • Optimal approach to urine concentration adjustment for urine biomarkers uncertain. • We compared urine concentration adjustment methods. • Positive associations observed only with urine creatinine adjustment. • Additional research using non-creatinine-based methods of adjustment needed.

  18. Transferability and generalizability of regression models of ultrafine particles in urban neighborhoods in the Boston area.

    PubMed

    Patton, Allison P; Zamore, Wig; Naumova, Elena N; Levy, Jonathan I; Brugge, Doug; Durant, John L

    2015-05-19

    Land use regression (LUR) models have been used to assess air pollutant exposure, but limited evidence exists on whether location-specific LUR models are applicable to other locations (transferability) or general models are applicable to smaller areas (generalizability). We tested transferability and generalizability of spatial-temporal LUR models of hourly particle number concentration (PNC) for Boston-area (MA, U.S.A.) urban neighborhoods near Interstate 93. Four neighborhood-specific regression models and one Boston-area model were developed from mobile monitoring measurements (34-46 days/neighborhood over one year each). Transferability was tested by applying each neighborhood-specific model to the other neighborhoods; generalizability was tested by applying the Boston-area model to each neighborhood. Both the transferability and generalizability of models were tested with and without neighborhood-specific calibration. Important PNC predictors (adjusted-R(2) = 0.24-0.43) included wind speed and direction, temperature, highway traffic volume, and distance from the highway edge. Direct model transferability was poor (R(2) < 0.17). Locally-calibrated transferred models (R(2) = 0.19-0.40) and the Boston-area model (adjusted-R(2) = 0.26, range: 0.13-0.30) performed similarly to neighborhood-specific models; however, some coefficients of locally calibrated transferred models were uninterpretable. Our results show that transferability of neighborhood-specific LUR models of hourly PNC was limited, but that a general model performed acceptably in multiple areas when calibrated with local data.

  19. Transferability and Generalizability of Regression Models of Ultrafine Particles in Urban Neighborhoods in the Boston Area

    PubMed Central

    2015-01-01

    Land use regression (LUR) models have been used to assess air pollutant exposure, but limited evidence exists on whether location-specific LUR models are applicable to other locations (transferability) or general models are applicable to smaller areas (generalizability). We tested transferability and generalizability of spatial-temporal LUR models of hourly particle number concentration (PNC) for Boston-area (MA, U.S.A.) urban neighborhoods near Interstate 93. Four neighborhood-specific regression models and one Boston-area model were developed from mobile monitoring measurements (34–46 days/neighborhood over one year each). Transferability was tested by applying each neighborhood-specific model to the other neighborhoods; generalizability was tested by applying the Boston-area model to each neighborhood. Both the transferability and generalizability of models were tested with and without neighborhood-specific calibration. Important PNC predictors (adjusted-R2 = 0.24–0.43) included wind speed and direction, temperature, highway traffic volume, and distance from the highway edge. Direct model transferability was poor (R2 < 0.17). Locally-calibrated transferred models (R2 = 0.19–0.40) and the Boston-area model (adjusted-R2 = 0.26, range: 0.13–0.30) performed similarly to neighborhood-specific models; however, some coefficients of locally calibrated transferred models were uninterpretable. Our results show that transferability of neighborhood-specific LUR models of hourly PNC was limited, but that a general model performed acceptably in multiple areas when calibrated with local data. PMID:25867675

  20. Regional regression of flood characteristics employing historical information

    USGS Publications Warehouse

    Tasker, Gary D.; Stedinger, J.R.

    1987-01-01

    Streamflow gauging networks provide hydrologic information for use in estimating the parameters of regional regression models. The regional regression models can be used to estimate flood statistics, such as the 100 yr peak, at ungauged sites as functions of drainage basin characteristics. A recent innovation in regional regression is the use of a generalized least squares (GLS) estimator that accounts for unequal station record lengths and sample cross correlation among the flows. However, this technique does not account for historical flood information. A method is proposed here to adjust this generalized least squares estimator to account for possible information about historical floods available at some stations in a region. The historical information is assumed to be in the form of observations of all peaks above a threshold during a long period outside the systematic record period. A Monte Carlo simulation experiment was performed to compare the GLS estimator adjusted for historical floods with the unadjusted GLS estimator and the ordinary least squares estimator. Results indicate that using the GLS estimator adjusted for historical information significantly improves the regression model. © 1987.
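
    The core GLS estimator referenced above weights stations by an error covariance matrix. A minimal numerical sketch, assuming the covariance matrix Σ is known (the basin characteristic and covariance values here are invented for illustration; the paper's historical-flood adjustment enters through how Σ is constructed and is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regional regression: intercept plus one basin characteristic.
n = 6
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

# Sigma encodes unequal record lengths / cross correlation among sites
# (a diagonal placeholder here; the real matrix need not be diagonal).
Sigma = np.diag([1.0, 0.5, 2.0, 1.0, 0.8, 1.5])

# GLS: beta = (X' Sigma^-1 X)^-1 X' Sigma^-1 y
Si = np.linalg.inv(Sigma)
beta_gls = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)

# With Sigma = I, GLS collapses to ordinary least squares.
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
```

Down-weighting high-variance stations is what lets short-record or historically augmented sites contribute appropriately.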

  1. Multiple Regression and Its Discontents

    ERIC Educational Resources Information Center

    Snell, Joel C.; Marsh, Mitchell

    2012-01-01

    Multiple regression is part of a larger statistical strategy originated by Gauss. The authors raise questions about the theory and suggest some changes that would make room for Mandelbrot and Serendipity.

  2. Regression methods for spatial data

    NASA Technical Reports Server (NTRS)

    Yakowitz, S. J.; Szidarovszky, F.

    1982-01-01

    The kriging approach, a parametric regression method used by hydrologists and mining engineers, among others, also provides an error estimate for the integral of the regression function. The kriging method is explored and some of its statistical characteristics are described. The Watson method and theory are extended so that the kriging features are displayed. Theoretical and computational comparisons of the kriging and Watson approaches are offered.

  3. Basis Selection for Wavelet Regression

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Lau, Sonie (Technical Monitor)

    1998-01-01

    A wavelet basis selection procedure is presented for wavelet regression. Both the basis and the threshold are selected using cross-validation. The method includes the capability of incorporating prior knowledge on the smoothness (or shape of the basis functions) into the basis selection procedure. The results of the method are demonstrated on sampled functions widely used in the wavelet regression literature. The results of the method are contrasted with other published methods.

  4. Regression Discontinuity Designs in Epidemiology

    PubMed Central

    Moscoe, Ellen; Mutevedzi, Portia; Newell, Marie-Louise; Bärnighausen, Till

    2014-01-01

    When patients receive an intervention based on whether they score below or above some threshold value on a continuously measured random variable, the intervention will be randomly assigned for patients close to the threshold. The regression discontinuity design exploits this fact to estimate causal treatment effects. In spite of its recent proliferation in economics, the regression discontinuity design has not been widely adopted in epidemiology. We describe regression discontinuity, its implementation, and the assumptions required for causal inference. We show that regression discontinuity is generalizable to the survival and nonlinear models that are mainstays of epidemiologic analysis. We then present an application of regression discontinuity to the much-debated epidemiologic question of when to start HIV patients on antiretroviral therapy. Using data from a large South African cohort (2007–2011), we estimate the causal effect of early versus deferred treatment eligibility on mortality. Patients whose first CD4 count was just below the 200 cells/μL CD4 count threshold had a 35% lower hazard of death (hazard ratio = 0.65 [95% confidence interval = 0.45–0.94]) than patients presenting with CD4 counts just above the threshold. We close by discussing the strengths and limitations of regression discontinuity designs for epidemiology. PMID:25061922
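
    The estimation logic described above can be sketched with a local linear fit on either side of the cutoff; the jump in the fitted lines at the threshold estimates the treatment effect. All numbers below are synthetic and purely illustrative (not the South African cohort data), and the bandwidth choice is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

# Sharp regression discontinuity: units below the cutoff are treated,
# and the outcome jumps by `effect` exactly at the cutoff.
n, cutoff, effect = 2000, 200.0, -0.35
x = rng.uniform(0.0, 400.0, size=n)           # running variable (e.g. first CD4 count)
treated = (x < cutoff).astype(float)          # eligibility below the threshold
y = 1.0 + 0.002 * x + effect * treated + rng.normal(scale=0.1, size=n)

# Local linear regression within a bandwidth h of the cutoff, with
# separate slopes on each side; the treatment coefficient is the jump.
h = 50.0
m = np.abs(x - cutoff) < h
xc = x[m] - cutoff
A = np.column_stack([np.ones(m.sum()), treated[m], xc, xc * treated[m]])
beta, *_ = np.linalg.lstsq(A, y[m], rcond=None)
est_effect = beta[1]
```

Only observations near the threshold identify the effect, which is why the design yields a local, not population-wide, causal estimate.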

  5. SLIT ADJUSTMENT CLAMP

    DOEpatents

    McKenzie, K.R.

    1959-07-01

    An electrode support which permits accurate alignment and adjustment of the electrode in a plurality of planes and about a plurality of axes in a calutron is described. The support will align the slits in the electrode with the slits of an ionizing chamber so as to provide for the egress of ions. The support comprises an insulator, a leveling plate carried by the insulator and having diametrically opposed attaching screws screwed to the plate and the insulator and diametrically opposed adjusting screws for bearing against the insulator, and an electrode associated with the plate for adjustment therewith.

  6. Attachment style and adjustment to divorce.

    PubMed

    Yárnoz-Yaben, Sagrario

    2010-05-01

    Divorce is becoming increasingly widespread in Europe. In this study, I present an analysis of the role played by attachment style (secure, dismissing, preoccupied and fearful, plus the dimensions of anxiety and avoidance) in the adaptation to divorce. Participants comprised divorced parents (N = 40) from a medium-sized city in the Basque Country. The results reveal a lower proportion of people with secure attachment in the sample group of divorcees. Attachment style and dependence (emotional and instrumental) are closely related. I have also found associations between measures that showed a poor adjustment to divorce and the preoccupied and fearful attachment styles. Adjustment is related to a dismissing attachment style and to the avoidance dimension. Multiple regression analysis confirmed that secure attachment and the avoidance dimension predict adjustment to divorce and positive affectivity while preoccupied attachment and the anxiety dimension predicted negative affectivity. Implications for research and interventions with divorcees are discussed.

  7. SCI model structure determination program (OSR) user's guide. [optimal subset regression

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program OSR (Optimal Subset Regression), which estimates models for rotorcraft body and rotor force and moment coefficients, is described. The technique used is based on the subset regression algorithm. Given time histories of aerodynamic coefficients, aerodynamic variables, and control inputs, the program computes correlations between the various time histories. The model structure determination is based on these correlations. Inputs and outputs of the program are given.

  8. Impact of urine concentration adjustment method on associations between urine metals and estimated glomerular filtration rates (eGFR) in adolescents.

    PubMed

    Weaver, Virginia M; Vargas, Gonzalo García; Silbergeld, Ellen K; Rothenberg, Stephen J; Fadrowski, Jeffrey J; Rubio-Andrade, Marisela; Parsons, Patrick J; Steuerwald, Amy J; Navas-Acien, Ana; Guallar, Eliseo

    2014-07-01

    Positive associations between urine toxicant levels and measures of glomerular filtration rate (GFR) have been reported recently in a range of populations. The explanation for these associations, in a direction opposite that of traditional nephrotoxicity, is uncertain. Variation in associations by urine concentration adjustment approach has also been observed. Associations of urine cadmium, thallium and uranium in models of serum creatinine- and cystatin-C-based estimated GFR (eGFR) were examined using multiple linear regression in a cross-sectional study of adolescents residing near a lead smelter complex. Urine concentration adjustment approaches compared included urine creatinine, urine osmolality and no adjustment. Median age, blood lead and urine cadmium, thallium and uranium were 13.9 years, 4.0 μg/dL, 0.22, 0.27 and 0.04 g/g creatinine, respectively, in 512 adolescents. Urine cadmium and thallium were positively associated with serum creatinine-based eGFR only when urine creatinine was used to adjust for urine concentration (β coefficient=3.1 mL/min/1.73 m(2); 95% confidence interval=1.4, 4.8 per each doubling of urine cadmium). Weaker positive associations, also only with urine creatinine adjustment, were observed between these metals and serum cystatin-C-based eGFR and between urine uranium and serum creatinine-based eGFR. Additional research using non-creatinine-based methods of adjustment for urine concentration is necessary.
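
    The "per each doubling" phrasing in the β coefficient above implies a log2-transformed exposure, so the slope is the change in eGFR for each doubling of urine cadmium. A hedged sketch on synthetic values (the distribution parameters and noise level are invented; only the log2 device reflects the abstract):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic exposure and outcome: slope on log2(cadmium) is, by
# construction, the change in eGFR per doubling of cadmium.
n = 512
cadmium = rng.lognormal(mean=np.log(0.22), sigma=0.8, size=n)   # g/g creatinine
egfr = 100.0 + 3.1 * np.log2(cadmium) + rng.normal(scale=2.0, size=n)

A = np.column_stack([np.ones(n), np.log2(cadmium)])
intercept, per_doubling = np.linalg.lstsq(A, egfr, rcond=None)[0]
```

The same fit on natural-log exposure would give the slope per e-fold change; base 2 simply makes the coefficient read as "per doubling".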

  9. Impact of urine concentration adjustment method on associations between urine metals and estimated glomerular filtration rates (eGFR) in adolescents☆

    PubMed Central

    Weaver, Virginia M.; Vargas, Gonzalo García; Silbergeld, Ellen K.; Rothenberg, Stephen J.; Fadrowski, Jeffrey J.; Rubio-Andrade, Marisela; Parsons, Patrick J.; Steuerwald, Amy J.; Navas-Acien, Ana; Guallar, Eliseo

    2014-01-01

    Positive associations between urine toxicant levels and measures of glomerular filtration rate (GFR) have been reported recently in a range of populations. The explanation for these associations, in a direction opposite that of traditional nephrotoxicity, is uncertain. Variation in associations by urine concentration adjustment approach has also been observed. Associations of urine cadmium, thallium and uranium in models of serum creatinine- and cystatin-C-based estimated GFR (eGFR) were examined using multiple linear regression in a cross-sectional study of adolescents residing near a lead smelter complex. Urine concentration adjustment approaches compared included urine creatinine, urine osmolality and no adjustment. Median age, blood lead and urine cadmium, thallium and uranium were 13.9 years, 4.0 μg/dL, 0.22, 0.27 and 0.04 g/g creatinine, respectively, in 512 adolescents. Urine cadmium and thallium were positively associated with serum creatinine-based eGFR only when urine creatinine was used to adjust for urine concentration (β coefficient=3.1 mL/min/1.73 m2; 95% confidence interval=1.4, 4.8 per each doubling of urine cadmium). Weaker positive associations, also only with urine creatinine adjustment, were observed between these metals and serum cystatin-C-based eGFR and between urine uranium and serum creatinine-based eGFR. Additional research using non-creatinine-based methods of adjustment for urine concentration is necessary. PMID:24815335

  10. Averaging Internal Consistency Reliability Coefficients

    ERIC Educational Resources Information Center

    Feldt, Leonard S.; Charter, Richard A.

    2006-01-01

    Seven approaches to averaging reliability coefficients are presented. Each approach starts with a unique definition of the concept of "average," and no approach is more correct than the others. Six of the approaches are applicable to internal consistency coefficients. The seventh approach is specific to alternate-forms coefficients. Although the…

  11. Incremental learning for ν-Support Vector Regression.

    PubMed

    Gu, Bin; Sheng, Victor S; Wang, Zhijie; Ho, Derek; Osman, Said; Li, Shuo

    2015-07-01

    The ν-Support Vector Regression (ν-SVR) is an effective regression learning algorithm, which has the advantage of using a parameter ν to control the number of support vectors and adjust the width of the tube automatically. However, compared to ν-Support Vector Classification (ν-SVC) (Schölkopf et al., 2000), ν-SVR introduces an additional linear term into its objective function. Thus, directly applying the accurate on-line ν-SVC algorithm (AONSVM) to ν-SVR will not generate an effective initial solution. It is the main challenge to design an incremental ν-SVR learning algorithm. To overcome this challenge, we propose a special procedure called initial adjustments in this paper. This procedure adjusts the weights of ν-SVC based on the Karush-Kuhn-Tucker (KKT) conditions to prepare an initial solution for the incremental learning. Combining the initial adjustments with the two steps of AONSVM produces an exact and effective incremental ν-SVR learning algorithm (INSVR). Theoretical analysis has proven the existence of the three key inverse matrices, which are the cornerstones of the three steps of INSVR (including the initial adjustments), respectively. The experiments on benchmark datasets demonstrate that INSVR can avoid the infeasible updating paths as far as possible, and successfully converges to the optimal solution. The results also show that INSVR is faster than batch ν-SVR algorithms with both cold and warm starts.

  12. Remotely Adjustable Hydraulic Pump

    NASA Technical Reports Server (NTRS)

    Kouns, H. H.; Gardner, L. D.

    1987-01-01

    Outlet pressure adjusted to match varying loads. Electrohydraulic servo has positioned sleeve in leftmost position, adjusting outlet pressure to maximum value. Sleeve in equilibrium position, with control land covering control port. For lowest pressure setting, sleeve shifted toward right by increased pressure on sleeve shoulder from servovalve. Pump used in aircraft and robots, where hydraulic actuators repeatedly turned on and off, changing pump load frequently and over wide range.

  13. Quantitative Adjustment of the Influence of Depression on the Disabilities of the Arm, Shoulder, and Hand (DASH) Questionnaire

    PubMed Central

    Calderón, Santiago A. Lozano; Zurakowski, David; Davis, James S.

    2009-01-01

    Upper extremity specific disability as measured with the Disabilities of the Arm, Shoulder and Hand (DASH) questionnaire varies more than expected based upon variations in objective impairment influenced by depression. We tested the hypothesis that adjusting for depression can reduce the mean and variance of DASH scores. Five hundred and sixteen patients (352 men, 164 women) with an average age of 58 years (range, 18–100) were asked to simultaneously complete the DASH and Center for Epidemiologic Studies Depression Scale (CES-D) scores at their initial visit to a hand surgeon. Pearson's correlations between each of the DASH items and the CES-D score were obtained. The DASH score was then adjusted for the influence of depression for women and men using ordinary least-squares regression and subtracting the product of the regression coefficient and the CES-D score from the raw DASH score. The average DASH score was 24 points (SD, 19; range, 0–91), and the average CES-D score was 10 points (SD, 8; range, 0–42). Thirteen of the 30 items of the DASH demonstrated correlation greater than r = 0.20. Adjustment of these DASH items for the depression effect led to significant reductions in the mean (5.5 points; p < 0.01) and standard deviation (0.8 points; p < 0.01) of DASH scores. Adjustment for depression alone had a significant but perhaps clinically marginal effect on the variance of DASH scores. Additional research is merited to determine if DASH score adjustments for the most important subjective and psychosocial aspects of illness behavior can improve correlation between DASH scores and objective impairment. PMID:19495887
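
    The adjustment described above, subtracting the product of the regression coefficient and the CES-D score from the raw DASH score, can be sketched on synthetic scores (the score distributions and the slope below are invented for illustration, not the study's values):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic raw scores: DASH partly driven by depression (CES-D).
n = 516
cesd = rng.gamma(shape=2.0, scale=5.0, size=n)             # depression score
dash = 10.0 + 1.2 * cesd + rng.normal(scale=10.0, size=n)  # raw disability score

# OLS regression of DASH on CES-D, then subtract slope * CES-D.
A = np.column_stack([np.ones(n), cesd])
intercept, slope = np.linalg.lstsq(A, dash, rcond=None)[0]
dash_adjusted = dash - slope * cesd

# The adjusted score no longer varies with depression, and its
# spread shrinks, mirroring the reduced SD reported in the study.
corr = np.corrcoef(dash_adjusted, cesd)[0, 1]
```

By OLS orthogonality, the adjusted score is exactly uncorrelated with CES-D in the fitting sample.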

  14. Impact of multicollinearity on small sample hydrologic regression models

    NASA Astrophysics Data System (ADS)

    Kroll, Charles N.; Song, Peter

    2013-06-01

    Often hydrologic regression models are developed with ordinary least squares (OLS) procedures. The use of OLS with highly correlated explanatory variables produces multicollinearity, which creates highly sensitive parameter estimators with inflated variances and improper model selection. It is not clear how to best address multicollinearity in hydrologic regression models. Here a Monte Carlo simulation is developed to compare four techniques to address multicollinearity: OLS, OLS with variance inflation factor screening (VIF), principal component regression (PCR), and partial least squares regression (PLS). The performance of these four techniques was observed for varying sample sizes, correlation coefficients between the explanatory variables, and model error variances consistent with hydrologic regional regression models. The negative effects of multicollinearity are magnified at smaller sample sizes, higher correlations between the variables, and larger model error variances (smaller R2). The Monte Carlo simulation indicates that if the true model is known, multicollinearity is present, and the estimation and statistical testing of regression parameters are of interest, then PCR or PLS should be employed. If the model is unknown, or if the interest is solely on model predictions, it is recommended that OLS be employed since using more complicated techniques did not produce any improvement in model performance. A leave-one-out cross-validation case study was also performed using low-streamflow data sets from the eastern United States. Results indicate that OLS with stepwise selection generally produces models across study regions with varying levels of multicollinearity that are as good as biased regression techniques such as PCR and PLS.
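
    The VIF screening mentioned above uses the standard definition VIF_j = 1 / (1 − R_j²), where R_j² comes from regressing predictor j on the remaining predictors. A minimal sketch on synthetic predictors (the data and the common screening threshold of 10 are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Three synthetic predictors; x2 is nearly collinear with x1.
n = 50
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.1, size=n)   # nearly collinear with x1
x3 = rng.normal(size=n)                   # independent of the others
X = np.column_stack([x1, x2, x3])

def vif(X, j):
    """VIF of column j: regress it on the other columns plus an intercept."""
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(X)), others])
    fitted = A @ np.linalg.lstsq(A, X[:, j], rcond=None)[0]
    ss_res = ((X[:, j] - fitted) ** 2).sum()
    ss_tot = ((X[:, j] - X[:, j].mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 / (1.0 - r2)

vifs = [vif(X, j) for j in range(X.shape[1])]
```

Columns involved in a near-collinear pair show large VIFs, while an independent predictor stays near 1; screening drops predictors above a chosen threshold before the final OLS fit.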

  15. Comparison of regression models for estimation of isometric wrist joint torques using surface electromyography

    PubMed Central

    2011-01-01

    Background Several regression models have been proposed for estimation of isometric joint torque using surface electromyography (SEMG) signals. Common issues related to torque estimation models are degradation of model accuracy with passage of time, electrode displacement, and alteration of limb posture. This work compares the performance of the most commonly used regression models under these circumstances, in order to assist researchers with identifying the most appropriate model for a specific biomedical application. Methods Eleven healthy volunteers participated in this study. A custom-built rig, equipped with a torque sensor, was used to measure isometric torque as each volunteer flexed and extended his wrist. SEMG signals from eight forearm muscles, in addition to wrist joint torque data were gathered during the experiment. Additional data were gathered one hour and twenty-four hours following the completion of the first data gathering session, for the purpose of evaluating the effects of passage of time and electrode displacement on accuracy of models. Acquired SEMG signals were filtered, rectified, normalized and then fed to models for training. Results It was shown that mean adjusted coefficient of determination (Ra2) values decrease by 20%-35% for different models after one hour, while altering arm posture decreased mean Ra2 values by 64%-74% for different models. Conclusions Model estimation accuracy drops significantly with passage of time, electrode displacement, and alteration of limb posture. Therefore model retraining is crucial for preserving estimation accuracy. Data resampling can significantly reduce model training time without losing estimation accuracy. Among the models compared, the ordinary least squares linear regression model (OLS) was shown to have high isometric torque estimation accuracy combined with very short training times. PMID:21943179

  16. Weighted triangulation adjustment

    USGS Publications Warehouse

    Anderson, Walter L.

    1969-01-01

    The variation of coordinates method is employed to perform a weighted least squares adjustment of horizontal survey networks. Geodetic coordinates are required for each fixed and adjustable station. A preliminary inverse geodetic position computation is made for each observed line. Weights associated with each observed equation for direction, azimuth, and distance are applied in the formation of the normal equations in the least squares adjustment. The number of normal equations that may be solved is twice the number of new stations and less than 150. When the normal equations are solved, shifts are produced at adjustable stations. Previously computed correction factors are applied to the shifts and a most probable geodetic position is found for each adjustable station. Final azimuths and distances are computed. These may be written onto magnetic tape for subsequent computation of state plane or grid coordinates. Input consists of punch cards containing project identification, program options, and position and observation information. Results listed include preliminary and final positions, residuals, observation equations, solution of the normal equations showing magnitudes of shifts, and a plot of each adjusted and fixed station. During processing, data sets containing irrecoverable errors are rejected and the type of error is listed. The computer resumes processing of additional data sets. Other conditions cause warning-errors to be issued, and processing continues with the current data set.

  17. Regression Analysis Of Zernike Polynomials Part II

    NASA Astrophysics Data System (ADS)

    Grey, Louis D.

    1989-01-01

    In an earlier paper entitled "Regression Analysis of Zernike Polynomials," Proceedings of SPIE, Vol. 18, pp. 392-398, the least squares fitting process of Zernike polynomials was examined from the point of view of linear statistical regression theory. Among the topics discussed were measures for determining how good the fit was, tests for the underlying assumptions of normality and constant variance, the treatment of outliers, the analysis of residuals and the computation of confidence intervals for the coefficients. The present paper is a continuation of the earlier paper and concerns applications of relatively new advances in certain areas of statistical theory made possible by the advent of the high speed computer. Among these are: 1. Jackknife - A technique for improving the accuracy of any statistical estimate. 2. Bootstrap - Increasing the accuracy of an estimate by generating new samples of data from some given set. 3. Cross-validation - The division of a data set into two halves, the first half of which is used to fit the model and the second half to see how well the fitted model predicts the data. The exposition is mainly by examples.

  18. Optical phantoms with adjustable subdiffusive scattering parameters.

    PubMed

    Krauter, Philipp; Nothelfer, Steffen; Bodenschatz, Nico; Simon, Emanuel; Stocker, Sabrina; Foschum, Florian; Kienle, Alwin

    2015-10-01

    A new epoxy-resin-based optical phantom system with adjustable subdiffusive scattering parameters is presented along with measurements of the intrinsic absorption, scattering, fluorescence, and refractive index of the matrix material. Both an aluminium oxide powder and a titanium dioxide dispersion were used as scattering agents and we present measurements of their scattering and reduced scattering coefficients. A method is theoretically described for a mixture of both scattering agents to obtain continuously adjustable anisotropy values g between 0.65 and 0.9 and values of the phase function parameter γ in the range of 1.4 to 2.2. Furthermore, we show absorption spectra for a set of pigments that can be added to achieve particular absorption characteristics. By additional analysis of the aging, a fully characterized phantom system is obtained with the novelty of g and γ parameter adjustment. PMID:26473589

  19. Establishment of In Silico Prediction Models for CYP3A4 and CYP2B6 Induction in Human Hepatocytes by Multiple Regression Analysis Using Azole Compounds.

    PubMed

    Nagai, Mika; Konno, Yoshihiro; Satsukawa, Masahiro; Yamashita, Shinji; Yoshinari, Kouichi

    2016-08-01

    Drug-drug interactions (DDIs) via cytochrome P450 (P450) induction are one clinical problem leading to increased risk of adverse effects and the need for dosage adjustments and additional therapeutic monitoring. In silico models for predicting P450 induction are useful for avoiding DDI risk. In this study, we have established regression models for CYP3A4 and CYP2B6 induction in human hepatocytes, using several physicochemical parameters of a set of azole compounds with differing P450 induction characteristics as model compounds. To obtain a well-correlated regression model, the compounds for CYP3A4 or CYP2B6 induction were independently selected from the tested azole compounds using principal component analysis with fold-induction data. The multiple linear regression models obtained for CYP3A4 and CYP2B6 induction are represented by different sets of physicochemical parameters. The adjusted coefficients of determination for these models were 0.8 and 0.9, respectively. The fold-induction of the validation compounds, another set of 12 azole-containing compounds, was predicted within twofold limits for both CYP3A4 and CYP2B6. The concordance for the prediction of CYP3A4 induction was 87% with another validation set of 23 marketed drugs. However, the prediction of CYP2B6 induction tended to be overestimated for these marketed drugs. The regression models show that lipophilicity contributes most to CYP3A4 induction, whereas not only lipophilicity but also molecular polarity is important for CYP2B6 induction. Our regression models, especially that for CYP3A4 induction, might provide useful methods for avoiding potent CYP3A4 or CYP2B6 inducers during the lead optimization stage without performing induction assays in human hepatocytes.

  20. Adjustments to the correction for attenuation.

    PubMed

    Wetcher-Hendricks, Debra

    2006-06-01

    With respect to the often-present covariance between error terms of correlated variables, D. W. Zimmerman and R. H. Williams's (1977) adjusted correction for attenuation estimates the strength of the pairwise correlation between true scores without assuming independence of error scores. This article focuses on the derivation and analysis of formulas that perform the same function for partial and part correlation coefficients. Values produced by these formulas lie closer to the actual true-score coefficient than do the observed-score coefficients or those obtained by using C. Spearman's (1904) correction for attenuation. The new versions of the formulas thus allow analysts to use hypothetical values for error-score correlations to estimate values for the partial and part correlations between true scores while disregarding the independence-of-errors assumption.
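
    A minimal sketch of the classical Spearman (1904) correction that the adjusted formulas build on; the numerical reliabilities below are invented for illustration:

```python
import math

def spearman_disattenuate(r_xy, r_xx, r_yy):
    """Spearman's (1904) correction for attenuation: estimate the
    true-score correlation from an observed correlation and the two
    reliabilities, assuming independent error scores (the assumption
    the adjusted formulas relax)."""
    return r_xy / math.sqrt(r_xx * r_yy)

# e.g. observed r = .42 with reliabilities .80 and .70 (invented values)
print(spearman_disattenuate(0.42, 0.80, 0.70))  # ~0.561
```

    Because measurement error attenuates observed correlations, the corrected value is always at least as large in magnitude as the observed one; the adjusted formulas in the article modify this estimate when the error terms themselves covary.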

  1. A Heterogeneous Bayesian Regression Model for Cross-Sectional Data Involving a Single Observation per Response Unit

    ERIC Educational Resources Information Center

    Fong, Duncan K. H.; Ebbes, Peter; DeSarbo, Wayne S.

    2012-01-01

    Multiple regression is frequently used across the various social sciences to analyze cross-sectional data. However, it can often times be challenging to justify the assumption of common regression coefficients across all respondents. This manuscript presents a heterogeneous Bayesian regression model that enables the estimation of…

  2. A modified GM-estimation for robust fitting of mixture regression models

    NASA Astrophysics Data System (ADS)

    Booppasiri, Slun; Srisodaphol, Wuttichai

    2015-02-01

    In mixture regression models, the regression parameters are estimated by maximum likelihood estimation (MLE) via the EM algorithm. Generally, maximum likelihood estimation is sensitive to outliers and heavy-tailed error distributions. The robust M-estimation method can handle outliers in the dependent variable only when estimating regression coefficients in regression models, whereas GM-estimation can handle outliers in both the dependent and independent variables. In this study, modified GM-estimations for estimating the regression coefficients in mixture regression models are proposed. A Monte Carlo simulation is used to evaluate the efficiency of the proposed methods. The results show that the proposed modified GM-estimations approximate the MLE when there are no outliers and the error is normally distributed. Furthermore, our proposed methods are more efficient than the MLE when there are leverage points.
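
    The M-estimation idea referred to above can be sketched as Huber-weighted iteratively reweighted least squares. This is a generic illustration, not the paper's modified GM-estimator: the tuning constant and data are invented, and the leverage-point downweighting that distinguishes GM-estimation is omitted.

```python
import numpy as np

def huber_irls(X, y, c=1.345, iters=50):
    """M-estimation of regression coefficients with Huber weights via
    iteratively reweighted least squares. Downweights outliers in the
    response; a GM-estimator would additionally downweight
    high-leverage rows of X (not shown)."""
    w = np.linalg.lstsq(X, y, rcond=None)[0]              # OLS start
    for _ in range(iters):
        r = y - X @ w
        s = np.median(np.abs(r - np.median(r))) / 0.6745  # robust scale (MAD)
        u = np.abs(r) / max(s, 1e-12)
        wt = np.where(u <= c, 1.0, c / u)                 # Huber weight function
        Xw = wt[:, None] * X
        w = np.linalg.solve(X.T @ Xw, Xw.T @ y)           # weighted LS step
    return w

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=200)
y[:10] += 15.0                 # gross outliers in the response
w = huber_irls(X, y)           # stays close to [1.0, 2.0]
```

    Ordinary least squares on the same data would be pulled noticeably toward the contaminated points, whereas the Huber weights reduce their influence to near zero.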

  3. Regressive Evolution in Astyanax Cavefish

    PubMed Central

    Jeffery, William R.

    2013-01-01

    A diverse group of animals, including members of most major phyla, have adapted to life in the perpetual darkness of caves. These animals are united by the convergence of two regressive phenotypes, loss of eyes and pigmentation. The mechanisms of regressive evolution are poorly understood. The teleost Astyanax mexicanus is of special significance in studies of regressive evolution in cave animals. This species includes an ancestral surface dwelling form and many con-specific cave-dwelling forms, some of which have evolved their recessive phenotypes independently. Recent advances in Astyanax development and genetics have provided new information about how eyes and pigment are lost during cavefish evolution; namely, they have revealed some of the molecular and cellular mechanisms involved in trait modification, the number and identity of the underlying genes and mutations, the molecular basis of parallel evolution, and the evolutionary forces driving adaptation to the cave environment. PMID:19640230

  4. Survival Data and Regression Models

    NASA Astrophysics Data System (ADS)

    Grégoire, G.

    2014-12-01

    We start this chapter by introducing some basic elements for the analysis of censored survival data. Then we focus on right-censored data and develop two types of regression models. The first concerns the so-called accelerated failure time (AFT) models, which are parametric models where a function of a parameter depends linearly on the covariables. The second is a semiparametric model, where the covariables enter in a multiplicative form in the expression of the hazard rate function. The main statistical tool for analysing these regression models is the maximum likelihood methodology; although we recall some essential results of ML theory, we refer to the chapter "Logistic Regression" for a more detailed presentation.

  5. Hidden Connections between Regression Models of Strain-Gage Balance Calibration Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert

    2013-01-01

    Hidden connections between regression models of wind tunnel strain-gage balance calibration data are investigated. These connections become visible whenever balance calibration data is supplied in its design format and both the Iterative and Non-Iterative Method are used to process the data. First, it is shown how the regression coefficients of the fitted balance loads of a force balance can be approximated by using the corresponding regression coefficients of the fitted strain-gage outputs. Then, data from the manual calibration of the Ames MK40 six-component force balance is chosen to illustrate how estimates of the regression coefficients of the fitted balance loads can be obtained from the regression coefficients of the fitted strain-gage outputs. The study illustrates that load predictions obtained by applying the Iterative or the Non-Iterative Method originate from two related regression solutions of the balance calibration data as long as balance loads are given in the design format of the balance, gage outputs behave highly linear, strict statistical quality metrics are used to assess regression models of the data, and regression model term combinations of the fitted loads and gage outputs can be obtained by a simple variable exchange.

  6. A New Test of Linear Hypotheses in OLS Regression under Heteroscedasticity of Unknown Form

    ERIC Educational Resources Information Center

    Cai, Li; Hayes, Andrew F.

    2008-01-01

    When the errors in an ordinary least squares (OLS) regression model are heteroscedastic, hypothesis tests involving the regression coefficients can have Type I error rates that are far from the nominal significance level. Asymptotically, this problem can be rectified with the use of a heteroscedasticity-consistent covariance matrix (HCCM)…

  7. Analysis of Differential Item Functioning (DIF) Using Hierarchical Logistic Regression Models.

    ERIC Educational Resources Information Center

    Swanson, David B.; Clauser, Brian E.; Case, Susan M.; Nungester, Ronald J.; Featherman, Carol

    2002-01-01

    Outlines an approach to differential item functioning (DIF) analysis using hierarchical linear regression that makes it possible to combine results of logistic regression analyses across items to identify consistent sources of DIF, to quantify the proportion of explained variation in DIF coefficients, and to compare the predictive accuracy of…

  8. Confidence Intervals for an Effect Size Measure in Multiple Linear Regression

    ERIC Educational Resources Information Center

    Algina, James; Keselman, H. J.; Penfield, Randall D.

    2007-01-01

    The increase in the squared multiple correlation coefficient ([Delta]R[squared]) associated with a variable in a regression equation is a commonly used measure of importance in regression analysis. The coverage probability that an asymptotic and percentile bootstrap confidence interval includes [Delta][rho][squared] was investigated. As expected,…

  9. Beyond Multiple Regression: Using Commonality Analysis to Better Understand R[superscript 2] Results

    ERIC Educational Resources Information Center

    Warne, Russell T.

    2011-01-01

    Multiple regression is one of the most common statistical methods used in quantitative educational research. Despite the versatility and easy interpretability of multiple regression, it has some shortcomings in the detection of suppressor variables and for somewhat arbitrarily assigning values to the structure coefficients of correlated…

  10. [Is regression of atherosclerosis possible?].

    PubMed

    Thomas, D; Richard, J L; Emmerich, J; Bruckert, E; Delahaye, F

    1992-10-01

    Experimental studies have shown the regression of atherosclerosis in animals given a cholesterol-rich diet and then given a normal diet or hypolipidemic therapy. Despite favourable results of clinical trials of primary prevention modifying the lipid profile, the concept of atherosclerosis regression in man remains very controversial. The methodological approach is difficult: it is based on angiographic data and requires strict standardisation of angiographic views and reliable quantitative techniques of analysis, which are available with image processing. Several methodologically acceptable clinical coronary studies have shown not only stabilisation but also regression of atherosclerotic lesions with reductions of about 25% in total cholesterol levels and of about 40% in LDL cholesterol levels. These reductions were obtained either by drugs as in CLAS (Cholesterol Lowering Atherosclerosis Study), FATS (Familial Atherosclerosis Treatment Study) and SCOR (Specialized Center of Research Intervention Trial), by profound modifications in dietary habits as in the Lifestyle Heart Trial, or by surgery (ileo-caecal bypass) as in POSCH (Program On the Surgical Control of the Hyperlipidemias). On the other hand, trials with non-lipid-lowering drugs such as the calcium antagonists (INTACT, MHIS) have not shown significant regression of existing atherosclerotic lesions but only a decrease in the number of new lesions. The clinical benefits of these regression studies are difficult to demonstrate given the limited period of observation, the relatively small population numbers and the fact that in some cases the subjects were asymptomatic. The decrease in the number of cardiovascular events therefore seems relatively modest and concerns essentially subjects who were symptomatic initially. The clinical repercussion of studies of prevention involving a single lipid factor is probably partially due to the reduction in progression and anatomical regression of the atherosclerotic plaque.

  11. Advanced colorectal neoplasia risk stratification by penalized logistic regression.

    PubMed

    Lin, Yunzhi; Yu, Menggang; Wang, Sijian; Chappell, Richard; Imperiale, Thomas F

    2016-08-01

    Colorectal cancer is the second leading cause of death from cancer in the United States. To facilitate the efficiency of colorectal cancer screening, there is a need to stratify risk for colorectal cancer among the 90% of US residents who are considered "average risk." In this article, we investigate such risk stratification rules for advanced colorectal neoplasia (colorectal cancer and advanced, precancerous polyps). We use a recently completed large cohort study of subjects who underwent a first screening colonoscopy. Logistic regression models have been used in the literature to estimate the risk of advanced colorectal neoplasia based on quantifiable risk factors. However, logistic regression may be prone to overfitting and instability in variable selection. Since most of the risk factors in our study have several categories, it was tempting to collapse these categories into fewer risk groups. We propose a penalized logistic regression method that automatically and simultaneously selects variables, groups categories, and estimates their coefficients by penalizing the [Formula: see text]-norm of both the coefficients and their differences. Hence, it encourages sparsity in the categories, i.e. grouping of the categories, and sparsity in the variables, i.e. variable selection. We apply the penalized logistic regression method to our data. The important variables are selected, with close categories simultaneously grouped, by penalized regression models with and without the interaction terms. The models are validated with 10-fold cross-validation. The receiver operating characteristic curves of the penalized regression models dominate the receiver operating characteristic curve of naive logistic regressions, indicating a superior discriminative performance.
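
    A minimal sketch of L1-penalized logistic regression fitted by proximal gradient descent, illustrating how the penalty performs variable selection. The data, step size, and penalty weight are invented, and the paper's additional penalty on differences between coefficients of neighboring categories (which drives the category grouping) is omitted here:

```python
import numpy as np

def l1_logistic(X, y, lam=0.1, lr=0.1, iters=500):
    """L1-penalized logistic regression via proximal gradient descent
    (ISTA). The soft-thresholding step is the proximal operator of the
    L1 norm and zeroes out weak coefficients."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(iters):
        pred = 1.0 / (1.0 + np.exp(-X @ w))   # logistic mean
        grad = X.T @ (pred - y) / n           # gradient of the log-loss
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = (X[:, 0] - X[:, 1] + 0.5 * rng.normal(size=300) > 0).astype(float)
w = l1_logistic(X, y)
# the two informative coefficients survive; pure-noise ones shrink toward zero
```

    Shrinking the penalty weight toward zero recovers ordinary (unpenalized) logistic regression, while increasing it produces sparser, more stable models, which is the overfitting trade-off the abstract describes.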

  12. Risk-adjusted capitation funding models for chronic disease in Australia: alternatives to casemix funding.

    PubMed

    Antioch, K M; Walsh, M K

    2002-01-01

    Under Australian casemix funding arrangements that use Diagnosis-Related Groups (DRGs), the average price is policy based, not benchmarked. Cost weights are too low for State-wide chronic disease services. Risk-adjusted Capitation Funding Models (RACFMs) are feasible alternatives. A RACFM was developed for public patients with cystic fibrosis treated by an Australian Health Maintenance Organization (AHMO). Adverse selection is of limited concern since patients pay solidarity contributions via the Medicare levy with no premium contributions to the AHMO. Sponsors paying premium subsidies are the State of Victoria and the Federal Government. Cost per patient is the dependent variable in the multiple regression. Data on DRG 173 (cystic fibrosis) patients were assessed for heteroskedasticity, multicollinearity, structural stability and functional form. Stepwise linear regression excluded non-significant variables. Significant variables were 'emergency' (1276.9), 'outlier' (6377.1), 'complexity' (3043.5), 'procedures' (317.4) and the constant (4492.7) (R(2)=0.21, SE=3598.3, F=14.39, Prob<0.0001). Regression coefficients represent the additional per-patient costs summed to the base payment (constant). The model explained 21% of the variance in cost per patient. The payment rate is adjusted by a best-practice annual admission rate per patient. The model is a blended RACFM for in-patient, out-patient, Hospital In The Home, and Fee-For-Service Federal payments for drugs and medical services; lump-sum lung transplant payments; and risk sharing through cost (loss) outlier payments. State and Federally funded home and palliative services are 'carved out'. The model, which has national application via Coordinated Care Trials and by Australian States for RACFMs, may be instructive for Germany, which plans to use Australian DRGs for casemix funding. The capitation alternative for chronic disease can improve equity, allocative efficiency and distributional justice. The use of Diagnostic Cost

  13. Estimating R-squared Shrinkage in Multiple Regression: A Comparison of Different Analytical Methods.

    ERIC Educational Resources Information Center

    Yin, Ping; Fan, Xitao

    2001-01-01

    Studied the effectiveness of various analytical formulas for estimating "R" squared shrinkage in multiple regression analysis, focusing on estimators of the squared population multiple correlation coefficient and the squared population cross validity coefficient. Simulation results suggest that the most widely used Wherry (R. Wherry, 1931) formula…
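
    The Wherry formula discussed above, in its most commonly cited form, is easy to state as code; the example values are invented:

```python
def wherry_adjusted_r2(r2, n, k):
    """Wherry-type shrinkage estimate of the squared population
    multiple correlation from a sample R^2 based on n observations
    and k predictors (the familiar 'adjusted R^2')."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

# e.g. sample R^2 = .50 from n = 100 cases and k = 5 predictors
print(wherry_adjusted_r2(0.50, 100, 5))  # ~0.4734
```

    The shrinkage grows as k approaches n, which is why formulas of this family matter most for small samples with many predictors; estimators of the squared cross-validity coefficient shrink further still.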

  14. Spatial regression analysis on 32 years of total column ozone data

    NASA Astrophysics Data System (ADS)

    Knibbe, J. S.; van der A, R. J.; de Laat, A. T. J.

    2014-08-01

    Multiple-regression analyses have been performed on 32 years of total ozone column data that was spatially gridded with a 1 × 1.5° resolution. The total ozone data consist of the MSR (Multi Sensor Reanalysis; 1979-2008) and 2 years of assimilated SCIAMACHY (SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY) ozone data (2009-2010). The two-dimensionality in this data set allows us to perform the regressions locally and investigate spatial patterns of regression coefficients and their explanatory power. Seasonal dependencies of ozone on regressors are included in the analysis. A new physically oriented model is developed to parameterize stratospheric ozone. Ozone variations on nonseasonal timescales are parameterized by explanatory variables describing the solar cycle, stratospheric aerosols, the quasi-biennial oscillation (QBO), El Niño-Southern Oscillation (ENSO) and stratospheric alternative halogens which are parameterized by the effective equivalent stratospheric chlorine (EESC). For several explanatory variables, seasonally adjusted versions of these explanatory variables are constructed to account for the difference in their effect on ozone throughout the year. To account for seasonal variation in ozone, explanatory variables describing the polar vortex, geopotential height, potential vorticity and average day length are included. Results of this regression model are compared to that of a similar analysis based on a more commonly applied statistically oriented model. The physically oriented model provides spatial patterns in the regression results for each explanatory variable. The EESC has a significant depleting effect on ozone at mid- and high latitudes, the solar cycle affects ozone positively mostly in the Southern Hemisphere, stratospheric aerosols affect ozone negatively at high northern latitudes, the effect of QBO is positive and negative in the tropics and mid- to high latitudes, respectively, and ENSO affects ozone negatively

  15. Urinary arsenic concentration adjustment factors and malnutrition.

    PubMed

    Nermell, Barbro; Lindberg, Anna-Lena; Rahman, Mahfuzar; Berglund, Marika; Persson, Lars Ake; El Arifeen, Shams; Vahter, Marie

    2008-02-01

    This study aims at evaluating the suitability of adjusting urinary concentrations of arsenic, or any other urinary biomarker, for variations in urine dilution by creatinine and specific gravity in a malnourished population. We measured the concentrations of metabolites of inorganic arsenic, creatinine and specific gravity in spot urine samples collected from 1466 individuals, 5-88 years of age, in Matlab, rural Bangladesh, where arsenic-contaminated drinking water and malnutrition are prevalent (about 30% of the adults had body mass index (BMI) below 18.5 kg/m(2)). The urinary concentrations of creatinine were low; on average 0.55 g/L in the adolescents and adults and about 0.35 g/L in the 5-12 years old children. Therefore, adjustment by creatinine gave much higher numerical values for the urinary arsenic concentrations than did the corresponding data expressed as microg/L, adjusted by specific gravity. As evaluated by multiple regression analyses, urinary creatinine, adjusted by specific gravity, was more affected by body size, age, gender and season than was specific gravity. Furthermore, urinary creatinine was found to be significantly associated with urinary arsenic, which further disqualifies the creatinine adjustment. PMID:17900556
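
    The two dilution adjustments compared in the study can be sketched as follows. The specific-gravity reference value of 1.020 and the numbers are illustrative assumptions, not values from the paper:

```python
def sg_adjust(conc, sg, sg_ref=1.020):
    """Specific-gravity adjustment of a urinary concentration: scales
    the measured value to a reference specific gravity (1.020 is an
    illustrative population value; studies use different ones)."""
    return conc * (sg_ref - 1.0) / (sg - 1.0)

def creatinine_adjust(conc_ug_per_l, creatinine_g_per_l):
    """Creatinine adjustment: express the analyte per gram of
    creatinine excreted."""
    return conc_ug_per_l / creatinine_g_per_l

# A dilute sample with the low urinary creatinine typical of
# malnutrition: the creatinine-adjusted value comes out numerically
# much higher than the SG-adjusted concentration.
arsenic = 50.0                             # ug/L in the spot sample
print(sg_adjust(arsenic, 1.010))           # 100.0 ug/L at SG 1.020
print(creatinine_adjust(arsenic, 0.35))    # ~142.9 ug/g creatinine
```

    Because creatinine excretion itself depends on muscle mass, age, and diet, dividing by a systematically low creatinine inflates the adjusted biomarker, which is the bias the study documents in this population.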

  16. Regression, least squares, and the general version of inclusive fitness.

    PubMed

    Rousset, François

    2015-11-01

    A general version of inclusive fitness based on regression is rederived with minimal mathematics and directly from the verbal interpretation of its terms that motivated the original formulation of the inclusive fitness concept. This verbal interpretation is here extended to provide the two relationships required to determine the two coefficients -c and b. These coefficients retain their definition as expected effects on the fitness of an individual, respectively of a change in allelic state of this individual, and of correlated change in allelic state of social partners. The known least-squares formulation of the relationships determining b and c can be immediately deduced and shown to be equivalent to this new formulation. These results make clear that criticisms of the mathematical tools (in particular least-squares regression) previously used to derive this version of inclusive fitness are misdirected.

  17. Correlation Weights in Multiple Regression

    ERIC Educational Resources Information Center

    Waller, Niels G.; Jones, Jeff A.

    2010-01-01

    A general theory on the use of correlation weights in linear prediction has yet to be proposed. In this paper we take initial steps in developing such a theory by describing the conditions under which correlation weights perform well in population regression models. Using OLS weights as a comparison, we define cases in which the two weighting…

  18. Weighting Regressions by Propensity Scores

    ERIC Educational Resources Information Center

    Freedman, David A.; Berk, Richard A.

    2008-01-01

    Regressions can be weighted by propensity scores in order to reduce bias. However, weighting is likely to increase random error in the estimates, and to bias the estimated standard errors downward, even when selection mechanisms are well understood. Moreover, in some cases, weighting will increase the bias in estimated causal parameters. If…

  19. Multiple Regression: A Leisurely Primer.

    ERIC Educational Resources Information Center

    Daniel, Larry G.; Onwuegbuzie, Anthony J.

    Multiple regression is a useful statistical technique when the researcher is considering situations in which variables of interest are theorized to be multiply caused. It may also be useful in those situations in which the researchers is interested in studies of predictability of phenomena of interest. This paper provides an introduction to…

  20. Cactus: An Introduction to Regression

    ERIC Educational Resources Information Center

    Hyde, Hartley

    2008-01-01

    When the author first used "VisiCalc," the author thought it a very useful tool when he had the formulas. But how could he design a spreadsheet if there was no known formula for the quantities he was trying to predict? A few months later, the author relates he learned to use multiple linear regression software and suddenly it all clicked into…

  1. Ridge Regression for Interactive Models.

    ERIC Educational Resources Information Center

    Tate, Richard L.

    1988-01-01

    An exploratory study of the value of ridge regression for interactive models is reported. Assuming that the linear terms in a simple interactive model are centered to eliminate non-essential multicollinearity, a variety of common models, representing both ordinal and disordinal interactions, are shown to have "orientations" that are favorable to…

  2. Quantile Regression with Censored Data

    ERIC Educational Resources Information Center

    Lin, Guixian

    2009-01-01

    The Cox proportional hazards model and the accelerated failure time model are frequently used in survival data analysis. They are powerful, yet have limitation due to their model assumptions. Quantile regression offers a semiparametric approach to model data with possible heterogeneity. It is particularly powerful for censored responses, where the…

  3. Logistic regression: a brief primer.

    PubMed

    Stoltzfus, Jill C

    2011-10-01

    Regression techniques are versatile in their application to medical research because they can measure associations, predict outcomes, and control for confounding variable effects. As one such technique, logistic regression is an efficient and powerful way to analyze the effect of a group of independent variables on a binary outcome by quantifying each independent variable's unique contribution. Using components of linear regression reflected in the logit scale, logistic regression iteratively identifies the strongest linear combination of variables with the greatest probability of detecting the observed outcome. Important considerations when conducting logistic regression include selecting independent variables, ensuring that relevant assumptions are met, and choosing an appropriate model building strategy. For independent variable selection, one should be guided by such factors as accepted theory, previous empirical investigations, clinical considerations, and univariate statistical analyses, with acknowledgement of potential confounding variables that should be accounted for. Basic assumptions that must be met for logistic regression include independence of errors, linearity in the logit for continuous variables, absence of multicollinearity, and lack of strongly influential outliers. Additionally, there should be an adequate number of events per independent variable to avoid an overfit model, with commonly recommended minimum "rules of thumb" ranging from 10 to 20 events per covariate. Regarding model building strategies, the three general types are direct/standard, sequential/hierarchical, and stepwise/statistical, with each having a different emphasis and purpose. Before reaching definitive conclusions from the results of any of these methods, one should formally quantify the model's internal validity (i.e., replicability within the same data set) and external validity (i.e., generalizability beyond the current sample). The resulting logistic regression model

  4. Reference Material for Seebeck Coefficients

    NASA Astrophysics Data System (ADS)

    Edler, F.; Lenz, E.; Haupt, S.

    2015-03-01

    This paper describes a measurement method and a measuring system to determine absolute Seebeck coefficients of thermoelectric bulk materials with the aim of establishing reference materials for Seebeck coefficients. Reference materials with known thermoelectric properties are essential to allow a reliable benchmarking of different thermoelectric materials for application in thermoelectric generators to convert thermal into electrical energy or vice versa. A temperature gradient (1 to 8) K is induced across the sample, and the resulting voltage is measured by using two differential Au/Pt thermocouples. On the basis of the known absolute Seebeck coefficients of Au and Pt, the unknown Seebeck coefficient of the sample is calculated. The measurements are performed in inert atmospheres and at low pressure (30 to 60) mbar in the temperature range between 300 K and 860 K. The measurement results of the Seebeck coefficients of metallic and semiconducting samples are presented. Achievable relative measurement uncertainties of the Seebeck coefficient are on the order of a few percent.
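
    The calculation described above, recovering a sample's absolute Seebeck coefficient from a voltage measured against leads of known Seebeck coefficient, can be sketched as follows; the numbers and sign convention are illustrative assumptions, not values from the paper:

```python
def absolute_seebeck(delta_v, delta_t, s_ref):
    """Absolute Seebeck coefficient of a sample measured against
    reference leads with known Seebeck coefficient s_ref:
        V = (S_sample - S_ref) * dT  =>  S_sample = V/dT + S_ref.
    Sign conventions depend on the wiring of the setup."""
    return delta_v / delta_t + s_ref

# e.g. 80 uV across a 4 K gradient, read out with Pt leads (~ -5 uV/K)
print(absolute_seebeck(80e-6, 4.0, -5e-6))  # 1.5e-05 V/K
```

    In practice the voltage is recorded at several gradient values and the slope dV/dT is taken from a fit rather than a single point, which reduces the influence of thermal offsets.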

  5. Coefficient Alpha: A Reliability Coefficient for the 21st Century?

    ERIC Educational Resources Information Center

    Yang, Yanyun; Green, Samuel B.

    2011-01-01

    Coefficient alpha is almost universally applied to assess reliability of scales in psychology. We argue that researchers should consider alternatives to coefficient alpha. Our preference is for structural equation modeling (SEM) estimates of reliability because they are informative and allow for an empirical evaluation of the assumptions…

  6. Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.

    ERIC Educational Resources Information Center

    Algina, James; Olejnik, Stephen

    2000-01-01

    Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)

  7. Analysis of multilevel grouped survival data with time-varying regression coefficients.

    PubMed

    Wong, May C M; Lam, K F; Lo, Edward C M

    2011-02-10

    Correlated or multilevel grouped survival data are common in medical and dental research. Two common approaches to analyzing such data are the marginal and the random-effects approaches. Models and methods in the literature generally assume that the treatment effect is constant over time. A researcher may be interested in studying whether the treatment effects in a clinical trial vary over time, say fade out gradually. This is of particular clinical value when studying the long-term effect of a treatment. This paper proposes to extend the random-effects grouped proportional hazards models by incorporating possibly time-varying covariate effects into the model in terms of a state-space formulation. The proposed model is very flexible, and estimation can be performed using the MCMC approach with non-informative priors in the Bayesian framework. The method is applied to a data set from a prospective clinical trial investigating the effectiveness of silver diamine fluoride (SDF) and sodium fluoride (NaF) varnish in arresting active dentin caries in Chinese preschool children. It is shown that the treatment groups with caries removal prior to the topical fluoride applications are most effective in shortening the arrest times in the first 6-month interval, but their effects fade out rapidly thereafter. The effects of treatment groups without caries removal prior to topical fluoride application drop at a very slow rate and can be considered more or less constant over time. Applications of the SDF solution are found to be more effective than applications of the NaF varnish.

  8. Recursive least squares method of regression coefficients estimation as a special case of Kalman filter

    NASA Astrophysics Data System (ADS)

    Borodachev, S. M.

    2016-06-01

    The simple derivation of the recursive least squares (RLS) method equations is given as a special case of Kalman filter estimation of a constant system state under changing observation conditions. A numerical example illustrates the application of RLS to the multicollinearity problem.
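
    The RLS recursion in its Kalman-filter form can be sketched as follows; the data are invented, and delta sets the diffuse prior on the coefficient vector:

```python
import numpy as np

def rls(X, y, delta=1000.0):
    """Recursive least squares, written as a Kalman filter whose state
    (the coefficient vector) is constant, so the only update comes
    from each new observation."""
    p = X.shape[1]
    w = np.zeros(p)            # state estimate: regression coefficients
    P = delta * np.eye(p)      # estimate covariance (diffuse prior)
    for x, t in zip(X, y):
        k = P @ x / (1.0 + x @ P @ x)   # gain for this observation
        w = w + k * (t - x @ w)         # innovation update
        P = P - np.outer(k, x @ P)      # covariance downdate
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=500)
w = rls(X, y)                  # recovers true_w to high accuracy
```

    After all observations are processed, w matches the batch least-squares solution up to the influence of the prior, and the diagonal of P tracks how well each coefficient is pinned down, which degrades when the regressors are nearly collinear.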

  9. Regression Verification Using Impact Summaries

    NASA Technical Reports Server (NTRS)

    Backes, John; Person, Suzette J.; Rungta, Neha; Thachuk, Oksana

    2013-01-01

    Regression verification techniques are used to prove equivalence of syntactically similar programs. Checking equivalence of large programs, however, can be computationally expensive. Existing regression verification techniques rely on abstraction and decomposition techniques to reduce the computational effort of checking equivalence of the entire program. These techniques are sound but not complete. In this work, we propose a novel approach to improve scalability of regression verification by classifying the program behaviors generated during symbolic execution as either impacted or unimpacted. Our technique uses a combination of static analysis and symbolic execution to generate summaries of impacted program behaviors. The impact summaries are then checked for equivalence using an o-the-shelf decision procedure. We prove that our approach is both sound and complete for sequential programs, with respect to the depth bound of symbolic execution. Our evaluation on a set of sequential C artifacts shows that reducing the size of the summaries can help reduce the cost of software equivalence checking. Various reduction, abstraction, and compositional techniques have been developed to help scale software verification techniques to industrial-sized systems. Although such techniques have greatly increased the size and complexity of systems that can be checked, analysis of large software systems remains costly. Regression analysis techniques, e.g., regression testing [16], regression model checking [22], and regression verification [19], restrict the scope of the analysis by leveraging the differences between program versions. These techniques are based on the idea that if code is checked early in development, then subsequent versions can be checked against a prior (checked) version, leveraging the results of the previous analysis to reduce analysis cost of the current version. 
Regression verification addresses the problem of proving equivalence of closely related program
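
    The impacted/unimpacted idea in this record can be illustrated with a toy sketch (Python standing in for the authors' symbolic-execution machinery; the functions and the static "impact" check below are invented):

```python
# Toy illustration (invented functions, not the authors' tool): re-check
# only "impacted" inputs, i.e. those whose execution reaches code that
# changed between the two versions.

def v1(x):
    if x < 0:
        return -x        # unchanged branch
    return x * 2         # this branch changed in v2

def v2(x):
    if x < 0:
        return -x        # unchanged branch
    return x + x         # syntactically different, semantically equivalent

def reaches_changed_code(x):
    # Stand-in for the static-analysis step: a diff says only the
    # non-negative branch changed.
    return x >= 0

def equivalent_on(inputs):
    # Unimpacted behaviors are skipped; impacted ones are checked directly.
    impacted = [x for x in inputs if reaches_changed_code(x)]
    return all(v1(x) == v2(x) for x in impacted), len(impacted)

ok, n_checked = equivalent_on(range(-5, 6))
print(ok, n_checked)   # True 6: only 6 of the 11 inputs were re-checked
```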

  10. Simple, Internally Adjustable Valve

    NASA Technical Reports Server (NTRS)

    Burley, Richard K.

    1990-01-01

    Valve containing simple in-line, adjustable, flow-control orifice made from ordinary plumbing fitting and two allen setscrews. Construction of valve requires only simple drilling, tapping, and grinding. Orifice installed in existing fitting, avoiding changes in rest of plumbing.

  11. Self Adjusting Sunglasses

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Corning Glass Works' Serengeti Driver sunglasses are unique in that their lenses self-adjust and filter light while suppressing glare. They eliminate more than 99% of the ultraviolet rays in sunlight. The frames are based on the NASA Anthropometric Source Book.

  12. Rural to Urban Adjustment

    ERIC Educational Resources Information Center

    Abramson, Jane A.

    Personal interviews with 100 former farm operators living in Saskatoon, Saskatchewan, were conducted in an attempt to understand the nature of the adjustment process caused by migration from rural to urban surroundings. Requirements for inclusion in the study were that respondents had owned or operated a farm for at least 3 years, had left their…

  13. Self adjusting inclinometer

    DOEpatents

    Hunter, Steven L.

    2002-01-01

    An inclinometer utilizing synchronous demodulation for high resolution and electronic offset adjustment provides a wide dynamic range without any moving components. A device encompassing a tiltmeter and accompanying electronic circuitry provides quasi-leveled tilt sensors that detect highly resolved tilt change without signal saturation.

  14. Joint regression analysis and AMMI model applied to oat improvement

    NASA Astrophysics Data System (ADS)

    Oliveira, A.; Oliveira, T. A.; Mejza, S.

    2012-09-01

    In our work we present an application of some biometrical methods useful in genotype stability evaluation, namely the AMMI model, Joint Regression Analysis (JRA) and multiple comparison tests. A genotype stability analysis of oat (Avena sativa L.) grain yield was carried out using data from the Portuguese Plant Breeding Board: a sample of 22 different genotypes grown during the years 2002, 2003 and 2004 in six locations. In Ferreira et al. (2006) the authors state the relevance of the regression models and of the Additive Main Effects and Multiplicative Interactions (AMMI) model to study and to estimate phenotypic stability effects. As computational techniques we use the Zigzag algorithm to estimate the regression coefficients and the agricolae package available in R software for the AMMI model analysis.
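
    A minimal numerical sketch of the AMMI decomposition named above, on invented yield data (numpy's SVD standing in for the agricolae implementation; the Zigzag/JRA step is omitted):

```python
import numpy as np

# Toy genotype-by-environment yield table (invented values, not the
# Portuguese trial data).
rng = np.random.default_rng(0)
Y = rng.normal(50, 5, size=(6, 4))          # 6 genotypes x 4 environments

mu = Y.mean()
g = Y.mean(axis=1) - mu                     # genotype main effects
e = Y.mean(axis=0) - mu                     # environment main effects
resid = Y - mu - g[:, None] - e[None, :]    # interaction (GxE) residuals

# Multiplicative interaction terms: SVD of the residual matrix.
U, s, Vt = np.linalg.svd(resid, full_matrices=False)
ammi1 = s[0] * np.outer(U[:, 0], Vt[0, :])  # first multiplicative component

explained = s[0] ** 2 / (s ** 2).sum()      # share of GxE captured by AMMI1
print(round(explained, 3))
```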

  15. Regression for skewed biomarker outcomes subject to pooling.

    PubMed

    Mitchell, Emily M; Lyles, Robert H; Manatunga, Amita K; Danaher, Michelle; Perkins, Neil J; Schisterman, Enrique F

    2014-03-01

    Epidemiological studies involving biomarkers are often hindered by prohibitively expensive laboratory tests. Strategically pooling specimens prior to performing these lab assays has been shown to effectively reduce cost with minimal information loss in a logistic regression setting. When the goal is to perform regression with a continuous biomarker as the outcome, regression analysis of pooled specimens may not be straightforward, particularly if the outcome is right-skewed. In such cases, we demonstrate that a slight modification of a standard multiple linear regression model for poolwise data can provide valid and precise coefficient estimates when pools are formed by combining biospecimens from subjects with identical covariate values. When these x-homogeneous pools cannot be formed, we propose a Monte Carlo expectation maximization (MCEM) algorithm to compute maximum likelihood estimates (MLEs). Simulation studies demonstrate that these analytical methods provide essentially unbiased estimates of coefficient parameters as well as their standard errors when appropriate assumptions are met. Furthermore, we show how one can utilize the fully observed covariate data to inform the pooling strategy, yielding a high level of statistical efficiency at a fraction of the total lab cost. PMID:24521420
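
    A hedged sketch of the poolwise idea as we read it: with x-homogeneous pools, the pool-mean outcome follows the same linear model with variance shrunk by the pool size, so weighted least squares with pool-size weights recovers the coefficients. Data and pooling scheme below are invented:

```python
import numpy as np

# Invented data: 4 subjects at each of 10 covariate values, pooled within
# each value (x-homogeneous pools). The pool mean keeps the same linear
# model with variance sigma^2 / pool_size, so WLS with pool-size weights
# recovers the coefficients.
rng = np.random.default_rng(1)
beta = np.array([2.0, 0.5])                    # intercept, slope
x_subj = np.repeat(np.arange(10.0), 4)
y_subj = beta[0] + beta[1] * x_subj + rng.normal(0, 1, x_subj.size)

pool_x = np.arange(10.0)
pool_y = y_subj.reshape(10, 4).mean(axis=1)    # assay on the pooled specimen
w = np.full(10, 4.0)                           # pool sizes as WLS weights

X = np.column_stack([np.ones(10), pool_x])
beta_hat = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * pool_y))
print(beta_hat)    # close to the true [2.0, 0.5]
```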

  16. Tools to support interpreting multiple regression in the face of multicollinearity.

    PubMed

    Kraha, Amanda; Turner, Heather; Nimon, Kim; Zientek, Linda Reichwein; Henson, Robin K

    2012-01-01

    While multicollinearity may increase the difficulty of interpreting multiple regression (MR) results, it should not cause undue problems for the knowledgeable researcher. In the current paper, we argue that rather than using one technique to investigate regression results, researchers should consider multiple indices to understand the contributions that predictors make not only to a regression model, but to each other as well. Some of the techniques to interpret MR effects include, but are not limited to, correlation coefficients, beta weights, structure coefficients, all possible subsets regression, commonality coefficients, dominance weights, and relative importance weights. This article will review a set of techniques to interpret MR effects, identify the elements of the data on which the methods focus, and identify statistical software to support such analyses.
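
    Two of the indices listed, beta weights and structure coefficients, can be computed directly; a sketch on invented collinear data:

```python
import numpy as np

# Invented, deliberately collinear predictors.
rng = np.random.default_rng(2)
n = 200
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)
y = 1.0 * x1 + 0.5 * x2 + rng.normal(size=n)

Z = np.column_stack([x1, x2])
Zs = (Z - Z.mean(axis=0)) / Z.std(axis=0)       # standardize predictors
ys = (y - y.mean()) / y.std()

beta = np.linalg.lstsq(Zs, ys, rcond=None)[0]   # standardized beta weights
yhat = Zs @ beta
structure = np.array([np.corrcoef(Zs[:, j], yhat)[0, 1] for j in range(2)])

print(beta)        # unique contributions; unstable under collinearity
print(structure)   # correlation of each predictor with the predicted scores
```

    A predictor can carry a small beta weight yet a large structure coefficient when a collinear partner absorbs its shared variance, which is why the authors recommend consulting several indices.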

  17. EMPIRICAL LIKELIHOOD INFERENCE FOR THE COX MODEL WITH TIME-DEPENDENT COEFFICIENTS VIA LOCAL PARTIAL LIKELIHOOD

    PubMed Central

    Sun, Yanqing; Sundaram, Rajeshwari; Zhao, Yichuan

    2009-01-01

    The Cox model with time-dependent coefficients has been studied by a number of authors recently. In this paper, we develop empirical likelihood (EL) pointwise confidence regions for the time-dependent regression coefficients via local partial likelihood smoothing. The EL simultaneous confidence bands for a linear combination of the coefficients are also derived based on the strong approximation methods. The empirical likelihood ratio is formulated through the local partial log-likelihood for the regression coefficient functions. Our numerical studies indicate that the EL pointwise/simultaneous confidence regions/bands have satisfactory finite sample performances. Compared with the confidence regions derived directly based on the asymptotic normal distribution of the local constant estimator, the EL confidence regions are overall tighter and can better capture the curvature of the underlying regression coefficient functions. Two data sets, the gastric cancer data and the Mayo Clinic primary biliary cirrhosis data, are analyzed using the proposed method. PMID:19838322

  18. 3D Regression Heat Map Analysis of Population Study Data.

    PubMed

    Klemm, Paul; Lawonn, Kai; Glaßer, Sylvia; Niemann, Uli; Hegenscheid, Katrin; Völzke, Henry; Preim, Bernhard

    2016-01-01

    Epidemiological studies comprise heterogeneous data about a subject group to define disease-specific risk factors. These data contain information (features) about a subject's lifestyle, medical status as well as medical image data. Statistical regression analysis is used to evaluate these features and to identify feature combinations indicating a disease (the target feature). We propose an analysis approach of epidemiological data sets by incorporating all features in an exhaustive regression-based analysis. This approach combines all independent features w.r.t. a target feature. It provides a visualization that reveals insights into the data by highlighting relationships. The 3D Regression Heat Map, a novel 3D visual encoding, acts as an overview of the whole data set. It shows all combinations of two to three independent features with a specific target disease. Slicing through the 3D Regression Heat Map allows for the detailed analysis of the underlying relationships. Expert knowledge about disease-specific hypotheses can be included into the analysis by adjusting the regression model formulas. Furthermore, the influences of features can be assessed using a difference view comparing different calculation results. We applied our 3D Regression Heat Map method to a hepatic steatosis data set to reproduce results from a data mining-driven analysis. A qualitative analysis was conducted on a breast density data set. We were able to derive new hypotheses about relations between breast density and breast lesions with breast cancer. With the 3D Regression Heat Map, we present a visual overview of epidemiological data that allows for the first time an interactive regression-based analysis of large feature sets with respect to a disease. PMID:26529689
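
    The exhaustive step can be sketched as follows (our simplification: plain OLS R-squared over all two-feature combinations, rather than the paper's regression models and 3D visual encoding):

```python
import numpy as np
from itertools import combinations

# Invented features; only columns 0 and 3 carry signal for the target.
rng = np.random.default_rng(3)
n, p = 150, 5
X = rng.normal(size=(n, p))
target = X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=n)

def r_squared(cols):
    A = np.column_stack([np.ones(n), X[:, list(cols)]])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    resid = target - A @ coef
    return 1 - resid.var() / target.var()

# Exhaustive sweep over every two-feature combination.
scores = {cols: r_squared(cols) for cols in combinations(range(p), 2)}
best = max(scores, key=scores.get)
print(best)    # very likely (0, 3), the informative pair
```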

  20. Quantile Regression With Measurement Error

    PubMed Central

    Wei, Ying; Carroll, Raymond J.

    2010-01-01

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. PMID:20305802

  1. Precision and Recall for Regression

    NASA Astrophysics Data System (ADS)

    Torgo, Luis; Ribeiro, Rita

    Cost sensitive prediction is a key task in many real world applications. Most existing research in this area deals with classification problems. This paper addresses a related regression problem: the prediction of rare extreme values of a continuous variable. These values are often regarded as outliers and removed from posterior analysis. However, for many applications (e.g. in finance, meteorology, biology, etc.) these are the key values that we want to accurately predict. Any learning method obtains models by optimizing some preference criteria. In this paper we propose new evaluation criteria that are more adequate for these applications. We describe a generalization for regression of the concepts of precision and recall often used in classification. Using these new evaluation metrics we are able to focus the evaluation of predictive models on the cases that really matter for these applications. Our experiments indicate the advantages of the use of these new measures when comparing predictive models in the context of our target applications.
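
    A simplified sketch of the idea (not the authors' exact utility-based definitions): treat values at or above a relevance threshold as the rare events of interest, then score a regression model on how well it recovers them:

```python
# Simplified sketch (not the authors' exact utility-based definitions):
# values at or above a relevance threshold are the rare events of interest.
def precision_recall_for_regression(y_true, y_pred, thresh):
    true_ext = {i for i, y in enumerate(y_true) if y >= thresh}
    pred_ext = {i for i, y in enumerate(y_pred) if y >= thresh}
    hits = true_ext & pred_ext
    precision = len(hits) / len(pred_ext) if pred_ext else 0.0
    recall = len(hits) / len(true_ext) if true_ext else 0.0
    return precision, recall

y_true = [0.1, 0.2, 5.0, 0.3, 7.0, 0.2]
y_pred = [0.2, 4.8, 4.9, 0.1, 6.5, 0.3]
p, r = precision_recall_for_regression(y_true, y_pred, thresh=4.0)
print(p, r)   # 2 of the 3 flagged predictions are true extremes; both extremes found
```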

  2. Interquantile Shrinkage and Variable Selection in Quantile Regression

    PubMed Central

    Jiang, Liewen; Bondell, Howard D.; Wang, Huixia Judy

    2014-01-01

    Examination of multiple conditional quantile functions provides a comprehensive view of the relationship between the response and covariates. In situations where quantile slope coefficients share some common features, estimation efficiency and model interpretability can be improved by utilizing such commonality across quantiles. Furthermore, elimination of irrelevant predictors will also aid in estimation and interpretation. These motivations lead to the development of two penalization methods, which can identify the interquantile commonality and nonzero quantile coefficients simultaneously. The developed methods are based on a fused penalty that encourages sparsity of both quantile coefficients and interquantile slope differences. The oracle properties of the proposed penalization methods are established. Through numerical investigations, it is demonstrated that the proposed methods lead to simpler model structure and higher estimation efficiency than the traditional quantile regression estimation. PMID:24653545

  3. An Empirical Adjustment of the Kuder-Richardson 21 Reliability Coefficient to Better Estimate the Kuder-Richardson 20 Coefficient.

    ERIC Educational Resources Information Center

    Wilson, Pamela W.; And Others

    The purpose of this study was to present an empirical correction of the KR21 (Kuder Richardson test reliability) formula that not only yields a closer approximation to the numerical value of the KR20 without overestimation, but also simplifies computation. This correction was accomplished by introducing several correction factors to the numerator…
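
    For reference, the two coefficients the correction relates (the empirical correction factors themselves are in the paper and not reproduced here):

```python
# Dichotomous item responses, one row per examinee (invented data).
data = [
    [1, 1, 1, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 1, 1, 1],
    [1, 1, 0, 0, 1],
    [0, 0, 1, 0, 0],
]

k = len(data[0])                        # number of items
n = len(data)
totals = [sum(row) for row in data]
mean_t = sum(totals) / n
var_t = sum((t - mean_t) ** 2 for t in totals) / n   # population variance

# KR20 uses per-item proportions correct; KR21 replaces them with the mean.
pq = 0.0
for j in range(k):
    p_j = sum(row[j] for row in data) / n
    pq += p_j * (1 - p_j)

kr20 = k / (k - 1) * (1 - pq / var_t)
kr21 = k / (k - 1) * (1 - mean_t * (k - mean_t) / (k * var_t))
print(round(kr20, 4), round(kr21, 4))   # KR21 never exceeds KR20
```

    Because item variance p(1-p) is concave, KR21 always underestimates KR20 unless all items share the same difficulty, which is the gap the correction targets.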

  4. Graph characterization via Ihara coefficients.

    PubMed

    Ren, Peng; Wilson, Richard C; Hancock, Edwin R

    2011-02-01

    The novel contributions of this paper are twofold. First, we demonstrate how to characterize unweighted graphs in a permutation-invariant manner using the polynomial coefficients from the Ihara zeta function, i.e., the Ihara coefficients. Second, we generalize the definition of the Ihara coefficients to edge-weighted graphs. For an unweighted graph, the Ihara zeta function is the reciprocal of a quasi characteristic polynomial of the adjacency matrix of the associated oriented line graph. Since the Ihara zeta function has poles that give rise to infinities, the most convenient numerically stable representation is to work with the coefficients of the quasi characteristic polynomial. Moreover, the polynomial coefficients are invariant to vertex order permutations and also convey information concerning the cycle structure of the graph. To generalize the representation to edge-weighted graphs, we make use of the reduced Bartholdi zeta function. We prove that the computation of the Ihara coefficients for unweighted graphs is a special case of our proposed method for unit edge weights. We also present a spectral analysis of the Ihara coefficients and indicate their advantages over other graph spectral methods. We apply the proposed graph characterization method to capturing graph-class structure and clustering graphs. Experimental results reveal that the Ihara coefficients are more effective than methods based on Laplacian spectra.
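
    A sketch of the coefficient computation for a small unweighted graph, following the construction described (adjacency operator of the oriented line graph, i.e. arc-to-arc transitions without backtracking). The details are our reading of the construction, not the authors' code:

```python
import numpy as np

def ihara_coefficients(A):
    # Characteristic-polynomial coefficients of the oriented line graph's
    # adjacency operator T; arcs are ordered vertex pairs along each edge.
    n = A.shape[0]
    arcs = [(u, v) for u in range(n) for v in range(n) if A[u, v]]
    m = len(arcs)
    T = np.zeros((m, m))
    for i, (u, v) in enumerate(arcs):
        for j, (v2, w) in enumerate(arcs):
            if v2 == v and w != u:      # follow the arc, no backtracking
                T[i, j] = 1
    return np.poly(T)                   # monic, leading coefficient first

# Triangle plus a pendant edge, and the same graph with vertices relabeled.
A = np.zeros((4, 4))
for u, v in [(0, 1), (1, 2), (2, 0), (2, 3)]:
    A[u, v] = A[v, u] = 1
perm = [2, 0, 3, 1]
P = np.eye(4)[perm]
c1 = ihara_coefficients(A)
c2 = ihara_coefficients(P @ A @ P.T)
print(np.allclose(c1, c2))   # True: coefficients are permutation-invariant
```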

  5. Coefficient of glucose variation is independently associated with mortality in critically ill patients receiving intravenous insulin

    PubMed Central

    2014-01-01

    Introduction: Both patient- and context-specific factors may explain the conflicting evidence regarding glucose control in critically ill patients. Blood glucose variability appears to correlate with mortality, but this variability may be an indicator of disease severity, rather than an independent predictor of mortality. We assessed blood glucose coefficient of variation as an independent predictor of mortality in the critically ill. Methods: We used eProtocol-Insulin, an electronic protocol for managing intravenous insulin with explicit rules, high clinician compliance, and reproducibility. We studied critically ill patients from eight hospitals, excluding patients with diabetic ketoacidosis and patients supported with eProtocol-Insulin for < 24 hours or with < 10 glucose measurements. Our primary clinical outcome was 30-day all-cause mortality. We performed multivariable logistic regression, with covariates of age, gender, glucose coefficient of variation (standard deviation/mean), Charlson comorbidity score, acute physiology score, presence of diabetes, and occurrence of hypoglycemia < 60 mg/dL. Results: We studied 6101 critically ill adults. Coefficient of variation was independently associated with 30-day mortality (odds ratio 1.23 for every 10% increase, P < 0.001), even after adjustment for hypoglycemia, age, disease severity, and comorbidities. The association was stronger in non-diabetics (OR = 1.37, P < 0.001) than in diabetics (OR = 1.15, P = 0.001). Conclusions: Blood glucose variability is associated with mortality and is independent of hypoglycemia, disease severity, and comorbidities. Future studies should evaluate blood glucose variability. PMID:24886864
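
    The predictor in this model is simple to compute; a sketch on invented glucose series (mg/dL):

```python
from statistics import mean, pstdev

# Invented glucose series (mg/dL); CV = standard deviation / mean.
def glucose_cv(readings):
    return pstdev(readings) / mean(readings)

stable   = [110, 115, 108, 112, 109, 114]
swinging = [80, 190, 95, 220, 70, 180]
print(round(glucose_cv(stable), 3), round(glucose_cv(swinging), 3))
# the swinging series has the far larger coefficient of variation
```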

  6. Deep Human Parsing with Active Template Regression.

    PubMed

    Liang, Xiaodan; Liu, Si; Shen, Xiaohui; Yang, Jianchao; Liu, Luoqi; Dong, Jian; Lin, Liang; Yan, Shuicheng

    2015-12-01

    In this work, the human parsing task, namely decomposing a human image into semantic fashion/body regions, is formulated as an active template regression (ATR) problem, where the normalized mask of each fashion/body item is expressed as the linear combination of the learned mask templates, and then morphed to a more precise mask with the active shape parameters, including position, scale and visibility of each semantic region. The mask template coefficients and the active shape parameters together can generate the human parsing results, and are thus called the structure outputs for human parsing. The deep Convolutional Neural Network (CNN) is utilized to build the end-to-end relation between the input human image and the structure outputs for human parsing. More specifically, the structure outputs are predicted by two separate networks. The first CNN network uses max-pooling and is designed to predict the template coefficients for each label mask, while the second CNN network is without max-pooling to preserve sensitivity to label mask position and accurately predict the active shape parameters. For a new image, the structure outputs of the two networks are fused to generate the probability of each label for each pixel, and super-pixel smoothing is finally used to refine the human parsing result. Comprehensive evaluations on a large dataset well demonstrate the significant superiority of the ATR framework over other state-of-the-art methods for human parsing. In particular, the F1-score reaches 64.38 percent by our ATR framework, significantly higher than 44.76 percent based on the state-of-the-art algorithm [28]. PMID:26539846

  8. Precision adjustable stage

    DOEpatents

    Cutburth, Ronald W.; Silva, Leonard L.

    1988-01-01

    An improved mounting stage of the type used for the detection of laser beams is disclosed. A stage center block is mounted on each of two opposite sides by a pair of spaced ball bearing tracks which provide stability as well as simplicity. The use of the spaced ball bearing pairs in conjunction with an adjustment screw which also provides support eliminates extraneous stabilization components and permits maximization of the area of the center block laser transmission hole.

  9. Adjustable vane windmills

    SciTech Connect

    Ducker, W.L.

    1982-09-14

    A system of rotatably and pivotally mounted radially extended bent supports for radially extending windmill rotor vanes in combination with axially movable radially extended control struts connected to the vanes with semi-automatic and automatic torque and other sensing and servo units provide automatic adjustment of the windmill vanes relative to their axes of rotation to produce mechanical output at constant torque or at constant speed or electrical quantities dependent thereon.

  10. Adjustable vane windmills

    SciTech Connect

    Ducker, W.L.

    1980-01-15

    A system of rotatably and pivotally mounted radially extended bent supports for radially extending windmill rotor vanes in combination with axially movable radially extended control struts connected to the vanes with semi-automatic and automatic torque and other sensing and servo units provide automatic adjustment of the windmill vanes relative to their axes of rotation to produce mechanical output at constant torque or at constant speed or electrical quantities dependent thereon.

  11. Adjustable vane windmills

    SciTech Connect

    Ducker, W.L.

    1982-09-07

    A system of rotatably and pivotally mounted radially extended bent supports for radially extending windmill rotor vanes in combination with axially movable radially extended control struts connected to the vanes with semi-automatic and automatic torque and other sensing and servo units provide automatic adjustment of the windmill vanes relative to their axes of rotation to produce mechanical output at constant torque or at constant speed or electrical quantities dependent thereon.

  12. Adjustable Autonomy Testbed

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Schrenkenghost, Debra K.

    2001-01-01

    The Adjustable Autonomy Testbed (AAT) is a simulation-based testbed located in the Intelligent Systems Laboratory in the Automation, Robotics and Simulation Division at NASA Johnson Space Center. The purpose of the testbed is to support evaluation and validation of prototypes of adjustable autonomous agent software for control and fault management for complex systems. The AAT project has developed prototype adjustable autonomous agent software and human interfaces for cooperative fault management. This software builds on current autonomous agent technology by altering the architecture, components and interfaces for effective teamwork between autonomous systems and human experts. Autonomous agents include a planner, flexible executive, low level control and deductive model-based fault isolation. Adjustable autonomy is intended to increase the flexibility and effectiveness of fault management with an autonomous system. The test domain for this work is control of advanced life support systems for habitats for planetary exploration. The CONFIG hybrid discrete event simulation environment provides flexible and dynamically reconfigurable models of the behavior of components and fluids in the life support systems. Both discrete event and continuous (discrete time) simulation are supported, and flows and pressures are computed globally. This provides fast dynamic simulations of interacting hardware systems in closed loops that can be reconfigured during operations scenarios, producing complex cascading effects of operations and failures. Current object-oriented model libraries support modeling of fluid systems, and models have been developed of physico-chemical and biological subsystems for processing advanced life support gases. In FY01, water recovery system models will be developed.

  13. Confidence Intervals for Squared Semipartial Correlation Coefficients: The Effect of Nonnormality

    ERIC Educational Resources Information Center

    Algina, James; Keselman, H. J.; Penfield, Randall D.

    2010-01-01

    The increase in the squared multiple correlation coefficient (ΔR²) associated with a variable in a regression equation is a commonly used measure of importance in regression analysis. Algina, Keselman, and Penfield found that intervals based on asymptotic principles were typically very inaccurate, even though the sample size…
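
    The quantity under discussion is the gap between two fitted R-squared values; a sketch on invented data:

```python
import numpy as np

# Invented data; Delta R^2 is the increase in R^2 when x2 joins a model
# already containing x1.
rng = np.random.default_rng(4)
n = 120
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 0.6 * x1 + 0.4 * x2 + rng.normal(size=n)

def r2(*predictors):
    A = np.column_stack([np.ones(n), *predictors])
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    e = y - A @ b
    return 1 - e.var() / y.var()

delta_r2 = r2(x1, x2) - r2(x1)    # squared semipartial correlation of x2
print(round(delta_r2, 3))
```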

  14. Psychosocial adjustment to ALS: a longitudinal study

    PubMed Central

    Matuz, Tamara; Birbaumer, Niels; Hautzinger, Martin; Kübler, Andrea

    2015-01-01

    For the current study, the Lazarian stress-coping theory and the appendant model of psychosocial adjustment to chronic illness and disabilities (Pakenham, 1999) have shaped the foundation for identifying determinants of adjustment to ALS. We aimed to investigate the evolution of psychosocial adjustment to ALS and to determine its long-term predictors. A longitudinal study design with four measurement time points was therefore used to assess patients' quality of life, depression, and stress-coping model related aspects, such as illness characteristics, social support, cognitive appraisals, and coping strategies during a period of 2 years. Regression analyses revealed that 55% of the variance of severity of depressive symptoms and 47% of the variance in quality of life at T2 was accounted for by all the T1 predictor variables taken together. On the level of individual contributions, protective buffering and appraisal of own coping potential accounted for a significant percentage of the variance in severity of depressive symptoms, whereas problem management coping strategies explained variance in quality of life scores. Illness characteristics at T2 did not explain any variance of either adjustment outcome. Overall, the pattern of the longitudinal results indicated stable depressive symptoms and quality of life indices, reflecting a successful adjustment to the disease across the four measurement time points during a period of about two years. Empirical evidence is provided for the predictive value of social support, cognitive appraisals, and coping strategies, but not of illness parameters such as severity and duration, for adaptation to ALS. The current study contributes to a better conceptualization of adjustment, allowing us to provide evidence-based support beyond medical and physical intervention for people with ALS. PMID:26441696

  15. Quality Reporting of Multivariable Regression Models in Observational Studies

    PubMed Central

    Real, Jordi; Forné, Carles; Roso-Llorach, Albert; Martínez-Sánchez, Jose M.

    2016-01-01

    Controlling for confounders is a crucial step in analytical observational studies, and multivariable models are widely used as statistical adjustment techniques. However, the validation of the assumptions of the multivariable regression models (MRMs) should be made clear in scientific reporting. The objective of this study is to review the quality of statistical reporting of the most commonly used MRMs (logistic, linear, and Cox regression) that were applied in analytical observational studies published between 2003 and 2014 by journals indexed in MEDLINE. We reviewed a representative sample of articles indexed in MEDLINE (n = 428) with an observational design and use of MRMs (logistic, linear, and Cox regression). We assessed the quality of reporting of: model assumptions and goodness-of-fit, interactions, sensitivity analysis, crude and adjusted effect estimates, and specification of more than 1 adjusted model. The tests of underlying assumptions or goodness-of-fit of the MRMs used were described in 26.2% (95% CI: 22.0–30.3) of the articles, and 18.5% (95% CI: 14.8–22.1) reported the interaction analysis. Reporting of all items assessed was higher in articles published in journals with a higher impact factor. A low percentage of articles indexed in MEDLINE that used multivariable techniques provided information demonstrating rigorous application of the model selected as an adjustment method. Given the importance of these methods to the final results and conclusions of observational studies, greater rigor is required in reporting the use of MRMs in the scientific literature. PMID:27196467

  16. Adolescent suicide attempts and adult adjustment

    PubMed Central

    Brière, Frédéric N.; Rohde, Paul; Seeley, John R.; Klein, Daniel; Lewinsohn, Peter M.

    2014-01-01

    Background: Adolescent suicide attempts are disproportionally prevalent and frequently of low severity, raising questions regarding their long-term prognostic implications. In this study, we examined whether adolescent attempts were associated with impairments related to suicidality, psychopathology, and psychosocial functioning in adulthood (objective 1) and whether these impairments were better accounted for by concurrent adolescent confounders (objective 2). Method: 816 adolescents were assessed using interviews and questionnaires at four time points from adolescence to adulthood. We examined whether lifetime suicide attempts in adolescence (by T2, mean age 17) predicted adult outcomes (by T4, mean age 30) using linear and logistic regressions in unadjusted models (objective 1) and adjusting for sociodemographic background, adolescent psychopathology, and family risk factors (objective 2). Results: In unadjusted analyses, adolescent suicide attempts predicted poorer adjustment on all outcomes, except those related to social role status. After adjustment, adolescent attempts remained predictive of axis I and II psychopathology (anxiety disorder, antisocial and borderline personality disorder symptoms), global and social adjustment, risky sex, and psychiatric treatment utilization. However, adolescent attempts no longer predicted most adult outcomes, notably suicide attempts and major depressive disorder. Secondary analyses indicated that associations did not differ by sex or attempt characteristics (intent, lethality, recurrence). Conclusions: Adolescent suicide attempters are at high risk of protracted and wide-ranging impairments, regardless of the characteristics of their attempt. Although attempts specifically predict (and possibly influence) several outcomes, results suggest that most impairments reflect the confounding contributions of other individual and family problems or vulnerabilities in adolescent attempters. PMID:25421360

  17. Cytoplasmic hydrogen ion diffusion coefficient.

    PubMed Central

    al-Baldawi, N F; Abercrombie, R F

    1992-01-01

The apparent cytoplasmic proton diffusion coefficient was measured using pH electrodes and samples of cytoplasm extracted from the giant neuron of a marine invertebrate. By suddenly changing the pH at one surface of the sample and recording the relaxation of pH within the sample, an apparent diffusion coefficient of (1.4 ± 0.5) × 10^-6 cm^2/s (N = 7) was measured in the acidic or neutral range of pH (6.0-7.2). This value is approximately 5 times lower than the diffusion coefficient of the mobile pH buffers (approximately 8 × 10^-6 cm^2/s) and approximately 68 times lower than the diffusion coefficient of the hydronium ion (93 × 10^-6 cm^2/s). A mobile pH buffer (approximately 15% of the buffering power) and an immobile buffer (approximately 85% of the buffering power) could quantitatively account for the results at acidic or neutral pH. At alkaline pH (8.2-8.6), the apparent proton diffusion coefficient increased to (4.1 ± 0.8) × 10^-6 cm^2/s (N = 7). This larger diffusion coefficient at alkaline pH could be explained quantitatively by the enhanced buffering power of the mobile amino acids. Under the conditions of these experiments, it is unlikely that hydroxide movement influences the apparent hydrogen ion diffusion coefficient. PMID:1617134

  18. Note on a Confidence Interval for the Squared Semipartial Correlation Coefficient

    ERIC Educational Resources Information Center

    Algina, James; Keselman, Harvey J.; Penfield, Randall J.

    2008-01-01

    A squared semipartial correlation coefficient ([Delta]R[superscript 2]) is the increase in the squared multiple correlation coefficient that occurs when a predictor is added to a multiple regression model. Prior research has shown that coverage probability for a confidence interval constructed by using a modified percentile bootstrap method with…

  19. The application of quantile regression in autumn precipitation forecasting over Southeastern China

    NASA Astrophysics Data System (ADS)

    Wu, Baoqiang; Yuan, Huiling

    2014-05-01

This study applies the quantile regression method to seasonal forecasts of autumn precipitation over Southeastern China. The dataset includes daily precipitation of 195 gauge stations over Southeastern China, and monthly means of circulation indices, global sea surface temperature (SST), and 500 hPa geopotential height. First, using the data from 1961 to 2000 for training, the predictors are chosen by stepwise regression and prognostic equations of autumn total precipitation are created for each station using the traditional linear regression method. Similarly, 0.5-quantile regression (median regression) is used to generate prognostic equations for individual stations. Then, using the data from 2001 to 2007 for validation, the autumn precipitation is forecast with quantile regression and traditional linear regression, respectively. Compared to traditional linear regression, median regression has better forecast skill in terms of anomaly correlation coefficients, especially in the regions of northern Guangxi Province and western Hunan Province. Furthermore, for each station, quantile regression can also estimate a confidence interval of autumn total precipitation using multiple quantiles, providing the range of uncertainties for predicting extreme seasonal precipitation. Keywords: quantile regression, precipitation, linear regression, seasonal forecasts
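The contrast the study draws between median regression and ordinary least squares can be sketched with a toy example. The seasonal totals below are invented, not the study's station data, and the brute-force L1 fit relies on the fact that an optimal median-regression line passes through at least two of the data points:

```python
# Toy contrast between ordinary least squares and 0.5-quantile (median)
# regression on invented seasonal totals containing one extreme value.
def ols(x, y):
    """Least-squares fit y = a + b*x; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def l1_loss(x, y, a, b):
    return sum(abs(yi - a - b * xi) for xi, yi in zip(x, y))

def median_regression(x, y):
    """Brute-force L1 fit: try every line through two data points."""
    best = None
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            if x[i] == x[j]:
                continue
            b = (y[j] - y[i]) / (x[j] - x[i])
            a = y[i] - b * x[i]
            loss = l1_loss(x, y, a, b)
            if best is None or loss < best[0]:
                best = (loss, a, b)
    return best[1], best[2]

years = [1, 2, 3, 4, 5, 6]
precip = [2.1, 3.9, 6.2, 8.0, 9.9, 30.0]   # last season is extreme
a_ols, b_ols = ols(years, precip)
a_med, b_med = median_regression(years, precip)
# The median fit tracks the bulk trend (slope near 2); OLS is dragged
# upward by the single extreme season.
```

This robustness to extreme seasons is one reason median regression can score better on anomaly correlation when a validation period contains unusual years.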

  20. Estimating forest crown area removed by selection cutting: a linked regression-GIS approach based on stump diameters

    USGS Publications Warehouse

    Anderson, S.C.; Kupfer, J.A.; Wilson, R.R.; Cooper, R.J.

    2000-01-01

The purpose of this research was to develop a model that could be used to provide a spatial representation of the effects of uneven-aged silvicultural treatments on forest crown area. We began by developing species-specific linear regression equations relating tree DBH to crown area for eight bottomland tree species at White River National Wildlife Refuge, Arkansas, USA. The relationships were highly significant for all species, with coefficients of determination (r^2) ranging from 0.37 for Ulmus crassifolia to nearly 0.80 for Quercus nuttallii and Taxodium distichum. We next located and measured the diameters of more than 4000 stumps from a single tree-group selection timber harvest. Stump locations were recorded with respect to an established grid point system and entered into a Geographic Information System (ARC/INFO). The area occupied by the crown of each logged individual was then estimated by using the stump dimensions (adjusted to DBHs) and the regression equations relating tree DBH to crown area. Our model projected that the selection cuts removed roughly 300 m^2 of basal area from the logged sites, resulting in the loss of approximately 55,000 m^2 of crown area. The model developed in this research represents a tool that can be used in conjunction with remote sensing applications to assist in forest inventory and management, as well as to estimate the impacts of selective timber harvest on wildlife.
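The core of the linked regression-GIS approach, fitting crown area on DBH and then applying the equation to stump diameters already adjusted to DBH, can be sketched as follows. All numbers are invented for illustration; they are not the study's calibration data:

```python
# Hypothetical sketch: fit a species-specific linear model of crown area
# on DBH, then apply it to stump diameters (adjusted to DBH) to estimate
# total crown area removed by a selection harvest.
def fit_line(x, y):
    """Least-squares fit y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

dbh = [20, 30, 40, 50, 60]      # calibration trees: DBH in cm (invented)
crown = [15, 32, 45, 64, 78]    # measured crown area in m^2 (invented)
a, b = fit_line(dbh, crown)

stump_dbh = [25, 38, 47]        # logged stumps, adjusted to DBH
removed_area = sum(a + b * d for d in stump_dbh)  # total crown area lost
```

In the study this per-stump estimate is attached to each stump's GIS location, giving the spatial pattern of canopy loss rather than just a total.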

  1. Regression analysis for solving diagnosis problem of children's health

    NASA Astrophysics Data System (ADS)

    Cherkashina, Yu A.; Gerget, O. M.

    2016-04-01

This paper presents the results of research devoted to the application of statistical techniques, namely regression analysis, to assessing the health status of children in the neonatal period on the basis of medical data (hemostatic parameters, blood test parameters, gestational age, vascular endothelial growth factor) measured at 3-5 days of life. A detailed description of the medical data studied is given, and a binary logistic regression procedure is discussed. The main results are presented: a classification table of predicted versus observed values is shown, and the overall percentage of correct recognition is determined. The regression equation coefficients are calculated, and the general regression equation is written from them. Based on the logistic regression results, ROC analysis was performed; the sensitivity and specificity of the model are calculated and ROC curves are constructed. These mathematical techniques allow diagnosis of children's health with a high quality of recognition. The results make a significant contribution to the development of evidence-based medicine and have high practical importance in the professional activity of the author.

  2. Fuel Regression Rate Behavior of CAMUI Hybrid Rocket

    NASA Astrophysics Data System (ADS)

    Kaneko, Yudai; Itoh, Mitsunori; Kakikura, Akihito; Mori, Kazuhiro; Uejima, Kenta; Nakashima, Takuji; Wakita, Masashi; Totani, Tsuyoshi; Oshima, Nobuyuki; Nagata, Harunori

A series of static firing tests was conducted to investigate the fuel regression characteristics of a Cascaded Multistage Impinging-jet (CAMUI) type hybrid rocket motor. A CAMUI type hybrid rocket uses the combination of liquid oxygen and a fuel grain made of polyethylene as a propellant. The collision distance divided by the port diameter, H/D, was varied to investigate the effect of grain geometry on the fuel regression rate. As a result, H/D has little effect on the regression rate near the stagnation point, where the heat transfer coefficient is high. In contrast, the fuel regression rate decreases near the circumference of the forward-end and backward-end faces of the fuel blocks. In addition to the experimental approaches, a computational fluid dynamics method clarified the heat transfer distribution on the grain surface for various H/D geometries. The calculations show that, in the areas where the fuel regression rate decreases as H/D increases, the flow velocity also decreases with increasing H/D. To estimate the exact fuel consumption, which is necessary to design a fuel grain, real-time measurement by an ultrasonic pulse-echo method was performed.

  3. Classification of microarray data with penalized logistic regression

    NASA Astrophysics Data System (ADS)

    Eilers, Paul H. C.; Boer, Judith M.; van Ommen, Gert-Jan; van Houwelingen, Hans C.

    2001-06-01

Classification of microarray data needs a firm statistical basis. In principle, logistic regression can provide it, modeling the probability of class membership with (transforms of) linear combinations of explanatory variables. However, classical logistic regression does not work for microarrays, because generally there will be far more variables than observations. One problem is multicollinearity: the estimating equations become singular and have no unique and stable solution. A second problem is overfitting: a model may fit a data set well but perform badly when used to classify new data. We propose penalized likelihood as a solution to both problems. The values of the regression coefficients are constrained in a similar way as in ridge regression. All variables play an equal role; there is no ad hoc selection of the most relevant or most expressed genes. The dimension of the resulting system of equations is equal to the number of variables, and generally will be too large for most computers, but it can be reduced dramatically with the singular value decomposition of some matrices. The penalty is optimized with AIC (Akaike's Information Criterion), which essentially is a measure of prediction performance. We find that penalized logistic regression performs well on a public data set (the MIT ALL/AML data).
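The penalized-likelihood idea can be sketched in miniature. The example below is not the SVD-reduced solver of the paper, just the same principle on an invented two-column problem: a quadratic (ridge) penalty keeps the logistic coefficients finite and stable even when the classes are separable:

```python
# Minimal sketch of ridge-penalized logistic regression fitted by gradient
# ascent on the penalized log-likelihood. Data and lambda values are
# invented for illustration.
import math

def fit_ridge_logistic(X, y, lam, steps=4000, lr=0.02):
    p = len(X[0])
    beta = [0.0] * p
    for _ in range(steps):
        grad = [0.0] * p
        for xi, yi in zip(X, y):
            eta = sum(b * v for b, v in zip(beta, xi))
            mu = 1.0 / (1.0 + math.exp(-eta))   # fitted probability
            for j in range(p):
                grad[j] += (yi - mu) * xi[j]    # likelihood score
        for j in range(p):
            # ascent step on log-likelihood minus lam * ||beta||^2
            beta[j] += lr * (grad[j] - 2.0 * lam * beta[j])
    return beta

# toy design: intercept column plus one "gene expression" column
X = [[1, 0.2], [1, 0.8], [1, -0.5], [1, 1.5], [1, -1.2], [1, 2.0]]
y = [0, 1, 0, 1, 0, 1]
light = fit_ridge_logistic(X, y, lam=0.1)
heavy = fit_ridge_logistic(X, y, lam=5.0)
# A heavier penalty shrinks the slope toward zero, trading fit for
# stability; the paper chooses lambda by optimizing AIC.
```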

  4. copCAR: A Flexible Regression Model for Areal Data

    PubMed Central

    Hughes, John

    2014-01-01

    Non-Gaussian spatial data are common in many fields. When fitting regressions for such data, one needs to account for spatial dependence to ensure reliable inference for the regression coefficients. The two most commonly used regression models for spatially aggregated data are the automodel and the areal generalized linear mixed model (GLMM). These models induce spatial dependence in different ways but share the smoothing approach, which is intuitive but problematic. This article develops a new regression model for areal data. The new model is called copCAR because it is copula-based and employs the areal GLMM’s conditional autoregression (CAR). copCAR overcomes many of the drawbacks of the automodel and the areal GLMM. Specifically, copCAR (1) is flexible and intuitive, (2) permits positive spatial dependence for all types of data, (3) permits efficient computation, and (4) provides reliable spatial regression inference and information about dependence strength. An implementation is provided by R package copCAR, which is available from the Comprehensive R Archive Network, and supplementary materials are available online. PMID:26539023

  5. Regression analysis of cytopathological data

    SciTech Connect

    Whittemore, A.S.; McLarty, J.W.; Fortson, N.; Anderson, K.

    1982-12-01

Epithelial cells from the human body are frequently labelled according to one of several ordered levels of abnormality, ranging from normal to malignant. The label of the most abnormal cell in a specimen determines the score for the specimen. This paper presents a model for the regression of specimen scores against continuous and discrete variables, such as host exposure to carcinogens. Application to data and tests for adequacy of model fit are illustrated using sputum specimens obtained from a cohort of former asbestos workers.

  6. Dose adjustment strategy of cyclosporine A in renal transplant patients: evaluation of anthropometric parameters for dose adjustment and C0 vs. C2 monitoring in Japan, 2001-2010.

    PubMed

    Kokuhu, Takatoshi; Fukushima, Keizo; Ushigome, Hidetaka; Yoshimura, Norio; Sugioka, Nobuyuki

    2013-01-01

The optimal use and monitoring of cyclosporine A (CyA) have remained unclear, and the current strategy of CyA treatment requires frequent dose adjustment following an empirical initial dosage adjusted for total body weight (TBW). The primary aim of this study was to evaluate age and anthropometric parameters as predictors for dose adjustment of CyA; the secondary aim was to compare the usefulness of predose (C0) and 2-hour postdose (C2) concentration monitoring. An open-label, non-randomized, retrospective study was performed in 81 renal transplant patients in Japan during 2001-2010. The relationships between the area under the blood concentration-time curve (AUC0-9) of CyA and its C0 or C2 level were assessed with a linear regression analysis model. In addition to age, 7 anthropometric parameters were tested as predictors for AUC0-9 of CyA: TBW, height (HT), body mass index (BMI), body surface area (BSA), ideal body weight (IBW), lean body weight (LBW), and fat-free mass (FFM). Correlations between AUC0-9 of CyA and these parameters were also analyzed with a linear regression model. The rank order of the correlation coefficients was C0 > C2 (C0: r=0.6273; C2: r=0.5562). The linear regression analyses between AUC0-9 of CyA and the candidate parameters indicated their potential usefulness in the following rank order: IBW > FFM > HT > BSA > LBW > TBW > BMI > age. In conclusion, after oral administration, C2 monitoring showed large variation and could carry a high risk of overdosing; it was therefore not considered useful as a single monitoring approach, but should rather be used together with C0 monitoring. The regression analyses between AUC0-9 of CyA and the anthropometric parameters indicated that IBW was potentially a superior predictor to the empirical strategy using TBW (IBW: r=0.5181; TBW: r=0.3192); however, this finding seems to lack a pharmacokinetic rationale and thus warrants further basic and clinical
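The study's comparison of single-time-point predictors amounts to ranking Pearson correlations with AUC. A toy version of that computation, with invented concentrations rather than the study's patient data, looks like this:

```python
# Rank candidate single-point predictors by their Pearson correlation with
# the exposure measure (AUC). All numbers below are invented.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

auc = [3200, 4100, 2800, 5000, 3600, 4400]   # AUC0-9 (hypothetical)
c0 = [180, 230, 150, 290, 200, 260]          # predose levels (hypothetical)
c2 = [900, 1400, 700, 1100, 1500, 1200]      # 2-h postdose, noisier here

ranking = sorted([("C0", pearson_r(auc, c0)), ("C2", pearson_r(auc, c2))],
                 key=lambda t: -t[1])
# ranking[0] is the better single-point predictor on this toy data set.
```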

  7. Correcting Coefficient Alpha for Correlated Errors: Is [alpha][K]a Lower Bound to Reliability?

    ERIC Educational Resources Information Center

    Rae, Gordon

    2006-01-01

    When errors of measurement are positively correlated, coefficient alpha may overestimate the "true" reliability of a composite. To reduce this inflation bias, Komaroff (1997) has proposed an adjusted alpha coefficient, ak. This article shows that ak is only guaranteed to be a lower bound to reliability if the latter does not include correlated…

  8. The Impact of Financial Sophistication on Adjustable Rate Mortgage Ownership

    ERIC Educational Resources Information Center

    Smith, Hyrum; Finke, Michael S.; Huston, Sandra J.

    2011-01-01

    The influence of a financial sophistication scale on adjustable-rate mortgage (ARM) borrowing is explored. Descriptive statistics and regression analysis using recent data from the Survey of Consumer Finances reveal that ARM borrowing is driven by both the least and most financially sophisticated households but for different reasons. Less…

  9. Effects of Relational Authenticity on Adjustment to College

    ERIC Educational Resources Information Center

    Lenz, A. Stephen; Holman, Rachel L.; Lancaster, Chloe; Gotay, Stephanie G.

    2016-01-01

    The authors examined the association between relational health and student adjustment to college. Data were collected from 138 undergraduate students completing their 1st semester at a large university in the mid-southern United States. Regression analysis indicated that higher levels of relational authenticity were a predictor of success during…

  10. Mediating Effects of Relationships with Mentors on College Adjustment

    ERIC Educational Resources Information Center

    Lenz, A. Stephen

    2014-01-01

    This study examined the relationship between student adjustment to college and relational health with peers, mentors, and the community. Data were collected from 80 undergraduate students completing their first semester of course work at a large university in the mid-South. A series of simultaneous multiple regression analyses indicated that…

  11. Use of age-adjusted rates of suicide in time series studies in Israel.

    PubMed

    Bridges, F Stephen; Tankersley, William B

    2009-01-01

Durkheim's modified theory of suicide was examined to explore how consistent it was in predicting Israeli rates of suicide from 1965 to 1997 when using age-adjusted rates rather than crude ones. In this time-series study, Israeli male and female rates of suicide increased and decreased, respectively, between 1965 and 1997. Conforming to Durkheim's modified theory, the Israeli male rate of suicide was lower in years when rates of marriage and birth were higher and higher in years when rates of divorce were higher; the opposite held for Israeli women. The corrected regression coefficients suggest that the Israeli female rate of suicide remained lower in years when the rate of divorce was higher, again the opposite of what Durkheim's modified theory suggests. These results may indicate that divorce affects the mental health of Israeli women, as suggested by their lower rate of suicide. Perhaps the "multiple roles held by Israeli females creates suicidogenic stress" and divorce provides some sense of stress relief, mentally speaking. The results were not as consistent with the predictions of Durkheim's modified theory of suicide as were rates from the United States for the same period, nor were they consistent with rates based on "crude" suicide data. Thus, using age-adjusted rates of suicide had an influence on the prediction of the Israeli rate of suicide during this period.

  12. A technique to measure rotordynamic coefficients in hydrostatic bearings

    NASA Technical Reports Server (NTRS)

    Capaldi, Russell J.

    1993-01-01

    An experimental technique is described for measuring the rotordynamic coefficients of fluid film journal bearings. The bearing tester incorporates a double-spool shaft assembly that permits independent control over the journal spin speed and the frequency of an adjustable-magnitude circular orbit. This configuration yields data that enables determination of the full linear anisotropic rotordynamic coefficient matrices. The dynamic force measurements were made simultaneously with two independent systems, one with piezoelectric load cells and the other with strain gage load cells. Some results are presented for a four-recess, oil-fed hydrostatic journal bearing.

  13. Estimation of flood discharges at selected annual exceedance probabilities for unregulated, rural streams in Vermont, with a section on Vermont regional skew regression

    USGS Publications Warehouse

    Olson, Scott A.; with a section by Veilleux, Andrea G.

    2014-01-01

    This report provides estimates of flood discharges at selected annual exceedance probabilities (AEPs) for streamgages in and adjacent to Vermont and equations for estimating flood discharges at AEPs of 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent (recurrence intervals of 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-years, respectively) for ungaged, unregulated, rural streams in Vermont. The equations were developed using generalized least-squares regression. Flood-frequency and drainage-basin characteristics from 145 streamgages were used in developing the equations. The drainage-basin characteristics used as explanatory variables in the regression equations include drainage area, percentage of wetland area, and the basin-wide mean of the average annual precipitation. The average standard errors of prediction for estimating the flood discharges at the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent AEP with these equations are 34.9, 36.0, 38.7, 42.4, 44.9, 47.3, 50.7, and 55.1 percent, respectively. Flood discharges at selected AEPs for streamgages were computed by using the Expected Moments Algorithm. To improve estimates of the flood discharges for given exceedance probabilities at streamgages in Vermont, a new generalized skew coefficient was developed. The new generalized skew for the region is a constant, 0.44. The mean square error of the generalized skew coefficient is 0.078. This report describes a technique for using results from the regression equations to adjust an AEP discharge computed from a streamgage record. This report also describes a technique for using a drainage-area adjustment to estimate flood discharge at a selected AEP for an ungaged site upstream or downstream from a streamgage. The final regression equations and the flood-discharge frequency data used in this study will be available in StreamStats. StreamStats is a World Wide Web application providing automated regression-equation solutions for user-selected sites on streams.

  14. Multiatlas Segmentation as Nonparametric Regression

    PubMed Central

    Awate, Suyash P.; Whitaker, Ross T.

    2015-01-01

This paper proposes a novel theoretical framework to model and analyze the statistical characteristics of a wide range of segmentation methods that incorporate a database of label maps or atlases; such methods are termed label fusion or multiatlas segmentation. We model these multiatlas segmentation problems as nonparametric regression problems in the high-dimensional space of image patches. We analyze the nonparametric estimator’s convergence behavior that characterizes expected segmentation error as a function of the size of the multiatlas database. We show that this error has an analytic form involving several parameters that are fundamental to the specific segmentation problem (determined by the chosen anatomical structure, imaging modality, registration algorithm, and label-fusion algorithm). We describe how to estimate these parameters and show that several human anatomical structures exhibit the trends modeled analytically. We use these parameter estimates to optimize the regression estimator. We show that the expected error for large database sizes is well predicted by models learned on small databases. Thus, a few expert segmentations can help predict the database sizes required to keep the expected error below a specified tolerance level. Such cost-benefit analysis is crucial for deploying clinical multiatlas segmentation systems. PMID:24802528

  15. Variable Selection in ROC Regression

    PubMed Central

    2013-01-01

    Regression models are introduced into the receiver operating characteristic (ROC) analysis to accommodate effects of covariates, such as genes. If many covariates are available, the variable selection issue arises. The traditional induced methodology separately models outcomes of diseased and nondiseased groups; thus, separate application of variable selections to two models will bring barriers in interpretation, due to differences in selected models. Furthermore, in the ROC regression, the accuracy of area under the curve (AUC) should be the focus instead of aiming at the consistency of model selection or the good prediction performance. In this paper, we obtain one single objective function with the group SCAD to select grouped variables, which adapts to popular criteria of model selection, and propose a two-stage framework to apply the focused information criterion (FIC). Some asymptotic properties of the proposed methods are derived. Simulation studies show that the grouped variable selection is superior to separate model selections. Furthermore, the FIC improves the accuracy of the estimated AUC compared with other criteria. PMID:24312135
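The AUC that the selection criteria above target equals the Mann-Whitney probability that a randomly chosen diseased score exceeds a randomly chosen nondiseased one. An empirical version on toy scores (ties counted as half) is a few lines:

```python
# Empirical AUC as the Mann-Whitney win fraction over all
# diseased/nondiseased score pairs; scores below are invented.
def empirical_auc(diseased, nondiseased):
    wins = sum((d > n) + 0.5 * (d == n)
               for d in diseased for n in nondiseased)
    return wins / (len(diseased) * len(nondiseased))

auc_value = empirical_auc([0.9, 0.8, 0.6], [0.7, 0.4, 0.3])  # 8 of 9 pairs
```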

  16. Fuel Temperature Coefficient of Reactivity

    SciTech Connect

    Loewe, W.E.

    2001-07-31

    A method for measuring the fuel temperature coefficient of reactivity in a heterogeneous nuclear reactor is presented. The method, which is used during normal operation, requires that calibrated control rods be oscillated in a special way at a high reactor power level. The value of the fuel temperature coefficient of reactivity is found from the measured flux responses to these oscillations. Application of the method in a Savannah River reactor charged with natural uranium is discussed.

  17. Diffusion Coefficients in White Dwarfs

    NASA Astrophysics Data System (ADS)

    Saumon, D.; Starrett, C. E.; Daligault, J.

    2015-06-01

    Models of diffusion in white dwarfs universally rely on the coefficients calculated by Paquette et al. (1986). We present new calculations of diffusion coefficients based on an advanced microscopic theory of dense plasmas and a numerical simulation approach that intrinsically accounts for multiple collisions. Our method is validated against a state-of-the-art method and we present results for the diffusion of carbon ions in a helium plasma.

  18. Converting Sabine absorption coefficients to random incidence absorption coefficients.

    PubMed

    Jeong, Cheol-Ho

    2013-06-01

    Absorption coefficients measured by the chamber method are referred to as Sabine absorption coefficients, which sometimes exceed unity due to the finite size of a sample and non-uniform intensity in the reverberation chambers under test. In this study, conversion methods from Sabine absorption coefficients to random incidence absorption coefficients are proposed. The overestimations of the Sabine absorption coefficient are investigated theoretically based on Miki's model for porous absorbers backed by a rigid wall or an air cavity, resulting in conversion factors. Additionally, three optimizations are suggested: An optimization method for the surface impedances for locally reacting absorbers, the flow resistivity for extendedly reacting absorbers, and the flow resistance for fabrics. With four porous type absorbers, the conversion methods are validated. For absorbers backed by a rigid wall, the surface impedance optimization produces the best results, while the flow resistivity optimization also yields reasonable results. The flow resistivity and flow resistance optimization for extendedly reacting absorbers are also found to be successful. However, the theoretical conversion factors based on Miki's model do not guarantee reliable estimations, particularly at frequencies below 250 Hz and beyond 2500 Hz.

  19. Quasi-likelihood estimation for relative risk regression models.

    PubMed

    Carter, Rickey E; Lipsitz, Stuart R; Tilley, Barbara C

    2005-01-01

    For a prospective randomized clinical trial with two groups, the relative risk can be used as a measure of treatment effect and is directly interpretable as the ratio of success probabilities in the new treatment group versus the placebo group. For a prospective study with many covariates and a binary outcome (success or failure), relative risk regression may be of interest. If we model the log of the success probability as a linear function of covariates, the regression coefficients are log-relative risks. However, using such a log-linear model with a Bernoulli likelihood can lead to convergence problems in the Newton-Raphson algorithm. This is likely to occur when the success probabilities are close to one. A constrained likelihood method proposed by Wacholder (1986, American Journal of Epidemiology 123, 174-184), also has convergence problems. We propose a quasi-likelihood method of moments technique in which we naively assume the Bernoulli outcome is Poisson, with the mean (success probability) following a log-linear model. We use the Poisson maximum likelihood equations to estimate the regression coefficients without constraints. Using method of moment ideas, one can show that the estimates using the Poisson likelihood will be consistent and asymptotically normal. We apply these methods to a double-blinded randomized trial in primary biliary cirrhosis of the liver (Markus et al., 1989, New England Journal of Medicine 320, 1709-1713). PMID:15618526
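The quasi-likelihood trick described above can be sketched with a two-parameter example: fit a Poisson log-linear model to a binary outcome with a single treatment indicator, so the exponentiated slope estimates the relative risk directly. The trial counts below are invented:

```python
# Newton-Raphson on the Poisson score equations for a log-linear model
# applied to a Bernoulli outcome; exp(slope) estimates the relative risk.
import math

def poisson_loglinear_2(X, y, steps=50):
    """Two-parameter Poisson log-linear fit (intercept + one indicator)."""
    beta = [0.0, 0.0]
    for _ in range(steps):
        score = [0.0, 0.0]
        info = [[0.0, 0.0], [0.0, 0.0]]   # Fisher information
        for xi, yi in zip(X, y):
            mu = math.exp(beta[0] * xi[0] + beta[1] * xi[1])
            for j in range(2):
                score[j] += (yi - mu) * xi[j]
                for k in range(2):
                    info[j][k] += mu * xi[j] * xi[k]
        # solve the 2x2 Newton system info * delta = score
        det = info[0][0] * info[1][1] - info[0][1] * info[1][0]
        beta[0] += (score[0] * info[1][1] - score[1] * info[0][1]) / det
        beta[1] += (score[1] * info[0][0] - score[0] * info[1][0]) / det
    return beta

# invented trial: 10 treated (6 successes), 10 placebo (3 successes),
# so the true relative risk is 0.6 / 0.3 = 2
X = [[1, 1]] * 10 + [[1, 0]] * 10
y = [1] * 6 + [0] * 4 + [1] * 3 + [0] * 7
b0, b1 = poisson_loglinear_2(X, y)
relative_risk = math.exp(b1)   # ratio of success probabilities
```

Unlike the constrained Bernoulli fit, these score equations are unconstrained, which is the convergence advantage the paper exploits; robust (sandwich) standard errors would then correct for the misspecified Poisson variance.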

  20. Subsea adjustable choke valves

    SciTech Connect

Cyvas, M.K.

    1989-08-01

    With emphasis on deepwater wells and marginal offshore fields growing, the search for reliable subsea production systems has become a high priority. A reliable subsea adjustable choke is essential to the realization of such a system, and recent advances are producing the degree of reliability required. Technological developments have been primarily in (1) trim material (including polycrystalline diamond), (2) trim configuration, (3) computer programs for trim sizing, (4) component materials, and (5) diver/remote-operated-vehicle (ROV) interfaces. These five facets are overviewed and progress to date is reported. A 15- to 20-year service life for adjustable subsea chokes is now a reality. Another factor vital to efficient use of these technological developments is to involve the choke manufacturer and ROV/diver personnel in initial system conceptualization. In this manner, maximum benefit can be derived from the latest technology. Major areas of development still required and under way are listed, and the paper closes with a tabulation of successful subsea choke installations in recent years.

  1. Practical Session: Multiple Linear Regression

    NASA Astrophysics Data System (ADS)

    Clausel, M.; Grégoire, G.

    2014-12-01

Three exercises are proposed to illustrate multiple linear regression. The first investigates the influence of several factors on atmospheric pollution; it was proposed by D. Chessel and A.B. Dufour in Lyon 1 (see Sect. 6 of http://pbil.univ-lyon1.fr/R/pdf/tdr33.pdf) and is based on data from 20 cities of the U.S. Exercise 2 is an introduction to model selection, whereas Exercise 3 provides a first example of analysis of variance. Exercises 2 and 3 were proposed by A. Dalalyan at ENPC (see Exercises 2 and 3 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_5.pdf).

  2. VARYING COEFFICIENT MODELS FOR DATA WITH AUTO-CORRELATED ERROR PROCESS

    PubMed Central

    Chen, Zhao; Li, Runze; Li, Yan

    2014-01-01

Varying coefficient models have been popular in the literature. In this paper, we propose a profile least squares estimation procedure for the regression coefficients when the random error is an auto-regressive (AR) process. We further study the asymptotic properties of the proposed procedure and establish the asymptotic normality of the resulting estimate. We show that the resulting estimate of the regression coefficients has the same asymptotic bias and variance as the local linear estimate for varying coefficient models with independent and identically distributed observations. We apply the SCAD variable selection procedure (Fan and Li, 2001) to reduce the model complexity of the AR error process. Numerical comparisons and the finite sample performance of the resulting estimate are examined by Monte Carlo studies. Our simulation results demonstrate that the proposed procedure is much more efficient than one ignoring the error correlation. The proposed methodology is illustrated by a real data example. PMID:25908899

  3. Transport coefficients of heavy baryons

    NASA Astrophysics Data System (ADS)

    Tolos, Laura; Torres-Rincon, Juan M.; Das, Santosh K.

    2016-08-01

    We compute the transport coefficients (drag and momentum diffusion) of the low-lying heavy baryons Λc and Λb in a medium of light mesons formed at the later stages of high-energy heavy-ion collisions. We employ the Fokker-Planck approach to obtain the transport coefficients from unitarized baryon-meson interactions based on effective field theories that respect chiral and heavy-quark symmetries. We provide the transport coefficients as a function of temperature and heavy-baryon momentum, and analyze the applicability of certain nonrelativistic estimates. Moreover we compare our outcome for the spatial diffusion coefficient to the one coming from the solution of the Boltzmann-Uehling-Uhlenbeck transport equation, and we find a very good agreement between both calculations. The transport coefficients for Λc and Λb in a thermal bath will be used in a subsequent publication as input in a Langevin evolution code for the generation and propagation of heavy particles in heavy-ion collisions at LHC and RHIC energies.

  4. Adolescent Mothers' Adjustment to Parenting.

    ERIC Educational Resources Information Center

    Samuels, Valerie Jarvis; And Others

    1994-01-01

    Examined adolescent mothers' adjustment to parenting, self-esteem, social support, and perceptions of baby. Subjects (n=52) responded to questionnaires at two time periods approximately six months apart. Mothers with higher self-esteem at Time 1 had better adjustment at Time 2. Adjustment was predicted by Time 2 variables; contact with baby's…

  5. Analysis of internal conversion coefficients

    PubMed

    Coursol; Gorozhankin; Yakushev; Briancon; Vylov

    2000-03-01

    An extensive database has been assembled that contains the three most widely used sets of calculated internal conversion coefficients (ICC): [Hager R.S., Seltzer E.C., 1968. Internal conversion tables. K-, L-, M-shell conversion coefficients for Z = 30 to Z = 103, Nucl. Data Tables A4, 1-237; Band I.M., Trzhaskovskaya M.B., 1978. Tables of gamma-ray internal conversion coefficients for the K-, L- and M-shells, 10 ≤ Z ≤ 104, Special Report of Leningrad Nuclear Physics Institute; Rosel F., Fries H.M., Alder K., Pauli H.C., 1978. Internal conversion coefficients for all atomic shells, At. Data Nucl. Data Tables 21, 91-289] and also includes new Dirac-Fock calculations [Band I.M. and Trzhaskovskaya M.B., 1993. Internal conversion coefficients for low-energy nuclear transitions, At. Data Nucl. Data Tables 55, 43-61]. This database is linked to a computer program to plot ICCs and their combinations (sums and ratios) as a function of Z and energy, as well as relative deviations of ICCs or their combinations for any pair of tabulated data. Examples of these analyses are presented for the K-shell and total ICCs of the gamma-ray standards [Hansen H.H., 1985. Evaluation of K-shell and total internal conversion coefficients for some selected nuclear transitions, Eur. Appl. Res. Rept. Nucl. Sci. Tech. 11.6 (4) 777-816] and for the K-shell and total ICCs of high multipolarity transitions (total, K-, L-, M-shells of E3 and M3 and K-shell of M4). Experimental data sets are also compared with the theoretical values of these specific calculations. PMID:10724406

  6. Transport coefficients of gluonic fluid

    SciTech Connect

    Das, Santosh K.; Alam, Jan-e

    2011-06-01

    The shear (η) and bulk (ζ) viscous coefficients have been evaluated for a gluonic fluid. The elastic gg→gg and the inelastic, number-nonconserving gg→ggg processes have been considered as the dominant perturbative processes in evaluating the ratios of the viscous coefficients to entropy density (s). Recently the process gg→ggg has been revisited and a correction to the widely used Gunion-Bertsch (GB) formula has been obtained. The η and ζ have been evaluated for the gluonic fluid with the recently derived formula. At large α_s the value of η/s approaches its lower bound, ~1/4π.

  7. Semiparametric regression during 2003–2007*

    PubMed Central

    Ruppert, David; Wand, M.P.; Carroll, Raymond J.

    2010-01-01

    Semiparametric regression is a fusion between parametric regression and nonparametric regression that integrates low-rank penalized splines, mixed model and hierarchical Bayesian methodology – thus allowing more streamlined handling of longitudinal and spatial correlation. We review progress in the field over the five-year period between 2003 and 2007. We find semiparametric regression to be a vibrant field with substantial involvement and activity, continual enhancement and widespread application. PMID:20305800

  8. Bayesian median regression for temporal gene expression data

    NASA Astrophysics Data System (ADS)

    Yu, Keming; Vinciotti, Veronica; Liu, Xiaohui; 't Hoen, Peter A. C.

    2007-09-01

    Most of the existing methods for the identification of biologically interesting genes in a temporal expression profiling dataset do not fully exploit the temporal ordering in the dataset and are based on normality assumptions for the gene expression. In this paper, we introduce a Bayesian median regression model to detect genes whose temporal profile is significantly different across a number of biological conditions. The regression model is defined by a polynomial function where both time and condition effects as well as interactions between the two are included. MCMC-based inference returns the posterior distribution of the polynomial coefficients. From this, a simple Bayes factor test is proposed to test for significance. The estimation of the median rather than the mean, and within a Bayesian framework, increases the robustness of the method compared to a Hotelling T²-test previously suggested. This is shown on simulated data and on muscular dystrophy gene expression data.
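    As a rough frequentist analogue of the model described (the paper's Bayesian MCMC machinery is beyond a short sketch), median regression of a polynomial time profile can be fitted by iteratively reweighted least squares. The data below are simulated with heavy-tailed noise to show the robustness motivation; the design and coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 80)
X = np.column_stack([np.ones_like(t), t, t**2])      # quadratic time profile
y = X @ np.array([0.5, 2.0, -1.0]) + 0.3 * rng.standard_t(df=2, size=t.size)

def lad_fit(X, y, iters=100, eps=1e-8):
    """Median (least absolute deviations) regression via IRLS."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # OLS starting values
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - X @ beta), eps)
        beta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * y))
    return beta

beta_med = lad_fit(X, y)   # robust to the heavy-tailed t(2) noise
```

Because the fit targets the conditional median, roughly half the residuals fall on each side of the fitted curve even when the noise has no finite variance.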

  9. Area-to-point regression kriging for pan-sharpening

    NASA Astrophysics Data System (ADS)

    Wang, Qunming; Shi, Wenzhong; Atkinson, Peter M.

    2016-04-01

    Pan-sharpening is a technique to combine the fine spatial resolution panchromatic (PAN) band with the coarse spatial resolution multispectral bands of the same satellite to create a fine spatial resolution multispectral image. In this paper, area-to-point regression kriging (ATPRK) is proposed for pan-sharpening. ATPRK considers the PAN band as the covariate. Moreover, ATPRK is extended with a local approach, called adaptive ATPRK (AATPRK), which fits a regression model using a local, non-stationary scheme such that the regression coefficients change across the image. The two geostatistical approaches, ATPRK and AATPRK, were compared to the 13 state-of-the-art pan-sharpening approaches summarized in Vivone et al. (2015) in experiments on three separate datasets. ATPRK and AATPRK produced more accurate pan-sharpened images than the 13 benchmark algorithms in all three experiments. Unlike the benchmark algorithms, the two geostatistical solutions precisely preserved the spectral properties of the original coarse data. Furthermore, ATPRK can be enhanced by a local scheme in AATPRK, in cases where the residuals from a global regression model are such that their spatial character varies locally.

  10. Developmental Regression in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Rogers, Sally J.

    2004-01-01

    The occurrence of developmental regression in autism is one of the more puzzling features of this disorder. Although several studies have documented the validity of parental reports of regression using home videos, accumulating data suggest that most children who demonstrate regression also demonstrated previous, subtle, developmental differences.…

  11. Building Regression Models: The Importance of Graphics.

    ERIC Educational Resources Information Center

    Dunn, Richard

    1989-01-01

    Points out reasons for using graphical methods to teach simple and multiple regression analysis. Argues that a graphically oriented approach has considerable pedagogic advantages in the exposition of simple and multiple regression. Shows that graphical methods may play a central role in the process of building regression models. (Author/LS)

  12. Regression Analysis by Example. 5th Edition

    ERIC Educational Resources Information Center

    Chatterjee, Samprit; Hadi, Ali S.

    2012-01-01

    Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. "Regression Analysis by Example, Fifth Edition" has been expanded and thoroughly…

  13. Bayesian Unimodal Density Regression for Causal Inference

    ERIC Educational Resources Information Center

    Karabatsos, George; Walker, Stephen G.

    2011-01-01

    Karabatsos and Walker (2011) introduced a new Bayesian nonparametric (BNP) regression model. Through analyses of real and simulated data, they showed that the BNP regression model outperforms other parametric and nonparametric regression models of common use, in terms of predictive accuracy of the outcome (dependent) variable. The other,…

  14. Streamflow forecasting using functional regression

    NASA Astrophysics Data System (ADS)

    Masselot, Pierre; Dabo-Niang, Sophie; Chebana, Fateh; Ouarda, Taha B. M. J.

    2016-07-01

    Streamflow, as a natural phenomenon, is continuous in time, and so are the meteorological variables which influence its variability. In practice, it can be of interest to forecast the whole flow curve instead of point values (daily or hourly). To this end, this paper introduces functional linear models and adapts them to hydrological forecasting. More precisely, functional linear models are regression models based on curves instead of single values. They allow the whole process to be considered instead of a limited number of time points or features. We apply these models to analyse the flow volume and the whole streamflow curve during a given period by using precipitation curves. The functional model is shown to lead to encouraging results. The potential of functional linear models to detect special features that would have been hard to see otherwise is pointed out. The functional model is also compared to the artificial neural network approach, and the advantages and disadvantages of both models are discussed. Finally, future research directions involving the functional model in hydrology are presented.
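    A minimal sketch of a scalar-on-function linear model, y = ∫ x(t)β(t)dt + ε, where the coefficient curve β(t) is represented in a small polynomial basis so the problem reduces to ordinary least squares. All curves are simulated; this illustrates the functional-regression idea, not the paper's specific method.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 100, 150                     # grid points per curve, number of curves
t = np.linspace(0, 1, m)
dt = t[1] - t[0]

# Simulated predictor curves (e.g. precipitation) and a smooth true beta(t).
Xc = rng.normal(size=(n, m)).cumsum(axis=1) * 0.1
beta_true = np.sin(2 * np.pi * t)
y = Xc @ beta_true * dt + rng.normal(scale=0.05, size=n)

# Represent beta(t) in a degree-5 polynomial basis; the design matrix holds
# the integrals of each curve against each basis function.
B = np.vander(t, 6, increasing=True)     # basis evaluated on the grid
Z = Xc @ B * dt
coef = np.linalg.lstsq(Z, y, rcond=None)[0]
beta_hat = B @ coef                      # estimated coefficient curve
```

The recovered curve `beta_hat` tracks the true β(t), showing how a whole predictor curve, rather than a handful of time points, drives the response.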

  15. Estimating equivalence with quantile regression

    USGS Publications Warehouse

    Cade, B.S.

    2011-01-01

    Equivalence testing and corresponding confidence interval estimates are used to provide more enlightened statistical statements about parameter estimates by relating them to intervals of effect sizes deemed to be of scientific or practical importance rather than just to an effect size of zero. Equivalence tests and confidence interval estimates are based on a null hypothesis that a parameter estimate is either outside (inequivalence hypothesis) or inside (equivalence hypothesis) an equivalence region, depending on the question of interest and assignment of risk. The former approach, often referred to as bioequivalence testing, is often used in regulatory settings because it reverses the burden of proof compared to a standard test of significance, following a precautionary principle for environmental protection. Unfortunately, many applications of equivalence testing focus on establishing average equivalence by estimating differences in means of distributions that do not have homogeneous variances. I discuss how to compare equivalence across quantiles of distributions using confidence intervals on quantile regression estimates that detect differences in heterogeneous distributions missed by focusing on means. I used one-tailed confidence intervals based on inequivalence hypotheses in a two-group treatment-control design for estimating bioequivalence of arsenic concentrations in soils at an old ammunition testing site and bioequivalence of vegetation biomass at a reclaimed mining site. Two-tailed confidence intervals based both on inequivalence and equivalence hypotheses were used to examine quantile equivalence for negligible trends over time for a continuous exponential model of amphibian abundance. © 2011 by the Ecological Society of America.

  16. Insulin resistance: regression and clustering.

    PubMed

    Yoon, Sangho; Assimes, Themistocles L; Quertermous, Thomas; Hsiao, Chin-Fu; Chuang, Lee-Ming; Hwu, Chii-Min; Rajaratnam, Bala; Olshen, Richard A

    2014-01-01

    In this paper we try to define insulin resistance (IR) precisely for a group of Chinese women. Our definition deliberately does not depend upon body mass index (BMI) or age, although in other studies, with particular random effects models quite different from models used here, BMI accounts for a large part of the variability in IR. We accomplish our goal through application of Gauss mixture vector quantization (GMVQ), a technique for clustering that was developed for application to lossy data compression. Defining data come from measurements that play major roles in medical practice. A precise statement of what the data are is in Section 1. Their family structures are described in detail. They concern levels of lipids and the results of an oral glucose tolerance test (OGTT). We apply GMVQ to residuals obtained from regressions of outcomes of an OGTT and lipids on functions of age and BMI that are inferred from the data. A bootstrap procedure developed for our family data supplemented by insights from other approaches leads us to believe that two clusters are appropriate for defining IR precisely. One cluster consists of women who are IR, and the other of women who seem not to be. Genes and other features are used to predict cluster membership. We argue that prediction with "main effects" is not satisfactory, but prediction that includes interactions may be. PMID:24887437
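    Gauss mixture vector quantization (GMVQ) itself is specialized; as an illustrative stand-in, the regress-then-cluster idea described here can be sketched with ordinary least squares on age and BMI followed by a two-cluster k-means on the residuals. All data, group effects, and variable names below are synthetic assumptions, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
age = rng.uniform(30, 70, n)
bmi = rng.uniform(18, 35, n)

# Two hypothetical latent groups ("IR" vs "not IR") shift an OGTT-like outcome.
group = rng.integers(0, 2, n)
y = 0.03 * age + 0.1 * bmi + 1.5 * group + rng.normal(scale=0.4, size=n)

# Step 1: regress the outcome on functions of age and BMI; keep residuals.
X = np.column_stack([np.ones(n), age, bmi])
resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

# Step 2: two-cluster 1-D k-means on the residuals (simple stand-in for GMVQ).
centers = np.array([resid.min(), resid.max()])
for _ in range(25):
    lab = (np.abs(resid - centers[0]) > np.abs(resid - centers[1])).astype(int)
    centers = np.array([resid[lab == 0].mean(), resid[lab == 1].mean()])

# Agreement between recovered clusters and latent groups (label-invariant).
agree = max(np.mean(lab == group), np.mean(lab != group))
```

Because the latent group shift survives the regression adjustment, the residuals are bimodal and the two clusters recover the groups well.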

  17. Effective Viscosity Coefficient of Nanosuspensions

    NASA Astrophysics Data System (ADS)

    Rudyak, V. Ya.; Belkin, A. A.; Egorov, V. V.

    2008-12-01

    Systematic calculations of the effective viscosity coefficient of nanosuspensions have been performed using the molecular dynamics method. It is established that the viscosity of a nanosuspension depends not only on the volume concentration of the nanoparticles but also on their mass and diameter. Differences from Einstein's relation are found even for nanosuspensions with a low particle concentration.
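    For reference, the classical Einstein relation that the nanosuspension results are compared against can be written down directly; the fluid and concentration values below are illustrative, not from the study.

```python
def einstein_viscosity(eta0, phi):
    """Einstein's dilute-suspension relation: eta_eff = eta0 * (1 + 2.5 * phi),
    valid for rigid spheres at low volume fraction phi."""
    return eta0 * (1.0 + 2.5 * phi)

# Example: water-like base fluid (eta0 ~ 1.0e-3 Pa*s), 2% particle volume fraction.
eta = einstein_viscosity(1.0e-3, 0.02)   # effective viscosity in Pa*s
```

The abstract's point is that molecular dynamics results for nanosuspensions deviate from this relation even at low concentrations, because particle mass and diameter also matter.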

  18. Aerodynamic coefficients and transformation tables

    NASA Technical Reports Server (NTRS)

    Ames, Joseph S

    1918-01-01

    The problem of the transformation of numerical values expressed in one system of units into another set or system of units frequently arises in connection with aerodynamic problems. Report contains aerodynamic coefficients and conversion tables needed to facilitate such transformation. (author)

  19. Estimating the Polyserial Correlation Coefficient.

    ERIC Educational Resources Information Center

    Bedrick, Edward J.; Breslin, Frederick C.

    1996-01-01

    Simple noniterative estimators of the polyserial correlation coefficient are developed by exploiting a general relationship between the polyserial correlation and the point polyserial correlation to give extensions of the biserial estimators of K. Pearson (1909), H. E. Brogden (1949), and F. M. Lord (1963) to the multicategory setting. (SLD)

  20. Integer Solutions of Binomial Coefficients

    ERIC Educational Resources Information Center

    Gilbertson, Nicholas J.

    2016-01-01

    A good formula is like a good story, rich in description, powerful in communication, and eye-opening to readers. The formula presented in this article for determining the coefficients of the binomial expansion of (x + y)n is one such "good read." The beauty of this formula is in its simplicity--both describing a quantitative situation…

  1. Tables of the coefficients A

    NASA Technical Reports Server (NTRS)

    Chandra, N.

    1974-01-01

    Numerical coefficients required to express the angular distribution for the rotationally elastic or inelastic scattering of electrons from a diatomic molecule were tabulated for the case of nitrogen and in the energy range from 0.20 eV to 10.0 eV. Five different rotational states are considered.

  2. Identities for generalized hypergeometric coefficients

    SciTech Connect

    Biedenharn, L.C.; Louck, J.D.

    1991-01-01

    Generalizations of hypergeometric functions to arbitrarily many symmetric variables are discussed, along with their associated hypergeometric coefficients, and the setting within which these generalizations arose. Identities generalizing the Euler identity for ₂F₁, the Saalschuetz identity, and two generalizations of the ₄F₃ Bailey identity, among others, are given. 16 refs.


  3. Prediction of stream volatilization coefficients

    USGS Publications Warehouse

    Rathbun, Ronald E.

    1990-01-01

    Equations are developed for predicting the liquid-film and gas-film reference-substance parameters for quantifying volatilization of organic solutes from streams. Molecular weight and molecular-diffusion coefficients of the solute are used as correlating parameters. Equations for predicting molecular-diffusion coefficients of organic solutes in water and air are developed, with molecular weight and molal volume as parameters. Mean absolute errors of prediction for diffusion coefficients in water are 9.97% for the molecular-weight equation, 6.45% for the molal-volume equation. The mean absolute error for the diffusion coefficient in air is 5.79% for the molal-volume equation. Molecular weight is not a satisfactory correlating parameter for diffusion in air because two equations are necessary to describe the values in the data set. The best predictive equation for the liquid-film reference-substance parameter has a mean absolute error of 5.74%, with molal volume as the correlating parameter. The best equation for the gas-film parameter has a mean absolute error of 7.80%, with molecular weight as the correlating parameter.
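    The kind of regression involved can be sketched as a power-law fit of diffusion coefficient against molal volume, estimated by least squares on logarithms, with the mean absolute error of prediction reported in percent. The data and coefficients below are synthetic assumptions, not Rathbun's fitted equations.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical solutes: molal volume V and a synthetic "true" diffusion
# coefficient following D = a * V^b with multiplicative scatter.
V = rng.uniform(50, 300, 40)
D = 1.2e-4 * V**-0.6 * np.exp(rng.normal(scale=0.06, size=V.size))

# Fit log D = log a + b log V by ordinary least squares.
A = np.column_stack([np.ones_like(V), np.log(V)])
coef = np.linalg.lstsq(A, np.log(D), rcond=None)[0]
D_pred = np.exp(A @ coef)

# Mean absolute error of prediction, in percent, as reported in the abstract.
mae_pct = 100 * np.mean(np.abs(D_pred - D) / D)
```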

  4. Psychosocial Predictors of Adjustment among First Year College of Education Students

    ERIC Educational Resources Information Center

    Salami, Samuel O.

    2011-01-01

    The purpose of this study was to examine the contribution of psychological and social factors to the prediction of adjustment to college. A total of 250 first year students from colleges of education in Kwara State, Nigeria, completed measures of self-esteem, emotional intelligence, stress, social support and adjustment. Regression analyses…

  5. Influence of Parenting Styles on the Adjustment and Academic Achievement of Traditional College Freshmen.

    ERIC Educational Resources Information Center

    Hickman, Gregory P.; Bartholomae, Suzanne; McKenry, Patrick C.

    2000-01-01

    Examines the relationship between parenting styles and academic achievement and adjustment of traditional college freshmen (N=101). Multiple regression models indicate that authoritative parenting style was positively related to student's academic adjustment. Self-esteem was significantly predictive of social, personal-emotional, goal…

  6. The Impact of Statistically Adjusting for Rater Effects on Conditional Standard Errors of Performance Ratings

    ERIC Educational Resources Information Center

    Raymond, Mark R.; Harik, Polina; Clauser, Brian E.

    2011-01-01

    Prior research indicates that the overall reliability of performance ratings can be improved by using ordinary least squares (OLS) regression to adjust for rater effects. The present investigation extends previous work by evaluating the impact of OLS adjustment on standard errors of measurement ("SEM") at specific score levels. In addition, a…

  7. Fully Regressive Melanoma: A Case Without Metastasis.

    PubMed

    Ehrsam, Eric; Kallini, Joseph R; Lebas, Damien; Khachemoune, Amor; Modiano, Philippe; Cotten, Hervé

    2016-08-01

    Fully regressive melanoma is a phenomenon in which the primary cutaneous melanoma becomes completely replaced by fibrotic components as a result of the host immune response. Although 10 to 35 percent of cases of cutaneous melanoma may partially regress, fully regressive melanoma is very rare; only 47 cases have been reported in the literature to date. All of the cases of fully regressive melanoma reported in the literature were diagnosed in conjunction with metastasis in a patient. The authors describe a case of fully regressive melanoma without any metastases at the time of its diagnosis. Characteristic findings on dermoscopy, as well as the absence of melanoma on final biopsy, confirmed the diagnosis. PMID:27672418

  8. Delay Adjusted Incidence Infographic

    Cancer.gov

    This Infographic shows the National Cancer Institute SEER Incidence Trends. The graphs show the Average Annual Percent Change (AAPC) 2002-2011. For Men, Thyroid: 5.3*, Liver & IBD: 3.6*, Melanoma: 2.3*, Kidney: 2.0*, Myeloma: 1.9*, Pancreas: 1.2*, Leukemia: 0.9*, Oral Cavity: 0.5, Non-Hodgkin Lymphoma: 0.3*, Esophagus: -0.1, Brain & ONS: -0.2*, Bladder: -0.6*, All Sites: -1.1*, Stomach: -1.7*, Larynx: -1.9*, Prostate: -2.1*, Lung & Bronchus: -2.4*, and Colon & Rectum: -3.0*. For Women, Thyroid: 5.8*, Liver & IBD: 2.9*, Myeloma: 1.8*, Kidney: 1.6*, Melanoma: 1.5, Corpus & Uterus: 1.3*, Pancreas: 1.1*, Leukemia: 0.6*, Brain & ONS: 0, Non-Hodgkin Lymphoma: -0.1, All Sites: -0.1, Breast: -0.3, Stomach: -0.7*, Oral Cavity: -0.7*, Bladder: -0.9*, Ovary: -0.9*, Lung & Bronchus: -1.0*, Cervix: -2.4*, and Colon & Rectum: -2.7*. * AAPC is significantly different from zero (p<.05). Rates were adjusted for reporting delay in the registry. www.cancer.gov Source: Special section of the Annual Report to the Nation on the Status of Cancer, 1975-2011.

  9. Developmental regression in autism spectrum disorder

    PubMed Central

    Al Backer, Nouf Backer

    2015-01-01

    The occurrence of developmental regression in autism spectrum disorder (ASD) is one of the most puzzling phenomena of this disorder. Little is known about the nature and mechanism of developmental regression in ASD. About one-third of young children with ASD lose some skills during the preschool period, usually speech, but sometimes nonverbal communication, social, or play skills are also affected. There is considerable evidence suggesting that most children who demonstrate regression also had previous, subtle, developmental differences. It is difficult to predict the prognosis of autistic children with developmental regression. It seems that the earlier development of social, language, and attachment behaviors followed by regression does not predict the later recovery of skills or better developmental outcomes. The underlying mechanisms that lead to regression in autism are unknown. The role of subclinical epilepsy in the developmental regression of children with autism remains unclear. PMID:27493417

  10. Racial identity and reflected appraisals as influences on Asian Americans' racial adjustment.

    PubMed

    Alvarez, A N; Helms, J E

    2001-08-01

    J. E. Helms's (1990) racial identity psychodiagnostic model was used to examine the contribution of racial identity schemas and reflected appraisals to the development of healthy racial adjustment of Asian American university students (N = 188). Racial adjustment was operationally defined as collective self-esteem and awareness of anti-Asian racism. Multiple regression analyses suggested that racial identity schemas and reflected appraisals were significantly predictive of Asian Americans' racial adjustment. Implications for counseling and future research are discussed.

  11. Subsonic Aircraft With Regression and Neural-Network Approximators Designed

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.

    2004-01-01

    At the NASA Glenn Research Center, NASA Langley Research Center's Flight Optimization System (FLOPS) and the design optimization testbed COMETBOARDS with regression and neural-network-analysis approximators have been coupled to obtain a preliminary aircraft design methodology. For a subsonic aircraft, the optimal design, that is the airframe-engine combination, is obtained by the simulation. The aircraft is powered by two high-bypass-ratio engines with a nominal thrust of about 35,000 lbf. It is to carry 150 passengers at a cruise speed of Mach 0.8 over a range of 3000 n mi and to operate on a 6000-ft runway. The aircraft design utilized a neural network and a regression-approximations-based analysis tool, along with a multioptimizer cascade algorithm that uses sequential linear programming, sequential quadratic programming, the method of feasible directions, and then sequential quadratic programming again. Optimal aircraft weight versus the number of design iterations is shown. The central processing unit (CPU) time to solution is given. It is shown that the regression-method-based analyzer exhibited a smoother convergence pattern than the FLOPS code. The optimum weight obtained by the approximation technique and the FLOPS code differed by 1.3 percent. Prediction by the approximation technique exhibited no error for the aircraft wing area and turbine entry temperature, whereas it was within 2 percent for most other parameters. Cascade strategy was required by FLOPS as well as the approximators. The regression method had a tendency to hug the data points, whereas the neural network exhibited a propensity to follow a mean path. The performance of the neural network and regression methods was considered adequate. It was at about the same level for small, standard, and large models with redundancy ratios (defined as the number of input-output pairs to the number of unknown coefficients) of 14, 28, and 57, respectively. In an SGI octane workstation (Silicon Graphics

  12. Time series regression model for infectious disease and weather.

    PubMed

    Imai, Chisato; Armstrong, Ben; Chalabi, Zaid; Mangtani, Punam; Hashizume, Masahiro

    2015-10-01

    Time series regression has been developed and long used to evaluate the short-term associations of air pollution and weather with mortality or morbidity of non-infectious diseases. The application of the regression approaches from this tradition to infectious diseases, however, is less well explored and raises some new issues. We discuss and present potential solutions for five issues often arising in such analyses: changes in immune population, strong autocorrelations, a wide range of plausible lag structures and association patterns, seasonality adjustments, and large overdispersion. The potential approaches are illustrated with datasets of cholera cases and rainfall from Bangladesh and influenza and temperature in Tokyo. Though this article focuses on the application of the traditional time series regression to infectious diseases and weather factors, we also briefly introduce alternative approaches, including mathematical modeling, wavelet analysis, and autoregressive integrated moving average (ARIMA) models. Modifications proposed to standard time series regression practice include using sums of past cases as proxies for the immune population, and using the logarithm of lagged disease counts to control autocorrelation due to true contagion, both of which are motivated from "susceptible-infectious-recovered" (SIR) models. The complexity of lag structures and association patterns can often be informed by biological mechanisms and explored by using distributed lag non-linear models. For overdispersed models, alternative distribution models such as quasi-Poisson and negative binomial should be considered. Time series regression can be used to investigate dependence of infectious diseases on weather, but may need modifying to allow for features specific to this context.
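    A minimal sketch of one proposed modification, using the logarithm of lagged counts as a covariate in a log-linear count regression. A weekly series, the covariate choice, and all coefficient values are simulated assumptions; the fit is plain Poisson regression by iteratively reweighted least squares (quasi-Poisson gives the same point estimates).

```python
import numpy as np

rng = np.random.default_rng(5)
T = 300
temp = 15 + 10 * np.sin(2 * np.pi * np.arange(T) / 52)  # weekly "temperature"

# Simulate counts depending on weather and on last week's counts (contagion).
cases = np.zeros(T)
cases[0] = 10
for t in range(1, T):
    mu_t = np.exp(0.5 + 0.05 * temp[t] + 0.4 * np.log(cases[t - 1] + 1))
    cases[t] = rng.poisson(mu_t)

# Covariates: temperature plus log lagged counts, the SIR-motivated term
# intended to absorb autocorrelation from true contagion.
X = np.column_stack([np.ones(T - 1), temp[1:], np.log(cases[:-1] + 1)])
y = cases[1:]

# Poisson regression by iteratively reweighted least squares.
beta = np.linalg.lstsq(X, np.log(y + 1), rcond=None)[0]  # crude start
for _ in range(25):
    mu = np.exp(X @ beta)
    z = X @ beta + (y - mu) / mu        # working response
    XW = X * mu[:, None]                # Poisson IRLS weights are mu
    beta = np.linalg.solve(X.T @ XW, XW.T @ z)
```

The fitted coefficients recover both the weather effect and the contagion term, illustrating why omitting the lagged-count covariate would push that dependence into the residual autocorrelation.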

  13. Determination of airplane model structure from flight data by using modified stepwise regression

    NASA Technical Reports Server (NTRS)

    Klein, V.; Batterson, J. G.; Murphy, P. C.

    1981-01-01

    The linear and stepwise regressions are briefly introduced; then the problem of determining airplane model structure is addressed. The modified stepwise regression (MSR) is constructed to force a linear model for the aerodynamic coefficient first, then to add significant nonlinear terms and delete nonsignificant terms from the model. In addition to the statistical criteria in the stepwise regression, the prediction sum of squares (PRESS) criterion and the analysis of residuals were examined for the selection of an adequate model. The procedure is used in examples with simulated and real flight data. It is shown that the MSR performs better than the ordinary stepwise regression and that the technique can also be applied to large amplitude maneuvers.
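    The PRESS criterion mentioned here has a closed form for linear models: PRESS = Σ (e_i / (1 − h_ii))², where h_ii are the hat-matrix diagonals, so leave-one-out prediction errors require no refitting. A small sketch on synthetic data (not flight data) comparing two candidate model structures:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 60
x = rng.uniform(-1, 1, n)
y = 1.0 + 2.0 * x + 0.5 * x**2 + rng.normal(scale=0.1, size=n)

def press(X, y):
    """Prediction sum of squares: leave-one-out residuals via the hat matrix."""
    H = X @ np.linalg.solve(X.T @ X, X.T)
    e = y - H @ y
    return np.sum((e / (1.0 - np.diag(H))) ** 2)

# Candidate structures: linear only, vs. linear plus a quadratic term.
X_lin = np.column_stack([np.ones(n), x])
X_quad = np.column_stack([np.ones(n), x, x**2])
press_lin, press_quad = press(X_lin, y), press(X_quad, y)
```

The structure that matches the data-generating terms achieves the lower PRESS, which is how the criterion guides term selection in the stepwise procedure.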

  14. Study of Dispersion Coefficient Channel

    NASA Astrophysics Data System (ADS)

    Akiyama, K. R.; Bressan, C. K.; Pires, M. S. G.; Canno, L. M.; Ribeiro, L. C. L. J.

    2016-08-01

    The issue of water pollution has worsened in recent times due to releases, intentional or not, of pollutants into natural water bodies. This has prompted several studies of the distribution of pollutants. Water quality models have been developed and are widely used today as a preventative tool, i.e., to predict the concentration distribution of a constituent along a body of water on spatial and temporal scales. To understand and use such models, it is necessary to know some of the hydraulic concepts underlying their application, including the longitudinal dispersion coefficient. This study aims to conduct a theoretical and experimental study of the channel dispersion coefficient, yielding more information about its direct determination beyond that available in the literature.

  15. Consistent transport coefficients in astrophysics

    NASA Technical Reports Server (NTRS)

    Fontenla, Juan M.; Rovira, M.; Ferrofontan, C.

    1986-01-01

    A consistent theory for dealing with transport phenomena in stellar atmospheres, starting with the kinetic equations and introducing three cases (LTE, partial LTE, and non-LTE), was developed. The consistent hydrodynamical equations were presented for partial LTE, the transport coefficients defined, and a method shown to calculate them. The method is based on the numerical solution of kinetic equations considering Landau, Boltzmann, and Fokker-Planck collision terms. Finally, a set of results for the transport coefficients derived for a partially ionized hydrogen gas with radiation was shown, considering ionization and recombination as well as elastic collisions. The results obtained imply major changes in some types of theoretical model calculations and can resolve some important current problems concerning energy and mass balance in the solar atmosphere. It is shown that energy balance in the lower solar transition region can be fully explained by means of radiation losses and conductive flux.

  16. High temperature Seebeck coefficient metrology

    SciTech Connect

    Martin, J.; Tritt, T.; Uher, C.

    2010-12-15

    We present an overview of the challenges and practices of thermoelectric metrology on bulk materials at high temperature (300 to 1300 K). The Seebeck coefficient, when combined with thermal and electrical conductivity, is an essential property measurement for evaluating the potential performance of novel thermoelectric materials. However, there is some question as to which measurement technique(s) provides the most accurate determination of the Seebeck coefficient at high temperature. This has led to the implementation of nonideal practices that have further complicated the confirmation of reported high ZT materials. To ensure meaningful interlaboratory comparison of data, thermoelectric measurements must be reliable, accurate, and consistent. This article will summarize and compare the relevant measurement techniques and apparatus designs required to effectively manage uncertainty, while also providing a reference resource of previous advances in high temperature thermoelectric metrology.
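    The quantity under measurement can be illustrated with its defining relation, S = −ΔV/ΔT, for a small temperature gradient applied across the sample. The values below are illustrative, not an account of any particular apparatus.

```python
def seebeck_coefficient(delta_v, delta_t):
    """Seebeck coefficient S = -dV/dT (in V/K) from the voltage developed
    across a sample under a small applied temperature difference."""
    return -delta_v / delta_t

# Example: a 1.1 mV drop across a 5 K gradient gives S = -220 microvolts/K,
# a magnitude typical of good thermoelectric materials.
s = seebeck_coefficient(1.1e-3, 5.0)
```

In practice the measured voltage includes lead and contact contributions, which is one reason the article stresses careful technique at high temperature.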

  17. Portable vapor diffusion coefficient meter

    DOEpatents

    Ho, Clifford K.

    2007-06-12

    An apparatus for measuring the effective vapor diffusion coefficient of a test vapor diffusing through a sample of porous media contained within a test chamber. A chemical sensor measures the time-varying concentration of vapor that has diffused a known distance through the porous media. A data processor contained within the apparatus compares the measured sensor data with analytical predictions of the response curve based on the transient diffusion equation using Fick's Law, iterating on the choice of an effective vapor diffusion coefficient until the difference between the predicted and measured curves is minimized. Optionally, a purge fluid can be forced through the porous media, permitting the apparatus to also measure a gas-phase permeability. The apparatus can be made lightweight, self-powered, and portable for use in the field.
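    The fitting loop described can be sketched under a simplifying assumption: semi-infinite 1-D geometry with a constant-concentration source, for which the transient diffusion equation gives C(x,t) = C0·erfc(x / (2·sqrt(D·t))). The geometry, sensor distance, and true D below are illustrative, and the "measured" curve is generated from the same model.

```python
import math

C0, x, D_true = 1.0, 0.05, 8.0e-6     # source conc., sensor distance (m), m^2/s
times = [10.0 * k for k in range(1, 61)]   # sensor readings from 10 s to 600 s

def conc(D, t):
    """1-D transient diffusion from a constant source (Fick's second law)."""
    return C0 * math.erfc(x / (2.0 * math.sqrt(D * t)))

measured = [conc(D_true, t) for t in times]

# Iterate on D until the predicted and measured curves agree
# (a coarse grid search standing in for the processor's minimization).
best_D, best_err = None, float("inf")
for k in range(1, 201):
    D = k * 1.0e-7
    err = sum((conc(D, t) - m) ** 2 for t, m in zip(times, measured))
    if err < best_err:
        best_D, best_err = D, err
```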

  18. Measurement of reaeration coefficients for selected Florida streams

    USGS Publications Warehouse

    Hampson, P.S.; Coffin, J.E.

    1989-01-01

A total of 29 separate reaeration coefficient determinations were performed on 27 subreaches of 12 selected Florida streams between October 1981 and May 1985. Measurements performed prior to June 1984 were made using the peak and area methods with ethylene and propane as the tracer gases. Later measurements utilized the steady-state method with propane as the only tracer gas. The reaeration coefficients ranged from 1.07 to 45.9 per day with a mean estimated probable error of ±16.7%. Ten predictive equations (compiled from the literature) were also evaluated using the measured coefficients. The most representative equation was one of the energy dissipation type with a standard error of 60.3%. Seven of the 10 predictive equations were modified using the measured coefficients and nonlinear regression techniques. The most accurate of the developed equations was also of the energy dissipation form and had a standard error of 54.9%. For 5 of the 13 subreaches in which both ethylene and propane were used, the ethylene data resulted in substantially larger reaeration coefficient values, which were rejected. In these reaches, ethylene concentrations were probably significantly affected by one or more electrophilic addition reactions known to occur in aqueous media. (Author's abstract)
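Energy-dissipation-type equations of the kind evaluated above are power laws, typically fitted by linear regression on logarithms. A minimal sketch with made-up data; the data values and the fitted coefficients are hypothetical, not those of the report:

```python
import math

def fit_power_law(x, y):
    """Least-squares fit of y = a * x**b via linear regression on logs."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx = sum(lx) / n
    my = sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / \
        sum((u - mx) ** 2 for u in lx)
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical pairs of energy dissipation rate (channel slope * velocity)
# versus measured reaeration coefficient (per day), made up to illustrate
# the fitting procedure only.
sv = [0.0005, 0.001, 0.002, 0.004, 0.008]
k2 = [2.1, 4.0, 7.9, 16.5, 31.0]
a, b = fit_power_law(sv, k2)   # b near 1 for this synthetic data
```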

  19. Ionization coefficients in gas mixtures

    NASA Astrophysics Data System (ADS)

    Marić, D.; Šašić, O.; Jovanović, J.; Radmilović-Rađenović, M.; Petrović, Z. Lj.

    2007-03-01

We have tested the application of the common E/N ( E—electric field, N—gas number density) or Wieland approximation [Van Brunt, R.J., 1987. Common parametrizations of electron transport, collision cross section, and dielectric strength data for binary gas mixtures. J. Appl. Phys. 61 (5), 1773-1787.] and the common mean energy (CME) combination of the data for pure gases to obtain ionization coefficients for mixtures. Test calculations were made for Ar-CH4, Ar-N2, He-Xe and CH4-N2 mixtures. The standard combination procedure generally gives poor results because the electron energy distribution in a mixture differs considerably from those in the individual gases at the same value of E/N. The CME method may be used for mixtures of gases with ionization coefficients that do not differ by more than two orders of magnitude, which is better than any other technique that has been proposed [Marić, D., Radmilović-Rađenović, M., Petrović, Z.Lj., 2005. On parametrization and mixture laws for electron ionization coefficients. Eur. Phys. J. D 35, 313-321.].

  20. The interpretation of selection coefficients.

    PubMed

    Barton, N H; Servedio, M R

    2015-05-01

Evolutionary biologists have an array of powerful theoretical techniques that can accurately predict changes in the genetic composition of populations. Changes in gene frequencies and genetic associations between loci can be tracked as they respond to a wide variety of evolutionary forces. However, it is often less clear how to decompose these various forces into components that accurately reflect the underlying biology. Here, we present several issues that arise in the definition and interpretation of selection and selection coefficients, focusing on insights gained through the examination of selection coefficients in multilocus notation. Using this notation, we discuss how its flexibility (which allows different biological units to be identified as targets of selection) is reflected in the interpretation of the coefficients that the notation generates. In many situations, it can be difficult to agree on whether loci can be considered to be under "direct" versus "indirect" selection, or to quantify this selection. We present arguments for what the terms direct and indirect selection might best encompass, considering a range of issues, from viability and sexual selection to kin selection. We show how multilocus notation can discriminate between direct and indirect selection, and describe when it can do so. PMID:25790030

  1. Incremental Aerodynamic Coefficient Database for the USA2

    NASA Technical Reports Server (NTRS)

    Richardson, Annie Catherine

    2016-01-01

In March through May of 2016, a wind tunnel test was conducted by the Aerosciences Branch (EV33) to visually study the unsteady aerodynamic behavior over multiple transition geometries for the Universal Stage Adapter 2 (USA2) in the MSFC Aerodynamic Research Facility's Trisonic Wind Tunnel (TWT). The purpose of the test was to make a qualitative comparison of the transonic flow field in order to provide a recommended minimum transition radius for manufacturing. Additionally, six-degree-of-freedom force and moment data for each configuration tested were acquired in order to determine the geometric effects on the longitudinal aerodynamic coefficients (Normal Force, Axial Force, and Pitching Moment). In order to make a quantitative comparison of the aerodynamic effects of the USA2 transition geometry, the aerodynamic coefficient data collected during the test were parsed and incorporated into a database for each USA2 configuration tested. An incremental aerodynamic coefficient database was then developed using the generated databases for each USA2 geometry as a function of Mach number and angle of attack. The final USA2 coefficient increments will be applied to the aerodynamic coefficients of the baseline geometry to adjust the Space Launch System (SLS) integrated launch vehicle force and moment database based on the transition geometry of the USA2.

  2. Urinary trace element concentrations in environmental settings: is there a value for systematic creatinine adjustment or do we introduce a bias?

    PubMed

    Hoet, Perrine; Deumer, Gladys; Bernard, Alfred; Lison, Dominique; Haufroid, Vincent

    2016-01-01

Systematic creatinine adjustment of urinary concentrations of biomarkers has been a challenge over the past years because the assumption of a constant creatinine excretion rate appears erroneous and the issue of overadjustment has recently emerged. This study aimed at determining whether systematic creatinine adjustment is to be recommended for urinary concentrations of trace elements (TEs) in environmental settings. Paired 24-h collection and random spot urine samples (spotU) were obtained from 39 volunteers not occupationally exposed to TEs. Four models to express TEs concentration in spotU were tested to predict the 24-h excretion rate of these TEs (TEμg/24h) considered as the gold standard reference: absolute concentration (TEμg/l); ratio to creatinine (TEμg/gcr); TEμg/gcr adjusted to creatinine (TEμg/gcr-adj); and concentration adjusted to specific gravity (TEμg/l-SG). As, Ba, Cd, Co, Cr, Cu, Hg, Li, Mo, Ni, Pb, Sn, Sb, Se, Te, V and Zn were analyzed by inductively coupled argon plasma mass spectrometry. There was no single pattern of relationship between urinary TEs concentrations in spotU and TEμg/24h. TEμg/l predicted TEμg/24h with an explained variance ranging from 0 to 60%. Creatinine adjustment improved the explained variance by an additional 5 to ~60% for many TEs, but with a risk of overadjustment for most of them. This issue could be addressed by adjusting TE concentrations on the basis of the regression coefficient of the relationship between TEμg/gcr and creatinine concentration. SG adjustment was as suitable as creatinine adjustment to predict TEμg/24h with no SG-overadjustment (except V). Regarding Cd, Cr, Cu, Ni and Te, none of the models were found to reflect TEμg/24h. In the context of environmental exposure, systematic creatinine adjustment is not recommended for urinary concentrations of TEs. SG adjustment appears to be a more reliable alternative. For some TEs, however, neither method appears suitable.
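For readers unfamiliar with the adjustment models being compared, the two simplest ones can be sketched as below. The specific-gravity reference value of 1.020 is a common convention assumed here for illustration, not a value stated in the abstract:

```python
def creatinine_ratio(te_ug_per_l, creatinine_g_per_l):
    """Conventional ratio expression: micrograms of trace element per
    gram of creatinine (the TEμg/gcr model)."""
    return te_ug_per_l / creatinine_g_per_l

def specific_gravity_adjust(te_ug_per_l, sg, sg_ref=1.020):
    """Classic specific-gravity normalization (Levine-Fahy-type formula);
    sg_ref = 1.020 is a common convention, assumed here."""
    return te_ug_per_l * (sg_ref - 1.0) / (sg - 1.0)

# Two hypothetical spot samples with the same underlying exposure but
# different hydration states:
dilute = specific_gravity_adjust(5.0, sg=1.010)        # scaled up to 10.0
concentrated = specific_gravity_adjust(20.0, sg=1.080)  # scaled down to 5.0
```

The study's regression-based creatinine adjustment goes a step further, rescaling TEμg/gcr by creatinine concentration raised to an empirically fitted coefficient, which is what avoids the overadjustment the plain ratio can introduce.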

  3. Improved Estimation of Earth Rotation Parameters Using the Adaptive Ridge Regression

    NASA Astrophysics Data System (ADS)

    Huang, Chengli; Jin, Wenjing

    1998-05-01

The multicollinearity among regression variables is a common phenomenon in the reduction of astronomical data. The phenomenon of multicollinearity and the diagnostic factors are introduced first. As a remedy, a new method, called adaptive ridge regression (ARR), which is an improved method of choosing the departure constant θ in ridge regression, is suggested and applied to a case in which the Earth orientation parameters (EOP) are determined by lunar laser ranging (LLR). A diagnosis using variance inflation factors (VIFs) shows that serious multicollinearity exists among the regression variables. It is shown that the ARR method is effective in reducing the multicollinearity and makes the regression coefficients more stable than those obtained with ordinary least squares (LS) estimation, especially when there is serious multicollinearity.
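The VIF diagnostic and the ridge estimator referred to in this abstract can be illustrated on synthetic collinear data. This is a generic sketch with a fixed departure constant, not the ARR method's adaptive choice of θ:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column: regress each (centered)
    column on the others; VIF_j = 1 / (1 - R_j^2)."""
    Xc = X - X.mean(axis=0)
    out = []
    for j in range(Xc.shape[1]):
        y = Xc[:, j]
        Z = np.delete(Xc, j, axis=1)
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - (resid @ resid) / (y @ y)
        out.append(1.0 / (1.0 - r2))
    return out

def ridge(X, y, theta):
    """Ridge estimate: (X'X + theta * I)^(-1) X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + theta * np.eye(p), X.T @ y)

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 1e-3 * rng.normal(size=200)   # nearly collinear with x1
X = np.column_stack([x1, x2])
y = x1 + x2 + 0.1 * rng.normal(size=200)

v = vif(X)                    # huge VIFs flag the collinearity
b_ridge = ridge(X, y, theta=1.0)   # stable, near [1, 1]
```

With θ = 0 this reduces to LS, whose estimates for such data swing wildly between the two columns; the ridge penalty trades a small bias for that stability.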

  4. Life Events, Sibling Warmth, and Youths' Adjustment.

    PubMed

    Waite, Evelyn B; Shanahan, Lilly; Calkins, Susan D; Keane, Susan P; O'Brien, Marion

    2011-10-01

    Sibling warmth has been identified as a protective factor from life events, but stressor-support match-mismatch and social domains perspectives suggest that sibling warmth may not efficiently protect youths from all types of life events. We tested whether sibling warmth moderated the association between each of family-wide, youths' personal, and siblings' personal life events and both depressive symptoms and risk-taking behaviors. Participants were 187 youths aged 9-18 (M = 11.80 years old, SD = 2.05). Multiple regression models revealed that sibling warmth was a protective factor from depressive symptoms for family-wide events, but not for youths' personal and siblings' personal life events. Findings highlight the importance of contextualizing protective functions of sibling warmth by taking into account the domains of stressors and adjustment. PMID:22241934

  5. LRGS: Linear Regression by Gibbs Sampling

    NASA Astrophysics Data System (ADS)

    Mantz, Adam B.

    2016-02-01

    LRGS (Linear Regression by Gibbs Sampling) implements a Gibbs sampler to solve the problem of multivariate linear regression with uncertainties in all measured quantities and intrinsic scatter. LRGS extends an algorithm by Kelly (2007) that used Gibbs sampling for performing linear regression in fairly general cases in two ways: generalizing the procedure for multiple response variables, and modeling the prior distribution of covariates using a Dirichlet process.
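A minimal illustration of what a Gibbs sampler for linear regression does: alternate draws from the conditional distributions of the coefficients and the noise variance. This toy version assumes flat priors and omits everything that distinguishes LRGS (measurement errors on the covariates, multiple responses, the Dirichlet process covariate model); it only shows the alternating-conditional-draws mechanic:

```python
import numpy as np

rng = np.random.default_rng(1)

def gibbs_linear_regression(X, y, n_draws=1000):
    """Gibbs sampler for y = X beta + N(0, sigma^2) under flat priors."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y
    L = np.linalg.cholesky(XtX_inv)
    sigma2 = 1.0
    draws = np.empty((n_draws, p))
    for i in range(n_draws):
        # beta | sigma2, y  ~  N(beta_hat, sigma2 * (X'X)^-1)
        beta = beta_hat + np.sqrt(sigma2) * (L @ rng.normal(size=p))
        # sigma2 | beta, y  ~  Inverse-Gamma(n/2, RSS/2)
        resid = y - X @ beta
        sigma2 = 1.0 / rng.gamma(n / 2.0, 2.0 / (resid @ resid))
        draws[i] = beta
    return draws

# Synthetic data: intercept 1, slope 2, noise sd 0.5.
x = rng.uniform(-1.0, 1.0, size=200)
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x + 0.5 * rng.normal(size=200)
draws = gibbs_linear_regression(X, y)
post_mean = draws.mean(axis=0)   # close to [1.0, 2.0]
```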

  6. Quantile regression applied to spectral distance decay

    USGS Publications Warehouse

    Rocchini, D.; Cade, B.S.

    2008-01-01

Remotely sensed imagery has long been recognized as a powerful support for characterizing and estimating biodiversity. Spectral distance among sites has proven to be a powerful approach for detecting species composition variability. Regression analysis of species similarity versus spectral distance allows us to quantitatively estimate the amount of turnover in species composition with respect to spectral and ecological variability. In classical regression analysis, the residual sum of squares is minimized for the mean of the dependent variable distribution. However, many ecological data sets are characterized by a high number of zeroes that add noise to the regression model. Quantile regressions can be used to evaluate trend in the upper quantiles rather than a mean trend across the whole distribution of the dependent variable. In this letter, we used ordinary least squares (OLS) and quantile regressions to estimate the decay of species similarity versus spectral distance. The achieved decay rates were statistically nonzero (p < 0.01), considering both OLS and quantile regressions. Nonetheless, the OLS regression estimate of the mean decay rate was only half the decay rate indicated by the upper quantiles. Moreover, the intercept value, representing the similarity reached when the spectral distance approaches zero, was very low compared with the intercepts of the upper quantiles, which detected high species similarity when habitats are more similar. In this letter, we demonstrated the power of using quantile regressions applied to spectral distance decay to reveal species diversity patterns otherwise lost or underestimated by OLS regression. © 2008 IEEE.
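The contrast drawn here between a mean (OLS) fit and an upper-quantile fit comes from the pinball (check) loss that quantile regression minimizes. A constant-only sketch with hypothetical zero-inflated data makes the difference concrete; the numbers are illustrative, not from the study:

```python
def pinball_loss(tau, residuals):
    """Pinball loss of quantile regression: positive residuals weighted
    by tau, negative ones by (1 - tau)."""
    return sum(tau * r if r >= 0 else (tau - 1.0) * r for r in residuals)

def best_constant(tau, y, candidates):
    """Constant-only quantile 'regression': the constant minimizing the
    pinball loss is the empirical tau-quantile of y."""
    return min(candidates, key=lambda c: pinball_loss(tau, [v - c for v in y]))

# Hypothetical zero-inflated similarity data, as in many ecological sets:
y = [0.0] * 6 + [0.2, 0.4, 0.6, 0.8]
grid = [i / 100.0 for i in range(0, 101)]

mean_fit = sum(y) / len(y)              # what OLS targets: 0.2
q90_fit = best_constant(0.90, y, grid)  # upper-quantile fit: 0.6-0.8
```

The many zeroes drag the mean down while barely moving the upper quantile, which is the mechanism behind the OLS decay rate being only half that of the upper quantiles.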

  7. Process modeling with the regression network.

    PubMed

    van der Walt, T; Barnard, E; van Deventer, J

    1995-01-01

    A new connectionist network topology called the regression network is proposed. The structural and underlying mathematical features of the regression network are investigated. Emphasis is placed on the intricacies of the optimization process for the regression network and some measures to alleviate these difficulties of optimization are proposed and investigated. The ability of the regression network algorithm to perform either nonparametric or parametric optimization, as well as a combination of both, is also highlighted. It is further shown how the regression network can be used to model systems which are poorly understood on the basis of sparse data. A semi-empirical regression network model is developed for a metallurgical processing operation (a hydrocyclone classifier) by building mechanistic knowledge into the connectionist structure of the regression network model. Poorly understood aspects of the process are provided for by use of nonparametric regions within the structure of the semi-empirical connectionist model. The performance of the regression network model is compared to the corresponding generalization performance results obtained by some other nonparametric regression techniques.

  8. Geodesic least squares regression on information manifolds

    SciTech Connect

    Verdoolaege, Geert

    2014-12-05

    We present a novel regression method targeted at situations with significant uncertainty on both the dependent and independent variables or with non-Gaussian distribution models. Unlike the classic regression model, the conditional distribution of the response variable suggested by the data need not be the same as the modeled distribution. Instead they are matched by minimizing the Rao geodesic distance between them. This yields a more flexible regression method that is less constrained by the assumptions imposed through the regression model. As an example, we demonstrate the improved resistance of our method against some flawed model assumptions and we apply this to scaling laws in magnetic confinement fusion.

  9. Mood Adjustment via Mass Communication.

    ERIC Educational Resources Information Center

    Knobloch, Silvia

    2003-01-01

    Proposes and experimentally tests mood adjustment approach, complementing mood management theory. Discusses how results regarding self-exposure across time show that patterns of popular music listening among a group of undergraduate students differ with initial mood and anticipation, lending support to mood adjustment hypotheses. Describes how…

  10. Spousal Adjustment to Myocardial Infarction.

    ERIC Educational Resources Information Center

    Ziglar, Elisa J.

    This paper reviews the literature on the stresses and coping strategies of spouses of patients with myocardial infarction (MI). It attempts to identify specific problem areas of adjustment for the spouse and to explore the effects of spousal adjustment on patient recovery. Chapter one provides an overview of the importance in examining the…

  11. Data correction for seven activity trackers based on regression models.

    PubMed

    Andalibi, Vafa; Honko, Harri; Christophe, Francois; Viik, Jari

    2015-08-01

Using an activity tracker for measuring activity-related parameters, e.g. steps and energy expenditure (EE), can be very helpful in assisting a person's fitness improvement. Unlike the measuring of number of steps, an accurate EE estimation requires additional personal information as well as accurate velocity of movement, which is hard to achieve due to inaccuracy of sensors. In this paper, we have evaluated regression-based models to improve the precision of both step and EE estimation. For this purpose, data from seven activity trackers and two reference devices were collected from 20 young adult volunteers wearing all devices at once in three different tests, namely 60-minute office work, 6-hour overall activity and 60-minute walking. Reference data are used to create regression models for each device, and relative percentage errors of adjusted values are then statistically compared to those of the original values. The effectiveness of the regression models is determined based on the result of a statistical test. During the walking period, EE measurement was improved in all devices. The step measurement was also improved in five of them. The results show that improvement of EE estimation is possible with only a low-cost implementation of a fitting model over the collected data, e.g. in the app or in the corresponding service back-end. PMID:26736578
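In its simplest linear form, the correction described (fit a regression model against reference data, then adjust device readings) reduces to the sketch below. The calibration pairs are hypothetical, invented only to show the procedure:

```python
def fit_linear(x, y):
    """Ordinary least squares for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((u - mx) * (v - my) for u, v in zip(x, y)) / \
        sum((u - mx) ** 2 for u in x)
    return my - b * mx, b

# Hypothetical calibration pairs: (tracker step count, reference count).
tracker = [4800, 5200, 6100, 7000, 8050]
reference = [5000, 5400, 6300, 7200, 8300]
a, b = fit_linear(tracker, reference)

def corrected(steps):
    """Apply the fitted correction to a new tracker reading."""
    return a + b * steps
```

In deployment the model would be fitted per device (and per activity type, given the three test conditions), with the correction applied in the app or service back-end as the abstract suggests.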

  12. Normalization Ridge Regression in Practice I: Comparisons Between Ordinary Least Squares, Ridge Regression and Normalization Ridge Regression.

    ERIC Educational Resources Information Center

    Bulcock, J. W.

    The problem of model estimation when the data are collinear was examined. Though the ridge regression (RR) outperforms ordinary least squares (OLS) regression in the presence of acute multicollinearity, it is not a problem free technique for reducing the variance of the estimates. It is a stochastic procedure when it should be nonstochastic and it…

  13. Parental Divorce and Children's Adjustment.

    PubMed

    Lansford, Jennifer E

    2009-03-01

This article reviews the research literature on links between parental divorce and children's short-term and long-term adjustment. First, I consider evidence regarding how divorce relates to children's externalizing behaviors, internalizing problems, academic achievement, and social relationships. Second, I examine timing of the divorce, demographic characteristics, children's adjustment prior to the divorce, and stigmatization as moderators of the links between divorce and children's adjustment. Third, I examine income, interparental conflict, parenting, and parents' well-being as mediators of relations between divorce and children's adjustment. Fourth, I note the caveats and limitations of the research literature. Finally, I consider notable policies related to grounds for divorce, child support, and child custody in light of how they might affect children's adjustment to their parents' divorce.

  14. A Modified Gauss-Jordan Procedure as an Alternative to Iterative Procedures in Multiple Regression.

    ERIC Educational Resources Information Center

    Roscoe, John T.; Kittleson, Howard M.

    Correlation matrices involving linear dependencies are common in educational research. In such matrices, there is no unique solution for the multiple regression coefficients. Although computer programs using iterative techniques are used to overcome this problem, these techniques possess certain disadvantages. Accordingly, a modified Gauss-Jordan…

  15. Use of Empirical Estimates of Shrinkage in Multiple Regression: A Caution.

    ERIC Educational Resources Information Center

    Kromrey, Jeffrey D.; Hines, Constance V.

    1995-01-01

    The accuracy of four empirical techniques to estimate shrinkage in multiple regression was studied through Monte Carlo simulation. None of the techniques provided unbiased estimates of the population squared multiple correlation coefficient, but the normalized jackknife and bootstrap techniques demonstrated marginally acceptable performance with…

  16. Spatial Autocorrelation Approaches to Testing Residuals from Least Squares Regression

    PubMed Central

    Chen, Yanguang

    2016-01-01

    In geo-statistics, the Durbin-Watson test is frequently employed to detect the presence of residual serial correlation from least squares regression analyses. However, the Durbin-Watson statistic is only suitable for ordered time or spatial series. If the variables comprise cross-sectional data coming from spatial random sampling, the test will be ineffectual because the value of Durbin-Watson’s statistic depends on the sequence of data points. This paper develops two new statistics for testing serial correlation of residuals from least squares regression based on spatial samples. By analogy with the new form of Moran’s index, an autocorrelation coefficient is defined with a standardized residual vector and a normalized spatial weight matrix. Then by analogy with the Durbin-Watson statistic, two types of new serial correlation indices are constructed. As a case study, the two newly presented statistics are applied to a spatial sample of 29 China’s regions. These results show that the new spatial autocorrelation models can be used to test the serial correlation of residuals from regression analysis. In practice, the new statistics can make up for the deficiencies of the Durbin-Watson test. PMID:26800271
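A plain-reading sketch of the Moran-type construction described in this abstract: standardize the residual vector, row-normalize the spatial weight matrix, and form the quadratic-form ratio. The paper's exact normalization may differ; this only illustrates how such an index responds to spatially smooth versus alternating residuals:

```python
import numpy as np

def residual_autocorrelation(e, W):
    """Moran-type autocorrelation of residuals: z'Wz / z'z with z the
    standardized residuals and W row-normalized spatial weights."""
    z = (e - e.mean()) / e.std()
    Wn = W / W.sum(axis=1, keepdims=True)   # row-normalized weights
    return (z @ Wn @ z) / (z @ z)

# Toy 1-D chain of 5 regions; each region neighbors the adjacent ones.
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0

smooth = np.array([1.0, 2.0, 3.0, 4.0, 5.0])         # positive autocorrelation
alternating = np.array([1.0, -1.0, 1.0, -1.0, 1.0])  # negative autocorrelation
```

Unlike the Durbin-Watson statistic, this value does not depend on the order in which the regions are listed, only on the neighbor structure encoded in W.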

  17. Generalized single-hidden layer feedforward networks for regression problems.

    PubMed

    Wang, Ning; Er, Meng Joo; Han, Min

    2015-06-01

    In this paper, traditional single-hidden layer feedforward network (SLFN) is extended to novel generalized SLFN (GSLFN) by employing polynomial functions of inputs as output weights connecting randomly generated hidden units with corresponding output nodes. The significant contributions of this paper are as follows: 1) a primal GSLFN (P-GSLFN) is implemented using randomly generated hidden nodes and polynomial output weights whereby the regression matrix is augmented by full or partial input variables and only polynomial coefficients are to be estimated; 2) a simplified GSLFN (S-GSLFN) is realized by decomposing the polynomial output weights of the P-GSLFN into randomly generated polynomial nodes and tunable output weights; 3) both P- and S-GSLFN are able to achieve universal approximation if the output weights are tuned by ridge regression estimators; and 4) by virtue of the developed batch and online sequential ridge ELM (BR-ELM and OSR-ELM) learning algorithms, high performance of the proposed GSLFNs in terms of generalization and learning speed is guaranteed. Comprehensive simulation studies and comparisons with standard SLFNs are carried out on real-world regression benchmark data sets. Simulation results demonstrate that the innovative GSLFNs using BR-ELM and OSR-ELM are superior to standard SLFNs in terms of accuracy, training speed, and structure compactness.
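The general recipe the abstract builds on (randomly generated hidden nodes with output weights estimated by ridge regression) is the extreme-learning-machine pattern, which can be sketched as follows. This is a generic SLFN sketch, not the paper's specific P-GSLFN or S-GSLFN construction with polynomial output weights:

```python
import numpy as np

rng = np.random.default_rng(42)

def train_elm(X, y, n_hidden=50, ridge=1e-3):
    """Random hidden layer; only the output weights are estimated,
    by ridge regression on the hidden-layer outputs."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer outputs
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Fit a smooth 1-D target function.
X = rng.uniform(-1.0, 1.0, size=(300, 1))
y = np.sin(3.0 * X[:, 0])
model = train_elm(X, y)
err = np.mean(np.abs(predict(model, X) - y))   # small training error
```

Because only `beta` is solved for, training is a single linear solve, which is the source of the learning-speed advantage the abstract emphasizes.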

  18. Spatial Autocorrelation Approaches to Testing Residuals from Least Squares Regression.

    PubMed

    Chen, Yanguang

    2016-01-01

    In geo-statistics, the Durbin-Watson test is frequently employed to detect the presence of residual serial correlation from least squares regression analyses. However, the Durbin-Watson statistic is only suitable for ordered time or spatial series. If the variables comprise cross-sectional data coming from spatial random sampling, the test will be ineffectual because the value of Durbin-Watson's statistic depends on the sequence of data points. This paper develops two new statistics for testing serial correlation of residuals from least squares regression based on spatial samples. By analogy with the new form of Moran's index, an autocorrelation coefficient is defined with a standardized residual vector and a normalized spatial weight matrix. Then by analogy with the Durbin-Watson statistic, two types of new serial correlation indices are constructed. As a case study, the two newly presented statistics are applied to a spatial sample of 29 China's regions. These results show that the new spatial autocorrelation models can be used to test the serial correlation of residuals from regression analysis. In practice, the new statistics can make up for the deficiencies of the Durbin-Watson test.

  19. Adjustment versus no adjustment when using adjustable sutures in strabismus surgery

    PubMed Central

    Liebermann, Laura; Hatt, Sarah R.; Leske, David A.; Holmes, Jonathan M.

    2013-01-01

Purpose To compare long-term postoperative outcomes when performing an adjustment to achieve a desired immediate postoperative alignment versus simply tying off at the desired immediate postoperative alignment when using adjustable sutures for strabismus surgery. Methods We retrospectively identified 89 consecutive patients who underwent a reoperation for horizontal strabismus using adjustable sutures and also had a 6-week and 1-year outcome examination. In each case, the intent of the surgeon was to tie off and only to adjust if the patient was not within the intended immediate postoperative range. Postoperative success was predefined based on angle of misalignment and diplopia at distance and near. Results Of the 89 patients, 53 (60%) were adjusted and 36 (40%) were tied off. Success rates were similar between patients who were simply tied off immediately after surgery and those who were adjusted. At 6 weeks, the success rate was 64% for the nonadjusted group versus 81% for the adjusted group (P = 0.09; difference of 17%; 95% CI, −2% to 36%). At 1 year, the success rate was 67% for the nonadjusted group versus 77% for the adjusted group (P = 0.3; difference of 11%; 95% CI, −8% to 30%). Conclusions Performing an adjustment to obtain a desired immediate postoperative alignment did not yield inferior long-term outcomes to those obtained by tying off to obtain that initial alignment. If patients who were outside the desired immediate postoperative range had not been adjusted, it is possible that their long-term outcomes would have been worse; therefore, overall, an adjustable approach may be superior to a nonadjustable approach. PMID:23415035

  20. Risk-adjusted monitoring of survival times.

    PubMed

    Sego, Landon H; Reynolds, Marion R; Woodall, William H

    2009-04-30

    We consider the monitoring of surgical outcomes, where each patient has a different risk of post-operative mortality due to risk factors that exist prior to the surgery. We propose a risk-adjusted (RA) survival time CUSUM chart (RAST CUSUM) for monitoring a continuous, time-to-event variable that may be right-censored. Risk adjustment is accomplished using accelerated failure time regression models. We compare the average run length performance of the RAST CUSUM chart with the RA Bernoulli CUSUM chart using data from cardiac surgeries to motivate the details of the comparison. The comparisons show that the RAST CUSUM chart is more efficient at detecting a sudden increase in the odds of mortality than the RA Bernoulli CUSUM chart, especially when the fraction of censored observations is relatively low or when a small increase in the odds of mortality occurs. We also discuss the impact of the amount of training data used to estimate chart parameters as well as the implementation of the RAST CUSUM chart during prospective monitoring.
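The RA Bernoulli CUSUM used here as the comparison chart accumulates risk-adjusted log-likelihood-ratio scores and signals when the running sum crosses a control limit. A sketch with hypothetical risks, outcomes, and limit; the RAST CUSUM itself would replace the Bernoulli score with one derived from an accelerated failure time model for censored survival times:

```python
import math

def ra_bernoulli_weight(p, y, R=2.0):
    """Steiner-style risk-adjusted Bernoulli CUSUM score for one patient:
    p is the pre-operative mortality risk, y is 1 for death, and R is the
    odds ratio under the out-of-control alternative."""
    denom = 1.0 - p + R * p
    return math.log(R / denom) if y else math.log(1.0 / denom)

def cusum(scores, h=2.0):
    """Upper CUSUM: accumulate scores, reset at zero, signal when S > h.
    Returns the index of the signaling observation, or None."""
    s = 0.0
    for t, w in enumerate(scores):
        s = max(0.0, s + w)
        if s > h:
            return t
    return None

# Hypothetical stream of low-risk patients with a run of deaths at the end.
risks = [0.05] * 20
outcomes = [0] * 12 + [1] * 8
signal_at = cusum([ra_bernoulli_weight(p, y)
                   for p, y in zip(risks, outcomes)])
```

Each surviving low-risk patient nudges the statistic down slightly, while each death adds a large positive score, so the chart signals a few deaths into the run; the control limit `h` is chosen to set the in-control average run length.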

  1. Fitts' Law in early postural adjustments.

    PubMed

    Bertucco, M; Cesari, P; Latash, M L

    2013-02-12

    We tested a hypothesis that the classical relation between movement time and index of difficulty (ID) in quick pointing action (Fitts' Law) reflects processes at the level of motor planning. Healthy subjects stood on a force platform and performed quick and accurate hand movements into targets of different size located at two distances. The movements were associated with early postural adjustments that are assumed to reflect motor planning processes. The short distance did not require trunk rotation, while the long distance did. As a result, movements over the long distance were associated with substantial Coriolis forces. Movement kinematics and contact forces and moments recorded by the platform were studied. Movement time scaled with ID for both movements. However, the data could not be fitted with a single regression: Movements over the long distance had a larger intercept corresponding to movement times about 140 ms longer than movements over the shorter distance. The magnitude of postural adjustments prior to movement initiation scaled with ID for both short and long distances. Our results provide strong support for the hypothesis that Fitts' Law emerges at the level of motor planning, not at the level of corrections of ongoing movements. They show that, during natural movements, changes in movement distance may lead to changes in the relation between movement time and ID, for example when the contribution of different body segments to the movement varies and when the action of Coriolis force may require an additional correction of the movement trajectory. PMID:23211560
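The reported pattern (movement time scaling with ID at a common slope, but with an intercept about 140 ms larger for long-distance movements) can be illustrated by fitting Fitts'-type regressions MT = a + b·ID separately per distance. The data below are hypothetical, constructed only to mimic that offset:

```python
import math

def fitts_id(distance, width):
    """Fitts' index of difficulty, ID = log2(2D / W)."""
    return math.log2(2.0 * distance / width)

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((u - mx) * (v - my) for u, v in zip(x, y)) / \
        sum((u - mx) ** 2 for u in x)
    return my - b * mx, b

# Hypothetical movement times (s) at four IDs for the short- and
# long-distance conditions.
ids = [2.0, 3.0, 4.0, 5.0]
mt_short = [0.30, 0.38, 0.46, 0.54]
mt_long = [0.44, 0.52, 0.60, 0.68]

a_s, b_s = fit_line(ids, mt_short)
a_l, b_l = fit_line(ids, mt_long)
offset = a_l - a_s   # ~0.14 s: same slope, larger intercept
```

Pooling the two conditions into one regression would misestimate both the slope and the intercept, which is why the data "could not be fitted with a single regression."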

  2. Fitts’ Law in Early Postural Adjustments

    PubMed Central

    Bertucco, M.; Cesari, P.; Latash, M.L

    2012-01-01

We tested a hypothesis that the classical relation between movement time and index of difficulty (ID) in quick pointing action (Fitts’ Law) reflects processes at the level of motor planning. Healthy subjects stood on a force platform and performed quick and accurate hand movements into targets of different size located at two distances. The movements were associated with early postural adjustments that are assumed to reflect motor planning processes. The short distance did not require trunk rotation, while the long distance did. As a result, movements over the long distance were associated with substantial Coriolis forces. Movement kinematics and contact forces and moments recorded by the platform were studied. Movement time scaled with ID for both movements. However, the data could not be fitted with a single regression: Movements over the long distance had a larger intercept corresponding to movement times about 140 ms longer than movements over the shorter distance. The magnitude of postural adjustments prior to movement initiation scaled with ID for both short and long distances. Our results provide strong support for the hypothesis that Fitts’ Law emerges at the level of motor planning, not at the level of corrections of ongoing movements. They show that, during natural movements, changes in movement distance may lead to changes in the relation between movement time and ID, for example when the contribution of different body segments to the movement varies and when the action of Coriolis force may require an additional correction of the movement trajectory. PMID:23211560

  3. Measurements of thermal accommodation coefficients.

    SciTech Connect

    Rader, Daniel John; Castaneda, Jaime N.; Torczynski, John Robert; Grasser, Thomas W.; Trott, Wayne Merle

    2005-10-01

    A previously-developed experimental facility has been used to determine gas-surface thermal accommodation coefficients from the pressure dependence of the heat flux between parallel plates of similar material but different surface finish. Heat flux between the plates is inferred from measurements of temperature drop between the plate surface and an adjacent temperature-controlled water bath. Thermal accommodation measurements were determined from the pressure dependence of the heat flux for a fixed plate separation. Measurements of argon and nitrogen in contact with standard machined (lathed) or polished 304 stainless steel plates are indistinguishable within experimental uncertainty. Thus, the accommodation coefficient of 304 stainless steel with nitrogen and argon is estimated to be 0.80 {+-} 0.02 and 0.87 {+-} 0.02, respectively, independent of the surface roughness within the range likely to be encountered in engineering practice. Measurements of the accommodation of helium showed a slight variation with 304 stainless steel surface roughness: 0.36 {+-} 0.02 for a standard machine finish and 0.40 {+-} 0.02 for a polished finish. Planned tests with carbon-nanotube-coated plates will be performed when 304 stainless-steel blanks have been successfully coated.

  4. Suppression Situations in Multiple Linear Regression

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2006-01-01

    This article proposes alternative expressions for the two most prevailing definitions of suppression without resorting to the standardized regression modeling. The formulation provides a simple basis for the examination of their relationship. For the two-predictor regression, the author demonstrates that the previous results in the literature are…

  5. Deriving the Regression Equation without Using Calculus

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.; Gordon, Florence S.

    2004-01-01

    Probably the one "new" mathematical topic that is most responsible for modernizing courses in college algebra and precalculus over the last few years is the idea of fitting a function to a set of data in the sense of a least squares fit. Whether it be simple linear regression or nonlinear regression, this topic opens the door to applying the…

  6. A Practical Guide to Regression Discontinuity

    ERIC Educational Resources Information Center

    Jacob, Robin; Zhu, Pei; Somers, Marie-Andrée; Bloom, Howard

    2012-01-01

    Regression discontinuity (RD) analysis is a rigorous nonexperimental approach that can be used to estimate program impacts in situations in which candidates are selected for treatment based on whether their value for a numeric rating exceeds a designated threshold or cut-point. Over the last two decades, the regression discontinuity approach has…
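The core RD estimate can be sketched as two local linear fits, one on each side of the cutoff, compared at the cutoff itself. The data, bandwidth, and effect size below are synthetic assumptions for illustration, not values from the guide.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sharp RD setup (all names/values are illustrative assumptions):
# units with rating >= cutoff receive treatment, which shifts the outcome by tau.
cutoff, tau = 50.0, 5.0
rating = rng.uniform(0, 100, 2000)
treated = rating >= cutoff
outcome = 0.1 * rating + tau * treated + rng.normal(0, 1.0, rating.size)

# Local linear regression on each side of the cutoff within a bandwidth h.
h = 10.0
left = (rating >= cutoff - h) & (rating < cutoff)
right = (rating >= cutoff) & (rating <= cutoff + h)

# Fit a line on each side and compare the two fitted values at the cutoff.
bl = np.polyfit(rating[left], outcome[left], 1)
br = np.polyfit(rating[right], outcome[right], 1)
effect = np.polyval(br, cutoff) - np.polyval(bl, cutoff)
print(round(effect, 2))  # close to tau = 5.0
```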

  7. Dealing with Outliers: Robust, Resistant Regression

    ERIC Educational Resources Information Center

    Glasser, Leslie

    2007-01-01

    Least-squares linear regression is the best of statistics and it is the worst of statistics. The reasons for this paradoxical claim, arising from possible inapplicability of the method and the excessive influence of "outliers", are discussed and substitute regression methods based on median selection, which is both robust and resistant, are…
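One classic median-based resistant method of the kind alluded to is the Theil-Sen estimator, whose slope is the median of all pairwise slopes; the sketch below (an assumption about which method is meant) shows its resistance to a single outlier.

```python
import numpy as np
from itertools import combinations

def theil_sen(x, y):
    """Theil-Sen estimator: slope is the median of all pairwise slopes,
    intercept is the median of y - slope * x (resistant to outliers)."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2) if x[j] != x[i]]
    slope = np.median(slopes)
    intercept = np.median(y - slope * x)
    return slope, intercept

x = np.arange(10, dtype=float)
y = 2.0 * x + 1.0
y[9] = 100.0                       # one gross outlier
ls_slope = np.polyfit(x, y, 1)[0]  # least squares is dragged upward
ts_slope, _ = theil_sen(x, y)      # median-based fit recovers slope 2
print(ls_slope, ts_slope)
```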

  8. Regression Analysis: Legal Applications in Institutional Research

    ERIC Educational Resources Information Center

    Frizell, Julie A.; Shippen, Benjamin S., Jr.; Luna, Andrew L.

    2008-01-01

    This article reviews multiple regression analysis, describes how its results should be interpreted, and instructs institutional researchers on how to conduct such analyses using an example focused on faculty pay equity between men and women. The use of multiple regression analysis will be presented as a method with which to compare salaries of…

  9. A Simulation Investigation of Principal Component Regression.

    ERIC Educational Resources Information Center

    Allen, David E.

    Regression analysis is one of the more common analytic tools used by researchers. However, multicollinearity between the predictor variables can cause problems in using the results of regression analyses. Problems associated with multicollinearity include entanglement of relative influences of variables due to reduced precision of estimation,…

  10. Illustration of Regression towards the Means

    ERIC Educational Resources Information Center

    Govindaraju, K.; Haslett, S. J.

    2008-01-01

    This article presents a procedure for generating a sequence of data sets which will yield exactly the same fitted simple linear regression equation y = a + bx. Unless rescaled, the generated data sets will have progressively smaller variability for the two variables, and the associated response and covariate will "regress" towards their…
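One way to realize such a sequence (the article's exact construction is not shown here, so this is an assumed variant) is to shrink the x-values about their mean and the residuals by a common factor: the least squares fit y = a + bx is unchanged while the variability of both variables falls.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(10, 3, 50)
y = 2.0 + 1.5 * x + rng.normal(0, 2, 50)

b, a = np.polyfit(x, y, 1)            # fitted line y = a + b x

def shrink(x, y, c):
    """Shrink x about its mean and the residuals by factor c; the least
    squares fit (a, b) is preserved while variability decreases."""
    resid = y - (a + b * x)
    x2 = x.mean() + c * (x - x.mean())
    y2 = a + b * x2 + c * resid
    return x2, y2

x2, y2 = shrink(x, y, 0.5)
b2, a2 = np.polyfit(x2, y2, 1)
print(np.allclose([a, b], [a2, b2]))  # same fitted regression line
```

The fit is preserved because the residuals stay orthogonal to the shrunken x, so both the slope and intercept of the least squares solution are unchanged.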

  11. Regression Analysis and the Sociological Imagination

    ERIC Educational Resources Information Center

    De Maio, Fernando

    2014-01-01

    Regression analysis is an important aspect of most introductory statistics courses in sociology but is often presented in contexts divorced from the central concerns that bring students into the discipline. Consequently, we present five lesson ideas that emerge from a regression analysis of income inequality and mortality in the USA and Canada.

  12. Three-Dimensional Modeling in Linear Regression.

    ERIC Educational Resources Information Center

    Herman, James D.

    Linear regression examines the relationship between one or more independent (predictor) variables and a dependent variable. By using a particular formula, regression determines the weights needed to minimize the error term for a given set of predictors. With one predictor variable, the relationship between the predictor and the dependent variable…

  13. Local polynomial estimation of heteroscedasticity in a multivariate linear regression model and its applications in economics.

    PubMed

    Su, Liyun; Zhao, Yanyong; Yan, Tianshun; Li, Fenglan

    2012-01-01

Multivariate local polynomial fitting is applied to the multivariate linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function; then the coefficients of the regression model are obtained by the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Because local polynomial estimation is a non-parametric technique, it is unnecessary to know the form of the heteroscedastic function. Therefore, we can improve the estimation precision when the heteroscedastic function is unknown. Furthermore, we verify that the regression coefficients are asymptotically normal based on numerical simulations and normal Q-Q plots of residuals. Finally, the simulation results and the local polynomial estimation of real data indicate that our approach is effective in finite-sample situations.
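A minimal two-stage sketch of this idea in Python, with a low-order polynomial fit of the log squared residuals standing in for the paper's local polynomial smoother (an assumption made for brevity):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
x = rng.uniform(1, 5, n)
# Heteroscedastic linear model: the noise standard deviation grows with x.
y = 1.0 + 2.0 * x + rng.normal(0, 0.5 * x)

X = np.column_stack([np.ones(n), x])

# Stage 1: OLS fit, then an estimate of the variance function.  A quadratic
# in x stands in for the paper's local polynomial smoother (an assumption).
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_ols
logvar_fit = np.polyfit(x, np.log(resid**2 + 1e-12), 2)
sigma2_hat = np.exp(np.polyval(logvar_fit, x))

# Stage 2: generalized (weighted) least squares with weights 1/sigma^2,
# solving (X'WX) beta = X'Wy.
w = 1.0 / sigma2_hat
Xw = X * w[:, None]
beta_gls = np.linalg.solve(Xw.T @ X, Xw.T @ y)
print(beta_gls)  # close to the true coefficients (1.0, 2.0)
```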

  14. Applying land use regression model to estimate spatial variation of PM₂.₅ in Beijing, China.

    PubMed

    Wu, Jiansheng; Li, Jiacheng; Peng, Jian; Li, Weifeng; Xu, Guang; Dong, Chengcheng

    2015-05-01

Fine particulate matter (PM2.5) is the major air pollutant in Beijing, posing serious threats to human health. Land use regression (LUR) has been widely used in predicting spatiotemporal variation of ambient air-pollutant concentrations, though mostly restricted to European and North American contexts. We aimed to estimate spatiotemporal variations of PM2.5 by building separate LUR models in Beijing. Hourly routine PM2.5 measurements were collected at 35 sites from 4th March 2013 to 5th March 2014. Seventy-seven predictor variables were generated in GIS, including street network, land cover, population density, catering services distribution, bus stop density, intersection density, and others. Eight LUR models were developed on annual, seasonal, peak/non-peak, and incremental concentration subsets. The annual mean concentration across all sites is 90.7 μg/m³ (SD = 13.7). PM2.5 shows more temporal variation than spatial variation, indicating the necessity of building different models to capture spatiotemporal trends. The adjusted R² of these models ranges between 0.43 and 0.65. Most LUR models are driven by significant predictors including major road length, vegetation, and water land use. Annual outdoor exposure in Beijing is as high as 96.5 μg/m³. This is among the first LUR studies implemented in a seriously air-polluted Chinese context; the models generally produce acceptable results and reliable spatial air-pollution maps. Apart from the models for winter and incremental concentration, the LUR models are driven by similar variables, suggesting that the spatial variations of PM2.5 remain steady for most of the time. Temporal variations are explained by the intercepts, and spatial variations in the measurements determine the strength of the variable coefficients in our models. PMID:25487555

  15. FITTING OF THE DATA FOR DIFFUSION COEFFICIENTS IN UNSATURATED POROUS MEDIA

    SciTech Connect

    B. Bullard

    1999-05-01

    The purpose of this calculation is to evaluate diffusion coefficients in unsaturated porous media for use in the TSPA-VA analyses. Using experimental data, regression techniques were used to curve fit the diffusion coefficient in unsaturated porous media as a function of volumetric water content. This calculation substantiates the model fit used in Total System Performance Assessment-1995 An Evaluation of the Potential Yucca Mountain Repository (TSPA-1995), Section 6.5.4.

  16. Upscaling the overland flow resistance coefficient for vegetated surfaces

    NASA Astrophysics Data System (ADS)

    Kim, J.; Ivanov, V. Y.; Katopodes, N.

    2011-12-01

… flow bottom conditions. Two methods, a "hydrograph analysis" and a "dynamic wave analysis", were used to obtain the upscaled value of Manning's coefficient. A generic regression equation has been derived using dimensional analysis and the multiple regression method (R² = XXX). It allows the derivation of the Manning coefficient for arbitrary flow and surface conditions for sloping vegetated surfaces. Particular properties of the dependence of the roughness coefficient on contributing variables will also be discussed.

  17. Nodule Regression in Adults With Nodular Gastritis

    PubMed Central

    Kim, Ji Wan; Lee, Sun-Young; Kim, Jeong Hwan; Sung, In-Kyung; Park, Hyung Seok; Shim, Chan-Sup; Han, Hye Seung

    2015-01-01

Background Nodular gastritis (NG) is associated with Helicobacter pylori infection, but nodule regression in adults remains controversial. The aim of this study was to analyze the factors related to nodule regression in adults diagnosed with NG. Methods Adults who were diagnosed with NG and H. pylori infection during esophagogastroduodenoscopy (EGD) at our center were included. Changes in the size and location of the nodules, H. pylori infection status, upper gastrointestinal (UGI) symptoms, and EGD and pathology findings were compared between the initial and follow-up tests. Results Of the 117 NG patients, 66.7% (12/18) of the eradicated NG patients showed nodule regression after H. pylori eradication, whereas 9.9% (9/99) of the non-eradicated NG patients showed spontaneous nodule regression without H. pylori eradication (P < 0.001). Nodule regression was more frequent in NG patients with an antral nodule location (P = 0.010), small-sized nodules (P = 0.029), H. pylori eradication (P < 0.001), UGI symptoms (P = 0.007), and a long-term follow-up period (P = 0.030). On logistic regression analysis, nodule regression was inversely correlated with persistent H. pylori infection on the follow-up test (odds ratio (OR): 0.020, 95% confidence interval (CI): 0.003 - 0.137, P < 0.001) and a short-term follow-up period < 30.5 months (OR: 0.140, 95% CI: 0.028 - 0.700, P = 0.017). Conclusions In adults with NG, H. pylori eradication is the most significant factor associated with nodule regression. A long-term follow-up period is also correlated with nodule regression, but is less significant than H. pylori eradication. Our findings suggest that H. pylori eradication should be considered to promote nodule regression in NG patients with H. pylori infection.

  18. Climate variations and salmonellosis transmission in Adelaide, South Australia: a comparison between regression models

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Bi, Peng; Hiller, Janet

    2008-01-01

    This is the first study to identify appropriate regression models for the association between climate variation and salmonellosis transmission. A comparison between different regression models was conducted using surveillance data in Adelaide, South Australia. By using notified salmonellosis cases and climatic variables from the Adelaide metropolitan area over the period 1990-2003, four regression methods were examined: standard Poisson regression, autoregressive adjusted Poisson regression, multiple linear regression, and a seasonal autoregressive integrated moving average (SARIMA) model. Notified salmonellosis cases in 2004 were used to test the forecasting ability of the four models. Parameter estimation, goodness-of-fit and forecasting ability of the four regression models were compared. Temperatures occurring 2 weeks prior to cases were positively associated with cases of salmonellosis. Rainfall was also inversely related to the number of cases. The comparison of the goodness-of-fit and forecasting ability suggest that the SARIMA model is better than the other three regression models. Temperature and rainfall may be used as climatic predictors of salmonellosis cases in regions with climatic characteristics similar to those of Adelaide. The SARIMA model could, thus, be adopted to quantify the relationship between climate variations and salmonellosis transmission.
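A standard Poisson regression with a two-week temperature lag, one of the four model types compared, can be sketched on synthetic data; the iteratively reweighted least squares (IRLS) loop below is a generic implementation, not the study's software, and the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(3)
weeks = 300
# Seasonal temperature series and weekly case counts that depend on the
# temperature two weeks earlier (all values are illustrative assumptions).
temp = 15 + 8 * np.sin(2 * np.pi * np.arange(weeks) / 52) + rng.normal(0, 1, weeks)
temp_lag2 = np.roll(temp, 2)              # temperature two weeks before
log_mu = 0.5 + 0.08 * temp_lag2
cases = rng.poisson(np.exp(log_mu))

X = np.column_stack([np.ones(weeks), temp_lag2])[2:]   # drop wrapped rows
yv = cases[2:]

# Poisson regression (log link) fitted by IRLS, initialized from a crude
# OLS fit of log(y + 0.5).
beta = np.linalg.lstsq(X, np.log(yv + 0.5), rcond=None)[0]
for _ in range(25):
    mu = np.exp(X @ beta)
    W = mu                                # Poisson working weights
    z = X @ beta + (yv - mu) / mu         # working response
    XtW = X.T * W
    beta = np.linalg.solve(XtW @ X, XtW @ z)
print(beta)  # close to the true (0.5, 0.08)
```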

  19. Logistic regression by means of evolutionary radial basis function neural networks.

    PubMed

    Gutierrez, Pedro Antonio; Hervas-Martinez, César; Martinez-Estudillo, Francisco J

    2011-02-01

This paper proposes a hybrid multilogistic methodology, named logistic regression using initial and radial basis function (RBF) covariates. The process for obtaining the coefficients is carried out in three steps. First, an evolutionary programming (EP) algorithm is applied, in order to produce an RBF neural network (RBFNN) with a reduced number of RBF transformations and the simplest structure possible. Then, the initial attribute space (or, as commonly known as in logistic regression literature, the covariate space) is transformed by adding the nonlinear transformations of the input variables given by the RBFs of the best individual in the final generation. Finally, a maximum likelihood optimization method determines the coefficients associated with a multilogistic regression model built in this augmented covariate space. In this final step, two different multilogistic regression algorithms are applied: one considers all initial and RBF covariates (multilogistic initial-RBF regression) and the other one incrementally constructs the model and applies cross validation, resulting in an automatic covariate selection [simplelogistic initial-RBF regression (SLIRBF)]. Both methods include a regularization parameter, which has been also optimized. The methodology proposed is tested using 18 benchmark classification problems from well-known machine learning problems and two real agronomical problems. The results are compared with the corresponding multilogistic regression methods applied to the initial covariate space, to the RBFNNs obtained by the EP algorithm, and to other probabilistic classifiers, including different RBFNN design methods [e.g., relaxed variable kernel density estimation, support vector machines, a sparse classifier (sparse multinomial logistic regression)] and a procedure similar to SLIRBF but using product unit basis functions. The SLIRBF models are found to be competitive when compared with the corresponding multilogistic regression methods and the…
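The covariate-augmentation step can be illustrated on a toy problem: XOR is not linearly separable, but adding RBF transformations of the inputs as extra covariates lets ordinary logistic regression succeed. The hand-picked centers and width below are assumptions, not outputs of the paper's evolutionary algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# XOR labels on the unit square corners.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])
centers = X.copy()          # one Gaussian basis function per corner (assumed)
gamma = 2.0                 # assumed RBF width parameter

def rbf_features(X):
    """Gaussian RBF outputs exp(-gamma * ||x - c||^2) for each center c."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

# Augmented covariate space: the initial inputs plus the RBF covariates.
X_aug = np.hstack([X, rbf_features(X)])
clf = LogisticRegression(C=1e4).fit(X_aug, y)
print(clf.score(X_aug, y))  # 1.0 -- XOR becomes separable in this space
```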

  20. Life adjustment correlates of physical self-concepts.

    PubMed

    Sonstroem, R J; Potts, S A

    1996-05-01

This research tested relationships between physical self-concepts and contemporary measures of life adjustment. University students (119 females, 126 males) completed the Physical Self-Perception Profile assessing self-concepts of sport competence, physical condition, attractive body, strength, and general physical self-worth. Multiple regression found significant associations (P < 0.05 to P < 0.001) in hypothesized directions between physical self-concepts and positive affect, negative affect, depression, and health complaints in 17 of 20 analyses. Thirteen of these relationships remained significant after applying a Bonferroni correction. Hierarchical multiple regression examined the unique contribution of physical self-perceptions in predicting each adjustment variable after accounting for the effects of global self-esteem and two measures of social desirability. Physical self-concepts significantly improved associations with life adjustment (P < 0.05) in three of the eight analyses across gender and approached significance in three others. These data demonstrate that self-perceptions of physical competence in college students are related to life adjustment, independent of the effects of social desirability and global self-esteem. These links are mainly with perceptions of sport competence in males and with perceptions of physical condition, attractive body, and general physical self-worth in both males and females. PMID:9148094

  1. Exploring Mexican American adolescent romantic relationship profiles and adjustment.

    PubMed

    Moosmann, Danyel A V; Roosa, Mark W

    2015-08-01

    Although Mexican Americans are the largest ethnic minority group in the nation, knowledge is limited regarding this population's adolescent romantic relationships. This study explored whether 12th grade Mexican Americans' (N = 218; 54% female) romantic relationship characteristics, cultural values, and gender created unique latent classes and if so, whether they were linked to adjustment. Latent class analyses suggested three profiles including, relatively speaking, higher, satisfactory, and lower quality romantic relationships. Regression analyses indicated these profiles had distinct associations with adjustment. Specifically, adolescents with higher and satisfactory quality romantic relationships reported greater future family expectations, higher self-esteem, and fewer externalizing symptoms than those with lower quality romantic relationships. Similarly, adolescents with higher quality romantic relationships reported greater academic self-efficacy and fewer sexual partners than those with lower quality romantic relationships. Overall, results suggested higher quality romantic relationships were most optimal for adjustment. Future research directions and implications are discussed. PMID:26141198

  2. Integrating Risk Adjustment and Enrollee Premiums in Health Plan Payment

    PubMed Central

    McGuire, Thomas G.; Glazer, Jacob; Newhouse, Joseph P.; Normand, Sharon-Lise; Shi, Julie; Sinaiko, Anna D.; Zuvekas, Samuel

    2013-01-01

    In two important health policy contexts – private plans in Medicare and the new state-run “Exchanges” created as part of the Affordable Care Act (ACA) – plan payments come from two sources: risk-adjusted payments from a Regulator and premiums charged to individual enrollees. This paper derives principles for integrating risk-adjusted payments and premium policy in individual health insurance markets based on fitting total plan payments to health plan costs per person as closely as possible. A least squares regression including both health status and variables used in premiums reveals the weights a Regulator should put on risk adjusters when markets determine premiums. We apply the methods to an Exchange-eligible population drawn from the Medical Expenditure Panel Survey (MEPS). PMID:24308878

  3. The determination of transpiration efficiency coefficient for common bean

    NASA Astrophysics Data System (ADS)

    Ogindo, H. O.; Walker, S.

A number of studies have been conducted to determine species-specific transpiration efficiency coefficients. Although the value is available for some C3 legumes, no value has been determined for common bean within the semi-arid tropics. The coefficient is useful in modelling crop water use, as it has been found to be conservative over a range of climates when differences in vapour pressure deficit are accounted for. The objective of the experiment was to determine the transpiration efficiency coefficient for common bean for use in modelling within the semi-arid region of South Africa. Common bean (Phaseolus vulgaris L.) was grown on a weighing lysimeter during the 2000/2001 and 2001/2002 seasons. Transpiration was measured on an hourly basis using the weighing lysimeter, and the data were integrated over the growing season to determine the seasonal transpiration for the crop. At the same time, hourly measurements of canopy vapour pressure deficit were made using wet and dry bulb resistance thermometers housed in mini-shelters at 200-400 mm height. Wet and dry bulb temperature data were also collected at the nearby standard automatic weather station and used to normalize the transpiration efficiency. Transpiration efficiency for the common bean was 1.33 and 1.93 g kg⁻¹, which, when normalized and root-adjusted, gave transpiration efficiency coefficients of 3.02 and 3.51 g kPa kg⁻¹ for the 2000/2001 and 2001/2002 seasons, respectively. A mean transpiration efficiency coefficient of 3.26 ± 0.25 g kPa kg⁻¹ was adopted for the two seasons. This value is fairly consistent with those obtained for other C3 legume species, confirming the conservativeness of the coefficient and therefore its usefulness as a modelling parameter.
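The normalization amounts to multiplying transpiration efficiency by a seasonal vapour pressure deficit term, k = TE × VPD. The VPD values below are back-calculated illustrations, not the study's measurements (which also involve a root adjustment), so this only sketches the arithmetic of the normalization.

```python
# Transpiration efficiency coefficient k = TE * VPD (g kPa kg^-1): the
# normalization that removes climate dependence.  The VPD values are
# assumed illustrative inputs, not measured values from the study.
te = [1.33, 1.93]            # g kg^-1, 2000/2001 and 2001/2002 seasons
vpd = [2.27, 1.82]           # kPa (assumed effective seasonal values)
k = [t * v for t, v in zip(te, vpd)]
mean_k = sum(k) / len(k)
print([round(v, 2) for v in k], round(mean_k, 2))  # [3.02, 3.51] 3.27
```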

  4. Genetic value of herd life adjusted for milk production.

    PubMed

    Allaire, F R; Gibson, J P

    1992-05-01

Cow herd life adjusted for lactational milk production was investigated as a genetic trait in the breeding objective. Under a simple model, the relative economic weight of milk to adjusted herd life, on a per genetic standard deviation basis, was equal to CVY/(d·CVL), where CVY and CVL are the genetic coefficients of variation of milk production and adjusted herd life, respectively, and d is the depreciation per year per cow divided by the total fixed costs per year per cow. The relative economic value of milk to adjusted herd life at the prices and parameters for North America was about 3.2. An increase of 100 kg of milk was equivalent to 2.2 mo of adjusted herd life. Three to seven percent lower economic gain is expected when only improved milk production is sought, compared with a breeding objective that includes both production and adjusted herd life, as the relative value is changed by ±20%. A favorable economic gain-to-cost ratio probably exists for herd life used as a genetic trait to supplement milk in the breeding objective. Cow survival records are inexpensive, and herd life evaluations from such records may not extend the generation interval when such an evaluation is used in bull sire selection.
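The weight formula is simple enough to check numerically; the parameter values below are assumed illustrative inputs chosen to reproduce the reported magnitude, not the paper's estimates.

```python
# Relative economic weight of milk to adjusted herd life per genetic
# standard deviation: w = CVY / (d * CVL).  All inputs are assumed
# illustrative values, not the paper's parameters.
cv_milk = 0.08        # genetic CV of milk production (assumed)
cv_herdlife = 0.10    # genetic CV of adjusted herd life (assumed)
d = 0.25              # depreciation / total fixed costs per cow-year (assumed)
w = cv_milk / (d * cv_herdlife)
print(w)  # 3.2, matching the order of magnitude reported
```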

  5. Ratios of internal conversion coefficients

    SciTech Connect

    Raman, S.; Ertugrul, M.; Nestor, C.W. . E-mail: CNestorjr@aol.com; Trzhaskovskaya, M.B.

    2006-03-15

We present here a database of available experimental ratios of internal conversion coefficients for different atomic subshells measured with an accuracy of 10% or better for a number of elements in the range 26 ≤ Z ≤ 100. The experimental set involves 414 ratios for pure and 1096 ratios for mixed-multipolarity nuclear transitions in the transition energy range from 2 to 2300 keV. We give relevant theoretical ratios calculated in the framework of the Dirac-Fock method with and without regard for the hole in the atomic subshell after conversion. For comparison, the ratios obtained within the relativistic Hartree-Fock-Slater approximation are also presented. In cases where several ratios were measured for the same transition in a given isotope in which two multipolarities were involved, we present the mixing ratio δ² obtained by a least squares fit.

  6. Adjustable Induction-Heating Coil

    NASA Technical Reports Server (NTRS)

    Ellis, Rod; Bartolotta, Paul

    1990-01-01

    Improved design for induction-heating work coil facilitates optimization of heating in different metal specimens. Three segments adjusted independently to obtain desired distribution of temperature. Reduces time needed to achieve required temperature profiles.

  7. Time-adjusted variable resistor

    NASA Technical Reports Server (NTRS)

    Heyser, R. C.

    1972-01-01

    Timing mechanism was developed effecting extremely precisioned highly resistant fixed resistor. Switches shunt all or portion of resistor; effective resistance is varied over time interval by adjusting switch closure rate.

  8. 78 FR 62712 - Rate Adjustment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-22

    ... noticing a recent Postal Service filing seeking postal rate adjustments based on exigent circumstances...,'' is ``premised on the recent recession as an exigent event.'' Id. at 1, 2. In Order No. 1059,...

  9. Operational Bayesian GLS Regression for Regional Hydrologic Analyses

    NASA Astrophysics Data System (ADS)

    Reis, D. S.; Stedinger, J. R.; Martins, E. S.

    2004-05-01

Reis et al. (2003) introduced a Bayesian approach to Generalized Least Squares (GLS) regression which has several advantages over the method-of-moments (MM) and maximum likelihood estimators (MLEs) proposed by Stedinger and Tasker (1985). The Bayesian approach provides both a measure of precision of the model error variance that MM and MLEs lack, and a more reasonable description of the possible values of the model error variance in cases where the MLE or MM model error variance estimator is zero or nearly zero. This study further develops the quasi-analytic Bayesian regression model into a practical Generalized Least Squares (GLS) regional hydrologic regression methodology able to address estimation of flood quantiles, regional shape parameters, and other statistics. The paper also explores regression diagnostic statistics for Weighted Least Squares (WLS) and GLS models, including a pseudo coefficient of determination R² that can be used for model evaluation, and leverage and influence statistics that are proposed to identify rogue observations, address lack of fit, and support gauge network design. Regionalization of the shape parameter of the Log-Pearson Type III distribution is attempted using data for the Illinois River basin and the State of South Carolina. Results obtained from Ordinary Least Squares (OLS), WLS, and GLS regional regression procedures were compared, as well as the results from the Bayesian and method-of-moments estimators. The OLS results are misleading because they do not make any distinction between the variance due to model error and the variance due to time-sampling error. Both of these examples demonstrate that the true model error variance for regional skew models is on the order of 0.10 or less. Leverage and influence statistics were very useful in identifying stations that could or did have a significant impact on the analysis. Reis, D. S., Jr., J. R. Stedinger, and E. S. Martins, Bayesian GLS Regression with application to LP3 Regional…

  10. SCALE Continuous-Energy Eigenvalue Sensitivity Coefficient Calculations

    DOE PAGES

    Perfetti, Christopher M.; Rearden, Bradley T.; Martin, William R.

    2016-02-25

Sensitivity coefficients describe the fractional change in a system response that is induced by changes to system parameters and nuclear data. The Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, including quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Tracklength importance CHaracterization (CLUTCH) and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE-KENO framework of the SCALE code system to enable TSUNAMI-3D to perform eigenvalue sensitivity calculations using continuous-energy Monte Carlo methods. This work provides a detailed description of the theory behind the CLUTCH method and describes its implementation in detail. This work explores the improvements in eigenvalue sensitivity coefficient accuracy that can be gained through the use of continuous-energy sensitivity methods and also compares several sensitivity methods in terms of computational efficiency and memory requirements.

  11. Adjustment of genomic waves in signal intensities from whole-genome SNP genotyping platforms.

    PubMed

    Diskin, Sharon J; Li, Mingyao; Hou, Cuiping; Yang, Shuzhang; Glessner, Joseph; Hakonarson, Hakon; Bucan, Maja; Maris, John M; Wang, Kai

    2008-11-01

    Whole-genome microarrays with large-insert clones designed to determine DNA copy number often show variation in hybridization intensity that is related to the genomic position of the clones. We found these 'genomic waves' to be present in Illumina and Affymetrix SNP genotyping arrays, confirming that they are not platform-specific. The causes of genomic waves are not well-understood, and they may prevent accurate inference of copy number variations (CNVs). By measuring DNA concentration for 1444 samples and by genotyping the same sample multiple times with varying DNA quantity, we demonstrated that DNA quantity correlates with the magnitude of waves. We further showed that wavy signal patterns correlate best with GC content, among multiple genomic features considered. To measure the magnitude of waves, we proposed a GC-wave factor (GCWF) measure, which is a reliable predictor of DNA quantity (correlation coefficient = 0.994 based on samples with serial dilution). Finally, we developed a computational approach by fitting regression models with GC content included as a predictor variable, and we show that this approach improves the accuracy of CNV detection. With the wide application of whole-genome SNP genotyping techniques, our wave adjustment method will be important for taking full advantage of genotyped samples for CNV analysis.
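The regression-based adjustment can be sketched as fitting signal intensity on GC content and keeping the residuals; the quadratic form and the synthetic data below are assumptions for illustration, not the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
gc = rng.uniform(0.3, 0.6, n)                 # per-window GC content
true_signal = np.zeros(n)
true_signal[1000:1200] = 0.5                  # an embedded copy-number gain
# Observed log-ratio intensity = CNV signal + GC-driven wave + noise
# (the wave amplitude here is an assumed value).
lrr = true_signal + 1.5 * (gc - gc.mean()) + rng.normal(0, 0.1, n)

# Wave adjustment: regress intensity on GC content (a quadratic here, as an
# assumed simple form) and keep the residuals as the corrected signal.
coef = np.polyfit(gc, lrr, 2)
adjusted = lrr - np.polyval(coef, gc)
print(np.corrcoef(gc, adjusted)[0, 1])  # near 0: the wave is removed
```

The embedded copy-number gain survives the adjustment because it is uncorrelated with GC content, which is what makes CNV calling on the residuals more accurate.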

  12. A visual detection model for DCT coefficient quantization

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Watson, Andrew B.

    1994-01-01

The discrete cosine transform (DCT) is widely used in image compression and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are controlled by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. One approach is to set the quantization level for each coefficient so that the quantization error is near the threshold of visibility. Results from previous work are combined to form the current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color. A model-based method of optimizing the quantization matrix for an individual image was developed. The model described above provides visual thresholds for each DCT frequency. These thresholds are adjusted within each block for visual light adaptation and contrast masking. For a given quantization matrix, the DCT quantization errors are scaled by the adjusted thresholds to yield perceptual errors. These errors are pooled nonlinearly over the image to yield total perceptual error. With this model one may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
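The nonlinear pooling step can be sketched as a Minkowski (beta-norm) summation of threshold-scaled quantization errors; the threshold matrix and exponent below are placeholder assumptions, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(5)
# One 8x8 block of DCT coefficients (synthetic values for illustration).
block = rng.uniform(-100, 100, (8, 8))
Q = np.full((8, 8), 16.0)          # quantization matrix (assumed uniform)
thresholds = np.full((8, 8), 4.0)  # adjusted visual thresholds (assumed)

# Quantization error per frequency, scaled by the visual threshold to give
# a perceptual error, then pooled nonlinearly with a Minkowski exponent.
quantized = Q * np.round(block / Q)
perceptual = np.abs(block - quantized) / thresholds
beta = 4.0                         # pooling exponent (assumed)
total_error = (perceptual ** beta).sum() ** (1 / beta)
print(total_error)
```

With beta greater than 1, the pooled total is dominated by the largest per-frequency perceptual errors, which is the intended behavior of nonlinear pooling.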

  13. Regression modeling of ground-water flow

    USGS Publications Warehouse

    Cooley, R.L.; Naff, R.L.

    1985-01-01

    Nonlinear multiple regression methods are developed to model and analyze groundwater flow systems. Complete descriptions of regression methodology as applied to groundwater flow models allow scientists and engineers engaged in flow modeling to apply the methods to a wide range of problems. Organization of the text proceeds from an introduction that discusses the general topic of groundwater flow modeling, to a review of basic statistics necessary to properly apply regression techniques, and then to the main topic: exposition and use of linear and nonlinear regression to model groundwater flow. Statistical procedures are given to analyze and use the regression models. A number of exercises and answers are included to exercise the student on nearly all the methods that are presented for modeling and statistical analysis. Three computer programs implement the more complex methods. These three are a general two-dimensional, steady-state regression model for flow in an anisotropic, heterogeneous porous medium, a program to calculate a measure of model nonlinearity with respect to the regression parameters, and a program to analyze model errors in computed dependent variables such as hydraulic head. (USGS)

  14. Reliable Change Indices and Standardized Regression-Based Change Score Norms for Evaluating Neuropsychological Change in Children with Epilepsy

    PubMed Central

    Busch, Robyn M.; Lineweaver, Tara T.; Ferguson, Lisa; Haut, Jennifer S.

    2015-01-01

Reliable change index scores (RCIs) and standardized regression-based change score norms (SRBs) permit evaluation of meaningful changes in test scores following treatment interventions, like epilepsy surgery, while accounting for test-retest reliability, practice effects, score fluctuations due to error, and relevant clinical and demographic factors. Although these methods are frequently used to assess cognitive change after epilepsy surgery in adults, they have not been widely applied to examine cognitive change in children with epilepsy. The goal of the current study was to develop RCIs and SRBs for use in children with epilepsy. Sixty-three children with epilepsy (age range 6–16; M=10.19, SD=2.58) underwent comprehensive neuropsychological evaluations at two time points an average of 12 months apart. Practice-adjusted RCIs and SRBs were calculated for all cognitive measures in the battery. Practice effects were quite variable across the neuropsychological measures, with the greatest differences observed among older children, particularly on the Children’s Memory Scale and Wisconsin Card Sorting Test. There was also notable variability in test-retest reliabilities across measures in the battery, with coefficients ranging from 0.14 to 0.92. RCIs and SRBs for use in assessing meaningful cognitive change in children following epilepsy surgery are provided for measures with reliability coefficients above 0.50. This is the first study to provide RCIs and SRBs for a comprehensive neuropsychological battery based on a large sample of children with epilepsy. Tables to aid in evaluating cognitive changes in children who have undergone epilepsy surgery are provided for clinical use. An Excel sheet to perform all relevant calculations is also available to interested clinicians or researchers. PMID:26043163
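A practice-adjusted RCI of the kind described can be computed from test-retest reliability and the baseline standard deviation. This is a generic Jacobson-Truax-style sketch, not the study's own norms; the example scores are hypothetical.

```python
import math

def rci(score_t1, score_t2, sd_t1, r_xx, practice_effect=0.0):
    """Practice-adjusted reliable change index.

    SEM = SD * sqrt(1 - r_xx); SED = sqrt(2) * SEM (Jacobson-Truax style).
    """
    sem = sd_t1 * math.sqrt(1.0 - r_xx)   # standard error of measurement
    sed = math.sqrt(2.0) * sem            # standard error of the difference
    return (score_t2 - score_t1 - practice_effect) / sed

# A change is conventionally "reliable" when |RCI| > 1.96
print(rci(score_t1=95, score_t2=108, sd_t1=15, r_xx=0.90, practice_effect=3))  # ≈ 1.49
```

SRB norms instead regress the retest score on the baseline score (plus demographics) and standardize the residual; the thresholding logic is the same.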

  15. Reliable change indices and standardized regression-based change score norms for evaluating neuropsychological change in children with epilepsy.

    PubMed

    Busch, Robyn M; Lineweaver, Tara T; Ferguson, Lisa; Haut, Jennifer S

    2015-06-01

    Reliable change indices (RCIs) and standardized regression-based (SRB) change score norms permit evaluation of meaningful changes in test scores following treatment interventions, like epilepsy surgery, while accounting for test-retest reliability, practice effects, score fluctuations due to error, and relevant clinical and demographic factors. Although these methods are frequently used to assess cognitive change after epilepsy surgery in adults, they have not been widely applied to examine cognitive change in children with epilepsy. The goal of the current study was to develop RCIs and SRB change score norms for use in children with epilepsy. Sixty-three children with epilepsy (age range: 6-16; M=10.19, SD=2.58) underwent comprehensive neuropsychological evaluations at two time points an average of 12 months apart. Practice effect-adjusted RCIs and SRB change score norms were calculated for all cognitive measures in the battery. Practice effects were quite variable across the neuropsychological measures, with the greatest differences observed among older children, particularly on the Children's Memory Scale and Wisconsin Card Sorting Test. There was also notable variability in test-retest reliabilities across measures in the battery, with coefficients ranging from 0.14 to 0.92. Reliable change indices and SRB change score norms for use in assessing meaningful cognitive change in children following epilepsy surgery are provided for measures with reliability coefficients above 0.50. This is the first study to provide RCIs and SRB change score norms for a comprehensive neuropsychological battery based on a large sample of children with epilepsy. Tables to aid in evaluating cognitive changes in children who have undergone epilepsy surgery are provided for clinical use. An Excel sheet to perform all relevant calculations is also available to interested clinicians or researchers.

  16. Remote sensing and GIS-based landslide hazard analysis and cross-validation using multivariate logistic regression model on three test areas in Malaysia

    NASA Astrophysics Data System (ADS)

    Pradhan, Biswajeet

    2010-05-01

This paper presents the results of the cross-validation of a multivariate logistic regression model using remote sensing data and GIS for landslide hazard analysis on the Penang, Cameron, and Selangor areas in Malaysia. Landslide locations in the study areas were identified by interpreting aerial photographs and satellite images, supported by field surveys. SPOT 5 and Landsat TM satellite imagery were used to map landcover and vegetation index, respectively. Maps of topography, soil type, lineaments and land cover were constructed from the spatial datasets. Ten factors which influence landslide occurrence, i.e., slope, aspect, curvature, distance from drainage, lithology, distance from lineaments, soil type, landcover, rainfall precipitation, and normalized difference vegetation index (NDVI), were extracted from the spatial database and the logistic regression coefficient of each factor was computed. Then the landslide hazard was analysed using the multivariate logistic regression coefficients derived not only from the data for the respective area but also using the logistic regression coefficients calculated from each of the other two areas (nine hazard maps in all) as a cross-validation of the model. For verification of the model, the results of the analyses were then compared with the field-verified landslide locations. Among the three cases of the application of logistic regression coefficients in the same study area, the case of Selangor based on the Selangor logistic regression coefficients showed the highest accuracy (94%), whereas Penang based on the Penang coefficients showed the lowest accuracy (86%). Similarly, among the six cases from the cross-application of logistic regression coefficients in the other two areas, the case of Selangor based on the logistic coefficients of Cameron showed the highest prediction accuracy (90%), whereas the case of Penang based on the Selangor logistic regression coefficients showed the lowest accuracy (79%). Qualitatively, the cross
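The cross-application idea (fit coefficients in one area, score another) can be sketched with an ordinary logistic classifier. The data below are synthetic stand-ins for the spatial factor layers; the three-factor setup and area weights are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_area(n, w):
    """Synthetic 'area': 3 terrain factors with area-specific true weights w."""
    X = rng.normal(size=(n, 3))
    p = 1 / (1 + np.exp(-(X @ w)))
    y = (rng.uniform(size=n) < p).astype(int)   # landslide occurrence
    return X, y

# Two hypothetical areas with similar but not identical factor weights
Xa, ya = make_area(2000, np.array([1.5, -1.0, 0.5]))
Xb, yb = make_area(2000, np.array([1.2, -0.8, 0.7]))

model_a = LogisticRegression().fit(Xa, ya)
own = model_a.score(Xa, ya)       # accuracy in the model's own area
cross = model_a.score(Xb, yb)     # coefficients cross-applied to the other area
print(own, cross)
```

The paper's accuracy gap between same-area and cross-area application reflects exactly this transfer of coefficients between regions with different factor-hazard relationships.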

  17. To adjust or not to adjust for baseline when analyzing repeated binary responses? The case of complete data when treatment comparison at study end is of interest.

    PubMed

    Jiang, Honghua; Kulkarni, Pandurang M; Mallinckrodt, Craig H; Shurzinske, Linda; Molenberghs, Geert; Lipkovich, Ilya

    2015-01-01

The benefits of adjusting for baseline covariates are not as straightforward with repeated binary responses as with continuous response variables. Therefore, in this study, we compared different methods for analyzing repeated binary data through simulations when the outcome at the study endpoint is of interest. Methods compared included the chi-square test, Fisher's exact test, covariate adjusted/unadjusted logistic regression (Adj.logit/Unadj.logit), covariate adjusted/unadjusted generalized estimating equations (Adj.GEE/Unadj.GEE), and covariate adjusted/unadjusted generalized linear mixed models (Adj.GLMM/Unadj.GLMM). All these methods preserved the type I error close to the nominal level. Covariate adjusted methods improved power compared with the unadjusted methods because of the increased treatment effect estimates, especially when the correlation between the baseline and outcome was strong, even though there was an apparent increase in standard errors. Results of the chi-square test were identical to those for the unadjusted logistic regression. Fisher's exact test was the most conservative test regarding the type I error rate and also had the lowest power. Without missing data, there was no gain in using a repeated measures approach over a simple logistic regression at the final time point. Analysis of results from five phase III diabetes trials of the same compound was consistent with the simulation findings. Therefore, covariate adjusted analysis is recommended for repeated binary data when the study endpoint is of interest. PMID:25866149

  18. A comparison of regression and regression-kriging for soil characterization using remote sensing imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

In precision agriculture regression has been used widely to quantify the relationship between soil attributes and other environmental variables. However, spatial correlation existing in soil samples usually makes the regression model suboptimal. In this study, a regression-kriging method was attemp...

  19. Modelling of filariasis in East Java with Poisson regression and generalized Poisson regression models

    NASA Astrophysics Data System (ADS)

    Darnah

    2016-04-01

Poisson regression is used when the response variable is count data based on the Poisson distribution. The Poisson distribution assumes equidispersion. In practice, count data are often overdispersed or underdispersed, in which case Poisson regression is inappropriate because it may underestimate the standard errors and overstate the significance of the regression parameters, consequently giving misleading inference about those parameters. This paper suggests the generalized Poisson regression model to handle overdispersion and underdispersion in the Poisson regression model. The Poisson regression model and the generalized Poisson regression model are applied to the number of filariasis cases in East Java. Based on the Poisson regression model, the factors influencing filariasis are the percentage of families who do not practice clean and healthy living and the percentage of families who do not have a healthy house. The Poisson regression model exhibits overdispersion, so we use generalized Poisson regression. The best generalized Poisson regression model shows that the factor influencing filariasis is the percentage of families who do not have a healthy house. Interpreting the model, each additional 1 percent of families without a healthy house is associated with one additional filariasis case.
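The over/underdispersion diagnosis that motivates the generalized Poisson model can be sketched with a Pearson dispersion statistic. This is a generic check under an intercept-only Poisson fit, not the paper's East Java analysis; the simulated counts are illustrative.

```python
import numpy as np

def dispersion(y):
    """Pearson dispersion statistic under an intercept-only Poisson model.

    phi ~ 1 suggests equidispersion; phi > 1 signals overdispersion, in which
    case a generalized Poisson (or negative binomial) model is more appropriate.
    """
    y = np.asarray(y, dtype=float)
    mu = y.mean()                       # fitted mean under intercept-only model
    pearson_x2 = ((y - mu) ** 2 / mu).sum()
    return pearson_x2 / (len(y) - 1)    # scale by residual degrees of freedom

rng = np.random.default_rng(1)
equi = rng.poisson(3.0, 1000)                  # Poisson counts: phi ≈ 1
over = rng.negative_binomial(2, 0.4, 1000)     # overdispersed counts: phi > 1
print(dispersion(equi), dispersion(over))
```

With covariates, the same statistic is computed from the fitted means of the Poisson regression rather than the overall mean.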

  20. Detection of melamine in milk powders using Near-Infrared Hyperspectral imaging combined with regression coefficient of partial least square regression model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Illegal use of nitrogen-rich melamine (C3H6N6) to boost perceived protein content of food products such as milk, infant formula, frozen yogurt, pet food, biscuits, and coffee drinks has caused serious food safety problems. Conventional methods to detect melamine in foods, such as Enzyme-linked immun...

  1. Regression of altitude-produced cardiac hypertrophy.

    NASA Technical Reports Server (NTRS)

    Sizemore, D. A.; Mcintyre, T. W.; Van Liere, E. J.; Wilson , M. F.

    1973-01-01

    The rate of regression of cardiac hypertrophy with time has been determined in adult male albino rats. The hypertrophy was induced by intermittent exposure to simulated high altitude. The percentage hypertrophy was much greater (46%) in the right ventricle than in the left (16%). The regression could be adequately fitted to a single exponential function with a half-time of 6.73 plus or minus 0.71 days (90% CI). There was no significant difference in the rates of regression for the two ventricles.
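The single-exponential regression described above can be reproduced with a standard nonlinear least-squares fit. The data below are simulated to match the reported right-ventricle figures (46% hypertrophy, 6.73-day half-time); they are not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def single_exponential(t, a, k):
    # Percentage hypertrophy remaining at time t (days)
    return a * np.exp(-k * t)

# Hypothetical data: 46% initial hypertrophy decaying with a 6.73-day half-time
rng = np.random.default_rng(0)
t = np.linspace(0, 30, 16)
true_k = np.log(2) / 6.73
y = single_exponential(t, 46.0, true_k) + rng.normal(0, 0.5, t.size)

(a_hat, k_hat), _ = curve_fit(single_exponential, t, y, p0=(40.0, 0.1))
half_time = np.log(2) / k_hat
print(half_time)   # close to the simulated 6.73 days
```

The half-time is recovered from the fitted rate constant as ln 2 / k, which is how the 6.73 ± 0.71 day figure is obtained from the regression.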

  2. Osmotic coefficients of aqueous weak electrolyte solutions: influence of dissociation on data reduction and modeling.

    PubMed

    Reschke, Thomas; Naeem, Shahbaz; Sadowski, Gabriele

    2012-06-28

    The experimental determination and modeling of osmotic coefficients in electrolyte solutions requires knowledge of the stoichiometric coefficient ν(i). In contrast to strong electrolytes, weak electrolytes exhibit a concentration-dependent stoichiometric coefficient, which directly influences the thermodynamic properties (e.g., osmotic coefficients). Neglecting this concentration dependence leads to erroneous osmotic coefficients for solutions of weak electrolytes. In this work, the concentration dependence of the stoichiometric coefficients and the influence of concentration on the osmotic coefficient data were accounted for by considering the dissociation equilibria of aqueous sulfuric and phosphoric acid systems. The dissociation equilibrium was combined with the ePC-SAFT equation of state to model osmotic coefficients and densities of electrolyte solutions. Without the introduction of any additional adjustable parameters, the average relative deviation between the modeled and the experimental data decreases from 12.82% to 4.28% (osmotic coefficients) and from 2.59% to 0.89% (densities) for 12 phosphoric and sulfuric systems compared to calculations that do not account for speciation. For easy access to the concentration-dependent stoichiometric coefficient, estimation schemes were formulated for mono-, di-, and triprotic acids and their salts.

  3. Adjustment of directly measured adipose tissue volume in infants

    PubMed Central

    Gale, C; Santhakumaran, S; Wells, J C K; Modi, N

    2014-01-01

Background: Direct measurement of adipose tissue (AT) using magnetic resonance imaging is increasingly used to characterise infant body composition. Optimal techniques for adjusting direct measures of infant AT remain to be determined. Objectives: To explore the relationships between body size and direct measures of total and regional AT, the relationship between AT depots representing the metabolic load of adiposity and to determine optimal methods of adjusting adiposity in early life. Design: Analysis of regional AT volume (ATV) measured using magnetic resonance imaging in longitudinal and cross-sectional studies. Subjects: Healthy term infants; 244 in the first month (1–31 days), 72 in early infancy (42–91 days). Methods: The statistical validity of commonly used indices adjusting adiposity for body size was examined. Valid indices, defined as mathematical independence of the index from its denominator, to adjust ATV for body size and metabolic load of adiposity were determined using log-log regression analysis. Results: Indices commonly used to adjust ATV are significantly correlated with body size. Most regional AT depots are optimally adjusted using the index ATV/(height)3 in the first month and ATV/(height)2 in early infancy. Using these indices, height accounts for <2% of the variation in the index for almost all AT depots. Internal abdominal (IA) ATV was optimally adjusted for subcutaneous abdominal (SCA) ATV by calculating IA/SCA0.6. Conclusions: Statistically optimal indices for adjusting directly measured ATV for body size are ATV/height3 in the neonatal period and ATV/height2 in early infancy. The ratio IA/SCA ATV remains significantly correlated with SCA in both the neonatal period and early infancy; the index IA/SCA0.6 is statistically optimal at both of these ages. PMID:24662695
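The log-log regression approach used to derive indices like ATV/height³ can be sketched as follows. The data are simulated (ATV scaling roughly with height cubed); the scaling constant and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical infants: ATV scales roughly with height^3, with lognormal noise
height = rng.uniform(45, 60, 200)                       # cm
atv = 0.002 * height ** 3 * np.exp(rng.normal(0, 0.15, 200))

# Log-log regression: the slope estimates the power b in ATV ∝ height^b
b, _ = np.polyfit(np.log(height), np.log(atv), 1)

# The index ATV / height^b should then be (nearly) uncorrelated with height,
# i.e. mathematically independent of its denominator
index = atv / height ** b
r = np.corrcoef(height, index)[0, 1]
print(b, r)
```

Choosing the exponent this way is what makes the resulting index carry adiposity information rather than residual body-size information.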

  4. Does an active adjustment of aerodynamic drag make sense?

    NASA Astrophysics Data System (ADS)

    Maciejewski, Marek

    2016-09-01

The article concerns evaluation of the possible impact of the gap between the tractor and semitrailer on the aerodynamic drag coefficient. The aim here is not to adjust this distance depending on the geometrical shape of the tractor and trailer, but depending solely on the speed of the articulated vehicle. All tests take the form of numerical simulations. The method of simulation is briefly explained in the article. It considers various issues such as the range and objects of tests as well as the test conditions. The initial (pre-adaptive) and final (after adaptation process) computational meshes have been presented as illustrations. Some of the results have been presented in the form of run charts showing how the aerodynamic drag coefficient changes over time, for different geometric configurations defined by the clearance gap between the tractor and semitrailer. The basis for detailed analysis and conclusions was the time-averaged aerodynamic drag coefficients as a function of the clearance gap.

  5. Note on Two Generalizations of Coefficient Alpha.

    ERIC Educational Resources Information Center

    Raju, Nambury S.

    1979-01-01

    An important relationship is given for two generalizations of coefficient alpha: (1) Rajaratnam, Cronbach, and Gleser's generalizability formula for stratified-parallel tests, and (2) Raju's coefficient beta. (Author/CTM)

  6. Regression models of monthly water-level change in and near the Closed Basin Division of the San Luis Valley, south-central Colorado

    USGS Publications Warehouse

    Watts, Kenneth R.

    1995-01-01

    regression models. These models also include an autoregressive term to account for serial correlation in the residuals. The adjusted coefficient of determination (Ra2) for the 46 regression models range from 0.08 to 0.89, and the standard errors of estimate range from 0.034 to 2.483 feet. The regression models of monthly water- level change can be used to evaluate whether post-1985 monthly water-level change values at the selected observation wells are within the 95-percent confidence limits of predicted monthly water-level change.
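The adjusted coefficient of determination (Ra²) reported for these regression models penalizes R² for model size. A minimal sketch of the standard formula, with illustrative numbers:

```python
def adjusted_r2(r2, n, p):
    """Adjusted coefficient of determination (Ra^2) for n observations
    and p explanatory variables (excluding the intercept)."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# A modest R^2 shrinks further once model size is penalized
print(adjusted_r2(r2=0.30, n=50, p=5))   # ≈ 0.22
```

Low Ra² values like the 0.08 reported for some wells indicate that the monthly water-level-change models explain little of the variation there, even before the penalty.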

  7. REGRESSION ESTIMATES FOR TOPOLOGICAL-HYDROGRAPH INPUT.

    USGS Publications Warehouse

    Karlinger, Michael R.; Guertin, D. Phillip; Troutman, Brent M.

    1988-01-01

    Physiographic, hydrologic, and rainfall data from 18 small drainage basins in semiarid, central Wyoming were used to calibrate topological, unit-hydrograph models for celerity, the average rate of travel of a flood wave through the basin. The data set consisted of basin characteristics and hydrologic data for the 18 basins and rainfall data for 68 storms. Calibrated values of celerity and peak discharges subsequently were regressed as a function of the basin characteristics and excess rainfall volume. Predicted values obtained in this way can be used as input for estimating hydrographs in ungaged basins. The regression models included ordinary least-squares and seemingly unrelated regression. This latter regression model jointly estimated the celerity and peak discharge.

  8. TWSVR: Regression via Twin Support Vector Machine.

    PubMed

    Khemchandani, Reshma; Goyal, Keshav; Chandra, Suresh

    2016-02-01

Taking motivation from Twin Support Vector Machine (TWSVM) formulation, Peng (2010) attempted to propose Twin Support Vector Regression (TSVR) where the regressor is obtained via solving a pair of quadratic programming problems (QPPs). In this paper we argue that TSVR formulation is not in the true spirit of TWSVM. Further, taking motivation from Bi and Bennett (2003), we propose an alternative approach to find a formulation for Twin Support Vector Regression (TWSVR) which is in the true spirit of TWSVM. We show that our proposed TWSVR can be derived from TWSVM for an appropriately constructed classification problem. To check the efficacy of our proposed TWSVR we compare its performance with TSVR and classical Support Vector Regression (SVR) on various regression datasets.

  9. TWSVR: Regression via Twin Support Vector Machine.

    PubMed

    Khemchandani, Reshma; Goyal, Keshav; Chandra, Suresh

    2016-02-01

Taking motivation from Twin Support Vector Machine (TWSVM) formulation, Peng (2010) attempted to propose Twin Support Vector Regression (TSVR) where the regressor is obtained via solving a pair of quadratic programming problems (QPPs). In this paper we argue that TSVR formulation is not in the true spirit of TWSVM. Further, taking motivation from Bi and Bennett (2003), we propose an alternative approach to find a formulation for Twin Support Vector Regression (TWSVR) which is in the true spirit of TWSVM. We show that our proposed TWSVR can be derived from TWSVM for an appropriately constructed classification problem. To check the efficacy of our proposed TWSVR we compare its performance with TSVR and classical Support Vector Regression (SVR) on various regression datasets. PMID:26624223

  10. A comparison of confounding adjustment methods with an application to early life determinants of childhood obesity

    PubMed Central

    Kleinman, Ken; Gillman, Matthew W.

    2014-01-01

We implemented 6 confounding adjustment methods: 1) covariate-adjusted regression, 2) propensity score (PS) regression, 3) PS stratification, 4) PS matching with two calipers, 5) inverse-probability-weighting, and 6) doubly-robust estimation to examine the associations between the BMI z-score at 3 years and two separate dichotomous exposure measures: exclusive breastfeeding versus formula only (N = 437) and cesarean section versus vaginal delivery (N = 1236). Data were drawn from a prospective pre-birth cohort study, Project Viva. The goal is to demonstrate the necessity of, usefulness of, and approaches for multiple confounding adjustment methods when analyzing observational data. Unadjusted (univariate) and covariate-adjusted linear regression associations of breastfeeding with BMI z-score were −0.33 (95% CI −0.53, −0.13) and −0.24 (−0.46, −0.02), respectively. The other approaches resulted in smaller N (204 to 276) because of poor overlap of covariates, but CIs were of similar width except for inverse-probability-weighting (75% wider) and PS matching with a wider caliper (76% wider). Point estimates ranged widely, however, from −0.01 to −0.38. For cesarean section, because of better covariate overlap, the covariate-adjusted regression estimate (0.20) was remarkably robust to all adjustment methods, and the widths of the 95% CIs differed less than in the breastfeeding example. Choice of covariate adjustment method can matter. Lack of overlap in covariate structure between exposed and unexposed participants in observational studies can lead to erroneous covariate-adjusted estimates and confidence intervals. We recommend inspecting covariate overlap and using multiple confounding adjustment methods. Similar results bring reassurance. Contradictory results suggest issues with either the data or the analytic method. PMID:25171142
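Two of the methods compared above, naive comparison versus inverse-probability-weighting with an estimated propensity score, can be sketched on simulated data. The confounding structure and effect size below are invented for illustration, not Project Viva data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 20000

# Confounder x drives both treatment assignment and outcome
x = rng.normal(size=(n, 1))
p_treat = 1 / (1 + np.exp(-x[:, 0]))
t = (rng.uniform(size=n) < p_treat).astype(int)
y = 2.0 * t + 3.0 * x[:, 0] + rng.normal(size=n)   # true treatment effect = 2.0

# Naive group comparison is confounded by x
naive = y[t == 1].mean() - y[t == 0].mean()

# Propensity scores from a logistic model, then inverse-probability weights
ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
w = np.where(t == 1, 1 / ps, 1 / (1 - ps))
ipw = (np.average(y[t == 1], weights=w[t == 1])
       - np.average(y[t == 0], weights=w[t == 0]))
print(naive, ipw)   # naive is biased upward; ipw is close to 2.0
```

Poor covariate overlap shows up here as extreme propensity scores and hence extreme weights, which is exactly what widens the IPW confidence intervals in the study.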

  11. Soccer Ball Lift Coefficients via Trajectory Analysis

    ERIC Educational Resources Information Center

    Goff, John Eric; Carre, Matt J.

    2010-01-01

    We performed experiments in which a soccer ball was launched from a machine while two high-speed cameras recorded portions of the trajectory. Using the trajectory data and published drag coefficients, we extracted lift coefficients for a soccer ball. We determined lift coefficients for a wide range of spin parameters, including several spin…

  12. M-Bonomial Coefficients and Their Identities

    ERIC Educational Resources Information Center

    Asiru, Muniru A.

    2010-01-01

    In this note, we introduce M-bonomial coefficients or (M-bonacci binomial coefficients). These are similar to the binomial and the Fibonomial (or Fibonacci-binomial) coefficients and can be displayed in a triangle similar to Pascal's triangle from which some identities become obvious.

  13. Note on Methodology: The Coefficient of Variation.

    ERIC Educational Resources Information Center

    Sheret, Michael

    1984-01-01

    Addresses applications of the coefficient of variation as a measure of educational inequality or as a means of measuring changes of inequality status. Suggests the Gini coefficient has many advantages over the coefficient of variation since it can be used with the Lorenz curve (Lorenz provides detail Gini omits). (BRR)

  14. Is the G Index a Correlation Coefficient?

    ERIC Educational Resources Information Center

    Vegelius, Jan

    1980-01-01

    One argument against the G index is that, unlike phi, it is not a correlation coefficient; yet, G conforms to the Kendall and E-coefficient definitions. The G index is also equal to the Pearson product moment correlation coefficient obtained from double scoring. (Author/CP)

  15. 7 CFR 251.7 - Formula adjustments.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 4 2010-01-01 2010-01-01 false Formula adjustments. 251.7 Section 251.7 Agriculture... GENERAL REGULATIONS AND POLICIES-FOOD DISTRIBUTION THE EMERGENCY FOOD ASSISTANCE PROGRAM § 251.7 Formula adjustments. Formula adjustments. (a) Commodity adjustments. The Department will make annual adjustments...

  16. 12 CFR 1209.80 - Inflation adjustments.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 10 2014-01-01 2014-01-01 false Inflation adjustments. 1209.80 Section 1209.80... PROCEDURE Civil Money Penalty Inflation Adjustments § 1209.80 Inflation adjustments. The maximum amount of... thereafter adjusted in accordance with the Inflation Adjustment Act, on a recurring four-year cycle, is...

  17. 12 CFR 1209.80 - Inflation adjustments.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 9 2012-01-01 2012-01-01 false Inflation adjustments. 1209.80 Section 1209.80... PROCEDURE Civil Money Penalty Inflation Adjustments § 1209.80 Inflation adjustments. The maximum amount of... thereafter adjusted in accordance with the Inflation Adjustment Act, on a recurring four-year cycle, is...

  18. 12 CFR 1209.80 - Inflation adjustments.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 9 2013-01-01 2013-01-01 false Inflation adjustments. 1209.80 Section 1209.80... PROCEDURE Civil Money Penalty Inflation Adjustments § 1209.80 Inflation adjustments. The maximum amount of... thereafter adjusted in accordance with the Inflation Adjustment Act, on a recurring four-year cycle, is...

  19. There is No Quantum Regression Theorem

    SciTech Connect

Ford, G.W.; O'Connell, R.F.

    1996-07-01

The Onsager regression hypothesis states that the regression of fluctuations is governed by macroscopic equations describing the approach to equilibrium. It is here asserted that this hypothesis fails in the quantum case. This is shown first by explicit calculation for the example of quantum Brownian motion of an oscillator and then in general from the fluctuation-dissipation theorem. It is asserted that the correct generalization of the Onsager hypothesis is the fluctuation-dissipation theorem. © 1996 The American Physical Society.

  20. Modeling Heterogeneity in Relationships between Initial Status and Rates of Change: Treating Latent Variable Regression Coefficients as Random Coefficients in a Three-Level Hierarchical Model

    ERIC Educational Resources Information Center

    Choi, Kilchan; Seltzer, Michael

    2010-01-01

    In studies of change in education and numerous other fields, interest often centers on how differences in the status of individuals at the start of a period of substantive interest relate to differences in subsequent change. In this article, the authors present a fully Bayesian approach to estimating three-level Hierarchical Models in which latent…

  1. Synthesizing regression results: a factored likelihood method.

    PubMed

    Wu, Meng-Jia; Becker, Betsy Jane

    2013-06-01

    Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported in the regression studies to calculate synthesized standardized slopes. It uses available correlations to estimate missing ones through a series of regressions, allowing us to synthesize correlations among variables as if each included study contained all the same variables. Great accuracy and stability of this method under fixed-effects models were found through Monte Carlo simulation. An example was provided to demonstrate the steps for calculating the synthesized slopes through sweep operators. By rearranging the predictors in the included regression models or omitting a relatively small number of correlations from those models, we can easily apply the factored likelihood method to many situations involving synthesis of linear models. Limitations and other possible methods for synthesizing more complicated models are discussed. Copyright © 2012 John Wiley & Sons, Ltd. PMID:26053653
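Once a full correlation matrix has been synthesized (the role of the factored likelihood and sweep operators above), standardized slopes follow from the usual normal equations. A minimal sketch with illustrative correlations:

```python
import numpy as np

def standardized_slopes(r_xx, r_xy):
    """Standardized regression slopes from a (synthesized) correlation matrix:
    beta = R_xx^{-1} r_xy."""
    return np.linalg.solve(np.asarray(r_xx, float), np.asarray(r_xy, float))

# Correlations among two predictors, and of each predictor with the outcome,
# as they might look after synthesis across studies (values are illustrative)
r_xx = [[1.0, 0.5],
        [0.5, 1.0]]
r_xy = [0.6, 0.4]
print(standardized_slopes(r_xx, r_xy))   # ≈ [0.533, 0.133]
```

The factored likelihood method's contribution is upstream of this step: filling in the correlations that no single study reports so that this system can be solved at all.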

  2. Post-processing through linear regression

    NASA Astrophysics Data System (ADS)

    van Schaeybroeck, B.; Vannitsem, S.

    2011-03-01

Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity. The regression schemes under consideration include the ordinary least-square (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-square method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. These techniques are applied in the context of the Lorenz 63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors plays an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR) which yield the correct variability and the largest correlation between ensemble error and spread, should be preferred.
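The simplest scheme in the comparison, OLS post-processing of a single predictor, can be sketched directly. The forecast bias and amplitude error below are synthetic stand-ins, not the Lorenz 63 experiments.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic forecast/observation pairs with a bias and a wrong amplitude
obs = rng.normal(10, 3, 500)
fcst = 0.7 * obs + 2.0 + rng.normal(0, 1, 500)

# OLS post-processing: regress observations on forecasts, then correct
b, a = np.polyfit(fcst, obs, 1)
corrected = a + b * fcst

rmse = lambda e: np.sqrt(np.mean(e ** 2))
print(rmse(fcst - obs), rmse(corrected - obs))   # corrected RMSE is smaller
```

Because the identity correction (a = 0, b = 1) is one candidate in the least-squares fit, the corrected in-sample RMSE can never exceed the raw RMSE; the paper's more elaborate schemes (TDTR, EVMOS) address what OLS gets wrong, namely the variability of the corrected forecast.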

  3. Recovery of spectral data using weighted canonical correlation regression

    NASA Astrophysics Data System (ADS)

    Eslahi, Niloofar; Amirshahi, Seyed Hossein; Agahian, Farnaz

    2009-05-01

    The weighted canonical correlation regression technique is employed for reconstruction of reflectance spectra of surface colors from the related XYZ tristimulus values of samples. Flexible input data based on applying certain weights to reflectance and colorimetric values of Munsell color chips has been implemented for each particular sample which belongs to Munsell or GretagMacbeth Colorchecker DC color samples. In fact, the colorimetric and spectrophotometric data of Munsell chips are selected as fundamental bases and the color difference values between the target and samples in Munsell dataset are chosen as a criterion for determination of weighting factors. The performance of the suggested method is evaluated in spectral reflectance reconstruction. The results show considerable improvements in terms of root mean square error (RMS) and goodness-of-fit coefficient (GFC) between the actual and reconstructed reflectance curves as well as CIELAB color difference values under illuminants A and TL84 for CIE1964 standard observer.

  4. Regression of white spot enamel lesions. A new optical method for quantitative longitudinal evaluation in vivo.

    PubMed

    Ogaard, B; Ten Bosch, J J

    1994-09-01

    This article describes a new nondestructive optical method for evaluating lesion regression in vivo. White spot caries lesions were induced with orthodontic bands in two vital premolars of seven patients. The teeth were banded for 4 weeks with special orthodontic bands that allowed plaque accumulation on the buccal surface. The teeth were left in the dentition for 2 or 4 weeks after debanding. Regular oral hygiene with a nonfluoridated toothpaste was applied during the entire experimental period. The optical scattering coefficient of the banded area was measured before banding and at 1-week intervals thereafter. The scattering coefficient returned to the sound value in an exponential manner, with a half-value time of 1.1 weeks for left teeth and 1.8 weeks for right teeth, a significant difference (p = 0.035). At the start of the regression period, the scattering coefficient of left teeth lesions was 2.5 times as high as that of right teeth lesions, a difference that did not reach significance (p = 0.09). It is concluded that regression of initial lesions in the presence of saliva is a relatively rapid process. The new optical method may be of clinical importance for quantitative evaluation of enamel lesion regression developed during fixed appliance therapy.

  5. Poisson Regression Analysis of Illness and Injury Surveillance Data

    SciTech Connect

    Frome E.L., Watkins J.P., Ellis E.D.

    2012-12-12

    The Department of Energy (DOE) uses illness and injury surveillance to monitor morbidity and assess the overall health of the work force. Data collected from each participating site include health events and a roster file with demographic information. The source data files are maintained in a relational data base, and are used to obtain stratified tables of health event counts and person time at risk that serve as the starting point for Poisson regression analysis. The explanatory variables that define these tables are age, gender, occupational group, and time. Typical response variables of interest are the number of absences due to illness or injury, i.e., the response variable is a count. Poisson regression methods are used to describe the effect of the explanatory variables on the health event rates using a log-linear main effects model. Results of fitting the main effects model are summarized in a tabular and graphical form and interpretation of model parameters is provided. An analysis of deviance table is used to evaluate the importance of each of the explanatory variables on the event rate of interest and to determine if interaction terms should be considered in the analysis. Although Poisson regression methods are widely used in the analysis of count data, there are situations in which over-dispersion occurs. This could be due to lack-of-fit of the regression model, extra-Poisson variation, or both. A score test statistic and regression diagnostics are used to identify over-dispersion. A quasi-likelihood method of moments procedure is used to evaluate and adjust for extra-Poisson variation when necessary. Two examples are presented using respiratory disease absence rates at two DOE sites to illustrate the methods and interpretation of the results. In the first example the Poisson main effects model is adequate. In the second example the score test indicates considerable over-dispersion and a more detailed analysis attributes the over-dispersion to extra
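
    The log-linear rate model described above can be sketched with a small iteratively reweighted least squares (IRLS) fit, with person-time at risk entering as an offset. The data and variable names below are hypothetical, not the DOE surveillance data:

```python
import numpy as np

def poisson_irls(X, y, offset, n_iter=25):
    """Fit a log-linear Poisson rate model by IRLS:
    log E[y] = X @ beta + offset, with offset = log(person-time at risk)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta + offset
        mu = np.exp(eta)
        w = mu                                 # Poisson working weights
        z = (eta - offset) + (y - mu) / mu     # working response
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
    return beta

# Hypothetical surveillance data: absence counts by age group with person-time.
rng = np.random.default_rng(1)
n = 2000
age = rng.integers(0, 2, n)            # 0 = younger, 1 = older workers
pt = rng.uniform(0.5, 2.0, n)          # person-years at risk
rate = np.exp(-1.0 + 0.7 * age)        # true absence rate per person-year
y = rng.poisson(rate * pt)

X = np.column_stack([np.ones(n), age])
beta = poisson_irls(X, y, np.log(pt))
print(beta)  # should land near the true values (-1.0, 0.7)
```

    The fitted coefficient on age is the log rate ratio between the two groups; the over-dispersion checks mentioned in the abstract would be applied after a fit like this one.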

  6. Comparison of sire breed solutions for growth traits adjusted by mean expected progeny differences to a 1993 base.

    PubMed

    Barkhouse, K L; Van Vleck, L D; Cundiff, L V; Buchanan, D S; Marshall, D M

    1998-09-01

    Records on growth traits were obtained from five Midwestern agricultural experiment stations as part of a beef cattle crossbreeding project (NC-196). Records on birth weight (BWT, n =3,490), weaning weight (WWT, n = 3,237), and yearling weight (YWT, n = 1,372) were analyzed within locations and pooled across locations to obtain estimates of breed of sire differences. Solutions for breed of sire differences were adjusted to the common base year of 1993. Then, factors to use with within-breed expected progeny differences (EPD) to obtain across-breed EPD were calculated. These factors were compared with factors obtained from similar analyses of records from the U. S. Meat Animal Research Center (MARC). Progeny of Brahman sires mated to Bos taurus cows were heaviest at birth and among the lightest at weaning. Simmental and Gelbvieh sires produced the heaviest progeny at weaning. Estimates of heritability pooled across locations were .34, .19, and .07 for BWT, WWT, and YWT, respectively. Regression coefficients of progeny performance on EPD of sire were 1.25+/-.09, .98+/-.13, and .62+/-.18 for BWT, WWT, and YWT, respectively. Rankings of breeds of sire generally did not change when adjusted for sire sampling. Rankings were generally similar to those previously reported for MARC data, except for Limousin and Charolais sires, which ranked lower for BWT and WWT at NC-196 locations than at MARC. Adjustment factors used to obtain across-breed EPD were largest for Brahman for BWT and for Gelbvieh for WWT. The data for YWT allow only comparison of Angus with Simmental and of Gelbvieh with Limousin. PMID:9781484

  7. Time course for tail regression during metamorphosis of the ascidian Ciona intestinalis.

    PubMed

    Matsunobu, Shohei; Sasakura, Yasunori

    2015-09-01

    In most ascidians, the tadpole-like swimming larvae dramatically change their body-plans during metamorphosis and develop into sessile adults. The mechanisms of ascidian metamorphosis have been researched and debated for many years. Until now information on the detailed time course of the initiation and completion of each metamorphic event has not been described. One dramatic and important event in ascidian metamorphosis is tail regression, in which ascidian larvae lose their tails to adjust themselves to sessile life. In the present study, we measured the time associated with tail regression in the ascidian Ciona intestinalis. Larvae are thought to acquire competency for each metamorphic event in certain developmental periods. We show that the timing with which the competence for tail regression is acquired is determined by the time since hatching, and this timing is not affected by the timing of post-hatching events such as adhesion. Because larvae need to adhere to substrates with their papillae to induce tail regression, we measured the duration for which larvae need to remain adhered in order to initiate tail regression and the time needed for the tail to regress. Larvae acquire the ability to adhere to substrates before they acquire tail regression competence. We found that when larvae adhered before they acquired tail regression competence, they were able to remember the experience of adhesion until they acquired the ability to undergo tail regression. The time course of the events associated with tail regression provides a valuable reference, upon which the cellular and molecular mechanisms of ascidian metamorphosis can be elucidated.

  8. Reconstruction of missing daily streamflow data using dynamic regression models

    NASA Astrophysics Data System (ADS)

    Tencaliec, Patricia; Favre, Anne-Catherine; Prieur, Clémentine; Mathevet, Thibault

    2015-12-01

    River discharge is one of the most important quantities in hydrology. It provides fundamental records for water resources management and climate change monitoring. Even very short gaps in this information can lead to substantially different analysis outputs. Therefore, reconstructing missing values in incomplete data sets is an important step for the performance of environmental models, engineering, and research applications, and it presents a great challenge. The objective of this paper is to introduce an effective technique for reconstructing missing daily discharge data when one has access only to daily streamflow data. The proposed procedure uses a combination of regression and autoregressive integrated moving average (ARIMA) models, called a dynamic regression model. This model uses the linear relationship with neighboring, correlated stations and then adjusts the residual term by fitting an ARIMA structure. Application of the model to eight daily streamflow series from the Durance River watershed showed that the model yields reliable estimates for the missing data in the time series. Simulation studies were also conducted to evaluate the performance of the procedure.
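
    A minimal sketch of the two-step idea, with an AR(1) residual term standing in for the full ARIMA structure and entirely synthetic series:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two hypothetical correlated daily-flow series; 'target' has a gap to fill.
n = 400
neighbor = 10 + np.cumsum(rng.normal(size=n)) * 0.1
eps = np.zeros(n)                       # autocorrelated residual term
for t in range(1, n):
    eps[t] = 0.8 * eps[t - 1] + rng.normal(scale=0.2)
target = 2.0 + 1.5 * neighbor + eps

gap = slice(300, 320)                   # pretend these days are missing
observed = np.ones(n, dtype=bool)
observed[gap] = False

# Step 1: regress the target on the neighboring station (the linear part).
b1, b0 = np.polyfit(neighbor[observed], target[observed], 1)
resid = target - (b0 + b1 * neighbor)

# Step 2: fit an AR(1) to the residuals (a stand-in for the ARIMA fit).
r = resid[observed]
phi = np.dot(r[1:], r[:-1]) / np.dot(r[:-1], r[:-1])

# Step 3: reconstruct the gap, propagating the residual forecast forward.
filled = target.copy()
e = resid[299]                          # last observed residual before the gap
for t in range(300, 320):
    e = phi * e
    filled[t] = b0 + b1 * neighbor[t] + e

rmse = float(np.sqrt(np.mean((filled[gap] - target[gap]) ** 2)))
print(rmse)
```

    The residual forecast decays toward zero across the gap, so the reconstruction relaxes from the last observation toward the pure regression estimate.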

  9. MCCB warm adjustment testing concept

    NASA Astrophysics Data System (ADS)

    Erdei, Z.; Horgos, M.; Grib, A.; Preradović, D. M.; Rodic, V.

    2016-08-01

    This paper presents an experimental investigation into the behavior of the thermal protection device of an MCCB (Molded Case Circuit Breaker). One of the main functions of the circuit breaker is to protect the circuit in which it is mounted against possible overloads. The tripping mechanism for the overload protection is based on a bimetal movement during a specific time frame. This movement needs to be controlled, and as a solution for controlling it we chose the warm adjustment concept. This concept is meant to improve process capability control and the final output. The warm adjustment device design creates a unique adjustment of the bimetal position for each individual breaker, determined while the test current flows through a phase that needs to trip within a certain amount of time. This time is predetermined by calculation for all standard amperage ratings and complies with the requirements of the IEC 60947 standard.

  10. Social Support, Self-Esteem, and Stress as Predictors of Adjustment to University among First-Year Undergraduates

    ERIC Educational Resources Information Center

    Friedlander, Laura J.; Reid, Graham J.; Shupak, Naomi; Cribbie, Robert

    2007-01-01

    The current study examined the joint effects of stress, social support, and self-esteem on adjustment to university. First-year undergraduate students (N = 115) were assessed during the first semester and again 10 weeks later, during the second semester of the academic year. Multiple regressions predicting adjustment to university from perceived…

  11. A comparison of several regression models for analysing cost of CABG surgery.

    PubMed

    Austin, Peter C; Ghali, William A; Tu, Jack V

    2003-09-15

    Investigators in clinical research are often interested in determining the association between patient characteristics and cost of medical or surgical treatment. However, there is no uniformly agreed upon regression model with which to analyse cost data. The objective of the current study was to compare the performance of linear regression, linear regression with log-transformed cost, generalized linear models with Poisson, negative binomial and gamma distributions, median regression, and proportional hazards models for analysing costs in a cohort of patients undergoing CABG surgery. The study was performed on data comprising 1959 patients who underwent CABG surgery in Calgary, Alberta, between June 1994 and March 1998. Ten of 21 patient characteristics were significantly associated with cost of surgery in all seven models. Eight variables were not significantly associated with cost of surgery in all seven models. Using mean squared prediction error as a loss function, proportional hazards regression and the three generalized linear models were best able to predict cost in independent validation data. Using mean absolute error, linear regression with log-transformed cost, proportional hazards regression, and median regression to predict median cost, were best able to predict cost in independent validation data. Since the models demonstrated good consistency in identifying factors associated with increased cost of CABG surgery, any of the seven models can be used for identifying factors associated with increased cost of surgery. However, the magnitude of, and the interpretation of, the coefficients vary across models. Researchers are encouraged to consider a variety of candidate models, including those better known in the econometrics literature, rather than begin data analysis with one regression model selected a priori. 
The final choice of regression model should be made after a careful assessment of how best to assess predictive ability and should be tailored to
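
    The role of the loss function can be illustrated on simulated "costs". The sketch below compares two of the candidate models, OLS on the raw scale and OLS on the log scale back-transformed with Duan's smearing estimator (an assumed retransformation choice, not specified in the abstract), under both loss functions mentioned:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical right-skewed cost data generated from a log-linear model.
n = 1000
x = rng.normal(size=n)
cost = np.exp(1.0 + 0.5 * x + rng.normal(scale=0.6, size=n))

train, test = slice(0, 700), slice(700, None)

# Candidate 1: OLS on the raw cost scale.
b1, b0 = np.polyfit(x[train], cost[train], 1)
pred_raw = b0 + b1 * x[test]

# Candidate 2: OLS on log(cost), back-transformed with Duan's smearing factor.
c1, c0 = np.polyfit(x[train], np.log(cost[train]), 1)
smear = np.mean(np.exp(np.log(cost[train]) - (c0 + c1 * x[train])))
pred_log = np.exp(c0 + c1 * x[test]) * smear

# The two loss functions can rank the candidates differently.
mspe = lambda p: float(np.mean((p - cost[test]) ** 2))
mae = lambda p: float(np.mean(np.abs(p - cost[test])))
print(mspe(pred_raw), mspe(pred_log), mae(pred_raw), mae(pred_log))
```

    One structural difference is visible immediately: the back-transformed log model can never predict a negative cost, while the raw-scale linear model can.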

  13. Assessment of Weighted Quantile Sum Regression for Modeling Chemical Mixtures and Cancer Risk

    PubMed Central

    Czarnota, Jenna; Gennings, Chris; Wheeler, David C

    2015-01-01

    In evaluation of cancer risk related to environmental chemical exposures, the effect of many chemicals on disease is ultimately of interest. However, because of potentially strong correlations among chemicals that occur together, traditional regression methods suffer from collinearity effects, including regression coefficient sign reversal and variance inflation. In addition, penalized regression methods designed to remediate collinearity may have limitations in selecting the truly bad actors among many correlated components. The recently proposed method of weighted quantile sum (WQS) regression attempts to overcome these problems by estimating a body burden index, which identifies important chemicals in a mixture of correlated environmental chemicals. Our focus was on assessing through simulation studies the accuracy of WQS regression in detecting subsets of chemicals associated with health outcomes (binary and continuous) in site-specific analyses and in non-site-specific analyses. We also evaluated the performance of the penalized regression methods of lasso, adaptive lasso, and elastic net in correctly classifying chemicals as bad actors or unrelated to the outcome. We based the simulation study on data from the National Cancer Institute Surveillance Epidemiology and End Results Program (NCI-SEER) case–control study of non-Hodgkin lymphoma (NHL) to achieve realistic exposure situations. Our results showed that WQS regression had good sensitivity and specificity across a variety of conditions considered in this study. The shrinkage methods had a tendency to incorrectly identify a large number of components, especially in the case of strong association with the outcome. PMID:26005323
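
    The core of the WQS construction, quantile-scoring each chemical and weighting the scores into a single index, can be sketched as follows. The crude random search over simplex weights stands in for the constrained maximum-likelihood estimation (with bootstrapping) used in actual WQS regression, and all data are simulated:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical mixture: 4 correlated chemicals, only the first two affect y.
n = 500
shared = rng.normal(size=(n, 1))
chems = 0.7 * shared + 0.7 * rng.normal(size=(n, 4))   # pairwise-correlated
y = 1.0 * chems[:, 0] + 0.5 * chems[:, 1] + rng.normal(size=n)

# Score each chemical into quartiles 0..3 (the "quantile" in WQS).
q = np.array([np.searchsorted(np.quantile(c, [0.25, 0.5, 0.75]), c)
              for c in chems.T]).T.astype(float)

# Crude weight estimation: random search over the simplex for the weight
# vector whose weighted quantile sum index best correlates with the outcome.
best_w, best_r = None, -np.inf
for _ in range(2000):
    w = rng.dirichlet(np.ones(4))
    r = abs(np.corrcoef(q @ w, y)[0, 1])
    if r > best_r:
        best_w, best_r = w, r

print(np.round(best_w, 2))  # weight should concentrate on the first chemicals
```

    The estimated weights identify the "bad actors": most of the weight lands on the truly associated components despite the correlation among chemicals.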

  14. Inbreeding coefficients and coalescence times.

    PubMed

    Slatkin, M

    1991-10-01

    This paper describes the relationship between probabilities of identity by descent and the distribution of coalescence times. By using the relationship between coalescence times and identity probabilities, it is possible to extend existing results for inbreeding coefficients in regular systems of mating to find the distribution of coalescence times and the mean coalescence times. It is also possible to express Sewall Wright's FST as the ratio of average coalescence times of different pairs of genes. That simplifies the analysis of models of subdivided populations because the average coalescence time can be found by computing separately the time it takes for two genes to enter a single subpopulation and time it takes for two genes in the same subpopulation to coalesce. The first time depends only on the migration matrix and the second time depends only on the total number of individuals in the population. This approach is used to find FST in the finite island model and in one- and two-dimensional stepping-stone models. It is also used to find the rate of approach of FST to its equilibrium value. These results are discussed in terms of different measures of genetic distance. It is proposed that, for the purposes of describing the amount of gene flow among local populations, the effective migration rate between pairs of local populations, M, which is the migration rate that would be estimated for those two populations if they were actually in an island model, provides a simple and useful measure of genetic similarity that can be defined for either allozyme or DNA sequence data.
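
    The ratio described in the abstract is usually written compactly (standard notation, sketched from the abstract's description):

```latex
F_{ST} = \frac{\bar{t} - \bar{t}_0}{\bar{t}}
```

    where \(\bar{t}_0\) is the average coalescence time of two genes sampled from the same subpopulation and \(\bar{t}\) the average coalescence time of two genes sampled from the population at large.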

  15. Within-herd heritability estimated with daughter-parent regression for yield and somatic cell score.

    PubMed

    Dechow, C D; Norman, H D

    2007-01-01

    Estimates of heritability within herd (h(WH)(2) ) that were generated with daughter-dam regression, daughter-sire regression, and REML were compared, and effects of adjusting lactation records for within-herd heritability on genetic evaluations were evaluated. Holstein records for milk, fat, and protein yields and somatic cell score (SCS) from the USDA national database represented herds in the US Northeast, Southeast, Midwest, and West. Four data subsets (457 to 499 herds) were randomly selected, and a large-herd subset included the 15 largest herds from the West and 10 largest herds from other regions. Subset heritabilities for yield and SCS were estimated assuming a regression model that included fixed covariates for effects of dam yield or SCS, sire predicted transmitting ability (PTA) for yield or SCS, herd-year-season of calving, and age within parity. Dam records and sire PTA were nested within herd as random covariates to generate within-herd heritability estimates that were regressed toward mean h(WH)(2) for the random subset. Heritabilities were estimated with REML using sire models (REML(SIRE)), sire-maternal grandsire models (REML(MGS)), and animal models (REML(ANIM)) for each herd individually in the large-herd subset. Phenotypic variance for each herd was estimated from herd residual variance after adjusting for effects of year-season and age within parity. Deviations from herd-year-season mean were standardized to constant genetic variance across herds, and records were weighted according to estimated error variance to accommodate h(WH)(2) when estimating breeding values. Mean h(WH)(2) tended to be higher with daughter-dam regression (0.35 for milk yield) than with daughter-sire regression (0.24 for milk yield). Heritability estimates varied widely across herds (0.04 to 0.67 for milk yield estimated with daughter-dam regression), and h(WH)(2) deviated from subset means more for large herds than for small herds. Correlation with REML(ANIM) h(WH)(2

  16. Testing of a Fiber Optic Wear, Erosion and Regression Sensor

    NASA Technical Reports Server (NTRS)

    Korman, Valentin; Polzin, Kurt A.

    2011-01-01

    The nature of the physical processes and harsh environments associated with erosion and wear in propulsion environments makes their measurement and real-time rate quantification difficult. A fiber optic sensor capable of determining the wear (regression, erosion, ablation) associated with these environments has been developed and tested in a number of different applications to validate the technique. The sensor consists of two fiber optics that have differing attenuation coefficients and transmit light to detectors. The ratio of the two measured intensities can be correlated to the lengths of the fiber optic lines, and if the fibers and the host parent material in which they are embedded wear at the same rate the remaining length of fiber provides a real-time measure of the wear process. Testing in several disparate situations has been performed, with the data exhibiting excellent qualitative agreement with the theoretical description of the process and when a separate calibrated regression measurement is available good quantitative agreement is obtained as well. The light collected by the fibers can also be used to optically obtain the spectra and measure the internal temperature of the wear layer.
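
    The intensity-ratio principle reduces to simple algebra: with equal launch power, each fiber attenuates as exp(-aL), so the ratio of the two detector readings depends only on the remaining embedded length. A sketch with hypothetical attenuation coefficients:

```python
import numpy as np

# The sensor pairs two fibers with different attenuation coefficients a1, a2.
# With equal launch power I0, detector i reads I_i = I0 * exp(-a_i * L),
# so the ratio I1/I2 = exp((a2 - a1) * L) encodes the remaining length L.
a1, a2 = 0.8, 2.0          # attenuation coefficients, 1/m (illustrative)
I0 = 1.0

def intensity_ratio(L):
    return (I0 * np.exp(-a1 * L)) / (I0 * np.exp(-a2 * L))

def length_from_ratio(r):
    return np.log(r) / (a2 - a1)   # invert the ratio to recover L

L = 0.35
print(length_from_ratio(intensity_ratio(L)))  # recovers L = 0.35
```

    As the host material wears away and both fibers shorten together, the recovered L tracks the regression in real time, which is the measurement principle the paper validates.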

  17. THE REGRESSION MODEL OF IRAN LIBRARIES ORGANIZATIONAL CLIMATE

    PubMed Central

    Jahani, Mohammad Ali; Yaminfirooz, Mousa; Siamian, Hasan

    2015-01-01

    Background: The purpose of this study was to draw a regression model of the organizational climate of the central libraries of Iran’s universities. Methods: This study is an applied research. The statistical population consisted of 96 employees of the central libraries of Iran’s public universities, selected from the 117 universities affiliated to the Ministry of Health by stratified sampling (510 people). A localized ClimateQual questionnaire was used as the research tool. Multivariate linear regression and path diagrams were used to predict the organizational climate pattern of the libraries. Results: Of the 9 variables affecting organizational climate, 5 variables of innovation, teamwork, customer service, psychological safety and deep diversity play a major role in predicting the organizational climate of Iran’s libraries. The results also indicate that each of these variables, with different coefficients, has the power to predict organizational climate, but the psychological safety climate score (0.94) plays a very crucial role in the prediction. The path diagram showed that the five variables of teamwork, customer service, psychological safety, deep diversity and innovation directly affect the organizational climate variable, with teamwork contributing more to this influence than any other variable. Conclusions: Among the ClimateQual indicators of organizational climate, teamwork contributes the most, so reinforcing teamwork in academic libraries can be especially effective in improving the organizational climate of this type of library. PMID:26622203

  18. Evaluating Geographically Weighted Regression Models for Environmental Chemical Risk Analysis

    PubMed Central

    Czarnota, Jenna; Wheeler, David C; Gennings, Chris

    2015-01-01

    In the evaluation of cancer risk related to environmental chemical exposures, the effect of many correlated chemicals on disease is often of interest. The relationship between correlated environmental chemicals and health effects is not always constant across a study area, as exposure levels may change spatially due to various environmental factors. Geographically weighted regression (GWR) has been proposed to model spatially varying effects. However, concerns about collinearity effects, including regression coefficient sign reversal (ie, reversal paradox), may limit the applicability of GWR for environmental chemical risk analysis. A penalized version of GWR, the geographically weighted lasso, has been proposed to remediate the collinearity effects in GWR models. Our focus in this study was on assessing through a simulation study the ability of GWR and GWL to correctly identify spatially varying chemical effects for a mixture of correlated chemicals within a study area. Our results showed that GWR suffered from the reversal paradox, while GWL overpenalized the effects for the chemical most strongly related to the outcome. PMID:25983546

  19. Robust visual tracking via speedup multiple kernel ridge regression

    NASA Astrophysics Data System (ADS)

    Qian, Cheng; Breckon, Toby P.; Li, Hui

    2015-09-01

    Most of the tracking methods attempt to build up feature spaces to represent the appearance of a target. However, limited by the complex structure of the distribution of features, the feature spaces constructed in a linear manner cannot characterize the nonlinear structure well. We propose an appearance model based on kernel ridge regression for visual tracking. Dense sampling is fulfilled around the target image patches to collect the training samples. In order to obtain a kernel space in favor of describing the target appearance, multiple kernel learning is introduced into the selection of kernels. Under the framework, instead of a single kernel, a linear combination of kernels is learned from the training samples to create a kernel space. Resorting to the circulant property of a kernel matrix, a fast interpolate iterative algorithm is developed to seek coefficients that are assigned to these kernels so as to give an optimal combination. After the regression function is learned, all candidate image patches gathered are taken as the input of the function, and the candidate with the maximal response is regarded as the object image patch. Extensive experimental results demonstrate that the proposed method outperforms other state-of-the-art tracking methods.
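
    The regression at the heart of such a tracker has a closed form: for a kernel matrix K and ridge parameter λ, the coefficients are α = (K + λI)⁻¹y. A sketch with a fixed equal-weight two-kernel combination (the paper instead *learns* the combination via multiple kernel learning and exploits circulant structure for speed; the data here are a toy 1-D stand-in for appearance features):

```python
import numpy as np

rng = np.random.default_rng(5)

def rbf(A, B, gamma):
    """Gaussian (RBF) kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Toy 1-D training samples; a real tracker would densely sample image patches.
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0])

# Fixed combination of two kernels with different bandwidths.
K = 0.5 * rbf(X, X, 0.5) + 0.5 * rbf(X, X, 2.0)

lam = 1e-2
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)  # kernel ridge solution

# Evaluate the learned regression function at a candidate point.
Xnew = np.array([[0.5]])
Knew = 0.5 * rbf(Xnew, X, 0.5) + 0.5 * rbf(Xnew, X, 2.0)
pred = (Knew @ alpha)[0]
print(pred)  # should be close to sin(0.5)
```

    In the tracking setting, every candidate patch is scored this way and the candidate with the maximal response is taken as the new object location.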

  20. Convective adjustment in baroclinic atmospheres

    NASA Technical Reports Server (NTRS)

    Emanuel, Kerry A.

    1986-01-01

    Local convection in planetary atmospheres is generally considered to result from the action of gravity on small regions of anomalous density. It is shown that in rotating baroclinic fluids the total potential energy for small-scale convection contains a centrifugal as well as a gravitational contribution. Convective adjustment in such an atmosphere results in the establishment of near-adiabatic lapse rates of temperature along suitably defined surfaces of constant angular momentum, rather than in the vertical. This leads in general to sub-adiabatic vertical lapse rates. It is shown by example that such an adjustment actually occurs in the earth's atmosphere, and the magnitude of the effect is estimated for several other planetary atmospheres.

  1. Can Quiet Standing Posture Predict Compensatory Postural Adjustment?

    PubMed Central

    Moya, Gabriel Bueno Lahóz; Siqueira, Cássio Marinho; Caffaro, Renê Rogieri; Fu, Carolina; Tanaka, Clarice

    2009-01-01

    OBJECTIVE The aim of this study was to analyze whether quiet standing posture is related to compensatory postural adjustment. INTRODUCTION The latest data in clinical practice suggests that static posture may play a significant role in musculoskeletal function, even in dynamic activities. However, no evidence exists regarding whether static posture during quiet standing is related to postural adjustment. METHODS Twenty healthy participants standing on a movable surface underwent unexpected, standardized backward and forward postural perturbations while kinematic data were acquired; ankle, knee, pelvis and trunk positions were then calculated. An initial and a final video frame representing quiet standing posture and the end of the postural perturbation were selected in such a way that postural adjustments had occurred between these frames. The positions of the body segments were calculated in these initial and final frames, together with the displacement of body segments during postural adjustments between the initial and final frames. The relationship between the positions of body segments in the initial and final frames and their displacements over this time period was analyzed using multiple regressions with a significance level of p ≤ 0.05. RESULTS We failed to identify a relationship between the position of the body segments in the initial and final frames and the associated displacement of the body segments. DISCUSSION The motion pattern during compensatory postural adjustment is not related to quiet standing posture or to the final posture of compensatory postural adjustment. This fact should be considered when treating balance disturbances and musculoskeletal abnormalities. CONCLUSION Static posture cannot predict how body segments will behave during compensatory postural adjustment. PMID:19690665

  2. An empirical evaluation of spatial regression models

    NASA Astrophysics Data System (ADS)

    Gao, Xiaolu; Asami, Yasushi; Chung, Chang-Jo F.

    2006-10-01

    Conventional statistical methods are often ineffective for evaluating spatial regression models. One reason is that spatial regression models usually have more parameters or smaller sample sizes than a simple model, so their degrees of freedom are reduced; it is therefore often impractical to evaluate them with traditional tests. Another reason, theoretically associated with statistical methods, is that statistical criteria depend crucially on assumptions such as normality, independence, and homogeneity. This may create problems because the assumptions are themselves open to testing. In view of these problems, this paper proposes an alternative empirical evaluation method. To illustrate the idea, a few hedonic regression models for a house and land price data set are evaluated, including a simple ordinary linear regression model and three spatial models. Their performance as to how well the price of the house and land can be predicted is examined. With a cross-validation technique, the price at each sample point is predicted with a model estimated from the samples excluding the one in question. Then, empirical criteria are established whereby the predicted prices are compared with the real, observed prices. The proposed method provides objective guidance for the selection of a suitable model specification for a data set. Moreover, the method can be seen as an alternative way to test the significance of the spatial relationships of concern in spatial regression models.
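
    The cross-validation idea can be sketched for an ordinary linear model on hypothetical hedonic data (leave-one-out prediction, as a stand-in for the paper's exact scheme):

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical hedonic data: price depends linearly on floor area plus noise.
n = 80
area = rng.uniform(50, 200, n)
price = 30 + 1.2 * area + rng.normal(scale=10, size=n)

# Leave-one-out cross-validation: predict each observation from a model
# fitted on all the others, then compare predictions with observed prices.
errors = []
for i in range(n):
    mask = np.arange(n) != i
    b1, b0 = np.polyfit(area[mask], price[mask], 1)
    errors.append(price[i] - (b0 + b1 * area[i]))
rmse = float(np.sqrt(np.mean(np.square(errors))))
print(rmse)  # should sit near the noise level (sd = 10)
```

    Competing specifications can then be ranked by this out-of-sample error, an empirical criterion that needs no distributional assumptions.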

  3. Response-adaptive regression for longitudinal data.

    PubMed

    Wu, Shuang; Müller, Hans-Georg

    2011-09-01

    We propose a response-adaptive model for functional linear regression, which is adapted to sparsely sampled longitudinal responses. Our method aims at predicting response trajectories and models the regression relationship by directly conditioning the sparse and irregular observations of the response on the predictor, which can be of scalar, vector, or functional type. This obliterates the need to model the response trajectories, a task that is challenging for sparse longitudinal data and was previously required for functional regression implementations for longitudinal data. The proposed approach turns out to be superior compared to previous functional regression approaches in terms of prediction error. It encompasses a variety of regression settings that are relevant for the functional modeling of longitudinal data in the life sciences. The improved prediction of response trajectories with the proposed response-adaptive approach is illustrated for a longitudinal study of Kiwi weight growth and by an analysis of the dynamic relationship between viral load and CD4 cell counts observed in AIDS clinical trials. PMID:21133880

  4. Mental chronometry with simple linear regression.

    PubMed

    Chen, J Y

    1997-10-01

    Typically, mental chronometry is performed by introducing an independent variable postulated to affect selectively some stage of a presumed multistage process. However, the effect could be a global one that spreads proportionally over all stages of the process. Currently, there is no method to test this possibility, although simple linear regression might serve the purpose. In the present study, the regression approach was tested with tasks (memory scanning and mental rotation) that, according to the dominant theories, involve a selective effect and with a task (word superiority effect) that involves a global effect. The results indicate (1) the manipulation of the size of a memory set or of angular disparity affects the intercept of the regression function that relates the times for memory scanning with different set sizes or for mental rotation with different angular disparities and (2) the manipulation of context affects the slope of the regression function that relates the times for detecting a target character under word and nonword conditions. These results ratify the regression approach as a useful method for doing mental chronometry. PMID:9347535

  5. Hierarchical regression for analyses of multiple outcomes.

    PubMed

    Richardson, David B; Hamra, Ghassan B; MacLehose, Richard F; Cole, Stephen R; Chu, Haitao

    2015-09-01

    In cohort mortality studies, there often is interest in associations between an exposure of primary interest and mortality due to a range of different causes. A standard approach to such analyses involves fitting a separate regression model for each type of outcome. However, the statistical precision of some estimated associations may be poor because of sparse data. In this paper, we describe a hierarchical regression model for estimation of parameters describing outcome-specific relative rate functions and associated credible intervals. The proposed model uses background stratification to provide flexible control for the outcome-specific associations of potential confounders, and it employs a hierarchical "shrinkage" approach to stabilize estimates of an exposure's associations with mortality due to different causes of death. The approach is illustrated in analyses of cancer mortality in 2 cohorts: a cohort of dioxin-exposed US chemical workers and a cohort of radiation-exposed Japanese atomic bomb survivors. Compared with standard regression estimates of associations, hierarchical regression yielded estimates with improved precision that tended to have less extreme values. The hierarchical regression approach also allowed the fitting of models with effect-measure modification. The proposed hierarchical approach can yield estimates of association that are more precise than conventional estimates when one wishes to estimate associations with multiple outcomes. PMID:26232395

  6. MULTILINEAR TENSOR REGRESSION FOR LONGITUDINAL RELATIONAL DATA

    PubMed Central

    Hoff, Peter D.

    2016-01-01

    A fundamental aspect of relational data, such as from a social network, is the possibility of dependence among the relations. In particular, the relations between members of one pair of nodes may have an effect on the relations between members of another pair. This article develops a type of regression model to estimate such effects in the context of longitudinal and multivariate relational data, or other data that can be represented in the form of a tensor. The model is based on a general multilinear tensor regression model, a special case of which is a tensor autoregression model in which the tensor of relations at one time point is parsimoniously regressed on relations from previous time points. This is done via a separable, or Kronecker-structured, regression parameter along with a separable covariance model. In the context of an analysis of longitudinal multivariate relational data, it is shown how the multilinear tensor regression model can represent patterns that often appear in relational and network data, such as reciprocity and transitivity. PMID:27458495

  7. Adjustable-Angle Drill Block

    NASA Technical Reports Server (NTRS)

    Gallimore, F. H.

    1986-01-01

    Adjustable angular drill block accurately transfers hole patterns from mating surfaces not normal to each other. Block applicable to transfer of nonperpendicular holes in mating contoured assemblies in aircraft industry. Also useful in general manufacturing to transfer mating installation holes to irregular and angular surfaces.

  8. Economic Pressures and Family Adjustment.

    ERIC Educational Resources Information Center

    Haccoun, Dorothy Markiewicz; Ledingham, Jane E.

    The relationships between economic stress on the family and child and parental adjustment were examined for a sample of 199 girls and boys in grades one, four, and seven. These associations were examined separately for families in which both parents were present and in which mothers only were at home. Economic stress was associated with boys'…

  9. Multiple regression approach to optimize drilling operations in the Arabian Gulf area

    SciTech Connect

    Al-Betairi, E.A.; Moussa, M.M.; Al-Otaibi, S.

    1988-03-01

    This paper reports a successful application of multiple regression analysis, supported by a detailed statistical study to verify the Bourgoyne and Young model. The model estimates the optimum penetration rate (ROP), weight on bit (WOB), and rotary speed under the effect of controllable and uncontrollable factors. Field data from three wells in the Arabian Gulf were used and emphasized the validity of this model. The model coefficients are sensitive to the number of points included. The correlation coefficients and multicollinearity sensitivity of each drilling parameter on the ROP are studied.

  10. Uncertainty quantification in DIC with Kriging regression

    NASA Astrophysics Data System (ADS)

    Wang, Dezhi; DiazDelaO, F. A.; Wang, Weizhuo; Lin, Xiaoshan; Patterson, Eann A.; Mottershead, John E.

    2016-03-01

    A Kriging regression model is developed as a post-processing technique for the treatment of measurement uncertainty in classical subset-based Digital Image Correlation (DIC). Regression is achieved by regularising the sample-point correlation matrix using a local, subset-based, assessment of the measurement error with assumed statistical normality and based on the Sum of Squared Differences (SSD) criterion. This leads to a Kriging-regression model in the form of a Gaussian process representing uncertainty on the Kriging estimate of the measured displacement field. The method is demonstrated using numerical and experimental examples. Kriging estimates of displacement fields are shown to be in excellent agreement with 'true' values for the numerical cases and in the experimental example uncertainty quantification is carried out using the Gaussian random process that forms part of the Kriging model. The root mean square error (RMSE) on the estimated displacements is produced and standard deviations on local strain estimates are determined.

  11. Uncertainties in Cancer Risk Coefficients for Environmental Exposure to Radionuclides. An Uncertainty Analysis for Risk Coefficients Reported in Federal Guidance Report No. 13

    SciTech Connect

    Pawel, David; Leggett, Richard Wayne; Eckerman, Keith F; Nelson, Christopher

    2007-01-01

    Federal Guidance Report No. 13 (FGR 13) provides risk coefficients for estimation of the risk of cancer due to low-level exposure to each of more than 800 radionuclides. Uncertainties in risk coefficients were quantified in FGR 13 for 33 cases (exposure to each of 11 radionuclides by each of three exposure pathways) on the basis of sensitivity analyses in which various combinations of plausible biokinetic, dosimetric, and radiation risk models were used to generate alternative risk coefficients. The present report updates the uncertainty analysis in FGR 13 for the cases of inhalation and ingestion of radionuclides and expands the analysis to all radionuclides addressed in that report. The analysis indicates that most risk coefficients for inhalation or ingestion of radionuclides are determined within a factor of 5 or less by current information. That is, application of alternate plausible biokinetic and dosimetric models and radiation risk models (based on the linear, no-threshold hypothesis with an adjustment for the dose and dose rate effectiveness factor) is unlikely to change these coefficients by more than a factor of 5. In this analysis the assessed uncertainty in the radiation risk model was found to be the main determinant of the uncertainty category for most risk coefficients, but conclusions concerning the relative contributions of risk and dose models to the total uncertainty in a risk coefficient may depend strongly on the method of assessing uncertainties in the risk model.

  12. A tutorial on Bayesian Normal linear regression

    NASA Astrophysics Data System (ADS)

    Klauenberg, Katy; Wübbeler, Gerd; Mickan, Bodo; Harris, Peter; Elster, Clemens

    2015-12-01

    Regression is a common task in metrology and often applied to calibrate instruments, evaluate inter-laboratory comparisons or determine fundamental constants, for example. Yet, a regression model cannot be uniquely formulated as a measurement function, and consequently the Guide to the Expression of Uncertainty in Measurement (GUM) and its supplements are not applicable directly. Bayesian inference, however, is well suited to regression tasks, and has the advantage of accounting for additional a priori information, which typically robustifies analyses. Furthermore, it is anticipated that future revisions of the GUM shall also embrace the Bayesian view. Guidance on Bayesian inference for regression tasks is largely lacking in metrology. For linear regression models with Gaussian measurement errors this tutorial gives explicit guidance. Divided into three steps, the tutorial first illustrates how a priori knowledge, which is available from previous experiments, can be translated into prior distributions from a specific class. These prior distributions have the advantage of yielding analytical, closed form results, thus avoiding the need to apply numerical methods such as Markov Chain Monte Carlo. Secondly, formulas for the posterior results are given, explained and illustrated, and software implementations are provided. In the third step, Bayesian tools are used to assess the assumptions behind the suggested approach. These three steps (prior elicitation, posterior calculation, and robustness to prior uncertainty and model adequacy) are critical to Bayesian inference. The general guidance given here for Normal linear regression tasks is accompanied by a simple, but real-world, metrological example. The calibration of a flow device serves as a running example and illustrates the three steps. It is shown that prior knowledge from previous calibrations of the same sonic nozzle enables robust predictions even for extrapolations.
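The analytically tractable case the tutorial exploits, a Normal linear model with Gaussian errors and a conjugate Gaussian prior, has a closed-form posterior. The sketch below assumes a known noise variance for simplicity; the function name and the synthetic calibration data are illustrative, not from the tutorial.

```python
import numpy as np

def posterior(X, y, sigma2, m0, S0):
    """Conjugate posterior for beta ~ N(m0, S0) under y = X beta + e,
    e ~ N(0, sigma2 I): no MCMC needed, just linear algebra."""
    S0_inv = np.linalg.inv(S0)
    SN = np.linalg.inv(S0_inv + X.T @ X / sigma2)       # posterior covariance
    mN = SN @ (S0_inv @ m0 + X.T @ y / sigma2)          # posterior mean
    return mN, SN

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.1, size=30)
# A diffuse prior recovers (approximately) the least-squares estimate:
mN, SN = posterior(X, y, 0.01, np.zeros(2), 1e6 * np.eye(2))
```

With an informative prior from previous calibrations, `m0` and `S0` encode the a priori knowledge the tutorial discusses, and the same two formulas give the robustified estimate.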

  13. Aerosol Angstrom Absorption Coefficient Comparisons during MILAGRO.

    NASA Astrophysics Data System (ADS)

    Marley, N. A.; Marchany-Rivera, A.; Kelley, K. L.; Mangu, A.; Gaffney, J. S.

    2007-12-01

    aerosol Angstrom absorption exponents by linear regression over the entire UV-visible spectral range. These results are compared with the absorbance measurements obtained in the field. The differences in calculated Angstrom absorption exponents between the field and laboratory measurements are attributed partly to the differences in time resolution of the sample collection, which results in heavier particle pileup on the filter surface for the 12-hour samples. Some differences in calculated results can also be attributed to the presence of narrow-band absorbers below 400 nm that are not covered by the 7 wavelengths of the aethalometer. 1. Marley, N.A., J.S. Gaffney, J.C. Baird, C.A. Blazer, P.J. Drayton, and J.E. Frederick, "The determination of scattering and absorption coefficients of size-fractionated aerosols for radiative transfer calculations." Aerosol Sci. Technol., 34, 535-549, (2001). This work was conducted under the Department of Energy's Atmospheric Science Program as part of the Megacity Aerosol Experiment - Mexico City during MILAGRO. This research was supported by the Office of Science (BER), U.S. Department of Energy, Grant No. DE-FG02-07ER64329. We also thank the Mexican scientists and students from the Instituto Mexicano de Petroleo (IMP) and CENICA for their assistance.

  14. Salience Assignment for Multiple-Instance Regression

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri L.; Lane, Terran

    2007-01-01

    We present a Multiple-Instance Learning (MIL) algorithm for determining the salience of each item in each bag with respect to the bag's real-valued label. We use an alternating-projections constrained optimization approach to simultaneously learn a regression model and estimate all salience values. We evaluate this algorithm on a significant real-world problem, crop yield modeling, and demonstrate that it provides more extensive, intuitive, and stable salience models than Primary-Instance Regression, which selects a single relevant item from each bag.

  15. Spontaneous regression of a conjunctival naevus.

    PubMed

    Haldar, Shreya; Leyland, Martin

    2016-01-01

    Conjunctival naevi are one of the most common lesions affecting the conjunctiva. While benign in the vast majority of cases, the risk of malignant transformation necessitates regular follow-up. They are well known to increase in size; however, we present the first photo-documented case of spontaneous regression of conjunctival naevus. In most cases, surgical excision is performed due to the clinician's concerns over malignancy. However, a substantial proportion of patients request excision. Highlighting the potential for regression of the lesion is important to ensure patients make an informed decision when contemplating such surgery. PMID:27581234

  16. Removing Malmquist bias from linear regressions

    NASA Technical Reports Server (NTRS)

    Verter, Frances

    1993-01-01

    Malmquist bias is present in all astronomical surveys where sources are observed above an apparent brightness threshold. Those sources which can be detected at progressively larger distances are progressively more limited to the intrinsically luminous portion of the true distribution. This bias does not distort any of the measurements, but it does distort the sample composition. We have developed the first treatment to correct for Malmquist bias in linear regressions of astronomical data. A demonstration of the corrected linear regression, which is computed in four steps, is presented.

  17. Multicollinearity in cross-sectional regressions

    NASA Astrophysics Data System (ADS)

    Lauridsen, Jørgen; Mur, Jesùs

    2006-10-01

    The paper examines the robustness of results from cross-sectional regressions, paying attention to the impact of multicollinearity. It is well known that the reliability of estimators (least squares or maximum likelihood) worsens as the linear relationships between the regressors become more acute. We take up the discussion in a spatial context, looking closely at the behaviour shown, under several unfavourable conditions, by the most prominent misspecification tests when collinear variables are added to the regression. A Monte Carlo simulation is performed. The conclusions point to the fact that these statistics react in different ways to the problems posed.
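The core phenomenon behind this record, estimator reliability degrading as regressors become collinear, is easy to reproduce with a toy Monte Carlo. This is an illustrative numpy sketch, not the authors' simulation design; `coef_spread` and the data-generating process are invented.

```python
import numpy as np

def coef_spread(rho, n=100, reps=200, seed=0):
    """Monte Carlo spread (std over replicates) of the first OLS slope
    when the two regressors have correlation rho."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    L = np.linalg.cholesky(cov)
    estimates = []
    for _ in range(reps):
        X = rng.normal(size=(n, 2)) @ L.T          # correlated regressors
        y = X @ np.array([1.0, 1.0]) + rng.normal(size=n)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        estimates.append(beta[0])
    return float(np.std(estimates))

print(coef_spread(0.0), coef_spread(0.99))  # spread grows sharply with rho
```

The variance inflation as `rho` approaches 1 is exactly the loss of reliability the abstract refers to.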

  18. Spontaneous hypnotic age regression: case report.

    PubMed

    Spiegel, D; Rosenfeld, A

    1984-12-01

    Age regression--reliving the past as though it were occurring in the present, with age appropriate vocabulary, mental content, and affect--can occur with instruction in highly hypnotizable individuals, but has rarely been reported to occur spontaneously, especially as a primary symptom. The psychiatric presentation and treatment of a 16-year-old girl with spontaneous age regressions accessible and controllable with hypnosis and psychotherapy are described. Areas of overlap and divergence between this patient's symptoms and those found in patients with hysterical fugue and multiple personality syndrome are also discussed.

  19. Estimation of snowpack matching ground-truth data and MODIS satellite-based observations by using regression kriging

    NASA Astrophysics Data System (ADS)

    Juan Collados-Lara, Antonio; Pardo-Iguzquiza, Eulogio; Pulido-Velazquez, David

    2016-04-01

    The estimation of Snow Water Equivalent (SWE) is essential for an appropriate assessment of the available water resources in Alpine catchments. The hydrologic regime in these areas is dominated by the storage of water in the snowpack, which is discharged to rivers throughout the melt season. An accurate estimation of the resources is necessary for an appropriate analysis of system operation alternatives using basin-scale management models. In order to obtain an appropriate estimation of the SWE, we need to know the spatial distribution of the snowpack and the snow density within the Snow Cover Area (SCA). Data for these snow variables can be extracted from in-situ point measurements and air-borne/space-borne remote sensing observations. Different interpolation and simulation techniques have been employed for the estimation of the cited variables. In this paper we propose to estimate the snowpack from a reduced number of ground-truth data (1 or 2 campaigns per year with 23 observation points, 2000-2014) and MODIS satellite-based observations in the Sierra Nevada Mountains (Southern Spain). Regression-based methodologies have been used to study the snowpack distribution using different kinds of explanatory variables: geographic, topographic, and climatic. 40 explanatory variables were considered: longitude, latitude, altitude, slope, eastness, northness, radiation, maximum upwind slope, and some mathematical transformations of each of them [Ln(v), (v)^-1, (v)^2, (v)^0.5]. Eight different regression model structures were tested (combining 1, 2, 3, or 4 explanatory variables):
    (1) Y = B0 + B1·Xi
    (2) Y = B0 + B1·Xi·Xj
    (3) Y = B0 + B1·Xi + B2·Xj
    (4) Y = B0 + B1·Xi + B2·Xj·Xl
    (5) Y = B0 + B1·Xi·Xk + B2·Xj·Xl
    (6) Y = B0 + B1·Xi + B2·Xj + B3·Xl
    (7) Y = B0 + B1·Xi + B2·Xj + B3·Xl·Xk
    (8) Y = B0 + B1·Xi + B2·Xj + B3·Xl + B4·Xk
    where Y is the snow depth, Xi, Xj, Xl, and Xk are predictor variables (any of the 40), and B0, B1, B2, B3, B4 are the coefficients to be estimated.
The ground data are employed to calibrate the multiple regressions. In
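Fitting and comparing candidate structures of this kind is ordinary least squares on different term sets. The sketch below is a toy illustration, not the study's calibration: `alt` and `rad` stand in for two of the explanatory variables, and the synthetic snow-depth data are invented.

```python
import numpy as np

def fit_structure(y, terms):
    """Least-squares fit of one candidate structure: a constant plus the
    supplied terms (e.g. [Xi, Xj] for model 3, [Xi*Xj] for model 2)."""
    A = np.column_stack([np.ones(len(y))] + terms)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    return coef, float(r2)

rng = np.random.default_rng(1)
alt = rng.uniform(1000, 3000, 50)   # hypothetical altitude values
rad = rng.uniform(0, 1, 50)         # hypothetical radiation values
depth = 0.002 * alt + 0.5 * rad + rng.normal(scale=0.1, size=50)
_, r2_additive = fit_structure(depth, [alt, rad])   # structure (3)
_, r2_product = fit_structure(depth, [alt * rad])   # structure (2)
```

Ranking the eight structures by a fit criterion such as R² (or better, cross-validated error) is how a preferred specification would be selected.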

  20. Comparison of two regression-based approaches for determining nutrient and sediment fluxes and trends in the Chesapeake Bay watershed

    USGS Publications Warehouse

    Moyer, Douglas; Hirsch, Robert M.; Hyer, Kenneth

    2012-01-01

    Nutrient and sediment fluxes and changes in fluxes over time are key indicators that water resource managers can use to assess the progress being made in improving the structure and function of the Chesapeake Bay ecosystem. The U.S. Geological Survey collects annual nutrient (nitrogen and phosphorus) and sediment flux data and computes trends that describe the extent to which water-quality conditions are changing within the major Chesapeake Bay tributaries. Two regression-based approaches were compared for estimating annual nutrient and sediment fluxes and for characterizing how these annual fluxes are changing over time. The two regression models compared are the traditionally used ESTIMATOR and the newly developed Weighted Regression on Time, Discharge, and Season (WRTDS). The model comparison focused on answering three questions: (1) What are the differences between the functional form and construction of each model? (2) Which model produces estimates of flux with the greatest accuracy and least amount of bias? (3) How different would the historical estimates of annual flux be if WRTDS had been used instead of ESTIMATOR? One additional point of comparison between the two models is how each model determines trends in annual flux once the year-to-year variations in discharge have been determined. All comparisons were made using total nitrogen, nitrate, total phosphorus, orthophosphorus, and suspended-sediment concentration data collected at the nine U.S. Geological Survey River Input Monitoring stations located on the Susquehanna, Potomac, James, Rappahannock, Appomattox, Pamunkey, Mattaponi, Patuxent, and Choptank Rivers in the Chesapeake Bay watershed. Two model characteristics that uniquely distinguish ESTIMATOR and WRTDS are the fundamental model form and the determination of model coefficients. ESTIMATOR and WRTDS both predict water-quality constituent concentration by developing a linear relation between the natural logarithm of observed constituent

  1. Does religiosity help Muslims adjust to death?: a research note.

    PubMed

    Hossain, Mohammad Samir; Siddique, Mohammad Zakaria

    2008-01-01

    Death is the end of life. But Muslims believe death is an event between two lives, not an absolute cessation of life. Thus religiosity may influence Muslims differently about death. To explore the impact of religious perception, and thus religiosity, a cross-sectional, descriptive, analytic, and correlational study was conducted on 150 Muslims. Self-declared healthy Muslims, equally from both sexes (N = 150, age range 20 to 50 years, minimum education bachelor's degree), were selected by stratified sampling and randomly under each stratum. Subjects, divided into five levels of religiosity, were assessed and scored for the presence of maladjustment symptoms and stage of adjustment with death. ANOVA and correlation analysis were applied to the sets of data collected. All statistical tests were done at the 95% confidence level (P < 0.05). The final ANOVA statistics exceeded the corresponding critical table values, and the correlation coefficients yielded P values of < 0.05, < 0.01, and < 0.001. Religiosity positively influenced the quality of Muslims' adjustment to death. We therefore hypothesized that religiosity may help Muslims adjust to death.

  2. Logistic regression when binary predictor variables are highly correlated.

    PubMed

    Barker, L; Brown, C

    Standard logistic regression can produce estimates having large mean square error when predictor variables are multicollinear. Ridge regression and principal components regression can reduce the impact of multicollinearity in ordinary least squares regression. Generalizations of these, applicable in the logistic regression framework, are alternatives to standard logistic regression. It is shown that estimates obtained via ridge and principal components logistic regression can have smaller mean square error than estimates obtained through standard logistic regression. Recommendations for choosing among standard, ridge and principal components logistic regression are developed. Published in 2001 by John Wiley & Sons, Ltd.
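The shrinkage effect described here can be illustrated without any specialized library. The sketch below implements plain and ridge-penalized logistic regression by gradient descent on nearly collinear binary-outcome data; the function, penalty value, and data are illustrative assumptions, not the authors' generalizations.

```python
import numpy as np

def logistic_fit(X, y, penalty=0.0, lr=0.1, steps=5000):
    """Logistic regression by gradient descent; penalty > 0 gives the
    ridge variant, which shrinks coefficients under multicollinearity."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        grad = X.T @ (p - y) / len(y) + penalty * w
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.01, size=200)       # nearly collinear predictors
X = np.column_stack([x1, x2])
y = (x1 + rng.logistic(size=200) > 0).astype(float)
w_mle = logistic_fit(X, y)                 # standard logistic regression
w_ridge = logistic_fit(X, y, penalty=0.1)  # ridge logistic regression
```

The ridge fit trades a little bias for a smaller coefficient norm, which is the mean-square-error improvement the abstract reports.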

  3. Multilogistic regression by means of evolutionary product-unit neural networks.

    PubMed

    Hervás-Martínez, C; Martínez-Estudillo, F J; Carbonero-Ruz, M

    2008-09-01

    We propose a multilogistic regression model based on the combination of linear and product-unit models, where the product-unit nonlinear functions are constructed with the product of the inputs raised to arbitrary powers. The estimation of the coefficients of the model is carried out in two phases. First, the number of product-unit basis functions and the exponents' vector are determined by means of an evolutionary neural network algorithm. Afterwards, a standard maximum likelihood optimization method determines the rest of the coefficients in the new space given by the initial variables and the product-unit basis functions previously estimated. We compare the performance of our approach with the logistic regression built on the initial variables and several learning classification techniques. The statistical test carried out on twelve benchmark datasets shows that the proposed model is competitive in terms of the accuracy of the classifier.

  4. 20 CFR 345.118 - Adjustments.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... calendar year because of an error that does not constitute a compensation adjustment as defined in... compensation adjustment as defined in paragraph (b) of this section, the employer shall adjust the error by... compensation, proper adjustments with respect to the contributions shall be made, without interest,...

  5. 20 CFR 345.118 - Adjustments.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... calendar year because of an error that does not constitute a compensation adjustment as defined in... compensation adjustment as defined in paragraph (b) of this section, the employer shall adjust the error by... compensation, proper adjustments with respect to the contributions shall be made, without interest,...

  6. Adjusting to University: The Hong Kong Experience

    ERIC Educational Resources Information Center

    Yau, Hon Keung; Sun, Hongyi; Cheng, Alison Lai Fong

    2012-01-01

    Students' adjustment to the university environment is an important factor in predicting university outcomes and is crucial to their future achievements. University support to students' transition to university life can be divided into three dimensions: academic adjustment, social adjustment and psychological adjustment. However, these…

  7. 12 CFR 19.240 - Inflation adjustments.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 1 2010-01-01 2010-01-01 false Inflation adjustments. 19.240 Section 19.240... PROCEDURE Civil Money Penalty Inflation Adjustments § 19.240 Inflation adjustments. (a) The maximum amount... Civil Penalties Inflation Adjustment Act of 1990 (28 U.S.C. 2461 note) as follows: ER10NO08.001 (b)...

  8. 12 CFR 19.240 - Inflation adjustments.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 1 2011-01-01 2011-01-01 false Inflation adjustments. 19.240 Section 19.240... PROCEDURE Civil Money Penalty Inflation Adjustments § 19.240 Inflation adjustments. (a) The maximum amount... Civil Penalties Inflation Adjustment Act of 1990 (28 U.S.C. 2461 note) as follows: ER10NO08.001 (b)...

  9. 12 CFR 19.240 - Inflation adjustments.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 1 2012-01-01 2012-01-01 false Inflation adjustments. 19.240 Section 19.240... PROCEDURE Civil Money Penalty Inflation Adjustments § 19.240 Inflation adjustments. (a) The maximum amount... Civil Penalties Inflation Adjustment Act of 1990 (28 U.S.C. 2461 note) as follows: ER10NO08.001 (b)...

  10. Estimates of Multiple Correlation Coefficient Shrinkage.

    ERIC Educational Resources Information Center

    Cummings, Corenna C.

    The accuracy and variability of 4 cross-validation procedures and 18 formulas were compared concerning their ability to estimate the population multiple correlation and the validity of the sample regression equation in the population. The investigation included two types of regression, multiple and stepwise; three sample sizes, N = 30, 60, 120;…

  11. The relationship between effectiveness and costs measured by a risk-adjusted case-mix system: multicentre study of Catalonian population data bases

    PubMed Central

    Sicras-Mainar, Antoni; Navarro-Artieda, Ruth; Blanca-Tamayo, Milagrosa; Velasco-Velasco, Soledad; Escribano-Herranz, Esperanza; Llopart-López, Josep Ramon; Violan-Fors, Concepción; Vilaseca-Llobet, Josep Maria; Sánchez-Fontcuberta, Encarna; Benavent-Areu, Jaume; Flor-Serra, Ferran; Aguado-Jodar, Alba; Rodríguez-López, Daniel; Prados-Torres, Alejandra; Estelrich-Bennasar, Jose

    2009-01-01

    Background The main objective of this study is to measure the relationship between morbidity, direct health care costs and the degree of clinical effectiveness (resolution) of health centres and health professionals by the retrospective application of Adjusted Clinical Groups in a Spanish population setting. The secondary objectives are to determine the factors determining inadequate correlations and the opinion of health professionals on these instruments. Methods/Design We will carry out a multi-centre, retrospective study using patient records from 15 primary health care centres and population data bases. The main measurements will be: general variables (age and sex, centre, service [family medicine, paediatrics], and medical unit), dependent variables (mean number of visits, episodes and direct costs), co-morbidity (Johns Hopkins University Adjusted Clinical Groups Case-Mix System) and effectiveness. The totality of centres/patients will be considered as the standard for comparison. The efficiency index for visits, tests (laboratory, radiology, others), referrals, pharmaceutical prescriptions and total will be calculated as the ratio: observed variables/variables expected by indirect standardization. The model of cost/patient/year will differentiate fixed/semi-fixed (visits) costs of the variables for each patient attended/year (N = 350,000 inhabitants). The mean relative weights of the cost of care will be obtained. The effectiveness will be measured using a set of 50 indicators of process, efficiency and/or health results, and an adjusted synthetic index will be constructed (method: percentile 50). The correlation between the efficiency (relative-weights) and synthetic (by centre and physician) indices will be established using the coefficient of determination. The opinion/degree of acceptance of physicians (N = 1,000) will be measured using a structured questionnaire including various dimensions. 
Statistical analysis: multiple regression analysis (procedure

  12. Bootstrap inference longitudinal semiparametric regression model

    NASA Astrophysics Data System (ADS)

    Pane, Rahmawati; Otok, Bambang Widjanarko; Zain, Ismaini; Budiantara, I. Nyoman

    2016-02-01

    Semiparametric regression contains two components, a parametric and a nonparametric one. The longitudinal semiparametric regression model is represented by y_ti = μ(x'_ti, z_ti) + ε_ti, where μ(x'_ti, z_ti) = x'_ti·β + g(z_ti) and y_ti is the response variable, assumed to have a linear relationship with the predictor variables x'_ti = (x_ti1, x_ti2, …, x_tir). The random errors ε_ti, i = 1, …, n, t = 1, …, T, are normally distributed with zero mean and variance σ², and g(z_ti) is the nonparametric component. The results of this study show that the PLS approach to longitudinal semiparametric regression models yields the estimators β̂ = [X′H(λ)X]⁻¹X′H(λ)y and ĝ_λ(z) = M(λ)y. The results also show that the bootstrap is valid for the longitudinal semiparametric regression model with ĝ_λ^(b)(z) as the nonparametric component estimator.
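The bootstrap idea validated here can be illustrated in its simplest form, a residual bootstrap for a purely parametric linear fit (the semiparametric case adds the smoother for g, which this sketch omits). The function name and the synthetic data are assumptions for illustration.

```python
import numpy as np

def residual_bootstrap(X, y, B=500, seed=0):
    """Residual bootstrap for a regression fit: resample the fitted
    residuals, rebuild y*, refit, and collect the coefficient draws."""
    rng = np.random.default_rng(seed)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    draws = []
    for _ in range(B):
        y_star = X @ beta + rng.choice(resid, size=len(y), replace=True)
        b_star, *_ = np.linalg.lstsq(X, y_star, rcond=None)
        draws.append(b_star)
    return np.array(draws)

rng = np.random.default_rng(1)
x = rng.normal(size=50)
X = np.column_stack([np.ones(50), x])
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=50)
draws = residual_bootstrap(X, y)   # 500 x 2 array of coefficient draws
```

Percentiles of `draws` give bootstrap confidence intervals; validity means these intervals attain their nominal coverage, which is what the paper establishes for the semiparametric estimators.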

  13. Nodular fasciitis with degeneration and regression.

    PubMed

    Yanagisawa, Akihiro; Okada, Hideki

    2008-07-01

    Nodular fasciitis is a benign reactive proliferation that is frequently misdiagnosed as a sarcoma. This article describes a case of nodular fasciitis of 6-month duration located in the cheek, which degenerated and spontaneously regressed after biopsy. The nodule was fixed to the zygoma but was free from the overlying skin. The mass was 3.0 cm in diameter and demonstrated high signal intensity on T2-weighted magnetic resonance imaging. A small part of the lesion was biopsied. Pathological and immunohistochemical examinations identified the nodule as nodular fasciitis with myxoid histology. One month after the biopsy, the mass showed decreased signal intensity on T2-weighted images and measured 2.2 cm in size. The signal on T2-weighted images showed time-dependent decreases, and the mass continued to reduce in size throughout the follow-up period. The lesion presented as hypointense to the surrounding muscles on T2-weighted images and was 0.4 cm in size at 2 years of follow-up. This case demonstrates that nodular fasciitis with myxoid histology can change to that with fibrous appearance gradually with time, thus bringing about spontaneous regression. Degeneration may be involved in the spontaneous regression of nodular fasciitis with myxoid appearance. The mechanism of regression, unclarified at present, should be further studied. PMID:18650753

  14. A New Sample Size Formula for Regression.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.

    The focus of this research was to determine the efficacy of a new method of selecting sample sizes for multiple linear regression. A Monte Carlo simulation was used to study both empirical predictive power rates and empirical statistical power rates of the new method and seven other methods: those of C. N. Park and A. L. Dudycha (1974); J. Cohen…

  15. Prediction of dynamical systems by symbolic regression.

    PubMed

    Quade, Markus; Abel, Markus; Shafi, Kamran; Niven, Robert K; Noack, Bernd R

    2016-07-01

    We study the modeling and prediction of dynamical systems based on conventional models derived from measurements. Such algorithms are highly desirable in situations where the underlying dynamics are hard to model from physical principles or simplified models need to be found. We focus on symbolic regression methods as a part of machine learning. These algorithms are capable of learning an analytically tractable model from data, a highly valuable property. Symbolic regression methods can be considered as generalized regression methods. We investigate two particular algorithms, the so-called fast function extraction which is a generalized linear regression algorithm, and genetic programming which is a very general method. Both are able to combine functions in a certain way such that a good model for the prediction of the temporal evolution of a dynamical system can be identified. We illustrate the algorithms by finding a prediction for the evolution of a harmonic oscillator based on measurements, by detecting an arriving front in an excitable system, and as a real-world application, the prediction of solar power production based on energy production observations at a given site together with the weather forecast. PMID:27575130
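
    The symbolic-regression idea in this abstract can be caricatured in a few lines: search a library of candidate expressions for the one that best fits measurements of a harmonic oscillator. This toy grid search is an illustrative stand-in, not the fast function extraction or genetic-programming algorithms the paper actually studies.

```python
# Toy stand-in for symbolic regression: pick the candidate expression
# (and frequency parameter w) that best explains harmonic-oscillator data.
import math

t = [0.1 * i for i in range(100)]
target = [math.cos(2.0 * ti) for ti in t]  # "measurements" of x(t)

candidates = {
    "cos(w*t)": lambda ti, w: math.cos(w * ti),
    "sin(w*t)": lambda ti, w: math.sin(w * ti),
    "exp(-w*t)": lambda ti, w: math.exp(-w * ti),
    "w*t": lambda ti, w: w * ti,
}

def sse(f, w):
    """Sum of squared errors of candidate f with parameter w."""
    return sum((f(ti, w) - yi) ** 2 for ti, yi in zip(t, target))

best = min(
    ((name, w, sse(f, w))
     for name, f in candidates.items()
     for w in (0.5, 1.0, 1.5, 2.0, 2.5)),
    key=lambda r: r[2],
)
print(best[0], best[1])  # -> cos(w*t) 2.0
```

    Real symbolic regression explores a far larger expression space (compositions, sums, products) and trades off accuracy against expression complexity, but the selection principle is the same.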

  16. Assumptions of Multiple Regression: Correcting Two Misconceptions

    ERIC Educational Resources Information Center

    Williams, Matt N.; Gomez Grajales, Carlos Alberto; Kurkiewicz, Dason

    2013-01-01

    In 2002, an article entitled "Four assumptions of multiple regression that researchers should always test" by Osborne and Waters was published in "PARE." This article has gone on to be viewed more than 275,000 times (as of August 2013), and it is one of the first results displayed in a Google search for "regression…

  17. Method for nonlinear exponential regression analysis

    NASA Technical Reports Server (NTRS)

    Junkin, B. G.

    1972-01-01

    Two computer programs, developed according to two general types of exponential models, for conducting nonlinear exponential regression analysis are described. A least squares procedure is used in which the nonlinear problem is linearized by expanding it in a Taylor series. The programs are written in FORTRAN 5 for the Univac 1108 computer.

  18. Multiple Regression Analysis and Automatic Interaction Detection.

    ERIC Educational Resources Information Center

    Koplyay, Janos B.

    The Automatic Interaction Detector (AID) is discussed as to its usefulness in multiple regression analysis. The algorithm of AID-4 is a reversal of the model building process; it starts with the ultimate restricted model, namely, the whole group as a unit. By a unique splitting process maximizing the between sum of squares for the categories of…

  19. A Spline Regression Model for Latent Variables

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.

    2014-01-01

    Spline (or piecewise) regression models have been used in the past to account for patterns in observed data that exhibit distinct phases. The changepoint or knot marking the shift from one phase to the other, in many applications, is an unknown parameter to be estimated. As an extension of this framework, this research considers modeling the…

  20. Regression Segmentation for M³ Spinal Images.

    PubMed

    Wang, Zhijie; Zhen, Xiantong; Tay, KengYeow; Osman, Said; Romano, Walter; Li, Shuo

    2015-08-01

    Clinical routine often requires analyzing spinal images of multiple anatomic structures in multiple anatomic planes from multiple imaging modalities (M³). Unfortunately, existing methods for segmenting spinal images are still limited to one specific structure, in one specific plane, or from one specific modality (S³). In this paper, we propose a novel approach, Regression Segmentation, that is for the first time able to segment M³ spinal images in one single unified framework. This approach innovatively formulates the segmentation task as a boundary regression problem: modeling a highly nonlinear mapping function from substantially diverse M³ images directly to desired object boundaries. Leveraging the advancement of sparse kernel machines, regression segmentation is fulfilled by a multi-dimensional support vector regressor (MSVR) which operates in an implicit, high-dimensional feature space where M³ diversity and specificity can be systematically categorized, extracted, and handled. The proposed regression segmentation approach was thoroughly tested on images from 113 clinical subjects including both disc and vertebral structures, in both sagittal and axial planes, and from both MRI and CT modalities. The overall result reaches a high Dice similarity index (DSI) of 0.912 and a low boundary distance (BD) of 0.928 mm. With our unified and expandable framework, an efficient clinical tool for M³ spinal image segmentation can be easily achieved, and will substantially benefit the diagnosis and treatment of spinal diseases.

  1. Prediction of dynamical systems by symbolic regression

    NASA Astrophysics Data System (ADS)

    Quade, Markus; Abel, Markus; Shafi, Kamran; Niven, Robert K.; Noack, Bernd R.

    2016-07-01

    We study the modeling and prediction of dynamical systems based on conventional models derived from measurements. Such algorithms are highly desirable in situations where the underlying dynamics are hard to model from physical principles or simplified models need to be found. We focus on symbolic regression methods as a part of machine learning. These algorithms are capable of learning an analytically tractable model from data, a highly valuable property. Symbolic regression methods can be considered as generalized regression methods. We investigate two particular algorithms, the so-called fast function extraction which is a generalized linear regression algorithm, and genetic programming which is a very general method. Both are able to combine functions in a certain way such that a good model for the prediction of the temporal evolution of a dynamical system can be identified. We illustrate the algorithms by finding a prediction for the evolution of a harmonic oscillator based on measurements, by detecting an arriving front in an excitable system, and as a real-world application, the prediction of solar power production based on energy production observations at a given site together with the weather forecast.

  2. Using Regression Analysis: A Guided Tour.

    ERIC Educational Resources Information Center

    Shelton, Fred Ames

    1987-01-01

    Discusses the use and interpretation of multiple regression analysis with computer programs and presents a flow chart of the process. A general explanation of the flow chart is provided, followed by an example showing the development of a linear equation which could be used in estimating manufacturing overhead cost. (Author/LRW)
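
    The kind of linear equation the article describes, estimating manufacturing overhead cost from cost drivers, can be sketched with ordinary least squares. The drivers and figures below are hypothetical.

```python
# Hypothetical overhead-cost regression: overhead = b0 + b1*machine_hours
# + b2*labor_hours, fit by least squares (data made up, noiseless).
import numpy as np

machine_hours = np.array([20.0, 35.0, 50.0, 65.0, 80.0])
labor_hours = np.array([10.0, 15.0, 25.0, 30.0, 45.0])
overhead = 500.0 + 12.0 * machine_hours + 8.0 * labor_hours

X = np.column_stack([np.ones_like(machine_hours), machine_hours, labor_hours])
coef, *_ = np.linalg.lstsq(X, overhead, rcond=None)
print(coef)  # recovers the intercept 500 and slopes 12 and 8
```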

  3. Genetic Programming Transforms in Linear Regression Situations

    NASA Astrophysics Data System (ADS)

    Castillo, Flor; Kordon, Arthur; Villa, Carlos

    The chapter summarizes the use of Genetic Programming (GP) in Multiple Linear Regression (MLR) to address multicollinearity and Lack of Fit (LOF). The basis of the proposed method is applying appropriate input transforms (model respecification) that deal with these issues while preserving the information content of the original variables. The transforms are selected from symbolic regression models with optimal trade-off between accuracy of prediction and expressional complexity, generated by multiobjective Pareto-front GP. The chapter includes a comparative study of the GP-generated transforms with Ridge Regression, a variant of ordinary Multiple Linear Regression, which has been a useful and commonly employed approach for reducing multicollinearity. The advantages of GP-generated model respecification are clearly defined and demonstrated. Some recommendations for transforms selection are given as well. The application benefits of the proposed approach are illustrated with a real industrial application in one of the broadest empirical modeling areas in manufacturing - robust inferential sensors. The chapter contributes to increasing the awareness of the potential of GP in statistical model building by MLR.

  4. The M Word: Multicollinearity in Multiple Regression.

    ERIC Educational Resources Information Center

    Morrow-Howell, Nancy

    1994-01-01

    Notes that existence of substantial correlation between two or more independent variables creates problems of multicollinearity in multiple regression. Discusses multicollinearity problem in social work research in which independent variables are usually intercorrelated. Clarifies problems created by multicollinearity, explains detection of…
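
    A standard diagnostic for the multicollinearity discussed here is the variance inflation factor (VIF); a self-contained sketch on synthetic data:

```python
# Variance inflation factors: VIF_j = 1 / (1 - R^2_j), where R^2_j comes
# from regressing predictor j on the remaining predictors. VIFs well
# above 10 are a common (rule-of-thumb) red flag.
import numpy as np

def vifs(X):
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(1)
x1 = rng.normal(size=300)
x2 = x1 + rng.normal(scale=0.1, size=300)  # nearly a copy of x1
x3 = rng.normal(size=300)
v = vifs(np.column_stack([x1, x2, x3]))
print([round(vi, 1) for vi in v])  # x1 and x2 heavily inflated; x3 near 1
```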

  5. Revisiting Regression in Autism: Heller's "Dementia Infantilis"

    ERIC Educational Resources Information Center

    Westphal, Alexander; Schelinski, Stefanie; Volkmar, Fred; Pelphrey, Kevin

    2013-01-01

    Theodor Heller first described a severe regression of adaptive function in normally developing children, something he termed dementia infantilis, over 100 years ago. Dementia infantilis is most closely related to the modern diagnosis, childhood disintegrative disorder. We translate Heller's paper, Über Dementia Infantilis, and discuss…

  6. Prediction of dynamical systems by symbolic regression.

    PubMed

    Quade, Markus; Abel, Markus; Shafi, Kamran; Niven, Robert K; Noack, Bernd R

    2016-07-01

    We study the modeling and prediction of dynamical systems based on conventional models derived from measurements. Such algorithms are highly desirable in situations where the underlying dynamics are hard to model from physical principles or simplified models need to be found. We focus on symbolic regression methods as a part of machine learning. These algorithms are capable of learning an analytically tractable model from data, a highly valuable property. Symbolic regression methods can be considered as generalized regression methods. We investigate two particular algorithms, the so-called fast function extraction which is a generalized linear regression algorithm, and genetic programming which is a very general method. Both are able to combine functions in a certain way such that a good model for the prediction of the temporal evolution of a dynamical system can be identified. We illustrate the algorithms by finding a prediction for the evolution of a harmonic oscillator based on measurements, by detecting an arriving front in an excitable system, and as a real-world application, the prediction of solar power production based on energy production observations at a given site together with the weather forecast.

  7. Design Coding and Interpretation in Multiple Regression.

    ERIC Educational Resources Information Center

    Lunneborg, Clifford E.

    The multiple regression or general linear model (GLM) is a parameter estimation and hypothesis testing model which encompasses and approaches the more familiar fixed effects analysis of variance (ANOVA). The transition from ANOVA to GLM is accomplished, roughly, by coding treatment level or group membership to produce a set of predictor or…

  8. Predicting Social Trust with Binary Logistic Regression

    ERIC Educational Resources Information Center

    Adwere-Boamah, Joseph; Hufstedler, Shirley

    2015-01-01

    This study used binary logistic regression to predict social trust with five demographic variables from a national sample of adult individuals who participated in The General Social Survey (GSS) in 2012. The five predictor variables were respondents' highest degree earned, race, sex, general happiness and the importance of personally assisting…

  9. College student engaging in cyberbullying victimization: cognitive appraisals, coping strategies, and psychological adjustments.

    PubMed

    Na, Hyunjoo; Dancy, Barbara L; Park, Chang

    2015-06-01

    The study's purpose was to explore whether frequency of cyberbullying victimization, cognitive appraisals, and coping strategies were associated with psychological adjustments among college student cyberbullying victims. A convenience sample of 121 students completed questionnaires. Linear regression analyses found frequency of cyberbullying victimization, cognitive appraisals, and coping strategies respectively explained 30%, 30%, and 27% of the variance in depression, anxiety, and self-esteem. Frequency of cyberbullying victimization and approach and avoidance coping strategies were associated with psychological adjustments, with avoidance coping strategies being associated with all three psychological adjustments. Interventions should focus on teaching cyberbullying victims to not use avoidance coping strategies. PMID:26001714

  10. Covariate-adjusted response-adaptive designs for binary response.

    PubMed

    Rosenberger, W F; Vidyashankar, A N; Agarwal, D K

    2001-11-01

    An adaptive allocation design for phase III clinical trials that incorporates covariates is described. The allocation scheme maps the covariate-adjusted odds ratio from a logistic regression model onto [0, 1]. Simulations assume that both staggered entry and time to response are random and follow a known probability distribution that can depend on the treatment assigned, the patient's response, a covariate, or a time trend. Confidence intervals on the covariate-adjusted odds ratio are slightly anticonservative for the adaptive design under the null hypothesis, but power is similar to equal allocation under various alternatives for n = 200. For similar power, the net savings in terms of expected number of treatment failures is modest, but enough to make this design attractive for certain studies where known covariates are expected to be important and stratification is not desired, and treatment failures have a high ethical cost.

  11. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    PubMed

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination.
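
    The equivalent two-sample recipe described above can be sketched numerically: convert the logistic slope into a log-odds difference of slope times twice the SD of the covariate, then apply an ordinary two-proportion power approximation. All numbers below are illustrative, and the simplified formula is an assumption rather than the authors' exact procedure.

```python
# Equivalent two-sample power sketch for a logistic slope: the two groups'
# log odds differ by delta = slope * 2 * sd(x), centered on the overall
# event probability; power then follows a two-proportion z approximation.
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def approx_power(slope, sd_x, p_overall, n_per_group, z_alpha=1.96):
    delta = slope * 2.0 * sd_x                    # log-odds difference
    logit = math.log(p_overall / (1.0 - p_overall))
    p1 = 1.0 / (1.0 + math.exp(-(logit - delta / 2.0)))
    p2 = 1.0 / (1.0 + math.exp(-(logit + delta / 2.0)))
    se = math.sqrt(p1 * (1 - p1) / n_per_group + p2 * (1 - p2) / n_per_group)
    return norm_cdf(abs(p2 - p1) / se - z_alpha)

pw = approx_power(slope=0.5, sd_x=1.0, p_overall=0.3, n_per_group=100)
print(round(pw, 2))  # roughly 0.9 for these illustrative inputs
```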

  12. Evaluation of Regression Models of Balance Calibration Data Using an Empirical Criterion

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert; Volden, Thomas R.

    2012-01-01

    An empirical criterion for assessing the significance of individual terms of regression models of wind tunnel strain gage balance outputs is evaluated. The criterion is based on the percent contribution of a regression model term. It considers a term to be significant if its percent contribution exceeds the empirical threshold of 0.05%. The criterion has the advantage that it can easily be computed using the regression coefficients of the gage outputs and the load capacities of the balance. First, a definition of the empirical criterion is provided. Then, it is compared with an alternate statistical criterion that is widely used in regression analysis. Finally, calibration data sets from a variety of balances are used to illustrate the connection between the empirical and the statistical criterion. A review of these results indicated that the empirical criterion seems to be suitable for a crude assessment of the significance of a regression model term as the boundary between a significant and an insignificant term cannot be defined very well. Therefore, regression model term reduction should only be performed by using the more universally applicable statistical criterion.
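
    One plausible reading of the percent-contribution criterion can be sketched as follows; the exact formula used for balance calibration data is not given in the abstract, so this is a hypothetical illustration of the 0.05% threshold idea.

```python
# Hypothetical percent-contribution check: each term's coefficient times
# its value at load capacity, taken as a share of the total, compared
# against the 0.05% empirical threshold. The formula is an assumption.
def percent_contributions(coeffs, terms_at_capacity):
    effects = [c * v for c, v in zip(coeffs, terms_at_capacity)]
    total = sum(abs(e) for e in effects)
    return [100.0 * abs(e) / total for e in effects]

coeffs = [2.0, 0.5, 1e-5]             # made-up fitted coefficients
terms_at_capacity = [1.0, 1.0, 1.0]   # each term evaluated at capacity
pc = percent_contributions(coeffs, terms_at_capacity)
significant = [p > 0.05 for p in pc]  # the empirical 0.05% threshold
print(significant)  # -> [True, True, False]
```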

  13. Evaluation of syngas production unit cost of bio-gasification facility using regression analysis techniques

    SciTech Connect

    Deng, Yangyang; Parajuli, Prem B.

    2011-08-10

    Evaluation of the economic feasibility of a bio-gasification facility requires understanding its unit cost under different production capacities. The objective of this study was to evaluate the unit cost of syngas production at capacities from 60 through 1,800 Nm³/h using an economic model with three regression analysis techniques (simple regression, reciprocal regression, and log-log regression). The preliminary results of this study showed that the reciprocal regression technique had the best-fit curve between per-unit cost and production capacity, with a sum of error squares (SES) lower than 0.001 and a coefficient of determination (R²) of 0.996. The regression analysis techniques determined the minimum unit cost of syngas production for micro-scale bio-gasification facilities to be $0.052/Nm³, under the capacity of 2,880 Nm³/h. The results of this study suggest that to reduce cost, facilities should run at a high production capacity. In addition, the contribution of this technique could be a new categorical criterion for evaluating micro-scale bio-gasification facilities from the perspective of economic analysis.
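
    The reciprocal-regression form the study found best, unit cost = a + b/capacity, becomes an ordinary linear least-squares problem once capacity is replaced by its reciprocal. The data below are made up for illustration.

```python
# Reciprocal regression, cost = a + b / capacity: linear least squares
# after transforming the capacity variable (data made up, noiseless).
import numpy as np

capacity = np.array([60.0, 200.0, 600.0, 1200.0, 1800.0])
cost = 0.05 + 30.0 / capacity  # hypothetical unit costs

X = np.column_stack([np.ones_like(capacity), 1.0 / capacity])
(a, b), *_ = np.linalg.lstsq(X, cost, rcond=None)
print(round(a, 3), round(b, 1))  # recovers a = 0.05, b = 30.0
```

    The fitted curve decreases monotonically in capacity, which is exactly why the study concludes facilities should run at high production capacity.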

  14. Predictors of sociocultural adjustment among sojourning Malaysian students in Britain.

    PubMed

    Swami, Viren

    2009-08-01

    The process of cross-cultural migration may be particularly difficult for students travelling overseas for further or higher education, especially where qualitative differences exist between the home and host nations. The present study examined the sociocultural adjustment of sojourning Malaysian students in Britain. Eighty-one Malay and 110 Chinese students enrolled in various courses answered a self-report questionnaire that examined various aspects of sociocultural adjustment. A series of one-way analyses of variance showed that Malay participants experienced poorer sociocultural adjustment in comparison with their Chinese counterparts. They were also less likely than Chinese students to have contact with co-nationals and host nationals, more likely to perceive their actual experience in Britain as worse than they had expected, and more likely to perceive greater cultural distance and greater discrimination. The results of regression analyses showed that, for Malay participants, perceived discrimination accounted for the greatest proportion of variance in sociocultural adjustment (73%), followed by English language proficiency (10%) and contact with host nationals (4%). For Chinese participants, English language proficiency was the strongest predictor of sociocultural adjustment (54%), followed by expectations of life in Britain (18%) and contact with host nationals (3%). By contrast, participants' sex, age, and length of residence failed to emerge as significant predictors for either ethnic group. Possible explanations for this pattern of findings are discussed, including the effects of Islamophobia on Malay-Muslims in Britain, possible socioeconomic differences between Malay and Chinese students, and personality differences between the two ethnic groups. The results are further discussed in relation to practical steps that can be taken to improve the sociocultural adjustment of sojourning students in Britain. PMID:22029555

  15. Logistic models--an odd(s) kind of regression.

    PubMed

    Jupiter, Daniel C

    2013-01-01

    The logistic regression model bears some similarity to the multivariable linear regression with which we are familiar. However, the differences are great enough to warrant a discussion of the need for and interpretation of logistic regression.
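
    The "odds" in the title is the key interpretive difference: a fitted logistic coefficient beta translates into an odds ratio exp(beta), not an additive change in probability. A small worked example with a hypothetical slope:

```python
# exp(beta) is the odds ratio per unit increase in the predictor; the
# slope and baseline probability here are hypothetical.
import math

beta = 0.7
odds_ratio = math.exp(beta)   # about 2.01: each unit roughly doubles the odds
p0 = 0.2
odds0 = p0 / (1.0 - p0)       # baseline odds, 0.25
odds1 = odds0 * odds_ratio    # odds after a one-unit increase
p1 = odds1 / (1.0 + odds1)    # back to a probability, about 0.33
print(round(odds_ratio, 2), round(p1, 2))
```

    Note the change in probability (0.2 to about 0.33) depends on the baseline, while the odds ratio does not; this is the interpretation gap the commentary warns about.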

  16. Trace element partition coefficient in ionic crystals.

    PubMed

    Nagasawa, H

    1966-05-01

    Partition coefficients of monovalent trace ions between liquids and either solid NaNO2 or KCl were determined. The isotropic elastic model of ionic crystals was used to calculate the energy change caused by the ionic substitutions. The observed values of the partition coefficients in KCl are in good agreement with the calculated values.

  17. Coefficient Alpha and Reliability of Scale Scores

    ERIC Educational Resources Information Center

    Almehrizi, Rashid S.

    2013-01-01

    The majority of large-scale assessments develop various score scales that are either linear or nonlinear transformations of raw scores for better interpretations and uses of assessment results. The current formula for coefficient alpha (a; the commonly used reliability coefficient) only provides internal consistency reliability estimates of raw…

  18. Commentary on Coefficient Alpha: A Cautionary Tale

    ERIC Educational Resources Information Center

    Green, Samuel B.; Yang, Yanyun

    2009-01-01

    The general use of coefficient alpha to assess reliability should be discouraged on a number of grounds. The assumptions underlying coefficient alpha are unlikely to hold in practice, and violation of these assumptions can result in nontrivial negative or positive bias. Structural equation modeling was discussed as an informative process both to…

  19. Implications of NGA for NEHRP site coefficients

    USGS Publications Warehouse

    Borcherdt, Roger D.

    2012-01-01

    Three proposals are provided to update tables 11.4-1 and 11.4-2 of Minimum Design Loads for Buildings and Other Structures (7-10), by the American Society of Civil Engineers (2010) (ASCE/SEI 7-10), with site coefficients implied directly by NGA (Next Generation Attenuation) ground motion prediction equations (GMPEs). Proposals include a recommendation to use straight-line interpolation to infer site coefficients at intermediate values of v̄s (average shear-wave velocity). Site coefficients are recommended to ensure consistency with ASCE/SEI 7-10 MCER (Maximum Considered Earthquake) seismic-design maps and simplified site-specific design spectra procedures requiring site classes with associated tabulated site coefficients and a reference site class with unity site coefficients. Recommended site coefficients are confirmed by independent observations of average site amplification coefficients inferred with respect to an average ground condition consistent with that used for the MCER maps. The NGA coefficients recommended for consideration are implied directly by the NGA GMPEs and do not require introduction of additional models.
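
    The recommended straight-line interpolation of site coefficients at intermediate average shear-wave velocities is simple to state in code; the tabulated values below are placeholders, not the actual ASCE/SEI 7-10 tables.

```python
# Straight-line interpolation of a site coefficient at an intermediate
# average shear-wave velocity; the table values are placeholders only.
def interp_site_coeff(vs, table):
    """table: (velocity, coefficient) pairs sorted by velocity."""
    for (v0, c0), (v1, c1) in zip(table, table[1:]):
        if v0 <= vs <= v1:
            return c0 + (c1 - c0) * (vs - v0) / (v1 - v0)
    raise ValueError("velocity outside tabulated range")

table = [(180.0, 1.6), (360.0, 1.2), (760.0, 1.0)]  # hypothetical values
fa = interp_site_coeff(270.0, table)
print(round(fa, 2))  # midpoint of the 180-360 interval -> 1.4
```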

  20. Coefficient Alpha Bootstrap Confidence Interval under Nonnormality

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Divers, Jasmin; Newton, Matthew

    2012-01-01

    Three different bootstrap methods for estimating confidence intervals (CIs) for coefficient alpha were investigated. In addition, the bootstrap methods were compared with the most promising coefficient alpha CI estimation methods reported in the literature. The CI methods were assessed through a Monte Carlo simulation utilizing conditions…